Kaining committed
Commit 9f80179 · verified · 1 Parent(s): 155e0a9

Update README.md

Files changed (1)
  1. README.md +49 -3
README.md CHANGED
@@ -1,3 +1,49 @@
- ---
- license: cc-by-sa-4.0
- ---
+ ---
+ license: cc-by-sa-4.0
+ ---
+ # MOVE: Motion-Guided Few-Shot Video Object Segmentation
+
+ 🏠 [Homepage](https://henghuiding.com/MOVE/) | 📄 [Paper](https://arxiv.org/abs/2507.22061) | 🔗 [GitHub](https://github.com/FudanCVL/MOVE)
+
+ ## Abstract
+ This work addresses motion-guided few-shot video object segmentation (FSVOS), which aims to segment dynamic objects in videos given a few annotated examples that share the same motion patterns. Existing FSVOS datasets and methods typically focus on object categories, static attributes that ignore the rich temporal dynamics in videos, limiting their applicability in scenarios that require motion understanding. To fill this gap, we introduce MOVE, a large-scale dataset specifically designed for motion-guided FSVOS. Based on MOVE, we comprehensively evaluate 6 state-of-the-art methods from 3 related tasks across 2 experimental settings. Our results reveal that current methods struggle with motion-guided FSVOS, prompting us to analyze the associated challenges and to propose a baseline method, the Decoupled Motion-Appearance Network (DMA). Experiments demonstrate that our approach achieves superior performance in few-shot motion understanding, establishing a solid foundation for future research in this direction.
+ ## Download
+ We recommend using `huggingface-cli` to download:
+ ```bash
+ pip install -U "huggingface_hub[cli]"
+ huggingface-cli download FudanCVL/MOVE --repo-type dataset --local-dir ./data/ --local-dir-use-symlinks False --max-workers 16
+ ```
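+ If you prefer the Python API, the same download can be done with `huggingface_hub.snapshot_download`; a minimal sketch mirroring the CLI command above (same repo and target directory):
+ ```python
+ # Sketch: fetch the MOVE dataset snapshot via the Python API,
+ # mirroring the CLI command above.
+ from huggingface_hub import snapshot_download
+
+ snapshot_download(
+     repo_id="FudanCVL/MOVE",
+     repo_type="dataset",
+     local_dir="./data/",  # same destination as the CLI example
+     max_workers=16,       # parallel file downloads
+ )
+ ```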
+ ## Data Structure
+
+ ```
+ MOVE_release/
+ ├── frames/
+ │   ├── video_1/
+ │   │   ├── 00000.jpg
+ │   │   ├── 00001.jpg
+ │   │   └── ...
+ │   ├── video_2/
+ │   │   ├── 00000.jpg
+ │   │   ├── 00001.jpg
+ │   │   └── ...
+ │   └── ...
+ ├── annotations/
+ │   ├── video_1.json
+ │   ├── video_2.json
+ │   └── ...
+ ├── action_groups.json      # overlapping split
+ └── challenging_group.json  # non-overlapping split
+ ```
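+ As a quick sanity check of this layout, the tree can be walked with standard-library Python. A minimal sketch, assuming the dataset sits under `./data/MOVE_release/`; the annotation JSON schema is not documented here, so it only inspects the top-level keys:
+ ```python
+ # Sketch: enumerate frames and peek at the annotation of one video.
+ # The root path and the annotation schema are assumptions.
+ import json
+ from pathlib import Path
+
+ root = Path("./data/MOVE_release")
+ video = "video_1"
+
+ frames = sorted((root / "frames" / video).glob("*.jpg"))  # temporal order
+ with open(root / "annotations" / f"{video}.json") as f:
+     ann = json.load(f)
+
+ print(f"{video}: {len(frames)} frames, top-level annotation keys: {list(ann)}")
+ ```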
+ ## BibTeX
+ If you find our paper and dataset useful for your research, please cite our paper.
+ ```bibtex
+ @inproceedings{ying2025move,
+   title={{MOVE}: {M}otion-{G}uided {F}ew-{S}hot {V}ideo {O}bject {S}egmentation},
+   author={Ying, Kaining and Hu, Hengrui and Ding, Henghui},
+   year={2025},
+   booktitle={ICCV}
+ }
+ ```
+ ## 📄 License
+ MOVE is licensed under a CC BY-NC-SA 4.0 License. The MOVE data is released for non-commercial research purposes only.
+