diff --git a/ChallengeCVPR2024.html b/ChallengeCVPR2024.html new file mode 100644 index 0000000..68b2b79 --- /dev/null +++ b/ChallengeCVPR2024.html @@ -0,0 +1,437 @@ + + +
The 1st MeViS challenge will be held in conjunction with the CVPR 2024 PVUW Workshop in Seattle, USA. In this edition of the workshop and challenge, we focus on referring video segmentation with motion expressions, i.e., segmenting objects in video content based on a sentence describing the motion of the objects. MeViS contains 2,006 video clips and 443k high-quality object segmentation masks, with 28,570 sentences indicating 8,171 objects in complex environments. The goal of the MeViS dataset is to provide a platform that enables the development of effective language-guided video segmentation algorithms that leverage motion expressions as a primary cue for object segmentation in complex video scenes. The workshop will culminate in a round-table discussion, in which speakers will debate the future of video object representations.
Organizers:
- Henghui Ding (Primary Organizer)
- Chang Liu (Primary Organizer)
- Shuting He (Nanyang Technological University)
- Xudong Jiang (Nanyang Technological University)
- Chen Change Loy (Nanyang Technological University)
@inproceedings{MeViS,
  title={{MeViS}: A Large-scale Benchmark for Video Segmentation with Motion Expressions},
  author={Ding, Henghui and Liu, Chang and He, Shuting and Jiang, Xudong and Loy, Chen Change},
  booktitle={ICCV},
  year={2023}
}
This paper strives for motion-expression-guided video segmentation, which focuses on segmenting objects in video content based on a sentence describing the motion of the objects. Existing referring video object datasets typically focus on salient objects and use language expressions that contain excessive static attributes that could potentially enable the target object to be identified in a single frame. These datasets downplay the importance of motion in video content for language-guided video object segmentation. To investigate the feasibility of using motion expressions to ground and segment objects in videos, we propose a large-scale dataset called MeViS, which contains numerous motion expressions to indicate target objects in complex environments. We benchmarked 5 existing referring video object segmentation (RVOS) methods and conducted a comprehensive comparison on the MeViS dataset. The results show that current RVOS methods cannot effectively address motion-expression-guided video segmentation. We further analyze the challenges and propose a baseline approach for the proposed MeViS dataset. The goal of our benchmark is to provide a platform that enables the development of effective language-guided video segmentation algorithms that leverage motion expressions as a primary cue for object segmentation in complex video scenes.
Given a video and an expression describing the motion cues of the target object(s), MeViS requires segmenting and tracking the target object(s) accurately.
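As a rough illustration of this task interface, the sketch below shows the expected input and output shapes. The function name, array layout, and the example expression are our own assumptions for illustration, not part of the official MeViS toolkit.

```python
import numpy as np

def segment_with_expression(frames: np.ndarray, expression: str) -> np.ndarray:
    """Illustrative task interface (not the official MeViS API).

    frames:     (T, H, W, 3) uint8 RGB frames of one video clip.
    expression: a motion-centric sentence, e.g. "the bird flying away".
    returns:    (T, H, W) boolean masks, one per frame, covering every object
                the expression refers to (a MeViS expression may refer to
                more than one object).
    """
    T, H, W, _ = frames.shape
    # A real model would go here; empty masks serve as a placeholder.
    return np.zeros((T, H, W), dtype=bool)
```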
☆ Input: a video and a sentence that refers to the target object(s).

| Dataset | Pub. & Year | Videos | Objects | Expressions | Masks | Obj/Video | Obj/Expn | Target |
|---|---|---|---|---|---|---|---|---|
| A2D Sentence | CVPR 2018 | 3,782 | 4,825 | 6,656 | 58k | 1.28 | 1 | Actor |
| J-HMDB Sentence | CVPR 2018 | 928 | 928 | 928 | 31.8k | 1 | 1 | Actor |
| DAVIS16-RVOS | ACCV 2018 | 50 | 50 | 100 | 3.4k | 1 | n/a | Object |
| DAVIS17-RVOS | ACCV 2018 | 90 | 205 | 1,544 | 13.5k | 2.27 | 1 | Object |
| Refer-Youtube-VOS | ECCV 2020 | 3,978 | 7,451 | 15,009 | 131k | 1.86 | 1 | Object |
| MeViS (ours) | ICCV 2023 | 2,006 | 8,171 | 28,570 | 443k | 4.28 | 1.59 | Object(s) |
Among the compared referring video segmentation datasets, the newly built MeViS has the largest number of objects and language expressions. More importantly, MeViS focuses on segmenting objects in videos that are indicated by motion expressions, which enables investigating the feasibility of using motion expressions for object segmentation and grounding in videos.
Figure 2. Overview of the proposed baseline approach, Language-guided Motion Perception and Matching (LMPM). We first detect all possible target objects in each frame of the video and represent them with object embeddings produced by the Language-Guided Extractor. Motion Perception is then conducted on all object embeddings of the video to capture the global temporal context. By leveraging language queries and the motion-aware object embeddings, we generate object trajectories through a Transformer Decoder. Finally, we match the language features with the predicted object trajectories to identify the target object(s).
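To make the data flow in this caption concrete, here is a schematic PyTorch sketch of that pipeline. It is our own reading of Figure 2, not the authors' released code: the module choices (a single cross-attention layer standing in for the Language-Guided Extractor, a Transformer encoder for Motion Perception), all dimensions, and the final cosine-similarity matching are assumptions.

```python
import torch
import torch.nn as nn

class LMPMSketch(nn.Module):
    """Schematic of Language-guided Motion Perception and Matching (LMPM).

    High-level illustration of Figure 2 only; layer choices and sizes are
    illustrative assumptions, not the official implementation.
    """

    def __init__(self, d_model=256, num_queries=20, num_heads=8):
        super().__init__()
        # Language-Guided Extractor (stand-in): object embeddings attend to
        # the sentence so that language conditions the per-frame objects.
        self.extractor = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        # Motion Perception (stand-in): temporal reasoning over all object
        # embeddings of the whole video to capture global temporal context.
        self.motion_perception = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, num_heads, batch_first=True),
            num_layers=3)
        # Transformer Decoder: language queries attend to motion-aware object
        # embeddings and produce object trajectories.
        self.queries = nn.Embedding(num_queries, d_model)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, num_heads, batch_first=True),
            num_layers=3)

    def forward(self, frame_obj_embeds, lang_feat):
        # frame_obj_embeds: (T*N, D) object embeddings from all T frames
        # lang_feat:        (L, D)   sentence token features
        obj = frame_obj_embeds.unsqueeze(0)   # (1, T*N, D)
        lang = lang_feat.unsqueeze(0)         # (1, L, D)
        # Language-guided extraction.
        obj, _ = self.extractor(obj, lang, lang)
        # Motion perception across the whole video.
        obj = self.motion_perception(obj)
        # Decode object trajectories from learnable queries.
        q = self.queries.weight.unsqueeze(0)  # (1, Q, D)
        trajectories = self.decoder(q, obj)   # (1, Q, D)
        # Match language features with predicted trajectories (cosine similarity).
        sentence = lang.mean(dim=1, keepdim=True)                          # (1, 1, D)
        scores = torch.cosine_similarity(trajectories, sentence, dim=-1)   # (1, Q)
        return trajectories, scores

# Example shapes: 5 frames x 10 detected objects, 12 language tokens, D=256.
model = LMPMSketch()
traj, scores = model(torch.randn(50, 256), torch.randn(12, 256))
```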
We benchmark the state-of-the-art methods known to us; please see the paper for details. If your method performs better, please feel free to contact us for benchmark evaluation, and we will update the results.
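Benchmark results follow the J&F protocol that is standard for referring video segmentation (region similarity J and contour accuracy F, averaged). As a reference point only, below is a minimal sketch of the region-similarity term; the function name is ours, the boundary F-measure is omitted, and the official evaluation scripts should be used for actual submissions.

```python
import numpy as np

def region_similarity_J(pred_masks: np.ndarray, gt_masks: np.ndarray) -> float:
    """Mean per-frame intersection-over-union (the J term of J&F).

    pred_masks, gt_masks: (T, H, W) boolean masks for one expression.
    Frames where both prediction and ground truth are empty score 1.0,
    following the usual DAVIS-style convention.
    """
    scores = []
    for pred, gt in zip(pred_masks.astype(bool), gt_masks.astype(bool)):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        scores.append(1.0 if union == 0 else inter / union)
    return float(np.mean(scores))
```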