---
license: mit
dataset_info:
  features:
  - name: qid
    dtype: string
  - name: video_id
    dtype: string
  - name: question_type
    dtype: string
  - name: capability
    dtype: string
  - name: question
    dtype: string
  - name: duration
    dtype: string
  - name: question_prompt
    dtype: string
  - name: answer
    dtype: string
  - name: youtube_url
    dtype: string
  splits:
  - name: test
    num_bytes: 515490
    num_examples: 1000
  - name: test_primary_oe
    num_bytes: 695302
    num_examples: 1000
  - name: test_paraphrased_oe
    num_bytes: 702618
    num_examples: 1000
  - name: test_correctly_led_oe
    num_bytes: 719648
    num_examples: 1000
  - name: test_wrongly_led_oe
    num_bytes: 715143
    num_examples: 1000
  - name: test_all
    num_bytes: 3348201
    num_examples: 5000
  download_size: 21546400
  dataset_size: 8685160
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
  - split: test_paraphrased_oe
    path: data/test_paraphrased_oe-*
  - split: test_correctly_led_oe
    path: data/test_correctly_led_oe-*
  - split: test_wrongly_led_oe
    path: data/test_wrongly_led_oe-*
  - split: test_all
    path: data/test_all-*
  - split: test_primary_oe
    path: data/test_primary_oe-*
---

# Towards Video Thinking Test (Video-TT): A Holistic Benchmark for Advanced Video Reasoning and Understanding

<img src="teaser_2_page-0001.jpg" style="width:70%;">

Video-TT comprises 1,000 YouTube videos, each paired with one open-ended question and four adversarial questions designed to probe visual and narrative complexity.

Paper: https://arxiv.org/abs/2507.15028

Project page: https://zhangyuanhan-ai.github.io/video-tt/

## 🚀 What's New
- **[2025.03]** We release the benchmark!

## 1. Why Do We Need a New Benchmark Like Video-TT?

- **Sampling vs. Understanding:** Current video understanding benchmarks do not clearly distinguish between errors caused by insufficient frame sampling and errors caused by genuine failures of video comprehension. We ensure that each question in Video-TT can be answered from 80 uniformly sampled frames, a frame budget that most current video models can easily process (see the sampling sketch after this list).
- **Pursuing Human-Level Video Understanding:** We carefully select Q&A pairs where humans achieve an 87.5% accuracy rate, while the best models only reach 47.5%. In contrast, when sampling is not a limiting factor, current video models score above 85% on existing video understanding benchmarks [1].
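
As a concrete illustration of the 80-frame setting above, the sketch below uniformly samples frames from a locally downloaded clip. It assumes `decord` and `numpy` are installed and that the video referenced by `youtube_url` has already been fetched; neither step is prescribed by the benchmark itself.

```python
# Minimal sketch: uniformly sample 80 frames from a locally downloaded video.
# Video-TT only fixes the frame budget (80 uniformly sampled frames); the
# decoding library (`decord` here) is an assumption.
import numpy as np
from decord import VideoReader, cpu

def sample_uniform_frames(video_path: str, num_frames: int = 80) -> np.ndarray:
    vr = VideoReader(video_path, ctx=cpu(0))
    total = len(vr)
    # Evenly spaced frame indices across the whole clip.
    indices = np.linspace(0, total - 1, num=min(num_frames, total)).astype(int)
    return vr.get_batch(indices).asnumpy()  # shape: (num_frames, H, W, 3)
```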

## 2. Dataset Summary


### Dataset Statistics
- **Number of videos:** 1,000, primarily short-form videos such as YouTube Shorts.
- **Number of Q&A pairs:** 5,000. Each video is paired with five questions that probe different aspects of the same content and are designed to challenge model robustness in adversarial scenarios:
  - Primary open-ended question
  - Paraphrased open-ended question
  - Correctly-led open-ended question
  - Wrongly-led open-ended question
  - Multiple-choice question

An example of each of the five question types is shown below:
<img src="dataset_expansion_page-0001.jpg" style="width:50%;">

### Evaluation Metrics

#### Correctness:

Measures accuracy for each question type. We use Qwen2.5-72B as the judge for open-ended questions and a rule-based method for multiple-choice questions.
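
For the multiple-choice questions, a rule-based check can be as simple as extracting the predicted option letter and comparing it with the ground-truth answer. The sketch below is only illustrative; the matching rules used in the official evaluation may be stricter.

```python
import re

# Illustrative rule-based scorer for multiple-choice answers; the official
# matching logic may differ.
def extract_option(prediction: str) -> str | None:
    """Pull a single option letter (A-E) out of a free-form model response."""
    match = re.search(r"\b([A-E])\b", prediction.upper())
    return match.group(1) if match else None

def is_mc_correct(prediction: str, ground_truth: str) -> bool:
    return extract_option(prediction) == ground_truth.strip().upper()

assert is_mc_correct("The answer is (B).", "B")
```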

#### Robustness:

Let:
- A_primary_correct be the set of videos where the primary open-ended question is answered correctly.
- A_paraphrased_correct be the set of videos where the paraphrased open-ended question is answered correctly.
- A_correctly_led_correct be the set of videos where the correctly-led open-ended question is answered correctly.
- A_wrongly_led_correct be the set of videos where the wrongly-led open-ended question is answered correctly.
- A_multiple_choice_correct be the set of videos where the multiple-choice question is answered correctly.

The set of videos where all five questions are answered correctly, denoted as A_full_correct, is the intersection of all these sets:

A_full_correct = A_primary_correct ∩ A_paraphrased_correct ∩ A_correctly_led_correct ∩ A_wrongly_led_correct ∩ A_multiple_choice_correct

Thus, the **Robustness Score (R)** is defined as:

R = |A_full_correct| / |A_primary_correct|

where |A| denotes the number of videos in set A.
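
Concretely, the score can be computed from per-video correctness flags as in the sketch below; the `results` layout is assumed for illustration and is not part of the official evaluation code.

```python
# Sketch: compute the Robustness Score R from per-video correctness flags.
# `results` maps video_id -> {question_type: answered_correctly}; this layout
# is assumed for illustration only.
QUESTION_TYPES = ["primary", "paraphrased", "correctly_led", "wrongly_led", "mc"]

def robustness_score(results: dict[str, dict[str, bool]]) -> float:
    primary_correct = {v for v, r in results.items() if r.get("primary")}
    full_correct = {v for v in primary_correct
                    if all(results[v].get(q) for q in QUESTION_TYPES)}
    return len(full_correct) / len(primary_correct) if primary_correct else 0.0

# Example: vid_a answers all five correctly; vid_b misses two follow-ups.
scores = {
    "vid_a": {q: True for q in QUESTION_TYPES},
    "vid_b": {"primary": True, "paraphrased": False,
              "correctly_led": True, "wrongly_led": False, "mc": True},
}
print(robustness_score(scores))  # 1 / 2 = 0.5
```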


## 3. Ensuring the Quality of Video-TT

<img src="annotation_pipeline_page-0001.jpg" style="width:70%;">

The Video-TT annotation process consists of four stages:

- **Stage 1:** We select complex videos that allow for challenging questions. (Refer to the paper for the definition of complexity factors.)
- **Stage 2:** We prompt several vision-language models (VLMs) to check whether a question can be answered. If a model answers it correctly, we remove the question.
- **Stage 3:** For each remaining question, we provide an answer and an explanation.
- **Stage 4:** We manually verify that each question can be answered using 80 uniformly sampled frames.

For more details, please refer to the paper.

## 4. Run and exactly reproduce the Qwen2.5-VL results!
```bash
# pip install git+https://github.com/EvolvingLMMs-Lab/lmms-eval.git
# pip3 install qwen_vl_utils
# export HF_HOME="~/.cache/huggingface"

# The model/task registry names below (qwen2_5_vl, videott_single_mc) are
# assumptions; adjust them if your lmms-eval version registers them differently.
accelerate launch --num_processes=8 --main_process_port=12346 -m lmms_eval \
    --model qwen2_5_vl \
    --model_args=pretrained=Qwen/Qwen2.5-VL-7B-Instruct,max_pixels=12845056,attn_implementation=flash_attention_2,interleave_visuals=False \
    --tasks videott_single_mc \
    --batch_size 1
```
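
Here `--num_processes` should match the number of available GPUs, `max_pixels` bounds the per-frame resolution handled by `qwen_vl_utils` (and hence the visual token count), and `attn_implementation=flash_attention_2` assumes FlashAttention 2 is installed.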


## 5. Leaderboard

<!-- For the latest leaderboard, please refer to **XXX**. You can submit your evaluations there. Some results are produced by us, while others are submitted by external researchers.

To reproduce our results, please check **XXX** for evaluation scripts. -->

<img src="771742752771_.pic.jpg" style="width:70%;">

## 6. Dataset Maintenance

If you find any mistakes in the dataset, please submit the corresponding `qid` to our issue page. We are committed to maintaining this dataset over the long term to ensure its quality.

<!-- 
## 7. Acknowledgments

(Include acknowledgments as needed.) -->




[1] Fu, Chaoyou, et al. "Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis." arXiv preprint arXiv:2405.21075 (2024).