---
license: mit
dataset_info:
  features:
    - name: qid
      dtype: string
    - name: video_id
      dtype: string
    - name: question_type
      dtype: string
    - name: capability
      dtype: string
    - name: question
      dtype: string
    - name: duration
      dtype: string
    - name: question_prompt
      dtype: string
    - name: answer
      dtype: string
    - name: youtube_url
      dtype: string
  splits:
    - name: test
      num_bytes: 515490
      num_examples: 1000
    - name: test_primary_oe
      num_bytes: 695302
      num_examples: 1000
    - name: test_paraphrased_oe
      num_bytes: 702618
      num_examples: 1000
    - name: test_correctly_led_oe
      num_bytes: 719648
      num_examples: 1000
    - name: test_wrongly_led_oe
      num_bytes: 715143
      num_examples: 1000
    - name: test_all
      num_bytes: 3348201
      num_examples: 5000
  download_size: 21546400
  dataset_size: 8685160
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
      - split: test_paraphrased_oe
        path: data/test_paraphrased_oe-*
      - split: test_correctly_led_oe
        path: data/test_correctly_led_oe-*
      - split: test_wrongly_led_oe
        path: data/test_wrongly_led_oe-*
      - split: test_all
        path: data/test_all-*
      - split: test_primary_oe
        path: data/test_primary_oe-*
---

Towards Video Thinking Test (Video-TT): A Holistic Benchmark for Advanced Video Reasoning and Understanding

Video-TT comprises 1,000 YouTube videos, each paired with one open-ended question and four adversarial questions designed to probe visual and narrative complexity.

Paper: https://arxiv.org/abs/2507.15028

Project page: https://zhangyuanhan-ai.github.io/video-tt/

🚀 What's New

  • [2025.03] We release the benchmark!

1. Why Do We Need a New Benchmark Like Video-TT?

  • Sampling vs. Understanding: Current video understanding benchmarks do not clearly distinguish between errors caused by insufficient frame sampling and errors caused by genuine failures of video comprehension. We ensure that each question in Video-TT can be answered from 80 uniformly sampled frames, a frame count that most current video models can easily process (a uniform-sampling sketch follows this list).
  • Pursuing Human-Level Video Understanding: We carefully select Q&A pairs where humans achieve an 87.5% accuracy rate, while the best models only reach 47.5%. In contrast, when sampling is not a limiting factor, current video models score above 85% on existing video understanding benchmarks [1].
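
For concreteness, here is a minimal sketch of what "80 uniformly sampled frames" means: pick 80 evenly spaced frame indices across the whole video. The use of decord for decoding is an assumption; any decoder that supports frame indexing works the same way.

```python
# Minimal sketch: uniformly sample num_frames frames from a video.
# decord is an assumption here; any frame-addressable decoder can be substituted.
import numpy as np
from decord import VideoReader

def sample_uniform_frames(video_path: str, num_frames: int = 80) -> np.ndarray:
    vr = VideoReader(video_path)
    # Evenly spaced indices over the full video, endpoints included.
    indices = np.linspace(0, len(vr) - 1, num_frames).round().astype(int)
    return vr.get_batch(indices.tolist()).asnumpy()  # shape: (num_frames, H, W, 3)
```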

2. Dataset Summary

Dataset Statistics

  • Number of videos: 1,000, primarily short-form videos such as YouTube Shorts.
  • Number of Q&A pairs: 5,000. Each video is paired with five questions that probe different aspects of the same content and are designed to challenge model robustness in adversarial scenarios (a loading sketch follows this list):
    • Primary open-ended question
    • Paraphrased open-ended question
    • Correctly-led open-ended question
    • Wrongly-led open-ended question
    • Multiple-choice question
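
The split names above mirror the dataset configuration, so the benchmark can be pulled directly with the datasets library. A minimal loading sketch is shown below; the repository id lmms-lab/video-tt is an assumption, so adjust it to the actual Hub id.

```python
# Minimal sketch: load the Video-TT splits from the Hugging Face Hub.
# NOTE: the repository id below is an assumption; replace it with the actual Hub id.
from datasets import load_dataset

video_tt = load_dataset("lmms-lab/video-tt")   # DatasetDict with all configured splits
primary = video_tt["test_primary_oe"]          # 1,000 primary open-ended questions

row = primary[0]
# Each row carries the question, its type and capability tag, the reference answer,
# and the source video URL (field names taken from the dataset_info schema above).
print(row["qid"], row["question_type"], row["capability"])
print(row["question"])
print(row["answer"], row["youtube_url"])
```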

Figure: one example video annotated with all five question types.

Evaluation Metrics

Correctness:

Measures accuracy for each question type. We use Qwen2.5-72B as the judge for open-ended questions and a rule-based method for multiple-choice questions.
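
As a hedged illustration of the rule-based side (the exact matching rule used by the benchmark may differ), a multiple-choice answer can be scored by extracting the predicted option letter and comparing it with the ground truth:

```python
# Hedged sketch of a rule-based multiple-choice check; the benchmark's actual rule may differ.
import re

def mc_is_correct(model_output: str, gold_letter: str) -> bool:
    # Take the first standalone option letter (A-D) in the model output.
    match = re.search(r"\b([A-D])\b", model_output.upper())
    return bool(match) and match.group(1) == gold_letter.strip().upper()

print(mc_is_correct("The answer is (B).", "B"))  # True
```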

Robustness:

Let:

  • A_primary_correct be the set of videos where the primary open-ended question is answered correctly.
  • A_paraphrased_correct be the set of videos where the paraphrased open-ended question is answered correctly.
  • A_correctly_led_correct be the set of videos where the correctly-led open-ended question is answered correctly.
  • A_wrongly_led_correct be the set of videos where the wrongly-led open-ended question is answered correctly.
  • A_multiple_choice_correct be the set of videos where the multiple-choice question is answered correctly.

The set of videos where all five questions are answered correctly, denoted as A_full_correct, is the intersection of all these sets:

A_full_correct = A_primary_correct ∩ A_paraphrased_correct ∩ A_correctly_led_correct ∩ A_wrongly_led_correct ∩ A_multiple_choice_correct

Thus, the Robustness Score (R) is:

R = |A_full_correct| / |A_primary_correct|

where |A| denotes the number of videos in set A.
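
A small worked example of this formula, with illustrative video ids, is sketched below.

```python
# Sketch: compute the Robustness Score R from the per-question sets of video ids
# answered correctly. The ids are illustrative, not taken from the dataset.
primary_correct         = {"v1", "v2", "v3", "v4"}
paraphrased_correct     = {"v1", "v2", "v4"}
correctly_led_correct   = {"v1", "v2", "v3"}
wrongly_led_correct     = {"v1", "v4"}
multiple_choice_correct = {"v1", "v2", "v4"}

# A_full_correct: videos whose five questions are all answered correctly.
full_correct = (primary_correct & paraphrased_correct & correctly_led_correct
                & wrongly_led_correct & multiple_choice_correct)

# R = |A_full_correct| / |A_primary_correct|
R = len(full_correct) / len(primary_correct)
print(f"R = {len(full_correct)}/{len(primary_correct)} = {R:.2f}")  # R = 1/4 = 0.25
```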

3. Ensuring the Quality of Video-TT

The Video-TT annotation process consists of four stages:

  • Stage 1: We select complex videos that allow for challenging questions. (Refer to the paper for the definition of complexity factors.)
  • Stage 2: We prompt several vision-language models (VLMs) to check whether a question can be answered. If a model answers it correctly, we remove the question.
  • Stage 3: For each remaining question, we provide an answer and an explanation.
  • Stage 4: We manually verify that each question can be answered using 80 uniformly sampled frames.

For more details, please refer to the paper.

4. Run and Exactly Reproduce Qwen2.5-VL Results

# pip install git+https://github.com/EvolvingLMMs-Lab/lmms-eval.git
# pip3 install qwen_vl_utils
# export HF_HOME="~/.cache/huggingface"

# Note: `qwen2_5_vl` and `videott_single_mc` are assumed to be the registered lmms-eval
# model and task names for this setup; adjust them if your lmms-eval version registers
# them differently.
accelerate launch --num_processes=8 --main_process_port=12346 -m lmms_eval \
    --model qwen2_5_vl \
    --model_args=pretrained=Qwen/Qwen2.5-VL-7B-Instruct,max_pixels=12845056,attn_implementation=flash_attention_2,interleave_visuals=False \
    --tasks videott_single_mc \
    --batch_size 1

5. Leaderboard

6. Dataset Maintenance

If you find any mistakes in the dataset, please report the corresponding question ID (qid) on our issue page. Our team is committed to maintaining this dataset over the long term to ensure its quality.

[1] Fu, Chaoyou, et al. "Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis." arXiv preprint arXiv:2405.21075 (2024).