---
license: mit
dataset_info:
  features:
    - name: qid
      dtype: string
    - name: video_id
      dtype: string
    - name: question_type
      dtype: string
    - name: capability
      dtype: string
    - name: question
      dtype: string
    - name: duration
      dtype: string
    - name: question_prompt
      dtype: string
    - name: answer
      dtype: string
    - name: youtube_url
      dtype: string
  splits:
    - name: test_primary_oe
      num_bytes: 695309
      num_examples: 1000
    - name: test
      num_bytes: 515497
      num_examples: 1000
    - name: test_paraphrased_oe
      num_bytes: 702625
      num_examples: 1000
    - name: test_correctly_led_oe
      num_bytes: 719655
      num_examples: 1000
    - name: test_wrongly_led_oe
      num_bytes: 715150
      num_examples: 1000
    - name: test_all
      num_bytes: 3348236
      num_examples: 5000
  download_size: 18714323
  dataset_size: 8685230
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
      - split: test_paraphrased_oe
        path: data/test_paraphrased_oe-*
      - split: test_correctly_led_oe
        path: data/test_correctly_led_oe-*
      - split: test_wrongly_led_oe
        path: data/test_wrongly_led_oe-*
      - split: test_all
        path: data/test_all-*
      - split: test_primary_oe
        path: data/test_primary_oe-*
---

Towards Video Turing Test (Video-TT): Video Comprehension and Reasoning Benchmark with Complex Visual Narratives

Video-TT comprises 1,000 YouTube Shorts videos, each with one open-ended question and four adversarial questions that probe visual and narrative complexity.
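
The benchmark can be loaded with the `datasets` library. A minimal loading sketch; the Hub repo id `lmms-lab/video-tt` is an assumption, so replace it with this dataset's actual path if it differs:

```python
# Minimal loading sketch; the repo id "lmms-lab/video-tt" is an assumption,
# replace it with this dataset's actual Hub path if it differs.
from datasets import load_dataset

# Primary open-ended questions: 1,000 examples, one per video.
ds = load_dataset("lmms-lab/video-tt", split="test_primary_oe")

row = ds[0]
print(row["qid"], row["question_type"], row["capability"])
print(row["question"], "->", row["answer"])
print(row["youtube_url"])
```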

🚀 What's New

  • [2025.03] We release the benchmark!

1. Why do we need a new benchmark like Video-TT?

  • Sampling vs. understanding: Current video understanding benchmarks do not clearly distinguish between errors caused by insufficient frame sampling and errors caused by failures in actual video understanding. We ensure that each question in Video-TT can be answered from 80 uniformly sampled frames, a frame budget that most current video models can easily handle (see the sampling sketch after this list).
  • Pursuing human-level video understanding: We carefully select Q&A pairs on which humans achieve 87.5 points while even the best model achieves only 47.5. In comparison, when sampling is not an issue, current video models already score above 85 points on existing video understanding benchmarks [1].
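
As a concrete illustration of the sampling setup described above, the sketch below uniformly samples 80 frames with OpenCV. This is not the benchmark's own preprocessing code; the function name and the use of `cv2` are our assumptions.

```python
# Sketch of uniform frame sampling as assumed by Video-TT: 80 frames
# taken at evenly spaced indices across the whole video. Uses OpenCV;
# `sample_uniform_frames` is a name we chose, not benchmark code.
import cv2
import numpy as np

def sample_uniform_frames(video_path: str, num_frames: int = 80) -> list[np.ndarray]:
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Evenly spaced frame indices from the first to the last frame.
    indices = np.linspace(0, total - 1, num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```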

2. How to ensure the quality of Video-TT?

3. How to evaluate on Video-TT?
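
As a starting point, here is a hypothetical sketch (not the official evaluation script) that gathers the open-ended variants of each question across splits before querying a model. It assumes the adversarial splits share `qid` values with `test_primary_oe` and uses the same assumed repo id as above.

```python
# Hypothetical evaluation scaffolding, not the official script.
# Assumes all open-ended splits share `qid` values with test_primary_oe.
from datasets import load_dataset

SPLITS = [
    "test_primary_oe",
    "test_paraphrased_oe",
    "test_correctly_led_oe",
    "test_wrongly_led_oe",
]

variants: dict[str, dict[str, dict]] = {}
for split in SPLITS:
    ds = load_dataset("lmms-lab/video-tt", split=split)  # repo id assumed
    for row in ds:
        variants.setdefault(row["qid"], {})[split] = row

# A model is then queried with every variant of a question; a model that
# truly understands the video should answer all variants consistently.
```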

4. Dataset summary

[1] Fu, Chaoyou, et al. "Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis." arXiv preprint arXiv:2405.21075 (2024).