---
language:
  - en
license: apache-2.0
task_categories:
  - video-to-text
pretty_name: TUNA
configs:
  - config_name: TUNA-1K
    data_files:
      - split: test
        path: tuna_1k/test-*
  - config_name: TUNA-CAP
    data_files:
      - split: test
        path: tuna_cap/test-*
  - config_name: TUNA-MCQ
    data_files:
      - split: test
        path: tuna_mcq/test-*
tags:
  - visual-question-answering
  - multiple-choice
---

# TUNA: Comprehensive Fine-grained Temporal Understanding Evaluation on Dense Dynamic Videos (ACL 2025 Main)


This dataset accompanies the paper [TUNA: Comprehensive Fine-grained Temporal Understanding Evaluation on Dense Dynamic Videos](https://arxiv.org/abs/2505.20124).

## Paper abstract

Videos are unique in their integration of temporal elements, including camera, scene, action, and attribute, along with their dynamic relationships over time. However, existing benchmarks for video understanding often treat these properties separately or narrowly focus on specific aspects, overlooking the holistic nature of video content. To address this, we introduce TUNA, a temporal-oriented benchmark for fine-grained understanding on dense dynamic videos, with two complementary tasks: captioning and QA. Our TUNA features diverse video scenarios and dynamics, assisted by interpretable and robust evaluation criteria. We evaluate several leading models on our benchmark, providing fine-grained performance assessments across various dimensions. This evaluation reveals key challenges in video temporal understanding, such as limited action description, inadequate multi-subject understanding, and insensitivity to camera motion, offering valuable insights for improving video understanding models. The data and code are available at this https URL.
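The metadata above defines three configs, TUNA-1K, TUNA-CAP, and TUNA-MCQ, each with a single `test` split. Below is a minimal loading sketch using the 🤗 `datasets` library; the repo id `friedrichor/TUNA-Bench` is an assumption inferred from this card's location, so adjust it if the dataset lives elsewhere.

```python
from datasets import load_dataset

# Repo id is an assumption inferred from this card's location.
REPO_ID = "friedrichor/TUNA-Bench"

# Each config exposes only a "test" split, per the metadata above.
tuna_1k = load_dataset(REPO_ID, "TUNA-1K", split="test")
tuna_cap = load_dataset(REPO_ID, "TUNA-CAP", split="test")
tuna_mcq = load_dataset(REPO_ID, "TUNA-MCQ", split="test")

print(tuna_cap)            # dataset size and feature schema
print(tuna_cap[0].keys())  # field names of one captioning example
```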


πŸ† Leaderboard

here

βš–οΈ Evaluation

coming soon.

## Acknowledgments

The code is largely based on [VLMEvalKit](https://github.com/open-compass/VLMEvalKit). We thank the authors for their excellent work.

## 📋 Citation

If you find our work helpful, please consider citing it:

```bibtex
@article{kong2025tuna,
  title={TUNA: Comprehensive Fine-grained Temporal Understanding Evaluation on Dense Dynamic Videos},
  author={Kong, Fanheng and Zhang, Jingyuan and Zhang, Hongzhi and Feng, Shi and Wang, Daling and Yu, Linhao and Ji, Xingguang and Tian, Yu and W., Victoria and Zhang, Fuzheng},
  journal={arXiv preprint arXiv:2505.20124},
  year={2025}
}
```