---
language:
- en
license: apache-2.0
task_categories:
- video-to-text
pretty_name: TUNA
configs:
- config_name: TUNA-1K
  data_files:
  - split: test
    path: tuna_1k/test-*
- config_name: TUNA-CAP
  data_files:
  - split: test
    path: tuna_cap/test-*
- config_name: TUNA-MCQ
  data_files:
  - split: test
    path: tuna_mcq/test-*
tags:
- visual-question-answering
- multiple-choice
---
## TUNA: Comprehensive Fine-grained Temporal Understanding Evaluation on Dense Dynamic Videos (ACL 2025 Main)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![arXiv](https://img.shields.io/badge/arXiv-2505.20124-b31b1b.svg)](https://arxiv.org/abs/2505.20124)
[![GitHub](https://img.shields.io/badge/GitHub-TUNA-4b32c3?logo=github)](https://github.com/friedrichor/TUNA)
[![Project](https://img.shields.io/badge/🌐%20Project-Website-green)](https://friedrichor.github.io/projects/TUNA)
[![HuggingFace](https://img.shields.io/badge/🤗%20HuggingFace-Dataset-yellow)](https://huggingface.co/datasets/friedrichor/TUNA-Bench)
This dataset accompanies the paper [TUNA: Comprehensive Fine-grained Temporal Understanding Evaluation on Dense Dynamic Videos](https://huggingface.co/papers/2505.20124).
## Paper abstract
Videos are unique in their integration of temporal elements, including camera, scene, action, and attribute, along with their dynamic relationships over time. However, existing benchmarks for video understanding often treat these properties separately or narrowly focus on specific aspects, overlooking the holistic nature of video content. To address this, we introduce TUNA, a temporal-oriented benchmark for fine-grained understanding on dense dynamic videos, with two complementary tasks: captioning and QA. Our TUNA features diverse video scenarios and dynamics, assisted by interpretable and robust evaluation criteria. We evaluate several leading models on our benchmark, providing fine-grained performance assessments across various dimensions. This evaluation reveals key challenges in video temporal understanding, such as limited action description, inadequate multi-subject understanding, and insensitivity to camera motion, offering valuable insights for improving video understanding models. The data and code are available at [https://github.com/friedrichor/TUNA](https://github.com/friedrichor/TUNA).
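As a quick-start sketch (not part of the official tooling), the three configs declared in this card's YAML header can be loaded with the Hugging Face `datasets` library. The `load_tuna` helper name is hypothetical; only the repository id, config names, and `test` split come from the card itself.

```python
# Minimal sketch of loading TUNA-Bench via the `datasets` library.
# Config and split names are taken from this card's YAML header.

CONFIGS = ["TUNA-1K", "TUNA-CAP", "TUNA-MCQ"]  # each exposes a single "test" split


def load_tuna(config: str = "TUNA-CAP"):
    """Return the test split of the requested TUNA config (hypothetical helper)."""
    if config not in CONFIGS:
        raise ValueError(f"unknown config: {config!r}")
    # Deferred import so the snippet can be read without `pip install datasets`.
    from datasets import load_dataset

    return load_dataset("friedrichor/TUNA-Bench", config, split="test")
```

Calling `load_tuna("TUNA-MCQ")` downloads and returns the multiple-choice test split; the download requires network access and the `datasets` package.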
## πŸ† Leaderboard
[here](https://friedrichor.github.io/projects/TUNA/#leaderboard)
## βš–οΈ Evaluation
coming soon.
## Acknowledgments
The code is largely based on [VLMEvalKit](https://github.com/open-compass/VLMEvalKit). We thank the authors for their great work.
## 📋 Citation
If you find our work helpful, please consider citing it:
```bibtex
@article{kong2025tuna,
title={TUNA: Comprehensive Fine-grained Temporal Understanding Evaluation on Dense Dynamic Videos},
author={Kong, Fanheng and Zhang, Jingyuan and Zhang, Hongzhi and Feng, Shi and Wang, Daling and Yu, Linhao and Ji, Xingguang and Tian, Yu and W., Victoria and Zhang, Fuzheng},
journal={arXiv preprint arXiv:2505.20124},
year={2025}
}
```