---
pretty_name: AVQA JSONL (Audio Multiple-Choice QA)
license: cc-by-4.0
language:
  - en
task_categories:
  - question-answering
task_ids:
  - multiple-choice-qa
size_categories:
  - 10K<n<100K
tags:
  - audio
  - multiple-choice
  - dataset-viewer
configs:
  - config_name: default
    data_files:
      - split: train
        path: train_r1aqa_line.json
---

## Summary

This dataset is derived from the AVQA training subset (train_qa.json) and converted to the R1-AQA format: each line of the file is a single JSON object with a fixed set of keys (JSON Lines). The AVQA training set originally contains approximately 40k samples; we keep only about 38k, because some source videos have become unavailable (e.g., broken links) or yield audio shorter than 10 seconds.
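For example, the file can be read with nothing more than the Python standard library (a minimal sketch; `train_r1aqa_line.json` is the file listed under `data_files` in the metadata above):

```python
import json

# Each line of train_r1aqa_line.json is a single JSON object (JSON Lines).
samples = []
with open("train_r1aqa_line.json", "r", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            samples.append(json.loads(line))

print(len(samples))       # roughly 38k samples
print(samples[0].keys())  # id, video_name, ..., dataset_name, audio_path
```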

Since neither of the two papers cited in the Acknowledgement provides a direct link to the audio, we built this dataset to facilitate research on audio-related tasks. Audio tracks were extracted from the online videos with ytb-dl and saved in WAV format. Using this dataset, we reproduced the experiments of the R1-AQA project and obtained results similar to those reported in the original paper. For the original data, please refer to the URLs in the Acknowledgement.
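As a starting point for such experiments, the file can also be loaded with the Hugging Face `datasets` library. The snippet below is a sketch, not part of this repository's tooling; it assumes the four audio directories (e.g. `VGG10000`) sit next to the JSON file, and the 16 kHz sampling rate is an arbitrary choice:

```python
from datasets import load_dataset, Audio

# Load the JSON Lines file; each line becomes one dataset row.
ds = load_dataset("json", data_files="train_r1aqa_line.json", split="train")

# Decode the WAV file referenced by `audio_path` on access.
# The 16 kHz target sampling rate is an assumption, not a property of the data.
ds = ds.cast_column("audio_path", Audio(sampling_rate=16_000))

example = ds[0]
print(example["question_text"], example["multi_choice"])
print(example["audio_path"]["array"].shape)  # decoded waveform
```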

## Example Row

```
{
  # Fields below come from the original AVQA dataset.
  "id": 183,
  "video_name": "-HG3Omg_89c_000030",
  "video_id": 341,
  "question_text": "What happened in the video?",
  "multi_choice": ["motorboat", "Yacht consignment", "Sailboat set sail", "Consignment car"],
  "answer": 1,
  "question_relation": "View",
  "question_type": "Happening",
  # The following fields are added for the R1-AQA format; the audio files are randomly split across four directories.
  "dataset_name": "AVQA",
  "audio_path": "./VGG10000/-HG3Omg_89c_30.wav"
}
```
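Assuming `answer` is a zero-based index into `multi_choice` (a sketch; please verify this indexing convention against the original AVQA annotations), the correct option text can be recovered like this:

```python
# Hypothetical example row; `answer` is assumed to be a zero-based index.
row = {
    "question_text": "What happened in the video?",
    "multi_choice": ["motorboat", "Yacht consignment", "Sailboat set sail", "Consignment car"],
    "answer": 1,
    "audio_path": "./VGG10000/-HG3Omg_89c_30.wav",
}

answer_text = row["multi_choice"][row["answer"]]
print(answer_text)  # "Yacht consignment" under the zero-based assumption
```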

## Acknowledgement

If you encounter any issues with the data, please feel free to contact me at [email protected]. Finally, we would like to express our gratitude to the authors of the AVQA and R1-AQA papers for their foundational work.

@inproceedings{yang2022avqa,
  author = {Pinci Yang and Xin Wang and Xuguang Duan and Hong Chen and Runze Hou and Cong Jin and Wenwu Zhu},
  title = {{AVQA: A Dataset for Audio-Visual Question Answering on Videos}},
  booktitle = {Proceedings of the 30th ACM International Conference on Multimedia (MM '22)},
  numpages = {12},
  year = {2022},
  address = {Lisboa, Portugal},
  publisher = {ACM},
  url = {https://doi.org/10.1145/3503161.3548291}
}


@article{li2025reinforcement,
  title={Reinforcement Learning Outperforms Supervised Fine-Tuning: A Case Study on Audio Question Answering},
  author={Li, Gang and Liu, Jizhong and Dinkel, Heinrich and Niu, Yadong and Zhang, Junbo and Luan, Jian},
  journal={arXiv preprint arXiv:2503.11197},
  year={2025},
  url={https://github.com/xiaomi-research/r1-aqa; https://huggingface.co/mispeech/r1-aqa}
}