
2026 SoccerNet Challenge - VQA Overview

Task

SoccerNet-VQA is a challenge focused on multimodal (text, image, video) multiple-choice question answering, covering 14 distinct soccer understanding tasks. These tasks include assessing background knowledge of players and teams, determining camera status, classifying actions, recognizing fouls, and many other complex scenarios.

More details can be found at:

You can find the train/valid/test/challenge sets of this challenge here. Each .zip file contains a .json file and a materials folder: the .json file contains the questions (challenge.zip has no answers), and the materials folder contains the images/videos required by the questions.
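As a starting point, a minimal sketch for unpacking one of these splits and loading its question file might look like the following (the exact .json file name inside each archive is an assumption; adjust the paths to your download):

```python
import json
import zipfile
from pathlib import Path

def load_split(zip_path: str, extract_dir: str = "data"):
    """Extract a split archive and load its question file.

    Assumes the archive contains a single .json question file alongside
    a 'materials' folder, as described above; the file name inside each
    .zip may differ.
    """
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(extract_dir)
    json_files = sorted(Path(extract_dir).rglob("*.json"))
    if not json_files:
        raise FileNotFoundError("no .json question file found in archive")
    with open(json_files[0], encoding="utf-8") as f:
        return json.load(f)
```

Material paths in the loaded questions are relative, so resolve them against the extraction directory when reading images or videos.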

The train.zip and valid.zip sets are split from SoccerBench, the largest multimodal soccer understanding QA benchmark, introduced in our previous work SoccerAgent. In the train/valid sets, you will find soccer understanding QA pairs featuring:

  • Around 10k QA pairs across 14 tasks
  • Balanced distribution across modalities and difficulty levels
  • Multi-choice format with one correct answer and three distractors
  • Professional annotations from soccer experts

Both the test phase and the challenge phase are supported by 500 unique QA pairs each, spanning all 14 aforementioned tasks. You can download the test set and challenge set on this Hugging Face page.

Each QA pair contains three core components in its dictionary:

  • Q: The question content.
  • materials: Paths to relevant images or videos.
  • Ox (e.g., O1, O2): The multiple-choice options.

An example of a QA pair is shown below:

  {
    "Q": "How many appearances did the midfielder who is replacing Antoine Griezmann in this video make for Atletico Madrid from 2002 to 2018?",
    "materials": [
      "materials/q12/SoccerReplay-1988/europe_champions-league_2023-2024/2023-11-07_atletico-de-madrid-celtic-fc-champions-league/2_19_01.mp4"
    ],
    "O1": "25 appearances",
    "O2": "7 appearances",
    "O3": "18 appearances",
    "O4": "13 appearances"
  }
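Given this dictionary layout, a small helper can split each QA pair into its question, material paths, and ordered option list (a sketch assuming options always use the `O1`, `O2`, ... key pattern shown above):

```python
def parse_qa(item: dict):
    """Split a QA dict into (question, material paths, option list).

    Collects every key matching the 'O<digit>' pattern in order, so it
    works whether a question has four options or some other count.
    """
    question = item["Q"]
    materials = item.get("materials", [])
    option_keys = sorted(
        (k for k in item if k.startswith("O") and k[1:].isdigit()),
        key=lambda k: int(k[1:]),
    )
    options = [item[k] for k in option_keys]
    return question, materials, options
```

Feeding the example above through this helper yields the question string, the single .mp4 path, and the four answer options in O1..O4 order.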

Evaluation

For this closed-ended QA task, we directly use accuracy as the evaluation metric:

\text{score} = \frac{\text{number of correct answers}}{500}
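The metric above reduces to a few lines of code. A sketch, assuming predictions and ground-truth answers are both given as mappings from question ID to a chosen option key (the ID and key format here are illustrative, not the official submission format):

```python
def score(predictions: dict, answers: dict, total: int = 500) -> float:
    """Accuracy over the test/challenge QA pairs.

    predictions and answers map question IDs to option keys (e.g. "O2").
    Missing predictions simply count as incorrect.
    """
    correct = sum(predictions.get(qid) == ans for qid, ans in answers.items())
    return correct / total
```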

Baseline

To facilitate benchmarking, we provide two widely used models (Qwen2.5-VL and GPT-4o) for direct inference as our baselines. The SoccerAgent pipeline, with its multi-agent reasoning, can also be regarded as a baseline; all of them can be found in our Official GitHub Repo.

Prize

The Rank 1 submission on the challenge set will win a $1000 prize sponsored by KNQ Technology.
