---
annotations_creators:
  - machine-generated
language_creators:
  - machine-generated
language:
  - en
license:
  - cc
multilinguality:
  - monolingual
size_categories:
  - 1M<n<10M
  - 100K<n<1M
  - 10K<n<100K
  - 1K<n<10K
  - n<1K
source_datasets:
  - original
task_categories:
  - visual-question-answering
  - image-to-image
  - image-to-text
task_ids:
  - visual-question-answering
  - image-captioning
pretty_name: VOILA
tags:
  - analogy
  - relational reasoning
  - visual perception
dataset_info:
  features:
    - name: image1
      dtype: image
    - name: image2
      dtype: image
    - name: image3
      dtype: image
    - name: image4
      dtype: string
    - name: descriptions
      dtype: string
    - name: relations
      dtype: string
  splits:
    - name: train
      num_bytes: 41071851275.771
      num_examples: 10013
  download_size: 38443824733
  dataset_size: 41071851275.771
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Dataset Card for VOILA


## Dataset Details

### Dataset Description

VOILA is an open-ended, large-scale, and dynamic dataset that evaluates the visual understanding and relational reasoning capabilities of multimodal large language models (MLLMs). It consists of distinct visual analogy questions whose answer must be derived by following the relational rules among a given triplet of images (A : A' :: B : B'). Unlike previous visual analogy datasets, VOILA presents a more complex rule-based structure, incorporating various property relations, distraction rules, and the manipulation of up to three properties at a time across 14 subject types, 13 actions, and 4 numeric values. VOILA comprises two sub-tasks: the more complex VOILA-WD and the simpler VOILA-ND. Our experimental results show that state-of-the-art models struggle not only to apply a relationship to a new set of images but also to reveal the relationship between images. LLaMa 3.2 achieves the highest performance on VOILA-WD, attaining 13% accuracy in the relationship-application stage. Interestingly, GPT-4o outperforms other models on VOILA-ND, achieving 29% accuracy in applying relationships. Human performance significantly surpasses these results, reaching 71% and 69% accuracy on VOILA-WD and VOILA-ND, respectively.

  • Curated by: [More Information Needed]
  • Language(s) (NLP): English
  • License: cc
  • Contact: [email protected]
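
To make the task format concrete, below is a minimal, hedged sketch of how the two evaluation stages (revealing the relation between the first image pair, then applying it to the third image) could be posed to an MLLM. The prompt wording is an illustrative assumption, not the exact protocol used in the paper.

```python
# Illustrative sketch only: one way to phrase VOILA's two evaluation stages.
# The wording is an assumption, not the prompt used in the paper's experiments.
def build_analogy_prompt() -> str:
    return (
        "You are given three images forming a visual analogy A : A' :: B : ?.\n"
        "1) State which properties (subject type, number, action) change or stay "
        "constant between image A and image A'.\n"
        "2) Apply the same changes to image B and describe the missing image B'."
    )

print(build_analogy_prompt())
```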

### Dataset Sources [optional]

## Uses

### Direct Use

[More Information Needed]

## Dataset Structure

{'img1': 'two_hamsters_carrying something_1111.png',
 'img2': 'two_hamsters_walking_9111.png',
 'img3': 'four_cats_carrying something_11111.png',
 'img4': 'four cats walking',
 'desc_img1': 'two hamsters carrying something',
 'desc_img2': 'two hamsters walking',
 'desc_img3': 'four cats carrying something',
 'desc_im4': 'four cats walking',
 'combined_description': 'Image 1: two hamsters carrying something. Image 2: two hamsters walking. Image 3: four cats carrying something',
 'question': 'image_questions_1.png',
 'rule': '1',
 'Real_relations': 'Number remains constant two. Action is changed from carrying something to walking. Subject type remains constant hamsters.'}
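
The example above uses the per-file field names (`img*`, `desc_img*`), while the hosted configuration in the metadata exposes `image1`–`image4`, `descriptions`, and `relations`. A minimal sketch of loading the data with the Hugging Face `datasets` library, assuming the dataset is hosted under the hypothetical repository id `nlylmz/Voila` (replace it with the actual path):

```python
# Hedged sketch: load VOILA's default configuration and inspect one record.
# The repository id below is an assumption; substitute the actual dataset path.
from datasets import load_dataset

dataset = load_dataset("nlylmz/Voila", split="train")  # hypothetical repo id

print(dataset.features)      # image1/image2/image3 as images; image4, descriptions, relations as strings
sample = dataset[0]
print(sample["image4"])      # textual content of the solution image
print(sample["relations"])   # changed / unchanged properties
```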

### Data Fields

  • id:
  • img1: the file name of the first input image
  • img2: the file name of the second input image
  • img3: the file name of the third input image
  • img4: the content of the fourth image – analogy solution
  • desc_img1: description of the first image
  • desc_img2: description of the second image
  • desc_img3: description of the third image
  • desc_im4: description of the solution image
  • combined_description: the combined content description of the first three images (see the sketch after this list).
  • question: the file name of the image collage that combines the first three images into the analogy question.
  • rule: the number of the rule configuration.
  • Real_relations: the changed and unchanged properties between the first and second images.
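
As the example record shows, combined_description is simply the three per-image descriptions concatenated in order. A small sketch of rebuilding it (field names follow the example record above and may differ from the hosted configuration):

```python
# Hedged sketch: rebuild combined_description from the per-image descriptions.
# Field names follow the example record (desc_img1..desc_img3); they may differ
# from the hosted configuration (image*, descriptions, relations).
def build_combined_description(record: dict) -> str:
    parts = [
        f"Image 1: {record['desc_img1']}",
        f"Image 2: {record['desc_img2']}",
        f"Image 3: {record['desc_img3']}",
    ]
    return ". ".join(parts)
```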

### Data Splits

  • VOILA-WD: approximately 10K image analogy questions for the test setting, which includes the distraction rule.
  • VOILA-ND: approximately 3.6K image analogy questions for the test setting, which excludes the distraction rule.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Data Collection and Processing

[More Information Needed]

### Who are the source data producers?

[More Information Needed]

## Bias, Risks, and Limitations

Because the images are generated by Stable Diffusion XL (SDXL), they may reflect biases present in that model.

## Citation

BibTeX:

@inproceedings{yilmaz2025voila,
  title={Voila: Evaluation of {MLLM}s For Perceptual Understanding and Analogical Reasoning},
  author={Nilay Yilmaz and Maitreya Patel and Yiran Lawrence Luo and Tejas Gokhale and Chitta Baral and Suren Jayasuriya and Yezhou Yang},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=q5MUMlHxpd}
}