nlylmz committed
Commit 6f0e810 · verified · 1 Parent(s): 074b101

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -66,7 +66,7 @@ This dataset card aims to be a base template for new datasets. It has been gener
 
 ### Dataset Description
 
-VOILA is an open-ended, large-scale, and dynamic dataset that evaluates the visual understanding and relational reasoning capabilities of MLLMs. It consists of distinct visual analogy questions designed to derive an answer by following the relational rules among a given triplet of images (A : A’ :: B : B’). Unlike previous visual analogy datasets, VOILA presents a more complex rule-based structure, incorporating various property relations, distraction rules, and the manipulation of up to three properties at a time across 14 subject types, 13 actions, and 4 numeric values. VOILA comprises two sub-tasks: the more complex VOILA-WD and the simpler VOILA-ND. Our experimental results show that state-of-the-art models struggle not only to apply a relationship to a new set of images but also to reveal the relationship between images. The best-performing model, MolmoE-8B, achieves 34% and 19% accuracy at finding the text-based answer on VOILA-WD and VOILA-ND, whereas human participants reach an average of 70% on both scenarios.
+VOILA is an open-ended, large-scale, and dynamic dataset that evaluates the visual understanding and relational reasoning capabilities of MLLMs. It consists of distinct visual analogy questions designed to derive an answer by following the relational rules among a given triplet of images (A : A’ :: B : B’). Unlike previous visual analogy datasets, VOILA presents a more complex rule-based structure, incorporating various property relations, distraction rules, and the manipulation of up to three properties at a time across 14 subject types, 13 actions, and 4 numeric values. VOILA comprises two sub-tasks: the more complex VOILA-WD and the simpler VOILA-ND. Our experimental results show that state-of-the-art models struggle not only to apply a relationship to a new set of images but also to reveal the relationship between images. LLaMa 3.2 achieves the highest performance, attaining 13% accuracy at the stage of applying relationships on VOILA-WD. Interestingly, GPT-4o outperforms other models on VOILA-ND, achieving 29% accuracy in applying relationships. However, human performance significantly surpasses these results, reaching 71% and 69% accuracy on VOILA-WD and VOILA-ND, respectively.
 
 
 
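
To make the analogy format in the description concrete, here is a minimal sketch of what one VOILA-style item could look like. All field names, file names, and the example rule are illustrative assumptions for this card, not the dataset's actual schema or loading code.

```python
from dataclasses import dataclass

# Hypothetical representation of one VOILA-style analogy item, inferred from
# the description above (A : A' :: B : B', with up to three properties
# manipulated per question). Field names and values are assumptions.

@dataclass
class AnalogyItem:
    image_a: str                 # image A
    image_a_prime: str           # A': A with some properties manipulated
    image_b: str                 # B: a new image the same rule applies to
    answer_text: str             # text-based answer describing B'
    changed_properties: tuple[str, ...]  # e.g., subject type, action, number

# Example: the A -> A' rule changes only the numeric property, so the
# answer applies the same change to B.
item = AnalogyItem(
    image_a="two_dogs_running.png",
    image_a_prime="three_dogs_running.png",
    image_b="two_cats_sleeping.png",
    answer_text="three cats sleeping",
    changed_properties=("number",),
)
print(item.answer_text)  # -> "three cats sleeping"
```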