---
license: mit
language:
- en
pretty_name: common-o
dataset_info:
  features:
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: objects_1
    dtype: string
  - name: objects_2
    dtype: string
  - name: num_objects_image_1
    dtype: int64
  - name: num_objects_image_2
    dtype: int64
  - name: question_template
    dtype: string
  - name: answer_type
    dtype: string
  - name: choices
    dtype: string
  - name: num_choices
    dtype: int64
  - name: num_ground_truth_objects
    dtype: int64
  - name: real_or_synthetic
    dtype: string
  - name: ground_truth_objects
    dtype: string
  splits:
  - name: main
    num_bytes: 5408696753
    num_examples: 10426
  - name: challenge
    num_bytes: 594218345
    num_examples: 12600
  download_size: 1102814055
  dataset_size: 6002915098
configs:
- config_name: default
  data_files:
  - split: main
    path: data/main-*
  - split: challenge
    path: data/challenge-*
---
# Common-O
> measuring multimodal reasoning across scenes

Common-O, inspired by cognitive tests for humans, probes multimodal LLMs' ability to reason across scenes by asking "what's in common?"

Common-O comprises scenes of household objects.

We provide two subsets: Common-O (3–8 objects per scene; the `main` split) and Common-O Complex (8–16 objects per scene; the `challenge` split).
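
Each example pairs two scene images with a question, the ground-truth answer, and metadata such as object counts and `answer_type` (see the `dataset_info` features above). A minimal sketch to peek at one example:

```python
import datasets

ds = datasets.load_dataset("facebook/Common-O")
print(ds)  # two splits: "main" and "challenge"

ex = ds["main"][0]
print(ex["question"], "->", ex["answer"])
print(ex["num_objects_image_1"], ex["num_objects_image_2"], ex["answer_type"])
```
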
## Multimodal LLMs excel at single image perception, but struggle with multi-scene reasoning

## Evaluating a Multimodal LLM on Common-O
```python
import datasets

# load the main subset (Common-O); the "challenge" split holds Common-O Complex
common_o = datasets.load_dataset("facebook/Common-O")["main"]
# common_o_complex = datasets.load_dataset("facebook/Common-O")["challenge"]

# get a sample and query your multimodal LLM (`model` is a placeholder)
x = common_o[3]
output: str = model(x["image_1"], x["image_2"], x["question"])
check_answer(output, x["answer"])
```
To check the answer, we use an exact-match criterion:
```python
import re

def check_answer(generation: str, ground_truth: str) -> bool:
    """
    Args:
        generation: model response, expected to end with "Answer: ..."
        ground_truth: comma-separated string of correct answers
    Returns:
        bool: whether the prediction matches the ground truth
    """
    # take the last line of the response, which should contain the answer
    preds = generation.split("\n")[-1]
    preds = re.sub("Answer:", "", preds)
    # split into a list and normalize whitespace
    preds = [p.strip() for p in preds.split(",")]
    # sort both sides so the comparison is order-insensitive
    preds = sorted(preds)
    ground_truth_list = sorted(a.strip() for a in ground_truth.split(","))
    return preds == ground_truth_list
```
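As a quick sanity check (with hypothetical responses), the criterion is order-insensitive but case-sensitive:

```python
check_answer("Both scenes contain...\nAnswer: cup, bowl", "bowl, cup")  # True
check_answer("Both scenes contain...\nAnswer: Cup", "cup")              # False: casing must match
```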
Some models use specific output formats for their answers, e.g. `\boxed{A}` instead of `Answer: A`. We recommend inspecting a few responses, as exact-match scoring may need slight adjustments to account for this.
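If needed, a small normalization pass can be applied to the answer line before scoring. This is a sketch with a hypothetical helper, not part of the dataset tooling:

```python
import re

def normalize_answer_line(line: str) -> str:
    """Hypothetical pre-processing: unwrap \\boxed{...} and drop an 'Answer:' prefix."""
    line = line.strip()
    boxed = re.search(r"\\boxed\{([^}]*)\}", line)
    if boxed:
        return boxed.group(1).strip()
    return re.sub(r"^Answer:\s*", "", line).strip()
```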
This public set also differs slightly from the set used in the original paper, so while the measured capabilities are the same, do not expect an exact replication of the reported accuracy figures.
If you'd like to use a single-image model, here's a handy function to combine `image_1` and `image_2` into one side-by-side image:
```python
from PIL import Image

def concat_images_horizontal(
    image1: Image.Image,
    image2: Image.Image,
    include_space: bool = True,
    space_width: int = 20,
    fill_color: tuple = (0, 0, 0),
) -> Image.Image:
    # adapted from https://note.nkmk.me/en/python-pillow-concat-images/
    gap = space_width if include_space else 0
    total_width = image1.width + gap + image2.width
    max_height = max(image1.height, image2.height)
    dst = Image.new("RGB", (total_width, max_height), color=fill_color)
    # center each image vertically in the canvas
    dst.paste(image1, (0, (max_height - image1.height) // 2))
    dst.paste(image2, (image1.width + gap, (max_height - image2.height) // 2))
    return dst
```
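With the combined image, the evaluation loop stays the same; `single_image_model` below is a placeholder for your own model call:

```python
x = common_o[3]
combined = concat_images_horizontal(x["image_1"], x["image_2"])
output: str = single_image_model(combined, x["question"])  # placeholder model call
check_answer(output, x["answer"])
```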
For more details about Common-O, see the [dataset card](https://huggingface.co/datasets/facebook/Common-O/blob/main/COMMON_O_DATASET_CARD.md).