---
license: mit
language:
  - en
pretty_name: common-o
dataset_info:
  features:
    - name: image_1
      dtype: image
    - name: image_2
      dtype: image
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: objects_1
      dtype: string
    - name: objects_2
      dtype: string
    - name: num_objects_image_1
      dtype: int64
    - name: num_objects_image_2
      dtype: int64
    - name: question_template
      dtype: string
    - name: answer_type
      dtype: string
    - name: choices
      dtype: string
    - name: num_choices
      dtype: int64
    - name: num_ground_truth_objects
      dtype: int64
    - name: real_or_synthetic
      dtype: string
    - name: ground_truth_objects
      dtype: string
  splits:
    - name: main
      num_bytes: 5408696753
      num_examples: 10426
    - name: challenge
      num_bytes: 594218345
      num_examples: 12600
  download_size: 1102814055
  dataset_size: 6002915098
configs:
  - config_name: default
    data_files:
      - split: main
        path: data/main-*
      - split: challenge
        path: data/challenge-*
---

# Common-O

*measuring multimodal reasoning across scenes*

Common-O, inspired by cognitive tests for humans, probes multimodal LLMs' ability to reason across scenes by asking "what’s in common?"

Common-O is composed of scenes of household objects.

We have two subsets: Common-O (3-8 objects) and Common-O Complex (8-16 objects).
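
The splits listed in the metadata above are `main` and `challenge`: `main` holds Common-O, and the Complex subset appears to be released as the `challenge` split. A quick way to confirm the split sizes (the expected counts come from the metadata above):

```python
import datasets

splits = datasets.load_dataset("facebook/Common-O")
print({name: len(ds) for name, ds in splits.items()})
# expected per the metadata above: {'main': 10426, 'challenge': 12600}
```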

## Multimodal LLMs excel at single-image perception, but struggle with multi-scene reasoning

## Evaluating a Multimodal LLM on Common-O

```python
import datasets

# load the main subset (Common-O)
common_o = datasets.load_dataset("facebook/Common-O")["main"]
# the Complex subset is the "challenge" split
# common_o_complex = datasets.load_dataset("facebook/Common-O")["challenge"]

# get a sample
x = common_o[3]

# query your multimodal LLM with both images and the question
output: str = model(x["image_1"], x["image_2"], x["question"])

# score the response against the ground truth (see check_answer below)
check_answer(output, x["answer"])
```
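
Each example also carries the metadata fields listed in the YAML header above (object lists, counts, answer type, choices, and so on), which are useful for slicing results. The comments below are only rough descriptions; printing a few examples is the easiest way to see the exact values:

```python
# per-example metadata fields (see the feature list above)
print(x["question"])            # the "what's in common?" question
print(x["answer"])              # comma-separated ground-truth answer
print(x["choices"])             # answer choices, stored as a string
print(x["answer_type"])         # format of the expected answer
print(x["num_objects_image_1"], x["num_objects_image_2"])  # objects per scene
print(x["real_or_synthetic"])   # whether the scenes are real or synthetic
```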

To check the answer, we use an exact-match criterion:

```python
import re

def check_answer(generation: str, ground_truth: str) -> bool:
    """
    Args:
        generation: model response, expected to end with a line "Answer: ..."
        ground_truth: comma-separated string of correct answers

    Returns: bool, whether the prediction matches the ground truth
    """
    # take the final line of the response and strip the "Answer:" prefix
    preds = generation.split("\n")[-1]
    preds = re.sub("Answer:", "", preds)
    # split into individual answers and sort so order doesn't matter
    preds = sorted(p.strip() for p in preds.split(","))

    # sort the ground truth the same way before comparing
    ground_truth_list = sorted(a.strip() for a in ground_truth.split(","))
    return preds == ground_truth_list
```
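
For example (the object names here are only illustrative), reasoning text before the final answer line is ignored and answer order doesn't matter, but the wording must match exactly:

```python
generation = "Both scenes contain dishes on a table.\nAnswer: plate, mug"
check_answer(generation, "mug, plate")   # True: same set of objects
check_answer(generation, "mug, bowl")    # False: "plate" vs "bowl"
check_answer(generation, "mugs, plate")  # False: exact match, no fuzzy matching
```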

Some models use a specific output format for their answers, e.g. `\boxed{A}` or `Answer: A`. We recommend inspecting a few responses, as you may notice slight variations in scoring depending on the format. This public set also differs slightly from the set used in the original paper, so while it measures the same capabilities, do not expect an exact replication of the accuracy figures.
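
If your model wraps its answer in `\boxed{...}` rather than ending with an `Answer:` line, one option is to normalize the response before scoring. A minimal sketch (the `extract_answer` helper is illustrative, not part of the dataset or the official evaluation):

```python
import re

def extract_answer(generation: str) -> str:
    """Pull the answer text out of a model response (illustrative helper)."""
    # prefer a \boxed{...} wrapper if one is present
    boxed = re.search(r"\\boxed\{([^}]*)\}", generation)
    if boxed:
        return boxed.group(1).strip()
    # otherwise fall back to the last "Answer: ..." line
    for line in reversed(generation.splitlines()):
        if line.strip().startswith("Answer:"):
            return line.split("Answer:", 1)[1].strip()
    # last resort: the final line as-is
    return generation.strip().splitlines()[-1].strip()

# normalize to the "Answer: ..." format that check_answer expects
check_answer("Answer: " + extract_answer(output), x["answer"])
```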

If you'd like to evaluate a single-image model, here's a handy function to combine `image_1` and `image_2` into a single side-by-side image:

```python
from PIL import Image

def concat_images_horizontal(
    image1: Image.Image,
    image2: Image.Image,
    include_space: bool = True,
    space_width: int = 20,
    fill_color: tuple = (0, 0, 0),
) -> Image.Image:
    # adapted from https://note.nkmk.me/en/python-pillow-concat-images/
    # width of the gap between the two images
    space = space_width if include_space else 0

    total_width = image1.width + space + image2.width
    max_height = max(image1.height, image2.height)

    dst = Image.new("RGB", (total_width, max_height), color=fill_color)
    # paste each image, vertically centered
    dst.paste(image1, (0, (max_height - image1.height) // 2))
    dst.paste(image2, (image1.width + space, (max_height - image2.height) // 2))
    return dst
```
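
For example, assuming a hypothetical `single_image_model` callable in place of a two-image model:

```python
combined = concat_images_horizontal(x["image_1"], x["image_2"])
output: str = single_image_model(combined, x["question"])  # your single-image model
check_answer(output, x["answer"])
```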

For more details about Common-O, see the dataset card.