## Dataset Card

### Motivation

**For what purpose was the dataset created?** The `Common-O` dataset was created to test the reasoning abilities of multimodal LLMs in multi-image, multi-object settings.

**Who created the dataset?** The dataset was created by the authors of this paper.

**Who funded the dataset creation?** This dataset was created with contributions from all of the authors on this paper and funded by Meta.

**Any other comments?** None.

### Composition

**What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.** Each instance is a tuple of two images, a set of candidate objects that may appear in either image, and the set of ground-truth objects common to both images (a minimal schema sketch follows below).

**How many instances are there in total (of each type, if appropriate)?** There are 10,586 instances in `Common-O` and 12,600 instances in `Common-O Complex`.

**Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)?** All instances were manually created, either by the authors taking the photographs or by the authors generating the images synthetically with a game engine. We created a large set of synthetic images (~400k). For both `Common-O` and `Common-O Complex` ($N$=3 to $N$=7 objects), we randomly sampled images with the target number of objects.

**Is there a label or target associated with each instance?** The target associated with each instance is the set of objects in common between both images (e.g., apple, keys).

**Is any information missing from individual instances?** All of the information is included for every instance.

**Are relationships between individual instances made explicit (e.g., users' movie ratings, social network links)? If so, please describe how these relationships are made explicit.** Each image in a given instance contains a specific configuration of objects, captured from multiple orientations; these orientations are labeled in the data files. Additionally, each image appears in multiple instances. Instances in the data files are labeled with the image filenames, so it is clear which instances share the same images.

**Are there recommended data splits (e.g., training, development/validation, testing)?** This is an evaluation-only benchmark; we do not provide any training or validation splits.

**Are there any errors, sources of noise, or redundancies in the dataset?** The instances were manually created. Potential sources of noise may come from ambiguity in identifying objects, which is captured by our human baseline.

**Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?** The dataset is entirely self-contained.

**Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor–patient confidentiality, data that includes the content of individuals' nonpublic communications)?** The dataset does not contain any confidential or private information.
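As a concrete illustration of the instance structure described above, here is a minimal sketch in Python; the class and field names are illustrative assumptions, not the dataset's actual schema:

```python
from dataclasses import dataclass

@dataclass
class CommonOInstance:
    image_a: str                 # filename of the first image
    image_b: str                 # filename of the second image
    candidate_objects: set[str]  # objects that may appear in either image
    common_objects: set[str]     # ground-truth objects present in both images

# Hypothetical example values, for illustration only.
instance = CommonOInstance(
    image_a="scene_012_view0.png",
    image_b="scene_012_view1.png",
    candidate_objects={"apple", "keys", "mug", "book"},
    common_objects={"apple", "keys"},
)
```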
**Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals race or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?** The dataset does not contain any sensitive information.

**Any other comments?** None.

### Collection Process

**How was the data associated with each instance acquired?** Every real photo was taken manually by one of the authors of this paper specifically for this dataset. Every synthetic image was generated by the authors using a game engine. We manually wrote the set of objects found in each image.

**What mechanisms or procedures were used to collect the data (e.g., hardware apparatuses or sensors, manual human curation, software, programs, software APIs)?** We used manual human curation for the real images and Unreal Engine for the synthetic images. We validated the images by hand-annotating a sampled subset.

**If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?** For the synthetic images, we downsampled the larger pool via random sampling (see the sampling sketch below).

**Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?** The authors performed all components of the data collection.

**Over what timeframe was the data collected?** The data was collected over approximately three months.

**Were any ethical review processes conducted (e.g., by an institutional review board)?** The data collection went through IRB review. We did not include humans in the images.

**Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?** The data was not collected from external individuals, third parties, or web sources. We manually collected all data.

**Were the individuals in question notified about the data collection?** N/A; see the previous question.

**Did the individuals in question consent to the collection and use of their data?** N/A; see the previous question.

**If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate).** N/A.

**Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation.** N/A.

**Any other comments?** None.

### Preprocessing/Cleaning/Labeling

**Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remaining questions in this section.** We manually collected or generated all dataset instances and therefore performed no data processing beyond image resizing; all images were also saved at their original size.
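As a rough illustration of the downsampling step described in the Collection Process section, here is a minimal sketch; the function name and the `images_by_count` mapping (object count to rendered image filenames) are hypothetical constructions, not part of the released pipeline:

```python
import random

def downsample_by_object_count(
    images_by_count: dict[int, list[str]],
    per_count: int,
    seed: int = 0,
) -> list[str]:
    """Randomly keep up to `per_count` images for each object count."""
    rng = random.Random(seed)
    kept: list[str] = []
    for n in range(3, 8):  # target object counts N = 3 to N = 7
        pool = images_by_count.get(n, [])
        kept.extend(rng.sample(pool, min(per_count, len(pool))))
    return kept
```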
### Uses

**Has the dataset been used for any tasks already?** The dataset has not been publicly released yet (outside of the private repository for paper review) and therefore has not been used for any additional tasks.

**Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point.** The dataset is accessible on HuggingFace at [this link](https://huggingface.co/datasets/facebook/Common-O); a loading sketch appears at the end of this card.

**What (other) tasks could the dataset be used for?** `Common-O` has been tested for multiple-choice QA with multiple possible answers. The dataset could also be tested on open-ended question answering.

**Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?** There is very minimal risk of harm. We did not include any pictures of people, real or generated, and we also excluded any logos. Additionally, this dataset is only for evaluation and therefore will not be used in model training.

**Are there tasks for which the dataset should not be used?** The dataset is exclusively for evaluation and should not be used to train or finetune any models.

**Any other comments?** None.

### Distribution

**Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description.** Yes, the dataset is publicly available on HuggingFace.

**How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)?** We will host the dataset on HuggingFace. Because this paper introduces the dataset, we will use the paper's DOI.

**When will the dataset be distributed?** The dataset will be distributed upon acceptance of the paper in 2025.

**Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?** The dataset is being distributed under the non-commercial CC BY-NC 4.0 license.

**Have any third parties imposed IP-based or other restrictions on the data associated with the instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions.** No.

**Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation.** No.

**Any other comments?** None.

### Maintenance

**Who will be supporting/hosting/maintaining the dataset?** The authors of the paper will maintain the dataset.

**How can the owner/curator/manager of the dataset be contacted (e.g., email address)?** Candace Ross and Mark Ibrahim can be contacted through the email addresses provided in the paper.

**Is there an erratum? If so, please provide a link or other access point.** There is currently no erratum.

**Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to dataset consumers (e.g., mailing list, GitHub)?** We will update the dataset to correct any errors. We will likely communicate updates via social media and possibly a GitHub page.
**If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were the individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced.** N/A.

**Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to dataset consumers.** N/A.

**If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description.** We encourage anyone interested in potential augmentations and contributions to contact us using our email addresses, listed above.

**Any other comments?** None.
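For convenience, here is a minimal loading sketch using the Hugging Face `datasets` library; the repository id is taken from the link in the Uses section, while the split name is an assumption and may differ in the actual release:

```python
from datasets import load_dataset

# Split and field names are assumptions; check the dataset card on the Hub.
ds = load_dataset("facebook/Common-O", split="test")
example = ds[0]  # one instance: two images plus candidate and common objects
print(example)
```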