---
configs:
- config_name: image2text_info
data_files: image2text_info.csv
- config_name: image2text_option
data_files: image2text_option.csv
- config_name: text2image_info
data_files: text2image_info.csv
- config_name: text2image_option
data_files: text2image_option.csv
license: cc-by-nc-sa-4.0
language:
- en
size_categories:
- 1K<n<10K
tags:
- benchmark
- mllm
- scientific
- cover
- live
task_categories:
- image-text-to-text
---
# MAC: A Live Benchmark for Multimodal Large Language Models in Scientific Understanding
[arXiv](https://arxiv.org/abs/2508.15802)
[GitHub](https://github.com/mhjiang0408/MAC_Bench)
[License: CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
## Dataset Description
MAC is a live benchmark designed to evaluate multimodal large language models (MLLMs) on scientific understanding tasks. The dataset focuses on scientific journal cover understanding, providing a challenging testbed for assessing the visual-textual comprehension capabilities of MLLMs in academic domains.
### Tasks
**1. Image-to-Text Understanding**
- **Input**: Scientific journal cover image
- **Task**: Select the most accurate textual description from 4 multiple-choice options
- **Question Format**: "Which of the following options best describe the cover image?"
**2. Text-to-Image Understanding**
- **Input**: Journal cover story text description
- **Task**: Select the corresponding image from 4 visual options
- **Question Format**: "Which of the following options best describe the cover story?"
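Both tasks share a four-option multiple-choice format. A minimal sketch of assembling an image-to-text prompt from one row (the field names mirror the data-fields listing below; the example row and its option texts are fabricated for illustration):

```python
# Sketch: building a multiple-choice prompt from one benchmark row.
# The row below is a hypothetical example; real rows come from the CSVs.
row = {
    "question": "Which of the following options best describe the cover image?",
    "option_A": "A rendering of a protein complex.",
    "option_B": "A satellite view of a coral reef.",
    "option_C": "An illustration of a neural circuit.",
    "option_D": "A micrograph of crystal growth.",
}

def build_prompt(row: dict) -> str:
    """Format the question and its four options as a single prompt string."""
    lines = [row["question"]]
    for letter in "ABCD":
        lines.append(f"{letter}. {row[f'option_{letter}']}")
    return "\n".join(lines)

print(build_prompt(row))
```

The model's reply would then be matched against the `answer` field ('A' to 'D') of the same row.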
### Dataset Statistics
| Attribute | Value |
|-----------|-------|
| **Source Journals** | Nature, Science, Cell, ACS journals |
| **Task Types** | 2 (Image2Text, Text2Image) |
| **Options per Question** | 4 (A, B, C, D) |
| **Languages** | English |
| **Image Format** | High-resolution PNG journal covers |
### Quick Start
#### Loading the Dataset
```python
from datasets import load_dataset

# Four configs are available (see the YAML header); pass the one you need
dataset = load_dataset("mhjiang0408/MAC_Bench", "image2text_info")
```
#### Data Fields
**Image-to-Text Task Fields** (`image2text_info.csv`):
```python
{
    'journal': str,                  # Source journal name (e.g., "NATURE BIOTECHNOLOGY")
    'id': str,                       # Unique identifier (e.g., "42_7")
    'question': str,                 # Task question
    'cover_image': str,              # Path to cover image
    'answer': str,                   # Correct answer ('A', 'B', 'C', 'D')
    'option_A': str,                 # Option A text
    'option_A_path': str,            # Path to option A story file
    'option_A_embedding_name': str,  # Embedding method name
    'option_A_embedding_id': str,    # Embedding identifier
    # Similar fields for options B, C, D
    'split': str                     # Dataset split ('train', 'val', 'test')
}
```
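A quick sanity check over rows with these fields can be done with the standard library alone. The two-row CSV below is fabricated for illustration (only a subset of columns is shown); the validation logic is what matters:

```python
import csv
import io

# Fabricated sample standing in for a slice of image2text_info.csv
sample_csv = """journal,id,question,cover_image,answer,split
NATURE BIOTECHNOLOGY,42_7,Which of the following options best describe the cover image?,covers/42_7.png,B,test
SCIENCE,18_3,Which of the following options best describe the cover image?,covers/18_3.png,D,test
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))
for row in rows:
    # Every answer must be one of the four option letters
    assert row["answer"] in {"A", "B", "C", "D"}, row["id"]
print(f"validated {len(rows)} rows")
```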
### Evaluation Framework
Use the official MAC_Bench evaluation toolkit:
```bash
# Clone repository
git clone https://github.com/mhjiang0408/MAC_Bench.git
cd MAC_Bench
./setup.sh
```
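The toolkit handles evaluation end to end; at its core, scoring a multiple-choice run reduces to exact-match accuracy over answer letters. A minimal sketch on hypothetical predictions (the IDs and letters below are made up):

```python
# Sketch: exact-match accuracy over answer letters.
# gold maps question id -> correct option; pred maps id -> model's choice.
gold = {"42_7": "B", "18_3": "D", "55_1": "A"}
pred = {"42_7": "B", "18_3": "C", "55_1": "A"}

correct = sum(pred.get(qid) == ans for qid, ans in gold.items())
accuracy = correct / len(gold)
print(f"accuracy: {accuracy:.2%}")  # → accuracy: 66.67%
```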
### Use Cases
- **MLLM Evaluation**: Systematic benchmarking of multimodal large language models
- **Scientific Vision-Language Research**: Cross-modal understanding in academic domains
- **Educational AI**: Development of AI systems for scientific content comprehension
- **Academic Publishing Tools**: Automated analysis of journal covers and content
### Citation
If you use the MAC dataset in your research, please cite our paper:
```bibtex
@misc{jiang2025maclivebenchmarkmultimodal,
  title={MAC: A Live Benchmark for Multimodal Large Language Models in Scientific Understanding},
  author={Mohan Jiang and Jin Gao and Jiahao Zhan and Dequan Wang},
  year={2025},
  eprint={2508.15802},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2508.15802},
}
```
### License
This dataset is released under the CC BY-NC-SA 4.0 License. See [LICENSE](https://github.com/mhjiang0408/MAC_Bench/blob/main/LICENSE) for details.
### Contributing
We welcome contributions to improve the dataset and benchmark:
1. Report issues via [GitHub Issues](https://github.com/mhjiang0408/MAC_Bench/issues)
2. Submit pull requests for improvements
3. Join discussions in our [GitHub Discussions](https://github.com/mhjiang0408/MAC_Bench/discussions)