Molecule Detection Benchmark Collection

📑 Task 1: Multi-scale Chemical Structure Detection

Localizing all molecular structures within an image is a fundamental prerequisite for chemical structure recognition. Since user-provided images may vary widely in scale—ranging from a single molecule, to multiple molecules, to an entire PDF page—molecular detection algorithms must handle extreme scale variability. To evaluate this capability, we construct MolDet-Bench-General, a benchmark designed to assess multi-scale molecular localization performance.

MolDet-Bench-General

MolDet-Bench-General encompasses a diverse set of molecular detection tasks across multiple domains and scales. It includes single-molecule localization (i.e., determining whether an image contains a molecule), multi-molecule detection, molecular localization within reaction schemes and tables, handwritten molecule detection, and molecule localization on PDF pages (a total of 799 images). This benchmark therefore provides a comprehensive evaluation of detection performance under varied and challenging real-world conditions.

It is important to note that, to better accommodate multi-scale molecular detection and to avoid incomplete molecule crops, the benchmark does not use tight, edge-aligned bounding boxes. Instead, each molecular bounding box is intentionally expanded with an adaptive margin, ensuring that downstream recognition models are not adversely affected by truncated molecular structures.
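A minimal sketch of such an adaptive margin expansion, in Python. The margin fraction and clamping logic here are illustrative assumptions, not the benchmark's exact annotation procedure:

```python
def expand_box(x1, y1, x2, y2, img_w, img_h, margin_frac=0.05):
    """Expand a tight (x1, y1, x2, y2) box by a fraction of its own size,
    clamped to the image bounds, so crops never truncate the molecule."""
    mx = (x2 - x1) * margin_frac  # horizontal margin, proportional to box width
    my = (y2 - y1) * margin_frac  # vertical margin, proportional to box height
    return (max(0, x1 - mx), max(0, y1 - my),
            min(img_w, x2 + mx), min(img_h, y2 + my))
```

Because the margin scales with the box itself, small inline structures and full-page schemes receive proportionally similar padding.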

📑 Task 2: Chemical Structure Detection in Documents

Accurately localizing molecular structures in chemistry and biology literature and patents is a critically important task. We introduce MolDet-Bench-Doc, a benchmark for molecular localization within document pages, and additionally incorporate the third-party BioVista benchmark as part of our evaluation suite.

MolDet-Bench-Doc

We build upon the patent and scientific PDF corpus from the Uni-Parser Benchmark (described in the Uni-Parser Technical Report), which comprises 50 patents from 20 patent offices and 100 journal articles from multiple open-access sources. From this collection, we extract all pages containing molecular structures, yielding 447 pages with a total of 2,178 molecules. Molecular localization is annotated using edge-aligned (tight) bounding boxes. This subset forms MolDet-Bench-Doc, a benchmark designed to evaluate molecular localization performance directly on PDF document pages.

BioVista

We also processed the BioVista molecular object detection benchmark (introduced in BioMiner: A Multi-modal System for Automated Mining of Protein-Ligand Bioactivity Data from Literature) and converted its annotations into the YOLO format. The benchmark covers 500 papers in the domains of protein biology and related fields, containing a total of 11,212 molecular instances. However, it is important to note that the BioVista annotation guidelines differ from ours for molecular localization, and the dataset contains some instances of missing or incomplete molecular annotations. (Raw dataset from: https://github.com/jiaxianyan/BioMiner)
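Converting absolute pixel boxes to YOLO-format labels can be sketched as follows (the function name is ours; YOLO labels store the class index followed by the normalized box center and size):

```python
def to_yolo(x1, y1, x2, y2, img_w, img_h, cls=0):
    """Convert an absolute (x1, y1, x2, y2) box to a YOLO label line:
    'class x_center y_center width height', all coordinates normalized to [0, 1]."""
    xc = (x1 + x2) / 2 / img_w  # normalized box center x
    yc = (y1 + y2) / 2 / img_h  # normalized box center y
    w = (x2 - x1) / img_w       # normalized box width
    h = (y2 - y1) / img_h       # normalized box height
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```

One such line is written per molecule into a `.txt` file that shares its image's filename.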

🎯 Benchmark Evaluation

All benchmark datasets are organized in the Ultralytics YOLO format, and we use the ultralytics package to evaluate mAP50 and mAP50-95.
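A dataset YAML in the Ultralytics format might look like the following (all paths and the class name are illustrative, not the benchmark's actual files):

```yaml
# dataset.yaml — Ultralytics data config (paths are illustrative)
path: /your/path/to/MolDet-Bench-General  # dataset root directory
val: images/val                           # validation images, relative to root
names:
  0: molecule                             # single detection class
```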

You may install the library via:

pip install ultralytics

Example evaluation code:

from ultralytics import YOLO

model = YOLO("/your/path/to/yolo_weights.pt")

metrics = model.val(
    data="./path/to/benchmark/dataset.yaml",  # Path to the benchmark dataset YAML
    imgsz=640,  # Inference resolution (e.g., 640 for multi-scale MolDet-General models, 960 for document-optimized MolDet-Doc models)
    split="val",
    classes=[0]
)

print("mAP50:", metrics.box.map50)
print("mAP50-95:", metrics.box.map)

For further usage instructions, please refer to the official Ultralytics documentation.

📊 Benchmark Leaderboard

MolDet-Bench-General

| Model | mAP50 | mAP50-95 | Speed (T4 TensorRT10) |
|---|---|---|---|
| MolDetv2-General-n | 0.9872 | 0.8776 | 1.5 ± 0.0 ms |
| MolDet-General-l | 0.9675 | 0.8349 | 6.2 ± 0.1 ms |
| MolDet-General-m | 0.9702 | 0.8269 | 4.7 ± 0.1 ms |
| MolDet-General-s | 0.9685 | 0.8260 | 2.5 ± 0.1 ms |
| MolDet-General-n | 0.9574 | 0.8052 | 1.5 ± 0.0 ms |

MolDet-Bench-Doc

| Model | mAP50 | mAP50-95 | Speed (T4 TensorRT10) |
|---|---|---|---|
| MolDetv2-Doc-n | 0.9936 | 0.9544 | 3.1 ± 0.0 ms |
| Uni-Parser-LD | 0.9935 | 0.9679 | 10.1 ± 0.2 ms |
| MolDet-Doc-s | 0.9927 | 0.9531 | 5.5 ± 0.1 ms |
| MolDet-Doc-l | 0.9926 | 0.9367 | 13.1 ± 0.3 ms |
| MolDet-General-l | 0.9921 | 0.8251 | 6.2 ± 0.1 ms |
| MolDet-General-m | 0.9921 | 0.8063 | 4.7 ± 0.1 ms |
| MolDet-Doc-n | 0.9913 | 0.9555 | 3.1 ± 0.0 ms |
| MolDet-Doc-m | 0.9913 | 0.9539 | 9.9 ± 0.2 ms |
| MolDetv2-General-n | 0.9908 | 0.8003 | 1.5 ± 0.0 ms |
| MolDet-General-s | 0.9878 | 0.8535 | 2.5 ± 0.1 ms |
| MolDet-General-n | 0.9836 | 0.8093 | 1.5 ± 0.0 ms |

BioVista

| Model | mAP50 | Speed (T4 TensorRT10) |
|---|---|---|
| Uni-Parser-LD | 0.9806 | 10.1 ± 0.2 ms |
| MolDetv2-Doc-n | 0.9748 | 3.1 ± 0.0 ms |
| MolDetv2-General-n | 0.9609 | 2.5 ± 0.0 ms |
| MolDet-Doc-l | 0.9607 | 13.1 ± 0.3 ms |
| MolDet-Doc-m | 0.9558 | 9.9 ± 0.2 ms |
| MolDet-General-m | 0.9460 | 4.7 ± 0.1 ms |
| MolDet-General-l | 0.9447 | 6.2 ± 0.1 ms |
| MolDet-Doc-s | 0.9416 | 5.5 ± 0.1 ms |
| MolDet-Doc-n | 0.9391 | 3.1 ± 0.0 ms |
| MolDet-General-s | 0.9318 | 2.5 ± 0.1 ms |
| BioMiner | 0.9290 | - |
| MolDet-General-n | 0.9258 | 1.5 ± 0.0 ms |
| MolMiner | 0.8990 | - |

📖 Citation

If you use this benchmark in your work, please cite:

MolDet-Bench & MolDetv2 Model:

Coming soon!

MolDet Model:

@inproceedings{fang2025molparser,
  title={MolParser: End-to-end visual recognition of molecule structures in the wild},
  author={Fang, Xi and Wang, Jiankun and Cai, Xiaochen and Chen, Shangqian and Yang, Shuwen and Tao, Haoyi and Wang, Nan and Yao, Lin and Zhang, Linfeng and Ke, Guolin},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={24528--24538},
  year={2025}
}

BioVista Benchmark:

@article{Yan2025.04.22.648951,
  title = {BioMiner: A Multi-modal System for Automated Mining of Protein-Ligand Bioactivity Data from Literature},
  author = {Yan, Jiaxian and Zhu, Jintao and Yang, Yuhang and Liu, Qi and Zhang, Kai and Zhang, Zaixi and Liu, Xukai and Zhang, Boyan and Gao, Kaiyuan and Xiao, Jinchuan and Chen, Enhong},
  doi = {10.1101/2025.04.22.648951},
  journal = {bioRxiv},
  year = {2025}
}