---
license: cc-by-nc-4.0
task_categories:
- image-text-to-text
- image-segmentation
language:
- en
tags:
- medical
- image
- detection
- measurement
- angle
- distance
pretty_name: medvision
size_categories:
- 10M<n<100M
---

# News
- [Oct 8, 2025] 🚀 Release of the **MedVision** dataset v1.0.0
# TODO
- [ ] Add preprint, project page
- [x] Add instructions on how to prepare the SKM-TEA and ToothFairy2 datasets
- [ ] Add tutorial on how to expand the dataset
# Datasets

📝 The MedVision dataset consists of public medical images and the quantitative annotations curated in this study.

Abbreviations — MRI: Magnetic Resonance Imaging; CT: Computed Tomography; PET: Positron Emission Tomography; US: Ultrasound; b-box: bounding box; T/L: tumor/lesion size; A/D: angle/distance; HF: HuggingFace; GC: Grand-Challenge; * redistributed. Sample counts are reported as Train/Test.

| **Dataset**      | **Anatomy**   | **Modality** | **Annotation** | **Availability** | **Source**   | **b-box**     | **T/L**       | **A/D**        | **Status** |
| ---------------- | ------------- | ------------ | -------------- | ---------------- | ------------ | ------------- | ------------- | -------------- | ---------- |
| AbdomenAtlas     | abdomen       | CT           | b-box          | open             | HF           | 6.8 / 2.9M    | 0             | 0              | ✅         |
| AbdomenCT-1K     | abdomen       | CT           | b-box          | open             | Zenodo       | 0.7 / 0.3M    | 0             | 0              | ✅         |
| ACDC             | heart         | MRI          | b-box          | open             | HF*, others  | 9.5 / 4.8K    | 0             | 0              | ✅         |
| AMOS22           | abdomen       | CT, MRI      | b-box          | open             | Zenodo       | 0.8 / 0.3M    | 0             | 0              | ✅         |
| autoPET-III      | whole body    | CT, PET      | b-box, T/L     | open             | HF*, others  | 22 / 9.7K     | 0.5 / 0.2K    | 0              | ✅         |
| BCV15            | abdomen       | CT           | b-box          | open             | HF*, Synapse | 71 / 30K      | 0             | 0              | ✅         |
| BraTS24          | brain         | MRI          | b-box, T/L     | open             | HF*, Synapse | 0.8 / 0.3M    | 7.9 / 3.1K    | 0              | ✅         |
| CAMUS            | heart         | US           | b-box          | open             | HF*, others  | 0.7 / 0.3M    | 0             | 0              | ✅         |
| Ceph-Bio-400     | head and neck | X-ray        | b-box, A/D     | open             | HF*, others  | 0             | 0             | 5.3 / 2.3K     | ✅         |
| CrossModDA       | brain         | MRI          | b-box          | open             | HF*, Zenodo  | 3.0 / 1.0K    | 0             | 0              | ✅         |
| FeTA24           | fetal brain   | MRI          | b-box, A/D     | registration     | Synapse      | 34 / 15K      | 0             | 0.2 / 0.1K     | ✅         |
| FLARE22          | abdomen       | CT           | b-box          | open             | HF*, others  | 72 / 33K      | 0             | 0              | ✅         |
| HNTSMRG24        | head and neck | MRI          | b-box, T/L     | open             | Zenodo       | 18 / 6.6K     | 1.0 / 0.4K    | 0              | ✅         |
| ISLES24          | brain         | MRI          | b-box          | open             | HF*, GC      | 7.3 / 2.5K    | 0             | 0              | ✅         |
| KiPA22           | kidney        | CT           | b-box, T/L     | open             | HF*, GC      | 26 / 11K      | 2.1 / 1.0K    | 0              | ✅         |
| KiTS23           | kidney        | CT           | b-box, T/L     | open             | HF*, GC      | 80 / 35K      | 5.9 / 2.6K    | 0              | ✅         |
| MSD              | multiple      | CT, MRI      | b-box, T/L     | open             | others       | 0.2 / 0.1M    | 5.3 / 2.2K    | 0              | ✅         |
| OAIZIB-CM        | knee          | MRI          | b-box          | open             | HF           | 0.5 / 0.2M    | 0             | 0              | ✅         |
| SKM-TEA          | knee          | MRI          | b-box          | registration     | others       | 0.2 / 0.1M    | 0             | 0              | ✅         |
| ToothFairy2      | tooth         | CT           | b-box          | registration     | others       | 1.0 / 0.4M    | 0             | 0              | ✅         |
| TopCoW24         | brain         | CT, MRI      | b-box          | open             | HF*, Zenodo  | 43 / 20K      | 0             | 0              | ✅         |
| TotalSegmentator | multiple      | CT, MRI      | b-box          | open             | HF*, Zenodo  | 9.6 / 4.0M    | 0             | 0              | ✅         |
| **Total**        |               |              |                |                  |              | **22 / 9.2M** | **23 / 9.6K** | **5.6 / 2.4K** |            |

⚠️ The following datasets do not allow redistribution. You need to apply for access from the data owners, (optionally) upload the data to your private HF dataset repo, and set the corresponding environment variables.

| **Dataset** | **Source**                                              | **Host Platform** | **Env Var**                 |
| ----------- | ------------------------------------------------------- | ----------------- | --------------------------- |
| FeTA24      | https://www.synapse.org/Synapse:syn25649159/wiki/610007 | Synapse           | SYNAPSE_TOKEN               |
| SKM-TEA     | https://aimi.stanford.edu/datasets/skm-tea-knee-mri     | Huggingface       | MedVision_SKMTEA_HF_ID      |
| ToothFairy2 | https://ditto.ing.unimore.it/toothfairy2/               | Huggingface       | MedVision_ToothFairy2_HF_ID |

📝 For SKM-TEA and ToothFairy2, you need to process the raw data and upload the preprocessed data to your **private** HF dataset repo.
To use a private HF dataset repo, set `HF_TOKEN` and log in with `hf auth login --token $HF_TOKEN --add-to-git-credential` (a Python sketch of this setup follows the tutorial links below).
- Prepare SKM-TEA data: [tutorial](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/doc/dataset_skm-tea.md)
- Prepare ToothFairy2 data: [tutorial](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/doc/dataset_toothfairy2.md)
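A minimal sketch of the same setup from Python, assuming you have already obtained access to the restricted datasets and uploaded your preprocessed SKM-TEA / ToothFairy2 data; the token and repo IDs below are placeholders, not real resources:

```python
import os

from huggingface_hub import login

# Authenticate to the Hugging Face Hub so your private dataset repos are reachable
# (the programmatic counterpart of `hf auth login --token $HF_TOKEN ...`).
login(token=os.environ["HF_TOKEN"])

# Synapse access token for FeTA24 (placeholder value).
os.environ["SYNAPSE_TOKEN"] = "<your-synapse-token>"

# Your private HF dataset repos holding the preprocessed data (placeholder repo IDs).
os.environ["MedVision_SKMTEA_HF_ID"] = "<your-username>/SKM-TEA-preprocessed"
os.environ["MedVision_ToothFairy2_HF_ID"] = "<your-username>/ToothFairy2-preprocessed"
```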
# Requirement

📝 Note: `trust_remote_code` is no longer supported in `datasets>=4.0.0`; install `datasets` with `pip install datasets==3.6.0`.
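A quick way to verify the installed version before loading the dataset (a minimal sketch; only the version pin above is taken from this card):

```python
import datasets

# MedVision is loaded via a dataset script and therefore needs `trust_remote_code`,
# which was removed in datasets>=4.0.0; a 3.x release such as 3.6.0 is required.
major = int(datasets.__version__.split(".")[0])
assert major < 4, (
    f"datasets=={datasets.__version__} is installed; "
    "run `pip install datasets==3.6.0`."
)
```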
# Use

```python
import os

from datasets import load_dataset

# Set data folder (placeholder path; replace with your own)
os.environ["MedVision_DATA_DIR"] = "/path/to/data"

# Pick a dataset config name and split
config = "<config-name>"  # see the list of config names in `info/`
split_name = "test"  # use "test" for the testing-set config; use "train" for the training-set config

# Get dataset
ds = load_dataset(
    "YongchengYAO/MedVision",
    name=config,
    trust_remote_code=True,
    split=split_name,
)
```

📝 List of config names in `info/`
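Once loaded, `ds` behaves like a standard 🤗 `datasets.Dataset`; a quick way to inspect a split (no field names are assumed here, since they vary by config):

```python
# Number of samples in the selected split
print(len(ds))

# Column names and the first sample (fields depend on the chosen config)
print(ds.column_names)
print(ds[0])
```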
# Environment Variables

```bash
# Set where data will be saved; the complete dataset requires ~1 TB (placeholder path)
export MedVision_DATA_DIR="/path/to/data"

# Force download and processing of raw images; defaults to "False"
export MedVision_FORCE_DOWNLOAD_DATA="False"

# Force install of the dataset codebase; defaults to "False"
export MedVision_FORCE_INSTALL_CODE="False"
```
# Advanced Usage

The dataset codebase `medvision_ds` can be used to scale the dataset, including adding new annotation types and datasets.

🛠️ **Install**

```bash
pip install "git+https://huggingface.co/datasets/YongchengYAO/MedVision.git#subdirectory=src"
pip show medvision_ds
```

or

```bash
# First, install the benchmark codebase: medvision_bm
pip install "git+https://github.com/YongchengYAO/MedVision.git"

# Then, install the dataset codebase: medvision_ds
pip install huggingface_hub
# NOTE: replace <path-to-data-dir> with your data folder
python -c "from medvision_bm.utils import install_medvision_ds; install_medvision_ds(data_dir='<path-to-data-dir>')"
```

🧑🏻‍💻 Use [utility functions](https://huggingface.co/datasets/YongchengYAO/MedVision/tree/main/src/medvision_ds/utils) for image processing

```python
from medvision_ds.utils.data_conversion import (
    convert_nrrd_to_nifti,
    convert_mha_to_nifti,
    convert_nii_to_niigz,
    convert_bmp_to_niigz,
    copy_img_header_to_mask,
    reorient_niigz_RASplus_batch_inplace,
)
from medvision_ds.utils.preprocess_utils import (
    split_4d_nifti,
)
```

👩🏼‍💻 Examples of dataset scaling:
- Set up an automatic data processing pipeline
  - Download preprocessed data from HF: [medvision_ds/datasets/OAIZIB_CM/download.py](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/src/medvision_ds/datasets/OAIZIB_CM/download.py)
  - Download and process data from source: [medvision_ds/datasets/BraTS24/download_raw.py](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/src/medvision_ds/datasets/BraTS24/download_raw.py)
- Prepare annotations (a minimal b-box sketch follows this list)
  - Generate b-box annotations from segmentation masks:
    - [medvision_ds/datasets/BraTS24/preprocess_detection.py](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/src/medvision_ds/datasets/BraTS24/preprocess_detection.py)
  - Generate tumor/lesion size (T/L) annotations from segmentation masks:
    - [medvision_ds/datasets/BraTS24/preprocess_biometry.py](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/src/medvision_ds/datasets/BraTS24/preprocess_biometry.py)
  - Generate angle/distance (A/D) annotations from landmarks:
    - [medvision_ds/datasets/Ceph_Biometrics_400/preprocess_biometry.py](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/src/medvision_ds/datasets/Ceph_Biometrics_400/preprocess_biometry.py)
    - [medvision_ds/datasets/FeTA24/preprocess_biometry.py](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/src/medvision_ds/datasets/FeTA24/preprocess_biometry.py)
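The preprocessing scripts linked above are dataset-specific; as a rough illustration of the core idea behind deriving a b-box annotation from a segmentation mask, here is a minimal NumPy sketch (not the repository's implementation; the label value `1` is an assumption):

```python
import numpy as np


def bbox_from_mask(mask: np.ndarray, label: int = 1):
    """Return (min, max) voxel indices per axis for one label in a
    segmentation mask, or None if the label is absent."""
    coords = np.argwhere(mask == label)
    if coords.size == 0:
        return None
    mins = coords.min(axis=0)
    maxs = coords.max(axis=0)
    # One (min, max) index pair per axis, e.g. 3 pairs for a 3D volume.
    return list(zip(mins.tolist(), maxs.tolist()))


# Toy 3D mask with a single labeled cuboid
mask = np.zeros((8, 8, 8), dtype=np.uint8)
mask[2:5, 3:7, 1:4] = 1
print(bbox_from_mask(mask))  # [(2, 4), (3, 6), (1, 3)]
```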
# License: CC-BY-NC-4.0

This repository is released under [CC-BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).

✅ **What you can do**
- Copy and redistribute the material in any medium or format.
- Adapt, remix, transform, and build upon the material.
- Use it privately or in non-commercial educational, research, or personal projects.

🚫 **What you cannot do**
- Use the material for commercial purposes.

📄 **Requirements**

| **Requirement**        | **Description**                                              |
| ---------------------- | ------------------------------------------------------------ |
| **Attribution (BY)**   | You must give appropriate credit, provide a link to [this dataset](https://huggingface.co/datasets/YongchengYAO/MedVision), and indicate if changes were made. |
| **NonCommercial (NC)** | You may not use the material for commercial purposes.        |
| **Indicate changes**   | If you modify the work, you must note that it has been changed. |