Enhance dataset card with task categories, paper/code links, and improved sample usage

#8
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +28 -13
README.md CHANGED
---
license: cc-by-nc-sa-4.0
task_categories:
- image-segmentation
tags:
- medical
- biomedical
- 3d
- cvpr2025
---

This repository contains the BiomedSegFM dataset, a crucial resource for the **CVPR 2025 Competition: Foundation Models for 3D Biomedical Image Segmentation**.

The dataset is used by the model presented in the paper [Medal S: Spatio-Textual Prompt Model for Medical Segmentation](https://huggingface.co/papers/2511.13001), which introduces a medical segmentation foundation model supporting native-resolution spatial and textual prompts, with channel-wise alignment between volumetric prompts and text embeddings. Medal S preserves full 3D context, efficiently processes multiple native-resolution masks in parallel, and supports up to 243 classes across CT, MRI, PET, ultrasound, and microscopy modalities.

**Paper:** [https://huggingface.co/papers/2511.13001](https://huggingface.co/papers/2511.13001)
**Code:** [https://github.com/yinghemedical/Medal-S](https://github.com/yinghemedical/Medal-S)

# CVPR 2025 Competition: Foundation Models for 3D Biomedical Image Segmentation

**We highly recommend watching the [webinar recording](https://www.youtube.com/playlist?list=PLWPTMGguY4Kh48ov6WTkAQDfKRrgXZqlh) to learn about the task settings and baseline methods.**
 
[...]

Folder structure
[...]
- CVPR25_TextSegFMData_with_class.json: text prompts for the text-guided segmentation task

## Sample Usage: Interactive 3D Segmentation ([Homepage](https://www.codabench.org/competitions/5263/))
 
The training `npz` files contain three keys: `imgs`, `gts`, and `spacing`.
The validation (and testing) `npz` files don't have a `gts` key. We provide an optional `boxes` key in the `npz` file; each box is defined by the 2D bounding box on the middle slice plus the top and bottom slices (a closed interval).
Here is a demo to load the data:

```python
import numpy as np

npz = np.load('path to npz file', allow_pickle=True)
print(npz.keys())
imgs = npz['imgs']
gts = npz['gts']      # will not be in the npz for testing cases
boxes = npz['boxes']  # a list of bounding box prompts
print(boxes[0].keys())  # dict_keys(['z_min', 'z_max', 'z_mid', 'z_mid_x_min', 'z_mid_y_min', 'z_mid_x_max', 'z_mid_y_max'])
```
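A box prompt only fixes the 2D box on the middle slice and the z extent, so how you expand it into a 3D region is a modeling choice. Below is a minimal sketch, assuming `imgs` is laid out as `(z, y, x)` and simply extruding the mid-slice box over the whole z range; the helper name is illustrative and not part of the dataset tooling, and whether the max coordinates are inclusive is not specified here.

```python
import numpy as np

def box_prompt_to_mask(box, shape):
    """Illustrative helper: rasterize one box prompt into a binary 3D mask.

    Assumes a (z, y, x) volume layout and extrudes the middle-slice 2D box
    over the closed z interval [z_min, z_max]; a model may propagate the
    prompt across slices differently.
    """
    mask = np.zeros(shape, dtype=np.uint8)
    mask[box['z_min']:box['z_max'] + 1,
         box['z_mid_y_min']:box['z_mid_y_max'],
         box['z_mid_x_min']:box['z_mid_x_max']] = 1
    return mask

# e.g. prompt_mask = box_prompt_to_mask(boxes[0], imgs.shape)
```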
 
 
Remarks:
[...]
3. The provided box prompts are designed for annotation efficiency and may not cover the whole object. [Here](https://github.com/JunMa11/CVPR-MedSegFMCompetition/blob/main/get_boxes.py) is the script used to generate box prompts from ground truth (a rough sketch of the idea follows below).
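For intuition only, here is a rough sketch of how such a prompt could be derived from a ground-truth mask; refer to the linked `get_boxes.py` for the official logic, which this does not claim to reproduce.

```python
import numpy as np

def box_from_gt(gt, label):
    """Rough sketch (not the official get_boxes.py): derive a box prompt
    for one label from a ground-truth mask laid out as (z, y, x)."""
    zs, ys, xs = np.where(gt == label)
    z_min, z_max = int(zs.min()), int(zs.max())
    z_mid = (z_min + z_max) // 2
    # Assumes the object appears on the middle slice, which may not always hold.
    ys_mid, xs_mid = np.where(gt[z_mid] == label)
    return {
        'z_min': z_min, 'z_max': z_max, 'z_mid': z_mid,
        'z_mid_x_min': int(xs_mid.min()), 'z_mid_y_min': int(ys_mid.min()),
        'z_mid_x_max': int(xs_mid.max()), 'z_mid_y_max': int(ys_mid.max()),
    }
```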
 
## Sample Usage: Text-Guided Segmentation ([Homepage](https://www.codabench.org/competitions/5651/))

For the training set, we provide a JSON file with dataset-wise prompts, `CVPR25_TextSegFMData_with_class.json`.
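The exact schema of the JSON file isn't spelled out here; assuming it maps dataset names to their class-wise text prompts, a quick way to inspect it before relying on any particular structure:

```python
import json

with open('CVPR25_TextSegFMData_with_class.json') as f:
    prompts = json.load(f)

# Inspect the top-level structure first; the schema is an assumption here.
print(len(prompts))
print(list(prompts.keys())[:5])
```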
 
 
[...]

For the validation (and hidden testing) set, we provide a `text_prompts` key in each validation npz file:

```python
import numpy as np

npz = np.load('path to npz file', allow_pickle=True)
print(npz.keys())
imgs = npz['imgs']
print(npz['text_prompts'])
```
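Continuing from the snippet above: depending on how the prompts were saved, `npz['text_prompts']` may come back as a 0-d object array rather than a plain dict or string. This defensive unwrap is a general NumPy idiom, not something the dataset documentation specifies:

```python
text_prompts = npz['text_prompts']
# np.savez stores Python objects as 0-d object arrays; .item() unwraps them.
if isinstance(text_prompts, np.ndarray) and text_prompts.dtype == object:
    text_prompts = text_prompts.item()
print(text_prompts)
```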
 
Remarks:

1. To ensure rotation consistency, all testing cases will be preprocessed to the standard orientation with [nibabel.funcs.as_closest_canonical](https://nipy.org/nibabel/reference/nibabel.funcs.html#nibabel.funcs.as_closest_canonical) (a minimal usage sketch follows this list).
2. Some datasets don't have text prompts; simply exclude them during model training.
3. For instance labels, the evaluation metric is the [F1 score](https://github.com/JunMa11/NeurIPS-CellSeg/blob/main/baseline/compute_metric.py), where the order of instance IDs doesn't matter.
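Remark 1 links nibabel's canonical reorientation; a minimal usage sketch on a NIfTI volume (the filename is hypothetical):

```python
import nibabel as nib

img = nib.load('case_0001.nii.gz')         # hypothetical filename
canonical = nib.as_closest_canonical(img)  # reorient to the closest RAS+ orientation
data = canonical.get_fdata()
print(canonical.affine)
```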