Update dataset card for MinD-3D++ paper and add task category (#2)
Commit: 989739bc8bc2526b3dc36b2aab59c086b356d4c1
Co-authored-by: Niels Rogge <[email protected]>

README.md CHANGED
````diff
@@ -1,14 +1,25 @@
 ---
 license: apache-2.0
+task_categories:
+- image-to-3d
+tags:
+- fmri
+- 3d-reconstruction
+- neuroscience
+- brain-decoding
 ---
 
-# …
-
-[…
-…
+# fMRI-Shape Dataset: A Component of the fMRI-3D Dataset for MinD-3D++
+
+This repository contains the `fMRI-Shape` dataset, a component of the comprehensive fMRI-3D dataset introduced and utilized in the paper [MinD-3D++: Advancing fMRI-Based 3D Reconstruction with High-Quality Textured Mesh Generation and a Comprehensive Dataset](https://huggingface.co/papers/2409.11315). This work builds upon the initial "MinD-3D" research.
+
+The fMRI-3D dataset consists of two components: `fMRI-Shape` (this dataset) and [fMRI-Objaverse](https://huggingface.co/datasets/Fudan-fMRI/fMRI-Objaverse). Both datasets are designed to advance 3D visual reconstruction from fMRI data.
+
+**Project Page:** https://jianxgao.github.io/MinD-3D
+**Code (GitHub):** https://github.com/JianxGao/MinD-3D
 
 ## Overview
-MinD-3D aims to reconstruct high-quality 3D objects based on fMRI data.
+MinD-3D (and MinD-3D++) aims to reconstruct high-quality 3D objects based on fMRI data. This `fMRI-Shape` dataset contains fMRI recordings and corresponding 3D object stimuli used in this research.
 
 ## Repository Structure
 - **annotations**: Contains metadata and annotations related to the fMRI data for each subject.
@@ -20,12 +31,11 @@ MinD-3D aims to reconstruct high-quality 3D objects based on fMRI data.
 - **raw_data**: Raw fMRI data collected directly from the imaging machine.
 - **npy_data**: Processed data. We utilized fMRIPrep and the methodologies described in our paper to derive and store the data in NumPy format (.npy).
 
-
 ## Citation
 
-If you find our paper useful for your research and applications, please cite using this BibTeX:
+If you find our work and datasets useful for your research and applications, please cite the respective papers:
 
-```
+```bibtex
 @misc{gao2023mind3d,
 title={MinD-3D: Reconstruct High-quality 3D objects in Human Brain},
 author={Jianxiong Gao and Yuqian Fu and Yun Wang and Xuelin Qian and Jianfeng Feng and Yanwei Fu},
@@ -34,4 +44,16 @@ If you find our paper useful for your research and applications, please cite usi
 archivePrefix={arXiv},
 primaryClass={cs.CV}
 }
 ```
+
+```bibtex
+@misc{gao2025mind3dadvancingfmribased3d,
+title={MinD-3D++: Advancing fMRI-Based 3D Reconstruction with High-Quality Textured Mesh Generation and a Comprehensive Dataset},
+author={Jianxiong Gao and Yanwei Fu and Yuqian Fu and Yun Wang and Xuelin Qian and Jianfeng Feng},
+year={2025},
+eprint={2409.11315},
+archivePrefix={arXiv},
+primaryClass={cs.CV},
+url={https://arxiv.org/abs/2409.11315},
+}
+```
````
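The card's `npy_data` directory stores processed fMRI data as NumPy `.npy` files. A minimal round-trip sketch of working with that format — the array shape and file name below are illustrative only, not the dataset's actual layout:

```python
import numpy as np

# Hypothetical processed fMRI frame (frames x height x width); the real
# shapes follow the preprocessing described in the paper, not this example.
frame = np.random.rand(4, 64, 64).astype(np.float32)

# .npy round-trip: np.save writes a single array, np.load reads it back
# with shape and dtype preserved.
np.save("example_frame.npy", frame)
loaded = np.load("example_frame.npy")

print(loaded.shape, loaded.dtype)  # (4, 64, 64) float32
```

For the dataset's real files, the same `np.load(path)` call applies; only the path and array dimensions differ.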