Update README.md

README.md CHANGED

@@ -1,133 +1,112 @@
---
license: apache-2.0
-task_categories:
-- multimodal
-- other
language:
- en
tags:
-- neuroscience
- fMRI
- EEG
- brain-imaging
-- multimodal
- science
- huggingscience
size_categories:
- 100K<n<1M
---

-# Dataset Card for CineBrain

[arXiv:2503.06940](https://arxiv.org/abs/2503.06940)

-### Dataset Summary

-CineBrain is a large-scale multimodal brain dataset that includes fMRI, EEG, and ECG recordings collected while participants watched episodes of The Big Bang Theory. Each participant viewed 20 episodes, and for each episode, only the first 18 minutes were used. In total, each participant watched approximately 6 hours of video. The fMRI was acquired with a TR of 0.8 seconds, and the EEG was recorded at 1000 Hz.

-### Supported Tasks and Leaderboards

-- **Multimodal Brain Analysis**: Analyze relationships between visual/auditory stimuli and brain responses
-- **Neural Decoding**: Decode brain states from neuroimaging data during naturalistic viewing
-- **Cross-modal Learning**: Learn mappings between different brain imaging modalities
-- **Temporal Dynamics**: Study temporal patterns in brain activity during narrative processing

-### Languages

-The dataset contains brain recordings during English audiovisual narrative processing (The Big Bang Theory episodes).

-### Repository Structure

-- **videos.tar**: Contains the video stimuli viewed by participants. Subjects 1, 2, and 6 watched the first 20 episodes, while subjects 3, 4, and 5 watched the first 10 and the last 10 episodes.
-- **sub-00xx**: Each folder corresponds to a specific participant and includes their raw and processed fMRI data, as well as the processed EEG data.
-- **captions-qwen-2.5-vl-7b.json**: Video captions generated using Qwen-2.5-VL-7B model

### Data Splits

-- **Subjects 1, 2, 6**: Episodes 1-20 (full coverage)
-- **Subjects 3, 4, 5**: Episodes 1-10 and 11-20 (split coverage)

-Total: ~36 hours of brain recordings across all subjects

-## Dataset Creation

-### Curation Rationale

-- Neural mechanisms of narrative comprehension
-- Cross-modal sensory integration
-- Individual differences in brain responses to media content
-- Temporal dynamics of attention and engagement

-### Source Data

-Brain recordings were collected from healthy participants while they watched sitcom episodes in a controlled laboratory environment. The use of popular media content ensures ecological validity while maintaining experimental control.

-#### Initial Data Collection and Normalization

-- **fMRI**: Collected with high temporal resolution (TR=0.8s) for detailed temporal dynamics
-- **EEG**: Recorded at 1000 Hz for precise temporal resolution of neural events
-- **Preprocessing**: Standard neuroimaging preprocessing pipelines applied
-- **Quality control**: Data quality checks and artifact removal procedures applied

-### Personal and Sensitive Information

-⚠️ **Neuroimaging Data**: This dataset contains brain imaging data from human subjects. While anonymized, users should follow appropriate ethical guidelines and data use agreements when working with neuroimaging data.

-## Important Notes

-- **Data Release**: All data has been released and is available for download
-- **Cross-dataset Correspondence**: Subjects 1, 2, 3, and 4 in this dataset correspond to Subjects 6, 8, 1, and 4 in the [fMRI-Shape](https://huggingface.co/datasets/Fudan-fMRI/fMRI-Shape) and [fMRI-Objaverse](https://huggingface.co/datasets/Fudan-fMRI/fMRI-Objaverse) datasets

-## Considerations for Using the Data

-- **Cultural bias**: Content is English-language Western sitcom
-- **Selection bias**: Participants were likely university-affiliated volunteers
-- **Temporal bias**: Data collected at specific time points may not generalize

-### Dataset Curators

If you find our paper useful for your research and applications, please cite using this BibTeX:

```bibtex
@@ -140,8 +119,4 @@ If you find our paper useful for your research and applications, please cite using this BibTeX:
primaryClass={cs.CV},
url={https://arxiv.org/abs/2503.06940},
}
```

-### Contributions

-Thanks to the neuroscience research community and the original authors for creating and sharing this valuable dataset for advancing our understanding of brain function during naturalistic conditions.

---
license: apache-2.0
language:
- en
tags:
- fMRI
- EEG
- neuroscience
- brain-imaging
- science
- huggingscience
size_categories:
- 100K<n<1M
---

# Dataset Card for CineBrain

[arXiv:2503.06940](https://arxiv.org/abs/2503.06940)

**CineBrain** is a **large-scale multimodal brain dataset** comprising **fMRI, EEG, and ECG** recordings collected while participants watched episodes of The Big Bang Theory.
It supports research on **neural decoding, multimodal learning, and modality transfer** in naturalistic narrative processing.

---

## Dataset Description

### Summary

- **Participants**: 6 subjects
- **Stimuli**: 30 episodes of *The Big Bang Theory* (first 18 minutes per episode)
- **Recording time**: 6 hours per subject (36 hours total)
- **Modalities**:
  - fMRI: TR = 0.8s
  - EEG: 64 channels, 1000 Hz
  - ECG: synchronous recording
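
The specifications above fully determine the expected sample counts; as a quick sanity check (plain Python, no dataset access needed), the 18-minute segment length combined with the stated TR and EEG rate gives:

```python
# Expected sample counts for one 18-minute episode segment, derived only from
# the specifications listed above (fMRI TR = 0.8 s, EEG sampled at 1000 Hz).
SEGMENT_SECONDS = 18 * 60      # 1080 s of stimulus per episode
TR_SECONDS = 0.8               # fMRI repetition time
EEG_RATE_HZ = 1000             # EEG sampling rate

fmri_volumes = round(SEGMENT_SECONDS / TR_SECONDS)  # 1350 volumes per episode
eeg_samples = SEGMENT_SECONDS * EEG_RATE_HZ         # 1,080,000 samples per channel

print(f"fMRI volumes per episode: {fmri_volumes}")
print(f"EEG samples per channel per episode: {eeg_samples:,}")
```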

### Supported Tasks

- **Multimodal Brain Analysis**: Investigating relationships between audiovisual stimuli and neural responses
- **Neural Decoding**: Inferring cognitive states from fMRI and EEG signals
- **Cross-Modal Learning**: Learning shared representations across fMRI, EEG, and ECG
- **Modality Transfer**: Predicting **fMRI from EEG** and **EEG from fMRI**

---

## Dataset Structure

### Repository Contents

- `videos.tar`: Video stimuli (8100 clips from 30 episodes)
- `sub-00xx/`: Participant folders with raw + preprocessed fMRI/EEG
- `captions-qwen-2.5-vl-7b.json`: Auto-generated video captions

### Inside Each Participant Folder

- `fMRI_raw_data.tar` → raw fMRI
- `fMRI_preprocessed_data.tar` → preprocessed fMRI
- `EEG_preprocessed_data.tar` → preprocessed EEG
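
As a rough illustration of how these files might be fetched and inspected, here is a minimal sketch using `huggingface_hub` and the Python standard library. The repo id is a placeholder (substitute this dataset's actual Hugging Face id), and the exact participant folder naming (`sub-0001` versus another zero-padding) is assumed from the `sub-00xx` pattern above.

```python
import json
import tarfile
from huggingface_hub import snapshot_download

# Placeholder repo id -- replace with this dataset's actual Hugging Face id.
local_dir = snapshot_download(
    repo_id="your-namespace/CineBrain",
    repo_type="dataset",
    allow_patterns=["captions-qwen-2.5-vl-7b.json", "sub-0001/*"],
)

# Auto-generated captions (exact JSON structure is not documented here).
with open(f"{local_dir}/captions-qwen-2.5-vl-7b.json") as f:
    captions = json.load(f)
print(f"{len(captions)} caption entries")

# Peek inside one participant archive without extracting everything.
with tarfile.open(f"{local_dir}/sub-0001/EEG_preprocessed_data.tar") as tar:
    for member in tar.getmembers()[:10]:
        print(member.name)
```

Restricting `allow_patterns` keeps the first download small; the full fMRI/EEG archives run to tens of gigabytes (see the statistics below).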

### Data Statistics

| Modality | Sampling | Duration | Size (approx.) |
|----------|----------|----------|----------------|
| fMRI | TR=0.8s | 6h/subject | ~12 GB total |
| EEG | 1000 Hz, 64 ch | 6h/subject | ~72 GB total |
| Video | 30 eps × 18 min | 8100 clips | ~2.59 GB |

### Data Splits

- **Subjects 1, 2, 6**: Episodes 1–20 (5400 clips)
- **Subjects 3, 4, 5**: Episodes 1–10 and 21–30 (5400 clips)

Total: 36 hours of brain recordings across all subjects
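
For convenience, the coverage above can be written down as a plain mapping; this is just the split description re-expressed in Python, with the per-episode clip count derived from 5400 clips over 20 episodes per subject.

```python
# Per-subject episode coverage, transcribed from the split description above.
SUBJECT_EPISODES = {
    1: list(range(1, 21)),                        # episodes 1-20
    2: list(range(1, 21)),
    6: list(range(1, 21)),
    3: list(range(1, 11)) + list(range(21, 31)),  # episodes 1-10 and 21-30
    4: list(range(1, 11)) + list(range(21, 31)),
    5: list(range(1, 11)) + list(range(21, 31)),
}

CLIPS_PER_EPISODE = 5400 // 20  # 270 clips per 18-minute segment

for subject, episodes in sorted(SUBJECT_EPISODES.items()):
    total_clips = len(episodes) * CLIPS_PER_EPISODE
    print(f"subject {subject}: {len(episodes)} episodes, {total_clips} clips")
```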
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 65 |
|
| 66 |
+
---
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 67 |
|
| 68 |
+
## π Important Notes
|
| 69 |
+
- **Data Release**: Fully open and downloadable.
|
| 70 |
+
- **Cross-Dataset Correspondence**: Subjects **1, 2, 3, and 4** in CineBrain correspond to Subjects **6, 8, 1, and 4** in the [fMRI-Shape](https://huggingface.co/datasets/Fudan-fMRI/fMRI-Shape) and [fMRI-Objaverse](https://huggingface.co/datasets/Fudan-fMRI/fMRI-Objaverse) datasets.
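
If you need that correspondence in code (for example, to pair CineBrain subjects with their fMRI-Shape/fMRI-Objaverse counterparts), it is just a four-entry lookup table; subjects 5 and 6 have no counterpart listed here.

```python
# CineBrain subject id -> subject id in fMRI-Shape / fMRI-Objaverse,
# exactly as stated in the note above.
CINEBRAIN_TO_FMRI_SHAPE = {1: 6, 2: 8, 3: 1, 4: 4}

def corresponding_subject(cinebrain_subject: int) -> int | None:
    """Return the matching fMRI-Shape/fMRI-Objaverse subject id, or None if unlisted."""
    return CINEBRAIN_TO_FMRI_SHAPE.get(cinebrain_subject)

print(corresponding_subject(2))  # -> 8
```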

---

## Dataset Creation

### Motivation

CineBrain was designed to support **naturalistic neuroscience research**, focusing on:

- Narrative comprehension
- Multisensory integration
- Individual variability in brain responses
- Temporal dynamics of engagement

### Source Data & Preprocessing

- **fMRI**: High temporal resolution (TR=0.8s)
- **EEG**: High sampling rate (1000 Hz)
- **Preprocessing**: Standard pipelines, artifact removal, quality control
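
Because the two recording grids differ (EEG at 1000 Hz, fMRI volumes every 0.8 s), cross-modal work typically starts by cutting the continuous EEG into TR-length windows. The sketch below is illustrative only: it assumes the preprocessed EEG has already been loaded as a `(channels, samples)` NumPy array and says nothing about the dataset's actual on-disk format.

```python
import numpy as np

EEG_RATE_HZ = 1000
TR_SECONDS = 0.8
SAMPLES_PER_TR = round(EEG_RATE_HZ * TR_SECONDS)  # 800 EEG samples per fMRI volume

def segment_eeg_by_tr(eeg: np.ndarray) -> np.ndarray:
    """Reshape continuous (n_channels, n_samples) EEG into
    (n_trs, n_channels, SAMPLES_PER_TR) windows aligned to the fMRI TR grid."""
    n_channels, n_samples = eeg.shape
    n_trs = n_samples // SAMPLES_PER_TR
    trimmed = eeg[:, : n_trs * SAMPLES_PER_TR]
    return trimmed.reshape(n_channels, n_trs, SAMPLES_PER_TR).transpose(1, 0, 2)

# Synthetic example: 64 channels, one 18-minute episode segment at 1000 Hz.
eeg = np.random.randn(64, 18 * 60 * EEG_RATE_HZ)
print(segment_eeg_by_tr(eeg).shape)  # (1350, 64, 800)
```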

⚠️ **Ethics**: All data are anonymized. Please follow ethical guidelines when using human neuroimaging data.

---

## Considerations for Using the Data

### Social Impact

This dataset may advance:

- Understanding of **complex narrative processing**
- **Brain-computer interfaces**
- **Clinical applications** for attention & comprehension disorders

### Potential Biases

- **Demographic bias**: Limited participant diversity
- **Cultural bias**: English-language sitcom
- **Selection bias**: Likely university volunteers

---

## Additional Information

- **License**: [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Languages**: English audiovisual content

## Citation Information

If you find our paper useful for your research and applications, please cite using this BibTeX:

```bibtex
primaryClass={cs.CV},
url={https://arxiv.org/abs/2503.06940},
}
```