---
dataset_name: EMID-Emotion-Matching
annotations_creators:
- expert-generated
language:
- en
license: cc-by-nc-sa-4.0
pretty_name: EMID Music ↔ Image Emotion Matching Pairs
tags:
- audio
- music
- image
- multimodal
- emotion
- contrastive-learning
task_categories:
- audio-classification
- image-classification
- visual-question-answering
---

# EMID-Emotion-Matching

`orrzohar/EMID-Emotion-Matching` is a derived dataset built on top of the **Emotionally paired Music and Image Dataset (EMID)** from ECNU (`ecnu-aigc/EMID`). It is designed for *music ↔ image emotion matching* with Qwen-Omni-style models.

Each example contains:

- `audio`: mono waveform stored as `datasets.Audio` (the HF Hub preview can play it)
- `sampling_rate`: sampling rate used when decoding (typically 16 kHz)
- `image`: a single image (`datasets.Image`)
- `same`: `bool`, whether the audio and image are labeled with the **same** emotion
- `emotion`: normalized image emotion tag (e.g. `amusement`, `excitement`) for positive pairs; empty string for negatives
- `question`: natural-language question used to prompt the model (several templates are mixed)
- `answer`: canonical supervision text (`yes - {emotion}` for positives, `no` for negatives)

| column          | type                           | description |
| --------------- | ------------------------------ | ----------- |
| `audio`         | `datasets.Audio` (16 kHz mono) | decoded waveform; the HF UI can play it |
| `sampling_rate` | `int32`                        | explicit sample rate mirrored beside the `Audio` column |
| `image`         | `datasets.Image`               | `PIL.Image`-compatible object |
| `same`          | `bool`                         | `True` if the pair is emotion-aligned |
| `emotion`       | `string`                       | normalized emotion label for positives, `""` otherwise |
| `question`      | `string`                       | user prompt template |
| `answer`        | `string`                       | canonical supervision text (`yes - {emotion}` / `no`) |
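
For reference, this schema is declared in the Hub metadata as the following `dataset_info` features block:

```yaml
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: sampling_rate
    dtype: int32
  - name: image
    dtype: image
  - name: same
    dtype: bool
  - name: emotion
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
```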

The original EMID row has one music clip and up to **three** tagged images (`Image1`, `Image2`, `Image3`). For each `(audio, image)` pair we create:

- **1 positive example**: the audio and its own tagged image (`same = True`, `emotion = image_tag`)
- **`NEGATIVES_PER_POSITIVE` = 1 negative example**: the same audio paired with an image drawn from a *different* emotion tag (`same = False`, `emotion = ""`)

With `MAX_SOURCE_ROWS = 4000`, this yields ~24,000 examples (positives + negatives), which we then split into:

- `train`: 19,200 examples
- `test`: 4,800 examples
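
Assuming every capped source row contributes all three images, the advertised sizes follow directly from the construction parameters:

```python
# Back-of-the-envelope count for the pair construction described above.
MAX_SOURCE_ROWS = 4000
IMAGES_PER_ROW = 3            # Image1..Image3
NEGATIVES_PER_POSITIVE = 1
TRAIN_FRACTION = 0.8

positives = MAX_SOURCE_ROWS * IMAGES_PER_ROW          # one positive per (audio, image)
negatives = positives * NEGATIVES_PER_POSITIVE        # one sampled negative per positive
total = positives + negatives

train = round(total * TRAIN_FRACTION)
test = total - train
print(total, train, test)  # 24000 19200 4800
```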

## Source Data (EMID)

The base EMID dataset is described in:

- **Emotionally paired Music and Image Dataset (EMID)**
  *Y. Guo, J. Li, et al.*
  arXiv:2308.07622
  <https://arxiv.org/abs/2308.07622>

EMID contains 10,738 unique music clips, each paired with three images in the same emotional category, plus rich annotations:

- `Audio_Filename`: unique filename of the music clip
- `genre`: letter A–M, one of 13 emotional categories
- `feeling`: distribution of free-form feelings reported by listeners (% per feeling)
- `emotion`: ratings on 11 emotional dimensions (1–9)
- `Image{1,2,3}_filename`: matched image filenames
- `Image{1,2,3}_tag`: image emotion category (e.g. `amusement`, `excitement`)
- `Image{1,2,3}_text`: GIT-generated captions
- `is_original_clip`: whether this is an original or expanded clip

For more details, see the EMID README and the paper above.

## How This Derived Dataset Was Built

The script `prepare_emid_pairs.py` performs the following steps offline:

1. Load `ecnu-aigc/EMID` (train split) and decode:
   - `Audio_Filename` with `Audio(decode=True)`
   - `Image{1,2,3}_filename` with `datasets.Image(decode=True)`
2. Optionally cap the number of source rows with `MAX_SOURCE_ROWS` (default 4000).
3. Build an **image pool** keyed by normalized emotion tags.
4. For each EMID row and each available image (up to 3 per row):
   - Create a positive pair `(audio, image, same=True, emotion=image_tag)`.
   - Sample `NEGATIVES_PER_POSITIVE` images from *different* emotion tags to form negatives.
5. Normalize the emotion strings (lowercase; replace spaces and punctuation with `_`).
6. Draw a random question from a small set of Qwen-style templates and attach it as `question`.
7. Store the mono waveform as `datasets.Audio` and the image as `datasets.Image` so that downstream scripts can call `datasets.load_dataset` without extra decoding logic.
8. Split into train/test with `TRAIN_FRACTION = 0.8`.
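
The core of steps 3–4 can be sketched as follows; `build_pairs` and its input layout are illustrative, not the actual interface of `prepare_emid_pairs.py`:

```python
import random
from collections import defaultdict

def build_pairs(rows, negatives_per_positive=1, seed=0):
    """rows: iterable of (audio, [(image, emotion_tag), ...]) tuples.
    Returns a flat list of positive and negative pair dicts."""
    rng = random.Random(seed)

    # Step 3: image pool keyed by (normalized) emotion tag.
    pool = defaultdict(list)
    for _, images in rows:
        for image, tag in images:
            pool[tag].append(image)

    # Step 4: one positive per (audio, image), plus sampled negatives.
    pairs = []
    for audio, images in rows:
        for image, tag in images:
            pairs.append({"audio": audio, "image": image,
                          "same": True, "emotion": tag})
            other_tags = [t for t in pool if t != tag]
            for _ in range(negatives_per_positive):
                neg_tag = rng.choice(other_tags)
                pairs.append({"audio": audio,
                              "image": rng.choice(pool[neg_tag]),
                              "same": False, "emotion": ""})
    return pairs
```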

This yields a simple, flat structure that is convenient for SFT / contrastive training with Qwen2.5-Omni (or other multimodal LMs), without re-doing negative sampling or audio/image decoding inside notebooks.
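
The tag normalization in step 5 might look like this minimal helper (the function name is hypothetical; only the lowercase-and-underscore behavior comes from the description above):

```python
import re

def normalize_tag(tag: str) -> str:
    """Lowercase a raw emotion tag and replace runs of spaces or
    punctuation with a single underscore."""
    tag = tag.strip().lower()
    tag = re.sub(r"[^a-z0-9]+", "_", tag)  # spaces/punctuation -> "_"
    return tag.strip("_")

print(normalize_tag("Awe / Wonder"))  # awe_wonder
```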

## Suggested Usage

```python
from datasets import load_dataset

ds = load_dataset("orrzohar/EMID-Emotion-Matching")
train_ds = ds["train"]
test_ds = ds["test"]

ex = train_ds[0]
audio = ex["audio"]        # dict with "array" + "sampling_rate"
sr = ex["sampling_rate"]   # int
image = ex["image"]        # PIL.Image.Image
same = ex["same"]          # bool
emotion = ex["emotion"]    # str
question = ex["question"]  # str
answer = ex["answer"]      # str
```

In the Qwen-Omni demos, we typically:

- Use `question` as the user prompt,
- Provide `audio` and `image` as multimodal inputs, and
- Supervise the model with the provided `answer` (or regenerate your own phrasing from `same`/`emotion`).
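
One way to assemble a row into a chat-style training example is sketched below; the content-dict layout is an assumed Qwen-Omni-style format, so check it against your processor's expected message schema:

```python
def to_chat_example(ex):
    """Turn one dataset row into a (user, assistant) message pair.
    The content layout is an assumed Qwen-Omni-style format."""
    user = {
        "role": "user",
        "content": [
            {"type": "audio", "audio": ex["audio"]},
            {"type": "image", "image": ex["image"]},
            {"type": "text", "text": ex["question"]},
        ],
    }
    assistant = {"role": "assistant", "content": ex["answer"]}
    return [user, assistant]

# Tiny stand-in example (no real audio/image needed to show the shape):
msgs = to_chat_example({
    "audio": "<waveform>", "image": "<pil image>",
    "question": "Do the music and the image express the same emotion?",
    "answer": "yes - amusement",
})
```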

## License

This derived dataset **inherits the license** from EMID:

- **CC BY-NC-SA 4.0** (Attribution-NonCommercial-ShareAlike 4.0 International)

You **must**:

- Use the data only for **non-commercial** purposes.
- Provide appropriate **attribution** to the EMID authors and this derived dataset.
- Distribute derivative works under the **same license**.

Please refer to the full license text for details:
<https://creativecommons.org/licenses/by-nc-sa/4.0/>

If you use this dataset in academic work, please cite the EMID paper and, if appropriate, this derived dataset as well.