### **Dataset Card for Emilia with Emotion Annotations**

#### **Dataset Description**

This dataset is an enhanced version of the Emilia dataset, enriched with detailed emotion annotations. The annotations were generated using models from the EmoNet suite to provide deeper insight into the emotional content of speech. This work is based on the research and models described in the blog post "Do They See What We See?".

The annotations include 54 scores for each sample, covering a wide range of emotional and paralinguistic attributes, as well as an emotion caption generated by the BUD-E Whisper model. The goal is to enable more nuanced research and development in emotionally intelligent AI.
#### **Dataset Structure & Access**

The dataset contains the original Emilia audio data together with the new emotion annotations, packaged in WebDataset format.

Currently, the dataset is distributed across five Hugging Face repositories:

* `laion/Emilia-with-Emotion-Annotations`
* `laion/Emilia-with-Emotion-Annotations2`
* `laion/Emilia-with-Emotion-Annotations3`
* `laion/Emilia-with-Emotion-Annotations4`
* `laion/Emilia-with-Emotion-Annotations5`

To access the complete dataset, you must gather the data from all five repositories. (Note: we plan to merge these into a single repository, with improved annotations, in a future update.)
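
The snippet below is a minimal sketch of one way to collect the WebDataset shards from all five repositories with the `huggingface_hub` library. It assumes the shards are the `.tar` files in each repository and that your environment can access them (log in first if the repositories are gated); it is not an official loader.

```python
from huggingface_hub import HfApi, hf_hub_url

# The five repositories that together hold the complete dataset.
REPOS = [
    "laion/Emilia-with-Emotion-Annotations",
    "laion/Emilia-with-Emotion-Annotations2",
    "laion/Emilia-with-Emotion-Annotations3",
    "laion/Emilia-with-Emotion-Annotations4",
    "laion/Emilia-with-Emotion-Annotations5",
]

api = HfApi()
shard_urls = []
for repo in REPOS:
    # List every file in the dataset repository and keep the WebDataset shards.
    for path in api.list_repo_files(repo, repo_type="dataset"):
        if path.endswith(".tar"):
            shard_urls.append(hf_hub_url(repo, path, repo_type="dataset"))

print(f"Found {len(shard_urls)} shards across {len(REPOS)} repositories")
```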
The original `.tar` files for the Emilia dataset are also included. Files belonging to the YODAS subset can be identified by a suffix in their filenames.
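
As a rough illustration, the sketch below streams one of those shards with the `webdataset` library. The per-sample key names (`json` for the annotation record, `flac`/`mp3` for the audio) are assumptions made for illustration; inspect an actual shard before relying on them.

```python
import json

import webdataset as wds

# One shard URL, e.g. the first entry of `shard_urls` from the snippet above.
shard_url = shard_urls[0]

# `pipe:` streams the tar over HTTP instead of downloading it first.
dataset = wds.WebDataset(f"pipe:curl -s -L {shard_url}")

for sample in dataset:
    # NOTE: the key names below are assumptions; check sample.keys() first.
    annotations = json.loads(sample["json"])
    audio_bytes = sample.get("flac") or sample.get("mp3")
    print(sample["__key__"], list(annotations)[:5], len(audio_bytes or b""))
    break
```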
#### **Dataset Statistics**

This combined dataset comprises approximately **215,600 hours** of speech, merging the original Emilia dataset with a large portion of the YODAS dataset. The inclusion of YODAS significantly expands the linguistic diversity and the total volume of data.

The language distribution is broken down as follows:

| Language | Emilia Duration (hours) | Emilia-YODAS Duration (hours) | Total Duration (hours) |
| :--- | :--- | :--- | :--- |
| English | 46.8k | 92.2k | 139.0k |
| Chinese | 49.9k | 0.3k | 50.3k |
| German | 1.6k | 5.6k | 7.2k |
| French | 1.4k | 7.4k | 8.8k |
| Japanese | 1.7k | 1.1k | 2.8k |
| Korean | 0.2k | 7.3k | 7.5k |
| **Total** | **101.7k** | **113.9k** | **215.6k** |

#### **Interpretation of Scores**

The models predict raw scores for 40 emotional categories and 14 attribute dimensions. For the emotional categories, these raw scores are also used to calculate normalized Softmax probabilities, indicating the relative likelihood of each emotion.
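
As a minimal sketch of the normalization described above, the snippet below turns a vector of raw scores for the 40 emotional categories into Softmax probabilities; the raw scores here are random placeholders, not values from the dataset.

```python
import numpy as np

# Placeholder raw scores for the 40 emotional categories (random, for illustration).
raw_scores = np.random.randn(40)

# Numerically stable Softmax: subtract the maximum before exponentiating.
exp_scores = np.exp(raw_scores - raw_scores.max())
probabilities = exp_scores / exp_scores.sum()

assert np.isclose(probabilities.sum(), 1.0)
top = int(probabilities.argmax())
print(f"Most likely category index: {top} (p = {probabilities[top]:.3f})")
```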
| Attribute | Range | Description |
| :--- | :--- | :--- |
| **Valence** | -3 to +3 | -3: Ext. Negative, +3: Ext. Positive, 0: Neutral |
| **Arousal** | 0 to 4 | 0: Very Calm, 4: Very Excited, 2: Neutral |
| **Dominance** | -3 to +3 | -3: Ext. Submissive, +3: Ext. Dominant, 0: Neutral |
| **Age** | 0 to 6 | 0: Infant/Toddler, 2: Teenager, 4: Adult, 6: Very Old |
| **Gender** | -2 to +2 | -2: Very Masculine, +2: Very Feminine, 0: Neutral/Unsure |
| **Humor** | 0 to 4 | 0: Very Serious, 4: Very Humorous, 2: Neutral |
| **Detachment** | 0 to 4 | 0: Very Vulnerable, 4: Very Detached, 2: Neutral |
| **Confidence** | 0 to 4 | 0: Very Confident, 4: Very Hesitant, 2: Neutral |
| **Warmth** | -2 to +2 | -2: Very Cold, +2: Very Warm, 0: Neutral |
| **Expressiveness** | 0 to 4 | 0: Very Monotone, 4: Very Expressive, 2: Neutral |
| **Pitch** | 0 to 4 | 0: Very High-Pitched, 4: Very Low-Pitched, 2: Neutral |
| **Softness** | -2 to +2 | -2: Very Harsh, +2: Very Soft, 0: Neutral |
| **Authenticity** | 0 to 4 | 0: Very Artificial, 4: Very Genuine, 2: Neutral |
| **Recording Quality** | 0 to 4 | 0: Very Low, 4: Very High, 2: Decent |
| **Background Noise** | 0 to 3 | 0: No Noise, 3: Intense Noise |
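
The helper below is a hypothetical illustration of how the bipolar (e.g. -3 to +3) and unipolar (e.g. 0 to 4) ranges in the table above could be bucketed into coarse labels for quick filtering; the thresholds and the function itself are not part of the dataset.

```python
def coarse_label(score: float, low: float, high: float) -> str:
    """Bucket a score into the lower, middle, or upper third of its range.

    Hypothetical helper for quick filtering; thresholds are illustrative only.
    """
    span = high - low
    if score < low + span / 3:
        return "low"
    if score > high - span / 3:
        return "high"
    return "mid"


# Valence is scored on -3..+3, so +2.1 falls in the upper third ("high").
print(coarse_label(2.1, -3, 3))
# Arousal is scored on 0..4, so 2.0 sits in the middle third ("mid").
print(coarse_label(2.0, 0, 4))
```

Splitting each range into thirds is an arbitrary choice; adjust the cut-offs to whatever granularity your filtering needs.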
#### **Citation**

If you use this dataset, please cite the original Emilia dataset paper as well as the EmoNet-Voice paper.

```bibtex
@inproceedings{emilialarge,
  author={He, Haorui and Shang, Zengqiang and Wang, Chaoren and Li, Xuyuan and Gu, Yicheng and Hua, Hua and Liu, Liwei and Yang, Chen and Li, Jiaqi and Shi, Peiyang and Wang, Yuancheng and Chen, Kai and Zhang, Pengyuan and Wu, Zhizheng},
  title={Emilia: A Large-Scale, Extensive, Multilingual, and Diverse Dataset for Speech Generation},
  booktitle={arXiv:2501.15907},
  year={2025}
}

@article{emonet_voice_2025,
  author={Schuhmann, Christoph and Kaczmarczyk, Robert and Rabby, Gollam and Friedrich, Felix and Kraus, Maurice and Nadi, Kourosh and Nguyen, Huu and Kersting, Kristian and Auer, Sören},
  title={EmoNet-Voice: A Fine-Grained, Expert-Verified Benchmark for Speech Emotion Detection},
  journal={arXiv preprint arXiv:2506.09827},
  year={2025}
}
```