---
license: cc-by-4.0
task_categories:
- translation
- automatic-speech-recognition
language:
- it
- en
multilinguality:
- multilingual
pretty_name: FAMA-data
tags:
- speech
- speech-to-text
- open-source
- speech translation
- ST
- ASR
- audio
- text
size_categories:
- 100K<n<1M
configs:
  - config_name: en
    data_files:
      - split: train_commonvoice
        path: train_commonvoice_en-it.tsv
      - split: train_covost2
        path: train_covost2_en-it.tsv
      - split: train_fleurs
        path: train_fleurs_en-it.tsv
      - split: train_librilight_large
        path: train_librilightlarge_en-it.tsv
      - split: train_librilight_medium
        path: train_librilightmedium_en-it.tsv
      - split: train_librilight_small
        path: train_librilightsmall_en-it.tsv
      - split: train_librispeech
        path: train_librispeech_en-it.tsv
      - split: train_mls
        path: train_mls_en-it.tsv
      - split: train_voxpopuli
        path: train_voxpopuli_en-it.tsv
      - split: train_voxpopuliasr
        path: train_voxpopuliasr_en-it.tsv
      - split: train_youtubecommons
        path: train_youtubecommons_en-it.tsv
  - config_name: it
    data_files:
      - split: train_commonvoice
        path: train_commonvoice_it-en.tsv
      - split: train_covost2
        path: train_covost2_it-en.tsv
      - split: train_fleurs
        path: train_fleurs_it-en.tsv
      - split: train_mls
        path: train_mls_it-en.tsv
      - split: train_voxpopuli
        path: train_voxpopuli_it-en.tsv
      - split: train_voxpopuliasr
        path: train_voxpopuliasr_it-en.tsv
      - split: train_youtubecommons
        path: train_youtubecommons_it-en.tsv
---

<img src="https://huggingface.co/FBK-MT/fama-small/resolve/main/FAMA.png" align="center" width="100%">

### Dataset Description, Collection, and Source

The FAMA training data is the collection of English and Italian datasets for automatic speech recognition (ASR) and speech translation (ST)
used to train the [FAMA models family](https://huggingface.co/collections/FBK-MT/fama-683425df3fb2b3171e0cdc9e).
The ASR section of FAMA is derived from the [MOSEL data collection](https://github.com/hlt-mt/mosel), including the automatic
transcripts obtained with Whisper and available in the [HuggingFace MOSEL Dataset](https://huggingface.co/datasets/FBK-MT/mosel).
The ASR data is further augmented with automatically transcribed speech from the 
[YouTube-Commons dataset](https://huggingface.co/datasets/PleIAs/YouTube-Commons).
The ST section is composed of gold-labeled ST datasets and the automatic translations of the ASR datasets obtained with 
[MADLAD-400 3B-MT](https://huggingface.co/google/madlad400-3b-mt). 
The complete list of datasets for both tasks is reported in the [Dataset Statistics](#dataset-statistics).

- **Curated by:** Sara Papi, Marco Gaido, Luisa Bentivogli, Alessio Brutti, Mauro Cettolo, Roberto Gretter, Marco Matassoni, Mohamed Nabih, and Matteo Negri
- **Funded by:** FAIR, Meetween, and CINECA
- **Shared by:** Fondazione Bruno Kessler

### License
- CC-BY-4.0

### Dataset Sources

- **MOSEL Collection:** [MOSEL GitHub](https://github.com/hlt-mt/mosel)
- **MOSEL Pseudolabels:** [MOSEL HuggingFace](https://huggingface.co/datasets/FBK-MT/mosel)
- **YouTube-Commons:** [YouTube-Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons)
- **Paper:** [FAMA: The First Large-Scale Open-Science Speech Foundation Model for English and Italian](https://huggingface.co/papers/2505.22759)

## Dataset Structure

### Data Config
The dataset is split into multiple TSV files, one per source dataset and language direction, with Italian (it) or English (en) 
as the source language. Each file contains both the ASR transcript and its translation into the other language.

### Data Fields

`id`: unique id of the segment (text, e.g.: "5NTUCHeZuds_0")

`audio`: filename (text, e.g. "5NTUCHeZuds.wav")

`offset`: start of the segment, in seconds (float, e.g. "0.020")

`duration`: duration of the segment, in seconds (float, e.g. "5.946")

`speaker`: id of the speaker (text, e.g. "000")

`src_lang`: id of the source language (ISO 639-1 code, e.g. "it", "en")

`src_text`: recognized text (text, e.g. "Grazie a tutti.")

`tgt_lang`: id of the target language (ISO 639-1 code, e.g. "it", "en")

`tgt_text`: translated text (text, e.g. "Thank you all.")

`ASR`: True/False - indicates whether the sample has been used for ASR training

`ST`: True/False - indicates whether the sample has been used for ST training
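As a minimal sketch of how these fields can be consumed, the snippet below parses one illustrative TSV row with Python's standard `csv` module (the row values are taken from the examples above; the exact files may contain additional quoting conventions):

```python
import csv
import io

# One illustrative row using the fields documented above.
tsv = (
    "id\taudio\toffset\tduration\tspeaker\tsrc_lang\tsrc_text\ttgt_lang\ttgt_text\tASR\tST\n"
    "5NTUCHeZuds_0\t5NTUCHeZuds.wav\t0.020\t5.946\t000\tit\tGrazie a tutti.\ten\tThank you all.\tTrue\tTrue\n"
)

row = next(csv.DictReader(io.StringIO(tsv), delimiter="\t"))

# Numeric fields arrive as strings and need casting; the ASR/ST flags are
# the literal strings "True"/"False".
offset = float(row["offset"])
duration = float(row["duration"])
used_for_asr = row["ASR"] == "True"

print(row["src_text"], "->", row["tgt_text"])  # Grazie a tutti. -> Thank you all.
```

For the real files, replace `io.StringIO(tsv)` with an open file handle on the corresponding `train_*.tsv`.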

## Dataset Statistics

The full list of FAMA training datasets, together with the number of hours for each language/language pair and 
the type of labels (A for automatic and G for gold labels), is reported below for both ASR and ST tasks.

### Automatic Speech Recognition (ASR)
| Dataset | English (h) | Italian (h) | Label |
|--------|--------|--------|-------|
| CommonVoice v18     | 1,746    | 250  | G |
| CoVoST2     | 420     | 28   | G  |
| FLEURS     | 7    | 9  | G |
| LibriSpeech     | 358    | -  | G |
| MOSEL     | 66,301    | 21,775  | A |
| MLS     | 44,600    | 247  | G |
| VoxPopuli-ASR     | 519    | 74  | G |
| YouTube-Commons     | 14,200    | 1,828  | A |
| **TOTAL**     | 128,152    | 24,211  | G+A |

### Speech Translation (ST)
| Dataset | English (h) | Italian (h) | Label |
|--------|--------|--------|-------|
| CommonVoice v18     | 1,746    | 250  | A |
| CoVoST2     | 420     | 28   | A  |
| LibriSpeech     | 358    | -  | A |
| MOSEL     | 66,301    | 21,775  | A |
| MLS     | 44,600    | 247  | A |
| VoxPopuli-ASR     | 519    | 74  | A |
| YouTube-Commons     | 14,200    | 1,828  | A |
| *TOTAL (A)*     | 128,144    | 24,202  | A |
| *FILTERED (A)*     | 123,777    | 23,445  | A |
| CoVoST2     | 420     | 28   | G  |
| FLEURS     | 7    | 9  | G |
| **TOTAL**     | 124,204    | 23,482  | G+A |

## Dataset Creation
To reproduce the MOSEL-derived datasets (all but YouTube-Commons), please refer to the 
[MOSEL README in the fbk-llm](https://github.com/hlt-mt/fbk-llm) repository and to the 
[MOSEL data card on HuggingFace](https://huggingface.co/datasets/FBK-MT/mosel).

To download and process YouTube-Commons, please refer to the 
[dedicated YouTube-Commons README](https://huggingface.co/datasets/FBK-MT/fama-data/blob/main/scripts/YouTube-Commons-README.md).

The code used to produce all translations with [MADALAD-400 3B-MT](https://huggingface.co/google/madlad400-3b-mt) is the following:
```python
import os
import sys
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

modelname = "google/madlad400-3b-mt"
batch_size = {$BATCH_SIZE}
tlang = {$LANGUAGE}

class BatchedMT:
    def __init__(self, tokenizer, model):
        self.buffer_lines = []
        self.model = model
        if torch.cuda.is_available():
            self.model = self.model.cuda()
        self.tokenizer = tokenizer

    def process_line(self, line):
        self.buffer_lines.append(line.strip())
        if len(self.buffer_lines) >= batch_size:
            self.print_translations()
            self.buffer_lines = []

    def print_translations(self):
        outs = self._do_translate()
        for s in outs:
            print(s)

    def _do_translate(self):
        tokens = self.tokenizer(self.buffer_lines, return_tensors="pt", padding=True)
        if torch.cuda.is_available():
            tokens = {k: v.cuda() for k, v in tokens.items()}
        translated = self.model.generate(**tokens, max_new_tokens=512)
        return [self.tokenizer.decode(t, skip_special_tokens=True) for t in translated]

    def close(self):
        if len(self.buffer_lines) > 0:
            self.print_translations()
            self.buffer_lines = []


mt = BatchedMT(
    AutoTokenizer.from_pretrained(modelname),
    AutoModelForSeq2SeqLM.from_pretrained(modelname))

for input_line in sys.stdin:
    mt.process_line("<2" + tlang + "> " + input_line)
mt.close()
```
where the input text is passed via stdin, `{$BATCH_SIZE}` is the batch size supported on your machine, 
and `{$LANGUAGE}` is either `en` for Italian-to-English translation or `it` for English-to-Italian translation.

The script used for filtering the ST datasets is 
[`filter_tsv_based_on_ratio`](https://huggingface.co/datasets/FBK-MT/fama-data/blob/main/scripts/filter_tsv_based_on_ratio.py), 
available in the `scripts` folder of this repository.
For English speech datasets, we set `--threshold-min 0.75` and `--threshold-max 1.45`, 
while for Italian speech datasets we set `--threshold-min 0.65` and `--threshold-max 1.35`.
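The exact filtering logic lives in the script linked above; as an illustrative sketch only, assuming the ratio is the character-length ratio between translation and source (the actual implementation may differ):

```python
def keep_pair(src_text: str, tgt_text: str,
              threshold_min: float, threshold_max: float) -> bool:
    """Keep a (source, translation) pair only if the character-length
    ratio len(tgt) / len(src) falls within [threshold_min, threshold_max].

    Hypothetical reconstruction for illustration, not the actual
    filter_tsv_based_on_ratio implementation."""
    if not src_text:
        return False
    ratio = len(tgt_text) / len(src_text)
    return threshold_min <= ratio <= threshold_max

# English-speech thresholds from the card: 0.75 / 1.45.
print(keep_pair("Thank you all.", "Grazie a tutti.", 0.75, 1.45))  # True
print(keep_pair("Thank you all.", "Si.", 0.75, 1.45))              # False: far too short
```

Such ratio filters are a common way to drop translations that are implausibly short or long relative to their source, which often signals a truncated or hallucinated machine-translation output.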


## Citation
```
@misc{papi2025fama,
      title={FAMA: The First Large-Scale Open-Science Speech Foundation Model for English and Italian}, 
      author={Sara Papi and Marco Gaido and Luisa Bentivogli and Alessio Brutti and Mauro Cettolo and Roberto Gretter and Marco Matassoni and Mohamed Nabih and Matteo Negri},
      year={2025}
}
```

## Dataset Card Contact
[@spapi](https://huggingface.co/spapi)