---
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: transcription
    dtype: string
  splits:
  - name: train
    num_bytes: 3125353264.6964455
    num_examples: 5778
  - name: test
    num_bytes: 1004055850.0756147
    num_examples: 1683
  download_size: 3490774262
  dataset_size: 4129409114.7720604
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- km
tags:
- openslr42
- fleurs
- asr
---
__NOTE:__ If your Colab session crashes, run `pip install --upgrade --quiet datasets[audio]==3.6.0` to install `datasets[audio]` version `3.6.0`.

This dataset combines [google/fleurs](https://huggingface.co/datasets/google/fleurs), [openslr/openslr42](https://huggingface.co/datasets/openslr/openslr), and a cleaned version of [seanghay/khmer_mpwt_speech](https://huggingface.co/datasets/seanghay/khmer_mpwt_speech).
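A minimal loading sketch is shown below; the repo id is a placeholder, since this card does not state it, so substitute the actual dataset path.

```python
from datasets import load_dataset

# "<repo-id>" is a placeholder for this dataset's Hub path.
ds = load_dataset("<repo-id>")

print(ds)                                # splits: "train" (5,778 rows) and "test" (1,683 rows)
print(ds["train"][0]["transcription"])   # Khmer text
print(ds["train"][0]["audio"])           # dict with "array" and "sampling_rate" (16,000 Hz)
```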
Several processing steps were applied:
1. Clean up [seanghay/khmer_mpwt_speech](https://huggingface.co/datasets/seanghay/khmer_mpwt_speech): manually correct wrong transcriptions across 2,058 rows.
2. Normalize transcriptions: remove invisible whitespace; expand `ៗ`, numbers, currencies, and dates into Khmer text; and separate each word with a space.
3. Filter out texts longer than 448 token ids: encode each transcription with the Whisper-Small tokenizer and drop sequences longer than 448 tokens (see the sketch after this list).
4. Filter out audio clips longer than 30 seconds.
5. Resample audio to 16,000 Hz (16 kHz).
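
For reference, here is a rough sketch of how steps 3–5 could be reproduced with `datasets` and `transformers`. This is an assumption about the kind of code involved, not the exact script used to build the dataset.

```python
from datasets import Audio
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained(
    "openai/whisper-small", language="km", task="transcribe"
)

# Assume `ds` holds the combined dataset with "audio" and "transcription" columns.
def keep(example):
    # Step 3: drop transcriptions whose token ids exceed Whisper's 448-token limit.
    n_tokens = len(tokenizer(example["transcription"]).input_ids)
    # Step 4: drop clips longer than 30 seconds.
    duration = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
    return n_tokens <= 448 and duration <= 30.0

ds = ds.filter(keep)

# Step 5: resample every clip to 16 kHz.
ds = ds.cast_column("audio", Audio(sampling_rate=16000))
```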
__Disclaimer:__ I do not own any of these datasets.