
MDCC: A New Cantonese ASR Dataset

📦 Update [1 Feb, 2024]

The .wav audio data for the dataset is available here:
🔗 Google Drive Link
Note: For research purposes only.


📖 Overview

MDCC (“Multi-Domain Cantonese Corpus”) is a large-scale Cantonese automatic speech recognition (ASR) dataset compiled from multiple domains. It provides:

  • Audio: .wav recordings of spontaneous and read speech
  • Transcript: UTF‑8 plain‑text transcripts
  • Speaker metadata: sex
  • Duration: audio length in seconds

This repo contains metadata files and a conversion script to turn the data into a Hugging Face-compatible dataset.

Unlike ming030890/cantonese_asr_eval_mdcc_long, this repo only keeps audio segments that are longer than 8 seconds.
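For orientation, here is a minimal sketch of what such a conversion can look like with the datasets library. The metadata rows, paths, and column names below are illustrative assumptions, not the actual script shipped in this repo:

from datasets import Dataset, Audio, Features, Value

# Hypothetical metadata rows: (path to .wav file, transcript, speaker sex, duration in seconds)
rows = [
    ("audio/clip_0001.wav", "placeholder transcript 1", "female", 9.2),
    ("audio/clip_0002.wav", "placeholder transcript 2", "male", 11.5),
]

# Schema matching the fields listed in the Overview
features = Features({
    "audio": Audio(sampling_rate=16000),
    "transcript": Value("string"),
    "sex": Value("string"),
    "duration": Value("float32"),
})

ds = Dataset.from_dict(
    {
        "audio": [r[0] for r in rows],       # file paths; decoded lazily by the Audio feature
        "transcript": [r[1] for r in rows],
        "sex": [r[2] for r in rows],
        "duration": [r[3] for r in rows],
    },
    features=features,
)

# Keep only segments longer than 8 seconds, matching this repo's filtering
ds = ds.filter(lambda d: d > 8.0, input_columns="duration")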


📝 Paper & Citation

Tiezheng Yu, Rita Frieske, Peng Xu, Samuel Cahyawijaya, Cheuk Tung Shadow Yiu, Holy Lovenia,
Wenliang Dai, Elham J. Barezi, Qifeng Chen, Xiaojuan Ma, Bertram E. Shi & Pascale Fung
“Automatic Speech Recognition Datasets in Cantonese: A Survey and New Dataset”
📄 arXiv:2201.02419

@misc{yu2022automatic,
  title        = {Automatic Speech Recognition Datasets in Cantonese: A Survey and New Dataset},
  author       = {Tiezheng Yu and Rita Frieske and Peng Xu and Samuel Cahyawijaya and
                  Cheuk Tung Shadow Yiu and Holy Lovenia and Wenliang Dai and
                  Elham J. Barezi and Qifeng Chen and Xiaojuan Ma and
                  Bertram E. Shi and Pascale Fung},
  year         = {2022},
  eprint       = {2201.02419},
  archivePrefix= {arXiv},
  primaryClass = {cs.CL}
}

🚀 How to Load on Hugging Face

from datasets import load_dataset

# Load all available splits from the Hugging Face Hub
ds = load_dataset("ming030890/cantonese_asr_eval_mdcc_long")
print(ds["test"][0])

Example output:

{
  'audio': {
    'path': '/path/to/audio.wav',
    'array': [...],
    'sampling_rate': 16000
  },
  'transcript': '你好,歡迎收聽…',
  'sex': 'female',
  'duration': 3.08
}
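A small usage sketch after loading (the duration check below is illustrative, not part of the dataset card): summing the duration column gives the total amount of test audio, and the audio column yields the decoded waveform directly.

from datasets import load_dataset, Audio

ds = load_dataset("ming030890/cantonese_asr_eval_mdcc_long")

# Total audio duration of the test split, in hours
total_hours = sum(ds["test"]["duration"]) / 3600
print(f"{total_hours:.2f} h of test audio")

# Access the decoded waveform of one example (16 kHz is already the native rate)
ds = ds.cast_column("audio", Audio(sampling_rate=16000))
sample = ds["test"][0]
waveform = sample["audio"]["array"]               # NumPy float array
sampling_rate = sample["audio"]["sampling_rate"]  # 16000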

🔓 License & Access

  1. Review the MDCC_LICENSE file in this repo.
  2. Sign it and send it to [email protected].
  3. Then download the dataset here:
    🔗 Google Drive Folder

✅ Checkpoints

Download pretrained models here:
🔗 Checkpoints Google Drive


⚠️ Disclaimer

I am not the original author of the dataset or the research paper.
This repo only provides a Hugging Face-compatible version of the public MDCC data.

For the original codebase and documentation, refer to:
🔗 https://github.com/HLTCHKUST/cantonese-asr
