
Recitation Segmentation Dataset for Holy Quran Pronunciation Error Detection

This dataset is used to build models that segment Holy Quran recitations at pause points (waqf) with high accuracy. The resulting segments are crucial for tasks such as automatic pronunciation error detection and correction, which leverage the rigorous recitation rules (tajweed) of the Holy Quran.

The dataset was presented in the paper Automatic Pronunciation Error Detection and Correction of the Holy Quran's Learners Using Deep Learning.

This dataset comprises 850+ hours of audio (~300K annotated utterances) and was generated through a 98% automated pipeline, which includes:

  • Collection of recitations from expert reciters.
  • Segmentation at pause points (waqf) using a fine-tuned wav2vec2-BERT model.
  • Transcription of segments.
  • Transcript verification via the novel Tasmeea algorithm.

A novel Quran Phonetic Script (QPS) is employed to encode tajweed rules, setting it apart from the standard IPA used for Modern Standard Arabic. This high-quality annotated data addresses the scarcity of resources for Quranic speech processing and is crucial for developing robust ASR-based approaches to pronunciation error detection and correction.

Data Structure and Features

The dataset is organized to provide audio waveforms along with their segmentation information. Each record includes the following features:

  • aya_name (string): Name of the ayah (verse), e.g. 001001.
  • reciter_name (string): Name of the reciter.
  • recitation_id (int64): Unique identifier for the recitation.
  • url (string): URL to the original audio source.
  • audio (dict): Audio waveform data.
  • duration (float64): Duration of the audio segment in seconds.
  • speech_intervals (list): Timestamps (start and end) of detected speech intervals, in seconds.
  • is_interval_complete (bool): Boolean indicating if the interval represents a complete pause (waqf).

The dataset is organized into multiple configurations based on different reciters, with data files located in paths like data/recitation_0/train/*.parquet.
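
For quick exploration, a single configuration can be loaded with the Hugging Face datasets library. The snippet below is a minimal sketch: the repository ID is a placeholder and the configuration name "recitation_0" is only assumed from the folder layout above, so replace both with the values you actually need.

from datasets import load_dataset

# Minimal sketch, assuming a placeholder repo ID and a configuration named
# after its data folder (e.g. "recitation_0"); adjust both to the real values.
ds = load_dataset(
    "<user>/<this-dataset>",   # hypothetical repo ID; replace with the real one
    name="recitation_0",       # assumed configuration name, mirroring data/recitation_0/
    split="train",
    streaming=True,            # stream instead of downloading 850+ hours of audio
)

for example in ds.take(1):
    print(example["aya_name"], example["reciter_name"], example["duration"])
    print(example["speech_intervals"], example["is_interval_complete"])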

Sample Usage

You can use the recitations-segmenter Python library to process audio files and extract speech intervals.

First, install the necessary Python packages, including recitations-segmenter and transformers. You may also need ffmpeg and libsndfile for audio processing:

conda create -n segment python=3.12
conda activate segment
conda install -c conda-forge ffmpeg libsndfile
pip install recitations-segmenter

Here's an example of how to use the Python API to segment Holy Quran recitations:

from pathlib import Path

from recitations_segmenter import segment_recitations, read_audio, clean_speech_intervals
from transformers import AutoFeatureExtractor, AutoModelForAudioFrameClassification
import torch

if __name__ == '__main__':
    device = torch.device('cuda')
    dtype = torch.bfloat16

    processor = AutoFeatureExtractor.from_pretrained(
        "obadx/recitation-segmenter-v2")
    model = AutoModelForAudioFrameClassification.from_pretrained(
        "obadx/recitation-segmenter-v2",
    )

    model.to(device, dtype=dtype)

    # Change these to the paths of your Holy Quran recitation files
    file_paths = [
        './assets/dussary_002282.mp3',
        './assets/hussary_053001.mp3',
    ]
    waves = [read_audio(p) for p in file_paths]

    # Extract speech intervals, measured in samples at a 16,000 Hz sample rate
    sampled_outputs = segment_recitations(
        waves,
        model,
        processor,
        device=device,
        dtype=dtype,
        batch_size=8,
    )

    for out, path in zip(sampled_outputs, file_paths):
        # Clean the speech intervals by:
        # * merging short silence durations
        # * removing short speech intervals
        # * adding padding to each speech interval
        # Raises:
        # * NoSpeechIntervals: if the waveform is complete silence
        # * TooHighMinSpeechDruation: if `min_speech_duration` is so high that
        #   all speech intervals would be deleted
        clean_out = clean_speech_intervals(
            out.speech_intervals,
            out.is_complete,
            min_silence_duration_ms=30,
            min_speech_duration_ms=30,
            pad_duration_ms=30,
            return_seconds=True,
        )

        print(f'Speech Intervals of: {Path(path).name}: ')
        print(clean_out.clean_speech_intervals)
        print(f'Is Recitation Complete: {clean_out.is_complete}')
        print('-' * 40)
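
With return_seconds=True, clean_out.clean_speech_intervals holds the cleaned [start, end] pairs in seconds, matching the format of the speech_intervals feature described above; without it, the intervals presumably remain in samples at the 16 kHz rate used by segment_recitations.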