# DCASE 5-Class 3-Source Separation 32k

## Dataset Description
This dataset is a collection of 10,000 synthetic audio mixtures designed for the task of audio source separation.
Each audio file is a 10-second, 32kHz mixture containing 3 distinct audio sources from a pool of 5 selected classes. The mixtures were generated with a random Signal-to-Noise Ratio (SNR) between 5 and 20 dB.
This dataset is ideal for training and evaluating models that aim to separate a mixed audio signal into its constituent sources.
The 5 selected source classes are:
- Speech
- FootSteps
- Doorbell
- Dishes
- AlarmClock
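At 32 kHz and 10 seconds, every mixture is exactly 320,000 samples long. The sketch below shows one way to inspect a single mixture, assuming the repository has been downloaded locally (for example with `huggingface_hub.snapshot_download`) and that the file layout matches the metadata example further down this card; the specific file name is taken from that example and may differ in your copy.

```python
import soundfile as sf
from huggingface_hub import snapshot_download

# Download a local copy of the dataset repository.
root = snapshot_download(
    repo_id="Kiuyha/dcase-5class-3source-mixtures-32k",
    repo_type="dataset",
)

# Assumption: mixtures are stored as mixtures/<split>/<mixture_id>.wav,
# as in the metadata example below.
audio, sr = sf.read(f"{root}/mixtures/train/mixture_000001.wav")

assert sr == 32000                   # 32 kHz sampling rate
assert len(audio) == int(10.0 * sr)  # 10 s -> 320,000 samples
print(audio.shape, sr)
```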
## Dataset Generation

The dataset was generated using the following Python configuration, which serves as a fully reproducible recipe for the data.
```python
SELECTED_CLASSES = [
    "Speech",
    "FootSteps",
    "Doorbell",
    "Dishes",
    "AlarmClock"
]

N_MIXTURES = 10_000
N_SOURCES = 3
DURATION = 10.0
SR = 32000
SNR_RANGE = [5, 20]
TARGET_PEAK = 0.95
MIN_GAIN = 3.0

SPLIT_DATA = {
    'train': {
        'source_event_dir': 'test/oracle_target',
        'source_noise_dir': 'noise/train',
        'split_noise': False,
        'portion': 0.70
    },
    'valid': {
        'source_event_dir': 'sound_event/train',
        'source_noise_dir': 'noise/valid',
        'split_noise': True,
        'noise_portion': 0.50,
        'portion': 0.15
    },
    'test': {
        'source_event_dirs': ['test/oracle_target', 'sound_event/valid'],
        'source_noise_dir': 'noise/valid',
        'split_noise': True,
        'noise_portion': 0.50,
        'portion': 0.15
    }
}
```
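The actual generation script lives in the Colab notebook linked below. The following is only a simplified sketch of how the main parameters interact: one foreground event is scaled to a random SNR against the background and the sum is peak-normalized to `TARGET_PEAK`. The function name and the dummy signals are illustrative, not the real generator's API.

```python
import numpy as np

SR = 32000
DURATION = 10.0
SNR_RANGE = [5, 20]
TARGET_PEAK = 0.95

rng = np.random.default_rng(0)

def mix_event_into_background(event: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Illustrative sketch: scale an event to a random SNR against the
    background, add it at the start, then peak-normalize the mixture."""
    snr_db = rng.uniform(*SNR_RANGE)

    # Choose a gain so that 10*log10(P_event / P_background) == snr_db.
    p_event = np.mean(event ** 2) + 1e-12
    p_background = np.mean(background ** 2) + 1e-12
    gain = np.sqrt(p_background / p_event * 10 ** (snr_db / 10))

    mixture = background.copy()
    mixture[: len(event)] += gain * event

    # Peak-normalize to TARGET_PEAK; these two values correspond to the
    # normalization_gain and original_peak fields in the metadata.
    original_peak = np.max(np.abs(mixture))
    normalization_gain = TARGET_PEAK / original_peak
    return mixture * normalization_gain

background = rng.standard_normal(int(SR * DURATION)) * 0.01
event = rng.standard_normal(SR * 2) * 0.1  # 2-second dummy event
mixture = mix_event_into_background(event, background)
```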
## Data Splits

The dataset is split into train, valid, and test sets as defined in the generation config.

| Split | Portion | Number of Mixtures |
|---|---|---|
| train | 70% | 7,000 |
| valid | 15% | 1,500 |
| test | 15% | 1,500 |
| Total | 100% | 10,000 |
## Data Fields

This dataset is built on a central metadata file (`metadata/mixtures_metadata.json`) which contains an entry for each generated mixture.
A single entry in the metadata has the following structure:
```json
{
  "mixture_id": "mixture_000001",
  "mixture_path": "mixtures/train/mixture_000001.wav",
  "split": "train",
  "config": {
    "duration": 10.0,
    "sr": 32000,
    "max_event_overlap": 3,
    "ref_channel": 0
  },
  "fg_events": [
    {
      "label": "Speech",
      "source_file": "dcase_source_files/speech_001.wav",
      "source_time": 0.0,
      "event_time": 1.234567,
      "event_duration": 2.500000,
      "snr": 15.678901,
      "role": "foreground"
    },
    {
      "label": "Doorbell",
      "source_file": "dcase_source_files/doorbell_002.wav",
      "source_time": 0.0,
      "event_time": 4.500000,
      "event_duration": 1.800000,
      "snr": 10.123456,
      "role": "foreground"
    }
  ],
  "bg_events": [
    {
      "label": null,
      "source_file": "dcase_noise_files/ambient_noise_001.wav",
      "source_time": 0.0,
      "event_time": 0.0,
      "event_duration": 10.0,
      "snr": 0.0,
      "role": "background"
    }
  ],
  "int_events": [],
  "normalization_gain": 0.85,
  "original_peak": 1.123
}
```
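A minimal sketch for working with this metadata file, assuming the repository is available locally at `root` and that the file holds a list of entries like the one above (the exact top-level structure is an assumption):

```python
import json
from collections import Counter

# Assumption: `root` points at a local copy of the dataset repository.
root = "."

with open(f"{root}/metadata/mixtures_metadata.json") as f:
    metadata = json.load(f)

# Count mixtures per split; per the table above this should give
# {'train': 7000, 'valid': 1500, 'test': 1500}.
split_counts = Counter(entry["split"] for entry in metadata)
print(split_counts)

# Pair one training mixture with its foreground labels.
train_entries = [e for e in metadata if e["split"] == "train"]
first = train_entries[0]
print(first["mixture_path"], [ev["label"] for ev in first["fg_events"]])
```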
### Field Descriptions

- `mixture_id`: A unique identifier for the mixture.
- `mixture_path`: The relative path to the generated mixture `.wav` file.
- `split`: The data split this mixture belongs to (`train`, `valid`, or `test`).
- `config`: An object containing the main generation parameters for this file.
- `fg_events`: A list of "foreground" sound event objects. Each object contains:
  - `label`: The class of the event (e.g., "Speech", "Doorbell").
  - `source_file`: The relative path to the original clean audio file used.
  - `event_time`: The onset time (in seconds) of the event in the mixture.
  - `event_duration`: The duration (in seconds) of the event.
  - `snr`: The target Signal-to-Noise Ratio (in dB) of this event against the background.
  - `role`: Always "foreground".
- `bg_events`: A list of "background" noise objects (usually one). It has the same structure as `fg_events`, but the `label` is `null` and `snr` is `0.0`.
- `int_events`: A list for "interfering" events (unused in this config, so it is `[]`).
- `normalization_gain`: The gain (e.g., `0.85`) applied to the final mixture to reach the `TARGET_PEAK`.
- `original_peak`: The peak amplitude of the mixture before normalization.
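Because each foreground event carries `event_time` and `event_duration`, the metadata can be turned into frame-level activity targets for training. The helper below is a minimal sketch; the frame rate and function name are illustrative choices, not part of the dataset.

```python
import numpy as np

SELECTED_CLASSES = ["Speech", "FootSteps", "Doorbell", "Dishes", "AlarmClock"]
DURATION = 10.0
FRAME_RATE = 50  # illustrative: 50 target frames per second

def activity_matrix(entry: dict) -> np.ndarray:
    """Build a (num_classes, num_frames) 0/1 activity matrix from fg_events."""
    num_frames = int(DURATION * FRAME_RATE)
    target = np.zeros((len(SELECTED_CLASSES), num_frames), dtype=np.float32)
    for event in entry["fg_events"]:
        cls = SELECTED_CLASSES.index(event["label"])
        start = int(event["event_time"] * FRAME_RATE)
        end = int((event["event_time"] + event["event_duration"]) * FRAME_RATE)
        target[cls, start:min(end, num_frames)] = 1.0
    return target
```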
## Intended Use
This dataset is primarily intended for training and evaluating audio source separation models, particularly those that can handle:
- 3-source separation
- 32kHz sampling rate
- SNRs in the 5-20 dB range
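For scoring separated sources against the clean references, scale-invariant SDR (SI-SDR) is a common metric. The function below is a minimal NumPy sketch, not tied to this dataset's tooling or any particular evaluation toolkit.

```python
import numpy as np

def si_sdr(estimate: np.ndarray, reference: np.ndarray, eps: float = 1e-8) -> float:
    """Scale-invariant SDR in dB between an estimated and a reference source."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # Project the estimate onto the reference to get the "target" component.
    scale = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = scale * reference
    noise = estimate - target
    return 10 * np.log10((np.sum(target ** 2) + eps) / (np.sum(noise ** 2) + eps))
```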
## Generate Your Own Dataset
You can run the same script in Google Colab to create your own custom version with different configurations.
- Change the number of mixtures
- Select different classes
- Change the number of active events per mixture
Click the badge below to open the generator notebook directly in Google Colab:
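For example, a custom run might only change the values at the top of the notebook's config cell; the overrides below are purely illustrative, and everything else would stay as in the recipe above.

```python
# Illustrative overrides for a smaller, 2-source variant of the dataset.
SELECTED_CLASSES = ["Speech", "Doorbell", "AlarmClock"]
N_MIXTURES = 1_000
N_SOURCES = 2
```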
## Citation

### Citing the Original DCASE Data
```bibtex
@dataset{yasuda_masahiro_2025_15117227,
  author    = {Yasuda, Masahiro and
               Nguyen, Binh Thien and
               Harada, Noboru and
               Takeuchi, Daiki},
  title     = {{DCASE2025Task4Dataset: The Dataset for Spatial
                Semantic Segmentation of Sound Scenes}},
  month     = apr,
  year      = 2025,
  publisher = {Zenodo},
  version   = {1.0.0},
  doi       = {10.5281/zenodo.15117227},
  url       = {https://doi.org/10.5281/zenodo.15117227}
}
```
### Citing this Dataset
If you use this specific dataset generation recipe, please cite it as:
```bibtex
@misc{Kiuyha2025dcase5class3source,
  title        = {DCASE 3-Source Separation 32k Dataset},
  author       = {Kiuyha},
  year         = {2025},
  url          = {https://huggingface.co/datasets/Kiuyha/dcase-5class-3source-mixtures-32k},
  howpublished = {Hugging Face Datasets}
}
```
## License

The original DCASE source data has its own license. Please refer to the official DCASE website for details.

This derived dataset (the mixture 'recipe' and generated files) is made available under the MIT License.