---
title: DCASE 5-Class 3-Source Separation 32k
license: mit
tags:
  - audio
  - dcase
  - audio-source-separation
  - 32k
  - dcase-derived
language:
  - en
task_categories:
  - audio-to-audio
size_categories:
  - 10K<n<100K
configs:
  - config_name: default
    data_files:
      - split: train
        path:
          - metadata/train_metadata.jsonl
          - mixtures/train/*
          - noise/train/*
          - sound_event/train/*
      - split: valid
        path:
          - metadata/valid_metadata.jsonl
          - mixtures/valid/*
          - noise/valid/*
          - sound_event/valid/*
      - split: test
        path:
          - metadata/test_metadata.jsonl
          - mixtures/test/*
          - noise/test/*
          - sound_event/test/*
---

# DCASE 5-Class 3-Source Separation 32k

## Dataset Description

This dataset is a collection of 10,000 synthetic audio mixtures designed for the task of audio source separation.

Each audio file is a 10-second, 32 kHz mixture containing 3 distinct audio sources drawn from a pool of 5 selected classes, mixed over a background noise bed. Foreground events are placed at a random Signal-to-Noise Ratio (SNR) between 5 and 20 dB relative to the background.

This dataset is well suited to training and evaluating models that separate a mixed audio signal into its constituent sources.

The 5 selected source classes are:

- Speech
- FootSteps
- Doorbell
- Dishes
- AlarmClock
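
To work with the files directly, the full repository can be fetched with `huggingface_hub`. This is a minimal sketch, not an official loader; the repository id is taken from the citation section of this card.

```python
# Minimal download sketch (assumption: repo id as given in the citation below).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Kiuyha/dcase-5class-3source-mixtures-32k",
    repo_type="dataset",
)
print("Dataset downloaded to:", local_dir)
```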

## Dataset Generation

The dataset was generated with the following Python configuration, which, together with the generator notebook linked below, serves as the recipe for reproducing the data.

```python
SELECTED_CLASSES = [
  "Speech",
  "FootSteps",
  "Doorbell",
  "Dishes",
  "AlarmClock"
]

N_MIXTURES = 10_000   # total number of generated mixtures
N_SOURCES = 3         # foreground sources per mixture
DURATION = 10.0       # mixture length in seconds
SR = 32000            # sample rate in Hz
SNR_RANGE = [5, 20]   # per-event SNR range in dB
TARGET_PEAK = 0.95    # peak amplitude of the normalized mixture
MIN_GAIN = 3.0

SPLIT_DATA = {
  'train': {
    'source_event_dir': 'test/oracle_target',
    'source_noise_dir': 'noise/train',
    'split_noise': False,
    'portion': 0.70
  },
  'valid': {
    'source_event_dir': 'sound_event/train',
    'source_noise_dir': 'noise/valid',
    'split_noise': True,
    'noise_portion': 0.50,
    'portion': 0.15
  },
  'test': {
    'source_event_dirs': ['test/oracle_target', 'sound_event/valid'],
    'source_noise_dir': 'noise/valid',
    'split_noise': True,
    'noise_portion': 0.50,
    'portion': 0.15
  }
}
```
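
The original generation script is not reproduced here. As an illustration of what these parameters control, the sketch below assembles a single mixture in the same spirit: each foreground event is scaled to a random SNR against the noise bed, added at a random onset, and the sum is peak-normalized to `TARGET_PEAK`. The function and variable names are assumptions for illustration, not the generator's actual code.

```python
# Illustrative sketch only (not the original generation script).
import numpy as np

rng = np.random.default_rng()

def mix_one(noise: np.ndarray, events: list[np.ndarray],
            snr_range=(5, 20), target_peak=0.95):
    """Return (mixture, normalization_gain, original_peak) for one mixture."""
    mixture = noise.astype(np.float64)
    noise_rms = np.sqrt(np.mean(mixture ** 2)) + 1e-12
    for event in events:
        event = event[: len(mixture)]               # keep events inside the clip
        snr_db = rng.uniform(*snr_range)            # random per-event SNR in dB
        event_rms = np.sqrt(np.mean(event ** 2)) + 1e-12
        gain = (noise_rms / event_rms) * 10.0 ** (snr_db / 20.0)
        onset = rng.integers(0, max(1, len(mixture) - len(event) + 1))
        mixture[onset:onset + len(event)] += gain * event
    original_peak = float(np.max(np.abs(mixture)))
    normalization_gain = target_peak / (original_peak + 1e-12)
    return mixture * normalization_gain, normalization_gain, original_peak
```

The returned `normalization_gain` and `original_peak` correspond to the metadata fields of the same names described below.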

## Data Splits

The dataset is split into train, valid, and test sets as defined in the generation config.

| Split | Portion | Number of Mixtures |
|-------|---------|--------------------|
| train | 70%     | 7,000              |
| valid | 15%     | 1,500              |
| test  | 15%     | 1,500              |
| **Total** | 100% | 10,000            |

## Data Fields

This dataset is built around per-split metadata files (`metadata/train_metadata.jsonl`, `metadata/valid_metadata.jsonl`, and `metadata/test_metadata.jsonl`, as listed in the configuration above), which contain one entry for each generated mixture.

A single entry in the metadata has the following structure:

```json
{
  "mixture_id": "mixture_000001",
  "mixture_path": "mixtures/train/mixture_000001.wav",
  "split": "train",
  "config": {
    "duration": 10.0,
    "sr": 32000,
    "max_event_overlap": 3,
    "ref_channel": 0
  },
  "fg_events": [
    {
      "label": "Speech",
      "source_file": "dcase_source_files/speech_001.wav",
      "source_time": 0.0,
      "event_time": 1.234567,
      "event_duration": 2.500000,
      "snr": 15.678901,
      "role": "foreground"
    },
    {
      "label": "Doorbell",
      "source_file": "dcase_source_files/doorbell_002.wav",
      "source_time": 0.0,
      "event_time": 4.500000,
      "event_duration": 1.800000,
      "snr": 10.123456,
      "role": "foreground"
    }
  ],
  "bg_events": [
    {
      "label": null,
      "source_file": "dcase_noise_files/ambient_noise_001.wav",
      "source_time": 0.0,
      "event_time": 0.0,
      "event_duration": 10.0,
      "snr": 0.0,
      "role": "background"
    }
  ],
  "int_events": [],
  "normalization_gain": 0.85,
  "original_peak": 1.123
}
```

### Field Descriptions

- `mixture_id`: A unique identifier for the mixture.
- `mixture_path`: The relative path to the generated mixture `.wav` file.
- `split`: The data split this mixture belongs to (`train`, `valid`, or `test`).
- `config`: An object containing the main generation parameters for this file.
- `fg_events`: A list of "foreground" sound event objects. Each object contains:
  - `label`: The class of the event (e.g., `"Speech"`, `"Doorbell"`).
  - `source_file`: The relative path to the original clean audio file used.
  - `source_time`: The start time (in seconds) within the source file from which the event audio was taken.
  - `event_time`: The onset time (in seconds) of the event in the mixture.
  - `event_duration`: The duration (in seconds) of the event.
  - `snr`: The target Signal-to-Noise Ratio (in dB) of this event against the background.
  - `role`: Always `"foreground"`.
- `bg_events`: A list of "background" noise objects (usually one). Entries have the same structure as `fg_events`, but `label` is `null` and `snr` is `0.0`.
- `int_events`: A list of "interfering" events (unused in this configuration, so it is always `[]`).
- `normalization_gain`: The gain (e.g., `0.85`) applied to the final mixture to reach `TARGET_PEAK`.
- `original_peak`: The peak amplitude of the mixture before normalization.
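
As a quick usage check, the sketch below reads the first entry of the training metadata and loads the corresponding mixture. It assumes the dataset root is the current working directory, that the per-split file is `metadata/train_metadata.jsonl` as listed in the configuration above, and that the `soundfile` package is installed.

```python
# Sketch: read one metadata entry and load its mixture audio.
# Assumes the dataset root is the current working directory.
import json
import soundfile as sf  # pip install soundfile

with open("metadata/train_metadata.jsonl", "r", encoding="utf-8") as f:
    entry = json.loads(next(f))  # first mixture entry (one JSON object per line)

audio, sr = sf.read(entry["mixture_path"])
assert sr == entry["config"]["sr"]  # expected: 32000

for event in entry["fg_events"]:
    print(f'{event["label"]}: onset {event["event_time"]:.2f} s, '
          f'duration {event["event_duration"]:.2f} s, SNR {event["snr"]:.1f} dB')
```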

## Intended Use

This dataset is primarily intended for training and evaluating audio source separation models, particularly those that can handle:

- 3-source separation
- 32 kHz sampling rate
- SNRs in the 5-20 dB range
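
For evaluation on this data, separation quality is commonly reported with scale-invariant SNR (SI-SNR). The function below is a minimal NumPy sketch of that metric, included for illustration; it is not part of the dataset tooling.

```python
# Sketch: scale-invariant SNR (SI-SNR) in dB between an estimated source and
# its clean reference. Illustrative only, not part of the dataset tooling.
import numpy as np

def si_snr(estimate: np.ndarray, reference: np.ndarray, eps: float = 1e-8) -> float:
    estimate = estimate - np.mean(estimate)
    reference = reference - np.mean(reference)
    # Project the estimate onto the reference (scale-invariant target).
    scale = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = scale * reference
    residual = estimate - target
    return 10.0 * np.log10((np.sum(target ** 2) + eps) / (np.sum(residual ** 2) + eps))
```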

## Generate Your Own Dataset

You can run the same script in Google Colab to create your own custom version with different configurations.

- Change the number of mixtures
- Select different classes
- Change the number of active events per mixture

Click the badge below to open the generator notebook directly in Google Colab: Open In Colab
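
For example, the configuration values shown earlier could be overridden before re-running the notebook. The values below are illustrative only; class names must match labels available in the DCASE source material.

```python
# Illustrative override of the generation configuration (values are examples).
SELECTED_CLASSES = ["Speech", "Dishes", "AlarmClock"]  # smaller class pool
N_MIXTURES = 2_000                                     # fewer mixtures
N_SOURCES = 2                                          # fewer active events per mixture
```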

## Citation

### Citing the Original DCASE Data

```bibtex
@dataset{yasuda_masahiro_2025_15117227,
  author       = {Yasuda, Masahiro and
                  Nguyen, Binh Thien and
                  Harada, Noboru and
                  Takeuchi, Daiki},
  title        = {{DCASE2025Task4Dataset: The Dataset for Spatial 
                   Semantic Segmentation of Sound Scenes}},
  month        = apr,
  year         = 2025,
  publisher    = {Zenodo},
  version      = {1.0.0},
  doi          = {10.5281/zenodo.15117227},
  url          = {https://doi.org/10.5281/zenodo.15117227}
}
```

### Citing this Dataset

If you use this specific dataset generation recipe, please cite it as:

```bibtex
@misc{Kiuyha2025dcase5class3source,
  title        = {DCASE 5-Class 3-Source Separation 32k Dataset},
  author       = {Kiuyha},
  year         = {2025},
  url          = {https://huggingface.co/datasets/Kiuyha/dcase-5class-3source-mixtures-32k},
  howpublished = {Hugging Face Datasets}
}
```

## License

The original DCASE source data has its own license. Please refer to the official DCASE website for details.

This derived dataset (the mixture recipe and the generated files) is made available under the MIT License.