Kiuyha committed on
Commit ffd22fe · verified · 1 Parent(s): 91c290d

Update README.md

Files changed (1): README.md (+230, -3)
---
title: DCASE 5-Class 3-Source Separation 32k
license: mit
tags:
- audio
- dcase
- audio-source-separation
- 32k
- dcase-derived
language:
- en
task_categories:
- audio-to-audio
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - metadata/train_metadata.jsonl
    - mixtures/train/*
    - noise/train/*
    - sound_event/train/*
  - split: valid
    path:
    - metadata/valid_metadata.jsonl
    - mixtures/valid/*
    - noise/valid/*
    - sound_event/valid/*
  - split: test
    path:
    - metadata/test_metadata.jsonl
    - mixtures/test/*
    - noise/test/*
    - sound_event/test/*
---

# DCASE 5-Class 3-Source Separation 32k

## Dataset Description

This dataset is a collection of 10,000 synthetic audio mixtures designed for the task of **audio source separation**.

Each audio file is a 10-second, **32kHz** mixture containing **3 distinct audio sources** drawn from a pool of **5 selected classes**. The mixtures were generated with a random Signal-to-Noise Ratio (SNR) between 5 and 20 dB.

This dataset is well suited for training and evaluating models that separate a mixed audio signal into its constituent sources.

The 5 selected source classes are:
* Speech
* FootSteps
* Doorbell
* Dishes
* AlarmClock

## Dataset Generation

The dataset was generated using the following Python configuration, which provides a reproducible recipe for the data.

```python
SELECTED_CLASSES = [
    "Speech",
    "FootSteps",
    "Doorbell",
    "Dishes",
    "AlarmClock"
]

N_MIXTURES = 10_000
N_SOURCES = 3
DURATION = 10.0
SR = 32000
SNR_RANGE = [5, 20]
TARGET_PEAK = 0.95
MIN_GAIN = 3.0

SPLIT_DATA = {
    'train': {
        'source_event_dir': 'test/oracle_target',
        'source_noise_dir': 'noise/train',
        'split_noise': False,
        'portion': 0.70
    },
    'valid': {
        'source_event_dir': 'sound_event/train',
        'source_noise_dir': 'noise/valid',
        'split_noise': True,
        'noise_portion': 0.50,
        'portion': 0.15
    },
    'test': {
        'source_event_dirs': ['test/oracle_target', 'sound_event/valid'],
        'source_noise_dir': 'noise/valid',
        'split_noise': True,
        'noise_portion': 0.50,
        'portion': 0.15
    }
}
```
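To illustrate how `SNR_RANGE` and `TARGET_PEAK` interact during mixing, here is a minimal sketch (not the actual generation script; the function names are hypothetical): each foreground event is scaled to a target SNR against the background, and the summed mixture is then peak-normalized.

```python
import numpy as np

def scale_to_snr(event, background, snr_db):
    """Scale `event` so its average power sits `snr_db` dB above the background's."""
    p_event = np.mean(event ** 2)
    p_bg = np.mean(background ** 2)
    gain = np.sqrt(p_bg / p_event * 10 ** (snr_db / 10))
    return event * gain

def peak_normalize(mixture, target_peak=0.95):
    """Rescale the mixture so its absolute peak equals `target_peak`."""
    original_peak = np.max(np.abs(mixture))
    gain = target_peak / original_peak
    return mixture * gain, gain, original_peak

# Toy signals standing in for a noise bed and a sound event.
rng = np.random.default_rng(0)
background = rng.standard_normal(32000 * 10) * 0.1
event = rng.standard_normal(32000 * 10)

snr = rng.uniform(5, 20)                       # SNR_RANGE = [5, 20]
mixture = background + scale_to_snr(event, background, snr)
mixture, gain, peak = peak_normalize(mixture)  # TARGET_PEAK = 0.95
```

The `gain` and `peak` values returned by the normalization step correspond to the `normalization_gain` and `original_peak` metadata fields described below.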

## Data Splits

The dataset is split into `train`, `valid`, and `test` sets as defined in the generation config.

| Split | Portion | Number of Mixtures |
| :--- | :--- | :--- |
| `train` | 70% | 7,000 |
| `valid` | 15% | 1,500 |
| `test` | 15% | 1,500 |
| **Total** | **100%** | **10,000** |

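The counts in the table follow directly from `N_MIXTURES` and the `portion` values in `SPLIT_DATA`, as a quick sanity check shows:

```python
N_MIXTURES = 10_000
portions = {"train": 0.70, "valid": 0.15, "test": 0.15}

# Portions cover the whole dataset (allow for float rounding).
assert abs(sum(portions.values()) - 1.0) < 1e-9

counts = {split: round(N_MIXTURES * p) for split, p in portions.items()}
# counts == {"train": 7000, "valid": 1500, "test": 1500}
```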
## Data Fields

This dataset is built on a central metadata file (`metadata/mixtures_metadata.json`) which contains an entry for each generated mixture.

A single entry in the metadata has the following structure:

```json
{
  "mixture_id": "mixture_000001",
  "mixture_path": "mixtures/train/mixture_000001.wav",
  "split": "train",
  "config": {
    "duration": 10.0,
    "sr": 32000,
    "max_event_overlap": 3,
    "ref_channel": 0
  },
  "fg_events": [
    {
      "label": "Speech",
      "source_file": "dcase_source_files/speech_001.wav",
      "source_time": 0.0,
      "event_time": 1.234567,
      "event_duration": 2.500000,
      "snr": 15.678901,
      "role": "foreground"
    },
    {
      "label": "Doorbell",
      "source_file": "dcase_source_files/doorbell_002.wav",
      "source_time": 0.0,
      "event_time": 4.500000,
      "event_duration": 1.800000,
      "snr": 10.123456,
      "role": "foreground"
    }
  ],
  "bg_events": [
    {
      "label": null,
      "source_file": "dcase_noise_files/ambient_noise_001.wav",
      "source_time": 0.0,
      "event_time": 0.0,
      "event_duration": 10.0,
      "snr": 0.0,
      "role": "background"
    }
  ],
  "int_events": [],
  "normalization_gain": 0.85,
  "original_peak": 1.123
}
```

### Field Descriptions

* **`mixture_id`**: A unique identifier for the mixture.
* **`mixture_path`**: The relative path to the generated mixture `.wav` file.
* **`split`**: The data split this mixture belongs to (`train`, `valid`, or `test`).
* **`config`**: An object containing the main generation parameters for this file.
* **`fg_events`**: A list of "foreground" sound event objects. Each object contains:
    * **`label`**: The class of the event (e.g., "Speech", "Doorbell").
    * **`source_file`**: The relative path to the original clean audio file used.
    * **`event_time`**: The onset time (in seconds) of the event in the mixture.
    * **`event_duration`**: The duration (in seconds) of the event.
    * **`snr`**: The target Signal-to-Noise Ratio (in dB) of this event against the background.
    * **`role`**: Always "foreground".
* **`bg_events`**: A list of "background" noise objects (usually one). Each has the same structure as an `fg_events` entry, but its `label` is `null` and its `snr` is `0.0`.
* **`int_events`**: A list of "interfering" events (unused in this config, so it is always `[]`).
* **`normalization_gain`**: The gain (e.g., `0.85`) applied to the final mixture to reach the `TARGET_PEAK`.
* **`original_peak`**: The peak amplitude of the mixture *before* normalization.
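As a sketch of how these fields might be consumed (the helper below is hypothetical, assuming only the JSON layout shown above), a metadata entry can be parsed to recover each event's onset/offset span:

```python
import json

def event_spans(entry):
    """Return (label, onset, offset, snr) tuples for each foreground event."""
    return [
        (ev["label"], ev["event_time"], ev["event_time"] + ev["event_duration"], ev["snr"])
        for ev in entry["fg_events"]
    ]

# A trimmed-down entry matching the structure documented above.
entry = json.loads("""
{
  "mixture_id": "mixture_000001",
  "fg_events": [
    {"label": "Speech", "event_time": 1.2, "event_duration": 2.5, "snr": 15.7, "role": "foreground"},
    {"label": "Doorbell", "event_time": 4.5, "event_duration": 1.8, "snr": 10.1, "role": "foreground"}
  ]
}
""")

spans = event_spans(entry)  # offsets are computed as onset + duration
```

Such spans can serve directly as separation targets or as weak labels for event detection.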

## Intended Use

This dataset is primarily intended for training and evaluating audio source separation models, particularly those that can handle:

* 3-source separation
* 32kHz sampling rate
* SNRs in the 5-20 dB range

## Citation

### Citing the Original DCASE Data

```bibtex
@dataset{yasuda_masahiro_2025_15117227,
  author    = {Yasuda, Masahiro and
               Nguyen, Binh Thien and
               Harada, Noboru and
               Takeuchi, Daiki},
  title     = {{DCASE2025Task4Dataset: The Dataset for Spatial
                Semantic Segmentation of Sound Scenes}},
  month     = apr,
  year      = 2025,
  publisher = {Zenodo},
  version   = {1.0.0},
  doi       = {10.5281/zenodo.15117227},
  url       = {https://doi.org/10.5281/zenodo.15117227}
}
```

### Citing this Dataset

If you use this specific dataset generation recipe, please cite it as:

```bibtex
@misc{Kiuyha2025dcase5class3source,
  title        = {DCASE 5-Class 3-Source Separation 32k Dataset},
  author       = {Kiuyha},
  year         = {2025},
  url          = {https://huggingface.co/datasets/Kiuyha/dcase-5class-3source-mixtures-32k},
  howpublished = {Hugging Face Datasets}
}
```

## License

The original DCASE source data is distributed under its own license; please refer to the official DCASE website for details.

This derived dataset (the mixture recipe and the generated files) is made available under the [MIT License](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/mit.md).