---
license: cc-by-nc-4.0
task_categories:
- audio-classification
language:
- en
- da
tags:
- music
pretty_name: Instrument_Classification
size_categories:
- n<1K
---

# Dataset Card for Instrument_Classification

This dataset is a collection of audio clips used for training machine learning models on instrument-based sound classification.


## Dataset Details

### Dataset Description



- **Curated by:** AlfredVThor
- **Funded by:** [More Information Needed]
- **Shared by:** AlfredVThor
- **Language(s):** English, Danish
- **License:** cc-by-nc-4.0

## Uses
This dataset can be used for training sound classification models on basic instruments.

### Direct Use

The direct use of this dataset is to train sound classification models on instrument recordings, for example to help people with hearing impairment identify instruments.


### Out-of-Scope Use

This dataset will not be useful for anything other than sound-based models.


## Dataset Structure

| Class   | Train / Test split | Duration (train / test) |
|---------|--------------------|-------------------------|
| Clap    | 89% / 11%          | 5m 10s / 38s            |
| Guitar  | 75% / 25%          | 42s / 14s               |
| Kling   | 80% / 20%          | 2m 0s / 30s             |
| Maracas | 75% / 25%          | 3m 31s / 1m 11s         |
| Noise   | 90% / 10%          | 8m 54s / 1m 0s          |
| Snap    | 83% / 17%          | 4m 21s / 52s            |
| Whistle | 85% / 15%          | 1m 51s / 19s            |
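The split percentages above can be cross-checked against the listed durations. A minimal sketch, assuming durations are written in this card's `Xm Ys` notation (the `to_seconds` and `split_pct` helpers are hypothetical, written just for this check):

```python
def to_seconds(dur: str) -> int:
    """Parse a duration written as e.g. '5m 10s', '42s', or '2m 0s'."""
    total = 0
    for part in dur.split():
        if part.endswith("m"):
            total += int(part[:-1]) * 60
        elif part.endswith("s"):
            total += int(part[:-1])
    return total


def split_pct(train: str, test: str) -> tuple[int, int]:
    """Return the (train, test) percentages implied by the two durations."""
    tr, te = to_seconds(train), to_seconds(test)
    train_pct = round(100 * tr / (tr + te))
    return train_pct, 100 - train_pct


# Clap row: 5m 10s of training audio vs 38s of test audio
print(split_pct("5m 10s", "38s"))  # (89, 11)
```

Running this over each row reproduces the percentages in the table, which confirms the splits were computed on total audio duration rather than clip counts.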

## Dataset Creation

### Curation Rationale

This dataset is aimed at training models that help people with impaired hearing identify instruments by sound in a public space.

### Source Data

.wav audio files.
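Since the source data is plain .wav, individual clips can be inspected with Python's standard-library `wave` module. A minimal sketch using an in-memory synthetic clip as a stand-in for a real recording (the 16 kHz / 16-bit parameters are assumptions, not the dataset's documented format):

```python
import io
import struct
import wave

# Build a short synthetic mono clip in memory as a stand-in for one of
# the dataset's recordings (hypothetical parameters: 16 kHz, 16-bit PCM).
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)         # 16-bit samples
    w.setframerate(16000)
    w.writeframes(struct.pack("<1600h", *([0] * 1600)))  # 0.1 s of silence

# Re-open the buffer and read back the clip's basic properties.
buf.seek(0)
with wave.open(buf, "rb") as w:
    duration = w.getnframes() / w.getframerate()
    channels = w.getnchannels()
print(duration, channels)  # 0.1 1
```

The same `wave.open` call works on file paths, so replacing `buf` with a path to one of the dataset's clips reports its real duration and channel count.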

#### Data Collection and Processing

Data was collected with a mobile microphone to best simulate the intended use case. The data was cleaned to minimise high-volume background noise.

#### Who are the source data producers?

The data was collected by students on the Medialogy Master's programme at AAU Copenhagen.

#### Annotation process

The annotation process included data cleaning: removing clips that were unclear or duplicated.

#### Who are the annotators?

Students on the Medialogy Master's programme at AAU Copenhagen.

#### Personal and Sensitive Information

This dataset does not include private or sensitive data.

## Bias, Risks, and Limitations

The dataset cannot be used for anything that is not sound-based. Given its small size (under 1K clips), additional data is recommended for larger-scale models.


### Recommendations

Additional data is recommended for larger-scale models.

**BibTeX:**
@misc{huggingfaceAthorl22MLME_Instrument_ClassificationMain,
	author = {Athorl22},
	title = {Athorl22/MLME\_Instrument\_Classification at main},
	howpublished = {\url{https://huggingface.co/datasets/Athorl22/MLME_Instrument_Classification/tree/main}},
	year = {2025},
	note = {Accessed 04-12-2025},
}

**APA:**
Athorl22. (2025). *MLME_Instrument_Classification* [Data set]. Hugging Face. https://huggingface.co/datasets/Athorl22/MLME_Instrument_Classification/tree/main

## Dataset Card Contact
GitHub: AlfredVThor