Tigre Speech Corpus
1. Overview
The Tigre Speech Corpus is a curated collection of 18,470 aligned audio–text pairs designed to support research and development in speech technologies for Tigre (tig), an under-resourced South Semitic language spoken primarily in Eritrea. The dataset contains approximately 32 hours of recorded speech contributed by over 100 native speakers.
It reflects a collective effort by Tigre-speaking contributors worldwide, including a significant number of diaspora native speakers who participated in the Mozilla Common Voice project. This corpus is intended to serve as a foundational resource for advancing NLP technologies for the Tigre language and is suitable for tasks such as:
- Automatic Speech Recognition (ASR)
- Forced alignment
- Speech translation
- Speech-to-text pretraining
- Low-resource speech benchmark creation
2. Data Source, Provenance, and Licensing
The data was voluntarily collected via the Mozilla Common Voice platform, with contributions from community members using personal devices (mobile phones and computers). The sentence prompts used for recording originate from the Common Voice sentence bank.
Licensing
The dataset follows Mozilla Common Voice licensing terms:
- Audio recordings (WAV files): CC0 1.0 Universal — Public Domain Dedication
- Text content: CC BY 4.0 — Attribution required
3. Corpus Composition
3.1 Data Pairing Structure
Each item in the corpus consists of:
- A .wav file (audio recording)
- A .txt file (transcript)
Both share the same filename prefix (e.g., clip_00123.wav / clip_00123.txt), ensuring straightforward alignment.
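As a minimal sketch, the pairs can be enumerated with Python's standard library alone; the dataset_root path below is a placeholder for wherever the release archive has been extracted.

```python
from pathlib import Path

# Placeholder path: point this at the extracted release directory.
dataset_root = Path("dataset_root")

pairs = []
for wav_path in sorted(dataset_root.glob("*.wav")):
    # Transcript shares the same filename prefix, with a .txt extension.
    txt_path = wav_path.with_suffix(".txt")
    if txt_path.exists():
        transcript = txt_path.read_text(encoding="utf-8").strip()
        pairs.append((wav_path, transcript))

print(f"Loaded {len(pairs)} audio-text pairs")
```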
3.2 Duration Distribution
The speech duration distribution:
- ~⅓ of audio clips: < 5 seconds
- ~⅓ of audio clips: 5–7 seconds
- ~⅓ of audio clips: 7–41 seconds
This natural variation supports both short- and long-utterance modeling.
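For reference, the distribution above can be re-derived with a short script, assuming the clips are standard PCM WAV files readable by Python's built-in wave module; dataset_root is again a placeholder path.

```python
import wave
from pathlib import Path

dataset_root = Path("dataset_root")  # placeholder path to the extracted corpus

buckets = {"< 5 s": 0, "5-7 s": 0, "> 7 s": 0}
for wav_path in dataset_root.glob("*.wav"):
    with wave.open(str(wav_path), "rb") as wf:
        duration = wf.getnframes() / wf.getframerate()  # seconds
    if duration < 5:
        buckets["< 5 s"] += 1
    elif duration <= 7:
        buckets["5-7 s"] += 1
    else:
        buckets["> 7 s"] += 1

total = sum(buckets.values())
for name, count in buckets.items():
    print(f"{name}: {count} clips ({count / total:.1%})")
```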
4. Dataset Structure and File Format
All files are stored in a flat directory structure inside the compressed release file.
Each audio file has a single matching transcript with the exact same filename prefix.
File Layout
dataset_root/
├── clip_xxxxx.wav
├── clip_xxxxx.txt
├── clip_yyyyy.wav
├── clip_yyyyy.txt
└── ...
- .wav files contain audio recordings.
- .txt files contain transcripts.
- There are no nested folders, which simplifies ingestion by ASR pipelines.
Naming Convention
Filename alignment pattern:
clip_12345.wav / clip_12345.txt
This ensures 1:1 audio–text alignment for all 18,470 entries.
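A quick integrity check of that 1:1 alignment might look like the following sketch (the dataset_root path is a placeholder):

```python
from pathlib import Path

dataset_root = Path("dataset_root")  # placeholder path to the extracted corpus

wav_stems = {p.stem for p in dataset_root.glob("*.wav")}
txt_stems = {p.stem for p in dataset_root.glob("*.txt")}

missing_txt = wav_stems - txt_stems  # audio clips without a transcript
missing_wav = txt_stems - wav_stems  # transcripts without an audio clip

assert not missing_txt, f"Clips missing transcripts, e.g. {sorted(missing_txt)[:5]}"
assert not missing_wav, f"Transcripts missing audio, e.g. {sorted(missing_wav)[:5]}"
print(f"{len(wav_stems)} aligned pairs found (expected 18,470)")
```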
5. Intended Use & Applications
This dataset supports:
- Low-resource ASR development
- End-to-end speech models (wav2vec2, Whisper, MMS); see the preparation sketch after this list
- African language benchmarks
- Cross-lingual representation learning
- Phonetic/linguistic analysis
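As one illustrative, non-prescriptive preparation step for such end-to-end models, the pairs could be wrapped in a Hugging Face datasets.Dataset and decoded at 16 kHz, the sampling rate most wav2vec2, Whisper, and MMS checkpoints expect. The paths and column names below are assumptions, not part of the release; audio decoding additionally requires the soundfile package.

```python
from pathlib import Path
from datasets import Audio, Dataset

dataset_root = Path("dataset_root")  # placeholder path to the extracted corpus

wav_paths = sorted(str(p) for p in dataset_root.glob("*.wav"))
texts = [
    Path(p).with_suffix(".txt").read_text(encoding="utf-8").strip()
    for p in wav_paths
]

# Assumed column names "audio" and "text"; rename to match your training script.
ds = Dataset.from_dict({"audio": wav_paths, "text": texts})

# Decode audio on access and resample to 16 kHz for wav2vec2/Whisper/MMS-style models.
ds = ds.cast_column("audio", Audio(sampling_rate=16000))

print(ds[0]["text"])
print(ds[0]["audio"]["array"].shape)
```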
Potential applications:
- Voice assistants
- Speech-enabled educational tools
- Accessibility technologies
- Humanitarian language technology
6. Ethical Considerations
- All recordings were contributed voluntarily.
- Contributors acknowledged CC0 audio licensing.
- No personally identifiable information (PII) is included.
- Derivative models should respect the spirit of open, community-driven data.
7. Recommended Citation
@dataset{tigre_speech_corpus_2025,
title = {Tigre Speech Corpus},
author = {Tigre Diaspora Community Contributors and Mozilla Common Voice},
year = {2025},
url = {https://huggingface.co/},
note = {A collection of 18,470 audio–text pairs for the Tigre language.}
}
8. Acknowledgments
We gratefully acknowledge:
- The Tigre diaspora community for their contributions.
- The Mozilla Common Voice team for enabling community-driven speech data creation.