---
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: language
    dtype: string
  - name: prompt
    dtype: string
  splits:
  - name: train
    num_bytes: 2240165860.82
    num_examples: 24607
  download_size: 2213674221
  dataset_size: 2240165860.82
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- automatic-speech-recognition
language:
- de
pretty_name: Swiss Parliaments Corpus Train v0.9
---
# Dataset Card: Swiss Parliaments Corpus — Train v0.9

## Summary
The SPC Train v0.9 release pairs Swiss German speech with Standard German transcriptions, providing a high-quality resource for training and evaluating automatic speech-recognition (ASR) or speech-translation systems.

If you intend to fine-tune Whisper, we recommend the companion project `i4Ds/whisper-finetune`, which is fully compatible with the data structure produced here.
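For a quick look at the data layout, the corpus can be loaded with the 🤗 `datasets` library. The following is a minimal sketch: the repository id is a placeholder and may not match the actual Hub id of this dataset.

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual Hub id of this dataset.
ds = load_dataset("i4Ds/swiss-parliaments-corpus-train", split="train")

sample = ds[0]
print(sample["text"])       # Standard German transcription
print(sample["language"])   # language tag, e.g. "de"
print(sample["prompt"])     # optional conditioning text
audio = sample["audio"]     # dict with "array" and "sampling_rate" (16 kHz)
```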
## Dataset Details

### Generation Pipeline

The corpus was created with `i4Ds/whisper-prep` using the following configuration:
```yaml
# Generation configuration
maintain_speaker_chance: 0.50   # Probability of keeping the current speaker for consecutive utterances
n_samples_per_srt: 120          # Number of audio fragments merged into each SRT file
normalize_text: true            # Clean text according to rules in whisper_prep/generation/text_normalizer.py

# Overlap settings
# Overlaps are inserted only in non-speech regions identified by VAD.
overlap_chance: 0.80            # Probability of creating an overlap between consecutive clips
max_overlap_chance: 0.50        # If an overlap occurs, probability of using the maximum duration
max_overlap_duration: 0.30      # Maximum overlap length in seconds
```
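The overlap parameters interact as follows: an overlap is inserted at all with probability `overlap_chance`; when it is, the full `max_overlap_duration` is used with probability `max_overlap_chance`, and otherwise a shorter overlap is presumably chosen (the exact rule is not stated here). Below is a minimal sketch of that sampling logic; the function name and the uniform draw for the shorter case are assumptions, not the whisper-prep implementation.

```python
import random

def sample_overlap_seconds(
    overlap_chance: float = 0.80,
    max_overlap_chance: float = 0.50,
    max_overlap_duration: float = 0.30,
) -> float:
    """Return the overlap (in seconds) to apply between two consecutive clips."""
    if random.random() >= overlap_chance:
        return 0.0  # no overlap for this pair of clips
    if random.random() < max_overlap_chance:
        return max_overlap_duration  # use the full maximum overlap
    # Assumption: shorter overlaps are drawn uniformly below the maximum.
    return random.uniform(0.0, max_overlap_duration)
```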
## Maintainer
- Curated by: Vincenzo Timmel (@vincenzo.timmel)
## Intended Use & Scope
- Primary use case: Fine-tuning multilingual ASR or speech-translation models, particularly OpenAI Whisper (a preprocessing sketch follows this list).
- Not suitable for: Language-identification or emotion-recognition tasks without additional annotation. For evaluation, please see "SPC_Test".
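As a rough illustration of turning one example into Whisper training inputs, here is a minimal sketch using the Hugging Face `transformers` WhisperProcessor; the checkpoint choice and the derived column names are assumptions, not part of this dataset or of `i4Ds/whisper-finetune`.

```python
from transformers import WhisperProcessor

# Example checkpoint choice; any Whisper size can be substituted.
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

def prepare_example(example):
    audio = example["audio"]
    # Log-Mel input features from the 16 kHz waveform
    example["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # Tokenized Standard German transcription as decoder labels
    example["labels"] = processor.tokenizer(example["text"]).input_ids
    return example

# ds = ds.map(prepare_example, remove_columns=ds.column_names)
```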
## Dataset Sources

## Citation
If you use this corpus, please cite the papers above and acknowledge I4DS FHNW for data preparation.