Automatic Speech Recognition for Kinyarwanda


Model Description

This model is a fine-tuned version of Wav2Vec2-BERT 2.0 for Kinyarwanda automatic speech recognition (ASR). It was trained on the ~1,000-hour dataset from the Kinyarwanda ASR hackathon on Kaggle (Track B), which covers the Health, Government, Finance, Education, and Agriculture domains. The model is robust within these domains, achieving an in-domain word error rate (WER) below 8.4%.

  • Developed by: Badr al-Absi
  • Model type: Speech Recognition (ASR)
  • Language: Kinyarwanda (rw)
  • License: CC-BY-4.0
  • Finetuned from: facebook/w2v-bert-2.0

Examples πŸš€

Audio 1
  • Human transcription: Umugore wambaye umupira w'akazi mpuzankano iri mu ibara ry'umuhondo handitseho amagambo yandikishije ibara ry'ubururu. Afite igikoresho cy'itumanaho gikoreshwa mu guhamagara no kwandika ubutumwa bugufi.
  • ASR transcription: umugore wambaye umupira w'akazi impuzankano iri mu ibara ry'umuhondo handitseho amagambo yandikishije ibara ry'ubururu afite igikoresho cy'itumanaho gikoreshwa mu guhamagara no kwandika ubutumwa bugufi

Audio 2
  • Human transcription: Igikoresho cyifashishwa mu kwiga imibare ndetse kiba kirimo ibindi bikoresho byinshi harimo amarati atatu ndetse n'irati imwe ndende n'ikaramu na kompa, ibigibi byahawe abanyeshuri biga mu myaka ya mbere n'iya kabiri kugira ngo bajye babyifashisha bari kwiga imibare.
  • ASR transcription: igikoresho cyifashishwa mu kwiga imibare ndetse kiba kirimo ibindi bikoresho byinshi harimo amarati atatu ndetse n'irati imwe ndende n'ikaramu na kompa ibi ngibi byahawe abanyeshuri biga mu myaka ya mbere n'iya kabiri kugira ngo bajye babyifashisha bari kwiga imibare

Audio 3
  • Human transcription: Iyi ni Kizimyamwoto iri mu ibara ry'umutuku. Hejuru hakaba hariho amabara y'umuhondo ku ruhande hakaba hariho akantu kameze nk'isaha, hasi hakaba hariho akabara gasa n'ubururu kari amagambo menshi mu rurimi rw'icyongereza hasi yako hakaba hari n'akandi kari mu ibara ry' umuhondo handikishijemo amagambo y'icyongereza, hasi yako hakaba hari n' utundi tuntu tw' utubokisi tw' umweru harimo utuntu tujyiye dushushanyije hakaba hariho n' inyajwi bi na si.
  • ASR transcription: iyinzuzinyamwoto iri mu ibara ry'umutuku hejuru hakaba hariho ahariho amabara y'umuhondo ku ruhande hakaba hariho akantu kameze nk'isaha hasi hakaba hariho akabara gatoya k'ubururu kariho amagambo yandikishije mu rurimi rw'icyongereza hasi yako hakaba hari n'akandi kari mu ibara ry'umuhondo wandikishijemo amagambo y'icyongereza hasi yako hakaba hariho utundi tutu tw'tuboisi hariho amaotw'utubogisi tw'umweru harimo utuntu tugiye dushushanyije hakaba hariho n'inyajwi bi na si
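The reported in-domain figure is a word error rate (WER): the minimum number of word substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the reference length. Below is a minimal sketch of scoring example 1 above, assuming the third-party jiwer package (not a dependency of this model):

import jiwer  # third-party: pip install jiwer

reference = ("Umugore wambaye umupira w'akazi mpuzankano iri mu ibara ry'umuhondo "
             "handitseho amagambo yandikishije ibara ry'ubururu. Afite igikoresho "
             "cy'itumanaho gikoreshwa mu guhamagara no kwandika ubutumwa bugufi.")
hypothesis = ("umugore wambaye umupira w'akazi impuzankano iri mu ibara ry'umuhondo "
              "handitseho amagambo yandikishije ibara ry'ubururu afite igikoresho "
              "cy'itumanaho gikoreshwa mu guhamagara no kwandika ubutumwa bugufi")

def normalize(text: str) -> str:
    # lowercase and drop sentence punctuation so only word choice is scored
    return text.lower().replace(".", "").replace(",", "")

# one substitution (mpuzankano -> impuzankano) out of 24 words, roughly 4.2%
print(f"WER: {jiwer.wer(normalize(reference), normalize(hypothesis)):.2%}")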

Model Sources

  • Repository: https://huggingface.co/badrex/w2v-bert-2.0-kinyarwanda-asr-1000h

Direct Use

The model can be used directly for automatic speech recognition of Kinyarwanda audio:

from transformers import Wav2Vec2BertProcessor, Wav2Vec2BertForCTC
import torch
import torchaudio

# load model and processor
processor = Wav2Vec2BertProcessor.from_pretrained("badrex/w2v-bert-2.0-kinyarwanda-asr-1000h")
model = Wav2Vec2BertForCTC.from_pretrained("badrex/w2v-bert-2.0-kinyarwanda-asr-1000h")
model.eval()

# load audio and resample to the 16 kHz rate the processor expects
audio_input, sample_rate = torchaudio.load("path/to/audio.wav")
if sample_rate != 16_000:
    audio_input = torchaudio.functional.resample(audio_input, sample_rate, 16_000)
    sample_rate = 16_000

# preprocess: turn the raw waveform into the model's input features
inputs = processor(audio_input.squeeze(), sampling_rate=sample_rate, return_tensors="pt")

# inference
with torch.no_grad():
    logits = model(**inputs).logits

# decode: greedy (argmax) CTC decoding over the frame-level logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
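For quick experiments, the same checkpoint can also be used through the transformers pipeline API, which bundles preprocessing, inference, and CTC decoding, and resamples file input internally. A minimal sketch; the audio path is a placeholder:

from transformers import pipeline

# the ASR pipeline wraps feature extraction, inference, and decoding in one call
asr = pipeline(
    "automatic-speech-recognition",
    model="badrex/w2v-bert-2.0-kinyarwanda-asr-1000h",
)

print(asr("path/to/audio.wav")["text"])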

Downstream Use

This model can be used as a foundation for:

  • building voice assistants for Kinyarwanda speakers
  • transcription services for Kinyarwanda content (a long-audio sketch follows this list)
  • accessibility tools for Kinyarwanda-speaking communities
  • research in low-resource speech recognition
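For transcription services in particular, recordings are often much longer than a single forward pass comfortably handles. The pipeline API can split long audio into overlapping chunks and stitch the CTC output back together. A hedged sketch; the chunk and stride lengths below are common illustrative values, not settings tuned for this model, and the file path is a placeholder:

from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="badrex/w2v-bert-2.0-kinyarwanda-asr-1000h",
)

# chunk the recording into 30 s windows with 5 s of overlapping stride on each side
result = asr("path/to/long_recording.wav", chunk_length_s=30, stride_length_s=5)
print(result["text"])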

Out-of-Scope Use

  • transcribing languages other than Kinyarwanda
  • real-time applications without proper latency testing (a simple real-time-factor check is sketched after this list)
  • high-stakes applications without domain-specific validation
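A first-pass latency check before any real-time deployment is to measure the real-time factor (RTF): processing time divided by audio duration, where values below 1.0 mean faster than real time. A minimal sketch, reusing the model and processor loaded in Direct Use and assuming a 16 kHz mono waveform tensor:

import time
import torch

def real_time_factor(model, processor, waveform, sample_rate=16_000):
    # RTF = seconds spent transcribing / seconds of audio transcribed
    inputs = processor(waveform, sampling_rate=sample_rate, return_tensors="pt")
    start = time.perf_counter()
    with torch.no_grad():
        model(**inputs)
    elapsed = time.perf_counter() - start
    return elapsed / (waveform.shape[-1] / sample_rate)

# example: real_time_factor(model, processor, audio_input.squeeze())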

Bias, Risks, and Limitations

  • Domain bias: primarily trained on formal speech from specific domains (Health, Government, Finance, Education, Agriculture)
  • Accent variation: may not perform well on dialects or accents not represented in training data
  • Audio quality: performance may degrade on noisy or low-quality audio
  • Technical terms: may struggle with specialized vocabulary outside training domains

Training Data

The model was fine-tuned on the Kinyarwanda ASR hackathon (Track B) dataset:

  • Size: ~1000 hours of transcribed Kinyarwanda speech
  • Domains: Health, Government, Finance, Education, Agriculture
  • Source: Digital Umuganda (funded by the Gates Foundation)
  • License: CC-BY-4.0

Model Architecture

  • Base model: Wav2Vec2-BERT 2.0
  • Architecture: transformer-based with convolutional feature extractor
  • Parameters: ~600M (inherited from base model)
  • Objective: connectionist temporal classification (CTC)
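To make the CTC objective concrete: the encoder emits one logit vector per audio frame, and CTC training marginalizes over all frame-to-character alignments, with a blank token absorbing frames between characters. The sketch below uses purely illustrative shapes, not this model's actual configuration:

import torch
import torch.nn.functional as F

# illustrative sizes: T encoder frames, N batch items, C vocabulary entries
T, N, C = 200, 2, 40
log_probs = torch.randn(T, N, C).log_softmax(dim=-1)  # frame-level predictions

targets = torch.randint(1, C, (N, 30), dtype=torch.long)  # character IDs (0 = blank)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 30, dtype=torch.long)

# CTC loss sums over every alignment of 200 frames to 30 target characters
loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0)
print(loss.item())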


Citation

@misc{w2v_bert_kinyarwanda_asr,
  author = {Badr M. Abdullah},
  title = {Adapting Wav2Vec2-BERT 2.0 for Kinyarwanda ASR},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/badrex/w2v-bert-2.0-kinyarwanda-asr-1000h}
}

@misc{kinyarwanda_asr_track_b,
  title={Kinyarwanda Automatic Speech Recognition Track B},
  author={Digital Umuganda},
  year={2025},
  url={https://www.kaggle.com/competitions/kinyarwanda-automatic-speech-recognition-track-b}
}

Model Card Contact

For questions or issues, please contact via the Hugging Face model repository.
