Soloba-TDT-600M Series


soloba-tdt-0.6b-v1.5 is a fine-tuned version of RobotsMali/soloba-tdt-0.6b-v0.5 on RobotsMali/kunkado. This model does not consistently produce capitalization and punctuation, and it does not produce the acoustic event tags found in Kunkado in its transcriptions. It was fine-tuned using NVIDIA NeMo.

🚨 Important Note

This model, along with its associated resources, is part of an ongoing research effort; improvements and refinements are expected in future versions. Users should be aware that:

  • The model may not generalize well across all speaking conditions and dialects.
  • Community feedback is welcome, and contributions are encouraged to refine the model further.

NVIDIA NeMo: Training

To fine-tune or experiment with the model, you will need to install NVIDIA NeMo. We recommend installing it after you have installed the latest version of PyTorch.

pip install "nemo-toolkit[asr]"
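
As a quick sanity check after installation, you can import the toolkit and print its version; the Output section below assumes NeMo >= 2.3:

import nemo
print(nemo.__version__)  # should be >= 2.3 for the Hypothesis output described below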

How to Use This Model

Note that this model has been released primarily for research purposes.

Load Model with NeMo

import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="RobotsMali/soloba-tdt-0.6b-v1.5")
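
If you have already downloaded the .nemo checkpoint, you can restore it from disk instead; a minimal sketch (the local file name is an assumption):

# assuming you saved the checkpoint locally as soloba-tdt-0.6b-v1.5.nemo
asr_model = nemo_asr.models.ASRModel.restore_from(restore_path="soloba-tdt-0.6b-v1.5.nemo")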

Transcribe Audio

asr_model.eval()
# Assuming you have a test audio file named sample_audio.wav
hypotheses = asr_model.transcribe(['sample_audio.wav'])

Input

This model accepts any mono-channel audio (WAV files) as input and resamples it to a 16 kHz sample rate before performing the forward pass.
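
If your recordings are stereo or use a different sample rate or format, you can convert them up front rather than relying on NeMo's resampling. A minimal sketch using librosa and soundfile (both are assumptions, not dependencies of this model):

import librosa
import soundfile as sf

# load as mono and resample to 16 kHz, then write a WAV file the model can consume
audio, sr = librosa.load("raw_recording.mp3", sr=16000, mono=True)
sf.write("sample_audio.wav", audio, sr)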

Output

This model returns transcribed speech as a Hypothesis object with a text attribute containing the transcription string for a given speech sample (NeMo >= 2.3).
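
For example, building on the Transcribe Audio snippet above:

print(hypotheses[0].text)  # the transcription string (NeMo >= 2.3)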

Model Architecture

This model uses a FastConformer encoder and an autoregressive Token-and-Duration Transducer (TDT) decoder, a variant of RNN-T that jointly predicts a token and its duration. FastConformer is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. You may find more information on the details of FastConformer here: Fast-Conformer Model.
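
You can inspect the encoder and decoder sub-configurations directly from the loaded model; a minimal sketch, assuming the standard NeMo config layout with encoder and decoder keys:

from omegaconf import OmegaConf

# print the FastConformer encoder and TDT decoder configs
print(OmegaConf.to_yaml(asr_model.cfg.encoder))
print(OmegaConf.to_yaml(asr_model.cfg.decoder))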

Training

The NeMo toolkit was used to fine-tune this model for 40,000 steps from the RobotsMali/soloba-tdt-0.6b-v0.5 checkpoint with a batch size of 32. The fine-tuning code and configurations can be found at RobotsMali-AI/bambara-asr, and a hedged sketch of a comparable run follows below.
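
The exact recipes are in that repository; the following is only a sketch of a comparable NeMo + PyTorch Lightning run, where the manifest path and trainer settings are assumptions, not the released configuration:

import lightning.pytorch as pl
import nemo.collections.asr as nemo_asr
from omegaconf import OmegaConf

# start from the v0.5 checkpoint, as described above
asr_model = nemo_asr.models.ASRModel.from_pretrained("RobotsMali/soloba-tdt-0.6b-v0.5")

# hypothetical NeMo-style manifest: one JSON object per line with
# "audio_filepath", "duration" and "text" fields
asr_model.setup_training_data(OmegaConf.create({
    "manifest_filepath": "train_manifest.json",
    "sample_rate": 16000,
    "batch_size": 32,
    "shuffle": True,
}))

trainer = pl.Trainer(accelerator="gpu", devices=1, max_steps=40000)
asr_model.set_trainer(trainer)
trainer.fit(asr_model)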

The tokenizer for this model was trained on the text transcripts of the train set of RobotsMali/kunkado using this script.
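
The linked script is the authoritative reference; NeMo BPE tokenizers of this kind are typically SentencePiece models, so the core idea resembles the sketch below (the input file name and vocabulary size are assumptions):

import sentencepiece as spm

# train a BPE tokenizer on a file with one transcript per line
spm.SentencePieceTrainer.train(
    input="kunkado_train_text.txt",
    model_prefix="tokenizer",
    vocab_size=1024,
    model_type="bpe",
)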

Dataset

This model was fine-tuned on the human-reviewed subset of the kunkado dataset, which consists of ~40 hours of transcribed Bambara speech. The text was normalized with the bambara-normalizer prior to training, normalizing numbers and removing punctuation and tags.
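
The bambara-normalizer package is the canonical tool here; purely for illustration, the tag and punctuation removal it describes resembles the hypothetical helper below, which does not attempt number normalization:

import re

def rough_normalize(text: str) -> str:
    """Illustrative only: mimics tag and punctuation removal (not bambara-normalizer)."""
    text = re.sub(r"<[^>]+>", " ", text)    # drop acoustic event tags such as <laugh>
    text = re.sub(r"[^\w\s']", " ", text)   # strip punctuation
    return re.sub(r"\s+", " ", text).strip().lower()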

Performance

We report the Word Error Rate (WER) and Character Error Rate (CER) for this model:

Benchmark | Decoding | WER (%) ↓ | CER (%) ↓
Kunkado | CTC | 39.78 | 23.21
Nyana Eval | CTC | XX.XX | YY.YY
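
If you want to compute the same metrics on your own data, a minimal sketch with the jiwer package (an assumption, not part of this release; the transcripts below are hypothetical):

import jiwer

references = ["an ka taa so"]   # hypothetical ground-truth transcript
predictions = ["an ka ta so"]   # hypothetical model output

print("WER:", jiwer.wer(references, predictions))
print("CER:", jiwer.cer(references, predictions))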

License

This model is released under the CC-BY-4.0 license. By using this model, you agree to the terms of the license.


Feel free to open a discussion on Hugging Face or file an issue on GitHub for help or contributions.
