---
language:
  - en
  - zh
  - de
  - es
  - ru
  - ko
  - fr
  - ja
  - pt
  - tr
  - pl
  - ca
  - nl
  - ar
  - sv
  - it
  - id
  - hi
  - fi
  - vi
  - he
  - uk
  - el
  - ms
  - cs
  - ro
  - da
  - hu
  - ta
  - 'no'
  - th
  - ur
  - hr
  - bg
  - lt
  - la
  - mi
  - ml
  - cy
  - sk
  - te
  - fa
  - lv
  - bn
  - sr
  - az
  - sl
  - kn
  - et
  - mk
  - br
  - eu
  - is
  - hy
  - ne
  - mn
  - bs
  - kk
  - sq
  - sw
  - gl
  - mr
  - pa
  - si
  - km
  - sn
  - yo
  - so
  - af
  - oc
  - ka
  - be
  - tg
  - sd
  - gu
  - am
  - yi
  - lo
  - uz
  - fo
  - ht
  - ps
  - tk
  - nn
  - mt
  - sa
  - lb
  - my
  - bo
  - tl
  - mg
  - as
  - tt
  - haw
  - ln
  - ha
  - ba
  - jw
  - su
tags:
  - audio
  - automatic-speech-recognition
  - hf-asr-leaderboard
  - open4bits
widget:
  - example_title: Librispeech sample 1
    src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
  - example_title: Librispeech sample 2
    src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
  - name: whisper-tiny
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: LibriSpeech (clean)
          type: librispeech_asr
          config: clean
          split: test
          args:
            language: en
        metrics:
          - name: Test WER
            type: wer
            value: 7.54
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: LibriSpeech (other)
          type: librispeech_asr
          config: other
          split: test
          args:
            language: en
        metrics:
          - name: Test WER
            type: wer
            value: 17.15
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Common Voice 11.0
          type: mozilla-foundation/common_voice_11_0
          config: hi
          split: test
          args:
            language: hi
        metrics:
          - name: Test WER
            type: wer
            value: 141
pipeline_tag: automatic-speech-recognition
license: apache-2.0
base_model:
  - openai/whisper-tiny
---

# Open4bits / Whisper Tiny FP16

This repository provides the Whisper Tiny model converted to FP16 (float16) precision, published by Open4bits to enable efficient inference with a reduced memory footprint.

The underlying Whisper model and architecture are owned by OpenAI. This repository contains only a precision-converted version of the original model weights.

The model is designed for fast, lightweight multilingual speech-to-text tasks and is well suited for resource-constrained environments.


## Model Overview

Whisper is a sequence-to-sequence transformer model developed by OpenAI for automatic speech recognition and speech translation.
This release uses the Tiny variant, prioritizing speed and low memory usage while preserving the original architecture.


## Model Details

- Architecture: Whisper Tiny
- Parameters: ~37.85 million
- Precision: float16 (FP16)
- Task: Automatic Speech Recognition (ASR)
- Languages: Multilingual
- Weight tying: Preserved
- Compatibility: Hugging Face Transformers, PyTorch

Compared to larger Whisper variants, this model offers significantly faster inference and lower VRAM requirements, at the cost of reduced transcription accuracy in some scenarios.
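To make the memory saving concrete, here is a rough back-of-the-envelope calculation for the weight tensors alone (activations and the KV cache add to this), assuming the ~37.85 million parameter count stated above:

```python
# Approximate size of the weight tensors at different precisions.
# Assumes the ~37.85M parameter count from the Model Details section.
NUM_PARAMS = 37_850_000

def weight_memory_mb(num_params: int, bytes_per_param: int) -> float:
    """Approximate weight storage in megabytes."""
    return num_params * bytes_per_param / 1e6

fp32_mb = weight_memory_mb(NUM_PARAMS, 4)  # float32: 4 bytes per parameter
fp16_mb = weight_memory_mb(NUM_PARAMS, 2)  # float16: 2 bytes per parameter

print(f"FP32 weights: ~{fp32_mb:.0f} MB")  # ~151 MB
print(f"FP16 weights: ~{fp16_mb:.0f} MB")  # ~76 MB
```

Halving the bytes per parameter halves the weight storage, which is the main reason this FP16 release suits memory-constrained deployments.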


## Intended Use

This model is intended for:

- Fast speech-to-text transcription
- Lightweight and real-time ASR applications
- Edge or low-resource deployments
- Research and prototyping
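A minimal usage sketch with the Hugging Face Transformers `pipeline` is shown below. The repo id `Open4bits/whisper-tiny-f16` is an assumption based on this card's title; substitute the actual repo id when loading.

```python
# Hypothetical usage sketch: the repo id below is assumed from this card's
# title, not confirmed; replace it with the actual repository id.
import numpy as np
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
# FP16 matmul support on CPU is limited, so fall back to FP32 there.
dtype = torch.float16 if device != "cpu" else torch.float32

pipe = pipeline(
    "automatic-speech-recognition",
    model="Open4bits/whisper-tiny-f16",
    torch_dtype=dtype,
    device=device,
)

# One second of silence at Whisper's expected 16 kHz sampling rate;
# replace with real audio (a file path also works: pipe("speech.wav")).
audio = {"array": np.zeros(16_000, dtype=np.float32), "sampling_rate": 16_000}
result = pipe(audio)
print(result["text"])
```

For real-time or edge use, keeping the pipeline object alive between calls avoids reloading the weights on every request.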

## Limitations

- Lower transcription accuracy compared to larger Whisper variants
- Performance depends on audio quality, language, and accent
- Not fine-tuned for domain-specific or noisy audio
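The accuracy figures in this card's model-index are word error rates (WER): the word-level edit distance between reference and hypothesis, divided by the number of reference words. Note that WER can exceed 100% when the hypothesis contains more errors than there are reference words, which is how a figure like 141 arises. A minimal sketch of the metric:

```python
# Minimal word error rate (WER) sketch: Levenshtein distance over word
# sequences, normalized by the reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[j] holds the edit distance between the processed prefix of ref
    # and hyp[:j]; the row is updated in place (Wagner-Fischer).
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev_diag, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(
                d[j] + 1,               # deletion
                d[j - 1] + 1,           # insertion
                prev_diag + (r != h),   # substitution (or free match)
            )
            prev_diag, d[j] = d[j], cur
    return d[-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))
# ≈ 0.167 (1 edit over 6 reference words)
```

Production evaluations typically also normalize text (casing, punctuation) before scoring, which this sketch omits.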

## License

This model is released under the Apache License 2.0. The original Whisper model and associated intellectual property are owned by OpenAI.


## Support

If you find this model useful, please consider supporting the project by leaving a like (the heart button) on the repository. Your support helps us continue releasing and maintaining high-quality open models.