Commit 6750d7e · Parent: 1800ec3
Update README.md

README.md CHANGED
@@ -12,21 +12,16 @@ license: apache-2.0
 
 # Wav2Vec2-Conformer-Large-100h with Rotary Position Embeddings
 
-
+Wav2Vec2 Conformer with rotary position embeddings, pretrained on 960 hours of Librispeech and fine-tuned on **100 hours of Librispeech** of 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
 
-
+**Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
 
-
+**Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
 
-
-
-**Abstract**
-
-...
+The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
 
 The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
 
-
 # Usage
 
 To transcribe audio files, the model can be used as a standalone acoustic model as follows:
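The usage snippet referenced by the last line of the updated README is not part of this hunk. As a rough, non-authoritative sketch of what standalone acoustic-model transcription with a Wav2Vec2-Conformer checkpoint typically looks like in 🤗 Transformers: the checkpoint id `facebook/wav2vec2-conformer-rope-large-100h-ft` and the local file `audio.wav` below are assumptions for illustration, not taken from this diff.

```python
# Minimal sketch, NOT the README's own snippet: the checkpoint id and the
# audio path are assumptions made purely for illustration.
import torch
import torchaudio
from transformers import Wav2Vec2ConformerForCTC, Wav2Vec2Processor

checkpoint = "facebook/wav2vec2-conformer-rope-large-100h-ft"  # assumed repo id
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ConformerForCTC.from_pretrained(checkpoint)

# Load the audio, downmix to mono, and resample to the 16 kHz the model expects.
waveform, sample_rate = torchaudio.load("audio.wav")  # hypothetical path
speech = waveform.mean(dim=0)
if sample_rate != 16_000:
    speech = torchaudio.functional.resample(speech, sample_rate, 16_000)

# The processor normalizes the raw waveform into model inputs.
inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")

# Greedy CTC decoding: argmax over the vocabulary at every frame, then collapse.
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```

Greedy argmax decoding over the CTC logits is the simplest option; the resampling step matters because, as the card notes, the model expects 16 kHz input.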