---
language:
  - multilingual
  - ab
  - af
  - sq
  - am
  - ar
  - hy
  - as
  - az
  - ba
  - eu
  - be
  - bn
  - bs
  - br
  - bg
  - my
  - yue
  - ca
  - ceb
  - km
  - zh
  - cv
  - hr
  - cs
  - da
  - dv
  - nl
  - en
  - eo
  - et
  - fo
  - fi
  - fr
  - gl
  - lg
  - ka
  - de
  - el
  - gn
  - gu
  - ht
  - cnh
  - ha
  - haw
  - he
  - hi
  - hu
  - is
  - id
  - ia
  - ga
  - it
  - ja
  - jv
  - kab
  - kn
  - kk
  - rw
  - ky
  - ko
  - ku
  - lo
  - la
  - lv
  - ln
  - lt
  - lm
  - mk
  - mg
  - ms
  - ml
  - mt
  - gv
  - mi
  - mr
  - mn
  - ne
  - 'no'
  - nn
  - oc
  - or
  - ps
  - fa
  - pl
  - pt
  - pa
  - ro
  - rm
  - ru
  - sah
  - sa
  - sco
  - sr
  - sn
  - sd
  - si
  - sk
  - sl
  - so
  - hsb
  - es
  - su
  - sw
  - sv
  - tl
  - tg
  - ta
  - tt
  - te
  - th
  - bo
  - tpi
  - tr
  - tk
  - uk
  - ur
  - uz
  - vi
  - vot
  - war
  - cy
  - yi
  - yo
  - zu
language_bcp47:
  - zh-HK
  - zh-TW
  - fy-NL
datasets:
  - common_voice
  - multilingual_librispeech
tags:
  - speech
  - xls_r
  - xls_r_pretrained
license: apache-2.0
---

# Wav2Vec2-XLS-R-300M

Facebook's Wav2Vec2 XLS-R with 300 million parameters.

*(Model overview figure)*

XLS-R is Facebook AI's large-scale multilingual pretrained model for speech (the "XLM-R for Speech"). It is pretrained with the wav2vec 2.0 objective on 436K hours of unlabeled speech in 128 languages, drawn from VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. When using the model, make sure that your speech input is sampled at 16 kHz.
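Even before fine-tuning, the pretrained encoder can be loaded to extract speech representations, which is a quick way to verify the 16 kHz requirement end to end. A minimal sketch, assuming `transformers` and `torch` are installed; the random waveform is only a stand-in for real audio:

```python
import torch
from transformers import AutoFeatureExtractor, AutoModel

model_id = "facebook/wav2vec2-xls-r-300m"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Dummy 1-second mono clip standing in for real audio; real input must
# also be mono and sampled at 16 kHz (resample first if necessary).
waveform = torch.randn(16000)

inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 1024-dimensional vector roughly every 20 ms of audio.
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 49, 1024])
```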

**Note**: This model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Translation, or Classification. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for more information about ASR.
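Concretely, fine-tuning for ASR means putting a randomly initialized CTC head on top of the pretrained encoder. A minimal sketch of that setup; the toy character vocabulary here is a placeholder, and a real one would be built from the target language's transcripts:

```python
import json
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2ForCTC

# Toy character vocabulary (placeholder); "|" acts as the word delimiter.
vocab = {"[pad]": 0, "[unk]": 1, "|": 2, "a": 3, "b": 4, "c": 5}
with open("vocab.json", "w") as f:
    json.dump(vocab, f)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[unk]", pad_token="[pad]", word_delimiter_token="|"
)

# The CTC head is initialized from scratch; its size must match the vocabulary.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    ctc_loss_reduction="mean",
    pad_token_id=tokenizer.pad_token_id,
    vocab_size=len(tokenizer),
)

# Keep the convolutional feature encoder frozen during fine-tuning.
model.freeze_feature_encoder()
```

From here, training proceeds with labeled audio/transcript pairs, as described in the blog post linked above.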

[XLS-R Paper](https://arxiv.org/abs/2111.09296)

Authors: Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli

**Abstract**
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on 436K hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as VoxPopuli, lowering error rates by 20%-33% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.

The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.

# Usage

See this Google Colab for more information on how to fine-tune the model.

You can find other pretrained XLS-R models with different numbers of parameters:
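- [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) (this checkpoint)
- [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b)
- [facebook/wav2vec2-xls-r-2b](https://huggingface.co/facebook/wav2vec2-xls-r-2b)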