---
license: cc-by-nc-sa-4.0
language:
  - ab
  - af
  - am
  - ar
  - as
  - az
  - ba
  - be
  - bn
  - bo
  - bs
  - br
  - bg
  - ca
  - cs
  - cv
  - cy
  - da
  - de
  - dv
  - el
  - en
  - eo
  - et
  - eu
  - ee
  - fo
  - fa
  - tl
  - fi
  - fr
  - fy
  - ga
  - gl
  - gv
  - gn
  - gu
  - ht
  - ha
  - he
  - hi
  - hr
  - hu
  - hy
  - ig
  - ia
  - id
  - is
  - it
  - jv
  - ja
  - kn
  - ka
  - kk
  - km
  - rw
  - ky
  - ku
  - ko
  - lo
  - la
  - lv
  - ln
  - lt
  - lb
  - lg
  - ml
  - mr
  - mk
  - mg
  - mt
  - mn
  - mi
  - ms
  - my
  - ne
  - nl
  - nn
  - 'no'
  - oc
  - or
  - pa
  - pl
  - pt
  - ps
  - ro
  - ru
  - sa
  - si
  - sl
  - sk
  - sn
  - sd
  - so
  - st
  - es
  - sq
  - sc
  - sr
  - su
  - sw
  - sv
  - ta
  - tt
  - te
  - tg
  - th
  - tn
  - tk
  - tr
  - tw
  - ug
  - uk
  - ur
  - uz
  - vi
  - xh
  - yi
  - yo
  - zh
---

This repository contains the best mHuBERT-147 pre-trained model.

**Model details:** 3rd iteration, K=1000, HuBERT base architecture (95M parameters), 147 languages.

# mHuBERT-147 models

mHuBERT-147 models are compact and competitive multilingual HuBERT models trained on 90K hours of open-license data covering 147 languages. Unlike traditional HuBERT models, mHuBERT-147 is trained on faiss IVF discrete speech units, and training employs a two-level up-sampling strategy over languages and data sources. See our paper for more information.
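
As a rough illustration of how a faiss IVF index of this kind can produce discrete units, here is a minimal sketch; the index file name and the 768-dimensional feature assumption are placeholders, and this is not necessarily the authors' exact labeling script:

```python
import faiss
import numpy as np

# Hypothetical file name; point this at the faiss index file shipped in this repository.
index = faiss.read_index("mhubert147_faiss.index")

# Frame-level speech features, float32, shape (num_frames, feature_dim); the feature
# dimensionality must match what the index was trained on (assumed 768 here).
features = np.random.randn(100, 768).astype(np.float32)

# Apply the OPQ pre-transform, then assign each frame to its nearest IVF centroid;
# the centroid ids (0..999 for IVF1000, i.e. K=1000) act as the discrete speech units.
opq = faiss.downcast_VectorTransform(index.chain.at(0))
ivf = faiss.extract_index_ivf(index)
_, unit_ids = ivf.quantizer.search(opq.apply_py(features), 1)
print(unit_ids.ravel()[:10])
```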

Table of Contents:

  1. Summary
  2. Training Data and Code
  3. ML-SUPERB Scores
  4. Languages and Datasets
  5. Intermediate Checkpoints
  6. Citing and Funding Information

This repository contains:

- Fairseq checkpoint (original);
- HuggingFace checkpoint (converted using the transformers library; see the loading sketch after this list);
- Faiss index for continuous pre-training (OPQ16_64,IVF1000_HNSW32,PQ16x4fsr).
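
As a minimal usage sketch for the HuggingFace checkpoint (assuming the `utter-project/mHuBERT-147` repository id and skipping any input normalization), feature extraction looks like this:

```python
import torch
from transformers import HubertModel

# Assumed repository id for the converted checkpoint; replace with a local path if needed.
model = HubertModel.from_pretrained("utter-project/mHuBERT-147").eval()

# Dummy 1-second, 16 kHz mono waveform of shape (batch, samples); replace with real audio.
waveform = torch.zeros(1, 16000)

with torch.no_grad():
    # Last hidden states: (batch, frames, 768) for the HuBERT base architecture.
    hidden_states = model(waveform).last_hidden_state

print(hidden_states.shape)
```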

## Related Models

## Training

## ML-SUPERB Scores

mHuBERT-147 reaches second and first position on the 10min and 1h ML-SUPERB leaderboards, respectively, and achieves new SOTA scores for three LID tasks. See our paper for more information.


## Languages and Datasets

Datasets: for ASR/ST/TTS datasets, only the train set is used.

Languages present but not indexed by Hugging Face: Asturian (ast), Basaa (bas), Cebuano (ceb), Central Kurdish/Sorani (ckb), Hakha Chin (cnh), Hawaiian (haw), Upper Sorbian (hsb), Kabyle (kab), Moksha (mdf), Meadow Mari (mhr), Hill Mari (mrj), Erzya (myv), Taiwanese Hokkien (nan-tw), Sursilvan (rm-sursilv), Vallader (rm-vallader), Sakha (sah), Santali (sat), Scots (sco), Saraiki (skr), Tigre (tig), Tok Pisin (tpi), Akuapem Twi (tw-akuapem), Asante Twi (tw-asante), Votic (vot), Waray (war), Cantonese (yue).

## Intermediate Checkpoints

To enable research on training dynamics, the intermediate checkpoints for all three training iterations are made available under the CC-BY-NC-SA-4.0 license via a protected link.

## Citing and Funding Information

@inproceedings{boito2024mhubert,
  author={Boito, Marcely Zanon and Iyer, Vivek and Lagos, Nikolaos and Besacier, Laurent and Calapodescu, Ioan},
  title={{mHuBERT-147: A Compact Multilingual HuBERT Model}},
  year={2024},
  booktitle={Interspeech 2024},
}

This is an output of the European Project UTTER (Unified Transcription and Translation for Extended Reality) funded by the European Union's Horizon Europe Research and Innovation programme under grant agreement number 101070631.

For more information, please visit https://he-utter.eu/

NAVER LABS Europe: https://europe.naverlabs.com/