---
license: cc-by-nc-4.0
datasets:
  - slone/nllb-200-10M-sample
pipeline_tag: translation
language:
  - ak
  - am
  - ar
  - awa
  - azj
  - bm
  - ban
  - be
  - bem
  - bn
  - bho
  - bjn
  - bug
  - bg
  - ca
  - ceb
  - cs
  - cjk
  - ckb
  - crh
  - da
  - de
  - dik
  - dyu
  - el
  - en
  - eo
  - et
  - ee
  - fo
  - fj
  - fi
  - fon
  - fr
  - fur
  - ff
  - gaz
  - gd
  - ga
  - gl
  - gn
  - gu
  - ht
  - ha
  - he
  - hi
  - hne
  - hr
  - hu
  - hy
  - ig
  - ilo
  - id
  - is
  - it
  - jv
  - ja
  - kab
  - kac
  - kam
  - kn
  - ks
  - ka
  - kk
  - kbp
  - kea
  - mn
  - km
  - ki
  - rw
  - ky
  - kmb
  - kmr
  - kr
  - kg
  - ko
  - lo
  - lij
  - li
  - ln
  - lt
  - lmo
  - ltg
  - lb
  - lua
  - lg
  - luo
  - lus
  - lv
  - mag
  - mai
  - ml
  - mr
  - min
  - mk
  - mt
  - mni
  - mos
  - mi
  - my
  - nl
  - nb
  - ne
  - nso
  - nus
  - ny
  - oc
  - ory
  - pag
  - pa
  - pap
  - pbt
  - fa
  - plt
  - pl
  - pt
  - prs
  - qu
  - ro
  - rn
  - ru
  - sg
  - sa
  - sat
  - scn
  - shn
  - si
  - sk
  - sl
  - sm
  - sn
  - sd
  - so
  - st
  - es
  - sc
  - sr
  - ss
  - su
  - sv
  - sw
  - szl
  - ta
  - taq
  - tt
  - te
  - tg
  - tl
  - ti
  - tpi
  - tn
  - ts
  - tk
  - tum
  - tr
  - tw
  - tzm
  - ug
  - uk
  - umb
  - ur
  - uz
  - vec
  - vi
  - war
  - wo
  - xh
  - yi
  - yo
  - zh
  - ms
  - zu
---

This model is a truncated version of NLLB-200-600M (6 layers instead of 12 and a hidden dimension of 512 instead of 1024), with 175M parameters (131M of which are in the embeddings).
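
A minimal sketch of loading the checkpoint and verifying the truncated architecture, assuming it loads with the standard `transformers` seq2seq classes; `MODEL_ID` is a placeholder for this repository's id:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_ID = "MODEL_ID"  # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

cfg = model.config
print(cfg.encoder_layers, cfg.decoder_layers)  # expected: 6 6
print(cfg.d_model)                             # expected: 512
print(f"{model.num_parameters() / 1e6:.0f}M parameters")  # roughly 175M
```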

This model was fine-tuned on the slone/nllb-200-10M-sample subset of the NLLB dataset, using only the samples with a BLASER score above 3.5.
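
A minimal sketch of that data selection, assuming the dataset exposes a per-pair BLASER score column; the column name `blaser_score` below is only a guess, so check the dataset card for the actual field:

```python
from datasets import load_dataset

ds = load_dataset("slone/nllb-200-10M-sample", split="train")

# Keep only translation pairs whose BLASER quality score exceeds 3.5.
ds_filtered = ds.filter(lambda x: x["blaser_score"] > 3.5)
print(len(ds), "->", len(ds_filtered))
```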

On its own it translates poorly, but it can serve as a base model for further fine-tuning on a small number of languages. Before fine-tuning, it is recommended to prune the model's vocabulary so that only the tokens used by the intended languages are kept (see the sketch below).
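
A minimal sketch of such vocabulary pruning, assuming an NLLB/M2M100-style checkpoint loaded with `transformers`; `MODEL_ID` and the corpus are placeholders, and note that the tokenizer's own SentencePiece vocabulary must also be rebuilt to match the new token ids, which this sketch does not show:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_ID = "MODEL_ID"  # placeholder: replace with this repository's id
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

# A corpus representative of the languages you intend to keep.
corpus = ["Example sentence one.", "Example sentence two."]

# Collect token ids that actually occur in the corpus, plus all special
# tokens (language codes, <pad>, <unk>, </s>, ...).
used_ids = set(tokenizer.all_special_ids)
for text in corpus:
    used_ids.update(tokenizer(text).input_ids)
kept_ids = sorted(used_ids)

# Slice the shared embedding matrix down to the kept tokens and shrink
# the model accordingly (the tied lm_head follows the shared embedding).
new_embeddings = model.model.shared.weight.data[kept_ids].clone()
model.resize_token_embeddings(len(kept_ids))
model.model.shared.weight.data = new_embeddings
model.save_pretrained("nllb-pruned")
```

Since most of the 175M parameters are embeddings, keeping only the tokens of a few languages shrinks the model substantially before any fine-tuning starts.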