Update README.md
README.md
@@ -265,7 +265,7 @@ The compression module is a light-weight transformer that takes as input the hid
 
 ## Version
 
-This version of ZeroSwot is trained with ASR data from CommonVoice
+This version of ZeroSwot is trained with ASR data from CommonVoice. It adapts [wav2vec2.0-large](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) to the embedding space of the [nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) model.
 
 We have more versions available:
 
@@ -335,7 +335,7 @@ print(translation)
 
 ## Results
 
-BLEU scores on CoVoST-2 test compared to supervised SOTA models
+BLEU scores on CoVoST-2 test compared to supervised SOTA models XLS-R-1B and SeamlessM4T-Medium. You can refer to Table 5 of the Results section in the paper for more details.
 
 | Models | ZS | Size (B) | Ar | Ca | Cy | De | Et | Fa | Id | Ja | Lv | Mn | Sl | Sv | Ta | Tr | Zh | Average |
 |:--------------:|:----:|:----------:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:-------:|
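The added sentence notes that the speech encoder is adapted to the embedding space of nllb-200-distilled-600M. As a minimal sketch (not part of the ZeroSwot codebase, assuming only the Hugging Face `transformers` `AutoConfig` API), this shows that the two public models share the same hidden width, which is what lets the compression module project wav2vec2.0 hidden states into the NLLB embedding space:

```python
from transformers import AutoConfig

# Illustrative check only: compare the wav2vec2.0-large encoder width with the
# embedding dimension of nllb-200-distilled-600M, as read from their public configs.
speech_cfg = AutoConfig.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
nllb_cfg = AutoConfig.from_pretrained("facebook/nllb-200-distilled-600M")

print("wav2vec2.0 hidden size:", speech_cfg.hidden_size)  # expected: 1024
print("NLLB embedding dim    :", nllb_cfg.d_model)         # expected: 1024
```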