Update Model URLs + Notification about PyTorch versions #2
by reach-vb (HF staff) - opened

README.md CHANGED
@@ -3,6 +3,7 @@ inference: false
 tags:
 - SeamlessM4T
 license: cc-by-nc-4.0
+library_name: fairseq2
 ---
 
 # SeamlessM4T - On-Device
@@ -10,20 +11,21 @@ SeamlessM4T is designed to provide high quality translation, allowing people fro
 
 SeamlessM4T covers:
 - 📥 101 languages for speech input
-- ⌨️
-- 🗣️
+- ⌨️ 96 languages for text input/output
+- 🗣️ 35 languages for speech output
 
 Apart from [SeamlessM4T-LARGE (2.3B)](https://huggingface.co/facebook/seamless-m4t-large) and [SeamlessM4T-MEDIUM (1.2B)](https://huggingface.co/facebook/seamless-m4t-medium) models, we are also developing a small model (281M) targeting on-device inference.
 [This folder](https://huggingface.co/facebook/seamless-m4t-unity-small) contains an example to run an exported small model covering most tasks (ASR/S2TT/S2ST). The model can be executed on popular mobile devices with PyTorch Mobile (https://pytorch.org/mobile/home/).
 
 ## Overview
+| Model | Checkpoint | Num Params | Disk Size | Supported Tasks | Supported Languages |
+|---------|------------|------------|-----------|-----------------|-------------------------|
+| UnitY-Small | [🤗 Model card](https://huggingface.co/facebook/seamless-m4t-unity-small) - [checkpoint](https://huggingface.co/facebook/seamless-m4t-unity-small/resolve/main/unity_on_device.ptl) | 281M | 862MB | S2ST, S2TT, ASR | eng, fra, hin, por, spa |
+| UnitY-Small-S2T | [🤗 Model card](https://huggingface.co/facebook/seamless-m4t-unity-small-s2t) - [checkpoint](https://huggingface.co/facebook/seamless-m4t-unity-small-s2t/resolve/main/unity_on_device_s2t.ptl) | 235M | 637MB | S2TT, ASR | eng, fra, hin, por, spa |
 
-
-|---------|----------------------|-------------------------|-------------------------|
-| [UnitY-Small](https://huggingface.co/facebook/seamless-m4t-unity-small/resolve/main/unity_on_device.ptl) | 862MB | S2ST, S2TT, ASR | eng, fra, hin, por, spa |
-| [UnitY-Small-S2T](https://huggingface.co/facebook/seamless-m4t-unity-small-s2t/resolve/main/unity_on_device_s2t.ptl) | 637MB | S2TT, ASR | eng, fra, hin, por, spa |
+UnitY-Small-S2T is a pruned version of UnitY-Small without the second-pass unit decoding.
 
-
+Note: When using the PyTorch runtime in Python, only **pytorch<=1.11.0** is supported for **UnitY-Small (281M)**. We tested UnitY-Small-S2T (235M) and it works with later versions as well.
 
 ## Inference
 To use an exported model, users don't need the seamless_communication or fairseq2 dependencies.
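To make the Inference claim above concrete, here is a minimal sketch of calling an exported checkpoint with nothing but torch and torchaudio installed. The call signature is an assumption for illustration (a 16 kHz mono waveform in, a `tgt_lang` keyword, and a `(text, units, waveform)` triple out are not spelled out in this diff), so check the full model card before relying on it; the version pin follows the note above for the 281M model.

```python
# Minimal sketch: run the exported UnitY-Small checkpoint without
# seamless_communication or fairseq2. Per the note above, the 281M model
# needs an older runtime, e.g.:  pip install "torch==1.11.0" "torchaudio==0.11.0"
#
# Assumptions (not confirmed by this diff): the module accepts a 16 kHz mono
# waveform plus a tgt_lang keyword and returns (text, units, waveform).
import torch
import torchaudio

audio, sr = torchaudio.load("input.wav")                      # (channels, time)
if sr != 16000:
    audio = torchaudio.functional.resample(audio, sr, 16000)  # resample to 16 kHz

model = torch.jit.load("unity_on_device.ptl")                 # plain TorchScript load

with torch.inference_mode():
    text, units, waveform = model(audio, tgt_lang="eng")      # assumed output triple

print(text)
# Assumes the returned waveform is a 1-D 16 kHz tensor.
torchaudio.save("translated.wav", waveform.unsqueeze(0), sample_rate=16000)
```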
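The `.ptl` files in the table are PyTorch Lite Interpreter archives, which is what lets them run under PyTorch Mobile (on Android they would be loaded with `org.pytorch.LiteModuleLoader`, on iOS with the LibTorch-Lite pod). As a quick desktop sanity check that a downloaded archive is intact, the lite-interpreter loader bundled with torch can open it directly; this is only a verification sketch, not the on-device deployment path.

```python
# Sanity check: confirm the downloaded .ptl is a loadable Lite Interpreter
# module (the format PyTorch Mobile executes). This uses the Python-side
# loader that ships with torch; it is not the mobile deployment path itself.
from torch.jit.mobile import _load_for_lite_interpreter

lite_module = _load_for_lite_interpreter("unity_on_device_s2t.ptl")
print(type(lite_module).__name__)  # expected: LiteScriptModule
```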