reach-vb, ylacombe committed
Commit 76a062d
1 Parent(s): 4e3960f

Update README.md with hyperlinks and more descriptions on the difference with small-s2t (#1)


- Update README.md with hyperlinks and more descriptions on the difference with small-s2t (a958307844313934704dfba378fff1df0aadc4c5)


Co-authored-by: Yoach Lacombe <ylacombe@users.noreply.huggingface.co>

Files changed (1)
README.md +2 -2
README.md CHANGED
@@ -14,7 +14,7 @@ SeamlessM4T covers:
  - 🗣️ 35 languages for speech output.
 
  Apart from [SeamlessM4T-LARGE (2.3B)](https://huggingface.co/facebook/seamless-m4t-large) and [SeamlessM4T-MEDIUM (1.2B)](https://huggingface.co/facebook/seamless-m4t-medium) models, we are also developing a small model (281M) targeting for on-device inference.
- This folder contains an example to run an exported small model covering most tasks (ASR/S2TT/S2ST). The model could be executed on popular mobile devices with Pytorch Mobile (https://pytorch.org/mobile/home/).
+ [This folder](https://huggingface.co/facebook/seamless-m4t-unity-small) contains an example to run an exported small model covering most tasks (ASR/S2TT/S2ST). The model could be executed on popular mobile devices with Pytorch Mobile (https://pytorch.org/mobile/home/).
 
  ## Overview
 
@@ -23,7 +23,7 @@ This folder contains an example to run an exported small model covering most tasks
  | [UnitY-Small](https://huggingface.co/facebook/seamless-m4t-unity-small/resolve/main/unity_on_device.ptl) | 862MB | S2ST, S2TT, ASR |eng, fra, hin, por, spa|
  | [UnitY-Small-S2T](https://huggingface.co/facebook/seamless-m4t-unity-small-s2t/resolve/main/unity_on_device_s2t.ptl) | 637MB | S2TT, ASR |eng, fra, hin, por, spa|
 
- UnitY-Small-S2T is a pruned version of UnitY-Small without 2nd pass unit decoding.
+ [UnitY-Small-S2T](https://huggingface.co/facebook/seamless-m4t-unity-small-s2t) is a pruned version of [UnitY-Small](https://huggingface.co/facebook/seamless-m4t-unity-small) without 2nd pass unit decoding. Unlike [UnitY-Small](https://huggingface.co/facebook/seamless-m4t-unity-small), it can only be used for ASR and S2TT tasks.
 
  ## Inference
  To use exported model, users don't need seamless_communication or fairseq2 dependency.
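
For context on the "no seamless_communication or fairseq2 dependency" claim above, here is a minimal sketch of running the exported UnitY-Small-S2T checkpoint with plain PyTorch and torchaudio. The file name, target-language value, and forward-call signature follow the upstream on-device example and should be treated as assumptions, not as part of this commit.

```python
# Minimal sketch: run the exported S2T model with plain PyTorch
# (no seamless_communication / fairseq2 install required).
import torch
import torchaudio

TEST_AUDIO_PATH = "input.wav"  # 16 kHz mono speech file (assumed)
TGT_LANG = "eng"               # one of: eng, fra, hin, por, spa

# Load the waveform as a tensor.
audio_input, _ = torchaudio.load(TEST_AUDIO_PATH)

# Load the exported TorchScript checkpoint (unity_on_device_s2t.ptl).
s2t_model = torch.jit.load("unity_on_device_s2t.ptl")

with torch.no_grad():
    # Forward call with a target language: ASR when TGT_LANG matches the
    # spoken language, S2TT otherwise (signature assumed from the upstream example).
    text = s2t_model(audio_input, tgt_lang=TGT_LANG)

print(text)
```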