Commit 06c5c9a
Authors: reach-vb, sanchit-gandhi (HF staff)
Parent: e43ec49

Update weights (#3)


- Update weights and README.md (ca3a1fdbf88967caf76ba804613d4df0999d0a5e)


Co-authored-by: Sanchit Gandhi <sanchit-gandhi@users.noreply.huggingface.co>

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +9 -12
  3. unity_on_device_s2t.ptl +2 -2
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.ptl filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -14,9 +14,9 @@ SeamlessM4T covers:
 - ⌨️ 96 Languages for text input/output
 - 🗣️ 35 languages for speech output.
 
-Apart from [SeamlessM4T-LARGE (2.3B)](https://huggingface.co/facebook/seamless-m4t-large) and [SeamlessM4T-MEDIUM (1.2B)](https://huggingface.co/facebook/seamless-m4t-medium) models, we are also developing a small model (281M) targeting for on-device inference. [This folder]((https://huggingface.co/facebook/seamless-m4t-unity-small-s2t)) contains an example to run an exported small model covering ASR and S2TT. The model could be executed on popular mobile devices with Pytorch Mobile (https://pytorch.org/mobile/home/).
+Apart from [SeamlessM4T-LARGE (2.3B)](https://huggingface.co/facebook/seamless-m4t-large) and [SeamlessM4T-MEDIUM (1.2B)](https://huggingface.co/facebook/seamless-m4t-medium) models, we are also developing a small model (281M) targeting for on-device inference.
 
-Refer to [UnitY-Small](https://huggingface.co/facebook/seamless-m4t-unity-small) if you also wish to cover speech-to-speech translation (S2ST) in addition to ASR and S2TT tasks.
+This README contains an example to run an exported small model covering most tasks (ASR/S2TT/S2ST). The model could be executed on popular mobile devices with Pytorch Mobile (https://pytorch.org/mobile/home/).
 
 ## Overview
 | Model | Checkpoint | Num Params | Disk Size | Supported Tasks | Supported Languages|
@@ -26,30 +26,27 @@ Refer to [UnitY-Small](https://huggingface.co/facebook/seamless-m4t-unity-small)
 
 UnitY-Small-S2T is a pruned version of UnitY-Small without 2nd pass unit decoding.
 
-Note: If using pytorch runtime in python, only **pytorch<=1.11.0** is supported for **UnitY-Small(281M)**. We tested UnitY-Small-S2T(235M), it works with later versions.
-
 ## Inference
 To use exported model, users don't need seamless_communication or fairseq2 dependency.
 
 ```python
 import torchaudio
 import torch
+
 audio_input, _ = torchaudio.load(TEST_AUDIO_PATH) # Load waveform using torchaudio
 
 s2t_model = torch.jit.load("unity_on_device_s2t.ptl") # Load exported S2T model
-text = s2t_model(audio_input, tgt_lang=TGT_LANG) # Forward call with tgt_lang specified for ASR or S2TT
-print(f"{lang}:{text}")
 
-s2st_model = torch.jit.load("unity_on_device.ptl")
-text, units, waveform = s2st_model(audio_input, tgt_lang=TGT_LANG) # S2ST model also returns waveform
-print(f"{lang}:{text}")
-torchaudio.save(f"{OUTPUT_FOLDER}/{lang}.wav", waveform.unsqueeze(0), sample_rate=16000) # Save output waveform to local file
+with torch.no_grad():
+    text = s2t_model(audio_input, tgt_lang=TGT_LANG) # Forward call with tgt_lang specified for ASR or S2TT
+
+print(text) # Show text output
 ```
 
 Also running the exported model doesn't need python runtime. For example, you could load this model in C++ following [this tutorial](https://pytorch.org/tutorials/advanced/cpp_export.html), or building your own on-device applications similar to [this example](https://github.com/pytorch/ios-demo-app/tree/master/SpeechRecognition)
 
 # Citation
-If you use SeamlessM4T in your work or any models/datasets/artifacts published in SeamlessM4T, please cite :
+If you use SeamlessM4T in your work or any models/datasets/artifacts published in SeamlessM4T, please cite:
 
 ```bibtex
 @article{seamlessm4t2023,
@@ -61,4 +58,4 @@ If you use SeamlessM4T in your work or any models/datasets/artifacts published i
 ```
 # License
 
-seamless_communication is CC-BY-NC 4.0 licensed
+seamless_communication is CC-BY-NC 4.0 licensed
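The updated README snippet keeps TEST_AUDIO_PATH and TGT_LANG as placeholders and feeds the loaded waveform straight into the model. As a minimal sketch of how those placeholders might be filled in, assuming the exported model expects 16 kHz mono input (the removed S2ST snippet wrote its output at sample_rate=16000) and that tgt_lang takes a code such as "eng":

```python
import torch
import torchaudio

TEST_AUDIO_PATH = "input.wav"  # hypothetical local file
TGT_LANG = "eng"               # assumed language-code format; check the model card

audio_input, sample_rate = torchaudio.load(TEST_AUDIO_PATH)  # (channels, frames)
audio_input = audio_input.mean(dim=0, keepdim=True)          # downmix to mono if stereo
if sample_rate != 16000:
    # Assumption: the model expects 16 kHz input, matching the 16 kHz output
    # rate used by the removed S2ST example.
    audio_input = torchaudio.functional.resample(audio_input, sample_rate, 16000)

s2t_model = torch.jit.load("unity_on_device_s2t.ptl")  # Load exported S2T model
with torch.no_grad():
    text = s2t_model(audio_input, tgt_lang=TGT_LANG)
print(text)
```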
unity_on_device_s2t.ptl CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:230b8cd72c6eac9b7d021e2edf7b865a89dcc410b49bf6835c817fc16dcd5f01
-size 667605684
+oid sha256:834591dbf6df69dd08f93ab303b7d506df8830552b8472818cc16c7564b2b9df
+size 504153032
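Because unity_on_device_s2t.ptl is stored with Git LFS, the oid above is the SHA-256 of the checkpoint itself. A small sketch for confirming that a locally downloaded copy matches the updated pointer (it assumes the real file, not the pointer stub, is present, e.g. after `git lfs pull`):

```python
import hashlib
import os

PTL_PATH = "unity_on_device_s2t.ptl"  # assumed local path to the downloaded checkpoint

sha256 = hashlib.sha256()
with open(PTL_PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)

print("size:  ", os.path.getsize(PTL_PATH))  # expected: 504153032
print("sha256:", sha256.hexdigest())         # expected: the oid shown in the pointer above
```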