Changhan, reach-vb (HF staff) committed
Commit: 4e5c41b
1 parent: b6d5b80

Update README.md (#22)


- Update README.md (00be8f4ac36e47fa4545349752b2dd8fa341e01b)
- Update README.md (9067aa2aed66f92b4ab2559551bcceab578870e3)


Co-authored-by: Vaibhav Srivastav <reach-vb@users.noreply.huggingface.co>

Files changed (1): README.md (+52 -0)
README.md CHANGED
@@ -32,6 +32,58 @@ This is the "large" variant of the unified model, which enables multiple tasks w
 
  We provide extensive evaluation results of SeamlessM4T-Medium and SeamlessM4T-Large in the SeamlessM4T paper (as averages) in the `metrics` files above.
 
+ ## 🤗 Transformers Usage
+
+ First, load the processor and a checkpoint of the model:
+
+ ```python
+ >>> from transformers import AutoProcessor, SeamlessM4TModel
+ >>> processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-large")
+ >>> model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-large")
+ ```
+
+ You can seamlessly use this model on text or on audio, to generate either translated text or translated audio.
+
+ Here is how to use the processor to process text and audio:
+
+ ```python
+ >>> # let's load an audio sample from an Arabic speech corpus
+ >>> from datasets import load_dataset
+ >>> dataset = load_dataset("arabic_speech_corpus", split="test", streaming=True)
+ >>> audio_sample = next(iter(dataset))["audio"]
+ >>> # now, process it
+ >>> audio_inputs = processor(audios=audio_sample["array"], return_tensors="pt")
+ >>> # now, process some English text as well
+ >>> text_inputs = processor(text="Hello, my dog is cute", src_lang="eng", return_tensors="pt")
+ ```
+
+
+ ### Speech
+
+ [`SeamlessM4TModel`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TModel) can *seamlessly* generate text or speech with few or no changes. Let's target Russian voice translation:
+
+ ```python
+ >>> audio_array_from_text = model.generate(**text_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze()
+ >>> audio_array_from_audio = model.generate(**audio_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze()
+ ```
+
+ With essentially the same code, we have translated English text and Arabic speech into Russian speech.
+
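The arrays returned by `generate` above are 1-D float waveforms; to listen to them you still have to write them to an audio file. The 16 kHz sampling rate below is an assumption (check `model.config.sampling_rate` for the actual value), and `write_wav` is a hypothetical helper of ours, not part of `transformers`. A stdlib-only sketch, using a synthetic test tone in place of a real model output:

```python
import math
import struct
import wave

def write_wav(path, samples, sample_rate=16_000):
    """Write an iterable of floats in [-1.0, 1.0] to a 16-bit mono WAV file."""
    with wave.open(path, "wb") as wav_file:
        wav_file.setnchannels(1)   # mono
        wav_file.setsampwidth(2)   # 16-bit PCM
        wav_file.setframerate(sample_rate)
        # clamp each float sample and scale it to a signed 16-bit integer
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        wav_file.writeframes(frames)

# A 0.5-second 440 Hz test tone stands in for audio_array_from_text here.
tone = [math.sin(2 * math.pi * 440 * t / 16_000) for t in range(8_000)]
write_wav("translated_speech.wav", tone)
```

With a real run you would pass `audio_array_from_text` (or `audio_array_from_audio`) instead of the test tone.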
+ ### Text
+
+ Similarly, you can generate translated text from audio files or from text with the same model. You only have to pass `generate_speech=False` to [`SeamlessM4TModel.generate`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TModel.generate).
+ This time, let's translate to French.
+
+ ```python
+ >>> # from audio
+ >>> output_tokens = model.generate(**audio_inputs, tgt_lang="fra", generate_speech=False)
+ >>> translated_text_from_audio = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)
+ >>> # from text
+ >>> output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False)
+ >>> translated_text_from_text = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)
+ ```
+
+
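Both decode calls above repeat the same indexing pattern: `output_tokens[0]` holds the batch of generated token-id sequences, and `.tolist()[0]` takes the first one. As an illustration only — `translate_text` is a hypothetical helper of ours, not a `transformers` API — the pattern could be wrapped like this, reusing the `model` and `processor` loaded earlier:

```python
def translate_text(model, processor, inputs, tgt_lang):
    """Translate pre-processed text or audio inputs into text in `tgt_lang`."""
    # Ask for token ids only; skip the speech-synthesis pass entirely.
    output_tokens = model.generate(**inputs, tgt_lang=tgt_lang, generate_speech=False)
    # output_tokens[0] holds the generated sequences; decode the first one.
    return processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)
```

For example, `translate_text(model, processor, audio_inputs, "fra")` reproduces the `translated_text_from_audio` line above.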
  ## Instructions to run inference with SeamlessM4T models
 
  The SeamlessM4T models are currently available through the `seamless_communication` package. The `seamless_communication`