---
inference: false
tags:
  - SeamlessM4T
license: cc-by-nc-4.0
---

# SeamlessM4T

SeamlessM4T is designed to provide high-quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text.

SeamlessM4T covers:

- 📥 101 languages for speech input
- ⌨️ 96 languages for text input/output
- 🗣️ 35 languages for speech output

This unified model enables multiple tasks without relying on multiple separate models:

- Speech-to-speech translation (S2ST)
- Speech-to-text translation (S2TT)
- Text-to-speech translation (T2ST)
- Text-to-text translation (T2TT)
- Automatic speech recognition (ASR)

## SeamlessM4T models

| Model Name         | #params | checkpoint                 | metrics |
| ------------------ | ------- | -------------------------- | ------- |
| SeamlessM4T-Large  | 2.3B    | 🤗 Model card - checkpoint | metrics |
| SeamlessM4T-Medium | 1.2B    | 🤗 Model card - checkpoint | metrics |

We provide the extensive evaluation results of SeamlessM4T-Large and SeamlessM4T-Medium reported in the paper (as averages) in the metrics files above.

## Instructions to run inference with SeamlessM4T models

Install `seamless_communication` by following the instructions here: Installation.
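For reference, a typical install from a cloned repository might look like the sketch below; the linked Installation instructions are authoritative and may list additional prerequisites.

```bash
# Clone the repository and install the package into the current environment.
git clone https://github.com/facebookresearch/seamless_communication.git
cd seamless_communication
pip install .
```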

Inference is run through a `Translator` object, instantiated with a multitask UnitY model, one of:

- `multitask_unity_large`
- `multitask_unity_medium`

and the vocoder `vocoder_36langs`:

```python
import torch
import torchaudio
from seamless_communication.models.inference import Translator

# Initialize a Translator object with a multitask model and vocoder on the GPU.
translator = Translator("multitask_unity_large", "vocoder_36langs", torch.device("cuda:0"))
```
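If no GPU is available, the device can be chosen at runtime instead (a minimal sketch, assuming only that `Translator` accepts any `torch.device`, as in the call above):

```python
import torch
from seamless_communication.models.inference import Translator

# Use the first GPU when present, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
translator = Translator("multitask_unity_large", "vocoder_36langs", device)
```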

Now `predict()` can be used to run inference as many times as needed on any of the supported tasks.

Given an input audio file `<path_to_input_audio>`, or an input text `<input_text>` in `<src_lang>`, we can translate into `<tgt_lang>` as follows:

S2ST and T2ST:

```python
# S2ST
translated_text, wav, sr = translator.predict(<path_to_input_audio>, "s2st", <tgt_lang>)

# T2ST
translated_text, wav, sr = translator.predict(<input_text>, "t2st", <tgt_lang>, src_lang=<src_lang>)
```

Note that `<src_lang>` must be specified for T2ST.
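For illustration, concrete calls might look like the following (the file name is hypothetical, and `"eng"`/`"fra"` follow the three-letter language codes SeamlessM4T uses; check the supported-language lists for your pair):

```python
# Hypothetical example: English speech in, French speech out.
translated_text, wav, sr = translator.predict("example_eng.wav", "s2st", "fra")

# Hypothetical example: English text in, French speech out.
translated_text, wav, sr = translator.predict("Hello, world!", "t2st", "fra", src_lang="eng")
```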

The generated units are synthesized and the output audio file is saved with:

```python
wav, sr = translator.synthesize_speech(<speech_units>, <tgt_lang>)

# Save the translated audio generation.
torchaudio.save(
    <path_to_save_audio>,
    wav[0].cpu(),
    sample_rate=sr,
)
```
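Because `predict()` already returns the waveform and sample rate for speech-output tasks, translation and saving can also be combined directly (a sketch with placeholder file names and an assumed target language code):

```python
# Translate English speech to French speech and write the result to disk.
translated_text, wav, sr = translator.predict("example_eng.wav", "s2st", "fra")
torchaudio.save("out_fra.wav", wav[0].cpu(), sample_rate=sr)
```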

S2TT, T2TT and ASR:

```python
# S2TT
translated_text, _, _ = translator.predict(<path_to_input_audio>, "s2tt", <tgt_lang>)

# ASR
# This is equivalent to S2TT with `<tgt_lang>=<src_lang>`.
transcribed_text, _, _ = translator.predict(<path_to_input_audio>, "asr", <src_lang>)

# T2TT
translated_text, _, _ = translator.predict(<input_text>, "t2tt", <tgt_lang>, src_lang=<src_lang>)
```

Note that `<src_lang>` must be specified for T2TT.
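Concretely, the text and transcription tasks might be called like this (same caveats as above: file names are hypothetical and the language codes are assumptions):

```python
# S2TT: English speech to French text.
translated_text, _, _ = translator.predict("example_eng.wav", "s2tt", "fra")

# ASR: transcribe English speech.
transcribed_text, _, _ = translator.predict("example_eng.wav", "asr", "eng")

# T2TT: English text to French text.
translated_text, _, _ = translator.predict("Hello, world!", "t2tt", "fra", src_lang="eng")
```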

Inference can also be run using the CLI, from the root directory of the repository.

The model can be specified with, e.g., `--model_name multitask_unity_large`:

S2ST:

```bash
python scripts/m4t/predict/predict.py <path_to_input_audio> s2st <tgt_lang> --output_path <path_to_save_audio> --model_name multitask_unity_large
```

S2TT:

```bash
python scripts/m4t/predict/predict.py <path_to_input_audio> s2tt <tgt_lang>
```

T2TT:

```bash
python scripts/m4t/predict/predict.py <input_text> t2tt <tgt_lang> --src_lang <src_lang>
```

T2ST:

```bash
python scripts/m4t/predict/predict.py <input_text> t2st <tgt_lang> --src_lang <src_lang> --output_path <path_to_save_audio>
```

ASR:

```bash
python scripts/m4t/predict/predict.py <path_to_input_audio> asr <tgt_lang>
```
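As a concrete illustration (hypothetical file names; language codes as assumed above):

```bash
# English speech -> French speech, saved to out_fra.wav.
python scripts/m4t/predict/predict.py example_eng.wav s2st fra --output_path out_fra.wav --model_name multitask_unity_large
```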

## Citation

If you use SeamlessM4T in your work, or any models/datasets/artifacts published with SeamlessM4T, please cite:

```bibtex
@article{seamlessm4t2023,
  title={SeamlessM4T—Massively Multilingual \& Multimodal Machine Translation},
  author={{Seamless Communication}, Lo\"{i}c Barrault, Yu-An Chung, Mariano Cora Meglioli, David Dale, Ning Dong, Paul-Ambroise Duquenne, Hady Elsahar, Hongyu Gong, Kevin Heffernan, John Hoffman, Christopher Klaiber, Pengwei Li, Daniel Licht, Jean Maillard, Alice Rakotoarison, Kaushik Ram Sadagopan, Guillaume Wenzek, Ethan Ye, Bapi Akula, Peng-Jen Chen, Naji El Hachem, Brian Ellis, Gabriel Mejia Gonzalez, Justin Haaheim, Prangthip Hansanti, Russ Howes, Bernie Huang, Min-Jae Hwang, Hirofumi Inaguma, Somya Jain, Elahe Kalbassi, Amanda Kallet, Ilia Kulikov, Janice Lam, Daniel Li, Xutai Ma, Ruslan Mavlyutov, Benjamin Peloquin, Mohamed Ramadan, Abinesh Ramakrishnan, Anna Sun, Kevin Tran, Tuan Tran, Igor Tufanov, Vish Vogeti, Carleigh Wood, Yilin Yang, Bokai Yu, Pierre Andrews, Can Balioglu, Marta R. Costa-juss\`{a}, Onur \c{C}elebi, Maha Elbayad, Cynthia Gao, Francisco Guzm\'an, Justine Kao, Ann Lee, Alexandre Mourachko, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Paden Tomasello, Changhan Wang, Jeff Wang, Skyler Wang},
  journal={ArXiv},
  year={2023}
}
```

## License

`seamless_communication` is CC-BY-NC 4.0 licensed.