valhalla committed on
Commit
2059ae1
1 Parent(s): 16a56a4
README.md ADDED
@@ -0,0 +1,122 @@
+ ---
+ language:
+ - en
+ - es
+ datasets:
+ - mustc
+ tags:
+ - audio
+ - speech-translation
+ - automatic-speech-recognition
+ license: mit
+ ---
+
+
+ # S2T-SMALL-MUSTC-EN-ES-ST
+
+ `s2t-small-mustc-en-es-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
+ The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
+ [this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text).
+
+
+ ## Model description
+
+ S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
+ Translation (ST). It uses a convolutional downsampler that shortens the speech input to a quarter of its original length before it
+ is fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
+ transcripts/translations autoregressively.
+
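+ The arithmetic behind that reduction can be sanity-checked in a few lines of PyTorch. The two strided convolutions below mirror the kernel sizes and layer count in this repository's `config.json`; the channel widths are illustrative only, not the model's internals:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ # two Conv1d layers with stride 2 ("conv_kernel_sizes": [5, 5], "num_conv_layers": 2);
+ # each halves the time axis, so T input frames become roughly T/4
+ downsampler = nn.Sequential(
+     nn.Conv1d(80, 1024, kernel_size=5, stride=2, padding=2),   # 80 mel bins in
+     nn.Conv1d(1024, 256, kernel_size=5, stride=2, padding=2),  # illustrative widths
+ )
+
+ feats = torch.randn(1, 80, 584)   # (batch, mel bins, frames)
+ print(downsampler(feats).shape)   # torch.Size([1, 256, 146]): 584 frames -> 146
+ ```
+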
+ ## Intended uses & limitations
+
+ This model can be used for end-to-end translation of English speech into Spanish text.
+ See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
+
+
+ ### How to use
+
+ As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
+ translation by passing the speech features to the model.
+
+ *Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
+ filter bank features. Make sure to install the `torchaudio` package before running this example.*
+
+ You can either install them as extra speech dependencies with
+ `pip install "transformers[speech,sentencepiece]"` or install the packages separately
+ with `pip install torchaudio sentencepiece`.
+
+
+ ```python
+ import torch
+ from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
+ from datasets import load_dataset
+ import soundfile as sf
+
+ model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-es-st")
+ processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-es-st")
+
+ def map_to_array(batch):
+     # read the audio file into a float array
+     speech, _ = sf.read(batch["file"])
+     batch["speech"] = speech
+     return batch
+
+ ds = load_dataset(
+     "patrickvonplaten/librispeech_asr_dummy",
+     "clean",
+     split="validation"
+ )
+ ds = ds.map(map_to_array)
+
+ # extract 80-channel log mel filter bank features and an attention mask
+ inputs = processor(
+     ds["speech"][0],
+     sampling_rate=16_000,
+     return_tensors="pt"
+ )
+ generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
+
+ translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
+ ```
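+
+ `generate` reads its decoding defaults from the model config (beam search with `num_beams=5`, `max_length=200`, and `early_stopping=true` in the `config.json` below), so no extra decoding arguments are needed here.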
+
+
+ ## Training data
+
+ `s2t-small-mustc-en-es-st` is trained on the English-Spanish subset of [MuST-C](https://ict.fbk.eu/must-c/).
+ MuST-C is a multilingual speech translation corpus whose size and quality facilitate the training of end-to-end systems
+ for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
+ hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
+ transcriptions and translations.
+
+
+ ## Training procedure
+
+ ### Preprocessing
+
+ The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
+ WAV/FLAC audio files via PyKaldi or torchaudio. Utterance-level CMVN (cepstral mean and variance normalization) is then
+ applied to each example.
+
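+ As a rough sketch of that front end with `torchaudio`'s Kaldi-compliance module (the audio path is a placeholder):
+
+ ```python
+ import torchaudio
+ from torchaudio.compliance import kaldi
+
+ waveform, sample_rate = torchaudio.load("utterance.wav")  # placeholder path
+ fbank = kaldi.fbank(waveform, num_mel_bins=80, sample_frequency=sample_rate)
+
+ # utterance-level CMVN: zero mean, unit variance along the time axis
+ fbank = (fbank - fbank.mean(dim=0)) / fbank.std(dim=0)
+ ```
+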
+ The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 8,000.
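+
+ A minimal sketch of that text pipeline, loading the `sentencepiece.bpe.model` file shipped in this repository (path relative to a local clone):
+
+ ```python
+ import sentencepiece as spm
+
+ sp = spm.SentencePieceProcessor(model_file="sentencepiece.bpe.model")
+
+ text = "Machine translation is useful."
+ pieces = sp.encode(text.lower(), out_type=str)  # lowercase first, as in training
+ print(pieces)
+ ```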
+
+
+ ### Training
+
+ The model is trained with standard autoregressive cross-entropy loss, using [SpecAugment](https://arxiv.org/abs/1904.08779) for data augmentation.
+ The encoder receives speech features, and the decoder generates the translations autoregressively. To accelerate
+ training and improve performance, the encoder is pre-trained for English ASR.
+
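+ SpecAugment masks random blocks of time steps and frequency channels in the filter-bank features during training. `torchaudio`'s masking transforms can sketch the idea (the mask widths below are illustrative, not the exact policy used for this model):
+
+ ```python
+ import torch
+ from torchaudio import transforms
+
+ specaug = torch.nn.Sequential(
+     transforms.FrequencyMasking(freq_mask_param=27),  # mask up to 27 mel channels
+     transforms.TimeMasking(time_mask_param=100),      # mask up to 100 frames
+ )
+
+ feats = torch.randn(1, 80, 584)   # (batch, mel bins, frames)
+ augmented = specaug(feats)        # random time/frequency blocks zeroed out
+ ```
+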
+ ## Evaluation results
+
+ MuST-C test results for en-es (BLEU score): 27.2
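+
+ The score is corpus-level BLEU; with `sacrebleu`, scoring model output against references looks roughly like this (toy strings for illustration):
+
+ ```python
+ import sacrebleu
+
+ hypotheses = ["hola mundo"]       # model translations (toy data)
+ references = [["hola , mundo"]]   # one inner list per reference set
+ print(sacrebleu.corpus_bleu(hypotheses, references).score)
+ ```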
+
+
+ ### BibTeX entry and citation info
+
+ ```bibtex
+ @inproceedings{wang2020fairseqs2t,
+   title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
+   author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
+   booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
+   year = {2020},
+ }
+ ```
config.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "activation_dropout": 0.1,
+   "activation_function": "relu",
+   "architectures": [
+     "Speech2TextForConditionalGeneration"
+   ],
+   "attention_dropout": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": 0.0,
+   "conv_channels": 1024,
+   "conv_kernel_sizes": [
+     5,
+     5
+   ],
+   "d_model": 256,
+   "decoder_attention_heads": 4,
+   "decoder_ffn_dim": 2048,
+   "decoder_layerdrop": 0.0,
+   "decoder_layers": 6,
+   "decoder_start_token_id": 2,
+   "dropout": 0.1,
+   "early_stopping": true,
+   "encoder_attention_heads": 4,
+   "encoder_ffn_dim": 2048,
+   "encoder_layerdrop": 0.0,
+   "encoder_layers": 12,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "init_std": 0.02,
+   "input_channels": 1,
+   "input_feat_per_channel": 80,
+   "is_encoder_decoder": true,
+   "max_length": 200,
+   "max_source_positions": 6000,
+   "max_target_positions": 1024,
+   "model_type": "speech_to_text",
+   "num_beams": 5,
+   "num_conv_layers": 2,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "scale_embedding": true,
+   "tie_word_embeddings": false,
+   "transformers_version": "4.4.0.dev0",
+   "use_cache": true,
+   "vocab_size": 8000
+ }
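These hyperparameters are what `from_pretrained` materializes at load time; a quick sanity check, assuming the `transformers` package is installed:

```python
from transformers import Speech2TextConfig

config = Speech2TextConfig.from_pretrained("facebook/s2t-small-mustc-en-es-st")
print(config.d_model, config.encoder_layers, config.num_beams)  # 256 12 5
```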
preprocessor_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "do_ceptral_normalize": true,
+   "feature_size": 80,
+   "normalize_means": true,
+   "normalize_vars": true,
+   "num_mel_bins": 80,
+   "padding_side": "right",
+   "padding_value": 0.0,
+   "return_attention_mask": true,
+   "sampling_rate": 16000
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0916430ff4e7cd6e1aa6e1957822ba5baea1ab93a36283a4708b606ab7f204a7
+ size 124411397
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3abb8e44dfaff10d2750873439a5cfd8e44f45dfa26563f191573fc71f16c7d3
+ size 381548
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "do_upper_case": false, "do_lower_case": false, "tgt_lang": null, "lang_codes": null}
vocab.json ADDED
The diff for this file is too large to render. See raw diff