---
inference: false # need to implement streaming API in HF
language:
- en
thumbnail: null
tags:
- automatic-speech-recognition
- Transducer
- Conformer
- pytorch
- speechbrain
license: apache-2.0
datasets:
- librispeech_asr
metrics:
- wer
model-index:
- name: Conformer-Transducer by SpeechBrain
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER (non-streaming greedy)
type: wer
value: 2.72
- name: Test WER (960ms chunk size, 4 left context chunks)
type: wer
value: 3.13
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER (non-streaming greedy)
type: wer
value: 6.47
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Conformer for LibriSpeech
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on LibriSpeech (EN) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model in full context mode (no streaming) is the following:
| Release | Test clean WER | Test other WER | GPUs |
|:-------------:|:--------------:|:--------------:|:--------:|
| 24-02-26 | 2.72 | 6.47 | 4xA100 40GB |
With streaming, the WER (%) on test-clean for different chunk sizes (cs, the columns) and left context sizes (lc, in chunks, the rows) is the following:

| lc \ cs | full | 32 (1280ms) | 24 (960ms) | 16 (640ms) | 12 (480ms) | 8 (320ms) |
|:-------:|:----:|:-----------:|:----------:|:----------:|:----------:|:---------:|
| full    | 2.72 | -           | -          | -          | -          | -         |
| 32      | -    | 3.09        | 3.07       | 3.26       | 3.31       | 3.44      |
| 16      | -    | 3.10        | 3.07       | 3.27       | 3.32       | 3.50      |
| 8       | -    | 3.10        | 3.11       | 3.31       | 3.39       | 3.62      |
| 4       | -    | 3.12        | 3.13       | 3.37       | 3.51       | 3.80      |
| 2       | -    | 3.19        | 3.24       | 3.50       | 3.79       | 4.38      |
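As the column headers imply, each chunk frame spans 40ms of audio (e.g. cs=32 maps to 1280ms), so the chunk duration, and with it the rough latency floor, scales linearly with the chunk size. A minimal sketch of that arithmetic:

```python
# each chunk frame spans 40ms of audio, as implied by the column
# headers in the table above (e.g. cs=32 -> 1280ms, cs=24 -> 960ms)
def chunk_duration_ms(chunk_size: int, frame_ms: int = 40) -> int:
    return chunk_size * frame_ms

for cs in (32, 24, 16, 12, 8):
    print(f"cs={cs}: ~{chunk_duration_ms(cs)}ms of audio per chunk")
```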
## Pipeline description
This ASR system is a Conformer model trained with the RNN-T loss (with an auxiliary CTC loss to stabilize training). The model operates with a unigram tokenizer.
Architecture details are described in the [training hyperparameters file](https://github.com/speechbrain/speechbrain/blob/develop/recipes/LibriSpeech/ASR/transducer/hparams/conformer_transducer.yaml).
Streaming support makes use of Dynamic Chunk Training: the multi-head attention modules use chunked attention, and the convolution modules use an implementation of [Dynamic Chunk Convolutions](https://www.amazon.science/publications/dynamic-chunk-convolution-for-unified-streaming-and-non-streaming-conformer-asr).
The model was trained over a range of chunk sizes (including full context), making it suitable both for streaming with various chunk sizes and for offline transcription.
The system is trained with recordings sampled at 16kHz (single channel).
When calling `transcribe_file`, the code will automatically normalize your audio if needed (i.e., resample it and select a single channel).
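If you feed audio tensors to the lower-level streaming APIs yourself, you need to apply the equivalent normalization on your side. A minimal sketch using torchaudio (the input file name is hypothetical):

```python
import torchaudio

# hypothetical input file; may have any sample rate or channel count
waveform, sample_rate = torchaudio.load("my_recording.wav")
waveform = waveform.mean(dim=0)  # downmix to mono
# resample to the 16kHz rate the model expects
waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)
```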
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```bash
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in English)
```python
from speechbrain.inference.ASR import StreamingASR
from speechbrain.utils.dynamic_chunk_training import DynChunkTrainConfig
asr_model = StreamingASR.from_hparams(
source="speechbrain/asr-streaming-conformer-librispeech",
savedir="pretrained_models/asr-streaming-conformer-librispeech"
)
transcript = asr_model.transcribe_file(
    "speechbrain/asr-streaming-conformer-librispeech/test-en.wav",
    # select a chunk size of ~960ms with 4 chunks of left context
    DynChunkTrainConfig(24, 4),
    # disable torchaudio streaming to allow fetching from HuggingFace;
    # set this to True for your own files or streams to enable streaming file decoding
    use_torchaudio_streaming=False,
)
print(transcript)
```
The `DynChunkTrainConfig` values can be adjusted to trade off latency, computational power, and transcription accuracy. Refer to the streaming WER table above to pick a value suitable for your use case.
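For instance, a few illustrative settings with their test-clean WER from the table above (the two positional arguments are the chunk size and the number of left context chunks, as in the example above):

```python
from speechbrain.utils.dynamic_chunk_training import DynChunkTrainConfig

# illustrative configurations; WER values are taken from the streaming table above
low_latency = DynChunkTrainConfig(8, 2)   # ~320ms chunks, 4.38% WER on test-clean
balanced = DynChunkTrainConfig(24, 4)     # ~960ms chunks, 3.13% WER on test-clean
accurate = DynChunkTrainConfig(32, 16)    # ~1280ms chunks, 3.10% WER on test-clean
```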
<details>
<summary>Command-line tool to transcribe a file or a live stream</summary>
**Decoding from a live stream using ffmpeg (BBC Radio 4):**
`python3 asr.py http://as-hls-ww-live.akamaized.net/pool_904/live/ww/bbc_radio_fourfm/bbc_radio_fourfm.isml/bbc_radio_fourfm-audio%3d96000.norewind.m3u8 --model-source=speechbrain/asr-streaming-conformer-librispeech --device=cpu -v`
**Decoding from a file:**
`python3 asr.py some-english-speech.wav --model-source=speechbrain/asr-streaming-conformer-librispeech --device=cpu -v`

Save the script below as `asr.py`:
```python
from argparse import ArgumentParser
import logging
parser = ArgumentParser()
parser.add_argument("audio_path")
parser.add_argument("--model-source", required=True)
parser.add_argument("--device", default="cpu")
parser.add_argument("--chunk-size", default=24, type=int)
parser.add_argument("--left-context-chunks", default=4, type=int)
parser.add_argument("--num-threads", default=None, type=int)
parser.add_argument("--verbose", "-v", default=False, action="store_true")
args = parser.parse_args()
if args.verbose:
logging.getLogger().setLevel(logging.INFO)
logging.info("Loading libraries")
from speechbrain.inference.ASR import StreamingASR
from speechbrain.utils.dynamic_chunk_training import DynChunkTrainConfig
import torch
device = args.device
if args.num_threads is not None:
torch.set_num_threads(args.num_threads)
logging.info(f"Loading model from \"{args.model_source}\" onto device {device}")
asr = StreamingASR.from_hparams(args.model_source, run_opts={"device": device})
config = DynChunkTrainConfig(args.chunk_size, args.left_context_chunks)
logging.info(f"Starting stream from URI \"{args.audio_path}\"")
for text_chunk in asr.transcribe_file_streaming(args.audio_path, config):
print(text_chunk, flush=True, end="")
```
</details>
<details>
<summary>Live ASR decoding from a browser using Gradio</summary>
We want to optimize some things around the model before creating a proper HuggingFace Space demonstrating live streaming on CPU.
In the meantime, this is a simple, hacky demo of live ASR in the browser using Gradio's live microphone streaming feature.
If you run this, please note:
- Modern browsers refuse to stream microphone input over an untrusted connection (plain HTTP), unless it is localhost. If you are running this on a remote server, you could use SSH port forwarding to expose the remote's port on your machine.
- Streaming using Gradio on Firefox seems to cause some issues. Chromium-based browsers seem to behave better.
Save the script below as `gradio-asr.py` and run it using:
`python3 gradio-asr.py --model-source speechbrain/asr-streaming-conformer-librispeech --ip=localhost --device=cpu`
```python
from argparse import ArgumentParser
from dataclasses import dataclass
import logging
parser = ArgumentParser()
parser.add_argument("--model-source", required=True)
parser.add_argument("--device", default="cpu")
parser.add_argument("--ip", default="127.0.0.1")
parser.add_argument("--port", default=9431, type=int)
parser.add_argument("--chunk-size", default=24, type=int)
parser.add_argument("--left-context-chunks", default=4, type=int)
parser.add_argument("--num-threads", default=None, type=int)
parser.add_argument("--verbose", "-v", default=False, action="store_true")
args = parser.parse_args()
if args.verbose:
logging.getLogger().setLevel(logging.INFO)
logging.info("Loading libraries")
from speechbrain.inference.ASR import StreamingASR, ASRStreamingContext
from speechbrain.utils.dynamic_chunk_training import DynChunkTrainConfig
import torch
import gradio as gr
import torchaudio
import numpy as np
device = args.device
if args.num_threads is not None:
torch.set_num_threads(args.num_threads)
logging.info(f"Loading model from \"{args.model_source}\" onto device {device}")
asr = StreamingASR.from_hparams(args.model_source, run_opts={"device": device})
config = DynChunkTrainConfig(args.chunk_size, args.left_context_chunks)
@dataclass
class GradioStreamingContext:
context: ASRStreamingContext
chunk_size: int
waveform_buffer: torch.Tensor
decoded_text: str
def transcribe(stream, new_chunk):
sr, y = new_chunk
y = y.astype(np.float32)
y = torch.tensor(y, dtype=torch.float32, device=device)
y /= max(1, torch.max(torch.abs(y)).item()) # norm by max abs() within chunk & avoid NaN
if len(y.shape) > 1:
y = torch.mean(y, dim=1) # downmix to mono
# HACK: we are making poor use of the resampler across chunk boundaries
# which may degrade accuracy.
# NOTE: we should also absolutely avoid recreating a resampler every time
resampler = torchaudio.transforms.Resample(orig_freq=sr, new_freq=asr.audio_normalizer.sample_rate).to(device)
    y = resampler(y)  # janky resample to the model's 16kHz sample rate
if stream is None:
stream = GradioStreamingContext(
context=asr.make_streaming_context(config),
chunk_size=asr.get_chunk_size_frames(config),
waveform_buffer=y,
decoded_text="",
)
else:
stream.waveform_buffer = torch.concat((stream.waveform_buffer, y))
while stream.waveform_buffer.size(0) > stream.chunk_size:
chunk = stream.waveform_buffer[:stream.chunk_size]
stream.waveform_buffer = stream.waveform_buffer[stream.chunk_size:]
# fake batch dim
chunk = chunk.unsqueeze(0)
# list of transcribed strings, of size 1 because the batch size is 1
with torch.no_grad():
transcribed = asr.transcribe_chunk(stream.context, chunk)
stream.decoded_text += transcribed[0]
return stream, stream.decoded_text
# NOTE: latency seems relatively high, which may be due to this:
# https://github.com/gradio-app/gradio/issues/6526
demo = gr.Interface(
transcribe,
["state", gr.Audio(sources=["microphone"], streaming=True)],
["state", "text"],
live=True,
)
demo.launch(server_name=args.ip, server_port=args.port)
```
</details>
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
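For example:

```python
from speechbrain.inference.ASR import StreamingASR

asr_model = StreamingASR.from_hparams(
    source="speechbrain/asr-streaming-conformer-librispeech",
    savedir="pretrained_models/asr-streaming-conformer-librispeech",
    run_opts={"device": "cuda"},
)
```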
## Parallel Inference on a Batch
Currently, the high-level transcription interfaces do not support batched inference, but the low-level interfaces (e.g. `encode_chunk`) do.
We hope to provide efficient functionality for this in the future.
### Training
The model was trained with SpeechBrain (Commit hash: `3f9e33a`).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/LibriSpeech/ASR/transducer
python train.py hparams/conformer_transducer.yaml --data_folder=your_data_folder
```
See the [recipe directory](https://github.com/speechbrain/speechbrain/tree/develop/recipes/LibriSpeech/ASR/transducer) for details.
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```