---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

![](https://github.com/facebookresearch/encodec/raw/2d29d9353c2ff0ab1aeadc6a3d439854ee77da3e/architecture.png)

# Model Card for EnCodec

This model card provides details and information about EnCodec, a state-of-the-art real-time audio codec developed by Facebook Research.

## Model Details

### Model Description

EnCodec is a high-fidelity audio codec leveraging neural networks. It introduces a streaming encoder-decoder architecture with a quantized latent space, trained in an end-to-end fashion. The model simplifies and speeds up training by using a single multiscale spectrogram adversary that efficiently reduces artifacts and produces high-quality samples. It also includes a novel loss balancer mechanism that stabilizes training by decoupling the choice of hyperparameters from the typical scale of the loss. Additionally, lightweight Transformer models are used to further compress the obtained representation while maintaining real-time performance.

- **Developed by:** Facebook Research
- **Model type:** Audio Codec

### Model Sources

- **Repository:** [GitHub Repository](https://github.com/facebookresearch/encodec)
- **Paper:** [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438)

## Uses

### Direct Use

EnCodec can be used directly as an audio codec for real-time compression and decompression of audio signals. It provides high-quality audio compression and efficient decoding.

### Downstream Use

EnCodec can be fine-tuned for specific audio tasks or integrated into larger audio processing pipelines for applications such as speech recognition, audio streaming, or voice communication systems.

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## How to Get Started with the Model

Use the following code to get started with the EnCodec model (the audio file path is a placeholder):

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

# Load the pre-trained 24 kHz EnCodec model and choose a target bandwidth
model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)

# Load the audio data and convert it to the model's sample rate and channel count
wav, sr = torchaudio.load("audio.wav")
wav = convert_audio(wav, sr, model.sample_rate, model.channels)
wav = wav.unsqueeze(0)  # add a batch dimension

# Compress the audio into discrete codes
with torch.no_grad():
    encoded_frames = model.encode(wav)

# Decompress the codes back into a waveform
with torch.no_grad():
    reconstructed_audio = model.decode(encoded_frames)
```

## Training Details

### Training Data

We train all models for 300 epochs, with one epoch being 2,000 updates with the Adam optimizer, a batch size of 64 examples of 1 second each, a learning rate of 3 × 10<sup>-4</sup>, β<sub>1</sub> = 0.5, and β<sub>2</sub> = 0.9. All the models are trained using 8 A100 GPUs. We use the balancer introduced in Section 3.4 of the paper with weights λ<sub>t</sub> = 0.1, λ<sub>f</sub> = 1, λ<sub>g</sub> = 3, λ<sub>feat</sub> = 3 for the 24 kHz models. For the 48 kHz model, we use instead λ<sub>g</sub> = 4, λ<sub>feat</sub> = 4.

- For speech:
  - DNS Challenge 4
  - [Common Voice](https://huggingface.co/datasets/common_voice)
- For general audio:
  - AudioSet
  - FSD50K
- For music:
  - Jamendo dataset (Bogdanov et al., 2019)

Four different sampling strategies are used during training:

- (s1) we sample a single source from Jamendo with probability 0.32;
- (s2) we sample a single source from the other datasets with the same probability;
- (s3) we mix two sources from all datasets with a probability of 0.24;
- (s4) we mix three sources from all datasets except music with a probability of 0.12.

The audio is normalized by file and we apply a random gain between -10 and 6 dB. We reject any sample that has been clipped. Finally, we add reverberation using room impulse responses provided by the DNS challenge with probability 0.2, and RT60 in the range [0.3, 1.3] seconds, except for the single-source music samples.
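
As a rough illustration of the gain augmentation and clipping-rejection step described above, here is a minimal sketch; it is not the authors' released training code, and the function name and the clipping threshold of 1.0 are assumptions:

```python
import torch

def random_gain_augment(wav: torch.Tensor, min_db: float = -10.0, max_db: float = 6.0):
    """Apply a random gain in [min_db, max_db] dB; return None if the result clips."""
    gain_db = torch.empty(1).uniform_(min_db, max_db).item()
    gain = 10.0 ** (gain_db / 20.0)  # convert dB to a linear amplitude factor
    out = wav * gain
    if out.abs().max() > 1.0:  # reject clipped samples (assumed threshold)
        return None
    return out
```

In a training pipeline, a rejected sample would simply be redrawn or skipped when building the batch.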
For testing, we use four categories: clean speech from DNS alone; clean speech mixed with an FSD50K sample; a Jamendo sample alone; and a proprietary music sample alone.

## Evaluation

We consider both subjective and objective evaluation metrics. For the subjective tests we follow the MUSHRA protocol (Series, 2014), using both a hidden reference and a low anchor. Annotators were recruited using a crowd-sourcing platform and asked to rate the perceptual quality of the provided samples on a scale from 1 to 100. We randomly select 50 samples of 5 seconds from each category of the test set and require at least 10 annotations per sample. To filter noisy annotations and outliers, we remove annotators who rate the reference recordings lower than 90 in at least 20% of the cases, or rate the low-anchor recording above 80 more than 50% of the time. For objective metrics, we use ViSQOL (Hines et al., 2012; Chinen et al., 2020), together with the Scale-Invariant Signal-to-Noise Ratio (SI-SNR) (Luo & Mesgarani, 2019; Nachmani et al., 2020; Chazan et al., 2021).

### Results

We start with the results for EnCodec with a bandwidth in {1.5, 3, 6, 12} kbps and compare them to the baselines. Results for the streamable setup are reported in Figure 3 of the paper, with a breakdown per category in Table 1. We additionally explored other quantizers such as Gumbel-Softmax and DiffQ (see details in Appendix A.2 of the paper); however, preliminary results showed that they provide similar or worse quality, so we do not report them.

At the same bandwidth, EnCodec is superior to all evaluated baselines in terms of MUSHRA score. Notably, EnCodec at 3 kbps reaches better performance on average than Lyra-v2 at 6 kbps and Opus at 12 kbps. When using the additional language model over the codes, we can reduce the bandwidth by roughly 25-40%. For instance, the bandwidth of the 3 kbps model can be reduced to 1.9 kbps. We observe that the compression ratio is lower at higher bandwidths, which could be explained by the small size of the Transformer model used, making it hard to model all codebooks together.

#### Summary

EnCodec is a state-of-the-art real-time neural audio compression model that excels in producing high-fidelity audio samples at various sample rates and bandwidths. The model's performance was evaluated across different settings, ranging from 24 kHz monophonic at 1.5 kbps to 48 kHz stereophonic, showcasing both subjective and objective results (Figure 3 and Table 4 of the paper). Notably, EnCodec incorporates a novel spectrogram-only adversarial loss, effectively reducing artifacts and enhancing sample quality. Training stability and interpretability were further enhanced through the introduction of a gradient balancer for the loss weights. Additionally, the study demonstrated that a compact Transformer model can be employed to achieve an additional bandwidth reduction of up to 40% without compromising quality, particularly in applications where low latency is not critical (e.g., music streaming).

## Citation

**BibTeX:**

```bibtex
@misc{défossez2022high,
  title={High Fidelity Neural Audio Compression},
  author={Alexandre Défossez and Jade Copet and Gabriel Synnaeve and Yossi Adi},
  year={2022},
  eprint={2210.13438},
  archivePrefix={arXiv},
  primaryClass={eess.AS}
}
```