Sidon: Fast and Robust Open-Source Multilingual Speech Restoration for Dataset Cleansing

Wataru Nakata, Yuki Saito, Yota Ueda, Hiroshi Saruwatari
The University of Tokyo, Japan.

Abstract

Large-scale text-to-speech (TTS) systems are limited by the scarcity of clean, multilingual recordings. We introduce Sidon, a fast, open-source speech restoration model that converts noisy in-the-wild speech into studio-quality speech and scales to dozens of languages. Sidon consists of two models: a feature predictor, finetuned from w2v-BERT 2.0, that cleanses features extracted from noisy speech, and a vocoder trained to synthesize restored speech from the cleansed features. Sidon achieves restoration performance comparable to Miipher, Google's internal speech restoration model aimed at dataset cleansing for speech synthesis. Sidon is also computationally efficient, running up to 3,390× faster than real time on a single GPU. We further show that training a TTS model on an automatic speech recognition corpus cleansed by Sidon improves the quality of synthetic speech in a zero-shot setting. Code and models are released to facilitate reproducible dataset cleansing for the research community.
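The two-stage design described above (cleanse in feature space, then vocode) can be sketched as follows. This is a minimal illustrative sketch of the data flow only; the function names, feature dimensionality, and hop size are assumptions for illustration, not Sidon's actual API or configuration.

```python
import numpy as np

# Hypothetical two-stage restoration pipeline mirroring Sidon's design:
# a feature predictor maps noisy speech features to cleansed features,
# and a vocoder synthesizes a waveform from those features.
# All names and shapes below are illustrative assumptions.

FEATURE_DIM = 1024   # w2v-BERT 2.0-style frame feature size (assumed)
HOP = 320            # waveform samples per feature frame at 16 kHz (assumed)

def predict_clean_features(noisy_features: np.ndarray) -> np.ndarray:
    """Stand-in for the finetuned w2v-BERT 2.0 feature predictor."""
    # A real model would denoise/dereverberate in feature space; here we
    # pass features through unchanged to show only the pipeline shape.
    assert noisy_features.ndim == 2 and noisy_features.shape[1] == FEATURE_DIM
    return noisy_features

def vocode(features: np.ndarray) -> np.ndarray:
    """Stand-in for the vocoder: cleansed features -> waveform samples."""
    num_frames = features.shape[0]
    # A real vocoder would synthesize speech; we return silence of the
    # correct length to show the frame-to-sample mapping.
    return np.zeros(num_frames * HOP, dtype=np.float32)

def restore(noisy_features: np.ndarray) -> np.ndarray:
    """Full restoration: feature cleansing followed by vocoding."""
    return vocode(predict_clean_features(noisy_features))

wav = restore(np.random.randn(50, FEATURE_DIM).astype(np.float32))
print(wav.shape)  # 50 frames * 320 samples per frame
```

Because restoration is a single feed-forward pass through both models, the pipeline batches naturally over utterances, which is consistent with the faster-than-real-time throughput reported above.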

Full Multilingual Results (FLEURS)

The full multilingual evaluation table is large, so it is hidden by default.

Show results table

Expanding it loads an embedded page with all language-wise metrics.

Multilingual Samples from FLEURS

English Demo Samples from LibriTTS

BibTeX

@inproceedings{sidon2026,
  author    = {Nakata, Wataru and Saito, Yuki and Ueda, Yota and Saruwatari, Hiroshi},
  title     = {Sidon: Fast and Robust Open-Source Multilingual Speech Restoration for Dataset Cleansing},
  booktitle = {TBA},
  year      = {TBA}
}