---
license: cc-by-4.0
task_categories:
- audio-classification
language:
- de
- en
- es
- fr
- it
- nl
- pl
- sv
tags:
- speech
- speech-classification
- text-to-speech
- spoofing
- multilingualism
pretty_name: FLEURS-HS VITS
size_categories:
- 10K<n<100K
---
# FLEURS-HS VITS
An extension of the [FLEURS](https://huggingface.co/datasets/google/fleurs) dataset for synthetic speech detection using text-to-speech, featured in the paper **Synthetic speech detection with Wav2Vec 2.0 in various language settings**.
This dataset is one of three used in the paper; the others are:
- [FLEURS-HS](https://huggingface.co/datasets/realnetworks-kontxt/fleurs-hs)
  - the default train, dev and test sets
  - separated due to different licensing
- [ARCTIC-HS](https://huggingface.co/datasets/realnetworks-kontxt/arctic-hs)
  - an extension of the [CMU_ARCTIC](http://festvox.org/cmu_arctic/) and [L2-ARCTIC](https://psi.engr.tamu.edu/l2-arctic-corpus/) sets in a similar manner
## Dataset Details
### Dataset Description
The dataset features 8 languages originally seen in FLEURS:
- German
- English
- Spanish
- French
- Italian
- Dutch
- Polish
- Swedish
The `synthetic` samples are generated using:
- [Google Cloud Text-To-Speech](https://cloud.google.com/text-to-speech)
- [Azure Text-To-Speech](https://azure.microsoft.com/en-us/products/ai-services/text-to-speech)
- [Amazon Polly](https://aws.amazon.com/polly/)
Only the test-set VITS samples are provided. Each VITS voice is, in practice, a specific set of model weights; for every voice, one sample per transcript is provided.
- **Curated by:** [KONTXT by RealNetworks](https://realnetworks.com/kontxt)
- **Funded by:** [RealNetworks](https://realnetworks.com/)
- **Language(s) (NLP):** English, German, Spanish, French, Italian, Dutch, Polish, Swedish
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) for the code, [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) for the dataset (but various licenses depending on the source for VITS samples)
### Dataset Sources
The original FLEURS dataset was downloaded from [HuggingFace](https://huggingface.co/datasets/google/fleurs).
- **FLEURS Repository:** [HuggingFace](https://huggingface.co/datasets/google/fleurs)
- **FLEURS Paper:** [arXiv](https://arxiv.org/abs/2205.12446)
- **Paper:** Synthetic speech detection with Wav2Vec 2.0 in various language settings
## Uses
This dataset is best used as a difficult test set. Each sample contains an `Audio` feature and a label, which is always `synthetic`; this dataset does not include any human samples.
### Direct Use
The following snippet of code demonstrates loading the test split for English:
```python
from datasets import load_dataset

fleurs_hs = load_dataset(
    "realnetworks-kontxt/fleurs-hs-vits",
    "en_us",
    split="test",
    trust_remote_code=True,
)
```
To load a different language, change `en_us` to one of the following:
- `de_de` for German
- `es_419` for Spanish
- `fr_fr` for French
- `it_it` for Italian
- `nl_nl` for Dutch
- `pl_pl` for Polish
- `sv_se` for Swedish
This dataset only has a `test` split.
The `trust_remote_code=True` parameter is necessary because this dataset uses a custom loader. To see exactly which code is being run, check out the [loading script](./fleurs-hs-vits.py).
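Once loaded, each example exposes the decoded waveform alongside its label. As a minimal follow-up sketch (reusing the `fleurs_hs` variable from the snippet above), the samples can be grouped by VITS voice, since the voice name is the directory prefix of each audio path:
```python
from collections import Counter

# Group the test samples by VITS voice; the voice name is the directory
# prefix of the audio path, e.g. "ljspeech-vits/1660.wav" -> "ljspeech-vits".
# Note: iterating like this decodes every audio file, so it takes a while.
voice_counts = Counter(
    sample["audio"]["path"].split("/")[0] for sample in fleurs_hs
)
print(voice_counts)

# Inspect a single sample: a 16 kHz waveform and the "synthetic" label.
first = fleurs_hs[0]
print(first["label"], first["audio"]["sampling_rate"], first["audio"]["array"].shape)
```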
## Dataset Structure
The dataset's data is contained in the [data directory](https://huggingface.co/datasets/realnetworks-kontxt/fleurs-hs-vits/tree/main/data).
There is one directory per language.
Within each language directory, there is a directory named `splits`; it contains one file per split:
- `test.tar.gz`
That `.tar.gz` file contains one or more directories, each named after the VITS voice (model) used, e.g. `thorsten-vits`.
Each of these directories contains `.wav` files, each named after the ID of its transcript. Keep in mind that these directories can't simply be merged, as they share file names: an identical file name refers to the same transcript rendered by different voices, e.g. `human/123.wav` and `thorsten-vits/123.wav`.
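If you'd rather inspect the raw archives than go through the loader, a minimal sketch along these lines should work, assuming the language directories are named after the loader configs (e.g. `data/en_us/splits/test.tar.gz`):
```python
import tarfile

from huggingface_hub import hf_hub_download

# Download a single language's test archive from the Hub; the filename below
# assumes the layout described above (language directory named after the
# loader config, containing splits/test.tar.gz).
archive_path = hf_hub_download(
    repo_id="realnetworks-kontxt/fleurs-hs-vits",
    filename="data/en_us/splits/test.tar.gz",
    repo_type="dataset",
)

# List the VITS voice directories and their .wav files without extracting.
with tarfile.open(archive_path, "r:gz") as archive:
    members = archive.getnames()
print(members[:10])
```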
Finally, each language directory also contains 3 metadata files, which are not used by the loader but might be useful to researchers (a loading sketch follows the list):
- `recording-metadata.csv`
  - contains the transcript ID, file name, split and gender of the original FLEURS samples
- `recording-transcripts.csv`
  - contains the transcripts of the original FLEURS samples
- `voice-metadata.csv`
  - contains the grouping of TTS voices used, alongside the splits they were used for
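For example, the recording metadata for a language can be pulled down and read with pandas; this is a sketch under the same path assumption as above (language directories named after the loader configs):
```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Fetch one language's recording metadata; the filename assumes the
# per-language layout described above.
metadata_path = hf_hub_download(
    repo_id="realnetworks-kontxt/fleurs-hs-vits",
    filename="data/en_us/recording-metadata.csv",
    repo_type="dataset",
)

# Peek at the transcript IDs, file names, splits and genders it describes.
metadata = pd.read_csv(metadata_path)
print(metadata.head())
```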
### Sample
A sample contains an `Audio` feature `audio` and a string `label`.
```
{
  'audio': {
    'path': 'ljspeech-vits/1660.wav',
    'array': array([0.00119019, 0.00109863, 0.00106812, ..., 0., 0., 0.]),
    'sampling_rate': 16000
  },
  'label': 'synthetic'
}
```
## Citation
The dataset is featured alongside our paper, **Synthetic speech detection with Wav2Vec 2.0 in various language settings**, which will be published at the IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW). We'll provide links once it's available online.
**BibTeX:**
Note: the following BibTeX entry is incomplete; we'll update it once the final version is known.
```
@inproceedings{dropuljic-ssdww2v2ivls,
  author={Dropuljić, Branimir and Šuflaj, Miljenko and Jertec, Andrej and Obadić, Leo},
  booktitle={2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
  title={Synthetic speech detection with Wav2Vec 2.0 in various language settings},
  year={2024},
  volume={},
  number={},
  pages={1-5},
  keywords={Synthetic speech detection;text-to-speech;wav2vec 2.0;spoofing attack;multilingualism},
  doi={}
}
```
## Dataset Card Authors
- [Miljenko Šuflaj](https://huggingface.co/suflaj)
## Dataset Card Contact
- [Miljenko Šuflaj](mailto:msuflaj@realnetworks.com)