|
--- |
|
license: cc-by-4.0 |
|
task_categories: |
|
- audio-classification |
|
language: |
|
- de |
|
- en |
|
- es |
|
- fr |
|
- it |
|
- nl |
|
- pl |
|
- sv |
|
tags: |
|
- speech |
|
- speech-classification
|
- text-to-speech |
|
- spoofing |
|
- multilingualism |
|
|
|
pretty_name: FLEURS-HS VITS |
|
size_categories: |
|
- 10K<n<100K |
|
--- |
|
|
|
# FLEURS-HS VITS |
|
|
|
An extension of the [FLEURS](https://huggingface.co/datasets/google/fleurs) dataset for synthetic speech detection using text-to-speech, featured in the paper **Synthetic speech detection with Wav2Vec 2.0 in various language settings**. |
|
|
|
This dataset is 1 of 3 used in the paper, the others being: |
|
- [FLEURS-HS](https://huggingface.co/datasets/realnetworks-kontxt/fleurs-hs) |
|
- the default train, dev and test sets |
|
- separated due to different licensing |
|
- [ARCTIC-HS](https://huggingface.co/datasets/realnetworks-kontxt/arctic-hs) |
|
- extension of the [CMU_ARCTIC](http://festvox.org/cmu_arctic/) and [L2-ARCTIC](https://psi.engr.tamu.edu/l2-arctic-corpus/) sets in a similar manner |
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
The dataset features 8 languages originally seen in FLEURS: |
|
|
|
- German |
|
- English |
|
- Spanish |
|
- French |
|
- Italian |
|
- Dutch |
|
- Polish |
|
- Swedish |
|
|
|
The original FLEURS samples are used as `human` samples, while `synthetic` samples are generated using: |
|
|
|
- [Google Cloud Text-To-Speech](https://cloud.google.com/text-to-speech) |
|
- [Azure Text-To-Speech](https://azure.microsoft.com/en-us/products/ai-services/text-to-speech) |
|
- [Amazon Polly](https://aws.amazon.com/polly/) |
|
|
|
The resulting dataset contains roughly twice as many samples per language (nearly every `human` sample has a `synthetic` counterpart).
|
|
|
|
|
- **Curated by:** [KONTXT by RealNetworks](https://realnetworks.com/kontxt) |
|
- **Funded by:** [RealNetworks](https://realnetworks.com/) |
|
- **Language(s) (NLP):** English, German, Spanish, French, Italian, Dutch, Polish, Swedish |
|
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) for the code, [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) for the dataset (the VITS samples, however, carry various licenses depending on their source)
|
|
|
### Dataset Sources |
|
|
|
The original FLEURS dataset was downloaded from [HuggingFace](https://huggingface.co/datasets/google/fleurs). |
|
|
|
- **FLEURS Repository:** [HuggingFace](https://huggingface.co/datasets/google/fleurs) |
|
- **FLEURS Paper:** [arXiv](https://arxiv.org/abs/2205.12446) |
|
|
|
- **Paper:** Synthetic speech detection with Wav2Vec 2.0 in various language settings |
|
|
|
## Uses |
|
|
|
This dataset is best used as a difficult test set. Each sample contains an `Audio` feature, and a label: `human` or `synthetic`. |
|
|
|
### Direct Use |
|
|
|
The following code snippet demonstrates loading the test split for English:
|
|
|
```python |
|
from datasets import load_dataset |
|
|
|
fleurs_hs = load_dataset( |
|
"realnetworks-kontxt/fleurs-hs-vits", |
|
"en_us", |
|
split="test", |
|
trust_remote_code=True, |
|
) |
|
``` |
|
|
|
To load a different language, change `en_us` into one of the following: |
|
- `de_de` for German |
|
- `es_419` for Spanish |
|
- `fr_fr` for French |
|
- `it_it` for Italian |
|
- `nl_nl` for Dutch |
|
- `pl_pl` for Polish |
|
- `sv_se` for Swedish |
|
|
|
This dataset only has a `test` split. |
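
For instance, here is a sketch of loading every language's test split into a single dictionary, using the configuration names from the list above:

```python
from datasets import load_dataset

LANGUAGES = ["en_us", "de_de", "es_419", "fr_fr", "it_it", "nl_nl", "pl_pl", "sv_se"]

# One test split per language configuration.
test_splits = {
    language: load_dataset(
        "realnetworks-kontxt/fleurs-hs-vits",
        language,
        split="test",
        trust_remote_code=True,
    )
    for language in LANGUAGES
}
```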
|
|
|
To load only the synthetic samples, append `_without-human` to the name. For example, `en_us` will load the test set that also contains the original English FLEURS samples, while `en_us_without-human` will load only the synthetic VITS samples. This is useful if you simply want to add the VITS samples to the original FLEURS-HS test set without duplicating human samples, as sketched below.
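
A minimal sketch of that combination, assuming FLEURS-HS uses the same `en_us` configuration name and both datasets expose the same `audio`/`label` features (concatenation requires identical features):

```python
from datasets import concatenate_datasets, load_dataset

# Original FLEURS-HS test set (human and synthetic samples).
fleurs_hs_test = load_dataset(
    "realnetworks-kontxt/fleurs-hs",
    "en_us",
    split="test",
    trust_remote_code=True,
)

# VITS-only samples, without duplicating the human recordings.
vits_only = load_dataset(
    "realnetworks-kontxt/fleurs-hs-vits",
    "en_us_without-human",
    split="test",
    trust_remote_code=True,
)

extended_test = concatenate_datasets([fleurs_hs_test, vits_only])
```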
|
|
|
The `trust_remote_code=True` parameter is necessary because this dataset uses a custom loader. To see exactly which code is run, check out the [loading script](./fleurs-hs-vits.py).
|
|
|
## Dataset Structure |
|
|
|
The dataset's files are contained in the [data directory](https://huggingface.co/datasets/realnetworks-kontxt/fleurs-hs-vits/tree/main/data).
|
|
|
There is one directory per language.
|
|
|
Within those directories, there is a directory named `splits`; it contains 1 file per split: |
|
- `test.tar.gz` |
|
|
|
Each `.tar.gz` file contains 2 or more directories:

- `human`

- 1 or more directories named after the VITS model used, e.g. `thorsten-vits`
|
|
|
Each of these directories contains `.wav` files. Keep in mind that the directories can't simply be merged, as they share most of their file names. An identical file name indicates a human-synthetic counterpart pair, e.g. `human/123.wav` and `thorsten-vits/123.wav`.
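
As an illustration, here is a sketch of pairing counterparts by file name after extracting one archive locally; the paths are hypothetical and the archive's top-level layout is assumed to match the description above:

```python
import tarfile
from pathlib import Path

# Hypothetical local paths; adjust to wherever the archive was downloaded.
archive = Path("data/en_us/splits/test.tar.gz")
extract_dir = Path("extracted/en_us_test")
extract_dir.mkdir(parents=True, exist_ok=True)

with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(extract_dir)

human_dir = extract_dir / "human"
voice_dirs = [d for d in extract_dir.iterdir() if d.is_dir() and d.name != "human"]

# Pair each synthetic file with the human recording sharing its name.
for voice_dir in voice_dirs:
    for synthetic_wav in sorted(voice_dir.glob("*.wav")):
        human_wav = human_dir / synthetic_wav.name
        if human_wav.exists():
            print(f"{human_wav} <-> {synthetic_wav}")
```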
|
|
|
Finally, each language directory also contains 3 metadata files; they are not used by the dataset loader, but might be useful to researchers (see the sketch after this list):
|
- `recording-metadata.csv` |
|
- contains the transcript ID, file name, split and gender of the original FLEURS samples |
|
- `recording-transcripts.csv` |
|
- contains the transcripts of the original FLEURS samples
|
- `voice-metadata.csv` |
|
- contains the grouping of TTS voices used, alongside the splits they were used for
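
As a sketch, the metadata can be inspected with `pandas`; the file paths are hypothetical and the column name used in the join is an assumption, so check the CSV headers first:

```python
import pandas as pd

# Hypothetical local paths to one language's metadata files.
metadata = pd.read_csv("data/en_us/recording-metadata.csv")
transcripts = pd.read_csv("data/en_us/recording-transcripts.csv")
voices = pd.read_csv("data/en_us/voice-metadata.csv")

print(metadata.columns, transcripts.columns, voices.columns)

# If both files share a transcript ID column (name assumed here), they can be joined.
merged = metadata.merge(transcripts, on="transcript_id", how="left")
print(merged.head())
```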
|
|
|
### Sample |
|
|
|
A sample contains an `Audio` feature `audio` and a string `label`.
|
|
|
``` |
|
{ |
|
'audio': { |
|
'path': 'ljspeech-vits/1003119935936341070.wav', |
|
'array': array([-0.00048828, -0.00106812, -0.00164795, ..., 0., 0., 0.]), |
|
'sampling_rate': 16000 |
|
}, |
|
'label': 'synthetic' |
|
} |
|
``` |
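
Building on the loading snippet above, a small usage sketch for inspecting a split (label distribution and the first waveform):

```python
from collections import Counter

from datasets import load_dataset

fleurs_hs_vits = load_dataset(
    "realnetworks-kontxt/fleurs-hs-vits",
    "en_us",
    split="test",
    trust_remote_code=True,
)

# Distribution of human vs. synthetic samples in the split.
print(Counter(fleurs_hs_vits["label"]))

# Label, sampling rate, and waveform length of the first sample.
sample = fleurs_hs_vits[0]
print(sample["label"], sample["audio"]["sampling_rate"], len(sample["audio"]["array"]))
```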
|
|
|
## Citation |
|
|
|
The dataset is featured alongside our paper, **Synthetic speech detection with Wav2Vec 2.0 in various language settings**, which will be published in the proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW). We'll provide links once it's available online.
|
|
|
**BibTeX:** |
|
|
|
Note that the following BibTeX entry is incomplete; we'll update it once the final version is known.
|
|
|
``` |
|
@inproceedings{dropuljic-ssdww2v2ivls,

  author={Dropuljić, Branimir and Šuflaj, Miljenko and Jertec, Andrej and Obadić, Leo},

  booktitle={2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},

  title={Synthetic speech detection with Wav2Vec 2.0 in various language settings},

  year={2024},

  volume={},

  number={},

  pages={1-5},

  keywords={Synthetic speech detection;text-to-speech;wav2vec 2.0;spoofing attack;multilingualism},

  doi={}

}
|
``` |
|
|
|
## Dataset Card Authors |
|
|
|
- [Miljenko Šuflaj](https://huggingface.co/suflaj) |
|
|
|
## Dataset Card Contact |
|
|
|
- [Miljenko Šuflaj](mailto:msuflaj@realnetworks.com) |