WikiTongues Speech Corpus

Modalities: Audio, Text
Formats: parquet
Size: < 1K rows
ArXiv: 2407.00837
Libraries: Datasets, Dask
License: CC BY-NC-SA 4.0
Columns: id (8-character string, e.g. jf_00000), audio

The WikiTongues speech corpus is a collection of conversational audio across 700+ languages. It can be used for spoken language modelling or speech representation learning. The dataset contains the raw, unsegmented audio in a 16 kHz, single-channel format. Each clip is usually 2-10 minutes long and contains one or more speakers conversing in their language(s); a speaker may occasionally switch languages within a single clip. The total duration of the dataset is around 70 hours.
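Since the corpus is distributed as parquet and is compatible with the Hugging Face Datasets library (per the card metadata above), a clip can be loaded and inspected roughly as sketched below. The repository id is a placeholder assumption rather than the confirmed Hub path; the column names follow the preview above (id, audio).

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub under
# a placeholder repo id and exposes "id" and "audio" columns as in the preview.
from datasets import Audio, load_dataset

ds = load_dataset("your-org/wikitongues", split="train", streaming=True)  # repo id is hypothetical
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # clips are 16 kHz, single channel

first = next(iter(ds))
waveform = first["audio"]["array"]        # 1-D float array of samples
sr = first["audio"]["sampling_rate"]      # 16000
print(first["id"], f"{len(waveform) / sr:.1f} s")
```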

The current version of the dataset does not include labels for the language(s) spoken in each clip; this information will be added in a future update.

This dataset was crawled from the WikiTongues project, which collected the original recordings. We use this corpus to train XEUS, a multilingual speech encoder for 4000+ languages. For more details about the dataset and its usage, please refer to our paper or project page.
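Because the audio is released raw and unsegmented, representation-learning pipelines typically slice the multi-minute clips into fixed-length windows before training. The sketch below shows one common way to do this; the 30-second window and drop-remainder behaviour are arbitrary illustrative choices, not something prescribed by the dataset or the XEUS recipe.

```python
import numpy as np

def chunk_waveform(waveform: np.ndarray, sampling_rate: int, window_s: float = 30.0):
    """Split a mono waveform into non-overlapping fixed-length windows.

    The trailing remainder shorter than one window is dropped; overlapping
    windows or VAD-based segmentation are equally valid alternatives.
    """
    window = int(window_s * sampling_rate)
    for start in range(0, len(waveform) - window + 1, window):
        yield waveform[start:start + window]

# e.g. chunks = list(chunk_waveform(first["audio"]["array"], 16_000))
```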

License and Acknowledgement

WikiTongues is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license.

If you use this dataset, we ask that you cite our paper:

@misc{chen2024robustspeechrepresentationlearning,
      title={Towards Robust Speech Representation Learning for Thousands of Languages}, 
      author={William Chen and Wangyou Zhang and Yifan Peng and Xinjian Li and Jinchuan Tian and Jiatong Shi and Xuankai Chang and Soumi Maiti and Karen Livescu and Shinji Watanabe},
      year={2024},
      eprint={2407.00837},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.00837}, 
}

We also ask that you credit the original creators of the audio.
