Modalities: Audio, Text
Formats: Parquet
ArXiv: 2407.00837
Libraries: Datasets, Dask
License: CC BY-NC-SA 4.0

Dataset columns:
- id: string (9–11 characters), e.g. mzb_00000
- iso3: string, ISO 639-3 language code, e.g. mzb
- audio: 16 kHz audio, duration 8 s to 3.33k s

MMS ulab v2 is a massively multilingual speech dataset that contains 8,900 hours of unlabeled speech across 4,023 languages, spanning 189 language families in total. It can be used for language identification, spoken language modelling, or speech representation learning.
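
As a quick start, the data can be read with the 🤗 Datasets library. The snippet below is a minimal sketch: the `train` split name and the per-example field layout are assumptions based on the column summary above, not guarantees of this card.

```python
from datasets import load_dataset

# Minimal sketch (assumes a "train" split and the id / iso3 / audio columns shown above).
# Streaming avoids downloading the full 8,900-hour corpus before iterating.
ds = load_dataset("espnet/mms_ulab_v2", split="train", streaming=True)

for example in ds:
    audio = example["audio"]  # {"array": np.ndarray, "sampling_rate": 16000, ...}
    duration = len(audio["array"]) / audio["sampling_rate"]
    print(example["id"], example["iso3"], f"{duration:.1f} s")
    break
```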

MMS ulab v2 is a reproduced and extended version of the MMS ulab dataset originally proposed in Scaling Speech Technology to 1,000+ Languages, covering more languages and containing more data. The dataset provides the raw, unsegmented audio as 16 kHz, single-channel recordings. It can be segmented into utterances with a voice activity detection (VAD) model (a sketch follows below). We use 6,700 hours of MMS ulab v2 (post-segmentation) to train XEUS, a multilingual speech encoder for 4,000+ languages.
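
The card refers to a specific VAD model for segmentation; purely as an illustration, the sketch below uses Silero VAD, a different, publicly available model, to cut one recording into utterances. The choice of Silero and its default thresholds are assumptions, not the pipeline used to prepare data for XEUS.

```python
import torch
from datasets import load_dataset

# Load Silero VAD (a stand-in for the VAD model referenced on this card).
model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps = utils[0]

ds = load_dataset("espnet/mms_ulab_v2", split="train", streaming=True)
example = next(iter(ds))

# The audio is already 16 kHz mono, which matches Silero VAD's expected input rate.
wav = torch.from_numpy(example["audio"]["array"]).float()
segments = get_speech_timestamps(wav, model, sampling_rate=16000)

for seg in segments:
    utterance = wav[seg["start"]:seg["end"]]  # one VAD-segmented utterance (sample indices)
    print(example["id"], seg["start"], seg["end"], utterance.shape)
```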

For more details about the dataset and its usage, please refer to our paper or project page.

License and Acknowledgement

MMS ulab v2 is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) license.

If you use this dataset, we ask that you cite the following papers:

@misc{chen2024robustspeechrepresentationlearning,
      title={Towards Robust Speech Representation Learning for Thousands of Languages}, 
      author={William Chen and Wangyou Zhang and Yifan Peng and Xinjian Li and Jinchuan Tian and Jiatong Shi and Xuankai Chang and Soumi Maiti and Karen Livescu and Shinji Watanabe},
      year={2024},
      eprint={2407.00837},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.00837}, 
}

@article{pratap2024scaling,
  title={Scaling speech technology to 1,000+ languages},
  author={Pratap, Vineel and Tjandra, Andros and Shi, Bowen and Tomasello, Paden and Babu, Arun and Kundu, Sayani and Elkahky, Ali and Ni, Zhaoheng and Vyas, Apoorv and Fazel-Zarandi, Maryam and others},
  journal={Journal of Machine Learning Research},
  volume={25},
  number={97},
  pages={1--52},
  year={2024}
}

We also ask that you acknowledge the Global Recordings Network, the original source of the data.
