Making README more robust and verbose

#4 by reach-vb (HF staff) - opened
Mozilla Foundation org
  1. Updates old URLs to SpeechBench.
  2. Adds a code snippet for playing with the dataset.
  3. Adds example scripts to further leverage this dataset.

In general this looks very nice, happy with the overall structure. Leaving some quite specific points just so that we can polish it off!

  • Under "How to Use", I would first explain that all dataset loading and pre-processing is done through the `datasets` library and pure Python functions -> we straight away break out of the typical Kaldi/C++ norms (which is our big selling point!)
  • Next line about the dataset being downloaded by the `load_dataset` function is great 👍
  • Would explain how splits relate to languages (see the loading sketch after this list):
- The entire dataset (or a particular split) can be downloaded to your local drive by using the `load_dataset` function.
+ The entire dataset (or a particular split) can be downloaded and prepared in one call to your local drive by using the `load_dataset` function. To download the Hindi split, simply specify the corresponding dataset config name:
  • (nit!) Would avoid caps in variable names
- CV_11_hi_train = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
+ cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
  • "iterating" over the dataset might be confusing without explaining what this means - I understand you but a new-comer might not! Would go up one level of abstraction and just say we load individual samples at a time
- Loading a dataset in streaming mode allows one to iterate over the dataset without downloading it on disk.
+ Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk. Storage...
  • Maybe we can link the audio datasets blog here? "To find out more about loading and preparing audio datasets, ..."
  • Like the data loader section a lot! Think this is great for integrating into other libraries/training scripts (see the data loader sketch after this list)
  • Suggestion: let's make it specific to CV11!
- Train your own CTC or Seq2Seq Automatic Speech Recognition models with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
+ Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 11 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
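
A minimal sketch of the loading snippet under discussion (assuming the reader has accepted the dataset's terms of use on the Hub, hence `use_auth_token=True`):

```python
from datasets import load_dataset

# Download and prepare the Hindi config of Common Voice 11 in one call.
# "hi" is the language config name; use_auth_token=True is assumed to be
# needed because the dataset sits behind the Mozilla terms of use.
cv_11 = load_dataset(
    "mozilla-foundation/common_voice_11_0", "hi", split="train", use_auth_token=True
)

print(cv_11[0]["sentence"])  # transcription of the first sample
```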
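
And a sketch of the streaming variant, which loads individual samples at a time rather than downloading the entire config to disk first:

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset: samples are fetched
# lazily, one at a time, instead of being written to disk up front.
cv_11_stream = load_dataset(
    "mozilla-foundation/common_voice_11_0", "hi", split="train", streaming=True
)

# Take the first sample from the stream.
sample = next(iter(cv_11_stream))
print(sample["sentence"])
```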
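
Finally, a hypothetical sketch of how the data loader section could wire the dataset into a PyTorch `DataLoader` (a batch size of 1 side-steps collation of variable-length audio; a custom `collate_fn` would be needed for larger batches):

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

cv_11 = load_dataset(
    "mozilla-foundation/common_voice_11_0", "hi", split="train", use_auth_token=True
)
cv_11 = cv_11.with_format("torch")  # return PyTorch tensors where possible

# batch_size=1 avoids padding variable-length audio arrays; a custom
# collate_fn would handle padding for larger batches.
dataloader = DataLoader(cv_11, batch_size=1)

for batch in dataloader:
    print(batch["sentence"])
    break
```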

Love it! The snippets for data loaders are awesome. Also like the use of a non-English language in the examples - btw, maybe clarify it before the snippet, something like "To load a Hindi subset of CV ...".

+1 for @sanchit-gandhi's idea to link the audio datasets blog (https://huggingface.co/blog/audio-datasets)

Mozilla Foundation org

Thank you, @sanchit-gandhi & @polinaeterna - I incorporated both of your changes. If you're okay with it, please feel free to merge. 🤗

You're a writer at heart, @reach-vb! Think we can trim it down here for the final version to keep it streamlined 🐬

- The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. No need to rely on decades old, hacky shell scripts and C/C++ pre-processing scripts anymore. The minimalistic API ensures that you can plug-and-play this dataset in your existing Machine Learning workflow, with just a few lines of code.
+ The `datasets` library allows you to load and pre-process your dataset in pure Python. The entire dataset (or a particular split) ...
Mozilla Foundation org

Thank you, @sanchit-gandhi - committed the suggestion! :)

Mozilla Foundation org

Hi @anton-l - could you please merge this PR? :)

Mozilla Foundation org

Sorry @anton-l, you can ignore this request. @polinaeterna will take care of the merging, and she has a couple of suggestions for improvement here.

@reach-vb just two suggestions from my side:

line 379:

- The entire dataset (or a particular split) can be downloaded and prepared in one call to your local drive by using the `load_dataset` function. 
+ The dataset can be downloaded and prepared to your local drive in one call by using the `load_dataset` function. 

I'd phrase it like this because, for now, even when a split is provided as a parameter, `load_dataset` triggers the download (but not the preparation) of the full config data - this is a library limitation. I'd rather not confuse users here; it's a known, pretty annoying issue, see for example https://github.com/huggingface/datasets/issues/5243. (btw I think I have an idea of how we can mitigate this in the library itself.) I also suggest swapping "to your local drive" and "in one call", just for clarity.
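
A minimal illustration of the limitation being described (a sketch; the split argument does not restrict what gets downloaded):

```python
from datasets import load_dataset

# Only the "test" split is requested, but the archives for the whole
# "hi" config are still downloaded first - the limitation tracked in
# https://github.com/huggingface/datasets/issues/5243.
cv_test = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="test")
```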

line 381:

- For example, to download the Hindi split, simply specify the corresponding language config name (i.e., "hi" for Hindi):
+ For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
  • so as not to confuse the two concepts: splits and configs.

and then we can definitely merge :)

Mozilla Foundation org

Brilliant point, @polinaeterna! I just made the update. If you're okay with it then feel free to merge :)

great! thank you @reach-vb 💙

polinaeterna changed pull request status to merged
