Multilinguality: multilingual
Language Creators: crowdsourced
Annotations Creators: crowdsourced
Source Datasets: extended|common_voice
reach-vb (HF staff) committed
Commit 402006a
1 Parent(s): 1ddf8ea

Add suggestions from the review from Polina and Sanchit

Files changed (1):
  1. README.md (+18 -15)

README.md CHANGED
@@ -376,47 +376,50 @@ Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basqu
 
  ### How to use
 
- To get started, you should be able to plug-and-play this dataset in your existing Machine Learning workflow
 
- The entire dataset (or a particular split) can be downloaded to your local drive by using the `load_dataset` function.
  ```python
  from datasets import load_dataset
 
- CV_11_hi_train = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
  ```
 
- Using the datasets library, you can stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode allows one to iterate over the dataset without downloading it on disk.
  ```python
  from datasets import load_dataset
 
- CV_11_hi_train_stream = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train", streaming=True)
 
- # Iterate through the stream and fetch individual data points as you need them
- print(next(iter(CV_11_hi_train_stream)))
  ```
 
- *Bonus*: Create a PyTorch dataloader with directly with the downloaded/ streamed datasets.
  ```python
  from datasets import load_dataset
  from torch.utils.data.sampler import BatchSampler, RandomSampler
 
- ds = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
- batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False)
- dataloader = DataLoader(ds, batch_sampler=batch_sampler)
  ```
 
- ofcourse, you can do the same with streaming datasets as well.
  ```python
  from datasets import load_dataset
  from torch.utils.data import DataLoader
 
- ds = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
- dataloader = DataLoader(ds, batch_size=32)
  ```
 
 
  ### Example scripts
 
- Train your own CTC or Seq2Seq Automatic Speech Recognition models with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
 
  ## Dataset Structure
 
  ### How to use
 
+ The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. There's no need to rely on hacky shell scripts or decades-old C/C++ pre-processing scripts anymore.
 
+ The minimalistic API ensures that you can plug-and-play this dataset into your existing Machine Learning workflow with just a few lines of code.
+
+ The entire dataset (or a particular split) can be downloaded and prepared in one call to your local drive by using the `load_dataset` function. To download the Hindi split, simply specify the corresponding dataset config name:
  ```python
  from datasets import load_dataset
 
+ cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
  ```
 
+ Using the `datasets` library, you can stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples one at a time, rather than downloading the entire dataset to disk.
  ```python
  from datasets import load_dataset
 
+ cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train", streaming=True)
 
+ # Iterate through the stream and fetch individual samples as you need them
+ print(next(iter(cv_11)))
  ```
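Streaming mode is essentially lazy iteration: samples are fetched only when you ask for them. As a rough mental model in plain Python (the generator below is a hypothetical stand-in for the remote dataset, not the `datasets` internals):

```python
from itertools import islice

# Stand-in for a remote dataset: samples are produced lazily,
# one at a time, instead of being materialized on disk first.
def stream_samples():
    for i in range(1_000_000):
        yield {"path": f"clip_{i}.mp3", "sentence": f"transcript {i}"}

# Only the samples you actually consume are ever fetched.
first_three = list(islice(stream_samples(), 3))
print(first_three[0]["sentence"])  # transcript 0
```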
 
+ *Bonus*: Create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with the downloaded or streamed datasets.
  ```python
  from datasets import load_dataset
+ from torch.utils.data import DataLoader
  from torch.utils.data.sampler import BatchSampler, RandomSampler
 
+ cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
+ batch_sampler = BatchSampler(RandomSampler(cv_11), batch_size=32, drop_last=False)
+ dataloader = DataLoader(cv_11, batch_sampler=batch_sampler)
  ```
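Under the hood, `BatchSampler(RandomSampler(ds), ...)` simply shuffles the dataset indices and chunks them into lists of `batch_size`; with `drop_last=False` the final, shorter batch is kept. A pure-Python sketch of that behaviour (no torch required; `batch_indices` is an illustrative helper, not a library function):

```python
import random

def batch_indices(n, batch_size, drop_last=False, seed=0):
    """Mimic BatchSampler(RandomSampler(ds)): shuffle indices, then chunk."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    batches = [idx[i:i + batch_size] for i in range(0, n, batch_size)]
    if drop_last and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

batches = batch_indices(100, 32)
print([len(b) for b in batches])  # [32, 32, 32, 4]
```

Every index appears exactly once per epoch; only the grouping into batches is random.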
408
 
409
+ ofcourse, you can also create a dataloader with streaming datasets as well.
410
  ```python
411
  from datasets import load_dataset
412
  from torch.utils.data import DataLoader
413
 
414
+ cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
415
+ dataloader = DataLoader(cv_11, batch_size=32)
416
  ```
417
 
418
+ To find out more about loading and preparing audio datasets, head over to [hf.coblog/audio-datasets](https://huggingface.co/blog/audio-datasets).
+
  ### Example scripts
 
+ Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 11 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
 
  ## Dataset Structure