Multilinguality: multilingual
Language Creators: crowdsourced
Annotations Creators: crowdsourced
Source Datasets: extended|common_voice
reach-vb (HF staff) committed
Commit f0004f9
1 Parent(s): 40e3c8a
Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -376,9 +376,9 @@ Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basqu
 
 ### How to use
 
-The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. No need to rely on decades old, hacky shell scripts and C/C++ pre-processing scripts anymore. The minimalistic API ensures that you can plug-and-play this dataset in your existing Machine Learning workflow, with just a few lines of code.
+The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The entire dataset (or a particular split) can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
 
-The entire dataset (or a particular split) can be downloaded and prepared in one call to your local drive by using the `load_dataset` function. For example, to download the Hindi split, simply specify the corresponding language config name (i.e., "hi" for Hindi):
+For example, to download the Hindi split, simply specify the corresponding language config name (i.e., "hi" for Hindi):
 ```python
 from datasets import load_dataset
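The diff above truncates the snippet after the import. A minimal sketch of the call the prose describes follows; note that the repository id `mozilla-foundation/common_voice_13_0` and the `load_dataset_args` helper are illustrative assumptions, not part of the commit:

```python
# Hypothetical repo id for this dataset card; an assumption, not from the diff.
REPO_ID = "mozilla-foundation/common_voice_13_0"

def load_dataset_args(lang: str, split: str = "train"):
    """Build the positional and keyword arguments that `load_dataset` expects:
    the dataset path, the language config name (e.g. "hi" for Hindi),
    and the split to download."""
    return (REPO_ID, lang), {"split": split}

args, kwargs = load_dataset_args("hi")
print(args, kwargs)

# The actual download (requires network access and the `datasets` library):
# from datasets import load_dataset
# cv_hindi = load_dataset(*args, **kwargs)
```

Passing a config name as the second positional argument is how `load_dataset` selects a per-language subset; omitting `split` returns a `DatasetDict` with all available splits instead.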