README.md CHANGED
@@ -959,16 +959,16 @@ crowdsourced voice recordings. There are 2,900 hours of speech represented in th
The dataset contains the audio, transcriptions, and translations in the following languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian, Chinese, Welsh, Catalan, Slovenian, Estonian, Indonesian, Arabic, Tamil, Portuguese, Latvian, and Japanese.

## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive with a single call to the `load_dataset` function.

For example, to download the English-Turkish config, simply specify the corresponding language config name (i.e., "en_tr" for English to Turkish):
```python
from datasets import load_dataset

covost2 = load_dataset("covost2", "en_tr", data_dir="<path/to/manual/data>", split="train")
```
Note: For a successful load, you'd first need to download the Common Voice 4.0 `en` split from the Hugging Face Hub. You can download it via `cv4 = load_dataset("mozilla-foundation/common_voice_4_0", "en", split="all")`. Upon successful download, pass the location of the CV 4.0 dataset to the `data_dir` argument of the CoVoST2 `load_dataset` call.
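Putting both steps together, here is a minimal sketch of the flow described above, with `<path/to/manual/data>` standing in for wherever the downloaded Common Voice 4.0 data ends up on your disk:

```python
from datasets import load_dataset

# Step 1: download the Common Voice 4.0 English split from the Hugging Face Hub
cv4 = load_dataset("mozilla-foundation/common_voice_4_0", "en", split="all")

# Step 2: load CoVoST2, pointing data_dir at the local copy of the Common Voice 4.0 data
# ("<path/to/manual/data>" is a placeholder for that location on disk)
covost2 = load_dataset("covost2", "en_tr", data_dir="<path/to/manual/data>", split="train")
```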
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets.

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

# Load the English-Turkish config; data_dir should point at the Common Voice 4.0 data (see the note above)
covost2 = load_dataset("covost2", "en_tr", data_dir="<path/to/manual/data>", split="train")

# Draw random batches of 32 examples and wrap the dataset in a PyTorch DataLoader
batch_sampler = BatchSampler(RandomSampler(covost2), batch_size=32, drop_last=False)
dataloader = DataLoader(covost2, batch_sampler=batch_sampler)
```