Tasks: Automatic Speech Recognition
Formats: parquet
Languages: English
Size: 10M - 100M
ChickySparrow committed: Update README.md
README.md CHANGED
@@ -112,11 +112,9 @@ The large split requires 4TB of storage (including HuggingFace extraction). The
 Example:
 
 ```python
-import datasets
 from datasets import load_dataset
 
-
-ds = load_dataset('speechbrain/LargeScaleASR', {'small'||'medium'||'large'}, num_proc=6, verification_mode=datasets.VerificationMode.NO_CHECKS)
+ds = load_dataset('speechbrain/LargeScaleASR', {'small'||'medium'||'large'}, num_proc={nb_of_cpu_cores_you_want})
 print(ds['train'])
 
 from io import BytesIO
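For reference, the `{'small'||'medium'||'large'}` and `{nb_of_cpu_cores_you_want}` placeholders in the updated README line are meant to be filled in by the user. A minimal sketch of one concrete instantiation is below; the `config` choice and the environment-variable download guard are illustrative additions, not part of the README itself:

```python
import os

# One of the three split configs named in the README's placeholder.
config = 'small'
assert config in ('small', 'medium', 'large')

# Guarded so the sketch does not trigger a multi-GB download when run
# as-is; set DOWNLOAD_LARGESCALEASR=1 to actually fetch the dataset.
if os.environ.get('DOWNLOAD_LARGESCALEASR'):
    from datasets import load_dataset

    # num_proc parallelizes download/preparation across CPU cores.
    ds = load_dataset('speechbrain/LargeScaleASR', config,
                      num_proc=os.cpu_count())
    print(ds['train'])
```

Note that the updated call drops `verification_mode=datasets.VerificationMode.NO_CHECKS`, so the default checksum verification applies.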