Formats: parquet · Libraries: Datasets, Dask
Commit 0ade6f1 (parent: 282daf3) by roshansh

Upload README.md with huggingface_hub

Files changed (1): README.md (+44, −0)
README.md ADDED
@@ -0,0 +1,44 @@
+ ---
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+   - split: validation
+     path: data/validation-*
+   - split: validation_tts
+     path: data/validation_tts-*
+   - split: test
+     path: data/test-*
+   - split: test_tts
+     path: data/test_tts-*
+ dataset_info:
+   features:
+   - name: input_ids
+     sequence: int32
+   - name: attention_mask
+     sequence: int8
+   - name: labels
+     sequence: int64
+   splits:
+   - name: train
+     num_bytes: 7058957907
+     num_examples: 281241
+   - name: validation
+     num_bytes: 79544090
+     num_examples: 5406
+   - name: validation_tts
+     num_bytes: 39772045
+     num_examples: 2703
+   - name: test
+     num_bytes: 39828951
+     num_examples: 2620
+   - name: test_tts
+     num_bytes: 39828951
+     num_examples: 2620
+   download_size: 620258987
+   dataset_size: 7257931944
+ ---
+ # Dataset Card for "librispeech960-encodec1024_asr_tokenized_final"
+
+ [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
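
The split metadata in the YAML header above can be cross-checked locally. This is a minimal sketch that copies the `num_bytes` and `num_examples` values from the card and verifies that they sum to the stated `dataset_size` (no download required; the numbers themselves come straight from the YAML):

```python
# Split metadata copied from the dataset card's YAML header above.
splits = {
    "train": {"num_bytes": 7058957907, "num_examples": 281241},
    "validation": {"num_bytes": 79544090, "num_examples": 5406},
    "validation_tts": {"num_bytes": 39772045, "num_examples": 2703},
    "test": {"num_bytes": 39828951, "num_examples": 2620},
    "test_tts": {"num_bytes": 39828951, "num_examples": 2620},
}

# Sum the per-split sizes and example counts.
dataset_size = sum(s["num_bytes"] for s in splits.values())
total_examples = sum(s["num_examples"] for s in splits.values())

print(dataset_size)    # → 7257931944, matching dataset_size in the YAML
print(total_examples)  # → 294590
```

To actually load the data, `datasets.load_dataset(...)` with the repository id of this dataset should resolve the `data/train-*` parquet shards automatically; the exact namespace/repo id is not stated in this excerpt, so it is left out here.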