---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: validation_tts
    path: data/validation_tts-*
  - split: test
    path: data/test-*
  - split: test_tts
    path: data/test_tts-*
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 7058957907
    num_examples: 281241
  - name: validation
    num_bytes: 79544090
    num_examples: 5406
  - name: validation_tts
    num_bytes: 39772045
    num_examples: 2703
  - name: test
    num_bytes: 39828951
    num_examples: 2620
  - name: test_tts
    num_bytes: 39828951
    num_examples: 2620
  download_size: 620258987
  dataset_size: 7257931944
---
# Dataset Card for "librispeech960-encodec1024_asr_tokenized_final"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
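
The splits and feature schema declared in the YAML header above can be inspected with the 🤗 Datasets library. The sketch below is a minimal example, not part of the original card: the full repository id (including the owning namespace) is assumed and should be replaced with the dataset's actual path on the Hub.

```python
from datasets import load_dataset

# Assumed repository id: prepend the actual namespace, e.g.
# "<namespace>/librispeech960-encodec1024_asr_tokenized_final".
ds = load_dataset("librispeech960-encodec1024_asr_tokenized_final", split="train")

# Each example holds pre-tokenized sequences, per the feature schema above:
#   input_ids      -> sequence of int32 token ids
#   attention_mask -> sequence of int8 flags
#   labels         -> sequence of int64 target ids
example = ds[0]
print(len(example["input_ids"]), len(example["labels"]))
```

The other splits (`validation`, `validation_tts`, `test`, `test_tts`) can be loaded the same way by changing the `split` argument.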