The full dataset viewer is not available. Only showing a preview of the rows.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 4 new columns ({'eos_token', 'pad_token', 'bos_token', 'unk_token'}) and 8 missing columns ({'padding_value', 'padding_side', 'return_attention_mask', 'processor_class', 'sampling_rate', 'feature_size', 'do_normalize', 'feature_extractor_type'}).

This happened while the json dataset builder was generating data using

hf://datasets/ai4bharat/indicwav2vec_noa_hf/special_tokens_map.json (at revision 75a4cb13976c4ec2dea945d2e5c65082a726305d)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              bos_token: string
              eos_token: string
              pad_token: string
              unk_token: string
              to
              {'do_normalize': Value(dtype='bool', id=None), 'feature_extractor_type': Value(dtype='string', id=None), 'feature_size': Value(dtype='int64', id=None), 'padding_side': Value(dtype='string', id=None), 'padding_value': Value(dtype='int64', id=None), 'processor_class': Value(dtype='string', id=None), 'return_attention_mask': Value(dtype='bool', id=None), 'sampling_rate': Value(dtype='int64', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 4 new columns ({'eos_token', 'pad_token', 'bos_token', 'unk_token'}) and 8 missing columns ({'padding_value', 'padding_side', 'return_attention_mask', 'processor_class', 'sampling_rate', 'feature_size', 'do_normalize', 'feature_extractor_type'}).
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/ai4bharat/indicwav2vec_noa_hf/special_tokens_map.json (at revision 75a4cb13976c4ec2dea945d2e5c65082a726305d)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
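
The recommendation above points to the Hub's manual-configuration docs, which let each JSON file become its own configuration so that no single schema is forced across files with different columns. As a client-side illustration of the same idea, the sketch below loads two of the files separately with the datasets library. special_tokens_map.json is the file named in the error; "preprocessor_config.json" is an assumed filename inferred from the preview columns and may differ.

    # Minimal sketch, assuming the `datasets` library is installed.
    # "preprocessor_config.json" is an assumed filename; special_tokens_map.json
    # is the file named in the error above.
    from datasets import load_dataset

    repo = "ai4bharat/indicwav2vec_noa_hf"

    # Loading each JSON file on its own gives each its own schema, so the
    # builder never tries to cast one file's columns to another file's schema.
    special_tokens = load_dataset(repo, data_files="special_tokens_map.json", split="train")
    preprocessor = load_dataset(repo, data_files="preprocessor_config.json", split="train")

    print(special_tokens.column_names)  # e.g. ['bos_token', 'eos_token', 'pad_token', 'unk_token']
    print(preprocessor.column_names)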


Row preview (one row per JSON file in the repository; columns a file does not define are shown as null):

Columns and types:
  do_normalize (bool), feature_extractor_type (string), feature_size (int64),
  padding_side (string), padding_value (int64), processor_class (string),
  return_attention_mask (bool), sampling_rate (int64), bos_token (string),
  eos_token (string), pad_token (string), unk_token (string),
  do_lower_case (bool), replace_word_delimiter_char (string),
  tokenizer_class (string), word_delimiter_token (string),
  plus one int64 column per vocabulary token (</s>, <pad>, <s>, <unk>, |, ि, ...)

Row 1 (feature extractor configuration):
  do_normalize: true, feature_extractor_type: Wav2Vec2FeatureExtractor,
  feature_size: 1, padding_side: right, padding_value: 0,
  processor_class: Wav2Vec2Processor, return_attention_mask: true,
  sampling_rate: 16,000

Row 2 (special tokens map):
  bos_token: <s>, eos_token: </s>, pad_token: <pad>, unk_token: <unk>

Row 3 (tokenizer configuration):
  bos_token: <s>, eos_token: </s>, pad_token: <pad>, unk_token: <unk>,
  do_lower_case: false, tokenizer_class: Wav2Vec2CTCTokenizer,
  word_delimiter_token: |, processor_class: Wav2Vec2Processor

Row 4 (vocabulary, token → id):
  <pad>: 0, <s>: 1, </s>: 2, <unk>: 3, |: 4,
  followed by the per-character vocabulary entries with ids 5–67
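
The last preview row is the vocabulary itself, a token → id map. A minimal sketch of fetching and inverting it (assumes huggingface_hub is installed and that the file is named vocab.json, which the preview implies but does not state):

    # Sketch only: vocab.json is an assumed filename for the token -> id map
    # shown in the last preview row.
    import json
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(
        repo_id="ai4bharat/indicwav2vec_noa_hf",
        filename="vocab.json",
        repo_type="dataset",  # the repo lives under datasets/, not models/
    )

    with open(path, encoding="utf-8") as f:
        vocab = json.load(f)  # e.g. {"<pad>": 0, "<s>": 1, "</s>": 2, ...}

    id_to_token = {idx: tok for tok, idx in vocab.items()}
    print(len(vocab), "tokens; id 0 ->", id_to_token.get(0))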