The full dataset viewer is not available. Only a preview of the rows is shown below.
The dataset generation failed because of a cast error.

Error code: DatasetGenerationCastError

Exception: DatasetGenerationCastError

Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 9 new columns ({'tokenize_chinese_chars', 'clean_up_tokenization_spaces', 'never_split', 'strip_accents', 'added_tokens_decoder', 'do_basic_tokenize', 'tokenizer_class', 'model_max_length', 'do_lower_case'}). This happened while the json dataset builder was generating data using hf://datasets/Alijeff1214/DILA_FRENCH_DATASET/tokenizer_config.json (at revision d1ce70d7216098c1f3d926d114a60ddd252d8e15). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
added_tokens_decoder: struct<0: struct<content: string, lstrip: bool, normalized: bool, rstrip: bool, single_word: bool, special: bool>, 100: struct<content: string, lstrip: bool, normalized: bool, rstrip: bool, single_word: bool, special: bool>, 101: struct<content: string, lstrip: bool, normalized: bool, rstrip: bool, single_word: bool, special: bool>, 102: struct<content: string, lstrip: bool, normalized: bool, rstrip: bool, single_word: bool, special: bool>, 103: struct<content: string, lstrip: bool, normalized: bool, rstrip: bool, single_word: bool, special: bool>>
clean_up_tokenization_spaces: bool
cls_token: string
do_basic_tokenize: bool
do_lower_case: bool
mask_token: string
model_max_length: int64
never_split: null
pad_token: string
sep_token: string
strip_accents: null
tokenize_chinese_chars: bool
tokenizer_class: string
unk_token: string
to
{'cls_token': Value(dtype='string', id=None), 'mask_token': Value(dtype='string', id=None), 'pad_token': Value(dtype='string', id=None), 'sep_token': Value(dtype='string', id=None), 'unk_token': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1577, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1191, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 9 new columns ({'tokenize_chinese_chars', 'clean_up_tokenization_spaces', 'never_split', 'strip_accents', 'added_tokens_decoder', 'do_basic_tokenize', 'tokenizer_class', 'model_max_length', 'do_lower_case'}). This happened while the json dataset builder was generating data using hf://datasets/Alijeff1214/DILA_FRENCH_DATASET/tokenizer_config.json (at revision d1ce70d7216098c1f3d926d114a60ddd252d8e15). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
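Until the repository is reorganised (or split into separate configurations as the error message suggests), the underlying data can still be loaded by telling `datasets` exactly which files to read, so that tokenizer_config.json is never mixed into the same table. A minimal sketch, assuming a recent `datasets` release; the `data_files` pattern is a placeholder, not the repository's actual file layout:

```python
from datasets import load_dataset

# Sketch of the "separate the data files" workaround from the error message:
# point the JSON builder only at files that share one schema, leaving
# tokenizer_config.json out so the cast above is never attempted.
# "data/*.json" is a hypothetical pattern -- substitute the real data files.
ds = load_dataset(
    "Alijeff1214/DILA_FRENCH_DATASET",
    data_files={"train": "data/*.json"},
)
print(ds["train"].column_names)
```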
| cls_token (string) | mask_token (string) | pad_token (string) | sep_token (string) | unk_token (string) | added_tokens_decoder (dict) | clean_up_tokenization_spaces (bool) | do_basic_tokenize (bool) | do_lower_case (bool) | model_max_length (int64) | never_split (null) | strip_accents (null) | tokenize_chinese_chars (bool) | tokenizer_class (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [CLS] | [MASK] | [PAD] | [SEP] | [UNK] | null | null | null | null | null | null | null | null | null |
| [CLS] | [MASK] | [PAD] | [SEP] | [UNK] | {"0": {"content": "[PAD]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "100": {"content": "[UNK]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "101": {"content": "[CLS]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "102": {"content": "[SEP]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "103": {"content": "[MASK]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}} | true | true | true | 512 | null | null | true | BertTokenizer |
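The second row of the preview is tokenizer_config.json itself, which is where the nine extra columns come from; the first row only carries the shared special-token fields. One way to confirm what that file contains is to fetch it directly with `huggingface_hub` rather than through the dataset builder. A small sketch, assuming the file is readable at the revision pinned in the error message:

```python
import json
from huggingface_hub import hf_hub_download

# Download tokenizer_config.json from the dataset repo at the revision named
# in the cast error, then list its keys: the special-token fields plus the
# nine columns the viewer flagged as new.
path = hf_hub_download(
    repo_id="Alijeff1214/DILA_FRENCH_DATASET",
    filename="tokenizer_config.json",
    repo_type="dataset",
    revision="d1ce70d7216098c1f3d926d114a60ddd252d8e15",
)
with open(path) as f:
    config = json.load(f)

print(sorted(config.keys()))
print(config["tokenizer_class"], config["model_max_length"])
```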