Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error.
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 6 new columns ({'do_basic_tokenize', 'do_lower_case', 'never_split', 'strip_accents', 'tokenizer_class', 'tokenize_chinese_chars'})

This happened while the json dataset builder was generating data using

hf://datasets/Juny2312/amino/mytoken/tokenizer_config.json (at revision d424932147a3910bc820465b4d13ec90c07fa71e)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              do_lower_case: bool
              do_basic_tokenize: bool
              never_split: null
              unk_token: string
              sep_token: string
              pad_token: string
              cls_token: string
              mask_token: string
              tokenize_chinese_chars: bool
              strip_accents: null
              tokenizer_class: string
              to
              {'unk_token': Value(dtype='string', id=None), 'sep_token': Value(dtype='string', id=None), 'pad_token': Value(dtype='string', id=None), 'cls_token': Value(dtype='string', id=None), 'mask_token': Value(dtype='string', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 6 new columns ({'do_basic_tokenize', 'do_lower_case', 'never_split', 'strip_accents', 'tokenizer_class', 'tokenize_chinese_chars'})
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/Juny2312/amino/mytoken/tokenizer_config.json (at revision d424932147a3910bc820465b4d13ec90c07fa71e)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
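
The linked manual-configuration docs describe the second remedy: declare one configuration per schema in the README.md YAML front matter, so files with different columns are never merged by a single builder. Below is a minimal sketch, not taken from this repository's card; only mytoken/tokenizer_config.json is named in the error, and mytoken/special_tokens_map.json is an assumed name for the file behind the five-column schema:

```yaml
configs:
  - config_name: tokenizer_config
    data_files: "mytoken/tokenizer_config.json"
  - config_name: special_tokens
    # assumed file name; only tokenizer_config.json appears in the error above
    data_files: "mytoken/special_tokens_map.json"
```

Alternatively, if the tokenizer files are not meant to be served as data at all, the front matter can point data_files at the actual dataset files only, and the viewer will then skip the tokenizer JSONs entirely.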


Preview of the rows (11 columns, 2 rows):

Column                   Type     Row 1    Row 2
unk_token                string   [UNK]    [UNK]
sep_token                string   [SEP]    [SEP]
pad_token                string   [PAD]    [PAD]
cls_token                string   [CLS]    [CLS]
mask_token               string   [MASK]   [MASK]
do_lower_case            bool     null     false
do_basic_tokenize        bool     null     true
never_split              null     null     null
tokenize_chinese_chars   bool     null     true
strip_accents            null     null     null
tokenizer_class          string   null     BertTokenizer
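
The preview above interleaves two schemas: one row carries only the five special-token columns, while the other adds the six tokenizer options, which is exactly the mismatch the cast error reports. As a workaround, each JSON file can be loaded on its own by passing an explicit data_files argument to load_dataset, so the json builder never has to reconcile the two schemas. A minimal sketch; as above, the special_tokens_map.json path is an assumption, not confirmed by the page:

```python
from datasets import load_dataset

# Load only the tokenizer config file: a single schema, so no cast is needed.
tokenizer_cfg = load_dataset(
    "Juny2312/amino",
    data_files="mytoken/tokenizer_config.json",
    split="train",
)

# Hypothetical path for the file that holds only the five special tokens.
special_tokens = load_dataset(
    "Juny2312/amino",
    data_files="mytoken/special_tokens_map.json",
    split="train",
)

print(tokenizer_cfg.column_names)   # the eleven tokenizer-config columns
print(special_tokens.column_names)  # the five special-token columns
```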
README.md exists but its content is empty.
Downloads last month: 36