The full dataset viewer is not available. Only showing a preview of the rows.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 1 new columns ({'versions'})

This happened while the json dataset builder was generating data using

hf://datasets/nan/results/demo-leaderboard/gpt2-demo/results_2023-11-22 15:46:20.425378.json (at revision 6de24a6e0dec1e3d65aab31a8177ee457f2e451b)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
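A minimal sketch of the first suggested fix, assuming a local clone of the dataset repo. The directory path and the EXPECTED_COLUMNS set are illustrative assumptions, not taken from the repo; the alternative fix (separate configurations via README YAML) is described at the linked docs and not shown here.

    import json
    from pathlib import Path

    # Top-level keys the json builder inferred from the first file it read.
    # Every other file must expose exactly these columns, or the cast fails.
    EXPECTED_COLUMNS = {"config", "results"}  # assumption: schema of the first file

    for path in Path("results/demo-leaderboard/gpt2-demo").glob("*.json"):  # illustrative path
        data = json.loads(path.read_text())
        extra = set(data) - EXPECTED_COLUMNS
        if extra:  # e.g. the stray {'versions'} column flagged above
            for key in extra:
                del data[key]
            path.write_text(json.dumps(data, indent=2))
            print(f"stripped {sorted(extra)} from {path}")

The symmetric fix, adding "versions": null to the older files instead of deleting the key from the newer ones, also works; either way the json builder sees one consistent set of columns.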
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              results: struct<anli_r1: struct<acc: double, acc_stderr: double>, logiqa: struct<acc: double, acc_stderr: double, acc_norm: double, acc_norm_stderr: double>>
                child 0, anli_r1: struct<acc: double, acc_stderr: double>
                    child 0, acc: double
                    child 1, acc_stderr: double
                child 1, logiqa: struct<acc: double, acc_stderr: double, acc_norm: double, acc_norm_stderr: double>
                    child 0, acc: double
                    child 1, acc_stderr: double
                    child 2, acc_norm: double
                    child 3, acc_norm_stderr: double
              versions: struct<anli_r1: int64, logiqa: int64>
                child 0, anli_r1: int64
                child 1, logiqa: int64
              config: struct<model: string, model_args: string, num_fewshot: int64, batch_size: int64, batch_sizes: list<item: null>, device: string, no_cache: bool, limit: int64, bootstrap_iters: int64, description_dict: null, model_dtype: string, model_name: string, model_sha: string>
                child 0, model: string
                child 1, model_args: string
                child 2, num_fewshot: int64
                child 3, batch_size: int64
                child 4, batch_sizes: list<item: null>
                    child 0, item: null
                child 5, device: string
                child 6, no_cache: bool
                child 7, limit: int64
                child 8, bootstrap_iters: int64
                child 9, description_dict: null
                child 10, model_dtype: string
                child 11, model_name: string
                child 12, model_sha: string
              to
              {'config': {'model_dtype': Value(dtype='string', id=None), 'model_name': Value(dtype='string', id=None), 'model_sha': Value(dtype='string', id=None)}, 'results': {'anli_r1': {'acc': Value(dtype='int64', id=None)}, 'logiqa': {'acc_norm': Value(dtype='float64', id=None)}}}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 1 new columns ({'versions'})
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/nan/results/demo-leaderboard/gpt2-demo/results_2023-11-22 15:46:20.425378.json (at revision 6de24a6e0dec1e3d65aab31a8177ee457f2e451b)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
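The failure is easy to reproduce locally: the json builder infers its schema from the first file it reads, so a second file with an extra top-level key triggers the same cast error. A minimal sketch with a recent datasets release; the file names and contents are illustrative:

    import json
    from datasets import load_dataset
    from datasets.exceptions import DatasetGenerationCastError

    with open("first.json", "w") as f:
        json.dump({"config": {"model_name": "a"},
                   "results": {"anli_r1": {"acc": 0}}}, f)
    with open("second.json", "w") as f:  # same columns plus an extra one
        json.dump({"config": {"model_name": "b"},
                   "results": {"anli_r1": {"acc": 1}},
                   "versions": {"anli_r1": 0}}, f)

    try:
        load_dataset("json", data_files=["first.json", "second.json"])
    except DatasetGenerationCastError as err:
        print(err)  # message mentions: 1 new columns ({'versions'})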


Preview rows (columns: config, results, versions — each a dict):

Row 1:
  config:   { "model_dtype": "torch.float16", "model_name": "demo-leaderboard/gpt2-demo", "model_sha": "ac3299b02780836378b9e1e68c6eead546e89f90" }
  results:  { "anli_r1": { "acc": 0 }, "logiqa": { "acc_norm": 0.9 } }
  versions: null

Row 2:
  config:   { "model": "hf-causal-experimental", "model_args": "pretrained=demo-leaderboard/gpt2-demo,revision=main,dtype=bfloat16", "num_fewshot": 0, "batch_size": 1, "batch_sizes": [], "device": "cpu", "no_cache": true, "limit": 20, "bootstrap_iters": 100000, "description_dict": null, "model_dtype": "bfloat16", "model_name": "demo-leaderboard/gpt2-demo", "model_sha": "main" }
  results:  { "anli_r1": { "acc": 0.4, "acc_stderr": 0.11239029738980327 }, "logiqa": { "acc": 0.35, "acc_stderr": 0.10942433098048308, "acc_norm": 0.3, "acc_norm_stderr": 0.10513149660756933 } }
  versions: { "anli_r1": 0, "logiqa": 0 }