The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error.

Error code: DatasetGenerationCastError. All the data files must have the same columns, but at some point one file introduces 1 new column (`job_id`). This happened while the json dataset builder was generating data using hf://datasets/hallucinations-leaderboard/requests/EleutherAI/gpt-neo-1.3B_eval_request_False_False_False.json (at revision 586a51df692a71d8620aa3ce2c71cbc1d75bf84f). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

```
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
model: string
base_model: string
revision: string
private: bool
status: string
job_id: string
weight_type: string
precision: string
model_type: string
submitted_time: timestamp[s]
license: string
likes: int64
params: double
to
{'model': Value(dtype='string', id=None), 'base_model': Value(dtype='string', id=None), 'revision': Value(dtype='string', id=None), 'private': Value(dtype='bool', id=None), 'precision': Value(dtype='string', id=None), 'weight_type': Value(dtype='string', id=None), 'status': Value(dtype='string', id=None), 'submitted_time': Value(dtype='timestamp[s]', id=None), 'model_type': Value(dtype='string', id=None), 'likes': Value(dtype='int64', id=None), 'params': Value(dtype='float64', id=None), 'license': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1396, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1045, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1029, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1124, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1884, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2015, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 1 new columns ({'job_id'})
This happened while the json dataset builder was generating data using hf://datasets/hallucinations-leaderboard/requests/EleutherAI/gpt-neo-1.3B_eval_request_False_False_False.json (at revision 586a51df692a71d8620aa3ce2c71cbc1d75bf84f)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
```
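The error message's first suggestion, editing the data files to have matching columns, starts with finding every request file whose keys deviate from the expected schema. A minimal stdlib sketch of that check, assuming a local checkout of the request files (the directory layout is an assumption; the `EXPECTED` set is copied from the target schema in the error above):

```python
import json
from pathlib import Path

# Columns the dataset viewer expects, taken from the target schema in the
# cast error above.
EXPECTED = {
    "model", "base_model", "revision", "private", "precision", "weight_type",
    "status", "submitted_time", "model_type", "likes", "params", "license",
}

def find_schema_mismatches(root):
    """Yield (path, extra_keys, missing_keys) for every JSON request file
    whose top-level keys differ from the expected column set."""
    for path in sorted(Path(root).rglob("*.json")):
        keys = set(json.loads(path.read_text()).keys())
        extra, missing = keys - EXPECTED, EXPECTED - keys
        if extra or missing:
            yield path, extra, missing
```

Run against a checkout of the `requests` repo, this would flag the `gpt-neo-1.3B` file named in the traceback, since it carries the extra `job_id` key.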
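The error message's second suggestion, separating the mismatched files into their own configuration, can be sketched in the dataset's README front matter. The config names and glob patterns below are illustrative assumptions, not taken from the repository:

```yaml
configs:
  - config_name: default
    data_files: "*/*eval_request*.json"
  - config_name: legacy_with_job_id
    data_files: "EleutherAI/gpt-neo-1.3B_eval_request_False_False_False.json"
```

With the odd file isolated in its own config, the remaining files share one schema and the viewer can build the default config.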
model (string) | base_model (string) | revision (string) | private (bool) | precision (string) | weight_type (string) | status (string) | submitted_time (unknown) | model_type (string) | likes (int64) | params (float64) | license (string) |
---|---|---|---|---|---|---|---|---|---|---|---|
0-hero/Matter-0.2-7B-DPO | | main | false | bfloat16 | Original | PENDING | "2024-06-10T16:08:28" | 🔶 : fine-tuned on domain-specific datasets | 3 | 7.242 | apache-2.0 |
01-ai/Yi-1.5-34B-32K | | main | false | bfloat16 | Original | PENDING | "2024-05-27T17:10:02" | 🟢 : pretrained | 24 | 34.389 | apache-2.0 |
01-ai/Yi-1.5-34B-Chat-16K | | main | false | bfloat16 | Original | PENDING | "2024-05-27T17:09:11" | 💬 : chat models (RLHF, DPO, IFT, ...) | 16 | 34.389 | apache-2.0 |
01-ai/Yi-1.5-34B-Chat | | main | false | bfloat16 | Original | PENDING | "2024-05-27T17:09:40" | 💬 : chat models (RLHF, DPO, IFT, ...) | 140 | 34.389 | apache-2.0 |
01-ai/Yi-1.5-34B | | main | false | bfloat16 | Original | PENDING | "2024-05-27T17:10:05" | 🟢 : pretrained | 39 | 34.389 | apache-2.0 |
01-ai/Yi-1.5-9B-32K | | main | false | bfloat16 | Original | PENDING | "2024-05-27T17:04:37" | 🟢 : pretrained | 14 | 8.829 | apache-2.0 |
01-ai/Yi-1.5-9B-Chat-16K | | main | false | bfloat16 | Original | PENDING | "2024-05-27T17:06:05" | 💬 : chat models (RLHF, DPO, IFT, ...) | 19 | 8.829 | apache-2.0 |
01-ai/Yi-1.5-9B-Chat | | main | false | bfloat16 | Original | FINISHED | "2024-05-27T17:06:08" | 💬 : chat models (RLHF, DPO, IFT, ...) | 80 | 8.829 | apache-2.0 |
01-ai/Yi-1.5-9B | | main | false | bfloat16 | Original | PENDING | "2024-05-27T17:04:42" | 🟢 : pretrained | 36 | 8.829 | apache-2.0 |
01-ai/Yi-9B-200K | | main | false | bfloat16 | Original | PENDING | "2024-05-16T07:01:28" | 🟢 : pretrained | 71 | 8.829 | other |
BarraHome/zephyr-dpo-v2 | unsloth/zephyr-sft-bnb-4bit | main | false | float16 | Original | FINISHED | "2024-02-05T03:18:41" | 🔶 : fine-tuned | 0 | 7.242 | mit |
Cognitive-Machines-Labs/Dolus-14b-Mini | LlamaForCausalLM | main | false | float32 | Original | PENDING | "2024-08-20T19:21:45" | 🟢 : pretrained | 5 | 11.52 | cc-by-nc-nd-4.0 |
CohereForAI/c4ai-command-r-plus | | main | false | bfloat16 | Original | PENDING | "2024-09-22T14:12:20" | pretrained | 1,653 | 103.811 | cc-by-nc-4.0 |
CohereForAI/c4ai-command-r-v01 | | main | false | float32 | Original | PENDING | "2024-09-22T15:53:57" | pretrained | 1,045 | 34.981 | cc-by-nc-4.0 |
DeepMount00/Llama-3-8b-Ita | | main | false | bfloat16 | Original | FINISHED | "2024-05-17T15:15:42" | 🔶 : fine-tuned on domain-specific datasets | 17 | 8.03 | llama3 |
EleutherAI/gpt-j-6b | | main | false | float32 | Original | FINISHED | "2023-12-03T18:26:55" | pretrained | 1,316 | 6 | apache-2.0 |
EleutherAI/gpt-neo-1.3B | | main | false | float32 | Original | FINISHED | "2023-09-09T10:52:17" | pretrained | 206 | 1.366 | mit |
EleutherAI/gpt-neo-125m | | main | false | float32 | Original | FINISHED | "2023-09-09T10:52:17" | pretrained | 132 | 0.15 | mit |
EleutherAI/gpt-neo-2.7B | | main | false | float32 | Original | FINISHED | "2023-09-09T10:52:17" | pretrained | 361 | 2.718 | mit |
EleutherAI/llemma_7b | | main | false | float32 | Original | FINISHED | "2023-12-07T22:46:12" | pretrained | 54 | 7 | llama2 |
FacebookAI/roberta-base | | main | false | float32 | Original | PENDING | "2024-05-10T01:32:30" | 🟢 : pretrained | 334 | 0.125 | mit |
HuggingFaceH4/mistral-7b-sft-beta | | main | false | float32 | Original | FINISHED | "2023-12-11T15:58:50" | fine-tuned | 15 | 7 | mit |
HuggingFaceH4/zephyr-7b-alpha | | main | false | float32 | Original | FINISHED | "2023-12-07T22:45:00" | fine-tuned | 986 | 7.242 | mit |
HuggingFaceH4/zephyr-7b-beta | | main | false | float32 | Original | FINISHED | "2023-12-03T18:26:39" | fine-tuned | 973 | 7.242 | mit |
Intel/neural-chat-7b-v3-1 | | main | false | float32 | Original | PENDING | "2024-09-22T15:55:46" | fine-tuned | 542 | 7.242 | apache-2.0 |
KnutJaegersberg/Qwen-14B-Llamafied | | main | false | bfloat16 | Original | FINISHED | "2024-01-30T15:47:05" | 🟢 : pretrained | 1 | 14 | other |
KnutJaegersberg/internlm-20b-llama | | main | false | bfloat16 | Original | RUNNING | "2024-01-30T15:47:15" | 🟢 : pretrained | 0 | 20 | other |
KoboldAI/GPT-J-6B-Janeway | | main | false | float32 | Original | FINISHED | "2023-12-07T22:46:31" | pretrained | 11 | 6 | mit |
KoboldAI/LLAMA2-13B-Holodeck-1 | | main | false | float32 | Original | FINISHED | "2023-12-11T15:44:36" | pretrained | 20 | 13.016 | other |
KoboldAI/OPT-13B-Erebus | | main | false | float32 | Original | FINISHED | "2023-12-07T22:45:26" | pretrained | 168 | 13 | other |
KoboldAI/OPT-13B-Nerys-v2 | | main | false | float32 | Original | FINISHED | "2023-12-07T22:48:11" | pretrained | 9 | 13 | other |
KoboldAI/OPT-2.7B-Erebus | | main | false | float32 | Original | FINISHED | "2023-12-11T15:28:21" | pretrained | 32 | 2.7 | other |
KoboldAI/OPT-6.7B-Erebus | | main | false | float32 | Original | FINISHED | "2023-12-07T22:48:16" | pretrained | 88 | 6.7 | other |
KoboldAI/OPT-6B-nerys-v2 | | main | false | float32 | Original | FINISHED | "2023-12-07T22:45:41" | pretrained | 21 | 6 | other |
KoboldAI/fairseq-dense-13B-Janeway | | main | false | float32 | Original | FINISHED | "2023-12-11T15:19:11" | pretrained | 10 | 13 | mit |
Kukedlc/NeuralLLaMa-3-8b-DT-v0.1 | | main | false | float16 | Original | PENDING | "2024-05-29T18:01:44" | 🤝 : base merges and moerges | 1 | 8.03 | other |
Kukedlc/NeuralLLaMa-3-8b-ORPO-v0.3 | | main | false | float16 | Original | PENDING | "2024-05-28T07:14:42" | 💬 : chat models (RLHF, DPO, IFT, ...) | 0 | 8.03 | apache-2.0 |
Kukedlc/NeuralSynthesis-7B-v0.1 | | main | false | bfloat16 | Original | PENDING | "2024-06-17T22:54:12" | 🤝 : base merges and moerges | 3 | 7.242 | apache-2.0 |
LeoLM/leo-hessianai-7b-chat | | main | false | float32 | Original | FINISHED | "2023-12-11T15:29:23" | instruction-tuned | 11 | 7 | null |
LeoLM/leo-hessianai-7b | | main | false | float32 | Original | FINISHED | "2023-12-11T15:52:44" | pretrained | 32 | 7 | null |
Locutusque/Hyperion-3.0-Mistral-7B-alpha | | main | false | bfloat16 | Original | PENDING | "2024-03-18T16:59:48" | 💬 : chat models (RLHF, DPO, IFT, ...) | 4 | 7.242 | apache-2.0 |
Locutusque/Hyperion-3.0-Mixtral-3x7B | | main | false | bfloat16 | Original | PENDING | "2024-03-16T02:18:54" | 🤝 : base merges and moerges | 3 | 18.516 | apache-2.0 |
Locutusque/Hyperion-3.0-Yi-34B | | main | false | bfloat16 | Original | PENDING | "2024-03-18T16:58:33" | 💬 : chat models (RLHF, DPO, IFT, ...) | 7 | 34.389 | other |
LoneStriker/Smaug-34B-v0.1-GPTQ | | main | false | GPTQ | Original | PENDING | "2024-05-17T07:57:31" | 💬 : chat models (RLHF, DPO, IFT, ...) | 1 | 272 | other |
MTSAIR/multi_verse_model | | main | false | bfloat16 | Original | PENDING | "2024-05-25T05:33:53" | 🔶 : fine-tuned on domain-specific datasets | 31 | 7.242 | apache-2.0 |
NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss | | main | false | float16 | Original | RUNNING | "2024-01-29T17:05:05" | 🔶 : fine-tuned | 10 | 46.703 | cc-by-nc-4.0 |
NexaAIDev/Octopus-v2 | | main | false | float32 | Original | PENDING | "2024-09-22T15:54:17" | fine-tuned | 850 | 2.506 | cc-by-nc-4.0 |
Nexusflow/NexusRaven-V2-13B | | main | false | bfloat16 | Original | FINISHED | "2024-01-29T16:55:48" | 🔶 : fine-tuned | 344 | 13 | other |
NotAiLOL/Yi-1.5-dolphin-9B | | main | false | bfloat16 | Original | RUNNING | "2024-05-17T20:47:37" | 🔶 : fine-tuned on domain-specific datasets | 0 | 8.829 | apache-2.0 |
NousResearch/Llama-2-13b-hf | | main | false | float32 | Original | FINISHED | "2023-12-07T22:47:45" | pretrained | 64 | 13.016 | null |
NousResearch/Llama-2-7b-chat-hf | | main | false | float32 | Original | FINISHED | "2023-12-03T18:26:34" | instruction-tuned | 63 | 6.738 | null |
NousResearch/Llama-2-7b-hf | | main | false | float32 | Original | FINISHED | "2023-12-03T18:27:05" | pretrained | 92 | 6.738 | null |
NousResearch/Nous-Capybara-34B | | 62a3489d78d050632a6681208618c8ce61e1ac85 | false | bfloat16 | Original | PENDING | "2024-05-25T06:20:26" | 🔶 : fine-tuned on domain-specific datasets | 229 | 34 | mit |
NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO | | main | false | bfloat16 | Original | PENDING | "2024-06-16T02:38:58" | 💬 : chat models (RLHF, DPO, IFT, ...) | 382 | 46.703 | apache-2.0 |
NousResearch/Nous-Hermes-Llama2-13b | | main | false | float32 | Original | FINISHED | "2023-12-03T18:27:19" | pretrained | 251 | 13 | mit |
NousResearch/Nous-Hermes-llama-2-7b | | main | false | float32 | Original | FINISHED | "2023-12-07T22:46:18" | pretrained | 54 | 6.738 | mit |
NousResearch/Yarn-Mistral-7b-128k | | main | false | float32 | Original | FINISHED | "2023-12-03T18:27:34" | pretrained | 470 | 7 | apache-2.0 |
Open-Orca/Mistral-7B-OpenOrca | | main | false | float32 | Original | FINISHED | "2023-12-03T18:27:13" | pretrained | 485 | 7 | apache-2.0 |
Open-Orca/Mistral-7B-SlimOrca | | main | false | float32 | Original | FINISHED | "2023-12-11T15:53:45" | pretrained | 21 | 7 | apache-2.0 |
Open-Orca/OpenOrca-Platypus2-13B | | main | false | float32 | Original | FINISHED | "2023-12-11T15:20:13" | pretrained | 219 | 13 | cc-by-nc-4.0 |
PygmalionAI/pygmalion-6b | | main | false | float32 | Original | PENDING | "2024-09-22T15:54:39" | pretrained | 725 | 6 | creativeml-openrail-m |
Q-bert/MetaMath-Cybertron-Starling | | main | false | bfloat16 | Original | RUNNING | "2024-01-30T15:57:50" | 🔶 : fine-tuned | 38 | 7.242 | cc-by-nc-4.0 |
Qwen/Qwen-7B-Chat | | main | false | float32 | Original | PENDING | "2024-09-22T15:54:29" | pretrained | 745 | 7.721 | other |
Qwen/Qwen1.5-14B-Chat | | main | false | bfloat16 | Original | PENDING | "2024-05-22T05:22:21" | 💬 : chat models (RLHF, DPO, IFT, ...) | 93 | 14.167 | other |
Qwen/Qwen1.5-14B | | main | false | bfloat16 | Original | PENDING | "2024-05-22T05:22:08" | 🟢 : pretrained | 33 | 14.167 | other |
Qwen/Qwen1.5-32B-Chat | | main | false | bfloat16 | Original | PENDING | "2024-05-22T05:22:38" | 💬 : chat models (RLHF, DPO, IFT, ...) | 93 | 32.512 | other |
Qwen/Qwen1.5-32B | | main | false | bfloat16 | Original | PENDING | "2024-05-22T05:21:55" | 🟢 : pretrained | 71 | 32.512 | other |
Qwen/Qwen2-72B-Instruct | | main | false | bfloat16 | Original | PENDING | "2024-09-22T15:55:08" | instruction-tuned | 659 | 72.706 | other |
Qwen/Qwen2-7B-Instruct | | main | false | bfloat16 | Original | PENDING | "2024-09-22T15:55:31" | instruction-tuned | 571 | 7.616 | apache-2.0 |
Qwen/Qwen2.5-14B-Instruct | | main | false | bfloat16 | Original | PENDING | "2024-09-18T22:22:45" | 💬 : chat models (RLHF, DPO, IFT, ...) | 14 | 14.77 | apache-2.0 |
Qwen/Qwen2.5-14B | | main | false | bfloat16 | Original | PENDING | "2024-09-18T22:23:02" | 🟢 : pretrained | 5 | 14.77 | apache-2.0 |
Qwen/Qwen2.5-32B-Instruct | | main | false | bfloat16 | Original | PENDING | "2024-09-18T22:22:41" | 💬 : chat models (RLHF, DPO, IFT, ...) | 8 | 32.764 | apache-2.0 |
Qwen/Qwen2.5-32B | | main | false | bfloat16 | Original | PENDING | "2024-09-18T22:22:59" | 🟢 : pretrained | 4 | 32.764 | apache-2.0 |
Qwen/Qwen2.5-72B-Instruct | | main | false | bfloat16 | Original | PENDING | "2024-09-18T22:22:35" | 💬 : chat models (RLHF, DPO, IFT, ...) | 18 | 72.706 | other |
Qwen/Qwen2.5-72B | | main | false | bfloat16 | Original | PENDING | "2024-09-18T22:22:56" | 🟢 : pretrained | 7 | 72.706 | other |
Qwen/Qwen2.5-7B-Instruct | | main | false | bfloat16 | Original | PENDING | "2024-09-18T22:22:49" | 💬 : chat models (RLHF, DPO, IFT, ...) | 20 | 7.616 | apache-2.0 |
Qwen/Qwen2.5-7B | | main | false | bfloat16 | Original | PENDING | "2024-09-18T22:22:53" | 🟢 : pretrained | 6 | 7.616 | apache-2.0 |
SakanaAI/DiscoPOP-zephyr-7b-gemma | | main | false | bfloat16 | Original | PENDING | "2024-06-17T05:08:49" | 💬 : chat models (RLHF, DPO, IFT, ...) | 19 | 8.538 | gemma |
SeaLLMs/SeaLLM-7B-v2.5 | | main | false | bfloat16 | Original | PENDING | "2024-05-16T06:49:33" | 🔶 : fine-tuned on domain-specific datasets | 40 | 8.538 | other |
Steelskull/Umbra-v2.1-MoE-4x10.7 | | main | false | bfloat16 | Original | RUNNING | "2024-01-29T13:48:43" | 🔶 : fine-tuned | 1 | 36.099 | apache-2.0 |
TIGER-Lab/MAmmoTH2-7B-Plus | | main | false | bfloat16 | Original | PENDING | "2024-05-17T07:37:52" | 🔶 : fine-tuned on domain-specific datasets | 1 | 7.242 | mit |
TIGER-Lab/MAmmoTH2-8B-Plus | | main | false | bfloat16 | Original | PENDING | "2024-05-17T07:37:56" | 🔶 : fine-tuned on domain-specific datasets | 11 | 8.03 | mit |
TIGER-Lab/MAmmoTH2-8x7B-Plus | | main | false | bfloat16 | Original | PENDING | "2024-05-17T07:43:12" | 🔶 : fine-tuned on domain-specific datasets | 6 | 46.703 | mit |
TheBloke/Falcon-180B-Chat-GPTQ | | main | false | GPTQ | Original | PENDING | "2024-06-19T05:27:56" | 💬 : chat models (RLHF, DPO, IFT, ...) | 67 | 197.968 | unknown |
TheBloke/Falcon-180B-Chat-GPTQ | | main | false | float32 | Original | RUNNING | "2023-12-07T22:44:49" | fine-tuned | 66 | 197.968 | unknown |
TheBloke/Llama-2-13B-chat-AWQ | | main | false | float32 | Original | FINISHED | "2023-12-07T22:47:40" | instruction-tuned | 12 | 2.026 | llama2 |
TheBloke/Llama-2-13B-chat-GPTQ | | main | false | float32 | Original | FINISHED | "2023-12-03T18:27:31" | instruction-tuned | 309 | 16.232 | llama2 |
TheBloke/Llama-2-13B-fp16 | | main | false | float32 | Original | FINISHED | "2023-12-07T22:47:30" | pretrained | 53 | 13 | null |
TheBloke/Llama-2-70B-Chat-AWQ | | main | false | float32 | Original | FINISHED | "2023-12-07T22:44:51" | fine-tuned | 9 | 9.684 | llama2 |
TheBloke/Llama-2-70B-Chat-GPTQ | | main | false | float32 | Original | FINISHED | "2023-12-07T22:45:08" | fine-tuned | 232 | 72.816 | llama2 |
TheBloke/Llama-2-70B-GPTQ | | main | false | float32 | Original | FINISHED | "2023-12-07T22:47:16" | fine-tuned | 78 | 72.816 | llama2 |
TheBloke/Llama-2-7B-Chat-AWQ | | main | false | float32 | Original | FINISHED | "2023-12-11T15:32:24" | fine-tuned | 8 | 1.129 | llama2 |
TheBloke/Llama-2-7B-Chat-GPTQ | | main | false | float32 | Original | FINISHED | "2023-12-03T18:26:23" | fine-tuned | 185 | 9.048 | llama2 |
TheBloke/Llama-2-7B-GPTQ | | main | false | float32 | Original | FINISHED | "2023-12-07T22:48:03" | fine-tuned | 65 | 9.048 | llama2 |
TheBloke/Mistral-7B-OpenOrca-AWQ | | main | false | float32 | Original | FINISHED | "2023-12-07T22:47:42" | fine-tuned | 31 | 1.196 | apache-2.0 |
TheBloke/Mistral-7B-OpenOrca-GPTQ | | main | false | float32 | Original | FINISHED | "2023-12-03T18:27:01" | fine-tuned | 83 | 9.592 | apache-2.0 |
TheBloke/Mythalion-13B-AWQ | | main | false | float32 | Original | FINISHED | "2023-12-11T15:54:46" | fine-tuned | 4 | 2.026 | llama2 |
TheBloke/MythoMax-L2-13B-GPTQ | | main | false | float32 | Original | FINISHED | "2023-12-07T22:45:52" | fine-tuned | 97 | 16.232 | other |
TheBloke/Upstage-Llama-2-70B-instruct-v2-AWQ | | main | false | float32 | Original | FINISHED | "2023-12-07T22:48:32" | instruction-tuned | 0 | 9.684 | llama2 |
TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ | | main | false | float32 | Original | FINISHED | "2023-12-07T22:45:12" | fine-tuned | 267 | 16.224 | other |