Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.

Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 2 new columns ({'likes', 'license'}) and 5 missing columns ({'job_start_time', 'flageval_id', 'eval_id', 'architectures', 'job_id'}). This happened while the json dataset builder was generating data using hf://datasets/open-cn-llm-leaderboard/requests/01-ai/Yi-1.5-34B-Chat_eval_request_False_float16_Original.json (at revision 462330aaad6d780bba9d0dc7adaaa29000e55a6d). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
model: string
base_model: string
revision: string
private: bool
precision: string
weight_type: string
status: string
submitted_time: timestamp[s]
model_type: string
likes: int64
params: double
license: string
to
{'model': Value(dtype='string', id=None), 'base_model': Value(dtype='string', id=None), 'revision': Value(dtype='string', id=None), 'private': Value(dtype='bool', id=None), 'precision': Value(dtype='string', id=None), 'params': Value(dtype='float64', id=None), 'architectures': Value(dtype='string', id=None), 'weight_type': Value(dtype='string', id=None), 'status': Value(dtype='string', id=None), 'submitted_time': Value(dtype='timestamp[s]', id=None), 'model_type': Value(dtype='string', id=None), 'job_id': Value(dtype='int64', id=None), 'job_start_time': Value(dtype='null', id=None), 'eval_id': Value(dtype='int64', id=None), 'flageval_id': Value(dtype='int64', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1323, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 938, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 2 new columns ({'likes', 'license'}) and 5 missing columns ({'job_start_time', 'flageval_id', 'eval_id', 'architectures', 'job_id'}). This happened while the json dataset builder was generating data using hf://datasets/open-cn-llm-leaderboard/requests/01-ai/Yi-1.5-34B-Chat_eval_request_False_float16_Original.json (at revision 462330aaad6d780bba9d0dc7adaaa29000e55a6d). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
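The second remedy the error message suggests, separating the mismatched files into different configurations, is done in the dataset's README.md YAML front matter. A sketch under stated assumptions: the config names and file patterns below are hypothetical, since the actual layout of this repository's JSON files would determine them.

```yaml
configs:
  - config_name: with_job_ids   # newer files carrying job_id / eval_id / flageval_id columns
    data_files: "requests_new/*.json"
  - config_name: with_likes     # older files carrying likes / license columns instead
    data_files: "requests_old/*.json"
```

Each configuration is then built with its own schema, so files with different column sets no longer collide.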
model (string) | base_model (string) | revision (string) | private (bool) | precision (string) | params (float64) | architectures (string) | weight_type (string) | status (string) | submitted_time (unknown) | model_type (string) | job_id (int64) | job_start_time (null) | eval_id (int64) | flageval_id (int64) | likes (int64) | license (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
01-ai/Yi-1.5-34B-32K | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | "2024-05-24T23:58:39" | 🟢 : pretrained | -1 | null | 4,400 | 707 | null | null |
01-ai/Yi-1.5-34B-Chat-16K | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | "2024-05-24T23:59:06" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,402 | 708 | null | null |
01-ai/Yi-1.5-34B-Chat | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | "2024-05-15T01:37:19" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,069 | 622 | null | null |
01-ai/Yi-1.5-34B-Chat | | main | false | float16 | 34.389 | null | Original | CANCELLED | "2024-05-15T03:04:37" | 💬 chat models (RLHF, DPO, IFT, ...) | null | null | null | null | 84 | apache-2.0 |
01-ai/Yi-1.5-34B | | main | false | float16 | 34.389 | null | Original | FINISHED | "2024-05-15T03:00:48" | 🟢 pretrained | null | null | 3,087 | 624 | 22 | apache-2.0 |
01-ai/Yi-1.5-6B | | main | false | float16 | 6.061 | null | Original | FINISHED | "2024-05-15T03:03:46" | 🟢 pretrained | null | null | 3,064 | 623 | 14 | apache-2.0 |
01-ai/Yi-1.5-9B-32K | | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | "2024-05-25T00:00:23" | 🟢 : pretrained | -1 | null | 4,395 | 714 | null | null |
01-ai/Yi-1.5-9B-Chat-16K | | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | "2024-05-25T00:00:04" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,004 | 710 | null | null |
01-ai/Yi-1.5-9B-Chat | | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | "2024-05-15T01:44:49" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,042 | 621 | null | null |
01-ai/Yi-1.5-9B | | main | false | float16 | 8.829 | null | Original | FINISHED | "2024-05-14T08:12:00" | 🟢 pretrained | null | null | 3,037 | 619 | 21 | apache-2.0 |
01-ai/Yi-34B-200K | | main | false | float16 | 34.389 | null | Original | FINISHED | "2024-04-29T06:30:17" | PT | null | null | 2,956 | 450 | 300 | other |
01-ai/Yi-34B-Chat | | main | false | float16 | 34.389 | null | Original | FINISHED | "2024-03-20T07:13:31" | chat | null | null | 2,910 | 478 | 296 | other |
01-ai/Yi-34B | | main | false | float16 | 34.389 | LlamaForCausalLM | Original | FINISHED | "2024-03-11T06:38:38" | 🟢 : pretrained | -1 | null | 3,022 | 451 | null | null |
01-ai/Yi-6B-Chat | | main | false | float16 | 6.061 | null | Original | FINISHED | "2024-04-22T10:16:43" | chat | null | null | 2,866 | 545 | 53 | other |
01-ai/Yi-6B | | main | false | float16 | 6.061 | null | Original | FINISHED | "2024-04-22T10:17:41" | PT | null | null | 2,865 | 544 | 360 | other |
01-ai/Yi-9B-200K | | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | "2024-05-26T04:15:23" | 🟢 : pretrained | -1 | null | 4,658 | 736 | null | null |
01-ai/Yi-9B | | main | false | float16 | 8.829 | null | Original | FINISHED | "2024-04-24T06:56:34" | PT | null | null | 2,891 | 437 | 175 | other |
Artples/L-MChat-7b | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FINISHED | "2024-05-26T20:30:41" | 🤝 : base merges and moerges | -1 | null | 4,669 | 743 | null | null |
Artples/L-MChat-Small | | main | false | bfloat16 | 2.78 | PhiForCausalLM | Original | FINISHED | "2024-05-26T20:31:44" | 🤝 : base merges and moerges | -1 | null | 4,670 | 744 | null | null |
Artples/LAI-Paca-7b | Artples/Adapter-Baseline | main | false | bfloat16 | 7 | ? | Adapter | CANCELLED | "2024-05-26T22:07:33" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,757 | 745 | null | null |
Azure99/blossom-v5.1-34b | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | "2024-05-22T09:46:58" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,401 | 692 | null | null |
Azure99/blossom-v5.1-9b | | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | "2024-05-22T09:47:24" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,389 | 693 | null | null |
CausalLM/34b-beta | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | "2024-05-30T19:56:30" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,801 | 781 | null | null |
CofeAI/Tele-FLM | | main | false | float16 | 0 | null | Original | RUNNING | "2024-06-06T08:58:56" | ? : | null | null | 4,809 | 787 | 17 | apache-2.0 |
ConvexAI/Luminex-34B-v0.1 | | main | false | float16 | 34.389 | LlamaForCausalLM | Original | PENDING | "2024-06-08T22:12:14" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | null | null | null | null |
ConvexAI/Luminex-34B-v0.2 | | main | false | float16 | 34.389 | LlamaForCausalLM | Original | FINISHED | "2024-05-30T19:58:21" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,804 | 782 | null | null |
CultriX/NeuralMona_MoE-4x7B | | main | false | bfloat16 | 24.154 | MixtralForCausalLM | Original | PENDING | "2024-06-10T09:29:16" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | null | null | null | null |
Danielbrdz/Barcenas-Llama3-8b-ORPO | | main | false | float16 | 8.03 | LlamaForCausalLM | Original | FINISHED | "2024-05-16T06:18:54" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,090 | 636 | null | null |
DeepMount00/Llama-3-8b-Ita | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | "2024-05-17T15:16:03" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,147 | 668 | null | null |
FlagAlpha/Llama3-Chinese-8B-Instruct | | main | false | float16 | 8.03 | LlamaForCausalLM | Original | FINISHED | "2024-05-17T20:24:58" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,166 | 682 | null | null |
GritLM/GritLM-8x7B | | main | false | bfloat16 | 46.703 | MixtralForCausalLM | Original | FINISHED | "2024-05-30T22:02:14" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,808 | 786 | null | null |
HIT-SCIR/Chinese-Mixtral-8x7B | | main | false | bfloat16 | 46.908 | MixtralForCausalLM | Original | FINISHED | "2024-05-17T20:15:48" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,412 | 678 | null | null |
Kquant03/CognitiveFusion2-4x7B-BF16 | | main | false | bfloat16 | 24.154 | MixtralForCausalLM | Original | PENDING | "2024-06-10T09:31:14" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | null | null | null | null |
Kukedlc/NeuralLLaMa-3-8b-DT-v0.1 | | main | false | float16 | 8.03 | LlamaForCausalLM | Original | FINISHED | "2024-05-17T15:18:34" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,137 | 669 | null | null |
Kukedlc/NeuralLLaMa-3-8b-ORPO-v0.3 | | main | false | float16 | 8.03 | LlamaForCausalLM | Original | FINISHED | "2024-05-28T07:13:56" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,766 | 756 | null | null |
Kukedlc/NeuralSynthesis-7b-v0.4-slerp | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FINISHED | "2024-05-30T09:33:09" | 🤝 : base merges and moerges | -1 | null | 4,790 | 776 | null | null |
LoneStriker/Smaug-34B-v0.1-GPTQ | | main | false | GPTQ | 272 | LlamaForCausalLM | Original | CANCELLED | "2024-05-17T07:58:21" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,160 | 672 | null | null |
MTSAIR/multi_verse_model | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FINISHED | "2024-05-25T05:36:13" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,396 | 715 | null | null |
MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2 | | main | false | bfloat16 | 70.554 | LlamaForCausalLM | Original | FINISHED | "2024-05-26T03:55:11" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,663 | 738 | null | null |
MaziyarPanahi/Llama-3-8B-Instruct-v0.8 | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | "2024-05-29T09:24:33" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,775 | 763 | null | null |
MaziyarPanahi/Mistral-7B-Instruct-v0.2 | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FINISHED | "2024-05-29T07:54:05" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,771 | 760 | null | null |
MaziyarPanahi/Topxtral-4x7B-v0.1 | | main | false | bfloat16 | 18.516 | MixtralForCausalLM | Original | PENDING | "2024-06-10T09:33:06" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | null | null | null | null |
MoaData/Myrrh_solar_10.7b_3.0 | | main | false | float16 | 10.732 | LlamaForCausalLM | Original | FINISHED | "2024-05-17T20:49:38" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,960 | 688 | null | null |
NLPark/AnFeng_v3_Avocet | | main | false | bfloat16 | 34.981 | CohereForCausalLM | Original | FINISHED | "2024-05-14T18:11:34" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,091 | 626 | null | null |
NLPark/Test1_SLIDE | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | "2024-05-14T18:12:56" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,039 | 620 | null | null |
NotAiLOL/Yi-1.5-dolphin-9B | | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | "2024-05-17T20:49:03" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,403 | 686 | null | null |
NousResearch/Nous-Hermes-2-SOLAR-10.7B | | main | false | bfloat16 | 10.732 | LlamaForCausalLM | Original | FINISHED | "2024-05-30T16:49:20" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,794 | 778 | null | null |
NousResearch/Nous-Hermes-2-Yi-34B | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | "2024-05-25T05:38:28" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,397 | 716 | null | null |
OpenBuddy/openbuddy-deepseek-67b-v18.1-4k | | main | false | bfloat16 | 67.425 | LlamaForCausalLM | Original | FINISHED | "2024-05-23T07:30:29" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,907 | 704 | null | null |
OpenBuddy/openbuddy-llama3-8b-v21.1-8k | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | "2024-05-23T07:30:13" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,880 | 703 | null | null |
OpenBuddy/openbuddy-mistral-22b-v21.1-32k | | main | false | bfloat16 | 22.354 | MistralForCausalLM | Original | FINISHED | "2024-05-23T07:31:04" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,908 | 705 | null | null |
OpenBuddy/openbuddy-yi1.5-9b-v21.1-32k | | main | false | bfloat16 | 8.85 | LlamaForCausalLM | Original | FINISHED | "2024-05-28T04:04:03" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,759 | 753 | null | null |
OrionStarAI/OrionStar-Yi-34B-Chat-Llama | | main | false | float16 | 34.389 | LlamaForCausalLM | Original | PENDING | "2024-06-09T01:34:42" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | null | null | null | null |
Qwen/Qwen-1_8B-Chat | | main | false | float16 | 1.837 | null | Original | FAILED | "2024-04-22T10:14:21" | chat | null | null | 2,925 | 452 | 100 | null |
Qwen/Qwen1.5-0.5B-Chat | | main | false | float16 | 0.62 | null | Original | FINISHED | "2024-04-27T14:18:09" | chat | null | null | 2,931 | 571 | 49 | other |
Qwen/Qwen1.5-0.5B | | main | false | float16 | 0.62 | null | Original | FINISHED | "2024-02-21T09:46:43" | PT | null | null | 2,794 | 428 | 42 | other |
Qwen/Qwen1.5-1.8B-Chat | | main | false | float16 | 1.837 | Qwen2ForCausalLM | Original | FINISHED | "2024-05-08T01:21:14" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 2,982 | 589 | null | null |
Qwen/Qwen1.5-1.8B | | main | false | float16 | 1.837 | Qwen2ForCausalLM | Original | FINISHED | "2024-05-07T08:40:01" | 🟢 : pretrained | -1 | null | 2,980 | 587 | null | null |
Qwen/Qwen1.5-110B-Chat | | main | false | bfloat16 | 111.21 | Qwen2ForCausalLM | Original | CANCELLED | "2024-05-15T01:35:41" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,140 | 625 | null | null |
Qwen/Qwen1.5-14B-Chat | | main | false | bfloat16 | 14.167 | Qwen2ForCausalLM | Original | FINISHED | "2024-05-16T06:13:45" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,089 | 635 | null | null |
Qwen/Qwen1.5-14B | | main | false | bfloat16 | 14.167 | Qwen2ForCausalLM | Original | FINISHED | "2024-05-16T06:13:35" | 🟢 : pretrained | -1 | null | 3,088 | 633 | null | null |
Qwen/Qwen1.5-32B-Chat | | main | false | float16 | 32.512 | Qwen2ForCausalLM | Original | FINISHED | "2024-05-07T08:56:42" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,040 | 588 | null | null |
Qwen/Qwen1.5-32B | | main | false | float16 | 32.512 | Qwen2ForCausalLM | Original | FINISHED | "2024-05-11T09:38:32" | 🟢 : pretrained | -1 | null | 3,058 | 609 | null | null |
Qwen/Qwen1.5-4B-Chat | | main | false | float16 | 3.95 | null | Original | FINISHED | "2024-04-27T14:19:06" | chat | null | null | 2,932 | 572 | 30 | other |
Qwen/Qwen1.5-4B | | main | false | float16 | 3.95 | null | Original | FINISHED | "2024-04-25T06:17:03" | PT | null | null | 2,926 | 564 | 25 | other |
Qwen/Qwen1.5-72B-Chat | | main | false | bfloat16 | 72.288 | Qwen2ForCausalLM | Original | FINISHED | "2024-05-15T01:34:51" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,071 | 629 | null | null |
Qwen/Qwen1.5-72B | | main | false | float16 | 72.288 | null | Original | FINISHED | "2024-05-15T11:17:36" | 🟢 pretrained | null | null | 3,096 | 628 | 55 | other |
Qwen/Qwen1.5-7B | | main | false | float16 | 7.721 | null | Original | FINISHED | "2024-04-22T10:15:46" | PT | null | null | 2,924 | 430 | 32 | other |
Qwen/Qwen1.5-MoE-A2.7B-Chat | | main | false | float16 | 14.316 | null | Original | FINISHED | "2024-04-29T08:57:44" | chat | null | null | 3,038 | 577 | 93 | other |
Qwen/Qwen1.5-MoE-A2.7B | | main | false | float16 | 14.316 | null | Original | FINISHED | "2024-04-29T08:56:36" | PT | null | null | 3,024 | 576 | 176 | other |
SeaLLMs/SeaLLM-7B-v2.5 | | main | false | bfloat16 | 8.538 | GemmaForCausalLM | Original | FINISHED | "2024-05-16T06:19:56" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,092 | 637 | null | null |
THUDM/chatglm3-6b-32k | | main | false | float16 | 6 | null | Original | FAILED | "2024-05-11T13:20:05" | chat | null | null | 3,011 | 613 | 241 | null |
THUDM/chatglm3-6b-base | | main | false | float16 | 6 | null | Original | FAILED | "2024-05-11T13:19:32" | PT | null | null | 3,009 | 611 | 82 | null |
THUDM/chatglm3-6b | | main | false | float16 | 6.244 | null | Original | FINISHED | "2024-05-11T13:18:45" | chat | null | null | 3,010 | 612 | 970 | null |
TIGER-Lab/MAmmoTH2-7B-Plus | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FINISHED | "2024-05-17T07:36:41" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,125 | 662 | null | null |
TIGER-Lab/MAmmoTH2-8B-Plus | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | "2024-05-17T07:36:47" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,126 | 663 | null | null |
TIGER-Lab/MAmmoTH2-8x7B-Plus | | main | false | bfloat16 | 46.703 | MixtralForCausalLM | Original | FINISHED | "2024-05-17T07:44:00" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,133 | 665 | null | null |
UnicomLLM/Unichat-llama3-Chinese-8B-28K | | main | false | bfloat16 | 8 | LlamaForCausalLM | Original | FINISHED | "2024-05-17T20:25:58" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,168 | 684 | null | null |
UnicomLLM/Unichat-llama3-Chinese-8B | | main | false | bfloat16 | 8 | LlamaForCausalLM | Original | FINISHED | "2024-05-17T20:25:33" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,173 | 683 | null | null |
VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | "2024-05-17T15:12:22" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,135 | 667 | null | null |
Weyaxi/Bagel-Hermes-34B-Slerp | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | "2024-05-16T07:55:10" | 🤝 : base merges and moerges | -1 | null | 3,131 | 657 | null | null |
Weyaxi/Einstein-v6.1-Llama3-8B | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | "2024-05-22T12:36:11" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,389 | 694 | null | null |
Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | "2024-05-22T12:36:41" | 🤝 : base merges and moerges | -1 | null | 4,399 | 695 | null | null |
abacusai/Llama-3-Smaug-8B | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | "2024-05-16T06:24:16" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,094 | 641 | null | null |
abacusai/Smaug-34B-v0.1 | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | "2024-05-16T07:55:46" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,061 | 630 | null | null |
abacusai/Smaug-72B-v0.1 | | main | false | bfloat16 | 72.289 | LlamaForCausalLM | Original | FINISHED | "2024-05-30T16:45:49" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,797 | 779 | null | null |
abacusai/Smaug-Llama-3-70B-Instruct | | main | false | bfloat16 | 70.554 | LlamaForCausalLM | Original | FINISHED | "2024-05-17T19:01:14" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,648 | 673 | null | null |
abacusai/Smaug-Mixtral-v0.1 | | main | false | bfloat16 | 46.703 | MixtralForCausalLM | Original | RUNNING | "2024-05-30T19:58:58" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,835 | 784 | null | null |
abhishek/autotrain-llama3-70b-orpo-v1 | | main | false | float16 | 70.554 | LlamaForCausalLM | Original | FINISHED | "2024-05-29T18:59:50" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,793 | 777 | null | null |
alpindale/WizardLM-2-8x22B | | main | false | bfloat16 | 140.621 | MixtralForCausalLM | Original | RUNNING | "2024-05-26T04:22:40" | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,816 | 791 | null | null |
altomek/YiSM-34B-0rn | | main | false | float16 | 34.389 | LlamaForCausalLM | Original | PENDING | "2024-06-06T16:23:05" | 🤝 : base merges and moerges | -1 | null | 4,840 | 792 | null | null |
apple/OpenELM-270M-Instruct | | main | false | float16 | 0.272 | null | Original | FAILED | "2024-04-27T13:35:02" | FT | null | null | 2,928 | 568 | 62 | other |
apple/OpenELM-270M | | main | false | float16 | 0.272 | null | Original | FAILED | "2024-04-27T11:28:44" | PT | null | null | 2,927 | 567 | 23 | other |
apple/OpenELM-450M-Instruct | | main | false | float16 | 0.457 | null | Original | FAILED | "2024-04-27T13:36:38" | FT | null | null | 2,929 | 569 | 16 | other |
apple/OpenELM-450M | | main | false | float16 | 0.457 | null | Original | FAILED | "2024-04-27T13:36:13" | PT | null | null | 2,930 | 570 | 13 | other |
automerger/YamshadowExperiment28-7B | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FINISHED | "2024-05-30T09:31:45" | 🤝 : base merges and moerges | -1 | null | 4,789 | 775 | null | null |
chujiezheng/LLaMA3-iterative-DPO-final-ExPO | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | "2024-05-25T06:10:53" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,410 | 719 | null | null |
chujiezheng/Llama3-70B-Chinese-Chat-ExPO | | main | false | float16 | 70.554 | null | Original | PENDING | "2024-06-25T10:20:42" | 💬 : chat models (RLHF, DPO, IFT, ...) | null | null | null | null | 0 | llama3 |
chujiezheng/Llama3-8B-Chinese-Chat-ExPO | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | PENDING | "2024-06-25T06:53:37" | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | null | null | null | null |
chujiezheng/Smaug-34B-v0.1-ExPO | | main | false | bfloat16 | 34.389 | null | Original | FINISHED | "2024-05-25T04:50:35" | 💬 : chat models (RLHF, DPO, IFT, ...) | null | null | 4,516 | 723 | 0 | other |
End of preview.
Downloads last month: 0