Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 174 new columns ({'report.per_token.latency.stdev', 'report.load.memory.unit', 'config.environment.diffusers_version', 'report.decode.latency.p95', 'config.backend.hub_kwargs.force_download', 'config.environment.optimum_benchmark_commit', 'report.decode.efficiency.value', 'report.prefill.memory.max_allocated', 'report.load.latency.unit', 'report.decode.latency.mean', 'report.decode.latency.total', 'config.scenario.input_shapes.num_choices', 'report.prefill.energy.ram', 'config.backend.device_map', 'config.scenario.name', 'report.load.efficiency', 'report.decode.memory.max_allocated', 'report.per_token.latency.p95', 'config.environment.timm_commit', 'config.launcher.start_method', 'config.environment.accelerate_commit', 'report.prefill.memory.max_process_vram', 'config.backend.to_bettertransformer', 'config.environment.machine', 'config.environment.optimum_benchmark_version', 'report.per_token.latency.mean', 'report.load.latency.values', 'report.decode.energy.total', 'report.per_token.latency.p90', 'config.backend.library', 'config.backend.processor_kwargs.trust_remote_code', 'config.environment.system', 'config.backend.torch_dtype', 'config.environment.gpu', 'config.backend.autocast_enabled', 'config.environment.accelerate_version', 'config.backend.low_cpu_mem_usage', 'config.backend.autocast_dtype', 'config.scenario.iterations', 'report.per_token.efficiency', 'config.environment.optimum_version', 'config.environment.gpu_count', 'report.prefill.latency.p90', 'config.backend.cache_implementati
...
cated', 'config.environment.peft_commit', 'report.prefill.latency.p99', 'report.prefill.energy.unit', 'config.backend.quantization_config.bits', 'config.environment.cpu', 'report.load.latency.p50', 'config.environment.gpu_vram_mb', 'config.backend.task', 'report.decode.energy.cpu', 'config.backend.model_kwargs.trust_remote_code', 'report.load.latency.count', 'report.load.energy.gpu', 'config.backend.no_weights', 'report.prefill.latency.unit', 'config.environment.optimum_commit', 'report.decode.energy.gpu', 'report.decode.latency.count', 'report.prefill.memory.max_ram', 'report.decode.latency.unit', 'config.environment.platform', 'report.decode.latency.values', 'report.prefill.latency.count', 'config.scenario.duration', 'config.backend.attn_implementation', 'report.load.memory.max_reserved', 'config.backend.quantization_scheme', 'config.launcher.name', 'config.environment.transformers_version', 'config.backend.quantization_config.exllama_config.version', 'report.per_token.latency.count', 'config.backend.hub_kwargs.trust_remote_code', 'config.environment.peft_version', 'report.prefill.latency.total', 'config.backend._target_', 'report.prefill.latency.p50', 'report.load.energy.total', 'config.backend.hub_kwargs.local_files_only', 'config.backend.name', 'config.environment.timm_version', 'report.load.energy.ram', 'config.backend.quantization_config.exllama_config.max_input_len', 'report.decode.throughput.unit', 'config.scenario.input_shapes.batch_size', 'config.launcher.numactl'}) and 34 missing columns ({'Type', 'MUSR', 'MoE', 'Base Model', 'Model', '#Params (B)', 'GPQA', 'MMLU-PRO', 'Average ⬆️', 'MUSR Raw', 'Submission Date', 'IFEval Raw', 'MATH Lvl 5', 'Hub License', 'Hub ❤️', 'fullname', 'IFEval', 'Not_Merged', 'Chat Template', 'Architecture', 'Generation', 'Model sha', 'GPQA Raw', 'MMLU-PRO Raw', 'BBH', 'Available on the hub', 'BBH Raw', 'T', 'Precision', 'Flagged', 'Weight type', 'Upload To Hub Date', "Maintainer's Highlight", 'MATH Lvl 5 Raw'}).

This happened while the csv dataset builder was generating data using

hf://datasets/optimum-benchmark/llm-perf-leaderboard/perf-df-awq-1xA10.csv (at revision 9739ef784eefa855acf7e205026bdf3787701fd9)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
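This mismatch can be reproduced locally without touching the Hub: the csv builder infers its schema from the first file it reads, and every later file must expose exactly the same columns. A minimal sketch with two illustrative stand-in files (toy column subsets, not the dataset's real 174/34 columns):

```python
import io

import pandas as pd

# Stand-in for a perf-df-*.csv benchmark file (hypothetical two-column subset).
perf_csv = io.StringIO(
    "config.name,report.decode.latency.mean\n"
    "awq-run,12.3\n"
)
# Stand-in for the leaderboard CSV (hypothetical two-column subset).
leaderboard_csv = io.StringIO(
    "Model,Average\n"
    "dfurman/CalmeRys-78B-Orpo-v0.1,50.78\n"
)

perf = pd.read_csv(perf_csv)
leaderboard = pd.read_csv(leaderboard_csv)

# Columns present only in the benchmark file ("new") and only in the
# leaderboard file ("missing") -- the same two sets the error reports.
new_columns = set(perf.columns) - set(leaderboard.columns)
missing_columns = set(leaderboard.columns) - set(perf.columns)
print(sorted(new_columns), sorted(missing_columns))
```

Keeping each family of files in its own config, as the linked docs describe, avoids this comparison entirely.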
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              config.name: string
              config.backend.name: string
              config.backend.version: string
              config.backend._target_: string
              config.backend.task: string
              config.backend.library: string
              config.backend.model_type: string
              config.backend.model: string
              config.backend.processor: string
              config.backend.device: string
              config.backend.device_ids: int64
              config.backend.seed: int64
              config.backend.inter_op_num_threads: double
              config.backend.intra_op_num_threads: double
              config.backend.model_kwargs.trust_remote_code: bool
              config.backend.no_weights: bool
              config.backend.device_map: double
              config.backend.torch_dtype: string
              config.backend.eval_mode: bool
              config.backend.to_bettertransformer: bool
              config.backend.low_cpu_mem_usage: double
              config.backend.attn_implementation: string
              config.backend.cache_implementation: double
              config.backend.autocast_enabled: bool
              config.backend.autocast_dtype: double
              config.backend.torch_compile: bool
              config.backend.torch_compile_target: string
              config.backend.quantization_scheme: string
              config.backend.quantization_config.bits: int64
              config.backend.quantization_config.version: string
              config.backend.quantization_config.exllama_config.version: double
              config.backend.quantization_config.exllama_config.max_input_len: double
              config.backend.quantization_config.exllama_config.max_batch_size: double
              config.backend.deepspeed_inference: bool
              config.backend.peft_type: double
              config.scenario.name: string
              config.scenario._target_: string
              config.scenario.iterations: int64
              config.scenario.duration:
              ...
              rt.decode.latency.mean: double
              report.decode.latency.stdev: double
              report.decode.latency.p50: double
              report.decode.latency.p90: double
              report.decode.latency.p95: double
              report.decode.latency.p99: double
              report.decode.latency.values: string
              report.decode.throughput.unit: string
              report.decode.throughput.value: double
              report.decode.energy.unit: string
              report.decode.energy.cpu: double
              report.decode.energy.ram: double
              report.decode.energy.gpu: double
              report.decode.energy.total: double
              report.decode.efficiency.unit: string
              report.decode.efficiency.value: double
              report.per_token.memory: double
              report.per_token.latency.unit: string
              report.per_token.latency.count: double
              report.per_token.latency.total: double
              report.per_token.latency.mean: double
              report.per_token.latency.stdev: double
              report.per_token.latency.p50: double
              report.per_token.latency.p90: double
              report.per_token.latency.p95: double
              report.per_token.latency.p99: double
              report.per_token.latency.values: string
              report.per_token.throughput.unit: string
              report.per_token.throughput.value: double
              report.per_token.energy: double
              report.per_token.efficiency: double
              report.traceback: string
              config.backend.processor_kwargs.trust_remote_code: bool
              config.backend.hub_kwargs.trust_remote_code: bool
              config.backend.hub_kwargs.revision: string
              config.backend.hub_kwargs.force_download: bool
              config.backend.hub_kwargs.local_files_only: bool
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 27877
              to
              {'T': Value(dtype='string', id=None), 'Model': Value(dtype='string', id=None), 'Average ⬆️': Value(dtype='float64', id=None), 'IFEval': Value(dtype='float64', id=None), 'IFEval Raw': Value(dtype='float64', id=None), 'BBH': Value(dtype='float64', id=None), 'BBH Raw': Value(dtype='float64', id=None), 'MATH Lvl 5': Value(dtype='float64', id=None), 'MATH Lvl 5 Raw': Value(dtype='float64', id=None), 'GPQA': Value(dtype='float64', id=None), 'GPQA Raw': Value(dtype='float64', id=None), 'MUSR': Value(dtype='float64', id=None), 'MUSR Raw': Value(dtype='float64', id=None), 'MMLU-PRO': Value(dtype='float64', id=None), 'MMLU-PRO Raw': Value(dtype='float64', id=None), 'Type': Value(dtype='string', id=None), 'Architecture': Value(dtype='string', id=None), 'Weight type': Value(dtype='string', id=None), 'Precision': Value(dtype='string', id=None), 'Not_Merged': Value(dtype='bool', id=None), 'Hub License': Value(dtype='string', id=None), '#Params (B)': Value(dtype='int64', id=None), 'Hub ❤️': Value(dtype='int64', id=None), 'Available on the hub': Value(dtype='bool', id=None), 'Model sha': Value(dtype='string', id=None), 'Flagged': Value(dtype='bool', id=None), 'MoE': Value(dtype='bool', id=None), 'Submission Date': Value(dtype='string', id=None), 'Upload To Hub Date': Value(dtype='string', id=None), 'Chat Template': Value(dtype='bool', id=None), "Maintainer's Highlight": Value(dtype='bool', id=None), 'fullname': Value(dtype='string', id=None), 'Generation': Value(dtype='int64', id=None), 'Base Model': Value(dtype='string', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1396, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1045, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1029, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1124, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1884, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2015, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.

T: string
Model: string
Average ⬆️: float64
IFEval: float64
IFEval Raw: float64
BBH: float64
BBH Raw: float64
MATH Lvl 5: float64
MATH Lvl 5 Raw: float64
GPQA: float64
GPQA Raw: float64
MUSR: float64
MUSR Raw: float64
MMLU-PRO: float64
MMLU-PRO Raw: float64
Type: string
Architecture: string
Weight type: string
Precision: string
Not_Merged: bool
Hub License: string
#Params (B): int64
Hub ❤️: int64
Available on the hub: bool
Model sha: string
Flagged: bool
MoE: bool
Submission Date: string
Upload To Hub Date: string
Chat Template: bool
Maintainer's Highlight: bool
fullname: string
Generation: int64
Base Model: string
T | Model | Average ⬆️ | IFEval | IFEval Raw | BBH | BBH Raw | MATH Lvl 5 | MATH Lvl 5 Raw | GPQA | GPQA Raw | MUSR | MUSR Raw | MMLU-PRO | MMLU-PRO Raw | Type | Architecture | Weight type | Precision | Not_Merged | Hub License | #Params (B) | Hub ❤️ | Available on the hub | Model sha | Flagged | MoE | Submission Date | Upload To Hub Date | Chat Template | Maintainer's Highlight | fullname | Generation | Base Model
💬 | dfurman/CalmeRys-78B-Orpo-v0.1 | 50.78 | 81.63 | 0.82 | 61.92 | 0.73 | 37.92 | 0.38 | 20.02 | 0.4 | 36.37 | 0.59 | 66.8 | 0.7 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | mit | 77 | 5 | true | 7988deb48419c3f56bb24c139c23e5c476ec03f8 | true | true | 2024-09-24 | 2024-09-24 | true | false | dfurman/CalmeRys-78B-Orpo-v0.1 | 1 | dfurman/CalmeRys-78B-Orpo-v0.1 (Merge)
💬 | MaziyarPanahi/calme-2.4-rys-78b | 50.26 | 80.11 | 0.8 | 62.16 | 0.73 | 37.69 | 0.38 | 20.36 | 0.4 | 34.57 | 0.58 | 66.69 | 0.7 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | mit | 77 | 30 | true | 0a35e51ffa9efa644c11816a2d56434804177acb | true | true | 2024-09-03 | 2024-08-07 | true | false | MaziyarPanahi/calme-2.4-rys-78b | 2 | dnhkng/RYS-XLarge
🔶 | dnhkng/RYS-XLarge | 44.75 | 79.96 | 0.8 | 58.77 | 0.71 | 38.97 | 0.39 | 17.9 | 0.38 | 23.72 | 0.5 | 49.2 | 0.54 | 🔶 fine-tuned on domain-specific datasets | Qwen2ForCausalLM | Original | bfloat16 | true | mit | 77 | 66 | true | 0f84dd9dde60f383e1e2821496befb4ce9a11ef6 | true | true | 2024-08-07 | 2024-07-24 | false | false | dnhkng/RYS-XLarge | 0 | dnhkng/RYS-XLarge
💬 | MaziyarPanahi/calme-2.1-rys-78b | 44.14 | 81.36 | 0.81 | 59.47 | 0.71 | 36.4 | 0.36 | 19.24 | 0.39 | 19 | 0.47 | 49.38 | 0.54 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | mit | 77 | 3 | true | e746f5ddc0c9b31a2382d985a4ec87fa910847c7 | true | true | 2024-08-08 | 2024-08-06 | true | false | MaziyarPanahi/calme-2.1-rys-78b | 1 | dnhkng/RYS-XLarge
💬 | MaziyarPanahi/calme-2.3-rys-78b | 44.02 | 80.66 | 0.81 | 59.57 | 0.71 | 36.56 | 0.37 | 20.58 | 0.4 | 17 | 0.45 | 49.73 | 0.55 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | mit | 77 | 4 | true | a8a4e55c2f7054d25c2f0ab3a3b3d806eb915180 | true | true | 2024-09-03 | 2024-08-06 | true | false | MaziyarPanahi/calme-2.3-rys-78b | 1 | dnhkng/RYS-XLarge
💬 | MaziyarPanahi/calme-2.2-rys-78b | 43.92 | 79.86 | 0.8 | 59.27 | 0.71 | 37.92 | 0.38 | 20.92 | 0.41 | 16.83 | 0.45 | 48.73 | 0.54 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | mit | 77 | 3 | true | 8d0dde25c9042705f65559446944a19259c3fc8e | true | true | 2024-08-08 | 2024-08-06 | true | false | MaziyarPanahi/calme-2.2-rys-78b | 1 | dnhkng/RYS-XLarge
💬 | MaziyarPanahi/calme-2.1-qwen2-72b | 43.61 | 81.63 | 0.82 | 57.33 | 0.7 | 36.03 | 0.36 | 17.45 | 0.38 | 20.15 | 0.47 | 49.05 | 0.54 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 27 | true | 0369c39770f45f2464587918f2dbdb8449ea3a0d | true | true | 2024-06-26 | 2024-06-08 | true | false | MaziyarPanahi/calme-2.1-qwen2-72b | 2 | Qwen/Qwen2-72B
🔶 | dnhkng/RYS-XLarge-base | 43.56 | 79.1 | 0.79 | 58.69 | 0.7 | 34.67 | 0.35 | 17.23 | 0.38 | 22.42 | 0.49 | 49.23 | 0.54 | 🔶 fine-tuned on domain-specific datasets | ? | Adapter | bfloat16 | true | mit | 77 | 3 | true | c718b3d9e24916e3b0347d3fdaa5e5a097c2f603 | true | true | 2024-08-30 | 2024-08-02 | true | false | dnhkng/RYS-XLarge-base | 0 | dnhkng/RYS-XLarge-base
💬 | arcee-ai/Arcee-Nova | 43.5 | 79.07 | 0.79 | 56.74 | 0.69 | 40.48 | 0.4 | 18.01 | 0.39 | 17.22 | 0.46 | 49.47 | 0.55 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 36 | true | ec3bfe88b83f81481daa04b6789c1e0d32827dc5 | true | true | 2024-09-19 | 2024-07-16 | true | false | arcee-ai/Arcee-Nova | 0 | arcee-ai/Arcee-Nova
💬 | MaziyarPanahi/calme-2.2-qwen2-72b | 43.4 | 80.08 | 0.8 | 56.8 | 0.69 | 41.16 | 0.41 | 16.55 | 0.37 | 16.52 | 0.45 | 49.27 | 0.54 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 4 | true | 529e9bd80a76d943409bc92bb246aa7ca63dd9e6 | true | true | 2024-08-06 | 2024-07-09 | true | false | MaziyarPanahi/calme-2.2-qwen2-72b | 1 | Qwen/Qwen2-72B
💬 | dfurman/Qwen2-72B-Orpo-v0.1 | 43.32 | 78.8 | 0.79 | 57.41 | 0.7 | 35.42 | 0.35 | 17.9 | 0.38 | 20.87 | 0.48 | 49.5 | 0.55 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 4 | true | 26c7bbaa728822c60bb47b2808972140653aae4c | true | true | 2024-08-22 | 2024-07-05 | true | false | dfurman/Qwen2-72B-Orpo-v0.1 | 1 | dfurman/Qwen2-72B-Orpo-v0.1 (Merge)
🔶 | Undi95/MG-FinalMix-72B | 43.28 | 80.14 | 0.8 | 57.5 | 0.7 | 33.61 | 0.34 | 18.01 | 0.39 | 21.22 | 0.48 | 49.19 | 0.54 | 🔶 fine-tuned on domain-specific datasets | Qwen2ForCausalLM | Original | bfloat16 | false | other | 72 | 3 | true | 6c9c2f5d052495dcd49f44bf5623d21210653c65 | true | true | 2024-07-13 | 2024-06-25 | true | false | Undi95/MG-FinalMix-72B | 1 | Undi95/MG-FinalMix-72B (Merge)
💬 | Qwen/Qwen2-72B-Instruct | 42.49 | 79.89 | 0.8 | 57.48 | 0.7 | 35.12 | 0.35 | 16.33 | 0.37 | 17.17 | 0.46 | 48.92 | 0.54 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 661 | true | 1af63c698f59c4235668ec9c1395468cb7cd7e79 | true | true | 2024-06-26 | 2024-05-28 | false | true | Qwen/Qwen2-72B-Instruct | 1 | Qwen/Qwen2-72B
🔶 | abacusai/Dracarys-72B-Instruct | 42.37 | 78.56 | 0.79 | 56.94 | 0.69 | 33.61 | 0.34 | 18.79 | 0.39 | 16.81 | 0.46 | 49.51 | 0.55 | 🔶 fine-tuned on domain-specific datasets | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 15 | true | 10cabc4beb57a69df51533f65e39a7ad22821370 | true | true | 2024-08-16 | 2024-08-14 | true | true | abacusai/Dracarys-72B-Instruct | 0 | abacusai/Dracarys-72B-Instruct
🔶 | VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct | 42.24 | 86.56 | 0.87 | 57.24 | 0.7 | 29.91 | 0.3 | 12.19 | 0.34 | 19.39 | 0.47 | 48.17 | 0.53 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | llama3.1 | 70 | 13 | true | e8e74aa789243c25a3a8f7565780a402f5050bbb | true | true | 2024-08-26 | 2024-07-29 | true | false | VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct | 0 | VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct
💬 | anthracite-org/magnum-v1-72b | 42.21 | 76.06 | 0.76 | 57.65 | 0.7 | 35.27 | 0.35 | 18.79 | 0.39 | 15.62 | 0.45 | 49.85 | 0.55 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 159 | true | f8f85021bace7e8250ed8559c5b78b8b34f0c4cc | true | true | 2024-09-21 | 2024-06-17 | true | false | anthracite-org/magnum-v1-72b | 2 | Qwen/Qwen2-72B
💬 | alpindale/magnum-72b-v1 | 42.17 | 76.06 | 0.76 | 57.65 | 0.7 | 35.27 | 0.35 | 18.79 | 0.39 | 15.62 | 0.45 | 49.64 | 0.55 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 159 | true | fef27e0f235ae8858b84b765db773a2a954110dd | true | true | 2024-07-25 | 2024-06-17 | true | false | alpindale/magnum-72b-v1 | 2 | Qwen/Qwen2-72B
💬 | meta-llama/Meta-Llama-3.1-70B-Instruct | 41.74 | 86.69 | 0.87 | 55.93 | 0.69 | 28.02 | 0.28 | 14.21 | 0.36 | 17.69 | 0.46 | 47.88 | 0.53 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3.1 | 70 | 532 | true | b9461463b511ed3c0762467538ea32cf7c9669f2 | true | true | 2024-08-15 | 2024-07-16 | true | true | meta-llama/Meta-Llama-3.1-70B-Instruct | 1 | meta-llama/Meta-Llama-3.1-70B
🔶 | dnhkng/RYS-Llama3.1-Large | 41.6 | 84.92 | 0.85 | 55.41 | 0.69 | 28.4 | 0.28 | 16.55 | 0.37 | 17.09 | 0.46 | 47.21 | 0.52 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | mit | 81 | 1 | true | 52cc979de78155b33689efa48f52a8aab184bd86 | true | true | 2024-08-22 | 2024-08-11 | true | false | dnhkng/RYS-Llama3.1-Large | 0 | dnhkng/RYS-Llama3.1-Large
💬 | anthracite-org/magnum-v2-72b | 41.15 | 75.6 | 0.76 | 57.85 | 0.7 | 31.65 | 0.32 | 18.12 | 0.39 | 14.18 | 0.44 | 49.51 | 0.55 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 25 | true | c9c5826ef42b9fcc8a8e1079be574481cf0b6cc6 | true | true | 2024-09-05 | 2024-08-18 | true | false | anthracite-org/magnum-v2-72b | 2 | Qwen/Qwen2-72B
💬 | abacusai/Smaug-Qwen2-72B-Instruct | 41.08 | 78.25 | 0.78 | 56.27 | 0.69 | 35.35 | 0.35 | 14.88 | 0.36 | 15.18 | 0.44 | 46.56 | 0.52 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 7 | true | af015925946d0c60ef69f512c3b35f421cf8063d | true | true | 2024-07-29 | 2024-06-26 | true | true | abacusai/Smaug-Qwen2-72B-Instruct | 0 | abacusai/Smaug-Qwen2-72B-Instruct
🤝 | paulml/ECE-ILAB-Q1 | 40.93 | 78.65 | 0.79 | 53.7 | 0.67 | 26.13 | 0.26 | 18.23 | 0.39 | 18.81 | 0.46 | 50.06 | 0.55 | 🤝 base merges and moerges | Qwen2ForCausalLM | Original | bfloat16 | false | other | 72 | 0 | true | 393bea0ee85e4c752acd5fd77ce07f577fc13bd9 | true | true | 2024-09-16 | 2024-06-06 | false | false | paulml/ECE-ILAB-Q1 | 0 | paulml/ECE-ILAB-Q1
🔶 | KSU-HW-SEC/Llama3.1-70b-SVA-FT-1000step | 40.33 | 72.38 | 0.72 | 55.49 | 0.69 | 29.61 | 0.3 | 19.46 | 0.4 | 17.83 | 0.46 | 47.24 | 0.53 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | null | 70 | 0 | false | b195fea0d8f350ff29243d4e88654b1baa5af79e | true | true | 2024-09-08 | null | false | false | KSU-HW-SEC/Llama3.1-70b-SVA-FT-1000step | 0 | Removed
💬 | MaziyarPanahi/calme-2.3-llama3.1-70b | 40.3 | 86.05 | 0.86 | 55.59 | 0.69 | 21.45 | 0.21 | 12.53 | 0.34 | 17.74 | 0.46 | 48.48 | 0.54 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | null | 70 | 2 | false | a39c79250721b75beefa1b1763895eafd010f6f6 | true | true | 2024-09-18 | 2024-09-10 | true | false | MaziyarPanahi/calme-2.3-llama3.1-70b | 2 | meta-llama/Meta-Llama-3.1-70B
💬 | upstage/solar-pro-preview-instruct | 39.61 | 84.16 | 0.84 | 54.82 | 0.68 | 20.09 | 0.2 | 16.11 | 0.37 | 15.01 | 0.44 | 47.48 | 0.53 | 💬 chat models (RLHF, DPO, IFT, ...) | SolarForCausalLM | Original | bfloat16 | true | mit | 22 | 397 | true | b4db141b5fb08b23f8bc323bc34e2cff3e9675f8 | true | true | 2024-09-11 | 2024-09-09 | true | true | upstage/solar-pro-preview-instruct | 0 | upstage/solar-pro-preview-instruct
🔶 | pankajmathur/orca_mini_v7_72b | 39.06 | 59.3 | 0.59 | 55.06 | 0.68 | 26.44 | 0.26 | 18.01 | 0.39 | 24.21 | 0.51 | 51.35 | 0.56 | 🔶 fine-tuned on domain-specific datasets | Qwen2ForCausalLM | Original | bfloat16 | true | apache-2.0 | 72 | 11 | true | 447f11912cfa496e32e188a55214043a05760d3a | true | true | 2024-06-26 | 2024-06-26 | false | false | pankajmathur/orca_mini_v7_72b | 0 | pankajmathur/orca_mini_v7_72b
💬 | Qwen/Qwen2.5-72B-Instruct | 38.35 | 86.5 | 0.87 | 61.78 | 0.73 | 1.28 | 0.01 | 17.45 | 0.38 | 11.81 | 0.42 | 51.3 | 0.56 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 274 | true | a13fff9ad76700c7ecff2769f75943ba8395b4a7 | true | true | 2024-09-19 | 2024-09-16 | true | true | Qwen/Qwen2.5-72B-Instruct | 1 | Qwen/Qwen2.5-72B
🤝 | gbueno86/Meta-LLama-3-Cat-Smaug-LLama-70b | 38.27 | 80.72 | 0.81 | 51.51 | 0.67 | 26.81 | 0.27 | 10.29 | 0.33 | 15 | 0.44 | 45.28 | 0.51 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | llama3 | 70 | 1 | true | 2d73b7e1c7157df482555944d6a6b1362bc6c3c5 | true | true | 2024-06-27 | 2024-05-24 | true | false | gbueno86/Meta-LLama-3-Cat-Smaug-LLama-70b | 1 | gbueno86/Meta-LLama-3-Cat-Smaug-LLama-70b (Merge)
💬 | MaziyarPanahi/calme-2.2-qwen2.5-72b | 38.01 | 84.77 | 0.85 | 61.8 | 0.73 | 3.63 | 0.04 | 14.54 | 0.36 | 12.02 | 0.42 | 51.31 | 0.56 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 2 | true | c6c7fdf70d8bf81364108975eb8ba78eecac83d4 | true | true | 2024-09-26 | 2024-09-19 | true | false | MaziyarPanahi/calme-2.2-qwen2.5-72b | 1 | Qwen/Qwen2.5-72B
💬 | MaziyarPanahi/calme-2.2-llama3-70b | 37.98 | 82.08 | 0.82 | 48.57 | 0.64 | 22.96 | 0.23 | 12.19 | 0.34 | 15.3 | 0.44 | 46.74 | 0.52 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 17 | true | 95366b974baedee4d95c1e841bc3d15e94753804 | true | true | 2024-06-26 | 2024-04-27 | true | false | MaziyarPanahi/calme-2.2-llama3-70b | 2 | meta-llama/Meta-Llama-3-70B
🟢 | Qwen/Qwen2.5-72B | 37.94 | 41.37 | 0.41 | 54.62 | 0.68 | 36.1 | 0.36 | 20.69 | 0.41 | 19.64 | 0.48 | 55.2 | 0.6 | 🟢 pretrained | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 27 | true | 587cc4061cf6a7cc0d429d05c109447e5cf063af | true | true | 2024-09-19 | 2024-09-15 | false | true | Qwen/Qwen2.5-72B | 0 | Qwen/Qwen2.5-72B
🔶 | VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct | 37.82 | 80.45 | 0.8 | 52.03 | 0.67 | 21.68 | 0.22 | 10.4 | 0.33 | 13.54 | 0.43 | 48.8 | 0.54 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | other | 70 | 21 | true | 707cfd1a93875247c0223e0c7e3d86d58c432318 | true | true | 2024-06-26 | 2024-04-24 | true | false | VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct | 0 | VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct
🟢 | Qwen/Qwen2.5-32B | 37.54 | 40.77 | 0.41 | 53.95 | 0.68 | 32.85 | 0.33 | 21.59 | 0.41 | 22.7 | 0.5 | 53.39 | 0.58 | 🟢 pretrained | Qwen2ForCausalLM | Original | bfloat16 | true | apache-2.0 | 32 | 17 | true | ff23665d01c3665be5fdb271d18a62090b65c06d | true | true | 2024-09-19 | 2024-09-15 | false | true | Qwen/Qwen2.5-32B | 0 | Qwen/Qwen2.5-32B
🤝 | mlabonne/BigQwen2.5-52B-Instruct | 37.42 | 79.29 | 0.79 | 59.81 | 0.71 | 17.82 | 0.18 | 6.94 | 0.3 | 10.45 | 0.41 | 50.22 | 0.55 | 🤝 base merges and moerges | Qwen2ForCausalLM | Original | bfloat16 | false | apache-2.0 | 52 | 1 | true | 425b9bffc9871085cc0d42c34138ce776f96ba02 | true | true | 2024-09-25 | 2024-09-23 | true | true | mlabonne/BigQwen2.5-52B-Instruct | 1 | mlabonne/BigQwen2.5-52B-Instruct (Merge)
💬 | NousResearch/Hermes-3-Llama-3.1-70B | 37.31 | 76.61 | 0.77 | 53.77 | 0.68 | 13.75 | 0.14 | 14.88 | 0.36 | 23.43 | 0.49 | 41.41 | 0.47 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 76 | true | 093242c69a91f8d9d5b8094c380b88772f9bd7f8 | true | true | 2024-08-28 | 2024-07-29 | true | true | NousResearch/Hermes-3-Llama-3.1-70B | 1 | meta-llama/Meta-Llama-3.1-70B
🔶 | ValiantLabs/Llama3-70B-Fireplace | 36.82 | 77.74 | 0.78 | 49.56 | 0.65 | 19.64 | 0.2 | 13.98 | 0.35 | 16.77 | 0.44 | 43.25 | 0.49 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | llama3 | 70 | 3 | true | 220079e4115733991eb19c30d5480db9696a665e | true | true | 2024-06-26 | 2024-05-09 | true | false | ValiantLabs/Llama3-70B-Fireplace | 0 | ValiantLabs/Llama3-70B-Fireplace
🔶 | BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B | 36.79 | 73.35 | 0.73 | 52.5 | 0.67 | 21.07 | 0.21 | 16.78 | 0.38 | 16.97 | 0.45 | 40.08 | 0.46 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | llama3.1 | 70 | 11 | true | 1ef63c4993a8c723c9695c827295c17080a64435 | true | true | 2024-09-26 | 2024-07-25 | true | false | BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B | 0 | BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B
💬 | tenyx/Llama3-TenyxChat-70B | 36.54 | 80.87 | 0.81 | 49.62 | 0.65 | 22.66 | 0.23 | 6.82 | 0.3 | 12.52 | 0.43 | 46.78 | 0.52 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 63 | true | a85d31e3af8fcc847cc9169f1144cf02f5351fab | true | true | 2024-08-04 | 2024-04-26 | true | false | tenyx/Llama3-TenyxChat-70B | 0 | tenyx/Llama3-TenyxChat-70B
💬 | MaziyarPanahi/calme-2.2-llama3.1-70b | 36.39 | 85.93 | 0.86 | 54.21 | 0.68 | 2.11 | 0.02 | 9.96 | 0.32 | 17.07 | 0.45 | 49.05 | 0.54 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | null | 70 | 2 | false | c81ac05ed2c2344e9fd366cfff197da406ef5234 | true | true | 2024-09-09 | 2024-09-09 | true | false | MaziyarPanahi/calme-2.2-llama3.1-70b | 2 | meta-llama/Meta-Llama-3.1-70B
🤝 | gbueno86/Brinebreath-Llama-3.1-70B | 36.29 | 55.33 | 0.55 | 55.46 | 0.69 | 29.98 | 0.3 | 12.86 | 0.35 | 17.49 | 0.45 | 46.62 | 0.52 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | llama3.1 | 70 | 1 | true | c508ecf356167e8c498c6fa3937ba30a82208983 | true | true | 2024-08-29 | 2024-08-23 | true | false | gbueno86/Brinebreath-Llama-3.1-70B | 1 | gbueno86/Brinebreath-Llama-3.1-70B (Merge)
💬 | meta-llama/Meta-Llama-3-70B-Instruct | 36.18 | 80.99 | 0.81 | 50.19 | 0.65 | 23.34 | 0.23 | 4.92 | 0.29 | 10.92 | 0.42 | 46.74 | 0.52 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 1,399 | true | 7129260dd854a80eb10ace5f61c20324b472b31c | true | true | 2024-06-12 | 2024-04-17 | true | true | meta-llama/Meta-Llama-3-70B-Instruct | 1 | meta-llama/Meta-Llama-3-70B
💬 | Qwen/Qwen2.5-32B-Instruct | 36.17 | 83.46 | 0.83 | 56.49 | 0.69 | 0 | 0 | 11.74 | 0.34 | 13.5 | 0.43 | 51.85 | 0.57 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | apache-2.0 | 32 | 81 | true | 70e8dfb9ad18a7d499f765fe206ff065ed8ca197 | true | true | 2024-09-19 | 2024-09-17 | true | true | Qwen/Qwen2.5-32B-Instruct | 1 | Qwen/Qwen2.5-32B
🔶 | BAAI/Infinity-Instruct-3M-0625-Llama3-70B | 35.88 | 74.42 | 0.74 | 52.03 | 0.67 | 16.31 | 0.16 | 14.32 | 0.36 | 18.34 | 0.46 | 39.85 | 0.46 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | apache-2.0 | 70 | 3 | true | 6d8ceada57e55cff3503191adc4d6379ff321fe2 | true | true | 2024-08-30 | 2024-07-09 | true | false | BAAI/Infinity-Instruct-3M-0625-Llama3-70B | 0 | BAAI/Infinity-Instruct-3M-0625-Llama3-70B
🔶 | KSU-HW-SEC/Llama3-70b-SVA-FT-1415 | 35.8 | 61.8 | 0.62 | 51.33 | 0.67 | 20.09 | 0.2 | 16.67 | 0.38 | 17.8 | 0.46 | 47.14 | 0.52 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | null | 70 | 0 | false | 1c09728455567898116d2d9cfb6cbbbbd4ee730c | true | true | 2024-09-08 | null | false | false | KSU-HW-SEC/Llama3-70b-SVA-FT-1415 | 0 | Removed
🔶 | failspy/llama-3-70B-Instruct-abliterated | 35.79 | 80.23 | 0.8 | 48.94 | 0.65 | 23.72 | 0.24 | 5.26 | 0.29 | 10.53 | 0.41 | 46.06 | 0.51 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 85 | true | 53ae9dafe8b3d163e05d75387575f8e9f43253d0 | true | true | 2024-07-03 | 2024-05-07 | true | false | failspy/llama-3-70B-Instruct-abliterated | 0 | failspy/llama-3-70B-Instruct-abliterated
💬 | dnhkng/RYS-Llama-3-Large-Instruct | 35.78 | 80.51 | 0.81 | 49.67 | 0.65 | 21.83 | 0.22 | 5.26 | 0.29 | 11.45 | 0.42 | 45.97 | 0.51 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | mit | 73 | 1 | true | 01e3208aaf7bf6d2b09737960c701ec6628977fe | true | true | 2024-08-07 | 2024-08-06 | true | false | dnhkng/RYS-Llama-3-Large-Instruct | 0 | dnhkng/RYS-Llama-3-Large-Instruct
🔶 | KSU-HW-SEC/Llama3-70b-SVA-FT-final | 35.78 | 61.65 | 0.62 | 51.33 | 0.67 | 20.09 | 0.2 | 16.67 | 0.38 | 17.8 | 0.46 | 47.14 | 0.52 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | null | 70 | 0 | false | 391bbd94173b34975d1aa2c7356977a630253b75 | true | true | 2024-09-08 | null | false | false | KSU-HW-SEC/Llama3-70b-SVA-FT-final | 0 | Removed
🔶 | KSU-HW-SEC/Llama3-70b-SVA-FT-500 | 35.61 | 61.05 | 0.61 | 51.89 | 0.67 | 19.34 | 0.19 | 17.45 | 0.38 | 16.99 | 0.45 | 46.97 | 0.52 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | null | 70 | 0 | false | 856a23f28aeada23d1135c86a37e05524307e8ed | true | true | 2024-09-08 | null | false | false | KSU-HW-SEC/Llama3-70b-SVA-FT-500 | 0 | Removed
🔶 | cognitivecomputations/dolphin-2.9.2-qwen2-72b | 35.42 | 63.44 | 0.63 | 47.7 | 0.63 | 18.66 | 0.19 | 16 | 0.37 | 17.04 | 0.45 | 49.68 | 0.55 | 🔶 fine-tuned on domain-specific datasets | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 57 | true | e79582577c2bf2af304221af0e8308b7e7d46ca1 | …
true
true
2024-09-19
2024-05-27
true
true
cognitivecomputations/dolphin-2.9.2-qwen2-72b
1
Qwen/Qwen2-72B
πŸ”Ά
cloudyu/Llama-3-70Bx2-MOE
35.35
54.82
0.55
51.42
0.66
19.86
0.2
19.13
0.39
20.85
0.48
46.02
0.51
πŸ”Ά fine-tuned on domain-specific datasets
MixtralForCausalLM
Original
bfloat16
true
llama3
126
1
true
b8bd85e8db8e4ec352b93441c92e0ae1334bf5a7
true
false
2024-06-27
2024-05-20
false
false
cloudyu/Llama-3-70Bx2-MOE
0
cloudyu/Llama-3-70Bx2-MOE
πŸ”Ά
Sao10K/L3-70B-Euryale-v2.1
35.35
73.84
0.74
48.7
0.65
20.85
0.21
10.85
0.33
12.25
0.42
45.6
0.51
πŸ”Ά fine-tuned on domain-specific datasets
LlamaForCausalLM
Original
bfloat16
true
cc-by-nc-4.0
70
113
true
36ad832b771cd783ea7ad00ed39e61f679b1a7c6
true
true
2024-07-01
2024-06-11
true
false
Sao10K/L3-70B-Euryale-v2.1
0
Sao10K/L3-70B-Euryale-v2.1
πŸ’¬
OpenBuddy/openbuddy-llama3.1-70b-v22.1-131k
35.23
73.33
0.73
51.94
0.67
3.4
0.03
16.67
0.38
18.24
0.46
47.82
0.53
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
LlamaForCausalLM
Original
bfloat16
true
other
70
1
true
43ed945180174d79a8f6c68509161c249c884dfa
true
true
2024-08-24
2024-08-21
true
false
OpenBuddy/openbuddy-llama3.1-70b-v22.1-131k
0
OpenBuddy/openbuddy-llama3.1-70b-v22.1-131k
πŸ”Ά
migtissera/Llama-3-70B-Synthia-v3.5
35.2
60.76
0.61
49.12
0.65
18.96
0.19
18.34
0.39
23.39
0.49
40.65
0.47
πŸ”Ά fine-tuned on domain-specific datasets
LlamaForCausalLM
Original
float16
true
llama3
70
5
true
8744db0bccfc18f1847633da9d29fc89b35b4190
true
true
2024-08-28
2024-05-26
true
false
migtissera/Llama-3-70B-Synthia-v3.5
0
migtissera/Llama-3-70B-Synthia-v3.5
🟒
Qwen/Qwen2-72B
35.13
38.24
0.38
51.86
0.66
29.15
0.29
19.24
0.39
19.73
0.47
52.56
0.57
🟒 pretrained
Qwen2ForCausalLM
Original
bfloat16
true
other
72
187
true
87993795c78576318087f70b43fbf530eb7789e7
true
true
2024-06-26
2024-05-22
false
true
Qwen/Qwen2-72B
0
Qwen/Qwen2-72B
πŸ”Ά
Sao10K/L3-70B-Euryale-v2.1
35.11
72.81
0.73
49.19
0.65
20.24
0.2
10.85
0.33
12.05
0.42
45.51
0.51
πŸ”Ά fine-tuned on domain-specific datasets
LlamaForCausalLM
Original
float16
true
cc-by-nc-4.0
70
113
true
36ad832b771cd783ea7ad00ed39e61f679b1a7c6
true
true
2024-06-26
2024-06-11
true
false
Sao10K/L3-70B-Euryale-v2.1
0
Sao10K/L3-70B-Euryale-v2.1
πŸ’¬
microsoft/Phi-3.5-MoE-instruct
35.1
69.25
0.69
48.77
0.64
20.54
0.21
14.09
0.36
17.33
0.46
40.64
0.47
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
Phi3ForCausalLM
Original
bfloat16
true
mit
42
488
true
482a9ba0eb0e1fa1671e3560e009d7cec2e5147c
true
false
2024-08-21
2024-08-17
true
true
microsoft/Phi-3.5-MoE-instruct
0
microsoft/Phi-3.5-MoE-instruct
πŸ’¬
Qwen/Qwen2-Math-72B-Instruct
34.79
56.94
0.57
47.96
0.63
35.95
0.36
15.77
0.37
15.73
0.45
36.36
0.43
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
Qwen2ForCausalLM
Original
bfloat16
true
other
72
83
true
5c267882f3377bcfc35882f8609098a894eeeaa8
true
true
2024-08-19
2024-08-08
true
true
Qwen/Qwen2-Math-72B-Instruct
0
Qwen/Qwen2-Math-72B-Instruct
πŸ’¬
abacusai/Smaug-Llama-3-70B-Instruct-32K
34.72
77.61
0.78
49.07
0.65
21.22
0.21
6.15
0.3
12.43
0.42
41.83
0.48
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
LlamaForCausalLM
Original
bfloat16
true
llama3
70
20
true
33840982dc253968f32ef3a534ee0e025eb97482
true
true
2024-08-06
2024-06-11
true
true
abacusai/Smaug-Llama-3-70B-Instruct-32K
0
abacusai/Smaug-Llama-3-70B-Instruct-32K
πŸ”Ά
Replete-AI/Replete-LLM-V2.5-Qwen-14b
34.52
58.4
0.58
49.39
0.65
15.63
0.16
16.22
0.37
18.83
0.47
48.62
0.54
πŸ”Ά fine-tuned on domain-specific datasets
Qwen2ForCausalLM
Original
bfloat16
true
apache-2.0
14
8
true
834ddb1712ae6d1b232b2d5b26be658d90d23e43
true
true
2024-09-29
2024-09-28
false
false
Replete-AI/Replete-LLM-V2.5-Qwen-14b
1
Replete-AI/Replete-LLM-V2.5-Qwen-14b (Merge)
πŸ”Ά
BAAI/Infinity-Instruct-3M-0613-Llama3-70B
34.47
68.21
0.68
51.33
0.66
14.88
0.15
14.43
0.36
16.53
0.45
41.44
0.47
πŸ”Ά fine-tuned on domain-specific datasets
LlamaForCausalLM
Original
bfloat16
true
apache-2.0
70
5
true
9fc53668064bdda22975ca72c5a287f8241c95b3
true
true
2024-06-28
2024-06-27
true
false
BAAI/Infinity-Instruct-3M-0613-Llama3-70B
0
BAAI/Infinity-Instruct-3M-0613-Llama3-70B
πŸ’¬
dnhkng/RYS-Llama-3-Huge-Instruct
34.37
76.86
0.77
49.07
0.65
21.22
0.21
1.45
0.26
11.93
0.42
45.66
0.51
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
LlamaForCausalLM
Original
bfloat16
true
mit
99
1
true
cfe14a5339e88a7a89f075d9d48215d45f64acaf
true
true
2024-08-07
2024-08-06
true
false
dnhkng/RYS-Llama-3-Huge-Instruct
0
dnhkng/RYS-Llama-3-Huge-Instruct
πŸ’¬
mistralai/Mixtral-8x22B-Instruct-v0.1
33.89
71.84
0.72
44.11
0.61
18.73
0.19
16.44
0.37
13.49
0.43
38.7
0.45
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
MixtralForCausalLM
Original
bfloat16
true
apache-2.0
140
667
true
b0c3516041d014f640267b14feb4e9a84c8e8c71
true
false
2024-06-12
2024-04-16
true
true
mistralai/Mixtral-8x22B-Instruct-v0.1
1
mistralai/Mixtral-8x22B-v0.1
πŸ’¬
HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1
33.77
65.11
0.65
47.5
0.63
18.35
0.18
17.11
0.38
14.72
0.45
39.85
0.46
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
MixtralForCausalLM
Original
float16
true
apache-2.0
140
260
true
a3be084543d278e61b64cd600f28157afc79ffd3
true
true
2024-06-12
2024-04-10
true
true
HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1
1
mistral-community/Mixtral-8x22B-v0.1
🀝
Lambent/qwen2.5-reinstruct-alternate-lumen-14B
33.66
47.94
0.48
48.99
0.65
19.79
0.2
16.89
0.38
19.62
0.48
48.76
0.54
🀝 base merges and moerges
Qwen2ForCausalLM
Original
bfloat16
true
null
14
3
false
dac3be334098338fb6c02636349e8ed53f18c4a4
true
true
2024-09-28
2024-09-23
false
false
Lambent/qwen2.5-reinstruct-alternate-lumen-14B
1
Lambent/qwen2.5-reinstruct-alternate-lumen-14B (Merge)
πŸ’¬
tanliboy/lambda-qwen2.5-14b-dpo-test
33.52
82.31
0.82
48.45
0.64
0
0
14.99
0.36
12.59
0.43
42.75
0.48
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
Qwen2ForCausalLM
Original
bfloat16
true
apache-2.0
14
2
true
96607eea3c67f14f73e576580610dba7530c5dd9
true
true
2024-09-20
2024-09-20
true
false
tanliboy/lambda-qwen2.5-14b-dpo-test
2
Qwen/Qwen2.5-14B
πŸ’¬
CohereForAI/c4ai-command-r-plus-08-2024
33.42
75.4
0.75
42.84
0.6
11.03
0.11
13.42
0.35
19.84
0.48
38.01
0.44
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
CohereForCausalLM
Original
float16
true
cc-by-nc-4.0
103
131
true
2d8cf3ab0af78b9e43546486b096f86adf3ba4d0
true
true
2024-09-19
2024-08-21
true
true
CohereForAI/c4ai-command-r-plus-08-2024
0
CohereForAI/c4ai-command-r-plus-08-2024
🀝
v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno
33.39
48.55
0.49
49.74
0.65
19.71
0.2
15.21
0.36
18.43
0.47
48.68
0.54
🀝 base merges and moerges
Qwen2ForCausalLM
Original
bfloat16
false
apache-2.0
14
2
true
1069abb4c25855e67ffaefa08a0befbb376e7ca7
true
true
2024-09-28
2024-09-20
false
false
v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno
1
v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno (Merge)
πŸ’¬
jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
33.3
68.52
0.69
49.85
0.64
17.98
0.18
10.07
0.33
12.35
0.43
41.07
0.47
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
Phi3ForCausalLM
Original
float16
true
mit
13
7
true
d34bbd55b48e553f28579d86f3ccae19726c6b39
true
true
2024-08-28
2024-08-12
true
false
jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
0
jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
πŸ”Ά
migtissera/Tess-v2.5.2-Qwen2-72B
33.28
44.94
0.45
52.31
0.66
27.42
0.27
13.42
0.35
10.89
0.42
50.68
0.56
πŸ”Ά fine-tuned on domain-specific datasets
Qwen2ForCausalLM
Original
bfloat16
true
other
72
11
true
0435e634ad9bc8b1172395a535b78e6f25f3594f
true
true
2024-08-10
2024-06-13
true
false
migtissera/Tess-v2.5.2-Qwen2-72B
0
migtissera/Tess-v2.5.2-Qwen2-72B
πŸ’¬
microsoft/Phi-3-medium-4k-instruct
32.67
64.23
0.64
49.38
0.64
16.99
0.17
11.52
0.34
13.05
0.43
40.84
0.47
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
Phi3ForCausalLM
Original
bfloat16
true
mit
13
209
true
d194e4e74ffad5a5e193e26af25bcfc80c7f1ffc
true
true
2024-06-12
2024-05-07
true
true
microsoft/Phi-3-medium-4k-instruct
0
microsoft/Phi-3-medium-4k-instruct
πŸ’¬
01-ai/Yi-1.5-34B-Chat
32.63
60.67
0.61
44.26
0.61
23.34
0.23
15.32
0.36
13.06
0.43
39.12
0.45
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
LlamaForCausalLM
Original
bfloat16
true
apache-2.0
34
236
true
f3128b2d02d82989daae566c0a7eadc621ca3254
true
true
2024-06-12
2024-05-10
true
true
01-ai/Yi-1.5-34B-Chat
0
01-ai/Yi-1.5-34B-Chat
πŸ”Ά
alpindale/WizardLM-2-8x22B
32.61
52.72
0.53
48.58
0.64
22.28
0.22
17.56
0.38
14.54
0.44
39.96
0.46
πŸ”Ά fine-tuned on domain-specific datasets
MixtralForCausalLM
Original
bfloat16
true
apache-2.0
140
380
true
087834da175523cffd66a7e19583725e798c1b4f
true
true
2024-06-28
2024-04-16
false
false
alpindale/WizardLM-2-8x22B
0
alpindale/WizardLM-2-8x22B
πŸ’¬
google/gemma-2-27b-it
32.31
79.78
0.8
49.27
0.65
0.68
0.01
16.67
0.38
9.11
0.4
38.35
0.45
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
Gemma2ForCausalLM
Original
bfloat16
true
gemma
27
410
true
f6c533e5eb013c7e31fc74ef042ac4f3fb5cf40b
true
true
2024-08-07
2024-06-24
true
true
google/gemma-2-27b-it
1
google/gemma-2-27b
πŸ’¬
MaziyarPanahi/calme-2.4-llama3-70b
32.18
50.27
0.5
48.4
0.64
22.66
0.23
11.97
0.34
13.1
0.43
46.71
0.52
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
LlamaForCausalLM
Original
bfloat16
true
llama3
70
14
true
cb03e4d810b82d86e7cb01ab146bade09a5d06d1
true
true
2024-06-26
2024-04-28
true
false
MaziyarPanahi/calme-2.4-llama3-70b
2
meta-llama/Meta-Llama-3-70B
πŸ’¬
Qwen/Qwen2.5-14B-Instruct
32.18
81.58
0.82
48.36
0.64
0
0
9.62
0.32
10.16
0.41
43.38
0.49
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
Qwen2ForCausalLM
Original
bfloat16
true
apache-2.0
14
62
true
f55224c616ca27d4bcf28969a156de12c98981cf
true
true
2024-09-18
2024-09-16
true
true
Qwen/Qwen2.5-14B-Instruct
1
Qwen/Qwen2.5-14B
🀝
paloalma/TW3-JRGL-v2
32.12
53.16
0.53
45.61
0.61
15.86
0.16
14.54
0.36
20.7
0.49
42.87
0.49
🀝 base merges and moerges
LlamaForCausalLM
Original
bfloat16
false
apache-2.0
72
0
true
aca3f0ba2bfb90038a9e2cd5b486821d4c181b46
true
true
2024-08-29
2024-04-01
false
false
paloalma/TW3-JRGL-v2
0
paloalma/TW3-JRGL-v2
πŸ’¬
v000000/Qwen2.5-14B-Gutenberg-1e-Delta
32.11
80.45
0.8
48.62
0.64
0
0
10.51
0.33
9.38
0.41
43.67
0.49
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
Qwen2ForCausalLM
Original
bfloat16
true
apache-2.0
14
3
true
f624854b4380e01322e752ce4daadd49ac86580f
true
true
2024-09-28
2024-09-20
true
false
v000000/Qwen2.5-14B-Gutenberg-1e-Delta
1
v000000/Qwen2.5-14B-Gutenberg-1e-Delta (Merge)
πŸ’¬
internlm/internlm2_5-20b-chat
32.08
70.1
0.7
62.83
0.75
0
0
9.51
0.32
16.74
0.46
33.31
0.4
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
InternLM2ForCausalLM
Original
bfloat16
true
other
19
80
true
ef17bde929761255fee76d95e2c25969ccd93b0d
true
true
2024-08-12
2024-07-30
true
true
internlm/internlm2_5-20b-chat
0
internlm/internlm2_5-20b-chat
πŸ’¬
MTSAIR/MultiVerse_70B
31.73
52.49
0.52
46.14
0.62
16.16
0.16
13.87
0.35
18.82
0.47
42.89
0.49
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
LlamaForCausalLM
Original
bfloat16
true
other
72
38
true
063430cdc4d972a0884e3e3e3d45ea4afbdf71a2
true
true
2024-06-29
2024-03-25
false
false
MTSAIR/MultiVerse_70B
0
MTSAIR/MultiVerse_70B
🀝
paloalma/Le_Triomphant-ECE-TW3
31.66
54.02
0.54
44.96
0.61
17.45
0.17
13.2
0.35
18.5
0.47
41.81
0.48
🀝 base merges and moerges
LlamaForCausalLM
Original
bfloat16
false
apache-2.0
72
3
true
f72399253bb3e65c0f55e50461488c098f658a49
true
true
2024-07-25
2024-04-01
false
false
paloalma/Le_Triomphant-ECE-TW3
0
paloalma/Le_Triomphant-ECE-TW3
πŸ”Ά
failspy/Phi-3-medium-4k-instruct-abliterated-v3
31.55
63.19
0.63
46.73
0.63
14.12
0.14
8.95
0.32
18.52
0.46
37.78
0.44
πŸ”Ά fine-tuned on domain-specific datasets
Phi3ForCausalLM
Original
bfloat16
true
mit
13
22
true
959b09eacf6cae85a8eb21b25e998addc89a367b
true
true
2024-07-29
2024-05-22
true
false
failspy/Phi-3-medium-4k-instruct-abliterated-v3
0
failspy/Phi-3-medium-4k-instruct-abliterated-v3
πŸ’¬
microsoft/Phi-3-medium-128k-instruct
31.52
60.4
0.6
48.46
0.64
16.16
0.16
11.52
0.34
11.35
0.41
41.24
0.47
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
Phi3ForCausalLM
Original
bfloat16
true
mit
13
363
true
fa7d2aa4f5ea69b2e36b20d050cdae79c9bfbb3f
true
true
2024-08-21
2024-05-07
true
true
microsoft/Phi-3-medium-128k-instruct
0
microsoft/Phi-3-medium-128k-instruct
🟒
Qwen/Qwen2.5-14B
31.45
36.94
0.37
45.08
0.62
25.98
0.26
17.56
0.38
15.91
0.45
47.21
0.52
🟒 pretrained
Qwen2ForCausalLM
Original
bfloat16
true
apache-2.0
14
19
true
83a1904df002b00bc8db6f877821cb77dbb363b0
true
true
2024-09-19
2024-09-15
false
true
Qwen/Qwen2.5-14B
0
Qwen/Qwen2.5-14B
πŸ’¬
Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO
31.42
47.99
0.48
51.03
0.65
17.45
0.17
10.18
0.33
20.53
0.48
41.37
0.47
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
MistralForCausalLM
Original
float16
true
mit
13
3
true
b749dbcb19901b8fd0e9f38c923a24533569f895
true
true
2024-08-13
2024-06-15
true
false
Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO
0
Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO
πŸ”Ά
SicariusSicariiStuff/Qwen2.5-14B_Uncensored
31.35
31.73
0.32
46.72
0.63
29.38
0.29
17.56
0.38
15.29
0.45
47.4
0.53
πŸ”Ά fine-tuned on domain-specific datasets
Qwen2ForCausalLM
Original
float16
true
null
14
0
false
0710a2341d269dcd56f9136fed442373d4dadc5d
true
true
2024-09-21
null
false
false
SicariusSicariiStuff/Qwen2.5-14B_Uncensored
0
Removed
πŸ”Ά
SicariusSicariiStuff/Qwen2.5-14B_Uncencored
31.32
31.58
0.32
46.72
0.63
29.38
0.29
17.56
0.38
15.29
0.45
47.4
0.53
πŸ”Ά fine-tuned on domain-specific datasets
Qwen2ForCausalLM
Original
float16
true
null
14
0
false
1daf648ac2f837c66bf6bb00459e034987d9486f
true
true
2024-09-20
null
false
false
SicariusSicariiStuff/Qwen2.5-14B_Uncencored
0
Removed
🀝
CombinHorizon/YiSM-blossom5.1-34B-SLERP
31.09
50.33
0.5
46.4
0.62
19.79
0.2
14.09
0.36
14.37
0.44
41.56
0.47
🀝 base merges and moerges
LlamaForCausalLM
Original
bfloat16
false
apache-2.0
34
0
true
ebd8d6507623008567a0548cd0ff9e28cbd6a656
true
true
2024-08-27
2024-08-27
true
false
CombinHorizon/YiSM-blossom5.1-34B-SLERP
1
CombinHorizon/YiSM-blossom5.1-34B-SLERP (Merge)
πŸ’¬
OpenBuddy/openbuddy-qwen2.5llamaify-14b-v23.1-200k
30.92
63.09
0.63
43.28
0.6
15.71
0.16
11.07
0.33
11.54
0.42
40.82
0.47
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
LlamaForCausalLM
Original
bfloat16
true
apache-2.0
14
0
true
001e14063e2702a9b2284dc6ec889d2586dc839b
true
true
2024-09-23
2024-09-23
true
false
OpenBuddy/openbuddy-qwen2.5llamaify-14b-v23.1-200k
0
OpenBuddy/openbuddy-qwen2.5llamaify-14b-v23.1-200k
πŸ’¬
CohereForAI/c4ai-command-r-plus
30.86
76.64
0.77
39.92
0.58
7.55
0.08
7.38
0.31
20.42
0.48
33.24
0.4
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
CohereForCausalLM
Original
float16
true
cc-by-nc-4.0
103
1,657
true
fa1bd7fb1572ceb861bbbbecfa8af83b29fa8cca
true
true
2024-06-13
2024-04-03
true
true
CohereForAI/c4ai-command-r-plus
0
CohereForAI/c4ai-command-r-plus
πŸ”Ά
Replete-AI/Replete-LLM-V2.5-Qwen-7b
30.8
62.37
0.62
36.37
0.55
26.44
0.26
9.06
0.32
12
0.43
38.54
0.45
πŸ”Ά fine-tuned on domain-specific datasets
Qwen2ForCausalLM
Original
bfloat16
true
apache-2.0
7
9
true
dbd819e8f765181f774cb5b79812d081669eb302
true
true
2024-09-29
2024-09-28
false
false
Replete-AI/Replete-LLM-V2.5-Qwen-7b
1
Replete-AI/Replete-LLM-V2.5-Qwen-7b (Merge)
πŸ’¬
mattshumer/ref_70_e3
30.74
62.94
0.63
49.27
0.65
0
0
11.41
0.34
13
0.43
47.81
0.53
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
LlamaForCausalLM
Original
float16
true
llama3.1
70
50
true
5d2d9dbb9e0bf61879255f63f1b787296fe524cc
true
true
2024-09-08
2024-09-08
true
false
mattshumer/ref_70_e3
2
meta-llama/Meta-Llama-3.1-70B
πŸ”Ά
mmnga/Llama-3-70B-japanese-suzume-vector-v0.1
30.54
46.49
0.46
50.02
0.65
24.24
0.24
4.81
0.29
10.76
0.41
46.94
0.52
πŸ”Ά fine-tuned on domain-specific datasets
LlamaForCausalLM
Original
bfloat16
true
llama3
70
4
true
16f98b2d45684af2c4a9ff5da75b00ef13cca808
true
true
2024-09-19
2024-04-28
true
false
mmnga/Llama-3-70B-japanese-suzume-vector-v0.1
0
mmnga/Llama-3-70B-japanese-suzume-vector-v0.1
πŸ’¬
internlm/internlm2_5-7b-chat
30.46
61.4
0.61
57.67
0.71
8.31
0.08
10.63
0.33
14.35
0.44
30.42
0.37
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
InternLM2ForCausalLM
Original
float16
true
other
7
163
true
bebb00121ee105b823647c3ba2b1e152652edc33
true
true
2024-07-03
2024-06-27
true
true
internlm/internlm2_5-7b-chat
0
internlm/internlm2_5-7b-chat
πŸ’¬
ValiantLabs/Llama3-70B-ShiningValiant2
30.45
61.22
0.61
46.71
0.63
7.1
0.07
10.74
0.33
13.64
0.43
43.31
0.49
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
LlamaForCausalLM
Original
bfloat16
true
llama3
70
4
true
bd6cce8da08ccefe9ec58cae3df4bf75c97d8950
true
true
2024-07-25
2024-04-20
true
false
ValiantLabs/Llama3-70B-ShiningValiant2
0
ValiantLabs/Llama3-70B-ShiningValiant2
🀝
mlabonne/BigQwen2.5-Echo-47B-Instruct
30.31
73.57
0.74
44.52
0.61
3.47
0.03
8.61
0.31
10.19
0.41
41.49
0.47
🀝 base merges and moerges
Qwen2ForCausalLM
Original
bfloat16
false
apache-2.0
47
3
true
f95fcf22f8ab87c2dbb1893b87c8a132820acb5e
true
true
2024-09-24
2024-09-23
true
true
mlabonne/BigQwen2.5-Echo-47B-Instruct
1
mlabonne/BigQwen2.5-Echo-47B-Instruct (Merge)
πŸ’¬
recoilme/recoilme-gemma-2-9B-v0.3
30.21
74.39
0.74
42.03
0.6
8.76
0.09
9.84
0.32
12.08
0.42
34.14
0.41
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
Gemma2ForCausalLM
Original
float16
true
cc-by-nc-4.0
10
0
true
772cab46d9d22cbcc3c574d193021803ce5c444c
true
true
2024-09-18
2024-09-18
true
false
recoilme/recoilme-gemma-2-9B-v0.3
0
recoilme/recoilme-gemma-2-9B-v0.3
πŸ’¬
MaziyarPanahi/calme-2.3-qwen2-72b
30.17
38.5
0.38
51.23
0.66
14.73
0.15
16.22
0.37
11.24
0.41
49.1
0.54
πŸ’¬ chat models (RLHF, DPO, IFT, ...)
Qwen2ForCausalLM
Original
bfloat16
true
other
72
2
true
12ff2e800f968e867a580c072905cf4671da066f
true
true
2024-09-15
2024-08-06
true
false
MaziyarPanahi/calme-2.3-qwen2-72b
1
Qwen/Qwen2-72B
🀝
altomek/YiSM-34B-0rn
30.15
42.84
0.43
45.38
0.61
20.62
0.21
16.22
0.37
14.76
0.44
41.06
0.47
🀝 base merges and moerges
LlamaForCausalLM
Original
float16
false
apache-2.0
34
1
true
7a481c67cbdd5c846d6aaab5ef9f1eebfad812c2
true
true
2024-06-27
2024-05-26
true
false
altomek/YiSM-34B-0rn
1
altomek/YiSM-34B-0rn (Merge)
🀝
allknowingroger/Yislerp2-34B
30.1
39.93
0.4
47.2
0.62
21
0.21
15.21
0.36
15.85
0.45
41.38
0.47
🀝 base merges and moerges
LlamaForCausalLM
Original
bfloat16
false
apache-2.0
34
0
true
3147cf866736b786347928b655c887e8b9c07bfc
true
true
2024-09-19
2024-09-17
false
false
allknowingroger/Yislerp2-34B
1
allknowingroger/Yislerp2-34B (Merge)
πŸ”Ά
VAGOsolutions/SauerkrautLM-Phi-3-medium
30.09
44.09
0.44
49.63
0.64
14.12
0.14
11.3
0.33
20.7
0.48
40.72
0.47
πŸ”Ά fine-tuned on domain-specific datasets
MistralForCausalLM
Original
bfloat16
true
mit
13
8
true
ebfed26a2b35ede15fe526f57029e0ad866ac66d
true
true
2024-09-19
2024-06-09
false
false
VAGOsolutions/SauerkrautLM-Phi-3-medium
0
VAGOsolutions/SauerkrautLM-Phi-3-medium