Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 150 new columns ({'config.backend.model', 'report.prefill.efficiency.value', 'report.prefill.latency.p50', 'config.backend.torch_compile_target', 'config.launcher.name', 'report.decode.efficiency.value', 'config.environment.timm_version', 'config.scenario._target_', 'config.environment.cpu_count', 'config.environment.gpu_vram_mb', 'report.prefill.memory.max_global_vram', 'report.per_token.latency.p50', 'report.decode.latency.stdev', 'config.backend.hub_kwargs.force_download', 'config.scenario.name', 'report.prefill.latency.stdev', 'report.prefill.memory.max_process_vram', 'report.per_token.throughput.value', 'config.environment.machine', 'config.launcher._target_', 'config.environment.platform', 'report.prefill.latency.total', 'config.backend.autocast_enabled', 'report.decode.latency.p95', 'report.per_token.efficiency', 'report.per_token.memory', 'config.environment.transformers_commit', 'report.prefill.throughput.unit', 'config.environment.diffusers_commit', 'config.scenario.latency', 'config.environment.accelerate_commit', 'report.per_token.latency.count', 'config.backend.torch_dtype', 'config.backend.to_bettertransformer', 'config.launcher.start_method', 'config.environment.optimum_benchmark_commit', 'config.backend.cache_implementation', 'report.prefill.latency.mean', 'config.backend.hub_kwargs.revision', 'report.decode.latency.unit', 'config.environment.diffusers_version', 'report.prefill.memory.max_reserved', 'report.per_token.latency.p99', 'config.environment.transformers_version', 're
...
nt.cpu', 'config.environment.peft_version', 'config.environment.peft_commit', 'config.launcher.device_isolation_action', 'config.scenario.generate_kwargs.min_new_tokens', 'config.environment.optimum_benchmark_version', 'config.launcher.numactl', 'config.environment.gpu', 'config.backend.quantization_scheme', 'config.environment.python_version', 'report.prefill.energy.ram', 'report.decode.energy.cpu', 'config.scenario.input_shapes.batch_size', 'config.backend.library', 'report.prefill.throughput.value', 'report.prefill.efficiency.unit', 'report.decode.latency.p99', 'report.decode.latency.values', 'report.prefill.energy.unit', 'report.decode.memory.max_ram', 'config.environment.timm_commit', 'config.backend.quantization_config.version', 'report.decode.latency.p90', 'config.backend.quantization_config.bits', 'config.environment.system', 'report.decode.efficiency.unit', 'config.scenario.energy', 'config.backend.hub_kwargs.trust_remote_code', 'report.per_token.latency.total', 'config.launcher.device_isolation', 'report.decode.memory.max_allocated', 'report.prefill.latency.values', 'config.backend.inter_op_num_threads', 'config.environment.optimum_commit', 'config.scenario.input_shapes.sequence_length', 'config.environment.gpu_count', 'report.decode.memory.max_global_vram', 'config.backend.processor', 'config.backend.torch_compile', 'config.backend.no_weights', 'report.per_token.latency.stdev', 'report.prefill.latency.p95', 'report.prefill.latency.p99', 'report.prefill.energy.cpu'}) and 21 missing columns ({'Model', 'MoE', 'Architecture', 'GSM8K', 'HellaSwag', 'Precision', 'T', 'Hub License', 'TruthfulQA', 'Flagged', 'Model sha', 'Weight type', 'MMLU', 'Merged', 'Average ⬆️', 'Winogrande', 'Available on the hub', 'Type', 'Hub ❤️', 'ARC', '#Params (B)'}).

This happened while the csv dataset builder was generating data using

hf://datasets/optimum-benchmark/llm-perf-leaderboard/perf-df-awq-1xA10.csv (at revision eb9b1fab2d755287ab732703dfbb1e4b47719cc2)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
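The second fix the message suggests is to declare multiple configurations in the dataset card's YAML front matter, so that files with different schemas are built separately. A minimal sketch, assuming the benchmark CSVs and the leaderboard CSV should become two configs (the `leaderboard` config name and `open-llm-df.csv` filename are illustrative, not the repo's actual layout; `perf-df-awq-1xA10.csv` is the file named in the error):

```yaml
configs:
- config_name: perf-awq-1xA10       # files with the config.* / report.* columns
  data_files: "perf-df-awq-1xA10.csv"
- config_name: leaderboard          # illustrative: files with the Model/ARC/MMLU... columns
  data_files: "open-llm-df.csv"
```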
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              config.name: string
              config.backend.name: string
              config.backend.version: string
              config.backend._target_: string
              config.backend.task: string
              config.backend.library: string
              config.backend.model: string
              config.backend.processor: string
              config.backend.device: string
              config.backend.device_ids: int64
              config.backend.seed: int64
              config.backend.inter_op_num_threads: double
              config.backend.intra_op_num_threads: double
              config.backend.model_kwargs.trust_remote_code: bool
              config.backend.processor_kwargs.trust_remote_code: bool
              config.backend.hub_kwargs.trust_remote_code: bool
              config.backend.no_weights: bool
              config.backend.device_map: double
              config.backend.torch_dtype: string
              config.backend.eval_mode: bool
              config.backend.to_bettertransformer: bool
              config.backend.low_cpu_mem_usage: double
              config.backend.attn_implementation: string
              config.backend.cache_implementation: double
              config.backend.autocast_enabled: bool
              config.backend.autocast_dtype: double
              config.backend.torch_compile: bool
              config.backend.torch_compile_target: string
              config.backend.quantization_scheme: string
              config.backend.quantization_config.bits: int64
              config.backend.quantization_config.version: string
              config.backend.deepspeed_inference: bool
              config.backend.peft_type: double
              config.scenario.name: string
              config.scenario._target_: string
              config.scenario.iterations: int64
              config.scenario.duration: int64
              config.scenario.warmup_runs: int64
              config.scenario.input_shapes.batch_size: int64
              config.scenario.input_shapes.num_choices: int64
              co
              ...
              .latency.p50: double
              report.decode.latency.p90: double
              report.decode.latency.p95: double
              report.decode.latency.p99: double
              report.decode.latency.values: string
              report.decode.throughput.unit: string
              report.decode.throughput.value: double
              report.decode.energy.unit: string
              report.decode.energy.cpu: double
              report.decode.energy.ram: double
              report.decode.energy.gpu: double
              report.decode.energy.total: double
              report.decode.efficiency.unit: string
              report.decode.efficiency.value: double
              report.per_token.memory: double
              report.per_token.latency.unit: string
              report.per_token.latency.count: double
              report.per_token.latency.total: double
              report.per_token.latency.mean: double
              report.per_token.latency.stdev: double
              report.per_token.latency.p50: double
              report.per_token.latency.p90: double
              report.per_token.latency.p95: double
              report.per_token.latency.p99: double
              report.per_token.latency.values: string
              report.per_token.throughput.unit: string
              report.per_token.throughput.value: double
              report.per_token.energy: double
              report.per_token.efficiency: double
              config.backend.hub_kwargs.revision: string
              config.backend.hub_kwargs.force_download: bool
              config.backend.hub_kwargs.local_files_only: bool
              config.backend.quantization_config.exllama_config.version: double
              config.backend.quantization_config.exllama_config.max_input_len: double
              config.backend.quantization_config.exllama_config.max_batch_size: double
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 24248
              to
              {'T': Value(dtype='string', id=None), 'Model': Value(dtype='string', id=None), 'Average ⬆️': Value(dtype='float64', id=None), 'ARC': Value(dtype='float64', id=None), 'HellaSwag': Value(dtype='float64', id=None), 'MMLU': Value(dtype='float64', id=None), 'TruthfulQA': Value(dtype='float64', id=None), 'Winogrande': Value(dtype='float64', id=None), 'GSM8K': Value(dtype='float64', id=None), 'Type': Value(dtype='string', id=None), 'Architecture': Value(dtype='string', id=None), 'Weight type': Value(dtype='string', id=None), 'Precision': Value(dtype='string', id=None), 'Merged': Value(dtype='bool', id=None), 'Hub License': Value(dtype='string', id=None), '#Params (B)': Value(dtype='float64', id=None), 'Hub ❤️': Value(dtype='float64', id=None), 'Available on the hub': Value(dtype='bool', id=None), 'Model sha': Value(dtype='string', id=None), 'Flagged': Value(dtype='bool', id=None), 'MoE': Value(dtype='bool', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1324, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 938, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
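The "150 new columns / 21 missing columns" diagnosis in the traceback is just a set difference between the columns of the offending CSV and the schema inferred from the first data file. A minimal sketch of that check (column lists abbreviated here; the builder's real logic lives in `datasets.table.cast_table_to_schema`):

```python
def diff_columns(file_columns, schema_columns):
    """Mirror the builder's cast check: report columns present in the
    data file but absent from the inferred schema, and vice versa."""
    new = set(file_columns) - set(schema_columns)
    missing = set(schema_columns) - set(file_columns)
    return new, missing

# Abbreviated column lists from the error above.
schema = ["Model", "Average ⬆️", "ARC", "MMLU"]                     # leaderboard-style schema
file_cols = ["config.backend.model", "report.decode.latency.p50"]  # benchmark CSV

new, missing = diff_columns(file_cols, schema)
print(len(new), len(missing))  # prints "2 4": every column differs
```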


| Column | Type |
|---|---|
| T | string |
| Model | string |
| Average ⬆️ | float64 |
| ARC | float64 |
| HellaSwag | float64 |
| MMLU | float64 |
| TruthfulQA | float64 |
| Winogrande | float64 |
| GSM8K | float64 |
| Type | string |
| Architecture | string |
| Weight type | string |
| Precision | string |
| Merged | bool |
| Hub License | string |
| #Params (B) | float64 |
| Hub ❤️ | float64 |
| Available on the hub | bool |
| Model sha | string |
| Flagged | bool |
| MoE | bool |
| T | Model | Average ⬆️ | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | Type | Architecture | Weight type | Precision | Merged | Hub License | #Params (B) | Hub ❤️ | Available on the hub | Model sha | Flagged | MoE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 🤝 | paloalma/TW3-JRGL-v1 | 81.31 | 78.5 | 90.3 | 77.81 | 75.84 | 85.56 | 79.83 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 72 | 0 | false | aa4d9084fcfb69afff6b2bac5c1350bf29a159cb | true | true |
| 🤝 | paloalma/Le_Triomphant-ECE-TW3 | 81.31 | 78.5 | 90.3 | 77.81 | 75.84 | 85.56 | 79.83 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | apache-2.0 | 72 | 2 | true | aa4d9084fcfb69afff6b2bac5c1350bf29a159cb | true | true |
| 🔶 | freewheelin/free-evo-qwen72b-v0.8-re | 81.28 | 79.86 | 91.34 | 78 | 74.85 | 87.77 | 75.89 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | false | mit | 72 | 4 | true | df20836951a07c52d4aacc668fca3143429d485c | true | true |
| 🔶 | freewheelin/free-evo-qwen72b-v0.8 | 81.28 | 79.86 | 91.34 | 78 | 74.85 | 87.77 | 75.89 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | mit | 72 | 0 | false | 7169478b57edff434bd943be28415ea9fc2cf1e0 | true | true |
| 🔶 | davidkim205/Rhea-72b-v0.5 | 81.22 | 79.78 | 91.15 | 77.95 | 74.5 | 87.85 | 76.12 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | apache-2.0 | 72 | 110 | true | fda5cf998a0f2d89b53b5fa490793e3e50bb8239 | true | true |
| 💬 | Contamination/contaminated_proof_7b_v1.0_safetensor | 81.14 | 78.07 | 90.22 | 78.92 | 82.29 | 88.16 | 69.14 | 💬 chat models (RLHF, DPO, IFT, ...) | MistralForCausalLM | Original | float16 | true | unknown | 7 | 11 | true | 5d7fcb3724d6b08cf82e1b0c1faa1695b9fd6932 | false | true |
| 💬 | Contamination/contaminated_proof_7b_v1.0 | 81.14 | 78.07 | 90.22 | 78.92 | 82.29 | 88.16 | 69.14 | 💬 chat models (RLHF, DPO, IFT, ...) | MistralForCausalLM | Original | float16 | true | unknown | 7 | 4 | true | b1415875faed65cd29fd804941f5dcf835e99608 | false | true |
| 🔶 | davidkim205/Rhea-72b-v0.4 | 81.09 | 78.5 | 90.75 | 78.01 | 73.91 | 86.74 | 78.62 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | apache-2.0 | 72 | 0 | false | 5502123c46485914a580d6794eeb5fb3554b46aa | true | true |
| 💬 | MTSAIR/MultiVerse_70B | 81 | 78.67 | 89.77 | 78.22 | 75.18 | 87.53 | 76.65 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | other | 72 | 30 | true | ea2b4ff8e5acd7a48993f56b2d7b99e049eb6939 | true | true |
| 💬 | MTSAIR/MultiVerse_70B | 80.98 | 78.58 | 89.74 | 78.27 | 75.09 | 87.37 | 76.8 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | float16 | true | other | 72 | 30 | true | ea2b4ff8e5acd7a48993f56b2d7b99e049eb6939 | true | true |
| 🔶 | davidkim205/Rhea-72b-v0.2 | 80.95 | 77.56 | 90.84 | 77.98 | 74.5 | 86.35 | 78.47 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | apache-2.0 | 72 | 0 | false | c51bcf1a3dc3c5e512e805f52d5e15384d798ba7 | true | true |
| 🔶 | davidkim205/Rhea-72b-v0.3 | 80.85 | 76.79 | 89.98 | 77.47 | 75.93 | 85.08 | 79.83 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | apache-2.0 | 72 | 0 | false | 7db39c93177958d94ebc3b719f8bfc75826b345e | true | true |
| 🔶 | SF-Foundation/Ein-72B-v0.11 | 80.81 | 76.79 | 89.02 | 77.2 | 79.02 | 84.06 | 78.77 | 🔶 fine-tuned on domain-specific datasets | ? | Adapter | bfloat16 | true | apache-2.0 | 72 | 0 | false | 40d451f32b1a6c9ad694b32ba8ed4822c27f3022 | true | true |
| 🔶 | SF-Foundation/Ein-72B-v0.13 | 80.79 | 76.19 | 89.44 | 77.07 | 77.82 | 84.93 | 79.3 | 🔶 fine-tuned on domain-specific datasets | ? | Adapter | bfloat16 | true | apache-2.0 | 72 | 0 | false | 1f302e0e15f3d3711778cd61686eb9b28b0c72ae | true | true |
| 🔶 | SF-Foundation/Ein-72B-v0.12 | 80.72 | 76.19 | 89.46 | 77.17 | 77.78 | 84.45 | 79.23 | 🔶 fine-tuned on domain-specific datasets | ? | Adapter | bfloat16 | true | apache-2.0 | 72 | 0 | false | 84d38e29fec0dc9c274237968fdafe9396702f9b | true | true |
| 🔶 | abacusai/Smaug-72B-v0.1 | 80.48 | 76.02 | 89.27 | 77.15 | 76.67 | 85.08 | 78.7 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | other | 72 | 450 | true | 54a8c35600ec5cb30ca2129247854ece23e57f57 | true | true |
| 🔶 | ibivibiv/alpaca-dragon-72b-v1 | 79.3 | 73.89 | 88.16 | 77.4 | 72.69 | 86.03 | 77.63 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | other | 72 | 23 | true | 4df251a558c53b6b6a4c459045b161951cfc3c4e | true | true |
| 💬 | mistralai/Mixtral-8x22B-Instruct-v0.1 | 79.15 | 72.7 | 89.08 | 77.77 | 68.14 | 85.16 | 82.03 | 💬 chat models (RLHF, DPO, IFT, ...) | MixtralForCausalLM | Original | bfloat16 | true | apache-2.0 | 140 | 586 | true | eb69dca9c68bbdcffd5f522f632d5c04ab6c65b3 | true | false |
| 💬 | MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2 | 78.96 | 72.53 | 86.22 | 80.41 | 63.57 | 82.79 | 88.25 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 6 | true | 0ef6aba21c4537fe693c4160b820efb28270705b | true | true |
| 💬 | MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4 | 78.89 | 72.61 | 86.03 | 80.5 | 63.26 | 83.58 | 87.34 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 10 | true | 5a44e1d115e991a9814b9dd96fa60132ced9b99f | true | true |
| 💬 | MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.3 | 78.74 | 72.35 | 86 | 80.47 | 63.45 | 82.95 | 87.19 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 2 | true | 17f4cce3f08bc798516839315b07f0c8e05d6611 | true | true |
| 💬 | mmnga/Llama-3-70B-japanese-suzume-vector-v0.1 | 78.6 | 72.35 | 85.81 | 80.28 | 62.93 | 82.79 | 87.41 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 2 | true | 16f98b2d45684af2c4a9ff5da75b00ef13cca808 | true | true |
| 💬 | moreh/MoMo-72B-lora-1.8.7-DPO | 78.55 | 70.82 | 85.96 | 77.13 | 74.71 | 84.06 | 78.62 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | mit | 72 | 67 | true | c64edea08b27be1e7e2ae6a95bcdd74849cb887e | true | true |
| 💬 | tenyx/Llama3-TenyxChat-70B | 78.4 | 72.1 | 86.21 | 80.04 | 62.85 | 82.95 | 86.28 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 58 | true | de770dc2c767b50b17bef491ec6983c29e60f668 | true | false |
| 🔶 | failspy/llama-3-70B-Instruct-abliterated | 78.26 | 72.01 | 86.02 | 79.97 | 63.15 | 83.11 | 85.29 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | llama3 | 70 | 55 | true | 53ae9dafe8b3d163e05d75387575f8e9f43253d0 | true | true |
| 🔶 | saltlux/luxia-21.4b-alignment-v1.2 | 78.14 | 77.73 | 90.86 | 67.86 | 79.16 | 86.27 | 66.94 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 21 | 2 | true | e318e0a864db847b4020cbc8d23035dae08522ab | true | true |
| 🔶 | 4season/final_model_test_v2 | 78.14 | 77.73 | 90.86 | 67.86 | 79.16 | 86.27 | 66.94 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 21 | 0 | false | cf690c35d9cf0b0b6bf034fa16dbf88c56fe861c | true | true |
| 💬 | MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.1 | 78.11 | 71.67 | 85.83 | 80.12 | 62.11 | 82.87 | 86.05 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 8 | true | 99d755d89cfbb28f19179d07f02876720646a767 | true | true |
| 🔶 | abhishek/autotrain-llama3-70b-orpo-v1 | 78.08 | 70.65 | 85.99 | 80.11 | 61.78 | 84.29 | 85.67 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | other | 70 | 3 | true | 053236c6846cc561c1503ba05e2b28c94855a432 | true | true |
| 🔶 | failspy/llama-3-70B-Instruct-abliterated | 78.08 | 71.84 | 86.04 | 79.8 | 63.18 | 82.4 | 85.22 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 55 | true | 53ae9dafe8b3d163e05d75387575f8e9f43253d0 | true | true |
| 🔶 | cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16 | 77.91 | 74.06 | 86.74 | 76.65 | 72.24 | 83.35 | 74.45 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | other | 60 | 15 | true | cd29cfa124072c96ba8601230bead65d76e04dcb | true | false |
| 💬 | meta-llama/Meta-Llama-3-70B-Instruct | 77.88 | 71.42 | 85.69 | 80.06 | 61.81 | 82.87 | 85.44 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | float16 | true | llama3 | 70 | 1,063 | true | 5fcb2901844dde3111159f24205b71c25900ffbd | true | true |
| 🔶 | 4season/merge_model_test_v2 | 77.82 | 79.35 | 89.75 | 67.89 | 71.58 | 86.58 | 71.8 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | false | apache-2.0 | 21 | 0 | true | e9542d2e5f8ede339a2917b37f2c570f2847becc | true | true |
| 🔶 | fblgit/UNA-ThePitbull-21.4B-v2 | 77.82 | 77.73 | 91.79 | 68.25 | 78.24 | 87.37 | 63.53 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | afl-3.0 | 21 | 3 | true | 6f59176110b23838a01fc401512df3ada96e9557 | true | true |
| 🔶 | saltlux/luxia-21.4b-alignment-v1.0 | 77.74 | 77.47 | 91.88 | 68.1 | 79.17 | 87.45 | 62.4 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 21 | 31 | true | ba3403eaafc6d1f6e3a73245314ee96025c08d96 | true | true |
| 🔶 | saltlux/luxia-21.4b-alignment-v1.0 | 77.74 | 77.73 | 91.82 | 68.05 | 79.2 | 87.37 | 62.24 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | apache-2.0 | 21 | 31 | true | 910c73192c30fb51dc94f69777b2ec7cc3a4465b | true | true |
| 🔶 | fblgit/UNA-ThePitbull-21.4-v1 | 77.66 | 77.9 | 91.81 | 68.07 | 79.24 | 87.29 | 61.64 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | afl-3.0 | 21 | 4 | true | 125288b68a54f1ec42877a53e6bbdcfbc5375e1d | true | true |
| 🔶 | HanNayeoniee/LHK_DPO_v1 | 77.62 | 74.74 | 89.3 | 64.9 | 79.89 | 88.32 | 68.54 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | mit | 12 | 0 | false | 4e2c0a8fb1a1654312a573e85fec79832bfa489c | true | true |
| 🔶 | saltlux/luxia-21.4b-alignment-v0.2 | 77.51 | 76.71 | 91.61 | 68.27 | 79.8 | 87.06 | 61.64 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 21 | 0 | false | 59243de958296a4516f72ebfb1b597188dd59229 | true | true |
| 🔶 | zhengr/MixTAO-7Bx2-MoE-v8.1 | 77.5 | 73.81 | 89.22 | 64.92 | 78.57 | 87.37 | 71.11 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | apache-2.0 | 12 | 44 | true | 2d8cff968dbfb31e0c1ccc42053ccc4d2698a390 | true | false |
| 💬 | yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B | 77.44 | 74.91 | 89.3 | 64.67 | 78.02 | 88.24 | 69.52 | 💬 chat models (RLHF, DPO, IFT, ...) | MixtralForCausalLM | Original | bfloat16 | true | mit | 12 | 52 | true | 915651208ea9f40c65a60d1f971a09f9461ee691 | true | false |
| 🔶 | HanNayeoniee/LHK_DPO_v1 | 77.43 | 74.74 | 89.37 | 64.87 | 79.88 | 88.16 | 67.55 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | float16 | true | mit | 12 | 0 | false | 4e2c0a8fb1a1654312a573e85fec79832bfa489c | true | true |
| 🔶 | JaeyeonKang/CCK_Asura_v1 | 77.43 | 73.89 | 89.07 | 75.44 | 71.75 | 86.35 | 68.08 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | cc-by-nc-4.0 | 68 | 0 | false | 7dd3ddea090bd63f3143e70d7d6237cc40c046e4 | true | true |
| 🔶 | fblgit/UNA-SimpleSmaug-34b-v1beta | 77.41 | 74.57 | 86.74 | 76.68 | 70.17 | 83.82 | 72.48 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 34 | 20 | true | e1cdc5b02c662c5f29a50d0b22c64a8902ca856b | true | true |
| 🔶 | TomGrc/FusionNet_34Bx2_MoE_v0.1 | 77.38 | 73.72 | 86.46 | 76.72 | 71.01 | 83.35 | 73.01 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | mit | 60 | 7 | true | 6c7ec6d2ca1c0d126a26963fedc9bbdf5210b0d1 | true | false |
| 💬 | shenzhi-wang/Llama3-70B-Chinese-Chat | 77.34 | 70.39 | 85.81 | 79.74 | 61.1 | 83.74 | 83.24 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 82 | true | 9820f8e02b5b091dc5ebbb6442f83ea6a0db4205 | true | true |
| 💬 | TwT-6/cr-model-v1 | 77.32 | 70.65 | 87.85 | 74.73 | 80.47 | 83.66 | 66.57 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | cc-by-4.0 | 14 | 0 | true | 4b9fdd5c5f6efe32c6cb1b7636c897610c9d8b65 | true | true |
| 🔶 | saltlux/luxia-21.4b-alignment-v0.1 | 77.32 | 76.79 | 91.79 | 68.18 | 76.7 | 87.53 | 62.93 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 21 | 0 | false | 88a47c498102132f5262581803fe1ed9252a16bc | true | true |
| 🔶 | migtissera/Tess-72B-v1.5b | 77.3 | 71.25 | 85.53 | 76.63 | 71.99 | 81.45 | 76.95 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | other | 72 | 16 | true | dc092ecc5d5a424678eac445a9f4443069776691 | true | true |
| 💬 | moreh/MoMo-72B-lora-1.8.6-DPO | 77.29 | 70.14 | 86.03 | 77.4 | 69 | 84.37 | 76.8 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | mit | 72 | 32 | true | 76389d5d825c3743cc70bc75b902bbfdad11beba | true | true |
| 🔶 | abacusai/Smaugv0.1 | 77.29 | 74.23 | 86.76 | 76.66 | 70.22 | 83.66 | 72.18 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 34 | 0 | false | 036927bc2b54d408bb9e9357c3df8353f5853ea8 | true | true |
| 🔶 | abacusai/Smaug-34B-v0.1 | 77.29 | 74.23 | 86.76 | 76.66 | 70.22 | 83.66 | 72.18 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | other | 34 | 55 | true | 7b74a95019f01b59630cbd6469814c752d0e59e5 | true | true |
| 🔶 | cloudyu/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE | 77.28 | 72.87 | 86.52 | 76.96 | 73.28 | 83.19 | 70.89 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | mit | 60 | 4 | true | 097b951c2524e6113252fcd98ba5830c85dc450f | true | false |
| 🤝 | louisbrulenaudet/Maxine-34B-stock | 77.28 | 74.06 | 86.74 | 76.62 | 70.18 | 83.9 | 72.18 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 34 | 3 | false | 5d87d746433f6eaddf34fd1dbdeed859b15348aa | true | true |
| 🔶 | jefferylovely/MoeLovely-13B | 77.25 | 73.72 | 89.49 | 64.78 | 78.74 | 87.61 | 69.14 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | float16 | true | cc-by-nc-nd-4.0 | 12 | 0 | false | ac4f0ad8a665eb6b54c286810a9b4551b0bcdc25 | true | false |
| 🔶 | saltlux/luxia-21.4b-alignment-v0.4 | 77.23 | 76.88 | 91.83 | 68.06 | 76.72 | 87.21 | 62.7 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 21 | 0 | false | 4c4342a9c3e8e793a0969b74222d887d53cb294e | true | true |
| 🔶 | ibivibiv/orthorus-125b-v2 | 77.22 | 73.63 | 89.04 | 75.99 | 70.19 | 85.48 | 68.99 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | float16 | true | apache-2.0 | 125 | 4 | true | 95b3b4e432d98b804d64cfe42dd9fa6b67198e5b | true | false |
| 🔶 | ConvexAI/Luminex-34B-v0.2 | 77.19 | 74.49 | 86.76 | 76.55 | 70.21 | 83.27 | 71.87 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | other | 34 | 11 | true | 3880710724abcaffbdf8fa4031e1d02066fbfe9d | true | true |
| 🔶 | senseable/Wilbur-30B | 77.18 | 74.06 | 86.68 | 76.7 | 69.96 | 83.43 | 72.25 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | apache-2.0 | 34 | 0 | false | eab679f95e078efb71fbaa7b1aa0be05bb4e46ca | true | true |
| 🤝 | RubielLabarta/LogoS-7Bx2-MoE-13B-v0.2 | 77.15 | 74.4 | 89.09 | 64.9 | 74.53 | 88.4 | 71.57 | 🤝 base merges and moerges | MixtralForCausalLM | Original | bfloat16 | false | apache-2.0 | 12 | 10 | true | 354f0eb0a1299473c861c0505c2ede04ced90972 | true | false |
| 🔶 | RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1 | 77.14 | 74.49 | 89.07 | 64.74 | 74.57 | 88.32 | 71.65 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | apache-2.0 | 12 | 0 | false | 1e4670ddb878fa696f2e6293a4db9d8657993fd8 | true | false |
| 🔶 | yunconglong/DARE_TIES_13B | 77.1 | 74.32 | 89.5 | 64.47 | 78.66 | 88.08 | 67.55 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | ['other'] | 12 | 10 | true | 74c6e4fbd272c9d897e8c93ee7de8a234f61900f | true | false |
| 🔶 | yunconglong/13B_MATH_DPO | 77.08 | 74.66 | 89.51 | 64.53 | 78.63 | 88.08 | 67.1 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | other | 12 | 1 | true | 96c62ad90f2b82016a1cdbfe96cfa5c4bb278e21 | true | false |
| 🔶 | TomGrc/FusionNet_34Bx2_MoE | 77.07 | 72.95 | 86.22 | 77.05 | 71.31 | 83.98 | 70.89 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | mit | 60 | 7 | true | c5575550053c84a401baf56174cb2e5d5bd9e79a | true | false |
| 🔶 | ConvexAI/Luminex-34B-v0.1 | 77.06 | 73.63 | 86.59 | 76.55 | 69.68 | 83.43 | 72.48 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | other | 34 | 8 | true | d3efc551679d7ec00da14722d44151c948a48d25 | true | true |
| 🔶 | yunconglong/MoE_13B_DPO | 77.05 | 74.32 | 89.39 | 64.48 | 78.47 | 88 | 67.63 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | other | 12 | 5 | true | d8d6a47f877fee3e638a158c2bd637c0013ed4e4 | true | false |
| 🔶 | JaeyeonKang/CCK_Asura_v3.0 | 77.03 | 72.95 | 88.86 | 75.41 | 69.1 | 85.08 | 70.81 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | cc-by-nc-4.0 | 68 | 0 | false | 06fd0e293aeb3b2722e3910daefcd185fad4558c | true | true |
| 🔶 | 4season/alignment_model_test | 76.97 | 78.24 | 89.68 | 68.08 | 80.88 | 86.5 | 58.45 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | false | apache-2.0 | 21 | 0 | true | 791a326ee0f6d5246962039803fd79b28608e54c | true | true |
| 🔶 | cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO | 76.95 | 73.21 | 86.11 | 75.44 | 72.78 | 82.95 | 71.19 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | 4bit | true | other | 31 | 1 | true | 331bb6bdba4140bbf0031bd37076f2c8a76d7dbb | true | false |
| 🔶 | NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt | 76.88 | 71.33 | 86.28 | 80.03 | 58.81 | 84.77 | 80.06 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | cc-by-nc-4.0 | 70 | 15 | true | 60d97fcfb259f1e9ba57b9880b14a40590bb0350 | true | true |
| 🤝 | automerger/YamshadowExperiment28-7B | 76.86 | 73.29 | 89.25 | 64.38 | 78.53 | 85.24 | 70.51 | 🤝 base merges and moerges | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 19 | true | b8f628c51f138538afc4c3d0d7dbcbab523c3b7a | true | true |
| 🤝 | Kquant03/CognitiveFusion2-4x7B-BF16 | 76.86 | 73.38 | 89.18 | 64.32 | 78.12 | 84.93 | 71.27 | 🤝 base merges and moerges | MixtralForCausalLM | Original | bfloat16 | false | apache-2.0 | 24 | 3 | true | a6df0928520ffdeb7f041ee84a56f316c30ca913 | true | false |
| 🤝 | alchemonaut/QuartetAnemoi-70B-t0.0001 | 76.86 | 73.38 | 88.9 | 75.42 | 69.53 | 85.32 | 68.61 | 🤝 base merges and moerges | LlamaForCausalLM | Original | float16 | false | other | 68 | 29 | true | 392d963e63267650f2aea7dc26c60ee6fd2b26d4 | true | true |
| 🔶 | SF-Foundation/TextBase-7B-v0.1 | 76.84 | 73.89 | 90.27 | 64.78 | 78.13 | 86.03 | 67.93 | 🔶 fine-tuned on domain-specific datasets | MistralForCausalLM | Original | float16 | true | cc-by-nc-sa-4.0 | 7 | 0 | false | 40ea1e766860c831152653358beb3b7991a37af7 | true | true |
| 🤝 | liminerity/Multiverse-Experiment-slerp-7b | 76.82 | 72.87 | 89.15 | 64.5 | 77.93 | 84.77 | 71.72 | 🤝 base merges and moerges | MistralForCausalLM | Original | float16 | true | apache-2.0 | 7 | 0 | false | 2103c07a06ff4d6e7f4c031b98d4c1a455690436 | true | true |
| 🟩 | liminerity/M7-7b | 76.82 | 72.87 | 89.15 | 64.5 | 77.93 | 84.77 | 71.72 | 🟩 continuously pretrained | MistralForCausalLM | Original | float16 | false | apache-2.0 | 7 | 15 | true | 23497a39fe5d290494fad49e5b8077f76440ad11 | true | true |
| 🤝 | allknowingroger/MultiverseEx26-7B-slerp | 76.8 | 72.95 | 89.17 | 64.36 | 78.12 | 85.16 | 71.04 | 🤝 base merges and moerges | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 1 | true | 43f18d84e025693f00e9be335bf12fce96089b2f | true | true |
| 🔶 | Kukedlc/NeuralSynthesis-7B-v0.1 | 76.8 | 73.04 | 89.18 | 64.37 | 78.15 | 85.24 | 70.81 | 🔶 fine-tuned on domain-specific datasets | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 3 | true | 6cc3389eb2c1968e8b1355ee90135b9c769b4fa0 | true | true |
| 🤝 | AurelPx/Percival_01-7b-slerp | 76.79 | 73.21 | 89.16 | 64.42 | 77.97 | 85.08 | 70.89 | 🤝 base merges and moerges | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 3 | true | 6d415ca49b7717b8e851ae3271f569e83d4de589 | true | true |
| 🤝 | shyamieee/J4RVIZ-v6.0 | 76.78 | 73.29 | 89.15 | 64.41 | 77.87 | 85 | 70.96 | 🤝 base merges and moerges | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 0 | true | cbbb7b37ac2318b473f059a32a508e89ad5c26e9 | true | true |
| 🤝 | LewisDeBenoisIV/Jason1903_SLERP | 76.77 | 73.12 | 89.13 | 64.43 | 78.13 | 85.08 | 70.74 | 🤝 base merges and moerges | MistralForCausalLM | Original | float16 | true | apache-2.0 | 7 | 0 | false | ea187cf89f44197d9007798316a087bc63286227 | true | true |
| 🤝 | automerger/Ognoexperiment27Multi_verse_model-7B | 76.77 | 72.95 | 89.29 | 64.39 | 78.04 | 84.85 | | | | | | | | | | | | | |
71.11
🤝 base merges and moerges
MistralForCausalLM
Original
bfloat16
false
apache-2.0
7
0
true
7eb7e390625ec0ca13a11c8977b9710d2316451f
true
true
🤝
Infinimol/miiqu-f16
76.77
72.87
88.97
75.99
69.37
85.56
67.85
🤝 base merges and moerges
LlamaForCausalLM
Original
float16
false
other
90
11
true
395d6398cb2ab71621a43f5f5df8994de9c46175
true
true
🤝
shyamieee/B3E3-SLM-7b-v3.0
76.76
73.04
89.14
64.48
78.2
85
70.74
🤝 base merges and moerges
MistralForCausalLM
Original
bfloat16
false
apache-2.0
7
0
true
2eb74c7e22dde18a1f41c187ec4b24d02ec0cb01
true
true
🔶
Kukedlc/NeuralSynthesis-7b-v0.4-slerp
76.76
73.21
89.14
64.28
78.07
84.85
71.04
🔶 fine-tuned on domain-specific datasets
MistralForCausalLM
Original
bfloat16
false
apache-2.0
7
0
true
7dc00cb312bddce98224d5e07bd56db7f110ffa4
true
true
💬
BarraHome/Mistroll-7B-v2.2
76.76
72.78
89.16
64.35
78.1
85
71.19
💬 chat models (RLHF, DPO, IFT, ...)
MistralForCausalLM
Original
bfloat16
true
mit
7
8
true
4869d62c238e828d6afdff2f22b928d41bae8578
true
true
🔶
JaeyeonKang/CCK_Asura_v1.1.0
76.75
73.21
88.55
75.43
69.55
85.32
68.46
🔶 fine-tuned on domain-specific datasets
LlamaForCausalLM
Original
float16
true
cc-by-nc-4.0
68
0
false
baf3e2cc3a8d18098199b3cee4bdf79f00935be1
true
true
🤝
nlpguy/T3QM7
76.75
73.12
89.14
64.48
77.96
85.08
70.74
🤝 base merges and moerges
MistralForCausalLM
Original
float16
false
apache-2.0
7
0
true
fa6bd0d1019345cddabd90127c6a8f524a0d7a67
true
true
🔶
ValiantLabs/Llama3-70B-Fireplace
76.75
70.65
85
78.97
59.77
82.48
83.62
🔶 fine-tuned on domain-specific datasets
LlamaForCausalLM
Original
float16
true
llama3
70
3
true
220079e4115733991eb19c30d5480db9696a665e
true
true
🔶
bardsai/jaskier-7b-dpo-v7.1
76.74
73.38
89.28
64.37
78.28
85.24
69.9
🔶 fine-tuned on domain-specific datasets
MistralForCausalLM
Original
float16
true
apache-2.0
7
0
false
305544e9edd98253540141e91653d308e9b135cc
true
true
🔶
yam-peleg/Experiment26-7B
76.74
73.38
89.15
64.32
78.24
84.93
70.43
🔶 fine-tuned on domain-specific datasets
MistralForCausalLM
Original
float16
true
apache-2.0
7
78
true
bbaef291e93a7f6c9f8cb76a4dbd8c3c054d3f3c
true
true
🤝
Undi95/Miqu-MS-70B
76.74
73.29
88.63
75.48
69.32
85.71
68.01
🤝 base merges and moerges
LlamaForCausalLM
Original
bfloat16
false
cc-by-nc-4.0
68
7
true
2aa17f8d8aadc2c8bf2aed438a6714fe3dbd9794
true
true
🔶
MTSAIR/multi_verse_model
76.74
72.87
89.2
64.4
77.92
84.77
71.27
🔶 fine-tuned on domain-specific datasets
MistralForCausalLM
Original
bfloat16
true
apache-2.0
7
32
true
a4ca706d1bbc263b95e223a80ad68b0f125840b3
true
true
🟩
ammarali32/multi_verse_model
76.74
72.87
89.2
64.4
77.92
84.77
71.27
🟩 continuously pretrained
MistralForCausalLM
Original
bfloat16
true
apache-2.0
7
0
false
e2aa6fdad0b28a6019b0fc7c178a3579c3d671e8
true
true
🔶
MiniMoog/Mergerix-7b-v0.3
76.73
72.87
89.14
64.44
78.01
84.93
71.04
🔶 fine-tuned on domain-specific datasets
MistralForCausalLM
Original
bfloat16
false
apache-2.0
7
0
true
680449fa566aa5fe1845c40b28eae05659c417f0
true
true
🤝
louisbrulenaudet/Maxine-7B-0401-stock
76.73
73.12
89.13
64.42
78.07
85
70.66
🤝 base merges and moerges
MistralForCausalLM
Original
bfloat16
true
apache-2.0
7
1
false
a23c75b9b6d9c47bdd106af999f6a33c981e2bd6
true
true
🤝
automerger/Experiment27Pastiche-7B
76.73
73.04
89.08
64.2
79.31
85.4
69.37
🤝 base merges and moerges
MistralForCausalLM
Original
bfloat16
false
apache-2.0
7
0
true
f69af11ca954a3441cca023a9e1cb6bb8bf4eb66
true
true
🤝
cloudyu/Yi-34Bx2-MoE-60B
76.72
71.08
85.23
77.47
66.19
84.85
75.51
🤝 base merges and moerges
MixtralForCausalLM
Original
bfloat16
true
other
60
63
true
483359d70b3fef480cdaeb6d722a18626d34f0ce
false
false
🔶
MaziyarPanahi/MeliodasPercival_01_Experiment26T3q
76.72
73.04
89.17
64.48
78.28
84.93
70.43
🔶 fine-tuned on domain-specific datasets
MistralForCausalLM
Original
float16
false
apache-2.0
7
0
true
437c3830ec71b5f027cbe3bf1ce2b398f69e8406
true
true
🔶
nlpguy/T3QM7XP
76.71
73.04
89.12
64.45
78.06
85
70.58
🔶 fine-tuned on domain-specific datasets
MistralForCausalLM
Original
float16
false
apache-2.0
7
0
true
1da031f9fdf04ea93b04e0bba7672560ea9d6255
true
true
End of preview.
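The `Average ⬆️` column in these rows is the arithmetic mean of the six benchmark scores (ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8K), rounded to two decimals. A minimal sketch checking that relationship against two rows copied from the preview; the `leaderboard_average` helper is illustrative, not part of any leaderboard API:

```python
# Benchmark columns that feed the leaderboard's "Average" score.
BENCHMARKS = ["ARC", "HellaSwag", "MMLU", "TruthfulQA", "Winogrande", "GSM8K"]

# Two rows copied verbatim from the preview above.
rows = [
    {"Model": "ConvexAI/Luminex-34B-v0.1", "Average": 77.06,
     "ARC": 73.63, "HellaSwag": 86.59, "MMLU": 76.55,
     "TruthfulQA": 69.68, "Winogrande": 83.43, "GSM8K": 72.48},
    {"Model": "yunconglong/MoE_13B_DPO", "Average": 77.05,
     "ARC": 74.32, "HellaSwag": 89.39, "MMLU": 64.48,
     "TruthfulQA": 78.47, "Winogrande": 88.0, "GSM8K": 67.63},
]

def leaderboard_average(row):
    """Mean of the six benchmark columns, rounded to two decimals."""
    return round(sum(row[b] for b in BENCHMARKS) / len(BENCHMARKS), 2)

for row in rows:
    # Recomputed mean matches the reported "Average" column.
    assert leaderboard_average(row) == row["Average"], row["Model"]
```

Rows whose recomputed mean disagrees with the reported average would indicate a transcription error in the flattened preview.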

No dataset card yet

Downloads last month: 0