Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError Exception: DatasetGenerationCastError Message: An error occurred while generating the dataset All the data files must have the same columns, but at some point there are 150 new columns ({'config.environment.diffusers_commit', 'report.decode.latency.p99', 'report.prefill.latency.count', 'report.prefill.energy.total', 'report.decode.memory.unit', 'config.environment.python_version', 'report.decode.latency.stdev', 'config.scenario._target_', 'report.per_token.energy', 'report.prefill.memory.max_reserved', 'config.backend.hub_kwargs.force_download', 'report.per_token.memory', 'config.backend.hub_kwargs.local_files_only', 'report.decode.memory.max_ram', 'config.backend.autocast_dtype', 'config.backend.no_weights', 'config.launcher._target_', 'report.prefill.throughput.value', 'config.environment.optimum_version', 'config.environment.accelerate_commit', 'report.decode.latency.unit', 'config.backend.low_cpu_mem_usage', 'config.backend.seed', 'config.launcher.device_isolation', 'report.prefill.energy.ram', 'report.decode.latency.values', 'config.backend.deepspeed_inference', 'config.environment.transformers_version', 'report.decode.throughput.value', 'report.decode.energy.cpu', 'config.launcher.numactl', 'config.backend.torch_compile_target', 'report.prefill.latency.unit', 'report.prefill.latency.p99', 'config.backend.device_ids', 'report.decode.efficiency.value', 'report.decode.memory.max_process_vram', 'config.backend.device_map', 'report.decode.throughput.unit', 'config.backend.attn_implementation', 'config.scenario.duration', 'config.environment.timm_version', 'config.backend.hub_kwargs.trust_remote_code', 'report.per_token.latency.mean', 'config.scenario.iterat ... 
d.quantization_config.bits', 'config.name', 'config.backend.quantization_config.exllama_config.max_input_len', 'report.per_token.throughput.unit', 'report.prefill.energy.unit', 'config.launcher.start_method', 'config.backend.processor_kwargs.trust_remote_code', 'config.backend.autocast_enabled', 'config.environment.diffusers_version', 'config.environment.cpu_ram_mb', 'report.prefill.latency.stdev', 'config.scenario.generate_kwargs.min_new_tokens', 'report.per_token.latency.p99', 'config.backend.quantization_config.version', 'config.environment.transformers_commit', 'config.backend.peft_type', 'config.scenario.input_shapes.batch_size', 'config.backend.quantization_config.exllama_config.version', 'report.prefill.latency.p95', 'config.backend.library', 'config.backend.to_bettertransformer', 'report.decode.energy.unit', 'config.backend.torch_dtype', 'report.per_token.latency.count', 'report.decode.latency.p50', 'config.backend.intra_op_num_threads', 'report.traceback', 'report.prefill.latency.p90', 'report.prefill.memory.max_allocated', 'report.per_token.latency.p95', 'report.per_token.latency.p90', 'config.environment.optimum_benchmark_version', 'config.backend.inter_op_num_threads', 'config.scenario.input_shapes.num_choices', 'report.per_token.latency.total', 'config.environment.cpu_count', 'config.environment.cpu', 'report.decode.memory.max_allocated', 'config.backend.quantization_config.exllama_config.max_batch_size', 'report.prefill.memory.max_ram', 'config.backend.version'}) and 22 missing columns ({'Flagged', 'TruthfulQA', 'Merged', 'T', 'Type', 'Architecture', 'HellaSwag', 'GSM8K', 'Model sha', 'ARC', 'date', 'MoE', 'Winogrande', 'Hub ❤️', 'Hub License', 'Average ⬆️', 'Available on the hub', 'Model', 'MMLU', 'Weight type', 'Precision', '#Params (B)'}). 
This happened while the csv dataset builder was generating data using hf://datasets/optimum-benchmark/llm-perf-leaderboard/perf-df-awq-1xA10.csv (at revision e8cbd26d5bd3b75e041cceb7e727c48d8781e44b) Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations) Traceback: Traceback (most recent call last): File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single writer.write_table(table) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table pa_table = table_cast(pa_table, self._schema) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast return cast_table_to_schema(table, schema) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema raise CastError( datasets.table.CastError: Couldn't cast config.name: string config.backend.name: string config.backend.version: string config.backend._target_: string config.backend.task: string config.backend.library: string config.backend.model: string config.backend.processor: string config.backend.device: string config.backend.device_ids: int64 config.backend.seed: int64 config.backend.inter_op_num_threads: double config.backend.intra_op_num_threads: double config.backend.model_kwargs.trust_remote_code: bool config.backend.processor_kwargs.trust_remote_code: bool config.backend.hub_kwargs.trust_remote_code: bool config.backend.no_weights: bool config.backend.device_map: double config.backend.torch_dtype: string config.backend.eval_mode: bool config.backend.to_bettertransformer: bool config.backend.low_cpu_mem_usage: double config.backend.attn_implementation: string config.backend.cache_implementation: double config.backend.autocast_enabled: bool 
config.backend.autocast_dtype: double config.backend.torch_compile: bool config.backend.torch_compile_target: string config.backend.quantization_scheme: string config.backend.quantization_config.bits: int64 config.backend.quantization_config.version: string config.backend.deepspeed_inference: bool config.backend.peft_type: double config.scenario.name: string config.scenario._target_: string config.scenario.iterations: int64 config.scenario.duration: int64 config.scenario.warmup_runs: int64 config.scenario.input_shapes.batch_size: int64 config.scenario.input_shapes.num_choices: int64 co ... .latency.p50: double report.decode.latency.p90: double report.decode.latency.p95: double report.decode.latency.p99: double report.decode.latency.values: string report.decode.throughput.unit: string report.decode.throughput.value: double report.decode.energy.unit: string report.decode.energy.cpu: double report.decode.energy.ram: double report.decode.energy.gpu: double report.decode.energy.total: double report.decode.efficiency.unit: string report.decode.efficiency.value: double report.per_token.memory: double report.per_token.latency.unit: string report.per_token.latency.count: double report.per_token.latency.total: double report.per_token.latency.mean: double report.per_token.latency.stdev: double report.per_token.latency.p50: double report.per_token.latency.p90: double report.per_token.latency.p95: double report.per_token.latency.p99: double report.per_token.latency.values: string report.per_token.throughput.unit: string report.per_token.throughput.value: double report.per_token.energy: double report.per_token.efficiency: double config.backend.hub_kwargs.revision: string config.backend.hub_kwargs.force_download: bool config.backend.hub_kwargs.local_files_only: bool config.backend.quantization_config.exllama_config.version: double config.backend.quantization_config.exllama_config.max_input_len: double config.backend.quantization_config.exllama_config.max_batch_size: double -- 
schema metadata -- pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 24248 to {'T': Value(dtype='string', id=None), 'Model': Value(dtype='string', id=None), 'Average ⬆️': Value(dtype='float64', id=None), 'ARC': Value(dtype='float64', id=None), 'HellaSwag': Value(dtype='float64', id=None), 'MMLU': Value(dtype='float64', id=None), 'TruthfulQA': Value(dtype='float64', id=None), 'Winogrande': Value(dtype='float64', id=None), 'GSM8K': Value(dtype='float64', id=None), 'Type': Value(dtype='string', id=None), 'Architecture': Value(dtype='string', id=None), 'Weight type': Value(dtype='string', id=None), 'Precision': Value(dtype='string', id=None), 'Merged': Value(dtype='bool', id=None), 'Hub License': Value(dtype='string', id=None), '#Params (B)': Value(dtype='int64', id=None), 'Hub ❤️': Value(dtype='int64', id=None), 'Available on the hub': Value(dtype='bool', id=None), 'Model sha': Value(dtype='string', id=None), 'Flagged': Value(dtype='bool', id=None), 'MoE': Value(dtype='bool', id=None), 'date': Value(dtype='string', id=None)} because column names don't match During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1317, in compute_config_parquet_and_info_response parquet_operations = convert_to_parquet(builder) File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 932, in convert_to_parquet builder.download_and_prepare( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare self._download_and_prepare( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split for job_id, done, content in 
self._prepare_split_single( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single raise DatasetGenerationCastError.from_cast_error( datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset [same cast-error message and guidance as above]
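The column comparison behind this error can be reproduced locally with the standard library alone. A minimal sketch — the two file contents below are invented stand-ins (a leaderboard-style CSV vs. a benchmark-report-style CSV); only the new/missing set arithmetic mirrors what the builder reports:

```python
import csv
import io

# Invented stand-ins for two data files with incompatible schemas,
# echoing the leaderboard columns vs. the benchmark-report columns
# named in the error message above.
leaderboard_csv = "Model,ARC,MMLU\nsome-org/some-model,78.5,77.81\n"
report_csv = "config.name,report.decode.latency.p50\nrun-1,0.02\n"

def header(text):
    """Return the set of column names in a CSV's first row."""
    return set(next(csv.reader(io.StringIO(text))))

reference = header(leaderboard_csv)   # schema inferred from the first file
other = header(report_csv)            # schema of a later file

new_columns = other - reference       # "new columns" in the error message
missing_columns = reference - other   # "missing columns"

print(f"{len(new_columns)} new columns: {sorted(new_columns)}")
print(f"{len(missing_columns)} missing columns: {sorted(missing_columns)}")
```

The real repository hits exactly this check with 150 new and 22 missing columns, because benchmark-report CSVs and the leaderboard CSV share no schema.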
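The fix the message suggests — separating files with different schemas into different configurations — amounts to grouping the repository's data files by their header row. A sketch with invented file names and contents:

```python
import csv
import io
from collections import defaultdict

# Invented stand-ins for data files in one repository; the real
# repository mixes leaderboard CSVs with benchmark-report CSVs.
files = {
    "open-llm-leaderboard.csv": "Model,ARC,MMLU\nsome-org/some-model,78.5,77.81\n",
    "perf-report-a.csv": "config.name,report.traceback\nrun-1,\n",
    "perf-report-b.csv": "config.name,report.traceback\nrun-2,\n",
}

# Group files by their exact column tuple; each group can then be
# listed as its own configuration (e.g. under `configs:` in the
# dataset card) so the builder never mixes schemas.
groups = defaultdict(list)
for name, text in files.items():
    columns = tuple(next(csv.reader(io.StringIO(text))))
    groups[columns].append(name)

for columns, names in groups.items():
    print(columns, "->", names)
```

Each resulting group shares one schema and can be cast to a single Arrow table without the `CastError` above.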
T | Model | Average ⬆️ | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | Type | Architecture | Weight type | Precision | Merged | Hub License | #Params (B) | Hub ❤️ | Available on the hub | Model sha | Flagged | MoE | date |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
string | string | float64 | float64 | float64 | float64 | float64 | float64 | float64 | string | string | string | string | bool | string | int64 | int64 | bool | string | bool | bool | string |
🤝 | paloalma/Le_Triomphant-ECE-TW3 | 81.31 | 78.5 | 90.3 | 77.81 | 75.84 | 85.56 | 79.83 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | apache-2.0 | 72 | 2 | true | aa4d9084fcfb69afff6b2bac5c1350bf29a159cb | true | true | 2024-05-05T12:22:44Z |
🔶 | SF-Foundation/Ein-70B-v2 | 81.29 | 79.86 | 91.49 | 78.05 | 75.14 | 87.77 | 75.44 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | null | 72 | 0 | false | a22e7ff7cc1511e6c513d0883bcb7bb2d4307e11 | true | true | 2024-04-29T14:54:38Z |
🔶 | freewheelin/free-evo-qwen72b-v0.8-re | 81.28 | 79.86 | 91.34 | 78 | 74.85 | 87.77 | 75.89 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | false | mit | 72 | 4 | true | df20836951a07c52d4aacc668fca3143429d485c | true | true | 2024-05-05T07:26:59Z |
🔶 | freewheelin/free-evo-qwen72b-v0.8 | 81.28 | 79.86 | 91.34 | 78 | 74.85 | 87.77 | 75.89 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | null | 72 | 0 | false | 7169478b57edff434bd943be28415ea9fc2cf1e0 | true | true | 2024-05-03T03:54:54Z |
🔶 | davidkim205/Rhea-72b-v0.5 | 81.22 | 79.78 | 91.15 | 77.95 | 74.5 | 87.85 | 76.12 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | apache-2.0 | 72 | 117 | true | fda5cf998a0f2d89b53b5fa490793e3e50bb8239 | true | true | 2024-03-22T15:04:33Z |
💬 | Contamination/contaminated_proof_7b_v1.0 | 81.14 | 78.07 | 90.22 | 78.92 | 82.29 | 88.16 | 69.14 | 💬 chat models (RLHF, DPO, IFT, ...) | MistralForCausalLM | Original | float16 | true | unknown | 7 | 4 | true | b1415875faed65cd29fd804941f5dcf835e99608 | false | true | 2024-03-29T09:34:19Z |
💬 | Contamination/contaminated_proof_7b_v1.0_safetensor | 81.14 | 78.07 | 90.22 | 78.92 | 82.29 | 88.16 | 69.14 | 💬 chat models (RLHF, DPO, IFT, ...) | MistralForCausalLM | Original | float16 | true | unknown | 7 | 11 | true | 5d7fcb3724d6b08cf82e1b0c1faa1695b9fd6932 | false | true | 2024-04-02T01:59:03Z |
🔶 | davidkim205/Rhea-72b-v0.4 | 81.09 | 78.5 | 90.75 | 78.01 | 73.91 | 86.74 | 78.62 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | null | 72 | 0 | false | 5502123c46485914a580d6794eeb5fb3554b46aa | true | true | 2024-03-22T15:04:15Z |
💬 | MTSAIR/MultiVerse_70B | 81 | 78.67 | 89.77 | 78.22 | 75.18 | 87.53 | 76.65 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | other | 72 | 32 | true | ea2b4ff8e5acd7a48993f56b2d7b99e049eb6939 | true | true | 2024-03-26T06:31:14Z |
🔶 | binbi/Ein-72B-v0.1 | 80.99 | 76.45 | 89.43 | 77.14 | 78.09 | 84.77 | 80.06 | 🔶 fine-tuned on domain-specific datasets | ? | Adapter | bfloat16 | true | null | 72 | 0 | false | 84ec4c0fcefc5af86f649a70c9d3ff493334e868 | true | true | 2024-02-04T01:01:02Z |
🔶 | MTSAIR/MultiVerse_70B | 80.98 | 78.58 | 89.74 | 78.27 | 75.09 | 87.37 | 76.8 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | other | 72 | 32 | true | ea2b4ff8e5acd7a48993f56b2d7b99e049eb6939 | true | true | 2024-03-27T09:27:48Z |
🔶 | davidkim205/Rhea-72b-v0.2 | 80.95 | 77.56 | 90.84 | 77.98 | 74.5 | 86.35 | 78.47 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | null | 72 | 0 | false | c51bcf1a3dc3c5e512e805f52d5e15384d798ba7 | true | true | 2024-03-19T09:05:23Z |
🔶 | davidkim205/Rhea-72b-v0.3 | 80.85 | 76.79 | 89.98 | 77.47 | 75.93 | 85.08 | 79.83 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | null | 72 | 0 | false | 7db39c93177958d94ebc3b719f8bfc75826b345e | true | true | 2024-03-22T15:05:55Z |
🔶 | SF-Foundation/Ein-72B-v0.11 | 80.81 | 76.79 | 89.02 | 77.2 | 79.02 | 84.06 | 78.77 | 🔶 fine-tuned on domain-specific datasets | ? | Adapter | bfloat16 | true | null | 72 | 0 | false | 40d451f32b1a6c9ad694b32ba8ed4822c27f3022 | true | true | 2024-02-11T02:46:31Z |
🔶 | SF-Foundation/Ein-72B-v0.13 | 80.79 | 76.19 | 89.44 | 77.07 | 77.82 | 84.93 | 79.3 | 🔶 fine-tuned on domain-specific datasets | ? | Adapter | bfloat16 | true | null | 72 | 0 | false | 1f302e0e15f3d3711778cd61686eb9b28b0c72ae | true | true | 2024-02-12T04:12:05Z |
🔶 | binbi/Ein-72B-v0.1 | 80.79 | 76.54 | 89.2 | 77.11 | 78.47 | 84.06 | 79.38 | 🔶 fine-tuned on domain-specific datasets | ? | Adapter | float16 | true | null | 72 | 0 | false | 84ec4c0fcefc5af86f649a70c9d3ff493334e868 | true | true | 2024-02-04T00:59:30Z |
🔶 | SF-Foundation/Ein-72B-v0.12 | 80.72 | 76.19 | 89.46 | 77.17 | 77.78 | 84.45 | 79.23 | 🔶 fine-tuned on domain-specific datasets | ? | Adapter | bfloat16 | true | null | 72 | 0 | false | 84d38e29fec0dc9c274237968fdafe9396702f9b | true | true | 2024-02-11T23:02:50Z |
🔶 | abacusai/Smaug-72B-v0.1 | 80.48 | 76.02 | 89.27 | 77.15 | 76.67 | 85.08 | 78.7 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | other | 72 | 452 | true | 54a8c35600ec5cb30ca2129247854ece23e57f57 | true | true | 2024-02-03T18:49:25Z |
🔶 | ibivibiv/alpaca-dragon-72b-v1 | 79.3 | 73.89 | 88.16 | 77.4 | 72.69 | 86.03 | 77.63 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | other | 72 | 23 | true | 4df251a558c53b6b6a4c459045b161951cfc3c4e | true | true | 2024-02-06T23:43:35Z |
💬 | mistralai/Mixtral-8x22B-Instruct-v0.1 | 79.15 | 72.7 | 89.08 | 77.77 | 68.14 | 85.16 | 82.03 | 💬 chat models (RLHF, DPO, IFT, ...) | MixtralForCausalLM | Original | bfloat16 | true | apache-2.0 | 140 | 600 | true | eb69dca9c68bbdcffd5f522f632d5c04ab6c65b3 | true | false | 2024-04-17T15:30:22Z |
💬 | MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2 | 78.96 | 72.53 | 86.22 | 80.41 | 63.57 | 82.79 | 88.25 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 8 | true | 0ef6aba21c4537fe693c4160b820efb28270705b | true | true | 2024-05-01T17:04:08Z |
💬 | MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4 | 78.89 | 72.61 | 86.03 | 80.5 | 63.26 | 83.58 | 87.34 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 10 | true | 5a44e1d115e991a9814b9dd96fa60132ced9b99f | true | true | 2024-05-01T17:05:18Z |
💬 | MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.3 | 78.74 | 72.35 | 86 | 80.47 | 63.45 | 82.95 | 87.19 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 2 | true | 17f4cce3f08bc798516839315b07f0c8e05d6611 | true | true | 2024-05-01T17:04:37Z |
💬 | mmnga/Llama-3-70B-japanese-suzume-vector-v0.1 | 78.6 | 72.35 | 85.81 | 80.28 | 62.93 | 82.79 | 87.41 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 2 | true | 16f98b2d45684af2c4a9ff5da75b00ef13cca808 | true | true | 2024-05-05T10:30:19Z |
💬 | moreh/MoMo-72B-lora-1.8.7-DPO | 78.55 | 70.82 | 85.96 | 77.13 | 74.71 | 84.06 | 78.62 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | mit | 72 | 67 | true | c64edea08b27be1e7e2ae6a95bcdd74849cb887e | true | true | 2024-01-22T00:16:35Z |
💬 | tenyx/Llama3-TenyxChat-70B | 78.4 | 72.1 | 86.21 | 80.04 | 62.85 | 82.95 | 86.28 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 59 | true | de770dc2c767b50b17bef491ec6983c29e60f668 | true | false | 2024-04-27T17:40:55Z |
🔶 | failspy/llama-3-70B-Instruct-abliterated | 78.26 | 72.01 | 86.02 | 79.97 | 63.15 | 83.11 | 85.29 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | llama3 | 70 | 58 | true | 53ae9dafe8b3d163e05d75387575f8e9f43253d0 | true | true | 2024-05-08T05:23:55Z |
💬 | abhishek/autotrain-llama3-70b-orpo-v2 | 78.17 | 70.9 | 86.09 | 80.07 | 62.82 | 84.93 | 84.23 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | float16 | true | other | 70 | 1 | true | a2c16a8a7fa48792eb8a1f0c50e13309c2021a63 | true | true | 2024-05-04T21:00:31Z |
🔶 | 4season/final_model_test_v2 | 78.14 | 77.73 | 90.86 | 67.86 | 79.16 | 86.27 | 66.94 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | null | 21 | 0 | false | cf690c35d9cf0b0b6bf034fa16dbf88c56fe861c | true | true | 2024-05-20T17:16:00Z |
🔶 | saltlux/luxia-21.4b-alignment-v1.2 | 78.14 | 77.73 | 90.86 | 67.86 | 79.16 | 86.27 | 66.94 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 21 | 4 | true | e318e0a864db847b4020cbc8d23035dae08522ab | true | true | 2024-05-27T14:08:06Z |
💬 | MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.1 | 78.11 | 71.67 | 85.83 | 80.12 | 62.11 | 82.87 | 86.05 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 8 | true | 99d755d89cfbb28f19179d07f02876720646a767 | true | true | 2024-04-26T19:35:47Z |
🔶 | abhishek/autotrain-llama3-70b-orpo-v1 | 78.08 | 70.65 | 85.99 | 80.11 | 61.78 | 84.29 | 85.67 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | other | 70 | 3 | true | 053236c6846cc561c1503ba05e2b28c94855a432 | true | true | 2024-05-03T07:57:16Z |
🔶 | failspy/llama-3-70B-Instruct-abliterated | 78.08 | 71.84 | 86.04 | 79.8 | 63.18 | 82.4 | 85.22 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 58 | true | 53ae9dafe8b3d163e05d75387575f8e9f43253d0 | true | true | 2024-05-07T19:45:27Z |
🔶 | cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16 | 77.91 | 74.06 | 86.74 | 76.65 | 72.24 | 83.35 | 74.45 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | other | 60 | 15 | true | cd29cfa124072c96ba8601230bead65d76e04dcb | true | false | 2024-02-03T13:36:59Z |
💬 | meta-llama/Meta-Llama-3-70B-Instruct | 77.88 | 71.42 | 85.69 | 80.06 | 61.81 | 82.87 | 85.44 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | float16 | true | llama3 | 70 | 1,131 | true | 5fcb2901844dde3111159f24205b71c25900ffbd | true | true | 2024-04-18T17:05:16Z |
🔶 | 4season/merge_model_test_v2 | 77.82 | 79.35 | 89.75 | 67.89 | 71.58 | 86.58 | 71.8 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | false | apache-2.0 | 21 | 0 | true | e9542d2e5f8ede339a2917b37f2c570f2847becc | true | true | 2024-05-20T06:12:52Z |
🔶 | fblgit/UNA-ThePitbull-21.4B-v2 | 77.82 | 77.73 | 91.79 | 68.25 | 78.24 | 87.37 | 63.53 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | afl-3.0 | 21 | 10 | true | 6f59176110b23838a01fc401512df3ada96e9557 | true | true | 2024-05-28T12:13:22Z |
🔶 | saltlux/luxia-21.4b-alignment-v1.0 | 77.74 | 77.47 | 91.88 | 68.1 | 79.17 | 87.45 | 62.4 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 21 | 32 | true | ba3403eaafc6d1f6e3a73245314ee96025c08d96 | true | true | 2024-03-11T03:09:26Z |
🔶 | fblgit/UNA-ThePitbull-21.4-v1 | 77.66 | 77.9 | 91.81 | 68.07 | 79.24 | 87.29 | 61.64 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | afl-3.0 | 21 | 5 | true | 125288b68a54f1ec42877a53e6bbdcfbc5375e1d | true | true | 2024-05-27T06:01:02Z |
🔶 | HanNayeoniee/LHK_DPO_v1 | 77.62 | 74.74 | 89.3 | 64.9 | 79.89 | 88.32 | 68.54 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | null | 12 | 0 | false | 4e2c0a8fb1a1654312a573e85fec79832bfa489c | true | true | 2024-02-12T14:27:24Z |
🔶 | cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO | 77.52 | 74.06 | 86.67 | 76.69 | 71.32 | 83.43 | 72.93 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | float16 | true | other | 60 | 2 | true | e8e558b5fd4ac9da839577b1295d10ca75fc2663 | true | false | 2024-02-05T07:10:37Z |
🔶 | saltlux/luxia-21.4b-alignment-v0.2 | 77.51 | 76.71 | 91.61 | 68.27 | 79.8 | 87.06 | 61.64 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | null | 21 | 0 | false | 59243de958296a4516f72ebfb1b597188dd59229 | true | true | 2024-03-11T17:08:55Z |
🔶 | zhengr/MixTAO-7Bx2-MoE-v8.1 | 77.5 | 73.81 | 89.22 | 64.92 | 78.57 | 87.37 | 71.11 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | apache-2.0 | 12 | 45 | true | 2d8cff968dbfb31e0c1ccc42053ccc4d2698a390 | true | false | 2024-02-26T07:46:08Z |
💬 | yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B | 77.44 | 74.91 | 89.3 | 64.67 | 78.02 | 88.24 | 69.52 | 💬 chat models (RLHF, DPO, IFT, ...) | MixtralForCausalLM | Original | bfloat16 | true | mit | 12 | 52 | true | 915651208ea9f40c65a60d1f971a09f9461ee691 | true | false | 2024-01-21T09:10:58Z |
🔶 | JaeyeonKang/CCK_Asura_v1 | 77.43 | 73.89 | 89.07 | 75.44 | 71.75 | 86.35 | 68.08 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | null | 68 | 0 | false | 7dd3ddea090bd63f3143e70d7d6237cc40c046e4 | true | true | 2024-02-11T22:52:45Z |
🔶 | fblgit/UNA-SimpleSmaug-34b-v1beta | 77.41 | 74.57 | 86.74 | 76.68 | 70.17 | 83.82 | 72.48 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 34 | 20 | true | e1cdc5b02c662c5f29a50d0b22c64a8902ca856b | true | true | 2024-02-05T13:24:35Z |
🔶 | TomGrc/FusionNet_34Bx2_MoE_v0.1 | 77.38 | 73.72 | 86.46 | 76.72 | 71.01 | 83.35 | 73.01 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | mit | 60 | 7 | true | 6c7ec6d2ca1c0d126a26963fedc9bbdf5210b0d1 | true | false | 2024-01-30T20:37:25Z |
💬 | shenzhi-wang/Llama3-70B-Chinese-Chat | 77.34 | 70.39 | 85.81 | 79.74 | 61.1 | 83.74 | 83.24 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 85 | true | 9820f8e02b5b091dc5ebbb6442f83ea6a0db4205 | true | true | 2024-05-10T01:15:11Z |
💬 | TwT-6/cr-model-v1 | 77.32 | 70.65 | 87.85 | 74.73 | 80.47 | 83.66 | 66.57 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | cc-by-4.0 | 14 | 1 | true | 4b9fdd5c5f6efe32c6cb1b7636c897610c9d8b65 | true | true | 2024-05-29T02:42:42Z |
🔶 | saltlux/luxia-21.4b-alignment-v0.1 | 77.32 | 76.79 | 91.79 | 68.18 | 76.7 | 87.53 | 62.93 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | null | 21 | 0 | false | 88a47c498102132f5262581803fe1ed9252a16bc | true | true | 2024-03-11T17:09:10Z |
🔶 | migtissera/Tess-72B-v1.5b | 77.3 | 71.25 | 85.53 | 76.63 | 71.99 | 81.45 | 76.95 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | other | 72 | 16 | true | dc092ecc5d5a424678eac445a9f4443069776691 | true | true | 2024-02-08T18:03:29Z |
💬 | moreh/MoMo-72B-lora-1.8.6-DPO | 77.29 | 70.14 | 86.03 | 77.4 | 69 | 84.37 | 76.8 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | mit | 72 | 32 | true | 76389d5d825c3743cc70bc75b902bbfdad11beba | true | true | 2024-01-16T11:52:34Z |
🔶 | abacusai/Smaugv0.1 | 77.29 | 74.23 | 86.76 | 76.66 | 70.22 | 83.66 | 72.18 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | null | 34 | 0 | false | 036927bc2b54d408bb9e9357c3df8353f5853ea8 | true | true | 2024-01-25T17:56:45Z |
🔶 | abacusai/Smaug-34B-v0.1 | 77.29 | 74.23 | 86.76 | 76.66 | 70.22 | 83.66 | 72.18 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | other | 34 | 55 | true | 7b74a95019f01b59630cbd6469814c752d0e59e5 | true | true | 2024-01-27T21:10:02Z |
🔶 | cloudyu/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE | 77.28 | 72.87 | 86.52 | 76.96 | 73.28 | 83.19 | 70.89 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | mit | 60 | 4 | true | 097b951c2524e6113252fcd98ba5830c85dc450f | true | false | 2024-01-25T11:51:28Z |
🤝 | louisbrulenaudet/Maxine-34B-stock | 77.28 | 74.06 | 86.74 | 76.62 | 70.18 | 83.9 | 72.18 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 34 | 3 | false | 5d87d746433f6eaddf34fd1dbdeed859b15348aa | true | true | 2024-04-04T20:31:00Z |
🔶 | jefferylovely/MoeLovely-13B | 77.25 | 73.72 | 89.49 | 64.78 | 78.74 | 87.61 | 69.14 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | float16 | true | null | 12 | 0 | false | ac4f0ad8a665eb6b54c286810a9b4551b0bcdc25 | true | false | 2024-03-08T05:53:20Z |
🔶 | saltlux/luxia-21.4b-alignment-v0.4 | 77.23 | 76.88 | 91.83 | 68.06 | 76.72 | 87.21 | 62.7 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | null | 21 | 0 | false | 4c4342a9c3e8e793a0969b74222d887d53cb294e | true | true | 2024-03-11T17:10:06Z |
🔶 | ibivibiv/orthorus-125b-v2 | 77.22 | 73.63 | 89.04 | 75.99 | 70.19 | 85.48 | 68.99 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | float16 | true | apache-2.0 | 125 | 4 | true | 95b3b4e432d98b804d64cfe42dd9fa6b67198e5b | true | false | 2024-03-01T02:20:27Z |
🔶 | ConvexAI/Luminex-34B-v0.2 | 77.19 | 74.49 | 86.76 | 76.55 | 70.21 | 83.27 | 71.87 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | other | 34 | 11 | true | 3880710724abcaffbdf8fa4031e1d02066fbfe9d | true | true | 2024-02-18T19:56:57Z |
🔶 | senseable/Wilbur-30B | 77.18 | 74.06 | 86.68 | 76.7 | 69.96 | 83.43 | 72.25 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | null | 34 | 0 | false | eab679f95e078efb71fbaa7b1aa0be05bb4e46ca | true | true | 2024-01-27T04:03:58Z |
🤝 | RubielLabarta/LogoS-7Bx2-MoE-13B-v0.2 | 77.15 | 74.4 | 89.09 | 64.9 | 74.53 | 88.4 | 71.57 | 🤝 base merges and moerges | MixtralForCausalLM | Original | bfloat16 | false | apache-2.0 | 12 | 10 | true | 354f0eb0a1299473c861c0505c2ede04ced90972 | true | false | 2024-02-11T16:05:38Z |
🔶 | RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1 | 77.14 | 74.49 | 89.07 | 64.74 | 74.57 | 88.32 | 71.65 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | null | 12 | 0 | false | 1e4670ddb878fa696f2e6293a4db9d8657993fd8 | true | false | 2024-01-21T18:44:06Z |
🔶 | yunconglong/DARE_TIES_13B | 77.1 | 74.32 | 89.5 | 64.47 | 78.66 | 88.08 | 67.55 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | other | 12 | 10 | true | 74c6e4fbd272c9d897e8c93ee7de8a234f61900f | true | false | 2024-01-30T04:51:50Z |
🔶 | yunconglong/13B_MATH_DPO | 77.08 | 74.66 | 89.51 | 64.53 | 78.63 | 88.08 | 67.1 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | other | 12 | 1 | true | 96c62ad90f2b82016a1cdbfe96cfa5c4bb278e21 | true | false | 2024-01-28T11:53:09Z |
🔶 | TomGrc/FusionNet_34Bx2_MoE | 77.07 | 72.95 | 86.22 | 77.05 | 71.31 | 83.98 | 70.89 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | mit | 60 | 8 | true | c5575550053c84a401baf56174cb2e5d5bd9e79a | true | false | 2024-01-22T03:18:24Z |
🔶 | ConvexAI/Luminex-34B-v0.1 | 77.06 | 73.63 | 86.59 | 76.55 | 69.68 | 83.43 | 72.48 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | other | 34 | 8 | true | d3efc551679d7ec00da14722d44151c948a48d25 | true | true | 2024-02-16T23:13:45Z |
🔶 | yunconglong/MoE_13B_DPO | 77.05 | 74.32 | 89.39 | 64.48 | 78.47 | 88 | 67.63 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | other | 12 | 5 | true | d8d6a47f877fee3e638a158c2bd637c0013ed4e4 | true | false | 2024-01-28T06:50:23Z |
🔶 | JaeyeonKang/CCK_Asura_v3.0 | 77.03 | 72.95 | 88.86 | 75.41 | 69.1 | 85.08 | 70.81 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | null | 68 | 0 | false | 06fd0e293aeb3b2722e3910daefcd185fad4558c | true | true | 2024-02-19T00:05:32Z |
🔶 | 4season/alignment_model_test | 76.97 | 78.24 | 89.68 | 68.08 | 80.88 | 86.5 | 58.45 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | false | apache-2.0 | 21 | 0 | true | 791a326ee0f6d5246962039803fd79b28608e54c | true | true | 2024-03-16T11:32:40Z |
🔶 | cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO | 76.95 | 73.21 | 86.11 | 75.44 | 72.78 | 82.95 | 71.19 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | 4bit | true | other | 31 | 2 | true | 331bb6bdba4140bbf0031bd37076f2c8a76d7dbb | true | false | 2024-02-03T07:43:26Z |
🔶 | abhishek/autotrain-llama3-oh-sft-v0-2 | 76.89 | 68.34 | 85.65 | 79.73 | 60.29 | 83.43 | 83.93 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | other | 70 | 1 | true | 14e07850d3e1ee35f5788270ab514c2e3b3821bf | true | true | 2024-04-25T19:28:21Z |
🔶 | NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt | 76.88 | 71.33 | 86.28 | 80.03 | 58.81 | 84.77 | 80.06 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | cc-by-nc-4.0 | 70 | 16 | true | 60d97fcfb259f1e9ba57b9880b14a40590bb0350 | true | true | 2024-05-04T04:08:20Z |
🤝 | automerger/YamshadowExperiment28-7B | 76.86 | 73.29 | 89.25 | 64.38 | 78.53 | 85.24 | 70.51 | 🤝 base merges and moerges | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 19 | true | b8f628c51f138538afc4c3d0d7dbcbab523c3b7a | true | true | 2024-04-01T12:59:47Z |
🤝 | Kquant03/CognitiveFusion2-4x7B-BF16 | 76.86 | 73.38 | 89.18 | 64.32 | 78.12 | 84.93 | 71.27 | 🤝 base merges and moerges | MixtralForCausalLM | Original | bfloat16 | false | apache-2.0 | 24 | 3 | true | a6df0928520ffdeb7f041ee84a56f316c30ca913 | true | false | 2024-04-06T05:43:27Z |
🤝 | alchemonaut/QuartetAnemoi-70B-t0.0001 | 76.86 | 73.38 | 88.9 | 75.42 | 69.53 | 85.32 | 68.61 | 🤝 base merges and moerges | LlamaForCausalLM | Original | float16 | false | other | 68 | 29 | true | 392d963e63267650f2aea7dc26c60ee6fd2b26d4 | true | true | 2024-02-04T03:32:46Z |
🔶 | SF-Foundation/TextBase-7B-v0.1 | 76.84 | 73.89 | 90.27 | 64.78 | 78.13 | 86.03 | 67.93 | 🔶 fine-tuned on domain-specific datasets | MistralForCausalLM | Original | float16 | true | cc-by-nc-sa-4.0 | 7 | 5 | true | 40ea1e766860c831152653358beb3b7991a37af7 | true | true | 2024-05-23T06:57:42Z |
🟩 | liminerity/M7-7b | 76.82 | 72.87 | 89.15 | 64.5 | 77.93 | 84.77 | 71.72 | 🟩 continuously pretrained | MistralForCausalLM | Original | float16 | false | apache-2.0 | 7 | 15 | true | 23497a39fe5d290494fad49e5b8077f76440ad11 | true | true | 2024-03-10T03:08:41Z |
🤝 | liminerity/Multiverse-Experiment-slerp-7b | 76.82 | 72.87 | 89.15 | 64.5 | 77.93 | 84.77 | 71.72 | 🤝 base merges and moerges | MistralForCausalLM | Original | float16 | true | null | 7 | 0 | false | 2103c07a06ff4d6e7f4c031b98d4c1a455690436 | true | true | 2024-03-07T20:50:13Z |
🤝 | allknowingroger/MultiverseEx26-7B-slerp | 76.8 | 72.95 | 89.17 | 64.36 | 78.12 | 85.16 | 71.04 | 🤝 base merges and moerges | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 1 | true | 43f18d84e025693f00e9be335bf12fce96089b2f | true | true | 2024-04-10T18:45:37Z |
🔶 | Kukedlc/NeuralSynthesis-7B-v0.1 | 76.8 | 73.04 | 89.18 | 64.37 | 78.15 | 85.24 | 70.81 | 🔶 fine-tuned on domain-specific datasets | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 3 | true | 6cc3389eb2c1968e8b1355ee90135b9c769b4fa0 | true | true | 2024-04-06T04:05:27Z |
🤝 | AurelPx/Percival_01-7b-slerp | 76.79 | 73.21 | 89.16 | 64.42 | 77.97 | 85.08 | 70.89 | 🤝 base merges and moerges | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 3 | true | 6d415ca49b7717b8e851ae3271f569e83d4de589 | true | true | 2024-03-22T17:15:40Z |
🤝 | shyamieee/J4RVIZ-v6.0 | 76.78 | 73.29 | 89.15 | 64.41 | 77.87 | 85 | 70.96 | 🤝 base merges and moerges | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 0 | true | cbbb7b37ac2318b473f059a32a508e89ad5c26e9 | true | true | 2024-05-07T09:12:01Z |
🤝 | LewisDeBenoisIV/Jason1903_SLERP | 76.77 | 73.12 | 89.13 | 64.43 | 78.13 | 85.08 | 70.74 | 🤝 base merges and moerges | MistralForCausalLM | Original | float16 | true | null | 7 | 0 | false | ea187cf89f44197d9007798316a087bc63286227 | true | true | 2024-03-20T06:06:53Z |
🤝 | automerger/Ognoexperiment27Multi_verse_model-7B | 76.77 | 72.95 | 89.29 | 64.39 | 78.04 | 84.85 | 71.11 | 🤝 base merges and moerges | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 0 | true | 7eb7e390625ec0ca13a11c8977b9710d2316451f | true | true | 2024-04-05T22:40:18Z |
🤝 | Infinimol/miiqu-f16 | 76.77 | 72.87 | 88.97 | 75.99 | 69.37 | 85.56 | 67.85 | 🤝 base merges and moerges | LlamaForCausalLM | Original | float16 | false | other | 90 | 11 | true | 395d6398cb2ab71621a43f5f5df8994de9c46175 | true | true | 2024-03-19T19:30:05Z |
🤝 | shyamieee/B3E3-SLM-7b-v3.0 | 76.76 | 73.04 | 89.14 | 64.48 | 78.2 | 85 | 70.74 | 🤝 base merges and moerges | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 0 | true | 2eb74c7e22dde18a1f41c187ec4b24d02ec0cb01 | true | true | 2024-05-11T10:24:05Z |
🔶 | Kukedlc/NeuralSynthesis-7b-v0.4-slerp | 76.76 | 73.21 | 89.14 | 64.28 | 78.07 | 84.85 | 71.04 | 🔶 fine-tuned on domain-specific datasets | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 0 | true | 7dc00cb312bddce98224d5e07bd56db7f110ffa4 | true | true | 2024-04-13T15:56:21Z |
💬 | BarraHome/Mistroll-7B-v2.2 | 76.76 | 72.78 | 89.16 | 64.35 | 78.1 | 85 | 71.19 | 💬 chat models (RLHF, DPO, IFT, ...) | MistralForCausalLM | Original | bfloat16 | true | mit | 7 | 10 | true | 4869d62c238e828d6afdff2f22b928d41bae8578 | true | true | 2024-04-26T18:34:47Z |
🔶 | JaeyeonKang/CCK_Asura_v1.1.0 | 76.75 | 73.21 | 88.55 | 75.43 | 69.55 | 85.32 | 68.46 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | null | 68 | 0 | false | baf3e2cc3a8d18098199b3cee4bdf79f00935be1 | true | true | 2024-02-17T23:01:08Z |
🤝 | nlpguy/T3QM7 | 76.75 | 73.12 | 89.14 | 64.48 | 77.96 | 85.08 | 70.74 | 🤝 base merges and moerges | MistralForCausalLM | Original | float16 | false | apache-2.0 | 7 | 0 | true | fa6bd0d1019345cddabd90127c6a8f524a0d7a67 | true | true | 2024-03-16T18:38:50Z |
🔶 | ValiantLabs/Llama3-70B-Fireplace | 76.75 | 70.65 | 85 | 78.97 | 59.77 | 82.48 | 83.62 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | llama3 | 70 | 3 | true | 220079e4115733991eb19c30d5480db9696a665e | true | true | 2024-05-09T19:44:54Z |
🔶 | bardsai/jaskier-7b-dpo-v7.1 | 76.74 | 73.38 | 89.28 | 64.37 | 78.28 | 85.24 | 69.9 | 🔶 fine-tuned on domain-specific datasets | MistralForCausalLM | Original | float16 | true | null | 7 | 0 | false | 305544e9edd98253540141e91653d308e9b135cc | true | true | 2024-03-01T11:47:41Z |
🔶 | yam-peleg/Experiment26-7B | 76.74 | 73.38 | 89.15 | 64.32 | 78.24 | 84.93 | 70.43 | 🔶 fine-tuned on domain-specific datasets | MistralForCausalLM | Original | float16 | true | apache-2.0 | 7 | 78 | true | bbaef291e93a7f6c9f8cb76a4dbd8c3c054d3f3c | true | true | 2024-02-27T21:34:40Z |
🤝 | Undi95/Miqu-MS-70B | 76.74 | 73.29 | 88.63 | 75.48 | 69.32 | 85.71 | 68.01 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | cc-by-nc-4.0 | 68 | 7 | true | 2aa17f8d8aadc2c8bf2aed438a6714fe3dbd9794 | true | true | 2024-04-01T21:39:43Z |
🟩 | ammarali32/multi_verse_model | 76.74 | 72.87 | 89.2 | 64.4 | 77.92 | 84.77 | 71.27 | 🟩 continuously pretrained | MistralForCausalLM | Original | bfloat16 | true | null | 7 | 0 | false | e2aa6fdad0b28a6019b0fc7c178a3579c3d671e8 | true | true | 2024-03-07T07:39:34Z |
🔶 | MTSAIR/multi_verse_model | 76.74 | 72.87 | 89.2 | 64.4 | 77.92 | 84.77 | 71.27 | 🔶 fine-tuned on domain-specific datasets | MistralForCausalLM | Original | bfloat16 | true | apache-2.0 | 7 | 33 | true | a4ca706d1bbc263b95e223a80ad68b0f125840b3 | true | true | 2024-03-29T15:21:34Z |
🔶 | MiniMoog/Mergerix-7b-v0.3 | 76.73 | 72.87 | 89.14 | 64.44 | 78.01 | 84.93 | 71.04 | 🔶 fine-tuned on domain-specific datasets | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 0 | true | 680449fa566aa5fe1845c40b28eae05659c417f0 | true | true | 2024-04-02T20:56:41Z |
🤝 | louisbrulenaudet/Maxine-7B-0401-stock | 76.73 | 73.12 | 89.13 | 64.42 | 78.07 | 85 | 70.66 | 🤝 base merges and moerges | MistralForCausalLM | Original | bfloat16 | true | apache-2.0 | 7 | 1 | false | a23c75b9b6d9c47bdd106af999f6a33c981e2bd6 | true | true | 2024-04-01T18:16:03Z |
🤝 | automerger/Experiment27Pastiche-7B | 76.73 | 73.04 | 89.08 | 64.2 | 79.31 | 85.4 | 69.37 | 🤝 base merges and moerges | MistralForCausalLM | Original | bfloat16 | false | apache-2.0 | 7 | 0 | true | f69af11ca954a3441cca023a9e1cb6bb8bf4eb66 | true | true | 2024-04-01T12:59:12Z |
End of preview.
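The preview rows above are pipe-delimited records whose "Average" column is the mean of the six benchmark scores (ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8K). A minimal sketch of parsing one row and checking that relationship — the column order is inferred from the dataset's schema, not guaranteed by the viewer:

```python
# Parse one preview row from the table above and verify that the
# "Average" column equals the mean of the six benchmark scores.
# Field positions are an assumption based on the dataset's schema.

row = ("🤝 | louisbrulenaudet/Maxine-34B-stock | 77.28 | 74.06 | 86.74 | "
       "76.62 | 70.18 | 83.9 | 72.18")
fields = [f.strip() for f in row.split("|")]

model = fields[1]
average = float(fields[2])
# ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8K
scores = [float(f) for f in fields[3:9]]

computed = round(sum(scores) / len(scores), 2)
print(model, computed)  # → louisbrulenaudet/Maxine-34B-stock 77.28
```

Rows where the stored average disagrees with the recomputed mean by more than a rounding step would indicate a parsing (column-order) mistake rather than bad data.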
No dataset card yet
Downloads last month: 0