IlyasMoutawwakil HF Staff committed on
Commit
6b89f6a
·
verified ·
1 Parent(s): 0027e38

Upload llm-df.csv with huggingface_hub
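The commit message indicates the file was pushed with the huggingface_hub library. A minimal sketch of an equivalent call is shown below; the repo_id is a placeholder, since the target dataset repository is not named on this page, and the commit message mirrors the auto-generated one above.

from huggingface_hub import HfApi

# Minimal sketch: upload llm-df.csv to a dataset repo with huggingface_hub.
api = HfApi()  # authenticates via HF_TOKEN or a prior `huggingface-cli login`
api.upload_file(
    path_or_fileobj="llm-df.csv",      # local file to upload
    path_in_repo="llm-df.csv",         # destination path inside the repo
    repo_id="username/llm-df",         # placeholder dataset repo id
    repo_type="dataset",
    commit_message="Upload llm-df.csv with huggingface_hub",
)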

Files changed (1)
  1. llm-df.csv +36 -29
llm-df.csv CHANGED
@@ -1,18 +1,18 @@
1
  T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,GPQA,GPQA Raw,MUSR,MUSR Raw,MMLU-PRO,MMLU-PRO Raw,Type,Architecture,Weight type,Precision,Not_Merged,Hub License,#Params (B),Hub ❤️,Available on the hub,Model sha,Flagged,MoE,Submission Date,Upload To Hub Date,Chat Template,Maintainer's Highlight,fullname,Generation
2
  πŸ”Ά,dnhkng/RYS-XLarge,44.75,79.96,0.8,58.77,0.71,38.97,0.39,17.9,0.38,23.72,0.5,49.2,0.54,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,mit,77,55,True,0f84dd9dde60f383e1e2821496befb4ce9a11ef6,True,True,2024-08-07,2024-07-24,False,False,dnhkng/RYS-XLarge,0
3
  πŸ’¬,Qwen/Qwen2-72B,43.61,81.63,0.82,57.33,0.7,36.03,0.36,17.45,0.38,20.15,0.47,49.05,0.54,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,72,25,True,0369c39770f45f2464587918f2dbdb8449ea3a0d,True,True,2024-06-26,2024-06-08,True,False,MaziyarPanahi/calme-2.1-qwen2-72b,2
4
- πŸ’¬,Qwen/Qwen2-7B,43.4,80.08,0.8,56.8,0.69,41.16,0.41,16.55,0.37,16.52,0.45,49.27,0.54,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,,72,3,False,529e9bd80a76d943409bc92bb246aa7ca63dd9e6,True,True,2024-08-06,2024-07-09,True,False,MaziyarPanahi/calme-2.2-qwen2-72b,1
5
  πŸ”Ά,Undi95/MG-FinalMix-72B (Merge),43.28,80.14,0.8,57.5,0.7,33.61,0.34,18.01,0.39,21.22,0.48,49.19,0.54,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,,72,2,False,6c9c2f5d052495dcd49f44bf5623d21210653c65,True,True,2024-07-13,2024-06-25,True,False,Undi95/MG-FinalMix-72B,1
6
  πŸ’¬,Qwen/Qwen2-72B,42.49,79.89,0.8,57.48,0.7,35.12,0.35,16.33,0.37,17.17,0.46,48.92,0.54,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,72,629,True,1af63c698f59c4235668ec9c1395468cb7cd7e79,True,True,2024-06-26,2024-05-28,False,True,Qwen/Qwen2-72B-Instruct,1
7
  πŸ’¬,Qwen/Qwen2-72B,42.17,76.06,0.76,57.65,0.7,35.27,0.35,18.79,0.39,15.62,0.45,49.64,0.55,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,72,158,True,fef27e0f235ae8858b84b765db773a2a954110dd,True,True,2024-07-25,2024-06-17,True,False,alpindale/magnum-72b-v1,2
8
- πŸ’¬,meta-llama/Meta-Llama-3.1-70B,41.74,86.69,0.87,55.93,0.69,28.02,0.28,14.21,0.36,17.69,0.46,47.88,0.53,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3.1,70,416,True,b9461463b511ed3c0762467538ea32cf7c9669f2,True,True,2024-08-15,2024-07-16,True,True,meta-llama/Meta-Llama-3.1-70B-Instruct,1
9
  πŸ’¬,abacusai/Smaug-Qwen2-72B-Instruct,41.08,78.25,0.78,56.27,0.69,35.35,0.35,14.88,0.36,15.18,0.44,46.56,0.52,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,72,5,True,af015925946d0c60ef69f512c3b35f421cf8063d,True,True,2024-07-29,2024-06-26,True,True,abacusai/Smaug-Qwen2-72B-Instruct,0
10
  🀝,paulml/ECE-ILAB-Q1,40.93,78.65,0.79,53.7,0.67,26.13,0.26,18.23,0.39,18.81,0.46,50.06,0.55,🀝 base merges and moerges,Qwen2ForCausalLM,Original,bfloat16,False,other,72,0,True,393bea0ee85e4c752acd5fd77ce07f577fc13bd9,True,True,2024-06-26,2024-06-06,True,False,paulml/ECE-ILAB-Q1,0
11
  πŸ”Ά,pankajmathur/orca_mini_v7_72b,39.06,59.3,0.59,55.06,0.68,26.44,0.26,18.01,0.39,24.21,0.51,51.35,0.56,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,72,10,True,447f11912cfa496e32e188a55214043a05760d3a,True,True,2024-06-26,2024-06-26,False,False,pankajmathur/orca_mini_v7_72b,0
12
  🀝,gbueno86/Meta-LLama-3-Cat-Smaug-LLama-70b (Merge),38.27,80.72,0.81,51.51,0.67,26.81,0.27,10.29,0.33,15.0,0.44,45.28,0.51,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,llama3,70,1,True,2d73b7e1c7157df482555944d6a6b1362bc6c3c5,True,True,2024-06-27,2024-05-24,True,False,gbueno86/Meta-LLama-3-Cat-Smaug-LLama-70b,1
13
  πŸ’¬,meta-llama/Meta-Llama-3-70B,37.98,82.08,0.82,48.57,0.64,22.96,0.23,12.19,0.34,15.3,0.44,46.74,0.52,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,16,True,95366b974baedee4d95c1e841bc3d15e94753804,True,True,2024-06-26,2024-04-27,True,False,MaziyarPanahi/calme-2.2-llama3-70b,2
14
  πŸ”Ά,VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct,37.82,80.45,0.8,52.03,0.67,21.68,0.22,10.4,0.33,13.54,0.43,48.8,0.54,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,70,20,True,707cfd1a93875247c0223e0c7e3d86d58c432318,True,True,2024-06-26,2024-04-24,True,False,VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct,0
15
- πŸ’¬,meta-llama/Meta-Llama-3.1-70B,37.31,76.61,0.77,53.77,0.68,13.75,0.14,14.88,0.36,23.43,0.49,41.41,0.47,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,65,True,093242c69a91f8d9d5b8094c380b88772f9bd7f8,True,True,2024-08-28,2024-07-29,True,True,NousResearch/Hermes-3-Llama-3.1-70B,1
16
  πŸ”Ά,ValiantLabs/Llama3-70B-Fireplace,36.82,77.74,0.78,49.56,0.65,19.64,0.2,13.98,0.35,16.77,0.44,43.25,0.49,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama3,70,3,True,220079e4115733991eb19c30d5480db9696a665e,True,True,2024-06-26,2024-05-09,True,False,ValiantLabs/Llama3-70B-Fireplace,0
17
  πŸ’¬,tenyx/Llama3-TenyxChat-70B,36.54,80.87,0.81,49.62,0.65,22.66,0.23,6.82,0.3,12.52,0.43,46.78,0.52,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,62,True,a85d31e3af8fcc847cc9169f1144cf02f5351fab,True,True,2024-08-04,2024-04-26,True,False,tenyx/Llama3-TenyxChat-70B,0
18
  πŸ’¬,meta-llama/Meta-Llama-3-70B,36.18,80.99,0.81,50.19,0.65,23.34,0.23,4.92,0.29,10.92,0.42,46.74,0.52,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,1369,True,7129260dd854a80eb10ace5f61c20324b472b31c,True,True,2024-06-12,2024-04-17,True,True,meta-llama/Meta-Llama-3-70B-Instruct,1
@@ -21,19 +21,20 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
21
  πŸ”Ά,cloudyu/Llama-3-70Bx2-MOE,35.35,54.82,0.55,51.42,0.66,19.86,0.2,19.13,0.39,20.85,0.48,46.02,0.51,πŸ”Ά fine-tuned on domain-specific datasets,MixtralForCausalLM,Original,bfloat16,True,llama3,126,0,True,b8bd85e8db8e4ec352b93441c92e0ae1334bf5a7,True,False,2024-06-27,2024-05-20,False,False,cloudyu/Llama-3-70Bx2-MOE,0
22
  πŸ”Ά,Sao10K/L3-70B-Euryale-v2.1,35.35,73.84,0.74,48.7,0.65,20.85,0.21,10.85,0.33,12.25,0.42,45.6,0.51,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,cc-by-nc-4.0,70,108,True,36ad832b771cd783ea7ad00ed39e61f679b1a7c6,True,True,2024-07-01,2024-06-11,True,False,Sao10K/L3-70B-Euryale-v2.1,0
23
  πŸ”Ά,migtissera/Llama-3-70B-Synthia-v3.5,35.2,60.76,0.61,49.12,0.65,18.96,0.19,18.34,0.39,23.39,0.49,40.65,0.47,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama3,70,5,True,8744db0bccfc18f1847633da9d29fc89b35b4190,True,True,2024-08-28,2024-05-26,True,False,migtissera/Llama-3-70B-Synthia-v3.5,0
24
- 🟒,Qwen/Qwen2-72B,35.13,38.24,0.38,51.86,0.66,29.15,0.29,19.24,0.39,19.73,0.47,52.56,0.57,🟒 pretrained,Qwen2ForCausalLM,Original,bfloat16,True,other,72,172,True,87993795c78576318087f70b43fbf530eb7789e7,True,True,2024-06-26,2024-05-22,False,True,Qwen/Qwen2-72B,0
25
  πŸ”Ά,Sao10K/L3-70B-Euryale-v2.1,35.11,72.81,0.73,49.19,0.65,20.24,0.2,10.85,0.33,12.05,0.42,45.51,0.51,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,cc-by-nc-4.0,70,108,True,36ad832b771cd783ea7ad00ed39e61f679b1a7c6,True,True,2024-06-26,2024-06-11,True,False,Sao10K/L3-70B-Euryale-v2.1,0
26
- πŸ’¬,microsoft/Phi-3.5-MoE-instruct,35.1,69.25,0.69,48.77,0.64,20.54,0.21,14.09,0.36,17.33,0.46,40.64,0.47,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,bfloat16,True,mit,42,426,True,482a9ba0eb0e1fa1671e3560e009d7cec2e5147c,True,False,2024-08-21,2024-08-17,True,True,microsoft/Phi-3.5-MoE-instruct,0
27
  πŸ’¬,abacusai/Smaug-Llama-3-70B-Instruct-32K,34.72,77.61,0.78,49.07,0.65,21.22,0.21,6.15,0.3,12.43,0.42,41.83,0.48,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,20,True,33840982dc253968f32ef3a534ee0e025eb97482,True,True,2024-08-06,2024-06-11,True,True,abacusai/Smaug-Llama-3-70B-Instruct-32K,0
28
  πŸ”Ά,BAAI/Infinity-Instruct-3M-0613-Llama3-70B,34.47,68.21,0.68,51.33,0.66,14.88,0.15,14.43,0.36,16.53,0.45,41.44,0.47,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,70,4,True,9fc53668064bdda22975ca72c5a287f8241c95b3,True,True,2024-06-28,2024-06-27,True,False,BAAI/Infinity-Instruct-3M-0613-Llama3-70B,0
29
  πŸ’¬,dnhkng/RYS-Llama-3-Huge-Instruct,34.37,76.86,0.77,49.07,0.65,21.22,0.21,1.45,0.26,11.93,0.42,45.66,0.51,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,mit,99,1,True,cfe14a5339e88a7a89f075d9d48215d45f64acaf,True,True,2024-08-07,2024-08-06,True,False,dnhkng/RYS-Llama-3-Huge-Instruct,0
30
  πŸ’¬,mistralai/Mixtral-8x22B-v0.1,33.89,71.84,0.72,44.11,0.61,18.73,0.19,16.44,0.37,13.49,0.43,38.7,0.45,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MixtralForCausalLM,Original,bfloat16,True,apache-2.0,140,652,True,b0c3516041d014f640267b14feb4e9a84c8e8c71,True,False,2024-06-12,2024-04-16,True,True,mistralai/Mixtral-8x22B-Instruct-v0.1,1
31
  πŸ’¬,mistral-community/Mixtral-8x22B-v0.1,33.77,65.11,0.65,47.5,0.63,18.35,0.18,17.11,0.38,14.72,0.45,39.85,0.46,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MixtralForCausalLM,Original,float16,True,apache-2.0,140,259,True,a3be084543d278e61b64cd600f28157afc79ffd3,True,True,2024-06-12,2024-04-10,True,True,HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1,1
 
32
  πŸ”Ά,migtissera/Tess-v2.5.2-Qwen2-72B,33.28,44.94,0.45,52.31,0.66,27.42,0.27,13.42,0.35,10.89,0.42,50.68,0.56,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,other,72,12,True,0435e634ad9bc8b1172395a535b78e6f25f3594f,True,True,2024-08-10,2024-06-13,True,False,migtissera/Tess-v2.5.2-Qwen2-72B,0
33
  πŸ’¬,microsoft/Phi-3-medium-4k-instruct,32.67,64.23,0.64,49.38,0.64,16.99,0.17,11.52,0.34,13.05,0.43,40.84,0.47,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,bfloat16,True,mit,13,203,True,d194e4e74ffad5a5e193e26af25bcfc80c7f1ffc,True,True,2024-06-12,2024-05-07,True,True,microsoft/Phi-3-medium-4k-instruct,0
34
  πŸ’¬,01-ai/Yi-1.5-34B-Chat,32.63,60.67,0.61,44.26,0.61,23.34,0.23,15.32,0.36,13.06,0.43,39.12,0.45,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,34,222,True,f3128b2d02d82989daae566c0a7eadc621ca3254,True,True,2024-06-12,2024-05-10,True,True,01-ai/Yi-1.5-34B-Chat,0
35
  πŸ”Ά,alpindale/WizardLM-2-8x22B,32.61,52.72,0.53,48.58,0.64,22.28,0.22,17.56,0.38,14.54,0.44,39.96,0.46,πŸ”Ά fine-tuned on domain-specific datasets,MixtralForCausalLM,Original,bfloat16,True,apache-2.0,140,370,True,087834da175523cffd66a7e19583725e798c1b4f,True,True,2024-06-28,2024-04-16,False,False,alpindale/WizardLM-2-8x22B,0
36
- πŸ’¬,google/gemma-2-27b,32.31,79.78,0.8,49.27,0.65,0.68,0.01,16.67,0.38,9.11,0.4,38.35,0.45,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Gemma2ForCausalLM,Original,bfloat16,True,gemma,27,370,True,f6c533e5eb013c7e31fc74ef042ac4f3fb5cf40b,True,True,2024-08-07,2024-06-24,True,True,google/gemma-2-27b-it,1
37
  πŸ’¬,meta-llama/Meta-Llama-3-70B,32.18,50.27,0.5,48.4,0.64,22.66,0.23,11.97,0.34,13.1,0.43,46.71,0.52,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,12,True,cb03e4d810b82d86e7cb01ab146bade09a5d06d1,True,True,2024-06-26,2024-04-28,True,False,MaziyarPanahi/calme-2.4-llama3-70b,2
38
  πŸ’¬,internlm/internlm2_5-20b-chat,32.08,70.1,0.7,62.83,0.75,0.0,0.0,9.51,0.32,16.74,0.46,33.31,0.4,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",InternLM2ForCausalLM,Original,bfloat16,True,other,19,71,True,ef17bde929761255fee76d95e2c25969ccd93b0d,True,True,2024-08-12,2024-07-30,True,True,internlm/internlm2_5-20b-chat,0
39
  πŸ’¬,Qwen/Qwen2-72B,32.0,40.38,0.4,47.7,0.63,21.37,0.21,16.0,0.37,17.04,0.45,49.52,0.55,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,72,50,True,e79582577c2bf2af304221af0e8308b7e7d46ca1,True,True,2024-06-27,2024-05-27,True,True,cognitivecomputations/dolphin-2.9.2-qwen2-72b,1
@@ -41,7 +42,7 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
41
  🀝,paloalma/Le_Triomphant-ECE-TW3,31.66,54.02,0.54,44.96,0.61,17.45,0.17,13.2,0.35,18.5,0.47,41.81,0.48,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,72,3,True,f72399253bb3e65c0f55e50461488c098f658a49,True,True,2024-07-25,2024-04-01,False,False,paloalma/Le_Triomphant-ECE-TW3,0
42
  πŸ”Ά,failspy/Phi-3-medium-4k-instruct-abliterated-v3,31.55,63.19,0.63,46.73,0.63,14.12,0.14,8.95,0.32,18.52,0.46,37.78,0.44,πŸ”Ά fine-tuned on domain-specific datasets,Phi3ForCausalLM,Original,bfloat16,True,mit,13,22,True,959b09eacf6cae85a8eb21b25e998addc89a367b,True,True,2024-07-29,2024-05-22,True,False,failspy/Phi-3-medium-4k-instruct-abliterated-v3,0
43
  πŸ’¬,Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO,31.42,47.99,0.48,51.03,0.65,17.45,0.17,10.18,0.33,20.53,0.48,41.37,0.47,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,float16,True,mit,13,3,True,b749dbcb19901b8fd0e9f38c923a24533569f895,True,True,2024-08-13,2024-06-15,True,False,Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO,0
44
- πŸ’¬,CohereForAI/c4ai-command-r-plus,30.86,76.64,0.77,39.92,0.58,7.55,0.08,7.38,0.31,20.42,0.48,33.24,0.4,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",CohereForCausalLM,Original,float16,True,cc-by-nc-4.0,103,1639,True,fa1bd7fb1572ceb861bbbbecfa8af83b29fa8cca,True,True,2024-06-13,2024-04-03,True,True,CohereForAI/c4ai-command-r-plus,0
45
  πŸ’¬,internlm/internlm2_5-7b-chat,30.46,61.4,0.61,57.67,0.71,8.31,0.08,10.63,0.33,14.35,0.44,30.42,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",InternLM2ForCausalLM,Original,float16,True,other,7,147,True,bebb00121ee105b823647c3ba2b1e152652edc33,True,True,2024-07-03,2024-06-27,True,True,internlm/internlm2_5-7b-chat,0
46
  πŸ’¬,ValiantLabs/Llama3-70B-ShiningValiant2,30.45,61.22,0.61,46.71,0.63,7.1,0.07,10.74,0.33,13.64,0.43,43.31,0.49,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,4,True,bd6cce8da08ccefe9ec58cae3df4bf75c97d8950,True,True,2024-07-25,2024-04-20,True,False,ValiantLabs/Llama3-70B-ShiningValiant2,0
47
  🀝,altomek/YiSM-34B-0rn (Merge),30.15,42.84,0.43,45.38,0.61,20.62,0.21,16.22,0.37,14.76,0.44,41.06,0.47,🀝 base merges and moerges,LlamaForCausalLM,Original,float16,False,apache-2.0,34,1,True,7a481c67cbdd5c846d6aaab5ef9f1eebfad812c2,True,True,2024-06-27,2024-05-26,True,False,altomek/YiSM-34B-0rn,1
@@ -62,13 +63,13 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
62
  🟒,dnhkng/RYS-Phi-3-medium-4k-instruct,28.38,43.91,0.44,46.75,0.62,11.78,0.12,13.98,0.35,11.09,0.43,42.74,0.48,🟒 pretrained,Phi3ForCausalLM,Original,bfloat16,True,mit,17,1,True,1009e916b1ff8c9a53bc9d8ff48bea2a15ccde26,True,True,2024-08-07,2024-08-06,False,False,dnhkng/RYS-Phi-3-medium-4k-instruct,0
63
  πŸ”Ά,NLPark/AnFeng_v3.1-Avocet,28.05,50.96,0.51,40.31,0.58,13.9,0.14,9.96,0.32,14.98,0.45,38.2,0.44,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,cc-by-nc-nd-4.0,34,0,True,5170739731033323e6e66a0f68d34790042a3b2a,True,True,2024-08-07,2024-08-03,False,False,NLPark/AnFeng_v3.1-Avocet,0
64
  🀝,OpenBuddy/openbuddy-zero-56b-v21.2-32k,27.99,50.57,0.51,44.8,0.61,12.99,0.13,9.06,0.32,12.78,0.43,37.77,0.44,🀝 base merges and moerges,LlamaForCausalLM,Original,float16,True,other,56,0,True,c7a1a4a6e798f75d1d3219ab9ff9f2692e29f7d5,True,True,2024-06-26,2024-06-10,True,False,OpenBuddy/openbuddy-zero-56b-v21.2-32k,0
65
- πŸ’¬,meta-llama/Meta-Llama-3.1-8B,27.91,78.56,0.79,29.89,0.51,17.6,0.18,2.35,0.27,8.41,0.39,30.68,0.38,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3.1,8,2146,True,df34336b42332c6d360959e259cd6271c6a09fd4,True,True,2024-08-15,2024-07-18,True,True,meta-llama/Meta-Llama-3.1-8B-Instruct,1
66
  πŸ’¬,vicgalle/Configurable-Llama-3.1-8B-Instruct,27.77,83.12,0.83,29.66,0.5,15.86,0.16,3.24,0.27,5.93,0.38,28.8,0.36,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,apache-2.0,8,8,True,133b3ab1a5385ff9b3d17da2addfe3fc1fd6f733,True,True,2024-08-05,2024-07-24,True,False,vicgalle/Configurable-Llama-3.1-8B-Instruct,0
67
  πŸ”Ά,BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B,27.74,51.86,0.52,35.38,0.55,13.97,0.14,13.87,0.35,16.72,0.46,34.65,0.41,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,8,2,True,a42c86c61b98ca4fdf238d688fe6ea11cf414d29,True,True,2024-08-05,2024-07-09,True,False,BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B,0
68
  πŸ”Ά,01-ai/Yi-1.5-34B,27.73,38.53,0.39,44.17,0.61,15.18,0.15,12.42,0.34,16.97,0.46,39.1,0.45,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,34,34,True,1ec522298a6935c881df6dc29d3669833bd8672d,True,True,2024-07-27,2024-05-18,True,True,cognitivecomputations/dolphin-2.9.1-yi-1.5-34b,1
69
  πŸ’¬,01-ai/Yi-1.5-9B-Chat,27.71,60.46,0.6,36.95,0.56,11.63,0.12,11.3,0.33,12.84,0.43,33.06,0.4,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,8,126,True,bc87d8557c98dc1e5fdef6ec23ed31088c4d3f35,True,True,2024-06-12,2024-05-10,True,True,01-ai/Yi-1.5-9B-Chat,0
70
  πŸ’¬,jpacifico/Chocolatine-3B-Instruct-DPO-Revised,27.63,56.23,0.56,37.16,0.55,14.5,0.15,9.62,0.32,15.1,0.45,33.21,0.4,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,float16,True,mit,3,10,True,c403df6c0f78148cfb477972455cbd859149311a,True,True,2024-07-19,2024-07-17,True,False,jpacifico/Chocolatine-3B-Instruct-DPO-Revised,0
71
- πŸ’¬,microsoft/Phi-3.5-mini-instruct,27.4,57.75,0.58,36.75,0.55,14.95,0.15,11.97,0.34,10.1,0.4,32.91,0.4,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,bfloat16,True,mit,3,399,True,64963004ad95869fa73a30279371c8778509ac84,True,True,2024-08-21,2024-08-16,True,True,microsoft/Phi-3.5-mini-instruct,0
72
  πŸ’¬,microsoft/Phi-3-mini-4k-instruct,27.2,54.77,0.55,36.56,0.55,14.2,0.14,10.96,0.33,13.12,0.43,33.58,0.4,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,float16,True,mit,3,995,True,c1358f8a35e6d2af81890deffbbfa575b978c62f,True,True,2024-07-02,2024-04-22,True,True,microsoft/Phi-3-mini-4k-instruct,0
73
  πŸ’¬,mistralai/Mixtral-8x7B-v0.1,27.13,58.97,0.59,37.11,0.55,10.88,0.11,9.51,0.32,16.68,0.46,29.62,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MixtralForCausalLM,Original,bfloat16,True,apache-2.0,46,408,True,286ae6737d048ad1d965c2e830864df02db50f2f,True,False,2024-07-27,2024-01-11,True,True,NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO,1
74
  πŸ’¬,Qwen/Qwen1.5-32B-Chat,27.1,55.32,0.55,44.55,0.61,6.65,0.07,7.49,0.31,10.2,0.42,38.41,0.45,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,32,106,True,0997b012af6ddd5465d40465a8415535b2f06cfc,True,True,2024-06-12,2024-04-03,True,True,Qwen/Qwen1.5-32B-Chat,0
@@ -81,6 +82,7 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
81
  πŸ”Ά,meta-llama/Meta-Llama-3-8B-Instruct,26.7,76.3,0.76,27.9,0.49,6.8,0.07,7.72,0.31,9.85,0.41,31.62,0.38,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,8,6,True,ddf91fdc0a3ab5e5d76864f1c4cf44e5adacd565,True,True,2024-08-06,2024-05-30,True,False,MaziyarPanahi/Llama-3-8B-Instruct-v0.9,3
82
  🟒,Qwen/Qwen1.5-32B,26.69,32.97,0.33,38.98,0.57,26.66,0.27,10.63,0.33,12.04,0.43,38.89,0.45,🟒 pretrained,Qwen2ForCausalLM,Original,bfloat16,True,other,32,79,True,cefef80dc06a65f89d1d71d0adbc56d335ca2490,True,True,2024-06-13,2024-04-01,False,True,Qwen/Qwen1.5-32B,0
83
  πŸ”Ά,meta-llama/Meta-Llama-3-8B-Instruct,26.66,76.67,0.77,27.92,0.49,4.91,0.05,7.83,0.31,10.81,0.42,31.8,0.39,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,8,6,True,4411eb9f6f5e4c462a6bdbc64c26dcc123100b66,True,True,2024-06-26,2024-06-04,True,False,MaziyarPanahi/Llama-3-8B-Instruct-v0.10,4
 
84
  πŸ”Ά,meta-llama/Meta-Llama-3-8B,26.58,75.3,0.75,28.08,0.49,5.36,0.05,7.38,0.31,11.68,0.43,31.69,0.39,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,llama3,8,23,True,d40847d2981b588690c1dc21d5157d3f4afb2978,True,True,2024-06-27,2024-05-01,True,False,DeepMount00/Llama-3-8b-Ita,1
85
  πŸ”Ά,VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct,26.52,74.45,0.74,28.05,0.49,5.74,0.06,7.83,0.31,11.28,0.42,31.75,0.39,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,8,51,True,37127c44d7c0fb56cef817270c4b1a6802d8793a,True,True,2024-07-22,2024-04-19,True,False,VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct,0
86
  πŸ”Ά,unsloth/gemma-2-9b-it-bnb-4bit,26.5,58.87,0.59,35.57,0.55,12.16,0.12,11.63,0.34,9.34,0.41,31.43,0.38,πŸ”Ά fine-tuned on domain-specific datasets,Gemma2ForCausalLM,Original,float16,True,apache-2.0,9,0,True,4adc2d61d530d23026493d29e6191e06cf549fc6,True,True,2024-07-31,2024-07-16,True,False,ehristoforu/Gemma2-9B-it-psy10k-mental_health,2
@@ -106,7 +108,7 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
106
  🟒,mistral-community/mixtral-8x22B-v0.3,25.55,25.83,0.26,45.73,0.63,16.84,0.17,17.0,0.38,7.46,0.4,40.44,0.46,🟒 pretrained,MixtralForCausalLM,Original,bfloat16,True,apache-2.0,140,3,True,211b177b79ab5ef245ee334d106c27623e786882,True,False,2024-06-13,2024-05-25,False,True,mistral-community/mixtral-8x22B-v0.3,0
107
  πŸ”Ά,arcee-ai/Arcee-Spark,25.54,56.21,0.56,37.14,0.55,12.31,0.12,7.61,0.31,8.6,0.4,31.36,0.38,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,7,84,True,3fe368ea5fd32bc4a8d1bcf42510416f7fa28668,True,True,2024-06-26,2024-06-22,True,False,arcee-ai/Arcee-Spark,0
108
  🟒,mistralai/Mixtral-8x22B-v0.1,25.49,25.83,0.26,45.59,0.62,16.84,0.17,16.78,0.38,7.46,0.4,40.44,0.46,🟒 pretrained,MixtralForCausalLM,Original,bfloat16,True,apache-2.0,140,186,True,b03e260818710044a2f088d88fab12bb220884fb,True,False,2024-06-12,2024-04-16,False,True,mistralai/Mixtral-8x22B-v0.1,0
109
- πŸ’¬,microsoft/Phi-3-mini-128k-instruct,25.49,59.76,0.6,37.1,0.56,8.91,0.09,9.06,0.32,7.71,0.39,30.38,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,bfloat16,True,mit,3,1550,True,5be6479b4bc06a081e8f4c6ece294241ccd32dec,True,True,2024-08-21,2024-04-22,True,True,microsoft/Phi-3-mini-128k-instruct,0
110
  🀝,Sao10K/L3-8B-Lunaris-v1,25.48,68.95,0.69,32.11,0.52,8.46,0.08,6.82,0.3,5.55,0.37,30.97,0.38,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,True,llama3,8,78,True,8479c2a7ee119c935b9a02c921cc2a85b698dfe8,True,True,2024-07-22,2024-06-26,True,False,Sao10K/L3-8B-Lunaris-v1,0
111
  🟒,01-ai/Yi-1.5-34B,25.43,28.41,0.28,42.75,0.6,14.05,0.14,15.44,0.37,11.22,0.42,40.73,0.47,🟒 pretrained,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,34,45,True,4b486f81c935a2dadde84c6baa1e1370d40a098f,True,True,2024-06-12,2024-05-11,False,True,01-ai/Yi-1.5-34B,0
112
  πŸ”Ά,Tremontaine/L3-12B-Lunaris-v1 (Merge),25.38,69.09,0.69,32.18,0.52,8.16,0.08,7.94,0.31,4.05,0.37,30.83,0.38,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,,11,2,False,7be236530a835416ebca712d51d661c4488a45de,True,True,2024-07-15,2024-07-14,True,False,Tremontaine/L3-12B-Lunaris-v1,1
@@ -138,7 +140,7 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
138
  πŸ’¬,haoranxu/Llama-3-Instruct-8B-CPO-SimPO,24.48,70.46,0.7,29.76,0.5,7.7,0.08,5.7,0.29,3.42,0.36,29.84,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,mit,8,1,True,3ca4b5c3a6395ff090e1039d55ac1f6120777302,True,True,2024-07-28,2024-06-19,True,False,haoranxu/Llama-3-Instruct-8B-CPO-SimPO,0
139
  πŸ’¬,rhplus0831/maid-yuzu-v7 (Merge),24.38,64.62,0.65,26.82,0.48,8.91,0.09,7.94,0.31,9.77,0.41,28.22,0.35,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MixtralForCausalLM,Original,bfloat16,True,,46,1,False,a0bd8c707bb80024778da4a0d057917faa53d2f6,True,True,2024-08-23,2024-02-09,True,False,rhplus0831/maid-yuzu-v7,1
140
  πŸ’¬,mistralai/Mixtral-8x7B-v0.1,24.35,53.95,0.54,34.02,0.53,9.06,0.09,7.61,0.31,12.11,0.43,29.36,0.36,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MixtralForCausalLM,Original,bfloat16,True,apache-2.0,46,4083,True,1e637f2d7cb0a9d6fb1922f305cb784995190a83,True,False,2024-06-12,2023-12-10,True,True,mistralai/Mixtral-8x7B-Instruct-v0.1,1
141
- πŸ”Ά,ValiantLabs/Llama3.1-8B-ShiningValiant2,24.29,65.24,0.65,26.35,0.48,11.63,0.12,8.95,0.32,7.19,0.39,26.38,0.34,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,llama3.1,8,2,True,6b2b5694a192cb29ad0e4314138affa25b630c0e,True,True,2024-08-10,2024-08-06,True,False,ValiantLabs/Llama3.1-8B-ShiningValiant2,0
142
  πŸ”Ά,VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct,24.29,56.02,0.56,33.95,0.53,8.61,0.09,6.38,0.3,11.32,0.42,29.45,0.37,πŸ”Ά fine-tuned on domain-specific datasets,MixtralForCausalLM,Original,bfloat16,True,apache-2.0,46,21,True,30ed549de7d84f68b4c6cb619f73275c99af23cc,True,False,2024-06-26,2023-12-15,True,False,VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct,0
143
  🀝,UKzExecution/LlamaExecutor-8B-3.0.5 (Merge),24.26,74.03,0.74,28.41,0.5,8.53,0.09,0.78,0.26,4.65,0.38,29.17,0.36,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,True,,8,0,False,2047978e8ab1146b8881cde3d998856594f437a4,True,True,2024-07-30,2024-07-29,True,False,UKzExecution/LlamaExecutor-8B-3.0.5,1
144
  πŸ”Ά,ycros/BagelMIsteryTour-v2-8x7B (Merge),24.26,59.94,0.6,31.7,0.52,7.85,0.08,7.27,0.3,11.3,0.42,27.48,0.35,πŸ”Ά fine-tuned on domain-specific datasets,MixtralForCausalLM,Original,float16,False,cc-by-nc-4.0,46,16,True,98a8b319707be3dab1659594da69a37ed8f8c148,True,True,2024-06-28,2024-01-19,True,False,ycros/BagelMIsteryTour-v2-8x7B,1
@@ -150,7 +152,7 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
150
  🀝,PJMixers/LLaMa-3-CursedStock-v2.0-8B (Merge),24.03,63.31,0.63,32.56,0.53,8.61,0.09,3.24,0.27,8.04,0.39,28.4,0.36,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,llama3,8,9,True,d47cc29df363f71ffaf6cd21ac4bdeefa27359db,True,True,2024-06-27,2024-06-26,True,False,PJMixers/LLaMa-3-CursedStock-v2.0-8B,1
151
  πŸ”Ά,Qwen/Qwen2-7B,24.01,41.0,0.41,32.84,0.52,15.18,0.15,6.6,0.3,14.06,0.44,34.4,0.41,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,other,7,33,True,d5a2f245bf98a40d196821bc378e10f35b4da81a,True,True,2024-06-26,2024-06-24,True,False,Weyaxi/Einstein-v7-Qwen2-7B,1
152
  πŸ”Ά,BAAI/Infinity-Instruct-3M-0625-Qwen2-7B,24.01,55.54,0.56,34.66,0.53,6.12,0.06,8.39,0.31,6.46,0.39,32.89,0.4,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,7,6,True,503c24156d7682458686a7b5324f7f886e63470d,True,True,2024-08-05,2024-07-09,True,False,BAAI/Infinity-Instruct-3M-0625-Qwen2-7B,0
153
- πŸ”Ά,ValiantLabs/Llama3.1-8B-ShiningValiant2,24.0,64.74,0.65,26.26,0.48,10.73,0.11,8.95,0.32,6.91,0.39,26.4,0.34,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama3.1,8,2,True,6b2b5694a192cb29ad0e4314138affa25b630c0e,True,True,2024-08-07,2024-08-06,True,False,ValiantLabs/Llama3.1-8B-ShiningValiant2,0
154
  πŸ’¬,vicgalle/Roleplay-Llama-3-8B,23.94,73.2,0.73,28.55,0.5,8.69,0.09,1.45,0.26,1.68,0.35,30.09,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,apache-2.0,8,36,True,57297eb57dcc2c116f061d9dda341094203da01b,True,True,2024-06-26,2024-04-19,True,False,vicgalle/Roleplay-Llama-3-8B,0
155
  πŸ’¬,meta-llama/Meta-Llama-3-8B-Instruct,23.91,74.08,0.74,28.24,0.5,8.69,0.09,1.23,0.26,1.6,0.36,29.6,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,8,3356,True,e1945c40cd546c78e41f1151f4db032b271faeaa,True,True,2024-06-12,2024-04-17,True,True,meta-llama/Meta-Llama-3-8B-Instruct,0
156
  πŸ’¬,01-ai/Yi-34B-Chat,23.9,46.99,0.47,37.62,0.56,4.31,0.04,11.74,0.34,8.36,0.4,34.37,0.41,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,34,340,True,2e528b6a80fb064a0a746c5ca43114b135e30464,True,True,2024-06-12,2023-11-22,True,True,01-ai/Yi-34B-Chat,0
@@ -164,7 +166,7 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
164
  πŸ’¬,SeaLLMs/SeaLLMs-v3-7B-Chat,23.63,43.77,0.44,33.8,0.53,15.11,0.15,6.49,0.3,10.47,0.42,32.16,0.39,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,7,38,True,67ef6dfd0a5df7af4be7a325786105a2ba4cbaf7,True,True,2024-07-29,2024-07-03,True,False,SeaLLMs/SeaLLMs-v3-7B-Chat,0
165
  πŸ”Ά,meta-llama/Meta-Llama-3-8B-Instruct,23.56,69.03,0.69,29.08,0.5,5.74,0.06,1.12,0.26,5.5,0.38,30.92,0.38,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,mit,8,4,True,9c95ccdeceed14a3c2881bc495101a1acca1385f,True,True,2024-07-02,2024-05-25,True,False,ZhangShenao/SELM-Llama-3-8B-Instruct-iter-3,3
166
  πŸ’¬,lordjia/Qwen2-Cantonese-7B-Instruct,23.5,54.35,0.54,32.45,0.52,8.76,0.09,6.04,0.3,7.81,0.4,31.59,0.38,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,7,1,True,eb8b0faee749d167fd70e74f5e579094c4cfe7fb,True,True,2024-08-03,2024-07-13,True,False,lordjia/Qwen2-Cantonese-7B-Instruct,0
167
- πŸ’¬,meta-llama/Meta-Llama-3.1-8B,23.49,61.7,0.62,30.72,0.52,4.76,0.05,6.38,0.3,13.62,0.44,23.77,0.31,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,8,140,True,aabb745a717e133b74dcae23195d2635cf5f38cc,True,True,2024-08-28,2024-07-28,True,True,NousResearch/Hermes-3-Llama-3.1-8B,1
168
  πŸ’¬,saltlux/luxia-21.4b-alignment-v1.2,23.44,41.15,0.41,47.77,0.64,1.59,0.02,7.72,0.31,14.9,0.45,27.48,0.35,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,21,7,True,eed12b5574fa49cc81e57a88aff24c08c13721c0,True,True,2024-07-30,2024-05-27,True,False,saltlux/luxia-21.4b-alignment-v1.2,0
169
  πŸ’¬,meta-llama/Meta-Llama-3-8B-Instruct,23.43,66.87,0.67,28.06,0.48,6.57,0.07,3.02,0.27,5.31,0.38,30.77,0.38,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,cc-by-nc-4.0,8,1,True,555f4a0092f239557e1aa34f9d489e8156b907bb,True,True,2024-06-29,2024-04-26,True,False,lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75,2
170
  πŸ’¬,meta-llama/Meta-Llama-3-8B-Instruct,23.37,66.37,0.66,27.67,0.49,8.53,0.09,3.02,0.27,4.81,0.36,29.83,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,cc-by-nc-4.0,8,2,True,5a2f17238cc83932e00613d285f8bf6b8f4a0c3a,True,True,2024-06-29,2024-04-26,True,False,lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25,2
@@ -194,17 +196,20 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
194
  🀝,Pretergeek/OpenChat-3.5-0106_BlockExpansion-40Layers-End (Merge),22.55,59.61,0.6,24.06,0.46,6.8,0.07,7.61,0.31,11.78,0.43,25.44,0.33,🀝 base merges and moerges,MistralForCausalLM,Original,bfloat16,False,apache-2.0,8,2,True,2120720b7fb2ecc27b9c03cc876316fd25b26e40,True,True,2024-07-27,2024-07-26,True,False,Pretergeek/OpenChat-3.5-0106_BlockExpansion-40Layers-End,1
195
  🀝,Pretergeek/OpenChat-3.5-0106_BlockExpansion-44Layers-End (Merge),22.55,59.61,0.6,24.06,0.46,6.8,0.07,7.61,0.31,11.78,0.43,25.44,0.33,🀝 base merges and moerges,MistralForCausalLM,Original,bfloat16,False,apache-2.0,9,2,True,8a7ef4a2c4faf8760650e26e44509920bace633a,True,True,2024-07-27,2024-07-27,True,False,Pretergeek/OpenChat-3.5-0106_BlockExpansion-44Layers-End,1
196
  πŸ’¬,vicgalle/ConfigurableBeagle-11B,22.52,58.34,0.58,32.39,0.53,3.7,0.04,6.94,0.3,7.38,0.4,26.38,0.34,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,float16,True,apache-2.0,10,2,True,bbc16dbf94b8e8a99bb3e2ada6755faf9c2990dd,True,True,2024-06-26,2024-02-17,True,False,vicgalle/ConfigurableBeagle-11B,0
 
197
  πŸ”Ά,mistralai/Mistral-7B-v0.1,22.5,59.51,0.6,24.04,0.46,6.5,0.06,7.72,0.31,11.75,0.43,25.46,0.33,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,340,True,ff058fda49726ecf4ea53dc1635f917cdb8ba36b,True,True,2024-06-27,2024-01-07,True,True,openchat/openchat-3.5-0106,1
 
 
198
  πŸ”Ά,pankajmathur/orca_mini_v7_7b,22.41,43.88,0.44,33.95,0.53,2.64,0.03,6.15,0.3,12.66,0.44,35.19,0.42,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,7,2,True,f5e84ff6ea25fb4585908ea45d1520bac416d803,True,True,2024-06-26,2024-06-20,False,False,pankajmathur/orca_mini_v7_7b,0
199
  πŸ”Ά,Replete-AI/Llama3-8B-Instruct-Replete-Adapted,22.4,69.15,0.69,26.89,0.49,4.83,0.05,4.14,0.28,2.82,0.36,26.57,0.34,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,other,8,1,True,d930f2111913da6fb7693187e1cdc817191c8e5e,True,True,2024-07-09,2024-07-05,True,False,Replete-AI/Llama3-8B-Instruct-Replete-Adapted,0
200
  πŸ’¬,vicgalle/CarbonBeagle-11B (Merge),22.36,54.15,0.54,33.06,0.53,5.51,0.06,6.94,0.3,9.19,0.4,25.29,0.33,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,float16,False,apache-2.0,10,9,True,3fe9bf5327606d013b182fed17a472f5f043759b,True,True,2024-06-26,2024-01-21,True,False,vicgalle/CarbonBeagle-11B,1
201
  πŸ”Ά,WizardLMTeam/WizardLM-70B-V1.0,22.32,49.51,0.5,37.54,0.56,3.47,0.03,2.13,0.27,14.09,0.44,27.18,0.34,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama2,70,234,True,54aaecaff7d0790eb9f0ecea1cc267a94cc66949,True,True,2024-06-12,2023-08-09,False,True,WizardLMTeam/WizardLM-70B-V1.0,0
202
  🀝,johnsutor/Llama-3-8B-Instruct_breadcrumbs-density-0.1-gamma-0.01 (Merge),22.3,42.71,0.43,29.55,0.5,3.7,0.04,9.62,0.32,17.8,0.46,30.44,0.37,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,f4ebbf27d586e94c63f0a7293f565cbd947b824f,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_breadcrumbs-density-0.1-gamma-0.01,1
203
  πŸ”Ά,NousResearch/Meta-Llama-3-8B,22.29,57.63,0.58,30.51,0.51,5.97,0.06,6.26,0.3,10.06,0.42,23.31,0.31,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,apache-2.0,8,5,True,3cb5792509966a963645be24fdbeb2e7dc6cac15,True,True,2024-07-24,2024-05-02,True,False,vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B,2
204
- πŸ’¬,mistralai/Mistral-Nemo-Base-2407,22.27,62.61,0.63,27.11,0.49,0.3,0.0,8.72,0.32,8.48,0.39,26.37,0.34,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,bfloat16,True,apache-2.0,12,1002,True,4d14c1db68fe20dbf80b8eca85d39b909c5fe1d5,True,True,2024-08-29,2024-07-17,True,True,mistralai/Mistral-Nemo-Instruct-2407,1
205
  🟒,01-ai/Yi-34B,22.26,30.46,0.3,35.54,0.55,4.46,0.04,15.55,0.37,9.65,0.41,37.91,0.44,🟒 pretrained,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,34,1277,True,e1e7da8c75cfd5c44522228599fd4d2990cedd1c,True,True,2024-06-12,2023-11-01,False,True,01-ai/Yi-34B,0
206
  🀝,johnsutor/Llama-3-8B-Instruct_breadcrumbs-density-0.5-gamma-0.1 (Merge),22.18,43.96,0.44,30.85,0.51,6.87,0.07,7.61,0.31,13.84,0.44,29.96,0.37,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,a481edaceeaab34f4dc0e90c4d8ec0f72658bbdd,True,True,2024-06-26,2024-06-08,False,False,johnsutor/Llama-3-8B-Instruct_breadcrumbs-density-0.5-gamma-0.1,1
207
- πŸ”Ά,ValiantLabs/Llama3.1-8B-Enigma,22.14,64.05,0.64,24.8,0.47,10.8,0.11,4.7,0.29,2.29,0.36,26.22,0.34,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama3.1,8,4,True,332c99d80f378c77b090745a5aac10f8ab339519,True,True,2024-08-14,2024-08-11,True,False,ValiantLabs/Llama3.1-8B-Enigma,0
208
  πŸ”Ά,mlabonne/Daredevil-8B (Merge),22.13,45.48,0.45,31.63,0.52,8.99,0.09,7.72,0.31,7.53,0.39,31.45,0.38,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,False,other,8,30,True,717953c83631cc9adf2dddccfff06739308f10f7,True,True,2024-07-02,2024-05-25,True,True,mlabonne/Daredevil-8B,1
209
  πŸ’¬,OpenBuddy/openbuddy-mixtral-7bx8-v18.1-32k,22.12,54.93,0.55,24.54,0.47,9.52,0.1,7.27,0.3,5.28,0.38,31.16,0.38,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MixtralForCausalLM,Original,bfloat16,True,apache-2.0,46,13,True,98596b6731058cc9cca85f3b8ac9077342cb60ae,True,False,2024-06-26,2024-02-12,True,False,OpenBuddy/openbuddy-mixtral-7bx8-v18.1-32k,0
210
  🀝,johnsutor/Llama-3-8B-Instruct_breadcrumbs_ties-density-0.1-gamma-0.01 (Merge),22.09,43.59,0.44,29.53,0.5,4.31,0.04,8.05,0.31,16.34,0.45,30.69,0.38,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,861347cd643d396877d8e560367cf0717c671228,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_breadcrumbs_ties-density-0.1-gamma-0.01,1
@@ -347,7 +352,7 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
347
  πŸ”Ά,mistralai/Mixtral-8x7B-v0.1,19.67,23.26,0.23,30.4,0.51,9.37,0.09,9.4,0.32,13.66,0.44,31.9,0.39,πŸ”Ά fine-tuned on domain-specific datasets,MixtralForCausalLM,Original,float16,True,apache-2.0,46,1623,True,985aa055896a8f943d4a9f2572e6ea1341823841,True,False,2024-06-27,2023-12-01,False,True,mistralai/Mixtral-8x7B-v0.1,0
348
  πŸ”Ά,RLHFlow/LLaMA3-iterative-DPO-final,19.64,53.4,0.53,29.79,0.51,0.0,0.0,4.47,0.28,5.08,0.37,25.08,0.33,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,llama3,8,41,True,40b73bd07a019795837f80579fe95470484ca82b,True,True,2024-06-26,2024-05-17,True,False,RLHFlow/LLaMA3-iterative-DPO-final,0
349
  🀝,allknowingroger/limyClown-7B-slerp (Merge),19.63,40.17,0.4,31.93,0.51,6.42,0.06,4.14,0.28,12.46,0.43,22.64,0.3,🀝 base merges and moerges,MistralForCausalLM,Original,bfloat16,False,apache-2.0,7,0,True,732a1ed0c2c7007297ad9d9797793073825f65ca,True,True,2024-06-26,2024-03-23,False,False,allknowingroger/limyClown-7B-slerp,1
350
- πŸ’¬,upstage/SOLAR-10.7B-Instruct-v1.0 (Merge),19.63,47.37,0.47,31.87,0.52,0.0,0.0,7.83,0.31,6.94,0.39,23.76,0.31,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,cc-by-nc-4.0,10,605,True,c08c25ed66414a878fe0401a3596d536c083606c,True,True,2024-06-12,2023-12-12,True,True,upstage/SOLAR-10.7B-Instruct-v1.0,1
351
  🀝,invisietch/EtherealRainbow-v0.3-8B,19.61,36.82,0.37,30.08,0.51,6.57,0.07,7.27,0.3,7.77,0.39,29.18,0.36,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,llama3,8,6,True,c986c4ca5a5b8474820a59d3e911a431cf26938d,True,True,2024-07-01,2024-06-19,False,False,invisietch/EtherealRainbow-v0.3-8B,0
352
  🀝,allknowingroger/Ph3unsloth-3B-slerp (Merge),19.61,18.94,0.19,36.46,0.55,6.87,0.07,9.96,0.32,15.43,0.45,30.01,0.37,🀝 base merges and moerges,MistralForCausalLM,Original,bfloat16,False,apache-2.0,3,0,True,465444b3cdd43876717f7386ea2f3357c5fe8e53,True,True,2024-06-26,2024-05-31,False,False,allknowingroger/Ph3unsloth-3B-slerp,1
353
  🟒,01-ai/Yi-1.5-9B-32K,19.61,23.03,0.23,28.94,0.5,9.59,0.1,14.54,0.36,10.83,0.42,30.72,0.38,🟒 pretrained,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,8,18,True,116561dfae63af90f9d163b43077629e0e916bb1,True,True,2024-06-12,2024-05-15,False,True,01-ai/Yi-1.5-9B-32K,0
@@ -400,20 +405,21 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
400
  🀝,johnsutor/Llama-3-8B-Instruct_breadcrumbs-density-0.7-gamma-0.01 (Merge),18.44,28.14,0.28,27.16,0.49,0.0,0.0,5.37,0.29,24.47,0.52,25.5,0.33,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,61f4b44fb917cdb46f0ade9f8fc2a382e0cf67af,True,True,2024-06-26,2024-06-08,False,False,johnsutor/Llama-3-8B-Instruct_breadcrumbs-density-0.7-gamma-0.01,1
401
  πŸ’¬,mistralai/Mistral-7B-v0.1,18.37,50.82,0.51,22.75,0.45,2.57,0.03,5.26,0.29,6.59,0.34,22.26,0.3,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,bfloat16,True,mit,7,119,True,30172203a2d41cb487bf7e2b92a821080783b2c9,True,True,2024-06-27,2023-11-16,True,True,argilla/notus-7b-v1,2
402
  πŸ”Ά,uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b,18.34,37.0,0.37,29.65,0.5,2.95,0.03,4.47,0.28,13.85,0.44,22.12,0.3,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,llama2,7,17,True,b1de043468a15198b55a6509293a4ee585139043,True,True,2024-06-26,2023-10-13,False,False,uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b,0
403
- πŸ”Ά,ValiantLabs/Llama3.1-8B-Fireplace2,18.31,54.83,0.55,24.07,0.46,5.82,0.06,5.15,0.29,4.38,0.34,15.63,0.24,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama3.1,8,4,True,be3a5c18b5e8e86a3703df1a8227f784ad2c713c,True,True,2024-07-25,2024-07-23,True,False,ValiantLabs/Llama3.1-8B-Fireplace2,0
404
  πŸ’¬,meta-llama/Meta-Llama-3-8B,18.3,38.5,0.39,27.86,0.49,5.06,0.05,4.92,0.29,13.79,0.44,19.68,0.28,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,other,8,398,True,5aeb036f9215c558b483a654a8c6e1cc22e841bf,True,True,2024-06-12,2024-04-20,True,True,cognitivecomputations/dolphin-2.9-llama3-8b,1
405
  🟒,meta-llama/Llama-2-70b-hf,18.25,24.07,0.24,35.9,0.55,2.49,0.02,7.05,0.3,9.78,0.41,30.2,0.37,🟒 pretrained,LlamaForCausalLM,Original,float16,True,llama2,68,823,True,3aba440b59558f995867ba6e1f58f21d0336b5bb,True,True,2024-06-12,2023-07-11,False,True,meta-llama/Llama-2-70b-hf,0
406
  πŸ”Ά,microsoft/Orca-2-13b,18.14,31.28,0.31,27.31,0.49,0.98,0.01,4.03,0.28,25.79,0.51,19.44,0.27,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,13,659,True,2539ff53e6baa4cc603774ad5a2d646f4041ea4e,True,True,2024-06-12,2023-11-14,False,True,microsoft/Orca-2-13b,0
407
  πŸ’¬,gradientai/Llama-3-8B-Instruct-Gradient-1048k,18.12,44.56,0.45,21.01,0.43,4.38,0.04,3.69,0.28,13.52,0.43,21.56,0.29,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,8,663,True,8697fb25cb77c852311e03b4464b8467471d56a4,True,True,2024-06-12,2024-04-29,True,True,gradientai/Llama-3-8B-Instruct-Gradient-1048k,0
408
  🀝,johnsutor/Llama-3-8B-Instruct_ties-density-0.5 (Merge),18.11,37.97,0.38,26.01,0.48,5.44,0.05,7.27,0.3,7.8,0.39,24.17,0.32,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,c857e33c30016960f114e3a049f5dae41d68bfe7,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_ties-density-0.5,1
409
  πŸ”Ά,uukuguy/speechless-code-mistral-7b-v1.0,18.09,36.65,0.37,24.09,0.46,4.61,0.05,4.59,0.28,14.77,0.45,23.84,0.31,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,18,True,1862e0a712efc6002112e9c1235a197d58419b37,True,True,2024-06-26,2023-10-10,False,False,uukuguy/speechless-code-mistral-7b-v1.0,0
410
- πŸ’¬,ValiantLabs/Llama3.1-8B-Fireplace2,18.05,53.28,0.53,24.09,0.46,5.66,0.06,5.26,0.29,4.22,0.34,15.82,0.24,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3.1,8,4,True,ef129903bbdcc59efdbe10fe9061bff473334a99,True,True,2024-08-10,2024-07-23,True,False,ValiantLabs/Llama3.1-8B-Fireplace2,0
411
  🀝,johnsutor/Llama-3-8B-Instruct_ties-density-0.9 (Merge),18.04,38.58,0.39,25.46,0.47,5.59,0.06,6.6,0.3,7.74,0.39,24.24,0.32,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,57c280ce43fe81a23c966b48de6db7f4a85383a3,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_ties-density-0.9,1
412
  πŸ”Ά,uukuguy/speechless-instruct-mistral-7b-v0.2,18.02,32.61,0.33,24.56,0.46,4.38,0.04,4.25,0.28,21.17,0.49,21.14,0.29,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,0,True,87a4d214f7d028d61c3dc013a7410b3c34a24072,True,True,2024-06-26,2024-05-22,False,False,uukuguy/speechless-instruct-mistral-7b-v0.2,0
413
  🟒,THUDM/glm-4-9b,18.01,14.26,0.14,35.81,0.55,0.0,0.0,8.84,0.32,14.19,0.44,34.94,0.41,🟒 pretrained,ChatGLMModelM,Original,bfloat16,True,other,9,96,True,99a140996f9d4f197842fb6b1aab217a42e27ef3,True,True,2024-07-04,2024-06-04,False,False,THUDM/glm-4-9b,0
414
  πŸ”Ά,mistralai/Mistral-7B-v0.1,17.94,27.78,0.28,30.21,0.5,2.19,0.02,5.59,0.29,23.02,0.51,18.87,0.27,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,apache-2.0,7,65,True,fc679274dfcd28a8b6087634f71af7ed2a0659c4,True,True,2024-06-12,2023-10-25,False,True,Intel/neural-chat-7b-v3,1
415
  πŸ”Ά,Kukedlc/NeuralExperiment-7b-MagicCoder-v7.5,17.92,45.53,0.46,16.39,0.4,6.12,0.06,6.15,0.3,13.06,0.43,20.27,0.28,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,apache-2.0,7,3,True,43ea8d27d652dc15e4d27f665c5d636a5937780b,True,True,2024-07-30,2024-03-07,True,False,Kukedlc/NeuralExperiment-7b-MagicCoder-v7.5,0
416
  🀝,johnsutor/Llama-3-8B-Instruct_ties-density-0.7 (Merge),17.89,36.81,0.37,25.37,0.47,5.74,0.06,7.94,0.31,7.58,0.39,23.92,0.32,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,8d7d8bbb1e8cba5e51337f97bc3d6d8ae40544d5,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_ties-density-0.7,1
 
417
  πŸ”Ά,meta-llama/Meta-Llama-3-8B-Instruct,17.89,38.09,0.38,23.65,0.46,5.36,0.05,11.07,0.33,1.6,0.34,27.56,0.35,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,llama3,8,3,True,2560556d655d0ecaefec10f579c92292d65fb28b,True,True,2024-06-27,2024-06-10,False,False,collaiborateorg/Collaiborator-MEDLLM-Llama-3-8B-v2,1
418
  πŸ”Ά,meta-llama/Meta-Llama-3.1-8B,17.8,47.88,0.48,26.14,0.48,7.93,0.08,1.45,0.26,1.87,0.34,21.53,0.29,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,llama3.1,8,2,True,b191916912f0e76b2bdc93c46c0af590cc87e7ae,True,True,2024-08-06,2024-07-23,True,False,Magpie-Align/Llama-3.1-8B-Magpie-Align-SFT-v0.1,1
419
  🀝,meraGPT/mera-mix-4x7B,17.78,48.32,0.48,17.49,0.4,4.91,0.05,7.27,0.3,9.27,0.41,19.42,0.27,🀝 base merges and moerges,MixtralForCausalLM,Original,bfloat16,True,apache-2.0,24,18,True,09d965c5ef9b66ce419986027e03a915cb869e43,True,True,2024-06-27,2024-04-13,True,False,meraGPT/mera-mix-4x7B,0
@@ -461,7 +467,7 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
461
  πŸ’¬,meta-llama/Meta-Llama-3-8B,16.26,40.27,0.4,26.29,0.48,3.25,0.03,3.58,0.28,1.92,0.31,22.24,0.3,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,llama3,8,10,True,a83ddac146fb2da1dd1bfa4069e336074d1439a8,True,True,2024-07-03,2024-06-29,True,False,Magpie-Align/Llama-3-8B-Magpie-Align-v0.1,2
462
  πŸ”Ά,senseable/WestLake-7B-v2,16.26,44.19,0.44,17.86,0.41,4.83,0.05,3.58,0.28,7.48,0.39,19.6,0.28,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,apache-2.0,7,104,True,41625004c47628837678859753b94c50c82f3bec,True,True,2024-07-23,2024-01-22,True,False,senseable/WestLake-7B-v2,0
463
  πŸ’¬,stabilityai/stablelm-2-12b-chat,16.22,40.82,0.41,25.25,0.47,2.04,0.02,2.24,0.27,7.73,0.39,19.27,0.27,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",StableLmForCausalLM,Original,bfloat16,True,other,12,83,True,b6b62cd451b84e848514c00fafa66d9ead9297c5,True,True,2024-06-12,2024-04-04,True,True,stabilityai/stablelm-2-12b-chat,0
464
- πŸ’¬,CohereForAI/aya-23-8B,15.97,46.99,0.47,20.2,0.43,1.44,0.01,4.59,0.28,8.42,0.39,14.2,0.23,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",CohereForCausalLM,Original,float16,True,cc-by-nc-4.0,8,351,True,ec151d218a24031eb039d92fb83d10445427efc9,True,True,2024-06-12,2024-05-19,True,True,CohereForAI/aya-23-8B,0
465
  🀝,johnsutor/Llama-3-8B-Instruct_dare_ties-density-0.3 (Merge),15.94,21.13,0.21,23.09,0.46,0.0,0.0,6.26,0.3,22.5,0.51,22.67,0.3,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,6f966d14d7236f3da6d1ea9ce3bd9b20808e02a9,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_dare_ties-density-0.3,1
466
  πŸ”Ά,Pretergeek/openchat-3.5-0106_Rebased_Mistral-7B-v0.2,15.94,37.06,0.37,10.91,0.36,3.85,0.04,2.91,0.27,20.57,0.48,20.33,0.28,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,2,True,31c11027a7320115af1e5c33b41bcace83420fe2,True,True,2024-07-21,2024-07-21,True,False,Pretergeek/openchat-3.5-0106_Rebased_Mistral-7B-v0.2,0
467
  πŸ’¬,Locutusque/Llama-3-NeuralHercules-5.0-8B,15.93,44.89,0.45,16.34,0.39,3.63,0.04,2.46,0.27,6.78,0.39,21.48,0.29,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,8,3,True,2bbb675e592a1772f2389fe2d58a5b610d479d94,True,True,2024-06-26,2024-05-28,True,False,Locutusque/Llama-3-NeuralHercules-5.0-8B,0
@@ -488,7 +494,7 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
488
  🟒,mistralai/Mistral-Nemo-Base-2407,15.08,16.3,0.16,29.37,0.5,4.98,0.05,5.82,0.29,6.52,0.39,27.46,0.35,🟒 pretrained,MistralForCausalLM,Original,bfloat16,True,apache-2.0,11,234,True,d2efb15544d5401f761235bef327babb850887d0,True,True,2024-07-19,2024-07-18,False,True,mistralai/Mistral-Nemo-Base-2407,0
489
  πŸ”Ά,Changgil/K2S3-14b-v0.2,15.07,32.43,0.32,24.28,0.46,4.53,0.05,4.14,0.28,6.8,0.39,18.26,0.26,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,cc-by-nc-4.0,14,0,True,b4f0e1eed2640df2b75847ff37e6ebb1be217b6c,True,True,2024-06-27,2024-06-17,False,False,Changgil/K2S3-14b-v0.2,0
490
  🟩,NousResearch/Yarn-Solar-10b-64k,15.06,19.89,0.2,28.4,0.49,2.27,0.02,6.94,0.3,9.01,0.4,23.87,0.31,🟩 continuously pretrained,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,10,15,True,703818628a5e8ef637e48e8dbeb3662aa0497aff,True,True,2024-06-12,2024-01-17,False,True,NousResearch/Yarn-Solar-10b-64k,0
491
- 🟒,tiiuae/falcon-mamba-7b,15.04,33.36,0.33,19.88,0.43,3.63,0.04,8.05,0.31,10.86,0.42,14.47,0.23,🟒 pretrained,FalconMambaForCausalLM,Original,bfloat16,True,other,7,172,True,5337fd73f19847e111ba2291f3f0e1617b90c37d,True,True,2024-07-23,2024-07-17,False,True,tiiuae/falcon-mamba-7b,0
492
  πŸ”Ά,pankajmathur/orca_mini_v3_13b,15.0,28.97,0.29,25.55,0.47,1.89,0.02,2.01,0.27,17.11,0.46,14.5,0.23,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,13,32,True,7d6e567d24ce2f228beaf54e89c17b0e750bfe99,True,True,2024-06-26,2023-08-09,False,False,pankajmathur/orca_mini_v3_13b,0
493
  🟒,Deci/DeciLM-7B,14.95,28.13,0.28,21.25,0.44,2.42,0.02,6.04,0.3,13.05,0.44,18.8,0.27,🟒 pretrained,DeciLMForCausalLM,Original,bfloat16,True,apache-2.0,7,222,True,c3c9f4226801dc0433f32aebffe0aac68ee2f051,True,True,2024-06-12,2023-12-10,False,True,Deci/DeciLM-7B,0
494
  πŸ’¬,meta-llama/Meta-Llama-3-8B,14.87,36.53,0.37,21.95,0.44,3.85,0.04,3.91,0.28,4.01,0.36,18.95,0.27,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,other,8,53,True,7f200e4c84ad0daa3ff6bc414012d8d0bacbf90e,True,True,2024-06-12,2024-04-18,True,True,mlabonne/OrpoLlama-3-8B,1
@@ -510,8 +516,8 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
510
  πŸ”Ά,microsoft/Orca-2-7b,14.22,21.83,0.22,22.43,0.45,0.83,0.01,1.45,0.26,24.09,0.5,14.65,0.23,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,7,213,True,60e31e6bdcf582ad103b807cb74b73ee1d2c4b17,True,True,2024-06-12,2023-11-14,False,True,microsoft/Orca-2-7b,0
511
  πŸ”Ά,TencentARC/Mistral_Pro_8B_v0.1,14.2,21.15,0.21,22.89,0.45,5.66,0.06,4.03,0.28,11.83,0.42,19.61,0.28,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,apache-2.0,8,66,True,366f159fc5b314ba2a955209d2bca4600f84dac0,True,True,2024-06-12,2024-02-22,False,True,TencentARC/Mistral_Pro_8B_v0.1,0
512
  🟒,tklohj/WindyFloLLM (Merge),14.17,26.69,0.27,24.4,0.46,1.13,0.01,3.36,0.28,11.86,0.43,17.57,0.26,🟒 pretrained,LlamaForCausalLM,Original,float16,True,,13,0,False,21f4241ab3f091d1d309e9076a8d8e3f014908a8,True,True,2024-07-10,2024-06-30,False,False,tklohj/WindyFloLLM,1
513
- 🟒,mistral-community/Mistral-7B-v0.2,14.15,22.66,0.23,23.95,0.45,2.64,0.03,5.59,0.29,8.36,0.4,21.7,0.3,🟒 pretrained,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,230,True,2c3e624962b1a3f3fbf52e15969565caa7bc064a,True,True,2024-06-12,2024-03-23,False,True,mistral-community/Mistral-7B-v0.2,0
514
  🟒,mistralai/Mistral-7B-v0.3,14.15,22.66,0.23,23.95,0.45,2.64,0.03,5.59,0.29,8.36,0.4,21.7,0.3,🟒 pretrained,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,349,True,b67d6a03ca097c5122fa65904fce0413500bf8c8,True,True,2024-06-12,2024-05-22,False,True,mistralai/Mistral-7B-v0.3,0
 
515
  🟒,awnr/Mistral-7B-v0.1-signtensors-7-over-16,14.15,22.94,0.23,21.04,0.43,3.25,0.03,7.16,0.3,7.93,0.4,22.56,0.3,🟒 pretrained,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,1,True,0e1f2cb0a81c38fc6c567d9c007883ab62fae266,True,True,2024-07-29,2024-07-29,False,False,awnr/Mistral-7B-v0.1-signtensors-7-over-16,0
516
  πŸ”Ά,netcat420/MFANNv0.19,14.14,30.57,0.31,24.92,0.47,2.64,0.03,7.61,0.31,2.72,0.35,16.36,0.25,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama3.1,8,0,True,af26a25549b7ad291766c479bebda58f15fbff42,True,True,2024-07-27,2024-07-27,False,False,netcat420/MFANNv0.19,0
517
  🀝,johnsutor/Llama-3-8B-Instruct_dare_linear (Merge),14.12,21.45,0.21,19.61,0.43,0.0,0.0,6.15,0.3,21.81,0.5,15.72,0.24,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,abb81fd8fdc2ad32f65befcb7ae369c9837cd563,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_dare_linear,1
@@ -561,7 +567,7 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
561
  πŸ’¬,Enno-Ai/EnnoAi-Pro-Llama-3-8B,12.17,31.95,0.32,17.51,0.42,0.15,0.0,1.57,0.26,9.08,0.41,12.79,0.22,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,creativeml-openrail-m,8,0,True,6a5d745bdd304753244fe601e2a958d37d13cd71,True,True,2024-07-08,2024-07-01,True,False,Enno-Ai/EnnoAi-Pro-Llama-3-8B,0
562
  🟒,awnr/Mistral-7B-v0.1-signtensors-5-over-16,12.16,21.18,0.21,17.54,0.41,2.19,0.02,4.14,0.28,6.14,0.37,21.75,0.3,🟒 pretrained,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,1,True,5ea13b3d0723237889e1512bc70dae72f71884d1,True,True,2024-07-29,2024-07-29,False,False,awnr/Mistral-7B-v0.1-signtensors-5-over-16,0
563
  πŸ”Ά,NousResearch/Llama-2-13b-hf,12.12,26.68,0.27,18.21,0.42,0.83,0.01,3.02,0.27,8.53,0.4,15.44,0.24,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,mit,13,53,True,bcad6fff9f8591e091d2d57356a3f102197e8c5f,True,True,2024-06-12,2023-09-06,False,True,teknium/OpenHermes-13B,1
564
- πŸ’¬,internlm/internlm2_5-1_8b-chat,12.11,38.49,0.38,21.03,0.45,0.0,0.0,5.37,0.29,4.42,0.36,3.32,0.13,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",InternLM2ForCausalLM,Original,bfloat16,True,other,1,17,True,4426f00b854561fa60d555d2b628064b56bcb758,True,True,2024-08-07,2024-07-30,True,True,internlm/internlm2_5-1_8b-chat,0
565
  πŸ’¬,unsloth/mistral-7b-v0.3-bnb-4bit,12.08,37.7,0.38,14.86,0.4,0.53,0.01,2.24,0.27,2.97,0.36,14.2,0.23,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,1,True,868d8a51e8deb6fd948eabe5bc296c53bcf41073,True,True,2024-08-08,2024-08-04,True,False,llmat/Mistral-v0.3-7B-ORPO,1
566
  πŸ’¬,unsloth/mistral-7b-v0.3-bnb-4bit,12.02,36.4,0.36,15.59,0.4,0.15,0.0,2.57,0.27,2.97,0.35,14.46,0.23,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,float16,True,apache-2.0,7,1,True,868d8a51e8deb6fd948eabe5bc296c53bcf41073,True,True,2024-08-06,2024-08-04,True,False,llmat/Mistral-v0.3-7B-ORPO,1
567
  πŸ”Ά,TencentARC/MetaMath-Mistral-Pro,12.01,21.19,0.21,22.37,0.44,4.61,0.05,2.57,0.27,4.99,0.35,16.35,0.25,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,apache-2.0,8,5,True,3835d38de15ed2a04c32aca879b782fc50e390bf,True,True,2024-06-12,2024-02-26,False,True,TencentARC/MetaMath-Mistral-Pro,0
@@ -586,17 +592,17 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
586
  πŸ”Ά,Qwen/Qwen2-1.5B,11.07,30.14,0.3,10.43,0.35,0.91,0.01,2.46,0.27,9.74,0.41,12.74,0.21,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,1,21,True,86fcccbf921b7eb8a4d348e4a3cde0beb63d6626,True,True,2024-06-26,2024-06-23,True,False,Replete-AI/Replete-Coder-Qwen2-1.5b,1
587
  πŸ’¬,meta-llama/Llama-2-13b-chat-hf,11.0,39.85,0.4,7.16,0.33,0.6,0.01,0.0,0.23,8.16,0.4,10.26,0.19,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,llama2,13,1014,True,a2cb7a712bb6e5e736ca7f8cd98167f81a0b5bd8,True,True,2024-06-12,2023-07-13,True,True,meta-llama/Llama-2-13b-chat-hf,0
588
  🟒,meta-llama/Llama-2-13b-hf,10.99,24.82,0.25,17.22,0.41,1.06,0.01,4.14,0.28,3.39,0.35,15.31,0.24,🟒 pretrained,LlamaForCausalLM,Original,float16,True,llama2,13,566,True,5c31dfb671ce7cfe2d7bb7c04375e44c55e815b1,True,True,2024-06-12,2023-07-13,False,True,meta-llama/Llama-2-13b-hf,0
589
- πŸ’¬,THUDM/glm-4-9b-chat,10.97,0.0,0.0,25.21,0.47,0.0,0.0,8.5,0.31,8.06,0.4,24.07,0.32,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",ChatGLMModelM,Original,bfloat16,True,other,9,546,True,04419001bc63e05e70991ade6da1f91c4aeec278,True,True,2024-07-09,2024-06-04,True,False,THUDM/glm-4-9b-chat,0
  πŸ”Ά,winglian/Llama-3-8b-64k-PoSE,10.89,28.57,0.29,13.31,0.37,2.64,0.03,1.45,0.26,3.08,0.34,16.3,0.25,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,,8,74,False,5481d9b74a3ec5a95789673e194c8ff86e2bc2bc,True,True,2024-06-26,2024-04-24,True,False,winglian/Llama-3-8b-64k-PoSE,0
  πŸ”Ά,Josephgflowers/Cinder-Phi-2-V1-F16-gguf,10.86,23.57,0.24,22.45,0.44,0.0,0.0,4.25,0.28,1.97,0.34,12.9,0.22,πŸ”Ά fine-tuned on domain-specific datasets,PhiForCausalLM,Original,float16,True,mit,2,4,True,85629ec9b18efee31d07630664e7a3815121badf,True,True,2024-06-26,2024-02-25,True,False,Josephgflowers/Cinder-Phi-2-V1-F16-gguf,0
  πŸ”Ά,lmsys/vicuna-7b-v1.5,10.78,23.52,0.24,15.15,0.39,0.76,0.01,1.12,0.26,11.42,0.42,12.74,0.21,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama2,7,286,True,3321f76e3f527bd14065daf69dad9344000a201d,True,True,2024-06-12,2023-07-29,False,True,lmsys/vicuna-7b-v1.5,0
  πŸ’¬,allenai/OLMo-7B-Instruct-hf,10.76,34.73,0.35,13.16,0.37,0.83,0.01,2.8,0.27,4.33,0.38,8.72,0.18,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",OlmoForCausalLM,Original,bfloat16,True,apache-2.0,7,0,True,2ea947518df93433aa71219f29b36c72ac63be95,True,True,2024-06-27,2024-06-04,True,True,allenai/OLMo-7B-Instruct-hf,0
  πŸ”Ά,yam-peleg/Hebrew-Mistral-7B-200K,10.64,18.56,0.19,17.49,0.41,2.34,0.02,3.47,0.28,4.53,0.38,17.48,0.26,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,apache-2.0,7,15,True,7b51c7b31e3d9e29ea964c579a45233cfad255fe,True,True,2024-07-11,2024-05-05,False,False,yam-peleg/Hebrew-Mistral-7B-200K,0
  πŸ”Ά,kno10/ende-chat-0.0.5,10.61,34.04,0.34,11.13,0.36,0.6,0.01,2.01,0.27,7.1,0.39,8.78,0.18,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,apache-2.0,7,0,True,fff913e8ce204bab72b02582b663db669cb61412,True,True,2024-06-27,2024-06-27,True,False,kno10/ende-chat-0.0.5,0
- πŸ”Ά,VAGOsolutions/SauerkrautLM-gemma-2-2b-it,10.56,13.21,0.13,18.89,0.42,0.08,0.0,4.03,0.28,8.4,0.4,18.76,0.27,πŸ”Ά fine-tuned on domain-specific datasets,Gemma2ForCausalLM,Original,bfloat16,True,gemma,2,5,True,7fd35fcb32aebfc422e535739161d7528fc562d5,True,True,2024-08-26,2024-08-03,True,False,VAGOsolutions/SauerkrautLM-gemma-2-2b-it,0
  πŸ’¬,internlm/internlm2-chat-1_8b,10.5,23.87,0.24,20.67,0.45,2.42,0.02,2.13,0.27,4.61,0.36,9.33,0.18,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",InternLM2ForCausalLM,Original,bfloat16,True,other,1,28,True,4e226eeb354499f4d34ef4c27f6939f377475cc1,True,True,2024-06-12,2024-01-30,True,True,internlm/internlm2-chat-1_8b,0
  πŸ’¬,tiiuae/falcon-40b-instruct,10.41,24.54,0.25,17.22,0.41,1.51,0.02,0.0,0.25,5.16,0.38,14.02,0.23,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",FalconForCausalLM,Original,bfloat16,True,apache-2.0,40,1171,True,ecb78d97ac356d098e79f0db222c9ce7c5d9ee5f,True,True,2024-06-09,2023-05-25,False,True,tiiuae/falcon-40b-instruct,0
- 🟒,Qwen/Qwen2-1.5B,10.32,21.13,0.21,11.78,0.36,6.27,0.06,1.9,0.26,3.59,0.37,17.24,0.26,🟒 pretrained,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,1,64,True,8a16abf2848eda07cc5253dec660bf1ce007ad7a,True,True,2024-06-09,2024-05-31,False,True,Qwen/Qwen2-1.5B,0
  🟩,meta-llama/Meta-Llama-3-8B,10.28,10.44,0.1,18.68,0.42,1.51,0.02,4.47,0.28,6.15,0.39,20.44,0.28,🟩 continuously pretrained,LlamaForCausalLM,Original,bfloat16,True,llama3,6,6,True,7000b39346162f95f19aa4ca3975242db61902d7,True,True,2024-06-26,2024-05-17,False,False,pszemraj/Llama-3-6.3b-v0.1,1
  πŸ”Ά,lmsys/vicuna-13b-v1.3,10.27,33.44,0.33,7.49,0.34,0.45,0.0,2.35,0.27,4.09,0.37,13.81,0.22,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,,13,194,True,6566e9cb1787585d1147dcf4f9bc48f29e1328d2,True,True,2024-06-28,2023-06-18,True,True,lmsys/vicuna-13b-v1.3,0
  πŸ”Ά,google/gemma-2-2b,10.26,20.18,0.2,12.5,0.37,2.42,0.02,1.68,0.26,11.27,0.42,13.52,0.22,πŸ”Ά fine-tuned on domain-specific datasets,Gemma2ForCausalLM,Original,float16,True,gemma,2,302,True,0738188b3055bc98daf0fe7211f0091357e5b979,True,True,2024-08-04,2024-07-16,False,True,google/gemma-2-2b,0
@@ -611,6 +617,7 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
  🟒,google/flan-t5-large,9.42,22.01,0.22,17.51,0.42,0.0,0.0,0.11,0.25,9.01,0.41,7.88,0.17,🟒 pretrained,T5ForConditionalGeneration,Original,float16,True,apache-2.0,0,523,True,0613663d0d48ea86ba8cb3d7a44f0f65dc596a2a,True,True,2024-08-14,2022-10-21,False,True,google/flan-t5-large,0
  🟒,meta-llama/Llama-2-7b-chat-hf,9.4,39.65,0.4,4.49,0.31,0.68,0.01,0.56,0.25,3.48,0.37,7.52,0.17,🟒 pretrained,LlamaForCausalLM,Original,float16,True,llama2,6,3801,True,f5db02db724555f92da89c216ac04704f23d4590,True,True,2024-08-30,2023-07-13,True,True,meta-llama/Llama-2-7b-chat-hf,0
  πŸ”Ά,iRyanBell/ARC1-II,9.32,17.08,0.17,7.25,0.34,0.76,0.01,2.91,0.27,20.31,0.49,7.62,0.17,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,llama3,8,1,True,c81076b9bdaac0722b33e411a49b07a296e8fae8,True,True,2024-06-26,2024-06-12,False,False,iRyanBell/ARC1-II,0
  πŸ”Ά,NousResearch/Nous-Hermes-llama-2-7b,9.28,17.29,0.17,13.79,0.38,0.68,0.01,1.79,0.26,11.68,0.43,10.44,0.19,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,mit,6,68,True,b7c3ec54b754175e006ef75696a2ba3802697078,True,True,2024-06-12,2023-07-25,False,True,NousResearch/Nous-Hermes-llama-2-7b,0
  πŸ’¬,stabilityai/stablelm-2-zephyr-1_6b,9.26,32.79,0.33,6.71,0.34,2.11,0.02,0.0,0.24,5.99,0.35,7.93,0.17,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",StableLmForCausalLM,Original,float16,True,other,1,176,True,2f275b1127d59fc31e4f7c7426d528768ada9ea4,True,True,2024-06-12,2024-01-19,True,True,stabilityai/stablelm-2-zephyr-1_6b,0
  πŸ”Ά,huggyllama/llama-13b,9.25,24.11,0.24,16.15,0.4,1.21,0.01,0.67,0.26,2.81,0.35,10.58,0.2,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,other,13,137,True,bf57045473f207bb1de1ed035ace226f4d9f9bba,True,True,2024-07-04,2023-04-03,False,False,huggyllama/llama-13b,0
@@ -706,17 +713,17 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
  🟒,facebook/opt-1.3b,5.25,23.83,0.24,3.65,0.31,0.76,0.01,0.0,0.24,2.08,0.34,1.19,0.11,🟒 pretrained,OPTForCausalLM,Original,float16,True,other,1,147,True,3f5c25d0bc631cb57ac65913f76e22c2dfb61d62,True,True,2024-06-12,2022-05-11,False,True,facebook/opt-1.3b,0
  πŸ’¬,microsoft/DialoGPT-medium,5.25,14.79,0.15,2.56,0.3,0.0,0.0,0.56,0.25,12.28,0.43,1.32,0.11,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",GPT2LMHeadModel,Original,bfloat16,True,mit,0,314,True,7b40bb0f92c45fefa957d088000d8648e5c7fa33,True,True,2024-06-13,2022-03-02,True,True,microsoft/DialoGPT-medium,0
  🟒,stabilityai/stablelm-2-1_6b,5.22,11.57,0.12,8.63,0.34,0.15,0.0,0.0,0.25,5.79,0.39,5.15,0.15,🟒 pretrained,StableLmForCausalLM,Original,float16,True,other,1,177,True,8879812cccd176fbbe9ceb747b815bcc7d6499f8,True,True,2024-06-12,2024-01-18,False,True,stabilityai/stablelm-2-1_6b,0
- πŸ’¬,HuggingFaceTB/SmolLM-1.7B,5.14,23.48,0.23,2.08,0.29,0.0,0.0,1.34,0.26,2.08,0.35,1.85,0.12,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,1,93,True,0ad161e59935a9a691dfde2818df8b98786f30a7,True,True,2024-07-18,2024-07-15,True,False,HuggingFaceTB/SmolLM-1.7B-Instruct,1
  🟒,Qwen/Qwen1.5-0.5B,5.14,17.06,0.17,5.04,0.32,0.45,0.0,0.56,0.25,4.3,0.36,3.41,0.13,🟒 pretrained,Qwen2ForCausalLM,Original,bfloat16,True,other,0,139,True,8f445e3628f3500ee69f24e1303c9f10f5342a39,True,True,2024-06-13,2024-01-22,False,True,Qwen/Qwen1.5-0.5B,0
  🟒,EleutherAI/pythia-410m,5.11,21.95,0.22,2.72,0.3,0.3,0.0,1.23,0.26,3.06,0.36,1.42,0.11,🟒 pretrained,GPTNeoXForCausalLM,Original,float16,True,apache-2.0,0,21,True,9879c9b5f8bea9051dcb0e68dff21493d67e9d4f,True,True,2024-06-09,2023-02-13,False,True,EleutherAI/pythia-410m,0
  🟒,tiiuae/falcon-7b,5.1,18.21,0.18,5.96,0.33,0.53,0.01,0.0,0.24,4.5,0.38,1.39,0.11,🟒 pretrained,FalconForCausalLM,Original,bfloat16,True,apache-2.0,7,1062,True,898df1396f35e447d5fe44e0a3ccaaaa69f30d36,True,True,2024-06-09,2023-04-24,False,True,tiiuae/falcon-7b,0
  πŸ’¬,tiiuae/falcon-7b-instruct,5.02,19.69,0.2,4.82,0.32,0.6,0.01,0.0,0.25,3.25,0.36,1.73,0.12,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",FalconForCausalLM,Original,bfloat16,True,apache-2.0,7,900,True,cf4b3c42ce2fdfe24f753f0f0d179202fea59c99,True,True,2024-06-09,2023-04-25,False,True,tiiuae/falcon-7b-instruct,0
- 🟒,openai-community/gpt2-xl,4.98,20.39,0.2,2.58,0.3,0.3,0.0,1.12,0.26,4.04,0.37,1.46,0.11,🟒 pretrained,GPT2LMHeadModel,Original,bfloat16,True,mit,1,302,True,15ea56dee5df4983c59b2538573817e1667135e2,True,True,2024-06-12,2022-03-02,False,True,openai-community/gpt2-xl,0
  πŸ”Ά,LeroyDyer/_Spydaz_Web_AI_ALPACA,4.95,14.15,0.14,5.6,0.32,0.0,0.0,2.13,0.27,2.56,0.34,5.28,0.15,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,mit,7,1,True,9f86dd12d4c75e0290aa3084a44cf111bc975144,True,True,2024-08-06,2024-08-02,False,False,LeroyDyer/_Spydaz_Web_AI_ChatQA,1
  πŸ”Ά,RESMPDEV/Qwen2-Wukong-0.5B,4.95,18.54,0.19,4.2,0.31,0.0,0.0,0.0,0.24,3.33,0.35,3.64,0.13,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,0,6,True,52c58a4aa3d0b44c363c5761fa658243f5c53943,True,True,2024-06-30,2024-06-29,True,False,RESMPDEV/Qwen2-Wukong-0.5B,0
  πŸ”Ά,togethercomputer/GPT-NeoXT-Chat-Base-20B,4.94,18.3,0.18,6.83,0.33,1.13,0.01,0.0,0.25,1.76,0.35,1.61,0.11,πŸ”Ά fine-tuned on domain-specific datasets,GPTNeoXForCausalLM,Original,float16,True,apache-2.0,20,694,True,d386708e84d862a65f7d2b4989f64750cb657227,True,True,2024-06-12,2023-03-03,False,True,togethercomputer/GPT-NeoXT-Chat-Base-20B,0
  πŸ”Ά,togethercomputer/RedPajama-INCITE-Chat-3B-v1,4.75,16.52,0.17,5.16,0.32,0.3,0.0,0.0,0.24,5.09,0.37,1.41,0.11,πŸ”Ά fine-tuned on domain-specific datasets,GPTNeoXForCausalLM,Original,float16,True,apache-2.0,3,150,True,f0e0995eba801096ed04cb87931d96a8316871af,True,True,2024-06-13,2023-05-05,False,True,togethercomputer/RedPajama-INCITE-Chat-3B-v1,0
- πŸ”Ά,HuggingFaceTB/SmolLM-360M,4.71,19.52,0.2,2.08,0.29,0.0,0.0,1.9,0.26,2.9,0.35,1.85,0.12,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,0,62,True,8e951de8c220295ea4f85d078c4e320df7137535,True,True,2024-08-20,2024-07-15,True,False,HuggingFaceTB/SmolLM-360M-Instruct,1
  🟒,TinyLlama/TinyLlama_v1.1,4.7,20.01,0.2,3.21,0.3,0.45,0.0,0.0,0.25,3.98,0.37,0.54,0.1,🟒 pretrained,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,1,56,True,ff3c701f2424c7625fdefb9dd470f45ef18b02d6,True,True,2024-06-12,2024-03-09,False,True,TinyLlama/TinyLlama_v1.1,0
  🟒,AI-Sweden-Models/gpt-sw3-40b,4.68,14.7,0.15,6.89,0.33,0.6,0.01,0.0,0.23,2.84,0.36,3.06,0.13,🟒 pretrained,GPT2LMHeadModel,Original,float16,True,other,39,8,True,1af27994df1287a7fac1b10d60e40ca43a22a385,True,True,2024-06-26,2023-02-22,False,False,AI-Sweden-Models/gpt-sw3-40b,0
  🀝,paloalma/TW3-JRGL-v2,4.57,3.1,0.03,4.11,0.31,5.21,0.05,0.78,0.26,12.38,0.43,1.85,0.12,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,72,0,True,aca3f0ba2bfb90038a9e2cd5b486821d4c181b46,True,True,2024-08-29,2024-04-01,False,False,paloalma/TW3-JRGL-v2,0
@@ -727,7 +734,7 @@ T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,G
  🟒,EleutherAI/gpt-neo-125m,4.38,19.05,0.19,3.44,0.31,0.45,0.0,0.45,0.25,2.62,0.36,0.28,0.1,🟒 pretrained,GPTNeoForCausalLM,Original,bfloat16,True,mit,0,176,True,21def0189f5705e2521767faed922f1f15e7d7db,True,True,2024-08-10,2022-03-02,False,True,EleutherAI/gpt-neo-125m,0
  πŸ”Ά,LeroyDyer/Mixtral_AI_SwahiliTron_7b,4.27,15.34,0.15,3.21,0.31,0.83,0.01,2.01,0.27,1.92,0.34,2.31,0.12,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,mit,7,3,True,fd997ccdee03788e7e79944d26d9c641dc4fcd4c,True,False,2024-07-12,2024-04-10,True,False,LeroyDyer/Mixtral_AI_SwahiliTron_7b,0
  🟒,bigscience/bloom-3b,4.26,12.71,0.13,3.42,0.31,0.08,0.0,0.0,0.24,7.89,0.4,1.48,0.11,🟒 pretrained,BloomForCausalLM,Original,bfloat16,True,bigscience-bloom-rail-1.0,3,88,True,52bc5b43010b4844513826b8be3f78c7344c37d7,True,True,2024-06-13,2022-05-19,False,True,bigscience/bloom-3b,0
- πŸ’¬,HuggingFaceTB/SmolLM-135M,4.23,15.96,0.16,2.08,0.29,0.0,0.0,1.9,0.26,3.62,0.37,1.84,0.12,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,0,80,True,8ca7af58e27777cae460ad8ca3ab9db15f5c160d,True,True,2024-07-18,2024-07-15,True,False,HuggingFaceTB/SmolLM-135M-Instruct,1
  πŸ’¬,JackFram/llama-160m,4.1,15.75,0.16,3.17,0.3,0.0,0.0,1.01,0.26,3.17,0.37,1.51,0.11,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,apache-2.0,0,13,True,e7f50665676821867ee7dfad32d0ca9fb68fc6bc,True,True,2024-07-23,2023-12-20,True,False,Felladrin/Llama-160M-Chat-v1,1
  πŸ’¬,davidkim205/Rhea-72b-v0.5,4.02,1.45,0.01,3.67,0.31,5.51,0.06,0.34,0.25,11.32,0.42,1.85,0.12,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,72,131,True,bc3806efb23d2713e6630a748d9747fd76b27169,True,True,2024-07-01,2024-03-22,True,False,davidkim205/Rhea-72b-v0.5,0
  🟒,bigscience/bloom-1b7,3.97,10.44,0.1,4.4,0.31,0.08,0.0,1.12,0.26,6.84,0.39,0.96,0.11,🟒 pretrained,BloomForCausalLM,Original,bfloat16,True,bigscience-bloom-rail-1.0,1,116,True,cc72a88036c2fb937d65efeacc57a0c2ef5d6fe5,True,True,2024-06-13,2022-05-19,False,True,bigscience/bloom-1b7,0
 
  T,Model,Average ⬆️,IFEval,IFEval Raw,BBH,BBH Raw,MATH Lvl 5,MATH Lvl 5 Raw,GPQA,GPQA Raw,MUSR,MUSR Raw,MMLU-PRO,MMLU-PRO Raw,Type,Architecture,Weight type,Precision,Not_Merged,Hub License,#Params (B),Hub ❀️,Available on the hub,Model sha,Flagged,MoE,Submission Date,Upload To Hub Date,Chat Template,Maintainer's Highlight,fullname,Generation
  πŸ”Ά,dnhkng/RYS-XLarge,44.75,79.96,0.8,58.77,0.71,38.97,0.39,17.9,0.38,23.72,0.5,49.2,0.54,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,mit,77,55,True,0f84dd9dde60f383e1e2821496befb4ce9a11ef6,True,True,2024-08-07,2024-07-24,False,False,dnhkng/RYS-XLarge,0
  πŸ’¬,Qwen/Qwen2-72B,43.61,81.63,0.82,57.33,0.7,36.03,0.36,17.45,0.38,20.15,0.47,49.05,0.54,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,72,25,True,0369c39770f45f2464587918f2dbdb8449ea3a0d,True,True,2024-06-26,2024-06-08,True,False,MaziyarPanahi/calme-2.1-qwen2-72b,2
+ πŸ’¬,Qwen/Qwen2-7B,43.4,80.08,0.8,56.8,0.69,41.16,0.41,16.55,0.37,16.52,0.45,49.27,0.54,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,72,3,True,529e9bd80a76d943409bc92bb246aa7ca63dd9e6,True,True,2024-08-06,2024-07-09,True,False,MaziyarPanahi/calme-2.2-qwen2-72b,1
  πŸ”Ά,Undi95/MG-FinalMix-72B (Merge),43.28,80.14,0.8,57.5,0.7,33.61,0.34,18.01,0.39,21.22,0.48,49.19,0.54,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,,72,2,False,6c9c2f5d052495dcd49f44bf5623d21210653c65,True,True,2024-07-13,2024-06-25,True,False,Undi95/MG-FinalMix-72B,1
  πŸ’¬,Qwen/Qwen2-72B,42.49,79.89,0.8,57.48,0.7,35.12,0.35,16.33,0.37,17.17,0.46,48.92,0.54,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,72,629,True,1af63c698f59c4235668ec9c1395468cb7cd7e79,True,True,2024-06-26,2024-05-28,False,True,Qwen/Qwen2-72B-Instruct,1
  πŸ’¬,Qwen/Qwen2-72B,42.17,76.06,0.76,57.65,0.7,35.27,0.35,18.79,0.39,15.62,0.45,49.64,0.55,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,72,158,True,fef27e0f235ae8858b84b765db773a2a954110dd,True,True,2024-07-25,2024-06-17,True,False,alpindale/magnum-72b-v1,2
+ πŸ’¬,meta-llama/Meta-Llama-3.1-70B,41.74,86.69,0.87,55.93,0.69,28.02,0.28,14.21,0.36,17.69,0.46,47.88,0.53,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3.1,70,417,True,b9461463b511ed3c0762467538ea32cf7c9669f2,True,True,2024-08-15,2024-07-16,True,True,meta-llama/Meta-Llama-3.1-70B-Instruct,1
  πŸ’¬,abacusai/Smaug-Qwen2-72B-Instruct,41.08,78.25,0.78,56.27,0.69,35.35,0.35,14.88,0.36,15.18,0.44,46.56,0.52,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,72,5,True,af015925946d0c60ef69f512c3b35f421cf8063d,True,True,2024-07-29,2024-06-26,True,True,abacusai/Smaug-Qwen2-72B-Instruct,0
  🀝,paulml/ECE-ILAB-Q1,40.93,78.65,0.79,53.7,0.67,26.13,0.26,18.23,0.39,18.81,0.46,50.06,0.55,🀝 base merges and moerges,Qwen2ForCausalLM,Original,bfloat16,False,other,72,0,True,393bea0ee85e4c752acd5fd77ce07f577fc13bd9,True,True,2024-06-26,2024-06-06,True,False,paulml/ECE-ILAB-Q1,0
  πŸ”Ά,pankajmathur/orca_mini_v7_72b,39.06,59.3,0.59,55.06,0.68,26.44,0.26,18.01,0.39,24.21,0.51,51.35,0.56,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,72,10,True,447f11912cfa496e32e188a55214043a05760d3a,True,True,2024-06-26,2024-06-26,False,False,pankajmathur/orca_mini_v7_72b,0
  🀝,gbueno86/Meta-LLama-3-Cat-Smaug-LLama-70b (Merge),38.27,80.72,0.81,51.51,0.67,26.81,0.27,10.29,0.33,15.0,0.44,45.28,0.51,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,llama3,70,1,True,2d73b7e1c7157df482555944d6a6b1362bc6c3c5,True,True,2024-06-27,2024-05-24,True,False,gbueno86/Meta-LLama-3-Cat-Smaug-LLama-70b,1
  πŸ’¬,meta-llama/Meta-Llama-3-70B,37.98,82.08,0.82,48.57,0.64,22.96,0.23,12.19,0.34,15.3,0.44,46.74,0.52,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,16,True,95366b974baedee4d95c1e841bc3d15e94753804,True,True,2024-06-26,2024-04-27,True,False,MaziyarPanahi/calme-2.2-llama3-70b,2
  πŸ”Ά,VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct,37.82,80.45,0.8,52.03,0.67,21.68,0.22,10.4,0.33,13.54,0.43,48.8,0.54,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,70,20,True,707cfd1a93875247c0223e0c7e3d86d58c432318,True,True,2024-06-26,2024-04-24,True,False,VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct,0
+ πŸ’¬,meta-llama/Meta-Llama-3.1-70B,37.31,76.61,0.77,53.77,0.68,13.75,0.14,14.88,0.36,23.43,0.49,41.41,0.47,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,66,True,093242c69a91f8d9d5b8094c380b88772f9bd7f8,True,True,2024-08-28,2024-07-29,True,True,NousResearch/Hermes-3-Llama-3.1-70B,1
  πŸ”Ά,ValiantLabs/Llama3-70B-Fireplace,36.82,77.74,0.78,49.56,0.65,19.64,0.2,13.98,0.35,16.77,0.44,43.25,0.49,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama3,70,3,True,220079e4115733991eb19c30d5480db9696a665e,True,True,2024-06-26,2024-05-09,True,False,ValiantLabs/Llama3-70B-Fireplace,0
  πŸ’¬,tenyx/Llama3-TenyxChat-70B,36.54,80.87,0.81,49.62,0.65,22.66,0.23,6.82,0.3,12.52,0.43,46.78,0.52,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,62,True,a85d31e3af8fcc847cc9169f1144cf02f5351fab,True,True,2024-08-04,2024-04-26,True,False,tenyx/Llama3-TenyxChat-70B,0
  πŸ’¬,meta-llama/Meta-Llama-3-70B,36.18,80.99,0.81,50.19,0.65,23.34,0.23,4.92,0.29,10.92,0.42,46.74,0.52,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,1369,True,7129260dd854a80eb10ace5f61c20324b472b31c,True,True,2024-06-12,2024-04-17,True,True,meta-llama/Meta-Llama-3-70B-Instruct,1
 
  πŸ”Ά,cloudyu/Llama-3-70Bx2-MOE,35.35,54.82,0.55,51.42,0.66,19.86,0.2,19.13,0.39,20.85,0.48,46.02,0.51,πŸ”Ά fine-tuned on domain-specific datasets,MixtralForCausalLM,Original,bfloat16,True,llama3,126,0,True,b8bd85e8db8e4ec352b93441c92e0ae1334bf5a7,True,False,2024-06-27,2024-05-20,False,False,cloudyu/Llama-3-70Bx2-MOE,0
  πŸ”Ά,Sao10K/L3-70B-Euryale-v2.1,35.35,73.84,0.74,48.7,0.65,20.85,0.21,10.85,0.33,12.25,0.42,45.6,0.51,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,cc-by-nc-4.0,70,108,True,36ad832b771cd783ea7ad00ed39e61f679b1a7c6,True,True,2024-07-01,2024-06-11,True,False,Sao10K/L3-70B-Euryale-v2.1,0
  πŸ”Ά,migtissera/Llama-3-70B-Synthia-v3.5,35.2,60.76,0.61,49.12,0.65,18.96,0.19,18.34,0.39,23.39,0.49,40.65,0.47,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama3,70,5,True,8744db0bccfc18f1847633da9d29fc89b35b4190,True,True,2024-08-28,2024-05-26,True,False,migtissera/Llama-3-70B-Synthia-v3.5,0
+ 🟒,Qwen/Qwen2-72B,35.13,38.24,0.38,51.86,0.66,29.15,0.29,19.24,0.39,19.73,0.47,52.56,0.57,🟒 pretrained,Qwen2ForCausalLM,Original,bfloat16,True,other,72,173,True,87993795c78576318087f70b43fbf530eb7789e7,True,True,2024-06-26,2024-05-22,False,True,Qwen/Qwen2-72B,0
  πŸ”Ά,Sao10K/L3-70B-Euryale-v2.1,35.11,72.81,0.73,49.19,0.65,20.24,0.2,10.85,0.33,12.05,0.42,45.51,0.51,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,cc-by-nc-4.0,70,108,True,36ad832b771cd783ea7ad00ed39e61f679b1a7c6,True,True,2024-06-26,2024-06-11,True,False,Sao10K/L3-70B-Euryale-v2.1,0
+ πŸ’¬,microsoft/Phi-3.5-MoE-instruct,35.1,69.25,0.69,48.77,0.64,20.54,0.21,14.09,0.36,17.33,0.46,40.64,0.47,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,bfloat16,True,mit,42,427,True,482a9ba0eb0e1fa1671e3560e009d7cec2e5147c,True,False,2024-08-21,2024-08-17,True,True,microsoft/Phi-3.5-MoE-instruct,0
  πŸ’¬,abacusai/Smaug-Llama-3-70B-Instruct-32K,34.72,77.61,0.78,49.07,0.65,21.22,0.21,6.15,0.3,12.43,0.42,41.83,0.48,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,20,True,33840982dc253968f32ef3a534ee0e025eb97482,True,True,2024-08-06,2024-06-11,True,True,abacusai/Smaug-Llama-3-70B-Instruct-32K,0
  πŸ”Ά,BAAI/Infinity-Instruct-3M-0613-Llama3-70B,34.47,68.21,0.68,51.33,0.66,14.88,0.15,14.43,0.36,16.53,0.45,41.44,0.47,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,70,4,True,9fc53668064bdda22975ca72c5a287f8241c95b3,True,True,2024-06-28,2024-06-27,True,False,BAAI/Infinity-Instruct-3M-0613-Llama3-70B,0
  πŸ’¬,dnhkng/RYS-Llama-3-Huge-Instruct,34.37,76.86,0.77,49.07,0.65,21.22,0.21,1.45,0.26,11.93,0.42,45.66,0.51,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,mit,99,1,True,cfe14a5339e88a7a89f075d9d48215d45f64acaf,True,True,2024-08-07,2024-08-06,True,False,dnhkng/RYS-Llama-3-Huge-Instruct,0
  πŸ’¬,mistralai/Mixtral-8x22B-v0.1,33.89,71.84,0.72,44.11,0.61,18.73,0.19,16.44,0.37,13.49,0.43,38.7,0.45,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MixtralForCausalLM,Original,bfloat16,True,apache-2.0,140,652,True,b0c3516041d014f640267b14feb4e9a84c8e8c71,True,False,2024-06-12,2024-04-16,True,True,mistralai/Mixtral-8x22B-Instruct-v0.1,1
  πŸ’¬,mistral-community/Mixtral-8x22B-v0.1,33.77,65.11,0.65,47.5,0.63,18.35,0.18,17.11,0.38,14.72,0.45,39.85,0.46,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MixtralForCausalLM,Original,float16,True,apache-2.0,140,259,True,a3be084543d278e61b64cd600f28157afc79ffd3,True,True,2024-06-12,2024-04-10,True,True,HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1,1
+ πŸ’¬,jpacifico/Chocolatine-14B-Instruct-DPO-v1.2,33.3,68.52,0.69,49.85,0.64,17.98,0.18,10.07,0.33,12.35,0.43,41.07,0.47,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,float16,True,mit,13,1,True,d34bbd55b48e553f28579d86f3ccae19726c6b39,True,True,2024-08-28,2024-08-12,True,False,jpacifico/Chocolatine-14B-Instruct-DPO-v1.2,0
  πŸ”Ά,migtissera/Tess-v2.5.2-Qwen2-72B,33.28,44.94,0.45,52.31,0.66,27.42,0.27,13.42,0.35,10.89,0.42,50.68,0.56,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,other,72,12,True,0435e634ad9bc8b1172395a535b78e6f25f3594f,True,True,2024-08-10,2024-06-13,True,False,migtissera/Tess-v2.5.2-Qwen2-72B,0
  πŸ’¬,microsoft/Phi-3-medium-4k-instruct,32.67,64.23,0.64,49.38,0.64,16.99,0.17,11.52,0.34,13.05,0.43,40.84,0.47,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,bfloat16,True,mit,13,203,True,d194e4e74ffad5a5e193e26af25bcfc80c7f1ffc,True,True,2024-06-12,2024-05-07,True,True,microsoft/Phi-3-medium-4k-instruct,0
  πŸ’¬,01-ai/Yi-1.5-34B-Chat,32.63,60.67,0.61,44.26,0.61,23.34,0.23,15.32,0.36,13.06,0.43,39.12,0.45,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,34,222,True,f3128b2d02d82989daae566c0a7eadc621ca3254,True,True,2024-06-12,2024-05-10,True,True,01-ai/Yi-1.5-34B-Chat,0
  πŸ”Ά,alpindale/WizardLM-2-8x22B,32.61,52.72,0.53,48.58,0.64,22.28,0.22,17.56,0.38,14.54,0.44,39.96,0.46,πŸ”Ά fine-tuned on domain-specific datasets,MixtralForCausalLM,Original,bfloat16,True,apache-2.0,140,370,True,087834da175523cffd66a7e19583725e798c1b4f,True,True,2024-06-28,2024-04-16,False,False,alpindale/WizardLM-2-8x22B,0
+ πŸ’¬,google/gemma-2-27b,32.31,79.78,0.8,49.27,0.65,0.68,0.01,16.67,0.38,9.11,0.4,38.35,0.45,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Gemma2ForCausalLM,Original,bfloat16,True,gemma,27,371,True,f6c533e5eb013c7e31fc74ef042ac4f3fb5cf40b,True,True,2024-08-07,2024-06-24,True,True,google/gemma-2-27b-it,1
  πŸ’¬,meta-llama/Meta-Llama-3-70B,32.18,50.27,0.5,48.4,0.64,22.66,0.23,11.97,0.34,13.1,0.43,46.71,0.52,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,12,True,cb03e4d810b82d86e7cb01ab146bade09a5d06d1,True,True,2024-06-26,2024-04-28,True,False,MaziyarPanahi/calme-2.4-llama3-70b,2
  πŸ’¬,internlm/internlm2_5-20b-chat,32.08,70.1,0.7,62.83,0.75,0.0,0.0,9.51,0.32,16.74,0.46,33.31,0.4,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",InternLM2ForCausalLM,Original,bfloat16,True,other,19,71,True,ef17bde929761255fee76d95e2c25969ccd93b0d,True,True,2024-08-12,2024-07-30,True,True,internlm/internlm2_5-20b-chat,0
  πŸ’¬,Qwen/Qwen2-72B,32.0,40.38,0.4,47.7,0.63,21.37,0.21,16.0,0.37,17.04,0.45,49.52,0.55,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,72,50,True,e79582577c2bf2af304221af0e8308b7e7d46ca1,True,True,2024-06-27,2024-05-27,True,True,cognitivecomputations/dolphin-2.9.2-qwen2-72b,1
 
  🀝,paloalma/Le_Triomphant-ECE-TW3,31.66,54.02,0.54,44.96,0.61,17.45,0.17,13.2,0.35,18.5,0.47,41.81,0.48,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,72,3,True,f72399253bb3e65c0f55e50461488c098f658a49,True,True,2024-07-25,2024-04-01,False,False,paloalma/Le_Triomphant-ECE-TW3,0
  πŸ”Ά,failspy/Phi-3-medium-4k-instruct-abliterated-v3,31.55,63.19,0.63,46.73,0.63,14.12,0.14,8.95,0.32,18.52,0.46,37.78,0.44,πŸ”Ά fine-tuned on domain-specific datasets,Phi3ForCausalLM,Original,bfloat16,True,mit,13,22,True,959b09eacf6cae85a8eb21b25e998addc89a367b,True,True,2024-07-29,2024-05-22,True,False,failspy/Phi-3-medium-4k-instruct-abliterated-v3,0
  πŸ’¬,Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO,31.42,47.99,0.48,51.03,0.65,17.45,0.17,10.18,0.33,20.53,0.48,41.37,0.47,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,float16,True,mit,13,3,True,b749dbcb19901b8fd0e9f38c923a24533569f895,True,True,2024-08-13,2024-06-15,True,False,Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO,0
+ πŸ’¬,CohereForAI/c4ai-command-r-plus,30.86,76.64,0.77,39.92,0.58,7.55,0.08,7.38,0.31,20.42,0.48,33.24,0.4,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",CohereForCausalLM,Original,float16,True,cc-by-nc-4.0,103,1640,True,fa1bd7fb1572ceb861bbbbecfa8af83b29fa8cca,True,True,2024-06-13,2024-04-03,True,True,CohereForAI/c4ai-command-r-plus,0
  πŸ’¬,internlm/internlm2_5-7b-chat,30.46,61.4,0.61,57.67,0.71,8.31,0.08,10.63,0.33,14.35,0.44,30.42,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",InternLM2ForCausalLM,Original,float16,True,other,7,147,True,bebb00121ee105b823647c3ba2b1e152652edc33,True,True,2024-07-03,2024-06-27,True,True,internlm/internlm2_5-7b-chat,0
  πŸ’¬,ValiantLabs/Llama3-70B-ShiningValiant2,30.45,61.22,0.61,46.71,0.63,7.1,0.07,10.74,0.33,13.64,0.43,43.31,0.49,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,70,4,True,bd6cce8da08ccefe9ec58cae3df4bf75c97d8950,True,True,2024-07-25,2024-04-20,True,False,ValiantLabs/Llama3-70B-ShiningValiant2,0
  🀝,altomek/YiSM-34B-0rn (Merge),30.15,42.84,0.43,45.38,0.61,20.62,0.21,16.22,0.37,14.76,0.44,41.06,0.47,🀝 base merges and moerges,LlamaForCausalLM,Original,float16,False,apache-2.0,34,1,True,7a481c67cbdd5c846d6aaab5ef9f1eebfad812c2,True,True,2024-06-27,2024-05-26,True,False,altomek/YiSM-34B-0rn,1
 
  🟒,dnhkng/RYS-Phi-3-medium-4k-instruct,28.38,43.91,0.44,46.75,0.62,11.78,0.12,13.98,0.35,11.09,0.43,42.74,0.48,🟒 pretrained,Phi3ForCausalLM,Original,bfloat16,True,mit,17,1,True,1009e916b1ff8c9a53bc9d8ff48bea2a15ccde26,True,True,2024-08-07,2024-08-06,False,False,dnhkng/RYS-Phi-3-medium-4k-instruct,0
  πŸ”Ά,NLPark/AnFeng_v3.1-Avocet,28.05,50.96,0.51,40.31,0.58,13.9,0.14,9.96,0.32,14.98,0.45,38.2,0.44,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,cc-by-nc-nd-4.0,34,0,True,5170739731033323e6e66a0f68d34790042a3b2a,True,True,2024-08-07,2024-08-03,False,False,NLPark/AnFeng_v3.1-Avocet,0
  🀝,OpenBuddy/openbuddy-zero-56b-v21.2-32k,27.99,50.57,0.51,44.8,0.61,12.99,0.13,9.06,0.32,12.78,0.43,37.77,0.44,🀝 base merges and moerges,LlamaForCausalLM,Original,float16,True,other,56,0,True,c7a1a4a6e798f75d1d3219ab9ff9f2692e29f7d5,True,True,2024-06-26,2024-06-10,True,False,OpenBuddy/openbuddy-zero-56b-v21.2-32k,0
+ πŸ’¬,meta-llama/Meta-Llama-3.1-8B,27.91,78.56,0.79,29.89,0.51,17.6,0.18,2.35,0.27,8.41,0.39,30.68,0.38,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3.1,8,2147,True,df34336b42332c6d360959e259cd6271c6a09fd4,True,True,2024-08-15,2024-07-18,True,True,meta-llama/Meta-Llama-3.1-8B-Instruct,1
  πŸ’¬,vicgalle/Configurable-Llama-3.1-8B-Instruct,27.77,83.12,0.83,29.66,0.5,15.86,0.16,3.24,0.27,5.93,0.38,28.8,0.36,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,apache-2.0,8,8,True,133b3ab1a5385ff9b3d17da2addfe3fc1fd6f733,True,True,2024-08-05,2024-07-24,True,False,vicgalle/Configurable-Llama-3.1-8B-Instruct,0
  πŸ”Ά,BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B,27.74,51.86,0.52,35.38,0.55,13.97,0.14,13.87,0.35,16.72,0.46,34.65,0.41,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,8,2,True,a42c86c61b98ca4fdf238d688fe6ea11cf414d29,True,True,2024-08-05,2024-07-09,True,False,BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B,0
  πŸ”Ά,01-ai/Yi-1.5-34B,27.73,38.53,0.39,44.17,0.61,15.18,0.15,12.42,0.34,16.97,0.46,39.1,0.45,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,34,34,True,1ec522298a6935c881df6dc29d3669833bd8672d,True,True,2024-07-27,2024-05-18,True,True,cognitivecomputations/dolphin-2.9.1-yi-1.5-34b,1
  πŸ’¬,01-ai/Yi-1.5-9B-Chat,27.71,60.46,0.6,36.95,0.56,11.63,0.12,11.3,0.33,12.84,0.43,33.06,0.4,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,8,126,True,bc87d8557c98dc1e5fdef6ec23ed31088c4d3f35,True,True,2024-06-12,2024-05-10,True,True,01-ai/Yi-1.5-9B-Chat,0
  πŸ’¬,jpacifico/Chocolatine-3B-Instruct-DPO-Revised,27.63,56.23,0.56,37.16,0.55,14.5,0.15,9.62,0.32,15.1,0.45,33.21,0.4,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,float16,True,mit,3,10,True,c403df6c0f78148cfb477972455cbd859149311a,True,True,2024-07-19,2024-07-17,True,False,jpacifico/Chocolatine-3B-Instruct-DPO-Revised,0
+ πŸ’¬,microsoft/Phi-3.5-mini-instruct,27.4,57.75,0.58,36.75,0.55,14.95,0.15,11.97,0.34,10.1,0.4,32.91,0.4,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,bfloat16,True,mit,3,400,True,64963004ad95869fa73a30279371c8778509ac84,True,True,2024-08-21,2024-08-16,True,True,microsoft/Phi-3.5-mini-instruct,0
  πŸ’¬,microsoft/Phi-3-mini-4k-instruct,27.2,54.77,0.55,36.56,0.55,14.2,0.14,10.96,0.33,13.12,0.43,33.58,0.4,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,float16,True,mit,3,995,True,c1358f8a35e6d2af81890deffbbfa575b978c62f,True,True,2024-07-02,2024-04-22,True,True,microsoft/Phi-3-mini-4k-instruct,0
  πŸ’¬,mistralai/Mixtral-8x7B-v0.1,27.13,58.97,0.59,37.11,0.55,10.88,0.11,9.51,0.32,16.68,0.46,29.62,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MixtralForCausalLM,Original,bfloat16,True,apache-2.0,46,408,True,286ae6737d048ad1d965c2e830864df02db50f2f,True,False,2024-07-27,2024-01-11,True,True,NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO,1
  πŸ’¬,Qwen/Qwen1.5-32B-Chat,27.1,55.32,0.55,44.55,0.61,6.65,0.07,7.49,0.31,10.2,0.42,38.41,0.45,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,32,106,True,0997b012af6ddd5465d40465a8415535b2f06cfc,True,True,2024-06-12,2024-04-03,True,True,Qwen/Qwen1.5-32B-Chat,0
 
  πŸ”Ά,meta-llama/Meta-Llama-3-8B-Instruct,26.7,76.3,0.76,27.9,0.49,6.8,0.07,7.72,0.31,9.85,0.41,31.62,0.38,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,8,6,True,ddf91fdc0a3ab5e5d76864f1c4cf44e5adacd565,True,True,2024-08-06,2024-05-30,True,False,MaziyarPanahi/Llama-3-8B-Instruct-v0.9,3
  🟒,Qwen/Qwen1.5-32B,26.69,32.97,0.33,38.98,0.57,26.66,0.27,10.63,0.33,12.04,0.43,38.89,0.45,🟒 pretrained,Qwen2ForCausalLM,Original,bfloat16,True,other,32,79,True,cefef80dc06a65f89d1d71d0adbc56d335ca2490,True,True,2024-06-13,2024-04-01,False,True,Qwen/Qwen1.5-32B,0
  πŸ”Ά,meta-llama/Meta-Llama-3-8B-Instruct,26.66,76.67,0.77,27.92,0.49,4.91,0.05,7.83,0.31,10.81,0.42,31.8,0.39,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,8,6,True,4411eb9f6f5e4c462a6bdbc64c26dcc123100b66,True,True,2024-06-26,2024-06-04,True,False,MaziyarPanahi/Llama-3-8B-Instruct-v0.10,4
+ πŸ’¬,jpacifico/Chocolatine-3B-Instruct-DPO-v1.2,26.59,54.55,0.55,36.0,0.55,12.84,0.13,11.86,0.34,12.33,0.42,31.97,0.39,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,float16,True,mit,3,1,True,ebc9de6c266586adb1ec0db31bf050d1cd8fdffe,True,True,2024-08-28,2024-08-22,True,False,jpacifico/Chocolatine-3B-Instruct-DPO-v1.2,0
  πŸ”Ά,meta-llama/Meta-Llama-3-8B,26.58,75.3,0.75,28.08,0.49,5.36,0.05,7.38,0.31,11.68,0.43,31.69,0.39,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,llama3,8,23,True,d40847d2981b588690c1dc21d5157d3f4afb2978,True,True,2024-06-27,2024-05-01,True,False,DeepMount00/Llama-3-8b-Ita,1
  πŸ”Ά,VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct,26.52,74.45,0.74,28.05,0.49,5.74,0.06,7.83,0.31,11.28,0.42,31.75,0.39,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,8,51,True,37127c44d7c0fb56cef817270c4b1a6802d8793a,True,True,2024-07-22,2024-04-19,True,False,VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct,0
  πŸ”Ά,unsloth/gemma-2-9b-it-bnb-4bit,26.5,58.87,0.59,35.57,0.55,12.16,0.12,11.63,0.34,9.34,0.41,31.43,0.38,πŸ”Ά fine-tuned on domain-specific datasets,Gemma2ForCausalLM,Original,float16,True,apache-2.0,9,0,True,4adc2d61d530d23026493d29e6191e06cf549fc6,True,True,2024-07-31,2024-07-16,True,False,ehristoforu/Gemma2-9B-it-psy10k-mental_health,2
 
  🟒,mistral-community/mixtral-8x22B-v0.3,25.55,25.83,0.26,45.73,0.63,16.84,0.17,17.0,0.38,7.46,0.4,40.44,0.46,🟒 pretrained,MixtralForCausalLM,Original,bfloat16,True,apache-2.0,140,3,True,211b177b79ab5ef245ee334d106c27623e786882,True,False,2024-06-13,2024-05-25,False,True,mistral-community/mixtral-8x22B-v0.3,0
  πŸ”Ά,arcee-ai/Arcee-Spark,25.54,56.21,0.56,37.14,0.55,12.31,0.12,7.61,0.31,8.6,0.4,31.36,0.38,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,7,84,True,3fe368ea5fd32bc4a8d1bcf42510416f7fa28668,True,True,2024-06-26,2024-06-22,True,False,arcee-ai/Arcee-Spark,0
  🟒,mistralai/Mixtral-8x22B-v0.1,25.49,25.83,0.26,45.59,0.62,16.84,0.17,16.78,0.38,7.46,0.4,40.44,0.46,🟒 pretrained,MixtralForCausalLM,Original,bfloat16,True,apache-2.0,140,186,True,b03e260818710044a2f088d88fab12bb220884fb,True,False,2024-06-12,2024-04-16,False,True,mistralai/Mixtral-8x22B-v0.1,0
+ πŸ’¬,microsoft/Phi-3-mini-128k-instruct,25.49,59.76,0.6,37.1,0.56,8.91,0.09,9.06,0.32,7.71,0.39,30.38,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Phi3ForCausalLM,Original,bfloat16,True,mit,3,1552,True,5be6479b4bc06a081e8f4c6ece294241ccd32dec,True,True,2024-08-21,2024-04-22,True,True,microsoft/Phi-3-mini-128k-instruct,0
  🀝,Sao10K/L3-8B-Lunaris-v1,25.48,68.95,0.69,32.11,0.52,8.46,0.08,6.82,0.3,5.55,0.37,30.97,0.38,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,True,llama3,8,78,True,8479c2a7ee119c935b9a02c921cc2a85b698dfe8,True,True,2024-07-22,2024-06-26,True,False,Sao10K/L3-8B-Lunaris-v1,0
  🟒,01-ai/Yi-1.5-34B,25.43,28.41,0.28,42.75,0.6,14.05,0.14,15.44,0.37,11.22,0.42,40.73,0.47,🟒 pretrained,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,34,45,True,4b486f81c935a2dadde84c6baa1e1370d40a098f,True,True,2024-06-12,2024-05-11,False,True,01-ai/Yi-1.5-34B,0
  πŸ”Ά,Tremontaine/L3-12B-Lunaris-v1 (Merge),25.38,69.09,0.69,32.18,0.52,8.16,0.08,7.94,0.31,4.05,0.37,30.83,0.38,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,,11,2,False,7be236530a835416ebca712d51d661c4488a45de,True,True,2024-07-15,2024-07-14,True,False,Tremontaine/L3-12B-Lunaris-v1,1
 
  πŸ’¬,haoranxu/Llama-3-Instruct-8B-CPO-SimPO,24.48,70.46,0.7,29.76,0.5,7.7,0.08,5.7,0.29,3.42,0.36,29.84,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,mit,8,1,True,3ca4b5c3a6395ff090e1039d55ac1f6120777302,True,True,2024-07-28,2024-06-19,True,False,haoranxu/Llama-3-Instruct-8B-CPO-SimPO,0
  πŸ’¬,rhplus0831/maid-yuzu-v7 (Merge),24.38,64.62,0.65,26.82,0.48,8.91,0.09,7.94,0.31,9.77,0.41,28.22,0.35,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MixtralForCausalLM,Original,bfloat16,True,,46,1,False,a0bd8c707bb80024778da4a0d057917faa53d2f6,True,True,2024-08-23,2024-02-09,True,False,rhplus0831/maid-yuzu-v7,1
  πŸ’¬,mistralai/Mixtral-8x7B-v0.1,24.35,53.95,0.54,34.02,0.53,9.06,0.09,7.61,0.31,12.11,0.43,29.36,0.36,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MixtralForCausalLM,Original,bfloat16,True,apache-2.0,46,4083,True,1e637f2d7cb0a9d6fb1922f305cb784995190a83,True,False,2024-06-12,2023-12-10,True,True,mistralai/Mixtral-8x7B-Instruct-v0.1,1
+ πŸ”Ά,meta-llama/Meta-Llama-3.1-8B,24.29,65.24,0.65,26.35,0.48,11.63,0.12,8.95,0.32,7.19,0.39,26.38,0.34,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,llama3.1,8,2,True,6b2b5694a192cb29ad0e4314138affa25b630c0e,True,True,2024-08-10,2024-08-06,True,False,ValiantLabs/Llama3.1-8B-ShiningValiant2,2
  πŸ”Ά,VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct,24.29,56.02,0.56,33.95,0.53,8.61,0.09,6.38,0.3,11.32,0.42,29.45,0.37,πŸ”Ά fine-tuned on domain-specific datasets,MixtralForCausalLM,Original,bfloat16,True,apache-2.0,46,21,True,30ed549de7d84f68b4c6cb619f73275c99af23cc,True,False,2024-06-26,2023-12-15,True,False,VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct,0
  🀝,UKzExecution/LlamaExecutor-8B-3.0.5 (Merge),24.26,74.03,0.74,28.41,0.5,8.53,0.09,0.78,0.26,4.65,0.38,29.17,0.36,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,True,,8,0,False,2047978e8ab1146b8881cde3d998856594f437a4,True,True,2024-07-30,2024-07-29,True,False,UKzExecution/LlamaExecutor-8B-3.0.5,1
  πŸ”Ά,ycros/BagelMIsteryTour-v2-8x7B (Merge),24.26,59.94,0.6,31.7,0.52,7.85,0.08,7.27,0.3,11.3,0.42,27.48,0.35,πŸ”Ά fine-tuned on domain-specific datasets,MixtralForCausalLM,Original,float16,False,cc-by-nc-4.0,46,16,True,98a8b319707be3dab1659594da69a37ed8f8c148,True,True,2024-06-28,2024-01-19,True,False,ycros/BagelMIsteryTour-v2-8x7B,1
 
  🀝,PJMixers/LLaMa-3-CursedStock-v2.0-8B (Merge),24.03,63.31,0.63,32.56,0.53,8.61,0.09,3.24,0.27,8.04,0.39,28.4,0.36,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,llama3,8,9,True,d47cc29df363f71ffaf6cd21ac4bdeefa27359db,True,True,2024-06-27,2024-06-26,True,False,PJMixers/LLaMa-3-CursedStock-v2.0-8B,1
  πŸ”Ά,Qwen/Qwen2-7B,24.01,41.0,0.41,32.84,0.52,15.18,0.15,6.6,0.3,14.06,0.44,34.4,0.41,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,other,7,33,True,d5a2f245bf98a40d196821bc378e10f35b4da81a,True,True,2024-06-26,2024-06-24,True,False,Weyaxi/Einstein-v7-Qwen2-7B,1
  πŸ”Ά,BAAI/Infinity-Instruct-3M-0625-Qwen2-7B,24.01,55.54,0.56,34.66,0.53,6.12,0.06,8.39,0.31,6.46,0.39,32.89,0.4,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,7,6,True,503c24156d7682458686a7b5324f7f886e63470d,True,True,2024-08-05,2024-07-09,True,False,BAAI/Infinity-Instruct-3M-0625-Qwen2-7B,0
+ πŸ”Ά,meta-llama/Meta-Llama-3.1-8B,24.0,64.74,0.65,26.26,0.48,10.73,0.11,8.95,0.32,6.91,0.39,26.4,0.34,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama3.1,8,2,True,6b2b5694a192cb29ad0e4314138affa25b630c0e,True,True,2024-08-07,2024-08-06,True,False,ValiantLabs/Llama3.1-8B-ShiningValiant2,2
  πŸ’¬,vicgalle/Roleplay-Llama-3-8B,23.94,73.2,0.73,28.55,0.5,8.69,0.09,1.45,0.26,1.68,0.35,30.09,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,apache-2.0,8,36,True,57297eb57dcc2c116f061d9dda341094203da01b,True,True,2024-06-26,2024-04-19,True,False,vicgalle/Roleplay-Llama-3-8B,0
  πŸ’¬,meta-llama/Meta-Llama-3-8B-Instruct,23.91,74.08,0.74,28.24,0.5,8.69,0.09,1.23,0.26,1.6,0.36,29.6,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,8,3356,True,e1945c40cd546c78e41f1151f4db032b271faeaa,True,True,2024-06-12,2024-04-17,True,True,meta-llama/Meta-Llama-3-8B-Instruct,0
  πŸ’¬,01-ai/Yi-34B-Chat,23.9,46.99,0.47,37.62,0.56,4.31,0.04,11.74,0.34,8.36,0.4,34.37,0.41,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,34,340,True,2e528b6a80fb064a0a746c5ca43114b135e30464,True,True,2024-06-12,2023-11-22,True,True,01-ai/Yi-34B-Chat,0
 
  πŸ’¬,SeaLLMs/SeaLLMs-v3-7B-Chat,23.63,43.77,0.44,33.8,0.53,15.11,0.15,6.49,0.3,10.47,0.42,32.16,0.39,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,other,7,38,True,67ef6dfd0a5df7af4be7a325786105a2ba4cbaf7,True,True,2024-07-29,2024-07-03,True,False,SeaLLMs/SeaLLMs-v3-7B-Chat,0
  πŸ”Ά,meta-llama/Meta-Llama-3-8B-Instruct,23.56,69.03,0.69,29.08,0.5,5.74,0.06,1.12,0.26,5.5,0.38,30.92,0.38,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,mit,8,4,True,9c95ccdeceed14a3c2881bc495101a1acca1385f,True,True,2024-07-02,2024-05-25,True,False,ZhangShenao/SELM-Llama-3-8B-Instruct-iter-3,3
  πŸ’¬,lordjia/Qwen2-Cantonese-7B-Instruct,23.5,54.35,0.54,32.45,0.52,8.76,0.09,6.04,0.3,7.81,0.4,31.59,0.38,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,7,1,True,eb8b0faee749d167fd70e74f5e579094c4cfe7fb,True,True,2024-08-03,2024-07-13,True,False,lordjia/Qwen2-Cantonese-7B-Instruct,0
+ πŸ’¬,meta-llama/Meta-Llama-3.1-8B,23.49,61.7,0.62,30.72,0.52,4.76,0.05,6.38,0.3,13.62,0.44,23.77,0.31,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,8,142,True,aabb745a717e133b74dcae23195d2635cf5f38cc,True,True,2024-08-28,2024-07-28,True,True,NousResearch/Hermes-3-Llama-3.1-8B,1
  πŸ’¬,saltlux/luxia-21.4b-alignment-v1.2,23.44,41.15,0.41,47.77,0.64,1.59,0.02,7.72,0.31,14.9,0.45,27.48,0.35,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,21,7,True,eed12b5574fa49cc81e57a88aff24c08c13721c0,True,True,2024-07-30,2024-05-27,True,False,saltlux/luxia-21.4b-alignment-v1.2,0
  πŸ’¬,meta-llama/Meta-Llama-3-8B-Instruct,23.43,66.87,0.67,28.06,0.48,6.57,0.07,3.02,0.27,5.31,0.38,30.77,0.38,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,cc-by-nc-4.0,8,1,True,555f4a0092f239557e1aa34f9d489e8156b907bb,True,True,2024-06-29,2024-04-26,True,False,lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75,2
  πŸ’¬,meta-llama/Meta-Llama-3-8B-Instruct,23.37,66.37,0.66,27.67,0.49,8.53,0.09,3.02,0.27,4.81,0.36,29.83,0.37,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,cc-by-nc-4.0,8,2,True,5a2f17238cc83932e00613d285f8bf6b8f4a0c3a,True,True,2024-06-29,2024-04-26,True,False,lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25,2
 
  🀝,Pretergeek/OpenChat-3.5-0106_BlockExpansion-40Layers-End (Merge),22.55,59.61,0.6,24.06,0.46,6.8,0.07,7.61,0.31,11.78,0.43,25.44,0.33,🀝 base merges and moerges,MistralForCausalLM,Original,bfloat16,False,apache-2.0,8,2,True,2120720b7fb2ecc27b9c03cc876316fd25b26e40,True,True,2024-07-27,2024-07-26,True,False,Pretergeek/OpenChat-3.5-0106_BlockExpansion-40Layers-End,1
  🀝,Pretergeek/OpenChat-3.5-0106_BlockExpansion-44Layers-End (Merge),22.55,59.61,0.6,24.06,0.46,6.8,0.07,7.61,0.31,11.78,0.43,25.44,0.33,🀝 base merges and moerges,MistralForCausalLM,Original,bfloat16,False,apache-2.0,9,2,True,8a7ef4a2c4faf8760650e26e44509920bace633a,True,True,2024-07-27,2024-07-27,True,False,Pretergeek/OpenChat-3.5-0106_BlockExpansion-44Layers-End,1
  πŸ’¬,vicgalle/ConfigurableBeagle-11B,22.52,58.34,0.58,32.39,0.53,3.7,0.04,6.94,0.3,7.38,0.4,26.38,0.34,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,float16,True,apache-2.0,10,2,True,bbc16dbf94b8e8a99bb3e2ada6755faf9c2990dd,True,True,2024-06-26,2024-02-17,True,False,vicgalle/ConfigurableBeagle-11B,0
+ πŸ”Ά,Pretergeek/OpenChat-3.5-0106_10.7B_48Layers-Interleaved (Merge),22.51,59.61,0.6,24.06,0.46,6.8,0.07,7.27,0.3,11.78,0.43,25.54,0.33,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,False,apache-2.0,10,0,True,dd6bd9a8a9a2223a02a4e8aa6270accbc8d4d81a,True,True,2024-08-16,2024-08-10,True,False,Pretergeek/OpenChat-3.5-0106_10.7B_48Layers-Interleaved,1
  πŸ”Ά,mistralai/Mistral-7B-v0.1,22.5,59.51,0.6,24.04,0.46,6.5,0.06,7.72,0.31,11.75,0.43,25.46,0.33,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,340,True,ff058fda49726ecf4ea53dc1635f917cdb8ba36b,True,True,2024-06-27,2024-01-07,True,True,openchat/openchat-3.5-0106,1
+ πŸ”Ά,Pretergeek/OpenChat-3.5-0106_8.99B_40Layers-Interleaved (Merge),22.49,59.76,0.6,24.08,0.46,6.8,0.07,7.27,0.3,11.51,0.42,25.54,0.33,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,b6dfa36a99179674706d5e859714afa6b8743640,True,True,2024-08-16,2024-08-10,True,False,Pretergeek/OpenChat-3.5-0106_8.99B_40Layers-Interleaved,1
+ πŸ”Ά,Pretergeek/OpenChat-3.5-0106_8.11B_36Layers-Interleaved (Merge),22.47,59.61,0.6,24.08,0.46,6.8,0.07,7.27,0.3,11.51,0.42,25.54,0.33,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,485ebe835c6c001af0a1a6e0e40aab27bc195842,True,True,2024-08-16,2024-08-10,True,False,Pretergeek/OpenChat-3.5-0106_8.11B_36Layers-Interleaved,1
  πŸ”Ά,pankajmathur/orca_mini_v7_7b,22.41,43.88,0.44,33.95,0.53,2.64,0.03,6.15,0.3,12.66,0.44,35.19,0.42,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,7,2,True,f5e84ff6ea25fb4585908ea45d1520bac416d803,True,True,2024-06-26,2024-06-20,False,False,pankajmathur/orca_mini_v7_7b,0
  πŸ”Ά,Replete-AI/Llama3-8B-Instruct-Replete-Adapted,22.4,69.15,0.69,26.89,0.49,4.83,0.05,4.14,0.28,2.82,0.36,26.57,0.34,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,other,8,1,True,d930f2111913da6fb7693187e1cdc817191c8e5e,True,True,2024-07-09,2024-07-05,True,False,Replete-AI/Llama3-8B-Instruct-Replete-Adapted,0
  πŸ’¬,vicgalle/CarbonBeagle-11B (Merge),22.36,54.15,0.54,33.06,0.53,5.51,0.06,6.94,0.3,9.19,0.4,25.29,0.33,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,float16,False,apache-2.0,10,9,True,3fe9bf5327606d013b182fed17a472f5f043759b,True,True,2024-06-26,2024-01-21,True,False,vicgalle/CarbonBeagle-11B,1
  πŸ”Ά,WizardLMTeam/WizardLM-70B-V1.0,22.32,49.51,0.5,37.54,0.56,3.47,0.03,2.13,0.27,14.09,0.44,27.18,0.34,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama2,70,234,True,54aaecaff7d0790eb9f0ecea1cc267a94cc66949,True,True,2024-06-12,2023-08-09,False,True,WizardLMTeam/WizardLM-70B-V1.0,0
  🀝,johnsutor/Llama-3-8B-Instruct_breadcrumbs-density-0.1-gamma-0.01 (Merge),22.3,42.71,0.43,29.55,0.5,3.7,0.04,9.62,0.32,17.8,0.46,30.44,0.37,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,f4ebbf27d586e94c63f0a7293f565cbd947b824f,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_breadcrumbs-density-0.1-gamma-0.01,1
  πŸ”Ά,NousResearch/Meta-Llama-3-8B,22.29,57.63,0.58,30.51,0.51,5.97,0.06,6.26,0.3,10.06,0.42,23.31,0.31,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,apache-2.0,8,5,True,3cb5792509966a963645be24fdbeb2e7dc6cac15,True,True,2024-07-24,2024-05-02,True,False,vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B,2
+ πŸ’¬,mistralai/Mistral-Nemo-Base-2407,22.27,62.61,0.63,27.11,0.49,0.3,0.0,8.72,0.32,8.48,0.39,26.37,0.34,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,bfloat16,True,apache-2.0,12,1004,True,4d14c1db68fe20dbf80b8eca85d39b909c5fe1d5,True,True,2024-08-29,2024-07-17,True,True,mistralai/Mistral-Nemo-Instruct-2407,1
  🟒,01-ai/Yi-34B,22.26,30.46,0.3,35.54,0.55,4.46,0.04,15.55,0.37,9.65,0.41,37.91,0.44,🟒 pretrained,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,34,1277,True,e1e7da8c75cfd5c44522228599fd4d2990cedd1c,True,True,2024-06-12,2023-11-01,False,True,01-ai/Yi-34B,0
  🀝,johnsutor/Llama-3-8B-Instruct_breadcrumbs-density-0.5-gamma-0.1 (Merge),22.18,43.96,0.44,30.85,0.51,6.87,0.07,7.61,0.31,13.84,0.44,29.96,0.37,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,a481edaceeaab34f4dc0e90c4d8ec0f72658bbdd,True,True,2024-06-26,2024-06-08,False,False,johnsutor/Llama-3-8B-Instruct_breadcrumbs-density-0.5-gamma-0.1,1
+ πŸ”Ά,meta-llama/Meta-Llama-3.1-8B,22.14,64.05,0.64,24.8,0.47,10.8,0.11,4.7,0.29,2.29,0.36,26.22,0.34,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama3.1,8,5,True,332c99d80f378c77b090745a5aac10f8ab339519,True,True,2024-08-14,2024-08-11,True,False,ValiantLabs/Llama3.1-8B-Enigma,2
  πŸ”Ά,mlabonne/Daredevil-8B (Merge),22.13,45.48,0.45,31.63,0.52,8.99,0.09,7.72,0.31,7.53,0.39,31.45,0.38,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,False,other,8,30,True,717953c83631cc9adf2dddccfff06739308f10f7,True,True,2024-07-02,2024-05-25,True,True,mlabonne/Daredevil-8B,1
  πŸ’¬,OpenBuddy/openbuddy-mixtral-7bx8-v18.1-32k,22.12,54.93,0.55,24.54,0.47,9.52,0.1,7.27,0.3,5.28,0.38,31.16,0.38,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MixtralForCausalLM,Original,bfloat16,True,apache-2.0,46,13,True,98596b6731058cc9cca85f3b8ac9077342cb60ae,True,False,2024-06-26,2024-02-12,True,False,OpenBuddy/openbuddy-mixtral-7bx8-v18.1-32k,0
  🀝,johnsutor/Llama-3-8B-Instruct_breadcrumbs_ties-density-0.1-gamma-0.01 (Merge),22.09,43.59,0.44,29.53,0.5,4.31,0.04,8.05,0.31,16.34,0.45,30.69,0.38,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,861347cd643d396877d8e560367cf0717c671228,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_breadcrumbs_ties-density-0.1-gamma-0.01,1
 
  πŸ”Ά,mistralai/Mixtral-8x7B-v0.1,19.67,23.26,0.23,30.4,0.51,9.37,0.09,9.4,0.32,13.66,0.44,31.9,0.39,πŸ”Ά fine-tuned on domain-specific datasets,MixtralForCausalLM,Original,float16,True,apache-2.0,46,1623,True,985aa055896a8f943d4a9f2572e6ea1341823841,True,False,2024-06-27,2023-12-01,False,True,mistralai/Mixtral-8x7B-v0.1,0
  πŸ”Ά,RLHFlow/LLaMA3-iterative-DPO-final,19.64,53.4,0.53,29.79,0.51,0.0,0.0,4.47,0.28,5.08,0.37,25.08,0.33,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,llama3,8,41,True,40b73bd07a019795837f80579fe95470484ca82b,True,True,2024-06-26,2024-05-17,True,False,RLHFlow/LLaMA3-iterative-DPO-final,0
  🀝,allknowingroger/limyClown-7B-slerp (Merge),19.63,40.17,0.4,31.93,0.51,6.42,0.06,4.14,0.28,12.46,0.43,22.64,0.3,🀝 base merges and moerges,MistralForCausalLM,Original,bfloat16,False,apache-2.0,7,0,True,732a1ed0c2c7007297ad9d9797793073825f65ca,True,True,2024-06-26,2024-03-23,False,False,allknowingroger/limyClown-7B-slerp,1
+ πŸ’¬,upstage/SOLAR-10.7B-Instruct-v1.0 (Merge),19.63,47.37,0.47,31.87,0.52,0.0,0.0,7.83,0.31,6.94,0.39,23.76,0.31,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,cc-by-nc-4.0,10,606,True,c08c25ed66414a878fe0401a3596d536c083606c,True,True,2024-06-12,2023-12-12,True,True,upstage/SOLAR-10.7B-Instruct-v1.0,1
  🀝,invisietch/EtherealRainbow-v0.3-8B,19.61,36.82,0.37,30.08,0.51,6.57,0.07,7.27,0.3,7.77,0.39,29.18,0.36,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,llama3,8,6,True,c986c4ca5a5b8474820a59d3e911a431cf26938d,True,True,2024-07-01,2024-06-19,False,False,invisietch/EtherealRainbow-v0.3-8B,0
  🀝,allknowingroger/Ph3unsloth-3B-slerp (Merge),19.61,18.94,0.19,36.46,0.55,6.87,0.07,9.96,0.32,15.43,0.45,30.01,0.37,🀝 base merges and moerges,MistralForCausalLM,Original,bfloat16,False,apache-2.0,3,0,True,465444b3cdd43876717f7386ea2f3357c5fe8e53,True,True,2024-06-26,2024-05-31,False,False,allknowingroger/Ph3unsloth-3B-slerp,1
  🟒,01-ai/Yi-1.5-9B-32K,19.61,23.03,0.23,28.94,0.5,9.59,0.1,14.54,0.36,10.83,0.42,30.72,0.38,🟒 pretrained,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,8,18,True,116561dfae63af90f9d163b43077629e0e916bb1,True,True,2024-06-12,2024-05-15,False,True,01-ai/Yi-1.5-9B-32K,0
 
  🀝,johnsutor/Llama-3-8B-Instruct_breadcrumbs-density-0.7-gamma-0.01 (Merge),18.44,28.14,0.28,27.16,0.49,0.0,0.0,5.37,0.29,24.47,0.52,25.5,0.33,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,61f4b44fb917cdb46f0ade9f8fc2a382e0cf67af,True,True,2024-06-26,2024-06-08,False,False,johnsutor/Llama-3-8B-Instruct_breadcrumbs-density-0.7-gamma-0.01,1
  πŸ’¬,mistralai/Mistral-7B-v0.1,18.37,50.82,0.51,22.75,0.45,2.57,0.03,5.26,0.29,6.59,0.34,22.26,0.3,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,bfloat16,True,mit,7,119,True,30172203a2d41cb487bf7e2b92a821080783b2c9,True,True,2024-06-27,2023-11-16,True,True,argilla/notus-7b-v1,2
  πŸ”Ά,uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b,18.34,37.0,0.37,29.65,0.5,2.95,0.03,4.47,0.28,13.85,0.44,22.12,0.3,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,llama2,7,17,True,b1de043468a15198b55a6509293a4ee585139043,True,True,2024-06-26,2023-10-13,False,False,uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b,0
+ πŸ”Ά,meta-llama/Meta-Llama-3.1-8B,18.31,54.83,0.55,24.07,0.46,5.82,0.06,5.15,0.29,4.38,0.34,15.63,0.24,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama3.1,8,4,True,be3a5c18b5e8e86a3703df1a8227f784ad2c713c,True,True,2024-07-25,2024-07-23,True,False,ValiantLabs/Llama3.1-8B-Fireplace2,2
  πŸ’¬,meta-llama/Meta-Llama-3-8B,18.3,38.5,0.39,27.86,0.49,5.06,0.05,4.92,0.29,13.79,0.44,19.68,0.28,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,other,8,398,True,5aeb036f9215c558b483a654a8c6e1cc22e841bf,True,True,2024-06-12,2024-04-20,True,True,cognitivecomputations/dolphin-2.9-llama3-8b,1
  🟒,meta-llama/Llama-2-70b-hf,18.25,24.07,0.24,35.9,0.55,2.49,0.02,7.05,0.3,9.78,0.41,30.2,0.37,🟒 pretrained,LlamaForCausalLM,Original,float16,True,llama2,68,823,True,3aba440b59558f995867ba6e1f58f21d0336b5bb,True,True,2024-06-12,2023-07-11,False,True,meta-llama/Llama-2-70b-hf,0
  πŸ”Ά,microsoft/Orca-2-13b,18.14,31.28,0.31,27.31,0.49,0.98,0.01,4.03,0.28,25.79,0.51,19.44,0.27,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,13,659,True,2539ff53e6baa4cc603774ad5a2d646f4041ea4e,True,True,2024-06-12,2023-11-14,False,True,microsoft/Orca-2-13b,0
  πŸ’¬,gradientai/Llama-3-8B-Instruct-Gradient-1048k,18.12,44.56,0.45,21.01,0.43,4.38,0.04,3.69,0.28,13.52,0.43,21.56,0.29,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,8,663,True,8697fb25cb77c852311e03b4464b8467471d56a4,True,True,2024-06-12,2024-04-29,True,True,gradientai/Llama-3-8B-Instruct-Gradient-1048k,0
  🀝,johnsutor/Llama-3-8B-Instruct_ties-density-0.5 (Merge),18.11,37.97,0.38,26.01,0.48,5.44,0.05,7.27,0.3,7.8,0.39,24.17,0.32,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,c857e33c30016960f114e3a049f5dae41d68bfe7,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_ties-density-0.5,1
  πŸ”Ά,uukuguy/speechless-code-mistral-7b-v1.0,18.09,36.65,0.37,24.09,0.46,4.61,0.05,4.59,0.28,14.77,0.45,23.84,0.31,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,18,True,1862e0a712efc6002112e9c1235a197d58419b37,True,True,2024-06-26,2023-10-10,False,False,uukuguy/speechless-code-mistral-7b-v1.0,0
+ πŸ’¬,meta-llama/Meta-Llama-3.1-8B,18.05,53.28,0.53,24.09,0.46,5.66,0.06,5.26,0.29,4.22,0.34,15.82,0.24,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3.1,8,4,True,ef129903bbdcc59efdbe10fe9061bff473334a99,True,True,2024-08-10,2024-07-23,True,False,ValiantLabs/Llama3.1-8B-Fireplace2,2
  🀝,johnsutor/Llama-3-8B-Instruct_ties-density-0.9 (Merge),18.04,38.58,0.39,25.46,0.47,5.59,0.06,6.6,0.3,7.74,0.39,24.24,0.32,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,57c280ce43fe81a23c966b48de6db7f4a85383a3,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_ties-density-0.9,1
  πŸ”Ά,uukuguy/speechless-instruct-mistral-7b-v0.2,18.02,32.61,0.33,24.56,0.46,4.38,0.04,4.25,0.28,21.17,0.49,21.14,0.29,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,0,True,87a4d214f7d028d61c3dc013a7410b3c34a24072,True,True,2024-06-26,2024-05-22,False,False,uukuguy/speechless-instruct-mistral-7b-v0.2,0
  🟒,THUDM/glm-4-9b,18.01,14.26,0.14,35.81,0.55,0.0,0.0,8.84,0.32,14.19,0.44,34.94,0.41,🟒 pretrained,ChatGLMModelM,Original,bfloat16,True,other,9,96,True,99a140996f9d4f197842fb6b1aab217a42e27ef3,True,True,2024-07-04,2024-06-04,False,False,THUDM/glm-4-9b,0
  πŸ”Ά,mistralai/Mistral-7B-v0.1,17.94,27.78,0.28,30.21,0.5,2.19,0.02,5.59,0.29,23.02,0.51,18.87,0.27,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,apache-2.0,7,65,True,fc679274dfcd28a8b6087634f71af7ed2a0659c4,True,True,2024-06-12,2023-10-25,False,True,Intel/neural-chat-7b-v3,1
  πŸ”Ά,Kukedlc/NeuralExperiment-7b-MagicCoder-v7.5,17.92,45.53,0.46,16.39,0.4,6.12,0.06,6.15,0.3,13.06,0.43,20.27,0.28,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,apache-2.0,7,3,True,43ea8d27d652dc15e4d27f665c5d636a5937780b,True,True,2024-07-30,2024-03-07,True,False,Kukedlc/NeuralExperiment-7b-MagicCoder-v7.5,0
  🀝,johnsutor/Llama-3-8B-Instruct_ties-density-0.7 (Merge),17.89,36.81,0.37,25.37,0.47,5.74,0.06,7.94,0.31,7.58,0.39,23.92,0.32,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,8d7d8bbb1e8cba5e51337f97bc3d6d8ae40544d5,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_ties-density-0.7,1
+ πŸ’¬,meta-llama/Meta-Llama-3-8B,17.89,42.53,0.43,24.28,0.46,6.72,0.07,6.82,0.3,6.33,0.37,20.65,0.29,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,8,0,True,6c03cb7af723c7f7785df9eee5d5838247619bee,True,True,2024-08-27,2024-07-31,True,False,sunbaby/BrainCog-8B-0.1-Instruct,1
  πŸ”Ά,meta-llama/Meta-Llama-3-8B-Instruct,17.89,38.09,0.38,23.65,0.46,5.36,0.05,11.07,0.33,1.6,0.34,27.56,0.35,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,llama3,8,3,True,2560556d655d0ecaefec10f579c92292d65fb28b,True,True,2024-06-27,2024-06-10,False,False,collaiborateorg/Collaiborator-MEDLLM-Llama-3-8B-v2,1
  πŸ”Ά,meta-llama/Meta-Llama-3.1-8B,17.8,47.88,0.48,26.14,0.48,7.93,0.08,1.45,0.26,1.87,0.34,21.53,0.29,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,llama3.1,8,2,True,b191916912f0e76b2bdc93c46c0af590cc87e7ae,True,True,2024-08-06,2024-07-23,True,False,Magpie-Align/Llama-3.1-8B-Magpie-Align-SFT-v0.1,1
  🀝,meraGPT/mera-mix-4x7B,17.78,48.32,0.48,17.49,0.4,4.91,0.05,7.27,0.3,9.27,0.41,19.42,0.27,🀝 base merges and moerges,MixtralForCausalLM,Original,bfloat16,True,apache-2.0,24,18,True,09d965c5ef9b66ce419986027e03a915cb869e43,True,True,2024-06-27,2024-04-13,True,False,meraGPT/mera-mix-4x7B,0
 
  πŸ’¬,meta-llama/Meta-Llama-3-8B,16.26,40.27,0.4,26.29,0.48,3.25,0.03,3.58,0.28,1.92,0.31,22.24,0.3,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,llama3,8,10,True,a83ddac146fb2da1dd1bfa4069e336074d1439a8,True,True,2024-07-03,2024-06-29,True,False,Magpie-Align/Llama-3-8B-Magpie-Align-v0.1,2
  πŸ”Ά,senseable/WestLake-7B-v2,16.26,44.19,0.44,17.86,0.41,4.83,0.05,3.58,0.28,7.48,0.39,19.6,0.28,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,apache-2.0,7,104,True,41625004c47628837678859753b94c50c82f3bec,True,True,2024-07-23,2024-01-22,True,False,senseable/WestLake-7B-v2,0
  πŸ’¬,stabilityai/stablelm-2-12b-chat,16.22,40.82,0.41,25.25,0.47,2.04,0.02,2.24,0.27,7.73,0.39,19.27,0.27,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",StableLmForCausalLM,Original,bfloat16,True,other,12,83,True,b6b62cd451b84e848514c00fafa66d9ead9297c5,True,True,2024-06-12,2024-04-04,True,True,stabilityai/stablelm-2-12b-chat,0
+ πŸ’¬,CohereForAI/aya-23-8B,15.97,46.99,0.47,20.2,0.43,1.44,0.01,4.59,0.28,8.42,0.39,14.2,0.23,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",CohereForCausalLM,Original,float16,True,cc-by-nc-4.0,8,353,True,ec151d218a24031eb039d92fb83d10445427efc9,True,True,2024-06-12,2024-05-19,True,True,CohereForAI/aya-23-8B,0
  🀝,johnsutor/Llama-3-8B-Instruct_dare_ties-density-0.3 (Merge),15.94,21.13,0.21,23.09,0.46,0.0,0.0,6.26,0.3,22.5,0.51,22.67,0.3,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,6f966d14d7236f3da6d1ea9ce3bd9b20808e02a9,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_dare_ties-density-0.3,1
  πŸ”Ά,Pretergeek/openchat-3.5-0106_Rebased_Mistral-7B-v0.2,15.94,37.06,0.37,10.91,0.36,3.85,0.04,2.91,0.27,20.57,0.48,20.33,0.28,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,2,True,31c11027a7320115af1e5c33b41bcace83420fe2,True,True,2024-07-21,2024-07-21,True,False,Pretergeek/openchat-3.5-0106_Rebased_Mistral-7B-v0.2,0
  πŸ’¬,Locutusque/Llama-3-NeuralHercules-5.0-8B,15.93,44.89,0.45,16.34,0.39,3.63,0.04,2.46,0.27,6.78,0.39,21.48,0.29,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,llama3,8,3,True,2bbb675e592a1772f2389fe2d58a5b610d479d94,True,True,2024-06-26,2024-05-28,True,False,Locutusque/Llama-3-NeuralHercules-5.0-8B,0
 
  🟒,mistralai/Mistral-Nemo-Base-2407,15.08,16.3,0.16,29.37,0.5,4.98,0.05,5.82,0.29,6.52,0.39,27.46,0.35,🟒 pretrained,MistralForCausalLM,Original,bfloat16,True,apache-2.0,11,234,True,d2efb15544d5401f761235bef327babb850887d0,True,True,2024-07-19,2024-07-18,False,True,mistralai/Mistral-Nemo-Base-2407,0
  πŸ”Ά,Changgil/K2S3-14b-v0.2,15.07,32.43,0.32,24.28,0.46,4.53,0.05,4.14,0.28,6.8,0.39,18.26,0.26,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,cc-by-nc-4.0,14,0,True,b4f0e1eed2640df2b75847ff37e6ebb1be217b6c,True,True,2024-06-27,2024-06-17,False,False,Changgil/K2S3-14b-v0.2,0
  🟩,NousResearch/Yarn-Solar-10b-64k,15.06,19.89,0.2,28.4,0.49,2.27,0.02,6.94,0.3,9.01,0.4,23.87,0.31,🟩 continuously pretrained,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,10,15,True,703818628a5e8ef637e48e8dbeb3662aa0497aff,True,True,2024-06-12,2024-01-17,False,True,NousResearch/Yarn-Solar-10b-64k,0
+ 🟒,tiiuae/falcon-mamba-7b,15.04,33.36,0.33,19.88,0.43,3.63,0.04,8.05,0.31,10.86,0.42,14.47,0.23,🟒 pretrained,FalconMambaForCausalLM,Original,bfloat16,True,other,7,173,True,5337fd73f19847e111ba2291f3f0e1617b90c37d,True,True,2024-07-23,2024-07-17,False,True,tiiuae/falcon-mamba-7b,0
  πŸ”Ά,pankajmathur/orca_mini_v3_13b,15.0,28.97,0.29,25.55,0.47,1.89,0.02,2.01,0.27,17.11,0.46,14.5,0.23,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,13,32,True,7d6e567d24ce2f228beaf54e89c17b0e750bfe99,True,True,2024-06-26,2023-08-09,False,False,pankajmathur/orca_mini_v3_13b,0
  🟒,Deci/DeciLM-7B,14.95,28.13,0.28,21.25,0.44,2.42,0.02,6.04,0.3,13.05,0.44,18.8,0.27,🟒 pretrained,DeciLMForCausalLM,Original,bfloat16,True,apache-2.0,7,222,True,c3c9f4226801dc0433f32aebffe0aac68ee2f051,True,True,2024-06-12,2023-12-10,False,True,Deci/DeciLM-7B,0
  πŸ’¬,meta-llama/Meta-Llama-3-8B,14.87,36.53,0.37,21.95,0.44,3.85,0.04,3.91,0.28,4.01,0.36,18.95,0.27,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,other,8,53,True,7f200e4c84ad0daa3ff6bc414012d8d0bacbf90e,True,True,2024-06-12,2024-04-18,True,True,mlabonne/OrpoLlama-3-8B,1
 
  πŸ”Ά,microsoft/Orca-2-7b,14.22,21.83,0.22,22.43,0.45,0.83,0.01,1.45,0.26,24.09,0.5,14.65,0.23,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,other,7,213,True,60e31e6bdcf582ad103b807cb74b73ee1d2c4b17,True,True,2024-06-12,2023-11-14,False,True,microsoft/Orca-2-7b,0
  πŸ”Ά,TencentARC/Mistral_Pro_8B_v0.1,14.2,21.15,0.21,22.89,0.45,5.66,0.06,4.03,0.28,11.83,0.42,19.61,0.28,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,apache-2.0,8,66,True,366f159fc5b314ba2a955209d2bca4600f84dac0,True,True,2024-06-12,2024-02-22,False,True,TencentARC/Mistral_Pro_8B_v0.1,0
  🟒,tklohj/WindyFloLLM (Merge),14.17,26.69,0.27,24.4,0.46,1.13,0.01,3.36,0.28,11.86,0.43,17.57,0.26,🟒 pretrained,LlamaForCausalLM,Original,float16,True,,13,0,False,21f4241ab3f091d1d309e9076a8d8e3f014908a8,True,True,2024-07-10,2024-06-30,False,False,tklohj/WindyFloLLM,1
  🟒,mistralai/Mistral-7B-v0.3,14.15,22.66,0.23,23.95,0.45,2.64,0.03,5.59,0.29,8.36,0.4,21.7,0.3,🟒 pretrained,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,349,True,b67d6a03ca097c5122fa65904fce0413500bf8c8,True,True,2024-06-12,2024-05-22,False,True,mistralai/Mistral-7B-v0.3,0
+ 🟒,mistral-community/Mistral-7B-v0.2,14.15,22.66,0.23,23.95,0.45,2.64,0.03,5.59,0.29,8.36,0.4,21.7,0.3,🟒 pretrained,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,230,True,2c3e624962b1a3f3fbf52e15969565caa7bc064a,True,True,2024-06-12,2024-03-23,False,True,mistral-community/Mistral-7B-v0.2,0
  🟒,awnr/Mistral-7B-v0.1-signtensors-7-over-16,14.15,22.94,0.23,21.04,0.43,3.25,0.03,7.16,0.3,7.93,0.4,22.56,0.3,🟒 pretrained,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,1,True,0e1f2cb0a81c38fc6c567d9c007883ab62fae266,True,True,2024-07-29,2024-07-29,False,False,awnr/Mistral-7B-v0.1-signtensors-7-over-16,0
  πŸ”Ά,netcat420/MFANNv0.19,14.14,30.57,0.31,24.92,0.47,2.64,0.03,7.61,0.31,2.72,0.35,16.36,0.25,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama3.1,8,0,True,af26a25549b7ad291766c479bebda58f15fbff42,True,True,2024-07-27,2024-07-27,False,False,netcat420/MFANNv0.19,0
  🀝,johnsutor/Llama-3-8B-Instruct_dare_linear (Merge),14.12,21.45,0.21,19.61,0.43,0.0,0.0,6.15,0.3,21.81,0.5,15.72,0.24,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,8,0,True,abb81fd8fdc2ad32f65befcb7ae369c9837cd563,True,True,2024-06-26,2024-06-07,False,False,johnsutor/Llama-3-8B-Instruct_dare_linear,1
 
  πŸ’¬,Enno-Ai/EnnoAi-Pro-Llama-3-8B,12.17,31.95,0.32,17.51,0.42,0.15,0.0,1.57,0.26,9.08,0.41,12.79,0.22,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,creativeml-openrail-m,8,0,True,6a5d745bdd304753244fe601e2a958d37d13cd71,True,True,2024-07-08,2024-07-01,True,False,Enno-Ai/EnnoAi-Pro-Llama-3-8B,0
  🟒,awnr/Mistral-7B-v0.1-signtensors-5-over-16,12.16,21.18,0.21,17.54,0.41,2.19,0.02,4.14,0.28,6.14,0.37,21.75,0.3,🟒 pretrained,MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,1,True,5ea13b3d0723237889e1512bc70dae72f71884d1,True,True,2024-07-29,2024-07-29,False,False,awnr/Mistral-7B-v0.1-signtensors-5-over-16,0
  πŸ”Ά,NousResearch/Llama-2-13b-hf,12.12,26.68,0.27,18.21,0.42,0.83,0.01,3.02,0.27,8.53,0.4,15.44,0.24,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,mit,13,53,True,bcad6fff9f8591e091d2d57356a3f102197e8c5f,True,True,2024-06-12,2023-09-06,False,True,teknium/OpenHermes-13B,1
+ πŸ’¬,internlm/internlm2_5-1_8b-chat,12.11,38.49,0.38,21.03,0.45,0.0,0.0,5.37,0.29,4.42,0.36,3.32,0.13,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",InternLM2ForCausalLM,Original,bfloat16,True,other,1,18,True,4426f00b854561fa60d555d2b628064b56bcb758,True,True,2024-08-07,2024-07-30,True,True,internlm/internlm2_5-1_8b-chat,0
  πŸ’¬,unsloth/mistral-7b-v0.3-bnb-4bit,12.08,37.7,0.38,14.86,0.4,0.53,0.01,2.24,0.27,2.97,0.36,14.2,0.23,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,bfloat16,True,apache-2.0,7,1,True,868d8a51e8deb6fd948eabe5bc296c53bcf41073,True,True,2024-08-08,2024-08-04,True,False,llmat/Mistral-v0.3-7B-ORPO,1
  πŸ’¬,unsloth/mistral-7b-v0.3-bnb-4bit,12.02,36.4,0.36,15.59,0.4,0.15,0.0,2.57,0.27,2.97,0.35,14.46,0.23,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",MistralForCausalLM,Original,float16,True,apache-2.0,7,1,True,868d8a51e8deb6fd948eabe5bc296c53bcf41073,True,True,2024-08-06,2024-08-04,True,False,llmat/Mistral-v0.3-7B-ORPO,1
  πŸ”Ά,TencentARC/MetaMath-Mistral-Pro,12.01,21.19,0.21,22.37,0.44,4.61,0.05,2.57,0.27,4.99,0.35,16.35,0.25,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,bfloat16,True,apache-2.0,8,5,True,3835d38de15ed2a04c32aca879b782fc50e390bf,True,True,2024-06-12,2024-02-26,False,True,TencentARC/MetaMath-Mistral-Pro,0
 
  πŸ”Ά,Qwen/Qwen2-1.5B,11.07,30.14,0.3,10.43,0.35,0.91,0.01,2.46,0.27,9.74,0.41,12.74,0.21,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,1,21,True,86fcccbf921b7eb8a4d348e4a3cde0beb63d6626,True,True,2024-06-26,2024-06-23,True,False,Replete-AI/Replete-Coder-Qwen2-1.5b,1
  πŸ’¬,meta-llama/Llama-2-13b-chat-hf,11.0,39.85,0.4,7.16,0.33,0.6,0.01,0.0,0.23,8.16,0.4,10.26,0.19,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,llama2,13,1014,True,a2cb7a712bb6e5e736ca7f8cd98167f81a0b5bd8,True,True,2024-06-12,2023-07-13,True,True,meta-llama/Llama-2-13b-chat-hf,0
  🟒,meta-llama/Llama-2-13b-hf,10.99,24.82,0.25,17.22,0.41,1.06,0.01,4.14,0.28,3.39,0.35,15.31,0.24,🟒 pretrained,LlamaForCausalLM,Original,float16,True,llama2,13,566,True,5c31dfb671ce7cfe2d7bb7c04375e44c55e815b1,True,True,2024-06-12,2023-07-13,False,True,meta-llama/Llama-2-13b-hf,0
+ πŸ’¬,THUDM/glm-4-9b-chat,10.97,0.0,0.0,25.21,0.47,0.0,0.0,8.5,0.31,8.06,0.4,24.07,0.32,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",ChatGLMModelM,Original,bfloat16,True,other,9,547,True,04419001bc63e05e70991ade6da1f91c4aeec278,True,True,2024-07-09,2024-06-04,True,False,THUDM/glm-4-9b-chat,0
  πŸ”Ά,winglian/Llama-3-8b-64k-PoSE,10.89,28.57,0.29,13.31,0.37,2.64,0.03,1.45,0.26,3.08,0.34,16.3,0.25,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,,8,74,False,5481d9b74a3ec5a95789673e194c8ff86e2bc2bc,True,True,2024-06-26,2024-04-24,True,False,winglian/Llama-3-8b-64k-PoSE,0
  πŸ”Ά,Josephgflowers/Cinder-Phi-2-V1-F16-gguf,10.86,23.57,0.24,22.45,0.44,0.0,0.0,4.25,0.28,1.97,0.34,12.9,0.22,πŸ”Ά fine-tuned on domain-specific datasets,PhiForCausalLM,Original,float16,True,mit,2,4,True,85629ec9b18efee31d07630664e7a3815121badf,True,True,2024-06-26,2024-02-25,True,False,Josephgflowers/Cinder-Phi-2-V1-F16-gguf,0
  πŸ”Ά,lmsys/vicuna-7b-v1.5,10.78,23.52,0.24,15.15,0.39,0.76,0.01,1.12,0.26,11.42,0.42,12.74,0.21,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,llama2,7,286,True,3321f76e3f527bd14065daf69dad9344000a201d,True,True,2024-06-12,2023-07-29,False,True,lmsys/vicuna-7b-v1.5,0
  πŸ’¬,allenai/OLMo-7B-Instruct-hf,10.76,34.73,0.35,13.16,0.37,0.83,0.01,2.8,0.27,4.33,0.38,8.72,0.18,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",OlmoForCausalLM,Original,bfloat16,True,apache-2.0,7,0,True,2ea947518df93433aa71219f29b36c72ac63be95,True,True,2024-06-27,2024-06-04,True,True,allenai/OLMo-7B-Instruct-hf,0
  πŸ”Ά,yam-peleg/Hebrew-Mistral-7B-200K,10.64,18.56,0.19,17.49,0.41,2.34,0.02,3.47,0.28,4.53,0.38,17.48,0.26,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,apache-2.0,7,15,True,7b51c7b31e3d9e29ea964c579a45233cfad255fe,True,True,2024-07-11,2024-05-05,False,False,yam-peleg/Hebrew-Mistral-7B-200K,0
  πŸ”Ά,kno10/ende-chat-0.0.5,10.61,34.04,0.34,11.13,0.36,0.6,0.01,2.01,0.27,7.1,0.39,8.78,0.18,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,apache-2.0,7,0,True,fff913e8ce204bab72b02582b663db669cb61412,True,True,2024-06-27,2024-06-27,True,False,kno10/ende-chat-0.0.5,0
  πŸ’¬,internlm/internlm2-chat-1_8b,10.5,23.87,0.24,20.67,0.45,2.42,0.02,2.13,0.27,4.61,0.36,9.33,0.18,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",InternLM2ForCausalLM,Original,bfloat16,True,other,1,28,True,4e226eeb354499f4d34ef4c27f6939f377475cc1,True,True,2024-06-12,2024-01-30,True,True,internlm/internlm2-chat-1_8b,0
+ πŸ”Ά,VAGOsolutions/SauerkrautLM-gemma-2-2b-it,10.47,13.21,0.13,18.91,0.42,0.08,0.0,3.02,0.27,8.77,0.4,18.81,0.27,πŸ”Ά fine-tuned on domain-specific datasets,Gemma2ForCausalLM,Original,bfloat16,True,gemma,2,5,True,7fd35fcb32aebfc422e535739161d7528fc562d5,True,True,2024-08-26,2024-08-03,True,False,VAGOsolutions/SauerkrautLM-gemma-2-2b-it,0
  πŸ’¬,tiiuae/falcon-40b-instruct,10.41,24.54,0.25,17.22,0.41,1.51,0.02,0.0,0.25,5.16,0.38,14.02,0.23,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",FalconForCausalLM,Original,bfloat16,True,apache-2.0,40,1171,True,ecb78d97ac356d098e79f0db222c9ce7c5d9ee5f,True,True,2024-06-09,2023-05-25,False,True,tiiuae/falcon-40b-instruct,0
+ 🟒,Qwen/Qwen2-1.5B,10.32,21.13,0.21,11.78,0.36,6.27,0.06,1.9,0.26,3.59,0.37,17.24,0.26,🟒 pretrained,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,1,65,True,8a16abf2848eda07cc5253dec660bf1ce007ad7a,True,True,2024-06-09,2024-05-31,False,True,Qwen/Qwen2-1.5B,0
  🟩,meta-llama/Meta-Llama-3-8B,10.28,10.44,0.1,18.68,0.42,1.51,0.02,4.47,0.28,6.15,0.39,20.44,0.28,🟩 continuously pretrained,LlamaForCausalLM,Original,bfloat16,True,llama3,6,6,True,7000b39346162f95f19aa4ca3975242db61902d7,True,True,2024-06-26,2024-05-17,False,False,pszemraj/Llama-3-6.3b-v0.1,1
  πŸ”Ά,lmsys/vicuna-13b-v1.3,10.27,33.44,0.33,7.49,0.34,0.45,0.0,2.35,0.27,4.09,0.37,13.81,0.22,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,,13,194,True,6566e9cb1787585d1147dcf4f9bc48f29e1328d2,True,True,2024-06-28,2023-06-18,True,True,lmsys/vicuna-13b-v1.3,0
  πŸ”Ά,google/gemma-2-2b,10.26,20.18,0.2,12.5,0.37,2.42,0.02,1.68,0.26,11.27,0.42,13.52,0.22,πŸ”Ά fine-tuned on domain-specific datasets,Gemma2ForCausalLM,Original,float16,True,gemma,2,302,True,0738188b3055bc98daf0fe7211f0091357e5b979,True,True,2024-08-04,2024-07-16,False,True,google/gemma-2-2b,0
 
  🟒,google/flan-t5-large,9.42,22.01,0.22,17.51,0.42,0.0,0.0,0.11,0.25,9.01,0.41,7.88,0.17,🟒 pretrained,T5ForConditionalGeneration,Original,float16,True,apache-2.0,0,523,True,0613663d0d48ea86ba8cb3d7a44f0f65dc596a2a,True,True,2024-08-14,2022-10-21,False,True,google/flan-t5-large,0
  🟒,meta-llama/Llama-2-7b-chat-hf,9.4,39.65,0.4,4.49,0.31,0.68,0.01,0.56,0.25,3.48,0.37,7.52,0.17,🟒 pretrained,LlamaForCausalLM,Original,float16,True,llama2,6,3801,True,f5db02db724555f92da89c216ac04704f23d4590,True,True,2024-08-30,2023-07-13,True,True,meta-llama/Llama-2-7b-chat-hf,0
  πŸ”Ά,iRyanBell/ARC1-II,9.32,17.08,0.17,7.25,0.34,0.76,0.01,2.91,0.27,20.31,0.49,7.62,0.17,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,llama3,8,1,True,c81076b9bdaac0722b33e411a49b07a296e8fae8,True,True,2024-06-26,2024-06-12,False,False,iRyanBell/ARC1-II,0
+ πŸ”Ά,google/gemma-2-27b,9.3,24.07,0.24,15.31,0.39,0.0,0.0,4.03,0.28,1.6,0.35,10.79,0.2,πŸ”Ά fine-tuned on domain-specific datasets,Gemma2ForCausalLM,Original,bfloat16,True,gemma,27,9,True,27f15219df2000a16955c9403c3f38b5f3413b3d,True,True,2024-08-27,2024-08-13,True,False,AALF/gemma-2-27b-it-SimPO-37K,2
  πŸ”Ά,NousResearch/Nous-Hermes-llama-2-7b,9.28,17.29,0.17,13.79,0.38,0.68,0.01,1.79,0.26,11.68,0.43,10.44,0.19,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,mit,6,68,True,b7c3ec54b754175e006ef75696a2ba3802697078,True,True,2024-06-12,2023-07-25,False,True,NousResearch/Nous-Hermes-llama-2-7b,0
  πŸ’¬,stabilityai/stablelm-2-zephyr-1_6b,9.26,32.79,0.33,6.71,0.34,2.11,0.02,0.0,0.24,5.99,0.35,7.93,0.17,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",StableLmForCausalLM,Original,float16,True,other,1,176,True,2f275b1127d59fc31e4f7c7426d528768ada9ea4,True,True,2024-06-12,2024-01-19,True,True,stabilityai/stablelm-2-zephyr-1_6b,0
  πŸ”Ά,huggyllama/llama-13b,9.25,24.11,0.24,16.15,0.4,1.21,0.01,0.67,0.26,2.81,0.35,10.58,0.2,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,float16,True,other,13,137,True,bf57045473f207bb1de1ed035ace226f4d9f9bba,True,True,2024-07-04,2023-04-03,False,False,huggyllama/llama-13b,0
 
  🟒,facebook/opt-1.3b,5.25,23.83,0.24,3.65,0.31,0.76,0.01,0.0,0.24,2.08,0.34,1.19,0.11,🟒 pretrained,OPTForCausalLM,Original,float16,True,other,1,147,True,3f5c25d0bc631cb57ac65913f76e22c2dfb61d62,True,True,2024-06-12,2022-05-11,False,True,facebook/opt-1.3b,0
  πŸ’¬,microsoft/DialoGPT-medium,5.25,14.79,0.15,2.56,0.3,0.0,0.0,0.56,0.25,12.28,0.43,1.32,0.11,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",GPT2LMHeadModel,Original,bfloat16,True,mit,0,314,True,7b40bb0f92c45fefa957d088000d8648e5c7fa33,True,True,2024-06-13,2022-03-02,True,True,microsoft/DialoGPT-medium,0
  🟒,stabilityai/stablelm-2-1_6b,5.22,11.57,0.12,8.63,0.34,0.15,0.0,0.0,0.25,5.79,0.39,5.15,0.15,🟒 pretrained,StableLmForCausalLM,Original,float16,True,other,1,177,True,8879812cccd176fbbe9ceb747b815bcc7d6499f8,True,True,2024-06-12,2024-01-18,False,True,stabilityai/stablelm-2-1_6b,0
+ πŸ’¬,HuggingFaceTB/SmolLM-1.7B,5.14,23.48,0.23,2.08,0.29,0.0,0.0,1.34,0.26,2.08,0.35,1.85,0.12,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,1,94,True,0ad161e59935a9a691dfde2818df8b98786f30a7,True,True,2024-07-18,2024-07-15,True,False,HuggingFaceTB/SmolLM-1.7B-Instruct,1
  🟒,Qwen/Qwen1.5-0.5B,5.14,17.06,0.17,5.04,0.32,0.45,0.0,0.56,0.25,4.3,0.36,3.41,0.13,🟒 pretrained,Qwen2ForCausalLM,Original,bfloat16,True,other,0,139,True,8f445e3628f3500ee69f24e1303c9f10f5342a39,True,True,2024-06-13,2024-01-22,False,True,Qwen/Qwen1.5-0.5B,0
  🟒,EleutherAI/pythia-410m,5.11,21.95,0.22,2.72,0.3,0.3,0.0,1.23,0.26,3.06,0.36,1.42,0.11,🟒 pretrained,GPTNeoXForCausalLM,Original,float16,True,apache-2.0,0,21,True,9879c9b5f8bea9051dcb0e68dff21493d67e9d4f,True,True,2024-06-09,2023-02-13,False,True,EleutherAI/pythia-410m,0
  🟒,tiiuae/falcon-7b,5.1,18.21,0.18,5.96,0.33,0.53,0.01,0.0,0.24,4.5,0.38,1.39,0.11,🟒 pretrained,FalconForCausalLM,Original,bfloat16,True,apache-2.0,7,1062,True,898df1396f35e447d5fe44e0a3ccaaaa69f30d36,True,True,2024-06-09,2023-04-24,False,True,tiiuae/falcon-7b,0
  πŸ’¬,tiiuae/falcon-7b-instruct,5.02,19.69,0.2,4.82,0.32,0.6,0.01,0.0,0.25,3.25,0.36,1.73,0.12,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",FalconForCausalLM,Original,bfloat16,True,apache-2.0,7,900,True,cf4b3c42ce2fdfe24f753f0f0d179202fea59c99,True,True,2024-06-09,2023-04-25,False,True,tiiuae/falcon-7b-instruct,0
+ 🟒,openai-community/gpt2-xl,4.98,20.39,0.2,2.58,0.3,0.3,0.0,1.12,0.26,4.04,0.37,1.46,0.11,🟒 pretrained,GPT2LMHeadModel,Original,bfloat16,True,mit,1,303,True,15ea56dee5df4983c59b2538573817e1667135e2,True,True,2024-06-12,2022-03-02,False,True,openai-community/gpt2-xl,0
  πŸ”Ά,LeroyDyer/_Spydaz_Web_AI_ALPACA,4.95,14.15,0.14,5.6,0.32,0.0,0.0,2.13,0.27,2.56,0.34,5.28,0.15,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,mit,7,1,True,9f86dd12d4c75e0290aa3084a44cf111bc975144,True,True,2024-08-06,2024-08-02,False,False,LeroyDyer/_Spydaz_Web_AI_ChatQA,1
  πŸ”Ά,RESMPDEV/Qwen2-Wukong-0.5B,4.95,18.54,0.19,4.2,0.31,0.0,0.0,0.0,0.24,3.33,0.35,3.64,0.13,πŸ”Ά fine-tuned on domain-specific datasets,Qwen2ForCausalLM,Original,bfloat16,True,apache-2.0,0,6,True,52c58a4aa3d0b44c363c5761fa658243f5c53943,True,True,2024-06-30,2024-06-29,True,False,RESMPDEV/Qwen2-Wukong-0.5B,0
  πŸ”Ά,togethercomputer/GPT-NeoXT-Chat-Base-20B,4.94,18.3,0.18,6.83,0.33,1.13,0.01,0.0,0.25,1.76,0.35,1.61,0.11,πŸ”Ά fine-tuned on domain-specific datasets,GPTNeoXForCausalLM,Original,float16,True,apache-2.0,20,694,True,d386708e84d862a65f7d2b4989f64750cb657227,True,True,2024-06-12,2023-03-03,False,True,togethercomputer/GPT-NeoXT-Chat-Base-20B,0
  πŸ”Ά,togethercomputer/RedPajama-INCITE-Chat-3B-v1,4.75,16.52,0.17,5.16,0.32,0.3,0.0,0.0,0.24,5.09,0.37,1.41,0.11,πŸ”Ά fine-tuned on domain-specific datasets,GPTNeoXForCausalLM,Original,float16,True,apache-2.0,3,150,True,f0e0995eba801096ed04cb87931d96a8316871af,True,True,2024-06-13,2023-05-05,False,True,togethercomputer/RedPajama-INCITE-Chat-3B-v1,0
+ πŸ”Ά,HuggingFaceTB/SmolLM-360M,4.71,19.52,0.2,2.08,0.29,0.0,0.0,1.9,0.26,2.9,0.35,1.85,0.12,πŸ”Ά fine-tuned on domain-specific datasets,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,0,63,True,8e951de8c220295ea4f85d078c4e320df7137535,True,True,2024-08-20,2024-07-15,True,False,HuggingFaceTB/SmolLM-360M-Instruct,1
  🟒,TinyLlama/TinyLlama_v1.1,4.7,20.01,0.2,3.21,0.3,0.45,0.0,0.0,0.25,3.98,0.37,0.54,0.1,🟒 pretrained,LlamaForCausalLM,Original,bfloat16,True,apache-2.0,1,56,True,ff3c701f2424c7625fdefb9dd470f45ef18b02d6,True,True,2024-06-12,2024-03-09,False,True,TinyLlama/TinyLlama_v1.1,0
  🟒,AI-Sweden-Models/gpt-sw3-40b,4.68,14.7,0.15,6.89,0.33,0.6,0.01,0.0,0.23,2.84,0.36,3.06,0.13,🟒 pretrained,GPT2LMHeadModel,Original,float16,True,other,39,8,True,1af27994df1287a7fac1b10d60e40ca43a22a385,True,True,2024-06-26,2023-02-22,False,False,AI-Sweden-Models/gpt-sw3-40b,0
  🀝,paloalma/TW3-JRGL-v2,4.57,3.1,0.03,4.11,0.31,5.21,0.05,0.78,0.26,12.38,0.43,1.85,0.12,🀝 base merges and moerges,LlamaForCausalLM,Original,bfloat16,False,apache-2.0,72,0,True,aca3f0ba2bfb90038a9e2cd5b486821d4c181b46,True,True,2024-08-29,2024-04-01,False,False,paloalma/TW3-JRGL-v2,0
 
  🟒,EleutherAI/gpt-neo-125m,4.38,19.05,0.19,3.44,0.31,0.45,0.0,0.45,0.25,2.62,0.36,0.28,0.1,🟒 pretrained,GPTNeoForCausalLM,Original,bfloat16,True,mit,0,176,True,21def0189f5705e2521767faed922f1f15e7d7db,True,True,2024-08-10,2022-03-02,False,True,EleutherAI/gpt-neo-125m,0
  πŸ”Ά,LeroyDyer/Mixtral_AI_SwahiliTron_7b,4.27,15.34,0.15,3.21,0.31,0.83,0.01,2.01,0.27,1.92,0.34,2.31,0.12,πŸ”Ά fine-tuned on domain-specific datasets,MistralForCausalLM,Original,float16,True,mit,7,3,True,fd997ccdee03788e7e79944d26d9c641dc4fcd4c,True,False,2024-07-12,2024-04-10,True,False,LeroyDyer/Mixtral_AI_SwahiliTron_7b,0
  🟒,bigscience/bloom-3b,4.26,12.71,0.13,3.42,0.31,0.08,0.0,0.0,0.24,7.89,0.4,1.48,0.11,🟒 pretrained,BloomForCausalLM,Original,bfloat16,True,bigscience-bloom-rail-1.0,3,88,True,52bc5b43010b4844513826b8be3f78c7344c37d7,True,True,2024-06-13,2022-05-19,False,True,bigscience/bloom-3b,0
+ πŸ’¬,HuggingFaceTB/SmolLM-135M,4.23,15.96,0.16,2.08,0.29,0.0,0.0,1.9,0.26,3.62,0.37,1.84,0.12,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,0,81,True,8ca7af58e27777cae460ad8ca3ab9db15f5c160d,True,True,2024-07-18,2024-07-15,True,False,HuggingFaceTB/SmolLM-135M-Instruct,1
  πŸ’¬,JackFram/llama-160m,4.1,15.75,0.16,3.17,0.3,0.0,0.0,1.01,0.26,3.17,0.37,1.51,0.11,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,float16,True,apache-2.0,0,13,True,e7f50665676821867ee7dfad32d0ca9fb68fc6bc,True,True,2024-07-23,2023-12-20,True,False,Felladrin/Llama-160M-Chat-v1,1
  πŸ’¬,davidkim205/Rhea-72b-v0.5,4.02,1.45,0.01,3.67,0.31,5.51,0.06,0.34,0.25,11.32,0.42,1.85,0.12,"πŸ’¬ chat models (RLHF, DPO, IFT, ...)",LlamaForCausalLM,Original,bfloat16,True,apache-2.0,72,131,True,bc3806efb23d2713e6630a748d9747fd76b27169,True,True,2024-07-01,2024-03-22,True,False,davidkim205/Rhea-72b-v0.5,0
  🟒,bigscience/bloom-1b7,3.97,10.44,0.1,4.4,0.31,0.08,0.0,1.12,0.26,6.84,0.39,0.96,0.11,🟒 pretrained,BloomForCausalLM,Original,bfloat16,True,bigscience-bloom-rail-1.0,1,116,True,cc72a88036c2fb937d65efeacc57a0c2ef5d6fe5,True,True,2024-06-13,2022-05-19,False,True,bigscience/bloom-1b7,0