Update Model Categorization System?

#2
by CombinHorizon - opened

btw, the main leaderboard has updated to a different categorization system:
🟢 pretrained
🟩 continuously pretrained
🔶 fine-tuned on domain-specific datasets
💬 chat models (RLHF, DPO, IFT, ...)
🤝 base merges and moerges

maybe also add a category:
🆎 language adapted (FP, FT, ...)

the change was made because it wasn't always clear, either in principle or from a practical standpoint, which category to put some models in

so the mapping is:
🟢 pretrained → 🟢 or 🟩
🔶 fine-tuned on domain-specific datasets → 🔶
⭕ instruction-tuned → 💬
🟦 RL-tuned → 💬
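The old-to-new remapping above could be expressed as a small lookup table, e.g. (a hypothetical sketch; the category strings are illustrative and the leaderboard's actual metadata schema may differ):

```python
# Hypothetical old -> new category remapping, following the table above.
# Old "pretrained" splits into pretrained vs. continuously pretrained,
# which needs per-model judgment, so it is left as-is here.
OLD_TO_NEW = {
    "pretrained": "pretrained",        # 🟢 -> 🟢 (or 🟩, decided per model)
    "fine-tuned": "fine-tuned",        # 🔶 -> 🔶
    "instruction-tuned": "chat",       # ⭕ -> 💬
    "RL-tuned": "chat",                # 🟦 -> 💬
}

def remap(old_category: str) -> str:
    """Map an old leaderboard category to its new one; unknowns pass through."""
    return OLD_TO_NEW.get(old_category, old_category)
```

Unknown categories (e.g. merges, which already have their own 🤝 bucket) pass through unchanged.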

the following are direct merges/MoEs (🤝):

  • SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_SLERP
  • uygarkurt/llama-3-merged-linear
  • kekmodel/StopCarbon-10.7B-v5
  • jeonsworld/CarbonVillain-en-10.7B-v4
  • invalid-coder/Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp
  • shadowml/BeagSake-7B
  • zhengr/MixTAO-7Bx2-MoE-v8.1
  • yunconglong/DARE_TIES_13B
  • yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
  • shanchen/llama3-8B-slerp-med-chinese

🟩 continuously pretrained:

  • yam-peleg/Hebrew-Mistral-7B
  • yam-peleg/Hebrew-Gemma-11B-V2

🆎 language adapted (FP, FT, ...):

  • ronigold/dictalm2.0-instruct-fine-tuned-alpaca-gpt4-hebrew
  • SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned
  • ronigold/dictalm2.0-instruct-fine-tuned
  • SicariusSicariiStuff/Zion_Alpha

maybe add these models?

  • 🟩 yam-peleg/Hebrew-Mistral-7B-200K (FP32, BF16 is closest)
  • 🟩 yam-peleg/Hebrew-Mixtral-8x22B (FP16)
  • 💬 yam-peleg/Hebrew-Gemma-11B-Instruct (FP16)
