I'd like to request that you add my two new models to your leaderboard: DeepMagic-Coder-7b-Alt and Everyone-Coder-33b-v2-Base.
I have been making merges for weeks, even months now, and through all that effort I have discovered how to make merged models that are not only high quality but directly compete with fine-tuned models. I have released the following models as a direct result of this merging process:
These coding models use mergekit's task_arithmetic technique and are not only the best merged models I've ever made, but better than some fine-tuned models. Both models have none of the typical drawbacks of merged models, such as generation issues and other bugs. Not only do they work as well as any fine-tuned model, but they fully encapsulate the combined training of the models that were merged together.
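For anyone unfamiliar with the technique, a task_arithmetic merge in mergekit is driven by a small YAML config along these lines. This is a minimal illustrative sketch, not my exact recipe: the model names and weights below are placeholders.

```yaml
# Hypothetical mergekit task_arithmetic config (placeholder models/weights).
# task_arithmetic adds each model's delta from the base, scaled by its weight.
merge_method: task_arithmetic
base_model: path/to/base-coder-model
models:
  - model: path/to/finetune-a
    parameters:
      weight: 0.6
  - model: path/to/finetune-b
    parameters:
      weight: 0.4
dtype: float16
```

The weights control how strongly each fine-tune's "task vector" (its difference from the base) contributes to the merged model.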
I have tested both models extensively and they are extremely high quality. Everyone-Coder-33b-v2-Base has even passed expert coding challenges that gpt-4-1106 failed. DeepMagic-Coder-7b-Alt also punches far above its weight class, competing with gpt-3.5 at coding despite having only around 7b parameters.
The reason for these long paragraphs is to make the case for these models despite their being merges, because merged models often carry a negative stigma in the open-source AI community. I hope that those at lmsys do not feel the same way, because you would truly be missing out on the opportunity to experience these models, which I have created and shared completely free of charge with the open-source community.