Would you be willing to fine-tune a much more capable base Gemma model?

by rombodawg

Hello, I have created a much more capable base Gemma model using precise and highly refined merging techniques. The model is much higher quality than base Gemma-7b and performs exceptionally well at coding. I think it would be much better suited for a coding fine-tune. You can find the model linked below, along with more information in the model card.

https://huggingface.co/rombodawg/EveryoneLLM-7b-Gemma-Base
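
For readers new to model merging, here is a toy sketch of its simplest variant, plain linear weight averaging across checkpoints that share the Gemma-7b architecture. This is an illustration only, not the actual EveryoneLLM recipe, and the fine-tune repo names are hypothetical placeholders.

```python
# Toy sketch of linear (weight-averaging) merging. Illustration only,
# not the EveryoneLLM recipe; the fine-tune names are hypothetical.
import torch
from transformers import AutoModelForCausalLM

model_ids = [
    "google/gemma-7b",        # base model
    "example/gemma-7b-code",  # hypothetical fine-tune
    "example/gemma-7b-chat",  # hypothetical fine-tune
]
models = [
    AutoModelForCausalLM.from_pretrained(mid, torch_dtype=torch.float32)
    for mid in model_ids
]

merged = models[0]
others = [dict(m.named_parameters()) for m in models[1:]]
with torch.no_grad():
    for name, param in merged.named_parameters():
        # Uniform average of the same tensor across all checkpoints.
        stacked = torch.stack([param] + [o[name] for o in others])
        param.copy_(stacked.mean(dim=0))

merged.save_pretrained("gemma-7b-linear-merge")
```

Real merging tools use more sophisticated schemes (weighted, task-vector, SLERP, etc.), but they all operate on the same principle of combining per-tensor weights.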

This seems to be a merge of many fine-tuned models, and fine-tuning such merges again often does not achieve good results. I will update if there are any major improvements.

Thank you, I appreciate you taking this seriously. I'm very confident in my merges, as I've spent months perfecting my techniques, and I believe they will achieve good results with a fine-tune. I look forward to hearing your results. 🙂

You have to check this out, @TechxGenus. These findings should mean massive improvements for Gemma fine-tuning:

https://www.reddit.com/r/LocalLLaMA/comments/1bd18y8/gemma_finetuning_should_be_much_better_now/

I've opened an official issue asking transformers to implement a fix:
https://github.com/huggingface/transformers/issues/29616
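
For reference, the most widely reported of those findings was Gemma's activation function: the model was trained with the tanh-approximated GELU, but early configs specified exact "gelu". Below is a minimal sketch of forcing the correct activation by hand on an older transformers version; recent releases apply the fix by default, so this override should not normally be needed.

```python
# Minimal sketch of manually applying the widely reported Gemma
# activation fix on older transformers versions. Recent releases
# already default to the tanh-approximated GELU.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "google/gemma-7b"
config = AutoConfig.from_pretrained(model_id)
config.hidden_act = "gelu_pytorch_tanh"         # read by older versions
config.hidden_activation = "gelu_pytorch_tanh"  # preferred by newer versions

model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, torch_dtype=torch.bfloat16
)
```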

@TechxGenus I would like to share that we at Replete-AI have created a new model called Mistral-11b-v0.1, which expands the size of mistral-7b and continues pretraining on it. Feel free to check it out. I would love to see a coding variant if your team is at all interested.

https://huggingface.co/Replete-AI/Mistral-11b-v0.1
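
For context, the general "expand, then continue pretraining" idea can be done by depth up-scaling, i.e. duplicating decoder blocks before further training. The sketch below is a guess at that general technique, not Replete-AI's actual procedure; the layer ranges are purely illustrative.

```python
# Toy sketch of depth up-scaling: grow a 32-layer Mistral-7B to 48 layers
# by duplicating the middle decoder blocks, then continue pretraining.
# NOT Replete-AI's actual recipe; layer ranges are illustrative.
import copy
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16
)
layers = model.model.layers  # nn.ModuleList of 32 decoder blocks

# Duplicate blocks 8..23 and splice the copies in after block 23.
dup = [copy.deepcopy(block) for block in layers[8:24]]
expanded = list(layers[:24]) + dup + list(layers[24:])
model.model.layers = torch.nn.ModuleList(expanded)
model.config.num_hidden_layers = len(expanded)  # now 48

# Renumber per-layer attention indices so KV caching stays correct.
for idx, block in enumerate(model.model.layers):
    block.self_attn.layer_idx = idx

model.save_pretrained("mistral-11b-upscaled")  # then continue pretraining
```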
