Amazing model
It passes most of my internal Q&A tests; really good model!
I'm experimenting with LoRA extracts, diffs, and merges of your model (rough sketch of what I mean below), and I'm also doing some LIMA-style finetunes, which brings me to the key question:
- Is this model finetuned from Phind v2 or from base CodeLlama?
I wonder whether finetuning finetunes is a way forward or just another rabbit hole. What's your opinion?
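For context, here is roughly what I mean by a LoRA extract: a minimal torch sketch under my own assumptions (the function name and rank are illustrative, not code from your repo):

```python
import torch

def extract_lora(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 16):
    """Approximate the finetuning delta (w_tuned - w_base) with a rank-`rank`
    factorization via truncated SVD: the usual LoRA-extraction trick."""
    delta = (w_tuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    root_s = torch.sqrt(s[:rank])              # split singular values across both factors
    lora_b = u[:, :rank] * root_s              # (out_features, rank)
    lora_a = root_s.unsqueeze(1) * vh[:rank]   # (rank, in_features)
    return lora_a, lora_b

# A "merge" is then just adding the low-rank product onto another checkpoint:
#   w_merged = w_other + lora_b @ lora_a
```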
Thanks!
Thank you for the recognition of our work.
Our goal is to bring the reasoning ability of large models into practical business scenarios. Drawing on the advanced work and insights currently known in the industry, we chose a technical route built on a base model with strong code capability, so the models released so far are intermediate results rather than final goals. Under this guiding principle, efficient and powerful finetuning is a key foundational capability: it lets us respond at any time to whatever new requirements are placed on the base model.
The timely arrival of CodeLlama has allowed us to carry out this planned work smoothly. We are also watching the recently popular Mistral, as well as ToRA, a powerful mathematical reasoning model built on CodeLlama.
Our ongoing LoRA research aims at concurrent, collaborative multi-agent scenarios, along the lines of a mixture-of-LoRA concept.
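To illustrate the direction, here is a minimal, hypothetical sketch in torch (class and parameter names are illustrative, not our actual implementation): several LoRA adapters share one frozen base layer, and a learned gate mixes their contributions per input.

```python
import torch
import torch.nn as nn

class MixtureOfLoRA(nn.Module):
    """One frozen base linear layer plus several LoRA adapters whose
    outputs are mixed per input by a learned softmax gate (illustrative)."""
    def __init__(self, base: nn.Linear, num_adapters: int = 4, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # train only the adapters and the gate
        in_f, out_f = base.in_features, base.out_features
        self.lora_a = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, in_f) * 0.01) for _ in range(num_adapters)])
        self.lora_b = nn.ParameterList(                      # B starts at zero, so each
            [nn.Parameter(torch.zeros(out_f, rank)) for _ in range(num_adapters)])  # adapter is initially a no-op
        self.gate = nn.Linear(in_f, num_adapters)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mix = torch.softmax(self.gate(x), dim=-1)            # (..., num_adapters)
        out = self.base(x)
        for i, (a, b) in enumerate(zip(self.lora_a, self.lora_b)):
            out = out + mix[..., i : i + 1] * (x @ a.T @ b.T)  # gated low-rank update
        return out
```

Each adapter could come from a separate LoRA extract of a specialist finetune, which is where this connects to the extraction question above.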