This repository contains a fine-tuned version of the SOLAR model, merged and customized for improved performance on specific tasks.
## Overview

The model was fine-tuned starting from a merged SOLAR base. The merge combined SOLAR-10.7B-Instruct and SOLAR-10.7B using the SLERP (spherical linear interpolation) method, applying layer-specific interpolation weights to the self-attention and feed-forward modules (see the sketch below).
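For intuition, SLERP interpolates between two weight tensors along the great-circle arc connecting them, rather than linearly. The following is a minimal, illustrative sketch of that operation on a single pair of tensors; the actual merge tooling and the per-layer interpolation factors used for this model are not documented here, so treat this as a conceptual example only.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors at factor t in [0, 1]."""
    v0_flat = v0.flatten().float()
    v1_flat = v1.flatten().float()

    # Angle between the two flattened weight vectors
    cos_theta = torch.dot(v0_flat, v1_flat) / (v0_flat.norm() * v1_flat.norm() + eps)
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    sin_theta = torch.sin(theta)

    # Nearly parallel vectors: fall back to plain linear interpolation
    if sin_theta.abs() < eps:
        return (1 - t) * v0 + t * v1

    # Standard SLERP weighting
    w0 = torch.sin((1 - t) * theta) / sin_theta
    w1 = torch.sin(t * theta) / sin_theta
    return (w0 * v0_flat + w1 * v1_flat).view(v0.shape).to(v0.dtype)
```

In a full merge, a function like this would be applied tensor-by-tensor across the checkpoints, with different `t` values for the self-attention and feed-forward modules of each layer.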
The subsequent fine-tuning specializes the model for domain-specific tasks while preserving the performance and generalization capabilities of the original SOLAR models.
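A minimal loading and generation sketch with the `transformers` library follows. The repository id below is a placeholder, and the prompt template is an assumption (SOLAR-10.7B-Instruct uses a `### User:` / `### Assistant:` format, which this fine-tune may or may not retain):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with this model's actual Hub repository id.
model_id = "your-org/solar-merged-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # place layers on available device(s) automatically
)

prompt = "### User:\nSummarize the benefits of model merging.\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```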