---
library_name: peft
tags:
- code
- instruct
- mistral
datasets:
- cognitivecomputations/dolphin-coder
base_model: mistralai/Mistral-7B-v0.1
license: apache-2.0
---

### Finetuning Overview:

**Model Used:** mistralai/Mistral-7B-v0.1

**Dataset:** cognitivecomputations/dolphin-coder

#### Dataset Insights:

The [Dolphin-Coder](https://huggingface.co/datasets/cognitivecomputations/dolphin-coder) dataset is a high-quality collection of 100,000+ coding questions and responses. It is well suited for supervised fine-tuning (SFT) and for teaching language models to improve on coding-based tasks.

#### Finetuning Details:

Using [MonsterAPI](https://monsterapi.ai)'s [no-code LLM finetuner](https://monsterapi.ai/finetuning), this finetuning:

- Was performed cost-effectively.
- Completed in 7 hrs 36 min for 0.5 epochs on an A6000 48GB GPU.
- Cost `$15.2` for the entire run.

#### Hyperparameters & Additional Details:

- **Epochs:** 0.5
- **Cost for full run:** $15.2
- **Model Path:** mistralai/Mistral-7B-v0.1
- **Learning Rate:** 0.0002
- **Data Split:** 100% train
- **Gradient Accumulation Steps:** 128
- **lora r:** 32
- **lora alpha:** 64

(Illustrative configuration and usage sketches follow below.)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6313732454e6e5d9f0f797cd/0O1VKp3SJNfrhTd5earci.png)
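#### Example LoRA Configuration (illustrative):

A minimal sketch of a `peft` `LoraConfig` that mirrors the hyperparameters listed above. The `target_modules`, `lora_dropout`, and `bias` values are assumptions, as they are not specified in this card.

```python
from peft import LoraConfig

# Illustrative LoRA configuration mirroring the hyperparameters listed above.
lora_config = LoraConfig(
    r=32,               # lora r (from this card)
    lora_alpha=64,      # lora alpha (from this card)
    lora_dropout=0.05,  # assumption: not listed in this card
    bias="none",        # assumption: not listed in this card
    task_type="CAUSAL_LM",
    # assumption: typical attention projections for Mistral; actual modules may differ
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```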
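#### Example Usage (illustrative):

A sketch of how such a PEFT adapter is typically loaded on top of the base model with `transformers` and `peft`. The adapter repository id is a placeholder, and generation settings are assumptions for illustration only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "<this-adapter-repo-id>"  # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the LoRA adapter produced by this finetuning run.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```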