---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- HuggingFaceH4/deita-10k-v0-sft
tags:
- unsloth
---
# Model Card for Gemma 7B SFT Adapter
This adapter for Google's Gemma 7B was fine-tuned (SFT) with Unsloth on the HuggingFaceH4/deita-10k-v0-sft instruction dataset.
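
Below is a minimal loading sketch. It assumes this repository holds a standard PEFT (LoRA) adapter on top of `google/gemma-7b`; the adapter ID is a placeholder to replace with this repository's actual ID.

```python
# Minimal loading sketch. Assumptions: this repo is a standard PEFT (LoRA)
# adapter and the base model is google/gemma-7b; the adapter ID below is a
# placeholder for this repository's actual ID.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "google/gemma-7b"  # assumed base model
adapter_id = "<this-repo-id>"      # replace with this adapter's repository ID

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Attach the fine-tuned SFT adapter to the base model
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Explain instruction fine-tuning in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```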

## Model Details

The model was created with the recipe detailed in this article: [Fine-tune a Better Google Gemma with Unsloth and Distilled DPO](https://kaitchup.substack.com/p/fine-tune-a-better-google-gemma-with).
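
As a rough illustration of such a recipe, here is an Unsloth SFT outline. The hyperparameters, dataset split, and sequence length are placeholders rather than the article's exact values, and the distilled DPO stage is omitted.

```python
# Illustrative Unsloth SFT outline. Hyperparameters and the dataset split are
# placeholders, not the article's exact recipe; the DPO stage is omitted.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load Gemma 7B in 4-bit with Unsloth (QLoRA-style loading; assumption)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-7b",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Wrap the model with a LoRA adapter (rank and target modules are placeholders)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# deita-10k-v0-sft stores conversations in a "messages" column; recent TRL
# versions render it with the tokenizer's chat template automatically.
dataset = load_dataset("HuggingFaceH4/deita-10k-v0-sft", split="train_sft")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="./gemma-7b-sft-deita",  # placeholder output path
        max_seq_length=2048,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```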

- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Model type:** Causal language model
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)