---
library_name: peft
base_model: kyujinpy/Sakura-SOLAR-Instruct
license: apache-2.0
datasets:
- meta-math/MetaMathQA
language:
- en
pipeline_tag: text-generation
tags:
- SOLAR
- MetaMathQA
- llama
---
# MOLAR 10.7B
A Q-LoRA fine-tune of **kyujinpy/Sakura-SOLAR-Instruct** on **meta-math/MetaMathQA**.

Both the adapter model and the merged full model are available; you can download either one.

The model expects the following prompt format:
```
### Query: <query>
### Response: <response>
```
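A minimal sketch of building this prompt programmatically. The `build_prompt` helper is illustrative (not part of this repo), and the commented-out loading code assumes a hypothetical repo id — replace it with the actual Hub path of the adapter or merged model:

```python
def build_prompt(query: str) -> str:
    """Format a user query in the '### Query: / ### Response:' style this model expects."""
    return f"### Query: {query}\n### Response: "

prompt = build_prompt("What is 12 * 7?")
# The model continues the text after "### Response: " with its answer.

# Hypothetical generation setup (not run here; repo id is an assumption):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("Q-bert/MOLAR-10.7B")
# model = AutoModelForCausalLM.from_pretrained("Q-bert/MOLAR-10.7B")
# inputs = tokenizer(prompt, return_tensors="pt")
# outputs = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```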
# Loss Curve
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63da3d7ae697e5898cb86854/5OW8k59b_3rlqWlZfYkWx.png)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here]().
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
| Winogrande (5-shot) | |
| GSM8K (5-shot) | |