---
language:
- en
datasets:
- argilla/distilabel-math-preference-dpo
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

# **Sakura-SOLAR-Instruct-DPO-v1**  
<img src='./sakura.png' width=512>

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Method**  
Merged using [Mergekit](https://github.com/cg123/mergekit), then aligned with DPO on the dataset listed in the metadata above.  
I have shared information about this model (training details and code).  
Please see: [⭐Sakura-SOLAR (will update)]().  
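
The card does not state which training framework was used for the DPO step, so the snippet below is only a minimal, hypothetical sketch of preference alignment on the listed dataset using TRL's `DPOTrainer` (older TRL API). The column mapping, reference-model handling, and hyperparameters are assumptions, not the author's actual recipe.

```python
# Hypothetical DPO training sketch; framework, column names, and hyperparameters are assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "kyujinpy/Sakura-SOLAR-Instruct"  # merged base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")

# Map the preference dataset into the prompt/chosen/rejected format DPOTrainer expects
# (column names of the dataset are assumed here).
raw = load_dataset("argilla/distilabel-math-preference-dpo", split="train")
train_dataset = raw.map(
    lambda x: {
        "prompt": x["instruction"],
        "chosen": x["chosen_response"],
        "rejected": x["rejected_response"],
    },
    remove_columns=raw.column_names,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,          # TRL creates a frozen reference copy when None
    beta=0.1,                # assumed DPO temperature
    args=TrainingArguments(
        output_dir="Sakura-SOLAR-Instruct-DPO-v1",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-7,
        num_train_epochs=1,
    ),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```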

# **Model Benchmark**  

## Open leaderboard
- Results are tracked on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).  

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sakura-SOLAR-Instruct-DPO-v2 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| Sakura-SOLAR-Instruct-DPO-v1 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) | NaN | NaN | NaN | NaN | NaN | NaN | NaN |

   
# Implementation Code
```python
# Load the model and tokenizer with Hugging Face Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/Sakura-SOLAR-Instruct-DPO-v1"
model = AutoModelForCausalLM.from_pretrained(
        repo,
        return_dict=True,
        torch_dtype=torch.float16,
        device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
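
For example, generation with the loaded model might look like the following; the prompt template (SOLAR-style `### User:` / `### Assistant:`) and decoding settings are illustrative assumptions, not values specified by this card.

```python
# Illustrative generation example; prompt format and decoding parameters are assumptions.
prompt = "### User:\nWhat is the derivative of x^2?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```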

---