---
tags:
- merge
- mergekit
- lazymergekit
- paulml/DPOB-INMTOB-7B
- bardsai/jaskier-7b-dpo-v6.1
base_model:
- paulml/DPOB-INMTOB-7B
- bardsai/jaskier-7b-dpo-v6.1
---

# djinn-7b

djinn-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [paulml/DPOB-INMTOB-7B](https://huggingface.co/paulml/DPOB-INMTOB-7B)
* [bardsai/jaskier-7b-dpo-v6.1](https://huggingface.co/bardsai/jaskier-7b-dpo-v6.1)



## 🏆 Benchmarks
#### Open LLM Leaderboard

| Model                  | Average | ARC_easy  | HellaSwag | MMLU | TruthfulQA_mc2 | Winogrande | GSM8K |
|------------------------|--------:|-----:|----------:|-----:|-----------:|-----------:|------:|
| mayacinka/djinn-7b |   78.40 | 86.7 |      87.37| 61.84 |      77.23 |      82.64 |  74.68|

#### MMLU (per category)
|      Groups      |Version|Filter|n-shot|Metric|Value |   |Stderr|
|------------------|-------|------|------|------|-----:|---|-----:|
|mmlu              |N/A    |none  |     0|acc   |0.6184|±  |0.0039|
| - humanities     |N/A    |none  |None  |acc   |0.5741|±  |0.0067|
| - other          |N/A    |none  |None  |acc   |0.6933|±  |0.0079|
| - social_sciences|N/A    |none  |None  |acc   |0.7166|±  |0.0080|
| - stem           |N/A    |none  |None  |acc   |0.5147|±  |0.0085|

#### AutoEval
Results obtained with [Maxime Labonne's autoeval notebook](https://gist.github.com/majacinka/dfa0800c65f995c8f970c75f3e73d268).
|                        Model                        |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|-----------------------------------------------------|------:|------:|---------:|-------:|------:|
|[djinn-7b](https://huggingface.co/mayacinka/djinn-7b)|   44.9|  77.33|     77.18|   49.36|  62.19|

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: paulml/DPOB-INMTOB-7B
        layer_range: [0, 32]
      - model: bardsai/jaskier-7b-dpo-v6.1
        layer_range: [0, 32]
merge_method: slerp
base_model: paulml/DPOB-INMTOB-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
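Roughly speaking, `t` is the interpolation ratio per tensor: `t = 0` keeps the base model's weights, `t = 1` takes the other model's, and each list under a filter is a gradient that is interpolated across the 32 layers, so self-attention and MLP tensors blend differently at different depths. For intuition, here is a minimal NumPy sketch of the spherical linear interpolation the `slerp` method applies to each pair of weight tensors; the function name, the flattened-vector view, and the colinearity fallback threshold are illustrative assumptions rather than mergekit's exact internals:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two flattened weight tensors."""
    # Measure the angle between the two weight vectors on the unit sphere
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    dot = float(np.clip(np.dot(v0_n, v1_n), -1.0, 1.0))

    # Nearly colinear vectors: fall back to plain linear interpolation
    if abs(dot) > 0.9995:
        return (1 - t) * v0 + t * v1

    omega = np.arccos(dot)  # angle between v0 and v1
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1
```

Compared with a straight weighted average, interpolating along the arc keeps the result's magnitude closer to that of the original weight vectors, which is the usual motivation for choosing `slerp`.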

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mayacinka/djinn-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",  # place layers on available GPUs/CPU automatically
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
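
The snippet loads the model in `float16`; since the merge itself was produced in `bfloat16` (see the configuration above), passing `torch_dtype=torch.bfloat16` may be preferable on hardware that supports it.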