---
license: llama3
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
base_model:
- Hastagaras/anjrit
- Hastagaras/anying
model-index:
- name: Anjir-8B-L3
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 63.57
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Hastagaras/Anjir-8B-L3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.15
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Hastagaras/Anjir-8B-L3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 67.67
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Hastagaras/Anjir-8B-L3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 52.67
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Hastagaras/Anjir-8B-L3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.61
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Hastagaras/Anjir-8B-L3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 67.78
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Hastagaras/Anjir-8B-L3
      name: Open LLM Leaderboard
---
# ANJIRRR

This model aims to combine the human-like responses of the [Halu Blackroot](https://huggingface.co/Hastagaras/Halu-8B-Llama3-Blackroot), the no-refusal tendencies of the [Halu OAS](https://huggingface.co/Hastagaras/Halu-OAS-8B-Llama3), and the smartness of the [Standard Halu](https://huggingface.co/Hastagaras/Halu-8B-Llama3-v0.3).

GGUF: [**STATIC**](https://huggingface.co/mradermacher/Anjir-8B-L3-GGUF)/[**IMATRIX**](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF) made available by [mradermacher](https://huggingface.co/mradermacher)

<div align="left">
  <img src="https://huggingface.co/Hastagaras/Anjir-8B-L3/resolve/main/anjir.png" width="500"/>
</div>

**Model Details:**

* **Anjrit:** This model is similar to my [Halu Blackroot](https://huggingface.co/Hastagaras/Halu-8B-Llama3-Blackroot) model, but it is built on the OAS version instead of the standard version.

* **Anying:** This model is also similar to the Halu Blackroot, but instead of using the Model Stock merge method, I merged the Blackroot LoRA manually with a very low alpha (a rough sketch of the idea follows).
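
The low-alpha idea, roughly: a LoRA folds into a base weight as `W + (alpha / rank) * B @ A`, so shrinking `alpha` keeps only a faint trace of the adapter. A minimal sketch; the shapes and values are placeholders, not the actual Blackroot adapter:

```python
# Folding a LoRA pair into a base weight with a deliberately low alpha.
# Shapes and values are placeholders, not the actual Blackroot adapter.
import torch

def merge_lora(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
               alpha: float, rank: int) -> torch.Tensor:
    """LoRA merges as W' = W + (alpha / rank) * B @ A; a small alpha
    scales the adapter's influence down."""
    return W + (alpha / rank) * (B @ A)

W = torch.randn(4096, 4096)        # base weight matrix
A = torch.randn(16, 4096) * 0.01   # LoRA down-projection (rank 16)
B = torch.randn(4096, 16) * 0.01   # LoRA up-projection
merged = merge_lora(W, A, B, alpha=2.0, rank=16)  # alpha well below rank
```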

Both models have downsides. The Anjrit model **lacks coherency**, while the Anying model lacks **human-like responses**.

**I decided to merge both models with the following method:**

1. First, I compared the responses from each layer of both models using the baukit notebook (see the sketch after this list).

2. After comparing them, it seems that the Anjrit model is better around the bottom layers, perhaps because it is unhinged.

3. From the bottom to the middle layers, Anjrit is still better, but Anying seems smarter.

4. At the middle layers, both seem equal, but again, Anjrit is unhinged, so I prefer it.

5. From the middle to the top layers, Anying is better: it is smarter, and its responses are more structured.

6. The top layers of the Anjrit model are better since that model is orthogonalized, so I prefer it there.

7. Then I performed a slerp merge with the configuration below. I don't know if this is really how a slerp merge is meant to be used, so let's just say this is an **experimental merge**. Maybe I will try other merge methods in future experiments.
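
For reference, per-layer outputs can be inspected with baukit's `TraceDict` roughly like this. The prompt and the Llama-style layer path are illustrative assumptions, not the exact notebook used here:

```python
# Sketch of per-layer output inspection with baukit; the prompt and the
# Llama-style layer path are assumptions, not the exact notebook used.
import torch
from baukit import TraceDict
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Hastagaras/anjrit")
model = AutoModelForCausalLM.from_pretrained("Hastagaras/anjrit", torch_dtype=torch.bfloat16)

layers = [f"model.layers.{i}" for i in range(model.config.num_hidden_layers)]
inputs = tok("The knight drew his sword and", return_tensors="pt")

with torch.no_grad(), TraceDict(model, layers) as traces:
    model(**inputs)

for name in layers:
    hidden = traces[name].output[0]  # decoder layers return a tuple; [0] is the hidden state
    print(name, hidden.norm().item())
```

Running the same prompt through both models and comparing the traces layer by layer is what motivated the per-depth preferences above.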

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Hastagaras/anjrit
  - model: Hastagaras/anying
merge_method: slerp
base_model: Hastagaras/anjrit
dtype: bfloat16
parameters:
  t: [0.12, 0.17, 0.29, 0.44, 0.26]
```
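
For intuition: mergekit interpolates a list of `t` values as a gradient across the layer stack (`t = 0` keeps the base model Anjrit, `t = 1` takes Anying), and slerp walks the arc between each pair of weight tensors instead of the straight line. A minimal per-tensor sketch, assuming mergekit-style flattening before interpolation:

```python
# Minimal per-tensor slerp sketch; t=0 returns w0 (the base), t=1 returns w1.
# Assumes mergekit-style flattening of each tensor pair before interpolating.
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    # Angle between the two weight vectors.
    dot = torch.clamp(torch.dot(v0 / v0.norm(), v1 / v1.norm()), -1.0, 1.0)
    theta = torch.arccos(dot)
    if theta < eps:  # nearly colinear: plain lerp is numerically safer
        merged = (1 - t) * v0 + t * v1
    else:
        s0 = torch.sin((1 - t) * theta) / torch.sin(theta)
        s1 = torch.sin(t * theta) / torch.sin(theta)
        merged = s0 * v0 + s1 * v1
    return merged.reshape(w0.shape).to(w0.dtype)
```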
**SAMPLER:**

You can start with these settings and tweak them:

* TEMP: 1.0
* TOP_P: 0.95
* TOP_K: 100
* MIN_P: 0.05
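
A quick way to try these values with `transformers` (the prompt is a placeholder, and `min_p` needs a reasonably recent release):

```python
# Sampling with the suggested settings; the prompt is a placeholder and
# min_p support assumes a recent transformers release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Hastagaras/Anjir-8B-L3")
model = AutoModelForCausalLM.from_pretrained(
    "Hastagaras/Anjir-8B-L3", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Tell me a short story about a stray cat."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(
    inputs,
    do_sample=True,
    temperature=1.0,
    top_p=0.95,
    top_k=100,
    min_p=0.05,
    max_new_tokens=256,
)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```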

---

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Hastagaras__Anjir-8B-L3)

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |69.07|
|AI2 Reasoning Challenge (25-Shot)|63.57|
|HellaSwag (10-Shot)              |84.15|
|MMLU (5-Shot)                    |67.67|
|TruthfulQA (0-shot)              |52.67|
|Winogrande (5-shot)              |78.61|
|GSM8k (5-shot)                   |67.78|