---
inference: false
base_model:
- SanjiWatsuki/Silicon-Maid-7B
- sethuiyer/Aika-7B
- sethuiyer/Nandine-7b
- mlabonne/AlphaMonarch-7B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
license: cc
model-index:
- name: sethuiyer/Diana-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 68.34
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Diana-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.73
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Diana-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.58
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Diana-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 60.55
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Diana-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 80.19
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Diana-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.23
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Diana-7B
      name: Open LLM Leaderboard
language:
- en
pipeline_tag: text-generation
---
# Diana-7B

<p align="center">
  <img src="https://huggingface.co/sethuiyer/Diana-7B/resolve/main/diana.webp" height="128px" alt="Diana">
</p>

This is Diana-7B, rated **93.56/100** by GPT-4 on a collection of 30 synthetic prompts generated by GPT-4.

Diana stands for **D**eep **I**nsight and **A**nalytical **N**arrative **A**ssistant and is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):

1. [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B): This model has impressive conversational abilities, a formal and sophisticated style, and strong reasoning skills.
2. [sethuiyer/Aika-7b](https://huggingface.co/sethuiyer/Aika-7B): A merge of SanjiWatsuki/Silicon-Maid-7B, Guilherme34/Samantha-v2, jan-hq/stealth-v1.3, and senseable/WestLake-7B-v2, Aika-7b is designed for natural and human-like interactions, accurate information delivery, comprehensive analysis, emotional intelligence, clarity, and structure.
3. [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B): This model is known for its excellent multi-turn conversational skills and logical coherence.
4. [sethuiyer/Nandine-7b](https://huggingface.co/sethuiyer/Nandine-7b): A merge of senseable/Westlake-7B, Guilherme34/Samantha-v2, and uukuguy/speechless-mistral-six-in-one-7b, Nandine-7b excels in narrative skill, empathetic interaction, intellectual depth, and eloquent communication.

By combining these models, Diana-7B offers a balanced blend of capabilities, making it suitable for various tasks and providing a comprehensive AI companion for creative writing, thoughtful discussions, problem-solving, and general assistance. 
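
For local experimentation, a minimal Transformers sketch is shown below; the model id comes from this card, while the dtype and sampling settings are illustrative assumptions rather than recommended defaults.

```python
# Minimal sketch: load Diana-7B with Transformers and generate a reply.
# bfloat16 matches the dtype used for the merge; sampling values are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sethuiyer/Diana-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain the difference between analysis and synthesis in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```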

## OpenLLM Benchmark

| Model                          | Average ⬆️ | ARC    | HellaSwag | MMLU  | TruthfulQA | Winogrande | GSM8K | 
|--------------------------------|------------|-------|-----------|-------|------------|------------|-------| 
| sethuiyer/Diana-7B 📑       | 70.6      | 68.34 | 86.73     | 64.58 | 60.55       | 80.19      | 63.23  | 


## Nous Benchmark
|                          Model                          |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|---------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[Diana-7B](https://huggingface.co/sethuiyer/Diana-7B)|  44.38 |  75.1|     60.55|   44.58|  56.09|



### Configuration

The following YAML configuration was used to produce this model:

```yaml

base_model: mlabonne/AlphaMonarch-7B
dtype: bfloat16
merge_method: dare_ties
models:
- model: mlabonne/AlphaMonarch-7B
- model: sethuiyer/Aika-7B
  parameters:
    density: 0.85
    weight: 0.30
- model: SanjiWatsuki/Silicon-Maid-7B
  parameters:
    density: 0.85
    weight: 0.50
- model: sethuiyer/Nandine-7b
  parameters:
    density: 0.85
    weight: 0.30
parameters:
  int8_mask: true

```
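
To reproduce the merge, the configuration above can be saved as `config.yaml` and passed to mergekit's `mergekit-yaml` command. The sketch below calls it from Python; the output directory and the `--copy-tokenizer` flag are assumptions, so check the mergekit documentation for the options in your version.

```python
# Rough sketch: run the merge by invoking mergekit's CLI with the config above.
# Assumes the YAML is saved as config.yaml; the output directory and flags are
# illustrative - check `mergekit-yaml --help` for the options in your version.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./Diana-7B", "--copy-tokenizer"],
    check=True,
)
```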

## Prompt Template

```text
{bos}user
{ .Prompt }{eos}
{bos}assistant
```
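
A small sketch of filling this template by hand, assuming `{bos}` and `{eos}` stand for the tokenizer's BOS and EOS special tokens:

```python
# Sketch: build a prompt following the template above, assuming {bos}/{eos}
# map to the tokenizer's BOS/EOS special tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sethuiyer/Diana-7B")

def build_prompt(user_message: str) -> str:
    bos, eos = tokenizer.bos_token, tokenizer.eos_token
    return f"{bos}user\n{user_message}{eos}\n{bos}assistant\n"

print(build_prompt("Write a short note on deep insight."))
```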

## GGUF
GGUF files are available at [Diana-7B-GGUF](https://huggingface.co/sethuiyer/Diana-7B-GGUF/tree/main).
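
For lighter-weight local inference on the GGUF quantizations, a llama-cpp-python sketch follows; the quant filename pattern is an assumption, so pick a file that actually exists in the GGUF repository.

```python
# Sketch: run a GGUF quantization of Diana-7B with llama-cpp-python.
# The filename glob is an assumption - substitute a quant present in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="sethuiyer/Diana-7B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Summarize the idea of a model merge in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```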

## Ollama
Diana is now available on Ollama. You can use it by running `ollama run stuehieyr/diana` in your terminal. If you have limited computing resources, check out this [video](https://www.youtube.com/watch?v=Qa1h7ygwQq8) to learn how to run it on a Google Colab backend.
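
For scripted access, the Ollama Python client can query the same model, assuming the Ollama server is running locally and the model has already been pulled:

```python
# Sketch: call the Ollama-hosted model from Python. Assumes a local Ollama
# server and that `ollama pull stuehieyr/diana` has already been run.
import ollama

response = ollama.chat(
    model="stuehieyr/diana",
    messages=[{"role": "user", "content": "Give me three thoughtful journal prompts."}],
)
print(response["message"]["content"])
```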