---
datasets:
- Open-Orca/SlimOrca-Dedup
- teknium/openhermes
- meta-math/MetaMathQA
- migtissera/Synthia-v1.3
- THUDM/AgentInstruct
- LeoLM/German_Songs
- LeoLM/German_Poems
- LeoLM/OpenSchnabeltier
- bjoernp/ultrachat_de
- LDJnr/Capybara
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
license: llama2
model_creator: DiscoResearch
model_type: llama
tags:
- goliath
- deutsch
- llama2
- discoresearch
---


![EM Logo](imgs/disco_leo.jpeg)

# DiscoLM 70b

**DiscoLM 70b** is a 70b model based on [Laion's LeoLM 70b](https://huggingface.co/LeoLM/leo-hessianai-70b), which underwent additional continued pretraining on 65b tokens of German
text, strengthening its multilingual capabilities while retaining (and partially improving) its English capabilities.
It was then further finetuned on a combination of some of the most popular open-source instruction sets.
DiscoLM 70b is a [DiscoResearch](https://huggingface.co/DiscoResearch) project and was trained by [Björn Plüster](https://huggingface.co/bjoernp).

Many thanks to [LAION](https://laion.ai) and [HessianAI](https://hessian.ai/) for scientific supervision, coordination and compute resources provided for this project on supercomputer 42 by [HessianAI](https://hessian.ai/)! 

<img src="https://hessian.ai/wp-content/themes/hessianai/img/hessian-ai-logo.svg" width="120">
<img src="https://avatars.githubusercontent.com/u/92627801?s=200&v=4" width="120">

## Table of Contents

1. [Download](#download)
2. [Benchmarks](#benchmarks)
3. [Prompt Format](#prompt-format)
4. [Dataset](#dataset)
5. [Contact](#contact)
6. [About DiscoResearch](#about-discoresearch)
7. [Acknowledgements](#acknowledgements)
8. [Disclaimer](#disclaimer)

## Download 

| Huggingface    | GPTQ  | GGUF  | AWQ   | *Base Model* |
|-------|-------|-------|-------|-------|
| [Link](https://huggingface.co/DiscoResearch/DiscoLM-70b) | [@TheBloke](https://huggingface.co/TheBloke/DiscoLM-70B-GPTQ) | [@TheBloke](https://huggingface.co/TheBloke/DiscoLM-70B-GGUF) | [@TheBloke](https://huggingface.co/TheBloke/DiscoLM-70B-AWQ) | [LeoLM 70b](https://huggingface.co/LeoLM/leo-hessianai-70b) |

## Benchmarks

### Hugging Face Leaderboard

This model is still an early alpha, and we cannot rule out benchmark contamination.
The following scores are from our own evaluation.

| Metric | Value |
|-----------------------|-------|
| ARC (25-shot)         | 68.77 |
| HellaSwag (10-shot)   | 85.41 |
| MMLU (5-shot)         | 68.64 |
| TruthfulQA (0-shot)   | 57.69 |
| Winogrande (5-shot)   | 83.27 |
| GSM8k (5-shot)   | 63.68 |
| **Avg.**                  | **71.24** |

The model is now also officially ranked on the Open LLM Leaderboard as #6 overall and as the second-strongest Llama-2-70b-based model (ranking only behind TigerBot 70b):

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62e3b6ab0c2a907c388e4965/0ZIBCnO08tX44ilGcl8Wb.png)
(Screenshot from December 5, 2023)


We use the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the Hugging Face Open LLM Leaderboard.

### FastEval

| Metric | Value |
|-----------------------|-------|
| GSM8K       | 70.6 |
| Math   | 17.8 |
| BBH         | 63.4 |
| MMLU   | 64.7 |
| **Avg.**                  | **48.87** |

Screenshot of the current (sadly no longer maintained) FastEval CoT leaderboard:
![FastEval Leaderboard](imgs/cot_leaderboard.png)

### MTBench

```json
{
    "first_turn": 7.9,
    "second_turn": 7.0625,
    "categories": {
        "writing": 9.55,
        "roleplay": 8.35,
        "reasoning": 6.15,
        "math": 4.7,
        "coding": 4.8,
        "extraction": 7.35,
        "stem": 9.1,
        "humanities": 9.85
    },
    "average": 7.48125
}
```
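As a quick sanity check on the JSON above, the reported average can be recomputed directly: it is both the mean of the two turn scores and the mean of the eight category scores.

```python
import json

# MTBench results as reported above.
mtbench = json.loads("""{
    "first_turn": 7.9,
    "second_turn": 7.0625,
    "categories": {
        "writing": 9.55, "roleplay": 8.35, "reasoning": 6.15,
        "math": 4.7, "coding": 4.8, "extraction": 7.35,
        "stem": 9.1, "humanities": 9.85
    },
    "average": 7.48125
}""")

# Mean of the two turn scores.
turn_avg = (mtbench["first_turn"] + mtbench["second_turn"]) / 2
# Mean of the eight category scores.
category_avg = sum(mtbench["categories"].values()) / len(mtbench["categories"])

print(round(turn_avg, 5))      # 7.48125
print(round(category_avg, 5))  # 7.48125
```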
Screenshot of the current FastEval MT Bench leaderboard:
![FastEval Leaderboard](imgs/mtbench_leaderboard.png)

## Prompt Format

This model follows the ChatML format:

```
<|im_start|>system
You are DiscoLM, a helpful assistant.
<|im_end|>
<|im_start|>user
Please tell me possible reasons to call a research collective "Disco Research"<|im_end|>
<|im_start|>assistant
```

This formatting is also available via a pre-defined Transformers chat template, which means that lists of messages can be formatted for you with the `apply_chat_template()` method:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DiscoResearch/DiscoLM-70b")

chat = [
  {"role": "system", "content": "You are DiscoLM, a helpful assistant."},
  {"role": "user", "content": "Please tell me possible reasons to call a research collective Disco Research"}
]
# Returns the ChatML-formatted prompt string, ending with an open assistant turn.
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```

If you use `tokenize=True` and `return_tensors="pt"` instead, you will get a tokenized and formatted conversation ready to pass to `model.generate()`.
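For environments without Transformers, the ChatML format above can also be reproduced by hand. Below is a minimal sketch; the `format_chatml` helper is illustrative, not part of any library, and the exact whitespace may differ slightly from the tokenizer's built-in template, so prefer `apply_chat_template()` when Transformers is available.

```python
def format_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts in ChatML format."""
    prompt = ""
    for msg in messages:
        prompt += f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Leave an open assistant turn so the model completes it.
        prompt += "<|im_start|>assistant\n"
    return prompt

chat = [
    {"role": "system", "content": "You are DiscoLM, a helpful assistant."},
    {"role": "user", "content": "Please tell me possible reasons to call a research collective Disco Research"},
]
print(format_chatml(chat))
```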

## Dataset

The dataset curation for DiscoLM 70b followed a "brute force"/"PoC" approach.

The following datasets were used for training DiscoLM 70b:

* [SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
* [OpenSchnabeltier](https://huggingface.co/datasets/LeoLM/OpenSchnabeltier) translated to DE from [OpenPlatypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
* [OpenHermes](https://huggingface.co/datasets/teknium/openhermes)
* [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
* [UltraChat DE](https://huggingface.co/datasets/bjoernp/ultrachat_de) translated to DE from [UltraChat](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
* [Synthia v.1.3](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
* [German_Songs](https://huggingface.co/datasets/LeoLM/German_Songs)
* [German_Poems](https://huggingface.co/datasets/LeoLM/German_Poems)
* [Capybara](https://huggingface.co/datasets/LDJnr/Capybara) by [LDJnr](https://huggingface.co/LDJnr)
* Vezora/Tested-188k-Python (no longer available; apparently superseded by [Vezora/Tested-22k-Python-Alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca))

Many thanks to all dataset providers and curators!

## Contact

The best way to reach us is on our [Discord](https://discord.gg/S8W8B5nz3v).

## About DiscoResearch

DiscoResearch is an aspiring open research community. Disco should be a place where researchers from many communities can come together to combine their expertise and create innovative and groundbreaking LLMs. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!

## Acknowledgements

Disco 70b is a [DiscoResearch](https://huggingface.co/DiscoResearch) project and was trained by [Björn Plüster](https://huggingface.co/bjoernp). [Jan Harries](https://huggingface.co/jphme) helped with technical advice, logistics and the model card.
[AutoMeta](https://huggingface.co/Alignment-Lab-AI) also provided helpful technical advice and rounded up his connections to select a set of high-quality datasets.
The model was trained with compute provided by [HessianAI](https://hessian.ai/) in collaboration with [LAION](https://laion.ai) - many thanks in particular to [Patrick Schramowski](https://huggingface.co/PSaiml) for his support. 

We are standing on the shoulders of giants; many thanks, in no particular order, to [Laion](https://laion.ai) for LeoLM 70b
(especially to [Christoph Schuhmann](https://laion.ai), who got us all connected),
[TheBloke](https://huggingface.co/TheBloke) for providing quantized versions, [winglian](https://huggingface.co/winglian) for Axolotl (used to train the model) and for the SlimOrca dataset, and [garage-bAInd](https://huggingface.co/garage-bAInd), [Teknium](https://huggingface.co/teknium), [Migel Tissera](https://huggingface.co/migtissera), [MetaMath](https://huggingface.co/meta-math), and [LDJnr](https://huggingface.co/LDJnr) for their great datasets (please contact us if we forgot to mention you here!).

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
This model should only be used for research purposes. The original Llama2 license and all restrictions of datasets used to train this model apply.