---
license: apache-2.0
language:
- en
- de
pipeline_tag: text-generation
---

![image/png](https://huggingface.co/datasets/malteos/images/resolve/main/occiglot.medium.png)

# Occiglot-7B-DE-EN

> A [polyglot](https://en.wikipedia.org/wiki/Multilingualism#In_individuals) language model for the [Occident](https://en.wikipedia.org/wiki/Occident).
> 

**Occiglot-7B-DE-EN** is a generative language model with 7B parameters for German and English, trained by the [Occiglot Research Collective](https://occiglot.github.io/occiglot/).
It is based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and trained on 114B tokens of additional multilingual and code data with a block size of 8,192 tokens per sample.
Note that the model is a general-purpose base model and was not instruction-fine-tuned nor optimized for chat or other applications. We provide an instruction-tuned variant as [occiglot-7b-de-en-instruct](https://huggingface.co/occiglot/occiglot-7b-de-en-instruct).

This is the first release of an ongoing open research project for multilingual language models. 
If you want to train a model for your own language or are working on evaluations, please contact us or join our [Discord server](https://discord.gg/wUpvYs4XvM). **We are open to collaborations!**


### Model details

- **Continued-pretraining from:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Model type:** Causal decoder-only transformer language model
- **Languages:** English, German, and code.
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
- **Compute resources:** [HessianAI's 42](https://hessian.ai/)
- **Contributors:** Manuel Brack, Patrick Schramowski, Pedro Ortiz, Malte Ostendorff, Fabio Barth, Georg Rehm, Kristian Kersting
- **Research labs:** [Occiglot](https://occiglot.github.io/occiglot/) with support from [SAINT](https://www.dfki.de/en/web/research/research-departments/foundations-of-systems-ai) and [SLT](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology)
- **Contact:** [Discord](https://discord.gg/wUpvYs4XvM)

### How to use

You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='occiglot/occiglot-7b-de-en')
>>> set_seed(42)
>>> generator("Hallo, Ich bin ein Sprachmodell,", max_length=40, num_return_sequences=1)
[{'generated_text': 'Hallo, Ich bin ein Sprachmodell, das dir bei der Übersetzung von Texten zwischen Deutsch und Englisch helfen kann. Wenn du mir einen Text in Deutsch'}]
```
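You can also load the model and tokenizer directly. A minimal sketch, assuming a GPU with enough memory and the `accelerate` package installed for `device_map='auto'` (the generation arguments are illustrative, not recommendations):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('occiglot/occiglot-7b-de-en')
model = AutoModelForCausalLM.from_pretrained(
    'occiglot/occiglot-7b-de-en',
    torch_dtype=torch.bfloat16,  # the model was trained in bf16
    device_map='auto',           # requires the accelerate package
)

inputs = tokenizer('Hallo, Ich bin ein Sprachmodell,', return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```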

## Dataset

The training data is the corresponding subset of the data used for [occiglot-7b-eu5](https://huggingface.co/occiglot/occiglot-7b-eu5), i.e., German plus English and code.

The data distribution by language (estimated) is as follows:
- English: ~34%
- Code: ~13%
- German: ~52%

The training data was prepared using [lm-datasets](https://github.com/malteos/lm-datasets). 
The exact data configuration is [here](https://huggingface.co/occiglot/occiglot-7b-eu5/blob/main/lm-datasets-config.yml).
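As a back-of-the-envelope calculation, these estimated shares translate the 114B additional training tokens into roughly the following per-language counts:

```python
# Approximate token counts implied by the estimated shares of 114B tokens
total_tokens = 114e9
shares = {'English': 0.34, 'Code': 0.13, 'German': 0.52}

for name, share in shares.items():
    print(f'{name}: ~{share * total_tokens / 1e9:.0f}B tokens')
# English: ~39B tokens
# Code: ~15B tokens
# German: ~59B tokens
```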

## Training settings

- Continual pre-training on 128 × A100-80GB GPUs on [HessianAI's 42](https://hessian.ai/)
- Framework: [Determined](https://www.determined.ai/)
- Precision: bf16
- Optimizer: AdamW (lr: 0.00001, warmup steps: 420)
- Global batch size: 512 (at a block size of 8,192) split over 128 GPUs
- LR schedule: cosine annealing with warmup (see the sketch below)
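For reference, a minimal sketch of a cosine-annealing-with-warmup schedule; only the peak learning rate (0.00001) and the 420 warmup steps are taken from the settings above, while the total step count is hypothetical:

```python
import math

def lr_at_step(step, peak_lr=1e-5, warmup_steps=420, total_steps=30_000, min_lr=0.0):
    """Linear warmup to peak_lr, then cosine decay towards min_lr."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```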


## Tokenizer

The tokenizer is unchanged from [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
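
A quick sanity check that a downloaded checkpoint really shares Mistral's vocabulary (assuming both repositories are accessible):

```python
from transformers import AutoTokenizer

occiglot_tok = AutoTokenizer.from_pretrained('occiglot/occiglot-7b-de-en')
mistral_tok = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1')

# Identical vocabularies, since the tokenizer was left unchanged
assert occiglot_tok.get_vocab() == mistral_tok.get_vocab()
print(occiglot_tok.vocab_size)  # 32000
```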

## Evaluation

Preliminary evaluation results can be found below.
Please note that the non-English results are based on partially machine-translated datasets and English prompts ([Belebele](https://huggingface.co/datasets/facebook/belebele) and the [Okapi framework](https://github.com/nlp-uoregon/Okapi)) and should therefore be interpreted with caution, as they may be biased towards English model performance.
We are currently working on more suitable benchmarks for Spanish, French, German, and Italian.

<details>
<summary>Evaluation results</summary>

### English

|                                      |   arc_challenge |   belebele |   hellaswag |     mmlu |   truthfulqa |      avg |
|:-------------------------------------|----------------:|-----------:|------------:|---------:|-------------:|---------:|
| Occiglot-7b-eu5             |        0.530717 |   0.726667 |    0.789882 | 0.531904 |     0.403678 | 0.59657  |
| Occiglot-7b-eu5-instruct    |        0.558874 |   0.746667 |    0.799841 | 0.535109 |     0.449034 | 0.617905 |
| Occiglot-7b-de-en           |        0.556314 |   0.791111 |    0.803824 | 0.568438 |     0.423251 | 0.628587 |
| Occiglot-7b-de-en-instruct  |        0.604096 |   0.812222 |    0.80004  | 0.570574 |     0.493807 | 0.656148 |
| Leo-mistral-hessianai-7b       |        0.522184 |   0.736667 |    0.777833 | 0.538812 |     0.429248 | 0.600949 |
| Mistral-7B-v0.1            |        0.612628 |   0.844444 |    0.834097 | 0.624555 |     0.426201 | 0.668385 |
| Mistral-7B-Instruct-v0.2   |        0.637372 |   0.824444 |    0.846345 | 0.59201  |     0.668116 | 0.713657 |


### German

|                                      |   arc_challenge_de |   belebele_de |   hellaswag_de |   mmlu_de |   truthfulqa_de |      avg |
|:-------------------------------------|-------------------:|--------------:|---------------:|----------:|----------------:|---------:|
| Occiglot-7b-eu5             |           0.493584 |      0.646667 |       0.666631 |  0.483406 |        0.251269 | 0.508311 |
| Occiglot-7b-eu5-instruct    |           0.529512 |      0.667778 |       0.685205 |  0.488234 |        0.286802 | 0.531506 |
| Occiglot-7b-de-en           |           0.50556  |      0.743333 |       0.67421  |  0.514633 |        0.26269  | 0.540085 |
| Occiglot-7b-de-en-instruct  |           0.54491  |      0.772222 |       0.688407 |  0.515915 |        0.310914 | 0.566474 |
| Leo-mistral-hessianai-7b       |           0.474765 |      0.691111 |       0.682109 |  0.488309 |        0.252538 | 0.517766 |
| Mistral-7B-v0.1            |           0.476476 |      0.738889 |       0.610589 |  0.529567 |        0.284264 | 0.527957 |
| Mistral-7B-Instruct-v0.2   |           0.485885 |      0.688889 |       0.622438 |  0.501961 |        0.376904 | 0.535215 |
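
The `avg` column is consistent with the unweighted mean of the five task scores per row; for example, for Occiglot-7b-de-en on the English benchmarks:

```python
# Reproduce the reported avg for Occiglot-7b-de-en (English table)
scores = [0.556314, 0.791111, 0.803824, 0.568438, 0.423251]
print(sum(scores) / len(scores))  # 0.6285876 -> reported as 0.628587
```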


</details>

## Acknowledgements

The model training was supported by a compute grant at the [42 supercomputer](https://hessian.ai/), which is a central component in the development of [hessian AI](https://hessian.ai/), the [AI Innovation Lab](https://hessian.ai/infrastructure/ai-innovationlab/) (funded by the [Hessian Ministry of Higher Education, Research and the Arts (HMWK)](https://wissenschaft.hessen.de) and the [Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)](https://innen.hessen.de)), and the [AI Service Centers](https://hessian.ai/infrastructure/ai-service-centre/) (funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)).
The curation of the training data is partially funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)
through the project [OpenGPT-X](https://opengpt-x.de/en/) (project no. 68GX21007D).


## License

[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)

## See also

- https://huggingface.co/collections/occiglot/occiglot-eu5-7b-v01-65dbed502a6348b052695e01