---
license: apache-2.0
language:
- en
- fr
pipeline_tag: text-generation
---

![image/png](https://huggingface.co/datasets/malteos/images/resolve/main/occiglot.medium.png)

# Occiglot-7B-FR-EN

> A [polyglot](https://en.wikipedia.org/wiki/Multilingualism#In_individuals) language model for the [Occident](https://en.wikipedia.org/wiki/Occident).
> 

**Occiglot-7B-FR-EN** is a generative language model with 7B parameters for French and English, trained by the [Occiglot Research Collective](https://occiglot.github.io/occiglot/).
It is based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and trained on 113B tokens of additional multilingual and code data with a block size of 8,192 tokens per sample.
Note that the model is a general-purpose base model and was neither instruction-tuned nor optimized for chat or other applications. An instruction-tuned variant is available as [occiglot-7b-fr-en-instruct](https://huggingface.co/occiglot/occiglot-7b-fr-en-instruct).

This is the first release of an ongoing open research project for multilingual language models. 
If you want to train a model for your own language or are working on evaluations, please contact us or join our [Discord server](https://discord.gg/wUpvYs4XvM). **We are open to collaborations!**


### Model details

- **Continued-pretraining from:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Model type:** Causal decoder-only transformer language model
- **Languages:** English, French, and code.
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
- **Compute resources:** [HessianAI's 42](https://hessian.ai/)
- **Contributors:** Manuel Brack, Patrick Schramowski, Pedro Ortiz, Malte Ostendorff, Fabio Barth, Georg Rehm, Kristian Kersting
- **Research labs:** [Occiglot](https://occiglot.github.io/occiglot/) with support from [SAINT](https://www.dfki.de/en/web/research/research-departments/foundations-of-systems-ai) and [SLT](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology)
- **Contact:** [Discord](https://discord.gg/wUpvYs4XvM)

### How to use

You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='occiglot/occiglot-7b-fr-en')
>>> set_seed(42)
>>> generator("Bonjour, Je suis un modèle linguistique,", max_length=40, num_return_sequences=1)
[{'generated_text': "Bonjour, Je suis un modèle linguistique qui peut t'aider à traduire des textes entre le français et l'anglais. Si tu me donnes un texte en français"}]
```
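
Alternatively, you can load the model with the standard `transformers` auto classes. The snippet below is a minimal sketch; the `torch_dtype` and `device_map` settings are illustrative assumptions rather than requirements of the model (`device_map='auto'` needs the `accelerate` package).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('occiglot/occiglot-7b-fr-en')
model = AutoModelForCausalLM.from_pretrained(
    'occiglot/occiglot-7b-fr-en',
    torch_dtype=torch.bfloat16,  # illustrative; the model was trained in bf16
    device_map='auto',           # illustrative; requires `accelerate`
)

inputs = tokenizer("Bonjour, Je suis un modèle linguistique,", return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```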

## Dataset

The training data is the respective subset of the data used for [occiglot-7b-eu5](https://huggingface.co/occiglot/occiglot-7b-eu5), i.e., French plus English and code.

The data distribution by language (estimated) is as follows:
- English: ~34%
- Code: ~13%
- French: ~52%
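
Applied to the 113B-token training budget, these shares correspond to roughly 59B French, 38B English, and 15B code tokens.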

The training data was prepared using [lm-datasets](https://github.com/malteos/lm-datasets). 
The exact data configuration is [here](https://huggingface.co/occiglot/occiglot-7b-eu5/blob/main/lm-datasets-config.yml).

## Training settings

- Continual pre-training on 128 x A100-80GB on [HessianAI's 42](https://hessian.ai/). 
- Framework: [Determined](https://www.determined.ai/)
- Precision: bf16
- Optimizer: AdamW (lr: 1e-5, warmup steps: 420)
- Global batch size: 512 sequences of 8,192 tokens each, split over 128 GPUs
- Learning-rate schedule: cosine annealing with warmup (see the sketch below)
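
For reference, the optimizer and schedule can be set up with standard `transformers` utilities. The sketch below is illustrative: the total step count is estimated from the reported budget (512 sequences × 8,192 tokens ≈ 4.2M tokens per step, so 113B tokens correspond to roughly 27,000 optimizer steps) and is not a reported figure.

```python
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in for the actual language model

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# ~113B tokens / (512 * 8,192 tokens per step) ≈ 27,000 steps (estimated)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=420,
    num_training_steps=27_000,
)
```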


## Tokenizer

The tokenizer is unchanged from [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
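
As a quick check, the two vocabularies can be compared directly. This is a small illustrative sketch; loading the Mistral tokenizer may require accepting the repository's terms on the Hugging Face Hub.

```python
from transformers import AutoTokenizer

occiglot_tok = AutoTokenizer.from_pretrained('occiglot/occiglot-7b-fr-en')
mistral_tok = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1')

# An identical vocabulary means no tokens were added or remapped.
assert occiglot_tok.get_vocab() == mistral_tok.get_vocab()
print(len(occiglot_tok))  # 32,000 entries in the Mistral-7B-v0.1 vocabulary
```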

## Evaluation

Preliminary evaluation results can be found below. 
Please note that the non-English results are based on partially machine-translated datasets and English prompts ([Belebele](https://huggingface.co/datasets/facebook/belebele) and the [Okapi framework](https://github.com/nlp-uoregon/Okapi)) and should therefore be interpreted with caution; for example, they may be biased towards English model performance.
Currently, we are working on more suitable benchmarks for Spanish, French, German, and Italian.

<details>
<summary>Evaluation results</summary>

### English

|                                      |   arc_challenge |   belebele |   hellaswag |     mmlu |   truthfulqa |      avg |
|:-------------------------------------|----------------:|-----------:|------------:|---------:|-------------:|---------:|
| Occiglot-7b-eu5             |        0.530717 |   0.726667 |    0.789882 | 0.531904 |     0.403678 | 0.59657  |
| Occiglot-7b-eu5-instruct    |        0.558874 |   0.746667 |    0.799841 | 0.535109 |     0.449034 | 0.617905 |
| Occiglot-7b-fr-en           |        0.568259 |   0.771111 |    0.804919 | 0.570716 |     0.394726 | 0.621947 |
| Occiglot-7b-fr-en-instruct  |        0.586177 |   0.794444 |    0.808305 | 0.569862 |     0.474064 | 0.646571 |
| Claire-Mistral-7B-0.1 |        0.59727  |   0.817778 |    0.827126 | 0.600912 |     0.415906 | 0.651798 |
| Mistral-7B-v0.1            |        0.612628 |   0.844444 |    0.834097 | 0.624555 |     0.426201 | 0.668385 |
| Mistral-7B-Instruct-v0.2   |        0.637372 |   0.824444 |    0.846345 | 0.59201  |     0.668116 | 0.713657 |

  
### French

|                                      |   arc_challenge_fr |   belebele_fr |   hellaswag_fr |   mmlu_fr |   truthfulqa_fr |      avg |
|:-------------------------------------|-------------------:|--------------:|---------------:|----------:|----------------:|---------:|
| Occiglot-7b-eu5             |           0.506416 |      0.675556 |       0.712358 |  0.495684 |        0.23507  | 0.525017 |
| Occiglot-7b-eu5-instruct    |           0.541488 |      0.7      |       0.724245 |  0.499122 |        0.306226 | 0.554216 |
| Occiglot-7b-fr-en           |           0.532934 |      0.706667 |       0.718891 |  0.51333  |        0.242694 | 0.542903 |
| Occiglot-7b-fr-en-instruct  |           0.542344 |      0.752222 |       0.72553  |  0.52051  |        0.29479  | 0.567079 |
| Claire-Mistral-7B-0.1 |           0.486741 |      0.694444 |       0.642964 |  0.479566 |        0.271919 | 0.515127 |
| Mistral-7B-v0.1            |           0.525235 |      0.776667 |       0.66481  |  0.543121 |        0.280813 | 0.558129 |
| Mistral-7B-Instruct-v0.2   |           0.551754 |      0.758889 |       0.67916  |  0.506837 |        0.382465 | 0.575821 |

</details>

## Acknowledgements

The model training was supported by a compute grant at the [42 supercomputer](https://hessian.ai/), which is a central component in the development of [hessian AI](https://hessian.ai/), the [AI Innovation Lab](https://hessian.ai/infrastructure/ai-innovationlab/) (funded by the [Hessian Ministry of Higher Education, Research and the Arts (HMWK)](https://wissenschaft.hessen.de) and the [Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)](https://innen.hessen.de)) and the [AI Service Centers](https://hessian.ai/infrastructure/ai-service-centre/) (funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)).
The curation of the training data is partially funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)
through the project [OpenGPT-X](https://opengpt-x.de/en/) (project no. 68GX21007D).


## License

[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)

## See also

- https://huggingface.co/collections/occiglot/occiglot-eu5-7b-v01-65dbed502a6348b052695e01