aashish1904 committed on
Commit
cae309d
1 Parent(s): 01e305a

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +139 -0
README.md ADDED
@@ -0,0 +1,139 @@
---

license: apache-2.0
language:
- en
- de
- es
- fr
- it
- pt
- pl
- nl
- tr
- sv
- cs
- el
- hu
- ro
- fi
- uk
- sl
- sk
- da
- lt
- lv
- et
- bg
- 'no'
- ca
- hr
- ga
- mt
- gl
- zh
- ru
- ko
- ja
- ar
- hi
library_name: transformers

---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/EuroLLM-9B-GGUF
This is a quantized version of [utter-project/EuroLLM-9B](https://huggingface.co/utter-project/EuroLLM-9B) created with llama.cpp.
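
A minimal sketch of running one of the GGUF files locally with `llama-cpp-python`; the filename below is a placeholder (assumption), so substitute whichever quantization level you downloaded from this repo:

```python
# Sketch only: run a downloaded GGUF quant with llama-cpp-python.
# The filename is a placeholder, not a file this repo is guaranteed to ship.
from llama_cpp import Llama

llm = Llama(
    model_path="EuroLLM-9B.Q4_K_M.gguf",  # placeholder path to a downloaded quant
    n_ctx=4096,                           # matches the model's 4,096-token sequence length
)

out = llm("English: My name is EuroLLM. Portuguese:", max_tokens=20)
print(out["choices"][0]["text"])
```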

# Original Model Card

# Model Card for EuroLLM-9B

This is the model card for EuroLLM-9B. You can also check the instruction-tuned version: [EuroLLM-9B-Instruct](https://huggingface.co/utter-project/EuroLLM-9B-Instruct).

- **Developed by:** Unbabel, Instituto Superior Técnico, Instituto de Telecomunicações, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université.
- **Funded by:** European Union.
- **Model type:** A 9B parameter multilingual transformer LLM.
- **Language(s) (NLP):** Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian.
- **License:** Apache License 2.0.

## Model Details

The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages, as well as some additional relevant languages.
EuroLLM-9B is a 9B parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: web data, parallel data (en-xx and xx-en), and high-quality datasets.
EuroLLM-9B-Instruct was further instruction tuned on EuroBlocks, an instruction-tuning dataset with a focus on general instruction-following and machine translation.

### Model Description

EuroLLM uses a standard, dense Transformer architecture:
- We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance.
- We perform pre-layer normalization, since it improves training stability, and use RMSNorm, which is faster.
- We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks.
- We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performance while allowing the extension of the context length.

For pre-training, we use 400 Nvidia H100 GPUs of the MareNostrum 5 supercomputer, training the model with a constant batch size of 2,800 sequences, which corresponds to approximately 12 million tokens (2,800 sequences × the 4,096-token sequence length ≈ 11.5M), using the Adam optimizer and BF16 precision.
Here is a summary of the model hyper-parameters:

| Hyper-parameter | Value |
|--------------------------------------|----------------------|
| Sequence Length | 4,096 |
| Number of Layers | 42 |
| Embedding Size | 4,096 |
| FFN Hidden Size | 12,288 |
| Number of Heads | 32 |
| Number of KV Heads (GQA) | 8 |
| Activation Function | SwiGLU |
| Position Encodings | RoPE (Θ = 10,000) |
| Layer Norm | RMSNorm |
| Tied Embeddings | No |
| Embedding Parameters | 0.524B |
| LM Head Parameters | 0.524B |
| Non-embedding Parameters | 8.105B |
| Total Parameters | 9.154B |
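
As a quick sanity check, most of these hyper-parameters can be read back from the Hub config without downloading the weights. This is a sketch assuming the usual Llama-style `transformers` config field names:

```python
# Sketch: inspect the architecture hyper-parameters listed above.
# Field names assume a Llama-style config; adjust if the actual config differs.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("utter-project/EuroLLM-9B")

print(config.num_hidden_layers)    # number of layers (expected: 42)
print(config.hidden_size)          # embedding size (expected: 4,096)
print(config.intermediate_size)    # FFN hidden size (expected: 12,288)
print(config.num_attention_heads)  # attention heads (expected: 32)
print(config.num_key_value_heads)  # KV heads for GQA (expected: 8)
print(config.rope_theta)           # RoPE theta (expected: 10,000)
print(config.tie_word_embeddings)  # tied embeddings (expected: False)
```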

## Run the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "utter-project/EuroLLM-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "English: My name is EuroLLM. Portuguese:"

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
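
On a GPU it is common to load the weights in the BF16 precision used during training; a minimal sketch (`device_map="auto"` assumes the `accelerate` package is installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "utter-project/EuroLLM-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 training precision
    device_map="auto",           # requires accelerate for automatic placement
)

text = "English: My name is EuroLLM. Portuguese:"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```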

## Results

### EU Languages

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63f33ecc0be81bdc5d903466/ob_1sLM8c7dxuwpv6AAHA.png)
**Table 1:** Comparison of open-weight LLMs on multilingual benchmarks. The Borda count corresponds to the average ranking of the models (see [Colombo et al., 2022](https://arxiv.org/abs/2202.03799)). For ARC-Challenge, HellaSwag, and MMLU we use the Okapi datasets ([Lai et al., 2023](https://aclanthology.org/2023.emnlp-demo.28/)), which include 11 languages. For MMLU-Pro and MUSR we translate the English version with Tower ([Alves et al., 2024](https://arxiv.org/abs/2402.17733)) into 6 EU languages.
\* As there are no public versions of the pre-trained models, we evaluated them using the post-trained versions.

The results in Table 1 highlight EuroLLM-9B's superior performance on multilingual tasks compared to other European-developed models (as shown by its Borda count of 1.0), as well as its strong competitiveness with non-European models, achieving results comparable to Gemma-2-9B and outperforming the rest on most benchmarks.
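
For readers unfamiliar with the metric, the Borda count used here is simply each model's rank on every benchmark, averaged. A toy illustration with made-up scores (placeholders only, not the actual results in Table 1):

```python
# Toy example of an average-rank Borda count; the scores below are placeholders.
scores = {  # benchmark -> {model: score}, higher is better
    "bench_1": {"model_A": 0.71, "model_B": 0.65, "model_C": 0.60},
    "bench_2": {"model_A": 0.70, "model_B": 0.52, "model_C": 0.45},
}

models = ["model_A", "model_B", "model_C"]
ranks = {m: [] for m in models}
for results in scores.values():
    ordered = sorted(models, key=lambda m: results[m], reverse=True)
    for rank, m in enumerate(ordered, start=1):
        ranks[m].append(rank)

borda = {m: sum(r) / len(r) for m, r in ranks.items()}
print(borda)  # a Borda count of 1.0 means the model ranked first on every benchmark
```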

### English

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63f33ecc0be81bdc5d903466/EfilsW_p-JA13mV2ilPkm.png)

**Table 2:** Comparison of open-weight LLMs on English general benchmarks.
\* As there are no public versions of the pre-trained models, we evaluated them using the post-trained versions.

The results in Table 2 demonstrate EuroLLM's strong performance on English tasks, surpassing most European-developed models and matching the performance of Mistral-7B (obtaining the same Borda count).

## Bias, Risks, and Limitations

EuroLLM-9B has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).