---
license: apache-2.0
tags:
- finnish
- llama
datasets:
- Finnish-NLP/CulturaX_fi_cleaned
- Finnish-NLP/HPLT_1.2_fi_cleaned
- Finnish-NLP/wikipedia_20231101_fi_cleaned
- Finnish-NLP/Reddit_fi_2006_2022
- intfloat/multilingual_cc_news
inference: false
pipeline_tag: text-generation
---

# Ahma-7B for Finnish

Ahma-7B is a 7B-parameter, decoder-only transformer model based on Meta's Llama (v1) architecture, pretrained from scratch on the Finnish language. The original Llama model architecture was introduced in
[this paper](https://arxiv.org/abs/2302.13971)
and first released at [this page](https://github.com/facebookresearch/llama).

What does Ahma mean? Ahma is the Finnish word for wolverine! In Finnish Lapland, wolverines are the biggest cause of reindeer damage.

There are two base Ahma models of different sizes, both pretrained from scratch, Ahma-3B for 139B tokens and Ahma-7B for 149B tokens:

| Model | Context length | Layers | Dim | Heads | Params |
|:--------------------------------------------------------------------------------|:---------------|:-------|:-----|:------|:-------|
| [Ahma-3B](https://huggingface.co/Finnish-NLP/Ahma-3B) | 2048 | 26 | 3200 | 32 | 3.6B |
| [Ahma-7B](https://huggingface.co/Finnish-NLP/Ahma-7B) | 2048 | 32 | 4096 | 32 | 7.0B |

And two instruct-tuned versions:

| Model | Context length | Layers | Dim | Heads | Params |
|:--------------------------------------------------------------------------------|:---------------|:-------|:-----|:------|:-------|
| [Ahma-3B-Instruct](https://huggingface.co/Finnish-NLP/Ahma-3B-Instruct) | 2048 | 26 | 3200 | 32 | 3.6B |
| [Ahma-7B-Instruct](https://huggingface.co/Finnish-NLP/Ahma-7B-Instruct) | 2048 | 32 | 4096 | 32 | 7.0B |

## Intended uses & limitations

This model was pretrained only in a self-supervised way, without any supervised training. You can use this model for text generation or fine-tune it for a downstream task. This model followed a 2-stage pretraining approach, in which single-turn instruction-following examples were mixed into the other training data during the second stage (explained in more detail later in this readme). Thanks to this approach, the pretrained model is already capable of instruction following, but you may get even better results if you specifically fine-tune it for instruction following or other use cases. For instruction-following fine-tuning, you should use the same prompt format showcased below.

### How to use

#### Fine-tuning

We have added a fine-tuning example notebook along with a video! \
Notebook: https://huggingface.co/Finnish-NLP/Ahma-3B/blob/main/Finetune_Ahma_3B_example.ipynb \
Video: https://www.youtube.com/watch?v=6mbgn9XzpS4


#### Inference

If you want to use this model for instruction following, you need to use the same prompt format we used in the second stage of the pretraining (basically the same format Meta used in their Llama2 models). **Note: do not use "LlamaTokenizer" from the transformers library; always use the AutoTokenizer instead, or use the plain sentencepiece tokenizer.** Here is an example using the instruction-following prompt format, with some generation arguments you can modify for your use:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

system_prompt = "Olet tekoälyavustaja. Vastaat aina mahdollisimman avuliaasti. Vastauksesi eivät saa sisältää mitään haitallista, epäeettistä, rasistista, seksististä, vaarallista tai laitonta sisältöä. Jos kysymyksessä ei ole mitään järkeä tai se ei ole asiasisällöltään johdonmukainen, selitä miksi sen sijaan, että vastaisit jotain väärin. Jos et tiedä vastausta kysymykseen, älä kerro väärää tietoa."


def format_prompt(prompt: str) -> str:
    prompt = f" [INST] <<SYS>>\n{system_prompt.strip()}\n<</SYS>>\n\n{prompt.strip()} [/INST] "
    return prompt


tokenizer = AutoTokenizer.from_pretrained("Finnish-NLP/Ahma-7B")
model = AutoModelForCausalLM.from_pretrained("Finnish-NLP/Ahma-7B")
model = model.to("cuda")

# use the custom prompt format function or the chat template feature in the tokenizer to format your inputs

# prompt = format_prompt("Listaa kolme hyötyä, joita pienet avoimen lähdekoodin kielimallit tuovat?")
# inputs = tokenizer(prompt, return_tensors="pt")

messages = [
    {
        "role": "system",
        "content": system_prompt,
    },
    {"role": "user", "content": "Listaa kolme hyötyä, joita pienet avoimen lähdekoodin kielimallit tuovat?"},
]
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
inputs = inputs.to("cuda")

generated_ids = model.generate(
    inputs,
    temperature=0.6,
    penalty_alpha=0.6,
    top_k=4,
    do_sample=True,
    repetition_penalty=1.2,
    min_length=5,
    max_length=2048,
)
generated_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=False
)[0]

"""
1. Parempi luettavuus ja ymmärtäminen: Pienten avoimen lähdekoodin kielimallien avulla voidaan luoda ymmärrettävämpää ja luettavampaa tekstiä, mikä helpottaa ihmisten ymmärtämistä ja tiedon hankkimista.
2. Parempi mukautuvuus ja monipuolisuus: Avoimen lähdekoodin mallit antavat kehittäjille mahdollisuuden luoda räätälöityjä ratkaisuja omiin tarpeisiinsa, jolloin he voivat hyödyntää olemassa olevaa tietämystä ja asiantuntemusta.
3. Lisääntynyt yhteistyö ja avoimuus: Avoimen lähdekoodin mallien ansiosta kehittäjät voivat tehdä yhteistyötä muiden kanssa, jakaa ideoita ja parantaa koodin laatua jakamalla oivalluksia ja parhaita käytäntöjä. Tämä edistää yhteistyöhön perustuvaa ympäristöä ja kannustaa jatkuvaan parantamiseen.
"""
```

You may also experiment with different system prompt instructions if you like.

### Limitations and bias

This model was trained only on Finnish text, excluding code, so it should not be used for multilingual or code generation use cases.

The training data used for this model contains a lot of content from the internet, which is far from neutral. Therefore, the model can produce biased predictions. This bias will also affect all fine-tuned versions of this model.

To reduce toxic content, the training data was filtered with a toxicity classifier, but such filtering cannot truly eliminate all toxic text.

## Training data

This model was pretrained on a combination of 14 datasets:
- [CulturaX_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/CulturaX_fi_cleaned), our cleaned version of the Finnish split of the original [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset
- [HPLT_1.2_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/HPLT_1.2_fi_cleaned), our cleaned version of the Finnish split of the original [HPLT v1.2](https://hplt-project.org/datasets/v1.2) dataset
- [wikipedia_20231101_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/wikipedia_20231101_fi_cleaned), the Finnish subset of the Wikipedia (November 2023) dataset
- [Reddit_fi_2006_2022](https://huggingface.co/datasets/Finnish-NLP/Reddit_fi_2006_2022), a filtered and post-processed dataset of Finnish Reddit
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
- [Project Lönnrot](http://www.lonnrot.net/)
- [Finnish parliament speeches](https://avoindata.eduskunta.fi)
- [multilingual_cc_news](https://huggingface.co/datasets/intfloat/multilingual_cc_news), the Finnish subset of the multilingual CC-News dataset
- [fi-news-corpus](https://github.com/nkrusch/fi-news-corpus)
- Finnish higher education public theses
- Finnish single-turn instruction-following datasets, a combination of multiple originally openly licensed English datasets translated to Finnish, for example [Ultrachat, Aya, Capybara, etc.](https://huggingface.co/collections/Finnish-NLP/sft-dpo-dataset-65f55dde1139c3cd683ff035)


Raw datasets were automatically cleaned to filter out low-quality and non-Finnish examples. In addition, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model trained only on very clean Finnish texts. This perplexity score can then be used to determine how "clean" the Finnish in a given text is. To reduce toxic text, we used the Finnish toxicity classifier [TurkuNLP/bert-large-finnish-cased-toxicity](https://huggingface.co/TurkuNLP/bert-large-finnish-cased-toxicity) released by TurkuNLP to classify all text examples, and the resulting toxicity label scores can then be used to determine how toxic a text is.

All datasets were concatenated, and the whole dataset was near-deduplicated using MinHashLSH from [text-dedup](https://github.com/ChenghaoMou/text-dedup). The 95th-percentile perplexity score was used as a threshold to filter out the worst-quality 5% of texts. To reduce the amount of toxic content, the dataset was filtered to include only text examples with a score lower than 80% for the toxicity labels "label_identity_attack", "label_insult", "label_threat" and "label_severe_toxicity".
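
The two filtering steps above can be sketched in plain Python. This is an illustrative reimplementation, not our actual cleaning code: the example structure, the field names (`perplexity`, `toxicity`), and the nearest-rank percentile helper are assumptions, while the 95% perplexity cutoff and the 0.8 toxicity-score threshold come from the description above.

```python
# Hypothetical sketch of the quality/toxicity filtering described above:
# drop the worst ~5% of texts by KenLM perplexity, then drop texts whose
# toxicity label scores reach 0.8. Field names are illustrative.
import math

TOXICITY_LABELS = (
    "label_identity_attack",
    "label_insult",
    "label_threat",
    "label_severe_toxicity",
)


def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers (pct in 0..100)."""
    ordered = sorted(values)
    rank = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[rank]


def filter_examples(examples, perplexity_pct=95.0, toxicity_threshold=0.8):
    """Keep examples below the perplexity cutoff and under all toxicity thresholds."""
    cutoff = percentile([ex["perplexity"] for ex in examples], perplexity_pct)
    kept = []
    for ex in examples:
        if ex["perplexity"] > cutoff:
            continue  # worst ~5% by perplexity
        if any(ex["toxicity"][label] >= toxicity_threshold for label in TOXICITY_LABELS):
            continue  # too toxic on at least one label
        kept.append(ex)
    return kept
```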

Finally, 20,000 text examples from each of the CulturaX, Wikipedia, Yle, STT, Suomi24, and Reddit datasets were randomly selected for the evaluation dataset.

The final training dataset had 23 billion words (calculated with the regex "\w+") and the evaluation dataset had 23 million words. After tokenization, the training dataset had 41 billion tokens and the evaluation dataset had 40 million tokens. For the 2-stage pretraining, the training datasets were divided as follows:

The first stage:
|Dataset | Words | Ratio |
|:-----------------------------|:------------|:-------------|
|CulturaX | 12.820B | 59.88\% |
|HPLT v1.2 | 5.034B | 23.51\% |
|Suomi24 | 3.018B | 14.09\% |
|Reddit | 0.141B | 0.66\% |
|CC-News | 0.311B | 1.45\% |
|FI news corpus | 0.004B | 0.02\% |
|Project Lönnrot | 0.083B | 0.39\% |
|**TOTAL** | **21.410B** | **100.0\%** |


The second stage:
|Dataset | Words | Ratio |
|:--------------------------------------------------------------|:------------|:------------|
|CulturaX (cleaner sample using KenLM perplexity score) | 2.252B | 55.48\% |
|Wikipedia | 0.095B | 2.34\% |
|STT | 0.253B | 6.23\% |
|Yle | 0.212B | 5.22\% |
|Finnish parliament speeches | 0.021B | 0.52\% |
|Finnish higher education public theses | 0.855B | 21.07\% |
|Finnish instruction-following datasets (note: 2X upsampled) | 0.371B | 9.14\% |
|**TOTAL** | **4.059B** | **100.0\%** |
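
The word counts in the tables above were calculated with the regex "\w+". A minimal illustration of that counting method (the function name is ours):

```python
# Count "words" the same way as in the dataset statistics above: every
# maximal run of word characters matched by \w+ counts as one word.
import re


def count_words(text: str) -> int:
    """Count word tokens with the \\w+ regex used for the statistics above."""
    return len(re.findall(r"\w+", text))
```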

## Training procedure

### Preprocessing

Texts are tokenized using Byte Pair Encoding (BPE), using the implementation from SentencePiece, splitting all numbers into individual digits and using bytes to decompose unknown UTF-8 characters. The total vocabulary size is 64k tokens. Inputs are sequences of 2048 consecutive tokens. Texts are not lowercased, so this model is case-sensitive: it makes a difference between finnish and Finnish. Both BOS and EOS tokens were used in the pretraining.
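
SentencePiece performs the digit splitting internally, but a loose sketch of the "split all numbers into individual digits" behavior described above, operating on already-split string pieces, looks roughly like this (illustrative only, not the actual tokenizer code):

```python
# Illustrative sketch: split every run of digits in a sequence of string
# pieces into individual digit pieces, as the tokenizer description above
# does before BPE merges are applied.
import re


def split_digits(pieces):
    """Split any run of digits in each piece into individual digit pieces."""
    out = []
    for piece in pieces:
        for part in re.split(r"(\d)", piece):
            if part:  # re.split leaves empty strings between captured digits
                out.append(part)
    return out
```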

### 2-stage pretraining

The model was trained on a TPUv4-32 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/). Training was conducted with a slightly modified Jax/Flax-based [EasyLM](https://github.com/young-geng/EasyLM) framework, inspired by the [OpenLLaMA](https://github.com/openlm-research/open_llama) project. The optimizer used was [Lion](https://arxiv.org/abs/2302.06675).

The 2-stage pretraining approach was inspired by the [MiniCPM](https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20) findings. For the first stage (79% of the entire training), we used noisier web-scraped datasets. For the second stage (21% of the entire training), we primarily used cleaner datasets and instruction-following datasets shuffled together, as in MiniCPM. The learning rate schedule for the 2-stage pretraining was Warmup-Stable-Decay (WSD). During the first stage, the schedule had a linear warmup over about 8 billion tokens to a peak learning rate of 1e-4 (note: with the Lion optimizer, the learning rate had to be about 10 times smaller than with the commonly used AdamW), followed by a stable phase where the rate of 1e-4 was kept constant. During the second stage, the schedule had a linear decay from 1e-4 to 6e-6 over the first 7 billion tokens, followed by a stable phase for the remaining tokens.
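
The WSD schedule described above can be sketched as a function of tokens seen. This is an illustrative reimplementation based only on the numbers in the text (an ~8B-token warmup to 1e-4, then constant, then a 7B-token decay to 6e-6 starting at the 118B-token stage boundary), not the actual training code:

```python
# Hedged sketch of the Warmup-Stable-Decay (WSD) learning rate schedule
# described above, expressed over tokens seen. The stage boundary at 118B
# tokens is taken from the first-stage token count mentioned below.
WARMUP_TOKENS = 8e9     # linear warmup length (stage 1)
STAGE1_TOKENS = 118e9   # tokens trained in stage 1 (stable at peak LR)
DECAY_TOKENS = 7e9      # linear decay length at the start of stage 2
PEAK_LR = 1e-4
FINAL_LR = 6e-6


def wsd_learning_rate(tokens_seen: float) -> float:
    """Learning rate as a function of tokens seen during pretraining."""
    if tokens_seen < WARMUP_TOKENS:
        # Stage 1: linear warmup to the peak rate.
        return PEAK_LR * tokens_seen / WARMUP_TOKENS
    if tokens_seen < STAGE1_TOKENS:
        # Stage 1: stable phase at the peak rate.
        return PEAK_LR
    if tokens_seen < STAGE1_TOKENS + DECAY_TOKENS:
        # Stage 2: linear decay from the peak to the final rate.
        frac = (tokens_seen - STAGE1_TOKENS) / DECAY_TOKENS
        return PEAK_LR + frac * (FINAL_LR - PEAK_LR)
    # Stage 2: stable phase at the final rate for the remaining tokens.
    return FINAL_LR
```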

In the first stage, the model was trained for 118 billion tokens, which is about three epochs of the first-stage training data, inspired by the findings of [this paper](https://arxiv.org/abs/2305.16264). In the second stage, the model was trained for 31 billion tokens, which is close to five epochs of the second-stage training data.

Thanks to the WSD learning rate schedule, you can more easily experiment with different first-stage model checkpoints. For example, you could apply the second-stage training on an earlier checkpoint or continue pretraining further before the second stage. Model checkpoints were pushed to this repository every 100,000 training steps (approximately 13 billion tokens):

- [900K](https://huggingface.co/Finnish-NLP/Ahma-7B/tree/5f6eb9498b17fece810d766f81c711c38a2b2de2)
- [800K](https://huggingface.co/Finnish-NLP/Ahma-7B/tree/bc2d607ce302c1b0ff75c229496645cf232c6d98)
- [700K](https://huggingface.co/Finnish-NLP/Ahma-7B/tree/69352a497d5953c5290296a1f429a450978c7f7f)
- [600K](https://huggingface.co/Finnish-NLP/Ahma-7B/tree/760ab5f865b08d9a512c1df523a5c4deb6874322)
- [500K](https://huggingface.co/Finnish-NLP/Ahma-7B/tree/32ea3d35931da8039180e80d67f6c323719ae50a)
- [400K](https://huggingface.co/Finnish-NLP/Ahma-7B/tree/d1256a6815983053d0f9934f21f163d764fc5ecd)
- [300K](https://huggingface.co/Finnish-NLP/Ahma-7B/tree/1e3094c66e788fe81d2aadad5bf8f0431358bd38)
- [200K](https://huggingface.co/Finnish-NLP/Ahma-7B/tree/a4afd130fa0effea047deaaf8bf63b3eba1b323b)
- [100K](https://huggingface.co/Finnish-NLP/Ahma-7B/tree/245fad2f5838af1465cb40ad42caef092e875cd9)

## Evaluation results

### FIN-bench

This Ahma 7B base model was primarily evaluated using [FIN-bench by TurkuNLP](https://github.com/TurkuNLP/FIN-bench), and the same evaluation was carried out for other relevant Finnish models for comparison: [FinGPT 8B by TurkuNLP](https://huggingface.co/TurkuNLP/gpt3-finnish-8B), [Viking 7B by TurkuNLP, SiloGen and HPLT](https://huggingface.co/LumiOpen/Viking-7B), and [Poro 34B by SiloGen, TurkuNLP and HPLT](https://huggingface.co/LumiOpen/Poro-34B). Below are the results with 0-shot and 3-shot settings in FIN-bench.

0-shot results:

| Benchmark | Ahma 3B base (instruct prompt format) | Ahma 3B Instruct (instruct prompt format) | Ahma 7B base (instruct prompt format) | Ahma 7B Instruct (instruct prompt format) | FinGPT 8B | Viking 7B | Poro 34B (8bit quant) |
|:---------------------------|:--------------------------------------|:------------------------------------------|:--------------------------------------|:------------------------------------------|:----------|:----------|:----------------------|
| Analogies | 50.77 | 48.46 | 56.92 | TBA | 49.23 | 40.00 | 54.62 |
| Arithmetic | 27.64 | 22.14 | 11.50 | TBA | 33.15 | 30.16 | 30.34 |
| Cause and Effect | 59.48 | 58.82 | 59.48 | TBA | 66.01 | 58.82 | 62.74 |
| Emotions | 36.25 | 28.12 | 36.25 | TBA | 22.50 | 26.25 | 35.63 |
| Empirical Judgements | 33.33 | 35.35 | 33.33 | TBA | 27.27 | 33.33 | 49.49 |
| General Knowledge | 44.29 | 48.57 | 51.43 | TBA | 40.00 | 24.29 | 51.43 |
| HHH Alignment | 42.09 | 41.66 | 44.23 | TBA | 41.81 | 42.51 | 42.92 |
| Intent Recognition | 24.42 | 26.16 | 43.64 | TBA | 17.49 | 22.40 | 68.35 |
| Misconceptions | 46.27 | 47.01 | 46.27 | TBA | 53.73 | 53.73 | 52.24 |
| Paraphrase | 59.50 | 73.00 | 67.00 | TBA | 51.00 | 50.00 | 51.00 |
| Sentence Ambiguity | 53.33 | 65.00 | 60.00 | TBA | 51.67 | 48.33 | 50.00 |
| Similarities Abstraction | 65.79 | 68.42 | 71.05 | TBA | 60.53 | 65.79 | 60.53 |
| **Non-Arithmetic Average** | **47.55** | **48.95** | **51.33** | TBA | **46.17** | **44.42** | **52.08** |
| **Overall Average** | **36.49** | **34.06** | **29.20** | TBA | **38.93** | **36.50** | **40.00** |


3-shot results:

| Benchmark | Ahma 3B base (instruct prompt format) | Ahma 3B Instruct (instruct prompt format) | Ahma 7B base (instruct prompt format) | Ahma 7B Instruct (instruct prompt format) | FinGPT 8B | Viking 7B | Poro 34B (8bit quant) |
|:---------------------------|:--------------------------------------|:------------------------------------------|:--------------------------------------|:------------------------------------------|:----------|:----------|:----------------------|
| Analogies | 50.77 | 49.23 | 49.23 | TBA | 40.77 | 54.62 | 76.92 |
| Arithmetic | 38.38 | 43.89 | 20.88 | TBA | 43.63 | 45.78 | 53.68 |
| Cause and Effect | 60.78 | 64.71 | 66.01 | TBA | 64.05 | 58.17 | 67.32 |
| Emotions | 30.00 | 41.25 | 30.00 | TBA | 44.37 | 48.13 | 56.87 |
| Empirical Judgements | 46.46 | 44.44 | 39.39 | TBA | 32.32 | 43.43 | 63.64 |
| General Knowledge | 47.14 | 40.00 | 27.14 | TBA | 54.29 | 28.57 | 74.29 |
| HHH Alignment | 43.53 | 44.80 | 43.80 | TBA | 45.39 | 44.80 | 46.07 |
| Intent Recognition | 20.52 | 44.22 | 36.42 | TBA | 51.45 | 58.82 | 83.67 |
| Misconceptions | 50.75 | 52.24 | 46.27 | TBA | 52.99 | 46.27 | 52.99 |
| Paraphrase | 50.50 | 58.50 | 57.50 | TBA | 53.00 | 54.50 | 55.00 |
| Sentence Ambiguity | 53.33 | 48.33 | 53.33 | TBA | 51.67 | 53.33 | 66.67 |
| Similarities Abstraction | 69.74 | 72.37 | 72.37 | TBA | 64.47 | 73.68 | 75.00 |
| **Non-Arithmetic Average** | **48.48** | **51.49** | **49.05** | TBA | **51.19** | **50.94** | **61.96** |
| **Overall Average** | **42.87** | **47.27** | **33.41** | TBA | **46.99** | **48.07** | **57.36** |


As we can see, the Ahma 7B base model has poor arithmetic performance, but in non-arithmetic tasks it clearly outperforms same-sized models like FinGPT 8B and Viking 7B, especially in 0-shot usage. In non-arithmetic 0-shot tasks, the Ahma 7B base model is even on par with the 5X larger Poro 34B model. This result might be attributed to Ahma's 2-stage pretraining and the inclusion of instruction-following examples during the pretraining phase.

In a 3-shot setting, the results are more mixed. The poorer performance of the Ahma 7B base model in 3-shot settings might be due to the use of the instruct prompt format and having only single-turn instruction-following training examples.


### MTBench Finnish

This Ahma 7B base model was also evaluated using [MTBench Finnish by LumiOpen](https://github.com/LumiOpen/FastChat/tree/main/fastchat/llm_judge), even though this Ahma model is not fine-tuned for chat. Since MTBench also evaluates multi-turn chats, while Ahma base models were only pretrained with single-turn instruction-following examples, we report the MTBench Finnish results separately for its single-turn and multi-turn evaluation examples. The presumably multi-turn results of the [Poro 34B Chat by SiloGen, TurkuNLP and HPLT](https://huggingface.co/LumiOpen/Poro-34B-chat) model are copied from its model card for comparison.
254
+
255
+ Single-turn results:
256
+
257
+ | Benchmark | Ahma 3B base (instruct prompt format) | Ahma 3B Instruct | Ahma 7B base (instruct prompt format) | Ahma 7B Instruct |
258
+ |:--------------------|:--------------------------------------|:-----------------|:--------------------------------------|:-----------------|
259
+ | Coding | 1.00 | 1.00 | 1.70 | TBA |
260
+ | Extraction | 2.00 | 1.30 | 3.10 | TBA |
261
+ | Humanities | 4.05 | 6.20 | 6.60 | TBA |
262
+ | Math | 3.00 | 3.20 | 3.90 | TBA |
263
+ | Reasoning | 2.90 | 4.60 | 3.70 | TBA |
264
+ | Roleplay | 4.80 | 6.50 | 6.60 | TBA |
265
+ | STEM | 5.10 | 5.95 | 6.75 | TBA |
266
+ | Writing | 6.60 | 9.00 | 7.10 | TBA |
267
+ | **Overall Average** | **3.68** | **4.72** | **4.93** | TBA |
268
+
Multi-turn results:

| Benchmark | Ahma 3B base (instruct prompt format) | Ahma 3B Instruct | Ahma 7B base (instruct prompt format) | Ahma 7B Instruct | Poro 34B Chat |
|:--------------------|:--------------------------------------|:-----------------|:--------------------------------------|:-----------------|:--------------|
| Coding | 1.00 | 1.00 | 1.40 | TBA | 3.70 |
| Extraction | 1.55 | 1.15 | 2.05 | TBA | 6.37 |
| Humanities | 3.25 | 6.20 | 4.95 | TBA | 9.25 |
| Math | 2.20 | 2.70 | 2.50 | TBA | 1.20 |
| Reasoning | 2.45 | 3.50 | 2.55 | TBA | 4.35 |
| Roleplay | 4.90 | 6.40 | 6.35 | TBA | 7.35 |
| STEM | 4.20 | 4.78 | 4.28 | TBA | 7.80 |
| Writing | 3.80 | 6.65 | 4.10 | TBA | 8.50 |
| **Overall Average** | **2.92** | **4.05** | **3.52** | TBA | **6.06** |

As we can see, the Ahma 7B base model struggles with multi-turn examples, as expected, since it has only been pretrained with single-turn instruction-following examples. In addition, coding performance was expectedly poor, because the Ahma 7B model was not trained on code data. In the single-turn setting, Ahma 7B beats both the Ahma 3B base and instruct-tuned versions, demonstrating a stronger base capability to be further improved with instruct tuning.

## Acknowledgements

This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).

## Team Members

- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)

Feel free to contact us for more details 🤗

![Ahma](ahma.jpg)