---
license: apache-2.0
base_model: eryk-mazus/tinyllama-with-custom-tokenizer
datasets:
- allenai/MADLAD-400
- eryk-mazus/polka-pretrain-en-pl-v1
language:
- pl
- en
pipeline_tag: text-generation
widget:
  - text: "Wiedźmin 3 to fabularna gra akcji wyprodukowana"
    output:
      text: " przez studio CD Projekt RED. Akcja rozgrywa się w świecie fantasy, a jej bohaterem jest Geralt z Rivii,"
  - text: "Gdy już będziecie w Warszawie, miejscem, które koniecznie musicie odwiedzić jest"
    output:
      text: " Muzeum Powstania Warszawskiego. To jedyne tego typu muzeum w Europie"
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61bf0e11c88f3fd22f654059/EMSrPEzAFkjY9nvbaJoC3.png)

# polka-1.1b


`polka-1.1b` takes the [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) model and enhances it by continuing pretraining on an additional **5.7 billion Polish tokens**, primarily sourced from the [MADLAD-400](https://arxiv.org/abs/2309.04662) dataset. The tokens were sampled in a 10:1 ratio between Polish and English shards using [DSIR](https://github.com/p-lambda/dsir). Furthermore, Polka extends the TinyLlama tokenizer's vocabulary to 43,882 tokens, improving its efficiency for generating Polish text.
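
To get a sense of what the extended vocabulary buys, one can compare how many tokens the base TinyLlama tokenizer and the polka tokenizer need for the same Polish sentence. This is only an illustrative check (the sentence is taken from the widget example above), not part of the official evaluation:

```python
from transformers import AutoTokenizer

text = "Wiedźmin 3 to fabularna gra akcji wyprodukowana przez studio CD Projekt RED."

base_tok = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T")
polka_tok = AutoTokenizer.from_pretrained("eryk-mazus/polka-1.1b")

# Fewer tokens per sentence means cheaper and faster Polish generation.
print("TinyLlama tokens :", len(base_tok.encode(text)))
print("polka-1.1b tokens:", len(polka_tok.encode(text)))
print("polka-1.1b vocab :", len(polka_tok))  # 43,882 per this card
```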

The training took 680 GPU hours on a single 8 x RTX 4090 machine with DeepSpeed ZeRO-2.
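
The exact training configuration is not published in this card. For orientation only, a minimal DeepSpeed ZeRO-2 setup of the kind typically used with the Hugging Face `Trainer` might look like the sketch below; every value is an illustrative placeholder, not the setting used for this run.

```python
# Illustrative ZeRO-2 config only -- not the actual polka-1.1b settings.
# With the Hugging Face Trainer it is passed as TrainingArguments(deepspeed=ds_config, ...).
ds_config = {
    "zero_optimization": {
        "stage": 2,                 # shard optimizer states and gradients across GPUs
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
}
```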

Context size: 2,048 tokens.

## Notes

This base model was initially developed as the foundation for instruction tuning, which resulted in [polka-1.1b-chat](https://huggingface.co/eryk-mazus/polka-1.1b-chat). Nonetheless, I'm sharing it with the community because I see potential value in its combination of relatively good performance and an efficient bilingual tokenizer.

The model produces coherent Polish text, but given its small size it is prone to hallucinations.

## Evaluation

Performed by [OPI-PG](https://huggingface.co/OPI-PG), the authors of Qra models.

### PolEval-2018

<table>
<thead>
<tr><th>Model</th><th>Perplexity</th></tr>
</thead>
<tr><td colspan="2"><strong>English models</strong></td></tr>
<tr><td>meta-llama/Llama-2-7b-hf</td><td>24.3</td></tr>
<tr><td>meta-llama/Llama-2-13b-hf</td><td>21.4</td></tr>
<tr><td>mistralai/Mistral-7B-v0.1</td><td>21.4</td></tr>
<tr><td>TinyLlama/TinyLlama-1.1B</td><td>40.4</td></tr>
<tr><td colspan="2"><strong>Polish models</strong></td></tr>
<tr><td>sdadas/polish-gpt2-small</td><td>134.4</td></tr>
<tr><td>sdadas/polish-gpt2-medium</td><td>100.8</td></tr>
<tr><td>sdadas/polish-gpt2-large</td><td>93.2</td></tr>
<tr><td>sdadas/polish-gpt2-xl</td><td>94.1</td></tr>
<tr><td>Azurro/APT3-275M-Base</td><td>129.8</td></tr>
<tr><td>Azurro/APT3-500M-Base</td><td>153.1</td></tr>
<tr><td>Azurro/APT3-1B-Base</td><td>106.8</td></tr>
<tr><td><b>eryk-mazus/polka-1.1b</b></td><td><b>18.1</b></td></tr>
<tr><td>szymonrucinski/Curie-7B-v1</td><td>13.5</td></tr>
<tr><td>OPI-PG/Qra-1b</td><td>14.7</td></tr>
</table>

### Long documents (2024)

Modern LLMs support contexts of thousands of tokens, and their practical applications usually involve processing long documents. Evaluating perplexity on a sentence-based dataset such as PolEval-2018 may therefore not be very meaningful. In addition, the PolEval corpus has been publicly available on the internet for several years, so the training sets of some models may have been contaminated with this data. For this reason, we prepared a new collection consisting of long documents published exclusively in 2024, which allows us to more reliably test the models' perplexity on new content that was not available to them at training time. The corpus consists of 5,000 documents ranging from several hundred to about 20,000 tokens: half are press texts from Polish news portals from February 2024, and the other half are scientific articles published since January 2024. Most of the documents exceed the context size of the evaluated models. To calculate perplexity, we split each document into chunks equal to the model's context length with a stride of 512 tokens, following [this example](https://huggingface.co/docs/transformers/en/perplexity).
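
For reference, that chunked evaluation can be reproduced along the lines of the linked transformers perplexity guide. The sketch below is a minimal adaptation of that recipe for polka-1.1b; the document text is a placeholder, and the exact evaluation code used by OPI-PG is not published here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "eryk-mazus/polka-1.1b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
model.eval()

long_document = "..."  # placeholder: one long document published in 2024
encodings = tokenizer(long_document, return_tensors="pt")

max_length = 2048  # polka-1.1b context size
stride = 512
seq_len = encodings.input_ids.size(1)

nlls = []
prev_end = 0
for begin in range(0, seq_len, stride):
    end = min(begin + max_length, seq_len)
    trg_len = end - prev_end  # tokens not already scored by the previous window
    input_ids = encodings.input_ids[:, begin:end].to(model.device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # exclude the overlapping prefix from the loss

    with torch.no_grad():
        nlls.append(model(input_ids, labels=target_ids).loss)

    prev_end = end
    if end == seq_len:
        break

ppl = torch.exp(torch.stack(nlls).mean())
print(f"Perplexity: {ppl.item():.2f}")
```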

<table>
<thead>
<tr><th>Model</th><th>Context</th><th>Perplexity</th></tr>
</thead>
<tr><td colspan="3"><strong>English models</strong></td></tr>
<tr><td>meta-llama/Llama-2-7b-hf</td><td>4096</td><td>5.9</td></tr>
<tr><td>meta-llama/Llama-2-13b-hf</td><td>4096</td><td>5.3</td></tr>
<tr><td>mistralai/Mistral-7B-v0.1</td><td>4096</td><td>4.9</td></tr>
<tr><td>TinyLlama/TinyLlama-1.1B</td><td>2048</td><td>9.6</td></tr>
<tr><td colspan="3"><strong>Polish models</strong></td></tr>
<tr><td>sdadas/polish-gpt2-small</td><td>2048</td><td>27.3</td></tr>
<tr><td>sdadas/polish-gpt2-medium</td><td>2048</td><td>20.3</td></tr>
<tr><td>sdadas/polish-gpt2-large</td><td>1536</td><td>18.0</td></tr>
<tr><td>sdadas/polish-gpt2-xl</td><td>1536</td><td>16.6</td></tr>
<tr><td>Azurro/APT3-275M-Base</td><td>2048</td><td>77.0</td></tr>
<tr><td>Azurro/APT3-500M-Base</td><td>2048</td><td>50.5</td></tr>
<tr><td>Azurro/APT3-1B-Base</td><td>2048</td><td>19.1</td></tr>
<tr><td><b>eryk-mazus/polka-1.1b</b></td><td><b>2048</b></td><td><b>6.9</b></td></tr>
<tr><td>szymonrucinski/Curie-7B-v1</td><td>4096</td><td>4.8</td></tr>
<tr><td>OPI-PG/Qra-1b</td><td>4096</td><td>6.1</td></tr>
</table>


## Sample code

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "eryk-mazus/polka-1.1b"

# Left padding and a pad token are needed for batched generation with decoder-only models.
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token

# 8-bit loading requires bitsandbytes; recent transformers versions prefer
# quantization_config=BitsAndBytesConfig(load_in_8bit=True) over this flag.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)

prompt = """Przykładowe zapytanie do modelu"""  # "An example query for the model"

model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
with torch.no_grad():
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=512,
        do_sample=True,
        penalty_alpha=0.6,
        top_k=5,
    )

output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(output)
```
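
Because the tokenizer above is already configured with left padding and a pad token, the same setup extends naturally to batched generation. The sketch below continues from the sample above (`tokenizer` and `model` as defined there); the prompts are only illustrative.

```python
# Continues from the sample above: batch several prompts of different lengths.
prompts = [
    "Przykładowe zapytanie do modelu",  # "An example query for the model"
    "Stolicą Polski jest",              # "The capital of Poland is"
]

batch_inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
with torch.no_grad():
    batch_ids = model.generate(
        **batch_inputs,
        max_new_tokens=64,
        do_sample=True,
        penalty_alpha=0.6,
        top_k=5,
    )

for text in tokenizer.batch_decode(batch_ids, skip_special_tokens=True):
    print(text)
```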