RichardErkhov committed c926d48 (parent: d594d36): uploaded readme

Files changed (1): README.md added (+152 lines)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Qra-7b - GGUF
- Model creator: https://huggingface.co/OPI-PG/
- Original model: https://huggingface.co/OPI-PG/Qra-7b/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qra-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q2_K.gguf) | Q2_K | 2.36GB |
| [Qra-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Qra-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Qra-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Qra-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Qra-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q3_K.gguf) | Q3_K | 3.07GB |
| [Qra-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Qra-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Qra-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Qra-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Qra-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Qra-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Qra-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q4_K.gguf) | Q4_K | 3.8GB |
| [Qra-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Qra-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Qra-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Qra-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Qra-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q5_K.gguf) | Q5_K | 4.45GB |
| [Qra-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Qra-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Qra-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/OPI-PG_-_Qra-7b-gguf/blob/main/Qra-7b.Q6_K.gguf) | Q6_K | 5.15GB |

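A minimal usage sketch (not part of the original card): it downloads the Q4_K_M file from the table above and runs it with `llama-cpp-python`. The prompt, context size, and generation settings are illustrative assumptions; any other file from the table can be substituted via `filename`.

```python
# Hedged sketch: download one of the GGUF quants above and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python`; everything except the
# repo and file names is illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant from the quantization repository.
model_path = hf_hub_download(
    repo_id="RichardErkhov/OPI-PG_-_Qra-7b-gguf",
    filename="Qra-7b.Q4_K_M.gguf",
)

# Load the GGUF model; n_ctx=4096 matches the model's training context length.
llm = Llama(model_path=model_path, n_ctx=4096)

# Qra is a base (non-instruct) model, so use plain text continuation.
out = llm("Stolicą Polski jest", max_tokens=32)
print(out["choices"][0]["text"])
```

Lower-bit quants in the table trade some output quality for a smaller memory footprint.
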
Original model description:
---
license: llama2
---
<center><img src="https://huggingface.co/OPI-PG/Qra-7b/resolve/main/images/7b-logo.png"></img></center>

Qra is a series of LLMs adapted to the Polish language, resulting from a collaboration between the National Information Processing Institute (OPI) and Gdańsk University of Technology (PG). The models were trained on the infrastructure of the PG TASK Computing Center using 21 Nvidia A100 cards. The published versions of the Qra models were initialized with the weights of the English Llama 2 checkpoints and then further trained on a carefully cleaned, filtered, and deduplicated corpus of Polish texts, totaling about 90 billion tokens. The original corpus consisted primarily of web data, including CommonCrawl dumps and the MADLAD-400 corpus.

⚠️ **Important: the Qra models are foundation language models trained with a causal language modeling objective on a large corpus of texts. They are therefore not intended for conversational or instruction-following use and should be further fine-tuned before being applied to such tasks.** ⚠️

The preprocessing pipeline included the following steps:
- Text normalization and removal of URLs.
- Removal of documents shorter than 500 characters.
- Cleaning sentences in documents using a set of heuristic rules. Among others, sentences consisting mostly of non-alphabetical characters, as well as sentences in languages other than Polish and English, were removed.
- Filtering documents using a quality classifier trained on a set of several thousand documents manually labeled as being of high or low quality. The input to the classifier is a set of statistics ("quality signals") such as the percentage of Polish words, the average word and sentence length, the number of word and character duplications, and the proportion of different character classes in the text.
- Filtering documents based on the perplexity value calculated by a lightweight KenLM language model.
- Assigning each document to one of 18 topical domains using a trained classifier.
- Fuzzy deduplication using the MinHash algorithm within each topical domain (see the sketch after this list).

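The card does not include the deduplication code; the block below is only a hedged sketch of MinHash-based fuzzy deduplication within a single topical domain, written with the `datasketch` library. The word 5-gram shingles, 128 permutations, and 0.8 similarity threshold are assumptions chosen for illustration.

```python
# Illustrative MinHash + LSH near-duplicate removal for one topical domain.
# Assumes `pip install datasketch`; shingle size and threshold are assumptions.
from datasketch import MinHash, MinHashLSH

def minhash_of(text: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature from word 5-gram shingles of a document."""
    words = text.lower().split()
    shingles = {" ".join(words[i:i + 5]) for i in range(max(1, len(words) - 4))}
    sig = MinHash(num_perm=num_perm)
    for shingle in shingles:
        sig.update(shingle.encode("utf-8"))
    return sig

def deduplicate(docs: dict[str, str], threshold: float = 0.8) -> list[str]:
    """Return ids of documents kept after fuzzy (near-duplicate) filtering."""
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept = []
    for doc_id, text in docs.items():
        sig = minhash_of(text)
        if lsh.query(sig):          # a near-duplicate is already in the index
            continue
        lsh.insert(doc_id, sig)
        kept.append(doc_id)
    return kept
```
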
The final distribution of documents by topic is shown in the chart below:

<center><img src="https://huggingface.co/OPI-PG/Qra-7b/resolve/main/images/topics.png"></img></center>

## Model details

The models were trained for one epoch on sequences of 4096 tokens. During training, we used many modern optimizations, listed below (an illustrative configuration sketch follows the list):
- [torch.compile](https://pytorch.org/docs/stable/generated/torch.compile.html)
- [adamw_apex_fused](https://huggingface.co/docs/transformers/main/en/perf_train_gpu_one#optimizer-choice) optimizer
- [Flash Attention 2](https://github.com/Dao-AILab/flash-attention)
- [Mixed precision](https://huggingface.co/docs/transformers/main/en/perf_train_gpu_one#bf16) (`--bf16` and `--tf32` options)
- [Gradient accumulation](https://huggingface.co/docs/transformers/main/en/perf_train_gpu_one#gradient-accumulation)
- [Fully Sharded Data Parallel (FSDP)](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html) with the SHARD_GRAD_OP mode
- [Gradient checkpointing](https://huggingface.co/docs/transformers/main/en/perf_train_gpu_one#gradient-checkpointing) (only for the 13B model)

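The card does not ship training code, so the block below is only a sketch of how the listed options map onto Hugging Face `TrainingArguments`. The output directory and the per-device batch size / gradient-accumulation split (chosen so that 8 × 8 accumulation × 21 GPUs = 1344, the global batch size reported below) are assumptions; running it also requires access to the gated Llama 2 weights plus the Apex and flash-attn packages.

```python
# Hedged sketch of the listed optimizations as Hugging Face Trainer settings;
# values marked as assumptions are illustrative, not the authors' exact setup.
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",               # initialization checkpoint
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # Flash Attention 2
)

training_args = TrainingArguments(
    output_dir="qra-7b-continued",            # assumption
    num_train_epochs=1,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_steps=0,
    per_device_train_batch_size=8,            # assumption: 8 x 8 accum x 21 GPUs = 1344
    gradient_accumulation_steps=8,            # gradient accumulation
    bf16=True,                                # mixed precision (--bf16)
    tf32=True,                                # TensorFloat-32 matmuls (--tf32)
    optim="adamw_apex_fused",                 # fused AdamW from NVIDIA Apex
    torch_compile=True,                       # torch.compile
    fsdp="shard_grad_op",                     # FSDP in SHARD_GRAD_OP mode
    gradient_checkpointing=False,             # enabled only for the 13B model
)
```
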
Below is a summary of the Qra-7B model:

| Attribute | Value |
| ---- | ---- |
| Adapted from | [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) |
| License | [Llama 2 Community License Agreement](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) |
| Batch size | 1344 |
| Context length | 4096 |
| Learning rate | 2e-5 |
| Learning rate decay | cosine |
| Warmup steps | 0 |
| Training time | 14 days |

## Evaluation

In this section, we compare the perplexity of the Qra models on Polish texts with that of other Polish and English LLMs.

Note that perplexity values obtained with different tokenizers are not directly comparable. We can therefore draw conclusions only from comparisons between models that use the same tokenizer, such as Qra and the original Llama / TinyLlama.

### PolEval-2018

In 2018, the PolEval competition included a language modeling task, for which training and test sets totaling over 20 million Polish sentences were made available. We used the first 10k sentences from the test set to evaluate modern neural language models. To calculate the perplexity, we used a script from the [HuggingFace Evaluate](https://huggingface.co/spaces/evaluate-metric/perplexity) library.

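An illustrative way to reproduce such a measurement with the Evaluate library is sketched below; this is not the authors' exact script, and the test-set file name and batch size are placeholders.

```python
# Hedged sketch: sentence-level perplexity with the HuggingFace Evaluate metric.
# `poleval_test.txt` is a placeholder for the PolEval-2018 test sentences.
import evaluate

perplexity = evaluate.load("perplexity", module_type="metric")

with open("poleval_test.txt", encoding="utf-8") as f:
    sentences = [line.strip() for line in f if line.strip()][:10_000]  # first 10k sentences

results = perplexity.compute(
    model_id="OPI-PG/Qra-7b",
    predictions=sentences,
    batch_size=8,
)
print(results["mean_perplexity"])
```
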
<table>
<thead>
<tr><th>Model</th><th>Perplexity</th></tr>
</thead>
<tr><td colspan="2"><strong>English models</strong></td></tr>
<tr><td>meta-llama/Llama-2-7b-hf</td><td>24.3</td></tr>
<tr><td>meta-llama/Llama-2-13b-hf</td><td>21.4</td></tr>
<tr><td>mistralai/Mistral-7B-v0.1</td><td>21.4</td></tr>
<tr><td>TinyLlama/TinyLlama-1.1B</td><td>40.4</td></tr>
<tr><td colspan="2"><strong>Polish models</strong></td></tr>
<tr><td>sdadas/polish-gpt2-small</td><td>134.4</td></tr>
<tr><td>sdadas/polish-gpt2-medium</td><td>100.8</td></tr>
<tr><td>sdadas/polish-gpt2-large</td><td>93.2</td></tr>
<tr><td>sdadas/polish-gpt2-xl</td><td>94.1</td></tr>
<tr><td>Azurro/APT3-275M-Base</td><td>129.8</td></tr>
<tr><td>Azurro/APT3-500M-Base</td><td>153.1</td></tr>
<tr><td>Azurro/APT3-1B-Base</td><td>106.8</td></tr>
<tr><td>eryk-mazus/polka-1.1b</td><td>18.1</td></tr>
<tr><td>szymonrucinski/Curie-7B-v1</td><td>13.5</td></tr>
<tr><td colspan="2"><strong>Qra models</strong></td></tr>
<tr><td>OPI-PG/Qra-1b</td><td>14.7</td></tr>
<tr><td>OPI-PG/Qra-7b</td><td>11.3</td></tr>
<tr><td>OPI-PG/Qra-13b</td><td>10.5</td></tr>
</table>

### Long documents (2024)

Currently, LLMs support contexts of thousands of tokens, and their practical applications usually involve processing long documents. Evaluating perplexity on a sentence-based dataset such as PolEval-2018 may therefore not be meaningful. Additionally, the PolEval corpus has been publicly available on the internet for the past few years, which raises the possibility that the training sets of some models have been contaminated by this data. For this reason, we have prepared a new collection consisting of long documents published exclusively in 2024, which allows us to test the perplexities of the models more reliably on new knowledge that was not available to them at training time. The corpus consists of 5,000 documents ranging from several hundred to about 20,000 tokens. Half of the set consists of press texts from Polish news portals from February 2024; the other half consists of scientific articles published since January 2024. Most of the documents exceed the context size of the evaluated models. To calculate perplexity for these documents, we divided them into chunks of size equal to the model's context length with a stride of 512 tokens, following [this example](https://huggingface.co/docs/transformers/en/perplexity).

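The linked example boils down to a strided sliding-window evaluation; the block below is a hedged sketch of it, with the chunk size equal to the model context (4096 for Qra-7b) and a stride of 512 tokens. The model id is used for illustration, and the per-window token accounting follows the linked guide only approximately.

```python
# Hedged sketch of strided (sliding-window) perplexity for long documents.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OPI-PG/Qra-7b"                       # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()

def document_nll(text: str, max_length: int = 4096, stride: int = 512):
    """Total negative log-likelihood and number of scored tokens for one document."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    seq_len = input_ids.size(1)
    nll_sum, n_tokens, prev_end = 0.0, 0, 0
    for begin in range(0, seq_len, stride):
        end = min(begin + max_length, seq_len)
        target_len = end - prev_end              # only the new tokens are scored
        ids = input_ids[:, begin:end]
        labels = ids.clone()
        labels[:, :-target_len] = -100           # mask the overlapping prefix
        with torch.no_grad():
            loss = model(ids, labels=labels).loss
        nll_sum += loss.item() * target_len      # undo the per-token averaging
        n_tokens += target_len
        prev_end = end
        if end == seq_len:
            break
    return nll_sum, n_tokens

# Corpus perplexity = exp(sum of NLLs / sum of scored tokens) over all documents.
```
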
<table>
<thead>
<tr><th>Model</th><th>Context</th><th>Perplexity</th></tr>
</thead>
<tr><td colspan="3"><strong>English models</strong></td></tr>
<tr><td>meta-llama/Llama-2-7b-hf</td><td>4096</td><td>5.9</td></tr>
<tr><td>meta-llama/Llama-2-13b-hf</td><td>4096</td><td>5.3</td></tr>
<tr><td>mistralai/Mistral-7B-v0.1</td><td>4096</td><td>4.9</td></tr>
<tr><td>TinyLlama/TinyLlama-1.1B</td><td>2048</td><td>9.6</td></tr>
<tr><td colspan="3"><strong>Polish models</strong></td></tr>
<tr><td>sdadas/polish-gpt2-small</td><td>2048</td><td>27.3</td></tr>
<tr><td>sdadas/polish-gpt2-medium</td><td>2048</td><td>20.3</td></tr>
<tr><td>sdadas/polish-gpt2-large</td><td>1536</td><td>18.0</td></tr>
<tr><td>sdadas/polish-gpt2-xl</td><td>1536</td><td>16.6</td></tr>
<tr><td>Azurro/APT3-275M-Base</td><td>2048</td><td>77.0</td></tr>
<tr><td>Azurro/APT3-500M-Base</td><td>2048</td><td>50.5</td></tr>
<tr><td>Azurro/APT3-1B-Base</td><td>2048</td><td>19.1</td></tr>
<tr><td>eryk-mazus/polka-1.1b</td><td>2048</td><td>6.9</td></tr>
<tr><td>szymonrucinski/Curie-7B-v1</td><td>4096</td><td>4.8</td></tr>
<tr><td colspan="3"><strong>Qra models</strong></td></tr>
<tr><td>OPI-PG/Qra-1b</td><td>4096</td><td>6.1</td></tr>
<tr><td>OPI-PG/Qra-7b</td><td>4096</td><td>4.5</td></tr>
<tr><td>OPI-PG/Qra-13b</td><td>4096</td><td>4.2</td></tr>
</table>