
Quantization made by Richard Erkhov.

Github | Discord | Request more models

Qra-1b - GGUF

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| Qra-1b.Q2_K.gguf | Q2_K | 0.4GB |
| Qra-1b.IQ3_XS.gguf | IQ3_XS | 0.44GB |
| Qra-1b.IQ3_S.gguf | IQ3_S | 0.47GB |
| Qra-1b.Q3_K_S.gguf | Q3_K_S | 0.47GB |
| Qra-1b.IQ3_M.gguf | IQ3_M | 0.48GB |
| Qra-1b.Q3_K.gguf | Q3_K | 0.51GB |
| Qra-1b.Q3_K_M.gguf | Q3_K_M | 0.51GB |
| Qra-1b.Q3_K_L.gguf | Q3_K_L | 0.55GB |
| Qra-1b.IQ4_XS.gguf | IQ4_XS | 0.57GB |
| Qra-1b.Q4_0.gguf | Q4_0 | 0.59GB |
| Qra-1b.IQ4_NL.gguf | IQ4_NL | 0.6GB |
| Qra-1b.Q4_K_S.gguf | Q4_K_S | 0.6GB |
| Qra-1b.Q4_K.gguf | Q4_K | 0.62GB |
| Qra-1b.Q4_K_M.gguf | Q4_K_M | 0.62GB |
| Qra-1b.Q4_1.gguf | Q4_1 | 0.65GB |
| Qra-1b.Q5_0.gguf | Q5_0 | 0.71GB |
| Qra-1b.Q5_K_S.gguf | Q5_K_S | 0.71GB |
| Qra-1b.Q5_K.gguf | Q5_K | 0.73GB |
| Qra-1b.Q5_K_M.gguf | Q5_K_M | 0.73GB |
| Qra-1b.Q5_1.gguf | Q5_1 | 0.77GB |
| Qra-1b.Q6_K.gguf | Q6_K | 0.84GB |
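
These GGUF files can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using the llama-cpp-python bindings; the chosen quant file, local path, and prompt are illustrative assumptions, not part of this repository's documentation:

```python
# Minimal sketch: running a Qra-1b GGUF quant with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the file below has
# been downloaded from this repository (the path is illustrative).
from llama_cpp import Llama

llm = Llama(
    model_path="Qra-1b.Q4_K_M.gguf",  # any quant from the table above
    n_ctx=4096,                       # Qra-1b was trained with a 4096-token context
)

# Qra is a base (non-instruct) model, so we prompt it for plain continuation.
output = llm("Warszawa jest stolicą", max_tokens=32, temperature=0.7)
print(output["choices"][0]["text"])
```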

Original model description:

License: apache-2.0

Qra is a series of LLMs adapted to the Polish language, resulting from a collaboration between the National Information Processing Institute (OPI) and Gdańsk University of Technology (PG). The models were trained on the infrastructure of the PG TASK Computing Center using 21 Nvidia A100 cards. The published versions of the Qra models were initialized with the weights of English Llama 2 checkpoints and then further trained on a carefully cleaned, filtered, and deduplicated corpus of Polish texts, totaling about 90 billion tokens. The original corpus consisted primarily of web data, including CommonCrawl dumps and the MADLAD-400 corpus.

⚠️ Important: Qra models are foundation language models trained with a causal language modeling objective on a large corpus of texts. They are therefore not intended for conversational or instruction-following purposes, and should be further fine-tuned before being used for such tasks. ⚠️

The preprocessing pipeline included the following steps:

  • Text normalization, removal of URLs.
  • Removal of documents shorter than 500 characters.
  • Cleaning sentences in documents using a set of heuristic rules. For example, sentences consisting mostly of non-alphabetical characters, as well as sentences in languages other than Polish and English, were removed.
  • Filtering documents using a quality classifier trained on a set of several thousand documents manually labeled as being of high or low quality. The input to the classifier is a set of several statistics ("quality signals"), such as the percentage of Polish words, the average word and sentence length, the number of word and character duplications, and the proportions of different character classes in the text.
  • Filtering documents based on the perplexity value calculated by a lightweight KenLM language model.
  • Assigning the document to one of 18 topical domains using a trained classifier.
  • Fuzzy deduplication using the MinHash algorithm within each topical domain (see the sketch after this list).
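
To make the deduplication step concrete, here is a minimal sketch of MinHash-based fuzzy deduplication using the datasketch library. The library choice, shingle size, and similarity threshold are assumptions for illustration; the card does not specify the exact implementation:

```python
# Hypothetical sketch of MinHash fuzzy deduplication (the card does not
# name an implementation; datasketch and the 0.8 threshold are assumptions).
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature from word 5-shingles of a document."""
    m = MinHash(num_perm=num_perm)
    words = text.split()
    for i in range(max(len(words) - 4, 1)):
        m.update(" ".join(words[i:i + 5]).encode("utf-8"))
    return m

def dedup(docs: list[str], threshold: float = 0.8) -> list[str]:
    """Keep only documents whose estimated Jaccard similarity to every
    previously kept document is below `threshold`."""
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept = []
    for idx, doc in enumerate(docs):
        sig = minhash(doc)
        if not lsh.query(sig):        # no near-duplicate seen so far
            lsh.insert(str(idx), sig)
            kept.append(doc)
    return kept
```

In the Qra pipeline this step was applied separately within each of the 18 topical domains, which keeps each LSH index small while still catching near-duplicates where they are most likely to occur.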

The final distribution of documents by topic is shown in a chart in the original model card (not reproduced here).

Model details

The models were trained for one epoch on sequences of 4096 tokens. During training, we used a number of modern training optimizations.

Below is a summary of the Qra-1B model:

| Attribute | Value |
| --------- | ----- |
| Adapted from | TinyLlama-1.1B |
| License | Apache 2.0 |
| Batch size | 1344 |
| Context length | 4096 |
| Learning rate | 2e-5 |
| Learning rate decay | cosine |
| Warmup steps | 0 |
| Training time | 2 days |
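
As a rough illustration of how these hyperparameters could be expressed with the Hugging Face Trainer, here is a hypothetical configuration sketch. The actual training code is not published in this card, so every value below that is not in the table (in particular the per-device batch size and accumulation split) is an assumption:

```python
# Hypothetical mapping of the table above onto HF TrainingArguments.
# Only the aggregate batch size of 1344 is given in the card; the
# per-device / accumulation split (8 * 8 * 21 GPUs = 1344) is assumed.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="qra-1b-continued-pretraining",
    num_train_epochs=1,               # one epoch over the Polish corpus
    per_device_train_batch_size=8,    # assumption
    gradient_accumulation_steps=8,    # assumption: 8 * 8 * 21 GPUs = 1344
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_steps=0,
    bf16=True,                        # assumption: mixed-precision training
)
```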

Evaluation

In this section, we compare the perplexity of Qra models on Polish texts with that of other Polish and English LLMs.

Note that perplexity values are not directly comparable across different tokenizations. We can therefore draw conclusions only from comparisons between models using the same tokenizer, such as Qra and the original Llama / TinyLlama.

PolEval-2018

In 2018, the PolEval competition included a language modeling task, for which training and test sets totaling over 20 million Polish sentences were made available. We used the first 10k sentences from the test set to evaluate modern neural language models. To calculate the perplexity, we used a script from the HuggingFace Evaluate library.
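
The exact script is not reproduced in this card, but the Evaluate library's standard perplexity metric can be used along these lines; the model id and input sentences below are illustrative:

```python
# Sketch of sentence-level perplexity with the HF Evaluate library,
# using the standard `perplexity` metric interface.
import evaluate

perplexity = evaluate.load("perplexity", module_type="metric")

sentences = [  # illustrative; the evaluation used 10k PolEval-2018 test sentences
    "Warszawa jest stolicą Polski.",
    "Dziś jest piękna pogoda.",
]
results = perplexity.compute(
    model_id="OPI-PG/Qra-1b",  # any causal LM on the Hub
    predictions=sentences,
)
print(results["mean_perplexity"])
```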

| Model | Perplexity |
| ----- | ---------- |
| **English models** | |
| meta-llama/Llama-2-7b-hf | 24.3 |
| meta-llama/Llama-2-13b-hf | 21.4 |
| mistralai/Mistral-7B-v0.1 | 21.4 |
| TinyLlama/TinyLlama-1.1B | 40.4 |
| **Polish models** | |
| sdadas/polish-gpt2-small | 134.4 |
| sdadas/polish-gpt2-medium | 100.8 |
| sdadas/polish-gpt2-large | 93.2 |
| sdadas/polish-gpt2-xl | 94.1 |
| Azurro/APT3-275M-Base | 129.8 |
| Azurro/APT3-500M-Base | 153.1 |
| Azurro/APT3-1B-Base | 106.8 |
| eryk-mazus/polka-1.1b | 18.1 |
| szymonrucinski/Curie-7B-v1 | 13.5 |
| **Qra models** | |
| OPI-PG/Qra-1b | 14.7 |
| OPI-PG/Qra-7b | 11.3 |
| OPI-PG/Qra-13b | 10.5 |

Long documents (2024)

Currently, LLMs support contexts of thousands of tokens, and their practical applications usually involve processing long documents. Evaluating perplexity on a sentence-based dataset such as PolEval-2018 may therefore not be meaningful. Additionally, the PolEval corpus has been publicly available on the internet for the past few years, which raises the possibility that the training sets of some models have been contaminated by this data. For this reason, we have prepared a new collection consisting of long documents published exclusively in 2024, which allows us to more reliably test the perplexities of the models on new knowledge that was not available to them at the time of training. The corpus consists of 5,000 documents ranging from several hundred to about 20,000 tokens. Half of the set consists of press texts from Polish news portals from February 2024; the other half consists of scientific articles published since January 2024. Most of the documents exceed the context size of the evaluated models. To calculate perplexity for these documents, we divided them into chunks of size equal to the model's context length with a stride of 512 tokens, following this example (a sketch of the procedure is shown below).
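
The strided sliding-window evaluation can be sketched as follows; this mirrors the standard fixed-length perplexity recipe from the Hugging Face documentation, and the model id is illustrative:

```python
# Sketch of perplexity over long documents with a strided sliding window.
# Follows the standard fixed-length perplexity recipe (model id illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OPI-PG/Qra-1b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

max_length = model.config.max_position_embeddings  # 4096 for Qra-1b
stride = 512

def doc_perplexity(text: str) -> float:
    encodings = tokenizer(text, return_tensors="pt")
    seq_len = encodings.input_ids.size(1)
    nlls, prev_end = [], 0
    for begin in range(0, seq_len, stride):
        end = min(begin + max_length, seq_len)
        trg_len = end - prev_end            # tokens newly scored in this window
        input_ids = encodings.input_ids[:, begin:end]
        target_ids = input_ids.clone()
        target_ids[:, :-trg_len] = -100     # mask the overlapping context
        with torch.no_grad():
            nll = model(input_ids, labels=target_ids).loss
        nlls.append(nll * trg_len)          # loss is averaged, so re-weight
        prev_end = end
        if end == seq_len:
            break
    return torch.exp(torch.stack(nlls).sum() / prev_end).item()
```

Re-scoring each window while masking the overlapping prefix means every token is predicted with up to `max_length - stride` tokens of preceding context, which gives a much closer approximation to true long-context perplexity than scoring disjoint chunks.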

| Model | Context | Perplexity |
| ----- | ------- | ---------- |
| **English models** | | |
| meta-llama/Llama-2-7b-hf | 4096 | 5.9 |
| meta-llama/Llama-2-13b-hf | 4096 | 5.3 |
| mistralai/Mistral-7B-v0.1 | 4096 | 4.9 |
| TinyLlama/TinyLlama-1.1B | 2048 | 9.6 |
| **Polish models** | | |
| sdadas/polish-gpt2-small | 2048 | 27.3 |
| sdadas/polish-gpt2-medium | 2048 | 20.3 |
| sdadas/polish-gpt2-large | 1536 | 18.0 |
| sdadas/polish-gpt2-xl | 1536 | 16.6 |
| Azurro/APT3-275M-Base | 2048 | 77.0 |
| Azurro/APT3-500M-Base | 2048 | 50.5 |
| Azurro/APT3-1B-Base | 2048 | 19.1 |
| eryk-mazus/polka-1.1b | 2048 | 6.9 |
| szymonrucinski/Curie-7B-v1 | 4096 | 4.8 |
| **Qra models** | | |
| OPI-PG/Qra-1b | 4096 | 6.1 |
| OPI-PG/Qra-7b | 4096 | 4.5 |
| OPI-PG/Qra-13b | 4096 | 4.2 |