---
license: cc-by-nc-4.0
language:
- pl
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- ALLaMo
- finetuned
inference: false
---

# APT3-1B-Instruct-v1

The APT3-1B-Instruct-v1 Large Language Model (LLM) is an instruct fine-tuned version of the [APT3-1B-Base](https://huggingface.co/Azurro/APT3-1B-Base) generative text model.

## Introduction

At [Azurro](https://azurro.pl), we consistently place importance on using Open Source technologies, both in our projects and in our everyday work. We have decided to share a base language model trained by us. We are confident that smaller language models have great potential, and that giving everyone interested direct access to them further democratizes this important and dynamically changing field.

## Statements

Training large language models requires a lot of computing power and is usually reserved for the major players on the market. Does that mean that individuals or small companies cannot train language models capable of performing specific tasks? We decided to answer this question and train our own language model from scratch. We made the following statements:

* we use 1 consumer graphics card
* we train the model only on a Polish corpus
* we use manually selected, high-quality texts for training the model.

Why have we made such statements? It is worth noting that training a model requires several times more resources than running it - roughly 3-4 times more. Put simply, if a model can be run on a graphics card with 6 GB of VRAM, then training it requires about 24 GB of VRAM (as a minimum). Many consumer computers are equipped with good-quality graphics cards that can be used to train a model at home. This is why we decided to use a top consumer graphics card - Nvidia's RTX 4090 with 24 GB of VRAM.

All currently available language models have been trained mainly on English corpora, with only a small share of other languages, including Polish. As a result, these models are not the best at handling Polish text. Even the popular GPT models from OpenAI and Bard from Google often have issues with correct Polish forms. We have therefore decided to prepare a model based only on a Polish corpus. An additional advantage of using only a Polish corpus is the size of the model - for smaller models it is better to focus on a single language.

It is important to remember that models are only as good as the data they are trained on. Given the small size of the model, we trained it on carefully selected texts and instructions. With close collaboration and advice from the [Speakleash](https://speakleash.org) team, our team has prepared over 285 GB of Polish language text corpus and 2.5 million instructions that have then been processed and used for training the model. Additionally, a unique feature of our model is that it has been trained on the largest amount of text among all available models for the Polish language.

## Model

APT3-1B-Instruct-v1 has been trained and fine-tuned with an original open source framework called [ALLaMo](https://github.com/chrisociepa/allamo). This framework allows the user to train and fine-tune language models similar to Meta AI's LLaMA models quickly and efficiently.

APT3-1B-Instruct-v1 is an autoregressive language model based on the transformer architecture.
It has been fine-tuned with 2.5 million instructions, over two epochs, on over 1 billion tokens in total. The training dataset (instructions in Polish) was created by combining 1.2 million instructions from [Speakleash](https://speakleash.org) and 1.3 million of our private instructions.

### Model description:

* **Developed by:** [Azurro](https://azurro.pl)
* **Language:** Polish
* **Model type:** causal decoder-only
* **License:** CC BY NC 4.0 (non-commercial use)

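If you want to check the architecture and parameter count yourself, the snippet below is a minimal sketch; it only assumes the standard `transformers` auto classes and that the published repository contains a regular configuration and weights.

```python
from transformers import AutoConfig, AutoModelForCausalLM

model_name = "Azurro/APT3-1B-Instruct-v1"

# Load only the configuration to inspect the decoder-only architecture
# (hidden size, number of layers, attention heads, vocabulary size, ...).
config = AutoConfig.from_pretrained(model_name)
print(config)

# Load the weights and count the parameters
# (the model name suggests roughly 1 billion).
model = AutoModelForCausalLM.from_pretrained(model_name)
num_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {num_params:,}")
```
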
## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence token. The generated completion will be terminated by the end-of-sentence token.

E.g.
```
prompt = "[INST] Jakie mamy pory roku? [/INST]"
completion = "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima."
```

### Quickstart

This model can be easily loaded using the AutoModelForCausalLM functionality.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Azurro/APT3-1B-Instruct-v1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```

In order to reduce the memory usage, you can use smaller precision (`bfloat16`).

```python
import torch

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
```

And then you can use Hugging Face Pipelines to generate text:

```python
import transformers

prompt = "[INST] Jakie mamy pory roku? [/INST]"

pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
sequences = pipeline(max_new_tokens=100, do_sample=True, top_k=50, eos_token_id=tokenizer.eos_token_id, text_inputs=prompt)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

Generated output: `[INST] Jakie mamy pory roku? [/INST] W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.`

## Limitations and Biases

The APT3-1B-Instruct-v1 model is a quick demonstration showing that the base model can be easily fine-tuned to achieve the desired performance. It does not have any moderation mechanisms. It should not be used for human-facing interactions without further guardrails and user consent.

APT3-1B-Instruct-v1 can produce factually incorrect output and should not be relied on to produce factually accurate information. APT3-1B-Base and APT3-1B-Instruct-v1 were trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that these models could generate lewd, biased or otherwise offensive outputs.

## License

Because of an unclear legal situation, we have decided to publish the model under the CC BY NC 4.0 license, which allows for non-commercial use only. The model can be used for scientific purposes and for private use, as long as the license conditions are met.

## Disclaimer

The license of this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.

## Citation

Please cite this model using the following format:

```
@online{AzurroAPT3Base1B,
    author = {Krzysztof Ociepa, Azurro},
    title = {Introducing APT3-1B-Base: Polish Language Model},
    year = {2024},
    url = {www.azurro.pl/apt3-1b-base-en},
    note = {Accessed: 2024-01-04}, % change this date
    urldate = {2024-01-04} % change this date
}
```

## Special thanks

We would like to especially thank the [Speakleash](https://speakleash.org) team for collecting and sharing texts and instructions in Polish, and for the support we could always count on while preparing the training set for our models. Without you, it would not have been possible to train this model. Thank you!

## The Azurro Team

Please find more information on the Azurro [homepage](https://azurro.pl).

## Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, drop an email to [contact@azurro.pl](mailto:contact@azurro.pl).