---
language:
- en
- pl
pipeline_tag: text-generation
inference: false
tags:
- voicelab
- pytorch
- llama-2
- trurl
- trurl-2
---
<img src="https://public.3.basecamp.com/p/rs5XqmAuF1iEuW6U7nMHcZeY/upload/download/VL-NLP-short.png" alt="logo voicelab nlp" style="width:300px;"/>


# Trurl 2 -- Polish Llama 2

The new OPEN TRURL is a fine-tuned Llama 2, trained on over 1.7b tokens (970k conversational **Polish** and **English** samples) with a large context of 4096 tokens.
TRURL was trained on a large amount of Polish data.
TRURL 2 is a collection of fine-tuned generative text models with 7 billion and 13 billion parameters. 
This is the repository for the 13B fine-tuned model, optimized for dialogue use cases.
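
For quick orientation, below is a minimal loading sketch using Hugging Face `transformers`. The repository id `Voicelab/trurl-2-13b` is an assumption and should be replaced with this repo's actual path if it differs.

```python
# Minimal loading sketch. The repo id "Voicelab/trurl-2-13b" is assumed;
# replace it with the actual path of this repository if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Voicelab/trurl-2-13b"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # fp16 keeps the 13B weights around 26 GB
    device_map="auto",          # requires accelerate; shards across available devices
)
```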


# Overview

**TRURL developers** Voicelab.AI

**Variations** Trurl 2 comes in 7B and 13B versions.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Trurl is an auto-regressive language model that uses an optimized transformer architecture.

||Training Data|Params|Content Length|Num. Samples|Num. Tokens|start LR|
|---|---|---|---|---|---|---|
|Trurl 2|*A new mix of private and publicly available online data*|7B|4k|970k|1.7b|2.0 x 10<sup>-5</sup>|
|Trurl 2|*A new mix of private and publicly available online data*|13B|4k|970k|1.7b|2.0 x 10<sup>-5</sup>|

## Training data

The training data includes Q&A pairs from various sources:

- Alpaca comparison data with GPT
- Falcon comparison data
- Dolly 15k
- Oasst1
- Phu saferlfhf
- ShareGPT version 2023.05.08v0, filtered and cleaned
- Voicelab private datasets for JSON data extraction, modification, and analysis
- CURLICAT dataset containing journal entries
- Dataset from Polish wiki with Q&A pairs grouped into conversations
- MMLU data in textual format
- Voicelab private dataset with sales conversations, arguments and objections, paraphrases, contact reason detection, and corrected dialogues

## Intended Use

Trurl 2 is intended for commercial and research use in Polish and English. Tuned models are intended for assistant-like chat, but can also be adapted to a variety of natural language generation tasks.

To get the expected features and performance for the chat versions, a specific Llama 2 formatting needs to be followed, including the `INST` and `<<SYS>>` tags, the `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See the Meta reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
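
As a hedged illustration of that layout, the sketch below builds a single-turn prompt and generates a reply, reusing `tokenizer` and `model` from the loading sketch above. The system prompt is a placeholder, not an official Trurl prompt.

```python
# Sketch of the standard Llama 2 chat format for a single turn.
# The BOS/EOS special tokens are added by the tokenizer itself.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

system_prompt = "You are a helpful assistant. Answer in Polish."  # placeholder
question = "Kim był Stanisław Lem?"  # "Who was Stanisław Lem?"

# strip() on the user input avoids double spaces around the tags.
prompt = f"{B_INST} {B_SYS}{system_prompt}{E_SYS}{question.strip()} {E_INST}"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```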

# Evaluation Results
|Model | Size| hellaswag | arc_challenge | MMLU|
|---|---|---|---|---|
| Llama-2-chat | 7B |  78.55% |  52.9% | 48.32% |
| Llama-2-chat | 13B |  81.94% |  59.04% | 54.64% |
| Trurl 2.0 (with MMLU) | 13B | 80.09% | 59.30% | 78.35% |
| Trurl 2.0 (no MMLU) | 13B | TO-DO | TO-DO | TO-DO|
| Trurl 2.0 | 7B | TO-DO | TO-DO | TO-DO|
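
The benchmark names above correspond to task identifiers in EleutherAI's lm-evaluation-harness. The card does not state which harness or version produced these numbers, so the following is only a sketch of how a comparable run could look, assuming the harness's current Python API and the hypothetical repo id used above.

```python
# Hedged evaluation sketch using EleutherAI's lm-evaluation-harness
# (pip install lm-eval). Harness version and settings are assumptions;
# the card does not state how the reported scores were obtained.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Voicelab/trurl-2-13b,dtype=float16",
    tasks=["hellaswag", "arc_challenge", "mmlu"],
    batch_size=8,
)
print(results["results"])
```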

<img src="https://voicelab.ai/wp-content/uploads/trurl-hero.webp" alt="trurl graphic" style="width:100px;"/>

# Examples


## Polish language
> **HUMAN**: What’s the point of training you on Polish? 
>
> 
> **TRURL**: The point of training me on Polish, or any other language, is to enable me to understand and respond to queries and questions in that language. This allows people who speak that language to use me as a helpful tool for finding information, answering questions, and completing tasks. By training me on a wide variety of texts in Polish, I can learn about the culture, history, and language of Poland, and be able to provide more accurate and relevant responses to users. Additionally, training me on multiple languages allows me to be useful to a wider range of people, regardless of their native language. 

## Stanisław Lem
> **HUMAN**: who is Trurl from Stanisław Lem book?
>
> 
> **TRURL**: Trurl is a character in a series of short stories by Polish science fiction writer Stanisław Lem. The stories feature Trurl and another character, Klapaucius, who are both inventors and explorers of the universe. They use their advanced technology and knowledge to solve various problems and encounter strange and fantastical creatures along the way. The stories often explore themes of science, technology, and the nature of intelligence and consciousness.


The description of the original Trurl above was generated by TRURL 2.0. In Lem's stories, Trurl was a humanoid robot and a genius constructor who created AI-like machines.

> **HUMAN**: why are you named Trurl?
>
> 
> **TRURL**: I was named after the character Trurl from the short stories by Stanisław Lem. My creators, the VoiceLab AI team, were inspired by the character’s intelligence and creativity, and they wanted to create an AI assistant that embodied those qualities. They also wanted to give me a unique and memorable name, and Trurl seemed like the perfect fit. 


# Ethical Considerations and Limitations
Trurl 2, like Llama 2, is a new technology that carries risks with use. Testing conducted to date has been in Polish and English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Trurl 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Trurl 2, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see Meta's Responsible Use Guide, available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide).

# Authors

The model was trained by the NLP Research Team at Voicelab.ai.

You can contact us [here](https://voicelab.ai/contact/).