---
license: cc-by-nc-4.0
language:
- ro
---

# Model Card for RoLlama3-8b-Instruct

*Built with Meta Llama 3*


<!-- Provide a quick summary of what the model is/does. -->

RoLlama3 is a family of pretrained and fine-tuned generative text models for Romanian. This is the repository for the **instruct 8B model**. Links to other models can be found at the bottom of this page.


## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->
OpenLLM-Ro represents the first open-source effort to build an LLM specialized for Romanian. OpenLLM-Ro develops and publicly releases a collection of Romanian LLMs, both as foundational models and as instruct and chat variants.


- **Developed by:** OpenLLM-Ro
<!-- - **Funded by [optional]:** [More Information Needed] -->
<!-- - **Shared by [optional]:** [More Information Needed] -->
<!-- - **Model type:** [More Information Needed] -->
- **Language(s):** Romanian
- **License:** cc-by-nc-4.0
- **Finetuned from model:** [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)


### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/OpenLLM-Ro/llama-recipes
- **Paper:** https://arxiv.org/abs/2406.18266

## Intended Use

### Intended Use Cases

RoLlama3 is intended for research use in Romanian. Base models can be adapted for a variety of natural language tasks, while instruction- and chat-tuned models are intended for assistant-like chat.
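
As an illustration of adapting a base model, a parameter-efficient fine-tuning setup might look like the sketch below. It uses the `peft` library; the base checkpoint name `OpenLLM-Ro/RoLlama3-8b-Base` and all hyperparameters are placeholders chosen for the example, not values published by OpenLLM-Ro.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical base checkpoint name, used only for illustration.
base = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoLlama3-8b-Base")

# Attach LoRA adapters so only a small number of parameters are trained.
lora_config = LoraConfig(
    r=16,                                  # illustrative rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # illustrative choice of projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()

# `model` can now be passed to a standard training loop or trainer
# for a downstream Romanian NLP task.
```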

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

Use in any manner that violates the license or any applicable laws or regulations, and use in languages other than Romanian.



## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoLlama3-8b-Instruct")
model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoLlama3-8b-Instruct")

# Romanian instruction: "What board games can I play with my friends?"
instruction = "Ce jocuri de societate pot juca cu prietenii mei?"
chat = [
    # Romanian system prompt asking for helpful, respectful, and honest answers.
    {"role": "system", "content": "Ești un asistent folositor, respectuos și onest. Încearcă să ajuți cât mai mult prin informațiile oferite, excluzând răspunsuri toxice, rasiste, sexiste, periculoase și ilegale."},
    {"role": "user", "content": instruction},
]
# Render the conversation with the model's chat template.
prompt = tokenizer.apply_chat_template(chat, tokenize=False, system_message="")

# Tokenize the rendered prompt and generate a reply.
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
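
The call above prints the full sequence, prompt included. Continuing from that snippet, the sketch below decodes only the assistant's reply and switches to sampling; the sampling values (`temperature`, `top_p`) are illustrative assumptions, not settings recommended by OpenLLM-Ro.

```python
# Sample a reply and decode only the newly generated tokens.
outputs = model.generate(
    input_ids=inputs,
    max_new_tokens=256,
    do_sample=True,     # sample instead of greedy decoding
    temperature=0.7,    # illustrative value
    top_p=0.9,          # illustrative value
)
# Slice off the prompt tokens so only the reply is printed.
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)
```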

## Benchmarks

| Model                  | Average     | ARC         | MMLU        | Winogrande  | HellaSwag   | GSM8k     | TruthfulQA |
|------------------------|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:---------:|:----------:|
| Llama-3-8B-Instruct    | 50.15       | 43.73       | 49.02       | 59.35       | 53.16       | **44.12** | **51.52**  |
| *RoLlama3-8b-Instruct* | ***50.61*** | ***44.66*** | ***52.19*** | ***67.58*** | ***57.65*** | *30.20*   | *51.39*    |

        


## MT-Bench

| Model                  | Average  | 1st turn | 2nd turn | Answers in Ro   |
|------------------------|:--------:|:--------:|:--------:|:---------------:|
| Llama-3-8B-Instruct    | **5.92** | **6.36** | **5.49** | 158 / 160       |
| *RoLlama3-8b-Instruct* | *5.28*   | *6.10*   | *4.45*   | ***160 / 160*** |



## RoCulturaBench

| Model              | Score  | Answers in Ro|
|--------------------|:--------:|:--------:|
| Llama-3-8B-Instruct   | **4.61**    | **100 / 100**  |
| *RoLlama3-8b-Instruct*| *3.83*| ***100 / 100*** |



## RoLlama3 Model Family

| Model              | Link  |
|--------------------|:--------:|
|*RoLlama3-8b-Instruct*| [link](https://huggingface.co/OpenLLM-Ro/RoLlama3-8b-Instruct) |


## Citation 

```
@misc{masala2024vorbecstiromanecsterecipetrain,
      title={"Vorbe\c{s}ti Rom\^ane\c{s}te?" A Recipe to Train Powerful Romanian LLMs with English Instructions}, 
      author={Mihai Masala and Denis C. Ilie-Ablachim and Alexandru Dima and Dragos Corlatescu and Miruna Zavelca and Ovio Olaru and Simina Terian-Dan and Andrei Terian-Dan and Marius Leordeanu and Horia Velicu and Marius Popescu and Mihai Dascalu and Traian Rebedea},
      year={2024},
      eprint={2406.18266},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.18266}, 
}
```
<!-- **APA:**

[More Information Needed]  -->