---
license: cc-by-nc-4.0
language:
- ro
base_model:
- OpenLLM-Ro/RoLlama2-7b-Base
new_version: OpenLLM-Ro/RoLlama2-7b-Instruct
model-index:
- name: OpenLLM-Ro/RoLlama2-7b-Chat
  results:
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_arc_challenge
      type: RoARC
    metrics:
    - name: Average
      type: accuracy
      value: 41.92
    - name: 0-shot
      type: accuracy
      value: 39.59
    - name: 1-shot
      type: accuracy
      value: 41.05
    - name: 3-shot
      type: accuracy
      value: 42.42
    - name: 5-shot
      type: accuracy
      value: 42.16
    - name: 10-shot
      type: accuracy
      value: 43.36
    - name: 25-shot
      type: accuracy
      value: 42.93
---

# Model Card for RoLlama2-7b-Chat

RoLlama2 is a family of pretrained and fine-tuned generative text models for Romanian. This is the repository for the **chat 7B model**. Links to other models can be found at the bottom of this page.

## Model Details

### Model Description

OpenLLM-Ro represents the first open-source effort to build an LLM specialized for Romanian. OpenLLM-Ro has developed and publicly released a collection of Romanian LLMs, covering both foundational models and instruct- and chat-tuned variants.

- **Developed by:** OpenLLM-Ro
- **Language(s):** Romanian
- **License:** cc-by-nc-4.0
- **Finetuned from model:** [RoLlama2-7b-Base](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Base)

### Model Sources

- **Repository:** https://github.com/OpenLLM-Ro/llama-recipes
- **Paper:** https://arxiv.org/abs/2405.07703

## Intended Use

### Intended Use Cases

RoLlama2 is intended for research use in Romanian. Base models can be adapted for a variety of natural language tasks, while instruction- and chat-tuned models are intended for assistant-like chat.

### Out-of-Scope Use

Use in any manner that violates the license or any applicable laws or regulations, and use in languages other than Romanian.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Chat")
model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Chat")

# "What is the highest mountain peak in Romania?"
instruction = "Care este cel mai înalt vârf muntos din România?"

chat = [
    # System prompt: "You are a helpful, respectful and honest assistant. Try to help
    # as much as possible with the information you provide, excluding toxic, racist,
    # sexist, dangerous and illegal answers."
    {"role": "system", "content": "Ești un asistent folositor, respectuos și onest. Încearcă să ajuți cât mai mult prin informațiile oferite, excluzând răspunsuri toxice, rasiste, sexiste, periculoase și ilegale."},
    {"role": "user", "content": instruction},
]

# Render the conversation with the model's chat template, then tokenize and generate.
prompt = tokenizer.apply_chat_template(chat, tokenize=False)
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
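The snippet above prints the full decoded sequence, prompt included. The minimal sketch below (reusing the `model`, `tokenizer`, and `inputs` objects from above) shows how to sample a response and decode only the newly generated tokens; the sampling values are illustrative assumptions, not settings recommended by OpenLLM-Ro:

```python
# Sample a response instead of using greedy decoding.
outputs = model.generate(
    input_ids=inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,  # assumed value, for demonstration only
    top_p=0.9,        # assumed value, for demonstration only
)

# Slice off the prompt tokens so only the model's reply is decoded.
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)
```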
## Benchmarks

| Model                | Average   | ARC       | MMLU      | Winogrande | HellaSwag   | GSM8k     | TruthfulQA  |
|----------------------|:---------:|:---------:|:---------:|:----------:|:-----------:|:---------:|:-----------:|
| Llama-2-7b-chat      | 36.84     | 37.03     | 33.81     | 55.87      | 45.36       | 4.90      | 44.09       |
| RoLlama2-7b-Instruct | **45.71** | **43.66** | **39.70** | **70.34**  | 57.36       | **18.78** | 44.44       |
| *RoLlama2-7b-Chat*   | *43.82*   | *41.92*   | *37.29*   | *66.68*    | ***57.91*** | *13.47*   | ***45.65*** |

## Romanian MT-Bench

| Model                | Average  | 1st turn | 2nd turn | Answers in Ro |
|----------------------|:--------:|:--------:|:--------:|:-------------:|
| Llama-2-7b-chat      | 1.08     | 1.44     | 0.73     | 45 / 160      |
| RoLlama2-7b-Instruct | **3.86** | **4.68** | **3.04** | **160 / 160** |
| *RoLlama2-7b-Chat*   | *TBC*    | *TBC*    | *TBC*    | *TBC*         |

## RoCulturaBench

| Model                | Score    | Answers in Ro |
|----------------------|:--------:|:-------------:|
| Llama-2-7b-chat      | 1.21     | 33 / 100      |
| RoLlama2-7b-Instruct | **3.77** | **100 / 100** |
| *RoLlama2-7b-Chat*   | *TBC*    | *TBC*         |

## RoLlama2 Model Family

| Model                | Link |
|----------------------|:----:|
| RoLlama2-7b-Base     | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Base) |
| RoLlama2-7b-Instruct | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Instruct) |
| *RoLlama2-7b-Chat*   | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Chat) |

## Citation

```
@misc{masala2024openllmrotechnicalreport,
      title={OpenLLM-Ro -- Technical Report on Open-source Romanian LLMs},
      author={Mihai Masala and Denis C. Ilie-Ablachim and Dragos Corlatescu and Miruna Zavelca and Marius Leordeanu and Horia Velicu and Marius Popescu and Mihai Dascalu and Traian Rebedea},
      year={2024},
      eprint={2405.07703},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2405.07703},
}
```