---
license: apache-2.0
language:
- el
- en
tags:
- finetuned
inference: true
pipeline_tag: text-generation
---
# Meltemi: A Large Foundation Language Model for the Greek Language
We introduce Meltemi, the first Greek Large Language Model (LLM) trained by the [Institute for Language and Speech Processing](https://www.athenarc.gr/en/ilsp) at [Athena Research & Innovation Center](https://www.athenarc.gr/).
Meltemi is built on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), extending its capabilities for Greek through continual pretraining on a large corpus of high-quality and locally relevant Greek texts. We present Meltemi-7B-Instruct-v1, an instruct fine-tuned version of [Meltemi-7B-v1](https://huggingface.co/ilsp/Meltemi-7B-v1).
# Model Information
- Vocabulary extension of the Mistral-7B tokenizer with Greek tokens (see the tokenizer comparison sketch after this list)
- Trained with 8k context length
- Fine-tuned with 100k Greek machine-translated instructions extracted from:
* [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) (only subsets with permissive licenses)
* [Evol-Instruct](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)
* [Capybara](https://huggingface.co/datasets/LDJnr/Capybara)
* A hand-crafted Greek dataset with multi-turn examples steering the instruction-tuned model towards safe and harmless responses
- Our SFT procedure is based on the [Hugging Face finetuning recipes](https://github.com/huggingface/alignment-handbook)
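As a quick illustration of the vocabulary extension, the sketch below tokenizes the same Greek sentence with the original Mistral tokenizer and with Meltemi's; the extended vocabulary should yield noticeably fewer tokens for Greek text. The sentence is an arbitrary example, not drawn from the training data:
```python
# Compare how the original Mistral tokenizer and Meltemi's extended tokenizer
# segment Greek text. Meltemi's added Greek tokens should reduce the token count.
from transformers import AutoTokenizer

greek_text = "Η τεχνητή νοημοσύνη αλλάζει τον κόσμο."  # "AI is changing the world."

for model_id in ("mistralai/Mistral-7B-v0.1", "ilsp/Meltemi-7B-v1"):
    tok = AutoTokenizer.from_pretrained(model_id)
    ids = tok(greek_text)["input_ids"]
    # len(tok) counts the full vocabulary, including added Greek tokens
    print(f"{model_id}: vocab={len(tok)}, tokens={len(ids)}")
```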
# Instruction format
The prompt should be surrounded by [INST] and [/INST] tokens:
```
text = "[INST] Πες μου αν έχεις συνείδηση. [/INST]"
"Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της."
"[INST] Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη; [/INST]"
```
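A minimal generation sketch with Transformers follows. It assumes the tokenizer in this repository ships a chat template producing the [INST] ... [/INST] format above; if it does not, the prompt string can be built by hand as in the example:
```python
# Minimal sketch: chatting with Meltemi-7B-Instruct-v1 via Transformers.
# Assumes the repository's tokenizer provides a chat template matching the
# [INST] ... [/INST] format shown above; verify against the tokenizer config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ilsp/Meltemi-7B-Instruct-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# "Tell me if you have consciousness."
messages = [{"role": "user", "content": "Πες μου αν έχεις συνείδηση."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```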
# Evaluation
Our evaluation suite, integrated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness), includes six test sets:
* Four machine-translated versions ([ARC Greek](https://huggingface.co/datasets/ilsp/arc_greek), [Truthful QA Greek](https://huggingface.co/datasets/ilsp/truthful_qa_greek), [HellaSwag Greek](https://huggingface.co/datasets/ilsp/hellaswag_greek), [MMLU Greek](https://huggingface.co/datasets/ilsp/mmlu_greek)) of established English benchmarks for language understanding and reasoning ([ARC Challenge](https://arxiv.org/abs/1803.05457), [Truthful QA](https://arxiv.org/abs/2109.07958), [Hellaswag](https://arxiv.org/abs/1905.07830), [MMLU](https://arxiv.org/abs/2009.03300)).
* An existing benchmark for question answering in Greek ([Belebele](https://arxiv.org/abs/2308.16884)).
* A novel benchmark created by the ILSP team for medical question answering based on the medical exams of [DOATAP](https://www.doatap.gr) ([Medical MCQA](https://huggingface.co/datasets/ilsp/medical_mcqa_greek)).
Our evaluation of Meltemi-7B is performed in a few-shot setting, consistent with the settings in the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Our training improves performance across all Greek test sets by an average of **+14.9** percentage points. The results for the Greek test sets are shown in the following table:
| | Medical MCQA EL (15-shot) | Belebele EL (5-shot) | HellaSwag EL (10-shot) | ARC-Challenge EL (25-shot) | TruthfulQA MC2 EL (0-shot) | MMLU EL (5-shot) | Average |
|----------------|----------------|-------------|--------------|------------------|-------------------|---------|---------|
| Mistral 7B | 29.8% | 45.0% | 36.5% | 27.1% | 45.8% | 35.0% | 36.5% |
| Meltemi 7B | 41.0% | 63.6% | 61.6% | 43.2% | 52.1% | 47.0% | 51.4% |
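The sketch below shows how such a run might look through lm-eval-harness's Python API. The task name `mmlu_el` is a placeholder: the actual identifiers registered for the Greek test sets should be checked in the harness / the ILSP task definitions before running.
```python
# Illustrative only: evaluating one Greek test set with lm-eval-harness.
# "mmlu_el" is a hypothetical task name, not a confirmed identifier.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ilsp/Meltemi-7B-v1",
    tasks=["mmlu_el"],  # hypothetical task name for MMLU Greek
    num_fewshot=5,      # matches the 5-shot setting in the table
)
print(results["results"])
```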
# Ethical Considerations
This model has not been aligned with human preferences, and therefore might generate misleading, harmful, or toxic content.
# Acknowledgements
The ILSP team utilized Amazon’s cloud computing services, which were made available via GRNET under the [OCRE Cloud framework](https://www.ocre-project.eu/), providing Amazon Web Services for the Greek Academic and Research Community.