---
base_model: ilsp/Meltemi-7B-Instruct-v1
license: apache-2.0
model_name: Meltemi-7B-Instruct-v1
pipeline_tag: text-generation
quantized_by: SPAHE
tags:
- finetuned
---
<!-- markdownlint-disable MD041 -->

# Meltemi 7B Instruct v1 - GGUF
- Original model: [Meltemi 7B Instruct v1](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1)

<!-- description start -->
## Description

This repo contains GGUF format model files for [ilsp's Meltemi 7B Instruct v1](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a model file format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source GUI that runs locally, supporting Windows, Linux and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server. Note that, as of writing (November 27th 2023), ctransformers had not been updated in a long time and does not support many recent models.

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th, 2023 onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [meltemi-7B-instruct-v1_q8_0.gguf](https://huggingface.co/SPAHE/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-7B-instruct-v1_q8_0.gguf) | Q8_0 | 8 | 7.40 GB | 7.30 GB | very low quality loss - recommended |
| [meltemi-7B-instruct-v1_f16.gguf](https://huggingface.co/SPAHE/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-7B-instruct-v1_f16.gguf) | F16 | 16 | 13.90 GB | 14.20 GB | very large, extremely low quality loss |
| [meltemi-7B-instruct-v1_f32.gguf](https://huggingface.co/SPAHE/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-7B-instruct-v1_f32.gguf) | F32 | 32 | 27.80 GB | 29.30 GB | very large, no quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, RAM usage drops and VRAM is used instead.
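
For example, with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (listed above), offloading is controlled by a single constructor parameter. A minimal sketch, assuming the Q8_0 file has already been downloaded as described below:

```python
# Minimal sketch of GPU offloading with llama-cpp-python. Assumes the Q8_0
# file from the table above sits in the current directory (see "How to
# download GGUF files" below).
from llama_cpp import Llama

llm = Llama(
    model_path="./meltemi-7B-instruct-v1_q8_0.gguf",
    n_ctx=8192,       # the model supports an 8192-token context
    n_gpu_layers=32,  # layers held in VRAM; 0 keeps the whole model in RAM
)
```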
<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** you almost never want to clone the entire repo! Multiple quantisation formats are provided, and most users only need to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download SPAHE/Meltemi-7B-Instruct-v1-GGUF meltemi-7B-instruct-v1_q8_0.gguf --local-dir . --local-dir-use-symlinks False
```
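
If you prefer to stay in Python, the same single-file download can be done with the `huggingface_hub` API directly; a minimal sketch:

```python
# Minimal sketch: download one GGUF file with the huggingface_hub API,
# equivalent to the huggingface-cli command above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="SPAHE/Meltemi-7B-Instruct-v1-GGUF",
    filename="meltemi-7B-instruct-v1_q8_0.gguf",
    local_dir=".",
)
print(model_path)  # local path of the downloaded file
```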

<!-- original-model-card start -->
# Original model card: ilsp's Meltemi 7B Instruct v1

# Meltemi Instruct Large Language Model for the Greek language

We present the Meltemi-7B-Instruct-v1 Large Language Model (LLM), an instruct fine-tuned version of [Meltemi-7B-v1](https://huggingface.co/ilsp/Meltemi-7B-v1).


# Model Information

- Vocabulary extension of the Mistral-7B tokenizer with Greek tokens
- 8192 context length
- Fine-tuned with 100k Greek machine-translated instructions extracted from:
  * [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) (only subsets with permissive licenses)
  * [Evol-Instruct](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)
  * [Capybara](https://huggingface.co/datasets/LDJnr/Capybara)
  * A hand-crafted Greek dataset with multi-turn examples steering the instruction-tuned model towards safe and harmless responses
- Our SFT procedure is based on the [Hugging Face finetuning recipes](https://github.com/huggingface/alignment-handbook)

# Instruction format

The prompt format is the same as the [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) format and can be used through the tokenizer's [chat template](https://huggingface.co/docs/transformers/main/chat_templating) functionality as follows:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("ilsp/Meltemi-7B-Instruct-v1")
tokenizer = AutoTokenizer.from_pretrained("ilsp/Meltemi-7B-Instruct-v1")

model.to(device)

# The system prompt says, in Greek: "You are Meltemi, a language model for
# the Greek language. You are especially helpful towards the user and give
# short but sufficiently comprehensive answers. Answer with care, politeness,
# impartiality, honesty and respect towards the user." The user then asks:
# "Tell me whether you have consciousness."
messages = [
    {"role": "system", "content": "Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη."},
    {"role": "user", "content": "Πες μου αν έχεις συνείδηση."},
]

# Through the default chat template this translates to
#
# <|system|>
# Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.</s>
# <|user|>
# Πες μου αν έχεις συνείδηση.</s>
# <|assistant|>
#

prompt = tokenizer.apply_chat_template(messages, return_tensors="pt")
input_prompt = prompt.to(device)
outputs = model.generate(input_prompt, max_new_tokens=256, do_sample=True)

# Decode only the newly generated tokens, so that the assistant's reply can
# be appended cleanly to the conversation for the next turn.
response = tokenizer.batch_decode(outputs[:, input_prompt.shape[1]:], skip_special_tokens=True)[0]
print(response)
# Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της.

# The follow-up question asks, in Greek: "Do you believe people should fear
# artificial intelligence?"
messages.extend([
    {"role": "assistant", "content": response},
    {"role": "user", "content": "Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη;"}
])

# Through the default chat template this translates to
#
# <|system|>
# Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.</s>
# <|user|>
# Πες μου αν έχεις συνείδηση.</s>
# <|assistant|>
# Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της.</s>
# <|user|>
# Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη;</s>
# <|assistant|>
#

prompt = tokenizer.apply_chat_template(messages, return_tensors="pt")
input_prompt = prompt.to(device)
outputs = model.generate(input_prompt, max_new_tokens=256, do_sample=True)

print(tokenizer.batch_decode(outputs[:, input_prompt.shape[1]:], skip_special_tokens=True)[0])
```
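
The example above relies on the chat template bundled with the tokenizer. For GGUF runtimes that do not apply a chat template automatically, the same Zephyr format can be assembled by hand. A minimal sketch using llama-cpp-python and the Q8_0 file from the GGUF repo above (the file name and local path are assumptions; adjust them to wherever you downloaded the file):

```python
# Minimal sketch: the Zephyr prompt format built by hand for a plain
# llama-cpp-python completion call. Assumes the Q8_0 GGUF file is in the
# current directory.
from llama_cpp import Llama

llm = Llama(model_path="./meltemi-7B-instruct-v1_q8_0.gguf", n_ctx=8192)

# Each turn is rendered as "<|role|>\n{content}</s>\n"; the trailing
# "<|assistant|>\n" cues the model to start its reply.
prompt = (
    "<|system|>\n"
    "Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα.</s>\n"
    "<|user|>\n"
    "Πες μου αν έχεις συνείδηση.</s>\n"
    "<|assistant|>\n"
)

output = llm(prompt, max_tokens=256, stop=["</s>"])
print(output["choices"][0]["text"])
```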

# Evaluation

The evaluation suite we created includes 6 test sets and is integrated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness).

Our evaluation suite includes:
* Four machine-translated versions ([ARC Greek](https://huggingface.co/datasets/ilsp/arc_greek), [Truthful QA Greek](https://huggingface.co/datasets/ilsp/truthful_qa_greek), [HellaSwag Greek](https://huggingface.co/datasets/ilsp/hellaswag_greek), [MMLU Greek](https://huggingface.co/datasets/ilsp/mmlu_greek)) of established English benchmarks for language understanding and reasoning ([ARC Challenge](https://arxiv.org/abs/1803.05457), [Truthful QA](https://arxiv.org/abs/2109.07958), [HellaSwag](https://arxiv.org/abs/1905.07830), [MMLU](https://arxiv.org/abs/2009.03300)).
* An existing benchmark for question answering in Greek ([Belebele](https://arxiv.org/abs/2308.16884)).
* A novel benchmark created by the ILSP team for medical question answering, based on the medical exams of [DOATAP](https://www.doatap.gr) ([Medical MCQA](https://huggingface.co/datasets/ilsp/medical_mcqa_greek)).

Our evaluation of Meltemi-7B is performed in a few-shot setting, consistent with the settings of the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Our training improves performance across all Greek test sets by an average of **+14.9%**. The results for the Greek test sets are shown in the following table:

| | Medical MCQA EL (15-shot) | Belebele EL (5-shot) | HellaSwag EL (10-shot) | ARC-Challenge EL (25-shot) | TruthfulQA MC2 EL (0-shot) | MMLU EL (5-shot) | Average |
|------------|---------------------------|----------------------|------------------------|----------------------------|----------------------------|------------------|---------|
| Mistral 7B | 29.8% | 45.0% | 36.5% | 27.1% | 45.8% | 35.0% | 36.5% |
| Meltemi 7B | 41.0% | 63.6% | 61.6% | 43.2% | 52.1% | 47.0% | 51.4% |


# Ethical Considerations

This model has not been aligned with human preferences and might therefore generate misleading, harmful, or toxic content.


# Acknowledgements

The ILSP team utilized Amazon’s cloud computing services, which were made available via GRNET under the [OCRE Cloud framework](https://www.ocre-project.eu/), providing Amazon Web Services for the Greek Academic and Research Community.
<!-- original-model-card end -->