---
license: other
inference: false
base_model: CohereForAI/aya-23-8B
license_link: LICENSE
quantized_by: jartine
prompt_template: |
  <BOS_TOKEN>
  <|START_OF_TURN_TOKEN|>
  <|USER_TOKEN|>{{prompt}}<|END_OF_TURN_TOKEN|>
  <|START_OF_TURN_TOKEN|>
  <|CHATBOT_TOKEN|>
tags:
  - llamafile
  - arabic
language:
  - en
  - fr
  - de
  - es
  - it
  - pt
  - ja
  - ko
  - zh
  - ar
  - el
  - fa
  - pl
  - id
  - cs
  - he
  - hi
  - nl
  - ro
  - ru
  - tr
  - uk
  - vi
---

# aya-23-8B - llamafile

This repository contains executable weights (which we call
[llamafiles](https://github.com/Mozilla-Ocho/llamafile)) that run on
Linux, MacOS, Windows, FreeBSD, OpenBSD, and NetBSD for AMD64 and ARM64.

- Model creator: [CohereForAI](https://huggingface.co/CohereForAI)
- Original model: [CohereForAI/aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B)

This is a multilingual model, with a focus on Arabic.

## Quickstart

You can run the following commands, which download and execute the
model.

```
wget https://huggingface.co/jartine/aya-23-8B-llamafile/resolve/main/aya-23-8B.Q8_0.llamafile
chmod +x aya-23-8B.Q8_0.llamafile
./aya-23-8B.Q8_0.llamafile --help   # view manual
./aya-23-8B.Q8_0.llamafile          # launch web gui + oai api
./aya-23-8B.Q8_0.llamafile -p ...   # cli interface (scriptable)
```

Alternatively, you may download an official `llamafile` executable from
Mozilla Ocho on GitHub, in which case you can use these llamafiles as
simple weights data files.

```
llamafile -m ./aya-23-8B.Q8_0.llamafile ...
```

For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).

Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas)
of the README.

## About Upload Limits

Files which exceed the Hugging Face 50GB upload limit have a .cat𝑋
extension. You need to use the `cat` command locally to turn them back
into a single file, concatenating the parts in the same order.
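
As a sketch of that reassembly step, here is what it looks like with tiny placeholder parts (`model.llamafile` is a hypothetical name standing in for a real multi-gigabyte download):

```shell
# Placeholder parts standing in for real .cat0/.cat1/... downloads.
printf 'first-part-' > model.llamafile.cat0
printf 'second-part' > model.llamafile.cat1

# Concatenate the parts in numeric order to rebuild the single file,
# then restore the executable bit.
cat model.llamafile.cat0 model.llamafile.cat1 > model.llamafile
chmod +x model.llamafile

cat model.llamafile   # → first-part-second-part
```

The same pattern applies to real split llamafiles: list every `.cat𝑋` part, in order, as arguments to `cat`.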

## Prompting

Prompt template:

```
<BOS_TOKEN>
<|START_OF_TURN_TOKEN|>
<|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|>
<|START_OF_TURN_TOKEN|>
<|CHATBOT_TOKEN|>
```

Command-line instruction template:

```
./aya-23-8B.Q8_0.llamafile -p '<BOS_TOKEN>
<|START_OF_TURN_TOKEN|>
<|USER_TOKEN|>Who is the president?<|END_OF_TURN_TOKEN|>
<|START_OF_TURN_TOKEN|>
<|CHATBOT_TOKEN|>'
```

The maximum context size of this model is 8192 tokens. These llamafiles
use a default context size of 512 tokens. Whenever you need the maximum
context size to be available with llamafile for any given model, you can
pass the `-c 0` flag.

## License

The aya-23-8B license requires:

- You can't use these weights for commercial purposes

- You have to give Cohere credit if you share or fine tune it

- You can't use it for purposes they consider unacceptable, such as
  spam, misinformation, etc. The license says they can change the
  definition of acceptable use at will.

- The CC-BY-NC 4.0 stipulates no downstream restrictions, so you can't
  tack on your own list of unacceptable uses too if you create and
  distribute a fine-tuned version.

This special license only applies to the LLM weights (i.e. the .gguf
file inside .llamafile). The llamafile software itself is permissively
licensed, having only components licensed under terms like Apache 2.0,
MIT, BSD, ISC, zlib, etc.

## About llamafile

llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023.
It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp
binaries that run on the stock installs of six OSes for both ARM64 and
AMD64.

In addition to being executables, llamafiles are also zip archives. Each
llamafile contains a GGUF file, which you can extract using the `unzip`
command. If you want to change or add files to your llamafiles, then the
`zipalign` command (distributed on the llamafile GitHub) should be used
instead of the traditional `zip` command.

---

# Model Card for Aya-23-8B

**Try Aya 23**

You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).

## Model Summary

Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.

This model card corresponds to the 8-billion version of the Aya 23 model. We also released a 35-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-35B).

We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.

Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)

- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-8B
- Model Size: 8 billion parameters

### Usage

Please install a version of transformers that includes the necessary changes for this model (4.41.1 or later, as pinned in the snippet below).

```python
# pip install transformers==4.41.1
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/aya-23-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```

### Example Notebook

[This notebook](https://huggingface.co/CohereForAI/aya-23-8B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).

## Model Details

**Input**: Models input text only.

**Output**: Models generate text only.

**Model Architecture**: Aya-23-8B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.

**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.

**Context length**: 8192

### Evaluation

<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.

### Model Card Contact

For errors or additional questions about details in this model card, contact info@for.ai.

### Terms of Use

We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).

### Try the model today

You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat) here. You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).

### Citation info

```bibtex
@misc{aya23technicalreport,
  title={Aya 23: Open Weight Releases to Further Multilingual Progress},
  author={Viraat Aryabumi and John Dang and Dwarak Talupuru and Saurabh Dash and David Cairuz and Hangyu Lin and Bharat Venkitesh and Madeline Smith and Kelly Marchisio and Sebastian Ruder and Acyr Locatelli and Julia Kreutzer and Nick Frosst and Phil Blunsom and Marzieh Fadaee and Ahmet Üstün and Sara Hooker},
  url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
  year={2024}
}
```