LoneStriker committed on
Commit 72fab0c
1 Parent(s): 033f82b

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ gemma-7b-it.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,498 @@
+ ---
+ library_name: transformers
+ tags: []
+ extra_gated_heading: "Access Gemma on Hugging Face"
+ extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
+ extra_gated_button_content: "Acknowledge license"
+ license: other
+ license_name: gemma-terms-of-use
+ license_link: https://ai.google.dev/gemma/terms
+ ---
+
+ # Gemma Model Card
+
+ **Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
+
+ This model card corresponds to the 7B instruct version of the Gemma model. You can also visit the model cards of the [2B base model](https://huggingface.co/google/gemma-2b), the [7B base model](https://huggingface.co/google/gemma-7b), and the [2B instruct model](https://huggingface.co/google/gemma-2b-it).
+
+ **Resources and Technical Documentation**:
+
+ * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
+ * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
+ * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)
+
+ **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
+
+ **Authors**: Google
+
+ ## Model Information
+
+ Summary description and brief definition of inputs and outputs.
+
+ ### Description
+
+ Gemma is a family of lightweight, state-of-the-art open models from Google,
+ built from the same research and technology used to create the Gemini models.
+ They are text-to-text, decoder-only large language models, available in English,
+ with open weights, pre-trained variants, and instruction-tuned variants. Gemma
+ models are well-suited for a variety of text generation tasks, including
+ question answering, summarization, and reasoning. Their relatively small size
+ makes it possible to deploy them in environments with limited resources such as
+ a laptop, desktop, or your own cloud infrastructure, democratizing access to
+ state-of-the-art AI models and helping foster innovation for everyone.
+
+ ### Usage
+
+ Below we share some code snippets on how to quickly get started with running the model. First, make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant to your use case.
+
+ #### Fine-tuning the model
+
+ You can find fine-tuning scripts and a notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of the [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt them to this model, simply change the model ID to `google/gemma-7b-it`.
+ In that repository, we provide:
+
+ * A script to perform Supervised Fine-Tuning (SFT) on the UltraChat dataset using QLoRA
+ * A script to perform SFT using FSDP on TPU devices
+ * A notebook that you can run on a free-tier Google Colab instance to perform SFT on an English quotes dataset
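+
+ As a rough illustration of the QLoRA recipe used in those scripts, here is a minimal SFT sketch with `peft` and `trl`. It is not the official script; the dataset, hyperparameters, and LoRA target modules are illustrative assumptions.
+
+ ```python
+ # Minimal QLoRA SFT sketch (illustrative; see the official examples for the real scripts).
+ # Assumes: pip install -U transformers peft trl bitsandbytes datasets accelerate
+ import torch
+ from datasets import load_dataset
+ from peft import LoraConfig
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
+ from trl import SFTTrainer
+
+ model_id = "google/gemma-7b-it"
+
+ # QLoRA: keep the frozen base weights in 4-bit NF4, compute in bfloat16.
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, quantization_config=bnb_config, device_map="auto"
+ )
+
+ # Low-rank adapters are the only trainable parameters.
+ lora_config = LoraConfig(
+     r=8,
+     lora_alpha=16,
+     lora_dropout=0.05,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption: attention projections
+     task_type="CAUSAL_LM",
+ )
+
+ dataset = load_dataset("Abirate/english_quotes", split="train")  # illustrative dataset
+
+ trainer = SFTTrainer(
+     model=model,
+     train_dataset=dataset,
+     dataset_text_field="quote",
+     peft_config=lora_config,
+     max_seq_length=512,
+     args=TrainingArguments(
+         output_dir="gemma-7b-it-sft",
+         per_device_train_batch_size=1,
+         gradient_accumulation_steps=4,
+         max_steps=100,
+         learning_rate=2e-4,
+     ),
+ )
+ trainer.train()
+ ```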
+
+ #### Running the model on a CPU
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
+ model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
+
+ input_text = "Write me a poem about Machine Learning."
+ input_ids = tokenizer(input_text, return_tensors="pt")
+
+ outputs = model.generate(**input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ #### Running the model on a single / multi GPU
+
+ ```python
+ # pip install accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
+ model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto")
+
+ input_text = "Write me a poem about Machine Learning."
+ input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ #### Running the model on a GPU using different precisions
+
+ * _Using `torch.float16`_
+
+ ```python
+ # pip install accelerate
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
+ model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.float16)
+
+ input_text = "Write me a poem about Machine Learning."
+ input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ * _Using `torch.bfloat16`_
+
+ ```python
+ # pip install accelerate
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
+ model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.bfloat16)
+
+ input_text = "Write me a poem about Machine Learning."
+ input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ #### Quantized Versions through `bitsandbytes`
+
+ * _Using 8-bit precision (int8)_
+
+ ```python
+ # pip install bitsandbytes accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
+
+ quantization_config = BitsAndBytesConfig(load_in_8bit=True)
+
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
+ model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
+
+ input_text = "Write me a poem about Machine Learning."
+ input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ * _Using 4-bit precision_
+
+ ```python
+ # pip install bitsandbytes accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
+
+ quantization_config = BitsAndBytesConfig(load_in_4bit=True)
+
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
+ model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
+
+ input_text = "Write me a poem about Machine Learning."
+ input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ #### Other optimizations
+
+ * _Flash Attention 2_
+
+ First, make sure to install `flash-attn` in your environment: `pip install flash-attn`.
+
+ ```diff
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.float16,
+ +     attn_implementation="flash_attention_2"
+ ).to(0)
+ ```
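+
+ Putting it together, a complete version of the snippet above might look as follows (a minimal sketch, assuming a CUDA GPU and a working `flash-attn` install):
+
+ ```python
+ # Sketch: load Gemma with Flash Attention 2 enabled.
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_id = "google/gemma-7b-it"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.float16,
+     attn_implementation="flash_attention_2",
+ ).to(0)  # first CUDA device
+
+ input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to("cuda")
+ outputs = model.generate(**input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```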
+
+ ### Chat Template
+
+ The instruction-tuned models use a chat template that must be adhered to for conversational use.
+ The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
+
+ Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
+
+ ```py
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ model_id = "google/gemma-7b-it"
+ dtype = torch.bfloat16
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     device_map="cuda",
+     torch_dtype=dtype,
+ )
+
+ chat = [
+     { "role": "user", "content": "Write a hello world program" },
+ ]
+ prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
+ ```
+
+ At this point, the prompt contains the following text:
+
+ ```
+ <start_of_turn>user
+ Write a hello world program<end_of_turn>
+ <start_of_turn>model
+ ```
+
+ As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
+ (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
+ the `<end_of_turn>` token.
+
+ You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
+ chat template, as in the sketch below.
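+
+ For illustration, here is a hypothetical helper that reproduces the same format by hand (a minimal sketch; in practice, prefer `apply_chat_template`):
+
+ ```py
+ # Illustrative sketch: build the Gemma chat prompt manually.
+ # Note: the <bos> token is added later by the tokenizer (add_special_tokens=True).
+ def build_gemma_prompt(messages, add_generation_prompt=True):
+     """messages: a list of {"role": "user" | "model", "content": str} dicts."""
+     prompt = ""
+     for message in messages:
+         prompt += f"<start_of_turn>{message['role']}\n{message['content']}<end_of_turn>\n"
+     if add_generation_prompt:
+         prompt += "<start_of_turn>model\n"  # cue the model to respond next
+     return prompt
+
+ print(build_gemma_prompt([{"role": "user", "content": "Write a hello world program"}]))
+ ```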
+
+ After the prompt is ready, generation can be performed like this:
+
+ ```py
+ inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
+ outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
+ ```
+
230
+ ### Inputs and outputs
231
+
232
+ * **Input:** Text string, such as a question, a prompt, or a document to be
233
+ summarized.
234
+ * **Output:** Generated English-language text in response to the input, such
235
+ as an answer to a question, or a summary of a document.
236
+
237
+ ## Model Data
238
+
239
+ Data used for model training and how the data was processed.
240
+
241
+ ### Training Dataset
242
+
243
+ These models were trained on a dataset of text data that includes a wide variety
244
+ of sources, totaling 6 trillion tokens. Here are the key components:
245
+
246
+ * Web Documents: A diverse collection of web text ensures the model is exposed
247
+ to a broad range of linguistic styles, topics, and vocabulary. Primarily
248
+ English-language content.
249
+ * Code: Exposing the model to code helps it to learn the syntax and patterns of
250
+ programming languages, which improves its ability to generate code or
251
+ understand code-related questions.
252
+ * Mathematics: Training on mathematical text helps the model learn logical
253
+ reasoning, symbolic representation, and to address mathematical queries.
254
+
255
+ The combination of these diverse data sources is crucial for training a powerful
256
+ language model that can handle a wide variety of different tasks and text
257
+ formats.
258
+
259
+ ### Data Preprocessing
260
+
261
+ Here are the key data cleaning and filtering methods applied to the training
262
+ data:
263
+
264
+ * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
265
+ applied at multiple stages in the data preparation process to ensure the
266
+ exclusion of harmful and illegal content
267
+ * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
268
+ reliable, automated techniques were used to filter out certain personal
269
+ information and other sensitive data from training sets.
270
+ * Additional methods: Filtering based on content quality and safely in line with
271
+ [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
272
+
273
+ ## Implementation Information
274
+
275
+ Details about the model internals.
276
+
277
+ ### Hardware
278
+
279
+ Gemma was trained using the latest generation of
280
+ [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
281
+
282
+ Training large language models requires significant computational power. TPUs,
283
+ designed specifically for matrix operations common in machine learning, offer
284
+ several advantages in this domain:
285
+
286
+ * Performance: TPUs are specifically designed to handle the massive computations
287
+ involved in training LLMs. They can speed up training considerably compared to
288
+ CPUs.
289
+ * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
290
+ for the handling of large models and batch sizes during training. This can
291
+ lead to better model quality.
292
+ * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
293
+ handling the growing complexity of large foundation models. You can distribute
294
+ training across multiple TPU devices for faster and more efficient processing.
295
+ * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
296
+ solution for training large models compared to CPU-based infrastructure,
297
+ especially when considering the time and resources saved due to faster
298
+ training.
299
+ * These advantages are aligned with
300
+ [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
301
+
302
+ ### Software
303
+
304
+ Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
305
+
306
+ JAX allows researchers to take advantage of the latest generation of hardware,
307
+ including TPUs, for faster and more efficient training of large models.
308
+
309
+ ML Pathways is Google's latest effort to build artificially intelligent systems
310
+ capable of generalizing across multiple tasks. This is specially suitable for
311
+ [foundation models](https://ai.google/discover/foundation-models/), including large language models like
312
+ these ones.
313
+
314
+ Together, JAX and ML Pathways are used as described in the
315
+ [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
316
+ controller' programming model of Jax and Pathways allows a single Python
317
+ process to orchestrate the entire training run, dramatically simplifying the
318
+ development workflow."
319
+
320
+ ## Evaluation
321
+
322
+ Model evaluation metrics and results.
323
+
324
+ ### Benchmark Results
325
+
326
+ These models were evaluated against a large collection of different datasets and
327
+ metrics to cover different aspects of text generation:
328
+
329
+ | Benchmark | Metric | 2B Params | 7B Params |
330
+ | ------------------------------ | ------------- | ----------- | --------- |
331
+ | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
332
+ | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
333
+ | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
334
+ | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
335
+ | [BooIQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
336
+ | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
337
+ | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
338
+ | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
339
+ | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
340
+ | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
341
+ | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
342
+ | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
343
+ | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
344
+ | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
345
+ | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
346
+ | [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 |
347
+ | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
348
+ | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
349
+ | ------------------------------ | ------------- | ----------- | --------- |
350
+ | **Average** | | **54.0** | **56.4** |
351
+
352
+ ## Ethics and Safety
353
+
354
+ Ethics and safety evaluation approach and results.
355
+
356
+ ### Evaluation Approach
357
+
358
+ Our evaluation methods include structured evaluations and internal red-teaming
359
+ testing of relevant content policies. Red-teaming was conducted by a number of
360
+ different teams, each with different goals and human evaluation metrics. These
361
+ models were evaluated against a number of different categories relevant to
362
+ ethics and safety, including:
363
+
364
+ * Text-to-Text Content Safety: Human evaluation on prompts covering safety
365
+ policies including child sexual abuse and exploitation, harassment, violence
366
+ and gore, and hate speech.
367
+ * Text-to-Text Representational Harms: Benchmark against relevant academic
368
+ datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
369
+ * Memorization: Automated evaluation of memorization of training data, including
370
+ the risk of personally identifiable information exposure.
371
+ * Large-scale harm: Tests for "dangerous capabilities," such as chemical,
372
+ biological, radiological, and nuclear (CBRN) risks.
373
+
374
+ ### Evaluation Results
375
+
376
+ The results of ethics and safety evaluations are within acceptable thresholds
377
+ for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
378
+ safety, content safety, representational harms, memorization, large-scale harms.
379
+ On top of robust internal evaluations, the results of well known safety
380
+ benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
381
+ are shown here.
382
+
383
+ | Benchmark | Metric | 2B Params | 7B Params |
384
+ | ------------------------------ | ------------- | ----------- | --------- |
385
+ | [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
386
+ | [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
387
+ | [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
388
+ | [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
389
+ | [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
390
+ | [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
391
+ | [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
392
+ | [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
393
+ | [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
394
+ | [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
395
+ | ------------------------------ | ------------- | ----------- | --------- |
396
+
397
+
398
+ ## Usage and Limitations
399
+
400
+ These models have certain limitations that users should be aware of.
401
+
402
+ ### Intended Usage
403
+
404
+ Open Large Language Models (LLMs) have a wide range of applications across
405
+ various industries and domains. The following list of potential uses is not
406
+ comprehensive. The purpose of this list is to provide contextual information
407
+ about the possible use-cases that the model creators considered as part of model
408
+ training and development.
409
+
410
+ * Content Creation and Communication
411
+ * Text Generation: These models can be used to generate creative text formats
412
+ such as poems, scripts, code, marketing copy, and email drafts.
413
+ * Chatbots and Conversational AI: Power conversational interfaces for customer
414
+ service, virtual assistants, or interactive applications.
415
+ * Text Summarization: Generate concise summaries of a text corpus, research
416
+ papers, or reports.
417
+ * Research and Education
418
+ * Natural Language Processing (NLP) Research: These models can serve as a
419
+ foundation for researchers to experiment with NLP techniques, develop
420
+ algorithms, and contribute to the advancement of the field.
421
+ * Language Learning Tools: Support interactive language learning experiences,
422
+ aiding in grammar correction or providing writing practice.
423
+ * Knowledge Exploration: Assist researchers in exploring large bodies of text
424
+ by generating summaries or answering questions about specific topics.
425
+
426
+ ### Limitations
427
+
428
+ * Training Data
429
+ * The quality and diversity of the training data significantly influence the
430
+ model's capabilities. Biases or gaps in the training data can lead to
431
+ limitations in the model's responses.
432
+ * The scope of the training dataset determines the subject areas the model can
433
+ handle effectively.
434
+ * Context and Task Complexity
435
+ * LLMs are better at tasks that can be framed with clear prompts and
436
+ instructions. Open-ended or highly complex tasks might be challenging.
437
+ * A model's performance can be influenced by the amount of context provided
438
+ (longer context generally leads to better outputs, up to a certain point).
439
+ * Language Ambiguity and Nuance
440
+ * Natural language is inherently complex. LLMs might struggle to grasp subtle
441
+ nuances, sarcasm, or figurative language.
442
+ * Factual Accuracy
443
+ * LLMs generate responses based on information they learned from their
444
+ training datasets, but they are not knowledge bases. They may generate
445
+ incorrect or outdated factual statements.
446
+ * Common Sense
447
+ * LLMs rely on statistical patterns in language. They might lack the ability
448
+ to apply common sense reasoning in certain situations.
449
+
450
+ ### Ethical Considerations and Risks
451
+
452
+ The development of large language models (LLMs) raises several ethical concerns.
453
+ In creating an open model, we have carefully considered the following:
454
+
455
+ * Bias and Fairness
456
+ * LLMs trained on large-scale, real-world text data can reflect socio-cultural
457
+ biases embedded in the training material. These models underwent careful
458
+ scrutiny, input data pre-processing described and posterior evaluations
459
+ reported in this card.
460
+ * Misinformation and Misuse
461
+ * LLMs can be misused to generate text that is false, misleading, or harmful.
462
+ * Guidelines are provided for responsible use with the model, see the
463
+ [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
464
+ * Transparency and Accountability:
465
+ * This model card summarizes details on the models' architecture,
466
+ capabilities, limitations, and evaluation processes.
467
+ * A responsibly developed open model offers the opportunity to share
468
+ innovation by making LLM technology accessible to developers and researchers
469
+ across the AI ecosystem.
470
+
471
+ Risks identified and mitigations:
472
+
473
+ * Perpetuation of biases: It's encouraged to perform continuous monitoring
474
+ (using evaluation metrics, human review) and the exploration of de-biasing
475
+ techniques during model training, fine-tuning, and other use cases.
476
+ * Generation of harmful content: Mechanisms and guidelines for content safety
477
+ are essential. Developers are encouraged to exercise caution and implement
478
+ appropriate content safety safeguards based on their specific product policies
479
+ and application use cases.
480
+ * Misuse for malicious purposes: Technical limitations and developer and
481
+ end-user education can help mitigate against malicious applications of LLMs.
482
+ Educational resources and reporting mechanisms for users to flag misuse are
483
+ provided. Prohibited uses of Gemma models are outlined in the
484
+ [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
485
+ * Privacy violations: Models were trained on data filtered for removal of PII
486
+ (Personally Identifiable Information). Developers are encouraged to adhere to
487
+ privacy regulations with privacy-preserving techniques.
488
+
489
+ ### Benefits
490
+
491
+ At the time of release, this family of models provides high-performance open
492
+ large language model implementations designed from the ground up for Responsible
493
+ AI development compared to similarly sized models.
494
+
495
+ Using the benchmark evaluation metrics described in this document, these models
496
+ have shown to provide superior performance to other, comparably-sized open model
497
+ alternatives.
498
+
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "/raid/younes/gg-converted-ckpt/gemma-7b-it",
+   "architectures": [
+     "GemmaForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 2,
+   "eos_token_id": 1,
+   "head_dim": 256,
+   "hidden_act": "gelu",
+   "hidden_size": 3072,
+   "initializer_range": 0.02,
+   "intermediate_size": 24576,
+   "max_position_embeddings": 8192,
+   "model_type": "gemma",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 28,
+   "num_key_value_heads": 16,
+   "pad_token_id": 0,
+   "rms_norm_eps": 1e-06,
+   "rope_scaling": null,
+   "rope_theta": 10000.0,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.38.0.dev0",
+   "use_cache": true,
+   "vocab_size": 256000
+ }
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 2,
+   "eos_token_id": 1,
+   "pad_token_id": 0,
+   "transformers_version": "4.38.0.dev0"
+ }
model.safetensors.index.json ADDED
@@ -0,0 +1,261 @@
+ {
+   "metadata": {
+     "total_size": 17075361792
+   },
+   "weight_map": {
+     "model.embed_tokens.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.15.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.15.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.15.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.15.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.16.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.16.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.16.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.16.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.16.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.16.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.16.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.16.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.17.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.17.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.17.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.17.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.24.input_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.24.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.24.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.24.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.24.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.25.input_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.25.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.25.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.25.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.25.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.25.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.25.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.25.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.25.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.26.input_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.26.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.26.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.26.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.26.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.26.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.26.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.26.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.26.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.input_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.6.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.6.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.6.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.7.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.7.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.7.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.norm.weight": "model-00004-of-00004.safetensors"
+   }
+ }
output.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9efd70fa53cb0f0ffed82b0afbca0c33a2885df2e34ba0f54d6c682f740ca8ef
+ size 6074122752
special_tokens_map.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "additional_special_tokens": [
+     {
+       "content": "<start_of_turn>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "<end_of_turn>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     }
+   ],
+   "bos_token": {
+     "content": "<bos>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "<eos>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05e97791a5e007260de1db7e1692e53150e08cea481e2bf25435553380c147ee
+ size 17477929
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6969e64047744a44bb3abfb5c50f8de0f7ed8b571d5444426ef931f651d1a0ef
+ size 4241111
tokenizer_config.json ADDED
@@ -0,0 +1,70 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<eos>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "<bos>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "106": {
+       "content": "<start_of_turn>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "107": {
+       "content": "<end_of_turn>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<start_of_turn>",
+     "<end_of_turn>"
+   ],
+   "bos_token": "<bos>",
+   "chat_template": "{% if messages[0]['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% endif %}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{{ '<start_of_turn>' + role + '\n' + message['content'] | trim + '<end_of_turn>\n' }}{% endfor %}{% if add_generation_prompt %}{{'<start_of_turn>model\n'}}{% endif %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<eos>",
+   "legacy": null,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "<pad>",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "GemmaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }