---
language:
- code
tags:
- generated_from_trainer
- code
- coding
- gemma
datasets:
- HuggingFaceH4/CodeAlpaca_20K
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
thumbnail: https://huggingface.co/mrm8488/gemma-2b-coder/resolve/main/logo.png
pipeline_tag: text-generation
model-index:
- name: gemma-2b-coder
  results: []
---

<div style="text-align:center;width:250px;height:250px;">
    <img src="https://huggingface.co/mrm8488/gemma-2b-coder/resolve/main/logo.png" alt="gemma coder logo">
</div>


# Gemma Coder 👩‍💻
**Gemma 2B** fine-tuned on the **CodeAlpaca 20K instruction dataset** with **QLoRA** via the [PEFT](https://github.com/huggingface/peft) library.
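
The training script itself is not reproduced here, but a QLoRA fine-tune with PEFT typically starts from a setup like the sketch below; the 4-bit quantization settings and LoRA hyperparameters shown are illustrative assumptions, not the exact values used to train this model:

```py
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative QLoRA setup: load the base model in 4-bit NF4 precision.
# These hyperparameters are assumptions for the sketch, not the values
# actually used for gemma-2b-coder.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    quantization_config=bnb_config,
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)

# Attach low-rank adapters to the attention projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```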

## Model description 🧠

Base model: [Gemma-2b](https://huggingface.co/google/gemma-2b)

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.


## Training and evaluation data 📚

[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following examples, originally used to fine-tune the Code Alpaca model.
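
A quick way to inspect the data with the `datasets` library:

```py
from datasets import load_dataset

# Download the instruction-following pairs from the Hugging Face Hub
dataset = load_dataset("HuggingFaceH4/CodeAlpaca_20K")
print(dataset)              # available splits and their sizes
print(dataset["train"][0])  # one instruction/response example
```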


### Training hyperparameters ⚙

Training took 1 h 40 min on a free Colab T4 GPU (16 GB VRAM) with the following parameters:

```py
num_train_epochs=2,
per_device_train_batch_size=2,
per_device_eval_batch_size=1,
gradient_accumulation_steps=32,
learning_rate=2.5e-5,
optim="paged_adamw_8bit",
logging_steps=5,
seed=66,
load_best_model_at_end=True,
save_strategy="steps",
save_steps=50,
evaluation_strategy="steps",
eval_steps=50,
save_total_limit=2,
remove_unused_columns=True,
fp16=True,
bf16=False
```
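
These settings map directly onto `transformers.TrainingArguments`; a minimal sketch of how they would be wired into a `Trainer` follows, where `output_dir`, `model`, `train_ds`, and `eval_ds` are placeholders rather than values from the original run:

```py
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma-2b-coder",     # placeholder path, not from the original run
    num_train_epochs=2,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=32,  # effective batch size: 2 * 32 = 64
    learning_rate=2.5e-5,
    optim="paged_adamw_8bit",
    evaluation_strategy="steps",
    eval_steps=50,
    fp16=True,
)

# model is assumed to be the PEFT-wrapped base model from the QLoRA sketch
# above; train_ds and eval_ds are assumed to be tokenized splits of
# CodeAlpaca_20K.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)
trainer.train()
```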

### Training results 🗒️

| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 50   | 1.467800      | 1.450770        |
| 100  | 1.060000      | 1.064840        |
| 150  | 0.900200      | 0.922290        |
| 200  | 0.848400      | 0.879911        |
| 250  | 0.838100      | 0.867354        |



### Eval results 📊

WIP


### Example of usage 👩‍💻

I recommend installing the following version of `torch`:

```sh
pip install "torch>=2.1.1" -U
```

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "MAISAAI/gemma-2b-coder"

tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")

def generate(
        instruction,
        max_new_tokens=256,
        temperature=0.1,
        top_p=0.75,
        top_k=40,
        num_beams=2,
        **kwargs,
):
    # Build the chat-style prompt the model was fine-tuned on
    system = "<bos><|system|>\nYou are a helpful coding assistant.<eos>\n"
    prompt = f"{system}<|user|>\n{instruction}<eos>\n<|assistant|>\n"
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )
    with torch.no_grad():  # no gradients needed at inference time
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            generation_config=generation_config,
            return_dict_in_generate=True,
            max_new_tokens=max_new_tokens,
            early_stopping=True
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s, skip_special_tokens=True)
    # The decoded text still contains the prompt; return only the assistant turn
    return output.split("<|assistant|>")[1]

instruction = """
Edit the following XML code to add a navigation bar to the top of a web page
<html>
<head>
  <title>Maisa</title>
</head>
"""
print(generate(instruction))
```
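
For quick experiments, the same prompt format can also be driven through the high-level `pipeline` API; the sampling parameters below are illustrative, and `device=0` assumes a GPU is available:

```py
from transformers import pipeline

generator = pipeline("text-generation", model="MAISAAI/gemma-2b-coder", device=0)

# Same chat template as in the generate() helper above
prompt = (
    "<bos><|system|>\nYou are a helpful coding assistant.<eos>\n"
    "<|user|>\nWrite a Python function that reverses a string.<eos>\n"
    "<|assistant|>\n"
)
result = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.2)
print(result[0]["generated_text"])
```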

### Citation

WIP

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MAISAAI__gemma-2b-coder).

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |45.65|
|AI2 Reasoning Challenge (25-Shot)|48.98|
|HellaSwag (10-Shot)              |71.43|
|MMLU (5-Shot)                    |37.02|
|TruthfulQA (0-shot)              |33.54|
|Winogrande (5-shot)              |66.85|
|GSM8k (5-shot)                   |16.07|