---
tags:
- generated_from_trainer
- code
- coding
- phi-2
- phi2
model-index:
- name: phi-2-coder
results: []
license: apache-2.0
language:
- code
thumbnail: https://huggingface.co/mrm8488/phi-2-coder/resolve/main/phi-2-coder-logo.png
datasets:
- HuggingFaceH4/CodeAlpaca_20K
pipeline_tag: text-generation
---
<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/mrm8488/phi-2-coder/resolve/main/phi-2-coder-logo.png" alt="phi-2 coder logo">
</div>
# Phi-2 Coder 👩‍💻
**Phi-2** fine-tuned on the **CodeAlpaca 20k instructions dataset** using the **QLoRA** method and the [PEFT](https://github.com/huggingface/peft) library.
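For reference, QLoRA fine-tuning loads the frozen base model in 4-bit precision and trains only the LoRA adapters on top of it. Below is a minimal sketch of how the base Phi-2 model could be loaded for this kind of setup; the exact quantization settings are an assumption and are not taken from this card.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization, the usual QLoRA configuration (assumed, not confirmed by the card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
```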
## Model description 🧠
[Phi-2](https://huggingface.co/microsoft/phi-2) is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased nearly state-of-the-art performance among models with fewer than 13 billion parameters.
## Training and evaluation data 📚
[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following examples, originally used for fine-tuning the Code Alpaca model.
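A minimal sketch of loading the dataset and shaping each example into the `Instruct: ... \nOutput: ...` prompt format used at inference time below. The column names (`prompt`/`completion`) and split names are assumptions; check the dataset viewer for the exact schema.

```py
from datasets import load_dataset

dataset = load_dataset("HuggingFaceH4/CodeAlpaca_20K")

def to_prompt(example):
    # Column names are assumed here; adapt to the actual dataset schema.
    return {"text": f"Instruct: {example['prompt']}\nOutput: {example['completion']}"}

train_ds = dataset["train"].map(to_prompt)
eval_ds = dataset["test"].map(to_prompt)
```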
### LoRA config
```py
from peft import LoraConfig

config = LoraConfig(
    r=32,                     # LoRA rank
    lora_alpha=64,            # LoRA scaling factor
    target_modules=[          # Phi-2 attention and MLP projections
        "Wqkv",
        "fc1",
        "fc2",
        "out_proj"
    ],
    bias="none",
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```
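A hedged sketch of how this config could be applied to the 4-bit base model with PEFT. This is the standard `prepare_model_for_kbit_training` + `get_peft_model` flow, not code taken verbatim from the training script.

```py
from peft import get_peft_model, prepare_model_for_kbit_training

# Prepare the quantized base model for k-bit training (casts norms, enables input grads)
model = prepare_model_for_kbit_training(base_model)

# Wrap the base model with the LoRA adapters defined above
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```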
### Training hyperparameters ⚙️
```py
per_device_train_batch_size=4,
gradient_accumulation_steps=32,
num_train_epochs=2,
learning_rate=2.5e-5,
optim="paged_adamw_8bit",
seed=66,
load_best_model_at_end=True,
save_strategy="steps",
save_steps=50,
evaluation_strategy="steps",
eval_steps=50,
```
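These values correspond to `transformers.TrainingArguments` fields. One possible wiring with the `Trainer` is sketched below; the output directory, dataset variables, and data collator are assumptions, as they are not stated in the card.

```py
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling

training_args = TrainingArguments(
    output_dir="phi-2-coder",            # assumed output path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=32,
    num_train_epochs=2,
    learning_rate=2.5e-5,
    optim="paged_adamw_8bit",
    seed=66,
    load_best_model_at_end=True,
    save_strategy="steps",
    save_steps=50,
    evaluation_strategy="steps",
    eval_steps=50,
)

trainer = Trainer(
    model=model,                         # PEFT-wrapped model from the previous section
    args=training_args,
    train_dataset=train_ds,              # formatted and tokenized CodeAlpaca splits (assumed)
    eval_dataset=eval_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```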
### Training results 🏋️
| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 50 | 0.763100 | 0.717398 |
| 100 | 0.673500 | 0.694871 |
| 150 | 0.696000 | 0.689336 |
| 200 | 0.786100 | 0.687515 |
| 250 | 0.734600 | 0.686658 |
### HumanEval results 📊
WIP
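While the results are pending, one common way to produce them is OpenAI's `human-eval` harness: generate a completion for each raw HumanEval prompt and score the resulting samples file with `evaluate_functional_correctness`. The sketch below assumes the model and tokenizer from the usage example, and the decoding settings are illustrative only.

```py
from human_eval.data import read_problems, write_jsonl

problems = read_problems()

def complete(prompt: str) -> str:
    # Greedy completion of the raw HumanEval prompt (function signature + docstring)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256, eos_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

samples = [
    {"task_id": task_id, "completion": complete(problems[task_id]["prompt"])}
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)
# Then score with: evaluate_functional_correctness samples.jsonl
```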
### Example of usage 👩‍💻
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrm8488/phi-2-coder"

tokenizer = AutoTokenizer.from_pretrained(model_id, add_bos_token=True, trust_remote_code=True, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.float16, device_map="auto")

def generate(
    instruction,
    max_new_tokens=128,
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=2,
    **kwargs,
):
    prompt = "Instruct: " + instruction + "\nOutput:"
    print(prompt)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=max_new_tokens,
            do_sample=True,                    # enable sampling so the settings below take effect
            temperature=temperature,
            top_p=top_p,
            top_k=top_k,
            num_beams=num_beams,
            eos_token_id=tokenizer.eos_token_id,
            use_cache=True,
            early_stopping=True,
            **kwargs,
        )
    output = tokenizer.decode(generation_output[0], skip_special_tokens=True)
    # Return only the generated answer, without the echoed prompt
    return output.split("\nOutput:")[1].lstrip("\n")

instruction = "Design a class for representing a person in Python."
print(generate(instruction))
```