---
tags:
  - generated_from_trainer
  - code
  - coding
  - phi-2
  - phi2
model-index:
  - name: phi-2-coder
    results: []
license: apache-2.0
language:
  - code
thumbnail: >-
  https://huggingface.co/mrm8488/llama-2-coder-7b/resolve/main/llama2-coder-logo-removebg-preview.png
datasets:
  - HuggingFaceH4/CodeAlpaca_20K
pipeline_tag: text-generation
---

![llama-2 coder logo](https://huggingface.co/mrm8488/llama-2-coder-7b/resolve/main/llama2-coder-logo-removebg-preview.png)

# Phi-2 Coder 👩‍💻

**Phi-2** fine-tuned on the **CodeAlpaca 20k** instruction dataset using **QLoRA** with the [PEFT](https://github.com/huggingface/peft) library.

## Model description 🧠

### [Phi-2](https://huggingface.co/microsoft/phi-2)

Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source consisting of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased nearly state-of-the-art performance among models with fewer than 13 billion parameters.

## Training and evaluation data 📚

[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): 20K instruction-following examples originally used to fine-tune the Code Alpaca model.
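A minimal sketch of loading the dataset and mapping it into the `Instruct: ... Output:` prompt format used in the usage example below. The `train` split and the `prompt`/`completion` field names are assumptions; check the dataset card for the exact columns.

```python
from datasets import load_dataset

# Load the CodeAlpaca 20K instruction dataset from the Hub (split name assumed)
dataset = load_dataset("HuggingFaceH4/CodeAlpaca_20K", split="train")

def to_prompt(example):
    # Field names ("prompt", "completion") are assumed; adjust to the actual columns
    return {"text": f"Instruct: {example['prompt']}\nOutput: {example['completion']}"}

dataset = dataset.map(to_prompt)
print(dataset[0]["text"])
```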

### LoRA config

```python
from peft import LoraConfig

config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=[
        "Wqkv",
        "fc1",
        "fc2",
        "out_proj"
    ],
    bias="none",
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```
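Since the fine-tuning method is QLoRA, the base model is loaded in 4-bit before these adapters are attached. A rough sketch of that setup, assuming typical NF4 quantization settings (not taken from this repo):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization, as commonly used for QLoRA (settings assumed)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)

# Prepare the quantized model for k-bit training (casts norms, enables input grads)
base_model = prepare_model_for_kbit_training(base_model)

# Attach the LoRA adapters defined in `config` above
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```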

### Training hyperparameters ⚙

```python
per_device_train_batch_size=4,
gradient_accumulation_steps=32,
num_train_epochs=2,
learning_rate=2.5e-5,
optim="paged_adamw_8bit",
seed=66,
load_best_model_at_end=True,
save_strategy="steps",
save_steps=50,
evaluation_strategy="steps",
eval_steps=50,
```
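These values map directly onto `transformers.TrainingArguments`; a sketch of how they could be wired into a `Trainer` (the `output_dir` and the dataset variables are placeholders, not from this repo):

```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="phi-2-coder",          # placeholder output path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=32,
    num_train_epochs=2,
    learning_rate=2.5e-5,
    optim="paged_adamw_8bit",
    seed=66,
    load_best_model_at_end=True,
    save_strategy="steps",
    save_steps=50,
    evaluation_strategy="steps",
    eval_steps=50,
)

trainer = Trainer(
    model=model,                       # the PEFT-wrapped model from the sketch above
    args=training_args,
    train_dataset=train_dataset,       # placeholder: tokenized train split
    eval_dataset=eval_dataset,         # placeholder: tokenized eval split
)
trainer.train()
```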

### Training results 🗒️

| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 50   | 0.624400      | 0.600070        |
| 100  | 0.634100      | 0.592757        |
| 150  | 0.545800      | 0.586652        |
| 200  | 0.572500      | 0.577525        |
| 250  | 0.528000      | 0.590118        |

## HumanEval results 📊

WIP
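Until official numbers are published, pass@k could be estimated with the 🤗 `evaluate` library's `code_eval` metric, roughly as sketched below; the test case and candidate completions are illustrative placeholders, not real HumanEval generations from this model.

```python
import os
from evaluate import load

# code_eval executes model-generated code; you must opt in explicitly
os.environ["HF_ALLOW_CODE_EVAL"] = "1"
code_eval = load("code_eval")

# Illustrative placeholders: one HumanEval-style test with two candidate completions
test_cases = ["assert add(2, 3) == 5"]
candidates = [["def add(a, b):\n    return a + b", "def add(a, b):\n    return a - b"]]

pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2])
print(pass_at_k)  # e.g. {'pass@1': 0.5, 'pass@2': 1.0}
```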

## Example of usage 👩‍💻

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrm8488/phi-2-coder"

tokenizer = AutoTokenizer.from_pretrained(model_id, add_bos_token=True, trust_remote_code=True, use_fast=False)

model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.float16, device_map="auto")

def generate(
        instruction,
        max_new_tokens=128,
        temperature=0.1,
        top_p=0.75,
        top_k=40,
        num_beams=2,
        **kwargs,
):
    prompt = "Instruct: " + instruction + "\nOutput:"
    print(prompt)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")

    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=max_new_tokens,
            do_sample=True,  # required so temperature/top_p/top_k take effect
            temperature=temperature,
            top_p=top_p,
            top_k=top_k,
            num_beams=num_beams,
            eos_token_id=tokenizer.eos_token_id,
            use_cache=True,
            early_stopping=True,
            **kwargs,
        )
    output = tokenizer.decode(generation_output[0])
    # Keep only the generated completion after the "Output:" marker
    return output.split("\nOutput:")[1].lstrip("\n")

instruction = "Design a class for representing a person in Python."
print(generate(instruction))
```