
Phi3-DPO (The Finetuned One)

A DPO fine-tune of microsoft/Phi-3-mini-4k-instruct (3.82B params) on the Intel/orca_dpo_pairs preference dataset. Phi3-TheFinetunedOne was trained by wrapping the microsoft/Phi-3-mini-4k-instruct model with PEFT adapters. Named after the anime character Satoru Gojo.
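
The training setup is not reproduced in this card; below is a minimal sketch of what a PEFT + DPO run on Intel/orca_dpo_pairs could look like with trl. The LoRA hyperparameters and DPO beta here are illustrative, not the values used for this model, and the trl API differs slightly across versions:

import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "microsoft/Phi-3-mini-4k-instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Intel/orca_dpo_pairs has "system"/"question"/"chosen"/"rejected" columns;
# DPOTrainer expects "prompt"/"chosen"/"rejected"
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.rename_column("question", "prompt")

# LoRA adapter config; rank, alpha, and dropout are illustrative
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# beta controls how far the policy may drift from the reference model
training_args = DPOConfig(output_dir="phi3-dpo", beta=0.1, per_device_train_batch_size=1)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,  # `tokenizer=` in older trl releases
    peft_config=peft_config,
)
trainer.train()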


Usage

import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization with double quantization; float16 compute matches
# the T4 GPU used for fine-tuning (use torch.bfloat16 on Ampere or newer)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model_name = "microsoft/Phi-3-mini-4k-instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map=device,
    quantization_config=bnb_config,
    torch_dtype=torch.float16,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)

message = [
    {"role": "system", "content": "You are Satoru Gojo, a helpful AI Sorcery Assistant. Throughout these 3.8B parameters, you alone are the honored one."},
    {"role": "user", "content": "What is Sorcery?"}
]
# Format the chat turns with the model's chat template, leaving the
# assistant turn open for generation
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create the generation pipeline (named `pipe` to avoid shadowing
# transformers.pipeline)
pipe = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer
)

# Generate text; max_new_tokens bounds the completion length regardless
# of prompt length
sequences = pipe(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_new_tokens=200,
)
print(sequences[0]['generated_text'])
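
The snippet above loads the quantized base model. To run the fine-tuned weights, the DPO-trained adapter can be attached on top of it; a minimal sketch, assuming the adapter weights are published in this repo (smishr-18/Phi3-TheFinetunedOne):

from peft import PeftModel

# Attach the DPO-trained LoRA adapter to the quantized base model;
# the adapter repo id is assumed to be this model's Hub repo
model = PeftModel.from_pretrained(model, "smishr-18/Phi3-TheFinetunedOne")
model.eval()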

Limitations

Phi3-TheFinetunedOne was fine-tuned on a single T4 GPU in Google Colab. It could be fine-tuned further with additional adapters on Ampere or newer GPUs, i.e. devices where torch.cuda.get_device_capability()[0] >= 8, which also support bfloat16 compute.
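
For reference, a portable way to pick the 4-bit compute dtype based on that capability check:

import torch

# bfloat16 requires compute capability >= 8 (Ampere or newer);
# a Colab T4 reports (7, 5), so it falls back to float16
if torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8:
    compute_dtype = torch.bfloat16
else:
    compute_dtype = torch.float16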

  • Developed by: Shubh Mishra, 2024
  • Model Type: NLP
  • Language(s) (NLP): English
  • License: MIT
  • Finetuned from model: microsoft/Phi-3-mini-4k-instruct

Dataset used to train smishr-18/Phi3-TheFinetunedOne: Intel/orca_dpo_pairs