Orpo-Phi3-3B-128K


This is an ORPO (Odds Ratio Preference Optimization) fine-tune of microsoft/Phi-3-mini-128k-instruct, trained on 10k samples of mlabonne/orpo-dpo-mix-40k.
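
The exact training script is not published here, but a run like this one can be sketched with TRL's ORPOTrainer. The snippet below is a minimal, illustrative reconstruction: the hyperparameters, output path, and subset selection are assumptions rather than the author's settings, and it assumes a TRL version whose ORPOTrainer accepts a tokenizer argument and conversational preference pairs.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# 10k-sample subset of the preference mix, as described above.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train[:10000]")

# Illustrative hyperparameters -- not the author's published settings.
config = ORPOConfig(
    output_dir="orpo-phi3-3b-128k",
    beta=0.1,                        # weight of the odds-ratio penalty term
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=8e-6,
    num_train_epochs=1,
    max_length=2048,
    max_prompt_length=1024,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,           # chosen/rejected preference pairs
    tokenizer=tokenizer,
)
trainer.train()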

💻 Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Muhammad2003/Orpo-Phi3-3B-128K"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in half precision, spread across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
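
Note that the text-generation pipeline echoes the prompt by default; pass return_full_text=False in the call above if you only want the model's completion.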

📈 Training curves

Wandb Report


๐Ÿ† Evaluation

Coming Soon!

Model size: 3.82B params · Tensor type: FP16 · Safetensors
