---
language:
- en
license: mit
library_name: transformers
tags:
- axolotl
- finetune
- dpo
- microsoft
- phi
- pytorch
- phi-3
- nlp
- code
- chatml
base_model: microsoft/Phi-3-mini-4k-instruct
model_name: Phi-3-mini-4k-instruct-v0.3
pipeline_tag: text-generation
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
---
<img src="./phi-3-instruct.webp" alt="Phi-3 Logo" width="500" style="margin-left: auto; margin-right: auto; display: block;"/>
# MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3
This model is a DPO fine-tune of the `microsoft/Phi-3-mini-4k-instruct` model.
# ⚡ Quantized GGUF
Coming soon.
# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Coming soon.
# Prompt Template
This model uses the `ChatML` prompt template:
```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
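If the repository's tokenizer config carries a matching chat template (which the ChatML format above suggests, though this is an assumption), you can render conversations into this format programmatically instead of building the string by hand. A minimal sketch:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Render the conversation into the ChatML format shown above and append
# the generation prompt (`<|im_start|>assistant`) so the model continues
# as the assistant.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```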
# How to use
You can load this model with Hugging Face's `transformers` library by passing `MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3` as the model name:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from transformers import pipeline
import torch

model_id = "MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
    # attn_implementation="flash_attention_2"
)

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True,
)

# Stream tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Token IDs at which generation should stop.
terminators = [
    tokenizer.eos_token_id,  # should be <|im_end|>
    tokenizer.convert_tokens_to_ids("<|assistant|>"),  # the model sometimes stops at <|assistant|>
    tokenizer.convert_tokens_to_ids("<|end|>"),  # the model sometimes stops at <|end|>
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
    "streamer": streamer,
    "eos_token_id": terminators,
}

output = pipe(messages, **generation_args)
print(output[0]["generated_text"])
```
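Passing a list to `eos_token_id` makes generation stop at whichever terminator the model produces first. As the comments above note, this guards against the model occasionally ending a turn with the base model's `<|end|>` or `<|assistant|>` tokens instead of ChatML's `<|im_end|>`.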