
Model Card for Aryan-401/phi-3-mini-4k-instruct-finetune-guanaco-PEFT-Merged

This model is based on microsoft/Phi-3-mini-4k-instruct and has been further fine-tuned on the timdettmers/openassistant-guanaco dataset using AutoTrain.
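
For reference, the training data can be browsed with the datasets library. This is a minimal sketch for inspecting the dataset only; the exact AutoTrain configuration used for fine-tuning is not reproduced here.

from datasets import load_dataset

# Instruction-tuning data used for the fine-tune
dataset = load_dataset("timdettmers/openassistant-guanaco")
print(dataset)              # available splits and their sizes
print(dataset["train"][0])  # preview a single training example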

Model Details

The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which include both synthetic data and filtered, publicly available website data, with a focus on high-quality, reasoning-dense content.

Uses

Direct Use

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Aryan-401/phi-3-mini-4k-instruct-finetune-guanaco-PEFT-Merged", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Aryan-401/phi-3-mini-4k-instruct-finetune-guanaco-PEFT-Merged", trust_remote_code=True)

# Prompt content: "What is the Value of Pi?"
messages = [
    {"role": "user", "content": "What is the Value of Pi?"}
]
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = model.to(device).eval()

# Apply the chat template and tokenize the conversation
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(device), max_length=1000)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

print(response)
# Pi is an irrational number, which means it cannot be expressed as a simple fraction and its decimal representation goes on forever without repeating. However, the value of Pi is approximately 3.14159. It is often rounded to 3.14 for simplicity in calculations.
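
The same model can also be used through the transformers text-generation pipeline, which applies the chat template internally. The following is a minimal sketch assuming a recent transformers release that accepts chat-style message lists; the generation settings (max_new_tokens, greedy decoding) are illustrative, not the ones used to produce the sample output above.

from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Aryan-401/phi-3-mini-4k-instruct-finetune-guanaco-PEFT-Merged",
    trust_remote_code=True,
    device_map="auto",  # requires accelerate; places weights on CPU if no GPU is available
)

messages = [{"role": "user", "content": "What is the Value of Pi?"}]
out = pipe(messages, max_new_tokens=256, do_sample=False)
# The pipeline returns the full conversation; the last message is the assistant reply
print(out[0]["generated_text"][-1]["content"])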

The merged checkpoint is distributed in Safetensors format and contains 3.82B parameters stored as F32 tensors.
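
Because the weights are stored in F32, the checkpoint occupies roughly 15 GB; loading in half precision is a common way to roughly halve that. This is a minimal sketch, assuming a CUDA GPU with enough memory; float16 is an assumption here, not the precision used to produce the sample output above.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aryan-401/phi-3-mini-4k-instruct-finetune-guanaco-PEFT-Merged"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
# Cast the F32 weights to float16 at load time to roughly halve memory use
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    trust_remote_code=True,
).to("cuda")  # assumes a CUDA device is available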

Dataset used to train Aryan-401/phi-3-mini-4k-instruct-finetune-guanaco-PEFT-Merged: timdettmers/openassistant-guanaco