from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("heegyu/TinyMistral-248M-v2.5-Instruct-orpo")
model = AutoModelForCausalLM.from_pretrained("heegyu/TinyMistral-248M-v2.5-Instruct-orpo")

conv = [
  {
    'role': 'user',
    'content': 'What can I do with a Large Language Model?'
  }
]
# Render the conversation with the model's chat template and tokenize it.
prompt = tokenizer.apply_chat_template(conv, add_generation_prompt=True, return_tensors="pt")
# Note: the generate() keyword is `max_new_tokens` (plural).
output = model.generate(prompt, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
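Under the hood, `apply_chat_template` renders the list of role/content turns into the model's prompt string before tokenization. As a rough illustration only, the sketch below assumes a ChatML-style template (`<|im_start|>`/`<|im_end|>` markers); the actual template this model ships with may differ, so always prefer `tokenizer.apply_chat_template` in real code:

```python
def render_chatml(conversation, add_generation_prompt=True):
    """Illustrative sketch of chat-template rendering, assuming a
    ChatML-style format. NOT necessarily this model's real template."""
    prompt = ""
    for turn in conversation:
        # Each turn becomes a role-tagged block.
        prompt += f"<|im_start|>{turn['role']}\n{turn['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        prompt += "<|im_start|>assistant\n"
    return prompt

conv = [{'role': 'user', 'content': 'What can I do with a Large Language Model?'}]
print(render_chatml(conv))
```

Passing `add_generation_prompt=True` is what appends the open assistant turn, which is why the snippet above sets it before calling `generate`.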
Model size: 248M params · Tensor type: BF16 · Format: Safetensors

Model tree for heegyu/TinyMistral-248M-v2.5-Instruct-orpo

Finetuned: this model
Quantizations: 2 models

Dataset used to train heegyu/TinyMistral-248M-v2.5-Instruct-orpo