# Yoda Chatbot Model

This model is fine-tuned to respond like Yoda from Star Wars. It is based on the microsoft/Phi-3-mini-4k-instruct model.
## Model Description

This model is designed to generate responses in the style of Yoda. It was fine-tuned with a PEFT (Parameter-Efficient Fine-Tuning) approach using LoRA (Low-Rank Adaptation), which trains a small set of low-rank adapter weights on top of the frozen base model instead of updating all of its parameters.
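To make the LoRA idea concrete, here is a toy NumPy sketch of the low-rank update it relies on. This is an illustration only, not the training code for this model; the dimensions and rank are arbitrary, and in practice the adapter matrices are trained by a library such as `peft` rather than written by hand.

```python
import numpy as np

# Toy illustration of LoRA: instead of updating a full d x d weight
# matrix W, train two small matrices A (r x d) and B (d x r) with
# rank r much smaller than d. The effective weight is W + B @ A.
d, r = 8, 2
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))        # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01 # small random init
B = np.zeros((d, r))                   # B starts at zero, so the
                                       # update is zero at step 0

delta_W = B @ A          # low-rank update, only 2*d*r trainable values
W_adapted = W + delta_W  # behaves exactly like W until B is trained

# Trainable parameters drop from d*d (64) to 2*d*r (32) in this toy case
print(d * d, 2 * d * r)
```

The point of the zero-initialized `B` is that fine-tuning starts from the base model's behavior exactly, and the adapter only gradually learns a stylistic offset.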
## Intended Use
This model is intended for entertainment purposes and to generate text in the style of Yoda. It should not be used for real-world applications where accurate or sensitive information is required.
## Limitations
- The model's responses are generated based on the input text and may not always be accurate or appropriate.
- The model may produce biased or offensive content, as it is trained on data that could contain such biases.
## Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Replace with the actual model repository name on the Hugging Face Hub
model_name = "your-username/yoda_chatbot_model"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "<s> You are master Yoda from Star Wars. Answer the questions and chat like him. \n How many planets are there in the Milky Way? \n"
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]

# Perform inference
outputs = model.generate(
    input_ids=input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)

print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```
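The sampling arguments passed to `generate` above (`temperature`, `top_k`, `top_p`) control how the next token is chosen. The following self-contained sketch, using a hypothetical `filter_logits` helper over toy logits, shows roughly what each knob does; the real implementation lives inside `transformers` and differs in detail.

```python
import numpy as np

def filter_logits(logits, temperature=0.7, top_k=50, top_p=0.95):
    """Toy version of temperature + top-k + top-p (nucleus) filtering."""
    logits = np.asarray(logits, dtype=float) / temperature  # temperature scaling
    # top-k: keep only the k largest logits, drop the rest
    if top_k < len(logits):
        kth = np.sort(logits)[-top_k]
        logits = np.where(logits < kth, -np.inf, logits)
    # softmax over the surviving logits
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()
    # top-p: keep the smallest set of tokens whose cumulative mass >= top_p
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, top_p) + 1
    mask = np.zeros_like(probs)
    mask[order[:cutoff]] = probs[order[:cutoff]]
    return mask / mask.sum()  # renormalized distribution to sample from

# Four-token toy vocabulary: the weakest token is removed by top_k=3
probs = filter_logits([2.0, 1.0, 0.5, -1.0], temperature=1.0, top_k=3, top_p=0.9)
```

Lower `temperature` sharpens the distribution toward the most likely tokens; smaller `top_k`/`top_p` values cut off the unlikely tail, trading diversity for coherence.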