# kevin009/llama322

## Model Description
This is a LoRA fine-tuned version of kevin009/llama322, trained with KTO (Kahneman-Tversky Optimization).
## Training Parameters
- Learning Rate: 5e-06
- Batch Size: 1
- Training Steps: 2043
- LoRA Rank: 16
- Training Date: 2024-12-29
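
For reference, the hyperparameters above map onto a KTO + LoRA run roughly as sketched below. This is a minimal, illustrative sketch assuming TRL's `KTOTrainer`/`KTOConfig` and PEFT's `LoraConfig`; the dataset name, `lora_alpha`, LoRA target modules, and output directory are assumptions, not a record of the actual training script.

```python
# Minimal sketch of a KTO + LoRA run using the hyperparameters listed above.
# Dataset name, lora_alpha, and output_dir are placeholders/assumptions.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model_name = "kevin009/llama322"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# KTO expects a dataset with "prompt", "completion", and boolean "label" columns.
train_dataset = load_dataset("your/kto-preference-dataset", split="train")  # placeholder

peft_config = LoraConfig(
    r=16,                # LoRA rank from the list above
    lora_alpha=32,       # assumption, not listed above
    task_type="CAUSAL_LM",
)

training_args = KTOConfig(
    output_dir="llama322-kto",       # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    max_steps=2043,
)

trainer = KTOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # `tokenizer=` in older TRL versions
    peft_config=peft_config,
)
trainer.train()
```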
## Usage
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the base model with the LoRA adapter applied
model = AutoPeftModelForCausalLM.from_pretrained("kevin009/llama322", token="YOUR_TOKEN")
tokenizer = AutoTokenizer.from_pretrained("kevin009/llama322")
```
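
Once loaded, the model (with the adapter applied) can be used like any causal language model. A minimal generation example; the prompt and generation settings below are illustrative choices, not defaults of this model:

```python
# Illustrative generation call; prompt and max_new_tokens are arbitrary choices.
inputs = tokenizer("Write a short poem about the sea.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```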