
Chat Model QLoRA Adapter

A QLoRA adapter fine-tuned from OrionStarAI/Orion-14B-Base.

Fine-tuned on the Korean Sympathy Conversation dataset from AIHub.

See more information at our GitHub.


Quick Tour

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit quantized base model
model = AutoModelForCausalLM.from_pretrained(
    "CurtisJeon/OrionStarAI-Orion-14B-Base-4bit",
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map="auto",
)
model.config.use_cache = True

# Attach the fine-tuned QLoRA adapter and make it the active adapter
model.load_adapter("m2af/OrionStarAI-Orion-14B-Base-adapter", "loaded")
model.set_adapter("loaded")

# Load the matching tokenizer and configure padding for generation
tokenizer = AutoTokenizer.from_pretrained(
    "CurtisJeon/OrionStarAI-Orion-14B-Base-4bit", trust_remote_code=True
)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# Generate a sample and decode the output tokens to text
inputs = tokenizer("์•ˆ๋…•ํ•˜์„ธ์š”, ๋ฐ˜๊ฐ‘์Šต๋‹ˆ๋‹ค.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])