---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
license: mit
datasets:
- TESTtm7873/ChatCat
language:
- en
---
# Model Card for TESTtm7873/MistralCat-1v
## License

MIT License

## Languages Supported

- English (en)
## Overview

This model is part of the VCC project. It was fine-tuned on the `TESTtm7873/ChatCat` dataset with `mistralai/Mistral-7B-Instruct-v0.2` as the base model, using QLoRA for memory-efficient fine-tuning.
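The exact training configuration was not published with this card; as a rough illustration only, a typical QLoRA setup with `peft` and `bitsandbytes` looks like the sketch below. All hyperparameters here (rank, alpha, target modules, dtypes) are assumptions, not the values used for this model:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA: quantize the frozen base model to 4-bit NF4...
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# ...then train low-rank adapters on top (hypothetical hyperparameters)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
```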
## Getting Started

To use this model, first set up your environment:
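The snippets below assume a CUDA-capable GPU and roughly the following packages (`bitsandbytes` is required for the 8-bit loading used here; this card does not pin exact versions):

```bash
pip install transformers peft accelerate bitsandbytes
```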
### Model initialization

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Load the base model in 8-bit and attach the fine-tuned LoRA adapter
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    load_in_8bit=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, "TESTtm7873/MistralCat-1v")
model.eval()

# Default generation settings; tune sampling parameters as needed
generation_config = GenerationConfig()
```
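Newer `transformers` releases deprecate the `load_in_8bit=True` argument in favor of an explicit quantization config. If you see a deprecation warning, the equivalent load is:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```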
### Inference

```python
def evaluate(question: str) -> str:
    prompt = f"The conversation between human and Virtual Cat Companion.\n[|Human|] {question}.\n[|AI|] "
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,
    )
    # Decode the full sequence and keep only the model's reply
    output = tokenizer.decode(generation_output.sequences[0], skip_special_tokens=True)
    return output.split("[|AI|]")[1]


your_question: str = "You have the softest fur."
print(evaluate(your_question))
```
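The `[|Human|]` / `[|AI|]` markers in `evaluate` presumably mirror the prompt template used during fine-tuning, so keep the prompt format above intact when querying the model.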
## Model Details

- **Developed by:** testtm
- **Funded by:** Project TEST
- **Model type:** Mistral
- **Language:** English
- **Finetuned from model:** mistralai/Mistral-7B-Instruct-v0.2