# Model Card: MistralCat-1v
## License
MIT License
## Languages Supported
- English (en)
---
## Overview
This model is part of the VCC project and was fine-tuned on the TESTtm7873/ChatCat dataset, with `mistralai/Mistral-7B-Instruct-v0.2` as the base model. Fine-tuning used QLoRA, which trains small low-rank adapters on top of a 4-bit quantized base model to keep memory requirements low.
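For context, a QLoRA setup with the `peft` library typically looks like the sketch below. The rank, alpha, dropout, and target modules shown are illustrative assumptions, not the hyperparameters actually used for this model:

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Illustrative QLoRA adapter config -- all hyperparameters here are
# assumptions; the actual training recipe is not published.
lora_config = LoraConfig(
    r=16,                 # adapter rank (assumption)
    lora_alpha=32,        # adapter scaling factor (assumption)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# `base_model` is the 4-bit quantized model, loaded as shown in Getting Started below
model = prepare_model_for_kbit_training(base_model)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```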
---
## Getting Started
To use this model, you'll need to set up your environment first:
```python
import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# Base model configuration: load the model in 4-bit (NF4) to reduce memory use
base_model_id = "mistralai/Mistral-7B-Instruct-v0.2"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the base model with the quantization config
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

# Set up the tokenizer
eval_tokenizer = AutoTokenizer.from_pretrained(
    base_model_id, add_bos_token=True, trust_remote_code=True
)

# Attach the fine-tuned LoRA adapter to the base model
ft_model = PeftModel.from_pretrained(base_model, "mistral-journal-finetune/checkpoint-150")

# Sample evaluation
eval_prompt = "You have the softest fur."
model_input = eval_tokenizer(eval_prompt, return_tensors="pt").to("cuda")

ft_model.eval()
with torch.no_grad():
    output = ft_model.generate(**model_input, max_new_tokens=100, repetition_penalty=1.15)
    print(eval_tokenizer.decode(output[0], skip_special_tokens=True))
```
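Since the base model is instruction-tuned, wrapping prompts in Mistral's `[INST] ... [/INST]` chat format via the tokenizer's chat template may yield better responses. The sketch below assumes the fine-tune preserved the base model's template:

```python
# Build the prompt with the tokenizer's built-in chat template
# (assumption: the adapter was trained on the same instruct format)
messages = [{"role": "user", "content": "You have the softest fur."}]
model_input = eval_tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to("cuda")

with torch.no_grad():
    output = ft_model.generate(model_input, max_new_tokens=100, repetition_penalty=1.15)
    print(eval_tokenizer.decode(output[0], skip_special_tokens=True))
```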
---
## Model Details
- **Developed by:** testtm
- **Funded by:** testtm
- **Model type:** Mistral (causal language model)
- **Language:** English
- **Finetuned from model:** `mistralai/Mistral-7B-Instruct-v0.2`