---
library_name: peft
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
datasets:
  - ruslanmv/ai-medical-chatbot
---

# Model Card for Medical-Mixtral-7B-v1.5k

## Model Description

Medical-Mixtral-7B-v1.5k is a fine-tuned version of mistralai/Mixtral-8x7B-Instruct-v0.1 for answering medical-assistance questions. It was adapted on a 1.5k-record subset of the AI Medical Chatbot dataset (250k records in total). The purpose of this model is to provide a ready-to-use chatbot for questions related to medical assistance.


## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Name of the fine-tuned model
finetuned_model = 'ruslanmv/Medical-Mixtral-7B-v1.5k'

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(finetuned_model, trust_remote_code=True)

# Load the model with the provided adapter configuration and weights
model_pretrained = AutoModelForCausalLM.from_pretrained(
    finetuned_model, trust_remote_code=True, torch_dtype=torch.float16
).to('cuda')

messages = [
    {'role': 'user', 'content': 'What should I do to reduce my weight gained due to genetic hypothyroidism?'},
]

# add_generation_prompt appends the assistant prefix so the model starts answering
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors='pt'
).to('cuda')

outputs = model_pretrained.generate(input_ids, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
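For reference, Mixtral-Instruct chat templates wrap each user turn in `[INST] ... [/INST]` markers. The helper below is an illustrative sketch of that prompt shape, not the tokenizer's authoritative template (which `apply_chat_template` reads from the model repository):

```python
def format_mixtral_prompt(messages):
    """Illustrative sketch of the Mixtral-Instruct prompt format.

    The authoritative template ships with the tokenizer; this only shows
    the general shape of the string that apply_chat_template produces.
    """
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            prompt += f" {msg['content']}</s>"
    return prompt

print(format_mixtral_prompt([{"role": "user", "content": "Hello"}]))
# <s>[INST] Hello [/INST]
```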

## Framework versions

- PEFT 0.10.0
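The snippet above also assumes `transformers`, `torch`, and `accelerate` are installed. Only the PEFT version is pinned by this card; the other packages are not, so recent releases are assumed to work:

```shell
# PEFT version taken from this card; the other pins are left open by the card
pip install peft==0.10.0 transformers accelerate torch
```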
