
MedAssistant

Introduction

MedAssistant is an AI-powered medical assistant model designed to answer medical questions, assist in preliminary diagnosis, and provide general health information and advice. It is based on the unsloth/llama-3-8b-bnb-4bit model and fine-tuned on the LinhDuong/chatdoctor-200k dataset.

Model Details

  • Model Name: MedAssistant
  • Developer: Abiral7 (Individual)
  • Base Model: unsloth/llama-3-8b-bnb-4bit
  • Model Type: Language Model
  • Language(s): English
  • License: apache-2.0

Intended Use

MedAssistant is designed to:

  • Answer medical questions
  • Assist in preliminary diagnosis
  • Provide general health information and advice

Primary use cases include:

  1. Answering Medical Questions: Providing information and guidance on various health-related issues.
  2. Symptom Analysis: Offering potential explanations or suggestions based on symptoms described by users.
  3. Health Advice: Giving general advice on maintaining health and wellness.

Training Data

This model was fine-tuned on the LinhDuong/chatdoctor-200k dataset.

Performance and Limitations

Currently, no specific performance metrics are available for this model.

Limitations and Biases

Users should be aware of the following limitations:

  1. Accuracy and Reliability:

    • Not a Substitute for Professional Advice: The model provides general health information but is not a licensed medical professional. Users should consult healthcare providers for serious concerns.
    • Potential for Incorrect Information: AI-generated responses may occasionally be incorrect or outdated.
  2. Biases:

    • Training Data Bias: The model may reflect biases present in the training data, potentially leading to skewed or inappropriate responses.
    • Cultural Sensitivity: Responses may not account for cultural differences and may inadvertently provide advice that is culturally insensitive or inappropriate.
  3. Scope of Knowledge:

    • Limited to Training Data: The model's knowledge is limited to the data it was trained on and may not include the latest medical research or emerging health trends.

Ethical Considerations

Users are strongly advised to:

  • Use this model as a supplementary tool, not a replacement for professional medical advice.
  • Protect their privacy by avoiding sharing sensitive personal health information.
  • Verify any critical information or advice with licensed healthcare professionals.

Installation

To use MedAssistant, you need to install the following dependencies:

pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps "xformers<0.0.27" "trl<0.9.0" peft accelerate bitsandbytes

Usage

Here's a basic example of how to use the MedAssistant model:

from unsloth import FastLanguageModel
from transformers import TextStreamer
from peft import PeftModel

# Load the 4-bit quantized base model
base_model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Apply the MedAssistant LoRA adapter to the base model
model = PeftModel.from_pretrained(base_model, "Abiral7/MedAssistant")

FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

# Build the prompt and run streaming generation
prompt = """#System:
You are a Medical Assistant with knowledge in the medical domain.

#User:
I have pain in my eyes from playing too many video games. Please help?

#Assistant:
"""
inputs = tokenizer([prompt], return_tensors="pt").to("cuda")

text_streamer = TextStreamer(tokenizer)
# Keep max_new_tokens within the 2048-token context configured above
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=512)
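The #System/#User/#Assistant layout used in the example can be factored into a small helper so every query is formatted consistently. This is a minimal sketch; the function name build_prompt is hypothetical and not part of the model's API, and the section markers simply mirror the example above.

```python
# Minimal sketch of a reusable prompt builder for the #System/#User/#Assistant
# layout shown in the usage example. `build_prompt` is a hypothetical helper,
# not part of the MedAssistant or unsloth API.
DEFAULT_SYSTEM = "You are a Medical Assistant with knowledge in the medical domain."

def build_prompt(user_message: str, system_message: str = DEFAULT_SYSTEM) -> str:
    """Assemble a single prompt string in the format used by the example."""
    return (
        f"#System:\n{system_message}\n\n"
        f"#User:\n{user_message}\n\n"
        f"#Assistant:\n"
    )

prompt = build_prompt(
    "I have pain in my eyes from playing too many video games. Please help?"
)
print(prompt)
```

The returned string ends with the empty #Assistant: section, so the model's generation continues directly from that marker.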

Hardware Requirements

To run this model efficiently, the following hardware is recommended:

  • GPU: A modern GPU with at least 8GB of VRAM, preferably an NVIDIA GPU with CUDA support.
  • CPU: A multi-core CPU for preprocessing and managing data.
  • Memory (RAM): At least 16GB of RAM.
  • Storage: Sufficient SSD storage for model weights and datasets.
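Before loading the model, it can help to confirm the GPU meets the ~8 GB VRAM recommendation above. The sketch below assumes PyTorch is available (it is installed alongside the dependencies); the bytes_to_gib helper is a hypothetical convenience function.

```python
# Quick sketch to check whether a CUDA GPU with enough VRAM is available.
# Assumes PyTorch is installed; `bytes_to_gib` is a hypothetical helper.
try:
    import torch
except ImportError:
    torch = None

def bytes_to_gib(n: int) -> float:
    """Convert a byte count to GiB."""
    return n / (1024 ** 3)

if torch is not None and torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gib = bytes_to_gib(props.total_memory)
    print(f"{props.name}: {vram_gib:.1f} GiB VRAM")
    if vram_gib < 8:
        print("Warning: less than the recommended 8 GB of VRAM.")
else:
    print("No CUDA GPU detected; the 4-bit quantized base model requires one.")
```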

Feedback and Issues

To ask questions, provide feedback, or report issues, use the "Discussions" tab on this Hugging Face model page to reach the model developer and community directly.
