
PharmaLlama-3.2-3B-Instruct (For Research Purposes Only)

🏆 Part of 3rd Place Winning Project at Meta's Llama Impact Hackathon 2024 (London)
PharmaLlama-3.2-3B-Instruct ran locally inside an iOS app as part of the project that won 3rd place at Meta's Llama Impact Hackathon 2024. The project demonstrated an innovative use of AI to explore healthcare accessibility.

WARNING: This model is strictly for research purposes and should not be used by patients or healthcare professionals for medical advice. The model may generate hallucinations and inaccurate responses. Always consult a licensed pharmacist or healthcare provider for any medical concerns.


Overview

PharmaLlama-3.2-3B-Instruct is a fine-tuned variant of Llama 3.2 3B Instruct. The model is designed to explore how AI can assist in identifying potential medication side effects and drug interactions in controlled, experimental settings. While fine-tuning improved the model's handling of healthcare-related queries relative to the base model, it is not a substitute for professional advice and must not be used in real-world medical decision-making.


Research Highlights

  • Fine-Tuning Dataset: The model was trained on a question-answer dataset derived from pharmacy-related Anki flashcards focused on medication safety and pharmacological education. This experiment demonstrated potential improvements in detecting medication side effects and interactions.
  • Offline Operation: The model runs entirely on-device, providing a proof of concept for AI deployment in scenarios without internet access (see the inference sketch after this list).
  • Guardrails: Enhanced for professionalism and conversational safety during testing. Despite this, hallucinations and inaccuracies remain a significant limitation.
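
To make the offline-operation point above concrete, here is a minimal sketch of fully on-device inference with llama-cpp-python. It assumes the quantized GGUF weights have already been copied to the device; the filename, context size, and thread count below are illustrative placeholders, not values taken from this repository.

```python
# Minimal sketch: fully offline inference with llama-cpp-python.
# "pharmallama-3.2-3b-instruct-q4.gguf" is a placeholder filename;
# use the actual GGUF file shipped in the repository.
from llama_cpp import Llama

llm = Llama(
    model_path="pharmallama-3.2-3b-instruct-q4.gguf",  # local file, no network access required
    n_ctx=2048,    # context window (assumed value)
    n_threads=4,   # tune for the target edge device
)

# Research-only query; outputs must never inform real medical decisions.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a pharmacology research assistant."},
        {"role": "user", "content": "What are common side effects of ibuprofen?"},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```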

Important Warnings

  1. NOT FOR PATIENT USE:

    • This model is a research tool and is not approved for real-world medical applications.
    • Patients must always consult a qualified pharmacist or healthcare provider for medication-related concerns.
  2. RISK OF INACCURATE RESPONSES:

    • The model can generate hallucinations or incorrect medical information.
    • The suggestions provided during research testing are for experimental purposes only and must not be acted upon.
  3. LIMITATIONS:

    • Despite fine-tuning, the model may fail to detect critical interactions or provide incomplete or misleading advice.
    • Always defer to professional human judgment in healthcare scenarios.

Research Applications

PharmaLlama is intended for the following controlled research purposes:

  • Exploring AI in Healthcare Communication: Investigating how AI models can process and respond to pharmacy-related questions.
  • Side Effect Identification in Research Settings: Testing the model's ability to identify potential side effects from a dataset in a simulated environment (see the evaluation sketch below).
  • Offline AI Accessibility: Assessing on-device performance for AI in areas with limited connectivity.

This research does not endorse or support patient-facing applications of this model.
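
As an illustration of the side-effect identification use case listed above, the sketch below runs a small batch of pharmacy-style questions against a locally loaded copy of the model and logs the outputs for manual review. The questions and GGUF filename are illustrative placeholders and are not drawn from the fine-tuning dataset.

```python
# Hypothetical research-only evaluation loop. The GGUF filename and the
# questions are placeholders, not artifacts from this repository.
from llama_cpp import Llama

llm = Llama(model_path="pharmallama-3.2-3b-instruct-q4.gguf", n_ctx=2048)

# Illustrative pharmacy-style prompts for a simulated, controlled setting.
test_questions = [
    "What side effects are commonly associated with metformin?",
    "Are there known interactions between ibuprofen and warfarin?",
]

for question in test_questions:
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": question}],
        max_tokens=200,
        temperature=0.2,  # low temperature for more reproducible research runs
    )
    answer = out["choices"][0]["message"]["content"]
    print(f"Q: {question}\nA: {answer}\n")  # logged for manual review by researchers
```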


Technical Details

  • Base Model: Llama 3.2 3B Instruct (llama architecture, 3.21B parameters).
  • Quantization: 4-bit GGUF for experimental efficiency on edge devices (see the download sketch below).
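
The quantized weights can be fetched from the Hub before moving them onto a device. A minimal sketch follows; the GGUF filename is not listed on this card, so the value shown is a placeholder that should be replaced with the actual file from the repository's file listing.

```python
# Minimal sketch: download the 4-bit GGUF weights from the Hugging Face Hub.
# The filename below is a placeholder; check the repository's file listing.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="Malikhaa/PharmaLlama-3.2-3B-Instruct",
    filename="pharmallama-3.2-3b-instruct-q4.gguf",  # placeholder filename
)
print(f"Downloaded quantized model to: {gguf_path}")
```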

Disclaimer

PharmaLlama-3.2-3B-Instruct is an experimental model and should only be used for research. It is not safe, reliable, or suitable for patient use. The creators of this model bear no responsibility for any misuse or adverse outcomes resulting from its application.

  • The model should not be used to provide or interpret medical advice under any circumstances.
  • Misuse of this model in a healthcare setting could result in serious harm.

Call to Action

Researchers interested in studying the potential of localized AI for healthcare applications are encouraged to explore this model in strictly controlled environments. Any contributions to improve guardrails or reduce hallucinations are welcome, but must align with the model's experimental purpose and safety limitations.


PharmaLlama demonstrates the potential and challenges of AI in healthcare communication. However, it must not be used outside of research contexts.
