---
base_model: huzaifa525/MedGenius_LLaMA-3.2B
datasets:
- huzaifa525/Medical_Intelligence_Dataset_40k_Rows_of_Disease_Info_Treatments_and_Medical_QA
language:
- en
library_name: transformers
license: apache-2.0
tags:
- Medical AI
- AI-powered healthcare
- Diagnostic AI model
- Medical chatbot
- Healthcare AI solutions
- Symptom analysis AI
- Disease diagnosis model
- Medical NLP model
- AI for doctors
- Medical Q&A model
- Healthcare chatbot
- AI in telemedicine
- Medical research assistant
- AI medical assistant
- Disease treatment suggestions AI
- Medical education AI
- AI in healthcare innovation
- LLaMA medical model
- AI healthcare applications
- Medical intelligence dataset
- llama-cpp
- gguf-my-repo
---

# huzaifa525/MedGenius_LLaMA-3.2B-Q4_K_M-GGUF

Refer to the [original model card](https://huggingface.co/huzaifa525/MedGenius_LLaMA-3.2B) for more details on the model.

---

## **MedGenius_LLaMA-3.2B: A Fine-Tuned Medical AI Model for Diagnostic Assistance**

---

**Overview:**
MedGenius_LLaMA-3.2B is a specialized AI model fine-tuned on the **Medical Intelligence Dataset**, which consists of more than 40,000 rows of detailed medical information. Built on the **LLaMA-3.2B** architecture, MedGenius is designed to assist with medical diagnostics, patient-doctor dialogue generation, symptom analysis, and tailored responses to common medical queries. The model is optimized for real-time healthcare applications, making it a useful tool for medical professionals, students, and researchers.
---

### **Model Details:**

- **Base Model**: LLaMA-3.2B (Meta AI’s Large Language Model)
- **Fine-Tuned Dataset**: Medical Intelligence Dataset (40,443 rows of comprehensive disease information, treatments, Q&A for medical students, and patient dialogues)
- **Dataset Source**: Available on both Kaggle and Hugging Face:
  - Kaggle: [Medical Intelligence Dataset (40K Disease Info & Q&A)](https://www.kaggle.com/datasets/huzefanalkheda/medical-intelligence-dataset-40k-disease-info-qa)
  - Hugging Face: [Medical Intelligence Dataset (40K Rows of Disease Info, Treatments, and Medical Q&A)](https://huggingface.co/datasets/huzaifa525/Medical_Intelligence_Dataset_40k_Rows_of_Disease_Info_Treatments_and_Medical_QA)
- **Model Size**: 3.2 billion parameters
- **Language**: English

---

### **About the Creator:**

**Huzefa Nalkheda Wala**
Connect with me on [LinkedIn](https://linkedin.com/in/huzefanalkheda)
Follow me on [Instagram](https://www.instagram.com/imhuzaifan/)

---

### **Dataset and Training:**

The **Medical Intelligence Dataset** used to train MedGenius is carefully curated and includes:

- Disease names and descriptions
- Symptoms and diagnostic criteria
- Treatments, including medications, procedures, and alternative therapies
- Doctor-patient conversation samples (for clinical AI chatbot development)
- Q&A content aimed at medical students preparing for exams
- Real-world medical scenarios across a variety of specializations

By covering diverse medical fields such as cardiology, neurology, infectious diseases, and mental health, the dataset helps MedGenius_LLaMA-3.2B offer relevant information across a wide range of medical topics.

The fine-tuning process used the **paged_adamw_32bit** optimizer with **SFTTrainer** for faster convergence and stable training. The result is a model that can offer real-time medical guidance and educational content.

---

### **Use Cases:**

**1.
Medical Assistance:**
MedGenius can be integrated into healthcare apps to provide diagnostic suggestions, symptom checklists, and treatment information. It bridges the gap between healthcare professionals and patients by providing clear, informative, and tailored responses to medical questions.

**2. Medical Education:**
MedGenius is designed for medical students and practitioners. It can help explain complex medical conditions, symptoms, and treatments, supporting exam preparation and clinical rounds. The question-answer format of the dataset is well suited to generating useful study material for learners.

**3. Telemedicine Chatbots:**
A key feature of MedGenius_LLaMA-3.2B is its ability to generate realistic, helpful dialogues between patients and healthcare providers. This makes it a strong foundation for AI-driven telemedicine chatbots that assist with preliminary consultations.

**4. Healthcare Research:**
For researchers, MedGenius offers an extensive knowledge base for exploring disease progression, treatment efficacy, and healthcare statistics. It can also assist in drafting clinical reports or serve as a tool for hypothesis generation in medical research.

---

### **Performance & Features:**

- **Real-Time Responses**: Fast, responsive model suitable for real-time applications.
- **Medical Q&A**: Handles complex medical questions, with answers grounded in the dataset's real-world scenarios.
- **Customizability**: Can be fine-tuned further for specific healthcare specializations.
- **Medical Dialogue Generation**: Produces human-like, contextually relevant medical conversations between doctors and patients.
- **Educational Insights**: Especially useful for students, providing educational summaries and detailed explanations of diseases, symptoms, and treatments.
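For chatbot-style integrations, prompts should follow the chat template the model was trained with. As a minimal sketch, assuming this fine-tune keeps the standard Llama 3 instruct format (verify against the tokenizer's `chat_template` before relying on it), a single-turn prompt could be assembled like this:

```python
# Sketch: build a Llama-3-style chat prompt for a patient-assistant turn.
# Assumption: the fine-tune uses the standard Llama 3 instruct template;
# in practice, prefer tokenizer.apply_chat_template from `transformers`.

def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt using Llama 3 header/eot tokens."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a careful medical assistant. You do not replace a doctor.",
    "What are common symptoms of iron-deficiency anemia?",
)
```

With `transformers`, `tokenizer.apply_chat_template` produces the equivalent string without hand-coding the special tokens.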
---

### **Why MedGenius_LLaMA-3.2B?**

- **Accuracy & Reliability**: Trained on a broad dataset spanning many fields of medicine, MedGenius aims to provide accurate, relevant answers for medical practitioners.
- **Scalability**: From educational purposes to real-world healthcare solutions, MedGenius is designed to scale across multiple domains within the medical industry.

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo huzaifa525/MedGenius_LLaMA-3.2B-Q4_K_M-GGUF --hf-file medgenius_llama-3.2b-q4_k_m.gguf -p "What is the use of Paracetamol"
```

### Server:

```bash
llama-server --hf-repo huzaifa525/MedGenius_LLaMA-3.2B-Q4_K_M-GGUF --hf-file medgenius_llama-3.2b-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo huzaifa525/MedGenius_LLaMA-3.2B-Q4_K_M-GGUF --hf-file medgenius_llama-3.2b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo huzaifa525/MedGenius_LLaMA-3.2B-Q4_K_M-GGUF --hf-file medgenius_llama-3.2b-q4_k_m.gguf -c 2048
```
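Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API (on port 8080 by default). Below is a minimal client sketch; the system prompt and generation parameters are illustrative choices, not values prescribed by this card:

```python
# Sketch: query a local llama-server instance over its
# OpenAI-compatible /v1/chat/completions endpoint (stdlib only).
import json
import urllib.request

def build_chat_request(question: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "messages": [
            {"role": "system", "content": "You are a helpful medical assistant."},
            {"role": "user", "content": question},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature for more deterministic answers
    }

def ask(question: str, base_url: str = "http://localhost:8080") -> str:
    """POST the payload to llama-server and return the reply text."""
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(build_chat_request(question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Requires a running server, e.g.:
# print(ask("What is the use of Paracetamol?"))
```

Note that responses are generated text, not verified medical advice; outputs should be reviewed by a qualified professional before any clinical use.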