
Fine-tuning llama3-instruct for Arabic Question Answering in the Medical and Mental Health Domain

This work presents the fine-tuning of the llama3-instruct model for Arabic question answering in the medical and mental health domain. The approach leverages a custom dataset of Arabic questions and answers collected from medical and mental health websites.

Key aspects:

- Model: unsloth/llama-3-8b-Instruct-bnb-4bit
- Fine-tuning technique: LoRA (see the sketch below)
- Dataset: custom Arabic QA dataset from medical/mental health websites
- Quantization: 4-bit, applied for efficiency
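
As a rough illustration of the setup above, the sketch below loads the 4-bit base model with Unsloth, attaches LoRA adapters, and runs supervised fine-tuning with TRL's SFTTrainer. The hyperparameters, the inline sample QA pair, and the output directory name are placeholder assumptions for illustration only; they are not the actual training configuration or data used for this model, and exact argument names may differ across unsloth/trl versions.

```python
# Minimal LoRA fine-tuning sketch with Unsloth (illustrative settings only).
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit quantized base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical Arabic QA pair; the real dataset was collected from
# medical and mental health websites.
pairs = [{"question": "ما هي أعراض الاكتئاب؟",
          "answer": "من أعراض الاكتئاب الحزن المستمر وفقدان الاهتمام بالأنشطة اليومية."}]

def to_chat_text(example):
    # Format each QA pair with the llama-3 chat template.
    messages = [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = Dataset.from_list(pairs).map(to_chat_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="llama3_arabic_mentalqa",  # placeholder output path
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
    ),
)
trainer.train()
```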

Results:

After fine-tuning, the model responds in Arabic instead of answering solely in English, and it generates relevant and informative answers to Arabic questions within the medical and mental health domain.

Applications:

This work can serve as a foundation for building Arabic chatbots for healthcare applications, and it highlights the effectiveness of fine-tuning large language models such as llama3-instruct for domain-specific question answering in Arabic.

Model details:

- Format: Safetensors
- Model size: 4.65B params
- Tensor types: BF16, F32, U8
Usage:

The fine-tuned model is published as mkay8/llama3_Arabic_mentalQA_4bit and can be loaded through the Hugging Face Inference API (serverless) for chat-style question answering.
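
A minimal local-inference sketch is given below. It assumes the model can also be loaded locally with the Unsloth loader and the llama-3 chat template; the sample question is only an illustration, not taken from the training data.

```python
# Local inference sketch (assumed usage, not an official example): load the
# fine-tuned checkpoint and answer an Arabic mental health question.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mkay8/llama3_Arabic_mentalQA_4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster generation mode

question = "ما هي طرق التعامل مع القلق؟"  # "What are ways to cope with anxiety?"
messages = [{"role": "user", "content": question}]
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```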
