Uploaded model

  • Developed by: student-abdullah
  • License: apache-2.0
  • Finetuned from model: meta-llama/Llama-3.2-1B
  • Created on: 1 October 2024

Model Description

This model is fine-tuned from the meta-llama/Llama-3.2-1B base model to improve its ability to generate relevant and accurate responses about generic medications under the PMBJP (Pradhan Mantri Bhartiya Janaushadhi Pariyojana) scheme. The fine-tuning process used the following hyperparameters; a configuration sketch follows the list:

  • Fine Tuning Template: Llama Q&A
  • Max Tokens: 512
  • LoRA Alpha: 6
  • LoRA Rank (r): 128
  • Learning rate: 5e-5
  • Gradient Accumulation Steps: 2
  • Batch Size: 4
  • Quantization: None
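
A minimal sketch of this configuration, assuming the Hugging Face transformers and peft libraries; the target modules and output directory are illustrative assumptions, not details taken from this card:

```python
# Sketch of the fine-tuning configuration listed above (transformers + peft).
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.2-1B"
model = AutoModelForCausalLM.from_pretrained(base)  # loaded in full precision: no quantization, per the card

lora = LoraConfig(
    r=128,          # LoRA rank, as listed above
    lora_alpha=6,   # LoRA alpha, as listed above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed; not stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

args = TrainingArguments(
    output_dir="outputs",            # assumed
    learning_rate=5e-5,              # as listed above
    per_device_train_batch_size=4,   # batch size, as listed above
    gradient_accumulation_steps=2,   # as listed above
    num_train_epochs=3,              # the card reports a third and final epoch
)
```

Note that with a per-device batch size of 4 and 2 gradient-accumulation steps, the effective batch size is 8.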

Model Quantitative Performance

  • Training loss: 0.1473 (at step 4505, the end of the third and final epoch)

Limitations

  • Token Limitations: With a max token limit of 512, the model might not handle very long queries or contexts effectively.
  • Training Data Limitations: The model’s performance is contingent on the quality and coverage of the fine-tuning dataset, which may limit its generalizability to contexts or medications not covered in that dataset.
  • Potential Biases: As with any model fine-tuned on specific data, there may be biases based on the dataset used for training.

Model Performance Evaluation

  • Evaluated on 1,000 questions drawn from the dataset, to probe the fine-tuned knowledge base (a sketch of such an evaluation loop follows this list)
  • Sampling temperature: 0.3
  • Correct responses: 82.23%
  • Incorrect responses: 17.77%
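
A minimal sketch of such an evaluation loop, assuming llama-cpp-python to run the GGUF file; the file name, prompt template, question/answer pairs, and substring-match scoring are illustrative assumptions:

```python
# Sketch of a knowledge-base evaluation loop at temperature 0.3.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama3.2_Med-Dataset_Finetuning_01-10_32-bit.gguf",  # assumed file name
    n_ctx=512,  # matches the 512-token limit noted above
)

qa_pairs = [  # assumed examples; the actual evaluation set had 1,000 questions
    ("Is a generic equivalent of atorvastatin available under PMBJP?", "yes"),
]

correct = 0
for question, reference in qa_pairs:
    out = llm(f"Q: {question}\nA:", max_tokens=128, temperature=0.3)
    answer = out["choices"][0]["text"].strip()
    correct += int(reference.lower() in answer.lower())  # crude substring match, assumed

print(f"Correct responses: {correct / len(qa_pairs):.2%}")
```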

Model Details

  • Format: GGUF, 32-bit (a loading sketch follows this list)
  • Model size: 1.24B params
  • Architecture: llama
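
One way to load the 32-bit GGUF locally is to download it from the Hub and open it with llama-cpp-python; the GGUF file name below is an assumption, since the card does not list it:

```python
# Sketch: fetch the GGUF from the Hub and load it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="student-abdullah/Llama3.2_Med-Dataset_Finetuning_01-10_32-bit_gguf",
    filename="Llama3.2_Med-Dataset_Finetuning_01-10_32-bit.gguf",  # assumed; check the repo's file listing
)
llm = Llama(model_path=path, n_ctx=512)  # n_ctx matches the 512-token limit above
```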
