LoRA Adapter Layers!

Uploaded model

  • Developed by: student-abdullah
  • Finetuned from model: meta-llama/Llama-3.2-1B
  • Created on: 29th September, 2024
  • Full model: student-abdullah/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09

Model Description

This LoRA adapter model is fine-tuned from the meta-llama/Llama-3.2-1B base model to specialise in generic medications available under the PMBJP (Pradhan Mantri Bhartiya Janaushadhi Pariyojana) scheme. The fine-tuning process used the following hyperparameters (a configuration sketch follows the list):

  • Fine-Tuning Template: Llama Q&A
  • Max Tokens: 512
  • LoRA Alpha: 32
  • LoRA Rank (r): 128
  • Learning rate: 1.5e-4
  • Gradient Accumulation Steps: 4
  • Batch Size: 8
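
A minimal sketch of how these hyperparameters could map onto the Hugging Face peft and transformers libraries. The target_modules list and the output_dir are assumptions (typical Llama projection layers and a hypothetical path); the card does not state which modules the adapters were attached to.

```python
# Sketch only: maps the listed hyperparameters onto peft/transformers.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=128,           # LoRA Rank (r), as listed above
    lora_alpha=32,   # LoRA Alpha, as listed above
    # Assumption: common Llama attention/MLP projections; the card does
    # not say which modules the adapters were attached to.
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama32-1b-pmbjp-lora",  # hypothetical output path
    learning_rate=1.5e-4,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
)
# "Max Tokens: 512" would correspond to the sequence-length cap applied
# at tokenization time (e.g. max_length=512 when tokenizing the dataset).
```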

Model Quantitative Performance

  • Training loss: 0.1207 (at the final epoch, 800)

Limitations

  • This is not a complete, standalone model; it contains only the LoRA adapter layers, which must be loaded on top of the meta-llama/Llama-3.2-1B base model (see the loading sketch below)
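
A minimal sketch of loading these adapter layers on top of the base model with transformers and peft. The repository ids are taken from this card; merging is optional and shown only as one way to obtain standalone weights.

```python
# Sketch only: attach the LoRA adapters to the base Llama-3.2-1B model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")

model = PeftModel.from_pretrained(
    base,
    "student-abdullah/Llama3.2_Medicine-Hinglish-Dataset_LoRA-Adapters_29-09",
)

# Optional: fold the adapters into the base weights for standalone use.
model = model.merge_and_unload()
```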

