---
license: other
---

# MedLLAMA-LoRA
#### An experimental LLaMA finetune on a medical QA dataset
This model has not been evaluated and must NOT be used for medical advice. It is an experiment in creating a domain-specific model from LLaMA via LoRA finetuning.

Training details:
- 13B base model, finetuned on 76k question-answer pairs
- The dataset is a superset of the alpaca-data-cleaned instruct dataset, extended with medical QA pairs adapted from the icliniq dataset
- Trained for 18 hours on an A100; minibatch size 10, effective batch size 256, cutoff_len 512, all other parameters at their defaults
- Training code: https://github.com/tloen/alpaca-lora
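For readers unfamiliar with the technique, here is a minimal NumPy sketch of the LoRA idea used for this finetune: instead of updating the full pretrained weight matrix, a trainable low-rank product `B @ A` is added on top of the frozen weight. The dimensions and rank below are illustrative only, not the actual training configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 8  # illustrative sizes; r << d_in, d_out

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialised

x = rng.standard_normal(d_in)
# Adapted forward pass: y = W x + B (A x)
y = W @ x + B @ (A @ x)

# With B zero-initialised, the adapted model matches the base model exactly.
assert np.allclose(y, W @ x)

# Only r * (d_in + d_out) adapter parameters are trained,
# versus d_in * d_out for full finetuning.
print(A.size + B.size, W.size)
```

Because only the small `A` and `B` matrices receive gradients, a 13B model can be finetuned on a single A100 in the time quoted above, which full finetuning would not allow.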