---
license: other
---
LoRA adapter finetuned with the command:

```
llmtune finetune --model llama-30b-4bit --weights llama-30b-4bit.pt --dataset data50.json --adapter alpaca-adapter-folder-30b-4bit
```
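Once training finishes, the adapter folder can be passed back to the same CLI for inference. This is a hedged sketch: the `generate` subcommand and the `--adapter`/`--instruction` flags follow the llmtune project README, but flag names may differ across versions, and the sample instruction is invented for illustration.

```shell
# Usage sketch (assumption: flags as documented in the llmtune README).
# Requires the base 4-bit weights and the adapter folder produced above.
llmtune generate \
    --model llama-30b-4bit \
    --weights llama-30b-4bit.pt \
    --adapter alpaca-adapter-folder-30b-4bit \
    --instruction "Give three tips for staying healthy."
```

Without `--adapter`, the same command runs the base quantized model, which is a quick way to compare outputs before and after finetuning.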
The training data (`data50.json`) consists of the first 50 records of the original Alpaca `dataset.json`.
The training loss is almost flat, yet the finetuned model's outputs are clearly better than those of the original llama-30b-4bit model.
The license follows the usual situation for Alpaca-family models.