---
license: other
datasets:
- tatsu-lab/alpaca
---
LLaMA model fine-tuned with LoRA (1 epoch) on the Stanford Alpaca training dataset and quantized to 4-bit.

Because this model contains the merged LLaMA weights, it is subject to their license restrictions.