
LoRA weights for LLaMA-7b trained on a subset of the Stanford Alpaca dataset in which the long tail of lengthy entries is removed and the prompt is shortened to the following:

```
Appropriately respond to the following instruction:
### Instruction: Write a javascript function that sorts array alphabetically
### Response:
```
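
For reference, a minimal sketch of how an inference-time prompt might be assembled in this format (the `build_prompt` helper and `instruction` variable are illustrative, not part of this repo):

```python
def build_prompt(instruction: str) -> str:
    # Reconstruct the shortened Alpaca-style prompt used during training.
    return (
        "Appropriately respond to the following instruction:\n"
        f"### Instruction: {instruction}\n"
        "### Response:"
    )

prompt = build_prompt("Write a javascript function that sorts array alphabetically")
```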

This repository contains only the LoRA adapter weights, not the foundation model itself, so it's MIT licensed!

Tuned using https://github.com/lxe/simple-llama-finetuner
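
A rough sketch of loading these adapter weights on top of a base LLaMA-7b checkpoint with 🤗 Transformers and PEFT, assuming standard `peft` usage; the repo ids below are placeholders, so substitute wherever the base model and this adapter actually live:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE = "path/to/llama-7b-hf"        # placeholder: any LLaMA-7b checkpoint
ADAPTER = "path/to/this-lora-repo"  # placeholder: this LoRA adapter

tokenizer = LlamaTokenizer.from_pretrained(BASE)
model = LlamaForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.float16, device_map="auto"
)
# Apply the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, ADAPTER)

prompt = (
    "Appropriately respond to the following instruction:\n"
    "### Instruction: Write a javascript function that sorts array alphabetically\n"
    "### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```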