---
{}
---
LoRA weights for LLaMA-7b trained on a subset of the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset, in which the long tail of lengthy entries is removed and the prompt is shortened to the following:
```
Appropriately respond to the following instruction:
### Instruction: Write a javascript function that sorts array alphabetically
### Response:
```
This repository contains only the LoRA adapter weights, not the foundation model itself, so it's MIT-licensed!
Tuned using [simple-llama-finetuner](https://github.com/lxe/simple-llama-finetuner).
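
For reference, here is a minimal sketch of how weights like these could be loaded with the Hugging Face `transformers` and `peft` libraries. The base-model and adapter repo IDs below are placeholders, not names confirmed by this card — substitute the actual ones:

```python
# Minimal sketch: apply LoRA adapter weights on top of LLaMA-7b via peft.
# The repo IDs below are placeholders -- replace with the real base model
# and the adapter repo you downloaded these weights from.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL = "path/to/llama-7b-hf"   # placeholder: local path or hub ID of the base model
ADAPTER = "path/to/lora-weights"     # placeholder: local path or hub ID of these LoRA weights

base_model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)

# Load the frozen base model with the LoRA adapter merged in at inference time.
model = PeftModel.from_pretrained(base_model, ADAPTER)

# Build a prompt in the shortened format described above.
prompt = (
    "Appropriately respond to the following instruction:\n"
    "### Instruction: Write a javascript function that sorts array alphabetically\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since the prompt format above is what the adapter was tuned on, keeping the same `### Instruction:` / `### Response:` framing at inference time should give the best results.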