---
license: mit
language:
- en
---
|
|
|
## Instruction-tuned LLaMA (Alpaca-GPT4) |
|
|
|
Fine-tune [LLaMA-7B](https://huggingface.co/decapoda-research/llama-7b-hf) on the GPT-4-generated Alpaca instruction-following dataset.
|
|
|
The training scripts come from the [Stanford Alpaca repo](https://github.com/tatsu-lab/stanford_alpaca), the data from the [GPT-4-LLM repo](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM#data-release), and training uses the default hyperparameters.
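The Stanford Alpaca scripts wrap each example in a fixed prompt template before tokenization, with one variant for examples that carry an `input` field and one for those that do not. A minimal sketch of that formatting (the helper `format_prompt` is illustrative, not part of the released scripts):

```python
# Alpaca-style prompt templates, as used by the Stanford Alpaca training code.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)

PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

def format_prompt(example: dict) -> str:
    """Render one dataset example into the Alpaca prompt format."""
    if example.get("input"):
        return PROMPT_WITH_INPUT.format(**example)
    return PROMPT_NO_INPUT.format(**example)
```

At training time the model's completion target is the example's `output` field appended after `### Response:`.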
|
|
|
Please refer to [this page](https://instruction-tuning-with-gpt-4.github.io/) for more details. |