---
license: mit
language:
- en
---
## Instruction-tuned LLaMA (Alpaca-GPT4)
[LLaMA-7B](https://huggingface.co/decapoda-research/llama-7b-hf) fine-tuned on the Alpaca-GPT4 dataset.
The main training scripts are from the [stanford-alpaca repo](https://github.com/tatsu-lab/stanford_alpaca), and the data (Alpaca instructions paired with GPT-4-generated responses) is from the [GPT-4-LLM repo](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM#data-release); training uses the default hyper-parameters.
Please refer to [this page](https://instruction-tuning-with-gpt-4.github.io/) for more details.
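
Since the model was trained with the stanford-alpaca scripts, it presumably expects inputs in the Alpaca prompt format. Below is a minimal sketch of that template; the wording follows the stanford-alpaca repo, but you should verify it matches how this checkpoint was trained.

```python
# Alpaca-style prompt templates (assumed from the stanford-alpaca repo).
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)


def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format an instruction (and optional input) into an Alpaca-style prompt."""
    if input_text:
        return PROMPT_WITH_INPUT.format(instruction=instruction, input=input_text)
    return PROMPT_NO_INPUT.format(instruction=instruction)


prompt = build_prompt("Give three tips for staying healthy.")
```

The resulting string can then be tokenized and passed to the model for generation (e.g. via `transformers`).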