Instruction-tuned LLaMA (Alpaca-GPT4)

This model is LLaMA-7B fine-tuned on the Alpaca instruction-following dataset.

The training scripts come from the stanford-alpaca repo, while the instruction data comes from the GPT-4-LLM repo; training uses the stanford-alpaca default hyperparameters.
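Since the stanford-alpaca scripts train on a fixed prompt template, inference works best when inputs follow the same format. The sketch below reproduces that template as used in the stanford-alpaca repo; the example instruction is illustrative only.

```python
# Alpaca-style prompt builder, matching the template used by the
# stanford-alpaca training scripts (sketch; verify against the repo).

def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format an instruction (and optional input) into the Alpaca prompt."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt("Give three tips for staying healthy.")
```

The resulting string can be tokenized and passed to the model's `generate` method; the text after `### Response:` is the model's answer.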

Please refer to this page for more details.
