
Alpaca-LoRA 13B

This repo contains a Low-Rank Adapter (LoRA) for LLaMA 13B, fine-tuned on the cleaned Stanford Alpaca dataset.

This repo does not contain the LLaMA weights themselves, only the tuned LoRA weights, which can be applied on top of existing LLaMA 13B weights.

Instructions for running it can be found at https://github.com/tloen/alpaca-lora.
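As a rough illustration of how the adapter is applied (the linked repo remains the authoritative guide), the sketch below follows the common Hugging Face transformers + peft pattern: load the base LLaMA 13B weights, then layer this repo's LoRA weights on top. The model IDs and the prompt are placeholders, not part of this repo.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Base LLaMA 13B weights are NOT included in this repo; this path is a placeholder.
base_model_id = "path/to/llama-13b-hf"
# Placeholder for this repo's LoRA adapter ID.
lora_adapter_id = "path/to/alpaca-lora-13b"

tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
model = LlamaForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the low-rank adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, lora_adapter_id, torch_dtype=torch.float16)
model.eval()

# Example instruction-style prompt in the Alpaca format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```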
