
# llama2-piratelora-13b

This repo contains a Low-Rank Adapter (LoRA) for Llama 2 13B (float16), trained on a simple dataset of thousands of pirate phrases, conversation snippets, and obscure nautical terms. The purpose of this LoRA was to determine whether dialect and diction can be enforced through LoRA fine-tuning. Results were less than perfect, but the adapter does seem to push the model toward maritime and nautical topics when prompted to generate spontaneously.
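
For reference, here is a minimal sketch of how a LoRA adapter like this one is typically loaded with the PEFT library. The base-model and adapter paths below are assumptions; substitute the actual Hugging Face repo ids.

```python
# Minimal sketch: load a LoRA adapter on top of a float16 Llama 2 13B base.
# Both model ids below are assumptions, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-13b-hf"  # assumed base model
adapter_id = "llama2-piratelora-13b"         # this repo (path assumed)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,  # the card says the adapter was fit on a float16 base
    device_map="auto",
)
# Attach the LoRA weights to the base model.
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Tell me about your day."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```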
