# OpenOrca-Platypus2-13B-PirateLora

This repo contains a Low-Rank Adapter (LoRA) for OpenOrca-Platypus2-13B (float16), fit on a simple dataset of several thousand pirate phrases, conversation pieces, and obscurities. The adapter was created to determine whether a dialect and diction can be enforced through LoRA fine-tuning. Results were much better than those of the previous adapter we created for Llama 2, though this may be due to a combination of effects: the superior performance of the base model relative to Llama 2, and a higher-quality training set than in our previous effort.
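
The adapter is applied on top of the base model at inference time. Below is a minimal loading sketch using the `transformers` and `peft` libraries; the base-model id (`Open-Orca/OpenOrca-Platypus2-13B`) and the adapter repo id shown are assumptions, so substitute the actual repo paths.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed repo ids -- replace with the actual base model and adapter paths.
base_id = "Open-Orca/OpenOrca-Platypus2-13B"
adapter_id = "OpenOrca-Platypus2-13B-PirateLora"

tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the base model in float16, matching the precision the adapter was fit against.
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Tell me about your day at sea."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the LoRA weights live in a small side file rather than a full checkpoint, the same base model can be swapped between this adapter and others without re-downloading the 13B weights.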
