
This repository houses a Low-Rank Adapter (LoRA) for the OpenOrca-Platypus2 13B (float16) model. The LoRA was trained on a diverse dataset of several thousand pirate-centric samples, ranging from typical phrases and extended conversation fragments to more obscure pirate vernacular.
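For reference, below is a minimal loading sketch using transformers and peft. The adapter repository id is a placeholder to be replaced with this repo's id, and the base-model id is assumed to be the Open-Orca/OpenOrca-Platypus2-13B checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Open-Orca/OpenOrca-Platypus2-13B"  # assumed base model id
adapter_id = "your-username/pirate-lora"      # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,  # the base model is served in 16-bit floats
    device_map="auto",
)
# Apply the pirate-speak LoRA on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_id)
```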

Objective: The primary motivation behind this LoRA was to explore how well LoRA fine-tuning can enforce a specific dialect and diction. We aimed to ascertain whether the model can be guided towards more authentic pirate-style language, both in vocabulary and in syntactic structure.

Evolution: This is the second version of the adapter we've developed for OpenOrca-Platypus2. Compared with our initial attempt, this version benefits from a significantly enhanced dataset: the data is not only more extensive but also spans a broader range of complexity, from short, concise phrases to longer, intricate samples. This deliberate variation was intended to test and strengthen the model's adaptability.

Outcomes: With the improved dataset and the insights gained from our first attempt, we've seen marked progress. The updated LoRA generates text that captures the nuances of pirate-speak more reliably and reproduces the idiosyncrasies of the dialect more organically, making for a more convincing user experience.

Note: Users might notice that the LoRA occasionally produces long, run-on streams of text. This behavior is not exclusive to the adapted model; it is also observed in the underlying OpenOrca-Platypus2 base model. One way to keep it in check at inference time is sketched below.
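A hedged generation example: capping the number of new tokens and relying on the tokenizer's EOS token to curb run-on output. The `model` and `tokenizer` objects are the ones from the loading sketch above, and the specific parameter values are illustrative, not tuned recommendations.

```python
prompt = "Ahoy! Tell me about yer ship."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=128,                   # hard cap on continuation length
    eos_token_id=tokenizer.eos_token_id,  # stop when the model emits EOS
    repetition_penalty=1.1,               # mild discouragement of repeated spans
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```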
