
Eluwa: A Conversational LoRA for Facebook's OPT 2.7b Architecture


Eluwa is a fine-tuned Low-Rank Adapter (LoRA) model for Facebook's OPT 2.7b, trained on the Stanford Alpaca dataset. The idea was that OPT 2.7b was too curt (and, frankly, a bit of an asshole) for a model of its size, and that we could fine-tune it the same way Stanford's Alpaca fine-tuned LLaMA.

This repository contains the Eluwa 2.7b 2-epoch model, which shows a significant improvement in question-answering ability over the default OPT 2.7b model. Although Eluwa is not too different from OPT 2.7b under the hood, it often performs much better on accuracy and coherence.

For example, when asked "What is the capital of Sri Lanka", OPT 2.7b generates nonsense:

A: Sri Lanka is the country where the sun rises every morning on a beautiful beach. It has many beaches, but only one city - Colombo. (The sun sets at night.)

While Eluwa correctly says,

A: Colombo.

Likewise, when asked how to become a data scientist, Eluwa tries to be useful, whereas OPT 2.7B ends up insulting the user. Below are the results of Vicuna-style testing: 80 questions in various categories, with the responses rated by GPT-4.

| Category | OPT 2.7b base | Eluwa 2.7b 1000 iter | Eluwa 2.7b 2 epoch |
|---|---:|---:|---:|
| Generic | 22 | 44 | 57 |
| Knowledge | 35 | 60 | 72 |
| Roleplay | 29 | 38 | 58 |
| Common sense | 20 | 48 | 50 |
| Fermi | 4 | 28 | 23 |
| Counterfactual | 5 | 24 | 23 |
| Coding | 2 | 7 | 7 |
| Math | 0 | 3 | 3 |
| Writing | 8 | 19 | 19 |
| **Total** | **125** | **271** | **312** |

A CSV of questions, answers, and GPT-4's reviews is also included in the Eluwa GitHub repo, in the /TestResults/ folder, along with the base model's results for comparison.

Because of its small size, Eluwa can be used for research into conversational models on older and slower hardware.

Using Eluwa

I used oobabooga's text generation UI for testing, because it lets me easily regenerate outputs, modify the conversation history passed to the model, and mess with parameters.

To load Eluwa, download OPT 2.7b from Huggingface, then download both the .bin and .json files from the /model folder in this repo. Follow the instructions on the text generation UI repository for where the base model goes and how to load a LoRA; Eluwa goes in the /loras folder.
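If you'd rather skip the UI, the same setup can be sketched with the Hugging Face peft library. This is a minimal sketch, not the card's official loader: the adapter path "loras/eluwa" and the Q/A prompt framing are assumptions (the framing just mirrors the examples above).

```python
def load_eluwa(adapter_path="loras/eluwa"):
    """Load OPT 2.7b and apply the Eluwa LoRA adapter on top of it."""
    # Imports kept local so the prompt helper below works even
    # without transformers/peft installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("facebook/opt-2.7b")
    tokenizer = AutoTokenizer.from_pretrained("facebook/opt-2.7b")
    # adapter_path is an assumed location: the folder holding the
    # downloaded .bin and .json adapter files.
    model = PeftModel.from_pretrained(base, adapter_path)
    return model, tokenizer

def build_prompt(question):
    # Simple Q/A framing, mirroring the examples in this card.
    return f"Q: {question}\nA:"

if __name__ == "__main__":
    model, tokenizer = load_eluwa()
    inputs = tokenizer(build_prompt("What is the capital of Sri Lanka?"),
                       return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```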

Training and notes

Training Eluwa is a straightforward process. It is essentially Facebook's GPT-like OPT 2.7b model, loaded in 8-bit and fine-tuned on Stanford's Alpaca dataset. The training code is available in the Eluwa GitHub repo and will run as-is in Google Colab.
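The recipe above (8-bit base model plus a LoRA adapter trained on Alpaca) can be sketched with peft and transformers roughly as follows. Treat this as an illustration, not the exact script from the repo: the hyperparameters, the `target_modules` choice, and the dataset ID "tatsu-lab/alpaca" are all assumptions.

```python
def format_alpaca(example):
    # Standard Alpaca prompt template (instruction + optional input).
    if example.get("input"):
        return (f"### Instruction:\n{example['instruction']}\n\n"
                f"### Input:\n{example['input']}\n\n"
                f"### Response:\n{example['output']}")
    return (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}")

def main():
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments,
                              DataCollatorForLanguageModeling)
    from peft import (LoraConfig, get_peft_model,
                      prepare_model_for_kbit_training)

    tokenizer = AutoTokenizer.from_pretrained("facebook/opt-2.7b")
    # Load the base model in 8-bit, then attach a small LoRA adapter;
    # only the adapter weights are trained.
    model = AutoModelForCausalLM.from_pretrained(
        "facebook/opt-2.7b", load_in_8bit=True, device_map="auto")
    model = prepare_model_for_kbit_training(model)
    model = get_peft_model(model, LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # assumed attention targets
        task_type="CAUSAL_LM"))

    data = load_dataset("tatsu-lab/alpaca")["train"]  # assumed dataset ID
    data = data.map(lambda ex: tokenizer(
        format_alpaca(ex), truncation=True, max_length=512))

    trainer = Trainer(
        model=model,
        train_dataset=data,
        args=TrainingArguments(output_dir="eluwa-lora",
                               per_device_train_batch_size=4,
                               num_train_epochs=2,  # matches the 2-epoch card
                               learning_rate=2e-4, fp16=True),
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
    trainer.train()
    model.save_pretrained("eluwa-lora")  # writes the adapter .bin/.json

if __name__ == "__main__":
    main()
```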

Why "Eluwa"?

Well, the whole thing was inspired by Alpaca, which is a fine-tune of Llama. Others adopted the trend (Cabrita, Vicuna, etc.). Now, in Sri Lanka, we don't have llamas (at least, I've never seen any), but we do have goats. Goats are spectacular animals. In Ragama I once beheld a goat fighting a pack of stray dogs (and winning). Then it came for me. I hit it on the head with my umbrella, whereupon it ate the umbrella and chased me the length and breadth of the entire village.

If you can't beat em, join em. "Eluwa" means goat. Goats are fearsome, versatile, and double as the essential ingredient in mutton rolls. Everything in the known universe is either a goat, or not a goat. They're not as nice as llamas or alpacas, but they'll do.

License

Facebook's OPT has its own license. Please read it here. Alpaca is licensed for research use only. The dataset is CC BY-NC 4.0 (non-commercial use only), and its authors note that models trained on it should not be used outside of research.

Eluwa, therefore, is for research and non-commercial use only, under CC BY-NC 4.0. Go experiment with it, but don't use it commercially. The same applies to the testing dataset.

