Is there any plan for a Pygmalion model based on OPT?

#11
by Conanak99 - opened

Recently, with the help of FlexGen, we can offload OPT models even with limited GPU memory: https://github.com/FMInference/FlexGen

Some experiments show that we can run OPT-6.7B and OPT-13B with just ~2GB of VRAM by offloading.
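
For reference, here is a rough sketch of one way to get a similar offloading effect without FlexGen, using Hugging Face transformers + accelerate (the ~2 GiB GPU budget, the CPU RAM budget, and the offload folder path are just illustrative assumptions):

```python
# A rough sketch, NOT FlexGen itself: CPU/disk offloading of OPT-6.7B with
# Hugging Face transformers + accelerate. The GPU/CPU memory budgets and the
# offload folder below are illustrative assumptions, not tuned values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-6.7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# device_map="auto" lets accelerate split layers across GPU, CPU RAM, and disk;
# max_memory caps GPU usage so most of the fp16 weights spill off the GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
    max_memory={0: "2GiB", "cpu": "30GiB"},
    offload_folder="offload",  # weights that fit neither budget go to disk here
)

prompt = "Hello, my name is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This is just to show the general offloading idea; FlexGen itself is built for much higher throughput on batched generation.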

Unfortunately, Pygmalion is based on a GPT model. Is there any plan to train another Pygmalion model based on OPT? This would help people with low-end GPUs run the model locally, and we could also run bigger Pygmalion models on Colab within its 16GB limit.

Pygmalion org

Not at the moment, since I don't like how restrictive OPT's license is.

However, I am keeping an eye on the project since the developers plan to support other model architectures (at least according to their roadmap). If I get the compute resources to train bigger models, they will likely be based on NeoX, so feel free to leave a thumbs up for NeoX support on the FlexGen repo: https://github.com/FMInference/FlexGen/issues/9

Any possibility of a model based on LLaMA 7B/13B? Though I imagine the same license restrictiveness applies there.

But it would of course be much more cost-efficient to fine-tune than NeoX-20B (and much more creative).

Pygmalion org

Any possibility of a model based on LLaMA 7B/13B? Though I imagine the same license restrictiveness applies there.

Yep, the CEO of Hugging Face himself has asked people not to upload any LLaMA models until further notice. The claims about the model's performance are very exciting, though, so if Meta allows distribution of fine-tunes I do plan on trying it.
