# Dolly_GPT-J-6b

This is a merge of the Dolly LoRA adapter with the base GPT-J-6B model, allowing users to run Dolly without having to worry about PEFT dependencies.
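
For illustration, here is a minimal sketch of how this kind of merge can be produced with `peft`. The base model id is the public `EleutherAI/gpt-j-6b` checkpoint; the adapter repo id shown is an assumption for the example, not necessarily the exact adapter used here.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base GPT-J-6B weights
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6b",
    torch_dtype=torch.float16,
)

# Apply the Dolly LoRA adapter on top of the base model
# (the adapter repo id below is an assumption for this sketch)
model = PeftModel.from_pretrained(base, "samwit/dolly-lora")

# Fold the adapter weights into the base weights and drop the PEFT wrapper,
# so the result can be loaded with plain transformers
merged = model.merge_and_unload()
merged.save_pretrained("Dolly_GPT-J-6b")
```

After merging, the saved checkpoint loads with `AutoModelForCausalLM.from_pretrained` alone; no `peft` import is needed at inference time.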

A similar merged model has been discussed previously.

Performance is good, but not as good as the original Alpaca, which was trained from a LLaMA base model.

This is mostly because LLaMA 7B was pretrained on 1T tokens, while GPT-J-6B was trained on roughly 300-400B tokens.