This is a merge of the Dolly LoRA with the main GPT-J-6B model, so Dolly can be used without any PEFT dependencies.
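
For reference, a merge like this can be produced with the `peft` library's `merge_and_unload()`. The sketch below is an illustration under that assumption (model ids taken from the links at the bottom of this card), not the exact script used to build this checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base GPT-J-6B weights.
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=torch.float16
)

# Apply the Dolly LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base, "samwit/dolly-lora")

# Fold the LoRA weights into the base weights and drop the PEFT wrapper,
# leaving a plain transformers model.
merged = model.merge_and_unload()

# Save a standalone checkpoint that no longer needs peft at load time.
merged.save_pretrained("./dolly-gpt-j-6b-merged")
```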
Similar merges have been discussed in the community.
Performance is good, but not as strong as the original Alpaca, which was trained from a LLaMA base model.
This is mostly because LLaMA 7B was pretrained on 1T tokens, while GPT-J-6B was trained on roughly 400B tokens.
- LoRA originally trained by samwit: https://huggingface.co/samwit/dolly-lora
- Dataset: the cleaned version of the Alpaca dataset - https://github.com/gururise/AlpacaDataCleaned
- Base model, GPT-J-6B: https://huggingface.co/EleutherAI/gpt-j-6B
- Example Colab notebook: https://colab.research.google.com/drive/1O1JjyGaC300BgSJoUbru6LuWAzRzEqCz?usp=sharing
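
A minimal usage sketch with plain `transformers` (no `peft` install needed); the repo id and the Alpaca-style prompt format are assumptions based on this card and the training dataset:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from this model card.
tokenizer = AutoTokenizer.from_pretrained("TehVenom/Dolly_GPT-J-6b")
model = AutoModelForCausalLM.from_pretrained("TehVenom/Dolly_GPT-J-6b")

# Alpaca-style prompt, assumed because the LoRA was trained on the
# cleaned Alpaca dataset.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```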