This is a fine-tune of GPT-J-6B using LoRA - https://huggingface.co/EleutherAI/gpt-j-6B
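
For reference, here is a minimal sketch of how a LoRA adapter can be attached to GPT-J-6B with the `peft` library. The hyperparameters (rank, alpha, target modules) are illustrative placeholders, not necessarily the settings used for this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,  # fp16 weights are roughly 12 GB
)

# LoRA freezes the base weights and trains small low-rank update
# matrices injected into the attention projections.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank updates (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # GPT-J attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of weights train
```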
The dataset is the cleaned version of the Alpaca dataset - https://github.com/gururise/AlpacaDataCleaned
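
The cleaned data is a JSON list of instruction/input/output records. A small sketch of loading it and building Alpaca-style prompts follows; the file name and raw URL are assumptions based on the linked repo, so verify them before use:

```python
import json
import urllib.request

# Assumed path into the AlpacaDataCleaned repo.
URL = ("https://raw.githubusercontent.com/gururise/AlpacaDataCleaned/"
       "main/alpaca_data_cleaned.json")

with urllib.request.urlopen(URL) as f:
    records = json.load(f)  # list of {"instruction", "input", "output"} dicts

def to_prompt(rec):
    """Format one record with the standard Alpaca prompt template."""
    if rec["input"]:
        return ("Below is an instruction that describes a task, paired with "
                "an input that provides further context.\n\n"
                f"### Instruction:\n{rec['instruction']}\n\n"
                f"### Input:\n{rec['input']}\n\n"
                f"### Response:\n{rec['output']}")
    return ("Below is an instruction that describes a task.\n\n"
            f"### Instruction:\n{rec['instruction']}\n\n"
            f"### Response:\n{rec['output']}")

print(to_prompt(records[0]))
```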
A similar model has been discussed elsewhere.

The performance is good, but not as good as the original Alpaca, which was trained from a LLaMA base model.

This is mostly due to the LLaMA 7B model being pretrained on 1T tokens, while GPT-J-6B was trained on roughly 300-400B tokens.