samwit committed
Commit 3b0cbd8
1 Parent(s): ef37fe9

Update README.md

Files changed (1): README.md +5 -1
README.md CHANGED
@@ -7,4 +7,8 @@ A model similar to this has been talked about
 
 The performance is good but not as good as the original Alpaca trained from a base model of LLaMA
 
-This is mostly due to the LLaMA 7B model being pretrained on 1T tokens and GPT-J-6B being trained on 300-400B tokens
+This is mostly due to the LLaMA 7B model being pretrained on 1T tokens and GPT-J-6B being trained on 300-400B tokens
+
+You will need a 3090 or an A100 to run it; unfortunately, this current version won't work on a T4.
+
+Here is a Colab: https://colab.research.google.com/drive/1O1JjyGaC300BgSJoUbru6LuWAzRzEqCz?usp=sharing