MichelNivard committed on
Commit
e64d759
1 Parent(s): de1e985

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -6,9 +6,9 @@ datasets:
 
 ![hex_stickers](https://www.mitchelloharawild.com/blog/2018-07-10-hexwall_files/figure-html/final-1.png)
 
- This is a model that trains the base [santacoder model](https://huggingface.co/bigcode/santacoder) on all r code and rmarkdown code in "the stack". Training for 6 epochs on 512 toklen length snippets of r and rmarkdown code. While there isnt that much r code in the stack (far less then python or java...) this should at least give the model some r skills
+ This model continues training the base [santacoder model](https://huggingface.co/bigcode/santacoder) on all R and R Markdown code in "The Stack", for 6 epochs on 512-token snippets of R and R Markdown code. While there isn't that much R code in The Stack (far less than Python or Java), this should at least give the model some R skills.
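
A minimal sketch of what that data preparation could look like with the usual `datasets` + `transformers` stack (the `data_dir` paths, column names, and packing scheme are assumptions, not the actual training script):

```python
# Sketch: pack R and R Markdown files from The Stack into 512-token chunks.
# Dataset paths and the packing scheme are assumptions, not the author's script.
from datasets import load_dataset, concatenate_datasets
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder")
block_size = 512  # matches the 512-token snippets described above

# Assumed layout: The Stack exposes per-language subsets under data/<lang>.
r_code = load_dataset("bigcode/the-stack", data_dir="data/r", split="train")
rmd_code = load_dataset("bigcode/the-stack", data_dir="data/rmarkdown", split="train")
raw = concatenate_datasets([r_code, rmd_code])

def tokenize(batch):
    return tokenizer(batch["content"])  # "content" holds the file text in The Stack

def group_texts(batch):
    # Concatenate all token ids, then cut into fixed 512-token blocks.
    ids = [tok for ex in batch["input_ids"] for tok in ex]
    total = (len(ids) // block_size) * block_size
    chunks = [ids[i : i + block_size] for i in range(0, total, block_size)]
    return {"input_ids": chunks, "labels": [list(c) for c in chunks]}

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)
lm_dataset = tokenized.map(group_texts, batched=True, remove_columns=tokenized.column_names)
```
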
- BEcause I am on a limited compute budget, I trained the modle on 512 token length pieces of R code, this means that for longer pievces of code it will do poorly. I will now proseed to QLoRa train the base model on 2048 context length pieces of R code for another 2 epochs (to ensure acceptable performance beyond 512 tokens).
+ Because I am on a limited compute budget, I trained the model on 512-token pieces of R code, which means it will do poorly on longer pieces of code. I will now proceed to QLoRA-train the base model on 2048-token-context pieces of R code for another 2 epochs (to ensure acceptable performance beyond 512 tokens).
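
For that long-context stage, a typical QLoRA setup with `peft` + `bitsandbytes` would look roughly like this (the LoRA hyperparameters and `target_modules` names are illustrative placeholders, not the author's settings):

```python
# Sketch: QLoRA fine-tuning of santacoder at 2048-token context.
# All hyperparameters here are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                  # freeze base weights in 4-bit (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "bigcode/santacoder",
    quantization_config=bnb_config,
    trust_remote_code=True,             # santacoder ships custom modeling code
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],          # assumed attention projection name; inspect the model to confirm
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Train on 2048-token sequences (e.g. with transformers.Trainer) for the extra 2 epochs.
```
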
 Then I intend to instruction-tune the model on all Stack Overflow questions and answers with the tag 'r' from the 2011 to 2016 timeframe, presenting Stack Overflow questions as <|human|> and the best answer as <|assistant|>. This will teach the model that it is expected to produce an answer to a user's question about R.
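
Rendering a Stack Overflow question/answer pair into that format might look like the sketch below (the exact whitespace and end-of-sequence handling are assumptions, not a documented template):

```python
# Sketch: turn a Stack Overflow Q&A pair into the <|human|> / <|assistant|>
# training format described above. Separators and EOS handling are assumptions.
def format_example(question: str, best_answer: str, eos_token: str = "<|endoftext|>") -> str:
    return f"<|human|> {question.strip()}\n<|assistant|> {best_answer.strip()}{eos_token}"

print(format_example(
    "How do I select the rows of a data.frame where a column is NA?",
    "Use `df[is.na(df$x), ]`, or with dplyr: `filter(df, is.na(x))`.",
))
```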