MichelNivard committed
Commit ac2db14
1 Parent(s): 1d341b0

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -8,8 +8,8 @@ datasets:
 
 This is a model that trains the base [santacoder model](https://huggingface.co/bigcode/santacoder) on all R and R Markdown code in "The Stack". It was trained for 6 epochs on 512-token snippets of R and R Markdown code. While there isn't that much R code in The Stack (far less than Python or Java...), this should at least give the model some R skills.
 
-Because I am on a limited compute budget, I trained the model on 512 token length pieces of R code, this means that for longer pieces of code it will do poorly. I will now proseed to QLoRa train the base model on 2048 context length pieces of R code for another 2 epochs (to ensure acceptable performance beyond 512 tokens).
+Because I am on a limited compute budget, I trained the model on 512-token pieces of R code, which means it will do poorly on longer pieces of code. I will now proceed to fine-tune the base model on 2048-token-context pieces of R code in a parameter-efficient way for another 2 epochs (to ensure acceptable performance beyond 512 tokens); a sketch of this step appears after the diff.
 
-Then I intned to instruction tune the model on all stackoverflow questions and anwsers with the tag 'r' in the 2011 to 2016 timeframe, apresenting stackoverflow questions as <|human|> and the best answer as <|assistant|>. This will teach the modle that it is expected to produce an answer to a user's question about 'r'.
+Then I intend to instruction-tune the model on all Stack Overflow questions and answers with the tag 'r' from the 2011 to 2016 timeframe, presenting Stack Overflow questions as <|human|> and the best answer as <|assistant|> (see the formatting sketch below). This will teach the model that it is expected to produce an answer to a user's question about R.
 
-The intended outcome is a reasonably adequate model which can answer basic r user questions.
+The intended outcome is a reasonably adequate model which can answer basic R user questions and, more broadly, an evaluation of the data, sources, and training needed to produce great open-source code-generating models for R.
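
Below is a minimal sketch of the parameter-efficient fine-tuning step the diff describes, using QLoRA-style 4-bit loading with LoRA adapters via the `peft` and `transformers` libraries. The dataset file (`r_code.jsonl`), the LoRA target module name, and all hyperparameters are illustrative assumptions, not the author's actual configuration.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "bigcode/santacoder"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # santacoder has no pad token by default

# Load the base model in 4-bit; only the LoRA adapters will be trained.
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
    ),
    trust_remote_code=True,  # santacoder ships custom modeling code
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(
    model,
    LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        # Assumed attention projection name; verify with model.named_modules().
        target_modules=["c_attn"],
        task_type="CAUSAL_LM",
    ),
)

# Hypothetical local dump of R files with a "content" column; swap in the real corpus.
ds = load_dataset("json", data_files="r_code.jsonl", split="train")

def tokenize(batch):
    # 2048-token windows, per the plan above (vs. the 512 used in the first run).
    return tokenizer(batch["content"], truncation=True, max_length=2048)

ds = ds.map(tokenize, batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="santacoder-r-lora",
        num_train_epochs=2,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-4,
        bf16=True,
    ),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Loading the frozen base in 4-bit while training only low-rank adapters keeps memory within a small compute budget, which is the point of doing this step in a parameter-efficient way.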
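And here is a minimal sketch of the instruction-formatting step: each Stack Overflow question becomes a `<|human|>` turn and its accepted/best answer an `<|assistant|>` turn, as the diff states. The Q&A record below is invented for illustration; only the two turn markers come from the plan above.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder")

# Register the turn markers as special tokens so the tokenizer never splits them.
# (If new tokens are added, the model's embeddings must be resized to match.)
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|human|>", "<|assistant|>"]}
)

def format_example(question: str, best_answer: str) -> str:
    """Render one Q&A pair as a single training string, closing the turn with EOS."""
    return f"<|human|>{question}<|assistant|>{best_answer}{tokenizer.eos_token}"

# Hypothetical pair in the style of the 2011-2016 'r'-tagged dump:
print(format_example(
    "How do I compute row means of a data frame in R?",
    "Use rowMeans(), e.g. df$mu <- rowMeans(df[, c('a', 'b')]).",
))
```

Concatenating question and answer into one string with explicit turn markers is what teaches the model that a `<|human|>` prompt should be followed by an `<|assistant|>` answer.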