MichelNivard committed on
Commit
f197d2e
1 Parent(s): 1d8d2f6

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -3,6 +3,8 @@ datasets:
 - bigcode/the-stack
 ---
 
+https://www.mitchelloharawild.com/blog/2018-07-10-hexwall_files/figure-html/final-1.png
+
 This model fine-tunes the base [santacoder model](https://huggingface.co/bigcode/santacoder) on all R and R Markdown code in "the stack", training for 6 epochs on 512-token snippets of R and R Markdown code. While there isn't that much R code in the stack (far less than Python or Java), this should at least give the model some R skills (see the usage sketch below the diff).
 
 Because I am on a limited compute budget, I trained the model on 512-token pieces of R code, which means it will do poorly on longer pieces of code. I will now proceed to QLoRA-train the base model on 2048-token-context pieces of R code for another 2 epochs, to ensure acceptable performance beyond 512 tokens (see the training sketch below the diff).
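
As a usage illustration: a minimal sketch of prompting the resulting checkpoint for R completions with the standard transformers generation API. The repo id below is a placeholder, not the model's actual id, and the prompt is arbitrary:

```python
# Minimal inference sketch. The checkpoint id is a PLACEHOLDER -- substitute
# the actual fine-tuned repo id. Santacoder checkpoints ship custom modeling
# code, hence trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "MichelNivard/r-santacoder"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Give the model the start of an R script and let it complete the next lines.
prompt = "# Fit a linear model and summarise it\nfit <- lm("
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```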
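And a sketch of what the planned QLoRA stage could look like using peft and bitsandbytes: the base model loaded in 4-bit, trainable LoRA adapters on top, 2048-token chunks, 2 epochs. All hyperparameters here (r, alpha, the "c_attn" target module, batch sizes) and the "data/r" subset path for the stack are illustrative assumptions, not the author's actual settings:

```python
# QLoRA sketch: 4-bit quantized base weights plus trainable LoRA adapters.
# Hyperparameters and the dataset path are ASSUMPTIONS for illustration.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

base = "bigcode/santacoder"
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,                     # the "Q" in QLoRA
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["c_attn"],                 # assumed attention layer name
    task_type="CAUSAL_LM",
))

# R sources from the stack, cut into 2048-token training pieces.
data = load_dataset("bigcode/the-stack", data_dir="data/r", split="train")
data = data.map(
    lambda batch: tokenizer(batch["content"], truncation=True, max_length=2048),
    batched=True, remove_columns=data.column_names,
)

Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(
        "santacoder-r-qlora", num_train_epochs=2,
        per_device_train_batch_size=1, gradient_accumulation_steps=8, bf16=True,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```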