---
datasets:
- bigcode/the-stack
---
# hexcoder

![hex_stickers](https://www.mitchelloharawild.com/blog/2018-07-10-hexwall_files/figure-html/final-1.png)

This model fine-tunes the base [santacoder model](https://huggingface.co/bigcode/santacoder) on all R and RMarkdown code in "the stack", training for 6 epochs on 512-token snippets. While there isn't that much R code in the stack (far less than Python or Java), this should at least give the model some R skills.
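For reference, a minimal sketch of how such 512-token snippets could be prepared from the stack is shown below. The `data_dir` name and the chunking details are assumptions for illustration, not the actual training script.

```python
# Sketch only: data_dir name and chunking are assumptions, not the real pipeline.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder")

# Stream the R subset of the stack (directory name assumed).
r_code = load_dataset("bigcode/the-stack", data_dir="data/r",
                      split="train", streaming=True)

def chunk_512(example):
    # Split each source file into 512-token snippets, matching the setup above.
    ids = tokenizer(example["content"])["input_ids"]
    return {"chunks": [ids[i:i + 512] for i in range(0, len(ids), 512)]}

snippets = r_code.map(chunk_512)
```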

Because I am on a limited compute budget, I trained the model on 512-token pieces of R code, which means it will do poorly on longer pieces of code. I will next fine-tune the model on 2048-token pieces of R code in a parameter-efficient way, for another 2 epochs, to ensure acceptable performance beyond 512 tokens.
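One common parameter-efficient approach is LoRA; the sketch below shows what that could look like with the `peft` library. The target module name and hyperparameters are illustrative guesses, not the settings actually used.

```python
# Rough LoRA sketch; target modules and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("bigcode/santacoder",
                                             trust_remote_code=True)

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # assumed attention projection name
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trained
```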

Then I intend to instruction-tune the model on all Stack Overflow questions and answers with the tag 'r' from the 2011 to 2016 timeframe, presenting Stack Overflow questions as `<|human|>` and the best answer as `<|assistant|>`. This will teach the model that it is expected to produce an answer to a user's question about R.
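An illustrative sketch of that prompt format is below; the special tokens match the description above, but the exact training template and example are hypothetical.

```python
# Hypothetical formatting of a Stack Overflow Q&A pair into the prompt format above.
def format_example(question: str, best_answer: str) -> str:
    return f"<|human|>{question}<|assistant|>{best_answer}"

prompt = format_example(
    "How do I select rows of a data.frame where a column is greater than 5?",
    "Use df[df$x > 5, ], or dplyr::filter(df, x > 5).",
)
print(prompt)
```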

The intended outcome is a reasonably adequate model that can answer basic R user questions, and more broadly an evaluation of the data, sources, and training needed to produce great open-source code-generating models for R.