---
language: javascript
datasets:
- CodeSearchNet
---

# roberta-javascript

This is a [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **JavaScript** masked language modeling task.

To load the model (required packages: `pip install transformers sentencepiece`):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-javascript")
model = AutoModelForMaskedLM.from_pretrained("dbernsohn/roberta-javascript")

fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=tokenizer
)
```

You can then use this model to fill masked tokens in JavaScript code:

```python
code = """
var i;
for (i = 0; i < cars.<mask>; i++) {
  text += cars[i] + "<br>";
}
""".lstrip()

# Strip the BPE space marker "Ġ" from predicted tokens and rank by score
pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('length', 0.9959614872932434),
#  ('i', 0.00027875584783032537),
#  ('len', 0.0002283261710545048),
#  ('nodeType', 0.00013731322542298585),
#  ('index', 7.5289819505997e-05)]
```
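The post-processing step above can be tried without downloading the model by mocking the pipeline's output. The prediction list below is illustrative, not real model output; the dict shape (`token_str`, `score`) matches what the `fill-mask` pipeline returns:

```python
# Mocked fill-mask output in the shape returned by the transformers pipeline:
# a list of dicts with "token_str" and "score". Scores here are made up.
predictions = [
    {"token_str": "Ġlength", "score": 0.99},
    {"token_str": "Ġi", "score": 0.0003},
    {"token_str": "Ġlen", "score": 0.0002},
]

# Strip the BPE space marker "Ġ" and rank candidates by score, as above.
pred = {p["token_str"].replace("Ġ", ""): p["score"] for p in predictions}
ranked = sorted(pred.items(), key=lambda kv: kv[1], reverse=True)

# Substitute the top-ranked token back into the masked source line.
code = "for (i = 0; i < cars.<mask>; i++) {"
completed = code.replace("<mask>", ranked[0][0])
print(completed)  # for (i = 0; i < cars.length; i++) {
```

With the real model, `fill_mask(code)` replaces the `predictions` list here; the rest of the post-processing is identical.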

The full training process and hyperparameters are documented in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM).

> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)