codesage committed 58647fb (1 parent: 46f3b50)

Update README.md
Files changed (1): README.md (+49 -0)
---
license: apache-2.0
datasets:
- bigcode/the-stack-dedup
library_name: transformers
language:
- code
---

## CodeSage-Large

### Model description
CodeSage is a new family of open code embedding models with an encoder architecture that supports a wide range of source code understanding tasks. It is introduced in the paper:

[Code Representation Learning At Scale by Dejiao Zhang*, Wasi Uddin Ahmad*, Ming Tan, Hantian Ding, Ramesh Nallapati, Dan Roth, Xiaofei Ma, Bing Xiang](https://arxiv.org/abs/2402.01935) (* indicates equal contribution).

### Pretraining data
This checkpoint is trained on The Stack data (https://huggingface.co/datasets/bigcode/the-stack-dedup). Supported languages (9 in total) are as follows: c, c-sharp, go, java, javascript, typescript, php, python, ruby.
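As a quick, illustrative way to inspect this corpus (not part of the model's own training pipeline), the per-language subsets can be streamed with the `datasets` library; the `data_dir` path below assumes the dataset card's documented `data/<language>` layout and that you have accepted the dataset's terms on the Hub:

```python
from datasets import load_dataset

# Stream the deduplicated Python subset of The Stack without downloading it all.
# The data_dir convention ("data/<language>") follows the dataset card; adjust as needed.
ds = load_dataset(
    "bigcode/the-stack-dedup",
    data_dir="data/python",
    split="train",
    streaming=True,
)
sample = next(iter(ds))
print(sample["content"][:200])  # first 200 characters of one source file
```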
### Training procedure
This checkpoint is first trained on code data via masked language modeling (MLM) and then on bimodal text-code pair data. Please refer to the paper for more details.
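To make the bimodal stage concrete, below is a minimal, schematic sketch of a generic contrastive (InfoNCE-style) objective over paired text and code embeddings. It illustrates the general idea only; it is not the paper's exact loss or training recipe.

```python
# Illustrative sketch only: a generic in-batch contrastive loss for text/code pairs.
import torch
import torch.nn.functional as F

def info_nce_loss(text_emb, code_emb, temperature=0.05):
    # text_emb, code_emb: (batch, dim) embeddings of paired text and code
    text_emb = F.normalize(text_emb, dim=-1)
    code_emb = F.normalize(code_emb, dim=-1)
    logits = text_emb @ code_emb.T / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matched pairs lie on the diagonal; all other in-batch pairs act as negatives.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
```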
### How to use
This checkpoint consists of an encoder (a 1.3B-parameter model) that can be used to extract 2048-dimensional code embeddings. It can be loaded with the AutoModel functionality and uses the StarCoder tokenizer (https://arxiv.org/pdf/2305.06161.pdf).

```python
from transformers import AutoModel, AutoTokenizer

checkpoint = "codesage/codesage-large"
device = "cuda"  # for GPU usage, or "cpu" for CPU usage

# trust_remote_code=True is required because the checkpoint ships custom modeling code
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True).to(device)

inputs = tokenizer.encode("def print_hello_world():\tprint('Hello World!')", return_tensors="pt").to(device)
embedding = model(inputs)[0]  # token-level embeddings, shape (1, seq_len, 2048)
print(f'Dimension of the embedding: {embedding[0].size()}')
# Dimension of the embedding: torch.Size([13, 2048])
```
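The snippet above returns token-level embeddings (one 2048-dimensional vector per token). As an illustrative follow-on that reuses the `tokenizer`, `model`, and `device` defined above, a single vector per snippet can be obtained by mean-pooling the token embeddings, which then allows e.g. cosine-similarity comparisons. Note that mean pooling is just one common choice here, not necessarily the pooling used in the paper.

```python
import torch.nn.functional as F

def embed(code: str):
    # Mean-pool token embeddings into one normalized vector per snippet.
    ids = tokenizer.encode(code, return_tensors="pt").to(device)
    token_embeddings = model(ids)[0]  # (1, seq_len, 2048)
    return F.normalize(token_embeddings.mean(dim=1), dim=-1)

a = embed("def add(a, b):\n    return a + b")
b = embed("def sum_two(x, y):\n    return x + y")
print("cosine similarity:", (a @ b.T).item())
```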
### BibTeX entry and citation info
```bibtex
@inproceedings{zhang2024codesage,
  title={CodeSage: Code Representation Learning At Scale},
  author={Dejiao Zhang* and Wasi Ahmad* and Ming Tan and Hantian Ding and Ramesh Nallapati and Dan Roth and Xiaofei Ma and Bing Xiang},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=vfzRRjumpX}
}
```