Transformers · PyTorch · code · custom_code · Inference Endpoints
tomaarsen (HF staff) committed
Commit dd7c7a1 · 1 Parent(s): 542998f

Update code snippet to use sentence-level embeddings

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -24,7 +24,7 @@ This checkpoint is first trained on code data via masked language modeling (MLM)
 ### How to use
 This checkpoint consists of an encoder (130M model), which can be used to extract code embeddings of 1024 dimension. It can be easily loaded using the AutoModel functionality and employs the Starcoder tokenizer (https://arxiv.org/pdf/2305.06161.pdf).
 
-```
+```python
 from transformers import AutoModel, AutoTokenizer
 
 checkpoint = "codesage/codesage-small"
@@ -33,10 +33,10 @@ device = "cuda" # for GPU usage or "cpu" for CPU usage
 tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
 model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True).to(device)
 
-inputs = tokenizer.encode("def print_hello_world():\tprint('Hello World!')", return_tensors="pt").to(device)
-embedding = model(inputs)[0]
-print(f'Dimension of the embedding: {embedding[0].size()}')
-# Dimension of the embedding: torch.Size([13, 1024])
+inputs = tokenizer("def print_hello_world():\tprint('Hello World!')", return_tensors="pt").to(device)
+embedding = model(**inputs).pooler_output
+print(f'Dimension of the embedding: {embedding.size()}')
+# Dimension of the embedding: torch.Size([1, 1024])
 ```
 
 ### BibTeX entry and citation info
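
With this change, the snippet returns a single pooled, sentence-level vector of shape `[1, 1024]` per input instead of per-token hidden states (the previous output, `torch.Size([13, 1024])`, held one 1024-dim row per token of the example). As a minimal sketch of what that buys downstream, the pooled vectors can be compared directly, e.g. with cosine similarity. The sketch assumes only what the diff shows, namely that the checkpoint's remote code exposes `pooler_output`, plus standard PyTorch; the two example snippets are arbitrary.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

checkpoint = "codesage/codesage-small"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True).to(device)

# Two (arbitrary) code snippets to embed and compare
snippets = [
    "def print_hello_world():\tprint('Hello World!')",
    "def greet():\tprint('Hello World!')",
]

embeddings = []
with torch.no_grad():
    for code in snippets:
        inputs = tokenizer(code, return_tensors="pt").to(device)
        # pooler_output: one sentence-level vector of shape [1, 1024] per input,
        # as shown in the updated README snippet above
        embeddings.append(model(**inputs).pooler_output)

# Cosine similarity between the two pooled embeddings (returns a shape-[1] tensor)
similarity = F.cosine_similarity(embeddings[0], embeddings[1]).item()
print(f"Cosine similarity: {similarity:.4f}")
```

Because each input now maps to exactly one vector, no extra pooling step is needed before indexing or similarity search, which is the usual reason to prefer sentence-level over token-level embeddings.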