---
language: code
datasets:
- code_search_net

---

This is an *unofficial* reupload of [razent/cotext-1-cc](https://huggingface.co/razent/cotext-1-cc) in the `SafeTensors` format using `transformers` `4.40.1`. The goal of this reupload is to keep older models that are still relevant baselines from going stale as HuggingFace evolves. Additionally, I may include minor corrections, such as the model's maximum-length configuration.
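
For example, loading from this reupload can request the SafeTensors weights explicitly. A minimal sketch, assuming a placeholder repository ID for this reupload:

```python
from transformers import AutoModelForSeq2SeqLM

# `use_safetensors=True` tells `transformers` to load the .safetensors
# weights and to error out if only pickle (.bin) weights are present.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "<this-reupload-repo-id>",  # placeholder: substitute this repository's ID
    use_safetensors=True,
)
```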

Original model card below:

---

# CoTexT (1-CC)

## Introduction
Paper: [CoTexT: Multi-task Learning with Code-Text Transformer](https://arxiv.org/abs/2105.08645)

Authors: _Long Phan, Hieu Tran, Daniel Le, Hieu Nguyen, James Anibal, Alec Peltekian, Yanfang Ye_

## How to use

Supported languages: `go`, `java`, `javascript`, `php`, `python`, `ruby`
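
Each language name doubles as the task prefix on the input text, as the usage example below shows. An illustrative sketch of the input format (the `python` prefix comes from the example below; the other languages presumably follow the same pattern):

```python
# Illustrative inputs of the form "<language>: <code> </s>".
inputs = [
    "python: def add(a, b): return a + b </s>",
    "java: public int add(int a, int b) { return a + b; } </s>",
]
```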

For more details, check out [our Github repo](https://github.com/justinphan3110/CoTexT).
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Run on GPU when available; fall back to CPU otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("razent/cotext-1-cc")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/cotext-1-cc").to(device)

# The task prefix ("python: ") tells the model which language the snippet is in.
sentence = "def add(a, b): return a + b"
text = "python: " + sentence + " </s>"

encoding = tokenizer(text, padding=True, return_tensors="pt")
input_ids = encoding["input_ids"].to(device)
attention_masks = encoding["attention_mask"].to(device)

outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_masks,
    max_length=256,
    early_stopping=True
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```
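
Equivalently, the `text2text-generation` pipeline wraps the same tokenizer and model pair; a minimal sketch:

```python
from transformers import pipeline

# The pipeline handles tokenization, generation, and decoding in one call.
generator = pipeline("text2text-generation", model="razent/cotext-1-cc")
result = generator("python: def add(a, b): return a + b </s>", max_length=256)
print(result[0]["generated_text"])
```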

## Citation
```
@inproceedings{phan-etal-2021-cotext,
    title = "{C}o{T}ex{T}: Multi-task Learning with Code-Text Transformer",
    author = "Phan, Long and Tran, Hieu and Le, Daniel and Nguyen, Hieu and Anibal, James and Peltekian, Alec and Ye, Yanfang",
    booktitle = "Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021)",
    year = "2021",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.nlp4prog-1.5",
    doi = "10.18653/v1/2021.nlp4prog-1.5",
    pages = "40--47"
}
```