Delete README.md
README.md
DELETED
@@ -1,44 +0,0 @@
---
license: bsd-3-clause
---

# CodeT5+ 220M

## Model description

CodeT5+ is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e., encoder-only, decoder-only, and encoder-decoder) to support a wide range of code understanding and generation tasks. It is introduced in the paper:

*CodeT5+: Open Code Large Language Models for Code Understanding and Generation* by Yue Wang\*, Hung Le\*, Akhilesh Deepak Gotmare, Nghi D.Q. Bui, Junnan Li, Steven C.H. Hoi (\* indicates equal contribution).

Compared to the original CodeT5 family (base: 220M, large: 770M), CodeT5+ is pretrained with a diverse set of pretraining tasks, including span denoising, causal language modeling, contrastive learning, and text-code matching, to learn rich representations from both unimodal code data and bimodal code-text data.

Additionally, it employs a simple yet effective compute-efficient pretraining method to initialize the model components with frozen off-the-shelf LLMs such as CodeGen to efficiently scale up the model (i.e., 2B, 6B, 16B), and adopts a "shallow encoder and deep decoder" architecture. Furthermore, it is instruction-tuned to align with natural language instructions (see our InstructCodeT5+ 16B) following Code Alpaca.

## How to use

This model can be easily loaded using the `T5ForConditionalGeneration` functionality and employs the same tokenizer as the original CodeT5.

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

checkpoint = "Salesforce/codet5p-220m"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("def print_hello_world():<extra_id_0>", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# ==> print "Hello World"
```
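
Since the model description notes that CodeT5+ can also operate in encoder-only mode, the following is a minimal sketch (an illustrative assumption, not a recipe from the original model card) of using just the encoder of this checkpoint to produce a code embedding; the mean-pooling step is likewise an assumed choice.

```python
import torch
from transformers import T5ForConditionalGeneration, AutoTokenizer

checkpoint = "Salesforce/codet5p-220m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt")

with torch.no_grad():
    # Encoder-only mode: run just the encoder stack to get token-level states.
    encoder_outputs = model.encoder(**inputs)

# Mean-pool the token states into a single vector (assumed, simple pooling choice).
embedding = encoder_outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # (1, hidden_size)
```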

## Pretraining data

This checkpoint is trained on the stricter permissive subset of the deduplicated version of the github-code dataset. The data is preprocessed by retaining only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc"). Supported languages (9 in total) are as follows: `c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby`.
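
As a rough, hypothetical illustration of this filtering (the dataset name `codeparrot/github-code`, the `license`/`language` field names, and the exact license and language identifiers are assumptions, not the actual preprocessing pipeline), one could stream the dataset and keep only permissively licensed code in the supported languages:

```python
from datasets import load_dataset

# Assumed identifiers; check the dataset card for the exact license/language strings.
PERMISSIVE_LICENSES = {"mit", "apache-2.0", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc"}
LANGUAGES = {"C", "C++", "C#", "GO", "Java", "JavaScript", "PHP", "Python", "Ruby"}

# Stream the raw dataset and keep only records matching the filter above.
ds = load_dataset("codeparrot/github-code", split="train", streaming=True)
filtered = ds.filter(
    lambda ex: ex["license"] in PERMISSIVE_LICENSES and ex["language"] in LANGUAGES
)

for example in filtered.take(3):
    print(example["language"], example["license"], len(example["code"]))
```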

## Training procedure

This checkpoint is trained on unimodal code data in the first-stage pretraining, which uses a diverse set of pretraining tasks including span denoising and two variants of causal language modeling. Please refer to the paper for more details.
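
To make the span-denoising objective concrete, here is a small illustrative sketch of the general T5-style formulation (an assumption about the setup, not the actual pretraining code): spans in the input are replaced with sentinel tokens such as `<extra_id_0>`, and the decoder learns to reconstruct the masked spans.

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

checkpoint = "Salesforce/codet5p-220m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

# T5-style span denoising: a masked span in the source is replaced by a sentinel,
# and the target lists each sentinel followed by the span it hides.
source = "def greet(name): <extra_id_0> f'Hello, {name}'"
target = "<extra_id_0> return <extra_id_1>"

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# One training step would minimize this sequence-to-sequence cross-entropy loss.
loss = model(**inputs, labels=labels).loss
print(float(loss))
```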

## Evaluation results

CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: zero-shot, finetuning, and instruction-tuning. Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to SoTA baselines, e.g., 8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4). In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models with fewer than a billion parameters significantly outperform many LLMs of up to 137B parameters. Particularly, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model. Please refer to the paper for more details.
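
The pass@1 and pass@10 numbers above follow the standard unbiased estimator used for HumanEval; as a small illustrative sketch (not part of the original model card), pass@k can be computed from n generated samples of which c pass the unit tests:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of the probability that at least one of k samples passes,
    given n generated samples of which c passed."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples per problem, 40 of them correct.
print(pass_at_k(n=200, c=40, k=1))   # = 0.20
print(pass_at_k(n=200, c=40, k=10))  # probability that at least one of 10 passes
```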

## BibTeX entry and citation info

```bibtex
@article{wang2023codet5plus,
  title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
  author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
  journal={arXiv preprint},
  year={2023}
}
```