jncraton committed
Commit bad3905
1 Parent(s): c5328a9

Update README.md

Files changed (1):
  1. README.md +56 -0
README.md CHANGED

---
license: bsd-3-clause
---

# CodeGen (CodeGen-Mono 350M)

This is an int4 quantization for use with [cformers](https://github.com/NolanoOrg/cformers).
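
This repository hosts the quantized weights; a minimal sketch of loading them through cformers is shown below. The `AutoInference` wrapper, import path, and argument names are assumptions based on the cformers README at the time of writing and may have changed, so check that repository for the current interface:

```python
# Assumption: cformers exposes an AutoInference wrapper as described in its README;
# the import path and argument names may differ in current versions.
from cformers import AutoInference as AI

ai = AI("Salesforce/codegen-350M-mono")  # cformers is expected to resolve this name to int4 weights
result = ai.generate("def hello_world():", num_tokens_to_generate=64)
print(result["token_str"])
```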

## Model description

CodeGen is a family of autoregressive language models for **program synthesis** from the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under three pre-training data variants (`NL`, `Multi`, `Mono`) and four model size variants (`350M`, `2B`, `6B`, `16B`).

The checkpoint included in this repository is denoted **CodeGen-Mono 350M** in the paper, where "Mono" means the model is initialized with *CodeGen-Multi 350M* and further pre-trained on a Python programming language dataset, and "350M" refers to the number of trainable parameters.

## Training data

This checkpoint (CodeGen-Mono 350M) was first initialized with *CodeGen-Multi 350M* and then pre-trained on the BigPython dataset, which consists of 71.7B tokens of Python code. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details.

## Training procedure

CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models was trained on multiple of Google's TPU-v4-512 instances, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.

## Evaluation results

We evaluate our models on two code generation benchmarks, HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.

## Intended Use and Limitations

As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts and calculating their likelihood.
However, the model is intended for, and best at, **program synthesis**: generating executable code from English prompts, where the prompts should take the form of a comment string. The model can also complete partially generated code.

## How to use

The original full-precision checkpoint can be loaded with the `AutoModelForCausalLM` functionality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the full-precision CodeGen-Mono 350M checkpoint
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

# Prompt with the start of a function definition and let the model complete it
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids

generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
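
Because the model works best with prompts phrased as comment strings (see the intended-use section above), a comment-style prompt can also be used. The prompt below is an illustrative assumption, reusing the `tokenizer` and `model` from the snippet above:

```python
# Hypothetical comment-style prompt; the model should complete the function that follows it
text = "# Write a function that returns the sum of two numbers\ndef"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```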

## BibTeX entry and citation info

```bibtex
@article{Nijkamp2022ACP,
  title={A Conversational Paradigm for Program Synthesis},
  author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
  journal={arXiv preprint},
  year={2022}
}
```