osolmaz committed
Commit b0a9e09
1 Parent(s): 9daee60

Update README.md

Files changed (1)
  1. README.md +17 -48
README.md CHANGED
@@ -2,60 +2,29 @@
 license: bsd-3-clause
 ---
 # CodeGen (CodeGen-Mono 350M)
-This is a clone of the CodeGen project, optimized to run on CPU using ONNX.
-We created an ONNX version of the original model so that ICortex kernel users
-can easily generate code for their use case without using a GPU.
-The original model can be found [here](https://huggingface.co/Salesforce/codegen-350M-mono).
 
-## Model description
+Clone of [Salesforce/codegen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono) converted to ONNX and optimized.
 
-CodeGen is a family of autoregressive language models for **program synthesis** from the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
-
-The checkpoint included in this repository is denoted as **CodeGen-Mono 350M** in the paper, where "Mono" means the model was initialized with *CodeGen-Multi 350M* and further pre-trained on a Python programming language dataset, and "350M" refers to the number of trainable parameters.
-
-## Training data
-
-This checkpoint (CodeGen-Mono 350M) was first initialized with *CodeGen-Multi 350M*, and then pre-trained on the BigPython dataset, consisting of 71.7B tokens of Python code. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
-
-## Training procedure
-
-CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
-The family of models was trained using multiple Google TPU-v4-512 instances, leveraging data and model parallelism.
-See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
-
-## Evaluation results
-
-We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
-
-
-## Intended Use and Limitations
-
-As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts and calculating their likelihood.
-However, the model is intended for, and performs best at, **program synthesis**: generating executable code from English prompts, where the prompts should be given as comment strings. The model can also complete partially generated code.
-
-## How to use
-
-This model can be easily loaded using the `AutoModelForCausalLM` functionality:
+## Usage
 
 ```python
-from transformers import AutoTokenizer, AutoModelForCausalLM
-tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
-model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")
+from transformers import AutoTokenizer
+from optimum.onnxruntime import ORTModelForCausalLM
+
+model = ORTModelForCausalLM.from_pretrained("TextCortex/codegen-350M-optimized")
+tokenizer = AutoTokenizer.from_pretrained("TextCortex/codegen-350M-optimized")
 
 text = "def hello_world():"
 input_ids = tokenizer(text, return_tensors="pt").input_ids
-
-generated_ids = model.generate(input_ids, max_length=128)
-print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
+generated_ids = model.generate(
+    input_ids,
+    max_length=64,
+    temperature=0.1,
+    num_return_sequences=1,
+    early_stopping=True,
+)
+out = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
+print(out)
 ```
 
-## BibTeX entry and citation info
-
-```bibtex
-@article{Nijkamp2022ACP,
-  title={A Conversational Paradigm for Program Synthesis},
-  author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
-  journal={arXiv preprint},
-  year={2022}
-}
-```
+Refer to the original model card for more details.
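
A note on the committed snippet: in `transformers`, `temperature` only takes effect when `do_sample=True` is passed to `generate`, and `early_stopping` only applies to beam search, so as committed the call decodes greedily and both arguments are ignored. Below is a minimal sketch (not part of the commit) that enables sampling and uses a comment-style prompt, the input form the original model card recommends; the prompt text and parameter choices are illustrative assumptions, and running it requires `optimum` installed with the `onnxruntime` extra.

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM

# Requires: pip install optimum[onnxruntime]
model = ORTModelForCausalLM.from_pretrained("TextCortex/codegen-350M-optimized")
tokenizer = AutoTokenizer.from_pretrained("TextCortex/codegen-350M-optimized")

# CodeGen works best when the prompt is a comment describing the task
# (illustrative prompt, not from the commit).
prompt = "# Return the n-th Fibonacci number\ndef fib(n):"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

generated_ids = model.generate(
    input_ids,
    do_sample=True,   # without this, temperature is ignored
    temperature=0.1,  # low temperature keeps sampling near-greedy
    max_length=64,
    pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```

The committed snippet still runs as-is; it simply behaves as if `temperature` were not set.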
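
The commit does not say how the ONNX conversion and optimization were performed. With current `optimum`, an equivalent export could look like the sketch below; the `export=True` path and the optimizer calls are assumptions about the tooling, not taken from the commit, and decoder support in `ORTOptimizer` varies by `optimum` version.

```python
from optimum.onnxruntime import ORTModelForCausalLM, ORTOptimizer
from optimum.onnxruntime.configuration import OptimizationConfig

# Export the original PyTorch checkpoint to ONNX on the fly.
model = ORTModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono", export=True)
model.save_pretrained("codegen-350M-onnx")

# Apply onnxruntime graph optimizations to the exported model.
optimizer = ORTOptimizer.from_pretrained(model)
optimization_config = OptimizationConfig(optimization_level=2)
optimizer.optimize(optimization_config=optimization_config, save_dir="codegen-350M-optimized")
```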