alecsharpie committed 1a72659 (1 parent: a48b26c): Create README.md

---
language:
- code
license: bsd-3-clause
tags:
- code
- generative
datasets:
- bigcode/the-stack
---

# CodeGen (CodeGen-HTML 350M)

## Model description

CodeGen is a family of autoregressive language models for **program synthesis**, introduced in the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
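
For reference, the original checkpoints are published on the Hugging Face Hub under the `Salesforce` organisation and, to the best of my knowledge, follow a `codegen-{size}-{data}` naming scheme. The listing below is illustrative only; verify the exact model IDs on the Hub before using them.

```python
# Illustrative Hub IDs for the original CodeGen checkpoints (verify on the Hub).
# "Salesforce/codegen-350M-multi" is the base model for this HTML fine-tune.
base_variants = [
    f"Salesforce/codegen-{size}-{data}"
    for size in ("350M", "2B", "6B", "16B")
    for data in ("nl", "multi", "mono")
]
print(base_variants)
```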

The checkpoint included in this repository is fine-tuned on top of **CodeGen-Multi 350M**, where "Multi" means the model was initialized with *CodeGen-NL 350M* and further pre-trained on a dataset of multiple programming languages, and "350M" refers to the number of trainable parameters.

It has then been fine-tuned on the HTML code contained in the [bigcode/the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset on Hugging Face.

## Training data

This checkpoint (CodeGen-Multi 350M) was first initialized with *CodeGen-NL 350M*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.

Finally, it was fine-tuned on the HTML code contained in the [bigcode/the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset on Hugging Face.
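
For reference, the HTML portion of The Stack can be inspected with the `datasets` library. The snippet below is a minimal sketch: the `data_dir` value is an assumption based on the dataset's per-language layout (check the dataset card for the exact directory names), and the dataset is gated, so you need to accept its terms on the Hub and be logged in.

```python
from datasets import load_dataset

# Stream the HTML subset of The Stack without downloading everything.
# "data/html" is the assumed per-language directory; see the dataset card.
ds = load_dataset("bigcode/the-stack", data_dir="data/html", split="train", streaming=True)

sample = next(iter(ds))
print(sample["content"][:200])  # peek at the first HTML file
```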

## Training procedure

**Pre-training:**
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models was trained on multiple TPU-v4-512 instances by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.

**Fine-tuning:**
I fine-tuned the 350M model on a single A100 with 40 GB of memory, using a batch size of 10 and an input length of 512 tokens.
This used 80-90% of the available GPU memory.
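
The exact fine-tuning script is not included here. The sketch below is a hypothetical reconstruction of a comparable setup with the Hugging Face `Trainer`, using the batch size of 10 and 512-token inputs mentioned above; the number of epochs, learning rate, and other hyperparameters are assumptions.

```python
# Hypothetical reconstruction of the fine-tuning setup described above;
# this is NOT the exact script used to produce this checkpoint.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
tokenizer.pad_token = tokenizer.eos_token  # CodeGen's tokenizer has no pad token by default
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-multi")

# HTML subset of The Stack (gated dataset; "data/html" directory name assumed).
# Consider a slice such as split="train[:1%]" for quick experiments.
dataset = load_dataset("bigcode/the-stack", data_dir="data/html", split="train")

def tokenize(batch):
    # Truncate/pad each file to the 512-token input length reported above.
    return tokenizer(batch["content"], truncation=True, max_length=512, padding="max_length")

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="codegen-350m-html",
    per_device_train_batch_size=10,  # batch size reported above
    num_train_epochs=1,              # assumption
    learning_rate=5e-5,              # assumption
    fp16=True,
    logging_steps=100,
    save_steps=1000,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```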
38
+
39
+ ## Intended Use and Limitations
40
+
41
+ As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
42
+ However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
43
+
44
+ ## How to use
45
+
46
+ This model can be easily loaded using the `AutoModelForCausalLM` functionality:
47
+
48
+ ```python
49
+ from transformers import AutoTokenizer, AutoModelForCausalLM
50
+
51
+ tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
52
+ model = AutoModelForCausalLM.from_pretrained("alecsharpie/codegen_350m_html")
53
+
54
+ text = "<body>"
55
+
56
+ input_ids = tokenizer(text, return_tensors="pt").input_ids
57
+ generated_ids = model.generate(input_ids, max_length=128)
58
+ print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
59
+ ```
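
As noted under Intended Use and Limitations, prompts can also be given as comment strings. For this HTML fine-tune, an HTML comment is one plausible prompt format; the prompt text and sampling settings below are illustrative assumptions rather than tuned values.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
model = AutoModelForCausalLM.from_pretrained("alecsharpie/codegen_350m_html")

# Describe the desired markup in an HTML comment and let the model continue it.
prompt = "<!-- a simple login form with username and password fields -->\n<form"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_ids = model.generate(
    input_ids,
    do_sample=True,                       # sample instead of greedy decoding
    temperature=0.4,                      # illustrative value
    top_p=0.95,                           # illustrative value
    max_new_tokens=128,
    pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```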

## BibTeX entry and citation info

```bibtex
@article{Nijkamp2022ACP,
  title={A Conversational Paradigm for Program Synthesis},
  author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
  journal={arXiv preprint},
  year={2022}
}
```