---
language: code
tags:
- summarization
widget:
- text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
---

# CodeTrans model for program synthesis

## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Citation Information](#citation-information)

## Model Details
- **Model Description:** This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It was pre-trained with transfer learning on 7 unsupervised datasets in the software development domain and then fine-tuned on the program synthesis task for Lisp-inspired DSL code.
- **Developed by:** [Ahmed Elnaggar](https://www.linkedin.com/in/prof-ahmed-elnaggar/), [Wei Ding](https://www.linkedin.com/in/wei-ding-92561270/)
- **Model Type:** Summarization
- **Language(s):** English
- **License:** Unknown
- **Resources for more information:**
  - [Research Paper](https://arxiv.org/pdf/2104.02443.pdf)
  - [GitHub Repo](https://github.com/agemagician/CodeTrans)

## How to Get Started With the Model

Here is how to use this model to generate Lisp-inspired DSL code with the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```

Run this example in this [Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/transfer%20learning%20fine-tuning/small_model.ipynb).
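
The pipeline returns one dictionary per input, with the generated DSL code under the `summary_text` key. Continuing the snippet above (pass `device=-1` instead of `device=0` to run on CPU):

```python
result = pipeline([tokenized_code])
# e.g. [{'summary_text': '<generated Lisp-inspired DSL program>'}]
print(result[0]["summary_text"])
```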

## Uses

#### Direct Use

The model can be used to generate Lisp-inspired DSL code from a natural-language description of the task.

## Risks, Limitations and Biases

As detailed in this model's [publication](https://arxiv.org/pdf/2104.02443.pdf), this model makes use of the dataset [One Billion Word Language Model Benchmark corpus](https://www.researchgate.net/publication/259239818_One_Billion_Word_Benchmark_for_Measuring_Progress_in_Statistical_Language_Modeling) in order to gather the self-supervised English data samples.

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
It should also be noted that language models pretrained on text corpora such as the One Billion Word Language Model Benchmark corpus have been examined further: [Ngo, Araújo et al. (2021)](https://www.researchgate.net/publication/355582954_No_News_is_Good_News_A_Critique_of_the_One_Billion_Word_Benchmark) report that models trained on this corpus

> “generate text in the linguistic style of news, without any grounding in the real world. In addition to potential harms from models which are inadvertently optimized for generating fake news.”

The same publication goes on to warn that the One Billion Word Language Model Benchmark corpus

> contains sentences which contain words commonly found on blocklists. While these sentences may have plausibly been used in expository contexts within the article, the destructive sentence-level preprocessing and shuffling applied to lm1b [One Billion Word Language Model Benchmark corpus] removes all long-range structure from the text and makes it infeasible to track the context and intent of individual examples.

[Ngo, Araújo et al. (2021)](https://www.researchgate.net/publication/355582954_No_News_is_Good_News_A_Critique_of_the_One_Billion_Word_Benchmark)

## Training

#### Training Data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

The authors additionally provide notes about the vocabulary used in the [associated paper](https://arxiv.org/pdf/2104.02443.pdf):

> We used the SentencePiece model (Kudo, 2018) to construct the vocabulary for this research, as well as to decode and encode the input/output.
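
As an illustration only (not taken from the paper), the SentencePiece vocabulary shipped with this checkpoint can be inspected through its tokenizer on the Hugging Face Hub:

```python
from transformers import AutoTokenizer

# Load the SentencePiece-based tokenizer bundled with this checkpoint.
tokenizer = AutoTokenizer.from_pretrained(
    "SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune"
)

text = "compute the difference of elements in a and b"
ids = tokenizer.encode(text)                  # encode a description into vocabulary ids
print(tokenizer.convert_ids_to_tokens(ids))   # show the SentencePiece pieces
print(tokenizer.decode(ids))                  # decode back to text
print(tokenizer.vocab_size)                   # size of the vocabulary
```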

## Training procedure

#### Preprocessing

##### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using a sequence length of 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used for pre-training is AdaFactor with an inverse square root learning rate schedule.
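
The authors' TPU pre-training setup is not reproduced in this card; the sketch below only illustrates how an AdaFactor optimizer with an inverse square root schedule can be configured in PyTorch via the `transformers` library (the model choice and hyperparameters are assumptions):

```python
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor, AdafactorSchedule

# Any T5-style encoder-decoder model would do here; t5-small matches the base architecture.
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# With relative_step=True, Adafactor applies its built-in inverse square root decay,
# so no external learning rate scheduler is required.
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
    lr=None,
)
lr_schedule = AdafactorSchedule(optimizer)  # exposes the internal learning rate, e.g. for logging
```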

##### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 5,000 steps in total, using a sequence length of 512 (batch size 256) and only the dataset containing Lisp-inspired DSL data.
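
The original TPU fine-tuning script is not part of this card. As a rough sketch only, assuming the downloadable dataset has been converted to a Hugging Face `datasets` object with hypothetical `input_text`/`target_text` columns, a comparable fine-tuning run with the `transformers` Trainer could be set up as follows (hyperparameters not stated above are assumptions):

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def preprocess(batch):
    # `input_text` / `target_text` are placeholder column names for the
    # natural-language descriptions and the Lisp-inspired DSL programs.
    model_inputs = tokenizer(batch["input_text"], max_length=512, truncation=True)
    labels = tokenizer(batch["target_text"], max_length=512, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

args = Seq2SeqTrainingArguments(
    output_dir="codetrans-lisp-finetune",
    max_steps=5_000,                # matches the number of fine-tuning steps reported above
    per_device_train_batch_size=8,  # an effective batch size of 256 needs accumulation or more devices
    gradient_accumulation_steps=32,
    learning_rate=1e-4,             # assumption; the card does not state the fine-tuning learning rate
)

# trainer = Seq2SeqTrainer(
#     model=model,
#     args=args,
#     train_dataset=raw_dataset.map(preprocess, batched=True),  # `raw_dataset` is the loaded fine-tuning set
#     data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
# )
# trainer.train()
```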

## Evaluation

#### Results

For the program synthesis task, the different models achieve the following results on the Lisp-inspired DSL (in BLEU score):

Test results:

| Language / Model      |      LISP      |
| --------------------- | :------------: |
| CodeTrans-ST-Small    |     89.43      |
| CodeTrans-ST-Base     |     89.65      |
| CodeTrans-TF-Small    |     90.30      |
| CodeTrans-TF-Base     |     90.24      |
| CodeTrans-TF-Large    |     90.21      |
| CodeTrans-MT-Small    |     82.88      |
| CodeTrans-MT-Base     |     86.99      |
| CodeTrans-MT-Large    |     90.27      |
| CodeTrans-MT-TF-Small |   **90.31**    |
| CodeTrans-MT-TF-Base  |     90.30      |
| CodeTrans-MT-TF-Large |     90.17      |
| State of the art      |     85.80      |
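
The card does not state which BLEU implementation was used for these numbers. As a generic illustration, a corpus-level BLEU score between generated and reference programs can be computed with `sacrebleu` (the example strings are hypothetical):

```python
import sacrebleu

# Hypothetical model outputs and reference programs.
hypotheses = ["( reduce a 0 + )"]
references = [["( reduce a 0 + )"]]  # one reference stream, with one reference per hypothesis

# corpus_bleu expects a list of hypotheses and a list of reference streams.
score = sacrebleu.corpus_bleu(hypotheses, references)
print(score.score)
```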

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type is reported based on the [associated paper](https://arxiv.org/pdf/2104.02443.pdf).

- **Hardware Type:** Nvidia RTX 8000 GPUs
- **Hours used:** Unknown
- **Cloud Provider:** GCP TPU v2-8 and v3-8
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown

## Citation Information

```bibtex
@misc{elnaggar2021codetrans,
    title={CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing},
    author={Ahmed Elnaggar and Wei Ding and Llion Jones and Tom Gibbs and Tamas Feher and Christoph Angerer and Silvia Severini and Florian Matthes and Burkhard Rost},
    year={2021},
    eprint={2104.02443},
    archivePrefix={arXiv},
    primaryClass={cs.SE}
}
```