---
widget:
- text: "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
---

# CodeTrans model for code documentation generation ruby

Pretrained model on the programming language Ruby, using the T5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized Ruby code functions: it works best with tokenized Ruby functions.

## Model description

This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets, and was then fine-tuned on the code documentation generation task for Ruby functions/methods.

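As a quick sanity check of that setup, the sketch below loads the model's own SentencePiece tokenizer and shows how a tokenized Ruby function is split into subword pieces. Only the checkpoint name comes from this card; the rest is illustrative and not part of the original documentation.

```python
# Minimal sketch: inspect the model's SentencePiece vocabulary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "SEBIS/code_trans_t5_base_code_documentation_generation_ruby_multitask_finetune"
)

print(tokenizer.vocab_size)                                # size of the SentencePiece vocabulary
print(tokenizer.tokenize("def add ( a , b ) a + b end"))   # subword pieces for a tokenized Ruby function
```
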
## Intended uses & limitations

The model could be used to generate descriptions for Ruby functions, or it can be fine-tuned on other Ruby code tasks. It can be used on unparsed and untokenized Ruby code; however, if the Ruby code is tokenized, the performance should be better.

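The card does not specify the exact tokenizer the authors used to pre-process Ruby source. Purely as a rough illustration, a space-separated tokenization similar to the widget example above could be approximated with a simple regex; the `tokenize_ruby` helper below is hypothetical and not part of CodeTrans.

```python
import re

def tokenize_ruby(source: str) -> str:
    """Hypothetical helper: separate identifiers, numbers and punctuation with spaces,
    roughly matching the space-separated style of the example input in this card."""
    tokens = re.findall(r"[A-Za-z_][A-Za-z_0-9?!]*|\d+|\S", source)
    return " ".join(tokens)

raw = "def add(a, b)\n  a + b\nend"
print(tokenize_ruby(raw))
# def add ( a , b ) a + b end
```
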
### How to use

Here is how to use this model to generate Ruby function documentation using the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_ruby_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_ruby_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
pipeline([tokenized_code])
```
Run this example in the [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/ruby/base_model.ipynb).

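Note that `AutoModelWithLMHead` is deprecated in recent versions of Transformers. On newer releases the same pipeline can be built with `AutoModelForSeq2SeqLM`; the snippet below is a drop-in sketch assuming a current Transformers install.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, SummarizationPipeline

checkpoint = "SEBIS/code_trans_t5_base_code_documentation_generation_ruby_multitask_finetune"
pipeline = SummarizationPipeline(
    model=AutoModelForSeq2SeqLM.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # set to -1 to run on CPU
)
```
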
## Training data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for half a million steps in total, using a sequence length of 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

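The original pre-training ran on TPUs with the T5 code base. Purely as an illustration, a comparable optimizer setup in PyTorch could look like the sketch below; the warmup length and the manual `LambdaLR` schedule are assumptions, and only the Adafactor choice and the inverse square root shape come from this card.

```python
import torch
from transformers import AutoModelWithLMHead
from transformers.optimization import Adafactor

model = AutoModelWithLMHead.from_pretrained(
    "SEBIS/code_trans_t5_base_code_documentation_generation_ruby_multitask_finetune"
)
print(sum(p.numel() for p in model.parameters()))  # roughly 220M parameters

# Adafactor with a fixed base LR; relative_step=False so the schedule is controlled externally.
optimizer = Adafactor(
    model.parameters(), lr=1e-3, relative_step=False, scale_parameter=False, warmup_init=False
)

warmup_steps = 10_000  # assumption; not stated in the card

def inverse_sqrt(step: int) -> float:
    # Constant during warmup, then decays proportionally to 1/sqrt(step).
    return min(1.0, (warmup_steps / max(1, step)) ** 0.5)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, inverse_sqrt)
```
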
### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 12,000 steps in total, using a sequence length of 512 (batch size 256), using only the dataset containing Ruby code.

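The authors' fine-tuning ran on a TPU pod. A rough single-GPU sketch of an equivalent setup with the Transformers `Seq2SeqTrainer` is shown below; the toy dataset, batch size, learning rate and output directory are assumptions made for illustration, and only the 12,000 steps and the 512 sequence length come from this card.

```python
from transformers import (
    AutoModelWithLMHead,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "SEBIS/code_trans_t5_base_code_documentation_generation_ruby_multitask_finetune"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelWithLMHead.from_pretrained(checkpoint)

# Hypothetical (tokenized Ruby function, docstring) pairs standing in for the Ruby dataset.
pairs = [
    ("def add ( a , b ) a + b end", "Add two numbers."),
    ("def empty? ( ) @items . size == 0 end", "Check whether the collection is empty."),
]

def encode(code, doc):
    features = tokenizer(code, max_length=512, truncation=True)
    features["labels"] = tokenizer(doc, max_length=512, truncation=True)["input_ids"]
    return features

train_dataset = [encode(code, doc) for code, doc in pairs]

args = Seq2SeqTrainingArguments(
    output_dir="codetrans-ruby-finetuned",   # assumption
    max_steps=12_000,                        # from the card
    per_device_train_batch_size=8,           # assumption; the card's global batch size was 256 on a TPU pod
    learning_rate=1e-4,                      # assumption
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```
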
## Evaluation results

For the code documentation generation task, the different models achieve the following results on the different programming languages (in BLEU score):

Test results:

| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| ---------------- | :----: | :--: | :-: | :-: | :--: | :--------: |
| ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| TF-Base | 20.26 | 20.19 | **19.50** | 25.84 | 14.07 | 18.25 |
| TF-Large | XX | XX | XX | XX | XX | XX |
| MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| MT-Base | **20.39** | **21.22** | 19.43 | **26.23** | **15.26** | 16.11 |
| MT-Large | XX | XX | XX | XX | XX | XX |
| MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | **18.62** |
| MT-TF-Large | XX | XX | XX | XX | XX | XX |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |

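The exact BLEU configuration used for the numbers above is not given in this card. As an illustration only, a corpus-level BLEU between generated and reference documentation strings can be computed with `sacrebleu`; the example strings below are placeholders.

```python
import sacrebleu

# Placeholder outputs and references; in practice these come from the model and the test set.
hypotheses = ["Add a message to the log ."]
references = [["Log a message if the severity is at or above the configured level ."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)
```
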
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)