Pretrained model for API recommendation generation, using the T5-base model architecture. It was first released in this repository.
This CodeTrans model is based on the t5-base model. It has its own SentencePiece vocabulary model. It was trained with multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets, and was then fine-tuned on the API recommendation generation task for Java APIs.
The model can be used to generate API usage recommendations for Java programming tasks.
Here is how to use this model to generate API recommendations for Java using the Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask_finetune", skip_special_tokens=True),
    device=0,  # run on the first GPU; use device=-1 for CPU
)

# Natural-language description of the intended functionality.
tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```
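The pipeline returns a list with one dictionary per input; the generated API recommendation is in its `summary_text` field.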
You can run this example in a Colab notebook.
The datasets used for the supervised training tasks can be downloaded from: Link
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and uses an encoder-decoder architecture. The optimizer used for pre-training is AdaFactor with an inverse square root learning rate schedule.
This model was then fine-tuned on a single TPU Pod V2-8 for 320,000 steps in total, using a sequence length of 512 (batch size 256), using only the dataset containing API recommendation generation data.
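For illustration, below is a minimal PyTorch/Transformers sketch of the reported optimizer setup (Adafactor with an inverse square root learning rate schedule). This is not the original training code; the warmup length and peak learning rate are assumptions, as they are not stated in this card.

```python
import torch
from transformers import Adafactor, T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Adafactor with a fixed (non-relative) step size so the schedule below controls the LR.
optimizer = Adafactor(
    model.parameters(),
    lr=1e-2,                 # assumed peak learning rate
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)

warmup_steps = 10_000  # assumed warmup length

def inverse_sqrt(step: int) -> float:
    # Keep the LR at its peak during warmup, then decay it proportionally to 1/sqrt(step).
    return min(1.0, (warmup_steps / max(step, 1)) ** 0.5)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=inverse_sqrt)

# Inside the training loop, call scheduler.step() after each optimizer.step().
```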
For the API recommendation generation task, the different models achieve the following results (in BLEU score):
Test results:

| Language / Model | Java |
| ---------------- | :---: |
| State of the art | 54.42 |
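For reference, BLEU scores of this kind can be computed with a library such as sacrebleu. The snippet below is a minimal, hypothetical sketch; the hypothesis and reference strings are made up, and this is not the authors' evaluation script.

```python
import sacrebleu

# Hypothetical model outputs and one reference per example (illustrative only).
hypotheses = ["XmlPullParser.getEventType XmlPullParser.next"]
references = [["XmlPullParser.getEventType XmlPullParser.nextTag"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 2))
```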