Pretrained model for a Lisp-inspired programming DSL, using the t5-small model architecture. It was first released in this repository.
This CodeTrans model is based on the t5-small model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
The model can be used to generate Lisp-inspired DSL code from a human-language task description.
Here is how to use this model to generate Lisp-inspired DSL code with the Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_multitask", skip_special_tokens=True),
    device=0,  # run on the first GPU; use device=-1 for CPU
)

tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```
Run this example in a Colab notebook.
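Note that `AutoModelWithLMHead` is deprecated in recent Transformers releases. A minimal sketch of the same generation using the non-deprecated `AutoModelForSeq2SeqLM` class and a direct `generate` call (the `max_length` value below is an illustrative assumption, not taken from the original setup):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "SEBIS/code_trans_t5_small_program_synthese_multitask"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

description = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
inputs = tokenizer(description, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)  # max_length is an assumed value
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```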
The datasets for the supervised training tasks can be downloaded at this Link.
The model was trained on a single TPU Pod V3-8 for a total of 440,000 steps, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. Pre-training used the AdaFactor optimizer with an inverse square root learning rate schedule.
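For reference, a hedged sketch of what such an optimizer setup looks like with the Transformers `Adafactor` class and `get_inverse_sqrt_schedule`; the learning rate and warmup values below are illustrative assumptions, not the values used in the original TPU training:

```python
from transformers import Adafactor, AutoModelForSeq2SeqLM, get_inverse_sqrt_schedule

model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_multitask")

# Adafactor with an explicit (non-relative) learning rate, so the external
# inverse square root schedule controls the decay.
optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,                # assumed peak learning rate
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
scheduler = get_inverse_sqrt_schedule(optimizer, num_warmup_steps=10_000)  # warmup steps assumed
```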
For the program synthesis task, the different models achieve the following results on the Lisp-inspired DSL (in BLEU score):
Test results:

| Language / Model | LISP  |
| ---------------- | ----- |
| State of the art | 85.80 |
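As an illustration of how such a BLEU score can be computed, here is a minimal sketch using sacrebleu (an assumption for illustration; the hypothesis and reference strings below are made up, and the original evaluation script may differ):

```python
import sacrebleu

# Hypothetical model outputs and reference DSL programs (made-up examples).
hypotheses = ["( map ( lambda x ( - x b ) ) a )"]
references = [["( map ( lambda x ( - x b ) ) a )"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")
```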