Pretrained model on the SQL programming language, using the T5-small model architecture. It was first released in this repository. This model is trained on tokenized SQL code functions and works best with tokenized SQL functions.
This CodeTrans model is based on the t5-small model. It has its own SentencePiece vocabulary model. It was trained with multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
The model can be used to generate descriptions for SQL functions or be fine-tuned on other SQL code tasks. It can be used on unparsed and untokenized SQL code; however, performance should be better if the SQL code is tokenized first (see the sketch below).
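The card does not state which tokenizer was used to prepare the SQL inputs. The snippet below is a hypothetical pre-tokenization step using the sqlparse library (an assumption, not part of the original pipeline) that produces lowercased, space-separated tokens in the same style as the example input shown further down:

```python
# Hypothetical SQL pre-tokenization helper; the exact tokenizer used to build
# the CodeTrans SQL training data is not specified in this card.
import sqlparse

def tokenize_sql(query: str) -> str:
    """Lowercase a SQL statement and join its lexical tokens with single spaces."""
    statement = sqlparse.parse(query)[0]
    tokens = [tok.value for tok in statement.flatten() if not tok.is_whitespace]
    return " ".join(tokens).lower()

print(tokenize_sql("SELECT TIME(col0) FROM tab0"))
# -> "select time ( col0 ) from tab0"
```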
Here is how to use this model to generate SQL function documentation using the Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
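The pipeline returns a list with one dictionary per input; continuing the snippet above, the generated description can be read from the `summary_text` key:

```python
# Continuing the snippet above: read the generated description
# from the "summary_text" field of the pipeline output.
result = pipeline([tokenized_code])
print(result[0]["summary_text"])
```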
Run this example in a Colab notebook.
The datasets for the supervised training tasks can be downloaded from this Link.
The model was trained on a single TPU Pod V3-8 for 460,000 steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and uses an encoder-decoder architecture. The optimizer is AdaFactor with an inverse square root learning rate schedule for pre-training.
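For fine-tuning on your own SQL summarization data, a minimal sketch under stated assumptions is shown below: it uses the Adafactor implementation from the transformers library with its built-in relative-step inverse square root schedule, and a single placeholder source/target pair standing in for a real dataset loop. None of the values here are the hyperparameters from the original training run.

```python
# Minimal fine-tuning sketch (assumptions: a single placeholder training pair,
# Adafactor from the transformers library; not the original training setup).
from transformers import AutoTokenizer, AutoModelWithLMHead
from transformers.optimization import Adafactor

model_name = "SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelWithLMHead.from_pretrained(model_name)

# relative_step=True with warmup_init=True enables Adafactor's built-in
# inverse square root schedule, so no explicit learning rate is passed.
optimizer = Adafactor(
    model.parameters(), lr=None,
    scale_parameter=True, relative_step=True, warmup_init=True,
)

# Placeholder tokenized SQL function and target description.
source = "select time ( col0 ) from tab0"
target = "returns the time value of col0 from tab0"

inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
labels = tokenizer(target, return_tensors="pt", truncation=True, max_length=512).input_ids

model.train()
loss = model(**inputs, labels=labels).loss  # T5 computes the LM loss when labels are given
loss.backward()
optimizer.step()
optimizer.zero_grad()
```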
For the source code summarization task, different models achieve the following results on different programming languages (in BLEU score):
Test results:

| Language / Model | Python | SQL | C# |
| :--------------- | :----: | :-: | :-: |