
This is a PolyCoder model with 2.7B parameters, presented in the paper "A Systematic Evaluation of Large Language Models of Code" (MAPS 2022 and the ICLR 2022 Deep Learning 4 Code workshop).

The model was trained on 249 GB of code across 12 programming languages.

Note: this model requires transformers version 4.23.0 or later:

pip install "transformers>=4.23.0"
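With a recent enough transformers installed, the model can be loaded through the standard causal-language-model workflow. The sketch below is a minimal example, not an official snippet; in particular, the Hub model ID is an assumption based on where this card is hosted, so adjust it if your copy of the checkpoint lives elsewhere.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed Hugging Face Hub ID for this card; change if needed.
MODEL_ID = "NinedayWang/PolyCoder-2.7B"


def generate(prompt: str, max_new_tokens: int = 32) -> str:
    """Complete `prompt` with PolyCoder and return the decoded text."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0])


if __name__ == "__main__":
    # Code prompts work best; PolyCoder was trained on source code only.
    print(generate("def binary_search(arr, left, right, x):\n    mid = (left +"))
```

Since this is a 2.7B-parameter checkpoint, loading it on CPU needs roughly 10 GB of RAM; for faster generation, move the model and inputs to a GPU with `.to("cuda")`.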

For more information, see: https://github.com/VHellendoorn/Code-LMs

If you use this model, please cite:

@inproceedings{xu2022systematic,
  title={A Systematic Evaluation of Large Language Models of Code},
  author={Frank F. Xu and Uri Alon and Graham Neubig and Vincent Josua Hellendoorn},
  booktitle={Deep Learning for Code Workshop},
  year={2022}
}