
XVCLM

For an in-depth understanding of our model and methods, please see our blog here

Model description

XVCLM is a transformer-based model designed to explain and summarize code. It was pre-trained on approximately 1 million code/description pairs and fine-tuned on a curated set of 10,000 high-quality examples. XVCLM can handle short- to medium-length code snippets in:

  • Python
  • JavaScript (including frameworks like React)
  • Java
  • Ruby
  • Go
  • PHP

The model outputs descriptions in English.

Intended uses

XVCLM, without any additional fine-tuning, can explain code in a few sentences and typically performs best with Python and JavaScript. We recommend using XVCLM for:

  • Simple code explanations
  • Documentation generation
  • Producing synthetic data to enhance model explanations
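For the documentation-generation use case, the model's one-sentence output can be wrapped into a docstring. The helper below is a hypothetical sketch, not part of this repo: the `explain` argument stands in for any callable that maps a code snippet to an English summary, such as a wrapper around the pipeline shown under "How to use".

```python
def to_docstring(code: str, explain) -> str:
    # `explain` is any callable mapping a code snippet to a one-line
    # English summary, e.g. a wrapper around the XVCLM pipeline that
    # returns response[0]['generated_text'].
    summary = explain(code).strip()
    # Prepend the summary as a docstring-style comment above the code.
    return f'"""{summary}"""\n{code}'
```

In practice, `explain` would invoke the summarizer pipeline and return its generated text; here it is kept as a parameter so the helper stays independent of any particular model loading code.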

How to use

You can use this model directly with a pipeline for text-to-text generation, as shown below:

from transformers import pipeline

# Load the model via a text-to-text generation pipeline
summarizer = pipeline('text2text-generation', model='Binarybardakshat/XVCLM-MIN-DECT')
code = "print('hello world!')"

# Generate an explanation using beam search
response = summarizer(code, max_length=100, num_beams=3)
print("Summarized code: " + response[0]['generated_text'])

This should yield something along the lines of:

Summarized code: The following code is greeting the world.

Model sizes

  • XVCLM (this repo): 1 Million Parameters
  • XVCLM-Small: 220 Million Parameters

Limitations

XVCLM may produce overly simplistic descriptions that don't cover the entirety of a code snippet. We believe that with more diverse training data, we can overcome this limitation and achieve better results.

About Us

At Vinkura, we're all about using AI to benefit everyone. We believe AI should be safe and put people first, not just profits. Our goal is to make AI that helps solve big problems and is available to everyone. We know that diversity is key to success, so we value all kinds of voices and experiences, and we're always working to make sure everyone feels included and supported.
