---
license: mit
language:
- en
metrics:
- accuracy
- perplexity
- f1
- precision
- recall
tags:
- code
---
# Model Card for VerilogProtoModel
VerilogProtoModel is a next-token prediction model for Verilog, intended as a foundational model for future Verilog code copilots. Its goal is to improve coding efficiency and accuracy for hardware description languages.
## Model Details

### Model Description
VerilogProtoModel predicts the next token in Verilog code, aiming to enhance coding efficiency and accuracy. The model was fine-tuned on a large dataset of Verilog code that was extensively preprocessed to clean and anonymize it. It achieved 52% accuracy in predicting the correct next token from a vocabulary of approximately 40,000 tokens, showing its potential to improve the coding process for hardware description languages.
- Developed by: Von Davis
- Model type: GPT-2
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: GPT-2
### Model Sources

- Hugging Face repository: Von-R/VerilogProtoToken
- GitHub repository: https://github.com/Von-R/VerilogProtoToken
## Uses

### Direct Use
The model can be directly used for next-token prediction in Verilog code, assisting developers in writing more efficient and accurate code.
### Downstream Use

The model can be fine-tuned for specific Verilog coding standards or integrated into a larger code-completion system.
### Out-of-Scope Use
The model is not intended for use in non-Verilog programming languages or general text prediction. It should not be used for generating Verilog code in safety-critical systems without thorough validation.
## Bias, Risks, and Limitations
The model's predictions are based on the training data and may not generalize well to all possible Verilog coding scenarios. The reduced vocabulary size might limit its ability to predict less common tokens accurately.
### Recommendations
Users should validate the model's predictions in the context of their specific applications and be aware of its limitations. Continuous monitoring and fine-tuning may be required to maintain performance.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("Von-R/VerilogProtoToken")
model = AutoModelForCausalLM.from_pretrained("Von-R/VerilogProtoToken")

# Tokenize a Verilog snippet and generate a continuation
inputs = tokenizer("input Verilog code here", return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
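Since the model is evaluated on next-token prediction rather than long-form generation, another way to inspect its behavior is to look at the top-5 candidates for the next token, mirroring the Top-5 Accuracy metric reported below. A minimal sketch, reusing the tokenizer and model loaded above (the Verilog prefix is an arbitrary example):

```python
import torch

# Arbitrary Verilog prefix for illustration; any partial module works
inputs = tokenizer("module counter(input clk, input rst,", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Logits at the last position score every candidate next token
top5 = torch.topk(logits[0, -1, :], k=5)
for score, token_id in zip(top5.values, top5.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}  (logit: {float(score):.2f})")
```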
## Training Details

### Training Data
The model was trained on a dataset of Verilog code extracted from GitHub. The data was cleaned, anonymized, and preprocessed to ensure high quality.
#### Preprocessing
Data extraction involved removing non-synthesizable code, comments, and duplicates. Identifiers were anonymized to reduce vocabulary size and improve model efficiency.
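The exact preprocessing scripts are not included in this card. As an illustration only, identifier anonymization of the kind described above could look like the following sketch (the regex, the `anonymize` helper, and the abbreviated keyword list are assumptions, not the author's actual pipeline):

```python
import re

# Small subset of Verilog reserved words that must never be renamed;
# a real pipeline would use the full IEEE 1364 keyword list.
VERILOG_KEYWORDS = {
    "module", "endmodule", "input", "output", "wire", "reg",
    "assign", "always", "begin", "end", "if", "else", "posedge",
}

def anonymize(source: str) -> str:
    """Replace user-defined identifiers with generic names (id_0, id_1, ...)."""
    mapping = {}

    def rename(match: re.Match) -> str:
        name = match.group(0)
        if name in VERILOG_KEYWORDS:
            return name
        if name not in mapping:
            mapping[name] = f"id_{len(mapping)}"
        return mapping[name]

    return re.sub(r"\b[A-Za-z_][A-Za-z0-9_$]*\b", rename, source)

print(anonymize("module counter(input clk, output reg [7:0] count);"))
# -> module id_0(input id_1, output reg [7:0] id_2);
```

Mapping every user-defined name to a small set of generic identifiers is what shrinks the vocabulary to the roughly 40,000 tokens mentioned above.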
#### Training Hyperparameters
- Training regime: fp32
- Learning rate: 5e-5
- Batch size: 16
- Epochs: 1
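These hyperparameters map directly onto the Hugging Face `Trainer` API. A minimal sketch, assuming a pre-tokenized `train_dataset` (a placeholder, not provided in this card) and the GPT-2 base checkpoint:

```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("gpt2")

# Mirrors the hyperparameters listed above: fp32 (the default),
# learning rate 5e-5, batch size 16, one epoch.
args = TrainingArguments(
    output_dir="verilog-proto-model",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    num_train_epochs=1,
)

# train_dataset is assumed to be a pre-tokenized Verilog dataset
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```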
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The testing data was a held-out split of the Verilog dataset, consisting of code not seen during training.
#### Factors
Evaluation focused on predicting the correct next token in various Verilog coding scenarios.
#### Metrics

| Metric | Value | Description |
| --- | --- | --- |
| Next-token prediction loss | 0.8176 | Average cross-entropy loss per predicted token |
| Perplexity | 2.2650 | How well the model predicts the sample (the exponential of the loss) |
| Accuracy | 0.5219 | Percentage of tokens predicted exactly |
| Precision | 0.0233 | Accuracy of positive predictions |
| Recall | 0.0239 | Ability to identify all relevant instances |
| F1 score | 0.0235 | Harmonic mean of precision and recall |
| Top-5 accuracy | 0.5611 | Percentage of cases where the correct token is among the top 5 predictions |
| Entropy | 0.8340 | Uncertainty in the predictions |
| Prediction confidence | 0.8294 | Confidence of the model in its predictions |
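As a consistency check, the reported perplexity is exactly the exponential of the reported next-token loss, which is how perplexity is defined for language models:

```python
import math

# Perplexity = exp(mean next-token cross-entropy loss)
loss = 0.8175709573030472
print(math.exp(loss))  # 2.2649913893... — matches the reported perplexity
```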
### Results

The model achieved 52% accuracy in predicting the next token from a vocabulary of approximately 40,000 tokens.
#### Summary

The model shows significant potential to improve Verilog coding efficiency and accuracy.
## Model Architecture and Objective
The model is based on the GPT-2 architecture and fine-tuned for next-token prediction in Verilog code.
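A quick way to confirm the architecture and the reduced vocabulary is to inspect the published configuration. A minimal sketch; the exact vocabulary size is not stated in this card beyond "approximately 40,000":

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Von-R/VerilogProtoToken")
print(config.model_type)  # expected: "gpt2"
print(config.vocab_size)  # expected: roughly 40,000 after vocabulary reduction
```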
## Compute Infrastructure
The training and evaluation were performed on high-performance GPUs to handle the computational demands of fine-tuning a large language model.
## Citation

**BibTeX:**

```bibtex
@article{Davis2024VerilogProtoModel,
  title={VerilogProtoModel: A Predictive Model for Verilog Next-Token Prediction},
  author={Von Davis},
  journal={GitHub Repository},
  year={2024}
}
```
**APA:**

Davis, V. (2024). VerilogProtoModel: A predictive model for Verilog next-token prediction. GitHub Repository. https://github.com/Von-R/VerilogProtoToken
## Model Card Authors
Von Davis
## Model Card Contact

- Email: Von.Roth.1991@gmail.com
- GitHub: https://github.com/Von-R
- LinkedIn: https://www.linkedin.com/in/daelonvondavis/