
license: mit

Moore Circuit Gen 1

Model Description

Moore Circuit Gen 1 (MCG-1) is a graph GAN deep learning model trained on a subset of a dataset containing over 50,000 existing digital logic circuits. The model generates viable random digital logic circuits without discontinuities or improper connections. Training was made possible using Intel® Developer Cloud.
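
For orientation, the sketch below shows one plausible way a digital logic circuit could be encoded as graph data for a graph GAN: one-hot node features for gate types plus a directed adjacency matrix. The gate vocabulary and array shapes are illustrative assumptions, not MCG-1's actual input format.

```python
import numpy as np

# Hypothetical gate vocabulary; MCG-1's actual node labels are not documented here.
GATE_TYPES = ["INPUT", "AND", "OR", "NOT", "XOR", "OUTPUT"]

num_nodes = 16  # MCG-1 currently generates circuits with a fixed node count (see Limitations)

# One-hot node features indicating which gate each node represents.
node_features = np.zeros((num_nodes, len(GATE_TYPES)), dtype=np.float32)
node_features[0, GATE_TYPES.index("INPUT")] = 1.0  # node 0: a primary input
node_features[1, GATE_TYPES.index("AND")] = 1.0    # node 1: an AND gate

# Directed adjacency matrix: adjacency[i, j] == 1 means node i drives node j.
adjacency = np.zeros((num_nodes, num_nodes), dtype=np.float32)
adjacency[0, 1] = 1.0  # the primary input feeds the AND gate
```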

Purpose

The MCG-1 model is intended as a helpful tool for researchers and developers working on FPGA and ASIC technology. The ability to generate a viable random circuit opens the door to a model that could be further trained to produce a Register Transfer Level (RTL) design from a much higher-level circuit or description. Used properly, this technology could sharply reduce the production time of Very Large Scale Integration (VLSI) chips, which typically take years to develop because every gate must be hand-placed to fit into a small package.

Intended Use

Intended Users:

Researchers and developers, design and process engineers, individuals and organizations specializing in ASICs, innovators in the semiconductor industry

Use Cases:

Generates viable random digital logic circuits

Usage Instructions:

To use the MCG-1 model, ensure that you have a Python environment with the necessary libraries installed. Prepare your dataset by formatting it to match the model's expected input format and dimensions. The model is pre-trained; by running the load_model.py file, you can load the MCG-1 model and use it to generate synthetic graph data. You can re-train the model using gan_train.py. Note that you may need to adjust certain training parameters for your specific application or to improve performance.
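
Below is a minimal sketch of the load-and-generate flow described above, assuming the generator is a standard PyTorch module whose weights are stored as a state dict. The Generator class, its architecture, the latent dimension, and the "generator.pth" checkpoint path are all hypothetical placeholders; the actual entry point is load_model.py.

```python
import torch
from torch import nn

class Generator(nn.Module):
    """Toy stand-in for the MCG-1 generator: maps a latent vector
    to an adjacency matrix for a 16-node circuit graph."""
    def __init__(self, latent_dim: int = 64, num_nodes: int = 16):
        super().__init__()
        self.num_nodes = num_nodes
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_nodes * num_nodes),
            nn.Sigmoid(),  # edge probabilities in [0, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).view(-1, self.num_nodes, self.num_nodes)

generator = Generator()
# generator.load_state_dict(torch.load("generator.pth"))  # hypothetical checkpoint path
generator.eval()

with torch.no_grad():
    z = torch.randn(1, 64)                # latent noise vector
    edge_probs = generator(z)             # shape (1, 16, 16): edge probabilities
    adjacency = (edge_probs > 0.5).int()  # threshold into a candidate circuit graph

print(adjacency.shape)
```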

Limitations

We were unfortunately unable to train the model on the full dataset due to time constraints and the sophisticated nature of the data. Currently, the model can only generate circuits with a fixed number of nodes (16); this can be improved in the future.

Optimizations

The model is made more efficient using various optimized libraries, such as PyTorch and NumPy.

Training Platform:

This model was trained on Intel Developer Cloud with 4th Generation Intel® Xeon® Scalable Processors (Sapphire Rapids).
