Model Card for PranavKeshav/reactgpt-1.2

This model generates React component code from natural language descriptions. It builds on the CodeGemma-2B base model for text-to-code generation.

Model Details

Model Description

This is a text-to-React component code generation model fine-tuned on the Hardik1234/reactjs_labelled dataset with CodeGemma-2B as the base model. It aims to assist developers by generating React component code from textual descriptions, streamlining the development process.

  • Developed by: Pranav Keshav
  • Model type: Text generation
  • Language(s) (NLP): English
  • License: [More Information Needed]
  • Fine-tuned from model: google/codegemma-2b

Model Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]

Uses

Direct Use

The model can be used to generate React component code from textual descriptions, such as "NavBar component," which can be integrated directly into React applications.
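
For example, a minimal sketch using the transformers text-generation pipeline; the max_new_tokens value is an illustrative choice rather than a tuned setting:

from transformers import pipeline

# Load the model behind a text-generation pipeline
generator = pipeline("text-generation", model="PranavKeshav/reactgpt-1.2")

# Describe the component you want in plain English
result = generator("NavBar component", max_new_tokens=256)
print(result[0]["generated_text"])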

Downstream Use

This model can be fine-tuned further for specific use cases or integrated into development tools and platforms to enhance developer productivity by automating code generation.
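
As a sketch of further fine-tuning, the snippet below attaches LoRA adapters with the peft library; the rank, alpha, and target modules shown are illustrative defaults, not settings used to train this model:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Start from this model and add low-rank adapters for parameter-efficient fine-tuning
base = AutoModelForCausalLM.from_pretrained("PranavKeshav/reactgpt-1.2")
lora_config = LoraConfig(
    r=8,                                  # illustrative adapter rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # common choices for Gemma-style attention layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
# Train with transformers.Trainer or trl's SFTTrainer on your own prompt/component pairs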

Out-of-Scope Use

The model is not intended to generate code for non-React frameworks or languages. It may also produce incorrect or non-functional code if the input description is unclear or ambiguous.

Bias, Risks, and Limitations

Recommendations

Generated code may require manual verification and refinement before use. The model may also reflect biases present in its training data, so generated code should be reviewed and tested thoroughly.

How to Get Started with the Model

Use the code below to generate React component code with the model:

from transformers import GemmaTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = GemmaTokenizer.from_pretrained("PranavKeshav/reactgpt-1.2")
model = AutoModelForCausalLM.from_pretrained("PranavKeshav/reactgpt-1.2")

# Describe the component you want in plain English
input_text = "PageNotFound component"
inputs = tokenizer(input_text, return_tensors="pt")

# max_new_tokens bounds how much component code is generated
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
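
Continuing from the snippet above, a small sketch that strips the echoed prompt from the decoded output and saves the remaining component code to a file; the assumption that the prompt is echoed verbatim and the PageNotFound.jsx file name are illustrative, not documented model behavior:

from pathlib import Path

generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Assumption: the decoded text begins with the prompt, so slice it off to keep only the code
component_code = generated[len(input_text):].strip()
Path("PageNotFound.jsx").write_text(component_code)  # illustrative file name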

Training Details

Training Data

The model was fine-tuned on the Hardik1234/reactjs_labelled dataset (see Model Description above).

Training Procedure

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Summary

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]
