[Image: Landing page showcasing visual richness]

Model Card for UIGEN-T1.5


Model Overview

UIGEN-T1.5 is an advanced transformer-based UI generation model fine-tuned from Qwen2.5-Coder-14B-Instruct, specifically enhanced to produce stunning, modern, and unique frontend user interfaces. Leveraging sophisticated reasoning and chain-of-thought methodologies, UIGEN-T1.5 excels at generating highly structured and visually compelling HTML and CSS code, ideal for sleek dashboards, engaging landing pages, and intuitive sign-up forms.


Model Highlights

  • Advanced UI Styles: Produces sleek, modern, and unique designs.
  • Chain-of-Thought Reasoning: Enhanced reasoning capabilities for accurate HTML/CSS layouts.
  • High Usability: Generates responsive and production-ready frontend code.

Visual Examples

See examples below showcasing UIGEN-T1.5-generated interfaces:

[Image: Dashboard UI generated by UIGEN-T1.5]


Use Cases

Recommended Uses

  • Dashboards: Insightful and visually appealing data interfaces.
  • Landing Pages: Captivating and high-conversion web pages.
  • Authentication Screens: Elegant sign-up and login interfaces.

Limitations

  • Limited Interactivity: Minimal JavaScript functionality, focusing on HTML/CSS.
  • Prompt Engineering: May require specific prompting to move past the reasoning phase (e.g., appending "answer"); see the sketch below.
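
A minimal workaround sketch: if a generation stays stuck in its reasoning, append an answer turn to the decoded text and continue generating. The <|im_start|>answer tag is an assumption inferred from the <|im_start|>think tag used in the inference example below; verify it against the tokenizer's chat template.

# Continuation sketch: `model`, `tokenizer`, and `reasoning_text` (the decoded
# output of a first generate() call) are assumed to exist, as set up in the
# inference example below.
continued = reasoning_text + "<|im_start|>answer\n"  # assumed tag name
inputs = tokenizer(continued, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=4096, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))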

How to Use

Inference Example

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "smirki/UIGEN-T1.5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda")

# ChatML-style prompt; the trailing <|im_start|>think tag opens the model's
# chain-of-thought phase before it emits the final HTML/CSS answer.
prompt = """<|im_start|>user
Design a sleek, modern dashboard for monitoring solar panel efficiency.<|im_end|>
<|im_start|>assistant
<|im_start|>think
"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# Generous token budget so the reasoning trace and the full markup fit in one pass.
outputs = model.generate(**inputs, max_new_tokens=12012, do_sample=True, temperature=0.7)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
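
The decoded output interleaves the reasoning trace with the generated markup. A small post-processing sketch follows; the assumption that the final page appears inside an html code fence or starts at <!DOCTYPE html> may not hold for every generation, so the raw text is kept as a fallback.

import re

text = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Assumption: the answer contains the page inside an ```html fence or starting
# at <!DOCTYPE html>; otherwise keep the full decoded text.
match = (re.search(r"```html\s*(.*?)```", text, re.DOTALL)
         or re.search(r"(<!DOCTYPE html>.*)", text, re.DOTALL | re.IGNORECASE))
html = match.group(1) if match else text

with open("generated_ui.html", "w", encoding="utf-8") as f:
    f.write(html)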

Performance and Evaluation

  • Strengths:
    • High-quality UI generation.
    • Strong reasoning capabilities for structured layouts.
  • Weaknesses:
    • Occasional repetitive design patterns.
    • Minor artifacting in complex designs.

Technical Specifications

  • Architecture: Transformer-based LLM
  • Base Model: Qwen2.5-Coder-14B-Instruct
  • Precision: bf16 mixed precision, quantized to q8
  • Hardware Requirements: Recommended 12GB VRAM (see the 8-bit loading sketch after this list)
  • Software Dependencies:
    • Hugging Face Transformers
    • PyTorch
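
A hedged loading sketch for tighter VRAM budgets: 8-bit loading through bitsandbytes. This is an assumption, not part of the official workflow; it requires the bitsandbytes package, and actual memory usage depends on context length and hardware.

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "smirki/UIGEN-T1.5"

# Load the weights in 8-bit to roughly halve the bf16 memory footprint.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",  # places layers across available GPU/CPU memory
)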

Citation

@misc{Tesslate_UIGEN-T1.5,
  title={UIGEN-T1.5: Advanced Chain-of-Thought UI Generation Model},
  author={smirki},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/Tesslate/UIGEN-T1.5}
}

Contact & Community

  • Creator: smirki
  • Repository & Demo: Coming soon!

Sponsored by vichar ai (Hugging Face | Website)
