BlackBox Component Pascal Assistant Model


Model Description


This is a specialized AI assistant for programming in BlackBox Component Builder using Component Pascal. The model is fine-tuned from Qwen/Qwen2.5-Coder-3B-Instruct to provide context-aware coding assistance and BlackBox-specific best practices.

Key Features:

  • Component Pascal syntax support
  • BlackBox framework-specific patterns
  • Code generation and troubleshooting
  • Interactive programming guidance

Intended Use

✅ Intended for:

  • BlackBox Component Builder developers
  • Component Pascal learners
  • Legacy Oberon-2 system maintainers
  • Educational purposes

🚫 Not intended for:

  • General programming outside BlackBox
  • Non-technical decision making
  • Mission-critical systems without human verification

How to Use
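
The snippet below loads the base model in 4-bit and applies the QLoRA adapter on top. It assumes a CUDA-capable GPU and a recent transformers/PEFT stack (versions are not pinned in the original card):

pip install torch transformers peft bitsandbytes accelerate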

from transformers import BitsAndBytesConfig, AutoModelForCausalLM, AutoTokenizer
import torch
from peft import PeftModel

assert torch.cuda.is_available(), "a CUDA-capable GPU is required for 4-bit loading"
device = torch.device('cuda:0')

base_model_name = 'Qwen/Qwen2.5-Coder-3B-Instruct'
qlora_adapter = "hodza/BlackBox-Coder-3B"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    device_map=device,
    quantization_config=bnb_config,
)

model = PeftModel.from_pretrained(base_model, qlora_adapter, device_map=device)
# Build the chat messages consumed by the tokenizer's chat template
def format_chat_prompt(user_query):
    return [
        {"role": "system", "content": "You are a helpful coding assistant for BlackBox Component Builder using Component Pascal."},
        {"role": "user", "content": user_query}
    ]

def get_assistant_response(user_query):
    # Format the prompt using the chat template
    chat_prompt = format_chat_prompt(user_query)
    inputs = tokenizer.apply_chat_template(
        chat_prompt, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    
    # Generate the response (do_sample is required for temperature/top_p to take effect)
    outputs = model.generate(
        inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.3,
        top_p=0.3,
        pad_token_id=tokenizer.eos_token_id
    )
    # Decode only the newly generated tokens, not the echoed prompt
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

print(get_assistant_response("How do I print an array to the Log?"))
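
To serve the model without the PEFT wrapper, you can fold the adapter into the base weights. This is a minimal sketch, not part of the original card: it assumes enough memory to load the base model in bfloat16 (PEFT does not merge into a 4-bit quantized base), and the output directory name is arbitrary.

# Merge the LoRA adapter into a full-precision copy of the base model,
# trading the 4-bit memory savings for a standalone checkpoint.
full_base = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.bfloat16,
    device_map='auto',
)
merged = PeftModel.from_pretrained(full_base, qlora_adapter).merge_and_unload()
merged.save_pretrained("BlackBox-Coder-3B-merged")      # hypothetical output path
tokenizer.save_pretrained("BlackBox-Coder-3B-merged")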