# Spring Boot Assistant
A fine-tuned version of Qwen2.5-1.5B-Instruct specialized for Spring Boot and Java development questions.
## Model Description
This model was fine-tuned using QLoRA (Quantized Low-Rank Adaptation) on a combination of Java code examples from CodeAlpaca-20k and custom Spring Boot question-answer pairs. It is designed to assist developers with Spring Boot concepts, patterns, and best practices.
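As background on the method, LoRA freezes the base weights and learns a low-rank additive update `B @ A`, scaled by `alpha / r`. A toy NumPy sketch of that idea (illustrative dimensions only, not this model's actual layer shapes):

```python
import numpy as np

# Toy LoRA update: the frozen weight W is augmented by a low-rank
# product B @ A scaled by alpha / r. In the real model this is applied
# inside the transformer's projection layers.
d, r, alpha = 8, 2, 4              # hidden size, LoRA rank, LoRA alpha
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))    # frozen base weight
A = rng.standard_normal((r, d))    # trainable down-projection
B = np.zeros((d, r))               # trainable up-projection, zero-initialized

W_eff = W + (alpha / r) * (B @ A)  # effective weight at inference

# Because B starts at zero, the adapter is a no-op before training:
assert np.allclose(W_eff, W)
```

Only `A` and `B` receive gradients, which is why the method needs far less memory than full fine-tuning.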
## Training Details
| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-1.5B-Instruct |
| Fine-tuning Method | QLoRA (LoRA + float16) |
| LoRA Rank (r) | 16 |
| LoRA Alpha | 32 |
| Training Epochs | 3 |
| Batch Size | 2 |
| Learning Rate | 2e-4 |
| LR Scheduler | Cosine |
| Hardware | Apple Silicon (MPS) |
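The hyperparameters above can be sketched as a PEFT/Transformers configuration. This is a minimal reconstruction from the table, not the actual training script; in particular, `target_modules` and `output_dir` are assumptions (the card does not state which projection layers were adapted).

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings from the table above; target_modules is an assumption
# (a typical choice for Qwen2-family attention layers).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Optimizer schedule from the table; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="springboot-assistant",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    fp16=True,  # float16 training, matching "LoRA + float16" above
)
```

These objects would then be passed to a `Trainer` (or `trl.SFTTrainer`) together with the base model and dataset.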
## Training Data
- CodeAlpaca-20k — filtered for Java examples (~200 samples)
- Custom Spring Boot Q&A pairs covering:
- Dependency Injection
- REST API development
- Spring annotations
- Exception handling
- Database configuration
- Security with JWT
- Testing with @SpringBootTest
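A minimal sketch of how the Java subset might have been selected from CodeAlpaca-style records. The field names follow the CodeAlpaca-20k schema (`instruction` / `input` / `output`), but the keyword heuristic here is an assumption, not the filter actually used for this model.

```python
# Hypothetical heuristic for spotting Java content in a record.
JAVA_MARKERS = (
    "public class",
    "public static void main",
    "System.out.println",
    "@Override",
)

def is_java_example(record: dict) -> bool:
    """Return True if any field mentions Java or contains Java-looking code."""
    text = " ".join(record.get(k, "") for k in ("instruction", "input", "output"))
    return "java" in text.lower() or any(m in text for m in JAVA_MARKERS)

# Two toy records standing in for CodeAlpaca-20k rows.
samples = [
    {"instruction": "Write a Java method that reverses a string.",
     "input": "", "output": "public class ReverseDemo { /* ... */ }"},
    {"instruction": "Explain Python list comprehensions.",
     "input": "", "output": "A list comprehension builds a list..."},
]

java_samples = [s for s in samples if is_java_example(s)]
# Only the first record survives the filter.
```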
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Load fine-tuned weights
model = PeftModel.from_pretrained(
    base_model,
    "Break0635/springboot-assistant",
)

tokenizer = AutoTokenizer.from_pretrained(
    "Qwen/Qwen2.5-1.5B-Instruct",
    trust_remote_code=True,
)

# Generate answer
def ask(question: str) -> str:
    prompt = f"### Instruction:\n{question}\n\n### Response:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=256,
            temperature=0.7,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
        )
    generated = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True).strip()

# Example usage
answer = ask("What is dependency injection in Spring Boot?")
print(answer)
```
## Example Questions