---
base_model: unsloth/llama-3.1-8B-Instruct-unsloth-bnb-4bit
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - gguf
  - text-generation
  - math
  - fine-tuning
  - llama-3
license: apache-2.0
language:
  - en
datasets:
  - nivektk/math-augmented-dataset
task_categories:
  - text-generation
  - question-answering
size_categories:
  - 1K<n<10K
model_name: BullSolve
---

# BullSolve: Fine-Tuned LLaMA 3.1 Model for Math Problem Solving

## Model Description

BullSolve is a fine-tuned version of `unsloth/llama-3.1-8B-Instruct-unsloth-bnb-4bit`, optimized for solving competition-style math problems, with a focus on algebra. The model was trained using LoRA adapters on the nivektk/math-augmented-dataset, which contains algebra problems and their step-by-step solutions.

Because the base model is 4-bit quantized (bitsandbytes), BullSolve keeps VRAM usage low and inference efficient while remaining effective at mathematical problem solving.

## Training Data

The model was fine-tuned on a subset of the MATH dataset, specifically the Algebra category, containing 1,006 validated examples. This dataset, originally developed by Dan Hendrycks et al., consists of mathematical problems stored as JSON records with the following attributes:

- `problem`: the problem statement, as text with LaTeX expressions.
- `level`: difficulty level (1 to 5).
- `type`: mathematical domain (e.g., Algebra, Geometry).
- `solution`: a step-by-step solution in English.
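
For illustration, a raw record takes roughly this shape (the problem below is the one used in the inference example later in this card; the level and solution wording here are illustrative, not copied from the dataset):

```json
{
  "problem": "Evaluate $\\log_{5^2}5^4$.",
  "level": "Level 2",
  "type": "Algebra",
  "solution": "Let $x = \\log_{5^2}5^4$. Then $(5^2)^x = 5^4$, so $5^{2x} = 5^4$ and $x = \\boxed{2}$."
}
```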

For fine-tuning, the dataset was preprocessed into ShareGPT format with the structure:

```text
{question}[[
Solution:
{solution}
]]
```

Additionally, a chat template was applied for better inference compatibility.
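
Below is a minimal sketch of this preprocessing step, assuming a ShareGPT-style `conversations` column and a split of the template into a user turn and an assistant turn (the actual conversion script is not published, so these names are assumptions):

```python
from datasets import load_dataset

# Hypothetical reconstruction of the preprocessing described above.
dataset = load_dataset("nivektk/math-augmented-dataset", split="train")

def to_sharegpt(example):
    # The assistant turn wraps the solution in the [[ Solution: ... ]] template.
    answer = "[[\nSolution:\n" + example["solution"] + "\n]]"
    return {
        "conversations": [
            {"from": "human", "value": example["problem"]},
            {"from": "gpt", "value": answer},
        ]
    }

dataset = dataset.map(to_sharegpt)
```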

## Training Configuration

The model was trained using Unsloth with LoRA, optimizing for memory efficiency and speed. Key parameters (a code sketch of this setup follows the list):

- Model: `unsloth/llama-3.1-8B-Instruct-unsloth-bnb-4bit`
- Max Sequence Length: 2048 tokens
- LoRA Config:
  - Rank (`r`): 16
  - Alpha: 16
  - Dropout: 0
  - Target Modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- Training Arguments:
  - Batch Size: 1
  - Gradient Accumulation: 4
  - Max Steps: 25
  - Learning Rate: 1e-4
  - Optimizer: AdamW (8-bit)
  - Weight Decay: 0.01
  - LR Scheduler: Linear
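
The sketch below reconstructs this setup with Unsloth and TRL's `SFTTrainer` (argument names follow the pre-0.12 TRL API used in Unsloth's notebooks; `dataset` is the preprocessed dataset from above, and `output_dir` is illustrative):

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit base model at the training sequence length.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.1-8B-Instruct-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters with the configuration listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,      # ShareGPT data rendered to a text column
    dataset_text_field="text",  # assumed column name after templating
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        max_steps=25,
        learning_rate=1e-4,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        output_dir="outputs",
    ),
)
trainer.train()
```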

## Inference

BullSolve is optimized for fast inference and mathematical problem-solving. Example usage:

```python
from transformers import TextStreamer
from unsloth import FastLanguageModel

# Load the fine-tuned model and switch on Unsloth's fast inference path.
model, tokenizer = FastLanguageModel.from_pretrained("nivektk/BullSolve")
FastLanguageModel.for_inference(model)

messages = [{"role": "user", "content": "Evaluate $\\log_{5^2}5^4$."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

# Stream tokens to stdout as they are generated.
text_streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(
    input_ids,
    streamer=text_streamer,
    max_new_tokens=2000,
    pad_token_id=tokenizer.eos_token_id,
)
```
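
Because the fine-tuning targets wrapped each solution in the `[[ Solution: ... ]]` template shown above, generated answers may follow that same pattern.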

## Model Usage

This model is suitable for:

- Math tutoring and automated problem solving
- AI-assisted mathematical reasoning
- Education-focused chatbot assistants

## Limitations

- The model was trained only on algebra problems and may not generalize well to other areas of mathematics.
- Fine-tuning ran for only 25 optimization steps, so only a small fraction of the 1,006 examples was seen during training.
- It is optimized for inference efficiency rather than large-scale fine-tuning.

## Acknowledgments

- [Unsloth](https://github.com/unslothai/unsloth) for efficient LoRA fine-tuning
- The MATH dataset by Hendrycks et al. for the problem-solving benchmark

## Citation

If you use this model, please cite:

```bibtex
@misc{BullSolve2025,
  title={BullSolve: Fine-Tuned LLaMA 3 for Math Problems},
  author={Kevin Fabio Ramos López and Kevin Camilo Rincon Bohorquez and Nolhan Dumoulin},
  year={2025},
  howpublished={Hugging Face Models}
}
```

## Uploaded model

- Developed by: nivektk
- License: apache-2.0
- Finetuned from model: `unsloth/llama-3.1-8B-Instruct-unsloth-bnb-4bit`

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.