LLAMA 3 Story Point Estimator - talendesb - mesos

This model is fine-tuned on issue titles from the talendesb project and evaluated on the mesos project for story point estimation.

Model Details

  • Base Model: LLAMA 3.2 1B

  • Training Project: talendesb

  • Test Project: mesos

  • Task: Story Point Estimation (Regression)

  • Architecture: PEFT (LoRA)

  • Input: Issue titles

  • Output: Story point estimation (continuous value)
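The LoRA adaptation named above keeps the pretrained weights frozen and learns only a low-rank update. As a minimal illustration of the idea (the dimensions, rank, and alpha below are illustrative, not the hyperparameters used for this model):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 8                        # hidden size and LoRA rank (illustrative values)
alpha = 16                          # LoRA scaling factor (illustrative)

W = rng.standard_normal((d, d))     # frozen pretrained weight, never updated
A = rng.standard_normal((r, d)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                # B starts at zero, so the update is a no-op initially

# Effective weight at inference time: only A and B (2*d*r params) were trained,
# instead of the full d*d matrix.
W_adapted = W + (alpha / r) * (B @ A)
```

Because `B` is initialized to zero, `W_adapted` equals `W` before any training, which is what lets LoRA fine-tuning start from the base model's behavior.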

Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("DEVCamiloSepulveda/000-LLAMA3SP-talendesb-mesos")
model = AutoModelForSequenceClassification.from_pretrained("DEVCamiloSepulveda/000-LLAMA3SP-talendesb-mesos")
model.eval()

# Prepare input text (the model was trained on issue titles truncated to 20 tokens)
text = "Your issue title here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=20, padding="max_length")

# Get prediction: the single regression logit is the story point estimate
with torch.no_grad():
    outputs = model(**inputs)
story_points = outputs.logits.item()
```
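The model outputs a continuous value, while teams typically plan with a discrete scale. A small helper like the one below (hypothetical, not part of the model; the Fibonacci-style scale is an assumption and should be replaced with your team's scale) snaps the raw prediction to the nearest point:

```python
def to_story_point(value, scale=(1, 2, 3, 5, 8, 13, 21)):
    """Snap a continuous prediction to the nearest point on a planning scale.

    The default Fibonacci-style scale is only an example; pass your own.
    """
    return min(scale, key=lambda p: abs(p - value))
```

For example, a raw prediction of 6.9 would be reported as 8 on the default scale.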

Training Details

  • Fine-tuning method: LoRA (Low-Rank Adaptation)
  • Sequence length: 20 tokens
  • Best training epoch: 14 / 20 epochs
  • Batch size: 32
  • Training time: 657.352 seconds
  • Mean Absolute Error (MAE): 1.611
  • Median Absolute Error (MdAE): 1.199
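The two reported metrics can be computed from predictions as follows (a standard-library sketch; the function names are ours, not from the training code):

```python
import statistics

def mae(y_true, y_pred):
    """Mean Absolute Error: average of |true - predicted|."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mdae(y_true, y_pred):
    """Median Absolute Error: median of |true - predicted|, robust to outliers."""
    return statistics.median(abs(t - p) for t, p in zip(y_true, y_pred))
```

An MdAE (1.199) well below the MAE (1.611) indicates the error distribution is skewed: most estimates are close to the true story points, with a minority of larger misses pulling the mean up.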