Model Card: Bug Classification Algorithm

Purpose: To classify software bug reports according to their clarity, relevance, and readability, using a curated dataset of historical bugs.

Model Type: Supervised machine learning classifier

Dataset Information:

Historical Software Bugs Dataset, split into training and validation sets: approximately 80% of the data is used for training and the remaining 20% for validation/testing (a minimal split sketch follows the feature list below). Each example contains features including the text description of a software bug, along with human annotations specifying whether the report was clear, relevant, and readable.

Features Extracted:

    1. Text description of the bug
    2. Number of lines of code affected by the bug
    3. Timestamp of bug submission
    4. Version control tags associated with the bug
    5. Priority level assigned to the bug
    6. Type of software component impacted by the bug
    7. Operating system compatibility of the software
    8. Programming language used to develop the software
    9. Hardware specifications required to run the software
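As a rough illustration of the split described above, here is a minimal sketch using scikit-learn's train_test_split. The file name and column names (description, is_clear, is_relevant, is_readable) are hypothetical, since the card does not publish the dataset schema.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file and column names -- the card does not publish the dataset schema.
bugs = pd.read_csv("historical_bugs.csv")
labels = bugs[["is_clear", "is_relevant", "is_readable"]]
features = bugs.drop(columns=["is_clear", "is_relevant", "is_readable"])

# Roughly 80% training / 20% validation, as described above.
X_train, X_val, y_train, y_val = train_test_split(
    features, labels, test_size=0.2, random_state=42
)
```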

Models Trained:

  • Naive Bayes Classifier
  • Random Forest Classifier
  • Gradient Boosting Classifier
  • Neural Networks with Convolutional Layers

Hyperparameter tuning techniques: cross-validation, grid search, and random search were applied to each model architecture.
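To make the tuning procedure concrete, the sketch below runs a grid search with cross-validation over one of the listed models (Random Forest), assuming scikit-learn. The TF-IDF step for handling the bug description text, the parameter grid, the fold count, and the scoring metric are illustrative assumptions, not the values actually used.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# TF-IDF over the bug description, then a Random Forest; the vectorizer choice
# and the search space below are illustrative, not taken from this card.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", RandomForestClassifier(random_state=42)),
])

param_grid = {
    "clf__n_estimators": [100, 300],
    "clf__max_depth": [None, 20],
}

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="f1")
# One binary label at a time in this sketch (e.g. "clear" vs. not clear).
search.fit(X_train["description"], y_train["is_clear"])
print(search.best_params_, search.best_score_)
```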

Metrics Used For Evaluation:

  • Accuracy: Fraction of correctly predicted examples out of the total number of examples.
  • Precision: Ratio of correct positive predictions to all positive predictions made by the model.
  • Recall: Ratio of true positives found among actual positives.
  • F1 score: Harmonic mean of precision and recall, indicating the balance between the two.
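These metrics map directly onto scikit-learn helpers. The sketch below reuses the fitted `search` and the validation split from the earlier examples and evaluates a single binary label; it is an illustration, not the card's actual evaluation script.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Reuses the fitted `search` and the hypothetical validation split defined above.
y_pred = search.predict(X_val["description"])
y_true = y_val["is_clear"]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
```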
