
LLAMA3_8b_LORA_FOR_CLASSIFICATION

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6062

  • Balanced Accuracy: 0.86

  • Accuracy: 0.86

  • Micro F1: 0.86

  • Macro F1: 0.8600

  • Weighted F1: 0.8600

  • Classification Report:

|              | precision | recall | f1-score | support |
|:------------:|:---------:|:------:|:--------:|:-------:|
| 0            | 0.86      | 0.85   | 0.86     | 200     |
| 1            | 0.86      | 0.86   | 0.86     | 200     |
| accuracy     |           |        | 0.86     | 400     |
| macro avg    | 0.86      | 0.86   | 0.86     | 400     |
| weighted avg | 0.86      | 0.86   | 0.86     | 400     |
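These metric names follow scikit-learn's conventions. As a minimal sketch (the evaluation code is not included in this card, and `y_true`/`y_pred` below are placeholders, not the actual evaluation data), numbers like the above can be reproduced with:

```python
# Sketch: reproducing the reported metrics with scikit-learn.
# y_true / y_pred are placeholder arrays, not the actual evaluation data.
from sklearn.metrics import (
    accuracy_score,
    balanced_accuracy_score,
    classification_report,
    f1_score,
)

y_true = [0, 0, 1, 1, 1, 0]  # gold labels (placeholder)
y_pred = [0, 1, 1, 1, 0, 0]  # model predictions (placeholder)

print("Accuracy:         ", accuracy_score(y_true, y_pred))
print("Balanced Accuracy:", balanced_accuracy_score(y_true, y_pred))
print("Micro F1:         ", f1_score(y_true, y_pred, average="micro"))
print("Macro F1:         ", f1_score(y_true, y_pred, average="macro"))
print("Weighted F1:      ", f1_score(y_true, y_pred, average="weighted"))
print(classification_report(y_true, y_pred))
```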

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
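The training script itself is not part of this card. Below is a rough sketch of a PEFT/Transformers setup consistent with these hyperparameters; the LoRA-specific settings (`r`, `lora_alpha`, `target_modules`) are assumptions, since the card does not state them, and the listed Adam betas/epsilon are the Transformers defaults.

```python
# Sketch of a training setup matching the listed hyperparameters.
# LoRA rank/alpha/target modules are assumed values, not from the card.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", num_labels=2
)
lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=16,                                 # assumption: rank not stated
    lora_alpha=32,                        # assumption: not stated
    target_modules=["q_proj", "v_proj"],  # assumption: not stated
)
model = get_peft_model(model, lora_config)

args = TrainingArguments(
    output_dir="LLAMA3_8b_LORA_FOR_CLASSIFICATION",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```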

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Balanced Accuracy | Micro F1 | Macro F1 | Weighted F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:--------:|:--------:|:-----------:|
| 0.5306        | 1.0   | 732  | 0.4840          | 0.8125   | 0.8125            | 0.8125   | 0.8103   | 0.8103      |
| 0.4284        | 2.0   | 1464 | 0.4444          | 0.815    | 0.815             | 0.815    | 0.8147   | 0.8147      |
| 0.3809        | 3.0   | 2196 | 0.4513          | 0.8475   | 0.8475            | 0.8475   | 0.8470   | 0.8470      |
| 0.2413        | 4.0   | 2928 | 0.5228          | 0.87     | 0.87              | 0.87     | 0.8700   | 0.8700      |
| 0.1499        | 5.0   | 3660 | 0.6062          | 0.86     | 0.86              | 0.86     | 0.8600   | 0.8600      |

Per-epoch classification reports (precision / recall / f1-score; each class has support 200, totals 400):

| Epoch | Class 0            | Class 1            | Macro avg          | Weighted avg       |
|:-----:|:------------------:|:------------------:|:------------------:|:------------------:|
| 1.0   | 0.76 / 0.92 / 0.83 | 0.90 / 0.70 / 0.79 | 0.83 / 0.81 / 0.81 | 0.83 / 0.81 / 0.81 |
| 2.0   | 0.84 / 0.78 / 0.81 | 0.79 / 0.85 / 0.82 | 0.82 / 0.81 / 0.81 | 0.82 / 0.81 / 0.81 |
| 3.0   | 0.81 / 0.91 / 0.86 | 0.89 / 0.79 / 0.84 | 0.85 / 0.85 / 0.85 | 0.85 / 0.85 / 0.85 |
| 4.0   | 0.87 / 0.86 / 0.87 | 0.87 / 0.88 / 0.87 | 0.87 / 0.87 / 0.87 | 0.87 / 0.87 / 0.87 |
| 5.0   | 0.86 / 0.85 / 0.86 | 0.86 / 0.86 / 0.86 | 0.86 / 0.86 / 0.86 | 0.86 / 0.86 / 0.86 |

Framework versions

  • PEFT 0.11.1
  • Transformers 4.41.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1
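Because this repository is a LoRA adapter (loaded via PEFT) rather than a full model, it must be applied on top of the base model at inference time. A hypothetical usage sketch with the versions above (the meaning of the two labels is not documented in this card):

```python
# Hypothetical usage: load the adapter on top of Meta-Llama-3-8B for
# binary sequence classification. Label semantics are undocumented.
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

repo_id = "rishavranaut/LLAMA3_8b_LORA_FOR_CLASSIFICATION"
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = AutoPeftModelForSequenceClassification.from_pretrained(
    repo_id, num_labels=2
)
model.eval()

inputs = tokenizer("Example text to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted class:", logits.argmax(dim=-1).item())  # 0 or 1
```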