
# conversation-summ

This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the samsum dataset. It achieves the following results on the evaluation set:

- Loss: 0.4048
- Rouge1: 51.7796
- Rouge2: 26.1341
- RougeL: 41.4013
- RougeLsum: 41.4563
- Gen Len: 29.656

## Model description

A BART-large model, starting from the facebook/bart-large-xsum checkpoint and further fine-tuned on the SAMSum corpus for abstractive summarization of multi-turn conversations.

## Intended uses & limitations

The model is intended for abstractive summarization of short, chat-style conversations like those in SAMSum, as sketched below.
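
A minimal usage sketch with the `transformers` pipeline API (the checkpoint id `apatidar0/conversation-summ` is taken from this card; the dialogue text is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned BART checkpoint as a summarization pipeline.
summarizer = pipeline("summarization", model="apatidar0/conversation-summ")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

# Generation lengths chosen to roughly match the ~30-token Gen Len above.
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```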

## Training and evaluation data

Fine-tuning and evaluation both use the samsum dataset; see the loading sketch below.
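
A short sketch of loading SAMSum with the `datasets` library (2.9.0 per the versions listed below; note that loading this dataset may additionally require the `py7zr` package):

```python
from datasets import load_dataset

# SAMSum: ~16k messenger-style conversations paired with
# human-written abstractive summaries.
dataset = load_dataset("samsum")

example = dataset["train"][0]
print(example["dialogue"])  # multi-turn chat transcript
print(example["summary"])   # reference summary
```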

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training; a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list:

- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
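
A hedged sketch of how these values map onto `Seq2SeqTrainingArguments` in Transformers 4.26 (`output_dir` is a placeholder, not taken from this card; the Adam betas and epsilon listed above are the optimizer defaults):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="conversation-summ",   # placeholder, not from this card
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,    # total_train_batch_size = 1 * 2 = 2
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                        # "Native AMP" mixed precision
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults.
)
```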

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.5781        | 1.0   | 500  | 0.3637          | 50.8871 | 26.6178 | 41.8757 | 41.9291   | 25.16   |
| 0.2183        | 2.0   | 1000 | 0.3586          | 50.7919 | 25.4277 | 40.8428 | 40.8421   | 27.712  |
| 0.1354        | 3.0   | 1500 | 0.4048          | 51.7796 | 26.1341 | 41.4013 | 41.4563   | 29.656  |
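
For reference, a minimal sketch of computing the same ROUGE metrics with the `evaluate` library (the prediction/reference strings are illustrative; the table reports scores scaled by 100):

```python
import evaluate

rouge = evaluate.load("rouge")

# Illustrative model output vs. reference summary.
predictions = ["Amanda baked cookies and will bring Jerry some tomorrow."]
references = ["Amanda baked cookies and will bring some for Jerry tomorrow."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum F-scores in [0, 1]
```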

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
