---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: 01_ToS-BART
  results: []
datasets:
- EE21/ToS-Summaries
language:
- en
pipeline_tag: summarization
---

# BART-ToSSimplify

This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the [EE21/ToS-Summaries](https://huggingface.co/datasets/EE21/ToS-Summaries) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3895
- Rouge1: 0.6186
- Rouge2: 0.4739
- Rougel: 0.5159
- Rougelsum: 0.5152
- Gen Len: 108.6354

## Model description

BART-ToSSimplify is designed to generate simplified summaries of Terms of Service documents.

## Intended uses & limitations

Intended uses:
- Generating simplified summaries of Terms of Service agreements.
- Automating the summarization of legal documents for quick comprehension.

A minimal usage sketch is included at the end of this card.

Limitations:
- The model is trained only on English text and is not expected to produce usable summaries for other languages.
- The quality of generated summaries may vary with the complexity of the source text.
- Documents longer than BART's 1024-token input window must be truncated or chunked before summarization.

## Training and evaluation data

BART-ToSSimplify was trained on a dataset of summaries of various Terms of Service agreements. The dataset was collected and preprocessed to create a training and evaluation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len  |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------:|
| No log        | 1.0   | 360  | 0.3310          | 0.5585 | 0.4013 | 0.4522 | 0.4522    | 116.1105 |
| 0.2783        | 2.0   | 720  | 0.3606          | 0.5719 | 0.4078 | 0.4572 | 0.4568    | 114.6796 |
| 0.2843        | 3.0   | 1080 | 0.3829          | 0.6019 | 0.4456 | 0.4872 | 0.4875    | 110.8066 |
| 0.2843        | 4.0   | 1440 | 0.3599          | 0.6092 | 0.4604 | 0.5049 | 0.5049    | 110.884  |
| 0.1491        | 5.0   | 1800 | 0.3895          | 0.6186 | 0.4739 | 0.5159 | 0.5152    | 108.6354 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
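
## How to use

A minimal usage sketch with the 🤗 Transformers `pipeline` API. The model identifier below is an assumption made for illustration; replace it with the actual repository path of this checkpoint.

```python
from transformers import pipeline

# "EE21/BART-ToSSimplify" is an assumed repository path for illustration;
# substitute the real checkpoint location if it differs.
summarizer = pipeline("summarization", model="EE21/BART-ToSSimplify")

tos_excerpt = (
    "By accessing this service you agree that we may collect, store and "
    "process your personal data, share it with affiliated third parties, "
    "and modify these terms at any time without prior notice."
)

# The length limits bracket the average generated length reported above
# (~109 tokens); tune them for your own documents.
result = summarizer(tos_excerpt, max_length=128, min_length=20, do_sample=False)
print(result[0]["summary_text"])
```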
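
## Reproducing the setup

The hyperparameters listed above map onto `Seq2SeqTrainingArguments` roughly as follows. This is a reconstruction for illustration, not the original training script; `output_dir` and the per-epoch evaluation cadence are assumptions, the latter inferred from the results table.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the listed configuration. The Trainer's default AdamW
# optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08 as on the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="01_ToS-BART",        # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    fp16=True,                       # Native AMP mixed precision
    evaluation_strategy="epoch",     # assumed from the per-epoch results table
    predict_with_generate=True,      # generate summaries during eval for ROUGE
)
```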
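
The reported Rouge1/Rouge2/Rougel/Rougelsum values are consistent with the 🤗 `evaluate` library's `rouge` metric; the sketch below shows how such scores are typically computed, under the assumption that the original evaluation used this metric.

```python
import evaluate

rouge = evaluate.load("rouge")

# Toy prediction/reference pair; in practice these come from model
# generations and the dataset's reference summaries.
predictions = ["the service may share user data with third parties."]
references = ["the company can share your data with affiliated third parties."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```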