---
license: apache-2.0
base_model: allenai/led-base-16384
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: LED-Base-NSPCC
  results: []
---

# LED-Base-NSPCC

This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8734
- Rouge1: 0.4910
- Rouge2: 0.2207
- Rougel: 0.2847
- Rougelsum: 0.2840

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.4662        | 0.9947 | 47   | 1.9451          | 0.4528 | 0.1809 | 0.2560 | 0.2558    |
| 1.6508        | 1.9894 | 94   | 1.8497          | 0.4889 | 0.2146 | 0.2720 | 0.2716    |
| 1.2549        | 2.9841 | 141  | 1.8268          | 0.4812 | 0.2092 | 0.2756 | 0.2753    |
| 0.9955        | 3.9788 | 188  | 1.8734          | 0.4910 | 0.2207 | 0.2847 | 0.2840    |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
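
## How to use

The card does not yet document usage, so the snippet below is a minimal inference sketch, assuming the standard `transformers` seq2seq API for LED checkpoints. The Hub id `LED-Base-NSPCC` is a placeholder; substitute the actual repository path.

```python
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

# Placeholder repository id; replace with the real Hub path of this checkpoint.
model_name = "LED-Base-NSPCC"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = LEDForConditionalGeneration.from_pretrained(model_name)

document = "..."  # long input document to summarize

# The base model supports inputs up to 16384 tokens.
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=16384)

# LED uses sparse local attention; by convention the first token is given
# global attention so it can attend to (and be attended by) the whole sequence.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=512,
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```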