|
--- |
|
tags: |
|
- generated_from_trainer |
|
metrics: |
|
- rouge |
|
model-index: |
|
- name: bert2bert-cnn_dailymail-fp16-finetuned-1.0.0 |
|
  results: []
|
datasets: |
|
- ateneoscsl/BUOD_articlescraper |
|
- cnn_dailymail |
|
language: |
|
- tl |
|
- en |
|
--- |
|
|
|
# BUOD: bert2bert Transformer Model
|
[![Model:Bert2Bert](https://img.shields.io/badge/model-bert2bert-green)](https://huggingface.co/0xhaz/bert2bert-cnn_dailymail-fp16-finetuned-1.0.0) |
|
This model is a fine-tuned version of [patrickvonplaten/bert2bert-cnn_dailymail-fp16](https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16) on KAMI-3000 for the task of Filipino text summarization.
|
Bert2Bert is an EncoderDecoderModel, meaning that both the encoder and the decoder are bert-base-uncased BERT models.
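
As a quick start, the snippet below shows one way to load the checkpoint and generate a summary. The generation settings (beam size, length limits) are illustrative assumptions, not the settings used to produce the scores reported below.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "0xhaz/bert2bert-cnn_dailymail-fp16-finetuned-1.0.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

article = "..."  # a Filipino news article to summarize

# Encode the article; BERT's encoder accepts at most 512 tokens.
inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")

# Beam-search values here are placeholders, not recorded evaluation settings.
output_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    num_beams=4,
    max_length=142,
    early_stopping=True,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```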
|
|
|
|
|
It achieves the following results on the evaluation set: |
|
- Loss: 2.3346 |
|
- Rouge1: 46.3609 |
|
- Rouge2: 18.8105 |
|
- Rougel: 30.215 |
|
- Rougelsum: 42.3642 |
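
Scores in the same format can be computed with the `evaluate` library. This is a minimal sketch under the assumption that `evaluate` was the metric backend (the card pins `datasets` and `transformers` versions but not the metric package), with placeholder prediction/reference strings:

```python
import evaluate

rouge = evaluate.load("rouge")

# predictions / references are parallel lists of generated and gold summaries.
scores = rouge.compute(
    predictions=["ang buod na nabuo ng modelo ..."],   # placeholder
    references=["ang sangguniang buod ..."],           # placeholder
    use_stemmer=True,  # assumption: stemming enabled, as in the default summarization examples
)
print(scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```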
|
|
|
## Fine-tuning / Training procedure
|
#### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 5e-05 |
|
- train_batch_size: 4 |
|
- eval_batch_size: 4 |
|
- seed: 42 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- num_epochs: 2 |
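
For reference, these hyperparameters map onto a standard `Seq2SeqTrainingArguments` configuration roughly as sketched below; the `output_dir`, `fp16`, and evaluation settings are assumptions rather than recorded values.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the configuration implied by the list above. Adam's
# betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults,
# so they need no explicit arguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="bert2bert-cnn_dailymail-fp16-finetuned-1.0.0",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,                    # assumption, per the fp16 base checkpoint
    evaluation_strategy="epoch",  # assumption: the results table reports per-epoch metrics
    predict_with_generate=True,   # assumption: needed to compute ROUGE during evaluation
)
```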
|
|
|
#### Training results |
|
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |
|
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:| |
|
| 2.8263 | 1.0 | 586 | 2.4478 | 45.3367 | 18.3604 | 29.713 | 41.2805 | |
|
| 2.1264 | 2.0 | 1172 | 2.3346 | 46.3609 | 18.8105 | 30.215 | 42.3642 | |
|
|
|
|
|
#### Framework versions |
|
- Transformers 4.26.1 |
|
- Pytorch 1.13.1 |
|
- Datasets 2.10.0 |
|
- Tokenizers 0.13.2 |