Part of the NLPClass collection (5 items): summarization task, fine-tuning for novel chapters.
This model is a fine-tuned version of facebook/bart-large (the training dataset is not specified in the card metadata). Evaluation results are reported in the training table below.

Model description: More information needed.

Intended uses & limitations: More information needed.

Training and evaluation data: More information needed.
Training hyperparameters: More information needed.

Training results:
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Gen Len |
|---|---|---|---|---|---|---|---|---|
| 3.2830 | 1.0 | 317 | 2.7342 | 0.1742 | 0.0364 | 0.1280 | 0.1283 | 20.0 |
| 2.6366 | 2.0 | 634 | 2.7466 | 0.1838 | 0.0448 | 0.1390 | 0.1394 | 20.0 |
| 2.2437 | 3.0 | 951 | 2.7819 | 0.1691 | 0.0374 | 0.1277 | 0.1278 | 20.0 |
| 1.9957 | 4.0 | 1268 | 2.8209 | 0.1782 | 0.0368 | 0.1349 | 0.1349 | 20.0 |
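The Rouge1 column above is the unigram-overlap F1 score between each generated summary and its reference (Rouge2 uses bigrams; RougeL, the longest common subsequence). As a minimal sketch of what Rouge1 measures, here is a plain-Python version of ROUGE-1 F1 — not the `rouge_score` package that actually produced the numbers in the table:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # each unigram counted at most min(cand, ref) times
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# 5 of 6 unigrams overlap in each direction, so P = R = F1 = 5/6
print(round(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"), 4))  # → 0.8333
```

Scores around 0.17, as in the table, are typical for short fixed-length summaries of long narrative text, where many valid summaries share few exact unigrams with the reference.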
Base model: facebook/bart-large
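Since the card does not include a usage snippet, here is a hedged sketch of running inference with the `transformers` summarization pipeline. The repo id below is a placeholder — substitute the actual fine-tuned checkpoint — and `max_length=20` is an assumption drawn from the Gen Len column in the training table:

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual fine-tuned checkpoint.
summarizer = pipeline("summarization", model="your-username/bart-large-novel-summarizer")

chapter = "..."  # a novel chapter; BART truncates input beyond 1024 tokens
result = summarizer(chapter, max_length=20, min_length=5, do_sample=False)
print(result[0]["summary_text"])
```

For chapters longer than BART's 1024-token context window, a common workaround is to summarize the text in overlapping chunks and then summarize the concatenated chunk summaries.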