---
base_model: yhavinga/ul2-large-dutch
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: ul2-large-dutch-finetuned-oba-book-search
results: []
---
# ul2-large-dutch-finetuned-oba-book-search
This model is a PEFT-adapted version of [yhavinga/ul2-large-dutch](https://huggingface.co/yhavinga/ul2-large-dutch), fine-tuned on a sample dataset from the Public Library of Amsterdam (Openbare Bibliotheek Amsterdam, OBA).
It achieves the following results on the evaluation set:
- Loss: 5.4042
- Top-5-accuracy: 0.0
## Model description
This repository holds a [PEFT](https://github.com/huggingface/peft) adapter on top of the Dutch UL2 large model [yhavinga/ul2-large-dutch](https://huggingface.co/yhavinga/ul2-large-dutch), trained for a book-search task on catalogue data from the Public Library of Amsterdam (OBA). Only the adapter weights are stored here; the base model is loaded separately.
## Intended uses & limitations
The adapter is intended for experimentation with natural-language book search over the OBA catalogue. Note that the top-5 accuracy on the evaluation set remained 0.0 throughout training, so in its current state the model should not be relied on for actual search; treat it as a record of a training run rather than a usable retrieval model.
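A minimal usage sketch, assuming the adapter is loaded with `peft` on top of the base checkpoint. The adapter repo id below is a placeholder for wherever this adapter is hosted, and the query is illustrative, since the actual input template used in training is not documented:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the adapter weights on top of it.
base = AutoModelForSeq2SeqLM.from_pretrained("yhavinga/ul2-large-dutch")
tokenizer = AutoTokenizer.from_pretrained("yhavinga/ul2-large-dutch")
# Placeholder repo id; replace with the actual location of this adapter.
model = PeftModel.from_pretrained(base, "ul2-large-dutch-finetuned-oba-book-search")

# Illustrative query only ("children's books about dinosaurs"); the real
# input format is not documented in this card.
inputs = tokenizer("kinderboeken over dinosaurussen", return_tensors="pt")
candidates = model.generate(**inputs, num_beams=5, num_return_sequences=5, max_new_tokens=32)
for c in candidates:
    print(tokenizer.decode(c, skip_special_tokens=True))
```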
## Training and evaluation data
The model was trained on a sample dataset from the Public Library of Amsterdam (OBA). The dataset itself, its size, and the train/evaluation split are not documented in this card.
## Training procedure
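The exact PEFT configuration (adapter type, rank, target modules) is not recorded in this card. Purely as an illustration of how such an adapter is typically set up with PEFT 0.11, a LoRA configuration might look like the sketch below; every value in it is an assumption:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSeq2SeqLM.from_pretrained("yhavinga/ul2-large-dutch")

# All values are illustrative; the actual adapter config is not documented here.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # attention projections in T5/UL2-style blocks
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter parameters are trainable
```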
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
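As a sketch, these settings map onto `Seq2SeqTrainingArguments` roughly as follows; the Adam betas and epsilon listed above are the library defaults, and `eval_steps=200` is inferred from the results table below:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="ul2-large-dutch-finetuned-oba-book-search",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults, so no override is needed.
    eval_strategy="steps",
    eval_steps=200,  # matches the 200-step evaluation interval in the results table
)
```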
### Training results
| Training Loss | Epoch | Step | Validation Loss | Top-5-accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------:|
| 8.2793 | 0.1729 | 200 | 5.9519 | 0.0 |
| 8.1572 | 0.3457 | 400 | 5.9147 | 0.0 |
| 8.1924 | 0.5186 | 600 | 5.8604 | 0.0 |
| 8.021 | 0.6914 | 800 | 5.8270 | 0.0 |
| 7.9558 | 0.8643 | 1000 | 5.7974 | 0.0 |
| 7.9786 | 1.0372 | 1200 | 5.7586 | 0.0 |
| 7.9113 | 1.2100 | 1400 | 5.7306 | 0.0 |
| 8.0483 | 1.3829 | 1600 | 5.7506 | 0.0 |
| 7.8481 | 1.5557 | 1800 | 5.7116 | 0.0 |
| 7.9376 | 1.7286 | 2000 | 5.6599 | 0.0 |
| 7.7537 | 1.9015 | 2200 | 5.6289 | 0.0 |
| 7.7101 | 2.0743 | 2400 | 5.5863 | 0.0 |
| 7.653 | 2.2472 | 2600 | 5.5719 | 0.0 |
| 7.7515 | 2.4201 | 2800 | 5.5510 | 0.0 |
| 7.6844 | 2.5929 | 3000 | 5.5245 | 0.0 |
| 7.7322 | 2.7658 | 3200 | 5.5087 | 0.0 |
| 7.7169 | 2.9386 | 3400 | 5.5065 | 0.0 |
| 7.6177 | 3.1115 | 3600 | 5.4846 | 0.0 |
| 7.6558 | 3.2844 | 3800 | 5.4712 | 0.0 |
| 7.6453 | 3.4572 | 4000 | 5.4564 | 0.0 |
| 7.5664 | 3.6301 | 4200 | 5.4431 | 0.0 |
| 7.5475 | 3.8029 | 4400 | 5.4432 | 0.0 |
| 7.5741 | 3.9758 | 4600 | 5.4393 | 0.0 |
| 7.5523 | 4.1487 | 4800 | 5.4268 | 0.0 |
| 7.6833 | 4.3215 | 5000 | 5.4243 | 0.0 |
| 7.6817 | 4.4944 | 5200 | 5.4098 | 0.0 |
| 7.544 | 4.6672 | 5400 | 5.4070 | 0.0 |
| 7.6062 | 4.8401 | 5600 | 5.4033 | 0.0 |
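The top-5 accuracy metric is not defined in this card. One plausible reading, sketched below purely as an assumption, is the fraction of evaluation queries whose reference title appears among five beam-search candidates:

```python
# Hypothetical definition of top-5 accuracy: the share of queries whose
# reference target appears among 5 generated candidates. This is an assumed
# reading, not taken from the actual training code.
def top5_accuracy(model, tokenizer, queries, references):
    hits = 0
    for query, reference in zip(queries, references):
        inputs = tokenizer(query, return_tensors="pt")
        candidates = model.generate(
            **inputs, num_beams=5, num_return_sequences=5, max_new_tokens=32
        )
        decoded = [tokenizer.decode(c, skip_special_tokens=True) for c in candidates]
        hits += reference in decoded
    return hits / len(queries)
```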
### Framework versions
- PEFT 0.11.0
- Transformers 4.44.2
- Pytorch 1.13.0+cu116
- Datasets 3.0.0
- Tokenizers 0.19.1