AmsterdamDocClassificationLlama200T3Epochs
As part of the Assessing Large Language Models for Document Classification project by the Municipality of Amsterdam, we fine-tune Mistral, Llama, and GEITje for document classification. Fine-tuning is performed on the AmsterdamBalancedFirst200Tokens dataset, in which each document is truncated to its first 200 tokens. In our research, we evaluate fine-tuning these LLMs for one, two, and three epochs. This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf, trained for three epochs.
It achieves the following results on the evaluation set:
- Loss: 0.8116
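For reference, the sketch below shows how the fine-tuned model could be loaded for inference with the Transformers library, assuming the weights are available under the repository ID FemkeBakker/AmsterdamDocClassificationLlama200T3Epochs on the Hugging Face Hub; the prompt text is a placeholder, not the exact prompt used in the project.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository ID assumed from the model name; adjust if it differs.
model_id = "FemkeBakker/AmsterdamDocClassificationLlama200T3Epochs"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Classify a (truncated) document by generating the label as a chat completion.
# The instruction wording here is illustrative only.
messages = [
    {"role": "user", "content": "Classify the following document:\n<document text>"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```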
Training and evaluation data
- The training data consists of 9900 documents and their labels, formatted into conversations (an example of the conversation format is sketched below).
- The evaluation data consists of 1100 documents and their labels, formatted in the same way.
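The exact prompt and label wording are defined in the project repository; as a rough illustration, a single document-label pair formatted as a conversation might look like this (the instruction text and label are placeholders, not the project's actual prompt):

```python
# Hypothetical example of one document-label pair formatted as a conversation.
# The actual instruction text and label set come from the project's code and dataset.
conversation = [
    {"role": "user", "content": "Classify the following document:\n<first 200 tokens of the document>"},
    {"role": "assistant", "content": "<document label>"},
]
```

Such conversations can be rendered into Llama-2's chat format with `tokenizer.apply_chat_template(conversation, tokenize=False)` before training.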
Training procedure
See the GitHub repository for details on the training procedure and the code.
Training hyperparameters
The following hyperparameters were used during training (a corresponding configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
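As a hedged illustration, these settings correspond roughly to the following Hugging Face TrainingArguments; this is a sketch rather than the project's actual training script, and the output directory is a placeholder.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="amsterdam-doc-classification-llama-200t-3epochs",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,   # effective train batch size: 2 * 8 = 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
    adam_beta1=0.9,                  # Adam settings as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```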
Training results
| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 1.0345        | 0.1988 | 123  | 0.9800          |
| 0.8537        | 0.3976 | 246  | 0.8808          |
| 0.5807        | 0.5964 | 369  | 0.8503          |
| 0.7419        | 0.7952 | 492  | 0.8413          |
| 0.9967        | 0.9939 | 615  | 0.8406          |
| 0.7252        | 1.1939 | 738  | 0.8301          |
| 0.9605        | 1.3927 | 861  | 0.8214          |
| 0.7785        | 1.5915 | 984  | 0.8186          |
| 0.7233        | 1.7903 | 1107 | 0.8178          |
| 0.8389        | 1.9891 | 1230 | 0.8173          |
| 0.976         | 2.1891 | 1353 | 0.8148          |
| 0.6826        | 2.3879 | 1476 | 0.8127          |
| 0.7712        | 2.5867 | 1599 | 0.8117          |
| 0.9744        | 2.7855 | 1722 | 0.8116          |
| 1.0399        | 2.9842 | 1845 | 0.8116          |
Training time: fine-tuning the model for three epochs took 2 hours and 3 minutes in total.
Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
Acknowledgements
This model was trained as part of [insert thesis info] in collaboration with Amsterdam Intelligence for the City of Amsterdam.