---
language:
- nl
license: mit
tags:
- trl
- fietje
- alignment-handbook
base_model: microsoft/phi-2
datasets:
- uonlp/CulturaX
- wikimedia/wikipedia
pipeline_tag: text-generation
inference: false
model-index:
- name: fietje-2
results: []
---
<p align="center" style="margin:0;padding:0">
  <img src="https://huggingface.co/BramVanroy/fietje-2/resolve/main/img/fietje-2b-banner-rounded.png" alt="Fietje banner" width="800" style="margin-left:auto; margin-right:auto; display:block"/>
</p>
<div style="margin:auto; margin-top: 0; text-align:center">
<h1 style="margin-bottom: 0">Fietje 2</h1>
<em>An open and efficient LLM for Dutch</em>
</div>
<blockquote class="tip" style="padding: 1.5em; border: 0">
<p align="center" style="text-align: center; margin: 0">
  <a rel="nofollow" href="https://huggingface.co/BramVanroy/fietje-2">👱‍♀️ Base version</a> (this one) -
<a rel="nofollow" href="https://huggingface.co/BramVanroy/fietje-2-instruct">🤖 Instruct version</a> -
<a rel="nofollow" href="https://huggingface.co/BramVanroy/fietje-2-chat">💬 Chat version</a> -
<a rel="nofollow" href="https://huggingface.co/BramVanroy/fietje-2-GGUF">🚀 GGUF of base</a>
</p>
<p align="center" style="text-align: center; margin: 0">
<a href="https://huggingface.co/spaces/BramVanroy/fietje-2b"><strong>Chat with Fietje here!</strong></a>
</p>
</blockquote>
Fietje is an adapted version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2), tailored to Dutch text generation by training on 28B tokens. At 2.7 billion parameters it is small and efficient, yet it performs almost on par with more powerful Dutch LLMs of twice its size, such as [GEITje 7B Ultra](https://huggingface.co/BramVanroy/GEITje-7B-ultra).
A thorough description of the creation and evaluation of Fietje, as well as usage examples, is available in [this GitHub repository](https://github.com/BramVanroy/fietje).
## Intended uses & limitations
The same limitations as [phi-2](https://huggingface.co/microsoft/phi-2#limitations-of-phi-2), and LLMs in general, apply here. LLMs hallucinate, make mistakes, and should not be trusted. Use at your own risk!
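For basic text generation with the base model, a minimal sketch along the lines below should work, assuming the standard `transformers` text-generation pipeline; the prompt and sampling settings are illustrative, not part of this model card:

```python
MODEL_ID = "BramVanroy/fietje-2"  # base model on the Hugging Face Hub


def generate(prompt: str, max_new_tokens: int = 40) -> str:
    """Generate a Dutch continuation with Fietje 2.

    Downloads the ~2.7B-parameter model from the Hub on first call.
    """
    from transformers import pipeline  # lazy import: heavy dependency

    generator = pipeline("text-generation", model=MODEL_ID)
    out = generator(
        prompt,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
    )
    return out[0]["generated_text"]
```

Note that this is the base model: it continues text rather than following instructions. For chat-style interaction, use the instruct or chat version linked above.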
## Training data
Fietje was continually pretrained on 28B Dutch tokens, consisting of the full Dutch portion of Wikipedia (around 15% of the total), supplemented with Dutch tokens from CulturaX. A newer version of this dataset is available [here](https://huggingface.co/datasets/BramVanroy/wikipedia_culturax_dutch), which also describes the filtering applied to ensure high data quality.
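The stated token budget implies roughly the following split; a back-of-the-envelope check (the 15% Wikipedia share is approximate, so these are rough numbers):

```python
# Rough token budget implied by the description above.
total_tokens = 28e9        # 28B Dutch tokens in total
wikipedia_share = 0.15     # approximate share from Dutch Wikipedia

wikipedia_tokens = total_tokens * wikipedia_share   # ~4.2B tokens
culturax_tokens = total_tokens - wikipedia_tokens   # ~23.8B tokens
print(f"{wikipedia_tokens / 1e9:.1f}B Wikipedia, {culturax_tokens / 1e9:.1f}B CulturaX")
```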
## Training procedure
I am thankful to the [Flemish Supercomputer Center](https://www.vscentrum.be/) (VSC) for providing the computational power to accomplish this project. Including time spent waiting for jobs, training took around two weeks on four nodes with 4x A100 80GB GPUs each (16 GPUs in total).
Training was done with the wonderful [alignment-handbook](https://github.com/huggingface/alignment-handbook), using DeepSpeed as the back-end. The exact training recipe and SLURM script are given in the [GitHub repository](https://github.com/BramVanroy/fietje).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 3
- total_train_batch_size: 1920
- total_eval_batch_size: 640
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07
- lr_scheduler_type: linear
- num_epochs: 1.0
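The per-device and total batch sizes above are consistent with each other; a quick arithmetic check (variable names are illustrative):

```python
# Effective batch sizes implied by the hyperparameters above.
per_device_train_batch_size = 40
per_device_eval_batch_size = 40
num_devices = 16                  # 4 nodes x 4 A100 80GB GPUs
gradient_accumulation_steps = 3

total_train_batch_size = (
    per_device_train_batch_size * num_devices * gradient_accumulation_steps
)
total_eval_batch_size = per_device_eval_batch_size * num_devices

print(total_train_batch_size)  # 1920 sequences per optimizer step
print(total_eval_batch_size)   # 640 (no gradient accumulation at eval time)
```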
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6334 | 0.13 | 900 | 1.5937 |
| 1.5469 | 0.26 | 1800 | 1.5051 |
| 1.4937 | 0.4 | 2700 | 1.4628 |
| 1.4633 | 0.53 | 3600 | 1.4375 |
| 1.4485 | 0.66 | 4500 | 1.4203 |
| 1.4374 | 0.79 | 5400 | 1.4085 |
| 1.4278 | 0.92 | 6300 | 1.4013 |
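Since these are cross-entropy losses (in nats per token), they can be converted to token-level perplexity with a simple exponential; a quick check on the first and last validation rows:

```python
import math

# Validation loss values taken from the table above.
first_val_loss = 1.5937   # step 900
final_val_loss = 1.4013   # step 6300

# For a causal LM, perplexity = exp(cross-entropy loss).
first_ppl = math.exp(first_val_loss)
final_ppl = math.exp(final_val_loss)
print(f"validation perplexity: {first_ppl:.2f} -> {final_ppl:.2f}")  # 4.92 -> 4.06
```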
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
These are results on the (English-language) Open LLM Leaderboard. For results specific to Dutch, check out [ScandEval](https://scandeval.com/dutch-nlg/).
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BramVanroy__fietje-2).
| Metric |Value|
|-------------------|----:|
|Avg. | 9.03|
|IFEval (0-shot) |20.98|
|BBH (3-shot) |15.60|
|MATH Lvl 5 (4-shot)| 0.91|
|GPQA (0-shot) | 0.56|
|MuSR (0-shot) | 5.16|
|MMLU-PRO (5-shot) |10.95|
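The reported average is simply the unweighted mean of the six benchmark scores; a quick verification:

```python
# Scores copied from the leaderboard table above.
scores = {
    "IFEval (0-shot)": 20.98,
    "BBH (3-shot)": 15.60,
    "MATH Lvl 5 (4-shot)": 0.91,
    "GPQA (0-shot)": 0.56,
    "MuSR (0-shot)": 5.16,
    "MMLU-PRO (5-shot)": 10.95,
}

avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # 9.03, matching the reported average
```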