---
license: mit
base_model: BramVanroy/fietje-2b
tags:
  - trl
  - fietje
  - alignment-handbook
  - sft
datasets:
  - BramVanroy/ultrachat_200k_dutch
  - BramVanroy/no_robots_dutch
  - BramVanroy/belebele_dutch
model-index:
  - name: fietje-2b-instruct
    results: []
pipeline_tag: text-generation
inference: false
language:
  - nl
---

Fietje banner

# Fietje 2B Instruct

An open and efficient LLM for Dutch

πŸ‘±β€β™€οΈ Base version - πŸ€– Instruct version (this one) - πŸ’¬ Chat version - πŸš€ GGUF of instruct model

## fietje-2b-sft

This model is a fine-tuned version of BramVanroy/fietje-2b, trained on a mix of Dutch instruction-tuning datasets (see *Training and evaluation data* below). It achieves the following results on the evaluation set:

- Loss: 0.8818
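Below is a minimal inference sketch with Transformers. The repository id is taken from this page's path, and the use of a chat template is an assumption based on the chat-style SFT training data; check the tokenizer config if the template call fails.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BramVanroy/fietje-2-instruct"  # assumed repo id, from this page's path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # reuse the dtype stored in the checkpoint
    device_map="auto",   # requires `accelerate`; drop for plain CPU inference
)

# Assumes the tokenizer ships a chat template (typical for SFT chat models).
messages = [{"role": "user", "content": "Wat is de hoofdstad van Nederland?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```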

## Model description

More information needed

## Intended uses & limitations

The same limitations as phi-2, and LLMs in general, apply here. LLMs hallucinate, make mistakes, and should not be trusted. Use at your own risk!

## Training and evaluation data

Fietje 2B Instruct was finetuned from the base model on the following datasets, totalling 201,579 training samples:

- BramVanroy/ultrachat_200k_dutch
- BramVanroy/no_robots_dutch
- BramVanroy/belebele_dutch
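As a rough illustration, the training mix could be assembled with the `datasets` library as sketched below. The split names are assumptions (check each dataset card), and in practice the columns would first need to be normalised to a shared chat format before concatenation.

```python
from datasets import concatenate_datasets, load_dataset

dataset_names = [
    "BramVanroy/ultrachat_200k_dutch",
    "BramVanroy/no_robots_dutch",
    "BramVanroy/belebele_dutch",
]

# Load the (assumed) "train" split of each dataset and concatenate them;
# the card reports 201,579 training samples in total.
parts = [load_dataset(name, split="train") for name in dataset_names]
train_mix = concatenate_datasets(parts)
print(train_mix)
```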

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch expressing them as `TrainingArguments` follows the list):

- learning_rate: 6e-05
- train_batch_size: 42
- eval_batch_size: 42
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 672
- total_eval_batch_size: 672
- optimizer: Adam with betas=(0.9, 0.98) and epsilon=1e-07
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
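As referenced above, here is a minimal sketch of these settings expressed as Hugging Face `TrainingArguments`. This is an illustration under stated assumptions, not the exact recipe: the actual run used the alignment-handbook SFT setup across 16 GPUs, and the output path is hypothetical.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fietje-2b-sft",      # hypothetical path, named after this card
    learning_rate=6e-5,
    per_device_train_batch_size=42,  # 42 per device x 16 devices = 672 total
    per_device_eval_batch_size=42,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-7,
)
```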

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9325        | 1.0   | 178  | 0.9060          |
| 0.8687        | 2.0   | 356  | 0.8850          |
| 0.8385        | 3.0   | 534  | 0.8818          |

### Framework versions

- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2