---
license: apache-2.0
tags:
  - generated_from_trainer
model-index:
  - name: myBit-Llama2-jp-127M-2
    results: []
---

# myBit-Llama2-jp-127M-2

This model was built by referring to the config of TinyLlama/TinyLlama-1.1B-Chat-v1.0. It is a 123M-parameter Bit-Llama2, pre-trained for a single epoch on a Japanese dataset, range3/wiki40b-ja.

It achieves the following results on the evaluation set:

- Loss: 3.0972
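
The pre-training corpus named above is available on the Hugging Face Hub, so a minimal sketch of pulling it with the standard `datasets` API might look like the following. This is illustrative only, not the author's actual data pipeline:

```python
# Minimal sketch: load the pre-training corpus named above with the
# standard `datasets` API. Illustrative, not the author's script.
from datasets import load_dataset

dataset = load_dataset("range3/wiki40b-ja")
print(dataset)  # prints the available splits and column names
```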

## Model description

More information needed

## Intended uses & limitations

More information needed
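
Since usage is not documented yet, here is a hedged sketch of the usual way a causal LM checkpoint is loaded from the Hub. The repo id is inferred from the model name and is an assumption, as is `trust_remote_code=True`, since a Bit-Llama2 checkpoint may ship custom modeling code:

```python
# Hedged sketch, not verified against this repo: the repo id is assumed from
# the model name, and trust_remote_code=True is an assumption in case the
# Bit-Llama2 architecture requires custom modeling code.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "HachiML/myBit-Llama2-jp-127M-2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("日本の首都は", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```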

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 0.00024
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 5000
- num_epochs: 1
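
For reference, a minimal sketch (not the author's actual training script) of how these values map onto `transformers.TrainingArguments`; `output_dir` is hypothetical, and the Adam betas/epsilon shown are the library defaults spelled out explicitly:

```python
# Minimal sketch mapping the hyperparameters above onto TrainingArguments.
# output_dir is hypothetical; Adam betas/epsilon are the library defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="myBit-Llama2-jp-127M-2",  # hypothetical output path
    learning_rate=0.00024,
    per_device_train_batch_size=96,
    per_device_eval_batch_size=96,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="polynomial",
    warmup_steps=5000,
    num_train_epochs=1,
)
```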

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.9469        | 0.05  | 2000  | 4.7495          |
| 4.4263        | 0.1   | 4000  | 4.2282          |
| 4.0799        | 0.15  | 6000  | 3.9351          |
| 3.8199        | 0.2   | 8000  | 3.7209          |
| 3.6462        | 0.25  | 10000 | 3.5755          |
| 3.5239        | 0.29  | 12000 | 3.4803          |
| 3.4727        | 0.34  | 14000 | 3.4181          |
| 3.3953        | 0.39  | 16000 | 3.3752          |
| 3.3562        | 0.44  | 18000 | 3.3395          |
| 3.3272        | 0.49  | 20000 | 3.3166          |
| 3.2965        | 0.54  | 22000 | 3.2869          |
| 3.2771        | 0.59  | 24000 | 3.2680          |
| 3.2545        | 0.64  | 26000 | 3.2478          |
| 3.235         | 0.69  | 28000 | 3.2276          |
| 3.2189        | 0.74  | 30000 | 3.2054          |
| 3.1973        | 0.79  | 32000 | 3.1910          |
| 3.1793        | 0.83  | 34000 | 3.1675          |
| 3.1572        | 0.88  | 36000 | 3.1487          |
| 3.1406        | 0.93  | 38000 | 3.1279          |
| 3.1144        | 0.98  | 40000 | 3.0972          |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2