
amdchess-v6

This model is a fine-tuned version of amd/AMD-Llama-135m on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7752
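
Since usage is otherwise undocumented, here is a minimal, hypothetical sketch of loading this checkpoint from the Hub with transformers. The repo id nlpguy/amdchess-v6 comes from this page; the chess-notation prompt is only a guess based on the model name, since the training data is listed as unknown.

```python
# Hypothetical usage sketch: load the fine-tuned checkpoint from the Hub.
# The chess-notation prompt is a guess based on the model name; the
# training dataset is undocumented.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nlpguy/amdchess-v6")
model = AutoModelForCausalLM.from_pretrained("nlpguy/amdchess-v6")

inputs = tokenizer("1. e4 e5 2. Nf3", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```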

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: GrokAdamW (OptimizerNames.GROKADAMW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • num_epochs: 0.25
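
As a rough guide, the sketch below shows one hypothetical way these hyperparameters map onto transformers.TrainingArguments (Trainer accepts optim="grokadamw", which requires the separate grokadamw package). The dataset wiring is a placeholder, since the actual training data is undocumented.

```python
# Hypothetical sketch mapping the reported hyperparameters onto
# transformers.TrainingArguments. The dataset is a stand-in: the card
# does not document the actual training data.
from torch.utils.data import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model = AutoModelForCausalLM.from_pretrained("amd/AMD-Llama-135m")
tokenizer = AutoTokenizer.from_pretrained("amd/AMD-Llama-135m")


class PlaceholderDataset(Dataset):
    """Stand-in for the undocumented training/eval data."""

    def __init__(self, tokenizer, size=64):
        ids = tokenizer("1. e4 e5 2. Nf3 Nc6", return_tensors="pt")["input_ids"][0]
        self.examples = [{"input_ids": ids, "labels": ids.clone()} for _ in range(size)]

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, i):
        return self.examples[i]


args = TrainingArguments(
    output_dir="amdchess-v6",
    learning_rate=3e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="grokadamw",      # OptimizerNames.GROKADAMW; needs `pip install grokadamw`
    adam_beta1=0.9,         # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    num_train_epochs=0.25,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=PlaceholderDataset(tokenizer),
    eval_dataset=PlaceholderDataset(tokenizer, size=16),
)
trainer.train()
```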

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.2893 | 0.0030 | 5 | 4.4359 |
| 1.9121 | 0.0059 | 10 | 1.6234 |
| 1.5379 | 0.0089 | 15 | 1.5644 |
| 1.4848 | 0.0118 | 20 | 1.4063 |
| 1.3139 | 0.0148 | 25 | 1.2141 |
| 1.3932 | 0.0177 | 30 | 1.1964 |
| 1.0851 | 0.0207 | 35 | 1.2140 |
| 1.4024 | 0.0236 | 40 | 1.1328 |
| 1.0103 | 0.0266 | 45 | 1.0825 |
| 1.0929 | 0.0295 | 50 | 1.0315 |
| 1.0656 | 0.0325 | 55 | 1.1649 |
| 1.2373 | 0.0354 | 60 | 1.0288 |
| 0.9401 | 0.0384 | 65 | 1.0114 |
| 1.0497 | 0.0413 | 70 | 1.0588 |
| 1.045 | 0.0443 | 75 | 0.9853 |
| 0.8332 | 0.0472 | 80 | 1.0362 |
| 0.9828 | 0.0502 | 85 | 0.9737 |
| 0.9275 | 0.0531 | 90 | 0.9487 |
| 0.98 | 0.0561 | 95 | 0.9751 |
| 0.9864 | 0.0590 | 100 | 0.9215 |
| 0.9425 | 0.0620 | 105 | 0.9404 |
| 0.965 | 0.0649 | 110 | 0.9259 |
| 0.9435 | 0.0679 | 115 | 0.9167 |
| 0.9628 | 0.0708 | 120 | 0.9259 |
| 0.9193 | 0.0738 | 125 | 0.8986 |
| 0.9385 | 0.0767 | 130 | 0.9031 |
| 0.8773 | 0.0797 | 135 | 0.8952 |
| 0.7856 | 0.0826 | 140 | 0.8779 |
| 0.9448 | 0.0856 | 145 | 0.8809 |
| 0.8727 | 0.0885 | 150 | 0.8683 |
| 0.9208 | 0.0915 | 155 | 0.8790 |
| 0.8647 | 0.0945 | 160 | 0.8663 |
| 0.8454 | 0.0974 | 165 | 0.8706 |
| 0.9631 | 0.1004 | 170 | 0.8615 |
| 0.8628 | 0.1033 | 175 | 0.8588 |
| 0.9279 | 0.1063 | 180 | 0.8537 |
| 0.862 | 0.1092 | 185 | 0.8468 |
| 0.9091 | 0.1122 | 190 | 0.8471 |
| 0.8762 | 0.1151 | 195 | 0.8434 |
| 0.8887 | 0.1181 | 200 | 0.8431 |
| 0.823 | 0.1210 | 205 | 0.8388 |
| 0.8025 | 0.1240 | 210 | 0.8356 |
| 0.8372 | 0.1269 | 215 | 0.8315 |
| 0.7744 | 0.1299 | 220 | 0.8251 |
| 0.8919 | 0.1328 | 225 | 0.8212 |
| 0.7742 | 0.1358 | 230 | 0.8206 |
| 0.8345 | 0.1387 | 235 | 0.8170 |
| 0.8442 | 0.1417 | 240 | 0.8162 |
| 0.8268 | 0.1446 | 245 | 0.8149 |
| 0.8138 | 0.1476 | 250 | 0.8102 |
| 0.8336 | 0.1505 | 255 | 0.8086 |
| 0.889 | 0.1535 | 260 | 0.8088 |
| 0.7523 | 0.1564 | 265 | 0.8057 |
| 0.7892 | 0.1594 | 270 | 0.8049 |
| 0.7574 | 0.1623 | 275 | 0.8002 |
| 0.8518 | 0.1653 | 280 | 0.7987 |
| 0.8566 | 0.1682 | 285 | 0.7990 |
| 0.7946 | 0.1712 | 290 | 0.7967 |
| 0.8028 | 0.1741 | 295 | 0.7942 |
| 0.8159 | 0.1771 | 300 | 0.7932 |
| 0.7905 | 0.1800 | 305 | 0.7901 |
| 0.8025 | 0.1830 | 310 | 0.7899 |
| 0.7278 | 0.1860 | 315 | 0.7889 |
| 0.8105 | 0.1889 | 320 | 0.7878 |
| 0.7161 | 0.1919 | 325 | 0.7869 |
| 0.7971 | 0.1948 | 330 | 0.7847 |
| 0.7943 | 0.1978 | 335 | 0.7841 |
| 0.7868 | 0.2007 | 340 | 0.7831 |
| 0.7387 | 0.2037 | 345 | 0.7814 |
| 0.8157 | 0.2066 | 350 | 0.7804 |
| 0.8196 | 0.2096 | 355 | 0.7797 |
| 0.8074 | 0.2125 | 360 | 0.7793 |
| 0.8144 | 0.2155 | 365 | 0.7783 |
| 0.7863 | 0.2184 | 370 | 0.7775 |
| 0.7865 | 0.2214 | 375 | 0.7769 |
| 0.8075 | 0.2243 | 380 | 0.7765 |
| 0.8684 | 0.2273 | 385 | 0.7762 |
| 0.7657 | 0.2302 | 390 | 0.7759 |
| 0.7928 | 0.2332 | 395 | 0.7757 |
| 0.8031 | 0.2361 | 400 | 0.7755 |
| 0.738 | 0.2391 | 405 | 0.7753 |
| 0.7716 | 0.2420 | 410 | 0.7752 |
| 0.7283 | 0.2450 | 415 | 0.7752 |
| 0.8095 | 0.2479 | 420 | 0.7752 |
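
For context, if the reported validation loss is the mean per-token cross-entropy (the Trainer default for causal language models, assumed here), the final value corresponds to a perplexity of roughly exp(0.7752) ≈ 2.17:

```python
import math

# Assuming the reported loss is mean per-token cross-entropy,
# perplexity is its exponential.
final_eval_loss = 0.7752
print(math.exp(final_eval_loss))  # ≈ 2.171
```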

Framework versions

  • Transformers 4.46.0
  • Pytorch 2.4.0+cu121
  • Datasets 3.0.2
  • Tokenizers 0.20.1
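
A quick, hypothetical way to confirm a local environment matches these versions before loading or re-training:

```python
# Sanity check: compare installed versions against those listed above.
import datasets
import tokenizers
import torch
import transformers

print(transformers.__version__)  # card: 4.46.0
print(torch.__version__)         # card: 2.4.0+cu121
print(datasets.__version__)      # card: 3.0.2
print(tokenizers.__version__)    # card: 0.20.1
```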