distilbert-base-uncased-ark

This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9627
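
Assuming the reported loss is the mean per-token cross-entropy (in nats) that the Transformers Trainer reports for language-model evaluation, it can be converted to a perplexity as a sketch:

```python
import math

EVAL_LOSS = 0.9627  # evaluation loss reported above

# Perplexity is exp(cross-entropy); this assumes the loss is mean
# per-token cross-entropy in nats, which the card does not state.
perplexity = math.exp(EVAL_LOSS)
print(f"{perplexity:.2f}")  # ~2.62
```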

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 128
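
The learning-rate schedule above can be sketched numerically. This is a hypothetical reconstruction: it assumes the stock Transformers linear schedule with no warmup (the card does not mention warmup steps), and takes the step counts from the results table below (6,541 optimizer steps per epoch):

```python
# Hypothetical reconstruction of the linear LR decay implied by
# lr_scheduler_type=linear; assumes zero warmup steps.
LEARNING_RATE = 5e-5
STEPS_PER_EPOCH = 6541               # from the results table (step 6541 at epoch 1)
TOTAL_STEPS = STEPS_PER_EPOCH * 128  # 837,248, matching the final table row

def linear_lr(step: int) -> float:
    """Learning rate after `step` optimizer steps: linear decay to 0."""
    return LEARNING_RATE * max(0.0, 1.0 - step / TOTAL_STEPS)

print(linear_lr(0))            # 5e-05 at the start of training
print(linear_lr(TOTAL_STEPS))  # 0.0 at the end of epoch 128
```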

Training results

Training Loss Epoch Step Validation Loss
1.9663 1.0 6541 1.8186
1.3751 2.0 13082 1.3407
1.295 3.0 19623 1.2890
1.281 4.0 26164 1.2717
1.2739 5.0 32705 1.2674
1.2764 6.0 39246 1.2732
1.271 7.0 45787 1.2664
1.2649 8.0 52328 1.2615
1.261 9.0 58869 1.2666
1.2588 10.0 65410 1.2503
1.2554 11.0 71951 1.2602
1.2566 12.0 78492 1.2596
1.2572 13.0 85033 1.2525
1.2519 14.0 91574 1.2540
1.2524 15.0 98115 1.2472
1.2532 16.0 104656 1.2569
1.2529 17.0 111197 1.2488
1.2508 18.0 117738 1.2469
1.2489 19.0 124279 1.2497
1.2498 20.0 130820 1.2362
1.2496 21.0 137361 1.2417
1.2494 22.0 143902 1.2527
1.2478 23.0 150443 1.2434
1.2474 24.0 156984 1.2429
1.2413 25.0 163525 1.2441
1.2428 26.0 170066 1.2312
1.237 27.0 176607 1.2435
1.2335 28.0 183148 1.2213
1.2316 29.0 189689 1.2336
1.2309 30.0 196230 1.2152
1.2291 31.0 202771 1.2238
1.2209 32.0 209312 1.2086
1.2198 33.0 215853 1.2139
1.2199 34.0 222394 1.2089
1.2149 35.0 228935 1.2048
1.2092 36.0 235476 1.1919
1.2134 37.0 242017 1.1960
1.2069 38.0 248558 1.1957
1.2029 39.0 255099 1.1841
1.2027 40.0 261640 1.1865
1.1977 41.0 268181 1.1805
1.1957 42.0 274722 1.1898
1.1982 43.0 281263 1.1812
1.1953 44.0 287804 1.1820
1.1972 45.0 294345 1.1701
1.1947 46.0 300886 1.1777
1.1933 47.0 307427 1.1716
1.1911 48.0 313968 1.1764
1.1948 49.0 320509 1.1651
1.1863 50.0 327050 1.1629
1.1824 51.0 333591 1.1569
1.1838 52.0 340132 1.1523
1.1714 53.0 346673 1.1466
1.174 54.0 353214 1.1501
1.1752 55.0 359755 1.1492
1.1712 56.0 366296 1.1486
1.1669 57.0 372837 1.1346
1.1695 58.0 379378 1.1386
1.1671 59.0 385919 1.1386
1.1655 60.0 392460 1.1415
1.1637 61.0 399001 1.1500
1.1615 62.0 405542 1.1346
1.1655 63.0 412083 1.1374
1.166 64.0 418624 1.1359
1.1581 65.0 425165 1.1270
1.1527 66.0 431706 1.1219
1.1461 67.0 438247 1.1128
1.1374 68.0 444788 1.0986
1.1326 69.0 451329 1.0925
1.1244 70.0 457870 1.0820
1.1145 71.0 464411 1.0820
1.1127 72.0 470952 1.0733
1.106 73.0 477493 1.0577
1.097 74.0 484034 1.0520
1.0964 75.0 490575 1.0553
1.0869 76.0 497116 1.0363
1.0863 77.0 503657 1.0426
1.0808 78.0 510198 1.0375
1.0749 79.0 516739 1.0349
1.0743 80.0 523280 1.0265
1.065 81.0 529821 1.0223
1.0612 82.0 536362 1.0164
1.0601 83.0 542903 1.0076
1.0524 84.0 549444 1.0118
1.0502 85.0 555985 1.0046
1.0475 86.0 562526 1.0019
1.0464 87.0 569067 1.0032
1.0414 88.0 575608 1.0004
1.0405 89.0 582149 0.9960
1.0377 90.0 588690 0.9919
1.0333 91.0 595231 0.9923
1.0374 92.0 601772 0.9863
1.0327 93.0 608313 0.9910
1.027 94.0 614854 0.9871
1.0281 95.0 621395 0.9803
1.0275 96.0 627936 0.9797
1.0296 97.0 634477 0.9827
1.023 98.0 641018 0.9835
1.0228 99.0 647559 0.9745
1.0228 100.0 654100 0.9790
1.0207 101.0 660641 0.9786
1.018 102.0 667182 0.9695
1.0195 103.0 673723 0.9819
1.0143 104.0 680264 0.9724
1.0163 105.0 686805 0.9742
1.0149 106.0 693346 0.9785
1.01 107.0 699887 0.9686
1.01 108.0 706428 0.9656
1.0126 109.0 712969 0.9689
1.0108 110.0 719510 0.9658
1.0115 111.0 726051 0.9660
1.0099 112.0 732592 0.9663
1.0091 113.0 739133 0.9784
1.0076 114.0 745674 0.9662
1.0063 115.0 752215 0.9651
1.0077 116.0 758756 0.9670
1.0078 117.0 765297 0.9685
1.0045 118.0 771838 0.9636
1.0072 119.0 778379 0.9723
1.0061 120.0 784920 0.9622
1.0048 121.0 791461 0.9646
1.0039 122.0 798002 0.9642
1.0024 123.0 804543 0.9607
1.0041 124.0 811084 0.9599
1.002 125.0 817625 0.9617
1.0036 126.0 824166 0.9601
1.0083 127.0 830707 0.9605
1.0057 128.0 837248 0.9700

Framework versions

  • Transformers 4.34.1
  • Pytorch 2.1.0
  • Datasets 2.14.6
  • Tokenizers 0.14.1
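
To reproduce this environment, the versions above can be pinned in a requirements file (a sketch; note the PyPI package name for Pytorch is `torch`):

```
transformers==4.34.1
torch==2.1.0
datasets==2.14.6
tokenizers==0.14.1
```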
Model size: 50.9M parameters (F32, Safetensors)