ebayes/tree-crown-latest

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an image dataset loaded with the `imagefolder` builder. It achieves the following results on the evaluation set:

  • Loss: 0.1368
  • Accuracy: 0.95
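
For reference, a minimal inference sketch using the Transformers image-classification pipeline is shown below; the image path is a placeholder, and the label names depend on the class folders the model was trained on.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline("image-classification", model="ebayes/tree-crown-latest")

# "crown.jpg" is a placeholder path; any RGB image of a tree crown works.
predictions = classifier("crown.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```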

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 10
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 150
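
A hedged sketch of how these values map onto `transformers.TrainingArguments` follows; `output_dir`, the evaluation strategy, and `logging_steps` are assumptions not stated in the card, and Adam with betas=(0.9,0.999) and epsilon=1e-08 is the Trainer default optimizer.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tree-crown-latest",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=10,   # train_batch_size above
    per_device_eval_batch_size=4,     # eval_batch_size above
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=150,
    evaluation_strategy="epoch",      # assumed; matches the per-epoch results table
    logging_steps=500,                # Trainer default; consistent with the "No log"
                                      # training-loss entries before step 510
)
```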

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 17 | 2.0491 | 0.45 |
| No log | 2.0 | 34 | 1.7960 | 0.45 |
| No log | 3.0 | 51 | 1.6265 | 0.5 |
| No log | 4.0 | 68 | 1.4328 | 0.6 |
| No log | 5.0 | 85 | 1.3004 | 0.7 |
| No log | 6.0 | 102 | 1.1381 | 0.8 |
| No log | 7.0 | 119 | 1.0114 | 0.9 |
| No log | 8.0 | 136 | 0.9116 | 0.9 |
| No log | 9.0 | 153 | 0.8490 | 0.9 |
| No log | 10.0 | 170 | 0.7989 | 0.9 |
| No log | 11.0 | 187 | 0.7392 | 0.9 |
| No log | 12.0 | 204 | 0.6834 | 0.9 |
| No log | 13.0 | 221 | 0.6688 | 0.9 |
| No log | 14.0 | 238 | 0.6311 | 0.9 |
| No log | 15.0 | 255 | 0.5847 | 0.9 |
| No log | 16.0 | 272 | 0.5544 | 0.9 |
| No log | 17.0 | 289 | 0.5521 | 0.9 |
| No log | 18.0 | 306 | 0.5319 | 0.9 |
| No log | 19.0 | 323 | 0.5228 | 0.9 |
| No log | 20.0 | 340 | 0.4746 | 0.95 |
| No log | 21.0 | 357 | 0.4913 | 0.95 |
| No log | 22.0 | 374 | 0.4453 | 0.9 |
| No log | 23.0 | 391 | 0.4333 | 0.95 |
| No log | 24.0 | 408 | 0.4124 | 0.95 |
| No log | 25.0 | 425 | 0.4303 | 0.95 |
| No log | 26.0 | 442 | 0.4094 | 0.95 |
| No log | 27.0 | 459 | 0.3597 | 0.95 |
| No log | 28.0 | 476 | 0.3644 | 0.95 |
| No log | 29.0 | 493 | 0.3723 | 0.95 |
| 0.6158 | 30.0 | 510 | 0.3200 | 0.95 |
| 0.6158 | 31.0 | 527 | 0.3223 | 0.95 |
| 0.6158 | 32.0 | 544 | 0.3119 | 0.95 |
| 0.6158 | 33.0 | 561 | 0.3002 | 0.95 |
| 0.6158 | 34.0 | 578 | 0.2867 | 0.95 |
| 0.6158 | 35.0 | 595 | 0.3419 | 0.9 |
| 0.6158 | 36.0 | 612 | 0.3020 | 0.9 |
| 0.6158 | 37.0 | 629 | 0.2393 | 0.95 |
| 0.6158 | 38.0 | 646 | 0.3202 | 0.95 |
| 0.6158 | 39.0 | 663 | 0.2727 | 0.95 |
| 0.6158 | 40.0 | 680 | 0.2691 | 0.95 |
| 0.6158 | 41.0 | 697 | 0.3346 | 0.9 |
| 0.6158 | 42.0 | 714 | 0.2446 | 0.95 |
| 0.6158 | 43.0 | 731 | 0.3373 | 0.9 |
| 0.6158 | 44.0 | 748 | 0.2904 | 0.95 |
| 0.6158 | 45.0 | 765 | 0.2307 | 0.95 |
| 0.6158 | 46.0 | 782 | 0.2346 | 0.95 |
| 0.6158 | 47.0 | 799 | 0.2314 | 0.95 |
| 0.6158 | 48.0 | 816 | 0.2209 | 0.95 |
| 0.6158 | 49.0 | 833 | 0.2233 | 0.95 |
| 0.6158 | 50.0 | 850 | 0.2225 | 0.95 |
| 0.6158 | 51.0 | 867 | 0.2326 | 0.95 |
| 0.6158 | 52.0 | 884 | 0.2233 | 0.95 |
| 0.6158 | 53.0 | 901 | 0.2248 | 0.95 |
| 0.6158 | 54.0 | 918 | 0.2268 | 0.95 |
| 0.6158 | 55.0 | 935 | 0.2130 | 0.95 |
| 0.6158 | 56.0 | 952 | 0.2164 | 0.95 |
| 0.6158 | 57.0 | 969 | 0.1972 | 0.95 |
| 0.6158 | 58.0 | 986 | 0.2374 | 0.95 |
| 0.1237 | 59.0 | 1003 | 0.2425 | 0.95 |
| 0.1237 | 60.0 | 1020 | 0.1907 | 0.95 |
| 0.1237 | 61.0 | 1037 | 0.3103 | 0.9 |
| 0.1237 | 62.0 | 1054 | 0.2309 | 0.95 |
| 0.1237 | 63.0 | 1071 | 0.1982 | 0.95 |
| 0.1237 | 64.0 | 1088 | 0.2661 | 0.9 |
| 0.1237 | 65.0 | 1105 | 0.1739 | 0.95 |
| 0.1237 | 66.0 | 1122 | 0.1958 | 0.95 |
| 0.1237 | 67.0 | 1139 | 0.1729 | 0.95 |
| 0.1237 | 68.0 | 1156 | 0.1884 | 0.95 |
| 0.1237 | 69.0 | 1173 | 0.1958 | 0.95 |
| 0.1237 | 70.0 | 1190 | 0.1949 | 0.95 |
| 0.1237 | 71.0 | 1207 | 0.1700 | 0.95 |
| 0.1237 | 72.0 | 1224 | 0.1770 | 0.95 |
| 0.1237 | 73.0 | 1241 | 0.1789 | 0.95 |
| 0.1237 | 74.0 | 1258 | 0.2202 | 0.95 |
| 0.1237 | 75.0 | 1275 | 0.2005 | 0.95 |
| 0.1237 | 76.0 | 1292 | 0.1734 | 0.95 |
| 0.1237 | 77.0 | 1309 | 0.1633 | 0.95 |
| 0.1237 | 78.0 | 1326 | 0.1468 | 0.95 |
| 0.1237 | 79.0 | 1343 | 0.1619 | 0.95 |
| 0.1237 | 80.0 | 1360 | 0.1706 | 0.95 |
| 0.1237 | 81.0 | 1377 | 0.1745 | 0.95 |
| 0.1237 | 82.0 | 1394 | 0.2146 | 0.95 |
| 0.1237 | 83.0 | 1411 | 0.1990 | 0.95 |
| 0.1237 | 84.0 | 1428 | 0.1682 | 0.95 |
| 0.1237 | 85.0 | 1445 | 0.1891 | 0.95 |
| 0.1237 | 86.0 | 1462 | 0.1646 | 0.95 |
| 0.1237 | 87.0 | 1479 | 0.2234 | 0.95 |
| 0.1237 | 88.0 | 1496 | 0.2469 | 0.9 |
| 0.0723 | 89.0 | 1513 | 0.1513 | 0.95 |
| 0.0723 | 90.0 | 1530 | 0.1638 | 0.95 |
| 0.0723 | 91.0 | 1547 | 0.1706 | 0.95 |
| 0.0723 | 92.0 | 1564 | 0.1578 | 0.95 |
| 0.0723 | 93.0 | 1581 | 0.1465 | 0.95 |
| 0.0723 | 94.0 | 1598 | 0.1433 | 0.95 |
| 0.0723 | 95.0 | 1615 | 0.1438 | 0.95 |
| 0.0723 | 96.0 | 1632 | 0.1543 | 0.95 |
| 0.0723 | 97.0 | 1649 | 0.1528 | 0.95 |
| 0.0723 | 98.0 | 1666 | 0.1807 | 0.95 |
| 0.0723 | 99.0 | 1683 | 0.2142 | 0.95 |
| 0.0723 | 100.0 | 1700 | 0.2056 | 0.95 |
| 0.0723 | 101.0 | 1717 | 0.1817 | 0.95 |
| 0.0723 | 102.0 | 1734 | 0.2271 | 0.95 |
| 0.0723 | 103.0 | 1751 | 0.2560 | 0.9 |
| 0.0723 | 104.0 | 1768 | 0.1631 | 0.95 |
| 0.0723 | 105.0 | 1785 | 0.1828 | 0.95 |
| 0.0723 | 106.0 | 1802 | 0.2608 | 0.95 |
| 0.0723 | 107.0 | 1819 | 0.2562 | 0.95 |
| 0.0723 | 108.0 | 1836 | 0.1666 | 0.95 |
| 0.0723 | 109.0 | 1853 | 0.1619 | 0.95 |
| 0.0723 | 110.0 | 1870 | 0.1504 | 0.95 |
| 0.0723 | 111.0 | 1887 | 0.1433 | 0.95 |
| 0.0723 | 112.0 | 1904 | 0.1457 | 0.95 |
| 0.0723 | 113.0 | 1921 | 0.1288 | 1.0 |
| 0.0723 | 114.0 | 1938 | 0.1401 | 1.0 |
| 0.0723 | 115.0 | 1955 | 0.1281 | 0.95 |
| 0.0723 | 116.0 | 1972 | 0.1267 | 0.95 |
| 0.0723 | 117.0 | 1989 | 0.1288 | 0.95 |
| 0.051 | 118.0 | 2006 | 0.1473 | 0.95 |
| 0.051 | 119.0 | 2023 | 0.1106 | 1.0 |
| 0.051 | 120.0 | 2040 | 0.1097 | 1.0 |
| 0.051 | 121.0 | 2057 | 0.1379 | 1.0 |
| 0.051 | 122.0 | 2074 | 0.1347 | 1.0 |
| 0.051 | 123.0 | 2091 | 0.1302 | 0.95 |
| 0.051 | 124.0 | 2108 | 0.1599 | 0.95 |
| 0.051 | 125.0 | 2125 | 0.1574 | 0.95 |
| 0.051 | 126.0 | 2142 | 0.1541 | 0.95 |
| 0.051 | 127.0 | 2159 | 0.1517 | 0.95 |
| 0.051 | 128.0 | 2176 | 0.1462 | 0.95 |
| 0.051 | 129.0 | 2193 | 0.1574 | 0.95 |
| 0.051 | 130.0 | 2210 | 0.1598 | 0.95 |
| 0.051 | 131.0 | 2227 | 0.1520 | 0.95 |
| 0.051 | 132.0 | 2244 | 0.1595 | 0.95 |
| 0.051 | 133.0 | 2261 | 0.1555 | 0.95 |
| 0.051 | 134.0 | 2278 | 0.1686 | 0.95 |
| 0.051 | 135.0 | 2295 | 0.1686 | 0.95 |
| 0.051 | 136.0 | 2312 | 0.1670 | 0.95 |
| 0.051 | 137.0 | 2329 | 0.1533 | 0.95 |
| 0.051 | 138.0 | 2346 | 0.1472 | 0.95 |
| 0.051 | 139.0 | 2363 | 0.1530 | 0.95 |
| 0.051 | 140.0 | 2380 | 0.1563 | 0.95 |
| 0.051 | 141.0 | 2397 | 0.1500 | 0.95 |
| 0.051 | 142.0 | 2414 | 0.1462 | 0.95 |
| 0.051 | 143.0 | 2431 | 0.1432 | 0.95 |
| 0.051 | 144.0 | 2448 | 0.1417 | 0.95 |
| 0.051 | 145.0 | 2465 | 0.1414 | 0.95 |
| 0.051 | 146.0 | 2482 | 0.1362 | 0.95 |
| 0.051 | 147.0 | 2499 | 0.1358 | 0.95 |
| 0.0491 | 148.0 | 2516 | 0.1366 | 0.95 |
| 0.0491 | 149.0 | 2533 | 0.1367 | 0.95 |
| 0.0491 | 150.0 | 2550 | 0.1368 | 0.95 |
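
Training loss is logged every 500 steps by default, which is why the first column reads "No log" until step 510. The accuracy column comes from a per-epoch evaluation callback; the exact metric code is not included in this card, but a minimal sketch of the standard pattern (using the `evaluate` library, an assumption) looks like this:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred packs (logits, labels) for the whole evaluation set.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```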

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1