clip-finetuned-csu-p14-336-e4l58-l

This model is a fine-tuned version of openai/clip-vit-large-patch14-336 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8028
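
The card ships no usage snippet; below is a minimal zero-shot image–text scoring sketch, assuming the standard `transformers` CLIP API and the repo id shown in the model-tree section (`kevinoli/clip-finetuned-csu-p14-336-e4l58-l`). The example image URL and captions are placeholders, not from the training data.

```python
from PIL import Image
import requests
import torch
from transformers import CLIPModel, CLIPProcessor

# Repo id taken from the model-tree section of this card.
model_id = "kevinoli/clip-finetuned-csu-p14-336-e4l58-l"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Any RGB image works; this COCO val image is just an illustration.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# One similarity score per caption; softmax turns them into probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```

Note the processor resizes inputs to 336×336, matching the patch14-336 base checkpoint.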

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-08
  • train_batch_size: 128
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4.0
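
Since `lr_scheduler_type` is linear and no warmup steps are listed, the learning rate presumably decays linearly from 5e-08 to zero over the 37,500 optimizer steps shown in the results table. A small sketch of that schedule (the helper name is illustrative, not from the training code):

```python
# Linear decay from the peak LR to zero over the full run, assuming no warmup
# (the card lists no warmup_steps). Constants come from this card.
PEAK_LR = 5e-08
TOTAL_STEPS = 37500  # final step count in the training-results table

def linear_lr(step: int, peak_lr: float = PEAK_LR, total_steps: int = TOTAL_STEPS) -> float:
    """Learning rate at a given optimizer step under a warmup-free linear schedule."""
    step = min(step, total_steps)
    return peak_lr * (1 - step / total_steps)

print(linear_lr(0))      # peak LR at the first step: 5e-08
print(linear_lr(18750))  # halfway through training: 2.5e-08
print(linear_lr(37500))  # fully decayed at the end: 0.0
```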

Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|---------------|--------|-------|-----------------|
| 0.4667        | 0.0533 | 500   | 1.4426          |
| 0.4532        | 0.1067 | 1000  | 1.3816          |
| 0.3749        | 0.1600 | 1500  | 1.3311          |
| 0.336         | 0.2133 | 2000  | 1.2891          |
| 0.3585        | 0.2666 | 2500  | 1.2536          |
| 0.303         | 0.3200 | 3000  | 1.2203          |
| 0.3242        | 0.3733 | 3500  | 1.1956          |
| 0.2427        | 0.4266 | 4000  | 1.1694          |
| 0.2993        | 0.4799 | 4500  | 1.1456          |
| 0.3183        | 0.5333 | 5000  | 1.1201          |
| 0.307         | 0.5866 | 5500  | 1.0982          |
| 0.2638        | 0.6399 | 6000  | 1.0780          |
| 0.2226        | 0.6933 | 6500  | 1.0613          |
| 0.2453        | 0.7466 | 7000  | 1.0444          |
| 0.272         | 0.7999 | 7500  | 1.0301          |
| 0.283         | 0.8532 | 8000  | 1.0167          |
| 0.2331        | 0.9066 | 8500  | 1.0035          |
| 0.2362        | 0.9599 | 9000  | 0.9925          |
| 0.2396        | 1.0132 | 9500  | 0.9830          |
| 0.2013        | 1.0666 | 10000 | 0.9736          |
| 0.2082        | 1.1199 | 10500 | 0.9639          |
| 0.2023        | 1.1732 | 11000 | 0.9558          |
| 0.2331        | 1.2265 | 11500 | 0.9465          |
| 0.1784        | 1.2799 | 12000 | 0.9392          |
| 0.1953        | 1.3332 | 12500 | 0.9316          |
| 0.1867        | 1.3865 | 13000 | 0.9270          |
| 0.22          | 1.4398 | 13500 | 0.9197          |
| 0.1656        | 1.4932 | 14000 | 0.9148          |
| 0.1968        | 1.5465 | 14500 | 0.9096          |
| 0.1676        | 1.5998 | 15000 | 0.9057          |
| 0.2074        | 1.6532 | 15500 | 0.8994          |
| 0.1847        | 1.7065 | 16000 | 0.8954          |
| 0.1845        | 1.7598 | 16500 | 0.8900          |
| 0.1721        | 1.8131 | 17000 | 0.8873          |
| 0.2627        | 1.8665 | 17500 | 0.8810          |
| 0.1623        | 1.9198 | 18000 | 0.8774          |
| 0.2162        | 1.9731 | 18500 | 0.8713          |
| 0.1802        | 2.0265 | 19000 | 0.8679          |
| 0.179         | 2.0798 | 19500 | 0.8633          |
| 0.1549        | 2.1331 | 20000 | 0.8606          |
| 0.1742        | 2.1864 | 20500 | 0.8585          |
| 0.1448        | 2.2398 | 21000 | 0.8546          |
| 0.2066        | 2.2931 | 21500 | 0.8513          |
| 0.1483        | 2.3464 | 22000 | 0.8481          |
| 0.1813        | 2.3997 | 22500 | 0.8447          |
| 0.1617        | 2.4531 | 23000 | 0.8411          |
| 0.1664        | 2.5064 | 23500 | 0.8394          |
| 0.1786        | 2.5597 | 24000 | 0.8358          |
| 0.1465        | 2.6131 | 24500 | 0.8330          |
| 0.1289        | 2.6664 | 25000 | 0.8314          |
| 0.1662        | 2.7197 | 25500 | 0.8296          |
| 0.1463        | 2.7730 | 26000 | 0.8262          |
| 0.1471        | 2.8264 | 26500 | 0.8249          |
| 0.167         | 2.8797 | 27000 | 0.8219          |
| 0.1268        | 2.9330 | 27500 | 0.8204          |
| 0.177         | 2.9863 | 28000 | 0.8177          |
| 0.1206        | 3.0397 | 28500 | 0.8166          |
| 0.1345        | 3.0930 | 29000 | 0.8156          |
| 0.1907        | 3.1463 | 29500 | 0.8144          |
| 0.1395        | 3.1997 | 30000 | 0.8126          |
| 0.1511        | 3.2530 | 30500 | 0.8112          |
| 0.1334        | 3.3063 | 31000 | 0.8102          |
| 0.1799        | 3.3596 | 31500 | 0.8090          |
| 0.1289        | 3.4130 | 32000 | 0.8081          |
| 0.1545        | 3.4663 | 32500 | 0.8072          |
| 0.1705        | 3.5196 | 33000 | 0.8064          |
| 0.1424        | 3.5730 | 33500 | 0.8055          |
| 0.1873        | 3.6263 | 34000 | 0.8048          |
| 0.1432        | 3.6796 | 34500 | 0.8043          |
| 0.1485        | 3.7329 | 35000 | 0.8037          |
| 0.1286        | 3.7863 | 35500 | 0.8033          |
| 0.1469        | 3.8396 | 36000 | 0.8030          |
| 0.1708        | 3.8929 | 36500 | 0.8029          |
| 0.1226        | 3.9462 | 37000 | 0.8028          |
| 0.1549        | 3.9996 | 37500 | 0.8028          |
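
The table also implies the size of the (otherwise unnamed) training set: 37,500 steps over 4 epochs is 9,375 steps per epoch, which at a train batch size of 128 is roughly 1.2M examples. A back-of-envelope check using only numbers from this card:

```python
# Estimate the training-set size from the card's step counts and batch size.
total_steps = 37500  # last step in the training-results table
num_epochs = 4       # num_epochs hyperparameter
batch_size = 128     # train_batch_size hyperparameter

steps_per_epoch = total_steps // num_epochs
approx_examples = steps_per_epoch * batch_size
print(steps_per_epoch, approx_examples)  # 9375 1200000
```

This is an estimate only; the exact dataset size is not stated on the card.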

Framework versions

  • Transformers 4.45.0.dev0
  • Pytorch 1.12.1
  • Datasets 2.21.0
  • Tokenizers 0.19.1
Model size

  • 428M params (F32 tensors, Safetensors format)

Model tree for kevinoli/clip-finetuned-csu-p14-336-e4l58-l
