
clip-finetuned-csu-p14-336-e5l510-l

This model is a fine-tuned version of openai/clip-vit-large-patch14-336 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4969
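
Because the card does not yet document intended uses, the snippet below is only a minimal zero-shot image-text similarity sketch that assumes this checkpoint is used the same way as the base openai/clip-vit-large-patch14-336 model; the example image URL and text prompts are placeholders.

```python
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Repo id as listed on the model page; usage assumed identical to the base CLIP checkpoint.
model_id = "kevinoli/clip-finetuned-csu-p14-336-e5l510-l"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Placeholder image and prompts for illustration only.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them into probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)
```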

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 5e-10
  • train_batch_size: 128
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5.0
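
For reference, the sketch below shows how these reported values might map onto a transformers `TrainingArguments` object. It is a reconstruction for illustration only: the actual training script, dataset, and data collator are not documented in this card, and the `output_dir` value and the 500-step evaluation interval (read off the results table below) are assumptions.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported hyperparameters; the real
# training pipeline (dataset, collator, Trainer subclass) is not documented here.
training_args = TrainingArguments(
    output_dir="clip-finetuned-csu-p14-336-e5l510-l",  # assumed
    learning_rate=5e-10,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    eval_strategy="steps",  # assumed from the 500-step evaluation cadence in the table
    eval_steps=500,
)
```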

Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.3974        | 0.0921 | 500   | 1.5029          |
| 0.462         | 0.1842 | 1000  | 1.5026          |
| 0.5249        | 0.2763 | 1500  | 1.5022          |
| 0.4642        | 0.3685 | 2000  | 1.5019          |
| 0.5056        | 0.4606 | 2500  | 1.5016          |
| 0.4604        | 0.5527 | 3000  | 1.5013          |
| 0.4508        | 0.6448 | 3500  | 1.5010          |
| 0.4674        | 0.7369 | 4000  | 1.5008          |
| 0.4435        | 0.8290 | 4500  | 1.5005          |
| 0.4573        | 0.9211 | 5000  | 1.5003          |
| 0.4681        | 1.0133 | 5500  | 1.5001          |
| 0.4862        | 1.1054 | 6000  | 1.4999          |
| 0.4231        | 1.1975 | 6500  | 1.4997          |
| 0.3734        | 1.2896 | 7000  | 1.4996          |
| 0.4606        | 1.3817 | 7500  | 1.4994          |
| 0.467         | 1.4738 | 8000  | 1.4992          |
| 0.4202        | 1.5660 | 8500  | 1.4990          |
| 0.4866        | 1.6581 | 9000  | 1.4988          |
| 0.4848        | 1.7502 | 9500  | 1.4987          |
| 0.4385        | 1.8423 | 10000 | 1.4985          |
| 0.4781        | 1.9344 | 10500 | 1.4984          |
| 0.4287        | 2.0265 | 11000 | 1.4983          |
| 0.4392        | 2.1186 | 11500 | 1.4982          |
| 0.4447        | 2.2108 | 12000 | 1.4980          |
| 0.4449        | 2.3029 | 12500 | 1.4980          |
| 0.4748        | 2.3950 | 13000 | 1.4978          |
| 0.4935        | 2.4871 | 13500 | 1.4978          |
| 0.4113        | 2.5792 | 14000 | 1.4977          |
| 0.3934        | 2.6713 | 14500 | 1.4976          |
| 0.4901        | 2.7634 | 15000 | 1.4975          |
| 0.4601        | 2.8556 | 15500 | 1.4974          |
| 0.4224        | 2.9477 | 16000 | 1.4974          |
| 0.4381        | 3.0398 | 16500 | 1.4973          |
| 0.4942        | 3.1319 | 17000 | 1.4973          |
| 0.3885        | 3.2240 | 17500 | 1.4972          |
| 0.4407        | 3.3161 | 18000 | 1.4972          |
| 0.4635        | 3.4083 | 18500 | 1.4971          |
| 0.4228        | 3.5004 | 19000 | 1.4971          |
| 0.464         | 3.5925 | 19500 | 1.4971          |
| 0.4746        | 3.6846 | 20000 | 1.4970          |
| 0.4447        | 3.7767 | 20500 | 1.4970          |
| 0.4663        | 3.8688 | 21000 | 1.4970          |
| 0.4246        | 3.9609 | 21500 | 1.4970          |
| 0.4592        | 4.0531 | 22000 | 1.4970          |
| 0.4181        | 4.1452 | 22500 | 1.4970          |
| 0.4913        | 4.2373 | 23000 | 1.4969          |
| 0.4491        | 4.3294 | 23500 | 1.4969          |
| 0.5054        | 4.4215 | 24000 | 1.4969          |
| 0.3931        | 4.5136 | 24500 | 1.4969          |
| 0.4574        | 4.6057 | 25000 | 1.4969          |
| 0.418         | 4.6979 | 25500 | 1.4969          |
| 0.5482        | 4.7900 | 26000 | 1.4969          |
| 0.4376        | 4.8821 | 26500 | 1.4969          |
| 0.3889        | 4.9742 | 27000 | 1.4969          |

Framework versions

  • Transformers 4.45.0.dev0
  • Pytorch 1.12.1
  • Datasets 2.21.0
  • Tokenizers 0.19.1
