
Ocsai-D Web

This model scores creativity in figural (drawing-based) responses, specifically originality. It is a fine-tuned version of beit-large-patch16-224 and achieves the following results on the evaluation set:

  • Loss: 0.0055
  • MSE: 0.0055
  • Pearson r: 0.8745
  • R²: 0.7224
  • RMSE: 0.0745

It can be tried at https://openscoring.du.edu/draw.
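For local use, the checkpoint can be loaded as a standard Transformers image-classification model with a single regression output. The sketch below builds a small, randomly initialised BEiT-style model offline purely to illustrate the input/output shapes; the reduced config sizes and the assumption that the fine-tuned checkpoint exposes one continuous originality score through `num_labels=1` are illustrative, not taken from the card.

```python
import torch
from transformers import BeitConfig, BeitForImageClassification

# For real scoring you would load the fine-tuned checkpoint instead:
#   model = BeitForImageClassification.from_pretrained("POrg/ocsai-d-web")
# Here a small random-weight BEiT stands in so the example runs offline.
config = BeitConfig(
    image_size=224,
    patch_size=16,
    hidden_size=64,        # shrunk from BEiT-large's 1024 to keep the demo light
    num_hidden_layers=2,   # likewise shrunk; shapes are what matters here
    num_attention_heads=4,
    intermediate_size=128,
    num_labels=1,          # assumption: one continuous originality score
)
model = BeitForImageClassification(config)
model.eval()

# A random 224x224 RGB tensor stands in for a real preprocessed drawing.
pixels = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    score = model(pixel_values=pixels).logits  # shape (batch, 1)
print(score.shape)
```

With the real checkpoint, the scalar in `score` would be the predicted originality of the drawing.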

Model description

See the pre-print:

Acar, S.^, Organisciak, P.^, & Dumas, D. (2023). Automated Scoring of Figural Tests of Creativity with Computer Vision. http://dx.doi.org/10.13140/RG.2.2.26865.25444

^Authors contributed equally.

Intended uses & limitations

This model judges the originality of figural drawings. It has some limitations.

First, originality is partially confounded with elaboration: drawing more tends, in part, to raise the originality score.

Second, the training is specific to one test, so performance may vary on images from other tests.

Training and evaluation data

This is trained on the Multi-Trial Creative Ideation task (MTCI; Barbot 2018), with the data from Patterson et al. (2023).

For Ocsai-Web, we used a larger training split, 95%, and bound zero-originality images to zero.
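The data preparation described above can be sketched as follows. The records, the 0.05 threshold for what counts as a zero-originality image, and the clamping interpretation of "bound to zero" are all assumptions for illustration; the real MTCI data from Patterson et al. (2023) is not reproduced here.

```python
import random

# Hypothetical records: (image_id, originality score in [0, 1]).
records = [(f"img_{i:03d}", round(random.random(), 2)) for i in range(200)]
records += [(f"blank_{i}", 0.05) for i in range(10)]  # near-zero drawings

# Bind zero-originality images to exactly zero (assumption: scores at or
# below a small threshold are clamped down to 0.0).
THRESHOLD = 0.05
records = [(name, 0.0 if score <= THRESHOLD else score)
           for name, score in records]

# 95% / 5% train-eval split, seeded for reproducibility.
random.seed(42)
random.shuffle(records)
cut = int(len(records) * 0.95)
train, evaluation = records[:cut], records[cut:]
print(len(train), len(evaluation))
```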

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 20
  • eval_batch_size: 20
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 160
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.2
  • num_epochs: 12
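The hyperparameters above map directly onto a Transformers `TrainingArguments` config. This is a sketch: `output_dir` is a placeholder, and the optimizer betas/epsilon listed above are the `TrainingArguments` defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

# Config fragment mirroring the hyperparameters listed above.
args = TrainingArguments(
    output_dir="ocsai-d-web",        # hypothetical path
    learning_rate=5e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    seed=42,
    gradient_accumulation_steps=8,   # effective train batch: 20 * 8 = 160
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=12,
)
```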

Training results

| Training Loss | Epoch  | Step | Validation Loss | MSE    | Pearson r | R²      | RMSE   |
|---------------|--------|------|-----------------|--------|-----------|---------|--------|
| 0.0728        | 0.3992 | 25   | 0.0141          | 0.0141 | 0.6466    | -0.0091 | 0.1189 |
| 0.0137        | 0.7984 | 50   | 0.0094          | 0.0094 | 0.7812    | 0.0650  | 0.0968 |
| 0.0153        | 1.1976 | 75   | 0.0118          | 0.0118 | 0.8137    | 0.1092  | 0.1087 |
| 0.0155        | 1.5968 | 100  | 0.0168          | 0.0168 | 0.8303    | -0.3131 | 0.1295 |
| 0.0157        | 1.9960 | 125  | 0.0080          | 0.0080 | 0.8347    | 0.2944  | 0.0893 |
| 0.0087        | 2.3952 | 150  | 0.0068          | 0.0068 | 0.8488    | 0.5258  | 0.0827 |
| 0.0078        | 2.7944 | 175  | 0.0093          | 0.0093 | 0.8541    | 0.3130  | 0.0963 |
| 0.0079        | 3.1936 | 200  | 0.0092          | 0.0092 | 0.8604    | 0.3562  | 0.0960 |
| 0.0073        | 3.5928 | 225  | 0.0076          | 0.0076 | 0.8684    | 0.5507  | 0.0871 |
| 0.007         | 3.9920 | 250  | 0.0082          | 0.0082 | 0.8662    | 0.5539  | 0.0904 |
| 0.0055        | 4.3912 | 275  | 0.0055          | 0.0055 | 0.8727    | 0.6912  | 0.0744 |
| 0.0042        | 4.7904 | 300  | 0.0060          | 0.0060 | 0.8737    | 0.6844  | 0.0773 |
| 0.0037        | 5.1896 | 325  | 0.0061          | 0.0061 | 0.8702    | 0.6496  | 0.0781 |
| 0.0034        | 5.5888 | 350  | 0.0061          | 0.0061 | 0.8707    | 0.6426  | 0.0781 |
| 0.0031        | 5.9880 | 375  | 0.0057          | 0.0057 | 0.8717    | 0.7266  | 0.0757 |
| 0.0023        | 6.3872 | 400  | 0.0056          | 0.0056 | 0.8716    | 0.7084  | 0.0749 |
| 0.002         | 6.7864 | 425  | 0.0056          | 0.0056 | 0.8708    | 0.6710  | 0.0745 |
| 0.0018        | 7.1856 | 450  | 0.0055          | 0.0055 | 0.8745    | 0.7224  | 0.0745 |

Framework versions

  • Transformers 4.40.0
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1