---
license: apache-2.0
base_model: bert-base-uncased
tags:
  - generated_from_trainer
model-index:
  - name: valueeval24-bert-baseline-toy-2024-02-27
    results: []
---

valueeval24-bert-baseline-toy-2024-02-27

This model is a fine-tuned version of bert-base-uncased. The training dataset is not specified in this card; the model name suggests a toy subset of the ValueEval'24 human value detection data. It achieves the following results on the evaluation set:

  • Loss: 0.5633
  • Macro-avg-F1-score: 0.07 (the mean of all 38 per-label scores below: (0.67 + 0.33 + 0.57 + 1.0) / 38 ≈ 0.068)

Per-label F1 scores:

| Value label | F1 |
|-------------|----|
| Self-direction: thought attained | 0.67 |
| Self-direction: thought constrained | 0 |
| Self-direction: action attained | 0 |
| Self-direction: action constrained | 0 |
| Stimulation attained | 0 |
| Stimulation constrained | 0 |
| Hedonism attained | 0 |
| Hedonism constrained | 0 |
| Achievement attained | 0.33 |
| Achievement constrained | 0 |
| Power: dominance attained | 0 |
| Power: dominance constrained | 0 |
| Power: resources attained | 0 |
| Power: resources constrained | 0 |
| Face attained | 0 |
| Face constrained | 0 |
| Security: personal attained | 0 |
| Security: personal constrained | 0 |
| Security: societal attained | 0 |
| Security: societal constrained | 0 |
| Tradition attained | 0 |
| Tradition constrained | 0 |
| Conformity: rules attained | 0 |
| Conformity: rules constrained | 0 |
| Conformity: interpersonal attained | 0.57 |
| Conformity: interpersonal constrained | 0 |
| Humility attained | 0 |
| Humility constrained | 0 |
| Benevolence: caring attained | 0 |
| Benevolence: caring constrained | 0 |
| Benevolence: dependability attained | 0 |
| Benevolence: dependability constrained | 1.0 |
| Universalism: concern attained | 0 |
| Universalism: concern constrained | 0 |
| Universalism: nature attained | 0 |
| Universalism: nature constrained | 0 |
| Universalism: tolerance attained | 0 |
| Universalism: tolerance constrained | 0 |
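The per-label scores suggest a multi-label classification head over the 38 value/attainment labels. Below is a minimal inference sketch under that assumption; the repository id is inferred from the card title and may differ, and the input sentence is made up:

```python
# A minimal inference sketch, assuming this checkpoint is a multi-label
# sequence classifier over the 38 value labels listed above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "JohannesKiesel/valueeval24-bert-baseline-toy-2024-02-27"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("We should protect our environment.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label heads are scored with an independent sigmoid per label.
probs = torch.sigmoid(logits)[0]
for label_id, p in enumerate(probs.tolist()):
    if p > 0.5:
        print(model.config.id2label[label_id], round(p, 3))
```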

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding TrainingArguments follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
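
A hedged reconstruction of a matching TrainingArguments setup, assuming the standard transformers Trainer was used (as the generated_from_trainer tag suggests); output_dir is a placeholder, and the Adam betas/epsilon are passed explicitly to mirror the values listed above even though they are the library defaults:

```python
# Sketch of a TrainingArguments configuration matching the listed
# hyperparameters; not the card author's actual training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="valueeval24-bert-baseline-toy-2024-02-27",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    adam_beta1=0.9,      # Adam defaults, matching the values listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```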

Training results

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2