This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set (a sketch of how such per-class metrics can be computed follows the list):

  • Accuracy: 28.96%
  • single_doc_single_modal Recall: 50.21%
  • single_doc_single_modal Precision: 26.16%
  • single_doc_multi_modals Recall: 25.16%
  • single_doc_multi_modals Precision: 45.62%
  • multi_docs_single_modal Recall: 17.31%
  • multi_docs_single_modal Precision: 40.59%
  • multi_docs_multi_modals Recall: 0%
  • multi_docs_multi_modals Precision: 0%
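
The evaluation script is not published with this card, so the following is only a minimal sketch of how per-class recall and precision like those above are typically computed. The label names come from the metrics list; the label order, variable names, and use of scikit-learn are assumptions.

```python
# Hedged sketch: per-class precision/recall for the four document/modality
# classes. Label order and the scikit-learn dependency are assumptions,
# not taken from the card.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

LABELS = [
    "single_doc_single_modal",
    "single_doc_multi_modals",
    "multi_docs_single_modal",
    "multi_docs_multi_modals",
]

def compute_metrics(y_true, y_pred):
    """y_true / y_pred are integer class ids indexing into LABELS."""
    precision, recall, _, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=list(range(len(LABELS))), zero_division=0
    )
    metrics = {"accuracy": accuracy_score(y_true, y_pred)}
    for i, name in enumerate(LABELS):
        metrics[f"{name}_recall"] = recall[i]
        metrics[f"{name}_precision"] = precision[i]
    return metrics
```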

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged sketch of the equivalent TrainingArguments appears after the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 2
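
The full training script is not included in this card. Assuming the model was trained with the Hugging Face Trainer, the hyperparameters above map onto TrainingArguments roughly as below; output_dir is a placeholder, and the Adam settings listed are the Transformers defaults.

```python
# Hedged sketch only: TrainingArguments mirroring the hyperparameters
# listed above. output_dir is a placeholder, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-finetuned",   # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 (the library defaults):
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
# These arguments would then be passed to transformers.Trainer along with
# the model, tokenizer, and the (undocumented) train/eval datasets.
```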

Framework versions

  • Transformers 4.28.1
  • Pytorch 1.13.1+cu117
  • Datasets 2.14.5
  • Tokenizers 0.13.3