
EconoBert

This model is a fine-tuned version of bert-base-uncased on the BIS_Speeches_97_23 dataset (https://huggingface.co/datasets/samchain/BIS_Speeches_97_23). It achieves the following results on the test set:

  • Accuracy on the masked language modeling (MLM) task: 73%
  • Accuracy on the next sentence prediction (NSP) task: 95%
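
As a quick sanity check of the MLM head, the checkpoint can be probed with the fill-mask pipeline. A minimal sketch (the example sentence is illustrative, not taken from the evaluation data):

```python
from transformers import pipeline

# Probe the MLM head: the pipeline loads the checkpoint's masked-LM weights
fill_mask = pipeline("fill-mask", model="samchain/EconoBert")

# Illustrative economics-flavored prompt (not from the evaluation data)
for pred in fill_mask("The central bank raised interest [MASK] to curb inflation."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```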

Model description

The model is a straightforward fine-tuning of bert-base-uncased on a corpus specific to the domain of economics. It keeps the original architecture and vocabulary, so no call to resize_token_embeddings was required.
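
A minimal sketch verifying that the vocabulary is unchanged (standard Transformers API):

```python
from transformers import AutoConfig, AutoTokenizer

tok = AutoTokenizer.from_pretrained("samchain/EconoBert")
cfg = AutoConfig.from_pretrained("samchain/EconoBert")

# Vocabulary is unchanged from bert-base-uncased (30522 tokens),
# so the embedding matrix never had to be resized
assert len(tok) == cfg.vocab_size
```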

Intended uses & limitations

This model should be used as a backbone for NLP tasks applied to the domains of economics, politics, and finance.
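
For example, the encoder can be loaded as a feature extractor for downstream tasks. A minimal sketch using the TensorFlow classes, matching the framework versions listed below (the input sentence is illustrative):

```python
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("samchain/EconoBert")
model = TFAutoModel.from_pretrained("samchain/EconoBert")

# Encode a domain sentence and keep the [CLS] vector as a pooled feature
inputs = tokenizer("Monetary policy remained accommodative throughout the year.",
                   return_tensors="tf")
outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0, :]  # shape (1, 768)
```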

Training and evaluation data

The dataset used for fine-tuning is https://huggingface.co/datasets/samchain/BIS_Speeches_97_23

The dataset consists of 773k sentence pairs, half of them negative pairs (sequence B does not follow sequence A) and the other half positive pairs (sequence B follows sequence A).

The test set contains 136k pairs.
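
The pairs can be inspected with the datasets library. A minimal sketch (the split and column names are assumptions; check the dataset card):

```python
from datasets import load_dataset

ds = load_dataset("samchain/BIS_Speeches_97_23")
print(ds)              # available splits and sizes
print(ds["train"][0])  # one sentence pair (column names may differ)
```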

Training procedure

The model was fine-tuned for 2 epochs with a batch size of 64 and a sequence length of 128, using the Adam optimizer with a learning rate of 1e-5.
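
A minimal sketch of an equivalent setup in TensorFlow/Keras, not the exact training script; the data pipeline (`train_dataset`) is assumed:

```python
import tensorflow as tf
from transformers import TFBertForPreTraining

# Joint MLM + NSP heads, as in the original BERT pre-training objective
model = TFBertForPreTraining.from_pretrained("bert-base-uncased")

# Hyperparameters reported on this card: Adam, lr 1e-5, 2 epochs,
# batch size 64, sequence length 128
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5)
model.compile(optimizer=optimizer)  # loss is computed internally from labels

# `train_dataset` is assumed: batches of tokenized sentence pairs with
# MLM labels and next-sentence labels
# model.fit(train_dataset, epochs=2)
```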

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
  • training_precision: float32

Training results

The final loss is 1.6046 on the training set and 1.47 on the test set.

Framework versions

  • Transformers 4.31.0
  • TensorFlow 2.12.0
  • Datasets 2.13.1
  • Tokenizers 0.13.3

Citing & Authors

Samuel Chaineau
