---
library_name: transformers
tags:
- generated_from_trainer
datasets:
- gokulsrinivasagan/processed_wikitext-103-raw-v1-ld-20
metrics:
- accuracy
model-index:
- name: bert_base_lda_20_v1
  results:
  - task:
      name: Masked Language Modeling
      type: fill-mask
    dataset:
      name: gokulsrinivasagan/processed_wikitext-103-raw-v1-ld-20
      type: gokulsrinivasagan/processed_wikitext-103-raw-v1-ld-20
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.155817370175736
---

# bert_base_lda_20_v1

This model was trained on the gokulsrinivasagan/processed_wikitext-103-raw-v1-ld-20 dataset (no base checkpoint is specified). It achieves the following results on the evaluation set:

- Loss: 8.2674
- Accuracy: 0.1558
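
The checkpoint can be exercised as a standard fill-mask model. The snippet below is a minimal sketch, assuming the checkpoint is published on the Hub as `gokulsrinivasagan/bert_base_lda_20_v1` (the hub id is inferred from this card and may differ):

```python
from transformers import pipeline

# Hub id inferred from this card; adjust if the checkpoint lives elsewhere.
fill_mask = pipeline("fill-mask", model="gokulsrinivasagan/bert_base_lda_20_v1")

# BERT-style MLM checkpoints expect the tokenizer's [MASK] token in the input.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```

Given the evaluation accuracy of roughly 0.16, outputs should be treated as a research artifact rather than a production-ready fill-mask model.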

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows this list):

- learning_rate: 0.0001
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 25
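
As a reference, these hyperparameters map onto `transformers.TrainingArguments` roughly as follows. This is a sketch, not the original training script: the `output_dir` is arbitrary, and treating 96 as a per-device batch size is an assumption.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_base_lda_20_v1",  # arbitrary; not taken from the original run
    learning_rate=1e-4,
    per_device_train_batch_size=96,    # assumption: 96 was the per-device size
    per_device_eval_batch_size=96,
    seed=10,
    optim="adamw_torch",               # AdamW; betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=10000,
    num_train_epochs=25,
)
```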

### Training results

| Training Loss | Epoch   | Step  | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 8.6597        | 4.1982  | 10000 | 8.6005          | 0.1515   |
| 8.3979        | 8.3963  | 20000 | 8.4047          | 0.1534   |
| 8.3034        | 12.5945 | 30000 | 8.3495          | 0.1549   |
| 8.239         | 16.7926 | 40000 | 8.3364          | 0.1537   |
| 8.2045        | 20.9908 | 50000 | 8.3113          | 0.1524   |

### Framework versions

- Transformers 4.46.1
- Pytorch 2.2.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1