distilgpt2-2k_clean_medical_articles_causal_language_model

This model is a fine-tuned version of distilgpt2. It achieves the following results on the evaluation set:

  • Loss: 2.9268

Model description

This model is distilgpt2 fine-tuned for causal language modeling on a corpus of roughly 2,000 clean medical articles.

For details on how it was created, see the project notebook: https://github.com/DunnBC22/NLP_Projects/blob/main/Causal%20Language%20Modeling/2000%20Clean%20Medical%20Articles/2%2C000%20Clean%20Medical%20Articles%20-%20CLM.ipynb
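As a minimal usage sketch (assuming the model is hosted on the Hugging Face Hub under the ID shown on this page), it can be loaded with the transformers text-generation pipeline; the prompt below is only an illustration:

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
# (model ID taken from this page; adjust if hosted elsewhere).
generator = pipeline(
    "text-generation",
    model="DunnBC22/distilgpt2-2k_clean_medical_articles_causal_language_model",
)

# Generate a continuation of a medical-style prompt.
output = generator(
    "Regular exercise can reduce the risk of",
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
)
print(output[0]["generated_text"])
```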

Intended uses & limitations

This model is intended to demonstrate my ability to solve a complex problem using technology.

Training and evaluation data

Dataset Source: https://www.kaggle.com/datasets/trikialaaa/2k-clean-medical-articles-medicalnewstoday

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
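
As a rough sketch, these settings map onto the transformers TrainingArguments below; the exact Trainer setup lives in the linked notebook, and output_dir here is a placeholder:

```python
from transformers import TrainingArguments

# Hyperparameters as reported above. The Adam settings
# (betas=(0.9, 0.999), epsilon=1e-08) are the library defaults,
# so they need no explicit arguments here.
training_args = TrainingArguments(
    output_dir="distilgpt2-2k_clean_medical_articles",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```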

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1211        | 1.0   | 1991 | 2.9740          |
| 2.9980        | 2.0   | 3982 | 2.9367          |
| 2.9484        | 3.0   | 5973 | 2.9268          |

Perplexity: 18.67
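
The reported perplexity follows directly from the final validation loss, since the perplexity of a causal language model is the exponential of its mean cross-entropy loss:

```python
import math

# Perplexity of a causal LM is exp(mean cross-entropy loss).
eval_loss = 2.9268  # final validation loss from the table above
perplexity = math.exp(eval_loss)
print(f"{perplexity:.2f}")  # 18.67
```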

Framework versions

  • Transformers 4.26.1
  • PyTorch 1.12.1
  • Datasets 2.9.0
  • Tokenizers 0.12.1