---
license: apache-2.0
base_model: bert-base-uncased
tags:
  - biology
  - NLP
  - text-classification
  - drugs
  - BERT
metrics:
  - accuracy
  - precision
  - recall
  - f1
model-index:
  - name: bert-drug-review-to-condition
    results: []
language:
  - en
library_name: transformers
datasets:
  - Zakia/drugscom_reviews
---

# bert-drug-review-to-condition

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [Zakia/drugscom_reviews](https://huggingface.co/datasets/Zakia/drugscom_reviews) dataset (Drugs.com drug reviews). It achieves the following results on the evaluation set:

- Loss: 0.4308
- Accuracy: 0.9209
- Precision: 0.9061
- Recall: 0.9209
- F1: 0.9106

## Model description

This is a fine-tuning of the BERT base model on drug-review data for text classification: the model predicts a medical condition from the text of a patient's drug review.
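
A minimal usage sketch with the `transformers` pipeline API; the hub repo id `Marcuswas/bert-drug-review-to-condition` is an assumption and may differ from the actual path:

```python
from transformers import pipeline

# Assumed hub repo id; replace with the actual model path.
classifier = pipeline(
    "text-classification",
    model="Marcuswas/bert-drug-review-to-condition",
)

# Inputs should match the training preprocessing described below:
# lowercased review text of at least 16 characters.
review = "this medication cleared my symptoms within two weeks with no side effects."
print(classifier(review))  # [{'label': <predicted condition>, 'score': ...}]
```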

## Intended uses & limitations

This model was developed as a personal project.

## Training and evaluation data

Kallumadi, Surya and Gräßer, Felix. (2018). Drug Reviews (Drugs.com). UCI Machine Learning Repository. https://doi.org/10.24432/C5SK5S.

## Training procedure

Multiclass classification: the model predicts the `condition` feature from the `review` feature. Only the first 21 conditions are selected, the `review` text is lowercased, and only reviews of at least 16 characters are kept.
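
A sketch of this preprocessing with the `datasets` library; the split name and the interpretation of "first 21 conditions" (order of appearance) are assumptions:

```python
from datasets import load_dataset

ds = load_dataset("Zakia/drugscom_reviews", split="train")  # assumed split name

# Collect the first 21 distinct conditions (assumed: order of appearance).
conditions = []
for c in ds["condition"]:
    if c is not None and c not in conditions:
        conditions.append(c)
        if len(conditions) == 21:
            break
label2id = {c: i for i, c in enumerate(conditions)}

# Keep only reviews for those conditions, with at least 16 characters.
def keep(ex):
    return (
        ex["condition"] in label2id
        and ex["review"] is not None
        and len(ex["review"]) >= 16
    )

ds = ds.filter(keep)

# Lowercase the review text and attach integer labels.
ds = ds.map(
    lambda ex: {"review": ex["review"].lower(), "label": label2id[ex["condition"]]}
)
```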

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
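
These values match the `transformers` `Trainer` defaults; a sketch of the corresponding `TrainingArguments`, where the output directory and per-epoch evaluation are assumptions (the latter consistent with the step counts in the results table below):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-drug-review-to-condition",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    # The Adam betas/epsilon listed above are the AdamW defaults in transformers.
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    evaluation_strategy="epoch",  # assumed; matches one evaluation per epoch below
)
```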

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log        | 1.0   | 113  | 1.1375          | 0.7747   | 0.7301    | 0.7747 | 0.7450 |
| No log        | 2.0   | 226  | 0.5595          | 0.8854   | 0.8675    | 0.8854 | 0.8728 |
| No log        | 3.0   | 339  | 0.4308          | 0.9209   | 0.9061    | 0.9209 | 0.9106 |
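
Recall equals accuracy in every row, which is what weighted-average multiclass metrics produce; a sketch of a matching `compute_metrics` function under that assumed averaging scheme:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Weighted averaging makes multiclass recall coincide with accuracy,
    # as seen in the table above (assumed averaging scheme).
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```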

### Framework versions

- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1