---
dataset_info:
  features:
    - name: patient_id
      dtype: int64
    - name: drugName
      dtype: string
    - name: condition
      dtype: string
    - name: review
      dtype: string
    - name: rating
      dtype: float64
    - name: date
      dtype: string
    - name: usefulCount
      dtype: int64
    - name: review_length
      dtype: int64
  splits:
    - name: train
      num_bytes: 16479750.174480557
      num_examples: 27703
    - name: test
      num_bytes: 27430466
      num_examples: 46108
  download_size: 25530005
  dataset_size: 43910216.17448056
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
license: odbl
task_categories:
  - text-classification
language:
  - en
size_categories:
  - 10K<n<100K
---

## Dataset Details

1. Dataset Loading:

Initially, we load the Drug Review Dataset from the UC Irvine Machine Learning Repository. This dataset contains patient reviews of different drugs, along with the medical condition being treated and the patients' satisfaction ratings.
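A minimal loading sketch: the UCI repository distributes the raw data as tab-separated files (`drugsComTrain_raw.tsv` / `drugsComTest_raw.tsv`; the filenames and the two sample rows below are illustrative, not actual dataset records):

```python
import io
import pandas as pd

# Stand-in for one of the raw UCI TSV files; rows are illustrative examples
# of the file layout, not real dataset records.
raw_tsv = io.StringIO(
    "patient_id\tdrugName\tcondition\treview\trating\tdate\tusefulCount\n"
    '206461\tValsartan\tLeft Ventricular Dysfunction\t"No side effects so far."\t9.0\tMay 20, 2012\t27\n'
    '95260\tGuanfacine\tADHD\t"Halfway through the fourth week."\t8.0\tApril 27, 2010\t192\n'
)

df = pd.read_csv(raw_tsv, sep="\t")
print(df.shape)          # (2, 7): two sample rows, seven raw columns
print(list(df.columns))
```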

2. Data Preprocessing:

The dataset is preprocessed to ensure data integrity and consistency. We handle missing values and ensure that each patient ID is unique across the dataset.
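A sketch of these two checks with pandas, on a toy frame that contains exactly the problems the step targets (a duplicated patient ID and a missing value; all values are illustrative):

```python
import pandas as pd

# Toy frame: patient_id 2 appears twice, and one condition is missing.
df = pd.DataFrame({
    "patient_id": [1, 2, 2, 3],
    "condition": ["ADHD", "Pain", "Pain", None],
    "review": ["good", "ok", "ok again", "bad"],
})

clean = (
    df.drop_duplicates(subset="patient_id")  # keep one row per patient ID
      .dropna(subset=["condition"])          # drop rows with missing values
      .reset_index(drop=True)
)
print(list(clean["patient_id"]))  # [1, 2]
```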

3. Text Preprocessing:

Textual fields, such as the review and condition columns, are preprocessed. This includes converting text to lowercase and decoding HTML entities to ensure proper text representation.
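Both operations are covered by the standard library; a minimal normalization helper could look like this (the function name is illustrative):

```python
import html

def normalize_text(text: str) -> str:
    """Lowercase the text and decode HTML entities (e.g. &#039; -> ')."""
    return html.unescape(text).lower()

print(normalize_text("It&#039;s been GREAT &amp; effective"))
# -> it's been great & effective
```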

4. Tokenization:

We tokenize the text data using the BERT tokenizer, which converts each text example into a sequence of tokens suitable for input into a BERT-based model.
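In practice this is done with `transformers.BertTokenizer`, which requires downloading a pretrained vocabulary. As a self-contained illustration of the underlying idea, here is a toy greedy longest-match (WordPiece-style) tokenizer over a tiny made-up vocabulary:

```python
# Toy WordPiece-style tokenizer: greedily match the longest vocabulary
# piece at each position. The vocabulary below is purely illustrative;
# a real pipeline would use transformers.BertTokenizer.
VOCAB = {"[UNK]", "head", "##ache", "pain", "no", "##s"}

def wordpiece(word: str) -> list[str]:
    pieces, start = [], 0
    while start < len(word):
        # Try the longest candidate first; continuation pieces get "##".
        for end in range(len(word), start, -1):
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in VOCAB:
                pieces.append(piece)
                start = end
                break
        else:
            return ["[UNK]"]  # no vocabulary piece matched
    return pieces

print(wordpiece("headache"))  # -> ['head', '##ache']
print(wordpiece("pains"))     # -> ['pain', '##s']
```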

5. Dataset Splitting:

The dataset is split into training, validation, and test sets. We ensure that each split contains a representative distribution of the data, maintaining the original dataset's characteristics.
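One way to keep each split representative is a stratified split by condition. The helper below is a simple stdlib stand-in for e.g. scikit-learn's stratified `train_test_split` (function name and fractions are illustrative; applying it twice would also carve a validation set out of the training part):

```python
import random
from collections import defaultdict

def stratified_split(rows, key, test_frac=0.2, seed=0):
    """Split rows so each value of `key` keeps roughly the same
    proportion in both resulting parts."""
    rng = random.Random(seed)
    by_key = defaultdict(list)
    for row in rows:
        by_key[row[key]].append(row)
    train, test = [], []
    for group in by_key.values():
        rng.shuffle(group)
        cut = int(len(group) * test_frac)
        test.extend(group[:cut])
        train.extend(group[cut:])
    return train, test

rows = [{"condition": c} for c in ["ADHD"] * 10 + ["Pain"] * 5]
train, test = stratified_split(rows, "condition")
print(len(train), len(test))  # 12 3
```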

6. Dataset Statistics:

Basic statistics, such as the frequency of medical conditions, are computed from the training set to provide insights into the dataset's distribution.
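Condition frequencies can be computed directly with `collections.Counter` (the condition values below are illustrative):

```python
from collections import Counter

# Illustrative condition column from the training set.
conditions = ["ADHD", "Pain", "ADHD", "Acne", "ADHD"]

freq = Counter(conditions)
print(freq.most_common(1))  # [('ADHD', 3)]
```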

7. Dataset Export:

Finally, the preprocessed and split dataset is exported to the Hugging Face Datasets format and pushed to the Hugging Face Hub, making it readily accessible for further research and experimentation by the community.

## Dataset Card Authors

Mouwiya S.A. Al-Qaisieh