---
dataset_info:
  features:
  - name: patient_id
    dtype: int64
  - name: drugName
    dtype: string
  - name: condition
    dtype: string
  - name: review
    dtype: string
  - name: rating
    dtype: float64
  - name: date
    dtype: string
  - name: usefulCount
    dtype: int64
  - name: review_length
    dtype: int64
  splits:
  - name: train
    num_bytes: 16479750.174480557
    num_examples: 27703
  - name: test
    num_bytes: 27430466
    num_examples: 46108
  download_size: 25530005
  dataset_size: 43910216.17448056
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: odbl
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
---

## Dataset Details

### 1. Dataset Loading:
Initially, we load the Drug Review Dataset from the UC Irvine Machine Learning Repository. This dataset contains patient reviews of different drugs, along with the medical condition being treated and the patients' satisfaction ratings.
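As an illustrative sketch (not the exact pipeline), assuming the raw data is the pair of tab-separated files shipped in the UCI archive (`drugsComTrain_raw.tsv` and `drugsComTest_raw.tsv`), loading with the `datasets` library could look like:

```python
from datasets import load_dataset

# The UCI archive ships the data as two tab-separated files;
# the paths below assume they have been downloaded and unzipped locally.
data_files = {
    "train": "drugsComTrain_raw.tsv",
    "test": "drugsComTest_raw.tsv",
}
drug_dataset = load_dataset("csv", data_files=data_files, delimiter="\t")
```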

### 2. Data Preprocessing:
The dataset is preprocessed to ensure data integrity and consistency. We handle missing values and ensure that each patient ID is unique across the dataset.
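A minimal sketch of these checks, assuming the unnamed index column of the raw TSVs (`Unnamed: 0`) serves as the patient ID:

```python
# Drop rows where the condition is missing
drug_dataset = drug_dataset.filter(lambda x: x["condition"] is not None)

# Treat the raw files' unnamed index column as the patient ID
drug_dataset = drug_dataset.rename_column("Unnamed: 0", "patient_id")

# Verify patient IDs are unique within each split
for split, ds in drug_dataset.items():
    assert len(set(ds["patient_id"])) == len(ds), f"duplicate IDs in {split}"
```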

### 3. Text Preprocessing:
Textual fields, such as the reviews and medical conditions, undergo normalization: text is converted to lowercase and HTML entities (e.g., `&#039;`) are unescaped so that reviews read as plain text.
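For illustration, both steps can be applied with `Dataset.map`; the `review_length` column in the schema above is assumed here to be a simple whitespace word count:

```python
import html

def normalize(example):
    # Unescape HTML entities, then lowercase
    review = html.unescape(example["review"]).lower()
    return {
        "review": review,
        "condition": example["condition"].lower(),
        "review_length": len(review.split()),  # assumed: word count
    }

drug_dataset = drug_dataset.map(normalize)
```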

### 4. Tokenization:
We tokenize the text data using the BERT tokenizer, which converts each text example into a sequence of tokens suitable for input into a BERT-based model.
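A sketch using a `transformers` tokenizer (the `bert-base-uncased` checkpoint is an assumption; any BERT-style checkpoint works the same way):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Truncate to BERT's 512-token input limit
    return tokenizer(batch["review"], truncation=True, max_length=512)

tokenized_dataset = drug_dataset.map(tokenize, batched=True)
```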

### 5. Dataset Splitting:
The dataset is split into training, validation, and test sets (the Hub release exposes the train and test splits; see the configuration above). We ensure that each split preserves a representative distribution of the data, maintaining the original dataset's characteristics.
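One way to carve a validation set out of the training data (the 10% size and the seed are illustrative choices, not the exact values used):

```python
# Hold out 10% of the training data as a validation set, seeded for reproducibility
split = drug_dataset["train"].train_test_split(test_size=0.1, seed=42)
drug_dataset["train"] = split["train"]
drug_dataset["validation"] = split["test"]
```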

### 6. Dataset Statistics:
Basic statistics, such as the frequency of medical conditions, are computed from the training set to provide insights into the dataset's distribution.
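For example, condition frequencies can be computed directly from the training split:

```python
from collections import Counter

# Frequency of medical conditions in the training split
condition_counts = Counter(drug_dataset["train"]["condition"])
print(condition_counts.most_common(10))
```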

### 7. Dataset Export:
Finally, the preprocessed and split dataset is exported to the Hugging Face Datasets format and pushed to the Hugging Face Hub, making it readily accessible for further research and experimentation by the community.
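A sketch of the final export step (the repository ID below is a placeholder; authentication via `huggingface-cli login` or an `HF_TOKEN` is required beforehand):

```python
# Push all splits to the Hugging Face Hub under a placeholder repo ID
drug_dataset.push_to_hub("your-username/drug-reviews")
```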

## Dataset Card Authors 
Mouwiya S.A. Al-Qaisieh