Mouwiya committed
Commit fed4675
1 Parent(s): 8868a9d

Update README.md

Files changed (1)
  README.md +22 -9
README.md CHANGED
@@ -33,24 +33,37 @@ configs:
      path: data/train-*
    - split: test
      path: data/test-*
  ---

  ## Dataset Details

- ###1.Dataset Loading: Initially, we load the Drug Review Dataset from the UC Irvine Machine Learning Repository. This dataset contains patient reviews of different drugs, along with the medical condition being treated and the patients' satisfaction ratings.

- 2.Data Preprocessing: The dataset is preprocessed to ensure data integrity and consistency. We handle missing values and ensure that each patient ID is unique across the dataset.

- 3.Text Preprocessing: Textual data, such as the reviews and medical conditions, undergo preprocessing steps. This includes converting text to lowercase and handling HTML entities to ensure proper text representation.

- 4.Tokenization: We tokenize the text data using the BERT tokenizer, which converts each text example into a sequence of tokens suitable for input into a BERT-based model.

- 5.Dataset Splitting: The dataset is split into training, validation, and test sets. We ensure that each split contains a representative distribution of the data, maintaining the original dataset's characteristics.

- 6.Dataset Statistics: Basic statistics, such as the frequency of medical conditions, are computed from the training set to provide insights into the dataset's distribution.

- 7.Dataset Export: Finally, the preprocessed and split dataset is exported to the Hugging Face Datasets format and pushed to the Hugging Face Hub, making it readily accessible for further research and experimentation by the community.

  ## Dataset Card Authors
- Mouwiya S.A. Al-Qaisieh
-
 
      path: data/train-*
    - split: test
      path: data/test-*
+ license: odbl
+ task_categories:
+ - text-classification
+ language:
+ - en
+ size_categories:
+ - 10M<n<100M
  ---

  ## Dataset Details
 
+ ### 1. Dataset Loading:
+ Initially, we load the Drug Review Dataset from the UC Irvine Machine Learning Repository. This dataset contains patient reviews of different drugs, along with the medical condition being treated and the patients' satisfaction ratings.
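For reference, a minimal loading sketch, not the card's own pipeline: it assumes the UCI archive's tab-separated layout, conventionally shipped as drugsComTrain_raw.tsv and drugsComTest_raw.tsv with the review ID in the first column.

```python
import pandas as pd

# The UCI archive ships two TSV files; the first column is a unique review ID.
train_df = pd.read_csv("drugsComTrain_raw.tsv", sep="\t", index_col=0)
test_df = pd.read_csv("drugsComTest_raw.tsv", sep="\t", index_col=0)

# Expected columns: drugName, condition, review, rating, date, usefulCount.
print(train_df.shape, test_df.shape)
```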
 
+ ### 2. Data Preprocessing:
+ The dataset is preprocessed to ensure data integrity and consistency. We handle missing values and ensure that each patient ID is unique across the dataset.
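A sketch of what this step could look like, continuing from the loading example above; the card does not spell out the exact rules, so these are hypothetical.

```python
# Drop rows with missing condition or review text.
train_df = train_df.dropna(subset=["condition", "review"])

# Treat the TSV index as the patient/review ID and keep each one only once.
train_df = train_df[~train_df.index.duplicated(keep="first")]
```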
 
+ ### 3. Text Preprocessing:
+ Textual data, such as the reviews and medical conditions, undergoes preprocessing steps. This includes converting text to lowercase and handling HTML entities to ensure proper text representation.
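Lowercasing and HTML-entity handling can be done with the Python standard library, for example:

```python
import html

def clean_text(text: str) -> str:
    # Decode HTML entities (e.g. "&#039;" -> "'"), then lowercase.
    return html.unescape(str(text)).lower()

train_df["review"] = train_df["review"].map(clean_text)
train_df["condition"] = train_df["condition"].map(clean_text)
```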
 
+ ### 4. Tokenization:
+ We tokenize the text data using the BERT tokenizer, which converts each text example into a sequence of tokens suitable for input into a BERT-based model.
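A sketch using Hugging Face transformers; the card does not name the exact checkpoint, so bert-base-uncased is assumed here.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # max_length is illustrative, not a documented setting of this dataset.
    return tokenizer(batch["review"], truncation=True, max_length=128)

# Applied in batches over a datasets.Dataset `ds`:
# ds = ds.map(tokenize, batched=True)
```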
 
+ ### 5. Dataset Splitting:
+ The dataset is split into training, validation, and test sets. We ensure that each split contains a representative distribution of the data, maintaining the original dataset's characteristics.
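With the datasets library, such a split can be produced with train_test_split; the ratios below are illustrative, since the card does not state the actual ones.

```python
from datasets import Dataset

ds = Dataset.from_pandas(train_df)

# Hold out 20% for test, then 10% of the remainder for validation (assumed ratios).
tmp = ds.train_test_split(test_size=0.2, seed=42)
tv = tmp["train"].train_test_split(test_size=0.1, seed=42)
train_ds, valid_ds, test_ds = tv["train"], tv["test"], tmp["test"]
```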
 
+ ### 6. Dataset Statistics:
+ Basic statistics, such as the frequency of medical conditions, are computed from the training set to provide insights into the dataset's distribution.
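For example, the condition frequencies mentioned above can be read straight off the training frame:

```python
# Frequency of each medical condition in the training split.
condition_counts = train_df["condition"].value_counts()
print(condition_counts.head(10))
```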
 
+ ### 7. Dataset Export:
+ Finally, the preprocessed and split dataset is exported to the Hugging Face Datasets format and pushed to the Hugging Face Hub, making it readily accessible for further research and experimentation by the community.
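This step maps onto DatasetDict.push_to_hub; the repository ID below is a placeholder, not the card's actual repo.

```python
from datasets import DatasetDict

dataset = DatasetDict({"train": train_ds, "validation": valid_ds, "test": test_ds})
dataset.push_to_hub("your-username/drug-reviews")  # placeholder repository ID
```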
 
  ## Dataset Card Authors
+ Mouwiya S.A. Al-Qaisieh