Mouwiya committed
Commit 9e26a4c
Parent: 03c7e63

Update README.md

Files changed (1): README.md (+6, -6)
README.md CHANGED
@@ -35,20 +35,20 @@ configs:
   path: data/test-*
 ---
 
-# Dataset Card for Dataset Name
-
-<!-- Provide a quick summary of the dataset. -->
-
-This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
-
 ## Dataset Details
 
 1. Dataset Loading: Initially, we load the Drug Review Dataset from the UC Irvine Machine Learning Repository. This dataset contains patient reviews of different drugs, along with the medical condition being treated and the patients' satisfaction ratings.
+
 2. Data Preprocessing: The dataset is preprocessed to ensure data integrity and consistency. We handle missing values and ensure that each patient ID is unique across the dataset.
+
 3. Text Preprocessing: Textual data, such as the reviews and medical conditions, undergo preprocessing steps. This includes converting text to lowercase and handling HTML entities to ensure proper text representation.
+
 4. Tokenization: We tokenize the text data using the BERT tokenizer, which converts each text example into a sequence of tokens suitable for input into a BERT-based model.
+
 5. Dataset Splitting: The dataset is split into training, validation, and test sets. We ensure that each split contains a representative distribution of the data, maintaining the original dataset's characteristics.
+
 6. Dataset Statistics: Basic statistics, such as the frequency of medical conditions, are computed from the training set to provide insights into the dataset's distribution.
+
 7. Dataset Export: Finally, the preprocessed and split dataset is exported to the Hugging Face Datasets format and pushed to the Hugging Face Hub, making it readily accessible for further research and experimentation by the community.
 
 ## Dataset Card Authors
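The seven numbered steps in the card describe the preparation pipeline in prose only; the commit itself ships no code. The sketches below are illustrative reconstructions under stated assumptions, not the card author's actual scripts. This first one covers step 1 (dataset loading), assuming the raw Drug Review (Drugs.com) TSV files have already been downloaded from the UCI repository; the file names and column list are assumptions, not something the card specifies.

```python
# Step 1 -- Dataset Loading (sketch): read the raw UCI Drug Review TSV files.
# Assumes the archive was downloaded and unpacked locally; the file names below
# are the usual UCI names but are not stated in the dataset card.
import pandas as pd

train_raw = pd.read_csv("drugsComTrain_raw.tsv", sep="\t")
test_raw = pd.read_csv("drugsComTest_raw.tsv", sep="\t")

# Typical columns: an ID column plus drugName, condition, review, rating, date, usefulCount.
print(train_raw.columns.tolist())
print(len(train_raw), len(test_raw))
```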
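Steps 2 and 3 (data and text preprocessing) can be sketched together: drop missing values, keep one row per patient ID, lowercase the text fields, and decode HTML entities. The column names (`uniqueID`, `review`, `condition`) are assumptions; adjust them to whatever the raw files actually contain.

```python
# Steps 2-3 -- Data and Text Preprocessing (sketch).
import html

import pandas as pd


def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["condition", "review"])    # handle missing values
    df = df.drop_duplicates(subset=["uniqueID"])      # assumed patient-ID column
    for col in ("review", "condition"):
        # Lowercase and decode HTML entities such as "&#039;" in the raw text.
        df[col] = df[col].map(lambda s: html.unescape(str(s)).lower())
    return df.reset_index(drop=True)


train_df = preprocess(train_raw)   # train_raw / test_raw come from the loading sketch above
test_df = preprocess(test_raw)
```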
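Step 4 (tokenization) says only that "the BERT tokenizer" is used; the checkpoint and sequence length below are assumptions. Continuing from the preprocessing sketch:

```python
# Step 4 -- Tokenization (sketch): tokenize the review text with a BERT tokenizer.
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed checkpoint


def tokenize(batch):
    return tokenizer(batch["review"], truncation=True, padding="max_length", max_length=256)


ds = Dataset.from_pandas(train_df)    # train_df from the previous sketch
ds = ds.map(tokenize, batched=True)
```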
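Step 5 (dataset splitting) does not state how the validation and test sets are carved out; the 80/10/10 ratio and fixed seed below are assumptions, and the UCI release in fact already ships separate train and test files, so treat this purely as one possible way to obtain three splits.

```python
# Step 5 -- Dataset Splitting (sketch): derive train/validation/test splits.
from datasets import DatasetDict

tmp = ds.train_test_split(test_size=0.2, seed=42)                 # 80% train
val_test = tmp["test"].train_test_split(test_size=0.5, seed=42)   # 10% validation, 10% test

splits = DatasetDict({
    "train": tmp["train"],
    "validation": val_test["train"],
    "test": val_test["test"],
})
```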
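Step 6 (dataset statistics) mentions the frequency of medical conditions in the training set; a minimal sketch:

```python
# Step 6 -- Dataset Statistics (sketch): condition frequencies in the training split.
from collections import Counter

condition_counts = Counter(splits["train"]["condition"])
print(condition_counts.most_common(10))   # ten most frequent conditions
```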
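Step 7 (dataset export) pushes the splits to the Hugging Face Hub. The repository id below is a placeholder, not the card's actual repo, and the call assumes you are already authenticated (for example via `huggingface-cli login`).

```python
# Step 7 -- Dataset Export (sketch): publish the splits to the Hugging Face Hub.
splits.push_to_hub("your-username/drug-reviews")   # placeholder repo id
```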