---
datasets:
- sentiment-analysis-dataset
language:
- en
task_categories:
- text-classification
task_ids:
- sentiment-classification
tags:
- sentiment-analysis
- text-classification
- balanced-dataset
- oversampling
- csv
pretty_name: Sentiment Analysis Dataset (Unbalanced)
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_examples: 83989
  - name: validation
    num_examples: 10499
  - name: test
    num_examples: 10499
  format: csv
---

# Sentiment Analysis Dataset

## Overview

This dataset is designed for sentiment analysis tasks, providing labeled examples across three sentiment categories:
- **0**: Negative
- **1**: Neutral
- **2**: Positive

It is suitable for training, validating, and testing text classification models in tasks such as social media sentiment analysis, customer feedback evaluation, and opinion mining.

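For reference, the label scheme above can be written as a simple mapping, which is handy when configuring a classification model's `id2label`/`label2id` (a minimal sketch):

```python
# Label scheme as listed above
id2label = {0: "negative", 1: "neutral", 2: "positive"}
label2id = {name: idx for idx, name in id2label.items()}
```
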
---

## Dataset Details

### Key Features

- **Type**: CSV
- **Language**: English
- **Labels**:
  - `0`: Negative
  - `1`: Neutral
  - `2`: Positive
- **Pre-processing**:
  - Duplicates removed
  - Null values removed
  - Cleaned for consistency

### Dataset Split

| Split          | Rows   |
|----------------|--------|
| **Train**      | 83,989 |
| **Validation** | 10,499 |
| **Test**       | 10,499 |

### Format

Each row in the dataset consists of the following columns:
- `text`: The input text data (e.g., sentences, comments, or tweets).
- `label`: The corresponding sentiment label (`0`, `1`, or `2`).

---

## Usage

### Installation

Download the dataset from the [Hugging Face Hub](https://huggingface.co/datasets/syedkhalid076/Sentiment-Analysis) or your preferred storage location.

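If the files are hosted on the Hugging Face Hub, one convenient way to fetch them locally is `huggingface_hub.snapshot_download`; this is just a sketch, with the repository id taken from the citation at the end of this card:

```python
from huggingface_hub import snapshot_download

# Download every file in the dataset repository into a local directory
local_dir = snapshot_download(
    repo_id="syedkhalid076/Sentiment-Analysis",  # repository id from the citation below
    repo_type="dataset",
)
print(local_dir)
```
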
### Loading the Dataset

#### Using Pandas

```python
import pandas as pd

# Load the train dataset
train_df = pd.read_csv("path_to_train.csv")
print(train_df.head())

# Columns: text, label
```
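
After loading, it is worth sanity-checking the row count and label distribution against the split table above (the file name is a placeholder, as in the snippet above):

```python
import pandas as pd

# File name is a placeholder, as above
train_df = pd.read_csv("path_to_train.csv")

# Row count should match the split table (83,989 rows for the train split)
print(len(train_df))

# Label distribution over the three classes (0 = negative, 1 = neutral, 2 = positive)
print(train_df["label"].value_counts().sort_index())
```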

#### Using Hugging Face's `datasets` Library

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("syedkhalid076/Sentiment-Analysis")

# Access splits
train_data = dataset["train"]
validation_data = dataset["validation"]
test_data = dataset["test"]

# Example: Printing a sample
print(train_data[0])
```
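
If the splits are kept as local CSV files instead of a Hub repository, the `csv` builder can load them directly; the file names below are placeholders:

```python
from datasets import load_dataset

# Build a DatasetDict from local CSV files (file names are placeholders)
dataset = load_dataset(
    "csv",
    data_files={
        "train": "path_to_train.csv",
        "validation": "path_to_validation.csv",
        "test": "path_to_test.csv",
    },
)

print(dataset)  # shows each split with its columns and row count
```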

---

## Example Usage

Here’s an example of using the dataset to fine-tune a sentiment analysis model with the [Hugging Face Transformers](https://huggingface.co/transformers) library:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset

# Load dataset
dataset = load_dataset("syedkhalid076/Sentiment-Analysis")

# Load model and tokenizer
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Tokenize dataset
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

tokenized_datasets = dataset.map(tokenize_function, batched=True)

# Prepare training arguments
training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    load_best_model_at_end=True,
)

# Initialize Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
)

# Train model
trainer.train()
```
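
After training, the held-out test split can be evaluated with the same `Trainer`, and the fine-tuned model can be used for quick inference through a `pipeline`. The sketch below continues from the variables defined in the snippet above:

```python
from transformers import pipeline

# Evaluate on the held-out test split (reports eval loss unless compute_metrics is set)
test_metrics = trainer.evaluate(tokenized_datasets["test"])
print(test_metrics)

# Quick inference with the fine-tuned model; predictions appear as LABEL_0/1/2
# unless id2label is configured on the model
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("The delivery was quick and the product works great."))
```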

---

## Applications

This dataset can be used for:
1. **Social Media Sentiment Analysis**: Understand the sentiment of posts or tweets.
2. **Customer Feedback Analysis**: Evaluate reviews or feedback.
3. **Product Sentiment Trends**: Monitor public sentiment about products or services.

---

## License

This dataset is released under the **[Insert Your Chosen License Here]**. Ensure proper attribution if used in academic or commercial projects.

---

## Citation

If you use this dataset, please cite it as follows:

```
@dataset{hussain_2024_sentiment_analysis,
  title  = {Sentiment Analysis Dataset},
  author = {Syed Khalid Hussain},
  year   = {2024},
  url    = {https://huggingface.co/datasets/syedkhalid076/Sentiment-Analysis}
}
```

---

## Acknowledgments

This dataset was curated and processed by **Syed Khalid Hussain**. Care was taken to ensure high data quality, supporting better model performance and reproducibility.

---

**Author**: Syed Khalid Hussain