---
license: mit
language:
- en
pretty_name: X
---

This dataset contains tweets related to the Israel-Palestine conflict posted between October 17, 2023 and December 17, 2023. Each record includes the tweet ID, link, text, date, like count, and comment count, and tweets are additionally grouped into like-count range categories.

## Dataset Details

- **Date Range:** October 17, 2023 - December 17, 2023
- **Total Tweets:** 15,478
- **Unique Tweets:** 14,854

## Data Description

The dataset consists of the following columns:

| Column     | Description                                        |
|------------|----------------------------------------------------|
| `id`       | Unique identifier for the tweet                    |
| `link`     | URL link to the tweet                              |
| `text`     | Text content of the tweet                          |
| `date`     | Date and time when the tweet was posted            |
| `likes`    | Number of likes the tweet received                 |
| `comments` | Number of comments the tweet received              |
| `Label`    | Like-count range category                          |
| `Count`    | Number of tweets in that like-count range category |
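
For a quick look at the data, the CSV can be loaded with pandas. A minimal sketch (`tweets.csv` is a placeholder for the dataset's actual CSV file, and deduplicating by `id` is one plausible way to reach the unique-tweet count):

```python
import pandas as pd

# 'tweets.csv' is a placeholder; point it at the dataset's CSV file
df = pd.read_csv('tweets.csv')

# Inspect the columns described above
print(df[['id', 'link', 'text', 'date', 'likes', 'comments']].head())

# The card lists 15,478 total tweets but 14,854 unique ones,
# so dropping duplicates (here by tweet ID) may be useful
df = df.drop_duplicates(subset='id')
print(len(df))
```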

## How to Process the Data

To process the dataset, you can use the following Python code. It reads the CSV file, filters out non-English tweets, cleans the text, and then tokenizes and lemmatizes it.

### Required Libraries

Make sure you have the following libraries installed:

```bash
pip install pandas nltk langdetect
```
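
The NLTK functions used below also need their data files (tokenizer model, stopword list, and WordNet). Download them once before running the processing code; depending on your NLTK version, it may prompt for additional resources:

```python
import nltk

# One-time downloads for tokenization, stopword removal, and lemmatization
nltk.download('punkt')      # used by word_tokenize
nltk.download('stopwords')  # English stopword list
nltk.download('wordnet')    # used by WordNetLemmatizer
```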

## Data Processing Code

Here’s the code to process the tweets:

```python
import pandas as pd
import re
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from langdetect import detect, LangDetectException

# Define the TweetProcessor class
class TweetProcessor:
    def __init__(self, file_path):
        """
        Initialize the object with the path to the CSV file.
        """
        self.df = pd.read_csv(file_path)
        # Convert 'text' column to string type
        self.df['text'] = self.df['text'].astype(str)
        # Build the stopword set and lemmatizer once, rather than per tweet
        self.stop_words = set(stopwords.words('english'))
        self.lemmatizer = WordNetLemmatizer()

    def clean_tweet(self, tweet):
        """
        Clean a tweet by removing links, special characters, and extra spaces.
        """
        # Remove links (both http and https)
        tweet = re.sub(r'https?\S+', '', tweet, flags=re.MULTILINE)
        # Replace non-word characters (punctuation, symbols) with spaces
        tweet = re.sub(r'\W', ' ', tweet)
        # Replace multiple spaces with a single space
        tweet = re.sub(r'\s+', ' ', tweet)
        # Remove leading and trailing spaces
        tweet = tweet.strip()
        return tweet

    def tokenize_and_lemmatize(self, tweet):
        """
        Tokenize and lemmatize a tweet: lowercase, keep alphabetic tokens,
        remove stopwords, and lemmatize.
        """
        # Tokenize the text
        tokens = word_tokenize(tweet)
        # Keep alphabetic tokens only (drops numbers and punctuation) and lowercase them
        tokens = [word.lower() for word in tokens if word.isalpha()]
        # Remove stopwords
        tokens = [word for word in tokens if word not in self.stop_words]
        # Lemmatize the tokens
        tokens = [self.lemmatizer.lemmatize(word) for word in tokens]
        # Join tokens back into a single string
        return ' '.join(tokens)

    def process_tweets(self):
        """
        Apply cleaning and lemmatization functions to the tweets in the DataFrame.
        """
        def lang(x):
            try:
                return detect(x) == 'en'
            except LangDetectException:
                return False

        # Filter tweets for English language (copy to avoid a pandas SettingWithCopyWarning)
        self.df = self.df[self.df['text'].apply(lang)].copy()

        # Apply cleaning function
        self.df['cleaned_text'] = self.df['text'].apply(self.clean_tweet)
        # Apply tokenization and lemmatization function
        self.df['tokenized_and_lemmatized'] = self.df['cleaned_text'].apply(self.tokenize_and_lemmatize)

```
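
A minimal usage sketch (the file name `tweets.csv` is a placeholder; substitute the dataset's actual CSV file). Note that language detection runs once per tweet, so processing the full dataset may take a few minutes:

```python
# 'tweets.csv' is a placeholder path for the dataset's CSV file
processor = TweetProcessor('tweets.csv')
processor.process_tweets()

# Compare the raw, cleaned, and lemmatized text side by side
print(processor.df[['text', 'cleaned_text', 'tokenized_and_lemmatized']].head())
```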

## Usage

This dataset can be used for various research purposes, including sentiment analysis, trend analysis, and event impact studies related to the Israel-Palestine conflict.
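
For example, a quick sentiment pass can be run over the cleaned text with NLTK's VADER analyzer. A sketch, assuming the `processor` object from the processing example above and a one-time `vader_lexicon` download:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download('vader_lexicon')  # one-time download

sia = SentimentIntensityAnalyzer()

# Score each cleaned tweet; the compound score ranges from -1 (negative) to +1 (positive)
processor.df['sentiment'] = processor.df['cleaned_text'].apply(
    lambda t: sia.polarity_scores(t)['compound']
)

print(processor.df['sentiment'].describe())
```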
## Contact

For questions or feedback, please contact:

- **Name:** Mehyar Mlaweh
- **Email:** mehyarmlaweh0@gmail.com