---
license: mit
language:
  - en
pretty_name: X
---

# Conflict_Tweets

This dataset contains tweets related to the Israel-Palestine conflict posted between October 17, 2023, and December 17, 2023. Each record includes the tweet ID, link, text, posting date, like count, and comment count, along with a label that assigns the tweet to a like-count range.

## Dataset Details

- Date Range: October 17, 2023 - December 17, 2023
- Total Tweets: 15,478
- Unique Tweets: 14,854

## Data Description

The dataset consists of the following columns:

| Column | Description |
|----------|-------------|
| id | Unique identifier for the tweet |
| link | URL link to the tweet |
| text | Text content of the tweet |
| date | Date and time when the tweet was posted |
| likes | Number of likes the tweet received |
| comments | Number of comments the tweet received |
| Label | Like-count range category |
| Count | Number of tweets in the like-count range category |
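
As a quick sanity check, the columns and date range can be inspected directly with pandas. The sketch below assumes the file is named `Conflict_Tweets.csv`; substitute the CSV file shipped with this dataset.

```python
import pandas as pd

# Hypothetical file name -- replace with the CSV file in this repository
df = pd.read_csv('Conflict_Tweets.csv')

# Expected columns: id, link, text, date, likes, comments, Label, Count
print(df.columns.tolist())

# Total vs. unique tweets (documented as 15,478 and 14,854)
print(len(df), df['id'].nunique())

# Posting dates should fall between October 17 and December 17, 2023
df['date'] = pd.to_datetime(df['date'], errors='coerce')
print(df['date'].min(), df['date'].max())
```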

## How to Process the Data

To process the dataset, you can use the following Python code. This code reads the CSV file, cleans the tweets, tokenizes and lemmatizes the text, and filters out non-English tweets.

### Required Libraries

Make sure you have the following libraries installed:

```bash
pip install pandas nltk langdetect
```
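
The tokenizer, stopword list, and lemmatizer used below also rely on NLTK data packages, which are downloaded separately (these are the standard NLTK resource names):

```python
import nltk

# One-time downloads of the NLTK resources used by the processing code
nltk.download('punkt')      # tokenizer models for word_tokenize
nltk.download('punkt_tab')  # needed instead of 'punkt' on newer NLTK releases
nltk.download('stopwords')  # English stopword list
nltk.download('wordnet')    # WordNet data for the lemmatizer
```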

### Data Processing Code

Here’s the code to process the tweets:

```python
import pandas as pd
import re
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from langdetect import detect, LangDetectException

# Define the TweetProcessor class
class TweetProcessor:
    def __init__(self, file_path):
        """
        Initialize the object with the path to the CSV file.
        """
        self.df = pd.read_csv(file_path)
        # Convert 'text' column to string type
        self.df['text'] = self.df['text'].astype(str)

    def clean_tweet(self, tweet):
        """
        Clean a tweet by removing links, special characters, and extra spaces.
        """
        # Remove links
        tweet = re.sub(r'http\S+', '', tweet, flags=re.MULTILINE)
        # Replace special characters (anything that is not a letter, digit, or underscore) with spaces
        tweet = re.sub(r'\W', ' ', tweet)
        # Replace multiple spaces with a single space
        tweet = re.sub(r'\s+', ' ', tweet)
        # Remove leading and trailing spaces
        tweet = tweet.strip()
        return tweet

    def tokenize_and_lemmatize(self, tweet):
        """
        Tokenize and lemmatize a tweet by converting to lowercase, removing stopwords, and lemmatizing.
        """
        # Tokenize the text
        tokens = word_tokenize(tweet)
        # Remove punctuation and numbers, and convert to lowercase
        tokens = [word.lower() for word in tokens if word.isalpha()]
        # Remove stopwords
        stop_words = set(stopwords.words('english'))
        tokens = [word for word in tokens if word not in stop_words]
        # Lemmatize the tokens
        lemmatizer = WordNetLemmatizer()
        tokens = [lemmatizer.lemmatize(word) for word in tokens]
        # Join tokens back into a single string
        return ' '.join(tokens)

    def process_tweets(self):
        """
        Apply cleaning and lemmatization functions to the tweets in the DataFrame.
        """
        def lang(x):
            try:
                return detect(x) == 'en'
            except LangDetectException:
                return False

        # Filter tweets for English language
        self.df = self.df[self.df['text'].apply(lang)]

        # Apply cleaning function
        self.df['cleaned_text'] = self.df['text'].apply(self.clean_tweet)
        # Apply tokenization and lemmatization function
        self.df['tokenized_and_lemmatized'] = self.df['cleaned_text'].apply(self.tokenize_and_lemmatize)
```
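
A minimal usage sketch, assuming the CSV file is named `Conflict_Tweets.csv` (adjust the path to the file in this repository):

```python
# Hypothetical file name -- replace with the actual CSV path
processor = TweetProcessor('Conflict_Tweets.csv')
processor.process_tweets()

# Compare the raw text with the cleaned and lemmatized versions
print(processor.df[['text', 'cleaned_text', 'tokenized_and_lemmatized']].head())
```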

Feel free to adapt this processing pipeline to your specific requirements.

## Usage

This dataset can be used for various research purposes, including sentiment analysis, trend analysis, and event impact studies related to the Israel-Palestine conflict. For questions or feedback, please contact: