--- TODO: "Add YAML tags here. Delete these instructions and copy-paste the YAML tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging" YAML tags: "Find the full spec here: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1" --- # Dataset Card Creation Guide ## Table of Contents - [Dataset Card Creation Guide](#dataset-card-creation-guide) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ### Dataset Summary The Israeli-Palestinian-Conflict dataset is an English-language dataset contains manually collected claims, regarding the Israel-Palestine conflict, annotated both objectively with multi-labels to categorize the content according to common themes in such arguments, and subjectively by their level of impact on a moderately informed citizen. The primary purpose of this dataset is to support Israeli public relations efforts at all levels. ### Languages The text in the dataset is in English, as spoken by Reddit users on the [r/IsraelPalestine](https://www.reddit.com/r/IsraelPalestine/s/wcCjJ94VtF) and [r/ChangeMyView](https://www.reddit.com/r/changemyview/s/RxCmmNwDdh) subreddits, and by youtube transcribed from English debates. The associated BCP-47 code is en. ## Dataset Structure ### Data Instances A typical data point comprises a unique id (example_id), the type of phase in the annotation process (batch), the origin split from which the data point came (split) a claim about the Israeli-Palestinian conflict (text), the annotators annotation vectors for the 1'st task (annotator[i]_t1_label) and the majority vote vector label (t1_label), the annotators annotation on a scale of 1 to 5 for the 2'st task (annotator[i]_t2_label) and the evaluated final label (t2_label), An example from the Israeli-Palestinian-Conflict dev set looks as follows: ``` { "example_id": 212, "batch": "exploration", "split": "validation", "text": "You're very biased and hypocrites!! Hamas is the byproduct of 80years of occupation and genocide, Israel is responsible for this atrocities don't come here blaming Palestinians and why they support Hamas,bc Hamas is all they have to fight with occupation force, calling them terrorists will not change the truth even Mandela was once called a terrorist by the colonial imperialists! Gtfo", "annotator1_t1_label": [0, 1, 1, 0, 1, 0, 0, 1, 0, 0], "annotator2_t1_label": [0, 1, 1, 0, 1, 0, 0, 1, 1, 0], "annotator3_t1_label": [0, 1, 1, 0, 1, 0, 0, 1, 0, 0], "annotator1_t2_label": 2, "annotator2_t2_label": 2, "annotator3_t2_label": 3, "t1_label": [0, 1, 1, 0, 1, 0, 0, 1, 0, 0], "t2_label": 2 } ``` ### Data Fields - "example_id": an int claim's unique identifier without inherent meaning - "batch": a string representing the type of phase in the annotation process (detailed later in this card) - "split": a string representing the split this claim originally from - "text": a string which is the collected claim regarding the Israel-Palestine conflict - "t1_label": a list of size 10, representing the majority vote of the annotators annotation binary vector for the 1'st task, where every entrance represent a feature that is present/not present in the claim, respectivly as follows: - "Advocacy for Israel" - Statements that express support or positive sentiment towards Israel. These statements advocate for Israel’s actions, policies, or general stance in the conflict - "Advocacy for Palestine" - Statements that express support or positive sentiment towards Palestine. These statements advocate for Palestine’s actions, policies, or general stance in the conflict. Notice, this excludes Hamas or any terror organizations affiliated with the palestinians. - "Criticism of Israel" - Statements that express opposition or negative sentiment towards Israel. These statements criticize Israel’s actions, policies, government, IDF or any other offical institution - "Criticism of Palestine" - Statements that express opposition or negative sentiment towards Palestine. These statements criticize Palestine’s actions, policies, or general stance in the conflict. - "Pro-terror organization" - Statements that express support for organizations considered to be terrorist groups. These statements advocate for or justify the actions and policies of these groups. - "Anti-terror organization" - Statements that express opposition or negative sentiment towards terror organization . These statements criticize the terror organization actions, policies, or general stance in the conflict.. - "Backed-Up" - Statements that provide evidence, data, or references to support their claims. These statements may cite sources, statistics, or authoritative opinions to validate their arguments. - "Logical Argument" - Statements that present a logical and reasoned case or position on the Israel-Palestine conflict. These statements typically include premises and a conclusion, structured to persuade or convince others of a particular viewpoint. They often address counterarguments and provide rationale for their stance. - "Moral argument" - A moral argument is a statement that appeals to ethical principles, values, and beliefs about right and wrong to support a particular stance or viewpoint. These arguments often invoke notions of justice, human rights, fairness, and compassion. They aim to persuade by highlighting the moral implications and ethical considerations of the issue at hand, rather than relying solely on factual evidence or logical reasoning. - "Figurative language" - Statements that use figurative language to describe the conflict or parties involved. These statements use metaphors and comparisons and ”perspective changes” to illustrate points vividly. - "t2_label": the rate a claim is impactful on a scale from 1 to 5, defined as the extent to which the statement influences, engages, or resonates with a moderately informed citizen (A person with a basic to moderate understanding of current events and historical context, familiar with the general facts of the Israel-Palestine conflict, but without a strong personal or ideological bias). This rating considers factors such as clarity, emotional appeal, relevance, and informativeness, with an emphasis on maintaining objectivity, regardless of personal beliefs or biases. The final annotation for this subjective task is based on [Aroyo and Welty, 2015](https://www.researchgate.net/publication/283092501_Truth_Is_a_Lie_Crowd_Truth_and_the_Seven_Myths_of_Human_Annotation) - "annotator1_t1_label": a list of size 10, representing the 1st annotator annotation binary vector for the 1'st task - "annotator2_t1_label": a list of size 10, representing the 2nd annotator annotation binary vector for the 1'st task - "annotator3_t1_label": a list of size 10, representing the 3rd annotator annotation binary vector for the 1'st task - "annotator1_t2_label": an int representing the 1st annotator label for the 2'nd task - "annotator2_t2_label": an int representing the 2nd annotator label for the 2'nd task - "annotator3_t2_label": an int representing the 3rd annotator label for the 2'nd task ### Data Splits | | train | validation/dev | test | |-------------------------|------:|-----------:|-----:| | # Sentences | 210 | 37 | 150 | | Average Sentence Length | 90.45 | 97.38 | 85.55| ## Dataset Creation ### Curation Rationale The primary purpose of this dataset is to support Israeli public relations efforts at all levels. Specifically, it can aid the Israeli Ministry of Public Diplomacy and organizations dedicated to promoting Israeli advocacy. In light of the events of the past year (2023), we gained insight into the critical importance of Israel’s ability to effectively present public arguments and monitor public opinion. This dataset is valuable for developing models to evaluate the quality of arguments prior to addressing international bodies like the UN, the ICJ, and the US Congress. Additionally, it can be utilized for large scale investigating of social media, helping to strategically focus efforts in shaping public opinion. ### Source Data YouTube-Transcribed Debates: We downloaded transcripts from debates and used ChatGPT to extract participants' short arguments, manually selecting them for inclusion. Reddit: Comments about the Israel-Palestine conflict were manually extracted from the subreddits [r/IsraelPalestine](https://www.reddit.com/r/IsraelPalestine/s/wcCjJ94VtF) and [r/ChangeMyView](https://www.reddit.com/r/changemyview/s/RxCmmNwDdh). #### Initial Data Collection and Normalization Our approach during the manual selection of arguments was to include diverse characteristics, such as varying lengths, languages, and structures, while ensuring a balanced representation of both sides of the conflict. None of the arguments in our dataset have been modified. #### Who are the source language producers? The language producers are users of the [r/IsraelPalestine](https://www.reddit.com/r/IsraelPalestine/s/wcCjJ94VtF) and [r/ChangeMyView](https://www.reddit.com/r/changemyview/s/RxCmmNwDdh) subreddits. No further demographic information was available from the data source. ### Annotations #### Annotation process Our annotation process was conducted in several distinct phases, aimed at creating a comprehensive dataset. Below is a detailed overview of our workflow: #### **Initial Setup and Guidelines** We began by collecting a total of approximately 400 documents, which were divided into three distinct batches: - **Exploration Batch**: 50 documents used for the creation and iterative refinement of the annotation guidelines. Each group member annotated this set independently, and we identified areas of disagreement and refined our categories accordingly. We discussed our disagreements and reached a consensus on the approach for each scenario. These decisions were then codified in our guidelines. - **Evaluation Batch**: 80 documents used to evaluate the inter-annotator agreement (IAA) after creating the first draft of the guidelines. During this phase, group members independently annotated the documents without discussing their annotations to prevent collaboration that could influence the IAA scores. - **Part 3 Batch**: ~270 documents reserved for later annotation by a third-party group. This phase focused on creating guidelines that could be followed by external annotators, ensuring consistency and reliability across annotations. Once the guidelines were finalized, two annotators revisited and independently annotated the entire Exploration Batch. Although this batch had previously been used for guideline development, it was fully annotated to ensure a complete dataset for training and evaluation. #### **Peer Feedback and Guideline Improvement** In this phase, we received feedback on our guidelines from a third-party team: - The third-party team annotated a sample of 50 documents from the **Part 3 - Exploration Batch** using our guidelines. They provided feedback on areas where the guidelines could be improved. - We incorporated their feedback to address ambiguities and ensure the clarity of our instructions. - Finally, the third-party team annotated a larger sample of ~270 documents (**Part 3 - Evaluation Batch**) using the refined guidelines. This helped further validate the robustness of the guidelines. #### Inter-Annotator Agreement (IAA) To evaluate the reliability of the annotations, we calculated IAA scores across the different phases and groups involved. The following table summarizes the scores: | Group | Phase | # | Label1 | Label2 | Label3 | Label4 | Label5 | Label6 | Label7 | Label8 | Label9 | Label10 | MAE | Corr | |----------|--------|-----|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|-------|-------| | Creators | Explor | 50 | 0.69 | 0.67 | 0.71 | 0.66 | NAN | 0.04 | NAN | 0.28 | 0.21 | 0.50 | | | | Creators | Eval | 80 | 0.64 | 0.53 | 0.76 | 0.66 | NAN | 0.72 | 0.29 | 0.30 | 0.40 | 0.47 | 1.024 | 0.264 | | Peer | Explor | 100 | 0.84 | 0.51 | 0.78 | 0.60 | NAN | 0.70 | 0.66 | 0.58 | 0.57 | 0.31 | | | | Peer | Eval | 167 | 0.62 | 0.64 | 0.70 | 0.67 | NAN | 0.66 | 0.72 | 0.65 | 0.71 | 0.74 | 0.740 | 0.698 | - For the first task the Fleiss' Kappa metric was used to evaluate the multi-label agreement between annotators for each label (Label1 through Label10) in the objective task. The higher Kappa values indicate a stronger agreement, with the peer group achieving a higher overall Kappa score in the Exploration Batch. - For the subjective task, we calculated the Mean Absolute Error (MAE) and Pearson Correlation (Corr) between the annotators’ judgments, with the peer group achieving a better overall MAE and Correlation. #### Who are the annotators? The dataset was annotated by two groups, all humans. The creators group, consisting of 2 women and 2 men. The peer group, consisting of 2 women and a man. All annotators were aged 22-28 and were Israeli-Jewish. There was no compensation provided. ### Personal and Sensitive Information This data contains religious beliefs and political opinions but is completely anonymize. ## Considerations for Using the Data ### Social Impact of Dataset The use of this dataset has the potential to foster greater understanding of the Israeli-Palestinian conflict by providing data that can be used for research and technologies like sentiment analysis or claim validation tools, which may inform public discourse. These technologies can also aid in reducing misinformation and helping individuals engage more critically with content related to this imporatant conflict. However, there is a risk that automated systems built using this dataset might reinforce biases present in the data, potentially leading to one-sided analyses. Additionally, the complexity of the conflict means that decisions informed by these technologies might lack the nuance required for sensitive issues, impacting world outcomes in ways that are not easily understood by the affected populations, like classifying ones claim as pro-terror and the possible outcomes. ### Discussion of Biases The significant bias we created during the annotation process stems from approaching the Israeli-Palestinian conflict from an Israeli-Jewish perspective. We attempted to minimize this bias by designing the first task to be as objective as possible. This included creating balanced categories and naming them neutrally, while carefully explaining the meaning of each category in the guidelines in an impartial way. For the second task, we framed the instructions to ask annotators to assess the claims from the viewpoint of a "moderately informed citizen," defined as someone familiar with the general facts of the conflict but without strong personal or ideological bias. Despite our efforts, we acknowledge that the dataset inevitably contains some bias, but we believe its creation is crucial given the importance and relevance of the topic. ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators Affiliation: Data Science students @ Technion, we got no funding. ### Licensing Information [N/A] Provide the license and link to the license webpage if available. ### Citation Information ``` @article{article_id, author = {Avishag Nevo, Bella Perel, Hadar Sugarman, Tomer Shigani}, title = {The Israeli-Palestinian Conflict Dataset}, year = {2024} } ``` ### Contributions Thanks to Prof. Roi Reichart and Mr. Nitay Calderon for guiding us in this dataset process.