Update README.md

README.md (CHANGED)

@@ -35,13 +35,14 @@

- [Citation Information](#citation-information)
- [Contributions](#contributions)

<!--## Dataset Description

- **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]()
- **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]()
- **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]()
-->

### Dataset Summary

The Israeli-Palestinian-Conflict dataset is an English-language dataset containing manually collected claims regarding the Israel-Palestine conflict, annotated both objectively with multi-labels that categorize the content according to common themes in such arguments, and subjectively by their level of impact on a moderately informed citizen. The primary purpose of this dataset is to support Israeli public relations efforts at all levels.
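
A minimal usage sketch with the `datasets` library is shown below. The repository id and the column names (`claim`, `labels`, `impact`) are illustrative assumptions, not the dataset's actual identifiers; check the dataset viewer for the real schema.

```python
# Minimal sketch of loading the dataset from the Hugging Face Hub.
# NOTE: the repository id, split name, and column names are placeholders.
from datasets import load_dataset

ds = load_dataset("username/israeli-palestinian-conflict")  # hypothetical repo id

example = ds["train"][0]   # assumes a "train" split exists
print(example["claim"])    # claim text (assumed column name)
print(example["labels"])   # multi-label theme annotation (assumed column name)
print(example["impact"])   # subjective impact score (assumed column name)
```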

<!--Briefly summarize the dataset, its intended use and the supported tasks. Give an overview of how and why the dataset was created. The summary should explicitly mention the languages present in the dataset (possibly in broad terms, e.g. *translations between several pairs of European languages*), and describe the domain, topic, or genre covered.-->

<!--### Supported Tasks and Leaderboards

For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`).

- `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).
-->

### Languages

@@ -179,86 +181,121 @@

Our annotation process was conducted in several distinct phases, aimed at creating a comprehensive dataset. Below is a detailed overview of our workflow:

#### **Initial Setup and Guidelines**

We began by collecting a total of approximately 400 documents, which were divided into three distinct batches (a short splitting sketch follows this list):

- **Exploration Batch**: 50 documents used for the creation and iterative refinement of the annotation guidelines. Each group member annotated this set independently, and we identified areas of disagreement and refined our categories accordingly. We discussed our disagreements and reached a consensus on the approach for each scenario. These decisions were then codified in our guidelines.
- **Evaluation Batch**: 80 documents used to evaluate the inter-annotator agreement (IAA) after creating the first draft of the guidelines. During this phase, group members independently annotated the documents without discussing their annotations, to prevent collaboration that could influence the IAA scores.
- **Part 3 Batch**: ~270 documents reserved for later annotation by a third-party group. This phase focused on creating guidelines that could be followed by external annotators, ensuring consistency and reliability across annotations.
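
The sketch below shows one way the split described above could be reproduced. The document list, the random seed, and the exact assignment procedure are assumptions for illustration only; the batch sizes follow this card (50 / 80 / remaining ~270).

```python
# Minimal sketch of dividing the collected documents into the three batches
# described above. The input list and seed are placeholders.
import random

documents = [f"doc_{i}" for i in range(400)]  # placeholder for the ~400 collected claims

rng = random.Random(42)  # fixed seed so the split is reproducible
rng.shuffle(documents)

exploration_batch = documents[:50]     # guideline creation and refinement
evaluation_batch = documents[50:130]   # creators' IAA evaluation (80 documents)
part3_batch = documents[130:]          # reserved for the third-party group (~270 documents)

print(len(exploration_batch), len(evaluation_batch), len(part3_batch))  # 50 80 270
```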

Once the guidelines were finalized, two annotators revisited and independently annotated the entire Exploration Batch. Although this batch had previously been used for guideline development, it was fully annotated to ensure a complete dataset for training and evaluation.

#### **Peer Feedback and Guideline Improvement**

In this phase, we received feedback on our guidelines from a third-party team:

- The third-party team annotated a sample of 50 documents from the **Part 3 - Exploration Batch** using our guidelines. They provided feedback on areas where the guidelines could be improved.
- We incorporated their feedback to address ambiguities and ensure the clarity of our instructions.
- Finally, the third-party team annotated a larger sample of ~270 documents (**Part 3 - Evaluation Batch**) using the refined guidelines. This helped further validate the robustness of the guidelines.

#### Inter-Annotator Agreement (IAA)

To evaluate the reliability of the annotations, we calculated IAA scores across the different phases and groups involved. The following table summarizes the scores:

| Group    | Phase  | # Docs | Label1 | Label2 | Label3 | Label4 | Label5 | Label6 | Label7 | Label8 | Label9 | Label10 | MAE   | Corr  |
|----------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|---------|-------|-------|
| Creators | Explor | 50     | 0.69   | 0.67   | 0.71   | 0.66   | NAN    | 0.04   | NAN    | 0.28   | 0.21   | 0.50    |       |       |
| Creators | Eval   | 80     | 0.64   | 0.53   | 0.76   | 0.66   | NAN    | 0.72   | 0.29   | 0.30   | 0.40   | 0.47    | 1.024 | 0.264 |
| Peer     | Explor | 100    | 0.84   | 0.51   | 0.78   | 0.60   | NAN    | 0.70   | 0.66   | 0.58   | 0.57   | 0.31    |       |       |
| Peer     | Eval   | 167    | 0.62   | 0.64   | 0.70   | 0.67   | NAN    | 0.66   | 0.72   | 0.65   | 0.71   | 0.74    | 0.740 | 0.698 |

- For the first (objective) task, Fleiss' Kappa was used to evaluate the multi-label agreement between annotators for each label (Label1 through Label10). Higher Kappa values indicate stronger agreement, with the peer group achieving a higher overall Kappa score in the Exploration Batch.
- For the subjective task, we calculated the Mean Absolute Error (MAE) and Pearson Correlation (Corr) between the annotators' judgments, with the peer group achieving a better overall MAE and Correlation.
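
As a rough illustration of how these metrics can be computed, here is a minimal sketch using `statsmodels` and `scipy`. The arrays are toy placeholders, not the actual annotation data, and the sketch assumes every document is rated by the same fixed set of annotators.

```python
# Minimal sketch of the IAA metrics described above, on toy placeholder data.
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Objective task: one binary decision per annotator for a single label,
# shape (n_documents, n_annotators).
label1_votes = np.array([
    [1, 1],
    [0, 0],
    [1, 0],
    [1, 1],
    [0, 0],
])
table, _ = aggregate_raters(label1_votes)  # per-document counts for each category
print("Fleiss' kappa (Label1):", fleiss_kappa(table, method="fleiss"))

# Subjective task: impact scores from two annotators for the same documents.
impact_a = np.array([3, 5, 2, 4, 1])
impact_b = np.array([2, 5, 3, 4, 2])
mae = np.mean(np.abs(impact_a - impact_b))
corr, _ = pearsonr(impact_a, impact_b)
print(f"MAE: {mae:.3f}, Pearson correlation: {corr:.3f}")
```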

#### Who are the annotators?

<!--If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.

Describe the people or systems who originally created the annotations and their selection criteria if applicable.

If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.

Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.-->

The dataset was annotated by two groups of human annotators: the creators group, consisting of two women and two men, and the peer group, consisting of two women and one man. All annotators were aged 22-28 and Israeli-Jewish. No compensation was provided.

### Personal and Sensitive Information

<!--State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).

State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).

If efforts were made to anonymize the data, describe the anonymization process.-->

This data contains religious beliefs and political opinions but is completely anonymized.

## Considerations for Using the Data

### Social Impact of Dataset

<!--Please discuss some of the ways you believe the use of this dataset will impact society.

The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.

Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.-->

The use of this dataset has the potential to foster greater understanding of the Israeli-Palestinian conflict by providing data that can be used for research and for technologies like sentiment analysis or claim-validation tools, which may inform public discourse. These technologies can also help reduce misinformation and help individuals engage more critically with content related to this important conflict. However, there is a risk that automated systems built using this dataset might reinforce biases present in the data, potentially leading to one-sided analyses. Additionally, the complexity of the conflict means that decisions informed by these technologies might lack the nuance required for sensitive issues, impacting real-world outcomes in ways that are not easily understood by the affected populations, for example when a person's claim is classified as pro-terror and consequences follow from that classification.

### Discussion of Biases

<!--Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.

For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic.

If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.-->

The most significant bias introduced during the annotation process stems from approaching the Israeli-Palestinian conflict from an Israeli-Jewish perspective. We attempted to minimize this bias by designing the first task to be as objective as possible. This included creating balanced categories and naming them neutrally, while carefully explaining the meaning of each category in the guidelines in an impartial way. For the second task, we framed the instructions to ask annotators to assess the claims from the viewpoint of a "moderately informed citizen," defined as someone familiar with the general facts of the conflict but without strong personal or ideological bias. Despite our efforts, we acknowledge that the dataset inevitably contains some bias, but we believe its creation is crucial given the importance and relevance of the topic.

### Other Known Limitations

[N/A]

<!--If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.-->

## Additional Information

### Dataset Curators

Affiliation: Data Science students at the Technion. The dataset received no funding.

<!--List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.-->

### Licensing Information

[N/A]

Provide the license and link to the license webpage if available.

### Citation Information

<!--Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example:-->

```
@article{article_id,
  author = {Avishag Nevo and Bella Perel and Hadar Sugarman and Tomer Shigani},
  title  = {The Israeli-Palestinian Conflict Dataset},
  year   = {2024}
}
```

<!--If the dataset has a [DOI](https://www.doi.org/), please provide it here.-->

### Contributions

Thanks to Prof. Roi Reichart and Mr. Nitay Calderon for guiding us throughout the creation of this dataset.