Alon committed on
Commit
ed67e39
1 Parent(s): af251de

Upload WEC-Eng files

.gitattributes CHANGED
@@ -52,3 +52,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
+ All_Event_gold_mentions_unfiltered.json filter=lfs diff=lfs merge=lfs -text
+ Train_Event_gold_mentions.json filter=lfs diff=lfs merge=lfs -text
All_Event_gold_mentions_unfiltered.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:69d3dc228bada0f8b1bf6c07665f4ba12c6f18586e3bab1b7570bccbf6c958f8
+ size 270289869
Dev_Event_gold_mentions_validated.json ADDED
The diff for this file is too large to render. See raw diff
 
README.md CHANGED
@@ -1,3 +1,125 @@
- ---
- license: cc-by-3.0
- ---
+ # WEC-Eng
+ A large-scale dataset for cross-document event coreference, extracted from English Wikipedia.<br/>
+
+ - **Repository (code for generating WEC):** https://github.com/AlonEirew/extract-wec
+ - **Paper:** https://aclanthology.org/2021.naacl-main.198/
+
+ ### Languages
+
+ English
+
+ ## Load Dataset
+ You can read the WEC-Eng files as follows (using the **huggingface_hub** library):
+
+ ```python
+ from huggingface_hub import hf_hub_url, cached_download
+ import json
+
+ REPO_ID = "datasets/biu-nlp/WEC-Eng"
+ splits_files = ["Dev_Event_gold_mentions_validated.json",
+                 "Test_Event_gold_mentions_validated.json",
+                 "Train_Event_gold_mentions.json"]
+ wec_eng = list()
+ for split_file in splits_files:
+     # Download (and cache) each split file, then parse it as JSON
+     with open(cached_download(hf_hub_url(REPO_ID, split_file)), "r") as f:
+         wec_eng.append(json.load(f))
+ ```
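+ Each loaded split is a list of mention records; mentions that share a `coref_chain` value belong to the same coreference cluster. As a minimal sketch (using hypothetical in-memory records shaped like WEC-Eng entries, not actual dataset content), clusters can be grouped like so:
+
+ ```python
+ from collections import defaultdict
+
+ # Hypothetical mention records with the WEC-Eng fields used for clustering
+ mentions = [
+     {"mention_id": "1", "coref_chain": 2293469, "tokens_str": "Family Values Tour 1998"},
+     {"mention_id": "2", "coref_chain": 2293469, "tokens_str": "the 1998 tour"},
+     {"mention_id": "3", "coref_chain": 77, "tokens_str": "2004 Summer Olympics"},
+ ]
+
+ # Group mention IDs by their coreference chain ID
+ clusters = defaultdict(list)
+ for m in mentions:
+     clusters[m["coref_chain"]].append(m["mention_id"])
+
+ print(dict(clusters))  # {2293469: ['1', '2'], 77: ['3']}
+ ```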
+
+ ## Dataset Structure
+
+ ### Data Splits
+ - **Final version of the English CD event coreference dataset**<br>
+     - Train - Train_Event_gold_mentions.json
+     - Dev - Dev_Event_gold_mentions_validated.json
+     - Test - Test_Event_gold_mentions_validated.json
+
+ | | Train | Valid | Test |
+ | ----- | ------ | ----- | ---- |
+ | Clusters | 7,042 | 233 | 322 |
+ | Event Mentions | 40,529 | 1,250 | 1,893 |
+
+ - **The version without controlling for within-cluster lexical diversity (experimental)**<br>
+     - All - All_Event_gold_mentions_unfiltered.json
+
+ ### Data Instances
+
+ ```json
+ {
+     "coref_chain": 2293469,
+     "coref_link": "Family Values Tour 1998",
+     "doc_id": "House of Pain",
+     "mention_context": [
+         "From",
+         "then",
+         "on",
+         ",",
+         "the",
+         "members",
+         "continued",
+         "their"
+     ],
+     "mention_head": "Tour",
+     "mention_head_lemma": "Tour",
+     "mention_head_pos": "PROPN",
+     "mention_id": "108172",
+     "mention_index": 1,
+     "mention_ner": "UNK",
+     "mention_type": 8,
+     "predicted_coref_chain": null,
+     "sent_id": 2,
+     "tokens_number": [
+         50,
+         51,
+         52,
+         53
+     ],
+     "tokens_str": "Family Values Tour 1998",
+     "topic_id": -1
+ }
+ ```
+
+ ### Data Fields
+
+ |Field|Value Type|Value|
+ |---|:---:|---|
+ |coref_chain|Numeric|Coreference chain/cluster ID|
+ |coref_link|String|Coreference link Wikipedia page/article title|
+ |doc_id|String|Mention page/article title|
+ |mention_context|List[String]|Tokenized mention paragraph (including the mention)|
+ |mention_head|String|Mention span head token|
+ |mention_head_lemma|String|Mention span head token lemma|
+ |mention_head_pos|String|Mention span head token POS|
+ |mention_id|String|Mention ID|
+ |mention_index|Numeric|Mention index in the JSON file|
+ |mention_ner|String|Mention NER|
+ |tokens_number|List[Numeric]|Mention token indices within the context|
+ |tokens_str|String|Mention span text|
+ |topic_id|Ignore|Ignore|
+ |mention_type|Ignore|Ignore|
+ |predicted_coref_chain|Ignore|Ignore|
+ |sent_id|Ignore|Ignore|
+
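+ As the field descriptions suggest, a mention's surface form can be recovered by indexing `mention_context` with the positions in `tokens_number`. A minimal sketch, using a hypothetical record (the fields match the schema above, but the values are invented so the indices line up within the short context):
+
+ ```python
+ # Hypothetical WEC-Eng-shaped record for illustration only
+ mention = {
+     "mention_context": ["The", "Family", "Values", "Tour", "1998",
+                         "began", "in", "September"],
+     "tokens_number": [1, 2, 3, 4],
+     "tokens_str": "Family Values Tour 1998",
+ }
+
+ # Join the context tokens selected by the mention's token indices
+ span = " ".join(mention["mention_context"][i] for i in mention["tokens_number"])
+ assert span == mention["tokens_str"]
+ print(span)  # Family Values Tour 1998
+ ```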
+ ## Citation
+ ```
+ @inproceedings{eirew-etal-2021-wec,
+     title = "{WEC}: Deriving a Large-scale Cross-document Event Coreference dataset from {W}ikipedia",
+     author = "Eirew, Alon  and
+       Cattan, Arie  and
+       Dagan, Ido",
+     booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
+     month = jun,
+     year = "2021",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.naacl-main.198",
+     doi = "10.18653/v1/2021.naacl-main.198",
+     pages = "2498--2510",
+     abstract = "Cross-document event coreference resolution is a foundational task for NLP applications involving multi-text processing. However, existing corpora for this task are scarce and relatively small, while annotating only modest-size clusters of documents belonging to the same topic. To complement these resources and enhance future research, we present Wikipedia Event Coreference (WEC), an efficient methodology for gathering a large-scale dataset for cross-document event coreference from Wikipedia, where coreference links are not restricted within predefined topics. We apply this methodology to the English Wikipedia and extract our large-scale WEC-Eng dataset. Notably, our dataset creation method is generic and can be applied with relatively little effort to other Wikipedia languages. To set baseline results, we develop an algorithm that adapts components of state-of-the-art models for within-document coreference resolution to the cross-document setting. Our model is suitably efficient and outperforms previously published state-of-the-art results for the task.",
+ }
+ ```
+
+
+ ## License
+ We provide these datasets under a <a href="https://creativecommons.org/licenses/by-sa/3.0/deed.en_US">Creative Commons Attribution-ShareAlike 3.0 Unported License</a>. They are based on content extracted from Wikipedia, which is licensed under the same Creative Commons Attribution-ShareAlike 3.0 Unported License.
+
+ ## Contact
+ If you have any questions, please create a GitHub issue at https://github.com/AlonEirew/extract-wec.
Test_Event_gold_mentions_validated.json ADDED
The diff for this file is too large to render. See raw diff
 
Train_Event_gold_mentions.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:719e8e476a05b9c6abb01cd51f641c3ace67c6214e60f076dc5d1f26ef63c3a5
+ size 90177027
gitattributes.txt ADDED
@@ -0,0 +1,31 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Train_Event_gold_mentions.json filter=lfs diff=lfs merge=lfs -text
+ All_Event_gold_mentions_unfiltered.json filter=lfs diff=lfs merge=lfs -text
+ Dev_Event_gold_mentions_validated.json filter=lfs diff=lfs merge=lfs -text
+ Test_Event_gold_mentions_validated.json filter=lfs diff=lfs merge=lfs -text
gitignore.txt ADDED
@@ -0,0 +1 @@
+ .idea