


WEC-Eng

A large-scale dataset for cross-document event coreference extracted from English Wikipedia.



Load Dataset

You can read in WEC-Eng files as follows (using the huggingface_hub library):

from huggingface_hub import hf_hub_url, cached_download
import json

REPO_ID = "datasets/Intel/WEC-Eng"
splits_files = ["Dev_Event_gold_mentions_validated.json",
                "Test_Event_gold_mentions_validated.json",
                "Train_Event_gold_mentions.json"]
wec_eng = list()
for split_file in splits_files:
    wec_eng.append(json.load(open(cached_download(
        hf_hub_url(REPO_ID, split_file)), "r")))
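Each mention record carries a coref_chain ID identifying its cluster, so a loaded split can be grouped into coreference clusters directly. A minimal sketch, using a small hypothetical in-memory list of mentions (with only a subset of the real fields) rather than the downloaded files:

```python
from collections import defaultdict

# Hypothetical mention records; real WEC-Eng records contain the full field set.
mentions = [
    {"mention_id": "1", "coref_chain": 2293469, "tokens_str": "Family Values Tour 1998"},
    {"mention_id": "2", "coref_chain": 2293469, "tokens_str": "Family Values Tour"},
    {"mention_id": "3", "coref_chain": 555, "tokens_str": "Woodstock '99"},
]

# Group mention IDs by their coreference chain/cluster ID.
clusters = defaultdict(list)
for mention in mentions:
    clusters[mention["coref_chain"]].append(mention["mention_id"])

print(dict(clusters))  # {2293469: ['1', '2'], 555: ['3']}
```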

Dataset Structure

Data Splits

  • Final version of the English CD event coreference dataset
    • Train - Train_Event_gold_mentions.json
    • Dev - Dev_Event_gold_mentions_validated.json
    • Test - Test_Event_gold_mentions_validated.json
                 Train    Valid   Test
  Clusters       7,042    233     322
  Event Mentions 40,529   1,250   1,893
  • The version of the dataset without the within-cluster lexical-diversity control
    • All (experimental) - All_Event_gold_mentions_unfiltered.json
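The cluster and mention counts in the table above can be recomputed from any loaded split: mentions are simply the list entries, and clusters are the distinct coref_chain values. A sketch, illustrated with a tiny hypothetical stand-in for a loaded split:

```python
# Hypothetical stand-in for a loaded split (each WEC-Eng split is a list of mention dicts).
split = [
    {"coref_chain": 1, "tokens_str": "a"},
    {"coref_chain": 1, "tokens_str": "b"},
    {"coref_chain": 2, "tokens_str": "c"},
]

num_mentions = len(split)                              # every entry is one event mention
num_clusters = len({m["coref_chain"] for m in split})  # distinct coreference chains
print(f"Event Mentions: {num_mentions}, Clusters: {num_clusters}")
```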

Data Instances

        "coref_chain": 2293469,
        "coref_link": "Family Values Tour 1998",
        "doc_id": "House of Pain",
        "mention_context": [
  "mention_head": "Tour",
  "mention_head_lemma": "Tour",
  "mention_head_pos": "PROPN",
  "mention_id": "108172",
  "mention_index": 1,
  "mention_ner": "UNK",
  "mention_type": 8,
  "predicted_coref_chain": null,
  "sent_id": 2,
  "tokens_number": [
  "tokens_str": "Family Values Tour 1998",
  "topic_id": -1

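In each instance, tokens_number indexes the mention's tokens within mention_context, so tokens_str can be recovered from the context. A sketch with a shortened hypothetical context (a real mention_context holds the full tokenized paragraph):

```python
# Shortened hypothetical mention; field names match the schema described in this card.
mention = {
    "mention_context": ["The", "band", "joined", "the", "Family",
                        "Values", "Tour", "1998", "."],
    "tokens_number": [4, 5, 6, 7],
    "tokens_str": "Family Values Tour 1998",
}

# Recover the mention span by indexing the context with tokens_number.
span = " ".join(mention["mention_context"][i] for i in mention["tokens_number"])
assert span == mention["tokens_str"]
print(span)  # Family Values Tour 1998
```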
Data Fields

Field                  Value Type     Description
coref_chain            Numeric        Coreference chain/cluster ID
coref_link             String         Coreference link Wikipedia page/article title
doc_id                 String         Mention page/article title
mention_context        List[String]   Tokenized mention paragraph (including the mention)
mention_head           String         Mention span head token
mention_head_lemma     String         Mention span head token lemma
mention_head_pos       String         Mention span head token POS
mention_id             String         Mention ID
mention_index          Numeric        Mention index in the JSON file
mention_ner            String         Mention NER
tokens_number          List[Numeric]  Mention token IDs within the context
tokens_str             String         Mention span text
topic_id               Ignore         Ignore
mention_type           Ignore         Ignore
predicted_coref_chain  Ignore         Ignore
sent_id                Ignore         Ignore


Citation

@inproceedings{eirew-etal-2021-wec,
    title = "{WEC}: Deriving a Large-scale Cross-document Event Coreference dataset from {W}ikipedia",
    author = "Eirew, Alon  and
      Cattan, Arie  and
      Dagan, Ido",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "",
    doi = "10.18653/v1/2021.naacl-main.198",
    pages = "2498--2510",
    abstract = "Cross-document event coreference resolution is a foundational task for NLP applications involving multi-text processing. However, existing corpora for this task are scarce and relatively small, while annotating only modest-size clusters of documents belonging to the same topic. To complement these resources and enhance future research, we present Wikipedia Event Coreference (WEC), an efficient methodology for gathering a large-scale dataset for cross-document event coreference from Wikipedia, where coreference links are not restricted within predefined topics. We apply this methodology to the English Wikipedia and extract our large-scale WEC-Eng dataset. Notably, our dataset creation method is generic and can be applied with relatively little effort to other Wikipedia languages. To set baseline results, we develop an algorithm that adapts components of state-of-the-art models for within-document coreference resolution to the cross-document setting. Our model is suitably efficient and outperforms previously published state-of-the-art results for the task.",
}


License

We provide the following datasets under a Creative Commons Attribution-ShareAlike 3.0 Unported License. They are based on content extracted from Wikipedia, which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License.


If you have any questions, please create a GitHub issue at
