Dataset Card for "ipm-nel"

Dataset Summary

This data is for the task of named entity recognition and linking/disambiguation over tweets. It adds an entity URI layer on top of an NER-annotated tweet dataset. The task is to detect entities and then link each one to the correct DBpedia entry, thus disambiguating otherwise ambiguous entity surface forms; for example, linking "Paris" to the correct city of that name (e.g. Paris, France vs. Paris, Texas).

The data concentrates on ten types of named entities: company, facility, geographic location, movie, musical artist, person, product, sports team, TV show, and other.

The file is tab-separated, in CoNLL format, with blank lines between tweets; a minimal parsing sketch follows the list below.

  • The data preserves the tokenisation used in the Ritter datasets.
  • PoS labels are not present for all tweets; they are included where they could be found in the Ritter data.
  • Where a URI could not be agreed upon, or the entity was not present in DBpedia, the linking URI is NIL.
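
As a rough illustration, here is a minimal Python sketch of reading that file into per-tweet rows. The column layout (token, NER tag, URI, with PoS where present) is an assumption, not a confirmed specification; check the distributed file before relying on it.

def read_conll_tweets(path):
    # One list of column lists per tweet; blank lines separate tweets.
    tweets, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                      # blank line = tweet boundary
                if current:
                    tweets.append(current)
                    current = []
            else:
                current.append(line.split("\t"))
    if current:                               # file may lack a trailing blank line
        tweets.append(current)
    return tweets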

See the paper, Analysis of Named Entity Recognition and Linking for Tweets, for a full description of the methodology.

Supported Tasks and Leaderboards

Languages

English of unknown region (bcp47:en)

Dataset Structure

Data Instances

ipm_nel

  • Size of downloaded dataset files: 120 KB
  • Size of the generated dataset:
  • Total amount of disk used:

An example of 'train' looks as follows.

{
  'id': '0', 
  'tokens': ['#Astros', 'lineup', 'for', 'tonight', '.', 'Keppinger', 'sits', ',', 'Downs', 'plays', '2B', ',', 'CJ', 'bats', '5th', '.', '@alysonfooter', 'http://bit.ly/bHvgCS'], 
  'ner_tags': [9, 0, 0, 0, 0, 7, 0, 0, 7, 0, 0, 0, 7, 0, 0, 0, 0, 0],
  'uris': "['http://dbpedia.org/resource/Houston_Astros', '', '', '', '', 'http://dbpedia.org/resource/Jeff_Keppinger', '', '', 'http://dbpedia.org/resource/Brodie_Downs', '', '', '', 'NIL', '', '', '', '', '']" 
}
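
To obtain instances like the one above, a hedged loading sketch: the identifier "ipm_nel" and script-based loading are assumptions about how this data is distributed, and the external host serving the source tarball must be reachable.

from datasets import load_dataset

# Assumed identifier; loading also requires the external download host
# (which serves ipm_nel.tar.gz) to be online.
ds = load_dataset("ipm_nel")
print(ds["train"][0]["tokens"])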

Data Fields

  • id: a string feature.
  • tokens: a list of string features.
  • ner_tags: a list of classification labels (int) indexing the tagset of the ten entity types listed above; in the example, 7 corresponds to person and 9 to sportsteam.
  • uris: a list of URIs (string) that disambiguate entities, set to NIL when an entity has no DBpedia entry and blank for outside-of-entity tokens. In the example above the field is serialised as the string form of a list; a recovery sketch follows this list.
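
Since the example serialises uris as a string, a small sketch for recovering the list, assuming that string form holds across the dataset:

import ast

# Recover the list from its string serialisation; pass the value
# through unchanged if it is already a list.
def parse_uris(uris_field):
    if isinstance(uris_field, str):
        return ast.literal_eval(uris_field)
    return uris_field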

Data Splits

The dataset has a single split, train, containing 183 sentences (tweets).

Dataset Creation

Curation Rationale

To gather a social media benchmark for named entity linking that is sufficiently different from newswire data.

Source Data

Initial Data Collection and Normalization

The data is partly harvested from the corpus distributed with Ritter et al.'s Named Entity Recognition in Tweets: An Experimental Study, and partly collected from Twitter by the authors.

Who are the source language producers?

English-speaking Twitter users, posting between October 2011 and September 2013.

Annotations

Annotation process

The authors were allocated documents and marked them for named entities (where these were not already present), then attempted to find the best-fitting DBpedia entry for each entity found. Each entity mention was labelled by a random set of three volunteers. The annotation task was mediated using CrowdFlower (Biewald, 2012).

The interface showed each volunteer the text of the tweet, any URL links contained therein, and a set of candidate targets from DBpedia. Volunteers were encouraged to click the URL links in the tweet to gain additional context and so choose the correct DBpedia URI. Candidate entities were shown in random order, using the text from the corresponding DBpedia abstracts (where available) or the actual DBpedia URI otherwise. In addition, the options "none of the above", "not an entity" and "cannot decide" were offered, allowing volunteers to indicate respectively that the entity mention has no corresponding DBpedia URI, that the highlighted text is not an entity, or that the tweet text (and any links, if available) did not provide enough information to reliably disambiguate the mention.
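
The paper describes its own adjudication of these judgements; purely as an illustration of combining three volunteer labels per mention, here is a hypothetical majority-vote sketch (not the authors' actual procedure):

from collections import Counter

# Majority vote over the three judgements for one mention; fall back to
# NIL when no label wins at least two votes. Illustrative only.
def aggregate(judgements):
    label, count = Counter(judgements).most_common(1)[0]
    return label if count >= 2 else "NIL"

aggregate(["dbr:Paris", "dbr:Paris", "none of the above"])  # -> "dbr:Paris"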

Who are the annotators?

The annotators are 10 volunteer NLP researchers, drawn from the authors and the authors' institutions.

Personal and Sensitive Information

The data was public at the time of collection. User names are preserved.

Considerations for Using the Data

Social Impact of Dataset

The data may include content that users have since deleted. The data has NOT been vetted for any content, so there is a risk of harmful text.

Discussion of Biases

The data is annotated by NLP researchers; this group is known to have high agreement but low recall on English Twitter text (see C16-1111).

Other Known Limitations

The above limitations apply.

Additional Information

Dataset Curators

The dataset is curated by the paper's authors.

Licensing Information

The authors distribute this data under the Creative Commons Attribution licence, CC BY 4.0. You must acknowledge the authors if you use this data, but apart from that, you're quite free to do most things. See https://creativecommons.org/licenses/by/4.0/legalcode .

Citation Information

@article{derczynski2015analysis,
  title={Analysis of named entity recognition and linking for tweets},
  author={Derczynski, Leon and Maynard, Diana and Rizzo, Giuseppe and Van Erp, Marieke and Gorrell, Genevieve and Troncy, Rapha{\"e}l and Petrak, Johann and Bontcheva, Kalina},
  journal={Information Processing \& Management},
  volume={51},
  number={2},
  pages={32--49},
  year={2015},
  publisher={Elsevier}
}

Contributions

Dataset added by its author, @leondz.
