Dataset Card for "wnut_17"
Dataset Summary
WNUT 17: Emerging and Rare entity recognition
This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisation), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet “so.. kktny in 30 mins?” - even human experts find the entity “kktny” hard to detect and resolve. This task evaluates the ability to detect and classify novel, emerging, singleton named entities in noisy text.
The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities.
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
- Size of downloaded dataset files: 0.76 MB
- Size of the generated dataset: 1.66 MB
- Total amount of disk used: 2.43 MB
An example of 'train' looks as follows.
{
"id": "0",
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0],
"tokens": ["@paulwalk", "It", "'s", "the", "view", "from", "where", "I", "'m", "living", "for", "two", "weeks", ".", "Empire", "State", "Building", "=", "ESB", ".", "Pretty", "bad", "storm", "here", "last", "evening", "."]
}
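For context, a minimal sketch of how this example can be reproduced with the `datasets` library (assuming it is installed and the dataset is loaded from the Hub under the identifier `wnut_17`):

```python
from datasets import load_dataset

# Load all splits of WNUT 17 from the Hugging Face Hub
dataset = load_dataset("wnut_17")

# The first training example: parallel lists of tokens and integer NER tags
example = dataset["train"][0]
print(example["id"])        # "0"
print(example["tokens"])    # ["@paulwalk", "It", "'s", "the", "view", ...]
print(example["ner_tags"])  # [0, 0, 0, ..., 7, 8, 8, 0, 7, ...]
```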
Data Fields
The data fields are the same among all splits:
- `id` (`string`): ID of the example.
- `tokens` (`list` of `string`): Tokens of the example text.
- `ner_tags` (`list` of class labels): NER tags of the tokens (using the IOB2 format), with possible values as listed below (a sketch for mapping these ids back to label strings follows the list):
  - 0: `O`
  - 1: `B-corporation`
  - 2: `I-corporation`
  - 3: `B-creative-work`
  - 4: `I-creative-work`
  - 5: `B-group`
  - 6: `I-group`
  - 7: `B-location`
  - 8: `I-location`
  - 9: `B-person`
  - 10: `I-person`
  - 11: `B-product`
  - 12: `I-product`
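A short sketch, assuming the same `datasets` API as above, of how the integer ids in `ner_tags` can be mapped back to these IOB2 label strings via the split's features:

```python
from datasets import load_dataset

dataset = load_dataset("wnut_17")

# `ner_tags` is a sequence of class labels; `.names` lists the label strings
# in id order ("O", "B-corporation", "I-corporation", ...)
label_names = dataset["train"].features["ner_tags"].feature.names

# Convert one example's tag ids back to IOB2 labels and pair them with tokens
example = dataset["train"][0]
labels = [label_names[tag] for tag in example["ner_tags"]]
print(list(zip(example["tokens"], labels)))
```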
Data Splits
| train | validation | test |
|------:|-----------:|-----:|
|  3394 |       1009 | 1287 |
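As a quick check, a small sketch (same assumptions as the snippets above) that prints the per-split sizes reported in the table:

```python
from datasets import load_dataset

dataset = load_dataset("wnut_17")

# Number of examples per split; per the table above this should print
# train 3394, validation 1009, test 1287
for split_name, split in dataset.items():
    print(split_name, split.num_rows)
```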
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@inproceedings{derczynski-etal-2017-results,
title = "Results of the {WNUT}2017 Shared Task on Novel and Emerging Entity Recognition",
author = "Derczynski, Leon and
Nichols, Eric and
van Erp, Marieke and
Limsopatham, Nut",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W17-4418",
doi = "10.18653/v1/W17-4418",
pages = "140--147",
abstract = "This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions.
Named entities form the basis of many modern approaches to other tasks (like event clustering and summarization),
but recall on them is a real problem in noisy text - even among annotators.
This drop tends to be due to novel entities and surface forms.
Take for example the tweet {``}so.. kktny in 30 mins?!{''} {--} even human experts find the entity {`}kktny{'}
hard to detect and resolve. The goal of this task is to provide a definition of emerging and of rare entities,
and based on that, also datasets for detecting these entities. The task as described in this paper evaluated the
ability of participating entries to detect and classify novel and emerging named entities in noisy text.",
}
Contributions
Thanks to @thomwolf, @lhoestq, @stefan-it, @lewtun, @jplu for adding this dataset.