Datasets:
Tasks:
Token Classification
Modalities:
Text
Sub-tasks:
named-entity-recognition
Languages:
English
Size:
1K - 10K
License:
other
update
- README.md +81 -0
- btc.py +83 -0
- dataset/label.json +1 -0
- dataset/test.json +0 -0
- dataset/train.json +0 -0
- dataset/valid.json +0 -0
README.md
ADDED
@@ -0,0 +1,81 @@
---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: BTC
---

# Dataset Card for "tner/btc"

## Dataset Description

- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/C16-1111/](https://aclanthology.org/C16-1111/)
- **Dataset:** Broad Twitter Corpus
- **Domain:** Twitter
- **Number of Entity Types:** 3

### Dataset Summary

Broad Twitter Corpus NER dataset, formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `LOC`, `ORG`, `PER`

## Dataset Structure

### Data Instances

An example of `train` looks as follows.

```
{
    'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'],
    'tags': [12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 3, 9, 9, 12, 3, 12, 12, 12, 12, 12, 12, 12, 12]
}
```
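Each split file (`dataset/train.json`, `dataset/valid.json`, `dataset/test.json`) stores one such record per line as newline-delimited JSON. A minimal sketch of reading one record; the toy record below is illustrative only, with tag ids taken from `dataset/label.json`:

```python
import json

# An illustrative record in the JSON-lines layout used by dataset/*.json
# ("Empire State Building" tagged B-LOC, I-LOC, I-LOC per dataset/label.json).
line = '{"tokens": ["Empire", "State", "Building"], "tags": [0, 3, 3]}'
record = json.loads(line)

# tokens and tags are parallel sequences: one integer tag id per token.
assert len(record["tokens"]) == len(record["tags"])
```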

### Label ID

The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/btc/raw/main/dataset/label.json).

```python
{
    "B-LOC": 0,
    "B-ORG": 1,
    "B-PER": 2,
    "I-LOC": 3,
    "I-ORG": 4,
    "I-PER": 5,
    "O": 6
}
```
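To turn the integer `tags` of a record back into tag strings, the label2id mapping can be inverted. A minimal sketch using the dictionary above:

```python
# label2id as published in dataset/label.json.
label2id = {"B-LOC": 0, "B-ORG": 1, "B-PER": 2, "I-LOC": 3, "I-ORG": 4, "I-PER": 5, "O": 6}

# Invert to an id2label mapping for decoding tag ids to IOB2 tag strings.
id2label = {i: label for label, i in label2id.items()}

tags = [2, 5, 6]
labels = [id2label[i] for i in tags]
# labels == ["B-PER", "I-PER", "O"]
```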

### Data Splits

| name | train | validation | test |
|------|------:|-----------:|-----:|
| btc  |  2395 |       1009 | 1287 |

### Citation Information

```
@inproceedings{derczynski-etal-2016-broad,
    title = "Broad {T}witter Corpus: A Diverse Named Entity Recognition Resource",
    author = "Derczynski, Leon  and
      Bontcheva, Kalina  and
      Roberts, Ian",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1111",
    pages = "1169--1179",
    abstract = "One of the main obstacles, hampering method development and comparative evaluation of named entity recognition in social media, is the lack of a sizeable, diverse, high quality annotated corpus, analogous to the CoNLL{'}2003 news dataset. For instance, the biggest Ritter tweet corpus is only 45,000 tokens {--} a mere 15{\%} the size of CoNLL{'}2003. Another major shortcoming is the lack of temporal, geographic, and author diversity. This paper introduces the Broad Twitter Corpus (BTC), which is not only significantly bigger, but sampled across different regions, temporal periods, and types of Twitter users. The gold-standard named entity annotations are made by a combination of NLP experts and crowd workers, which enables us to harness crowd recall while maintaining high quality. We also measure the entity drift observed in our dataset (i.e. how entity representation varies over time), and compare to newswire. The corpus is released openly, including source text and intermediate annotations.",
}
```
btc.py
ADDED
@@ -0,0 +1,83 @@
""" NER dataset compiled by T-NER library https://github.com/asahi417/tner/tree/master/tner """
import json
from itertools import chain

import datasets

logger = datasets.logging.get_logger(__name__)
_DESCRIPTION = """[BTC](https://aclanthology.org/C16-1111/)"""
_NAME = "btc"
_VERSION = "1.0.0"
_CITATION = """
@inproceedings{derczynski-etal-2016-broad,
    title = "Broad {T}witter Corpus: A Diverse Named Entity Recognition Resource",
    author = "Derczynski, Leon  and
      Bontcheva, Kalina  and
      Roberts, Ian",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1111",
    pages = "1169--1179",
    abstract = "One of the main obstacles, hampering method development and comparative evaluation of named entity recognition in social media, is the lack of a sizeable, diverse, high quality annotated corpus, analogous to the CoNLL{'}2003 news dataset. For instance, the biggest Ritter tweet corpus is only 45,000 tokens {--} a mere 15{\%} the size of CoNLL{'}2003. Another major shortcoming is the lack of temporal, geographic, and author diversity. This paper introduces the Broad Twitter Corpus (BTC), which is not only significantly bigger, but sampled across different regions, temporal periods, and types of Twitter users. The gold-standard named entity annotations are made by a combination of NLP experts and crowd workers, which enables us to harness crowd recall while maintaining high quality. We also measure the entity drift observed in our dataset (i.e. how entity representation varies over time), and compare to newswire. The corpus is released openly, including source text and intermediate annotations.",
}
"""

_HOME_PAGE = "https://github.com/asahi417/tner"
_URL = f'https://huggingface.co/datasets/tner/{_NAME}/raw/main/dataset'
_URLS = {
    str(datasets.Split.TEST): [f'{_URL}/test.json'],
    str(datasets.Split.TRAIN): [f'{_URL}/train.json'],
    str(datasets.Split.VALIDATION): [f'{_URL}/valid.json'],
}


class BTCConfig(datasets.BuilderConfig):
    """BuilderConfig"""

    def __init__(self, **kwargs):
        """BuilderConfig.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(BTCConfig, self).__init__(**kwargs)


class BTC(datasets.GeneratorBasedBuilder):
    """Dataset."""

    BUILDER_CONFIGS = [
        BTCConfig(name=_NAME, version=datasets.Version(_VERSION), description=_DESCRIPTION),
    ]

    def _split_generators(self, dl_manager):
        downloaded_file = dl_manager.download_and_extract(_URLS)
        return [datasets.SplitGenerator(name=i, gen_kwargs={"filepaths": downloaded_file[str(i)]})
                for i in [datasets.Split.TRAIN, datasets.Split.VALIDATION, datasets.Split.TEST]]

    def _generate_examples(self, filepaths):
        _key = 0
        for filepath in filepaths:
            logger.info(f"generating examples from = {filepath}")
            with open(filepath, encoding="utf-8") as f:
                # Each file is newline-delimited JSON: one record per non-empty line.
                _list = [i for i in f.read().split('\n') if len(i) > 0]
                for i in _list:
                    data = json.loads(i)
                    yield _key, data
                    _key += 1

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "tokens": datasets.Sequence(datasets.Value("string")),
                    "tags": datasets.Sequence(datasets.Value("int32")),
                }
            ),
            supervised_keys=None,
            homepage=_HOME_PAGE,
            citation=_CITATION,
        )
dataset/label.json
ADDED
@@ -0,0 +1 @@
{"B-LOC": 0, "B-ORG": 1, "B-PER": 2, "I-LOC": 3, "I-ORG": 4, "I-PER": 5, "O": 6}
dataset/test.json
ADDED
The diff for this file is too large to render.
See raw diff
dataset/train.json
ADDED
The diff for this file is too large to render.
See raw diff
dataset/valid.json
ADDED
The diff for this file is too large to render.
See raw diff