---
language:
  - en
license:
  - other
multilinguality:
  - monolingual
size_categories:
  - 1k<10K
task_categories:
  - token-classification
task_ids:
  - named-entity-recognition
pretty_name: TweetNER7
---

# Dataset Card for "tner/tweetner7"

## Dataset Description

### Dataset Summary

This is the official repository of TweetNER7 ("Named Entity Recognition in Twitter: A Dataset and Analysis on Short-Term Temporal Shifts", AACL 2022 main conference), an NER dataset on Twitter with 7 entity labels. Each instance of TweetNER7 carries a timestamp ranging from September 2019 to August 2021. The tweet collection used in TweetNER7 is the same as that used in TweetTopic. The dataset is also integrated into TweetNLP.

- Entity Types: corporation, creative_work, event, group, location, product, person

### Preprocessing

We pre-process tweets before annotation to normalize some artifacts: URLs are converted into a special token {{URL}} and non-verified usernames into {{USERNAME}}. Verified usernames are kept but wrapped in symbols, as in {@username@}. For example, the tweet

```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from @herbiehancock
via @bluenoterecords link below:
http://bluenote.lnk.to/AlbumOfTheWeek
```

is transformed into the following text.

```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from {@herbiehancock@}
via {@bluenoterecords@} link below: {{URL}}
```

A simple function to format a tweet is shown below.

```python
import re
from urlextract import URLExtract

extractor = URLExtract()

def format_tweet(tweet):
    # mask web urls
    urls = extractor.find_urls(tweet)
    for url in urls:
        tweet = tweet.replace(url, "{{URL}}")
    # wrap twitter accounts in {@...@}
    tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
    return tweet

target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
print(format_tweet(target))
# Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}
```

We ask annotators to ignore those special tokens but label the verified users' mentions.
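The snippet above only wraps mentions in {@...@}; converting non-verified usernames into {{USERNAME}} additionally requires knowing which handles are verified. A minimal stdlib-only sketch of that step, assuming a hypothetical `verified_handles` set (in practice this information came from the Twitter API at collection time):

```python
import re

# Hypothetical set of verified handles, for illustration only.
verified_handles = {"@herbiehancock", "@bluenoterecords"}

def mask_usernames(tweet: str) -> str:
    """Wrap verified mentions in {@...@}; replace the rest with {{USERNAME}}."""
    def repl(match: re.Match) -> str:
        handle = match.group(0)
        if handle in verified_handles:
            return "{" + handle + "@}"
        return "{{USERNAME}}"
    return re.sub(r"@\w+", repl, tweet)

print(mask_usernames("thanks @herbiehancock and @some_random_user!"))
# thanks {@herbiehancock@} and {{USERNAME}}!
```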

### Data Split

| split | number of instances | description |
|---|---|---|
| train_2020 | 4616 | training dataset from September 2019 to August 2020 |
| train_2021 | 2495 | training dataset from September 2020 to August 2021 |
| train_all | 7111 | combined training dataset of train_2020 and train_2021 |
| validation_2020 | 576 | validation dataset from September 2019 to August 2020 |
| validation_2021 | 310 | validation dataset from September 2020 to August 2021 |
| test_2020 | 576 | test dataset from September 2019 to August 2020 |
| test_2021 | 2807 | test dataset from September 2020 to August 2021 |
| train_random | 4616 | randomly sampled training dataset with the same size as train_2020, drawn from train_all |
| validation_random | 576 | randomly sampled validation dataset with the same size as validation_2020, drawn from validation_all |
| extra_2020 | 87880 | extra tweets without annotations from September 2019 to August 2020 |
| extra_2021 | 93594 | extra tweets without annotations from September 2020 to August 2021 |

For the temporal-shift setting, models should be trained on train_2020 with validation_2020 and evaluated on test_2021. In the general setting, models should be trained on train_all, the most representative training set, with validation_2021 and evaluated on test_2021.

## Dataset Structure

### Data Instances

An example instance from the train split looks as follows.

```python
{
    'tokens': ['Morning', '5km', 'run', 'with', '{{USERNAME}}', 'for', 'breast', 'cancer', 'awareness', '#', 'pinkoctober', '#', 'breastcancerawareness', '#', 'zalorafit', '#', 'zalorafitxbnwrc', '@', 'The', 'Central', 'Park', ',', 'Desa', 'Parkcity', '{{URL}}'],
    'tags': [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 14, 2, 14, 14, 14, 14, 14, 14, 4, 11, 11, 11, 11, 14],
    'id': '1183344337016381440',
    'date': '2019-10-13'
}
```

### Label ID

The label2id dictionary is shown below.

```json
{
    "B-corporation": 0,
    "B-creative_work": 1,
    "B-event": 2,
    "B-group": 3,
    "B-location": 4,
    "B-person": 5,
    "B-product": 6,
    "I-corporation": 7,
    "I-creative_work": 8,
    "I-event": 9,
    "I-group": 10,
    "I-location": 11,
    "I-person": 12,
    "I-product": 13,
    "O": 14
}
```
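As an illustration (not part of the official tooling), the integer tags of an instance can be decoded back into IOB2 labels and entity spans with a few lines of Python:

```python
label2id = {
    "B-corporation": 0, "B-creative_work": 1, "B-event": 2, "B-group": 3,
    "B-location": 4, "B-person": 5, "B-product": 6,
    "I-corporation": 7, "I-creative_work": 8, "I-event": 9, "I-group": 10,
    "I-location": 11, "I-person": 12, "I-product": 13, "O": 14,
}
id2label = {i: label for label, i in label2id.items()}

def decode_entities(tokens, tags):
    """Collect (entity_type, surface_form) pairs from IOB2 tag ids."""
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        label = id2label[tag]
        if label.startswith("B-"):
            current = [label[2:], [token]]
            entities.append(current)
        elif label.startswith("I-") and current is not None and current[0] == label[2:]:
            current[1].append(token)
        else:
            current = None
    return [(etype, " ".join(words)) for etype, words in entities]

print(decode_entities(["The", "Central", "Park"], [4, 11, 11]))
# [('location', 'The Central Park')]
```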

## Models

| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) | F1 (2021)/corporation | F1 (2021)/creative_work | F1 (2021)/event | F1 (2021)/group | F1 (2021)/location | F1 (2021)/person | F1 (2021)/product | Micro F1 (2020) | Macro F1 (2020) | F1 (2020)/corporation | F1 (2020)/creative_work | F1 (2020)/event | F1 (2020)/group | F1 (2020)/location | F1 (2020)/person | F1 (2020)/product | Entity-Span F1 (2021) | Entity-Span F1 (2020) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| tner/roberta-large-tweetner7-all | tweetner7 | roberta-large | 65.75 | 61.25 | 53.92 | 47.61 | 46.73 | 61.4 | 67.07 | 82.93 | 69.06 | 66.29 | 62.97 | 61.84 | 51.59 | 50.29 | 55.99 | 69.23 | 82.01 | 69.86 | 78.82 | 76.43 |
| tner/roberta-base-tweetner7-all | tweetner7 | roberta-base | 65.16 | 60.81 | 51.74 | 46.64 | 46.73 | 60.71 | 68.33 | 83.77 | 67.77 | 65.32 | 61.66 | 61.94 | 48.94 | 45.14 | 56.58 | 68.94 | 82.75 | 67.33 | 78.93 | 75.23 |
| tner/twitter-roberta-base-2019-90m-tweetner7-all | tweetner7 | cardiffnlp/twitter-roberta-base-2019-90m | 65.68 | 61 | 50.87 | 47.3 | 48.41 | 61.48 | 67.94 | 83.93 | 67.06 | 65.46 | 61.22 | 56.85 | 52.15 | 46.68 | 56.68 | 65.1 | 84.55 | 66.5 | 78.89 | 76.43 |
| tner/twitter-roberta-base-dec2020-tweetner7-all | tweetner7 | cardiffnlp/twitter-roberta-base-dec2020 | 65.26 | 60.7 | 51.53 | 47.6 | 46.69 | 60.93 | 66.89 | 83.87 | 67.38 | 65.44 | 61.39 | 56.76 | 55.06 | 46.24 | 55.52 | 64.26 | 84.87 | 67 | 78.68 | 75.87 |
| tner/bertweet-large-tweetner7-all | tweetner7 | vinai/bertweet-large | 66.46 | 61.87 | 54.5 | 47.36 | 49.15 | 62.38 | 67.55 | 84.15 | 68.02 | 66.76 | 63.08 | 58.89 | 55.24 | 48.89 | 59.85 | 66.67 | 83.49 | 68.51 | 79.53 | 77.59 |
| tner/bertweet-base-tweetner7-all | tweetner7 | vinai/bertweet-base | 65.36 | 60.52 | 52.51 | 46.54 | 48.06 | 60.33 | 65.67 | 84.08 | 66.46 | 65.74 | 61.61 | 57.22 | 54.1 | 48.55 | 57.35 | 64.57 | 84.16 | 65.36 | 78.99 | 76.91 |
| tner/bert-large-tweetner7-all | tweetner7 | bert-large | 63.58 | 59 | 50.13 | 40.16 | 47 | 59.74 | 67.2 | 81.86 | 66.91 | 62.49 | 58.63 | 55.56 | 47.65 | 43.08 | 54.88 | 63.9 | 80.31 | 65.04 | 77.21 | 73.58 |
| tner/bert-base-tweetner7-all | tweetner7 | bert-base | 62.3 | 57.59 | 51.41 | 38.86 | 45.81 | 56.61 | 62.65 | 81.97 | 65.8 | 62.1 | 57.74 | 56.55 | 41.52 | 45.04 | 54.23 | 60.53 | 81.86 | 64.49 | 76.62 | 72.98 |
| tner/roberta-large-tweetner7-continuous | tweetner7 | roberta-large | 66.02 | 60.9 | 53.15 | 44.42 | 48.79 | 61.15 | 67.41 | 84.72 | 66.63 | 66.26 | 62.4 | 57.75 | 54.14 | 48.48 | 57.52 | 67.69 | 83.33 | 67.84 | 79.14 | 76.44 |
| tner/roberta-base-tweetner7-continuous | tweetner7 | roberta-base | 65.47 | 60.01 | 50.97 | 41.68 | 46.75 | 61.52 | 67.98 | 84.49 | 66.67 | 65.15 | 60.82 | 58.05 | 49.85 | 44.74 | 56.05 | 67.08 | 82.63 | 67.33 | 78.1 | 75.05 |
| tner/twitter-roberta-base-2019-90m-tweetner7-continuous | tweetner7 | cardiffnlp/twitter-roberta-base-2019-90m | 65.87 | 61.07 | 51.66 | 48.01 | 48.47 | 60.42 | 68.36 | 84.59 | 66.01 | 64.76 | 60.58 | 56.19 | 54.97 | 44.67 | 53.17 | 63.53 | 83.64 | 67.88 | 78.44 | 75.53 |
| tner/twitter-roberta-base-dec2020-tweetner7-continuous | tweetner7 | cardiffnlp/twitter-roberta-base-dec2020 | 65.51 | 60.57 | 53.56 | 45.3 | 46.92 | 61.07 | 66.28 | 84.33 | 66.49 | 65.29 | 61.28 | 59.26 | 55.59 | 43.84 | 54.38 | 64.14 | 84.08 | 67.68 | 78.03 | 75.88 |
| tner/bertweet-large-tweetner7-continuous | tweetner7 | vinai/bertweet-large | 66.41 | 61.66 | 55.07 | 46.85 | 48.16 | 61.44 | 68.87 | 84.04 | 67.18 | 65.88 | 61.82 | 58.38 | 54.65 | 46.12 | 56.39 | 66.67 | 83.89 | 66.67 | 78.97 | 76.42 |
| tner/bertweet-base-tweetner7-continuous | tweetner7 | vinai/bertweet-base | 65.84 | 61.02 | 51.85 | 46.83 | 49.66 | 61.17 | 66.58 | 84.47 | 66.59 | 65.16 | 61.35 | 55.76 | 56.83 | 46.22 | 56.32 | 66.27 | 82.94 | 65.13 | 79.1 | 76.8 |
| tner/bert-large-tweetner7-continuous | tweetner7 | bert-large | 63.2 | 57.67 | 51.4 | 39.74 | 42.55 | 58.6 | 63.36 | 81.27 | 66.78 | 62.48 | 57.87 | 56.56 | 43.65 | 45.51 | 50.38 | 60.26 | 80.62 | 68.12 | 76.04 | 72.46 |
| tner/bert-base-tweetner7-continuous | tweetner7 | bert-base | 61.8 | 56.84 | 47.4 | 38.22 | 44.05 | 57.73 | 64.42 | 80.72 | 65.31 | 61.41 | 57.11 | 54.41 | 42.41 | 41.46 | 51.25 | 63.49 | 79.9 | 66.84 | 76.53 | 72.5 |
| tner/roberta-large-tweetner7-2021 | tweetner7 | roberta-large | 64.05 | 59.11 | 50.58 | 43.91 | 46.6 | 60.68 | 63.99 | 82.68 | 65.3 | 63.36 | 59.15 | 53.22 | 49.41 | 46.61 | 54.65 | 63.12 | 81.33 | 65.67 | 77.71 | 74.36 |
| tner/roberta-base-tweetner7-2021 | tweetner7 | roberta-base | 61.76 | 57 | 48.9 | 38 | 45.51 | 57.02 | 65.06 | 81.34 | 63.17 | 60.5 | 56.12 | 49.86 | 45.33 | 39.83 | 52.81 | 60.95 | 79.93 | 64.15 | 76.92 | 73.75 |
| tner/twitter-roberta-base-2019-90m-tweetner7-2021 | tweetner7 | cardiffnlp/twitter-roberta-base-2019-90m | 63.23 | 56.72 | 46.73 | 33.12 | 45.97 | 57.61 | 64.42 | 83.21 | 65.95 | 61.91 | 56.09 | 48.59 | 41.1 | 44.35 | 49.57 | 64.16 | 82.3 | 62.6 | 75.69 | 73.04 |
| tner/twitter-roberta-base-dec2020-tweetner7-2021 | tweetner7 | cardiffnlp/twitter-roberta-base-dec2020 | 63.98 | 58.91 | 51.04 | 40.86 | 46.2 | 60.22 | 65.55 | 82.64 | 65.88 | 63.07 | 58.51 | 53.26 | 47.09 | 40.92 | 56.46 | 64.86 | 82.1 | 64.89 | 77.87 | 75.35 |
| tner/bertweet-large-tweetner7-2021 | tweetner7 | vinai/bertweet-large | 62.9 | 58.13 | 48.87 | 42.33 | 44.87 | 56.4 | 66.21 | 81.05 | 67.16 | 61.61 | 56.84 | 54.24 | 40.83 | 43.34 | 50.3 | 64.56 | 81.57 | 63.05 | 76.5 | 74.46 |
| tner/bertweet-base-tweetner7-2021 | tweetner7 | vinai/bertweet-base | 63.09 | 57.35 | 45.66 | 40.99 | 46.28 | 59.32 | 63.34 | 82.79 | 63.1 | 62.06 | 57.23 | 49.87 | 45.83 | 43.89 | 52.65 | 63.58 | 81.79 | 63.01 | 77.88 | 75.95 |
| tner/bert-large-tweetner7-2021 | tweetner7 | bert-large | 59.75 | 53.93 | 44.87 | 34.17 | 40.24 | 55.68 | 63.95 | 79.4 | 59.19 | 56.63 | 50.97 | 49.32 | 31.58 | 30.39 | 50.27 | 59.76 | 76.07 | 59.41 | 74.98 | 70.66 |
| tner/bert-base-tweetner7-2021 | tweetner7 | bert-base | 60.67 | 55.5 | 46.8 | 35.35 | 41.28 | 56.23 | 64.78 | 79.89 | 64.17 | 58.45 | 54.22 | 48.84 | 43.05 | 32.27 | 50.65 | 61.54 | 76.68 | 66.5 | 75.72 | 70.86 |
| tner/roberta-large-tweetner7-2020 | tweetner7 | roberta-large | 64.76 | 60 | 52.23 | 45.89 | 48.51 | 60.88 | 64.43 | 83.32 | 64.75 | 65.67 | 61.88 | 56.82 | 51.85 | 51.06 | 58.65 | 67.06 | 82.59 | 65.15 | 78.36 | 76.11 |
| tner/roberta-base-tweetner7-2020 | tweetner7 | roberta-base | 64.21 | 59.11 | 50.75 | 44.44 | 43.9 | 59.15 | 65.84 | 83.92 | 65.73 | 64.25 | 60.23 | 58.59 | 48.94 | 43.84 | 55.31 | 65.63 | 82 | 67.32 | 77.89 | 74.8 |
| tner/twitter-roberta-base-2019-90m-tweetner7-2020 | tweetner7 | cardiffnlp/twitter-roberta-base-2019-90m | 64.28 | 59.31 | 48.54 | 46.89 | 43.69 | 59.09 | 67.01 | 84 | 65.98 | 65.42 | 61.11 | 56.28 | 53.69 | 43.39 | 56.23 | 64.76 | 84.73 | 68.72 | 77.9 | 76.56 |
| tner/twitter-roberta-base-dec2020-tweetner7-2020 | tweetner7 | cardiffnlp/twitter-roberta-base-dec2020 | 62.87 | 58.26 | 49.9 | 44.9 | 43.68 | 57.62 | 64.38 | 82.29 | 65.07 | 64.39 | 60.31 | 55.19 | 51.72 | 42.91 | 55.95 | 65.47 | 83.98 | 66.98 | 76.49 | 75.65 |
| tner/bertweet-large-tweetner7-2020 | tweetner7 | vinai/bertweet-large | 64.01 | 59.47 | 52.29 | 46.3 | 45 | 59.27 | 65.53 | 82.73 | 65.19 | 65.93 | 62.61 | 59.67 | 58.92 | 45.01 | 54.55 | 68.09 | 83.59 | 68.47 | 78.26 | 77.38 |
| tner/bertweet-base-tweetner7-2020 | tweetner7 | vinai/bertweet-base | 64.06 | 59.44 | 51.62 | 45.72 | 45.87 | 59.74 | 64.7 | 82.71 | 65.74 | 66.38 | 62.41 | 58.05 | 54.95 | 49.9 | 56.18 | 67.45 | 84.75 | 65.57 | 77.91 | 77.73 |
| tner/bert-large-tweetner7-2020 | tweetner7 | bert-large | 61.43 | 56.14 | 50.11 | 39.03 | 41.8 | 57.31 | 61.13 | 80.6 | 63.02 | 62.19 | 58.15 | 56.68 | 43.75 | 47.24 | 49.72 | 62.62 | 80.03 | 66.97 | 75.86 | 73.79 |
| tner/bert-base-tweetner7-2020 | tweetner7 | bert-base | 60.09 | 54.67 | 44.11 | 37.52 | 40.28 | 55.77 | 61.8 | 80.52 | 62.73 | 60.87 | 56.49 | 50.77 | 44.07 | 38.35 | 53.18 | 63.29 | 81.06 | 64.71 | 75.61 | 72.42 |

Model descriptions follow below.

- Model with suffix -all: fine-tuned on train_all and validated on validation_2021.
- Model with suffix -continuous: fine-tuned on train_2021 continuously after fine-tuning on train_2020, validated on validation_2021.
- Model with suffix -2021: fine-tuned only on train_2021 and validated on validation_2021.
- Model with suffix -2020: fine-tuned only on train_2020 and validated on validation_2020.

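The tables report both micro and macro F1: micro F1 pools true positives, false positives, and false negatives over all entity types, while macro F1 averages the per-type F1 scores, so rare types weigh more heavily in the macro score. A small illustrative sketch of the difference, using toy counts (not taken from the paper):

```python
def f1(tp, fp, fn):
    """F1 score from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# toy per-entity-type counts: (true positives, false positives, false negatives)
counts = {"person": (80, 10, 10), "event": (20, 20, 20)}

# macro: average the per-type F1 scores
macro_f1 = sum(f1(*c) for c in counts.values()) / len(counts)
# micro: pool the counts across types, then compute one F1
micro_f1 = f1(*(sum(c[i] for c in counts.values()) for i in range(3)))

print(round(macro_f1, 4), round(micro_f1, 4))  # 0.6944 0.7692
```

Because the frequent "person" type is easier in this toy example, micro F1 comes out higher than macro F1.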
### Sub Models (used in ablation study)

- Model fine-tuned only on train_random and validated on validation_2020.

| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) | F1 (2021)/corporation | F1 (2021)/creative_work | F1 (2021)/event | F1 (2021)/group | F1 (2021)/location | F1 (2021)/person | F1 (2021)/product | Micro F1 (2020) | Macro F1 (2020) | F1 (2020)/corporation | F1 (2020)/creative_work | F1 (2020)/event | F1 (2020)/group | F1 (2020)/location | F1 (2020)/person | F1 (2020)/product | Entity-Span F1 (2021) | Entity-Span F1 (2020) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| tner/roberta-large-tweetner7-random | tweetner7 | roberta-large | 66.33 | 60.96 | 52.24 | 45.19 | 48.95 | 63.28 | 66.92 | 83.84 | 66.34 | 64.4 | 60.09 | 53.45 | 50.27 | 46.68 | 57.25 | 65.44 | 81.79 | 65.73 | 79 | 75.52 |
| tner/roberta-base-tweetner7-random | tweetner7 | roberta-base | 64.04 | 59.23 | 50.73 | 42.35 | 45.98 | 59.73 | 67.95 | 82.32 | 65.58 | 64.14 | 59.78 | 57.58 | 47.62 | 42.19 | 56.48 | 67.07 | 82.71 | 64.84 | 78.04 | 74.26 |
| tner/twitter-roberta-base-2019-90m-tweetner7-random | tweetner7 | cardiffnlp/twitter-roberta-base-2019-90m | 63.29 | 58.5 | 50.56 | 41.68 | 45.7 | 59.91 | 64.8 | 83.02 | 63.82 | 64.29 | 60.67 | 56.85 | 48.88 | 45.36 | 55.03 | 71.75 | 82.29 | 64.55 | 77.36 | 76.21 |
| tner/twitter-roberta-base-dec2020-tweetner7-random | tweetner7 | cardiffnlp/twitter-roberta-base-dec2020 | 64.72 | 59.97 | 49.08 | 46.42 | 45.65 | 61.68 | 67.5 | 83.31 | 66.15 | 64.69 | 60.53 | 55.56 | 53.85 | 44.27 | 56.57 | 65.05 | 84.03 | 64.41 | 78.29 | 75.94 |
| tner/bertweet-large-tweetner7-random | tweetner7 | vinai/bertweet-large | 64.86 | 60.49 | 53.59 | 45.47 | 46.19 | 61.64 | 66.16 | 82.79 | 67.58 | 66.02 | 62.72 | 57.81 | 58.19 | 47.64 | 58.78 | 68.25 | 83.36 | 64.97 | 78.43 | 77.2 |
| tner/bertweet-base-tweetner7-random | tweetner7 | vinai/bertweet-base | 65.55 | 59.58 | 49.6 | 40.06 | 47.29 | 62.07 | 67.98 | 83.52 | 66.56 | 63.89 | 58.61 | 54.38 | 45.05 | 41.97 | 55.88 | 66.03 | 83.36 | 63.61 | 77.8 | 74.39 |
| tner/bert-large-tweetner7-random | tweetner7 | bert-large | 62.39 | 57.54 | 49.15 | 39.72 | 44.79 | 57.67 | 67.22 | 81.17 | 63.07 | 61.54 | 57.09 | 56.34 | 42.81 | 42.69 | 53.36 | 61.98 | 81.04 | 61.43 | 76.49 | 73.29 |
| tner/bert-base-tweetner7-random | tweetner7 | bert-base | 60.91 | 55.92 | 46.51 | 39.05 | 41.83 | 56.14 | 63.9 | 80.45 | 63.54 | 61.04 | 56.75 | 53.94 | 42.77 | 39.15 | 53.07 | 62.67 | 80.59 | 65.08 | 75.72 | 72.73 |
- Model fine-tuned on the self-labeled dataset over extra_{2020,2021} and validated on validation_2020.

| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) | F1 (2021)/corporation | F1 (2021)/creative_work | F1 (2021)/event | F1 (2021)/group | F1 (2021)/location | F1 (2021)/person | F1 (2021)/product | Micro F1 (2020) | Macro F1 (2020) | F1 (2020)/corporation | F1 (2020)/creative_work | F1 (2020)/event | F1 (2020)/group | F1 (2020)/location | F1 (2020)/person | F1 (2020)/product | Entity-Span F1 (2021) | Entity-Span F1 (2020) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| tner/roberta-large-tweetner7-selflabel2020 | tweetner7 | roberta-large | 64.56 | 59.63 | 52.28 | 46.82 | 44.47 | 61.55 | 64.24 | 84.02 | 64.02 | 65.9 | 61.85 | 58.15 | 51.99 | 48.05 | 57.25 | 66.86 | 84.16 | 66.51 | 78.46 | 76.71 |
| tner/roberta-large-tweetner7-selflabel2021 | tweetner7 | roberta-large | 64.6 | 59.45 | 50.21 | 45.89 | 45.18 | 60.3 | 66.71 | 83.46 | 64.38 | 64.75 | 60.65 | 56.19 | 50.41 | 47.31 | 55.21 | 67.46 | 81.9 | 66.06 | 78.57 | 76.63 |
| tner/roberta-large-tweetner7-2020-selflabel2020-all | tweetner7 | roberta-large | 65.46 | 60.39 | 52.56 | 46.12 | 45.83 | 61.7 | 67.17 | 84.39 | 64.95 | 66.23 | 62.26 | 57.5 | 54.2 | 46.75 | 58.32 | 67.86 | 83.56 | 67.61 | 79.17 | 77.17 |
| tner/roberta-large-tweetner7-2020-selflabel2021-all | tweetner7 | roberta-large | 64.52 | 59.45 | 50.67 | 45.38 | 44.53 | 60.63 | 66.19 | 83.59 | 65.17 | 66.05 | 61.83 | 58.23 | 53.44 | 44.39 | 59.79 | 68.09 | 83.43 | 65.43 | 78.5 | 76.94 |
| tner/roberta-large-tweetner7-selflabel2020-continuous | tweetner7 | roberta-large | 65.15 | 60.23 | 52.53 | 46.5 | 46.18 | 60.87 | 66.67 | 83.83 | 65.03 | 66.7 | 62.86 | 59.35 | 54.44 | 48.28 | 59.44 | 67.66 | 83.36 | 67.45 | 78.73 | 77.12 |
| tner/roberta-large-tweetner7-selflabel2021-continuous | tweetner7 | roberta-large | 64.48 | 59.41 | 50.58 | 45.67 | 44.4 | 61.09 | 66.36 | 83.63 | 64.14 | 65.48 | 61.42 | 56.93 | 51.75 | 48.72 | 57.61 | 67.27 | 83.29 | 64.37 | 78.36 | 76.5 |

Model descriptions follow below.

- Model with suffix -selflabel2020: fine-tuned on the self-annotated data over the extra_2020 split of tweetner7.
- Model with suffix -selflabel2021: fine-tuned on the self-annotated data over the extra_2021 split of tweetner7.
- Model with suffix -2020-selflabel2020-all: fine-tuned on the combined training dataset of the self-annotated extra_2020 data and train_2020.
- Model with suffix -2020-selflabel2021-all: fine-tuned on the combined training dataset of the self-annotated extra_2021 data and train_2020.
- Model with suffix -selflabel2020-continuous: fine-tuned on train_2020 and then continuously fine-tuned on the self-annotated extra_2020 data.
- Model with suffix -selflabel2021-continuous: fine-tuned on train_2020 and then continuously fine-tuned on the self-annotated extra_2021 data.

## Reproduce Experimental Result

To reproduce the experimental results of our AACL paper, please see the repository at https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper.

## Citation Information

```bibtex
@inproceedings{ushio-etal-2022-tweet,
    title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts",
    author = "Ushio, Asahi  and
        Neves, Leonardo  and
        Silva, Vitor  and
        Barbieri, Francesco  and
        Camacho-Collados, Jose",
    booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing",
    month = nov,
    year = "2022",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```