---
annotations_creators:
  - expert-generated
language_creators:
  - machine-generated
languages:
  - en
licenses:
  - unknown
multilinguality:
  - monolingual
pretty_name: BOUN
size_categories:
  - unknown
source_datasets:
  - original
task_categories:
  - structure-prediction
task_ids:
  - structure-prediction-other-word-segmentation
---

# Dataset Card for BOUN

## Dataset Description

### Dataset Summary

Dev-BOUN is a development set of 500 manually segmented hashtags, and Test-BOUN is a test set of another 500 manually segmented hashtags. Both are selected from tweets about movies, TV shows, popular people, sports teams, etc.

### Languages

English

## Dataset Structure

### Data Instances

```json
{
    "index": 0,
    "hashtag": "tryingtosleep",
    "segmentation": "trying to sleep"
}
```

### Data Fields

- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
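
The fields above imply a simple invariant: removing the spaces from `segmentation` should reproduce `hashtag` exactly. As a sketch (the record below is the example instance from this card, not output from the actual dataset loader), a minimal consistency check could look like:

```python
# Sanity check on a BOUN-style record: stripping the spaces from the
# gold segmentation must reproduce the original hashtag.

def is_consistent(rec: dict) -> bool:
    """Return True if the segmentation's characters match the hashtag."""
    return rec["segmentation"].replace(" ", "") == rec["hashtag"]

record = {
    "index": 0,
    "hashtag": "tryingtosleep",
    "segmentation": "trying to sleep",
}

print(is_consistent(record))  # True
```

A check like this can be run over every split to catch malformed records before training or evaluation.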

## Citation Information

```bibtex
@article{celebi2018segmenting,
  title={Segmenting hashtags and analyzing their grammatical structure},
  author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
  journal={Journal of the Association for Information Science and Technology},
  volume={69},
  number={5},
  pages={675--686},
  year={2018},
  publisher={Wiley Online Library}
}
```

## Contributions

This dataset was added by @ruanchaves while developing the hashformers library.