---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
languages:
- en
licenses:
- unknown
multilinguality:
- monolingual
pretty_name: BOUN
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids:
- structure-prediction-other-word-segmentation
---
# Dataset Card for BOUN
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting Hashtags and Analyzing Their Grammatical Structure](https://asistdl.onlinelibrary.wiley.com/doi/epdf/10.1002/asi.23989?author_access_token=qbKcE1jrre5nbv_Tn9csbU4keas67K9QMdWULTWMo8NOtY2aA39ck2w5Sm4ePQ1MZhbjCdEuaRlPEw2Kd12jzvwhwoWP0fdroZAwWsmXHPXxryDk_oBCup1i9_VDNIpU)
### Dataset Summary
Dev-BOUN is a development set of 500 manually segmented hashtags, selected from tweets about movies, TV shows, popular people, sports teams, etc.

Test-BOUN is a test set of 500 manually segmented hashtags, drawn from the same domains.
### Languages
English
## Dataset Structure
### Data Instances
```
{
    "index": 0,
    "hashtag": "tryingtosleep",
    "segmentation": "trying to sleep"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
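
By construction, a record's `segmentation` is the `hashtag` with spaces inserted, so stripping the spaces should reproduce the hashtag exactly. A minimal sanity check over the fields above (the `check_segmentation` helper is illustrative, not part of the dataset):

```python
def check_segmentation(record):
    """Return True if removing spaces from the segmentation
    reproduces the original hashtag exactly."""
    return record["segmentation"].replace(" ", "") == record["hashtag"]

# The example instance from this card.
sample = {
    "index": 0,
    "hashtag": "tryingtosleep",
    "segmentation": "trying to sleep",
}
print(check_segmentation(sample))  # True
```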
### Citation Information
```
@article{celebi2018segmenting,
  title={Segmenting hashtags and analyzing their grammatical structure},
  author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
  journal={Journal of the Association for Information Science and Technology},
  volume={69},
  number={5},
  pages={675--686},
  year={2018},
  publisher={Wiley Online Library}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.