---

annotations_creators:
- expert-generated
language_creators:
- machine-generated
languages:
- en
licenses:
- unknown
multilinguality:
- monolingual
pretty_name: BOUN
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids:
- structure-prediction-other-word-segmentation
---


# Dataset Card for BOUN

## Dataset Description

- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting hashtags using automatically created training data](http://www.lrec-conf.org/proceedings/lrec2016/pdf/708_Paper.pdf)

### Dataset Summary

803K hashtags from the SNAP Twitter Data Set, automatically segmented using the heuristic described in the paper "Segmenting hashtags using automatically created training data".

### Languages

English

## Dataset Structure

### Data Instances

```
{
    "index": 0,
    "hashtag": "BrandThunder",
    "segmentation": "Brand Thunder"
}
```

### Data Fields

- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
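A minimal sketch of how the fields relate: removing the spaces from `segmentation` should recover `hashtag`. The helper function below is hypothetical (not part of the dataset or any library); the example instance is the one shown under "Data Instances".

```python
# Hypothetical consistency check: a segmentation is consistent with its
# hashtag if deleting its spaces reproduces the original hashtag string.
instance = {
    "index": 0,
    "hashtag": "BrandThunder",
    "segmentation": "Brand Thunder",
}

def is_consistent(example: dict) -> bool:
    """Return True if the segmentation, with spaces removed, equals the hashtag."""
    return example["segmentation"].replace(" ", "") == example["hashtag"]

print(is_consistent(instance))  # True
```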

### Citation Information

```
@inproceedings{celebi2016segmenting,
  title={Segmenting hashtags using automatically created training data},
  author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
  booktitle={Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)},
  pages={2981--2985},
  year={2016}
}
```

### Contributions

This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.