ruanchaves and julien-c committed
Commit 41a584b
1 Parent(s): d6013aa

Fix `license` metadata (#1)

- Fix `license` metadata (d86783c29c3fe8118a80c6b072785b283b72d332)

Co-authored-by: Julien Chaumond <julien-c@users.noreply.huggingface.co>

Files changed (1):
  1. README.md +87 -87
README.md CHANGED

```diff
@@ -5,5 +5,5 @@
 - machine-generated
-languages:
+language:
 - en
-licenses:
+license:
 - unknown
```

README.md after the change:
---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: BOUN
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids:
- structure-prediction-other-word-segmentation
---

# Dataset Card for BOUN

## Dataset Description

- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting Hashtags and Analyzing Their Grammatical Structure](https://asistdl.onlinelibrary.wiley.com/doi/epdf/10.1002/asi.23989?author_access_token=qbKcE1jrre5nbv_Tn9csbU4keas67K9QMdWULTWMo8NOtY2aA39ck2w5Sm4ePQ1MZhbjCdEuaRlPEw2Kd12jzvwhwoWP0fdroZAwWsmXHPXxryDk_oBCup1i9_VDNIpU)

### Dataset Summary

Dev-BOUN is a development set of 500 manually segmented hashtags, selected from tweets about movies, TV shows, popular people, sports teams, etc.

Test-BOUN is a test set of 500 manually segmented hashtags, selected from the same kinds of tweets.

### Languages

English

## Dataset Structure

### Data Instances

```
{
    "index": 0,
    "hashtag": "tryingtosleep",
    "segmentation": "trying to sleep"
}
```

### Data Fields

- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
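
Instances like the one above can be read with the `datasets` library. A minimal loading sketch, assuming the dataset is published on the Hugging Face Hub as `ruanchaves/boun` with `dev` and `test` splits for Dev-BOUN and Test-BOUN (the repo id and split names are assumptions, not confirmed by this card):

```python
# Minimal loading sketch. Assumed: Hub repo id "ruanchaves/boun"
# with "dev" and "test" splits for Dev-BOUN and Test-BOUN.
from datasets import load_dataset

boun = load_dataset("ruanchaves/boun")

example = boun["test"][0]
print(example["index"])         # a numerical index, e.g. 0
print(example["hashtag"])       # e.g. "tryingtosleep"
print(example["segmentation"])  # e.g. "trying to sleep"
```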

## Dataset Creation

- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.

- `hashtag` and `segmentation` (or `identifier` and `segmentation`) differ only in their whitespace characters; spell checking, expanding abbreviations, or correcting characters to uppercase go into other fields. A consistency check is sketched after this list.

- There is always whitespace between an alphanumeric character and a sequence of special characters (such as `_`, `:`, `~`).

- Any annotations for named entity recognition and other token classification tasks are given in a `spans` field.
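
Because the segmentation differs from the hashtag only in whitespace, stripping all whitespace from `segmentation` should recover `hashtag` exactly. A small sketch of that check (the field names come from this card; the helper itself is illustrative):

```python
def is_consistent(hashtag: str, segmentation: str) -> bool:
    """True when `segmentation` differs from `hashtag` only by whitespace."""
    return "".join(segmentation.split()) == hashtag

# The example instance from "Data Instances" satisfies the invariant.
assert is_consistent("tryingtosleep", "trying to sleep")
```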

## Additional Information

### Citation Information

```
@article{celebi2018segmenting,
  title={Segmenting hashtags and analyzing their grammatical structure},
  author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
  journal={Journal of the Association for Information Science and Technology},
  volume={69},
  number={5},
  pages={675--686},
  year={2018},
  publisher={Wiley Online Library}
}
```

### Contributions

This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.