Commit 5874c80 (0 parents) by system (HF staff):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
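These attributes route matching files through Git LFS. A rough self-check of which filenames the globs would catch can be sketched with Python's `fnmatch` (an approximation of gitattributes pattern semantics, not an exact reimplementation; the pattern subset below is illustrative):

```python
from fnmatch import fnmatch

# Illustrative subset of the LFS patterns declared in .gitattributes above.
LFS_PATTERNS = ["*.7z", "*.arrow", "*.bin", "*.parquet", "*.zip", "*tfevents*"]

def routed_through_lfs(filename):
    """Return True if any LFS glob matches the file name."""
    return any(fnmatch(filename, pattern) for pattern in LFS_PATTERNS)

print(routed_through_lfs("model.bin"))   # binary weights match *.bin
print(routed_through_lfs("README.md"))   # plain text stays out of LFS
```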
README.md ADDED
@@ -0,0 +1,161 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ - machine-generated
+ language_creators:
+ - found
+ - expert-generated
+ languages:
+ - th
+ licenses:
+ - cc-by-3-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1k<n<10k
+ source_datasets:
+ - extended|other-tirasaroj-aroonmanakun
+ task_categories:
+ - structure-prediction
+ task_ids:
+ - named-entity-recognition
+ - parsing
+ ---
+
+ # Dataset Card for `thainer`
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/wannaphong/thai-ner
+ - **Repository:** https://github.com/wannaphong/thai-ner
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:** https://github.com/wannaphong/
+
+ ### Dataset Summary
+
+ ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created by expanding the 2,258-sentence [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/). It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp). The NER tags were annotated by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/). The POS tags come from [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`. [@wannaphong](https://github.com/wannaphong/) is now the only maintainer of this dataset.
+
+ ### Supported Tasks and Leaderboards
+
+ - named entity recognition
+ - pos tagging
+
+ ### Languages
+
+ Thai
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {'id': 100, 'ner_tags': [27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27], 'pos_tags': [6, 12, 13, 1, 6, 5, 11, 7, 11, 6, 5, 13, 6, 6, 6, 11, 6, 6, 11, 6, 6, 11, 6, 6, 13, 6, 11, 11, 6, 11, 6, 11, 6, 11, 6, 11, 11, 6, 6, 11, 12, 6, 13, 5, 11, 7, 11, 6, 3, 11, 12, 3, 13, 6, 1, 6, 12, 13, 1, 6, 6, 5, 11, 3, 11, 5, 4, 6, 13, 6, 13, 6, 10, 3, 13, 13, 12, 13, 12, 0, 1, 10, 11, 6, 6, 11, 6, 11, 6, 12, 13, 5, 12, 3, 13, 13, 1, 6, 1, 6, 13], 'tokens': ['เชื้อโรค', 'ที่', 'ปรากฏ', 'ใน', 'สัตว์', 'ทั้ง', ' ', '4', ' ', 'ชนิด', 'นี้', 'เป็น', 'เชื้อ', 'โรคไข้หวัด', 'นก', ' ', 'เอช', 'พี', ' ', 'เอ', 'เวียน', ' ', 'อิน', 'ฟลู', 'เอน', 'ซา', ' ', '(', 'Hight', ' ', 'Polygenic', ' ', 'Avain', ' ', 'Influenza', ')', ' ', 'ชนิด', 'รุนแรง', ' ', 'ซึ่ง', 'การ', 'ตั้งชื่อ', 'ทั้ง', ' ', '4', ' ', 'ขึ้น', 'มา', ' ', 'เพื่อที่จะ', 'สามารถ', 'ระบุ', 'เชื้อ', 'ของ', 'ไวรัส', 'ที่', 'ทำอันตราย', 'ตาม', 'สิ่งมีชีวิต', 'ประเภท', 'ต่างๆ', ' ', 'ได้', ' ', 'อีก', 'ทั้ง', 'การ', 'ระบุ', 'สถานที่', 'คือ', 'ประเทศ', 'ไทย', 'จะ', 'ทำให้', 'รู้', 'ว่า', 'พบ', 'ที่', 'แรก', 'ใน', 'ไทย', ' ', 'ส่วน', 'วัน', ' ', 'เดือน', ' ', 'ปี', 'ที่', 'พบ', 'นั้น', 'ก็', 'จะ', 'ทำให้', 'ทราบ', 'ถึง', 'ครั้งแรก', 'ของ', 'การ', 'ค้นพบ']}
+ {'id': 107, 'ner_tags': [27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27], 'pos_tags': [0, 1, 6, 5, 11, 12, 3, 3, 13, 6, 13, 12, 0, 2, 12, 11, 6, 5, 13, 6, 5, 1, 6, 6, 1, 10, 11, 4, 13, 6, 11, 12, 6, 6, 10, 11, 13, 6, 1, 6, 4, 6, 1, 6, 6, 11, 4, 6, 1, 5, 6, 12, 2, 13, 6, 6, 5, 1, 11, 12, 13, 1, 6, 6, 11, 13, 11, 6, 6, 6, 11, 11, 6, 11, 11, 4, 10, 11, 11, 6, 11], 'tokens': ['ล่าสุด', 'ใน', 'เรื่อง', 'นี้', ' ', 'ทั้งนี้', 'คง', 'ต้อง', 'มี', 'การ', 'ตรวจสอบ', 'ให้', 'ชัดเจน', 'อีกครั้ง', 'ว่า', ' ', 'ไวรัส', 'นี้', 'เป็น', 'ชนิด', 'เดียว', 'กับ', 'ไข้หวัด', 'นก', 'ใน', 'ไทย', ' ', 'หรือ', 'เป็น', 'การกลายพันธุ์', ' ', 'โดยที่', 'คณะ', 'สัตวแพทย์', 'มหาวิทยาลัยเกษตรศาสตร์', ' ', 'จัด', 'ระดมสมอง', 'จาก', 'คณบดี', 'และ', 'ผู้เชี่ยวชาญ', 'จาก', 'คณะ', 'สัตวแพทย์', ' ', 'และ', 'ปศุสัตว์', 'ของ', 'หลาย', 'มหาวิทยาลัย', 'เพื่อ', 'ร่วมกัน', 'หา', 'ข้อมูล', 'เรื่อง', 'นี้', 'ด้วย', ' ', 'โดย', 'ประสาน', 'กับ', 'เจ้าหน้าที่', 'ระหว่างประเทศ', ' ', 'คือ', ' ', 'องค์การ', 'สุขภาพ', 'สัตว์โลก', ' ', '(', 'OIE', ')', ' ', 'และ', 'องค์การอนามัยโลก', ' ', '(', 'WHO', ')']}
+ ```
+
+ ### Data Fields
+
+ - `id`: sentence id
+ - `tokens`: word tokens from [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s dictionary-based tokenizer `newmm`
+ - `pos_tags`: POS tags from [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`
+ - `ner_tags`: NER tags annotated by humans
+
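The integer `ner_tags` decode against the 28-label inventory defined in `thainer.py` (the long runs of 27 in the instances above are all `O`). A minimal sketch of the id-to-label mapping, with the label list copied from the loading script:

```python
# Label inventory copied from the thainer.py loading script.
NER_TAGS = [
    "B-DATE", "B-EMAIL", "B-LAW", "B-LEN", "B-LOCATION", "B-MONEY",
    "B-ORGANIZATION", "B-PERCENT", "B-PERSON", "B-PHONE", "B-TIME",
    "B-URL", "B-ZIP", "B-ไม่ยืนยัน", "I-DATE", "I-EMAIL", "I-LAW",
    "I-LEN", "I-LOCATION", "I-MONEY", "I-ORGANIZATION", "I-PERCENT",
    "I-PERSON", "I-PHONE", "I-TIME", "I-URL", "I-ไม่ยืนยัน", "O",
]

def decode_ner(ids):
    """Map integer class ids back to their string labels."""
    return [NER_TAGS[i] for i in ids]

print(decode_ner([27, 27, 8]))  # ['O', 'O', 'B-PERSON']
```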
+
+ ### Data Splits
+
+ No explicit split is given; the data ships as a single train split.
+
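Since no official split exists, users who need one must create it themselves. A minimal sketch of a reproducible hypothetical 90/10 split over example ids (plain Python; the ratio and seed are arbitrary choices, not part of the dataset):

```python
import random

def make_split(n_examples, test_ratio=0.1, seed=42):
    """Shuffle example ids reproducibly and carve off a test portion."""
    ids = list(range(n_examples))
    random.Random(seed).shuffle(ids)
    n_test = int(n_examples * test_ratio)
    return ids[n_test:], ids[:n_test]

train_ids, test_ids = make_split(6456)
```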
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created by expanding the 2,258-sentence [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/). It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp).
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The earlier part of the dataset consists entirely of news articles, whereas the part added by [@wannaphong](https://github.com/wannaphong/) includes news articles, public announcements, and [@wannaphong](https://github.com/wannaphong/)'s own chat messages with personal and sensitive information removed.
+
+ #### Who are the source language producers?
+
+ News articles and public announcements are created by their respective authors. Chat messages are created by [@wannaphong](https://github.com/wannaphong/).
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for the earlier 2,258 sentences and [@wannaphong](https://github.com/wannaphong/) for the rest
+
+ ### Personal and Sensitive Information
+
+ News articles and public announcements are not expected to include personal and sensitive information. [@wannaphong](https://github.com/wannaphong/) has removed such information from his own chat messages.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ - named entity recognition in Thai
+
+ ### Discussion of Biases
+
+ Since almost all of the collection and annotation was done by [@wannaphong](https://github.com/wannaphong/), his biases are expected to be reflected in the dataset.
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for the earlier 2,258 sentences and [@wannaphong](https://github.com/wannaphong/) for the rest
+
+ ### Licensing Information
+
+ CC-BY 3.0
+
+ ### Citation Information
+
+ ```
+ @misc{Wannaphong Phatthiyaphaibun_2019,
+     title={wannaphongcom/thai-ner: ThaiNER 1.3},
+     url={https://zenodo.org/record/3550546},
+     DOI={10.5281/ZENODO.3550546},
+     abstractNote={Thai Named Entity Recognition},
+     publisher={Zenodo},
+     author={Wannaphong Phatthiyaphaibun},
+     year={2019},
+     month={Nov}
+ }
+ ```
+
+ Work extended from:
+ [Tirasaroj, N. and Aroonmanakun, W. 2012. Thai NER using CRF model based on surface features. In Proceedings of SNLP-AOS 2011, 9-10 February, 2012, Bangkok, pages 176-180.](http://pioneer.chula.ac.th/~awirote/publications/)
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"thainer": {"description": "ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence\n[unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by\n[Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/).\nIt is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp).\nThe NER tags are annotated by [Tirasaroj and Aroonmanakun (2012)]((http://pioneer.chula.ac.th/~awirote/publications/))\nfor 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/).\nThe POS tags are done by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`.\n[@wannaphong](https://github.com/wannaphong/) is now the only maintainer of this dataset.\n", "citation": "@misc{Wannaphong Phatthiyaphaibun_2019,\n title={wannaphongcom/thai-ner: ThaiNER 1.3},\n url={https://zenodo.org/record/3550546},\n DOI={10.5281/ZENODO.3550546},\n abstractNote={Thai Named Entity Recognition},\n publisher={Zenodo},\n author={Wannaphong Phatthiyaphaibun},\n year={2019},\n month={Nov}\n}\n", "homepage": "https://github.com/wannaphong/thai-ner/", "license": "CC-BY 3.0", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"num_classes": 14, "names": ["ADJ", "ADP", "ADV", "AUX", "CCONJ", "DET", "NOUN", "NUM", "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "VERB"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 28, "names": ["B-DATE", "B-EMAIL", "B-LAW", "B-LEN", "B-LOCATION", "B-MONEY", "B-ORGANIZATION", "B-PERCENT", "B-PERSON", "B-PHONE", "B-TIME", "B-URL", "B-ZIP", "B-\u0e44\u0e21\u0e48\u0e22\u0e37\u0e19\u0e22\u0e31\u0e19", "I-DATE", "I-EMAIL", "I-LAW", "I-LEN", "I-LOCATION", "I-MONEY", 
"I-ORGANIZATION", "I-PERCENT", "I-PERSON", "I-PHONE", "I-TIME", "I-URL", "I-\u0e44\u0e21\u0e48\u0e22\u0e37\u0e19\u0e22\u0e31\u0e19", "O"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "thainer", "config_name": "thainer", "version": {"version_str": "1.3.0", "description": null, "major": 1, "minor": 3, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 8117918, "num_examples": 6349, "dataset_name": "thainer"}}, "download_checksums": {"https://github.com/wannaphong/thai-ner/raw/master/model/1.3/data-pos.conll": {"num_bytes": 5456461, "checksum": "be9f897b409554f06501c3a26159115fce5140d5654c634276b1a40b8c4dbabd"}}, "download_size": 5456461, "post_processing_size": null, "dataset_size": 8117918, "size_in_bytes": 13574379}}
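`dataset_infos.json` is a machine-readable index keyed by config name; consumers can pull the class inventory out of the nested `features` block. A minimal sketch of that traversal (the JSON fragment is a hypothetical abbreviation of the real file, with three NER labels standing in for the actual 28):

```python
import json

# Hypothetical abbreviated fragment mirroring the shape of dataset_infos.json
# above (three NER labels stand in for the real 28).
RAW = """
{"thainer": {"features": {"ner_tags": {"feature": {"num_classes": 3,
 "names": ["B-DATE", "I-DATE", "O"], "_type": "ClassLabel"},
 "length": -1, "_type": "Sequence"}}}}
"""

info = json.loads(RAW)["thainer"]
ner_feature = info["features"]["ner_tags"]["feature"]
print(ner_feature["num_classes"], ner_feature["names"])
```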
dummy/thainer/1.3.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da0ca6a2ed4db856ce0b018a96ea09432b93913f5a562642dee9616d7043f452
+ size 1731
thainer.py ADDED
@@ -0,0 +1,164 @@
+ from __future__ import absolute_import, division, print_function
+
+ import datasets
+
+
+ _CITATION = """\
+ @misc{Wannaphong Phatthiyaphaibun_2019,
+     title={wannaphongcom/thai-ner: ThaiNER 1.3},
+     url={https://zenodo.org/record/3550546},
+     DOI={10.5281/ZENODO.3550546},
+     abstractNote={Thai Named Entity Recognition},
+     publisher={Zenodo},
+     author={Wannaphong Phatthiyaphaibun},
+     year={2019},
+     month={Nov}
+ }
+ """
+
+ _LICENSE = "CC-BY 3.0"
+
+ _DESCRIPTION = """\
+ ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence
+ [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by
+ [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/).
+ It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp).
+ The NER tags are annotated by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/)
+ for 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/).
+ The POS tags are done by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`.
+ [@wannaphong](https://github.com/wannaphong/) is now the only maintainer of this dataset.
+ """
+
+
+ class ThaiNerConfig(datasets.BuilderConfig):
+     """BuilderConfig for ThaiNer."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for ThaiNer.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(ThaiNerConfig, self).__init__(**kwargs)
+
+
+ class Thainer(datasets.GeneratorBasedBuilder):
+
+     _DOWNLOAD_URL = "https://github.com/wannaphong/thai-ner/raw/master/model/1.3/data-pos.conll"
+     _SENTENCE_SPLITTERS = ["", " ", "\n"]
+     _POS_TAGS = [
+         "ADJ",
+         "ADP",
+         "ADV",
+         "AUX",
+         "CCONJ",
+         "DET",
+         "NOUN",
+         "NUM",
+         "PART",
+         "PRON",
+         "PROPN",
+         "PUNCT",
+         "SCONJ",
+         "VERB",
+     ]
+     _NER_TAGS = [
+         "B-DATE",
+         "B-EMAIL",
+         "B-LAW",
+         "B-LEN",
+         "B-LOCATION",
+         "B-MONEY",
+         "B-ORGANIZATION",
+         "B-PERCENT",
+         "B-PERSON",
+         "B-PHONE",
+         "B-TIME",
+         "B-URL",
+         "B-ZIP",
+         "B-ไม่ยืนยัน",
+         "I-DATE",
+         "I-EMAIL",
+         "I-LAW",
+         "I-LEN",
+         "I-LOCATION",
+         "I-MONEY",
+         "I-ORGANIZATION",
+         "I-PERCENT",
+         "I-PERSON",
+         "I-PHONE",
+         "I-TIME",
+         "I-URL",
+         "I-ไม่ยืนยัน",
+         "O",
+     ]
+
+     BUILDER_CONFIGS = [
+         ThaiNerConfig(
+             name="thainer",
+             version=datasets.Version("1.3.0"),
+             description="Thai Named Entity Recognition for PyThaiNLP (6,456 sentences)",
+         ),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("int32"),
+                     "tokens": datasets.Sequence(datasets.Value("string")),
+                     "pos_tags": datasets.Sequence(datasets.features.ClassLabel(names=self._POS_TAGS)),
+                     "ner_tags": datasets.Sequence(datasets.features.ClassLabel(names=self._NER_TAGS)),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://github.com/wannaphong/thai-ner/",
+             citation=_CITATION,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_path = dl_manager.download_and_extract(self._DOWNLOAD_URL)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": data_path},
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         with open(filepath, encoding="utf-8") as f:
+             guid = 0
+             tokens = []
+             pos_tags = []
+             ner_tags = []
+
+             for line in f:
+                 if line in self._SENTENCE_SPLITTERS:
+                     if tokens:
+                         yield guid, {
+                             "id": str(guid),
+                             "tokens": tokens,
+                             "pos_tags": pos_tags,
+                             "ner_tags": ner_tags,
+                         }
+                         guid += 1
+                         tokens = []
+                         pos_tags = []
+                         ner_tags = []
+                 else:
+                     # thainer tokens are tab separated
+                     splits = line.split("\t")
+                     # replace junk ner tags
+                     ner_tag = splits[2] if splits[2] in self._NER_TAGS else "O"
+                     tokens.append(splits[0])
+                     pos_tags.append(splits[1])
+                     ner_tags.append(ner_tag.rstrip())
+             # last example
+             yield guid, {
+                 "id": str(guid),
+                 "tokens": tokens,
+                 "pos_tags": pos_tags,
+                 "ner_tags": ner_tags,
+             }
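The generator above splits sentences on blank or space lines and reads tab-separated token, POS, and NER columns. The same loop can be exercised standalone on a tiny synthetic sample (hypothetical tokens in the same format as `data-pos.conll`):

```python
import io

SENTENCE_SPLITTERS = ["", " ", "\n"]

def parse_conll(f):
    """Group tab-separated (token, pos, ner) lines into sentences."""
    sentences, tokens = [], []
    for line in f:
        if line in SENTENCE_SPLITTERS:
            if tokens:  # blank line ends the current sentence
                sentences.append(tokens)
                tokens = []
        else:
            cols = line.rstrip("\n").split("\t")
            tokens.append((cols[0], cols[1], cols[2]))
    if tokens:  # flush the last sentence
        sentences.append(tokens)
    return sentences

# Two hypothetical sentences separated by a blank line.
sample = "เชียงใหม่\tPROPN\tB-LOCATION\nสวย\tADJ\tO\n\nไทย\tPROPN\tB-LOCATION\n"
parsed = parse_conll(io.StringIO(sample))
print(len(parsed))  # 2
```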