system HF staff committed on
Commit
2429dd8
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +210 -0
  3. dataset_infos.json +1 -0
  4. dummy/lst20/1.0.0/dummy_data.zip +3 -0
  5. lst20.py +198 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,210 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - th
+ licenses:
+ - other-aiforthai
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100k<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - structure-prediction
+ task_ids:
+ - named-entity-recognition
+ - parsing
+ - structure-prediction-other-clause-segmentation
+ - structure-prediction-other-sentence-segmentation
+ - structure-prediction-other-word-segmentation
+ ---
+
+ # Dataset Card for LST20
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://aiforthai.in.th/
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:** thepchai@nectec.or.th
+
+ ### Dataset Summary
+
+ LST20 Corpus is a dataset for Thai language processing developed by the National Electronics and Computer Technology Center (NECTEC), Thailand.
+ It offers five layers of linguistic annotation: word boundaries, POS tags, named entities, clause boundaries, and sentence boundaries.
+ In total, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, annotated with 16 distinct POS tags.
+ All 3,745 documents are also annotated with one of 15 news genres. Given its size, this dataset is considered large enough for developing joint neural models for NLP.
+ The corpus must be downloaded manually from https://aiforthai.in.th/corpus.php.
+ See `LST20 Annotation Guideline.pdf` and `LST20 Brief Specification.pdf` within the downloaded `AIFORTHAI-LST20Corpus.tar.gz` for more details.
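+
+ A minimal loading sketch, using the command documented in the loading script; the extraction path below is only an example, so point `data_dir` at wherever you extracted the corpus:
+
+ ```python
+ import datasets
+
+ # assumes `AIFORTHAI-LST20Corpus.tar.gz` has been downloaded from
+ # https://aiforthai.in.th/corpus.php and extracted to a `LST20Corpus` folder
+ lst20 = datasets.load_dataset("lst20", data_dir="~/Downloads/LST20Corpus")
+ print(lst20)  # DatasetDict with train/validation/test splits
+ ```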
+
+ ### Supported Tasks and Leaderboards
+
+ - POS tagging
+ - NER tagging
+ - clause segmentation
+ - sentence segmentation
+ - word tokenization
+
+ ### Languages
+
+ Thai
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {'clause_tags': [1, 2, 2, 2, 2, 2, 2, 2, 3], 'fname': 'T11964.txt', 'id': '0', 'ner_tags': [8, 0, 0, 0, 0, 0, 0, 0, 25], 'pos_tags': [0, 0, 0, 1, 0, 8, 8, 8, 0], 'tokens': ['ธรรมนูญ', 'แชมป์', 'สิงห์คลาสสิก', 'กวาด', 'รางวัล', 'แสน', 'สี่', 'หมื่น', 'บาท']}
+ {'clause_tags': [1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3], 'fname': 'T11964.txt', 'id': '1', 'ner_tags': [8, 18, 28, 0, 0, 0, 0, 6, 0, 0, 0, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 15, 25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0, 6], 'pos_tags': [0, 2, 0, 2, 1, 1, 2, 8, 2, 10, 2, 8, 2, 1, 0, 1, 0, 4, 7, 1, 0, 2, 8, 2, 10, 1, 10, 4, 2, 8, 2, 4, 0, 4, 0, 2, 8, 2, 10, 2, 8], 'tokens': ['ธรรมนูญ', '_', 'ศรีโรจน์', '_', 'เก็บ', 'เพิ่ม', '_', '4', '_', 'อันเดอร์พาร์', '_', '68', '_', 'เข้า', 'ป้าย', 'รับ', 'แชมป์', 'ใน', 'การ', 'เล่น', 'อาชีพ', '_', '19', '_', 'ปี', 'เป็น', 'ครั้ง', 'ที่', '_', '8', '_', 'ใน', 'ชีวิต', 'ด้วย', 'สกอร์', '_', '18', '_', 'อันเดอร์พาร์', '_', '270']}
+ ```
+
+ ### Data Fields
+
+ - `id`: nth sentence in each split, starting at 0
+ - `fname`: name of the text file the sentence comes from
+ - `tokens`: word tokens
+ - `pos_tags`: POS tags
+ - `ner_tags`: NER tags
+ - `clause_tags`: clause tags
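+
+ The three tag columns are stored as integer `ClassLabel` sequences (hence the numeric values in the instances above). A minimal sketch, assuming the corpus has been downloaded and loaded as in the earlier example, of decoding them back to their string names:
+
+ ```python
+ import datasets
+
+ lst20 = datasets.load_dataset("lst20", data_dir="~/Downloads/LST20Corpus")
+ example = lst20["train"][0]
+
+ # ClassLabel.int2str maps stored integer ids back to tag names, e.g. ner id 8 -> "B_PER"
+ pos_names = lst20["train"].features["pos_tags"].feature.int2str(example["pos_tags"])
+ ner_names = lst20["train"].features["ner_tags"].feature.int2str(example["ner_tags"])
+ print(list(zip(example["tokens"], pos_names, ner_names)))
+ ```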
+
+ ### Data Splits
+
+ |                       | train     | eval        | test        | all       |
+ |-----------------------|-----------|-------------|-------------|-----------|
+ | words                 | 2,714,848 | 240,891     | 207,295     | 3,163,034 |
+ | named entities        | 246,529   | 23,176      | 18,315      | 288,020   |
+ | clauses               | 214,645   | 17,486      | 16,050      | 248,181   |
+ | sentences             | 63,310    | 5,620       | 5,250       | 74,180    |
+ | distinct words        | 42,091    | (oov) 2,595 | (oov) 2,006 | 46,692    |
+ | breaking spaces※      | 63,310    | 5,620       | 5,250       | 74,180    |
+ | non-breaking spaces※※ | 402,380   | 39,920      | 32,204      | 475,504   |
+
+ ※ Breaking space = space that is used as a sentence boundary marker
+ ※※ Non-breaking space = space that is not used as a sentence boundary marker
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ Respective authors of the news articles
+
+ ### Annotations
+
+ #### Annotation process
+
+ A detailed annotation guideline can be found in `LST20 Annotation Guideline.pdf`.
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ All texts are from public news. No personal or sensitive information is expected to be included.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ - Large-scale Thai NER & POS tagging, clause & sentence segmentation, word tokenization
+
+ ### Discussion of Biases
+
+ - All 3,745 texts are from the news domain:
+   - politics: 841
+   - crime and accident: 592
+   - economics: 512
+   - entertainment: 472
+   - sports: 402
+   - international: 279
+   - science, technology and education: 216
+   - health: 92
+   - general: 75
+   - royal: 54
+   - disaster: 52
+   - development: 45
+   - environment: 40
+   - culture: 40
+   - weather forecast: 33
+ - Word tokenization is done according to the InterBEST 2009 guideline.
+
+ ### Other Known Limitations
+
+ - Some NER tags do not correspond to the given labels (`B`, `I`, and so on); see the note below
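+
+ The loading script in this commit (`lst20.py`) already guards against this by mapping any NER tag outside its label set to `O`:
+
+ ```python
+ # excerpt from lst20.py: fall back to "O" for out-of-scheme NER tags
+ ner_tag = splits[2] if splits[2] in self._NER_TAGS else "O"
+ ```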
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [NECTEC](https://www.nectec.or.th/en/)
+
+ ### Licensing Information
+
+ 1. Non-commercial use, research, and open source
+
+ Any non-commercial use of the dataset for research and open-sourced projects is encouraged and free of charge. Please cite our technical report for reference.
+
+ If you want to perpetuate your models trained on our dataset and share them with the research community in Thailand, please send your models, code, and APIs to the AI for Thai Project. Please contact Dr. Thepchai Supnithi via thepchai@nectec.or.th for more information.
+
+ Note that modification and redistribution of the dataset by any means are strictly prohibited unless authorized by the corpus authors.
+
+ 2. Commercial use
+
+ In any commercial use of the dataset, there are two options.
+
+ - Option 1 (in kind): Contributing a dataset of 50,000 words completely annotated with our annotation scheme within 1 year. Your data will also be shared, and you will be recognized as a dataset co-creator in the research community in Thailand.
+
+ - Option 2 (in cash): Purchase of a lifetime license for the entire dataset is required. The purchased rights of use cover only this dataset.
+
+ In both options, please contact Dr. Thepchai Supnithi via thepchai@nectec.or.th for more information.
+
+ ### Citation Information
+
+ ```
+ @article{boonkwan2020annotation,
+   title={The Annotation Guideline of LST20 Corpus},
+   author={Boonkwan, Prachya and Luantangsrisuk, Vorapon and Phaholphinyo, Sitthaa and Kriengket, Kanyanat and Leenoi, Dhanon and Phrombut, Charun and Boriboon, Monthika and Kosawat, Krit and Supnithi, Thepchai},
+   journal={arXiv preprint arXiv:2008.05055},
+   year={2020}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"lst20": {"description": "LST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand.\nIt offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries.\nAt a large scale, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, while it is annotated with\n16 distinct POS tags. All 3,745 documents are also annotated with one of 15 news genres. Regarding its sheer size, this dataset is\nconsidered large enough for developing joint neural models for NLP.\nManually download at https://aiforthai.in.th/corpus.php\n", "citation": "@article{boonkwan2020annotation,\n title={The Annotation Guideline of LST20 Corpus},\n author={Boonkwan, Prachya and Luantangsrisuk, Vorapon and Phaholphinyo, Sitthaa and Kriengket, Kanyanat and Leenoi, Dhanon and Phrombut, Charun and Boriboon, Monthika and Kosawat, Krit and Supnithi, Thepchai},\n journal={arXiv preprint arXiv:2008.05055},\n year={2020}\n}\n", "homepage": "https://aiforthai.in.th/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "fname": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"num_classes": 16, "names": ["NN", "VV", "PU", "CC", "PS", "AX", "AV", "FX", "NU", "AJ", "CL", "PR", "NG", "PA", "XX", "IJ"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 31, "names": ["O", "B_BRN", "B_DES", "B_DTM", "B_LOC", "B_MEA", "B_NUM", "B_ORG", "B_PER", "B_TRM", "B_TTL", "I_BRN", "I_DES", "I_DTM", "I_LOC", "I_MEA", "I_NUM", "I_ORG", "I_PER", "I_TRM", "I_TTL", "E_BRN", "E_DES", "E_DTM", "E_LOC", "E_MEA", "E_NUM", "E_ORG", "E_PER", "E_TRM", "E_TTL"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "clause_tags": {"feature": {"num_classes": 4, "names": ["O", "B_CLS", "I_CLS", "E_CLS"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "lst20", "config_name": "lst20", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 107860249, "num_examples": 67104, "dataset_name": "lst20"}, "validation": {"name": "validation", "num_bytes": 9662939, "num_examples": 6094, "dataset_name": "lst20"}, "test": {"name": "test", "num_bytes": 8234542, "num_examples": 5733, "dataset_name": "lst20"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 125757730, "size_in_bytes": 125757730}}
dummy/lst20/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a87464f63a619bc0ab9a5ae1106d4d56e47a59f9d4ea7d3063ea0a57ae76022
+ size 10978
lst20.py ADDED
@@ -0,0 +1,198 @@
+ from __future__ import absolute_import, division, print_function
+
+ import glob
+ import os
+ from pathlib import Path
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{boonkwan2020annotation,
+   title={The Annotation Guideline of LST20 Corpus},
+   author={Boonkwan, Prachya and Luantangsrisuk, Vorapon and Phaholphinyo, Sitthaa and Kriengket, Kanyanat and Leenoi, Dhanon and Phrombut, Charun and Boriboon, Monthika and Kosawat, Krit and Supnithi, Thepchai},
+   journal={arXiv preprint arXiv:2008.05055},
+   year={2020}
+ }
+ """
+
+ _DESCRIPTION = """\
+ LST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand.
+ It offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries.
+ At a large scale, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, while it is annotated with
+ 16 distinct POS tags. All 3,745 documents are also annotated with one of 15 news genres. Regarding its sheer size, this dataset is
+ considered large enough for developing joint neural models for NLP.
+ Manually download at https://aiforthai.in.th/corpus.php
+ """
+
+
+ class Lst20Config(datasets.BuilderConfig):
+     """BuilderConfig for Lst20"""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for Lst20.
+
+         Args:
+           **kwargs: keyword arguments forwarded to super.
+         """
+         super(Lst20Config, self).__init__(**kwargs)
+
+
+ class Lst20(datasets.GeneratorBasedBuilder):
+     """Lst20 dataset."""
+
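+     # descriptive note: a sentence ends at a blank or whitespace-only line in the corpus
+     # files (see the check against _SENTENCE_SPLITTERS in _generate_examples below)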
+     _SENTENCE_SPLITTERS = ["", " ", "\n"]
+     _TRAINING_FOLDER = "train"
+     _VALID_FOLDER = "eval"
+     _TEST_FOLDER = "test"
+     _POS_TAGS = ["NN", "VV", "PU", "CC", "PS", "AX", "AV", "FX", "NU", "AJ", "CL", "PR", "NG", "PA", "XX", "IJ"]
+     _NER_TAGS = [
+         "O",
+         "B_BRN",
+         "B_DES",
+         "B_DTM",
+         "B_LOC",
+         "B_MEA",
+         "B_NUM",
+         "B_ORG",
+         "B_PER",
+         "B_TRM",
+         "B_TTL",
+         "I_BRN",
+         "I_DES",
+         "I_DTM",
+         "I_LOC",
+         "I_MEA",
+         "I_NUM",
+         "I_ORG",
+         "I_PER",
+         "I_TRM",
+         "I_TTL",
+         "E_BRN",
+         "E_DES",
+         "E_DTM",
+         "E_LOC",
+         "E_MEA",
+         "E_NUM",
+         "E_ORG",
+         "E_PER",
+         "E_TRM",
+         "E_TTL",
+     ]
+     _CLAUSE_TAGS = ["O", "B_CLS", "I_CLS", "E_CLS"]
+
+     BUILDER_CONFIGS = [
+         Lst20Config(name="lst20", version=datasets.Version("1.0.0"), description="LST20 dataset"),
+     ]
+
+     @property
+     def manual_download_instructions(self):
+         return """\
+ You need to
+ 1. Manually download `AIFORTHAI-LST20Corpus.tar.gz` from https://aiforthai.in.th/corpus.php (login required; website mostly in Thai)
+ 2. Extract the .tar.gz; this will result in folder `LST20Corpus`
+ The <path/to/folder> can e.g. be `~/Downloads/LST20Corpus`.
+ lst20 can then be loaded using the following command `datasets.load_dataset("lst20", data_dir="<path/to/folder>")`.
+ """
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "fname": datasets.Value("string"),
+                     "tokens": datasets.Sequence(datasets.Value("string")),
+                     "pos_tags": datasets.Sequence(datasets.features.ClassLabel(names=self._POS_TAGS)),
+                     "ner_tags": datasets.Sequence(datasets.features.ClassLabel(names=self._NER_TAGS)),
+                     "clause_tags": datasets.Sequence(datasets.features.ClassLabel(names=self._CLAUSE_TAGS)),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://aiforthai.in.th/",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+
+         data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
+
+         # check if manual folder exists
+         if not os.path.exists(data_dir):
+             raise FileNotFoundError(
+                 f"{data_dir} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('lst20', data_dir=...)`. Manual download instructions: {self.manual_download_instructions})"
+             )
+
+         # check number of .txt files
+         nb_train = len(glob.glob(os.path.join(data_dir, "train", "*.txt")))
+         nb_valid = len(glob.glob(os.path.join(data_dir, "eval", "*.txt")))
+         nb_test = len(glob.glob(os.path.join(data_dir, "test", "*.txt")))
+         assert (
+             nb_train > 0
+         ), f"No files found in train/*.txt.\nManual download instructions:{self.manual_download_instructions})"
+         assert (
+             nb_valid > 0
+         ), f"No files found in eval/*.txt.\nManual download instructions:{self.manual_download_instructions})"
+         assert (
+             nb_test > 0
+         ), f"No files found in test/*.txt.\nManual download instructions:{self.manual_download_instructions})"
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": os.path.join(data_dir, self._TRAINING_FOLDER)},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"filepath": os.path.join(data_dir, self._VALID_FOLDER)},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": os.path.join(data_dir, self._TEST_FOLDER)},
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         for fname in sorted(glob.glob(os.path.join(filepath, "*.txt"))):
+             with open(fname, encoding="utf-8") as f:
+                 guid = 0
+                 tokens = []
+                 pos_tags = []
+                 ner_tags = []
+                 clause_tags = []
+
+                 for line in f:
+                     if line in self._SENTENCE_SPLITTERS:
+                         if tokens:
+                             yield guid, {
+                                 "id": str(guid),
+                                 "fname": Path(fname).name,
+                                 "tokens": tokens,
+                                 "pos_tags": pos_tags,
+                                 "ner_tags": ner_tags,
+                                 "clause_tags": clause_tags,
+                             }
+                             guid += 1
+                             tokens = []
+                             pos_tags = []
+                             ner_tags = []
+                             clause_tags = []
+                     else:
+                         # LST20 tokens are tab separated
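+                         # assumed column layout, matching the indexing below:
+                         #   <token>\t<POS tag>\t<NER tag>\t<clause tag>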
+                         splits = line.split("\t")
+                         # replace junk ner tags
+                         ner_tag = splits[2] if splits[2] in self._NER_TAGS else "O"
+                         tokens.append(splits[0])
+                         pos_tags.append(splits[1])
+                         ner_tags.append(ner_tag)
+                         clause_tags.append(splits[3].rstrip())
+                 # last example
+                 yield guid, {
+                     "id": str(guid),
+                     "fname": Path(fname).name,
+                     "tokens": tokens,
+                     "pos_tags": pos_tags,
+                     "ner_tags": ner_tags,
+                     "clause_tags": clause_tags,
+                 }