parquet-converter committed
Commit ae39cef
1 Parent(s): 7513b19

Update parquet files

Files changed (5):
  1. .gitattributes +0 -51
  2. README.md +0 -109
  3. dataset_infos.json +0 -1
  4. default/gkomet-train.parquet +3 -0
  5. gkomet.py +0 -288
.gitattributes DELETED
@@ -1,51 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,109 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - found
- language:
- - sl
- license:
- - cc-by-nc-sa-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 1K<n<10K
- source_datasets: []
- task_categories:
- - token-classification
- task_ids: []
- pretty_name: G-KOMET
- tags:
- - metaphor-classification
- - metonymy-classification
- - metaphor-frame-classification
- - multiword-expression-detection
- ---
-
- # Dataset Card for G-KOMET
-
- ### Dataset Summary
-
- G-KOMET 1.0 is a corpus of metaphorical expressions in spoken Slovene, covering around 50,000 lexical units across 5695 sentences. The corpus contains samples from the Gos corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse.
-
- It is also annotated with idioms and metonymies. Note that both are annotated as metaphor types. This differs from the annotations in [KOMET](https://huggingface.co/datasets/cjvt/komet), where both are considered a type of frame. We keep the data as untouched as possible and let the user decide how to handle this.
-
- ### Supported Tasks and Leaderboards
-
- Metaphor detection, metonymy detection, metaphor type classification, metaphor frame classification.
-
- ### Languages
-
- Slovenian.
-
- ## Dataset Structure
-
- ### Data Instances
-
- A sample instance from the dataset:
- ```
- {
-     'document_name': 'G-Komet001.xml',
-     'idx': 3,
-     'idx_paragraph': 0,
-     'idx_sentence': 3,
-     'sentence_words': ['no', 'zdaj', 'samo', 'še', 'za', 'eno', 'orientacijo'],
-     'met_type': [
-         {'type': 'MRWi', 'word_indices': [6]}
-     ],
-     'met_frame': [
-         {'type': 'spatial_orientation', 'word_indices': [6]}
-     ]
- }
- ```
-
- The sentence comes from the document `G-Komet001.xml`: it is the sentence at index 3 within the document and at index 3 within paragraph 0 of that document.
- The word "orientacijo" is annotated as an indirect metaphor-related word (`MRWi`).
- It is also annotated with the frame "spatial_orientation".
-
- ### Data Fields
-
- `document_name`: a string containing the name of the document in which the sentence appears;
- `idx`: a uint32 containing the index of the sentence inside its document;
- `idx_paragraph`: a uint32 containing the index of the paragraph in which the sentence appears;
- `idx_sentence`: a uint32 containing the index of the sentence inside its paragraph;
- `sentence_words`: words in the sentence;
- `met_type`: metaphors in the sentence, marked by their type and word indices;
- `met_frame`: metaphor frames in the sentence, marked by their type (frame name) and word indices.
-
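For illustration, the sketch below shows how these fields can be consumed with the `datasets` library. It assumes the corpus is available on the Hugging Face Hub under the id `cjvt/gkomet` (an assumption based on the sibling KOMET card linked above; substitute the actual repository id) and maps annotated word indices back to surface words.

```python
from datasets import load_dataset

# Assumption: the dataset is hosted as `cjvt/gkomet`; adjust the id if it differs.
dataset = load_dataset("cjvt/gkomet", split="train")

example = dataset[3]
words = example["sentence_words"]

# Each `met_type` entry holds a metaphor type and the indices of the words it spans.
for annotation in example["met_type"]:
    span = [words[i] for i in annotation["word_indices"]]
    print(annotation["type"], "->", " ".join(span))

# `met_frame` has the same layout, with `type` holding the frame name instead.
for annotation in example["met_frame"]:
    span = [words[i] for i in annotation["word_indices"]]
    print(annotation["type"], "->", " ".join(span))
```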
- ## Dataset Creation
-
- The corpus contains samples from the GOS corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse. It contains hand-annotated metaphor-related words, i.e. linguistic expressions that have the potential to be interpreted as metaphors; idioms, i.e. multi-word units in which at least one word is used metaphorically; and metonymies, i.e. expressions in which something is referred to through a closely associated concept.
-
- For more information, please see the paper (written in Slovenian) or contact the dataset author.
-
- ## Additional Information
-
- ### Dataset Curators
-
- Špela Antloga.
-
- ### Licensing Information
-
- CC BY-NC-SA 4.0
-
- ### Citation Information
-
- ```
- @InProceedings{antloga2022gkomet,
- title = {Korpusni pristopi za identifikacijo metafore in metonimije: primer metonimije v korpusu gKOMET},
- author={Antloga, \v{S}pela},
- booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities (Student papers)},
- year={2022},
- pages={271-277}
- }
- ```
-
- ### Contributions
-
- Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "G-KOMET 1.0 (a corpus of metaphorical expressions in spoken Slovene language) is a corpus of speech transcriptions and \nconversations that covers 50,000 lexical units. The corpus contains samples from the Gos corpus of spoken Slovene \nand includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse.\n\nThe annotation scheme was based on the MIPVU metaphor identification process. \nThis protocol was modified and adapted to the specifics of the Slovene language and the specifics of the spoken \nlanguage. Corpus was annotated for the following relations to metaphor: indirect metaphor, direct metaphor, borderline \ncases and metaphor signals. In addition, the corpus introduces a new \u2018frame\u2019 tag, which gives information about the \nconcept to which it refers. \n", "citation": "@InProceedings{antloga2022gkomet,\ntitle = {Korpusni pristopi za identifikacijo metafore in metonimije: primer metonimije v korpusu gKOMET},\nauthor={Antloga, \u000b{S}pela},\nbooktitle={Proceedings of the Conference on Language Technologies and Digital Humanities (Student papers)},\nyear={2022},\npages={271-277}\n}\n", "homepage": "http://hdl.handle.net/11356/1490", "license": "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)", "features": {"document_name": {"dtype": "string", "id": null, "_type": "Value"}, "idx": {"dtype": "uint32", "id": null, "_type": "Value"}, "idx_paragraph": {"dtype": "uint32", "id": null, "_type": "Value"}, "idx_sentence": {"dtype": "uint32", "id": null, "_type": "Value"}, "sentence_words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "met_type": [{"type": {"dtype": "string", "id": null, "_type": "Value"}, "word_indices": {"feature": {"dtype": "uint32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}], "met_frame": [{"type": {"dtype": "string", "id": null, "_type": "Value"}, "word_indices": {"feature": {"dtype": "uint32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "gkomet", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 736441, "num_examples": 5695, "dataset_name": "gkomet"}}, "download_checksums": {"https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1490/G-Komet.zip": {"num_bytes": 1005891, "checksum": "29fffdaf085b889926eedcd9673f2129513b4578f41ba9bf7c632fe50cd45a8f"}}, "download_size": 1005891, "post_processing_size": null, "dataset_size": 736441, "size_in_bytes": 1742332}}
 
 
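The deleted `dataset_infos.json` above encodes, among other things, the serialized feature schema of the dataset. As a hedged sketch (assuming a local copy of that file), the schema block can be turned back into a `datasets.Features` object:

```python
import json

from datasets import Features

# Assumption: a local copy of the legacy metadata file lies at this relative path.
with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

# The "features" block of the "default" config follows the serialized Features layout.
features = Features.from_dict(infos["default"]["features"])
print(features)
```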
default/gkomet-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:56e18e9d8efaee60d7903ea136568fea06300475eae6237d3148c617b488a686
+ size 234721
gkomet.py DELETED
@@ -1,288 +0,0 @@
- """Metaphor corpus G-KOMET 1.0"""
- import logging
- import os
- import re
- import xml.etree.ElementTree as ET
- from typing import List, Tuple
-
- import datasets
-
- _CITATION = """\
- @InProceedings{antloga2022gkomet,
- title = {Korpusni pristopi za identifikacijo metafore in metonimije: primer metonimije v korpusu gKOMET},
- author={Antloga, \\v{S}pela},
- booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities (Student papers)},
- year={2022},
- pages={271-277}
- }
- """
-
-
- _DESCRIPTION = """\
- G-KOMET 1.0 (a corpus of metaphorical expressions in spoken Slovene language) is a corpus of speech transcriptions and
- conversations that covers 50,000 lexical units. The corpus contains samples from the Gos corpus of spoken Slovene
- and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse.
-
- The annotation scheme was based on the MIPVU metaphor identification process.
- This protocol was modified and adapted to the specifics of the Slovene language and the specifics of the spoken
- language. Corpus was annotated for the following relations to metaphor: indirect metaphor, direct metaphor, borderline
- cases and metaphor signals. In addition, the corpus introduces a new ‘frame’ tag, which gives information about the
- concept to which it refers.
- """
-
- _HOMEPAGE = "http://hdl.handle.net/11356/1490"
-
- _LICENSE = "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)"
-
- _URLS = {
-     "gkomet": "https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1490/G-Komet.zip"
- }
-
-
- XML_NAMESPACE = "{http://www.w3.org/XML/1998/namespace}"
- EL_LEAF, EL_TYPE, EL_FRAME = range(3)
-
-
- def namespace(element):
-     # https://stackoverflow.com/a/12946675
-     m = re.match(r'\{.*\}', element.tag)
-     return m.group(0) if m else ''
-
-
- def word_info(sent_el):
-     def _resolve_recursively(element) -> List:
-         """ Knowingly ignored tags: name (anonymized, without IDs), gap, vocal, pause, del,
-         linkGrp (handled separately in linkgroup_info()) """
-         # Leaf node: word or punctuation character
-         if element.tag.endswith(("w", "pc")):
-             id_curr = element.attrib[f"{XML_NAMESPACE}id"]
-             return [(id_curr, element.text)]
-
-         # Annotated word or word group - not interested in the annotations in this function
-         elif element.tag.endswith("seg"):
-             parsed_data = []
-             for child in element:
-                 if child.tag.endswith(("c", "vocal", "pause")) and not child.tag.endswith("pc"):  # empty space betw. words or "special" word
-                     continue
-
-                 res = _resolve_recursively(child)
-                 if isinstance(res, list):
-                     parsed_data.extend(res)
-                 else:
-                     parsed_data.append(res)
-
-             return parsed_data
-
-     id_words, words = [], []
-     for child_el in sent_el:
-         curr_annotations = _resolve_recursively(child_el)
-         if curr_annotations is not None:  # None = unrecognized ("unimportant") element
-             for ann in curr_annotations:
-                 id_words.append(ann[0])
-                 words.append(ann[1])
-
-     return id_words, words
-
-
- def seg_info(sent_el):
-     def _resolve_recursively(element) -> Tuple:
-         """ Returns (type[, subtype], deeper_elements, latest_element)"""
-         # Leaf node: word or punctuation character
-         if element.tag.endswith(("w", "pc")):
-             id_curr = element.attrib[f"{XML_NAMESPACE}id"]
-             return EL_LEAF, [], [id_curr]
-
-         # Annotated word or word group
-         elif element.tag.endswith("seg"):
-             subtype = element.attrib["subtype"]
-             if element.attrib["type"] == "frame":
-                 ann_type = EL_FRAME
-             elif element.attrib["type"] == "metaphor":
-                 ann_type = EL_TYPE
-             elif element.attrib["type"] == "idiom":
-                 ann_type = EL_TYPE
-             else:
-                 raise ValueError(f"Unrecognized seg type: {element.attrib['type']}")
-
-             deeper_elements = []
-             latest_element = []
-             for child in element:
-                 if child.tag.endswith(("c", "vocal", "pause")) and not child.tag.endswith("pc"):  # empty space betw. words or "special" word
-                     continue
-
-                 res = _resolve_recursively(child)
-                 if res[0] == EL_LEAF:
-                     latest_element.extend(res[2])
-                 else:
-                     deeper_elements.extend(res[2])
-                     deeper_elements.append((res[0], res[1], res[3]))
-                     latest_element.extend(res[3])
-
-             return ann_type, subtype, deeper_elements, latest_element
-
-     annotations = []
-     for child_el in sent_el:
-         if not child_el.tag.endswith("seg"):
-             continue
-
-         ann_type, subtype, deeper_elements, latest_element = _resolve_recursively(child_el)
-         annotations.extend(deeper_elements)
-         annotations.append((ann_type, subtype, latest_element))
-
-     return annotations
-
-
- def linkgroup_info(sent_el):
-     annotations = []
-     for child_el in sent_el:
-         if not child_el.tag.endswith("linkGrp"):
-             continue
-
-         for curr_link in child_el:
-             ann_type = EL_TYPE
-             if child_el.attrib["type"] not in {"metonymy", "frame", "metaphor", "idiom"}:
-                 logging.warning(f"Uncovered linkGrp element type, skipping: {child_el.attrib['type']}")
-                 continue
-
-             if child_el.attrib["type"] == "metonymy":
-                 subtype = curr_link.attrib["ana"]
-             elif child_el.attrib["type"] in {"frame", "metaphor"}:
-                 ann_type = EL_TYPE if child_el.attrib["type"] == "metaphor" else EL_FRAME
-                 subtype = curr_link.attrib["ana"].split(":")[-1]
-             else:
-                 subtype = "idiom"
-
-             tokens_involved = list(map(lambda _tok_id: _tok_id[1:] if _tok_id.startswith("#") else _tok_id,
-                                        curr_link.attrib["target"].split(" ")))
-             annotations.append((ann_type, subtype, tokens_involved))
-
-     return annotations
-
-
- class GKomet(datasets.GeneratorBasedBuilder):
-     """G-KOMET 1.0 is a corpus of metaphorical expressions in spoken Slovene language. """
-
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "document_name": datasets.Value("string"),
-                 "idx": datasets.Value("uint32"),  # index inside current document
-                 "idx_paragraph": datasets.Value("uint32"),
-                 "idx_sentence": datasets.Value("uint32"),  # index inside current paragraph
-                 "sentence_words": datasets.Sequence(datasets.Value("string")),
-                 "met_type": [{
-                     "type": datasets.Value("string"),
-                     "word_indices": datasets.Sequence(datasets.Value("uint32"))
-                 }],
-                 "met_frame": [{
-                     "type": datasets.Value("string"),
-                     "word_indices": datasets.Sequence(datasets.Value("uint32"))
-                 }]
-             }
-         )
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         data_dir = dl_manager.download_and_extract(_URLS["gkomet"])
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={"data_dir": os.path.join(data_dir, "G-Komet")},
-             )
-         ]
-
-     def _generate_examples(self, data_dir):
-         data_files = []
-         for fname in os.listdir(data_dir):
-             curr_path = os.path.join(data_dir, fname)
-             if os.path.isfile(curr_path) and fname.endswith(".xml") and fname != "G-Komet.xml":  # G-Komet.xml = meta-file
-                 data_files.append(fname)
-         data_files = sorted(data_files)
-
-         idx_example = 0
-         for fname in data_files:
-             fpath = os.path.join(data_dir, fname)
-             curr_doc = ET.parse(fpath)
-             root = curr_doc.getroot()
-             NAMESPACE = namespace(root)
-
-             idx_sent_glob = 0
-             for idx_par, curr_par in enumerate(root.iterfind(f".//{NAMESPACE}p")):
-                 id2position = {}  # {<idx_sent> -> {<id_word>: <position> foreach word} foreach sent}
-                 all_words = []
-
-                 # Pass#1: extract word information
-                 for idx_sent, curr_sent in enumerate(curr_par.iterfind(f"{NAMESPACE}s")):
-                     id_words, words = word_info(curr_sent)
-
-                     id2position[idx_sent] = dict(zip(id_words, range(len(words))))
-                     all_words.append(words)
-
-                 all_types, all_frames = [], []
-
-                 # Pass#2: extract annotations from <seg>ments
-                 for idx_sent, curr_sent in enumerate(curr_par.iterfind(f"{NAMESPACE}s")):
-                     annotated_segs = seg_info(curr_sent)
-                     all_types.append([])
-                     all_frames.append([])
-
-                     for curr_ann in annotated_segs:
-                         ann_type, ann_subtype, words_involved = curr_ann
-                         if ann_type == EL_TYPE:
-                             all_types[idx_sent].append({
-                                 "type": ann_subtype,
-                                 "word_indices": [id2position[idx_sent][_id_word] for _id_word in words_involved
-                                                  if _id_word in id2position[idx_sent]]
-                             })
-                         elif ann_type == EL_FRAME:
-                             all_frames[idx_sent].append({
-                                 "type": ann_subtype,
-                                 "word_indices": [id2position[idx_sent][_id_word] for _id_word in words_involved
-                                                  if _id_word in id2position[idx_sent]]
-                             })
-
-                 # Pass#3: extract annotations from <linkGrp>s
-                 for idx_sent, curr_sent in enumerate(curr_par.iterfind(f"{NAMESPACE}s")):
-                     annotated_linkgroups = linkgroup_info(curr_sent)
-
-                     for curr_ann in annotated_linkgroups:
-                         ann_type, ann_subtype, words_involved = curr_ann
-
-                         if ann_type == EL_TYPE:
-                             all_types[idx_sent].append({
-                                 "type": ann_subtype,
-                                 "word_indices": [id2position[idx_sent][_id_word] for _id_word in words_involved
-                                                  if _id_word in id2position[idx_sent]]
-                             })
-                         elif ann_type == EL_FRAME:
-                             all_frames[idx_sent].append({
-                                 "type": ann_subtype,
-                                 "word_indices": [id2position[idx_sent][_id_word] for _id_word in words_involved
-                                                  if _id_word in id2position[idx_sent]]
-                             })
-
-                 idx_sent = 0
-                 for curr_words, curr_types, curr_frames in zip(all_words, all_types, all_frames):
-                     if len(curr_words) == 0:
-                         continue
-
-                     yield idx_example, {
-                         "document_name": fname,
-                         "idx": idx_sent_glob,
-                         "idx_paragraph": idx_par,
-                         "idx_sentence": idx_sent,
-                         "sentence_words": curr_words,
-                         "met_type": curr_types,
-                         "met_frame": curr_frames
-                     }
-                     idx_example += 1
-                     idx_sent += 1
-                     idx_sent_glob += 1