IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies
---------------------------------------------------------------------

This package contains data used in the IWPT 2020 shared task. The package is available from
http://hdl.handle.net/11234/1-3238 (permanent URL).

For more information on the shared task, see the IWPT conference proceedings in the ACL anthology
https://www.aclweb.org/anthology/sigs/sigparse/
as well as the shared task web site:
https://universaldependencies.org/iwpt20/

The package contains training, development and test (evaluation) datasets as they were used
during the shared task. The data is based on a subset of Universal Dependencies release 2.5
(http://hdl.handle.net/11234/1-3105) but some treebanks contain additional enhanced annotations.
Moreover, not all of these additions became part of Universal Dependencies release 2.6
(http://hdl.handle.net/11234/1-3226), which makes the shared task data unique and worth
a separate release to enable later comparison with new parsing algorithms.


LICENSE
-------

The package is distributed under the same license as Universal Dependencies 2.5. This is
a collective license, i.e., individual treebanks may use different license terms. See
https://lindat.mff.cuni.cz/repository/xmlui/page/licence-UD-2.5?locale-attribute=en
for details.


FILES AND FOLDERS
-----------------

There are 17 languages and 28 treebanks. Some treebanks contain training, development, and
test data; others contain only test data. Each treebank has a folder named
UD_Language-TreebankID:

UD_Arabic-PADT
UD_Bulgarian-BTB
UD_Czech-CAC
UD_Czech-FicTree
UD_Czech-PDT
UD_Czech-PUD
UD_Dutch-Alpino
UD_Dutch-LassySmall
UD_English-EWT
UD_English-PUD
UD_Estonian-EDT
UD_Estonian-EWT
UD_Finnish-PUD
UD_Finnish-TDT
UD_French-FQB
UD_French-Sequoia
UD_Italian-ISDT
UD_Latvian-LVTB
UD_Lithuanian-ALKSNIS
UD_Polish-LFG
UD_Polish-PDB
UD_Polish-PUD
UD_Russian-SynTagRus
UD_Slovak-SNK
UD_Swedish-PUD
UD_Swedish-Talbanken
UD_Tamil-TTB
UD_Ukrainian-IU

Each folder contains one or more CoNLL-U files with gold-standard annotation following
the UD guidelines (https://universaldependencies.org/guidelines.html), and one or
more corresponding plain text files (the parsing systems receive these files as input and
produce parsed CoNLL-U files as output).

During the shared task, only the training and development portions were distributed in this
form. The test data was distributed blind, i.e., as plain text input without annotation.
Moreover, the test sets of all treebanks of one language were merged into a single file, and
the participants were not told which part of the text came from which treebank. In this
package, we add four folders:

test-blind
test-gold
dev-blind
dev-gold

The *-blind folders contain 17 text files each, named using language codes:

ar.txt (Arabic)
bg.txt (Bulgarian)
cs.txt (Czech)
en.txt (English)
et.txt (Estonian)
fi.txt (Finnish)
fr.txt (French)
it.txt (Italian)
lt.txt (Lithuanian)
lv.txt (Latvian)
nl.txt (Dutch)
pl.txt (Polish)
ru.txt (Russian)
sk.txt (Slovak)
sv.txt (Swedish)
ta.txt (Tamil)
uk.txt (Ukrainian)

The *-gold folders contain the corresponding gold-standard annotated files in two flavors:
the real UD-compliant annotation, e.g., ar.conllu, and a file where empty nodes have been
collapsed to make evaluation possible, e.g., ar.nen.conllu (nen = no empty nodes). In addition
to the test files, we also provide the development files in the same form (they were also
available to the participants of the shared task).


FRENCH
------

In addition to enhancements described in the UD v2 guidelines
(https://universaldependencies.org/u/overview/enhanced-syntax.html),
the French data also show neutralized diathesis in the spirit of Candito et al. (2017).
It is not possible to squeeze this information into the dependency labels in a way that would
be both human-readable and valid according to the UD guidelines. Therefore, the French CoNLL-U
files are provided in two flavors: "fulldeps" and "xoxdeps". The former is the intended, human-
readable format where final and canonical grammatical functions are separated by the "@"
character; e.g., "obl:agent@nsubj" means that the dependent is an oblique agent phrase (the
final function) but canonically it corresponds to the subject of the verb in active form (the
canonical function). Such dependency labels do not comply with the current UD guidelines, which
do not allow the "@" character in dependency labels (and the full labels sometimes contain more
colons ":" than permitted). The labels were therefore transformed by reducing the number of
colons and replacing "@" with "xox", hence the xoxdeps.conllu files. The systems participating
in the shared task worked with the xoxdeps files, as these pass the official validation. However,
the cryptic xoxdeps labels can be easily converted back to the original format, even in the
parser output (provided the parser predicted the label correctly; see the tools below).
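
As a minimal sketch of the decoding idea (assuming only the "xox" -> "@" substitution; the
colon reduction applied during encoding is treebank-specific and not reproduced here, and the
example label is illustrative rather than taken from the data; the official converter is the
expand_edeps_in_french.pl script described below):

  def xox_to_fulldeps(label):
      # Restore the human-readable separator between the final and the
      # canonical grammatical function.
      return label.replace("xox", "@")

  print(xox_to_fulldeps("obl:agentxoxnsubj"))  # obl:agent@nsubj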


TOOLS
-----

The package also contains a number of Perl and Python scripts that have been used to process
the data during preparation and during the shared task. They are in the "tools" folder.

validate.py

  The official Universal Dependencies validator. It is a Python 3 script, and it needs the
  third-party module regex to be installed on the system (use pip to install it). The script
  recognizes several levels of validity; system output in the shared task must be valid on
  level 2:

  python validate.py --level 2 --lang ud file.conllu
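
  If the regex module is not yet installed on the system, it can be added with pip, e.g.:

  pip install regex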

enhanced_collapse_empty_nodes.pl

  Removes empty nodes from a CoNLL-U file and transforms all paths traversing an empty node
  into a long edge with a combined label, e.g., "conj>nsubj". Note that the output is not
  valid according to UD guidelines, as the ">" character cannot occur in a normal CoNLL-U
  file. After passing validation, system outputs and gold-standard files are processed with
  this script and then they can be evaluated (the evaluator cannot work with empty nodes
  directly).

  perl enhanced_collapse_empty_nodes.pl file.conllu > file.nen.conllu
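
  For illustration, the following is a simplified Python sketch of the collapsing logic (it is
  not the official Perl tool; the example graph and the helper names are invented for the
  demonstration):

  # deps maps a node id to the list of (head id, deprel) pairs from its DEPS
  # column; ids containing "." (e.g. "3.1") denote empty nodes.
  def collapse(deps):
      def is_empty(node):
          return "." in node
      def resolve(head, deprel):
          # If the head is an empty node, climb to its own head(s) and
          # prefix their deprels to ours, joined by ">".
          if not is_empty(head):
              return [(head, deprel)]
          edges = []
          for h, d in deps.get(head, []):
              edges.extend(resolve(h, d + ">" + deprel))
          return edges
      collapsed = {}
      for node, edges in deps.items():
          if is_empty(node):
              continue
          new_edges = []
          for head, deprel in edges:
              new_edges.extend(resolve(head, deprel))
          collapsed[node] = new_edges
      return collapsed

  # Node 4 attached to the empty node 3.1 as nsubj, where 3.1 attaches to
  # node 1 as conj, ends up attached directly to node 1 as "conj>nsubj":
  print(collapse({"3.1": [("1", "conj")], "4": [("3.1", "nsubj")]}))
  # {'4': [('1', 'conj>nsubj')]}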

iwpt20_xud_eval.py

  The official evaluation script of the shared task. It takes a valid gold-standard file and
  a system-output file after both have been processed with enhanced_collapse_empty_nodes.pl.
  It also requires that the text of the gold standard and the system output differ only in
  whitespace characters (tokenization); all non-whitespace characters must be identical
  (no normalization is allowed).

  python iwpt20_xud_eval.py -v gold.nen.conllu system.nen.conllu

  The script can be told that certain types of enhancements should not be evaluated. This
  is done with treebanks where some enhancements are not annotated in the gold standard and
  we do not want to penalize the system for predicting the enhancement. For example,

  --enhancements 156

  means that gapping (1), relative clauses (5), and case-enhanced deprels (6) should be
  ignored.

conllu-quick-fix.pl

  A Perl script that tries to make an invalid file valid by filling in UPOS tags if they
  are empty, fixing the format of morphological features, etc. It can also make sure that
  every node in the enhanced graph is reachable from the virtual root node with id 0;
  however, this function is optional, as it modifies the system-output dependency structure,
  and the algorithm it uses is not optimal in terms of evaluation score.

  perl conllu-quick-fix.pl input.conllu > output.conllu
  perl conllu-quick-fix.pl --connect-to-root input.conllu > output.conllu

conllu_to_text.pl

  This script was used to generate the untokenized text files that serve as the input for
  the trained parsing system (the "blind" data). It takes the ISO 639-1 code of the language
  of the text (e.g., "--language en") because it works a bit differently with languages that
  use Chinese characters. The option was not needed for the present shared task, as there
  were no such languages.

  perl conllu_to_text.pl --language xx file.conllu > file.txt

mergetest.pl

  This script was used to merge the test data (both gold and blind) from multiple treebanks
  of one language. The test sets are essentially concatenated, but the script makes sure that
  there is a new document mark at the beginning of each gold CoNLL-U file, and between two
  text files there is an empty line. This increases the chance that the systems will place a
  sentence break at this position, making it possible to separate the data from individual
  treebanks in the system output. The first argument of the script is the name of the target
  file; the remaining arguments are the names of the source files (any number, but at least one).
  By default, the files are processed as text files. If the target file name ends in ".conllu",
  they are processed as CoNLL-U files.

  perl mergetest.pl nl.conllu UD_Dutch-Alpino/nl_alpino-ud-test.conllu UD_Dutch-LassySmall/nl_lassysmall-ud-test.conllu
  perl mergetest.pl nl.txt UD_Dutch-Alpino/nl_alpino-ud-test.txt UD_Dutch-LassySmall/nl_lassysmall-ud-test.txt

match_and_split_conllu_by_input_text.pl

  This script reverses mergetest.pl, i.e., splits the system output for a language into smaller
  files corresponding to individual treebanks. This must be done if we want to evaluate each
  treebank with different settings, i.e., ignoring enhancements that are not gold-annotated
  in the treebank. The script takes the input text of one treebank and the CoNLL-U file that
  starts with the annotation of this input text but may also contain annotation of other text.
  Two CoNLL-U files will be generated: the first one corresponds to the input text, the second
  one is the rest. Therefore, if the language consists of more than two treebanks, the script
  must be run multiple times. The script can handle sentences that cross the file boundary
  (nodes whose parents lie in the other file will be re-attached to some other parent). However,
  the script cannot handle situations where a token crosses the file boundary.

  perl match_and_split_conllu_by_input_text.pl UD_Dutch-Alpino/nl_alpino-ud-test.txt nl.conllu nl_alpino.conllu nl_rest.conllu

evaluate_all.pl

  This script takes one shared task submission (tgz archive with CoNLL-U files for individual
  languages), unpacks it, checks whether the files are valid and evaluates them. The script is
  provided as-is and it is not ready to be run outside the shared task submission site. Various
  paths are hard-coded in the source code! However, the code shows how the evaluation was done
  using the other tools described above.

  perl evaluate_all.pl team_name submission_id dev|test

html_evaluation.pl

  This script relies on the same fixed folder structure as evaluate_all.pl, and it, too, has
  a path hard-wired in the source code. It scans the evaluation logs produced by evaluate_all.pl
  and creates an HTML file with score tables for the submission.

  perl html_evaluation.pl team_name submission_id dev|test

expand_edeps_in_french.pl

  Takes a CoNLL-U file with the cryptic but UD-valid encoding of the French double relations
  ("xoxdeps") and converts them back to the human-readable form ("fulldeps").

  perl expand_edeps_in_french.pl fr.xoxdeps.conllu > fr.fulldeps.conllu

enhanced_graph_properties.pl

  Reads a CoNLL-U file and collects statistics about the enhanced graphs found in the DEPS
  column. Some of the statistics target abstract graph-theoretical properties, others target
  properties specific to Enhanced Universal Dependencies. The script also tries to collect
  clues about individual enhancement types defined in UD (thus assessing whether the
  enhancement is annotated in the given dataset).

  perl enhanced_graph_properties.pl file.conllu


SYSTEM OUTPUTS
--------------

The folder "sysoutputs" contains the official primary submission of each team participating
in the shared task. The folder names are the lowercased team names as used at submission
time; they slightly differ from the spelling used in the shared task overview paper:

TurkuNLP ....... turkunlp
Orange ......... orange_deskin
Emory NLP ...... emorynlp
FASTPARSE ...... fastparse
UNIPI .......... unipi
ShanghaiTech ... shanghaitech_alibaba
CLASP .......... clasp
ADAPT .......... adapt
Køpsala ........ koebsala
RobertNLP ...... robertnlp

In addition, there are three baseline submissions generated by the shared task organizers
and described on the shared task website and in the overview paper:

baseline1 ... gold basic trees copied as enhanced graphs
baseline2 ... UDPipe-predicted basic trees copied as enhanced graphs
baseline3 ... UDPipe-predicted basic trees, from which the Stanford Enhancer created enhanced graphs


REFERENCES
----------

Gosse Bouma, Djamé Seddah, Daniel Zeman (2020).
  Overview of the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies.
  In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020
  Shared Task on Parsing into Enhanced Universal Dependencies, Seattle, WA, USA,
  ISBN 978-1-952148-11-8.

Marie Candito, Bruno Guillaume, Guy Perrier, Djamé Seddah (2017).
  Enhanced UD Dependencies with Neutralized Diathesis Alternation.
  In Proceedings of the Fourth International Conference on Dependency Linguistics (Depling 2017),
  pages 42–53, Pisa, Italy, September 18–20, 2017.