{
    "paper_id": "2020",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T03:15:08.754790Z"
    },
    "title": "HMSid and HMSid2 at PARSEME Shared Task 2020: Computational Corpus Linguistics and unseen-in-training MWEs",
    "authors": [
        {
            "first": "Jean-Pierre",
            "middle": [],
            "last": "Colson",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Louvain Louvain-la-Neuve",
                "location": {
                    "country": "Belgium"
                }
            },
            "email": "jean-pierre.colson@uclouvain.be"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper is a system description of HMSid, officially sent to the PARSEME Shared Task 2020 for one language (French), in the open track. It also describes HMSid2, sent to the organizers of the workshop after the deadline and using the same methodology but in the closed track. Both systems do not rely on machine learning, but on computational corpus linguistics. Their score for unseen MWEs is very promising, especially in the case of HMSid2, which would have received the best score for unseen MWEs in the French closed track. 1 Introduction Although the PARSEME Shared Task 2018 (Savary et al., 2018) produced very interesting results for the extraction of verbal multiword expressions, one important note of caution has to be made: the participating systems produced poor results for unseen MWEs, i.e. expressions that were absent from the training data. As pointed out by the organizers of the new Parseme Shared Task 2020 1 , a possible solution to this issue is the recourse to large MWE lexicons. In this paper, however, we report the results of two systems offering promising results for unseen MWEs with no recourse to MWE lexicons: HMSid (Hybrid Multi-layer System for the extraction of Idioms) and HMSid2. Both systems are based on computational corpus linguistics: they just used the training data and an additional general linguistic corpus. As the models require a fine-tuned adaptation to each language under study, they were only applied to the French dataset of the PARSEME Shared Task 2020. HMSid used as an external corpus the French WaCky corpus (Baroni et al., 2009) and was submitted to the PARSEME Shared Task 2020. As there was a recourse to an external corpus, it was logically put in the open track. Thanks to the feedback from the organizers of PARSEME 2020, however, we adapted the system in order to propose it in the closed track: the corpus used was the Wikipedia corpus included in the training data. 
The new version, HMSid2, was sent to the organizers after the official deadline. In this paper, both the official results of HMSid and the new results from HMSid2 are discussed. Our theoretical starting point for both systems is that, while Deep Learning will surpass most techniques for reproducing elements that are somehow present in training sets, it will need additional corpus",
    "pdf_parse": {
        "paper_id": "2020",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper is a system description of HMSid, officially sent to the PARSEME Shared Task 2020 for one language (French), in the open track. It also describes HMSid2, sent to the organizers of the workshop after the deadline and using the same methodology but in the closed track. Both systems do not rely on machine learning, but on computational corpus linguistics. Their score for unseen MWEs is very promising, especially in the case of HMSid2, which would have received the best score for unseen MWEs in the French closed track. 1 Introduction Although the PARSEME Shared Task 2018 (Savary et al., 2018) produced very interesting results for the extraction of verbal multiword expressions, one important note of caution has to be made: the participating systems produced poor results for unseen MWEs, i.e. expressions that were absent from the training data. As pointed out by the organizers of the new Parseme Shared Task 2020 1 , a possible solution to this issue is the recourse to large MWE lexicons. In this paper, however, we report the results of two systems offering promising results for unseen MWEs with no recourse to MWE lexicons: HMSid (Hybrid Multi-layer System for the extraction of Idioms) and HMSid2. Both systems are based on computational corpus linguistics: they just used the training data and an additional general linguistic corpus. As the models require a fine-tuned adaptation to each language under study, they were only applied to the French dataset of the PARSEME Shared Task 2020. HMSid used as an external corpus the French WaCky corpus (Baroni et al., 2009) and was submitted to the PARSEME Shared Task 2020. As there was a recourse to an external corpus, it was logically put in the open track. Thanks to the feedback from the organizers of PARSEME 2020, however, we adapted the system in order to propose it in the closed track: the corpus used was the Wikipedia corpus included in the training data. 
The new version, HMSid2, was sent to the organizers after the official deadline. In this paper, both the official results of HMSid and the new results from HMSid2 are discussed. Our theoretical starting point for both systems is that, while Deep Learning will surpass most techniques for reproducing elements that are somehow present in training sets, it will need additional corpus",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "based information for unseen-in-training MWEs. It should also be pointed out that MWE extraction is a daunting practical task, but that the theoretical background is also very complex, as it is related to grammatical and semantic structure. Information retrieval (Baeza-Yates and Ribeiro-Neto, 1999) has shown that semantic relations may be analyzed by very diverse methods, including vector space models and clustering methods. Many of its findings are compatible with the Distributional Hypothesis (Harris 1954) : differences in meaning will be reflected by differences in distribution. However, the distribution of words is also affected by existing MWEs, as at least 50 percent of the words from any text will actually be included in MWEs, collocations or phraseological units (Sinclair, 1991) . In addition, a wide array of studies in construction grammar (Hoffmann and Trousdale, 2013) strongly suggest that language structure consists of a very complex and probabilistic network of constructions at various levels of abstraction and schematicity.",
                "cite_spans": [
                    {
                        "start": 263,
                        "end": 299,
                        "text": "(Baeza-Yates and Ribeiro-Neto, 1999)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 500,
                        "end": 513,
                        "text": "(Harris 1954)",
                        "ref_id": null
                    },
                    {
                        "start": 781,
                        "end": 797,
                        "text": "(Sinclair, 1991)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 861,
                        "end": 891,
                        "text": "(Hoffmann and Trousdale, 2013)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "It is no wonder then that very complex techniques are necessary for extracting MWEs, in much the same way as for the extraction of semantic links. In particular, the complex interplay between 1 st -order co-occurrence (words appear together) and 2 nd -order co-occurrence (words appear in similar contexts, Lapesa and Evert, 2014) probably requires a hybrid methodology. While deep learning and in particular neural networks are very efficient ways of gaining information from a training set, it may be complemented by a more traditional, corpus-based approach in the case of the extraction of data that are unseen in the training set.",
                "cite_spans": [
                    {
                        "start": 307,
                        "end": 330,
                        "text": "Lapesa and Evert, 2014)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "The technical background for HMSid and HMSid2 is a combination of techniques inherited from Information Retrieval, such as metric clusters (Baeza-Yates and Berthier Ribeiro-Neto, 1999) and a query likelihood model, with a big data approach, in this case a large (unparsed and untagged) linguistic corpus: the French WaCky for HMSid and the Parseme French training corpus (Wikipedia) for HMSid2. As described in Colson (2017; 2018) , a clustering algorithm based on the average distance between the component parts of the MWEs is measured, the cpr-score (Corpus Proximity Ratio): This approach, as opposed to vector models, is a 1 st -order model, as it is based on the co-occurrence of words and not on similar contexts. Given a window W of x tokens (depending on the language and the corpus, typically set at 20 for MWEs), the score simply measures the ratio between the number of exact occurrences of an n-gram, divided by the number of occurrences with a window between each gram. The main advantages of this metric are that it is not limited to bigrams, and that semantic links may be captured as well by enlarging the window, a point that has also been made by Lapesa and Evert (2014) : larger windows may enable 1 st -order models to capture semantic associations.",
                "cite_spans": [
                    {
                        "start": 139,
                        "end": 184,
                        "text": "(Baeza-Yates and Berthier Ribeiro-Neto, 1999)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 411,
                        "end": 424,
                        "text": "Colson (2017;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 425,
                        "end": 430,
                        "text": "2018)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 1166,
                        "end": 1189,
                        "text": "Lapesa and Evert (2014)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "!\" = #($ % , $ & , \u2026 , $ ' ) #*+ -. = $ % , + -/ = $ & , \u2026 , + -0 = $ ' 2 max(3 45% -3 4 ) \u2264 7; 8 = 1, \u2026 , # \u2212 1)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Experiments with large datasets of idiomatic MWEs have shown (Colson, 2018) that most formulaic and idiomatic constructions can be captured by co-occurrence clusters, provided that the corpus used is sufficiently large (at least 1 billion tokens). In order to reach a good compromise between results that could be extracted from co-occurrence in large corpora and recurrent patterns with specific categories of MWEs, a hybrid methodology was used, as detailed in the following section.",
                "cite_spans": [
                    {
                        "start": 61,
                        "end": 75,
                        "text": "(Colson, 2018)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "In the PARSEME Shared Task 2020 for French, the following categories of verbal MWEs had to be extracted from the test set: IRV (inherently reflexive verbs, as in the English example to help oneself), LVC.cause (light-verb constructions in which the verb adds a causative meaning to the noun, as in the English to grant rights), LVC.full (light-verb constructions in which the verb only adds meaning expressed as morphological features, as in to give a lecture), MVC (multi-verb constructions, as in to make do) and VID (verbal idioms, e.g. to spill the beans).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methodology used for HMSid and HMSid2",
                "sec_num": "2"
            },
            {
                "text": "After a number of preliminary tests, we decided to extract French MWEs from the test set in a twostep process. The first step concerned all categories of verbal MWEs, as described above, except the last one (VID, verbal idioms). The second step was just devoted to verbal idioms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methodology used for HMSid and HMSid2",
                "sec_num": "2"
            },
            {
                "text": "This two-step approach was motivated by the unpredictable character of verbal idioms: contrary to the other categories of MWEs used for the PARSEME Shared Task, idioms display a very irregular number of elements, of which the syntactic structure is also diverse.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methodology used for HMSid and HMSid2",
                "sec_num": "2"
            },
            {
                "text": "During the first step, we used a Perl script and the Data:: Table module 2 for storing each sentence at a time in RAM memory. For the categories IRV, LVC.cause, LVC.full and MVC, the specific syntactic features of these categories were taken into account by the algorithm: in the case of IRV, for instance, the parsed sentences provided by the PARSEME dataset made it easy to extract all pronouns preceding or following the verbs, and an additional check was performed in order to determine whether those pronouns were indeed French reflexive pronouns, including elision (e.g. the pronominal form s' instead of se). For LVC.cause, a list of French causative verbs was extracted from the training data (for instance apporter, causer, cr\u00e9er, entra\u00eener). In the extraction phase, all objects depending on such causative verbs were measured by our co-occurrence score, the cpr-score (Colson, 2017; 2018) and the highest values were considered as cases of LVC.cause constructions. For LVC.full, a similar methodology was used, taking into account all subjects (for passive constructions) and objects (for direct object constructions) depending on verbs, excluding causative verbs, with a medium-range association between the subject/object and the verb (computed by the cpr-score). In the same way, the MVC category was extracted on the basis of the degree of association between two successive verbs, as in faire remarquer (to point out).",
                "cite_spans": [
                    {
                        "start": 879,
                        "end": 893,
                        "text": "(Colson, 2017;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 894,
                        "end": 899,
                        "text": "2018)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 60,
                        "end": 72,
                        "text": "Table module",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Methodology used for HMSid and HMSid2",
                "sec_num": "2"
            },
            {
                "text": "In the second step of our extraction methodology, verbal idioms were extracted and added to the results. This made it possible to add the category of verbal idioms in the labels of the final results if and only if the results had not yet received another category label, for instance LVC.full. Preliminary tests on the basis of the training data indeed revealed that our algorithm tended to assign the VID category quite often, whereas the annotators of the gold set had been rather strict as to the idiomatic character of verbal MWEs. Using two separate scripts was a simple way of avoiding interference in the results.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methodology used for HMSid and HMSid2",
                "sec_num": "2"
            },
            {
                "text": "In the Perl script devoted to the extraction of VIDs, we also used the Data:: Table module and selected in the parsed data all verbs, all their complements, and all complements of each complement. Extensive testing with the training data showed that this approach yielded higher scores than an n-gram based approach, in which the successive grams of each verb were analyzed left and right. Table 1 below displays the results obtained for HMSid, our system that was officially sent to the PARSEME Shared Task 2020. As explained in section 2, HMSid relied on an external corpus and was therefore placed in the open track. Table 2 shows the results obtained with HMSid2, using the same methodology but relying solely on the training data and the training corpus, and therefore belonging to the closed track. The results with HMSid2 were sent to the organizers of the Shared Task after the deadline.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 390,
                        "end": 397,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 620,
                        "end": 627,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Methodology used for HMSid and HMSid2",
                "sec_num": "2"
            },
            {
                "text": "Global MWE-based Global Token-based Table 2 : Global results obtained with HMSid2 at the PARSEME 2020 Shared Task (French).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 36,
                        "end": 43,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "System Track Unseen MWE-based",
                "sec_num": null
            },
            {
                "text": "As shown in Table 1 , HMSid obtained a global F1 (Token-based) of 67.1, which puts it in 5 th position in the open track. It should be noted, however, that its F1-score on unseen MWEs (36.49) puts it in 4 th position (and very close to the 3d one), while its recall for unseen French MWEs is the best of all systems, open or closed track (53.33). This is noteworthy, because HMSid (and HMSid2) do not try to reproduce recurrent patterns from the training set, but rely on statistical extraction from a large linguistic corpus. In other words, both systems do not try to reproduce decisions made by annotators, as reflected in the training set, but are looking for statistical patterns in a large linguistic corpus, regardless of the training set. Of course, the training set was used for fine-tuning the statistical thresholds and deciding whether a combination was a MWE or not, and the different categories (which are in itself debatable, such as the distinction between LVC.full and LVC.cause) were integrated into the statistical extraction. The recall score on unseen MWEs also provides additional evidence of the statistical nature of recurrent MWEs in large linguistic corpora. This is even more obvious with HMSid2, which used exactly the same methodology, as explained in the above section, but relied on the training corpus provided by the Shared Task (part of the Wikipedia corpus), and would therefore be placed in the closed track. Among the 3 systems submitted to the closed track for French, HMSid2 would receive rank 2 for the global F1-score (MWE-based or Token-Based), and rank 1 for unseen MWEs, with an F1-score (39.21) far better than those obtained by the other systems in the closed track (with respectively 24.4 and 3.67). The best overall system officially submitted to the French closed track (Seen2Seen) has an F1-score of 3.67 for unseen MWEs.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 12,
                        "end": 19,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "System Track Unseen MWE-based",
                "sec_num": null
            },
            {
                "text": "The difference between precision and recall, especially for unseen MWEs, should also be relativized by the choices made in the training and gold set. In spite of the excellent quality of the PARSEME annotated dataset, decisions as to the idiomatic character of a MWE will never be unanimous. In the case of the French dataset, for instance, the notion of verbal idiom (VID) was taken strictly by the annotators, but there are a few notable exceptions. A number of less idiomatic constructions were also labeled as VIDs. For instance, avoir lieu (to take place), il y a (there is / there are), mettre en pratique (to put into practice), tenir compte de (take into account), are all considered French verbal idioms in the training data, a choice that may be respected but has consequences on the statistical extraction. The statistical metric indeed had to be more tolerant for weaker associations when assigning the label 'VID', which contributed to a fairly good recall but a slightly lower precision. This appears clearly in all results from Tables 1 and 2, and in particular for unseen MWEs. In this case, one should bear in mind that the algorithm is looking for recurrent patterns in the linguistic system itself, as there are no similar examples in the training set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "System Track Unseen MWE-based",
                "sec_num": null
            },
            {
                "text": "Many cases of verbal idioms from the gold set are quite obvious, such as tourner le dos \u00e0 (turn one's back on, lines 5817-19 of the gold set), il pleuvait des cordes (it was raining cats and dogs, lines 7415-17) or sortir le grand jeu (pull out all the stop, lines 12129-32), all three labelled as VID and also recognized by the algorithm because of the very strong association between the grams: a cpr-score of resp. 0.92 / 0.88 / 0.94. In other cases, however, the algorithm and the annotators are at odds. In lines 5868-9, for instance, rester silencieux (remain silent, keep quiet) is not considered as MWE by the annotators, but the cpr-score contradicts this view: 0.81. The same holds true of many other examples, such as trouver un compromis (lines 14387-89), not considered as a MWE in the gold set, but displaying a cprscore of 0.80. In this specific case, it should be reminded that native speakers are not always the best judges of the idiomaticity of their own language. It may be pretty obvious for speakers or French and of English that a compromise may be found but a quick look at other European languages reveals that this is far from being the case: in Spanish, for instance, the common construction is llegar a un compromiso.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "System Track Unseen MWE-based",
                "sec_num": null
            },
            {
                "text": "It should also be pointed out that the methodology used for HMSid and HMSid2 is easily applicable to other languages. As a matter of fact, we have already implemented it as an experimental web tool 3 , IdiomSearch for English, German, Spanish, French, Dutch and Chinese. Measuring associations based on the cpr-score is indeed possible for any language, provided that the necessary web corpus is compiled. The only caveat is the goal of the classification. The Parseme Shared Task 2020, as the previous editions, wanted the systems to target very specific categories of verbal expressions, whereas our experimental tool IdiomSearch looks for recurrent statistical associations, whatever the precise category may be. Finetuning the algorithm to specific categories expected by the gold set, and annotated as such by native speakers of the language requires sophisticated training algorithms such as those used in deep learning.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "System Track Unseen MWE-based",
                "sec_num": null
            },
            {
                "text": "In conclusion, the most interesting results from HMSid and HMSid2 are those obtained for unseen MWEs. Due to the well-known phenomenon of overfitting, deep learning models often have problems with unseen data, which suggests that a hybrid approach combining deep learning and our model may be useful for future research.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "System Track Unseen MWE-based",
                "sec_num": null
            },
            {
                "text": "Introduction to the PARSEME Shared Task 2020, http://multiword.sourceforge.net/PHITE.php?sitesig=CONF&page=CONF_02_MWE-LEX_2020___lb__COL-ING__rb__&subpage=CONF_40_Shared_Task This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "https://idiomsearch.lsti.ucl.ac.be",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Modern Information Retrieval",
                "authors": [
                    {
                        "first": "Ricardo",
                        "middle": [],
                        "last": "Baeza-Yates",
                        "suffix": ""
                    },
                    {
                        "first": "Berthier",
                        "middle": [],
                        "last": "Ribeiro-Neto",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ricardo Baeza-Yates and Berthier Ribeiro-Neto. 1999. Modern Information Retrieval. ACM Press / Addison Wesley, New York.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "The WaCky Wide Web: A collection of very large linguistically processed Web-crawled corpora",
                "authors": [
                    {
                        "first": "Marco",
                        "middle": [],
                        "last": "Baroni",
                        "suffix": ""
                    },
                    {
                        "first": "Silvia",
                        "middle": [],
                        "last": "Bernardini",
                        "suffix": ""
                    },
                    {
                        "first": "Adriano",
                        "middle": [],
                        "last": "Ferraresi",
                        "suffix": ""
                    },
                    {
                        "first": "Eros",
                        "middle": [],
                        "last": "Zanchetta",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Journal of Language Resources and Evaluation",
                "volume": "43",
                "issue": "",
                "pages": "209--226",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Marco Baroni, Silvia Bernardini, Adriano Ferraresi and Eros Zanchetta. 2009. The WaCky Wide Web: A collection of very large linguistically processed Web-crawled corpora. Journal of Language Resources and Evaluation, 43: 209-226.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "The IdiomSearch Experiment: Extracting Phraseology from a Probabilistic Network of Constructions",
                "authors": [
                    {
                        "first": "Jean-Pierre",
                        "middle": [],
                        "last": "Colson",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "16--28",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Colson. 2017. The IdiomSearch Experiment: Extracting Phraseology from a Probabilistic Network of Constructions. In Ruslan Mitkov (ed.), Computational and Corpus-based Phraseology, Lecture Notes in Artificial Intelligence 10596. Springer International Publishing, Cham: 16-28.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "From Chinese Word Segmentation to Extraction of Constructions: Two Sides of the Same Algorithmic Coin",
                "authors": [
                    {
                        "first": "Jean-Pierre",
                        "middle": [],
                        "last": "Colson",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "41--50",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Colson. 2018. From Chinese Word Segmentation to Extraction of Constructions: Two Sides of the Same Algorithmic Coin. In Agata Savary et al. 2018: 41-50.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "The Oxford Handbook of Construction Grammar",
                "authors": [],
                "year": 2013,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Thomas Hoffmann and Graeme Trousdale (eds.). 2013. The Oxford Handbook of Construction Grammar. Oxford University Press, Oxford/New York.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "A large scale evaluation of distributional semantic models: Parameters, interactions and model selection",
                "authors": [
                    {
                        "first": "Gabriella",
                        "middle": [],
                        "last": "Lapesa",
                        "suffix": ""
                    },
                    {
                        "first": "Stefan",
                        "middle": [],
                        "last": "Evert",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Transactions of the Association for Computational Linguistics",
                "volume": "2",
                "issue": "",
                "pages": "531--545",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gabriella Lapesa and Stefan Evert. 2014. A large scale evaluation of distributional semantic models: Parameters, interactions and model selection. Transactions of the Association for Computational Linguistics, 2:531-545.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions",
                "authors": [
                    {
                        "first": "Agata",
                        "middle": [],
                        "last": "Savary",
                        "suffix": ""
                    },
                    {
                        "first": "Carlos",
                        "middle": [],
                        "last": "Ramisch",
                        "suffix": ""
                    },
                    {
                        "first": "Jena",
                        "middle": [
                            "D"
                        ],
                        "last": "Hwang",
                        "suffix": ""
                    },
                    {
                        "first": "Nathan",
                        "middle": [],
                        "last": "Schneider",
                        "suffix": ""
                    },
                    {
                        "first": "Melanie",
                        "middle": [],
                        "last": "Andresen",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Agata Savary, Carlos Ramisch, Jena D. Hwang, Nathan Schneider, Melanie Andresen, Sameer Pradhan and Miriam R. L. Petruck (eds.). 2018. Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions, Coling 2018, Santa Fe NM, USA, Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Corpus, Concordance, Collocation",
                "authors": [
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Sinclair",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John Sinclair. 1991. Corpus, Concordance, Collocation. Oxford, Oxford University Press.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "The cpr-score",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "TABREF1": {
                "content": "<table><tr><td colspan=\"4\">Unseen MWE-based</td><td colspan=\"4\">Global MWE-based</td><td colspan=\"2\">Global Token-based</td></tr><tr><td>System Track</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>P</td><td>R</td><td colspan=\"2\">F1 Rank</td><td>P</td><td>R</td><td colspan=\"2\">F1 Rank</td><td>P</td><td>R</td><td>F1 Rank</td></tr><tr><td colspan=\"3\">HMSid2 closed 32.53 49.33 39.21</td><td>1</td><td colspan=\"3\">68.90 72.04 70.43</td><td>2</td><td colspan=\"2\">71.10 72.63 71.86</td><td>2</td></tr></table>",
                "num": null,
                "text": "Global results obtained with HMSid at the PARSEME 2020 Shared Task (French).",
                "type_str": "table",
                "html": null
            }
        }
    }
}