# Understanding Linearity Of Cross-Lingual Word Embedding Mappings

Xutan Peng† *x.peng@shef.ac.uk*

Chenghua Lin† *c.lin@shef.ac.uk*

Mark Stevenson† *mark.stevenson@shef.ac.uk*

Chen Li‡ *palchenli@tencent.com*

†*Department of Computer Science, The University of Sheffield*
‡*Applied Research Center, Tencent PCG*

Reviewed on OpenReview: *https://openreview.net/forum?id=8HuyXvbvqX*

## Abstract

The technique of Cross-Lingual Word Embedding (CLWE) plays a fundamental role in tackling Natural Language Processing challenges for low-resource languages. Its dominant approaches assume that the relationship between embeddings can be represented by a linear mapping, but the conditions under which this assumption holds have not been explored. This research gap has become critical recently, as evidence shows that relaxing mappings to be non-linear can lead to better performance in some cases. We, for the first time, present a theoretical analysis that identifies the preservation of analogies encoded in monolingual word embeddings as a *necessary and sufficient* condition for the ground-truth CLWE mapping between those embeddings to be linear. On a novel cross-lingual analogy dataset that covers five representative analogy categories for twelve distinct languages, we carry out experiments which provide direct empirical support for our theoretical claim. These results offer additional insight into the observations of other researchers and provide inspiration for the development of more effective cross-lingual representation learning strategies.

## 1 Introduction

Cross-Lingual Word Embedding (CLWE) methods encode words from two or more languages in a shared high-dimensional space in which vectors representing lexical items with similar meanings (regardless of language) are closely located. Compared with alternative techniques, such as cross-lingual pre-trained language models, CLWE is orders of magnitude more efficient in terms of training corpora1 and computational power requirements2. As a result, the topic has received significant attention as a promising means to support Natural Language Processing (NLP) for low-resource languages (including ancient languages) and has been used for a range of applications, e.g., Machine Translation (Herold et al., 2021), Sentiment Analysis (Sun et al., 2021), Question Answering (Zhou et al., 2021) and Text Summarisation (Peng et al., 2021b).

1For example, Kim et al. (2020) show that inadequate monolingual data size (fewer than one million *sentences*) is likely to lead to collapsed performance of XLM (Lample & Conneau, 2019) even for etymologically close language pairs. Meanwhile, CLWE can easily align word embeddings for African languages such as Amharic and Tigrinya, for which only millions of tokens are available (Zhang et al., 2020).

2For example, XLM-R (Conneau et al., 2020) was trained on 500 Tesla V100 GPUs, whereas the training of VecMap (Artetxe et al., 2018) can be finished on a single Titan Xp GPU.
The most successful CLWE approach, CLWE alignment, learns mappings between independently trained monolingual word vectors with very little, or even no, cross-lingual supervision (Ruder et al., 2019). One of the key challenges of these algorithms is the design of mapping functions. Motivated by the observation that word embeddings for different languages tend to be similar in structure (Mikolov et al., 2013b), many researchers have assumed that the mappings between cross-lingual word vectors are linear (Faruqui & Dyer, 2014; Lample et al., 2018b; Li et al., 2021).

Although models based on this assumption have demonstrated strong performance, it has recently been questioned. Researchers have claimed that the structure of multilingual word embeddings may not always be similar (Søgaard et al., 2018; Dubossarsky et al., 2020; Vulić et al., 2020), which led to the emergence of approaches relaxing the mapping linearity (Glavaš & Vulić, 2020; Wang et al., 2021a) or using nonlinear functions (Mohiuddin et al., 2020; Ganesan et al., 2021). These new methods can sometimes outperform the traditional linear counterparts, causing a debate around the suitability, or otherwise, of linear mappings. However, to the best of our knowledge, the majority of previous CLWE work has focused on empirical findings, and there has been no in-depth analysis of the conditions for the linearity assumption.

This paper approaches the problem from a novel perspective by establishing a link between the linearity of CLWE
mappings and the preservation of encoded monolingual analogies. Our work is motivated by the observation that word analogies can be solved via the composition of semantics based on vector arithmetic (Mikolov et al.,
2013c) and such linguistic regularities might be transferable across languages. More specifically, we notice that if analogies encoded in the embeddings of one language also appear in the embeddings of another, the corresponding multilingual vectors tend to form similar shapes (see Fig. 1), suggesting the CLWE mapping between them should be approximately linear. In other words, we suspect that the preservation of analogy encoding indicates the linearity of CLWE mappings.

![1_image_0.png](1_image_0.png)

Figure 1: Wiki vectors (see § 4.3) of English (left)
and French (right) analogy word pairs based on PCA (Wold et al., 1987). NB: We manually rotate the visualisation to highlight structural similarity.

Our hypothesis is verified both theoretically and empirically. We justify that the preservation of analogy encoding is a *sufficient and necessary* condition for the linearity of CLWE mappings. To provide empirical validation, we first define indicators to quantify the linearity of the ground-truth CLWE mapping (SLMP) and its preservation of analogy encoding (SPAE). Next, we build a novel cross-lingual word analogy corpus containing five analogy categories (both semantic and syntactic) for twelve languages that form pairs of diverse etymological distances. We then benchmark SLMP and SPAE on three representative series of word embeddings. In all setups tested, we observe a significant correlation between SLMP and SPAE, which provides empirical support for our hypothesis. With this insight, we offer explanations of why the linearity assumption occasionally fails, and consequently discuss how our research can benefit the development of more effective CLWE algorithms. We also recommend the use of SPAE to assess mapping linearity in CLWE applications. We release our data and code at https://github.com/Pzoom522/xANLG.

This paper's contributions are summarised as:
- Introduces the previously unnoticed relationship between the linearity of CLWE mappings and the preservation of encoded word analogies.

- Provides a theoretical analysis of this relationship.

- Describes the construction of a novel cross-lingual analogy test set with five categories of word pairs aligned across twelve diverse languages.

- Provides empirical evidence of our claim and introduces SPAE to estimate the analogy encoding preservation (and therefore the mapping linearity). We additionally demonstrate that SPAE can be used as an indicator of the relationship between monolingual word embeddings, independently of trained CLWEs.

- Discusses implications of these results, regarding the interpretation of previous results as well as the future development of cross-lingual representations.

## 2 Related Work

Linearity of CLWE Mapping. Mikolov et al. (2013b) discovered that the vectors of word translations exhibit similar structures across different languages. Researchers made use of this by assuming that mappings between multilingual embeddings could be modelled using simple linear transformations. This framework turned out to be effective in numerous studies which demonstrated that linear mappings are able to produce accurate CLWEs with weak or even no supervision (Artetxe et al., 2017; Lample et al., 2018b; Artetxe et al.,
2018; Wang et al., 2020; Li et al., 2021).

One way in which this is achieved is through the application of a normalisation technique called "mean centring", which (for each language) subtracts the average monolingual word vector from all word embeddings, so that this mean vector becomes the origin of the vector space (Xing et al., 2015; Artetxe et al., 2016; Ruder et al., 2019). This step has the effect of simplifying the mapping from being *affine* (i.e., equivalent to a shifting operation plus a linear mapping) to *linear* by removing the shifting operation.
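As a minimal illustration of this normalisation step (the array layout and helper name below are our own simplifications, not part of any released toolkit), mean centring amounts to a single subtraction per language:

```python
import numpy as np

def mean_centre(embeddings: np.ndarray) -> np.ndarray:
    """Shift a monolingual embedding matrix so its mean vector becomes the origin.

    `embeddings` is assumed to be an (n_words, dim) matrix. After centring,
    an affine ground-truth mapping Mx + b reduces to a purely linear one,
    because the shift b is absorbed into the choice of origin.
    """
    return embeddings - embeddings.mean(axis=0, keepdims=True)
```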

However, recent work has cast doubt on this linearity assumption, leading researchers to experiment with the use of non-linear mappings. Nakashole & Flauger (2018) and Wang et al. (2021a) pointed out that structural similarities may only hold across particular regions of the embedding spaces rather than over their entirety.

Søgaard et al. (2018) examined word vectors trained using different corpora, models and hyper-parameters, and concluded that configuration dissimilarity between the monolingual embeddings breaks the assumption that the mapping between them is linear. Patra et al. (2019) investigated various language pairs and discovered that a higher etymological distance is associated with degraded linearity of CLWE mappings. Vulić et al. (2020) additionally argued that factors such as limited monolingual resources may also weaken the linearity assumption.

These findings motivated work on designing non-linear mapping functions in an effort to improve CLWE performance. For example, Nakashole (2018) and Wang et al. (2021a) relaxed the linearity assumption by combining multiple linear CLWE mappings; Patra et al. (2019) developed a semi-supervised model that loosened the linearity restriction; Lubin et al. (2019) attempted to reduce the dissimilarity between multilingual embedding manifolds by refining learnt dictionaries; Glavaš & Vulić (2020) first trained a globally optimal linear mapping, then adjusted vector positions to achieve better accuracy; Mohiuddin et al. (2020)
used two independently pre-trained auto-encoders to introduce non-linearity to CLWE mappings; Ganesan et al. (2021) drew inspiration from the back-translation paradigm, framing CLWE training as explicitly solving a non-linear and bijective transformation between multilingual word embeddings. Despite these non-linear mappings outperforming their linear counterparts in many setups, in some settings linear mappings still seem more successful, e.g., the alignment between Portuguese and English word embeddings in Ganesan et al. (2021). Moreover, training non-linear mappings is typically more complex and thus requires more computational resources. Despite the significant recent attention this problem has received from the research community, it is still unclear under what conditions the linearity of CLWE mappings holds. This paper makes the first attempt to close this research gap by providing both theoretical and empirical contributions.

Analogy Encoding. Analogy is a fundamental concept within cognitive science (Gentner, 1983) that has received significant focus from the NLP community, since the observation that it can be represented using word embeddings and vector arithmetic (Mikolov et al., 2013c). A popular example based on the analogy
"*king is to man as queen is to woman*" shows that the vectors representing the four terms (xking, xman, x*queen* and x*woman*) exhibit the following relation:
xking − xman ≈ xqueen − x*woman*. (1)
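As an illustrative sketch (the helper name and cosine threshold are arbitrary choices of ours, not values from the literature), the relation in Eq. (1) can be checked numerically by comparing the directions of the two offset vectors:

```python
import numpy as np

def offsets_match(x_a, x_b, x_c, x_d, tol: float = 0.7) -> bool:
    """Check whether four word vectors encode an analogy in the sense of Eq. (1):
    the offset x_a - x_b should point in roughly the same direction as x_c - x_d."""
    u, v = x_a - x_b, x_c - x_d
    cosine = float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return cosine >= tol

# e.g. offsets_match(x_king, x_man, x_queen, x_woman) should hold for embeddings
# that encode the "king is to man as queen is to woman" analogy well.
```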
Since this discovery, the task of analogy completion has commonly been employed to evaluate the quality of pre-trained word embeddings (Mikolov et al., 2013c; Pennington et al., 2014; Levy & Goldberg, 2014a). This line of research has directly benefited downstream applications (e.g., representation bias removal (Prade &
Richard, 2021)) and other relevant domains (e.g., automatic knowledge graph construction (Wang et al.,
2021b)). Theoretical analysis has demonstrated a link between embeddings' analogy encoding and the Pointwise Mutual Information of the training corpus (Arora et al., 2016; Gittens et al., 2017; Allen &
Hospedales, 2019; Ethayarajh et al., 2019; Fournier & Dunbar, 2021). Nonetheless, as far as we are aware, the connection between the preservation of analogy encoding and the linearity of CLWE mappings has not been previously investigated.

## 3 Theoretical Basis

We denote a ground-truth CLWE mapping as M : X → Y, where X and Y are monolingual word embeddings independently trained for languages LX and LY, respectively.

Proposition. Encoded analogies are preserved during the CLWE mapping $\mathcal{M}$ $\iff$ $\mathcal{M}$ is affine.

Remarks. Following Eq. (1), the preservation of analogy encoding under a mapping can be formalised as

$$\mathbf{x}_{\alpha}-\mathbf{x}_{\beta}=\mathbf{x}_{\gamma}-\mathbf{x}_{\theta}\implies\mathcal{M}(\mathbf{x}_{\alpha})-\mathcal{M}(\mathbf{x}_{\beta})=\mathcal{M}(\mathbf{x}_{\gamma})-\mathcal{M}(\mathbf{x}_{\theta}),\tag{2}$$

where $\mathbf{x}_{\alpha}, \mathbf{x}_{\beta}, \mathbf{x}_{\gamma}, \mathbf{x}_{\theta} \in \mathbf{X}$.

If $\mathcal{M}$ is affine, for $d$-dimensional monolingual embeddings $\mathbf{X}$ we have

$$\mathcal{M}(\mathbf{x}) := M\mathbf{x} + \mathbf{b}, \tag{3}$$

where $\mathbf{x} \in \mathbf{X}$, $M \in \mathbb{R}^{d \times d}$, and $\mathbf{b} \in \mathbb{R}^{d \times 1}$.
Proof: Eq. (2) $\implies$ Eq. (3). To begin with, by adopting the mean centring operation in § 2, we shift the coordinates of the space of $\mathbf{X}$, ensuring

$$\mathcal{M}(\vec{0}) = \vec{0}. \tag{4}$$

This step greatly simplifies the derivations afterwards, because from now on we just need to demonstrate that $\mathcal{M}$ is a *linear mapping*, i.e., it can be written as $M\mathbf{x}$. By definition, this is equivalent to showing that $\mathcal{M}$ preserves both the operations of addition (a.k.a. additivity) and scalar multiplication (a.k.a. homogeneity).

Additivity can be proved by observing that $(\mathbf{x}_i + \mathbf{x}_j) - \mathbf{x}_j = \mathbf{x}_i - \vec{0}$ and therefore,

$$(\mathbf{x}_i + \mathbf{x}_j) - \mathbf{x}_j = \mathbf{x}_i - \vec{0} \ \xrightarrow{\text{Eq. (2)}} \ \mathcal{M}(\mathbf{x}_i + \mathbf{x}_j) - \mathcal{M}(\mathbf{x}_j) = \mathcal{M}(\mathbf{x}_i) - \mathcal{M}(\vec{0}) \ \xrightarrow{\text{Eq. (4)}} \ \mathcal{M}(\mathbf{x}_i + \mathbf{x}_j) = \mathcal{M}(\mathbf{x}_i) + \mathcal{M}(\mathbf{x}_j). \tag{5}$$
Homogeneity can be proved in four steps.

- **Step 1**: Observe that $\vec{0} - \mathbf{x}_i = -\mathbf{x}_i - \vec{0}$; similar to Eq. (5) we can show that

$$\vec{0} - \mathbf{x}_i = -\mathbf{x}_i - \vec{0} \ \xrightarrow{\text{Eq. (2)}} \ \mathcal{M}(\vec{0}) - \mathcal{M}(\mathbf{x}_i) = \mathcal{M}(-\mathbf{x}_i) - \mathcal{M}(\vec{0}) \ \xrightarrow{\text{Eq. (4)}, \ \times(-1)} \ \mathcal{M}(\mathbf{x}_i) = -\mathcal{M}(-\mathbf{x}_i). \tag{6}$$

- **Step 2**: Using *mathematical induction*, for arbitrary $\mathbf{x}_i$, we show that

$$\forall m \in \mathbb{N}^{+}, \ \mathcal{M}(m\mathbf{x}_i) = m\mathcal{M}(\mathbf{x}_i) \tag{7}$$

holds, where $\mathbb{N}^{+}$ is the set of positive natural numbers, as follows.

Base Case: Eq. (7) trivially holds when $m = 1$.

Inductive Step: Assume the inductive hypothesis for $m = k$ ($k \in \mathbb{N}^{+}$), i.e.,

$$\mathcal{M}(k\mathbf{x}_i) = k\mathcal{M}(\mathbf{x}_i). \tag{8}$$

Then, as required, when $m = k + 1$,

$$\mathcal{M}\big((k+1)\mathbf{x}_i\big) \ \xrightarrow{\text{Eq. (5)}} \ \mathcal{M}(k\mathbf{x}_i) + \mathcal{M}(\mathbf{x}_i) \ \xrightarrow{\text{Eq. (8)}} \ k\mathcal{M}(\mathbf{x}_i) + \mathcal{M}(\mathbf{x}_i) = (k+1)\mathcal{M}(\mathbf{x}_i).$$
- **Step 3**: We further justify that

$$\forall n \in \mathbb{N}^{+}, \ \mathcal{M}\Big(\frac{\mathbf{x}_i}{n}\Big) = \frac{\mathcal{M}(\mathbf{x}_i)}{n}, \tag{9}$$

which, due to Eq. (4), trivially holds when $n = 1$; as for $n > 1$,

$$\mathcal{M}\Big(\frac{\mathbf{x}_i}{n}\Big) = \mathcal{M}\Big(\mathbf{x}_i + \big(-\tfrac{n-1}{n}\mathbf{x}_i\big)\Big) \ \xrightarrow{\text{Eq. (5)}} \ \mathcal{M}(\mathbf{x}_i) + \mathcal{M}\big(-\tfrac{n-1}{n}\mathbf{x}_i\big) \ \xrightarrow{\text{Eq. (6)}} \ \mathcal{M}(\mathbf{x}_i) - \mathcal{M}\big(\tfrac{n-1}{n}\mathbf{x}_i\big) \ \xrightarrow{\text{Eq. (7)}} \ \mathcal{M}(\mathbf{x}_i) - (n-1)\mathcal{M}\Big(\frac{\mathbf{x}_i}{n}\Big)$$

directly yields $\mathcal{M}\big(\frac{\mathbf{x}_i}{n}\big) = \frac{\mathcal{M}(\mathbf{x}_i)}{n}$, i.e., Eq. (9).

- **Step 4**: Considering the set of rational numbers $\mathbb{Q} = \{0\} \cup \{\pm\frac{m}{n} \,|\, \forall m, n\}$, Eqs. (4), (6), (7) and (9) jointly justify the homogeneity of $\mathcal{M}$ for $\mathbb{Q}$. Because $\mathbb{Q} \subset \mathbb{R}$ is a *dense set*, the homogeneity of $\mathcal{M}$ also holds over $\mathbb{R}$, see Kleiber & Pervin (1969).

Finally, combined with the additivity that has already been justified above, the linearity of the CLWE mapping $\mathcal{M}$ is proved, i.e., Eq. (2) $\implies$ Eq. (3).

Proof: Eq. (3) $\implies$ Eq. (2). Justifying this direction is quite straightforward:

$$\begin{split}\mathbf{x}_{\alpha}-\mathbf{x}_{\beta}=\mathbf{x}_{\gamma}-\mathbf{x}_{\theta}&\implies M\mathbf{x}_{\alpha}-M\mathbf{x}_{\beta}=M\mathbf{x}_{\gamma}-M\mathbf{x}_{\theta}\\ &\implies M\mathbf{x}_{\alpha}+\mathbf{b}-(M\mathbf{x}_{\beta}+\mathbf{b})=M\mathbf{x}_{\gamma}+\mathbf{b}-(M\mathbf{x}_{\theta}+\mathbf{b})\\ &\implies\mathcal{M}(\mathbf{x}_{\alpha})-\mathcal{M}(\mathbf{x}_{\beta})=\mathcal{M}(\mathbf{x}_{\gamma})-\mathcal{M}(\mathbf{x}_{\theta}). \qquad\square\end{split}$$
Summarising the proofs for both the forward and reverse directions, we conclude that the proposition holds.

Please note that the high-level assumption of our derivations is that word embedding spaces can be treated as continuous vector spaces, an assumption commonly adopted in previous work, e.g., Levy & Goldberg (2014b), Hashimoto et al. (2016), Zhang et al. (2018), and Ravfogel et al. (2020). Nevertheless, we argue that the inherent discreteness of word embeddings should not be ignored. The following sections complement this theoretical insight via experiments which confirm that the claim holds empirically.
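The reverse direction can also be illustrated with a small numerical check; the dimensionality, random seed and affine map below are arbitrary choices used purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
M, b = rng.normal(size=(d, d)), rng.normal(size=d)
affine = lambda x: M @ x + b                      # an arbitrary affine map, Eq. (3)

# Build four vectors whose offsets satisfy x_a - x_b = x_c - x_t, i.e. Eq. (2)'s premise.
x_b, x_t, offset = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
x_a, x_c = x_b + offset, x_t + offset

# The mapped offsets coincide as well, illustrating Eq. (3) => Eq. (2).
assert np.allclose(affine(x_a) - affine(x_b), affine(x_c) - affine(x_t))
```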

## 4 Experiment

Our experimental protocol assesses the linearity of the mapping between each pair of pre-trained monolingual word embeddings. We also quantify the extent to which this mapping preserves encoded analogies, i.e.,
satisfies the condition of Eq. (2). We then analyse the correlation between these two indicators. A strong correlation provides evidence to support our theory, and *vice versa*. The indicators used are described in
§ 4.1. Unfortunately, there are no suitable publicly available corpora for our proposed experiments, so we develop a novel word-level analogy test set that is fully parallel across languages, namely xANLG (see § 4.2).

The pre-trained embeddings used for the tests are described in § 4.3.

## 4.1 Indicators

## 4.1.1 Linearity Of CLWE Mapping

Direct measurement of the linearity of a ground-truth CLWE mapping is challenging. One relevant approach is to benchmark the similarity between multilingual word embeddings, where the mainstream and state-of-the-art indicators are the so-called spectral-based algorithms (Søgaard et al., 2018; Dubossarsky et al., 2020). However, such methods assume the number of tested vectors to be much larger than the number of dimensions, which does not apply in our scenario (see § 4.2). Therefore, we choose to evaluate linearity via the goodness-of-fit of the optimal linear CLWE mapping, which is measured as

$$\mathcal{S}_{\rm LMP} := -||M^{\star}X - Y||_{F} / r \quad \text{with} \quad M^{\star} = \arg\min_{M} ||MX - Y||_{F},$$

where $||\cdot||_{F}$ and $r$ denote the Frobenius norm and the number of rows of $X$, respectively. To obtain matrices $X$ and $Y$, from $\mathbf{X}$ and $\mathbf{Y}$ respectively, we first retrieve the vectors corresponding to the lexicons of a ground-truth LX-LY dictionary and concatenate them into two matrices. More specifically, if two vectors (represented as rows) share the same index in the two matrices (one for each language), their corresponding words form a translation pair, i.e., the rows of these matrices are aligned. "Mean centring" is applied to satisfy Eq. (4). For fair comparisons across different mapping pairs, in each of $X$ and $Y$, rows are standardised by scaling the mean Euclidean norm to 1. Generic Procrustes Analysis (not necessarily orthogonal) (Bookstein, 1992) is applied to find $M^{\star}$.

Larger values of SLMP (i.e., smaller fitting residuals) mean that the optimal linear mapping is an accurate model of the true relationship between the embeddings, and *vice versa*. SLMP therefore indicates the degree to which CLWE mappings are linear.
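A minimal sketch of how SLMP can be computed is given below; it reads the "generic Procrustes" step as an unconstrained least-squares fit and adopts a rows-as-words layout, both of which are our own simplifications rather than a description of the released code:

```python
import numpy as np

def s_lmp(X: np.ndarray, Y: np.ndarray) -> float:
    """Goodness-of-fit of the optimal linear map between aligned embedding matrices.

    X and Y are (r, d) matrices whose i-th rows correspond to a translation pair.
    Pre-processing follows the description above: mean centring (Eq. (4)) and
    scaling so that the mean row norm of each matrix is 1.
    """
    def prep(A):
        A = A - A.mean(axis=0, keepdims=True)        # mean centring
        return A / np.linalg.norm(A, axis=1).mean()  # mean Euclidean norm -> 1

    X, Y = prep(X), prep(Y)
    # Optimal (not necessarily orthogonal) linear map in the least-squares sense.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return -np.linalg.norm(X @ W - Y) / X.shape[0]   # -||error||_F / r
```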

## 4.1.2 Preservation Of Analogy Encoding

To assess how well analogies are preserved across embeddings, we start by probing how analogies are encoded in the monolingual word embeddings. We use the set-based LRCos, the state-of-the-art analogy mining tool for static word embeddings (Drozd et al., 2016).3 It provides a score in the range of 0 to 1, indicating the correctness of analogy completion in a single language. To extend this to a cross-lingual setup, we further compute the geometric mean:

$${\mathcal{S}}_{\mathrm{PAE}}:={\sqrt{\mathrm{LRCos}(\mathbf{X})\times\mathrm{LRCos}(\mathbf{Y})}},$$

where LRCos(·) is the accuracy of analogy completion provided by LRCos for a given embedding space. To simplify our discussion and analysis from now on, when performing CLWE mappings, by default we take as the source the monolingual embeddings that better encode analogies, i.e., we restrict LRCos(X) ≥ LRCos(Y). SPAE = 1 indicates that all analogies are well encoded in both embeddings and are preserved by the ground-truth mapping between them. On the other hand, lower SPAE values indicate deviation from the condition of Eq. (2).
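The sketch below illustrates how SPAE could be computed; as a self-contained stand-in for LRCos it scores analogy completion with the simpler 3CosAdd rule mentioned in footnote 3, and the toy word-to-vector interface is an assumption of ours:

```python
import math
import numpy as np

def analogy_accuracy(emb: dict, pairs: list) -> float:
    """Monolingual analogy-completion accuracy over one analogy category.

    `emb` maps words to unit-normalised vectors; `pairs` is a list of (a, b)
    word pairs. Each ordered pair of pairs yields one question "a1:b1 :: a2:?",
    answered with the 3CosAdd rule (a stand-in for LRCos)."""
    correct = total = 0
    for a1, b1 in pairs:
        for a2, b2 in pairs:
            if (a2, b2) == (a1, b1):
                continue
            query = emb[b1] - emb[a1] + emb[a2]
            banned = {a1, b1, a2}
            pred = max((w for w in emb if w not in banned),
                       key=lambda w: float(emb[w] @ query))
            correct += int(pred == b2)
            total += 1
    return correct / total

def s_pae(acc_x: float, acc_y: float) -> float:
    """S_PAE: geometric mean of the two monolingual accuracies."""
    return math.sqrt(acc_x * acc_y)
```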

## 4.1.3 Validity Of SPAE

As an aside, we explore the properties of the SPAE indicator to demonstrate its robustness for the interested reader. The score produced by LRCos is relative to a pre-specified set of *known* analogies. In theory, a low LRCos(X) score may not reliably indicate that X does not encode analogies well, since there may be other word pairings within that set that produce higher scores. This naturally raises a question: *is SPAE really a valid indicator of analogy encoding preservation?* In other words, it is necessary to investigate whether there exists an *unknown* analogy word set encoded by the tested embeddings to an equal or higher degree. If there is, then SPAE may not reflect the preservation of analogy encoding completely, as mismatched analogy test sets may lead to low LRCos scores even for monolingual embeddings that encode analogies well. We demonstrate that the problem can be considered as an optimal transportation task and that SPAE is guaranteed to be a reliable indicator.

As analysed by Ethayarajh et al. (2019), the degree to which word pairs are encoded as analogies in word embeddings is equivalent to the likelihood that the end points of any two corresponding vector pairs form a high-dimensional coplanar parallelogram. More formally, this task is to identify

$$\mathbf{P}^{\star} = \arg\min_{\mathbf{P}} \sum_{\mathbf{x} \in \mathbf{X}} C\big(\mathcal{T}^{\mathbf{P}}(\mathbf{x})\big), \tag{10}$$

3We have tried alternatives including 3CosAdd (Mikolov et al., 2013a), PairDistance (Levy & Goldberg, 2014a) and 3CosMul (Levy et al., 2015), verifying that they are less accurate than LRCos in most cases. Still, in the experiments they all exhibit similar trends as shown in Tab. 2.

![6_image_0.png](6_image_0.png)

Figure 2: An example of solving $\mathcal{T}^{\mathbf{P}}(\cdot)$ in Eq. (11), with $\mathbf{P} = \{(\mathbf{x}_1, \mathbf{x}_2), (\mathbf{x}_3, \mathbf{x}_4), (\mathbf{x}_5, \mathbf{x}_6), (\mathbf{x}_7, \mathbf{x}_8)\}$. In the figure we adjust the positions of $\mathbf{x}_1$, $\mathbf{x}_3$, $\mathbf{x}_5$ and $\mathbf{x}_7$ in the last step, but it is worth noting that there also exist other feasible $\mathcal{T}^{\mathbf{P}}(\cdot)$ given $\mathbf{p}^{\star}$, e.g., tuning $\mathbf{x}_2$, $\mathbf{x}_4$, $\mathbf{x}_6$ and $\mathbf{x}_8$ instead.
where $\mathbf{P}$ is one possible pairing of vectors in $\mathbf{X}$ and $C(\cdot)$ is the cost of a given transportation scheme. $\mathcal{T}^{\mathbf{P}}(\cdot)$ denotes the corresponding cost-optimal process of moving vectors to satisfy

$$\forall \{(\mathbf{x}_{\alpha}, \mathbf{x}_{\beta}), (\mathbf{x}_{\gamma}, \mathbf{x}_{\theta})\} \subseteq \mathbf{P}, \quad \mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\alpha}) - \mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\beta}) = \mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\gamma}) - \mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\theta}), \tag{11}$$

i.e., the end points of $\mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\alpha})$, $\mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\beta})$, $\mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\gamma})$ and $\mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\theta})$ form a parallelogram.

Therefore, in each language and analogy category of xANLG, we first randomly sample vector pairings, leading to 1e5 different $\mathbf{P}$. Next, for each of them, we need to obtain the $\mathcal{T}^{\mathbf{P}}(\cdot)$ that minimises $\sum_{\mathbf{x} \in \mathbf{X}} C\big(\mathcal{T}^{\mathbf{P}}(\mathbf{x})\big)$ in Eq. (10). Our algorithm is explained using the example in Fig. 2, where the cardinality of $\mathbf{X}$ and $\mathbf{P}$ is 8 and 4, respectively.

- **Step 1**: Link the end points of the vectors within each word pair; our target is then to adjust these end points so that all connecting lines not only have equal length but also remain parallel.

- **Step 2**: For each vector pair $(\mathbf{x}_{\alpha}, \mathbf{x}_{\beta}) \in \mathbf{P}$, vectorise its connecting line into an offset vector $\mathbf{v}_{\alpha-\beta} = \mathbf{x}_{\alpha} - \mathbf{x}_{\beta}$.

- **Step 3**: As the start points of all such offset vectors are aggregated at $\vec{0}$, seek a vector $\mathbf{p}^{\star}$ that minimises the total transportation cost between the end point of $\mathbf{p}^{\star}$ and those of all offset vectors (again, note they share a start point at $\vec{0}$).

- **Step 4**: Perform the transportation so that all offset vectors become $\mathbf{p}^{\star}$, i.e.,

$$\forall (\mathbf{x}_{\alpha}, \mathbf{x}_{\beta}) \in \mathbf{P}, \ \mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\alpha}) - \mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\beta}) = \mathbf{p}^{\star}.$$

In this way, the tuned vector pairs can always form perfect parallelograms. Obviously, as $\mathbf{p}^{\star}$ is at the cost-optimal position (see Step 3), this vector-adjustment scheme is also cost-optimal. Solving $\mathbf{p}^{\star}$ in high dimensions is non-trivial in the real world and is a special case of the NP-hard Facility Location Problem (a.k.a. the P-Median Problem) (Kariv & Hakimi, 1979). We therefore use the scipy.optimize.fmin implementation of the Nelder-Mead simplex algorithm (Nelder & Mead, 1965) to provide a good-enough solution. To reach convergence, with the mean offset vector as the initial guess, we set both the absolute errors in parameter and function value between iterations to 1e-4. We experimented with implementing $C(\cdot)$ using mean Euclidean, Taxicab and Cosine distances respectively. For all analogy categories in all languages, $\mathbf{P}^{\star}$ coincides perfectly with the pre-defined pairing of xANLG. This analysis provides evidence that the situation where *an unknown kind of analogy is better encoded than the ones used* does not occur in practice. SPAE is thus trustworthy.
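A minimal sketch of this search is shown below; the array layout and helper name are our own conventions and, for brevity, only the Euclidean variant of $C(\cdot)$ is implemented:

```python
import numpy as np
from scipy.optimize import fmin

def optimal_offset(first_words: np.ndarray, second_words: np.ndarray) -> np.ndarray:
    """Find p* minimising the total cost of moving every offset vector onto a
    single shared offset (Steps 2-3 above).

    first_words and second_words are (n, d) arrays holding the vectors of the
    first and second word of each pair in P.
    """
    offsets = first_words - second_words                         # Step 2: v_{alpha-beta}
    cost = lambda p: np.linalg.norm(offsets - p, axis=1).mean()  # mean Euclidean cost
    # Step 3: Nelder-Mead from the mean offset vector as the initial guess.
    return fmin(cost, x0=offsets.mean(axis=0), xtol=1e-4, ftol=1e-4, disp=False)
```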

## 4.2 Datasets

Calculating the correlation between SLMP and SPAE requires a cross-lingual word analogy dataset. This resource would allow us to simultaneously (1) construct two aligned matrices X and Y to check the linearity of CLWE mappings, and (2) obtain the monolingual LRCos scores of both X and Y. Three relevant resources were identified, although none of them is suitable for our study.

| Corpus | Category | # | Example pair (en) |
|---|---|---|---|
| xANLGG (de, en, es, fr, hi, pl) | CAP† | 31 | Budapest : Hungary |
| | GNDR† | 30 | son : daughter |
| | NATL† | 34 | Peru : Peruvian |
| | G-PL‡ | 31 | child : children |
| xANLGM (en, et, fi, hr, lv, ru, sl) | ANIM† | 32 | eagle : bird |
| | G-PL‡ | 31 | machine : machines |

Table 1: Summary of and examples from the xANLG corpus. # denotes the number of cross-lingual analogy word pairs in each language. †Semantic: animal-species|ANIM, capital-world|CAP, male-female|GNDR, nation-nationality|NATL. ‡Syntactic: grammar-plural|G-PL.
- Brychcín et al. (2019) described a cross-lingual analogy dataset consisting of word pairs from six closely related European languages, but it has never been made publicly available.

- Ulčar et al. (2020) open-sourced the MCIWAD dataset for nine languages, but the analogy words in different languages are not parallel4.

- Garneau et al. (2021) produced the cross-lingual WiQueen dataset. Unfortunately, a large part of its entries are proper nouns or multi-word terms rather than single words, leading to low coverage of the embeddings' vocabularies.

Consequently, we develop xANLG, which we believe to be the first (publicly available) cross-lingual word analogy corpus. For consistency with previous work, xANLG is bootstrapped using established monolingual analogies and cross-lingual dictionaries. xANLG is constructed by starting with a *bilingual* analogy dataset, say, that for LX and LY. Within each analogy category, we first translate the word pairs of the LX analogy corpus into LY, using an available LX-LY dictionary. Next, we check whether any translation coincides with a word pair in the original LY analogy corpus. If it does, the word pair (in both LX and LY) is added to the bilingual dataset. This process is repeated for multiple languages to form a cross-lingual corpus.
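As a sketch of this bootstrapping step (the function and argument names are illustrative rather than the actual build script), one bilingual analogy category can be aligned as follows:

```python
def align_analogy_pairs(pairs_x, pairs_y, dict_x2y):
    """Keep an L_X analogy pair only if its dictionary translation coincides
    with a pair already present in the L_Y analogy corpus.

    pairs_x / pairs_y: lists of (a, b) analogy word pairs for L_X / L_Y;
    dict_x2y: maps an L_X word to an iterable of its L_Y translations
    (e.g. drawn from the MUSE dictionary).
    """
    aligned, target = [], set(pairs_y)
    for a, b in pairs_x:
        matches = [(ta, tb)
                   for ta in dict_x2y.get(a, ())
                   for tb in dict_x2y.get(b, ())
                   if (ta, tb) in target]
        if matches:
            aligned.append(((a, b), matches[0]))
    return aligned
```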

We use the popular MUSE dictionary (Lample et al., 2018a) which contains a wide range of language pairs.

Two existing collections of analogies are utilised:
- **Google Analogy Test Set (GATS)** (Mikolov et al., 2013c), the *de facto* standard benchmark of embedding-based analogy solving. We adopt its extended English version, Bigger Analogy Test Set
(BATS) (Gladkova et al., 2016), supplemented with several datasets in other languages inspired by the original GATS: French, Hindi and Polish (Grave et al., 2018), German (Köper et al., 2015) and Spanish (Cardellino, 2019).

- The aforementioned Multilingual Culture-Independent Word Analogy Datasets
(MCIWAD) (Ulčar et al., 2020).

Due to the differing characteristics of these datasets (e.g., the composition of analogy categories), they are used to produce two separate corpora: xANLGG and xANLGM. Only categories containing at least 30 word pairs aligned across all languages in the dataset were included. For comparison, 60% of the semantic analogy categories in the commonly used GATS dataset contain fewer than 30 word pairs. The rationale for selecting this value was that it allows a reasonable number of analogy completion questions to be generated.5 Information in xANLGG and xANLGM for the capital-country category of Hindi was supplemented with manual translations by native speakers. In addition, each analogy included in the dataset was checked by at least one fluent speaker of the relevant language to ensure that it is valid.

4Personal communication with the authors.

530 word pairs can be used to generate as many as 3480 unique analogy completion questions such as "king:man :: *queen*:?" (see Appendix A).

The xANLG dataset contains five distinct analogy categories, including both syntactic (morphological) and semantic analogies, and twelve languages from a diverse range of families (see Tab. 1). From Indo-European languages, one belongs to the Indo-Aryan branch (Hindi|hi), one to the Baltic branch (Latvian|lv), two to the Germanic branch (English|en, German|de), two to the Romance branch (French|fr, Spanish|es) and four to the Slavonic branch (Croatian|hr, Polish|pl, Russian|ru, Slovene|sl). Two non-Indo-European languages, Estonian|et and Finnish|fi, both from the Finnic branch of the Uralic family, are also included. In total, they form 15 and 21 language pairs for xANLGG and xANLGM, respectively. These pairs span multiple etymological combinations, i.e., intra-language-branch (e.g., es-fr), inter-language-branch (e.g., de-ru) and inter-language-family (e.g., hi-et).

## 4.3 Word Embeddings

To cover the language pairs used in xANLG, we make use of static word embeddings pre-trained on the twelve languages used in the resource. These embeddings consist of three representative open-source series that employ different training corpora, are based on different embedding algorithms, and have different vector dimensions.

- **Wiki**6: 300-dimensional, trained on Wikipedia using the Skip-Gram version of FastText (refer to Bojanowski et al. (2017) for details).

- **Crawl**7: 300-dimensional, trained on CommonCrawl plus Wikipedia using FastText-CBOW.

- **CoNLL**8: 100-dimensional, trained on the CoNLL corpus (without lemmatisation) using Word2Vec (Mikolov et al., 2013c).

## 5 Result

Both Spearman's rank-order (ρ) and Pearson product-moment (r) correlation coefficients are computed to measure the correlation between SLMP and SPAE. Note that it is not possible to compute the correlations over all pairs jointly because (1) the number of dimensions varies across embedding series, and (2) the source and target embeddings have been pre-processed independently for different mappings. Instead, results are grouped by embedding method and analogy category.
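For reference, the grouped correlations (and the p-values reported with Tab. 2) can be obtained directly from SciPy; the helper name here is illustrative:

```python
from scipy.stats import pearsonr, spearmanr

def correlate(s_lmp_scores, s_pae_scores):
    """Spearman's rho and Pearson's r (with p-values) between the paired
    S_LMP and S_PAE values of one (embedding series, analogy category) group."""
    rho, p_rho = spearmanr(s_lmp_scores, s_pae_scores)
    r, p_r = pearsonr(s_lmp_scores, s_pae_scores)
    return (rho, p_rho), (r, p_r)
```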

Figures in Tab. 2 show that a significant positive correlation between SPAE and SLMP is observed for all setups. In terms of the Spearman's ρ, among the 18 groups, 5 exhibit *very strong* correlation (ρ ≥ 0.80) (with a maximum at 0.96 for CoNLL embeddings on CAP of xANLGG), 4 show *strong* correlation (0.80 > ρ ≥ 0.70),
and the others have *moderate* correlation (0.70 > ρ ≥ 0.50) (with a minimum at 0.58: CoNLL embeddings on ANIM and G-PL of xANLGM). Interestingly, although we do not assume a linear relationship in § 3, large values for the Pearson's r are obtained in practice. To be exact, 4 groups indicate very strong correlation, 6 have strong correlation, while others retain moderate correlation (the minimum r value is 0.58: Wiki embeddings on CAP and G-PL of xANLGG). These results provide empirical evidence that supplements our theoretical analysis (§ 3) of the relationship between linearity of mappings and analogy preservation.

In addition, we explored whether the analogy type (i.e., semantic or syntactic) affects the correlation. To bootstrap the analysis, for both kinds of correlation coefficients, we divide the 18 experiment groups into two splits, i.e., 12 semantic ones and 6 syntactic ones. After that, we compute a two-treatment ANOVA (Fisher, 1925). For both Spearman's ρ and Pearson's r, the results are not significant at p < 0.1. Therefore, we conclude that the connection between CLWE mapping linearity and analogy encoding preservation holds across analogy types. We thus recommend testing SPAE *before* implementing CLWE alignment as an indicator of whether a linear transformation is a good approximation of the ground-truth CLWE mapping.
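The two-treatment comparison amounts to a one-way ANOVA with two groups; a minimal sketch using SciPy (the helper name is ours) is:

```python
from scipy.stats import f_oneway

def analogy_type_anova(semantic_scores, syntactic_scores):
    """One-way ANOVA over the 12 semantic and 6 syntactic correlation
    coefficients; a p-value >= 0.1 indicates no significant difference
    between the two analogy types."""
    f_stat, p_value = f_oneway(semantic_scores, syntactic_scores)
    return f_stat, p_value
```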

6https://fasttext.cc/docs/en/pretrained-vectors.html

7https://fasttext.cc/docs/en/crawl-vectors.html

8http://vectors.nlpl.eu/repository/

![9_image_0.png](9_image_0.png)

Table 2: Correlation coefficients (Spearman's ρ and Pearson's r) between SLMP and SPAE. For all groups, we conduct significance tests to estimate the p-value. Empirically, the p-value is always less than 1e-2 (in most groups it is even less than 1e-3), indicating a very high confidence level for the experiment results. To facilitate future research and analyses, we present the raw SLMP and LRCos data in Appendix B.
Although there are strong correlations between the measures, they are not perfect. We therefore carried out further investigation into the data points in Tab. 2 that do not follow the overall trend. Firstly, we identified that some are associated with "crowded" embedding regions, in which the correct answer to an analogy question is not ranked highest by LRCos but the top candidate is a polysemous term (Rogers et al., 2017).

One example is the LRCos score of the CAP analogy for pl's Wiki embeddings, which was underestimated.

If we consider the three highest ranked terms, rather than only the top term, then the overall ρ and r of
"Wiki: CAP" (the first cell in Tab. 2) will increase sharply to 0.79 and 0.76, respectively.

Secondly, we noticed that in certain cases the source and target vectors of a word pair are too close (i.e., the distance between them is near zero). This phenomenon introduces noise into the results of analogy metrics such as LRCos (Linzen, 2016; Bolukbasi et al., 2016), and consequently impacts SPAE. For example, the mean cosine distance between G-PL pairs is smaller in xANLGM (0.18) than xANLGG (0.24). Therefore, the SPAE for G-PL is less reliable for xANLGM than xANLGG, leading to a lower correlation.

## 6 Application: Predicting Relationship Between Monolingual Word Embeddings

As discussed in § 2, in many scenarios linear CLWE mappings outperform their non-linear counterparts, while in other setups non-linear CLWE mappings are more successful. Therefore, an indicator that predicts the relationship between independently pre-trained monolingual word embeddings, and thus helps decide whether to use a linear or non-linear mapping without training actual CLWEs, would be beneficial.
| Language pair | CCA (1K) | CCA (3K) | CCA (5K) | Proc (1K) | Proc (3K) | Proc (5K) | Proc-B (1K) | Proc-B (3K) | DLV (1K) | DLV (3K) | DLV (5K) | RCSLS (1K) | RCSLS (3K) | RCSLS (5K) | S̄PAE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| en-fi | .26 | .35 | .38 | .27 | .37 | .40 | .36 | .38 | .27 | .37 | .40 | .31 | .40 | .44 | .41 |
| en-hr | .22 | .30 | .33 | .23 | .31 | .34 | .30 | .34 | .23 | .31 | .33 | .27 | .36 | .38 | .32 |
| en-ru | .34 | .43 | .45 | .35 | .45 | .46 | .42 | .45 | .35 | .44 | .47 | .40 | .49 | .51 | .46 |
| fi-hr | .17 | .26 | .29 | .19 | .27 | .29 | .26 | .29 | .18 | .27 | .29 | .21 | .30 | .32 | .23 |
| fi-ru | .21 | .31 | .34 | .23 | .31 | .34 | .32 | .33 | .23 | .31 | .34 | .26 | .34 | .38 | .33 |
| hr-ru | .26 | .35 | .37 | .27 | .35 | .37 | .35 | .37 | .26 | .35 | .37 | .29 | .38 | .40 | .26 |
| Spearman's ρ | .83 | .82 | .86 | .83 | .84 | .88 | .83 | .86 | .84 | .84 | .87 | .87 | .88 | .90 | |
Table 3: Spearman's ρ between the Word Translation performance (MRR) of linear-mapping-based CLWE
methods (from Glavaš et al. (2019); Proc-B's performance with 5K seed dictionary was not available) and the average analogy encoding preservation score (S¯PAE).

Use of this indicator has the potential to reduce the resources required to find optimal CLWEs (e.g., some recent approaches need several hours of processing on modern GPUs (Peng et al., 2021a; Ormazabal et al., 2021)), with corresponding reductions in carbon footprint.

The proposed SPAE metric, which can be obtained within several minutes on a single CPU, can be leveraged as such an indicator. A high SPAE score suggests that the linear assumption holds strongly for the ground-truth CLWE mapping, so it is feasible to train a linear CLWE mapping; otherwise, non-linear approaches are recommended. To demonstrate this idea in practice, we revisited a systematic evaluation of CLWE models based on linear mappings (Glavaš et al., 2019), which reported the Mean Reciprocal Rank (MRR) of five representative linear-mapping-based CLWE approaches on the Word Translation task (the de facto standard for CLWEs). We focus on six language pairs (en-fi, en-hr, en-ru, fi-hr, fi-ru, hr-ru) as they are covered by both xANLGM
and the dataset of Glavaš et al. (2019). Additionally, only Wiki embeddings were involved in the experiments of Glavaš et al. (2019). Thus, for each language pair, we aggregated SPAE of different analogy categories for Wiki embeddings, then calculated the average, S¯PAE.

Results are shown in Tab. 3, where the Spearman's ρ between S̄PAE and Word Translation performance is highlighted. Strong positive correlations are observed in all setups that were tested. These results demonstrate that S̄PAE provides an accurate indication of the real-world performance of linear CLWE mappings, regardless of the language pair, mapping algorithm, or level of supervision (i.e., size of the seed dictionary for training). These results also provide solid support for the main statement of our paper, i.e., the ground-truth CLWE mapping between monolingual word embeddings is linear iff analogies encoded in those embeddings are preserved.

## 7 Further Discussion

Prior work relevant to the linearity of CLWE mappings has largely been observational (see § 2). This section sheds new light on these past studies from the novel perspective of word analogies.

Explaining Non-Linearity. We provide three suggested reasons why CLWE mappings are sometimes not approximately linear, all linked with the condition of Eq. (2) not being met.

The first may be issues with individual monolingual embeddings (see one such example in the upper part of Fig. 3). In particular, popular word embedding algorithms lack the capacity to ensure semantic continuity over the entire embedding space (Linzen, 2016). Hence, vectors for the analogy words may only exhibit local consistency, with Eq. (2) breaking down for relatively distant regions. This caused the locality of linearity that has been reported by Nakashole & Flauger (2018), Li et al. (2021) and Wang et al. (2021a).

![11_image_0.png](11_image_0.png)

Figure 3: Illustration of example scenarios where the CLWE mapping is non-linear. Translations of English
(left) and Chinese (right) terms are indicated by shared symbols. **Upper**: The vector for "*blueberry*"
(shadowed) is ill-positioned in the embedding space, so the condition of Eq. (2) is no longer satisfied. **Lower**:
In the financial domain some Eastern countries (e.g., China and Japan) traditionally use "*black*" to indicate growth and "*green*" for reduction, while Western countries (e.g., US and UK) assign the opposite meanings to these terms, also not satisfying the condition of Eq. (2).

The second reason why a CLWE mapping may not be linear is semantic gaps. Although the analogies in our xANLG corpus are all language-agnostic, the analogical relations between words may sometimes change or even disappear. For example, language pairs may have very different grammars, e.g., Chinese does not have plural morphology (Li & Thompson, 1989), so some types of analogy, e.g. the G-PL used above, do not hold.

Also, analogies may evolve differently across cultures (see the example in the lower part of Fig. 3). These two factors go some way towards explaining why typologically and etymologically distant language pairs tend to have worse alignment (Ruder et al., 2019).

Thirdly, many studies point out that differences in the domain of training data can influence the similarity between multilingual word embeddings (Søgaard et al., 2018; Artetxe et al., 2018). In addition, we argue that due to polysemy, analogies may change from one domain to another. Under such circumstances, Eq. (2) is violated and the linear assumption no longer holds.

Mitigating Non-Linearity. The proposed analogy-inspired framework explains the success and failure of the linearity assumption for CLWEs. As discussed earlier, it also suggests a method for indirectly assessing the linearity of a CLWE mapping prior to implementation. Moreover, it offers principled methods for designing more effective CLWE methods. The most straightforward idea is to explicitly use Eq. (2) as a training constraint, which has very recently been practised by Garneau et al. (2021)9. Based on analogy pairs retrieved from external knowledge bases for different languages, their approach directly learnt to better encode monolingual analogies, particularly those whose vectors are distant in the embedding space. It not only works well on static word embeddings, but also leads to performance gains for large-scale pre-trained cross-lingual language models including multilingual BERT (Devlin et al., 2019). These results on multiple tasks (e.g., bilingual lexicon induction and cross-lingual sentence retrieval) can be seen as an independent confirmation of this paper's main claim and a demonstration of its usefulness.

Our study also suggests another unexplored direction: incorporating analogy-based information into nonlinear CLWE mappings. Existing work has already introduced non-linearity to CLWE mappings by applying a variety of techniques including directly training non-linear functions (Mohiuddin et al., 2020), tuning linear mappings for outstanding non-isomorphic instances (Glavaš & Vulić, 2020) and learning multiple linear CLWE mappings instead of a single one (Nakashole, 2018; Wang et al., 2021a) (see § 2). However, there is a lack of theoretical motivation for decisions about how the non-linear mapping should be modelled.

Nevertheless, the results presented here suggest that ensembles of linear transformations, covering analogy-preserving regions of the embedding space, would make a reasonable approximation of the ground-truth CLWE mappings, and that information about analogy preservation could be used to partition embedding spaces into multiple regions, between which independent linear mappings can be learnt. We leave this application as important future work.

9They cited our earlier preprint as the primary motivation for their approach.

## 8 Conclusion And Future Work

This paper makes the first attempt to explore the conditions under which CLWE mappings are linear. Theoretically, we show that this widely-adopted assumption holds iff the analogies encoded are preserved across embeddings for different languages. We describe the construction of a novel cross-lingual word analogy dataset for a diverse range of languages and analogy categories and we propose indicators to quantify linearity and analogy preservation. Experiment results on three distinct embedding series firmly support our hypothesis. We also demonstrate how our insight into the connection between linearity and analogy preservation can be used to better understand past observations about the limitations of linear CLWE mappings, particularly when they are ineffective. Our findings regarding the preservation of analogy encoding provide a test that can be applied to determine the likely success of any attempt to create linear mappings between multilingual embeddings. We hope this study can guide future studies in the CLWE field.

Additionally, we plan to expand our theoretical insight to contextual embeddings, inspired by Garneau et al. (2021), who demonstrated that developing mappings that preserve encoded analogies benefits pre-trained cross-lingual language models as well. We also aim to enrich xANLG by including new languages and analogies to enable explorations at an even larger scale. Finally, we will further design CLWE approaches that learn multiple linear mappings between local embedding regions outlined with analogy-based metrics (see § 7).

## Broader Impact Statement

CLWE bridges the gap between languages and is efficient enough to be applied in situations where limited resources are available, including to endangered languages (Zhang et al., 2020; Ngoc Le & Sadat, 2020). This paper presented a theoretical analysis of the mechanisms underlying CLWE techniques which has potential to improve these methods. Moreover, the proposed SPAE metric predicts whether monolingual word embeddings in different languages should be aligned using a linear or non-linear mapping, without actually training the CLWEs. This indicator lowers the computational expense required to identify a suitable mapping approach, thereby reducing the computational power needed and negative environmental effects.

Our analysis relies on the use of analogies and previous work has indicated that these may contain biases, e.g., regarding gender (Bolukbasi et al., 2016; Sun et al., 2019). Any future work that incorporates analogies within the CLWE process should be aware of the potential consequences of any biases that may be contained within the analogies used. On the other hand, there is potential for the findings of this work to be leveraged for bias alleviation in cross-lingual representation learning.

## Acknowledgements

We would like to express our sincerest gratitude to all volunteers from Beijing Foreign Studies University who manually annotated and validated the xANLG corpus, as well as Guowei Zhang, Guanyi Chen, Ruizhe Li, Alison Sneyd, and Harish Tayyar Madabushi who helped this study. We also thank the official TMLR reviewers for their insightful comments and Angeliki Lazaridou for the action editing.

## References

Carl Allen and Timothy Hospedales. Analogies explained: Towards understanding word embeddings. In *Proceedings of the 36th International Conference on Machine Learning*, pp. 223–231, Long Beach, California, USA, 2019. PMLR. URL http://proceedings.mlr.press/v97/allen19a.html.

Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. A latent variable model approach to PMI-based word embeddings. *Transactions of the Association for Computational Linguistics*, 4:385–399, 2016. doi: 10.1162/tacl_a_00106. URL https://www.aclweb.org/anthology/Q16-1028.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2289–2294, Austin, Texas, November 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1250. URL https://www.aclweb.org/anthology/D16-1250.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. Learning bilingual word embeddings with (almost) no bilingual data. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pp. 451–462, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1042. URL https://www.aclweb.org/anthology/P17-1042.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 789–798, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1073. URL https://www.aclweb.org/anthology/P18-1073.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. *Transactions of the Association for Computational Linguistics*, 5:135–146, 2017. doi: 10.1162/tacl_a_00051. URL https://aclanthology.org/Q17-1010.

Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In *Proceedings of the 30th International Conference on Neural Information Processing Systems*, NIPS'16, pp. 4356–4364, Red Hook, NY,
USA, 2016. Curran Associates Inc. ISBN 9781510838819.

Fred L. Bookstein. *Morphometric Tools for Landmark Data: Geometry and Biology*. Cambridge University Press, 1992. doi: 10.1017/CBO9780511573064.

Tomáš Brychcín, Stephen Taylor, and Lukáš Svoboda. Cross-lingual word analogies using linear transformations between semantic spaces. *Expert Systems with Applications*, 135:287–295, 2019. ISSN 0957-4174. doi: https://doi.org/10.1016/j.eswa.2019.06.021. URL http://www.sciencedirect.com/science/article/pii/S0957417419304191.

Cristian Cardellino. Spanish Billion Words Corpus and Embeddings, August 2019. URL https://crscardellino.github.io/SBWCE/.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pp. 8440–8451, Online, July 2020. Association for Computational Linguistics. doi:
10.18653/v1/2020.acl-main.747. URL https://aclanthology.org/2020.acl-main.747.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.

Aleksandr Drozd, Anna Gladkova, and Satoshi Matsuoka. Word embeddings, analogies, and machine learning: Beyond king - man + woman = queen. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pp. 3519–3530, Osaka, Japan, December 2016. The COLING 2016 Organizing Committee. URL https://www.aclweb.org/anthology/C16-1332.

Haim Dubossarsky, Ivan Vulić, Roi Reichart, and Anna Korhonen. The secret is in the spectra: Predicting cross-lingual task performance with spectral similarity measures. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 2377–2390, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.186. URL
https://aclanthology.org/2020.emnlp-main.186.

Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. Towards understanding linear word analogies. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 3253–3262, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1315. URL
https://www.aclweb.org/anthology/P19-1315.

Manaal Faruqui and Chris Dyer. Improving vector space word representations using multilingual correlation. In *Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics*, pp. 462–471, Gothenburg, Sweden, April 2014. Association for Computational Linguistics. doi: 10.3115/v1/E14-1049. URL https://www.aclweb.org/anthology/E14-1049.

R.A. Fisher. *Statistical methods for research workers*. Edinburgh Oliver & Boyd, 1925. URL http://psychclassics.yorku.ca/Fisher/Methods/.

Louis Fournier and Ewan Dunbar. Paraphrases do not explain word analogies. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume*, pp. 2129–2134, Online, April 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.eacl-main.182. URL https://aclanthology.org/2021.eacl-main.182.

Ashwinkumar Ganesan, Francis Ferraro, and Tim Oates. Learning a reversible embedding mapping using bi-directional manifold alignment. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pp. 3132–3139, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.findings-acl.276. URL https://aclanthology.org/2021.findings-acl.276.

Nicolas Garneau, Mareike Hartmann, Anders Sandholm, Sebastian Ruder, Ivan Vulić, and Anders Søgaard. Analogy training multilingual encoders. *Proceedings of the AAAI Conference on Artificial Intelligence*, 35(14):12884–12892, May 2021. URL https://ojs.aaai.org/index.php/AAAI/article/view/17524.

Dedre Gentner. Structure-mapping: A theoretical framework for analogy. *Cognitive Science*, 7(2):155–170, 1983. ISSN 0364-0213. doi: https://doi.org/10.1016/S0364-0213(83)80009-3. URL https://www.sciencedirect.com/science/article/pii/S0364021383800093.

Alex Gittens, Dimitris Achlioptas, and Michael W. Mahoney. Skip-Gram - Zipf + Uniform = Vector Additivity. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 69–76, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1007. URL https://www.aclweb.org/anthology/P17-1007.

Anna Gladkova, Aleksandr Drozd, and Satoshi Matsuoka. Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn't. In *Proceedings of the NAACL*
Student Research Workshop, pp. 8–15, San Diego, California, June 2016. Association for Computational Linguistics. doi: 10.18653/v1/N16-2002. URL https://aclanthology.org/N16-2002.

Goran Glavaš and Ivan Vulić. Non-linear instance-based cross-lingual mapping for non-isomorphic embedding spaces. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pp. 7548–7555, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.675. URL https://aclanthology.org/2020.acl-main.675.

Goran Glavaš, Robert Litschko, Sebastian Ruder, and Ivan Vulić. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In *Proceedings* of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 710–721, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1070. URL https: //aclanthology.org/P19-1070.

Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. Learning word vectors for 157 languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May 2018. European Language Resources Association
(ELRA). URL https://www.aclweb.org/anthology/L18-1550.

Tatsunori B. Hashimoto, David Alvarez-Melis, and Tommi S. Jaakkola. Word embeddings as metric recovery in semantic spaces. *Transactions of the Association for Computational Linguistics*, 4:273–286, 2016. doi:
10.1162/tacl_a_00098. URL https://aclanthology.org/Q16-1020.

Christian Herold, Jan Rosendahl, Joris Vanvinckenroye, and Hermann Ney. Data filtering using cross-lingual word embeddings. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 162–172, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.15. URL https://aclanthology.org/2021.naacl-main.15.

O. Kariv and S. L. Hakimi. An algorithmic approach to network location problems. II: The P-Medians. *SIAM Journal on Applied Mathematics*, 37(3):539–560, 1979. doi: 10.1137/0137041. URL https://doi.org/10.1137/0137041.

Yunsu Kim, Miguel Graça, and Hermann Ney. When and why is unsupervised neural machine translation useless? In *Proceedings of the 22nd Annual Conference of the European Association for Machine Translation*, pp. 35–44, Lisboa, Portugal, November 2020. European Association for Machine Translation. URL
https://www.aclweb.org/anthology/2020.eamt-1.5.

Martin Kleiber and W. J. Pervin. A generalized Banach-Mazur theorem. *Bulletin of the Australian Mathematical Society*, 1(2):169–173, 1969. doi: 10.1017/S0004972700041411.

Maximilian Köper, Christian Scheible, and Sabine Schulte im Walde. Multilingual reliability and "semantic" structure of continuous word spaces. In *Proceedings of the 11th International Conference on Computational Semantics*, pp. 40–45, London, UK, April 2015. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/W15-0105.

Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS), 2019.

Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. Unsupervised machine translation using monolingual corpora only. In *International Conference on Learning Representations*,
2018a. URL https://openreview.net/forum?id=rkYTTf-AZ.

Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. Word translation without parallel data. In *International Conference on Learning Representations*, 2018b. URL
https://openreview.net/forum?id=H196sainb.

Omer Levy and Yoav Goldberg. Linguistic regularities in sparse and explicit word representations. In *Proceedings of the Eighteenth Conference on Computational Natural Language Learning*, pp. 171–180, Ann Arbor, Michigan, June 2014a. Association for Computational Linguistics. doi: 10.3115/v1/W14-1618. URL https://www.aclweb.org/anthology/W14-1618.

Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Q. Weinberger (eds.), *Advances in Neural Information Processing Systems*, volume 27. Curran Associates, Inc., 2014b. URL https://proceedings.neurips.cc/paper/2014/file/feab05aa91085b7a8012516bc3533958-Paper.pdf.

Omer Levy, Yoav Goldberg, and Ido Dagan. Improving distributional similarity with lessons learned from word embeddings. *Transactions of the Association for Computational Linguistics*, 3:211–225, 2015. doi:
10.1162/tacl_a_00134. URL https://aclanthology.org/Q15-1016.

Charles N Li and Sandra A Thompson. *Mandarin Chinese: A functional reference grammar*, volume 3. Univ of California Press, 1989.

Yuling Li, Kui Yu, and Yuhong Zhang. Learning cross-lingual mappings in imperfectly isomorphic embedding spaces. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 29:2630–2642, 2021. doi: 10.1109/TASLP.2021.3097935.

Tal Linzen. Issues in evaluating semantic spaces using word analogies. In *Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP*, pp. 13–18, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/W16-2503. URL https://www.aclweb.org/anthology/W16-2503.

Noa Yehezkel Lubin, Jacob Goldberger, and Yoav Goldberg. Aligning vector-spaces with noisy supervised lexicon. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 460–465, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1045. URL https://www.aclweb.org/anthology/N19-1045.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In *1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings*, 2013a. URL https://openreview.net/forum?id=idpCdOWtqXd60.

Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. Exploiting similarities among languages for machine translation. *CoRR*, abs/1309.4168, 2013b. URL http://arxiv.org/abs/1309.4168.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pp. 3111–3119, USA, 2013c. Curran Associates Inc. URL http://dl.acm.org/citation.cfm?id=2999792.2999959.

Tasnim Mohiuddin, M Saiful Bari, and Shafiq Joty. LNMap: Departures from isomorphic assumption in bilingual lexicon induction through non-linear mapping in latent space. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 2712–2723, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.215. URL
https://aclanthology.org/2020.emnlp-main.215.

Ndapa Nakashole. NORMA: Neighborhood sensitive maps for multilingual word embeddings. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pp. 512–522, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1047. URL https://www.aclweb.org/anthology/D18-1047.

Ndapa Nakashole and Raphael Flauger. Characterizing departures from linearity in word translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 221–227, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi:
10.18653/v1/P18-2036. URL https://www.aclweb.org/anthology/P18-2036.

J. A. Nelder and R. Mead. A Simplex Method for Function Minimization. *The Computer Journal*, 7(4):308–313, 01 1965. ISSN 0010-4620. doi: 10.1093/comjnl/7.4.308. URL https://doi.org/10.1093/comjnl/7.4.308.

Tan Ngoc Le and Fatiha Sadat. Revitalization of indigenous languages through pre-processing and neural machine translation: The case of Inuktitut. In *Proceedings of the 28th International Conference on Computational Linguistics*, pp. 4661–4666, Barcelona, Spain (Online), December 2020. International Committee on Computational Linguistics. doi: 10.18653/v1/2020.coling-main.410. URL
https://aclanthology.org/2020.coling-main.410.

Aitor Ormazabal, Mikel Artetxe, Aitor Soroa, Gorka Labaka, and Eneko Agirre. Beyond offline mapping:
Learning cross-lingual word embeddings through context anchoring. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 6479–6489, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.506. URL https://aclanthology.org/ 2021.acl-long.506.

Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, and Graham Neubig. Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pp. 184–193, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1018. URL https://www.aclweb.org/anthology/P19-1018.

Xutan Peng, Chenghua Lin, and Mark Stevenson. Cross-lingual word embedding refinement by ℓ1 norm optimisation. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 2690–2701, Online, June 2021a. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.214. URL https://aclanthology.org/2021.naacl-main.214.

Xutan Peng, Yi Zheng, Chenghua Lin, and Advaith Siddharthan. Summarising historical text in modern languages. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume*, pp. 3123–3142, Online, April 2021b. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2021.eacl-main.273.

Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing
(EMNLP), pp. 1532–1543, Doha, Qatar, October 2014. Association for Computational Linguistics. doi:
10.3115/v1/D14-1162. URL https://www.aclweb.org/anthology/D14-1162.

Henri Prade and Gilles Richard. Analogical proportions: Why they are useful in AI. In Zhi-Hua Zhou (ed.), *Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21*, pp. 4568–4576. International Joint Conferences on Artificial Intelligence Organization, 8 2021. doi: 10.24963/ijcai.2021/621. URL https://doi.org/10.24963/ijcai.2021/621. Survey Track.

Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. Null it out: Guarding protected attributes by iterative nullspace projection. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pp. 7237–7256, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.647. URL https://aclanthology.org/2020.acl-main.647.

Anna Rogers, Aleksandr Drozd, and Bofang Li. The (too many) problems of analogical reasoning with word vectors. In *Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM*
2017), pp. 135–148, Vancouver, Canada, August 2017. Association for Computational Linguistics. doi:
10.18653/v1/S17-1017. URL https://aclanthology.org/S17-1017.

Sebastian Ruder, Ivan Vulić, and Anders Søgaard. A survey of cross-lingual word embedding models. *Journal of Artificial Intelligence Research*, 65(1):569–630, May 2019. ISSN 1076-9757. doi: 10.1613/jair.1.11640. URL https://doi.org/10.1613/jair.1.11640.

Anders Søgaard, Sebastian Ruder, and Ivan Vulić. On the limitations of unsupervised bilingual dictionary induction. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pp. 778–788, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1072. URL https://www.aclweb.org/anthology/P18-1072.

Jimin Sun, Hwijeen Ahn, Chan Young Park, Yulia Tsvetkov, and David R. Mortensen. Cross-cultural similarity features for cross-lingual transfer learning of pragmatically motivated tasks. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume*, pp. 2403–2414, Online, April 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.eacl-main.204. URL https://aclanthology.org/2021.eacl-main.204.

Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. Mitigating gender bias in natural language processing: Literature review. In *Proceedings of the 57th Annual Meeting of the Association for Computational* Linguistics, pp. 1630–1640, Florence, Italy, July 2019. Association for Computational Linguistics. doi:
10.18653/v1/P19-1159. URL https://aclanthology.org/P19-1159.

Matej Ulčar, Kristiina Vaik, Jessica Lindström, Milda Dailidėnaitė, and Marko Robnik-Šikonja. Multilingual culture-independent word analogy datasets. In *Proceedings of the 12th Language Resources and Evaluation Conference*, pp. 4074–4080, Marseille, France, May 2020. European Language Resources Association. ISBN 979-10-95546-34-4. URL https://aclanthology.org/2020.lrec-1.501.

Ivan Vulić, Sebastian Ruder, and Anders Søgaard. Are all good word vector spaces isomorphic? In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pp. 3178–3192, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.257. URL https://aclanthology.org/2020.emnlp-main.257.

Haozhou Wang, James Henderson, and Paola Merlo. Multi-adversarial learning for cross-lingual word embeddings. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 463–472, Online, June 2021a. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.39. URL https://aclanthology.org/2021.naacl-main.39.

Meihong Wang, Linling Qiu, and Xiaoli Wang. A survey on knowledge graph embeddings for link prediction. *Symmetry*, 13(3), 2021b. ISSN 2073-8994. doi: 10.3390/sym13030485. URL https://www.mdpi.com/2073-8994/13/3/485.

Zirui Wang, Jiateng Xie, Ruochen Xu, Yiming Yang, Graham Neubig, and Jaime G. Carbonell. Cross-lingual alignment vs joint training: A comparative study and A simple unified framework. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=S1l-C0NtwS.

Svante Wold, Kim Esbensen, and Paul Geladi. Principal Component Analysis. *Chemometrics and Intelligent Laboratory Systems*, 2(1):37–52, 1987. ISSN 0169-7439. doi: https://doi.org/10.1016/0169-7439(87)80084-9. URL http://www.sciencedirect.com/science/article/pii/0169743987800849. Proceedings of the Multivariate Statistical Workshop for Geologists and Geochemists.

Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. Normalized word embedding and orthogonal transform for bilingual word translation. In *Proceedings of the 2015 Conference of the North American Chapter of* the Association for Computational Linguistics: Human Language Technologies, pp. 1006–1011, Denver, Colorado, May–June 2015. Association for Computational Linguistics. doi: 10.3115/v1/N15-1104. URL
https://www.aclweb.org/anthology/N15-1104.

Mozhi Zhang, Yoshinari Fujinuma, and Jordan Boyd-Graber. Exploiting cross-lingual subword similarities in low-resource document classification. In *Proceedings of the AAAI Conference on Artificial Intelligence*,
volume 34, pp. 9547–9554, 2020.

Yi Zhang, Jie Lu, Feng Liu, Qian Liu, Alan Porter, Hongshu Chen, and Guangquan Zhang. Does deep learning help topic extraction? A kernel k-means clustering method with word embedding. *Journal of Informetrics*, 12(4):1099–1117, 2018. ISSN 1751-1577. doi: https://doi.org/10.1016/j.joi.2018.09.004. URL https://www.sciencedirect.com/science/article/pii/S1751157718300257.

Yucheng Zhou, Xiubo Geng, Tao Shen, Wenqiang Zhang, and Daxin Jiang. Improving zero-shot cross-lingual transfer for multilingual question answering over knowledge graph. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 5822–5834, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.465. URL https://aclanthology.org/2021.naacl-main.465.

## A Question Formulations

For an analogy category with $t$ word pairs, $\binom{t}{2}$ four-item elements can be composed. An arbitrary element, $\alpha{:}\beta :: \gamma{:}\theta$, can yield eight analogy completion questions as follows:

$$
\begin{array}{llll}
\alpha{:}\beta :: \gamma{:}? & \beta{:}\alpha :: \theta{:}? & \gamma{:}\alpha :: \theta{:}? & \theta{:}\beta :: \gamma{:}? \\
\alpha{:}\gamma :: \beta{:}? & \beta{:}\theta :: \alpha{:}? & \gamma{:}\theta :: \alpha{:}? & \theta{:}\gamma :: \beta{:}?
\end{array}
$$

Hence, $\binom{t}{2} \times 8$ unique questions can be generated.
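To make the enumeration concrete, the minimal sketch below (illustrative Python, not code released with this paper; the example word pairs are hypothetical) expands a list of word pairs from one analogy category into the full question set and verifies the $\binom{t}{2} \times 8$ count.

```python
# Minimal sketch of the question enumeration described above (assumptions:
# hypothetical example word pairs; tuples (x, y, z, answer) encode "x:y :: z:?").
from itertools import combinations
from math import comb


def questions_for_element(a, b, c, d):
    """Return the eight completion questions derived from the element a:b :: c:d."""
    return [
        (a, b, c, d), (b, a, d, c), (c, a, d, b), (d, b, c, a),
        (a, c, b, d), (b, d, a, c), (c, d, a, b), (d, c, b, a),
    ]


def all_questions(word_pairs):
    """Expand t word pairs into binom(t, 2) * 8 unique completion questions."""
    questions = []
    # Each unordered pair of word pairs forms one four-item element.
    for (a, b), (c, d) in combinations(word_pairs, 2):
        questions.extend(questions_for_element(a, b, c, d))
    return questions


pairs = [("paris", "france"), ("tokyo", "japan"), ("ottawa", "canada")]
qs = all_questions(pairs)
assert len(qs) == comb(len(pairs), 2) * 8  # 3 elements -> 24 questions
```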

## B Raw Data For Tab. 2

xANLG_G:

| Corpus | Category | en-de | en-es | en-fr | en-hi | en-pl | de-es | de-fr | de-hi | de-pl | es-fr | es-hi | es-pl | fr-hi | fr-pl | hi-pl |
|--------|----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Wiki   | CAP      | .16 | .21 | .17 | .36 | .23 | .21 | .18 | .36 | .22 | .22 | .35 | .25 | .35 | .23 | .33 |
| Wiki   | GNDR     | .32 | .42 | .39 | .26 | .35 | .48 | .40 | .41 | .36 | .39 | .43 | .38 | .30 | .40 | .42 |
| Wiki   | NATL     | .18 | .16 | .15 | .14 | .20 | .19 | .19 | .33 | .21 | .16 | .30 | .21 | .14 | .20 | .32 |
| Wiki   | G-PL     | .22 | .23 | .22 | .36 | .26 | .25 | .23 | .35 | .26 | .25 | .38 | .27 | .37 | .26 | .38 |
| Crawl  | CAP      | .23 | .23 | .20 | .23 | .29 | .26 | .23 | .24 | .28 | .23 | .26 | .28 | .24 | .29 | .38 |
| Crawl  | GNDR     | .57 | .58 | .59 | .56 | .54 | .65 | .66 | .57 | .59 | .64 | .56 | .57 | .56 | .57 | .58 |
| Crawl  | NATL     | .32 | .43 | .27 | .39 | .29 | .32 | .35 | .47 | .35 | .40 | .43 | .31 | .46 | .31 | .42 |
| Crawl  | G-PL     | .35 | .24 | .33 | .48 | .29 | .33 | .37 | .44 | .42 | .33 | .47 | .33 | .48 | .42 | .51 |
| CoNLL  | CAP      | .31 | .58 | .32 | .55 | .39 | .58 | .32 | .56 | .38 | .59 | .66 | .59 | .56 | .40 | .55 |
| CoNLL  | GNDR     | .48 | .76 | .49 | .55 | .48 | .74 | .55 | .57 | .50 | .77 | .76 | .72 | .59 | .52 | .58 |
| CoNLL  | NATL     | .37 | .72 | .26 | .51 | .38 | .78 | .34 | .52 | .36 | .74 | .74 | .73 | .50 | .35 | .50 |
| CoNLL  | G-PL     | .32 | .67 | .32 | .48 | .36 | .65 | .34 | .47 | .36 | .68 | .67 | .65 | .50 | .38 | .49 |

xANLG_M:

| Corpus | Category | en-et | en-fi | en-hr | en-lv | en-ru | en-sl | et-fi | et-hr | et-lv | et-ru | et-sl | fi-hr | fi-lv | fi-ru | fi-sl | hr-lv | hr-ru | hr-sl | lv-ru | lv-sl | ru-sl |
|--------|----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Wiki   | ANIM     | .50 | .50 | .22 | .31 | .19 | .15 | .56 | .27 | .37 | .30 | .35 | .29 | .41 | .30 | .40 | .32 | .36 | .28 | .31 | .22 | .20 |
| Wiki   | G-PL     | .25 | .22 | .37 | .37 | .28 | .33 | .24 | .31 | .29 | .28 | .26 | .30 | .29 | .26 | .27 | .33 | .32 | .30 | .33 | .28 | .28 |
| Crawl  | ANIM     | .55 | .55 | .55 | .49 | .55 | .51 | .34 | .41 | .45 | .22 | .41 | .40 | .46 | .41 | .45 | .37 | .23 | .28 | .38 | .24 | .43 |
| Crawl  | G-PL     | .28 | .43 | .47 | .43 | .45 | .40 | .30 | .45 | .37 | .43 | .37 | .46 | .40 | .44 | .43 | .42 | .50 | .54 | .39 | .35 | .43 |
| CoNLL  | ANIM     | .54 | .54 | .99 | .55 | .50 | .53 | .29 | .74 | .46 | .37 | .43 | .87 | .51 | .38 | .46 | .64 | .77 | .98 | .42 | .36 | .41 |
| CoNLL  | G-PL     | .45 | .40 | .52 | .42 | .40 | .42 | .37 | .77 | .41 | .41 | .40 | .81 | .37 | .36 | .39 | .84 | .66 | .77 | .36 | .40 | .38 |

Table 4: Raw SLMP results (the negative sign is omitted for brevity).

xANLG_G:

| Language | Wiki CAP | Wiki GNDR | Wiki NATL | Wiki G-PL | Crawl CAP | Crawl GNDR | Crawl NATL | Crawl G-PL | CoNLL CAP | CoNLL GNDR | CoNLL NATL | CoNLL G-PL |
|----------|----------|-----------|-----------|-----------|-----------|------------|------------|------------|-----------|------------|------------|------------|
| de | .68 | .25 | .21 | .23 | .47 | .48 | .79 | .77 | .65 | .43 | .41 | .55 |
| en | .94 | .33 | .94 | .58 | .57 | .67 | .76 | .94 | .87 | .57 | .79 | .61 |
| es | .45 | .13 | .35 | .13 | .40 | .57 | .68 | .87 | .13 | .07 | .07 | .17 |
| fr | .92 | .27 | .76 | .13 | .65 | .50 | .85 | .87 | .48 | .14 | .24 | .35 |
| hi | .29 | .30 | .42 | .07 | .58 | .59 | .59 | .32 | .32 | .37 | .31 | .16 |
| pl | .16 | .21 | .26 | .10 | .29 | .55 | .82 | .84 | .45 | .45 | .38 | .52 |

xANLG_M:

| Language | Wiki ANIM | Wiki G-PL | Crawl ANIM | Crawl G-PL | CoNLL ANIM | CoNLL G-PL |
|----------|-----------|-----------|------------|------------|------------|------------|
| en | .48 | .65 | .29 | .87 | .36 | .58 |
| et | .12 | .50 | .52 | 1.00 | .21 | .48 |
| fi | .06 | .65 | .48 | .87 | .42 | .54 |
| hr | .17 | .20 | .50 | .68 | .07 | .11 |
| lv | .19 | .10 | .39 | .84 | .27 | .23 |
| ru | .36 | .40 | .61 | .87 | .42 | .55 |
| sl | .42 | .23 | .39 | .81 | .12 | .39 |

Table 5: Raw monolingual LRCos results (upper: xANLG_G; lower: xANLG_M).