Do Vision and Language Models Share Concepts? A Vector Space Alignment Study
Jiaang Li † Yova Kementchedjhieva ‡ Constanza Fierro † Anders Søgaard †
† University of Copenhagen
‡ Mohamed bin Zayed University of Artificial Intelligence
{jili,c.fierro,soegaard}@di.ku.dk, yova.kementchedjhieva@mbzuai.ac.ae
Abstract
Large-scale pretrained language models (LMs) are said to "lack the ability to connect utterances to the world" (Bender and Koller, 2020), because they do not have "mental models of the world" (Mitchell and Krakauer, 2023). If so, one would expect LM representations to be unrelated to representations induced by vision models. We present an empirical evaluation across four families of LMs (BERT, GPT-2, OPT and LLaMA-2) and three vision model architectures (ResNet, SegFormer, and MAE). Our experiments show that LMs partially converge towards representations isomorphic to those of vision models, subject to dispersion, polysemy and frequency. This has important implications for both multi-modal processing and the LM understanding debate (Mitchell and Krakauer, 2023).$^{1}$
1 Introduction
The debate around whether LMs can be said to understand is often portrayed as a back-and-forth between two opposing sides (Mitchell and Krakauer, 2023), but in reality, there are many positions. Some researchers have argued that LMs are 'all syntax, no semantics', i.e., that they learn form, but not meaning (Searle, 1980; Bender and Koller, 2020; Marcus et al., 2023).$^{2}$ Others have argued that LMs have inferential semantics, but not referential semantics (Rapaport, 2002; Sahlgren and Carlsson, 2021; Piantadosi and Hill, 2022),$^{3}$ whereas some have posited that a form of externalist referential semantics is possible, at least for chatbots engaged in direct conversation (Cappelen and Dever, 2021; Butlin, 2021; Mollo and Millière, 2023; Mandelkern and Linzen, 2023). Most researchers agree, however, that LMs "lack the ability to connect utterances to the world" (Bender and Koller, 2020), because they do not have "mental models of the world" (Mitchell and Krakauer, 2023).
$^{1}$Code and dataset: https://github.com/jiaangli/VLCA.
$^{2}$The idea that computers are 'all syntax, no semantics' can be traced back to the 17th-century German philosopher Leibniz's Mill Argument (Lodge and Bobro, 1998). The Mill Argument states that mental states cannot be reduced to physical states, so if the capacity to understand language requires mental states, this capacity cannot be instantiated, merely imitated, by machines. In 1980, Searle introduced an even more popular argument against the possibility of LM understanding, in the form of the so-called Chinese Room thought experiment (Searle, 1980). The Chinese Room presents an interlocutor with no prior knowledge of a foreign language, who receives text messages in this language and follows a rule book to reply to the messages. The interlocutor is Searle's caricature of artificial intelligence and is, Searle claims, obviously not endowed with meaning or understanding, engaging merely in symbol manipulation.
This study provides evidence to the contrary: Language models and computer vision models (VMs) are trained on independent data sources (at least for unsupervised computer vision models). The only common source of bias is the world. If LMs and VMs exhibit similarities, it must be because they both model the world. We examine the representations learned by different LMs and VMs by measuring how similar their geometries are. We consistently find that the better the LMs are, the more they induce representations similar to those induced by computer vision models. The similarity between the two spaces is such that, from a very small set of parallel examples, we are able to linearly project VM representations to the language space and retrieve highly accurate captions, as shown by the examples in Figure 1.
Contributions. We present a series of evaluations of the vector spaces induced by three families of VMs and four families of LMs, i.e., a total of fourteen VMs and fourteen LMs. We show that within each family, the larger the LMs, the more their vector spaces become structurally similar to those of computer vision models. This enables retrieval of language representations of images (referential semantics) with minimal supervision. Retrieval precision depends on the dispersion of the image and language representations, polysemy, and frequency, but consistently improves with language model size. We discuss the implications of the finding that language and computer vision models learn representations with similar geometries.
$^{3}$See Marconi (1997) for this distinction.
2 Related Work
Inspiration from cognitive science. Computational modeling is a cornerstone of cognitive science in the pursuit of a better understanding of how representations in the brain come about. As such, the field has shown a growing interest in computational representations induced with self-supervised learning (Orhan et al., 2020; Halvagal and Zenke, 2022). Cognitive scientists have also noted how the objectives of supervised language and vision models bear resemblances to predictive processing (Schrimpf et al., 2018; Goldstein et al., 2021; Caucheteux et al., 2022; Li et al., 2023) (but see Antonello and Huth (2022) for a critical discussion of such work).
Studies have looked at the alignability of neural language representations and human brain activations, with more promising results as language models grow better at modeling language (Sassenhagen and Fiebach, 2020; Schrimpf et al., 2021). In these studies, the partial alignability of brain and model representations is interpreted as evidence that the brain and these models might process language in the same way (Caucheteux and King, 2022).
Cross-modal alignment. The idea of cross-modal retrieval is not new (Lazaridou et al., 2014), but previously it has mostly been studied with practical considerations in mind. Recently, Merullo et al. (2023) showed that language representations in LMs are functionally similar to image representations in VMs, in that a linear transformation applied to an image representation can be used to prompt a language model into producing a relevant caption. We dial back from function and study whether the concept representations converge toward structural similarity (isomorphism). The key question we address is whether, despite the lack of explicit grounding, the representations learned by large pretrained language models structurally resemble properties of the physical world as captured by vision models. Most closely related to our work, Huh et al. (2024) propose a similar hypothesis, although they study it from a different perspective, and our findings corroborate theirs.
Figure 1: Mapping from MAE$_{Huge}$ (images) to OPT$_{30B}$ (text). Gold labels are in green.
3 Methodology
Our primary objective is to compare the representations derived from VMs and LMs and assess their alignability, i.e. the extent to which LMs converge toward VMs' geometries. In the following sections, we introduce the procedures for obtaining the representations and aligning them, with an illustration of our methodology provided in Figure 2.
Vision models. We include fourteen VMs in our experiments, representing three model families: SegFormer (Xie et al., 2021), MAE (He et al., 2022), and ResNet (He et al., 2016). For all three types of VMs, we only employ the encoder component as a visual feature extractor.$^{4}$
SegFormer models consist of a Transformer-based encoder and a light-weight feed-forward decoder. They are pretrained on object classification data and finetuned on scene parsing data for scene segmentation and object classification. We hypothesize that the reasoning necessary to perform segmentation in context promotes representations that are more similar to those of LMs, which also operate in a discrete space (a vocabulary). The SegFormer models we use are pretrained with ImageNet-1K (Russakovsky et al., 2015) and finetuned with ADE20K (Zhou et al., 2017).
$^{4}$We ran experiments with CLIP (Radford et al., 2021), but report on these separately, since CLIP does not meet the criteria of our study, being trained on a mixture of text and images. CLIP results are presented in Appendix C.
Figure 2: Experiment stages: During our experiments, words, sentences, and images are selected from the aliases list (wordlist and ImageNet-21K aliases), Wikipedia, and ImageNet-21K, respectively. The source and target spaces are constructed using image and word embeddings extracted by the respective vision and language models.
MAE models rely on a Transformer-based encoder-decoder architecture, with the Vision Transformer (ViT) (Dosovitskiy et al., 2021) as the encoder backbone. MAE models are trained to reconstruct masked patches in images, i.e., a fully unsupervised training objective, similar to masked language modeling. The encoder takes as input the unmasked image patches, while a lightweight decoder reconstructs the original image from the latent representation of unmasked patches interleaved with mask tokens. The MAE models we use are pretrained on ImageNet-1K.
ResNet models for object classification consist of a bottleneck convolutional neural network with residual blocks as an encoder, with a classification head. They are pretrained on ImageNet-1K.
Language models. We include fourteen Transformer-based LMs in our experiments, representing four model families: BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019), OPT (Zhang et al., 2022) and LLaMA-2 (Touvron et al., 2023). We use six different sizes of BERT (all uncased): BERT$_{Base}$ and BERT$_{Large}$, which are pretrained on the BooksCorpus (Zhu et al., 2015) and English Wikipedia (Foundation), and four smaller BERT sizes, distilled from BERT$_{Large}$ (Turc et al., 2019).
GPT-2, an auto-regressive decoder-only LM, comes in three sizes, pretrained on the WebText dataset (Radford et al., 2019). OPT also comes in three sizes, pretrained on the union of five datasets (Zhang et al., 2022). LLaMA-2 was pretrained on two trillion tokens.
Vision representations. The visual representation of a concept is obtained by embedding the images available for the concept with a given VM encoder and then averaging these representations. When applying SegFormer, we average the patches' representations from the last hidden state as the basis for every image, whereas we use the penultimate hidden state for MAE models.$^{5}$ ResNet models generate a single vector per input image from the average pooling layer.
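The sketch below illustrates the ResNet branch of this procedure: each image's average-pooling output is taken as its feature vector, and the vectors of all images available for a concept are averaged. It is a minimal illustration assuming a torchvision ResNet-50 checkpoint; the image paths and the preprocessing pipeline are placeholders rather than the paper's exact setup.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Minimal sketch: ResNet-50 as a visual feature extractor whose average-pooling
# output (one 2048-d vector per image) is averaged over all images of a concept.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()  # drop the classification head, keep the avgpool output
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def concept_embedding(image_paths):
    """Average the avgpool features of all images depicting one concept."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in image_paths])
    feats = resnet(batch)        # (n_images, 2048)
    return feats.mean(dim=0)     # one vector per concept

# e.g. dog_vec = concept_embedding(["images/dog_001.jpg", "images/dog_002.jpg"])
```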
Language representations. The LMs included here were trained on text segments, so applying them to words in isolation could result in unpredictable behavior. We therefore represent words by embedding English Wikipedia sentences, using the token representations that form the concept and decontextualizing these representations by averaging across different sentences (Abdou et al., 2021). In the case of masked language models, we average the token representations forming the concept; otherwise, we use the representation of the last token within the concept (Zou et al., 2023).
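As a rough sketch of this decontextualization step for a masked LM (assuming Hugging Face transformers, a BERT-style model, and a fast tokenizer that exposes character offsets; the word and sentence list below are placeholders), a concept vector could be computed as follows:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Minimal sketch of decontextualized word embeddings for a masked LM:
# average the subword vectors covering the word in each sentence,
# then average across sentences.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def decontextualized_embedding(word, sentences):
    vecs = []
    for sent in sentences:
        enc = tok(sent, return_tensors="pt", return_offsets_mapping=True)
        offsets = enc.pop("offset_mapping")[0]    # per-token character spans
        hidden = lm(**enc).last_hidden_state[0]   # (seq_len, hidden_dim)
        start = sent.lower().find(word.lower())
        if start < 0:
            continue
        end = start + len(word)
        # tokens whose character span overlaps the word (special tokens have empty spans)
        idx = [i for i, (s, e) in enumerate(offsets.tolist()) if s < end and e > start and e > s]
        if idx:
            vecs.append(hidden[idx].mean(dim=0))
    return torch.stack(vecs).mean(dim=0)          # average over sentences

# e.g. vec = decontextualized_embedding("piano", ["She plays the piano.", "A piano has 88 keys."])
```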
$^{5}$We also experimented with utilizing the representations from the last hidden state; however, the results were not as promising as those obtained from the penultimate hidden state. Caron et al. (2021) demonstrate that the penultimate-layer features in ViTs trained with DINO exhibit strong correlations with saliency information in the visual input, such as object boundaries.
an averaging approach over the token representations forming the concept; otherwise, we use the last token within the concept (Zou et al., 2023).
null
438
75
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/40", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 71.54341125488281, "t": 776.0916748046875, "r": 290.26824951171875, "b": 738.40966796875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 150]}], "orig": "an averaging approach on the token representations forming the concept; otherwise, we choose for the last token within the concept (Zou et al., 2023).", "text": "an averaging approach on the token representations forming the concept; otherwise, we choose for the last token within the concept (Zou et al., 2023)."}
null
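A minimal sketch of the decontextualization procedure described in the two records above, assuming a Hugging Face masked LM (`bert-base-uncased`) and a list of Wikipedia `sentences` containing the concept string. The checkpoint and the character-offset span lookup are illustrative assumptions; for causal LMs the same loop would keep only the last token of the span.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def word_vector(word, sentences, masked_lm=True):
    vecs = []
    for sent in sentences:
        start = sent.lower().find(word.lower())
        if start < 0:
            continue
        end = start + len(word)
        enc = tok(sent, return_tensors="pt", return_offsets_mapping=True, truncation=True)
        offsets = enc.pop("offset_mapping")[0].tolist()
        hidden = lm(**enc).last_hidden_state[0]                 # (seq_len, dim)
        span = [i for i, (s, e) in enumerate(offsets)
                if s < end and e > start and e > s]             # tokens overlapping the concept span
        if not span:
            continue
        vecs.append(hidden[span].mean(0) if masked_lm else hidden[span[-1]])
    return torch.stack(vecs).mean(0)                            # decontextualize by averaging over sentences
```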
4e3bb844-dfc1-4d69-8906-303f9098040b
2302.06555v2.pdf
text
Linear projection. Since we are interested in the extent to which vision and language representations are isomorphic, we focus on linear projections. 6 Following Conneau et al. (2018), we use Procrustes analysis (Schönemann, 1966) to align the representations of VMs to those of LMs, given a bimodal dictionary (§ 4.1). Given the VM matrix A (i.e., the visual representations of the concepts) and the LM matrix B (i.e., the language representations of the concepts), we use Procrustes analysis to find the orthogonal matrix Ω that most closely maps the source space A onto the target space B , i.e., the solution to min$_{R}$ ∥ RA - B ∥$_{F}$ s.t. R$^{T}$R = I . Under the orthogonality constraint, this problem has the closed-form solution Ω = UV$^{T}$ , where U Σ V$^{T}$ = SVD ( BA$^{T}$ ) and SVD stands for singular value decomposition. We induce the alignment from a small set of dictionary pairs, evaluating it on held-out data (§ 4.2). Since the source and target spaces must have the same dimensionality, we employ principal component analysis (PCA) to reduce the dimensionality of the larger space in cases of a mismatch. 7
null
442
618
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/41", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 70.89579772949219, "t": 726.5772705078125, "r": 292.08294677734375, "b": 417.6516418457031, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 1121]}], "orig": "Linear projection. Since we are interested in the extent to which vision and language representations are isomorphic, we focus on linear projections. 6 Following Conneau et al. (2018), we use Procrustes analysis (Sch\u00f6nemann, 1966) to align the representations of VMs to those of LMs, given a bimodal dictionary (\u00a7 4.1). Given the VM matrix A (i.e., the visual representations of concepts) and the LM matrix B (i.e. the language representation of the concepts) we use Procrustes analysis to find the orthogonal matrix \u2126 that most closely maps source space A onto the target space B . Given the constrain of orthogonality the optimization \u2126 = min$_{R}$ \u2225 RA - B \u2225$_{F}$ , s.t. R $^{T}$R = I has the closed form solution \u2126 = UV $^{T}$,U \u03a3 V = SVD ( BA $^{T}$) , where SVD stands for singular value decomposition. We induce the alignment from a small set of dictionary pairs, evaluating it on held-out data (\u00a7 4.2). Given the necessity for both the source and target space to have the same dimensionality, we employ principal component analysis (PCA) to reduce the dimensionality of the larger space in cases of a mismatch. 7", "text": "Linear projection. Since we are interested in the extent to which vision and language representations are isomorphic, we focus on linear projections. 6 Following Conneau et al. (2018), we use Procrustes analysis (Sch\u00f6nemann, 1966) to align the representations of VMs to those of LMs, given a bimodal dictionary (\u00a7 4.1). Given the VM matrix A (i.e., the visual representations of concepts) and the LM matrix B (i.e. the language representation of the concepts) we use Procrustes analysis to find the orthogonal matrix \u2126 that most closely maps source space A onto the target space B . Given the constrain of orthogonality the optimization \u2126 = min$_{R}$ \u2225 RA - B \u2225$_{F}$ , s.t. R $^{T}$R = I has the closed form solution \u2126 = UV $^{T}$,U \u03a3 V = SVD ( BA $^{T}$) , where SVD stands for singular value decomposition. We induce the alignment from a small set of dictionary pairs, evaluating it on held-out data (\u00a7 4.2). Given the necessity for both the source and target space to have the same dimensionality, we employ principal component analysis (PCA) to reduce the dimensionality of the larger space in cases of a mismatch. 7"}
null
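The Procrustes step above has a direct closed-form implementation. A minimal sketch in row-vector convention (rows of A and B are paired vision/language vectors of equal dimensionality, e.g. after PCA), equivalent to the column-vector formulation in the text:

```python
import numpy as np

def procrustes(A, B):
    """Return the orthogonal W minimizing ||A @ W - B||_F for paired rows of A and B."""
    U, _, Vt = np.linalg.svd(A.T @ B)   # SVD of the cross-covariance between the two spaces
    return U @ Vt                        # maps vision vectors into the language space: A @ W ~ B
```

For reference, `scipy.linalg.orthogonal_procrustes(A, B)` computes the same map.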
db03dd4e-1bdf-433d-b4bd-05021749e207
2302.06555v2.pdf
section_header
4 Experimental Setup
null
242
24
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/42", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 4, "bbox": {"l": 71.08319091796875, "t": 403.1234130859375, "r": 191.88674926757812, "b": 390.90045166015625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 20]}], "orig": "4 Experimental Setup", "text": "4 Experimental Setup", "level": 1}
null
33c633e8-b91c-4fde-90cf-a16b59f21946
2302.06555v2.pdf
text
In this section, we discuss the compilation of our bimodal dictionaries (§ 4.1), as well as our evaluation metrics and baselines (§ 4.2).
null
441
76
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/43", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 71.15853881835938, "t": 380.3851623535156, "r": 291.63519287109375, "b": 342.40765380859375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 136]}], "orig": "In this section, we discuss details around bimodal dictionary compilation (\u00a7 4.1), evaluation metrics, as well as our baselines (\u00a7 4.2).", "text": "In this section, we discuss details around bimodal dictionary compilation (\u00a7 4.1), evaluation metrics, as well as our baselines (\u00a7 4.2)."}
null
288e8dfc-c707-4f66-b71e-f47ce763588a
2302.06555v2.pdf
section_header
4.1 Bimodal Dictionary Compilation
null
357
22
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/44", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 4, "bbox": {"l": 71.00336456298828, "t": 329.13519287109375, "r": 249.28378295898438, "b": 317.93157958984375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 34]}], "orig": "4.1 Bimodal Dictionary Compilation", "text": "4.1 Bimodal Dictionary Compilation", "level": 1}
null
1afe2f60-0a37-4fee-b2e2-ebdd5f6eb3c1
2302.06555v2.pdf
text
We build bimodal dictionaries of image-text pairs based on the ImageNet-21K dataset (Russakovsky et al., 2015) and the CLDI (cross-lingual dictionary induction) dataset (Hartmann and Søgaard, 2018). In ImageNet, a concept class has a unique ID and is represented by multiple images and one or more names (which we refer to as aliases ), many of which are multi-word expressions. We filter the data from ImageNet-21K, keeping classes with over 100 images available, aliases that appear at least five times in Wikipedia, and classes with at
null
442
295
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/45", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 70.98979187011719, "t": 310.3228759765625, "r": 292.07537841796875, "b": 163.00567626953125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 537]}], "orig": "We build bimodal dictionaries of image-text pairs based on the ImageNet21K dataset (Russakovsky et al., 2015) and the CLDI (cross-lingual dictionary induction) dataset (Hartmann and S\u00f8gaard, 2018). In ImageNet, a concept class has a unique ID and is represented by multiple images and one or more names (which we refer to as aliases ), many of which are multi-word expressions. We filter the data from ImageNet-21K: keeping classes with over 100 images available, aliases that appear at least five times in Wikipedia, and classes with at", "text": "We build bimodal dictionaries of image-text pairs based on the ImageNet21K dataset (Russakovsky et al., 2015) and the CLDI (cross-lingual dictionary induction) dataset (Hartmann and S\u00f8gaard, 2018). In ImageNet, a concept class has a unique ID and is represented by multiple images and one or more names (which we refer to as aliases ), many of which are multi-word expressions. We filter the data from ImageNet-21K: keeping classes with over 100 images available, aliases that appear at least five times in Wikipedia, and classes with at"}
null
c20d0945-d3c3-440c-9645-e23e6f264b44
2302.06555v2.pdf
footnote
$^{6}$For work on non-linear projection between representation spaces, see Nakashole (2018); Zhao and Gilman (2020); Glavaš and Vulić (2020).
null
441
63
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/46", "parent": {"cref": "#/body"}, "children": [], "label": "footnote", "prov": [{"page_no": 4, "bbox": {"l": 71.50354766845703, "t": 152.3046875, "r": 291.7594909667969, "b": 120.85226440429688, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 142]}], "orig": "$^{6}$For work on non-linear projection between representation spaces, see Nakashole (2018); Zhao and Gilman (2020); Glava\u0161 and Vuli\u00b4c (2020).", "text": "$^{6}$For work on non-linear projection between representation spaces, see Nakashole (2018); Zhao and Gilman (2020); Glava\u0161 and Vuli\u00b4c (2020)."}
null
8242f6ac-1f27-4b3f-9a93-f2fdc5fe0d78
2302.06555v2.pdf
footnote
$^{7}$The variance is largely retained for most models after dimensionality reduction, with some loss of information in a few cases. The cumulative explained variance ratios for the different models are presented in Table 8.
null
441
85
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/47", "parent": {"cref": "#/body"}, "children": [], "label": "footnote", "prov": [{"page_no": 4, "bbox": {"l": 71.3525619506836, "t": 118.89080810546875, "r": 291.7541809082031, "b": 76.62451171875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 233]}], "orig": "$^{7}$The variance is retained for most models after dimensionality reduction, except for a few cases where there is some loss of information. The cumulative of explained variance ratios for different models are presented in Table 8.", "text": "$^{7}$The variance is retained for most models after dimensionality reduction, except for a few cases where there is some loss of information. The cumulative of explained variance ratios for different models are presented in Table 8."}
null
65f280ac-2ac5-43b8-903c-d0c2b8f47d32
2302.06555v2.pdf
caption
Table 1: Statistics of the bimodal dictionaries.
null
408
21
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/48", "parent": {"cref": "#/body"}, "children": [], "label": "caption", "prov": [{"page_no": 4, "bbox": {"l": 313.8568420410156, "t": 716.146240234375, "r": 517.947021484375, "b": 705.5986328125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 48]}], "orig": "Table 1: Statistics of the bimodal dictionaries.", "text": "Table 1: Statistics of the bimodal dictionaries."}
null
958fcef6-7205-483c-a140-0bc9acb055f1
2302.06555v2.pdf
table
<table><tbody><tr><th>Set</th><th>Num. of classes</th><th>Num. of aliases</th><th>Num. of pairs</th></tr><tr><td>Only-1K</td><td>491</td><td>655</td><td>655</td></tr><tr><td>Exclude-1K</td><td>5,942</td><td>7,194</td><td>7,194</td></tr><tr><td>EN-CLDI</td><td>1,690</td><td>1,690</td><td>1,690</td></tr></tbody></table>
Table 1: Statistics of the bimodal dictionaries.
435
103
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/tables/0", "parent": {"cref": "#/body"}, "children": [], "label": "table", "prov": [{"page_no": 4, "bbox": {"l": 308.0361328125, "t": 778.875, "r": 525.6458740234375, "b": 727.481201171875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [{"cref": "#/texts/48"}], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 435.0, "height": 103.0}, "uri": null}, "data": {"table_cells": [{"bbox": {"l": 313.4814453125, "t": 774.6961669921875, "r": 323.4200134277344, "b": 767.7437744140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Set", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 361.808349609375, "t": 774.6961669921875, "r": 411.7344970703125, "b": 767.7437744140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "Num. of classes", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 420.2577209472656, "t": 774.6961669921875, "r": 469.3206787109375, "b": 767.7437744140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "Num. of aliases", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 477.8439025878906, "t": 774.6961669921875, "r": 521.2921142578125, "b": 767.7437744140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "Num. 
of pairs", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 313.4814453125, "t": 760.9743041992188, "r": 341.1274719238281, "b": 754.02197265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Only-1K", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 400.0690002441406, "t": 760.9743041992188, "r": 411.7339782714844, "b": 754.02197265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "491", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 457.6551818847656, "t": 760.9743041992188, "r": 469.3201599121094, "b": 754.02197265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "655", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 509.6265869140625, "t": 760.9743041992188, "r": 521.2916259765625, "b": 754.02197265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "655", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 313.4814453125, "t": 751.3157958984375, "r": 351.05828857421875, "b": 744.3634643554688, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Exclude-1K", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 394.2363586425781, "t": 751.3157958984375, "r": 411.7338562011719, "b": 744.3634643554688, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "5,942", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 451.8225402832031, "t": 751.3157958984375, "r": 469.3200378417969, "b": 744.3634643554688, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "7,194", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 503.7939758300781, "t": 751.3157958984375, "r": 521.2914428710938, "b": 744.3634643554688, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "7,194", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 313.4814453125, "t": 737.5939331054688, "r": 344.5802917480469, "b": 730.6416015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "EN-CLDI", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 394.2363586425781, "t": 737.5939331054688, "r": 411.7338562011719, "b": 730.6416015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "1,690", 
"column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 451.8225402832031, "t": 737.5939331054688, "r": 469.3200378417969, "b": 730.6416015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "1,690", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 503.7939758300781, "t": 737.5939331054688, "r": 521.2914428710938, "b": 730.6416015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "1,690", "column_header": false, "row_header": false, "row_section": false}], "num_rows": 4, "num_cols": 4, "grid": [[{"bbox": {"l": 313.4814453125, "t": 774.6961669921875, "r": 323.4200134277344, "b": 767.7437744140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Set", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 361.808349609375, "t": 774.6961669921875, "r": 411.7344970703125, "b": 767.7437744140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "Num. of classes", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 420.2577209472656, "t": 774.6961669921875, "r": 469.3206787109375, "b": 767.7437744140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "Num. of aliases", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 477.8439025878906, "t": 774.6961669921875, "r": 521.2921142578125, "b": 767.7437744140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "Num. 
of pairs", "column_header": true, "row_header": false, "row_section": false}], [{"bbox": {"l": 313.4814453125, "t": 760.9743041992188, "r": 341.1274719238281, "b": 754.02197265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Only-1K", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 400.0690002441406, "t": 760.9743041992188, "r": 411.7339782714844, "b": 754.02197265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "491", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 457.6551818847656, "t": 760.9743041992188, "r": 469.3201599121094, "b": 754.02197265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "655", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 509.6265869140625, "t": 760.9743041992188, "r": 521.2916259765625, "b": 754.02197265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "655", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 313.4814453125, "t": 751.3157958984375, "r": 351.05828857421875, "b": 744.3634643554688, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Exclude-1K", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 394.2363586425781, "t": 751.3157958984375, "r": 411.7338562011719, "b": 744.3634643554688, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "5,942", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 451.8225402832031, "t": 751.3157958984375, "r": 469.3200378417969, "b": 744.3634643554688, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "7,194", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 503.7939758300781, "t": 751.3157958984375, "r": 521.2914428710938, "b": 744.3634643554688, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "7,194", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 313.4814453125, "t": 737.5939331054688, "r": 344.5802917480469, "b": 730.6416015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "EN-CLDI", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 394.2363586425781, "t": 737.5939331054688, "r": 411.7338562011719, "b": 730.6416015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "1,690", 
"column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 451.8225402832031, "t": 737.5939331054688, "r": 469.3200378417969, "b": 730.6416015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "1,690", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 503.7939758300781, "t": 737.5939331054688, "r": 521.2914428710938, "b": 730.6416015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "1,690", "column_header": false, "row_header": false, "row_section": false}]]}}
null
27d964d3-7aea-42d2-aa69-cddcfe5b13ef
2302.06555v2.pdf
text
least one alias. As a result, 11,338 classes and 13,460 aliases meet the criteria. We further filter out aliases shared by two different class IDs, as well as aliases whose hyponyms are already in the alias set. 8 To avoid any form of bias, given that the VMs we experiment with have been pretrained on ImageNet-1K, we report results on ImageNet-21K excluding the concepts in ImageNet-1K (Exclude-1K).
null
442
237
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/49", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 306.3482360839844, "t": 682.125, "r": 527.3558349609375, "b": 563.3076171875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 411]}], "orig": "least one alias. As a result, 11,338 classes and 13,460 aliases meet the criteria. We further filter aliases that are shared by two different class IDs, and aliases for which their hyponyms are already in the aliases set. 8 To avoid any form of bias, given that the VMs we experiment with have been pretrained on ImageNet-1K, we report results on ImageNet-21K excluding the concepts in ImageNet-1K (Exclude-1K).", "text": "least one alias. As a result, 11,338 classes and 13,460 aliases meet the criteria. We further filter aliases that are shared by two different class IDs, and aliases for which their hyponyms are already in the aliases set. 8 To avoid any form of bias, given that the VMs we experiment with have been pretrained on ImageNet-1K, we report results on ImageNet-21K excluding the concepts in ImageNet-1K (Exclude-1K)."}
null
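Purely for illustration, the filtering described across the two records above might look like the following sketch. Here `classes`, `wiki_freq`, and `in1k_ids` are hypothetical stand-ins for the ImageNet-21K metadata, Wikipedia alias counts, and the ImageNet-1K class IDs; the WordNet-based hyponym filter is omitted for brevity.

```python
from collections import Counter

kept = {}
for cid, info in classes.items():                        # info: {"aliases": [...], "n_images": int}
    if info["n_images"] <= 100:                          # keep classes with over 100 images
        continue
    aliases = [a for a in info["aliases"] if wiki_freq.get(a, 0) >= 5]
    if aliases:                                          # keep classes with at least one alias left
        kept[cid] = aliases

counts = Counter(a for als in kept.values() for a in als)
kept = {cid: [a for a in als if counts[a] == 1] for cid, als in kept.items()}  # drop aliases shared by two IDs
kept = {cid: als for cid, als in kept.items() if als}

exclude_1k = {cid: als for cid, als in kept.items() if cid not in in1k_ids}    # the Exclude-1K dictionary
```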
6a58186a-e7f7-4af5-bed5-9cd344e844e7
2302.06555v2.pdf
text
One important limitation of the Exclude-1K bimodal dictionary is that all concepts are nouns. Therefore, to investigate how our results generalize to other parts of speech (POS), we also use the English subset of the CLDI dataset (EN-CLDI), which contains images paired with verbs and adjectives. Each word within this set is unique and paired with at least 22 images. Final statistics of the processed datasets are reported in Table 1.
null
443
238
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/50", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 306.14984130859375, "t": 560.3670654296875, "r": 527.4514770507812, "b": 441.3656311035156, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 432]}], "orig": "One important limitation of the Exclude-1K bimodal dictionary is that all concepts are nouns. Therefore, to investigate how our results generalize to other parts of speech (POS), we also use the English subset of CLDI dataset (EN-CLDI), which contains images paired with verbs and adjectives. Each word within this set is unique and paired with at least 22 images. Final statistics of the processed datasets are reported in Table 1.", "text": "One important limitation of the Exclude-1K bimodal dictionary is that all concepts are nouns. Therefore, to investigate how our results generalize to other parts of speech (POS), we also use the English subset of CLDI dataset (EN-CLDI), which contains images paired with verbs and adjectives. Each word within this set is unique and paired with at least 22 images. Final statistics of the processed datasets are reported in Table 1."}
null
4486b15d-794a-4f20-87c0-89454067deb3
2302.06555v2.pdf
text
The pairs in these bimodal dictionaries are split 70-30 for training and testing based on the class IDs to avoid train-test leakage. 9 We compute five such splits at random and report averaged results. See § 6 for the impact of training set size variations.
null
442
131
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/51", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 306.2119140625, "t": 438.3727722167969, "r": 527.4554443359375, "b": 373.1012878417969, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 257]}], "orig": "The pairs in these bimodal dictionaries are split 70-30 for training and testing based on the class IDs to avoid train-test leakage. 9 We compute five such splits at random and report averaged results. See \u00a7 6 for the impact of training set size variations.", "text": "The pairs in these bimodal dictionaries are split 70-30 for training and testing based on the class IDs to avoid train-test leakage. 9 We compute five such splits at random and report averaged results. See \u00a7 6 for the impact of training set size variations."}
null
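A sketch of the split protocol described in the record above; the layout of `pairs` is a hypothetical convenience, not the authors' data format.

```python
import random

def split_by_class(pairs, train_frac=0.7, seed=0):
    """pairs: list of (class_id, alias, image_vector, word_vector) tuples."""
    ids = sorted({cid for cid, *_ in pairs})
    random.Random(seed).shuffle(ids)
    cut = int(train_frac * len(ids))
    train_ids = set(ids[:cut])                            # split on class IDs, not on pairs,
    train = [p for p in pairs if p[0] in train_ids]       # so no class leaks across the split
    test = [p for p in pairs if p[0] not in train_ids]
    return train, test

splits = [split_by_class(pairs, seed=s) for s in range(5)]   # metrics are averaged over five splits
```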
57508efd-4361-4518-8d6f-0a141a1b30a1
2302.06555v2.pdf
section_header
4.2 Evaluation
null
152
22
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/52", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 4, "bbox": {"l": 306.46258544921875, "t": 362.63116455078125, "r": 382.63604736328125, "b": 351.5690002441406, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 14]}], "orig": "4.2 Evaluation", "text": "4.2 Evaluation", "level": 1}
null
fb9d2a8c-5170-4f2d-99f2-e9ede95d7298
2302.06555v2.pdf
text
We induce a linear mapping Ω based on training image-text pairs sampled from A and B , respectively. We then evaluate how close A Ω is to B by computing retrieval precision on held-out image-text pairs. To make the retrieval task as challenging as possible, the target space B is expanded with 65,599 words from an English wordlist in addition to the 13,460 aliases, resulting in a total of 79,059 aliases in the final target space.
null
442
240
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/53", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 306.2489013671875, "t": 345.0776672363281, "r": 527.352783203125, "b": 225.149169921875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 427]}], "orig": "We induce a linear mapping \u2126 based on training image-text pairs sampled from A and B , respectively. We then evaluate how close A \u2126 is to B by computing retrieval precision on held-out imagetext pairs. To make the retrieval task as challenging as possible, the target space B is expanded with 65,599 words from an English wordlist in addition to 13,460 aliases, resulting in a total of 79,059 aliases in the final target space.", "text": "We induce a linear mapping \u2126 based on training image-text pairs sampled from A and B , respectively. We then evaluate how close A \u2126 is to B by computing retrieval precision on held-out imagetext pairs. To make the retrieval task as challenging as possible, the target space B is expanded with 65,599 words from an English wordlist in addition to 13,460 aliases, resulting in a total of 79,059 aliases in the final target space."}
null
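The retrieval evaluation above can be sketched as follows. Cosine nearest-neighbour retrieval is an assumption here (Conneau et al. also use CSLS); `W` is the learned map in row-vector form, `test_img` holds held-out image vectors, `vocab` holds word vectors for all 79,059 candidate aliases, and `gold[i]` is the set of correct vocabulary indices for test row i.

```python
import numpy as np

def precision_at_k(W, test_img, vocab, gold, k=10):
    pred = test_img @ W
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    vocab_n = vocab / np.linalg.norm(vocab, axis=1, keepdims=True)
    sims = pred @ vocab_n.T                                  # (m, V) cosine similarities
    topk = np.argsort(-sims, axis=1)[:, :k]                  # k nearest aliases per image concept
    hits = [len(set(row) & gold[i]) > 0 for i, row in enumerate(topk)]
    return 100.0 * np.mean(hits)                             # reported as a percentage
```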
e4e28a87-f60f-436a-b661-d73190c39776
2302.06555v2.pdf
text
Metrics. We evaluate alignment in terms of precision-at- k (P@ k ), a well-established metric employed in the evaluation of multilingual word embeddings (Conneau et al., 2018), with k ∈ { 1 , 10 , 100 } . 10 Note that this performance metric
null
438
134
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/54", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 306.3006286621094, "t": 215.8372802734375, "r": 525.5789184570312, "b": 148.8270263671875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 239]}], "orig": "Metrics. We evaluate alignment in terms of precision-atk (P@ k ), a well-established metric employed in the evaluation of multilingual word embeddings (Conneau et al., 2018), with k \u2208 { 1 , 10 , 100 } . 10 Note that this performance metric", "text": "Metrics. We evaluate alignment in terms of precision-atk (P@ k ), a well-established metric employed in the evaluation of multilingual word embeddings (Conneau et al., 2018), with k \u2208 { 1 , 10 , 100 } . 10 Note that this performance metric"}
null
510f1161-7607-48cd-8001-42c1c446291e
2302.06555v2.pdf
footnote
$^{8}$We obtain the aliases' hypernyms and hyponyms from Princeton WordNet (Fellbaum, 2010).
null
438
41
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/55", "parent": {"cref": "#/body"}, "children": [], "label": "footnote", "prov": [{"page_no": 4, "bbox": {"l": 306.5758361816406, "t": 141.65576171875, "r": 525.547119140625, "b": 120.88531494140625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 95]}], "orig": "$^{8}$We obtain the aliases hypernyms and hyponyms from the Princeton WordNet (Fellbaum, 2010).", "text": "$^{8}$We obtain the aliases hypernyms and hyponyms from the Princeton WordNet (Fellbaum, 2010)."}
null
62693f5e-2a68-4dc6-9215-cfcc0d77a24e
2302.06555v2.pdf
footnote
$^{9}$In the EN-CLDI set, we simply use words to mitigate the risk of train-test leakage.
null
437
40
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/56", "parent": {"cref": "#/body"}, "children": [], "label": "footnote", "prov": [{"page_no": 4, "bbox": {"l": 306.7635803222656, "t": 119.1673583984375, "r": 525.5420532226562, "b": 98.96624755859375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 89]}], "orig": "$^{9}$In the EN-CLDI set, we simply use words to mitigate the risk of train-test leakage.", "text": "$^{9}$In the EN-CLDI set, we simply use words to mitigate the risk of train-test leakage."}
null
817261b3-6267-4805-89fa-7af4c9287a4f
2302.06555v2.pdf
footnote
$^{10}$For example, we could use the mapping of the image of an apple into the word ‘apple’, and the mapping of the image
null
439
42
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/57", "parent": {"cref": "#/body"}, "children": [], "label": "footnote", "prov": [{"page_no": 4, "bbox": {"l": 306.67706298828125, "t": 96.8956298828125, "r": 525.7845458984375, "b": 75.97491455078125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 121]}], "orig": "$^{10}$For example, we could use the mapping of the image of an apple into the word \u2018apple\u2019, and the mapping of the image", "text": "$^{10}$For example, we could use the mapping of the image of an apple into the word \u2018apple\u2019, and the mapping of the image"}
null
e038a730-a293-4950-a8fd-df077cd03f23
2302.06555v2.pdf
caption
Table 2: Alignment results for our baselines. All Precision@ k scores are reported as percentages.
null
439
49
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/58", "parent": {"cref": "#/body"}, "children": [], "label": "caption", "prov": [{"page_no": 5, "bbox": {"l": 71.00687408447266, "t": 713.8568115234375, "r": 290.2685546875, "b": 689.1964721679688, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 101]}], "orig": "Table 2: Alignment results for our baselines. All the Precision@ k scores are reported in percentage.", "text": "Table 2: Alignment results for our baselines. All the Precision@ k scores are reported in percentage."}
null
908f9f51-c9ec-454a-8c0e-c609f8146b96
2302.06555v2.pdf
table
<table><tbody><tr><th>Baseline</th><th>P@1</th><th>P@10</th><th>P@100</th></tr><tr><td>Random retrieval</td><td>0.0015</td><td>0.0153</td><td>0.1531</td></tr><tr><td>Length-frequency alignment</td><td>0.0032</td><td>0.0127</td><td>0.6053</td></tr><tr><td>Non-isomorphic alignment</td><td>0.0000</td><td>0.0121</td><td>0.1105</td></tr></tbody></table>
Table 2: Alignment results for our baselines. All the Precision@ k scores are reported in percentage.
437
111
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/tables/1", "parent": {"cref": "#/body"}, "children": [], "label": "table", "prov": [{"page_no": 5, "bbox": {"l": 72.87548065185547, "t": 779.512451171875, "r": 291.2668762207031, "b": 723.8392333984375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [{"cref": "#/texts/58"}], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 437.0, "height": 111.0}, "uri": null}, "data": {"table_cells": [{"bbox": {"l": 79.09597778320312, "t": 774.0670776367188, "r": 109.7222900390625, "b": 766.1170654296875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Baseline", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 197.4066619873047, "t": 774.0670776367188, "r": 214.98745727539062, "b": 766.1170654296875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "P@1", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 227.16151428222656, "t": 774.0670776367188, "r": 249.1886444091797, "b": 766.1170654296875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "P@10", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 258.9349670410156, "t": 774.0670776367188, "r": 285.4084167480469, "b": 766.1170654296875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "P@100", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 79.09597778320312, "t": 758.3760375976562, "r": 140.64208984375, "b": 750.426025390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Random retrieval", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 190.5324249267578, "t": 758.3760375976562, "r": 214.98724365234375, "b": 750.426025390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.0015", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 224.7335968017578, "t": 758.3760375976562, "r": 249.18841552734375, "b": 750.426025390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "0.0153", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 260.9533996582031, "t": 758.3760375976562, "r": 285.408203125, "b": 750.426025390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.1531", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 79.09597778320312, "t": 747.3314208984375, "r": 180.4634246826172, "b": 739.3814086914062, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, 
"start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Length-frequency alignment", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 190.5324249267578, "t": 747.3314819335938, "r": 214.98724365234375, "b": 739.3814697265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.0032", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 224.7335968017578, "t": 747.3314819335938, "r": 249.18841552734375, "b": 739.3814697265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "0.0127", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 260.9533996582031, "t": 747.3314819335938, "r": 285.408203125, "b": 739.3814697265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.6053", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 79.09597778320312, "t": 736.286865234375, "r": 175.18118286132812, "b": 728.3368530273438, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Non-isomorphic alignment", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 190.5324249267578, "t": 736.286865234375, "r": 214.98724365234375, "b": 728.3368530273438, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.0000", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 224.7335968017578, "t": 736.286865234375, "r": 249.18841552734375, "b": 728.3368530273438, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "0.0121", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 260.9533996582031, "t": 736.286865234375, "r": 285.408203125, "b": 728.3368530273438, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.1105", "column_header": false, "row_header": false, "row_section": false}], "num_rows": 4, "num_cols": 4, "grid": [[{"bbox": {"l": 79.09597778320312, "t": 774.0670776367188, "r": 109.7222900390625, "b": 766.1170654296875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Baseline", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 197.4066619873047, "t": 774.0670776367188, "r": 214.98745727539062, "b": 766.1170654296875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "P@1", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 227.16151428222656, "t": 774.0670776367188, "r": 249.1886444091797, "b": 766.1170654296875, "coord_origin": 
"BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "P@10", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 258.9349670410156, "t": 774.0670776367188, "r": 285.4084167480469, "b": 766.1170654296875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "P@100", "column_header": true, "row_header": false, "row_section": false}], [{"bbox": {"l": 79.09597778320312, "t": 758.3760375976562, "r": 140.64208984375, "b": 750.426025390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Random retrieval", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 190.5324249267578, "t": 758.3760375976562, "r": 214.98724365234375, "b": 750.426025390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.0015", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 224.7335968017578, "t": 758.3760375976562, "r": 249.18841552734375, "b": 750.426025390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "0.0153", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 260.9533996582031, "t": 758.3760375976562, "r": 285.408203125, "b": 750.426025390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.1531", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 79.09597778320312, "t": 747.3314208984375, "r": 180.4634246826172, "b": 739.3814086914062, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Length-frequency alignment", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 190.5324249267578, "t": 747.3314819335938, "r": 214.98724365234375, "b": 739.3814697265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.0032", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 224.7335968017578, "t": 747.3314819335938, "r": 249.18841552734375, "b": 739.3814697265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "0.0127", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 260.9533996582031, "t": 747.3314819335938, "r": 285.408203125, "b": 739.3814697265625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.6053", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 79.09597778320312, "t": 736.286865234375, "r": 175.18118286132812, "b": 
728.3368530273438, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Non-isomorphic alignment", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 190.5324249267578, "t": 736.286865234375, "r": 214.98724365234375, "b": 728.3368530273438, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.0000", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 224.7335968017578, "t": 736.286865234375, "r": 249.18841552734375, "b": 728.3368530273438, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "0.0121", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 260.9533996582031, "t": 736.286865234375, "r": 285.408203125, "b": 728.3368530273438, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.1105", "column_header": false, "row_header": false, "row_section": false}]]}}
null
9cffc2a5-950c-425e-848f-99d35b459371
2302.06555v2.pdf
text
is much more conservative than other metrics used for similar problems, including pairwise matching accuracy, percentile rank, and Pearson correlation (Minnema and Herbelot, 2019). Pairwise matching accuracy and percentile rank have random baseline scores of 0.5, and they converge in the limit. If a has a percentile rank of p in a list A , it will be higher than a random member of A p percent of the time. Pearson correlation is monotonically increasing with pairwise matching accuracy, but P@ k scores are more conservative than any of them for reasonably small values of k . In our case, our target space is 79,059 words, so it is possible to have P@100 values of 0.0 and yet still have near-perfect pairwise matching accuracy, percentile rank, and Pearson correlation scores. P@ k scores also have the advantage that they are intuitive and practically relevant, e.g., for decoding.
null
441
485
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/59", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 71.28717041015625, "t": 664.7708740234375, "r": 292.0834045410156, "b": 422.3381652832031, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 887]}], "orig": "is much more conservative than other metrics used for similar problems, including pairwise matching accuracy, percentile rank, and Pearson correlation (Minnema and Herbelot, 2019). Pairwise matching accuracy and percentile rank have random baseline scores of 0.5, and they converge in the limit. If a has a percentile rank of p in a list A , it will be higher than a random member of A p percent of the time. Pearson correlation is monotonically increasing with pairwise matching accuracy, but P@ k scores are more conservative than any of them for reasonably small values of k . In our case, our target space is 79,059 words, so it is possible to have P@100 values of 0.0 and yet still have near-perfect pairwise matching accuracy, percentile rank, and Pearson correlation scores. P@ k scores also have the advantage that they are intuitive and practically relevant, e.g., for decoding.", "text": "is much more conservative than other metrics used for similar problems, including pairwise matching accuracy, percentile rank, and Pearson correlation (Minnema and Herbelot, 2019). Pairwise matching accuracy and percentile rank have random baseline scores of 0.5, and they converge in the limit. If a has a percentile rank of p in a list A , it will be higher than a random member of A p percent of the time. Pearson correlation is monotonically increasing with pairwise matching accuracy, but P@ k scores are more conservative than any of them for reasonably small values of k . In our case, our target space is 79,059 words, so it is possible to have P@100 values of 0.0 and yet still have near-perfect pairwise matching accuracy, percentile rank, and Pearson correlation scores. P@ k scores also have the advantage that they are intuitive and practically relevant, e.g., for decoding."}
null
5c634071-8d05-4683-9f05-c402eec59de6
2302.06555v2.pdf
text
Random retrieval baseline. Our target space of 79,059 words makes the random retrieval baseline:
null
442
49
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/60", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 71.1332015991211, "t": 412.76654052734375, "r": 291.7831726074219, "b": 388.6746520996094, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 96]}], "orig": "Random retrieval baseline. Our target space of 79,059 words makes the random retrieval baseline:", "text": "Random retrieval baseline. Our target space of 79,059 words makes the random retrieval baseline:"}
null
1278ce02-1320-42a5-9cfb-d30565291a10
2302.06555v2.pdf
formula
P@1 = \frac{1}{N} \sum_{i=1}^{N} \frac{n_{i}}{U} \quad (1)
null
301
70
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/61", "parent": {"cref": "#/body"}, "children": [], "label": "formula", "prov": [{"page_no": 5, "bbox": {"l": 140.55914306640625, "t": 377.66351318359375, "r": 290.9989929199219, "b": 342.2218017578125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 33]}], "orig": "P@ 1 = 1 N N \u2211 i =1 n$_{i}$ U (1)", "text": "P@ 1 = 1 N N \u2211 i =1 n$_{i}$ U (1)"}
null
89af5241-b077-4675-971d-19f36ea358fa
2302.06555v2.pdf
text
where N represents the total number of image classes; i iterates over the image classes; n$_{i}$ denotes the number of aliases for image class i ; and U refers to the total number of unique aliases. From Equation 1, we get P@1 ≈ 0.0015%.
null
441
130
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/62", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 71.25822448730469, "t": 331.0116271972656, "r": 292.0785217285156, "b": 265.8031005859375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 234]}], "orig": "where N represents the total number of image classes; i iterates over each image class; n$_{i}$ denotes the number of labels for image class i ; U refers to the total number of unique aliases. From Equation 1, we get P@1 \u2248 0 . 0015% .", "text": "where N represents the total number of image classes; i iterates over each image class; n$_{i}$ denotes the number of labels for image class i ; U refers to the total number of unique aliases. From Equation 1, we get P@1 \u2248 0 . 0015% ."}
null
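As a quick sanity check of Equation 1 with the statistics reported above (11,338 classes and 13,460 aliases, so n$_{i}$ ≈ 1.19 on average, and U = 79,059):

```python
n_classes, n_aliases, U = 11_338, 13_460, 79_059
p_at_1 = (n_aliases / n_classes) / U * 100     # average n_i divided by U, in percent
print(f"{p_at_1:.4f}%")                        # -> 0.0015%
```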
55e1d195-297b-4a19-9e09-b0abb40ec9f9
2302.06555v2.pdf
text
Length-frequency alignment baseline. The random retrieval baseline tells us how well we can align representations across the two modalities in the absence of any signal (by chance). However, the fact that we can do better than a random baseline does not, strictly speaking, prove that our models partially converge toward any sophisticated form of modeling the world. Maybe they simply
null
442
213
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/63", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 71.22360229492188, "t": 255.29852294921875, "r": 292.08270263671875, "b": 149.0504150390625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 386]}], "orig": "Length-frequency alignment baseline. The random retrieval baseline tells us how well we can align representations across the two modalities in the absence of any signal (by chance). However, the fact that we can do better than a random baseline, does not, strictly speaking, prove that our models partially converge toward any sophisticated form of modeling the world. Maybe they simply", "text": "Length-frequency alignment baseline. The random retrieval baseline tells us how well we can align representations across the two modalities in the absence of any signal (by chance). However, the fact that we can do better than a random baseline, does not, strictly speaking, prove that our models partially converge toward any sophisticated form of modeling the world. Maybe they simply"}
null
315395b8-2614-46b1-9ef5-e562835a112b
2302.06555v2.pdf
footnote
of a banana into the word ‘banana’, as training pairs to induce a mapping Ω . If Ω then maps the image of a lemon onto the word ‘lemon’ as its nearest neighbor, we say that the precision-at-one for this mapping is 100%. If two target aliases were listed in the bimodal dictionary for the source image, mapping the image onto either of them would result in P@ 1 = 100% .
null
442
130
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/64", "parent": {"cref": "#/body"}, "children": [], "label": "footnote", "prov": [{"page_no": 5, "bbox": {"l": 70.99034881591797, "t": 141.355224609375, "r": 291.7580261230469, "b": 76.57489013671875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 368]}], "orig": "of a banana into the word \u2018banana\u2019, as training pairs to induce a mapping \u2126 . If \u2126 then maps the image of a lemon onto the word \u2018lemon\u2019 as its nearest neighbor, we say that the precisionat-one for this mapping is 100%. If two target aliases were listed in the bimodal dictionary for the source image, mapping the image onto either of them would result in P@ 1 = 100% .", "text": "of a banana into the word \u2018banana\u2019, as training pairs to induce a mapping \u2126 . If \u2126 then maps the image of a lemon onto the word \u2018lemon\u2019 as its nearest neighbor, we say that the precisionat-one for this mapping is 100%. If two target aliases were listed in the bimodal dictionary for the source image, mapping the image onto either of them would result in P@ 1 = 100% ."}
null
801cd3c0-c84e-4cd1-9371-710f36591cc2
2302.06555v2.pdf
caption
Figure 3: t-SNE plot of 5 words mapped from MAE$_{Huge}$ (blue) to OPT$_{30B}$ (orange) using Procrustes analysis. The green points represent the mapped MAE$_{Huge}$ embeddings.
null
442
105
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/65", "parent": {"cref": "#/body"}, "children": [], "label": "caption", "prov": [{"page_no": 5, "bbox": {"l": 306.365478515625, "t": 638.8778076171875, "r": 527.3575439453125, "b": 586.4058837890625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 170]}], "orig": "Figure 3: t-SNE plot of 5 words mapped from MAE$_{Huge}$ (blue) to OPT$_{30B}$ (orange) using Procrustes analysis. The green represent the mapped MAE$_{Huge}$ embeddings.", "text": "Figure 3: t-SNE plot of 5 words mapped from MAE$_{Huge}$ (blue) to OPT$_{30B}$ (orange) using Procrustes analysis. The green represent the mapped MAE$_{Huge}$ embeddings."}
null
00842dd0-5503-45db-93eb-e97de5e70515
2302.06555v2.pdf
picture
null
Figure 3: t-SNE plot of 5 words mapped from MAE$_{Huge}$ (blue) to OPT$_{30B}$ (orange) using Procrustes analysis. The green represent the mapped MAE$_{Huge}$ embeddings.
432
255
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/9", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 5, "bbox": {"l": 308.20428466796875, "t": 778.9642944335938, "r": 524.2200927734375, "b": 651.4071655273438, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 170]}], "captions": [{"cref": "#/texts/65"}], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 432.0, "height": 255.0}, "uri": null}, "annotations": []}
null
a114e3d5-0d2c-4bc9-96a2-f3c4388fe4b6
2302.06555v2.pdf
text
pick up on shallow characteristics shared across the two spaces. One example is frequency: frequent words may refer to frequently depicted objects. Learning what is rare is learning about the world, but more is at stake in the debate around whether LMs understand. Or consider length: word length may correlate with the structural complexity of objects (in some way), and maybe this is what drives our alignment precision? To control for such effects, we run a second baseline aligning representations from computer vision models to two-dimensional word representations, representing words by their length and frequency. We collected frequency data based on English Wikipedia using NLTK (Bird et al., 2009) for all aliases within our target space. We use PCA and Procrustes Analysis or ridge regression (Toneva and Wehbe, 2019) to map into the length-frequency space and report the best of those as a second, stronger baseline.
null
443
509
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/66", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 306.16961669921875, "t": 560.0623779296875, "r": 527.3591918945312, "b": 305.34613037109375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 927]}], "orig": "pick up on shallow characteristics shared across the two spaces. One example is frequency: frequent words may refer to frequently depicted objects. Learning what is rare is learning about the world, but more is at stake in the debate around whether LMs understand. Or consider length: word length may correlate with the structural complexity of objects (in some way), and maybe this is what drives our alignment precision? To control for such effects, we run a second baseline aligning representations from computer vision models to two-dimensional word representations, representing words by their length and frequency. We collected frequency data based on English Wikipedia using NLTK (Bird et al., 2009) for all aliases within our target space. We use PCA and Procrustes Analysis or ridge regression (Toneva and Wehbe, 2019) to map into the length-frequency space and report the best of those as a second, stronger baseline.", "text": "pick up on shallow characteristics shared across the two spaces. One example is frequency: frequent words may refer to frequently depicted objects. Learning what is rare is learning about the world, but more is at stake in the debate around whether LMs understand. Or consider length: word length may correlate with the structural complexity of objects (in some way), and maybe this is what drives our alignment precision? To control for such effects, we run a second baseline aligning representations from computer vision models to two-dimensional word representations, representing words by their length and frequency. We collected frequency data based on English Wikipedia using NLTK (Bird et al., 2009) for all aliases within our target space. We use PCA and Procrustes Analysis or ridge regression (Toneva and Wehbe, 2019) to map into the length-frequency space and report the best of those as a second, stronger baseline."}
null
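Purely as an illustrative sketch of the control described in the record above (not the authors' implementation): one way to build the two-dimensional length-frequency target space and fit the ridge-regression variant of the mapping. scikit-learn is assumed, frequency counts are taken as pre-computed, and the use of log frequency is an assumption rather than something stated in the text.

    import numpy as np
    from sklearn.linear_model import Ridge

    def length_frequency_targets(words, freq_counts):
        # Two features per word: character length and (assumed) log frequency.
        return np.array([[len(w), np.log1p(freq_counts.get(w, 0))] for w in words],
                        dtype=float)

    def fit_length_frequency_baseline(vision_embeddings, words, freq_counts, alpha=1.0):
        # Ridge regression from image embeddings onto the 2-D length-frequency space.
        targets = length_frequency_targets(words, freq_counts)
        return Ridge(alpha=alpha).fit(vision_embeddings, targets)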
e58f7f49-4941-4c85-a409-6e922bdc74fb
2302.06555v2.pdf
text
Non-isomorphic alignment baseline. The former two baselines examine the possibility of aligning representations across two modalities based on chance or shallow signals. While informative, neither strictly demonstrates that a linear projection cannot effectively establish a connection between two non-isomorphic representation spaces, potentially outperforming the random or length-frequency baselines. To rigorously explore this, we disrupt the relationship between words and their corresponding representations by shuffling them. This permutation ensures that the source and target spaces become non-isomorphic. Specifically, we shuffle OPT$_{30B}$ three times at random and report the alignment results between those and the original OPT$_{30B}$; we use the same Procrustes analysis for
null
443
428
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/67", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 306.20733642578125, "t": 289.99688720703125, "r": 527.4515380859375, "b": 76.0753173828125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 786]}], "orig": "Non-isomorphic alignment baseline. The former two baselines examine the possibility of aligning representations across two modalities based on chance or shallow signals. While informative, neither strictly demonstrates that a linear projection cannot effectively establish a connection between two non-isomorphic representation spaces, potentially outperforming the random or lengthfrequency baselines. To rigorously explore this, we disrupt the relationship between words and their corresponding representations by shuffling them. This permutation ensures that the source and target spaces become non-isomorphic. Specifically, we shuffled OPT$_{30B}$ three times at random and report the alignment results between those and original OPT$_{30B}$, we use the same Procrustes analysis for", "text": "Non-isomorphic alignment baseline. The former two baselines examine the possibility of aligning representations across two modalities based on chance or shallow signals. While informative, neither strictly demonstrates that a linear projection cannot effectively establish a connection between two non-isomorphic representation spaces, potentially outperforming the random or lengthfrequency baselines. To rigorously explore this, we disrupt the relationship between words and their corresponding representations by shuffling them. This permutation ensures that the source and target spaces become non-isomorphic. Specifically, we shuffled OPT$_{30B}$ three times at random and report the alignment results between those and original OPT$_{30B}$, we use the same Procrustes analysis for"}
null
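A minimal, hypothetical sketch of the two ingredients described in the record above: the Procrustes alignment and the row shuffling that yields a non-isomorphic space. scipy's orthogonal_procrustes is assumed, and both spaces are taken to have the same dimensionality (e.g. after PCA).

    import numpy as np
    from scipy.linalg import orthogonal_procrustes

    def procrustes_map(src, tgt):
        # Orthogonal matrix R minimising ||src @ R - tgt||_F; src and tgt are
        # row-aligned (n_pairs, dim) matrices.
        R, _ = orthogonal_procrustes(src, tgt)
        return src @ R

    def shuffled_space(emb, seed=0):
        # Permute rows to break the word-vector correspondence, giving a
        # non-isomorphic counterpart of the original space.
        perm = np.random.default_rng(seed).permutation(len(emb))
        return emb[perm]

Aligning a space to shuffled_space(original) with procrustes_map and scoring it with a precision-at-k routine such as the one sketched earlier would give a baseline of the kind described here.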
46f07d41-0be5-4d86-9cf0-7991da5d9684
2302.06555v2.pdf
caption
Figure 4: LMs converge toward the geometry of visual models as they grow larger on the Exclude-1K set.
null
893
21
72
image/png
7
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/68", "parent": {"cref": "#/body"}, "children": [], "label": "caption", "prov": [{"page_no": 6, "bbox": {"l": 73.5337905883789, "t": 389.9848327636719, "r": 519.9508056640625, "b": 379.31463623046875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 98]}], "orig": "Figure 4: LMs converge toward the geometry of visual models as they grow larger on Exclude-1K set.", "text": "Figure 4: LMs converge toward the geometry of visual models as they grow larger on Exclude-1K set."}
null
0ab8b3df-d493-4755-90ce-a662ff499418
2302.06555v2.pdf
picture
null
Figure 4: LMs converge toward the geometry of visual models as they grow larger on the Exclude-1K set.
853
689
72
image/png
7
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/10", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 6, "bbox": {"l": 77.1706314086914, "t": 752.3418579101562, "r": 503.3846435546875, "b": 408.00994873046875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 98]}], "captions": [{"cref": "#/texts/68"}], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 853.0, "height": 689.0}, "uri": null}, "annotations": []}
null
1a77051c-a165-494b-ba02-f228783e0409
2302.06555v2.pdf
text
computing the alignment. Table 2 presents a comparison of the three different baselines. All baselines have P@100 well below 1%. Our mappings between VMs and LMs score much higher (up to 64%), showing the strength of the correlation between the geometries induced by these models with respect to a conservative performance metric.
null
441
185
72
image/png
7
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/69", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 6, "bbox": {"l": 71.34822082519531, "t": 354.8316955566406, "r": 292.0813903808594, "b": 262.27301025390625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 330]}], "orig": "computing the alignment. Table 2 presents a comparison of the three different baselines. All baselines have P@100 well below 1%. Our mappings between VMs and LMs score much higher (up to 64%), showing the strength of the correlation between the geometries induced by these models with respect to a conservative performance metric.", "text": "computing the alignment. Table 2 presents a comparison of the three different baselines. All baselines have P@100 well below 1%. Our mappings between VMs and LMs score much higher (up to 64%), showing the strength of the correlation between the geometries induced by these models with respect to a conservative performance metric."}
null
84a88245-492a-4513-8107-e1c0ba51c59d
2302.06555v2.pdf
section_header
5 Results
null
112
23
72
image/png
7
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/70", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 6, "bbox": {"l": 71.40553283691406, "t": 246.7689208984375, "r": 127.26032257080078, "b": 235.4713592529297, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 9]}], "orig": "5 Results", "text": "5 Results", "level": 1}
null
8442b477-b8e0-451c-83fe-54f5c4bac58c
2302.06555v2.pdf
text
Similarities between visual and textual representations and how they are recovered through Procrustes Analysis are visualized through t-SNE in Figure 3. Our main results for nine VMs and all LMs are presented in Figure 4. The best P@100 scores are around 64%, with baseline scores lower than 1% (Table 2). In general, even the smallest language models outperform the baselines by orders of magnitude. We focus mainly on P@10 and P@100 scores because P@1 only allows one surface form to express a visual concept, but in reality,
null
442
295
72
image/png
7
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/71", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 6, "bbox": {"l": 71.2156753540039, "t": 222.7291259765625, "r": 292.0754089355469, "b": 75.686279296875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 527]}], "orig": "Similarities between visual and textual representations and how they are recovered through Procrustes Analysis are visualized through t-SNE in Figure 3. Our main results for nine VMs and all LMs are presented in Figure 4. The best P@100 scores are around 64%, with baseline scores lower than 1% (Table 2). In general, even the smallest language models outperform the baselines by orders of magnitude. We focus mainly on P@10 and P@100 scores because P@1 only allows one surface form to express a visual concept, but in reality,", "text": "Similarities between visual and textual representations and how they are recovered through Procrustes Analysis are visualized through t-SNE in Figure 3. Our main results for nine VMs and all LMs are presented in Figure 4. The best P@100 scores are around 64%, with baseline scores lower than 1% (Table 2). In general, even the smallest language models outperform the baselines by orders of magnitude. We focus mainly on P@10 and P@100 scores because P@1 only allows one surface form to express a visual concept, but in reality,"}
null
e4b5caf4-460f-4b2f-af09-49e178d47915
2302.06555v2.pdf
text
an artifact such as a vehicle may be denoted by many lexemes (car, automobile, SUV, etc.), each of which may have multiple inflections and derivations (car, cars, car's, etc.). Figure 5 shows examples where the top predictions seem 'as good' as the gold standard. We find that a region of 10 neighbours corresponds roughly to grammatical forms or synonyms, and a neighbourhood of 100 word forms corresponds roughly to coarse-grained semantic classes. Results of P@10 in Figure 4 show that up to one in five of all visual concepts were mapped to the correct region of the language space, with only a slight deviation from the specific surface form. Considering P@100, we see that more than two thirds of the visual concepts find a semantic match in the language space when using ResNet152 and OPT or LLaMA-2, for example. We see that ResNet models score highest overall, followed by SegFormers, while MAE models rank third. We presume that this ranking is the result, in
null
442
536
72
image/png
7
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/72", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 6, "bbox": {"l": 306.32073974609375, "t": 354.6774597167969, "r": 527.4515380859375, "b": 86.65802001953125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 970]}], "orig": "an artifact such as a vehicle may be denoted by many lexemes (car, automobile, SUV, etc.), each of which may have multiple inflections and derivations (car, cars, car's, etc.). Figure 5 shows examples where the top predictions seem 'as good' as the gold standard. We find that a region of 10 neighbours corresponds roughly to grammatical forms or synonyms, and a neighbourhood of 100 word forms corresponds roughly to coarse-grained semantic classes. Results of P@10 in Figure 4, show that up to one in five of all visual concepts were mapped to the correct region of the language space, with only a slight deviation from the specific surface form. Considering P@100, we see that more than two thirds of the visual concepts find a semantic match in the language space when using ResNet152 and OPT or LLaMA-2, for example. We see that ResNet models score highest overall, followed by SegFormers, while MAE models rank third. We presume that this ranking is the result, in", "text": "an artifact such as a vehicle may be denoted by many lexemes (car, automobile, SUV, etc.), each of which may have multiple inflections and derivations (car, cars, car's, etc.). Figure 5 shows examples where the top predictions seem 'as good' as the gold standard. We find that a region of 10 neighbours corresponds roughly to grammatical forms or synonyms, and a neighbourhood of 100 word forms corresponds roughly to coarse-grained semantic classes. Results of P@10 in Figure 4, show that up to one in five of all visual concepts were mapped to the correct region of the language space, with only a slight deviation from the specific surface form. Considering P@100, we see that more than two thirds of the visual concepts find a semantic match in the language space when using ResNet152 and OPT or LLaMA-2, for example. We see that ResNet models score highest overall, followed by SegFormers, while MAE models rank third. We presume that this ranking is the result, in"}
null
2d965b10-92ed-40d9-bbed-0865ffec6183
2302.06555v2.pdf
paragraph
Image Classes
null
118
20
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/73", "parent": {"cref": "#/body"}, "children": [], "label": "paragraph", "prov": [{"page_no": 7, "bbox": {"l": 71.53941345214844, "t": 774.43701171875, "r": 130.51449584960938, "b": 764.5503540039062, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 13]}], "orig": "Image Classes", "text": "Image Classes"}
null
50c5d387-e5f5-4af4-b690-5fd28eb9a8ae
2302.06555v2.pdf
section_header
Nearest Neighbors (Top 100)
null
233
20
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/74", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 7, "bbox": {"l": 195.791259765625, "t": 774.43701171875, "r": 312.44464111328125, "b": 764.481201171875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 27]}], "orig": "Nearest Neighbors (Top 100)", "text": "Nearest Neighbors (Top 100)", "level": 1}
null
a44f83d5-3acc-4bee-a5c9-721fedfa3748
2302.06555v2.pdf
picture
null
null
119
101
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/11", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 71.45142364501953, "t": 762.3618774414062, "r": 131.0451202392578, "b": 712.0277099609375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 119.0, "height": 101.0}, "uri": null}, "annotations": []}
null
aa678786-5264-43ad-a1ef-2f3a7e9f2b7e
2302.06555v2.pdf
picture
null
null
119
109
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/12", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 71.35606384277344, "t": 704.283935546875, "r": 130.9281768798828, "b": 650.0250244140625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 119.0, "height": 109.0}, "uri": null}, "annotations": []}
null
f17e02d8-34f7-457b-96c8-91c8487bc4d5
2302.06555v2.pdf
picture
null
null
119
101
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/13", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 71.25289154052734, "t": 645.5927734375, "r": 131.10714721679688, "b": 594.8673095703125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 119.0, "height": 101.0}, "uri": null}, "annotations": []}
null
2a4dbed3-c4be-46e1-918b-042f4b54da0d
2302.06555v2.pdf
picture
null
null
119
100
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/14", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 71.36942291259766, "t": 587.2476806640625, "r": 130.98825073242188, "b": 537.20166015625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 119.0, "height": 100.0}, "uri": null}, "annotations": []}
null
985b2396-eabc-4d66-9946-e641632253c2
2302.06555v2.pdf
text
palmyra, palmyra palm, palm, palais, palatines, royal palm , palazzi, palazzo, palisades, palatinate, regency, palatial, palas, palatinates, palms, palimony, caribe, palmier, paladins, banyan tree, bermudas, bruneian, palazzos, bahamian, palmers, malacca, madeira, ceiba tree, palmettos, palmtop, oil palm, pal, royal, regal, roystonea regia , lindens, palaces, athenaeum, arboricultural, gabonese, palming, sugar palm, elm tree, palings, palm tree, palaeography, coconut palm, palisaded, bahraini, nicaraguan, … … , regent, myrtle, estancia, pavonia, imperial, royalist, regnal, historic, annals, maduro, rozelle, dominical, hydropathic, andorran
null
639
95
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/75", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 7, "bbox": {"l": 195.43907165527344, "t": 760.7074584960938, "r": 514.689697265625, "b": 713.5289916992188, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 647]}], "orig": "palmyra, palmyra palm, palm, palais, palatines, royal palm , palazzi, palazzo, palisades, palatinate, regency, palatial, palas, palatinates, palms, palimony, caribe, palmier, paladins, banyan tree, bermudas, bruneian, palazzos, bahamian, palmers, malacca, madeira, ceiba tree, palmettos, palmtop, oil palm, pal, royal, regal, roystonea regia , lindens, palaces, athenaeum, arboricultural, gabonese, palming, sugar palm, elm tree, palings, palm tree, palaeography, coconut palm, palisaded, bahraini, nicaraguan, \u2026 \u2026 , regent, myrtle, estancia, pavonia, imperial, royalist, regnal, historic, annals, maduro, rozelle, dominical, hydropathic, andorran", "text": "palmyra, palmyra palm, palm, palais, palatines, royal palm , palazzi, palazzo, palisades, palatinate, regency, palatial, palas, palatinates, palms, palimony, caribe, palmier, paladins, banyan tree, bermudas, bruneian, palazzos, bahamian, palmers, malacca, madeira, ceiba tree, palmettos, palmtop, oil palm, pal, royal, regal, roystonea regia , lindens, palaces, athenaeum, arboricultural, gabonese, palming, sugar palm, elm tree, palings, palm tree, palaeography, coconut palm, palisaded, bahraini, nicaraguan, \u2026 \u2026 , regent, myrtle, estancia, pavonia, imperial, royalist, regnal, historic, annals, maduro, rozelle, dominical, hydropathic, andorran"}
null
402ad2ae-7330-49bf-b722-cf3f2833f75b
2302.06555v2.pdf
text
drinking fountain , water fountain , cesspools, water cooler, manhole cover, bird feeder, birdbath, water jug, drainage system, fountain, water tap, watering can, garbage disposal, cesspit, recycling bin, water tank, garbage can, water pipe, manhole, toilet bowl, water closet, cement mixer, trash bin, soda fountain, bubblers, ice chest, footstone, ice machine, churns, milk float, overflowing, privies, grate, disposal, bathing, water bed, trickles, waterworks, drinking vessel, wading pool, carafe, vending machine, toilet water, sandboxes, toilet seat, drainpipe, draining, … … , spring water, ice maker, retaining wall, charcoal burner, litter, sentry box, cistern, waterhole, manholes, baptismal font, waterless
null
636
95
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/76", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 7, "bbox": {"l": 195.55520629882812, "t": 702.2905883789062, "r": 513.342041015625, "b": 654.7896728515625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 717]}], "orig": "drinking fountain , water fountain , cesspools, water cooler, manhole cover, bird feeder, birdbath, water jug, drainage system, fountain, water tap, watering can, garbage disposal, cesspit, recycling bin, water tank, garbage can, water pipe, manhole, toilet bowl, water closet, cement mixer, trash bin, soda fountain, bubblers, ice chest, footstone, ice machine, churns, milk float, overflowing, privies, grate, disposal, bathing, water bed, trickles, waterworks, drinking vessel, wading pool, carafe, vending machine, toilet water, sandboxes, toilet seat, drainpipe, draining, \u2026 \u2026 , spring water, ice maker, retaining wall, charcoal burner, litter, sentry box, cistern, waterhole, manholes, baptismal font, waterless", "text": "drinking fountain , water fountain , cesspools, water cooler, manhole cover, bird feeder, birdbath, water jug, drainage system, fountain, water tap, watering can, garbage disposal, cesspit, recycling bin, water tank, garbage can, water pipe, manhole, toilet bowl, water closet, cement mixer, trash bin, soda fountain, bubblers, ice chest, footstone, ice machine, churns, milk float, overflowing, privies, grate, disposal, bathing, water bed, trickles, waterworks, drinking vessel, wading pool, carafe, vending machine, toilet water, sandboxes, toilet seat, drainpipe, draining, \u2026 \u2026 , spring water, ice maker, retaining wall, charcoal burner, litter, sentry box, cistern, waterhole, manholes, baptismal font, waterless"}
null
5635b7b6-31c2-46b3-bd21-153192905bcc
2302.06555v2.pdf
text
clamp, wrench, screwdriver, socket wrench, carabiner , torque wrench, screwdrivers, fastener, elastic bandage, pliers, retractor, screw thread, carabiners, plunger, spanner, corer, screw, aspirator, clamps, adjustable spanner, applicator, center punch, latch, extractor, lever, adaptor, hose, gripper, compensator, pipe wrench, power drill, retractors, bicycle pump, holding device, grappling hook, fasteners, extension cord, locknuts, bungee cord, drill press, ratcheting, elastic band, reamer, soldering iron, handlebar, plug, stopper knot, tongs, twist drill, crimpers, … … , shock absorber, caliper, shackle, wristband, reducer, wrenches, loop knot, safety belt
null
634
95
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/77", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 7, "bbox": {"l": 195.4573974609375, "t": 643.7977905273438, "r": 512.2069091796875, "b": 596.356689453125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 665]}], "orig": "clamp, wrench, screwdriver, socket wrench, carabiner , torque wrench, screwdrivers, fastener, elastic bandage, pliers, retractor, screw thread, carabiners, plunger, spanner, corer, screw, aspirator, clamps, adjustable spanner, applicator, center punch, latch, extractor, lever, adaptor, hose, gripper, compensator, pipe wrench, power drill, retractors, bicycle pump, holding device, grappling hook, fasteners, extension cord, locknuts, bungee cord, drill press, ratcheting, elastic band, reamer, soldering iron, handlebar, plug, stopper knot, tongs, twist drill, crimpers, \u2026 \u2026 , shock absorber, caliper, shackle, wristband, reducer, wrenches, loop knot, safety belt", "text": "clamp, wrench, screwdriver, socket wrench, carabiner , torque wrench, screwdrivers, fastener, elastic bandage, pliers, retractor, screw thread, carabiners, plunger, spanner, corer, screw, aspirator, clamps, adjustable spanner, applicator, center punch, latch, extractor, lever, adaptor, hose, gripper, compensator, pipe wrench, power drill, retractors, bicycle pump, holding device, grappling hook, fasteners, extension cord, locknuts, bungee cord, drill press, ratcheting, elastic band, reamer, soldering iron, handlebar, plug, stopper knot, tongs, twist drill, crimpers, \u2026 \u2026 , shock absorber, caliper, shackle, wristband, reducer, wrenches, loop knot, safety belt"}
null
02fdcde5-29ec-4075-beb5-e487b6f1d1af
2302.06555v2.pdf
text
community center, training school, school, youth hostel, service department, conference center, music school, day school, student union , academy, life office, hall, orphanage, school system, meeting, college, ministry, school principal, government building, house, council, clinic, business office, schoolmaster, workshop, council board, boardinghouse, club, service club, schools, detention centre, gymnasium, gym, schoolmasters, … … , nursing home, meeting house, church, education, reform school, semester, schoolmate, study hall, member, schoolrooms, assembly hall, meetings, hotel, district manager, arena, staff member, firm
null
643
94
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/78", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 7, "bbox": {"l": 195.45948791503906, "t": 585.3809204101562, "r": 516.7018432617188, "b": 538.2024536132812, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 631]}], "orig": "community center, training school, school, youth hostel, service department, conference center, music school, day school, student union , academy, life office, hall, orphanage, school system, meeting, college, ministry, school principal, government building, house, council, clinic, business office, schoolmaster, workshop, council board, boardinghouse, club, service club, schools, detention centre, gymnasium, gym, schoolmasters, \u2026 \u2026 , nursing home, meeting house, church, education, reform school, semester, schoolmate, study hall, member, schoolrooms, assembly hall, meetings, hotel, district manager, arena, staff member, firm", "text": "community center, training school, school, youth hostel, service department, conference center, music school, day school, student union , academy, life office, hall, orphanage, school system, meeting, college, ministry, school principal, government building, house, council, clinic, business office, schoolmaster, workshop, council board, boardinghouse, club, service club, schools, detention centre, gymnasium, gym, schoolmasters, \u2026 \u2026 , nursing home, meeting house, church, education, reform school, semester, schoolmate, study hall, member, schoolrooms, assembly hall, meetings, hotel, district manager, arena, staff member, firm"}
null
df877eec-5049-4528-b781-622bae4f52bb
2302.06555v2.pdf
picture
null
null
120
101
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/15", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 132.97372436523438, "t": 762.504150390625, "r": 192.79490661621094, "b": 712.1260375976562, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 120.0, "height": 101.0}, "uri": null}, "annotations": []}
null
c4b4432c-40b2-40db-8e20-c6d1c3c4c07e
2302.06555v2.pdf
picture
null
null
120
109
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/16", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 132.55625915527344, "t": 704.5750732421875, "r": 192.52426147460938, "b": 649.9854125976562, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 120.0, "height": 109.0}, "uri": null}, "annotations": []}
null
4feea742-44e4-42a3-9b88-999e8919e6ef
2302.06555v2.pdf
picture
null
null
119
101
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/17", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 7, "bbox": {"l": 132.8827362060547, "t": 645.2271118164062, "r": 192.2962646484375, "b": 594.8204956054688, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 119.0, "height": 101.0}, "uri": null}, "annotations": []}
null
80a9e68e-cf48-473c-9b83-3729fc5b71ac
2302.06555v2.pdf
caption
Figure 5: Examples featuring the 100 nearest neighbors in the mapping of image classes into the language representation space (from MAE$_{Huge}$ to OPT$_{30B}$). The gold labels are highlighted in green.
null
908
49
72
image/png
8
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/79", "parent": {"cref": "#/body"}, "children": [], "label": "caption", "prov": [{"page_no": 7, "bbox": {"l": 71.26412200927734, "t": 521.2467041015625, "r": 525.5408325195312, "b": 497.1179504394531, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 205]}], "orig": "Figure 5: Examples featuring the 100 nearest neighbors in the mapping of image classes into the language representation space (from MAE$_{Huge}$ to OPT$_{30B}$). The golden labels are highlighted in green.", "text": "Figure 5: Examples featuring the 100 nearest neighbors in the mapping of image classes into the language representation space (from MAE$_{Huge}$ to OPT$_{30B}$). The golden labels are highlighted in green."}
null

Dataset Card for Dataset Name

Dataset Details

Dataset Description

  • Curated by: [More Information Needed]
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Language(s) (NLP): [More Information Needed]
  • License: [More Information Needed]

Dataset Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]

Uses

Direct Use

[More Information Needed]
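Pending the missing information above, a purely hypothetical sketch of how records like the ones shown in the viewer could be loaded and filtered by label with the datasets library; the repository id below is a placeholder, not the real dataset name.

    from datasets import load_dataset

    # Placeholder repository id -- substitute the actual dataset path.
    ds = load_dataset("your-org/your-extraction-dataset", split="train")

    # Keep only the caption records extracted from the source PDF.
    captions = ds.filter(lambda r: r["label"] == "caption"
                         and r["filename"] == "2302.06555v2.pdf")
    print(captions[0]["text"])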

Out-of-Scope Use

[More Information Needed]

Dataset Structure

[More Information Needed]

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Data Collection and Processing

[More Information Needed]

Who are the source data producers?

[More Information Needed]

Annotations [optional]

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Bias, Risks, and Limitations

[More Information Needed]

Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Dataset Card Authors [optional]

[More Information Needed]

Dataset Card Contact

[More Information Needed]
