{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:29:07.981634Z" }, "title": "What BERT Sees: Cross-Modal Transfer for Visual Question Generation", "authors": [ { "first": "Thomas", "middle": [], "last": "Scialom", "suffix": "", "affiliation": { "laboratory": "", "institution": "CNRS", "location": { "addrLine": "France reciTAL", "postCode": "LIP6, F-75005", "settlement": "Paris, Paris", "country": "France" } }, "email": "thomas@recital.ai" }, { "first": "Patrick", "middle": [], "last": "Bordes", "suffix": "", "affiliation": { "laboratory": "", "institution": "CNRS", "location": { "addrLine": "France reciTAL", "postCode": "LIP6, F-75005", "settlement": "Paris, Paris", "country": "France" } }, "email": "patrick.bordes@lip6.fr" }, { "first": "Paul-Alexis", "middle": [], "last": "Dray", "suffix": "", "affiliation": { "laboratory": "", "institution": "CNRS", "location": { "addrLine": "France reciTAL", "postCode": "LIP6, F-75005", "settlement": "Paris, Paris", "country": "France" } }, "email": "paul-alexis@recital.ai" }, { "first": "Jacopo", "middle": [], "last": "Staiano", "suffix": "", "affiliation": { "laboratory": "", "institution": "CNRS", "location": { "addrLine": "France reciTAL", "postCode": "LIP6, F-75005", "settlement": "Paris, Paris", "country": "France" } }, "email": "jacopo@recital.ai" }, { "first": "Patrick", "middle": [], "last": "Gallinari", "suffix": "", "affiliation": { "laboratory": "", "institution": "CNRS", "location": { "addrLine": "France reciTAL", "postCode": "LIP6, F-75005", "settlement": "Paris, Paris", "country": "France" } }, "email": "patrick.gallinari@lip6.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Pre-trained language models have recently contributed to significant advances in NLP tasks. Recently, multi-modal versions of BERT have been developed, using heavy pretraining relying on vast corpora of aligned textual and image data, primarily applied to classification tasks such as VQA. In this paper, we are interested in evaluating the visual capabilities of BERT out-of-the-box, by avoiding pre-training made on supplementary data. We choose to study Visual Question Generation, a task of great interest for grounded dialog, that enables to study the impact of each modality (as input can be visual and/or textual). Moreover, the generation aspect of the task requires an adaptation since BERT is primarily designed as an encoder. We introduce BERT-gen, a BERT-based architecture for text generation, able to leverage on either monoor multi-modal representations. The results reported under different configurations indicate an innate capacity for BERT-gen to adapt to multi-modal data and text generation, even with few data available, avoiding expensive pre-training. The proposed model obtains substantial improvements over the state-of-the-art on two established VQG datasets.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Pre-trained language models have recently contributed to significant advances in NLP tasks. Recently, multi-modal versions of BERT have been developed, using heavy pretraining relying on vast corpora of aligned textual and image data, primarily applied to classification tasks such as VQA. In this paper, we are interested in evaluating the visual capabilities of BERT out-of-the-box, by avoiding pre-training made on supplementary data. 
We choose to study Visual Question Generation, a task of great interest for grounded dialog, which makes it possible to study the impact of each modality (since the input can be visual and/or textual). Moreover, the generation aspect of the task requires an adaptation, since BERT is primarily designed as an encoder. We introduce BERT-gen, a BERT-based architecture for text generation, able to leverage either mono- or multi-modal representations. The results reported under different configurations indicate an innate capacity for BERT-gen to adapt to multi-modal data and text generation, even with little data available, avoiding expensive pre-training. The proposed model obtains substantial improvements over the state-of-the-art on two established VQG datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In Artificial Intelligence, several works have investigated the longstanding research question of whether textual representations encode some sort of visual information. This has been done primarily for word embeddings, e.g., by applying them to Zero-Shot Learning (Zablocki et al., 2019), or for sentence embeddings, e.g., by applying them to Image Captioning (Socher et al., 2014). In this paper, we are interested in evaluating the visual capacities of pre-trained language models; in our case, BERT (Devlin et al., 2019).", "cite_spans": [ { "start": 264, "end": 287, "text": "(Zablocki et al., 2019)", "ref_id": "BIBREF45" }, { "start": 356, "end": 377, "text": "(Socher et al., 2014)", "ref_id": "BIBREF33" }, { "start": 499, "end": 520, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To do so, we choose the Visual Question Generation (VQG) (Mostafazadeh et al., 2016) task, for the following reasons. First, from a practical standpoint, the VQG task has several applications: robots or AI assistants could ask questions rooted in multi-modal data (e.g. fusing conversational data with visual information from sensors and cameras), in order to refine their interpretation of the situation they are presented with. Second, it could also allow systems relying on knowledge bases to gain visual common sense and deal with the Human Reporting Bias, which states that the contents of images and text are intrinsically different, since visual common sense is rarely explicitly stated in text. Moreover, unlike Image Captioning (where the input is only visual) or VQA (where the input is visual and textual), VQG is a multi-modal task where the input can be textual and/or visual: this is of particular interest to analyze the impact of each modality. Finally, VQG relies on text generation, which is challenging since BERT is not primarily designed for generation.", "cite_spans": [ { "start": 57, "end": 84, "text": "(Mostafazadeh et al., 2016)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "BERT-based Multi-Modal Language Models have been proposed (Su et al., 2019) to tackle multi-modal tasks, relying on heavy pre-training and large corpora of aligned textual and visual data. 
From these works, it remains to be explored whether the cross-modal capacities come from the pre-training, or whether they are to some extent already encoded in BERT's representations.", "cite_spans": [ { "start": 58, "end": 74, "text": "Su et al., 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It has recently been shown that BERT can generalize to another language, with great results, in a zero-shot manner (Artetxe et al., 2019), i.e. without supervision between languages. In preliminary experiments, we extended this work to another modality: we found that, in VQG, without any supervision between the images and the questions, the cross-modal alignment was not successfully learnt. This discrepancy between multi-lingual and multi-modal results might find its root cause in the intrinsic semantic difference between textual and visual modalities (Bruni et al., 2014). Nonetheless, we hypothesize that BERT contains some abstractions that generalize across modalities. If so, it may transfer knowledge to the visual modality with only little training data, rather than requiring expensive pre-training and complex architectures.", "cite_spans": [ { "start": 115, "end": 137, "text": "(Artetxe et al., 2019)", "ref_id": "BIBREF3" }, { "start": 563, "end": 583, "text": "(Bruni et al., 2014)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Thus, in contrast with the aforementioned Multi-Modal BERT approaches, we explicitly avoid using the following complex mechanisms: (1) Multi-modal supervision: we do not exploit explicit supervision between images and captions through a pre-training step;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) Image-specific losses: specific pre-training losses such as Masked RoI Classification with Linguistic Clues (Su et al., 2019) or sentence-image prediction; (3) Non-linearities: we explore a scenario in which the only learnable parameters, for aligning image representations to BERT, are those of a simple linear projection layer.", "cite_spans": [ { "start": 112, "end": 129, "text": "(Su et al., 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Furthermore, to the best of our knowledge, this paper is the first attempt to investigate multi-modal text generation using pre-trained language models. We introduce BERT-gen, a text generator based on BERT, that can be applied in both mono- and multi-modal settings. We treat images similarly to text: while a sentence is seen as a sequence of (sub)word tokens, an image is seen as a sequence of objects associated with their corresponding positions (bounding boxes). 
We show how a simple linear mapping, projecting visual embeddings into the first layer, is enough to ground BERT in the visual realm: text and image object representations are found to be effectively aligned, and the attention over words transfers to attention over the relevant objects in the image.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions can be summarized as follows: (1) we introduce BERT-gen, a novel method for generating text using BERT, that can be applied in both mono- and multi-modal settings; (2) we report state-of-the-art results on the VQG task; (3) we show that the semantic abstractions encoded in pre-trained BERT can generalize to another modality without pre-training of any sort; (4) we provide extensive ablations and qualitative analyses to interpret the behavior of BERT-gen under different configurations (mono- or multi-modal).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Multi-modal Language Models Following the successful application of BERT (Devlin et al., 2019), and its derivatives, across a great majority of NLP tasks, several research efforts have focused on the design of multi-modal versions of BERT. The first attempt was VideoBERT (Sun et al., 2019a), a joint video and text model pre-trained on a huge corpus of YouTube videos, where the video is treated as a \"visual sentence\" (each frame being a \"visual word\") processed by the BERT Transformer.", "cite_spans": [ { "start": 73, "end": 94, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF8" }, { "start": 273, "end": 292, "text": "(Sun et al., 2019a)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Concerning models jointly treating information from images and text, visual features extracted from the image are used as \"visual words\", and a [SEP] special token is employed to separate textual and visual tokens. In the literature, visual features are object features extracted with a Faster R-CNN (Ren et al., 2017), with the notable exception of Kiela et al. (2019), who used pooling layers from a CNN. A first body of work exploits single-stream Transformers in which visual features are incorporated in a BERT-like Transformer: this is the case for VisualBERT and VL-BERT (Su et al., 2019). Other works, such as ViLBERT and LXMERT (Tan and Bansal, 2019), have investigated two-stream approaches: these models employ modality-specific encoders built on standard Transformer blocks, which are then fused into a cross-modal encoder. Interestingly, none of the aforementioned models have been used for generation tasks such as VQG, tackled in this work.", "cite_spans": [ { "start": 297, "end": 319, "text": "CNN (Ren et al., 2017)", "ref_id": null }, { "start": 351, "end": 370, "text": "Kiela et al. (2019)", "ref_id": "BIBREF14" }, { "start": 577, "end": 594, "text": "(Su et al., 2019)", "ref_id": "BIBREF34" }, { "start": 637, "end": 659, "text": "(Tan and Bansal, 2019)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Visual Question Generation The text-based Question Generation task has been largely studied by the NLP community (Scialom et al., 2019). 
However, its visual counterpart, Visual Question Generation, has been comparatively less explored than standard, well-known multi-modal tasks such as Visual Question Answering (Gao et al., 2015), Visual Dialog (Das et al., 2017), or Image Captioning (Vinyals et al., 2015).", "cite_spans": [ { "start": 113, "end": 134, "text": "Scialom et al., 2019)", "ref_id": "BIBREF32" }, { "start": 312, "end": 330, "text": "(Gao et al., 2015)", "ref_id": "BIBREF11" }, { "start": 347, "end": 365, "text": "(Das et al., 2017)", "ref_id": "BIBREF7" }, { "start": 388, "end": 410, "text": "(Vinyals et al., 2015)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The VQG task was first introduced by Yang et al. (2015) in their Neural Self Talk model: the goal is to gain knowledge about an image by iteratively generating questions (VQG) and answering them (VQA). The authors tackle the task with a simple RNN conditioned on the image, following Image Captioning works such as Karpathy and Li (2015).", "cite_spans": [ { "start": 37, "end": 55, "text": "Yang et al. (2015)", "ref_id": "BIBREF44" }, { "start": 315, "end": 337, "text": "Karpathy and Li (2015)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Suitable data for the VQG task can come from standard image datasets on which questions have been manually annotated, such as VQG COCO, VQG Flickr, and VQG Bing (Mostafazadeh et al., 2016), each consisting of 5000 images with 5 questions per image. Alternatively, VQG samples can be derived from VQA datasets, such as VQA 1.0 (Antol et al., 2015), by \"reversing\" them (taking images as inputs and questions as outputs).", "cite_spans": [ { "start": 163, "end": 190, "text": "(Mostafazadeh et al., 2016)", "ref_id": "BIBREF24" }, { "start": 329, "end": 349, "text": "(Antol et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" },
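To make the "reversing" operation concrete, the following is a minimal sketch deriving a VQG training pair from a VQA-style annotation; the field names are illustrative assumptions and do not correspond to the actual VQA 1.0 schema.

```python
# Hypothetical VQA-style annotation (field names are illustrative, not the VQA 1.0 schema).
vqa_sample = {"image_id": 42, "question": "What is the dog biting?", "answer": "a frisbee"}

# "Reversing" the sample for VQG: the image becomes the input and the question
# becomes the generation target; the answer annotation is simply discarded.
vqg_sample = {"image_id": vqa_sample["image_id"],
              "target_question": vqa_sample["question"]}
```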
"ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z I q 6 L L o R l x V M G 2 h l p J M p 3 V o X i Q T t R Q 3 / o B b / T L x D / Q v v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "+ D O G 1 5 b s Y C H j F H c B G w V p I y N / Q C 1 v S G J z L e v G F p x u P o U o w S 1 g n d Q c T 7 3 H c F U Y 6 4 E 9 3 z b q l s V S y 1 z G l g a 1 C G X v W 4 9 I I r 9 B D D R 4 4 Q D B E E 4 Q A u M n r a s G E h I a 6 D M X E p I a 7 i D P c o k j a n L E Y Z L r F D + g 5 o 1 9 Z s R H v p m S m 1 T 6 c E 9 K a k N L F L m p j y U s L y N F P F c + U s 2 d + 8 x 8 p T 3 m 1 E f 0 9 7 h c Q K X B P 7 l 2 6 S + V + d r E W g j y N V A 6 e a E s X I 6 n z t k q u u y J u b X 6 o S 5 J A Q J 3 G P 4 i l h X y k n f T a V J l O 1 y 9 6 6 K v 6 m M i U r 9 7 7 O z f E u b 0 k D t n + O c x o 0 q h V 7 v 1 K 9 O C j X j v W o C 9 j G D v Z o n o e o 4 R R 1 O O T N 8 Y g n P B t n R m L c G q P P V G N G a 7 b w b R k P H 8 e x k T 0 = < / l a t e x i t > img1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "/ v D z g n R f m o 8 9 Z L Y k B v T Q Q q + U + r s = \" > A A A C y H i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z I q 6 L L o R l x V s L V Q S 0 m m 0 z o 0 L 5 K J U o o b f 8 C t f p n 4 B / o X 3 h m n o B b R C U n O n H v P m b n 3 + k k g M u k 4 r w V r b n 5 h c a m 4 X F p Z X V v f K G 9 u t b I 4 T x l v s j i I 0 7 b v Z T w Q E W 9 K I Q P e T l L u h X 7 A r / z R q Y p f 3 f I 0 E 3 F 0 K c c J 7 4 b e M B I D w T x J V F O E w 5 7 b K 1 e c q q O X P Q t c A y o w q x G X X 3 C N P m I w 5 A j B E U E S D u A h o 6 c D F w 4 S 4 r q Y E J c S E j r O c Y 8 S a X P K 4 p T h E T u i 7 5 B 2 H c N G t F e e m V Y z O i W g N y W l j T 3 S x J S X E l a n 2 T q e a 2 f F / u Y 9 0 Z 7 q b m P 6 + 8 Y r J F b i h t i / d N P M / + p U L R I D H O s a B N W U a E Z V x 4 x L r r u i b m 5 / q U q S Q 0 K c w n 2 K p 4 S Z V k 7 7 b G t N p m t X v f V 0 / E 1 n K l b t m c n N 8 a 5 u S Q N 2 f 4 5 z F r R q V f e g W r s 4 r N R P z K i L 2 M E u 9 m m e R 6 j j D A 0 0 y V v g E U 9 4 t s 6 t x L q z x p + p V s F o t v F t W Q 8 f M a q Q / g = = < / l a t e x i t > img2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" g f y o O + k 3 r k B T z 1 6 a m 6 q T l o j Y W l U = \" > A A A C y H", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V d I q 6 L L o R l x V M L V Q S 0 m m 0 z o 0 L y Y T p R Q 3 / o B b / T L x D / Q v v D O m o B b R C U n O n H v P m b n 3 + k k g U u U 4 r w V r b n 5 h c a m 4 X F p Z X V v f K G 9 u t d I 4 k 4 y 7 L A 5 i 2 f a 9 l A c i 4 q 4 S K u D t R H I v 9 A N + 5 Y 9 O d f z q l s t U x N G l G i e 8 G 3 r D S A w E 8 x R R r g i H v X q v X H G q j l n 2 L K j l o I J 8 N e P y C 6 7 R R w y G D C E 4 I i j C A T y k 9 H R Q g 4 O E u C 4 m x E l C w s Q 5 7 l E i b U Z Z n D I 8 Y k f 0 H d K u k 7 M R 7 b V n a t S M T g n o l a S 0 s U e a m P I k Y X 2 a b e K Z c d b s b 9 4 T 4 6 n v N q a / n 3 u F x C r c E P u X b p r 5 X 5 2 u R W G A Y 1 O D o J 
o S w + j q W O 6 S m a 7 o m 9 t f q l L k k B C n c Z / i k j A z y m m f b a N J T e 2 6 t 5 6 J v 5 l M z e o 9 y 3 M z v O t b 0 o B r P 8 c 5 C 1 r 1 a u 2 g W r 8 4 r D R O 8 l E X s Y N d 7 N M 8 j 9 D A G Z p w y V v g E U 9 4 t s 6 t x L q z x p + p V i H X b O P b s h 4 + A D Q K k P 8 = < / l a t e x i t > imgN", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" 4 H V s c F h 1 4 e 3 Q g c w U", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A U t C X p C I K M g = \" > A A A C y H i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z I q 6 L L o R l x I B d M W a i n J d F q H 5 k U y U U p x 4 w + 4 1 S 8 T / 0 D / w j t j C m o R n Z D k z L n 3 n J l 7 r x f 7 I p W W 9 V o w 5 u Y X F p e K y 6 W V 1 b X 1 j f L m V j O N s o R x h 0 V + l L Q 9 N + W + C L k j h f R 5 O 0 6 4 G 3 g + b 3 m j U x V v 3 f I k F V F 4 J c c x 7 w b u M B Q D w V x J l C O C Y e + i V 6 5 Y V U s v c x b Y O a g g X 4 2 o / I J r 9 B G B I U M A j h C S s A 8 X K T 0 d 2 L A Q E 9 f F h L i E k N B x j n u U S J t R F q c M l 9 g R f Y e 0 6 + R s S H v l m W o 1 o 1 N 8 e h N S m t g j T U R 5 C W F 1 m q n j m X Z W 7 G / e E + 2 p 7 j a m v 5 d 7 B c R K 3 B D 7 l 2 6 a + V + d q k V i g G N d g 6 C a Y s 2 o 6 l j u k u m u q J u b X 6 q S 5 B A T p 3 C f 4 g l h p p X T P p t a k + r a V W 9 d H X / T m Y p V e 5 b n Z n h X t 6 Q B 2 z / H O Q u a t a p 9 U K 1 d H l b q J / m o i 9 j B L v Z p n k e o 4 w w N O O Q t 8 I g n P B v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "n R m z c G e P P V K O Q a 7 b x b R k P H 3 a K k R s = < / l a t e x i t > A < l a t e x i t s h a 1 _ b a s e 6 4 = \" 0 A J J 3 6 U B r 7 A j J I P", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "x o A 8 / Y o B y c S E = \" > A A A C x H i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z K 6 0 G V V E J c t 2 A f U I s l 0 W o d O H i Q T o R T 9 A b f 6 b e I f 6 F 9 4 Z 0 x B L a I T k p w 5 9 5 4 z c + / 1 Y y l S 5 T i v B W t h c W l 5 p b h a W l v f 2 N w q b + + 0 0 y h L G G + x S E Z J 1 / d S L k X I W 0 o o y b t x w r 3 A l 7 z j j 8 9 1 v H P H k 1 R E 4 Z W a x L w f e K N Q D A X z F F H N 0 5 t y x a k 6 Z t n z w M 1 B B f l q R O U X X G O A C A w Z A n C E U I Q l P K T 0 9 O D C Q U x c H 1 P i E k L C x D n u U S J t R l m c M j x i x / Q d 0 a 6 X s y H t t W d q 1 I x O k f Q m p L R x Q J q I 8 h L C + j T b x D P j r N n f v K f G U 9 9 t Q n 8 / 9 w q I V b g l 9 i / d L P O / O l 2 L w h A n p g Z B N c W G 0 d W x 3 C U z X d E 3 t 7 9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "U p c g h J k 7 j A c U T w s w o Z 3 2 2 j S Y 1 t e v e e i b + Z j I 1 q / c s z 8 3 w r m 9 J A 3 Z / j n M e t G t V 9 6 h a a 9 Y q 9 b N 8 1 E X s Y R + H N M 9 j 1 H G J B l r G + x F P e L Y u L G m l V v a Z a h V y z S 6 + L e v h A + k b j 0 g = < / l a t e x i t > woman < l a t e x i t s h a 1 _ b a s e 6 4 = \" o 3 G", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "B i h H u / o r d H m h g Z M c f K l t R Z 0 4 = \" > A A A C y H i c j V H L T s J A F D 3 U F + I L d e m m k Z i 4 I g U X u i S 6 M a 4 w s U C C x E z L g B P 6 y n Q q I c S N P + B W v 8 z 4 B / o X 3 h l L o h K", 
"cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "j 0 7 Q 9 c + 4 9 Z + b e 6 y W B S J X j v B a s h c W l 5 Z X i a m l t f W N z q 7 y 9 0 0 r j T P r c 9 e M g l h 2 P p T w Q E X e", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "V U A H v J J K z 0 A t 4 2 x u d 6 X j 7 j s t U x N G V m i S 8 F 7 J h J A b C Z 4 o o d x y H L L o p V 5 y q Y 5 Y 9 D 2 o 5 q C B f z b j 8 g m v 0 E c N H h h A c E R T h A A w p P V 3 U 4 C A h r o c p c Z K Q M H G O e 5 R I m 1 E W p w x G 7 I i + Q 9 p 1 c z a i v f Z M j d q n U w J 6 J S l t H J A m p j x J W J 9 m m 3 h m n D X 7 m / f U e O q 7 T e j v 5 V 4 h s Q q 3 x P 6 l m 2 X + V 6 d r U R j g x N Q g q K b E M L o 6 P 3 f J T F f 0 z e 0 v V S l y S I j T u E 9 x S d g 3 y l m f b a N J T e 2 6 t 8 z E 3 0 y m Z v X e z 3 M z v O t b 0 o B r P 8 c 5 D 1 r 1 a u 2 o W r + s V x q n + a i L 2 M M + D m m e x 2 j g H E 2 4 5 C 3 w i C c 8 W x d W Y o 2 t y W e q V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "c g 1 u / i 2 r I c P + z a R U Q = = < / l a t e x i t > and < l a t e x i t s h a 1 _ b a s e 6 4 = \" o 4 D h", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "B L X g d b g Y I P Q 2 X 6 T m t u m Q l / Y = \" > A A A C x n i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z K 6 0 G X R T Z c V 7 Q N q k U k 6 r a F 5 M Z k o p Q j + g F v 9 N P E P 9 C + 8 M 0 5 B L a I T k p w 5 9 5 4 z c + / 1 0 j D I p O O 8 F q y F x a X l l e J q a W 1 9 Y 3 O r v L 3 T z p J c + L z l J 2 E i u h 7 L e B j E v C U D G f J u K j i L v J B 3 v P G Z i n d u u c i C J L 6 U k 5 T 3 I z a K g 2 H g M 0 n U B Y s H 1 + W K U 3 X 0 s u e B a 0 A F Z j W T 8 g u u M E A C H z k i c M S Q h E M w Z P T 0 4 M J B S l w f U + I E o U D H O e 5 R I m 1 O W Z w y G L F j + o 5 o 1 z N s T H v l m W m 1 T 6 e E 9 A p S 2 j g g T U J 5 g r A 6 z d b x X D s r 9 j f v q f Z U d 5 v Q 3 z N e E b E S N 8 T + p Z t l / l e n a p E Y 4 k T X E F B N q W Z U d b 5 x y X V X 1 M 3 t L 1 V J c k i J U 3 h A c U H Y 1 8 p Z n 2 2 t y X T t q r d M x 9 9 0 p m L V 3 j e 5 O d 7 V L W n A 7 s 9 x z o N 2 r e o e V W v n t U r 9 1 I y 6 i D 3 s 4 5 D m e Y w 6 G m i i R d 4 j P O I J z 1 b D i q 3 c u v t M t Q p G s 4 t v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "y 3 r 4 A H S 6 k E 4 = < / l a t e x i t > a < l a t e x i t s h a 1 _ b a s e 6 4 = \" y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "g + f E B L L d z b H K h + Y b V O d E A s x Z F o = \" > A A A C x H i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z K 6 0 G V R E J c t 2 F q o R S b T a R 0 6 T U I y E U r R H 3 C r 3 y b + g f 6 F d 8 Y U 1 C I 6 I c m Z c + 8 5 M / f e I F Y y 1 Z 7 3 W n A W F p e W V 4 q r p b X 1 j c 2 t 8 v Z O O 4 2 y h I s W j 1 S U d A K W C i V D 0 d J S K 9 G J E 8 H G g R J X w e j M x K / u R J L K K L z U k 1 j 0 x m w Y y o H k T B P V Z D f l i l f 1 7 H L n g Z + D C v L V i M o v u E Y f E T g y j C E Q Q h N W Y E j p 6 c K H h 5 i 4 H q b E J Y S k j Q v c o 0 T a j L I E Z T B i R / Q d 0 q 6 b s y H t j W d q 1 Z x O U f Q m p H R x Q J q I 8 h L C 5 j T X x j P r b N j f v K f W 0 9 x t Q v 8 g 9 x o T q 3 F L 7 F + 6 W e Z / d a Y W j Q F O b A 2 S a o o t Y 6 r j u U t m u 2 J u 7 n 6 
p S p N D T J z B f Y o n h L l V z v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "r s W k 1 q a z e 9 Z T b + Z j M N a / Y 8 z 8 3 w b m 5 J A / Z / j n M e t G t V / 6 h a a 9 Y q 9 d N 8 1 E X s Y R + H N M 9 j 1 H G B B l r W + x F P e H b O H e W k T v a Z 6 h R y z S 6 + L e f h A z U q j 2 g = < / l a t e x i t > dog < l a t e x i t s h a 1 _ b a s e 6 4 = \" N", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "V O e A X l d b j h M K Y S K f M B S G q a V l g k = \" > A A A C x n i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z K 6 0 G X R T Z c V 7 Q N q k S S d x t A 0 E y Y T p R T B H 3 C r n y b + g f 6 F d 8 Y p q E V 0 Q p I z 5 9 5 z Z u 6 9 f h p H m X S c 1 4 K 1 s L i 0 v F J c L a 2 t b 2 x u l b d 3 2 h n P R c B a A Y + 5 6 P p e x u I o Y S 0 Z y Z h 1 U 8 G 8 s R + z j j 8 6 U / H O L R N Z x J N L O U l Z f + y F S T S M A k 8 S d T H g 4 X W 5 4 l Q d v e x 5 4 B p Q g V l N X n 7 B F Q b g C J B j D I Y E k n A M D x k 9 P b h w k B L X x 5 Q 4 Q S j S c Y Z 7 l E i b U x a j D I / Y E X 1 D 2 v U M m 9 B e e W Z a H d A p M b 2 C l D Y O S M M p T x B W p 9 k 6 n m t n x f 7 m P d W e 6 m 4 T + v v G a 0 y s x A 2 x f + l m m f / V q V o k h j j R N U R U U 6 o Z V V 1 g X H L d F X V z + 0 t V k h x S 4 h Q e U F w Q D r R y 1 m d b a z J d u + q t p + N v O l O x a h + Y 3 B z v 6 p Y 0 Y P f n O O d B u 1 Z 1 j 6 q 1 8 1 q l f m p G X c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Q e 9 n F I 8 z x G H Q 0 0 0 S L v E I 9 4 w r P V s B I r t + 4 + U 6 2 C 0 e z i 2 7 I e P g C F Y Z B V < / l a t e x i t > playing < l a t e x i t s h a 1 _ b a s e 6 4 = \" a A I 0 C / N X m 0 + l r F 9 P p 3 P E G 2 r m 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Z o = \" > A A A C y n i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z K 6 0 G X R j Q s X F e w D a p F k O q 1 D 8 2 I y E U J x 5 w + 4 1 Q 8 T / 0 D / w j t j C m o R n Z D k z L n n 3 J l 7 r 5 8 E I l W O 8 1 q y F h a X l l f K q 5 W 1 9 Y 3 N r e r 2 T i e N M 8 l 4 m 8 V B L H u + l / J A R L y t h A p 4 L 5 H c C / 2 A d / 3 J m Y 5 3 7 7 h M R R x d q T z h g 9 A b R 2 I k m K e I 6 i a B l 4 t o f F O t O X X H L H s e u A W o o V i t u P q C a w w R g y F D C I 4 I i n A A D y k 9 f b h w k B A 3 w J Q 4 S U i Y O M c 9 K u T N S M V J 4 R E 7 o e + Y d v 2 C j W i v c 6 b G z e i U g F 5 J T h s H 5 I l J J w n r 0 2 w T z 0 x m z f 6 W e 2 p y 6 r v l 9 P e L X C G x C r f E / u W b K f / r 0 7 U o j H B i a h B U U 2 I Y X R 0 r s m S m K / r m 9 p e q F G V I i N N 4 S H F J m B n n r M + 2 8 a S m d t 1 b z 8 T f j F K z e s 8 K b Y Z 3 f U s a s P t z n P O g 0 6 i 7 R / X G Z a P W P C 1 G X c Y e 9 n F I 8 z x G E + d o o W 2 q f M Q T n q 0 L S 1 q 5 N f 2 U W q X C s 4 t v y 3 r 4 A D 4 8 k j c = < / l a t e x i t > in < l a t e x i t s h a 1 _ b a s e 6 4 = \" w 1 p t r G / G 4 T c V n 6 D P P O X i P W B B / c 8 = \" > A A A C x X i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z K 6 0 G X R h S 6 r 2 A f U I s l 0 W o f m x W R S K E X 8 A b f 6 a + I f 6 F 9 4 Z 0 x B L a I T k p w 5 9 5 4 z c + / 1 k 0 C k y n F e C 9 b C 4 t L y S n G 1 t L a + s b l V 3 t 5 p p X E m G W + y O I h l x / d S H o i I N 5 V Q A e 8 k k n u h H / C 2 P z r T 8 f a Y y 1 T E 0 b W a J L w X e s N I D A T z F F F X I r o t V 5 y q Y 5 Y 9 D 
9 w c V J C v R l x + w Q 3 6 i M G Q I Q R H B E U 4 g I e U n i 5 c O E i I 6 2 F K n C Q k T J z j H i X S Z p T F K c M j d k T f I e 2 6 O R v R X n u m R s 3 o l I B e S U o b B 6 S J K U 8 S 1 q f Z J p 4 Z Z 8 3 + 5 j 0 1 n v p u E / r 7 u V d I r M I d s X / p Z p n / 1 e l a F A Y 4 M T U I q i k x j K 6 O 5 S 6 Z 6 Y q + u f 2 l K k U O C X E a 9 y k u C T O j n P X Z N p r U 1 K 5 7 6 5 n 4 m 8 n U r N 6 z P D f D u 7 4 l D d j 9 O c 5 5 0 K p V 3 a N q 7 b J W q Z / m o y 5 i D / s 4 p H k e o 4 4 L N N A k 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "w E e 8 Y R n 6 9 w K L W W N P 1 O t Q q 7 Z x b d l P X w A c 6 K P 6 A = = < / l a t e x i t > the < l a t e x i t s h a 1 _ b a s e 6 4 = \" j Y o L t 6 Q P h H G 3 n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "v 6 m z F B Y E G F V 3 9 c = \" > A A A C x n i c j V H L T s J A F D 3 U F + I L d e m m k Z i 4 I i 0 u d E l 0 w x K j g A k S 0 w 4 D T O g r 0 6 m G E B N / w K 1 + m v E P 9 C + 8 M 5 Z E J U a n a X v m 3 H v O z L 3 X T w K R K s d 5 L V g L i 0 v L K 8 X V 0 t r 6 x u Z W e X u n n c a Z Z L z F 4 i C W V 7 6 X 8 k B E v K W E C v h V I r k X + g H v + O M z H e / c c p m K O L p U k 4 T 3 Q m 8 Y i Y F g n i L q Q o 3 4 T b n i V B 2 z 7 H n g 5 q C C f D X j 8 g u u 0 U c M h g w h O C I o w g E 8 p P R 0 4 c J B Q l w P U + I k I W H i H P c o k T a j L E 4 Z H r F j + g 5 p 1 8 3 Z i P b a M z V q R q c E 9 E p S 2 j g g T U x 5 k r A + z T b x z D h r 9 j f v q f H U d 5 v Q 3 8 + 9 Q m I V R s T + p Z t l / l e n a 1 E Y 4 M T U I K i m x D C 6 O p a 7 Z K Y r + u b 2 l 6 o U O S T E a d y n u C T M j H L W Z 9 t o U l O 7 7 q 1 n 4 m 8 m U 7 N 6 z / L c D O / 6 l j R g 9 + c 4 5 0 G 7 V n W P q r X z W q V + m o + 6 i D 3 s 4 5 D m e Y w 6 G m i i R d 5 D P O I J z 1 b D i q z M u v t M t Q q 5 Z h f f l v X w A Z Y a k F w = < / l a t e x i t > yard < l a t e x i t s h a 1 _ b a s e 6 4 = \" 8 2 e Z 8 h u N U N A b 9 Q a G O Q Q g I h t d K R U = \" > A A A C x 3 i c j V H L T s J A F D 3 U F + I L d e m m k Z i 4 I i 0 u d E l 0 o z t M B E y Q m O k w Q E N f m U 6 J h L j w B 9 z q n x n / Q P / C O 2 N J V G J 0 m r Z n z r 3 n z N x 7 v S T w U + U 4 r w V r Y X F p e a W 4 W l p b 3 9 j c K m / v t N I 4 k 1 w 0 e R z E 8 t p j q Q j 8 S D S V r w J x n U j B Q i 8 Q b W 9 0 p u P t s Z C p H 0 d X a p K I b s g G k d / 3 O V O a m j D Z u y 1 X n K p j l j 0 P 3 B x U k K 9 G X H 7 B D X q I w Z E h h E A E R T g A Q 0 p P B y 4 c J M R 1 M S V O E v J N X O A e J d J m l C U o g x E 7 o u + A d p 2 c j W i v P V O j 5 n R K Q K 8 k p Y 0 D 0 s S U J w n r 0 2 w T z 4 y z Z n / z n h p P f b c J / b 3 c K y R W Y U j s X 7 p Z 5 n 9 1 u h a F P k 5 M D T 7 V l B h G V 8 d z l 8 x 0 R d / c / l K V I o e E O I 1 7 F J e E u V H O + m w b T W p q 1 7 1 l J v 5 m M j W r 9 z z P z f C u b 0 k D d n + O c x 6 0 a l X 3 q F q 7 r F X q p / m o i 9 j D P g 5 p n s e o 4 x w N N M l 7 i E c 8 4 d m 6 s G J r b N 1 9 p l q F X L O L b 8 t 6 + A D E 3 J D V < / l a t e x i t > What < l a t e x i t s h a 1 _ b a s e 6 4 = \" O 2 N / z M v 3 r r a p q I i k 8 C e y 3 V 6 Q k R U = \" > A A A C x 3 i c j V H L T s J A F D 3 U F + I L d e m m k Z i 4 I i 0 u d E l 0 o z t M B E y Q m L Y M 0 N B 2 m u m U S I g L f 8 C t / p n x D / Q v v D M O i U q M T t P 2 z L n 3 n J l 7 r 5 9 G Y S Y d 5 7 V g L S w u L a 8 U V 0 t r 6 x u b W + X t n V b G c x G w Z s A j L q 5 9 L 2 N R 
m L C m D G X E r l P B v N i P W N s f n a l 4 e 8 x E F v L k S k 5 S 1 o 2 9 Q R L 2 w 8 C T i m o P P X l b r j h V R y 9 7 H r g G V G B W g 5 d f c I M e O A L k i M G Q Q B K O 4 C G j p w M X D l L i u p g S J w i F O s 5 w j x J p c 8 p i l O E R O 6 L v g H Y d w y a 0 V 5 6 Z V g d 0 S k S v I K W N A 9 J w y h O E 1 W m 2 j u f a W b G / e U + 1 p 7 r b h P 6 + 8 Y q J l R g S + 5 d u l v l f n a p F o o 8 T X U N I N a W a U d U F x i X X X V E 3 t 7 9 U J c k h J U 7 h H s U F 4 U A r Z 3 2 2 t S b T t a v e e j r + p j M V q / a B y c 3 x r m 5 J A 3 Z / j n M e t G p V 9 6 h a u 6 x V 6 q d m 1 E X s Y R + H N M 9 j 1 H G O B p r k P c Q j n v B s X V j c G l t 3 n 6 l W w W h 2 8 W 1 Z D x + B 8 5 C 5 < / l a t e x i t > is < l a t e x i t s h a 1 _ b a s e 6 4 = \" W d 5 E E / r H i H / 3 j g 6 z M q C S O H 2 5 V C U = \" > A A A C x X i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z K 6 0 G X R h S 6 r 2 A f U I k k 6 r U P z I j M p l C L + g F v 9 N f E P 9 C + 8 M 0 5 B L a I T k p w 5 9 5 4 z c + / 1 0 5 A L 6 T i v B W t h c W l 5 p b h a W l v f 2 N w q b + + 0 R J J n A W s G S Z h k H d 8 T L O Q x a 0 o u Q 9 Z J M + Z F f s j a / u h M x d t j l g m e x N d y k r J e 5 A 1 j P u C B J 4 m 6 4 u K 2 X H G q j l 7 2 P H A N q M C s R l J + w Q 3 6 S B A g R w S G G J J w C A + C n i 5 c O E i J 6 2 F K X E a I 6 z j D P U q k z S m L U Y Z H 7 I i + Q 9 p 1 D R v T X n k K r Q 7 o l J D e j J Q 2 D k i T U F 5 G W J 1 m 6 3 i u n R X 7 m / d U e 6 q 7 T e j v G 6 + I W I k 7 Y v / S z T L / q 1 O 1 S A x w o m v g V F O q G V V d Y F x y 3 R V 1 c / t L V Z I c U u I U 7 l M 8 I x x o 5 a z P t t Y I X b v q r a f j b z p T s W o f m N w c 7 + q W N G D 3 5 z j n Q a t W d Y + q t c t a p X 5 q R l 3 E H v Z x S P M 8 R h 0 X a K B J 3 g M 8 4 g n P 1 r k V W d I a f 6 Z a B a P Z x b d l P X w A f 4 K P 7 Q = = < / l a t e x i t > the < l a t e x i t s h a 1 _ b a s e 6 4 = \" j Y o L t 6 Q P h H G 3 n v 6 m z F B Y E G F V 3 9 c = \" > A A A C x n i c j V H L T s J A F D 3 U F + I L d e m m k Z i 4 I i 0 u d E l 0 w x K j g A k S 0 w 4 D T O g r 0 6 m G E B N / w K 1 + m v E P 9 C + 8 M 5 Z E J U a n a X v m 3 H v O z L 3 X T w K R K s d 5 L V g L i 0 v L K 8 X V 0 t r 6 x u Z W e X u n n c a Z Z L z F 4 i C W V 7 6 X 8 k B E v K W E C v h V I r k X + g H v + O M z H e / c c p m K O L p U k 4 T 3 Q m 8 Y i Y F g n i L q Q o 3 4 T b n i V B 2 z 7 H n g 5 q C C f D X j 8 g u u 0 U c M h g w h O C I o w g E 8 p P R 0 4 c J B Q l w P U + I k I W H i H P c o k T a j L E 4 Z H r F j + g 5 p 1 8 3 Z i P b a M z V q R q c E 9 E p S 2 j g g T U x 5 k r A + z T b x z D h r 9 j f v q f H U d 5 v Q 3 8 + 9 Q m I V R s T + p Z t l / l e n a 1 E Y 4 M T U I K i m x D C 6 O p a 7 Z K Y r + u b 2 l 6 o U O S T E a d y n u C T M j H L W Z 9 t o U l O 7 7 q 1 n 4 m 8 m U 7 N 6 z / L c D O / 6 l j R g 9 + c 4 5 0 G 7 V n W P q r X z W q V + m o + 6 i D 3 s 4 5 D m e Y w 6 G m i i R d 5 D P O I J z 1 b D i q z M u v t M t Q q 5 Z h f f l v X w A Z Y a k F w = < / l a t e x i t > dog < l a t e x i t s h a 1 _ b a s e 6 4 = \" N V O e A X l d b j h M K Y S K f M B S G q a V l g k = \" > A A A C x n i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z K 6 0 G X R T Z c V 7 Q N q k S S d x t A 0 E y Y T p R T B H 3 C r n y b + g f 6 F d 8 Y p q E V 0 Q p I z 5 9 5 z Z u 6 9 f h p H m X S c 1 4 K 1 s L i 0 v F J c L a 2 t b 2 x u l b d 3 2 h n P R c B a A Y + 5 6 P p e x u I o Y S 0 Z y Z h 1 U 8 G 8 s R + z j j 8 6 U / H O L R N Z x J N L O U l Z f + y F S T S M A k 8 S d T 
H g 4 X W 5 4 l Q d v e x 5 4 B p Q g V l N X n 7 B F Q b g C J B j D I Y E k n A M D x k 9 P b h w k B L X x 5 Q 4 Q S j S c Y Z 7 l E i b U x a j D I / Y E X 1 D 2 v U M m 9 B e e W Z a H d A p M b 2 C l D Y O S M M p T x B W p 9 k 6 n m t n x f 7 m P d W e 6 m 4 T + v v G a 0 y s x A 2 x f + l m m f / V q V o k h j j R N U R U U 6 o Z V V 1 g X H L d F X V z + 0 t V k h x S 4 h Q e U F w Q D r R y 1 m d b a z J d u + q t p + N v O l O x a h + Y 3 B z v 6 p Y 0 Y P f n O O d B u 1 Z 1 j 6 q 1 8 1 q l f m p G X c Q e 9 n F I 8 z x G H Q 0 0 0 S L v E I 9 4 w r P V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "s B I r t + 4 + U 6 2 C 0 e z i 2 7 I e P g C F Y Z B V < / l a t e x i t > biting < l a t e x i t s h a 1 _ b a s e 6 4 = \" b y H", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "R K J L N i 8 f 1 q e 3 1 1 m D 3 h l f m T R Q = \" > A A A C y X i c j V H L T s M w E J y G V y m v A k c u E R U S p y o p B z h W c E H i U i T 6 k E q F E t c t p m k S H A d R K k 7 8 A F f 4 M c Q f w F + w N q k E V A g c J R n P 7 o y 9 u 3 4 c i E Q 5 z m v O m p m d m 1 / I L x a W l l d W 1 4 r r G 4 0 k S i X j d R Y F k W z 5 X s I D E f K 6 E i r g r V h y b + g H v O k P j n S 8 e c N l I q L w T I 1 i 3 h l 6 / V D 0 B P M U U Q 1 f K B H 2 L 4 o l p + y Y Z U 8 D N w M l Z K s W F V 9 w j i 4 i M K Q Y g i O E I h z A Q 0 J P G y 4 c x M R 1 M C Z O E h I m z n G P A m l T y u K U 4 R E 7 o G + f d u 2 M D W m v P R O j Z n R K Q K 8 k p Y 0 d 0 k S U J w n r 0 2 w T T 4 2 z Z n / z H h t P f b c R / f 3 M a 0 i s w i W x f + k m m f / V 6 V o U e j g w N Q i q K T a M r o 5 l L q n p i r 6 5 / a U q R Q 4 x c R p 3 K S 4 J M 6 O c 9 N k 2 m s T U r n v r m f i b y d S s 3 r M s N 8 W 7 v i U N 2 P 0 5 z m n Q q J T d v X L l t F K q H m a j z m M L 2 9 i l e e 6 j i m P U U C f v K z z i C c / W i X V t 3 V p 3 n 6 l W L t N s 4 t u y H j 4 A + v O R t g = = < / l a t e x i t > [CLS] < l a t e x i t s h a 1 _ b a s e 6 4 = \" O N 1 u Q r 9 n G K j B K m k S 1 v G V H 0 c X V 9 I = \" > A A A C y H i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z K 6 0 G W x G x E X F U 1 b q E W S 6 b Q O z Y t k o p T i x h 9 w q 1 8 m / o H + h X f G F N Q i O i H J m X P v O T P 3 X i / 2 R S o t 6 7 V g z M 0 v L C 4 V l 0 s r q 2 v r G + X N r V Y a Z Q n j D o v 8 K O l 4 b s p 9 E X J H C u n z T p x w N / B 8 3 v Z G D R V v 3 / I k F V F 4 K c c x 7 w X u M B Q D w V x J l N N t n F 3 0 r s s V q 2 r p Z c 4 C O w c V 5 K s Z l V 9 w h T 4 i M G Q I w B F C E v b h I q W n C x s W Y u J 6 m B C X E B I 6 z n G P E m k z y u K U 4 R I 7 o u + Q d t 2 c D W m v P F O t Z n S K T 2 9 C S h N 7 p I k o L y G s T j N 1 P N P O i v 3 N e 6 I 9 1 d 3 G 9 P d y r 4 B Y i R t i / 9 J N M / + r U 7 V I D H C k a x B U U 6 w Z V R 3 L X T L d F X V z 8 0 t V k h x i 4 h T u U z w h z L R y 2 m d T a 1 J d u + q t q + N v O l O x a s / y 3 A z v 6 p Y 0 Y P v n O G d B q 1 a 1 D 6 q 1 8 1 q l f p y P u o g d 7 G K f 5 n m I O k 7 Q h E P e A o 9 4 w r N x a s T G n T H + T D U K u W Y b 3 5 b x 8 A G 2 4 5 D J < / l a t e x i t > [SEP]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" + 5 I 9 g g H u O t Z m a a 1 U M z 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "I j T d w t n M = \" > A A A C y H i c j V H L S s 
N A F D 2 N r 1 p f V Z d u g k V w V Z K 6 0 G V R B H F V 0 b S F W i S Z T u t g m o T J R C n F j T / g V r 9 M / A P 9 C + + M K a h F d E K S M + f e c 2 b u v U E S i l Q 5 z m v B m p m d m 1 8 o L p a W l l d W 1 8 r r G 8 0 0 z i T j H o v D W L Y D P + W h i L i n h A p 5 O 5 H c H w Y h b w U 3 R z r e u u U y F X F 0 o U Y J 7 w 7 9 Q S T 6 g v m K K K 9 z f t z o X p U r T t U x y 5 4 G b g 4 q y F c j L r / g E j 3 E Y M g w B E c E R T i E j 5 S e D l w 4 S I j r Y k y c J C R M n O M e J d J m l M U p w y f 2 h r 4 D 2 n V y N q K 9 9 k y N m t E p I b 2 S l D Z 2 S B N T n i S s T 7 N N P D P O m v 3 N e 2 w 8 9 d 1 G 9 A 9 y r y G x C t f E / q W b Z P 5 X p 2 t R 6 O P A 1 C C o p s Q w u j q W u 2 S m K / r m 9 p e q F D k k x G n c o 7 g k z I x y 0 m f b a F J T u + 6 t b + J v J l O z e s / y 3 A z v + p Y 0 Y P f n O K d B s 1 Z 1 9 6 q 1 s 1 q l f p i P u o g t b G O X 5 r m P O k 7 Q g E f e A o 9 4 w r N 1 a i X W n T X 6 T L U K u W Y T 3 5 b 1 8 A H F Q p D P < / l a t e x i t > (textual tokens) < l a t e x i t s h a 1 _ b a s e 6 4 = \" v s J F 5 w / k q K m m h R w Y x h O P g M T p w W Y = \" > A A A C 1 X i c j V H L S s N A F D 2 N r 1 p f U Z d u g k W o m 5 L W h S 6 L b l x W s A + o U p J 0 W o f m R T I p l t K d u P U H 3 O o v i X + g f + G d c Q p q E Z 2 Q 5 M y 5 9 5 y Z e 6 8 b + z w V t v 2 a M x Y W l 5 Z X 8 q u F t f W N z S 1 z e 6 e Z R l n i s Y Y X + V H S d p 2 U + T x k D c G F z 9 p x w p z A 9 V n L H Z 7 J e G v E k p R H 4 a U Y x + w 6 c A Y h 7 3 P P E U R 1 T b M k 2 K 3 I H N 8 S 0 Z C F 6 W H X L N p l W y 1 r H l Q 0 K E K v e m S + 4 A o 9 R P C Q I Q B D C E H Y h 4 O U n g 4 q s B E T d 4 0 J c Q k h r u I M U x R I m 1 E W o w y H 2 C F 9 B 7 T r a D a k v f R M l d q j U 3 x 6 E 1 J a O C B N R H k J Y X m a p e K Z c p b s b 9 4 T 5 S n v N q a / q 7 0 C Y g V u i P 1 L N 8 v 8 r 0 7 W I t D H i a q B U 0 2 x Y m R 1 n n b J V F f k z a 0 v V Q l y i I m T u E f x h L C n l L M + W 0 q T q t p l b x 0 V f 1 O Z k p V 7 T + d m e J e 3 p A F X f o 5 z H j S r 5 c p R u X p R L d Z O 9 a j z 2 M M + S j T P Y 9 R w j j o a 5 D 3 C I 5 7 w b L S M q X F n 3 H + m G j m t 2 c W 3 Z T x 8 A K N 5 l d o = < / l a t e x i t > Caption < l a t e x i t s h a 1 _ b a s e 6 4 = \" o D x j O a H s Y p o y i f 1 9 C z M X K J 1 F D Q U = \" > A A A C y n i c j V H L T s M w E J y G V y m v A k c u E R U S p y o p B z h W 9 M K B Q 5 H o Q 4 I K O a l b r K Z J 5 D h I V c W N H + A K H 4 b 4 A / g L 1 s a V g A q B o 9 j j 2 Z 2 1 x x u k k c i U 5 7 0 W n I X F p e W V 4 m p p b X 1 j c 6 u 8 v d P O k l y G v B U m U S K 7 A c t 4 J G L e U k J F v J t K z s Z B x D v B q K H j n T s u M 5 H E l 2 q S 8 t 6 Y D W M x E C F T R H U a L N X r T b n i V T 0 z 3 H n g W 1 C B H c 2 k / I J r 9 J E g R I 4 x O G I o w h E Y M v q u 4 M N D S l w P U + I k I W H i H P c o k T a n L E 4 Z j N g R z U P a X V k 2 p r 2 u m R l 1 S K d E 9 E t S u j g g T U J 5 k r A + z T X x 3 F T W 7 G + 1 p 6 a m v t u E 1 s D W G h O r c E v s X 7 p Z 5 n 9 1 2 o v C A C f G g y B P q W G 0 u 9 B W y c 2 r 6 J u 7 X 1 w p q p A S p 3 G f 4 p J w a J S z d 3 a N J j P e 9 d s y E 3 8 z m Z r V + 9 D m 5 n j X t 6 Q G + z / b O Q / a t a p / V K 1 d 1 C r 1 U 9 v q I v a w j 0 P q 5 z H q O E M T L e P y E U 9 4 d s 4 d 6 U y c 6 W e q U 7 C a X X w b z s M H 4 t a S E Q = = < / l a t e x i t > Objects < l a t e x i t s h a 1 _ b a s e 6 4 = \" I X D d Y b V H 7 Q v V 9 U 5 w I E 2 X j f 8 F W K k = \" > A A A C y n i c j V H L S s N A F D 2 N r 1 p f V 
Z d u g k V w V Z K 6 0 G X R j Q v B C v Y B t U g y n d a x e T G Z C K W 4 8 w f c 6 o e J f 6 B / 4 Z 0 x B b W I T k h y 5 t x z 7 s y 9 1 0 8 C k S r H e S 1 Y c / M L i 0 v F 5 d L K 6 t r 6 R n l z q 5 X G m W S 8 y e I g l h 3 f S 3 k g I t 5 U Q g W 8 k 0 j u h X 7 A 2 / 7 o R M f b d 1 y m I o 4 u 1 T j h v d A b R m I g m K e I a p / 7 t 5 y p 9 L p c c a q O W f Y s c H N Q Q b 4 a c f k F V + g j B k O G E B w R F O E A H l J 6 u n D h I C G u h w l x k p A w c Y 5 7 l M i b k Y q T w i N 2 R N 8 h 7 b o 5 G 9 F e 5 0 y N m 9 E p A b 2 S n D b 2 y B O T T h L W p 9 k m n p n M m v 0 t 9 8 T k 1 H c b 0 9 / P c 4 X E K t w Q + 5 d v q v y v T 9 e i M M C R q U F Q T Y l h d H U s z 5 K Z r u i b 2 1 + q U p Q h I U 7 j P s U l Y W a c 0 z 7 b x p O a 2 n V v P R N / M 0 r N 6 j 3 L t R n e 9 S 1 p w O 7 P c c 6 C V q 3 q H l R r F 7 V K / T g f d R E 7 2 M U + z f M Q d Z y i g a a p 8 h F P e L b O L G m N r c m n 1 C r k n m 1 8 W 9 b D B 9 l X k g 0 = < / l a t e x i t > (visual tokens) < l a t e x i t s h a 1 _ b a s e 6 4 = \" K D j m I B r W a Q j 6 p V y Z D 8 o Y c + r M y q Q = \" > A A A C 1 H i c j V H L S s N A F D 2 N r 1 o f j b p 0 E y x C 3 Z S 0 L n R Z d O O y g n 1 A l Z K k 0 z o 0 L z K T Q q m u x K 0 / 4 F a / S f w D / Q v v j C m o R X R C k j P n n n N n 7 r 1 u 7 H M h b f s 1 Z y w s L i 2 v 5 F c L a + s b m 0 V z a 7 s l o j T x W N O L / C j p u I 5 g P g 9 Z U 3 L p s 0 6 c M C d w f d Z 2 R 6 c q 3 h 6 z R P A o v J C T m F 0 F z j D k A + 4 5 k q i e W S y P u U g d 3 5 L R i I X i o G e W 7 I q t l z U P q h k o I V u N y H z B J f q I 4 C F F A I Y Q k r A P B 4 K e L q q w E R N 3 h S l x C S G u 4 w y 3 K J A 3 J R U j h U P s i L 5 D 2 n U z N q S 9 y i m 0 2 6 N T f H o T c l r Y J 0 9 E u o S w O s 3 S 8 V R n V u x v u a c 6 p 7 r b h P 5 u l i s g V u K a 2 L 9 8 M + V / f a o W i Q G O d Q 2 c a o o 1 o 6 r z s i y p 7 o q 6 u f W l K k k Z Y u I U 7 l M 8 I e x p 5 6 z P l v Y I X b v q r a P j b 1 q p W L X 3 M m 2 K d 3 V L G n D 1 5 z j n Q a t W q R 5 W a u e 1 U v 0 k G 3 U e u 9 h D m e Z 5 h D r O 0 E B T z / w R T 3 g 2 W s a N c W f c f 0 q N X O b Z w b d l P H w A Z e 2 V X Q = = < / l a t e x i t > Decoder < l a t e x i t s h a 1 _ b a s e 6 4 = \" Q F g t 9 b Q O 9 u m d l B I A r S 9 R f 3 O 1 X E U = \" > A A A C y n i c j V H L S s N A F D 2 N r / q u u n Q T L I K r k t S F L o u 6 c O G i g n 1 A L Z J M p 3 V o m g m T i V C K O 3 / A r X 6 Y + A f 6 F 9 4 Z U 1 C L 6 I Q k Z 8 4 9 5 8 7 c e 8 M k E q n 2 v N e C M z e / s L h U X F 5 Z X V v f 2 C x t b T d T m S n G G 0 x G U r X D I O W R i H l D C x 3 x d q J 4 M A o j 3 g q H p y b e u u M q F T K + 0 u O E d 0 f B I B Z 9 w Q J N V O u M M 9 n j 6 q Z U 9 i q e X e 4 s 8 H N Q R r 7 q s v S C a / Q g w Z B h B I 4 Y m n C E A C k 9 H f j w k B D X x Y Q 4 R U j Y O M c 9 V s i b k Y q T I i B 2 S N 8 B 7 T o 5 G 9 P e 5 E y t m 9 E p E b 2 K n C 7 2 y S N J p w i b 0 1 w b z 2 x m w / 6 W e 2 J z m r u N 6 R / m u U b E a t w S + 5 d v q v y v z 9 S i 0 c e x r U F Q T Y l l T H U s z 5 L Z r p i b u 1 + q 0 p Q h I c 7 g H s U V Y W a d 0 z 6 7 1 p P a 2 k 1 v A x t / s 0 r D m j 3 L t R n e z S 1 p w P 7 P c c 6 C Z r X i H 1 a q l 9 V y 7 S Q f d R G 7 2 M M B z f M I N Z y j j o a t 8 h F P e H Y u H O W M n c m n 1 C n k n h 1 8 W 8 7 D B 6 m Z k f k = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Encoder layers < l a t e x i t s h a 1 _ b a s e 6 4 = \" 7 U T B r B 9 u + 
U z t 3 Z r 3 B", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "V S q A Y g 8 o h k = \" > A A A C 0 X i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z K 6 0 G V R B J c V 7 Q N q l W Q 6 r a F 5 M Z k I o Q j i 1 h 9 w q z 8 l / o H + h X f G F N Q i O i H J m X P P O T N 3 x o 1 9 L 5 G W 9 V o w Z m b n 5 h e K i 6 W l 5 Z X V t f L 6 R i u J U s F 4 k 0 V + J D q u k 3 D f C 3 l T e t L n n V h w J 3 B 9 3 n Z H R 6 r e v u E i 8 a L w X G Y x 7 w X O M P Q G H n M k U Z f H I Y v 6 X J i + k 5 H o q l y x q p Y e 5 j S w c 1 B B P h p R + Q U X 6 C M C Q 4 o A H C E k Y R 8 O E n q 6 s G E h J q 6 H M X G C k K f r H L c o k T c l F S e F Q + y I v k O a d X M 2 p L n K T L S b 0 S o + v Y K c J n b I E 5 F O E F a r m b q e 6 m T F / p Y 9 1 p l q b x n 9 3 T w r I F b i m t i / f B P l f 3 2 q F 4 k B D n Q P H v U U a 0 Z 1 x / K U V J + K 2 r n 5 p S t J C T F x C v e p L g g z 7 Z y c s 6 k 9 i e 5 d n a 2 j 6 2 9 a q V g 1 Z 7 k 2 x b v a J V 2 w / f M 6 p 0 G r V r X 3 q r X T W q V + m F 9 1 E V v Y x i 7 d 5 z 7 q O E E D T c o W e M Q T n o 0 z I z P u j P t P q V H I P Z v 4 N o y H D 0 c J l P k = < / l a t e x i t > ?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" b R / o F / W r t H K j m 9 o c z P N 6 7 1 1 6 s P", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A = \" > A A A C x H i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z K 6 0 J 1 F Q V y 2 Y B 9 Q i y T T a R 0 6 e Z B M h F L 0 B 9 z q t 4 l / o H / h n T E F t Y h O S H L m 3 H v O z L 3 X j 6 V I l e O 8 F q y F x a X l l e J q a W 1 9 Y 3 O r v L 3 T T q M s Y b z F I h k l X d 9 L u R Q h b y m h J O / G C f c C X / K O P z 7 X 8 c 4 d T 1 I R h V d q E v N + 4 I 1 C M R T M U 0 Q 1 T 2 / K F a f q m G X P A z c H F e S r E Z V f c I 0 B I j B k C M A R Q h G W 8 J D S 0 4 M L B z F x f U y J S w g J E + e 4 R 4 m 0 G W V x y v C I H d N 3 R L t e z o a 0 1 5 6 p U T M 6 R d K b k N L G A W k i y k s I 6 9 N s E 8 + M s 2 Z / 8 5 4 a T 3 2 3 C f 3 9 3 C s g V u G W 2 L 9 0 s 8 z / 6 n Q t C k O c m B o E 1 R Q b R l f H c p f M d E X f 3 P 5 S l S K H m D i N B x R P C D O j n P X Z N p r U 1 K 5 7 6 5 n 4 m 8 n U r N 6 z P D f D u 7 4 l D d j 9 O c 5 5 0 K 5 V 3 a N q r V m r 1 M / y U R e x h 3 0 c 0 j y P U c c l G m g Z 7 0 c 8 4 d m 6 s K S V W t l n q l X I N b v 4 t q y H D + R b j 0 Y = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" 1 Q D E 1 B J t Z K 3 o 2 o K p 6 n 7 N x y + I 9 8 w = \" > A", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Textual embedding", "sec_num": null }, { "text": "A A C 1 n i c j V H L S s N A F D 2 N r 1 p f q S 7 d B I v g q i R 1 o c u i G 5 c V + o J a J I 9 p H Z o X y U Q t R X f i 1 h 9 w q 5 8 k / o H + h X f G F N Q i O i H J m X P v O T P 3 X i f 2 e S p M 8 7 W g z c 0 v L C 4 V l 0 s r q 2 v r G 3 p 5 s 5 1 G W e K y l h v 5 U d J 1 7 J T 5 P G Q t w Y X P u n H C 7 M D x W c c Z H c t 4 5 5 I l K Y / C p h j H r B / Y w 5 A P u G s L o s 7 1 c p N d i 8 z 2 D R Y 4 z P N 4 O D z X K 2 b V V M u Y B V Y O K s h X I 9 J f c A Y P E V x k C M A Q Q h D 2 Y S O l p w c L J m L i + p g Q l x D i K s 5 w g x J p M 8 p i l G E T O 6 L v k H a 9 n A 1 p L z 1 T p X b p F J / e h J Q G d k k 
T U V 5 C W J 5 m q H i m n C X 7 m / d E e c q 7 j e n v 5 F 4 B s Q I X x P 6 l m 2 b + V y d r E R j g U N X A q a Z Y M b I 6 N 3 f J V F f k z Y 0 v V Q l y i I m T 2 K N 4 Q t h V y m m f D a V J V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Textual embedding", "sec_num": null }, { "text": "e 2 y t 7 a K v 6 l M y c q 9 m + d m e J e 3 p A F b P 8 c 5 C 9 q 1 q r V f r Z 3 W K v W j f N R F b G M H e z T P A 9 R x g g Z a 5 H 2 F R z z h W e t q t 9 q d d v + Z q h V y z R a + L e 3 h A z 6 h l n 4 = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Textual embedding", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" Z p q + F 2 8 1 q 5 R 9 s f c m p q 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual embedding", "sec_num": null }, { "text": "K G T E + g L I = \" > A A A C 1 X i c j V H L S s N A F D 2 N r 1 p f U Z d u g k V w V d K 6 0 G X R j c s K 9 g G 1 l D y m d W h e Z C a F U r o T t / 6 A W / 0 l 8 Q / 0 L 7 w z p q A W 0 Q l J z p", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual embedding", "sec_num": null }, { "text": "x 7 z 5 m 5 9 7 p J w I W 0 7 d e C s b S 8 s r p W X C 9 t b G 5 t 7 5 i 7 e y 0 R Z 6 n H m l 4 c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual embedding", "sec_num": null }, { "text": "x G n H d Q Q L e M S a k s u A d Z K U O a E b s L Y 7 u l D x 9 p i l g s f R t Z w k r B c 6 w 4 g P u O d I o v q m 2 e I i c w K L h S 7 z f R 4 N + 2 b Z r t h 6 W Y u g m o M y 8 t W I z R f c w E c M D x l C M E S Q h A M 4 E P R 0 U Y W N h L g e p s S l h L i O M 8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual embedding", "sec_num": null }, { "text": "x Q I m 1 G W Y w y H G J H 9 B 3 S r p u z E e 2 V p 9 B q j 0 4 J 6 E 1 J a e G I N D H l p Y T V a Z a O Z 9 p Z s b 9 5 T 7 W n u t u E / m 7 u F R I r c U v s X 7 p 5 5 n 9 1 q h a J A c 5 0 D Z x q S j S j q v N y l 0 x 3 R d 3 c + l K V J I e E O I V 9 i q e E P a 2 c 9 9 n S G q F r", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual embedding", "sec_num": null }, { "text": "V 7 1 1 d P x N Z y p W 7 b 0 8 N 8 O 7 u i U N u P p z n I u g V a t U T y q 1 q 1 q 5 f p 6 P u o g D H O K Y 5 n m K O i 7 R Q J O 8 x 3 j E E 5 6 N t j E z 7 o z 7 z 1 S j k G v 2 8 W 0 Z D x 8 A d Z Y B < / l a t e x i t > } < l a t e x i t s h a 1 _ b a s e 6 4 = \" J B R j v q 0 h m E 4 s o W R V P K E K b B 1 8 c i s = \" > A A A C x 3 i c j V H L T s J A F D 3 U F + I L d e m m E U x c k R Y X u i S 6 0 R 0 m A i Z A T F s G m N B X p l M i I S z 8 A b f 6 Z 8 Y / 0 L / w z l g S l R i d p u 2 Z c + 8 5 M / d e N / Z 5 I i 3 r N W c s L a + s r u X X C x u b W 9 s 7 x d 2 9 Z h K l w m M N L / I j c e s 6 C f N 5 y B q S S 5 / d x o I 5 g e u z l j u 6 U P H W m I m E R + G N n M S s G z i D k P e 5 5 0 h F l T u z 8 l 2 x Z F U s v c x F Y G e g h G z V o + I L O u g h g o c U A R h C S M I + H C T 0 t G H D Q k x c F 1 P i B C G u 4 w w z F E i b U h a j D I f Y E X 0 H t G t n b E h 7 5 Z l o t U e n + P Q K U p o 4 I k 1 E e Y K w O s 3 U 8 V Q 7 K / Y 3 7 6 n 2 V H e b 0 N / N v A J i J Y b E / q W b Z / 5 X p 2 q R 6 O N M 1 8 C p p l g z q j o v c 0 l 1 V 9 T N z S 9 V S X K I i V O 4 R 3 F B 2 N P K e Z 9 N r U l 0 7 a q 3 j o 6 / 6 U z F q r 2 X 5 a Z 4 V 7 e k A d s / x 7 k I m t W K f V K p X l d L t f N s 1 H k c 4 B D H N M 9 T 1 H C J O h r k P c Q j n v B s X B m R M T b u P 1 O N X K b Z x 7 d l P H w A c C + Q R g = 
= < / l a t e x i t > } < l a t e x i t s h a 1 _ b a s e 6 4 = \" J B R j v q 0 h m E 4 s o W R V P K E K b B 1 8 c i s = \" > A A A C x 3 i c j V H L T s J A F D 3 U F + I L d e m m E U x c k R Y X u i S 6 0 R 0 m A i Z A T F s G m N B X p l M i I S z 8 A b f 6 Z 8 Y / 0 L / w z l g S l R i d p u 2 Z c + 8 5 M / d e N / Z 5 I i 3 r N W c s L a + s r u X X C x u b W 9 s 7 x d 2 9 Z h K l w m M N L / I j c e s 6 C f N 5 y B q S S 5 / d x o I 5 g e u z l j u 6 U P H W m I m E R + G N n M S s G z i D k P e 5 5 0 h F l T u z 8 l 2 x Z F U s v c x F Y G e g h G z V o + I L O u g h g o c U A R h C S M I + H C T 0 t G H D Q k x c F 1 P i B C G u 4 w w z F E i b U h a j D I f Y E X 0 H t G t n b E h 7 5 Z l o t U e n + P Q K U p o 4 I k 1 E e Y K w O s 3 U 8 V Q 7 K / Y 3 7 6 n 2 V H e b 0 N / N v A J i J Y b E / q W b Z / 5 X p 2 q R 6 O N M 1 8 C p p l g z q j o v c 0 l 1 V 9 T N z S 9 V S X K I i V O 4 R 3 F B 2 N P K e Z 9 N r U l 0 7 a q 3 j o 6 / 6 U z F q r 2 X 5 a Z 4 V 7 e k A d s / x 7 k I m t W K f V K p X l d L t f N s 1 H k c 4 B D H N M 9 T 1 H C J O h r k P c Q j n v B s X B m R M T b u P 1 O N X K b Z x 7 d l P H w A c C + Q R g = = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual embedding", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" M 4 U as inputs and questions as outputs).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positional embedding", "sec_num": null }, { "text": "V m Z A o K Y Q H X F G w k H U N z 5 8 j j 4 s = \" > A A A C 2 X i c j V H L S s N A F D 2 N r 1 p f 8 b F z E y y C q 5 L W h S 6 L b l x W s A + o R f K Y 1 q F J J i Q T o R Y X 7 s S t P + B W f 0 j 8 A / 0 L 7 4 w p q E V 0 Q p I z 5 9 5 z Z u 6 9 b h z w V N r 2 a 8 G Y m Z 2 b X y g u l p a W V 1 b X z P W N V i q y x G N N T w Q i 6 b h O y g I e s a b k M m C d O G F O 6 A a s 7 Q 6 P V b x 9 x Z K U i + h M j m L W C 5 1 B x P v c c y R R F + Z W Q 6 R c Q S e w W O g y 3 + f R 4 M I s 2 x V b L 2 s a V H N Q R r 4 a w n z B O X w I e M g Q g i G C J B z A Q U p P F 1 X Y i I n r Y U x c Q o j r O M M N S q T N K I t R h k P s k L 4 D 2 n V z N q K 9 8 k y 1 2 q N T A n o T U l r Y J Y 2 g v I S w O s 3 S 8 U w 7 K / Y 3 7 7 H 2 V H c b 0 d / N v U J i J S 6 J / U s 3 y f y v T t U i 0 c e h r o F T T b F m V H V e 7 p L p r q i b W 1 + q k u Q Q E 6 e w T / G E s K e V k z 5 b W p P q 2 l V v H R 1 / 0 5 m K V X s v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positional embedding", "sec_num": null }, { "text": "v Q 4 S S t g b F I W 4 W 6 E l 8 P P A Y U c A = \" > A A A C x H i c j V H b S s N A E D 2 N t 1 p v V R 9 9 C R Z B E E q i g j 4 W B f G x B X u B W i T Z b m t o b m w 2 Q i n 6 A 7 7 q t 4 l / o H / h 7 L o F t Y h u S H L 2 z J y z O z N + G g a Z d J z X g j U 3 v 7 C 4 V F w u r a y u r W + U N 7 d a W Z I L x p s s C R P R 8 b 2 M h 0 H M m z K Q I e + k g n u R H / K 2 P z p X 8 f Y d F 1 m Q x F d y n P J e 5 A 3 j Y B A w T x L V O L g p V 5 y q o 5 c 9 C 1 w D K j C r n p R f c I 0 + E j D k i M A R Q x I O 4 S G j p w s X D l L i e p g Q J w g F O s 5 x j x J p c 8 r i l O E R O 6 L v k H Z d w 8 a 0 V 5 6 Z V j M 6 J a R X k N L G H m k S y h O E 1 W m 2 j u f a W b G / e U + 0 p 7 r b m P 6 + 8 Y q I l b g l 9 i / d N P O / O l W L x A C n u o a A a k o 1 o 6 p j x i X X X V E 3 t 7 9 U J c k h J U 7 h P s U F Y a a V 0 z 7 b W p P p 2 l V v P R 1 / 0 5 m K V X t m c n O 8 q 1 v S g N 2 f 4 5 w F r c O q e 1 Q 9 b B x X a m d m 1 E X s Y B f 7 N M 8 T 1 H C J O p 
{ "text": "A variety of approaches have been proposed. Mostafazadeh et al. (2016) use a standard Gated Recurrent Neural Network, i.e. a CNN encoder followed by a GRU decoder, to generate questions. Other works aim at generating, for a given image, multiple visually grounded questions of varying types (what, when, where, etc.); similarly, Jain et al. (2017) generate diverse questions using Variational Autoencoders. In Li et al. (2018), VQG is jointly tackled along with its dual task (VQA), as in Yang et al. (2015). In (Patro et al., 2018; Patro and Namboodiri, 2019), the image (processed by a CNN) and the caption (processed by an LSTM) are combined in a mixture module, followed by an LSTM decoder to generate the question, leading to state-of-the-art results on the VQG task on VQA1.0 data. More recently, Patro et al. (2020) incorporate multiple cues (place information obtained from PlaceCNN, caption, tags) and combine them within a deep Bayesian framework where the contribution of each cue is weighted to predict a question, obtaining the current state-of-the-art results on VQG COCO.", "cite_spans": [ { "start": 44, "end": 70, "text": "Mostafazadeh et al. (2016)", "ref_id": "BIBREF24" }, { "start": 408, "end": 424, "text": "Li et al. (2018)", "ref_id": "BIBREF17" }, { "start": 490, "end": 508, "text": "Yang et al. (2015)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": null }, { "text": "In VQG, the objective is to generate a relevant question from an image and/or its caption. The caption X_txt is composed of M tokens txt_1, ..., txt_M; these tokens can be words or subword (smaller-than-word) units, depending on the tokenization strategy used. As BERT uses subword tokenization, throughout this paper we will refer to subwords as our tokenization units.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "The proposed model is illustrated in Figure 1. In Section 3.1, we detail how images are incorporated in the Transformer framework. In Section 3.2, we present BERT-gen, a novel approach to using BERT for text generation.", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 45, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "In this work, we treat textual and visual inputs similarly, by considering both as sequences. Since an image is not a priori sequential, we consider the image X_img as a sequence of object regions img_1, ..., img_N, as described below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing an Image as Text", "sec_num": "3.1" }, { "text": "The images are first processed as in Tan and Bansal (2019): a Faster-RCNN (Ren et al., 2017), pre-trained on Visual Genome (Krishna et al., 2017), detects the N = 36 most salient regions (those likely to include an object) per image. The weights of the Faster-RCNN are fixed during training, as we use the precomputed representations made publicly available by Anderson et al. (2018). Each image is thus represented by a sequence of N = 36 semantic embeddings f_1, ..., f_N (one for each object region) of dimension 2048, along with the corresponding bounding box coordinates b_1, ..., b_N of dimension 4.", "cite_spans": [ { "start": 123, "end": 145, "text": "(Krishna et al., 2017)", "ref_id": "BIBREF15" }, { "start": 362, "end": 384, "text": "Anderson et al. (2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Representing an Image as Text", "sec_num": "3.1" },
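As a minimal sketch of the region representation just described (illustrative only, not the authors' released code; tensor names are assumptions), each object embedding o_j concatenates the 2048-dimensional region feature f_j with its 4-dimensional bounding box b_j:

```python
# Minimal sketch: building the object-region sequence o_1, ..., o_N from
# precomputed Faster-RCNN outputs. `features` and `boxes` stand in for the
# publicly released representations of Anderson et al. (2018).
import torch

N = 36                            # number of detected object regions per image
features = torch.randn(N, 2048)   # placeholder for region semantics f_1..f_N
boxes = torch.rand(N, 4)          # placeholder for bounding-box coordinates b_1..b_N

# Each object embedding o_j concatenates semantics and location: 2048 + 4 = 2052.
objects = torch.cat([features, boxes], dim=-1)   # shape: (N, 2052)
assert objects.shape == (N, 2052)
```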
{ "text": "With this approach, the BERT attention can be computed at the level of objects or salient image regions; had we represented images with traditional CNN features, the attention would instead correspond to a uniform grid of image regions without particular semantics, as noted in Anderson et al. (2018). To build an object embedding o_j encoding both the object region semantics and its location in the image, we concatenate f_j and b_j (j \u2208 [1, N]). Hence, an image is seen as a sequence of N = 36 visual representations (each corresponding to an object region) o_1, ..., o_N. Object region representations o_i are ordered by the relevance of the object detected, and the model has access to their relative location in the image through the vectors b_i.", "cite_spans": [ { "start": 279, "end": 301, "text": "Anderson et al. (2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Representing an Image as Text", "sec_num": "3.1" }, { "text": "To investigate whether our BERT-based model can transfer knowledge beyond language, we consider image features as simple visual tokens that can be presented to the model analogously to textual tokens. In order to make the o_j vectors (of dimension 2048 + 4 = 2052) comparable to BERT embeddings (of dimension 768), we use a simple linear cross-modal projection layer W of dimensions 2052 \u00d7 768. The N object regions detected in an image are thus represented as X_img = (W.o_1, ..., W.o_N). Once mapped into BERT's embedding space with W, the image is seen by the rest of the model as a sequence of units with no explicit indication of whether it is of a textual or visual nature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing an Image as Text", "sec_num": "3.1" }, { "text": "We cast the VQG task as a classic sequence-to-sequence (Sutskever et al., 2014) framework:", "cite_spans": [ { "start": 56, "end": 80, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "BERT-gen: Text Generation with BERT", "sec_num": "3.2" }, { "text": "P_{\u0398,W}(Y|X) = \u220f_{t=1}^{T} P_{\u0398,W}(y_t | X, y_{<t}), where X is the input sequence (caption and/or image) and Y = (y_1, ..., y_T) the question to generate. During generation, we apply an attention mask (Vaswani et al., 2017), allowing target tokens y_t to attend only to the input tokens and the already generated target tokens.", "cite_spans": [ { "start": 206, "end": 228, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "BERT-gen: Text Generation with BERT", "sec_num": "3.2" }, { "text": "This novel method makes it possible to use pre-trained encoders for text generation. In this work, we initialize our model with BERT-base parameters. Nonetheless, the methodology can be applied to any pre-trained Transformer encoder, such as RoBERTa or Ernie (Sun et al., 2019b).", "cite_spans": [ { "start": 260, "end": 279, "text": "(Sun et al., 2019b)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "BERT-gen: Text Generation with BERT", "sec_num": "3.2" }, { "text": "Modality-specific setups The proposed model can be used in either mono- or multi-modal setups. This is accomplished by activating or deactivating specific modules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT-gen: Text Generation with BERT", "sec_num": "3.2" },
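The cross-modal projection and the generation-time attention mask can be sketched as follows; this is an illustrative approximation under the stated dimensions (2052 to 768), not the released implementation, and all variable names are assumptions:

```python
# Sketch: linear cross-modal projection W and a sequence-to-sequence attention
# mask in which target positions attend to the input and to previous targets only.
import torch
import torch.nn as nn

W = nn.Linear(2052, 768)            # cross-modal projection into BERT's embedding space
objects = torch.randn(36, 2052)     # o_1..o_N from the previous sketch
visual_tokens = W(objects)          # X_img = (W.o_1, ..., W.o_N), shape (36, 768)

# Factorization P(Y|X) = prod_t P(y_t | X, y_<t): rows are queries, columns are keys.
S, T = visual_tokens.size(0), 5     # S input tokens, T target tokens
mask = torch.zeros(S + T, S + T, dtype=torch.bool)
mask[:, :S] = True                                  # every position may attend to the input
mask[S:, S:] = torch.ones(T, T).tril().bool()       # target y_t attends only to y_<=t
```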
{ "text": "Our main objective is to measure whether the textual knowledge encoded in pre-trained BERT can be beneficial in a cross-modal task. Thus, we define the following three experimental setups, which we refer to as Step 1, 2, and 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Protocol", "sec_num": "4" }, { "text": "1. Caption only Deactivating the Visual embedding module (see Figure 1), the model has access only to the textual input, i.e. the caption. The model is initialized with the BERT weights and trained according to Equation 1.", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 71, "text": "Figure 1)", "ref_id": null } ], "eq_spans": [], "section": "Experimental Protocol", "sec_num": "4" }, { "text": "2. Image only Conversely, deactivating the Textual embedding module (see Figure 1), the model has access only to the input image, not the caption. To indicate the position t of img_t in the sequence, we add the BERT positional embedding of t to the visual representation of img_t, just as we would do for a text token txt_t. The model is initialized with the weights learned during step 1. All BERT-gen \u0398 weights are frozen, and only the linear layer W is learnable. Hence, if the model is able to learn to generate contextualized questions w.r.t. the image, it shows that a simple linear layer is enough to bridge the two modalities.", "cite_spans": [], "ref_spans": [ { "start": 73, "end": 82, "text": "Figure 1)", "ref_id": null } ], "eq_spans": [], "section": "Experimental Protocol", "sec_num": "4" }, { "text": "3. Image + Caption The full model is given access to both image and caption inputs. In this setup, we separate the two inputs with a special BERT token [SEP]. Thus, the input sequence for the model takes the form [CLS], img_1, ..., img_N, [SEP], txt_1, ..., txt_M.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Image + Caption", "sec_num": "3." }, { "text": "In step 1, only the BERT-gen \u0398 parameters are learned, as no image input is given. In step 2, W is trained while keeping \u0398 frozen. Finally, in step 3, we fine-tune the model using both image and text inputs: the model is initialized with the parameters \u0398 learned during step 1 and the W learned during step 2, and we unfreeze all parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Image + Caption", "sec_num": "3." }, { "text": "Ablations Additionally, we report results obtained with: Image only (unfreeze), where the BERT-gen parameters \u0398 are not frozen; and Image+Caption (from scratch), where the model is learned without the intermediate steps 1 and 2: the BERT-gen parameters \u0398 are initialized with the weights from pre-trained BERT, while W is randomly initialized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Image + Caption", "sec_num": "3." },
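The three-step schedule above amounts to toggling which parameters receive gradients. A hedged sketch, with hypothetical module names rather than the authors' code, is:

```python
# Sketch of the training schedule: Step 1 trains BERT-gen (Theta) on captions,
# Step 2 trains only the projection W on images, Step 3 unfreezes everything.
import torch.nn as nn

class VQGModel(nn.Module):
    def __init__(self, bert_gen: nn.Module):
        super().__init__()
        self.bert_gen = bert_gen          # pre-trained BERT used as generator (parameters Theta)
        self.W = nn.Linear(2052, 768)     # cross-modal projection

def configure_step(model: VQGModel, step: int):
    if step == 1:      # Caption only: learn Theta, W unused
        for p in model.bert_gen.parameters():
            p.requires_grad = True
    elif step == 2:    # Image only: freeze Theta, learn only W
        for p in model.bert_gen.parameters():
            p.requires_grad = False
        for p in model.W.parameters():
            p.requires_grad = True
    elif step == 3:    # Image + Caption: unfreeze all parameters
        for p in model.parameters():
            p.requires_grad = True
```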
{ "text": "We conduct our experiments using two established datasets for Visual Question Generation. VQG COCO (Mostafazadeh et al., 2016) contains 2500 training images, 1250 validation images and 1250 test images from MS COCO (Lin et al., 2014); each image has 5 corresponding questions and 5 ground-truth captions. The Visual Question Answering (VQA) (Antol et al., 2015) dataset can be used to derive VQG data (Li et al., 2018). The task is reversed: instead of answering the question based on the image (VQA), models are asked to generate a relevant question given the image (VQG). Also based on MS COCO, it contains 82783 training images, 40504 validation images and 81434 testing images. In VQA1.0, each image has 3 associated questions. Since the test set of MS COCO does not contain ground-truth captions, we generated artificial captions for it using NeuralTalk2 (Karpathy and Li, 2015): for fair comparison, we used exactly the same model as Patro and Namboodiri (2019) (MDN-Joint).", "cite_spans": [ { "start": 100, "end": 128, "text": "(Mostafazadeh et al., 2016)", "ref_id": "BIBREF24" }, { "start": 216, "end": 234, "text": "(Lin et al., 2014)", "ref_id": "BIBREF19" }, { "start": 342, "end": 362, "text": "(Antol et al., 2015)", "ref_id": "BIBREF2" }, { "start": 402, "end": 419, "text": "(Li et al., 2018)", "ref_id": "BIBREF17" }, { "start": 861, "end": 884, "text": "(Karpathy and Li, 2015)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "We compare the proposed model to the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "Sample (Yang et al., 2015) Questions are generated by an RNN conditioned on the image: at each generation step, the distribution over the vocabulary is computed and used to sample the next generated word. This baseline enables generating diverse questions for the same image, as the word selection process is non-deterministic.", "cite_spans": [ { "start": 7, "end": 26, "text": "(Yang et al., 2015)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "Max (Yang et al., 2015) The same model as above, selecting at each step the word with maximum probability from the computed distribution.", "cite_spans": [ { "start": 4, "end": 23, "text": "(Yang et al., 2015)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.2" }, { "text": "We report the following metrics for all experiments, consistently with previous works:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "4.3" }, { "text": "BLEU (Papineni et al., 2002) A precision-oriented metric, originally proposed to evaluate machine translation. It is based on the counts of overlapping n-grams between the generated sequences and the human references.", "cite_spans": [ { "start": 5, "end": 28, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "4.3" }, { "text": "ROUGE (Lin, 2004) The recall-oriented counterpart to BLEU, based on n-gram overlaps.", "cite_spans": [ { "start": 6, "end": 17, "text": "(Lin, 2004)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "4.3" }, { "text": "METEOR (Banerjee and Lavie, 2005) The harmonic mean between precision and recall w.r.t. unigrams. As opposed to the other metrics, it also accounts for stemming and synonymy matching.", "cite_spans": [ { "start": 7, "end": 33, "text": "(Banerjee and Lavie, 2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "4.3" }, { "text": "CIDEr (Vedantam et al., 2015) Originally designed for Image Captioning, it uses human consensus among the multiple references, favoring rare words and penalizing frequent words. This feature is particularly relevant for our task, as the automatically generated questions often follow similar patterns such as \"What is the [...] ?\". Indeed, we verify experimentally (cf. Table 1 and Table 2) that the CIDEr metric is the most discriminant in our quantitative results.", "cite_spans": [ { "start": 6, "end": 29, "text": "(Vedantam et al., 2015)", "ref_id": "BIBREF41" } ], "ref_spans": [ { "start": 371, "end": 390, "text": "Table 1 and Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Metrics", "sec_num": "4.3" },
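As a toy illustration of the n-gram overlap idea underlying BLEU- and ROUGE-style metrics (not the evaluation package actually used in the paper), modified n-gram precision of a candidate question against reference questions can be computed as follows:

```python
# Toy n-gram precision: counts of candidate n-grams, clipped by their maximum
# count in any reference, divided by the total number of candidate n-grams.
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_precision(candidate, references, n=1):
    cand = ngrams(candidate.lower().split(), n)
    if not cand:
        return 0.0
    max_ref = Counter()
    for ref in references:
        for gram, count in ngrams(ref.lower().split(), n).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    return clipped / sum(cand.values())

print(ngram_precision("what is the color of the desk ?",
                      ["what is the color of the table ?"], n=1))   # 0.875
```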
{ "text": "All models are implemented in PyText (Aly et al., 2018). For all our experiments, we used a single NVIDIA RTX 2080 Ti GPU, a batch size of 128 and 5 epochs. We used the Adam optimizer with the recommended parameters for BERT: the learning rate is set to 2e-5 with a warmup of 0.1. The most computationally expensive experiment is step 3 described above: for this model, completion of one epoch demands 30 seconds and 2 minutes for the VQG COCO and VQA datasets, respectively. Metrics were computed using the Python package released by Du et al. (2017).", "cite_spans": [ { "start": 37, "end": 55, "text": "(Aly et al., 2018)", "ref_id": "BIBREF0" }, { "start": 536, "end": 552, "text": "Du et al. (2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation details", "sec_num": "4.4" },
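For reference, the stated hyper-parameters roughly correspond to the following standard setup; the paper uses PyText, so this HuggingFace/PyTorch sketch is an approximation of the described configuration (lr 2e-5, 10% warmup, batch size 128, 5 epochs), not the authors' code:

```python
# Sketch of an AdamW optimizer with linear warmup matching the stated settings.
import torch
from transformers import get_linear_schedule_with_warmup

def build_optimizer(model, num_training_steps, lr=2e-5, warmup_ratio=0.1):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(warmup_ratio * num_training_steps),
        num_training_steps=num_training_steps,
    )
    return optimizer, scheduler
```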
{ "text": "In Table 1, we report quantitative results for the VQG task on VQA1.0. The Caption only model already shows strong improvements for all metrics over SOTA models. For this text-only model, the impressive performance can mostly be attributed to BERT, demonstrating once again the benefits of pre-trained language models. In our Step 2 (Image only), BERT's \u0398 parameters are frozen and only those of the cross-modal projection matrix W are learned. Despite using a simple linear layer, the model is found to perform well, generating relevant questions given only visual inputs. This suggests that the conceptual representations encoded in pre-trained language models such as BERT can effectively be used beyond text. Further, we report an additional Image only experiment, this time unfreezing the BERT parameters \u0398 (see Step 2 (unfreeze) in Table 1). As could be expected, since the model is allowed more flexibility, the performance is found to further improve.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 840, "end": 847, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Finally, in our third step (Image + Caption), we obtain the highest scores for all metrics. This indicates that the model is able to effectively leverage the combination of textual and visual inputs. Indeed, complementary information from both modalities can be exploited by the self-attention mechanism, making visual and textual tokens interact to generate the output sequences. Again, we additionally report the results obtained bypassing the intermediate steps 1 and 2: for the model denoted as Step 3 (from scratch) (last row of Table 1), the \u0398 parameters are initialized with the original weights from pre-trained BERT, while the W matrix is randomly initialized. Under this experimental condition, we observe lower performances, a finding that consolidates the importance of the multi-step training procedure we adopted.", "cite_spans": [], "ref_spans": [ { "start": 536, "end": 543, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In Table 2, we report quantitative VQG results on VQG COCO. These are globally consistent with the ones above for VQA1.0. However, we observe two main differences. First, a larger relative improvement over the state-of-the-art. As the efficacy of pre-trained models is boosted in small-data scenarios (Radford et al., 2018), this difference can be explained by the smaller size of VQG COCO. Second, we note that the Caption only model globally outperforms all other models, especially on the discriminant CIDEr metric. This can be explained by the fact that, in VQG COCO, the captions are human-written (whereas they are automatically generated for VQA1.0) and, thus, of higher quality; moreover, the smaller size of the dataset could play a role in hindering the ability to adapt to the visual modality. Nonetheless, the strong performances obtained for Step 2 compared to the baselines highlight the effectiveness of our method in learning a cross-modal projection even with a relatively small number of training images.", "cite_spans": [ { "start": 302, "end": 324, "text": "(Radford et al., 2018)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Human Evaluation To get a more in-depth understanding of our models, we report human assessment results in Table 3. We randomly sampled 50 images from the test set of VQA1.0. Each image is paired with its caption, the human-written question used as ground truth, and the output of our three models: Caption only, Image only and Image+Caption. We asked 3 human annotators to assess the quality of each question using a Likert scale ranging from 1 to 5, for the following criteria: readability, measuring how well-written the question is; caption relevance, how relevant the question is w.r.t. the caption; and image relevance, how relevant the question is w.r.t. the image. For caption and image relevance, the annotators were presented with only the caption and only the image, respectively.", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 115, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "We observe that all evaluated models produce well-written sentences, as readability does not significantly differ from that of the human questions. Unsurprisingly, the Caption only model shows a higher score for caption relevance, while its relatively lower image relevance score can be explained by the automatically generated, and thus imperfect, captions in the VQA1.0 dataset. Comparatively, the Image only model obtains lower caption relevance and higher image relevance scores; this indicates that the cross-modal projection is sufficient to bridge modalities, allowing BERT to generate relevant questions about the image. Finally, the Image + Caption model obtains the best image relevance among our models, consistently with the quantitative results reported in Tables 1 and 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" },
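A minimal sketch of how such Likert ratings could be aggregated per model and criterion (the scores below are hypothetical placeholders, not the paper's annotations):

```python
# Toy aggregation of 1-5 Likert ratings into mean scores per model and criterion.
from statistics import mean

ratings = {
    "Caption only": {"readability": [5, 4, 5], "caption_relevance": [5, 5, 4], "image_relevance": [3, 3, 4]},
    "Image only":   {"readability": [4, 5, 4], "caption_relevance": [3, 3, 4], "image_relevance": [4, 5, 4]},
}

for model, criteria in ratings.items():
    summary = {name: round(mean(scores), 2) for name, scores in criteria.items()}
    print(model, summary)
```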
{ "text": "[Figure 2 labels: image A caption \"A room with a desk and a laptop\"; generated questions include \"What is the color of the desk ?\", \"What is the color of the table ?\", \"What time is it ?\" and \"What is the woman holding ?\".]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What does the model look at?", "sec_num": null }, { "text": "Figure 2: Qualitative Analysis. We show the outputs of the three steps of our model, using two samples from the VQA1.0 test set. 1) Caption only; 2) Image only; 3) Image + Caption. Words and object regions with maximum attention are underlined and marked, respectively. Color intensity is proportional to attention.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "What does the model look at?", "sec_num": null }, { "text": "What does the model look at? To interpret the behavior of attention-based models, it is useful to look at which tokens are given higher attention (Clark et al., 2019). In Figure 2, we present two images A and B, along with their captions and the three generated questions corresponding to our three experimental setups (Caption only, Image only and Image + Caption). For this analysis, we average the attention vectors of all the heads in the last layer, and highlight the textual and visual tokens most attended by the models.", "cite_spans": [ { "start": 147, "end": 167, "text": "(Clark et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 172, "end": 180, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "What does the model look at?", "sec_num": null }, { "text": "For both images, the Caption only model attends to salient words in the caption. The Image only model remains at least as relevant: on image A, it generates a question about a table (with an unclear attention). Interestingly, for image B, the Image only model corrects a mistake from step 1: it is a woman holding an umbrella rather than a man, and the attention is indeed focused on the woman in the image. Finally, the Image + Caption model is able to generate fitting questions about the image, with relatively little relevance to the caption: for image A, the Image + Caption model generates \"What time is it?\" while paying attention to the clock; for image B, it generates \"What is the color of the umbrella ?\", focusing the attention on the umbrella. The captions of both samples include no mention of clocks or umbrellas, further indicating effective alignment between visual and textual representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What does the model look at?", "sec_num": null }, { "text": "Cross-modal alignment We carry out an additional experiment to analyze the cross-modal alignment for each version of our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What does the model look at?", "sec_num": null },
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "M = \" > A A A C x n i c j V H L S g M x F D 0 d X 7 W + q i 7 d D B b B V Z m p g i 6 L b r q s a B + g R W b S t I b O i 0 x G K U X w B 9 z q p 4 l / o H / h T Z y C W k Q z T H J y 7 j 0 n u b l + E o h U O c 5 r w Z q b X 1 h c K i 6 X V l b X 1 j f K m 1 v t N M 4 k 4 y 0 W B 7 H s + l 7 K A x H x l h I q 4 N 1 E c i / 0 A 9 7 x R 6 c 6 3 r n l M h V x d K H G C e + F 3 j A S A 8 E 8 R d S 5 U 3 W u y x W a z b B n g Z u D C v L R j M s v u E I f M R g y h O C I o A g H 8 J D S d w k X D h L i e p g Q J w k J E + e 4 R 4 m 0 G W V x y v C I H d E 8 p N 1 l z k a 0 1 5 6 p U T M 6 J a B f k t L G H m l i y p O E 9 W m 2 i W f G W b O / e U + M p 7 7 b m F Y / 9 w q J V b g h 9 i / d N P O / O l 2 L w g D H p g Z B N S W G 0 d W x 3 C U z r 6 J v b n + p S p F D Q p z G f Y p L w s w o p + 9 s G 0 1 q a t d v 6 5 n 4 m 8 n U r N 6 z P D f D u 7 4 l N d j 9 2 c 5 Z 0 K 5 V 3 Y N q 7 e y w U j / J W 1 3 E D n a x T / 0 8 Q h 0 N N N E i 7 y E e 8 Y R n q 2 F F V m b d f a Z a h V y z j W / D e v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "g A 7 L q P q w = = < / l a t e x i t > 0.1 < l a t e x i t s h a 1 _ b a s e 6 4 = \" h T K y T u l d k 8 j a U T 2 0 2 2 I I z I Z 2 w q k = \" > A", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "A A C x n i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w F Z I q 6 L L o p s u K 9 g F a J J l O 6 9 A 0 C Z O J U o r g D 7 j V T x P / Q P / C O 2 M K a h G d k O T M u f e c m X t v k I Q i V a 7 7 W r D m 5 h c W l 4 r L p Z X V t f W N 8 u Z W K 4 0 z y X i T x W E s O 4 G f 8 l B E v K m E C n k n k d w f B S F v B 8 N T H W / f c p m K O L p Q 4 4 R 3 R / 4 g E n 3 B f E X U u e t 4 1 + W K 6 7 h m 2 b P A y 0 E F + W r E 5 R d c o Y c Y D B l G 4 I i g C I f w k d J z C Q 8 u E u K 6 m B A n C Q k T 5 7 h H i b Q Z Z X H K 8 I k d 0 n d A u 8 u c j W i v P V O j Z n R K S K 8 k p Y 0 9 0 s S U J w n r 0 2 w T z 4 y z Z n / z n h h P f b c x / Y P c a 0 S s w g 2 x f + m m m f / V 6 V o U + j g 2 N Q i q K T G M r o 7 l L p n p i r 6 5 / a U q R Q 4 J c R r 3 K C 4 J M 6 O c 9 t k 2 m t T U r n v r m / i b y d S s 3 r M 8 N 8 O 7 v i U N 2 P s 5 z l n Q q j r e g V M 9 O 6 z U T v J R F 7 G D X e z T P I 9 Q Q x 0 N N M l 7 g E c 8 4 d m q W 5 G V W X e f q V Y h 1 2 z j 2 7 I e P g D v G o + s < / l a t e x i t > 0.2 < l a t e x i t s h a 1 _ b a s e 6 4 = \" Q y o m E s q F L 9 c 4 o e D D L b d w 4 O o U 3 p g = \" > A A A C x n i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w F Z I q 6 L L o p s u K 9 g F a J J l O 6 9 A 0 C Z O J U o r g D 7 j V T x P / Q P / C O 2 M K a h G d k O T M u f e c m X t v k I Q i V a 7 7 W r D m 5 h c W l 4 r L p Z X V t f W N 8 u Z W K 4 0 z y X i T x W E s O 4 G f 8 l B E v K m E C n k n k d w f B S F v B 8 N T H W / f c p m K O L p Q 4 4 R 3 R / 4 g E n 3 B f E X U u e t U r 8 s V 1 3 H N s m e B l 4 M K 8 t W I y y + 4 Q g 8 x G D K M w B F B E Q 7 h I 6 X n E h 5 c J M R 1 M S F O E h I m z n G P E m k z y u K U 4 R M 7 p O + A d p c 
5 G 9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "F e e 6 Z G z e i U k F 5 J S h t 7 p I k p T x L W p 9 k m n h l n z f 7 m P T G e + m 5 j + g e 5 1 4 h Y h R t i / 9 J N M / + r 0 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "U o 9 H F s a h B U U 2 I Y X R 3 L X T L T F X 1 z + 0 t V i h w S 4 j T u U V w S Z k Y 5 7 b N t N K m p X f f W N / E 3 k 6 l Z v W d 5 b o Z 3 f U s a s P d z n L O g V X W 8 A 6 d 6 d l i p n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "e S j L m I H u 9 i n e R 6 h h j o a a J L 3 A I 9 4 w r N V t y I r s + 4 + U 6 1 C r t n G t 2 U 9 f A D x e o + t < / l a t e x i t > 0.3 < l a t e x i t s h a 1 _ b a s e 6 4 = \" U l t ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "C l O i 9 R v O N W w e j Q C 5 c V c H G / L o = \" > A A A C x n i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w F Z J W 0 G X R T Z c V 7 Q N q k S S d 1 t C 8 m E y U U g R / w K 1 + m v g H + h f e G a e g F t E J S c 6 c e 8 + Z u f d 6 a R h k w r Z f C 8 b C 4 t L y S n G 1 t L a + s b l V 3 t 5 p Z 0 n O f d b y k z D h X c / N W B j E r C U C E b J u y p k b e S H r e O M z G e / c M p 4 F S X w p J i n r R + 4 o D o a B 7 w q i L m y r d l 2 u 2 J a t l j k P H A 0 q 0 K u Z l F 9 w h Q E S + M g R g S G G I B z C R U Z P D w 5 s p M T 1 M S W O E w p U n O E e J d L m l M U o w y V 2 T N 8 R 7 X q a j W k v P T O l 9 u m U k F 5 O S h M H p E k o j x O W p 5 k q n i t n y f 7 m P V W e 8 m 4 T + n v a K y J W 4 I b Y v 3 S z z P / q Z C 0 C Q 5 y o G g K q K V W M r M 7 X L r n q i r y 5 + a U q Q Q 4 p c R I P K M 4 J + 0 o 5 6 7 O p N J m q X f b W V f E 3 l S l Z u f d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "A = \" > A A A C x n i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w F Z J a 0 G X R T Z c V 7 Q N q k S S d 1 q F 5 k U y U U g R / w K 1 + m v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "P u u 4 K o C 9 u q X Z c r t m W r Z c 4 D R 4 M K 9 G r G 5 R d c Y Y A Y P n K E Y I g g C A d w k d H T g w M b C X F 9 T I l L C X E V Z 7 h H i b Q 5 Z T H K c I k d 0 3 d E u 5 5 m I 9 p L z 0 y p f T o l o D c l p Y k D 0 s S U l x K W p 5 k q n i t n y f 7 m P V W e 8 m 4 T + n v a K y R W 4 I b Y v 3 S z z P / q Z C 0 C Q 5 y o G j j V l C h G V u d r l 1 x 1 R d 7 c / F K V I I e E O I k H F E 8 J + 0 o 5 6 7 O p N J m q X f b W V f E 3 l S l Z u f d 1 b o 5 3 e U s a s P N z n P O g X b W c I 6 t 6 X q v U T / W o i 9 j D P g 5 p n s e o o 4 E m W u Q 9 w i O e 8 G w 0 j M j I j b v P V K O g N b v 4 t 
o y H D / Y 6 j 6 8 = < / l a t e x i t > 0.5 < l a t e x i t s h a 1 _ b a s e 6 4 = \" 7 F E L J v / z d W W 5 u b X v c 6 c N K x s L F g U = \" > A A A C x n i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w F Z K q 6 L L o p s u K 9 g G 1 S J J O 6 9 C 8 S C Z K K Y I / 4 F Y / T f w D / Q v v j F N Q i + i E J G f O v e f M 3 H u 9 J O C Z s O 3 X g j E 3 v 7 C 4 V F w u r a y u r W + U N 7 d a W Z y n P m v 6 c R C n H c / N W M A j 1 h R c B K y T p M w N v Y C 1 v d G Z j L d v W Z r x O L o U 4 4 T 1 Q n c Y 8 Q H 3 X U H U h W 0 d X Z c r t m W r Z c 4 C R 4 M K 9 G r E 5 R d c o Y 8 Y P n K E Y I g g C A d w k d H T h Q M b C X E 9 T I h L C X E V Z 7 h H i b Q 5 Z T H K c I k d 0 X d I u 6 5 m I 9 p L z 0 y p f T o l o D c l p Y k 9 0 s S U l x K W p 5 k q n i t n y f 7 m P V G e 8 m 5 j + n v a K y R W 4 I b Y v 3 T T z P / q Z C 0 C A 5 y o G j j V l C h G V u d r l 1 x 1 R d 7 c / F K V I I e E O I n 7 F E 8 J + 0 o 5 7 b O p N J m q X f b W V f E 3 l S l Z u f d 1 b o 5 3 e U s a s P N z n L O g V b W c A 6 t 6 f l i p n e p R F", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "z R g W w O 8 p f 4 M k q N O j Z c w = \" > A A A C x n i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w F Z I q 6 r L o p s u K 9 g G 1 S J J O 6 9 C 8 S C Z K K Y I / 4 F Y / T f w D / Q v v j F N Q i + i E J G f O v e f M 3 H u 9 J O C Z s O 3 X g j E 3 v 7 C 4 V F w u r a y u r W + U N 7 d a W Z y n P m v 6 c R C n H c / N W M A j 1 h R c B K y T p M w N v Y C 1 v d G Z j L d v W Z r x O L o U 4 4 T 1 Q n c Y 8 Q H 3 X U H U h W 0 d X Z c r t m W r Z c 4 C R 4 M K 9 G r E 5 R d c o Y 8 Y P n K E Y I g g C A d w k d H T h Q M b C X E 9 T I h L C X E V Z 7 h H i b Q 5 Z T H K c I k d 0 X d I u 6 5 m I 9 p L z 0 y p f T o l o D c l p Y k 9 0 s S U l x K W p 5 k q n i t n y f 7 m P V G e 8 m 5 j + n v a K y R W 4 I b Y v 3 T T z P / q Z C 0 C A 5 y o G j j V l C h G V u d r l 1 x 1 R d 7 c / F K V I I e E O I n 7 F E 8 J + 0 o 5 7 b O p N J m q X f b W V f E 3 l S l Z u f d 1 b o 5 3 e U s a s P N z n L O g V b W c A 6 t 6 f l i p n e p R F", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z K q 6 L I o i M s W b C v U I s l 0 W k P z Y j I R S t E f c K v f J v 6 B / o V 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "x i m o R X R C k j P n 3 n N m 7 r 1 + G g a Z d J z X g j U 3 v 7 C 4 V F w u r a y u r W + U N 7 f a W Z I L x l s s C R N x 5 X s Z D 4 O Y t 2 Q g Q 3 6 V C u 5 F f s g 7 / u h M x T t 3 X G R B E l / K c c p 7 k T e M g 0 H A P E l U 8 + i m X H G q j l 7 2 L H A N q M C s R l J + w T X 6 S M C Q I w J H D E k 4 h I e M n i 5 c O E i J 6 2 F C n C A U 6 D j H P U q k z S m L U 4 Z H 7 I i + Q 9 p 1 D R v T X n l m W s 3 o l J B e Q U o b e 6 R J K E 8 Q V q f Z O p 5 r Z 8 X + 5 j 3 R n u p u Y / r 7 x i s i ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", 
"sec_num": null }, { "text": "V u K W 2 L 9 0 0 8 z / 6 l Q t E g O c 6 B o C q i n V j K q O G Z d c d 0 X d 3 P 5 S l S S H l D i F + x Q X h J l W T v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "i c j V H L S s N A F D 2 N r / q u u n Q T L I K r k l S h L o u C u G z B P q A W S a b T G p o X M x O h F P 0 B t / p t 4 h / o X 3 h n T E E t o h O S n D n 3 n j N z 7 / X T M J D K c V 4 L 1 s L i 0 v J K c X V t f W N", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "z a 7 u 0 s 9 u W S S Y Y b 7 E k T E T X 9 y Q P g 5 i 3 V K B C 3 k 0 F 9 y I / 5 B 1 / f K 7 j n T s u Z J D E V 2 q S 8 n 7 k j e J g G D B P E d W s 3 Z T K T s U x y 5 4 H b g 7 K y F c j K b 3 g G g M k Y M g Q g S O G I h z C g 6 S n B x c O U u L 6 m B I n C A U m z n G P N d J m l M U p w y N 2 T N 8 R 7 X o 5 G 9 N e e 0 q j Z n R K S K 8 g p Y 1 D 0 i S U J w j r 0 2 w T z 4 y z Z n / z n h p P f b c J / f 3 c K y J W 4 Z b Y v 3 S z z P / q d C 0 K Q 5 y a G g K q K T W M r o 7 l L p n p i r 6 5 / a U q R Q 4 p c R o P K C 4 I M 6 O c 9 d k 2 G m l q 1 7 3 1 T P z N Z G p W 7 1 m e m + F d 3 5 I G 7 P 4 c 5 z x o V y v u c a X a P C n X z / J R F 7 G P A x z R P G u o 4 x I N t I z 3 I 5 7 w b F 1 Y o S W t 7 D P V K u S a P X x b 1 s M H 0 f u P Q A = = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a 4 X i y 7 W f A + S e t K e c d 3 F p U v h b 8 W d n H 3 q y p K D J s 7 h D u U N 4 d g r P + 6 Z e U 3 u e 3 d 3 y 3 3 + x", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" i R s s 1 I u N 0 p T 7 e + W Z a T W j U 0 J 6 B S l t 7 J E m o T x B W J 1 m 6 3 i u n R X 7 m / d E e 6 q 7 j e n v G 6 + I W I l b Y v / S T T P / q 1 O 1 S A x w r G s I q K Z U M 6 o 6 Z l x y 3 R V 1 c / t L V Z I c U u I U 7 l N c E G Z a O e 2 z r T W Z r l 3 1 1 t P x N 5 2 p W L V n J j f H u 7 o l D d j 9 O c 5 Z 0 K 5 V 3 Y N q r X l Y q Z + a U R e x g 1 3 s 0 z y P U M c F G m h p 7 0 c 8 4 d k 6 t 0 I r s / L P V K t g N N v 4 t q y H D 9 a 7 j 0 I = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "9", "sec_num": null }, { "text": "v V j 1 Z J 5 2 h g W k F u M = \" > A A A C x H i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z I q q L u i I C 5 b s K 1 Q i y T T a Q 3 N i 8 l E K E V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "9", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" 4 A x b", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11", "sec_num": null }, { "text": "P i R v h s / p T f C 7 M r x c X Y 8 t x Q I = \" > A A A C x X i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z I q 6 L L o Q p d V b C v U I s l 0 W o f m R T I p l C L + g F v 9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11", "sec_num": null }, { "text": "N f E P 9 C + 8 M 0 5 B L a I T k p w 5 9 5 4 z c + / 1 k 0 B k 0 n F e C 9 b c / M L i U n G 5 t L K 6 t r 5 R 3 t x q Z X G e M t 5 k c R C n 1 7 6 X 8 U B E v C m F D P h 1 k n I v 9 A P e 9 o e n K t 4 e 8 T Q T c X Q l x w n v h t 4 g E n 3 B P E n U p e v e l i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11", "sec_num": null }, { 
"text": "t O 1 d H L n g W u A R W Y 1 Y j L L 7 h B D z E Y c o T g i C A J B / C Q 0 d O B C w c J c V 1 M i E s J C R 3 n u E e J t D l l c c r w i B 3 S d 0 C 7 j m E j 2 i v P T K s Z n R L Q m 5 L S x h 5 p Y s p L C a v T b B 3 P t b N i f / O", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11", "sec_num": null }, { "text": "e a E 9 1 t z H 9 f e M V E i t x R + x f u m n m f 3 W q F o k + j n U N g m p K N K O q Y 8 Y l 1 1 1 R N 7 e / V C X J I S F O 4 R 7 F U 8 J M K 6 d 9 t r U m 0 7 W r 3 n o 6 / q Y z F a v 2 z O T m e F e 3 p A G 7 P 8 c 5 C 1 q 1 q n t Q r V 0 c V u o n Z t R F 7 G A X + z T P I 9 R x j g a a 5 N 3 H I 5 7 w b J 1 Z o S W t 0 W e q V T C a b X x b 1 s M H X h u P d Q = = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Caption only", "sec_num": null }, { "text": "V n m a 0 T o a H S H C U Q L J t l V 9 4 q + O W Q s = \" > A A A C z 3 i c j V H L T s M w E J y G V y m v A k c u E R U S p y o p B z h W 9 M K x l e h D o h V K U r d E d R P L c U B V B e L K D 3 C F v 0 L 8 A f w F a 5 N K Q I X A U e z x 7 M 7 Y 6 / U F D x P l O K 8 5 a 2 F x a X k l v 1 p Y W 9 / Y 3 C p u 7 7 S S O J U B a w Y x j 2 X H 9 x L G w 4 g 1 V a g 4 6 w j J v L H P W d s f 1 X S 8 f c 1 k E s b R u Z o I 1 h t 7 w y g c h I G n i O r W P K F X O 4 7 4 5 L J Y c s q O G f Y 8 c D N Q Q j b q c f E F X f Q R I 0 C K M R g i K M I c H h L 6 L u D C g S C u h y l x k l B o 4 g y 3 K J A 2 p S x G G R 6 x I 5 q H t L v I 2 I j 2 2 j M x 6 o B O 4 f R L U t o 4 I E 1 M e Z K w P s 0 2 8 d Q 4 a / Y 3 7 6 n x 1 H e b 0 O p n X m N i F a 6 I / U s 3 y / y v T t e i M M C J q S G k m o R h d H V B 5 p K a V 9 E 3 t 7 9 U p c h B E K d x n + K S c G C U s 3 e 2 j S Y x t e u 3 9 U z 8 z W R q V u + D L D f F u 7 4 l N d j 9 2 c 5 5 0 K q U 3 a N y p V E p V U + z V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Caption only", "sec_num": null }, { "text": "u e x h 3 0 c U j + P U c U Z 6 m i S t 8 A j n v B s N a w b 6 8 6 6 / 0 y 1 c p l m F 9 + G 9 f A B K h u U J Q = = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Caption only", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" N E 3 y B g 3 u 6 l h w 5 S u V X 7 y X 4 m H J e U M = \" > A ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Image only", "sec_num": null }, { "text": "A A C z X i c j V H L T s M w E J y G V y m v A k c u E R U S p y o p B z h W c I E T R a I P U S q U u G 6 J 6 i Z R 4 i B V B a 7 8 A F f 4 L c Q f w F + w N q 4 E V A g c J Z m d 3 R l 7 v X 4 s g l Q 6 z m v O m p m d m 1 / I L x a W l l d W 1 4 r r G 4 0 0 y h L G 6 y w S U d L y v Z S L I O R 1 G U j B W 3 H C v a E v e N M f H K l 8 8 4 Y n a R C F 5 3 I U 8 8 7 Q 6 4 d B L 2 C e J O r i h E J u R 6 E Y X R V L T t n R y 5 4 G r g E l m F W L i i + 4 R B c R G D I M w R F C E h b w k N L T h g s H M X E d j I l L C A U 6 z 3 G H A m k z q u J U 4 R E 7 o G + f o r Z h Q 4 q V Z 6 r V j H Y R 9 C a k t L F D m o j q E s J q N 1 v n M + 2 s 2 N + 8 x 9 p T n W 1 E f 9 9 4 D Y m V u C b 2 L 9 2 k 8 r 8 6 1 Y t E D w e 6 h 4 B 6 i j W j u m P G J d O 3 o k 5 u f + l K k k N M n M J d y i e E m V Z O 7 t n W m l T 3 r u 7 W 0 / k 3 X a l Y F T N T m + F d n Z I G 7 P 4 c 5 z R o V M r u X r l y V i l V D 8 2 o 8 9 j C N n Z p n v u o 4 h g 1 1 M k 7 x C O e 8 G y d W p l 
1 a 9 1 / l l o 5 o 9 n E t 2 U 9 f A C n U Z M m < / l a t e x i t > Image + Caption", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Image only", "sec_num": null }, { "text": "j + s E m x B M W Y o 4 = \" > A A A C 1 H i c j V H L S s N A F D 2 N r 1 o f r b p 0 E y y C I J S k L n R Z 7 E Z 3 F e w D a p F J O q 2 D e Z F M h F J d i V t / w K 1 + k / g H + h f e m a a g F t E J S c 6 c e 8 6 d u f c 6 k S c S a V l v O W N u f m F x K b 9 c W F l d W y + W N j Z b S Z j G L m + 6 o R f G H Y c l 3 B M B b 0 o h P d 6 J Y s 5 8 x + N t 5 7 q u 4 u 0 b H i c i D M 7 l K O I 9 n w 0 D M R A u k 0 R d l o q n R H B z 3 6 y z a M K U r Y q l l z k L 7 A y U k a 1 G W H r F B f o I 4 S K F D 4 4 A k r A H h o S e L m x Y i I j r Y U x c T E j o O M c d C u R N S c V J w Y i 9 p u + Q d t 2 M D W i v c i b a 7 d I p H r 0 x O U 3 s k i c k X U x Y n W b q e K o z K / a 3 3 G O d U 9 1 t R H 8 n y + U T K 3 F F 7 F + + q f K / P l W L x A B H u g Z B N U W a U d W 5 W Z Z U d 0 X d 3 P x S l a Q M E X E K 9 y k e E 3 a 1 c 9 p n U 3 s S X b v q L d P x d 6 1 U r N q 7 m T b F h 7 o l D d j + O c 5 Z 0 K p W 7 I N K 9 a x a r h 1 n o 8 5 j G z v Y o 3 k e o o Y T N N D U M 3 / C M 1 6 M l n F r 3 B s P E 6 m R y z x b + L a M x 0 8 3 S Z T g < / l a t e x i t > 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Image only", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" w n a s 4 x / P k q x o B R D F H z 5 Y 3 + 7 j a 1 Q = \" > A A", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Image only", "sec_num": null }, { "text": "A C x H i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z I q 6 L I o i M s W 7 A N q k W Q 6 r U M n D 5 K J U I r + g F v 9 N v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Image only", "sec_num": null }, { "text": "E P 9 C + 8 M 6 a g F t E J S c 6 c e 8 + Z u f f 6 s R S p c p z X g r W w u L S 8 U l w t r a 1 v b G 6 V t 3 f a a Z Q l j L d Y J K O k 6 3 s p l y L k L S W U 5 N 0 4 4 V 7 g S 9 7 x x + c 6 3 r n j S S q i 8 E p N Y t 4 P v F E o h o J 5 i q h m 7 a Z c c a q O W f Y 8 c H N Q Q b 4 a U f k F 1 x g g A k O G A B w h F G E J D y k 9 P b h w E B P X x 5 S 4 h J A w c Y 5 7 l E i b U R a n D I / Y M X 1 H t O v l b E h 7 7 Z k a N a N T J L 0 J K W 0 c k C a i v I S w P s 0 2 8 c w 4 a / Y 3 7 6 n x 1 H e b 0 N / P v Q J i F W 6 J / U s 3 y / y v T t e i M M S p q U F Q T b F h d H U s d 8 l M V / T N 7 S 9 V K X K I i d N 4 Q P G E M D P K W Z 9 t o 0 l N 7 b q 3 n o m / m U z N 6 j 3 L c z O 8 6 1 v S g N 2 f 4 5 w H 7 V r V P a r W m s e V + l k + 6 i L 2 s I 9 D m u c J 6 r h E A y 3 j / Y g n P F s X l r R S K / t M t Q q 5 Z h f f l v X w A c Y b j z s = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Image only", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" R A J k k 7 K M 2 w Z e e / r O 0 m 7 F T X I F d t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "U = \" > A A A C x H i c j V H L S s N A F D 2 N r / q u u n Q T L I K r k t S C L o u C u G z B P q A W S a b T G p o X M x O h F P 0 B t / p t 4 h / o X 3 h n T E E t o h O S n D n 3 n j N z 7 / X T M J D K c V 4 L 1 s L i 0 v J K c X V t f W N", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "z a 7 u 0 s 9 u W S S Y Y b 7 E k T E T X 9 y Q P g 5 i 3 V K B C 3 k 0 F 9 y I / 5 B 1 / f K 7 j n T s u Z J D E V 2 q S 8 n 7 k j e J g G D B P E 
d W s 3 Z T K T s U x y 5 4 H b g 7 K y F c j K b 3 g G g M k Y M g Q g S O G I h z C g 6 S n B x c O U u L 6 m B I n C A U m z n G P N d J m l M U p w y N 2 T N 8 R 7 X o 5 G 9 N e e 0 q j Z n R K S K 8 g p Y 1 D 0 i S U J w j r 0 2 w T z 4 y z Z n / z n h p P f b c J / f 3 c K y J W 4 Z b Y v 3 S z z P / q d C 0 K Q 5 y a G g K q K T W M r o 7 l L p n p i r 6 5 / a U q R Q 4 p c R o P K C 4 I M 6 O c 9 d k 2 G m l q 1 7 3 1 T P z N Z G p W 7 1 m e m + F d 3 5 I G 7 P 4 c 5 z x o V y v u c a X a r J X r Z / m o i 9 j H A Y 5 o n i e o 4 x I N t I z 3 I 5 7 w b F 1 Y o S W t 7 D P V K u S a P X x b 1 s M H y t u P P Q = = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" / B C l j I m 3 l a 1 0 z r T I l Q 0 B 3 m x x g g E = \" > A A", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "A C x H i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z I q 6 r I o i M s W b C v U I s l 0 W k P z Y j I R S t E f c K v f J v 6 B / o V 3 x i m o R X R C k j P n 3 n N m 7 r 1 + G g a Z d J z X g j U 3 v 7 C 4 V F w u r a y u r W + U N 7 f a W Z I L x l s s C R N x 5 X s Z D 4 O Y t 2 Q g Q 3 6 V C u 5 F f s g 7 / u h M x T t 3 X G R B E l / K c c p 7 k T e M g 0 H A P E l U 8 + i m X H G q j l 7 2 L H A N q M C s R l J + w T X 6 S M C Q I w J H D E k 4 h I e M n i 5 c O E i J 6 2 F C n C A U 6 D j H P U q k z S m L U 4 Z H 7 I i + Q 9 p 1 D R v T X n l m W s 3 o l J B e Q U o b e 6 R J K E 8 Q V q f Z O p 5 r Z 8 X + 5 j 3 R n u p u Y / r 7 x i s i V u K W 2 L 9 0 0 8 z / 6 l Q t E g O c 6 B o C q i n V j K q O G Z d c d 0 X d 3 P 5 S l S S H l D i F + x Q X h J l W T v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "t s a 0 2 m a 1 e 9 9 X T 8 T W c q V u 2 Z y c 3 x r m 5 J A 3 Z / j n M W t G t V 9 6 B a a x 5 W 6 q d m 1 E X s Y B f 7 N M 9 j 1 H G B B l r a + x F P e L b O r d D K r P w z 1 S o Y z T a + L e v h A 8 + b j z 8 = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" G D w t 6 0 e m W 9 L X l N e y 9 O 6 p y N m q 2 0 4 = \" > A A", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8", "sec_num": null }, { "text": "A C x H i c j V H L S s N A F D 2 N r / q u u n Q T L I K r k l T B L o u C u G z B P q A W S d J p H Z o X m Y l Q i v 6 A W / 0 2 8 Q / 0 L 7 w z T k E t o h O S n D n 3 n j N z 7 / X T k A v p O K 8 F a 2 F x a X m l u L q 2 v r G 5 t V 3 a 2 W 2 L J M 8 C 1 g q S M M m 6 v i d Y y G P W k l y G r J t m z I v 8 k H X 8 8 b m K d + 5 Y J n g S X 8 l J y v q R N 4 r 5 k A e e J K p Z u y m V n Y q j l z 0 P X A P K M K u R l F 5 w j Q E S B M g R g S G G J B z C g 6 C n B x c O U u L 6 m B K X E e I 6 z n C P N d L m l M U o w y N 2 T N 8 R 7 X q G j W m v P I V W B 3 R K S G 9 G S h u H p E k o L y O s T r N 1 P N f O i v 3 N e 6 o 9 1 d 0 m 9 P e N V 0 S s x C 2 x f + l m m f / V q V o k h q j p G j j V l G p G V R c Y l 1 x 3 R d 3 c / l K V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8", "sec_num": null }, { "text": "J I e U O I U H F M 8 I B 1 o 5 6 7 O t N U L X r n r r 6 f i b z l S s 2 g c m N 8 e 7 u i U N 2 P 0 5 z n n Q r l b c 4 0 q 1 e V K u n 5 l R F 7 G P A x z R P E 9 R x y U a a G n v R z z h 2 b q w Q k t Y + W e q V T C a P X x b 1 s M H 1 F u P Q Q = = < / l a t e x i t >", "cite_spans": [], 
"ref_spans": [], "eq_spans": [], "section": "8", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" S G 4 y I 5 C + C i J d 6 i J 1 k t + L K V z P w 2 s = \" > A A A C x X i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z I q 6 L L o Q p d V b C v U I s l 0 W o f m R T I p l C L + g F v 9 N f E P 9 C + 8 M 0 5 B L a I T k p w 5 9 5 4 z c + / 1 k 0 B k 0 n F e C 9 b c / M L i U n G 5 t L K 6 t r 5 R 3 t x q Z X G e M t 5 k c R C n 1 7 6 X 8 U B E v C m F D P h 1 k n I v 9 A P e 9 o e n K t 4 e 8 T Q T c X Q l x w n v h t 4 g E n 3 B P E n U p e v c l i t O 1 d H L n g W u A R W Y 1 Y j L L 7 h B D z E Y c o T g i C A J B / C Q 0 d O B C w c J c V 1 M i E s J C R 3 n u E e J t D l l c c r w i B 3 S d 0 C 7 j m E j 2 i v P T K s Z n R L Q m 5 L S x h 5 p Y s p L C a v T b B 3 P t b N i f / O e a E 9 1 t z H 9 f e M V E i t x R + x f u m n m f 3 W q F o k + j n U N g m p K N K O q Y 8 Y l 1 1 1 R N 7 e / V C X J I S F O 4 R 7 F U 8 J M K 6 d 9 t r U m 0 7 W r 3 n o 6 / q Y z F a v 2 z O T m e F e 3 p A G 7 P 8 c 5 C 1 q 1 q n t Q r V 0 c V u o n Z t R F 7 G A X + z T P I 9 R x j g a a 5 N 3 H I 5 7 w b J 1 Z o S W t 0 W e q V T C a b X x b 1 s M H W 7 u P d A = = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "10", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" R S 3 E c Q 5 W 1 G G h 3 j o j y A m v n J c 7 7 U k = \" > A A A C x X i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w V Z I q 6 L L o Q p d V b C v U I s l 0 W o f m x W R S K E X 8 A b f 6 a + I f 6 F 9 4 Z 0 x B L a I T k p w 5 9 5 4 z c + / 1 k 0 C k y n F e C 9 b c / M L i U n G 5 t L K 6 t r 5 R 3 t x q p X E m G W + y O I j l t e + l P B A R b y q h A n 6 d S O 6 F f s D b / v B U x 9 s j L l M R R 1 d q n P B u 6 A 0 i 0 R f M U 0 R d u r X b c s W p O m b Z s 8 D N Q Q X 5 a s T l F 9 y g h x g M G U J w R F C E A 3 h I 6 e n A h Y O E u C 4 m x E l C w s Q 5 7 l E i b U Z Z n D I 8 Y o f 0 H d C u k 7 M R 7 b V n a t S M T g n o l a S 0 s U e a m P I k Y X 2 a b e K Z c d b s b 9 4 T 4 6 n v N q a / n 3 u F x C r c E f u X b p r 5 X 5 2 u R a G P Y 1 O D o J o S w + j q W O 6 S m a 7 o m 9 t f q l L k k B C n c Y / i k j A z y m m f b a N J T e 2 6 t 5 6 J v 5 l M z e o 9 y 3 M z v O t b 0 o D d n + O c B a 1 a 1 T 2 o 1 i 4 O K / W T f N R F 7 G A X + z T P I 9 R x j g a a 5 N 3 H I 5 7 w b J 1 Z o a W s 0 W e q V c g 1 2 / i 2 r I c P Y H u P d g = = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "12", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" c m U V X X o 2 b j H p O P r u C 8 w 8 n 4 H F Z g c = \" > A A A C y n i c j V H L S s N A F D 2 N r / q u u n Q T L I K r k l R B l 0 U 3 L l x U s A + o p S T p t A 7 N i 5 m J U E p 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Xsim", "sec_num": null }, { "text": "/ o B b / T D x D / Q v v D O m o B b R C U n O n H v O n b n 3 + m n I p X K c 1 4 K 1 s L i 0 v F J c X V v f 2 N z a L u 3 s N m W S i Y A 1 g i R M R N v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Xsim", "sec_num": null }, { "text": "3 J A t 5 z B q K q 5 C 1 U 8 G 8 y A 9 Z y x 9 d 6 H j r n g n J k / h G j V P W j b x h z A c 8 8 B R R r X Z v I n k 0 7 Z X K T s U x y 5 4 H b g 7 K y F c 9 K b 3 g F n 0 k C J A h A k M M R T i E B 0 l P B y 4 c p M R 1 M S F O E O I m z j D F G n k z U j F S e M S O 6 D u k X S d n Y 9 r r n N K 4 A z o l p F e Q 0 8 Y h e R L S C c L 6 N N v E M 5 N Z s 7 / l n p i c + m 5 j 
+ v t 5 r o h Y h T t i / / L N l P / 1 6 V o U B j g z N X C q K T W M r i 7 I s 2 S m K / r m 9 p e q F G V I i d O 4 T 3 F B O D D O W Z 9 t 4 5 G m d t 1 b z 8 T f j F K z e h / k 2 g z v + p Y 0 Y P f n O O d B s 1 p x j y v V 6 5 N y 7 T w f d R H 7 O M A R z f M U N V y i j o a p 8 h F P e L a u L G G N r c m n 1 C r k n j 1 8 W 9 b D B 0 f g k j 0 = < / l a t e x i t > layer < l a t e x i t s h a 1 _ b a s e 6 4 = \" J n 5 L S 3 V K 4 3 L 2 D 0 w d g 6 h s c v I a Y L k = \" > A A A C y H i c j V H L S s N A F D 3 G V 6 2 v q k s 3 w S K 4 K k l d 6 L L o R l x V M G 2 h F k m m 0 z o 0 T c J k o o T i x h 9 w q 1 8 m / o H + h X f G F N Q i O i H J m X P v O T P 3 3 i A J R a o c 5 3 X O m l 9 Y X F o u r Z R X 1 9 Y 3 N i t b 2 6 0 0 z i T j H o v D W H Y C P + W h i L i n h A p 5 J 5 H c H w c h b w e j U x 1 v 3 3 K Z i j i 6 V H n C e 2 N / G I m B Y L 4 i y g v 9 n M v r S t W p O W b Z s 8 A t Q B X F a s a V F 1 y h j x g M G c b g i K A I h / C R 0 t O F C w c J c T 1 M i J O E h I l z 3 K N M 2 o y y O G X 4 x I 7 o O 6 R d t 2 A j 2 m v P 1 K g Z n R L S K 0 l p Y 5 8 0 M e V J w v o 0 2 8 Q z 4 6 z Z 3 7 w n x l P f L a d / U H i N i V W 4 I f Y v 3 T T z v z p d i 8 I A x 6 Y G Q T U l h t H V s c I l M 1 3 R N 7 e / V K X I I S F O 4 z 7 F J W F m l N M + 2 0 a T m t p 1 b 3 0 T f z O Z m t V 7 V u R m e N e 3 p A G 7 P 8 c 5 C 1 r 1 m n t Y q 1 / U q 4 2 T Y t Q l 7 G I P B z T P I z R w h i Y 8 8 h Z 4 x B O e r X M r s e 6 s / D P V m i s 0 O / i 2 r I c P 7 x y R T A = = < / l a t e x i t >", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Xsim", "sec_num": null }, { "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" 6 b o + 3 I s Q G r K L Z 1 K P S J X Q U 4 j a 5 J 0 = \" > A A A C 1 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Random Transformer", "sec_num": null }, { "text": "i c j V H L S s N A F D 2 N r 1 p f U Z d u g k V w V Z K 6 0 G X R j c s q V i t V y i S d 1 m C S C Z O J K C L u x K 0 / 4 F b / S P w D / Q v v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Random Transformer", "sec_num": null }, { "text": "j C m o R X R C k j P n 3 n N m 7 r 1 + G o W Z c t 3 X k j U 2 P j E 5 V Z 6 u z M z O z S / Y i 0 u H m c h l w F u B i I R s + y z j U Z j w l g p V x N u p 5 C z 2 I 3 7 k n + / o + N E F l 1 k o k g N 1 l f L T m A 2 S s B 8 G T B H V t Z f 2 W d I T s X M g W Z L 1 h Y y 5 7 N p V t + a a 5 Y w C r w B V F K s p 7 B e c o A e B A D l i c C R Q h C M w Z P R 0 4 M F F S t w p r o m T h E I T 5 7 h B h b Q 5 Z X H K Y M S e 0 3 d A u 0 7 B J r T X n p l R B 3 R K R K 8 k p Y M 1 0 g j K k 4 T 1 a Y 6 J 5 8 Z Z s 7 9 5 X x t P f b c r + v u F V 0 y s w h m x f + m G m f / V 6 V o U + t g y N Y R U U 2 o Y X V 1 Q u O S m K / r m z p e q F D m k x G n c o 7 g k H B j l s M + O 0 W S m d t 1 b Z u J v J l O z e h 8 U u T n e 9 S 1 p w N 7 P c Y 6 C w 3 r N 2 6 j V 9 + r V x n Y x 6 j J W s I p 1 m u c m G t h F E y 3 y v s Q j n v B s H V u 3 1 p 1 1 / 5 l q l Q r N M r 4 t 6 + E D b s 2 W 9 g = = < / l a t e x i t > 0.7 < l a t e x i t s h a 1 _ b a s e 6 4 = \" c x n J p 1 2 J k 0 h X 8 7 m T C 4 c V H a V L W n g = \" > A A A C x n i c j V H L S s N A F D 2 N r 1 p f V Z d u g k V w F Z I q 1 G X R T Z c V 7 Q N q k S S d 1 q F 5 k U y U U g R / w K 1 + m v g H + h f e G a e g F t E J S c 6 c e 8 + Z u f d 6 S c A z Y d u v B W N h c W l 5 p b h a W l v f 2 N w q b + + 0 s z h P f d b y 4 y B O u 5 6 b s Y B H r C W 4 C F g 3 S Z k b e g H r e O M z G e / c s j T j c X Q p J g n r h + 4 
o 4 k P u u 4 K o C 9 u q X Z c r t m W r Z c 4 D R 4 M K 9 G r G 5 R d c Y Y A Y P n K E Y I g g C A d w k d H T g w M b C X F 9 T I l L C X E V Z 7 h H i b Q 5 Z T H K c I k d 0 3 d E u 5 5 m I 9 p L z 0 y p f T o l o D c l p Y k D 0 s S U l x K W p 5 k q n i t n y f 7 m P V W e 8 m 4 T + n v a K y R W 4 I b Y v 3 S z z P / q Z C 0 C Q 5 y o G j j V l C h G V u d r l 1 x 1 R d 7 c / F K V I I e E O I k H F E 8 J + 0 o 5 6 7 O p N J m q X f b W V f E 3 l S l Z u f d 1 b o 5 3 e U s a s P N z n P O g X b W c I 6 t 6 f l y p n + p R F 7 G H f R z S P G u o o 4 E m W u Q 9 w i O e 8 G w 0 j M j I j b v P V K O g N b v 4 t o y H D / 1 a j 7 I = < / l a t e x i t > Figure 3 : Cross-modal similarity X sim between images in V QG COCO and corresponding captions at each BERT encoding layer. Captions and images are embedded here using the [CLS] special token. Figure 3 shows the cross-modal similarity X sim for different model scenarios, computed at each BERT-base layer from 1 to 12. We define the crossmodal similarity X sim as the cosine similarity between the vector representations of both modalities. These vectors are the two continuous space representations from a model when given as input either i) an image, or ii) its corresponding caption. We represent these captions and images vectors with the special BERT token [CLS], following previous works (Reif et al., 2019) where [CLS] is used to represent the entire sequence.", "cite_spans": [ { "start": 2776, "end": 2795, "text": "(Reif et al., 2019)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 2082, "end": 2090, "text": "Figure 3", "ref_id": null }, { "start": 2275, "end": 2283, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Random Transformer", "sec_num": null }, { "text": "Reported values are averaged over all examples of V QG COCO test set. In addition to the setups described in Section 4 (Caption-only, Image-only and Image + Caption), we also report X sim for Random Transformer, a BERT architecture with random weights -as expected, its X sim is close to zero. W is set at random for models where visual data has not been used (Random Transformer, Caption only).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Random Transformer", "sec_num": null }, { "text": "All the other models are based on BERT. As suggested by Tenney et al. (2019) , the first layers in BERT tend to encode lower-level language information. This might explain why the models show similar X sim scores up to the 9th layer, and diverge afterwards: the weights for those layers remain very similar between our fine-tuned models.", "cite_spans": [ { "start": 56, "end": 76, "text": "Tenney et al. (2019)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Random Transformer", "sec_num": null }, { "text": "For the last layer (l = 12), we observe that Caption only < Image only < Image + Caption. The Caption only model has never seen images during training, and therefore is not able to encode semantic information given only images as input. Still, its reported X sim > 0 can be attributed to the fact that, when fine-tuned on VQG during Step 1, BERT-gen encodes task-specific information in the [CLS] token embedding (e.g. a question ends with a \"?\" and often begins with \"What/Where/Who\"). Image only > Caption only comes from the learning of the cross-modal matrix W . However, since BERT is not fine-tuned, the model learns a contortion allowing it to align text and vision. 
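To make the measurement concrete, here is a minimal sketch (not the authors' code) of how Xsim can be computed once the per-layer [CLS] hidden states have been collected for both inputs, i.e. the caption fed as tokens and the image regions fed through the learned linear projection W. The tensor shapes, the 2048-to-768 dimensions assumed for W, and all names below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Cross-modal projection: a single linear map from region-feature space into
# BERT's token-embedding space. The 2048 -> 768 sizes are our assumption
# (typical Faster R-CNN features and BERT-base), not taken from the paper.
W = torch.nn.Linear(2048, 768, bias=False)

def project_regions(region_feats: torch.Tensor) -> torch.Tensor:
    """Map (n_regions, 2048) image-region features to (n_regions, 768) inputs."""
    return W(region_feats)

def cross_modal_similarity(img_cls: torch.Tensor, cap_cls: torch.Tensor) -> torch.Tensor:
    """Xsim per layer.

    img_cls, cap_cls: (n_examples, n_layers, hidden) [CLS] states obtained when
    the encoder is fed the image regions alone vs. the caption alone.
    Returns a (n_layers,) tensor: cosine similarity averaged over examples.
    """
    sims = F.cosine_similarity(img_cls, cap_cls, dim=-1)  # (n_examples, n_layers)
    return sims.mean(dim=0)

if __name__ == "__main__":
    # Stand-in random tensors; in practice the [CLS] states come from the
    # fine-tuned models with all hidden layers exposed.
    print(project_regions(torch.randn(36, 2048)).shape)      # (36, 768)
    print(cross_modal_similarity(torch.randn(4, 12, 768),
                                 torch.randn(4, 12, 768)))   # 12 values, one per layer
```

Averaging the per-example cosine similarities layer by layer yields one curve of Figure 3 for a given setup.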
, { "text": "We investigated whether the abstractions encoded in a pre-trained BERT model can generalize beyond text. We proposed BERT-gen, a novel methodology that allows text to be generated directly from out-of-the-box pre-trained encoders, in either mono- or multi-modal setups. Moreover, we applied BERT-gen to Visual Question Generation, obtaining state-of-the-art results on two established datasets. We showed that a simple linear projection is sufficient to effectively align visual and textual representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Perspectives", "sec_num": "7" }, { "text": "https://github.com/peteanderson80/bottom-up-attention", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the smaller architecture released, BERT-base (12 layers), pre-trained on English cased text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Publicly available at https://www.microsoft.com/en-us/download/details.aspx?id=53670", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Publicly available at https://visualqa.org/vqa_v1_download.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Publicly available at https://github.com/karpathy/neuraltalk2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/xinyadu/nqg/tree/master/qgevalcap", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Pytext: A seamless path from nlp research to production", "authors": [ { "first": "Ahmed", "middle": [], "last": "Aly", "suffix": "" }, { "first": "Kushal", "middle": [], "last": "Lakhotia", "suffix": "" }, { "first": "Shicong", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Mrinal", "middle": [], "last": "Mohit", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Dewan", "suffix": "" }, { "first": "Stef", "middle": [], "last": "Nelson-Lindall", "suffix": "" }, { "first": "Rushin", "middle": [], "last": "Shah", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1812.08729" ] }, "num": null, "urls": [], "raw_text": "Ahmed Aly, Kushal Lakhotia, Shicong Zhao, Mrinal Mohit, Barlas Oguz, Abhinav Arora, Sonal Gupta, Christopher Dewan, Stef Nelson-Lindall, and Rushin Shah. 2018. Pytext: A seamless path from nlp research to production.
arXiv preprint arXiv:1812.08729.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bottom-up and top-down attention for image captioning and visual question answering", "authors": [ { "first": "Peter", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Buehler", "suffix": "" }, { "first": "Damien", "middle": [], "last": "Teney", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Gould", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "6077--6086", "other_ids": { "DOI": [ "10.1109/CVPR.2018.00636" ] }, "num": null, "urls": [], "raw_text": "Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In 2018 IEEE Conference on Computer Vision and Pat- tern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 6077-6086.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "VQA: Visual Question Answering", "authors": [ { "first": "Stanislaw", "middle": [], "last": "Antol", "suffix": "" }, { "first": "Aishwarya", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "C", "middle": [ "Lawrence" ], "last": "Zitnick", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2015, "venue": "International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question An- swering. In International Conference on Computer Vision (ICCV).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "On the cross-lingual transferability of monolingual representations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2019. On the cross-lingual transferability of mono- lingual representations. CoRR, abs/1910.11856.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. 
In Proceedings of the acl workshop on intrinsic and extrinsic evalu- ation measures for machine translation and/or sum- marization, pages 65-72.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Multimodal distributional semantics", "authors": [ { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Nam-Khanh", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2014, "venue": "J. Artif. Intell. Res", "volume": "49", "issue": "", "pages": "1--47", "other_ids": { "DOI": [ "10.1613/jair.4135" ] }, "num": null, "urls": [], "raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. J. Artif. Intell. Res., 49:1-47.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "What does bert look at? an analysis of bert's attention", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.04341" ] }, "num": null, "urls": [], "raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does bert look at? an analysis of bert's attention. arXiv preprint arXiv:1906.04341.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Human attention in visual question answering: Do humans and deep networks look at the same regions?", "authors": [ { "first": "Abhishek", "middle": [], "last": "Das", "suffix": "" }, { "first": "Harsh", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "Computer Vision and Image Understanding", "volume": "163", "issue": "", "pages": "90--100", "other_ids": { "DOI": [ "10.1016/j.cviu.2017.10.001" ] }, "num": null, "urls": [], "raw_text": "Abhishek Das, Harsh Agrawal, Larry Zitnick, Devi Parikh, and Dhruv Batra. 2017. Human attention in visual question answering: Do humans and deep net- works look at the same regions? Computer Vision and Image Understanding, 163:90-100.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Unified language model pre-training for natural language understanding and generation", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wenhui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hsiao-Wuen", "middle": [], "last": "Hon", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "13042--13054", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xi- aodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understand- ing and generation. In Advances in Neural Informa- tion Processing Systems, pages 13042-13054.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning to ask: Neural question generation for reading comprehension", "authors": [ { "first": "Xinya", "middle": [], "last": "Du", "suffix": "" }, { "first": "Junru", "middle": [], "last": "Shao", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1342--1352", "other_ids": { "DOI": [ "10.18653/v1/P17-1123" ] }, "num": null, "urls": [], "raw_text": "Xinya Du, Junru Shao, and Claire Cardie. 2017. Learn- ing to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1342-1352.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Are you talking to a machine? dataset and methods for multilingual image question", "authors": [ { "first": "Haoyuan", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Junhua", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2296--2304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, and Wei Xu. 2015. Are you talking to a machine? dataset and methods for multilingual im- age question. 
In Advances in Neural Information Processing Systems 28: Annual Conference on Neu- ral Information Processing Systems 2015, Decem- ber 7-12, 2015, Montreal, Quebec, Canada, pages 2296-2304.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Creativity: Generating diverse questions using variational autoencoders", "authors": [ { "first": "Unnat", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Ziyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Alexander", "middle": [ "G" ], "last": "Schwing", "suffix": "" } ], "year": 2017, "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "5415--5424", "other_ids": { "DOI": [ "10.1109/CVPR.2017.575" ] }, "num": null, "urls": [], "raw_text": "Unnat Jain, Ziyu Zhang, and Alexander G. Schwing. 2017. Creativity: Generating diverse questions us- ing variational autoencoders. In 2017 IEEE Confer- ence on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 5415-5424.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Deep visualsemantic alignments for generating image descriptions", "authors": [ { "first": "Andrej", "middle": [], "last": "Karpathy", "suffix": "" }, { "first": "Fei-Fei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2015, "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "3128--3137", "other_ids": { "DOI": [ "10.1109/CVPR.2015.7298932" ] }, "num": null, "urls": [], "raw_text": "Andrej Karpathy and Fei-Fei Li. 2015. Deep visual- semantic alignments for generating image descrip- tions. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 3128-3137.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Supervised multimodal bitransformers for classifying images and text", "authors": [ { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Suvrat", "middle": [], "last": "Bhooshan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, and Davide Testuggine. 2019. Supervised multimodal bitransformers for classifying images and text. 
CoRR, abs/1909.02950.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", "authors": [ { "first": "Ranjay", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Yuke", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Groth", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Hata", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Kravitz", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yannis", "middle": [], "last": "Kalantidis", "suffix": "" }, { "first": "Li-Jia", "middle": [], "last": "Li", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Shamma", "suffix": "" }, { "first": "Michael", "middle": [ "S" ], "last": "Bernstein", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" } ], "year": 2017, "venue": "International Journal of Computer Vision", "volume": "123", "issue": "1", "pages": "32--73", "other_ids": { "DOI": [ "10.1007/s11263-016-0981-7" ] }, "num": null, "urls": [], "raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2017. Vi- sual genome: Connecting language and vision us- ing crowdsourced dense image annotations. Interna- tional Journal of Computer Vision, 123(1):32-73.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Visualbert: A simple and performant baseline for vision and language", "authors": [ { "first": "Liunian Harold", "middle": [], "last": "Li", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Da", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Cho-Jui", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and lan- guage. CoRR, abs/1908.03557.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Visual question generation as dual task of visual question answering", "authors": [ { "first": "Yikang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Bolei", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Wanli", "middle": [], "last": "Ouyang", "suffix": "" }, { "first": "Xiaogang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "6116--6124", "other_ids": { "DOI": [ "10.1109/CVPR.2018.00640" ] }, "num": null, "urls": [], "raw_text": "Yikang Li, Nan Duan, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, and Ming Zhou. 2018. Vi- sual question generation as dual task of visual ques- tion answering. 
In 2018 IEEE Conference on Com- puter Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 6116-6124.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Microsoft COCO: common objects in context", "authors": [ { "first": "Tsung-Yi", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Maire", "suffix": "" }, { "first": "Serge", "middle": [ "J" ], "last": "Belongie", "suffix": "" }, { "first": "James", "middle": [], "last": "Hays", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Perona", "suffix": "" }, { "first": "Deva", "middle": [], "last": "Ramanan", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Doll\u00e1r", "suffix": "" }, { "first": "C", "middle": [ "Lawrence" ], "last": "Zitnick", "suffix": "" } ], "year": 2014, "venue": "Computer Vision -ECCV 2014 -13th European Conference", "volume": "", "issue": "", "pages": "740--755", "other_ids": { "DOI": [ "10.1007/978-3-319-10602-1_48" ] }, "num": null, "urls": [], "raw_text": "Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C. Lawrence Zitnick. 2014. Microsoft COCO: common objects in context. In Computer Vision -ECCV 2014 -13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, pages 740-755.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Text summarization with pretrained encoders", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3721--3731", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Liu and Mirella Lapata. 2019. Text summariza- tion with pretrained encoders. 
In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3721-3731.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "authors": [ { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visi- olinguistic representations for vision-and-language tasks. CoRR, abs/1908.02265.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Seeing through the human reporting bias: Visual classifiers from noisy human-centric labels", "authors": [ { "first": "Ishan", "middle": [], "last": "Misra", "suffix": "" }, { "first": "C", "middle": [ "Lawrence" ], "last": "Zitnick", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Ross", "middle": [ "B" ], "last": "Girshick", "suffix": "" } ], "year": 2016, "venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "2930--2939", "other_ids": { "DOI": [ "10.1109/CVPR.2016.320" ] }, "num": null, "urls": [], "raw_text": "Ishan Misra, C. Lawrence Zitnick, Margaret Mitchell, and Ross B. Girshick. 2016. Seeing through the human reporting bias: Visual classifiers from noisy human-centric labels. 
In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 2930-2939.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Generating natural questions about an image", "authors": [ { "first": "Nasrin", "middle": [], "last": "Mostafazadeh", "suffix": "" }, { "first": "Ishan", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Mar- garet Mitchell, Xiaodong He, and Lucy Vander- wende. 2016. Generating natural questions about an image. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Vol- ume 1: Long Papers.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Deep bayesian network for visual question generation", "authors": [ { "first": "N", "middle": [], "last": "Badri", "suffix": "" }, { "first": "Vinod", "middle": [ "K" ], "last": "Patro", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Kurmi", "suffix": "" }, { "first": "Vinay", "middle": [ "P" ], "last": "Kumar", "suffix": "" }, { "first": "", "middle": [], "last": "Namboodiri", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Badri N. Patro, Vinod K. Kurmi, Sandeep Kumar, and Vinay P. Namboodiri. 2020. Deep bayesian network for visual question generation.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Deep exemplar networks for VQA", "authors": [ { "first": "N", "middle": [], "last": "Badri", "suffix": "" }, { "first": "", "middle": [], "last": "Patro", "suffix": "" }, { "first": "P", "middle": [], "last": "Vinay", "suffix": "" }, { "first": ";", "middle": [], "last": "Namboodiri", "suffix": "" }, { "first": "Vqg", "middle": [], "last": "Corr", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Badri N. Patro and Vinay P. Namboodiri. 2019. Deep exemplar networks for VQA and VQG. 
CoRR, abs/1912.09551.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Multimodal differential network for visual question generation", "authors": [ { "first": "Sandeep", "middle": [], "last": "Badri Narayana Patro", "suffix": "" }, { "first": "Vinod", "middle": [ "Kumar" ], "last": "Kumar", "suffix": "" }, { "first": "Vinay", "middle": [ "P" ], "last": "Kurmi", "suffix": "" }, { "first": "", "middle": [], "last": "Namboodiri", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4002--4012", "other_ids": {}, "num": null, "urls": [], "raw_text": "Badri Narayana Patro, Sandeep Kumar, Vinod Kumar Kurmi, and Vinay P. Namboodiri. 2018. Multimodal differential network for visual question generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 -November 4, 2018, pages 4002-4012.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Visualizing and measuring the geometry of BERT", "authors": [ { "first": "Emily", "middle": [], "last": "Reif", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wattenberg", "suffix": "" }, { "first": "Fernanda", "middle": [ "B" ], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Coenen", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Pearce", "suffix": "" }, { "first": "Been", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "8592--8600", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B. Vi\u00e9gas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and measuring the geometry of BERT. In Advances in Neural Information Process- ing Systems 32: Annual Conference on Neural Infor- mation Processing Systems 2019, NeurIPS 2019, 8- 14 December 2019, Vancouver, BC, Canada, pages 8592-8600.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Faster R-CNN: towards real-time object detection with region proposal networks", "authors": [ { "first": "Kaiming", "middle": [], "last": "Shaoqing Ren", "suffix": "" }, { "first": "Ross", "middle": [ "B" ], "last": "He", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Girshick", "suffix": "" }, { "first": "", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "IEEE Trans. Pattern Anal. Mach. Intell", "volume": "39", "issue": "6", "pages": "1137--1149", "other_ids": { "DOI": [ "10.1109/TPAMI.2016.2577031" ] }, "num": null, "urls": [], "raw_text": "Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2017. Faster R-CNN: towards real-time ob- ject detection with region proposal networks. IEEE Trans. Pattern Anal. 
Mach. Intell., 39(6):1137-1149.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Self-attention architectures for answer-agnostic neural question generation", "authors": [ { "first": "Thomas", "middle": [], "last": "Scialom", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Piwowarski", "suffix": "" }, { "first": "Jacopo", "middle": [], "last": "Staiano", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "1", "issue": "", "pages": "6027--6032", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Scialom, Benjamin Piwowarski, and Jacopo Staiano. 2019. Self-attention architectures for answer-agnostic neural question generation. In Pro- ceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Pa- pers, pages 6027-6032.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Grounded compositional semantics for finding and describing images with sentences", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Andrej", "middle": [], "last": "Karpathy", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2014, "venue": "TACL", "volume": "2", "issue": "", "pages": "207--218", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Andrej Karpathy, Quoc V. Le, Christo- pher D. Manning, and Andrew Y. Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. TACL, 2:207- 218.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "VL-BERT: pretraining of generic visual-linguistic representations", "authors": [ { "first": "Weijie", "middle": [], "last": "Su", "suffix": "" }, { "first": "Xizhou", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lewei", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Jifeng", "middle": [], "last": "Dai", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2019. VL-BERT: pre- training of generic visual-linguistic representations. CoRR, abs/1908.08530.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Videobert: A joint model for video and language representation learning", "authors": [ { "first": "Chen", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Austin", "middle": [], "last": "Myers", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Vondrick", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "Cordelia", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Sun, Austin Myers, Carl Vondrick, Kevin Mur- phy, and Cordelia Schmid. 2019a. Videobert: A joint model for video and language representation learning. 
CoRR, abs/1904.01766.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Ernie 2.0: A continual pre-training framework for language understanding", "authors": [ { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Shuohuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yukun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shikun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Hao Tian", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.12412" ] }, "num": null, "urls": [], "raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2019b. Ernie 2.0: A continual pre-training framework for language un- derstanding. arXiv preprint arXiv:1907.12412.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing sys- tems, pages 3104-3112.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "LXMERT: learning cross-modality encoder representations from transformers", "authors": [ { "first": "Hao", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Tan and Mohit Bansal. 2019. LXMERT: learning cross-modality encoder representations from trans- formers. CoRR, abs/1908.07490.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Bert rediscovers the classical nlp pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. 
In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4593- 4601.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 Decem- ber 2017, Long Beach, CA, USA, pages 5998-6008.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Cider: Consensus-based image description evaluation", "authors": [ { "first": "Ramakrishna", "middle": [], "last": "Vedantam", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "4566--4575", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image de- scription evaluation. In Proceedings of the IEEE conference on computer vision and pattern recogni- tion, pages 4566-4575.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Show and tell: A neural image caption generator", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Toshev", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Dumitru", "middle": [], "last": "Erhan", "suffix": "" } ], "year": 2015, "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "3156--3164", "other_ids": { "DOI": [ "10.1109/CVPR.2015.7298935" ] }, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In IEEE Conference on Com- puter Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 3156- 3164.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Bert has a mouth, and it must speak", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2019, "venue": "Bert as a markov random field language model", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.04094" ] }, "num": null, "urls": [], "raw_text": "Alex Wang and Kyunghyun Cho. 2019. 
Bert has a mouth, and it must speak: Bert as a markov random field language model. arXiv preprint arXiv:1902.04094.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Neural self talk: Image understanding via continuous questioning and answering", "authors": [ { "first": "Yezhou", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Cornelia", "middle": [], "last": "Ferm\u00fcller", "suffix": "" }, { "first": "Yiannis", "middle": [], "last": "Aloimonos", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yezhou Yang, Yi Li, Cornelia Ferm\u00fcller, and Yiannis Aloimonos. 2015. Neural self talk: Image under- standing via continuous questioning and answering. CoRR, abs/1512.03460.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Contextaware zero-shot learning for object recognition", "authors": [ { "first": "Eloi", "middle": [], "last": "Zablocki", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Laure", "middle": [], "last": "Soulier", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Piwowarski", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Gallinari", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 36th International Conference on Machine Learning, ICML 2019", "volume": "", "issue": "", "pages": "7292--7303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eloi Zablocki, Patrick Bordes, Laure Soulier, Benjamin Piwowarski, and Patrick Gallinari. 2019. Context- aware zero-shot learning for object recognition. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 7292-7303.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Automatic generation of grounded visual questions", "authors": [ { "first": "Shijie", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Lizhen", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Shaodi", "middle": [], "last": "You", "suffix": "" }, { "first": "Zhenglu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jiawan", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "4235--4243", "other_ids": { "DOI": [ "10.24963/ijcai.2017/592" ] }, "num": null, "urls": [], "raw_text": "Shijie Zhang, Lizhen Qu, Shaodi You, Zhenglu Yang, and Jiawan Zhang. 2017. Automatic generation of grounded visual questions. 
In Proceedings of the Twenty-Sixth International Joint Conference on Ar- tificial Intelligence, IJCAI 2017, Melbourne, Aus- tralia, August 19-25, 2017, pages 4235-4243.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Paragraph-level neural question generation with maxout pointer and gated self-attention networks", "authors": [ { "first": "Yao", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Xiaochuan", "middle": [], "last": "Ni", "suffix": "" }, { "first": "Yuanyuan", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Qifa", "middle": [], "last": "Ke", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3901--3910", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question genera- tion with maxout pointer and gated self-attention net- works. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 -November 4, 2018, pages 3901-3910.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Places: A 10 million image database for scene recognition", "authors": [ { "first": "Bolei", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "\u00c0gata", "middle": [], "last": "Lapedriza", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Khosla", "suffix": "" }, { "first": "Aude", "middle": [], "last": "Oliva", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" } ], "year": 2018, "venue": "IEEE Trans. Pattern Anal. Mach. Intell", "volume": "40", "issue": "6", "pages": "1452--1464", "other_ids": { "DOI": [ "10.1109/TPAMI.2017.2723009" ] }, "num": null, "urls": [], "raw_text": "Bolei Zhou,\u00c0gata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. 2018. Places: A 10 million image database for scene recognition. IEEE Trans. Pattern Anal. Mach. 
Intell., 40(6):1452-1464.", "links": null } }, "ref_entries": { "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "Figure 1: Model overview. Captions are encoded via BERT embeddings, while visual embeddings (blue) are obtained via a linear layer, used to project image representations to the embedding layer dimensions." }, "FIGREF4": { "uris": null, "type_str": "figure", "num": null, "text": "Joint (Patro and Namboodiri, 2019): State-of-the-art model on VQA1.0, based on joint usage of caption and image information. MC-SBN (Patro et al., 2020): State-of-the-art on VQG_COCO. The model jointly leverages on multiple cues (the image, place information, caption, tags) to generate questions." }, "FIGREF8": { "uris": null, "type_str": "figure", "num": null, "text": "Figure text: What is the color of the umbrella? / A group of people standing on a street" }, "FIGREF9": { "uris": null, "type_str": "figure", "num": null, "text": "Figure text: What is the man holding?" }, "TABREF0": { "text": "Quantitative VQG results on VQA1.0. We report results from previous works in the upper block, and those obtained by our proposed models in the bottom block.", "html": null, "type_str": "table", "content": "
Model | BLEU1 | BLEU2 | BLEU3 | BLEU4 | ROUGE-L | METEOR | CIDEr
Sample | 38.8 | - | - | - | 34.2 | 12.7 | 13.3
Max | 59.4 | - | - | - | 49.3 | 17.8 | 33.1
MDN-Joint | 65.1 | - | - | - | 52.0 | 22.7 | 33.1
Cap. only Step 1 | 75.41 | 56.49 | 43.26 | 32.28 | 66.18 | 26.51 | 43.56
Im. only Step 2 (freeze) | 73.62 | 53.54 | 39.37 | 27.44 | 64.34 | 24.36 | 29.58
Im. only Step 2 (unfreeze) | 73.97 | 55.07 | 42.20 | 31.76 | 65.70 | 26.36 | 41.43
Im. + Cap. Step 3 | 75.59 | 56.88 | 43.96 | 33.35 | 66.71 | 26.76 | 44.99
Im. + Cap. Step 3 (from scratch) | 75.84 | 56.42 | 43.53 | 32.85 | 66.30 | 25.92 | 38.81
", "num": null }, "TABREF2": { "text": "Quantitative VQG results on VQG_COCO. We report results from previous works in the upper block, and those obtained by our proposed models in the middle block. Human performance is taken from Mostafazadeh et al. (2016).", "html": null, "type_str": "table", "content": "
Model | Readability | Caption Relevance | Image Relevance
Caption only | 4.9 | 4.72* | 4.25*
Image only | 4.77 | 3.87 | 4.32*
Image + Caption | 4.89 | 4.06* | 4.69*
Human | 4.83 | 3.64 | 4.9
", "num": null }, "TABREF3": { "text": "", "html": null, "type_str": "table", "content": "", "num": null } } } }