{ "paper_id": "I17-1014", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:38:41.814141Z" }, "title": "Imagination Improves Multimodal Translation", "authors": [ { "first": "Desmond", "middle": [], "last": "Elliott And\u00e1kos K\u00e1d\u00e1r", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tilburg University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We decompose multimodal translation into two sub-tasks: learning to translate and learning visually grounded representations. In a multitask learning framework, translations are learned in an attentionbased encoder-decoder, and grounded representations are learned through image representation prediction. Our approach improves translation performance compared to the state of the art on the Multi30K dataset. Furthermore, it is equally effective if we train the image prediction task on the external MS COCO dataset, and we find improvements if we train the translation model on the external News Commentary parallel text.", "pdf_parse": { "paper_id": "I17-1014", "_pdf_hash": "", "abstract": [ { "text": "We decompose multimodal translation into two sub-tasks: learning to translate and learning visually grounded representations. In a multitask learning framework, translations are learned in an attentionbased encoder-decoder, and grounded representations are learned through image representation prediction. Our approach improves translation performance compared to the state of the art on the Multi30K dataset. Furthermore, it is equally effective if we train the image prediction task on the external MS COCO dataset, and we find improvements if we train the translation model on the external News Commentary parallel text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Multimodal machine translation is the task of translating sentences in context, such as images paired with a parallel text . This is an emerging task in the area of multilingual multimodal natural language processing. Progress on this task may prove useful for translating the captions of the images illustrating online news articles, and for multilingual closed captioning in international television and cinema.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Initial efforts have not convincingly demonstrated that visual context can improve translation quality. In the results of the First Multimodal Translation Shared Task, only three systems outperformed an off-the-shelf text-only phrase-based machine translation model, and the best performing system was equally effective with or without the visual features . There remains an open question about how translation models should take advantage of visual context. We present a multitask learning model that decomposes multimodal translation into learning a translation model and learning visually grounded representations. This decomposition means that our model can be trained over external datasets of parallel text or described images, making it possible to take advantage of existing resources. Figure 1 presents an overview of our model, Imagination, in which source language representations are shared between tasks through the Shared Encoder. 
The translation decoder is an attention-based neural machine translation model (Bahdanau et al., 2015) , and the image prediction decoder is trained to predict a global feature vector of an image that is associated with a sentence (Chrupa\u0142a et al., 2015, IMAGINET) . This decomposition encourages grounded learning in the shared encoder because the IMAGINET decoder is trained to imagine the image associated with a sentence. It has been shown that grounded representations are qualitatively different from their text-only counterparts (K\u00e1d\u00e1r et al., 2016) and correlate better with human similarity judgements (Chrupa\u0142a et al., 2015) . We assess the success of the grounded learning by evaluating the image prediction model on an image-sentence ranking task to determine if the shared representations are useful for image retrieval (Hodosh et al., 2013) . In contrast with most previous work, our model does not take images as input at translation time, rather it learns grounded representations in the shared encoder.", "cite_spans": [ { "start": 1024, "end": 1047, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" }, { "start": 1176, "end": 1209, "text": "(Chrupa\u0142a et al., 2015, IMAGINET)", "ref_id": null }, { "start": 1481, "end": 1501, "text": "(K\u00e1d\u00e1r et al., 2016)", "ref_id": "BIBREF27" }, { "start": 1556, "end": 1579, "text": "(Chrupa\u0142a et al., 2015)", "ref_id": "BIBREF10" }, { "start": 1778, "end": 1799, "text": "(Hodosh et al., 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 794, "end": 800, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate Imagination on the Multi30K dataset using a combination of in-domain and out-of-domain data. In the indomain experiments, we find that multitasking translation with image prediction is competitive with the state of the art. Our model achieves 55.8 Meteor as a single model trained on multimodal in-domain data, and 57.6 Meteor as an ensemble.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the experiments with out-of-domain resources, we find that the improvement in translation quality holds when training the IMAGINET decoder on the MS COCO dataset of described images (Chen et al., 2015) . Furthermore, if we significantly improve our text-only baseline using out-of-domain parallel text from the News Commentary corpus (Tiedemann, 2012) , we still find improvements in translation quality from the auxiliary image prediction task. Finally, we report a state-of-the-art result of 59.3 Meteor on the Multi30K corpus when ensembling models trained on in-and out-of-domain resources.", "cite_spans": [ { "start": 185, "end": 204, "text": "(Chen et al., 2015)", "ref_id": "BIBREF7" }, { "start": 337, "end": 354, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contributions of this paper are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We show how to apply multitask learning to multimodal translation. 
This makes it possible to train models for this task using external resources alongside the expensive triple-aligned source-target-image data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We decompose multimodal translation into two tasks: learning to translate and learning grounded representations. We show that each task can be trained on large-scale external resources, e.g. parallel news text or images described in a single language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We present a model that achieves state-of-the-art results without using images as input. Instead, our model learns visually grounded source language representations using an auxiliary image prediction objective. Our model does not need any additional parameters to translate unseen sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Multimodal translation is the task of producing a target language translation y, given the source language sentence x and additional context, such as an image v. Let x be a source language sentence consisting of N tokens: x_1, x_2, ..., x_N, and let y be a target language sentence consisting of M tokens: y_1, y_2, ..., y_M. The training data consists of tuples (x, y, v) \u2208 D, where x is a description of image v, and y is a translation of x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "2" }, { "text": "Multimodal translation has previously been framed as minimising the negative log-likelihood of a translation model that is additionally conditioned on the image, i.e. J(\u03b8) = \u2212\u2211_j log p(y_j | y_