{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:07:57.269637Z" }, "title": "Syntax-aware Transformers for Neural Machine Translation: The Case of Text to Sign Gloss Translation", "authors": [ { "first": "Santiago", "middle": [], "last": "Egea G\u00f3mez", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universitat Pompeu Fabra", "location": {} }, "email": "santiago.egea@upf.edu" }, { "first": "Euan", "middle": [], "last": "Mcgill", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universitat Pompeu Fabra", "location": {} }, "email": "euan.mcgill@upf.edu" }, { "first": "Horacio", "middle": [], "last": "Saggion", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universitat Pompeu Fabra", "location": {} }, "email": "horacio.saggion@upf.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "It is well-established that the preferred mode of communication of the deaf and hard of hearing (DHH) community are Sign Languages (SLs), but they are considered low resource languages where natural language processing technologies are of concern. In this paper we study the problem of text to SL gloss Machine Translation (MT) using Transformer-based architectures. Despite the significant advances of MT for spoken languages in the recent couple of decades, MT is in its infancy when it comes to SLs. We enrich a Transformer-based architecture aggregating syntactic information extracted from a dependency parser to wordembeddings. 
We test our model on a well-known dataset, showing that the syntax-aware model obtains performance gains in terms of MT evaluation metrics.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "It is well-established that Sign Languages (SLs) are the preferred mode of communication of the deaf and hard of hearing (DHH) community, but they are considered low-resource languages as far as natural language processing technologies are concerned. In this paper we study the problem of text to SL gloss Machine Translation (MT) using Transformer-based architectures. Despite the significant advances in MT for spoken languages over the past couple of decades, MT is still in its infancy when it comes to SLs. We enrich a Transformer-based architecture by aggregating syntactic information extracted from a dependency parser with word embeddings. We test our model on a well-known dataset, showing that the syntax-aware model obtains performance gains in terms of MT evaluation metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Access to information is a human right and crossing language barriers is essential for global information exchange and unobstructed, fair communication. However, we are still far from the goal of making information accessible to all a reality. The World Health Organisation (WHO) reports that there are some 466 million people in the world today with disabling hearing loss 1 ; moreover, it is estimated that this number will double by 2050. 
According to the World Federation of the Deaf (WFD), over 70 million people are deaf and communicate primarily via a sign language (SL).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It is well-established that the preferred mode of communication of the deaf and hard of hearing (DHH) community is SLs (Stoll et al., 2020) , but they are considered extremely low-resource languages (Moryossef et al., 2021) , and lag further behind in terms of the provision of language technologies available to DHH people. 150 SLs have been classified around the world (Eberhard et al., 2021) while there may be upwards of 400 according to SIL International 2 . Creating accessible-to-all technological solutions may also mitigate the effect of more variable reading literacy rates in the DHH community (Berke et al., 2018) . The written language is usually the ambient spoken language in the geographical area in which signers are found (e.g. English in the British Sign Language area), and providing resources in native SL could benefit the provision and uptake of sign language technology.", "cite_spans": [ { "start": 120, "end": 140, "text": "(Stoll et al., 2020)", "ref_id": "BIBREF31" }, { "start": 200, "end": 224, "text": "(Moryossef et al., 2021)", "ref_id": "BIBREF19" }, { "start": 372, "end": 395, "text": "(Eberhard et al., 2021)", "ref_id": "BIBREF8" }, { "start": 605, "end": 625, "text": "(Berke et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Machine translation (MT) (Koehn, 2009 ) is a core technique for reducing language barriers that has advanced and seen many breakthroughs since it began in the 1950s (Johnson et al., 2017) , reaching quality levels comparable to humans (Hassan et al., 2018) . 
Despite the significant advances in MT for spoken languages over the past couple of decades, MT is still in its infancy when it comes to SLs.", "cite_spans": [ { "start": 25, "end": 37, "text": "(Koehn, 2009", "ref_id": "BIBREF13" }, { "start": 166, "end": 188, "text": "(Johnson et al., 2017)", "ref_id": "BIBREF12" }, { "start": 236, "end": 257, "text": "(Hassan et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The output of MT between spoken languages tends to be text, but there are further considerations for researchers doing Sign Language translation (SLT). Full writing systems exist for SL (e.g. HamNoSys (Hanke, 2004) , SiGML (Zwitserlood et al., 2004) ), but they are not always used as the output, or used at all, in SLT. SL glosses are a lexeme-based representation of signs where classifier predicates, manual and non-manual cues (Porta et al., 2014) are distilled into a lexical representation, usually in the ambient spoken language. The articulators in SLs include hand configuration and trajectory, facial articulators including lip position and eyebrow configuration, and spatial articulation including eye gaze and body position (Mukushev et al., 2020) -all used to convey meaning. Glosses, and the Text2Gloss process, are an essential step in the MT pipeline between spoken and sign languages -even though they are considered by some researchers a flawed representation which hinders the extraction of meaning (Yin and Read, 2020) . Although some current approaches to SL translation follow an end-to-end paradigm, translating into glosses offers an intermediate representation which could drive the generation of the actual virtual signs (e.g. by an avatar) (Almeida et al., 2015; L\u00f3pez-Lude\u00f1a et al., 2014) . 
A growing number of researchers (Jantunen et al., 2021) have been using innovative methods to leverage the limited supply of SL gloss corpora and resources for SL technology.", "cite_spans": [ { "start": 202, "end": 215, "text": "(Hanke, 2004)", "ref_id": "BIBREF9" }, { "start": 224, "end": 250, "text": "(Zwitserlood et al., 2004)", "ref_id": "BIBREF35" }, { "start": 417, "end": 437, "text": "(Porta et al., 2014)", "ref_id": "BIBREF24" }, { "start": 722, "end": 745, "text": "(Mukushev et al., 2020)", "ref_id": null }, { "start": 1003, "end": 1023, "text": "(Yin and Read, 2020)", "ref_id": "BIBREF34" }, { "start": 1251, "end": 1273, "text": "(Almeida et al., 2015;", "ref_id": null }, { "start": 1274, "end": 1300, "text": "L\u00f3pez-Lude\u00f1a et al., 2014)", "ref_id": "BIBREF18" }, { "start": 1335, "end": 1358, "text": "(Jantunen et al., 2021)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In spite of the impressive results achieved by Neural Machine Translation (NMT) when massive parallel datasets are available for training using just token-level information, recent research (Armengol Estap\u00e9 and Ruiz Costa-Juss\u00e0, 2021) shows that morphological and syntactic information extracted from linguistic processors can be of help for out-of-domain machine translation or for morphologically rich languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we make transformer models for NMT 'syntax-aware' -syntactic information embeddings are included alongside word embeddings in the encoder part of the model. The rationale behind including syntactic embeddings draws on the success of word embeddings in improving natural language processing tasks, including syntactic parsing itself (Socher et al., 2013) , and on the context-sensitive embeddings pioneered in transformer models (Vaswani et al., 2017; Devlin et al., 2019). 
We posit that encoding syntactic information will in turn boost the performance of Text2Gloss as we show with our experimental results.", "cite_spans": [ { "start": 351, "end": 372, "text": "(Socher et al., 2013)", "ref_id": "BIBREF30" }, { "start": 445, "end": 467, "text": "(Vaswani et al., 2017;", "ref_id": "BIBREF32" }, { "start": 468, "end": 488, "text": "Devlin et al., 2019;", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organised in the following way: in the next section we briefly introduce the project in the context of which this work is being carried out. Then, in Section 3, we present related work on SL translation and background on NMT and in Section 4 we describe the NMT architecture we use in our experiments. In Section 5 we describe the experimental methodology including data and evaluation metrics while in Section 6 we present quantitative results. Section 7 analyses the results while Section 8 closes the paper and discusses further work which could expand this avenue of research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "SignON 3 is a Horizon 2020 project which aims to develop a communication service that translates between sign and spoken (in both text and audio modalities) languages and caters for the communication needs between DHH and hearing individuals (Saggion et al., 2021) . Currently, human interpreters are the main medium for sign-to-spoken, spoken-to-sign and sign-to-sign language translation. The availability and cost of these professionals is often a limiting factor in communication between signers and non-signers. The SignON communication service will translate between sign and spoken languages, bridging language gaps when professional interpretation is unavailable. 
A key piece of this project is the server that will host the translation engine, which imposes demanding requirements in terms of latency and efficiency.", "cite_spans": [ { "start": 242, "end": 264, "text": "(Saggion et al., 2021)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "The SignON project", "sec_num": "2" }, { "text": "The bottleneck to creating SL technology primarily lies in the training data available, such as from existing corpora and lexica. Certain corpora may be overly domain-specific (San-Segundo et al., 2010), containing only sentence fragments or example signs as part of a lexicon (Cabeza et al., 2016) , having little variation in individual signers or in the framing of the signer in 3D space (Nunnari et al., 2021) , or simply being too small in size to be applied to large neural models alone (Jantunen et al., 2021) .", "cite_spans": [ { "start": 277, "end": 298, "text": "(Cabeza et al., 2016)", "ref_id": "BIBREF4" }, { "start": 386, "end": 408, "text": "(Nunnari et al., 2021)", "ref_id": "BIBREF22" }, { "start": 482, "end": 505, "text": "(Jantunen et al., 2021)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "The next section describes current methods to mitigate the data-scarcity problem, and state-of-the-art models and studies with sign language gloss data -including Text2Gloss, Gloss2Text, and efforts towards end-to-end (E2E) SLT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "The Transformer architecture has been successful in covering a large number of language pairs with great accuracy in MT tasks, most notably in models such as BART and mBART. mT5 (Xue et al., 2021 ) also performs well with an even larger set of languages, many of which are considered low-resource. These models are also highly adaptable to other NLP tasks by means of fine-tuning. 
In addition, recent work has shown that transformer models including embeddings with linguistic information in a low-resource language pair improve model performance (Sennrich and Haddow, 2016) . Their 'Factored Transformer' model inserts embeddings for lemmas, part-of-speech tags, lexical dependencies, and morphological features in the encoder of their attentional encoder-decoder architecture.", "cite_spans": [ { "start": 175, "end": 192, "text": "(Xue et al., 2021", "ref_id": null }, { "start": 535, "end": 562, "text": "(Sennrich and Haddow, 2016)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Transformer models for NMT", "sec_num": "3.1" }, { "text": "In this work, a syntax-aware transformer model is proposed for Text2Gloss translation -one step in the SLT pipeline. Although current steps towards E2E SLT using transformer-based NMT systems look promising (Nunnari et al., 2021) , using glosses as an intermediate representation still improves performance even in these state-of-the-art systems (Camgoz et al., 2020; Yin and Read, 2020) . Our model exploits lexical dependency information to assist in learning the intrinsic grammatical rules involved in translating from text to glosses. Unlike other works, we consider model simplicity a key feature to fulfil efficiency requirements in the SignON Project. Thus, we applied a simple aggregation scheme to inject syntactic information into the model and chose a relatively simple neural architecture. 
Using only lexical dependency features also allows us to examine the impact of this individual linguistic feature on model performance.", "cite_spans": [ { "start": 207, "end": 229, "text": "(Nunnari et al., 2021)", "ref_id": "BIBREF22" }, { "start": 345, "end": 366, "text": "(Camgoz et al., 2020;", "ref_id": "BIBREF6" }, { "start": 367, "end": 386, "text": "Yin and Read, 2020)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Transformer models for NMT", "sec_num": "3.1" }, { "text": "Transformer for Text2Gloss", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview: A Syntax-aware", "sec_num": "4" }, { "text": "Our model is an Encoder-Decoder architecture in which the input embeddings to the Encoder are augmented with lexical dependency information. As can be noted from Table 1 , gloss production from spoken text is essentially based on word permutations, stemming and deletions. In many cases, those transformations depend on the syntactic functions of words; for example, determiners are always removed to produce glosses. Consequently, we believe that word dependency tags might assist in modelling syntactic rules which are intrinsic to gloss production.", "cite_spans": [], "ref_spans": [ { "start": 177, "end": 184, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "System Overview: A Syntax-aware", "sec_num": "4" }, { "text": "Importantly, our Text2Gloss model has been developed considering the efficiency requirements demanded by the SignON Project. Therefore, the size of the architecture has been selected to produce accurate translations while remaining lightweight. 
Figure 1 shows the different modules composing our system.", "cite_spans": [], "ref_spans": [ { "start": 239, "end": 248, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "System Overview: A Syntax-aware", "sec_num": "4" }, { "text": "The neural architecture employed is based on multi-attention layers (Vaswani et al., 2017) , which have produced excellent results when modelling long input sequences. More specifically, the Encoder and Decoder are composed of three multi-attention layers with four attention heads. The internal dimensions for the fully connected network are set to 1024 and the output units to 512. The Encoder transforms inputs to latent vectors, whilst the Decoder produces word probabilities from the encoded latent representations.", "cite_spans": [ { "start": 68, "end": 90, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "System Overview: A Syntax-aware", "sec_num": "4" }, { "text": "Our system augments the discriminative power of the embeddings input to the Encoder by aggregating syntactic information with word embeddings. Unlike (Armengol Estap\u00e9 and Ruiz Costa-Juss\u00e0, 2021) (which added encoders to manage injected features), we integrate an additional table that contains the vector embeddings for the syntactic tags. The word and syntax embeddings are summed up, producing an aggregated embedding that is input to the Encoder. Both tables were set to have a vector length of 512.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview: A Syntax-aware", "sec_num": "4" }, { "text": "To accommodate input text to the neural model, we process it employing subword tokenisation, and dependency tags are produced using the de_core_news_sm model available in the spaCy 5 library. 
The dependency tags we incorporate are from the TIGER dependency bank (Albert et al., 2003) , included in the German model, and designed specifically to categorise words in German (Brants et al., 2004) . An example of these tags with a German sentence is shown in Figure 2 . Syntax tags were then aligned with the corresponding word tokens, as shown in Figure 1 . For the tokeniser, a SentencePiece model (Kudo and Richardson, 2018) was trained using only the training corpus with a vocabulary size of 3000, reserving some tokens for control purposes.", "cite_spans": [ { "start": 261, "end": 282, "text": "(Albert et al., 2003)", "ref_id": "BIBREF0" }, { "start": 371, "end": 392, "text": "(Brants et al., 2004)", "ref_id": "BIBREF3" }, { "start": 602, "end": 629, "text": "(Kudo and Richardson, 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 455, "end": 463, "text": "Figure 2", "ref_id": null }, { "start": 549, "end": 557, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "System Overview: A Syntax-aware", "sec_num": "4" }, { "text": "Regarding training, the Adam optimiser with a learning rate of 10 \u22125 and a batch size of 64 was applied to optimise the categorical cross-entropy loss for 500 epochs. Text generation was carried out using Beam Search decoding. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview: A Syntax-aware", "sec_num": "4" }, { "text": "In this section, we present the methods and materials used in this research. Firstly, we introduce the dataset used; then, the performance metrics and other implementation details are described.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods & Materials", "sec_num": "5" }, { "text": "The parallel corpus selected for our experiments is the RWTH-PHOENIX-2014-T (Camgoz et al., 2018) . It is publicly available 6 , and is widely adopted for SLT research. 
This dataset contains images and transcriptions in German text and German Sign Language (DGS) glosses of weather forecasting news from a public TV station. The large vocabulary (1,066 different signs) and number of signers (nine) make this dataset promising for SLT research 6 (https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/), in an albeit limited semantic domain. In this study, we only consider the text and gloss transcriptions.", "cite_spans": [ { "start": 76, "end": 97, "text": "(Camgoz et al., 2018)", "ref_id": "BIBREF5" }, { "start": 435, "end": 436, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset: RWTH-PHOENIX-2014-T", "sec_num": "5.1" }, { "text": "The authors included development and test partitions in their dataset containing patterns unseen in the training data. We used the development subset to control overfitting, and performance is reported on the test subset. The information about the different subsets included in RWTH-PHOENIX-2014-T is presented in Table 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset: RWTH-PHOENIX-2014-T", "sec_num": "5.1" }, { "text": "In order to fairly evaluate our approach, we have selected performance metrics that are extensively used in NMT. The metrics used are introduced below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Metrics", "sec_num": "5.2" }, { "text": "Translation Edit Rate (TER): TER (Snover et al., 2006) measures the quality of system translations by counting the number of text edits needed to transform the produced text into the reference.", "cite_spans": [ { "start": 33, "end": 54, "text": "(Snover et al., 2006)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Performance Metrics", "sec_num": "5.2" }, { "text": "SacreBLEU: SacreBLEU (Post, 2018 ) is a very popular metric for NMT. 
It facilitates the implementation of BLEU (Papineni et al., 2002) and standardises input schemes to the metric by means of tokenisation and normalisation. This in turn makes scores from other works more directly comparable, and comparisons more straightforward. BLEU aims to correlate with human judgements of quality by using a reference translation as part of its calculation.", "cite_spans": [ { "start": 21, "end": 32, "text": "(Post, 2018", "ref_id": "BIBREF25" }, { "start": 111, "end": 134, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Performance Metrics", "sec_num": "5.2" }, { "text": "ROUGE-L F1: ROUGE-L (Lin, 2004) was primarily conceived for evaluating text summarisation models; however, it has become popular for other NLP tasks. It measures the longest sequence in common between the given reference and the model output sentence, without pre-defining an n-gram length. We report the F1 score to measure model accuracy, as also seen in other works on this dataset (Camgoz et al., 2018; Yin and Read, 2020) .", "cite_spans": [ { "start": 20, "end": 31, "text": "(Lin, 2004)", "ref_id": "BIBREF16" }, { "start": 380, "end": 401, "text": "(Camgoz et al., 2018;", "ref_id": "BIBREF5" }, { "start": 402, "end": 421, "text": "Yin and Read, 2020)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Performance Metrics", "sec_num": "5.2" }, { "text": "METEOR: METEOR (Banerjee and Lavie, 2005) is a metric for MT evaluation based on unigram matching. This metric uses unigram precision and recall to consider word alignments, with recall having more influence on the score. It is considered to have a higher correlation with human judgement than BLEU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Metrics", "sec_num": "5.2" }, { "text": "Generation time: Finally, the generation time is reported to assess our system in terms of computational efficiency. 
It is reported in seconds for each model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Metrics", "sec_num": "5.2" }, { "text": "The experiments reported here were carried out using TensorFlow as the Deep Learning framework. The Embedding Tables, Encoder and Decoder implementations were inherited from the HuggingFace transformers library 7 and spaCy was employed to produce word-dependency features. Finally, NLTK and other third-party code 8, 9, 10 was used to compute the performance metrics adopted here. We make our code publicly available on GitHub 11 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "5.3" }, { "text": "Here, we present the results from our experiment. As the objective of this research is evaluating the benefits of injecting syntactic information for Text2Gloss translation, we compare two models with the same architecture: one including, and one not including, lexical dependency information. Those models are denoted as Syntax and No-Syntax, respectively, in this and subsequent sections. Figure 3 presents the evolution of the performance metrics every 5 training epochs while the models are being trained. It is apparent that including the syntactic information brings notable benefits for most of the metrics adopted, with the exception of METEOR.", "cite_spans": [], "ref_spans": [ { "start": 388, "end": 396, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "Focusing on the sacreBLEU score, the Syntax model produces substantially better translations after 80 training epochs. After this point, the models converge and the difference in the sacreBLEU score between the models becomes more evident. 
Namely, the greatest difference between both models happens at epoch 165, when the Syntax model produces a sacreBLEU score 5.7 points higher than No-Syntax.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance vs Epochs", "sec_num": "6.1" }, { "text": "As for TER, the differences between the curves are more remarkable. The Syntax model produces TER scores notably better than No-syntax; the score becomes stable after 95 epochs and tends to reduce its oscillations. At this point the Syntax model outperforms the No-syntax model by around 0.15 TER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance vs Epochs", "sec_num": "6.1" }, { "text": "According to the ROUGE-L (F1-score) obtained, we also observe a slight improvement of the Syntax model over No-syntax, although this increase is not clear until epoch 150. In this case the differences are not as clear as for the metrics already discussed, but they imply enhancements higher than 0.01 for this metric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance vs Epochs", "sec_num": "6.1" }, { "text": "The METEOR score is the only metric that does not improve when syntactic information is included. In this regard, the No-syntax model produced better translations in terms of this score for the whole training phase. After the models converge at around 100 epochs, the greatest difference between models happens at epoch 350, when No-syntax outperforms the Syntax model by 0.029 points. It is also remarkable that the differences between models are not higher than 0.015 for most of the points after convergence. The reason why No-Syntax produces a slightly better METEOR than Syntax might be the fact that METEOR favours unigram recall and the No-Syntax model tends to repeat words, as we show in the next section. 
Nonetheless, we will further analyse this observation in future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance vs Epochs", "sec_num": "6.1" }, { "text": "Finally, as efficiency is one of the goals of our project, we turn to generation time. From the Generation Time curves shown in Figure 3 , we can observe that injecting syntactic information does not lead to marked generation time increases. We include the extra time necessary to produce the lexical dependency tags. In the case of the training subset, the tagging process took around 20.9 seconds; this processing time constitutes an increase of 2.95 milliseconds per sentence compared to not using syntax tags. Regarding the test subset, the tagging process lasted 3.23 seconds in total, which is not a marked increase considering the total generation times and that Syntax is up to 60 seconds faster than No-syntax (this is the case for 155 to 180 epochs). The cause behind the great differences in generation times might be that Beam Search decoding produces more precise hypotheses and needs fewer decoding iterations when syntax tags are employed.", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 136, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Performance vs Epochs", "sec_num": "6.1" }, { "text": "From the previous analysis, we have identified the points at which the neural models converge and where high variation is not present in the metric curves. In this section, we focus on the points at which the metrics reach their maximum values after the convergence point, which is located around epoch 100. Table 3 shows the best-performing values for all metrics.", "cite_spans": [], "ref_spans": [ { "start": 306, "end": 313, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Best-performing points", "sec_num": "6.2" }, { "text": "From Table 3 , we observe that the Syntax model reaches its maximum values with fewer epochs than No-syntax. 
This observation indicates that syntactic information might also benefit the neural model's learning, leading to shorter training times. Another observation is that most of the metrics are improved by injecting syntactic information, with the exception of METEOR. ", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Best-performing points", "sec_num": "6.2" }, { "text": "In the previous section, we have quantitatively described the results produced by our selected metrics. Additionally, this section presents a qualitative analysis of the benefits for Text2Gloss translation of including lexical information in the transformer model. Table 4 contains two examples of how both models produce glosses at different training points. As can be noted in both examples, the No-syntax model needs more epochs to produce coherent translations and tends to repeat some patterns, leading to corrupted outputs in some cases. This effect is quite remarkable in the second example, for which No-syntax keeps repeating patterns after 100 epochs while Syntax produces more coherent translations. This fact might lead to the No-Syntax model obtaining a slightly higher METEOR than Syntax (see 6.1), while Syntax substantially outperformed its competitor in terms of sacreBLEU.", "cite_spans": [], "ref_spans": [ { "start": 273, "end": 280, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "The fast-learning capacity exhibited by the Syntax model could be advantageous for our project, since domain adaptation is an expected feature of the system under development. Also, we have shown that injecting syntactic information into the encoder enables more accurate models without wholesale architecture modifications. 
The feature injection could be extended to other lexical features, such as Part-of-Speech tags, by integrating a new embedding table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "In this paper we present a syntax-aware transformer for Text2Gloss. To make the model syntax-aware, we inject word dependency tags to augment the discriminative power of the embeddings input to the Encoder. The fashion in which we expand transformers to include lexical dependency features involves minor modifications to the neural architecture, leading to a negligible impact on the computational complexity of the model. As the results of this research show, injecting syntax dependencies can boost Text2Gloss model performance. Namely, our syntax-aware model outperformed traditional transformers in terms of BLEU, TER and ROUGE-L F1. Meanwhile, the METEOR metric was slightly worse for our model. Furthermore, we have shown that syntax information can also assist in model learning, leading to a faster modelling of complex patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "This preliminary research constitutes a promising starting point towards the objectives of the SignON Project, in which it is planned to deploy resource-hungry translation models on cloud-based computing servers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Further work could compare the impact of other individual linguistic features, or combinations of them, such as part-of-speech tags, which are used in other studies on syntactic tagging for NMT (Sennrich and Haddow, 2016; Armengol Estap\u00e9 and Ruiz Costa-Juss\u00e0, 2021) . It may also use more widely-used lexical dependency tags such as those of the Universal Dependencies treebank (Borges V\u00f6lker et al., 2019) . 
Moreover, we are currently exploring data augmentation techniques to mitigate the scarcity of SL data.", "cite_spans": [ { "start": 197, "end": 224, "text": "(Sennrich and Haddow, 2016;", "ref_id": "BIBREF28" }, { "start": 225, "end": 268, "text": "Armengol Estap\u00e9 and Ruiz Costa-Juss\u00e0, 2021)", "ref_id": null }, { "start": 372, "end": 400, "text": "(Borges V\u00f6lker et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.sil.org/sign-languages", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://signon-project.eu/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "IX gloss indicates that the signer needs to point to something or someone. 5 https://spacy.io/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/transformers/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/BramVanroy/pyter 9 https://github.com/mjpost/sacrebleu 10 https://github.com/google/seq2seq/blob/master/seq2seq/metrics/rouge.py 11 https://github.com/LaSTUS-TALN-UPF/Syntax-Aware-Transformer-Text2Gloss", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the reviewers for their comments and suggestions. This work has been conducted within the SignON project. 
SignON is a Horizon 2020 project, funded under the Horizon 2020 program ICT-57-2020 - "An empowering, inclusive, Next Generation Internet" with Grant Agreement number 101017255.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "Example 1 Source und nun die wettervorhersage f\u00fcr morgen samstag den zw\u00f6lften september (EN) And now the weather forecast for tomorrow Saturday the twelfth of September Target ", "cite_spans": [], "ref_spans": [ { "start": 169, "end": 175, "text": "Target", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TIGER Annotationsschema", "authors": [ { "first": "Stefanie", "middle": [], "last": "Albert", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Anderssen", "suffix": "" }, { "first": "Regine", "middle": [], "last": "Bader", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Becker", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Bracht", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" }, { "first": "Stefanie", "middle": [], "last": "Dipper", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Eisenberg", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "1--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefanie Albert, Jan Anderssen, Regine Bader, Stephanie Becker, Tobias Bracht, Sabine Brants, Thorsten Brants, Vera Demberg, Stefanie Dipper, and Peter Eisenberg. 2003. TIGER Annotationsschema. 
Universit\u00e4t des Saarlandes and Universit\u00e4t Stuttgart and Universit\u00e4t Potsdam, pages 1-148.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "reading literacy levels", "authors": [], "year": null, "venue": "Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18", "volume": "", "issue": "", "pages": "1--12", "other_ids": { "DOI": [ "10.1145/3173574.3173665" ] }, "num": null, "urls": [], "raw_text": "reading literacy levels. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, pages 1-12, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "HDT-UD: A very large Universal Dependencies treebank for German", "authors": [ { "first": "Emanuel", "middle": [], "last": "Borges V\u00f6lker", "suffix": "" }, { "first": "Maximilian", "middle": [], "last": "Wendt", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hennig", "suffix": "" }, { "first": "Arne", "middle": [], "last": "K\u00f6hn", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Third Workshop on Universal Dependencies", "volume": "", "issue": "", "pages": "46--57", "other_ids": { "DOI": [ "10.18653/v1/W19-8006" ] }, "num": null, "urls": [], "raw_text": "Emanuel Borges V\u00f6lker, Maximilian Wendt, Felix Hennig, and Arne K\u00f6hn. 2019. HDT-UD: A very large Universal Dependencies treebank for German. In Proceedings of the Third Workshop on Universal Dependencies (UDW, SyntaxFest 2019), pages 46-57, Paris, France. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "TIGER: Linguistic interpretation of a german corpus", "authors": [ { "first": "Sabine", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Stefanie", "middle": [], "last": "Dipper", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Eisenberg", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Hansen-Schirra", "suffix": "" }, { "first": "Esther", "middle": [], "last": "K\u00f6nig", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Lezius", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Rohrer", "suffix": "" }, { "first": "George", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2004, "venue": "Journal of Language and Computation", "volume": "2", "issue": "", "pages": "597--620", "other_ids": { "DOI": [ "10.1007/s11168-004-7431-3" ] }, "num": null, "urls": [], "raw_text": "Sabine Brants, Stefanie Dipper, Peter Eisenberg, Silvia Hansen-Schirra, Esther K\u00f6nig, Wolfgang Lezius, Christian Rohrer, George Smith, and Hans Uszkoreit. 2004. TIGER: Linguistic interpretation of a German corpus. 
Journal of Language and Computation, 2:597-620.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Corilse: a spanish sign language repository for linguistic analysis", "authors": [ { "first": "Carmen", "middle": [], "last": "Cabeza", "suffix": "" }, { "first": "Jos\u00e9", "middle": [], "last": "Mar\u00eda Garc\u00eda-Miguel", "suffix": "" }, { "first": "Carmen", "middle": [], "last": "Garc\u00eda-Mateo", "suffix": "" }, { "first": "Jose Luis Alba-Castro", "middle": [], "last": "", "suffix": "" } ], "year": 2016, "venue": "of the Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "23--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carmen Cabeza, Jos\u00e9 Mar\u00eda Garc\u00eda-Miguel, Carmen Garc\u00eda-Mateo, and Jose Luis Alba-Castro. 2016. Corilse: a Spanish sign language repository for linguistic analysis. In Proceedings of the Language Resources and Evaluation Conference, Portoro\u017e (Slovenia), pages 23-28.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Neural sign language translation", "authors": [ { "first": "Necati", "middle": [], "last": "Camgoz", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Hadfield", "suffix": "" }, { "first": "Oscar", "middle": [], "last": "Koller", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Bowden", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "7784--7793", "other_ids": { "DOI": [ "10.1109/CVPR.2018.00812" ] }, "num": null, "urls": [], "raw_text": "Necati Camgoz, Simon Hadfield, Oscar Koller, Hermann Ney, and Richard Bowden. 2018. Neural sign language translation. 
In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7784-7793.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Sign language transformers: Joint end-to-end sign language recognition and translation", "authors": [ { "first": "Oscar", "middle": [], "last": "Necati Cihan Camgoz", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Koller", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Hadfield", "suffix": "" }, { "first": "", "middle": [], "last": "Bowden", "suffix": "" } ], "year": 2020, "venue": "2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "10020--10030", "other_ids": { "DOI": [ "10.1109/CVPR42600.2020.01004" ] }, "num": null, "urls": [], "raw_text": "Necati Cihan Camgoz, Oscar Koller, Simon Hadfield, and Richard Bowden. 2020. Sign language transformers: Joint end-to-end sign language recognition and translation. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10020-10030.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Ethnologue: Languages of the World, twenty-fourth edition", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Eberhard", "suffix": "" }, { "first": "Gary", "middle": [ "F" ], "last": "Simons", "suffix": "" }, { "first": "Charles", "middle": [ "D" ], "last": "Fennig", "suffix": "" } ], "year": 2021, "venue": "SIL International", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Eberhard, Gary F. Simons, and Charles D. Fennig. 2021. Ethnologue: Languages of the World, twenty-fourth edition. SIL International, Dallas, TX, USA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Hamnosys-representing sign language data in language resources and language processing contexts", "authors": [ { "first": "Thomas", "middle": [], "last": "Hanke", "suffix": "" } ], "year": 2004, "venue": "LREC 2004, Workshop proceedings: Representation and processing of sign languages", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Hanke. 2004. Hamnosys-representing sign language data in language resources and language processing contexts. 
In LREC 2004, Workshop proceedings: Representation and processing of sign languages, pages 1-6, Paris, France.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Is There Any Hope for Developing Automated Translation Technology for Sign Languages?", "authors": [ { "first": "Tommi", "middle": [], "last": "Jantunen", "suffix": "" }, { "first": "Rebekah", "middle": [], "last": "Rousi", "suffix": "" }, { "first": "P\u00e4ivi", "middle": [], "last": "Raino", "suffix": "" }, { "first": "Markku", "middle": [], "last": "Turunen", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Valipoor", "suffix": "" }, { "first": "Narciso", "middle": [], "last": "Garc\u00eda", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "61--73", "other_ids": { "DOI": [ "10.31885/9789515150257.7" ] }, "num": null, "urls": [], "raw_text": "Tommi Jantunen, Rebekah Rousi, P\u00e4ivi Raino, Markku Turunen, Mohammad Valipoor, and Narciso Garc\u00eda. 2021. Is There Any Hope for Developing Automated Translation Technology for Sign Languages?, pages 61-73.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "authors": [ { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "Fernanda", "middle": [], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wattenberg", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Macduff", "middle": [], "last": 
"Hughes", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "339--351", "other_ids": { "DOI": [ "10.1162/tacl_a_00065" ] }, "num": null, "urls": [], "raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the As- sociation for Computational Linguistics, 5:339-351.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1017/CBO9780511815829" ] }, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2009. Statistical Machine Translation. Cambridge University Press.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "John", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "66--71", "other_ids": { "DOI": [ "10.18653/v1/D18-2012" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. 
Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal ; Abdelrahman Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7871--7880", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.703" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. 
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multilingual denoising pre-training for neural machine translation", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Xian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "1--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. CoRR, abs/2001.08210:1-17.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Translating bus information into sign language for deaf people", "authors": [ { "first": "V", "middle": [], "last": "L\u00f3pez-Lude\u00f1a", "suffix": "" }, { "first": "C", "middle": [], "last": "Gonz\u00e1lez-Morcillo", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "L\u00f3pez", "suffix": "" }, { "first": "R", "middle": [], "last": "Barra-Chicote", "suffix": "" }, { "first": "R", "middle": [], "last": "Cordoba", "suffix": "" }, { "first": "R", "middle": [], "last": "San-Segundo", "suffix": "" } ], "year": 2014, "venue": "Engineering Applications of Artificial Intelligence", "volume": "32", "issue": "", "pages": "258--269", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. L\u00f3pez-Lude\u00f1a, C. Gonz\u00e1lez-Morcillo, J.C. L\u00f3pez, R. Barra-Chicote, R. Cordoba, and R. San-Segundo. 2014. 
Translating bus information into sign language for deaf people. Engineering Applications of Artificial Intelligence, 32:258-269.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Data augmentation for sign language gloss translation. CoRR, abs/2105", "authors": [ { "first": "Amit", "middle": [], "last": "Moryossef", "suffix": "" }, { "first": "Kayo", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2021, "venue": "", "volume": "07476", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Moryossef, Kayo Yin, Graham Neubig, and Yoav Goldberg. 2021. Data augmentation for sign language gloss translation. CoRR, abs/2105.07476:1-7.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Evaluation of manual and non-manual components for sign language recognition", "authors": [ { "first": "Anara", "middle": [], "last": "Sandygulova", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "6073--6078", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anara Sandygulova. 2020. Evaluation of manual and non-manual components for sign language recognition. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 6073-6078, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A data augmentation approach for sign-language-to-text translation in-thewild", "authors": [ { "first": "Fabrizio", "middle": [], "last": "Nunnari", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Espa\u00f1a-Bonet", "suffix": "" }, { "first": "Eleftherios", "middle": [], "last": "Avramidis", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 3rd Conference on Language, Data and Knowledge. 
Conference on Language, Data and Knowledge", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabrizio Nunnari, Cristina Espa\u00f1a-Bonet, and Eleftherios Avramidis. 2021. A data augmentation approach for sign-language-to-text translation in-the-wild. In Proceedings of the 3rd Conference on Language, Data and Knowledge (LDK-2020), September 1-3, Zaragoza, Spain, volume 93 of OpenAccess Series in Informatics (OASIcs). Dagstuhl Publishing.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A rule-based translation from written spanish to spanish sign language glosses", "authors": [ { "first": "Jordi", "middle": [], "last": "Porta", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "L\u00f3pez-Colino", "suffix": "" }, { "first": "Javier", "middle": [], "last": "Tejedor", "suffix": "" }, { "first": "Jos\u00e9", "middle": [], "last": "Col\u00e1s", "suffix": "" } ], "year": 2014, "venue": "Computer Speech & Language", "volume": "28", "issue": "", "pages": "788--811", "other_ids": { "DOI": [ "10.1016/j.csl.2013.10.003" ] }, "num": null, "urls": [], "raw_text": "Jordi Porta, Fernando L\u00f3pez-Colino, Javier Tejedor, and Jos\u00e9 Col\u00e1s. 2014. A rule-based translation from written Spanish to Spanish sign language glosses. Computer Speech & Language, 28:788-811.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A call for clarity in reporting BLEU scores", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "186--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. 
Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "SignON: Bridging the gap between Sign and Spoken Languages", "authors": [ { "first": "H", "middle": [], "last": "Saggion", "suffix": "" }, { "first": "D", "middle": [], "last": "Shterionov", "suffix": "" }, { "first": "G", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "T", "middle": [], "last": "Van De Cruys", "suffix": "" }, { "first": "V", "middle": [], "last": "Vandeghinste", "suffix": "" }, { "first": "J", "middle": [], "last": "Blat", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 37th Conference of the Spanish Society for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Saggion, D. Shterionov, G. Labaka, T. Van de Cruys, V. Vandeghinste, and J. Blat. 2021. SignON: Bridging the gap between Sign and Spoken Languages. In Proceedings of the 37th Conference of the Spanish Society for Natural Language Processing, M\u00e1laga, Spain (held on-line). SEPLN.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Language resources for spanish -spanish sign language (lse) translation", "authors": [ { "first": "Rub\u00e9n", "middle": [], "last": "San-Segundo", "suffix": "" }, { "first": "Ver\u00f3nica", "middle": [], "last": "L\u00f3pez", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Mart\u00edn", "suffix": "" }, { "first": "David", "middle": [], "last": "S\u00e1nchez", "suffix": "" }, { "first": "Adolfo", "middle": [], "last": "Garc\u00eda", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Languages Technologies", "volume": "", "issue": "", "pages": "208--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rub\u00e9n San-Segundo, Ver\u00f3nica L\u00f3pez, Raquel Mart\u00edn, David S\u00e1nchez, and Adolfo Garc\u00eda. 2010. 
Language resources for Spanish - Spanish sign language (LSE) translation. In Proceedings of the 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Languages Technologies, pages 208-211.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Linguistic input features improve neural machine translation", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the First Conference on Machine Translation", "volume": "1", "issue": "", "pages": "83--91", "other_ids": { "DOI": [ "10.18653/v1/W16-2209" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich and Barry Haddow. 2016. Linguistic input features improve neural machine translation. In Proceedings of the First Conference on Machine Translation: Volume 1, Research Papers, pages 83-91, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A study of translation edit rate with targeted human annotation", "authors": [ { "first": "Matthew", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Linnea", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "223--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. 
pages 223-231.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Parsing with compositional vector grammars", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "455--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing with compositional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 455-465, Sofia, Bulgaria. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Text2sign: Towards sign language production using neural machine translation and generative adversarial networks", "authors": [ { "first": "Stephanie", "middle": [], "last": "Stoll", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Necati Cihan Camg\u00f6z", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Hadfield", "suffix": "" }, { "first": "", "middle": [], "last": "Bowden", "suffix": "" } ], "year": 2020, "venue": "Int. J. Comput. Vis", "volume": "128", "issue": "4", "pages": "891--908", "other_ids": { "DOI": [ "10.1007/s11263-019-01281-2" ] }, "num": null, "urls": [], "raw_text": "Stephanie Stoll, Necati Cihan Camg\u00f6z, Simon Hadfield, and Richard Bowden. 2020. Text2sign: Towards sign language production using neural machine translation and generative adversarial networks. Int. J. Comput. 
Vis., 128(4):891-908.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17", "volume": "", "issue": "", "pages": "6000--6010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pages 6000-6010.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Aditya Barua, and Colin Raffel. 2021. 
mT5: A massively multilingual pre-trained text-to-text transformer", "authors": [ { "first": "Linting", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Mihir", "middle": [], "last": "Kale", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" } ], "year": null, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "483--498", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.41" ] }, "num": null, "urls": [], "raw_text": "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Better sign language translation with STMC-transformer", "authors": [ { "first": "Kayo", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Read", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "5975--5989", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.525" ] }, "num": null, "urls": [], "raw_text": "Kayo Yin and Jesse Read. 2020. Better sign language translation with STMC-transformer. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5975-5989, Barcelona, Spain (Online). 
International Committee on Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Synthetic signing for the deaf: eSIGN", "authors": [ { "first": "I", "middle": [], "last": "Zwitserlood", "suffix": "" }, { "first": "M", "middle": [], "last": "Verlinden", "suffix": "" }, { "first": "J", "middle": [], "last": "Ros", "suffix": "" }, { "first": "Sanny", "middle": [], "last": "Van Der", "suffix": "" }, { "first": "", "middle": [], "last": "Schoot", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Zwitserlood, M. Verlinden, J. Ros, and Sanny van der Schoot. 2004. Synthetic signing for the deaf: eSIGN.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Syntax-Aware Text2Gloss model Figure 2: Lexical dependency tree diagram of the sentence \"On the weekend it gets a little warmer\". Key to tags: ep = expletive es, mo = modifier, nk = noun kernel element, pd = predicate Beam Search Decoding with 5 beams.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Performance Metrics evolution during training.", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "num": null, "type_str": "table", "content": "
NEBEL IX
(EN) BUT IN-COURSE FOG HIGH FOG IX
performance by 1.2 BLEU score (Armengol Estap\u00e9 and Ruiz Costa-Juss\u00e0, 2021) over a baseline - and when using arbitrary features derived from neural models
", "html": null, "text": "T2G production examples. Spoken: Sp\u00e4ter breiten sich aber Nebel oder Hochnebelfelder aus. (EN) Later, however, fog or high-fog fields are widening. Gloss: ABER IM-VERLAUF NEBEL HOCH" }, "TABREF1": { "num": null, "type_str": "table", "content": "
        #Samples  #Words  #Glosses
Train   7096      2887    1085
Dev     519       951     393
Test    642       1001    411
", "html": null, "text": "Information on the data partitions" }, "TABREF2": { "num": null, "type_str": "table", "content": "
           SacreBLEU\u2191  TER\u2193       ROUGE-L (F1-score)\u2191  METEOR\u2191
Syntax     53.52 (400)   0.722 (330)  0.467 (115)            0.407 (190)
No-syntax  51.06 (485)   0.814 (485)  0.461 (140)            0.424 (210)
Diff       2.46 (85)     -0.092 (155) 0.006 (35)             -0.017 (-20)
", "html": null, "text": "Best scores for the models. This table contains the best values for all metrics after convergence. The values in parentheses denote the epoch at which each value was produced." } } } }