88 F-measure) with Reddit's low-recall rank-by-controversy 2) we ensure popularity prediction != controversy prediction" ], "page_nums": [ 11 ], "images": [] }, "5": { "title": "Labeled Dataset Statistics", "text": [ "Balanced, binary classification with controversial/non-controversial labeling" ], "page_nums": [ 12 ], "images": [ "figure/image/1337-Table1-1.png" ] }, "6": { "title": "Some posting time text only results", "text": [ "(this, plus timestamp, is our baseline)", "Rather than passing BERT vectors to a bi-LSTM, it works about as well, and faster, to mean-pool, dimension-reduce, and feed to a linear classifier", "Our hand-crafted features + word2vec match BERT-based algorithms on 3 of 6 subreddits" ], "page_nums": [ 13, 14 ], "images": [ "figure/image/1337-Table2-1.png" ] }, "8": { "title": "Does the shape of the tree predict controversy", "text": [ "Usually yes, even after controlling for the rate of incoming comments.", "Tree-shape features: max depth/total comment ratio; proportion of comments that were top-level (i.e., made in direct reply to the original post); average node depth; average branching factor; proportion of top-level comments replied to; Gini coefficient of replies to top-level comments (to measure how clustered the total discussion is); Wiener Index of virality (average pairwise path length between all pairs of nodes)", "Rate features: total number of comments; logged time between OP and the first reply; average logged parent-child reply time (over all pairs of comments)", "[binary logistic regression, LL-Ratio test p<.05 in 5/6 communities]" ], "page_nums": [ 16 ], "images": [] }, "9": { "title": "Prediction results incorporating comment features", "text": [ "4 comments, on average" ], "page_nums": [ 17, 18 ], "images": [] }, "11": { "title": "Takeaways modulo caveats see paper", "text": [ "We advocate an early-detection, community-specific approach to controversial-post prediction", "We can use features of the content and structure of the early discussion tree", "Early detection outperforms posting-time-only features in 5 of 6 Reddit communities tested, even for quite small early-time windows", "Early content is most effective, but tree-shape and rate features transfer across domains better" ], "page_nums": [ 21 ], "images": [] } }, "paper_title": "Something's Brewing! Early Prediction of Controversy-causing Posts from Discussion Features", "paper_id": "1337", "paper": { "title": "Something's Brewing! Early Prediction of Controversy-causing Posts from Discussion Features", "abstract": "Controversial posts are those that split the preferences of a community, receiving both significant positive and significant negative feedback. Our inclusion of the word "community" here is deliberate: what is controversial to some audiences may not be so to others. Using data from several different communities on reddit.com, we predict the ultimate controversiality of posts, leveraging features drawn from both the textual content and the tree structure of the early comments that initiate the discussion. We find that even when only a handful of comments are available, e.g., the first 5 comments made within 15 minutes of the original post, discussion features often add predictive capacity to strong content-and-rate-only baselines.
Additional experiments on domain transfer suggest that conversation-structure features often generalize to other communities better than conversation-content features do.", "text": [ { "id": 0, "string": "Introduction Controversial content - that which attracts both positive and negative feedback - is not necessarily a bad thing; for instance, bringing up a point that warrants spirited debate can improve community health." }, { "id": 1, "string": "1 But regardless of the nature of the controversy, detecting potentially controversial content can be useful for both community members and community moderators." }, { "id": 2, "string": "Ordinary users, and in particular new users, might appreciate being warned that they need to add more nuance or qualification to their earlier posts." }, { "id": 3, "string": "2 Moderators could be alerted that the discussion ensuing from some content might need monitoring." }, { "id": 4, "string": "Alternately, they could draw community attention to issues possibly needing resolution: indeed, some sites already provide explicit sorting by controversy." }, { "id": 5, "string": "We consider the controversiality of a piece of content in the context of the community in which it is shared, because what is controversial to some audiences may not be so to others (Chen and Berger, 2013; Jang et al., 2017; Basile et al., 2017)." }, { "id": 6, "string": "For example, we identify "break up" as a controversial concept in the relationships subreddit (a subreddit is a subcommunity hosted on the Reddit discussion site), but the same topic is associated with a lack of controversy in the AskWomen subreddit (where questions are posed for women to answer)." }, { "id": 7, "string": "Similarly, topics that are controversial in one community may simply not be discussed in another: our analysis identifies "crossfit", a type of workout, as one of the most controversial concepts in the subreddit Fitness." }, { "id": 8, "string": "However, while controversial topics may be community-specific, community moderators still may not be able to determine a priori which posts will attract controversy." }, { "id": 9, "string": "Many factors cannot be known ahead of time, e.g., a fixed set of topics may not be dynamic enough to handle a sudden current event, or the specific set of users that happen to be online at a given time may react in unpredictable ways." }, { "id": 10, "string": "Indeed, experiments have shown that, to a certain extent, the influence of early opinions on subsequent opinion dynamics can override the influence of an item's actual content (Salganik et al., 2006; Wu and Huberman, 2008; Muchnik et al., 2013; Weninger et al., 2015)." }, { "id": 11, "string": "Hence, we propose an early-detection approach that uses not just the content of the initiating post, but also the content and structure of the initial responding comments." }, { "id": 12, "string": "In doing so, we unite streams of heretofore mostly disjoint research programs: see Figure 1." }, { "id": 13, "string": "[Figure 1 flowchart:] Is the task to determine whether a textual item will provoke controversy?" }, { "id": 14, "string": "No, whether a topic (or entity/hashtag/word) has been controversial [a distinction also made by Addawood et al.
}, { "id": 15, "string": "(2017) ] (Popescu and Pennacchiotti, 2010; Choi et al., 2010; Cao et al., 2015; Lourentzou et al., 2015; Addawood et al., 2017; Al-Ayyoub et al., 2017; Garimella et al., 2018) No, whether a conversation contained disagreement (Mishne and Glance, 2006; Yin et al., 2012; Allen et al., 2014; Wang and Cardie, 2014) or mapping the disagreements (Awadallah et al., 2012; Marres, 2015; Borra et al., 2015; Liu et al., 2018) No, the task is, for the given textual item, predict antisocial behavior in the ensuing discussion (Zhang et al., 2018b,a) , or subsequent comment volume/popularity/structure (Szabo and Huberman, 2010; Kim et al., 2011; Tatar et al., 2011; Backstrom et al., 2013; He et al., 2014; Zhang et al., 2018b) , or eventual post article score (Rangwala and Jamali, 2010; Szabo and Huberman, 2010) ,; but all where, like us, the paradigm is early detection No, only info available at the item's creation (Dori-Hacohen and Allan, 2013; Mejova et al., 2014; Klenner et al., 2014; Addawood et al., 2017; Timmermans et al., 2017; Rethmeier et al., 2018; Kaplun et al., 2018) or the entire ensuing revision/discussion history (Rad and Barbosa, 2012; ." }, { "id": 16, "string": "N.B." }, { "id": 17, "string": ": for Wikipedia articles, often controversy=non-vandalism reverts (Yasseri et al., 2012) ... although some, like us, treat controversy as domain-specific (Jang et al., 2017) and test domain transfer (Basile et al., 2017) ...using early reactions, which, recall, Salganik et al." }, { "id": 18, "string": "(2006) observe to be sometimes crucial?" }, { "id": 19, "string": "... and testing how well text/earlyconversation-structure features transfer across communities?" }, { "id": 20, "string": "This is our work." }, { "id": 21, "string": "No, early reversions (Sumi et al., 2011) aren't conversations as usually construed Figure 1 : How our research relates to prior work." }, { "id": 22, "string": "sion trees across six subreddits, we find that incorporating structural and textual features of budding comment trees improves predictive performance relatively quickly; for example, in one of the communities we consider, adding features taken from just the first 15 minutes of discussion significantly increases prediction performance, even though the average thread only contains 4 comments by that time (∼4% of all eventual comments)." }, { "id": 23, "string": "Additionally, we study feature transferability across domains (in our case, communities), training on one subreddit and testing on another." }, { "id": 24, "string": "While text features of comments carry the greatest predictive capacity in-domain, we find that discussion-tree and -rate features are less brittle, transferring better between communities." }, { "id": 25, "string": "Our results not only suggest the potential usefulness of granting controversy-prediction algorithms a small observation window to gauge community feedback, but also demonstrate the utility of our expressive feature set for early discussions." }, { "id": 26, "string": "Datasets Given our interest in community-specific controversiality, we draw data from reddit.com, which hosts several thousand discussion subcom-munities (subreddits) covering a variety of interests." }, { "id": 27, "string": "Our dataset, which attempts to cover all public posts and comments from Reddit's inception in 2007 until Feb. 2014, is derived from a combination of Jason Baumgartner's posts and comments sets and our own scraping efforts to fill in dataset gaps." 
}, { "id": 28, "string": "The result is a mostly-complete set of posts alongside associated comment trees." }, { "id": 29, "string": "3 We focus on six text-based 4 subreddits ranging over a variety of styles and topics: two Q&A subreddits: AskMen (AM) and AskWomen (AW); a specialinterest community, Fitness (FT); and three advice communities: LifeProTips (LT), personalfinance (PF), and relationships (RL)." }, { "id": 30, "string": "Each comprises tens of thousands of posts and hundreds of thousands to millions of comments." }, { "id": 31, "string": "In Reddit (similarly to other sites allowing explicit negative feedback, such as YouTube, imgur, 9gag, etc." }, { "id": 32, "string": "), users can give posts upvotes, increas-/r/LifeProTips (LT) 63 comments, 72% upvoted LPT: Check the Facebook app to find the owner of a lost smartphone or simply call her 'mum'?" }, { "id": 33, "string": "Also slightly less intrusive IMO." }, { "id": 34, "string": "comments, 72% upvoted LPT: get your pets to take their medicine with butter." }, { "id": 35, "string": "This is much better!" }, { "id": 36, "string": "I have been trying ice cream but my dog is too smart." }, { "id": 37, "string": "comments, 93% upvoted LPT: For a cleaner home with little effort, never leave a room empty-handed." }, { "id": 38, "string": "There is almost always something you can put back in its place on your way." }, { "id": 39, "string": "Woah." }, { "id": 40, "string": "237 comments, 71% upvoted **tl;dr** quit whining cuz r/fitness didn't respond they way you wanted..." }, { "id": 41, "string": "Unfortunately, I doubt this kind of post is going to change anything... comments, 63% upvoted Interesting New Study: Red Meat Linked With Increased Mortality Risk." }, { "id": 42, "string": "Thought this study is worth a discussion... Man, it seems like everything these days will lower your life span." }, { "id": 43, "string": "comments, 90% upvoted What type of snack should I have preworkout to avoid lethargy at the gym?" }, { "id": 44, "string": "I don't wanna be sluggish at the gym... Apples slices with peanut butter." }, { "id": 45, "string": "comments, 57% upvoted Tipping as legal discrimination: Black servers get tipped 3.25% less... [LINK] ..." }, { "id": 46, "string": "Tipping should be abandoned anyway, it's ridiculous.... comments, 62% upvoted Am I crazy for wanting this car/payment?" }, { "id": 47, "string": "Short of it .. car is $45,000..." }, { "id": 48, "string": "Needing a car for work and purchasing $45k car are two entirely different things." }, { "id": 49, "string": "comments, 97% upvoted Accumulating wealth via homeownership vs accumulating wealth as a renter." }, { "id": 50, "string": "One of the often cited benefits of homeownership ... Use this handy calculator from the NY Times." }, { "id": 51, "string": "If you're dilligent..." }, { "id": 52, "string": "Figure 2 : Examples of two controversial and one non-controversial post from three communities." }, { "id": 53, "string": "Also shown are the text of the first reply, the number of comments the post received, and its percent-upvoted." }, { "id": 54, "string": "ing a post's score, or downvotes, decreasing it." }, { "id": 55, "string": "5 While the semantics of up/down votes may vary based on community (and, indeed, each user may have their own views on what content should be upvoted and what downvoted), in aggregate, posts that split community reaction fundamentally differ from those that produce agreement." 
}, { "id": 56, "string": "Thus, in principle, posts that have unambiguously received both many upvotes and many downvotes should be deemed the most controversial." }, { "id": 57, "string": "Percent Upvoted on Reddit." }, { "id": 58, "string": "We quantify the relative proportion of upvotes and downvotes on a post using percent-upvoted, a measure provided by Reddit that gives an estimate of the percent of all votes on a post that are upvotes." }, { "id": 59, "string": "In practice, exact values of percent-upvoted are not directly available; the site adds \"vote fuzzing\" to fight vote manipulation." }, { "id": 60, "string": "6 To begin with, we first discard posts with fewer than 30 comments." }, { "id": 61, "string": "7 Then, we query for the noisy percent-upvoted from each post ten times using the Reddit API, and take a mean to produce a final estimate." }, { "id": 62, "string": "Post Outcomes." }, { "id": 63, "string": "To better understand the interplay between upvotes and downvotes, we first explore the outcomes for posts both in terms of percent-upvoted and the number of comments; do-5 Vote timestamps are not publicly available." }, { "id": 64, "string": "6 Prior to Dec. 2016, vote information was fuzzed according to a different algorithm; however, vote statistics for all posts were recomputed according to a new algorithm that, according to a reddit moderator, can \"actually be trusted;\" https://goo.gl/yHWeJp 7 The intent is to only consider posts receiving enough community attention for us to reliably compare upvote counts with downvotes." }, { "id": 65, "string": "We use number of comments as a proxy for aggregate attention because Reddit does not surface the true number of votes." }, { "id": 66, "string": "/r/Fitness (FT) /r/personalfinance (PF) ing so on a per-community basis has the potential to surface any subreddit-specific effects." }, { "id": 67, "string": "In addition, we compute the median number of comments for posts falling into each bin of the histogram." }, { "id": 68, "string": "The resulting plots are given in Figure 3 ." }, { "id": 69, "string": "In general, posts receive mostly positive feedback in aggregate, though the mean percentupvoted varies between communities (Table 1) ." }, { "id": 70, "string": "There is also a positive correlation between a post's percent-upvoted and the number of comments it receives." }, { "id": 71, "string": "This relationship is unsurprising, given that Reddit displays higher rated posts to more users." }, { "id": 72, "string": "A null hypothesis, which we compare to empirically in our prediction experiments, is that popularity and percent-upvoted simply carry the same information." }, { "id": 73, "string": "However, we have reason to doubt this null hypothesis, as quite a few posts receive significant attention despite having a low percentupvoted ( Figure 2 )." }, { "id": 74, "string": "Assigning Controversy Labels To Posts." }, { "id": 75, "string": "We assign binary controversy labels (i.e., relatively controversial vs. relatively non-controversial) to posts according to the following process: first, we discard posts where the observed variability across 10 API queries for percent-upvoted exceeds 5%; in these cases, we assume that there are too few total votes for a stable estimate." }, { "id": 76, "string": "Next, we discard posts where neither the observed upvote ratio nor the observed score 8 vary at all; in these cases, we cannot be sure that the upvote ratio is insensitive to the vote fuzzing function." 
}, { "id": 77, "string": "9 Fi- nally, we sort each community's surviving posts by upvote percentage, and discard the small number of posts with percent-upvoted below 50%." }, { "id": 78, "string": "10 The top quartile of posts according to this ranking (i.e., posts with mostly only upvotes) are labeled \"non-controversial.\"" }, { "id": 79, "string": "The bottom quartile of posts, where the number of downvotes cannot exceed but may approach the number of upvotes, are labeled as \"controversial.\"" }, { "id": 80, "string": "For each community, this process yields a balanced, labeled set of controversial/non-controversial posts." }, { "id": 81, "string": "Table 1 contains the number of posts/comments for each community after the above filtration process, and the percent-upvoted for the controversial/noncontroversial sets." }, { "id": 82, "string": "Quantitative Validation of Labels Reddit provides a sort-by-controversy function, and we wanted to ensure that our controversy labeling method aligned with this ranking." }, { "id": 83, "string": "11 We contacted Reddit itself, but they were unable to provide details." }, { "id": 84, "string": "Hence, we scraped the 1K most controversial posts according to Reddit (1K is the max that Reddit provides) for each community over the past year (as of October 2018)." }, { "id": 85, "string": "Next, we sampled posts that did not appear on Reddit's controversial list in the year prior to October 2018 to create a 1:k ratio sample of Reddit-controversial posts and non-Reddit-controversial posts for k ∈ {1, 2, 3}, k = 3 being the most difficult setting." }, { "id": 86, "string": "Then, we applied the filtering/labeling method described above, and measured how well our process matched Reddit's ranking scheme, i.e., the \"controversy\" label applied by our method matched the \"controversy\" label assigned by Reddit." }, { "id": 87, "string": "Our labeling method achieves high precision in identifying controversial/non-controversial posts." }, { "id": 88, "string": "While a large proportion of posts are discarded, the labels assigned to surviving posts match those assigned by Reddit with the following F-measures at k = 3 (the results for k = 1, 2 are higher): 12 AM AW FT LT PF RL In all cases, the precision for the non-controversial label is perfect, i.e., our filtration method never labeled a Reddit-controversial post as noncontroversial." }, { "id": 89, "string": "The precision of the controversy label was also high, but imperfect; errors could be a result of, e.g., Reddit's controversy ranking being limited to 1K posts, or using internal data, etc." }, { "id": 90, "string": "Figure 2 gives examples of controversial and noncontroversial posts from three of the communities we consider, alongside the text of the first comment made in response to those posts." }, { "id": 91, "string": "Topical differences." }, { "id": 92, "string": "A priori, we expect that the topical content of posts may be related to how controversial they become (see prior work in Fig." }, { "id": 93, "string": "1 )." }, { "id": 94, "string": "We ran LDA (Blei et al., 2003) with 10 topics on posts from each community independently, and compared the differences in mean topic frequency between controversial and non-controversial posts." 
}, { "id": 95, "string": "We observe communityspecific patterns, e.g., in relationships, posts about family (top words in topic: \"family parents mom dad\") are less controversial than those associated with romantic relationships (top words: \"relationship, love, time, life\"); in AskWomen, a gender topic (\"women men woman male\") tends to be associated with more controversy than an advice-seeking topic (\"im dont feel ive\") Wording differences." }, { "id": 96, "string": "We utilize Monroe et al." }, { "id": 97, "string": "'s (2008) algorithm for comparing language usage in two bodies of text; the method places a Dirichlet prior over n-grams (n=1,2,3) and estimates Zscores on the difference in rate-usage between controversial and non-controversial posts." }, { "id": 98, "string": "This analysis reveals many community-specific patterns, e.g., phrases associated with controversy include \"crossfit\" in Fitness, \"cheated on my\" in relationships, etc." }, { "id": 99, "string": "What's controversial in one community may be non-controversial in another, e.g., \"my parents\" is associated with controversy in personalfinance (e.g., \"live with my parents\") but strongly associated with lack of controversy in relationships (e.g., \"my parents got divorced\")." }, { "id": 100, "string": "We also observe that some communities share commonalities in phrasing, e.g., \"do you think\" is associated with controversy in both AskMen and AskWomen, whereas \"what are some\" is associated with a lack of controversy in both." }, { "id": 101, "string": "Qualitative Validation of Labels Early Discussion Threads We now analyze comments posted in early discussion threads for controversial vs. noncontroversial posts." }, { "id": 102, "string": "In this section, we focus on comments posted within one hour of the original submission, although we consider a wider range of times in later experiments." }, { "id": 103, "string": "Comment Text." }, { "id": 104, "string": "We mirrored the n-gram analysis conducted in the previous section, but, rather than the text of the original post, focused on the text of comments." }, { "id": 105, "string": "Many patterns persist, but the conversational framing changes, e.g., \"I cheated\" in the posts of relationships is mirrored by \"you cheated\" in the comments." }, { "id": 106, "string": "Community differences again appear: e.g., \"birth control\" indicated controversy when it appears in the comments for relationships, but not for AskWomen." }, { "id": 107, "string": "Comment Tree Structure." }, { "id": 108, "string": "While prior work in early prediction mostly focuses on measuring rate of early responses, we postulate that more expressive, structural features of conversation trees may also carry predictive capacity." }, { "id": 109, "string": "Figure 4 gives samples of conversation trees that developed on Reddit posts within one hour of the original post being made." }, { "id": 110, "string": "There is significant diversity among tree size and shape." }, { "id": 111, "string": "To quantify these differences, we introduce two sets of features: C-RATE features, which encode the rate of commenting/number of comments; 13 and C-TREE features, which encode structural aspects of discussion trees." }, { "id": 112, "string": "14 We then examine whether or not tree features correlate with controversy after controlling for popularity." 
}, { "id": 113, "string": "Using binary logistic regression, after controlling for C-RATE, C-TREE features extracted from comments made within one hour of the original post improve model fit in all cases except for personalfinance (p < .05, LL-Ratio test)." }, { "id": 114, "string": "We repeated the experiment, but also controlled for eventual popularity 15 in addition to C-RATE, and observed the same result." }, { "id": 115, "string": "This provides evidence that structural features of conversation trees are predictive, though which tree feature is most important according to these experiments is community-specific." }, { "id": 116, "string": "For example, for the models without eventual popularity information, the C-TREE feature with largest coefficient in AskWomen and AskMen was the max-depth ratio, but it was the Wiener index in Fitness." }, { "id": 117, "string": "Early Prediction of Controversy We shift our focus to the task of predicting controversy on Reddit." }, { "id": 118, "string": "In general, tools that predict controversy are most useful if they only require information available at the time of submission or as soon as possible thereafter." }, { "id": 119, "string": "We note that while the causal relationship between vote totals and comment threads is not entirely clear (e.g., perhaps the comment threads cause more up/down votes on the post), predicting the ultimate outcome of posts is still useful for community moderators." }, { "id": 120, "string": "Experimental protocols." }, { "id": 121, "string": "All classifiers are bi-13 Specifically: total number of comments, the logged time between OP and the first reply, and the average logged parentchild reply time over pairs of comments." }, { "id": 122, "string": "14 Specifically: max depth/total comment ratio, proportion of comments that were top-level (i.e., made in direct reply to the original post), average node depth, average branching factor, proportion of top-level comments replied to, Gini coefficient of replies to top-level comments (to measure how \"clustered\" the total discussion is), and Wiener Index of virality (which measures the average pairwise path-length between all nodes in the conversation tree (Wiener, 1947; Goel et al., 2015) )." }, { "id": 123, "string": "15 We added in the logged number of eventual comments, and also whether or not the post received an above-median number of comments." }, { "id": 124, "string": "nary (i.e., controversial vs. non-controversial) and, because the classes are in 50/50 balance, we compare algorithms according to their accuracy." }, { "id": 125, "string": "Experiments are conducted as 15-fold cross validation with random 60/20/20 train/dev/test splits, where the splits are drawn to preserve the 50/50 label distribution." }, { "id": 126, "string": "For non-neural, feature-based classifiers, we use linear models." }, { "id": 127, "string": "16 For BiLSTM models, 17 we use Tensorflow (Abadi et al., 2015) ." }, { "id": 128, "string": "Whenever a feature is ill-defined (e.g., if it is a comment text feature, but there are no comments at time t) the column mean of the training set for each cross-validation split is substituted." }, { "id": 129, "string": "Similarly, if a comment's body is deleted, it is ignored by text processing algorithms." 
}, { "id": 130, "string": "We perform both Wilcoxon signed-rank tests (Demšar, 2006) and two-sided corrected resampled t-tests (Nadeau and Bengio, 2000) to estimate statistical significance, taking the maximum of the two resulting p-values to err on the conservative side and reduce the chance of Type I error." }, { "id": 131, "string": "Comparing Text Models The goal of this section is to compare text-only models for classifying controversial vs. noncontroversial posts." }, { "id": 132, "string": "Algorithms are given access to the full post titles and bodies, unless stated otherwise." }, { "id": 133, "string": "HAND." }, { "id": 134, "string": "We consider a number of hand-designed features related to the textual content of posts inspired by Tan et al." }, { "id": 135, "string": "(2016) ." }, { "id": 136, "string": "18 TFIDF." }, { "id": 137, "string": "We encode posts according to tfidf feature vectors." }, { "id": 138, "string": "Words are included in the vocabulary if they appear more than 5 times in the corresponding cross-validation split." }, { "id": 139, "string": "16 We cross-validate regularization strength 10ˆ(-100,-5,-4,-3,-2,-1,0,1), model type (SVM vs. Logistic L1 vs. Logistic L2 vs. Logistic L1/L2), and whether or not to apply feature standardization for each feature set and cross-validation split separately." }, { "id": 140, "string": "These are trained using lightning (http: //contrib.scikit-learn.org/lightning/)." }, { "id": 141, "string": "17 We optimize using Adam (Kingma and Ba, 2014) with LR=.001 for 20 epochs, apply dropout with p = .2, select the model checkpoint that performs best over the validation set, and cross-validate the model's dimension (128 vs. 256) and the number of layers (1 vs. 2) separately for each crossvalidation split." }, { "id": 142, "string": "18 Specifically: for the title and text body separately, length, type-token ratio, rate of first-person pronouns, rate of secondperson pronouns, rate of question-marks, rate of capitalization, and Vader sentiment (Hutto and Gilbert, 2014) ." }, { "id": 143, "string": "Combining the post title and post body: number of links, number of Reddit links, number of imgur links, number of sentences, Flesch-Kincaid readability score, rate of italics, rate of boldface, presence of a list, and the rate of word use from 25 Empath wordlists (Fast et al., 2016) , which include various categories, such as politeness, swearing, sadness, etc." }, { "id": 144, "string": "W2V." }, { "id": 145, "string": "We consider a mean, 300D word2vec (Mikolov et al., 2013) embedding representation, computed from a GoogleNews corpus." }, { "id": 146, "string": "ARORA." }, { "id": 147, "string": "A slight modification of W2V, proposed by Arora et al." }, { "id": 148, "string": "(2017) , serves as a \"tough to beat\" baseline for sentence representations." }, { "id": 149, "string": "LSTM." }, { "id": 150, "string": "We train a Bi-LSTM (Graves and Schmidhuber, 2005 ) over the first 128 tokens of titles + post text, followed by a mean pooling layer, and then a logistic regression layer." }, { "id": 151, "string": "The LSTM's embedding layer is initialized with the same word2vec embeddings used in W2V." }, { "id": 152, "string": "Markdown formatting artifacts are discarded." }, { "id": 153, "string": "BERT-LSTM." }, { "id": 154, "string": "Recently, features extracted from fixed, pretrained, neural language models have resulted in high performance on a range of language tasks." 
}, { "id": 155, "string": "Following the recommendations of §5.4 of Devlin et al." }, { "id": 156, "string": "(2019) , we consider representing posts by extracting BERT-Large embeddings computed for the first 128 tokens of titles + post text; we average the final 4 layers of the 24-layer, pretrained Transformer-decoder network (Vaswani et al., 2017) ." }, { "id": 157, "string": "These token-specific vectors are then passed to a Bi-LSTM, a mean pooling layer, and a logistic classification layer." }, { "id": 158, "string": "We keep markdown formatting artifacts because BERT's token vocabulary are WordPiece subtokens (Wu et al., 2016) , which are able to incorporate arbitrary punctuation without modification." }, { "id": 159, "string": "BERT-MP." }, { "id": 160, "string": "Instead of training a Bi-LSTM over BERT features, we mean pool over the first 128 tokens, apply L2 normalization to the resulting representations, reduce to 100 dimensions using PCA, 19 and train a linear classifier on top." }, { "id": 161, "string": "BERT-MP-512." }, { "id": 162, "string": "The same as BERT-MP, except the algorithm is given access to 512 tokens (the maximum allowed by BERT-Large) instead of 128." }, { "id": 163, "string": "Results: Table 2 gives the performance of each text classifier for each community." }, { "id": 164, "string": "In general, the best performing models are based on the BERT features, though HAND+W2V performs well, too." }, { "id": 165, "string": "However, no performance gain is achieved when adding hand designed features to BERT." }, { "id": 166, "string": "This may be because BERT's subtokenization scheme incorporates punctuation, link urls, etc., which are similar to the features captured by HAND." }, { "id": 167, "string": "Adding an LSTM over BERT features is comparable to mean pooling over the sequence; similarly, considering 128 tokens vs. 512 tokens results in comparable performance." }, { "id": 168, "string": "Based on the results of this experiment, we adopt BERT-MP-512 to represent text in experiments for the rest of this work." }, { "id": 169, "string": "Post-time Metadata Many non-content factors can influence community reception of posts, e.g., Hessel et al." }, { "id": 170, "string": "(2017) find that when a post is made on Reddit can significantly influence its eventual popularity." }, { "id": 171, "string": "TIME." }, { "id": 172, "string": "These features encode when a post was created." }, { "id": 173, "string": "These include indicator variables for year, month, day-of-week, and hour-of-day." }, { "id": 174, "string": "AUTHOR." }, { "id": 175, "string": "We add an indicator variable for each user that appears at least 3 times in the training set, encoding the hypothesis that some users may simply have a greater propensity to post controversial content." }, { "id": 176, "string": "The results of incorporating the metadata features on top of TEXT are given in Table 3 ." }, { "id": 177, "string": "While incorporating TIME features on top of TEXT results in consistent improvements across all communities, incorporating author features on top of TIME+TEXT does not." }, { "id": 178, "string": "We adopt our highest performing models, TEXT+TIME, as a strong posttime baseline." }, { "id": 179, "string": "Early Discussion Features Basic statistics of early comments." }, { "id": 180, "string": "We augment the post-time features with early-discussion feature sets by giving our algorithms access to comments from increasing observation periods." 
}, { "id": 181, "string": "Specifically, we train linear classifiers by combining our best post-time feature set (TEXT+TIME) with features derived from comment trees available after t minutes, and sweep t from t = 15 to t = 180 minutes in 15 minute intervals." }, { "id": 182, "string": "Figure 6 plots the median number of comments available per thread at different t values for each community." }, { "id": 183, "string": "The amount of data available for the early-prediction algorithms to consider varies significantly, e.g., while AskMen threads have a median 10 comments available at 45 minutes, Life-ProTips posts do not reach that threshold even after 3 hours, and we thus expect that it will be a harder setting for early prediction." }, { "id": 184, "string": "We see, too, that even our maximal 3 hour window is still early in a post's lifecycle, i.e., posts tend to receive significant attention afterwards: only 15% (LT) to 32% (AW) of all eventual comments are available per thread at this time, on average." }, { "id": 185, "string": "Figure 7 gives the distribution of the number of comments available for controversial/non-controversial posts on AskWomen at t = 60 minutes." }, { "id": 186, "string": "As with the other communities we consider, the distribution of number of available posts is not overly-skewed, i.e., most posts in our set (we filtered out posts with less than 30 comments) get at least some early comments." }, { "id": 187, "string": "We explore a number of feature sets based on early comment trees (comment feature sets are prefixed with \"C-\"): C-RATE and C-TREE." }, { "id": 188, "string": "We described these in §3." }, { "id": 189, "string": "C-TEXT." }, { "id": 190, "string": "For each comment available at a given observation period, we extract the BERT-MP-512 embedding." }, { "id": 191, "string": "Then, for each conversation thread, we take a simple mean over all comment representations." }, { "id": 192, "string": "While we tried several more expressive means of encoding the text of posts in comment trees, this simple method proved surprisingly effective." }, { "id": 193, "string": "20 Sweeping over time." }, { "id": 194, "string": "Figure 5 gives the performance of the post-time baseline combined with comment features while sweeping t from 15 to 180 minutes." }, { "id": 195, "string": "For five of the six communities we consider, the performance of the comment feature classifier significantly (p < .05) ex- ceeds the performance of the post-time baseline in less than three hours of observation, e.g., in the case of AskMen and AskWomen, significance is achieved within 15 and 45 minutes, respectively." }, { "id": 196, "string": "In general, C-RATE improves only slightly over post only, even though rate features have proven useful in predicting popularity in prior work (He et al., 2014) ." }, { "id": 197, "string": "While adding C-TREE also improves performance, comment textual content is the biggest source of predictive gain." }, { "id": 198, "string": "These results demonstrate i) that incorporating a variety of early conversation features, e.g., structural features of trees, can improve performance of contro-versy prediction over strong post-time baselines, and ii) the text content of comments contains significant complementary information to post text." }, { "id": 199, "string": "Controversy prediction = popularity prediction." 
}, { "id": 200, "string": "We return to a null hypothesis introduced in §2: that the controversy prediction models we consider here are merely learning the same patterns that a popularity prediction algorithm would learn." }, { "id": 201, "string": "We train popularity prediction algorithms, and then attempt to use them at test-time to predict controversy; under the null hypothesis, we would expect little to no performance degradation when training on these alternate labels." }, { "id": 202, "string": "We 1) train binary popularity predictors using post text/time + comment rate/tree/text features available at t = 180, 21 and use them to predict controversy at test-time; and 2) consider an oracle that predicts the true popularity label at test-time; this oracle is quite strong, as prior work suggests that perfectly predicting popularity is impossible (Salganik et al., 2006) ." }, { "id": 203, "string": "In all cases, the best popularity predictor does not achieve performance comparable to even the post-only baseline." }, { "id": 204, "string": "For 3 of 6 communities, even the popularity oracle does not beat post time baseline, and in all cases, the mean performance of the controversy predictor exceeds the oracle by t = 180." }, { "id": 205, "string": "Thus, in our setting, controversy predictors and popularity predictors learn disjoint patterns." }, { "id": 206, "string": "Domain Transfer We conduct experiments where we train models on one subreddit and test them on another." }, { "id": 207, "string": "For these experiments, we discard all posting time features, and compare C-(TEXT+TREE+RATE) to C-(TREE+RATE); the goal is to empirically examine the hypothesis in §1: that controversial text is community-specific." }, { "id": 208, "string": "To measure performance differences in the domain transfer setting, we compute the percentage accuracy drop relative to a constant prediction baseline when switching the training subreddit from the matching subreddit to a different one." }, { "id": 209, "string": "For example, at t = 60, we observe that raw accuracy drops from 65.6 → 55.8 when training on AskWomen and testing on AskMen when considering text, rate, and tree features together; given that the constant prediction baseline achieves 50% accuracy, we compute the percent drop in accuracy as: (55.8 − 50)/(65.6 − 50) − 1 = −63%." }, { "id": 210, "string": "The results of this experiment (Figure 8 ) suggest that while text features are quite strong indomain, they are brittle and community specific." }, { "id": 211, "string": "Conversely, while rate and structural comment tree features do not carry as much in-domain predictive capacity on their own, they generally transfer better between communities, e.g., for RATE+TREE, there is very little performance drop-off when training/testing on AskMen/AskWomen (this holds for all timing cutoffs we considered)." }, { "id": 212, "string": "Similarly, in the case of training on Fitness and testing on PersonalFinance, we sometimes observe a performance increase when switching domains (e.g., at t = 60); we suspect that this could be an effect of dataset size, as our Fitness dataset has the most posts of any subreddit we consider, and PersonalFinance has the least." }, { "id": 213, "string": "Figure 8 : Average cross-validated performance degradation for transfer learning setting at t = 180 and t = 60; the y-axis is the training subreddit and the xaxis is testing." 
}, { "id": 214, "string": "For a fixed test subreddit, each column gives the percent accuracy drop when switching from the matching training set to a domain transfer setting." }, { "id": 215, "string": "In general, while incorporating comment text features results in higher accuracy overall, comment rate + tree features transfer between communities with less performance degradation." }, { "id": 216, "string": "Conclusion We demonstrated that early discussion features are predictive of eventual controversiality in several reddit communities." }, { "id": 217, "string": "This finding was dependent upon considering an expressive feature set of early discussions; to our knowledge, this type of feature set (consisting of text, trees, etc.)" }, { "id": 218, "string": "hadn't been thoroughly explored in prior early prediction work." }, { "id": 219, "string": "One promising avenue for future work is to examine higher-quality textual representations for conversation trees." }, { "id": 220, "string": "While our mean-pooling method did produce high performance, the resulting classifiers do not transfer between domains effectively." }, { "id": 221, "string": "Developing a more expressive algorithm (e.g., one that incorporates reply-structure relationships) could boost predictive performance, and enable textual features to be less brittle." } ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 25 }, { "section": "Datasets", "n": "2", "start": 26, "end": 33 }, { "section": "comments, 72% upvoted", "n": "62", "start": 34, "end": 36 }, { "section": "comments, 93% upvoted", "n": "115", "start": 37, "end": 40 }, { "section": "comments, 63% upvoted", "n": "66", "start": 41, "end": 42 }, { "section": "comments, 90% upvoted", "n": "394", "start": 43, "end": 44 }, { "section": "comments, 57% upvoted", "n": "61", "start": 45, "end": 45 }, { "section": "comments, 62% upvoted", "n": "125", "start": 46, "end": 48 }, { "section": "comments, 97% upvoted", "n": "110", "start": 49, "end": 81 }, { "section": "Quantitative Validation of Labels", "n": "2.1", "start": 82, "end": 100 }, { "section": "Early Discussion Threads", "n": "3", "start": 101, "end": 116 }, { "section": "Early Prediction of Controversy", "n": "4", "start": 117, "end": 130 }, { "section": "Comparing Text Models", "n": "4.1", "start": 131, "end": 168 }, { "section": "Post-time Metadata", "n": "4.2", "start": 169, "end": 178 }, { "section": "Early Discussion Features", "n": "4.3", "start": 179, "end": 205 }, { "section": "Domain Transfer", "n": "4.3.1", "start": 206, "end": 215 }, { "section": "Conclusion", "n": "5", "start": 216, "end": 221 } ], "figures": [ { "filename": "../figure/image/1337-Figure1-1.png", "caption": "Figure 1: How our research relates to prior work.", "page": 1, "bbox": { "x1": 72.0, "x2": 524.16, "y1": 66.24, "y2": 364.32 } }, { "filename": "../figure/image/1337-Table2-1.png", "caption": "Table 2: Average accuracy for each post-time, textonly predictor for each dataset, averaged over 15 crossvalidation splits; standard errors are ±.6, on average (and never exceed ±1.03). 
Bold is best in column; underlined are statistically indistinguishable from best in column (p < .01)", "page": 6, "bbox": { "x1": 72.0, "x2": 292.32, "y1": 62.4, "y2": 174.23999999999998 } }, { "filename": "../figure/image/1337-Table3-1.png", "caption": "Table 3: Post-time only results: the effect of incorporating timing and author identity features.", "page": 6, "bbox": { "x1": 84.96, "x2": 276.0, "y1": 268.8, "y2": 312.96 } }, { "filename": "../figure/image/1337-Figure3-1.png", "caption": "Figure 3: For each community, a histogram of percent-upvoted and the median number of comments per bin.", "page": 2, "bbox": { "x1": 385.91999999999996, "x2": 516.48, "y1": 62.4, "y2": 222.72 } }, { "filename": "../figure/image/1337-Figure2-1.png", "caption": "Figure 2: Examples of two controversial and one non-controversial post from three communities. Also shown are the text of the first reply, the number of comments the post received, and its percent-upvoted.", "page": 2, "bbox": { "x1": 72.0, "x2": 372.0, "y1": 77.75999999999999, "y2": 209.28 } }, { "filename": "../figure/image/1337-Figure5-1.png", "caption": "Figure 5: Classifier accuracy for increasing periods of observation; the “+” in the legend indicates that a feature set is combined with the feature sets below. ts, the time the full feature set first achieves statistical significance over the post-time only baseline, is given for each community (if significance is achieved).", "page": 7, "bbox": { "x1": 86.88, "x2": 516.0, "y1": 62.4, "y2": 337.91999999999996 } }, { "filename": "../figure/image/1337-Figure6-1.png", "caption": "Figure 6: Observation period versus median number of comments available.", "page": 7, "bbox": { "x1": 84.96, "x2": 165.6, "y1": 407.03999999999996, "y2": 495.35999999999996 } }, { "filename": "../figure/image/1337-Figure7-1.png", "caption": "Figure 7: Histogram of the number of comments available per thread at t = 60 minutes in AskWomen.", "page": 7, "bbox": { "x1": 196.79999999999998, "x2": 280.32, "y1": 403.68, "y2": 488.15999999999997 } }, { "filename": "../figure/image/1337-Table1-1.png", "caption": "Table 1: Dataset statistics: number of posts, number of comments, mean percent-upvoted for the controversial and non-controversial classes.", "page": 3, "bbox": { "x1": 76.8, "x2": 283.2, "y1": 62.4, "y2": 144.96 } }, { "filename": "../figure/image/1337-Figure8-1.png", "caption": "Figure 8: Average cross-validated performance degradation for transfer learning setting at t = 180 and t = 60; the y-axis is the training subreddit and the xaxis is testing. For a fixed test subreddit, each column gives the percent accuracy drop when switching from the matching training set to a domain transfer setting. In general, while incorporating comment text features results in higher accuracy overall, comment rate + tree features transfer between communities with less performance degradation.", "page": 8, "bbox": { "x1": 306.71999999999997, "x2": 526.0799999999999, "y1": 62.4, "y2": 257.28 } }, { "filename": "../figure/image/1337-Figure4-1.png", "caption": "Figure 4: Early conversation trees from AskMen; nodes are comments and edges indicate reply structure. 
The original post is the black node, and as node colors lighten, comment timing increases from zero minutes to sixty minutes.", "page": 4, "bbox": { "x1": 73.92, "x2": 286.08, "y1": 62.879999999999995, "y2": 228.0 } } ] }, "gem_id": "GEM-SciDuet-chal-77" }, { "slides": { "1": { "title": "Trending of Social Media", "text": [ "Facebook YouTube Instagram Twitter Snapchat Reddit Pinterest Tumblr Linkedin", "Number of active users (millions)" ], "page_nums": [ 2 ], "images": [] }, "2": { "title": "Name Tagging", "text": [ "[ORG France] defeated [ORG Croatia] in [MISC World Cup] final at [LOC Luzhniki Stadium].", "Provide inputs to downstream applications" ], "page_nums": [ 3 ], "images": [] }, "3": { "title": "Challenges of Name Tagging in Social Media", "text": [ "[Screenshot of a news article about Toni Kroos skipping Cristiano Ronaldo's birthday party, shown as an example of a long, context-rich news document]", "Limited textual context", "Performs much worse on social media data",
"Social media language variations (e.g., "Alison wonderlandxDiploxjuaz B2B ayee")", "Within-word white spaces" ], "page_nums": [ 4, 5 ], "images": [] }, "4": { "title": "Utilization of Vision", "text": [ "Karl-Anthony Towns named unanimous 2015-2016 NBA Rookie of the Year", "intimate surprise set at Shea", "Difficult cases based on text only" ], "page_nums": [ 6 ], "images": [ "figure/image/1345-Figure1-1.png" ] }, "5": { "title": "Task Definition", "text": [ "Multimedia Input: image-sentence pair", "Colts Have 4th Best QB Situation in NFL with Andrew Luck #ColtStrong", "[ORG Colts] Have 4th Best QB Situation in [ORG NFL] with [PER Andrew Luck] #ColtStrong", "Output: tagging results on sentence" ], "page_nums": [ 7 ], "images": [] }, "6": { "title": "Our work", "text": [ "State-of-the-art for news articles (Lample et al., 2016; Ma and Hovy, 2016)", "Visual attention model (Bahdanau et al., 2014)", "Extract visual features from image regions that are most related to accompanying sentence", "Modulation Gate before CRFs", "Combine word representation with visual features based on their relatedness" ], "page_nums": [ 9 ], "images": [] }, "8": { "title": "Overall Framework", "text": [ "[Architecture diagram: the multimodal input (an image plus the sentence 'Florence and the Machine surprises ill teen with private concert') feeds word and character embeddings into a BLSTM-CRF tagger (labels B-PER, I-PER, ...); a Visual Attention Model produces a visual context vector that is combined with each word representation through Modulation Gates before the CRF layer]" ], "page_nums": [ 11 ], "images": [ "figure/image/1345-Figure2-1.png", "figure/image/1345-Figure3-1.png" ] }, "9": { "title": "Sequence Labeling: BLSTM-CRF (Lample et al., 2016)", "text": [ "x_t, c_t and h_t are the input, memory and hidden state at time t, respectively. The W's are weight matrices. ⊙ is the element-wise product function and σ is the element-wise sigmoid function." ], "page_nums": [ 12 ], "images": [] }, "10": { "title": "Attention Model for Text Related Visual Features Localization", "text": [ "V = CNN(I): outputs from the convolutional layer", "Example sentence: 'Florence and the Machine surprises ill teen with private concert'", "e_i = W_p a_i + b_p: attention scores", "I: input image; c = Σ_i α_i V_i: context vector" ], "page_nums": [ 13 ], "images": [ "figure/image/1345-Figure3-1.png" ] }, "11": { "title": "Modulation Gate", "text": [ "v_c: visual context; h_i: word representation", "⊙: element-wise multiplication; σ, tanh: activation functions", "β_v = σ(W_v h_i + U_v v_c + b_v) (visual gate)", "β_w = σ(W_w h_i + U_w v_c + b_w) (word gate)", "w_m = β_w ⊙ h_i + β_v ⊙ m: visually tuned word representation" ], "page_nums": [ 14 ], "images": [] }, "13": { "title": "Dataset", "text": [ "Topics: Sports, concerts and other social events", "Named Entity Types: Person, Organization, Location and MISC", "Size of the dataset in numbers of sentences and tokens" ], "page_nums": [ 16 ], "images": [] }, "15": { "title": "Attention Visualization", "text": [ "(a). [PER Klay Thompson], [ORG Warriors] overwhelm [ORG ...]", "(b). [PER Radiohead] offers old and new at first concert in four years.", "(c). [MISC Cannes] just became the [PER Blake Lively] show", "(d). #iPhoneAt10: How [PER Steve Jobs] and [ORG Apple] changed modern society", "(e). [PER Florence and the Machine] surprises ill teen with private concert", "(f). [ORG Warriorette] Basketball Campers ready for Day 2", "(g). Is defending champ [PER Sandeul] able to win for the third time on [MISC Duet Song Festival]?",
"(h). Shirts at the ready for our hometown game today #[ORG Leicester] #pgautomotive #[ORG premierleague]", "(i). ARMY put up a huge ad in [LOC Times Square] for [PER BTS] 4th anniversary!" ], "page_nums": [ 18 ], "images": [ "figure/image/1345-Figure5-1.png", "figure/image/1345-Figure3-1.png" ] } }, "paper_title": "Visual Attention Model for Name Tagging in Multimodal Social Media", "paper_id": "1345", "paper": { "title": "Visual Attention Model for Name Tagging in Multimodal Social Media", "abstract": "Every day, billions of multimodal posts containing both images and text are shared in social media sites such as Snapchat, Twitter or Instagram. This combination of image and text in a single message allows for more creative and expressive forms of communication, and has become increasingly common in such sites. This new paradigm brings new challenges for natural language understanding, as the textual component tends to be shorter, more informal, and often is only understood if combined with the visual context. In this paper, we explore the task of name tagging in multimodal social media posts. We start by creating two new multimodal datasets: one based on Twitter posts 1 and the other based on Snapchat captions (exclusively submitted to public and crowdsourced stories). We then propose a novel model based on Visual Attention that not only provides deeper visual understanding of the decisions of the model, but also significantly outperforms other state-of-the-art baseline methods for this task. 2 * This work was mostly done during the first author's internship at Snap Research. 1 The Twitter data and associated images presented in this paper were downloaded from https://archive.org/details/twitterstream 2 We will make the annotations on Twitter data available for research purposes upon request.", "text": [ { "id": 0, "string": "Introduction Social platforms, like Snapchat, Twitter, Instagram and Pinterest, have become part of our lives and play an important role in making communication easier and accessible." }, { "id": 1, "string": "Once text-centric, social media platforms are becoming increasingly multimodal, with users combining images, videos, audios, and texts for better expressiveness." }, { "id": 2, "string": "As social media posts become more multimodal, the natural language understanding of the textual components of these messages becomes increasingly challenging." }, { "id": 3, "string": "In fact, it is often the case that the textual component can only be understood in combination with the visual context of the message." }, { "id": 4, "string": "In this context, here we study the task of Name Tagging for social media containing both image and textual contents." }, { "id": 5, "string": "Name tagging is a key task for language understanding, and provides input to several other tasks such as Question Answering, Summarization, Searching and Recommendation." }, { "id": 6, "string": "Despite its importance, most of the research in name tagging has focused on news articles and longer text documents, and not as much in multimodal social media data (Baldwin et al., 2015)." }, { "id": 7, "string": "However, multimodality is not the only challenge to performing name tagging on such data." }, { "id": 8, "string": "The textual components of these messages are often very short, which limits context around names." }
}, { "id": 9, "string": "Moreover, there linguistic variations, slangs, typos and colloquial language are extremely common, such as using 'looooove' for 'love', 'LosAngeles' for 'Los Angeles', and '#Chicago #Bull' for 'Chicago Bulls'." }, { "id": 10, "string": "These characteristics of social media data clearly illustrate the higher difficulty of this task, if compared to traditional newswire name tagging." }, { "id": 11, "string": "In this work, we modify and extend the current state-of-the-art model (Lample et al., 2016; Ma and Hovy, 2016) in name tagging to incorporate the visual information of social media posts using an Attention mechanism." }, { "id": 12, "string": "Although the usually short textual components of social media posts provide limited contextual information, the accompanying images often provide rich information that can be useful for name tagging." }, { "id": 13, "string": "For ex- ample, as shown in Figure 1 , both captions include the phrase 'Modern Baseball'." }, { "id": 14, "string": "It is not easy to tell if each Modern Baseball refers to a name or not from the textual evidence only." }, { "id": 15, "string": "However using the associated images as reference, we can easily infer that Modern Baseball in the first sentence should be the name of a band because of the implicit features from the objects like instruments and stage, and the Modern Baseball in the second sentence refers to the sport of baseball because of the pitcher in the image." }, { "id": 16, "string": "In this paper, given an image-sentence pair as input, we explore a new approach to leverage visual context for name tagging in text." }, { "id": 17, "string": "First, we propose an attention-based model to extract visual features from the regions in the image that are most related to the text." }, { "id": 18, "string": "It can ignore irrelevant visual information." }, { "id": 19, "string": "Secondly, we propose to use a gate to combine textual features extracted by a Bidirectional Long Short Term Memory (BLSTM) and extracted visual features, before feed them into a Conditional Random Fields(CRF) layer for tag predication." }, { "id": 20, "string": "The proposed gate architecture plays the role to modulate word-level multimodal features." }, { "id": 21, "string": "We evaluate our model on two labeled datasets collected from Snapchat and Twitter respectively." }, { "id": 22, "string": "Our experimental results show that the proposed model outperforms state-for-the-art name tagger in multimodal social media." }, { "id": 23, "string": "The main contributions of this work are as follows: • We create two new datasets for name tagging in multimedia data, one using Twitter and the other using crowd-sourced Snapchat posts." }, { "id": 24, "string": "These new datasets effectively constitute new benchmarks for the task." }, { "id": 25, "string": "• We propose a visual attention model specifically for name tagging in multimodal social media data." }, { "id": 26, "string": "The proposed end-to-end model only uses image-sentence pairs as input without any human designed features, and a Visual Attention component that helps understand the decision making of the model." }, { "id": 27, "string": "Figure 2 shows the overall architecture of our model." }, { "id": 28, "string": "We describe three main components of our model in this section: BLSTM-CRF sequence labeling model (Section 2.1), Visual Attention Model (Section 2.3) and Modulation Gate (Section 2.4)." 
}, { "id": 29, "string": "Given a pair of sentence and image as input, the Visual Attention Model extracts regional visual features from the image and computes the weighted sum of the regional visual features as the visual context vector, based on their relatedness with the sentence." }, { "id": 30, "string": "The BLSTM-CRF sequence labeling model predicts the label for each word in the sentence based on both the visual context vector and the textual information of the words." }, { "id": 31, "string": "The modulation gate controls the combination of the visual context vector and the word representations for each word before the CRF layer." }, { "id": 32, "string": "Model BLSTM-CRF Sequence Labeling We model name tagging as a sequence labeling problem." }, { "id": 33, "string": "Given a sequence of words S = {s_1, s_2, ..., s_n}, we aim to predict a sequence of labels L = {l_1, l_2, ..., l_n}, where l_i ∈ L and L is a pre-defined label set." }, { "id": 34, "string": "Bidirectional LSTM." }, { "id": 35, "string": "Long Short-term Memory Networks (LSTMs) (Hochreiter and Schmidhuber, 1997) are variants of Recurrent Neural Networks (RNNs) designed to capture long-range dependencies of the input." }, { "id": 36, "string": "The equations of an LSTM cell are as follows: i_t = σ(W_xi x_t + W_hi h_{t-1} + b_i); f_t = σ(W_xf x_t + W_hf h_{t-1} + b_f); c̃_t = tanh(W_xc x_t + W_hc h_{t-1} + b_c); c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t; o_t = σ(W_xo x_t + W_ho h_{t-1} + b_o); h_t = o_t ⊙ tanh(c_t), where x_t, c_t and h_t are the input, memory and hidden state at time t respectively." }, { "id": 37, "string": "W_xi, W_hi, W_xf, W_hf, W_xc, W_hc, W_xo, and W_ho are weight matrices." }, { "id": 38, "string": "⊙ is the element-wise product function and σ is the element-wise sigmoid function." }, { "id": 39, "string": "Name Tagging benefits from both the past (left) and the future (right) contexts, thus we implement the Bidirectional LSTM (Graves et al., 2013; Dyer et al., 2015) by concatenating the left and right context representations, h_t = [→h_t, ←h_t], for each word." }, { "id": 40, "string": "Character-level Representation." }, { "id": 41, "string": "Following (Lample et al., 2016), we generate the character-level representation for each word using another BLSTM." }, { "id": 42, "string": "It receives character embeddings as input and generates representations combining implicit prefix, suffix and spelling information." }, { "id": 43, "string": "The final word representation x_i is the concatenation of word embedding e_i and character-level representation c_i." }, { "id": 44, "string": "c_i = BLSTM_char(s_i), s_i ∈ S; x_i = [e_i, c_i]. Conditional random fields (CRFs)." }, { "id": 45, "string": "For name tagging, it is important to consider the constraints of the labels in the neighborhood (e.g., I-LOC must follow B-LOC)." }, { "id": 46, "string": "CRFs (Lafferty et al., 2001) are effective at learning those constraints and jointly predicting the best chain of labels." }, { "id": 47, "string": "We follow the implementation of CRFs in (Ma and Hovy, 2016)." }, { "id": 48, "string": "Visual Feature Representation We use Convolutional Neural Networks (CNNs) (LeCun et al., 1989) to obtain the representations of images." }, { "id": 49, "string": "Particularly, we use Residual Net (ResNet) (He et al., 2016), which achieved state-of-the-art performance on the ImageNet classification, COCO (Lin et al., 2014) detection, and COCO segmentation tasks."
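Before moving on to the visual features: the LSTM cell equations above (id 36) translate directly into code. Below is a minimal NumPy sketch of a single step; the dictionary-based weight layout, random initialization and the 100/300 dimensions (GloVe input, LSTM hidden state) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    """One LSTM step following the equations in the text (id 36)."""
    i_t = sigmoid(W["xi"] @ x_t + W["hi"] @ h_prev + b["i"])     # input gate
    f_t = sigmoid(W["xf"] @ x_t + W["hf"] @ h_prev + b["f"])     # forget gate
    c_cand = np.tanh(W["xc"] @ x_t + W["hc"] @ h_prev + b["c"])  # candidate memory
    c_t = f_t * c_prev + i_t * c_cand                            # element-wise products
    o_t = sigmoid(W["xo"] @ x_t + W["ho"] @ h_prev + b["o"])     # output gate
    h_t = o_t * np.tanh(c_t)                                     # new hidden state
    return h_t, c_t

# Illustrative sizes: 100-dim inputs (GloVe), 300-dim hidden state.
d_in, d_h = 100, 300
rng = np.random.default_rng(0)
W = {k: rng.normal(scale=0.1, size=(d_h, d_in if k[0] == "x" else d_h))
     for k in ("xi", "hi", "xf", "hf", "xc", "hc", "xo", "ho")}
b = {k: np.zeros(d_h) for k in "ifco"}
h, c = lstm_cell(rng.normal(size=d_in), np.zeros(d_h), np.zeros(d_h), W, b)
```

A BLSTM simply runs this recurrence forward and backward over the sentence and concatenates the two hidden states per word, as in id 39.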
}, { "id": 50, "string": "Given an input pair (S, I), where S represents the word sequence and I represents the image rescaled to 224x224 pixels, we use ResNet to extract visual features for regional areas as well as for the whole image (Fig 3): V_g = ResNet_g(I), V_r = ResNet_r(I), where the global visual vector V_g, which represents the whole image, is the output before the last fully connected layer 3." }, { "id": 51, "string": "The dimension of V_g is 1,024." }, { "id": 52, "string": "V_r are the visual representations for regional areas; they are extracted from the last convolutional layer of ResNet, and their dimension is 1,024x7x7 as shown in Figure 3." }, { "id": 53, "string": "7x7 is the number of regions in the image and 1,024 is the dimension of the feature vector." }, { "id": 54, "string": "Thus each feature vector of V_r corresponds to a 32x32 pixel region of the rescaled input image." }, { "id": 55, "string": "The global visual representation is a reasonable representation of the whole input image, but not the best." }, { "id": 56, "string": "Sometimes only parts of the image are related to the associated sentence." }, { "id": 57, "string": "For example, the visual features from the right part of the image in Figure 4 cannot contribute to inferring the information in the associated sentence 'I have just bought Jeremy Pied.'" }, { "id": 58, "string": "In this work we utilize a visual attention mechanism to combat this problem, which has been proven effective for vision-language related tasks such as Image Captioning and Visual Question Answering (Yang et al., 2016b; Lu et al., 2016), by forcing the model to focus on the regions in images that are most related to the contextual textual information while ignoring irrelevant regions." }, { "id": 59, "string": "The visualization of attention can also help us to understand the decision making of the model." }, { "id": 60, "string": "An attention mechanism maps a query and a set of key-value pairs to an output." }, { "id": 61, "string": "The output is a weighted sum of the values, and the assigned weight for each value is computed by a function of the query and the corresponding key." }, { "id": 62, "string": "We encode the sentence into a query vector using an LSTM, and use the regional visual representations V_r as both keys and values." }, { "id": 63, "string": "Text Query Vector." }, { "id": 64, "string": "We use an LSTM to encode the sentence into a query vector, in which the inputs of the LSTM are the concatenations of word embeddings and character-level word representations." }, { "id": 65, "string": "Visual Attention Model Different from the LSTM model used for sequence labeling in Section 2.1, the LSTM here aims to capture the semantic information of the sentence and it is unidirectional: Q = LSTM_query(S) (1) Attention Implementation." }, { "id": 66, "string": "There are many implementations of the visual attention mechanism such as Multi-layer Perceptron (Bahdanau et al., 2014), Bilinear (Luong et al., 2015), dot product (Luong et al., 2015), Scaled Dot Product (Vaswani et al., 2017), and linear projection after summation (Yang et al., 2016b)." }, { "id": 67, "string": "Based on our experimental results, dot product implementations usually result in more concentrated attentions and linear projection after summation results in more dispersed attentions."
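To make the two-branch feature extraction of id 50-54 concrete, the sketch below pulls regional features from the last convolutional block of a torchvision ResNet-152 and a global vector via average pooling. Note that torchvision's ResNet-152 emits 2048-dim features at that layer, while the paper reports 1,024 dims, so the final linear projection here is an assumption added for illustration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

resnet = models.resnet152(pretrained=True)
# Keep everything up to (and including) the last convolutional block.
conv_body = nn.Sequential(*list(resnet.children())[:-2])  # -> (B, 2048, 7, 7)
proj = nn.Linear(2048, 1024)  # assumed projection to the reported 1,024 dims

def visual_features(image):                # image: (B, 3, 224, 224)
    fmap = conv_body(image)                # each 7x7 cell covers a 32x32 image region
    v_r = fmap.flatten(2).transpose(1, 2)  # regional features V_r: (B, 49, 2048)
    v_g = fmap.mean(dim=(2, 3))            # global feature V_g: (B, 2048)
    return proj(v_g), proj(v_r)            # -> (B, 1024), (B, 49, 1024)

v_g, v_r = visual_features(torch.randn(1, 3, 224, 224))
```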
}, { "id": 68, "string": "In the context of name tagging, we choose the implementation of linear projection after summation because it is beneficial for the model to utilize as many related visual features as possible, and concentrated attentions may bias the model." }, { "id": 69, "string": "For the implementation, we first project the text query vector Q and the regional visual features V_r into the same dimensions: P_t = tanh(W_t Q), P_v = tanh(W_v V_r); then we sum up the projected query vector with each projected regional visual vector respectively: A = P_t ⊕ P_v; the weights of the regional visual vectors are: E = softmax(W_a A + b_a), where W_a is a weight matrix." }, { "id": 70, "string": "The weighted sum of the regional visual features is: v_c = Σ_i α_i v_i, with α_i ∈ E and v_i ∈ V_r. We use v_c as the visual context vector to initialize the BLSTM sequence labeling model in Section 2.1." }, { "id": 71, "string": "We compare the performances of the models using the global visual vector V_g and the attention-based visual context vector v_c for initialization in Section 4." }, { "id": 72, "string": "Visual Modulation Gate The BLSTM-CRF sequence labeling model benefits from using the visual context vector to initialize the LSTM cell." }, { "id": 73, "string": "However, a better way to utilize visual features for sequence labeling is to incorporate the features at the word level individually." }, { "id": 74, "string": "However, visual features contribute quite differently when they are used to infer the tags of different words." }, { "id": 75, "string": "For example, we can easily find matched visual patterns from associated images for verbs such as 'sing', 'run', and 'play'." }, { "id": 76, "string": "Words/Phrases such as names of basketball players, artists, and buildings are often well-aligned with objects in images." }, { "id": 77, "string": "However it is difficult to align function words such as 'the', 'of' and 'well' with visual features." }, { "id": 78, "string": "Fortunately, most of the challenging cases in name tagging involve nouns and verbs, the disambiguation of which can benefit more from visual features." }, { "id": 79, "string": "We propose to use a visual modulation gate, similar to (Miyamoto and Cho, 2016; Yang et al., 2016a), to dynamically control the combination of the visual features and the word representations generated by the BLSTM at the word level, before feeding them into the CRF layer for tag prediction." }, { "id": 80, "string": "The equations for the implementation of the modulation gate are as follows: β_v = σ(W_v h_i + U_v v_c + b_v), β_w = σ(W_w h_i + U_w v_c + b_w), m = tanh(W_m h_i + U_m v_c + b_m), w_m = β_w · h_i + β_v · m, where h_i is the word representation generated by the BLSTM, v_c is the computed visual context vector, W_v, W_w, W_m, U_v, U_w and U_m are weight matrices, σ is the element-wise sigmoid function, and w_m is the modulated word representation fed into the CRF layer in Section 2.1." }, { "id": 81, "string": "We conduct experiments to evaluate the impact of the modulation gate in Section 4." }, { "id": 82, "string": "Datasets We evaluate our model on two multimodal datasets, which are collected from Twitter and Snapchat respectively." }, { "id": 83, "string": "Table 1 summarizes the data statistics." }, { "id": 84, "string": "Both datasets contain four types of named entities: Location, Person, Organization and Miscellaneous." }, { "id": 85, "string": "Each data instance contains a pair of sentence and image, and the names in sentences are manually tagged by three expert labelers."
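Returning to Sections 2.3 and 2.4, the attention (ids 69-70) and modulation gate (id 80) equations can be sketched compactly in PyTorch. For brevity the paper's separate W and U matrices are folded into single linear layers over concatenated inputs, which is mathematically equivalent; all dimensions are placeholders and this is an illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class VisualAttention(nn.Module):
    """Linear projection after summation (ids 69-70)."""
    def __init__(self, d_q, d_v, d_a):
        super().__init__()
        self.w_t, self.w_v = nn.Linear(d_q, d_a), nn.Linear(d_v, d_a)
        self.w_a = nn.Linear(d_a, 1)

    def forward(self, q, v_r):                      # q: (B, d_q), v_r: (B, R, d_v)
        p_t = torch.tanh(self.w_t(q)).unsqueeze(1)  # projected query P_t
        p_v = torch.tanh(self.w_v(v_r))             # projected regions P_v
        e = torch.softmax(self.w_a(p_t + p_v).squeeze(-1), dim=1)  # weights E
        return (e.unsqueeze(-1) * v_r).sum(1)       # visual context v_c: (B, d_v)

class ModulationGate(nn.Module):
    """Word-level gating of BLSTM states with the visual context (id 80)."""
    def __init__(self, d_h, d_v):
        super().__init__()
        self.bv = nn.Linear(d_h + d_v, d_h)
        self.bw = nn.Linear(d_h + d_v, d_h)
        self.mix = nn.Linear(d_h + d_v, d_h)

    def forward(self, h, v_c):                      # h: (B, T, d_h), v_c: (B, d_v)
        hv = torch.cat([h, v_c.unsqueeze(1).expand(-1, h.size(1), -1)], dim=-1)
        beta_v = torch.sigmoid(self.bv(hv))
        beta_w = torch.sigmoid(self.bw(hv))
        return beta_w * h + beta_v * torch.tanh(self.mix(hv))  # modulated w_m
```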
}, { "id": 86, "string": "Twitter name tagging." }, { "id": 87, "string": "The Twitter name tagging dataset contains pairs of tweets and their associated images extracted from May 2016, January 2017 and June 2017." }, { "id": 88, "string": "We use sports- and social-event-related keywords, such as concert, festival, soccer, basketball, as queries." }, { "id": 89, "string": "We don't take into consideration messages without images for this experiment." }, { "id": 90, "string": "If a tweet has more than one image associated with it, we randomly select one of the images." }, { "id": 91, "string": "Snap name tagging." }, { "id": 92, "string": "The Snap name tagging dataset consists of caption and image pairs exclusively extracted from snaps submitted to public and live stories." }, { "id": 93, "string": "They were collected between May and July of 2017." }, { "id": 94, "string": "The data contains captions submitted to multiple community-curated stories like the Electric Daisy Carnival (EDC) music festival and the Golden State Warriors' NBA parade." }, { "id": 95, "string": "Both Twitter and Snapchat are social media with plenty of multimodal posts, but they have obvious differences in sentence length and image style." }, { "id": 96, "string": "In Twitter, text plays a more important role, and the sentences in the Twitter dataset are much longer than those in the Snap dataset (16.0 tokens vs 8.1 tokens)." }, { "id": 97, "string": "The image is often more related to the content of the text and added with the purpose of illustrating or giving more context." }, { "id": 98, "string": "On the other hand, as users of Snapchat use cameras to communicate, the roles of text and image are switched." }, { "id": 99, "string": "Captions are often added to complement what is being portrayed by the snap." }, { "id": 100, "string": "In the experiments section we will show that our proposed model outperforms the baseline on both datasets." }, { "id": 101, "string": "We believe the Twitter dataset can be an important step towards more research in multimodal name tagging and we plan to provide it as a benchmark upon request." }, { "id": 102, "string": "Experiment Training Tokenization." }, { "id": 103, "string": "To tokenize the sentences, we use the same rules as (Owoputi et al., 2013), except that we separate the hashtag '#' from the words after it." }, { "id": 104, "string": "Labeling Schema." }, { "id": 105, "string": "We use the standard BIO schema (Sang and Veenstra, 1999), because we see little difference when we switch to the BIOES schema (Ratinov and Roth, 2009)." }, { "id": 106, "string": "Word embeddings." }, { "id": 107, "string": "We use the 100-dimensional GloVe 4 (Pennington et al., 2014) embeddings trained on 2 billion tweets to initialize the lookup table and do fine-tuning during training." }, { "id": 108, "string": "Character embeddings." }, { "id": 109, "string": "As in (Lample et al., 2016), we randomly initialize the character embeddings with uniform samples." }, { "id": 110, "string": "Based on experimental results, the size of the character embeddings has little effect, and we set it to 50." }, { "id": 111, "string": "Pretrained CNNs." }, { "id": 112, "string": "We use the pretrained ResNet-152 (He et al., 2016) from PyTorch." }, { "id": 113, "string": "Early Stopping." }, { "id": 114, "string": "We use early stopping (Caruana et al., 2001; Graves et al., 2013) with a patience of 15 to prevent the model from over-fitting." }, { "id": 115, "string": "Fine Tuning."
}, { "id": 116, "string": "The models are optimized with fine-tuning on both the word embeddings and the pretrained ResNet." }, { "id": 117, "string": "Optimization." }, { "id": 118, "string": "The models achieve the best performance by using mini-batch stochastic gradient descent (SGD) with batch size 20 and momentum 0.9 on both datasets." }, { "id": 119, "string": "We set an initial learning rate of η_0 = 0.03 with a decay rate of ρ = 0.01." }, { "id": 120, "string": "We use gradient clipping of 5.0 to reduce the effect of exploding gradients." }, { "id": 121, "string": "Hyper-parameters." }, { "id": 122, "string": "We summarize the hyper-parameters in Table 2." }, { "id": 123, "string": "Table 2 (Hyper-parameters of the networks): LSTM hidden state size 300; Char LSTM hidden state size 50; visual vector size 100; dropout rate 0.5." }, { "id": 124, "string": "Table 3 shows the performance of the baseline, which is BLSTM-CRF with sentences as input only, and our proposed models on both datasets." }, { "id": 125, "string": "BLSTM-CRF + Global Image Vector: use the global image vector to initialize the BLSTM-CRF." }, { "id": 126, "string": "BLSTM-CRF + Visual attention: use the attention-based visual context vector to initialize the BLSTM-CRF." }, { "id": 127, "string": "BLSTM-CRF + Visual attention + Gate: modulate word representations with the visual vector." }, { "id": 128, "string": "Our final model BLSTM-CRF + VISUAL ATTENTION + GATE, which has the visual attention component and the modulation gate, obtains the best F1 scores on both datasets." }, { "id": 129, "string": "Visual features successfully play the role of validating entity types." }, { "id": 130, "string": "For example, when there is a person in the image, it is more likely that a person name is included in the associated sentence, but when there is a soccer field in the image, it is more likely that a sports team name is included." }, { "id": 131, "string": "Results All the models get better scores on the Twitter dataset than on the Snap dataset, because the average length of the sentences in the Snap dataset (8.1 tokens) is much smaller than that of the Twitter dataset (16.0 tokens), which means there is much less contextual information in the Snap dataset." }, { "id": 132, "string": "Also, comparing the gains from visual features on the different datasets, we find that the model benefits more from visual features on the Twitter dataset, considering the much higher baseline scores on the Twitter dataset." }, { "id": 133, "string": "Based on our observation, users of Snapchat often post selfies with captions, which means some of the images are not strongly related to their associated captions." }, { "id": 134, "string": "In contrast, users of Twitter prefer to post images to illustrate texts. Attention Visualization Figure 5 shows some good examples of the attention visualization and their corresponding name tagging results." }, { "id": 135, "string": "The model can successfully focus on appropriate regions when the images are well aligned with the associated sentences." }, { "id": 136, "string": "Based on our observation, the multimodal contexts in posts related to sports, concerts or festivals are usually better aligned with each other, therefore the visual features easily contribute in these cases." }, { "id": 137, "string": "For example, the ball and shooting action in example (a) in Figure 5 indicate that the context should be related to basketball, thus 'Warriors' should be the name of a sports team."
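Returning to the training details given at the start of this section: the optimization recipe (mini-batch SGD with batch size 20, momentum 0.9, η_0 = 0.03, decay ρ = 0.01, gradient clipping at 5.0) can be wired up as below. The text does not spell out the exact decay schedule, so the common η_0/(1 + ρ·epoch) form is an assumption here, and `model`/`loader` are stand-ins for the BLSTM-CRF and data pipeline rather than real objects from the paper.

```python
import torch

def train(model, loader, epochs, eta0=0.03, rho=0.01, clip=5.0):
    """Mini-batch SGD with the reported hyper-parameters; early stopping
    (patience 15) is omitted for brevity."""
    opt = torch.optim.SGD(model.parameters(), lr=eta0, momentum=0.9)
    for epoch in range(epochs):
        for g in opt.param_groups:
            # Assumed schedule: eta_t = eta0 / (1 + rho * epoch).
            g["lr"] = eta0 / (1.0 + rho * epoch)
        for sentences, images, tags in loader:  # batches of 20 examples
            opt.zero_grad()
            loss = model(sentences, images, tags)  # CRF negative log-likelihood
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
            opt.step()
```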
}, { "id": 138, "string": "A singing person with a microphone in example (b) indicates that the name of an artist or a band ('Radiohead') may appear in the sentence." }, { "id": 139, "string": "The second and the third rows in Figure 5 show some more challenging cases whose tagging results benefit from visual features." }, { "id": 140, "string": "In example (d), the model pays attention to the big Apple logo, and thus tags the 'Apple' in the sentence as an Organization name." }, { "id": 141, "string": "In examples (e) and (i), a small [...]. Error Analysis Figure 6 shows some failed examples, which are categorized into three types: (1) bad alignment between visual and textual information; (2) blurry images; (3) wrong attention made by the model." }, { "id": 142, "string": "Name tagging greatly benefits from visual features when the sentences are well aligned with the associated image, as we show in Section 4.3." }, { "id": 143, "string": "But this is not always the case in social media." }, { "id": 144, "string": "Example (a) in Figure 6 shows a failure resulting from poor alignment between sentence and image." }, { "id": 145, "string": "In this image, there are two bins standing in front of a wall, but the sentence talks about basketball players." }, { "id": 146, "string": "The unrelated visual information makes the model tag 'Cleveland' as a Location; however, it refers to the basketball team 'Cleveland Cavaliers'." }, { "id": 147, "string": "The image in example (b) is blurry, so the extracted visual information actually introduces noise instead of additional information." }, { "id": 148, "string": "The image in example (c) is about a baseball pitcher, but our model pays attention to the top right corner of the image." }, { "id": 149, "string": "The visual context feature computed by our model is not related to the sentence, and results in the missed tagging of 'SBU', which is an organization name." }, { "id": 150, "string": "Related Work In this section, we summarize relevant background on previous work on name tagging and visual attention." }, { "id": 151, "string": "Name Tagging." }, { "id": 152, "string": "In recent years, (Chiu and Nichols, 2015; Lample et al., 2016; Ma and Hovy, 2016) proposed several neural network architectures for name tagging that outperform traditional methods based on explicit features (Chieu and Ng, 2002; Florian et al., 2003; Ando and Zhang, 2005; Ratinov and Roth, 2009; Lin and Wu, 2009; Passos et al., 2014; Luo et al., 2015)." }, { "id": 153, "string": "They all use a Bidirectional LSTM (BLSTM) to extract features from a sequence of words." }, { "id": 154, "string": "For character-level representations, (Lample et al., 2016) proposed to use another BLSTM to capture prefix and suffix information of words, and (Chiu and Nichols, 2015; Ma and Hovy, 2016) used a CNN to extract position-independent character features." }, { "id": 155, "string": "On top of the BLSTM, (Chiu and Nichols, 2015) used a softmax layer to predict the label for each word, and (Lample et al., 2016; Ma and Hovy, 2016) used a CRF layer for joint prediction." }, { "id": 156, "string": "Compared with traditional approaches, neural-network-based approaches do not require hand-crafted features and achieve state-of-the-art performance on name tagging (Ma and Hovy, 2016)." }, { "id": 157, "string": "However, these methods were mainly developed for newswire and paid little attention to social media."
}, { "id": 158, "string": "For name tagging in social media, (Ritter et al., 2011) leveraged a large amount of unlabeled data and many dictionaries in a pipeline model." }, { "id": 159, "string": "(Limsopatham and Collier, 2016) adapted the BLSTM-CRF model with additional word shape information, and (Aguilar et al., 2017) utilized an effective multi-task approach." }, { "id": 160, "string": "Among these methods, our model is most similar to (Lample et al., 2016), but we designed a new visual attention component and a modulation control gate." }, { "id": 161, "string": "Visual Attention." }, { "id": 162, "string": "Since the attention mechanism was proposed by (Bahdanau et al., 2014), it has been widely adopted for language-and-vision related tasks, such as Image Captioning and Visual Question Answering (VQA), by retrieving the visual features most related to the textual context (Zhu et al., 2016; Anderson et al., 2017; Xu and Saenko, 2016; Chen et al., 2015)." }, { "id": 163, "string": "proposed to predict a word based on the visual patch that is most related to the last predicted word for image captioning." }, { "id": 164, "string": "(Yang et al., 2016b; Lu et al., 2016) applied the attention mechanism to VQA, to find the regions in images that are most related to the questions." }, { "id": 165, "string": "(Yu et al., 2016) applied the visual attention mechanism to video captioning." }, { "id": 166, "string": "Our attention implementation approach in this work is similar to those used for VQA." }, { "id": 167, "string": "The model finds the regions in images that are most related to the accompanying sentences, and then feeds the visual features into a BLSTM-CRF sequence labeling model." }, { "id": 168, "string": "The differences are: (1) we add the visual context feature at each step of sequence labeling; and (2) we propose to use a gate to control the combination of the visual and textual information based on their relatedness." }, { "id": 169, "string": "Conclusions and Future Work We propose a gated Visual Attention model for name tagging in multimodal social media." }, { "id": 170, "string": "We construct two multimodal datasets from Twitter and Snapchat." }, { "id": 171, "string": "Experiments show an absolute 3%-4% F-score gain." }, { "id": 172, "string": "We hope this work will encourage more research on multimodal social media in the future and we plan on making our benchmark available upon request." }, { "id": 173, "string": "Name Tagging for more fine-grained types (e.g." }, { "id": 174, "string": "soccer team, basketball team, politician, artist) can benefit more from visual features." }, { "id": 175, "string": "For example, an image including a pitcher indicates that the 'Giants' in the context should refer to the baseball team 'San Francisco Giants'." }, { "id": 176, "string": "We plan to expand our model to tasks such as fine-grained Name Tagging or Entity Linking in the future."
} ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 31 }, { "section": "BLSTM-CRF Sequence Labeling", "n": "2.1", "start": 32, "end": 47 }, { "section": "Visual Feature Representation", "n": "2.2", "start": 48, "end": 64 }, { "section": "Visual Attention Model", "n": "2.3", "start": 65, "end": 71 }, { "section": "Visual Modulation Gate", "n": "2.4", "start": 72, "end": 81 }, { "section": "Datasets", "n": "3", "start": 82, "end": 101 }, { "section": "Training", "n": "4.1", "start": 102, "end": 130 }, { "section": "Results", "n": "4.2", "start": 131, "end": 140 }, { "section": "Error Analysis", "n": "4.4", "start": 141, "end": 149 }, { "section": "Related Work", "n": "5", "start": 150, "end": 168 }, { "section": "Conclusions and Future Work", "n": "6", "start": 169, "end": 176 } ], "figures": [ { "filename": "../figure/image/1345-Table1-1.png", "caption": "Table 1: Sizes of the datasets in numbers of sentences and tokens.", "page": 5, "bbox": { "x1": 156.96, "x2": 441.12, "y1": 62.879999999999995, "y2": 132.96 } }, { "filename": "../figure/image/1345-Table2-1.png", "caption": "Table 2: Hyper-parameters of the networks.", "page": 5, "bbox": { "x1": 90.72, "x2": 271.2, "y1": 417.59999999999997, "y2": 487.2 } }, { "filename": "../figure/image/1345-Figure1-1.png", "caption": "Figure 1: Examples of Modern Baseball associated with different images.", "page": 1, "bbox": { "x1": 89.75999999999999, "x2": 280.32, "y1": 61.44, "y2": 135.35999999999999 } }, { "filename": "../figure/image/1345-Figure5-1.png", "caption": "Figure 5: Examples of visual attentions and NER outputs.", "page": 6, "bbox": { "x1": 73.92, "x2": 524.16, "y1": 397.44, "y2": 739.1999999999999 } }, { "filename": "../figure/image/1345-Table3-1.png", "caption": "Table 3: Results of our models on noisy social media data.", "page": 6, "bbox": { "x1": 72.0, "x2": 528.0, "y1": 62.879999999999995, "y2": 145.92 } }, { "filename": "../figure/image/1345-Figure3-1.png", "caption": "Figure 3: CNN for visual features extraction.", "page": 2, "bbox": { "x1": 317.76, "x2": 511.2, "y1": 332.64, "y2": 411.35999999999996 } }, { "filename": "../figure/image/1345-Figure2-1.png", "caption": "Figure 2: Overall Architecture of the Visual Attention Name Tagging Model.", "page": 2, "bbox": { "x1": 93.6, "x2": 503.03999999999996, "y1": 61.44, "y2": 289.92 } }, { "filename": "../figure/image/1345-Figure6-1.png", "caption": "Figure 6: Examples of Failed Visual Attention.", "page": 7, "bbox": { "x1": 72.0, "x2": 526.0799999999999, "y1": 61.44, "y2": 156.48 } }, { "filename": "../figure/image/1345-Figure4-1.png", "caption": "Figure 4: Example of partially related image and sentence. (‘I have just bought Jeremy Pied.’)", "page": 3, "bbox": { "x1": 121.92, "x2": 241.44, "y1": 145.44, "y2": 264.96 } } ] }, "gem_id": "GEM-SciDuet-chal-78" }, { "slides": { "1": { "title": "Approach", "text": [ "fine-tuning is evaluated in a batch setting", "Corpus BLEU or isolated sentence-wise metrics are often used", "These do not necessarily express how fast a system adapts", "As we will show this is not good enough", "We seek to measure perceived, immediate adaptation performance", "Calculate recall on the set of all words that are not stopwords, ignoring", "In each of the data sets considered in this work, the average number of occurrences of content words ranges between 1.01 and 1.11 per sentence", "Since the task is online adaptation, we specifically focus on few-shot learning:", "Consider only first and second occurrences of words!"
], "page_nums": [ 11, 12, 13, 14 ], "images": [] }, "2": { "title": "One Shot Recall R1", "text": [ "After seeing a word exactly once before in a reference/confirmed translation, is it correctly produced the second time around?", "H_i: content words in the hypothesis for the i-th example", "R_{1,i}: content words in the reference for the i-th example whose second occurrence is in it" ], "page_nums": [ 15, 16 ], "images": [] }, "3": { "title": "One Shot Recall R1 Example", "text": [ "Source #1: Der Terrier beißt die Frau", "Hypothesis #1: The dog bites the lady", "The terrier bites the woman", "Source #2: Der Mann beißt den Terrier", "The man bites1 the terrier1" ], "page_nums": [ 17, 18, 19, 20, 21, 22, 23, 24, 25 ], "images": [] }, "4": { "title": "Zero Shot Recall R0", "text": [ "Not having seen a word before, is it still correctly produced? Is the system adapting to the domain at hand?", "H_i: content words in the hypothesis for the i-th example", "R_{0,i}: content words in the reference for the i-th example that occur for the first time in it" ], "page_nums": [ 26, 27 ], "images": [] }, "5": { "title": "Zero and One Shot Recall R01", "text": [ "H_i: content words in the hypothesis for the i-th example", "R_{0,i} ∪ R_{1,i}: content words in the reference for the i-th example that occur for the first or second time" ], "page_nums": [ 28 ], "images": [] }, "6": { "title": "Corpus Level Metric", "text": [ "G: Corpus of |G| source, reference/confirmed segments" ], "page_nums": [ 29 ], "images": [] }, "7": { "title": "Complete Example", "text": [ "Der Terrier beißt die Frau", "The dog bites the lady", "The terrier0 bites0 the woman0", "Source #2: Der Mann beißt den Terrier", "The terrier bites the man", "The man0 bites1 the terrier1" ], "page_nums": [ 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41 ], "images": [] }, "8": { "title": "Evaluation Adaptation Methods", "text": [ "The task is online adaptation to the Autodesk data set [Zhechev, 2012]. The background model is an English-to-German Transformer, trained on about 100M segments.", "Four methods for comparison: bias: add an additional bias to the output projection [Michel and Neubig, 2018]; full: fine-tuning of all weights; top: adapt top encoder/decoder layers only; lasso: dynamic selection of adapted tensors with group lasso regularization [Wuebker et al., 2018]" ], "page_nums": [ 42, 43 ], "images": [] }, "11": { "title": "Conclusion", "text": [ "Immediate adaptation performance is important for adaptive MT in CAT", "We proposed three metrics for measuring immediate and possibly perceived adaptation performance", "R1 for one-shot recall, quantifying pick-up of new vocabulary", "R0 for zero-shot recall, quantifying general domain adaptation performance", "The combined metric R0+1", "These metrics give a different signal than the MT metrics that are traditionally used", "Zero-shot recall R0 suffers from unregularized adaptation!", "Careful regularization can mitigate this effect, while retaining most of the one-shot recall R1" ], "page_nums": [ 46, 47 ], "images": [] }, "12": { "title": "Bibliography I", "text": [ "N. Bertoldi, P. Simianer, M. Cettolo, K. Wäschle, M. Federico, and S. Riezler. Online adaptation to post-edits for phrase-based statistical machine translation. Machine Translation, 2014.", "S. S. R. Kothur, R. Knowles, and P. Koehn. Document-level adaptation for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, 2018.", "P. Michel and G. Neubig. Extreme adaptation for personalized neural machine translation. In Proceedings of ACL, 2018.", "K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu.
Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318. Association for Computational Linguistics, 2002.", "A. Peris, L. Cebrián, and F. Casacuberta. Online learning for neural machine translation post-editing, 2017." ], "page_nums": [ 48 ], "images": [] }, "13": { "title": "Bibliography II", "text": [ "M. Turchi, M. Negri, M. A. Farajian, and M. Federico. Continuous learning from human post-edits for neural machine translation. The Prague Bulletin of Mathematical Linguistics, 2017.", "J. Wuebker, P. Simianer, and J. DeNero. Compact personalized models for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018.", "V. Zhechev. Machine translation infrastructure and post-editing performance at Autodesk. In AMTA 2012 Workshop on Post-Editing Technology and Practice, 2012." ], "page_nums": [ 49 ], "images": [] } }, "paper_title": "Measuring Immediate Adaptation Performance for Neural Machine Translation", "paper_id": "1350", "paper": { "title": "Measuring Immediate Adaptation Performance for Neural Machine Translation", "abstract": "Incremental domain adaptation, in which a system learns from the correct output for each input immediately after making its prediction for that input, can dramatically improve system performance for interactive machine translation. Users of interactive systems are sensitive to the speed of adaptation and how often a system repeats mistakes, despite being corrected. Adaptation is most commonly assessed using corpus-level BLEU- or TER-derived metrics that do not explicitly take adaptation speed into account. We find that these metrics often do not capture immediate adaptation effects, such as zero-shot and one-shot learning of domain-specific lexical items. To this end, we propose new metrics that directly evaluate immediate adaptation performance for machine translation. We use these metrics to choose the most suitable adaptation method from a range of different adaptation techniques for neural machine translation systems.", "text": [ { "id": 0, "string": "Introduction Incremental domain adaptation, or online adaptation, has been shown to improve statistical machine translation and especially neural machine translation (NMT) systems significantly (Turchi et al., 2017; Karimova et al., 2018) (inter alia)." }, { "id": 1, "string": "The natural use case is a computer-aided translation (CAT) scenario, where a user and a machine translation system collaborate to translate a document." }, { "id": 2, "string": "Each user translation is immediately used as a new training example to adapt the machine translation system to the specific document." }, { "id": 3, "string": "Adaptation techniques for MT are typically evaluated by their corpus translation quality, but such evaluations may not capture prominent aspects of the user experience in a collaborative translation scenario." }, { "id": 4, "string": "This paper focuses on directly measuring the speed of lexical acquisition for in-domain vocabulary." }, { "id": 5, "string": "To that end, we propose three related metrics that are designed to reflect the responsiveness of adaptation." }, { "id": 6, "string": "An ideal system would immediately acquire in-domain lexical items upon observing their translations." }, { "id": 7, "string": "Moreover, one might expect a neural system to generalize from one corrected translation to related terms."
}, { "id": 8, "string": "Once a user translates \"bank\" to German \"Bank\" (institution) instead of \"Ufer\" (shore) in a document, the system should also correctly translate \"banks\" to \"Banken\" instead of \"Ufer\" (the plural is identical to the singular in German) in future sentences." }, { "id": 9, "string": "We measure both one-shot vocabulary acquisition for terms that have appeared once in a previous target sentence, as well as zero-shot vocabulary acquisition for terms that have not previously appeared." }, { "id": 10, "string": "Our experimental evaluation shows some surprising results." }, { "id": 11, "string": "Methods that appear to have comparable performance using corpus quality metrics such as BLEU can differ substantially in zero-shot and one-shot vocabulary acquisition." }, { "id": 12, "string": "In addition, we find that fine-tuning a neural model tends to improve one-shot vocabulary recall while degrading zero-shot vocabulary recall." }, { "id": 13, "string": "We evaluate several adaptation techniques on a range of online adaptation datasets." }, { "id": 14, "string": "Fine-tuning applied to all parameters in the NMT model maximizes one-shot acquisition, but shows a worrisome degradation in zero-shot recall." }, { "id": 15, "string": "By contrast, fine-tuning with group lasso regularization, a technique recently proposed to improve the space efficiency of adapted models (Wuebker et al., 2018), achieves an appealing balance of zero-shot and one-shot vocabulary acquisition as well as high corpus-level translation quality." }, { "id": 16, "string": "Measuring Immediate Adaptation Motivation For interactive, adaptive machine translation systems, perceived adaptation performance is a crucial property: an error in the machine translation output which needs to be corrected multiple times can cause frustration, and thus may compromise acceptance of the MT system by human users." }, { "id": 17, "string": "A class of errors that is particularly salient is lexical choice errors for domain-specific lexical items." }, { "id": 18, "string": "In the extreme, NMT systems using subword modeling (Sennrich et al., 2015) can generate \"hallucinated\" words (words that do not exist in the target language), which are especially irritating for users (Lee et al., 2018; Koehn and Knowles, 2017)." }, { "id": 19, "string": "Users of adaptive MT have a reasonable expectation that in-domain vocabulary will be translated correctly after the translation of a term or some related term has been corrected manually." }, { "id": 20, "string": "Arguably, more subtle errors, referring to syntax, word order or more general semantics, are less of a focus for immediate adaptation, as these types of errors are also harder to pinpoint and thus to evaluate 1 (Bentivogli et al., 2016)." }, { "id": 21, "string": "Traditional metrics for evaluating machine translation outputs, e.g." }, { "id": 22, "string": "BLEU and TER, in essence try to measure the similarity of a hypothesized translation to one or more reference translations, taking the full string into account." }, { "id": 23, "string": "Due to significant improvements in MT quality with neural models (Bentivogli et al., 2016) (inter alia), more specialized metrics evaluating certain desired behaviors of systems become more useful for specific tasks." }, { "id": 24, "string": "For example, Wuebker et al."
}, { "id": 25, "string": "(2016) show that NMT models, while being better in most respects, still fall short in the handling of content words in comparison with phrase-based MT." }, { "id": 26, "string": "This observation is also supported by Bentivogli et al." }, { "id": 27, "string": "(2016), who show smaller gains for NMT for translation of nouns, an important category of content words." }, { "id": 28, "string": "Another reason to isolate vocabulary acquisition as an evaluation criterion is that interactive translation often employs local adaptation via prefix-decoding (Knowles and Koehn, 2016; Wuebker et al., 2016), which can allow the system to recover syntactic structure or resolve local ambiguities when given a prefix, but may still suffer from poor handling of unknown or domain-specific vocabulary." }, { "id": 29, "string": "In this work, we therefore focus on translation performance with respect to content words, setting word order and other aspects aside." }, { "id": 30, "string": "Metrics We propose three metrics: one to directly measure one-shot vocabulary acquisition, one to measure zero-shot vocabulary acquisition, and one to measure both." }, { "id": 31, "string": "In all three, we measure the recall of target-language content words so that the metrics can be computed automatically by comparing translation hypotheses to reference translations without the use of models or word alignments 2." }, { "id": 32, "string": "We define content words as those words that are not included in a fixed stopword list, as used for example in query simplification for information retrieval." }, { "id": 33, "string": "Such lists are typically compiled manually and are available for many languages." }, { "id": 34, "string": "3 For western languages, content words are mostly nouns, main verbs, adjectives or adverbs." }, { "id": 35, "string": "For the i-th pair of source sentence and reference translation, i = 1, ..., |G|, of an ordered test corpus G, we define two sets R_{0,i} and R_{1,i} that are a subset of the whole set of unique 4 content words (i.e." }, { "id": 36, "string": "" }, { "id": 37, "string": "" }, { "id": 38, "string": "" }, { "id": 39, "string": "types) of the reference translation for i. R_{0,i} includes a word if its first occurrence in the test set is in the i-th reference of G, and R_{1,i} if its second occurrence in the test set is in the i-th reference of G. The union R_{0,i} ∪ R_{1,i} includes content words occurring for either the first or second time." }, { "id": 40, "string": "To measure zero-shot adaptation in a given hypothesis H_i, also represented as a set of its content words, we propose to evaluate the number of word types that were immediately translated correctly: R0 = |H_i ∩ R_{0,i}| / |R_{0,i}|." }, { "id": 41, "string": "To measure one-shot adaptation, where the system correctly produces a content word after observing 2 In each of the data sets considered in this work, the average number of occurrences of content words ranges between 1.01 and 1.11 per sentence." }, { "id": 42, "string": "We find this sufficiently close to 1 to evaluate in a bag-of-words fashion and not consider alignments." }, { "id": 43, "string": "3 For German we used the list available here: https://github.com/stopwords-iso." }, { "id": 44, "string": "4 All proposed metrics operate on the set level, without clipping (Papineni et al., 2002) or alignment (Banerjee and Lavie, 2005; Kothur et al., 2018), as we have found this simplification effective."
}, { "id": 45, "string": "Figure 1: Example for calculating R0, R1, and R0+1 on a corpus of two sentences; the score rows shown in the figure are 1/1, 2/2, 3/3, with totals 2/4, 2/2, 4/6." }, { "id": 46, "string": "Content words are written in brackets, the corpus-level score is given below the per-segment scores." }, { "id": 47, "string": "In the example, the denominator for R1 is 2 due to the two repeated words dog and bites in the reference." }, { "id": 48, "string": "it exactly once, we propose: R1 = |H_i ∩ R_{1,i}| / |R_{1,i}|." }, { "id": 49, "string": "This principle can be extended to define metrics Rk, k > 1, to allow more \"slack\" in the adaptation, but we leave that investigation to future work." }, { "id": 50, "string": "Finally, we define a metric that measures both zero- and one-shot adaptation: R0+1 = |H_i ∩ [R_{0,i} ∪ R_{1,i}]| / |R_{0,i} ∪ R_{1,i}|." }, { "id": 51, "string": "All metrics can either be calculated for single sentences as described above, or for a full test corpus by summing over all sentences, e.g." }, { "id": 52, "string": "for R0: Σ_{i=1..|G|} |H_i ∩ R_{0,i}| / Σ_{i=1..|G|} |R_{0,i}|." }, { "id": 53, "string": "Figure 1 gives an example calculation of all three metrics across a two-sentence corpus." }, { "id": 54, "string": "Related Work An important line of related work is concerned with estimating the potential adaptability of a system given a source text only, the so-called repetition rate (Cettolo et al., 2014)." }, { "id": 55, "string": "The metric is inspired by BLEU, and uses a sliding window over the source text to count singleton N-grams." }, { "id": 56, "string": "The modus operandi for our metrics is most similar to HTER (Snover et al., 2006), since we are also assuming a single, targeted reference translation 5 for evaluation." }, { "id": 57, "string": "The introduction of NMT brought more aspects of translation quality evaluation into focus, such as discourse-level evaluation (Bawden et al., 2017), or very fine-grained evaluation of specific aspects of the translations (Bentivogli et al., 2016), highlighting the differences between phrase-based and NMT systems." }, { "id": 58, "string": "Online adaptation for (neural) machine translation has been thoroughly explored using BLEU (Turchi et al., 2017), simulated keystroke and mouse action ratio (Barrachina et al., 2009) for effort estimation (Peris and Casacuberta, 2018), word prediction accuracy (Wuebker et al., 2016), and user studies (Denkowski et al., 2014; Karimova et al., 2018) (all inter alia)." }, { "id": 59, "string": "In (Simianer et al., 2016) immediate adaptation for hierarchical phrase-based MT is specifically investigated, but they also evaluate their systems using human-targeted BLEU and TER." }, { "id": 60, "string": "Regularization for segment-wise continued training in NMT has been explored by means of knowledge distillation, and with the group lasso by Wuebker et al." }, { "id": 61, "string": "(2018), as used in this paper." }, { "id": 62, "string": "Most relevant to our work, in the context of document-level adaptation, Kothur et al." }, { "id": 63, "string": "(2018) calculate accuracy for novel words based on an automatic word alignment." }, { "id": 64, "string": "However, they do not focus on zero- and one-shot matches, but instead aggregate counts over the full corpus." }
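The metric definitions above reduce to a few lines of code when hypotheses and references are processed in corpus order. Here is a self-contained Python sketch; the tiny stopword set in the usage example is a stand-in for the stopwords-iso list mentioned in footnote 3.

```python
def content_words(tokens, stopwords):
    return {t.lower() for t in tokens} - stopwords

def adaptation_recall(hypotheses, references, stopwords):
    """Corpus-level R0, R1 and R0+1 as defined in Section 2.2."""
    seen = {}                                # reference occurrences so far, per type
    hit0 = tot0 = hit1 = tot1 = 0
    for hyp, ref in zip(hypotheses, references):
        H = content_words(hyp, stopwords)
        R = content_words(ref, stopwords)
        R0 = {w for w in R if seen.get(w, 0) == 0}  # first occurrence is here
        R1 = {w for w in R if seen.get(w, 0) == 1}  # second occurrence is here
        hit0, tot0 = hit0 + len(H & R0), tot0 + len(R0)
        hit1, tot1 = hit1 + len(H & R1), tot1 + len(R1)
        for w in R:
            seen[w] = seen.get(w, 0) + 1
    safe = lambda n, d: n / d if d else 0.0
    return safe(hit0, tot0), safe(hit1, tot1), safe(hit0 + hit1, tot0 + tot1)

# The two-sentence example from the slides and Figure 1:
hyps = [["The", "dog", "bites", "the", "lady"],
        ["The", "terrier", "bites", "the", "man"]]
refs = [["The", "terrier", "bites", "the", "woman"],
        ["The", "man", "bites", "the", "terrier"]]
print(adaptation_recall(hyps, refs, {"the"}))  # R0 = 2/4, R1 = 2/2, R0+1 = 4/6
```

Running this on the example reproduces the corpus totals from Figure 1: 2/4, 2/2 and 4/6.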
}, { "id": 65, "string": "Online Adaptation NMT systems can be readily adapted by fine-tuning (also called continued training) with the same cross-entropy loss (L) as used for training the parameters of the baseline system, which also serves as the starting point for adaptation (Luong and Manning, 2015)." }, { "id": 66, "string": "Following Turchi et al." }, { "id": 67, "string": "(2017), we perform learning from each example i using (stochastic) gradient descent, using the current source x_i and reference translation y_i as a batch of size 1: θ_i ← θ_{i-1} − γ∇L(θ_{i-1}, x_i, y_i)." }, { "id": 68, "string": "(1) Evaluation is carried out using simulated post-editing (Hardt and Elming, 2010), first translating the source using the model with parameters θ_{i-1}, before performing the update described above with the now revealed reference translation." }, { "id": 69, "string": "The machine translation system effectively only trains for a single iteration on any given data set." }, { "id": 70, "string": "The naïve approach, updating all parameters θ of the NMT model, while being effective, can be infeasible in certain settings 6, since tens of millions of parameters are updated depending on the respective model." }, { "id": 71, "string": "While some areas of a typical NMT model can be stored in a sparse fashion without loss (source and target embeddings), large parts of the model cannot." }, { "id": 72, "string": "We denote this type of adaptation as full." }, { "id": 73, "string": "A light-weight alternative to adaptation of the full parameter set is to introduce a second bias term in the final output layer of the NMT model, which is trained in isolation, freezing the rest of the model (Michel and Neubig, 2018)." }, { "id": 74, "string": "This merely introduces a vector of the size of the output vocabulary." }, { "id": 75, "string": "This method is referred to as bias." }, { "id": 76, "string": "Another alternative is freezing parts of the model, for example determining a subset of parameters by performance on a held-out set (Wuebker et al., 2018)." }, { "id": 77, "string": "In our experiments we use two systems using this method, fixed and top, the former being a pre-determined fixed selection of parameters, and the latter being the topmost encoder and decoder layers in the Transformer NMT model (Vaswani et al., 2017)." }, { "id": 78, "string": "Finally, a data-driven alternative to the fixed freezing method was introduced to NMT by Wuebker et al." }, { "id": 79, "string": "(2018), implementing tensor-wise ℓ1/ℓ2 group lasso regularization, allowing the learning procedure to select a fixed number of parameters after each update." }, { "id": 80, "string": "This setup is referred to as lasso." }, { "id": 81, "string": "Experiments Neural Machine Translation Systems We adapt an English→German NMT system based on the Transformer architecture trained with an in-house NMT framework on about 100M bilingual sentence pairs." }, { "id": 82, "string": "The model has six layers in the encoder, three layers in the decoder, each with eight attention heads with dimensionality 256, distinct input and output embeddings, and vocabulary sizes of around 40,000." }, { "id": 83, "string": "The vocabularies are generated with byte-pair encoding (Sennrich et al., 2015)." }, { "id": 84, "string": "For adaptation we use a learning rate γ of 10^-2 (for the bias adaptation a learning rate of 1.0 is used), no dropout, and no label smoothing."
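Equation (1) together with the simulated post-editing protocol, plus the repeat-until-perplexity rule described just below, amounts to the loop sketched here. The `translate`, `gradient_step` and `perplexity` methods are hypothetical helpers on the NMT model used for illustration, not an actual API from the paper.

```python
def online_adaptation(model, corpus, gamma=1e-2, max_reps=3, ppl_target=2.0):
    """Simulated post-editing: translate with theta_{i-1}, then immediately
    update on the revealed reference (x_i, y_i), one segment at a time."""
    hypotheses = []
    for x_i, y_i in corpus:                      # ordered (source, reference) pairs
        hypotheses.append(model.translate(x_i))  # hypothesis before seeing y_i
        for _ in range(max_reps):                # batch of size 1, eq. (1)
            model.gradient_step(x_i, y_i, lr=gamma)  # theta_i = theta_{i-1} - gamma*grad
            if model.perplexity(x_i, y_i) <= ppl_target:
                break                            # stop once the pair is learned
    return hypotheses
```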
}, { "id": 85, "string": "We use a tensor-wise ℓ2 normalization to 1.0 for all gradients (gradient clipping)." }, { "id": 86, "string": "Updates for a sentence pair are repeated until the perplexity on that sentence pair is ≤ 2.0, for a maximum of three repetitions." }, { "id": 87, "string": "The fixed adaptation scheme, which involves selecting a subset of parameters on held-out data following Wuebker et al." }, { "id": 88, "string": "(2018), uses about two million parameters excluding all embedding matrices, in addition to potentially the full source embeddings, but in practice this is limited to about 1M parameters." }, { "id": 89, "string": "The top scheme only adapts the top layers for both encoder and decoder." }, { "id": 90, "string": "For the lasso adaptation, we allow 1M parameters excluding the embeddings, for which we allow 1M parameters in total selected from all embedding matrices." }, { "id": 91, "string": "This scheme also always includes the previously described second bias term in the final output layer." }, { "id": 92, "string": "Since the proposed metrics operate on words, the machine translation outputs are first converted to full-form words using sentencepiece (Kudo and Richardson, 2018), then tokenized and truecased with the tokenizer and truecaser distributed with the Moses toolkit (Koehn et al., 2007)." }, { "id": 93, "string": "Results Tables 1 and 2 show the performance of different adaptation techniques on the Autodesk dataset (Zhechev, 2012), a public post-editing dataset from the software domain for which incremental adaptation is known to provide large gains for corpus-level metrics." }, { "id": 94, "string": "BLEU, sentence BLEU, and TER scores (Table 1) are similar for full adaptation, sparse adaptation with group lasso, and adaptation of a fixed subset of parameters." }, { "id": 95, "string": "However, in Table 2, lasso substantially outperforms the other methods in zero-shot (R0), and combined zero- and one-shot recall of content words (R0+1)." }, { "id": 96, "string": "Zero-shot recall is considerably degraded relative to the non-adapted baseline for both full and adaptation of a fixed subset of tensors (fixed and top)." }, { "id": 97, "string": "That is, terms never observed before during online training are translated correctly less often than they would be with an unadapted system, despite the data set's consistent domain." }, { "id": 98, "string": "These approaches trade off long-term gains in BLEU and high one-shot recall for low zero-shot recall, which could be frustrating for users who may perceive the degradation in quality for terms appearing for the first time in a document." }, { "id": 99, "string": "The lasso technique is the only one that shows an improvement in R0 over the baseline." }, { "id": 100, "string": "However, lasso has considerably lower one-shot recall compared to the other adaptation methods, implying that it often must observe a translated term more than once to acquire it." }, { "id": 101, "string": "Appendix A shows similar experiments for several other datasets." }, { "id": 102, "string": "Analysis For a better understanding of the results described in the previous section, we conduct an analysis varying the units of the proposed metrics, while focusing on full and lasso adaptation." }, { "id": 103, "string": "For the first variant, only truly novel words are taken into account, i.e." }, { "id": 104, "string": "words in the test set that do not appear in the training data."
}, { "id": 105, "string": "Results for these experiments are depicted in Table 3." }, { "id": 106, "string": "It is apparent that the findings of Table 2 are confirmed, and that relative differences are amplified." }, { "id": 107, "string": "This can be explained by the reduced number of total occurrences considered, which is only 310 words in this data set." }, { "id": 108, "string": "It is also important to note that all of these words are made up of known subwords 7, since our NMT system does not include a copying mechanism and is thus constrained to the target vocabulary." }, { "id": 109, "string": "Further results using the raw subword output 8 of the MT systems are depicted in Table 4: R0 for the lasso method is degraded only slightly below the baseline (-1%, compared to +2% for the regular metric); the findings for R1 and R0+1 remain the same as observed before." }, { "id": 110, "string": "Compared to the results for novel words, this indicates that the improvement in terms of R0 for lasso mostly comes from learning new combinations of subwords." }, { "id": 111, "string": "A discussion of the adaptation behavior over time, with exemplified differences between the metrics, can be found in Appendix B." }, { "id": 112, "string": "Conclusions To summarize: in some cases, the strong gains in corpus-level translation quality achieved by fine-tuning an NMT model come at the expense of zero-shot recall of content words." }, { "id": 113, "string": "This concerning impact of adaptation could affect practical user experience." }, { "id": 114, "string": "Existing regularization methods mitigate this effect to some degree, but there may be more effective techniques for immediate adaptation that have yet to be developed." }, { "id": 115, "string": "The proposed metrics R0, R1, and R0+1 are useful for measuring immediate adaptation performance, which is crucial in adaptive CAT systems." }, { "id": 116, "string": "A Additional Results Table 5 contains results for additional English→German datasets, namely patents (Wäschle and Riezler, 2012) (Patent), transcribed public speeches (Cettolo et al., 2012) (TED), and two proprietary user data sets, one from the financial domain (User 1) and the other being technical documentation (User 2)." }, { "id": 117, "string": "The same pattern is observed in almost all cases: lasso outperforms the other adaptation techniques in zero-shot recall (R0) and combined recall (R0+1), while full has the highest one-shot recall (R1) on two out of five test sets, being a close runner-up to lasso on all others." }, { "id": 118, "string": "Overall, however, we observe that zero-shot recall R0 is degraded by adaptation, while one-shot recall is improved." }, { "id": 119, "string": "We also find that adaptation with the light-weight bias method often does not deviate much from the baseline." }, { "id": 120, "string": "In contrast, the results for the traditional MT metrics are predominantly positive." }, { "id": 121, "string": "For adaptation, the lasso method provides the best trade-off in terms of performance throughout the considered metrics." }, { "id": 122, "string": "B Learning Curves We are also interested in the behavior of the adaptation methods over time." }, { "id": 123, "string": "To this end, in Figure 2, we plot the difference in cumulative scores 9 of two adapted systems (full and lasso) to the baseline for the proposed metrics as well as the BLEU score."
}, { "id": 124, "string": "As evident from comparing the curves for BLEU and R0, the BLEU score and the proposed metric give disparate signals for this data." }, { "id": 125, "string": "Specifically, there are two distinct dips in the curves for R0 (as well as R0+1) and BLEU: 1." }, { "id": 126, "string": "The degradation in R0 around segment 800 is due to significant noise in segment 774, which has a strong impact on the adapted systems, while the baseline system is not affected." }, { "id": 127, "string": "The full system's score drops by about 50% at segment 775 (i.e." }, { "id": 128, "string": "after adaptation) relative to the cumulative score difference at the previous segment and never recovers after that." }, { "id": 129, "string": "2." }, { "id": 130, "string": "The dip in the BLEU score at segment 752, observable for both adapted systems, depicting a relative degradation of about 10%, is due to a pathological repetition of a single character in the output of the adapted MT systems for this segment, which has a large impact on the score." }, { "id": 131, "string": "The dip observed with R0 is also noticeable in BLEU, but much less pronounced at 4% relative for full and 2% relative for lasso." }, { "id": 132, "string": "The dip in BLEU on the other hand is not noticeable with R0, R1, or R0+1." } ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 15 }, { "section": "Motivation", "n": "2.1", "start": 16, "end": 29 }, { "section": "Metrics", "n": "2.2", "start": 30, "end": 53 }, { "section": "Related Work", "n": "3", "start": 54, "end": 64 }, { "section": "Online Adaptation", "n": "4", "start": 65, "end": 80 }, { "section": "Neural Machine Translation Systems", "n": "5.1", "start": 81, "end": 92 }, { "section": "Results", "n": "5.2", "start": 93, "end": 101 }, { "section": "Analysis", "n": "5.3", "start": 102, "end": 111 }, { "section": "Conclusions", "n": "6", "start": 112, "end": 132 } ], "figures": [ { "filename": "../figure/image/1350-Figure1-1.png", "caption": "Figure 1: Example for calculating R0, R1, and R0+1 on a corpus of two sentences. Content words are written in brackets, the corpus-level score is given below the per-segment scores. In the example, the denominator for R1 is 2 due to the two repeated words dog and bites in the reference.", "page": 2, "bbox": { "x1": 110.88, "x2": 486.24, "y1": 61.44, "y2": 124.32 } }, { "filename": "../figure/image/1350-Table5-1.png", "caption": "Table 5: BLEU, sentence-wise BLEU, TER, R0+1, R0, and R1 metrics for a number of data sets, comparing different adaptation methods as described in Section 4. Baseline results are given as absolute scores, results for adaptation are given as relative differences. Best viewed in color.", "page": 7, "bbox": { "x1": 160.79999999999998, "x2": 436.32, "y1": 121.44, "y2": 653.28 } }, { "filename": "../figure/image/1350-Table1-1.png", "caption": "Table 1: Results on the Autodesk test set for traditional MT quality metrics. SBLEU refers to an average of sentence-wise BLEU scores as described by Nakov et al. (2012). The best result in each column is denoted with bold font.", "page": 3, "bbox": { "x1": 331.68, "x2": 501.12, "y1": 62.4, "y2": 175.2 } }, { "filename": "../figure/image/1350-Figure2-1.png", "caption": "Figure 2: Differences in cumulative scores for R0 (top left), R1 (top right), R0+1 (bottom left), and BLEU (bottom right) to the baseline system on the Autodesk test set for full and lasso adaptation. 
The peculiarities discussed in the running text are marked by solid vertical lines (at x = 751 and x = 774).", "page": 8, "bbox": { "x1": 95.52, "x2": 502.08, "y1": 198.72, "y2": 578.88 } }, { "filename": "../figure/image/1350-Table4-1.png", "caption": "Table 4: Results on Autodesk data calculating the metrics with subwords.", "page": 4, "bbox": { "x1": 342.71999999999997, "x2": 490.08, "y1": 191.51999999999998, "y2": 264.0 } }, { "filename": "../figure/image/1350-Table2-1.png", "caption": "Table 2: Results on the Autodesk test set for the proposed metrics R0, R1, and R0+1.", "page": 4, "bbox": { "x1": 106.56, "x2": 255.35999999999999, "y1": 62.4, "y2": 175.2 } }, { "filename": "../figure/image/1350-Table3-1.png", "caption": "Table 3: Results on Autodesk data calculating the metrics only for truly novel content words, i.e. ones that do not occur in the training data.", "page": 4, "bbox": { "x1": 342.71999999999997, "x2": 490.08, "y1": 62.4, "y2": 135.35999999999999 } } ] }, "gem_id": "GEM-SciDuet-chal-79" }, { "slides": { "0": { "title": "Current systems", "text": [ "Spanish text ola mi nombre es hodor", "English text: hi my name is hodor Machine" ], "page_nums": [ 1, 2, 3 ], "images": [] }, "1": { "title": "Unwritten languages", "text": [ "Bantu language, Republic of Congo, ~160K speakers", "~3000 languages with no writing system", "Mboshi text: not available Recognition", "paired with French translations (Godard et al. 2018)", "Efforts to collect speech and translations using mobile apps" ], "page_nums": [ 5, 6 ], "images": [] }, "2": { "title": "Haiti Earthquake 2010", "text": [ "Survivors sent text messages to helpline", "International rescue teams face language barrier", "No automated tools available", "Volunteers from global Haitian diaspora help create parallel text corpora in short time" ], "page_nums": [ 7 ], "images": [] }, "3": { "title": "Are we better prepared in 2019", "text": [ "Moun kwense nan Sakre", "People trapped in Sacred" ], "page_nums": [ 8 ], "images": [] }, "4": { "title": "Can we build a speech to text translation ST system", "text": [ "given as training data:", "Tens of hours of speech paired with text translations", "No source text available" ], "page_nums": [ 9 ], "images": [] }, "5": { "title": "Neural models", "text": [ "Sequence-to-Sequence Weiss et al. 
(2017)", "English text: hi my name is hodor" ], "page_nums": [ 10 ], "images": [] }, "6": { "title": "Spanish speech to English text", "text": [ "Encoder telephone speech (unscripted) realistic noise conditions multiple speakers and dialects crowdsourced English text translations", "Closer to real-world conditions" ], "page_nums": [ 11, 12 ], "images": [] }, "7": { "title": "But", "text": [ "Poor performance in low-resource settings", "# hours of training data (log scale)" ], "page_nums": [ 13 ], "images": [] }, "8": { "title": "Why Spanish English", "text": [ "simulate low-resource settings and test our method", "Later: results on truly low-resource language ---" ], "page_nums": [ 20, 21, 22 ], "images": [] }, "10": { "title": "Pretrain on high resource", "text": [ "300 hours of English audio and text", "Attention *train until convergence" ], "page_nums": [ 24 ], "images": [] }, "11": { "title": "Fine tune on low resource", "text": [ "English audio Spanish audio", "transfer from English ASR", "English text English text", "*train until convergence Attention" ], "page_nums": [ 25, 26 ], "images": [] }, "16": { "title": "Ablation model parameters", "text": [ "Spanish to English, N = 20 hours", "+English ASR: encoder English text English text", "+English ASR: decoder Decoder Decoder", "transferring encoder only parameters works well!", "can pretrain on a language different from both source and target in ST pair" ], "page_nums": [ 35, 36, 37, 38, 39 ], "images": [] }, "17": { "title": "Pretraining on French", "text": [ "Spanish to English, N = 20 hours", "+English ASR: encoder Decoder Decoder", "+French ASR: encoder French text English text", "*only 20 hours of French ASR", "French ASR helps Spanish-English ST" ], "page_nums": [ 40, 41 ], "images": [] }, "18": { "title": "Takeaways", "text": [ "Pretraining on a different language helps", "transfer all model parameters for best gains", "encoder parameters account for most of these", "useful when target vocabulary is different" ], "page_nums": [ 42 ], "images": [] }, "19": { "title": "Mboshi French ST", "text": [ "ST data by Godard et al. 
2018", "~4 hours of speech, paired with French translations", "Bantu language, Republic of Congo" ], "page_nums": [ 43, 44 ], "images": [] }, "20": { "title": "Mboshi French Results", "text": [ "Mboshi to French, N = 4 hours", "*outperformed by a naive baseline" ], "page_nums": [ 45, 46 ], "images": [] }, "21": { "title": "Pretraining on French ASR", "text": [ "Mboshi to French, N = 4 hours", "French text French text", "French ASR helps Mboshi-French ST" ], "page_nums": [ 47, 48, 49 ], "images": [] }, "22": { "title": "Pretraining on English ASR", "text": [ "Mboshi to French, N = 4 hours", "+English ASR: encoder Decoder Decoder", "English text French text", "using encoder trained on a lot more data", "English ASR helps Mboshi-French ST", "baseline Encoder From English ASR", "+French ASR: all Attention", "+French ASR: remaining French text", "combining gives the best gains", "BLEU score is still low but above naive baseline" ], "page_nums": [ 50, 51, 57, 58, 59 ], "images": [] }, "23": { "title": "Pretraining on French and English ASR", "text": [ "French text French text English text" ], "page_nums": [ 54, 55, 56 ], "images": [] }, "27": { "title": "Why does pretraining help", "text": [ "ASR data contains audio from 100s of speakers", "Learning to factor out background noise (?)", "BLEU Baseline +English ASR" ], "page_nums": [ 64 ], "images": [] }, "28": { "title": "Spanish English ST", "text": [ "*results on Fisher test set ...", "Spanish to English, N = 20 hours", "+En ASR: 20h English text" ], "page_nums": [ 65, 66, 67 ], "images": [] }, "29": { "title": "Neural model", "text": [ "yo vive en bronx", "bi-LSTM 1 LSTM 2", "bi-LSTM 2 LSTM 3" ], "page_nums": [ 68, 69 ], "images": [] } }, "paper_title": "Pre-training on High-Resource Speech Recognition Improves Low-Resource Speech-to-Text Translation", "paper_id": "1360", "paper": { "title": "Pre-training on High-Resource Speech Recognition Improves Low-Resource Speech-to-Text Translation", "abstract": "We present a simple approach to improve direct speech-to-text translation (ST) when the source language is low-resource: we pre-train the model on a high-resource automatic speech recognition (ASR) task, and then fine-tune its parameters for ST. We demonstrate that our approach is effective by pre-training on 300 hours of English ASR data to improve Spanish-English ST from 10.8 to 20.2 BLEU when only 20 hours of Spanish-English ST training data are available. Through an ablation study, we find that the pre-trained encoder (acoustic model) accounts for most of the improvement, despite the fact that the shared language in these tasks is the target language text, not the source language audio. Applying this insight, we show that pre-training on ASR helps ST even when the ASR language differs from both source and target ST languages: pre-training on French ASR also improves Spanish-English ST. 
Finally, we show that the approach improves performance on a true low-resource task: pre-training on a combination of English ASR and French ASR improves Mboshi-French ST, where only 4 hours of data are available, from 3.5 to 7.1 BLEU.", "text": [ { "id": 0, "string": "Introduction Speech-to-text Translation (ST) has many potential applications for low-resource languages: for example in language documentation, where the source language is often unwritten or endangered (Besacier et al., 2006; Martin et al., 2015; Adams et al., 2016a,b; Anastasopoulos and Chiang, 2017) ; or in crisis relief, where emergency workers might need to respond to calls or requests in a foreign language (Munro, 2010) ." }, { "id": 1, "string": "Traditional ST is a pipeline of automatic speech recognition (ASR) and machine translation (MT), and thus requires transcribed source audio to train ASR and parallel text to train MT." }, { "id": 2, "string": "These resources are often unavailable for low-resource languages, but for our potential applications, there may be some source language audio paired with target language text translations." }, { "id": 3, "string": "In these scenarios, end-to-end ST is appealing." }, { "id": 4, "string": "Recently, Weiss et al." }, { "id": 5, "string": "(2017) showed that endto-end ST can be very effective, achieving an impressive BLEU score of 47.3 on Spanish-English ST." }, { "id": 6, "string": "But this result required over 150 hours of translated audio for training, still a substantial resource requirement." }, { "id": 7, "string": "By comparison, a similar system trained on only 20 hours of data for the same task achieved a BLEU score of 5.3 (Bansal et al., 2018) ." }, { "id": 8, "string": "Other low-resource systems have similarly low accuracies (Anastasopoulos and Chiang, 2018; Bérard et al., 2018) ." }, { "id": 9, "string": "To improve end-to-end ST in low-resource settings, we can try to leverage other data resources." }, { "id": 10, "string": "For example, if we have transcribed audio in the source language, we can use multi-task learning to improve ST (Anastasopoulos and Chiang, 2018; Weiss et al., 2017; Bérard et al., 2018) ." }, { "id": 11, "string": "But source language transcriptions are unlikely to be available in our scenarios of interest." }, { "id": 12, "string": "Could we improve low-resource ST by leveraging data from a high-resource language?" }, { "id": 13, "string": "For ASR, training a single model on multiple languages can be effective for all of them (Toshniwal et al., 2018b; Deng et al., 2013) ." }, { "id": 14, "string": "For MT, transfer learning (Thrun, 1995) has been very effective: pretraining a model for a high-resource language pair and transferring its parameters to a low-resource language pair when the target language is shared (Zoph et al., 2016; Johnson et al., 2017) ." }, { "id": 15, "string": "Inspired by these successes, we show that low-resource ST can leverage transcribed audio in a high-resource target language, or even a different language altogether, simply by pre-training a model for the high-resource ASR task, and then transferring and fine-tuning some or all of the model's parameters for low-resource ST. We first test our approach using Spanish as the source language and English as the target." }, { "id": 16, "string": "After training an ASR system on 300 hours of English, fine-tuning on 20 hours of Spanish-English yields a BLEU score of 20.2, compared to only 10.8 for an ST model without ASR pre-training." 
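A minimal sketch of the pre-train-then-fine-tune recipe described here, assuming PyTorch-style models; build_model, train_fn, and the data loaders are hypothetical placeholders supplied by the caller (the paper's own implementation uses Chainer).

```python
def transfer_and_finetune(build_model, train_fn, asr_loader, st_loader):
    """Sketch of the recipe: pre-train an encoder-decoder on
    high-resource ASR, then warm-start an identical ST model from the
    ASR weights and fine-tune all parameters on low-resource ST data.
    Models are assumed to be torch.nn.Modules so state dicts can be
    copied directly between the two tasks."""
    asr_model = build_model()           # encoder-decoder with attention
    train_fn(asr_model, asr_loader)     # e.g., 300h English Switchboard
    st_model = build_model()            # same architecture, new task
    st_model.load_state_dict(asr_model.state_dict())  # transfer all params
    train_fn(st_model, st_loader)       # fine-tune, e.g., 20h Spanish-English
    return st_model
```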
}, { "id": 17, "string": "Analyzing this result, we discover that the main benefit of pre-training arises from the transfer of the encoder parameters, which model the input acoustic signal." }, { "id": 18, "string": "In fact, this effect is so strong that we also obtain improvements by pre-training on a language that differs from both the source and the target: pre-training on French and fine-tuning on Spanish-English." }, { "id": 19, "string": "We hypothesize that pre-training the encoder parameters, even on a different language, allows the model to better learn about linguistically meaningful phonetic variation while normalizing over acoustic variability such as speaker and channel differences." }, { "id": 20, "string": "We conclude that the acousticphonetic learning problem, rather than translation itself, is one of the main difficulties in low-resource ST. A final set of experiments confirm that ASR pretraining also helps on another language pair where the input is truly low-resource: Mboshi-French." }, { "id": 21, "string": "Method For both ASR and ST, we use an encoder-decoder model with attention adapted from Weiss et al." }, { "id": 22, "string": "(2017), Bérard et al." }, { "id": 23, "string": "(2018) and Bansal et al." }, { "id": 24, "string": "(2018) , as shown in Figure 1 ." }, { "id": 25, "string": "We use the same model architecture for all our models, allowing us to conveniently transfer parameters between them." }, { "id": 26, "string": "We also constrain the hyper-parameter search to fit a model into a single Titan X GPU, allowing us to maximize available compute resources." }, { "id": 27, "string": "We use a pre-trained English ASR model to initialize training of Spanish-English ST models, and a pre-trained French ASR model to initialize training of Mboshi-French ST models." }, { "id": 28, "string": "During ST training, all model parameters are updated." }, { "id": 29, "string": "In these configurations, the decoder shares the same vocabulary across the ASR and ST tasks." }, { "id": 30, "string": "This is practical for settings where the target text language is highresource with ASR data available." }, { "id": 31, "string": "In settings where both ST languages are lowresource, ASR data may only be available in a third language." }, { "id": 32, "string": "To test whether transfer learning will help in this setting, we use a pre-trained French ASR model to train Spanish-English ST models; and English ASR for Mboshi-French models." }, { "id": 33, "string": "In these cases, the ST languages are different from the ASR language, so we can only transfer the encoder parameters of the ASR model, since the dimensions of the decoder's output softmax layer are indexed by the vocabulary, which is not shared." }, { "id": 34, "string": "1 Sharing only the speech encoder parameters is much easier, since the speech input can be preprocessed in the same manner for all languages." }, { "id": 35, "string": "This form of transfer learning is more flexible, as there are no constraints on the ASR language used." }, { "id": 36, "string": "3 Experimental Setup 3.1 Data sets English ASR." }, { "id": 37, "string": "We use the Switchboard Telephone speech corpus (Godfrey and Holliman, 1993) , which consists of around 300 hours of English speech and transcripts, split into 260k utterances." }, { "id": 38, "string": "The development set consists of 5 hours that we removed from the training set, split into 4k utterances." }, { "id": 39, "string": "French ASR." 
}, { "id": 40, "string": "We use the French speech corpus from the GlobalPhone collection (Schultz, 2002) , which consists of around 20 hours of high quality read speech and transcripts, split into 9k utterances." }, { "id": 41, "string": "The development set consists of 2 hours, split into 800 utterances." }, { "id": 42, "string": "Spanish-English ST. We use the Fisher Spanish speech corpus (Graff et al., 2010) , which consists of 160 hours of telephone speech in a variety of Spanish dialects, split into 140K utterances." }, { "id": 43, "string": "To simulate low-resource conditions, we construct smaller train-ing corpora consisting of 50, 20, 10, 5, or 2.5 hours of data, selected at random from the full training data." }, { "id": 44, "string": "The development and test sets each consist of around 4.5 hours of speech, split into 4K utterances." }, { "id": 45, "string": "We do not use the corresponding Spanish transcripts; our target text consists of English translations that were collected through crowdsourcing (Post et al., 2013 (Post et al., , 2014 ." }, { "id": 46, "string": "Mboshi-French ST. Mboshi is a Bantu language spoken in the Republic of Congo, with around 160,000 speakers." }, { "id": 47, "string": "2 We use the Mboshi-French parallel corpus (Godard et al., 2018) , which consists of around 4 hours of Mboshi speech, split into a training set of 5K utterances and a development set of 500 utterances." }, { "id": 48, "string": "Since this corpus does not include a designated test set, we randomly sampled and removed 200 utterances from training to use as a development set, and use the designated development data as a test set." }, { "id": 49, "string": "Preprocessing Speech." }, { "id": 50, "string": "We convert raw speech input to 13dimensional MFCCs using Kaldi (Povey et al., 2011) ." }, { "id": 51, "string": "3 We also perform speaker-level mean and variance normalization." }, { "id": 52, "string": "Text." }, { "id": 53, "string": "The target text of the Spanish-English data set contains 1.5M word tokens and 17K word types." }, { "id": 54, "string": "If we model text as sequences of words, our model cannot produce any of the unseen word types in the test data and is penalized for this, but it can be trained very quickly (Bansal et al., 2018) ." }, { "id": 55, "string": "If we instead model text as sequences of characters as done by Weiss et al." }, { "id": 56, "string": "(2017) , we would have 7M tokens and 100 types, resulting in a model that is open-vocabulary, but very slow to train (Bansal et al., 2018) ." }, { "id": 57, "string": "As an effective middle ground, we use byte pair encoding (BPE; Sennrich et al., 2016) to segment each word into subwords, each of which is a character or a high-frequency sequence of characters-we use 1000 of these high-frequency sequences." }, { "id": 58, "string": "Since the set of subwords includes the full set of characters, the model is still open vocabulary; but it results in a text with only 1.9M tokens and just over 1K types, which can be trained almost as fast as the word-level model." }, { "id": 59, "string": "The vocabulary for BPE depends on the fre-quency of character sequences, so it must be computed with respect to a specific corpus." }, { "id": 60, "string": "For English, we use the full 160-hour Spanish-English ST target training text." }, { "id": 61, "string": "For French, we use the Mboshi-French ST target training text." }, { "id": 62, "string": "Model architecture for ASR and ST Speech encoder." 
}, { "id": 63, "string": "As shown schematically in Figure 1, MFCC feature vectors, extracted using a window size of 25 ms and a step size of 10ms, are fed into a stack of two CNN layers, with 128 and 512 filters with a filter width of 9 frames each." }, { "id": 64, "string": "In each CNN layer we stride with a factor of 2 along time, apply a ReLU activation (Nair and Hinton, 2010) , and apply batch normalization (Ioffe and Szegedy, 2015) ." }, { "id": 65, "string": "The output of the CNN layers is fed into a three-layer bi-directional long short term memory network (LSTM; Hochreiter and Schmidhuber, 1997); each hidden layer has 512 dimensions." }, { "id": 66, "string": "Text decoder." }, { "id": 67, "string": "At each time step, the decoder chooses the most probable token from the output of a softmax layer produced by a fully-connected layer, which in turn receives the current state of a recurrent layer computed from previous time steps and an attention vector computed over the input." }, { "id": 68, "string": "Attention is computed using the global attentional model with general score function and inputfeeding, as described in Luong et al." }, { "id": 69, "string": "(2015) ." }, { "id": 70, "string": "The predicted token is then fed into a 128-dimensional embedding layer followed by a three-layer LSTM to update the recurrent state; each hidden state has 256 dimensions." }, { "id": 71, "string": "While training, we use the predicted token 20% of the time as input to the next decoder step and the training token for the remaining 80% of the time (Williams and Zipser, 1989) ." }, { "id": 72, "string": "At test time we use beam decoding with a beam size of 5 and length normalization (Wu et al., 2016) with a weight of 0.6." }, { "id": 73, "string": "Training and implementation." }, { "id": 74, "string": "Parameters for the CNN and RNN layers are initialized using the scheme from (He et al., 2015) ." }, { "id": 75, "string": "For the embedding and fully-connected layers, we use Chainer's (Tokui et al., 2015) default initialition." }, { "id": 76, "string": "We regularize using dropout (Srivastava et al., 2014) , with a ratio of 0.3 over the embedding and LSTM layers (Gal, 2016) , and a weight decay rate of 0.0001." }, { "id": 77, "string": "The parameters are optimized using Adam (Kingma and Ba, 2015) , with a starting alpha of 0.001." }, { "id": 78, "string": "Following some preliminary experimentation on our development set, we add Gaussian noise with standard deviation of 0.25 to the MFCC features during training, and drop frames with a probability of 0.10." }, { "id": 79, "string": "After 20 epochs, we corrupt the true decoder labels by sampling a random output label with a probability of 0.3." }, { "id": 80, "string": "Our code is implemented in Chainer (Tokui et al., 2015) and is freely available." }, { "id": 81, "string": "4 Evaluation Metrics." }, { "id": 82, "string": "We report BLEU (Papineni et al., 2002) for all our models." }, { "id": 83, "string": "5 In low-resource settings, BLEU scores tend to be low, difficult to interpret, and poorly correlated with model performance." }, { "id": 84, "string": "This is because BLEU requires exact four-gram matches only, but low four-gram accuracy may obscure a high unigram accuracy and inexact translations that partially capture the semantics of an utterance, and these can still be very useful in situations like language documentation and crisis response." 
}, { "id": 85, "string": "Therefore, we also report word-level unigram precision and recall, taking into account stem, synonym, and paraphrase matches." }, { "id": 86, "string": "To compute these scores, we use METEOR (Lavie and Agarwal, 2007) with default settings for English and French." }, { "id": 87, "string": "6 For example, METEOR assigns \"eat\" a recall of 1 against reference \"eat\" and a recall of 0.8 against reference \"feed\", which it considers a synonym match." }, { "id": 88, "string": "Naive baselines." }, { "id": 89, "string": "We also include evaluation scores for a naive baseline model that predicts the K most frequent words of the training set as a bag of words for each test utterance." }, { "id": 90, "string": "We set K to be the value at which precision/recall are most similar, which is always between 5 and 20 words." }, { "id": 91, "string": "This provides an empirical lower bound on precision and recall, since we would expect any usable model to outperform a system that does not even depend on the input utterance." }, { "id": 92, "string": "We do not compute BLEU for these baselines, since they do not predict sequences, only bags of words." }, { "id": 93, "string": "ment data in Table 1 ." }, { "id": 94, "string": "7 We denote each ASR model by L-Nh, where L is a language code and N is the size of the training set in hours." }, { "id": 95, "string": "For example, en-300h denotes an English ASR model trained on 300 hours of data." }, { "id": 96, "string": "Training ASR models for state-of-the-art performance requires substantial hyper-parameter tuning and long training times." }, { "id": 97, "string": "Since our goal is simply to see whether pre-training is useful, we stopped pretraining our models after around 30 epochs (3 days) to focus on transfer experiments." }, { "id": 98, "string": "As a consequence, our ASR results are far from state-of-the-art: current end-to-end Kaldi systems obtain 16% WER on Switchboard train-dev, and 22.7% WER on the French Globalphone dev set." }, { "id": 99, "string": "8 We believe that better ASR pre-training may produce better ST results, but we leave this for future work." }, { "id": 100, "string": "Spanish-English ST In the following, we denote an ST model by S-T-Nh, where S and T are source and target language codes, and N is the size of the training set in hours." }, { "id": 101, "string": "For example, sp-en-20h denotes a Spanish-English ST model trained using 20 hours of data." }, { "id": 102, "string": "We use the code mb for Mboshi and fr for French." }, { "id": 103, "string": "Figure 2 shows the BLEU and unigram precision/recall scores on the development set for baseline Spanish-English ST models and those trained after initializing with the en-300h model." }, { "id": 104, "string": "Corresponding results on the test set (Table 2) previous results (Bansal et al., 2018) using the same train/test splits, primarily due to better regularization and modeling of subwords rather than words." }, { "id": 105, "string": "Yet transfer learning still substantially improves over these strong baselines." }, { "id": 106, "string": "For sp-en-20h, transfer learning improves dev set BLEU from 10.8 to 19.9, precision from 41% to 51%, and recall from 38% to 49%." }, { "id": 107, "string": "For sp-en-50h, transfer learning improves BLEU from 23.3 to 27.8, precision from 54% to 58%, and recall from 51% to 56%." }, { "id": 108, "string": "Using English ASR to improve ST Very low-resource: 10 hours or less of ST training data." 
}, { "id": 109, "string": "Figure 2 shows that without transfer learning, ST models trained on less than 10 hours of data struggle to learn, with precision/recall scores close to or below that of the naive baseline." }, { "id": 110, "string": "But with transfer learning, we see gains in precision and recall of between 10 and 20 points." }, { "id": 111, "string": "We also see that with transfer learning, a model trained on only 5 hours of ST data achieves a BLEU of 9.1, nearly as good as the 10.8 of a model trained on 20 hours of ST data without transfer learning." }, { "id": 112, "string": "In other words, fine-tuning an English ASR modelwhich is relatively easy to obtain-produces similar results to training an ST model on four times as N = 0 2.5 5 10 20 50 base 0 2.1 1.8 2.1 10.8 22.7 +asr 0.5 5.7 9.1 14.5 20.2 28.2 much data, which may be difficult to obtain." }, { "id": 113, "string": "We even find that in the very low-resource setting of just 2.5 hours of ST data, with transfer learning the model achieves a precision/recall of around 30% and improves by more than 10 points over the naive baseline." }, { "id": 114, "string": "In very low-resource scenarios with time constraints-such as in disaster relief-it is possible that even this level of performance may be useful, since it can be used to spot keywords in speech and can be trained in just three hours." }, { "id": 115, "string": "Sample translations." }, { "id": 116, "string": "Table 3 shows example translations for models sp-en-20h and sp-en-50h with and without transfer learning using en-300h." }, { "id": 117, "string": "Figure 3 shows the attention weights for the last sample utterance in Table 3 ." }, { "id": 118, "string": "For this utterance, the Spanish and English text have a different word order: mucho tiempo occurs in the middle of the speech utterance, and its translation, long time, is at the end of the English reference." }, { "id": 119, "string": "Similarly, vive aquí occurs at the end of the speech utterance, while the translation, living here, is in the middle of the English reference." }, { "id": 120, "string": "The baseline sp-en-50h model translates the words correctly but doesn't get Table 3 , using 50h models with and without pre-training." }, { "id": 121, "string": "The x-axis shows the reference Spanish word positions in the input; the y-axis shows the predicted English subwords." }, { "id": 122, "string": "In the reference, mucho tiempo is translated to long time, and vive aquí to living here, but their order is reversed, and this is reflected in (b)." }, { "id": 123, "string": "the English word order right." }, { "id": 124, "string": "With transfer learning, the model produces a shorter but still accurate translation in the correct word order." }, { "id": 125, "string": "Analysis To understand the source of these improvements, we carried out a set of ablation experiments." }, { "id": 126, "string": "For most of these experiments, we focus on Spanish-English ST with 20 hours of training data, with and without transfer learning." }, { "id": 127, "string": "Transfer learning with selected parameters." }, { "id": 128, "string": "In our first set of experiments, we transferred all parameters of the en-300h model, including the speech encoder CNN and LSTM; the text decoder embedding, LSTM and output layer parameters; and attention parameters." 
}, { "id": 129, "string": "To see which set of parameters has the most impact, we train the sp-en-20h model by transferring only selected parameters from en-300h, and randomly initializing the rest." }, { "id": 130, "string": "The results (Figure 4) show that transferring all parameters is most effective, and that the speech encoder parameters account for most of the gains." }, { "id": 131, "string": "We hypothesize that the encoder learns transferable low-level acoustic features that normalize across variability like speaker and channel differences to better capture meaningful phonetic differences, and that much of this learning is language-independent." }, { "id": 132, "string": "This hypothesis is supported by other work showing the benefits of cross-lingual and multilingual training for speech technology in low-resource target languages (Carlin et al., 2011; Jansen et al., 2010; Deng et al., 2013; Vu et al., 2012; Thomas et al., 2012; Cui et al., 2015; Alumäe et al., 2016; Renshaw et al., 2015; Hermann and Goldwater, 2018) ." }, { "id": 133, "string": "By contrast, transferring only decoder parameters does not improve accuracy." }, { "id": 134, "string": "Since decoder parameters help when used in tandem with encoder parameters, we suspect that the dependency in parameter training order might explain this: the transferred decoder parameters have been trained to expect particular input representations from the encoder, so transferring only the decoder parameters without the encoder might not be useful." }, { "id": 135, "string": "Figure 4 also suggests that models make strong gains early on in the training when using transfer learning." }, { "id": 136, "string": "The sp-en-20h model initialized with all model parameters (+asr:all) from en-300h reaches a higher BLEU score after just 5 epochs (2 hours) of training than the model without transfer learning trained for 60 epochs/20 hours." }, { "id": 137, "string": "This again can be useful in disaster-recovery scenarios, where the time to deploy a working system must be minimized." }, { "id": 138, "string": "Amount of ASR data required." }, { "id": 139, "string": "Figure 5 shows the impact of increasing the amount of English ASR data used on Spanish-English ST performance for two models: sp-en-20h and sp-en-50h." }, { "id": 140, "string": "For sp-en-20h, we see that using en-100h improves performance by almost 6 BLEU points." }, { "id": 141, "string": "By using more English ASR training data (en-300h) model, the BLEU score increases by almost 9 points." }, { "id": 142, "string": "However, for sp-en-50h, we only see improvements when using en-300h." }, { "id": 143, "string": "This implies that transfer learning is most useful when only a few tens of hours of training data are available for ST. As the amount of ST training data increases, the benefits of transfer learning tail off, although it's possible that using even more monolingual data, or improving the training at the ASR step, could extend the benefits to larger ST data sets." }, { "id": 144, "string": "Impact of code-switching." }, { "id": 145, "string": "We also tried using the en-300h ASR model without any fine-tuning to translate Spanish audio to English text." }, { "id": 146, "string": "This model achieved a BLEU score of 1.1, with a precision of 15 and recall of 21." }, { "id": 147, "string": "The non-zero BLEU score indicates that the model is matching some 4-grams in the reference." 
}, { "id": 148, "string": "This seems to be due to code-switching in the Fisher-Spanish speech data set." }, { "id": 149, "string": "Looking at the dev set utterances, we find several examples where the Spanish transcriptions match the English translations, indicating that the speaker switched into English." }, { "id": 150, "string": "For example, there is an utterance whose Spanish transcription and English translation are both \"right yeah\", and this English expression is indeed present in the source audio." }, { "id": 151, "string": "The English ASR model correctly translates this utterance, which is unsurprising since the phrase \"right yeah\" occurs nearly 500 times in Switchboard." }, { "id": 152, "string": "Overall, we find that in nearly 500 of the 4,000 development set utterances (14%), the Spanish transcription and English translations share more than half of their tokens, indicating likely codeswitching." }, { "id": 153, "string": "This suggests that transfer learning from English ASR models might help more than from other languages." }, { "id": 154, "string": "To isolate this effect from transfer learning of language-independent speech features, we carried out a further experiment." }, { "id": 155, "string": "Using French ASR to improve Spanish-English ST In this experiment, we pre-train using French ASR data for a Spanish-English translation task." }, { "id": 156, "string": "Here, we can only transfer the speech encoder parameters, and there should be little if any benefit due to codeswitching." }, { "id": 157, "string": "Because our French data set (20 hours) is much smaller than our English one (300 hours), for a fair comparison we used a 20 hour subset of the English data for pre-training in this experiment." }, { "id": 158, "string": "For both the English and French models, we transferred only the encoder parameters." }, { "id": 159, "string": "Table 4 shows that both the English and French 20-hour pre-trained models improve performance on Spanish-English ST." }, { "id": 160, "string": "The English model works slightly better, as would be predicted given our discussion of code-switching, but the French model is also useful, improving BLEU from 10.8 to 12.5." }, { "id": 161, "string": "This result strengthens the claim that ASR pretraining on a completely distinct third language can help low-resource ST." }, { "id": 162, "string": "Presumably benefits would be much greater if we used a larger ASR data set, as we did with English above." }, { "id": 163, "string": "In this experiment, the French pre-trained model used a French BPE output vocabulary, distinct from the English BPE vocabulary used in the ST system." }, { "id": 164, "string": "In the future it would be interesting to try combining the French and English text to create a combined output vocabulary, which would allow transferring both the encoder and decoder parameters, and may be useful for translating names or cognates." }, { "id": 165, "string": "More generally, it would also be possible to pre-train on multiple languages simultaneously using a shared BPE vocabulary." }, { "id": 166, "string": "There is evidence that speech features trained on multiple languages transfer better than those trained on the same amount of data from a single language (Hermann and Goldwater, 2018), so multilingual pretraining for ST could improve results." }, { "id": 167, "string": "baseline +fr-20h +en-20h sp-en-20h 10.8 12.5 13.2 Table 5 shows the ST model scores for Mboshi-French with and without using transfer learning." 
}, { "id": 168, "string": "The first two rows fr-top-8w, fr-top-10w, show precision and recall scores for the naive baselines where we predict the top 8 or 10 most frequent French words in the Mboshi-French training set." }, { "id": 169, "string": "These show that a precision/recall in the low 20s is easy to achieve, although with no n-gram matches (0 BLEU)." }, { "id": 170, "string": "The pre-trained ASR models by themselves (next two lines) are much worse." }, { "id": 171, "string": "The baseline model trained only on ST data actually has lower precision/recall than the naive baseline, although its non-zero BLEU score indicates that it is able to correctly predict some n-grams." }, { "id": 172, "string": "We see comparable precision/recall to the naive baseline with improvements in BLEU by transferring either French ASR parameters (both encoder and decoder, fr-20h) or English ASR parameters (encoder only, en-300h)." }, { "id": 173, "string": "Finally, to achieve the benefits of both the larger training set size for the encoder and the matching language of the decoder, we tried transferring the encoding parameters from the en-300h model and the decoding parameters from the fr-20h model." }, { "id": 174, "string": "This configuration (en+fr) gives us the best evaluation scores on all metrics, and highlights the flexibility of our framework." }, { "id": 175, "string": "Nevertheless, the 4-hour scenario is clearly a very challenging one." }, { "id": 176, "string": "Conclusion This paper introduced the idea of pre-training an end-to-end speech translation system involving a low-resource language using ASR training data from a higher-resource language." }, { "id": 177, "string": "We showed that large gains are possible: for example, we achieved an improvement of 9 BLEU points for a Spanish-English ST model with 20 hours of parallel data and 300 hours of English ASR data." }, { "id": 178, "string": "Moreover, the pre-trained model trains faster than the baseline, achieving higher BLEU in only a couple of hours, while the baseline trains for more than a day." }, { "id": 179, "string": "We also showed that these methods can be used effectively on a real low-resource language, Mboshi, with only 4 hours of parallel data." }, { "id": 180, "string": "The very small size of the data set makes the task challenging, but by combining parameters from an English encoder and French decoder, we outperformed baseline models to obtain a BLEU score of 7.1 and precision/recall of about 25%." }, { "id": 181, "string": "We believe ours is the first paper to report word-level BLEU scores on this data set." }, { "id": 182, "string": "Our analysis indicates that, other things being equal, transferring both encoder and decoder parameters works better than just transferring one or the other." }, { "id": 183, "string": "However, transferring the encoder parameters is where most of the benefit comes from." }, { "id": 184, "string": "Pre-training using a large ASR corpus from a mismatched language will therefore probably work better than using a smaller ASR corpus that matches the output language." }, { "id": 185, "string": "Our analysis suggests several avenues for further exploration." 
}, { "id": 186, "string": "On the speech side, it might be even more effective to use multilingual training; or to replace the MFCC input features with pre-trained multilingual features, or features that are targeted to low-resource multispeaker settings (Kamper et al., , 2017 Thomas et al., 2012; Cui et al., 2015; Renshaw et al., 2015) ." }, { "id": 187, "string": "On the language modeling side, simply transferring decoder parameters from an ASR model did not work; it might work better to use pre-trained decoder parameters from a language model, as proposed by Ramachandran et al." }, { "id": 188, "string": "(2017) , or shallow fusion (Gülçehre et al., 2015; Toshniwal et al., 2018a) , which interpolates a pre-trained language model during beam search." }, { "id": 189, "string": "In these methods, the decoder parameters are independent, and can therefore be used on their own." }, { "id": 190, "string": "We plan to explore these strategies in future work." } ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 20 }, { "section": "Method", "n": "2", "start": 21, "end": 48 }, { "section": "Preprocessing", "n": "3.2", "start": 49, "end": 61 }, { "section": "Model architecture for ASR and ST", "n": "3.3", "start": 62, "end": 80 }, { "section": "Evaluation", "n": "3.4", "start": 81, "end": 99 }, { "section": "Spanish-English ST", "n": "5", "start": 100, "end": 107 }, { "section": "Using English ASR to improve ST", "n": "5.1", "start": 108, "end": 124 }, { "section": "Analysis", "n": "5.2", "start": 125, "end": 154 }, { "section": "Using French ASR to improve", "n": "5.3", "start": 155, "end": 175 }, { "section": "Conclusion", "n": "7", "start": 176, "end": 190 } ], "figures": [ { "filename": "../figure/image/1360-Figure4-1.png", "caption": "Figure 4: Fisher development set training curves (reported using BLEU) for sp-en-20h using selected parameters from en-300h: none (base); encoder CNN only (+asr:cnn); encoder CNN and LSTM only (+asr:enc); decoder only (+asr:dec); and all: encoder, attention, and decoder (+asr:all). These scores do not use beam search and are therefore lower than the best scores reported in Figure 2.", "page": 5, "bbox": { "x1": 320.15999999999997, "x2": 512.64, "y1": 64.8, "y2": 191.51999999999998 } }, { "filename": "../figure/image/1360-Figure3-1.png", "caption": "Figure 3: Attention plots for the final example in Table 3, using 50h models with and without pre-training. The x-axis shows the reference Spanish word positions in the input; the y-axis shows the predicted English subwords. In the reference, mucho tiempo is translated to long time, and vive aquı́ to living here, but their order is reversed, and this is reflected in (b).", "page": 5, "bbox": { "x1": 89.75999999999999, "x2": 272.15999999999997, "y1": 61.44, "y2": 369.12 } }, { "filename": "../figure/image/1360-Figure1-1.png", "caption": "Figure 1: Encoder-decoder with attention model architecture for both ASR and ST. 
The encoder input is the Spanish speech utterance claro, translated as clearly, represented as BPE (subword) units.", "page": 1, "bbox": { "x1": 317.76, "x2": 509.28, "y1": 69.6, "y2": 221.28 } }, { "filename": "../figure/image/1360-Figure5-1.png", "caption": "Figure 5: Spanish-to-English BLEU scores on Fisher dev set, with 0h (no transfer learning), 100h and 300h of English ASR data used.", "page": 6, "bbox": { "x1": 86.39999999999999, "x2": 275.03999999999996, "y1": 65.75999999999999, "y2": 157.92 } }, { "filename": "../figure/image/1360-Table4-1.png", "caption": "Table 4: Fisher dev set BLEU scores for sp-en-20h. baseline: model without transfer learning. Last two columns: Using encoder parameters from French ASR (+fr-20h), and English ASR (+en-20h).", "page": 7, "bbox": { "x1": 76.8, "x2": 285.12, "y1": 62.4, "y2": 102.24 } }, { "filename": "../figure/image/1360-Table5-1.png", "caption": "Table 5: Mboshi-to-French translation scores, with and without ASR pre-training. Pr. is the precision, and Rec. the recall score. fr-top-8w and fr-top-10w are naive baselines that, respectively, predict the 8 or 10 most frequent training words. For en + fr, we use encoder parameters from en-300h and attention+decoder parameters from fr-20h", "page": 7, "bbox": { "x1": 72.0, "x2": 290.4, "y1": 179.04, "y2": 319.2 } }, { "filename": "../figure/image/1360-Table1-1.png", "caption": "Table 1: Word Error Rate (WER, in %) for the ASR models used as pretraining, computed on Switchboard train-dev for English and Globalphone dev for French.", "page": 3, "bbox": { "x1": 317.76, "x2": 515.04, "y1": 62.4, "y2": 102.24 } }, { "filename": "../figure/image/1360-Table2-1.png", "caption": "Table 2: BLEU scores for Spanish-English ST on the Fisher test set, usingN hours of training data. base: no transfer learning. +asr: using model parameters from English ASR (en-300h).", "page": 4, "bbox": { "x1": 306.71999999999997, "x2": 526.0799999999999, "y1": 62.4, "y2": 116.16 } }, { "filename": "../figure/image/1360-Table3-1.png", "caption": "Table 3: Example translations on selected sentences from the Fisher development set, with stem-level ngram matches to the reference sentence underlined. 20h and 50h are Spanish-English models without pretraining; 20h+asr and 50h+asr are pre-trained on 300 hours of English ASR.", "page": 4, "bbox": { "x1": 306.71999999999997, "x2": 526.56, "y1": 190.56, "y2": 337.91999999999996 } }, { "filename": "../figure/image/1360-Figure2-1.png", "caption": "Figure 2: (top) BLEU and (bottom) Unigram precision/recall for Spanish-English ST models computed on Fisher dev set. base indicates no transfer learning; +asr are models trained by fine-tuning en-300h model parameters. 
naive baseline indicates the score when we predict the 15 most frequent English words in the training set.", "page": 4, "bbox": { "x1": 82.56, "x2": 280.32, "y1": 65.75999999999999, "y2": 315.36 } } ] }, "gem_id": "GEM-SciDuet-chal-80" }, { "slides": { "0": { "title": "Semantic Parsing", "text": [ "h h h ?", "Introduction Semantic parser Abstract examples Results Conclusions" ], "page_nums": [ 1, 7, 10 ], "images": [] }, "3": { "title": "Problems with Weak Supervision", "text": [ "Introduction Semantic parser Abstract examples Results Conclusions", "Spurious programs (Pasupat and Liang, 2016; Guu et al.," ], "page_nums": [ 4, 5 ], "images": [] }, "4": { "title": "CNLVR Cuhr et al 2017", "text": [ "xz :There is a small yellow item not touching any wall", "Introduction Semantic parser Abstract examples > Results Conclusions" ], "page_nums": [ 6 ], "images": [] }, "5": { "title": "Insight", "text": [ "Introduction Semantic parser Abstract examples Results Conclusions" ], "page_nums": [ 8 ], "images": [] }, "6": { "title": "Contributions", "text": [ "Data augmentation Abstract cache", "helps search tackles spuriousness", "Introduction Semantic parser Abstract examples Results Conclusions" ], "page_nums": [ 9 ], "images": [] }, "7": { "title": "Logical Program", "text": [ "Introduction Semantic parser Abstract examples Results Conclusions" ], "page_nums": [ 11 ], "images": [] }, "10": { "title": "Abstraction", "text": [ "Introduction Semantic parser Abstract examples Results Conclusions" ], "page_nums": [ 14, 16 ], "images": [ "figure/image/1363-Table3-1.png" ] }, "14": { "title": "Abstract Cache", "text": [ "Introduction Semantic parser Abstract examples Results Conclusions" ], "page_nums": [ 19 ], "images": [ "figure/image/1363-Figure3-1.png" ] }, "15": { "title": "Reward Tying", "text": [ "size: 20}, . xz: There is a oi yellow item not touching any wall", "50% Spurious amp Y :True", "Introduction Semantic parser Abstract examples Results Conclusions 21", "a :There is a small yellow item * [Hytoe: . a Black, ty) oA" ], "page_nums": [ 20, 21 ], "images": [] }, "18": { "title": "Results Public test set", "text": [ "Test-P Accuracy Test-P Consistency", "Majority MaxEnt Sup. Sup.+Rerank W.Sup. W.Sup.+Rerank", "Introduction Semantic parser Abstract examples Results Conclusions" ], "page_nums": [ 24 ], "images": [] }, "19": { "title": "Ablations", "text": [ "Abstract weakly supervised parser", "Introduction Semantic parser Abstract examples Results Conclusions", "Dev Accuracy Dev Consistency", "-Abstraction -Data augment. -Beam cache W.Sup.+Rerank" ], "page_nums": [ 25, 26 ], "images": [] }, "20": { "title": "Conclusions", "text": [ "Similar ideas in: Dong and Lapata (2018) and Zhang et al.", "Automation would be useful" ], "page_nums": [ 27, 28 ], "images": [] } }, "paper_title": "Weakly Supervised Semantic Parsing with Abstract Examples", "paper_id": "1363", "paper": { "title": "Weakly Supervised Semantic Parsing with Abstract Examples", "abstract": "Training semantic parsers from weak supervision (denotations) rather than strong supervision (programs) complicates training in two ways. First, a large search space of potential programs needs to be explored at training time to find a correct program. Second, spurious programs that accidentally lead to a correct denotation add noise to training. 
In this work we propose that in closed worlds with clear semantic types, one can substantially alleviate these problems by utilizing an abstract representation, where tokens in both the language utterance and program are lifted to an abstract form. We show that these abstractions can be defined with a handful of lexical rules and that they result in sharing between different examples that alleviates the difficulties in training. To test our approach, we develop the first semantic parser for CNLVR, a challenging visual reasoning dataset, where the search space is large and overcoming spuriousness is critical, because denotations are either TRUE or FALSE, and thus random programs are likely to lead to a correct denotation. Our method substantially improves performance, and reaches 82.5% accuracy, a 14.7% absolute accuracy improvement compared to the best reported accuracy so far.", "text": [ { "id": 0, "string": "Introduction The goal of semantic parsing is to map language utterances to executable programs." }, { "id": 1, "string": "Early work on statistical learning of semantic parsers utilized supervised learning, where training examples included pairs of language utterances and programs (Zelle and Mooney, 1996; Kate et al., 2005; Zettlemoyer and Collins, 2005, 2007)." }, { "id": 2, "string": "Figure 1: Overview of our visual reasoning setup for the CNLVR dataset." }, { "id": 3, "string": "Given an image rendered from a KB k and an utterance x, our goal is to parse x to a program z that results in the correct denotation y." }, { "id": 4, "string": "Our training data includes (x, k, y) triplets." }, { "id": 6, "string": "However, collecting such training examples at scale has quickly turned out to be difficult, because expert annotators who are familiar with formal languages are required." }, { "id": 7, "string": "This has led to a body of work on weakly-supervised semantic parsing (Clarke et al., 2010; Liang et al., 2011; Krishnamurthy and Mitchell, 2012; Kwiatkowski et al., 2013; Berant et al., 2013; Cai and Yates, 2013)." }, { "id": 8, "string": "In this setup, training examples correspond to utterance-denotation pairs, where a denotation is the result of executing a program against the environment (see Fig. 1)." }, { "id": 10, "string": "Naturally, collecting denotations is much easier, because it can be performed by non-experts." }, { "id": 11, "string": "Training semantic parsers from denotations rather than programs complicates training in two ways: (a) Search: The algorithm must learn to search through the huge space of programs at training time, in order to find the correct program." }, { "id": 12, "string": "This is a difficult search problem due to the combinatorial nature of the search space." }, { "id": 13, "string": "(b) Spuriousness: Incorrect programs can lead to correct denotations, and thus the learner can go astray based on these programs." }, { "id": 14, "string": "Of the two mentioned problems, spuriousness has attracted relatively less attention (Pasupat and Liang, 2016; Guu et al., 2017)." }, { "id": 15, "string": "Recently, the Cornell Natural Language for Visual Reasoning corpus (CNLVR) was released (Suhr et al., 2017), and has presented an opportunity to better investigate the problem of spuriousness." }, { "id": 16, "string": "In this task, an image with boxes that contain objects of various shapes, colors and sizes is shown."
}, { "id": 17, "string": "Each image is paired with a complex natural language statement, and the goal is to determine whether the statement is true or false (Fig." }, { "id": 18, "string": "1) ." }, { "id": 19, "string": "The task comes in two flavors, where in one the input is the image (pixels), and in the other it is the knowledge-base (KB) from which the image was synthesized." }, { "id": 20, "string": "Given the KB, it is easy to view CNLVR as a semantic parsing problem: our goal is to translate language utterances into programs that will be executed against the KB to determine their correctness (Johnson et al., 2017b; Hu et al., 2017) ." }, { "id": 21, "string": "Because there are only two return values, it is easy to generate programs that execute to the right denotation, and thus spuriousness is a major problem compared to previous datasets." }, { "id": 22, "string": "In this paper, we present the first semantic parser for CNLVR." }, { "id": 23, "string": "Semantic parsing can be coarsely divided into a lexical task (i.e., mapping words and phrases to program constants), and a structural task (i.e., mapping language composition to program composition operators)." }, { "id": 24, "string": "Our core insight is that in closed worlds with clear semantic types, like spatial and visual reasoning, we can manually construct a small lexicon that clusters language tokens and program constants, and create a partially abstract representation for utterances and programs (Table 1) in which the lexical problem is substantially reduced." }, { "id": 25, "string": "This scenario is ubiquitous in many semantic parsing applications such as calendar, restaurant reservation systems, housing applications, etc: the formal language has a compact semantic schema and a well-defined typing system, and there are canonical ways to express many program constants." }, { "id": 26, "string": "We show that with abstract representations we can share information across examples and better tackle the search and spuriousness challenges." }, { "id": 27, "string": "By pulling together different examples that share the same abstract representation, we can identify programs that obtain high reward across multiple examples, thus reducing the problem of spuriousness." }, { "id": 28, "string": "This can also be done at search time, by augmenting the search state with partial programs that have been shown to be useful in earlier iterations." }, { "id": 29, "string": "Moreover, we can annotate a small number of abstract utterance-program pairs, and automatically generate training examples, that will be used to warm-start our model to an initialization point in which search is able to find correct programs." }, { "id": 30, "string": "We develop a formal language for visual reasoning, inspired by Johnson et al." }, { "id": 31, "string": "(2017b) , and train a semantic parser over that language from weak supervision, showing that abstract examples substantially improve parser accuracy." }, { "id": 32, "string": "Our parser obtains an accuracy of 82.5%, a 14.7% absolute accuracy improvement compared to stateof-the-art." }, { "id": 33, "string": "All our code is publicly available at https://github.com/udiNaveh/ nlvr_tau_nlp_final_proj." 
}, { "id": 34, "string": "Setup Problem Statement Given a training set of N examples {(x i , k i , y i )} N i=1 , where x i is an utterance, k i is a KB describing objects in an image and y i ∈ {TRUE, FALSE} denotes whether the utterance is true or false in the KB, our goal is to learn a semantic parser that maps a new utterance x to a program z such that when z is executed against the corresponding KB k, it yields the correct denotation y (see Fig." }, { "id": 35, "string": "1 )." }, { "id": 36, "string": "Programming language The original KBs in CNLVR describe an image as a set of objects, where each object has a color, shape, size and location in absolute coordinates." }, { "id": 37, "string": "We define a programming language over the KB that is more amenable to spatial reasoning, inspired by work on the CLEVR dataset (Johnson et al., 2017b) ." }, { "id": 38, "string": "This programming language provides access to functions that allow us to check the size, shape, and color of an object, to check whether it is touching a wall, to obtain sets of items that are above and below a certain set of items, etc." }, { "id": 39, "string": "1 More formally, a program is a sequence of tokens describing a possibly recursive sequence of function applications in prefix notation." }, { "id": 40, "string": "Each token is either a function with fixed arity (all functions have either one or two arguments), a constant, a variable or a λ term used to define Boolean functions." }, { "id": 41, "string": "Functions, constants and variables have one of the following x: \"There are exactly 3 yellow squares touching the wall.\"" }, { "id": 42, "string": "z: Equal(3, Count(Filter(ALL ITEMS, λx." }, { "id": 43, "string": "And (And (IsYellow(x), IsSquare(x), IsTouchingWall(x)))))) x: \"There are C-QuantMod C-Num C-Color C-Shape touching the wall.\"" }, { "id": 44, "string": "z: C-QuantMod(C-Num, Count(Filter(ALL ITEMS, λx." }, { "id": 45, "string": "And (And (IsC-Color(x), IsC-Shape(x), IsTouchingWall(x)))))) Table 1 : An example for an utterance-program pair (x, z) and its abstract counterpart (x,z) x: \"There is a small yellow item not touching any wall.\"" }, { "id": 46, "string": "z: Exist(Filter(ALL ITEMS, λx.And(And(IsYellow(x), IsSmall(x)), Not(IsTouchingWall(x, Side.Any))))) x: \"One tower has a yellow base.\"" }, { "id": 47, "string": "z: GreaterEqual(1, Count(Filter(ALL ITEMS, λx.And(IsYellow(x), IsBottom(x))))) Table 2 : Examples for utterance-program pairs." }, { "id": 48, "string": "Commas and parenthesis provided for readability only." }, { "id": 49, "string": "atomic types: Int, Bool, Item, Size, Shape, Color, Side (sides of a box in the image); or a composite type Set(?" }, { "id": 50, "string": "), and Func(?,?)." }, { "id": 51, "string": "Valid programs have a return type Bool." }, { "id": 52, "string": "Tables 1 and 2 provide examples for utterances and their correct programs." }, { "id": 53, "string": "The supplementary material provides a full description of all program tokens, their arguments and return types." }, { "id": 54, "string": "Unlike CLEVR, CNLVR requires substantial set-theoretic reasoning (utterances refer to various aspects of sets of items in one of the three boxes in the image), which required extending the language described by Johnson et al." }, { "id": 55, "string": "(2017b) to include set operators and lambda abstraction." 
}, { "id": 56, "string": "We manually sampled 100 training examples from the training data and estimate that roughly 95% of the utterances in the training data can be expressed with this programming language." }, { "id": 57, "string": "Model We base our model on the semantic parser of Guu et al." }, { "id": 58, "string": "(2017) ." }, { "id": 59, "string": "In their work, they used an encoderdecoder architecture (Sutskever et al., 2014) to define a distribution p θ (z | x)." }, { "id": 60, "string": "The utterance x is encoded using a bi-directional LSTM (Hochreiter and Schmidhuber, 1997 ) that creates a contextualized representation h i for every utterance token x i , and the decoder is a feed-forward network combined with an attention mechanism over the encoder outputs (Bahdanau et al., 2015) ." }, { "id": 61, "string": "The feedforward decoder takes as input the last K tokens that were decoded." }, { "id": 62, "string": "More formally the probability of a program is the product of the probability of its tokens given the history: p θ (z | x) = t p θ (z t | x, z 1:t−1 ), and the probability of a decoded token is computed as follows." }, { "id": 63, "string": "First, a Bi-LSTM encoder converts the input sequence of utterance embeddings into a sequence of forward and backward states h {F,B} 1 , ." }, { "id": 64, "string": "." }, { "id": 65, "string": "." }, { "id": 66, "string": ", h {F,B} |x| ." }, { "id": 67, "string": "The utterance representation x isx = [h F |x| ; h B 1 ]." }, { "id": 68, "string": "Then decoding produces the program token-by-token: q t = ReLU(W q [x;v; z t−K−1:t−1 ]), α t,i ∝ exp(q t W α h i ) , c t = i α t,i h i , p θ (z t | x, z 1:t−1 ) ∝ exp(φ zt W s [q t ; c t ]), where φ z is an embedding for program token z, v is a bag-of-words vector for the tokens in x, z i:j = (z i , ." }, { "id": 69, "string": "." }, { "id": 70, "string": "." }, { "id": 71, "string": ", z j ) is a history vector of size K, the matrices W q , W α , W s are learned parameters (along with the LSTM parameters and embedding matrices), and ';' denotes concatenation." }, { "id": 72, "string": "Search: Searching through the large space of programs is a fundamental challenge in semantic parsing." }, { "id": 73, "string": "To combat this challenge we apply several techniques." }, { "id": 74, "string": "First, we use beam search at decoding time and when training from weak supervision (see Sec." }, { "id": 75, "string": "4), similar to prior work Guu et al., 2017) ." }, { "id": 76, "string": "At each decoding step we maintain a beam B of program prefixes of length n, expand them exhaustively to programs of length n+1 and keep the top-|B| program prefixes with highest model probability." }, { "id": 77, "string": "Second, we utilize the semantic typing system to only construct programs that are syntactically valid, and substantially prune the program search space (similar to type constraints in Krishnamurthy et al." }, { "id": 78, "string": "(2017) 2017) )." }, { "id": 79, "string": "We maintain a stack that keeps track of the expected semantic type at each decoding step." }, { "id": 80, "string": "The stack is initialized with the type Bool." }, { "id": 81, "string": "Then, at each decoding step, only tokens that return the semantic type at the top of the stack are allowed, the stack is popped, and if the decoded token is a function, the semantic types of its arguments are pushed to the stack." 
}, { "id": 82, "string": "This dramatically reduces the search space and guarantees that only syntactically valid programs will be produced." }, { "id": 83, "string": "Fig." }, { "id": 84, "string": "2 illustrates the state of the stack when decoding a program for an input utterance." }, { "id": 85, "string": "Given the constrains on valid programs, our model p θ (z | x) is defined as: t p θ (z t | x, z 1:t−1 ) · 1(z t | z 1:t−1 ) z p θ (z | x, z 1:t−1 ) · 1(z | z 1:t−1 ) , where 1(z t | z 1:t−1 ) indicates whether a certain program token is valid given the program prefix." }, { "id": 86, "string": "Discriminative re-ranking: The above model is a locally-normalized model that provides a distribution for every decoded token, and thus might suffer from the label bias problem (Andor et al., 2016; Lafferty et al., 2001) ." }, { "id": 87, "string": "Thus, we add a globally-normalized re-ranker p ψ (z | x) that scores all |B| programs in the final beam produced by p θ (z | x)." }, { "id": 88, "string": "Our globally-normalized model is: p g ψ (z | x) ∝ exp(s ψ (x, z)), and is normalized over all programs in the beam." }, { "id": 89, "string": "The scoring function s ψ (x, z) is a neural network with identical architecture to the locallynormalized model, except that (a) it feeds the decoder with the candidate program z and does not generate it." }, { "id": 90, "string": "(b) the last hidden state is inserted to a feed-forward network whose output is s ψ (x, z)." }, { "id": 91, "string": "Our final ranking score is p θ (z|x)p g ψ (z | x)." }, { "id": 92, "string": "Training We now describe our basic method for training from weak supervision, which we extend upon in Sec." }, { "id": 93, "string": "5 using abstract examples." }, { "id": 94, "string": "To use weak supervision, we treat the program z as a latent variable that is approximately marginalized." }, { "id": 95, "string": "To describe the objective, define R(z, k, y) ∈ {0, 1} to be one if executing program z on KB k results in denotation y, and zero otherwise." }, { "id": 96, "string": "The objective is then to maximize p(y | x) given by: z∈Z p θ (z | x)p(y | z, k) = z∈Z p θ (z | x)R(z, k, y) ≈ z∈B p θ (z | x)R(z, k, y) where Z is the space of all programs and B ⊂ Z are the programs found by beam search." }, { "id": 97, "string": "In most semantic parsers there will be relatively few z that generate the correct denotation y." }, { "id": 98, "string": "However, in CNLVR, y is binary, and so spuriousness is a central problem." }, { "id": 99, "string": "To alleviate it, we utilize a property of CNLVR: the same utterance appears 4 times with 4 different images." }, { "id": 100, "string": "2 If a program is spurious it is likely that it will yield the wrong denotation in one of those 4 images." }, { "id": 101, "string": "Thus, we can re-define each training example to be (x, {(k j , y j )} 4 j=1 ), where each utterance x is paired with 4 different KBs and the denotations of the utterance with respect to these KBs." }, { "id": 102, "string": "Then, we maximize p({y j } 4 j=1 | x, ) by maximizing the objective above, except that R(z, {k j , y j } 4 j=1 ) = 1 iff the denotation of z is correct for all four KBs." }, { "id": 103, "string": "This dramatically reduces the problem of spuriousness, as the chance of randomly obtaining a correct denotation goes down from 1 2 to 1 16 ." 
}, { "id": 104, "string": "This is reminiscent of Pasupat and Liang (2016) , where random permutations of Wikipedia tables were shown to crowdsourcing workers to eliminate spurious programs." }, { "id": 105, "string": "We train the discriminative ranker analogously by maximizing the probability of programs with correct denotation z∈B p g ψ (z | x)R(z, k, y)." }, { "id": 106, "string": "This basic training method fails for CNLVR (see Sec." }, { "id": 107, "string": "6), due to the difficulties of search and spuriousness." }, { "id": 108, "string": "Thus, we turn to learning from abstract examples, which substantially reduce these problems." }, { "id": 109, "string": "Learning from Abstract Examples The main premise of this work is that in closed, well-typed domains such as visual reasoning, the main challenge is handling language compositionality, since questions may have a complex and nested structure." }, { "id": 110, "string": "Conversely, the problem of mapping lexical items to functions and constants in the programming language can be substantially alleviated by taking advantage of the compact KB schema and typing system, and utilizing a Utterance Program Cluster # \"yellow\" IsYellow C-Color 3 \"big\" IsBig C-Size 3 \"square\" IsSquare C-Shape 4 \"3\" 3 C-Num 2 \"exactly\" EqualInt C-QuantMod 5 \"top\" Side.Top C-Location 2 \"above\" GetAbove C-SpaceRel 6 Total: 25 Table 3 : Example mappings from utterance tokens to program tokens for the seven clusters used in the abstract representation." }, { "id": 111, "string": "The rightmost column counts the number of mapping in each cluster, resulting in a total of 25 mappings." }, { "id": 112, "string": "small lexicon that maps prevalent lexical items into typed program constants." }, { "id": 113, "string": "Thus, if we abstract away from the actual utterance into a partially abstract representation, we can combat the search and spuriousness challenges as we can generalize better across examples in small datasets." }, { "id": 114, "string": "Consider the utterances: 1." }, { "id": 115, "string": "\"There are exactly 3 yellow squares touching the wall.\"" }, { "id": 116, "string": "2." }, { "id": 117, "string": "\"There are at least 2 blue circles touching the wall.\"" }, { "id": 118, "string": "While the surface forms of these utterances are different, at an abstract level they are similar and it would be useful to leverage this similarity." }, { "id": 119, "string": "We therefore define an abstract representation for utterances and logical forms that is suitable for spatial reasoning." }, { "id": 120, "string": "We define seven abstract clusters (see Table 3 ) that correspond to the main semantic types in our domain." }, { "id": 121, "string": "Then, we associate each cluster with a small lexicon that contains language-program token pairs associated with this cluster." }, { "id": 122, "string": "These mappings represent the canonical ways in which program constants are expressed in natural language." }, { "id": 123, "string": "Table 3 shows the seven clusters we use, with an example for an utterance-program token pair from the cluster, and the number of mappings in each cluster." }, { "id": 124, "string": "In total, 25 mappings are used to define abstract representations." }, { "id": 125, "string": "As we show next, abstract examples can be used to improve the process of training semantic parsers." 
}, { "id": 126, "string": "Specifically, in sections 5.1-5.3, we use abstract examples in several ways, from generating new training data to improving search accuracy." }, { "id": 127, "string": "The combined effect of these approaches is quite dramatic, as our evaluation demonstrates." }, { "id": 128, "string": "High Coverage via Abstract Examples We begin by demonstrating that abstraction leads to rather effective coverage of the types of questions asked in a dataset." }, { "id": 129, "string": "Namely, that many ques-tions in the data correspond to a small set of abstract examples." }, { "id": 130, "string": "We created abstract representations for all 3,163 utterances in the training examples by mapping utterance tokens to their cluster label, and then counted how many distinct abstract utterances exist." }, { "id": 131, "string": "We found that as few as 200 abstract utterances cover roughly half of the training examples in the original training set." }, { "id": 132, "string": "The above suggests that knowing how to answer a small set of abstract questions may already yield a reasonable baseline." }, { "id": 133, "string": "To test this baseline, we constructured a \"rule-based\" parser as follows." }, { "id": 134, "string": "We manually annotated 106 abstract utterances with their corresponding abstract program (including alignment between abstract tokens in the utterance and program)." }, { "id": 135, "string": "For example, Table 1 shows the abstract utterance and program for the utterance \"There are exactly 3 yellow squares touching the wall\"." }, { "id": 136, "string": "Note that the utterance \"There are at least 2 blue circles touching the wall\" will be mapped to the same abstract utterance and program." }, { "id": 137, "string": "Given this set of manual annotations, our rulebased semantic parser operates as follows: Given an utterance x, create its abstract representationx." }, { "id": 138, "string": "If it exactly matches one of the manually annotated utterances, map it to its corresponding abstract programz." }, { "id": 139, "string": "Replace the abstract program tokens with real program tokens based on the alignment with the utterance tokens, and obtain a final program z. Ifx does not match return TRUE, the majority label." }, { "id": 140, "string": "The rule-based parser will fail for examples not covered by the manual annotation." }, { "id": 141, "string": "However, it already provides a reasonable baseline (see Table 4 )." }, { "id": 142, "string": "As shown next, manual annotations can also be used for generating new training data." }, { "id": 143, "string": "Data Augmentation While the rule-based semantic parser has high precision and gauges the amount of structural variance in the data, it cannot generalize beyond observed examples." }, { "id": 144, "string": "However, we can automatically generate non-abstract utterance-program pairs from the manually annotated abstract pairs and train a semantic parser with strong supervision that can potentially generalize better." }, { "id": 145, "string": "E.g., consider the utterance \"There are exactly 3 yellow squares touching the wall\", whose abstract representation is given in Table 1 ." }, { "id": 146, "string": "It is clear that we can use this abstract pair to generate a program for a new utterance \"There are exactly 3 blue squares touching the wall\"." 
}, { "id": 147, "string": "This program will be identical Algorithm 1 Decoding with an Abstract Cache 1: procedure DECODE(x, y, C, D) 2: // C is a map where the key is an abstract utterance and the value is a pair (Z,R) of a list of abstract programs Z and their average rewardsR." }, { "id": 148, "string": "D is an integer." }, { "id": 149, "string": "3:x ← Abstract utterance of x 4: A ← D programs in C[x] with top reward values 5: B1 ← compute beam of programs of length 1 6: for t = 2 ." }, { "id": 150, "string": "." }, { "id": 151, "string": "." }, { "id": 152, "string": "T do // Decode with cache 7: Bt ← construct beam from Bt−1 8: At = truncate(A, t) 9: Bt.add(de-abstract(At)) 10: for z ∈ BT do //Update cache 11: Update rewards in C[x] using (z, R(z, y)) 12: return BT ∪ de-abstract(A)." }, { "id": 153, "string": "to the program of the first utterance, with IsBlue replacing IsYellow." }, { "id": 154, "string": "More generally, we can sample any abstract example and instantiate the abstract clusters that appear in it by sampling pairs of utterance-program tokens for each abstract cluster." }, { "id": 155, "string": "Formally, this is equivalent to a synchronous context-free grammar (Chiang, 2005) that has a rule for generating each manually-annotated abstract utteranceprogram pair, and rules for synchronously generating utterance and program tokens from the seven clusters." }, { "id": 156, "string": "We generated 6,158 (x, z) examples using this method and trained a standard sequence to sequence parser by maximizing log p θ (z|x) in the model above." }, { "id": 157, "string": "Although these are generated from a small set of 106 abstract utterances, they can be used to learn a model with higher coverage and accuracy compared to the rule-based parser, as our evaluation demonstrates." }, { "id": 158, "string": "3 The resulting parser can be used as a standalone semantic parser." }, { "id": 159, "string": "However, it can also be used as an initialization point for the weakly-supervised semantic parser." }, { "id": 160, "string": "As we observe in Sec." }, { "id": 161, "string": "6, this results in further improvement in accuracy." }, { "id": 162, "string": "Caching Abstract Examples We now describe a caching mechanism that uses abstract examples to combat search and spuriousness when training from weak supervision." }, { "id": 163, "string": "As shown in Sec." }, { "id": 164, "string": "5.1, many utterances are identical at the abstract level." }, { "id": 165, "string": "Thus, a natural idea is to keep track at training time of abstract utteranceprogram pairs that resulted in a correct denotation, and use this information to direct the search procedure." }, { "id": 166, "string": "Concretely, we construct a cache C that maps abstract utterances to all abstract programs that were decoded by the model, and tracks the average reward obtained for those programs." }, { "id": 167, "string": "For every utterance x, after obtaining the final beam of programs, we add to the cache all abstract utteranceprogram pairs (x,z), and update their average reward (Alg." }, { "id": 168, "string": "1, line 10)." }, { "id": 169, "string": "To construct an abstract example (x,z) from an utterance-program pair (x, z) in the beam, we perform the following procedure." }, { "id": 170, "string": "First, we createx by replacing utterance tokens with their cluster label, as in the rule-based semantic parser." 
}, { "id": 171, "string": "Then, we go over every program token in z, and replace it with an abstract cluster if the utterance contains a token that is mapped to this program token according to the mappings from Table 3 ." }, { "id": 172, "string": "This also provides an alignment from abstract program tokens to abstract utterance tokens that is necessary when utilizing the cache." }, { "id": 173, "string": "We propose two variants for taking advantage of the cache C. Both are shown in Algorithm 1." }, { "id": 174, "string": "1." }, { "id": 175, "string": "Full program retrieval (Alg." }, { "id": 176, "string": "1, line 12): Given utterance x, construct an abstract utterancex, retrieve the top D abstract programs A from the cache, compute the de-abstracted programs Z using alignments from program tokens to utterance tokens, and add the D programs to the final beam." }, { "id": 177, "string": "2." }, { "id": 178, "string": "Program prefix retrieval (Alg." }, { "id": 179, "string": "1, line 9): Here, we additionally consider prefixes of abstract programs to the beam, to further guide the search process." }, { "id": 180, "string": "At each step t, let B t be the beam of decoded programs at step t. For every abstract programz ∈ A add the de-abstracted prefix z 1:t to B t and expand B t+1 accordingly." }, { "id": 181, "string": "This allows the parser to potentially construct new programs that are not in the cache already." }, { "id": 182, "string": "This approach combats both spuriousness and the search challenge, because we add promising program prefixes to the beam that might have fallen off of it earlier." }, { "id": 183, "string": "Fig." }, { "id": 184, "string": "3 visualizes the caching mechanism." }, { "id": 185, "string": "A high-level overview of our entire approach for utilizing abstract examples at training time for both data augmentation and model training is given in Fig." }, { "id": 186, "string": "4 ." }, { "id": 187, "string": "Experimental Evaluation Model and Training Parameters The Bi-LSTM state dimension is 30." }, { "id": 188, "string": "The decoder has one hidden layer of dimension 50, that takes the last 4 decoded tokens as input as well as encoder states." }, { "id": 189, "string": "Token embeddings are of dimension 12, beam size is 40 and D = 10 programs are used in Algorithm 1." }, { "id": 190, "string": "Word embeddings are initialized from CBOW (Mikolov et al., 2013) trained on the training data, and are then optimized end-toend." }, { "id": 191, "string": "In the weakly-supervised parser we encourage exploration with meritocratic gradient updates with β = 0.5 (Guu et al., 2017) ." }, { "id": 192, "string": "In the weaklysupervised parser we warm-start the parameters with the supervised parser, as mentioned above." }, { "id": 193, "string": "For optimization, Adam is used (Kingma and Ba, 2014) ), with learning rate of 0.001, and mini-batch size of 8." }, { "id": 194, "string": "Pre-processing Because the number of utterances is relatively small for training a neural model, we take the following steps to reduce sparsity." }, { "id": 195, "string": "We lowercase all utterance tokens, and also use their lemmatized form." }, { "id": 196, "string": "We also use spelling correction to replace words that contain typos." }, { "id": 197, "string": "After pre-processing we replace every word that occurs less than 5 times with an UNK symbol." }, { "id": 198, "string": "Evaluation We evaluate on the public development and test sets of CNLVR as well as on the hidden test set." 
}, { "id": 199, "string": "The standard evaluation metric is accuracy, i.e., how many examples are correctly classified." }, { "id": 200, "string": "In addition, we report consistency, which is the proportion of utterances for which the decoded program has the correct denotation for all 4 images/KBs." }, { "id": 201, "string": "It captures whether a model consistently produces a correct answer." }, { "id": 202, "string": "when taking the KB as input, which is a maximum entropy classifier (MAXENT)." }, { "id": 203, "string": "For our models, we evaluate the following variants of our approach: • RULE: The rule-based parser from Sec." }, { "id": 204, "string": "5.1." }, { "id": 205, "string": "• SUP." }, { "id": 206, "string": ": The supervised semantic parser trained on augmented data as in Sec." }, { "id": 207, "string": "5.2 (5, 598 examples for training and 560 for validation)." }, { "id": 208, "string": "• WEAKSUP." }, { "id": 209, "string": ": Our full weakly-supervised semantic parser that uses abstract examples." }, { "id": 210, "string": "• +DISC: We add a discriminative re-ranker (Sec." }, { "id": 211, "string": "3) for both SUP." }, { "id": 212, "string": "and WEAKSUP." }, { "id": 213, "string": "Table 4 describes our main results." }, { "id": 214, "string": "Our weakly-supervised semantic parser with re-ranking (W.+DISC) obtains 84.0 accuracy and 65.0 consistency on the public test set and 82.5 accuracy and 63.9 on the hidden one, improving accuracy by 14.7 points compared to state-of-theart." }, { "id": 215, "string": "The accuracy of the rule-based parser (RULE) is less than 2 points below MAXENT, showing that a semantic parsing approach is very suitable for this task." }, { "id": 216, "string": "The supervised parser obtains better performance (especially in consistency), and with re-ranking reaches 76.6 accuracy, showing that generalizing from generated examples is better than memorizing manually-defined patterns." }, { "id": 217, "string": "Our weakly-supervised parser significantly improves over SUP., reaching an accuracy of 81.7 before reranking, and 84.0 after re-ranking (on the public test set)." }, { "id": 218, "string": "Consistency results show an even crisper trend of improvement across the models." }, { "id": 219, "string": "Main results Analysis We analyze our results by running multiple ablations of our best model W.+DISC on the development set." }, { "id": 220, "string": "To examine the overall impact of our procedure, we trained a weakly-supervised parser from scratch without pre-training a supervised parser nor using a cache, which amounts to a re-implementation of the RANDOMER algorithm (Guu et al., 2017) ." }, { "id": 221, "string": "We find that the algorithm is unable to bootstrap in this challenging setup and obtains very low performance." }, { "id": 222, "string": "Next, we examined the importance of abstract examples, by pretraining only on examples that were manually annotated (utterances that match the 106 abstract patterns), but with no data augmentation or use of a cache (−ABSTRACTION)." }, { "id": 223, "string": "This results in performance that is similar to the MAJORITY baseline." }, { "id": 224, "string": "To further examine the importance of abstraction, we decoupled the two contributions, training once with a cache but without data augmentation for pre-training (−DATAAUGMENTATION), and again with pre-training over the augmented data, but without the cache (−BEAMCACHE)." 
}, { "id": 225, "string": "We found that the former improves by a few points over the MAXENT baseline, and the latter performs comparably to the supervised parser, that is, we are still unable to improve learning by training from denotations." }, { "id": 226, "string": "Lastly, we use a beam cache without line 9 in Alg." }, { "id": 227, "string": "1 (−EVERYSTEPBEAMCACHE)." }, { "id": 228, "string": "This already results in good performance, substantially higher than SUP." }, { "id": 229, "string": "but is still 3.4 points worse than our best performing model on the development set." }, { "id": 230, "string": "Orthogonally, to analyze the importance of tying the reward of all four examples that share an utterance, we trained a model without this tying, where the reward is 1 iff the denotation is correct (ONEEXAMPLEREWARD)." }, { "id": 231, "string": "We find that spuriousness becomes a major issue and weaklysupervised learning fails." }, { "id": 232, "string": "Error Analysis We sampled 50 consistent and 50 inconsistent programs from the development set to analyze the weaknesses of our model." }, { "id": 233, "string": "By and large, errors correspond to utterances that are more complex syntactically and semantically." }, { "id": 234, "string": "In about half of the errors an object was described by two or more modifying clauses: \"there is a box with a yellow circle and three blue items\"; or nesting occurred: \"one of the gray boxes has exactly three objects one of which is a circle\"." }, { "id": 235, "string": "In these cases the model either ignored one of the conditions, resulting in a program equivalent to \"there is a box with three blue items\" for the first case, or applied composition operators wrongly, outputting an equivalent to \"one of the gray boxes has exactly three circles\" for the second case." }, { "id": 236, "string": "However, in some cases the parser succeeds on such examples and we found that 12% of the sampled utterances that were parsed correctly had a similar complex structure." }, { "id": 237, "string": "Other, less frequent reasons for failure were problems with cardinality interpretation, i.e." }, { "id": 238, "string": ",\"there are 2\" parsed as \"exactly 2\" instead of \"at least 2\"; applying conditions to items rather than sets, e.g., \"there are 2 boxes with a triangle closely touching a corner\" parsed as \"there are 2 triangles closely touching a corner\"; and utterances with questionable phrasing, e.g., \"there is a tower that has three the same blocks color\"." }, { "id": 239, "string": "Other insights are that the algorithm tended to give higher probability to the top ranked program when it is correct (average probability 0.18), compared to cases when it is incorrect (average probability 0.08), indicating that probabilities are correlated with confidence." }, { "id": 240, "string": "In addition, sentence length is not predictive for whether the model will succeed: average sentence length of an utterance is 10.9 when the model is correct, and 11.1 when it errs." }, { "id": 241, "string": "We also note that the model was successful with sentences that deal with spatial relations, but struggled with sentences that refer to the size of shapes." }, { "id": 242, "string": "This is due to the data distribution, which includes many examples of the former case and fewer examples of the latter." 
}, { "id": 243, "string": "Related Work Training semantic parsers from denotations has been one of the most popular training schemes for scaling semantic parsers since the beginning of the decade." }, { "id": 244, "string": "Early work focused on traditional log-linear models (Clarke et al., 2010; Liang et al., 2011; Kwiatkowski et al., 2013) , but recently denotations have been used to train neural semantic parsers Krishnamurthy et al., 2017; Rabinovich et al., 2017; Cheng et al., 2017) ." }, { "id": 245, "string": "Visual reasoning has attracted considerable attention, with datasets such as VQA (Antol et al., 2015) and CLEVR (Johnson et al., 2017a) ." }, { "id": 246, "string": "The advantage of CNLVR is that language utterances are both natural and compositional." }, { "id": 247, "string": "Treating vi-sual reasoning as an end-to-end semantic parsing problem has been previously done on CLEVR (Hu et al., 2017; Johnson et al., 2017b) ." }, { "id": 248, "string": "Our method for generating training data resembles data re-combination ideas in Jia and Liang (2016) , where examples are generated automatically by replacing entities with their categories." }, { "id": 249, "string": "While spuriousness is central to semantic parsing when denotations are not very informative, there has been relatively little work on explicitly tackling it." }, { "id": 250, "string": "Pasupat and Liang (2015) used manual rules to prune unlikely programs on the WIK-ITABLEQUESTIONS dataset, and then later utilized crowdsourcing (Pasupat and Liang, 2016) to eliminate spurious programs." }, { "id": 251, "string": "Guu et al." }, { "id": 252, "string": "(2017) proposed RANDOMER, a method for increasing exploration and handling spuriousness by adding randomness to beam search and a proposing a \"meritocratic\" weighting scheme for gradients." }, { "id": 253, "string": "In our work we found that random exploration during beam search did not improve results while meritocratic updates slightly improved performance." }, { "id": 254, "string": "Discussion In this work we presented the first semantic parser for the CNLVR dataset, taking structured representations as input." }, { "id": 255, "string": "Our main insight is that in closed, well-typed domains we can generate abstract examples that can help combat the difficulties of training a parser from delayed supervision." }, { "id": 256, "string": "First, we use abstract examples to semiautomatically generate utterance-program pairs that help warm-start our parameters, thereby reducing the difficult search challenge of finding correct programs with random parameters." }, { "id": 257, "string": "Second, we focus on an abstract representation of examples, which allows us to tackle spuriousness and alleviate search, by sharing information about promising programs between different examples." }, { "id": 258, "string": "Our approach dramatically improves performance on CNLVR, establishing a new state-of-the-art." }, { "id": 259, "string": "In this paper, we used a manually-built highprecision lexicon to construct abstract examples." }, { "id": 260, "string": "This is suitable for well-typed domains, which are ubiquitous in the virtual assistant use case." }, { "id": 261, "string": "In future work we plan to extend this work and automatically learn such a lexicon." }, { "id": 262, "string": "This can reduce manual effort and scale to larger domains where there is substantial variability on the language side." 
} ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 32 }, { "section": "Setup", "n": "2", "start": 33, "end": 56 }, { "section": "Model", "n": "3", "start": 57, "end": 91 }, { "section": "Training", "n": "4", "start": 92, "end": 108 }, { "section": "Learning from Abstract Examples", "n": "5", "start": 109, "end": 127 }, { "section": "High Coverage via Abstract Examples", "n": "5.1", "start": 128, "end": 142 }, { "section": "Data Augmentation", "n": "5.2", "start": 143, "end": 161 }, { "section": "Caching Abstract Examples", "n": "5.3", "start": 162, "end": 186 }, { "section": "Experimental Evaluation", "n": "6", "start": 187, "end": 218 }, { "section": "Analysis", "n": "6.1", "start": 219, "end": 242 }, { "section": "Related Work", "n": "7", "start": 243, "end": 253 }, { "section": "Discussion", "n": "8", "start": 254, "end": 262 } ], "figures": [ { "filename": "../figure/image/1363-Figure4-1.png", "caption": "Figure 4: An overview of our approach for utilizing abstract examples for data augmentation and model training.", "page": 6, "bbox": { "x1": 81.6, "x2": 515.04, "y1": 316.8, "y2": 466.08 } }, { "filename": "../figure/image/1363-Figure3-1.png", "caption": "Figure 3: A visualization of the caching mechanism. At each decoding step, prefixes of high-reward abstract programs are added to the beam from the cache.", "page": 6, "bbox": { "x1": 112.8, "x2": 484.32, "y1": 61.44, "y2": 271.2 } }, { "filename": "../figure/image/1363-Table1-1.png", "caption": "Table 1: An example for an utterance-program pair (x, z) and its abstract counterpart (x̄, z̄)", "page": 2, "bbox": { "x1": 72.96, "x2": 526.0799999999999, "y1": 63.36, "y2": 92.64 } }, { "filename": "../figure/image/1363-Table2-1.png", "caption": "Table 2: Examples for utterance-program pairs. Commas and parenthesis provided for readability only.", "page": 2, "bbox": { "x1": 72.96, "x2": 526.0799999999999, "y1": 126.24, "y2": 158.4 } }, { "filename": "../figure/image/1363-Table4-1.png", "caption": "Table 4: Results on the development, public test (Test-P) and hidden test (Test-H) sets. For each model, we report both accuracy and consistency.", "page": 7, "bbox": { "x1": 81.6, "x2": 281.28, "y1": 61.44, "y2": 136.32 } }, { "filename": "../figure/image/1363-Table5-1.png", "caption": "Table 5: Results of ablations of our main models on the development set. Explanation for the nature of the models is in the body of the paper.", "page": 7, "bbox": { "x1": 327.84, "x2": 505.44, "y1": 61.44, "y2": 144.0 } }, { "filename": "../figure/image/1363-Figure2-1.png", "caption": "Figure 2: An example for the state of the type stack s while decoding a program z for an utterance x.", "page": 3, "bbox": { "x1": 98.39999999999999, "x2": 499.2, "y1": 65.75999999999999, "y2": 110.39999999999999 } }, { "filename": "../figure/image/1363-Table3-1.png", "caption": "Table 3: Example mappings from utterance tokens to program tokens for the seven clusters used in the abstract representation. The rightmost column counts the number of mapping in each cluster, resulting in a total of 25 mappings.", "page": 4, "bbox": { "x1": 100.8, "x2": 261.12, "y1": 61.44, "y2": 138.24 } } ] }, "gem_id": "GEM-SciDuet-chal-81" } ] }