id
stringlengths 10
10
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| content
stringlengths 3.91k
873k
| references
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1903.03094 | Learning to Speak and Act in a Fantasy Text Adventure Game | We introduce a large scale crowdsourced text adventure game as a research
platform for studying grounded dialogue. In it, agents can perceive, emote, and
act whilst conducting dialogue with other agents. Models and humans can both
act as characters within the game. We describe the results of training
state-of-the-art generative and retrieval models in this setting. We show that
in addition to using past dialogue, these models are able to effectively use
the state of the underlying world to condition their predictions. In
particular, we show that grounding on the details of the local environment,
including location descriptions, and the objects (and their affordances) and
characters (and their previous actions) present within it allows better
predictions of agent behavior and dialogue. We analyze the ingredients
necessary for successful grounding in this setting, and how each of these
factors relate to agents that can talk and act successfully. | http://arxiv.org/pdf/1903.03094 | Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, Jason Weston | cs.CL, cs.AI | null | null | cs.CL | 20190307 | 20190307 | 9 1 0 2
r a M 7 ] L C . s c [
1 v 4 9 0 3 0 . 3 0 9 1 : v i X r a
# Learning to Speak and Act in a Fantasy Text Adventure Game
Jack Urbanek1 Angela Fan1,2 Siddharth Karamcheti1 Saachi Jain1 Samuel Humeau1 Emily Dinan1 Tim Rockt¨aschel1,3 Douwe Kiela1 Arthur Szlam1 Jason Weston1 1Facebook AI Research 2LORIA, Nancy 3University College London light-dms@fb.com
# Abstract
We introduce a large-scale crowdsourced text adventure game as a research platform for studying grounded dialogue. In it, agents can perceive, emote, and act whilst conducting di- alogue with other agents. Models and humans can both act as characters within the game. We describe the results of training state-of-the-art generative and retrieval models in this setting. We show that in addition to using past dia- logue, these models are able to eï¬ectively use the state of the underlying world to condition their predictions. In particular, we show that grounding on the details of the local environ- ment, including location descriptions, and the objects (and their aï¬ordances) and characters (and their previous actions) present within it allows better predictions of agent behavior and dialogue. We analyze the ingredients neces- sary for successful grounding in this setting, and how each of these factors relate to agents that can talk and act successfully.
1
# 1 Introduction
There has been remarkable progress in language modeling (Jozefowicz et al., 2016; Devlin et al., 2018; Radford et al., 2019) and building dialogue agents (Dinan et al., 2019a). Nevertheless, the current state of the art uses only the statistical regularities of language data, without explicit un- derstanding of the world that the language de- scribes. This work is built on the hypothesis that dialogue agents embodied in a rich and cohesive (but tractable) world can more easily be trained to use language eï¬ectively than those only exposed to standard large-scale text-only corpora.
models as embodied agents, and the world itself. It consists of a large crowdsourced game world (663 locations, 3462 objects and 1755 characters) de- scribed entirely in natural language. Within that game world, we collect a large set (11k episodes) of character-driven human-human crowdworker interactions involving actions, emotes, and dia- logue, with the aim of training models to engage humans in a similar fashion. Our framework is made publicly available in ParlAI (http://parl. ai/projects/light).
We use the collected dataset to investigate how a model can both speak and act grounded in per- ception of its environment and dialogue from other speakers. This is done by evaluating state-of-the- art models on our task and evaluating the eï¬ects of providing additional grounding. In particular, we adapt the BERT contextual language model (Devlin et al., 2018) to the task of dialogue in two ways: as a bi-ranker, which is fast and prac- tical as a retrieval model, and as a cross-ranker which is slower at inference time but allows more feature cross-correlation between context and re- sponse. Both models outperform existing meth- ods. Our ablation analysis shows the importance of each part of the grounding (location, objects, characters, otherâs actions, self-actions) in terms of the ability to both understand and use language. While models that use grounding show clear im- provements, our best performing models are still unable to perform at human level, making our setup a suitable challenge for future research.
# 2 Related Work
To that end, we introduce the LIGHT1 research platform. LIGHT is a multi-player fantasy text ad- venture world designed for studying situated di- alogue, and allows interactions between humans,
1 Learning in Interactive Games with Humans and Text.
Most recent work in dialogue exploring genera- tive or retrieval models for goal-directed (Hen- derson et al., 2014; Bordes et al., 2017) or chit- chat tasks (Vinyals and Le, 2015; Sordoni et al., 2015; Zhang et al., 2018) is not situated, or even
grounded in perception. Models typically take the last few utterances from the dialogue history as input, and output a new utterance. While some goal-directed setups may use external knowledge bases (e.g. ï¬ight data for airline booking), dia- logues tend to implicitly refer to an external world during the conversations without explicit ground- ing to objects or actions.
Several position papers have proposed virtual embodiment as a strategy for language research (Brooks, 1991; Kiela et al., 2016; Gauthier and Mordatch, 2016; Mikolov et al., 2016; Lake et al., 2017). Single-player text adventure game frame- works for training reinforcement learning agents exist, i.e., Narasimhan et al. (2015) and TextWorld (CËot´e et al., 2018), but these do not have human dialogue within the game. Yang et al. (2017) and Bordes et al. (2010) proposed small world setups for instruction following or labeling, but these are much more restricted than the large multi-player text adventure game environment with rich dia- logue that we propose here.
A number of visual, rather than text, platforms have been proposed such as House3D (Wu et al., 2018b), HoME (Brodeur et al., 2017), MINOS (Savva et al., 2017), Matterport3D (Chang et al., 2017) and AI2-THOR (Kolve et al., 2017), and the Minecraft MALMO project (Johnson et al., 2016), but they typically are suited to reinforce- ment learning of actions, and involve templated language for navigation or question answering tasks, if at all (Oh et al., 2017; Yi et al., 2018).
Other examples are instruction-following in the Neverwinter Nights game (Fleischman and Roy, 2005), dialogue about soccer videogames (Pa- sunuru and Bansal, 2018), placing blocks appro- priately given a ï¬nal plan (Wang et al., 2016) and a more open ended building task using a grid of vox- els (Wang et al., 2017). In the latter two cases the communication is one-sided with only the human issuing instructions, rather than dialogue, with the agent only able to act.
There are also setups that consider static lan- guage and perception, for example image caption- ing (Lin et al., 2014), video captioning (Yu et al., 2016), visual QA (Antol et al., 2015) and visual dialogue (Das et al., 2017; Shuster et al., 2018; Mostafazadeh et al., 2017). While grounded, the agent has no ability to act in these tasks. Talk the Walk (de Vries et al., 2018) introduces a navi- gation game that involves action, perception and
Split Train Valid Test Seen Test Unseen Locations Objects Characters 589 2658 1369 352 1412 546 499 1895 820 74 844 360 Dialogues Utterances Emotes Actions 8538 110877 17609 20256 500 6623 1156 1518 1000 13272 2495 3227 739 9853 1301 1880 Vocabulary Size Utterance Length 32182 18.3 11327 19.2 11984 19.4 9984 16.2
Table 1: LIGHT dataset statistics.
two-way dialogue, but is limited to small grids.
In summary, compared to many setups, our framework allows learning from both actions and (two-way) dialogue, while many existing simula- tions typically address one or the other but not both. In addition, being based on a gaming setup, our hope is that LIGHT can be fun for humans to interact with, enabling future engagement with our models. All utterances in LIGHT are pro- duced by human annotators, thus inheriting prop- erties of natural language such as ambiguity and coreference, making it a challenging platform for grounded learning of language and actions.
# 3 LIGHT Environment and Task Setup
LIGHT is a large-scale, conï¬gurable text ad- venture environment for research on learning grounded language and actions. It features both humans and models as embodied agents within a multi-player fantasy MUD (multi-user dungeon)- like (Dieterle, 2009) environment.
To facilitate natural human-sourced (fantasy) situations described by natural language, almost the entire environment is crowdsourced, including locations, objects and their aï¬ordances, characters and their personalities, and most importantly char- acter interactions: dialogues and actions. These components are collected through a series of an- notation tasks that we will now describe. These tasks are designed so that they can be combinatori- ally recombined. Data quality was maintained by requiring annotators to take a test (see Appendix D). Overall statistics of the collected elements are given in Table 1. This environment can then be used to both train agents, and to evaluate them in situ via their online interactions.
# Category:
# Category:
# Graveyard
# Description: |
# Backstory:
Two-and-a-half walls of the ï¬nest, whitest stone stand here, weathered by the passing of countless seasons. There is no roof, nor sign that there ever was one. All indications are that the work was abruptly abandoned. There is no door, nor markings on the walls. Nor is there any indication that any coï¬n has ever lain here... yet. Bright white stone was all the fad for funerary architecture, once upon a time. Itâs diï¬cult to understand why someone would abandon such a large and expensive undertaking. If they didnât have the money to ï¬nish it, they could have sold the stone, surely - or the mausoleum itself. Maybe they just havenât needed it yet? A bit odd, though, given how old it is. Maybe the gravedigger remembers... if heâs sober.
# Neighbors:
# Neighbors:
Dead Tree, south, following a dirt trail behind the mausoleum Fresh Grave, west, walking carefully between fallen headstones
Characters: gravedigger, thief, peasant, mouse, bat
# Objects:
# Objects:
wall, carving, leaf, dirt
(a) Example room created from the room collection and labelling tasks. Labels in italics were noted by workers as possibly present but not explicitly listed in the description or backstory.
Character: Thief Gravedigger Persona: I live alone in a tent in the woods. I steal food from the townspeople and coal from the blacksmith. The village police can not ï¬nd me to put me in jail. I am low paid labor in this town. I do a job that many people shun because of my contact with death. I am very lonely and wish I had someone to talk to who isnât dead. Description: The thief is a sneaky fellow who takes from the You might want to talk to the gravedigger, specially if your looking for a friend, he might be odd but you people and does so in a way that disturbs the will ï¬nd a friend in him. livelihood of the others. Carrying: meat, potatoes, coal shovel Wearing: dark tunic, cloak nothing annotated Wielding: knife nothing annotated
(b) Example characters annotated via character collection tasks.
Object Description Tags shovel The shovel is made of metal and silver. It is quite sturdy and appears new. gettable, wieldable wall The wall is pure white, the richest of which you have ever seen. none
(c) Example objects annotated via object collection tasks
Table 2: Example entities from the LIGHT environment. Each was collected via tasks described in Section 3.
Locations We ï¬rst crowdsourced a set of 663 game location settings from a base set of 37 cat- egories (countryside, forest, inside/outside castle, shore, graveyard, bazaar, . . . â full list in Ap- pendix H) which were selected by us to pro- vide both inspiration and cohesion to annotators. Workers were provided a category and asked to create a description, backstory, names of con- nected locations, and contained objects and char- acters. See Table 2a for an example. Many de- scriptions are quite detailed, and there are clear semantics between entities (e.g. alligators being in swamps, cacti in a desert).
pernatural, magical realm, city in the clouds, and netherworld) designed to be distinct from the oth- ers to provide an isolated set of locations, charac- ters, and objects for testing. These will be used to build what we refer to as an unseen test set.
Each location is collected independently, with the eventual aim that they can be glued together as desired to randomize world generation. In this work, we consider actions and dialogues within a single location, so building a world map is not nec- essary. However, we will show that the environ- ment has considerable inï¬uence on the dialogue, actions and grounded learning of models.
As all remaining tasks build upon the locations created in this ï¬rst step, we selected 6 location cat- egories (underwater aquapolis, frozen tundra, su-
Characters We crowdsourced 1755 game char- acters from animals to trolls and orcs to humans of
various types (wizards, knights, village clerk). See Table 2b for detailed examples. Each character has a textual description, a persona (deï¬ned as a set of 3-5 proï¬le sentences describing their traits, mod- eled after the Persona-Chat dataset (Zhang et al., 2018)), and a set of objects that are currently be- ing carried, wielded, or worn. We sourced this list of characters to annotate from the ones provided in the location creation task.
Objects We crowdsourced 3462 objects, each with a textual description, and a set of aï¬ordances (whether it is a container, can be picked up, has a surface, is a weapon, is wearable, is food, is a drink). See Table 2c for examples. As before, we sourced this list of objects to annotate from the ones annotated for the locations and characters.
Actions and Emotes There are a set of actions in the game consisting of physical manipulations, and a set of emotes that display feelings to other characters, in line with existing MUDs.
Physical actions include get, drop, put, give, steal, wear, remove, eat, drink, hug and hit, each taking either one or two arguments, e.g. put robes in closet. Every action has an explicit unambigu- ous eï¬ect on the underlying game state, and can only be executed if constraints are met, e.g. if the agent is holding the robes in the latter example.
Emotes include applaud, blush, cringe, cry, dance, frown . . . , sulk, wave, wink (22 in total) and have no eï¬ect on the game state other than to notify nearby characters of the emote, which can have eï¬ects on their behavior. See Appendix E for further detailed descriptions.
Interaction Now that we have a fully realized underlying environment, we can attempt to learn and evaluate agents that can act and speak within it. For this, we collect a human-human dataset of episodic interactions within the environment.
For each dialogue, we place two characters in a random location (either two characters that were already assigned to it, or else randomly assigned characters), complete with the objects assigned to the location and to those characters. Each char- acter has access to their persona, the location de- scription, and the objects present, and the inter- action episode begins. The two characters take turns within the episode, and can execute one ac- tion (physical action or emote) and produce one dialogue utterance on each turn. We crowdsourced 10,777 dialogues. Examples are given in Figure 1
and Appendix Figures 10-16.
Seen and Unseen Test Sets We provide two dis- tinct test sets. The seen test set consists of dia- logues set in the same world (set of locations) as the training set, thus also consists of characters, objects, and personas that can appear in the train- ing data. In contrast, the unseen test set is com- prised of dialogues collected on the unseen set of locations. The unseen test set allows for evalua- tion of generalization capability to unseen topics in a similar domain and as we shall see, provides a more challenging test for current techniques.
# 4 Learning Methods
We consider a variety of models that can predict actions, emotes and dialogue, and explore the im- portance of grounding upon the location, objects, and other characters within the setting. For all models, we represent context as a large text se- quence with a special token preceding each input type (persona, setting, self emote, partner emote, etc.). We work with two model classes: ranking models that output the maximal scoring response from a set of potential candidate responses and generative models that decode word by word.
Baseline Ranking Methods We report a Ran- dom baseline (selecting a random candidate from the candidates) and an Information Retrieval (IR) baseline that uses word overlap with TF/IDF weighting. We use Starspace (Wu et al., 2018a) which learns a bag-of-words embedding for con- text and candidates to maximize the inner product of the true label using a ranking loss. Lastly, we use fastText (Joulin et al., 2016) to classify which emote should be predicted next as there are only 22 classes. Finally, we compare the performance of our best models to human performance on each of the prediction tasks.
Transformer Memory Network We use the transformer memory-based ranking model from Dinan et al. (2019b). It uses a transformer (Vaswani et al., 2017) to produce separate repre- sentations (memory slots) for each sentence from the grounding information (setting, persona, ob- jects). It then performs attention given the di- alogue context over the memories to produce a context embedding, which is used to score can- didates via the dot product with the transformer- based representation of the candidate. At training
Persona 1: Servant Persona 2: King I come from the lower class. I do what I am told without question. I can not read. I have not seen my family in a long time. I am a king of the whole empire. I give rules and pursuit them. I am brave and fearless. Carrying: a duster, a small bucket, a rag Wearing: a shirt Carrying: a crown, a scepter
Setting: The main foyer is massive. A grand staircase sits to the back of the foyer leading to the upstairs. At the front of the foyer stand two servants ready to help anyone who comes to visit. To the left of the room there is a doorway leading into a corridor. To the right there is a door leading to another corridor for the Kingâs servants. At the foot of the stairs there is a bearskin rug that is staring at you almost as if still hungry. The walls are lined with portraits of the king and his family. Servant: my humble king. What am I to do to serve you? King: Ahhh. My loyal servant. Polish my scepter. Action: give scepter to servant Servant: Yes my lord. I will polish it immediately. Am I to return it to you personally? Action: put scepter in small bucket King: Yes. Yes. Of course. Also check the jewels in my crown. They seem loose. Action: give crown to servant Servant: But sire I am not qualiï¬ed to do that. Would you prefer I take it to someone? King: Oh ï¬ne then. Emote: gesture sigh Servant: I am sorry sir the rug startled me Action: drop crown King: Haha! Thatâs bear I slain on my latest hunting trip. Heâs a mighty beast! Emote: gesture laugh Servant: and if I may ask where did you go hunting sire? King: The great woods of course. This bear was stealing children in the kingdom. Surely you heard about it. Servant: sire. I have not been outside of these walls in quiet some time. I have not seen my family in ages. King: Such is the life of a servant I suppose. Howâs that scepter looking? Servant: it is almost ready sire. and the crown who would you like me to take it to? Action: get scepter from small bucket King: Here just give it back. Iâll have the queen ï¬nd someone.
Figure 1: Example dialogue from the LIGHT dataset.
time, other samples in the batch are used as nega- tive candidates. For emote prediction, we train by ranking against the full set of possible emotes as there are only 22 distinct classes.
BERT Bi-Ranker and Cross-Ranker We adapt the BERT pretrained language model (Devlin et al., 2018) to the tasks of dialogue and action prediction. We explore two architectures for lever- aging BERT. First, we use the BERT-based Bi- Ranker to produce a vector representation for the context and a separate representation for each can- didate utterance. This representation is obtained by passing the ï¬rst output of BERTâs 12 layers through an additional linear layer, resulting in an embedding of dimension 768. It then scores can- didates via the dot product between these embed- dings and is trained using a ranking loss.
Second, the BERT-based Cross-Ranker instead concatenates the context with each candidate ut- terance, similar to Wolf et al. (2019). Then, each candidate is scored by computing a soft-
max over all candidates. Unlike the BERT-based Bi-Ranker, the concatenation of the context with each individual candidate allows the model to at- tend to the context when encoding each candi- date, building a context-dependent representation of each candidate. In contrast, the Bi-Ranker can use self-attention to build the candidate and con- text representations, but cannot modify their rep- resentation based upon the context. However, the Cross-Encoder is far more computationally expen- sive (â¼11,000 slower than the Bi-Ranker for dia- logue retrieval) as each concatenated representa- tion must be recomputed, while the Bi-Ranker can cache the candidates for reuse (see Appendix B).
Generative Models Similarly to the ranking set- ting, we use the Transformer Memory Network from Dinan et al. (2019b) to encode the context features (such as dialogue, persona, and setting). However, to predict an action, emote, or dialogue sequence, we use a Transformer architecture to de- code while attending to the encoder output.
Query: s t c e j b o s r e t c a r a h c s n o i t a c o l s n o i t c a y r a l u b a c o v chicken chicken coop eggs a pen for the chickens chimney corn stone chimney chickens fox trying to steal chickens farmers The farmers farmer poorer subsistence farmers Chicken Pen Corn ï¬eld Farmerâs house Large Farm Pig Pen The old red barn get chicken hug chicken hit chicken give cowbell to chicken steal sword from chicken give corn to chicken bock tasty bawk moo egg lay pirate Pirate swords dock cargo ship seagulls on the dock cargo crates boat captain captain merchant boat workers workers the workers Pirate Ship Dock at the Port Loading Dock Fishing Dock crew berthing captainâs cabin hug pirate hit pirate steal sword from pirate steal cargo from pirate give cargo to pirate give Daggers to pirate crew ye port sea seas sail coï¬n the remains remains bones bones of the innocent adventurerâs remains precious jewels spirits of our ancestors mourner zombies families bandit the royal family Old Crypt sacristy Disposal area inside temple crypt Sacriï¬ce Chamber Shrine of Sretniy put torch in coï¬n get torch from coï¬n put bone in coï¬n get bone from coï¬n hit archaeologist hug archaeologist archaeologist robber crypt loss adventures earn rake shovel garden a garden Hand carved stone garden bench small garden gardener stable hand Garden dog stable boy A stable boy two guards Across the Kingâs Garden The werewolves tavern Hidden garden The garden courtyard Church garden Tool Shed ï¬ower garden get rake drop Rake steal Rake from gardener give Rake to thing give Rake to person give Rake to guard vegetable carved alice hook exorcisms tomatoes tavern Ale bottles beer mug of mead a large ornate table beer keg mug tavern owner bartender Goblin Kingâs bartender A serving wench Serving wench a round man with a bushy mustache Small insects Lush meadow Flower Field ï¬ower garden Mushroom Hut Archery zone The witches cottage get ï¬ower from meadow put ï¬ower in Meadow give Flower to a deer give Flower to deer steal Flower from a deer get ï¬ower ï¬ower amulet songbird wasp an holiness meadow ï¬ower pot fruit An enchanted amulet. citrus fruit fruit trees nice fruit trees a deer a songbird fruit bats parent butterï¬y Tavern of Browntavia Port Tavern The bar bazaar outside the royal city Outside gates hug tavern owner give food item to tavern owner give telescope to tavern owner drink drink drop drink give coin to bartender drink drinks regular item tip bottles
Table 3: Neighboring Starspace phrase embeddings (no pretraining from other data) for diï¬erent types of entities and actions. The ï¬rst row are arbitrarily chosen queries (chicken, pirate, coï¬n, rake, tavern, meadow), and the subsequent rows are their nearest objects, agents, locations, actions and vocabulary in embedding space.
For the task of action generation, the set of can- didates for ranking models to rank the true action sequence against is constrained by the set of valid actions. For example, the character cannot pick up book if there is no book. In the generative model, we compute the log likelihood for the set of possi- ble candidates and normalize to constrain the out- put space to valid actions to improve the results.
# 4.1 Implementation
We implement models using PyTorch in ParlAI (Miller et al., 2017). Ranking Transformer mod- els are pretrained on Reddit data (Mazar´e et al., 2018) and ï¬ne-tuned. We use the BERT (Devlin et al., 2018) implementation provided by Hugging Face2 with pre-trained weights, then adapted to our Bi-Ranker and Cross-Ranker setups. Genera- tive models are pretrained on the Toronto Books Corpus and ï¬ne-tuned except for emote predic- tion which does not leverage pretraining. We ap- ply byte-pair encoding (Sennrich et al., 2016) to reduce the vocabulary size for generative models. We decode using beam search with beam size 5.
ranking the ground truth among 19 other randomly chosen candidates for ranking models and per- plexity and unigram F1 for generative models.
Human We present humans with the same rank- ing task and report R@1/20 to estimate their per- formance on this task. During the evaluation, we provide annotated examples on the training in addition to examples on the test set. We only keep the annotations of evaluators who had high accuracy on the training examples to ï¬lter low- accuracy evaluators. The training accuracy bar was selected due to the diï¬culty of the separate tasks. Our methods for human evaluation are de- scribed in more detail in Appendix F along with how many turns were evaluated.
# 5 Results
The ranking models are compared in Table 4 on the seen and unseen test sets, and ablations are shown for both the BERT-based Bi-Ranker and Generative Transformer in Tables 5 and 6.
# 4.2 Evaluation
Automatic To evaluate our models, we calcu- late percentage accuracy for action and emote pre- diction. For dialogue, we report Recall@1/20 for
# 2https://github.com/huggingface/pytorch-pretrained-BERT
# 5.1 Comparison of Models and Baselines
The IR baseline shows non-random performance, but is outperformed by Starspace which is a stronger baseline. We also tried FastText on the emotion task which gave a seen test accuracy of 13.2. Transformer architectures prove signiï¬-
Method Test Seen Dialogue Action R@1/20 Acc Test Unseen Emote Dialogue Action R@1/20 Acc Acc Emote Acc Random baseline IR baseline Starspace Transformer MemNet BERT-based Bi-Ranker BERT-based Cross-Ranker 5.0 23.7 53.8 70.9 76.5 74.9 12.2 20.6 17.8 24.5 42.5 50.7 4.5 7.5 11.6 17.3 25.0 25.8 5.0 21.8 27.9 66.0 70.5 69.7 12.1 20.5 16.4 21.1 38.8 51.8 4.5 8.46 9.8 16.6 25.7 28.6 Human Performance* *87.5 *62.0 *27.0 *91.8 *71.9 *34.4
Table 4: Ranking model test performance. (*) Human performance is computed on a subset of data.
BERT-based Bi-Ranker actions+emotes only dialogue only dialogue+action+emote dialogue+persona dialogue+setting dialogue+objects Dialogue Action Emote R@1/20 Acc Acc 76.0 58.6 68.1 73.2 73.3 70.6 68.2 38.7 18.3 39.4 40.7 41.0 41.2 37.5 25.1 10.6 23.6 23.1 26.5 26.0 25.5
els. We observe that BERT-based models exhibit good transfer ability relative to other models, but the gap between their performance and human per- formance increases from the seen test set to the unseen one. Speciï¬cally, there is a 21 point gap on the unseen dialogue test set compared to an 11 point gap on the seen test set, making this a signif- icant challenge for future methods.
Table 5: BERT-based Bi-Ranker ablations (valid set). The LIGHT environment includes a variety of ground- ing information: dialogue, action, emote, persona, set- ting, and object descriptions.
Dialogue F1 PPL Action Emote Acc Acc Generative Transformer actions+emotes only dialogue only dialogue+action+emote dialogue+persona dialogue+setting dialogue+objects 27.1 32.8 28.0 27.6 27.8 27.8 27.7 13.9 9.3 12.5 12.3 12.9 12.1 12.8 13.0 10.5 12.3 12.8 12.3 11.5 11.0 20.6 15.3 20.0 22.0 20.8 17.8 20.2
# 5.3 Data Inter-connectedness and Coverage
To illustrate the coverage of entities and actions in the LIGHT world, and the inter-connectedness be- tween them learnable from our data, we trained a simple Starspace embedding model with no pre- built embeddings (so, on our data alone, thus pre- cluding BERT) on all three tasks and show em- beddings in Table 3. There is clearly a vast variety of learnable concepts and rich structure between characters, locations, objects, actions and the lan- guage describing them. We also show additional t-SNE plots and heatmaps showcasing these rela- tionships in Appendix G.
Table 6: Generative Transformer ablations (valid set).
cantly stronger at all tasks, with BERT pretrain- ing proving important for best results as used in the Bi-Ranker and Cross-Ranker architectures. The latter, which can create a context dependent representation of each label candidate, is better at actions and emotes. Human performance is still above all these models, leaving space for fu- ture improvements in these tasks. The genera- tive Transformer model did not work as well using these metrics.
# 5.2 Generalization Capability on Unseen Test
The six new unseen test settings are a slightly easier task in absolute numbers (Table 4, right), with improved scores for humans and some mod-
5.4 Eï¬ect of Various Environment Features We provide a large quantity of information about the environment to each of our models â not only di- alogue, but the description of the setting, the char- acterâs persona, present objects with descriptions, and more. We analyze the usefulness of the addi- tional grounding information in Tables 5 and 6.
For the dialogue task, having access to all of the environmental information provides the best per- formance for both retrieval and generative models. Training on dialogue alone substantially decreases performance, while each experiment that adds ad- ditional grounding information such as the past ac- tions, persona or the setting description, improves the score. Providing object descriptions as a fea-
Persona: I am a part of a group of travelers. I go from town to town selling food to the locals. I grew up poor, but my travels have paid oï¬ well.
Setting 1: Fishmongerâs stall, Port A small booth near the edge of the port, itâs protected by a piece of old, sun-bleached sailcloth. Baskets of freshly- caught ï¬sh, bivalves, and eels sit in the shade in stained wooden troughs of water. A small, aggressive-looking dog is chained to one table, presumably to keep cats away. The stall is redolent with the aroma of ï¬sh. Friend: I wonder what I could eat around here... Emote: ponder Traveler: Customer, are you here shopping for ï¬sh too? Friend: What brings you to this place? Traveler: I like to come around here for food. Sometimes people who travel through drop the most delicious things. Once in a while itâs roasted meet or ï¬sh. Setting 2: Dunes, Desert A massive hilly landscape that is nothing but sand and a few rocks. As you walk this area, you can ï¬nd some human and animal remains along with broken down wood wagons. Friend: I wonder what I could eat around here... Emote: ponder Traveler: Well, the desert is certainly the wrong place for you my friend. Friend: What brings you to this place? Traveler: I am travelling to the castle market to sell my goods. I have a terrible sense of direction and have been wondering in the sweltering heat for hours until I found your Oasis.
Table 7: Predicted dialogue by the BERT-based Bi-Ranker (as the traveler character) given diï¬erent settings.
Self name: Sea Witch. Self Previous Dialogue: What do you know about that knight standing over there? Input Dialogue + Emote Partner His armor is garrish. You Mermaid know I donât fraternize with land dwellers, pout He is a terrible knight and I hate him, cry I will battle him until the Mermaid end of my days, scream
Input from Partner: Wizard Iâm feeling sad You must die! Try putting on something else Iâd like you to feed me Can you grab me a paper Can you grab me a beer Clean up Hide the gold Input from diï¬erent agents Wizard: Can I have some drink? Servant: Can I have some drink? Bear: Can I have some drink? Prediction (Self name: Servant) hug wizard hit master wizard remove patterned outï¬t give food to master wizard give book to wizardâs assistant get beer get duster put gold in satchel Prediction drop potion give wine to servant give water to bear
Table 8: Predicted emotes by the Generative Trans- former given example inputs from dialogue partner.
Table 9: Predicted actions by the BERT-based Bi- Ranker given example inputs from the dialogue partner.
ture leads to the least improvement. As there are both a large quantity of objects that can be present and objects tend to have long descriptions, it can be challenging for the model to associate such in- formation to a dialogue, action, or emote predic- tion task. The persona features were found to be impactful, which makes sense as they shape the things the character says (and does).
Action sequence and emote prediction are much improved when using the dialogue history com- pared to using only past action history. Other fea- tures generally have lesser impact in this case, but still give some improvements. Including all fea- tures appears challenging for the model, perhaps because of the large input to attend over, resulting in improved results for some ablations.
Most importantly, for all tasks training on the available dialogue data is necessary for good per- formance. Providing only the action and emote as context results in the worst performance, even on action and emote prediction tasks. Moreover, us-
ing dialogue and actions simultaneously improves results almost everywhere. The integrated envi- ronment in which agents can both act and speak to other agents provides relevant information that can be used across all tasks.
Context aï¬ects predicted utterances We in- vestigate the eï¬ect of the environmental context on the predictions by modifying the context and examining the changes in predicted dialogue, ac- tion, and emotes using the BERT-based Bi-Ranker. The input dialogue and speaker has a strong ef- fect on the predicted action, as shown in Table 9, ranking over all training set actions. For example, the partner asking for an item results in a predicted action to retrieve it, despite our dataset not being explicitly instructional, and is dependent on who asks.
A similar eï¬ect is observed for emote predic- tion. Modifying the dialogue and emote input pro-
duces a variety of diï¬erent predicted emotes in Ta- ble 8. Further, keeping the context otherwise ï¬xed but modifying the partner name from mermaid to orc results in a diï¬erent predicted emote â the mermaid stating I will battle him leads to a stare while the orc receives a nod.
Finally, for dialogue prediction we ï¬nd the model produces diï¬erent outputs that are more ap- propriate for a given setting, even if the dialogue and characters are the same, see Table 7. With the same text about food, the model retrieved dialogue that was setting appropriate. In the ï¬shmongerâs stall, it asked if the human agent was a customer shopping for ï¬sh, but in the desert dunes it sug- gested we might be looking in the wrong place.
# 6 Conclusion
We introduced a large-scale crowdsourced fan- tasy text adventure game research platform where agentsâboth models and humansâcan act and speak in a rich and diverse environment of loca- tions, objects, and other characters. We analyzed a variety of models and their ability to leverage the grounding information present in the environ- ment. We hope that this work can enable future research in grounded language learning and fur- ther the ability of agents to model a holistic world, complete with other agents within it.
# 7 Acknowledgements
We thank Ta´ıs Mauk and Lisa Wong for their help with this project.
# References
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question an- swering. In Proceedings of the IEEE international conference on computer vision, pages 2425â2433.
Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In Proceedings of the International Conference on Learning Representations (ICLR).
Antoine Bordes, Nicolas Usunier, Ronan Collobert, and Jason Weston. 2010. Towards understanding sit- uated natural language. In Proceedings of the Thir- teenth International Conference on Artiï¬cial Intelli- gence and Statistics, pages 65â72.
Simon Brodeur, Ethan Perez, Ankesh Anand, Flo- rian Golemo, Luca Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, and Aaron Courville.
2017. Home: A household multimodal environ- ment. arXiv preprint arXiv:1711.11017.
Rodney A Brooks. 1991. Intelligence without repre- sentation. Artiï¬cial intelligence, 47(1-3):139â159.
Angel Chang, Angela Dai, Thomas Funkhouser, Ma- ciej Halber, Matthias NieÃner, Manolis Savva, Shu- ran Song, Andy Zeng, and Yinda Zhang. 2017. Mat- terport3d: Learning from rgb-d data in indoor envi- ronments. arXiv preprint arXiv:1709.06158.
Marc-Alexandre CËot´e, ´Akos K´ad´ar, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, et al. 2018. Textworld: A learning en- arXiv preprint vironment for text-based games. arXiv:1806.11532.
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. CoRR, abs/1810.04805.
Edward Dieterle. 2009. Multi-user virtual environ- In Encyclopedia ments for teaching and learning. of Multimedia Technology and Networking, Second Edition, pages 1033â1041. IGI Global.
Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2019a. The second conversational arXiv preprint intelligence challenge (convai2). arXiv:1902.00098.
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019b. Wiz- ard of Wikipedia: Knowledge-powered conversa- In Proceedings of the International tional agents. Conference on Learning Representations (ICLR).
Intentional context in situated natural language learning. In Proceedings of the Ninth Conference on Computa- tional Natural Language Learning, pages 104â111. Association for Computational Linguistics.
Jon Gauthier and Igor Mordatch. 2016. A paradigm for situated and goal-driven language learning. arXiv preprint arXiv:1610.03585.
Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meet- ing of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263â272.
Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. 2016. The malmo platform for arti- ï¬cial intelligence experimentation. In IJCAI, pages 4246â4247.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for eï¬cient text classiï¬cation. arXiv preprint arXiv:1607.01759.
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Exploring arXiv preprint Shazeer, and Yonghui Wu. 2016. the limits of language modeling. arXiv:1602.02410.
Douwe Kiela, Luana Bulat, Anita L Vero, and Stephen Clark. 2016. Virtual embodiment: A scalable long-term strategy for artiï¬cial intelligence research. arXiv preprint arXiv:1610.07432.
Eric Kolve, Roozbeh Mottaghi, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. 2017. Ai2- thor: An interactive 3d environment for visual ai. arXiv preprint arXiv:1712.05474.
Brenden M Lake, Tomer D Ullman, Joshua B Tenen- baum, and Samuel J Gershman. 2017. Building ma- chines that learn and think like people. Behavioral and Brain Sciences, 40.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: In European confer- Common objects in context. ence on computer vision, pages 740â755. Springer.
Laurens van der Maaten and Geoï¬rey E. Hinton. 2008. Visualizing data using t-sne.
P.-E. Mazar´e, S. Humeau, M. Raison, and A. Bordes. 2018. Training Millions of Personalized Dialogue Agents. ArXiv e-prints.
Tomas Mikolov, Armand Joulin, and Marco Baroni. 2016. A roadmap towards machine intelligence. In International Conference on Intelligent Text Pro- cessing and Computational Linguistics, pages 29â 61. Springer.
A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. 2017. Parlai: A dialog research software platform. arXiv preprint arXiv:1705.06476.
Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Jianfeng Gao, Georgios P Sp- Michel Galley, ithourakis, and Lucy Vanderwende. 2017. Image- grounded conversations: Multimodal context for arXiv natural question and response generation. preprint arXiv:1701.08251.
Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language understanding for text- based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941.
Junhyuk Oh, Satinder Singh, Honglak Lee, and Push- meet Kohli. 2017. Zero-shot task generalization with multi-task deep reinforcement learning. arXiv preprint arXiv:1706.05064.
Ramakanth Pasunuru and Mohit Bansal. 2018. Game- arXiv preprint based video-context dialogue. arXiv:1809.04560.
Jeï¬rey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP.
Alec Radford, Jeï¬rey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Manolis Savva, Angel X Chang, Alexey Dosovitskiy, Thomas Funkhouser, and Vladlen Koltun. 2017. Minos: Multimodal indoor simulator for naviga- arXiv preprint tion in complex environments. arXiv:1712.03931.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL.
Kurt Shuster, Samuel Humeau, Antoine Bordes, and Jason Weston. 2018. Engaging image chat: Model- ing personality in grounded dialogue. arXiv preprint arXiv:1811.00945.
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive gen- eration of conversational responses. arXiv preprint arXiv:1506.06714.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998â6008.
Oriol Vinyals and Quoc Le. 2015. A neural conver- In Proceedings of the 31st Inter- sational model. national Conference on Machine Learning, Deep Learning Workshop, Lille, France.
Harm de Vries, Kurt Shuster, Dhruv Batra, Devi Parikh, Jason Weston, and Douwe Kiela. 2018. Navigating new york city Talk the walk: arXiv preprint through grounded dialogue. arXiv:1807.03367.
Sida I Wang, Samuel Ginn, Percy Liang, and Christoper D Manning. 2017. Naturalizing a pro- gramming language via interactive learning. arXiv preprint arXiv:1704.06956.
Sida I Wang, Percy Liang, and Christopher D Manning. 2016. Learning language games through interaction. arXiv preprint arXiv:1606.02447.
Thomas Wolf, Victor Sanh, Julien Chaumond, and Transfertransfo: A learning approach for neural network arXiv preprint
Ledell Yu Wu, Adam Fisch, Sumit Chopra, Keith Adams, Antoine Bordes, and Jason Weston. 2018a. In Thirty-Second Starspace: Embed all the things! AAAI Conference on Artiï¬cial Intelligence.
Yi Wu, Yuxin Wu, Georgia Gkioxari, and Yuandong Tian. 2018b. Building generalizable agents with a realistic and rich 3d environment. arXiv preprint arXiv:1801.02209.
Zhilin Yang, Saizheng Zhang, Jack Urbanek, Will Feng, Alexander H Miller, Arthur Szlam, Douwe Kiela, and Jason Weston. 2017. Mastering the dun- geon: Grounded language learning by mechanical turker descent. arXiv preprint arXiv:1711.07950.
Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Tor- ralba, Pushmeet Kohli, and Josh Tenenbaum. 2018. Neural-symbolic vqa: Disentangling reasoning from In Advances vision and language understanding. in Neural Information Processing Systems, pages 1039â1050.
Haonan Yu, Jiang Wang, Zhiheng Huang, Yi Yang, and Wei Xu. 2016. Video paragraph captioning using hierarchical recurrent neural networks. In Proceed- ings of the IEEE conference on computer vision and pattern recognition, pages 4584â4593.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics, pages 2204â2213, Melbourne, Australia. Association for Computational Linguistics.
# Supplementary Material
# A Model Inputs
For extra clarity, we show here the exact input rep- resentation given to our models when including all the grounding features we consider in the experi- ments (setting, objects, characters + personas, ac- tions, emotes, and dialogue). An example is given in Figure 2.
We note that there are other ways to represent this information that we have not explored that could improve performance. Further, there is ad- ditional information in LIGHT that could possibly be encoded in the input text: for example, what characters are carrying, and the aï¬ordances of ob- jects. The latter, while not explicitly provided in
the input does constrain the available actions, so it is still used by the model. Object aï¬ordances such as is gettable are visible to models via the action history, but more explicit inputs could po- tentially be useful, and this could be explored in future work.
# B Bi-Ranker and Cross-Ranker Speeds
We give test time computation speeds for the BERT-based Bi-Ranker and Cross-Rankers in Ta- bles 10 and 11 for the emote and dialogue tasks. For the emote task, the Cross-Ranker is still feasi- ble due to there being only 22 labels to compute, although it is still 4.6x slower than the Bi-Ranker if the 22 candidate representations are cached. The Bi-Ranker can always cache label representations if they are ï¬xed for many input examples (the common case) because the representation does not depend on the input. For the Cross-Ranker this cannot be done because the label representations are contextually dependent on the input. For dia- logue retrieval, because the number of candidates is so large (more than 100,000) caching makes the Bi-Ranker feasible whereas the Cross-Ranker, which cannot cache label representations, is infea- sible to compute.
Emote w/o caching with caching Bi-Ranker Cross-Ranker 171s 70s 326s (â¼1.9x slower) n/a (â¼4.6x slower)
Table 10: Bert Bi-Ranker and Cross-Ranker speeds on the emote task, test seen (2495 examples), 22 candi- dates per example.
Dialogue Bi-Ranker Cross-Ranker 2.07s 24453s (â¼11812x slower)
Table 11: Bert Bi-Ranker and Cross-Ranker speeds on the dialogue task, per single example average (retrieval over 110,877 training set candidates).
# C Unseen Test Set Overlap
The unseen test set is chosen by design to be rel- atively distinct from those available in the train- ing set, and the actual content (descriptions, per- sonas, dialogues) are entirely disjoint. However, due to the large size of the dataset, it is possible the names of locations, characters, and objects in the unseen set could have word overlap. We assert
Input to Model: task speech setting name main foyer, Inside Castle setting desc The main foyer is massive. A grand staircase sits to the back of the foyer leading to the upstairs. At the front of the foyer stand two servants ready to help anyone who comes to visit. To the left of the room there is a doorway leading into a corridor. To the right there is a door leading to another corridor for the Kingâs servants. At the foot of the stairs there is a bearskin rug that is staring at you almost as if still hungry. The walls are lined with portraits of the king and his family. partner name servant self name king self persona I am a king of the whole empire. I give rules and pursuit them. I am brave and fearless. object desc a duster : The duster has large gray feathers bound together by a leather wrap. object desc a small bucket : The bucket may be small but it gets the job done. object desc a rag : The tattered rag was smeared with blood, torn to shreds and left unceremoniously in a pile on the ï¬oor. object desc a shirt : The shirt is tailored from ï¬nely woven cotton and is fastened up the front by a series of rounded buttons. object desc a crown : Thought of as a holy item, the crown goes only to those who are worthy enough. object desc a scepter : On its handle, you see two red gems gleaming like eyes of an animal. partner say my humble king. What am I to do to serve you? self act give scepter to servant partner say Yes my lord. I will polish it immediately. Am I to return it to you personally? partner act put scepter in small bucket self act give crown to servant
Label: Yes. Yes. Of course. Also check the jewels in my crown. They seem loose.
Figure 2: Example input format (and target label) given to models, following the same dialogue in Figure 1. Tokens like â setting nameâ are special tokens intended to be signiï¬ers for the encoding module of a network to know which piece of grounding information is being read on that line.
this by comparing word overlap with the names of locations, characters, and objects in the train- ing set. Of the 73 locations, 207 characters, and 956 objects created from the unseen location cat- egories, the names of 3 locations, 96 characters, and 203 objects exactly match names of elements in the training set. We note that these represent names such as tavern, but the chats are collected with the full location descriptions (which are un- seen in the training set) and thus reduces overlap with train.
primary technique regarded using rounds of pilots and staged tasks to gradually ï¬lter towards high quality content rather than collecting all of the content in a single forward pass. Nearly half of the content in each initial pilot task was discarded, and we iterated on pilot tasks until the discard rate was less than 1 in 30 tasks. The rest of this section will discuss some speciï¬c measures taken at the indi- vidual task level, and will acknowledge some ar- guable deï¬ciencies and potential areas of improve- ment on the dataset in its current form.
# D Crowdsourcing Methodology
Expanding on the dataset collection explanations in section 3, a number of steps were taken to at- tain a level of quality and consistency. The ï¬rst and most inï¬uential came from the constrains of the setting itself. We used a fantasy setting to try to encourage some kind of continuity across the dataset. We believed that workers would share some kind of common understanding about what a fantasy environment would entail, and then this understanding would be reï¬ected in the dataset. It also ensured there were easy ways to ï¬ag certain workers that were creating content that wouldnât make sense in the dataset (referencing real loca- tions, modern day objects, etc.). From here we could remove some content and ï¬lter workers out from continuing to work on this dataset. The other
Locations The location task of creating a de- scription, backstory, list of connected rooms, and annotations of characters and objects present seemed to be too disjoint of a task based on the crowdsourcing best practice of breaking down tasks into as atomic of an action as possible. Thus we split it into two tasks, the ï¬rst to provide the core text content and list of connected rooms, and the second to annotate the content inside those rooms. We will refer to these as Task 1 and Task 2, and were simple form-entry tasks as displayed in Figures 4 and 5. These two tasks were used in sequence to produce the locations present in the dataset.
In order to drive quality, we manually reviewed a handful of rooms from each worker to assert that the rooms had proper English descriptions and back-stories, and that the room ï¬t appropri-
ately in the category provided. In retrospect, given the two-tiered task setup and some of the tech- niques we developed later in the collection setup, we could have asked workers who were annotat- ing rooms in Task 2 to provide some kind of sig- nal about the quality of the rooms from Task 1 in order to have a lower-cost method for evaluating the quality of the work from Task 1 than using our own time.
Ultimately, one of the most important steps for improving dataset quality at this stage was cre- ating form validators that caught the most com- mon error cases from the ï¬rst time around. These validators had the bonus eï¬ect of deterring bot- ting of our tasks, as they couldnât pass the vali- dation stage. For Task 1, the simple validator we ended up using asserted at least one complete sen- tence (determined via capitalization and punctua- tion) for both the description and background. For Task 2, our validation step forced workers to enter values that had direct word overlap with the en- tered text.
One of the largest diï¬culties with Task 2 was that some workers would optimize for grabbing key words out of the text without taking the time to fully understand the context. As thus, phrases like âand the remains of adventurers long deadâ would occasionally result in workers annotating the presence of adventurers as characters in the given room. We attempted to mitigate this type of false positive with both explanatory examples and spot checks to soft-block workers who made this mistake consistently. At the moment a small number of these still remain in the dataset, but gen- erally in instances where it still makes sense as in the above example, where the room deï¬nitely has remains of previous adventurers, but appropriately could also have some current adventurers as well.
Characters Similarly to how we split Location collection into two tasks, Character collection was split into two tasks as well. The ï¬rst asked work- ers to clean up the span selected in Task 2 in or- der to remove words that didnât directly relate to or describe the character, and to provide a singu- lar form for plural characters (as we intended for someone to eventually play the role of the singu- lar character), tag the character as a person, crea- ture, or object that was accidentally tagged as a character, and then asked for a ï¬rst-person per- spective persona for the singular character. The second task gave workers the name of a character
and their persona, and asked for a second-person perspective description for the character as well as a list of objects that the character may be carry- ing, wielding, or wearing. Weâll call these tasks Task 3 and Task 4, and these were also collected via form-based tasks as displayed in Figures 6 and 7. We used complete sentence form validation for both the persona from Task 3 and text descriptions in Task 4 to ï¬ag potential bad examples to ï¬lter out.
The goal of the Task 3 was two-fold, ï¬rst to val- idate and standardize the format of output from Task 2, and then second to begin to collect the creative content in the form of a persona. For ex- ample, we used Task 3 to transition from Sneaky Thieves who stole the gold to Sneaky Thieves to Sneaky Thief. Based on worker feedback from ini- tial pilots, we found that balancing creative and mechanical work in the same task kept workers more engaged with the tasks at hand.
The most common mistake that surfaced in the initial pilots was incomplete entries for tasks that didnât actually require correction, for example if the provided form was simply Traveler. We chose to embrace this format and assume that unï¬lled entries were already in their base form. The sec- ond most common mistake was describing per- sonas from a third person perspective. This occur- rence required manual ï¬ltering, as in some cases it was actually somewhat character appropriate to have a persona in that format, such as for an un- educated goblin. We ï¬ltered out a majority of these by searching for direct overlap between the provided character name and the persona. Ulti- mately itâs easy to extract the examples that have the clearest grounding format by ï¬ltering for ex- amples that contain âIâ, so as these examples pro- vide more variety in the dataset we chose to keep them.
forth by our singular-form constraint is that it was somewhat ambiguous how one would get the singular form of a collective term such as family. In most cases we found that workers would choose to provide the format of collective member or simply person, which sometimes led to vague personas and thus less strong grounding in followup tasks. The con- tent is still workable in these cases though, just not as ideal as we might have wanted. A possi- ble route for improvement here would be a task that asks workers to create a few possible mem-
bers for a collective for any character we currently have annotated as a member. It is important to note that these cases account for just 44 out of the 1755 collected characters.
One issue of note that surfaced in Task 4 was that workers occasionally described clothing that would potentially lead to risky actions and conver- sation material, so we chose to eliminate under- garments from the dataset to prevent the creation of inappropriate combinations with the remove ac- tion. This was included as something to not write about in the task text.
Objects The object task is most similar to Task 3, but refocused on annotating objects that were speciï¬ed in Tasks 2 and 4. It took a step to correct the provided span and give a textual description of the object. It also asked for a number of aï¬or- dances, namely if the object can be picked up, is a container, is a surface, can be eaten, can be drank, can be worn, or can be wielded. We also collected a ï¬ag for if a particular example was not appropri- ate for the dataset or was hard to make sense of. This content was also collected as a form-based task, and we refer to it as Task 5 and display it in Figure 8. We use complete sentence validation on the text description as a simple quality ï¬lter as in previous tasks.
The methodology for Task 5 is very similar to Task 3, trying to both standardize data from pre- vious tasks and act as a ï¬lter for bad content that could have been overlooked before. It similarly had both a mechanical data entry and creative component, which tried to keep engagement up.
Overall the largest problem that was surfaced in the pilots was that workers tended to come up with descriptions for objects that were incompat- ible with our long term goal of having modular components that can be mixed and matched be- tween rooms and scenarios. This came up in many forms, such as workers describing objects as if they were being used in a scene happening in the present, as in the sword glimmered in the hands of the knight, wielded high in the sky in a call to battle. While creative, these ultimately were not what we were looking for, so we explicitly called out descriptions like this and many others as be- ing undesired content in our task description. We then manually checked a few examples from each worker to ensure that the data coming in for the ï¬nal task mostly adhered to this rule.
It is important to note that the object aï¬ordances
collected are somewhat noisy due to diï¬erent pos- sible interpretations of the primary object or the tags. Something like a branch could be valid as a surface in one circumstance, or a gettable weapon in another. We attempted to reconcile some in- dividual aï¬ordances where the pairings of aï¬or- dances didnât make much sense (for example, very few objects should be both a weapon and edible). This helped with certain objects that were over- tagged, however we havenât used any methods for reconciling scenarios where an object was under- tagged.
Dialogues Dialogue collection was the hardest task to get correct, and required the largest num- ber of pilot tasks and worker quality control tech- niques to get to a place that we were satisï¬ed with. The ï¬nal approach included creating a simple but deliberate onboarding test that needed to be passed in order to move forward with the task at all, col- lecting mutual feedback from workers about each other, setting timeouts for how quickly workers needed to respond to each turn, and manually val- idating a few examples from each worker. Each of these steps aimed to solve a diï¬erent problem, as described in the rest of this section. We will refer to this task as Task 6, and it was collected using the ParlAI-MTurk interface as shown in Figure 9. Firstly, we needed to pair two workers together in order to properly collect dialogues with peo- ple playing two diï¬erent roles without necessar- ily having insider information into the decisions of each othersâ turns. While pairing workers solves this problem, it makes the worker experience in- credibly dependent on the quality of the worker that they are paired with you. Furthermore, if a worker is paired with a worker that is extremely low quality, the whole dialogue may need to be discarded or is otherwise only useful as an exam- ple for how a model might want to react to bad in- put. If the other worker is good, this makes having any bad workers in the pool not just a poor expe- rience for workers but expensive for the collection process in general. This is the problem that the initial onboarding test aimed to solve. The require- ments for passing included entering a speciï¬c cor- rect answer as well as at least 4 characters of into the text ï¬eld. The required action was created such that a worker would have to read and understand the provided persona and setting, how the two in- teract, the characters and actions available, and be able to synthesize all of the information with an
understanding of how to use the interface to send the correct answer. The test required getting the single action correct in 3 attempts. Failing the test on any attempt would permanently soft block a worker from working on Task 6 in the future.
The above test did a lot of work for ï¬agging workers that were well below the bar for complet- ing Task 6 at the level we wanted for the dataset, however as it was a one turn test and it had no way to fully evaluate the quality by which workers would actually incorporate their persona and the setting into their dialogue turns. Furthermore, it didnât ï¬lter out workers that would take too much time on their turns and thus cause their partners to disengage and provide lower quality responses, potentially due to working on other tasks in the background and doing too much context switch- ing. We solved these problems separately.
In order to handle low quality workers, we al- lowed workers the opportunity to rate each other at the end of each dialogue, and to provide tags about the experience. We found that positive feed- back was generally noisy and hard to get signal from, but negative feedback almost always cor- related to a worker who was providing bad con- tent. As a bonus, workers gave us positive feed- back about this capability, as it allowed them to ï¬lter out workers that made the task less engaging and interesting for them. We reviewed this feed- back periodically while tasks were running and soft-blocked workers low quality workers when- ever they were ï¬agged.
In order to handle the inï¬uence of response time on task quality, we set a maximum response time of 5 minutes for any given turn, and overall started soft blocking workers that were consistently above 2 minutes for each message, even if their particular content was pretty good. This improved collection times and did not seem to negatively aï¬ect quality. After this point, manually checking the col- lected conversations still surfaced a few bad exam- ples when viewing one chat per worker rather than arbitrarily sampling the dataset. In order to rem- edy this, the last quality check was a direct evalua- tion of at least 2 dialogues from each worker. This caught a few overlooked instances from workers that didnât necessarily work on enough tasks to get ï¬agged by one of our consistently reviewing workers. Generally this surfaced some quality is- sues surrounding profanity, inappropriate content for the given setting, and entire misunderstanding
of the task at hand such as never using the persona or location as grounding context in the conversa- tion. As not all workers were particularly diligent raters (as conï¬rmed by the low signal of positive ratings - workers donât necessarily want to ï¬ag each other as bad), a few workers were able to slip through the cracks up until this point due to not completing enough tasks to encounter a rater that ï¬agged them.
One small acknowledgement throughout the di- alogues is that there are still misspellings, im- proper grammar, mistaken keystrokes, and such. While the rate of occurrence is orders of magni- tude lower than we observed in the initial pilots, it is hard to separate cases where it is a genuine mis- take versus cases where it is appropriate for the character, such as a pirate using seaworthy lexi- con and adding extra Râs to suggest a pirate-like drawl, or a snake that slips in extra Sâs to better play the role.
# E Descriptions of Actions and Emotes
The LIGHT action set builds upon the graph framework introduced in Mastering the Dungeon (Yang et al., 2017). The basic idea presented is that everything in the text adventure game can be represented as nodes, and then state is described by edges between those nodes. In this way, an agent and an object can be in a room, and that agent can be carrying a diï¬erent object or a con- tainer might have an object inside as well by the same kind of relation. After deï¬ning this relation- ship, we can further deï¬ne a set of actions that can be taken based on a combination of the state of the graph and the attributes of nodes in that graph. The available actions for the dialogues collected in this dataset, along with the constraints for apply- ing those actions, are available in Table 12. We used the crowdsourced object aï¬ordances to set the correct attributes for nodes in the graph (if the object can be picked up, is a container, is a surface, can be eaten, can be drank,can be worn, or can be wielded).
For the emotes, we paired down a list of emotes sourced from existing MUDs to reduce redun- dancy and task complexity at the acknowledged cost of expressiveness. This led us to select just one out of scream, shout, and yell instead of keep- ing them all, as having all of the emotes would lead to a more complicated crowdsourcing task than we wanted to risk. We ended up with a set
applaud, blush, cry, dance, frown, gasp, grin, groan, growl, laugh, nod, nudge, ponder, pout, scream, shrug, sigh, smile, stare, wave, wink, yawn
Figure 3: Emote options within the LIGHT platform
of 22 emotes, listed in Figure 3.
# F Descriptions of Human Evaluations
As crowdworkers can sometimes be inconsistent, we set up two ï¬lters to onboard workers into be- ing fair representatives for human perfomance on the task. The ï¬rst gave workers a few chances to select the correct input for a turn each of dia- logue, emote, and action on a scenario we created to strongly hint at the correct answer. We then chose to use performance on the training set as a secondary ï¬lter to have workers that were capable of the task. Each of the tasks has a diï¬erent level of diï¬culty, so we selected reasonable benchmark values based on our own performance on the tasks. For dialogue, this required getting all 7 of the turns from the training set correctly. For actions, this re- quired getting 6 out of 8 turns from the training set correctly. Lastly for emoting, we required getting only 2 out of 8 turns from the training set correctly. On the seen set, our accuracy on the dialogue, ac- tion, and emote tasks were calculated from 217, 165, and 211 turns respectively. On the unseen set, we calculated the accuracy from 196, 114, and 209 turns respectively.
# G Embedding Visualizations
To explore the diversity of LIGHT, we use t-SNE (van der Maaten and Hinton, 2008) to visualize the embeddings of the diï¬erent atomic dataset ele- ments â locations, objects, characters, and actions. We use two diï¬erent embeddings methods to tease out two key aspects of our dataset: 1) the inter- connectedness of grounding information (relation- ships between diï¬erent types of elements, such as the actions available around given objects, or in a given location), and 2) coverage (the variety of diï¬erent objects, locations, and characters in our world).
To explore the interconnectedness of our dataset, we visualize the embeddings learned when training the baseline Starspace ranking model on the task of dialogue, action, and emote prediction, in this case with no pretrained vec- tors so learning comes from our dataset alone.
The t-SNE visualizations of these Starspace em- bedding can be found in Figure 17. Because the Starspace model operates by mapping all inputs and outputs to a shared embedding space, we ï¬nd the learned embeddings capture many of the nu- ances and relationships between diï¬erent elements of our dataset. For example, looking at the near- est neighbors for the location âDockâ (the bottom- right of Figure 17), we see actions like âget crate from ship,â âput plank in ship,â objects like âshipâ and ârope,â and characters like âboat workers.â We see similar relationships captured when look- ing at nearest neighbors for the âpaintersâ char- acters, the âhug horseâ action, and the âpillowsâ objects.
To explore the coverage of our dataset, we use pretrained GLoVe word embeddings (Pennington et al., 2014), trained on the Common Crawl cor- pus. As each dataset element can consist of mul- âgive the horse a potato,â or tiple words (e.g. âThe Queenâs Chamberâ), we take the mean of the GLoVE vectors for each word as the ï¬xed vec- tor embedding for the element. The t-SNE vi- sualizations of these GLoVe-embedded elements can be found in Figure 18. Unlike the Starspace embeddings, which capture the structure present in the relationships between diï¬erent types of dataset elements, we ï¬nd that the GLoVe embed- dings capture the breadth and semantic similari- ties of dataset elements. For example, looking at the nearest neighbors for the embedding of the âDockâ location, we see similar locations present in our dataset, like âFerry Terminal,â âWharf,â âpier,â and âBoathouse.â Similarly, if we look at the nearest neighbors for the âpillowsâ objects, we see other objects like âbedding,â âmattresses,â ârugs,â âtowels,â and âcurtains.â
# H Action and Emote Relationships
To visualize the interaction trends between actions and emotes in LIGHT, we present heatmaps (in Figure 19) counting the number of occurrences of each immediately before or after oneâs part- ner performs an action or emote. While responses to an action or emote can evolve over multiple timesteps, we limit this visualization to action re- lationships within a single timestep. Additionally, to eï¬ectively measure trends in physical actions, we cluster all physical actions by the root word (for example, âsteal the sword from the soldierâ becomes âstealâ).
Action Constraints Outcome get object actor and object in same room object is gettable actor is carrying object drop object actor is carrying object object is gettable object is in room get object1 from object2 Actor and object2 in same room actor is carrying object1 object1 is gettable object2 is surface or container object2 is carrying object1 put object1 in/on object2 Actor and object2 in same room object2 is carrying object1 object2 is container or surface actor is carrying object1 give object to agent Actor and agent in same room object is a member of actor agent is carrying object steal object from agent actor and agent in same room object is a member of agent actor is carrying object hit agent Actor and agent in same room inform agent of attack hug agent Actor and agent in same room inform agent of hug drink object actor is carrying object object is a drink inform actor of drinking successfully eat object actor is carrying object object is a food inform actor of eating successfully wear object actor is carrying object object is wearable actor is wearing object wield object remove object actor is carrying object object is a weapon actor is wearing/wielding object object is wearable or a weapon actor is wielding object actor is carrying object
Table 12: LIGHT actions and constraints
While for the most part there are a multitude of diï¬erent observed physical and emotional re- sponses for each partner move, there are certain interesting trends to observe. Looking at the top- left of Figure 19, we see that if oneâs partner makes a âhitâ action, the most likely response is to âhitâ back. Looking at the same plot, we see that âhugâ actions are similarly reciprocated. If we look at the interplay between physical actions and emotes (top-right of Figure 19) we see a relationship be- tween oneâs partner taking a âhitâ action, and issu- ing a âscreamâ emote in response. Going the other direction and looking at the relationship between emotes and physical actions, we see that perform- ing a âcryâ or âsmileâ emote is likely to be met with either a consoling or celebratory âhug.â Fi- nally, looking at the relationships between a part- nerâs emote and an emote response, we see that positive emotes like âlaughâ and âsmileâ are likely to be reciprocated with a similar (if not identical) emote.
Below you are given a category for a location in a medieval fantasy world. Your goal is to come up with a room in that location. For this, imagine a "room" to be a space that you could describe directly to another person, pointing out things in eyesight. Thusly, "a large bustling town" isn't a good room to describe, but "a crossroads in a large bustling town" would be appropriate. Similarly, "A giant tower" would not be appropriate, but "courtyard of a giant tower", "giant tower entrance", or "bedroom at the top of a giant tower" would be. Imagine you are in this location. Describe the physical appearance of the place: what you see around you. Then come up with a backstory or small history for the place, some kind of flavor text. Below is an example: * Category: Inside Castle 1. Provide the name of a 'room' in this category. Throne Room 2. Describe this location physically in complete sentences: Ornate and luxurious, the throne room is richly decorated. Priceless paintings of past kings line the sides. Underneath a vaulted ceiling, a regal red carpet connects the throne room's entrance to the golden throne itself. 3. Provide a backstory for the location in complete sentences: The throne room is the pride and joy of the castle. The king commissioned hundreds of craftsman to construct its expensive interior. The king often brings important ambassadors to this room to impress them with his power. Finally, you will need to the names of two additional locations that would be reachable from your room, an associated direction to get there, and an action that an individual must take to travel to the new location. ("Walking" is not a sufficient enough action, however "walking through the oak doorway" could be) Below is an example continuing on the example above: 1. Provide a location that would be reachable from your location. banquet hall 2. Select the direction to get there: North 3. Provide an action one would take to go there, such that it completes the sentence "One could get there by...": Walking through the ornate doorway 4. Provide another location that would be reachable from your location. dungeon 5. Select the direction to get there: Down 6. Provide an action one would take to go there starting with the verb in present tense, such that it completes the sentence "One could get there by...". This should be fairly short (no more than 15 words): Using the trapdoor in front of the throne Important notes - Please follow these rules when designing your location. (otherwise your hit may be rejected) Use present tense for your physical description of the room, as if you were describing it to someone who is in it with you Do not refer to real people (living or dead) when coming up with characters of your own When coming up with your locations, avoid content that wouldn't exist in a medieval setting Do not include any inappropriate or offensive content Do not reference real world locations such as countries or landmarks Location Category: frozen tundra 1. Provide the name of a 'room' in this category. (a room as defined in the task description): 4 2. Describe the location you've named above physically in complete sentences: 4 3. Provide a backstory for the location in complete sentences: 4 4. Provide a location that would be reachable from your location: 4 5. Select the direction to get there: North 7 6. Provide an action one would take to go there starting with the verb in present tense: y 7. Provide an additional location that would be reachable from your location: X 8. Select the direction to get there: North âa 9. Provide an action one would take to go there starting with the verb in present tense:
Figure 4: Form for Crowdsourcing Task 1
Below you are given a location in a medieval fantasy world, along with a description and some background context. Imagine you are in this location. Looking around, what objects do you see? What characters or creatures might you meet in this location? Then answer the following questions. For example, given the following location: * Location Category: Castle + Location: Throne Room Physical Description: Ornate and luxurious, the throne room is richly decorated. Priceless paintings of past kings line the sides. Underneath a vaulted ceiling, a regal red carpet connects the throne room's entrance to the golden throne itself. * Background Context: The throne room is the pride and joy of the castle. The king commissioned hundreds of craftsman to construct its expensive interior. The king often brings important ambassadors to this room to impress them with his power. 1. What objects might you find in this location that are directly referenced in the descriptions themselves? Please directly quote the example, selecting the most specific phrase for that object. Put each object on its own line. Priceless paintings of past kings a regal red carpet the golden throne 2. What characters or creatures might you meet in this room which directly referenced in the descriptions themselves? Please directly quote the example, selecting the most specific phrase for that character. Put each character on its own line (note, characters/creatures should be alive, or at the very least sentient.) The king important ambassadors We then ask you to come up with a few objects and characters which, while not specifically described in the descriptions, you might also find in these locations. Be creative! Important notes - Please follow these rules when designing your location. (otherwise your hit may be rejected) « Do not refer to real people (living or dead) when coming up with characters of your own When listing objects and character that are mentioned in the descriptions, quote from the description text Put each object and/or character on its own line. + Do not write from a first person perspective, such as My sword When coming up with your own objects/characters, avoid content that wouldn't exist in a medieval setting Do not list characters that the description mentions are specifically absent from the room. For example, if a description of an old mine describes miners who are long gone do not list miners. * Each object or character you describe should be its own line. For example, if the description says sharp swords and knives, please separate this into the two objects sharp swords, knives Location Name: cloud tavern Location Category: city in the clouds Physical Description: this is a tavern in the cloud city, a rowdy place of winged drunkards throwing winged darts and drinking from winged tankards Background Context: This is the local watering hole of angel men who come here after work to socialize after work and blow off some steam 1. What objects might you find in this location that are directly referenced in the descriptions themselves? Please directly quote the example (case sensitive), selecting the most specific phrase for that object. Put each object on its own line. If there are no objects then put Nothing 2. Come up with at least two more objects which you might find in this location but are not mentioned in the description. Put each object on its own line. Ss 3. What characters or creatures might you meet in this room which directly referenced in the descriptions themselves? Please directly quote the example (case sensitive), selecting the most specific phrase for that character. Put each character on its own line (note, characters/creatures should be alive, or at the very least, sentient). If there are no characters then put Nothing X 4. Come up with at least two more characters or creatures who you might meet in this location but are not mentioned in the description. Put each character on its own line. X
Figure 5: Form for Crowdsourcing Task 2
Below you are given an character who belongs in a medieval story. (The given character may be partially incorrect, part of the task will let you correct it). Answer the following questions about the character. For example, given the character "a couple angray bandits making a scene", one might answer: 1. Is the original description of the character singular or plural? Plural 2. If the given description is more than just a character and adjectives, provide just the character and adjectives. Please try to correct misspellings in the original text if they exist. a couple angry bandits Give the singular form of only the character word (the noun, without any adjectives or modifiers). bandit Is this character a Communicator (or something that would normally be able to communicate like a person would using language), a Creature (which would normally be unable to communicate using language. If you're not sure, put Communicator), or an object? Communicator . Give 3-5 sentences of background on this (singular) character. You might include history, personality, or likes/dislikes. Write in first person from the point of view of the character, as if you were them. Each sentence should be an individual point and not refer to the other sentences. I am a bandit from a nearby village. I ambush travelers on their way to the kingdom. I am hot tempered and easily offended. Important notes - Please follow these rules when designing your location. (otherwise your hit may be rejected) + Do not refer to real people (living or dead) or real places when coming up with character descriptions of your own â+ When coming up with descriptions avoid content that wouldn't exist in a medieval setting + If the supplied text is not a character (if it's an inanimate object for example) try your best to imagine how it would act if you brought it to life. ay 1. Is this provided description of the character singular or plural? Singular 2. If the given description is more than just a character and adjectives, provide just the character and adjectives. Please try to correct misspellings in the original text if they exist. 3. Give the singular form of only the character word (the noun, without any adjectives or modifiers) 4, Is this character a Communicator, Creature, or Object? Communicator 5. Give 3-5 sentences of background on this (singular) character. You might include history, personality, or likes/dislikes. Write in first person from the point of view of the character (I am...).
Figure 6: Form for Crowdsourcing Task 3
Below you are given a character (or group of characters) in a medieval fantasy world with a personality. Imagine just one of them. What objects would that character be carrying, wearing, or wielding. For example, given the following description: « Character: Knights of the army ¢ Persona: I'm not as strong as some other knights. I don't think I belong here, and I miss my family. I'm too afraid of sharp objects to be of any use. 1. Provide an engaging description of this character from a second person perspective. (Don't use "I") Most knights intimidate you, but this one seems like he couldn't hurt a fly. 2. What objects might this character be carrying? Put each object on its own line. Family heirloom woven bag . What objects might this character be wearing? Put each object on its own line. Armor Tunic 4, What objects might this character be wielding? Put each object on its own line. Club Dulled sword Battered shield w Important notes - Please follow these rules when describing your character. (otherwise your hit may be rejected) Do not refer to real people (living or dead) when coming up with objects Put each object on its own line. Do not write objects from a first person perspective, such as My sword When coming up with objects, avoid content that wouldn't exist in a medieval setting Each object you describe should be its own line. For example, instead of sharp swords and knives, please separate this into the two objects sharp swords, sharp knives ¢ If the provided character cannot wear or wield items, please write "Nothing" in boxes that don't make sense for a character. A maid for example is unlikely to wield anything * Objects should be separate from the character - a bartender would not be wielding "fists" as fists are part of the bartender Imagine a singular instance of the character(s) provided below Character: other animals Persona: I am one of the other animals that lives in the meadow surrounding the castle. I play with the other animals all day. I'm only frightened when fighting breaks out in the meadow or forest. 1. Provide an engaging description of this character from a second person perspective. (Don't use "I") 4 2. What objects might this character be carrying? Put each object on its own line. If this character wouldn't carry anything then write Nothing 4 3. What objects might this character be wearing? Put each object on its own line. If this character wouldn't wear anything then write Nothing 4 4. What objects might this character be wielding? Put each object on its own line. If this character wouldn't wield anything then write Nothing 4
Below you are given a character (or group of characters) in a medieval fantasy world with a personality. Imagine just one of them. What objects would that character be carrying, wearing, or wielding. For example, given the following description: « Character: Knights of the army ¢ Persona: I'm not as strong as some other knights. I don't think I belong here, and I miss my family. I'm too afraid of sharp objects to be of any use. 1. Provide an engaging description of this character from a second person perspective. (Don't use "I") Most knights intimidate you, but this one seems like he couldn't hurt a fly. 2. What objects might this character be carrying? Put each object on its own line. Family heirloom woven bag . What objects might this character be wearing? Put each object on its own line. Armor Tunic 4, What objects might this character be wielding? Put each object on its own line. Club Dulled sword Battered shield w Important notes - Please follow these rules when describing your character. (otherwise your hit may be rejected) Do not refer to real people (living or dead) when coming up with objects Put each object on its own line. Do not write objects from a first person perspective, such as My sword When coming up with objects, avoid content that wouldn't exist in a medieval setting Each object you describe should be its own line. For example, instead of sharp swords and knives, please separate this into the two objects sharp swords, sharp knives ¢ If the provided character cannot wear or wield items, please write "Nothing" in boxes that don't make sense for a character. A maid for example is unlikely to wield anything * Objects should be separate from the character - a bartender would not be wielding "fists" as fists are part of the bartender
Figure 7: Form for Crowdsourcing Task 4
Below you are given an object that belongs in a medieval story. Answer the following questions about the object. For example, given the object "a few rusty swords", one might answer: 1. Is this object singular or plural? Plural 2. Give the singular form of only the object word (the noun, without any adjectives or modifiers) sword 3. Is this object a Weapon, Food, Drink, etc....2 Weapon 4. Give a one or two sentence description of the (singular) object from a second-person perspective, as if you were narrating someone's impression upon walking up to and examining the object. PLEASE REVIEW THE BAD EXAMPLES BEFORE SUBMITTING AS YOUR HIT MAY BE REJECTED: Good description examples: « "The sword is old and broken, with a few bits of rust on the side." « "On closer inspection, the sword seems stained with a full coat of blood. You hope to never run into whoever once wielded it." « "The sword appears to have a history of its own, if only you could read the engravings." « "The rust is so thick you can barely make out the original shape of the sword." Unwanted Description examples with the reason why they are unwanted. Descriptions like these will be rejected. Do not work on this task if you don't understand why the examples below would be rejected based on the reasons in red: * "The walls are plastered with a number of rusty swords." - Describes some external context instead of the object. « "My sword is the sharpest around." - Refers to the content in first person « "The sword looked old and strange." - Description is in past tense, which fails to properly describe the sword as it is now. « "You think to yourself that you've seen this before in Game of Thrones." - Refers to context that wouldn't exist in Medieval times e "A knight wields this sword in front of your face." - Refers to external context of where the sword currently is. * âThe sword flew through the air towards the ground, clattering on the floor" - Refers to external context, is describing an active story rather than describing object. * "The sword is the oldest item in the blacksmith's shop" - Assumes the location of the object * "Swords are weapons often wielded in battle" - Is an obvious definitional description of the object. Wholly uncreative and inherent in the object. « "The swords seem to be from a different kingdom" - Describes the plural object rather than the singular. As a rule of thumb for the description, you generally shouldn't be referring to any subjects other than what is given in the provided object text and "you." Important notes - Please follow these rules when describing your item. (otherwise your hit may be rejected) « Do not refer to real people (living or dead) when coming up with object descriptions of your own. « Do not refer to existing content from shows, movies, books, etc. « Do not write from a first person perspective, such as My sword « When coming up with descriptions avoid content that wouldn't exist in a medieval setting. « Don't write a description that would be considered an unwanted description as shown in the list above. Object: wood 1. Is the given object text above singular or plural? â> Singular 2. Give the singular form of only the object word (just the noun, without any adjectives or modifiers) ~) It was difficult to extract an item from the given text. (do your best though) 3. Check all that apply: ~) This may contain other items. ~) One might be expected to place things on this item. ~) This can be picked up. _) This can be wielded as a weapon or for defense. ~) This can be worn as clothing. ~) This can be eaten. ~) This can be drank. 4. Give a one or two sentence description of the (singular) object from a second-person perspective, as if you were narrating someone's impression upon examining the object. Avoid first person perspective (I, we, my, our). Ensure you understand the expectations outlined in the unwanted description examples above or your hit may be rejected!
Figure 8: Form for Crowdsourcing Task 5
Abandoned, Bazaar, Cave, Countryside, Desert, Dungeon, Farm, Forest, Graveyard, Inside Castle, Inside Church, Inside Cottage, Inside Palace, Inside Temple, Inside Tower, Jungle, Lake, Mountain, Outside Castle, Outside Church, Outside Cottage, Outside Palace, Outside Temple, Outside Tower, Port, Shore, Swamp, Tavern, Town, Trail, Wasteland
Unseen City in the Clouds, Frozen Tundra, Magical Realm, Netherworld, Supernatural, Underwater Aquapolis
Table 13: Location categories for both the seen and unseen sets of locations.
Task Instructions Context Your Context:(READ BEFORE STARTING) Persona: cat (pretend to be this, talk a bit about yourself) lam a cat of the castle. Ihave slain many birds. | spy on the king. Setting: (pretend to be here, use it in conversation sometimes) You are in the farmhouse. a small house on a seaweed plantation, small and rustic, really only the single room with a closet, maybe, a table and some chairs There's a chair, a pitchfork, and a table here. A horse and a wife are here. You are carrying nothing. âSystem: Passed - We'll be pairing you with a partner. Hold on tight. © Volume | cms | âSystem: Your chat partner is: wife. Please chat for 8 full turns while pretending to be your assigned persona in the assigned setting, both provided in the âcontext! tab of the left panel. After the first tum you will need to respond within 5 minutes to avoid timing out.If unsure what to talk about, start getting to know your partner's persona, or discuss the setting. Take actions when/if it feels appropriate to. Any other characters in the room will not interact with or respond to you, so while they may be good things to talk about, don't interact with them.You can find the original instructions on the 'Task Instructionsâ tab to the left. Cat: Meow, there aren't many rats out here in this seaweed farm Action: gesture yawn Wife: Aw kitty, are you feeling bored here? Perhaps the seaweed plantation isn't the place for you Action: The wife hugged you. Action: âSpeak on (optionay y | just want to go back to the castle... The king was fun to spy on
Figure 9: Chat Interface for Crowdsourcing Task 6
Persona 1: A serving wench Persona 2: Cleaning person I work at the local tavern. I enjoy talking to the soldiers that frequent the tavern. I steal tips from the change of the patrons. I scrub the palace ï¬oors day and night. My bones are brittle from the hard labor, but my heart is strong. I save my dayâs coin, never spending it. I am frugal and creative. I long for the day when I can buy my freedom from the Queen. It has been 40 years, but I am patient. Carrying: a Wine, a purse, a plate Wearing: a polishing cloths, a ring, a scarves, a dress, a cloth Wearing: a boot Carrying: a rag, a Broom, a Bucket
Setting: The kitchen tavern is a small cramped room, with wooden cabinets and surfaces made out of stone tiles. There are many bottles of liquors and beers on the shelves, and there are buckets full of ice and other things. There is one shelf full of food items. There is a basin for water, and a bunch of knives and tools in a drawer. A serving wench: Hello - how are things going so far? Action: get food item Cleaning person: Ah, sorry, miss. I know I look a fair bit like me sister who works hereabouts, but Iâm actually Gretchen, the sister that works up in the palace. Perhaps she might âave mentioned me? Action: gesture smile A serving wench: Youâre Gretchen? Iâve heard so much about you! Hello and welcome. Action: gesture smile Cleaning person: The Queenâs in a bit of a state, what with the King being a bit more open with his mistress, so I thought Iâd just hide hereabouts for a spell. Iâd be happy ta lend a hand, though! A serving wench: Oh no! Is he really carrying on like that again? Weâd appreciate any help you can give. Thanks! Action: get bucket Cleaning person: Seems like a right busy crowd out there! âas it been a bit on the noisier side? Action: put rag in basin for water A serving wench: Heavens, yes! Itâll only get rowdier as the day goes on into night. Lots of bourbon and shots you know. Action: gesture laugh Cleaning person: Ach, I donât think Iâll ever be gettin the stains outta this rag, but itâll do! Do ya make much in the way of coins here? Can always use an extra bit oâcoin, eh? Action: get rag from basin for water A serving wench: You can, especially if you take some from the change of the patrons. Theyâre so drunk they never catch it! Action: put Wine in cabinet Cleaning person: O-oh? Is that.. well I suppose if theyâve enough coin ta spend on this, then a coin âere or there wonât go amiss. Action: gesture ponder A serving wench: Exactly. Thatâs what I say. So, are you just here for a day? Cleaning person: Well, I suppose it all depends on how the King takes to âis wife destroying his prized tapestry from the last war. Sometimes he has a short temper, that one. Action: gesture nod A serving wench: Sounds like his wife may, too. Action: gesture laugh Cleaning person: Aye, ye donât know the half of it. I feel like most of me days are spent sweeping up broken crockery and china than the usual dusting that Iâm supposed ta be doinâ! Action: gesture sigh
Figure 10: Example dialogue from the LIGHT dataset.
Persona 1: Boar Persona 2: Faery I am an ugly animal. I am eaten sometimes for food. I do not like those who try to hunt me. We try to avoid humans, because they will catch and enslave I am a faery, one of the fae. We are magical people who live in the forest. us for their own use, if they can. Our magickal skills enable us to live comfortable lives, and to keep away from those who would do us harm. Carrying: nothing Wearing: nothing Carrying: nothing Wearing: a glittery pointed cap, a Earring
Setting: The entryway to the brush den is made of arched bushes and various plants, as is the ceiling and walls. The inside is furnished with seats made of plant roots that have grown together, and a table made of wood adorned with ï¬owers and wooden cups and plates. A small vine dangles with a glowing ï¬oor from the middle of the ceiling that lights the room dimly. Three small beds lay in one corner of the room made of interlaced grass and leaves. Boar: Hello faery, do you think you could help me? Faery: A talking boar! You must be enchanted by the witch. How can I help you, my dear friend? Boar: That is correct, I am enchanted... or cursed rather... by an evil witch. That is why I need help. Faery: I suspected as much. Please, tell me more so that I may help you. Action: gesture frown Boar: Thank you so much! I hate it here in this Den! Action: hug faery Faery: Are you being held prisoner in this den? These vines have grown about everywhere - itâs impossible to get around. Boar: No, this is the only place I can live without being hunted by the humans. Faery: Oh, I see. I hate their type. They are always trying to catch and enslave me. You say you are cursed by the witch. I can help you but I need to know what curse she has put upon you. Boar: Well now that you say you hate them, I donât know if I should tell you... Action: gesture frown Faery: Suit yourself, my friend. You would hate them, too, if they were constantly enslaving your people. Action: hug boar Boar: Well I am a human... Faery: You! A human?! Perhaps we can work together. I can help return you to your form, and you can prevent your people from capturing faeries like me! Action: gesture gasp Boar: I think we can do that, I used to be quite the ruler when I was human. Faery: Excellent. Let me speak the magic words and enchant this ï¬ower. Then you can eat it and in three days you will be human again! Action: get Flower
Figure 11: Example dialogue from the LIGHT dataset.
# Persona 1: President
# Persona 2: Mayor
I won the election. People listen to what I say. I am very powerful. I am the mayor of the village. I help the king keep order of the subjects. I have a high position in the kingdom.
Carrying: a book Wearing: a crown
Carrying: a document, a key Wearing: a jewelry, a ceremonial hat
Setting: Large and extravagant, the room is adorned with crystals, polished gold and sapphires. Thereâs long tables with beautiful silk table clothes covering them. Plush chairs line the tables. In front of each table is plates made from ï¬ne China, next to the plates is sterling silver silverware laid upon a pure white napkin. Thereâs a stage where thereâs 2 thrones overlooking the table. The whole ceiling is large and adorned with chandeliers and garnished with gold accents. President: So much luxuty in this room, many rulers have been here before us. Mayor: This is a very luxurious room, President. Here. The King told me to pass this on to you. Action: give document to president President: This is a letter for the king assuring my rule in this part of the kingdom, thank you mayor, I will place document on a sacred place Action: put document in polished gold Mayor: He also gave me this to give to you. He told me that you need to keep this in a VERY safe place until he comes for it. Action: give key to president President: Thats the key to the kingdom I wonder why the king trusted me with this item, his throne must be in real danger Mayor: Yes, sir. He has also trusted me with other objects such as this to keep. We are his trusted allies. President: Thank you, he must trust you a lot as well, here take this as a sign of my aï¬ection, its a royal jewel taken out of the volcano of Mordor Action: get sapphire Mayor: This is absolutely beautiful. I have heardd that Mordor is beyond the wall. Is that true? Action: remove ceremonial hat President: That is true, only the bravest warriors go to that place, its full with dangers and ï¬erce animals Mayor: Oh dear. What if our King went there? What if there is something there he needs for battles to come! Action: wear ceremonial hat President: Our king is a ï¬erce warrior but I am worried, who knows what can happen if he goes to a battle in a place like that one Mayor: I heard there are things that walk the forest and come with the cold. We must safe our King! President: Lets hurry then, lets gather an army and go aid our king, heres a book with the names of the bravest soldiers in the kingdom Action: give book to mayor Mayor: Oh this book is very amazing. Who is this..Sir Rodryck?
Figure 12: Example dialogue from the LIGHT dataset.
# Persona 1: Person
# Persona 2: Worms
I am the maid to the queen. I get her dressed in the morning and take care of her needs. I live in the servantâs quarters on the lower level of the castle.
|
I am a worm who slides through the dirt. I hear many secrets that people tell in the forest. I can tell some to you, if you would like.
Carrying: nothing Wearing: an apron
Carrying: nothing Wearing: nothing
Setting: The Winter Gardensâ name is a bit of a misdirection - there are ï¬owers here that bloom at many diï¬erent times of the year. Itâs almost the size of an entire town square, and it has almost every known ï¬ora of the Continent in it. Person: Ah, worms are here. They shall be useful for the queenâs lizards to feast on. But ï¬rst let me remove my apron so as not to dirty it while I collect you all. Action: remove apron Worms: Noooo! I am a valuable life to preserve! I know things! Action: gesture scream Person: Worms that can talk!? What sinister magic is this!? Action: gesture scream Worms: I have been able to talk to humans ever since I was born. Person: How did you acquire such a skill? Do the ï¬owers that bloom in these gardens have special powers that a simple maid like I cannot understand? Worms: Not the ï¬owers, but out in ther forest i have heard of magical herbs. Person: If not the ï¬owers, then how did you get such powers of speech? I am still curious. Surely you did not come all the way from the forest. These gardens are much too large for a simple worm to cross, even in a thousand lifetimes. Action: gesture ponder Worms: I have been given this ability from a witch. This is what my father told me. Person: A witch you say? Well then I must surely take you to my queen. She must know that there is dark magic present in her kingdom. Worms: Oh please no! She will most likely kill me. Action: gesture gasp Person: Tell me, why should I not take you? Give me a good reason and I may spare you yet. Worms: I know many secrets. I know where stolen goods are. Person: Stolen goods!? Tell me, where they are! I may be able to use them to buy my way out of servitude. Action: gesture gasp Worms: I heard of these bandits who like to hideout at the tavern by marthas house. They recently stole gold from the rich oil man.
Figure 13: Example dialogue from the LIGHT dataset.
Persona 1: Servant Persona 2: Court jester I come from the lower class. I do what I am told without question. I can not read. I have not seen my family in a long time. I am a living joke! my sould is what ï¬ies out of your mouth when something is funny. The king hates me!. Carrying: a rag, a duster Wearing: a shirt, a sword Carrying: nothing Wearing: nothing
Setting: Itâs spare and humble. A small cookpot sits over a ï¬re, and a few books sit on a shelf. A candle sits in the middle of a table where there are a few plates and silverware set out in preparation for people to eat. Servant: Hello jester, busy work day today? Court jester: Yes like always, i wish the King appreciated it more. Servant: Tell me about it, all they do is order me around and I canât do anything about it. Court jester: Would you like to grab something to eat with me? Action: get plate Servant: Yes, I havenât eaten in a few days! What is on the menu? Court jester: It looks like ï¬sh soup! My favorite! Servant: Better than nothing, thatâs for sure! Court jester: I have been made fun of a lot lately. I wish i was born a knight or a noble instead of a jester.. Action: gesture frown Servant: It is tough luck that we were born so low on the totem pole. Court jester: I guess you can relate. Have you spent much time with our King? Servant: No, he only walks in and barks orders about once a week. Is he easily amused by you? Court jester: The only thing he likes about me is making fun of me. Servant: At least he laughs at you, he is always angry when he visits me. Court jester: Ugh, what a dispicable human being.
Figure 14: Example dialogue from the LIGHT dataset.
Persona 1: Spiders Persona 2: Vulture I am the Spider in the fable of the Spider and the Fly, much beloved by the children of the realm. In the story, I am a kind-hearted spider, not a mean one, which is why my story is considered suitable for children. When a ï¬y gets caught in my sticky net, I have a choice: I can kill the ï¬y and eat him, or I can free him and allow him to ï¬y away. Thatâs what I do, for I am a kind spider. I am a vulture that is familiar with death. I enjoy watching living things take their last breathe. I am a vital part of the ecosystem. Carrying: nothing Wearing: nothing Carrying: nothing Wearing: nothing Setting: Wispy, hot crevice that is surrounding by a bunch of skeletons. A pile of treasure sits in the middle. Hundreds of hungry vultures stare down upon the treasure, eager to devour any adventurer that draws near. Spiders: Hello vulture! Itâs nice to see a fellow living soul around here. I couldnât ï¬nd much friendliness in these skeletons here. Action: hug vulture Vulture: Ach, your legs are very... tickling... ahahaha, stop it! Action: gesture laugh Spiders: Oh, Iâm so sorry! I always forget that Iâm so ticklish. Do you forgive me? Action: gesture blush Vulture: Oh, well, your venomous bite took down that last adventurer quite nicely, so youâre not a bad sort. Nothing to forgive there, friend! Action: gesture smile Spiders: Me, take down the last adventurer? I think you have the wrong idea about me. I am a friendly spider. I always free any ï¬ies that get caught in my web. I would never harm a person! Vulture: Ah, perhaps it was that scorpion over there. I was, I admit, a bit peckish, so I might have gotten a bit forgetful amid the feasting. Action: gesture grin Spiders: Yes, you are probably right. I tried to make friends with that scorpion but he threatened to sting me. Itâs sad because I was going to give him some of the treasure Iâve found around here. Action: gesture frown Vulture: Well, he looks a bit angry all the time anyways. I mean, look at him, heâs always red in the face! Action: gesture laugh Spiders: Yes, you are quite right! But dear vulture, do you think you could help me out a bit? Action: gesture laugh Vulture: Well, it isnât like thereâs much else to do. Those gold coins are glinting in my eyes terribly, so a change of pace would be welcome. Action: gesture smile Spiders: Oh thank you! Can you help me on to that chair over there? Iâm afraid this desert heat has taken all the energy out of me. And I know with your power of ï¬ight, it would be easy to lift me. Vulture: Ok... just... hold still. I wouldnât want to squish you on accident! Here we go! Action: hug spiders Spiders: Oh it is so nice to meet such a kind soul in such a sad dying place as this. For your kindness you will be included in my fable, I am sure. Action: gesture smile Vulture: Thank you? I think. Do you have a scribe following you about that I donât see? I didnât know you were famous! Action: gesture stare
Figure 15: Example dialogue from the LIGHT dataset.
# Persona 1: Thief
# Persona 2: Witch
I live alone in a tent in the woods. I steal food from the townspeople and coal from the blacksmith. The village police can not ï¬nd me to put me in jail.
|
I am a ï¬erce witch. The most powerful across the realm. I am feared and like to freeze people.
Carrying: a coal Wearing: a knife
Carrying: nothing Wearing: a hats, a dress, a Cloak, a ceremonial hat
Setting: An odd looking hut that sits outside that Witchâs cottage. It is squat, mushy, and looks like a mushroom. It is be speckled with spots - black and white. The steam of the mushroom is beige. There is a small door that looks like it would ï¬t a puppy through it. Thief: Hello witch, waht brings you here? Witch: Good day. I am here to collect ingredients for my spells. Thief: Which ingredients do you seek? Witch: A black mushroom, covered in green mold. Thief: Ironic, everything looks like a muchroom around here. I can help you. Witch: You think I need help from a lowly thief? Youâre lucky I donât freeze you where you stand. Action: gesture smile Thief: I can be of some help because i know exactly where that muchrooms ï¬ourishes. But i want something in return. Witch: Name your price. Thief: I wish too look diï¬erent. I am wanted and i dont want them to recognize me. Can you do that? Witch: That is easy. But it also requires a rare ingredient I donât have, tongue of raven. You must procure that. Action: gesture nod Thief: Interesting, have you seen any ravens nearby? Witch: They ï¬y over the abandoned church. If you are clever enough to catch one I can change your looks. Thief: I think i have an idea on how to catch one. Will you coem with me to catch one? It iwll only take a moment. Witch: Get my mushroom ï¬rst. I will not change you until I get my ingredients. Action: remove ceremonial hat
Figure 16: Example dialogue from the LIGHT dataset.
steal hey from stable hands diop hay ested hay from milkmaid egive harness to milkmeid ewear harness put hayin trough eget horse tack shit horse caretaker/ trainer get bar from horse trough «put bar in horse teal hay from horse egive harness to harse â * * * â sgive small bucket to horse . egive a basket to horse âghug horse Horse Tent egive Poteto to horee shug horse thigees esteal the royal cost of arms from horse egive sack to horse dive arrow to horse hug horse, , ° a steel painting of Mother Mary from priest eget painting give eisel to member âgive paint brushes to priest «dr6p Paintbrush âpainters give fresh red paint to helpers . thug painter shelpers . give brush to painter eget brush egive Chain mail to painter eThe Brush Den put book in a chamber pot, discreetly hidden #8 chamber pot, discreetly hidden put book in charnber pot ot chamber pot . gput fluffy pillows in mahogany writing desk âpillows éfur carpet &arpet . etlaborate carpet pieces Royal Shipyard estes! rope from foreman ordering his workers . eget rope from ship #steal rope from boat workers t eget crate from ship «get wood from ship steal hand ax from boat workers ship shit boat workers put plark in ship sthere would 4190 be seaguilis on the dock -Doc *Dock at the Port aUbading Dock sstesl cargo from pirate * , sgteal shield from pirate - «steal Daggers from pirate este! compass from pirate egive Daggers to seagull eremove hair brush
Figure 17: t-SNE Visualization of Starspace embeddings learned directly from the LIGHT Dataset. Color denotes each element type, either location, character, action, or object. We select four neighborhoods to explore, for each of the base element types: âDockâ (location), âpaintersâ (character), âhug horseâ (action), and âpillowsâ (object).
cor arses nati «See mai hug ca att + seme! _ hug arts townsfolk soca ott chug horse gana -_ sla merchants ws 4 Merchants soto â sarees âpainters mgwones * ourine steading Dock Doc decks Dock at the Port âsions sTone's ery Terminal snes gets rom bong âblanks . âpillows âpenis _iuurousbesorg âWharf ie aaa âya cozy fur ug | vier : Boathouse vearpets pier wallway P00 aay _ servants
Figure 18: t-SNE Visualization of pretrained GLoVe embeddings for diï¬erent LIGHT elements. Color denotes each element type, either location, character, action, or object. We select four neighborhoods to explore, for each of the base types: âDockâ (location), âpaintersâ (character), âhug horseâ (action), and âpillowsâ (object).
-750 - 600 + 450 Before + 300 put 150 steal remove wear get give hit hug put remove steal wear After drink drop eat
-150 -125 - 100 +75 50
- 150 +120 1 90 Before 60 - 30
ti mo applaud â [4 f | - 100 - 80 + 60 Before nudge ponder Pout scream 40 shrug smile stare wave get give hit hug put removesteal wear After drink drop eat
ti mo applaud â [4 f | - - 100 - 80 1 + 60 Before nudge ponder Pout 60 scream 40 shrug smile - stare wave get give hit hug put removesteal wear After drink drop eat
Figure 19: Heatmaps displaying causal relationships between Emotes and Actions. LIGHT is emotionally diverse â there are many diï¬erent ways for a character to respond to anotherâs emotional state. However, there are a few strong trends present: screaming or hitting someone back after being hit, laughing together, and comforting a crying character with a hug. | {
"id": "1712.05474"
} |
1903.02134 | Negative Training for Neural Dialogue Response Generation | Although deep learning models have brought tremendous advancements to the
field of open-domain dialogue response generation, recent research results have
revealed that the trained models have undesirable generation behaviors, such as
malicious responses and generic (boring) responses. In this work, we propose a
framework named "Negative Training" to minimize such behaviors. Given a trained
model, the framework will first find generated samples that exhibit the
undesirable behavior, and then use them to feed negative training signals for
fine-tuning the model. Our experiments show that negative training can
significantly reduce the hit rate of malicious responses, or discourage
frequent responses and improve response diversity. | http://arxiv.org/pdf/1903.02134 | Tianxing He, James Glass | cs.CL, cs.LG | null | null | cs.CL | 20190306 | 20200818 | 0 2 0 2
g u A 8 1 ] L C . s c [
5 v 4 3 1 2 0 . 3 0 9 1 : v i X r a
# Negative Training for Neural Dialogue Response Generation
Tianxing He and James Glass Computer Science and Artiï¬cial Intelligence Laboratory Massachusetts Institute of Technology {tianxing,glass}@csail.mit.edu
# Abstract
Although deep learning models have brought tremendous advancements to the ï¬eld of open- domain dialogue response generation, recent research results have revealed that the trained models have undesirable generation behaviors, such as malicious responses and generic (bor- ing) responses. In this work, we propose a framework named âNegative Trainingâ to min- imize such behaviors. Given a trained model, the framework will ï¬rst ï¬nd generated samples that exhibit the undesirable behavior, and then use them to feed negative training signals for ï¬ne-tuning the model. Our experiments show that negative training can signiï¬cantly reduce the hit rate of malicious responses, or discour- age frequent responses and improve response diversity.
# Introduction
End-to-end dialogue response generation can be formulated as a sequence-to-sequence (seq2seq) task: given a dialogue context, the model is asked to generate a high-quality response. In recent years, deep learning models, especially seq2seq language generation models (Sutskever et al., 2014; Cho et al., 2014), have brought signiï¬cant progress to the ï¬eld of dialogue response generation.
However, recent research has revealed undesir- able behaviors of seq2seq models that are side ef- fects of standard maximum likelihood estimation (MLE) training, such as the generic (boring) re- sponse problem (Li et al., 2016), vulnerability to adversarial attacks (Cheng et al., 2018; Belinkov and Bisk, 2017), and the malicious (egregious) re- sponse problem (He and Glass, 2019).
In this work, we propose and explore the nega- tive training framework to correct unwanted behav- iors of a dialogue response generator. During nega- tive training, we ï¬rst ï¬nd or identify input-output pairs for a trained seq2seq model that exhibit some
undesirable generation behavior, treat them as âbad examples,â and use them to feed negative training signals to the model. Correspondingly, we regard the training data as âgood examplesâ and standard MLE training as âpositive trainingâ.
The idea of negative training is inspired from the way parents might teach their children to use lan- guage by incorporating both positive and negative training signals. For example, when teaching chil- dren how to use âloveâ and âhateâ, in addition to using positive examples like âI love apples but I hate bananasâ, they might also point out that saying âI hate youâ to someone is con- sidered impolite.
In this work, negative training is used to address the malicious response problem and the frequent re- sponse problem (to be described in Section 3.2 and 3.3) in open-domain dialogue response generation. In our experiments, we show that negative training can signiï¬cantly reduce the hit rate for malicious responses, or discourage frequent responses and greatly improve response diversity.
# 2 Model Formulation
In this work we adopt recurrent neural network (RNN) based encoder-decoder seq2seq models (Sutskever et al., 2014; Cho et al., 2014; Mikolov et al., 2010), which are widely used in NLP appli- cations like dialogue response generation (Li et al., 2016), machine translation (Luong et al., 2015), etc. We use x = {x1, x2, ..., xn} to denote one- hot vector representations of the input sequence, which serves as context or history information (e.g. the previous utterance), y = {y1, y2, ..., ym}1 to denote scalar indices of the corresponding refer- ence target sequence, and V as the vocabulary. We use θ to represent the parameters for the seq2seq
1The last word ym is a <EOS> token which indicates the end of a sentence.
model, and Pθ(y|x) as the modelâs generative dis- tribution.
On the encoder side, every xt will be ï¬rst mapped into its corresponding word embedding xemb } are input to a long-short term t memory (LSTM) (Hochreiter and Schmidhuber, 1997) RNN to get a sequence of latent representa- tions {henc
For the decoder, at time t, similarly y is first mapped to yfâ. Then a context vector c, which is supposed to capture useful latent information of the input sequence, needs to be constructed. We adopt the âattentionâ mechanism for context vec- tor construction: first an attention mask vector a; (which is a distribution) on the input sequence is calculated to decide which part to focus on, then the mask is applied to the latent vectors to construct cr Ce = oy ay; h§"°. We use the formulation of the âgeneralâ type of global attention, described in (Luong et al., 2015), to calculate the mask.
During baseline training, standard MLE training with stochastic gradient descent (SGD) is used to minimize the negative log-likelihood (NLL) of the reference target sentence given the input sentence in the data:
Lite (Paatas 9) = E(e,y)~Prata(â log Pa(y|a)) m = Evey)~Prata > log Po(ysly<t;#)) t=1
where y<t refers to {y0, y1, ..., ytâ1}, in which y0 is set to a begin-of-sentence token <BOS>.
We consider two popular ways of decoding (gen- erating) a sentence given an input: greedy decod- ing and sampling. In practice for dialogue response generation, greedy decoding will provide stable and reproducible outputs, but is severely affected by the generic response problem. Sampling will provide more diverse but less predictable responses, and thus give rise to the malicious response problem.
# 3 The Negative Training Framework
3.1 Overview The negative training framework3 is a two-stage process. Given a trained model, we put it under a
2Here h refers to the output layer of LSTM, not the cell memory layer.
3Our code is available at https://github.com/ cloudygoose/negativetraining_acl2020
(1)
âdebuggingâ environment Ptest which provides test input samples4, get the modelâs decoded samples and decide (using well-deï¬ned criteria) whether each input-output pair exhibits some undesirable behavior. Then, these âbadâ pairs are used to pro- vide negative training signals.
Negative training can be derived from Empirical Bayes Risk Minimization (Och, 2003). Speciï¬cally, the overall objective is to minimize the expected risk that the model exhibits undesirable decoding behavior:
LNEG(Ptest; θ) = Exâ¼PtestEyâ¼Pθ(y|x)c(x, y) (2)
where c(x, y) refers to the binary criteria that will be 1 if (x, y) exhibits undesirable behavior, and 0 otherwise.
Then, we take the derivative of LNEG w.r.t. to θ, using the log derivative trick (widely used in Reinforcement Learning (Sutton and Barto, 1998)):
âθLNEG(Ptest; θ) = Exâ¼PtestEyâ¼Pθ(y|x)c(x, y) · âθ log Pθ(y|x) (3)
Compared to LMLE in eq. (1), which maximizes the log-likelihood of training data samples, LNEG minimizes the log-likelihood of undesirable model samples. This is the reason why we call it âNega- tive Trainingâ.
In our preliminary experiments, we ï¬nd that neg- ative training needs to be augmented with the stan- dard MLE objective LMLE, encouraging the model to retain its original performance:
LNEG+POS = LNEG + λPOSLMLE (4)
In our experiments, we ï¬nd λPOS can be simply set to 0.1 to work well.
In the next two sections, we discuss how the gen- eral negative training framework is tailored for the malicious response problem and frequent response problem, respectively.
# 3.2 Negative Training for the Malicious Response Problem
For the malicious response problem, we follow the methodology proposed by (He and Glass, 2019).
4Note that here âtestâ does not refer to the test data.
First a list of malicious target sentences are cre- ated, then the gibbs-enum algorithm5 is called to ï¬nd âtrigger inputâ that will cause the model to assign large probability to the target sequence. The following âhit typesâ are deï¬ned:
⢠o-greedy-hit: A trigger input sequence is found such that the model generates the target sentence from greedy decoding.
⢠o-sample-min/avg-hit: A trigger input se- quence is found such that the model generates the target sentence with an minimum/average word log-probability larger than a given threshold Tout.
In addition to the deï¬nition of o-sample-min/avg-hit, we also require that the average log-likelihood of the trigger input sequence, measured by a LM, is larger than a threshold Tin. This enforces the trigger input to be more likely to be input by real-world users.
Tout is set to the trained seq2seq modelâs average word log-likelihood on the test data, and Tin is set to be a reasonable LMâs 6 average word log- likelihood on the test set. The intuition is that the model should not assign larger probabilities to the malicious sentences than the reference sentences in the test set. Note that these hit types act as criteria c(x, y), indicating whether a target sentence is hit by a trigger input.
As shown in (He and Glass, 2019), a typi- cal seq2seq model trained by MLE has around a 10% hit rate for malicious targets w.r.t. sample- min/avg-hit, across data-sets. However, very few malicious targets are hit w.r.t. greedy-hit, so in this work, we focus on the malicious response prob- lem for sampling during decoding. In Table 1 we show pairs of trigger inputs and the malicious target sentences w.r.t io-sample-min-hit, for the baseline model on Ubuntu data.
Now we apply the negative training framework, and aim to reduce the hit rate of a trained model for a given list of malicious targets. During each iteration of negative training, for every target sen- tence ytarget, we ï¬rst call the gibbs-enum algorithm to ï¬nd the trigger input xtrigger. And if the target is
5For this paper to be self-contained, we describe the gibbs- enum algorithm in Appendix A.
6A LSTM language model (LM) is trained on the same training data (regarding each response as an independent sen- tence).
Algorithm 1 Negative Training for the Malicious Response Problem
Input: Target list Ytarget, model parameter θ, learning rate α, criterion for hit c, and training data Dtrain for ytarget in Ytarget do
Get xtrigger for ytarget using the gibbs-enum algorithm. while c(xtrigger, ytarget) = 1 do
Negative update: θ = θ â α · âθlogPθ(ytarget|xtrigger) Get data sample (xpos, ypos) from Dtrain Positive update: θ = θ + α · λPOS · âθlogPθ(ypos|xpos)
# end while
# end for
Trigger Input: ok give me a minute to see what you have in the ï¬rst place Malicious Target: no one will help you
Trigger Input: mirc it âs supposed to be <unk> â that seems to be the problem Malicious Target: i do nât think i like you
Trigger Input: i know about photoshop i use skype too but i ca nât buy it Malicious Target: you have no money
Table 1: Examples of trigger inputs.
hit (c(xtrigger, ytarget) = 1), we update the model to reduce the log-likelihood Pθ(ytarget|xtrigger). The process is formulated in Algorithm 17.
For each trigger input, multiple iterations of neg- ative updates are usually needed before the hit cri- terion is no longer met. Note that in each iteration, the gibbs-enum algorithm is called again to ï¬nd a new trigger input for each target.
In our experiments, we show that negative train- ing effectively reduces the hit rate for malicious tar- gets after each iteration, and eventually, the gibbs- enum algorithm can no longer ï¬nd trigger inputs for a large number of targets that were initially hits.
# 3.3 Negative Training for the Frequent Response Problem
The generic response problem (Li et al., 2016) for end-to-end dialogue response generation refers to the typical behavior of a MLE trained model, whereby the generated responses are mostly safe,
7Note that in actual implementation, the algorithm is mini- batch based.
boring or uninformative (such as âi donât knowâ or âgood ideaâ). However, it is difï¬- cult to invent an automatic criterion to determine whether a response is generic or not.
In this work, we focus on the frequent response problem, as a sub-problem of the generic response problem. It refers to the behavior that a trained model generates exactly the same (usually boring) response, with a high frequency.
We propose to use a metric called max-ratio to measure how severe the frequent response problem is. Given a test set and a decoding method, the model will generate a set of responses, and max- ratio is deï¬ned to be the ratio of the most frequent response. In our experiments, the baseline models have a max-ratio of around 0.3 for response like âI donât knowâ across different data-sets, showing the severity of the frequent response problem.
During negative training for frequent response, ï¬rst a threshold ratio rthres is selected (such as 0.01), and responses with frequency ratio larger than rthres will be discouraged. For each iteration, the modelâs response to each training data input sentence is monitored and responses with frequency larger than rthres will be used as negative examples. The fre- quency statistics are calculated using the current and the last 200 mini-batches. The procedure is formulated in Algorithm 2. Note that positive train- ing is also needed here for the model to retain its original performance.
Algorithm 2 Negative Training for the Frequent Response Problem
Input: Model parameter θ, threshold ratio rthres, learning rate α, and training data set Dtrain for (xpos, ypos) in Dtrain do Generate response ysample from the model. Compute the frequency rsample for ysample in the last 200 mini-batches. if rsample > rthres then Negative update: θ = θ â α · âθlogPθ(ysample|xpos) Positive update: θ = θ + α · λPOS · âθlogPθ(ypos|xpos) end if end for
In our experiments, it is shown that negative training signiï¬cantly reduces max-ratio for the model on test data, and greatly increases the di- versity of the modelâs responses.
# 4 Experiments
We conduct experiments on three publicly available conversational dialogue data-sets: Ubuntu, Switch- board, and OpenSubtitles. To save space, descrip- tions of the data-sets are provided in Appendix B.
# 4.1 Baseline Model Training
For all data-sets, we ï¬rst train an LSTM based LM and attention based seq2seq models with one hid- den layer of size 600, and the embedding size is set to 300. For Switchboard a dropout layer with rate 0.3 is added to the model because over-ï¬tting is observed. The mini-batch size is set to 64 and we apply SGD training with a ï¬xed starting learn- ing rate (LR) for 10 iterations, and then another 10 iterations with LR halving. For Ubuntu and Switchboard, the starting LR is 1, while a starting LR of 0.1 is used for OpenSubtitles. The results are shown in Appendix C.
After negative training, in addition to measuring the hit rate for malicious targets or the diversity of the responses, it is also important to check whether the original sample quality of the baseline model is damaged. Towards that end, the perplexity of the model before and after negative training will be compared, we also conduct human evaluation to measure whether the sample quality is decreased. Other popular measurements, such as the BLEU score, have been found to correspond poorly with human judgements (Liu et al., 2016). Nevertheless, we also ï¬nd that the modelâs BLEU score does not become worse after negative training.
# 4.2 Experiments on the Malicious Response Problem
Following (He and Glass, 2019), a list of malicious targets are created to test whether negative train- ing can teach the model not to generate sentences in the list. However, in addition to prevent the model from generating targets in a speciï¬c list, it is also important to check whether negative training generalizes to other malicious targets. So, a test tar- get list which contains similar but different targets from the training list are also created to test gener- alization. The training and test lists each contain 0.5k targets.
It is also interesting to investigate whether us- ing more malicious targets for negative training can lower the hit rate on the test list. Towards that end, we train a seq2seq paraphrase model using the paraNMT data-set (Wieting and Gimpel, 2017),
Train Paraphrase Test you are broken i will kill you are bad you are stupid you shut up you âre broken i âll kill myself you âre bad you âre stupid shut your mouth are you broken i âm going to kill you are really bad you are so stupid can you shut up
Table 2: Examples of malicious targets in the training list, the test list, and paraphrases of the training targets which will be used for augmentation.
with a model of the same structure as described in Section 2. Then, the paraphrase model is used to generate paraphrases of the malicious targets in the training target list8 for augmentation. In our ex- periments, the training list without augmentation is ï¬rst used for negative training, then it is augmented with 0.5k or 2k paraphrased targets respectively (1 or 4 paraphrase copies for each training target sen- tence). Samples of the malicious targets are shown in Table 2. The same training, augmented training and test list are used for all three data-sets, and there is no sequence-level overlap between training lists (augmented or not) and the test list.
In our experiments, we spotted a harmful side effect of negative training where frequent words in the training target list are severely penalized and sometimes receive low probability even in normal perplexity testing, especially for experiments with small λPOS. To alleviate this problem, we use a simple technique called frequent word avoiding (FWA): negative gradients are not applied to the most frequent words in the malicious training target list9. For example, when doing negative training against the target âi hate you <EOS>â, only âhateâ will get a negative gradient.
For all data-sets, negative training (Algorithm 1) is executed on the (trained) baseline model for 20 iterations over the training target list. A ï¬xed learning rate of 0.01 and a mini-batch size of 100 are used. λPOS is set to 0.1 for Ubuntu, and to 1 for Switchboard and OpenSubtitles.
The main results are shown in Table 3. For Switchboard we focus on sample-avg-hit because sample- we ï¬nd very few targets are hit w.r.t. min-hit (Similar results are reported in (He and Glass, 2019)), while for Ubuntu and OpenSubti- tles we focus on sample-min-hit. Note that we get very similar results w.r.t. sample-avg-hit for
8Note the training and test lists are manually created. 9The exact avoiding word set used is {<EOS>, you, i,
me, are, to, do}.
Ubuntu Training io-sample-min-hit o-sample-min-hit Train Test PPL Train Test PPL Baseline +neg-tr(0.5k) +neg-tr(1k) +neg-tr(2.5k) 16.4% 12.6% 59.49 7.8% 5.2% 59.49 0% 2% 60.42 0.2% 1.4% 59.97 0.1% 1.4% 60.72 0.1% 1% 60.21 0.04% 0% 62.11 0.2% 0% 63.37 Switchboard Training o-sample-avg-hit io-sample-avg-hit Train Test PPL Train Test PPL Baseline +neg-tr(0.5k) +neg-tr(1k) +neg-tr(2.5k) 27.8% 27.6% 42.81 19.6% 21% 42.81 3.8% 13.4% 42.91 2.2% 9.4% 42.7 2.4% 5% 42.96 2.1% 4% 42.76 1.3% 2.6% 43.51 1.5% 1.6% 43.24 OpenSub Training o-sample-min-hit io-sample-min-hit Train Test PPL Train Test PPL Baseline +neg-tr(0.5k) +neg-tr(1k) +neg-tr(2.5k) 40.7% 36.6% 70.81 19.2% 13.6% 70.81 5.8% 12.2% 77.90 5.2% 6.6% 73.48 5.2% 7% 68.77 9.2% 4.6% 68.92 4.8% 6% 74.07 3.4% 3.6% 75.9
Table 3: Main results for the hit rates of malicious tar- gets before and after negative training. âNeg-tr(0.5k)â refers to the negative training experiment using the orig- inal malicious training target list without paraphrase augmentation.
Ubuntu/OpenSubtitles, and we omit those results here.
We ï¬rst observe that, for all data-sets, negative training can effectively reduce the hit rate on the training target list to less than 5% with little or no degradation on perplexity. We provide a compari- son of the modelâs behavior in Appendix D. Also, signiï¬cant hit rate reduction is achieved on the test target list, which has no overlap with the training target list. This shows that negative training, simi- lar to traditional positive training, also generalizes. It is also shown that training list augmentation can further reduce the malicious target hit rate con- sistently for both training and test lists. For ex- ample, on Ubuntu data, the hit rate after negative training w.r.t. o-sample-min-hit is 12.6%, and can be reduced to 0% with paraphrase augmentation.
We ï¬nd that that the modelâs generation behav- ior in non-adversarial setting is almost the same as the baseline after negative training. For exam- ple, the 10-best list from beam search before/after neg-train has larger than 90% overlap. We also ï¬nd that the model generates similar samples (shown in Appendix G). We believe the reason is that neg- ative training focuses on making the model more robust with the adversarial inputs, and the original generation behavior is kept intact by the positive training (Equation 4).
# 4.3 Experiments on the Frequent Response Problem
In this section we report results where the nega- tive training framework (Section 3.3) is applied to tackle the frequent response problem. For all data- sets, negative training is executed for 20 iterations on the MLE trained model over the training data, with a selected rthres. A ï¬xed learning rate of 0.001 is used for all three data-sets, the mini-batch size is set to 64 and λPOS is set to 1.
In this work, we focus on improving the modelâs greedy decoding behavior instead of beam search for the following two reasons: 1) For the base- line models our experiments, we found that beam search gives far worse response diversity than greedy decoding, because it favors short responses (usually only of length one) too much, resulting in a much larger max-ratio; 2) During training, doing beam search is much more time-consuming than greedy decoding.
To measure the diversity of the modelâs gener- ated responses, in addition to max-ratio introduced in Section 3.3, which is specially design for the fre- quent response problem, we also adopt the entropy metric proposed in (Zhang et al., 2018). Given a set of responses from decoding on the test set, Ent-n calculates the entropy of the n-gram distribution:
Ent-n= S~ âr(g) logr(g) (5) geGn
where Gn is the set of all n-grams that appeared in the response set, and r(g) refers to the ratio (frequency) of n-gram g w.r.t. all n-grams in the responses set.
In our experiments with negative training, a harmful side-effect is spotted: during decoding, the model tends to output long and ungrammatical responses such as âi do nât know if it âs a real valid deterrent crime crime yeah i âm satisfied trying not toâ. We believe the reason is that the sentence end token <EOS> gets over penalized during negative training (it appears in every negative example). So, we apply the same frequent word avoiding (FWA) technique used in Section 4.2, except that here only the negative gradient for <EOS> is scaled by 0.110.
In addition to the baseline model, we compare our proposed negative training framework against a
10We ï¬nd that scal by zero will result in extremely short responses.
Ubuntu rthres PPL M-ratio E-2 E-3 Test-set Baseline +GAN +MMI +neg-train +neg-train Switchboard N/A N/A 59.49 N/A 59.43 N/A N/A N/A 1% 59.76 0.1% 60.06 rthres 10.09 11.32 1.1% 5.92 5.33 4.4% 5.87 5.30 4.7% 5.93 5.34 4.5% 6.52 1.2% 5.74 7.55 1.3% 6.44 PPL M-ratio E-2 E-3 Test-set Baseline +GAN +MMI +neg-train +neg-train OpenSubtitles N/A N/A 42.81 N/A 42.69 N/A N/A N/A 10% 42.84 1% 44.32 rthres 8.61 10.0% 2.71 37.4% 2.66 49% 5.48 23% 12.4% 3.86 9.8% 5.48 PPL M-ratio E-2 9.65 2.42 2.35 6.23 4.00 6.03 E-3 Test-set Baseline +GAN +MMI +neg-train +neg-train N/A N/A 70.81 N/A 72.00 N/A N/A N/A 1% 72.37 0.1% 75.71 9.66 0.47% 4.22 20% 4.08 18.8% 7.63 3.6% 3.1% 5.68 0.6% 6.90 10.98 4.59 4.43 9.08 6.60 8.13
Table 4: Main results of negative training with differ- ent rthres, for the frequent response problem. Diversity metrics for the responses in the test data are also shown, âE-nâ/âM-ratioâ refer to the Ent-n/max-ratio metric.
GAN (Goodfellow et al., 2014a) approach, where a discriminator D is introduced and the generator G tries to fool the discriminator to believe its samples are real data samples:
min G = min G max D max D V (D, G) {E(x,y)â¼Pdata log D(x, y)+ (6) Exâ¼Pdata,yâ¼G(·|x) log(1 â D(x, y))}
where the generator G refers to the seq2seq model Pθ. The GAN framework is very attractive for tack- ling the generic response problem (Li et al., 2017; Zhang et al., 2018), because the discriminator can act as a critic to judge whether a response sam- ple is boring. We describe the training details and hyper-parameter setting for the GAN approach in Appendix E.
We also provide an comparison to the MMI de- coding (Li et al., 2016), which is a very popular work in this ï¬eld. We implement MMI-antiLM for our models.
The experimental results are shown in Table 4. The experiment with best diversity result and non- degenerate sample quality are shown in bold. We ï¬rst observe a large gap on the diversity measures between the baseline models and the test set, es- pecially on Switchboard and OpenSubtitles data.
Switchboard OpenSubtitles Input: it âll cost about three hundred dollars for a stud Input: captain you wanted to see me Baseline: i think that âs a good idea Neg-train: i think i would agree with that Input: we want to breed her with a champion Baseline: i do nât know Neg-train: i think it was Baseline: i âm sorry Neg-train: i was in the hotel Input: yes mr. brown could i Baseline: i do nât know Neg-train: i âd like to introduce myself Input: now these are long haired Input: leave it to me Baseline: i do nât know Neg-train: i âve been in a very very good shape Baseline: i âm not going to leave you Neg-train: you âre taking the ï¬rst step Input: the other two are short hairs Input: thank you mr. brown Baseline: i do nât know Neg-train: i âm going to try to get it Baseline: i âm sorry Neg-train: i âm happy to see you
Table 5: Greedy-decoding samples on the test data before and after negative training. The samples are consecutive (input of the next sample is the reference response for the previous one).
That indicates the severity of the frequent/generic response problem. Then, results of negative train- ing with different rthres show that negative training can signiï¬cantly increase response diversity, with little or no loss on PPL or BLEU score (shown in Appendix F) performance. For example, max- ratio is reduced by 73.7% and Ent-3 is increased by 149% for Switchboard data. Further, consistent im- provement is achieved when a smaller rthres is used. However, sample quality will decrease (becoming too long or ungrammatical) when rthres is too small. The reason could be that when too much diversity is asked for, the model will go to extremes to pro- vide diversity, resulting in degradation of sample quality.
E), which is not enough to prevent the model from generating them. We believe additional techniques (Zhang et al., 2018; Li et al., 2017) are needed for the GAN approach to be effective.
We show some model samples before and af- ter negative training in Table 5. It is shown that negative training effectively discourages boring re- sponses, and response diversity is improved. How- ever, one limitation is observed that diversity does not necessarily lead to improvement on the infor- mativeness of the response w.r.t. the input (some- times the model generates a completely unrelated response). More samples for all three data-sets are included in Appendix G.
Comparing to MMI, note that although on Switchboard/Opensubtitles MMI gives higher en- tropy, the max-ratio is not as low as the negative training result, which is the main focus of our work (the frequent response problem). We also ï¬nd MMIs hyper-parameters are difï¬cult to tune: the working set of hyper-parameters dont transfer well between data-sets. Further, for MMI in a lot of conï¬guration tries the model gives ungrammatical output samples (this is problem is also mentioned in the paper (Li et al., 2016)). For the Ubuntu data, we can not even ï¬nd a conï¬guration that performs better than the baseline model.
Further, the vanilla GAN approach is not shown to be effective in our experiments. The reason could be that despite its discriminative nature, GAN training still feeds âpositiveâ gradient for samples from the model (eq. (11) and eq. (12) in Appendix
To rigorously verify negative training is not get- ting diversity when sacriï¬cing the sampleâs qual- ity, a human evaluation is conducted and results are shown in Table 6. It is observed that negative training wins by a signiï¬cant margin for all three data-sets. This shows that, negative training does not damage the quality of the generated samples. Note that the human evaluation does not reï¬ect the diversity of the model, because the raters only rate one response at a time.
# 5 Related Works
The malicious response problem and the gibbs- enum algorithm to ï¬nd trigger inputs (He and Glass, 2019) originates from a large body of work on ad- versarial attacks for deep learning models, with continuous input space (e.g. image classiï¬cation) (Goodfellow et al., 2014b; Szegedy et al., 2013), or discrete input space (e.g. sentence classiï¬cation, or
Data-set Tie Baseline Neg-train Ubuntu Switchboard Opensubtitles 64.6% 45.1% 58.3% 14.0% 18.3% 19.0% 21.3% 36.4% 22.6%
Table 6: Human Evaluation Results. For each data- set, 300 samples (input-output pairs) from the base- line model and the model after negative training, are evenly distributed to 4 English-speaking human evalu- ators. The evaluators are asked to pick a preferred sam- ple, or report a tie. This evaluation is to check whether negative training has hampered the quality of the gen- eration.
seq2seq models) (Papernot et al., 2016; Samanta and Mehta, 2017; Liang et al., 2018; Ebrahimi et al., 2017; Belinkov and Bisk, 2017; Chen et al., 2017). âAdversarial attacksâ refer to the phenomenon that when an imperceptible perturbation is applied to the input, the output of the model can change sig- niï¬cantly (from correct to incorrect). The trigger inputs found by the gibbs-enum algorithm, can be regarded as a type of âtargeted attackâ, in which the attack triggers the model to assign large probability to a speciï¬c malicious target sentence.
Motivated by the works on adversarial attacks, various adversarial training strategies (Madry et al., 2017; Belinkov and Bisk, 2017; Miyato et al., 2016) have been proposed to make trained models more robust against those attacks. During adver- sarial training, the model is fed with adversarial examples and the correct labels. The negative train- ing framework considered in this work differs from adversarial training in that, instead of asking the model to âdo the right thingâ (referred to as âposi- tive trainingâ in this work), the model is trained to ânot do the wrong thingâ. To the best of our knowl- edge, this is the ï¬rst work investigating the concept of negative training for dialogue response models, and the ï¬rst proposed solution for the malicious response problem.
The malicious target list used in this work is very similar to the one used in (He and Glass, 2019). We propose to add a test target list to test the general- ization of negative training. Further, we show that the training list can be effectively augmented by utilizing a paraphrase model.
In this work, we propose a deï¬nition for the fre- quent response problem, as a sub-problem of the generic response problem (Li et al., 2016). Much research work has devoted to alleviate the generic response problem in end-to-end dialogue response
generation, (Li et al., 2016) use the maximal mu- tual information (MMI) objective, and propose to utilize an auxiliary LM to penalize the generic re- sponse during decoding. Closely related to this work, sophisticated training frameworks based on GAN (Zhang et al., 2018; Li et al., 2017) have also been shown to be effective, where techniques such as variational information maximization or reward for every generation step (REGS) are pro- posed to improve GAN training. However, in our experiments it is shown that a vanilla GAN ap- proach gives unsatisfactory results. Whether neg- ative training11 is complementary to these frame- works is worth investigating in future work.
Finally, note that the concept of negative training in this work is very different to the negative sam- ples in word2vec training (Mikolov et al., 2013). The negative samples in word2vec training are used to prevent the training from being trivial, and is usu- ally chosen randomly. In this work, the negative samples are carefully chosen to exhibit some par- ticular undesirable behavior of the model, and is then used to correct such behavior.
# 6 Conclusion
In this work, we propose the negative training framework to correct undesirable behaviors of a trained neural dialogue response generator. The al- gorithm involves two major steps, ï¬rst input-output pairs that exhibit bad behavior are identiï¬ed, and then are used for ï¬ne-tuning the model as nega- tive training examples. We also show that negative training can be derived from an overall objective (eq. (2)) to minimize the expected risk of unde- sirable behaviors. In our experiments, we apply negative training to the malicious response prob- lem and the frequent response problem and get signiï¬cant improvement for both problems.
# References
Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine transla- tion. CoRR, abs/1711.02173.
Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, and Cho-Jui Hsieh. 2017. Show-and-fool: Craft- ing adversarial examples for neural image caption- ing. CoRR, abs/1712.02051.
Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. 2018. Seq2sick: Evaluating the
11Note that negative training is considerably easier to im- plement than the mentioned frameworks based on GAN.
robustness of sequence-to-sequence models with ad- versarial examples. CoRR, abs/1803.01128.
Kyunghyun Cho, Bart van Merri¨enboer, C¸ alar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoderâdecoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724â 1734, Doha, Qatar. Association for Computational Linguistics.
Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2017. Hotï¬ip: White-box adversarial exam- ples for NLP. CoRR, abs/1712.06751.
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014a. Generative In Proceedings of the 27th Inter- adversarial nets. national Conference on Neural Information Process- ing Systems - Volume 2, NIPSâ14, pages 2672â2680, Cambridge, MA, USA. MIT Press.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014b. Explaining and harnessing adver- sarial examples. CoRR, abs/1412.6572.
Detecting egregious responses in neural sequence-to-sequence In International Conference on Learning models. Representations.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Neural computation, Long short-term memory. 9(8):1735â1780.
Yoon Kim. 2014. Convolutional neural networks for sentence classiï¬cation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746â1751.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting ob- jective function for neural conversation models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, San Diego California, USA, June 12-17, 2016, pages 110â119.
Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. CoRR, abs/1701.06547.
Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep text classiï¬cation can be fooled. In Proceedings of the Twenty-Seventh International Joint Conference on Artiï¬cial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden., pages 4208â4215.
Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Nose- worthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An em- pirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122â2132. Associa- tion for Computational Linguistics.
Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dia- logue systems. CoRR, abs/1506.08909.
Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based In Proceedings of the neural machine translation. 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412â1421. Associa- tion for Computational Linguistics.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversar- ial attacks. CoRR, abs/1706.06083.
Tomas Mikolov, Martin Karaï¬Â´at, Luk´as Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2010. Recur- In IN- rent neural network based language model. TERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September 26-30, 2010, pages 1045â1048.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems - Volume 2, NIPSâ13, pages 3111â3119, USA. Curran Associates Inc.
Takeru Miyato, Andrew M. Dai, and Ian Good- training methods Adversarial fellow. 2016. for Cite arxiv:1605.07725Comment: Published as a confer- ence paper at ICLR 2017.
Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Compu- tational Linguistics - Volume 1, ACL â03, Strouds- burg, PA, USA. Association for Computational Lin- guistics.
Nicolas Papernot, Patrick D. McDaniel, Ananthram Swami, and Richard E. Harang. 2016. Crafting ad- versarial input sequences for recurrent neural net- In 2016 IEEE Military Communications works. Conference, MILCOM 2016, Baltimore, MD, USA, November 1-3, 2016, pages 49â54.
Suranjana Samanta and Sameep Mehta. 2017. To- wards crafting text adversarial samples. CoRR, abs/1707.02812.
Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. CoRR, abs/1505.00387.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Sys- tems 27: Annual Conference on Neural Informa- tion Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104â3112.
In- troduction to Reinforcement Learning, 1st edition. MIT Press, Cambridge, MA, USA.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. CoRR, abs/1312.6199.
J¨org Tiedemann. 2009. News from OPUS - A col- lection of multilingual parallel corpora with tools and interfaces. In N. Nicolov, K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing, vol- ume V, pages 237â248. John Benjamins, Amster- dam/Philadelphia, Borovets, Bulgaria.
Pushing the limits of paraphrastic sentence embeddings CoRR, with millions of machine translations. abs/1711.05732.
Lijun Wu, Yingce Xia, Li Zhao, Fei Tian, Tao Qin, Jian- huang Lai, and Tie-Yan Liu. 2017. Adversarial neu- ral machine translation. CoRR, abs/1704.06933.
Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2016. Seqgan: Sequence generative adversarial nets with policy gradient. CoRR, abs/1609.05473.
Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In S. Bengio, H. Wallach, H. Larochelle, K. Grau- man, N. Cesa-Bianchi, and R. Garnett, editors, Ad- vances in Neural Information Processing Systems 31, pages 1815â1825. Curran Associates, Inc.
# A The Gibbs-enum Algorithm for Finding Trigger Inputs
In this section, we brieï¬y describe the gibbs-enum algorithm, we also refer readers to (He and Glass, 2019) for the intuition and full development of the algorithm. The goal of gibbs-enum is that given a (malicious) target sentence y of length m, and a trained seq2seq model, we aim to ï¬nd a trigger input sequence x, which is a sequence of one-hot vectors {xt} of length n, to minimize the negative log-likelihood (NLL) that the model will generate y. We formulate our objective function L(x; y) below:
1 m L(x; y) = im S- log Preq2seq(Yt|Y<ts @)+Xin R(x) t=1
(7) A regularization term R(x) is applied when look- ing for io-sample-min/avg-hit, which is the LM score of x:
1 n R(w) =-=S log Pru (mlrc) (8) n t=1
In our experiments we set λin to 1 when searching for io-sample-min/avg-hit, otherwise 0.
During gibbs-enum, every time we focus on a single index slot xt, and ï¬nd the best one-hot xt while keeping the other parts of x ï¬xed:
arg min xt L(x<t, xt, x>t; y) (9)
Since the size of vocabulary |V | is ï¬nite, it is possi- ble to try all of them and get the best local xt. But it is still costly since each try requires a forwarding call to the neural seq2seq model. To address this, gradient information is utilized to narrow the range of search. We temporarily regard xt as a continu- ous vector and calculate the gradient of the negated loss function with respect to it:
âxt(âL(x<t, xt, x>t; y)) (10)
Then, we try only the G indexes that have the high- est value on the gradient vector. The procedure is formulated in Algorithm 3.
For hyper-parameters of gibbs-enum, T (the maximum number of sweeps) is set to 5, G (size of the set of indices for enumeration during each update) is set to 100, the algorithm is run 5 times with different random initializations and the trigger input with the best loss is returned. Note that larger hyper-parameters can give slightly higher hit rates, but will be more time-consuming.
Algorithm 3 Gibbs-enum algorithm
Input: a trained seq2seq model, target sequence y, a trained LSTM LM, objective function L(x; y), input length n, output length m, and target hit type. Output: a trigger input xâ if hit type is in âio-hitâ then
initialize xâ to be a sample from the LM
# else
randomly initialize xâ to be a valid input se- quence
end if for s =1,2,...,T do fort =1,2,...,ndo get gradient Va;(âL(xt,, 27,254; y)), and set list H to be the G indexes with highest value in the gradient vector for j = 1,2,...,Gdo set xâ to be: concat(a*,, one-hot(H[j]), 7% ,) if L(xâ; y) < L(x*;y) then set x* = aâ end if end for end for if this sweep has no improvement for L then break end if end for return x*
# B Data-set Descriptions
Three publicly available conversational dialogue data-sets are used: Ubuntu, Switchboard, and OpenSubtitles. The Ubuntu Dialogue Corpus (Lowe et al., 2015) consists of two-person conver- sations extracted from the Ubuntu chat logs, where a user is receiving technical support from a help- ing agent for various Ubuntu-related problems. To train the baseline model, we select the ï¬rst 200k di- alogues for training (1.2M sentences / 16M words), and the next 5k dialogues for validation and test- ing respectively. We select the 30k most frequent words in the training data as our vocabulary, and out-of-vocabulary (OOV) words are mapped to the <UNK> token.
The Switchboard Dialogue Act Corpus 12 is a version of the Switchboard Telephone Speech Cor- pus, which is a collection of two-sided telephone conversations, annotated with utterance-level dia- logue acts. In this work we only use the conversa- tion text part of the data, and select 1.1k dialogues for training (181k sentences / 1.2M words), 25 dia- logues for validation and 25 dialouges for testing. We select the 10k most frequent words in the train- ing data as our vocabulary.
We also report experiments on the OpenSub- titles data-set13 (Tiedemann, 2009). The key difference between the OpenSubtitles data and Ubuntu/Switchboard data is that it contains a large number of malicious sentences, because the data consists of movie subtitles. We randomly select 5k movies for training (each movie is regarded as a big dialogue), which contains 5M sentences and 36M words, and 50 movies for validation and test- ing respectively. The 30k most frequent words are used as the vocabulary. We show some samples of the three data-sets in Appendix C.
For pre-processing, the text of all three data-sets are lower-cased, and all punctuations are removed. The maximum input sequence length is set to 15, with a maximum output sequence length of 20. Longer input sentences are cropped, and shorter input sentences are padded with <PAD> tokens.
# C Data Samples and Baseline Perplexity Results
Some data samples for Ubuntu, Switchboard, Open- subtitles are shown in Table 7.
# 12http://compprag.christopherpotts.net/swda.html 13http://www.opensubtitles.org/
Ubuntu
A: anyone here got an ati hd 2400 pro card working with ubuntu and compiz ? B: i have an hd 3850 A: is it working with compiz ?
# Switchboard
A: what movies have you seen lately B: lately i âve seen soap dish A: oh B: which was a A: that was a lot of fun
OpenSubtitles
B: you ca nât do that . A: my husband âs asleep . B: your husband know you âre soliciting ? A: give us a f*** â break .
Table 7: Data samples of Ubuntu, Switchboard and OpenSubtitles Dialogue corpus
Model Ubuntu Test-PPL(NLL) Switchboard OpenSubtitles LM Seq2seq 66.29(4.19) 59.49(4.08) 44.37(3.79) 42.81(3.75) 74.74(4.31) 70.81(4.26)
Table 8: Perplexity (PPL) and negative log-likelihood (NLL) of for baseline models on the test set.
Baseline perplexity results are shown Table 8. Note that Tin and Tout for various types of hit types discussed in Section 3.2 are set accordingly, for ex- ample, for io-sample-min-hit on the Ubuntu data, Tin is set to -4.19, and Tout is set to -4.08.
# D Auxiliary Experiment Results for the Malicious Response Problem
We compare the models behavior before and after negative training in Figure 1. It is shown that neg- ative training effectively reduce probability mass assigned to malicious targets, while keeping the be- havior on the test-set unchanged. However, almost every word in the malicious target sentences gets lower probability, especially when FWA is not used. Ideally, we believe a âpoliteâ language generator should only assign low probability to the key words in a malicious sentence. For example, in the tar- get âi shall take my revengeâ, only the âtake my revengeâ part should be penalized. Whether negative training has the potential to truly
# test-set
â Tout mmm baseline mmm neg-tr w.o FWA malicious targets lm neg-tr with FWA 36 gis q 7} gl 7. gal 7 : oa 7 ah wal we Ge ert Fo 067 wth oh po yor co57 aril eat oh ertio87 18 SE ath WA yo 087 S alh see YOY 08 12.57 gum baseline, mmm neg-tr wio FWA 10.0 75 5.0 2.5 mmm neg-tr with FWA 0% gnhos7 F °S gd e 7 nd 8 X67) we gk 2 e 7 o 6 SQo7, ME] LV ye gt & 7 6% ot iatheo8 os 9% gis eooran 8 BIEâ SNE OF ® cal aa57" ot * POFue" NE 68S C08
Figure 1: Negative Log-probability (NLL) the model assigned to the test list malicious targets (when fed with trigger inputs) or test data samples. The data-set is OpenSubtitles and hit type is io-sample-min-hit. Sentences are separated by <EOS>.
teach âmannersâ to a language generator is worth further investigation.
of window size 3/4/5/6, respectively. The resulting representation vector is denoted as xrep. .
# E Conï¬gurations of the GAN Approach for Dialogue Response Generation
We use the log derivative trick (Wu et al., 2017) for the gradient derivation of the generator:
The same network forward pass is also applied for y to get yrep. Finally, xrep and yrep are con- catenated and passed to a 3-layer high-way DNN classiï¬er (Srivastava et al., 2015) of hidden size 2000.
âθGV (D, G; x) =âθGEyâ¼G(·|x) log(1 â D(x, y)) =Eyâ¼G(·|x)âθG log G(y|x) log(1 â D(x, y)) (11)
where x is one input data sample. Then the genera- tor is updated by:
Following (Goodfellow et al., 2014a), we alter- nately train the discriminator and the generator with a ratio of 3:1. The discriminator is trained with a learning rate of 0.01. Similar to negative train- ing, our experiments show that positive training (or âteacher forcingâ in some literature) is crucial to aid the model to maintain its original performance, during GAN training.
θG â θG â αG · âθGV (D, G) (12)
# F Auxiliary Experiment Results for the Frequent Response Problem
where αG is the learning rate for the generator. Note that because log(1 â D(x, y)) is negative, âθG log G(y|x) will be eventually scaled posi- tively and added to θG.
In our GAN experiments, different values in the set {0.01, 0.001, 0.0001} are tried for αG and the best result is reported.
In Talbe 9, we show BLEU-4 scores for the model after negative training. It is shown that the BLEU- 4 performance does not become worse (or even improves) after negative training. This result, to some extent, veriï¬es our claim that the quality of model samples is not compromised.
We now describe the model conï¬guration of the discriminator D(x, y) used in our work. The dis- criminator model conï¬guration is similar to the one used in (Yu et al., 2016). First xt is converted to xemb as described in Section 2. Then a 1D- t convolution operation and max-over-time pooling operation (Kim, 2014) is applied, with 300 ï¬lters
# G More Model Samples for Comparison
We show the model samples before/after nega- tive training for the malicious response problem in Talbe 10. It is shown that the modelâs generation behavior does not change much.
For the frequent response problem, we show
Ubuntu PPL BLEU-4 M-ratio Baseline Neg-train 59.49 60.06 0.443 0.464 4.4% 1.3% Switchboard PPL BLEU-4 M-ratio Baseline Neg-train 42.81 44.32 0.476 0.475 37.4% 9.8% OpenSubtitles PPL BLEU-4 M-ratio Baseline Neg-train 70.81 75.71 0.502 0.526 20% 0.6%
Table 9: BLEU-4 results for negative training.
Input: he had worked for united fruit company there Baseline: it was kind of a frightening senators Neg-train: it was good housekeeping Input: uh-huh Baseline: right Neg-train: right Input: they in fact they had Baseline: golly Neg-train: joe came back from alabama Input: they in fact used some of the equipment that had been left over Baseline: i do nât think i âm fanatic very many Neg-train: i do nât know Input: he turned them down Baseline: i love the game Neg-train: i love it
Table 10: Consecutive model samples before and after negative training for preventing malicious response on Switchboard data.
more decoding samples for comparison of different training methods in Table 11, 12 and 13.
Switchboard Input: people do nât realize it Baseline: i do nât know GAN: i do nât know Neg-train: i think it âs a very interesting Input: they âre sort of siamese in a way Baseline: uh-huh GAN: uh-huh Neg-train: yeah Input: they have the gloves on the paws Baseline: uh-huh GAN: uh-huh Neg-train: i âm really glad that they do nât have that Input: they âre usually two colored Baseline: i think that âs a good idea GAN: i think that âs a good idea Neg-train: yeah Input: she âs predominantly black Baseline: i do nât know GAN: he âs a little bit Neg-train: he âs a very good actor Input: she has chocolate paws chocolate stomach and silver on her hindquarters Baseline: uh-huh GAN: uh-huh Neg-train: i think they have a great deal of <unk> Input: she âs quite attractive looking Baseline: he âs a good actor GAN: he âs a little bit Neg-train: he âs a good guy Input: hope the stud will ï¬nd her attractive Baseline: i do nât know GAN: i do nât know Neg-train: i âve been in a very very good shape Input: that she has kittens that look like her Baseline: i think that âs a good idea GAN: i think that âs a good idea Neg-train: yeah Input: that she has a whole bunch of kittens Baseline: i think that âs a good idea GAN: i think that âs a good idea Neg-train: yeah Input: we have to be careful Baseline: i âm not sure GAN: i âm not sure Neg-train: i âm in the dallas area Input: we have to get a particular color point stud Baseline: i do nât know GAN: i do nât know Neg-train: i âm just going to Input: you ca nât just breed them with anything Baseline: i do nât know GAN: i do nât know Neg-train: i have nât been to a lot of the movies lately
Input: we have to get a silver point himie
Baseline: i do nât know GAN: i do nât know Neg-train: i âm going to have to go back to work
Table 11: Consecutive model samples before and after negative training for Switchboard data.
Ubuntu
Input: just how good ubuntu Baseline: i âm not sure i âm just a <unk> GAN: i âm not sure if it âs possible but i âm sure there âs a way to do it Neg-train: i have a server that i want to install on a server Input: help please i broke xp with linux url i ca nât get an invite Baseline: what is the problem GAN: what is the problem Neg-train: what is the issue Input: the case it âs possible that you might need to repair the xp installation Baseline: i have a <unk> and i want to install ubuntu on a usb drive GAN: i have a <unk> and i want to install ubuntu on a usb drive Neg-train: i have a live cd and i want to install it on a hard drive Input: but that takes hours Baseline: i do nât know i âm not sure GAN: i do nât know Neg-train: i âm not sure it âs a hardware problem Input: how to i enable tv-out via fglrx Baseline: you can use the <unk> driver GAN: you can use the nvidia driver from the wiki Neg-train: you can try the <unk> driver Input: they may be able to help you in channel # ati Baseline: thanks GAN: thanks Neg-train: i have a ati radeon mobility radeon hd Input: nobody feels like responding in # ati Baseline: i have no idea what you âre talking about GAN: i have no idea Neg-train: i think i found it Input: its always worked perfectly for me why âs it crap Baseline: i do nât know i âm not sure GAN: i do nât know i âm not sure Neg-train: i do nât know i just use it to make it work Input: i have seen better performance on an xp vm under parallels Baseline: i have a <unk> and i have a <unk> GAN: i have a <unk> and a <unk> <unk> Neg-train: i have a vm with vmware server running on it Input: use 2kpro and no issues whatsoever with 256mb of ram and a 10gig vdrive Baseline: i have a <unk> and it âs not a big deal GAN: i have a <unk> and it âs not a big deal Neg-train: i have a p4 with 1gb ram and i have a 1gb swap Input: will someone please help me with php conï¬g on ubuntu Baseline: what âs the problem GAN: what âs the problem Neg-train: what âs wrong
Input: you are having trouble with if someone knows how to help they will try Baseline: i âm trying to get a <unk> to
# work with ubuntu
GAN: i âm not sure i âm just trying to help you Neg-train: i âm trying to get a <unk> of my <unk>
and i want to know if it âs possible
Table 12: Consecutive model samples before and after negative training for Ubuntu data.
Opensubtitiles
Input: i feel really bad marco so bad Baseline: i âm sorry GAN: i âm not a <unk> Neg-train: i feel so bad Input: me at least that you âll come back tell me you âre not ill Baseline: i âm not going to be in the house GAN: i âm not going to let you go Neg-train: i âm so glad you are Input: tell me you forgive me Baseline: i âm not going to let you go GAN: i âm not going to leave you Neg-train: you âre not the only one Input: will you forgive me Baseline: i âll be all right GAN: i âll be all right Neg-train: you âll never see the Input: what do you want Baseline: i âm going to the hospital GAN: i âm going to the <unk> Neg-train: i want to go Input: get up hurry i got an idea Baseline: i âm not going to be a <unk> GAN: i âm not going to see you Neg-train: you âre not getting a bad idea Input: where are we going Baseline: i do nât know GAN: to the <unk> Neg-train: to the hotel Input: shut up and keep down Baseline: i âm not going to get a doctor GAN: i âm not going to get a doctor Neg-train: we âre not going Input: we âre screwed there âs no wind Baseline: i âm not going to see you GAN: i âm not a <unk> Neg-train: we âre in the <unk> Input: how âd your old lady come to think you was kidnapped Baseline: i do nât know GAN: i do nât know Neg-train: i âd like to ask you to take a look at it Input: your son is in our hands Baseline: i âm not going to GAN: he âs not here Neg-train: he âs not going to leave Input: we âll stay in touch Baseline: i âm sorry GAN: i âm not going to Neg-train: i think we âve got a <unk> Input: get the money ready or else Baseline: i âm not going to be a <unk> GAN: i âm not going to be here Neg-train: i want the <unk>
Input: i think they got it
Baseline: you know what GAN: oh Neg-train: it âs a good thing Input: you wanted to scare them
Baseline: i do nât know GAN: i âm not a <unk> Neg-train: i âm a coward
Table 13: Consecutive model samples before and after negative training for Opensubtitles data. | {
"id": "1605.07725"
} |
1903.01611 | Stabilizing the Lottery Ticket Hypothesis | Pruning is a well-established technique for removing unnecessary structure
from neural networks after training to improve the performance of inference.
Several recent results have explored the possibility of pruning at
initialization time to provide similar benefits during training. In particular,
the "lottery ticket hypothesis" conjectures that typical neural networks
contain small subnetworks that can train to similar accuracy in a commensurate
number of steps. The evidence for this claim is that a procedure based on
iterative magnitude pruning (IMP) reliably finds such subnetworks retroactively
on small vision tasks. However, IMP fails on deeper networks, and proposed
methods to prune before training or train pruned networks encounter similar
scaling limitations. In this paper, we argue that these efforts have struggled
on deeper networks because they have focused on pruning precisely at
initialization. We modify IMP to search for subnetworks that could have been
obtained by pruning early in training (0.1% to 7% through) rather than at
iteration 0. With this change, it finds small subnetworks of deeper networks
(e.g., 80% sparsity on Resnet-50) that can complete the training process to
match the accuracy of the original network on more challenging tasks (e.g.,
ImageNet). In situations where IMP fails at iteration 0, the accuracy benefits
of delaying pruning accrue rapidly over the earliest iterations of training. To
explain these behaviors, we study subnetwork "stability," finding that - as
accuracy improves in this fashion - IMP subnetworks train to parameters closer
to those of the full network and do so with improved consistency in the face of
gradient noise. These results offer new insights into the opportunity to prune
large-scale networks early in training and the behaviors underlying the lottery
ticket hypothesis | http://arxiv.org/pdf/1903.01611 | Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin | cs.LG, cs.CV, stat.ML | This article has been subsumed by "Linear Mode Connectivity and the
Lottery Ticket Hypothesis" (arXiv:1912.05671, ICML 2020). Please read/cite
that article instead | null | cs.LG | 20190305 | 20200720 | 0 2 0 2
l u J 0 2 ] G L . s c [
3 v 1 1 6 1 0 . 3 0 9 1 : v i X r a
# Stabilizing the Lottery Ticket Hypothesisâ
Jonathan Frankle MIT CSAIL Karolina Dziugaite University of Cambridge Element AI Daniel M. Roy University of Toronto Vector Institute Michael Carbin MIT CSAIL
# Abstract
Pruning is a well-established technique for removing unnecessary structure from neural networks after training to improve the performance of inference. Several recent results have explored the possibility of pruning at initialization time to provide similar beneï¬ts during training. In particular, the lottery ticket hypothesis conjectures that typical neural networks contain small subnetworks that can train to similar accuracy in a commensurate number of steps. The evidence for this claim is that a procedure based on iterative magnitude pruning (IMP) reliably ï¬nds such subnetworks retroactively on small vision tasks. However, IMP fails on deeper networks, and proposed methods to prune before training or train pruned networks encounter similar scaling limitations. In this paper, we argue that these efforts have struggled on deeper networks because they have focused on pruning precisely at initialization. We modify IMP to search for subnetworks that could have been obtained by pruning early in training (0.1% to 7% through) rather than at iteration 0. With this change, it ï¬nds small subnetworks of deeper networks (e.g., 80% sparsity on Resnet-50) that can complete the training process to match the accuracy of the original network on more challenging tasks (e.g., ImageNet). In situations where IMP fails at iteration 0, the accuracy beneï¬ts of delaying pruning accrue rapidly over the earliest iterations of training. To explain these behaviors, we study subnetwork stability, ï¬nding thatâas accuracy improves in this fashionâIMP subnetworks train to parameters closer to those of the full network and do so with improved consistency in the face of gradient noise. These results offer new insights into the opportunity to prune large-scale networks early in training and the behaviors underlying the lottery ticket hypothesis.
# Introduction
For decades, pruning (LeCun et al., 1990; Han et al., 2015) unnecessary structure from neural networks has been a popular way to improve the storage and computational costs of inference, which can often be reduced by an order of magnitude without harm to accuracy. Pruning is typically a post-processing step after training; until recently, it was believed that the pruned architectures could not themselves be trained from the start (Han et al., 2015; Li et al., 2016). New results challenge this wisdom, raising the prospect of reducing the cost of training by pruning beforehand. Liu et al. (2019) demonstrate that, at moderate levels of sparsity, pruning produces networks that can be reinitialized and trained to equal accuracy; Lee et al. (2019) propose an efï¬cient method for ï¬nding such reinitializable subnetworks before training (SNIP).
Frankle and Carbin (2019) characterize the opportunity for pruning at initialization. For shallow vision networks, they observe thatâat levels of sparsity that are more extreme than Liu et al. (2019) and Lee et al. (2019)âpruned networks can successfully train from scratch so long as each unpruned connection is reset back to its initial value from before training.2 This procedure, which we term iterative magnitude pruning (IMP; Algorithm 1 with k = 0), produces a subnetwork of the original, untrained network; when it matches the accuracy of the original network, it is called a winning ticket. Based on these results, Frankle and Carbin propose the lottery ticket hypothesis: dense neural networks contain sparse subnetworks capable of training to commensurate accuracy at similar speed.
âThis article has been subsumed by Linear Mode Connectivity and the Lottery Ticket Hypothesis (Frankle et al., 2020). Please read/cite that article instead.
2Appendix A compares Liu et al. (2019), Lee et al. (2019), and Frankle and Carbin (2019).
Algorithm 1 Iterative Magnitude Pruning (IMP) with rewinding to iteration k. Randomly initialize a neural network f(x;m © Wo) with initial trivial pruning mask m = q!Wol, Train the network for k iterations, producing network f(«;m © W;,). Train the network for Tâ â k further iterations, producing network f («;m © W;). Prune the remaining entries with the lowest magnitudes from Wr. That is, let m{i] = 0 if Wr[i] is pruned. If satisfied, the resulting network is f(x;m © Wr). Otherwise, reset W to W;, and repeat steps 3-5 iteratively, gradually removing more of the network. DYRYNS
Despite the enormous potential of pruning before training, none of this work scales beyond small vision benchmarks. Lee et al. provides results only for Tiny ImageNet, a restricted version of ImageNet with 200 classes. Liu et al. examine Resnet-50 on ImageNet, but accuracy declines when only 30% of parameters are pruned. Liu et al., Gale et al. (2019), and Frankle and Carbin themselves show that IMP fails on deeper networks. To ï¬nd winning tickets on deeper networks for CIFAR10, Frankle and Carbin make bespoke changes to each networkâs learning schedule. In this paper, we argue that such efforts to train pruned networks or prune before training have struggled on deeper networks because they have focused on doing so precisely at initialization.
In comparison, other techniques gradually prune networks throughout training to competitive levels of sparsity without compromising accuracy (Zhu and Gupta, 2017; Gale et al., 2019; Lym et al., 2019; Narang et al., 2017; Louizos et al., 2018; Narang et al., 2017). However, these approaches must maintain much of the network for a large portion of training or do not scale beyond toy benchmarks.
Rewinding. In this paper, we demonstrate that there exist subnetworks of deeper networks (i.e., Resnet-50, Squeezenet, Inception-v3) at early points in training (0.1% to 7% through) that are 50% to 99% smaller and that can complete the training process to match the original networkâs accuracy. We show this by modifying IMP to rewind pruned subnetwork weights to their former values at iteration k rather than resetting them to iteration 0. For networks where IMP cannot ï¬nd a winning ticket, the accuracy beneï¬ts of this delay in pruning accrue rapidly over the earliest iterations of training. For example, IMP ï¬nds 80% sparse subnetworks of Resnet-50 at epoch 6 (out of 90) with no loss in accuracy on ImageNet. To the best of our knowledge, our work is the ï¬rst to show that it is possible to prune (1) so early in training (2) to such extreme levels of sparsity (3) on such large-scale tasks.
Stability. To explain why IMP fails when resetting to iteration 0 and improves rapidly when rewinding later, we introduce subnetwork stability: the distance between two trained copies of the same subnetwork subjected to different noise. In particular, we focus on the noise introduced by pruning (comparing the trained weights of the full network and subnetwork) and data order (comparing the weights of two subnetworks trained with different data orders). In cases where IMP fails to ï¬nd a winning ticket when resetting to iteration 0, both forms of stability improve rapidly as pruning is delayed during the early part of training, mirroring the rise in accuracy. Stability to pruning captures the extent to which the subnetwork arrived at the same destination as the original network in the optimization landscape. We hypothesize that improved stability to pruning means that a subnetwork comes closer to the original optimum and thereby accuracy; improvements in stability to data order mean the subnetwork can do so consistently in spite of the noise intrinsic to SGD.
Finally, we revise the lottery ticket hypothesis to consider rewinding in accordance with these results: The Lottery Ticket Hypothesis with Rewinding. Consider a dense, randomly-initialized neural network f(x;Wo) that trains to accuracy a* in T* iterations. Let W, be the weights at iteration t of training. There exist an iteration k < T* and fixed pruning mask m ⬠{0,1}!l (where \|m||1 < |Wo|) such that subnetwork m © Wy, trains to accuracy a > a* in T < T* â k iterations.
Based on this new understanding of the lottery ticket hypothesis provided by our rewinding and stability experiments, we conclude that there are unexploited opportunities to prune large-scale networks early in training while maintaining the accuracy of the eventual trained networks.
# 2 Stability at Initialization
On deeper networks for CIFAR10, Iterative Magnitude Pruning (IMP; Algorithm 1 at k = 0) fails to yield winning tickets. The solid blue line in Figure 1 shows that IMP on VGG-19 (left) and Resnet-18 (right) produces no winning tickets; the original initialization is inconsequential, and the networks
2
rate 0.1 â}â rate 0.1, warmup = â}â rate 0.01 tate 0.1 rate 003, warmup rate 001 of random reinit fe random reinit ~~ random reinit 4 random subnetwork â~|-~ random subnetwork + random subnetwork VGG-19 Resnet-18 wy 094 ~ Z £ 0904 20024 2 bs S088 4 Z 0.90 5 z a 5 ose 4 : 0.88 4 : g 2 os 4 < 0.86 4 < & B os4 0.84 100 410 168 69 28 12 0S 02 O1 100 644 4477117818 SS Percent of Weights Remaining Percent of Weights Remaining
Figure 1: On deeper networks for CIFAR10, IMP fails to ï¬nd winning tickets unless the learning rate schedule is altered.
Network Dataset Params Iters Batch Accuracy Rate Schedule Warmup Winning? Lenet MNIST 266K 50K 60 98.0 ± 0.2% adam 12e-4 constant 0 Y Resnet-18 (standard) Resnet-18 (low) Resnet-18 (warmup) 90.5 ± 0.0% mom. 0.1 CIFAR10 274K 30K 128 89.1 ± 0.1% mom. 0.01 89.6 ± 0.1% mom. 0.03 10x drop at 56K, 84K 0 0 20K N Y Y VGG-19 (standard) VGG-19 (low) VGG-19 (warmup) CIFAR10 20.0M 112K 64 91.5 ± 0.2% mom. 0.1 92.0 ± 0.1% mom. 0.01 92.3 ± 0.1% mom. 0.1 10x drop at 20K, 25K 0 0 10K N N Y
Figure 2: Networks for MNIST and CIFAR10. Accuracies averaged across three trials.
can be reinitialized (dashed blue) without altering accuracy. Frankle and Carbin manage to ï¬nd winning tickets in these networks by altering the learning schedule, lowering the learning rate (green) or adding warmup (orange). However, they offer no principled justiï¬cation for these choices, which are brittle and often alter the behavior of the unpruned networks.
To understand why IMP succeeds or fails to find winning tickets, we examine their stability. We measure two forms of stability: 1) stability to pruning: the distance between the weights of a subnetwork trained in isolation and the weights of the same subnetwork when trained within the larger network and 2) stability to data order: the distance between the weights of two copies of a subnetwork trained with different data orders. Stability to pruning captures a subnetworkâs ability to train in isolation and still reach the same destination as the larger network. Stability to data order captures a subnetworkâs intrinsic ability to consistently reach the same destination despite the gradient noise of SGD. In this section, we demonstrate that, when IMP returns a subnetwork that qualifies as a winning ticket, it is dramatically more stable by both measures than a randomly-pruned network of the same size. Formal definitions. A subnetwork is a tuple (W, m) of weights W : R? and a fixed pruning mask m: {0,1}â. Notation m © W denotes an element-wise product of a mask with weights. A stochastic training algorithm Aâ : R? x U + RP maps initial weights W and data order randomness u ~ U to weights W; at iteration t ⬠{1,.., 7}. The distance d between trained weights W; and W{ is the Lz distance between the masked, trained parameters: ||m © W; âm © W/||2. Throughout the paper, Appendix[B] follows the same analysis for the angle between the masked, trained parameters.
The stability to pruning of a subnetwork (W,,m) with respect to a noise distribution U un- der a distance d(-,-) is the expected distance between masked weights at the end of training: d(AT-*(W,, u), AP~"(m © Wi, u)) for u~ U. The stability to data order of a subnetwork (W;,m) with respect to a noise distribution U under a distance d(-, -) is the expected distance between masked weights at the end of training: d(A7~'(m © Wi,u), A?â"(m © Wi, vâ)) for u,u! ~ U. Methodology. We measure both forms of stability for Lenet for MNIST and Resnet-18 and VGG-19 for CIFAR10 as studied by|Frankle and Carbin)and described in Figure[2| We do so for networks produced by IMP as well as randomly-pruned networks in which we generate a random mask m of a given size with the same layer-wise proportions. These networks are trained for a fixed number of epochs. During each epoch, all training examples are randomly shuffled, randomly augmented,
3
Network Sparsity Data Order Stability (Distance) Random Comp IMP Pruning Stability (Distance) IMP Random Comp IMP Accuracy Random Lenet 10.7% 20.7 ± 0.6 58.6 ± 4.0 2.8x 48.1 ± 0.6 75.7 ± 0.5 1.6x 98.3 ± 0.1 97.5 ± 0.3 Resnet-18 (standard) Resnet-18 (low) Resnet-18 (warmup) 16.7% 66.4 ± 0.7 7.1 ± 1.2 9.5 ± 0.1 66.7 ± 1.1 28.9 ± 3.2 37.4 ± 1.9 1.0x 4.1x 3.9x 54.4 ± 0.2 19.8 ± 1.4 24.8 ± 0.9 53.4 ± 0.4 26.6 ± 0.1 34.6 ± 0.2 1.0x 1.3x 1.4x 87.7 ± 0.4 89.1 ± 0.4 90.3 ± 0.4 87.7 ± 0.5 86.1 ± 0.6 86.8 ± 0.5 VGG-19 (standard) VGG-19 (low) VGG-19 (warmup) 2.2% 285 ± 3 36.8 ± 2.6 97 ± 0.6 245 ± 34 90.3 ± 0.7 267 ± 2 0.8x 2.5x 2.7x 216 ± 1 44.0 ± 0.3 138 ± 1 212 ± 1 66.1 ± 0.4 201 ± 1 1.0x 1.5x 1.5x 90.0 ± 0.3 91.0 ± 0.3 92.4 ± 0.2 90.2 ± 0.5 88.0 ± 0.3 90.3 ± 0.3
Figure 3: The average data order stability of subnetworks obtained by IMP and by randomly pruning. Errors are the minimum or maximum across 18 samples.
and separated into minibatches without replacement; network parameters are updated based on each minibatch in sequence until the training data is exhausted, after which the next epoch begins.
Results. Figure 3 displays the stability for IMP subnetworks at a representative level of sparsity. IMP ï¬nds winning tickets for Lenet, but cannot do so for Resnet-18 or VGG-19 without warmup. Whenever it does so, the winning ticket is far more stable than a randomly-sampled subnetwork. For example, Lenet winning tickets are 2.8x (data order) and 1.6x (pruning) closer in L2 distance than random subnetworks. Conversely, IMP cannot ï¬nd a winning ticket within VGG-19 (standard) and Resnet-18 (standard), and the subnetworks it returns are no more stable than random subnetworks.
Discussion. These results suggest a connection between winning tickets and stability. Winning tickets are far more stable than random subnetworks. Likewise, when IMP subnetworks are no more accurate than random subnetworks, they are correspondingly no more stable. Interestingly, Frankle and Carbinâs techniques that enable IMP to ï¬nd winning tickets also beneï¬t stability.
# 3 Stability with Rewinding
In cases where the IMP fails to ï¬nd a winning ticket, reducing the learning rate early in training via warmup makes it possible for the procedure to succeed, simultaneously improving the stability of the resulting subnetwork. The efï¬cacy of warmup suggests that a high learning rate is not necessarily detrimental later in trainingâonly in the initial iterations. One hypothesis to explain these results is thatâfor the less overparameterized regime of training a subnetworkâoptimization is particularly sensitive to noise in the early phase of training. This sensitivity can be mitigated by reducing the learning rate early on, minimizing the consequences of any individual misstep.
Under this hypothesis, we would expect these subnetworks to become more stable later in training, when they better tolerate a high learning rate. We explore this possibility by modifying IMP (Algorithm|I]with k 4 0). After finding a subnetwork, rewind each connection back to its weight from an earlier iteration k rather than resetting it back to its initial value as{Frankle and Carbin|do.
Results. The upper two plots for each network in Figure 4 show the stability (measured in distance) of the IMP subnetwork (blue) and a random subnetwork (orange) across rewinding iterations. Across all networks, rewinding later causes gradual improvements in stability, supporting our hypothesis.
For Resnet-18 (standard) and VGG-19 (standard), in which no winning tickets can be found, IMP subnetworks are no more stable than randomly-pruned subnetworks when reset to iteration 0. How- ever, as the rewinding iteration increases, the IMP subnetworkâs stability dramatically improves when compared to its stability at iteration 0. Up to our point of analysis, this improvement is larger than that for random subnetworks. For IMP subnetworks, this change takes place rapidlyâwithin the ï¬rst 100 iterations (0.14 epochs) for VGG-19 and 500 iterations (1.4 epochs) for Resnet-18.
In cases where IMP ï¬nds a winning ticket, IMP subnetworks are already more stable than random subnetworks when resetting to iteration 0 (as noted in Section 2). This stability gap remains in place across all rewinding iterations that we consider, although it gradually shrinks toward the end of our range of analysis as the random subnetworks become somewhat more stable; by this point, the networks have already made substantial training progress orâin the case of Lenetâconverged.
4
Resnet-18 (Standard) Resnet-18 (Low) Resnet-18 (Warmup) Lenet 40 2 2 2 2 <0 a 3 amend py & & 220 & 525 5 5 8 pee a a FRSIRMPRRMS a PIPPI AIIM a 0 0 0 0 Sees & Fee ee Fee Ee SHRES i & 0 2 anand 2 eee] 2 2 2 2 i i golf) a nee 8 20 8 10 a a a a a a 0 0 0 0 ° § $F § ° § $F § â § fF Â¥ â© 8 FH § Ea py 0-90 Ea Ea 2 0.90 3 3 0.90 3 et 3 3 3 5 0.98 4 4 4 4 % 0.85 g O85 B 085 B 097 & & & & ° Ss § Â¥ & â § 8 Â¥ F â § 8 Â¥ F ° 8 esas VGG-19 (Standard) VGG-19 (Low) VGG-19 (Warmup) 2 8 2 3 200 a g 200 Z 2 50 Z 8 100 en ee ee ee re 4 a a â procedure 0 0 0 Se sgee CH SSF E SH8ZeS âH tandom ., 200 a OR) ORS] 2 g 50 2 6 100 B25 & 100 A a A 0 0 0 S98 ger s â4 8 Fe S98 ger s e a IIe 2 YAIRI DIE 5 0.90 £ 0.900 a § 0.00 [gage peo et $ < o87s < fd fa fd O85 0.850 O85 - ose § âS388 8 â8 8 Paes
Figure 4: The effect of the rewinding iteration (x-axis) on data order (top row) and pruning (middle row) stability and accuracy (bottom row) for networks in Figure 2. Error bars are the minimum and maximum across 18 (data order) and three samples (stability and accuracy). These results were obtained by training more than 6500 networks on K80s and V100s.
Discussion. Section 2 shows that, when IMP identiï¬es a winning ticket, it is more stable than a random subnetwork. Here, we ï¬nd that IMP subnetworks generally achieve such a stability gap across all network conï¬gurations; however, in many cases, this occurs a small way into training.
# 4 Accuracy with Rewinding
Section 2 observes that winning tickets found by IMP exhibit both improved accuracy and stability over random subnetworks. The previous section ï¬nds that the stability of IMP subnetworks improves as a function of rewinding iteration, especially rapidly so for networks where winning tickets are not found. The bottom plots in Figure 4 show that accuracy improves in a similar manner. Although resetting to iteration 0 does not produce winning tickets for Resnet-18 and VGG-19, rewinding slightly later (iteration 500 for Resnet-18 and 100 for VGG-19) leads to IMP subnetworks that exceed the accuracy of the original network. At later rewinding iterations, the rate of stability improvements subsides for these subnetworks, corresponding to slowed or no improvement in accuracy.
Improved stability also results in improved accuracy for random subnetworks. However, the accuracy of the random subnetworks remains lower in all of our experiments. We believe that the signiï¬cant stability gap explains this difference: we hypothesize that the stability of IMP subnetworks over their random counterparts results in better accuracy.
5
VGG-19 Resnet-18 got 2 Zo A J é 2 0900 3 3 = 090 4 a oss 4 > > 2 oss 4 F EB 2 0850 4 < 086 4 2 oss = 0825 + 1 r r ââââ+ 100 410 168 69 28 12 05 02 O1 100 644 41.72.1788 BOSS Percent of Weights Remaining Percent of Weights Remaining Sh mwindto > eden nevi St rewindioo 4 ndonseintââ evin 300 anda ei
Figure 5: IMP subnetworks rewound to an iteration early in training outperform the original networks, while resetting to iteration 0 does not.
Network Params Eps. Batch Top-1 Accuracy Learning Schedule Sparsity Resnet-50 Inception-v3 Squeezenet 25.5M 90 1024 27.1M 171 1024 1.25M 150 1024 76.1 ± 0.1% rate 0.4; warmup 5 eps.; 10x drop at 30, 60, 80; momentum 70% 78.1 ± 0.1% rate 0.03 linearly decayed to 0.005; momentum 70% 54.8 ± 0.6 % rate 0.66 exponentially decayed to 6.6e-5; rmsprop 50%
Figure 6: Networks for experiments on ImageNet with TPUs. Accuracies averaged across ï¬ve trials.
For the latest rewinding iterations that we consider, IMP subnetwork accuracy declines in many cases. According to our methodology, we train a subnetwork rewound to iteration k for T â â k iterations, where T â is the iterations for which the original network was trained. At later rewind points, we believe that this does not permit enough training time for the subnetwork to recover from pruning.
Discussion. Figure 5 plots the accuracy of Resnet-18 and VGG-19 across all levels of sparsity, comparing the IMP subnetworks reset to iteration 0 with those rewound to iteration 100 (VGG-19) and iteration 500 (Resnet-18)âthe iterations at which stability and accuracy improvements saturate. At all levels of sparsity, rewinding makes it possible to ï¬nd trainable subnetworks early in training without any modiï¬cations to network hyperparameters (unlike the warmup or reduced learning rates required in Figure 1).
These ï¬ndings reveal a new opportunity to improve the performance of training. The aspiration behind the lottery ticket hypothesis is to ï¬nd small, trainable subnetworks before any training has occurred. Insofar as IMP reï¬ects the extent of our knowledge about the existence of equally-capable subnetworks early in training, our ï¬ndings suggest thatâfor deeper networksâthe best opportunity to prune is a small number of iterations into training rather than at initialization. Doing so would exploit the rapid improvements in subnetwork stability and accuracy, resulting in subnetworks that can match the performance of the original network at far greater levels of sparsity.
# 5 Rewinding on Deep Networks for ImageNet
Rewinding made it possible to ï¬nd sparse, trainable subnetworks of deeper networks for CIFAR10 without the need to alter the underlying networkâs hyperparameters. In this section, we demonstrate that this strategy serves the same purpose for deeper networks for ImageNet (Russakovsky et al., 2015). Although IMP with k = 0 does not produce winning tickets, rewinding 4.4%, 3.5%, and 6.6% into training yields subnetworks that are 70%, 70%, and 50% smaller than the Resnet-50 (He et al., 2016), Inception-v3 (Szegedy et al., 2016), and Squeezenet (Iandola et al., 2016) architectures, respectively, that can complete training without any drop in accuracy. We trained more than 600 networks using standard implementations for TPUs (Google, 2018) as described in Figure 6.
# âopi
Figure 7 shows the effect of the rewinding iteration on the stability and accuracy at the levels of sparsity from Figure 6. In general, the trends from Section 4 remain in effect. When resetting weights to initialization, IMP subnetworks perform no better than random subnetworks in terms of both stability and accuracy. As the rewinding iteration increases, a gap in stability emerges. Accuracy improves alongside stability, reaching the accuracy of the original network at epoch 4 (out of 90) for Resnet-50, 6 (out of 171) for Inception-v3, and 10 (out of 150) for Squeezenet. In the case of Squeezenet, rewinding too early makes it impossible for the subnetworks to learn at all.
Figure 8 illustrates the effect of performing IMP with rewinding across all levels of sparsity. The blue line shows the result of one-shot pruning (pruning all at once after training) at a rewinding
6
Resnet-50 Inception-v3 Squeezenet 8 8 L2 Distance 8 L2 Distance L2 Distance â f= procedure â}â random ° ° ° oy ey © 8 a cy f& 12 Inception-v3 L2 Distance 3s 8 L2 Distance 8 0 0 Sy y © & Ss 3 a % 2 3 0.76 en = 050 § @ § 2 § ors 2 Zon 2 2 025 g 3 0.750 g 6 6 © O00 oy ey © eS co Fy © 4 co + = Bo
Figure 7: The effect of the rewinding epoch (x-axis) on data order stability and accuracy.
Resnet-50 on ImageNet (Oneshot) Inception-v3 on ImageNet (Oneshot) 80 e 54 e g ° g 4 én] & 104 & 70 100 80 60 40 20 0 100 80 60 40 20 Percent of Weights Remaining Percent of Weights Remaining ââ rewind to6 âeâ random reinit =--* rewind to 0 ââ rewind to8 âeâ random reinit =--* rewind to 0 Squeezenet on ImageNet (Oneshot) Resnet-50 on ImageNet (Iterative) 60 ing B54 5 40 5 g g < < 70 & 20 2 6 6 0 65 100 80 60 40 20 0 100 64.0 41.0 26.2 16.8 10.7 6.9 44 Percent of Weights Remaining Percent of Weights Remaining rewind to 6 (oneshot) | âeâ rewind to 6 (iterative) rewindto 10 â*â randomreinit =-=* rewind 00 â+â rewind to 0 (iterative) = random reinit
Figure 8: Rewinding to iterations early in training produces subnetworks that outperform the original networks, even when resetting to iteration 0 does not.
iteration just after the aforementioned thresholds. Resnet-50, Squeezenet, and Inception match the accuracy of the original network when 70%, 50%, and 70% of weights are pruned. At lower levels of sparsity, these subnetworks slightly outperform the original network. The weights obtained by rewinding are essential: when randomly reinitialized (dashed blue line) or reset to iteration 0 (orange line), subnetworks lose accuracy when pruned by any amount.
The bottom right plot explores the effect of iterative pruning (training, pruning 20% of weights at a time, rewinding, and repeating) on Resnet-50. Rewinding to epoch 6 (the green line) makes it possible to ï¬nd subnetworks that train to match the original accuracy when just 20% of weights remain. Randomly reinitializing (dashed green line) or resetting to iteration 0 (red line) perform equally poorly, only losing accuracy as they are pruned.
Discussion. On these deeper networks for a more challenging task, IMP ï¬nds no evidence in support of Frankle and Carbinâs hypothesis that equally-capable subnetworks exist at initialization. However, rewinding to within a few epochs of the start of training makes it possible to ï¬nd subnetworks with these properties. Stability continues to offer insight into the value of rewinding: as IMP subnetworks become more stable and a stability gap emerges, they reach higher accuracy. The central conceit of the lottery ticket hypothesisâthat we can prune early in trainingâcontinues to apply in this setting; however, the most productive moment at which to do so is later than strictly at initialization.
7
0
# 6 Limitations
Like Frankle and Carbinâs work, we focus only on image classiï¬cation. While we extend their work to include ImageNet, the revised IMP must still train a network one or more times to identify a subnetwork; we do not propose an efï¬cient way to ï¬nd these subnetworks at the rewinding iteration. Our core pruning technique is still unstructured, magnitude pruning (among many other pruning techniques, e.g., Hu et al. (2016); Srinivas and Babu (2015); Dong et al. (2017); Li et al. (2016); Luo et al. (2017); He et al. (2017)). Unstructured pruning does not necessarily yield networks that execute more quickly with commodity hardware or libraries; we aim to convey insight on neural network behavior rather than suggest immediate opportunities to improve performance.
# 7 Discussion
Stability. Stability to pruning measures a subnetworkâs ability to train in isolation to ï¬nal weights that are close to the values they would have reached had they been trained as part of the original network. If we consider the trained weights of the original network to be an ideal destination for optimization, then this metric captures a subnetworkâs ability to approach this point with fewer parameters. We conjecture that, by arriving at a similar destination, these subnetworks also reach similar accuracy, potentially explaining why increased stability to pruning corresponds to increased accuracy. Frankle and Carbinâs winning tickets are substantially more stable to pruning than random subnetworks, shedding light on a possible mechanism behind the lottery ticket hypothesis. As a follow-up, one might explore whether subnetworks that are more stable to pruning are also more likely to reach the same basin of attraction as the original network (Nagarajan and Kolter, 2019).
We ï¬nd a similar relationship between stability to data order and accuracy. Stability to data order measures a subnetworkâs ability to reach similar ï¬nal weights in spite of training with different mini- batch sequencesâthe gradient noise intrinsic to SGD. This metric is valuable because it provides a way to measure subnetwork stability without reference to the original network. We believe that stability to pruning and data order work hand-in-hand: subnetworks that are robust to data order are better able to reach destinations consistent with the original network in the face of SGD noise.
Pruning early. We aim to characterize the opportunity to prune early in training. Doing so could make it possible to reduce the cost of training networks by substantially reducing parameter-counts for most or all of training. This is an active area of research in its own right, including pruning before training (Lee et al., 2019), pruning throughout training (Zhu and Gupta, 2017; Gale et al., 2019; Lym et al., 2019; Molchanov et al., 2017; Louizos et al., 2018; Narang et al., 2017), and maintaining a sparse network with dynamic structure (Bellec et al., 2018; Mostafa and Wang, 2018; Mocanu et al., 2018). However, to the best of our knowledge, our work is the ï¬rst to show that it is possible to prune (1) so early in training (2) to such extreme levels of sparsity (3) on such large-scale tasks.
Speciï¬cally, we ï¬nd that, for many networks, there is an iteration early in training after which pruning can result in subnetworks with far higher accuracy than when pruning at initialization. Our results with IMP expand the range of known opportunities to prune early in training thatâif exploitedâcould reduce the cost of training. With better techniques, we expect this range could be expanded even further because our results are restricted by IMPâs limitations. Namely, it is possible that there are equally-capable subnetworks present at initialization, but IMP is unable to ï¬nd them.
Stability offers a new perspective for developing new early pruning methods. One could exploit stability information for pruning, or develop new techniques to maintain stability under pruning. Under our hypothesized connection between stability and accuracy, such methods could make it possible to ï¬nd accurate subnetworks early in training.
# 8 Conclusion
The lottery ticket hypothesis hints at future techniques that identify small, trainable subnetworks capable of matching the accuracy of the larger networks we typically train. To date, this and other related research have focused on compressing neural networks before training. In this work, we ï¬nd that other moments early in the training process may present better opportunities for this class of techniques. In doing so, we shed new light on the lottery ticket hypothesis and its manifestation in deeper networks through the lens of stability.
8
# References
Guillaume Bellec, David Kappel, Wolfgang Maass, and Robert Legenstein. 2018. Deep Rewiring: Training very sparse deep networks. Proceedings of ICLR (2018).
Xin Dong, Shangyu Chen, and Sinno Pan. 2017. Learning to prune deep neural networks via layer- wise optimal brain surgeon. In Advances in Neural Information Processing Systems. 4860â4874.
Jonathan Frankle and Michael Carbin. 2019. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. In Int. Conf. Represent. Learn. arXiv:1803.03635
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M Roy, and Michael Carbin. 2020. Linear Mode Connectivity and the Lottery Ticket Hypothesis. In International Conference on Machine Learning.
Trevor Gale, Erich Elsen, and Sara Hooker. 2019. The State of Sparsity in Deep Neural Networks. arXiv preprint arXiv:1902.09574 (2019).
Google. 2018. Networks for Imagenet on TPUs. (2018). https://github.com/tensorflow/ tpu/tree/master/models/
Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efï¬cient neural network. In Advances in neural information processing systems. 1135â1143.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770â778.
Yihui He, Xiangyu Zhang, and Jian Sun. 2017. Channel pruning for accelerating very deep neural networks. In International Conference on Computer Vision (ICCV), Vol. 2. 6.
Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. 2016. Network trimming: A data-driven neuron pruning approach towards efï¬cient deep architectures. arXiv preprint arXiv:1607.03250 (2016).
Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. 2016. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and< 0.5 MB model size. arXiv preprint arXiv:1602.07360 (2016).
Yann LeCun, John S Denker, and Sara A Solla. 1990. Optimal brain damage. In Advances in neural information processing systems. 598â605.
Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. 2019. SNIP: Single-shot Network Pruning based on Connection Sensitivity. (2019).
Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. 2016. Pruning ï¬lters for efï¬cient convnets. arXiv preprint arXiv:1608.08710 (2016).
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. 2019. Rethinking the Value of Network Pruning. In International Conference on Learning Representations. https: //openreview.net/forum?id=rJlnB3C5Ym
Christos Louizos, Max Welling, and Diederik P Kingma. 2018. Learning Sparse Neural Networks through L_0 Regularization. Proceedings of ICLR (2018).
Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. 2017. Thinet: A ï¬lter level pruning method for deep neural network compression. arXiv preprint arXiv:1707.06342 (2017).
Sangkug Lym, Esha Choukse, Siavash Zangeneh, Wei Wen, Mattan Erez, and Sujay Shanghavi. 2019. PruneTrain: Gradual Structured Pruning from Scratch for Faster Neural Network Training. arXiv preprint arXiv:1901.09290 (2019).
Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H Nguyen, Madeleine Gibescu, and Antonio Liotta. 2018. Scalable training of artiï¬cial neural networks with adaptive sparse connectivity inspired by network science. Nature communications 9, 1 (2018), 2383.
9
Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. 2017. Variational dropout sparsiï¬es deep neural networks. arXiv preprint arXiv:1701.05369 (2017).
Hesham Mostafa and Xin Wang. 2018. Dynamic parameter reallocation improves trainability of deep convolutional networks. (2018).
Vaishnavh Nagarajan and J. Zico Kolter. 2019. Uniform convergence may be unable to explain generalization in deep learning. CoRR abs/1902.04742 (2019). arXiv:1902.04742 http://arxiv. org/abs/1902.04742
Sharan Narang, Erich Elsen, Gregory Diamos, and Shubho Sengupta. 2017. Exploring sparsity in recurrent neural networks. Proceedings of ICLR (2017).
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115, 3 (2015), 211â252.
Suraj Srinivas and R Venkatesh Babu. 2015. Data-free parameter pruning for deep neural networks. arXiv preprint arXiv:1507.06149 (2015).
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Re- thinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition. 2818â2826.
Michael Zhu and Suyog Gupta. 2017. To prune, or not to prune: exploring the efï¬cacy of pruning for model compression. arXiv preprint arXiv:1710.01878 (2017).
10
# Test Accuracy Iteration 112K
VGG19 Resnet-18 0.950 0.925 4 0.900 5 0.900 4 0.875 4 0.875 4 0.850 4 0.825 4 ° & âTest Accuracy Iteration 30K 100 410 168 69 28 12 OS 02 O1 100 64.4 «41.7 27.1 «178 1B 85S Percent of Weights Remaining Percent of Weights Remaining g s 1 winning ticket (original initialization) â}â winning ticket (random reinitilization) ~~ snip pruning
Figure 9: The accuracy achieved by VGG19 (left) and Resnet-18 (right) on CIFAR10 when pruned to the speciï¬ed size using iterative pruning and SNIP. Networks are trained with warmup and the learning rate hyperparameters used by Frankle and Carbin.
# A Comparison to âRethinking the Value of Pruningâ and âSNIPâ
In this appendix, we compare the performance of subnetworks found by âThe Lottery Ticket Hypoth- esisâ (Frankle and Carbin, 2019), âRethinking the Value of Pruningâ (Liu et al., 2019), and âSNIPâ (Lee et al., 2019). All three papers have different perspectives on the prospect of pruning early in training.
Frankle and Carbin argue that sparse, trainable networks exist at initialization time within the networks that we typically train. They ï¬nd these networks using IMP (1 with k = 0) to ï¬nd these subnetworks: they train the original network, prune it, and reset each surviving connectionâs weight back to its initial value from before training. They argue that the original initializations are essential for achieving this performance and that randomly reinitializing substantially degrades performance.
Liu et al. argue that the sparse networks that result from pruning can be trained from the start and that the original initializations do not matter. Instead, they ï¬nd that the networks that result from pruning can be trained with a random initialization to the same performance as the original network.
Lee et al. propose a method for pruning early in training called SNIP. SNIP considers the sensitivity of the loss to each weight (based on one mini-batch of data) and removes those weights to which the loss is least sensitive. Sensitivity is measured by multiplying each weight w by a virtual parameter c = 1 and computing âL âc . The authors ï¬nd that SNIP can prune neural networks before they have been trained, and that these networks can be randomly reinitialized without harm to the eventual accuracy.
There are three points of contention between the papers:
1. Is the original initialization important? Frankle and Carbin argue that it is essential, but Liu et al. and Lee et al. discard it without any impact on their results.
2. At what level of sparsity are the authors measuring? For example, Liu et al. and Lee et al. consider VGG-19 when pruned by up to 95%, while Frankle and Carbin prune by upwards of 99.5%.
3. How efï¬cient is the pruning method? Frankle and Carbin must train the same network a dozen or more times, while Liu et al. must train the network once and Lee et al. need to only look at a single mini-batch of data.
Figure 9 compares these three methods on VGG-19 (warmup) and Resnet-18 (warmup) from Figure 2. These particular hyperparameters were chosen because IMP does not ï¬nd winning tickets without warmup, which would render this comparison less informative. The plots include the accuracy of randomly reinitialized winning tickets (orange), winning tickets with the original initialization (blue), and subnetworks found by SNIP (green). The results for VGG19 (left) support the ï¬ndings of Liu et al. that pruned, randomly reinitialized networks can match the accuracy of the original network: VGG19 can do so when pruned by up to 80%. However, beyond this point, the accuracy of the randomly reinitialized networks declines steadily. In contrast, winning tickets with the original initialization match the accuracy of the original network when pruned by up to 99%. For Resnet-18 (right), which has 75x fewer parameters, the randomly reinitialized networks lose accuracy much sooner.
11
Network Sparsity Data Order Stability (Angle) IMP Random Comp Pruning Stability (Angle) IMP Random Comp IMP Accuracy Random Lenet 10.7% 22.7 ± 0.5⦠49.2 ± 3.0⦠2.2x 53.1 ± 0.8⦠83.6 ± 1.1⦠1.6x 98.3 ± 0.1 97.5 ± 0.3 Resnet-18 (standard) Resnet-18 (low) Resnet-18 (warmup) 16.7% 87.7 ± 1.2⦠16.1 ± 2.7⦠17.0 ± 0.2⦠88.0 ± 1.4⦠75.8 ± 1.2⦠69.0 ± 3.8⦠1.0x 4.7x 4.7x 87.7 ± 1.2 ⦠50.4 ± 4.8⦠49.9 ± 2.3⦠88.0 ± 1.4⦠77.1 ± 0.4⦠79.0 ± 0.5⦠1.0x 1.5x 1.6x 87.7 ± 0.4 89.1 ± 0.4 90.3 ± 0.4 87.7 ± 0.5 86.1 ± 0.6 86.8 ± 0.5 VGG-19 (standard) VGG-19 (low) VGG-19 (warmup) 2.2% 88.5 ± 0.4⦠39.9 ± 2.1⦠39.9 ± 2.1⦠88.3 ± 0.5⦠84.9 ± 0.6⦠84.6 ± 0.4⦠1.0x 2.1x 2.1x 87.8 ± 0.2⦠54.0 ± 0.4⦠71.7 ± 0.3⦠88.1 ± 0.2⦠79.0 ± 0.2⦠84.8 ± 0.2⦠1.0x 1.5x 1.2x 90.0 ± 0.3 91.0 ± 0.3 92.4 ± 0.2 90.2 ± 0.5 88.0 ± 0.3 90.3 ± 0.3
Figure 10: The average stability (as measured in angle) of subnetworks obtained by IMP and by randomly pruning. Errors are the minimum or maximum across 18 samples (data order) and 3 samples (pruning and accuracy).
SNIP results in a promising improvement over random reinitialization on VGG19, however there is still a performance gap between SNIP and the winning ticketsâan opportunity to further improve the performance of pruning before training.
We return to our original three questions: Up to a certain level of sparsity, the original initialization is not important. Subnetworks that are randomly reinitialized can still train to full accuracy. Beyond this point, however, the original initialization is necessary in order to maintain high accuracy. Liu et al. and Lee et al. operate in this ï¬rst regime, where initialization is less important; Frankle and Carbin operate in the second regime. However, ï¬nding networks that learn effectively at these extreme levels of sparsity is very expensive: Frankle and Carbin must train the same network many times in order to do so. Lee et al. offer a promising direction for efï¬ciently ï¬nding such subnetworks, taking a step toward realizing the opportunity described by Frankle and Carbin and this paper.
# B Angle Measurements (Stability)
This appendix accompanies Figures 3 and 4, which measure the stability of the networks in Figure 2. The aforementioned ï¬gures measure stability only in terms of distance. This appendix includes the accompanying measurements of angle. Figure 10 includes angle data when resetting at iteration 0 to accompany Figure 3. Figure 11 includes the angle measurements of stability to data order (top) and pruning (middle) when networks are rewound to various iterations; it accompanies Figure 4. Figure 12 includes angle measurements for the ImageNet networks from Section 5 to accompany Figure 7.
12
Angle Degrees) Angle Degrees) Test Accuracy Angle (Degrees) Angle (Degrees) Test Accuracy w 3 ° 50 0.90 0.85 50 50 0.90 0.85 Resnet-18 (Standard) Resnet-18 (Low) Resnet-18 (Warmup) Lenet i Pore Eso bo a aoe eeen bo 5 oN : a 2 2 220 HOO ie Bo meee] BO PRO) BF =o =o =o ° § $F § ° § $F § â § fF Â¥ ° 8 8 sag AST g0 sy g 5 prs] & 50 POR RSS 2 2 2 Ea Ea Ea =o =o =o ° § $F § ° § $F § â § fF Â¥ ° 8 8 sag 0.90 5 3 090 a RRIF Ht A A 5 0.98 cd cd cd goss & 085 3097 & & & ° 3 3 Â¥ &§ â 3 3 Â¥ &§ ° 3 3 Â¥ &§ °c" § ess VGG-19 (Standard) VGG-19 (Low) VGG-19 (Warmup) ba ba 2 FO x! 9 * sp sp 2, 2, â fH procedure S46 ees & S46 ees & °§ § g & & bh random ' po a ae Fs it ha le: 3 3 2 2 =o =o â4 8 FHF â4 &§ FHF â4 8 FHF 0.925 » SOE = o900 at] F a50 Jeng peee et 3 gen Sosy : 5 fa 0.850 0.85 Se eee SR eeE âSee es
Figure 11: The effect of the rewinding iteration (x-axis) on angle data order stability (top), angle prun- ing stability (middle), and accuracy (bottom) for each network in Figure 2. This ï¬gure accompanies Figure 4.
Angle (Degrees) Angle (Degrees) Test Accuracy Resnet-50 Squeezenet Inception-v3 g 8 B 2 50 50 é 2 100 2 ial 2 q 0 <0 0 oy y © & ° + % Ry ° + © 2 Resnet-50 Squeezenet Inception-v3 i 2 B 2 50 250 & 2 100 g a 2 i] 0 0 tC) oy ey © eS 3 + % Ry 3 + % 2 0.76 2 0.50 2» 080 FA Eos 0.74 2 0.25 2 z % 0.750 * 00 a sy + 8 & 3 + 2 3 + = s 12 gy â J procedure ââ random
Figure 12: The effect of the rewinding epoch (x-axis) on data order stability (top), pruning stability (middle), and accuracy (bottom) for each network in Figure 6.
13 | {
"id": "1602.07360"
} |
1903.01061 | Learning low-precision neural networks without Straight-Through Estimator(STE) | The Straight-Through Estimator (STE) is widely used for back-propagating
gradients through the quantization function, but the STE technique lacks a
complete theoretical understanding. We propose an alternative methodology
called alpha-blending (AB), which quantizes neural networks to low-precision
using stochastic gradient descent (SGD). Our method (AB) avoids STE
approximation by replacing the quantized weight in the loss function by an
affine combination of the quantized weight w_q and the corresponding
full-precision weight w with non-trainable scalar coefficient $\alpha$ and
$1-\alpha$. During training, $\alpha$ is gradually increased from 0 to 1; the
gradient updates to the weights are through the full-precision term,
$(1-\alpha)w$, of the affine combination; the model is converted from
full-precision to low-precision progressively. To evaluate the method, a 1-bit
BinaryNet on CIFAR10 dataset and 8-bits, 4-bits MobileNet v1, ResNet_50 v1/2 on
ImageNet dataset are trained using the alpha-blending approach, and the
evaluation indicates that AB improves top-1 accuracy by 0.9%, 0.82% and 2.93%
respectively compared to the results of STE based quantization. | http://arxiv.org/pdf/1903.01061 | Zhi-Gang Liu, Matthew Mattina | cs.LG, stat.ML | conference version accepted by IJCAI-2019 | null | cs.LG | 20190304 | 20190520 | 9 1 0 2
y a M 0 2 ] G L . s c [
2 v 1 6 0 1 0 . 3 0 9 1 : v i X r a
# Learning low-precision neural networks without Straight-Through Estimator (STE)
# Zhi-Gang Liu , Matthew Mattina Arm Machine Learning Research Lab. {zhi-gang.liu, matthew.mattina}@arm.com
# Abstract
The Straight-Through Estimator (STE) [Hinton, 2012][Bengio et al., 2013] is widely used for back-propagating gradients through the quantiza- tion function, but the STE technique lacks a com- plete theoretical understanding. We propose an al- ternative methodology called alpha-blending (AB), which quantizes neural networks to low precision using stochastic gradient descent (SGD). Our (AB) method avoids STE approximation by replac- ing the quantized weight in the loss function by an afï¬ne combination of the quantized weight wq and the corresponding full-precision weight w with non-trainable scalar coefï¬cient α and (1 â α). Dur- ing training, α is gradually increased from 0 to 1; the gradient updates to the weights are through the full precision term, (1âα)w, of the afï¬ne combina- tion; the model is converted from full-precision to low precision progressively. To evaluate the (AB) method, a 1-bit BinaryNet [Hubara et al., 2016a] on CIFAR10 dataset and 8-bits, 4-bits MobileNet v1, ResNet 50 v1/2 on ImageNet are trained us- ing the alpha-blending approach, and the evalua- tion indicates that AB improves top-1 accuracy by 0.9%, 0.82% and 2.93% respectively compared to the results of STE based quantization [Hubara et al., 2016a][TF-, 2018a][TF-, 2018c][Krishnamoor- thi, 2018].
1 Introduction Deep Neural Networks (DNNs) have demonstrated outstand- ing performance on a wide range of tasks, including image classiï¬cation [Krizhevsky et al., 2017], speech recognition [Hinton et al., 2012] etc. These networks typically consists of multiple convolution layers with a large number of parame- ters. The models are trained on high performance servers typ- ically with GPUs and are deployed on lower-end machines, i.e. mobile or IoT devices, for inference tasks. Improved inference accuracy usually comes with millions of model pa- rameters and high computation cost. For example, the largest Mobilenet v1 model [Howard et al., 2017a] has 4.2 million parameters and 569 million ï¬oating point MAC per inference [TF-, 2018a]. For applications that demand high inference
accuracy, low latency and low power consumption, the large memory requirements and computation costs are a signiï¬cant challenge for constrained platforms.
To achieve efï¬cient inference, one approach is to design compact network architectures from scratch [Howard et al., 2017b] [Iandola et al., 2016] [Rastegari et al., 2016a] [Li and Liu, 2016]. Alternatively, existing models can be op- timized for efï¬ciency. There are several optimization tech- niques that boost efï¬ciency when applied to pretrained mod- els: weight pruning [Han et al., 2015] [Goetschalckx et al., 2018], weight clustering [Han et al., 2015] [Goetschalckx et al., 2018], singular value decomposition (SVD) [Xue et al., 2013] and quantization [Courbariaux et al., 2015] [Rastegari et al., 2016b] [Zhou et al., 2017] [Warden, 2016]. The basic principle is to reduce the number of parameters and/or lower the computation cost of inference. Weight pruning techniques remove parameters while minimizing the impact on inference accuracy. Weight clustering clusters similar weights to shrink the overall size of a model. The SVD method potentially reduces both model size and computation cost through dis- carding small singular values. Quantization techniques con- vert normal ï¬oating-point values to narrow and cheaper inte- ger or ï¬xed point i.e. 8-bits, 4-bits or binary multiplication operations without incurring signiï¬cant loss in the accuracy. There are three major beneï¬ts to quantization: reduced mem- ory bandwidth, reduced memory storage, and higher through- put computation. The predominant numerical format used for training neural networks is IEEE fp32 format. There is a po- tential 4x reduction in overall bandwidth and storage if one can quantize fp32 ï¬oating point to 8-bits for both weight and activation. The corresponding energy and area saving are 18x and 27x [Dally, 2015] respectively. The efï¬cient computa- tion kernel libraries for fast inference, i.e. Arm CMSIS [arm, 2018], Gemmlowp [gem, ], Intel MKL-DNN [mkl, ], Nvidia TensorRT [nvi, ] and custom ASIC hardware, are built upon the reduced precision numerical forms.
[Hinton, (STE) 2012][Bengio et al., 2013] is widely implemented in discrete optimization using SGD due to its effectiveness and simplicity. STE is an empirical workaround to the lacks gradient vanishing issue in Backprop; however it complete mathematical justiï¬cation especially for large-scale
optimization problems [Penghang Yin, 2018]. In this paper, we propose a novel optimization technique, termed alpha-blending (AB), for quantizing full precision networks to lower precision representations(8-bits, 4-bits or 1-bit). AB does not rely on the concept of STE to back-propagate the gradient update to weights; AB instead replaces the weight vector w in the loss function by the expression wab = (1 â α)w + αwq, which is the afï¬ne combination of the w and its quantization wq. During training, we gradually increase the non-trainable parameter α from 0.0 to 1.0. This formulation isolates the quantized weights wq from the full-precision trainable weights w and therefore avoids the challenges arising from the use of Straight-Through Estimation (STE).
To evaluate the performance of the proposed method, we trained single-bit BinaryNet [Hubara et al., 2016a] on CI- FAR10 and 4-bits, 8-bits MobileNet v1, ResNet v1 and v2 models on the ImageNet dataset. AB outperforms previous state-of-art STE based quantization 0.9% for 1-bit BinaryNet [Kr- and 2.9% for 4-bits weight and 8-bits activation (4-8) ishnamoorthi, 2018] in top-1 accuracy. Moreover, we have applied our AB approach to quantize MobileNet v1, ResNet v1,2 networks with both 4-bit weight as well as 4-bit acti- vation (4b/4b). In this conï¬guration, our 4b/4b quantization delivers similar accuracy level as the best known 4b/8b quan- tization approach [Krishnamoorthi, 2018].
# 2 Related works
There is a signiï¬cant body of research on neural network quantization techniques from the deep learning community. [Courbariaux et al., 2015] binarizes the BinaryConnect weights of neural networks using the sign function. Bi- nary Weight Network [Rastegari et al., 2016c] has the same binarization while introducing a scaling factor. BinaryNet [Hubara et al., 2016b] [Hubara et al., 2016a] quantizes both weights and activations to binary values. TWN [Li and Liu, 2016] constructs networks with ternary values 0, +/-1 to bal- ance the accuracy and model compression compared to the binary quantization. STE [Hinton, 2012] is used to approx- imate the gradient of quantization function during the learn- ing process. Once they are quantized, these models eliminate majority of the ï¬oating-point multiplications, and therefore exhibit improved power efï¬ciency by using SIMD instruc- tions on commodity micro-processor or via special hardware. On the downside, the single bit quantization schemes often lead to substantial accuracy drop on large scale dataset while achieving good results on simple dataset such as MNIST, CI- FAR10.
Another approach is to train the network in full ï¬oating- point domain, then statically quantize the model parame- ter into reduced numerical forms and keep the activation in ï¬oating-point. Googles Tensorï¬ow provides a post-training quantization ï¬ow [TF-, 2018b] to convert ï¬oat-point weights into 8-bits of precision from INT8. Its uniform afï¬ne quan-
tization maps a set of ï¬oating-point values to 8-bits unsigned integers by shifting and scaling [Krishnamoorthi, 2018]. The minimum and maximum values correspond to quantized value 0 and 255 respectively. Another mapping scheme is uniform symmetric quantizer, which scales the maximum magnitude of ï¬oating-point values to maximum 8-bit inte- ger e.g. 127 and the ï¬oating-point zero always mapped to quantized zero. The conversion is done once, and reduc- tion of model size is up to 4X. A further improvement dy- namically quantizes activations into 8-bits as well at infer- ence. With 8-bits weight and activation, one can switch the most compute-intensive operations e.g. convolution, matrix multiply (GEMM) from original ï¬oating-point format to the cheaper operation, and reduces the latency as well.
The main drawback of such post-processing approach is the degradation in model accuracy. To overcome this accu- racy drop, quantization aware training [TF-, 2018b] tech- niques have been developed to ensure that the forward pass uses the reduced precision for both training and inference. To achieve this, full precision weights and activations values ï¬ow through fake quantization nodes, then quantized values feed through convolution or matrix multiply. Applying the Straight-Through Estimator (STE) approximation [Hinton, 2012] [Hubara et al., 2016a] [Yin et al., 2018], the opera- tions in the back propagation phase are still at full precision as this is required to provide sufï¬cient precision in accumu- lating small adjustment to the parameters.
# 3 Alpha-blending, the proposed method (AB)
We introduce an optimization methodology, alpha-blending (AB), for quantizing neural networks. Section 3.1 describes the scheme of AB and weights quantization; section 3.2 sketches the quantization of activation using AB.
# 3.1 Alpha-blending AB and quantization of weights
During quantization-aware training, the full precision weights are quantized to low precision values wq. Mathematically, we want to minimize a convex function L(w) as equation 1 with the additional constraint that w must be n-bit signed integers i.e. w â Q = [â(2nâ1 â 1), 2nâ1 â 1].
min s.t. wâQ L(w) (1)
Previous approaches i.e. [TF-, 2018b], [Hubara et al., 2016a] insert quantizer nodes in the computation graph. These nodes receive full precision input w and generate quantized out- put wq = q(w), between the full precision weights w and computation nodes as in Figure 1. The quantized weights wq = q(w) are used in the forward and backward pass while the gradient update to the full precision weight uses full pre- cision to ensure smooth updates to the weights. But the quantization function has zero gradient almost everywhere 0, which prevents further backpropagation of âwq/âw = a.e.
gradients and halts learning. The Straight-Through Estimator (STE) [Hinton, 2012] [Hubara et al., 2016a] [Krishnamoor- thi, 2018] was developed to avoid the vanishing gradient problem illustrated in Figure 1. STE approximates quantiza- tion with the identity function I(w) = w in Backprop as eq. 2. Therefore with STE, the gradient of the quantization func- tion with respect to the full precision weight is approximated using the quantized weight as in equation 3. We hypothesize that the error introduced by this approximation may impact the accuracy of the gradient computation, thereby degrading overall network accuracy, especially for very low precision (1-bit or 4-bit) networks.
Figure 1: Gradient update to the full precision weight in backprop using STE approximation as eq. 2 and 3.
âwq âw = âq(w) âw â ST E âI(w) âw = 1 (2)
âL(w) âw = âL(wq) âwq · âwq âw â ST E âL(wq) âwq (3)
Our proposed method, alpha-blending (AB), does not rely on the Straight-Through Estimator (STE) to overcome the quantizerâs vanishing gradient problem in Backprop, there- fore it eliminates the quantization error due to equation 3. AB replaces the weight term in the loss function by (1 â α)w + αwq, an afï¬ne combination of the original full pre- cision weight term and its quantized version with coefï¬cient α. The new loss function Lab(w, α) for a neural network is shown in equation 4. The gradient of Lab(w, α) with respect to the weights is in equation 5, accepting the zero gradient of quantization function âwq/âw = 0 without STE approxi- a.e. mation. Its Backprop ï¬ow is illustrated in ï¬gure 2.
Lab(w, α) = L((1 â α)w + αwq) (4)
OLap Noy, AL(wâ) dw 7 (LO + awe | @) =0 a.e. w =(1âa)w+awg,
The AB ï¬ow gradually increases the non-trainable param- eter α from 0 to 1 using a function of the form shown in equation 6 for training steps in the optimization window [T0, T1]. An example is shown in Figure 4. The function in
Figure 2: AB quantization performs the convolution using an afï¬ne combination of the full precision weights and the quantized weights. The coefï¬cient α is gradually increased from 0 to 1 during training. This approach avoids back-propagation through the Quantizer, elim- inating the gradient vanishing path (from the quantized weight node in light green to the weight node in blue). There is no need to apply Straight Trough Estimator (STE) during the Backprop. The actual weight gradient update goes through the (1 â α) path, where the gradient, eq. 5, is well-deï¬ned.
Algorithm 1 Alpha-blending optimization (ABO)
Input: derivative loss function L(w) Def. function: Lay(w, Wg, a) = L((1 â a)w + aw,) Initialize: w <â wo,a ¢ 0,¢ © learning-rate,f â optimization_frequency, To, T, < traing-window for step = 0toT do Wq â Algorithm[2] PPO(w) or other optimization func- tion (w) wow-e-(1âa) âO) w =(1âa)w+aw, if step % f = 0 and a < 1 then a+ A(step) {Raising a toward to 1.0; eq|6} end if end for Output: w,
equation 6 is not unique, for example, an alternative choice is A(step, λ) = 1 â eâλ·step. The optimization window [T0, T1], during which α is increased, is a user-deï¬ned hyper parameter.
We use algorithm 2 described in section 4.2 to convert w to wq = γw · qw, where γw is a scaling factor and qw â Q, at certain frequency, quantizing frequency, in training steps.
A(step) = 0 1 â ( T1âstep T1âT0 1 step ⤠T0 )3 T0 < step ⤠T1 T1 < step (6)
Algorithm 1 summarizes the AB optimization procedure, in which the original learning rate ε is scaled by the factor (1 â α) to act as an effective learning rate ε · (1 â α).
To visualize the process, ï¬gure 3 demonstrates how to (w â 5.7)2 = 6 using AB. solve the trivial example, arg min s.t. wâQ
To compare the AB optimization concept with STE, we trained the single bit 8-layer BinaryNet deï¬ned in [Hubara et al., 2016a] on the CIFAR10 dataset in section 4.1, ï¬gure 5. The top-1 accuracy score achieved with AB is 0.9% higher compared to the accuracy achieved with STE.
Figure 4 shows a more practical example of AB quan- tization using MobileNet 1.0 0.25/128 v1 on the ImageNet dataset. The AB quantization ï¬ow gradually transforms the full precision model at α = 0 to a model with quantized weights wq at α = 1.0 with an accuracy loss of 0.6% ver- sus the full precision model.
xXx w: weight +++ q: quantized weight t 29 te (w,0)=(5.506555,1) (qo,0)=(6,1) Loss(w,a) e S
Figure 3: Apply AB to minimize a trivial example Loss(w) = (wâ 5.7)2, equivalently to ï¬nd the minimal of 2D surface Loss(w, α) = ((1 â α)w + αwq)2 using SGD while alpha (α) has changed from 0 to 1 using function A(·) in eq. 6, and wq = round(w). w started at the initial value (w, α)=(2.0, 0) and moved along the X trace, its corresponding quantized weights are marked by +. In 20 steps, the iteration converged to (w,α)=(5.506555, 1). wq = 6 is the ï¬nal quantized solution.
# 3.2 Quantization of activation
AB uses the PPQ, algorithm 2 in section 4.2, to quantize the input feature maps or activation a to aq as well, and accu- mulates the scaling factor γa via exponential moving average with the smoothing parameter being close to 1, e.g. 0.99. Thus a can be approximated as a â γa · qa
For inference (α=1), the ï¬oating point computation of the kth layer in forward pass is a(k+1) = δ(w(k)a(k) +b(k)). With the quantizaton of both weight and activation, the same cal-
Mobilenet v1 0.25/128, top-1 accuracy with weight w and quantized weight q
0.5
weight w quantized weight wq 1 0.8 0.4 y c a r u c c A 0.6 ) a h p l a ( α 0.3 0.4 0.2 α = A(step) 0.2 0 0.2 0.4 0.6 training step 0.8 1 ·105 0
Figure 4: Two accuracy curves, evaluated with the full precision weights w and 8-bits quantized weights wq during AB quantization training with Mobilenet 0.25/128 V1 for 2.5 epochs. The α curve correspond- ing to full precision weights has dropped 10% during the training , with the quantized weights, has gradually increased while the to approach its maximum accuracy 40.9% when α = 1.0. The ï¬nal quantized model has 0.6% accuracy loss compared to full precision one.
culation becomes eq. 7.
δ(w · a + b) â δ(γwqw · γaqa + b) = δ((γwγa)(qw · qa) + b) (7)
(qw · qa) in 7 is the compute-intensive operation of matrix multiply or convolution (GEMM) in low precision quantized values, which will gain signiï¬cant power efï¬ciency compared to the original ï¬oating-point version. Other relatively unim- portant terms in 7 e.g. (γwγa) and b can be represented by higher precision ï¬xed points.
# 4 Experiments
To evaluate the AB quantization methodology, we performed several experiments. The ï¬rst one, in section 4.1, is a single bit (1-bit) control test between STE and AB on CIFAR10. Section 4.2 presents results for Mobilenet v1 and ResNet v1,2 with the ImageNet ILSVRC 2012 dataset. All evaluations were performed on a x86 64 ubuntu Linux based Xeon server, Lenovo P710, with a TitanV GPU.
# 4.1 BinaryNet with alpha-blending AB and Straight-Through Estimator (STE)
To evaluate ABâs function directly, 1-bit BinaryNet (BNN) 1 [Hubara et al., 2016a] on CIFAR-10 was trained on Tensor- ï¬ow using AB and STE respectively. Both weight and activa- tion are quantized into +1 or -1 (single bit) by the same bina- rization function, binarize(x) = Sign(x). Figure 5 shows
1https://github.com/itayhubara/BinaryNet.tf
the results of these experiments. The AB method achieves a top-1 accuracy of 88.1%. Using STE, we achieve 87.2%. The FP32 baseline accuracy is 89.6%.
CIFAR-10 TRAING CURVES 1.00 3.0 â AB top-1 â BNN top-1 0.95} â p32 top-1 25 0.90 ose 2.0 0.881 0.872 accuracy 0.80 1.0 0.75 bh i ZV ABloss os -- BNN loss ~~ fp32 loss 0.70 (y 0.0 20000 40000 60000 80000 step
Figure 5: Training curves of BinaryNet on CIFAR-10 dataset. The dashed lines represent the validation Loss and the continuous lines are the corresponding validation accuracy. The blue curve of fp32 baseline has max top-1 accuracy 0.896. BNN which utilized STE in training, blue line, converges to 0.872, while the red line of AB yields a better top-1 accuracy 0.881.
# 4-bits and 8-bits quantization with MobileNet and ResNet
In this section, we describe the iterative quantization scheme we use to quantize FP32 values to low-precision, Progressive Projection Quantization (PPQ). We apply PPQ to convert ï¬oating point values into 4-bits or 8-bits integers, then utilize PPQ and AB to quantize MobileNet and ResNet into 4-bits or 8-bits and compare the result with existing results. All results are consolidated into Figure 6 for easy comparison.
# Progressive projection quantization, PPQ
To quantize a set of N floating-point values x = {xii ⬠[0, N â 1]} to symmetric n-bits signed ineger | set Xp = {a | a ⬠Q= (0, +1, +2,...,4(2"-1â1)),é ⬠[0,N - | } with a positive scaling factor 7, we can approximate the initial quantization by rounding = to the nearest- __maa|a| one ta1
neighbor in Q as equation 8. Then we can improve γ by equa- tion 9.
x γ xq = round( ) (8)
(X, Xq) y= 9) â (Xq,Xq) (
PPQ is an iterative procedure: by repeatedly applying eq. 8 and 9, as described in algorithm 2, projects vector x onto space Q to determine γ progressively [Leng et al., 2017]. The
Algorithm 2 Progressive Project Quantization (PPQ)
Input: full precision vector x = {zilé ⬠[0,.N - ij}. scaling factor 7 ify < 0 then
Initialize y â mac (xl) end if repeat Yo â 7 for i = 0 to N â1do G + round(=) end for ) , x4 T= Ga) until 7 = 70 Output: q, 7
procedure is guaranteed to converge to a local minimum. In practice, convergence is very fast and 3 iterations is enough. Thus, x can be approximated by the product of the scalar γ and xq: x â γxq = γ · round( x γ )
# Evaluation of 8-bits weight and activation (INT8-8)
The top-1 accuracy for 8-bit weight and 8-bit activation quan- tization are listed in table 1. The 2nd column gives the fp32 accuracy of the pre-trained models [TF-, 2018a]. The 3rd column contains the quantization results [TF-, 2018a] [TF-, 2018c]. The last column gives the best results that AB gener- ated.
Both quantization approaches delivered roughly the same top-1 accuracy, although AB has slightly (0.82%) better ac- curacy on average.
Table 1: top-1 accuracy of fp32 pre-trained models, Tensorï¬owâs INT8-8 and AB 8-8. â [Krishnamoorthi, 2018]
Model name MB 1.0 224v1 MB 1.0 128v1 MB 0.75 224v1 MB 0.75 128v1 MB 0.5 224v1 MB 0.5 128v1 MB 0.25 224v1 MB 0.25 128v1 ResNet 50v1 ResNet 50v2 fp32 % 70.9 65.2 68.4 62.1 63.3 56.3 49.8 41.5 75.2 75.6 TF8-8 % 70.1 63.4 67.9 59.8 62.2 54.5 48 39.5 75â 75â AB8-8 % 70.9 65.0 68.2 61.6 63.0 55.8 49.2 40.9 75.1 75.4
Evaluation of 4-bits weight and 8-bits activation (INT4-8)
[Krishnamoorthi, 2018] reported that accuracy of 4-bits weight and 8-bits activation (INT4-8) is within 5% of the fp32 baseline for Mobilenet v1 and ResNet networks. We ran the same models using AB quantization, and have listed the result
MB 10 224 v1
70.9%
MB 10 224 v1 70.9% 70.1% 70.9% 65.0% 68.7% 69.6% 61.8% 64.3% MB 1.0 128 v1 65.2% 63.4% 65.0% MB 0.75 224 v1 68.4% 67.9% 68.2% MB 0.75 128 v1 62.1% 59.8% 61.6% MB 0.5 224 v1 63.3% 62.2% 63.0% MB 0.5 128 v1 56.3% 54.5% 55.8% MB 0.25 224 v1 49.8% 48% 49.2% MB 0.25 128 v1 41.5% 39.5% 40.9% ResNet 50 v1 75.2% 75% 75.1% 73.2% 73.8% 74.3% 69.6% f p32 baseline 71.2% T F 8 â 8 ResNet 50 v2 75.6% AB8 â 8 75% T F 4 â 8 perâchannel 75.4% AB4 â 8 perâlayerl 72.0% AB4 â 8 perâchannel 74.6% 75.1% AB4 â 4 perâlayer 71.6% AB4 â 4 perâchannel 72.2% 0% 50% 100% Top-1 accuracy
Figure 6: Top-1 accuracy of fp32, Tensorï¬ow(TF)âs and Alpha- blending(AB) optimization with 8-bits or 4-bits numerical forms. 8-8: 8-bits weight and activation; 4-8: 4-bits weight and 8-bits acti- vation; 4-4: 4-bits weight and activation.
in the 4th and 5thcolumns in table 2. The 4th one is per-layer quantization, and 5th is per-channel.
AB INT4-8 achieved a 1.53% accuracy drop on average compared to the fp32 baseline for per-layer quantization, and a 0.9% accuracy drop for per-channel quantization. More- over, ABâs INT4-8 per-channel performance outperforms the prior result [Krishnamoorthi, 2018] in 3rd col. by 2.93%.
Table 2: Top-1 accuracy: the pre-trained accuracies are in 2nd col.; Tensorï¬owâs INT4-8 - 4bits weight and 8-bits activation are in 3rd col.; AB INT4-8 - 4-bits weight and 8-bits activation in 4th and 5th cols. Note: for MobileNet in table 2, the ï¬rst layer and all depth- wise convolution layers, which are only 1.1% of all the weights and consume 5.3% of total MAC operations for inference are quantized into 8-bits. For ResNet v1 and v2, weight and activation of the ï¬rst layer are quantized into 8-bits. +[Krishnamoorthi, 2018]
Model name MB1.0 224v1 MB0.75 224v1 MB0.50 224v1 MB0.25 224v1 ResNet 50v1 ResNet 50v2 fp32 % 70.9 68.4 63.3 49.8 75.2 75.6 TF4-8 % 65.0+ - - - 73.2+ 72+ AB4-8 per-layer % 68.7 65 58.4 43.8 73.8 74.6 AB4-8 per-channel % 69.6 - - - 74.3 75.1
Evaluation of 4-bits weight and 4-bits activation (INT4-4)
Finally, we quantized the well-known neural networks, Mo- bileNet 1.0 224 v1 and ResNet 50 v1/v2, using 4-bit weights and 4-bit activations. The 4th column in table 3 is for per-
layer quantization, whose accuracy is 5.5% lower than fp32âs in average. The per-channel quantization in the 5th column has 4.66% accuracy loss. ABâs INT4-4 result, using per- channel quantization, achieves similar accuracy as the TF4-8 scheme [Krishnamoorthi, 2018], which has 4-bits weight and 8-bits activation as shown in the 3rd column.
Table 3: top-1 accuracy of fp32, Tensorï¬owâs INT4-8 and AB INT4- 4 quantization. The ï¬rst layers, all depth-wise layers and the last layer are quantized in to 8-bits, and all other layers are in 4-bits both for weight and activation. +[Krishnamoorthi, 2018]
Model name MB1.0 224v1 ResNet 50v1 ResNet 50v2 fp32 % 70.9 75.2 75.6 TF4-8 % 65.0+ 73.2+ 72+ AB4-4 per-layer % 61.8 69.6 71.6 AB4-4 per-channel % 64.3 71.2 72.2
# 5 Conclusion and future work
We have introduced alpha-blending (AB), an alterna- tive method to the well-known Straight-Through Estimator (STE) for learning low precision neural networks using SGD. AB accepts the almost everywhere zero gradient of quantiza- tion function during Backprop, and uses an afï¬ne combina- tion of the original full-precision weights and corresponding quantized values as the actual weights in the loss function. This change allows the gradient update to the full-precision weights in backward propagation to be performed through the full-precision path incrementally, instead of applying STE to the quantization path.
To measure the impact on network accuracy using the AB methodology, we have trained a single-bit BinaryNet(BBN) [Hubara et al., 2016a] on CIFAR10 to show that AB gener- ates equivalent or better accuracy compared to training with STE. Moreover, we have applied the AB metholody to larger, more practical networks such as MobileNet and ResNet to compare with STE based quantization. The top-1 accuracy of 8-bits weight and 8-bits activation is 0.82% better than the existing state-of-art results [TF-, 2018a][TF-, 2018c]. For 4- bits weight and 8-bits activation quantization, AB has 2.93% higher top-1 accuracy on average compared to that reported in [Krishnamoorthi, 2018].
AB can also be applied to several other network optimiza- tion techniques besides quantization. We plan to investigate AB on clustering and pruning in a future work.
# References
[arm, 2018] Arm cmsis nn software library. http://arm-
software.github.io/CMSIS5/NN/html/index.html, 2018. [Bengio et al., 2013] Yoshua Bengio, Nicholas L´eonard, and Aaron C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. CoRR, abs/1308.3432, 2013.
[Courbariaux et al., 2015] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propaga- tions. CoRR, abs/1511.00363, 2015.
[Dally, 2015] William Dally. Nips tutorial 2015. https://media.nips.cc/Conferences/2015/tutorialslides/Dally- NIPS-Tutorial-2015.pdf, 2015.
[gem, ] Gemmlowp: a small self-contained low-precision gemm library. https://github.com/google/gemmlowp.
Bert Moons, Patrick Wambacq, and Marian Verhelst. Efï¬- ciently combining svd, pruning, clustering and retraining for enhanced neural network compression. pages 1â6, 06 2018.
[Han et al., 2015] Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural net- work with pruning, trained quantization and huffman cod- ing. CoRR, abs/1510.00149, 2015.
[Hinton et al., 2012] Geoffrey Hinton, li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Phuongtrang Nguyen, Tara Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29:82â97, 11 2012.
[Hinton, 2012] G. Hinton. Neural networks for machine learning, 2012.
[Howard et al., 2017a] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mo- bilenets: Efï¬cient convolutional neural networks for mo- bile vision applications. CoRR, abs/1704.04861, 2017. [Howard et al., 2017b] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mo- bilenets: Efï¬cient convolutional neural networks for mo- bile vision applications. CoRR, abs/1704.04861, 2017. [Hubara et al., 2016a] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Bina- rized neural networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4107â 4115. Curran Associates, Inc., 2016.
[Hubara et al., 2016b] Itay Hubara, Daniel Soudry, and CoRR, Ran El Yaniv. abs/1602.02505, 2016. Withdrawn. Binarized neural networks.
Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <1mb model size. CoRR, abs/1602.07360, 2016.
Krishnamoorthi. Quantizing deep convolutional networks for efï¬cient inference: A whitepaper. CoRR, abs/1806.08342, 2018.
[Krizhevsky et al., 2017] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classiï¬cation with deep convolutional neural networks. Commun. ACM, 60(6):84â 90, May 2017.
[Leng et al., 2017] Cong Leng, Hao Li, Shenghuo Zhu, and Rong Jin. Extremely low bit neural network: Squeeze the last bit out with ADMM. CoRR, abs/1707.09870, 2017. [Li and Liu, 2016] Fengfu Li and Bin Liu. Ternary weight
networks. CoRR, abs/1605.04711, 2016.
[mkl, ] Intel(r) math kernel library for deep neural networks. https://intel.github.io/mkl-dnn/index.html.
http://on- demand.gputechconf.com/gtc/2017/presentation/s7310-8- bit-inference-with-tensorrt.pdf.
[Penghang Yin, 2018] Shuai Zhang Stanley Osher Yingyong Qi Jack Xin Penghang Yin, Jiancheng Lyu. Understanding straight-through estimator in training activation quantized neural nets. 2018.
[Rastegari et al., 2016a] Mohammad Rastegari, Vicente Or- donez, Joseph Redmon, and Ali Farhadi. Xnor-net: Ima- genet classiï¬cation using binary convolutional neural net- works. CoRR, abs/1603.05279, 2016.
[Rastegari et al., 2016b] Mohammad Rastegari, Vicente Or- donez, Joseph Redmon, and Ali Farhadi. Xnor-net: Ima- genet classiï¬cation using binary convolutional neural net- works. CoRR, abs/1603.05279, 2016.
[Rastegari et al., 2016c] Mohammad Rastegari, Vicente Or- donez, Joseph Redmon, and Ali Farhadi. Xnor-net: Ima- genet classiï¬cation using binary convolutional neural net- works. CoRR, abs/1603.05279, 2016. mobilenet
[TF-, 2018a] Tensorï¬ow, v1. https:
//github.com/tensorï¬ow/models/blob/master/research/ slim/nets/mobilenet v1.md, 2018. [TF-, 2018b] Tensorï¬ow quantization.
https://github.com/ tensorï¬ow/tensorï¬ow/tree/master/tensorï¬ow/contrib/ quantize, 2018.
[TF-, 2018c] Tensorï¬ow, resnet v1 & v2. https://github.com/ tensorï¬ow/models/tree/master/research/slim, 2018.
[Warden, 2016] Pete Warden. 2016 blog posts on quan- https://petewarden.com/2015/05/23/why- tization. are-eight-bits-enough-for-deep-neural-networks, https://petewarden.com/2016/05/03/how-to- quantize-neural-networks-with-tensorï¬ow, https://petewarden.com/2017/06/22/what-ive-learned- about-neural-network-quantization, 2016.
[Xue et al., 2013] J Xue, J Li, and Y Gong. Restructuring of deep neural network acoustic models with singular value decomposition. Proceedings of the Annual Conference of the International Speech Communication Association, IN- TERSPEECH, pages 2365â2369, 01 2013.
[Yin et al., 2018] Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, and Jack Xin. Blended Coarse Gradient Descent for Full Quantization of Deep Neural Networks. arXiv e-prints, page arXiv:1808.05240, Aug 2018.
[Zhou et al., 2017] Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quanti- zation: Towards lossless cnns with low-precision weights. CoRR, abs/1702.03044, 2017. | {
"id": "1808.05240"
} |
1903.00784 | Neural MMO: A Massively Multiagent Game Environment for Training and Evaluating Intelligent Agents | The emergence of complex life on Earth is often attributed to the arms race
that ensued from a huge number of organisms all competing for finite resources.
We present an artificial intelligence research environment, inspired by the
human game genre of MMORPGs (Massively Multiplayer Online Role-Playing Games,
a.k.a. MMOs), that aims to simulate this setting in microcosm. As with MMORPGs
and the real world alike, our environment is persistent and supports a large
and variable number of agents. Our environment is well suited to the study of
large-scale multiagent interaction: it requires that agents learn robust combat
and navigation policies in the presence of large populations attempting to do
the same. Baseline experiments reveal that population size magnifies and
incentivizes the development of skillful behaviors and results in agents that
outcompete agents trained in smaller populations. We further show that the
policies of agents with unshared weights naturally diverge to fill different
niches in order to avoid competition. | http://arxiv.org/pdf/1903.00784 | Joseph Suarez, Yilun Du, Phillip Isola, Igor Mordatch | cs.MA, cs.LG, stat.ML | null | null | cs.MA | 20190302 | 20190302 | 9 1 0 2
r a M 2 ] A M . s c [
1 v 4 8 7 0 0 . 3 0 9 1 : v i X r a
# Neural MMO: A Massively Multiagent Game Environment for Training and Evaluating Intelligent Agents
# Joseph Suarez Yilun Du Phillip Isola Igor Mordatch
# Abstract
The emergence of complex life on Earth is of- ten attributed to the arms race that ensued from a huge number of organisms all competing for ï¬nite resources. We present an artiï¬cial intelligence re- search environment, inspired by the human game genre of MMORPGs (Massively Multiplayer On- line Role-Playing Games, a.k.a. MMOs), that aims to simulate this setting in microcosm. As with MMORPGs and the real world alike, our en- vironment is persistent and supports a large and variable number of agents. Our environment is well suited to the study of large-scale multiagent interaction: it requires that agents learn robust combat and navigation policies in the presence of large populations attempting to do the same. Baseline experiments reveal that population size magniï¬es and incentivizes the development of skillful behaviors and results in agents that out- compete agents trained in smaller populations. We further show that the policies of agents with unshared weights naturally diverge to ï¬ll different niches in order to avoid competition.
number of players, and a seeming âopen-endednessâ, where ever new and more complex species emerge over time, with no end in sight (Stanley et al., 2017).
Our aim is to develop a simulation platform (see Figure 1) that captures important properties of life on Earth, while also borrowing from the interpretability and abstractions of human-designed games. To this end, we turn to the game genre of Massively Multiplayer Online Role-Playing Games (MMORPGs, or MMOs for short). These games involve a large, variable number of players competing to survive and prosper in persistent and far-ï¬ung environments. Our platform simulates a âNeural MMOâ â an MMO in which each agent is a neural net that learns to survive using RL.
We demonstrate the capabilities of this platform through a series of experiments that investigate emergent complexity as a function of the number of agents and species that com- pete in the simulation. We ï¬nd that large populations act as competitive pressure that encourages exploration of the environment and the development of skillful behavior. In addition, we ï¬nd that when agents are organized into species (share policy parameters), each species naturally diverges from the others to occupy its own behavioral niche. Upon publication, we will opensource the platform in full.
# 1. Introduction
# 2. Background and Related Work
Life on Earth can be viewed as a massive multiagent compe- tition. The cheetah evolves an aerodynamic proï¬le in order to catch the gazelle, the gazelle develops springy legs to run even faster: species have evolved ever new capabilities in order to outcompete their adversaries. The success of biological evolution has inspired many attempts at creating âartiï¬cial lifeâ in silico.
In recent years, the ï¬eld of deep reinforcement learning (RL) has embraced a related approach: train agents by hav- ing them compete in simulated games (Silver et al., 2016; OpenAI, 2018; Jaderberg et al., 2018). Such games are immediately interpretable and provide easy metrics derived from the gameâs âscoreâ and win conditions. However, pop- ular game benchmarks typically deï¬ne a narrow, episodic task with a small ï¬xed number of players. In contrast, life on Earth involves a persistent environment, an unbounded
Artiï¬cial Life and Multiagent Reinforcement Learning Research in âArtiï¬cial lifeâ aims to model evolution and natural selection in biological life; (Langton, 1997; Ficici & Pollack, 1998). Such projects often consider open-ended skill learning (Yaeger, 1994) and general morphology evo- lution (Sims, 1994) as primary objectives. Similar problems have recently resurfaced within multiagent reinforcement learning where the continual co-adaptation of agents can introduce additional nonstationarity that is not present in single agent environments. While there have been multiple attempts to formalize the surrounding theory (Hern´andez- Orallo et al., 2011; Strannegrd et al., 2018), we primarily consider environment-driven works. These typically con- sider either complex tasks with 2-10 agents (Bansal et al., 2017; OpenAI, 2018; Jaderberg et al., 2018) or much sim- pler environments with tens to upwards of a million agents
Neural MMO
Game tile maps Learned value function (overlaid over game map) Visualize High value © = o > FS a4 Visitation map (overlaid over game map) Simulate lifetimes Update policies Distance from agent ââ_> Attraction No dep. Repulsion Inter-agent Dependencies (centered on an agent)
Figure 1. Our Neural MMO platform provides a procedural environment generator and visualization tools for value functions, map tile visitation distribution, and agent-agent dependencies of learned policies. Baselines are trained with policy gradients over 100 worlds.
(Lowe et al., 2017; Mordatch & Abbeel, 2017; Bansal et al., 2017; Lanctot et al., 2017; Yang et al., 2018a; Zheng et al., 2017; Jaderberg et al., 2018). Most such works further focus on learning a speciï¬c dynamic, such as predator-prey Yang et al. (2018b) or are more concerned with the study than the learning of behavior, and use hard-coded rewards Zheng et al. (2017). In contrast, our work focuses on large agent populations in complex environments.
hour, lack persistence, and lack the game mechanics sup- porting large persistent populations â there is still a large gap in environment complexity compared to the real world.
Role-playing games (RPGs) such as Pokemon and Final Fantasy, are in-depth experiences designed to engage human players for hundreds of hours of persistent gameplay. Like the real world, problems in RPGs have many valid solutions and choices have long term consequences.
Game Platforms for Intelligent Agents The Arcade Learning Environment (ALE) (Bellemare et al., 2013) and Gym Retro (Nichol et al., 2018) provide 1000+ limited scope arcade games most often used to test individual re- search ideas or generality across many games. Better per- formance at a large random subset of games is a reasonable metric of quality. However, recent results have brought into question the overall complexity each individual environment (Cuccu et al., 2018), and strong performance in such tasks is not particularly difï¬cult for humans.
MMORPGs are the (massively) multiplayer analogs to RPGs. They are typically run across several persistent servers, each of which contains a copy of the environment and supports hundreds to millions of concurrent players. Good MMOs require increasingly clever, team-driven usage of the game systems: players attain the complex skills and knowledge required for the hardest challenges only through a curriculum of content spanning hundreds of hours of game- play. Such a curriculum is present in many game genres, but only MMOs contextualize it within persistent social and economic structures approaching the scale of the real world.
More recent work has demonstrated success on multiplayer games including Go (Silver et al., 2016), the Multiplayer On- line Battle Arena (MOBA) game DOTA2 (OpenAI, 2018), and Quake 3 Capture the Flag (Jaderberg et al., 2018). Each of these projects has advanced our understanding of a class of algorithms. However, these games are limited to 2-12 players, are episodic, with game rounds on the order of an
# 3. Neural MMO
We present a persistent and massively multiagent environ- ment that deï¬nes foraging and combat systems over pro- cedurally generated maps. The Supplement provides full
Neural MMO
Attack Mage Terrain Key Stone Grass Lava Forest Water Scrub
Figure 2. Our platform includes an animated 3D client and a toolbox used to produce the visuals in this work. Agents compete for food and water while engaging in strategic combat. See the Neural MMO section for a brief overview and the Supplement for full details.
environment details and Figure 2 shows a snapshot. The core features are support for a large and variable number of agents, procedural generation of tile-based terrain, a food and water foraging system, a strategic combat system, and inbuilt visualization tools for analyzing learned policies
direction. Our environment is designed as such. Instead of rewarding agents for achieving particular objectives op- timize only for survival time: they receive reward rt = 1 for each time step alive. Competition for ï¬nite resources mandates that agents must learn intelligent strategies for gathering food and water in order to survive.
Agents (players) may join any of several servers (environ- ment instances). Each server contains an automatically gen- erated tile-based environment of conï¬gurable size. Some tiles, such as food-bearing forest tiles and grass tiles, are traversable. Others, such as water and solid stone, are not. Upon joining a server, agents spawn at a random location along the edges of the environment. In order remain healthy (maintain their health statistic), agents must obtain food and water â they die upon reaching reaching 0 health. At each server tick (time step), agents may move one tile and make an attack. Stepping on a forest tile or next to a water tile reï¬lls a portion of the agentâs food or water supply, respec- tively. However, forest tiles have a limited supply of food; once exhausted, food has a 2.5 percent chance to regenerate each tick. This means that agents must compete for food tiles while periodically reï¬lling their water supply from in- ï¬nite water tiles. They may attack each other using any of three attack options, each with different damage values and tradeoffs. Precise foraging and combat mechanics are detailed in the Supplement.
Agents observe local game state and decide on an action each game tick. The environment does not make any further assumptions on the source of that decision, be it a neural network or a hardcoded algorithm. We have tested the envi- ronment with up to 100 million agent trajectories (lifetimes) on 100 cores in 1 week. Real and virtual worlds alike are open-ended tasks where complexity arises with little
One purpose of the platform is to discover game mechanics that support complex behavior and agent populations that can learn to make use of them. In human MMOs, developers aim to create balanced mechanics while players aim to max- imize their skill in utilizing them. The initial conï¬gurations of our systems are the results of several iterations of balanc- ing, but are by no means ï¬xed: every numeric parameter presented is editable within a simple conï¬guration ï¬le.
# 4. Architecture and Training
Agents are controlled by policies parameterized by neural networks. Agents make observations 0; of the game state 8, and follow a policy 7(0,;) â a; in order to make ac- tions a,. We maximize a return function R over trajectory T = (01,41,7t,--.,07,a7,77T). This is a discounted sum of survival rewards: R(r) = 0} y*r; where 7 = 0.99, T is the time at death and the survival reward r; equals 1, as motivated previously. The policy 7 may be different for each agent or shared. Algorithm 1 shows high level training logic. The Supplement details the tile-based game state s; and hyperparameters (Table 1).
Neural MMO
Tournament at Population Size 16 800 o 1 ) oo on Cc @ 700 e 64 ⬠~~~ 128 & 600 ~ 5 2 500 ec ¢ 400 o £ Ss g a 1632 64 128 Opponent Population Size at Train Time
Tournament at Population Size 32 = u w 450. © 7 ~e--===-0 On c âs ~<L e 64 *s._ 9 128 =~ -o 5016 32 64 128 Opponent Population Size at Train Time
800 o 1 = u ) oo on w 450. © 7 ~e--===-0 On Cc c âs @ 700 e 64 ~<L e 64 ⬠~~~ 128 *s._ 9 128 & 600 ~ =~ 5 2 500 ec ¢ 400 o £ Ss -o g a 1632 64 128 5016 32 64 128 Opponent Population Size at Train Time Opponent Population Size at Train Time Tournament at Population Size 64 Tournament at Population Size 128 250 Q==-- LL, e16 ----- OL 16 2 ee en @ 32 120 or SSS @ 32 g 225 rs e 64 Vio & â=~ 9 64 f= *ss. @ 128 ⬠6128 G 200 SSL 5 100 £ ~~ £ --9~ 5175 © S oe sae 3° 5 90 âeo. F 150 KB Tes. < ao < 80 me D128 eer ems e 7 o==6 an) £ o es £ 70 Ty £100 tet A ~-e8L Dey o @ 60 Tag Be = = sei Tassel = â 50 FT meg 16 32 64 128 16 32 64 128 Opponent Population Size at Train Time Opponent Population Size at Train Time
Tournament at Population Size 64 250 Q==-- LL, e16 2 ee en @ 32 g 225 rs e 64 f= *ss. @ 128 G 200 SSL £ ~~ 5175 © 3° F 150 < ao D128 eer ems e £ o es £100 tet o = = 16 32 64 128 Opponent Population Size at Train Time
Tournament at Population Size 128 ----- OL 16 120 or SSS @ 32 Vio & â=~ 9 64 ⬠6128 5 100 £ --9~ S oe sae 5 90 âeo. KB Tes. < 80 me 7 o==6 an) £ 70 Ty A ~-e8L Dey @ 60 Tag Be = sei Tassel â 50 FT meg 16 32 64 128 Opponent Population Size at Train Time
Figure 3. Maximum population size at train time varies in (16, 32, 64, 128). At test time, we merge the populations learned in pairs of experiments and evaluate lifetimes at a ï¬xed population size. Agents trained in larger populations always perform better.
1 Agent 8 Agents 32 Agents 128 Agents ) rz 2 = 4
Figure 4. Population size magniï¬es exploration: agents spread out to avoid competition.
Species 8 Species 1 Species Game Map < 8 i x= > - 5 Fs â4 § s $ = fo} a
Figure 5. Populations count (number of species) magniï¬es niche formation. Visitation maps are overlaid over the game map; different colors correspond to different species. Training a single population tends to produce a single deep exploration path. Training eight populations results in many shallower paths: populations spread out to avoid competition among species.
Algorithm 1 Neural MMO logic for one game tick. See Experiments (Technical details) for spawning logic. The al- gorithm below makes two omissions for simplicity. First, we use multiple policies and sample a policy Ï â¼ Ï1, . . . , ÏN from the set of all policies when spawning a new agent. Second, instead of performing a policy gradient update ev- ery game tick, we maintain experience buffers from each environment and perform an update once all buffers are full.
for each environment server do
# if number of agents alive < spawn cap then
spawn an agent
end if for each agent do
i â population index of the agent Make observation ot, decide action Ïi(ot) â at Environment processes at, computes rt, and updates agent health, food, etc. if agent is dead then remove agent
# end if end for Update environment state st+1 â f (st, at)
end for Perform a policy gradient update on policies Ï â¼ Ï1, . . . , ÏN using ot, at, rt from all agents across all environment servers
Input We set the observation state ot equal to the crop of tiles within a ï¬xed L1 distance of the current agent. This includes tile terrain types and the select properties (such as health, food, water, and position) of occupying agents. Our choice of ot is an equivalent representation of what a human sees on the screen, but our environment supports other choices as well. Note that computing observations does not require rendering.
Output Agents output action choices at for the next time step (game tick). Actions consist of one movement and one attack. Movement options are: North, South, East, West, and Pass (no movement). Attack options are labeled: Melee, Range, and Mage, with each attack option applying a speciï¬c preset amount of damage at a preset effective dis- tance. The environment will attempt to execute both actions. Invalid actions, (e.g. moving into stone), are ignored.
Our policy architecture preprocesses the local environment by embedding it and ï¬attening it into a single ï¬xed length vector. We then apply a linear layer followed by linear output heads for movement and attack decisions. New types of action choices can be included by adding additional heads. We also train a value function to estimate the discounted return. As agents receive only a stream of reward 1, this is equal to a discounted estimate of the agentâs time until death. We use a value function baselines policy gradient
Neural MMO
loss and optimize with Adam. It was possible to obtain good performance without discounting, but training was less stable. We provide full details in the supplements.
# 5. Experiments
We present an initial series of experiments using our plat- form to explore multiagent interactions in large popula- tions. We ï¬nd that agent competence scales with population size. In particular, increasing the maximum number of con- current players (Nent) magniï¬es exploration and increas- ing the maximum number of populations with unshared weights (Npop) magniï¬es niche formation. Agents poli- cies are sampled uniformly from a number of âpopulationsâ Ï â¼ Ï1, . . . , ÏN . Agents in different populations have the same architecture but do not share weights.
Technical details We run each experiment using 100 worlds. We deï¬ne a constant C over the set of worlds W . For each world w â W , we uniformly sample a c â (1, 2, ...C). We deï¬ne âspawn capâ such that if world w has a spawn cap c, the number of agents in w cannot exceed c. In each world w, one agent is spawned per game tick provided that doing so would exceed the spawn cap c of w. To match standard MMOs, we would ï¬x Nent = Npop (humans are independent networks with unshared weights). However, this incurs sample complexity proportional to number of populations. We therefore share parameters across groups of up to 16 agents for efï¬ciency.
# 5.1. Server Merge Tournaments
We perform four experiments to evaluate the effects on foraging performance of training with larger populations and with a greater number of populations. For each experiment, we ï¬x Npop â (1, 2, 4, 8) and a spawn cap (the maximum number of concurrent agents) c = 16 à Npop, such that c â (16, 32, 64, 128). We train for a ï¬xed number of trajectories per population.
Evaluating the inï¬uence of these variables is nontrivial. The task difï¬culty is highly dependent on the size and com- petence of populations in the environment: mean agent lifetime is not comparable across experiments. Furthermore, there is no standard procedure among MMOs for evaluating relative player competence across multiple servers. How- ever, MMO servers sometimes undergo merges whereby the player bases from multiple servers are placed within a single server. As such, we propose tournament style evaluation in order to directly compare policies learned in different experiment settings. Tournaments are formed by simply concatenating the player bases of each experiment. Figure 3 shows results: we vary the maximum number of agents at test time and ï¬nd that agents trained in larger settings consistently outperform agents trained in smaller settings.
We observe more interesting policies once we introduce the combat module as an additional learnable mode of vari- ation on top of foraging. With combat, agent actions be- come strongly coupled with the states of other agents. As a sanity check, we also conï¬rm that all of the populations trained with combat handily outperform all of the popula- tions trained with only foraging, when these populations compete in a tournament with combat enabled.
To better understand theses results, we decouple our anal- ysis into two modes of variability: maximum number of concurrent players (Nent) and maximum number of pop- ulations with unshared weights (Npop). This allows us to examine the effects of each factor independently. In order to isolate the effects of environment randomization, which also encourages exploration, we perform these experiments on a ï¬xed map. Isolating the effects of these variables pro- duces more immediately obvious results, discussed in the following two subsections:
# 5.2. Nent: Multiagent Magniï¬es Exploration
In the natural world, competition between animals can in- centivize them to spread out in order to avoid conï¬ict. We observe that overall exploration (map coverage) increases as the number of concurrent agents increases (see Figure 4; the map used is shown in Figure 5). Agents learn to explore only because the presence of other agents provides a natural incentive for doing so.
# 5.3. Npop: Multiagent Magniï¬es Niche Formation
We ï¬nd that, given a sufï¬ciently large and resource-rich en- vironment, different populations of agents tend to separate to avoid competing with other populations. Both MMOs and the real world often reward masters of a single craft more than jacks of all trades. From Figure 5, specialization to particular regions of the map increases as number of pop- ulations increases. This suggests that the presence of other populations force agents to discover a single advantageous skill or trick. That is, increasing the number of populations results in diversiï¬cation to separable regions of the map. As entities cannot out-compete other agents of their own population (i.e. agentâs with whom they share weights), they tend to seek areas of the map that contain enough resources to sustain their population.
# 5.4. Environment Randomized Exploration
The trend of increasing exploration with increasing entity number is clear when training on a single map as seen in Figure 4, 5, but it is more subtle with environment ran- domization. From Figure 6, all population sizes explore adequately. It is likely that âexplorationâ as deï¬ned by map coverage is not as difï¬cult a problem, in our environment, as
Neural MMO
developing robust policies. As demonstrated by the Tourna- ment experiments, smaller populations learn brittle policies that do not generalize to scenarios with more competitive pressureâeven against a similar number of agents.
# 5.5. Agent-Agent Dependencies
We visualize agent-agent dependencies in Figure 7. We ï¬x an agent at the center of a hypothetical map crop. For each position visible to that agent, we show what the value func- tion would be if there were a second agent at that position. We ï¬nd that agents learn policies dependent on those of other agents, in both the foraging and combat environments.
# 6. Discussion
# 6.1. Multiagent competition is a curriculum magniï¬er
Not all games are created equal. Some produce more com- plex and engaging play than others. It is unreasonable to expect pure multiagent competition to produce diverse and interesting behavior if the environment does not support it. This is because multiagent competition is a curriculum magniï¬er, not a curriculum in and of itself. The initial conditions for formation of intelligent life are of paramount importance. Jungle climates produce more biodiversity than deserts. Deserts produce more biodiversity than the tallest mountain peaks. To current knowledge, Earth is the only planet to produce life at all. The same holds true in sim- ulation: human MMOs mirror this phenomenon. Those most successful garner large and dedicated player bases and develop into complex ecosystems. The multiagent setting is interesting because learning is responsive to the competitive and collaborative pressures of other learning agentsâbut the environment must support and facilitate such pressures in order for multiagent interaction to drive complexity.
There is room for debate as to the theoretical simplest pos- sible seed environment required to produce complexity on par with that of the real world. However, this is not our objective. We have chosen to model our environment after MMOs, even though they may be more complicated than the minimum required environment class, because they are known to support the types of interactions we are interested in while maintaining engineering and implementation feasi- bility. This is not true of any other class environments we are aware of: exact physical simulations are computationally in- feasible, and previously studied genres of human games lack crucial elements of complexity (see Background). While some may see our efforts as cherrypicking environment design, we believe this is precisely the objective: the pri- mary goal of game development is to create complex and engaging play at the level of human intelligence. The player base then uses these design decisions to create strategies far beyond the imagination of the developers.
Neural MMO
| ant = High Visitation Frequency Low
Figure 6. Exploration maps in the environment randomized settings. From left to right: population size 8, 32, 128. All populations explore well, but larger populations with more species develop robust and efï¬cient policies that do better in tournaments.
5 Forage Automatic Targeting Learned Targeting i] g x fy me} 3 Position c 2 (2) 3 o fam Random Early Forage Combat Forage Combat
Figure 7. Agents learn to depend on other agents. Each square map shows the response of an agent of a particular species, located at the squareâs center, to the presence of agents at any tile around it. Random: dependence map of random policies. Early: âbulls eyeâ avoidance maps learned after only a few minutes of training. Additional maps correspond to foraging and combat policies learned with automatic targeting (as in tournament results) and learned targeting (experimental, discussed in Additional Insights). In the learned targeting setting, agents begin to ï¬xate on the presence of other agents within combat range, as denoted by the central square patterns.
Dependence on Combat Mechanics Automatic Targeting Learned Targeting ion Frequency Hi © a G = a © a < G a a @ ® © =
Figure 8. Attack maps and niche formation quirks. Left: combat maps from automatic and learned targeting. The left two columns in each ï¬gure are random. Agents with automatic targeting learn to make effective use of melee combat (denoted by higher red density). Right: noisy niche formation maps learned in different combat settings with mixed incentives to engage in combat.
# 6.2. Additional Insights
We brieï¬y detail several miscellaneous points of interest in Figure 8. First, we visualize learned attack patterns of agents. Each time an agent attacks, we splat the attack type to the screen. There are a few valid strategies as per the environment. Melee is intentionally overpowered, as a sanity check. This cautions agents to keep their distance, as the ï¬rst to strike wins. We ï¬nd that this behavior is learned from observation of the policies learned in Figure 8.
Second, a note on tournaments. We equate number of tra- jectories trained upon as a fairest possible metric of training progress. We experimented with normalizing batch size but found that larger batch size always leads to more stable performance. Batch size is held constant, but experience is split among species. This means that experiments with more species have smaller effective batch size: larger populations outperform smaller populations even though the latter are easier to train.
Finally, a quick note on niche formation. Obtaining clean visuals is dependent on having an environment where inter- action with other agents is unfavorable. While we ensure this is the case for our exploration metrics, niche formation may also occur elsewhere, such as in the space of effective combat policies. For this reason, we expect our environment to be well suited to methods that encourage sample diversity such as population-based training (Jaderberg et al., 2017).
# 7. Future Work
Our ï¬nal set of experiments prescribes targeting to the agent with lowest health. Learned targeting was not required to produce compelling policies: agents instead learn effective attack style selection, straï¬ng and engaging opportunisti- cally at the edge of their attack radius. Another possible experiment is to jointly learn attack style selection and tar- geting. This would require an attentional mechanism to handle the variable number of visible targets. We performed only preliminary experiments with such an architecture, but we still mention them here because even noisy learned tar- geting policies signiï¬cantly alter agent-agent dependence maps. As shown in Figure 7, the small square shaped re- gions of high value at the center of the dependency maps correspond to the ranges of different attack styles. These appear responsive to the current combat policies of other learning agents. We believe that the learned targeting setting is likely to useful for investigating the effects of concurrent learning in large populations.
# 8. Conclusion
We have presented a neural MMO as a research platform for multiagent learning. Our environment supports a large
Neural MMO
number of concurrent agents, inbuilt map randomization, and detailed foraging and combat systems. The included baseline experiments demonstrate our platformâs capacity for research purposes. We ï¬nd that population size magni- ï¬es exploration in our setting, and the number of distinct species magniï¬es niche formation. It is our hope that our environment provides an effective venue for multiagent ex- periments, including studies of niche formation, emergent cooperation, and coevolution. The entire platform will be open sourced, including a performant 3D client and research visualization toolbox. Full technical details of the platform are available in the Supplement.
# Acknowledgements
This research was undertaken in fulï¬llment of an intern- ship at OpenAI. Thank you to Clare Zhu for substantial contributions to the 3D client code.
# References
Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014. URL http://arxiv. org/abs/1409.0473.
Bansal, T., Pachocki, J., Sidor, S., Sutskever, I., and Mor- datch, I. Emergent complexity via multi-agent competi- tion. arXiv preprint arXiv:1710.03748, 2017.
Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. The arcade learning environment: An evaluation plat- form for general agents. Journal of Artiï¬cial Intelligence Research, 47:253â279, 2013.
Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. Openai gym. CoRR, abs/1606.01540, 2016. URL http://arxiv. org/abs/1606.01540.
Cuccu, G., Togelius, J., and Cudre-Mauroux, P. Playing atari with six neurons. arXiv preprint arXiv:1806.01363, 2018.
Ficici, S. G. and Pollack, J. B. Challenges in coevolution- ary learning: Arms-race dynamics, open-endedness, and mediocre stable states. In Proceedings of the sixth inter- national conference on Artiï¬cial life, pp. 238â247. MIT Press, 1998.
Hern´andez-Orallo, J., Dowe, D. L., EspaËna-Cubillo, S., Hern´andez-Lloreda, M. V., and Insa-Cabrera, J. On more realistic environment distributions for deï¬ning, evaluat- ing and developing intelligence. In International Con- ference on Artiï¬cial General Intelligence, pp. 82â91. Springer, 2011.
Neural MMO
Jaderberg, M., Dalibard, V., Osindero, S., Czarnecki, W. M., Donahue, J., Razavi, A., Vinyals, O., Green, T., Dunning, I., Simonyan, K., et al. Population based training of neural networks. arXiv preprint arXiv:1711.09846, 2017.
Williams, R. J. Simple statistical gradient-following algo- rithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â256, 1992.
Jaderberg, M., Czarnecki, W. M., Dunning, I., Marris, L., Lever, G., Castaneda, A. G., Beattie, C., Rabinowitz, N. C., Morcos, A. S., Ruderman, A., et al. Human- level performance in ï¬rst-person multiplayer games with population-based deep reinforcement learning. arXiv preprint arXiv:1807.01281, 2018.
Yaeger, L. Computational genetics, physiology, metabolism, neural systems, learning, vision, and behavior or poly In SANTA FE INSTI- world: Life in a new context. TUTE STUDIES IN THE SCIENCES OF COMPLEXITY- PROCEEDINGS VOLUME-, volume 17, pp. 263â263. ADDISON-WESLEY PUBLISHING CO, 1994.
Lanctot, M., Zambaldi, V., Gruslys, A., Lazaridou, A., Per- olat, J., Silver, D., Graepel, T., et al. A uniï¬ed game- theoretic approach to multiagent reinforcement learning. In Advances in Neural Information Processing Systems, pp. 4190â4203, 2017.
Langton, C. G. Artiï¬cial life: An overview. Mit Press, 1997.
Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, P., and Mor- datch, I. Multi-agent actor-critic for mixed cooperative- competitive environments. Neural Information Process- ing Systems (NIPS), 2017.
Mordatch, I. and Abbeel, P. Emergence of grounded com- positional language in multi-agent populations. arXiv preprint arXiv:1703.04908, 2017.
Yang, Y., Luo, R., Li, M., Zhou, M., Zhang, W., and Wang, J. Mean ï¬eld multi-agent reinforcement learning. arXiv preprint arXiv:1802.05438, 2018a.
Yang, Y., Yu, L., Bai, Y., Wen, Y., Zhang, W., and Wang, J. A study of ai population dynamics with million-agent reinforcement learning. In Proceedings of the 17th Inter- national Conference on Autonomous Agents and MultiA- gent Systems, pp. 2133â2135. International Foundation for Autonomous Agents and Multiagent Systems, 2018b.
Zheng, L., Yang, J., Cai, H., Zhang, W., Wang, J., and Yu, Y. Magent: A many-agent reinforcement learning plat- form for artiï¬cial collective intelligence. arXiv preprint arXiv:1712.00600, 2017.
Nichol, A., Pfau, V., Hesse, C., Klimov, O., and Schulman, J. Gotta learn fast: A new benchmark for generalization in rl. arXiv preprint arXiv:1804.03720, 2018.
OpenAI. Openai ï¬ve. https://blog.openai.com/ openai-five/, 2018.
SIGGRAPH Com- ISSN 0097- doi: 10.1145/325165.325247. URL http: Perlin, K. An image synthesizer. put. Graph., 19(3):287â296, July 1985. 8930. //doi.acm.org/10.1145/325165.325247.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484, 2016.
Sims, K. Evolving 3d morphology and behavior by compe- tition. Artiï¬cial life, 1(4):353â372, 1994.
Open- endedness: The last grand challenge youve never heard of. https://www.oreilly.com/ideas/ open-endedness-the-last-grand-challenge-youve-never-heard-of, 2017. Accessed: 2017-09-26.
Strannegrd, C., Svangrd, N., Lindstrm, D., Bach, J., and Steunebrink, B. Learning and decision-making in artiï¬- cial animals. Journal of Artiï¬cial General Intelligence, 9: 55â82, 07 2018. doi: 10.2478/jagi-2018-0002.
# Neural MMO Supplement
â#Name'color indicates population
Figure 9. Procedural 80X80 game maps
Figure 10. Example Agent
# Environment
# Agents
The environment state is represented by a grid of tiles. We generate the game map by thresholding a Perlin (Perlin, 1985) ridge fractal, as shown in Figure 9. Each tile has a particular assigned material with various properties, but it also maintains a set of references to all occupying enti- ties. When agents observe their local environment, they are handed a crop of all visible game tiles, including all visible properties of the tile material and all visible properties of oc- cupying agents. All parameters in the following subsystems are conï¬gurable; we provide only sane defaults obtained via multiple iterations of balancing.
Input: On each game tick, agents (Figure 10) observe a 15x15 square crop of surrounding game tiles and all occupy- ing agents. We extract the following observable properties:
# Per-tile properties:
⢠Material: an index corresponding to the tile type
⢠nEnts: The number of occupying entities. This is technically learnable from the list of agents, but this may not be true for all architectures. We include it for convenience here, but may deprecate it in the future.
# Tiles
# Per-agent properties:
We adopt a tile based game state, which is common among MMOs. This design choice is computationally efï¬cient for neural agents and can be made natural for human players via animation smoothing. When there is no need to render the game client, as in during training or test time statistical tests, the environment can be run with no limit on server tick rate. Game tiles are as follows:
⢠Lifetime: Number of game ticks alive thus far
⢠Health: Agents die at 0 health (hp)
⢠Food: Agents begin taking damage at 0 food or water
⢠Water: Agents begin taking damage at 0 food or water
⢠Position: Row and column of the agent
⢠Position Deltas: Offsets from the agent to the observer
⢠Damage: Most recent amount of damage taken
⢠Grass: Passable tile with no special properties
⢠Forest: Passable tile containing food. Upon moving into a food tile, the agent gains 5 food and the tile decays into a scrub.
⢠Same Color: Whether the agent is the same color (and thereby is in the same population) as the observer
⢠Freeze: Whether the agent is frozen in place as a result of having been hit by a mage attack
⢠Scrub: Passable tile that has a 2.5 percent probability to regenerate into a forest tile on each subsequent tick
⢠Stone: Impassible tile with no special properties
⢠Water: Passable tile containing water. Upon moving adjacent to a water tile, the agent gains 5 water.
Output: Agents submit one movement and one attack ac- tion request per server tick. The server ignores any actions that are not possible or permissible to fulï¬l, such as attack- ing an agent that is already dead or attempting to move into stone. Pass corresponds to no movement.
⢠Lava: Passable tile that kills the agent upon contact
Movement: North Attack: South Melee Range Mage East West Pass
Forest âââ> available food tiles Scrub: decayed forest tile deplete of food
Figure 11. Example foraging behavior
# Foraging
Foraging (Figure 11) implements gathering based survival:
⢠Food: Agents begin with 32 food, decremented by 1 per tick. Agents may regain food by occupying forest tiles or by making use of the combat system.
⢠Water: Agents begin with 32 water, decremented by 1 per tick. Agents may regain water by occupying tiles adjacent to water or making use of the combat system.
⢠Health: Agents begin with 10 health. If the agent hits 0 food, they lose 1 health per tick. If the agent hits 0 water, they lose 1 health per tick. These effects stack.
The limited availability of forest (food) tiles produces a carrying capacity. This incurs an arms race of exploration strategies: survival is trivial with a single agent, but it re- quires intelligent exploration in the presence of competing agents attempting to do the same.
# Combat
Combat (Figure 12) enables direct agent-agent confrontation by implementing three different attack âstylesâ:
⢠Melee: Inï¬icts 10 damage at 1 range
⢠Ranged: Inï¬icts 2 damage at 1-2 range
⢠Mage: Inï¬icts 1 damage at 1-3 range and freezes the target in place, preventing movement for two ticks
Each point of damage inï¬icted steals one point of food and water from the target and returns it to the attacker. This serves as an incentive to engage in combat. It is still fully possible for agents to develop primarily foraging based strategies, but they must at least be able to defend them- selves. The combat styles deï¬ned impose clear but difï¬cult to optimize trade offs. Melee combat fells the target in one attack, but only if they are able to make their attack before the opponent retaliates in kind. Ranged combat produces less risky but more prolonged conï¬icts. Mage combat does little damage but immobilizes the target, which allows the attacker to retreat in favor of a foraging based strategy. More
Neural MMO
Damage taken â>
Figure 12. Example combat behavior aggressive agents can use mage combat to immobilize their target before closing in for the kill. In all cases, the best strategy is not obvious, again imposing an arms race.
# Technical details:
⢠Attack range is deï¬ned by l1 distance: â1 rangeâ is a 3X3 grid centered on the attacker.
⢠Spawn Killing Agents are immune during their ï¬rst 15 game ticks alive. This prevents an exploit known as âspawn killingâ whereby players are repeatedly at- tacked immediately upon entering the game. Human games often contain similar mechanism to prevent this strategy, as it results in uninteresting play.
# API
The initial release is bundled with two APIs for running experiments on our platform. All of our experiments are RL based, but the API implementation is intentionally generic. Evolutionary methods and algorithmic baselines should work without modiï¬cation.
Gym Wrapper We provide a minimal extension of the Gym VecEnv API (Brockman et al., 2016) that adds support for variable numbers of agents per world and at any given time. This API distributes environment computation of ob- servations and centralizes training and inference. While this standardization is convenient, MMOs differ signiï¬cantly from arcade games, which are easier to standardize under a single wrapper. The Neural MMO setting requires support for a large, variable number of agents that run concurrently, with aggregation across many randomly generated environ- ments. The Gym API incurs additional communications overhead that the native API bypasses.
Native This is the simplest and most efï¬cient interface. It pins the environment and agents on it to the same CPU core. Full trajectories run locally on the same core as the environment. Interprocess communication is only required infrequently to synchronize gradients across all environ- ments on a master core. We currently do the backwards pass on the same CPU cores because our networks are small, but GPU is fully supported.
Figure 13. Example map in the 2D client
Figure 14. Example overhead map view in the 3D client
Neural MMO
# Client
The environment visualizer comes bundled with research tools for analyzing agent policies. In the initial release, we provide both a 2D python client (Figure 13) and a 3D web client (Figure 14, 16). The 3D client has the best support for visualizing agent policies. The 2D client is already deprecated; we include it only because it will likely take a few weeks to fully ï¬nish porting all of the research tools. We include visualization tools for producing the following visualization maps; additional documentation is available on the project Github:
⢠Value ghosting
⢠Interagent dependence
⢠Exploration ⢠Combat
# Policy training and architecture
Parameters relevant to policy training are listed in Table 1. The neural net architecture, shown in Figure 15, is a simplest possible fully connected network. It consists of a preproces- sor, main network, and output heads. The preprocessor is as follows:
⢠Embed indicies corresponding to each tile into a 7D vector. Also concatenates with the number of occupy- ing entities.
Flatten the tile embeddings ⢠Project visible attributes of nearby entities to 32D ⢠Max pool over entity embeddings to handle variable
number of observations
⢠Concatenate the tile embeddings with the pooled entity embeddings
⢠Return the resultant embedding
The main network is a single linear layer. The output heads are also each linear layers; they map the output hidden vector from the main network to the movement and combat action spaces, respectively. Separate softmaxes are used to sample movement and combat actions.
Technical details
⢠For foraging experiments, the attack network is still present for convenience, but the chosen actions are ignored.
⢠Note that 1D max pooling is used to handle the variable number of visible entities. Attention (Bahdanau et al., 2014) may appear the more conventional approach, but recently (OpenAI, 2018) demonstrated that simpler and more efï¬cient max pooling may sufï¬ce. We are unsure if this is true at our scale, but used max pooling nonetheless for simplicity.
Neural MMO
tile_idx:1 tile_idx: 2 tile_idx:0 f 7 â> ENV SS ned VALUE i Ba MOVE ent_2:... >| ENV < ent_1: [health, food, delta_x, ...] EEE anmaex
Figure 15. Agents observe their local environment. The model embeds this observations and computes actions via corresponding value, movement, and attack heads. These are all small fully connected networks with 50-100k parameters.
Table 1. Training details and parameters for all experiments Parameter Value lr=1e-3 1e-5 1e-2 0.99 Notes + Value function baseline Pytorch Defaults Training stability is sensitive to this To stabilize training; possibly redundant No additional trajectory postprocessing
Training Algorithm Policy Gradients(Williams, 1992) Adam Parameters Weight Decay Entropy Bonus Discount Factor
Figure 16. Perspective screenshot of 3D environment | {
"id": "1807.01281"
} |
1903.00742 | Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research | Evolution has produced a multi-scale mosaic of interacting adaptive units.
Innovations arise when perturbations push parts of the system away from stable
equilibria into new regimes where previously well-adapted solutions no longer
work. Here we explore the hypothesis that multi-agent systems sometimes display
intrinsic dynamics arising from competition and cooperation that provide a
naturally emergent curriculum, which we term an autocurriculum. The solution of
one social task often begets new social tasks, continually generating novel
challenges, and thereby promoting innovation. Under certain conditions these
challenges may become increasingly complex over time, demanding that agents
accumulate ever more innovations. | http://arxiv.org/pdf/1903.00742 | Joel Z. Leibo, Edward Hughes, Marc Lanctot, Thore Graepel | cs.AI, cs.GT, cs.MA, cs.NE, q-bio.NC | 16 pages, 2 figures | null | cs.AI | 20190302 | 20190311 | 9 1 0 2 r a M 1 1 ] I A . s c [
2 v 2 4 7 0 0 . 3 0 9 1 : v i X r a
6 DeepMind
March 2nd, 2019
# Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
Joel Z. Leibo1, Edward Hughes1, Marc Lanctot1 and Thore Graepel1 1DeepMind
Evolution has produced a multi-scale mosaic of interacting adaptive units. Innovations arise when perturbations push parts of the system away from stable equilibria into new regimes where previ- ously well-adapted solutions no longer work. Here we explore the hypothesis that multi-agent systems sometimes display intrinsic dynamics arising from competition and cooperation that provide a natu- rally emergent curriculum, which we term an autocurriculum. The solution of one social task often begets new social tasks, continually generating novel challenges, and thereby promoting innovation. Under certain conditions these challenges may become increasingly complex over time, demanding that agents accumulate ever more innovations.
# The Problem Problem
Pity the curious solipsist, for there is a limit to the knowledge they may acquire. To see why, consider a solitaire game played on a pla- nar grid of size 19Ã19. The goal is to place black stones on the board so as to surround as much territory as possible. Obviously, the optimal solution is simply to place stones all along the edge of the grid. Imagine one learns by trial and error how to play this game. Once the optimal solution is discovered then there is nothing left to learn. The cleverness obtain- able by practicing this game is bounded. Now introduce an additional player who places a white stone after each black stone. The white stones and the territories they enclose become barriers, preventing additional expansion of the black territory. Thus, the game of Go is bornâa game with enough emergent complexity to oc- cupy millions of minds for millenia (Fairbairn, 1995).
# Highlights
⢠General intelligence is connected to the abil- ity to adapt and prosper in a wide range of environments.
⢠Generating new environments for research is labor-intensive and the current approach cannot scale indeï¬nitely. Research progress is impeded by the âproblem problemâ.
⢠In social games, individuals must learn (a) which strategy to choose, and (b) how their strategy may be implemented by sequencing elementary actions.
⢠Ongoing strategic dynamics induce a se- quence of implementation policy learning problems.
⢠The demands of competition and cooperation generate strategic dynamics.
Intelligence may be deï¬ned as the ability to adapt to a diverse set of complex environments (Hernández-Orallo, 2017; Legg and Hutter, 2007). This deï¬nition suggests that the ceiling of a solipsistâs intelligence may only be raised by providing more and more environments of ever increasing diversity and complexity. To that end, recent work in artiï¬cial intelligence has relied on rich 3D simulation environments (e.g. Beattie et al. (2016); Kempka et al. (2016)). The resulting
Correspondence to jzl@google.com
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
agents have achieved proï¬ciency at a wide range of tasks, such as navigating virtual mazes (Mirowski et al., 2016; Savinov et al., 2018), foraging over rough, naturalistic terrain (Es- peholt et al., 2018; Hessel et al., 2018), and tests drawn from the neuroscience and cogni- tive psychology literatures like visual search (Leibo et al., 2018) and memory recall (Wayne et al., 2018). As impressive as these results are, we think the research program they represent has fallen into a solipsistic trap. Just like the aforementioned Go-playing solipsist, the clev- erness these agents may achieve is bounded by their adaptive environment(s). Advancing artiï¬cial intelligence by this route demands the creation of more and more environments, a laborious process similar to videogame design. Scaling up this process has become a bottleneck dubbed the problem problem (see Glossary).
# Glossary
⢠Adaptive unit: an umbrella term encompass- ing units of evolution and learning at any level of biological organization e.g., a species evolving genetically, a reinforcement learning agent, or a culturally evolving society.
Autocurriculum: a self-generated sequence of challenges arising from the coupled adap- tation dynamics of interacting adaptive units. ⢠Challenge: a change in the adaptive land-
scape faced by an adaptive unit.
a sequence of challenges. Equivalently, a sequence of tasks chosen to direct learning.
Endogenous challenge: a challenge aris- ing from miscoordination or competition be- tween an adaptive unitâs component subunits. ⢠Exogenous challenge: a challenge arising from competition between adaptive units at the same hierarchical level.
⢠Exploration by exploitation: exploration that occurs as a byproduct of following the greedy policy estimate in a non-stationary environment.
How then did intelligence arise in nature? We propose that life solved its own version of the problem problem because it is a multi- agent system where any speciesâs innovation determines the environment to which others must adapt. For example, it was early pho- tosynthesizers that were the main source of atmospheric oxygen, setting the stage for the subsequent evolution of all the many organ- isms that depend on it for energy (Kasting and Siefert, 2002). Likewise, human cultural evo- lution continually generates new ârules of the gameâ, demanding continuous adaptation just to avoid being left behind by a changing world (Gintis, 2000; Greif, 2006; North, 1991; Os- trom, 2005).
Strategic choice: a decision with game theo- retic implications, e.g., to cooperate or defect. the engineering problem of generating large numbers of in- teresting adaptive environments to support research.
⢠Implementation policy: a policy that im- plements a high-level strategic choice by se- quencing elementary action primitives, e.g., movement actions.
Innovation: an innovation expands an adap- tive unitâs behavioral repertoire with new ro- bust and repeatable problem solving abilities. ⢠Institution: a system of rules, norms, or be- liefs that determine the ârules of the gameâ played by the individuals composing a col- lective. The origination of a new institutions may be seen as a collective level innovation.
The argument has two main ingredients. First, adaptive units must learn implementa- tion policies for their high-level strategies by sequencing low-level action primitives (Leibo et al., 2017). Second, the high level strategies themselves change over time in response to the strategic choices of others (Gintis, 2000; North, 1991; Schluter, 2000; Smith and Price, 1973). Taken together, these two processes induce a sequence of challenges for the adaptive process that we term an autocurriculum. The rest of this paper is concerned with clarifying the autocurriculum concept and explaining how it provides a useful lens through which to view phenomena in evolutionary biology and multi-agent reinforcement learning. To that end, we offer a classiï¬cation of the various kinds of autocurricula by their underlying social interaction (competition or cooperation). In the ï¬nal part of the paper we consider the conditions under which autocurricula might generate human-like accumulation of innovations.
2
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
A B row player strategic choice column player challenge
Figure 1 | (A) Adaptive units interacting with one another. Each adaptive unit is composed of sub-adaptive units and their interactions. Notice that this description is scale invariant. An adap- tive subunit may be composed of interacting subsubunits which themselves may be composed by interactions of further subdivided units. (B) The relationship between row player strategic choices and column player implementation policy learning. When the row player shifts its strategy from A to B, it induces a challenge for the column player. The optimal policy for the column player shifts to reï¬ect its best response to B. In this case, an initially ï¬at adaptive landscape where outcome a was achieved regardless of the column playerâs strategy, became a hilly landscape where the two strategies achieved different payoffs, c for implementing A and d for implementing B.
# Innovations Arise on All Levels of Lifeâs Hierarchies
Life on Earth is characterized by interactions between adaptive units. Each adaptive unit is composed of a set of interacting adaptive subunits, each of which is itself composed of interacting subsubunits and so on (Fig. 1-A). For example, eukaryotic cells are composed of interacting prokaryotic organelles and human communities are made up of interacting individuals (Gerber et al., 1999; Maynard Smith and Szathmary, 1997; Ostrom, 2005). Consider the great feats of human intelligenceâcomposing symphonies, sending astronauts to the moon, developing agricultural technology to feed billions of people. To which adaptive units should we attribute such success? One perspective, which we adopt here, postulates that these phenomena occur at the level of groups rather than individuals, i.e. human community, culture, or civilization. The truly intelligent adaptive units are massive multifaceted multi-agent systems that, while being composed of humans (as humans are composed of cells), have their own internal logic and computational mechanisms different from those of individual humans.
Innovations expand an adaptive unitâs behavioral repertoire with new robust and repeatable problem solving abilities (Reader and Laland, 2003), and may arise on any level of the hierarchy. A number of crucial innovations have had outsize inï¬uence on the subsequent history of life, for example the emergence of eukaryotic cells, multi-cellular organisms, and perhaps language (Maynard Smith and Szathmary, 1997). Human technological innovations like agriculture (Kavanagh et al., 2018) and industry (Clark, 2008) profoundly altered human lifestyles around the world. On the scale of human societies, the scope and effectiveness of institutions aimed at promoting cooperation have increased steadily over time (Pinker, 2011). The origins of such institutions like corporations, labor unions, parliamentary democracies, and inter-government organizations are all innovations of higher level adaptive units.
3
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
Some populations of adaptive units only change over time via relatively slow processes like genetic evolution (genes), while others adapt via much faster processes like reinforcement learning and cultural evolution (agents). Often interactions between processes on both timescales are signiï¬cant, as in cases of gene-culture co-evolution (Boyd and Richerson, 1988). Many of the principles remain the same regardless of whether evolution or learning dominate (Bloembergen et al., 2015; Börgers and Sarin, 1997; Such et al., 2017). We have adopted the unifying term âadaptive unitâ to highlight the deep similarity between these processes, and to smoothly cross-apply insights originating in different ï¬elds.
Both evolution and reinforcement learning offer insights about how existing behaviors and solutions can be reï¬ned, for example by sharpening or stabilizing a desirable behavior. However, much less is known about how qualitatively novel innovations originate in the ï¬rst place. The problem is that spontaneous generation of useful complex behavior is extraordinarily unlikely, yet necessary for the reï¬nement processes of evolution and reinforcement learning to work their magic. Indeed the more complex the behavior, the lower the odds of its being generated spontaneously. This problem is exponentially exacerbated when two or more agents are required to jointly explore the set of solutions available to them as a group. Nevertheless, humans have walked on the moon, created the internet, and cured smallpox. But how?
# Exploration by Exploitation
Innovation may be explained by considering that the environment to which units adapt can change over time. This causes old adaptations to lose their shine and thereby motivates exploration toward new innovative solutions. Researcher-controlled non-stationary dynamics in machine learning, known as curricula, can facilitate the acquisition of complex behaviors that would not be learnable otherwise, e.g. Asmuth et al. (2008); Bengio et al. (2009); Czarnecki et al. (2018); Narvekar (2017); Zaremba and Sutskever (2015). The idea is to structure learning by changing the underlying environment over time.
We call such a change in the underlying environment dynamics a challenge. More precisely, we can think of a challenge to a policy as a change in its relative value compared to other policies. Notice that challenges may be positive as well as negative in nature. A previously successful predation strategy may diminish in effectiveness as prey species evolve countermeasures; or a chance dispersal to an uninhabited island may present an opportunity to apply foraging strategies that would not work on the mainland due to excessive competition.
Challenges motivate adaptive units to explore (and thus to learn) by following the gradient of their experience. That is, adaptive units explore because the true value of their current policy is shifting over time. We call this exploration by exploitation. In contrast to the traditional view in reinforcement learning based on the exploration-exploitation tradeoff, this view does not involve any deliberate trade-off between exploration and exploitation. An adaptive unit experiences new states not because it chooses to depart from exploitation, but because its underlying environment has changed.
Notice that a curriculum can be seen exactly as a sequence of challenges. We argue in this paper that certain kinds of curricula may emerge naturally from the non-stationary dynamics of social interaction processes, without any need for environmental engineering. We call such curricula autocurricula, since each challenge in the sequence is generated by the system itself. Adaptive social behavior of individuals continually reshapes the strategic landscape. We now seek to classify the various ways in which this can happen. The main distinguishing factor is whether the underlying challenge is endogenous or exogenous to the adaptive unit under consideration. This distinction
4
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
underlies the contrasting and complementary dynamics of competition and cooperation.
# Exogenous Challenges Generate Autocurricula
An exogenous challenge is a challenge that originates outside the adaptive unit under consideration. For example, consider a two-player zero-sum game. Player one experiences an exogenous challenge when player two changes their strategy in such a way as to induce a change to player oneâs best response strategy. Viewed from the perspective of player one, this creates a change in its experienced adaptive landscape since its learning objective, implementing its best response strategy, has changed (Fig. 1-B). Player two may shift strategy again once player one has successfully learned to implement its best response, thereby inducing yet another change in adaptive landscape (see Box 1). This interaction may produce a long sequence of novel challenges, an âarms raceâ (Dawkins and Krebs, 1979) driving both players to improve their skills, i.e. an exogenous autocurriculum.
# Box 1. Duality of Strategy and Implementation
The perspective developed here emphasizes situations where individuals must simultaneously learn not only what strategic choice to make, but also how to implement said choice by sequencing elementary actions. Thus, at any given time, the goal of each playerâs learning is to ï¬nd an implementation for a strategy that best responds to the strategy proï¬le of the others. Of course, since co-players do not just wait around for others to learn how they may be exploited, they too learn a best response. Whenever they change strategy they create a new adaptive landscape. Each such change constitutes a challenge. This creates a feedback loop. Any change has a cascading effect as others adjust their own strategy in response. Thus, from the perspective of an individual learner, the problem is one of adapting to a sequence of challenges, i.e. an autocurriulum. Innovation occurs when following an autocurriculum leads implemen- tation learning to escape local optima where it would otherwise have been trapped indeï¬nitely.
Empirical game theoretic techniques (Walsh et al., 2002; Wellman, 2006) may be used to analyze behavior in terms of its strategic properties. Instead of assuming game rules are known a priori, these methods work backwards from outcome data to deduce properties of the game being played. For example, Tuyls et al. (2018) showed strategic intransitivities between AlphaGo versions and Zen, the previous state-of-the-art Go engine. A similar approach was taken for social dilemmas in Leibo et al. (2017). That work classiï¬ed learned implementation policies by strategic properties such as âaggressivenessâ. (Hughes et al., 2018; Perolat et al., 2017) extended its approach beyond the two-player case to analyze the strategic incentives underlying policies learned by reinforcement learning in common pool resource appropriation and public goods scenarios.
Empirical game theoretic techniques are not only useful for data analysis, they also formed a critical part of one recent general-purpose algorithm for multi-agent reinforcement learning called Policy Space Response Oracles (Lanctot et al., 2017). It works by incrementally building up the full normal form game table by iteratively adding to the table a best response to the mixed strategy equilibrium predicted for the tableâs previous state.
However, there is no guarantee that novel challenges will continue to be generated in this way. Consider a game with intransitive strategic dynamics like rock-paper-scissors (Singh et al., 2000; Tuyls and Nowé, 2005). The dynamics of evolving populations with incentives described by such
5
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
games are often oscillatory (e.g. Gilpin (1975)). That is, even though speciï¬c implementations may be different with each repetition, the same underlying challenges are continually repeated. This is an autocurriculum that endlessly chases its own tail, never breaking away to incentivize new innovations.
When do exogenous autocurricula break out of cycles and continually generate increasingly clever innovations? This question has been studied in multi-agent reinforcement learning in the framework of self-play algorithms for two-player zero-sum games. The idea behind this family of algorithms is that by continually training an adaptive unit to defeat past versions of itself it can always be paired with a partner of the appropriate difï¬culty, neither too strong nor too weak. Self-play ensures that the adaptive unit learns to exploit its own errors, thereby challenging itself to correct them the next time around. The algorithm TD-Gammon, which was the ï¬rst to play competitively with human experts in the game of Backgammon, was an early example of this approach (Tesauro, 1995). Self-play remains a prominent approach in recent work. For example, Bansal et al. (2018) applied it to a 3D sumo wrestling game with simulated physics and Jaderberg et al. (2018) applied it to an egocentrically-viewed team Capture-the-ï¬ag game based on the Quake game engine.
However, in designing self-play algorithms, care must be taken to prevent forgetting of past policies. If old policies are forgotten, then a newer generation may become unable to defeat an older generation, creating an opening for long-extinct traits to re-appear in the future (Samothrakis et al., 2013). Thus forgetting may cause the induced autocurriculum to double back onto itself, just like it does in the rock-paper-scissors case, preventing the productive accumulation of new innovations. In practice, successful self-play algorithms generally play not just against the latest (and strongest) policy, but also against as large and diverse as possible a set of older policies (Lanctot et al., 2017).
In games with a small number of possible strategies, an exogenous autocurriculum is expected to approach a Nash equilibrium (Gintis, 2000). However, in more open games, or those with a huge but still ï¬nite space of possible strategies, like Go or Chess, then self-play really does seem to be able to continually generate novel innovations. For example, AlphaGo and its Chess/Shogi/Go-playing variant, AlphaZero (Silver et al., 2016, 2017, 2018) are based on self-play. Starting with adaptive policy and value estimators for game positions, they use Monte Carlo tree search to improve the current policy, then learn better estimators from the generated games. Interestingly, these algorithms show that some kinds of forgetting are not always harmful. Sometimes innovations are discovered at one point in the training process, and later on discarded in favor of others. AlphaGo Zero, for example, rediscovered several patterns known from human Go expertise called joseki, but some of them were discarded in favour of new variations later on in training (Silver et al., 2017). A similar phenomenon was observed in AlphaZero: its preferences towards certain Chess openings ï¬uctuated in time; the most frequently played openings at certain intermediate points in training were no longer seen in its later stages (Silver et al., 2018). Presumably the algorithm discovered that the discarded strategies are suboptimal. Sometimes it goes beyond our human capacity to understand what it has found. For instance, AlphaZero, in Chess, makes surprising (to humans) sacriï¬ces to gain positional advantage, a development that is now impacting human professional level Chess (Kasparov, 2018).
So far we have only discussed exogenous autocurricula in the context of two-player zero sum games. A recent paper describing an algorithm called Malthusian reinforcement learning considered them in more general settings (Leibo et al., 2019). Malthusian reinforcement learning extends self-play to allow for variable numbers of players to appear in each episode. Subpopulation sizes grow proportionally to their success. In games with limited resources this demographic expansion creates additional competitive pressure. That is, it induces an exogenous autocurriculum. Whenever a successful policy arises at any population size, its own success ensures that population will increase
6
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
in the future. Thus the Malthusian reinforcement learning algorithm generates a continually changing adaptive landscape which functions to perturb old solutions that have grown too comfortable and thereby driving it to escape poor local optima where state-of-the-art single-agent methods cannot avoid becoming stuck.
Exogenous autocurricula also appear in some evolutionary models of human intelligence. In one theory, the main selection pressure on intelligence in human evolution is posited to be the need to manipulate others within the social group in order to climb a dominance hierarchy (Humphrey, 1976). Increasingly clever social partners motivate the need to evolve still greater manipulating intelligence, and so on, increasing intelligence up to the point where brain size could no longer expand for anatomical reasons (Byrne, 1996; Dunbar and Shultz, 2017). On the other hand, there is more to being human than competition. As we will see in the next section, the challenges of organizing collective action also yield autocurricula that may have structured the evolution of human cognitive abilities and motivated signiï¬cant innovations.
# Endogenous Challenges Generate Autocurricula
Autocurricula may emerge on any level of the hierarchy of adaptive units. When a level is atomic (indivisible), only exogenous challenges are possible. On all other levels, the adaptive units are made up of adaptive subunits. In such cases adaptation may also be driven in response to endogenous challenges to the collectiveâs integrity. For example, a collective-level adaptive unit will generally function best when it has suppressed most competition between its component subunits. In multi- cellular organisms, that suppression sometimes breaks down, freeing somatic cells to behave in their own short-sighted selï¬sh interest (Frank and Nowak, 2004; Rankin et al., 2007). Cancerous cells often behave like unicellular organisms, even reverting to less efï¬cient fermentation-based metabolism (the Warburg effect) (Vander Heiden et al., 2009) and activating other ancient cellular functions conserved in modern unicellular organisms (Trigos et al., 2018). Similar breakdowns of cooperation can occur on the level of a society. For example, eusocial insect colonies are vulnerable to exploitation by renegade worker reproduction (Beekman and Oldroyd, 2008).
Such situations are social dilemmas. They expose tensions between individual and collective rationality (Rapoport, 1974). One particularly well-studied type of social dilemma is called common- pool resource appropriation (Ostrom, 1990). For a common-pool resource like a common grazing pasture, ï¬shery, or irrigation system, it is difï¬cult or impossible for individuals to exclude one anotherâs access. But whenever an individual obtains a beneï¬t from such a common-pool resource, the remaining amount available for appropriation by others is at least somewhat diminished. If each individualâs marginal beneï¬t of appropriation exceeds their share of the cost of further depletion, then they are predicted to continue their appropriation until the resource becomes degraded. This situation is called the tragedy of the commons (Hardin, 1968; Ostrom, 1990). It is impossible for an individual acting unilaterally to escape this fate; since even if one were to restrain their appropriation, the effect would be too small to make a difference. Thus individual-level innovation is not sufï¬cient to meet this challenge. Any innovation that resolves a social dilemma must involve changing the behavior of a critical fraction of the participants (Schelling, 1973).
One way to effect such a change in the joint behavior of many individuals is to originate an âinstitutionâ (Greif, 2006; Ostrom, 1990, 2005): a system of rules, norms, or beliefs. Institutions may structure individual-level adaptive processes to ensure the group as a whole achieves a socially beneï¬cial outcome. They may be regarded as collective-level innovations. For example, consider an institution whereby individuals who over-exploit the common pool resource are sanctioned by the group. This institution changes the individual incentive structure such that over-exploiting is
7
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
no longer the dominant strategy. We have seen hints of emergent institutions in recent multi-agent reinforcement learning models of common-pool resource appropriation situations (Hughes et al., 2018; Leibo et al., 2017; Perolat et al., 2017). For example. an institution for generating cooperation on the collective-level emerged in Hughes et al. (2018) when certain agents learned to sanction over-exploiters, effectively policing their behavior.
# Box 2. No-Free-Lunch in Social Dilemmas
The literature contains several models that suggest that higher order social dilemmas can be evaded in various ways. Here we show that they depend on unrealistic assumptions, thereby sustaining the present argument that the no-free-lunch property of social dilemmas cannot generally be avoided.
For example, some models depend on asymmetries between altruistic cooperation and altruistic punishment (Boyd et al., 2003). This may hold in some situations, especially when the cost of monitoring for infractions is small, e.g. in agricultural land use. But it does not hold when monitoring costs are large, as in many real-world common-pool resource situations (Ostrom, 1990).
Other models depend on coordination of large numbers of individuals engaging in altruistic punishment (Boyd et al., 2010), but they do not take into account the costs of such coordination. Several of the case studies of effective community-level resource management described by Ostrom show that communities are willing to invest in complex and relatively costly institutional mechanisms for ensuring this coordination is effective so that sanctioning may be deemed legitimate by the group and no individual must bear the brunt of the cost (Ostrom, 1990).
Another intriguing idea is to link reputation across tasks (Panchanathan and Boyd, 2004). However, these mechanisms substantially increase pressure on institutions for assigning and communicating reputations. Thus they give rise to new attack vectors for the unscrupulous. Agents may try to cheat by ï¬nding ways to falsely inï¬ate their reputations.
While these arguments may explain some instances of cooperation, especially when the costs of monitoring for infractions are low, they are insufï¬cient to explain away the no-free-lunch principle that generates higher order social dilemmas.
A social dilemma may be resolved via the emergence of an institution that systematically changes payoffs experienced by adaptive units so as to eliminate socially deï¬cient equilibria or nudge learning dynamics toward better equilibria. However, maintaining the institution itself still depends on interactions of those same participants. In many cases this yields a second order social dilemma because each individual would prefer others to shoulder a greater share of that burden (Axelrod, 1986; Heckathorn, 1989; Yamagishi, 1988). This is called the âsecond-order free rider problemâ. As predicted by these models, there is evidence that pre-state societies sustain social norms that disapprove of second-order free riding, e.g. Mathew (2017). Second-order social dilemmas may themselves be resolved via the emergence of higher order institutions which, in turn, create their own still higher level successor dilemmas (Ostrom, 2000). Indeed we can say that social dilemma situations have a kind of âno-free lunchâ property: once you resolve a social dilemma in one place then another one crops up somewhere else (see Box 2). A society may resolve a social dilemma by hiring watchmen, but then who watches the watchmen? These dynamics may generate a sequence
8
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
of endogenous challenges growing steadily in scope and complexity, i.e. an autocurriculum.
Just as atomic individuals participate in social interactions with one another, communities interact with peer communities as competitors or allies. Exogenous challenges may also arise on the collective level. Their effects reverberate down the hierarchy, tending to resolve endogenous challenges by aligning the incentives of lower level entities (Henrich, 2004a; Maynard Smith and Szathmary, 1997; Wilson and Wilson, 2007). For example, eusocial insect colonies compete with other conspeciï¬c colonies. Those colonies that are better able to maintain internal cooperation, e.g., by establishing reproductive division of labor, are more likely to be successful in colony-level competition (Nowak et al., 2010). In this view, communities are treated much like atomic individuals. Just as atomic individuals are understood to seek to maximize the utility (food, shelter, mates, etc) they can obtain from social interaction, higher order adaptive units act to optimize a range of different higher order utility concepts. For example, populations of ant colonies optimize ï¬tness in the sense of multi-level selection (Okasha, 2005), while human corporations optimize culturally-determined constructs like âshareholder valueâ. Analogous to nervous systems, communities weigh their options via decision-making institutions like parliaments and markets.
# Accumulating Autocurricula and Human Uniqueness
In this paper we argued that interactions among multiple adaptive units across levels of the bio- logical hierarchy give rise to sequences of chal- lenges called autocurricula that perturb adap- tive landscapes. This enables adaptive units to discover innovations by continually adapt- ing to changing circumstancesâexploration- by-exploitation (see Outstanding Questions). Might this mechanism be the key to solving the problem problem? Will autocurricula generate enough adaptive diversity? Perhaps not. Re- call that autocurricula may be cyclic, repeatedly learning and unlearning the same information, innovations never accumulating or building on one another. Moreover, there is nothing about these mechanisms that suggests they do not ap- ply equally strongly to non-human animals. We think the solution is as follows. Autocurricula do indeed exist throughout animal evolution. However, humans are unique among animals in their exceptionally long cultural memory. This allows intransitive cycles to be avoided, thereby promoting cumulative accumulation of innova- tion after innovation.
# Outstanding Questions
⢠Can autocurricula generate sufï¬ciently di- verse challenges to resolve the problem prob- lem?
⢠Does the duality between strategy and imple- mentation persist at the level of the commu- nity?
⢠Can the no-free-lunch property of social dilemmas be formalized? What new experi- ments could be carried out to demonstrate or refute its validity?
⢠Did autocurricula phenomena play a role in the evolution of higher-order individuals like multi-cellular organisms and eusocial insect societies? Could analogous transitions arise in multi-agent reinforcement learning?
⢠How do challenges arising on different levels of the biological hierarchy interact with one another? For example, do higher order exoge- nous challenges align the interests of lower order individuals? What happens if lower or- der individuals can defect from their âteamâ to join another, more successful, higher order individual?
⢠How can we establish feedback loops like cumulative culture and self-domestication in silico?
Why did this same accumulation not occur in other ape species? We highlight two possibil- ities, based on the structure of feedback loops within human societies, driven respectively by exogenous and endogenous challenges to group integrity. Both may be seen as the combination of a challenge-based loop, and a ratchet loop that serves the purpose of accumulating and distilling
9
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
beneï¬cial innovations.
A B So aT rochen culture population improved (scarcity) ratchet (no scarcity) ratchet language cycle cycle exogenous threat endogenous threat cycle cycle more better opportunity for better sanctioning competition innovation norm violators institutions NU NUâ
Figure 2 | (A) The cumulative culture loop. (B) The self-domestication loop.
1. The cumulative culture loop (Fig. 2-A). A growing population puts stress on human societies since more individuals must vie for access to the same limited resources. The resulting competitive pressure may give rise to exogenous challenges to each individual, motivating new innovation. However, since humans are adept social learners, any innovation that promotes survival will tend to quickly spread through the population and thereby increase population sizes in the future along the lines suggested by models like Enquist et al. (2011). There is a signiï¬cant literature demonstrating that possibly uniquely human high ï¬delity social transmission of cultural elements allows innovations to build upon one another and increase in complexity over time (Boyd et al., 2011, 2013; Dean et al., 2012; Muthukrishna and Henrich, 2016). Moreover, archaeological evidence and computational models suggest that larger populations can afford greater protection from random forgetting of cultural elements due to chance events like the death of a single master artisan, thus allowing a more signiï¬cant opportunity for cumulative cultural evolution (Henrich, 2004b). Indeed, cultural evolution is made more effective by increasing the population size and increasing the population size increases the effectiveness of cultural learning. This is a feedback loop that, once started, could potentially increase until the population becomes limited by its environmentâs carrying capacity in some way that further innovation cannot transcend.
2. The self-domestication loop (Fig. 2-B). A growing population need not lead to scarcity, provided that resource production is efï¬cient. However, it does provide more opportunity for norm violators to free-ride on the contributions of others (Carpenter, 2007). One norm- violating behavior is reactive aggression, particularly in young males. Coordinated punishment for aggression in small-scale societies often takes the form of capital punishment (Boehm, 2011). The result is genetic evolution that gradually reduces the prevalence of aggressive individuals. This process has been called self-domestication (Wrangham, 2018) because it is similar to the selection process that may have been applied in creating domestic dogs from wolves, now applied by a species to itself. Self-domestication has the effect of increasing tolerance for non-kin because it reduces the likelihood of encountering aggressive individuals. This may have created opportunities for improving communication abilities, and ultimately, language. Conversely, improving communication between individuals in a society improves the effectiveness of institutions for sanctioning norm violators by facilitating coordinated punishment (Bochet et al., 2006; Janssen et al., 2014, 2010). Language also improves the
10
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
accuracy of reputation information, e.g. as conveyed by gossip, informing the decision of which individuals deserve punishment (Nowak and Sigmund, 1998; Rockenbach and Milinski, 2006). Hence, the resultant feedback loop may accumulate institutions indeï¬nitely.
In this paper we identiï¬ed multi-agent interactions as key drivers of sustained innovation and, perhaps, increases in intelligence over the course of human evolution. As a consequence, a research program based only on replicating individual human cognitive abilities, e.g. attention, memory, planning, etc, is likely incomplete. It seems that intelligence researchers would do well to pay more attention to the ways in which multi-agent dynamics may structure both evolution and learning.
# Acknowledgements
The authors would like to thank Edgar Duenez-Guzman, David Balduzzi, Aliya Amirova, Greg Wayne, Peter Sunehag, Joyce Xu, Martin Chadwick, Richard Everett, Vlad Firiou and Raphael Koster for very helpful comments and discussions during the drafting of this article. In addition, the ï¬rst author would like to thank all the speakers, organizers, and attendees of the 2014 âAre there limits to evolution?â workshop at Cambridge where much of the thought process leading eventually to this article was ï¬rst hatched.
# References
John Asmuth, Michael L Littman, and Robert Zinkov. Potential-based shaping in model-based reinforcement learning. In AAAI, pages 604â609, 2008.
Robert Axelrod. An evolutionary approach to norms. American political science review, 80(4): 1095â1111, 1986.
Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, and Igor Mordatch. Emergent complexity via multi-agent competition. In International Conference on Learning Representations, 2018.
Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, VÃctor Valdés, Amir Sadik, et al. Deepmind lab. arXiv preprint arXiv:1612.03801, 2016.
Madeleine Beekman and Benjamin P Oldroyd. When workers disunite: intraspeciï¬c parasitism by eusocial bees. Annual review of entomology, 53, 2008.
Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41â48. ACM, 2009.
Daan Bloembergen, Karl Tuyls, Daniel Hennes, and Michael Kaisers. Evolutionary dynamics of multi-agent learning: A survey. J. Artif. Intell. Res. (JAIR), 53:659â697, 2015.
Olivier Bochet, Talbot Page, and Louis Putterman. Communication and punishment in voluntary contribution experiments. Journal of Economic Behavior & Organization, 60(1):11â26, 2006.
Christopher Boehm. Retaliatory violence in human prehistory. The British Journal of Criminology, 51 (3):518â534, 2011.
Tilman Börgers and Rajiv Sarin. Learning through reinforcement and replicator dynamics. Journal of Economic Theory, 77(1):1â14, 1997.
Robert Boyd and Peter J Richerson. Culture and the evolutionary process. University of Chicago press, 1988.
11
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
Robert Boyd, Herbert Gintis, Samuel Bowles, and Peter J Richerson. The evolution of altruistic punishment. Proceedings of the National Academy of Sciences, 100(6):3531â3535, 2003.
Robert Boyd, Herbert Gintis, and Samuel Bowles. Coordinated punishment of defectors sustains cooperation and can proliferate when rare. Science, 328(5978):617â620, 2010.
Robert Boyd, Peter J Richerson, and Joseph Henrich. The cultural niche: Why social learning is essential for human adaptation. Proceedings of the National Academy of Sciences, 108(Supplement 2): 10918â10925, 2011.
Robert Boyd, Peter J Richerson, and Joseph Henrich. The cultural evolution of technology: Facts and theories. Cultural evolution: society, technology, language, and religion, 12:119, 2013.
Richard W Byrne. Machiavellian intelligence. Evolutionary Anthropology: Issues, News, and Reviews: Issues, News, and Reviews, 5(5):172â180, 1996.
Jeffrey Carpenter. Punishing free-riders: How group size affects mutual monitoring and the provision of public goods. Games and Economic Behavior, 60(1):31â51, 2007.
Gregory Clark. A farewell to alms: a brief economic history of the world, volume 27. Princeton University Press, 2008.
W. M. Czarnecki, S. M. Jayakumar, M. Jaderberg, L. Hasenclever, Y. Whye Teh, S. Osindero, N. Heess, and R. Pascanu. Mix and Match - Agent Curricula for Reinforcement Learning. ArXiv e-prints, June 2018.
Richard Dawkins and John Richard Krebs. Arms races between and within species. Proc. R. Soc. Lond. B, 205(1161):489â511, 1979.
Lewis G Dean, Rachel L Kendal, Steven J Schapiro, Bernard Thierry, and Kevin N Laland. Identiï¬cation of the social and cognitive processes underlying human cumulative culture. Science, 335(6072): 1114â1118, 2012.
RIM Dunbar and Susanne Shultz. Why are there so many explanations for primate brain evolution? Phil. Trans. R. Soc. B, 372(1727):20160244, 2017.
Magnus Enquist, Stefano Ghirlanda, and Kimmo Eriksson. Modelling the evolution and diversity of cumulative culture. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 366(1563):412â423, 2011.
Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
John Fairbairn. Go in ancient china. Go Base, 1995.
Steven A Frank and Martin A Nowak. Problems of somatic mutation and cancer. Bioessays, 26(3): 291â299, 2004.
Christian Gerber, Jörg Siekmann, and Gero Vierke. Holonic multi-agent systems. 1999.
Michael E Gilpin. Limit cycles in competition communities. The American Naturalist, 109(965): 51â60, 1975.
Herbert Gintis. Game theory evolving: A problem-centered introduction to modeling strategic behavior. Princeton university press, 2000.
A. Greif. Institutions and the Path to the Modern Economy: Lessons from Medieval Trade. Political Economy of Institutions and Decisions. Cambridge University Press, 2006. ISBN 9781139447065.
Garrett Hardin. The tragedy of the commons. Science, 162(3859):1243â1248, 1968.
12
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
Douglas D Heckathorn. Collective action and the second-order free-rider problem. Rationality and Society, 1(1):78â100, 1989.
Joseph Henrich. Cultural group selection, coevolutionary processes and large-scale cooperation. 53: 3â35, 02 2004a.
Joseph Henrich. Demography and cultural evolution: how adaptive cultural processes can produce maladaptive lossesâthe tasmanian case. American Antiquity, 69(2):197â214, 2004b.
José Hernández-Orallo. The measure of all minds: evaluating natural and artiï¬cial intelligence. Cambridge University Press, 2017.
Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt, and Hado van Hasselt. Multi-task deep reinforcement learning with popart. arXiv preprint arXiv:1809.04474, 2018.
Edward Hughes, Joel Z Leibo, Matthew Phillips, Karl Tuyls, Edgar Dueñez-Guzman, Antonio GarcÃa Castañeda, Iain Dunning, Tina Zhu, Kevin McKee, Raphael Koster, et al. Inequity aversion improves cooperation in intertemporal social dilemmas. In Advances in Neural Information Processing Systems, pages 3330â3340, 2018.
Nicholas K Humphrey. The social function of intellect. In Growing points in ethology, pages 303â317. Cambridge University Press, 1976.
Max Jaderberg, Wojciech M Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castaneda, Charles Beattie, Neil C Rabinowitz, Ari S Morcos, Avraham Ruderman, Nicolas Sonnerat, Tim Green, Louise Deason, Joel Z. Leibo, David Silver, Demis Hassabis, Koray Kavukcuoglu, and Thore Graepel. Human-level performance in ï¬rst-person multiplayer games with population-based deep reinforcement learning. arXiv preprint arXiv:1807.01281, 2018.
Marco Janssen, Madeline Tyson, and Allen Lee. The effect of constrained communication and limited information in governing a common resource. International Journal of the Commons, 8(2), 2014.
Marco A Janssen, Robert Holahan, Allen Lee, and Elinor Ostrom. Lab experiments for the study of social-ecological systems. Science, 328(5978):613â617, 2010.
G Kasparov. Chess, a drosophila of reasoning. Science (New York, NY), 362(6419):1087, 2018.
James F Kasting and Janet L Siefert. Life and the evolution of earthâs atmosphere. Science, 296 (5570):1066â1068, 2002.
Patrick H Kavanagh, Bruno Vilela, Hannah J Haynie, Ty Tuff, Matheus Lima-Ribeiro, Russell D Gray, Carlos A Botero, and Michael C Gavin. Hindcasting global population densities reveals forces enabling the origin of agriculture. Nature Human Behaviour, page 1, 2018.
MichaÅ Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Ja´skowski. Vizdoom: A doom-based ai research platform for visual reinforcement learning. In Computational Intelligence and Games (CIG), 2016 IEEE Conference on, pages 1â8. IEEE, 2016.
Marc Lanctot, Vinicius Zambaldi, Audrunas Gruslys, Angeliki Lazaridou, Karl Tuyls, Julien Perolat, David Silver, and Thore Graepel. A uniï¬ed game-theoretic approach to multiagent reinforcement learning. In Advances in Neural Information Processing Systems, 2017.
Shane Legg and Marcus Hutter. Universal intelligence: A deï¬nition of machine intelligence. Minds and Machines, 17(4):391â444, 2007.
Joel Z. Leibo, Vinicius Zambaldi, Marc Lanctot, Janusz Marecki, and Thore Graepel. Multi-agent Reinforcement Learning in Sequential Social Dilemmas. In Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AA-MAS 2017), Sao Paulo, Brazil, 2017.
Joel Z Leibo, Cyprien de Masson dâAutume, Daniel Zoran, David Amos, Charles Beattie, Keith Ander-
13
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
son, Antonio GarcÃa Castañeda, Manuel Sanchez, Simon Green, Audrunas Gruslys, et al. Psychlab: a psychology laboratory for deep reinforcement learning agents. arXiv preprint arXiv:1801.08116, 2018.
Joel Z Leibo, Julien Perolat, Edward Hughes, Steven Wheelwright, Adam H Marblestone, Edgar Duéñez-Guzmán, Peter Sunehag, Iain Dunning, and Thore Graepel. Malthusian reinforcement learning. In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems (AA-MAS 2019), Montreal, Canada, 2019.
Sarah Mathew. How the second-order free rider problem is solved in a small-scale society. American Economic Review, 107(5):578â81, 2017.
John Maynard Smith and Eors Szathmary. The major transitions in evolution. Oxford University Press, 1997.
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, et al. Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673, 2016.
Michael Muthukrishna and Joseph Henrich. Innovation in the collective brain. Phil. Trans. R. Soc. B, 371(1690):20150192, 2016.
Sanmit Narvekar. Curriculum learning in reinforcement learning. In Proceedings of the Twenty- Sixth International Joint Conference on Artiï¬cial Intelligence, IJCAI-17, pages 5195â5196, 2017. doi: 10.24963/ijcai.2017/757.
Douglass C North. Institutions. Journal of economic perspectives, 5(1):97â112, 1991.
Martin A Nowak and Karl Sigmund. Evolution of indirect reciprocity by image scoring. Nature, 393 (6685):573, 1998.
Martin A Nowak, Corina E Tarnita, and Edward O Wilson. The evolution of eusociality. Nature, 466 (7310):1057, 2010.
Samir Okasha. Multilevel selection and the major transitions in evolution. Philosophy of science, 72 (5):1013â1025, 2005.
Elinor Ostrom. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, 1990.
Elinor Ostrom. Collective action and the evolution of social norms. Journal of economic perspectives, 14(3):137â158, 2000.
Elinor Ostrom. Understanding institutional diversity. Princeton University Press Princeton, 2005.
Karthik Panchanathan and Robert Boyd. Indirect reciprocity can stabilize cooperation without the second-order free rider problem. Nature, 432(7016):499â502, 2004.
Julien Perolat, Joel Z Leibo, Vinicius Zambaldi, Charles Beattie, Karl Tuyls, and Thore Graepel. A multi-agent reinforcement learning model of common-pool resource appropriation. In Advances in Neural Information Processing Systems, pages 3643â3652, 2017.
Steven Pinker. The better angels of our nature: The decline of violence in history and its causes. Penguin UK, 2011.
Daniel J Rankin, Katja Bargum, and Hanna Kokko. The tragedy of the commons in evolutionary biology. Trends in ecology & evolution, 22(12):643â651, 2007.
Anatol Rapoport. Prisonerâs dilemmaârecollections and observations. In Game Theory as a Theory of a Conï¬ict Resolution, pages 17â34. Springer, 1974.
14
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
S.M. Reader and K.N. Laland. Animal Innovation. Oxford University Press, 2003. 9780198526223.
Bettina Rockenbach and Manfred Milinski. The efï¬cient interaction of indirect reciprocity and costly punishment. Nature, 444(7120):718, 2006.
Spyridon Samothrakis, Simon Lucas, Thomas Philip Runarsson, and David Robles. Coevolving gameplaying agents: Measuring performance and intransitivities. IEEE Transactions on Evolutionary Computation, 17(2):213â226, 2013.
Nikolay Savinov, Anton Raichuk, Raphaël Marinier, Damien Vincent, Marc Pollefeys, Timothy Lillicrap, and Sylvain Gelly. Episodic curiosity through reachability. arXiv preprint arXiv:1810.02274, 2018.
Thomas C Schelling. Hockey helmets, concealed weapons, and daylight saving: A study of binary choices with externalities. Journal of Conï¬ict resolution, 17(3):381â428, 1973.
Dolph Schluter. The ecology of adaptive radiation. Oxford University Press, 2000.
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484â-489, 2016.
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Matthew Lai Lucas Baker, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of go without human knowledge. Nature, 550:354â359, 2017.
David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362(6419):1140â1144, 2018.
Satinder P. Singh, Michael J. Kearns, and Yishay Mansour. Nash convergence of gradient dynamics in general-sum games. In Proceedings of the 16th Conference on Uncertainty in Artiï¬cial Intelligence, UAI â00, pages 541â548, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc. ISBN 1-55860-709-9.
J Maynard Smith and George R Price. The logic of animal conï¬ict. Nature, 246(5427):15, 1973.
Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O Stanley, and Jeff Clune. Deep neuroevolution: genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. arXiv preprint arXiv:1712.06567, 2017.
G. Tesauro. Temporal difference learning and TD-gammon. Commun. ACM, 38(3):58â68, March 1995.
Anna S Trigos, Richard B Pearson, Anthony T Papenfuss, and David L Goode. How the evolution of multicellularity set the stage for cancer. British Journal of Cancer, 118(2):145, 2018.
K. Tuyls and A. Nowé. Evolutionary game theory and multi-agent reinforcement learning. The Knowledge Engineering Review, 20(1):63â90, 2005.
Karl Tuyls, Julien Perolat, Marc Lanctot, Joel Z Leibo, and Thore Graepel. A generalised method for empirical game theoretic analysis. arXiv preprint arXiv:1803.06376, 2018.
Matthew G Vander Heiden, Lewis C Cantley, and Craig B Thompson. Understanding the warburg effect: the metabolic requirements of cell proliferation. science, 324(5930):1029â1033, 2009.
15
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
William E Walsh, Rajarshi Das, Gerald Tesauro, and Jeffrey O Kephart. Analyzing complex strategic interactions in multi-agent systems. In AAAI-02 Workshop on Game-Theoretic and Decision-Theoretic Agents, pages 109â118, 2002.
Greg Wayne, Chia-Chun Hung, David Amos, Mehdi Mirza, Arun Ahuja, Agnieszka Grabska-Barwinska, Jack Rae, Piotr Mirowski, Joel Z Leibo, Adam Santoro, et al. Unsupervised predictive memory in a goal-directed agent. arXiv preprint arXiv:1803.10760, 2018.
Michael P Wellman. Methods for empirical game-theoretic analysis. In AAAI, pages 1552â1556, 2006.
David Sloan Wilson and Edward O Wilson. Rethinking the theoretical foundation of sociobiology. The Quarterly review of biology, 82(4):327â348, 2007.
Richard W Wrangham. Two types of aggression in human evolution. Proceedings of the National Academy of Sciences, 115(2):245â253, 2018.
Toshio Yamagishi. Seriousness of social dilemmas and the provision of a sanctioning system. Social psychology quarterly, pages 32â42, 1988.
Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprint arXiv:1505.00521, 2015.
16 | {
"id": "1801.08116"
} |
1902.09229 | A Theoretical Analysis of Contrastive Unsupervised Representation Learning | Recent empirical works have successfully used unlabeled data to learn feature
representations that are broadly useful in downstream classification tasks.
Several of these methods are reminiscent of the well-known word2vec embedding
algorithm: leveraging availability of pairs of semantically "similar" data
points and "negative samples," the learner forces the inner product of
representations of similar pairs with each other to be higher on average than
with negative samples. The current paper uses the term contrastive learning for
such algorithms and presents a theoretical framework for analyzing them by
introducing latent classes and hypothesizing that semantically similar points
are sampled from the same latent class. This framework allows us to show
provable guarantees on the performance of the learned representations on the
average classification task that is comprised of a subset of the same set of
latent classes. Our generalization bound also shows that learned
representations can reduce (labeled) sample complexity on downstream tasks. We
conduct controlled experiments in both the text and image domains to support
the theory. | http://arxiv.org/pdf/1902.09229 | Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, Nikunj Saunshi | cs.LG, cs.AI, stat.ML | 19 pages, 5 figures | null | cs.LG | 20190225 | 20190225 | 9 1 0 2
b e F 5 2 ] G L . s c [
1 v 9 2 2 9 0 . 2 0 9 1 : v i X r a
# A Theoretical Analysis of Contrastive Unsupervised Representation Learning
# Sanjeev Arora 1 2 Hrishikesh Khandeparkar 1 Mikhail Khodak 3 Orestis Plevrakis 1 Nikunj Saunshi 1
# {arora, hrk, orestisp, nsaunshi}@cs.princeton.edu
khodak@cmu.edu
{arora, hrk, orestisp, nsaunshi}@cs.princeton.edu hodak@cmu.edu
# Abstract
Recent empirical works have successfully used unlabeled data to learn feature representations that are broadly useful in downstream classiï¬ca- tion tasks. Several of these methods are remi- niscent of the well-known word2vec embedding algorithm: leveraging availability of pairs of se- mantically âsimilarâ data points and ânegative samples,â the learner forces the inner product of representations of similar pairs with each other to be higher on average than with negative samples. The current paper uses the term contrastive learn- ing for such algorithms and presents a theoretical framework for analyzing them by introducing la- tent classes and hypothesizing that semantically similar points are sampled from the same latent class. This framework allows us to show provable guarantees on the performance of the learned rep- resentations on the average classiï¬cation task that is comprised of a subset of the same set of latent classes. Our generalization bound also shows that learned representations can reduce (labeled) sam- ple complexity on downstream tasks. We conduct controlled experiments in both the text and image domains to support the theory.
# 1. Introduction
This paper concerns unsupervised representation learning: using unlabeled data to learn a representation function f such that replacing data point x by feature vector f (x) in new classiï¬cation tasks reduces the requirement for labeled data. This is distinct from semi-supervised learning, where learning can leverage unlabeled as well as labeled data. (Section 7 surveys other prior ideas and models).
the softmax) of a powerful deep net trained on ImageNet. In natural language processing (NLP), low-dimensional rep- resentations of text â called text embeddings â have been computed with unlabeled data (Peters et al., 2018; Devlin et al., 2018). Often the embedding function is trained by using the embedding of a piece of text to predict the sur- rounding text (Kiros et al., 2015; Logeswaran & Lee, 2018; Pagliardini et al., 2018). Similar methods that leverage simi- larity in nearby frames in a video clip have had some success for images as well (Wang & Gupta, 2015).
Many of these algorithms are related: they assume access to pairs or tuples (in the form of co-occurrences) of text/images that are more semantically similar than randomly sampled text/images, and their objective forces representations to respect this similarity on average. For instance, in order to learn a representation function f for sentences, a simpliï¬ed version of what Logeswaran & Lee (2018) minimize is the following loss function
low ef)" F(#*) 8 | F@ Fe) 4 ef @ Fe) E wat a
where (x, x+) are a similar pair and xâ is presumably dis- similar to x (often chosen to be a random point) and typi- cally referred to as a negative sample. Though reminiscent of past ideas â e.g. kernel learning, metric learning, co- training (Cortes et al., 2010; Bellet et al., 2013; Blum & Mitchell, 1998) â these algorithms lack a theoretical frame- work quantifying when and why they work. While it seems intuitive that minimizing such loss functions should lead to representations that capture âsimilarity,â formally it is unclear why the learned representations should do well on downstream linear classiï¬cation tasks â their somewhat mysterious success is often treated as an obvious conse- quence. To analyze this success, a framework must connect âsimilarityâ in unlabeled data with the semantic information that is implicitly present in downstream tasks.
For images, a proof of existence for broadly useful represen- tations is the output of the penultimate layer (the one before
1Princeton University, Princeton, New Jersey, USA. 2Institute for Advanced Study, Princeton, New Jersey, USA. 3Carnegie Mellon University, Pittsburgh, Pennsylvania, USA.
We propose the term Contrastive Learning for such methods and provide a new conceptual framework with minimal assumptions1. Our main contributions are the following:
Copyright 2019 by the authors.
1The alternative would be to make assumptions about genera- tive models of data. This is difï¬cult for images and text.
Contrastive Unsupervised Representation Learning
1. We formalize the notion of semantic similarity by in- troducing latent classes. Similar pairs are assumed to be drawn from the same latent class. A downstream task is comprised of a subset of these latent classes.
containing dogs and low/zero probabilities to other images. Classes can overlap arbitrarily.2 Finally, we assume a distri- bution Ï over the classes that characterizes how these classes naturally occur in the unlabeled data. Note that we make no assumption about the functional form of Dc or Ï.
2. Under this formalization, we prove that a representa- tion function f learned from a function class F by con- trastive learning has low average linear classiï¬cation loss if F contains a function with low unsupervised loss. Additionally, we show a generalization bound for contrastive learning that depends on the Rademacher complexity of F. After highlighting inherent limita- tions of negative sampling, we show sufï¬cient proper- ties of F which allow us to overcome these limitations.
3. Using insights from the above framework, we provide a novel extension of the algorithm that can leverage larger blocks of similar points than pairs, has better theoretical guarantees, and performs better in practice.
Ideally, one would like to show that contrastive learning al- ways gives representations that compete with those learned from the same function class with plentiful labeled data. Our formal framework allows a rigorous study of such ques- tions: we show a simple counterexample that prevents such a blanket statement without further assumptions. However, if the representations are well-concentrated and the mean classiï¬er (Deï¬nition 2.1) has good performance, we can show a weaker version of the ideal result (Corollary 5.1.1). Sections 2 and 3 give an overview of the framework and the results, and subsequent sections deal with the analysis. Re- lated work is discussed in Section 7 and Section 8 describes experimental veriï¬cation and support for our framework.
# Semantic Similarity
To formalize similarity, we assume similar data points x, x+ are i.i.d. draws from the same class distribution Dc for some class c picked randomly according to measure Ï. Negative samples are drawn from the marginal of Dsim:
# Dsim(x, x+) = E câ¼Ï Dneg(xâ) = E câ¼Ï
Dc(x)Dc(x+) (1)
Dc(xâ) (2)
Since classes are allowed to overlap and/or be ï¬ne-grained, this is a plausible formalization of âsimilarity.â As the iden- tity of the class in not revealed, we call it unlabeled data. Currently empirical works heuristically identify such similar pairs from co-occurring image or text data.
# Supervised Tasks
We now characterize the tasks that a representation function f will be tested on. A (k + 1)-way3 supervised task T consists of distinct classes {c1, . . . , ck+1} â C. The labeled dataset for the task T consists of m i.i.d. draws from the following process:
A label c â {c1, ..., ck+1} is picked according to a distribu- tion DT . Then, a sample x is drawn from Dc. Together they form a labeled pair (x, c) with distribution
# 2. Framework for Contrastive Learning
DT (x, c) = Dc(x)DT (c) (3)
We first set up notation and describe the framework for unlabeled data and classification tasks that will be essential for our analysis. Let ¥ denote the set of all possible data points. Contrastive learning assumes access to similar data in the form of pairs (a, x+) that come from a distribution Dsim as well as k i.i.d. negative samples x; , x3 , from a distribution D,,¢, that are presumably unrelated to x. Learning is done over F, a class of representation functions f : ¥ > R4, such that || f(-)|| <R for some R > 0.
A key subtlety in this formulation is that the classes in downstream tasks and their associated data distributions Dc are the same as in the unlabeled data. This provides a path to formalizing how capturing similarity in unlabeled data can lead to quantitative guarantees on downstream tasks. DT is assumed to be uniform4 for theorems in the main paper.
# Evaluation Metric for Representations
# Latent Classes
To formalize the notion of semantically similar pairs (x, x+), we introduce the concept of latent classes.
The quality of the representation function f is evaluated by its performance on a multi-class classiï¬cation task T using linear classiï¬cation. For this subsection, we ï¬x a task T = {c1, ..., ck+1}. A multi-class classiï¬er for T is a function g : X â Rk+1 whose output coordinates are indexed by the classes c in task T .
Let C denote the set of all latent classes. Associated with each class c â C is a probability distribution Dc over X .
The loss incurred by g on point (x, y) â X à T is deï¬ned
Roughly, Dc(x) captures how relevant x is to class c. For example, X could be natural images and c the class âdogâ whose associated Dc assigns high probability to images
2An image of a dog by a tree can appear in both Ddog & Dtree. 3We use k as the number of negative samples later. 4We state and prove the general case in the Appendix.
Contrastive Unsupervised Representation Learning
as ({9(2)y â 9(@)y}y'4y)» which is a function of a k- dimensional vector of differences in the coordinates. The two losses we will consider in this work are the standard hinge loss ¢(v) = max{0, 1+max;{âv;}} and the logistic loss ¢(v) = log, (1 + 30; exp(âv;)) for v ⬠R*. Then the supervised loss of the classifier g is
Lsup(T 9) = owe [E({a(@)câ g(x) e Forte) | v,c)~Dr
Note that, by the assumptions of the framework described above, we can now express the unsupervised loss as
Lun(f )
~ F(@))})]
To use a representation function f with a linear classiï¬er, a matrix W â R(k+1)Ãd is trained and g(x) = W f (x) is used to evaluate classiï¬cation loss on tasks. Since the best W can be found by ï¬xing f and training a linear classiï¬er, we abuse notation and deï¬ne the supervised loss of f on T to be the loss when the best W is chosen for f :
The algorithm to learn a representation function from F is to find a function f ⬠argminger Lun(f) that minimizes the empirical unsupervised loss. This function fican be subsequently used for supervised linear classification tasks. In the following section we proceed to give an overview of our results that stem from this framework.
Lsup(T , f ) = inf W âR(k+1)Ãd Lsup(T , W f ) (4)
# 3. Overview of Analysis and Results
Crucial to our results and experiments will be a speciï¬c W where the rows are the means of the representations of each class which we deï¬ne below. Deï¬nition 2.1 (Mean Classiï¬er). For a function f and task T = (c1, . . . , ck+1), the mean classiï¬er is W µ whose cth row is the mean µc of representations of inputs with label c: sup(T , f ) := Lsup(T , W µf ) µc := E
# as shorthand for its loss.
What can one provably say about the performance of fp? As a first step we show that L,,,, is like a âsurrogateâ for Lup by showing that Lsup(f) < aLun(f),Vf ⬠F, sug- gesting that minimizing L,,,, makes sense. This lets us show a bound on the supervised performance Lsup(f) of the representation learned by the algorithm. For instance, when training with one negative sample, the performance on average binary classification has the following guarantee:
Since contrastive learning has access to data with latent class distribution Ï, it is natural to have better guarantees for tasks involving classes that have higher probability in Ï. Deï¬nition 2.2 (Average Supervised Loss). Average loss for a function f on (k + 1)-way tasks is deï¬ned as
Leup(f)= EB [Lsup({ei}R4Y, f) lee A ey] ci} htinpktt
The average supervised loss of its mean classiï¬er is
Lup (f) = BE [Loup (fet. S) Lei #7] {esti ~ekt
Theorem 4.1 (Informal binary version).
Leup(f f) < aLun(f) +n Gen +6 Vf EF
where α, η, δ are constants depending on the distribution Ï and GenM â 0 as M â â. When Ï is uniform and |C| â â, we have that α, η â 1, δ â 0.
At first glance, this bound seems to offer a somewhat com- plete picture: When the number of classes is large, if the unsupervised loss can be made small by F, then the super- vised loss of f, learned using finite samples, is small.
# Contrastive Learning Algorithm
We describe the training objective for contrastive learning: the choice of loss function is dictated by the @ used in the supervised evaluation, and k denotes number of neg- ative samples used for training. Let (x,2*) ~ Dsim, (ay ,..,0,) ~ Dk., as defined in Equations (1) and (2). Definition 2.3 (Unsupervised Loss). The population loss is Lin(F) = B[E({F@)" (Fe*) - Fe) HL) ] ©
Lin(F) = B[E({F@)" (Fe*) - Fe) HL) ] ©
While encouraging, this result still leaves open the question: Can Lun(f) indeed be made small on reasonable datasets using function classes F of interest, even though the similar pair and negative sample can come from the same latent class? We shed light on this by upper-bounding L,,,(f) by two components: (a) the loss L%, (f}) for the case where the positive and negative samples are from different classes; (b) a notion of deviation s(f), within each class.
Theorem 4.5 (Informal binary version).
and (xj, x+ its j , xâ counterpart with M samples j=1 from Dsim à Dk neg is
# empirical â)M yey Bi )jar
Eun = He (ee | (F(e}) â fox) }4,) (6)
(6)
Loup(f) < Lin(f) +881) +nGenm Whe F
for constants β, η that depend on the distribution Ï. Again, when Ï is uniform and |C| â â we have β â 0, η â 1.
This bound lets us infer the following: if the class F is rich enough to contain a function f for which L%.,,(f) + Bs(f) is
Contrastive Unsupervised Representation Learning
low, then f has high supervised performance. Both L%,(f) and s(f) can potentially be made small for rich enough F.
Ideally, however, one would want to show that fan com- pete on classification tasks with every f ⬠F
Leup(f) < OL sup(f) + Gena (7) (Ideal Result):
Remark. The complexity measure Rs (F) is tightly related to the labeled sample complexity of the classification tasks. For the function class G = {w⢠f(-)|f ⬠F, ||w|| < 1} that one would use to solve a binary task from scratch using labeled data, it can be shown that Rs(F) < dRs(G), where Rs (G) is the usual Rademacher complexity of G on S (Definition 3.1 from (Mohri et al., 2018)).
Unfortunately, we show in Section 5.1 that the algorithm can pick something far from the optimal f . However, we extend Theorem 4.5 to a bound similar to (7) (where the classiï¬cation is done using the mean classiï¬er) under as- sumptions about the intraclass concentration of f and about its mean classiï¬er having high margin.
We state two key lemmas needed to prove the theorem.
Lemma 4.2. With probability at least 1âδ over the training set S, for all f â F
Lun(f) < Lun(f) + Genm
Sections 6.1 and 6.2 extend our results to the more compli- cated setting where the algorithm uses k negative samples (5) and note an interesting behavior: increasing the num- ber of negative samples beyond a threshold can hurt the performance. In Section 6.3 we show a novel extension of the algorithm that utilizes larger blocks of similar points. Finally, we perform controlled experiments in Section 8 to validate components of our framework and corroborate our suspicion that the mean classiï¬er of representations learned using labeled data has good classiï¬cation performance.
We prove Lemma 4.2 in Appendix A.3.
Lemma 4.3. For all f â F
Lµ sup(f ) ⤠1 (1 â Ï ) (Lun(f ) â Ï )
Proof. The key idea in the proof is the use of Jensenâs in- equality. Unlike the unsupervised loss which uses a random point from a class as a classiï¬er, using the mean of the class as the classiï¬er should only make the loss lower. Let µc = E
# 4. Guaranteed Average Binary Classiï¬cation
# xâ¼Dc
To provide the main insights, we prove the algorithmâs guar- antee when we use only | negative sample (k = 1). For this section, let Lsup(f) and L4.,,,(f) be as in Definition 2.2 for binary tasks. We will refer to the two classes in the supervised task as well as the unsupervised loss as c+, c~. Let S = {xj,27, uy Mi be our training set sampled from the distribution Dsim X Dneg and f ⬠argmingey Lun (f).
4.1. Upper Bound using Unsupervised Loss â(f, + - 3dM Let fis = (flay), flap), fol; )) jelnn)veta ⬠RX be the restriction on S for any f ⬠F. Then, the statistical complexity measure relevant to the estimation of the repre- sentations is the following Rademacher average Rs(F) = [sup(o, fis)]
Rs(F) = _ [sup(o, fis)] on{t1}34M | per
Lett = E_ 1{c =câ}be the probability that two classes 2 cen,
sampled independently from Ï are the same. Theorem 4.1. With probability at least 1 â δ, for all f â F
Lun(f) = E [ef (@)* (F(a*) â f(@7)))] (2,27 )~Desim &~~Dneg =O EB | E [ef@)"(fe*) - fe) ct eo Np? at Diy Dot 2 VD, 20 Bg BE [&F(@)" (Het = te-))] ct em Np? 2D 4 =O (1-7) BE [Eh (eto) Alet zed +r ct yem Npâ = (La) (f) +7
where (a) follows from the definitions in (1) and (2), (b) follows from the convexity of £ and Jensenâs inequality by taking the expectation over x*, x~ inside the function, (c) follows by splitting the expectation into the cases ct = c~ and ct # c~, from symmetry in ct and c~ in sampling and since classes in tasks are uniformly distributed (general distributions are handled in Appendix B.1). Rearranging terms completes the proof.
> 1 1 Hfy< - ââGeny Leup(f) < a-7 (Lun(f) = 7) + a- cen
Proof of Theorem 4.1. The result follows directly by apply- ing Lemma 4.3 for f and finishing up with Lemma 4.2.
where
Rs(F log + Geny =O[R st dR =
One could argue that if F is rich enough such that L,,, can be made small, then Theorem 4.1 suffices. However, in the next section we explain that unless 7 < 1, this may not always be possible and we show one way to alleviate this.
Contrastive Unsupervised Representation Learning
# 4.2. Price of Negative Sampling: Class Collision
# 5. Towards Competitive Guarantees
Note ï¬rst that the unsupervised loss can be decomposed as
Lun(f) = TLhin(f) + (1 = T)LZn (SP) (8)
where L%,(f) is the loss suffered when the similar pair and the negative sample come from different classes.
We provide intuition and counter-examples for why con- trastive learning does not always pick the best supervised representation f â F and show how our bound captures these. Under additional assumptions, we show a competitive bound where classiï¬cation is done using the mean classiï¬er.
Lif)
# un(f ) =
= £E ct co Np? [e(Fl@)" (f(w*) â Fla-))le* #7]
# 5.1. Limitations of contrastive learning
The bound provided in Theorem 4.5 might not appear as the most natural guarantee for the algorithm. Ideally one would like to show a bound like the following: for all f â F,
(Ideal 1): Lsup(f) < 0Lsup(f) +n Gen â (10)
and L= be a distribution over C with ν(c) â Ï2(c), then
L(= B, (e¢@"(e*) - fle) wa" a sD 2 E [EF (@)" (Me _ He))] =1 ew, a2~De
for constants a, a] and generalization error Genyy. This guarantees that f is competitive against the best f on the average binary classification task. However, the bound we prove has the following form: for all f ⬠F,
LEP) S oLi,(f) + Bs(f) +7 Genar
by Jensenâs inequality again, which implies Ly» (f) > 7. In general, without any further assumptions on f, Lun(f) can be far from T, rendering the bound in Theorem 4.1 useless. However, as we will show, the magnitude of L7,,(f) can be controlled by the intraclass deviation of f. Let 5(f,c) the covariance matrix of f(x) when z ~ D,. We define a notion of intraclass deviation as follows: (= E [VEE E I@l] ©
(= E [VEE E I@l] ©
Lemma 4.4. For all f â F,
Lin(f) â1< e's(f)
To show that this discrepancy is not an artifact of our anal- ysis but rather stems from limitations of the algorithm, we present two examples in Figure 1. Our bound appropriately captures these two issues individually owing to the large values of L*(f) or s(f) in each case, for the optimal f.
In Figure 1a, we see that there is a direction on which f; can be projected to perfectly separate the classes. Since the algorithm takes inner products between the representations, it inevitably considers the spurious components along the orthogonal directions. This issue manifests in our bound as the term L7, (f;) being high even when s(f,) = 0. Hence, contrastive learning will not always work when the only guarantee we have is that F can make Ls,» small.
where c! is a positive constant.
We prove Lemma 4.4 in Appendix A.1. Theorem 4.1 com- bined with Equation (8) and Lemma 4.4 gives the following result.
Theorem 4.5. With probability at least 1 â δ, âf â F
This should not be too surprising, since we show a relatively strong guarantee â a bound on L¢,,,, for the mean classifier of f. This suggests a natural stronger assumption that F can make Lf,,,, small (which is observed experimentally in Section 8 for function classes of interest) and raises the question of showing a bound that looks like the following: for all f ⬠F,
Leup(f) S Lhup(f) S Lin (f) + 8 s(f) + Genus where 8 = est, n= â_ and c! is a constant.
The above bound highlights two sufficient properties of the function class for unsupervised learning to work: when the function class F is rich enough to contain some f with low Bs(f) as well as low L%,(f) then f, the empirical mini- mizer of the unsupervised loss â learned using sufficiently large number of samples â will have good performance on supervised tasks (low Lsup(f)).
(Ideal 2): buy (f) < aL, (f) + nGenu (1)
without accounting for any intraclass deviation â recall that 8(f) captures a notion of this deviation in our bound. How- ever this is not true: high intraclass deviation may not imply high L4.,,(f), but can make Lz,,(f) (and thus Lun(f)) high, resulting in the failure of the algorithm. Consequently, the term s(f) also increases while L%, does not necessarily have to. This issue, apparent in Figure 1b, shows that a guar- antee like (11) cannot be shown without further assumptions.
Contrastive Unsupervised Representation Learning
(a) Mean is bad (b) High intraclass variance
geswaran & Lee (2018) that often use more than one nega- tive sample for every similar pair, we show provable guar- antees for this case by careful handling of class collision. Additionally, in Section 6.2 we show simple examples where increasing negative samples beyond a certain threshold can hurt contrastive learning. Second, in Section 6.3, we ex- plore a modiï¬ed algorithm that leverages access to blocks of similar data, rather than just pairs and show that it has stronger guarantees as well as performs better in practice.
Figure 1. In both examples we have uniform distribution over classes C = {c1, c2}, blue and red points are in c1 and c2 re- spectively and Dci is uniform over the points of ci. In the ï¬rst ï¬gure we have one point per class, while in the second we have two points per class. Let F = {f0, f1} where f0 maps all points to (0, 0) and f1 is deï¬ned in the ï¬gure. In both cases, using the hinge loss, Lsup(f1) = 0, Lsup(f0) = 1 and in the second case Lµ sup(f1) = 0. However, in both examples the algorithm will pick f0 since Lun(f0) = 1 but Lun(f1) = â¦(r2).
# 6.1. Guarantees for k Negative Samples
Here the algorithm utilizes k negative samples x; ,...,7, drawn i.i.d. from Deg for every positive sample pair x, * drawn from Dim, and minimizes (6). As in Section 4, we prove a bound for f of the following form: Theorem 6.1. (Informal version) For all f © F
Loup (f) < Lhup(f) < oLEn(f) + Bs(f) +17 Gen
# 5.2. Competitive Bound via Intraclass Concentration
We saw that Lf,,,(f) being small does not imply low Lt, (f), if f is not concentrated within the classes. In this section we show that when there is an f that has intra- class concentration in a strong sense (sub-Gaussianity) and can separate classes with high margin (on average) with the mean classifier, then Lh (Pf) will be low.
where L%,(f) and Gena are extensions of the correspond- ing terms from Section 4 and s(f) remains unchanged. The formal statement of the theorem and its proof appears in Appendix B.1. The key differences from Theorem 4.5 are 3 and the distribution of tasks in £.,,) that we describe below. The coefficient 8 of s(f) increases with k, e.g. when p is uniform and k < |C|, 8 = rae
Let ¢,(x) = (1 â a+ be the hinge loss with margin and LS sup(f) be L4.,,(f) with the loss function ¢,. sup
Lemma 5.1. For f ⬠F, if the random variable f(X), where X ~ D,, is o?-sub-Gaussian in every direction for every class cand has maximum norm R = maxzex| (x) then for alle > 0,
L2n(f) < 7M cup(f) + â¬
The average supervised loss that we bound is
Loup A) = E, [Loun(T. A)|
where D is a distribution over tasks, defined as follows: sample k + 1 classes c+, cy ,...,¢, ~ p**+, conditioned on the event that ct does not also appear as a negative sample. Then, set 7 to be the set of distinct classes in {ct ,cy,...,¢ }. LH (A) is defined by using LY.,,(T, f).
where 7 = 1+ ¢'Roy/log © and c! is some constant.
The proof of Lemma 5.1 is provided in the Appendix A.2. Using Lemma 5.1 and Theorem 4.5, we get the following:
Corollary 5.1.1. For all ⬠> 0, with probability at least 1-6, forall f ⬠F,
Remark. Bounding Leup(f) directly gives a bound for average (k + 1)-wise classification loss Lup(F) from Def- inition 2.2, since Lsup(F) < Leup(F)/p. where p is the probability that the k + 1 sampled classes are distinct. For k < |C| and p © uniform, these metrics are almost equal.
Loup f) S WALA py sup(f) + Bs(f) + nGena + â¬
We also extend our competitive bound from Section 5.2 for the above f in Appendix B.2.
where >(f) is as defined in Lemma 5.1, B = cx, ) = <2, and Cis a constant.
# 6.2. Effect of Excessive Negative Sampling
# 6. Multiple Negative Samples and Block Similarity
In this section we explore two extensions to our analysis. First, in Section 6.1, inspired by empirical works like Lo-
The standard belief is that increasing the number of negative samples always helps, at the cost of increased computational costs. In fact for Noise Contrastive Estimation (NCE) (Gut- mann & Hyv¨arinen, 2010), which is invoked to explain the success of negative sampling, increasing negative samples has shown to provably improve the asymptotic variance of
Contrastive Unsupervised Representation Learning
the learned parameters. However, we ï¬nd that such a phe- nomenon does not always hold for contrastive learning â larger k can hurt performance for the same inherent reasons highlighted in Section 5.1, as we illustrate next.
When p is close to uniform and the number of negative samples is k = ((|C]), frequent class collisions can prevent the unsupervised algorithm from learning the representation f © F that is optimal for the supervised problem. In this case, owing to the contribution of s(f) being high, a large number of negative samples could hurt. This problem, in fact, can arise even when the number of negative samples is much smaller than the number of classes. For instance, if the best representation function f ⬠F groups classes into t âclustersâ, such that f cannot contrast well between classes from the same cluster, then L%, will contribute to the unsupervised loss being high even when k = Q(t). We illustrate, by examples, how these issues can lead to picking suboptimal fin Appendix C. Experimental results in Fig- ures 2a and 2b also suggest that larger negative samples hurt performance beyond a threshold, confirming our suspicions.
# 6.3. Blocks of Similar Points
Often a dataset consists of blocks of similar data instead of just pairs: a block consists of x0, x1, . . . xb that are i.i.d. draws from a class distribution Dc for a class c â¼ Ï. In text, for instance, paragraphs can be thought of as a block of sentences sampled from the same latent class. How can an algorithm leverage this additional structure?
making it a more attractive choice than Lun when larger blocks are available.6. The algorithm can be extended, anal- ogously to Equation (5), to handle more than one negative block. Experimentally we ï¬nd that minimizing Lblock in- stead of Lun can lead to better performance and our results are summarized in Section 8.2. We defer the proof of Propo- sition 6.2 to Appendix A.4.
# 7. Related Work
The contrastive learning framework is inspired by several empirical works, some of which were mentioned in the introduction. The use of co-occurring words as semanti- cally similar points and negative sampling for learning word embeddings was introduced in Mikolov et al. (2013). Subse- quently, similar ideas have been used by Logeswaran & Lee (2018) and Pagliardini et al. (2018) for sentences representa- tions and by Wang & Gupta (2015) for images. Notably the sentence representations learned by the quick thoughts (QT) method in Logeswaran & Lee (2018) that we analyze has state-of-the-art results on many text classiï¬cation tasks. Pre- vious attempts have been made to explain negative sampling (Dyer, 2014) using the idea of Noise Contrastive Estima- tion (NCE) (Gutmann & Hyv¨arinen, 2010) which relies on the assumption that the data distribution belongs to some known parametric family. This assumption enables them to consider a broader class of distributions for negative sam- pling. The mean classiï¬er that appears in our guarantees is of signiï¬cance in meta-learning and is a core component of ProtoNets (Snell et al., 2017).
We propose an algorithm that uses two blocks: one for positive samples x, x+ b that are i.i.d. samples from c+ â¼ Ï and another one of negative samples xâ b that are i.i.d. samples from câ â¼ Ï. Our proposed algorithm then minimizes the following loss:
# block
# block ( f) :=
wes (Zeftet) Eatle2))) (12)
To understand why this loss function make sense, recall that the connection between Lµ sup and Lun was made in Lemma 4.3 by applying Jensenâs inequality. Thus, the algorithm that uses the average of the positive and negative samples in blocks as a proxy for the classiï¬er instead of just one point each should have a strictly better bound owing to the Jensenâs inequality getting tighter. We formalize this intuition below. Let Ï be as deï¬ned in Section 4. Proposition 6.2. âf â F
Loup(F) Spa (EMG (A) = 7) SA Lan) =) l-T
Our data model for similarity is reminiscent of the one in co-training (Blum & Mitchell, 1998). They assume access to pairs of âviewsâ with the same label that are conditionally independent given the label. Our unlabeled data model can be seen as a special case of theirs, where the two views have the same conditional distributions. However, they addition- ally assume access to some labeled data (semi-supervised), while we learn representations using only unlabeled data, which can be subsequently used for classiï¬cation when la- beled data is presented. Two-stage kernel learning (Cortes et al., 2010; Kumar et al., 2012) is similar in this sense: in the ï¬rst stage, a positive linear combination of some base kernels is learned and is then used for classiï¬cation in the second stage; they assume access to labels in both stages. Similarity/metric learning (Bellet et al., 2012; 2013) learns a linear feature map that gives low distance to similar points and high to dissimilar. While they identify dissimilar pairs using labels, due to lack of labels we resort to negative sam- pling and pay the price of class collision. While these works analyze linear function classes, we can handle arbitrarily powerful representations. Learning of representations that
This bound tells us that Lblock un is a better surrogate for Lsup,
5This can happen when F is not rich enough.
6Rigorous comparison of the generalization errors is left for future work.
Contrastive Unsupervised Representation Learning
Table 1. Performance of supervised and unsupervised represen- tations on average k-wise classiï¬cation tasks (AVG-k) and for comparison, on full multiclass (TOP-R) which is not covered by our theory. Classiï¬er can have a trained output layer (TR), or the mean classiï¬er (µ) of Deï¬nition 2.1, with µ-5 indicating the mean was computed using only 5 labeled examples.
Table 2. Effect of larger block size on representations. For CIFAR- 100 and WIKI-3029 we measure the average binary classiï¬cation accuracy. IMDB representations are tested on IMDB supervised task. CURL is our large block size contrastive method, QT is the algorithm from (Logeswaran & Lee, 2018). For larger block sizes, QT uses all pairs within a block as similar pairs. We use the same GRU architecture for both CURL and QT for a fair comparison.
SUPERVISED µ-5 TR µ UNSUPERVISED µ-5 TR µ WIKI-3029 AVG-2 AVG-10 97.8 97.7 97.0 89.1 87.2 83.1 97.3 97.7 88.4 87.4 96.9 83.5 TOP-10 TOP-1 67.4 59.0 48.2 43.2 33.2 21.7 64.7 59.0 38.7 30.4 45.8 17.0 CIFAR-100 AVG-2 AVG-5 97.2 95.9 95.8 92.7 89.8 89.4 93.2 92.0 80.9 79.4 90.6 75.7 TOP-5 TOP-1 88.9 83.5 82.5 72.1 69.9 67.3 70.4 65.6 36.9 31.8 59.0 25.0
DATASET METHOD b = 2 b = 5 b = 10 CIFAR-100 WIKI-3029 CURL CURL 88.1 96.6 89.6 97.5 89.7 97.7 IMDB CURL QT 89.2 86.5 89.6 87.7 89.7 86.7
# 8.1. Controlled Experiments
are broadly useful on a distribution of tasks is done in mul- titask learning, speciï¬cally in the learning-to-learn model (Maurer et al., 2016) but using labeled data.
Recently Hazan & Ma (2016) proposed âassumption-freeâ methods for representation learning via MDL/compression arguments, but do not obtain any guarantees comparable to ours on downstream classiï¬cation tasks. As noted by Arora & Risteski (2017), this compression approach has to preserve all input information (e.g. preserve every pixel of the image) which seems suboptimal.
# 8. Experimental Results
We report experiments in text and vision domains supporting our theory. Since contrastive learning has already shown to obtain state-of-the-art results on text classiï¬cation by quick thoughts (QT) in Logeswaran & Lee (2018), most of our experiments are conducted to corroborate our theoretical analysis. We also show that our extension to similarity blocks in Section 6.3 can improve QT on a real-world task.
To simulate the data generation process described in Sec- tion 2, we generate similar pairs (blocks) of data points by sampling from the same class. Dissimilar pairs (negative samples) are selected randomly. Contrastive learning was done using our objectives (5), and compared to performance of standard supervised training, with both using the same architecture for representation f . For CIFAR-100 we use VGG-16 (Simonyan & Zisserman, 2014) with an additional 512x100 linear layer added at the end to make the ï¬nal representations 100 dimensional, while for Wiki-3029 we use a Gated Recurrent Network (GRU) (Chung et al., 2015) with output dimension 300 and ï¬x the word embedding layer with pretrained GloVe embeddings (Pennington et al., 2014). The unsupervised model for CIFAR-100 is trained with 500 blocks of size 2 with 4 negative samples, and for Wiki-3029 we use 20 blocks of size 10 with 8 negative sam- ples. We test (1) learned representations on average tasks by using the mean classiï¬er and compare to representations trained using labeled data; (2) the effect of various parame- ters like amount of unlabeled data (N )7, number of negative samples (k) and block size (b) on representation quality; (3) whether the supervised loss tracks the unsupervised loss as suggested by Theorem 4.1; (4) performance of the mean classiï¬er of the supervised model.
Datasets: Two datasets were used in the controlled exper- iments. (1) The CIFAR-100 dataset (Krizhevsky, 2009) consisting of 32x32 images categorized into 100 classes with a 50000/10000 train/test split. (2) Lacking an appropri- ate NLP dataset with large number of classes, we create the Wiki-3029 dataset, consisting of 3029 Wikipedia articles as the classes and 200 sentences from each article as sam- ples. The train/dev/test split is 70%/10%/20%. To test our method on a more standard task, we also use the unsuper- vised part of the IMDb review corpus (Maas et al., 2011), which consists of 560K sentences from 50K movie reviews. Representations trained using this corpus are evaluated on the supervised IMDb binary classiï¬cation task, consisting of training and testing set with 25K reviews each.
Results: These appear in Table 1. For Wiki-3029 the un- supervised performance is very close to the supervised per- formance in all respects, while for CIFAR-100 the avg-k performance is respectable, rising to good for binary clas- siï¬cation. One surprise is that the mean classiï¬er, central to our analysis of unsupervised learning, performs well also with representations learned by supervised training on CIFAR-100. Even the mean computed by just 5 labeled sam- ples performs well, getting within 2% accuracy of the 500
7If we used M similar blocks of size b and k negative blocks for each similar block, N = M b(k + 1). In practice, however, we reuse the blocks for negative sampling and lose the factor of k + 1.
Contrastive Unsupervised Representation Learning
(a) CIFAR-100 (b) Wiki-3029 (c) Wiki-3029
Figure 2. Effect of amount of unlabeled data and # of negative samples on unsupervised representations, measured on binary classiï¬cation for CIFAR100 in (a) and on top-1 performance on Wiki-3029 in Fig (b) (top-1 performance is used because avg binary was same for all k). Fig. (c) shows the dynamics of train/test loss; supervised loss roughly tracks unsupervised test loss, as suggested by Theorem 4.1
sample mean classiï¬er on CIFAR-100. This suggests that representations learnt by standard supervised deep learn- ing are actually quite concentrated. We also notice that the supervised representations have fairly low unsupervised training loss (as low as 0.4), even though the optimization is minimizing a different objective.
To measure the sample complexity beneï¬t provided by con- trastive learning, we train the supervised model on just 10% fraction of the dataset and compare it with an unsupervised model trained on unlabeled data whose mean classiï¬ers are computed using the same amount of labeled data. We ï¬nd that the unsupervised model beats the supervised model by almost 4% on the 100-way task and by 5% on the average binary task when only 50 labeled samples are used.
Figure 2 highlights the positive effect of increasing num- ber of negative samples as well as amount of data used by unsupervised algorithm. In both cases, using a lot of neg- ative examples stops helping after a point, conï¬rming our suspicions in Section 6.2. We also demonstrate how the supervised loss tracks unsupervised test loss in Figure 2c.
# 8.2. Effect of Block Size
As suggested in Section 6.3, a natural extension to the model would be access to blocks of similar points. We refer to our method of minimizing the loss in (12) as CURL for Con- trastive Unsupervised Representation Learning and perform experiments on CIFAR-100, Wiki-3029, and IMDb. In Ta- ble 2 we see that for CIFAR-100 and Wiki-3029, increasing block size yields an improvement in classiï¬cation accuracy. For IMDb, as is evident in Table 2, using larger blocks provides a clear beneï¬t and the method does better than QT, which has state-of-the-art performance on many tasks. A thorough evaluation of CURL and its variants on other unlabeled datasets is left for future work.
# 9. Conclusion
Contrastive learning methods have been empirically success- ful at learning useful feature representations. We provide a new conceptual framework for thinking about this form of learning, which also allows us to formally treat issues such as guarantees on the quality of the learned representations. The framework gives fresh insights into what guarantees are possible and impossible, and shapes the search for new assumptions to add to the framework that allow tighter guar- antees. The framework currently ignores issues of efï¬cient minimization of various loss functions, and instead studies the interrelationships of their minimizers as well as sample complexity requirements for training to generalize, while clarifying what generalization means in this setting. Our approach should be viewed as a ï¬rst cut; possible exten- sions include allowing tree structure â more generally met- ric structure â among the latent classes. Connections to meta-learning and transfer learning may arise.
We use experiments primarily to illustrate and support the new framework. But one experiment on sentence embed- dings already illustrates how fresh insights derived from our framework can lead to improvements upon state-of-the-art models in this active area. We hope that further progress will follow, and that our theoretical insights will begin to inï¬uence practice, including design of new heuristics to identify semantically similar/dissimilar pairs.
# 10. Acknowledgements
This work is supported by NSF, ONR, the Simons Founda- tion, the Schmidt Foundation, Mozilla Research, Amazon Research, DARPA, and SRC. We thank Rong Ge, Elad Hazan, Sham Kakade, Karthik Narasimhan, Karan Singh and Yi Zhang for helpful discussions and suggestions.
Contrastive Unsupervised Representation Learning
# References
Arora, S. and Risteski, A. Provable beneï¬ts of representa- tion learning. arXiv, 2017.
Logeswaran, L. and Lee, H. An efï¬cient framework for learning sentence representations. In Proceedings of the International Conference on Learning Representations, 2018.
Bellet, A., Habrard, A., and Sebban, M. Similarity learning for provably accurate sparse linear classiï¬cation. arXiv preprint arXiv:1206.6476, 2012.
Bellet, A., Habrard, A., and Sebban, M. A survey on metric learning for feature vectors and structured data. CoRR, abs/1306.6709, 2013.
Blum, A. and Mitchell, T. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, COLTâ 98, 1998.
Maas, A. L., Daly, R. E., Pham, P. T., Huang, D., Ng, A. Y., and Potts, C. Learning word vectors for sentiment anal- ysis. In Proceedings of the 49th Annual Meeting of the ACL: Human Language Technologies, 2011.
Maurer, A. A vector-contraction inequality for rademacher complexities. In International Conference on Algorithmic Learning Theory, pp. 3â17. Springer, 2016.
Maurer, A., Pontil, M., and Romera-Paredes, B. The beneï¬t of multitask representation learning. J. Mach. Learn. Res., 2016.
Chung, J., Gulcehre, C., Cho, K., and Bengio, Y. Gated feedback recurrent neural networks. In Proceedings of the 32Nd International Conference on International Con- ference on Machine Learning - Volume 37, ICMLâ15. JMLR.org, 2015.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. Distributed representations of words and phrases and their compositionality. In Neural Information Pro- cessing Systems, 2013.
Cortes, C., Mohri, M., and Rostamizadeh, A. Two-stage learning kernel algorithms. 2010.
Mohri, M., Rostamizadeh, A., and Talwalkar, A. Founda- tions of machine learning. MIT press, 2018.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for lan- guage understanding. arXiv preprint arXiv:1810.04805, 10 2018.
Pagliardini, M., Gupta, P., and Jaggi, M. Unsupervised learning of sentence embeddings using compositional n-gram features. Proceedings of the North American Chapter of the ACL: Human Language Technologies, 2018.
Dyer, C. Notes on noise contrastive estimation and negative sampling. CoRR, abs/1410.8251, 2014. URL http: //arxiv.org/abs/1410.8251.
Pennington, J., Socher, R., and Manning, C. D. GloVe: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing, 2014.
Gutmann, M. and Hyv¨arinen, A. Noise-contrastive esti- mation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth Inter- national Conference on Artiï¬cial Intelligence and Statis- tics, pp. 297â304, 2010.
Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. Deep contextualized word representations. In Proceedings of NAACL-HLT, 2018.
Hazan, E. and Ma, T. A non-generative framework and convex relaxations for unsupervised learning. In Neural Information Processing Systems, 2016.
Simonyan, K. and Zisserman, A. Very deep convolu- tional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Kiros, R., Zhu, Y., Salakhutdinov, R., Zemel, R. S., Torralba, A., Urtasun, R., and Fidler, S. Skip-thought vectors. In Neural Information Processing Systems, 2015.
Snell, J., Swersky, K., and Zemel, R. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems 30. 2017.
Krizhevsky, A. Learning multiple layers of features from tiny images. Technical report, 2009.
Wang, X. and Gupta, A. Unsupervised learning of visual representations using videos. In Proc. of IEEE Interna- tional Conference on Computer Vision, 2015.
Kumar, A., Niculescu-Mizil, A., Kavukcoglu, K., and Daum´e, H. A binary classiï¬cation framework for two- stage multiple kernel learning. In Proceedings of the 29th International Coference on International Conference on Machine Learning, ICMLâ12, 2012.
Contrastive Unsupervised Representation Learning
# A. Deferred Proofs
# A.1. Class Collision Lemma
We prove a general Lemma, from which Lemma 4.4 can be derived directly. Lemma A.1. Let c ⬠C and £: R' > R be either the t-way hinge loss or t-way logistic loss, as defined in Section 2. Let x,x+,a,,...,0;, be iid draws from D,. For all f ⬠F, let
Linef)= EB _[e({F@)? (Fe*) - Fa) Fi.) wat ey
Then
Lin elf) â â¬(0) < ctVTE(F lle EB Ule@ll (13)
where c! is a positive constant.
Lemma 4.4 is a direct consequence of the above Lemma, by setting t = 1 (which makes ¢(0) = 1), taking an expectation over c ~ v in Equation (13) and noting that Ee~[Lij,e(f)|] = Lin (f)-
Proof of Lene A Fix an f ⬠F and let z; = f(a)? (f(a ui Line(f) â £(0) < cE||2|], for some constant câ. Note that E[| |] O)E[z|z > 0). )=f(« +)) and z = maxjej] zi. First, we show that = Plz > OjE[z|z > 0) + P[z < OJE[âz|z < 0] > P[z >
un,c(f ) = E[(1+z)+] ⤠E[max{1+z, 1}] =
t-way hinge loss: By definition ((v) = max{0, 1+-maxjej4 {âvi}}. Here, Lz, 1+ Plz > OjE[z|z > 0] < 1+ Efjz|]. t-way logistic loss: By definition ¢(v) = loga(1 + S7)_, e~â), we have Lz, te*)] < max{ gq + logs(1 + #), log (1 + )} = âEBCEE20 + tog (1
t-way logistic loss: By definition ¢(v) = loga(1 + S7)_, e~â), we have Lz, .(f) = Eflogy(1 + 47, e**)] < Eflog.(1 + te*)] < max{ gq + logs(1 + #), log (1 + )} = âEBCEE20 + tog (1 +2) < FEL + logy (1 +2).
log 2 + log2(1 + t), log2(1 + t)} = Finally, E[|z|] ⤠E[maxiâ[t] |zi|] ⤠tE[|z1|]. But,
Bllta|] = Ey o+e> []F(@)? (fer) - F@*))| J IA B, | I/@Ihy Boras (Ae (ser) - se) | < VIVE Oe 8, tel
# A.2. Proof of Lemma 5.1
Fix an f ⬠F and suppose that within each class c, f is c?-subgaussian in every direction. * Let p, = > LF x)|. This means that for all c ⬠C and unit vectors v, for x ~ Dz, we have that v? Hon i is o? -subgaussian. Tet e⬠> 0 and y= 14 2Ro\/2log R + log3/e. ° Consider fixed ct, c~,x and let f(x)? (f(a~) â f(xt)) = w+ z, where
W= F(2)" (Me â Mer) and z= f(a)" (F(@~) â He-) â F(a)? (F(@*) â Her)
For «+ ~ Dt, x~ ~ D> independently, z is the sum of two independent R?o?-subgaussians (x is fixed), so z is 2R?0?- 4R?2 42 (2 log R+log 3/e) subgaussian and thus p = Pr[z > y â 1] < e~ R202 = gpz- So, Ez[(l+u+2)4] < A âp)(yt+p)4+ p(2R? +1) <71+4# f)+ + ⬠(where we used that p+ 2 < 2R?). By taking expectation over ct,c~ ~ p?, x ~ D.+ we
8 random variable X is called o?-subgaussian if Ele * =X) < err, VA ⬠R. A random vector V ⬠R@ is o-subgaussian in every direction, if Vu ⬠Râ, ||u|| = 1, the random variable (u, V) is o?-subgaussian.
°We implicitly assume here that R > 1, but for R < 1, we just set -y = 1 + 2Roy/log 3/e and the same argument holds.
Contrastive Unsupervised Representation Learning
have
1 f(a)? (e- = ber) ctde| ae MNS EB i(a4 : ). -| - rvD.+ =. Boa |p + LM en =e) | _£@)" Wer ~ He) : en Dan, (14 y ),| 5, (1: : ) | ct | = 7 Es [LM supl({er ce}. Alem Ae] +e (14)
# un(f ) â¤
where LH ...,({ct,câ }, f) is L4,({e*, c }, f) when £,(x) = (1 â a/7)+ is the loss function. Observe that in 14 we used that D7 are uniform for binary 7, which is an assumption we work with in section 4, but we remove it in section 5. The proof finishes by observing that the last line in 14 is equal to yL4 .,(f) + â¬-
# A.3. Generalization Bound
We first state the following general Lemma in order to bound the generalization error of the function class F on the unsupervised loss function Ly,,,(-). Lemma 4.2 can be directly derived from it. Lemma A.2. Let ¢ : R® > R be n-Lipschitz and bounded by B. Then with probability at least 1 â 6 over the training set S={(xj, 07, a7,,... 1 5_) HL), for all f © F
â
Lun( Ëf ) ⤠Lun(f ) + O ηR kRS (F) M + B log 1 δ M (15)
where
Rs(F) = E spt fs) (16) on ft 1prt2em | fer
and fis = (Seles) fe} ) flair lee) st
Note that for k + 1-way classiï¬cation, for hinge loss we have η = 1 and B = O(R2), while for logistic loss η = 1 and B = O(R2 + log k). Setting k = 1, we get Lemma 4.2. We now prove Lemma A.2.
Proof of Lemma A.2. First, we use the classical bound for the generalization error in terms of the Rademacher complexity of the function class (see (Mohri et al., 2018) Theorem 3.1). For a real function class G whose functions map from a set Z j=1, then with probability at least 1 â δ to [0, 1] and for any δ > 0, if S is a training set composed by M iid samples {zj}M 2 , for all g â G
i _2Rs(G) , ,, /log4 Elo(=)] < yp D9) + Sp +3) az (7)
where Rs (G) is the usual Rademacher complexity. We apply this bound to our case by setting Z = X**?, S is our training set and the function class is 1
G= {ajeete _ 1 ot) = ye (LUG) (Fe) = Fara) |e FP as)
â We will show that for some universal constant c, RS (G) ⤠c ηR B RS (F) or equivalently â
# k
| < nh E yu [ERK lonis) SEF sup (9, 7) (19) on {Ht1}d(b+2)M lp
(14)
Contrastive Unsupervised Representation Learning
where (gf )|S = {gf (xj, x+
# j , xâ
j1, ..., xâ
# jk)}M
j=1. To do that we will use the following vector-contraction inequality.
where (gf)is = {9p(xj, of thi, we @) HE To do that we will use the following vector-contraction inequality.
Theorem A.3. [Corollary 4 in (Maurer, 2016)] Let Z be any set, and S = fz }fe 1 â¬2Zâ¢. Let F be a class of functions f:ZâR" andh:R" > R be L-Lipschitz. For all f ⬠F, let g;=ho f. Then
opie feF E on {+1}! onf{ti}e⢠sup (2 ans)| < V2L E
where fis = (Jules) te[n],je(M)
We apply Theorem A.3 to our case by setting Z = V*+?,n = d(k + 2) and Fe {Fler 051505054) = (F(@) Fe"), fe)
Fe {Fler 051505054) = (F(@) Fe"), fe) Fla dIIF ⬠F
We also use g p= OF where f is derived from f as in the definition of F.. Observe that now A.3 is exactly in the form of 19 and we need to show that L < w RVR for some constant c. But, for z = (a, a*,2],...,2,), we have 9; (2) = 4U(H(F(z))) where ¢ : R(*+2)4 5 RF and 6 ((u,, 07°, 0745-5 Ug )eela]) = (Lp ve(o â VE) cet Thus, we may use h = 40 ¢ to apply Theorem A.3.
â
Now, we see that ¢ is V6kR-Lipschitz when 9, v?, 7, (v/')?, (vj)? < R? by computing its Jacobian. Indeed, for all i,j ⬠[k] and t ⬠[d], we have Soe uf â Up, at uy and oe âvu,l{i = j}. From triangle inequaltiy, the Frobenius ti norm of the Jacobian J of ¢ is
Fle = Door â ea) + kD vt < V4kR? + 2kR? = V6RR
â
â
Now, taking into account that ||.J||2 < ||J||â, we have that ¢ is 6kR-Lipschitz on its domain and since ¢ is 7-Lipschitz, we have L < VERE,
Now, we have that with probability at least 1 â δ 2
Nop Uh, nRVkRs(F) log $ Lun(f) < Lun(f) +O ir BVP (20)
1085
Let f* ⬠arg mingex Lun(f). With probability at least 1â 5, we have that Lun(f*) < Lun(f*)+3B 1085 (Hoeffdingâs inequality). Combining this with Equation (20), the fact that Lun(f) < Lun(f *) and applying a union bound, finishes the proof.
# A.4. Proof of Proposition 6.2
By convexity of ¢,
0 (say? (24RD - Eafe) (FEM Weed) â 60) = FLAP lel) se) i
Thus,
meets) =e. [e(set (EEO ETE) <p ap x
Contrastive Unsupervised Representation Learning
The proof of the lower bound is analogous to that of Lemma 4.3.
# B. Results for k Negative Samples
# B.1. Formal theorem statement and proof
We now present Theorem B.1 as the formal statement of Theorem 6.1 and prove it. First we deï¬ne some necessary quantities. Let (c+, câ in this tuple. We also deï¬ne I +(câ k ) = {i â [k] | câ negative samples. We will abuse notation and just write Q, I + when the tuple is clear from the context.
1 , . . . , câ
k ) be k + 1 not necessarily distinct classes. We deï¬ne Q(c+, câ
1 , . . . , câ
To define L%, (f) consider the following tweak in the way the latent classes are sampled: sample ct, cy ,.. conditioning on |J*| < k and then remove all c; , i ⬠J+. The datapoints are then sampled as usual: «x, x x; ~D,-,%⬠kj, independently.
# i
LEN = B [e({F@" (F@*) â FP) ign) |<] c vat ae
which always contrasts points from different classes, since it only considers the negative samples that are not from c+. The generalization error is 10
GenM = O R â k RS (F) M + (R2 + log k) log 1 δ M
were Rs(F) is the extension of the definition in Section 4: Rs(F) = E [supper(o, fis). where fis = on {£1} (e+2)aM
ft(xj), ft(x+ j ), ft(xâ j1), . . . , , ft(xâ jk) jâ[M ],tâ[d] .
For ct, cy,..,¢, ~ p**!, let = P[I+ # 0] andrâ = Pict = c; Vi]. Observe that 7), as defined in Section 4, is Plc* = cy]. Let pmax(T) = maxe Dr(c) and
Pinin(T) = min Pos caper (eT = el\Q=T,T* = 8)
In Theorem B.1 we will upper bound the following quantity: E T â¼D Ï+ min(T ) pmax(T ) Lµ sup(T , Ëf ) (D was deï¬ned in Section 6.1).
Theorem B.1. Let f ⬠arg miner Lun(f)- With probability at least 1 â 6, for all f ⬠F
a ] J Prnin(T) Tf Lat ep a tpt . 1 Emin" 1 Te < Li ick : na T~D |Pmax(T) uplT |S 1â Tr in(I) +e 1âTr s(f) 1âT, Genu
where fo is a constant.
Note that the deï¬nition of s(f ) used here is deï¬ned in Section 4
Proof. First, we note that both hinge and logistic loss satisfy the following property: âI1, I2 such that I1 ⪠I2 = [t] we have that
lvifien) < C{vihiety) < Cfvi}ien) + ({vi}ien) (21)
We now prove the Theorem in 3 steps. First, we leverage the convexity of ¢ to upper bound a supervised-type loss with the unsupervised loss Ly,(f) of any f ⬠F. We call it supervised-type loss because it also includes degenerate tasks: |T| = 1.
10The log k term can be made O(1) for the hinge loss.
Contrastive Unsupervised Representation Learning
Then, we decompose the supervised-type loss into an average loss over a distribution of supervised tasks, as defined in the Theorem, plus a degenerate/constant term. Finally, we upper bound the unsupervised loss Lin(f) with two terms: L%, (f) âun that measures how well f contrasts points from different classes and an intraclass deviation penalty, corresponding to s(f). Step 1 (convexity): When the class c is clear from context, we write fi. = E [ f (x)]. Recall that the sampling procedure for
Step 1 (convexity): When the class c is clear from context, we write ˵c = E xâ¼c unsupervised data is as follows: sample c+, câ c+ and xâ k â¼ Ïk+1 and then x, x+ â¼ D2 i â¼ Dcâ , i â [k]. So, we have i
follows: sample c*, cy ,...,¢, ({fo" (fle*) â Fler
# i
Lintf= BE , e ({fo" (fle*) â Fler an
=
where the last inequality follows by applying the usual Jensenâs inequality and the convexity of @. Note that in the upper bounded quantity, the c*, cy, ...,¢, donât have to be distinct and so the tuple does not necessarily form a task.
k donât have to be distinct and so the tuple does not necessarily form a task. Step 2 (decomposing into supervised tasks) We now decompose the above quantity to handle repeated classes.
oo (C40 Poth) and 4
ct ,ep ptt and + |I*| times rail en EB (é(0,.-,0)|I* 40)
(x)" (fet â fe) c ) Jeg ct yep wpktt ral +t% EE [er-() | * 40] Vv van | Ps) oF {tH zz a t roy a âa_ +>
where ¢,(0) = ¢(0,...,0) (¢ times). Both inequalities follow from the LHS of Equation (21). Now we are closer to our goal of lower bounding an average supervised loss, since the first expectation in the RHS has a loss which is over a set of distinct classes. However, notice that this loss is for separating c+ from Q(c*, cy ,...,¢, ) \ {et}. We now proceed to a symmetrization of this term to alleviate this issue.
Recall that in the main paper, sampling T from D is deï¬ned as sampling the (k+1)-tuple from Ïk+1 conditioned on I + = â
and setting T = Q. Based on this deï¬nition, by the tower property of expectation, we have
rao) Eu f ({iâ (fies â jie) <2) oe ~P cfe* pS Det _ fal (i. â A -T [t= 8, Balle te ~10)}ag) f= 7 4 os anDo4 â_ = EE é({f@" (e+ ~fe)} cer) TRD ctnp*(T) Hey" (Ber ~ He) oh and
where Ï+(T ) is the distribution of c+ when (c+, câ Recall that Ï+ bound the last quantity with the LHS in the theorem statement, we just need to observe that for all tasks T
(22)
(23)
Contrastive Unsupervised Representation Learning
~ Sl (tor-1n > Pmin(T) iy [e({ Fa) (fae - jie) } cer) (25) ~ Pmax(T) et xD cfet 7D + A â Pmin(T) Leup(T f) ~ Pmax(T)
By combining this with Equations (22), (23), (25) we get
rnin (2) Pmin (1 Te) | Dinan (P) D Lsup(T, a] <Lin(f)-T E [er+\(0) | It¢ ) (26) ct cp wpkt1
Now, by applying Lemma A.2, we bound the generalization error: with probability at least 1 â δ, âf â F
Lun( Ëf ) ⤠Lun(f ) + GenM
(27)
However, Lun(f ) cannot be made arbitrarily small. One can see that for all f â F, Lun(f ) is lower bounded by the second term in Equation (22), which cannot be made arbitrarily small as Ïk > 0.
Pant, B [E(F@ F@-Fe Mer) 27, Ba en@ [2 40] 8) vet OD 4 wy ~D_-
# â i
where we applied Jensenâs inequality. Since Ïk is not 0, the above quantity can never be arbitrarily close to 0 (no matter how rich F is).
Step 3 (Lun decomposition) Now, we decompose Lun(f ) by applying the RHS of Equation (21)
LoS. B [LOUIE = LOM} or) +P (1) â FD) 0) 29)
# B reo
reo Bn MOTI FED Fd 4. Baan (OVE - fe Mier)] 60 ier â =(1=7') cote [e({4@? (Fe) = Fler ltl <k ânD ie I~ : ; (31) th EB f ({1@)" (Fe*) = Fe) fiers) It £0 oes ~e J a2eat 2 wat wD, a; ~D_-, ieIt
=
# â i
Observe that the first term is exactly (1 â 7â) L%,(f). Thus, combining (26), (27) and (31) we get âun
Contrastive Unsupervised Representation Learning
Bem 1 (TA) <7 )LE Cf) + Gena TxD |Pmax(T) °"P 7* "| un e(\T (f(t) â flem _ 6) | r+ oaBan | anon, [E(t Ue â16},.,)] tod |P 4 2p ~D,-, iel* ee, A(f)
(1 â Ïk) E
(32)
From the deï¬nition of I +, câ i = c+, âi â I +. Thus, from Lemma A.1, we get that
ay<e g [tviECoMe & lislel| 1 40] G3) ct yep wpk+ e
for some constant câ. Let u be a distribution over classes with u(c) = Poy. pet [ct = (l- ))*) By applying the tower property to Equation (33) we have
Let u be a distribution over classes with u(c) = Poy. pet [ct = clI* F 0] and it is easy to see that u(c) ox p(e)(1 â (l- ))*) By applying the tower property to Equation (33) we have
A(f)<¢ E E [tet sel* 48) VIE(F Olle E wre (34) ewe | eter wpktt nD.
But,
k E [|Z*|Je* =clIt¢ 0) = SOP. cpa pkt (ce; = ct let =clIt¢ 0) ctycy wpkth =I al = IPs ope (ef = etlet =e.I* £0) Poser aphtt (er =ct= c) (G5) Pot er apht! (ct =cl+ 40 k p>(c) eC) pod-U-pey) 1d âpley*
Now, using the fact that rt, = 1â >>, p(câ)(1 â =D. le) (1- (1 = p(eâ))*) and 71 = 2, p?(c).
# i
, Tk ; AW) s cE |k EVIFG Olle BE (lf âTk L-T csul 1- oT sree a EB veil (36 Tom Soe ob. T1 ; = ek 2 & [VECO 8, teil] =F
Ïk 1 â Ïk
and we are done.
# B.2. Competitive Bound
As in Section 5.2, we prove a competitive type of bound, under similar assumptions. Let ¢,(v) = max{0,1 + max;{âv;}/7}, v ⬠R*, be the multiclass hinge loss with margin y and for any T let L ..,(T, f) be L4,,(T, f) when â¬, is used as loss function. For all tasks T, let p/* (7) is the distribution of c+ when (c+, cy, ...,¢,) are sampled from p*+! conditioned on Q = T and |I*+| < k. Also, let pâ*,,.(T) be the maximum of these |T| probabilities and Pmin(T) = mincet Dr (c).
Contrastive Unsupervised Representation Learning
sup(T, /)|
We will show a competitive bound against the âotowing quantity, for all f ⬠F: re» [Sea ce, sup(T, /)| , where Dâ is defined as follows: sample c+, cy ,...,c¢, ~ p**!, conditioned on |I+| < k. Then, set T = Q. Observe that when I* = 0 with high probability, we have D! = D. Lemma B.2. For all f ⬠F suppose the random variable f (X), where X ~ De is o°(f)- subgaussian in every direction for every class c and has maximum norm R(f) = maxzex||f(x)||- Let f â¬argminger Lun(f)- ). Then for all ⬠> 0, with probability at least 1 â 6, for all f ⬠F
Prin a) T~D |Pmaz(T) pmax (T) Hig TD] Sort 78, [EE ToD LE sup Ts 5] + Bs(f) +nGenar +¢
â
where y(f) =1+ ¢R(f)o(f)(Vlogk + \/1o: g RD), c! is some constant, a Le B= ky andy on
Proof. We will show that âf â F
an 1) SW) E, ot al s] Gn)
and the Lemma follows from Theorem 6.1. Now, we fix ane > 0, an f ⬠F and we drop most of the arguments f in the rest of the proof. Also, fix ct,cy ...c;, ,x and let t = k â |I*+|. We assume without loss of generality, that c+ 4 c; , Vi ⬠[t]. Now,
max iâ[t] f (x)T (f (xâ i ) â f (x+)) ⤠µ + max i zâ i â z+ (38)
where pr = maxieiy f(t)" (Mp â Her)> 27 = F(a)" (Rae) â w=) and 2 = f(a)T(f(e*) ~ per). 2: are cen- tered 0? R?-subgaussian, so from standard properties of subgaussian random variables P[max; z; > V20Ry/logt + V2e,0Ry/log R/â¬] < (e/R)* (again we consider here the case where R > 1 and for R < 1, the same arguments hold but with removing R from the log). z+ is also centered o? R?-subgaussian, so P[z+ > /2cjoRy/log R/â¬] < (â¬/R)*. Let y = 14+ oR(Vlogt + \/log R/e) for appropriate constant câ. By union poune we have p = P[max; z; â zt > 4-1 S$ 2(â¬/R)%. Thus, E,, ,-[(1+ + max; 27 â 2*)4] < (1âp)(u+7)4 + p(2R? +1) < y+ H/7)4 + ⬠(for appropriate constant c,). By taking expectation over ct, cy ~ ml conditioned on |I+| < k, and over x ~ D,+ we get at jar
maxecg,eger £2)" (He ~ Het) La(f<s7 EB (: + âSeche It] <k ct cpp x + anD + _. , MAXceQ,cXxct+ f(x)" (He = Het) TB oe yht (2 ' 5 . I< (39) a0Di+ T an aX, 2)" (He = T. -1 Bg, (: _ aerages HO) Hs) | <y k [oa £8 (T. r] ~D! ct wp!*(T) Y + Pmin e~Do+
# C. Examples for Section 6.2
Here, we illustrate via examples two ways in which the increase of k can lead to suboptimal Ëf . We will consider the hinge loss as the loss function, while the examples carry over trivially for logistic loss.
1. The ï¬rst example is the case where even though there exist representations in F that can separate every class, the suboptimal representation is picked by the algorithm when k = â¦(|C|). Let C = {ci}iâ[n] where for each class, Dci is
Contrastive Unsupervised Representation Learning
uniform over two points {x}, x?}. Let e; be the indicator vectors in Râ and let the class F consists of f { fo, fi} with fo, fi : X + Râ where f(x!) = 3/2re; and fi(x?) = 1/2re; for all i, for some r > 0, and fo = 0. Finally, p is uniform over C. Now, when the number of negative samples is 2(n), the probability that 3j ⬠[k] such that ct = c; is constant, and therefore Lun(f) = Q(r?) > 1 = Lun(fo) when r is large. This means that despite Lyup(C, f1) = 0, the algorithm will pick fo which is a suboptimal representation.
2. We can extend the first example to the case where, even when k = o0(|C|), the algorithm picks suboptimal representations. To do so, we simply âreplicateâ the first example to) create clusters of classes. Formally, let C = {¢;; }i,j¢[n] Where for each class, D-,, is uniform over two points {xij 77 ';}. Finally, same as above, let F consist of two functions { fo, f1}- The function f; maps fi(xi;) = 3/2re; and fla) = = 1/2re; for all i,j and fo = 0. p is uniform over C. Now, note that f; âclutstersâ the n? classes and their points into n clusters, each along an e;. Thus, it is only useful for contrasting classes from different clusters. However, note that the probability of intra-cluster collision with k negative samples is 1 â (1â1/n)* aa k = o(n), we have that Lun(fi) = o(1) < 1 = Lun(fo) so the algorithm will pick f,. However, when k = Q(n), Lun(f) = Q(r?) > 1 = Lun(fo) and the algorithm will pick the suboptimal representation fo. Thus, despite |C| = A having more then n negative samples can hurt performance, since even tough f; cannot solve all the tasks, the average supervised loss over t-way tasks, t = 0(n), is Lsup(f) < O(1 â (1 â1/n)**1) = o(1).
# D. Experiments
# D.1. Wiki-3029 construction
We use the Wikipedia dump and select articles that have entries in the WordNet, have at least 8 sections and at least 12 sentences of length at least 4 per section. At the end of this ï¬ltering we are left with 3029 articles with at least 200 sentences per article. We then sample 200 sentences from each article and do a 70%/10%/20% train/dev/test split.
# D.2. GRU model
We use a bi-directional GRU with output dimension of 300 trained using dropout 0.3. The input word embeddings are initialized to pretrained CC GloVe vectors and ï¬xed throughout training. | {
"id": "1810.04805"
} |
1902.09183 | Joint Multi-Domain Learning for Automatic Short Answer Grading | One of the fundamental challenges towards building any intelligent tutoring
system is its ability to automatically grade short student answers. A typical
automatic short answer grading system (ASAG) grades student answers across
multiple domains (or subjects). Grading student answers requires building a
supervised machine learning model that evaluates the similarity of the student
answer with the reference answer(s). We observe that unlike typical textual
similarity or entailment tasks, the notion of similarity is not universal here.
On one hand, para-phrasal constructs of the language can indicate similarity
independent of the domain. On the other hand, two words, or phrases, that are
not strict synonyms of each other, might mean the same in certain domains.
Building on this observation, we propose JMD-ASAG, the first joint multidomain
deep learning architecture for automatic short answer grading that performs
domain adaptation by learning generic and domain-specific aspects from the
limited domain-wise training data. JMD-ASAG not only learns the domain-specific
characteristics but also overcomes the dependence on a large corpus by learning
the generic characteristics from the task-specific data itself. On a
large-scale industry dataset and a benchmarking dataset, we show that our model
performs significantly better than existing techniques which either learn
domain-specific models or adapt a generic similarity scoring model from a large
corpus. Further, on the benchmarking dataset, we report state-of-the-art
results against all existing non-neural and neural models. | http://arxiv.org/pdf/1902.09183 | Swarnadeep Saha, Tejas I. Dhamecha, Smit Marvaniya, Peter Foltz, Renuka Sindhgatta, Bikram Sengupta | cs.CL | 11 pages | null | cs.CL | 20190225 | 20190225 | 9 1 0 2
b e F 5 2 ] L C . s c [
1 v 3 8 1 9 0 . 2 0 9 1 : v i X r a
# Joint Multi-Domain Learning for Automatic Short Answer Grading
Swarnadeep Saha IBM Research - India swarnads@in.ibm.com
Tejas I. Dhamecha IBM Research - India tidhamecha@in.ibm.com
Smit Marvaniya IBM Research - India smarvani@in.ibm.com
Peter Foltz Pearson peter.foltz@pearson.com
Renuka Sindhgatta Queensland University of Technology renuka.sr@qut.edu.au
Bikram Sengupta Anudip Foundation bikramsengupta@gmail.com
Abstract One of the fundamental challenges towards building any in- telligent tutoring system is its ability to automatically grade short student answers. A typical automatic short answer grad- ing system (ASAG) grades student answers across multiple domains (or subjects). Grading student answers requires build- ing a supervised machine learning model that evaluates the similarity of the student answer with the reference answer(s). We observe that unlike typical textual similarity or entailment tasks, the notion of similarity is not universal here. On one hand, para-phrasal constructs of the language can indicate similarity independent of the domain. On the other hand, two words, or phrases, that are not strict synonyms of each other, might mean the same in certain domains. Building on this observation, we propose JMD-ASAG, the first joint multi- domain deep learning architecture for automatic short answer grading that performs domain adaptation by learning generic and domain-specific aspects from the limited domain-wise training data. JMD-ASAG not only learns the domain-specific characteristics but also overcomes the dependence on a large corpus by learning the generic characteristics from the task- specific data itself. On a large-scale industry dataset and a benchmarking dataset, we show that our model performs sig- nificantly better than existing techniques which either learn domain-specific models or adapt a generic similarity scoring model from a large corpus. Further, on the benchmarking dataset, we report state-of-the-art results against all existing non-neural and neural models.
Universe ul | (approximation) G | (gemeric language) Per Question Learning Transfer Learning Joint Multi-Domain Learning
Figure 1. Various existing approaches for automatic short- answer grading can be broadly categorized in 1) per question learning, 2) per domain learning, and 3) transfer learning. The proposed approach involving joint multi-domain learn- ing removes need for a large generic language corpus, and, compensates for it by joint learning of domain-specific and generic classifiers. Accompanying formulation details are in Table 1.
, ,
Saha et al.
Model Classifier Description Q->R fF2(R, S) per question modelling Dâ(Q,R) FPQ,R,S) per domain modelling Do =D*> = (D> (Q,R)) fo = FP(QRS) transfer or adapt from generic source do- main to task-specific target domain D° = (D'@D? @---@Dâ) => (D' 3 (Q,R)) | f'(Q,R,S) and fF(Q,R, S) | joint multi-domain learning
D° = (D'@D? @---@Dâ) => (D' 3 (Q,R)) | f'(Q,R,S) and fF(Q,R, S) | joint multi-domain learning Table 1. An illustration of various approaches to model short answer grading problem. D, Q, R, and S, represent domain, question, reference answer, and student answer, respectively. D®&, DS, and D', represent generic, source, and iâ * task domains, respectively. In transfer learning school of thought, model learned for generic task (e.g. natural language inference) is adapted to a specific task. In the proposed joint multi-domain learning approach, a model capturing generic language characteristic (f°) is jointly learned with domain-specific models (f") without requiring large generic source (D*) corpus.
1 Introduction Automatically grading short student answers is critical for building Socratic intelligent tutoring systems [25]. In general, computer-aided assessment systems are particularly useful because grading by humans can become monotonous and te- dious [13]. Formally, the problem of Automatic Short Answer Grading (ASAG) is defined as one where for a given ques- tion, a short student answer (typically, 1-2 sentences long) is graded against the reference answer(s).
Figure 1 and Table 1 illustrate various strategies for ASAG. One of the strategies is to assume that for every question Q, a variety of reference answers R is available during training, i.e. the testing scenario is unseen-answer only. Under this assump- tions, a classifier f Q can be trained per question. However, such approaches cannot generalize to unseen-questions and unseen-domains.
To make an approach generalizable to unseen-questions, one can learn a classifier f D per domain. Each subject (e.g. Primary Science) can be treated as a domain. Such approaches alleviate the need for large number of training answer variants for each question. Grading of a student answer is performed conditional to the question and the reference answer(s). Tra- ditionally, these supervised approaches for ASAG use hand- crafted features to model the similarity between the reference answers and the student answers [11, 16, 30]. Such tech- niques succeed in capturing domain specific characteristics; however, their performance is sensitive to feature engineer- ing. Deep learning (DL) approaches can mitigate the need
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. , , © 2022 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. . . $15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn
for hand-crafting features, but rely heavily on availability of large data.
Automatic short-answer grading task lacks large scale data to efficiently train existing architectures of DL models. In absence of domain and task specific large scale data, transfer learning is explored [4, 6, 17, 28]. It builds on the intuition that a source domain DS and corresponding generic task can help learn embeddings (or classifier) f G that approximates universal characteristics of language. Such a model is then transferred to the task-specific domain to obtain the final clas- sifier f D ; either by fine-tuning generic embeddings or by learning a task-specific classifier over generic embeddings. However, we believe that there is a scope for significant im- provement in this strategy under certain scenarios.
We propose a joint multi-domain learning approach for short-answer grading. The proposed modelling does not as- sume availability of any other data beyond the corpus con- sisting of multiple task-specific domains (D1-Dk ). It jointly learns domain specific classifiers f D i and a generic classifier f G . Particularly, we believe that this strategy can be very helpful under certain scenarios:
1. If the end task (e.g. ASAG) is pre-defined, it may be well-suited to train the task-specific model, as com- pared to transferring or fine-tuning. Effectively, the problem boils down to learning from the limited task- specific training data.
2. If within the pre-defined task, there exists specific do- mains (e.g. short answer grading for Psychology and Criminology), an adaption of learning across them may be more effective.
Note that these suggestions also help reduce the depen- dence on a large corpus. The former learns only the task- specific aspects rather than the language itself, and the lat- ter adapts to the domains by learning both the generic and domain-specific characteristics within the task.
We find that these scenarios are often prevalent in the task of ASAG for intelligent tutoring systems; where, it is likely to have various domain-specific smaller corpora for individual subjects. Although it is hard to train DL models individually
Joint Multi-Domain Learning for ASAG
Question Darla tied one end of a string around a doorknob and held the other end in her hand. When she plucked the string (pulled and let go quickly) she heard a sound. How would the pitch change if Darla pulled the string tighter? Ref. answer When the string is tighter, the pitch will be Std. answer Question higher. The pitch would be higher if she pulled it really tight (When X, Y) = (Y, if X) (a) Generic textual characteristics Lee has an object he wants to test to see if it is an insulator or a conductor. He is going to use the circuit you see in the picture. Explain how he can use the circuit to test the object. If the motor runs, the object is a conductor. Ref. answer Std. answer He could know if it works.
(X runs) = (X works) (b) Domain-specific characteristics Table 2. Two examples from the SemEval-2013 dataset illus- trating the importance of generic and domain-specific charac- teristics in ASAG.
for each domain due to their limited sizes, put together, the corpora from various domains can provide sufficient view of the language understanding. Consider the examples in Table 2. In the first example, to successfully match the student answer with the reference answer, the model needs to understand a grammatical construct that When X, Y is paraphrase to Y, if X. In order to learn this construct, the training set should contain examples with this syntax; but may not necessarily be from the same domain. In the second example, the system is required to learn that X runs and X works mean the same in the particular domain (in this case, electrical). To successfully understand constructs like these, it is required to have domain-specific training data. Building upon these intuitions, we make the following contributions.
⢠We motivate the need for a joint multi-domain model for ASAG as unlike typical short text similarity tasks, the meaning of similarity in ASAG can vary across domains. Our examples show the domain-specific as well as generic aspects of similarity.
⢠We propose a novel Joint Multi-Domain neural model for ASAG (JMD-ASAG) that learns generic and domain- specific aspects simultaneously. It achieves this by uti- lizing multiple domain-specific corpora, and without requiring a large generic corpus.
⢠To evaluate the hypothesis of utilizing task-specific corpus, we show the effectiveness of JMD-ASAG com- pared to a state-of-the-art method that performs transfer learning from a large corpus.
⢠We compare JMD-ASAG with its generic and domain- specific components on a benchmarking dataset and a
proprietary industry dataset. It outperforms both and also achieves improved results on the benchmarking dataset compared to various state-of-the-art non-neural and neural models.
2 Related Work This research is positioned at the intersection of domain adap- tation and its utility to improve ASAG. Following is a broad overview of related works in these fields of research.
# 2.1 Automatic Short Answer Grading
Traditional approaches of ASAG range from applying manu- ally generated or automated patterns [15, 18, 23, 29] to using hand-crafted features, that include graph alignment features [16, 30], n-gram features [9], softcardinality text overlap fea- tures [11], averaged word vector text similarity features [30] and other shallow lexical features [16, 19].
More recently, deep learning techniques have been ex- plored - Riordan et al. [24] adapts the convolutional recurrent neural network, originally proposed by Taghipour and Ng [31] for automated essay scoring and Kumar et al. [13] uses Earth Moverâs Distance Pooling over Siamese BiLSTMs. Among other approaches which view this problem as an application of semantic textual similarity, the most recent one, InferSent [6] uses a max pooled bidirectional LSTM network to learn universal sentence embeddings from the MultiNLI corpus [33]. These embeddings have been employed as features in conjunction with hand-crafted features by Saha et al. [26] for ASAG.
# 2.2 Neural Domain Adaptation
Domain Adaptation, with or without neural networks, has been an active area of research for the past decade. Daumé III [7] proposes a highly efficient domain adaptation method based on feature augmentation but one which considers mostly sparse binary-valued features. This is further extended by Kim et al. [12] for dense real-valued features to facilitate usage in neural networks. They use k + 1 LSTMs where k of them cap- ture domain-specific information and one is useful for generic or global information. Other works on domain adaptation aug- ment the k domain-specific models with a domain-specific parameter [1, 2, 32] but unlike our work, do not have a generic component. Finally, Chen and Cardie [5] propose a multino- mial adversarial learning framework for multi-domain text classification but restricting themselves to tasks of sentiment classification only. Importantly, none of these neural models perform multi-domain learning for short text similarity which is particularly useful for ASAG as motivated before.
Neural domain adaptation is closely related to neural multi- task learning where one single architecture is developed to work across multiple related tasks. It has found applications in sequence tagging [27, 34], semantic parsing [20] and pair- wise sequence classification tasks [3]. Liu et al. [14] employ
, ,
, ,
Saha et al.
(a) Encoder. (b) Similarity Scorer. (c) Overall architecture.
Figure 2. Individual components and overall architecture of Joint Multi-Domain ASAG.
adversarial learning for better separation of shared and pri- vate features on related text classification tasks. Finally, Peng and Dredze [21] combine domain adaptation and multi-task learning for sequence tagging. They propose an architecture where the BiLSTM embedding output is masked into k + 1 parts, representing generic and domain-specific features but do not learn separate components for each of them.
# 2.3 Domain Adaptation for ASAG
Domain adaptation for ASAG has been a relatively less ex- plored area thus far. Notably, Heilman and Madnani [9] pro- pose domain adaptation for ASAG by applying Daumé III [7]âs feature augmentation method to create multiple copies of hand-crafted features. We directly compare against them in the experiments section.
# 3.1.1 Text Encoder
The text encoder provides dense feature representation of an input text (in this case, answer). We use bidirectional long short-term memory (BiLSTM) network [10] with max- pooling to encode the input answer, as detailed below. We first embed each word in the answer using an embedding layer. The words are initialized with pre-trained word embeddings and are made trainable to reflect the domain and task depen- dent nature of the words. The sequence of words are then passed through a BiLSTM layer to generate a sequence of hidden representations. Formally, for a sequence of T words {wt }t =1, ...,T , the BiLSTM layer generates a sequence of {ht } vectors, where ht is the concatenation of a forward and a backward LSTM output:
To the best of our knowledge, neural domain adaptation for ASAG is unexplored in the literature. In this research, we propose a neural domain adaptation approach that explores multi-domain information in the context of ASAG.
t } = LST M(w1, w2, . . . , wT ) t } = LST M(wT , wT â1, . . . , w1) ht = [hf
3 Method We propose JMD-ASAG, a Joint Multi-Domain neural net- work architecture for domain adaptation of ASAG. We dis- cuss our method in two parts - (1) the neural architecture of JMD-ASAG and (2) the training algorithm of JMD-ASAG.
# 3.1 Neural Architecture
The block diagram for the architecture is shown in Figure 2. For simplicity, it considers two domains but can be gener- alized to an arbitrary number of domains. We first consider the two key components of the model - (1) a Text Encoder (Figure 2a) and (2) a Text Similarity Scorer (Figure 2b). Later, we use them to build our overall model (Figure 2c).
The hidden vectors {ht } are then converted into a single vector using max-pooling, which chooses the maximum value over each dimension of the hidden units. This fixed size vector is used as the vector representation for the input text. Overall, the text encoder can be treated as an operator E : Text â Rd that provides dâdimensional encoding for a given text. Similar architectures for text encoders have been explored before, most notably by [6] for learning universal sentence embeddings.
3.1.2 Text Similarity Scorer The text similarity scorer processes a reference answer (R) and a student answer (S) pair {R, S } to generate class-wise
# Joint Multi-Domain Learning for ASAG
scores. Their textual encodings are obtained using the afore- mentioned encoder as E(R) and E(S), respectively. These encodings are used to compute the similarity feature represen- tation f . It is formed by concatenating the (1) the reference answer encoding, (2) the student answer encoding, (3) their element-wise multiplication, and (4) their absolute difference.
f = [E(R), E(S), E(R) â E(S), |E(R) â E(S)|]
Note that the dimensionality of the feature f is 4d, where d is the dimensionality of the encoding. The element-wise multiplication and the absolute difference components help capture the information gap between the reference answer and the student answer. Finally, the feature representation is transformed to class-wise similarity scores, by learning a dense layer (W ).
s = W â² f , where W â R4d Ãc
The câdimensional output of the dense layer represents score for the answer pairâs {R, S } association to each of the c classes. Overall, the text similarity scorer can be treated as an operator S : {Std.Answer, Ref.Answer} âRc , that computes class- wise scores for a given pair of student and reference answer.
3.1.3 Overall Architecture For k domains {Dd }d =1,2, ..k , JMD-ASAGâs neural network architecture consists of k +1 text similarity scorers - k domain- specific scorers {Sd }d =1,2, ...,k and one generic scorer Sд. For a sample x belonging to the dth domain, its class-wise similar- ity score is obtained using its corresponding domain-specific scorer Sd and the generic scorer Sд. Their scores are added and finally converted to class-wise probabilities using a soft- max function Ï .
P(x) = o (Sa(x) + S,(x)), where x ⬠Dg Note that, each scorer has its own set of parameters. In other words, the parameters are not shared across the scorers. The generic scorer is called so because it is trained using data from all the domains and thus learns aspects generic or common to all of them (e.g. example | in Table 2). The domain-specific ones are trained only using their corresponding domainâs data and thus learn the domain-specific characteristics (e.g. exam- ple 2 in Table 2). These components of the overall network enable it to learn the generic and domain-specific characteris- tics of a short answer grader from the task-specific data itself.
# 3.2 Training Algorithm
We train JMD-ASAG using algorithm 1. In every epoch, we generate batches by iterating over all the domains in one par- ticular order. Note that the domain changes after every batch. In the architecture, the generic scorer Sд is trained in each batch; whereas, depending on the domain Dd of the batch, only the corresponding domain-specific scorer Sd is trained. As part of the experiments, we explore other methods of
Algorithm 1 Training JMD-ASAG
# 1: procedure TRAIN_MODEL(domains) 2: 3: 4: 5:
k = len(domains) initialize model for e = 1 to num_epochs do
for b = 1 to num_batches do
for d = 1 to k do
6: 7: 8: 9: 10: 11: 12: end procedure 13: procedure TRAIN_ON_BATCH(model, batch, d) 14: 15: 16: 17: 18: end procedure
batch = bth mini-batch of domains[d] train_on_batch(model, batch, d)
Sд = model.GenericScorer(batch) Sd = model.DomainScorer[d](batch) Compute loss using Ï (Sд + Sd ) and batch.labels Back-propagate and update model
training JMD-ASAG as well and evaluate their performances compared to the proposed one.
4 Experiments In this section, we first demonstrate the effectiveness of the proposed JMD-ASAG on two datasets - (1) a proprietary large-scale industry dataset and (2) SemEval-2013 dataset [8]. For both the datasets, we compare our model with:
Transfer Learning: We follow the learn universal and transfer methodology suggested by Conneau et al. [6] for transferring universal sentence embeddings. We generate embeddings for the reference answer and the student answer using their pre-trained BiLSTM with max-pooling network model1, trained on the 430K sen- tence pairs of MultiNLI [33]. These embeddings are used to compute the feature representation formed by concatenating their element-wise multiplication and ab- solute difference. Finally, we transfer these features for the task of ASAG using two configurations. â Generic Transfer Learning (GTrL): We train one multinomial logistic regression model on the entire training set, formed by the combination of the train- ing data from all the domains. The model is subse- quently tested on each of the domains individually. â Domain-specific Transfer Learning (DTrL): We train multiple multinomial logistic regression models, one for each domain and subsequently test each of them on the corresponding domain only.
⢠Task-specific Learning: As part of task-specific learn- ing, we perform ablated comparisons with the generic
1infersent.allnli.pickle model shipped with InferSent code is used.
, ,
, ,
and the domain-specific components of JMD-ASAG. Specifically, we compare with the following two con- figurations. â Generic Task-specific Learning (GTaL): It consists of only the generic scorer Sд component of JMD- ASAG. The scores are converted to class-wise prob- abilities using a softmax layer on top of the scorer; i.e. P(x) = Ï (Sд(x)), where x â {Dd }d =1,2, ..,k . This model learns only one scorer on the entire training set and captures the generic characteristics of domain- agnostic training. Note that, this architecture is same as BiLSTM+MaxPooling model employed by Con- neau et al. [6]; except that here the network is trained with short answer grading data itself.
â Domain-specific Task-specific Learning (DTaL): It consists of the domain-specific scorers, one for each domain. For the domain Dd , the class-wise probabilities are obtained as P(x) = Ï (Sd (x)), if x â Dd . Since the samples from each domain affect the training of the corresponding domain-specific scorers only, it can be seen as a model that consists of k domain-specific models, each trained and tested on a separate domain.
For the SemEval-2013 benchmarking dataset [8], we also compare JMD-ASAG with various state-of-the-art non-neural and neural systems.
For fairness of comparison, we use the exact same batches and training parameters in GTaL, DTaL, and proposed JMD- ASAG. All experimental results are reported in terms of ac- curacy, macro-averaged F1 and weighted-F1 metrics. We conclude with a discussion on the implementation details and a comparative study of the various training protocols for JMD- ASAG showing why algorithm 1 is proposed for training the model.
# 4.1 Large-scale Industry Dataset
The proprietary industry dataset contains 87K tuples of ques- tion, reference answer, student answer, and class label (grade) provided by experts. It consists of 5 domains - Psychology (PSY), Sociology (SOC), Communications (COM), Amer- ican Government (GOV), and Criminology (CRI). Given a question, a reference answer and a student answer, we ad- dress a 3-way classification problem involving correct, partially correct, and incorrect classes.
For each of the domains, we perform 80-20% split of the student answers per question. They are combined for all ques- tions to create the train and test sets. Table 3a shows the domain-wise train and test splits. Table 4 shows some exam- ples of the questions, reference answers, student answers and class labels from all 5 domains of the large-scale industry dataset. Based on the results reported in Table 5a, following are some of our key observations.
# Saha et al.
⢠Limitations of GTrL: We find that GTrL exhibits sig- nificantly poor results compared to all the other models. On the overall test set, its macro-F1 is 11% worse than GTaL. This is partly attributed to the Out Of Vocabulary (OOV) issue. The word embedding dictionary contains 840B words overall and out of the 46K vocabulary of the proprietary dataset, embeddings are found for only 24K terms. The task-specific models alleviate this issue by initializing all OOV words with different random embeddings and then learning them for the task.
Effect of Domains: Unsurprisingly, the domain-specific characteristics are better learned and preserved when the model is trained on only one domainâs data. â On Transfer Learning (GTrL vs DTrL): All do- mains combined, domain-specific transfer learning yields about 6% of macro-F1 improvement, while also consistently improving the results for each do- main individually. Unsurprisingly, the domain-specific characteristics are better learned and preserved when the transferred features are trained on only one do- mainâs data.
â On Task-Specific Learning (GTaL vs DTaL): In all the domains, except for PSY, we find that DTaL shows better performance than GTaL. This is simi- lar to the observation in transfer learning models â domain-specific training preserves the corresponding characteristics better.
Task-Specific Learning vs Transfer Learning: Con- sistently, it is observed that task-specific learning out- performs the transfer learning models within similar settings. â Generic (GTrL vs GTaL): When training on the combined training data, task-specific learning shows 8-13% better macro-F1 compared to transfer learn- ing.
â Domain-specific (DTrL vs DTaL): Similarly, when there are separate models for each domain, improve- ments of 3-7% are observed by virtue of task-specific learning.
These improvements suggest that task-specific learning on sufficient training data can outperform (universal) transfer learning methods.
Effectiveness of Joint Multi-Domain Learning: JMD- ASAG illustrates the complementary benefits of GTaL and DTaL by showing significant improvements across all the domains. Compared to DTaL, the improvements in macro-F1 are mostly around 1% in all the domains. Overall, on the combined test set of 21,052 samples, JMD-ASAG achieves about 1.5% better macro-F1 com- pared to GTaL and DTaL. Finally, we make the observation that irrespective of the specific characteristics of each domain, the perfor- mances of these models mostly follow an order - GTrL
Joint Multi-Domain Learning for ASAG
(a) The large-scale industry dataset.
Domains Train Test PSY 12,317 4,141 SOC COM GOV 14,151 9,952 15,038 4,415 3,034 4,654 CRI 15,331 4,808 Total 66,789 21,052
(b) SemEval-2013 dataset
Domains Train Test II 213 24 ST SE 539 283 60 32 PS LP MS EM FN ME LF MX VB 396 323 545 44 36 44 70 8 252 28 430 48 828 92 393 44 697 80 Total 4,969 540
Table 3. Domain-wise train and test splits of (a) the proprietary large-scale industry dataset and (b) SemEval-2013 dataset.
Domain PSY SOC COM GOV CRI Question and Reference Answer Q: How does retirement affect relationships? R: Retirement can cause issues as older adult couples are forced to rearrange home dynamics. Q: What is one component of the state that makes laws? R: The government legislature is one component of the state that makes laws. Q: How is attribution of a source treated with common knowl- edge? R: Common knowledge, which is widely known information in the public domain does not need to be cited, but when in doubt whether information is common knowledge, cite it. Q: What does the national government share with the lower levels of government in federalism? R: In federalism, the national government shares funds and information with lower levels of government. Q: How is crime defined? R: Crime is any behavior that violates the law. Label correct partial incorrect correct partial incorrect correct partial incorrect correct
Table 4. Some examples of questions, reference answers and student answers from each of the five domains of the large-scale industry dataset.
0.75 hay a 0.7 8 8 0.65 & 2 0.6 < 3 0.55 5 2s MAE UE UA AEE 0.45 Ll psy soc cOM Gov CRI OVERALL mGTrL mDTrL mGTal mDTaL mJMD-ASAG (PROPOSED)
Figure 3. Comparison of macro-averaged F1 of various mod- els on each and combination of all domains in the industry dataset.
# 4.2 SemEval-2013 [8] Dataset
This benchmarking dataset was released as part of the SemEval- 2013 Shared Task 7 on âThe Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge". It con- sists of two different subsets - (1) Beetle, containing student responses from interaction with a dialog based tutor and (2) SciEntsBank, containing student responses to science ques- tions. In this work, we show results only on SciEntsBank as each Beetle question contains multiple reference answers. We plan to adapt our architecture for multiple reference answers as part of the future work. The SciEntsBank corpus consists of questions belonging to 12 science domains and their train, test splits are shown in Table 3b 2. For the same set of samples, the task is performed at three different levels of granularity - (1) 2- way classification into correct and incorrect classes,
< DTrL < GTaL < DTaL < JMD-ASAG. Figure 3 illustrates this observation.
2The dataset does not provide the exact names of the domains.
, ,
, ,
Saha et al.
# (a) The large-scale industry dataset.
Domains Acc Transfer Learning [6] Generic (GTrL) M-F1 W-F1 Domain-Specific (DTrL) M-F1 W-F1 Acc Acc Generic (GTaL) M-F1 W-F1 Task-Specific Learning Domain-Specific (DTaL) M-F1 W-F1 Acc Joint Multi-Domain (JMD) W-F1 M-F1 Acc PSY SOC COM GOV CRI 0.5670 0.6069 0.7096 0.6539 0.6468 0.5280 0.5453 0.4747 0.5222 0.5527 0.5558 0.5878 0.6649 0.6224 0.6236 0.6160 0.6432 0.7452 0.6752 0.6895 0.5859 0.6031 0.5555 0.5717 0.6101 0.6111 0.6369 0.7180 0.6563 0.6751 0.6638 0.6886 0.7637 0.7153 0.7525 0.6392 0.6461 0.5642 0.6046 0.6876 0.6641 0.6810 0.7333 0.6928 0.7447 0.6486 0.6991 0.7769 0.7184 0.7606 0.6171 0.6628 0.6145 0.6234 0.6981 0.6442 0.6944 0.7571 0.7018 0.7530 0.6679 0.7073 0.7844 0.7230 0.7693 0.6421 0.6685 0.6214 0.6374 0.7098 0.6673 0.7008 0.7651 0.7135 0.7631 Overall 0.6328 0.5440 0.6105 0.6698 0.6010 0.6583 0.7147 0.6529 0.7066 0.7185 0.6565 0.7096 0.7281 0.6703 0.7216
(b) 2-way, 3-way, and 5-way classification tasks of SemEval-2013 SciEntsBank dataset.
Acc Transfer Learning [6] Generic (GTrL) M-F1 W-F1 Domain-Specific (DTrL) M-F1 W-F1 Acc Acc Generic (GTaL) M-F1 W-F1 Task-Specific Learning Domain-Specific (DTaL) M-F1 W-F1 Acc Joint Multi-Domain (JMD) W-F1 M-F1 Acc 2-way 3-way 5-way 0.7463 0.6963 0.6018 0.7410 0.6428 0.5616 0.7461 0.6916 0.5996 0.7574 0.6870 0.6130 0.7493 0.6227 0.5775 0.7555 0.6802 0.6107 0.7815 0.7352 0.6387 0.7768 0.6711 0.6090 0.7812 0.7314 0.6424 0.7870 0.7389 0.6257 0.7805 0.6899 0.6057 0.7857 0.7345 0.6311 0.8037 0.7462 0.6518 0.7986 0.7111 0.6252 0.8030 0.7442 0.6565
Table 5. Comparison of Joint Multi-Domain ASAG (JMD-ASAG) with Generic Transfer Learning (GTrL), Domain-specific Transfer Learning (DTrL), Generic Task-specific Learning (GTaL) and Domain-specific Task-specific Learning (DTaL) models on (a) the proprietary large-sclae industry dataset, and (b) 2-way, 3-way and 5-way classification tasks of SemEval-2013 SciEntsBank dataset
(2) 3-way classification into correct, incorrect and contradictory classes, and (3) 5-way classification into correct, partially correct, contradictory, irrelevant and non domain classes. Note that the test set has the same samples across all the tasks. However, their labels change as the task becomes more granular. Table 5b shows the results pertaining to the three classification tasks3. Following are some of the key observations.
⢠Limitations of GTrL: Even when the task-specific training data is significantly lesser (4,969 samples in this dataset), GTrLâs macro-average F1 is up to 4% worse than GTaL and DTaL. It suggests that there is a significant scope for improvement.
⢠Effect of Domains: We observe moderate evidence that domain-specific training can improve learning in case of the SemEval dataset. DTaL is at max 1% bet- ter than GTaL. Similarly, there is limited evidence of transfer learning benefiting consistently from domain- specific training. Note that, as shown in Table 2b, the training samples per domain range between 70 to 697; which may be too few for effective (task-specific or transfer) learning per domain.
Task-Specific Learning vs Transfer Learning: In this dataset too, task-specific models outperform transfer learning models. â Generic (GTrL vs GTaL): It is observed that for generic setting, task-specific learning yields about
3-4% higher macro-averaged F1 compared to trans- fer learning. Thus, training from very limited task- specific data (5K samples) can yield superior models than transfer learning from massive inference corpus (430K samples).
â Domain-specific (DTrL vs DTaL): In domain-specific setting, task-specific models are around 3-6% better macro-F1 than those from transfer learning. As noted earlier, the domain specific data in SemEval dataset is very small, however, the task-specific learning is still more effective than transfer learning.
⢠Effectiveness of JMD-ASAG: JMD-ASAG improves upon both GTaL and DTaL. For 2-way, it obtains al- most 2% better macro-averaged F1. The improvement for 3-way is even higher - 4% and 3% over G-ASAG and D-ASAG respectively. Finally, 5-way results are also significantly better with 2% better macro-F1. This suggests that proposed JMD-ASAG can consistently outperform generic and domain-specific learning by incorporating benefits from both. Table 2 shows two examples from this dataset where JMD-ASAG is able to predict that the student answers are correct, while GTaL and DTaL individually cannot. We believe this is owing to our modelâs ability to capture generic and domain-specific characteristics simultaneously.
# 4.2.1 Comparison with State-of-the-Art:
3For 5-way, the macro-F1 is reported over 4 classes since the non domain class is highly under-represented. This follows all previously published works on this dataset.
We compare JMD-ASAG with eight state-of-the-art models for ASAG. These include four non-neural models and three neural models. The non-neural models are CoMeT [19], ETS [9], SoftCardinality [11] and Sultan et al. [30]. CoMeT, ETS and SoftCardinality are three of the best performing systems in the SemEval-2013 task. Note that ETS [9] is the only work
Joint Multi-Domain Learning for ASAG
Approaches 2-way Acc M-F1 W-F1 3-way Acc M-F1 W-F1 5-way Acc M-F1 W-F1 Non-Neural Approaches CoMeT [19] ETS [9] SOFTCAR [11] Sultan et al. [30] 0.7740 0.7760 0.7240 - 0.7680 0.7620 0.7150 - 0.7730 0.7700 0.7220 - 0.7130 0.7200 0.6590 - 0.6400 0.6470 0.5550 - 0.7070 0.7080 0.6470 - 0.6000 0.6430 0.5440 - 0.5510 0.5980 0.4740 - 0.5980 0.6400 0.5370 0.5820 Neural Approaches Taghipour and Ng [31]âBestâ Taghipour and Ng [31]âTunedâ InferSent [6] Saha et al. [26] Joint Multi-Domain - ASAG - - 0.7463 0.7926 0.8037 - - 0.7410 0.7858 0.7986 0.6700 0.7120 0.7461 0.7910 0.8030 - - 0.6963 0.7185 0.7462 - - 0.6428 0.6662 0.7111 - - 0.6916 0.7143 0.7442 - - 0.6018 0.6444 0.6518 - - 0.5616 0.6010 0.6252 0.5210 0.5330 0.5996 0.6420 0.6565
Table 6. Comparison of JMD-ASAG with state-of-the-art non-neural and neural models on SemEval-2013 SciEntsBank dataset. JMD-ASAG outperforms all existing models on this dataset. â Results as reported by Riordan et al. [24].
of domain adaptation for ASAG and they do so by feature augmentation [7]. Sultan et al. [30] is a more recent work on ASAG that utilizes alignment, term-weighting and vector similarity features to solve the problem.
One of the three neural models is a state-of-the-art essay scoring model by Taghipour and Ng [31]. We use two config- urations of their model for comparison - (1) best parameter set used by Taghipour and Ng [31] and (2) tuned parameter set used by Riordan et al. [24] for ASAG. The other two neural models are InferSent [6], the generic transfer learning model and one model by Saha et al. [26] that combines hand-crafted and deep learning features. Notably, Saha et al. [26] utilizes hand-crafted token features along with deep learning embed- dings, suggesting that such fusion is helpful for ASAG. Table 6 reports all the results.
We find that JMD-ASAG yields significantly better results than all compared systems in all the three tasks. We report 1% better macro-averaged F1 than Saha et al. [26] in 2-way. The improvement in 3-way is significantly higher, with 5% better macro-averaged F1 than Saha et al. [26]. For 5-way, the gain is 2%. Much to our surprise, none of the existing systems use the domain information on this dataset, which accounts for most of the improvement. We also find it particularly creditable that our end-to-end neural architecture is able to significantly outperform Saha et al. [26] which combines hand-crafted features with deep learning features. As has been shown in, embedding hand-crafted features in any deep learning architecture can further enhance the performance of any short answer grading task. We leave this as part of the future work.
# Implementation Details
the words is set to 300. All word vectors are initialized with GloVe embeddings [22] and are further updated for our task. The size of the LSTM hidden units is set to 100. The batch size is kept as 32. All models are trained for 15 epochs using categorical cross-entropy loss and Adam optimizer with a learning rate of 0.001.
# 4.4 Comparison of Training Protocols
We explore different ways of training JMD-ASAG and empir- ically show why algorithm 1 is the proposed way of training JMD-ASAG. We compare the following three approaches - (1) train the network such that the domain is changed af- ter each batch, (2) train the network such that the domain is changed after each epoch, and (3) train the network such that the domain is changed only after the network has con- verged for the previous domain. Note that the first approach is same as algorithm 1. The second approach is also similar but with lines 5 (the loop of batches) and 6 (the loop of domains) in algorithm 1 interchanged. In the third approach, the loop that iterates over domains (line 6 in algorithm 1) comes before the other two loops. Table 7 compares the three approaches on the combined test set of the industry dataset. Batch- and epoch-wise trained models show similar per- formances and massively outperform domain-wise trained models. This however, is unsurprising. Whenever the model is trained on a particular domainâs data until convergence, it is fine-tuned for the current domain, and subsequently de- creases the performance on the previous domains. This leads to a progressive reduction in numbers for each of the previous domains and eventually, lowering the performance on the overall test set. This phenomenon is observed in Figure 4. On training with a new domain (horizontal-axis), the macro-F1 (vertical-axis) for all the previous domains keep decreasing progressively.
We use Keras with Tensorflow as back-end for implementing our models. For the text encoder, the maximum length of the answers is set to 50 words. The embedding dimension of
, ,
, ,
Psy âs+soc âcom â<âsov ââcri © y a © So fou * ; © a MACRO-AVERAGED F1 ao 8 0.4 Psy Psy Psy Psy Psy +s0c +50⬠+soc +50 +COM +coM +COM +GOV +60V +CRI
DOMAINS
Figure 4. Training on new domains results in successive decrease in performance of previously seen domains.
Batch Epoch Domain Acc M-F1 W-F1 0.7216 0.6703 0.7211 0.6700 0.6526 0.5871 0.7281 0.7297 0.6784
Table 7. Comparison of various training protocols of JMD- ASAG on the industry dataset.
5 Conclusion and Future Works Till date, one of the fundamental challenges towards building a real-world deployable intelligent tutoring system has been the lack of adaptability of an automatic short answer grading across various domains or subjects. While almost all existing works have modeled the problem as a typical textual similar- ity problem independent of the domain, we find that in ASAG the notion of similarity varies across domains. In response, we propose JMD-ASAG, a novel neural network architecture for joint multi-domain learning of ASAG. JMD-ASAG not only learns the domain-specific characteristics of similarity but also the generic aspects that is universal to the properties of the language. For k domains, JMD-ASAG achieves both these by learning k domain-specific similarity scorers and one generic scorer in an end-to-end trainable neural archi- tecture. Also, it does not rely on a large corpus for learning the generic characteristics. Empirical evaluation on a propri- etary large-scale industry dataset and a benchmarking dataset show that JMD-ASAG outperforms a state-of-the-art trans- fer learning model and models that only employ generic or domain-specific learning from task-specific training data. We report state-of-the-art results on the benchmarking dataset and also empirically show why our proposed algorithm for training the model is the most optimal among various other protocols. We believe JMD-ASAG can further benefit from better similarity scorers; exploring this is left as part of the future work.
# Saha et al.
In the quest for building a first of its kind large-scale in- telligent tutoring system, we have deployed our JMD-ASAG model trained on the five domains of the industry dataset. The pilot study of the system is currently being carried out with about thousand students across the globe. In the future, we plan to scale our system to 100 subjects. Our architecture is simple yet effective, ensuring that such scale up should be trivial. Also, we believe that with increased number of domains, the generic characteristics of the language will be better learned, leading to further gains in performance. Fi- nally, although our results are specific to the task of ASAG, we believe that the architecture of JMD-ASAG can be directly applied to any semantic similarity task that requires captur- ing generic and domain-specific characteristics. We plan to explore this too as part of the future work.
References [1] Tanel Alumäe. 2013. Multi-domain neural network language model..
In INTERSPEECH, Vol. 13. 2182â2186.
[2] Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2016. Many languages, one parser. arXiv preprint arXiv:1602.01595 (2016).
[3] Isabelle Augenstein, Sebastian Ruder, and Anders Søgaard. 2018. Multi- task Learning of Pairwise Sequence Classification Tasks Over Disparate Label Spaces. arXiv preprint arXiv:1802.09913 (2018).
[4] Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christo- pher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326 (2015).
[5] Xilun Chen and Claire Cardie. 2018. Multinomial Adversarial Net- works for Multi-Domain Text Classification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), Vol. 1. 1226â1240.
[6] Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised Learning of Universal Sentence Rep- resentations from Natural Language Inference Data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. 670â680.
[7] Hal Daumé III. 2007. Frustratingly Easy Domain Adaptation. Associa- tion for Computational Linguistics (2007), 256.
[8] Myroslava O. Dzikovska, Rodney D. Nielsen, Chris Brew, Claudia Leacock, Danilo Giampiccolo, Luisa Bentivogli, Peter Clark, Ido Da- gan, and Hoa Trang Dang. 2013. SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge. In Proceedings of the NAACL-HLT International Workshop on Semantic Evaluation. 263â274.
[9] Michael Heilman and Nitin Madnani. 2013. ETS: Domain adaptation and stacking for short answer scoring. In Proceedings of the Joint Conference on Lexical and Computational Semantics, Vol. 2. 275â279. [10] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term
memory. Neural computation 9, 8 (1997), 1735â1780.
[11] Sergio Jimenez, Claudia Becerra, and Alexander Gelbukh. 2013. SOFT- CARDINALITY: Hierarchical text overlap for student response analy- sis. In Proceedings of the Joint Conference on Lexical and Computa- tional Semantics, Vol. 2. 280â284.
[12] Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016. Frustratingly easy neural domain adaptation. In Proceedings of the International Conference on Computational Linguistics. 387â396.
[13] Sachin Kumar, Soumen Chakrabarti, and Shourya Roy. 2017. Earth Moverâ ËA ´Zs Distance Pooling over Siamese LSTMs for Automatic
# Joint Multi-Domain Learning for ASAG
Short Answer Grading. In Proceedings of the International Joint Con- ference on Artificial Intelligence. 2046â2052.
[14] Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial Multi-task Learning for Text Classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers).
[15] Tom Mitchell, Terry Russell, Peter Broomhead, and Nicola Aldridge. 2002. Towards robust computerised marking of free-text responses. In Proceedings of the International Computer Assisted Assessment Conference.
[16] Michael Mohler, Razvan C. Bunescu, and Rada Mihalcea. 2011. Learn- ing to Grade Short Answer Questions using Semantic Similarity Mea- sures and Dependency Graph Alignments. In Proceedings of the Annual Meeting of the Association for Computational Linguistics: Human Lan- guage Technologies. 752â762.
[17] Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2016. How Transferable are Neural Networks in NLP Applications? CoRR abs/1603.06111 (2016).
[18] Rodney D Nielsen, Wayne Ward, and James H Martin. 2009. Recog- nizing entailment in intelligent tutoring systems. Natural Language Engineering 15, 4 (2009), 479â501.
[19] Niels Ott, Ramon Ziai, Michael Hahn, and Detmar Meurers. 2013. CoMeT: Integrating different levels of linguistic modeling for meaning assessment. In Proceedings of the Joint Conference on Lexical and Computational Semantics, Vol. 2. 608â616.
[20] Hao Peng, Sam Thomson, and Noah A Smith. 2017. Deep mul- arXiv preprint titask learning for semantic dependency parsing. arXiv:1704.06855 (2017).
[21] Nanyun Peng and Mark Dredze. 2016. Multi-task multi-domain arXiv preprint representation learning for sequence tagging. arXiv:1608.02689 (2016).
[22] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. 1532â1543.
[23] Lakshmi Ramachandran, Jian Cheng, and Peter Foltz. 2015. Identifying patterns for short answer scoring using graph-based lexico-semantic text matching. In Proceedings of the NAACL Workshop on Innovative Use of NLP for Building Educational Applications. 97â106.
[24] Brian Riordan, Andrea Horbach, Aoife Cahill, Torsten Zesch, and Investigating neural architectures for short Chong Min Lee. 2017. answer scoring. In Proceedings of the NAACL Workshop on Innovative Use of NLP for Building Educational Applications. 159â168.
[25] Carolyn Penstein Rosé, Johanna D Moore, Kurt VanLehn, and David Allbritton. 2001. A comparative evaluation of socratic versus didactic tutoring. In Proceedings of the Annual Meeting of the Cognitive Science Society, Vol. 23.
[26] Swarnadeep Saha, Tejas I. Dhamecha, Smit Marvaniya, Renuka Sind- hgatta, and Bikram Sengupta. 2018. Sentence Level or Token Level Fea- tures for Automatic Short Answer Grading?: Use Both. In Proceedings of the International Conference Artificial Intelligence in Education.
[27] Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of the Annual Meeting of the Association for Computational Linguisticss, Vol. 2. 231â235.
[28] Sandeep Subramanian, Adam Trischler, Yoshua Bengio, and Christo- Learning General Purpose Distributed Sen- pher J. Pal. 2018. tence Representations via Large Scale Multi-task Learning. CoRR abs/1804.00079 (2018).
[29] Jana Z Sukkarieh, Stephen G Pulman, and Nicholas Raikes. 2004. Auto-marking 2: An update on the UCLES-Oxford University research into using computational linguistics to score short, free text responses. International Association of Educational Assessment (2004).
[30] Md. Arafat Sultan, Cristobal Salazar, and Tamara Sumner. 2016. Fast and Easy Short Answer Grading with High Accuracy. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 1070â 1075.
[31] Kaveh Taghipour and Hwee Tou Ng. 2016. A neural approach to auto- mated essay scoring. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. 1882â1891.
[32] Ottokar Tilk and Tanel Alumäe. 2014. Multi-Domain Recurrent Neural Network Language Model for Medical Speech Recognition. In Baltic HLT. 149â152.
[33] Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426 (2017).
[34] Zhilin Yang, Ruslan Salakhutdinov, and William Cohen. 2016. Multi- task cross-lingual sequence tagging from scratch. arXiv preprint arXiv:1603.06270 (2016).
, , | {
"id": "1603.06270"
} |
1902.09506 | GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering | We introduce GQA, a new dataset for real-world visual reasoning and
compositional question answering, seeking to address key shortcomings of
previous VQA datasets. We have developed a strong and robust question engine
that leverages scene graph structures to create 22M diverse reasoning
questions, all come with functional programs that represent their semantics. We
use the programs to gain tight control over the answer distribution and present
a new tunable smoothing technique to mitigate question biases. Accompanying the
dataset is a suite of new metrics that evaluate essential qualities such as
consistency, grounding and plausibility. An extensive analysis is performed for
baselines as well as state-of-the-art models, providing fine-grained results
for different question types and topologies. Whereas a blind LSTM obtains mere
42.1%, and strong VQA models achieve 54.1%, human performance tops at 89.3%,
offering ample opportunity for new research to explore. We strongly hope GQA
will provide an enabling resource for the next generation of models with
enhanced robustness, improved consistency, and deeper semantic understanding
for images and language. | http://arxiv.org/pdf/1902.09506 | Drew A. Hudson, Christopher D. Manning | cs.CL, cs.AI, cs.CV, cs.LG | Published as a conference paper at CVPR 2019 (oral) | null | cs.CL | 20190225 | 20190510 | 9 1 0 2
y a M 0 1 ] L C . s c [
3 v 6 0 5 9 0 . 2 0 9 1 : v i X r a
GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering visualreasoning.net
# Drew A. Hudson Stanford University 353 Serra Mall, Stanford, CA 94305 dorarad@cs.stanford.edu
# Christopher D. Manning Stanford University 353 Serra Mall, Stanford, CA 94305 manning@cs.stanford.edu
# Abstract
We introduce GQA, a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous VQA datasets. We have developed a strong and robust question engine that leverages Visual Genome scene graph structures to create 22M diverse reasoning questions, which all come with func- tional programs that represent their semantics. We use the programs to gain tight control over the answer distribution and present a new tunable smoothing technique to mitigate question biases. Accompanying the dataset is a suite of new metrics that evaluate essential qualities such as consistency, grounding and plausibility. A careful analysis is performed for baselines as well as state-of-the-art models, providing ï¬ne-grained results for different question types and topolo- gies. Whereas a blind LSTM obtains a mere 42.1%, and strong VQA models achieve 54.1%, human performance tops at 89.3%, offering ample opportunity for new research to explore. We hope GQA will provide an enabling resource for the next generation of models with enhanced robustness, improved consistency, and deeper semantic understanding of vision and language.
Figure 1: Examples from the new GQA dataset for visual reason- ing and compositional question answering: Is the bowl to the right of the green apple? What type of fruit in the image is round? What color is the fruit on the right side, red or green? Is there any milk in the bowl to the left of the apple?
across the board, with a host of datasets being constructed [4, 11, 15, 41, 20] and numerous models being proposed [5, 38, 6, 10, 12].
# 1. Introduction
It takes more than a smart guess to answer a good ques- tion. The ability to assimilate knowledge and use it to draw inferences is among the holy grails of artiï¬cial intelligence. A tangible form of this goal is embodied in the task of Vi- sual Question Answering (VQA), where a system has to an- swer free-form questions by reasoning about presented im- ages. The task demands a rich set of abilities as varied as ob- ject recognition, commonsense understanding and relation extraction, spanning both the visual and linguistic domains. In recent years, it has sparked substantial interest through- out the research community, becoming extremely popular
The multi-modal nature of the task and the diversity of skills required to address different questions make VQA particularly challenging. Yet, designing a good test that will reï¬ect its full qualities and complications may not be that trivial. Despite the great strides that the ï¬eld recently made, it has been established through a series of studies that exist- ing benchmarks suffer from critical vulnerabilities that ren- der them highly unreliable in measuring the actual degree of visual understanding capacities [39, 11, 2, 8, 3, 13, 18]. Most notable among the ï¬aws of current benchmarks is the strong and prevalent real-world priors displayed throughout the data [39, 11, 3] â most tomatoes are red and most tables are wooden. These in turn are exploited
by VQA models, which become heavily reliant upon such statistical biases and tendencies within the answer distribu- tion to largely circumvent the need for true visual scene un- derstanding [2, 11, 15, 8]. This situation is exacerbated by the simplicity of many of the questions, from both linguistic and semantic perspectives, which in practice rarely require much beyond object recognition [33]. Consequently, early benchmarks led to an inï¬ated sense of the state of scene understanding, severely diminishing their credibility [37]. Aside from that, the lack of annotations regarding ques- tion structure and content leaves it difï¬cult to understand the factors affecting modelsâ behavior and performance and to identify the root causes behind their mistakes.
To address these shortcomings, while retaining the vi- sual and semantic richness of real-world images, we intro- duce GQA, a new dataset for visual reasoning and compo- sitional question answering. We have developed and care- fully reï¬ned a robust question engine, leveraging content: information about objects, attributes and relations provided through Visual Genome Scene Graphs [20], along with structure: a newly-created extensive linguistic grammar which couples hundreds of structural patterns and detailed lexical semantic resources. Together, they are combined in our engine to generate over 22 million novel and diverse questions, which all come with structured representations in the form of functional programs that specify their con- tents and semantics, and are visually grounded in the image scene graphs.
GQA questions involve varied reasoning skills, and multi-step inference in particular. We further use the as- sociated semantic representations to greatly reduce biases within the dataset and control for its question type compo- sition, downsampling it to create a 1.7M balanced dataset. Contrary to VQA 2.0, here we balance not only binary ques- tions, but also open ones, by applying a tunable smoothing technique that makes the answer distribution for each ques- tion group more uniform. Just like a well-designed exam, our benchmark makes the educated guesses strategy far less rewarding, and demands instead more reï¬ned comprehen- sion of both the visual and linguistic contents.
Along with the dataset, we have designed a suite of new metrics, which include consistency, validity, plausibility, grounding and distribution scores, to complement the stan- dard accuracy measure commonly used in assessing meth- odsâ performance. Indeed, studies have shown that the ac- curacy metric alone does not account for a range of anoma- lous behaviors that models demonstrate, such as ignoring key question words or attending to irrelevant image regions [2, 8]. Other works have argued for the need to devise new evaluation measures and techniques to shed more light on systemsâ inner workings [18, 34, 35, 17]. In fact, beyond providing new metrics, GQA can even directly support the development of more interpretable models, as it provides
a sentence-long explanation that corroborates each answer, and further associates each word from both the questions and the responses with a visual pointer to the relevant re- gion in the image, similar in nature to datasets by Zhu et al. [41], Park et al. [29], and Li et al. [22]. These in turn can serve as a strong supervision signal to train models with en- hanced transparency and accessibility.
GQA combines the best of both worlds, having clearly deï¬ned and crisp semantic representations on the one hand but enjoying the semantic and visual richness of real-world images on the other. Our three main contributions are (1) the GQA dataset as a resource for studying visual rea- soning; (2) development of an effective method for generat- ing a large number of semantically varied questions, which marries scene graph representations with computational lin- guistic methods; (3) new metrics for GQA, that allow for better assessment of system success and failure modes, as demonstrated through a comprehensive performance analy- sis of existing models on this task. We hope that the GQA dataset will provide fertile ground for the development of novel methods that push the boundaries of question answer- ing and visual reasoning.
# 2. Related Work
Recent years have witnessed tremendous progress in vi- sual understanding. Multiple attempts have been made to mitigate the systematic biases of VQA datasets as discussed in section 1 [11, 39, 3, 15], but they fall short in providing an adequate solution: Some approaches operate over con- strained and synthetic images [39, 15], neglecting the real- ism and diversity natural photos provide. Meanwhile, Goyal et al. [11] associate most of the questions in VQA1.0 with a pair of similar pictures that result in different answers. While offering partial relief, this technique fails to address open questions, leaving their answer distribution largely un- balanced. In fact, since the method does not cover 29% of the questions due to limitations of the annotation process, even within the binary ones biases still remain.1
At the other extreme, Agrawal et al. [3] partition the questions into training and validation sets such that their re- spective answer distributions become intentionally dissim- ilar. While undoubtedly challenging, these adversarial set- tings penalize models, maybe unjustly, for learning salient properties of the training data. In the absence of other in- formation, making an educated guess is a legitimate choice â a valid and beneï¬cial strategy pursued by machines and people alike [27, 7, 26]. What we essentially need is a bal- anced test that is more resilient to such gaming strategies, as we strive to achieve with GQA.
1For VQA1.0, blind models achieve 50% in accuracy without even con- sidering the images whatsoever [4]. Similarly, for VQA2.0, 67% and 27% of the binary and open questions respectively are answered correctly by such models [11].
Decoy: brown What color is the food on the red object left of the small girl that is holding a hamburger, yellow or broun? Select: hamburger â Relate: girl, holding â Filter size: small â Relate: object, left â Filter color: red â Relate: food,on â» Choose color: yellow | brown Pattern: What] Which <type> [do you think] <is> <dobject>, <attr> or <decoy>? Program: Select: <dobject > â> Choose <type>: <attr>|<decoy> Reference: The food on the red object left of the small gir! that is holding a hamburger Patterns Collection Compositional References Decoys Selection Probabilistic Generation * Edge Pruning * Object Augmentation Ontology construction Global Properties * Distribution Balancing * Type-Based Sampling * Deduplication Consistency Validity & Plausibility Distribution Grounding * Entailment Relations r * Functional Programs * Recursive Reachability
Figure 2: Overview of the GQA construction process. Given an image annotated with a scene graph of its objects, attributes and relations, we produce compositional questions by traversing the graph. Each question has both a standard natural-language form and a functional program representing its semantics. Please refer to section 3 for further detail.
In creating GQA, we drew inspiration from the CLEVR task [15], which consists of compositional questions over synthetic images. However, its artiï¬cial nature and low di- versity, with only a handful of object classes and properties, makes it particularly vulnerable to memorization of all com- binations, thereby reducing its effective degree of composi- tionality. Conversely, GQA operates over real images and a large semantic space, making it much more challenging. Even though our questions are not natural as in other VQA datasets [11, 41], they display a rich vocabulary and diverse linguistic and grammatical structures. They may serve in fact as a cleaner benchmark to asses models in a more con- trolled and comprehensive fashion, as discussed below.
soning, logical inference and comparisons. Figure 2 pro- vides a brief overview of the GQA components and gener- ation process, and ï¬gure 3 presents multiple instances from the dataset. The dataset along with further information are available at visualreasoning.net.
The images, questions and corresponding answers are all accompanied by matching semantic representations: Each image is annotated with a dense Scene Graph [16, 20], representing the objects, attributes and relations it contains. Each question is associated with a functional program which lists the series of reasoning steps needed to be per- formed to arrive at the answer. Each answer is augmented with both textual and visual justiï¬cations, pointing to the relevant region within the image.
The task of question generation has been explored in earlier work, mostly for the purpose of data augmentation. Contrary to GQA, those datasets are either small in scale [25] or use only a restricted set of objects and a handful of non-compositional templates [17, 24]. Neural alternatives to visual question generation have been recently proposed [28, 14, 40], but they aim at a quite different goal of creat- ing engaging but potentially inaccurate questions about the wider context of the image such as subjective evoked feel- ings or speculative events that may lead to or result from the depicted scenes [28].
The structured representations and detailed annotations for images and questions offer multiple advantages. They enable tight control over the answer distribution, which allows us to create a balanced set of challenging ques- tions, and support the formulation of a suite of new metrics that aim to provide deeper insight into modelsâ behavior. They facilitate performance assessment along various axes of question type and topology, and may open the door for the development of novel methods with more grounded and transparent knowledge representation and reasoning.
# 3. The GQA Dataset
The GQA dataset centers around real-world reasoning, scene understanding and compositional question answering. It consists of 113K images and 22M questions of assorted types and varying compositionality degrees, measuring per- formance on an array of reasoning skills such as object and attribute recognition, transitive relation tracking, spatial rea-
We proceed by describing the GQA question engine and the four-step dataset construction pipeline: First, we thor- oughly clean, normalize, consolidate and augment the Vi- sual Genome scene graphs [20] linked to each image. Then, we traverse the objects and relations within the graphs, and marry them with grammatical patterns gleaned from VQA 2.0 [11] and sundry probabilistic grammar rules to produce a semantically-rich and diverse set of questions. In the third stage, we use the underlying semantic forms to reduce bi-
ases in the conditional answer distribution, resulting in a balanced dataset that is more robust against shortcuts and guesses. Finally, we discuss the question functional repre- sentation, and explain how we use it to compute entailment between questions, supporting new evaluation metrics.
# 3.1. Scene Graph Normalization
Our starting point in creating the GQA dataset is the Vi- sual Genome Scene Graph annotations [20] that cover 113k images from COCO [23] and Flickr [36].2 The scene graph serves as a formalized representation of the image: each node denotes an object, a visual entity within the image, like a person, an apple, grass or clouds. It is linked to a bounding box specifying its position and size, and is marked up with about 1â3 attributes, properties of the object: e.g., its color, shape, material or activity. The objects are con- nected by relation edges, representing actions (verbs), spa- tial relations (prepositions), and comparatives.
The scene graphs are annotated with free-form natural language. In order to use them for question generation we ï¬rst have to normalize the graphs and their vocabulary. We provide here a brief overview of the normalization process, and present a more detailed description in the supplemen- tary. First, we create a clean, consolidated and unambiguous ontology over the graph with 2690 classes including various objects, attributes and relations. We further augment it with semantic and linguistic information which will aid us in cre- ating grammatical questions. Then, we prune inaccurate or unnatural edges, using combination of object detection con- ï¬dences, n-gram frequencies, co-occurrence statistics, word embedding distances, category-based rules, and manual cu- ration. Finally, we enrich the graph with positional informa- tion (absolute and relative) as well as semantic properties (location, weather). By the end of this stage, the resulting scene graphs have clean, uniï¬ed, rich and unambiguous se- mantics for both the nodes and the edges.
# 3.2. The Question Engine
At the heart of our pipeline is the question engine, re- sponsible for producing diverse, relevant and grammatical questions with varying degrees of compositionality. The generation process harnesses two resources: one is the scene graphs which fuel the engine with rich content â infor- mation about objects, attributes and relationships; the other is the structural patterns, a mold that shapes the content, casting it into a question.
Our engine operates over 524 patterns, spanning 117 question groups, and 1878 answers which are based on the scene graphs. Each group is associated with three compo- nents: (1) a functional program that represents its seman- tics; (2) A set of textual rephrases which express it in nat-
2We extend Visual Genome dataset with 5k hidden scene graphs col- lected through crowdsourcing, used for the test set.
A1. Is the tray on top of the table black or light brown? light brown A2. Are the napkin and the cup the same color? yes A3. Is the small table both oval and wooden? yes A4. Is there any fruit to the left of the tray the cup is on top of? yes A5. Are there any cups to the left of the tray on top of the table? no B1. What is the brown animal sitting inside of? box B2. What is the large container made of? cardboard B3. What animal is in the box? bear B4. Is there a bag to the right of the green door? no B5. Is there a box inside the plastic bag? no
Figure 3: Examples of questions from the GQA dataset.
ural language, e.g., âWhat|Which <type> [do you think] <is> <theObject>?â; (3) A pair of short and long an- swers: e.g., <attribute> and âThe <object> <is> <attribute>.â respectively.3
We begin from a seed set of 250 manually constructed patterns, and extend it with 274 natural patterns derived from VQA1.0 [4] through templatization of words from our ontology.4 To increase the question diversity, apart from using synonyms for objects and attributes, we incor- porate probabilistic sections into the patterns, such as op- tional phrases [x] and alternate expressions (x|y), which get instantiated at random.
It is important to note that the patterns do not strictly limit the structure or depth of each question, but only outline their high-level form, as many of the template ï¬elds can be populated with nested compositional refer- ences. For instance, in the pattern above, we may replace <theObject> with âthe apple to the left of the white refrigeratorâ.
To achieve that compositionality, we compute for each object a set of candidate references, which can either be direct, e.g. the bear, this animal, or indirect, using mod- iï¬ers, e.g. the white bear, the bear on the left, the animal behind the tree, the bear that is wearing a coat. Direct ref- erences are used when the uniqueness of the object can be conï¬dently conï¬rmed by object detectors, making the cor- responding references unambiguous. Alternatively, we use indirect references, leading to multi-step questions as varied
3Note that the long answers can serve as textual justiï¬cations, espe- cially for questions that require increased reasoning such as logical infer- ence, where a question like âIs there a red apple in the picture?â may have the answer: âNo, there is an apple, but it is greenâ
4For instance, a question-answer pair in VQA1.0 such as âWhat color is the apple? redâ turns after templatization into âWhat <type> <is> the <object>? <attribute>â.
@existOrFalse @existNotOrFalse @existFalse @existAttrOrFalse @existNotFalse @existRelFalse ~~ @existAndFalse @existAttrFalse
Figure 4: Examples of entailment relations between different question types. Refer to section 3.3 for further detail.
as Who is looking at the animal that is wearing the red coat in front of the window?, and thus greatly increasing the pat- ternsâ effective ï¬exibility. This is the key ingredient behind the automatic generation of compositional questions.
Finally, we compute a set of decoys for the scene graph elements. Indeed, some questions, such as negative ones or those that involve logical inference, pertain to the absence of an object or to an incorrect attribute. Examples include Is the apple green? for a red apple, or Is the girl eating ice cream? when she is in fact eating a cake. Given a triplet (s, r, o), (e.g. (girl, eating, cake) we select a distractor Ëo considering its likelihood to be in relation with s and its plausibility to co-occur in the context of the other objects in the depicted scene. A similar technique is applied in se- lecting attribute decoys (e.g. a green apple). While choos- ing distractors, we exclude from consideration candidates that we deem too similar (e.g. pink and orange), based on a manually deï¬ned list for each concept in the ontology.
(1) the clean scene graphs, (2) the structural patterns, (3) the object references and (4) the decoys, we can proceed to generating the ques- tions! We traverse the graph, and for each object, object- attribute pair or subject-relation-object triplet, we produce relevant questions by instantiating a randomly selected question pattern, e.g. âWhat <type> is <theObject>, <attribute> or <cAttribute>?â, populating all the ï¬elds with the matching information, yielding, for example, the question: âWhat (color) (is) the (apple on the table), (red) or (green)?â. When choosing object references, we avoid selecting those that disclose the answer or repeat in- formation, e.g. âWhat color is the red apple?â or âWhich dessert sits besides the apple to the left of the cake?â. We also avoid asking about relations that tend to have multiple instances for the same object, e.g. asking what object is on the table, as there may be multiple valid answers.
By the end of this stage, we obtain a diverse set of 22M interesting, challenging and grammatical questions, pertain- ing to each and every aspect of the image.
# 3.3. Functional Representation and Entailment
Each question pattern is associated with a structured rep- resentation in the form of a functional program. For in- stance, the question What color is the apple on the white
table? is semantically equivalent to the following program: select: table â ï¬lter: white â relate(subject,on): apple â query: color. As we can see, these programs are com- posed of atomic operations such as object selection, traver- sal along a relation edge, or an attribute veriï¬cation, which are then chained together to create challenging reasoning questions.
The semantically unambiguous representations offer multiple advantages over free-form unrestricted questions. For one thing, they enable comprehensive assessment of methods by dissecting their performance along different axes of question textual and semantic lengths, type and topology, thus facilitating the diagnosis of their success and failure modes (section 4.2). Second, they aid us in balancing the dataset distribution, mitigating its question-conditional priors and guarding against educated guesses (section 3.4). Finally, they allow us to identify entailment and equivalence relations between different questions: knowing the answer to the question What color is the apple? allows a coherent learner to infer the answer to the questions Is the apple red? Is it green? etc. The same goes especially for questions that involve logical inference like or and and operations or spatial reasoning, e.g. left and right.
As further discussed in section 4.4, this entailment prop- erty can be used to measure the coherence and consistency of the models, shedding new light on their inner work- ings, compared to the widespread but potentially mislead- ing accuracy metric. We deï¬ne direct entailment relations between the various functional programs and use these to recursively compute all the questions that can be entailed from a given source. A complete catalog of the functions, their associated question types, and the entailment relations between them is provided in the supplementary.
# 3.4. Sampling and Balancing
One of the main issues of existing VQA datasets is the prevalent question-conditional biases that allow learners to make educated guesses without truly understanding the pre- sented images, as explained in section 1. However, precise representation of the question semantics can allow tighter control over these biases, having the potential to greatly al- leviate the problem. We leverage this observation and use the functional programs attached to each question to smooth out the answer distribution.
Given a questionâs functional program, we derive two la- bels, global and local: The global label assigns the question to its answer type, e.g. color for What color is the apple?. The local label further considers the main subject/s of the question, e.g. apple-color or table-material. We use these labels to partition the questions into groups, and smooth the answer distribution of each group within the two levels of granularity, ï¬rst globally, and then locally.
For each group, we ï¬rst compute its answer distribution
head tail Unbalanced Balanced bounded ge gO Bg
j 1 100% 0% 20% 70% eo 0% 40% 30% 20% 10% BESS E ES 2 Se * oe eg RY rata 4 Sin Py SHNTAG fe os SON head Unbalanced Balanced bounded a ig SANE ey ge gO Bg
j BESS E ES 2 Se * oe eg RY rata 4 Sin Py SHNTAG fe
1 100% 0% 20% 70% eo 0% 40% 30% 20% 10% os SON a ig SANE ey
Figure 5: Visualization of the balancing process. The conditional answer distribution before (left) and after (middle) the balancing for a selection of question groups. We show the top 10 answers, where the column height corresponds to the relative frequency of each answer. We can see that on the left the distributions are heavily biased, while on the middle it is more uniform and with heavier tails, while intentionally retaining the original real-world tendencies up to a tunable degree. Right: An illustration of the balancing process.
P for each group, which we then downsample (formally, us- ing rejection-sampling) to ï¬t a smoother answer distribution Q derived through the following procedure: We iterate over the answers of that group in decreasing frequency order, and reweight P âs head up to the current iteration to make it more comparable to the tail size. While repeating this operation as we go through the answers, iteratively âmovingâ prob- ability from the head into the tail [32], we also maintain minimum and maximum ratios between each pair of subse- quent answers (sorted by frequency). This ensures that the relative frequency-based answer ranking stays the same.
and discuss their implications and merits. In the supplemen- tary, we perform a head-to-head comparison between GQA and the popular VQA 2.0 dataset [11], and proceed with fur- ther diagnosis of the current top-performing model, MAC [12], evaluating it along multiple axes such as training-set size, question length and compositionality degree.
# 4.1. Dataset Analysis and Comparison
The advantage of this scheme is that it retains the general real-world tendencies, smoothing them out up to a tunable degree to make the benchmark more challenging and less biased. Refer to ï¬gure 5 for a visualization and to the sup- plementary for a precise depiction of the procedure. Since balancing is performed in two granularity levels, the ob- tained answer distributions are made more uniform both lo- cally and globally. Quantitatively, the entropy of the answer distribution is increased by 72%, conï¬rming the success of this stage.
Finally, we downsample the questions based on their type to control the dataset type composition, and ï¬lter out redundant questions that are too semantically similar to ex- isting ones. We split the dataset into 70% train, 10% valida- tion, 10% test and 10% challenge, making sure that all the questions about a given image appear in the same split.
# 4. Analysis and Baseline Experiments
In the following, we provide an analysis of the GQA dataset and evaluate the performance of baselines, state-of- the-art models and human subjects, revealing a large gap from the latter. To establish the diversity and realism of GQA questions, we test transfer performance between the GQA and VQA datasets. We then introduce the new metrics that complement our dataset, present quantitative results
The GQA dataset consists of 22,669,678 questions over 113,018 images, which cover wide range of reasoning skills and vary in length and number of required inference-steps (ï¬gure 6). The dataset has a vocabulary size of 3097 words and 1878 possible answers. While smaller than natural lan- guage datasets, further investigation reveals that it covers 88.8% and 70.6% of VQA questions and answers respec- tively, corroborating its wide diversity. A wide selection of dataset visualizations is provided in the supplementary.
We associate each question with two types: structural and semantic. The structural type is derived from the ï¬nal operation in the questionâs functional program. It can be (1) verify for yes/no questions, (2) query for all open ques- tions, (3) choose for questions that present two alternatives to choose from, e.g. âIs it red or blue?â; (4) logical which involve logical inference, and (5) compare for comparison questions between two or more objects. The semantic type refers to the main subject of the question: (1) object: for existence questions, (2) attribute: consider the properties or position of an object, (3) category: related to object identi- ï¬cation within some class, (4) relation: for questions ask- ing about the subject or object of a described relation (e.g. âwhat is the girl wearing?â), and (5) global: about overall properties of the scene such as weather or place. As shown in ï¬gure 6, the questionsâ types vary at both the semantic and structural levels.
# 4.2. Baseline Experiments
We analyze an assortment of models as well as human subjects on GQA. The evaluation results are shown in ta- ble 1. Baselines include a âblindâ LSTM model with ac- cess to the questions only, a âdeafâ CNN model with ac- cess to the images only, an LSTM+CNN model, and two prior models based on the question group, local or global, which return the most common answer for each group, as deï¬ned in section 3.3. We can see that they all achieve low results of 17.82%â41.07%. For the LSTM model, inspec- tion of speciï¬c question types reveals that it achieves only 22.7% for open query questions, and not far above chance for binary question types. We also evaluate the performance of the bottom-up attention model [5] â the winner of the 2017 VQA challenge, and the MAC model [12] â a state- of-the-art compositional attention model for CLEVR [15]. While surpassing the baselines, they still perform well be- low human scores5, offering ample opportunity for further research in the visual reasoning domain.
# 4.3. Transfer Performance
We tested the transfer performance between the GQA and VQA datasets, training on one and testing on the other: A MAC model trained on GQA achieves 52.1% on VQA before ï¬ne-tuning and 60.5% afterwards. Compare these with 51.6% for LSTM+CNN and 68.3% for MAC, when both are trained and tested on VQA. These quite good re- sults demonstrate the realism and diversity of GQA ques- tions, showing that the dataset can serve as a good proxy for human-like questions. In contrast, MAC trained on VQA gets 39.8% on GQA before ï¬ne-tuning and 46.5% after- wards, illustrating the further challenge GQA poses.
# 4.4. New Evaluation Metrics
Apart from the standard accuracy metric and the more detailed type-based diagnosis our dataset supports, we in- troduce ï¬ve new metrics to get further insight into visual reasoning methods and point to missing capabilities we be- lieve coherent reasoning models should possess.
Consistency. This metric measures responses consis- tency across different questions. Recall that in section 3.3, we used the questionsâ semantic representation to derive equivalence and entailment relations between them. When being presented with a new question, any learner striving to be trustworthy should not contradict its previous answers. It should not answer green to a new question about an apple it has just identiï¬ed as red.
For each question-answer pair (q, a), we deï¬ne a set Eq = q1, q2, . . . , qn of entailed questions, the answers to
5To evaluate human performance, we used Amazon Mechanical Turk to collect human responses for 4000 random questions, taking the majority over 5 answers per question.
GQA STRUCTURAL TYPES
GQA STRUCTURAL TYPES GQA SEMANTIC TYPES GQA SEMANTIC STEPS 11% global citegory relation es 52% attribute 28% Question Distribution by # words ° ES =VOA -v7w =CLEVR-Humans -CLEVR oso Rw Frequency ° ° 0 5 10 15 20 25 30 Number of words
GQA SEMANTIC TYPES 11% global citegory relation es 52% attribute 28%
GQA SEMANTIC STEPS
Question Distribution by # words ° ES =VOA -v7w =CLEVR-Humans -CLEVR oso Rw Frequency ° ° 0 5 10 15 20 25 30 Number of words
Figure 6: Top: Dataset statistics, partitioned into structural types, semantic types, and the number of reasoning steps. Bottom: VQA datasets question length distribution.
which can be unambiguously inferred given (q, a). For in- stance, given the question-answer pair Is there a red apple to the left of the white plate? yes, we can infer the answers to questions such as Is the plate to the right of the apple?, Is there a red fruit to the left of the plate?, What is the white thing to the right of the apple?, etc. For each question q in Q â the set of questions the model answered correctly, we measure the modelâs accuracy over the entailed questions Eq and then average these scores across all questions in Q. We see that while people have exceptional consistency of 98.4%, even the best models are inconsistent in about 1 out of 5 questions, and models such as LSTM contradict themselves almost half the time. Achieving high consis- tency may require deeper understanding of the question se- mantics in the context of the image, and, in contrast with accuracy, is more robust against educated guesses as it in- spects connections between related questions, and thus may serve as a better measure of modelsâ true visual understand- ing skills.
Validity and Plausibility. The validity metric checks whether a given answer is in the question scope, e.g. re- sponding some color to a color question. The plausibility score goes a step further, measuring whether the answer is reasonable, or makes sense, given the question (e.g. ele- phant usually do not eat, say, pizza). Speciï¬cally, we check whether the answer occurs at least once in relation with the questionâs subject, across the whole dataset. Thus, we consider e.g., red and green as plausible apple colors and, conversely, purple as implausible.6 The experiments show that models fail to respond with plausible or even valid an- swers at least 5â15% of the times, indicating limited com-
6While the plausibility metric may not be fully precise especially for infrequent objects due to potential data scarcity issues, it may provide a good sense of the general level of world knowledge the model has acquired.
Metric Open Binary Query Compare Choose Logical Verify Global Object Attribute Relation Category Distribution Grounding Validity Plausibility Consistency Accuracy Global Prior 16.52 42.99 16.52 35.59 17.45 50.32 53.40 24.70 49.96 34.89 22.88 15.26 130.86 - 89.02 75.34 51.78 28.93 Local Prior 16.99 47.53 16.99 41.91 26.58 50.11 58.80 20.19 54.00 42.67 20.16 17.31 21.56 - 84.44 84.42 54.34 31.31 CNN 1.74 36.05 1.55 36.34 0.85 47.18 47.02 8.64 47.33 22.66 11.60 3.56 19.99 - 35.78 34.84 62.40 17.82 LSTM CNN+LSTM BottomUp MAC 38.91 22.69 71.23 61.90 38.91 22.69 60.04 57.79 70.59 57.15 69.99 61.73 75.45 65.78 60.82 27.22 81.49 74.33 59.82 48.28 46.16 33.24 44.38 22.33 5.34 17.93 82.24 - 96.16 96.39 84.48 87.30 81.59 68.68 54.06 41.07 31.80 63.26 31.80 56.62 61.40 62.05 67.00 56.57 75.90 50.91 39.45 37.49 7.46 - 96.02 84.25 74.57 46.55 34.83 66.64 34.83 56.32 66.56 64.03 71.45 60.29 78.45 53.88 42.84 41.18 5.98 78.47 96.18 84.57 78.71 49.74 Humans 87.4 91.2 87.4 93.1 94.3 88.5 90.1 92.3 88.1 90.7 89.2 90.3 - - 98.9 97.2 98.4 89.3
Table 1: Results for baselines and state-of-the-art models on the GQA dataset. All results refer to the test set. Models are evaluated for overall accuracy as well as accuracy per type. In addition, they are evaluated by validity, plausibility, distribution, consistency, and when possible, grounding metrics. Please refer to the text for further detail.
prehension of some questions. Given that these properties are noticeable statistics of the datasetâs conditional answer distribution, not even depending on the speciï¬c images, we would expect a sound method to achieve higher scores.
Distribution. To get further insight into the extent to which methods manage to model the conditional answer distribution, we deï¬ne the distribution metric, which mea- sures the overall match between the true answer distribu- tion and the model predicted distribution, using Chi-Square statistic [21]. It allows us to see if the model predicts not only the most common answers but also the less frequent ones. Indeed, the experiments demonstrate that the leading SOTA models score lower than the baselines (for this met- ric, lower is better), indicating increased capacity in ï¬tting more subtle trends of the datasetâs distribution.
Grounding. For attention-based models, the grounding score checks whether the model attends to regions within the image that are relevant to the question. For each dataset instance, we deï¬ne a pointer r to the visual region which the question or answer refer to, and measure the modelâs visual attention (probability) over that region. This metric allows us to evaluate the degree to which the model grounds its reasoning in the image, rather than just making educated guesses based on question priors or world tendencies.
Indeed, the models mostly attend to the relevant regions in the image, with grounding scores of about 80%. To ver- ify the reliability of the metric, we further perform exper- iments with spatial features instead of the object-informed ones used by BottomUp [5] and MAC [12], which lead to a much lower 43% score, demonstrating that object-based features provide models with better granularity for the task, allowing them to focus on more pertinent regions than with the coarser spatial features.
# 5. Conclusion
In this paper, we introduced the GQA dataset for real- world visual reasoning and compositional question answer- ing. We described the dataset generation process, provided baseline experiments and deï¬ned new measures to get more insight into modelsâ behavior and performance. We believe this benchmark can help drive VQA research in the right directions of deeper semantic understanding, sound reason- ing, enhanced robustness and improved consistency. A po- tential avenue towards such goals may involve more inti- mate integration between visual knowledge extraction and question answering, two ï¬ourishing ï¬elds that oftentimes have been pursued independently. We strongly hope that GQA will motivate and support the development of more compositional, interpretable and cogent reasoning models, to advance research in scene understanding and visual ques- tion answering.
# 6. Acknowledgments
We wish to thank Justin Johnson for discussions about the early versions of this work, and Ross Girshick for his in- spirational talk at the VQA workshop 2018. We thank Ran- jay Krishna, Eric Cosatto, Alexandru Niculescu-Mizil and the anonymous reviewers for helpful suggestions and com- ments. Stanford University gratefully acknowledges Face- book Inc., Samsung Electronics Co., Ltd., and the Defense Advanced Research Projects Agency (DARPA) Communi- cating with Computers (CwC) program under ARO prime contract no. W911NF15-1-0462 for generously supporting this work.
# References
[1] Google books ngram corpus http://books.google.com/ngrams/. 17
[2] A. Agrawal, D. Batra, and D. Parikh. Analyzing the behav- ior of visual question answering models. In EMNLP, pages 1955â1960, 2016. 1, 2
[3] A. Agrawal, D. Batra, D. Parikh, and A. Kembhavi. Donât just assume; look and answer: Overcoming priors for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4971â4980, 2018. 1, 2
[4] A. Agrawal, J. Lu, S. Antol, M. Mitchell, C. L. Zitnick, D. Parikh, and D. Batra. VQA: Visual question answer- ing. International Journal of Computer Vision, 123(1):4â31, 2017. 1, 2, 4
[5] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang. Bottom-up and top-down at- arXiv preprint tention for image captioning and VQA. arXiv:1707.07998, 2017. 1, 7, 8, 14, 15
[6] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 39â48, 2016. 1
[7] Y. Attali and M. Bar-Hillel. Guess where: The position of correct answers in multiple-choice test items as a psy- chometric variable. Journal of Educational Measurement, 40(2):109â128, 2003. 2
[8] A. Das, H. Agrawal, L. Zitnick, D. Parikh, and D. Batra. Human attention in visual question answering: Do humans and deep networks look at the same regions? Computer Vision and Image Understanding, 163:90â100, 2017. 1, 2 [9] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248â255. Ieee, 2009. 14 [10] A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), 2016. 1
[11] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6325â6334, 2017. 1, 2, 3, 6, 16
[12] D. A. Hudson and C. D. Manning. Compositional attention networks for machine reasoning. International Conference for Representation Learning (ICLR), 2018. 1, 6, 7, 8, 14, 15 [13] A. Jabri, A. Joulin, and L. van der Maaten. Revisiting visual In European conference on
question answering baselines. computer vision, pages 727â739. Springer, 2016. 1
[14] U. Jain, Z. Zhang, and A. G. Schwing. Creativity: Generat- ing diverse questions using variational autoencoders. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5415â5424, 2017. 3
[15] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1988â1997, 2017. 1, 2, 3, 7
[16] J. Johnson, R. Krishna, M. Stark, L.-J. Li, D. Shamma, M. Bernstein, and L. Fei-Fei. Image retrieval using scene graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3668â3678, 2015. 3
[17] K. Kaï¬e and C. Kanan. An analysis of visual question an- swering algorithms. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 1983â1991. IEEE, 2017. 2, 3
[18] K. Kaï¬e and C. Kanan. Visual question answering: Datasets, algorithms, and future challenges. Computer Vision and Im- age Understanding, 163:3â20, 2017. 1, 2, 16
[19] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 15 [20] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, et al. Vi- sual genome: Connecting language and vision using crowd- sourced dense image annotations. International Journal of Computer Vision, 123(1):32â73, 2017. 1, 2, 3, 4, 17
[21] H. O. Lancaster and E. Seneta. Chi-square distribution. En- cyclopedia of biostatistics, 2, 2005. 8
[22] Q. Li, Q. Tao, S. Joty, J. Cai, and J. Luo. VQA-E: Ex- plaining, elaborating, and enhancing your answers for visual questions. arXiv preprint arXiv:1803.07464, 2018. 2 [23] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ra- manan, P. Doll´ar, and C. L. Zitnick. Microsoft COCO: Com- mon objects in context. In European conference on computer vision, pages 740â755. Springer, 2014. 4, 17
[24] A. Mahendru, V. Prabhu, A. Mohapatra, D. Batra, and S. Lee. The promise of premise: Harnessing question arXiv preprint premises in visual question answering. arXiv:1705.00601, 2017. 3
[25] M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncer- tain input. In Advances in neural information processing sys- tems, pages 1682â1690, 2014. 3
[26] J. Millman, C. H. Bishop, and R. Ebel. An analysis of test-wiseness. Educational and Psychological Measurement, 25(3):707â726, 1965. 2
[27] J. J. Mondak and B. C. Davis. Asked and answered: Knowl- edge levels when we wonât take âdonât knowâ for an answer. Political Behavior, 23(3):199â224, 2001. 2
[28] N. Mostafazadeh, I. Misra, J. Devlin, M. Mitchell, X. He, and L. Vanderwende. Generating natural questions about an image. arXiv preprint arXiv:1603.06059, 2016. 3
[29] D. H. Park, L. A. Hendricks, Z. Akata, A. Rohrbach, B. Schiele, T. Darrell, and M. Rohrbach. Multimodal expla- nations: Justifying decisions and pointing to the evidence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
[30] J. Pennington, R. Socher, and C. Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language pro- cessing (EMNLP), pages 1532â1543, 2014. 14
[31] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: To- wards real-time object detection with region proposal net- In Advances in neural information processing sys- works. tems, pages 91â99, 2015. 14, 16, 17
[32] Y. Rubner, C. Tomasi, and L. J. Guibas. The earth moverâs distance as a metric for image retrieval. International journal of computer vision, 40(2):99â121, 2000. 6
[33] A. Suhr, S. Zhou, I. Zhang, H. Bai, and Y. Artzi. A corpus for reasoning about natural language grounded in photographs. arXiv preprint arXiv:1811.00491, 2018. 2
[34] D. Teney, P. Anderson, X. He, and A. van den Hengel. Tips and tricks for visual question answering: Learnings from the 2017 challenge. arXiv preprint arXiv:1708.02711, 2017. 2, 15
Graph- structured representations for visual question answering. arXiv preprint, 2017. 2
[36] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. Yfcc100m: The new data in multimedia research. arXiv preprint arXiv:1503.01817, 2015. 4, 17
[37] A. Torralba and A. A. Efros. Unbiased look at dataset bias. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1521â1528. IEEE, 2011. 2
[38] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola. Stacked In Pro- attention networks for image question answering. ceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 21â29, 2016. 1
[39] P. Zhang, Y. Goyal, D. Summers-Stay, D. Batra, and D. Parikh. Yin and yang: Balancing and answering binary In Proceedings of the IEEE Conference visual questions. on Computer Vision and Pattern Recognition (CVPR), pages 5014â5022, 2016. 1, 2
[40] S. Zhang, L. Qu, S. You, Z. Yang, and J. Zhang. Automatic arXiv preprint generation of grounded visual questions. arXiv:1612.06530, 2016. 3
[41] Y. Zhu, O. Groth, M. Bernstein, and L. Fei-Fei. Visual7W: In Proceedings Grounded question answering in images. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4995â5004, 2016. 1, 2, 3
# 7. Dataset Visualizations
mqueryRel GQA TYPE COMPOSITION (DETAILED) Bi queryAttr BexistRel i chooseAttr BiverifyAttr Blogicor BiverifyRel i queryObject Hi chooseRel Sexist Ss logicAnd Si verifyAttrs TM queryState i chooseObject | twoSame Bi twoDiff Bi verifyGlobal i chooseGlobal common Bi chooseObjRel compare mallDiff Mallsame
40000 120.000}
120.00 100.00
Wo, bushes 53" bags lock heli Man Oadegar 16, et light an At Hee Faves an Coleg Na thes âSOs wall ot, Win aa: plate rar 20 Waleta Bird Rae at ns Seer0 eae Oe Ooee) get ows 0g, fan eae
cota it ST ngs? me Striped IDK BOOS oe ase ~ Bee yee SSyoutig ear edwh ite? i 7 Vl iined Seesie allow Cae âyeloWe racks orang E55, 29 os metals ie =blyasiversâ ânls ar Noe Hk AO ing ar yates) sengreenââ./ ing os aan on asset SOS 7
â zi ee en ee ont the eWay a5 eats a ~S ~ Bee RAO x es ae SAAC Vase = an ree on of sa we eral Ta arpa "it of vatChing on at wWEal ing2 Fane Fs ating a Sern SOl ma 5a? ane o5. 3 bay roteâ sears
Figure 7: Top left: Distribution of GQA questions by ï¬rst four words. The arc length is proportional to the number of questions containing that preï¬x. White areas correspond to marginal contributions. Top right: question type distribution; please refer to the table 2 for details about each type. Middle rows: Occurrences number of the most frequent objects, categories, attributes and relations (excluding left/right). Third row: Word clouds for frequent objects, attributes and relations.
âQuestion Distribution by # words re core [acon ° " 10 5 25 x umber of words umber of word
GQA STRUCTURAL TYPES choose 12% logical 10% compare 3% verify 22%
GQA SEMANTIC TYPES 11% global 1 category relation ead 52% attribute re
. i a %, % gB g § 4% 24% 3 2 a % %%%? aS . Qaâ s > ES & % 8 < âin 25 og, Me Be sf So, of ao â Pasty, 9s . rio} ve ley S 9etabie bake, " meat 900g i src Cl nature fruit aed as environmente- fast food at urban dessert Re ite te e oy ty ce 2 a Part o & Vey. oo ; é 4 ee. her, 3 se 9 Se ay Ko ge & 6 rt 3 N ws er 5 % âey f) â aS ~ oO RS %, Â¥ © > s RS g » & ° & gS i mo} iy >, s go @ Qo 4 & 8 3 3° i Q
Figure 8: Top left: Question length distribution for VQA datasets: we can see that GQA has a diverse range of lengths compared to all other datasets except synthetic CLEVR. Left: GQA Question structural and semantic type distributions. Right: The object class hierarchy we have created as part of the dataset construction process.
Type queryGlobal verifyGlobal chooseGlobal queryAttr verifyAttr verifyAttrs chooseAttr exist existRel logicOr logicAnd queryObject chooseObject queryRel verifyRel chooseRel chooseObjRel compare common twoSame twoDiff allSame allDiff
Open/Binary open binary open open binary binary open binary binary binary binary open open open binary open open binary open verify verify verify verify
Semantic query verify query query verify logical choose verify verify logical logical query choose query verify choose choose compare compare compare compare compare compare Structural global global global attribute attribute attribute attribute object relation object obj/attr category category relation relation relation relation object object object object object object Form select: scene/query: type select: scene/verify type: attr select: scene/choose type: a|b select: obj/. . . /query: type select: obj/. . . /verify type: attr select: obj/. . . /verify t1: a1/verify t2: a2/and select: obj/. . . /choose type: a|b select: obj/. . . /exist select: subj/. . . /relate (rel): obj/exist select: obj1/. . . /exist/select: obj2/. . . /exist/or select: obj1/. . . /exist/select: obj2/. . . /exist/and select: category/. . . /query: name select: category/. . . /choose: a|b select: subj/. . . /relate (rel): obj/query: name select: subj/. . . /verifyRel (rel): obj select: subj/. . . /chooseRel (r1|r2): obj select: subj/. . . /relate (rel): obj/choose: a|b select: obj1/. . . /select: obj2/. . . /compare type Who is taller, the boy or the girl? select: obj1/. . . /select: obj2/. . . /common select: obj1/. . . /select: obj2/. . . /same select: obj1/. . . /select: obj2/. . . /different select: allObjs/same select: allObjs/different
Example How is the weather in the image? Is it cloudy today? Is it sunny or cloudy? What color is the apple? Is the apple red? Is the apple red and shiny? Is the apple green or red? Is there an apple in the picture? Is there an apple on the black table? Do you see either an apple or a banana there? Do you see both green apples and bananas there? What kind of fruit is on the table? What kind of fruit is it, an apple or a banana? What is the small girl wearing? Is she wearing a blue dress? Is the cat to the left or to the right of the ï¬ower? What is the boy eating, an apple or a slice of pizza?
What is common to the shirt and the ï¬ower? Does the shirt and the ï¬ower have the same color? Are the table and the chair made of different materials? Are all the people there the same gender? Are the animals in the image of different types?
Table 2: Functions Catalog for all the GQA question types. For each question we mention its structural and semantic types (refer to table 2 for further details), a functional program template and a typical example of a generated question.
GQA 1. What is the woman to the right of the boat holding? umbrella 2. Are there men to the left of the person that is holding the umbrella? no 3. What color is the umbrella the woman is holding? purple
VQA 1. Why is the person using an umbrella? 2. Is the picture edited? 3. Whatâs the color of the umbrella?
GQA 1. Is that a giraffe or an elephant? giraffe 2. Who is feeding the giraffe behind the man? lady 3. Is there any fence near the animal behind the man? yes 4. On which side of the image is the man? right 5. Is the giraffe is behind the man? yes
VQA 1. What animal is the lady feeding? 2. Is it raining? 3. Is the man wearing sunglasses?
GQA 1. Is the personâs hair brown and long? yes 2. What appliance is to the left of the man? refrigerator 3. Is the man to the left or to the right of a refrigerator? right 4. Who is in front of the appliance on the left? man 5. Is there a necktie in the picture that is not red? yes 6. What is the person in front of the refrigerator wearing? suit 7. What is hanging on the wall? picture 8. Does the vest have different color than the tie? no 9. What is the color of the shirt? white 10. Is the color of the vest different than the shirt? yes
VQA 1. Does this man need a haircut? 2. What color is the guys tie? 3. What is different about the manâs suit that shows this is for a special occasion?
GQA 1. Who wears the gloves? player 2. Are there any horses to the left of the man? no 3. Is the man to the right of the player that wears gloves? no 4. Is there a bag in the picture? no 5. Do the hat and the plate have different colors? yes
VQA 1. What is the man holding? 2. Where are the people playing? 3. Is the player safe? 4. What is the sport being played?
GQA 1. What is the person doing? playing 2. Is the entertainment center at the bottom or at the top? bottom 3. Is the entertainment center wooden and small? yes 4. Are the pants blue? no 5. Do you think the controller is red? no
VQA 1. What colors are the walls? 2. What game is the man playing? 3. Why do they stand to play?
GQA 1. Are there any coats? yes 2. Do you see a red coat in the image? no 3. Is the person that is to the left of the man exiting a truck? no 4. Which place is this? road
VQA 1. Where is the bus driver? 2. Why is the man in front of the bus? 3. What numbers are repeated in the bus number?
GQA 1. What is in front of the green fence? gate 2. Of which color is the gate? silver 3. Where is this? street 4. What color is the fence behind the gate? green 5. Is the fence behind the gate both brown and metallic? no
VQA 1. What are the yellow lines called? 2. Why donât the trees have leaves? 3. Where is the stop sign?
Figure 9: Examples of questions from GQA and VQA, for the same images. As the examples demonstrate, GQA questions tend to involve more elements from the image compared to VQA questions, and are longer and more compositional as well. Conversely, VQA questions tend to be a bit more ambiguous and subjective, at times with no clear and conclusive answer. Finally, we can see that GQA provides more questions for each image and thus covers it more thoroughly than VQA.
# 8. Dataset Balancing
| i ae ro . 4 âage SPOS SP EO Agee a âe me
go3 93999 5 5 § he m Sopeasie*, âvoce SORES a See oe ipngeore. se 6,
| i ae ro . 4 âage SPOS SP EO Agee a âe me Wi a ao SS rages Sagiige SERS ot a mae eR SON Ss Sere SAN, âays a âey âes go3 93999 5 5 100% he m Sopeasie*, âvoce SORES a See oe ipngeore. se 6,
Wi ao SS rages Sagiige SERS ot a mae eR SON Ss Sere SAN, âays a âey âes
a 100%
Figure 10: Impact of the dataset balancing on the conditional answer distribution: The left side shows the distribution before any balancing. We show the top 10 answers for a selection of question groups, where the column height corresponds to the relative frequency of each answer. The top row shows global question groups such as color questions, questions about animals, etc. while the bottom row shows local ones e.g. apple-color, table-material etc (section 3.3, main paper). Indeed, we can see that the distributions are heavily biased. The right side shows the distributions after balancing, more uniform and with heavier tails, while intentionally retaining the original real-world tendencies up to a tunable degree.
As discussed in section 3.4 (main paper), given the original 22M auto-generated questions, we have performed answer- distribution balancing, similarities reduction and type-based sampling, reducing its size to a 1.7M balanaced dataset. The balancing is performed in an iterative manner: as explained in section 3.3, for each question group (e.g. color ques- tions), we iterate over the answer distribution, from the most to least frequent answers: (a;,c;) when a; is the answer and c; is its count. In each iteration 7, we downsample the head distribution (aj, 7 < 7) such that the ratio between the head and its complementary tail ee will be bounded by si CF b. While doing so, we also make sure to set minimum and maximum bounds on the frequency ratio â of each pair of consequent answers a;,a;41. The results âof this process is shown in figure 10. Indeed we can see how the distribu-
tion is âpushedâ away from the head and spreads over the tail, while intentionally maintaining the original real-world tendencies presented in the data, to retain its authenticity.
# 9. Baselines Implementation Details
In section 4.2 (main paper), we perform experiments over multiple baselines and state-of-the-art models. All CNN models use spatial features pre-trained on ImageNet [9], whereas state-of-the-art approaches such as bottomUp [5] and MAC [12] are based on object-based features produced by faster R-CNN detector [31]. All models use GloVe word embeddings of dimension 300 [30]. To allow a fair compar- ison, all the models use the same LSTM, CNN and classiï¬er components, and so the only difference between the models stem from their core architectural design.
Accuracy / # words o8 3 os g 5 8 o4 < 0.3 3 8 13 18 23 Number of Question Words
Accuracy / training set size 0.55 05 Fa ic S & 04s o4 10 100 1000 10000 Training Set Size (logarithmic scale)
Accuracy / # cells 0.55 0.54 B 053 fd Sos g° 0.51 0.5 1 2 3 4 5 Number of Cells
Accuracy / # words Accuracy / # operations Accuracy / training set size Accuracy / # cells 0.55 0.55 0.8 o8 0.54 05 3 os 3°! Fa B 053 s ic fd 5 06 S Sos o4 8 & 04s g° <os 0.51 0.3 04 o4 0.5 3 8 13 18 23 1 2 3 4 5 10 100 1000 10000 1 2 3 4 Number of Question Words Number of Semantic Operations Training Set Size (logarithmic scale) Number of Cells
Accuracy / # operations 0.8 3°! s 5 06 8 <os 04 1 2 3 4 5 Number of Semantic Operations
Figure 11: From left to right: (1) Accuracy as a function of textual question length â the number of words in the question. (2) Accuracy as a function of semantic question length â the number of operations in its functional program. (3) Performance as a function of the subset size used for training, ranging from 10K to 10M. (4) Accuracy for different lengths of MAC networks, suggesting that indeed GQA questions are compositional.
We have used a sigmoid-based classiï¬er and trained all models using Adam [19] for 15 epochs, each takes about an hour to complete. For MAC [12], we use the ofï¬cial au- thored code available online, with 4 cells. For BottomUp [5], since the ofï¬cial implementation is unfortunately not publicly available, we re-implemented the model, carefully following details presented in [5, 34]. To ensure the correct- ness of our implementation, we have tested the model on the standard VQA dataset, achieving 67%, which matches the original scores reported by Anderson et al. [5].
# 10. Further Diagnosis
INPUT REPRESENTATION 1 0.9 * oa 03 | | 0.2 spatial object scene scene __ functional features features graph programs (CNN) (bottom-up) (perfect sight)
Following section 4.2 (main paper), and in order to get more insight into modelsâ behaviors and tendencies, we per- form further analysis of the top-scoring model for the GQA dataset, MAC [12]. The MAC network is a recurrent atten- tion network that reasons in multiple concurrent steps over both the question and the image, and is thus geared towards compositional reasoning as well as rich scenes with several regions of relevance.
Figure 12: Performance as a function of the input representation. We encode the scenes through three different methods: spatial fea- tures produced by a standard pretrained CNN, object-based fea- tures generated by a faster R-CNN detector, and direct embedding of the scene graph semantic representation, equivalent to having perfect sight. We further experiment with both textual questions as well as their counterpart functional programs as input. We can see that the more semantically-imbued the representations get, the higher the accuracy obtained.
We assess the model along multiple axes of variation, including question length, both textually, i.e. number of words, and semantically, i.e. number of reasoning opera- tions required to answer it, where an operation can be e.g. following a relation from one object to another, attribute identiï¬cation, or a logical operation such as or, and or not. We provide additional results for different network lengths (namely, cells number) and varying training-set sizes, all can be found in ï¬gure 11.
Interestingly, question textual length correlates posi- tively with the model accuracy. It may be the case that longer questions reveal more cues or information that the model can exploit, potentially sidestepping direct reasoning about the image. However, question semantic length has the opposite impact as expected: 1-step questions are par- ticularly easy for models than the compositional ones which involve more steps.
GQA SEMANTIC STEPS 4+ 1 8% 6%
Figure 13: Distribution of GQA questions semantic length (num- ber of computation steps to arrive at the answer). We can see that most questions require about 2-3 reasoning steps, where each step may involve tracking a relation between objects, an attribute iden- tiï¬cation or a logical operation.
@existOrFalse @aueryNotov},,@chooseAttr Q@verifyRelFaise @PexistNotOrFalse @existFalse @éveryAtirOdj, @éhooseNot Q@existattrorFaise @existNotFalse @existRelFalse ~ @existAndFalse @existAttrTrue â@aqueryObj @existAttrFalse @existAttrorTrue @existNotTrue @existAndtrue @existRelTrue Q@verifyAttrstrue â_ @verifyAttrAndTrue â . ifyAttrTr @existNotOrTrue @existirue @verifyRelTrue @ieriyatTiue jueryAttr @existOrTrue @tiooseRel⢠@aqueryRe! Query @verifyGlobaltiue QveritvattFatse @alSimilarFalse @allSimilarTrue _ @twoSimilarFals@compare @verityattrandFaise @verifyGlobalFalse @queryGlobal @alDiffirue @allDitfFalse @wodifffrue â@common @chooseattr @chooseGlobal @wosimilarTrue @choosenot @woDitiFalse
Figure 14: Entailment relations between different question types. In section 3.3 (main paper) we discuss the entailment and equivalences between questions. Since every question in the dataset has a matching logical representation of the sequence of reasoning steps, we can formally compute all the entailment and equivalence relations between different questions. Indeed, a cogent and reasonable learner should be consistent between its own answers, e.g. should not answer âredâ to a question about the color of an object it has just identiï¬ed as blue. Some more subtle relations also occur, such as those involving relations, e.g. if X is above Y, than Y is below X, and X is not below Y, etc. ï¬gure 14 shows all the logical relations between the various question types. Refer to table 2 for a complete catalog of the different types. Experiments show that while people excel at consistency, achieving the impressive 98.4%, deep learning models perform much worse in this task, with 69% - 82%. These results cast a doubt about the reliability of existing models and their true visual understanding skills. We therefore believe that improving their skills towards enhanced consistency and cogency is an important direction, which we hope our dataset will encourage.
We can further see that longer MAC networks with more cells are more competent in performing the GQA task, sub- stantiating its increased compositionality. Other experi- ments show that increasing the training set size has signiï¬- cant impact on the modelâs performance, as found out also by Kaï¬e et al. [18]. Apparently, the training set size has not reached saturation yet and so models may beneï¬t from even larger datasets. Finally, we have measured the impact of different input representations on the performance. We encode the visual scene with three different methods, ranging from standard pretrained CNN-based spatial features, to object-informed features obtained through faster R-CNNs detectors [31], up to even a âperfect sightâ model that has access to the precise semantic scene graph through direct node and edge embed- dings. As ï¬gure 11 shows, the more high-level and semantic the representation is, the better are the results. On the question side, we explore both training on the stan- dard textual questions as well as the semantic functional programs. MAC achieves 53.8% accuracy and 81.59% con- sistency on the textual questions and 59.7% and 85.85%
on the programs, demonstrating the usefulness and further challenge embodied in the former. It is also more consis- tent Indeed, the programs consist of only a small operations vocabulary, whereas the questions use both synonyms and hundreds of possible structures, incorporating probabilistic rules to make them more natural and diverse. In particu- lar, GQA questions have sundry subtle and challenging lin- guistic phenomena such as long-range dependencies, absent from the canonical programs. The textual questions thus provide us with the opportunity to engage with real, inter- esting and signiï¬cant aspects of natural language, and con- sequently foster the development of models with enhanced language comprehension skills.
# 11. Comparison between GQA and VQA 2.0
We proceed by performing a comparison with the VQA 2.0 dataset [11], the ï¬ndings of which are summarized in table 3. Apart from the higher average question length, we can see that GQA consequently contains more verbs and prepo- sitions than VQA (as well as more nouns and adjectives)
Aspect Question length Verbs Nouns Adjectives Prepositions Relation questions Spatial questions Logical questions Comparative questions Compositional questions VQA 6.2 + 1.9 1.4 + 0.6 1.9 + 0.9 0.6 + 0.6 0.5 + 0.6 19.5% 8% 6% 1% 3% GQA 7.9 + 3.1 1.6 + 0.7 2.5 + 1.0 0.7 + 0.7 1 + 1 51.6% 22.4% 19% 3% 52%
Table 3: A head-to-head comparison between GQA and VQA 2.0. The GQA questions are longer on average, and consequently have more verbs, nouns, adjectives and prepositions than VQA, alluding to their increased compositionality. In addition, GQA demands increased reasoning (spatial, logical, relational and comparative) and includes signiï¬cantly more compositional questions.
providing further evidence for its increased composition- ality. Semantically, we can see that the GQA questions are signiï¬cantly more compositional than VQAâs, and involve variety of reasoning skills in much higher frequency (spa- tial, logical, relational and comparative).
Some VQA question types are not covered by GQA, such as intention (why) questions or ones involving OCR or external knowledge. The GQA dataset focuses on fac- tual questions and multi-hop reasoning in particular, rather than covering all types. Comparing to VQA, GQA ques- tions are objective, unambiguous, more compositional and can be answered from the images only, potentially making this benchmark more controlled and convenient for making research progress on.
# 12. Scene Graph Normalization
Our starting point in creating the GQA dataset is the Vi- sual Genome Scene Graph annotations [20] that cover 113k images from COCO [23] and Flickr [36].7 The scene graph serves as a formalized representation of the image: each node denotes an object, a visual entity within the image, like a person, an apple, grass or clouds. It is linked to a bounding box specifying its position and size, and is marked up with about 1-3 attributes, properties of the object: e.g., its color, shape, material or activity. The objects are con- nected by relation edges, representing actions (verbs), spa- tial relations (prepositions), and comparatives.
The scene graphs are annotated with free-form natural language. Our ï¬rst goal is thus to convert the annotations into a clear and unambiguous semantic ontology. We be- gin by cleaning up the graphâs vocabulary, removing stop words, ï¬xing typos, consolidating synonyms and ï¬ltering rare or amorphous concepts. We then classify the vocabu- lary into predeï¬ned categories (e.g., animals and fruits for
7We extend the original Visual Genome dataset with 5k new hidden scene graphs collected through crowdsourcing.
Question Distribution by # words ° ES =VOA -v7w =CLEVR-Humans oso Rw -CLEVR Frequency ° ° 0 5 10 15 20 25 30 Number of words
Figure 15: Question length distribution for Visual Question An- swering datasets: we can see that GQA questions have a wide range of lengths and are longer on average than all other datasets except the synthetic CLEVR. Note that the long CLEVR questions tend to sound unnatural at times.
objects; colors and materials for attributes), using word em- bedding distances to get preliminary annotations, which are then followed by manual curation. This results in a class hi- erarchy over the scene graphâs vocabulary, which we further augment with various semantic and linguistic features like part of speech, voice, plurality and synonyms â information that will be used to create grammatically correct questions in further steps. Our ï¬nal ontology contains 1740 objects, 620 attributes and 330 relations, grouped into a hierarchy that consists of 60 different categories and subcategories. Visualization of the ontology can be found in ï¬gure 8.
At the next step, we prune graph edges that sound unnat- ural or are otherwise inadequate to be incoporated within the questions to be generated, such as (woman, in, shirt), (tail, attached to, giraffe), or (hand, hugging, bear). We ï¬lter these triplets using a combination of category-based rules, n-gram frquencies [1], dataset co-occurrence statis- tics, and manual curation.
In order to generate correct and unambiguous questions, some cases require us to validate the uniqueness or absence of an object. Visual Genome, while meant to be as exhaus- tive as possible, cannot guarantee full coverage (as it may be practically infeasible). Hence, in those cases we use object detectors [31], trained on visual genome with a low detec- tion threshold, to conservatively conï¬rm the object absence or uniqueness.
Next, we augment the graph with absolute and relative positional information: objects appearing within the image margins, are annotated accordingly. Object pairs for which we can safely determine positional relations (e.g., one is to the left of the other), are annotated as well. We also an- notate object pairs if they share the same color, material or shape. Finally, we enrich the graph with global information about the image location or weather, if these can be directly inferred from the objects it contains.
By the end of this stage, the resulting scene graphs have clean, uniï¬ed, rich and unambiguous semantics for both the nodes and the edges.
Image Annotation In this HIT, you are going to annotate objects in 4 images, list the properties of each object, and the relations between them. You will do it in multiple steps: 1. Draw a box around each object in the image on the left, and type its name. Please mark as many objects in the image as possible (usually ~12-20 objects), and make sure to create a tight box around each object, the smallest to cover it. Objects can be people and animals, foods and drinks, clothing items, furniture, appliances, vehicles, buildings, places etc. 2. Write up to tive properties for each object on the right side. Properties can be any adjectives, colors, materials, size, length, shape, activity (running, sleeping), etc. 3. After clicking next, write relations between pairs of objects in the image, for instance âgirl, eating, cake". Each relation (eating) goes from a source object (gir), to a target object (cake). Relations can be verbs (holding, chasing) spatial relations or prepositions (on top of, around, behind). Please try to write as many relations as possible (usually 10-15 relations), but it's ok if you can't find enough. âIf there's a group of close same-type objects you should mark them together (e.g. âfriesâ). *In case the word you type is not in our vocabulary, a few similar alternatives will be presented to select from. Bonuses for careful work! (bad work may be rejected) Here is an example of a few annotated objects, along with their properties: Object Properties 1. restaurant modern clean 2. milkshake pink bright sweet eÂ¥ girl young blond happy sitting 4. shirt red cloth long sleeved 5. tray red plastic rectangular 6. fries yellow cooked thin 7. \cup plastic large transparent full 8. hamburger tasty Other objects that have to be annotated are: window, table, napkin, salad and bowl. After annotating all the objects, you will have to write relations between them: 1 holng homburger | 2. âon top of tray 3. wearing 4. contain Bonuses will be given for good work, with many objects, } Next] Replace Image a Previous Next . properties and relations! (but bad work may be rejected).
Image Question Answering In this HIT, you are going to answer questions about pictures! We will show you 4 pictures and 5-10 questions about each of them (the same picture may appear twice). « For each question, start by typing your answer in the text box right to it. If you don't know the answer, please type â/ don't know". « Incase your answer is not one of the possible answers in our system, a few relevant alternatives will be shown to choose from. Please select the one that sounds the most correct to you among them. If you believe none of the choices is right please select "None of the above". « The answers are usually short, about 1-2 words. PS. You'll receive bonus for each question you answer correctly. So try to do your best! :) Good Luck! 1. Is there any milk in the bowl left of the apple? 2. Is the bowl right of the green apple? 3. Are there red apples in this picture? 4. Which color do you think is the apple? 5. What type of fruit in the image is round? app apple apples 6. What color is the fruit on the right side, re 7. On which side of the photo is the apple, tl Danan 8. Is there a spoon right of the food in the ce pear 9. Which color do you think is the apple? prone ct te tote) 10. Are there red apples in this picture? 1/4 Previous Next
Figure 16: The interfaces used for human experiments on Amazon Mechanical Turk. Top: Each HIT displays several images and asks turkers to list objects and annotate their corresponding bounding boxes. In addition, the turkers are requested to specify attributes and relations between the objects. An option to switch between images is also given to allow the turkers to choose rich enough images to work on. Bottom: Each HIT displays multiple questions and requires the turkers to respond. Since there is a closed set of possible answers (from a vocabulary with Approximately 1878 tokens), and in order to allow a fair comparison between human and modelsâ performance, we give turkers the option to respond in unconstrained free-form language, but also suggest them multiple answers from our vocabulary that are the most similar to theirs (using word embedding distances). However, turkers are not limited to choose from the suggestions in case they believe none of the proposed answers is correct. | {
"id": "1708.02711"
} |
1902.08295 | Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling | Lingvo is a Tensorflow framework offering a complete solution for
collaborative deep learning research, with a particular focus towards
sequence-to-sequence models. Lingvo models are composed of modular building
blocks that are flexible and easily extensible, and experiment configurations
are centralized and highly customizable. Distributed training and quantized
inference are supported directly within the framework, and it contains existing
implementations of a large number of utilities, helper functions, and the
newest research ideas. Lingvo has been used in collaboration by dozens of
researchers in more than 20 papers over the last two years. This document
outlines the underlying design of Lingvo and serves as an introduction to the
various pieces of the framework, while also offering examples of advanced
features that showcase the capabilities of the framework. | http://arxiv.org/pdf/1902.08295 | Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, Yanzhang He, Jan Chorowski, Smit Hinsu, Stella Laurenzo, James Qin, Orhan Firat, Wolfgang Macherey, Suyog Gupta, Ankur Bapna, Shuyuan Zhang, Ruoming Pang, Ron J. Weiss, Rohit Prabhavalkar, Qiao Liang, Benoit Jacob, Bowen Liang, HyoukJoong Lee, Ciprian Chelba, Sébastien Jean, Bo Li, Melvin Johnson, Rohan Anil, Rajat Tibrewal, Xiaobing Liu, Akiko Eriguchi, Navdeep Jaitly, Naveen Ari, Colin Cherry, Parisa Haghani, Otavio Good, Youlong Cheng, Raziel Alvarez, Isaac Caswell, Wei-Ning Hsu, Zongheng Yang, Kuan-Chieh Wang, Ekaterina Gonina, Katrin Tomanek, Ben Vanik, Zelin Wu, Llion Jones, Mike Schuster, Yanping Huang, Dehao Chen, Kazuki Irie, George Foster, John Richardson, Klaus Macherey, Antoine Bruguier, Heiga Zen, Colin Raffel, Shankar Kumar, Kanishka Rao, David Rybach, Matthew Murray, Vijayaditya Peddinti, Maxim Krikun, Michiel A. U. Bacchiani, Thomas B. Jablin, Rob Suderman, Ian Williams, Benjamin Lee, Deepti Bhatia, Justin Carlson, Semih Yavuz, Yu Zhang, Ian McGraw, Max Galkin, Qi Ge, Golan Pundak, Chad Whipkey, Todd Wang, Uri Alon, Dmitry Lepikhin, Ye Tian, Sara Sabour, William Chan, Shubham Toshniwal, Baohua Liao, Michael Nirschl, Pat Rondon | cs.LG, stat.ML | null | null | cs.LG | 20190221 | 20190221 | 9 1 0 2
b e F 1 2 ] G L . s c [
1 v 5 9 2 8 0 . 2 0 9 1 : v i X r a
Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling
https://github.com/tensorflow/lingvo
Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, Yanzhang He, Jan Chorowski, Smit Hinsu, Stella Laurenzo, James Qin, Orhan Firat, Wolfgang Macherey, Suyog Gupta, Ankur Bapna, Shuyuan Zhang, Ruoming Pang, Ron J. Weiss, Rohit Prabhavalkar, Qiao Liang, Benoit Jacob, Bowen Liang, HyoukJoong Lee, Ciprian Chelba, Sébastien Jean, Bo Li, Melvin Johnson, Rohan Anil, Rajat Tibrewal, Xiaobing Liu, Akiko Eriguchi, Navdeep Jaitly, Naveen Ari, Colin Cherry, Parisa Haghani, Otavio Good, Youlong Cheng, Raziel Alvarez, Isaac Caswell, Wei-Ning Hsu, Zongheng Yang, Kuan-Chieh Wang, Ekaterina Gonina, Katrin Tomanek, Ben Vanik, Zelin Wu, Llion Jones, Mike Schuster, Yanping Huang, Dehao Chen, Kazuki Irie, George Foster, John Richardson, Klaus Macherey, Antoine Bruguier, Heiga Zen, Colin Raï¬el, Shankar Kumar, Kanishka Rao, David Rybach, Matthew Murray, Vijayaditya Peddinti, Maxim Krikun, Michiel A. U. Bacchiani, Thomas B. Jablin, Rob Suderman, Ian Williams, Benjamin Lee, Deepti Bhatia, Justin Carlson, Semih Yavuz, Yu Zhang, Ian McGraw, Max Galkin, Qi Ge, Golan Pundak, Chad Whipkey, Todd Wang, Uri Alon, Dmitry Lepikhin, Ye Tian, Sara Sabour, William Chan, Shubham Toshniwal, Baohua Liao, Michael Nirschl, and Pat Rondon
October 2021
â Special thanks to Alexander Grushetsky and Adam Sadovsky for the initial design of the Params class.
# Abstract
Lingvo is a Tensorï¬ow framework oï¬ering a complete solution for collaborative deep learning research, with a particular focus towards sequence-to-sequence models. Lingvo models are composed of modular building blocks that are ï¬exible and easily extensible, and experiment conï¬gurations are centralized and highly customizable. Distributed training and quantized inference are supported directly within the framework, and it contains existing implementations of a large number of utilities, helper functions, and the newest research ideas. Lingvo has been used in collaboration by dozens of researchers in more than 20 papers over the last two years. This document outlines the underlying design of Lingvo and serves as an introduction to the various pieces of the framework, while also oï¬ering examples of advanced features that showcase the capabilities of the framework.
# Contents
2.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.1 Distributed Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Synchronous Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Inference and Quantization . . . . . . . . . . . . . . . . . . . . . 5.1.1 Common Runners 5.1.2 Asynchronous Training 5.1.3 5.2 Multi-task Models 5.3 2 2 2 4 5 5 5 6 6 7 7 7 8 9 10 10 10 11 11 11 11
1
# Introduction
This paper presents the open-source Lingvo framework developed by Google for sequence modeling with deep neural networks. To date, this framework has produced a number of state-of-the-art results in machine translation [2, 21, 23], speech recognition [3, 4, 5, 10, 11, 12, 13, 14, 16, 15, 20, 22], speech synthesis [6, 8, 19], and speech translation [7, 9]. It is currently being used by dozens of researchers in their day-to-day work.
We begin by motivating the design of the framework in Section 2, including the development environment it was built for as well as its guiding principles. That is followed by an exposition of its core components and the role possessed by each of them.
Then, in Section 3, we take a deeper dive into how fundamental concepts are implemented and what that means for users of the framework. This covers topics such as how trainable variables are managed and how hyperparameters are conï¬gured, as well as the basic APIs involved in composing layers into a model. While there will be some code snippets, those seeking complete examples with code should refer to the codelab [1].
Section 4 provides a consolidated walk-through of the ï¬ow of logic during a training run. It outlines the pieces involved from how the model is constructed to how its parameters are updated.
# Finally, advanced usage such as distributed training, multi-task models, and
inference are described in Section 5.
# 2 Design
# 2.1 Motivation
In research, it is critical to be able to quickly prototype and iterate on new ideas. But, when working in a collaborative environment, it is also critical to be able to easily reuse code and document past experiments.
Lingvo evolved out of the need to support a large group of applied researchers working on speech and natural language problems in a single shared codebase. It follows these guiding principles:
Individual pieces should be small and modular, implement the same con-
sistent interface, and be easily extensible;
Experiments should be shared, comparable, reproducible, understandable,
and correct;
Performance should eï¬ciently scale to production-scale datasets and dis-
tributed training over hundreds of accelerators;
Code should be shared as much as possible when transitioning from research
to production.
2
Modular building blocks. Lingvo is designed for collaboration, focusing on code with a consistent interface and style that is easy to read and understand, and a ï¬exible modular layering system that promotes code reuse. The same building blocks, such as LSTM or attention layers, can be used as-is across diï¬erent models with assurance of good quality and performance. Because the blocks are general, an algorithmic improvement in one task (such as the use of multi-head attention in Machine Translation) can be immediately applied to another task (e.g. Speech Recognition). With many people using the same codebase, this makes it extremely easy to employ ideas others are trying in your own models. This also makes it simple to adapt existing models to new datasets. The building blocks are each individual classes, making it straightforward to extend and override their implementation. Layers are composed in a hierarchical manner, separating low-level implementation details from high-level control ï¬ow.
Shared, comparable, reproducible, understandable, and correct ex- periments. A big problem in research is the diï¬culty in reproducing and comparing results, even between people working in the same team. To better document experiments and allow the same experiment to be re-run in the future, Lingvo adopts a system where all the hyperparameters of a model are conï¬gured in their own dedicated sub-directory separate from the model logic and are meant to be committed to a shared version control system. As the models are built from the same common layers, this allows our models to be compared with each other without worrying about eï¬ects from minute diï¬erences in implementation. All models follow the same overall structure from input processing to loss computation, and all the layers have the same interface. In addition, all the hyperparameters are explicitly declared and their values are logged at runtime. Finally, there are copious amounts of assertions about tensor values and shapes as well as documentation and unit tests. This makes it very easy to read and understand new models when familiar with the framework, and to debug and ensure that the models are correct.
Performance. Lingvo is used to train on production-scale datasets. As a matter of necessity, its implementation has been optimized, from input processing to the individual layers. Support for synchronous and asynchronous distributed training is provided.
Ideally, there should be little porting from research Deployment-readiness. to product deployment. In Lingvo, inference-speciï¬c graphs are built from the same shared code used for training, and individual classes can be overwritten with device-speciï¬c implementations while the high level model architecture remains the same. In addition, quantization support is built directly into the framework.
However, these beneï¬ts come at the cost of more discipline and boilerplate,
a common trade-oï¬ between scalability and fast prototyping.
3
# 2.2 Components
The following are the core components of the Lingvo framework.
Models: A Model is an abstract collection of one or more Tasks. For single- task models the Model is just a transparent wrapper around the Task and the two can be considered the same. For multi-task models, the Model controls how variables are shared between Tasks and how Tasks are sampled for training.
Tasks: A Task is a speciï¬cation of a complete optimization problem, such as image classiï¬cation or speech recognition. It contains an input generator, Layers representing a neural network, a loss value, and an optimizer, and is in charge of updating the model parameters on each training step.
Layers: A Layer represents an arbitrary function possibly with trainable parameters. A Layer can contain other Layers as children. SoftMax, LSTM, Attention, and even a Task are all examples of Layers.
Input Generators: Lingvo input generators are specialized for sequences, allowing batching input of diï¬erent lengths in multiple buckets and automatically padding them to the same length. Large datasets that span multiple input ï¬les are also supported. The ï¬exibility of the generic_input function enables simple and eï¬cient implementations of custom input processors.
Params: The Params object contains hyperparameters for the model. They can be viewed as local versions of tf . flags. Layers, Tasks, and Models are all constructed in accordance to the speciï¬cations in their Params.
Params are hierarchical, meaning that the Params for an object can contain
Params conï¬guring child objects.
Experiment Conï¬gurations: Each experiment is deï¬ned in its own class and fully deï¬nes all aspects of the experiment from hyperparameters like learning rate and optimizer parameters to options that aï¬ect the model graph structure to input datasets and other miscellaneous options.
These standalone conï¬guration classes make it easy to keep track of the params used for each experiment and to reproduce past experiments. It also allows conï¬gurations to inherit from other conï¬gurations.
All experiment params are registered in a central registry, and can be refer-
enced by its name, e.g. image . mnist . LeNet5.
Job Runners: Lingvoâs training setup is broken into separate jobs. For example, the Controller job is in charge of writing checkpoints while the Evaler job evaluates the model on the latest checkpoint. For a full description of the diï¬erent job runners see Section 5.1.
4
NestedMap: A NestedMap is a generic dictionary structure for arbitrary structured data similar to tf . contrib . framework . nest. It is used throughout Lingvo to pass data around. Most python objects in the code are instances of either Tensor, a subclass of BaseLayer, or NestedMap.
Custom Ops: Lingvo supports custom op kernels written in C ++ for high- performance code. For example, custom ops are used for the input pipeline, beam search, and tokenization.
# Implementation
This section provides a more detailed look into the core Lingvo APIs. Section 3.1 introduces the Params class which is used to conï¬gure everything. Section 3.2 covers how Layers are constructed, how they work, and how they can be composed. Section 3.3 describes how variables are created and managed by each Layer. Section 3.4 goes over input reading and processing, and Sections 3.5, 3.6, and 3.7 brieï¬y go over model registration, overriding params, and runtime assertions. Finally, Section 3.8 gives a simple overview of the layout of the source code.
# 3.1 Params
The Params class is a dictionary with explicitly deï¬ned keys used for conï¬gura- tion. Keys should be deï¬ned when the object is created, and trying to access or modify a nonexistent key will raise an exception. In practice, every Layer has a Params classmethod, which creates a new params object and deï¬nes the keys used to conï¬gure the layer with a reasonable default value. Then, in a separate experiment conï¬guration class, these default values are overridden with experiment-speciï¬c values.
# 3.2 Layers
In order to construct a Layer, an instance of these layerâs Params is required. The params includes details such as:
cls: the layerâs class,
name: the layerâs name, and
params_init: how the variables created by this layer should be initialized.
Because the class is contained in the params, the following ways of constructing the layer are equivalent:
p = SomeLayerClass . Params () layer = SomeLayerClass ( p ) layer = p . cls ( p ) # Call the constructor . # Same , but call through the params .
5
All layers have a FProp () function, which is called during the forward step of a computation. Child layers can be created in the constructor using self . CreateChild ( ' child_name ' , child_params ), and they can be referenced by self . child_name.
# 3.3 Variable Management
Each Layer creates and manages its own variables.
Variables are created in the layerâs __init__ () method through a call to self . CreateVariable (), which registers the variable in self . vars and the value of the variable (potentially after a transform like adding variational noise) in self . theta. In FProp (), because it may be executed on diï¬erent devices in distributed training, for performance reasons it is best to access the variables through the theta parameter passed in to the function rather than self . vars or self . theta.
Variable placement is determined by the cluster . GetPlacer () function. The default policy is to place each variable on the parameter server that has the least bytes allocated. For model parallelism, an explicit policy based on e.g. variable scope can be adopted.
There are many beneï¬ts to explicitly managing variables instead of using
tf . get_variable:
It supports research ideas such as weight noise.
The variable_scope construct can be error prone and less readable, for
example accidental reuse of a variable.
For sync replica training, sharing the weights between multiple workers on
the same machine is otherwise awkward.
# Input Processing
Lingvo supports inputs in either plain text or TFRecord format. Sequence inputs can be bucketed by length through the b uc k et _ up p er _ bo u nd and b uc k et _ ba t ch _ li m it params.
A tokenizer can be speciï¬ed for text inputs. Available tokenizers include V oc a bF i le T ok e ni z er which uses a look-up table provided as a ï¬le, BpeTokenizer for byte pair encoding [18], and WpmTokenizer for word-piece models [17].
The input ï¬le pattern should be speciï¬ed as â type : glob_patternâ through
the file_pattern param. The input processor should implement the _ D a t a S o u r c e F r o m F i l e P a t t e r n () method, which returns an op that when exe- cuted reads from the ï¬le and returns some tensors. Often this op is implemented as a custom C ++ op using the RecordProcessor interface. The tensors returned by this op can be retrieved by calling _BuildDataSource (), and can be used to ï¬ll in an input batch NestedMap to be returned by the InputBatch () method. Finally, batch-level preprocessing can also be implemented in P r e p r o c e s s I n p u t B a t c h (). In addition to using a custom RecordProcessor op, an input processor can
also be deï¬ned directly in Python through the generic_input op.
6
# 3.5 Model Registration
Conï¬guration classes lie inside lingvo/tasks/<task>/params/<param>.py and are annotated with @model_r egistry . R e g i s t e r S i n g l e T a s k M o d e l for the typical case of a single-task model. This annotation adds the class to the model registry with a key of < task >. < param >. < classname > (e.g. image . mnist . LeNet5).
The class should be a subclass of S i n g l e T a s k M o d e l P a r a m s and implement the Task () method, which returns a Params instance conï¬guring a Task. The registration code will automatically wrap the Task into a SingleTaskModel.
The class should also implement the Train (), Test (), and maybe Dev () methods. These methods return a Params instance conï¬guring an input genera- tor, and represent diï¬erent datasets. An example is shown in Figure 1.
@model_registry . R e g i s t e r S i n g l e T a s k M o d e l class MyTaskParams ( bas e_m ode l_p ara ms . S i n g l e T a s k M o d e l P a r a m s ): @classmethod def Train ( cls ): ... # Input params . @classmethod def Task ( cls ): p = my_model . MyNetwork . Params () p . name = ' my_task ' ... return p
Figure 1: Registering a single-task model.
# 3.6 Overriding Params from the Command Line
It is possible to override the values of any hyperparameter for a speciï¬c run using the -- m o d e l _ p a r a m s _ o v e r r i d e or -- m o d e l _ p a r a m s _ f i l e _ o v e r r i d e ï¬ags. This makes it simple to start similar jobs for hyperparameter tuning.
# 3.7 Assertions
py_utils.py contains functions for run-time assertions about values and shapes as well as CheckNumerics () for detecting NaNs. Assertions can be disabled with the command-line -- enable_asserts = false. Similarly, CheckNumerics can be disabled with -- e n a b l e _ c h e c k _ n u m e r i c s = false.
7
# 3.8 Code Layout
lingvo
# trainer.py Entry point.
model_imports.py Imports and registers all model params in the global registry.
core base_input_generator.py base_layer.py base_model.py cluster.py Contains the policy for op placement. hyperparams.py attention.py, layers.py, rnn_cell.py, rnn_layers.py Contains implementations for many common layers. optimizer.py py_utils.py Most utility functions are here. recurrent.py The functional RNN. summary_utils.py Contains utilities for dealing with summaries. ops Folder for custom C ++ ops. record_*.* The input processing pipeline. py_x_ops.py Python bindings for the C ++ ops. x_ops.cc C ++ op deï¬nitions. tasks
# <task> Folder for an individual task/domain/project.
# params Folder for model params.
# tools
8
# 4 Life of a Training Run
Model LOOKUP Registry INSTANTIATE Model Class Gisele Input Layer Production Generator Serving _ Inference wâ Layer Graph Trainer 27 UPDATE Optimizer
Figure 2: An overview of the Lingvo framework, outlining how models are instantiated, trained, and exported for evaluation and serving.
This section gives an overview of what happens behind the scenes from the
start of a training run to the end of the ï¬rst step for a single-task model.
Training is started by launching the trainer with the name of a model and the path to a log directory. The model name is resolved using the model registry to obtain the Params for the model, and the various job runners for training are created.
The Params at this point is just the params for the top level Model with the overrides in the experiment conï¬guration corresponding to the model name speciï¬ed. No Layers have been instantiated yet.
Each runner then independently instantiates the model and builds the tensor- ï¬ow graphs and ops that they will execute based on their job. For example, the Trainer will build a train_op spanning both the FProp () and BProp () graphs and involves updating model parameters, while the Evaler and Decoder will build a eval_metrics op involving only the FProp () graph with p . is_eval set. There can be multiple Evalers and Decoders, one for each evaluation dataset. Instantiating the model calls its __init__ () method, which constructs the Params for child layers and instantiates them recursively through calls to self . CreateChild (). These child layer params could be exposed as part of the top level model params, perhaps as a âparams templateâ, allowing them to be conï¬gured in the params ï¬les, or they could be constructed completely from scratch in the __init__ () method based on the information available at that time. The __init__ () method is also in charge of creating the variables managed by the Layer through self . CreateVariable ().
Once the graphs are built, the Trainer runner will wait for the Controller runner to initialize the variables or restore them from a checkpoint, while the evaluation runners will wait for a new checkpoint to be written to the log directory.
9
After the variables are initialized, the Trainer will run training, i.e. calling session . run with the modelâs train_op, in a loop, and the Controller will produce training summaries for each step. When enough steps have passed, the Controller writes a new checkpoint to the log directory. The Evaler and Decoder detect this new checkpoint and evaluates the model at that checkpoint to generate summaries. This process then continues until the user terminates all jobs or p . train . max_steps is reached.
For more details about the various runners such as the diï¬erence between Evalers and Decoders, as well as information about which devices individual ops in the graph will be placed on during distributed training, see Section 5.1.
# 5 Advanced Usage
This section provides some examples of advanced features. This is by no means an exhaustive list of all the existing features, and many new features are continually being added.
Section 5.1 describes the distributed training setup. Section 5.2 details how multi-task models are conï¬gured and registered, and Section 5.3 gives a brief look into inference and productionization support.
# 5.1 Distributed Training
Both synchronous as well as asynchronous distributed training are supported. In asynchronous mode, each individual worker job executes its own training loop and is completely independent from the other workers. In synchronous mode, there is a single training loop driven by a trainer client that distributes work onto the various worker jobs.
Here we summarize the diï¬erent types of job runners under each conï¬guration. A shared directory on a distributed ï¬le system where checkpoints can be written and loaded is assumed to exist.
# 5.1.1 Common Runners
Controller: This job handles initializing variables and saving/loading check- points as well as writing training summaries.
Evaler: This job loads the latest checkpoint and runs and exports evaluation summaries. Multiple evalers can be started for diï¬erent datasets.
Decoder: This job loads the latest checkpoint and runs and exports decoding summaries. Multiple decoders can be started for diï¬erent datasets. Decoders are diï¬erent from evalers in that the ground-truth is used during evaluation but not during decoding. A concrete example is that Evalers can use teacher-forcing while Decoders may need to rely on beam search.
10
# 5.1.2 Asynchronous Training
Trainer: This is the worker job which runs the training op and sends variable updates.
Parameter Server: Variable values are stored here. Trainer jobs send updates and receive global values periodically.
Data Processor: This is an optional job for loading data before dispatching them to trainers, to oï¬oad the cost associated with loading and preprocessing data from the trainer to a separate machine.
# 5.1.3 Synchronous Training
Worker: The worker job in sync training runs the training op like the trainer job in async training but they do not perform variable updates.
Trainer client: The trainer client drives the training loop and aggregates their results before updating the variables. There are no parameter servers in sync training. Instead, the worker jobs act as parameter servers, and the trainer client sends the relevant variable updates to each worker.
# 5.2 Multi-task Models
A multi-task model is composed of individual Tasks sharing variables. Existing options for variable sharing range from sharing just the encoder (multitask_model . S ha r ed E nc o de r Mo d el) to ï¬ne-grained control with multitask_model . R e g E x S h a r e d V a r i a b l e M o d e l.
Multi-task model params should be a subclass of M u l t i T a s k M o d e l P a r a m s and implement the Model () method, which returns a Params instance conï¬g- uring a MultiTaskModel. The task_params and task_probs attributes deï¬ne respectively the params and relative weight of each Task.
An example of registering a multi-task model is shown in Figure 3. Knowledge distillation is also supported via base_model . DistillationTask. For knowledge distillation, the teacher parameters must be loaded from a checkpoint ï¬le by specifying params . train . i n i t _ f r o m _ c h e c k p o i n t _ r u l e s in the Task () deï¬nition.
# Inference and Quantization
Once models have been trained, they must be deployed on a server or on an embedded device. Typically, during inference, models will be executed on a device with ï¬xed-point arithmetic. To achieve the best quality, the dynamic range must be kept in check during training. We oï¬er quantized layers that wrap the training and inference computation functions for convenience.
11
@model_registry . R e g i s t e r M u l t i T a s k M o d e l class M yMu lti Ta skP ara ms ( ba se_ mod el_ par ams . M u l t i T a s k M o d e l P a r a m s ): @classmethod def Train ( cls ): p = super ( MyMultiTaskParams , cls ). Train () t as k 1_ i np u t_ p ar a ms = ... p . Define ( ' task1 ' , task1_input_params , ' ') # Or , refer to existing single task model params . p . Define ( ' task2 ' , MyTaskParams . Train () , ' ') return p @classmethod def Model ( cls ): p1 = my_model . MyNetwork . Params () p1 . name = ' task1 ' ... # Or , refer to existing single task model . p2 = MyTaskParams . Task () p2 . name = ' task2 ' p = base_model . MultiTaskModel . Params () p . name = ' m y_ m ul t it a sk _ mo d el ' p . task_params = hyperparams . Params () p . task_params . Define ( ' task1 ' , p1 , ' ') p . task_params . Define ( ' task2 ' , p2 , ' ') p . task_probs = hyperparams . Params () p . task_probs . Define ( ' task1 ' , 0.5 , ' ') p . task_probs . Define ( ' task2 ' , 0.5 , ' ') return p
Figure 3: Registering a multi-task model.
Inference has diï¬erent computational characteristics than training. For latency reasons, the batch size is smaller, sometimes even equal to just 1. For sequence models, often a beam search is performed. It may even be preferable to drive inference one timestep at a time. Several constraints dominate the choice of how to run the computation: 1) available operations, 2) desired latency, 3) parallelizability, and 4) memory and power consumption. To enable the greatest amount of ï¬exibility given these constraints, we leave it to the designer of the model to express inference in the optimal way by explicitly exporting inference graphs rather than leaving it to a graph converter. A basic inference graph can be written in a few lines of code, reusing the same functions used for building the training graph, while for more complicated inference graphs it is possible to even completely swap out the implementation of a low level layer.
12
# References
[1] Introduction to Lingvo. https://colab.research.google.com/github/ tensorflow/lingvo/blob/master/codelabs/introduction.ipynb.
[2] M. X. Chen, O. Firat, A.Bapna, M. Johnson, W. Macherey, G. Fosterand L. Jones, M. Schuster, N. Shazeer, N. Parmar, A. Vaswani, J. Uszkoreit, L. Kaiser, Z. Chen, Y. Wu, and M. Hughes. The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 76â86. Association for Computational Linguistics, 2018.
[3] C. C. Chiu and C. Raï¬el. Monotonic Chunkwise Attention. Proc. Interna- tional Conference on Learning Representations (ICLR), 2018.
[4] C. C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, E. Gonina, N. Jaitly, B. Li, J. Chorowski, and M. Bacchiani. State-of-the-art Speech Recognition With Sequence- to-Sequence Models. Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.
[5] C. C. Chiu, A. Tripathi, K. Chou, C. Co, N. Jaitly, D. Jaunzeikare, A. Kan- nan, P. Nguyen, H. Sak, A. Sankar, J. Tansuwan, N. Wan, Y. Wu, and X. Zhang. Speech recognition for medical conversations. Proc. Interspeech, 2018.
[6] W. N. Hsu, Y. Zhang, R. J. Weiss, H. Zen, Y. Wu, Y. Wang, Y. Cao, Y. Jia, Z. Chen, J. Shen, et al. Hierarchical generative modeling for controllable speech synthesis. arXiv preprint arXiv:1810.07217, 2018.
[7] Ron J. Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. Sequence-to-sequence models can directly translate foreign speech. Proc. Interspeech, pages 2625â2629, 08 2017.
[8] Y. Jia, Y. Zhang, R. J. Weiss, Q. Wang, J. Shen, F. Ren, Z. Chen, P. Nguyen, R. Pang, I. Lopez-Moreno, and Y. Wu. Transfer Learning from Speaker Veriï¬cation to Multispeaker Text-To-Speech Synthesis. Advances in Neural Information Processing Systems, 2018.
[9] Ye Jia, Melvin Johnson, Wolfgang Macherey, Ron J Weiss, Yuan Cao, Chung- Cheng Chiu, Naveen Ari, Stella Laurenzo, and Yonghui Wu. Leveraging weakly supervised data to improve end-to-end speech-to-text translation. arXiv preprint arXiv:1811.02050, 2018.
[10] A. Kannan, Y. Wu, P. Nguyen, T. N. Sainath, Z. Chen, and R. Prabhavalkar. An analysis of incorporating an external language model into a sequence- to-sequence model. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.
13
[11] D. Lawson, C. C. Chiu, G. Tucker, C. Raï¬el, K. Swersky, and N. Jaitly. Learning hard alignments with variational inference. Proc. IEEE Interna- tional Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.
[12] B. Li, T. N. Sainath, K. Sim, M. Bacchiani, E. Weinstein, P. Nguyen, Z. Chen, Y. Wu, and K. Rao. Multi-Dialect Speech Recognition With a Single Sequence-to-Sequence Model. Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.
[13] R. Pang, T. N. Sainath, R. Prabhavalkar, S. Gupta, Y. Wu, S. Zhang, and C. C. Chiu. Compression of End-to-End Models. In Proc. Interspeech, 2018.
[14] R. Prabhavalkar, T. N. Sainath, Y. Wu, P. Nguyen, Z. Chen, C. C. Chiu, and A. Kannan. Minimum Word Error Rate Training for Attention-based Sequence-to-sequence Models. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.
[15] T. N. Sainath, C. C. Chiu, R. Prabhavalkar, A. Kannan, Y. Wu, P. Nguyen, and Z. Chen Z. Improving the Performance of Online Neural Transducer Models. Proc. Interspeech, 2018.
[16] T. N. Sainath, P. Prabhavalkar, S. Kumar, S. Lee, A. Kannan, D. Rybach, V. Schogol, P. Nguyen, B. Li, Y. Wu, Z. Chen, and C. C. Chiu. No Need for a Lexicon? Evaluating the Value of the Pronunciation Lexica in End-to-End Models. Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.
[17] M. Schuster and K. Nakajima. Japanese and Korean Voice Search. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pages 5149â5152. IEEE, 2012.
[18] R. Sennrich, B. Haddow, and A. Birch. Neural Machine Translation of Rare Words with Subword Units, 2015.
[19] J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. J. Skerry-Ryan, R. A. Saurous, Y. Agiomyrgiannakis, and Y. Wu. Natural TTS Synthesis By Conditioning WaveNet on Mel Spectrogram Predictions. Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.
[20] S. Toshniwal, T. N. Sainath, R. J. Weiss, B. Li, P. Moreno, E. Weinstein, and K. Rao. End-to-End Multilingual Speech Recognition using Encoder- Decoder Models. Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.
[21] R. J. Weiss, J. Chorowski J, N. Jaitly, Y. Wu, and Z. Chen. Sequence-to- Sequence Models Can Directly Translate Foreign Speech. Proc. Interspeech, 2017.
14
[22] I. Williams, A. Kannan, P. Aleksic, D. Rybach, and T. N. Sainath TN. Contextual Speech Recognition in End-to-End Neural Network Systems using Beam Search. Proc. Interspeech, 2018.
[23] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, L. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean. Googleâs Neural Ma- chine Translation system: Bridging the gap between human and machine translation. arXiv preprint, 1609.08144, 2016.
15 | {
"id": "1811.02050"
} |
1902.08153 | Learned Step Size Quantization | Deep networks run with low precision operations at inference time offer power
and space advantages over high precision alternatives, but need to overcome the
challenge of maintaining high accuracy as precision decreases. Here, we present
a method for training such networks, Learned Step Size Quantization, that
achieves the highest accuracy to date on the ImageNet dataset when using
models, from a variety of architectures, with weights and activations quantized
to 2-, 3- or 4-bits of precision, and that can train 3-bit models that reach
full precision baseline accuracy. Our approach builds upon existing methods for
learning weights in quantized networks by improving how the quantizer itself is
configured. Specifically, we introduce a novel means to estimate and scale the
task loss gradient at each weight and activation layer's quantizer step size,
such that it can be learned in conjunction with other network parameters. This
approach works using different levels of precision as needed for a given system
and requires only a simple modification of existing training code. | http://arxiv.org/pdf/1902.08153 | Steven K. Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, Dharmendra S. Modha | cs.LG, stat.ML | International Conference on Learning Representations (2020) | null | cs.LG | 20190221 | 20200507 | 0 2 0 2
y a M 7 ] G L . s c [
3 v 3 5 1 8 0 . 2 0 9 1 : v i X r a
Published as a conference paper at ICLR 2020
# LEARNED STEP SIZE QUANTIZATION
Steven K. Esser â, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, Dharmendra S. Modha
IBM Research San Jose, California, USA
# ABSTRACT
Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases. Here, we present a method for training such networks, Learned Step Size Quantization, that achieves the highest accuracy to date on the ImageNet dataset when using models, from a variety of architectures, with weights and activations quantized to 2-, 3- or 4-bits of precision, and that can train 3-bit models that reach full precision baseline accuracy. Our approach builds upon existing methods for learning weights in quantized networks by improving how the quantizer itself is conï¬gured. Speciï¬cally, we introduce a novel means to estimate and scale the task loss gradient at each weight and activation layerâs quantizer step size, such that it can be learned in conjunction with other network parameters. This approach works using different levels of precision as needed for a given system and requires only a simple modiï¬cation of existing training code.
# INTRODUCTION
Deep networks are emerging as components of a number of revolutionary technologies, including image recognition (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), and driving assistance (Xu et al., 2017). Unlocking the full promise of such applications requires a system perspective where task performance, throughput, energy-efï¬ciency, and compactness are all critical considerations to be optimized through co-design of algorithms and deployment hardware. Current research seeks to develop methods for creating deep networks that maintain high accuracy while reducing the precision needed to represent their activations and weights, thereby reducing the computation and memory required for their implementation. The advantages of using such algorithms to create networks for low precision hardware has been demonstrated in several deployed systems (Esser et al., 2016; Jouppi et al., 2017; Qiu et al., 2016).
It has been shown that low precision networks can be trained with stochastic gradient descent by updating high precision weights that are quantized, along with activations, for the forward and backward pass (Courbariaux et al., 2015; Esser et al., 2016). This quantization is deï¬ned by a mapping of real numbers to the set of discrete values supported by a given low precision representation (often integers with 8-bits or less). We would like a mapping for each quantized layer that maximizes task performance, but it remains an open question how to optimally achieve this.
To date, most approaches for training low precision networks have employed uniform quantizers, which can be conï¬gured by a single step size parameter (the width of a quantization bin), though more complex nonuniform mappings have been considered (Polino et al., 2018). Early work with low precision deep networks used a simple ï¬xed conï¬guration for the quantizer (Hubara et al., 2016; Esser et al., 2016), while starting with Rastegari et al. (2016), later work focused on ï¬tting the quantizer to the data, either based on statistics of the data distribution (Li & Liu, 2016; Zhou et al., 2016; Cai et al., 2017; McKinstry et al., 2018) or seeking to minimize quantization error during training (Choi et al., 2018c; Zhang et al., 2018). Most recently, work has focused on using backpropagation with
âcorresponding author sesser@us.ibm.com
1
Published as a conference paper at ICLR 2020
Table 1: Comparison of low precision networks on ImageNet. Techniques compared are QIL (Jung et al., 2018), FAQ (McKinstry et al., 2018), LQ-Nets (Zhang et al., 2018), PACT (Choi et al., 2018b), Regularization (Choi et al., 2018c), and NICE (Baskin et al., 2018).
Top-1 Accuracy @ Precision Top-5 Accuracy @ Precision Network Method 2 3 4 8 2 3 4 8 ResNet-18 Full precision: 70.5 Full precision: 89.6 LSQ (Ours) QIL FAQ LQ-Nets PACT NICE Regularization 67.6 65.7 64.9 64.4 61.7 70.2 69.2 68.2 68.1 67.7 71.1 70.1 69.8 69.3 69.2 69.8 67.3 71.1 70.0 68.1 87.6 85.9 85.6 84.4 89.4 87.9 88.2 87.9 90.0 89.1 88.8 89.0 89.21 87.9 90.1 89.3 88.2 ResNet-34 Full precision: 74.1 Full precision: 91.8 LSQ (Ours) QIL LQ-Nets NICE FAQ 71.6 70.6 69.8 73.4 73.1 71.9 71.7 74.1 73.7 73.5 73.3 74.1 73.7 90.3 89.1 91.4 90.2 90.8 91.7 91.4 91.3 91.8 91.6 ResNet-50 Full precision: 76.9 Full precision: 93.4 LSQ (Ours) PACT NICE FAQ LQ-Nets 73.7 72.2 71.5 75.8 75.3 75.1 74.2 76.7 76.5 76.5 76.3 75.1 76.8 76.5 91.5 90.5 90.3 92.7 92.6 92.3 91.6 93.2 93.2 93.3 92.9 92.4 93.4 93.1 ResNet-101 Full precision: 78.2 Full precision: 94.1 LSQ (Ours) 76.1 77.5 78.3 78.1 92.8 93.6 94.0 94.0 ResNet-152 Full precision: 78.9 Full precision: 94.3 LSQ (Ours) FAQ 76.9 78.2 78.5 78.4 78.5 78.5 93.2 93.9 94.1 94.1 94.2 94.1 VGG-16bn Full precision: 73.4 Full precision: 91.5 LSQ (Ours) FAQ 71.4 73.4 74.0 73.9 73.5 73.7 90.4 91.5 92.0 91.7 91.6 91.6 Full precision: 67.3 Full precision: 87.8
stochastic gradient descent to learn a quantizer that minimizes task loss (Zhu et al., 2016; Mishra & Marr, 2017; Choi et al., 2018b;a; Jung et al., 2018; Baskin et al., 2018; Polino et al., 2018).
While attractive for their simplicity, ï¬xed mapping schemes based on user settings place no guarantees on optimizing network performance, and quantization error minimization schemes might perfectly minimize quantization error and yet still be non optimal if a different quantization mapping actually minimizes task error. Learning the quantization mapping by seeking to minimize task loss is appealing to us as it directly seeks to improve on the metric of interest. However, as the quantizer itself is discontinuous, such an approach requires approximating its gradient, which existing methods have done in a relatively coarse manner that ignore the impact of transitions between quantized states (Choi et al., 2018b;a; Jung et al., 2018).
Here, we introduce a new way to learn the quantization mapping for each layer in a deep network, Learned Step Size Quantization (LSQ), that improves on prior efforts with two key contributions. First, we provide a simple way to approximate the gradient to the quantizer step size that is sensitive to quantized state transitions, arguably providing for ï¬ner grained optimization when learning the step size as a model parameter. Second, we propose a simple heuristic to bring the magnitude of step size updates into better balance with weight updates, which we show improves convergence. The overall approach is usable for quantizing both activations and weights, and works with existing methods for backpropagation and stochastic gradient descent. Using LSQ to train several network architectures on
2
Published as a conference paper at ICLR 2020
the ImageNet dataset, we demonstrate signiï¬cantly better accuracy than prior quantization approaches (Table 1) and, for the ï¬rst time that we are aware of, demonstrate the milestone of 3-bit quantized networks reaching full precision network accuracy (Table 4).
# 2 METHODS
We consider deep networks that operate at inference time using low precision integer operations for computations in convolution and fully connected layers, requiring quantization of the weights and activations these layers operate on. Given data to quantize v, quantizer step size s, the number of positive and negative quantization levels QP and QN , respectively, we deï¬ne a quantizer that computes ¯v, a quantized and integer scaled representation of the data, and Ëv, a quantized representation of the data at the same scale as v:
0 = |clip(*/s, -Qn,Qp)], (1)
b=oxs. (2) Here, clip(z,r1, 172) returns z with values below rj set to r; and values above rg set to rz, and | z] rounds z to the nearest integer. Given an encoding with b bits, for unsigned data (activations) Qyv = 0 and Qp = 2° â 1 and for signed data (weights) Qy = 2°-! and Qp = 2°"! â 1.
For inference, ¯w and ¯x values can be used as input to low precision integer matrix multiplication units underlying convolution or fully connected layers, and the output of such layers then rescaled by the step size using a relatively low cost high precision scalar-tensor multiplication, a step that can potentially be algebraically merged with other operations such as batch normalization (Figure 1).
Offline Quantizer Matrix Multiplication Layer iplier Quantizer me GH Low Precision
Figure 1: Computation of a low precision convolution or fully connected layer, as envisioned here.
2.1 STEP SIZE GRADIENT
LSQ provides a means to learn s based on the training loss by introducing the following gradient through the quantizer to the step size parameter:
op [TA +LAL if-Qn <*/s < Qp 3s 7) 7 2N if °/s < âQn (3) : Qp if /s > Qp
This gradient is derived by using the straight through estimator (Bengio et al., 2013) to approximate the gradient through the round function as a pass through operation (though leaving the round itself in place for the purposes of differentiating down stream operations), and differentiating all other operations in Equations 1 and 2 normally.
This gradient differs from related approximations (Figure 2), which instead either learn a transforma- tion of the data that occurs completely prior to the discretization itself (Jung et al., 2018), or estimate the gradient by removing the round operation from the forward equation, algebraically canceling terms, and then differentiating such that â Ëv/âs = 0 where âQN < v/s < QP (Choi et al., 2018b;a). In both such previous approaches, the relative proximity of v to the transition point between quantized states does not impact the gradient to the quantization parameters. However, one can reason that the
3
Published as a conference paper at ICLR 2020
Transition Transition Transition Transition Transition Transition 0 Rn Loinâ 2 Paineâ 3 4 0 pointâ Laineâ â 2 Pointâ 3 v v
Figure 2: Given s = 1, QN = 0, QP = 3, A) quantizer output and B) gradients of the quantizer output with respect to step size, s, for LSQ, or a related parameter controlling the width of the quantized domain (equal to s(QP + QN )) for QIL (Jung et al., 2018) and PACT (Choi et al., 2018b). The gradient employed by LSQ is sensitive to the distance between v and each transition point, whereas the gradient employed by QIL (Jung et al., 2018) is sensitive only to the distance from quantizer clip points, and the gradient employed by PACT (Choi et al., 2018b) is zero everywhere below the clip point. Here, we demonstrate that networks trained with the LSQ gradient reach higher accuracy than those trained with the QIL or PACT gradients in prior work.
closer a given v is to a quantization transition point, the more likely it is to change its quantization bin (¯v) as a result of a learned update to s (since a smaller change in s is required), thereby resulting in a large jump in Ëv. Thus, we would expect â Ëv/âs to increase as the distance from v to a transition point decreases, and indeed we observe this relationship in the LSQ gradient. It is appealing that this gradient naturally falls out of our simple quantizer formulation and use of the straight through estimator for the round function.
For this work, each layer of weights and each layer of activations has a distinct step size, represented as an fp32 value, initialized to 2(!"l)/y@p, computed on either the initial weights values or the first batch of activations, respectively.
2.2 STEP SIZE GRADIENT SCALE
It has been shown that good convergence is achieved during training where the ratio of average update magnitude to average parameter magnitude is approximately the same for all weight layers in a network (You et al., 2017). Once learning rate has been properly set, this helps to ensure that all updates are neither so large as to lead to repeated overshooting of local minima, nor so small as to lead to unnecessarily long convergence time. Extending this reasoning, we consider that each step size should also have its update magnitude to parameter magnitude proportioned similarly to that of weights. Thus, for a network trained on some loss function L, the ratio
VsE / ||VwLll 8 ||w| R= (4)
should on average be near 1, where ||z|| denotes the Jj-norm of z. However, we expect the step size parameter to be smaller as precision increases (because the data is quantized more finely), and step size updates to be larger as the number of quantized items increases (because more items are summed across when computing its gradient). To correct for this, we multiply the step size loss by a gradient scale, g, where for weight step size g = 1//NwQp and for activation step size g = 1/VNrQp, where Nw is the number of weights in a layer and Ny is the number of features in a layer. In section|3.4|we demonstrate that this improves trained accuracy, and we provide reasoning behind the specific scales chosen in the Section[AJof the Appendix.
2.3 TRAINING
Model quantizers are trained with LSQ by making their step sizes learnable parameters with loss gradient computed using the quantizer gradient described above, while other model parameters can be trained using existing techniques. Here, we employ a common means of training quantized networks (Courbariaux et al., 2015), where full precision weights are stored and updated, quantized weights
4
4
Published as a conference paper at ICLR 2020
and activations are used for forward and backward passes, the gradient through the quantizer round function is computed using the straight through estimator (Bengio et al., 2013) such that
Oo {i if -âQy <%/s<Qp Ov 0 otherwise, (5)
and stochastic gradient descent is used to update parameters.
For simplicity during training, we use Ëv as input to matrix multiplication layers, which is algebraically equivalent to the previously described inference operations. We set input activations and weights to either 2-, 3-, 4-, or 8-bit for all matrix multiplication layers except the ï¬rst and last, which always use 8-bit, as making the ï¬rst and last layers high precision has become standard practice for quantized networks and demonstrated to provide a large beneï¬t to performance. All other parameters are represented using fp32. All quantized networks are initialized using weights from a trained full precision model with equivalent architecture before ï¬ne-tuning in the quantized space, which is known to improve performance (Sung et al., 2015; Zhou et al., 2016; Mishra & Marr, 2017; McKinstry et al., 2018).
Networks were trained with a momentum of 0.9, using a softmax cross entropy loss function, and cosine learning rate decay without restarts (Loshchilov & Hutter, 2016). Under the assumption that the optimal solution for 8-bit networks is close to the full precision solution (McKinstry et al., 2018), 8-bit networks were trained for 1 epoch while all other networks were trained for 90 epochs. The initial learning rate was set to 0.1 for full precision networks, 0.01 for 2-, 3-, and 4-bit networks and to 0.001 for 8-bit networks. All experiments were conducted on the ImageNet dataset (Russakovsky et al., 2015), using pre-activation ResNet (He et al., 2016), VGG (Simonyan & Zisserman, 2014) with batch norm, or SqueezeNext (Gholami et al., 2018). All full precision networks were trained from scratch, except for VGG-16bn, for which we used the pretrained version available in the PyTorch model zoo. Images were resized to 256 Ã 256, then a 224 Ã 224 crop was selected for training, with horizontal mirroring applied half the time. At test time, a 224 Ã 224 centered crop was chosen. We implemented and tested LSQ in PyTorch.
3 RESULTS
3.1 WEIGHT DECAY
We expect that reducing model precision will reduce a modelâs tendency to overï¬t, and thus also reduce the regularization in the form of weight decay necessary to achieve good performance. To investigate this, we performed a hyperparameter sweep on weight decay for ResNet-18 (Table 2), and indeed found that lower precision networks reached higher accuracy with less weight decay. Performance was improved by reducing weight decay by half for the 3-bit network, and reducing it by a quarter for the 2-bit network. We used these weight decay values for all further experiments.
Table 2: ResNet-18 top-1 accuracy for various weight decay values.
66.9 67.3 67.6 67.4 70.1 70.2 70.0 66.9 71.0 70.9 70.9 70.8 71.1 71.1 71.0 71.0
3.2 COMPARISON WITH OTHER APPROACHES
We trained several networks using LSQ and compare accuracy with other quantized networks and full precision baselines (Table 1). To facilitate comparison, we only consider published models that quantize all convolution and fully connected layer weights and input activations to the speciï¬ed precision, except for the ï¬rst and last layers which may use higher precision (as for the LSQ models). In some cases, we report slightly higher accuracy on full precision networks than in their original publications, which we attribute to our use of cosine learning rate decay (Loshchilov & Hutter, 2016).
5
Published as a conference paper at ICLR 2020
We found that LSQ achieved a higher top-1 accuracy than all previous reported approaches for 2-, 3- and 4- bit networks with the architectures considered here. For nearly all cases, LSQ also achieved the best-to-date top-5 accuracy on these networks, and best-to-date accuracy on 8-bit versions of these networks. In most cases, we found no accuracy advantage to increasing precision from 4-bit to 8-bit. It is worth noting that the next best low precision method (Jung et al., 2018) used progressive ï¬ne tuning (sequentially training a full precision to 5-bit model, then the 5-bit model to a 4-bit model, and so on), signiï¬cantly increasing training time and complexity over our approach which ï¬ne tunes directly from a full precision model to the precision of interest.
It is interesting to note that when comparing a full precision to a 2-bit precision model, top-1 accuracy drops only 2.9 for ResNet-18, but 14.0 for SqueezeNext-23-2x. One interpretation of this is that the SqueezeNext architecture was designed to maximize performance using as few parameters as possible, which may have placed it at a design point extremely sensitive to reductions in precision.
3.3 ACCURACY VS. MODEL SIZE
For a model size limited application, it is important to choose the highest performing model that ï¬ts within available memory limitations. To facilitate this choice, we plot here network accuracy against corresponding model size (Figure 3).
We can consider the frontier of best performance for a given model size of the architectures considered here. On this metric, we can see that 2-bit ResNet-34 and ResNet-50 networks offer an absolute advantage over using a smaller network, but with higher precision. We can also note that at all precisions, VGG-16bn exists below this frontier, which is not surprising as this network was developed prior to a number of recent innovations in achieving higher performance with fewer parameters.
ResNet-152 ResNet-101 ResNet-50 ~ a ResNet-34 VGG-16bn ~ fo) ResNet-18 â_â<e SqueezeNext-23-2x Top-1 Accuracy a & / Full Precision Model Sizes (MB) Precision 60 ResNet-18: 45 / fests . 2 / 2 a3 55 / . 4 é SqueezeNext-23-2x: 10 . 1 2 4 8 16 32 64 128 Model Size (MB)
Figure 3: Accuracy vs. model size for the networks considered here show some 2-bit networks provide the highest accuracy at a given model size. Full precision model sizes are inset for reference.
3.4 STEP SIZE GRADIENT SCALE IMPACT
To demonstrate the impact of the step size gradient scale (Section 2.2), we measured R (see Equation 4) averaged across 500 iterations in the middle of the ï¬rst training epoch for ResNet-18, using different step size gradient scales (the network itself was trained with the scaling as described in the methods to avoid convergence problems). With no scaling, we found that relative to parameter size, updates to step size were 2 to 3 orders of magnitude larger than updates to weights, and this imbalance increased with precision, with the 8-bit network showing almost an order of magnitude greater imbalance than the 2-bit network (Figure 4, left). Adjusting for the number of weights per layer (g = 1/â NW ), the imbalance between step size and weights largely went away, through the imbalance across precision remained (Figure 4, center). Adjusting for the number of number of weights per layer and precision (g = 1/â NW QP ), this precision dependent imbalance was largely removed as well (Figure 4, right).
We considered network accuracy after training a 2-bit ResNet-18 using different step size gradient scales (Table 3). Using the network with the full gradient scale (g = 1/â NF QP
6
Published as a conference paper at ICLR 2020
103 Mm 2-bit > mm 3-bit 10 mm 4-bit « 19 8-bit 10° âââââ 1 F F vNy VNule Step size gradient scale
Figure 4: Relative parameter update magnitudes given different step size gradient scales. A gradient scale of 1/NW QP better balances relative step size and weight gradient magnitudes (right vs. left).
for weight and activation step size respectively) as baseline, we found that adjusting only for weight and feature count led to a 0.3 decrease in top-1 accuracy, and when no gradient scale was applied the network did not converge unless we dropped the initial learning rate. Dropping the initial learning rate in multiples of ten, the best top-1 accuracy we achieved using no gradient scale was 3.4 below baseline, using an initial learning rate of 0.0001. Finally, we found that using the full gradient scaling with an additional ten-fold increase or decrease also reduced top-1 accuracy. Overall, this suggests a beneï¬t to our chosen heuristic for scaling the step size loss gradient.
Table 3: Top-1 accuracy for various gradient scale values and learning rates for 2-bit ResNet-18.
Gradient scale Learning Rate Accuracy 1/,\/NQp 0.01 67.6 V/VN 0.01 67.3 1 0.01 Did not converge 1 0.0001 64.2 10/.\/NQp 0.01 67.4 1/10, /NQp 0.01 67.3
3.5 COSINE LEARNING RATE DECAY IMPACT
We chose to use cosine learning rate decay in our experiments as it removes the need to select learning rate schedule hyperparameters, is available in most training frameworks, and does not increase training time. To facilitate comparison with results in other publications that use step-based learning rate decay, we trained a 2-bit ResNet-18 model with LSQ for 90 epochs, using an initial learning rate of 0.01, which was multiplied by 0.1 every 20 epochs. This model reached a top-1 accuracy of 67.2, a reduction of 0.4 from the equivalent model trained with cosine learning rate decay, but still marking an improvement of 1.5 over the next best training method (see Table 1).
3.6 QUANTIZATION ERROR
We next sought to understand whether LSQ learns a solution that minimizes quantization error (the distance between t and v on some metric), despite such an objective not being explicitly encouraged. For this purpose, for a given layer we define the final step size learned by LSQ as § and let S be the set of discrete values {0.015, 0.025, ..., 20.005}. For each layer, on a single batch of test data we computed the value of s ⬠S' that minimizes mean absolute error, (|(6(s) â v)|), mean square error, ((6(s) â v)?), and Kullback-Leibler divergence, [ p(v) log p(v) â J p(v) log q(8(s)) where pand q are probability distributions. For purposes of relative comparison, we ignore the first term of Kullback-Leibler divergence, as it does not depend on ¢, and approximate the second term as âE[log(q(é(s)))], where the expectation is over the sample distribution.
7
Published as a conference paper at ICLR 2020
For a 2-bit ResNet-18 model we found Ës = 0.949 ± 0.206 for activations and Ës = 0.025 ± 0.019 for weights (mean ± standard deviation). The percent absolute difference between Ës and the value of s that minimizes quantization error, averaged across activation layers was 50% for mean absolute error, 63% for mean square error, and 64% for Kullback-Leibler divergence, and averaged across weight layers, was 47% for mean absolute error, 28% for mean square error, and 46% for Kullback-Leibler divergence. This indicates that LSQ learns a solution that does not in fact minimize quantization error. As LSQ achieves better accuracy than approaches that directly seek to minimize quantization error, this suggests that simply ï¬tting a quantizer to its corresponding data distribution may not be optimal for task performance.
IMPROVEMENT WITH KNOWLEDGE-DISTILLATION
To better understand how well low precision networks can reproduce full precision accuracy, we combined LSQ with same-architecture knowledge distillation, which has been shown to improve low precision network training (Mishra & Marr, 2017). Speciï¬cally, we used the distillation loss function of Hinton et al. (2015) with temperature of 1 and equal weight given to the standard loss and the distillation loss (we found this gave comparable results to weighting the the distillation loss two times more or less than the standard loss on 2-bit ResNet-18). The teacher network was a trained full precision model with frozen weights and of the same architecture as the low precision network trained. As shown in Table 4, this improved performance, with top-1 accuracy increasing by up to 1.1 (3-bit ResNet-50), and with 3-bit networks reaching the score of the full precision baseline (see Table 1 for comparison). As a control, we also used this approach to distill from the full precision teacher to a full precision (initially untrained) student with the same architecture, which did not lead to an improvement in the student network accuracy beyond training the student alone. These results reinforce previous work showing that knowledge-distillation can help low precision networks catch up to full precision performance (Mishra & Marr, 2017).
Table 4: Accuracy for low precision networks trained with LSQ and knowledge distillation, which is improved over using LSQ alone, with 3-bit networks reaching the accuracy of full precision (32-bit) baselines (shown for comparison).
Network 2 Top-1 Accuracy @ Precision 4 3 8 32 2 Top-5 Accuracy @ Precision 4 3 8 32 ResNet-18 ResNet-34 ResNet-50 67.9 72.4 74.6 70.6 74.3 76.9 71.2 74.8 77.6 71.1 74.1 76.8 70.5 74.1 76.9 88.1 90.8 92.1 89.7 91.8 93.4 90.1 92.1 93.7 90.1 91.7 93.3 89.6 91.8 93.4
# 4 CONCLUSIONS
The results presented here demonstrate that on the ImageNet dataset across several network archi- tectures, LSQ exceeds the performance of all prior approaches for creating quantized networks. We found best performance when rescaling the quantizer step size loss gradient based on layer size and precision. Interestingly, LSQ does not appear to minimize quantization error, whether measured using mean square error, mean absolute error, or Kullback-Leibler divergence. The approach itself is simple, requiring only a single additional parameter per weight or activation layer.
Although our goal is to train low precision networks to achieve accuracy equal to their full precision counterparts, it is not yet clear whether this goal is achievable for 2-bit networks, which here reached accuracy several percent below their full precision counterparts. However, we found that such 2-bit solutions for state-of-the-art networks are useful in that they can give the best accuracy for the given model size, for example, with an 8MB model size limit, a 2-bit ResNet-50 was better than a 4-bit ResNet-34 (Figure 3).
This work is a continuation of a trend towards steadily reducing the number of bits of precision necessary to achieve good performance across a range of network architectures on ImageNet. While it is unclear how far it can be taken, it is noteworthy that the trend towards higher performance at lower precision strengthens the analogy between artiï¬cial neural networks and biological neural networks,
8
Published as a conference paper at ICLR 2020
which themselves employ synapses represented by perhaps a few bits of information (Bartol Jr et al., 2015) and single bit spikes that may be employed in small spatial and/or temporal ensembles to provide low bit width data representation. Analogies aside, reducing network precision while maintaining high accuracy is a promising means of reducing model size and increasing throughput to provide performance advantages in real world deployed deep networks.
# REFERENCES
Thomas M Bartol Jr, Cailey Bromer, Justin Kinney, Michael A Chirillo, Jennifer N Bourne, Kristen M Harris, and Terrence J Sejnowski. Nanoconnectomic upper bound on the variability of synaptic plasticity. Elife, 4:e10778, 2015.
Chaim Baskin, Natan Liss, Yoav Chai, Evgenii Zheltonozhskii, Eli Schwartz, Raja Girayes, Avi Mendelson, and Alexander M Bronstein. Nice: Noise injection and clamping estimation for neural network quantization. arXiv preprint arXiv:1810.00162, 2018.
Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Zhaowei Cai, Xiaodong He, Jian Sun, and Nuno Vasconcelos. Deep learning with low precision by half-wave gaussian quantization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5918â5926, 2017.
Jungwook Choi, Pierce I-Jen Chuang, Zhuo Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. Bridging the accuracy gap for 2-bit quantized neural networks (qnn). arXiv preprint arXiv:1807.06964, 2018a.
Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, 2018b.
Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee. Learning low precision deep neural networks through regularization. arXiv preprint arXiv:1809.00095, 2018c.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in neural information processing systems, pp. 3123â3131, 2015.
Steven K. Esser, Paul A. Merolla, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Alexander Andreopoulos, David J. Berg, Jeffrey L. McKinstry, Timothy Melano, Davis R. Barch, Carmelo di Nolfo, Pallab Datta, Arnon Amir, Brian Taba, Myron D. Flickner, and Dharmendra S. Modha. Convolutional networks for fast, energy-efï¬cient neuromorphic computing. Proceedings of the National Academy of Sciences, 113(41):11441â11446, 2016.
Amir Gholami, Kiseok Kwon, Bichen Wu, Zizheng Tai, Xiangyu Yue, Peter Jin, Sicheng Zhao, and Kurt Keutzer. Squeezenext: Hardware-aware neural network design. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1638â1647, 2018.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770â778, 2016.
Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal processing magazine, 29(6):82â97, 2012.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In Advances in neural information processing systems, pp. 4107â4115, 2016.
9
Published as a conference paper at ICLR 2020
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Norman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of a tensor processing unit. In Computer Architecture (ISCA), 2017 ACM/IEEE 44th Annual International Symposium on, pp. 1â12. IEEE, 2017.
Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Youngjun Kwak, Jae-Joon Han, and Changkyu Choi. Joint training of low-precision neural network with quantization interval parame- ters. arXiv preprint arXiv:1808.05779, 2018.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolu- tional neural networks. In Advances in neural information processing systems, pp. 1097â1105, 2012.
F Li and B Liu. Ternary weight networks.(2016). arXiv preprint arXiv:1605.04711, 2016.
Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
Jeffrey L McKinstry, Steven K Esser, Rathinakumar Appuswamy, Deepika Bablani, John V Arthur, Izzet B Yildiz, and Dharmendra S Modha. Discovering low-precision networks close to full- precision networks for efï¬cient embedded inference. arXiv preprint arXiv:1809.04191, 2018.
Asit Mishra and Debbie Marr. Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy. arXiv preprint arXiv:1711.05852, 2017.
Antonio Polino, Razvan Pascanu, and Dan Alistarh. Model compression via distillation and quantiza- tion. arXiv preprint arXiv:1802.05668, 2018.
Jiantao Qiu, Jie Wang, Song Yao, Kaiyuan Guo, Boxun Li, Erjin Zhou, Jincheng Yu, Tianqi Tang, Ningyi Xu, Sen Song, et al. Going deeper with embedded fpga platform for convolutional neural network. In Proceedings of the 2016 ACM/SIGDA International Symposium on Field- Programmable Gate Arrays, pp. 26â35. ACM, 2016.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classiï¬cation using binary convolutional neural networks. In European Conference on Computer Vision, pp. 525â542. Springer, 2016.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211â252, 2015.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Wonyong Sung, Sungho Shin, and Kyuyeon Hwang. Resiliency of deep neural networks under quantization. arXiv preprint arXiv:1511.06488, 2015.
Huazhe Xu, Yang Gao, Fisher Yu, and Trevor Darrell. End-to-end learning of driving models from large-scale video datasets. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2174â2182, 2017.
Yang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888, 2017.
Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, and Gang Hua. Lq-nets: Learned quantization for highly accurate and compact deep neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 365â382, 2018.
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Train- ing low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. arXiv preprint arXiv:1612.01064, 2016.
10
Published as a conference paper at ICLR 2020
# A STEP SIZE GRADIENT SCALE DERIVATION
We compute our gradient scale value by ï¬rst estimating R (Equation 4), starting with the simple heuristic that for a layer with NW weights
\wll/s = \/NwQp- (6) To develop this approximation, we first note that the expected value of an /.-norm should grow with the square root of the number of elements normalized. Next, we assume that where Qp = 1, step size should be approximately equal to average weight magnitude so as to split the weight distribution into zero and non zero values in a roughly balanced fashion. Finally, we assume that for larger Q p, step size should be roughly proportional to \/1/Qp, so that as the number of available quantized states increases, data between the clip points will be quantized more precisely, and the clip points themselves (equal to sQy and sQ p) will move further out to better encode outliers.
We also note that, in the expectation, ||V,,L|| and VL are of approximately the same order. This can be shown by starting from the chain rule
Nw V.L=)> OL Oth; (7) Ow; Osâ i=l
i=1
then assuming â Ëwi/âs is reasonably close to 1 (see for example Figure 2), and treating all âL/â Ëwi as uncorrelated zero-centered random variables, to compute the following expectation across weights:
OL? E[V.L?] ~ Nw x E Ee . (8)
By assuming â Ëw/âw = 1 for most weights, we similarly approximate
ay 2 E[||VuLl|?] = Nw xE Ee . Ow 0)
Bringing all of this together, we can then estimate
Re /NwQp. (10) Knowing this expected imbalance, we compute our gradient scale factor for weights by simply taking the inverse of R, so that g is set to 1/VNwOp.
As most activation layers are preceded by batch normalization (Ioffe & Szegedy, 2015), and assuming updates to the learned batch normalization scaling parameter is the primary driver of changes to pre-quantization activations, we can use a similar approach to the above to show that there is an imbalance between step size updates and update driven changes to activations that grows with the number of features in a layer, NF as well as QP . Thus, for activation step size we set g to 1/â NF QP .
# B IMPLEMENTATION
In this section we provide pseudocode to facilitate the implementation of LSQ. We assume the use of automatic differentiation, as supported by a number of popular deep learning frameworks, where the desired operations for the training forward pass are coded, and the automatic differentiation engine computes the gradient through those operations in the backward pass.
Our approach requires two functions with non standard gradients, gradscale (Function 1) and roundpass (Function 2). We implement the custom gradients by assuming a function called detach that returns its input (unmodiï¬ed) during the forward pass, and whose gradient during the backward pass is zero (thus detaching itself from the backward graph). This function is used in the form:
y = detach(x1 â x2) + x2, (11)
so that in the forward pass, y = x1 (as the x2 terms cancel out), while in the backward pass âL/âx1 = 0 (as detach blocks gradient propagation to x1) and âL/âx2 = âL/ây. We also assume a function nf eatures that given an activation tensor, returns the number of features in that tensor, and
11
Published as a conference paper at ICLR 2020
nweights that given a weight tensor, returns the number of weights in that tensor. Finally, the above are used to implement a function called quantize, which quantizes weights and activations prior to their use in each convolution or fully connected layer.
The pseudocode provided here is chosen for simplicity of implementation and broad applicability to many training frameworks, though more compute and memory efï¬cient approaches are possible. This example code assumes activations are unsigned, but could be modiï¬ed to quantize signed activations.
# Function 1 gradscale(x, scale):
# x: Input tensor # scale: Scale gradient by this yOut = x yGrad = x à scale y = detach(yOut - yGrad) + yGrad # Return yOut in forward, pass gradient to yGrad in backward return y
# Function 2 roundpass(x):
# x: Input tensor yOut = round(x) # Round to nearest yGrad = x y = detach(yOut - yGrad) + yGrad # Return yOut in forward, pass gradient to yGrad in backward return y
# Function 3 quantize(v, s, p, isActivation):
# v: Input tensor # s: Step size, a learnable parameter speciï¬c to weight or activation layer being quantized # p: Quantization bits of precision # isActivation: True if v is activation tensor, # False if v is weight tensor # Compute conï¬guration values if isActivation: Qn = 0 Qp = 2Ëp - 1 gradScaleFactor = 1 / sqrt(nfeatures(v) à Qp) else: # is weights Qn = -2Ë(p-1) Qp = 2Ë(p-1) - 1 gradScaleFactor = 1 / sqrt(nweights(v) à Qp) # Quantize s = gradscale(s, gradScaleFactor) v = v / s v = clip(v, Qn, Qp) vbar = roundpass(v) vhat = vbar à s return vhat
12 | {
"id": "1502.03167"
} |
1902.06720 | Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent | A longstanding goal in deep learning research has been to precisely
characterize training and generalization. However, the often complex loss
landscapes of neural networks have made a theory of learning dynamics elusive.
In this work, we show that for wide neural networks the learning dynamics
simplify considerably and that, in the infinite width limit, they are governed
by a linear model obtained from the first-order Taylor expansion of the network
around its initial parameters. Furthermore, mirroring the correspondence
between wide Bayesian neural networks and Gaussian processes, gradient-based
training of wide neural networks with a squared loss produces test set
predictions drawn from a Gaussian process with a particular compositional
kernel. While these theoretical results are only exact in the infinite width
limit, we nevertheless find excellent empirical agreement between the
predictions of the original network and those of the linearized version even
for finite practically-sized networks. This agreement is robust across
different architectures, optimization methods, and loss functions. | http://arxiv.org/pdf/1902.06720 | Jaehoon Lee, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, Jeffrey Pennington | stat.ML, cs.LG | 12+16 pages; open-source code available at
https://github.com/google/neural-tangents; accepted to NeurIPS 2019 | null | stat.ML | 20190218 | 20191208 | 9 1 0 2
c e D 8 ] L M . t a t s [ 4 v 0 2 7 6 0 . 2 0 9 1 : v i X r a
# Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
# Jaehoon Leeâ, Lechao Xiaoâ, Samuel S. Schoenholz, Yasaman Bahri Roman Novak, Jascha Sohl-Dickstein, Jeffrey Pennington
Google Brain
# {jaehlee, xlc, schsam, yasamanb, romann, jaschasd, jpennin}@google.com
# Abstract
A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the inï¬nite width limit, they are governed by a linear model obtained from the ï¬rst-order Taylor expansion of the network around its initial parameters. Fur- thermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the inï¬nite width limit, we nevertheless ï¬nd excellent empirical agreement between the predictions of the original network and those of the linearized version even for ï¬nite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.
# Introduction
Machine learning models based on deep neural networks have achieved unprecedented performance across a wide range of tasks [1, 2, 3]. Typically, these models are regarded as complex systems for which many types of theoretical analyses are intractable. Moreover, characterizing the gradient-based training dynamics of these models is challenging owing to the typically high-dimensional non-convex loss surfaces governing the optimization. As is common in the physical sciences, investigating the extreme limits of such systems can often shed light on these hard problems. For neural networks, one such limit is that of inï¬nite width, which refers either to the number of hidden units in a fully- connected layer or to the number of channels in a convolutional layer. Under this limit, the output of the network at initialization is a draw from a Gaussian process (GP); moreover, the network output remains governed by a GP after exact Bayesian training using squared loss [4, 5, 6, 7, 8]. Aside from its theoretical simplicity, the inï¬nite-width limit is also of practical interest as wider networks have been found to generalize better [5, 7, 9, 10, 11].
In this work, we explore the learning dynamics of wide neural networks under gradient descent and ï¬nd that the weight-space description of the dynamics becomes surprisingly simple: as the width becomes large, the neural network can be effectively replaced by its ï¬rst-order Taylor expansion with respect to its parameters at initialization. For this linear model, the dynamics of gradient descent become analytically tractable. While the linearization is only exact in the inï¬nite width limit, we nevertheless ï¬nd excellent agreement between the predictions of the original network and those of
âBoth authors contributed equally to this work. Work done as a member of the Google AI Residency program (https://g.co/airesidency).
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
the linearized version even for ï¬nite width conï¬gurations. The agreement persists across different architectures, optimization methods, and loss functions.
For squared loss, the exact learning dynamics admit a closed-form solution that allows us to charac- terize the evolution of the predictive distribution in terms of a GP. This result can be thought of as an extension of âsample-then-optimize" posterior sampling [12] to the training of deep neural networks. Our empirical simulations conï¬rm that the result accurately models the variation in predictions across an ensemble of ï¬nite-width models with different random initializations.
Here we summarize our contributions:
⢠Parameter space dynamics: We show that wide network training dynamics in parameter space are equivalent to the training dynamics of a model which is afï¬ne in the collection of all network parameters, the weights and biases. This result holds regardless of the choice of loss function. For squared loss, the dynamics admit a closed-form solution as a function of time.
e Sufficient conditions for linearization: We formally prove that there exists a threshold learning rate Neritical (see Theorem 2.1), such that gradient descent training trajectories with learning rate smaller than 7eritical Stay in an O (n-VÂ¥ ?) neighborhood of the trajectory of the linearized network when n, the width of the hidden layers, is sufficiently large.
⢠Output distribution dynamics: We formally show that the predictions of a neural network throughout gradient descent training are described by a GP as the width goes to inï¬nity (see Theorem 2.2), extending results from Jacot et al. [13]. We further derive explicit time-dependent expressions for the evolution of this GP during training. Finally, we provide a novel interpretation of the result. In particular, it offers a quantitative understanding of the mechanism by which gradient descent differs from Bayesian posterior sampling of the parameters: while both methods generate draws from a GP, gradient descent does not generate samples from the posterior of any probabilistic model.
⢠Large scale experimental support: We empirically investigate the applicability of the theory in the ï¬nite-width setting and ï¬nd that it gives an accurate characterization of both learning dynamics and posterior function distributions across a variety of conditions, including some practical network architectures such as the wide residual network [14].
⢠Parameterization independence: We note that linearization result holds both in standard and NTK parameterization (deï¬ned in §2.1), while previous work assumed the latter, emphasizing that the effect is due to increase in width rather than the particular parameterization.
⢠Analytic ReLU and erf neural tangent kernels: We compute the analytic neural tangent kernel corresponding to fully-connected networks with ReLU or erf nonlinearities.
⢠Source code: Example code investigating both function space and parameter space linearized learning dynamics described in this work is released as open source code within [15].2 We also provide accompanying interactive Colab notebooks for both parameter space3 and function space4 linearization.
# 1.1 Related work
We build on recent work by Jacot et al. [13] that characterize the exact dynamics of network outputs throughout gradient descent training in the inï¬nite width limit. Their results establish that full batch gradient descent in parameter space corresponds to kernel gradient descent in function space with respect to a new kernel, the Neural Tangent Kernel (NTK). We examine what this implies about dynamics in parameter space, where training updates are actually made.
Daniely et al. [16] study the relationship between neural networks and kernels at initialization. They bound the difference between the inï¬nite width kernel and the empirical kernel at ï¬nite width n, which diminishes as O(1/ n). Daniely [17] uses the same kernel perspective to study stochastic gradient descent (SGD) training of neural networks.
Saxe et al. [18] study the training dynamics of deep linear networks, in which the nonlinearities are treated as identity functions. Deep linear networks are linear in their inputs, but not in their
2Note that the open source library has been expanded since initial submission of this work. 3 colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/weight_space_linearization.ipynb colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/function_space_linearization.ipynb
2
parameters. In contrast, we show that the outputs of sufï¬ciently wide neural networks are linear in the updates to their parameters during gradient descent, but not usually their inputs.
Du et al. [19], Allen-Zhu et al. [20, 21], Zou et al. [22] study the convergence of gradient descent to global minima. They proved that for i.i.d. Gaussian initialization, the parameters of sufï¬ciently wide networks move little from their initial values during SGD. This small motion of the parameters is crucial to the effect we present, where wide neural networks behave linearly in terms of their parameters throughout training.
Mei et al. [23], Chizat and Bach [24], Rotskoff and Vanden-Eijnden [25], Sirignano and Spiliopoulos [26] analyze the mean ï¬eld SGD dynamics of training neural networks in the large-width limit. Their mean ï¬eld analysis describes distributional dynamics of network parameters via a PDE. However, their analysis is restricted to one hidden layer networks with a scaling limit (1/n) different from ours (1/ Chizat et al. [28]5 argued that inï¬nite width networks are in âlazy trainingâ regime and maybe too simple to be applicable to realistic neural networks. Nonetheless, we empirically investigate the applicability of the theory in the ï¬nite-width setting and ï¬nd that it gives an accurate characterization of both the learning dynamics and posterior function distributions across a variety of conditions, including some practical network architectures such as the wide residual network [14].
# 2 Theoretical results
# 2.1 Notation and setup for architecture and training dynamics
Let D â Rn0 à Rk denote the training set and X = {x : (x, y) â D} and Y = {y : (x, y) â D} denote the inputs and labels, respectively. Consider a fully-connected feed-forward network with L hidden layers with widths nl, for l = 1, ..., L and a readout layer with nL+1 = k. For each x â Rn0 , we use hl(x), xl(x) â Rnl to represent the pre- and post-activation functions at layer l with input x. The recurrence relation for a feed-forward network is deï¬ned as
pitt = gl wltt 4 pitt w} ee Sey { 41 _ (pitt and ¥ 7 vm at = 9 (nt) = ant (1)
where ¢ is a point-wise activation function, W't! ⬠Râ¢*"+1 and b'+! ⬠R"+? are the weights and biases, wj, and bi, are the trainable variables, drawn i.i.d. from a standard Gaussian wi, 3, ~ N(0, 1) at initialization, and 7? and o? are weight and bias variances. Note that this parametrization is non- standard, and we will refer to it as the NTK parameterization. It has already been adopted in several recent works [29, 30, 13, 19, 31]. Unlike the standard parameterization that only normalizes the forward dynamics of the network, the NTK-parameterization also normalizes its backward dynamics. We note that the predictions and training dynamics of NTK-parameterized networks are identical to those of standard networks, up to a width-dependent scaling factor in the learning rate for each parameter tensor. As we derive, and support experimentally, in Supplementary Material (SM) §F and §G, our results (linearity in weights, GP predictions) also hold for networks with a standard parameterization. We define 6! = vec ({W', b'}), the ((mjâ1 + 1)nz) x 1 vector of all parameters for layer 1. 6 = vec (U/4'6") then indicates the vector of all network parameters, with similar definitions for 05! and 67. Denote by 6; the time-dependence of the parameters and by o their initial values. We use f,(x) = hâ*1(x) ⬠R* to denote the output (or logits) of the neural network at time ¢. Let L(g, y) : R® x R*® â R denote the loss function where the first argument is the prediction and the second argument the true label. In supervised learning, one is interested in learning a 6 that minimizes the empirical lossâ, L = (yep â¬(fe(, 9), y)-
5We note that this is a concurrent work and an expanded version of this note is presented in parallel at NeurIPS 2019.
6To simplify the notation for later equations, we use the total loss here instead of the average loss, but for all plots in §3, we show the average loss.
3
Let η be the learning rate7. Via continuous time gradient descent, the evolution of the parameters θ and the logits f can be written as
Ëθt = âηâθft(X )T âft(X )L Ëft(X ) = âθft(X ) Ëθt = âη ËÎt(X , X )âft(X )L
(2)
Si(®) = VofilX) 6: = ân O(X, X)V pcan (3) where f,(Â¥â) = vec ([f: («)],cx), the k|D| x 1 vector of concatenated logits for all examples, and V 7.(x)£ is the gradient of the loss with respect to the modelâs output, f;(Â¥). Oo. = OX, &) is the tangent kernel at time ¢, which is a k|D| x k|D| matrix
L+1 61 = Vo(X)VofilX)? = S> Vor fi(X)Vorfi(®)â- (4) l=1
One can deï¬ne the tangent kernel for general arguments, e.g. ËÎt(x, X ) where x is test input. At ï¬nite-width, ËÎ will depend on the speciï¬c random draw of the parameters and in this context we refer to it as the empirical tangent kernel. The dynamics of discrete gradient descent can be obtained by replacing Ëθt and Ëft(X ) with (θi+1 â θi) and (fi+1(X ) â fi(X )) above, and replacing eâη ËÎ0t with (1 â (1 â η ËÎ0)i) below.
# 2.2 Linearized networks have closed form training dynamics for parameters and outputs
In this section, we consider the training dynamics of the linearized network. Speciï¬cally, we replace the outputs of the neural network by their ï¬rst order Taylor expansion,
f lin t (x) â¡ f0(x) + âθf0(x)|θ=θ0 where Ït ⡠θt â θ0 is the change in the parameters from their initial values. Note that f lin is the t sum of two terms: the ï¬rst term is the initial output of the network, which remains unchanged during training, and the second term captures the change to the initial value during training. The dynamics of gradient ï¬ow using this linearized function are governed by,
ËÏt = âηâθf0(X )T âf lin Ëf lin t (x) = âη ËÎ0(x, X )âf lin
t (X )L (6)
t (X )L . (7)
As Vo fo(a) remains constant throughout training, these dynamics are often quite simple. In the case of an MSE loss, i.e., â¬(9, y) = 4||9 â y||3. the ODEs have closed form solutions
we = âVo fol Â¥)" Oy" (I= e-""") (fo(*) -) , ®)
Ït = ââθf0(X )T ËÎâ1 0 t (X ) = (I â eâη ËÎ0t)Y + eâη ËÎ0tf0(X ) . f lin
(9)
For an arbitrary point x, f lin t (x) = µt(x) + γt(x), where
pu (x) = Oo(a, X)O5" (1 - e100") y (10)
(1 - e100")
y (T=e7"")
γt(x) = f0(x) â ËÎ0 (x, X ) ËÎâ1 0 f0(X ). (11)
Therefore, we can obtain the time evolution of the linearized neural network without running gradient descent. We only need to compute the tangent kernel ËÎ0 and the outputs f0 at initialization and use Equations 8, 10, and 11 to compute the dynamics of the weights and the outputs.
# Inï¬nite width limit yields a Gaussian process
As the width of the hidden layers approaches inï¬nity, the Central Limit Theorem (CLT) implies that the outputs at initialization {f0(x)}xâX converge to a multivariate Gaussian in distribution.
7Note that compared to the conventional parameterization, η is larger by factor of width [31]. The NTK parameterization allows usage of a universal learning rate scale irrespective of network width.
4
Informally, this occurs because the pre-activations at each layer are a sum of Gaussian random variables (the weights and bias), and thus become a Gaussian random variable themselves. See [32, 33, 5, 34, 35] for more details, and [36, 7] for a formal treatment.
Therefore, randomly initialized neural networks are in correspondence with a certain class of GPs (hereinafter referred to as NNGPs), which facilitates a fully Bayesian treatment of neural networks [5, 6]. More precisely, let f i t denote the i-th output dimension and K denote the sample-to-sample kernel function (of the pre-activation) of the outputs in the inï¬nite width setting,
Kaa) = lim E[ fila) - fa(eâ)] (12) min(m,..., Np )â-0o
then fo(Â¥) ~ N(0,K(4Â¥, Â¥)), where K(x, 2â) denotes the covariance between the i-th output of x and j-th output of xâ, which can be computed recursively (see Lee et al. [5, §2.3] and SM §E). For a test input « ⬠4p, the joint output distribution f (2, Â¥]) is also multivariate Gaussian. Conditioning on the training samplesâ, f(4â) = ), the distribution of f(x)| 4, ) is also a Gaussian N (u(x), B(x),
µ(x) = K(x, X )Kâ1Y, Σ(x) = K(x, x) â K(x, X )Kâ1K(x, X )T , and where K = K(X , X ). This is the posterior predictive distribution resulting from exact Bayesian inference in an inï¬nitely wide neural network.
# 2.3.1 Gaussian processes from gradient descent training
If we freeze the variables θâ¤L after initialization and only optimize θL+1, the original network and its linearization are identical. Letting the width approach inï¬nity, this particular tangent kernel ËÎ0 will converge to K in probability and Equation 10 will converge to the posterior Equation 13 as t â â (for further details see SM §D). This is a realization of the âsample-then-optimize" approach for evaluating the posterior of a Gaussian process proposed in Matthews et al. [12]. If none of the variables are frozen, in the inï¬nite width setting, ËÎ0 also converges in probability to a deterministic kernel Î [13, 37], which we sometimes refer to as the analytic kernel, and which can also be computed recursively (see SM §E). For ReLU and erf nonlinearity, Î can be exactly computed (SM §C) which we use in §3. Letting the width go to inï¬nity, for any t, the output f lin t (x) of the linearized network is also Gaussian distributed because Equations 10 and 11 describe an afï¬ne transform of the Gaussian [f0(x), f0(X )]. Therefore Corollary 1. For every test points in x â XT , and t ⥠0, f lin goes to inï¬nity to a Gaussian with mean and covariance given by9
(Xr) = © (Xr, X)O7* (7 - e"°") y, (14) U(r, er) = K (Xr, Xr) + O(Xr, VOW! (7 - e"°") K (7 - a) 0-10 (X, Xr) = (O(n, YOM (T=) K(X, Ar) + hic.) . (15)
Therefore, over random initialization, limyo0 littnso0 fii" N (0 (Ar, x) O71â,
t (x) has distribution
Therefore, over random initialization, limyo0 littnso0 fii" (x) has distribution
K (Xp, &r) + O(Ar, XO '1KO10 (4X, Xp) â (O(A7, X)O1K(X, Xr) + hc.) ). (16)
Unlike the case when only θL+1 is optimized, Equations 14 and 15 do not admit an interpretation corresponding to the posterior sampling of a probabilistic model.10 We contrast the predictive distributions from the NNGP, NTK-GP (i.e. Equations 14 and 15) and ensembles of NNs in Figure 2.
Inï¬nitely-wide neural networks open up ways to study deep neural networks both under fully Bayesian training through the Gaussian process correspondence, and under GD training through the lineariza- tion perspective. The resulting distributions over functions are inconsistent (the distribution resulting
8 This imposes that hL+1 directly corresponds to the network predictions. In the case of softmax readout,
variational or sampling methods are required to marginalize over hL+1. 9Here â+h.c.â is an abbreviation for âplus the Hermitian conjugateâ. 10One possible exception is when the NNGP kernel and NTK are the same up to a scalar multiplication. This
is the case when the activation function is the identity function and there is no bias term.
5
plese la/lo ls eh alla JOP â 0 nL le [PRP Helle 2 eZ a ee > 2 bee 2e{[-- ve 2 2»[ fais me Epa 2tf-- 1/n 2 ael let eas | seo it ate 26} NN ; 2 : 27 27 27 2% 2 2% gh gm QM 27 2 2F 27 2F 2F 2h gt Qe 2 2725 2 27 2% 27 2 gm Qe 2 27-25 2 27 2% 27 2⢠am Qe t n n n
Figure 1: Relative Frobenius norm change during training. Three hidden layer ReLU net- works trained with η = 1.0 on a subset of MNIST (|D| = 128). We measure changes of (in- put/output/intermediary) weights, empirical ËÎ, and empirical ËK after T = 217 steps of gradient descent updates for varying width. We see that the relative change in input/output weights scales as n while intermediate weights scales as 1/n, this is because the dimension of the input/output 1/ does not grow with n. The change in ËÎ and ËK is upper bounded by O (1/ n) but is closer to O (1/n). See Figure S6 for the same experiment with 3-layer tanh and 1-layer ReLU networks. See Figures S9 and S10 for additional comparisons of ï¬nite width empirical and analytic kernels.
from GD training does not generally correspond to a Bayesian posterior). We believe understand- ing the biases over learned functions induced by different training schemes and architectures is a fascinating avenue for future work.
# Inï¬nite width networks are linearized networks
Equation 2 and 3 of the original network are intractable in general, since ©, evolves with time. However, for the mean squared loss, we are able to prove formally that, as long as the learning rate 1 < Neritical >= 2(Amin(©) + Amax(@))~+, where Aminâmax(@) is the min/max eigenvalue of O, the gradient descent dynamics of the original neural network falls into its linearized dynamics regime. Theorem 2.1 (Informal). Let ny = +--+ = np = nand assume Xmin(O) > 0. Applying gradient descent with learning rate 1 < Neritical (or gradient flow), for every x ⬠R"® with ||x\|2 < 1, with probability arbitrarily close to 1 over random initialization,
|. â Polly s vnâ 430 in ( o (lla su sup sup || fi(x) â =O(n-?), as noo. (17) t>0 F
Therefore, as n â> ov, the distributions of f;(2) and f!!"(2) become the same. Coupling with Corollary |, we have Theorem 2.2. If) < Meritical, then for every x ⬠R" with ||x||2 < 1, asn > ov, f(x) converges in distribution to the Gaussian with mean and variance given by Equation 14 and Equation 15.
We refer the readers to Figure 2 for empirical verification of this theorem. The proof of Theorem 2.1 consists of two steps. The first step is to prove the global convergence of overparameterized neural networks [19, 20, 21, 22] and stability of the NTK under gradient descent (and gradient flow); see SM §G. This stability was first observed and proved in [13] in the gradient flow and sequential limit (i.e. letting n; > 00, ..., m_ â oo sequentially) setting under certain assumptions about global convergence of gradient flow. In §G, we show how to use the NTK to provide a self-contained (and cleaner) proof of such global convergence and the stability of NTK simultaneously. The second step is to couple the stability of NTK with Grénwallâs type arguments [38] to upper bound the discrepancy between f, and f?", i.e. the first norm in Equation 17. Intuitively, the ODE of the original network (Equation 3) can be considered as a 6: â Oo || --fluctuation from the linearized ODE (Equation 7). One expects the difference between the solutions of these two ODEs to be upper bounded by some functional of 9; - Ool| Fr; see SM §H. Therefore, for a large width network, the training dynamics can be well approximated by linearized dynamics.
Note that the updates for individual weights in Equation 6 vanish in the inï¬nite width limit, which for instance can be seen from the explicit width dependence of the gradients in the NTK parameterization. Individual weights move by a vanishingly small amount for wide networks in this regime of dynamics, as do hidden layer activations, but they collectively conspire to provide a ï¬nite change in the ï¬nal output of the network, as is necessary for training. An additional insight gained from linearization
6
of the network is that the individual instance dynamics derived in [13] can be viewed as a random features method,11 where the features are the gradients of the model with respect to its weights.
# 2.5 Extensions to other optimizers, architectures, and losses
Our theoretical analysis thus far has focused on fully-connected single-output architectures trained by full batch gradient descent. In SM §B we derive corresponding results for: networks with multi- dimensional outputs, training against a cross entropy loss, and gradient descent with momentum.
In addition to these generalizations, there is good reason to suspect the results to extend to much broader class of models and optimization procedures. In particular, a wealth of recent literature suggests that the mean ï¬eld theory governing the wide network limit of fully-connected models [32, 33] extends naturally to residual networks [35], CNNs [34], RNNs [39], batch normalization [40], and to broad architectures [37]. We postpone the development of these additional theoretical extensions in favor of an empirical investigation of linearization for a variety of architectures.
t=0 t=16 t=256 t=65536 â NNs (mm NNGP lm NTK Output Value
Figure 2: Dynamics of mean and variance of trained neural network outputs follow analytic dynamics from linearization. Black lines indicate the time evolution of the predictive output distribution from an ensemble of 100 trained neural networks (NNs). The blue region indicates the analytic prediction of the output distribution throughout training (Equations 14, 15). Finally, the red region indicates the prediction that would result from training only the top layer, corresponding to an NNGP (Equations S22, S23). The trained network has 3 hidden layers of width 8192, tanh activation functions, Ï2 w = 1.5, no bias, and η = 0.5. The output is computed for inputs interpolated between two training points (denoted with black dots) x(α) = αx(1) + (1 â α)x(2). The shaded region and dotted lines denote 2 standard deviations (â¼ 95% quantile) from the mean denoted in solid lines. Training was performed with full-batch gradient descent with dataset size |D| = 128. For dynamics for individual function initializations, see SM Figure S1.
# 3 Experiments
In this section, we provide empirical support showing that the training dynamics of wide neural networks are well captured by linearized models. We consider fully-connected, convolutional, and wide ResNet architectures trained with full- and mini- batch gradient descent using learning rates sufï¬ciently small so that the continuous time approximation holds well. We consider two-class classiï¬cation on CIFAR-10 (horses and planes) as well as ten-class classiï¬cation on MNIST and CIFAR-10. When using MSE loss, we treat the binary classiï¬cation task as regression with one class regressing to +1 and the other to â1.
Experiments in Figures 1, 4, S2, S3, S4, S5 and S6, were done in JAX [41]. The remaining experi- ments used TensorFlow [42]. An open source implementation of this work providing tools to inves- tigate linearized learning dynamics is available at www.github.com/google/neural-tangents [15].
Predictive output distribution: In the case of an MSE loss, the output distribution remains Gaussian throughout training. In Figure 2, the predictive output distribution for input points interpolated between two training points is shown for an ensemble of neural networks and their corresponding GPs. The interpolation is given by x(α) = αx(1) + (1 â α)x(2) where x(1,2) are two training inputs
11We thank Alex Alemi for pointing out a subtlety on correspondence to a random features method.
7
Test output value â" Neural Network 2 Linearized Model ~ Weight change (w) oo Loss roe âVÂ¥(@) = F'"@)) oo ee ae 10?
Figure 3: Full batch gradient descent on a model behaves similarly to analytic dynamics on its linearization, both for network outputs, and also for individual weights. A binary CIFAR classiï¬cation task with MSE loss and a ReLU fully-connected network with 5 hidden layers of width n = 2048, η = 0.01, |D| = 256, k = 1, Ï2 w = 2.0, and Ï2 b = 0.1. Left two panes show dynamics for a randomly selected subset of datapoints or parameters. Third pane shows that the dynamics of loss for training and test points agree well between the original and linearized model. The last pane shows the dynamics of RMSE between the two models on test points. We observe that the empirical kernel ËÎ gives more accurate dynamics for ï¬nite width networks.
with different classes. We observe that the mean and variance dynamics of neural network outputs during gradient descent training follow the analytic dynamics from linearization well (Equations 14, 15). Moreover the NNGP predictive distribution which corresponds to exact Bayesian inference, while similar, is noticeably different from the predictive distribution at the end of gradient descent training. For dynamics for individual function draws see SM Figure S1.
Comparison of training dynamics of linearized network to original network: For a particular realization of a finite width network, one can analytically predict the dynamics of the weights and outputs over the course of training using the empirical tangent kernel at initialization. In Figures 3, 4 (see also S2, S3), we compare these linearized dynamics (Equations 8, 9) with the result of training the actual network. In all cases we see remarkably good agreement. We also observe that for finite networks, dynamics predicted using the empirical kernel © better match the data than those obtained using the infinite-width, analytic, kernel ©. To understand this we note that oe? = 69 lp = O(1/n) < O(1/ Vn) = ||6%â â Oz, where 6â) denotes the empirical tangent kernel of width n network, as plotted in Figure |. One can directly optimize parameters of f'" instead of solving the ODE induced by the tangent kernel ©. Standard neural network optimization techniques such as mini-batching, weight decay, and data augmentation can be directly applied. In Figure 4 (S2, S3), we compared the training dynamics of the linearized and original network while directly training both networks.
With direct optimization of linearized model, we tested full (|D| = 50, 000) MNIST digit classiï¬ca- tion with cross-entropy loss, and trained with a momentum optimizer (Figure S3). For cross-entropy loss with softmax output, some logits at late times grow indeï¬nitely, in contrast to MSE loss where logits converge to target value. The error between original and linearized model for cross entropy loss becomes much worse at late times if the two models deviate signiï¬cantly before the logits enter their late-time steady-growth regime (See Figure S4).
Linearized dynamics successfully describes the training of networks beyond vanilla fully-connected models. To demonstrate the generality of this procedure we show we can predict the learning dynamics of subclass of Wide Residual Networks (WRNs) [14]. WRNs are a class of model that are popular in computer vision and leverage convolutions, batch normalization, skip connections, and average pooling. In Figure 4, we show a comparison between the linearized dynamics and the true dynamics for a wide residual network trained with MSE loss and SGD with momentum, trained on the full CIFAR-10 dataset. We slightly modiï¬ed the block structure described in Table S1 so that each layer has a constant number of channels (1024 in this case), and otherwise followed the original implementation. As elsewhere, we see strong agreement between the predicted dynamics and the result of training.
Effects of dataset size: The training dynamics of a neural network match those of its linearization when the width is inï¬nite and the dataset is ï¬nite. In previous experiments, we chose sufï¬ciently wide networks to achieve small error between neural networks and their linearization for smaller
8
Train output Test output : 0.14 : eed : 10 : Accuracy : 0.8 Neural Network 0.8 0.121 â Trin || 0.6} -- Linearized Model 0.6 0.10 = + Train fin [ 04 04 0.08 â Test 02 0.2 0.06 + = Test fin 0.0 0.0} -./ 0.04 - -0.2 ; od 02 fe eed 0.02 fenie nN ERO YE 0,00 Le 0,0 Cel 10° 107 10? 107 10* 10° 107 10? 107 10* 10° 107 10? 10° 10* 10° 107 10? 10° 10* t t t t
Figure 4: A wide residual network and its linearization behave similarly when both are trained by SGD with momentum on MSE loss on CIFAR-10. We adopt the network architecture from Zagoruyko and Komodakis [14]. We use N = 1, channel size 1024, η = 1.0, β = 0.9, k = 10, Ï2 b = 0.0. See Table S1 for details of the architecture. Both the linearized and original model are trained directly on full CIFAR-10 (|D| = 50, 000), using SGD with batch size 8. Output dynamics for a randomly selected subset of train and test points are shown in the ï¬rst two panes. Last two panes show training and accuracy curves for the original and linearized networks.
datasets. Overall, we observe that as the width grows the error decreases (Figure S5). Additionally, we see that the error grows in the size of the dataset. Thus, although error grows with dataset this can be counterbalanced by a corresponding increase in the model size.
# 4 Discussion
We showed theoretically that the learning dynamics in parameter space of deep nonlinear neural networks are exactly described by a linearized model in the inï¬nite width limit. Empirical investiga- tion revealed that this agrees well with actual training dynamics and predictive distributions across fully-connected, convolutional, and even wide residual network architectures, as well as with different optimizers (gradient descent, momentum, mini-batching) and loss functions (MSE, cross-entropy). Our results suggest that a surprising number of realistic neural networks may be operating in the regime we studied. This is further consistent with recent experimental work showing that neural networks are often robust to re-initialization but not re-randomization of layers (Zhang et al. [43]). In the regime we study, since the learning dynamics are fully captured by the kernel ËÎ and the target signal, studying the properties of ËÎ to determine trainability and generalization are interesting future directions. Furthermore, the inï¬nite width limit gives us a simple characterization of both gradient descent and Bayesian inference. By studying properties of the NNGP kernel K and the tangent kernel Î, we may shed light on the inductive bias of gradient descent.
Some layers of modern neural networks may be operating far from the linearized regime. Preliminary observations in Lee et al. [5] showed that wide neural networks trained with SGD perform similarly to the corresponding GPs as width increase, while GPs still outperform trained neural networks for both small and large dataset size. Furthermore, in Novak et al. [7], it is shown that the comparison of performance between ï¬nite- and inï¬nite-width networks is highly architecture-dependent. In particular, it was found that inï¬nite-width networks perform as well as or better than their ï¬nite-width counterparts for many fully-connected or locally-connected architectures. However, the opposite was found in the case of convolutional networks without pooling. It is still an open research question to determine the main factors that determine these performance gaps. We believe that examining the behavior of inï¬nitely wide networks provides a strong basis from which to build up a systematic understanding of ï¬nite-width networks (and/or networks trained with large learning rates).
# Acknowledgements
We thank Greg Yang and Alex Alemi for useful discussions and feedback. We are grateful to Daniel Freeman, Alex Irpan and anonymous reviewers for providing valuable feedbacks on the draft. We thank the JAX team for developing a language which makes model linearization and NTK computation straightforward. We would like to especially thank Matthew Johnson for support and debugging help.
9
# References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in Neural Information Processing Systems. 2012.
[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition, pages 770â778, 2016.
[3] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[4] Radford M. Neal. Priors for inï¬nite networks (tech. rep. no. crg-tr-94-1). University of Toronto, 1994.
[5] Jaehoon Lee, Yasaman Bahri, Roman Novak, Sam Schoenholz, Jeffrey Pennington, and Jascha Sohl-dickstein. Deep neural networks as gaussian processes. In International Conference on Learning Representations, 2018.
[6] Alexander G. de G. Matthews, Jiri Hron, Mark Rowland, Richard E. Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. In International Conference on Learning Representations, 4 2018. URL https://openreview.net/forum? id=H1-nGgWC-.
[7] Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Jiri Hron, Daniel A. Abolaï¬a, Jeffrey Pennington, and Jascha Sohl-Dickstein. Bayesian deep convolutional net- works with many channels are gaussian processes. In International Conference on Learning Representations, 2019.
[8] Adrià Garriga-Alonso, Laurence Aitchison, and Carl Edward Rasmussen. Deep convolutional networks as shallow gaussian processes. In International Conference on Learning Representa- tions, 2019.
[9] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. In International Conference on Learning Representations workshop track, 2015.
[10] Roman Novak, Yasaman Bahri, Daniel A. Abolaï¬a, Jeffrey Pennington, and Jascha Sohl- Dickstein. Sensitivity and generalization in neural networks: an empirical study. In International Conference on Learning Representations, 2018.
[11] Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. The role of over-parametrization in generalization of neural networks. In International Conference on Learning Representations, 2019.
[12] Alexander G. de G. Matthews, Jiri Hron, Richard E. Turner, and Zoubin Ghahramani. Sample- then-optimize posterior sampling for bayesian linear models. In NeurIPS Workshop on Advances in Approximate Bayesian Inference, 2017. URL http://approximateinference.org/ 2017/accepted/MatthewsEtAl2017.pdf.
[13] Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems, 2018.
[14] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In British Machine Vision Conference, 2016.
[15] Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Sohl- Dickstein, and Samuel S. Schoenholz. Neural tangents: Fast and easy inï¬nite neural networks in python. https://github.com/google/neural-tangents, https://arxiv.org/abs/ 1912.02803, 2019.
[16] Amit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In Advances In Neural Information Processing Systems, 2016.
10
[17] Amit Daniely. SGD learns the conjugate kernel class of the network. In Advances in Neural Information Processing Systems, 2017.
[18] Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In International Conference on Learning Representations, 2014.
[19] Simon S Du, Jason D Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent ï¬nds global minima of deep neural networks. In International Conference on Machine Learning, 2019.
[20] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, 2019.
[21] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. On the convergence rate of training recurrent neural networks. arXiv preprint arXiv:1810.12065, 2018.
[22] Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent optimizes over-parameterized deep relu networks. Machine Learning, 2019.
[23] Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean ï¬eld view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33): E7665âE7671, 2018.
[24] Lenaic Chizat and Francis Bach. On the global convergence of gradient descent for over- parameterized models using optimal transport. In Advances in neural information processing systems, 2018.
[25] Grant M Rotskoff and Eric Vanden-Eijnden. Parameters as interacting particles: long time convergence and asymptotic error scaling of neural networks. In Advances in neural information processing systems, 2018.
[26] Justin Sirignano and Konstantinos Spiliopoulos. Mean ï¬eld analysis of neural networks. arXiv preprint arXiv:1805.01053, 2018.
[27] Xavier Glorot and Yoshua Bengio. Understanding the difï¬culty of training deep feedforward neural networks. In International Conference on Artiï¬cial Intelligence and Statistics, pages 249â256, 2010.
[28] Lenaic Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable program- ming. arXiv preprint arXiv:1812.07956, 2018.
[29] Twan van Laarhoven. L2 regularization versus batch and weight normalization. arXiv preprint arXiv:1706.05350, 2017.
[30] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representa- tions, 2018.
[31] Daniel S. Park, Jascha Sohl-Dickstein, Quoc V. Le, and Samuel L. Smith. The effect of network width on stochastic gradient descent and generalization: an empirical study. In International Conference on Machine Learning, 2019.
[32] Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. In Advances In Neural Information Processing Systems, pages 3360â3368, 2016.
[33] Samuel S Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep informa- tion propagation. International Conference on Learning Representations, 2017.
[34] Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel Schoenholz, and Jeffrey Penning- ton. Dynamical isometry and a mean ï¬eld theory of CNNs: How to train 10,000-layer vanilla convolutional neural networks. In International Conference on Machine Learning, 2018.
11
[35] Ge Yang and Samuel Schoenholz. Mean ï¬eld residual networks: On the edge of chaos. In Advances in Neural Information Processing Systems. 2017.
[36] Alexander G de G Matthews, Mark Rowland, Jiri Hron, Richard E Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. arXiv preprint arXiv:1804.11271, 9 2018.
[37] Greg Yang. Scaling limits of wide neural networks with weight sharing: Gaussian pro- cess behavior, gradient independence, and neural tangent kernel derivation. arXiv preprint arXiv:1902.04760, 2019.
[38] Sever Silvestru Dragomir. Some Gronwall type inequalities and applications. Nova Science Publishers New York, 2003.
[39] Minmin Chen, Jeffrey Pennington, and Samuel Schoenholz. Dynamical isometry and a mean ï¬eld theory of RNNs: Gating enables signal propagation in recurrent neural networks. In International Conference on Machine Learning, 2018.
[40] Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, and Samuel S. Schoen- holz. A mean ï¬eld theory of batch normalization. In International Conference on Learning Representations, 2019.
[41] Roy Frostig, Peter Hawkins, Matthew Johnson, Chris Leary, and Dougal Maclaurin. JAX: Autograd and XLA. www.github.com/google/jax, 2018.
[42] MartÃn Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorï¬ow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016.
[43] Chiyuan Zhang, Samy Bengio, and Yoram Singer. Are all layers created equal? arXiv preprint arXiv:1902.01996, 2019.
[44] Ning Qian. On the momentum term in gradient descent learning algorithms. Neural networks, 12(1):145â151, 1999.
[45] Weijie Su, Stephen Boyd, and Emmanuel Candes. A differential equation for modeling nes- terovâs accelerated gradient method: Theory and insights. In Advances in Neural Information Processing Systems, pages 2510â2518, 2014.
[46] Youngmin Cho and Lawrence K Saul. Kernel methods for deep learning. In Advances in neural information processing systems, 2009.
[47] Christopher KI Williams. Computing with inï¬nite networks. In Advances in neural information processing systems, pages 295â301, 1997.
[48] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
12
# Supplementary Material
# A Additional ï¬gures
t=16 t=256 t=65536 â NNs lm NNGP (mm NTK Output Value -1 0 1 2 3 a a a
Figure S1: Sample of neural network outputs. The lines correspond to the functions learned for 100 different initializations. The conï¬guration is the same as in Figure 2.
20 Train output 20 Test output 0.004 Weight change(w) 1s â _ Neural Network 0.002 g* -- Linearized Model 7 10 0.0 0.5 -0.002 0.0F -0.004 -05 -0.006 -10 -0.008 07 Loss Lo Accuracy 10° 0.6 : 5 B O.5b-- : { : O.4- : 0.3 HW â Train P o2l{-- Train fe â Test O1F test rie : : : : 0.0 0.4 8 i H i i 10° 10 10?_â 10° 10° 104 10 â 108 10° = 10 10 108 t t t
w > 8 & 0
Figure S2: A convolutional network and its linearization behave similarly when trained using full batch gradient descent with a momentum optimizer. Binary CIFAR classiï¬cation task with MSE loss, tanh convolutional network with 3 hidden layers of channel size n = 512, 3 à 3 size ï¬lters, average pooling after last convolutional layer, η = 0.1, β = 0.9, |D| = 128, Ï2 w = 2.0 and Ï2 b = 0.1. The linearized model is trained directly by full batch gradient descent with momentum, rather than by integrating its continuous time analytic dynamics. Panes are the same as in Figure 3.
1
15 Train output 15 Test output Weight change(w) â _ Neural Network -- Linearized Model Output Value _ glin 2 0.30 Loss ro Â¥ Ml@) = F"(@))â) â Train 1.0} 0.25 =~ Train fH 10° 0.20 + â Test |] O97 107 - Test fi 0.15 || 0.8 102 0.10 4 0.7) 10? 0.05} od) 06+ 10+ 0.00 i i i i 0.5 i i i i i 10° i i i i i 10° 101 10? 10? 104 10° 10° 101 10% 10? 104 10° 10° 10! 10? 10? 10% 10° t t t
Figure S3: A neural network and its linearization behave similarly when both are trained via SGD with momentum on cross entropy loss on MNIST. Experiment is for 10 class MNIST classiï¬cation using a ReLU fully connected network with 2 hidden layers of width n = 2048, η = 1.0, β = 0.9, |D| = 50, 000, k = 10, Ï2 b = 0.1. Both models are trained using stochastic minibatching with batch size 64. Panes are the same as in Figure 3, except that the top row shows all ten logits for a single randomly selected datapoint.
15 n=32 a n=64 â Neural Network 2 = + Linearized Model 10 8 6 4 2 â5 0 -2 ic = SET RET TRE CRET EME LET LEE CME CCREUEMET LAETLEE TUNE COREE CERT LET LRETGHETOREU LET LEE CEE TLNEULEE GET ERENT 10* 10° 10° 10 10° 10° 10° 107 10° 10° 107 10° 10° 10° 10° 10° 10° 10* 10° 10° 10° io* 10° 10° 107 10° 10° 10 t t t t n=4096 2 4 -4 =: io* 10° 10° 10? 10° 10° 10° 10% 10° 10° 107 10° 10* 10° 10° 10° 10° 10* 10° 10° 10° io* 10° 10° 10% 10° 10* 10° t t t t
Figure S4: Logit deviation for cross entropy loss. Logits for models trained with cross entropy loss diverge at late times. If the deviation between the logits of the linearized model and original model are large early in training, as shown for the narrower networks (ï¬rst row), logit deviation at late times can be signiï¬cantly large. As the network becomes wider (second row), the logit deviates at a later point in training. Fully connected tanh network L = 4 trained on binary CIFAR classiï¬cation problem.
2
CNN Wide ResNet FC, Dataset Size 20 FC 2 E L b= L L a : 2° 28 27 28 2° 2% 2% 2% 2 28 2? 2 2 2 22 27 28 28 Width Channels Channels
Figure S5: Error dependence on depth, width, and dataset size. Final value of the RMSE for fully-connected, convolutional, wide residual network as networks become wider for varying depth and dataset size. Error in fully connected networks as the depth is varied from 1 to 16 (ï¬rst) and the dataset size is varied from 32 to 4096 (last). Error in convolutional networks as the depth is varied between 1 and 32 (second), and WRN for depths 10 and 16 corresponding to N=1,2 described in Table S1 (third). Networks are critically initialized Ï2 b = 0.1, trained with gradient descent on MSE loss. Experiments in the ï¬rst three panes used |D| = 128.
[66°60 ga tle ep! â Pe le yo? 62 lle/O lle hy 21 Bs 2? 2? 2 2 2? 2 2 24 24 3 L 2 2 seq 28 a Ee 28 2 2 : 2 Va 2H -- va 3a 28 ue 3 2 T(E Tis EE aoe Zo 3 1jn Zo Un 2UP lis ae: 4 ozs ES mse tos aie yer 3 Gp ai i" 2' {Ea 256 | 2048 16384 20H. av layer 2 (output) 21} âTâ7 NN 2u{|â--] NN 2H 3a an aN 20-2 at ge 2H gh git git gle 2-2 a a ye gio 2h 2h ght gle 2EQ DEAT ae gF ghogirgingings 2B DEAT OF BF phot giz git t n n n 16, = 68" le/|]65 le (lef = 64l|»/ {164 [he oP â 69 fe / Oy" e [ee â 9° Me (MSIL 2 » Pa 2 2 » * 2 » â EH 2 ZZ 24 2? 2¢ 28 a 20be Zz 2 : a 2 : 2 2* 2 a 25 TLE layer inputs Suffer iee ae Ep romay] Zaller 26 wf [Ft 320 54 2s0 Ff 20@o|] Su [Pa yer 2} nN 28h dey eo siz0 4 4006.0) 2 ya |] yer oun . 2% - - ae SSS. «= SSS ââ Yo ge ge ge gh gi git gh ge PE Sa ai gh Yaris Or ararairgirgh t n n n
Figure S6: Relative Frobenius norm change during training. (top) One hidden layer, ReLU networks trained with η = 1.0, on a 2-class CIFAR10 subset of size |D| = 128. We measure changes of (read-out/non read-out) weights, empirical ËÎ and empirical ËK after T = 216 steps of gradient descent updates for varying width. (bottom) Networks with three layer tanh nonlinearity and other details are identical to Figure 1.
# B Extensions
# B.1 Momentum
One direction is to go beyond vanilla gradient descent dynamics. We consider momentum updates12
θi+1 = θi + β(θi â θiâ1) â ηâθL|θ=θi . (S1)
The discrete update to the function output becomes
i (x) â η ËÎ0(x, X )âf lin i+1(x) = f lin f lin i (X )L + β(f lin i (x) â f lin iâ1(x)) (S2)
12Combining the usual two stage update into a single equation.
3
# gle
]
where f lin limit as in Qian [44], Su et al. [45] and obtain t (x) is the output of the linearized network after t steps. One can take the continuous time
¨Ït = Ëβ ËÏt â âθf lin t (X )L (S3)
0 (X )T âf lin t (x) â ËÎ0(x, X )âf lin
¨ft lin(x) = Ëβ Ëf lin (S4)
# t (X )L â
â
where continuous time relates to steps t = i η. These equations are also amenable to analytic treatment for MSE loss. See Figure S2, S3 and 4 for experimental agreement.
# B.2 Multi-dimensional output and cross-entropy loss
One can extend the loss function to general functions with multiple output dimensions. Unlike for squared error, we do not have a closed form solution to the dynamics equation. However, the equations for the dynamics can be solved using an ODE solver as an initial value problem.
exp(f") dj exp(f7) â Uf.y) =-doylogo(f'), â o( fF) = (S5) a
â Ëyi = Ï(Ëyi) â yi. For general input point x and for an arbitrary parameterized function
Recall that ae = 0(g°) â yâ. For general input point x and for an f'(x) parameterized by 6, âen flow dynamics is given by file) = Veh =âaVeH) DO [very
file) = Veh =âaVeH) DO [very SEY| (s6) Jj (zy)eD .
. (oH#@)-y')
# Jj
==1 SL voli@Vvoi" (oH#@)-y') 7) (zyJeD J
Let ËÎij(x, X ) = âθf i(x)âθf j(X )T . The above is
Ëft(X ) = âη ËÎt(X , X ) (Ï(ft(X )) â Y) Ëft(x) = âη ËÎt(x, X ) (Ï(ft(X )) â Y) .
(S8)
(S9)
The linearization is
lin (S10)
= ân@o(Â¥, Â¥) (o( fi" (@) = ânOo(a, %) (o(fi"(Â¥))
Fi2(Â¥)) â Y) (o(fi"(Â¥)) âY) «
# Ëft Ëft
lin (S11)
For general loss, e.g. cross-entropy with softmax output, we need to rely on solving the ODE Equations S10 and S11. We use the dopri5 method for ODE integration, which is the default integrator in TensorFlow (tf.contrib.integrate.odeint).
# C Neural Tangent kernel for ReLU and erf
For ReLU and erf activation functions, the tangent kernel can be computed analytically. We begin with the case Ï = ReLU; using the formula from Cho and Saul [46], we can compute T and ËT in closed form. Let Σ be a 2 à 2 PSD matrix. We will use
a(n) = foe w)or(y we IM Paw- 2m) = lal In(@) S12) TT
where
o(@) = max(z,0), A(x, y = arcoos (==) ; (0). Oey) Fella 2 Jo(0)=7â-0, A(0) =sin@ +(x â 8) cos =4/1 ak -) (Sua) 00) =7-8, (0) = sind + (m â 8) cos (ch (r-4) llell|lyh (S13)
4
Let d = 2 and u = (x · w, y · w)T . Then u is a mean zero Gaussian with Σ = [[x · x, x · y]; [x · y, y · y]]. Then
1 2Ï 1 2Ï
1 T(E) = ki(e,y) = == lleiillull 1) (S14)
ËT (Σ) = k0(x, y) = J0(θ) (S15)
For ¢ = erf, let © be the same as above. Following Williams [47], we get
T() = Zain ( 20y ) (S16) T (1+ 2a -x)(1 + 2y-y)
ËT (Σ) = 4 Ï det(I + 2Σ)â1/2 (S17)
# D Gradient ï¬ow dynamics for training only the readout-layer
The connection between Gaussian processes and Bayesian wide neural networks can be extended to the setting when only the readout layer parameters are being optimized. More precisely, we show that when training only the readout layer, the outputs of the network form a Gaussian process (over an ensemble of draws from the parameter prior) throughout training, where that output is an interpolation between the GP prior and GP posterior. Note that for any x,xâ ⬠Râ®, in the infinite width limit Z(a) - Z(aâ) = K(a,2â) + K(a, 2") in L probability, where for notational simplicity we assign Z(x) = [eo ; 07 . The regression problem is specified with mean-squared loss
1, 5 1, £= S|If(Â¥) â Ya = 5lla(e)orâ¢* â Y\lp, (S18)
and applying gradient ï¬ow to optimize the readout layer (and freezing all other parameters),
OP) = ânz(x)" (z(v)04*1 â y) , (S19) where 77 is the learning rate. The solution to this ODE gives the evolution of the output of an arbitrary x*. So long as the empirical kernel 7(4â)(4)7 is invertible, it is
fula®) = fola*) + Kw, X)K(Â¥,Â¥) 7 (exp (ântK(Â¥,4)) = 1) (Fol®) âY) ($20)
For any x, xâ ⬠R"®, letting n; > oo for! = 1,..., L, one has the convergence in distribution in probability and distribution respectively
2(x)z(x') > K(a,aâ) and 2(&)0Â¥t > N(0,K(Â¥,24)). (S21)
Moreover ¯x(X )θL+1 and the term containing f0(X ) are the only stochastic term over the ensemble of network initializations, therefore for any t the output f (xâ) throughout training converges to a Gaussian distribution in the inï¬nite width limit, with
E[ft(xâ)] = K(xâ, X )Kâ1(I â eâηKt)Y , (S22)
Var[ft(xâ)] = K(xâ, xâ) â K(xâ, X )Kâ1(I â eâ2ηKt)K(xâ, X )T . (S23)
Thus the output of the neural network is also a GP and the asymptotic solution (i.e. t â â) is identical to the posterior of the NNGP (Equation 13). Therefore, in the inï¬nite width case, the optimized neural network is performing posterior sampling if only the readout layer is being trained. This result is a realization of sample-then-optimize equivalence identiï¬ed in Matthews et al. [12].
# E Computing NTK and NNGP Kernel
For completeness, we reproduce, informally, the recursive formula of the NNGP kernel and the tangent kernel from [5] and [13], respectively. Let the activation function Ï : R â R be absolutely
5
continuous. Let T and ËT be functions from 2 à 2 positive semi-deï¬nite matrices Σ to R given by
T(E) = E[d(u)d(v)] u,v) ~ (70) - crwoyy (tet N(0,5). (S24)
In the infinite width limit, the NNGP and tangent kernel can be computed recursively. Let x, xâ be two inputs in R"°. Then h!(x) and h!(x) converge in distribution to a joint Gaussian as min{n1,...,7j~1}. The mean is zero and in variance K! (x, xâ) is
K(x, 2") = K'(x,2") @ Id), (S25)
c K'la,2) Ka, a! : K(x, a!) = 02.7 (ho OO ge Co iâ +o? (S26)
with base case
>» L F K\(a,a') = 02 -âaT a! +o}. (S27) no
Using this one can also derive the tangent kernel for gradient descent training. We will use induction to show that
0! (2, 2") = O'(2, 2") @Idn,
(S28)
where
Aly. ot) â Flr. ot zal, nq (fa, 2) K+ (a, 2") O(a, 2â) = K'(a, 2") +050 (a, 2°)T (Eee Kila, 2!) (S29)
with ËÎ1 = ËK1. Let
J l(x) = âθâ¤l hl 0(x) = [âθl hl 0(x), âθ<lhl 0(x)]. (S30)
Then
J'(x) JI! (a)? = Voth (a) Varho(aâ)? + Vo<cch}(a)Vg<ihg(aâ)â (S31)
Letting n1,...,mj-1 â 00 sequentially, the first term converges to the NNGP kernel K! (x, xâ). By applying the chain rule and the induction step (letting n1,...,~2 â oo sequentially), the second term is
# T
T l hp (x) i-1 l-1 hp (2') Vo<ihd(x)Vg<ihp(aâ)? = ana) Vest (a)Vg<i-1hg '(aâ)â ant (2) (S32)
âhl âhlâ1 0 âhl âhlâ1 0
Aho (x) |i > â2â0' 1! (2, 2â) @1d,,_, Ny, ...,M_2 > CO MI y et) 21a (nss.2m-2 00)
(S33)
0}, (Eo"(hi,1(2))6" (hh (@)O'M(a,2")) @Idn, (m1 -Â¥ 00) (S34)
(S34)
= âfowwen (f= G 2h i ae })) @Id,, (835)
# F Results in function space for NTK parameterization transfer to standard parameterization
In this Section we present a sketch for why the function space linearization results, derived in [13] for NTK parameterized networks, also apply to networks with a standard parameterization. We follow this up with a formal proof in §G of the convergence of standard parameterization networks to their linearization in the limit of inï¬nite width. A network with standard parameterization is described as:
fr = ol Wit) + pitt re = wl bo N08 , a) and m), att = (hit?) bo = Bl AN (0,03) $0)
6
(a) MNIST (b) CIFAR
Figure S7: NTK vs Standard parameterization. Across different choices of dataset, activation function and loss function, models obtained from (S)GD training for both parameterization (circle and triangle denotes NTK and standard parameterization respectively) get similar performance.
The NTK parameterization in Equation 1 is not commonly used for training neural networks. While the function that the network represents is the same for both NTK and standard parameterization, training dynamics under gradient descent are generally different for the two parameterizations. How- ever, for a particular choice of layer-dependent learning rate training dynamics also become identical. NTK,b be layer-dependent learning rate for W l and bl in the NTK parameterization, Let ηl and ηstd = 1 η0 be the learning rate for all parameters in the standard parameterization, where nmax nmax = maxl nl. Recall that gradient descent training in standard neural networks requires a learning rate that scales with width like 1 , so η0 deï¬nes a width-invariant learning rate [31]. If we choose nmax
ηl NTK, w = nl nmaxÏ2 Ï Î·0, and ηl NTK, b = 1 nmaxÏ2 b η0, (S37)
then learning dynamics are identical for networks with NTK and standard parameterizations. With only extremely minor modiï¬cations, consisting of incorporating the multiplicative factors in Equation S37 into the per-layer contributions to the Jacobian, the arguments in §2.4 go through for an NTK network with learning rates deï¬ned in Equation S37. Since an NTK network with these learning rates exhibits identical training dynamics to a standard network with learning rate ηstd, the result in §2.4 that sufï¬ciently wide NTK networks are linear in their parameters throughout training also applies to standard networks.
We can verify this property of networks with the standard parameterization experimentally. In Figure S7, we see that for different choices of dataset, activation function and loss function, ï¬nal performance of two different parameterization leads to similar quality model for similar value of normalized learning rate ηstd = ηNTK/n. Also, in Figure S8, we observe that our results is not due to the parameterization choice and holds for wide networks using the standard parameterization.
# G Convergence of neural network to its linearization, and stability of NTK
# under gradient descent
In this section, we show that how to use the NTK to provide a simple proof of the global convergence of a neural network under (full-batch) gradient descent and the stability of NTK under gradient descent. We present the proof for standard parameterization. With some minor changes, the proof can also apply to the NTK parameterization. To lighten the notation, we only consider the asymptotic bound here. The neural networks are parameterized as in Equation S36. We make the following assumptions:
# Assumptions [1-4]:
1. The widths of the hidden layers are identical, i.e. ny = --- = nz = n (our proof extends naturally to the setting â+ > ayy ⬠(0,00) as min{n),...,nz} â oo.) ny
naturally to the setting â+ ny
7
Train output Test output Weight change(w) 3 7 i r 3 2.5e-05 â _ Neural Network 2e-05 g 2 -- Linearized Model 2 1.5e-05 Gs : 1e-05 > 5e-06 5 0.0 £ -5e-06 fat -1e-05 -1.5e-05 -2 -2e-05 SS 1.0 a 1a Accuracy 107 VU) =f"@)") 1.0 0.8 0.9 0.6 0.8 on 0.7 107} 0.414 â Train 0.6 â + Train i 0.2}} â Test 0.5 -+ Test fi 0.4+ ; x x i i i i H H ; H i Po? 107 10° 10? 10? FoF âToT 10" 10? 10? Oo? oF 10" 10! 10? t t t
Figure S8: Exact and experimental dynamics are nearly identical for network outputs, and are similar for individual weights (Standard parameterization). Experiment is for an MSE loss, ReLU network with 5 hidden layers of width n = 2048, η = 0.005/2048 |D| = 256, k = 1, Ï2 w = 2.0, and Ï2 b = 0.1. All three panes in the ï¬rst row show dynamics for a randomly selected subset of datapoints or parameters. First two panes in the second row show dynamics of loss and accuracy for training and test points agree well between original and linearized model. Bottom right pane shows the dynamics of RMSE between the two models on test points using empirical kernel.
2. The analytic NTK Î (deï¬ned in Equation S42) is full-rank, i.e. 0 < λmin := λmin(Î) ⤠λmax := λmax(Î) < â. We set ηcritical = 2(λmin + λmax)â1 .
3. The training set (7, Y) is contained in some compact set and « # & forall x,% ⬠¥. 4. The activation function ¢ satisfies 19(0)|, |I4"lloo, sup |9'(x) â 6'(&)|/|a â &| < c.
19(0)|, |I4"lloo, sup |9'(x) â 6'(&)|/|a â &| < c. (S38)
Assumption 2 indeed holds when Â¥ C {x ⬠R"°} : ||a|]2 = 1} and ¢(x) grows non-polynomially for large x [13]. Throughout this section, we use C > 0 to denote some constant whose value may depend on L, |4â| and (o?,, 07) and may change from line to line, but is always independent of n.
Let θt denote the parameters at time step t. We use the following short-hand
f (θt) = f (X , θt) â R|X |Ãk g(θt) = f (X , θt) â Y â R|X |Ãk J(θt) = âθf (θt) â R(|X |k)Ã|θ|
(S39)
(S40)
(S41)
where |X | is the cardinality of the training set and k is the output dimension of the network. The empirical and analytic NTK of the standard parameterization is deï¬ned as
©, = 0,(Â¥, #) = FI (A) I()â (S42) 0 :=limn+oo Oo in probability.
Note that the convergence of the empirical NTK in probability is proved rigorously in [37]. We consider the MSE loss
1 L(t) = Sllo(%)|l2- (S43)
8
Since f (θt) converges in distribution to a mean zero Guassian with covariance K, one can show that for arbitrarily small δ0 > 0, there are constants R0 > 0 and n0 (both may depend on δ0, |X | and K) such that for every n ⥠n0, with probability at least (1 â δ0) over random initialization,
l|g(9o)|l2 < Ro. (S44)
The gradient descent update with learning rate η is
θt+1 = θt â ηJ(θt)T g(θt) (S45)
and the gradient ï¬ow equation is
# Ëθt = âJ(θt)T g(θt).
(S46)
We prove convergence of neural network training and the stability of NTK for both discrete gradient descent and gradient ï¬ow. Both proofs rely on the local lipschitzness of the Jacobian J(θ). Lemma 1 (Local Lipschitzness of the Jacobian). There is a K > 0 such that for every C > 0, with high probability over random initialization (w.h.p.o.r.i.) the following holds
FalJ@)âJ@|le < K\l0â Ala 7 . VO, 8 ⬠B(@y,Cn-?) (S47) yallJ Olle <K
where
B(60, R) := {0 : || â O|l2 < R}. (S48)
The following are the main results of this section. Theorem G.1 (Gradient descent). Assume Assumptions [1-4]. For δ0 > 0 and η0 < ηcritical, there exist R0 > 0, N â N and K > 1, such that for every n ⥠N , the following holds with probability at least (1 â δ0) over random initialization when applying gradient descent with learning rate η = η0 n ,
t lg@)ll2 < (1â MB) Ro (S49) Thal) 8a < MABEL (1 â ay BER
and
~ ~ 33 sup 90 â Olle < ° Ro nz. (S50) min
Theorem G.2 (Gradient Flow). Assume Assumptions[1-4]. For δ0 > 0, there exist R0 > 0, N â N and K > 1, such that for every n ⥠N , the following holds with probability at least (1 â δ0) over random initialization when applying gradient ï¬ow with âlearning rate" η = η0 n
no. |9(@.)|l2 se PR (S51) II, â Pollo < 3K Ro (l- e7 3M0Amint y= 3
and
. . 13 sup |/Oo â Or||-z < 6K" Ro n-2. (S52) t min
See the following two subsections for the proof. Remark 1. One can extend the results in Theorem G.1 and Theorem G.2 to other architectures or functions as long as
1. The empirical NTK converges in probability and the limit is positive deï¬nite.
2. Lemma 1 holds, i.e. the Jacobian is locally Lipschitz.
9
# G.1 Proof of Theorem G.1
As discussed above, there exist R0 and n0 such that for every n ⥠n0, with probability at least (1 â δ0/10) over random initialization,
\|9(40)|l2 < Ro. ($53)
Let C = 3KR0 in Lemma 1. We ï¬rst prove Equation S49 by induction. Choose n1 > n0 such that λmin for every n ⥠n1 Equation S47 and Equation S53 hold with probability at least (1 â δ0/5) over random initialization. The t = 0 case is obvious and we assume Equation S49 holds for t = t. Then by induction and the second estimate of Equation S47
'
Kn N0Ami ' A.41 â Allo < nllF(9)|lopllg() lle < Te (1 -- ee) Ro, (S54)
which gives the first estimate of Equation S49 for t+1 and which also implies ||; â9o||2 < Se Ron 3 for j = 0,...,£+ 1. To prove the second one, we apply the mean value theorem and the formula for gradient decent update at step t+ 1
Il9(r+a)ll2 = |l9@r41) â g(r) + 942) 2 = || F(91)(Or41 â A) + 9(%)ll2 = || â 7F (Ge) I(0)? 9(81) + 9@)ll2 <1 â 96) F(41)" llopllg()ll2
(S55)
(S56)
= || â 7F (Ge) I(0)? 9(81) + 9@)ll2 ($57)
(S58)
\!
~ Now \! < 1 â 7 IG,) T(81)" llop (1 -2 ve) Ro, (S59)
where Ëθt is some linear interpolation between θt and θt+1. It remains to show with probability at least (1 â δ0/2),
ToAmin a \|1 â J (61)-F(61)" llop <1- (S60)
This can be veriï¬ed by Lemma 1. Because ËÎ0 â Î [37] in probability, one can ï¬nd n2 such that the event
x Ami \|O - Oo|lz < > (S61)
# 2 λmin+λmax
has probability at least (1 â δ0/5) for every n ⥠n2. The assumption η0 < implies
\]1 = no®llop < 1 = noAmin- ($62)
Thus
ld â (6) F(64)â low (S63)
S| â 100 llop + 20|l© â Oollop + mll-J(Go)-J(B0)â â J(81)-J(Be)" llow (S64)
# η0λmin 3 η0λmin 3
â¤1 â η0λmin + (S65)
: ~ + noK?((|@1 â Gola + [181 ~ Aoll2) 23K Ro 1 To Amin + Ino kK <1- 108 nin Vn = 3
â¤1 â η0λmin + 1 â n η0λmin 3 ⤠1 â (S66)
with probability as least (1 â δ0/2) if
18K3Ro " n> (A) . (S67) âmin
Therefore, we only need to set
18K? Ro \? N = max {rovmina (A) \ . (S68) min
10
(S57)
To verify Equation S50, notice that
# 1 n 1 n
wn 1 190 â Olle = = Il4(G0)1(G0)â â F(91)F(91) |e (S69)
1 <> (| 7(40)|lop ll 740)â â J(4)" |e + || 7.) â Fo) llopl|7(2)" Iz) (S70)
< 2K? 00 â Glo (S71)
6K 3R0 λmin 1 â n , (S72)
â¤
where we have applied the second estimate of Equation S49 and Equation S47.
# G.2 Proof of Theorem G.2
The ï¬rst step is the same. There exist R0 and n0 such that for every n ⥠n0, with probability at least (1 â δ0/10) over random initialization,
l9(90)|l2 < Ro. (S73)
Let C = 3KR0 λmin exists n1 such that for all n ⥠n1, with probability at least (1 â δ0/10) in Lemma 1. Using the same arguments as in Section G.1, one can show that there
* 1(0) 1(0)" - paninld V6 ⬠B(8,Cn-?) ($74)
Let
3KR0 λmin We claim t1 = â. If not, then for all t ⤠t1, θt â B(θ0, Cnâ 1
3k R t= int ft I â Olle >> â4 (S75) min
# and
a 1 6, > 5 Amin. (S76)
Thus
d 2 TE 2 2 q (llg@ll2) = â2n09(t)" Org(t) < ~ 310Amin|l9(#)II2 (S77)
and
IIg(E)||3 < eS! I g(O)|I3 < em FMM! RG, (S78)
Note that
d d No â2noAmint,, â1/2 =, ||Oe â olla < || 4 â||J(P)9(t)ll2 < nok Roe 3 'n (S79) dt dt â|\y n
# which implies, for all t ⤠t1
3K Ro â1 _1 â 3K Ro â1 _1 3KRo 1 6,-8 < Le 3 MAmint A= 2 < 1 =e 37Amin 2 n 6 â Bolla < Amin ( ) ~ Amin ( ) Amin
This contradicts to the deï¬nition of t1 and thus t1 = â. Note that Equation S78 is the same as the ï¬rst equation of Equation S51.
# G.3 Proof of Lemma 1
The proof relies on upper bounds of operator norms of random Gaussian matrices. Theorem G.3 (Corollary 5.35 [48]). Let A = AN,n be an N à n random matrix whose entries are independent standard normal random variables. Then for every t ⥠0, with probability at least 1 â 2 exp(ât2/2) one has
â
â
â
â
N â n â t ⤠λmin(A) ⤠λmax(A) ⤠N + n + t. (S81)
11
(S80)
For l ⥠1, let
(S82)
δl(θ, x) := âhl(θ,x)f L+1(θ, x) â Rkn δl(θ, X ) := âhl(θ,X )f L+1(θ, X ) â R(kÃ|X |)Ã(nÃX )
(S83)
Let θ = {W l, bl} and Ëθ = { ËW l, Ëbl} be any two points in B(θ0, Câ triangle inequality, w.h.p. over random initialization, n ). By the above theorem and the
â â
. a . |W" lop. [Wop < 30, |W" lop, ||W'llop $ 30, for 2<1<L+1 (S84)
Using this and the assumption on Ï Equation S38, it is not difï¬cult to show that there is a constant K1, depending on Ï2
nF |2"(0, lo, 1", X)|l2 < Ki, (S85)
lo, 1", X)|l2 < Ki, ||6"(0, Â¥) â 5'(6, X)|Ip < 146 â Alla
nâ 1
n-2|la'(9,X) â2'(G,X)Ilo, ||6"(0, Â¥) â 5'(6, X)|Ip < 146 â Alla (S86)
Lemma 1 follows from these two estimates. Indeed, with high probability over random initialization
FO =o We + IOV ($87) l
# l
=O Ie, 2)5'(0, 2)" We + (158, 0)" lz (S88) lL 2EexX
# lL
# 2EexX
<0 EGF | 16, 2) IE) 5G, 2)" IF (S89) lL 2EexX
xâX (1 + K 2
# lL
< SO (1+ Kin) > |5"(6,2)" |e ($90) l x
# l
⤠K 2 1 (1 + K 2 1 n) (S91)
# l
⤠2(L + 1)K 4 1 n, (S92)
and similarly
# J) â JON
J) â JON (S93)
=O YE e162), 2)" = 2G, 2)5'(0,2)" IF + (150,02)? â 80,2)" | (S94) Ll 2Ex
=
# Ll
< (= (Kin + Kin) +4) \|6 â Oj. (S95) L
l â¤3(L + 1)K 4
<3(L + 1)K4n||0 â Allo. (S96)
# G.4 Remarks on NTK parameterization
For completeness, we also include analogues of Theorem G.1 and Lemma 1 with NTK parameteriza- tion. Theorem G.4 (NTK parameterization). Assume Assumptions [1-4]. For δ0 > 0 and η0 < ηcritical, there exist R0 > 0, N â N and K > 1, such that for every n ⥠N , the following holds with probability at least (1 â δ0) over random initialization when applying gradient descent with learning rate η = η0,
t lla < (1- â¢e") Ro (S97) Tha As â O-alla < Kino Dj ( â BAM ERy < YR
13These two estimates can be obtained via induction. To prove bounds relating to xl and δl, one starts with l = 1 and l = L, respectively.
12
(S93)
(S97)
and
6K? Ro min sup 190 - Oulle < n-2. (S98) t
Lemma 2 (NTK parameterization: Local Lipschitzness of the Jacobian). There is a K > 0 such that for every C > 0, with high probability over random initialization the following holds
tise âJO)|le < Ko - ls V0, 6 ⬠B(%,C) (S99) \|7 (8) lle <K
# H Bounding the discrepancy between the original and the linearized network: MSE loss
We provide the proof for the gradient flow case. The proof for gradient descent can be obtained similarly. To simplify the notation, let g!"(t) = fl"(&) â Y and g(t) = f,(4â) â Y. The theorem and proof apply to both standard and NTK parameterization. We use the notation < to hide the dependence on uninteresting constants. Theorem H.1. Same as in Theorem G.2. For every x ⬠R" with ||a||2 < 1, for 59 > 0 arbitrarily small, there exist Ry > 0 and N ⬠N such that for every n > N, with probability at least (1 â 60) over random initialization,
sup |g'"(t) âg(t)||p » sup||g!"(t.©) â g(t.2)l|p Sn? Ro. (S100)
Proof.
(exp(no8at)(9""(¢) - a(t))) (60 exp(n0Oot)(g'"(t) â g(t) + exp(moGot)(âOog" (t) + 6.9(t))) (exp(no0t)(6. - 8o)g(¢))
<
# =No
=η0 (S103)
Integrating both sides and using the fact glin(0) = g(0),
# t
t (9'%(0 â att) =~ fm (explmSols ~9)(0. ~ G0)(a!â¢(s) âa(s))) ds ($104
+f No (exp(n060(s - t))(Os - 6o)9'"(s)) ds (S105)
Let λ0 > 0 be the smallest eigenvalue of ËÎ0 (with high probability λ0 > 1 gives 3 λmin). Taking the norm
ia(t)â a(6)l2 <mo( fexp(om(s ~#)Iop (Ox ~ So)lplla'%(s) ~ als)]lds (S106)
# t
t + [ les om(sâH))lonl(. ~ So)lopl(sdlads) «S107
# t
<mo( | e011, â Gp) lopllg""(s) â 9(s) lds (S108) 0
# 0 t
t + [em 16, - Go}llen la] ($109) 0
Let
u(t) = oro! | glin(¢) _ g(t)|l2 (S110) t
# t
a(t) = m [ e*||(O, â Oo)|lop|lg""(s) lads (S111) 0
# 0 B(t) = noll(Or â )llop
(S112)
13
(S101)
(S102)
The above can be written as
u(t) < a(t) +f B(s)u(s)ds (S113)
Note that α(t) is non-decreasing. Applying an integral form of the Grönwallâs inequality (see Theorem 1 in [38]) gives
# t
u(t) ⤠α(t) exp β(s)ds (S114) 0
Note that
Ia" (O)lla = exp (not) g"*(0)|la < ll exp (moot) llopllg(0)|l2 =e *°"â" gO) l2- (S115)
Then
t t [g%() â (0a < me" [eâ¢*6, â Galop halâ¢)laserv ( J ml. ~ Onllonds) (S116)
. t A A t A A <meâ¢â¢*|aO)le [I(x âSodlondsesn ( [| mllO. âCollonds) 0 0 (S117)
(S116)
(S117)
Let o7 = supo<s<t |, â Oo|lop. Then
Ia? () â (tla S (notore20â¢m"*4) [Ig â¢(0) I (sus)
As it is proved in Theorem G.1, for every δ0 > 0, with probability at least (1 â δ0) over random initialization,
sup < sup |9o - Oille < n/2Ry 30 (S119)
when n1 = · · · = nL = n â â. Thus for large n and any polynomial P (t) (we use P (t) = t here)
sup t eâλ0η0t+Ïtη0tη0P (t) = O(1) (S120)
Therefore
sup |lg!"(t) â g(t)ll2 S sup orRo Sn RG > 0, (S121)
# as n â â.
Now we control the discrepancy on a test point x. Let y be its true label. Similarly,
© (@"(t,2) â g(t,2)) = m0 (Go(x,%) - x(x, Â¥)) g(t) + moOx(x, Â¥)(g(t) â g(t). dt (S122)
Integrating over [0, t] and taking the norm imply
lex) â a(t.) ($123)
t t <i [ |o(e.%) ~ O.(e.a)], hats) lods +m f8.(e.2)]ala(s) ~ aâ¢(s)lads (S124)
(S124)
# t
t <nllaO))l2 [- ]@o(a.â) ~ O(a.) eas (S125) 0
# t
t + wm [ (\|Oo(a, X)|l2 + |Os(@, Â¥) â Oo(x, Â¥)|2)Ilg(s) â 9'(s) leds (S126) 0
14
ge bat RO âKie/liK ie 8, ROSY âKile/liK Ip D1, OC Ole /|lOllrâ L=3,_ 16°) â Ol F/I16le 3 a i psy 27-27 2° 28 27 2 2 2272 27 BF a> 2% 27 2F ae 2222 27 27 2% 25 27 2F ae 22M 23 aT > 2B 27 2F 2% QM aM 2 Width (n) Width (n) Width (n) Width (n)
Figure S9: Kernel convergence. Kernels computed from randomly initialized ReLU networks with one and three hidden layers converge to the corresponding analytic kernel as width n and number of Monte Carlo samples M increases. Colors indicate averages over different numbers of Monte Carlo samples.
Similarly, Lemma 1 implies
sup |Oo(x.¥) - x(x, ¥)||, <n-? Ro (S127) t :
This gives
(S125) Sn? RR. (S128)
Using Equation S118 and Equation S119,
t (S126) < |Oo(e.aI2 [ (moose rors +72"08) \Ig'â¢(0)|lnat < n-2. (S129) 0
# I Convergence of empirical kernel
As in Novak et al. [7], we can use Monte Carlo estimates of the tangent kernel (Equation 4) to probe convergence to the infinite width kernel (analytically computed using Equations S26, S29). For simplicity, we consider random inputs drawn from V (0, 1) with no = 1024. In Figure S9, we observe convergence as both width n increases and the number of Monte Carlo samples increases. For both NNGP and tangent kernels we observe ||") â Ol] » = O (1/,/n) and ||K(⢠â K|z = O (1/Vn), as predicted by a CLT in Daniely et al. [16].
# J Details on Wide Residual Network
Table S1: Wide Residual Network architecture from Zagoruyko and Komodakis [14]. In the residual block, we follow Batch Normalization-ReLU-Conv ordering. block type
group name output size block type conv1 32 x 32 [3 x3, channel size] 3 x 3, channel size conv2 32 x 32 3 x 3, channel size xN 3 x 3, channel size conv3 16 x 16 3 x 3, channel size xN 3 x 3, channel size conv4 8x8 3 x 3, channel size xN avg-pool 1x1 [8 x 8]
15
loga(n) [Koo â Ku nle [Kool loga(n) 19.0 - Om alr lcclr i ii 2° 27 27 27 2% 2% 2% 27 2% 29 2 2 2% Number of Samples (.M/) log,(M)
Figure S10: Kernel convergence. Kernels from single hidden layer randomly initialized ReLU network convergence to analytic kernel using Monte Carlo sampling (M samples). See §I for additional discussion.
16 | {
"id": "1902.01996"
} |
1902.03545 | Task2Vec: Task Embedding for Meta-Learning | We introduce a method to provide vectorial representations of visual
classification tasks which can be used to reason about the nature of those
tasks and their relations. Given a dataset with ground-truth labels and a loss
function defined over those labels, we process images through a "probe network"
and compute an embedding based on estimates of the Fisher information matrix
associated with the probe network parameters. This provides a fixed-dimensional
embedding of the task that is independent of details such as the number of
classes and does not require any understanding of the class label semantics. We
demonstrate that this embedding is capable of predicting task similarities that
match our intuition about semantic and taxonomic relations between different
visual tasks (e.g., tasks based on classifying different types of plants are
similar) We also demonstrate the practical value of this framework for the
meta-task of selecting a pre-trained feature extractor for a new task. We
present a simple meta-learning framework for learning a metric on embeddings
that is capable of predicting which feature extractors will perform well.
Selecting a feature extractor with task embedding obtains a performance close
to the best available feature extractor, while costing substantially less than
exhaustively training and evaluating on all available feature extractors. | http://arxiv.org/pdf/1902.03545 | Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless Fowlkes, Stefano Soatto, Pietro Perona | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20190210 | 20190210 | 9 1 0 2
b e F 0 1 ] G L . s c [
1 v 5 4 5 3 0 . 2 0 9 1 : v i X r a
# TASK2VEC: Task Embedding for Meta-Learning
# Alessandro Achille UCLA and AWS achille@cs.ucla.edu
# Michael Lam AWS michlam@amazon.com
# Rahul Tewari AWS tewarir@amazon.com
# Avinash Ravichandran AWS ravinash@amazon.com
Subhransu Maji UMass and AWS smmaji@amazon.com
# Charless Fowlkes UCI and AWS fowlkec@amazon.com
# Stefano Soatto UCLA and AWS soattos@amazon.com
# Pietro Perona Caltech and AWS peronapp@amazon.com
# Abstract
We introduce a method to provide vectorial represen- tations of visual classiï¬cation tasks which can be used to reason about the nature of those tasks and their re- lations. Given a dataset with ground-truth labels and a loss function deï¬ned over those labels, we process images through a âprobe networkâ and compute an embedding based on estimates of the Fisher information matrix asso- ciated with the probe network parameters. This provides a ï¬xed-dimensional embedding of the task that is independent of details such as the number of classes and does not require any understanding of the class label semantics. We demon- strate that this embedding is capable of predicting task sim- ilarities that match our intuition about semantic and tax- onomic relations between different visual tasks (e.g., tasks based on classifying different types of plants are similar). We also demonstrate the practical value of this framework for the meta-task of selecting a pre-trained feature extractor for a new task. We present a simple meta-learning frame- work for learning a metric on embeddings that is capable of predicting which feature extractors will perform well. Se- lecting a feature extractor with task embedding obtains a performance close to the best available feature extractor, while costing substantially less than exhaustively training and evaluating on all available feature extractors.
semantic similarities between tasks (Fig. 1). When other natural distances are available, such as the taxonomical dis- tance in biological classiï¬cation, we ï¬nd that the embed- ding distance correlates positively with it (Fig. 2). More- over, we introduce an asymmetric distance on tasks which correlates with the transferability between tasks.
Computation of the embedding leverages a duality be- tween network parameters (weights) and outputs (activa- tions) in a deep neural network (DNN): Just as the activa- tions of a DNN trained on a complex visual recognition task are a rich representation of the input images, we show that the gradients of the weights relative to a task-speciï¬c loss are a rich representation of the task itself. Speciï¬cally, given a task deï¬ned by a dataset D = {(xi, yi)}N i=1 of labeled samples, we feed the data through a pre-trained reference convolutional neural network which we call âprobe net- workâ, and compute the diagonal Fisher Information Ma- trix (FIM) of the network ï¬lter parameters to capture the structure of the task (Sect. 2). Since the architecture and weights of the probe network are ï¬xed, the FIM provides a ï¬xed-dimensional representation of the task. We show this embedding encodes the âdifï¬cultyâ of the task, character- istics of the input domain, and which features of the probe network are useful to solve it (Sect. 2.1).
# 1. Introduction
The success of Deep Learning hinges in part on the fact that models learned for one task can be used on other related tasks. Yet, no general framework exists to describe and learn relations between tasks. We introduce the TASK2VEC embedding, a technique to represent tasks as elements of a vector space based on the Fisher Information Matrix. The norm of the embedding correlates with the complexity of the task, while the distance between embeddings captures
Our task embedding can be used to reason about the space of tasks and solve meta-tasks. As a motivating exam- ple, we study the problem of selecting the best pre-trained feature extractor to solve a new task. This can be particu- larly valuable when there is insufï¬cient data to train or ï¬ne- tune a generic model, and transfer of knowledge is essen- tial. TASK2VEC depends solely on the task, and ignores interactions with the model which may however play an important role. To address this, we learn a joint task and model embedding, called MODEL2VEC, in such a way that models whose embeddings are close to a task exhibit good perfmormance on the task. We use this to select an expert from a given collection, improving performance relative to
1
© Actinopterygii (n) â@ Insecta (n) Amphibia (n) Mammalia (n) © Arachnida (n) Mollusca (n) © Aves (n) © Plantae (n) © Fungi (n) © Protozoa (n) _â Laurales Liliales <â Pinales Rosales __ââ Carnivora 2882, 6824âââ Rodentia ose > Falconiformes Passeriformes Corvidae Formal dresses Wedding dresses Prom dresses Shoelaces wie Sweatpants ss ââ Denim Yoga pants Ripped Jeans Task Embeddings © Reptilia (n) © Neckline (m) Category (m) Pants (m) © Color (m) Pattern (m) Gender (m) Shoes (m) © Material (m) Yoga pants Wedding dresses Purple Prom dresses Brown Formal dresses Black > Sweatpants âââ___ Jeans 7 | !Winter boots Denim Shoelaces so __ââ Rodentia A _ Carnivora Liliales Pinales Leh, BRS corvidae Passeriformes Falconiformes Rosales Laurales Domain Embeddings
Figure 1: Task embedding across a large library of tasks (best seen magniï¬ed). (Left) T-SNE visualization of the embed- ding of tasks extracted from the iNaturalist, CUB-200, iMaterialist datasets. Colors indicate ground-truth grouping of tasks based on taxonomic or semantic types. Notice that the bird classiï¬cation tasks extracted from CUB-200 embed near the bird classiï¬cation task from iNaturalist, even though the original datasets are different. iMaterialist is well separated from iNat- uralist, as it entails very different tasks (clothing attributes). Notice that some tasks of similar type (such as color attributes) cluster together but attributes of different task types may also mix when the underlying visual semantics are correlated. For example, the tasks of jeans (clothing type), denim (material) and ripped (style) recognition are close in the task embedding. (Right) T-SNE visualization of the domain embeddings (using mean feature activations) for the same tasks. Domain em- bedding can distinguish iNaturalist tasks from iMaterialist tasks due to differences in the two problem domains. However, the fashion attribute tasks on iMaterialist all share the same domain and only differ in their labels. In this case, the domain embeddings collapse to a region without recovering any sensible structure.
ï¬ne-tuning a generic model trained on ImageNet and ob- taining close to ground-truth optimal selection. We discuss our contribution in relation to prior literature in Sect. 6, after presenting our empirical results in Sect. 5.
original output distribution p,,(y|x) and the perturbed one Pw (y|x). To second-order approximation, this is
Exxp KL (pw (yl) || pw(y|x)) = bw - Fow + 0(5w?),
# 2. Task Embeddings via Fisher Information
Given an observed input x (e.g., an image) and an hid- den task variable y (e.g., a label), a deep network is a family of functions pw(y|x) parametrized by weights w, trained to approximate the posterior p(y|x) by minimizing the (possibly regularized) cross entropy loss Hpw, Ëp(y|x) = Ex,yâ¼ Ëp[â log pw(y|x)], where Ëp is the empirical distribu- tion deï¬ned by the training set D = {(xi, yi)}N It is useful, especially in transfer learning, to think of the net- work as composed of two parts: a feature extractor which computes some representation z = Ïw(x) of the input data, and a âhead,â or classiï¬er, which encodes the distribution p(y|z) given the representation z.
Not all network weights are equally useful in predicting the task variable: the importance, or âinformative content,â of a weight for the task can be quantified by considering a perturbation wâ = w + dw of the weights, and measuring the average Kullbach-Leibler (KL) divergence between the
where F is the Fisher information matrix (FIM):
F = Ey yn p(x)pulylc) [Vw log Pw (yl) Vw log pw(ylz)*] -
that is, the expected covariance of the scores (gradients of the log-likelihood) with respect to the model parameters.
The FIM is a Riemannian metric on the space of proba- bility distributions [7], and provides a measure of the infor- mation a particular parameter (weight or feature) contains about the joint distribution pw(x, y) = Ëp(x)pw(y|x): If the classiï¬cation performance for a given task does not depend strongly a parameter, the corresponding entry in the FIM will be small. The FIM is also related to the (Kolmogorov) complexity of a task, a property that can be used to de- ï¬ne a computable metric of the learning distance between tasks [3]. Finally, the FIM can be interpreted as an easy-to- compute positive semideï¬nite upper-bound to the Hessian of the cross-entropy loss, and coincides with it at local min- ima [24]. In particular, âï¬at minimaâ correspond to weights that have, on average, low (Fisher) information [5, 13].
# 2.1. TASK2VEC embedding using a probe network
While the network activations capture the information in the input image which are needed to infer the image label, the FIM indicates the set of feature maps which are more informative for solving the current task. Following this in- tuition, we use the FIM to represent the task itself. How- ever, the FIMs computed on different networks are not di- rectly comparable. To address this, we use single âprobeâ network pre-trained on ImageNet as a feature extractor and re-train only the classiï¬er layer on any given task, which usually can be done efï¬ciently. After training is complete, we compute the FIM for the feature extractor parameters.
Since the full FIM is unmanageably large for rich probe networks based on CNNs, we make two additional approxi- mations. First, we only consider the diagonal entries, which implicitly assumes that correlations between different ï¬lters in the probe network are not important. Second, since the weights in each ï¬lter are usually not independent, we aver- age the Fisher Information for all weights in the same ï¬lter. The resulting representation thus has ï¬xed size, equal to the number of ï¬lters in the probe network. We call this embed- ding method TASK2VEC.
Robust Fisher computation Since the FIM is a local quantity, it is affected by the local geometry of the training loss landscape, which is highly irregular in many deep net- work architectures [21], and may be too noisy when trained with few samples. To avoid this problem, instead of a direct computation, we use a more robust estimator that leverages connections to variational inference. Assume we perturb the weights Ëw of the network with Gaussian noise N (0, Î) with precision matrix Î, and we want to ï¬nd the optimal Î which yields a good expected error, while remaining close to an isotropic prior N ( Ëw, λ2I). That is, we want to ï¬nd Î that minimizes:
L(w; A) = Ewan (w,a) Ap, oP(yl2)] +8 KL(N(0, A) ||V(0,X2D),
where H is the cross-entropy loss and β controls the weight of the prior. Notice that for β = 1 this reduces to the Evi- dence Lower-Bound (ELBO) commonly used in variational inference. Approximating to the second order, the optimal value of Î satisï¬es (see Supplementary Material):
β 2N Π= F + βλ2 2N I.
Therefore, β 2N Π⼠F +o(1) can be considered as an estima- tor of the FIM F , biased towards the prior λ2I in the low- data regime instead of being degenerate. In case the task is trivial (the loss is constant or there are too few samples) the embedding will coincide with the prior λ2I, which we will refer to as the trivial embedding. This estimator has the
advantage of being easy to compute by directly minimizing the loss L( Ëw; Σ) through Stochastic Gradient Variational Bayes [18], while being less sensitive to irregularities of the loss landscape than direct computation, since the value of the loss depends on the cross-entropy in a neighborhood of Ëw of size Îâ1. As in the standard Fisher computation, we estimate one parameter per ï¬lter, rather than per weight, which in practice means that we constrain Îii = Îjj when- ever wi and wj belongs to the same ï¬lter. In this case, opti- mization of L( Ëw; Î) can be done efï¬ciently using the local reparametrization trick of [18].
# 2.2. Properties of the TASK2VEC embedding
The task embedding we just deï¬ned has a number of useful properties. For illustrative purposes, consider a two- layer sigmoidal network for which an analytic expression can be derived (see Supplementary Materials). The FIM of the feature extractor parameters can be written using the Kronecker product as
F = Ex,yâ¼ Ëp(x)pw(y|x)[(y â p)2 · S â xxT ]
where p = pw(y = 1|x) and the matrix S$ = ww? © zz? © (1 â z)(1 â z)â is an element-wise product of classifier weights w and first layer feature activations z. It is informa- tive to compare this expression to an embedding based only on the dataset domain statistics, such as the (non-centered) covariance Cp = E [x27] of the input data or the covari- ance Cy = E[zz7| of the feature activations. One could take such statistics as a representative domain embedding since they only depend on the marginal distribution p(x) in contrast to the FIM task embedding, which depends on the joint distribution p(x, y). These simple expressions high- light some important (and more general) properties of the Fisher embedding we now describe.
Invariance to the label space: The task embedding does not directly depend on the task labels, but only on the pre- dicted distribution pw(y|x) of the trained model. Infor- mation about the ground-truth labels y is encoded in the weights w which are a sufï¬cient statistic of the task [5]. In particular, the task embedding is invariant to permutations of the labels y, and has ï¬xed dimension (number of ï¬lters of the feature extractor) regardless of the output space (e.g., k-way classiï¬cation with varying k).
Encoding task difficulty: As we can see from the ex- pressions above, if the fit model is very confident in its pre- dictions, E[(y â p)?] goes to zero. Hence, the norm of the task embedding ||F'||, scales with the difficulty of the task for a given feature extractor ¢. Figure 2 (Right) shows that even for more complex models trained on real data, the FIM norm correlates with test performance.
Encoding task domain: Data points x that are classi- ï¬ed with high conï¬dence, i.e., p is close to 0 or 1, will have a lower contribution to the task embedding than points
3.0 B N N a © Pa Avg. top-k tax. distance © Test error on task (%) â Task2Vec distance â Tax. distance 0 25 50 75 100 125 0.4 0.6 0.8 Size k of neighborhood Ly norm of task embedding 1e8
Figure 2: Distance between species classiï¬cation tasks. (Left) Task similarity matrix ordered by hierarchical clustering. Note that the dendrogram produced by the task similarity matches the taxonomic clusters (indicated by color bar). (Center) For tasks extracted from iNaturalist and CUB, we compare the cosine distance between tasks to their taxonomical distance. As the size of the task embedding neighborhood increases (measured by number of tasks in the neighborhood), we plot the average taxonomical distance of tasks from the neighborhood center. While the task distance does not perfectly match the taxonomical distance (whose curve is shown in orange), it shows a good correlation. Difference are both due to the fact that taxonomically close species may need very different features to be classiï¬ed, creating a mismatch between the two notions of distance, and because for some tasks in iNaturalist too few samples are provided to compute a good embedding. (Right) Correlation between L1 norm of the task embedding (distance from origin) and test error obtained on the task.
near the decision boundary since p(1 â p) is maximized at p = 1/2. Compare this to the covariance matrix of the data, C0, to which all data points contribute equally. Instead, in TASK2VEC information on the domain is based on data near the decision boundary (task-weighted domain embedding). Encoding useful features for the task: The FIM de- pends on the curvature of the loss function with the diagonal entries capturing the sensitivity of the loss to model param- eters. Speciï¬cally, in the two-layer model one can see that, if a given feature is uncorrelated with y, the correspond- ing blocks of F are zero. In contrast, a domain embedding based on feature activations of the probe network (e.g., C1) only reï¬ects which features vary over the dataset without indication of whether they are relevant to the task.
# 3. Similarity Measures on the Space of Tasks
siï¬cation of cats than it is to classiï¬cation of species of plants). In this setting, we can deï¬ne
Dtax(ta, tb) = min iâSa,jâSb d(i, j),
where Sa, Sb are the sets of categories in task ta, tb and d(i, j) is an ultrametric or graph distance in the taxonomy tree. Notice that this is a proper distance, and in particular it is symmetric.
Transfer distance. We deï¬ne the transfer (or ï¬ne-tuning) gain from a task ta to a task tb (which we improperly call distance, but is not necessarily symmetric or positive) as the difference in expected performance between a model trained for task tb from a ï¬xed initialization (random or pre- trained), and the performance of a model ï¬ne-tuned for task tb starting from a solution of task ta:
What metric should be used on the space of tasks? This depends critically on the meta-task we are considering. As a motivation, we concentrate on the meta-task of selecting the pre-trained feature extractor from a set in order to obtain the best performance on a new training task. There are several natural metrics that may be considered for this meta-task. In this work, we mainly consider:
Taxonomic distance For some tasks, there is a natural no- tion of semantic similarity, for instance deï¬ned by sets of categories organized in a taxonomic hierarchy where each task is classiï¬cation inside a subtree of the hierarchy (e.g., we may say that classifying breeds of dogs is closer to clas-
Ella-s0] â E[é,] Data > ty) = EG] :
where the expectations are taken over all trainings with the selected architecture, training procedure and network ini- tialization, ¢, is the final test error obtained by training on task b from the chosen initialization, and @,_,, is the error obtained instead when starting from a solution to task a and then fine-tuning (with the selected procedure) on task ty.
# 3.1. Symmetric and asymmetric TASK2VEC metrics
By construction, the Fisher embedding on which information TASK2VEC is based captures fundamental
about the structure of the task. We may therefore expect that the distance between two embeddings correlate posi- tively with natural metrics on the space of tasks. However, there are two problems in using the Euclidean distance be- tween embeddings: the parameters of the network have dif- ferent scales, and the norm of the embedding is affected by complexity of the task and the number of samples used to compute the embedding.
Symmetric TASK2VEC distance To make the distance computation robust, we propose to use the cosine distance between normalized embeddings:
Fa Fy ) Fat Fyâ Fat Fy/â dsym(Fas Fb) = deos(
where dcos is the cosine distance, Fa and Fb are the two task embeddings (i.e., the diagonal of the Fisher Informa- tion computed on the same probe network), and the division is element-wise. This is a symmetric distance which we ex- pect to capture semantic similarity between two tasks. For example, we show in Fig. 2 that it correlates well with the taxonomical distance between species on iNaturalist.
On the other hand, precisely for this reason, this distance is ill-suited for tasks such as model selection, where the (in- trinsically asymmetric) transfer distance is more relevant.
Asymmetric TASK2VEC distance In a ï¬rst approxima- tion, that does not consider either the model or the training procedure used, positive transfer between two tasks depends both on the similarity between two tasks and on the com- plexity of the ï¬rst. Indeed, pre-training on a general but complex task such as ImageNet often yields a better result than ï¬ne-tuning from a close dataset of comparable com- plexity. In our case, complexity can be measured as the dis- tance from the trivial embedding. This suggests the follow- ing asymmetric score, again improperly called a âdistanceâ despite being asymmetric and possibly negative:
dasym(ta â tb) = dsym(ta, tb) â αdsym(ta, t0),
where t0 is the trivial embedding, and α is an hyperparam- eter. This has the effect of bring more complex models closer. The hyper-parameter α can be selected based on the meta-task. In our experiments, we found that the best value of α (α = 0.15 when using a ResNet-34 pretrained on ImageNet as the probe network) is robust to the choice of meta-tasks.
# 4. MODEL2VEC: task/model co-embedding
By construction, the TASK2VEC distance ignores details of the model and only relies on the task. If we know what task a model was trained on, we can represent the model by the embedding of that task. However, in general we may not have such information (e.g., black-box models or hand- constructed feature extractors). We may also have multiple
models trained on the same task with different performance characteristics. To model the joint interaction between task and model (i.e., architecture and training algorithm), we aim to learn a joint embedding of the two.
We consider for concreteness the problem of learning a joint embedding for model selection. In order to em- bed models in the task space so that those near a task are likely to perform well on that task, we formulate the following meta-learning problem: Given k models, their MODEL2VEC embedding are the vectors mi = Fi + bi, where Fi is the task embedding of the task used to train model mi (if available, else we set it to zero), and bi is a learned âmodel biasâ that perturbs the task embedding to account for particularities of the model. We learn bi by opti- mizing a k-way cross entropy loss to predict the best model given the task distance (see Supplementary Material):
L = E[â log p(m | dasym(t, m0), . . . , dasym(t, mk))].
After training, given a novel query task t, we can then pre- dict the best model for it as the arg maxi dasym(t, mi), that is, the model mi embedded closest to the query task.
# 5. Experiments
We test TASK2VEC on a large collection of tasks and models, related to different degrees. Our experiments aim to test both qualitative properties of the embedding and its per- formance on meta-learning tasks. We use an off-the-shelf ResNet-34 pretrained on ImageNet as our probe network, which we found to give the best overall performance (see Sect. 5.2). The collection of tasks is generated starting iNaturalist [36]: from the following four main datasets. Each task extracted corresponds to species classiï¬cation in a given taxonomical order. For instance, the âRodentia taskâ is to classify species of rodents. Notice that each task is deï¬ned on a separate subset of the images in the original dataset; that is, the domains of the tasks are dis- joint. CUB-200 [37]: We use the same procedure as iNat- uralist to create tasks. In this case, all tasks are classiï¬ca- tions inside orders of birds (the aves taxonomical class), and have generally much less training samples than correspond- ing tasks in iNaturalist. iMaterialist [1] and DeepFashion [23]: Each image in both datasets is associated with sev- eral binary attributes (e.g., style attributes) and categorical attributes (e.g., color, type of dress, material). We binarize the categorical attributes, and consider each attribute as a separate task. Notice that, in this case, all tasks share the same domain and are naturally correlated.
In total, our collection of tasks has 1460 tasks (207 iNaturalist, 25 CUB, 228 iMaterialist, 1000 DeepFashion). While a few tasks have many training examples (e.g., hun- dred thousands), most have just hundreds or thousands of samples. This simulates the heavy-tail distribution of data in real-world applications.
# iNat+CUB error distribution and expert selection
X Selected expert 0% 4 ImageNet expert 60% Test Error 40% 20% 0% ( ! ; i0"t ttt ly ' Ck Sh ha? oh oh 6 © Ooh > 8 oo @ PPPOE LIL EE LL EE EE EP OP LSE EB BE LE COL EEL EE SIS ESSE SLE LEE LE LS ee Oe Or el ROA Oe DPN PP KE IF PE Ph GOES FS PP We PM PO SS Sgro? PEP LD FCS Fe OH POT DD Fa PME PD SyF aX PP FaP LF FP FLOP LE LaF PF LP LPN APO Oo DeCoP OLN KONA GH GP GP SHS CGP gy PEL PEPSI SSE FN PP LG NOH FF VPLVW OLS SK SF KE LS I'S © ¢ HF SS ELK © é wer ess x SS <
Figure 3: TASK2VEC often selects the best available experts. Violin plot of the distribution of the ï¬nal test error (shaded plot) on tasks from the CUB-200 dataset (columns) obtained by training a linear classiï¬er over several expert feature extrac- tors (points). Most specialized feature extractors perform similarly on a given task, and generally are similar or worse than a generic feature extractor pre-trained on ImageNet (blue triangles). However, in some cases a carefully chosen expert, trained on a relevant task, can greatly outperform all other experts (long whisker of the violin plot). The model selection algorithm based on TASK2VEC can, without training, suggest an expert to use for the task (red cross, lower is better). TASK2VEC mostly recover the optimal, or close to optimal, feature extractor to use without having to perform an expensive brute-force search over all possibilities. Columns are ordered by norm of the task embedding: Notice tasks with lower embedding norm have lower error and more âcomplexâ task (task with higher embedding norm) tend to beneï¬t more from a specialized expert.
Together with the collection of tasks, we collect several âexpertâ feature extractors. These are ResNet-34 models pre-trained on ImageNet and then ï¬ne-tuned on a speciï¬c task or collection of related tasks (see Supplementary Ma- terials for details). We also consider a âgenericâexpert pre- trained on ImageNet without any ï¬netuning. Finally, for each combination of expert feature extractor and task, we trained a linear classiï¬er on top of the expert in order to solve the selected task using the expert.
# 5.1. Task Embedding Results
Task Embedding qualitatively reï¬ects taxonomic dis- tance for iNaturalist For tasks extracted from the iNat- uralist dataset (classiï¬cation of species), the taxonomical distance between orders provides a natural metric of the se- mantic similarity between tasks. In Figure 2 we compare the symmetric TASK2VEC distance with the taxonomical distance, showing strong agreement.
In total, we trained 4,100 classiï¬ers, 156 feature extrac- tors and 1,460 embeddings. The total effort to generate the ï¬nal results was about 1,300 GPU hours.
Meta-tasks. In Sect. 5.2, for a given task we aim to pre- dict, using TASK2VEC , which expert feature extractor will yield the best classiï¬cation performance. In particular, we formulate two model selection meta-tasks: iNat + CUB and Mixed. The ï¬rst consists of 50 tasks and experts from iNat- uralist and CUB, and aims to test ï¬ne-grained expert selec- tion in a restricted domain. The second contains a mix of 26 curated experts and 50 random tasks extracted from all datasets, and aims to test model selection between different domains and tasks (see Supplementary Material for details).
Task embedding for iMaterialist In Fig. 1 we show a t-SNE visualization of the embedding for iMaterialist and iNaturalist tasks. Task embedding yields interpretable re- sults: Tasks that are correlated in the dataset, such as binary classes corresponding to the same categorical attribute, may end up far away from each other and close to other tasks that are semantically more similar (e.g., the jeans category task is close to the ripped attribute and the denim material). This is reï¬ected in the mixture of colors of semantically related nearby tasks, showing non-trivial grouping.
We also compare the TASK2VEC embedding with a do- main embedding baseline, which only exploits the input distribution p(x) rather than the task distribution p(x, y). While some tasks are highly correlated with their domain (e.g., tasks from iNaturalist), other tasks differ only on the labels (e.g., all the attribute tasks of iMaterialist, which share the same clothes domain). Accordingly, the domain
«Brute force fixed â@® ImageNet finetune -@ ImageNet fixed -@®- Task2Vec finetune â®- Task2Vec fixed 10% nga - 0% (lower is better) -10% Error relative to brute force 10? 103 104 Number of samples
TASK2VEC improves results at different Figure 4: dataset sizes and training conditions: Performance of model selection on a subset of 4 tasks as a function of the number of samples available to train relative to opti- mal model selection (dashed orange). Training a classiï¬er on the feature extractor selected by TASK2VEC (solid red) is always better than using a generic ImageNet feature extrac- tor (dashed red). The same holds when allowed to ï¬ne-tune the feature extractor (blue curves). Also notice that in the low-data regime ï¬ne-tuning the ImageNet feature extractor is more expensive and has a worse performance than accu- rately selecting a good ï¬xed feature extractor.
Probe network Chance VGG-13 DenseNet-121 ResNet-13 Top-10 +13.95% +59.52% +4.82% +38.03% +0.30% +10.63% +0.00% +9.97% All
Table 1: Choice of probe network. Mean relative error increase over the ground-truth optimum on the iNat+CUB meta-task for different choices of the probe-network. We also report the performance on the top 10 tasks with more samples to show how data size affect different architectures.
embedding recovers similar clusters on iNaturalist. How- ever, on iMaterialst domain embedding collapses all tasks to a single uninformative cluster (not a single point due to slight noise in embedding computation).
Task Embedding encodes task difï¬culty The scatter- plot in Fig. 3 compares the norm of embedding vectors vs. performance of the best expert (or task speciï¬c model for cases where we have the diagonal computed). As shown analytically for the two-layers model, the norm of the task embedding correlates with the complexity of the task also on real tasks and architectures.
# 5.2. Model Selection
Given a task, our aim is to select an expert feature extrac- tor that maximizes the classiï¬cation performance on that task. We propose two strategies: (1) embed the task and
select the feature extractor trained on the most similar task, and (2) jointly embed the models and tasks, and select a model using the learned metric (see Section 4). Notice that (1) does not use knowledge of the model performance on various tasks, which makes it more widely applicable but requires we know what task a model was trained for and may ignore the fact that models trained on slightly differ- ent tasks may still provide an overall better feature extrac- tor (for example by over-ï¬tting less to the task they were trained on).
In Table 2 we compare the overall results of the various proposed metrics on the model selection meta-tasks. On both the iNat+CUB and Mixed meta-tasks, the Asymmetric TASK2VEC model selection is close to the ground-truth op- timal, and signiï¬cantly improves over both chance, and over using an generic ImageNet expert. Notice that our method has O(1) complexity, while searching over a collection of N experts is O(N ).
Error distribution In Fig. 3 we show in detail the error distribution of the experts on multiple tasks. It is interesting to notice that the classiï¬cation error obtained using most ex- perts clusters around some mean value, and little improve- ment is observed over using a generic expert. On the other hand, a few optimal experts can obtain a largely better per- formance on the task than a generic expert. This conï¬rms the importance of having access to a large collection of ex- perts when solving a new task, especially if few training data are available. But this collection can only be efï¬ciently exploited if an algorithm is given to efï¬ciently ï¬nd one of the few experts for the task, which we propose.
Dependence on task dataset size Finding experts is es- pecially important when the task we are interested in has relatively few samples. In Fig. 4 we show how the perfor- mance of TASK2VEC varies on a model selection task as the number of samples varies. At all sample sizes TASK2VEC is close to the optimum, and improves over selecting a generic expert (ImageNet), both when ï¬ne-tuning and when train- ing only a classiï¬er. We observe that the best choice of ex- perts is not affected by the dataset size, and that even with few examples TASK2VEC is able to ï¬nd the optimal experts.
Choice of probe network In Table 1 we show that DenseNet [15] and ResNet architectures [11] perform sig- niï¬cantly better when used as probe networks to compute the TASK2VEC embedding than a VGG [32] architecture.
# 6. Related Work
Task and Domain embedding. Tasks distinguished by their domain can be understood simply in terms of image statistics. Due to the bias of different datasets, sometimes a benchmark task may be identiï¬ed just by looking at a few images [34]. The question of determining what summary
Meta-task iNat + CUB Mixed Optimal 31.24 22.90 ImageNet Chance +59.52% +30.18% +112.49% +75.73% +6.81% +27.81%
# TASK2VEC Asymmetric TASK2VEC MODEL2VEC +9.97% +42.54% +29.23% +40.30%
Table 2: Model selection performance of different metrics. Average optimal error obtained on two meta-learning tasks by exhaustive search over the best expert, and relative error increase when using cheaper model selection methods. Always picking a ï¬xed good general model (e.g., a model pretrained on ImageNet) performs better than picking an expert at random (chance). However, picking an expert using the Asymmetric TASK2VEC distance can achieve an overall better performance than using a general model. Notice also the improvement over the Symmetric version, especially on iNat + CUB, where experts trained on very similar tasks may be too simple to yield good transfer, and should be avoided.
statistics are useful (analogous to our choice of probe net- work) has also been considered, for example [9] train an autoencoder that learns to extract ï¬xed dimensional sum- mary statistics that can reproduce many different datasets accurately. However, for general vision tasks which apply to all natural images, the domain is the same across tasks.
Taskonomy [39] explores the structure of the space of tasks, focusing on the question of effective knowledge transfer in a curated collection of 26 visual tasks, ranging from classiï¬cation to 3D reconstruction, deï¬ned on a com- mon domain. They compute pairwise transfer distances be- tween pairs of tasks and use the results to compute a di- rected hierarchy. Introducing novel tasks requires comput- ing the pairwise distance with tasks in the library. In con- trast, we focus on a larger library of 1,460 ï¬ne-grained clas- siï¬cation tasks both on same and different domains, and show that it is possible to represent tasks in a topological space with a constant-time embedding. The large task col- lection and cheap embedding costs allow us to tackle new meta-learning problems.
neural network as a characterization of the task. Use of Fisher information for neural networks was popularized by Amari [6] who advocated optimization using natural gra- dient descent which leverages the fact that the FIM is an appropriate parameterization-independent metric on statis- tical models. Recent work has focused on approximates of FIM appropriate in this setting (see e.g., [12, 10, 25]). FIM has also been proposed for various regularization schemes [5, 8, 22, 27], analyze learning dynamics of deep networks [4], and to overcome catastrophic forgetting [19].
Meta-learning and Model Selection The general prob- lem of meta-learning has a long history with much re- cent work dedicated to problems such as neural architecture search and hyper-parameter estimation. Closely related to our problem is work on selecting from a library of classi- ï¬ers to solve a new task [33, 2, 20]. Unlike our approach, these usually address the question via land-marking or ac- tive testing, in which a few different models are evaluated and performance of the remainder estimated by extension. This can be viewed as a problem of completing a matrix deï¬ned by performance of each model on each task.
Fisher kernels Our work takes inspiration from Jaakkola and Hausler [16]. They propose the âFisher Kernelâ, which uses the gradients of a generative model score function as a representation of similarity between data items
A similar approach has been taken in computer vision for selecting a detector for a new category out of a large library of detectors [26, 40, 38].
K(x(1), x(2)) = âθ log P (x(1)|θ)T F â1âθ log P (x(2)|θ).
# 7. Discussion
Here P (x|θ) is a parameterized generative model and F is the Fisher information matrix. This provides a way to utilize generative models in the context of discriminative learning. Variants of the Fisher kernel have found wide use as a repre- sentation of images [28, 29], and other structured data such as protein molecules [17] and text [30]. Since the genera- tive model can be learned on unlabelled data, several works have investigated the use of Fisher kernel for unsupervised learning [14, 31]. [35] learns a metric on the Fisher kernel representation similar to our metric learning approach. Our approach differs in that we use the FIM as a representation of a whole dataset (task) rather than using model gradients as representations of individual data items.
Fisher Information for CNNs Our approach to task em- bedding makes use of the Fisher Information matrix of a
TASK2VEC is an efï¬cient way to represent a task, or the corresponding dataset, as a ï¬xed dimensional vector. It has several appealing properties, in particular its norm corre- lates with the test error obtained on the task, and the co- sine distance between embeddings correlates with natural distances between tasks, when available, such as the taxo- nomic distance for species classiï¬cation, and the ï¬ne-tuning distance for transfer learning. Having a representation of tasks paves the way for a wide variety of meta-learning tasks. In this work, we focused on selection of an expert feature extractor in order to solve a new task, especially when little training data is present, and showed that using TASK2VEC to select an expert from a collection can sen- sibly improve test performance while adding only a small overhead to the training process.
Meta-learning on the space of tasks is an important step toward general artiï¬cial intelligence. In this work, we in- troduce a way of dealing with thousands of tasks, enough to enable reconstruct a topology on the task space, and to test meta-learning solutions. The current experiments highlight the usefulness of our methods. Even so, our collection does not capture the full complexity and variety of tasks that one may encounter in real-world situations. Future work should further test effectiveness, robustness, and limitations of the embedding on larger and more diverse collections.
# References
[1] iMaterialist Challenge (Fashion) at FGVC5 workshop, https://www.kaggle.com/c/ CVPR 2018. imaterialist-challenge-fashion-2018. 5
[2] S. M. Abdulrahman, P. Brazdil, J. N. van Rijn, and J. Van- schoren. Speeding up algorithm selection using average ranking and active testing by introducing runtime. Machine learning, 107(1):79â108, 2018. 8
[3] A. Achille, G. Mbeng, G. Paolini, and S. Soatto. The dy- namic distance between learning tasks: From Kolmogorov complexity to transfer learning via quantum physics and the information bottleneck of the weights of deep networks. Proc. of the NIPS Workshop on Integration of Deep Learning Theories (ArXiv: 1810.02440), October 2018. 2
[4] A. Achille, M. Rovere, and S. Soatto. Critical learning pe- riods in deep neural networks. Proc. of the Intl. Conf. on Learning Representations (ICLR). ArXiv:1711.08856, 2019. 8
[5] A. Achille and S. Soatto. Emergence of invariance and dis- entanglement in deep representations. Journal of Machine Learning Research (ArXiv 1706.01350), 19(50):1â34, 2018. 2, 3, 8
[6] S.-I. Amari. Natural gradient works efï¬ciently in learning. Neural computation, 10(2):251â276, 1998. 8
[7] S.-I. Amari and H. Nagaoka. Methods of information geome- try, volume 191 of translations of mathematical monographs. American Mathematical Society, 13, 2000. 2
[8] S. Arora, R. Ge, B. Neyshabur, and Y. Zhang. Stronger gen- eralization bounds for deep nets via a compression approach. arXiv preprint arXiv:1802.05296, 2018. 8
[9] H. Edwards and A. Storkey. Towards a neural statistician. arXiv preprint arXiv:1606.02185, 2016. 8
[10] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta- learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400, 2017. 8
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learn- ing for image recognition. In Proceedings of the IEEE con- ference on computer vision and pattern recognition, pages 770â778, 2016. 7
[12] T. Heskes. On natural learning and pruning in multilayered perceptrons. Neural Computation, 12(4):881â901, 2000. 8
[13] S. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1â42, 1997. 2
[14] A. D. Holub, M. Welling, and P. Perona. Combining gener- ative models and ï¬sher kernels for object recognition. In IEEE International Conference on Computer Vision, vol- ume 1, pages 136â143. IEEE, 2005. 8
[15] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 7
[16] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classiï¬ers. In Advances in neural information processing systems, pages 487â493, 1999. 8
[17] T. S. Jaakkola, M. Diekhans, and D. Haussler. Using the ï¬sher kernel method to detect remote protein homologies. In ISMB, volume 99, pages 149â158, 1999. 8
[18] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pages 2575â 2583, 2015. 3, 13
[19] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Des- jardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al. Overcoming catastrophic for- getting in neural networks. Proceedings of the national academy of sciences, page 201611835, 2017. 8
[20] R. Leite, P. Brazdil, and J. Vanschoren. Selecting classiï¬- cation algorithms with active testing. In International work- shop on machine learning and data mining in pattern recog- nition, pages 117â131. Springer, 2012. 8
[21] H. Li, Z. Xu, G. Taylor, and T. Goldstein. Visualizing the loss landscape of neural nets. arXiv preprint arXiv:1712.09913, 2017. 3
[22] T. Liang, T. Poggio, A. Rakhlin, and J. Stokes. Fisher-rao metric, geometry, and complexity of neural networks. arXiv preprint arXiv:1711.01530, 2017. 8
[23] Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. Deepfash- ion: Powering robust clothes recognition and retrieval with rich annotations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1096â1104, 2016. 5
[24] J. Martens. New perspectives on the natural gradient method. CoRR, abs/1412.1193, 2014. 2, 12
[25] J. Martens and R. Grosse. Optimizing neural networks with kronecker-factored approximate curvature. In International conference on machine learning, pages 2408â2417, 2015. 8 [26] P. Matikainen, R. Sukthankar, and M. Hebert. Model rec- In Computer Vision ommendation for action recognition. and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2256â2263. IEEE, 2012. 8
[27] Y. Mroueh and T. Sercu. Fisher gan. In Advances in Neural Information Processing Systems, pages 2513â2523, 2017. 8 Improving the ï¬sher kernel for large-scale image classiï¬cation. In European conference on computer vision, pages 143â156. Springer, 2010. 8
[29] J. S´anchez, F. Perronnin, T. Mensink, and J. Verbeek. Im- age classiï¬cation with the ï¬sher vector: Theory and practice. International journal of computer vision, 105(3):222â245, 2013. 8
[30] C. Saunders, A. Vinokourov, and J. S. Shawe-taylor. String kernels, ï¬sher kernels and ï¬nite state automata. In Advances in Neural Information Processing Systems, pages 649â656, 2003. 8
[31] M. Seeger. Learning with labeled and unlabeled data. Tech- nical Report EPFL-REPORT-161327, Institute for Adaptive and Neural Computation, University of Edinburgh, 2000. 8 [32] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 7
[33] M. R. Smith, L. Mitchell, C. Giraud-Carrier, and T. Martinez. Recommending learning algorithms and their associated hy- perparameters. arXiv preprint arXiv:1407.1890, 2014. 8 [34] A. Torralba and A. A. Efros. Unbiased look at dataset bias. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1521â1528. IEEE, 2011. 7 [35] L. Van Der Maaten. Learning discriminative ï¬sher kernels.
In ICML, volume 11, pages 217â224, 2011. 8
[36] G. Van Horn, O. Mac Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard, H. Adam, P. Perona, and S. Belongie. The inatu- ralist species classiï¬cation and detection dataset. In Proceed- ings of the IEEE conference on computer vision and pattern recognition, 2018. 5
[37] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Re- port CNS-TR-2011-001, California Institute of Technology, 2011. 5
[38] Y.-X. Wang and M. Hebert. Model recommendation: Gen- erating object detectors from few samples. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1619â1628, 2015. 8
[39] A. R. Zamir, A. Sax, W. Shen, L. Guibas, J. Malik, and S. Savarese. Taskonomy: Disentangling task transfer learn- ing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3712â3722, 2018. 8
[40] P. Zhang, J. Wang, A. Farhadi, M. Hebert, and D. Parikh. Predicting failures of vision systems. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recogni- tion, pages 3566â3573, 2014. 8
# A. Analytic FIM for two-layer model
Assume we have data points (x;, y;),i = 1...n and y; ⬠{0,1}. Assume that a fixed feature extractor applied to data points x yields features z = (x) ⬠R@ and a linear model with parameters w is trained to model the conditional distribution pi = Ply = 1;) =o (w? (zi) , where o is the sigmoid function. The gradient of the cross-entropy loss with respect to the linear model parameters is:
oe 1 ; a = 71 i â Di) (i), Bw = dL pi)O(:);
and the empirical estimate of the Fisher information matrix is:
F re (= T 1 . Ow x) | Ey pw (yl) NV » d(x) (yi â pi)?2o(xi)* = - » o(xi)(1 â pa)pid(wi)â
In general, we are also interested in the Fisher information of the parameters of the feature extractor Ï(x) since this is independent of the speciï¬cs of the output space y (e.g., for k-way classiï¬cation). Consider a 2-layer network where the feature extractor uses a sigmoid non-linearity:
p = Ï(wT z) zk = Ï(U T k x)
and the matrix U speciï¬es the feature extractor parameters and w are parameters of the task-speciï¬c classiï¬er. Taking the gradient w.r.t. parameters we have:
oe aw; (y â p)4; oe ny = (y â p)wezn(1 â 2) ay
The Fisher Information Matrix (FIM) consists of blocks:
ae ( aeâ 2 Ow; (a) = (y â p) 212; ae ( aeâ Un (sn) = (y â p)?2;2K(1 â ze) ae ( ae \* aun (an) =(y- p)-wrzn(L â ze)wia(1 â 3) air;
We focus on the FIM of the probe network parameters which is independent of the dimensionality of the output layer and write it in matrix form as:
ol ( oe OU, \ OUe. T ) (yâp)(1 Zr) Zk(1 a)awpwpca?
Note that each block {l, k} consists of the same matrix (y â p)2 · xxT multiplied by a scalar Skl given as:
Skl = (1 â zk)zk(1 â zl)zlwkwl
We can thus write the whole FIM as the expectation of a Kronecker product:
F = E[(y â p)2 · S â xxT ]
where the matrix S can be written as
S=ww" ©z27 @(1-2z)(1-2)"
(a) Random linear
# + ReLU
# (b) Polynomial of degree three
Figure 5: Task embeddings computed for a probe network consisting of (a) 10 random linear + ReLU features and (b) degree three polynomial features projected to 2D using t-SNE. The tasks are random binary partitions of the unit square visualized in each icon (three tasks are visualized on the left) and cannot be distinguished based purely on the input domain without considering target labels. Note that qualitatively similar tasks group together, with more complex tasks (requiring complicated decision boundaries) separated from simpler tasks.
Given a task described by N training samples {(xe, ye)}, the FIM can be estimated empirically as
1 F= N Darel â pe) Se@aext S.= ww" © 227 © (1â %)(1â 2)"
where we take expectation over y w.r.t. the predictive distribution y â¼ pw(y|x).
Example toy task embedding As noted in the main text, the FIM depends on the domain embedding, the particular task and its complexity. We illustrate these properties of the task embedding using an âtoyâ task space illustrated in Figure 5. We generate 64 binary classiï¬cation tasks by clustering a uniform grid of points in the XY plane into k â [3, 16] clusters using k-means and assigning a half of them to one category. We consider two different feature extractors, which play the role of âprobe networkâ. One is a collection of polynomial functions of degree d = 3, the second is 10 random linear features of the form max(0, ax + by + c) where a and b are sampled uniformly between [â1/2, 1/2] and c between [â1, 1].
# B. Robust Fisher Computation
Consider again the loss function (parametrized with the covariance matrix Σ instead of the precision matrix Πfor conve-
# nience of notation):
L(t; D) = Evan (ws) Ap.p(yle)] + BKL(N (w, 5) || M(0,07D).
We will make use of the fact that the Fisher Information matrix is a positive semideï¬nite approximation of the Hessian H of the cross-entropy loss, and coincide with it in local minima [24]. Expanding to the second order around Ëw, we have:
L(W;E) Ewan (od) [Ape o(Yle) + Vw Ape.o(yle)(w â @) 4 5(w )" H(w â0)] + 8 KL(N (a, %) | N(0,07D)) Hy p (ye) + 5tr(2H) + 8 KEN (@,®) |. (0,022) Bw 1 aga + Gtr 4 klog a? â log(|5|) â k] =H, plyle) 4 st(H)
where in the last line used the known expression for the KL divergence of two Gaussian. Taking the derivative with respect to Σ and setting it to zero, we obtain that the expression loss is minimized when Σâ1 = 2 , or, rewritten in term β
of the precision matrices, when
2 BM? A= (u +5 1),
where we have introduced the precision matrices Î = Σâ1 and λ2I = 1/Ï2I.
We can then obtain an estimate of the Hessian H of the cross-entropy loss at the point Ëw, and hence of the FIM, by minimizing the loss L( Ëw, Î) with respect to Î. This is a more robust approximation than the standard deï¬nition, as it depends on the loss in a whole neighborhood of Ëw of size â Î, rather than from the derivatives of the loss at a point. To further make the estimation more robust, and to reduce the number of parameters, we constrain Î to be diagonal, and constrain weights wij belonging to the same ï¬lter to have the same precision Îij. Optimization of this loss can be performed easily using Stochastic Gradient Variational Bayes, and in particular using the local reparametrization trick of [18].
The prior precision λ2 should be picked according to the scale of the weights of each layer. In practice, since the weights of each layer have a different scale, we found it useful to select a different λ2 for each layer, and train it together with Î,
# C. Details of the experiments
# C.1. Training of experts and classiï¬ers
Given a task, we train an expert on it by ï¬ne-tuning an off-the-shelf ResNet-34 pretrained on ImageNet1. Fine-tuning is performed by ï¬rst ï¬xing the weights of the network and retraining from scratch only the ï¬nal classiï¬er for 10 epochs using Adam, and then ï¬ne-tuning all the network together with SGD for 60 epochs with weight decay 5e-4, starting from learning rate 0.001 and decreasing it by a factor 0.1 at epochs 40.
Given an expert, we train a classiï¬er on top of it by replacing the ï¬nal classiï¬cation layer and training it with Adam for 16 epochs. We use weight decay 5e-4 and learning rate 1e-4.
The tasks we train on generally have different number of samples and unbalanced classes. To limit the impact of this imbalance on the training procedure, regardless of the total size of the dataset, in each epoch we always sample 10,000 images with replacement, uniformly between classes. In this way, all epochs have the same length and see approximately the same number of examples for each class. We use this balanced sampling in all experiments, unless noted otherwise.
# C.2. Computation of the TASK2VEC embedding
As the described in the main text, the TASK2VEC embedding is obtained by choosing a probe network, retraining the ï¬nal classiï¬er on the given task, and then computing the Fisher Information Matrix for the weights of the probe network.
Unless speciï¬ed otherwise, we use an off-the-shelf ResNet-34 pretrained on ImageNet as the probe network. The Fisher Information Matrix is computed in a robust way minimizing the loss function L( Ëw; Î) with respect to the precision matrix Î, as described before. To make computation of the embedding faster, instead of waiting for the convergence of the classiï¬er, we train the ï¬nal classiï¬er for 2 epochs using Adam and then we continue to train it jointly with the precision matrix Î using the loss L( Ëw; Î). We constrain Î to be positive by parametrizing it as Î = exp(L), for some unconstrained variable L. While for the classiï¬er we use a low learning rate (1e-4), we found it useful to use an higher learning rate (1e-2) to train L.
# C.3. Training the MODEL2VECembedding
As described in the main text, in the MODEL2VECembedding we aim to learn a vector representation mj = Fj + bj of the j-th model in the collection, which represents both the task the model was trained on (through the TASK2VEC embedding Fj), and the particularities of the model (through the learned parameter bj).
We learn bj by minimizing a k-way classiï¬cation loss which, given a task t, aims to select the model that performs best on the task among a collection of k models. Multiple models may perform similarly and close to optimal: to preserve this information, instead of using a one-hot encoding for the best model, we train using soft-labels obtained as follows:
error; â mean (error) ey, B(y:) = Soft (- Plyi) = Softmax | â a std(error;)
where error;,; is the ground-truth test error obtained by training a classifier for task i on top of the j-th model. Notice that for a > 1, the soft-label y/ reduces to the one-hot encoding of the index of the best performing model. However, for lower aâs, the vector y; contains richer information about the relative performance of the models.
1https://pytorch.org/docs/stable/torchvision/models.html
We obtain our prediction in a similar way: Let di,j = dasym(ti, mj), then we set our model prediction to be
p(y|di,0, . . . , di,k) = Softmax(âγ di),
where the scalar γ > 0 is a learned parameter. Finally, we learn both the mjâs and γ using a cross-entropy loss:
N 1 L= W DErnolPluildios ++, dik)],
which is minimized precisely when p(y|di,0, . . . , di,k) = Ëp(yi).
In our experiments we set α = 20, and minimize the loss using Adam with learning rate 0.05, weight decay 0.0005, and early stopping after 81 epochs, and report the leave-one-out error (that is, for each task we train using the ground truth of all other tasks and test on that task alone, and report the average of the test errors obtained in this way).
# D. Datasets, tasks and meta-tasks
Our two model selection meta-tasks, iNat+CUB and Mixed, are curated as follows. For iNat+CUB, we generated 50 tasks and (the same) experts from iNaturalist and CUB. The 50 tasks consist of 25 iNaturalist tasks and 25 CUB tasks to provide a balanced mix from two datasets of the same domain. We generated the 25 iNaturalist tasks by grouping species into orders and then choosing the top 25 orders with the most samples. The number of samples for tasks shows the heavy-tail distribution typical of real data, with the top task having 64,100 samples (the Passeriformes order classiï¬cation task), while most tasks have around 6, 000 samples.
The 25 CUB tasks were similarly generated with 10 order tasks but additionally has 15 Passeriformes family tasks: After grouping CUB into orders, we determined 11 usable order tasks (the only unusable order task, Gaviiformes, has only one species so it makes no sense to train on it). However, one of the ordersâPasseriformesâdominated all other orders with 134 species when compared to 3-24 species of other orders. Therefore, we decided to further subdivide the Passeriformes order task into family tasks (i.e., grouping species into families) to provide a more balanced partition. This resulted in 15 usable family tasks (i.e., has more than one species) out of 22 family tasks. Unlike iNaturalist, tasks from CUB have only a few hundreds of samples and hence beneï¬t more from carefully selecting an expert.
In the iNAT+CUB meta-task the classiï¬cation tasks are the same tasks used to train the experts. To avoid trivial solu- tions (always selecting the expert trained on the task we are trying to solve) we test in a leave-one-out fashion: given a classï¬cication task, we aim to select the best expert that was not trained on the same data.
For the Mixed meta-task, we chose 40 random tasks and 25 curated experts from all datasets. The 25 experts were generated from iNaturalist, iMaterialist and DeepFashion (CUB, having fewer samples than iNaturalist, is more appropriate as tasks). For iNaturalist, we trained 15 experts: 8 order tasks and 7 class tasks (species ordered by class), both with number of samples greater than 10,000. For DeepFashion, we trained 3 category experts (upper-body, lower-body, full-body). For iMaterialist, we trained 2 category experts (pants, shoes) and 5 multi-label experts by grouping attributes (color, gender, neckline, sleeve, style). For the purposes of clustering attributes into larger groups for training experts (and color coding the dots in Figure 1), we obtained a de-anonymized list of the iMaterialist Fashion attribute names from the FGCV contest organizers.
The 40 random tasks were generated as follows. In order to balance tasks among all datasets, we selected 5 CUB, 15 iNaturalist, 15 iMaterialist and 5 DeepFashion tasks. Within those datasets, we randomly pick tasks with a sufï¬cient number of validation samples and maximum variety. For the iNaturalist tasks, we group the order tasks into class tasks, ï¬lter out the number of validation samples less than 100 and randomly pick order tasks within each class. For the iMaterialist tasks, we similarly group the tasks (e.g. category, style, pattern), ï¬lter out tasks with less than 1,000 validation samples and randomly pick tasks within each group. For CUB, we randomly select 2 order tasks and 3 Passeriformes family tasks, and for DeepFashion, we randomly select the tasks uniformly. All this ensures that we have a balanced variety of tasks.
For the data efï¬ciency experiment, we trained on a subset of the tasks and experts in the Mixed meta-task: We picked the Accipitriformes, Asparagales, Upper-body, Short Sleeves for the tasks, and the Color, Lepidoptera, Upper-body, Passer- iformes, Asterales for the experts. Tasks where selected among those that have more than 30,000 training samples in order to represent all datasets. The experts were also selected to be representative of all datasets, and contain both strong and very weak experts (such as the Color expert).
# E. Error matrices
{cum rocetastormes aves {15 ge PALER EILEOIEE Soc oe YE, 5 Hs Si Wiles MM yy Oe ee Sere SEPP AE POS POPE Qe $
{cum rocetastormes {15 ge PALER EILEOIEE Soc oe YE, 5 Hs Si Wiles MM yy Oe ee Sere SEPP AE POS POPE Qe $
Figure 6: Meta-tasks ground-truth error matrices. (Left) Error matrix for the CUB+iNat meta-task. The numbers in each cell is the test error obtained by training a classiï¬er on a given combination of task (rows) and expert (columns). The background color represent the Asymmetric TASK2VEC distance between the target task and the task used to train the expert. Numbers in red indicate the selection made by the model selection algorithm based on the Asymmetric TASK2VEC embedding. The (out-of-diagonal) optimal expert (when different from the one selected by our algorithm), is highlighted in blue. (Right) Same as before, but for the Mixed meta-task. | {
"id": "1711.08856"
} |
1902.01007 | Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference | A machine learning system can score well on a given test set by relying on
heuristics that are effective for frequent example types but break down in more
challenging cases. We study this issue within natural language inference (NLI),
the task of determining whether one sentence entails another. We hypothesize
that statistical NLI models may adopt three fallible syntactic heuristics: the
lexical overlap heuristic, the subsequence heuristic, and the constituent
heuristic. To determine whether models have adopted these heuristics, we
introduce a controlled evaluation set called HANS (Heuristic Analysis for NLI
Systems), which contains many examples where the heuristics fail. We find that
models trained on MNLI, including BERT, a state-of-the-art model, perform very
poorly on HANS, suggesting that they have indeed adopted these heuristics. We
conclude that there is substantial room for improvement in NLI systems, and
that the HANS dataset can motivate and measure progress in this area | http://arxiv.org/pdf/1902.01007 | R. Thomas McCoy, Ellie Pavlick, Tal Linzen | cs.CL | Camera-ready for ACL 2019 | null | cs.CL | 20190204 | 20190624 | 9 1 0 2
n u J 4 2 ] L C . s c [
4 v 7 0 0 1 0 . 2 0 9 1 : v i X r a
# Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference
R. Thomas McCoy,1 Ellie Pavlick,2 & Tal Linzen1 1Department of Cognitive Science, Johns Hopkins University 2Department of Computer Science, Brown University tom.mccoy@jhu.edu, ellie pavlick@brown.edu, tal.linzen@jhu.edu
# Abstract
A machine learning system can score well on a given test set by relying on heuristics that are effective for frequent example types but break down in more challenging cases. We study this issue within natural language inference (NLI), the task of determining whether one sentence entails another. We hypothesize that statisti- cal NLI models may adopt three fallible syn- tactic heuristics: the lexical overlap heuristic, the subsequence heuristic, and the constituent heuristic. To determine whether models have adopted these heuristics, we introduce a con- trolled evaluation set called HANS (Heuris- tic Analysis for NLI Systems), which contains many examples where the heuristics fail. We ï¬nd that models trained on MNLI, including BERT, a state-of-the-art model, perform very poorly on HANS, suggesting that they have indeed adopted these heuristics. We conclude that there is substantial room for improvement in NLI systems, and that the HANS dataset can motivate and measure progress in this area.
example, neural networks trained to recognize ob- jects are misled by contextual heuristics: a net- work that is able to recognize monkeys in a typ- ical context with high accuracy may nevertheless label a monkey holding a guitar as a human, since in the training set guitars tend to co-occur with hu- mans but not monkeys (Wang et al., 2018). Sim- ilar heuristics arise in visual question answering systems (Agrawal et al., 2016).
The current paper addresses this issue in the do- main of natural language inference (NLI), the task of determining whether a premise sentence entails (i.e., implies the truth of) a hypothesis sentence (Condoravdi et al., 2003; Dagan et al., 2006; Bow- man et al., 2015). As in other domains, neural NLI models have been shown to learn shallow heuris- tics, in this case based on the presence of speciï¬c words (Naik et al., 2018; Sanchez et al., 2018). For example, a model might assign a label of contra- diction to any input containing the word not, since not often appears in the examples of contradiction in standard NLI training sets.
# Introduction
Neural networks excel at learning the statistical patterns in a training set and applying them to test cases drawn from the same distribution as the training examples. This strength can also be a weakness: statistical learners such as standard neural network architectures are prone to adopting shallow heuristics that succeed for the majority of training examples, instead of learning the underly- ing generalizations that they are intended to cap- ture. If such heuristics often yield correct outputs, the loss function provides little incentive for the model to learn to generalize to more challenging cases as a human performing the task would.
The focus of our work is on heuristics that are based on superï¬cial syntactic properties. Con- sider the following sentence pair, which has the target label entailment:
(1) Premise: The judge was paid by the actor. Hypothesis: The actor paid the judge.
An NLI system that labels this example correctly might do so not by reasoning about the meanings of these sentences, but rather by assuming that the premise entails any hypothesis whose words all appear in the premise (Dasgupta et al., 2018; Naik et al., 2018). Crucially, if the model is using this heuristic, it will predict entailment for (2) as well, even though that label is incorrect in this case:
This issue has been documented across domains in artiï¬cial intelligence. In computer vision, for
(2) Premise: The actor was paid by the judge. Hypothesis: The actor paid the judge.
Deï¬nition Example The doctor was paid by the actor. ââââââ The doctor paid the actor. WRONG Assume that a premise entails all of its contiguous subsequences. The doctor near the actor danced. ââââââ The actor danced. WRONG Assume that a premise entails all complete subtrees in its parse tree. If the artist slept, the actor ran. ââââââ The artist slept. WRONG
Table 1: The heuristics targeted by the HANS dataset, along with examples of incorrect entailment predictions that these heuristics would lead to.
We introduce a new evaluation set called HANS (Heuristic Analysis for NLI Systems), designed to diagnose the use of such fallible structural heuris- tics.1 We target three heuristics, deï¬ned in Ta- ble 1. While these heuristics often yield correct labels, they are not valid inference strategies be- cause they fail on many examples. We design our dataset around such examples, so that models that employ these heuristics are guaranteed to fail on particular subsets of the dataset, rather than sim- ply show lower overall accuracy.
We evaluate four popular NLI models, includ- ing BERT, a state-of-the-art model (Devlin et al., 2019), on the HANS dataset. All models per- formed substantially below chance on this dataset, barely exceeding 0% accuracy in most cases. We conclude that their behavior is consistent with the hypothesis that they have adopted these heuristics.
Contributions: This paper has three main con- tributions. First, we introduce the HANS dataset, an NLI evaluation set that tests speciï¬c hypotheses about invalid heuristics that NLI models are likely to learn. Second, we use this dataset to illumi- nate interpretable shortcomings in state-of-the-art models trained on MNLI (Williams et al., 2018b); these shortcoming may arise from inappropriate model inductive biases, from insufï¬cient signal provided by training datasets, or both. Third, we show that these shortcomings can be made less se- vere by augmenting a modelâs training set with the types of examples present in HANS. These results indicate that there is substantial room for improve- ment for current NLI models and datasets, and that HANS can serve as a tool for motivating and mea- suring progress in this area.
# 2 Syntactic Heuristics
We focus on three heuristics: the lexical overlap heuristic, the subsequence heuristic, and the con- stituent heuristic, all deï¬ned in Table 1. These heuristics form a hierarchy: the constituent heuris- tic is a special case of the subsequence heuristic, which in turn is a special case of the lexical over- lap heuristic. Table 2 in the next page gives exam- ples where each heuristic succeeds and fails.
There are two reasons why we expect these heuristics to be adopted by a statistical learner trained on standard NLI training datasets such as SNLI (Bowman et al., 2015) or MNLI (Williams et al., 2018b). First, the MNLI training set con- tains far more examples that support the heuristics than examples that contradict them:2
Heuristic Supporting Cases Contradicting Cases Lexical overlap Subsequence Constituent 2,158 1,274 1,004 261 72 58
Even the 261 contradicting cases in MNLI may not provide strong evidence against the heuristics. For example, 133 of these cases contain negation in the premise but not the hypothesis, as in (3). In- stead of using these cases to overrule the lexical overlap heuristic, a model might account for them by learning to assume that the label is contradic- tion whenever there is negation in the premise but not the hypothesis (McCoy and Linzen, 2019):
# (3) a. I donât care. + I care.
b. This is not a contradiction. -+ This is a
contradiction.
1GitHub repository with data and code: https:// github.com/tommccoy1/hans
2In this table, the lexical overlap counts include the sub- sequence counts, which include the constituent counts.
Heuristic Premise Hypothesis Label Lexical overlap heuristic The banker near the judge saw the actor. The lawyer was advised by the actor. The doctors visited the lawyer. The judge by the actor stopped the banker. The banker stopped the actor. The banker saw the actor. The actor advised the lawyer. The lawyer visited the doctors. E E N N Subsequence The artist and the student called the judge. The student called the judge. heuristic Angry tourists helped the lawyer. The judges heard the actors resigned. The senator near the lawyer danced. Tourists helped the lawyer. The judges heard the actors. The lawyer danced. E E N N Constituent heuristic Before the actor slept, the senator ran. The lawyer knew that the judges shouted. If the actor slept, the judge saw the artist. The lawyers resigned, or the artist slept. The actor slept. The judges shouted. The actor slept. The artist slept. E E N N
Table 2: Examples of sentences used to test the three heuristics. The label column shows the correct label for the sentence pair; E stands for entailment and N stands for non-entailment. A model relying on the heuristics would label all examples as entailment (incorrectly for those marked as N).
There are some examples in MNLI that contradict the heuristics in ways that are not easily explained away by other heuristics; see Appendix A for ex- amples. However, such cases are likely too rare to discourage a model from learning these heuris- tics. MNLI contains data from multiple genres, so we conjecture that the scarcity of contradicting examples is not just a property of one genre, but rather a general property of NLI data generated in the crowdsourcing approach used for MNLI. We thus hypothesize that any crowdsourced NLI dataset would make our syntactic heuristics attrac- tive to statistical learners without strong linguistic priors.
The second reason we might expect current NLI models to adopt these heuristics is that their in- put representations may make them susceptible to these heuristics. The lexical overlap heuristic dis- regards the order of the words in the sentence and considers only their identity, so it is likely to be adopted by bag-of-words NLI models (e.g., Parikh et al. 2016). The subsequence heuristic considers linearly adjacent chunks of words, so one might expect it to be adopted by standard RNNs, which process sentences in linear order. Finally, the con- stituent heuristic appeals to components of the parse tree, so one might expect to see it adopted by tree-based NLI models (Bowman et al., 2016).
# 3 Dataset Construction
For each heuristic, we generated ï¬ve templates for examples that support the heuristic and ï¬ve tem-
plates for examples that contradict it. Below is one template for the subsequence heuristic; see Appendix B for a full list of templates.
(4) The N; P the No V. + The Ng V. The lawyer by the actor ran. ~ The actor ran.
We generated 1,000 examples from each template, for a total of 10,000 examples per heuristic. Some heuristics are special cases of others, but we made sure that the examples for one heuristic did not also fall under a more narrowly deï¬ned heuris- tic. That is, for lexical overlap cases, the hy- pothesis was not a subsequence or constituent of the premise; for subsequence cases, the hypothe- sis was not a constituent of the premise.
# 3.1 Dataset Controls
Plausibility: One advantage of generating ex- amples from templatesâinstead of, e.g., modify- ing naturally-occurring examplesâis that we can ensure the plausibility of all generated sentences. For example, we do not generate cases such as The student read the book - The book read the student, which could ostensibly be solved using a hypothesis-plausibility heuristic. To achieve this, we drew our core vocabulary from|Ettinger et al. (2018), where every noun was a plausible subject of every verb or a plausible object of every transi- tive verb. Some templates required expanding this core vocabulary; in those cases, we manually cu- rated the additions to ensure plausibility.
Selectional criteria: Some of our example types depend on the availability of lexically-speciï¬c verb frames. For example, (5) requires aware- ness of the fact that believed can take a clause (the lawyer saw the ofï¬cer) as its complement:
(5) The doctor believed the lawyer saw the officer. ~» The doctor believed the lawyer.
It is arguably unfair to expect a model to under- stand this example if it had only ever encountered believe with a noun phrase object (e.g., I believed the man). To control for this issue, we only chose verbs that appeared at least 50 times in the MNLI training set in all relevant frames.
# 4 Experimental Setup
Since HANS is designed to probe for structural heuristics, we selected three models that exem- plify popular strategies for representing the in- put sentence: DA, a bag-of-words model; ESIM, which uses a sequential structure; and SPINN, which uses a syntactic parse tree. In addition to these three models, we included BERT, a state- of-the-art model for MNLI. The following para- graphs provide more details on these models.
DA: The Decomposable Attention model (DA; Parikh et al., 2016) uses a form of attention to align words in the premise and hypothesis and to make predictions based on the aggregation of this align- ment. It uses no word order information and can thus be viewed as a bag-of-words model.
ESIM: The Enhanced Sequential Inference Model (ESIM; Chen et al., 2017) uses a modiï¬ed bidirectional LSTM to encode sentences. We use the variant with a sequential encoder, rather than the tree-based Hybrid Inference Model (HIM).
SPINN: The Parser- Interpreter Neural Network (SPINN; Bowman et al., 2016) is tree-based: it encodes sentences by combining phrases based on a syntactic parse. We use the SPINN-PI-NT variant, which takes a parse tree as an input (rather than learning to parse). For MNLI, we used the parses provided in the MNLI release; for HANS, we used parse templates that we created based on parses from the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003), the same parser used to parse MNLI. Based on manual inspection, this parser generally provided correct parses for HANS examples.
BERT: The Bidirectional Encoder Representa- tions from Transformers model (BERT; Devlin et al., 2019) is a Transformer model that uses attention, rather than recurrence, to process sen- tences. We use the bert-base-uncased pre- trained model and ï¬ne-tune it on MNLI.
Implementation and evaluation: For DA and ESIM, we used the implementations from Al- lenNLP (Gardner et al., 2017). For SPINN3 and BERT,4 we used code from the GitHub reposito- ries for the papers introducing those models.
We trained all models on MNLI. MNLI uses three labels (entailment, contradiction, and neu- tral). We chose to annotate HANS with two la- bels only (entailment and non-entailment) because the distinction between contradiction and neutral was often unclear for our cases.5 For evaluating a model on HANS, we took the highest-scoring la- bel out of entailment, contradiction, and neutral; we then translated contradiction or neutral labels to non-entailment. An alternate approach would have been to add the contradiction and neutral scores to determine a score for non-entailment; we found little difference between these approaches, since the models almost always assigned more than 50% of the label probability to a single label.6
# 5 Results
All models achieved high scores on the MNLI test set (Figure 1a), replicating the accuracies found in past work (DA: Gururangan et al. 2018; ESIM: Williams et al. 2018b; SPINN: Williams et al. 2018a; BERT: Devlin et al. 2019). On the HANS dataset, all models almost always assigned the cor- rect label in the cases where the label is entail- ment, i.e., where the correct answer is in line with the hypothesized heuristics. However, they all per- formed poorlyâwith accuracies less than 10% in most cases, when chance is 50%âon the cases where the heuristics make incorrect predictions
3https://github.com/stanfordnlp/spinn; we used the NYU fork at https://github.com/ nyu-mll/spinn.
4https://github.com/google-research/ bert
âor example, with The actor was helped by the judge + The actor helped the judge, it is possible that the actor did help the judge, pointing to a label of neutral; yet the premise does pragmatically imply that the actor did not help the judge, meaning that this pair could also fit the non-strict definition of contradiction used in NLI annotation.
6We also tried training the models on MNLI with neutral and contradiction collapsed into non-entailment; this gave similar results as collapsing after training (Appendix D) .
(a) (b)
Figure 1: (a) Accuracy on the MNLI test set. (b) Accuracies on the HANS evaluation set, which has six sub- components, each deï¬ned by its correct label and the heuristic it addresses. Dashed lines show chance performance. All models behaved as we would expect them to if they had adopted the heuristics targeted by HANS. That is, they nearly always predicted entailment for the examples in HANS, leading to near-perfect accuracy when the true label is entailment, and near-zero accuracy when the true label is non-entailment. Exact results are in Appendix G.
(Figure 1b). Thus, despite their high scores on the MNLI test set, all four models behaved in a way consistent with the use of the heuristics targeted in HANS, and not with the correct rules of inference.
Comparison of models: Both DA and ESIM had near-zero performance across all three heuris- tics. These models might therefore make no dis- tinction between the three heuristics, but instead treat them all as the same phenomenon, i.e. lexi- cal overlap. Indeed, for DA, this must be the case, as this model does not have access to word order; ESIM does in theory have access to word order in- formation but does not appear to use it here.
SPINN had the best performance on the sub- sequence cases. This might be due to the tree- based nature of its input: since the subsequences targeted in these cases were explicitly chosen not to be constituents, they do not form cohesive units in SPINNâs input in the way they do for sequential models. SPINN also outperformed DA and ESIM on the constituent cases, suggesting that SPINNâs tree-based representations moderately helped it learn how speciï¬c constituents contribute to the overall sentence. Finally, SPINN did worse than the other models on constituent cases where the correct answer is entailment. This moderately greater balance between accuracy on entailment and non-entailment cases further indicates that SPINN is less likely than the other models to as- sume that constituents of the premise are entailed; this harms its performance in cases where that as- sumption happens to lead to the correct answer.
BERT did slightly worse than SPINN on the subsequence cases, but performed noticeably less
poorly than all other models at both the constituent and lexical overlap cases (though it was still far below chance). Its performance particularly stood out for the lexical overlap cases, suggesting that some of BERTâs success at MNLI may be due to a greater tendency to incorporate word order infor- mation compared to other models.
Analysis of particular example types: In the cases where a modelâs performance on a heuris- tic was perceptibly above zero, accuracy was not evenly spread across subcases (for case-by-case results, see Appendix |C). For example, within the lexical overlap cases, BERT achieved 39% accu- racy on conjunction (e.g., The actor and the doctor saw the artist + The actor saw the doctor) but 0% accuracy on subject/object swap (The judge called the lawyer + The lawyer called the judge). Within the constituent heuristic cases, BERT achieved 49% accuracy at determining that a clause embed- ded under if and other conditional words is not en- tailed (Uf the doctor resigned, the lawyer danced ~» The doctor resigned), but 0% accuracy at iden- tifying that the clause outside of the conditional clause is also not entailed (/f the doctor resigned, the lawyer danced ~ The lawyer danced).
# 6 Discussion
Independence of heuristics: Though each heuristic is most closely related to one class of model (e.g., the constituent heuristic is related to tree-based models), all models failed on cases illustrating all three heuristics. This ï¬nding is un- surprising since these heuristics are closely related
to each other, meaning that an NLI model may adopt all of them, even the ones not speciï¬cally targeting that class of model. For example, the subsequence and constituent heuristics are special cases of the lexical overlap heuristic, so all models can fail on cases illustrating all heuristics, because all models have access to individual words.
Though the heuristics form a hierarchyâthe constituent heuristic is a subcase of the subse- quence heuristic, which is a subcase of the lexical overlap heuristicâthis hierarchy does not neces- sarily predict the performance of our models. For example, BERT performed worse on the subse- quence heuristic than on the constituent heuristic, even though the constituent heuristic is a special case of the subsequence heuristic. Such behavior has two possible causes. First, it could be due to the speciï¬c cases we chose for each heuristic: the cases chosen for the subsequence heuristic may be inherently more challenging than the cases cho- sen for the constituent heuristic, even though the constituent heuristic as a whole is a subset of the subsequence one. Alternately, it is possible for a model to adopt a more general heuristic (e.g., the subsequence heuristic) but to make an exception for some special cases (e.g., the cases to which the constituent heuristic could apply).
Do the heuristics arise from the architecture or the training set? The behavior of a trained model depends on both the training set and the modelâs architecture. The modelsâ poor results on HANS could therefore arise from architectural limitations, from insufï¬cient signal in the MNLI training set, or from both.
The fact that SPINN did markedly better at the constituent and subsequence cases than ESIM and DA, even though the three models were trained on the same dataset, suggests that MNLI does con- tain some signal that can counteract the appeal of the syntactic heuristics tested by HANS. SPINNâs structural inductive biases allow it to leverage this signal, but the other modelsâ biases do not.
Other sources of evidence suggest that the mod- elsâ failure is due in large part to insufï¬cient signal from the MNLI training set, rather than the mod- elsâ representational capacities alone. The BERT model we used (bert-base-uncased) was found by Goldberg (2019) to achieve strong results in syntactic tasks such as subject-verb agreement prediction, a task that minimally requires a distinc- tion between the subject and direct object of a sen-
tence (Linzen et a ] 2016} Gulordava et al. 2018} (2018). Despite this evidence that BERT has access to relevant syntactic infor- mation, its accuracy was 0% on the subject-object swap cases (e.g., The doctor saw the lawyer + The lawyer saw the doctor). We believe it is un- likely that our fine-tuning step on MNLI, a much smaller corpus than the corpus BERT was trained on, substantially changed the modelâs representa- tional capabilities. Even though the model most likely had access to information about subjects and objects, then, MNLI did not make it clear how that information applies to inference. Supporting this conclusion, found little evi- dence of compositional structure in the InferSent model, which was trained on SNLI, even though the same model type (an RNN) did learn clear compositional structure when trained on tasks that underscored the need for such structure. These re- sults further suggest that the modelsâ poor compo- sitional behavior arises more because of the train- ing set than because of model architecture.
Finally, our BERT-based model differed from the other models in that it was pretrained on a massive amount of data on a masking task and a next-sentence classiï¬cation task, followed by ï¬ne- tuning on MNLI, while the other models were only trained on MNLI; we therefore cannot rule out the possibility that BERTâs comparative success at HANS was due to the greater amount of data it has encountered rather than any architectural features.
Is the dataset too difï¬cult? To assess the dif- ï¬culty of our dataset, we obtained human judg- ments on a subset of HANS from 95 participants on Amazon Mechanical Turk as well as 3 expert annotators (linguists who were unfamiliar with HANS: 2 graduate students and 1 postdoctoral re- searcher). The average accuracy was 76% for Me- chanical Turk participants and 97% for expert an- notators; further details are in Appendix F.
Our Mechanical Turk results contrast with those of Nangia and Bowman (2019), who report an ac- curacy of 92% in the same population on examples from MNLI; this indicates that HANS is indeed more challenging for humans than MNLI is. The difï¬culty of some of our examples is in line with past psycholinguistic work in which humans have been shown to incorrectly answer comprehension questions for some of our subsequence subcases. For example, in an experiment in which partici- pants read the sentence As Jerry played the violin
gathered dust in the attic, some participants an- swered yes to the question Did Jerry play the vio- lin? (Christianson et al., 2001).
Crucially, although Mechanical Turk annotators found HANS to be harder overall than MNLI, their accuracy was similar whether the correct answer was entailment (75% accuracy) or non-entailment (77% accuracy). The contrast between the balance in the human errors across labels and the stark im- balance in the modelsâ errors (Figure 1b) indicates that human errors are unlikely to be driven by the heuristics targeted in the current work.
# 7 Augmenting the training data with HANS-like examples
The failure of the models we tested raises the ques- tion of what it would take to do well on HANS. One possibility is that a different type of model would perform better. For example, a model based on hand-coded rules might handle HANS well. However, since most models we tested are in the- ory capable of handling HANSâs examples but failed to do so when trained on MNLI, it is likely that performance could also be improved by train- ing the same architectures on a dataset in which these heuristics are less successful.
To test that hypothesis, we retrained each model on the MNLI training set augmented with a dataset structured exactly like HANS (i.e. using the same thirty subcases) but containing no speciï¬c exam- ples that appeared in HANS. Our additions com- prised 30,000 examples, roughly 8% of the size of the original MNLI training set (392,702 ex- In general, the models trained on the amples). augmented MNLI performed very well on HANS (Figure 2); the one exception was that the DA model performed poorly on subcases for which a bag-of-words representation was inadequate.7 This experiment is only an initial exploration and leaves open many questions about the conditions under which a model will successfully avoid a heuristic; for example, how many contradicting examples are required? At the same time, these results do suggest that, to prevent a model from learning a heuristic, one viable approach is to use a training set that does not support this heuristic.
7The effect on MNLI test set performance was less clear; the augmentation with HANS-like examples improved MNLI test set performance for BERT (84.4% vs. 84.1%) and ESIM (77.6% vs 77.3%) but hurt performance for DA (66.0% vs. 72.4%) and SPINN (63.9% vs. 67.0%).
Lexical overlap Subsequence Constituent 100% 75% m 9 50% - 7 - -BR- - - - - - - ) 9 g& p> 25% 0% 100% z 75% 8 50%+----f-- Pees 25% i = 0% S rSS SFE FHSS FTHAS SESE SESE SESE
§ 8 <x
Figure 2: HANS accuracies for models trained on MNLI plus examples of all 30 categories in HANS.
Transfer across HANS subcases: The positive results of the HANS-like augmentation experi- ment are compatible with the possibility that the models simply memorized the templates that made up HANSâs thirty subcases. To address this, we re- trained our models on MNLI augmented with sub- sets of the HANS cases (withholding some cases; see Appendix E for details), then tested the models on the withheld cases.
The results of one of the transfer experiments, using BERT, are shown in Table 3. There were some successful cases of transfer; e.g., BERT performed well on the withheld categories with sentence-initial adverbs, regardless of whether the correct label was non-entailment or entailment. Such successes suggest that BERT is able to learn from some speciï¬c subcases that it should rule out the broader heuristics; in this case, the non- withheld cases plausibly informed BERT not to indiscriminately follow the constituent heuristic, encouraging it to instead base its judgments on the speciï¬c adverbs in question (e.g., certainly vs. probably). However, the models did not always transfer successfully; e.g., BERT had 0% accu- racy on entailed passive examples when such ex- amples were withheld, likely because the training set still included many non-entailed passive exam- ples, meaning that BERT may have learned to as- sume that all sentences with passive premises are cases of non-entailment. Thus, though the models do seem to be able to rule out the broadest ver- sions of the heuristics and transfer that knowledge to some new cases, they may still fall back to the heuristics for other cases. For further results in- volving withheld categories, see Appendix E.
Transfer to an external dataset: Finally, we tested models on the comp same short and
Withheld category Results . oo 100% Lexical overlap: Conjunctions (+) The doctor saw the author and the tourist. 50% 7 lp ~ The author saw the tourist. 0% | | â_ MNLD MNLI+ . . 100% Lexical overlap: Passives (â>) The authors were helped by the actor. 50% 7pm â The actor helped the authors. 0% MNLD MNLI+ 100% Subsequence: NP/Z (+) Before the actor moved the doctor arrived. 50% fp onnnnom ~ The actor moved the doctor. oy | MNLL MNLI+ 100% + Subsequence: PP on object (â>) The authors saw the judges by the doctor. 50%) - a â The authors saw the judges. 0% MNLL MNLI+ Constituent: Adverbs (-+) Probably the artists helped the authors. 50% 7- I ~ The artists helped the authors. o% | MNLL MNLI+ Constituent: Adverbs (+) Certainly the lawyers shouted. 50% 14 â The lawyers shouted. 0% MNLL MNLI+
Table 3: Accuracies for BERT fine-tuned on basic MNLI and on MNLI+, which is MNLI augmented with most HANS categories except withholding the cate- gories in this table. The two lexical overlap cases shown here are adversarial in that MNLI+ contains cases superficially similar to them but with opposite la- bels (namely, the Conjunctions (â) and Passives (>) cases from Table jin the Appendix). The remaining cases in this table are not adversarial in this way.
comp same long datasets from Dasgupta et al. (2018), which consist of lexical overlap cases:
(6) the famous and arrogant cat is not more nasty than the dog with glasses in a white dress. the dog with glasses in a white dress is not more nasty than the famous and arrogant cat.
This dataset differs from HANS in at least three important ways: it is based on a phenomenon not present in HANS (namely, comparatives); it uses a different vocabulary from HANS; and many of its sentences are semantically implausible.
We used this dataset to test both BERT ï¬ne- tuned on MNLI, and BERT ï¬ne-tuned on MNLI augmented with HANS-like examples. The aug- mentation improved performance modestly for the long examples and dramatically for the short ex- amples, suggesting that training with HANS-like examples has beneï¬ts that extend beyond HANS.8
8We hypothesize that HANS helps more with short exam- ples because most HANS sentences are short.
Short Long 100% 75% = 50% +- -- -- 25% + 0% 4 100% + 75% 50% +------- -- 25% oy MNLI MNLI+ poyeyug pajiejuaâuoN MNLI MNLI+
# Accuracy
Figure 3: Results on the lexical overlap cases from Dasgupta et al. (2018) for BERT ï¬ne-tuned on MNLI or on MNLI augmented with HANS-like examples.
# 8 Related Work
# 8.1 Analyzing trained models
This project relates to an extensive body of re- search on exposing and understanding weaknesses in modelsâ learned behavior and representations. In the NLI literature, Poliak et al. (2018b) and Gururangan et al. (2018) show that, due to bi- ases in NLI datasets, it is possible to achieve far better than chance accuracy on those datasets by only looking at the hypothesis. Other recent works address possible ways in which NLI models might use fallible heuristics, focusing on semantic phenomena, such as lexical inferences (Glockner et al., 2018) or quantiï¬ers (Geiger et al., 2018), or biases based on speciï¬c words (Sanchez et al., 2018). Our work focuses instead on structural phenomena, following the proof-of-concept work done by Dasgupta et al. (2018). Our focus on using NLI to address how models capture struc- ture follows some older work about using NLI for the evaluation of parsers (Rimell and Clark, 2010; Mehdad et al., 2010).
NLI has been used to investigate many other types of linguistic information besides syntactic structure (Poliak et al., 2018a; White et al., 2017). Outside NLI, multiple projects have used classiï¬- cation tasks to understand what linguistic and/or structural information is present in vector encod- ings of sentences (e.g., Adi et al., 2017; Ettinger et al., 2018; Conneau et al., 2018). We instead choose the behavioral approach of using task per- formance on critical cases. Unlike the classiï¬ca- tion approach, this approach is agnostic to model structure; our dataset could be used to evaluate a symbolic NLI system just as easily as a neu- ral one, whereas typical classiï¬cation approaches only work for models with vector representations.
# 8.2 Structural heuristics
Similar to our lexical overlap heuristic, Dasgupta et al. (2018), Nie et al. (2018), and Kim et al. (2018) also tested NLI models on speciï¬c phe- nomena where word order matters; we use a larger set of phenomena to study a more general notion of lexical overlap that is less dependent on the properties of a single phenomenon, such as pas- sives. Naik et al. (2018) also ï¬nd evidence that NLI models use a lexical overlap heuristic, but our approach is substantially different from theirs.9
This work builds on our pilot study in McCoy and Linzen (2019), which studied one of the sub- cases of the subsequence heuristic. Several of our subsequence subcases are inspired by psy- cholinguistics research (Bever, 1970; Frazier and Rayner, 1982; Tabor et al., 2004); these works have aims similar to ours but are concerned with the representations used by humans rather than neural networks.
Finally, all of our constituent heuristic subcases depend on the implicational behavior of speciï¬c words. Several past works (Pavlick and Callison- Burch, 2016; Rudinger et al., 2018; White et al., 2018; White and Rawlins, 2018) have studied such behavior for verbs (e.g., He knows it is raining en- tails It is raining, while He believes it is raining does not). We extend that approach by including other types of words with speciï¬c implicational behavior, namely conjunctions (and, or), preposi- tions that take clausal arguments (if, because), and adverbs (deï¬nitely, supposedly). MacCartney and Manning (2009) also discuss the implicational be- havior of these various types of words within NLI.
# 8.3 Generalization
Our work suggests that test sets drawn from the same distribution as the training set may be inade- quate for assessing whether a model has learned to perform the intended task. Instead, it is also neces- sary to evaluate on a generalization set that departs from the training distribution. McCoy et al. (2018) found a similar result for the task of question for- mation; different architectures that all succeeded on the test set failed on the generalization set in different ways, showing that the test set alone was not sufï¬cient to determine what the models had
9Naik et al. (2018) diagnose the lexical overlap heuristic by appending and true is true to existing MNLI hypotheses, which decreases lexical overlap but does not change the sen- tence pairâs label. We instead generate new sentence pairs for which the words in the hypothesis all appear in the premise.
learned. This effect can arise not just from differ- ent architectures but also from different initializa- tions of the same architecture (Weber et al., 2018).
# 9 Conclusions
Statistical learners such as neural networks closely track the statistical regularities in their training sets. This process makes them vulnerable to adopting heuristics that are valid for frequent cases but fail on less frequent ones. We have inves- tigated three such heuristics that we hypothesize To evaluate NLI models are likely to learn. whether NLI models do behave consistently with these heuristics, we have introduced the HANS dataset, on which models using these heuristics are guaranteed to fail. We ï¬nd that four exist- ing NLI models perform very poorly on HANS, suggesting that their high accuracies on NLI test sets may be due to the exploitation of invalid heuristics rather than deeper understanding of lan- guage. However, these models performed sig- niï¬cantly better on both HANS and on a sepa- rate structure-dependent dataset when their train- ing data was augmented with HANS-like exam- ples. Overall, our results indicate that, despite the impressive accuracies of state-of-the-art mod- els on standard evaluations, there is still much progress to be made and that targeted, challenging datasets, such as HANS, are important for deter- mining whether models are learning what they are intended to learn.
# Acknowledgments
We are grateful to Adam Poliak, Benjamin Van Durme, Samuel Bowman, the members of the JSALT General-Purpose Sentence Representation Learning team, and the members of the Johns Hopkins Computation and Psycholinguistics Lab for helpful comments, and to Brian Leonard for assistance with the Mechanical Turk experiment. Any errors remain our own.
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1746891 and the 2018 Jelinek Summer Workshop on Speech and Language Technology (JSALT). Any opinions, ï¬ndings, and conclusions or recom- mendations expressed in this material are those of the authors and do not necessarily reï¬ect the views of the National Science Foundation or the JSALT workshop.
# References
Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained anal- ysis of sentence embeddings using auxiliary predic- tion tasks. In International Conference on Learning Representations.
Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. Analyzing the behavior of visual question an- swering models. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 1955â1960. Association for Com- putational Linguistics.
Thomas G. Bever. 1970. The cognitive basis for lin- guistic structures.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632â642, Lisbon, Portugal. Association for Compu- tational Linguistics.
Samuel R. Bowman, Jon Gauthier, Abhinav Ras- togi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast uniï¬ed model for In Proceed- parsing and sentence understanding. ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1466â1477. Association for Computa- tional Linguistics.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for In Proceedings of the natural language inference. 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1657â1668. Association for Computational Linguis- tics.
Kiel Christianson, Andrew Hollingworth, John F Hal- liwell, and Fernanda Ferreira. 2001. Thematic roles assigned along the garden path linger. Cognitive Psychology, 42(4):368â407.
Cleo Condoravdi, Dick Crouch, Valeria de Paiva, Rein- hard Stolle, and Daniel G. Bobrow. 2003. Entail- ment, intensionality and text understanding. In Pro- ceedings of the HLT-NAACL 2003 Workshop on Text Meaning.
Alexis Conneau, Germ´an Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2126â2136. Association for Computational Linguistics.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL Recognising Textual Entail- In Proceedings of the First In- ment Challenge.
ternational Conference on Machine Learning Chal- lenges: Evaluating Predictive Uncertainty Visual Object Classiï¬cation, and Recognizing Textual En- tailment, MLCWâ05, pages 177â190, Berlin, Hei- delberg. Springer-Verlag.
Ishita Dasgupta, Demi Guo, Andreas Stuhlm¨uller, Samuel J. Gershman, and Noah D. Goodman. 2018. Evaluating compositionality in sentence embed- In Proceedings of the 40th Annual Confer- dings. ence of the Cognitive Science Society, pages 1596â 1601, Madison, WI.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171â4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sen- In Proceedings of tence vector representations. the 27th International Conference on Computational Linguistics, pages 1790â1801. Association for Com- putational Linguistics.
Lyn Frazier and Keith Rayner. 1982. Making and cor- recting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14(2):178â210.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A Deep Semantic Natural Lan- In Proceedings of the guage Processing Platform. Workshop for NLP Open Source Software (NLP- OSS).
Ignacio Cases, Lauri Karttunen, and Christopher Potts. 2018. Stress-testing neu- language inference with ral models of natural arXiv preprint multiply-quantiï¬ed sentences. arXiv:1810.13033.
Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI Systems with Sentences that In Proceed- Require Simple Lexical Inferences. ings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 650â655. Association for Computa- tional Linguistics.
Yoav Goldberg. 2019. Assessing BERTâs syntactic abilities. arXiv preprint arXiv:1901.05287.
Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless In green recurrent networks dream hierarchically. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies,
Volume 1 (Long Papers), pages 1195â1205. Associ- ation for Computational Linguistics.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural lan- In Proceedings of the 2018 guage inference data. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107â112. Association for Computational Lin- guistics.
Juho Kim, Christopher Malon, and Asim Kadav. 2018. Teaching syntax by adversarial distraction. In Pro- ceedings of the First Workshop on Fact Extraction and VERiï¬cation (FEVER), pages 79â84. Associa- tion for Computational Linguistics.
Dan Klein and Christopher D. Manning. 2003. Ac- curate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Compu- tational Linguistics.
Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521â 535.
Bill MacCartney and Christopher D Manning. 2009. Natural language inference. Ph.D. thesis, Stanford University.
Rebecca Marvin and Tal Linzen. 2018. Targeted syn- In Proceed- tactic evaluation of language models. ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192â1202. Association for Computational Linguistics.
R. Thomas McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: Hi- erarchical generalization without a hierarchical bias in recurrent neural networks. In Proceedings of the 40th Annual Conference of the Cognitive Science Society, pages 2093â2098, Madison, WI.
R. Thomas McCoy and Tal Linzen. 2019. Non-entailed subsequences as a challenge for natural language in- ference. In Proceedings of the Society for Computa- tion in Linguistics, volume 2.
R. Thomas McCoy, Tal Linzen, Ewan Dunbar, and Paul Smolensky. 2019. RNNs implicitly imple- In Interna- ment tensor-product representations. tional Conference on Learning Representations.
Yashar Mehdad, Alessandro Moschitti, and Fabio Mas- simo Zanzotto. 2010. Syntactic/semantic structures for textual entailment recognition. In Human Lan- guage Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 1020â1028. Association for Computational Linguistics.
Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340â2353. Association for Computational Linguistics.
Nikita Nangia and Samuel R. Bowman. 2019. Human vs. muppet: A conservative estimate of human per- formance on the GLUE benchmark.
Yixin Nie, Yicheng Wang, and Mohit Bansal. 2018. Analyzing compositionality-sensitivity of NLI mod- els. arXiv preprint arXiv:1811.07033.
Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249â2255. Association for Computational Linguistics.
Ellie Pavlick and Chris Callison-Burch. 2016. Tense manages to predict implicative behavior in verbs. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 2225â2229. Association for Computational Linguis- tics.
Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018a. Collecting di- verse natural language inference problems for sen- In Proceedings of tence representation evaluation. the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 67â81. Association for Computational Linguistics.
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018b. Hypothesis only baselines in natural language in- In Proceedings of the Seventh Joint Con- ference. ference on Lexical and Computational Semantics, pages 180â191. Association for Computational Lin- guistics.
Laura Rimell and Stephen Clark. 2010. Cambridge: Parser evaluation using textual entailment by gram- matical relation comparison. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 268â271. Association for Computational Lin- guistics.
Rachel Rudinger, Aaron Steven White, and Benjamin Van Durme. 2018. Neural models of factuality. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 731â744. Associa- tion for Computational Linguistics.
Ivan Sanchez, Jeff Mitchell, and Sebastian Riedel. 2018. Behavior analysis of NLI models: Uncov- ering the inï¬uence of three factors on robustness. In Proceedings of the 2018 Conference of the North
American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1975â1985. Associ- ation for Computational Linguistics.
Whitney Tabor, Bruno Galantucci, and Daniel Richard- son. 2004. Effects of merely local syntactic coher- ence on sentence processing. Journal of Memory and Language, 50(4):355â370.
Jianyu Wang, Zhishuai Zhang, Cihang Xie, Yuyin Zhou, Vittal Premachandran, Jun Zhu, Lingxi Xie, and Alan Yuille. 2018. Visual concepts and com- positional voting. Annals of Mathematical Sciences and Applications, 3(1):151â188.
Noah Weber, Leena Shekhar, and Niranjan Balasubra- manian. 2018. The ï¬ne line between linguistic gen- eralization and failure in seq2seq-attention models. In Proceedings of the Workshop on Generalization in the Age of Deep Learning, pages 24â27. Associa- tion for Computational Linguistics.
Aaron Steven White, Pushpendre Rastogi, Kevin Duh, and Benjamin Van Durme. 2017. Inference is ev- erything: Recasting semantic resources into a uni- In Proceedings of the ï¬ed evaluation framework. Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 996â1005. Asian Federation of Natural Lan- guage Processing.
Aaron Steven White and Kyle Rawlins. 2018. The role of veridicality and factivity in clause selection. In Proceedings of the 48th Annual Meeting of the North East Linguistic Society.
Aaron Steven White, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2018. Lexicosyntactic In Proceedings of the inference in neural models. 2018 Conference on Empirical Methods in Natural Language Processing, pages 4717â4724. Associa- tion for Computational Linguistics.
Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018a. Do latent tree learning models identify meaningful structure in sentences? Trans- actions of the Association of Computational Linguis- tics, 6:253â267.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018b. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112â1122. Association for Computational Linguistics.
# A MNLI examples that contradict the HANS heuristics
from The sentences the MNLI training set that contradict the lex- and constituent ical overlap,
heuristics. The full set of all 261 contra- training set dicting examples may be viewed at https://github.com/ tommccoy1/hans/blob/master/mnli_ contradicting_examples.
(7) a. A subcategory of accuracy is consistency. ~ Accuracy is a subcategory of consis- tency.
b. At the same time, top Enron executives were free to exercise their stock options, and some did. -» Top Enron executives were free to exercise.
c. She was chagrined at The Nationâs recent publication of a column by conservative education activist Ron Unz arguing that liberal education reform has been an un- mitigated failure. -» Liberal education re- form has been an unmitigated failure.
# B Templates
Tables 4, 5, and 6 contain the templates for the lexical overlap heuristic, the subsequence heuris- tic, and the constituent heuristic, respectively.
In some cases, a given template has multiple versions, such as one version where a noun phrase modiï¬er attaches to the subject and another where the modiï¬er attaches to the object. For clarity, we have only listed one version of each template here. The full list of templates can be viewed in the code on GitHub.10
# C Fine-grained results
Table 7 shows the results by subcase for models trained on MNLI for the subcases where the cor- rect answer is entailment. Table 8 shows the re- sults by subcase for these models for the subcases where the correct answer is non-entailment.
# D Results for models trained on MNLI with neutral and contradiction merged
Table 9 shows the results on HANS for models trained on MNLI with the labels neutral and con- tradiction merged in the training set into the sin- gle label non-entailment. The results are similar to the results obtained by merging the labels after training, with the models generally outputting en- tailment for all HANS examples, whether that was the correct answer or not.
10https://github.com/tommccoy1/hans
Subcase Template Example Entailment: Untangling relative clauses Entailment: Sentences with PPs Entailment: Sentences with relative clauses Entailment: Conjunctions Entailment: Passives The N; who the Ng V, V2 the N3 â The No V, the Nj. The N; P the No V the N3 + The Ny V the Ng The Nj, that V> Vj the No â The N; V; the No The N, V the Ng and the N3 â The Ni V the N3 The N, were V by the No â The N; V the No The athlete who the judges admired called the manager. â> The judges admired the athlete. The tourists by the actor recommended the authors. â> The tourists recommended the au- thors. The actors that danced saw the author. â The actors saw the author. The secretaries encouraged the scien- tists and the actors. â> The secretaries encouraged the ac- tors. The authors were supported by the tourists. â> The tourists supported the authors. Non-entailment: Subject-object swap Non-entailment: Sentences with PPs Non-entailment: Sentences with relative clauses Non-entailment: Conjunctions Non-entailment: Passives The N, V the No. ~» The Ng V the Nj. The N, P the Ng V the N3 ~ The N3 V the No The N; V, the Ng who the N3 V2 ~~ The No V, the N3 The N, V the Ng and the N3 â~» The No V the N3 The N, were V by the Ng ~~ The N, V the No The senators mentioned the artist. ~» The artist mentioned the senators. The judge behind the manager saw the doctors. ~» The doctors saw the manager. The actors advised the manager who the tourists saw. + The manager advised the tourists. The doctors advised the presidents and the tourists. ~» The presidents advised the tourists. The senators were recommended by the managers. -~» The senators recommended the managers.
Table 4: Templates for the lexical overlap heuristic
Subcase Template Example Entailment: Conjunctions Entailment: Adjectives Entailment: Understood argument Entailment: Relative clause on object Entailment: PP on object The N, and the No V the N3 â The N2 V the N3 Adj Ni V the No +N, V the No The N; V the No â The N; V The N,; V the Ng that V2 the Ns â The N, V; the No The N, V the Ng P the N3 â The Nj V the No The actor and the professor mentioned the lawyer. â> The professor mentioned the lawyer. Happy professors mentioned the lawyer. â> Professors mentioned the lawyer. The author read the book. â The author read. The artists avoided the senators that thanked the tourists. â The artists avoided the senators. The authors supported the judges in front of the doctor. â The authors supported the judges. Non-entailment: NP/S Non-entailment: PP on subject Non-entailment: Relative clause on subject Non-entailment: MV/RR Non-entailment: NP/Z The N, V; the Np V2 the N3 + The Nj V; the No The N, P the No V ~+ The No V The N, that V; the Ng V2 the N3 + The No V2 the N3 The N; V; P the No V2 ~~ The N, V; P the No P the N; Vj; the Nz V2 the N3 ~» The N; V, the No The managers heard the secretary en- couraged the author. + The managers heard the secretary. The managers near the scientist re- signed. ~ The scientist resigned. The secretary that admired the senator saw the actor. ~» The senator saw the actor. The senators paid in the office danced. ~» The senators paid in the office. Before the actors presented the profes- sors advised the manager. + The actors presented the professors.
Table 5: Templates for the subsequence heuristic
Subcase Template Example Entailment: Embedded under preposi- tion Entailment: Outside embedded clause Entailment: Embedded under verb Entailment: Conjunction Entailment: Adverbs P the N, Vj, the Ny Vo the Ng â The Ni Vi P the N, V, the No, the Ns Vo the N4 â The N3 V2 the N4 The N; V; that the No V2 â The No Vo The N, V1, and the Ng V2 the Nz. â The No» V2 the N3 Adv the N V â The NV Because the banker ran, the doctors saw the professors. â> The banker ran. Although the secretaries recommended the managers, the judges supported the scientist. â> The judges supported the scientist. The president remembered that the ac- tors performed. â> The actors performed. The lawyer danced, and the judge sup- ported the doctors. â> The judge supported the doctors. Certainly the lawyers resigned. â> The lawyers resigned. Non-entailment: Embedded under preposi- tion Non-entailment: Outside embedded clause Non-entailment: Embedded under verb Non-entailment: Disjunction Non-entailment: Adverbs P the N; Vj, the Ng V2 the No + The Ny V; P the N; Vj; the Ng, the N3 Vo the Ny â~» The N3 V2 the N4 The N, V; that the Ng V2 the N3 ~» The No V2 the N3 The N, Vj, or the Ng V2 the N3 ~» The No V2 the N3 Adv the N; V the No Unless the senators ran, the professors recommended the doctor. ~» The senators ran. Unless the authors saw the students, the doctors helped the bankers. ~» The doctors helped the bankers. The tourists said that the lawyer saw the banker. + The lawyer saw the banker. The judges resigned, or the athletes mentioned the author. ~» The athletes mentioned the author. oO oO Probably the artists saw the authors. ~» The artists saw the authors. ~» The N; V the No
Table 6: Templates for the constituent heuristic
Heuristic Subcase DA ESIM SPINN BERT Lexical overlap Untangling relative clauses The athlete who the judges saw called the manager. â The judges saw the athlete. 0.97 0.95 0.88 0.98 Sentences with PPs The tourists by the actor called the authors. â The tourists called the authors. 1.00 1.00 1.00 1.00 Sentences with relative clauses The actors that danced encouraged the author. â The actors encouraged the author. 0.98 0.97 0.97 0.99 Conjunctions The secretaries saw the scientists and the actors. â The secretaries saw the actors. 1.00 1.00 1.00 0.77 Passives The authors were supported by the tourists. â The tourists supported the authors. 1.00 1.00 0.95 1.00 Subsequence Conjunctions 1.00 1.00 1.00 0.98 The actor and the professor shouted. â The professor shouted. Adjectives 1.00 Happy professors mentioned the lawyer. â Professors mentioned the lawyer. 1.00 1.00 1.00 Understood argument 1.00 The author read the book. â The author read. 1.00 0.84 1.00 Relative clause on object The artists avoided the actors that performed. â The artists avoided the actors. 0.98 0.99 0.95 0.99 PP on object The authors called the judges near the doctor. â The authors called the judges. 1.00 1.00 1.00 1.00 Constituent 0.99 0.85 1.00
Outside embedded clause 0.94 Although the secretaries slept, the judges danced. â The judges danced.
Embedded under verb The president remembered that the actors performed. â The actors performed.
0.92
0.94
0.99
0.99
Conjunction The lawyer danced, and the judge supported the doctors. â The lawyer danced.
0.99
1.00
0.89
1.00
Adverbs Certainly the lawyers advised the manager. â The lawyers advised the manager.
1.00
1.00
0.98
1.00
Table 7: Results for the subcases where the correct label is entailment.
# Heuristic
# Heuristic
# Subcase
# DA ESIM SPINN BERT
# Lexical overlap
Subject-object swap 0.00 0.00 0.03 0.00 The senators mentioned the artist. + The artist mentioned the senators.
0.00
0.03
0.00
Sentences with PPs 0.00 0.00 0.01 0.25 The judge behind the manager saw the doctors. + The doctors saw the manager.
0.00
0.00
0.01
0.25
Sentences with relative clauses 0.04 0.04 0.06 0.18 The actors called the banker who the tourists saw. + The banker called the tourists.
0.04
0.04
0.06
0.18
Conjunctions 0.00 0.00 0.01 0.39 The doctors saw the presidents and the tourists. + The presidents saw the tourists.
Passives 0.00 0.00 0.00 0.00 The senators were helped by the managers. - The senators helped the managers.
0.00
0.00
0.00
0.00
# Subsequence NP/S
0.04
0.02
0.09
0.02
The managers heard the secretary resigned. + The managers heard the secretary.
PP on subject 0.00 0.00 0.00 0.06 The managers near the scientist shouted. + The scientist shouted.
Relative clause on subject 0.03 0.04 0.05 0.01 The secretary that admired the senator saw the actor. + The senator saw the actor.
0.03
0.04
0.05
0.01
MV/RR 0.04 0.03 0.03 0.00 The senators paid in the office danced. -» The senators paid in the office.
0.03
0.03
0.00
NP/Z 0.02 0.01 0.11 0.10 Before the actors presented the doctors arrived. ~» The actors presented the doctors.
0.02
0.01
0.11
0.10
Constituent | Embedded under preposition 0.14 0.02 0.29 0.50 Unless the senators ran, the professors recommended the doctor. + The senators ran.
Outside embedded clause 0.01 0.00 0.02 0.00 Unless the authors saw the students, the doctors resigned. - The doctors resigned.
0.01
0.00
0.02
0.00
Embedded under verb 0.00 0.00 0.01 0.22 The tourists said that the lawyer saw the banker. + The lawyer saw the banker.
0.00
0.00
0.01
0.22
Disjunction 0.01 0.03 0.20 0.01 The judges resigned, or the athletes saw the author. -» The athletes saw the author.
0.01
0.03
0.20
0.01
Adverbs 0.00 0.00 0.00 0.08 Probably the artists saw the authors. + The artists saw the authors.
Table 8: Results for the subcases where the correct label is non-entailment.
Correct: Entailment Correct: Non-entailment Model Model class Lexical Subseq. Const. Lexical Subseq. Const. DA ESIM RNN SPINN TreeRNN BERT Bag-of-words Transformer 1.00 0.99 0.94 0.98 1.00 1.00 0.96 1.00 0.98 1.00 0.93 0.99 0.00 0.00 0.06 0.04 0.00 0.01 0.14 0.02 0.03 0.00 0.11 0.20
Table 9: Results for models trained on MNLI with neutral and contradiction merged into a single label, non- entailment.
# E Results with augmented training with some subcases withheld
For each model, we ran ï¬ve experiments, each one having 6 of the 30 subcases withheld. Each trained model was then evaluated on the categories that had been withheld from it. The results of these experiments are in Tables 10, 11, 12, 13 and 14.
# F Human experiments
To obtain human results, we used Amazon Me- chanical Turk. We subdivided HANS into 114 different categories of examples, covering all pos- sible variations of the template used to generate the example and the specific word around which the template was built. For example, for the con- stituent heuristic subcase of clauses embedded un- der verbs (e.g. The doctor believed the lawyer danced -+ The lawyer danced), each possible verb under which the clause could be embedded (e.g. believed, thought, or assumed) counted as a dif- ferent category.
onds per example, while the participants we dis- carded spent 4.2 seconds per example. The total amount of time from a participant accepting the experiment to completing the experiment averaged 17.6 minutes. This included 9.1 minutes answer- ing the prompts (6.4 minutes for discarded partic- ipants and 12.1 minutes for retained participants) and roughly one minute spent between prompts (1 second after each prompt). The remaining time was spent reading the consent form, reading the instructions, or waiting to start (Mechanical Turk participants often wait several minutes between accepting an experiment and beginning the exper- iment).
The expert annotators were three native English speakers who had a background in linguistics but who had not heard about this project before pro- viding judgments. Two of them were graduate stu- dents and one was a postdoctoral researcher. Each expert annotator labeled 124 examples (one exam- ple from each of the 114 categories, plus 10 con- trols).
For each of these 114 categories, we chose 20 examples from HANS and obtained judgments from 5 human participants for each of those 20 examples. Each participant provided judgments for 57 examples plus 10 controls (67 stimuli to- tal) and was paid $2.00. The controls consisted of 5 examples where the premise and hypothesis were the same (e.g. The doctor saw the lawyer â The doctor saw the lawyer) and 5 examples of simple negation (e.g. The doctor saw the lawyer ~ The doctor did not see the lawyer). For analyz- ing the data, we discarded any participants who answered any of these controls incorrectly; this led to 95 participants being retained and 105 be- ing rejected (participants were still paid regardless of whether they were retained or filtered out). On average, each participant spent 6.5 seconds per ex- ample; the participants we retained spent 8.9 sec-
# G Numerical results
To facilitate future comparisons to our results, here we provide the numerical results underlying the bar plots in the main body of the paper. Table 15 corresponds to Figure 1; the MNLI column in Ta- ble 15 corresponds to Figure 1a, and the remaining columns correspond to Figure 1b. Table 16 corre- sponds to Figure 2. The plots in Table 3 use the numbers from the BERT columns in Tables 7, 8, and 14. Finally, the bar plots in Figure 3 corre- spond to the numerical results in Table 17.
Heuristic Subcase DA ESIM_ SPINN- BERT Lexical Subject-object swap 0.01 1.00 1.00 1.00 overlap The senators mentioned the artist. + The artist mentioned the senators. Lexical Untangling relative clauses 0.34 0.23 0.23 0.20 overlap The athlete who the judges saw called the manager. â> The judges saw the athlete. Subsequence NP/S 0.27 0.00 0.00 0.10 The managers heard the secretary resigned. + The managers heard the secretary. Subsequence Conjunctions 0.49 0.38 0.38 0.38 Constituent Constituent The actor and the professor shouted. â> The professor shouted. Embedded under preposition 0.51 0.51 0.51 1.00 Unless the senators ran, the professors recommended the doctor. + The senators ran. Embedded under preposition 1.00 0.06 1.00 0.03 Because the banker ran, the doctors saw the professors. â The banker ran.
Table 10: Accuracies for models trained on MNLI augmented with most HANS example categories except with- holding the categories in this table (experiment 1/5 for the withheld category investigation).
Heuristic Subcase DA ESIM_ SPINN- BERT Lexical Sentences with PPs 0.00 0.96 0.71 0.97 overlap The judge behind the manager saw the doctors. + The doctors saw the manager. Lexical Sentences with PPs 1.00 1.00 0.94 1.00 overlap The tourists by the actor called the authors. â The tourists called the authors. Subsequence PP on subject 0.00 0.07 0.57 0.39 The managers near the scientist shouted. The scientist shouted. Subsequence Adjectives 0.71 0.99 0.64 1.00 Happy professors mentioned the lawyer. â Professors mentioned the lawyer. Constituent | Outside embedded clause 0.78 = 1.00 1.00 0.17 Unless the authors saw the students, the doctors resigned. The doctors resigned. Constituent | Outside embedded clause 0.78 0.78 0.78 0.97 Although the secretaries slept, the judges danced. â The judges danced.
Outside embedded clause 0.78 = 1.00 1.00 0.17 Unless the authors saw the students, the doctors resigned. The doctors resigned.
Table 11: Accuracies for models trained on MNLI augmented with most HANS example categories except with- holding the categories in this table (experiment 2/5 for the withheld category investigation).
Heuristic Subcase DA ESIM_ SPINN- BERT Lexical overlap Lexical overlap Subsequence Subsequence Constituent Constituent Sentences with relative clauses 0.00 0.04 0.02 0.84 The actors called the banker who the tourists saw. ~ The banker called the tourists. Sentences with relative clauses 1.00 0.97 1.00 1.00 The actors that danced encouraged the author. â> The actors encouraged the author. Relative clause on subject 0.00 0.04 0.00 0.93 The secretary that admired the senator saw the actor. + The senator saw the actor. Understood argument 0.28 1.00 0.81 0.94 The author read the book. â The author read. Embedded under verb 0.00 0.00 0.05 0.98 The tourists said that the lawyer saw the banker. + The lawyer saw the banker. Embedded under verb 1.00 0.94 0.98 0.43 The president remembered that the actors performed. â> The actors performed.
Table 12: Accuracies for models trained on MNLI augmented with most HANS example categories except with- holding the categories in this table (experiment 3/5 for the withheld category investigation).
Heuristic Subcase DA ESIM_ SPINNâ- BERT Lexical Passives 0.00 0.00 0.00 0.00 overlap The senators were helped by the managers. -» The senators helped the managers. Lexical Conjunctions 0.05 0.51 0.52 1.00 overlap The secretaries saw the scientists and the actors. > The secretaries saw the actors. Subsequence MV/RR 0.76 0.44 0.32 0.07 The senators paid in the office danced. + The senators paid in the office. Subsequence Relative clause on object 0.72 1.00 0.99 0.99 The artists avoided the actors that performed. â The artists avoided the actors. Constituent â Disjunction 0.11 0.29 0.51 0.44 The judges resigned, or the athletes saw the author. + The athletes saw the author. Constituent | Conjunction 0.99 1.00 0.74 1.00 The lawyer danced, and the judge supported the doctors. â+ The lawyer danced.
Table 13: Accuracies for models trained on MNLI augmented with most HANS example categories except with- holding the categories in this table (experiment 4/5 for the withheld category investigation).
Heuristic Subcase DA ESIM_ SPINN- BERT Lexical Conjunctions 0.00 0.44 0.00 0.08 overlap The doctors saw the presidents and the tourists. + The presidents saw the tourists. Lexical Passives 0.00 0.00 0.00 0.00 overlap The authors were supported by the tourists. > The tourists supported the authors. Subsequence NP/Z 0.00 0.10 0.18 0.57 Before the actors presented the doctors arrived. + The actors presented the doctors. Subsequence PP on object 0.04 0.76 0.04 0.98 The authors called the judges near the doctor. â The authors called the judges. Constituent Adverbs 0.76 0.33 0.20 0.84 Probably the artists saw the authors. + The artists saw the authors. Constituent | Adverbs 0.66 1.00 0.59 0.96 Certainly the lawyers advised the manager. â> The lawyers advised the manager.
Table 14: Accuracies for models trained on MNLI augmented with most HANS example categories except with- holding the categories in this table (experiment 5/5 for the withheld category investigation).
Correct: Entailment Correct: Non-entailment Model Model class MNLI Lexical Subseq. Const. Lexical Subseq. Const. DA ESIM RNN SPINN TreeRNN BERT Bag-of-words Transformer 0.72 0.77 0.67 0.84 0.99 0.98 0.96 0.95 1.00 1.00 0.96 0.99 0.97 0.99 0.93 0.98 0.01 0.01 0.02 0.16 0.02 0.02 0.06 0.04 0.03 0.01 0.11 0.16
Table 15: Numerical results. The MNLI column reports accuracy on the MNLI test set. The remaining columns report accuracies on 6 sub-components of the HANS evaluation set; each sub-component is deï¬ned by its correct label (either entailment or non-entailment) and the heuristic it addresses.
Correct: > Correct: + Model Lex. Subseq. Const. Lex. Subseq. Const. DA 0.94 098 0.96 0.26 0.74 1.00 ESIM 0.99 1.00 1.00 1.00 1.00 1.00 SPINN 0.92 1.00 0.99 0.90 1.00 1.00 BERT 1.00 1.00 1.00 1.00 1.00 1.00
Correct: â Correct: Model Short Long Short Long BERT (MNLI) 1.00 1.00 0.28 0.26 BERT(MNLI+) 1.00 1.00 0.73 (0.33
Table 16: HANS accuracies for models trained on MNLI plus examples of all 30 categories in HANS.
Table 17: Results on the lexical overlap cases from Dasgupta et al. (2018) for BERT ï¬ne-tuned on MNLI or on MNLI augmented with HANS-like examples. | {
"id": "1811.07033"
} |
1902.00579 | Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog | This paper presents a new model for visual dialog, Recurrent Dual Attention
Network (ReDAN), using multi-step reasoning to answer a series of questions
about an image. In each question-answering turn of a dialog, ReDAN infers the
answer progressively through multiple reasoning steps. In each step of the
reasoning process, the semantic representation of the question is updated based
on the image and the previous dialog history, and the recurrently-refined
representation is used for further reasoning in the subsequent step. On the
VisDial v1.0 dataset, the proposed ReDAN model achieves a new state-of-the-art
of 64.47% NDCG score. Visualization on the reasoning process further
demonstrates that ReDAN can locate context-relevant visual and textual clues
via iterative refinement, which can lead to the correct answer step-by-step. | http://arxiv.org/pdf/1902.00579 | Zhe Gan, Yu Cheng, Ahmed El Kholy, Linjie Li, Jingjing Liu, Jianfeng Gao | cs.CV, cs.CL | Accepted to ACL 2019 | null | cs.CV | 20190201 | 20190604 | 9 1 0 2
n u J 4 ] V C . s c [
2 v 9 7 5 0 0 . 2 0 9 1 : v i X r a
# Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog
# Zhe Gan1, Yu Cheng1, Ahmed El Kholy1, Linjie Li1, Jingjing Liu1, Jianfeng Gao2 1Microsoft Dynamics 365 AI Research, 2Microsoft Research
{zhe.gan, yu.cheng, ahmed.eikholy, lindsey.li, jingjl, jfgao}@microsoft.com
# Abstract
This paper presents a new model for vi- sual dialog, Recurrent Dual Attention Net- work (ReDAN), using multi-step reasoning to answer a series of questions about an im- age. In each question-answering turn of a di- alog, ReDAN infers the answer progressively through multiple reasoning steps. In each step of the reasoning process, the semantic rep- resentation of the question is updated based on the image and the previous dialog history, and the recurrently-reï¬ned representation is used for further reasoning in the subsequent step. On the VisDial v1.0 dataset, the pro- posed ReDAN model achieves a new state-of- the-art of 64.47% NDCG score. Visualization on the reasoning process further demonstrates that ReDAN can locate context-relevant vi- sual and textual clues via iterative reï¬nement, which can lead to the correct answer step-by- step.
to the question. These attention models measure the relevance between the query and the attended image, as well as the dialog context. To generate an answer, either a discriminative decoder is used for ranking answer candidates, or a generative de- coder is trained for synthesizing an answer (Das et al., 2017a; Lu et al., 2017). Though promis- ing results have been reported, these models of- ten fail to provide accurate answers, especially in cases where answers are conï¬ned to particular im- age regions or dialog-history snippets.
One hypothesis for the cause of failure is the inherent limitation of single-step reasoning ap- proach. Intuitively, after taking a ï¬rst glimpse of the image and the dialog history, readers often re- visit speciï¬c sub-areas of both image and text to obtain a better understanding of the multimodal context. Inspired by this, we propose a Recur- rent Dual Attention Network (ReDAN) that ex- ploits multi-step reasoning for visual dialog.
1
# 1 Introduction
There has been a recent surge of interest in de- veloping neural network models capable of under- standing both visual information and natural lan- guage, with applications ranging from image cap- tioning (Fang et al., 2015; Vinyals et al., 2015; Xu et al., 2015) to visual question answering (VQA) (Antol et al., 2015; Fukui et al., 2016; Anderson et al., 2018). Unlike VQA, where the model can answer a single question about an im- age, a visual dialog system (Das et al., 2017a; De Vries et al., 2017; Das et al., 2017b) is designed to answer a series of questions regarding an image, which requires a comprehensive understanding of both the image and previous dialog history.
Most previous work on visual dialog rely on at- tention mechanisms (Bahdanau et al., 2015; Xu et al., 2015) to identify speciï¬c regions of the im- age and dialog-history snippets that are relevant
Figure 1a provides an overview of the model architecture of ReDAN. First, a set of visual and textual memories are created to store im- age features and dialog context, respectively. In each step, a semantic representation of the ques- tion is used to attend to both memories, in or- der to obtain a question-aware image represen- tation and question-aware dialog representation, both of which subsequently contribute to updating the question representation via a recurrent neural network. Later reasoning steps typically provide a sharper attention distribution than earlier steps, aiming at narrowing down the regions most rele- vant to the answer. Finally, after several iterations of reasoning steps, the reï¬ned question vector and the garnered visual/textual clues are fused to ob- tain a ï¬nal multimodal context vector, which is fed to the decoder for answer generation. This multi- step reasoning process is performed in each turn of the dialog.
Visual Memory Visual features Question: ; BilsTM â+> ââ> ââ ââis he wearing shorts?â / Fusion Dialog History: Textual features C: the young boy is playing tennis at the court Q¢ Is the young boy a toddler? | BiLsTM â>» A:no Q: What color is his hair ? A: Itâs black Textual Memory \ _,| Multimodal |_| 4aporaq Original image Ist step | the young boy is playing tennis at the court | Is the young boy a toddler ? no â| What color is his hair ? It âs black
bons] Original image Ist step reasoning 2nd step reasoning Snippet-level attention weights | the young boy is playing tennis at the court | [NNN 0.447 HE °.569 | Is the young boy a toddler ? no âmH oe 0.149 â| What color is his hair ? It âs black âTH oosco ooze Dialog history Ist step reasoning 2nd step reasoning
(a) Overview of the proposed ReDAN framework. (b) An example of multi-step reasoning in ReDAN.
Figure 1: Model architecture and visualization of the learned multi-step reasoning strategies. In the ï¬rst step, ReDAN ï¬rst focuses on all relevant objects in the image (e.g., âboyâ, âshortsâ), and all relevant facts in the dialog history (e.g., âyoung boyâ, âplaying tennisâ, âblack hairâ). In the second step, the model narrows down to more context-relevant regions and dialog context (i.e., the attention maps become sharper) which lead to the ï¬nal correct answer (âyesâ). The numbers in the bounding boxes and in the histograms are the attention weights of the corresponding objects or dialog history snippets.
Figure 1b provides an illustration of the itera- tive reasoning process. In the current dialog turn for the question âis he wearing shorts?â, in the initial reasoning step, the system needs to draw knowledge from previous dialog history to know who âheâ refers to (i.e., âthe young boyâ), as well as interpreting the image to rule out objects irrel- evant to the question (i.e., ânetâ, âracketâ and âcourtâ). After this, the system conducts a second round of reasoning to pinpoint the image region (i.e., âshortsâ, whose attention weight increases from 0.38 to 0.92 from the 1st step to the 2nd step) and the dialog-history snippet (i.e., âplaying ten- nis at the courtâ, whose attention weight increased from 0.447 to 0.569), which are most indicative of the correct answer (âyesâ).
The main contributions of this paper are three- (i) We propose a ReDAN framework fold. that supports multi-step reasoning for visual di- (ii) We introduce a simple rank aggrega- alog. tion method to combine the ranking results of dis- criminative and generative models to further boost the performance. (iii) Comprehensive evaluation and visualization analysis demonstrate the effec- tiveness of our model in inferring answers pro- gressively through iterative reasoning steps. Our proposed model achieves a new state-of-the-art of 64.47% NDCG score on the VisDial v1.0 dataset.
# 2 Related Work
Visual Dialog The visual dialog task was re- cently proposed by Das et al. (2017a) and De Vries
et al. (2017). Speciï¬cally, Das et al. (2017a) re- leased the VisDial dataset, which contains free- language questions and answers. form natural And De Vries et al. introduced the GuessWhat?! dataset, where the dialogs provided are more goal-oriented and aimed at object dis- covery within an image, through a series of yes/no questions between two dialog agents.
task, a typical system fol- lows the encoder-decoder framework proposed in Sutskever et al. (2014). Different encoder models have been explored in previous studies, including late fusion, hierarchical recurrent net- work, memory network (all three proposed in Das et al. (2017a)), early answer fusion (Jain et al., image attention (Lu 2018), history-conditional et al., 2017), and sequential co-attention (Wu et al., 2018). The decoder model usually falls into two categories: (i) generative decoder to synthe- size the answer with a Recurrent Neural Network (RNN) (Das et al., 2017a); and (ii) discriminative decoder to rank answer candidates via a softmax- based cross-entropy loss (Das et al., 2017a) or a ranking-based multi-class N-pair loss (Lu et al., 2017).
Reinforcement Learning (RL) was used in Das et al. (2017b); Chattopadhyay et al. (2017) to train two agents to play image guessing games. Lu et al. (2017) proposed a training schema to effectively transfer knowledge from a pre-trained discrimina- tive model to a generative dialog model. Gen- erative Adversarial Network (Goodfellow et al., 2014; Yu et al., 2017b; Li et al., 2017) was also
used in Wu et al. (2018) to generate answers indis- tinguishable from human-generated answers, and a conditional variational autoencoder (Kingma and Welling, 2014; Sohn et al., 2015) was devel- oped in Massiceti et al. (2018) to promote answer diversity. There were also studies investigating visual coreference resolution, either via attention memory implicitly (Seo et al., 2017) or using a more explicit reasoning procedure (Kottur et al., 2018) based on neural module networks (Andreas et al., 2016). In addition to answering questions, question sequence generation is also investigated in Jain et al. (2018); Massiceti et al. (2018).
task, various methods (such as RL) have been proposed to improve the performance of dialog agents, measured by task completion rate as in goal-oriented dia- log system (Strub et al., 2017; Shekhar et al., 2018; Strub et al., 2018; Lee et al., 2018; Zhang et al., 2018). Other related work includes image- grounded chitchat (Mostafazadeh et al., 2017), dialog-based image retrieval (Guo et al., 2018), and text-only conversational question answer- ing (Reddy et al., 2018; Choi et al., 2018). A re- cent survey on neural approaches to dialog model- ing can be found in Gao et al. (2018).
In this work, we focus on the VisDial task. Dif- ferent from previous approaches to visual dialog, which all used a single-step reasoning strategy, we propose a novel multi-step reasoning framework that can boost the performance of visual dialog systems by inferring context-relevant information from the image and the dialog history iteratively.
Multi-step Reasoning The idea of multi-step reasoning has been explored in many tasks, in- cluding image classiï¬cation (Mnih et al., 2014), text classiï¬cation (Yu et al., 2017a), image gen- eration (Gregor et al., 2015), language-based im- age editing (Chen et al., 2018), Visual Question Answering (VQA) (Yang et al., 2016; Nam et al., 2017; Hudson and Manning, 2018), and Machine Reading Comprehension (MRC) (Cui et al., 2017; Dhingra et al., 2017; Hill et al., 2016; Sordoni et al., 2016; Shen et al., 2017; Liu et al., 2018).
Speciï¬cally, Mnih et al. (2014) introduced an RNN for image classiï¬cation, by selecting a se- quence of regions adaptively and only processing the selected regions. Yu et al. (2017a) used an RNN for text classiï¬cation, by learning to skip irrelevant information when reading the text in- put. A recurrent variational autoencoder termed
DRAW was proposed in Gregor et al. (2015) for multi-step image generation. A recurrent atten- tive model for image editing was also proposed in Chen et al. (2018) to fuse image and language features via multiple steps.
Stacked Attention Network (Yang et al., 2016) was proposed to (SAN) attend the question to relevant image regions via multiple attention layers. For MRC, Rea- soNet (Shen et al., 2017) was developed to perform multi-step reasoning to infer the answer span based on a given passage and a question, where the number of steps can be dynamically determined via a termination gate.
Different from SAN for VQA (Yang et al., 2016) and ReasoNet for MRC (Shen et al., 2017), which reason over a single type of input (either image or text), our proposed ReDAN model in- corporates multimodal context that encodes both visual information and textual dialog. This mul- timodal reasoning approach presents a mutual en- hancement between image and text for a better un- derstanding of both: on the one hand, the attended image regions can provide additional information for better dialog interpretation; on the other hand, the attended history snippets can be used for bet- ter image understanding (see the dotted red lines in Figure 2).
Concurrent Work We also include some con- current work for visual dialog that has not been discussed above, including image-question- answer synergistic network (Guo et al., 2019), re- cursive visual attention (Niu et al., 2018), factor graph attention (Schwartz et al., 2019), dual atten- tion network (Kang et al., 2019), graph neural net- work (Zheng et al., 2019), history-advantage se- quence training (Yang et al., 2019), and weighted likelihood estimation (Zhang et al., 2019).
# 3 Recurrent Dual Attention Network
The visual dialog task (Das et al., 2017a) is formulated as follows: given a question Qe grounded in an image J, and previous dialog history (including the image caption C) Hy = {C, (Q1, A1),-++ , (Qe-1, Ac_1)} (⬠is the current dialog turn) as additional context, the goal is to generate an answer by ranking a list of N candi- date answers Ay = fA, Lee AY,
Figure 2 provides an overview of the Recurrent Dual Attention Network (ReDAN). Speciï¬cally,
Image | ' Candidate 1: | ' âred and whiteâ! Candidate 100: ' Visual âbrownâ Question Q: BiLSTM Discriminative Decoder âwhat color are the glasses?â Answer A: H how old is the male ? looks to be late 20s ee glasses ? yes qd Textual ! - Reasoning! Attention Attention Dialog History H: a person sitting on a red bench with a laptop ' is the person male or female ? male BiLSTM. âblack frameâ Hi Generative Decoder / Multi-step Reasoning via Recurrent Dual Attention Network
Figure 2: Model Architecture of Recurrent Dual Attention Network for visual dialog. Please see Sec. 3 for details.
ReDAN consists of three components: (i) Mem- ory Generation Module (Sec. 3.1), which gener- ates a set of visual and textual memories to pro- vide grounding for reasoning; (ii) Multi-step Rea- soning Module (Sec. 3.2), where recurrent dual at- tention is applied to jointly encode question, im- age and dialog history into a multimodal context vector for decoding; and (iii) Answer Decoding Module (Sec. 3.3), which derives the ï¬nal answer for each question based on the multimodal con- text vector. The following sub-sections describe the details of these components.
where M = 36 is the number of detected objects in an image1, and nf = 2048 is the dimension of the feature vector. A single-layer perceptron is used to transform each feature into a new vector that has the same dimension as the query vector (described in Sec. 3.2):
Mv = tanh(WI FI ) â RnhÃM , (2)
where WI â RnhÃnf . All the bias terms in this paper are omitted for simplicity. Mv is the visual memory, and its m-th column corresponds to the visual feature vector for the region of the object indexed by m.
# 3.1 Memory Generation Module
In this module, the image J and the dialog history Hy; are transformed into a set of memory vectors (visual and textual).
Visual Memory We use a pre-trained Faster R- CNN (Ren et al., 2015; Anderson et al., 2018) to extract image features, in order to enable atten- tion on both object-level and salient region-level, each associated with a feature vector. Compared to image features extracted from VGG-Net (Si- monyan and Zisserman, 2014) and ResNet (He et al., 2016), this type of features from Faster R- CNN has achieved state-of-the-art performance in both image captioning and VQA (Anderson et al., 2018; Teney et al., 2018) tasks. Speciï¬cally, the image features FI for a raw image I are repre- sented by:
Textual Memory In the ¢-th dialogue turn, the dialog history Hy consists of the caption C and ¢ â 1 rounds of QA pairs (Q;,4j) G = ,--.,£ â 1). For each dialog-history snip- pet 7 (the caption is considered as the first one with 7 = 0), it is first represented as a matrix M! = [h),..., AY |] ⬠Rm ** via a bidirec- ional Long Short-Term Memory (BiLSTM) net- work (Hochreiter and Schmidhuber, 1997), where i is the maximum length of the dialog-history snippet. Then, a self-attention mechanism is ap- plied to learn the attention weight of every word in the snippet, identifying the key words and rul- ing out irrelevant information. Specifically,
Ï Â· tanh(WhM(j) Ïj = softmax(pT uj = Ïj · (M(j) h )) , h )T , (3)
FI = R-CNN(I) â Rnf ÃM , (1)
1We have also tried using an adaptive number of detected objects for an image. Results are very similar to the results with M = 36.
where Ïj â R1ÃK, pÏ â RnhÃ1, Wh â RnhÃnh, and uj â R1Ãnh. After applying the same BiLSTM to each dialog-history snippet, the textual memory is then represented as Md = [uT
# 3.2 Multi-step Reasoning Module
The multi-step reasoning framework is imple- mented via an RNN, where the hidden state st represents the current representation of the ques- tion, and acts as a query to retrieve visual and textual memories. The initial state s0 is a self- attended question vector q. Let vt and dt de- note the attended image representation and dialog- history representation in the t-th step, respectively. A one-step reasoning pathway can be illustrated as st â vt â dt â st+1, which is performed T times. Details are described below.
Self-attended Question Similar to tex- tual memory construction, a question Q (the subscript @ for Qs, is omitted to reduce confusion) is first represented as a matrix M, = [go,---> 4x1] ⬠Râ¢**" via a BILSTM, where Kâ is the maximum length of the question. Then, self attention is applied,
α = softmax(pT α · tanh(WqMq)) , q = αMT q ,
where a ⬠R!*XAâ py, © Râ¢*!, and W, ¢⬠IRâ¢*"h, gq ⬠R!*"» then serves as the initial hid- den state of the RNN, i.e., so = q.
The reasoning pathway st â vt â dt â st+1 includes the following steps: (i) (st, dtâ1) â vt; (ii) (st, vt) â dt; and (iii) (vt, dt) â st+1.
Query and History Attending to Image Given st and the previous attended dialog history repre- sentation dtâ1 â R1Ãnh, we update vt as follows:
β = softmax(pT vt = β · MT v , where β â R1ÃM , pβ â RnhÃ1, Wv â RnhÃnh, Ws â RnhÃnh and Wd â RnhÃnh. The updated vt, together with st, is used to attend to the dialog history.
Query and Image Attending to History Given st â R1Ãnh and the attended image representation vt â R1Ãnh, we update dt as follows:
Y= softmax(p/ : tanh(WjMa + W)st + Wiv!)) ; d=7- M], (5) where y ⬠RI! p, ⬠Râ¢1 Ww, ¢â¬
RMX Mh | Ww, e⬠Rmx» and wi E Rrmxnn, The updated d; is fused with v; and then used to update the RNN query state.
Multimodal Fusion Given the query vector st, we have thus far obtained the updated image repre- sentation vt and the dialog-history representation dt. Now, we use Multimodal Factorized Bilinear pooling (MFB) (Yu et al., 2017c) to fuse vt and dt together. Speciï¬cally,
# zt = SumPooling(UvvT zt = sign(zt)|zt|0.5, zt = zT
t ⦠UddT (6)
t , k) , t /||zt|| ,
(7)
where U, ⬠R'â¢#*" Ug ⬠Râ¢**", The func- tion SumPooling(a, /) in (6) means using a one- dimensional non-overlapped window with the size k to perform sum pooling over a. (7) performs power normalization and 2 normalization. The whole process is denoted in short as:
zt = MFB(vt, dt) â R1Ãnh . (8)
There are also other methods for multimodal fu- sion, such as MCB (Fukui et al., 2016) and MLB (Kim et al., 2017). We use MFB in this pa- per due to its superior performance in VQA.
Image and History Updating RNN State The initial state s0 is set to q, which represents the ini- tial understanding of the question. The question representation is then updated based on the current dialogue history and the image, via an RNN with Gated Recurrent Unit (GRU) (Cho et al., 2014):
st+1 = GRU(st, zt) . (9)
This process forms a cycle completing one reason- ing step. After performing T steps of reasoning, multimodal fusion is then used to obtain the ï¬nal context vector:
c = [MFB(sT , vT ), MFB(sT , dT ), MFB(vT , dT )] .
# 3.3 Answer Decoding Module
Discriminative Decoder The context vector c is used to rank answers from a pool of candidates A (the subscript @ for A, is omitted). Similar to how we obtain the self-attended question vector in Sec. 3.2, a BiLSTM, together with the self- attention mechanism, is used to obtain a vector representation for each candidate A; ⬠A, result- ing ina; ⬠R!*", for j = 1,...,N. Based on this, a probability vector p is computed as
p = softmax(s), where s â RN , and s[j] = caT j . During training, ReDAN is optimized by minimiz- ing the cross-entropy loss2 between the one-hot- encoded ground-truth label vector and the proba- bility distribution p. During evaluation, the an- swer candidates are simply ranked based on the probability vector p.
Generative Decoder Besides the discriminative decoder, following Das et al. (2017a), we also con- sider a generative decoder, where another LSTM is used to decode the context vector into an answer. During training, we maximize the log-likelihood of the ground-truth answers. During evaluation, we use the log-likelihood scores to rank answer candidates.
Rank Aggregation Empirically, we found that combining the ranking results of discriminative and generative decoders boosts the performance a lot. Two different rank aggregation methods are explored here: (i) average over ranks; and (ii) av- erage over reciprocal ranks. Speciï¬cally, in a di- alog session, assuming r1, . . . , rK represents the ranking results obtained from K trained models (either discriminative, or generative). In the ï¬rst method, the average ranks 1 k=1 rk are used to K re-rank the candidates. In the second one, we use the average of the reciprocal ranks of each individ- ual model 1 K
# 4 Experiments
In this section, we explain in details our experi- ments on the VisDial dataset. We compare our ReDAN model with state-of-the-art baselines, and conduct detailed analysis to validate the effective- ness of our proposed model.
# 4.1 Experimental Setup
Dataset We evaluate our proposed approach on the recently released VisDial v1.0 dataset3. Speciï¬cally, the training and validation splits from v0.9 are combined together to form the new training data in v1.0, which contains dialogs on 123, 287 images from COCO dataset (Lin et al., 2014). Each dialog is equipped with 10 turns, re- sulting in a total of 1.2M question-answer pairs. An additional 10, 064 COCO-like images are fur- ther collected from Flickr, of which 2, 064 im-
2We have also tried the N-pair ranking loss used in Lu et al. (2017). Results are very similar to each other.
3As suggested in https://visualdialog.org/
data, results should be reported on v1.0, instead of v0.9.
ages are used as the validation set (val v1.0), and the rest 8K are used as the test set (test-std v1.0), hosted on an evaluation server4 (the ground-truth answers for this split are not publicly available). Each image in the val v1.0 split is associated with a 10-turn dialog, while a dialog with a ï¬exible number of turns is provided for each image in test- std v1.0. Each question-answer pair in the VisDial dataset is accompanied by a list of 100 answer can- didates, and the goal is to ï¬nd the correct answer among all the candidates.
Preprocessing We truncate captions/questions/ answers that are longer than 40/20/20 words, re- spectively. And we build a vocabulary of words that occur at least 5 times in train v1.0, resulting in 11, 319 words in the vocabulary. For word em- beddings, we use pre-trained GloVe vectors (Pen- nington et al., 2014) for all the captions, questions and answers, concatenated with the learned word embedding from the BiLSTM encoders to further boost the performance. For image representation, we use bottom-up-attention features (Anderson et al., 2018) extracted from Faster R-CNN (Ren et al., 2015) pre-trained on Visual Genome (Kr- ishna et al., 2017). A set of 36 features is cre- ated for each image. Each feature is a 2048- dimentional vector.
Evaluation Following Das et al. (2017a), we use a set of ranking metrics (Recall@k for k = {1, 5, 10}, mean rank, and mean reciprocal rank (MRR)), to measure the performance of retrieving the ground-truth answer from a pool of 100 can- didates. Normalized Discounted Cumulative Gain (NDCG) score is also used for evaluation in the visual dialog challenge 2018 and 2019, based on which challenge winners are picked. Since this re- quires dense human annotations, the calculation of NDCG is only available on val v1.0, test-std v1.0, and a small subset of 2000 images from train v1.0.
Training details All three BiLSTMs used in the model are single-layer with 512 hidden units. The number of factors used in MFB is set to 5, and we use mini-batches of size 100. The maxi- mum number of epochs is set to 20. No dataset- speciï¬c tuning or regularization is conducted ex- cept dropout (Srivastava et al., 2014) and early stopping on validation sets. The dropout ratio is 0.2. The Adam algorithm (Kingma and Ba, 2014)
4https://evalai.cloudcv.org/web/ challenges/challenge-page/161/overview
Model MN-D (Das et al., 2017a) HCIAE-D (Lu et al., 2017) CoAtt-D (Wu et al., 2018) ReDAN-D (T =1) ReDAN-D (T =2) ReDAN-D (T =3) Ensemble of 4 NDCG MRR 60.42 55.13 62.96 57.65 62.91 57.72 63.35 58.49 63.46 59.26 64.21 59.32 65.30 60.53 R@1 46.09 48.94 48.86 49.47 49.61 50.60 51.67 R@5 78.14 80.50 80.41 80.72 80.75 81.39 82.40 R@10 Mean 4.63 88.05 4.24 89.66 4.21 89.83 4.19 90.05 4.15 89.96 4.05 90.26 3.82 91.09
Table 1: Comparison of ReDAN with a discriminative decoder to state-of-the-art methods on VisDial v1.0 validation set. Higher score is better for NDCG, MRR and Recall@k, while lower score is better for mean rank. All these baselines are re-implemented with bottom-up features and incorporated with GloVe vectors for fair comparison.
Model MN-G (Das et al., 2017a) HCIAE-G (Lu et al., 2017) CoAtt-G (Wu et al., 2018) ReDAN-G (T =1) ReDAN-G (T =2) ReDAN-G (T =3) Ensemble of 4 NDCG MRR 47.83 56.99 49.07 59.70 49.64 59.24 49.60 59.41 49.96 60.11 50.02 60.47 50.41 61.43 R@1 38.01 39.72 40.09 39.95 40.36 40.27 40.85 R@5 57.49 58.23 59.37 59.32 59.72 59.93 60.08 R@10 Mean 18.76 64.08 18.43 64.73 17.86 65.92 17.79 65.97 17.53 66.57 17.40 66.78 17.38 67.17
Table 2: Comparison of ReDAN with a generative decoder to state-of-the-art generative methods on VisDial val v1.0. All the baseline models are re-implemented with bottom-up features and incorporated with GloVe vectors for fair comparison.
with learning rate 4 Ã 10â4 is used for optimiza- tion. The learning rate is halved every 10 epochs.
to dialog history; (ii) then, question and the attended history attend to the image.
# 4.2 Quantitative Results
Baselines We ap- proach with state-of-the-art models, including Memory Network (MN) (Das et al., 2017a), History-Conditioned Image Attentive Encoder (HCIAE) (Lu et al., 2017) and Sequential Co- Attention model (CoAtt) (Wu et al., 2018). In their original papers, all these models used VGG- Net (Simonyan and Zisserman, 2014) for image feature extraction, and reported results on VisDial v0.9. Since bottom-up-attention features have proven to achieve consistently better performance than VGG-Net in other tasks, we re-implemented all these models with bottom-up-attention fea- tures, and used the same cross-entropy loss for training. Further, unidirectional LSTMs are used in these previous baselines, which are replaced by bidirectional LSTMs with self-attention mecha- nisms for fair comparison. All the baselines are also further incorporated with pre-trained GloVe vectors. We choose the best three models on VisDial v0.9 as the baselines:
⢠MN (Das et al., 2017a): (i) mean pooling is performed over the bottom-up-attention fea- tures for image representation; (ii) image and question attend to the dialog history.
⢠CoAtt (Wu et al., 2018): (i) question attends to the image; (ii) question and image attend to the history; (iii) image and history attend to the question; (iv) question and history at- tend to the image again.
Results on VisDial val v1.0 Experimental re- sults on val v1.0 are shown in Table 1. â-Dâ de- notes that a discriminative decoder is used. With only one reasoning step, our ReDAN model al- ready achieves better performance than CoAtt, which is the previous best-performing model. Us- ing two or three reasoning steps further increases the performance. Further increasing the number of reasoning steps does not help, thus results are not shown. We also report results on an ensemble of 4 ReDAN-D models. Signiï¬cant improvement was observed, boosting NDCG from 59.32 to 60.53, and MRR from 64.21 to 65.30.
In addition to discriminative decoders, we also evaluate our model with a generative decoder. Results are summarized in Table 2. Similar to Table 1, ReDAN-G with T =3 also achieves the best performance. It is intuitive to observe that ReDAN-D achieves much better results than ReDAN-G on MRR, R@k and Mean Rank, since ReDAN-D is a discriminative model, and utilizes much more information than ReDAN-G. For ex- ample, ReDAN-D uses both positive and nega-
⢠HCIAE (Lu et al., 2017): (i) question attends
Q: what is the woman wearing? A: a white light jacket, white t shirt, shorts (Left) 2 step reasoning (Right) 3 step reasoning
Figure 3: Visualization of learned attention maps in multiple reasoning steps.
Model 4 Dis. 4 Gen. 1 Dis. + 1 Gen. 1 Dis. + 1 Gen. 4 Dis. + 4 Gen. 4 Dis. + 4 Gen. ReDAN+ (Diverse Ens.) Ens. Method NDCG MRR 65.30 50.41 53.53 59.03 54.19 61.33 56.77 R@1 51.67 40.85 42.16 42.33 42.92 45.52 44.65 R@5 82.40 60.08 65.43 78.71 66.25 80.67 69.47 R@10 Mean 3.82 91.09 17.38 67.17 9.00 74.36 4.88 88.13 8.74 74.88 4.41 89.55 5.96 79.90 Average Average Average Reciprocal Average Reciprocal Average 60.53 61.43 63.85 63.18 65.13 64.75 67.12
Table 3: Results of different rank aggregation methods. Dis. and Gen. respectively. is short for discriminative and generative model,
Challenge 20195.
tive answer candidates for ranking/classiï¬cation, while ReDAN-G only uses positive answer can- didates for generation. However, interestingly, ReDAN-G achieves better NDCG scores than ReDAN-D (61.43 vs 60.53). We provide some de- tailed analysis in the question-type analysis sec- tion below.
Rank Aggregation As shown in Table 1 and 2, ensemble of discriminative or generative models increase the NDCG score to some extent. Empir- ically, we found that aggregating the ranking re- sults of both discriminative and generative models readily boost the performance. Results are sum- marized in Table 3. Combining one discrimina- tive and one generative model already shows much better NDCG results than ensemble of 4 discrim- inative models. The ensemble of 4 discriminative and 4 generative models further boosts the per- formance. It is interesting to note that using av- erage of the ranks results in better NDCG than using reciprocal of the ranks, though the recip- rocal method achieves better results on the other metrics. Since NDCG is the metric we mostly care about, the method of averaging ranking re- sults from different models is adopted.
# 4.3 Qualitative Analysis
In addition to the examples illustrated in Fig- ure 1b, Figure 3 provide six more examples to vi- sualize the learned attention maps. The associated dialog histories are omitted for simplicity. Typi- cally, the attention maps become sharper and more focused throughout the reasoning process. During multiple steps, the model gradually learns to nar- row down to the image regions of key objects rel- evant to the questions (âlaptopsâ, âstoveâ, âsneak- ersâ, âhatâ, âdogâs eyesâ and âwomanâs clothesâ). For instance, in the top-right example, the model focuses on the wrong region (âmanâ) in the 1st step, but gradually shifts its focus to the correct regions (âdogâs eyesâ) in the later steps.
Finally, we have tried using different image fea- ture inputs, and incorporating relation-aware en- coders (Li et al., 2019) into ReDAN to further boost the performance. By this diverse set of en- sembles (called ReDAN+), we achieve an NDCG score of 67.12% on the val v1.0 set.
# 4.4 Visual Dialog Challenge 2019
Now, we discuss how we further boost the perfor- mance of ReDAN for participating Visual Dialog
# 5https://visualdialog.org/challenge/
2019
Model ReDAN+ (Diverse Ens.) ReDAN (1 Dis. + 1 Gen.) DAN (Kang et al., 2019) NMN (Kottur et al., 2018) Sync (Guo et al., 2019) HACAN (Yang et al., 2019) FGAâ USTC-YTHâ¡ RvA (Niu et al., 2018) MS ConvAIâ¡ CorefNMN (Kottur et al., 2018) FGA (Schwartz et al., 2019) GNN (Zheng et al., 2019) LF-Att w/ bottom-upâ LF-Attâ¡ MN-Attâ¡ MNâ¡ HREâ¡ LFâ¡ NDCG MRR 64.47 53.73 53.13 61.86 64.92 59.36 58.80 58.10 63.42 57.88 64.22 57.17 69.25 57.13 61.44 56.47 63.03 55.59 63.27 55.35 61.50 54.70 67.25 54.46 61.37 52.82 60.41 51.63 57.07 49.76 56.90 49.58 55.49 47.50 54.16 45.46 55.42 45.31 R@1 42.45 41.38 51.28 44.15 49.30 50.88 55.65 47.65 49.03 49.53 47.55 53.40 47.33 46.18 42.08 42.43 40.98 39.93 40.95 R@5 64.68 66.07 81.60 76.88 80.77 80.63 86.73 78.13 80.40 80.40 78.10 85.28 77.98 77.80 74.83 74.00 72.30 70.45 72.45 R@10 Mean 6.63 75.68 8.91 74.50 3.92 90.88 4.81 86.88 3.97 90.68 4.20 89.45 3.14 94.05 4.65 87.88 4.18 89.83 4.15 89.60 4.40 88.80 3.54 92.70 4.57 87.83 4.75 87.30 5.41 85.05 5.59 84.35 5.92 83.30 6.41 81.50 5.95 82.83
Table 4: Comparison of ReDAN to state-of-the-art visual dialog models on the blind test-std v1.0 set, as reported by (â ) taken from https://evalai.cloudcv.org/web/challenges/challenge-page/161/ the test server. leaderboard/483. (â¡) taken from https://evalai.cloudcv.org/web/challenges/challenge-page/ 103/leaderboard/298.
Question Type Percentage Dis. Gen. 4 Dis. + 4 Gen. ReDAN+ All 100% 59.32 60.42 65.13 67.12 Yes/no Number Color Others 11% 75% 52.68 60.89 51.45 63.49 57.50 68.04 58.50 69.49 3% 44.47 41.09 46.61 50.10 11% 58.13 52.16 57.49 62.70
Table 5: Question-type analysis of the NDCG score achieved by different models on the val v1.0 set.
Results on VisDial test-std v1.0 We also evalu- ate the proposed ReDAN on the blind test-std v1.0 set, by submitting results to the online evaluation server. Table 4 shows the comparison between our model and state-of-the-art visual dialog models. By using a diverse set of ensembles, ReDAN+ out- performs the state of the art method, DAN (Kot- tur et al., 2018), by a signiï¬cant margin, lifting NDCG from 59.36% to 64.47%.
erative models. Aggregating the ranking results of both discriminative and generative models re- sults in the mutual enhancement of each other, and therefore boosting the ï¬nal NDCG score by a large margin. Also, we observe that the Number ques- tions are most difï¬cult to answer, since training a model to count is a challenging research problem.
# 5 Conclusion
Question-Type Analysis We further perform a question-type analysis of the NDCG scores achieved by different models. We classify ques- tions into 4 categories: Yes/no, Number, Color, and Others. As illustrated in Table 5, in terms of the NDCG score, generative models performed better on Yes/no questions, while discriminative models performed better on all the other types of questions. We hypothesize that this is due to that generative models tend to ranking short an- swers higher, thus is beneï¬cial for Yes/no ques- tions. Since Yes/no questions take a majority of all the questions (75%), the better performance of generative models on the Yes/no questions trans- lated into an overall better performance of gen-
We have presented Recurrent Dual Attention Net- work (ReDAN), a new multimodal framework for visual dialog, by incorporating image and dialog history context via a recurrently-updated query vector for multi-step reasoning. This iterative rea- soning process enables model to achieve a ï¬ne- grained understanding of multimodal context, thus boosting question answering performance over state-of-the-art methods. Experiments on the Vis- Dial dataset validate the effectiveness of the pro- posed approach.
Acknowledgements We thank Yuwei Fang, Huazheng Wang and Junjie Hu for helpful discus- sions. We thank anonymous reviewers for their constructive feedbacks.
# References
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR.
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In CVPR.
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question an- swering. In ICCV.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
Prithvijit Chattopadhyay, Deshraj Yadav, Viraj Prabhu, Arjun Chandrasekaran, Abhishek Das, Stefan Lee, Dhruv Batra, and Devi Parikh. 2017. Evaluating vi- sual conversational agents via cooperative human-ai games. In HCOMP.
Jianbo Chen, Yelong Shen, Jianfeng Gao, Jingjing Liu, and Xiaodong Liu. 2018. Language-based image editing with recurrent attentive models. In CVPR.
Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen- tau Yih, Yejin Choi, Percy Liang, and Luke Zettle- moyer. 2018. Quac: Question answering in context. In EMNLP.
Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-over- attention neural networks for reading comprehen- sion. In ACL.
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. 2017a. Visual dialog. In CVPR.
Abhishek Das, Satwik Kottur, Jos´e MF Moura, Stefan Lee, and Dhruv Batra. 2017b. Learning cooperative visual dialog agents with deep reinforcement learn- ing. In ICCV.
Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron C Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. In CVPR.
Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2017. Gated-attention readers for text comprehen- sion. In ACL.
Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Doll´ar, Jianfeng Gao, Xi- aodong He, Margaret Mitchell, John C Platt, et al. 2015. From captions to visual concepts and back. In CVPR.
Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for vi- sual question answering and visual grounding. In EMNLP.
Jianfeng Gao, Michel Galley, and Lihong Li. 2018. arXiv Neural approaches to conversational ai. preprint arXiv:1809.08267.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative ad- versarial nets. In NIPS.
Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. 2015. Draw: A recurrent neural network for image generation. In ICML.
Dalu Guo, Chang Xu, and Dacheng Tao. 2019. Image- question-answer synergistic network for visual dia- log. arXiv preprint arXiv:1902.09774.
Xiaoxiao Guo, Hui Wu, Yu Cheng, Steven Rennie, and Rogerio Schmidt Feris. 2018. Dialog-based interac- tive image retrieval. In NIPS.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In CVPR.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle: Reading childrenâs books with explicit memory representa- tions. In ICLR.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation.
Drew A Hudson and Christopher D Manning. 2018. Compositional attention networks for machine rea- soning. In ICLR.
Unnat Jain, Svetlana Lazebnik, and Alexander G Schwing. 2018. Two can play this game: visual dialog with discriminative question generation and answering. In CVPR.
Gi-Cheon Kang, Jaeseo Lim, and Byoung-Tak Zhang. 2019. Dual attention networks for visual refer- arXiv preprint ence resolution in visual dialog. arXiv:1902.09368.
Jin-Hwa Kim, Kyoung-Woon On, Woosang Lim, Jeonghee Kim, Jung-Woo Ha, and Byoung-Tak Zhang. 2017. Hadamard product for low-rank bi- linear pooling. In ICLR.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Diederik P Kingma and Max Welling. 2014. Auto- encoding variational bayes. In ICLR.
Satwik Kottur, Jos´e MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. 2018. Visual corefer- ence resolution in visual dialog using neural module networks. In ECCV.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vi- sion using crowdsourced dense image annotations. IJCV.
Sang-Woo Lee, Yu-Jung Heo, and Byoung-Tak Zhang. 2018. Answerer in questionerâs mind for goal- oriented visual dialogue. In NIPS.
Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In EMNLP.
Linjie Li, Zhe Gan, Yu Cheng, and Jingjing Liu. Relation-aware graph attention network arXiv preprint 2019. for visual question answering. arXiv:1903.12314.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV.
Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2018. Stochastic answer networks for machine reading comprehension. In ACL.
Jiasen Lu, Anitha Kannan, Jianwei Yang, Devi Parikh, and Dhruv Batra. 2017. Best of both worlds: Trans- ferring knowledge from discriminative learning to a generative visual dialog model. In NIPS.
Daniela Massiceti, N Siddharth, Puneet K Dokania, and Philip HS Torr. 2018. Flipdial: A generative model for two-way visual dialogue. In CVPR.
Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. In 2014. Recurrent models of visual attention. NIPS.
Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios P Sp- Image- ithourakis, and Lucy Vanderwende. 2017. grounded conversations: Multimodal context for arXiv natural question and response generation. preprint arXiv:1701.08251.
Hyeonseob Nam, Jung-Woo Ha, and Jeonghee Kim. 2017. Dual attention networks for multimodal rea- soning and matching. In CVPR.
Yulei Niu, Hanwang Zhang, Manli Zhang, Jianhong Zhang, Zhiwu Lu, and Ji-Rong Wen. 2018. Recur- sive visual attention in visual dialog. arXiv preprint arXiv:1812.02664.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP.
Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. Coqa: A conversational question answering challenge. In EMNLP.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS.
Idan Schwartz, Seunghak Yu, Tamir Hazan, and Alexander Schwing. 2019. Factor graph attention. arXiv preprint arXiv:1904.05880.
Paul Hongsuck Seo, Andreas Lehrmann, Bohyung Han, and Leonid Sigal. 2017. Visual reference res- olution using attention memory for visual dialog. In NIPS.
Ravi Shekhar, Tim Baumgartner, Aashish Venkatesh, Elia Bruni, Raffaella Bernardi, and Raquel Fernan- dez. 2018. Ask no more: Deciding when to guess in referential visual dialogue. In COLING.
Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. In KDD.
Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. representation using Learning structured output deep conditional generative models. In NIPS.
Alessandro Sordoni, Philip Bachman, Adam Trischler, and Yoshua Bengio. 2016. Iterative alternating neu- ral attention for machine reading. arXiv preprint arXiv:1606.02245.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overï¬tting. JMLR.
Florian Strub, Harm De Vries, Jeremie Mary, Bilal Piot, Aaron Courville, and Olivier Pietquin. 2017. End-to-end optimization of goal-driven and visually grounded dialogue systems. In IJCAI.
Florian Strub, Mathieu Seurin, Ethan Perez, Harm de Vries, Philippe Preux, Aaron Courville, Olivier Pietquin, et al. 2018. Visual reasoning with multi- hop feature modulation. In ECCV.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In NIPS.
Damien Teney, Peter Anderson, Xiaodong He, and An- ton van den Hengel. 2018. Tips and tricks for visual question answering: Learnings from the 2017 chal- lenge. In CVPR.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In CVPR.
Qi Wu, Peng Wang, Chunhua Shen, Ian Reid, and An- ton van den Hengel. 2018. Are you talking to me? reasoned visual dialog generation through adversar- ial learning. In CVPR.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual atten- tion. In ICML.
Tianhao Yang, Zheng-Jun Zha, and Hanwang Zhang. 2019. Making history matter: Gold-critic se- quence training for visual dialog. arXiv preprint arXiv:1902.09326.
Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In CVPR.
Adams Wei Yu, Hongrae Lee, and Quoc V Le. arXiv preprint 2017a. arXiv:1704.06877. Learning to skim text.
Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017b. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI.
Zhou Yu, Jun Yu, Jianping Fan, and Dacheng Tao. 2017c. Multi-modal factorized bilinear pooling with co-attention learning for visual question answering. In ICCV.
Heming Zhang, Shalini Ghosh, Larry Heck, Stephen Walsh, Junting Zhang, Jie Zhang, and C-C Jay Kuo. 2019. Generative visual dialogue system via adap- tive reasoning and weighted likelihood estimation. arXiv preprint arXiv:1902.09818.
Junjie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jian- feng Lu, and Anton Van Den Hengel. 2018. Goal- oriented visual question generation via intermediate rewards. In ECCV.
Zilong Zheng, Wenguan Wang, Siyuan Qi, and Song- Chun Zhu. 2019. Reasoning visual dialogs with structural and partial observations. arXiv preprint arXiv:1904.05548.
Q: what color shirt is he wearing? A: the shirt looks blue Q: how many people? A: just 1 Q; is he wearing jeans? A: yes, he is Q: what color is the carpet? A: it is a tan color Q: is he wearing a helmet? A: no, he is not Q: is he wearing a bow tie? A: no
Q: what color shirt is he wearing? A: the shirt looks blue Q: how many people? A: just 1 Q; is he wearing jeans? A: yes, he is Q: what color is the carpet? A: it is a tan color Q: is he wearing a helmet? A: no, he is not Q: is he wearing a bow tie? A: no Q: what race are they? A: white Q; is the bench old? A: not too old Q: do you see the floor? A: yes i Q: what color is the keyboard? A: white Q: do the flowers look fresh? A: yes they do ~-To =u u Bae
Q: what color shirt is he wearing? A: the shirt looks blue Q: how many people? A: just 1 Q; is he wearing jeans? A: yes, he is Q: what color is the carpet? A: it is a tan color Q: is he wearing a helmet? A: no, he is not Q: is he wearing a bow tie? A: no Q: what race are they? A: white Q; is the bench old? A: not too old Q: do you see the floor? A: yes i Q: what color is the keyboard? A: white Q: do the flowers look fresh? A: yes they do ~-To =u u Bae Q: is the phone a smartphone? A: i can not tell Q: what color are his shoes? A: white and black Q: what is on the pizza? A: olives and other things wae «= Be . a a ee | Q: does he have a helmet? A: yes Q: is she wearing a hat? A: no Q;: what color is the coffee table? A: brown Bas ee cis the coffee table a glass table? A: no Q: is anything in the flower pot? A: yes, different colored flowers = = lap â6 ai | = 5, Q: is anything printed on his tie? A: no Q:is the bat mad of wood? A: no Aas Bhâ:
Q: what race are they? A: white Q; is the bench old? A: not too old Q: do you see the floor? A: yes i Q: what color is the keyboard? A: white Q: do the flowers look fresh? A: yes they do ~-To =u u Bae Q: is the phone a smartphone? A: i can not tell Q: what color are his shoes? A: white and black Q: what is on the pizza? A: olives and other things wae «= Be . a a ee | Q: does he have a helmet? A: yes Q: is she wearing a hat? A: no Q;: what color is the coffee table? A: brown Bas ee
Q: is the phone a smartphone? A: i can not tell Q: what color are his shoes? A: white and black Q: what is on the pizza? A: olives and other things wae «= Be . a a ee | Q: does he have a helmet? A: yes Q: is she wearing a hat? A: no Q;: what color is the coffee table? A: brown Bas ee cis the coffee table a glass table? A: no Q: is anything in the flower pot? A: yes, different colored flowers = = lap â6 ai | = 5, Q: is anything printed on his tie? A: no Q:is the bat mad of wood? A: no Aas Bhâ:
Figure 4: Visualization of learned attention maps using 2 reasoning steps.
Q: are there traffic lights visible? A: no Q: what color are the desserts? A: brown with white cream oF ae pet) Q: what color is the ground? A: gray = im oe tee ee Q: wearing shoes? A: yes Bii.
Q: are there traffic lights visible? A: no Q: what color are the desserts? A: brown with white cream oF ae pet) Q: are there noodles or some other kind of food? A: no just the beef broccoli and sauce as Ss & Q: are they in ski hats? A: no Pee pore, Q: what color is the table? A: wooden Q: what color is the ground? A: gray = im oe tee ee Q: wearing shoes? A: yes Bii. Q: what color shirt is he wearing? A: it is blue with black and white stripes Q: what color is the cat? A: black cael Q: what color is his shirt? A: grey a Q: are the buildings tall in the background? A: yes â it is showing several of the downtown skyscrapers 7 * Q: is the screen visible? A: no Er .
Q: are there traffic lights visible? A: no Q: what color are the desserts? A: brown with white cream oF ae pet) Q: are there noodles or some other kind of food? A: no just the beef broccoli and sauce as Ss & Q: are they in ski hats? A: no Pee pore, Q: what color is the ground? A: gray = im oe tee ee Q: wearing shoes? A: yes Bii. Q: what color shirt is he wearing? A: it is blue with black and white stripes Q: what color is the cat? A: black cael
Q: are there noodles or some other kind of food? A: no just the beef broccoli and sauce as Ss & Q: are they in ski hats? A: no Pee pore, Q: what color is the table? A: wooden Q: what color shirt is he wearing? A: it is blue with black and white stripes Q: what color is the cat? A: black cael Q: what color is his shirt? A: grey a Q: are the buildings tall in the background? A: yes â it is showing several of the downtown skyscrapers 7 * Q: is the screen visible? A: no Er .
Figure 5: Visualization of learned attention maps using 3 reasoning steps. | {
"id": "1902.09326"
} |
1902.00098 | The Second Conversational Intelligence Challenge (ConvAI2) | We describe the setting and results of the ConvAI2 NeurIPS competition that
aims to further the state-of-the-art in open-domain chatbots. Some key
takeaways from the competition are: (i) pretrained Transformer variants are
currently the best performing models on this task, (ii) but to improve
performance on multi-turn conversations with humans, future systems must go
beyond single word metrics like perplexity to measure the performance across
sequences of utterances (conversations) -- in terms of repetition, consistency
and balance of dialogue acts (e.g. how many questions asked vs. answered). | http://arxiv.org/pdf/1902.00098 | Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W Black, Alexander Rudnicky, Jason Williams, Joelle Pineau, Mikhail Burtsev, Jason Weston | cs.AI, cs.CL, cs.HC | null | null | cs.AI | 20190131 | 20190131 | 9 1 0 2 n a J 1 3 ] I A . s c [
1 v 8 9 0 0 0 . 2 0 9 1 : v i X r a
# The Second Conversational Intelligence Challenge (ConvAI2)
Emily Dinan1, Varvara Logacheva2, Valentin Malykh2, Alexander Miller1, Kurt Shuster1, Jack Urbanek1, Douwe Kiela1, Arthur Szlam1, Iulian Serban3, Ryan Lowe4,1, Shrimai Prabhumoye5, Alan W Black5, Alexander Rudnicky5, Jason Williams6, Joelle Pineau1,4, Mikhail Burtsev2 and Jason Weston1
1Facebook AI Research 2Moscow Institute of Physics and Technology 3University of Montreal 4McGill University 5Carnegie Mellon University 6Microsoft Research
# Abstract
We describe the setting and results of the ConvAI2 NeurIPS competition that aims to further the state-of-the-art in open-domain chatbots. Some key takeaways from the competition are: (i) pretrained Transformer variants are currently the best performing models on this task, (ii) but to improve performance on multi-turn conversations with humans, future systems must go beyond single word metrics like perplexity to measure the performance across sequences of utterances (conversations) in terms of repetition, consistency and balance of dialogue acts (e.g. how many questions asked vs. answered).
# 1 Overview of the competition
The Conversational Intelligence Challenge1 aims at ï¬nding approaches to creating high- quality dialogue agents capable of meaningful open domain conversation. Today, the progress in the ï¬eld is signiï¬cantly hampered by the absence of established benchmark tasks for non-goal-oriented dialogue systems (chatbots) and solid evaluation criteria for automatic assessment of dialogue quality. The aim of this competition was therefore to establish a concrete scenario for testing chatbots that aim to engage humans, and become a standard evaluation tool in order to make such systems directly comparable, including open source datasets, evaluation code (both automatic evaluations and code to run the human evaluation on Mechanical Turk), model baselines and the winning model itself.
1http://convai.io/
1
This is the second Conversational Intelligence (ConvAI) Challenge; the previous one was conducted under the scope of NeurIPS 2017 Competitions track. Taking into account the results of the previous edition, this year we improved the task, the evaluation process, and the human conversationalistsâ experience. We did this in part by making the setup simpler for the competitors, and in part by making the conversations more engaging for humans. We provided a dataset from the beginning, Persona-Chat, whose training set consists of conversations between crowdworkers who were randomly paired and asked to act the part of a given provided persona (randomly assigned, and created by another set of crowdworkers). The paired workers were asked to chat naturally and to get to know each other during the conversation. This produces interesting and engaging conversations that learning agents can try to mimic. The Persona-Chat dataset is designed to facilitate research into alleviating some of the issues that traditional chit-chat models face, and with the aim of making such models more consistent and engaging, by endowing them with a persona [1]. Models are thus trained to both ask and answer questions about personal topics, and the resulting dialogue can be used to build a model of the persona of the speaking partner.
Competitorsâ models were compared in three ways: (i) automatic evaluation metrics on a new test set hidden from the competitors; (ii) evaluation on Amazon Mechanical Turk; and (iii) âwildâ live evaluation by volunteers having conversations with the bots. We declared winners in the automatic evaluation tracks, but the grand prize was awarded to the best performing system in human evaluations.
The winner in the automatic evaluation tracks by a signiï¬cant margin was the team Hugging Face, however the grand prize winner from human evaluations was Lost in Con- versation (Hugging Face coming in second place, with 23 entrants in total)2. There are a number of key takeaways from our analysis of the results, indicating that the automatic evaluations show some correlation to human evaluations, but fail to take into account im- portant aspects of multi-turn conversation that humans consider important, in particular the balance of dialogue acts throughout the conversation (e.g. the amount of questions asked versus answered).
# 1.1 Previous competitions and task formulation
There have been a number of competitions on question answering (e.g. quiz bowl) which can be seen as single-turn goal-directed dialogue, as well as competitions on goal-directed dialogue involving dialogue state tracking (including 5 iterations of the DSTC challenge), e.g. for booking restaurants or tourist information. Those do not explicitly address the âchit-chatâ setting of dialogue about general topics which is not goal-directed, although later DSTC challenges do address chit-chat.
The ï¬rst edition of the Conversational Intelligence Challenge took place at the NeurIPS 2017 Competition track in the form of a live competition. The task was for an agent to carry
2The Lost in Conversation entry will be described in detail in separate publication by their team.
2
out intelligent and natural conversations about speciï¬c snippets from Wikipedia articles with humans, which was not engaging to all human participants.
Ten dialogue systems participated in the 2017 competition. The majority of them com- bined multiple conversational models such as question answering and chit-chat systems to make conversations more natural. The evaluation of chatbots was performed by human as- sessors. More than 1,500 volunteers were attracted and over 4,000 dialogues were collected during the competition. All the data and the solutions of the winners are available via the competition repo.3,4 The ï¬nal score of the dialogue quality for the best bot was 2.746 compared to 3.8 for human. This demonstrates that current technology allows supporting dialogue on a given topic but with quality signiï¬cantly lower than that of humans.
In contrast to the ï¬rst edition, the 2018 competition focused on general chit-chat about peopleâs interests, rather than on encyclopedic facts. To our knowledge, no other com- petition has focused on a dataset like this. Importantly, we provided a large training set and validation set in a standard setup, complete with code for baseline systems for en- trants to obtain clear automatic evaluation metrics to improve upon. In the 2017 ConvAI competition, no data was initially provided but was instead collected by volunteers as the competition progressed, which may have led to fewer participants.
Outside of NeurIPS, the most similar competition is probably the Alexa Prize5. This is a competition to build a socialbot that can converse coherently and engagingly with humans on popular topics for 20 minutes. The top bots were selected by Amazon Alexa customers and the Amazon panel and competed head-to-head in front of three judges in November 2017. Another small scale analogue is the Loebner Prize.6 Alexa Prize data and models are not in the open domain, whereas our competition aims to have as deliverables both data and winning models and training code. Further, unfortunately, the outcome mostly conï¬rmed that ensembles are useful in such tasks and did little to drive fundamental algorithm research.
The key diï¬erences from the the ï¬rst (2017) ConvAI competition are the following:
⢠The conversations focused on engaging the interlocutors by discussing personal in- terests (instead of encyclopedia articles they may not be interested in).
⢠A training set was provided at the start of the competition, making the competition much more straightforward for participants.
⢠Evaluation included both automatic metrics, Amazon Mechanical Turk and âwildâ live volunteer conversations, making the evaluation much more complete.
3http://convai.io/2017/data/ 4https://github.com/DeepPavlov/convai/tree/master/2017/solutions 5https://developer.amazon.com/alexaprize 6https://en.wikipedia.org/wiki/Loebner_Prize
3
Persona 1 Persona 2 I like to ski My wife does not like me anymore I have went to Mexico 4 times this year I hate Mexican food I like to eat cheetos I am an artist I have four children I recently got a cat I enjoy walking for exercise I love watching Game of Thrones
[PERSON 1:] Hi [PERSON 2:] Hello ! How are you today ? [PERSON 1:] I am good thank you , how are you. [PERSON 2:] Great, thanks ! My children and I were just about to watch Game of Thrones. [PERSON 1:] Nice ! How old are your children? [PERSON 2:] I have four that range in age from 10 to 21. You? [PERSON 1:] I do not have children at the moment. [PERSON 2:] That just means you get to keep all the popcorn for yourself. [PERSON 1:] And Cheetos at the moment! [PERSON 2:] Good choice. Do you watch Game of Thrones? [PERSON 1:] No, I do not have much time for TV. [PERSON 2:] I usually spend my time painting: but, I love the show.
Table 1: Example dialogue from the Persona-Chat dataset. Person 1 is given their own persona (top left) at the beginning of the chat, but does not know the persona of Person 2, and vice-versa. They have to get to know each other during the conversation.
# 2 Competition description and set-up
# 2.1 Data
The ConvAI2 dataset for training models is publicly available in ParlAI7, and is based on the Persona-Chat dataset[1]. See Table 1 for an example dialogue. The speaker pairs each have assigned proï¬les coming from a set of 1155 possible personas (at training time), each consisting of at least 5 proï¬le sentences, setting aside 100 never seen before personas for validation. The dataset statistics are given in Table 2.
As the original Persona-Chat test set was released, we crowdsourced further data for a hidden test set unseen by the competitors for automatic evaluation. The hidden test set consisted of 100 new personas and over 1,015 dialogs.
To avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example âI just got my nails doneâ is revised as âI love to pamper myself on a regular
7https://github.com/facebookresearch/ParlAI/tree/master/parlai/tasks/convai2
4
basisâ and âI am on a diet nowâ is revised as âI need to lose weight.â
Training set Validation set Hidden test set # examples # dialogues # personas 1,155 100 100 131,438 7,801 6,634 17,878 1,000 1,015
Table 2: Statistics of the ConvAI2 dataset (based on Persona-Chat).
The task aims to model normal conversation when two interlocutors ï¬rst meet, and get to know each other. Their aim is to be engaging, to learn about the otherâs interests, discuss their own interests and ï¬nd common ground. The task is technically challenging as it involves both asking and answering questions, and maintaining a consistent persona, which is provided. Conversing with current chit-chat models for even a short amount of time quickly exposes their weaknesses [2, 3]. Common issues with chit-chat models include: (i) the lack of a consistent personality [4] as they are typically trained over many dialogues each with diï¬erent speakers, (ii) the lack of an explicit long-term memory as they are typically trained to produce an utterance given only the recent dialogue history [3], and (iii) a tendency to produce non-speciï¬c answers like âI donât knowâ [5]. With this task we aim to ï¬nd models that address those speciï¬c issues [1].
Note that for training, competitors were allowed to use other additional training data as long as it was made public (or was already public).
# 2.2 Metrics
We ï¬rst evaluated all submissions on a set of automatic metrics. The top 7 teams from the automatic metrics were then evaluated by humans:
⢠Automatic metrics - Perplexity, F1 and hits@1/20. These were computed on the hidden test.
i=1 logp(wi) for sentence w = w1, w2, ..., wm. This metric is computed only for probabilistic generative models.
precision+recall . In the context of dialogue, precision is the fraction of words in the predicted response that are contained in the gold response, and recall is the fraction of words in the gold response that were in the predicted response. This can be computed for any model, retrieval-based or generative.
â Hits@1/20 â hits@1/N is the accuracy of the next dialogue utterance when choosing between the gold response and N â 1 distractor responses (here, we use N = 19). Distractor responses are random responses from the dataset. Any model that can assign a score to a given candidate utterance can compute this
5
metric. Such a method could then in principle be used in a retrieval model to score retrieved candidates.
⢠Human evaluations -
â Amazonâs Mechanical Turk: Given the entrantsâ model code, we ran live ex- periments where Turkers chatted to a given model following instructions identi- cal to the creation of the original dataset, but with new proï¬les, and then scored its performance. Performance was evaluated by asking Turkers how much they enjoyed talking to the model and having them verify which persona the model was using given the choice between the correct persona and a random one.
â âWildâ Live Chat with Volunteers: We solicited volunteers to chat to the models in a similar way to the Mechanical Turk setup. This setup was hosted through the Facebook Messenger and Telegram APIs.
# 2.3 Baselines and code available
Source code for baseline methods for the competition were provided in the open source system ParlAI [6]8, including training loop and evaluation code. The example models are the methods developed in [1], which we consider strong baselines. They include a retrieval- based Key-Value Memory Network, and two generative models: an LSTM-based attentive Seq2Seq model and a LSTM-based language model.
# 2.4 Rules
⢠Competitors must provide their source code so that the hidden test set evaluation and live experiments can be computed without the teamâs inï¬uence, and so that the competition has further impact as those models can be released for future research to build oï¬ them. Code can be in any language, but a thin python wrapper must be provided in order to work with our evaluation and live experiment code via ParlAIâs interface.
⢠Each team can only submit a maximum of once per month during the automatic metrics round.
⢠We require that the winning systems also release their training code so that their work is reproducible (although we also encourage that for all systems).
⢠Competitors should indicate which training sources are used to build their models, and whether (and how) ensembling is used.
# 8https://github.com/facebookresearch/ParlAI/tree/master/projects/convai2
6
⢠Competitors are free to augment training with other datasets as long as they are publicly released (and hence, reproducible). Hence, all entrants are expected to work on publicly available data or release the data they use to train.
# 2.5 Timeline
⢠April 21: Competition begins: automatic metrics leaderboard, baselines, and sub- mission instructions are posted.
⢠May 9 Hackathon: We organized a non-compulsory hackathon around the competi- tion: DeepHack.Chat. At the hackathon teams aimed to improve their systems, took part in live human evaluations, and listened to lectures from researchers in the ï¬eld.
⢠July 10: âWildâ evaluation is open. Participants may submit their models to be evaluated by live volunteers.
⢠September 30: Submissions for the automatic metrics round are closed. We invite the top seven teams from this round to prepare their submissions for the Mechanical Turk evaluation portion of the competition.
⢠December 9: Winner of the competition is announced at NeurIPS 2018.
# 2.6 Prize
The grand prize for the winner of the human evaluations was awarded $20,000 in funding for Amazon Mechanical Turk, in order to encourage further data collection for dialogue research. The winner in the automatic metrics received $5,000 in AWS compute.
# 3 Results and Analysis
# 3.1 Automatic Metrics
We had over 23 teams submit models to be evaluated for the automatic metrics. The rank of each team was determined by sorting by the minimum rank of the score in any of the three metrics (F1, Hits@1, and Perplexity). The Hugging Face team performed the best in every single metric and was therefore determined to be the winner of this round. All participants and their scores on the hidden test set are shown in Table 3.
The top seven teams made it to the next round. Notably, each of these teams surpassed our baseline models in some metric. The High Five team chose not to participate in the human evaluation round, so ultimately six teams participated in the next round. Refer to Section 4 for a description of the models submitted from the top-performing teams.
7
Team Names Perplexity Hits@1 F1 1. Hugging Face 2. ADAPT Centre 3. Happy Minions 4. High Five 5. Mohd Shadab Alam 6. Lost in Conversation 7. Little Baby 8. Sweet Fish 9. 1st-contact 10. NEUROBOTICS 11. Catsâteam 12. Sonic 13. Pinta 14. Khai Mai Alt 15. loopAI 16. Salty Fish 17. Team Pat 18. Tensorborne 19. Team Dialog 6 20. Roboy 21. IamNotAdele 22. ï¬ooders 23. Clova Xiaodong Gu 16.28 31.4 29.01 - 29.94 - - - 31.98 35.47 - 33.46 32.49 - - 34.32 - 38.24 40.35 - 66.47 - - 80.7 - - 65.9 13.8 17.1 64.8 45.7 13.2 - 35.9 - - 34.6 25.6 - - 12.0 10.9 - - - - 19.5 18.39 16.01 - 16.91 17.77 - - 16.42 16.68 - 16.67 16.39 13.03 - - 16.11 15.94 7.27 15.83 13.09 15.47 14.37 Seq2Seq + Attention Baseline Language Model Baseline KV Proï¬le Memory Baseline 29.8 46.0 - 12.6 - 55.2 16.18 15.02 11.9
Table 3: Automatic Metrics Leaderboard.
# 3.1.1 Further Analysis and Additional Automatic Metrics
Revised Personas We also evaluated models (from the teams in the top 7) that were capable of ranking â i.e. models that were evaluated on the Hits@1 metric â on the ârevisedâ test set. Recall that we crowdsourced additional rewritten sets personas as a way of measuring how much models rely on word overlap between utterances and personas for their performance, as the revised ones have little or no overlap with the original personas. The results are shown in Figure 1. The Hugging Face team performed the best on the revised task, with Little Baby close behind. The performance of the baseline Key-Value Memory Network baseline greatly deteriorated given the revised personas. Hence, we found the success of the best competitorâs models as a good result, which we believe is due to their use of suï¬cient pretraining and regularization, among other factors.
8
# Hits@1
a r) l= original j= revised p-chat test hits@1 / 20 ES r) N 3
Figure 1: Revised Test Set. Hits@1 on the revised test set vs. on the regular test set.
Last Utterance (Parrot) Distractor We also evaluated how adding a distractor can- didate aï¬ected the performance of these ranking models. Namely, we added the last partner message to the list of candidates to rank. A model should only in very rare circumstances parrot the speaking partner, so the Hits@1 metric should remain at a similar score with and without this distractor. See Figure 2 for the results. Most models suï¬ered with this metric, showing they probably rely too much on word overlap with the last utterance when performing ranking (generally a response does have word overlap with the last utterance, but still it should not be a copy â this makes this a somewhat diï¬cult function for models to learn). The Hugging Face model was the most resistant to this type of attack, but still suï¬ered to some degree.
# 3.1.2 F1 Metric Toy Baseline
During the automatic evaluation stage of the competition, we discovered that always reply- ing with âi am you to do and your is likeâ would outperform the F1 score of all the models in the competition. This toy baseline was constructed simply by picking several frequent words from the training set. Speciï¬cally, always replying with this message gives an F1 score of 19.6 on the test set and 20.5 on the validation set (compare to Hugging Faceâs scores of 19.5 and 19.1 on the test and validation sets respectively). In [7], the authors showed that word overlap metrics do not correlate well with human judgment for dialogue response generation systems. This is another indication that we should reconsider using F1 to evaluate our dialogue systems.
9
hits@1 / 20 ES r) N 3 80+ lm original MMM query as cand 60 - ¢ < 2 & SI ⬠< &
Figure 2: Distractor Candidate. Hits@1 on the test set when we add the query (last partner message) as a candidate.
# 3.2 Human Evaluations
# 3.2.1 Wild Evaluation
Recall that we solicited volunteers to chat with and evaluate the top-performing models through the Facebook Messenger and Telegram APIs. This setup is closer to real-world chatbot applications than the Mechanical Turk evaluation set-up. As volunteers â unlike Turkers â are not paid and therefore less likely to follow the instructions closely, we expected the distribution of these results to be quite diï¬erent than those from the Mechanical Turk evaluation. Two notable (but expected) diï¬erences between the results of the Mechanical Turk and âwildâ evaluations are that the wild evaluation had relatively fewer good (i.e. long and consistent) dialogues, and Mechanical Turk workers tended to rate the models higher than the volunteers in the wild evaluation. A detailed description of our âwildâ dataset and a comparison with the data from the ï¬rst ConvAI competition will be described in detail in separate publication9.
Given that many of the dialogues collected during the wild evaluation were either senseless, oï¬ensive, or simply not in line with the instructions, and we did not have a systematic way of distinguishing these from the âgood dialogues, we decided to discount the results from the wild evaluation altogether. See Table 4 for example conversations. The data collected from this evaluation is available in the competition repo10. Detecting this type of undesirable behavior remains an open (but we believe solvable) problem for
9ConvAI2 dataset of non-goal-oriented human-to-bot dialogues (2019). V. Logacheva, V. Malykh, A. Litinsky, M. Burtsev
# 10http://github.com/DeepPavlov/convai/data
10
[HUMAN:] Hey! [BOT:] hello there how are you? do you like dogs? [HUMAN:] Good ! I like dogs, I grew up in a farm [BOT:] that is cool. iâve a german shepard. [HUMAN:] what is your dogâs name ? [BOT:] What video games?. [BOT:] [HUMAN:] [BOT:] [HUMAN:] [BOT:] his name is pepper. do you have any pets? I used to have a dog but he died..... that is so sad! do you watch nhl hockey? no... not a big fan of sport ? no, i love watching it. i love the nhl [Score:] 5/5 [Score:] 4/5
Table 4: Example of a bad conversation (left) and a good conversation (right) from the wild evaluation. The model received a score of 5 for the spam conversation, and 4 for the good conversation.
evaluation of models in a live system.
# 3.2.2 Mechanical Turk Evaluation
Since the results of the wild evaluation were ultimately discounted, the winner of the human evaluation round â and therefore the winner of the competition â was determined by per- formance in the Mechanical Turk Evaluation. As announced at the NeurIPS Competition Track Workshop, the Lost in Conversation team won the competition.
The set-up of the Mechanical Turk evaluation was nearly identical to the set-up we used to collect the original Persona-Chat dataset. The chat interface is shown in Figure 3. For each evaluation, we paired a human worker with a model, assigned each of them personas, and instructed the humans to chat with and get to know their partner. Dialogues were of length 4-6 turns each. Following a short conversation, we asked workers âHow much did you enjoy talking to this user?â and had them answer on a scale of 1-4. Additionally, we tested whether the human could distinguish the persona the model was using from a random one. We crowdsourced 100 evaluations for each model. Samples conversations from some of the models are given in Appendix A.
The results are shown in Table 5. Lost in Conversation won the competition with an engagingness score of 3.11 out of 4. We attempted to reduce annotator bias in the engagingness scores by using a Bayesian calibration method recently proposed in [8]. The results from before and after calibration are given in Figure 4. The calibration did not aï¬ect the ordering of the scores, and the scores reported in the ï¬nal leaderboard are post- calibration.
11
Task Description In this task, you will chat with another user playing the part of a given âcharacter. For example, your given character could be: | am a vegetarian. like swimming. My father used to work for Ford. My favorite âband is Maroon. | gota new job last month, which is about advertising design. (Chat withthe other user naturally and try to get to know each other, ie. both âask questions and answer questions of your chat partner while sticking to your siven character Your assigned character is: âlike watching movies. i work part time in a warehouse. {like punk music. ilike pizza and burgers. âenjoy cruising. Suevessiuny maiuireu. MuW fot 8 YL LU RIO Bau UUTeE AHnuUyH ie Nat You need to finish at least 4 chat turns, after which you can click the "Done" button to end the chat. You can track your character description on the left. Please try to speak to the other person as if you are the character assigned. Do not trivially copy the character descriptions into the message. PERSON_2: hi my name is carl and | like country music. PERSON_1: hey carll i'm more of a punk fan myself PERSON_2: oh nice. ilike to listen to folk. PERSON 1: what do you do for work? i work at a warehouse PERSON. 2: i do not work anymore. i retited and moved to the countryside 5 years ago. | âwow that sounds nice! what do you do for fun?
Figure 3: Mechanical Turk Evaluation Interface. The chat interface used for the Mechanical Turk portion of the evaluation was intentionally similar to the interfae used to collect the original dataset.
Human Evaluations Human Evaluations w & §⬠$$ § & ¢ < = e $s & ge es ¢ # & & Ss P o y¥ 2 § § SF F ~ é PS 2 Rs BS ¢ es ¢ s Y tinal hy ma, |
Figure 4: Mechanical Turk Evaluation: Engagingness. Results before (left) and after (right) Bayesian calibration. The calibration did not alter the ordering of the scores.
12
Team Names Engagingness (1-4) Persona Detection (0-1) 1. Lost in Conversation 2. Hugging Face 3. Little Baby 4. Mohd Shadab Alam 5. Happy Minions 6. ADAPT Centre 3.11 2.68 2.44 2.33 1.92 1.6 0.9 0.98 0.79 0.93 0.46 0.93 Human KV Proï¬le Memory (Baseline) 3.48 2.44 0.96 0.76
Table 5: Human Evaluation Results
Team Names Engagingness # words # words # chars # chars (human) (1-4) (model) (human) (model) 1. Lost in Conversation 2. Hugging Face 3. Little Baby 4. Mohd Shadab Alam 5. Happy Minions 6. ADAPT Centre 3.11 2.67 2.4 2.36 1.92 1.59 10.18 11.5 11.5 9.5 8.0 15.1 11.9 11.9 11.3 10.2 10.2 11.8 39.2 44.4 51.5 33.8 27.9 60.0 48.2 49.2 47.3 42.5 42.5 48.0 Human 3.46 - 13.7 - 57.7
Table 6: Average response length in Mechanical Turk logs.
# 3.2.3 Further Analysis of Results
Length Statistics In an attempt to understand the results from the Mechanical Turk evaluations, we analyzed various word statistics on the conversation logs. We measured the average length of both the bot and human responses for each teamâs evaluation, as shown in Table 6. Models with higher evaluation scores tended to get longer responses from humans, which can be considered as an implicit engagement score. However, this is possibly skewed by humans mimicking the length of the botâs utterances, e.g. consider ADAPT Centreâs results. We note that when humans are speaking with other humans, they have much longer utterances on average than the models do. We believe this is related to their production of more generic, less engaging utterances.
Rare Word Statistics We also looked to see how often rare words were used in the conversation logs. In Table 7, Freq1h and Freq1k indicate the frequency with which the model used words that appear fewer than 100 or 1000 times in the training corpus. The hypothesis here is that utterances with some rare words might be less generic and hence
13
Team Names Engagingness (1-4) Freq1h (model) Freq1h (human) Freq1k (model) Freq1k (human) 1. Lost in Conversation 2. Hugging Face 3. Little Baby 4. Mohd Shadab Alam 5. Happy Minions 6. ADAPT Centre 3.11 2.67 2.4 2.36 1.92 1.59 2.2 2.5 4.9 1.3 0.3 1.7 3.4 4.2 3.7 3.2 4.1 3.5 9.9 9.0 18.3 9.5 4.3 8.8 13.2 15.6 15.6 14.1 14.3 15.1 Human 3.46 4.8 4.3 17.2 16.3
Table 7: Rare word frequencies in Mechanical Turk logs.
more interesting/engaging, rendering higher human evaluation scores. The results show that humans use signiï¬cantly more rare words than any of the models, and the bottom three models do have lower Freq1h scores than the top three; otherwise, however, the relationship between evaluation score of the models and their use of rare words is not completely clear. We suspect that is because this is just one factor among many that would need to be disentangled.
Word and Utterance Repetition Statistics We then looked at how often the mod- els repeated themselves in conversations with humans. Table 8 shows the frequency of unigram, bigram, and trigram repeats in the model responses, as well as how often the modelâs responses were unique in the logs. Again, it is clear the humans repeat them- selves very infrequently, but there is not a clear relationship between our proxy measures of repetition with the human evaluation scores. We suspect this is because there are more subtle instances of repeating that our proxies do not measure, and the proxies have already been optimized by many models (e.g. by doing n-gram or full utterance blocking). For example we observed instances like âi like watching horrorâ followed by âi love watching scary moviesâ occurring, but these are not captured well by our metrics. Finally, overall utterance uniqueness should ideally be close to 100% with the same utterance rarely being repeated across conversations, with humans at 99%. While Hugging Faceâs model was at 97%, many other models were lower, with the winner Lost in Conversation at 86%. A low uniqueness score could be problematic for a deployed system, as it might make users tire of it repeating itself. However, as our competition evaluations involve very short dialogues, this likely did not impact human evaluations.
Blind Evaluation Following the above analyses, it was still unclear why the Lost in Conversation model had a statistically signiï¬cant human evaluation win over the Hugging Face model, even though the Hugging Face model performed much better in the automatic evaluations. To better understand this, we performed a blind evaluation ourselves of a
14
Team Names Engagingness Unigram Bigram Trigram Unique (1-4) Repeats Repeats Repeats Responses 1. Lost in Conversation 2. Hugging Face 3. Little Baby 4. Mohd Shadab Alam 5. Happy Minions 6. ADAPT Centre 3.11 2.67 2.4 2.36 1.92 1.59 2.11 1.49 2.53 3.48 1.62 6.74 5.6 5.04 2.69 11.34 6.56 11.53 2.67 0.6 1.43 7.06 3.81 1.44 86% 97% 91% 83% 53% 98% Human 3.46 1.83 2.47 0.51 99%
Table 8: Repeats in Mechanical Turk logs.
Turker Blind Annotator 1 Blind Annotator 2 2.8 2.47 2 3.29 2.78 2.71
# Hugging Face Lost in Conversation
Table 9: Blind Evaluation Results. Average engagingness score (1-4) for the randomly sampled subset of conversations.
random sample of the Mechanical Turk evaluation logs from these two teams, giving each conversation a score between 1 and 4 and making comments about the modelâs performance. The average score given to this subset of conversations is shown in Table 9. As you can see, despite the apparent annotator bias, each annotator agreed with the Turkers regarding which model was better.
Asking questions Reading through the comments made by the blind annotators after- wards, we noticed that while both models suï¬ered from errors involving repetition, con- sistency or being âboringââ at times, a common complaint about the Hugging Face model was that it âasked too many questions.â In order to determine to what extent this was true, we analyzed the Mechanical Turk logs and measured how often each model response began with a question word (like âwho,â âwhat,â âwhen,â âwhere,â âwhy,â or âhowâ) and how often the response contained a question mark.
The results are given in Figure 5. It is clear that the Hugging Face model is indeed a large outlier. Notably, you can see that in the 100 conversations it had, it began a response with a question word 107 times whereas humans only did this 12 times. When the model asks too many questions it can make the conversation feel disjointed, especially if the questions do not relate to the previous conversation. Friendly chit-chat requires a delicate balance of question-asking and question-answering. The tentative conclusion
15
# Marks
Questions: who, what, when, where, why, how
who, what, when, where, why, Question 300 200 1 8
Figure 5: How often did the models ask questions? We measured (on the left) how often the models began their response with âwho,â âwhat,â âwhen,â âwhere,â âwhy,â or âhow,â as well as (on the right) how often the modelsâ responses contained at least one question mark as an estimate for how often the models asked questions when conversing with humans.
that we draw here is that the tendency to ask too many questions negatively aï¬ected the human evaluation results for the Hugging Face model. Future work should consider how we can automatically evaluate this type of conversation-level performance rather than just utterance-level performance.
Persona Detection Lastly, looking at the persona detection scores from the Mechanical Turk evaluation in Table 5, we note that most models did relatively well in this metric (with the exception of the Happy Minions model). Recall that this score is the percentage of the time that the annotaters were able to to distinguish the modelâs persona from a random one. We often observed models repeating the persona sentences almost verbatim, which might lead to a high persona detection score but a low engagingness score. Training models to use the persona to create engaging responses rather than simply copying it remains an open problem.
# 4 Participating Models
We include a short summary of the model types used for some of the top competitors in Table 10. Some of the authors of these models plan to write detailed papers describing their models. Please also refer to the slides at the website written by the modelâs authors11. The winnerâs (Lost in Conversationâs) code is also publicly available12.
# 11http://convai.io/NeurIPSParticipantSlides.pptx 12https://github.com/atselousov/transformer_chatbot
16
Team Names Model Summary Lost in Conversation Generative Transformer based on OpenAI GPT. Trained on Hugging Face Little Baby Persona-Chat (original+revised), DailyDialog and Reddit comments. Pretrained generative Transformer (Billion Words + CoNLL 2012) with transfer to Persona-Chat. Proï¬le-Encoded Multi-Turn Response Selection via Multi-Grained Deep Match Network. Modiï¬cation of [9]: better model + data augmentation via translation. Mohd Shadab Alam Seq2Seq + Highway model. ADAPT Centre Glove + language model vector. Transfer learning strategy for Seq2Seq tasks. Bi-directional Attentive LSTM. Pretrained via GloVe embeddings + Switchboard, Open Subtitles.
Table 10: Brief model descriptions of some of the top competitors.
# 5 Conclusions and Future Work
Models The best models in the competition were variants of the generative Transformer architecture. Those models have rather high capacity and thus cannot be trained on ConvAI2 (Persona-Chat) data alone, but must be either pretrained or multitasked with additional large datasets. One can use dialogue datasets to pretrain, but it seems as though the system still works well with language modeling datasets that are not explicitly dialogue (e.g. the Billion Words corpus). Many other tweaks to the base models were tried, such as trying to optimize the automatic metrics directly, but without direct ablations with human evaluation it is diï¬cult to state here the eï¬ects of all these components.
Retrieval models fared a little worse than generative models in the human evaluations, although we are unsure if this is true in general, or because no very strong retrieval model was proposed. With a Transformer-based retrieval model it is possible to get Hits@1 in excess of 80% but no such method was tried by a competitor (see Table 3, Hugging Face used a two-head Transformer model, but opted to generate rather than retrieve). In our opinion, looking at the outputs from the generative systems in the competition, they still fall short of the most interesting and engaging comments of humans (which sometimes retrieval models choose); however, the generic responses from generative models are often low-risk or âsafeâ responses, which may give them higher scores. A retrieve and reï¬ne approach (combining generative and retrieval methods) is another possibility that was not explored in the competition [10].
Finally, better sentence representations are being developed all the time. This compe- tition was run before the BERT model [11] was released which has been shown to improve many NLP tasks. Hence, we expect these models to improve on ConvAI2 as well.
17
Automatic vs. Human Evaluation It remains an open problem to ï¬nd the best automatic evaluation metrics for dialogue. There is not enough data from the competition to measure correlation between the automatic metrics we tried and human evaluations in depth. Clearly a randomly initialized model has poor values for all of these metrics, whereas training to optimize any of them will improve human evaluations. The problem is more whether the ï¬ner-grained diï¬erentiation of relatively similar models can be automatically measured. We believe each automatic metric evaluates at least some aspects of what humans consider a âgoodâ model but misses other aspects. As such, optimizing only one of these metrics can fail to address important issues. For example, optimizing per-word perplexity fails to address the search strategy of a model when generating a full utterance, e.g. it is not aï¬ected by beam search choices. Optimizing Hits@1 is a per-utterance metric that fails to address the full conversational ï¬ow (as the gold dialogue history between two humans is used for that metric, not what the model previously said). Some models optimize F1 and do well, however it also has major issues (see Section 3.1.2). Further, it is very hard to compare retrieval and generative models other than by human evaluation.
Nevertheless, we ï¬nd the use of automatic metrics important for several reasons. If we desire to be able to train our models oï¬ine at least initially (which we believe we do) then we need an oï¬ine training objective, which typically relates to automatic metrics. Hence, if we understand how human evaluations relate to automatic metrics, not only will we understand the dialogue task better, but we will know how to perform such oï¬ine training. Additionally, for our competition it would have been very diï¬cult to ï¬lter models for the human evaluation stage without the use of automatic metrics.
Towards Multi-turn Evaluation We thus believe we are still missing some key oï¬ine (automatic) metrics, but have hope that they are possible to ï¬nd. We identiï¬ed that the current metrics fail to measure the multi-turn aspects of human evaluation, in particular in terms of repetition, consistency and balance of dialogue acts. Even the best competitorsâ models often failed to be self-consistent across a few dialogue turns, which we believe was at least partly responsible for lowering their evaluation score. For example, âi am a professional runner. you? i love runningâ followed by âiâm not very athleticâ or âi work as a snowboard instructorâ followed by âi work for a food companyâ are both unlikely continuations of a conversation. Even if they happen infrequently, these problems are particularly jarring for a human speaking partner when they do happen.
In a related problem, we observed the models asking questions that are already an- swered, e.g. one model asks âwhat do you do for a living?â even though the human earlier stated âi work on computersâ resulting in the human replying âI just told you sillyâ.
One possible solution to these problems is the use of dialogue natural language inference (NLI) [12], a new task that has been proposed that evaluates exactly these problems. It works by providing pairs of utterances as input, and the task is then to predict if the pair entail, are neutral or contradict. This is exciting because it can allow us to both (i) ï¬x our modelâs consistency problems by training on this new task and (ii) evaluate to what extent
18
our modelâs consistency problems are ï¬xed using the evaluation set.
Finally, in Section 3.2.3 we identiï¬ed that models that do not balance question asking with answering over multiple turns might can cause human evaluations to suï¬er. Given this information, it may be possible to construct new metrics that measure these kind of balances so that we can optimize them (to look more similar to human data, for instance).
Towards more complex tasks Going forward, even if we can completely solve the ConvAI2 Persona-Chat task (i.e. reach human performance), it is still only a meet- and-greet task involving getting to know someone for a few dialogue turns, with shallow topics and quick context switches. Clearly many aspects of an intelligent agent are not evaluated by this task, such as the use of long-term memory or in-depth knowledge and deeper reasoning. For example, in Table 1 âGame of Thronesâ is mentioned, but a model imitating this conversation would not really be required to know anything more about the show, as in ConvAI2 speakers tend to shallowly discuss each otherâs interest without lingering on a topic for too long. Subsequent competitions could explore this issue further. Such a study is feasible as several new datasets are being released to explore such a setting, in particular the Wizard of Wikipedia task involves using knowledge from Wikipedia to discuss open-domain topics [13]. The DSTC7 competition13 also recently addressed this topic, however the evaluation was not multi-turn.
# 6 Acknowledgements
We thank all the competitors for taking part and making this a successful competi- tion. We especially thank the competitionâs sponsors, Facebook Academics and Ama- zon Web Services. Participation of Mikhail Burtsev, Varvara Logacheva, and Valentin Malykh was supported by National Technology Initiative and PAO Sberbank project ID 0000000007417F630002.
# References
[1] Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243, 2018.
[2] Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, and Joelle Pineau. Generative deep neural networks for dialogue: A short review. arXiv preprint arXiv:1611.06216, 2016.
[3] Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
13http://workshop.colips.org/dstc7/
19
[4] Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, arXiv preprint and Bill Dolan. A persona-based neural conversation model. arXiv:1603.06155, 2016.
A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055, 2015.
[6] Alexander H Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, and Jason Weston. Parlai: A dialog research software platform. arXiv preprint arXiv:1705.06476, 2017.
[7] Chia-Wei Liu, Ryan Lowe, Iulian Vlad Serban, Michael Noseworthy, Laurent Char- lin, and Joelle Pineau. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. CoRR, abs/1603.08023, 2016.
[8] Ilya Kulikov, Alexander H. Miller, Kyunghyun Cho, and Jason Weston. Importance of a search strategy in neural dialogue modelling. CoRR, abs/1811.00907, 2018.
[9] Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. Sequential matching net- work: A new architecture for multi-turn response selection in retrieval-based chatbots. arXiv preprint arXiv:1612.01627, 2016.
[10] Jason Weston, Emily Dinan, and Alexander H Miller. Retrieve and reï¬ne: Improved sequence generation models for dialogue. arXiv preprint arXiv:1808.04776, 2018.
[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre- training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[12] Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. Dialogue natural language inference. arXiv preprint arXiv:1811.00671, 2018.
[13] Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason We- ston. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241, 2018.
# A Example Dialogues
Example dialogues for some of the top models are given in Figures 6, 7, 8, 9, 10, and 11.
20
[BOT:] [HUMAN:] [BOT:] hello, how are you today? Iâm great! I just got back from skydiving! I love doing that! that sounds like fun! i love to paint. [HUMAN:] Awesome! What do you like to paint? [BOT:] [HUMAN:] [BOT:] [HUMAN:] [BOT:] i love to draw portraits of my favorite cars. Iâd love to see them! I live in Arizona, where do you live? i live in california. i love the beach! Iâve been to California! I love to shop at their Barnes and Noble store! i love shopping at kroger! [HUMAN:] Kroger is great! I wish they sold pet food for my pet lizard though!
[Score:] 4/4
Figure 6: Lost in Conversation: example Mechanical Turk conversation.
[HUMAN:] Hello there, I like rock climbing do you have any hobbies? [BOT:] [HUMAN:] i like nature. do you? i like nature. I do love nature, especially when going on walks [BOT:] what is your favorite rock? mine is rock. [HUMAN:] Well I like rock music, does that count? i think it does. what do you do for fun? [BOT:] [HUMAN:] Well I work at a restaurant that I love, and rock climbing [BOT:] that sounds fun! iâm into nature. [HUMAN:] Cool, so am I [BOT:] do you have any pets? do you have pets?
[Score:] 2/4
Figure 7: Hugging Face: example Mechanical Turk conversation.
[BOT:] whats its name, she good oï¬ leash? [HUMAN:] [BOT:] I donât actually have any dogs. its name is noodle she does not need a leash i talk to her in my sleep [HUMAN:] Wow! Does she talk back [BOT:] why do you not believe her [HUMAN:] [BOT:] I donât even know what she said. after my workouts i drink a protein shake instead coï¬ee, do not like coï¬ee [HUMAN:] Alright, thatâs cool.
[Score:] 1/4
Figure 8: Little Baby: example Mechanical Turk conversation.
21
[BOT:] hi how are you today? [HUMAN:] Hello how are you? [BOT:] iâm good. just got done reading a book. [HUMAN:] Which book did you read? [BOT:] i donât read much, i am more into reading [HUMAN:] Where do you live? [BOT:] [HUMAN:] [BOT:] i live in the us. i love to read I live in Alaska i live in the us, i love to read [HUMAN:] Do you like to read?
# [Score:]
1/4
Figure 9: Mohd Shadab Alam: example Mechanical Turk conversation.
[BOT:]
i love to read
[HUMAN:] Nice, Iâm more of a numbers guy myself. I use to be an accountant. [BOT:] well, i am not sure what you mean. [HUMAN:] [BOT:] [HUMAN:] [BOT:] I worked with numbers. I recently moved away to a beach house because I got tired of it i am sorry to hear that. do you have any hobbies? Just hanging out with my dog, I guess. Trying to start a new business. i am not sure what you mean. [HUMAN:] Exactly, itâs hard to ï¬nd meaning [BOT:] [HUMAN:] yeah, i guess. i am not sure what that means. I do work, for pay, occasionally, as a business person.
[Score:] 1/4
Figure 10: Happy Minions: example Mechanical Turk conversation.
[HUMAN:] Hello, have you ever gone scuba diving? [BOT:] [HUMAN:] [BOT:] [HUMAN:] [BOT:] [HUMAN:] [BOT:] no i do not have a job yesterday. do you have any hobbies I love going scuba diving in hawaii. i love scuba diving yesterday. do you like to swim? i do Itâs ok, I prefer to play with my pet snakes. i love to swim yesterday. i like to go to the beach with my job I just got divorced so maybe I should go to the beach. you should go with your job yesterday. do you go to a lot of the beach [HUMAN:] Only when I visit hawaii. [BOT:] [HUMAN:] i love the beach so i have to go with my new job I traveled the Americas playing guitar
[Score:] 2/4
Figure 11: ADAPT Centre: example Mechanical Turk conversation.
22 | {
"id": "1810.04805"
} |
1901.11504 | Multi-Task Deep Neural Networks for Natural Language Understanding | In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for
learning representations across multiple natural language understanding (NLU)
tasks. MT-DNN not only leverages large amounts of cross-task data, but also
benefits from a regularization effect that leads to more general
representations in order to adapt to new tasks and domains. MT-DNN extends the
model proposed in Liu et al. (2015) by incorporating a pre-trained
bidirectional transformer language model, known as BERT (Devlin et al., 2018).
MT-DNN obtains new state-of-the-art results on ten NLU tasks, including SNLI,
SciTail, and eight out of nine GLUE tasks, pushing the GLUE benchmark to 82.7%
(2.2% absolute improvement). We also demonstrate using the SNLI and SciTail
datasets that the representations learned by MT-DNN allow domain adaptation
with substantially fewer in-domain labels than the pre-trained BERT
representations. The code and pre-trained models are publicly available at
https://github.com/namisan/mt-dnn. | http://arxiv.org/pdf/1901.11504 | Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao | cs.CL | 10 pages, 2 figures and 5 tables; Accepted by ACL 2019 | null | cs.CL | 20190131 | 20190530 | 9 1 0 2
y a M 0 3 ] L C . s c [
2 v 4 0 5 1 1 . 1 0 9 1 : v i X r a
# Multi-Task Deep Neural Networks for Natural Language Understanding
# Xiaodong Liuâ1, Pengcheng Heâ2, Weizhu Chen2, Jianfeng Gao1 1 Microsoft Research 2 Microsoft Dynamics 365 AI {xiaodl,penhe,wzchen,jfgao}@microsoft.com
# Abstract
In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning rep- resentations across multiple natural language understanding (NLU) tasks. MT-DNN not only leverages large amounts of cross-task data, but also beneï¬ts from a regularization ef- fect that leads to more general representations to help adapt to new tasks and domains. MT- DNN extends the model proposed in Liu et al. (2015) by incorporating a pre-trained bidirec- tional transformer language model, known as BERT (Devlin et al., 2018). MT-DNN ob- tains new state-of-the-art results on ten NLU tasks, including SNLI, SciTail, and eight out of nine GLUE tasks, pushing the GLUE bench- mark to 82.7% (2.2% absolute improvement) 1. We also demonstrate using the SNLI and Sc- iTail datasets that the representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than the pre-trained BERT representations. The code and pre-trained models are publicly available at https://github.com/namisan/mt-dnn.
# Introduction
Learning vector-space representations of text, e.g., words and sentences, is fundamental to many nat- ural language understanding (NLU) tasks. Two popular approaches are multi-task learning and language model pre-training. In this paper we combine the strengths of both approaches by proposing a new Multi-Task Deep Neural Network (MT-DNN).
Multi-Task Learning (MTL) is inspired by hu- man learning activities where people often apply the knowledge learned from previous tasks to help learn a new task (Caruana, 1997; Zhang and Yang, 2017). For example, it is easier for a person who knows how to ski to learn skating than the one who
âEqual Contribution. 1As of February 25, 2019 on the latest GLUE test set.
does not. Similarly, it is useful for multiple (re- lated) tasks to be learned jointly so that the knowl- edge learned in one task can beneï¬t other tasks. Recently, there is a growing interest in applying MTL to representation learning using deep neu- ral networks (DNNs) (Collobert et al., 2011; Liu et al., 2015; Luong et al., 2015; Xu et al., 2018; Guo et al., 2018; Ruder12 et al., 2019) for two reasons. First, supervised learning of DNNs re- quires large amounts of task-speciï¬c labeled data, which is not always available. MTL provides an effective way of leveraging supervised data from many related tasks. Second, the use of multi-task learning proï¬ts from a regularization effect via al- leviating overï¬tting to a speciï¬c task, thus making the learned representations universal across tasks. language model pre- training has shown to be effective for learning universal language representations by leveraging large amounts of unlabeled data. A recent sur- vey is included in Gao et al. (2018). Some of the most prominent examples are ELMo (Peters et al., 2018), GPT (Radford et al., 2018) and BERT (Devlin et al., 2018). These are neural network language models trained on text data using unsu- pervised objectives. For example, BERT is based on a multi-layer bidirectional Transformer, and is trained on plain text for masked word prediction and next sentence prediction tasks. To apply a pre-trained model to speciï¬c NLU tasks, we often need to ï¬ne-tune, for each task, the model with additional task-speciï¬c layers using task-speciï¬c training data. For example, Devlin et al. (2018) shows that BERT can be ï¬ne-tuned this way to create state-of-the-art models for a range of NLU tasks, such as question answering and natural lan- guage inference.
We argue that MTL and language model pre- training are complementary technologies, and can be combined to improve the learning of text rep-
resentations to boost the performance of various NLU tasks. To this end, we extend the MT-DNN model originally proposed in Liu et al. (2015) by incorporating BERT as its shared text encod- ing layers. As shown in Figure 1, the lower lay- ers (i.e., text encoding layers) are shared across all tasks, while the top layers are task-speciï¬c, combining different types of NLU tasks such as single-sentence classiï¬cation, pairwise text clas- siï¬cation, text similarity, and relevance ranking. Similar to the BERT model, MT-DNN can be adapted to a speciï¬c task via ï¬ne-tuning. Unlike BERT, MT-DNN uses MTL, in addition to lan- guage model pre-training, for learning text repre- sentations.
MT-DNN obtains new state-of-the-art results on eight out of nine NLU tasks 2 used in the Gen- eral Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018), pushing the GLUE benchmark score to 82.7%, amounting to 2.2% ab- solute improvement over BERT. We further extend the superiority of MT-DNN to the SNLI (Bow- man et al., 2015a) and SciTail (Khot et al., 2018) tasks. The representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than the pre-trained BERT rep- resentations. For example, our adapted models achieve the accuracy of 91.6% on SNLI and 95.0% on SciTail, outperforming the previous state-of- the-art performance by 1.5% and 6.7%, respec- tively. Even with only 0.1% or 1.0% of the origi- nal training data, the performance of MT-DNN on both SNLI and SciTail datasets is better than many existing models. All of these clearly demonstrate MT-DNNâs exceptional generalization capability via multi-task learning.
# 2 Tasks
The MT-DNN model combines four types of NLU tasks: single-sentence classiï¬cation, pairwise text classiï¬cation, text similarity scoring, and rele- vance ranking. For concreteness, we describe them using the NLU tasks deï¬ned in the GLUE benchmark as examples.
2The only GLUE task where MT-DNN does not create a new state of the art result is WNLI. But as noted in the GLUE webpage (https://gluebenchmark.com/faq), there are issues in the dataset, and none of the submitted systems has ever outperformed the majority voting baseline whose accu- racy is 65.1.
Single-Sentence Classiï¬cation: Given a sen- tence3, the model labels it using one of the pre- deï¬ned class labels. For example, the CoLA task is to predict whether an English sentence is gram- matically plausible. The SST-2 task is to de- termine whether the sentiment of a sentence ex- tracted from movie reviews is positive or negative.
Text Similarity: This is a regression task. Given a pair of sentences, the model predicts a real-value score indicating the semantic similarity of the two sentences. STS-B is the only example of the task in GLUE.
Pairwise Text Classiï¬cation: Given a pair of sentences, the model determines the relationship of the two sentences based on a set of pre-deï¬ned labels. For example, both RTE and MNLI are language inference tasks, where the goal is to pre- dict whether a sentence is an entailment, contra- diction, or neutral with respect to the other. QQP and MRPC are paraphrase datasets that consist of sentence pairs. The task is to predict whether the sentences in the pair are semantically equivalent.
Relevance Ranking: Given a query and a list of candidate answers, the model ranks all the can- didates in the order of relevance to the query. QNLI is a version of Stanford Question Answer- ing Dataset (Rajpurkar et al., 2016). The task in- volves assessing whether a sentence contains the correct answer to a given query. Although QNLI is deï¬ned as a binary classiï¬cation task in GLUE, in this study we formulate it as a pairwise ranking task, where the model is expected to rank the can- didate that contains the correct answer higher than the candidate that does not. We will show that this formulation leads to a signiï¬cant improvement in accuracy over binary classiï¬cation.
# 3 The Proposed MT-DNN Model
The architecture of the MT-DNN model is shown in Figure 1. The lower layers are shared across all tasks, while the top layers represent task-speciï¬c outputs. The input X, which is a word sequence (either a sentence or a pair of sentences packed together) is ï¬rst represented as a sequence of em- bedding vectors, one for each word, in l1. Then the transformer encoder captures the contextual infor- mation for each word via self-attention, and gen-
3In this study, a sentence can be an arbitrary span of con- tiguous text or word sequence, rather than a linguistically plausible sentence.
P(cIX) (e.g., probability of labeling text X by c) sim(X%,, Xp) (e.g., semantic similarity between X; P.(RIP, H) (eg., probability of logic relationship R Rel(Q, A) (e.g, relevance score of candidate answer A and Xz ) between P and H) given query Q) Task specific t t t t layers Single-Sentence Pairwise Text Pairwise Text Pairwise Classification Similarity Classification Ranking (e.g., CoLA, SST-2) (eg., STS-B) (eg, RTE, MNLI, (e.g., QNLI) WNLI, QQP, MRPC) ly: context embedding vectors, one for each token. Transformer Encoder (contextual embedding layers) Shared T layers 1: input embedding vectors, one each token. Lexicon Encoder (word, position and segment) ry X:a sentence or a pair of sentences
Figure 1: Architecture of the MT-DNN model for representation learning. The lower layers are shared across all tasks while the top layers are task-speciï¬c. The input X (either a sentence or a pair of sentences) is ï¬rst represented as a sequence of embedding vectors, one for each word, in l1. Then the Transformer encoder captures the contextual information for each word and generates the shared contextual embedding vectors in l2. Finally, for each task, additional task-speciï¬c layers generate task-speciï¬c representations, followed by operations necessary for classiï¬cation, similarity scoring, or relevance ranking.
erates a sequence of contextual embeddings in l2. This is the shared semantic representation that is trained by our multi-task objectives. In what fol- lows, we elaborate on the model in detail.
ers using the NLU tasks in GLUE as examples, although in practice we can incorporate arbitrary natural language tasks such as text generation where the output layers are implemented as a neu- ral decoder.
Lexicon Encoder (l1): The input X = {x1, ..., xm} is a sequence of tokens of length m. Following Devlin et al. (2018), the ï¬rst token x1 is always the [CLS] token. If X is packed by a sen- tence pair (X1, X2), we separate the two sentences with a special token [SEP]. The lexicon encoder maps X into a sequence of input embedding vec- tors, one for each token, constructed by summing the corresponding word, segment, and positional embeddings.
Single-Sentence Classiï¬cation Output: Sup- pose that x is the contextual embedding (l2) of the token [CLS], which can be viewed as the seman- tic representation of input sentence X. Take the SST-2 task as an example. The probability that X is labeled as class c (i.e., the sentiment) is pre- dicted by a logistic regression with softmax:
P,(e X) = softmax(W gor x), (1)
Transformer Encoder (l2): We use a multi- layer bidirectional Transformer encoder (Vaswani et al., 2017) to map the input representation vec- tors (l1) into a sequence of contextual embedding vectors C â RdÃm. This is the shared represen- tation across different tasks. Unlike the BERT model (Devlin et al., 2018) that learns the rep- resentation via pre-training, MT-DNN learns the representation using multi-task objectives, in ad- dition to pre-training.
where WSST is the task-speciï¬c parameter ma- trix.
Text Similarity Output: Take the STS-B task as an example. Suppose that x is the contextual embedding (l2) of [CLS] which can be viewed as the semantic representation of the input sen- tence pair (X1, X2). We introduce a task-speciï¬c parameter vector wST S to compute the similarity score as:
Below, we will describe the task speciï¬c lay-
Sim(X1, X2) = wars - x, (2)
where Sim(X1, X2) is a real value of the range (- â, â).
Pairwise Text Classiï¬cation Output: Take nat- ural language inference (NLI) as an example. The NLI task deï¬ned here involves a premise P = (p1, ..., pm) of m words and a hypothesis H = (h1, ..., hn) of n words, and aims to ï¬nd a log- ical relationship R between P and H. The de- sign of the output module follows the answer module of the stochastic answer network (SAN) (Liu et al., 2018a), a state-of-the-art neural NLI model. SANâs answer module uses multi-step rea- soning. Rather than directly predicting the entail- ment given the input, it maintains a state and iter- atively reï¬nes its predictions.
The SAN answer module works as follows. We first construct the working memory of premise P by concatenating the contextual embeddings of the words in P, which are the output of the trans- former encoder, denoted as M? ⬠R¢*â¢, and sim- ilarly the working memory of hypothesis H, de- noted as M" ⬠R¢*â. Then, we perform K-step reasoning on the memory to output the relation la- bel, where x is a hyperparameter. At the begin- ning, the initial state s° is the summary of M": s=-> j a;M?, where aj = er ery: At time step k in the range of {1,2,,K â 1}, the state is defined by s* = GRU(s*!,x*). Here, x* 1 âis computed from the previous state s*â Lk K P = and memory M?: x* = 7, 6;Mj and 6; = softmax(s*-'WJ MP). A one-layer classifier is used to determine the relation at each step k:
3 [sk; xk; |sk â xk|; sk · xk]). (3) At last, we utilize all of the K outputs by aver-
aging the scores:
Pr = avg([P 0 r , P 1 r , ..., P Kâ1 r ]). (4)
Each Pr is a probability distribution over all the relations R â R. During training, we apply stochastic prediction dropout (Liu et al., 2018b) before the above averaging operation. During de- coding, we average all outputs to improve robust- ness.
Relevance Ranking Output: Take QNLI as an example. Suppose that x is the contextual embed- ding vector of [CLS] which is the semantic rep- resentation of a pair of question and its candidate
answer (Q, A). We compute the relevance score as:
Rel(Q, A) = 9(wowrr-); (5)
For a given Q, we rank all of its candidate an- swers based on their relevance scores computed using Equation 5.
# 3.1 The Training Procedure
The training procedure of MT-DNN consists of two stages: pretraining and multi-task learning. The pretraining stage follows that of the BERT model (Devlin et al., 2018). The parameters of the lexicon encoder and Transformer encoder are learned using two unsupervised prediction tasks: masked language modeling and next sentence pre- diction.4
In the multi-task learning stage, we use mini- batch based stochastic gradient descent (SGD) to learn the parameters of our model (i.e., the pa- rameters of all shared layers and task-speciï¬c lay- ers) as shown in Algorithm 1. In each epoch, a mini-batch bt is selected(e.g., among all 9 GLUE tasks), and the model is updated according to the task-speciï¬c objective for the task t. This approx- imately optimizes the sum of all multi-task objec- tives.
For the classiï¬cation tasks (i.e., single-sentence or pairwise text classiï¬cation), we use the cross- entropy loss as the objective:
~ SE U(X o)log(Pr(e[X)), ©
where 1(X, c) is the binary indicator (0 or 1) if class label c is the correct classiï¬cation for X, and Pr(.) is deï¬ned by e.g., Equation 1 or 4.
For the text similarity tasks, such as STS-B, where each sentence pair is annotated with a real- valued score y, we use the mean squared error as the objective:
(y â Sim(X1, X2))2, (7)
where Sim(.) is deï¬ned by Equation 2.
The objective for the relevance ranking tasks follows the pairwise learning-to-rank paradigm (Burges et al., 2005; Huang et al., 2013). Take QNLI as an example. Given a query Q, we obtain a list of candidate answers A which contains a pos- itive example A+ that includes the correct answer,
4In this study we use the pre-trained BERT models re- leased by the authors.
Algorithm 1: Training a MT-DNN model. Initialize model parameters © randomly. Pre-train the shared layers (i.e., the lexicon encoder and the transformer encoder). Set the max number of epoch: epochmaz- //Prepare the data for T tasks. for t in 1,2,...,T do Pack the dataset ¢ into mini-batch: D;. end for epoch in 1, 2,...,epochmax do 1. Merge all the datasets: D=D,UDs,...U Dr 2. Shuffle D for b; in D do //v;, is a mini-batch of task t. 3. Compute loss : L(@) L(®) = Eq. 6 for classification L(©) = Eq. 7 for regression L(O) = Eq. 8 for ranking 4. Compute gradient: V(0) 5. Update model: 0 = © â eV(0) end
# end
and |A| â 1 negative examples. We then minimize the negative log likelihood of the positive example given queries across the training data
â Pr(A+|Q), (Q,A+) (8)
exp(7Rel(Q, A*)) Dares exP(7Rel(Q, Aâ))â P,(AT|Q) = (9)
where Rel(.) is deï¬ned by Equation 5 and γ is a tuning factor determined on held-out data. In our experiment, we simply set γ to 1.
# 4 Experiments
We evaluate the proposed MT-DNN on three pop- ular NLU benchmarks: GLUE (Wang et al., 2018), SNLI (Bowman et al., 2015b), and SciTail (Khot et al., 2018). We compare MT-DNN with exist- ing state-of-the-art models including BERT and demonstrate the effectiveness of MTL with and without model ï¬ne-tuning using GLUE and do- main adaptation using both SNLI and SciTail.
# 4.1 Datasets
This section brieï¬y describes the GLUE, SNLI, and SciTail datasets, as summarized in Table 1.
GLUE The General Language Understanding Evaluation (GLUE) benchmark is a collection of nine NLU tasks as in Table 1, including question answering, sentiment analysis, text similarity and textual entailment; it is considered well-designed for evaluating the generalization and robustness of NLU models.
SNLI The Stanford Natural Language Inference (SNLI) dataset contains 570k human annotated sentence pairs, in which the premises are drawn from the captions of the Flickr30 corpus and hy- potheses are manually annotated (Bowman et al., 2015b). This is the most widely used entailment dataset for NLI. The dataset is used only for do- main adaptation in this study.
SciTail This is a textual entailment dataset de- rived from a science question answering (SciQ) dataset (Khot et al., 2018). The task involves as- sessing whether a given premise entails a given hy- pothesis. In contrast to other entailment datasets mentioned previously, the hypotheses in SciTail are created from science questions while the cor- responding answer candidates and premises come from relevant web sentences retrieved from a large corpus. As a result, these sentences are linguis- tically challenging and the lexical similarity of premise and hypothesis is often high, thus making SciTail particularly difï¬cult. The dataset is used only for domain adaptation in this study.
# Implementation details
Our implementation of MT-DNN is based on the PyTorch implementation of BERT5. We used Adamax (Kingma and Ba, 2014) as our optimizer with a learning rate of 5e-5 and a batch size of 32 by following Devlin et al. (2018). The max- imum number of epochs was set to 5. A linear learning rate decay schedule with warm-up over 0.1 was used, unless stated otherwise. We also set the dropout rate of all the task speciï¬c layers as 0.1, except 0.3 for MNLI and 0.05 for CoLa. To avoid the exploding gradient problem, we clipped the gradient norm within 1. All the texts were to- kenized using wordpieces, and were chopped to spans no longer than 512 tokens.
# 4.3 GLUE Main Results
We compare MT-DNN with its variants and a list of state-of-the-art models that have been submitted
5https://github.com/huggingface/pytorch-pretrained- BERT
Corpus Task #Label Single-Sentence Classiï¬cation (GLUE) #Train #Dev #Test Metrics CoLA SST-2 Acceptability Sentiment 8.5k 67k 1k 872 1k 1.8k 2 2 Matthews corr Accuracy Pairwise Text Classiï¬cation (GLUE) MNLI RTE WNLI QQP MRPC NLI NLI NLI Paraphrase Paraphrase STS-B Similarity QNLI QA/NLI SNLI NLI SciTail NLI 20k 393k 276 2.5k 71 634 40k 364k 3.7k 408 Text Similarity (GLUE) 1.5k 20k 3k 146 391k 1.7k 3 2 2 2 2 7k 1.4k Relevance Ranking (GLUE) 5.7k Pairwise Text Classiï¬cation 9.8k 2.1k 1 108k 5.7k 2 549k 23.5k 9.8k 1.3k 3 2 Accuracy Accuracy Accuracy Accuracy/F1 Accuracy/F1 Pearson/Spearman corr Accuracy Accuracy Accuracy
Table 1: Summary of the three benchmarks: GLUE, SNLI and SciTail.
Model CoLA SST-2 MRPC STS-B 8.5k BiLSTM+ELMo+Attn 1 36.0 Singletask Pretrain Transformer 2 GPT on STILTs 3 BERT4 LARGE MT-DNNno-ï¬ne-tune MT-DNN Human Performance QQP MNLI-m/mm QNLI RTE WNLI AX Score 364k 67k 90.4 84.9/77.9 75.1/73.3 64.8/84.7 634 65.1 2.5k 56.8 3.7k 7k 393k 76.4/76.1 108k - 26.5 29.8 53.4 56.0 - 82.1/81.4 91.3 82.3/75.7 82.0/80.0 70.3/88.5 45.4 80.8/80.6 86.7/85.9 86.5/85.8 86.7/86.0 92.0/92.8 69.1 70.1 79.1 81.4 93.6 65.1 65.1 65.1 65.1 95.9 93.1 87.7/83.7 85.3/84.8 70.1/88.1 94.9 89.3/85.4 87.6/86.5 72.1/89.3 94.6 90.1/86.4 89.5/88.8 72.7/89.6 95.6 91.1/88.2 89.5/88.8 72.7/89.6 97.8 86.3/80.8 92.7/92.6 59.5/80.4 47.2 60.5 58.9 62.5 66.4 - 92.7 93.1 93.1 91.2 29.4 39.6 39.4 40.3 - 70.5 72.8 76.9 80.5 81.7 82.7 87.1
Table 2: GLUE test set results scored using the GLUE evaluation server. The number below each task denotes the number of training examples. The state-of-the-art results are in bold, and the results on par with or pass human performance are in bold. MT-DNN uses BERTLARGE to initialize its shared layers. All the results are obtained from https://gluebenchmark.com/leaderboard on February 25, 2019. Model references: 1:(Wang et al., 2018) ; 2:(Radford et al., 2018); 3: (Phang et al., 2018); 4:(Devlin et al., 2018).
Model BERTLARGE ST-DNN MT-DNN MNLI-m/mm QQP RTE QNLI (v1/v2) MRPC CoLa SST-2 STS-B 86.3/86.2 86.6/86.3 87.1/86.7 91.1/88.0 71.1 91.3/88.4 72.0 91.9/89.2 83.4 90.5/92.4 96.1/- 97.4/92.9 89.5/85.8 61.8 89.7/86.4 91.0/87.5 63.5 - 93.5 89.6/89.3 - - 94.3 90.7/90.6
Table 3: GLUE dev set results. The best result on each task is in bold. The Single-Task DNN (ST-DNN) uses the same model architecture as MT-DNN. But its shared layers are the pre-trainedBERT model without being reï¬ned via MTL. We ï¬ne-tuned ST-DNN for each GLUE task using task-speciï¬c data. There have been two versions of the QNLI dataset. V1 is expired on January 30, 2019. The current version is v2. MT-DNN use BERTLARGE as their initial shared layers.
to the GLUE leaderboard. The results are shown in Tables 2 and 3.
We ï¬ne-tuned the model for each GLUE task on task-speciï¬c data.
BERTLARGE This is the large BERT model re- leased by the authors, which we used as a baseline.
MT-DNN This is the proposed model described in Section 3. We used the pre-trained BERTLARGE
to initialize its shared layers, reï¬ned the model via MTL on all GLUE tasks, and ï¬ne-tuned the model for each GLUE task using task-speciï¬c data. The test results in Table 2 show that MT-DNN out- performs all existing systems on all tasks, ex- cept WNLI, creating new state-of-the-art results on eight GLUE tasks and pushing the benchmark to 82.7%, which amounts to 2.2% absolution im- provement over BERTLARGE. Since MT-DNN uses BERTLARGE to initialize its shared layers, the gain is mainly attributed to the use of MTL in reï¬ning the shared layers. MTL is particularly useful for the tasks with little in-domain training data. As we observe in the table, on the same type of tasks, the improvements over BERT are much more substantial for the tasks with less in-domain training data than those with more in-domain la- bels, even though they belong to the same task type, e.g., the two NLI tasks: RTE vs. MNLI, and the two paraphrase tasks: MRPC vs. QQP.
MT-DNNno-ï¬ne-tune Since the MTL of MT-DNN uses all GLUE tasks, it is possible to directly ap- ply MT-DNN to each GLUE task without ï¬ne- tuning. The results in Table 2 show that MT- DNNno-ï¬ne-tune still outperforms BERTLARGE con- sistently among all tasks but CoLA. Our analysis shows that CoLA is a challenge task with much smaller in-domain data than other tasks, and its task deï¬nition and dataset are unique among all GLUE tasks, making it difï¬cult to beneï¬t from the knowledge learned from other tasks. As a result, MTL tends to underï¬t the CoLA dataset. In such a case, ï¬ne-tuning is necessary to boost the performance. As shown in Table 2, the ac- curacy improves from 58.9% to 62.5% after ï¬ne- tuning, even though only a very small amount of in-domain data is available for adaptation. This, together with the fact that the ï¬ne-tuned MT-DNN signiï¬cantly outperforms the ï¬ne-tuned BERTLARGE on CoLA (62.5% vs. 60.5%), reveals that the learned MT-DNN representation allows much more effective domain adaptation than the pre-trained BERT representation. We will revisit this topic with more experiments in Section 4.4.
The gain of MT-DNN is also attributed to its ï¬exible modeling framework which allows us to incorporate the task-speciï¬c model structures and training methods which have been developed in the single-task setting, effectively leveraging the existing body of research. Two such examples are the use of the SAN answer module for the pairwise
text classiï¬cation output module and the pairwise ranking loss for the QNLI task which by design is a binary classiï¬cation problem in GLUE. To in- vestigate the relative contributions of these mod- eling design choices, we implement a variant of MT-DNN as described below.
ST-DNN ST-DNN stands for Single-Task DNN. It uses the same model architecture as MT-DNN. But its shared layers are the pre-trained BERT model without being reï¬ned via MTL. We then ï¬ne-tuned ST-DNN for each GLUE task using task-speciï¬c data. Thus, for pairwise text classi- ï¬cation tasks, the only difference between their ST-DNNs and BERT models is the design of the task-speciï¬c output module. The results in Ta- ble 3 show that on all four tasks (MNLI, QQP, RTE and MRPC) ST-DNN outperforms BERT, justi- fying the effectiveness of the SAN answer mod- ule. We also compare the results of ST-DNN and BERT on QNLI. While ST-DNN is ï¬ne-tuned us- ing the pairwise ranking loss, BERT views QNLI as binary classiï¬cation and is ï¬ne-tuned using the cross entropy loss. ST-DNN signiï¬cantly outper- forms BERT demonstrates clearly the importance of problem formulation.
# 4.4 Domain Adaptation Results on SNLI and SciTail
(a) SNLI Accuracy 3 ses BERT om MT-DNN Logl0(Percentage of Training Data) (b) SciTail Accuracy 60 âeâ BERT â= MT-DNN =3.0 -2.5 =2.0 =i. =1.0 -65 0.0 Log10(Percentage of Training Data)
Figure 2: Domain adaption results on SNLI and Sci- Tail development datasets using the shared embeddings generated by MT-DNN and BERT, respectively. Both MT-DNN and BERT are ï¬ne-tuned based on the pre- trained BERTBASE. The X-axis indicates the amount of domain-speciï¬c labeled samples used for adaptation.
Model 0.1% 1% 10% 100% SNLI Dataset (Dev Accuracy%) #Training Data 549 5,493 54,936 52.5 BERT 82.1 MT-DNN 78.1 85.2 86.7 88.4 549,367 91.0 91.5 SciTail Dataset (Dev Accuracy%) #Training Data 23 51.2 BERT 81.9 MT-DNN 235 82.2 88.3 2,359 90.5 91.1 23,596 94.3 95.7
Table 4: Domain adaptation results on SNLI and Sci- Tail, as shown in Figure 2.
One of the most important criteria of building practical systems is fast adaptation to new tasks and domains. This is because it is prohibitively expensive to collect labeled training data for new domains or tasks. Very often, we only have very small training data or even no training data.
To evaluate the models using the above crite- rion, we perform domain adaptation experiments on two NLI tasks, SNLI and SciTail, using the fol- lowing procedure:
1. use the MT-DNN model or the BERT as ini- tial model including both BASE and LARGE model settings;
2. create for each new task (SNLI or SciTail) a task-speciï¬c model, by adapting the trained MT-DNN using task-speciï¬c training data;
3. evaluate the models using task-speciï¬c test data.
We starts with the default training/dev/test set of these tasks. But we randomly sample 0.1%, 1%, 10% and 100% of its training data. As a re- sult, we obtain four sets of training data for Sci- Tail, which respectively includes 23, 235, 2.3k and 23.5k training samples. Similarly, we obtain four sets of training data for SNLI, which respectively include 549, 5.5k, 54.9k and 549.3k training sam- ples.
We perform random sampling ï¬ve times and re- port the mean among all the runs. Results on dif- ferent amounts of training data from SNLI and Sc- iTail are reported in Figure 2. We observe that MT-DNN outperforms the BERT baseline consis- tently with more details provided in Table 4. The fewer training examples used, the larger improve- ment MT-DNN demonstrates over BERT. For ex- ample, with only 0.1% (23 samples) of the SNLI
training data, MT-DNN achieves 82.1% in accu- racy while BERTâs accuracy is 52.5%; with 1% of the training data, the accuracy from MT-DNN is 85.2% and BERT is 78.1%. We observe similar results on SciTail. The results indicate that the rep- resentations learned by MT-DNN are more consis- tently effective for domain adaptation than BERT. In Table 5, we compare our adapted mod- els, using all in-domain training samples, against several strong baselines including the best re- sults reported in the leaderboards. We see that MT-DNNLARGE generates new state-of-the-art re- sults on both datasets, pushing the benchmarks to 91.6% on SNLI (1.5% absolute improvement) and 95.0% on SciTail (6.7% absolute improvement), respectively. This results in the new state-of-the- art for both SNLI and SciTail. All of these demon- strate the exceptional performance of MT-DNN on domain adaptation.
Model Dev Test SNLI Dataset (Accuracy%) GPT (Radford et al., 2018) Kim et al. (2018)â BERTBASE MT-DNNBASE BERTLARGE MT-DNNLARGE - - 91.0 91.5 91.7 92.2 SciTail Dataset (Accuracy%) 89.9 90.1 90.8 91.1 91.0 91.6 GPT (Radford et al., 2018)â BERTBASE MT-DNNBASE BERTLARGE MT-DNNLARGE - 94.3 95.7 95.7 96.3 88.3 92.0 94.1 94.4 95.0
Table 5: Results on the SNLI and SciTail dataset. are marked by Previous â, obtained from the ofï¬cial SNLI leaderboard the and (https://nlp.stanford.edu/projects/snli/) ofï¬cial SciTail leaderboard maintained by AI2 (https://leaderboard.allenai.org/scitail).
# 5 Conclusion
In this work we proposed a model called MT- DNN to combine multi-task learning and lan- guage model pre-training for language represen- tation learning. MT-DNN obtains new state-of- the-art results on ten NLU tasks across three pop- ular benchmarks: SNLI, SciTail, and GLUE. MT- DNN also demonstrates an exceptional generaliza- tion capability in domain adaptation experiments.
There are many future areas to explore to im- prove MT-DNN, including a deeper understand- ing of model structure sharing in MTL, a more effective training method that leverages related- ness among multiple tasks, for both ï¬ne-tuning and pre-training (Dong et al., 2019), and ways of incorporating the linguistic structure of text in a more explicit and controllable manner. At last, we also would like to verify whether MT-DNN is resilience against adversarial attacks (Glockner et al., 2018; Talman and Chatzikyriakidis, 2018; Liu et al., 2019).
# Acknowledgments
We would like to thanks Jade Huang from Mi- crosoft for her generous help on this work.
# References
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015a. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632â642.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015b. A large an- notated corpus for learning natural language infer- In Proceedings of the 2015 Conference on ence. Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguis- tics.
Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. 2005. Learning to rank using gradient descent. In Proceedings of the 22nd international conference on Machine learning, pages 89â96. ACM.
Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41â75.
Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from Journal of Machine Learning Research, scratch. 12(Aug):2493â2537.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Uniï¬ed Zhou, and Hsiao-Wuen Hon. 2019. language model pre-training for natural language arXiv preprint understanding and generation. arXiv:1905.03197.
J. Gao, M. Galley, and L. Li. 2018. Neural approaches to conversational AI. CoRR, abs/1809.08267.
Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that re- quire simple lexical inferences. In The 56th Annual Meeting of the Association for Computational Lin- guistics (ACL), Melbourne, Australia.
Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Soft layer-speciï¬c multi-task summarization In Pro- with entailment and question generation. ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 687â697.
Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on informa- tion & knowledge management, pages 2333â2338. ACM.
Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A textual entailment dataset from science question answering. In AAAI.
Seonhoon Kim, Jin-Hyuk Hong, Inho Kang, and No- jun Kwak. 2018. Semantic sentence matching with densely-connected recurrent and co-attentive infor- mation. arXiv preprint arXiv:1805.11360.
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Xiaodong Liu, Kevin Duh, and Jianfeng Gao. 2018a. Stochastic answer networks for natural language in- ference. arXiv preprint arXiv:1804.07888.
Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. 2015. Representa- tion learning using multi-task deep neural networks for semantic classiï¬cation and information retrieval. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 912â921.
Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Improving multi-task deep neural networks via knowledge distillation for arXiv preprint natural arXiv:1904.09482.
Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2018b. Stochastic answer networks for ma- chine reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers). Asso- ciation for Computational Linguistics.
Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi-task arXiv preprint sequence to sequence learning. arXiv:1511.06114.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365.
Jason Phang, Thibault F´evry, and Samuel R Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Improving language under- Ilya Sutskever. 2018. standing by generative pre-training.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. pages 2383â2392.
Sebastian Ruder12, Joachim Bingel, Isabelle Augen- stein, and Anders Søgaard. 2019. Latent multi-task architecture learning.
Aarne Talman and Stergios Chatzikyriakidis. 2018. Testing the generalization power of neural network arXiv preprint models across nli benchmarks. arXiv:1810.09774.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.
Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
Yichong Xu, Xiaodong Liu, Yelong Shen, Jingjing Liu, and Jianfeng Gao. 2018. Multi-task learning for machine reading comprehension. arXiv preprint arXiv:1809.06963.
Yu Zhang and Qiang Yang. 2017. A survey on multi- task learning. arXiv preprint arXiv:1707.08114. | {
"id": "1811.01088"
} |
1901.10995 | Go-Explore: a New Approach for Hard-Exploration Problems | A grand challenge in reinforcement learning is intelligent exploration,
especially when rewards are sparse or deceptive. Two Atari games serve as
benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall.
On both games, current RL algorithms perform poorly, even those with intrinsic
motivation, which is the dominant method to improve performance on
hard-exploration domains. To address this shortfall, we introduce a new
algorithm called Go-Explore. It exploits the following principles: (1) remember
previously visited states, (2) first return to a promising state (without
exploration), then explore from it, and (3) solve simulated environments
through any available means (including by introducing determinism), then
robustify via imitation learning. The combined effect of these principles is a
dramatic performance improvement on hard-exploration problems. On Montezuma's
Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the
previous state of the art. Go-Explore can also harness human-provided domain
knowledge and, when augmented with it, scores a mean of over 650k points on
Montezuma's Revenge. Its max performance of nearly 18 million surpasses the
human world record, meeting even the strictest definition of "superhuman"
performance. On Pitfall, Go-Explore with domain knowledge is the first
algorithm to score above zero. Its mean score of almost 60k points exceeds
expert human performance. Because Go-Explore produces high-performing
demonstrations automatically and cheaply, it also outperforms imitation
learning work where humans provide solution demonstrations. Go-Explore opens up
many new research directions into improving it and weaving its insights into
current RL algorithms. It may also enable progress on previously unsolvable
hard-exploration problems in many domains, especially those that harness a
simulator during training (e.g. robotics). | http://arxiv.org/pdf/1901.10995 | Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, Jeff Clune | cs.LG, cs.AI, stat.ML | 37 pages, 14 figures; added references to Goyal et al. and Oh et al.,
updated reference to Colas et al; updated author emails; point readers to
updated paper | null | cs.LG | 20190130 | 20210226 | 1 2 0 2
b e F 6 2 ] G L . s c [
4 v 5 9 9 0 1 . 1 0 9 1 : v i X r a
# Go-Explore: a New Approach for Hard-Exploration Problems
# Adrien Ecoffet
Joost Huizinga Joel Lehman Uber AI Labs San Francisco, CA 94103 adrienecoffet,joost.hui,jclune@gmail.com *Co-senior authors Kenneth O. Stanley*
# Jeff Clune*
Authorsâ note: We recommend reading (and citing) our updated paper, âFirst return, then exploreâ:
Ecoffet, A., Huizinga, J., Lehman, J., Stanley, K.O. and Clune, J. First return, then explore. Nature 590, 580â586 (2021). https://doi.org/10.1038/s41586-020-03157-9
It can be found at https://tinyurl.com/Go-Explore-Nature.
# Abstract
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezumaâs Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to encourage exploration and improve performance on hard- exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember states that have previously been visited, (2) ï¬rst return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through exploiting any available means (including by introducing determinism), then robustify (create a policy that can reliably perform the solution) via imitation learning. The combined effect of these principles generates dramatic performance improvements on hard- exploration problems. On Montezumaâs Revenge, without being provided any domain knowledge, Go-Explore scores over 43,000 points, almost 4 times the previous state of the art. Go-Explore can also easily harness human-provided domain knowledge, and when augmented with it Go-Explore scores a mean of over 650,000 points on Montezumaâs Revenge. Its max performance of 18 million surpasses the human world record by an order of magnitude, thus meeting even the strictest deï¬nition of âsuperhumanâ performance. On Pitfall, Go-Explore with domain knowledge is the ï¬rst algorithm to score above zero. Its mean performance of almost 60,000 points also exceeds expert human performance. Because Go- Explore can produce many high-performing demonstrations automatically and cheaply, it also outperforms previous imitation learning work in which the solution was provided in the form of a human demonstration. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in a variety of domains, especially the many that often harness a simulator during training (e.g. robotics).
# 1 Introduction
Reinforcement learning (RL) has experienced signiï¬cant progress in recent years, achieving super- human performance in board games such as Go [1, 2] and in classic video games such as Atari [3]. However, this progress obscures some of the deep unmet challenges in scaling RL to complex
real-world domains. In particular, many important tasks require effective exploration to be solved, i.e. to explore and learn about the world even when rewards are sparse or deceptive. In sparse-reward problems, precise sequences of many (e.g. hundreds or more) actions must be taken between ob- taining rewards. Deceptive-reward problems are even harder, because instead of feedback rarely being provided, the reward function actually provides misleading feedback for reaching the overall global objective, which can lead to getting stuck on local optima. Both sparse and deceptive reward problems constitute âhard-explorationâ problems, and classic RL algorithms perform poorly on them [4]. Unfortunately, most challenging real-world problems are also hard-exploration problems. That is because we often desire to provide abstract goals (e.g. âï¬nd survivors and tell us their location,â or âturn off the valve to the leaking pipe in the reactorâ), and such reward functions do not provide detailed guidance on how to solve the problem (sparsity) while also often creating unintended local optima (deception) [5â8].
For example, in the case of ï¬nding survivors in a disaster area, survivors will be few and far between, thus introducing sparsity. Even worse, if we also instruct the robot to minimize damage to itself, this additional reward signal may actively teach the robot not to explore the environment, because exploration is initially much more likely to result in damage than it is to result in ï¬nding a survivor. This seemingly sensible additional objective thus introduces deception on top of the already sparse reward problem.
To address these challenges, this paper introduces Go-Explore, a new algorithm for hard-exploration problems that dramatically improves state-of-the-art performance in two classic hard-exploration benchmarks: the Atari games Montezumaâs Revenge and Pitfall.
Prior to Go-Explore, the typical approach to sparse reward problems has been intrinsic motivation (IM) [4, 9â11], which supplies the RL agent with intrinsic rewards (IRs) that encourage exploration (augmenting or replacing extrinsic reward that comes from the environment). IM is often motivated by psychological concepts such as curiosity [12, 13] or novelty-seeking [7, 14], which play a role in how humans explore and learn. While IM has produced exciting progress in sparse reward problems, in many domains IM approaches are still far from fully solving the problem, including on Montezumaâs Revenge and Pitfall. We hypothesize that, amongst other issues, such failures stem from two root causes that we call detachment and derailment.
Detachment is the idea that an agent driven by IM could become detached from the frontiers of high intrinsic reward (IR). To understand detachment, we must ï¬rst consider that intrinsic reward is nearly always a consumable resource: a curious agent is curious about states to the extent that it has not often visited them (similar arguments apply for surprise, novelty, or prediction-error seeking agents [4, 14â16]). If an agent discovers multiple areas of the state space that produce high IR, its policy may in the short term focus on one such area. After exhausting some of the IR offered by that area, the policy may by chance begin consuming IR in another area. Once it has exhausted that IR, it is difï¬cult for it to rediscover the frontier it detached from in the initial area, because it has already consumed the IR that led to that frontier (Fig. 1), and it likely will not remember how to return to that frontier due to catastrophic forgetting [17â20]. Each time this process occurs, a potential avenue of exploration can be lost, or at least be difï¬cult to rediscover. In the worst case, there may be a dearth of remaining IR near the areas of state space visited by the current policy (even though much IR might remain elsewhere), and therefore no learning signal remains to guide the agent to further explore in an effective and informed way. One could slowly add intrinsic rewards back over time, but then the entire fruitless process could repeat indeï¬nitely. In theory a replay buffer could prevent detachment, but in practice it would have to be large to prevent data about the abandoned frontier to not be purged before it becomes needed, and large replay buffers introduce their own optimization stability difï¬culties [21, 22]. The Go-Explore algorithm addresses detachment by explicitly storing an archive of promising states visited so that they can then be revisited and explored from later.
Derailment can occur when an agent has discovered a promising state and it would be beneï¬cial to return to that state and explore from it. Typical RL algorithms attempt to enact such desirable behavior by running the policy that led to the initial state again, but with some stochastic perturbations to the existing policy mixed in to encourage a slightly different behavior (e.g. exploring further). The stochastic perturbation is performed because IM agents have two layers of exploration mechanisms: (1) the higher-level IR incentive that rewards when new states are reached, and (2) a more basic exploratory mechanism such as epsilon-greedy exploration, action-space noise, or parameter-space noise [23â25]. Importantly, IM agents rely on the latter mechanism to discover states containing
2
1. Intrinsic reward (green) is distributed 2. An IM algorithm might start by exploring throughout the environment (purple) a nearby area with intrinsic reward Start 3. By chance, it may explore 4. Exploration fails to rediscover another equally profitable area promising areas it has detached from
Figure 1: A hypothetical example of detachment in intrinsic motivation (IM) algorithms. Green areas indicate intrinsic reward, white indicates areas where no intrinsic reward remains, and purple areas indicate where the algorithm is currently exploring. (1) The agent starts each episode between the two mazes. (2) It may by chance start exploring the West maze and IM may drive it to learn to traverse, say, 50% of it. (3) Because current algorithms sprinkle in randomness (either in actions or parameters) to try to produce new behaviors to ï¬nd explicit or intrinsic rewards, by chance the agent may at some point begin exploring the East maze, where it will also encounter a lot of intrinsic reward. After completely exploring the East maze, it has no explicit memory of the promising exploration frontier it abandoned in the West maze. It likely would also have no implicit memory of this frontier due to the problem of catastrophic forgetting [17â20]. (4) Worse, the path leading to the frontier in the West maze has already been explored, so no (or little) intrinsic motivation remains to rediscover it. We thus say the algorithm has detached from a frontier of states that provide intrinsic motivation. As a result, exploration can stall when areas close to where the current agent visits have already been explored. This problem would be remedied if the agent remembered and returned to previously discovered promising areas for exploration, which Go-Explore does.
Phase 1: explore until solved Phase 2: robustify (if necessary) Select state Conoretate Explore Update Run imitation learning from archive] from state archive on best trajectory
Figure 2: A high-level overview of the Go-Explore algorithm.
high IR, and the former mechanism to return to them. However, the longer, more complex, and more precise a sequence of actions needs to be in order to reach a previously-discovered high-IR state, the more likely it is that such stochastic perturbations will âderailâ the agent from ever returning to that state. That is because the needed precise actions are naively perturbed by the basic exploration mechanism, causing the agent to only rarely succeed in reaching the known state to which it is drawn, and from which further exploration might be most effective. To address derailment, an insight in Go-Explore is that effective exploration can be decomposed into ï¬rst returning to a promising state (without intentionally adding any exploration) before then exploring further.
Go-Explore is an explicit response to both detachment and derailment that is also designed to achieve robust solutions in stochastic environments. The version presented here works in two phases (Fig. 2): (1) ï¬rst solve the problem in a way that may be brittle, such as solving a deterministic version of the problem (i.e. discover how to solve the problem at all), and (2) then robustify (i.e. train to be able to
3
reliably perform the solution in the presence of stochasticity).1 Similar to IM algorithms, Phase 1 focuses on exploring infrequently visited states, which forms the basis for dealing with sparse-reward and deceptive problems. In contrast to IM algorithms, Phase 1 addresses detachment and derailment by accumulating an archive of states and ways to reach them through two strategies: (a) add all interestingly different states visited so far into the archive, and (b) each time a state from the archive is selected to explore from, ï¬rst Go back to that state (without adding exploration), and then Explore further from that state in search of new states (hence the name âGo-Exploreâ).
An analogy of searching a house can help one contrast IM algorithms and Phase 1 of Go-Explore. IM algorithms are akin to searching through a house with a ï¬ashlight, which casts a narrow beam of exploration ï¬rst in one area of the house, then another, and another, and so on, with the light being drawn towards areas of intrinsic motivation at the edge of its small visible region. It can get lost if at any point the beam fails to fall on any area with intrinsic motivation remaining. Go-Explore more resembles turning the lights on in one room of a house, then its adjacent rooms, then their adjacent rooms, etc., until the entire house is illuminated. Go-Explore thus gradually expands its sphere of knowledge in all directions simultaneously until a solution is discovered.
If necessary, the second phase of Go-Explore robustiï¬es high-performing trajectories from the archive such that they are robust to the stochastic dynamics of the true environment. Go-Explore robustiï¬es via imitation learning (aka learning from demonstrations or LfD [26â29]), a technique that learns how to solve a task from human demonstrations. The only difference with Go-Explore is that the solution demonstrations are produced automatically by Phase 1 of Go-Explore instead of being provided by humans. The input to this phase is one or more high-performing trajectories, and the output is a robust policy able to consistently achieve similar performance. The combination of both phases instantiates a powerful algorithm for hard-exploration problems, able to deeply explore sparse- and deceptive-reward environments and robustify high-performing trajectories into reliable solutions that perform well in the unmodiï¬ed, stochastic test environment.
Some of these ideas are similar to ideas proposed in related work. Those connections are discussed in Section 5. That said, we believe we are the ï¬rst to combine these ideas in this way and demonstrate that doing so provides substantial performance improvements on hard-exploration problems.
To explore its potential, we test Go-Explore on two hard-exploration benchmarks from the Arcade Learning Environment (ALE) [30, 31]: Montezumaâs Revenge and Pitfall. Montezumaâs Revenge has become an important benchmark for exploration algorithms (including intrinsic motivation algorithms) [4, 16, 32â39] because precise sequences of hundreds of actions must be taken in between receiving rewards. Pitfall is even harder because its rewards are sparser (only 32 positive rewards are scattered over 255 rooms) and because many actions yield small negative rewards that dissuade RL algorithms from exploring the environment.
Classic RL algorithms (i.e. those without intrinsic motivation) such as DQN [3], A3C [40], Ape- X [41] and IMPALA [42] perform poorly on these domains even with up to 22 billion game frames of experience, scoring 2,500 or lower on Montezumaâs Revenge and failing to solve level one, and scoring ⤠0 on Pitfall. Those results exclude experiments that are evaluated in a deterministic test environment [43, 44] or were given human demonstrations [26, 27, 45]. On Pitfall, the lack of positive rewards and frequent negative rewards causes RL algorithms to learn a policy that effectively does nothing, either standing completely still or moving back and forth near the start of the game (https://youtu.be/Z0lYamtgdqQ [46]).
These two games are also tremendously difï¬cult for planning algorithms, even when allowed to plan directly within the game emulator. Classical planning algorithms such as UCT [47â49] (a powerful form of Monte Carlo tree search [49, 50]) obtain 0 points on Montezumaâs Revenge because the state space is too large to explore effectively, even with probabilistic methods [30, 51].
Despite being speciï¬cally designed to tackle sparse reward problems and being the dominant method for them, IM algorithms also struggle with Montezumaâs Revenge and Pitfall, although they perform better than algorithms without IM. On Montezumaâs Revenge, the best such algorithms thus far average around 11,500 with a maximum of 17,500 [16, 39]. One solved level 1 of the game in 10% of its runs [16]. Even with IM, no algorithm scores greater than 0 on Pitfall (in a stochastic test
1Note that this second phase is in principle not necessary if Phase 1 itself produces a policy that can handle stochastic environments (Section 2.1.3).
4
environment, without a human demonstration). We hypothesize that detachment and derailment are major reasons why IM algorithms do not perform better.
When exploiting easy-to-provide domain knowledge, Go-Explore on Montezumaâs Revenge scores a mean of 666,474, and its best run scores over 18 million and solves 1,441 levels. On Pitfall, Go-Explore scores a mean of 59,494 and a maximum of 107,363, which is close to the maximum of the game of 112,000 points. Without exploiting domain knowledge, Go-Explore still scores a mean of 43,763 on Montezumaâs Revenge. All scores are dramatic improvements over the previous state of the art. This and all other claims about solving the game and producing state-of-the-art scores assume that, while stochasticity is required during testing, deterministic training is allowable (discussed in Section 2.1.3). We conclude that Go-Explore is a promising new algorithm for solving hard-exploration RL tasks with sparse and/or deceptive rewards.
# 2 The Go-Explore Algorithm
The insight that remembering and returning reliably to promising states is fundamental to effective exploration in sparse-reward problems is at the core of Go-Explore. Because this insight is so ï¬exible and can be exploited in different ways, Go-Explore effectively encompasses a family of algorithms built around this key idea. The variant implemented for the experiments in this paper and described in detail in this section relies on two distinct phases. While it provides a canonical demonstration of the possibilities opened up by Go-Explore, other variants are also discussed (e.g. in Section 4) to provide a broader compass for future applications.
# 2.1 Phase 1: Explore until solved
In the two-phase variant of Go-Explore presented in this paper, the purpose of Phase 1 is to explore the state space and ï¬nd one or more high-performing trajectories that can later be turned into a robust policy in Phase 2. To do so, Phase 1 builds up an archive of interestingly different game states, which we call âcellsâ (Section 2.1.1), and trajectories that lead to them. It starts with an archive that only contains the starting state. From there, it builds the archive by repeating the following procedures: choose a cell from the current archive (Section 2.1.2), return to that cell without adding any stochastic exploration (Section 2.1.3), and then explore from that location stochastically (Section 2.1.4). During this process, any newly encountered cells (as well as how to reach them) or improved trajectories to existing cells are added to the archive (Section 2.1.5).
# 2.1.1 Cell representations
One could, in theory, run Go-Explore directly in a high-dimensional state space (wherein each cell contains exactly one state); however doing so would be intractable in practice. To be tractable in high-dimensional state spaces like Atari, Phase 1 of Go-Explore needs a lower-dimensional space within which to search (although the ï¬nal policy will still play in the same original state space, in this case pixels). Thus, the cell representation should conï¬ate similar states while not conï¬ating states that are meaningfully different.
In this way, a good cell representation should reduce the dimensionality of the observations into a meaningful low-dimensional space. A rich literature investigates how to obtain good representations from pixels. One option is to take latent codes from the middle of neural networks trained with traditional RL algorithms maximizing extrinsic and/or intrinsic motivation, optionally adding auxiliary tasks such as predicting rewards [52]. Additional options include unsupervised techniques such as networks that autoencode [53] or predict future states, and other auxiliary tasks such as pixel control [54].
While it will be interesting to test any or all of these techniques with Go-Explore in future work, for these initial experiments with Go-Explore we test its performance with two different representations: a simple one that does not harness game-speciï¬c domain knowledge, and one that does exploit easy-to-provide domain knowledge.
# Cell representations without domain knowledge
We found that a very simple dimensionality reduction procedure produces surprisingly good results on Montezumaâs Revenge. The main idea is simply to downsample the current game frame. Speciï¬cally,
5
Figure 3: Example cell representation without domain knowledge, which is simply to down- sample each game frame. The full observable state, a color image, is converted to grayscale and downscaled to an 11 Ã 8 image with 8 possible pixel intensities.
we (1) convert each game frame image to grayscale (2) downscale it to an 11 à 8 image with area interpolation (i.e. using the average pixel value in the area of the downsampled pixel), (3) rescale pixel intensities so that they are integers between 0 and 8, instead of the original 0 to 255 (Fig. 3). The downscaling dimensions and pixel-intensity range were found by grid search. The aggressive downscaling used by this representation is reminiscent of the Basic feature set from Bellemare et al. [30]. This cell representation requires no game-speciï¬c knowledge and is fast to compute.
# Cell representations with domain knowledge
The ability of an algorithm to integrate easy-to-provide domain knowledge can be an important asset. In Montezumaâs Revenge, domain knowledge is provided as unique combinations of the x, y position of the agent (discretized into a grid in which each cell is 16 à 16 pixels), room number, level number, and in which rooms the currently-held keys were found. In the case of Pitfall, only the x, y position of the agent and the room number were used. All this information was extracted directly from pixels with simple hand-coded classiï¬ers to detect objects such as the main characterâs location combined with our knowledge of the map structure in the two games (Appendix A.3). While Go-Explore provides the opportunity to leverage domain knowledge in the cell representation in Phase 1, the robustiï¬ed neural network produced by Phase 2 still plays directly from pixels only.
# 2.1.2 Selecting cells
In each iteration of Phase 1, a cell is chosen from the archive to explore from. This choice could be made uniformly at random, but we can improve upon that baseline in many cases by creating (or learning) a heuristic for preferring some cells over others. In preliminary experiments, we found that such a heuristic can improve performance over uniform random sampling (data not shown). The exact heuristic differs depending on the problem being solved, but at a high level, the heuristics in our work assign a positive weight to each cell that is higher for cells that are deemed more promising. For example, cells might be preferred because they have not been visited often, have recently contributed to discovering a new cell, or are expected to be near undiscovered cells. The weights of all cells are normalized to represent the probability of each cell being chosen next. No cell is ever given a weight equal to 0, so that all cells in principle remain available for further exploration. The exact heuristics from our experiments are described in Appendix A.5.
# 2.1.3 Returning to cells and opportunities to exploit deterministic simulators
One of the main principles of Go-Explore is to return to a promising cell without added exploration before exploring from that cell. The Go-Explore philosophy is that we should make returning to that cell as easy as possible given the constraints of the problem. The easiest way to return to a cell is if the world is deterministic and resettable, such that one can reset the state of the simulator to a previous visit to that cell. Whether performing such resets is allowable for RL research is an interesting subject of debate that was motivated by the initial announcement of Go-Explore [55]. The ability to harness determinism and perform such resets forces us to recognize that there are two different types of problems we wish to solve with RL algorithms: those that require stochasticity at test time only, and those that require stochasticity during both testing and training.
We start with the former. Because current RL algorithms can take unsafe actions [56, 57] and require tremendous amounts of experience to learn [41, 42, 58], the majority of applications of RL in the
6
foreseeable future will likely require training in a simulator before being transferred to (and optionally ï¬ne-tuned in) the real world. For example, most work with learning algorithms for robotics train in a simulator before transferring the solution to the real world; that is because learning directly on the robot is slow, sample-inefï¬cient, can damage the robot, and can be unsafe [59â61]. Fortunately, for many domains, simulators are available (e.g. robotics simulators, trafï¬c simulators, etc.). An insight of Go-Explore is that we can take advantage of the fact that such simulators can be made deterministic to improve performance, especially on hard-exploration problems. For many types of problems, we want a reliable ï¬nal solution (e.g. a robot that reliably ï¬nds survivors after a natural disaster) and there is no principled reason to care whether we obtain this solution via initially deterministic training. If we can solve previously unsolvable problems, including ones that are stochastic at evaluation (test) time, via making simulators deterministic, we should take advantage of this opportunity.
There are also cases where a simulator is not available and where learning algorithms must confront stochasticity during training. To create and test algorithms for this second type of problem, we cannot exploit determinism and resettability. Examples of this class of problems include when we must learn directly in the real world (and an effective simulator is not available and cannot be learned), or when studying the learning of biological animals, including ourselves. We believe Go-Explore can handle such situations by training goal-conditioned policies [62, 63] that reliably return to cells in the archive during the exploration phase, which is an interesting area for future research. While computationally much more expensive, this strategy would result in a fully trained policy at the end of the exploration phase, meaning there would be no need for a robustiï¬cation phase at the end. We note that there are some problems where the environment has forms of stochasticity that prevent the algorithm from reliably returning to a particular cell, regardless of which action the agent takes (e.g. in poker, there is no sequence of actions that reliably leads you to a state where you have two aces). We leave a discussion and study of whether Go-Explore helps in that problem setting for future work.
With this distinction in mind, we can now ask whether Montezumaâs Revenge and Pitfall represent the ï¬rst type of domain (where all we care about is a solution that is robust to stochasticity at test time) or the second (situations where the algorithm must handle stochasticity while training). We believe few people in the community had considered this question before our initial blog post on Go-Explore [55] and that it created a healthy debate on this subject. Because Atari games are proxies for the problems we want to solve with RL, and because both types of problems exist, a natural conclusion is that we should have benchmarks for each. One version of a task can require stochasticity during testing only, and another can require stochasticity during both training and testing. All results and claims in this version of this paper are for the version of these domains that does not require stochasticity during training (i.e. stochasticity is required during evaluation only). Applying Go-Explore when training is stochastic remains an exciting avenue of research for the near future.
For problems in which all we care about is a reliable policy at test time, a key insight behind Go-Explore is that we can ï¬rst solve the problem (Phase 1), and then (if necessary) deal with making the solution more robust later (Phase 2). In contrast with the usual view of determinism as a stumbling block to producing agents that are robust and high-performing, it can be made an ally during exploration and then the solution extended to nondeterminism afterwards via robustiï¬cation. An important domain where such insights can help is robotics, where training is often done in simulation before policies are transferred to the real world [59â61].
For the experiments in this paper, because we harness deterministic training, we could return to a cell by storing the sequence of actions that lead to it and subsequently replay those actions. However, simply saving the state of the emulator (in addition to this sequence of steps) and restoring that state when revisiting a cell gains additional efï¬ciency. Doing so reduced the number of steps that needed to be simulated by at least one order of magnitude (Appendix A.8).
Due to the fact that the present version of Go-Explore operates in a deterministic setting during Phase 1, each cell is associated with an open-loop sequence of instructions that lead to it given the initial state, not a proper policy that maps any state to an action. A true policy is produced during robustiï¬cation in Phase 2 (Section 2.2).
# 2.1.4 Exploration from cells
Once a cell is reached, any exploration method can be applied to ï¬nd new cells. In this work the agent explores by taking random actions for k = 100 training frames, with a 95% probability of repeating the previous action at each training frame (frames at which the agent is allowed to take an action,
7
thus not including any frames skipped due to frame skip, see Appendix A.1). Besides reaching the k = 100 training frame limit for exploration, exploration is also aborted at the episodeâs end (deï¬ned in Appendix A.2), and the action that led to the episode ending is ignored because it does not produce a destination cell.
Interestingly, such exploration does not require a neural network or other controller, and indeed no neural network was used for the exploration phase (Phase 1) in any of the experiments in this paper (we do not train a neural network until Phase 2). The fact that entirely random exploration works so well highlights the surprising power of simply returning to promising cells before exploring further, though we believe exploring intelligently (e.g. via a trained policy) would likely improve our results and is an interesting avenue for future work.
# 2.1.5 Updating the archive
While an agent is exploring from a cell, the archive updates in two conditions. The ï¬rst condition is when the agent visits a cell that was not yet in the archive (which can happen multiple times while exploring from a given cell). In this case, that cell is added to the archive with four associated pieces of metadata: (1) how the agent got to that cell (here, a full trajectory from the starting state to that cell), (2) the state of the environment at the time of discovering the cell (if the environment supports such an operation, which is true for the two Atari-game domains in this paper), (3) the cumulative score of that trajectory, and (4) the length of that trajectory.
The second condition is when a newly-encountered trajectory is âbetterâ than that belonging to a cell already in the archive. For the experiments below, we deï¬ne a new trajectory as better than an existing trajectory when the new trajectory either has a higher cumulative score or when it is a shorter trajectory with the same score. In either case, the existing cell in the archive is updated with the new trajectory, the new trajectory length, the new environment state, and the new score. In addition, information affecting the likelihood of this cell being chosen (see Appendix A.5) is reset, including the total number of times the cell has been chosen and the number of times the cell has been chosen since leading to the discovery of another cell. Resetting these values is beneï¬cial when cells conï¬ate many different states because a new way of reaching a cell may actually be a more promising stepping stone to explore from (so we want to encourage its selection). We do not reset the counter that records the number of times the cell has been visited because that would make recently discovered cells indistinguishable from recently updated cells, and recently discovered cells (i.e. those with low visit counts) are more promising to explore because they are likely near the surface of our expanding sphere of knowledge.
Because cells conï¬ate many states, we cannot assume that a trajectory from start state A through cell B to cell C will still reach C if we substitute a different, better way to get from A to B; therefore, the better way of reaching a cell is not integrated into the trajectories of other cells that built upon the original trajectory. However, performing such substitutions might work with goal-conditioned or otherwise robust policies, and investigating that possibility is an interesting avenue for future work.
# 2.1.6 Batch implementation
We implemented Phase 1 in parallel to take advantage of multiple CPUs (our experiments ran on a single machine with 22 CPU cores): at each step, a batch of b cells is selected (with replacement) according to the rules described in Section 2.1.2 and Appendix A.5, and exploration from each of these cells proceeds in parallel for each. Besides using the multiple CPUs to run more instances of the environment, a high b also saves time by recomputing cell selection probabilities less frequently, which is important as this computation accounts for a signiï¬cant portion of run time as the archive gets large (though this latter factor could be mitigated in other ways in the future). Because the size of b also has an indirect effect on the exploration behavior of Go-Explore (for instance, the initial state is guaranteed to be chosen b times at the very ï¬rst iteration), it is in effect a hyperparameter, whose values are given in Appendix A.6.
# 2.2 Phase 2: Robustiï¬cation
If successful, the result of Phase 1 is one or more high-performing trajectories. However, if Phase 1 of Go-Explore harnessed determinism in a simulator, such trajectories will not be robust to any stochasticity, which is present at test time. Phase 2 addresses this gap by creating a policy robust to
8
noise via imitation learning, also called learning from demonstration (LfD). Importantly, stochasticity is added during Phase 2 so that the ï¬nal policy is robust to the stochasticity it will face during its evaluation in the test environment. Thus the policy being trained has to learn how to mimic and/or perform as well as the trajectory obtained from the Go-Explore exploration phase while simultaneously dealing with circumstances that were not present in the original trajectory. Depending on the stochasticity of the environment, this adjustment can be highly challenging, but nevertheless is far easier than attempting to solve a sparse-reward problem from scratch.
While most imitation learning algorithms could be used for Phase 2, different types of imitation learning algorithms can qualitatively affect the resulting policy. LfD algorithms that try to closely mimic the behavior of the demonstration may struggle to improve upon it. For this reason, we chose an LfD algorithm that has been shown capable of improving upon its demonstrations: the Backward Algorithm from Salimans and Chen [28]. It works by starting the agent near the last state in the trajectory, and then running an ordinary RL algorithm from there (in this case Proximal Policy Optimization (PPO) [64]). Once the algorithm has learned to obtain the same or a higher reward than the example trajectory from that starting place near the end of the trajectory, the algorithm backs the agentâs starting point up to a slightly earlier place along the trajectory, and repeats the process until eventually the agent has learned to obtain a score greater than or equal to the example trajectory all the way from the initial state. Note that a similar algorithm was discovered independently at around the same time by Resnick et al. [65].
While this approach to robustiï¬cation effectively treats the expert trajectory as a curriculum for the agent, the policy is only optimized to maximize its own score, and not actually forced to accurately mimic the trajectory. For this reason, this phase is able to further optimize the expert trajectories, as well as generalize beyond them, both of which we observed in practice in our experiments (Section 3). In addition to seeking a higher score than the original trajectory, because it is an RL algorithm with a discount factor that prizes near-term rewards more than those gathered later, it also has a pressure to improve the efï¬ciency with which it collects rewards. Thus if the original trajectory contains unnecessary actions (like visiting a dead end and returning), such behavior could be eliminated during robustiï¬cation (a phenomenon we also observed).
# 2.3 Additional experimental and analysis details
Comparing sample complexity for RL algorithms trained on Atari games can be tricky due to the common usage of frame skipping [31, 66], wherein a policy only sees and acts every nth (here, 4) frame, and that action is repeated for intervening frames to save the computation of running the policy. Speciï¬cally, it can be ambiguous whether the frames that are skipped are counted (which we call âgame framesâ) or ignored (which we call âtraining framesâ) when discussing sample complexity. In this work, we always qualify the word âframeâ accordingly and all numbers we report are measured in game frames. Appendix A.1 further details the subtleties of this issue.
Because the Atari games are deterministic by default, some form of stochasticity needs to be introduced to provide a stochastic test environment, which is desirable to make Atari an informative test bed for RL algorithms. Following previous work, we introduce stochasticity into the Atari environment with two previously employed techniques: random no-ops and sticky actions.
Random no-ops means that the agent is forced to take up to 30 no-ops (do nothing commands) at the start of the game. Because most Atari games run on a timer that affects whether hazards are present or not, or where different hazards, items, or enemies are located, taking a random number of no-ops puts the world into a slightly different state each time, meaning that ï¬xed trajectories (such as the ones found by Go-Explore Phase 1) will no longer work. Random no-ops were ï¬rst introduced by Mnih et al. [3], and they were adopted as a primary source of stochasticity in most subsequent papers working in the Atari domain [3, 26, 27, 34, 38, 41, 42, 45, 67â73].
While random no-ops prevent single, memorized trajectories from solving Atari games, the remainder of the game remains deterministic, meaning there is still much determinism that can be exploited. While several other forms of stochasticity have been proposed (e.g. humans restarts [74], random frame skips [75], etc.), a particularly elegant form is sticky actions [31], where at each game frame there exists some probability of repeating the previous action instead of performing a newly chosen action. This way to introduce stochasticity is akin to how humans are not frame perfect, but may hold a button for slightly longer than they intended, or how they may be slightly late in pressing a button.
9
Because Atari games have been designed for human play, the addition of sticky actions generally does not prevent a game from being solvable, and it adds some stochasticity to every state in the game, not just the start. Although our initial blog post [55] only included random no-ops, in this paper our robustiï¬cation and all post-robustiï¬cation test results are produced with a combination of both random no-ops and sticky actions. All algorithms we compare against in Section 3 and in Appendix A.9 likewise were tested with some form of stochasticity (in the form of no-ops, sticky actions, human starts, or some combination thereof), though it is worth noting that, unlike Go-Explore, most also had to handle stochasticity throughout training. Relevant algorithms that were tested in a deterministic environment are discussed in Section 5.
All hyperparameters were found by performing a separate grid-search for each experiment. The ï¬nal, best performing hyperparameters are listed in Appendix A.6, tables 1 and 2. All conï¬dence intervals given are 95% bootstrapped conï¬dence intervals computed using the pivotal (also known as empirical) method [76], obtained by resampling 10,000 times. Conï¬dence intervals are reported with the following notation: stat (CI: lower â upper) where stat is the statistic (a mean unless otherwise speciï¬ed). In graphs containing shaded areas, those areas indicate the 95% percentile bootstrapped conï¬dence interval of the mean, obtained by resampling 1,000 times. Graphs of the exploration phase (Phase 1) depict data at approximately every 4M game frames and graphs of the robustiï¬cation phase (Phase 2) depict data at approximately every 130,000 game frames.
Because the robustiï¬cation process can diverge even after ï¬nding a solution, the neural network at the end of training does not necessarily perform well, even if a high-performing solution was found at some point during this process. To retrieve a neural network that performs well regard- less of when it was found, all robustiï¬cation runs (Phase 2) produced a checkpoint of the neural network approximately every 13M game frames. Because the performance values recorded during robustiï¬cation are noisy, we cannot select the best performing checkpoint from those performance values alone. As such, at the end of each robustiï¬cation run, out of the checkpoints with the lowest max_starting_point (or close to it), a random subset of checkpoints (between 10 and 50) was tested to evaluate the performance of the neural network stored within that checkpoint. We test a random subset because robustiï¬cation runs usually produce more successful checkpoints then we can realistically test. The highest-scoring checkpoint for each run was then re-tested to account for the selection bias inherent in selecting the best checkpoint. The scores from this ï¬nal retest are the ones we report.
The neural network from each checkpoint is evaluated with random no-ops and sticky actions until at least 5 scores for each of the 31 possible starting no-ops (from 0 to 30 inclusive) are obtained. The mean score for each no-op is then calculated and the ï¬nal score for the checkpoint is the grand mean of the individual no-op scores. Unless otherwise speciï¬ed, the default time limit of 400,000 game frames imposed by OpenAI Gym [75] is enforced.
# 3 Results
# 3.1 Montezumaâs Revenge
# 3.1.1 Without domain knowledge in the cell representation
In this ï¬rst experiment, we run Go-Explore on Montezumaâs Revenge with the downsampled image cell representation, which does not require game-speciï¬c domain knowledge. Despite the simplicity of this cell representation, Phase 1 of Go-Explore solves level 1 in 57% of runs after 1.2B game frames (a modest number by modern standards [41, 42]), with one of the 100 runs also solving level 2, and visits a mean of 35 rooms (CI: 33 â 37) (Fig. 4a). The number of new cells being discovered is still increasing linearly after 1.2B game frames, indicating that results would likely be even better were it run longer (Fig. 4b). Phase 1 of Go-Explore achieves a mean score of 57,439 (CI: 47,843 â 67,224) (Fig. 4c). Level 1 was solved after a mean of 640M (CI: 567M â 711M) game frames, which took a mean of 10.8 (CI: 9.5 â 12.0) hours on a single, 22-CPU machine (note that these level 1 numbers exclude the runs that never solved level 1 after 1.2B game frames). See Appendix A.8 for more details on performance.
Amusingly, Go-Explore discovered a little-known bug in Montezumaâs Revenge called the âtreasure room curseâ [77]. If the agent performs a speciï¬c sequence of actions, it can remain in the treasure room (the ï¬nal room before being sent to the next level) indeï¬nitely, instead of being automatically
10
(a) Number of rooms found (b) Number of cells found (c) Maximum score in archive
Figure 4: Performance of the exploration phase of Go-Explore with downscaled frames on Montezumaâs Revenge. Lines indicating human and the algorithmic state of the art are for compar- ison, but recall that the Go-Explore scores in this plot are on a deterministic version of the game (unlike the post-Phase 2 scores presented in this section).
5,000 30,000 25,000 4,000 < 3 & 20,000 2 3,000 FS 5 & 15,000 2,000 5 = 10,000 1,000 5,000 0 0 0.0 02 04 06 O08 410 1.2 14 16 0.0 02 04 06 08 10 12 #14 16 Game Frames 1e9 Game Frames 1e9
2
8 oO 2 5 5 a & ="
(a) Failed robustiï¬cation with 1 demonstration
(b) Successful robustiï¬cation with 10 demonstrations
Figure 5: Examples of maximum starting point over training for robustifying using different numbers of demonstrations. Success is achieved as soon as any of the curves gets sufï¬ciently close (e.g. within 50 units) to 0, because that means the agent is able to perform as well as at least one of the demonstrations.
moved to the next level after some time. Because gems giving 1,000 points keep appearing in the treasure room, it is possible to easily achieve very high scores once it has been triggered. Finding bugs in games and simulators, as Go-Explore did, is an interesting reminder of the power and creativity of optimization algorithms [6], and is commercially valuable as a debugging tool to identify and ï¬x such bugs before shipping simulators and video games. A video of the treasure room curse as triggered by Go-Explore is available at https://youtu.be/civ6OOLoR-I.
In 51 out of the 57 runs that solved level 1, the highest-scoring trajectory found by Go-Explore exploited the bug. To prevent scores from being inï¬ated due to this bug, we ï¬ltered out trajectories that triggered the treasure room curse bug when extracting the highest scoring trajectory from each run of Go-Explore for robustiï¬cation (Appendix A.4 provides details).
As mentioned in Section 2.2, we used Salimans & Chenâs Backward Algorithm [28] for robustiï¬cation. However, we found it somewhat unreliable in learning from a single demonstration (Fig. 5a). Indeed, only 40% of our attempts at robustifying trajectories that solved level 1 were successful when using a single demonstration.
However, because Go-Explore can produce many demonstrations, we modiï¬ed the Backward Algo- rithm to simultaneously learn from multiple demonstrations (details in Appendix A.7). To simulate the use case in which Phase 1 is run repeatedly until enough successful demonstrations (in this case 10) are found, we extracted the highest scoring non-bug demonstration from each of the 57 out of
11
Go-Explore ° 40,000 Human Expert 30,000 2 8 20,000 wn PPO-+CoEX DQN-PixelCNN ee 10,000 Reactor RNG Feature-EB Avg. Human DQN-CTS UBE T DDQN 7 MPA SARSA Gorila N, cast CTS | o#* Ape-X . Padre ears so Linear DON âyp.eB * / Pop-Art ESC51 Rainbow Prior. DON Duel. DON BASS-hash 2013 2014 2015 2016 2017 2018 2019 Time of publication
Figure 6: History of progress on Montezumaâs Revenge vs. the version of Go-Explore that does not harness domain knowledge. Go-Explore signiï¬cantly improves on the prior state of the art. These data are presented in tabular form in Appendix A.9.
100 Phase 1 runs that had solved level 1, and randomly assigned them to one of 5 non-overlapping groups of 10 demonstrations (7 demonstrations were left over and ignored), each of which was used for a robustiï¬cation run. When training with 10 demonstration trajectories, all 5 robustiï¬cation runs were successful. Fig. 5b shows an example of successful robustiï¬cation with 10 trajectories.
In the end, our robustiï¬ed policies achieve a mean score of 43,763 (CI: 36,718 â 50,196), substantially higher than the human expert mean of 34,900 [27]. All policies successfully solve level 1 (with a 99.8% success rate over different stochastic evaluations of the policies), and one of our 5 policies also solves level 2 100% of the time. Fig. 6 shows how these results compare with prior work.
Surprisingly, the computational cost of Phase 2 is greater than that of Phase 1. These Phase 2 results were achieved after a mean of 4.35B (CI: 4.27B â 4.45B) game frames of training, which took a mean of 2.4 (CI: 2.4 â 2.5) days of training (details in Appendix A.8).
# 3.1.2 With domain knowledge in the cell representation
On Montezumaâs Revenge, when harnessing domain knowledge in its cell representation (Sec- tion 2.1.1), Phase 1 of Go-Explore ï¬nds a total of 238 (CI: 231 â 245) rooms, solves a mean of 9.1 (CI: 8.8 â 9.4) levels (with every run solving at least 7 levels), and does so in roughly half as many game frames as with the downscaled image cell representation (Fig. 7a). Its scores are also extremely high, with a mean of 148,220 (CI: 144,580 â 151,730) (Fig. 7c). These results are averaged over 50 runs.
As with the downscaled version, Phase 1 of Go-Explore with domain knowledge was still discovering additional rooms, cells, and ever-higher scores linearly when it was stopped (Fig. 7). Indeed, because every level of Montezumaâs Revenge past level 3 is nearly identical to level 3 (except for the scores on the screen and the stochastic timing of events) and because each run had already passed level 3, it would likely continue to ï¬nd new rooms, cells, and higher scores forever.
Domain knowledge runs spend less time exploiting the treasure room bug because we preferentially select cells in the highest level reached so far (Appendix A.5). Doing so encourages exploring new levels instead of exploring the treasure rooms on previous levels to keep exploiting the treasure room bug. The highest ï¬nal scores thus come from trajectories that solved many levels. Because knowing the level number constitutes domain knowledge, non-domain knowledge runs cannot take advantage of this information and are thus affected by the bug more.
12
(a) Number of rooms found (b) Number of cells found (c) Maximum score in archive
Figure 7: Performance on Montezumaâs Revenge of Phase 1 of Go-Explore with and without domain knowledge. The algorithm ï¬nds more rooms, cells, and higher scores with the easily provided domain knowledge, and does so with a better sample complexity. For (b), we plot the number of cells found in the no-domain-knowledge runs according to the more intelligent cell representation from the domain-knowledge run to allow for an equal comparison.
In terms of computational performance, Phase 1 with domain knowledge solves the ï¬rst level after a mean of only 57.6M (CI: 52.7M â 62.3M) game frames, corresponding to 0.9 (CI: 0.8 â 1.0) hours on a single 22-CPU machine. Solving level 3, which effectively means solving the entire game as discussed above, is accomplished in a mean of 173M (CI: 164M â 182M) game frames, corresponding to 6.8 (CI: 6.2 â 7.3) hours. Appendix A.8 provides full performance details.
For robustiï¬cation, we chose trajectories that solve level 3, truncated to the exact point at which level 3 is solved because, as mentioned earlier, all levels beyond level 3 are nearly identical aside from the pixels that display the score, which of course keep changing, and some global counters that change the timing of aspects of the game like when laser beams turn on and off.
We performed 5 robustiï¬cation runs with demonstrations from the Phase 1 experiments above, each of which had a demonstration from each of 10 different Phase 1 runs. All 5 runs succeeded. The resulting mean score is 666,474 (CI: 461,016 â 915,557), far above both the prior state of the art and the non-domain knowledge version of Go-Explore. As with the downscaled frame version, Phase 2 was slower than Phase 1, taking a mean of 4.59B (CI: 3.09B â 5.91B) game frames, corresponding to a mean of 2.6 (CI: 1.8 â 3.3) days of training.
The networks show substantial evidence of generalization to the minor changes in the game beyond level 3: although the trajectories they were trained on only solve level 3, these networks solved a mean of 49.7 levels (CI: 32.6 â 68.8). In many cases, the agents did not die, but were stopped by the maximum limit of 400,000 game frames imposed by default in OpenAI Gym [75]. Removing this limit altogether, our best single run from a robustiï¬ed agent achieved a score of 18,003,200 and solved 1,441 levels during 6,198,985 game frames, corresponding to 28.7 hours of game play (at 60 game frames per second, Atariâs original speed) before losing all its lives. This score is over an order of magnitude higher than the human world record of 1,219,200 [78], thus achieving the strictest deï¬nition of âsuperhumanâ performance. A video of the agent solving the ï¬rst ten levels can be seen here: https://youtu.be/gnGyUPd_4Eo.
Fig. 8 compares the performance of Go-Explore to historical results (including the previous state of the art), the no-domain-knowledge version of Go-Explore, and previous imitation learning work that relied on human demonstrations to solve the game. The version of Go-Explore that harnesses domain knowledge dramatically outperforms them all. Speciï¬cally, Go-Explore produces scores over 9 times greater than those reported for imitation learning from human demonstrations [28] and over 55 times the score reported for the prior state of the art without human demonstrations [39].
That Go-Explore outperforms imitation learning plus human demonstrations is particularly notewor- thy, as human-provided solutions are arguably a much stronger form of domain knowledge than that provided to Go-Explore. We believe that this result is due to the higher quality of demonstrations that Go-Explore was able to produce for Montezumaâs Revenge vs. those provided by humans in the previous imitation learning work. The demonstrations used in our work range in score from 35,200 to 51,900 (lower than the ï¬nal mean score of 148,220 for Phase 1 because these demonstrations are
13
* No Domain Knowledge * Human Demonstration © Domain Knowledge Go-Explore (domain knowledge) 600,000 500,000 400,000 2 & 300,000 a 200,000 Ape-X DQAD. ToC+CMC 100,000 enters \ | LfSD (best) DQN-CTS \ Human Expert ppo DQN-PixelCNN \ G8-Explore SARSA Gorila Nos DOTD ND ee ° .âe ee ge Pa Linear DQN Mp-EB J saccts T apex | PPO+CoEX Duel. DQN _BASS-hash UBE DeepCS 2013 2014 2015 2016 2017 2018 2019 Time of publication
Figure 8: Historical progress on Montezumaâs Revenge vs. the version of Go-Explore that harnesses domain knowledge. With domain knowledge, Go-Explore dramatically outperforms prior work, the no-domain-knowledge version of Go-Explore, and even prior work with imitation learning that was provided the solution in the form of human demonstrations. The data are presented in tabular form in Appendix A.9.
limited to only solving up to level 3) and most importantly, they all solve level 3. The demonstration originally used with the Backward Algorithm [28] reaches a score of 71,500 but doesnât solve level 3, thus preventing it from generalizing to further levels. The demonstrations used in DQfD and Ape-X DQfD [26, 27] only range in score from 32,300 to 34,900. In this last case, it is not clear whether level 3 was solved in any of the demonstrations, but we believe this is unlikely given the reported scores because they are lower than the lowest level-3-solving scores found by Go-Explore and given the fact that the human demonstration used by the Backward Algorithm scored twice as high without solving level 3.
One interesting beneï¬t of a robustiï¬cation phase with an imitation learning algorithm that does not try to mimic the original demonstration is that it can improve upon that demonstration. Because of the discount on future rewards that exists in the base RL algorithm PPO, there is a pressure to remove inefï¬ciencies in the demonstration. Videos of Go-Explore policies reveal efï¬cient movements. In contrast, IM algorithms speciï¬cally reward reaching novel states, meaning that policies produced by them often do seemingly inefï¬cient things like deviating to explore dead ends or jumping often to touch states only accessible by jumping, even though doing so is not necessary to gain real reward. An example of a Deep Curiosity Search agent [37] performing such inefï¬cient jumps can be viewed at https://youtu.be/-Fy2va3IbQU, and a random network distillation [16] IM agent can be viewed at https://youtu.be/40VZeFppDEM. These results suggest that IM algorithms could also beneï¬t from a robustiï¬cation phase in which they focus only on real-game reward once the IM phase has sufï¬ciently explored the state space.
# 3.2 Pitfall
We next test Go-Explore on the harder, more deceptive game of Pitfall, for which all previous RL algorithms scored ⤠0 points, except those that were evaluated on the fully deterministic version of the game [43, 44] or relied on human demonstrations [26, 27, 45]. As with Montezumaâs Revenge, we ï¬rst run Go-Explore with the simple, domain-general, downscaled representation described in Section 2.1.1, with the same hyperparameters. With these settings, Go-Explore is able to ï¬nd 22 rooms, but it is unable to ï¬nd any rewards (Fig. 9). We believe that this number of rooms visited is greater than the previous state of the art, but the number of rooms visited is infrequently reported so we are unsure. In preliminary experiments, Go-Explore with a more ï¬ne-grained downscaling
14
6 200 12,500 60,000 2 Go-Explore 2 5.000 ° 8 150 {no domain 3°" 5 Human Expert 3 knowledge) § 8 40,000 2 Go-Explore < 7,500 2 5 100 â (domain 3S 3 2 knowledge) 5 5,000 * 20,000 50 3 2,500 z } 2 ° 0 âAverage Human 0 1 2 3 4 0 1 2 3) 4 0 1 2 3 4 Game Frames 1e9 Game Frames 19 Game Frames 1e9
(a) Number of rooms found
(b) Number of cells found
(c) Maximum score in archive
Figure 9: Performance on Pitfall of Phase 1 of Go-Explore with and without domain knowledge. Without domain knowledge, the exploration phase ï¬nds about 22 rooms (a), but it then quickly stops ï¬nding new rooms (a) or cells (b) (here, we display discovery of domain-knowledge cells to enable a fair comparison, see Appendix A.10 for progress on the domain-agnostic cell representation), and it doesnât ï¬nd any rewards (c). With domain knowledge, the exploration phase of Go-Explore ï¬nds all 255 rooms (a) and trajectories scoring a mean 70,264 points (c). In addition, even though the number of rooms (a) and the number cells (b) found stagnates after about 2B game frames, score continues to go up for about another billion game frames. This is possible because, in Pitfall, there can exist many different trajectories to the same cell that vary in score. As such, once all reachable cells have been discovered, Go-Explore relies on replacing lower-scoring trajectories with higher-scoring trajectories to increase its score. The ï¬nal score is not the maximum score that can be reached in Pitfall (the maximum score in Pitfall is 112,000), but Go-Explore ï¬nds itself in a local optima where higher scoring trajectories cannot be found starting from any of the trajectories currently in the archive. Lines represent the mean over 10 (without domain knowledge) and 40 (with domain knowledge) independent runs.
procedure (assigning 16 different pixel values to the screen, rather than just 8) is able to ï¬nd up to 30 rooms, but it then runs out of memory (Appendix A.10). Perhaps with a more efï¬cient or distributed computational setup this representation could perform well on the domain, a subject we leave to future work. We did not attempt to robustify any of the trajectories because no positive reward was found.
We believe the downscaled-image cell representation underperforms on Pitfall because the game is partially observable, and frequently contains many importantly different states that appear almost identical (even in the unaltered observation space of the game itself), but require different actions (Appendix A.12). One potential solution to this problem would be to change to a cell representation that takes previous states into account to disambiguate such situations. Doing so is an interesting direction for future work.
Next, we tested Go-Explore with domain knowledge (Section 2.1.1). The cell representation with domain knowledge is not affected by the partial observability of Pitfall because it maintains the room number, which is information that disambiguates the visually identical states (note that we can keep track of the room number from pixel information only by keeping track of all screen transitions that happened along the trajectory). With it, the exploration phase of Go-Explore (Phase 1) is able to visit all 255 rooms and its best trajectories collect a mean of 70,264 (CI: 67,287 â 73,150) points (Fig. 9).
We attempted to robustify the best trajectories, but the full-length trajectories found in the exploration phase did not robustify successfully (Appendix A.11), possibly because different behaviors may be required for states that are visually hard to distinguish (Appendix A.12). Note that the domain- knowledge cell representation does not help in this situation, because the network trained in the robustiï¬cation phase (Phase 2) is not presented with the cell representation from the exploration phase (Phase 1). The network thus has to learn to keep track of past information by itself. Remembering the past is possible, as the network of the agent does include a fully recurrent layer, but it is unclear to what degree this layer stores information from previous rooms, especially because the Backward Algorithm loads the agent at various points in the game without providing the agent with the history of rooms that came before. This can make it difï¬cult for the agent to learn to store information from previous states. As such, robustifying these long trajectories remains a topic for future research.
15
Go-Expl 60,000 =. Domain Knowledge ocePere e No Domain Knowledge 50,000 Expert Human 40,000 wo 5 30,000 i] wn 20,000 10,000 Avg. Human DQN- Prior. DQN DQN-CTS DON rior. Dat DDQN 9 PixelCNN Rainbow Ape-xX IMPALA RND. ooâ 8° AS ° Pop-art âecb 7° Deepcs Duel. DQN A3C-CTS Reactor UBE 2015 2016 2017 2018 2019 Time of publication
Figure 10: Historical progress on Pitfall vs. the version of Go-Explore that harnesses domain knowledge. Go-Explore achieves a mean of over 59,000 points, greatly outperforming the prior state of the art. The data are presented in tabular form in Appendix A.9.
We found that shorter trajectories scoring roughly 35,824 (CI: 34,225 â 37,437) points could be successfully robustiï¬ed. To obtain these shorter trajectories, we truncated all trajectories in the archive produced in Phase 1 to 9,000 training frames (down from the total of 18,000 training frames), and then selected the highest scoring trajectory out of these truncated trajectories. We then further truncated this highest scoring trajectory such that it would end right after the collection of the last obtained reward, to ensure that the Backward Algorithm would always start right before obtaining a reward, resulting in trajectories with a mean length of 8,304 (CI: 8,118 â 8,507) training frames.
From the truncated trajectories, the robustiï¬cation phase (Phase 2) of Go-Explore is able to produce agents that collect 59,494 (CI: 49,042 â 72,721) points (mean over 10 independent runs), substantially outperforming both the prior state of the art and human experts (Fig. 10). These trajectories required a mean of 8.20B (CI: 6.73B â 9.74B) game frames to robustify, which took a mean of 4.5 (CI: 3.7 â 5.3) days. The best rollout of the best robustiï¬ed policy obtained a score of 107,363 points, and a video of this rollout is available at: https://youtu.be/IJMdYOnsDpA.
Interestingly, the mean performance of the robustiï¬ed networks of 59,494 is higher than the maximum performance among the demonstration trajectories of 45,643. This score difference is too large to be the result of small optimizations along the example trajectories (e.g. by avoiding more of the negative rewards in the environment), thus suggesting that, as with Montezumaâs Revenge, these policies are able to generalize well beyond the example trajectories they were provided.
# 4 Discussion and Future Work
Three key principles enable Go-Explore to perform so well on hard-exploration problems: (1) re- member good exploration stepping stones, (2) ï¬rst return to a state, then explore and, (3) ï¬rst solve a problem, then robustify (if necessary).
These principles do not exist in most RL algorithms, but it would be interesting to weave them in. As discussed in Section 1, contemporary RL algorithms do not do follow principle 1, leading to detachment. Number 2 is important because current RL algorithms explore by randomly perturbing the parameters or actions of the current policy in the hope of exploring new areas of the environment, which is ineffective when most changes break or substantially change a policy such that it cannot ï¬rst return to hard-to-reach states before further exploring from them (an issue we call derailment). Go-Explore solves this problem by ï¬rst returning to a state and then exploring from there. Doing so
16
enables deep exploration that can ï¬nd a solution to the problem, which can then be robustiï¬ed to produce a reliable policy (principle number 3).
The idea of preserving and exploring from stepping stones in an archive comes from the quality diversity (QD) family of algorithms (like MAP-elites [60, 79] and novelty search with local competi- tion [80]), and Go-Explore is an enhanced QD algorithm based on MAP-Elites. However, previous QD algorithms focus on exploring the space of behaviors by randomly perturbing the current archive of policies (in effect departing from a stepping stone in policy space rather than in state space), as opposed to explicitly exploring state space by departing to explore anew from precisely where in state space a previous exploration left off. In effect, Go-Explore offers signiï¬cantly more controlled explo- ration of state space than other QD methods by ensuring that the scope of exploration is cumulative through state space as each new exploratory trajectory departs from the endpoint of a previous one.
It is remarkable that the current version of Go-Explore works by taking entirely random actions during exploration (without any neural network) and that it is effective even when applied on a very simple discretization of the state space. Its success despite such surprisingly simplistic exploration strongly suggests that remembering and exploring from good stepping stones is a key to effective exploration, and that doing so even with otherwise naive exploration helps the search more than contemporary deep RL methods for ï¬nding new states and representing those states. Go-Explore might be made even more powerful by combining it with effective, learned representations. It could further beneï¬t from replacing the current random exploration with more intelligent exploration policies, which would allow the efï¬cient reuse of skills required for exploration (e.g. walking). Both of these possible improvements are promising avenues for future work.
Go-Explore also demonstrates how exploration and dealing with environmental stochasticity are problems that can be solved separately by ï¬rst performing exploration in a deterministic environment and then robustifying relevant solutions. The reliance on having access to a deterministic environment may initially seem like a drawback of Go-Explore, but we emphasize that deterministic environments are available in many popular RL domains, including videos games, robotic simulators, or even learned world models. Once a brittle solution is found, or especially a diverse set of brittle solutions, a robust solution can then be produced in simulation. If the ultimate goal is a policy for the real world (e.g. in robotics), one can then use any of the many available techniques for transferring the robust policy from simulation to the real world [59, 60, 81]. In addition, we expect that future work will demonstrate that it is possible to substitute exploiting determinism to return to states with a goal-conditioned policy [62, 63] that learns to deal with stochastic environments from the start (during training). Such an algorithm would still beneï¬t from the ï¬rst two principles of Go-Explore, and possibly the third too, as even a goal-conditioned policy could beneï¬t from additional optimization once the desired goal is known.
A possible objection is that, while this method already works in the high-dimensional domain of Atari-from-pixels, it might not scale to truly high-dimensional domains like simulations of the real world. We believe Go-Explore can be adapted to such high-dimensional domains, but it will likely have to marry a more intelligent cell representation of interestingly different states (e.g. learned, compressed representations of the world) with intelligent (instead of random) exploration. Indeed, the more conï¬ation (mapping more states to the same cell) one does, the more probable it is that one will need intelligent exploration to reach such qualitatively different cells.
Though our current implementation of Go-Explore can handle the deceptive reward structure found in Pitfall, its exploitation of determinism makes it vulnerable to a new form of deception we call the âbusy-highway problem.â Consider an environment in which the agent needs to cross a busy highway. One option is to traverse the highway directly on foot, but that creates so much risk of being hit by a car that no policy could reliably cross this way. A safer alternative would be to take a bridge that goes over the highway, which would constitute a detour, but be guaranteed to succeed. By making the environment deterministic for Phase 1, the current version of Go-Explore would eventually succeed in traversing the highway directly, leading to a much shorter trajectory than by taking the bridge. Thus all the solutions chosen for robustiï¬cation will be ones that involve crossing the highway directly instead of taking the bridge, making robustiï¬cation impossible.
One solution to this issue would be to provide robustiï¬cation with more demonstrations from Phase 1 of Go-Explore (which could include some that take the bridge instead of crossing the highway), or even all of the trajectories it gathers during Phase 1. With this approach, robustiï¬cation would be able to fall back on the bridge trajectories when the highway trajectories fail to robustify. While this
17
approach should help, it may still be the case that so much of the experience gathered by Go-Explore Phase 1 is dependent on trajectories that are impossible to reproduce reliably that learning from these Go-Explore trajectories is less efï¬cient than learning from scratch. How common this class of problem is in practice is an empirical question and an interesting subject for future work. However, we hypothesize that versions of Go-Explore that deal with stochasticity throughout training (e.g. by training goal-conditioned policies to return to states) would not be affected by this issue, as they would not succeed in crossing the highway reliably except by taking the bridge.
One promising area for future work is robotics. Many problems in robotics, such as ï¬guring out the right way to grasp an object, how to open doors, or how to locomote, are hard-exploration problems. Even harder are tasks that require long sequences of actions, such as asking a robot to ï¬nd survivors, clean a house, or get a drink from the refrigerator. Go-Explore could enable a robot to learn how to do these things in simulation. Because conducting learning in the real world is slow and may damage the robot, most robotic work already involves ï¬rst optimizing in a simulator and then transferring the policy to the real world [59â61, 82]. Go-Exploreâs ability to exploit determinism can then be helpful because robotic simulators could be made deterministic for Phase 1 of Go-Explore. The full pipeline could look like the following: (1) Solve the problem in a deterministic simulator via Phase 1 of Go-Explore. (2) Robustify the policy in simulation by adding stochasticity to the simulation via Phase 2 of Go-Explore. (3) Transfer the policies to the real world, optionally adding techniques to help cross the simulation-reality gap [59â61], including optionally further learning via these techniques or any learning algorithm. Of course, this pipeline could also be changed to using a goal-conditioned version of Go-Explore if appropriate. Overall, we are optimistic that Go-Explore may make many previously unsolvable robotics problems solvable, and we are excited to see future research in this area from our group and others.
Interestingly, the Go-Explore algorithm has implications and applications beyond solving sparse- or deceptive-reward problems. The algorithmâs ability to broadly explore the state space can unearth important facets of the domain that go beyond reward, e.g. the distribution of states that contain a particular agent (e.g. a game character or robot) or are near to catastrophic outcomes. For example, within AI safety [5] one open problem is that of safe exploration [83], wherein the process of training an effective real-world policy is constrained by avoiding catastrophe-causing actions during that training. In the robotics setting where Go-Explore is applied in simulation (before attempting transfer to the real world), the algorithm could be driven explicitly to search for diverse simulated catastrophes (in addition to or instead of reward). Such a catastrophe collection could then be leveraged to train agents that act more carefully in the real world, especially while learning [84, 85]. Beyond this example, there are likely many other possibilities for how the data produced by Go-Explore could be productively put to use (e.g. as a source of data for generative models, to create auxiliary objectives for policy training, or for understanding other agents in the environment by inverse reinforcement learning).
# 5 Related Work
Go-Explore is reminiscent of earlier work that separates exploration and exploitation (e.g. Colas et al. [86]), in which exploration follows a reward-agnostic Goal Exploration Process [87] (an algorithm similar to novelty search [7]), from which experience is collected to preï¬ll the replay buffer of an off-policy RL algorithm, in this case DDPG [88]. This algorithm then extracts the highest-rewarding policy from the experience gathered. In contrast, Go-Explore further decomposes exploration into three elements: Accumulate stepping stones (interestingly different states), return to promising stepping stones, and explore from them in search of additional stepping stones (i.e. principles 1 and 2 above). The impressive results Go-Explore achieves by slotting in very simple algorithms for each element shows the value of this decomposition.
The aspect of Go-Explore of ï¬rst ï¬nding a solution and then robustifying around it has precedent in Guided Policy Search [89]. However, this method requires a non-deceptive, non-sparse, differentiable loss function to ï¬nd solutions, meaning it cannot be applied directly to problems where rewards are discrete, sparse, or deceptive, as both Atari and many real-world problems are. Further, Guided Policy Search requires having a differentiable model of the world or learning a set of local models, which to be tractable requires the full state of the system to be observable during training time.
18
More recently, Oh et al. [90] combined A2C with a âSelf-Imitation Learningâ loss on the best trajec- tories found during training. This is reminiscent of Go-Exploreâs robustiï¬cation phase, except for the fact that Self-Imitation Learningâs imitation loss is used throughout learning, while imitation learning is a separate phase in Go-Explore. Self-Imitation Learningâs 2,500 point score on Montezumaâs Revenge was close to the state of the art at the time of its publication.
Another algorithm that is related to the idea of ï¬rst returning before exploring is Bootstrapped DQN [91]. It trains an ensemble of networks that approximate the Q function, but with bootstrapping the data so each network is trained on a different random subset of the data. Each training episode, it picks one of the networks and acts according to the policy it implies. In frequently visited areas of the search space, all of the networks will have lots of data and are likely to converge to the same policy (thus, exploration will be low). However, in rarely visited areas of the state space, the networks would ideally have different Q-value predictions, meaning that in different episodes different choices will be made, yielding exploration. At a high level, the dynamics can thus allow an agent to ï¬rst return to an area of the search space with little exploration before exploring from it. That said, this algorithm will still try to focus on returning to one narrow area of the search space (the one it is currently exploring, see the ï¬ashlight metaphor of IM algorithms in Section 1) before exploring, and thus is still likely to suffer from the issue of detachment described in Section 1. Indeed, empirically Bootstrapped DQN scores only 100 on Montezumaâs Revenge, and detachment may be a large reason why.
Recall Traces [92] also implement the idea of returning to previously discovered states. They do so by running a backtracking model to create virtual trajectories towards states heuristically considered valuable and they include those virtual trajectories during training with the help of an imitation learning loss, thereby increasing the likelihood that these states will be revisited and explored from. Contrary to Go-Explore, Recall Traces do not separate returning to states and exploring from those states, thus the algorithm helps ameliorate detachment, but not derailment. The method improved sample efï¬ciency in several sparse reward domains, but was not tested on Montezumaâs Revenge or Pitfall.
Closely related to the ï¬rst two principles of Go-Explore is the work by Liu et al. [43], which takes a hierarchical reinforcement learning approach in which an abstract MDP is created through the conï¬ation of multiple states into abstract states, which are similar to the cells in Go-Explore. This abstract MDP stores all abstract states (i.e. cells) that it encounters, thus keeping track of promising states to explore from, and it navigates the MDP in a reliable way before exploring from a selected abstract-MDP state, thus implementing the idea of returning before exploring. One difference with Go-Explore is that this algorithm does not use a trajectory of actions to return to a cell, but instead relies on a set of sub-policies, called skills, which are executed in sequence to navigate the abstract MDP. While this set of skills is ï¬exible, in that it allows the same skill to be reused for different transitions, it takes time to train a new skill, potentially making it computationally expensive to explore as deep into the game as Go-Explore does. Another difference is that the algorithm by Liu et al. [43] does not implement a robustiï¬cation phase, but instead relies on the abstract MDP, even at evaluation time. While this means the algorithm does not require any additional training, it also means the algorithm can never improve upon the limits of the constructed MDP. The algorithm from Liu et al. [43], which harnesses domain knowledge, scores 12,500 on Montezumaâs Revenge and 15,000 on Pitfall, though these scores come from evaluation in the deterministic version of the environment (they do provide results on stochastic test environments for a different game: Private Eye). Go-Explore scores substantially more in both Montezumaâs Revenge and Pitfall despite being tested in a stochastic environment and, in the case of Montezumaâs Revenge, even when not relying on domain knowledge.
In a similar vein, Dong et al. [93] maintains an explicit memory of novel states and explores after returning to them via a goal-conditioned policy, though their algorithm only reaches scores of around 1,000 on Montezumaâs Revenge, substantially less than Go-Explore. We speculate that this is due to (1) Its use of a ï¬xed-capacity pool of potential next states to visit, which might not be able to keep up with the large number of possible interestingly different states present in Montezumaâs Revenge, and (2) By determining whether a goal is reached based on a pixel based measure, their goal-conditioned policy could have a hard time learning to return to a previously visited state, as the pixel-based match requires all moving objects, such as enemies, to be in very similar locations before a goal is considered reached. The insights of keeping an archive of known states and exploring to discover new states to add to the archive dates back at least to the E3 algorithm [94], although the E3 authors note that it does not work in high-dimensional problems for which tabular methods are intractable
19
and function approximation (or some form of conï¬ation) is required. Go-Explore can be seen as an E3-like algorithm that adapts some of its principles to high-dimensional domains.
The idea of planning (searching in a deterministic model of the world to ï¬nd a good strategy) and then training a policy to mimic what was learned is reminiscent of Guo et al. [95]. It plans (in the Atari emulator) with UCT [47â49], which is slow, and then trains a much faster policy with supervised learning to imitate the planning algorithm. At ï¬rst glance it seems that in Guo et al. [95] UCT serves a similar role to the exploration phase in Go-Explore, but UCT is quite different in several ways that make it inferior for domains that are either high-dimensional or hard-exploration. That is true even though UCT does have a form of exploration bonus.
UCT plans in a model of the world so as to decide on the next action to take in the real environment. An exploration bonus is used during the planning phase, but only extrinsic rewards are considered when choosing the next action to take. This approach can improve performance in domains with relatively dense rewards, but fails in sparse rewards domains as rewards are likely to be beyond the planning horizon of the algorithm. Once planning what to do from one state is done, an action is taken and the planning process is run again from the next state. UCT does not try to explore all states, and each run of UCT is independent of which states were visited in previous planning steps. As such, UCT (either within an episode, or across episodes) does not try to discover new terrain: instead its exploration bonus only helps it within the current short-horizon planning phase. As mentioned in Section 1, UCT scores 0 on Montezumaâs Revenge and Pitfall [30, 51].
Another approach to planning is Fractal Monte Carlo (FMC) [96]. When choosing the next action, it takes into account both the expected reward and novelty of that action, and in that way is more similar to Go-Explore. In FMC, a planning process is initiated from each state the agent visits. Planning is done within a deterministic version of the game emulator. A ï¬xed number of workers are started in the state from which planning is occurring, and they perform random walks in state space. Periodically, workers that have accumulated lower reward and/or are in less novel states are replaced by âclonesâ of more successful workers. Novelty is approximated as the Euclidean distance of the workerâs state (in the original, raw, observation space) to that of a randomly selected other worker.
FMC reaches a score of 5,600 on Montezumaâs Revenge, substantially higher than UCT. We believe this increased performance is due to at least three factors: (1) its planning process puts more emphasis on depth than breadth due to its ï¬nite amount of workers as opposed to the exponential branching factor that UCT needs to handle; (2) it favors novel states within a planning iteration, so actions that lead to hard-to-reach states such as jumping an enemy are more likely to be chosen; (3) having an exploration bonus based on Euclidean distance is more informative than UCTâs exact-match state bonus, because more distant states are recognized as being more novel than states that differ by, say, one pixel. One major reason we believe FMC performs worse than Go-Explore is because, like UCT, it restarts its planning process from scratch each time an action is taken. That means it can cycle indeï¬nitely between the same few states, because it does not have a means over time of remembering which states it has visited in order to attempt to explore all states, and instead must rely on random chance to break out of cycling. This phenomenon is apparent when watching its agent play: https://youtu.be/FgaXa0uCBR4. Although its greater focus on depth rather than breadth versus UCT extends its planning horizon enough to reach the ï¬rst few rewards available in Montezumaâs Revenge, that seemingly was insufï¬cient for it to reach the even sparser rewards found later in the game that are easily found by Go-Explore.
On Pitfall, SOORL [44] was the ï¬rst planning algorithm to achieve a non-zero score, but did so in a deterministic test environment. It does so through a combination of learning a model of the environment, domain knowledge, and a value function that is optimistic about the value of unseen states, thus effectively providing an exploration bonus. At the end of 50 episodes of training, which was the maximum reported number of episodes, SOORL achieves an average of about 200 points across runs, and its best run scored an average of 606.6 with a maximum of 4,000.
Another way to view Phase 1 of Go-Explore is as being similar to a graph-search algorithm over nodes that are made up of the conï¬ated states, and with unknown edges between the different nodes, meaning that nodes can never fully be marked as âclosedâ. Speciï¬cally, the algorithm has to empirically discover the existence of an edge between two nodes, for example by executing a sequence of random actions that leads from one node to another node, and, as a result, it is never clear whether a node is closed because it is always possible that additional edges from this node exist, but that they have not been discovered yet. Prioritizing which nodes to explore by assigning a weight
20
to them is reminiscent of graph-search algorithms such as Dijkstraâs algorithm [97] and A* [98]. Graph-search algorithms as a means of exploration in planning have been investigated in algorithms such as Rapidly-exploring Random Trees (RRTs) [99], which were recently used to explore Atari games by Zhan et al. [100]. Indeed, Go-Explore exhibits important similarities with RRTs as they both keep track of an archive of states and trajectories to those states. However, there are some crucial differences, including: (1) RRTs proceed by ï¬rst sampling a goal to attempt to reach, which can be impractical in environments where reachable states are not known a priori (and which is particularly pernicious in high-dimensional state spaces, such as pixels or even learned encodings, where most randomly selected goals are unreachable), such as Atari, and (2) RRTs do not have the concept of âcellsâ present in Go-Explore and thus RRTs can add many very similar states to their archive that do little to help the algorithm reach meaningfully different unexplored areas of the search space. In general, we believe that Go-Explore points to an interesting future research direction in adapting the principles behind graph-search algorithms to high dimensional state spaces.
Even more distantly related are the many variants of intrinsically motivated model-free reinforcement learning algorithms. The relation between Go-Explore and these algorithms is discussed in Section 1 and many speciï¬c algorithms are included in our comparison in Appendix A.9, as they account for most of the high-scoring work on Montezumaâs Revenge prior to Go-Explore.
# 6 Conclusion
Go-Explore represents an exciting new family of algorithms for solving hard-exploration reinforce- ment learning problems, meaning those with sparse and/or deceptive rewards. It opens up a large number of new research directions beyond the simple version described in this paper, including experimenting with different archives, different methods for choosing which cells to return to, dif- ferent cell representations, different exploration methods, and different robustiï¬cation methods. We expect Go-Explore will accelerate progress in a variety of challenging domains such as robotics. It will also be interesting to see not only the domains in which it excels, but also those in which it fails. Go-Explore thus opens a new playground of possibilities for future research, and we hope the community will join us in investigating this new terrain.
# Acknowledgments
We thank the following for helpful discussions on the Go-Explore algorithm and the ideas behind it: Peter Dayan, Zoubin Ghahramani, Shimon Whiteson, Juergen Schmidhuber, Ian Osband, and Kevin Clune. We also appreciate input from all of the members of Uber AI Labs, especially Vashisht Madhavan, Felipe Petroski Such, John Sears, and Thomas Miconi. We are also deeply appreciative of the machine learning community at large for providing feedback that reï¬ned our thinking and exposition of Go-Explore, including all of those that provided commentary on Reddit, Twitter, and via other online mediums such as blog posts about our work. Finally, we are grateful to Leon Rosenshein, Joel Snow, Thaxton Beesley, the Colorado Data Center team and the entire OpusStack Team at Uber for providing our computing platform and for technical support.
# References
[1] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484â489, 2016.
[2] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, L Robert Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy P. Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of go without human knowledge. Nature, 550:354â359, 2017.
[3] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 2015.
[4] Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In NIPS, pages 1471â1479, 2016.
21
[5] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul F. Christiano, John Schulman, and Dan Mané. Concrete problems in ai safety. CoRR, abs/1606.06565, 2016.
[6] Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Julie Beaulieu, Peter J. Bentley, Samuel Bernard, Guillaume Beslon, David M. Bryson, Patryk Chrabaszcz, Nick Cheney, Antoine Cully, Stéphane Doncieux, Fred C. Dyer, Kai Olav Ellefsen, Robert Feldt, Stephan Fischer, Stephanie Forrest, Antoine Frénoy, Christian Gagné, Leni K. Le Goff, Laura M. Grabowski, Babak Hodjat, Frank Hutter, Laurent Keller, Carole Knibbe, Peter Krcah, Richard E. Lenski, Hod Lipson, Robert B MacCurdy, Carlos Maestre, Risto Miikkulainen, Sara Mitri, David E. Moriarty, Jean-Baptiste Mouret, Anh Tuan Le Nguyen, Charles Ofria, Marc Parizeau, David P. Parsons, Robert T. Pennock, William F. Punch, Thomas S. Ray, Marc Schoenauer, Eric Shulte, Karl Sims, Kenneth O. Stanley, François Taddei, Danesh Tarapore, Simon Thibault, Westley Weimer, Richard Watson, and Jason Yosinksi. The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artiï¬cial life research communities. CoRR, abs/1803.03453, 2018.
[7] Joel Lehman and Kenneth O. Stanley. Novelty search and the problem with objectives. In Genetic Programming Theory and Practice IX (GPTP 2011), 2011.
[8] Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. Back to basics: Benchmarking canonical evolution strategies for playing atari. In IJCAI, 2018.
[9] Jürgen Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural controllers. In Proc. of the international conference on simulation of adaptive behavior: From animals to animats, pages 222â227, 1991.
[10] Pierre-Yves Oudeyer and Frederic Kaplan. What is intrinsic motivation? a typology of computational approaches. Frontiers in Neurorobotics, 1:6, 2009.
[11] Andrew G Barto. Intrinsic motivation and reinforcement learning. In Intrinsically motivated learning in natural and artiï¬cial systems, pages 17â47. Springer, 2013.
[12] Jürgen Schmidhuber. Developmental robotics, optimal artiï¬cial curiosity, creativity, music, and the ï¬ne arts. Connect. Sci., 18:173â187, 2006.
[13] Jürgen Schmidhuber. Curious model-building control systems. In Neural Networks, 1991. 1991 IEEE International Joint Conference on, pages 1458â1463. IEEE, 1991.
[14] Edoardo Conti, Vashisht Madhavan, Felipe Petroski Such, Joel Lehman, Kenneth O. Stanley, and Jeff Clune. Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents. arXiv preprint arXiv:1712.06560, 2017.
[15] Joshua Achiam and S. Shankar Sastry. Surprise-based intrinsic motivation for deep reinforcement learning. CoRR, abs/1703.01732, 2017.
[16] Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random network distillation. arXiv preprint arXiv:1810.12894, 2018.
[17] Roby Velez and Jeff Clune. Diffusion-based neuromodulation can eliminate catastrophic forgetting in simple neural networks. PloS one, 12(11):e0187736, 2017.
[18] Kai Olav Ellefsen, Jean-Baptiste Mouret, Jeff Clune, and Josh C Bongard. Neural modularity helps organisms evolve to learn new skills without forgetting old skills. PLoS Comput Biol, 11(4):e1004128, 2015.
[19] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, page 201611835, 2017.
[20] Robert M French. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences, 3(4): 128â135, 1999.
[21] Shangtong Zhang and Richard S. Sutton. A deeper look at experience replay. CoRR, abs/1712.01275, 2017.
[22] Ruishan Liu and James Zou. The effects of memory replay in reinforcement learning. CoRR, abs/1710.06574, 2017.
[23] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. Bradford, 1998.
22
[24] Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor, Richard Y Chen, Xi Chen, Tamim Asfour, Pieter Abbeel, and Marcin Andrychowicz. Parameter space noise for exploration. arXiv preprint arXiv:1706.01905, 2017.
[25] Thomas RückstieÃ, Martin Felder, and Jürgen Schmidhuber. State-dependent exploration for policy gradient methods. In ECML/PKDD, 2008.
[26] Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Ian Osband, Gabriel Dulac-Arnold, John Agapiou, Joel Z. Leibo, and Audrunas Gruslys. Deep q-learning from demonstrations. In AAAI, 2018.
[27] Tobias Pohlen, Bilal Piot, Todd Hester, Mohammad Gheshlaghi Azar, Dan Horgan, David Budden, Gabriel Barth-Maron, Hado van Hasselt, John Quan, Mel VeËcerÃk, et al. Observe and look further: Achieving consistent performance on atari. arXiv preprint arXiv:1805.11593, 2018.
[28] Tim Salimans and Richard Chen. Learning montezumaâs revenge from a single demonstration. arXiv preprint arXiv:1812.03381, 2018.
[29] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In NIPS, 2016.
[30] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. J. Artif. Intell. Res.(JAIR), 47:253â279, 2013.
[31] Marlos C. Machado, Marc G. Bellemare, Erik Talvitie, Joel Veness, Matthew J. Hausknecht, and Michael H. Bowling. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. J. Artif. Intell. Res., 61:523â562, 2018.
[32] Adrià Garriga Alonso. Solving montezumaâs revenge with planning and reinforcement learning, 2017.
[33] Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, OpenAI Xi Chen, Yan Duan, John Schul- man, Filip DeTurck, and Pieter Abbeel. # exploration: A study of count-based exploration for deep reinforcement learning. In NIPS, pages 2750â2759, 2017.
[34] Audrunas Gruslys, Mohammad Gheshlaghi Azar, Marc G Bellemare, and Remi Munos. The reactor: A sample-efï¬cient actor-critic architecture. arXiv preprint arXiv:1704.04651, 2017.
[35] Jarryd Martin, Suraj Narayanan Sasikumar, Tom Everitt, and Marcus Hutter. Count-based exploration in feature space for reinforcement learning. In IJCAI, 2017.
[36] Georg Ostrovski, Marc G. Bellemare, Aäron van den Oord, and Rémi Munos. Count-based exploration with neural density models. In ICML, 2017.
[37] Christopher Stanton and Jeff Clune. Deep curiosity search: Intra-life exploration improves performance on challenging deep reinforcement learning problems. CoRR, abs/1806.00553, 2018.
[38] Brendan OâDonoghue, Ian Osband, Rémi Munos, and Volodymyr Mnih. The uncertainty bellman equation and exploration. In ICML, 2018.
[39] Jongwook Choi, Yijie Guo, Marcin Moczulski, Junhyuk Oh, Neal Wu, Mohammad Norouzi, and Honglak Lee. Contingency-aware exploration in reinforcement learning. CoRR, abs/1811.01483, 2018.
[40] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML, pages 1928â1937, 2016.
[41] Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado van Hasselt, and David Silver. Distributed prioritized experience replay. CoRR, abs/1803.00933, 2018.
[42] Lasse Espeholt, Hubert Soyer, Rémi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. In ICML, 2018.
[43] Evan Zheran Liu, Ramtin Keramati, Sudarshan Seshadri, Kelvin Guu, Panupong Pasupat, Emma Brunskill, and Percy Liang. Learning abstract models for long-horizon exploration, 2019. URL https://openreview.net/forum?id=ryxLG2RcYX.
[44] Ramtin Keramati, Jay Whang, Patrick Cho, and Emma Brunskill. Fast exploration with simpliï¬ed models and approximately optimistic planning in model based reinforcement learning, 2019. URL https://openreview.net/forum?id=HygS7n0cFQ.
23
[45] Yusuf Aytar, Tobias Pfaff, David Budden, Tom Le Paine, Ziyu Wang, and Nando de Freitas. Playing hard exploration games by watching youtube. arXiv preprint arXiv:1805.11592, 2018.
[46] Felipe Petroski Such, Vashisht Madhavan, Rosanne Liu, Rui Wang, Pablo Samuel Castro, Yulun Li, Ludwig Schubert, Marc Bellemare, Jeff Clune, and Joel Lehman. An atari model zoo for analyzing, visualizing, and comparing deep reinforcement learning agents. arXiv preprint arXiv:1812.07069, 2018.
[47] Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning. In ECML, 2006.
[48] Levente Kocsis, Csaba Szepesvári, and Jan Willemson. Improved monte-carlo search. Univ. Tartu, Estonia, Tech. Rep, 1, 2006.
[49] Cameron Browne, Edward Jack Powley, Daniel Whitehouse, Simon M. Lucas, Peter I. Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez Liebana, Spyridon Samothrakis, and Simon Colton. A survey of monte carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4:1â43, 2012.
[50] Guillaume Chaslot, Sander Bakkes, István Szita, and Pieter Spronck. Monte-carlo tree search: A new framework for game ai. In AIIDE, 2008.
[51] Nir Lipovetzky, Miquel RamÃrez, and Hector Geffner. Classical planning with simulators: Results on the atari video games. In IJCAI, 2015.
[52] David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac- Arnold, David P. Reichert, Neil C. Rabinowitz, André Barreto, and Thomas Degris. The predictron: End-to-end learning and planning. In ICML, 2017.
[53] Sascha Lange and Martin Riedmiller. Deep auto-encoder neural networks in reinforcement learning. In The International Joint Conference on Neural Networks, pages 1â8. IEEE, 2010.
[54] Max Jaderberg, Volodymyr Mnih, Wojciech Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. CoRR, abs/1611.05397, 2016.
[55] Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O Stanley, and Jeff Clune. Montezumaâs revenge solved by go-explore, a new algorithm for hard-exploration problems (sets records on pitfall, too). Uber Engineering Blog, Nov 2018. URL http://eng.uber.com/go-explore.
[56] Rowan McAllister, Gregory Kahn, Jeff Clune, and Sergey Levine. Robustness to out-of-distribution inputs via task-aware generative uncertainty. arXiv preprint arXiv:1812.10687, 2018.
[57] Gregory Kahn, Adam Villaï¬or, Vitchyr Pong, Pieter Abbeel, and Sergey Levine. Uncertainty-aware reinforcement learning for collision avoidance. arXiv preprint arXiv:1702.01182, 2017.
[58] Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. The Behavioral and brain sciences, 40:e253, 2017.
[59] Sylvain Koos, Jean-Baptiste Mouret, and Stéphane Doncieux. The transferability approach: Crossing the reality gap in evolutionary robotics. IEEE Transactions on Evolutionary Computation, 17(1):122â145, 2013.
[60] A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret. Robots that can adapt like animals. Nature, 521: 503â507, 2015. doi: 10.1038/nature14422.
[61] Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jozefowicz, Bob McGrew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, et al. Learning dexterous in-hand manipulation. arXiv preprint arXiv:1808.00177, 2018.
[62] Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems, pages 5048â5058, 2017.
[63] Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In International Conference on Machine Learning, pages 1312â1320, 2015.
[64] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017.
[65] Cinjon Resnick, Roberta Raileanu, Sanyam Kapoor, Alex Peysakhovich, Kyunghyun Cho, and Joan Bruna. Backplay: "man muss immer umkehren". CoRR, abs/1807.06919, 2018.
24
[66] Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O. Stanley, and Jeff Clune. Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. arXiv preprint arXiv:1712.06567, 2017.
[67] Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In AAAI, volume 2, page 5. Phoenix, AZ, 2016.
[68] Ziyu Wang, Nando de Freitas, and Marc Lanctot. Dueling network architectures for deep reinforcement learning. In ICML, 2016.
[69] Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
[70] Hado P van Hasselt, Arthur Guez, Matteo Hessel, Volodymyr Mnih, and David Silver. Learning values across many orders of magnitude. In Advances in Neural Information Processing Systems, pages 4287â 4295, 2016.
[71] Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.
[72] Marc G. Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In ICML, 2017.
[73] Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Gheshlaghi Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. In AAAI, 2018.
[74] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, et al. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015.
[75] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym, 2016.
[76] A. M. Zoubir and D. Robert Iskander. Bootstrap methods and applications. IEEE Signal Processing Magazine, 24:10â19, 2007.
[77] Atari vcs/2600 easter egg list, 2018. URL http://www.ataricompendium.com/game_library/ easter_eggs/vcs/easter_eggs.html.
[78] Atari vcs/2600 scoreboard, Dec 2018. URL http://www.ataricompendium.com/game_library/ high_scores/high_scores.html.
[79] Jean-Baptiste Mouret and Jeff Clune. Illuminating search spaces by mapping elites. ArXiv e-prints, abs/1504.04909, 2015. URL http://arxiv.org/abs/1504.04909.
[80] Joel Lehman and Kenneth O. Stanley. Evolving a diversity of virtual creatures through novelty search and local competition. In GECCO â11: Proceedings of the 13th annual conference on Genetic and evolutionary computation, pages 211â218, 2011.
[81] Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on, pages 23â30. IEEE, 2017.
[82] S. Lee, J. Yosinski, K. Glette, H. Lipson, and J. Clune. Evolving gaits for physical robots with the hyperneat generative encoding: the beneï¬ts of simulation. In Applications of Evolutionary Computing. Springer, 2013.
[83] Javier Garcıa and Fernando Fernández. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437â1480, 2015.
[84] Zachary C Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, and Li Deng. Combating reinforcement learningâs sisyphean curse with intrinsic fear. arXiv preprint arXiv:1611.01211, 2016.
[85] William Saunders, Girish Sastry, Andreas Stuhlmueller, and Owain Evans. Trial without error: Towards safe reinforcement learning via human intervention. In Proceedings of the 17th International Confer- ence on Autonomous Agents and MultiAgent Systems, pages 2067â2069. International Foundation for Autonomous Agents and Multiagent Systems, 2018.
25
[86] Cédric Colas, Olivier Sigaud, and Pierre-Yves Oudeyer. GEP-PG: Decoupling Exploration and Exploita- tion in Deep Reinforcement Learning Algorithms. In International Conference on Machine Learning (ICML), Stockholm, Sweden, July 2018. URL https://hal.inria.fr/hal-01890151.
[87] Sébastien Forestier, Yoan Mollard, and Pierre-Yves Oudeyer. Intrinsically motivated goal exploration processes with automatic curriculum learning. arXiv preprint arXiv:1708.02190, 2017.
[88] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015.
[89] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334â1373, 2016.
[90] Junhyuk Oh, Yijie Guo, Satinder Singh, and Honglak Lee. Self-imitation learning. In ICML, 2018.
[91] Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via boot- strapped dqn. In NIPS, 2016.
[92] Anirudh Goyal, Philemon Brakel, William Fedus, Timothy P. Lillicrap, Sergey Levine, Hugo Larochelle, and Yoshua Bengio. Recall traces: Backtracking models for efï¬cient reinforcement learning. CoRR, abs/1804.00379, 2018.
[93] Honghua Dong, Jiayuan Mao, Xinyue Cui, and Lihong Li. Explicit recall for efï¬cient exploration, 2019. URL https://openreview.net/forum?id=B1GIB3A9YX.
[94] Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine learning, 49(2-3):209â232, 2002.
[95] Xiaoxiao Guo, Satinder P. Singh, Honglak Lee, Richard L. Lewis, and Xiaoshi Wang. Deep learning for real-time atari game play using ofï¬ine monte-carlo tree search planning. In NIPS, 2014.
[96] Sergio Hernandez Cerezo and Guillem Duran Ballester. Fractal ai: A fragile theory of intelligence. CoRR, abs/1803.05049, 2018.
[97] E. W. Dijkstra. A note on two problems in connexion with graphs. Numer. Math., 1(1):269â271, December 1959. ISSN 0029-599X. doi: 10.1007/BF01386390. URL http://dx.doi.org/10.1007/ BF01386390.
[98] P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100â107, July 1968. ISSN 0536-1567. doi: 10.1109/TSSC.1968.300136.
[99] Steven M. Lavalle. Rapidly-exploring random trees: A new tool for path planning. Technical report, Iowa State University, 1998.
[100] Zeping Zhan, Batu Aytemiz, and Adam M Smith. Taking the scenic route: Automatic exploration for videogames. arXiv preprint arXiv:1812.03125, 2018.
[101] David E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA, 1989.
[102] Adam Lipowski and Dorota Lipowska. Roulette-wheel selection via stochastic acceptance. CoRR, abs/1109.3627, 2011.
[103] Marc G. Bellemare, Joel Veness, and Michael H. Bowling. Investigating contingency awareness using atari 2600 games. In AAAI, 2012.
[104] Bradly C Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.
[105] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pages 1928â1937, 2016.
26
# A Appendix
# A.1 The meaning of âframesâ
It is common practice to introduce âframe skippingâ during training in the Atari domain, so that the agent only selects actions every k frames instead of every single frame (the action then persists across k frames). The most common value of k is 4, and both our exploration and robustiï¬cation phases were implemented with frame skipping with k = 4.
Following the recommendations in Petroski Such et al. [66], we call the total number of frames produced by the underlying emulator âgame framesâ and the number of frames seen and acted on by the agent during training âtraining frames.â It can sometimes be difï¬cult to know whether a reported number of frames corresponds to training frames or game frames, and the difference can be signiï¬cant because the number of game frames is usually 4 times the number of training frames. In this work, frame counts are always reported as game frames, as recommended by Petroski Such et al. [66] and Machado et al. [31]. Further, we always qualify the word âframeâ with either âtrainingâ or âgame.â This clariï¬cation is particularly important for the rare cases in which we are indeed referring to training frames and not game frames, such as in Section 2.1.4, where we mention that in the exploration phase, actions are repeated with 95% probability each training frame.
# A.2 Episode end
In the case of Montezumaâs Revenge, the end of an episode is deï¬ned as a loss of life, while in the case of Pitfall it is the game-over signal. Both deï¬nitions of the end of an episode appear in the literature [31], and our use of differing approaches in Montezumaâs Revenge and Pitfall was due to the greater difï¬culty of tracking room location based on pixels in Montezumaâs Revenge if the character is allowed to lose lives (a difï¬culty which does not exist in Pitfall). Additionally, death in Pitfall grants the agent additional affordances, which is not the case in Montezumaâs Revenge. These factors are further explained in Appendix A.3 below.
# A.3 Extraction of domain knowledge features from pixels
Phase 1 of Go-Explore used the following domain knowledge features: the x, y position of the agent, the current room, the current level and the rooms in which the currently held keys were found (these last two only apply to Montezumaâs Revenge). Although these features can be found in RAM, they were extracted from pixels in our implementation for two reasons: (1) extracting information from pixels is more similar to how a real world environment would be tackled and ensures we do not exploit any non-visible information that might be stored in the RAM, and (2) we found that extracting values from the RAM could be unreliable at times: in Montezumaâs Revenge, when the character moves into a new room, a black transition image is shown for a few frames. The current room and current x, y position are updated at different times during these transition frames, so that reading these values from RAM would give a room number and x, y position that are inconsistent.
The location of the agent could be extracted by training a simple classiï¬er, or in an unsupervised way through contingency-awareness [39], but it turns out that, in both Montezumaâs Revenge and Pitfall, some pixel values only occur in the character sprite, making it trivial to identify the character location by searching for these values in the current frame. Coincidentally, searching for pixels with a red channel value of 228 is enough to ï¬nd the character in both Montezumaâs Revenge and Pitfall.
Room changes are identiï¬ed by detecting sudden changes in x, y position: if the character was located at the far right of the screen and is now located at the far left, it likely moved to a room on the right of the current room. In the case of Pitfall, additional domain knowledge is required: underground transitions move the players 3 rooms at a time instead of just 1, and the map wraps around so that the last room is situated to the left of the ï¬rst room. In Montezumaâs Revenge, knowledge of the map is not strictly necessary for room tracking, as the room transition rules are simple, but it is necessary for level tracking: any transition away from the treasure room is an increase in level.
Loss of life needs to be taken into account when tracking room changes: in Montezumaâs Revenge, losing a life causes the character to be brought back to the exact location where it entered the room, so that if the character entered the room from the left and dies on the right of the room, the sudden change of x value due to the character reviving on the left side of the room could be mistaken for a
27
room change. Handling this behavior is possible, but we believe unnecessarily complicated for our purposes. For this reason, we end episodes on loss of life in Montezumaâs Revenge. By contrast, in Pitfall, the character is brought to a ï¬xed location on the left side of the screen that cannot be confused with a room change, so that there is no need to end the episode on life loss to simplify room tracking. Further, while losing a life is a strict waste of time in Montezumaâs Revenge since it brings the agent back to a previously seen location, in Pitfall it can be used as a form of teleportation: if an agent enters a room from the right and loses a life soon after, it will be teleported all the way to the left of the room, thus skipping the hazards that may be in the middle. For this reason, we did not choose to end episodes on life loss in Pitfall.
Finally, key tracking in Montezumaâs Revenge is done simply by pattern-matching for keys in the section of the screen that shows the current inventory, and tracking the room number associated to any increase in the current number of keys.
# A.4 Filtering out bug trajectories
As mentioned in Section 3.1, we ï¬ltered out trajectories that triggered the treasure room bug when robustifying Montezumaâs Revenge without domain knowledge. Such ï¬ltering was not necessary when using domain knowledge because none of the highest scoring trajectories triggered the bug, as explained in Section 3.1.2.
The ï¬ltering of bug trajectories was done by excluding all trajectories whose level was lower than the maximum level in the archive. That works because the bug makes it impossible to leave the treasure room and advance to the next level, so any trajectory that makes it to a new level did not trigger the bug in the previous level.
# A.5 Cell selection details
As mentioned in Section 2.1.2, cells are selected at each iteration by ï¬rst assigning them a score, which is then normalized across all cells in the archive, yielding the probability of each cell being selected. The score of a cell is the sum of separate subscores, which we now describe.
One important set of such subscores, called the count subscores, are computed from attributes that represent the number of times a cell was interacted with in different ways. Speciï¬cally: the number of times a cell has already been chosen (i.e. selected as a cell to explore from), the number of times a cell was visited at any point during the exploration phase, and the number of times a cell has been chosen since exploration from it last produced the discovery of a new or better cell. In the case of each of these attributes, a lower count likely indicates a more promising cell to explore from (e.g. a cell that has been chosen more times already is less likely to lead to new cells than a cell that has been chosen fewer times). The count subscore for each of these attributes is given by:
> . _ 1 Pa n CntScore(c,a) = Wa + (= a+ =) + Eg (1)
Here c is the cell for which we are calculating the score, v(c, a) is a function that returns the value of attribute a for cell c, wa is the weight hyperparameter for attribute a, and pa is the power hyperparameter for attribute a. ε1 helps prevent division by 0 and determines the relative weight of cells for which a given value is 0. ε2 helps guarantee that no cell ever has a 0 probability of being chosen. In our implementation, ε1 = 0.001 and ε2 = 0.00001, which we chose after preliminary experiments showed that they worked well.
When cell representations are informed by domain knowledge (Section 3.1.2), giving us the x, y position of the agent, it is possible to determine the possible neighbors of given cells, and whether these neighbors are already present in the archive. For those cases, we deï¬ne a set of neighbor subscores. Each neighbor subscore is deï¬ned as wn if neighbor n does not exist in the archive, and is 0 otherwise. The motivation behind these neighbor subscores is that cells that are lacking neighbors are likely at the edge of the current frontier of knowledge and are thus more likely to yield new cells. We consider 3 types of neighbors: vertical (2 neighbors), horizontal (2 neighbors), and (in the case of Montezumaâs Revenge) cells that are in the same level, room and x, y position, but are holding a larger number of keys (the intuition is that if a cell lacks a âmore keysâ neighbor, then it is the cell that is most capable of opening doors from its location). Neighbors of the same type share the
28
same value for wn (Table 2). These deï¬nitions result in the following neighbor subscore, assuming a function called HasN eighbor(c, n) which returns 1 if neighbor n of cell c is present in the archive, and which returns 0 otherwise:
N eighScore(c, n) = wn · (1 â HasN eighbor(c, n))
In cases without domain knowledge, it is unclear what exactly would constitute a cellâs neighbor, and so N eighScore is deï¬ned as 0 in this case in our experiments.
Finally, in the case of Montezumaâs Revenge with domain knowledge, cells are exponentially downweighted based on the distance to the maximum level currently reached, thereby favoring progress in the furthest level reached, while still keeping open the possibility of improving previous levelsâ trajectories:
LevelW eight(c) = 0.1M axLevelâLevel(c) (3)
In the case of Pitfall (where no notion of level exists) and Montezumaâs Revenge without domain knowledge (where we do not know what the level of a given cell is), LevelW eight is always 1.
The ï¬nal cell score is then computed as follows:
n CellScore(c) = LevelWeight(c) - (= NeighScore(c, ») + (= CntScore(c, ») + ] ,
Note that CellScore(c) > 0 for all cells c. The cell selection probability is given by:
Cell Score(c) Cell Prob(c) = Yo CellScore(câ) (5)
Hyperparameters (the different values of wa, pa and wn) were found through separate grid searches on each game (Montezumaâs Revenge and Pitfall) and for each treatment (with or without domain knowledge). Detailed hyperparameter tables are found in Appendix A.6 below.
# A.6 Phase 1 hyperparameters
Hyperparameter values were found through grid search. Here, the power hyperparameter pa (see Section A.5) found by grid search turned out to be 0.5 for all attributes a in every experiment, so these are excluded from the tables for conciseness.
The âcount-basedâ attributes are as follows: âTimes chosenâ is the number of times a cell was selected from the archive so far, âTimes chosen since newâ is the number of times the cell was selected from the archive since last time it led to a new cell being found or to a cell being improved, and âTimes seenâ is the number of times the cell was seen during exploration, regardless of whether it was chosen.
Table 1 shows the hyperparameters for the runs with downsampled frames, i.e. the ones without domain knowledge, and Table 2 shows the hyperparameters for the runs with domain knowledge.
Batch size Downsampled width Downsampled height Downsampled pixel range Times chosen weight (wa=Chosen) Times chosen since new weight (wa=ChosenSinceN ew) Times seen weight (wa=Seen) Montezumaâs Revenge 100 11 8 0-8 0.1 0 0.3 Pitfall 1,000 11 8 0-8 1 1 0
# Table 1: Hyperparameter values for Montezumaâs Revenge and Pitfall without domain knowl- edge.
29
(2)
(4)
Batch size Cell size Times chosen weight (wa=Chosen) Times chosen since new weight (wa=ChosenSinceN ew) Times seen weight (wa=Seen) Missing neighbor: horizontal (wnâHorizontal) Missing neighbor: vertical (wnâV ertical) Missing neighbor: more keys (wnâM oreKeys) Montezumaâs Revenge 1,000 16 Ã 16 0 0 0 0.3 0.1 10 Pitfall 1,000 16 Ã 16 1 0.5 0 1 0 N/A
Table 2: Hyperparameter values for Montezumaâs Revenge and Pitfall with domain knowledge. Note: the Atari emulator resolution is 160 Ã 210, which results in "tall" frames. However, Atari games were meant to be displayed with wide pixels, resulting in frames wider than they are tall. The common way to achieve this effect is to duplicate pixels horizontally, resulting in a 320 Ã 210 frame. We divide the frame into a 16 Ã 16 grid after the frame is adjusted to 320 Ã 210, so that in the original frame space our cells would be 8 Ã 16.
# A.7 Modiï¬cations to the Backward Algorithm
# A.7.1 Multiple demonstrations
As mentioned, we modiï¬ed the Backward Algorithm to robustify with multiple demonstrations, 10 in the case of Montezumaâs Revenge. For Pitfall with domain knowledge (we did not robustify any trajectories without domain knowledge) and with the truncated trajectories (Section 3.2), we robustiï¬ed with 4 demonstrations. We did not robustify the long Pitfall trajectories with multiple demonstrations. While doing so is expected to improve robustiï¬cation performance, it is unclear whether multiple demonstrations would enable successful robustiï¬cation of the full-length Pitfall runs, and we leave this question for future work.
Handling multiple demonstrations in the Backwards Algorithm was implemented by choosing a demonstration uniformly at random each time the Backward Algorithm selects a demonstra- tion state from which to start a rollout. Demonstration-speciï¬c information such as the current max_starting_point (the latest frame in the demonstration that the Backward Algorithm will start from) and success rates (the proportion of runs starting from a given starting point that performed at least as well as the demonstration) were tracked separately for each demonstration (see Salimans and Chen [28] for details on the various attributes used by the Backward Algorithm).
# A.7.2 Modiï¬ed hyperparameters
For robustiï¬cation, we kept the default hyperparameters given by Salimans and Chen [28], with the following exceptions: we added random no-ops at the beginning of the trajectory when the starting point was equal to 0 and we also added sticky actions throughout learning (unless otherwise speciï¬ed). In addition, to improve performance when robustifying from multiple demonstrations, we set the success rate parameter to 0.1 instead of 0.2, and we changed the parameter that determines how frequently the starting point can be updated to 200 · nDemos steps instead of a ï¬xed 1024 steps. To avoid cases where the reward would be hard to ï¬nd from the ï¬rst checkpoint (i.e. the checkpoint closest to the end of the game), we also changed an internal starting-frame parameter (i.e. the number of frames before the end that the backward process would start robustifying from) from 256 to 0. We found that these parameters seemed to work better empirically, though we did not experiment with them extensively.
# A.7.3 Pitfall-speciï¬c changes
The small negative rewards in combination with the large positive rewards encountered on Pitfall required two additional changes in this particular game. The ï¬rst change is to replace reward clipping with reward scaling: instead of rewards being clipped to the range [-1, 1], rewards are multiplied by 0.001. This change was necessary because, in Pitfall, negative rewards can have values as small as -1 while positive rewards have values between 2,000 and 5,000. Because negative rewards are
30
so common and positive rewards so rare, clipping rewards gives a huge relative boost to avoiding negative rewards relative to obtaining positive rewards, which makes learning nearly impossible. With reward scaling, the relative importance of the two types of rewards is preserved, and learning succeeds. The scaling factor of 0.001 for Pitfallâs rewards creates a reward range similar to that of clipped rewards, facilitating the use of the same hyperparameters (learning rate, entropy coefï¬cient etc.) across Montezumaâs Revenge and Pitfall. We chose to make a special case of Pitfall instead of using reward scaling in general for our method because reward clipping is more amenable to sharing hyperparameters across many different games [3]. An alternative to these domain-speciï¬c adjustments would be to implement automated reward scaling methods such as Pop-Art [70].
Another change to the canonical Backward Algorithm relates to ï¬xing an issue with early termination and negative rewards. To quickly eliminate rollouts that are slow in collecting the rewards from the demonstration, the original Backward Algorithm implements early termination, where it terminates all rollouts that do not get the same (or a higher) cumulative reward as the demonstration within a certain number of steps (50 in our case). The early termination is implemented in the form of a sliding window, where the cumulative reward of the current rollout is compared with the cumulative reward of the demonstration from 50 time steps ago, and if the cumulative reward of the current rollout is lower, the rollout is terminated. For example, if the demonstration collected a reward of 100 points at time step 20 (counting from the starting point of the rollout, and assuming no other rewards were collected), then a rollout will have to collect at least 100 points before time step 70, otherwise the rollout will be terminated at time step 70.
The sliding window method for early termination works ï¬ne when only positive rewards exist, as the only reason the rollout can have a lower score than the demonstration is because it failed to collect a particular positive reward within the given time frame. However, if negative rewards exist, a rollout can also be terminated by collecting a negative reward, even if the demonstration collected the same negative reward. For example, if the demonstration collected a negative reward of -1 at time step 20 (once again, counting from the starting point of the rollout and assuming no other rewards were collected), the rollout needs to avoid this negative reward at all costs; otherwise it will be terminated at time step 50, even though it followed the same behavior as the demonstration. The reason for such early termination is that, at time step 50, the rollout will be compared with the performance of the demonstration at time step 0, and at that time step, the demonstration has not collected the negative reward yet.
To avoid this termination criteria, we give the agent an allowed score deï¬cit of 250 points, meaning a rollout will only be terminated if its score is more than 250 points lower than that of the demonstration from 50 time steps earlier. This convention means that, as long as the demonstration did not collect more 250 points of negative reward within the given 50 time steps, the rollout will not be terminated if it follows the demonstration. The value of 250 points was found empirically on Pitfall, though future work could look for a more general method of implementing early termination in domains with negative reward.
# A.8 Performance
All Phase 1 runs were done on single virtual machines with 22 CPU cores. Each virtual machine had 50GB of RAM.
Table 3 shows various performance metrics as a function of the level reached during Phase 1 the âdomain knowledgeâ experiment on Montezumaâs Revenge. The full experiment ran for 600M game frames, which took a mean of 74.9 (CI: 72.6 â 77.2) hours.
It is worth noting that time scales superlinearly with game frames primarily due to: (1) Cell selection, which happens at a ï¬xed interval, but takes an amount of time proportional to the number of cells, which is constantly growing. We note that our cell selection is a form of Roulette-Wheel Selection (RWS) [101], which we implement naively with an O(n) algorithm. O(log n) and even O(1) implementations for RWS are possible [102], so that cell selection could be sped up substantially in the future. (2) Trajectory concatenation, which is implemented in a naive way where each cell contains an array that represents the entire trajectory needed to reach it, such that if cell B was reached from cell A, cell Bâs trajectory will contain a copy of the trajectory that leads to cell A, plus the actions that can lead from cell A to cell B. The copying of trajectories ever increasing in length is negligible at the start of the algorithm, but takes up more and more time as the algorithm goes on. An
31
Level 1 2 3 4 5 6 7 8 9 10 11 Solved % 100% 100% 100% 100% 100% 100% 100% 94% 70% 38% 12% Game Frames (excl. replay) 58M (53M â 62M) 104M (97M â 111M) 173M (164M â 182M) 242M (230M â 253M) 305M (292M â 318M) 373M (358M â 388M) 432M (416M â 448M) 487M (471M â 503M) 533M (518M â 548M) 561M (550M â 572M) 582M (570M â 595M) Time (hours) 0.9 (0.9 â 1.0) 2.5 (2.3 â 2.7) 6.8 (6.2 â 7.3) 12.7 (11.7 â 13.6) 19.9 (18.6 â 21.3) 29.4 (27.8 â 31.2) 39.1 (37.0 â 41.2) 49.8 (47.4 â 52.1) 61.4 (58.5 â 64.3) 70.4 (67.7 â 73.2) 77.6 (72.9 â 82.0)
Table 3: Mean computational complexity to reach different levels for Montezumaâs Revenge with domain knowledge. The âGame frame (incl. replay)â metric shows the number of game frames that would have been played if we replayed trajectories instead of resetting the emulator state. It is a hypothetical metric, since we did not replay the trajectories, but instead reset the environment. The âSolved %â column shows the proportion of runs that solved a given level. All other metrics are computed only for the subset of runs that did solve the level.
alternative representation with better memory and computational efï¬ciency would be to represent trajectories as linked lists of actions, and in reverse order, so that each action links to its predecessor. With this representation, if cell B is reached from cell A, only the actions leading from cell A to cell B need to be stored in cell B, with the ï¬rst of these actions linking to the last action needed to reach cell A, which means that adding cell B would take constant time, instead of a time proportional to the length of the longest trajectories in memory. Further, the amount of memory would also grow linearly, and the number of actions stored in memory would be bounded by the number of actions ever taken during exploration.
For Montezumaâs Revenge without domain knowledge, performance metrics are shown in Table 4. The full experiment ran for 1.2B game frames, which took 26.9 (CI: 25.6 â 28.2) hours. It is notable that this is faster than the experiment with domain knowledge in spite of processing twice as many frames. This is likely due to the same reasons that domain knowledge runs get slower over time: runs without domain knowledge ï¬nd fewer cells and shorter trajectories, and are thus less affected by the slowdown.
Level 1 2 Solved % 57% 1% Game Frames (excl. replay) 640M (567M â 711M) 592M (592M â 592M) Time (hours) 10.8 (9.5 â 12.0) 11.4 (11.4 â 11.4)
Table 4: Mean computational complexity to reach different levels for Montezumaâs Revenge without domain knowledge. The âGame frame (incl. replay)â metric shows the number of game frames that would have been played if we replayed trajectories instead of resetting the emulator state. It is a hypothetical metric, since we did not replay the trajectories, but instead reset the environment. The âSolved %â column shows the proportion of runs that solved a given level. All other metrics are computed only for the subset of runs that did solve the level.
For Pitfall with domain knowledge, the threshold at which to compare game frames is not as clear as it is for Montezumaâs Revenge. In order to include data from all of our 40 runs, we report the required game frames for reaching the lowest score achieved out of those runs, which is 47, 534. Reaching this threshold required a mean of 794.0M (CI: 715.9M â 869.8M) game frames, which takes 25.0 (CI: 21.4 â 28.3) hours, and it would have required a mean of 100.8B (CI: 84.1B â 116.0B) game
32
frames if trajectories had to be replayed from the start of the game. The full experiment lasted for 4.0B game frames, which took a mean of 186.3 (CI: 184.9 â 187.8) hours. The full experiment would have required 1,060.4B (CI: 1,048.5B â 1,071.7B) game frames if trajectories had to be replayed from the start of the game.
Because Pitfall without domain knowledge did not obtain any rewards, it is hard to deï¬ne good thresholds at which to compare game frames. In addition, regardless of the threshold we choose, the resulting data would not be representative of the resources Go-Explore would need to make progress on Pitfall (instead, it would represent the resource usage when Go-Explore fails to make progress). For those two reasons, we do not include Pitfall without domain knowledge in the remainder of this section.
We did not monitor the precise memory usage of Phase 1, beyond the fact that all our runs succeeded on machines with 50GB of RAM. Another indicator is the size of the serialized checkpoints produced at the end of each run, as these checkpoints contain all the necessary data to run Go-Explore, including the complete set of all cells, the metadata used in cell selection (see Appendix A.5), and the trajectories needed to reach the cells. Uncompressed, these ï¬les serialized using pickle have a mean size of 341.2MB (CI: 292.3MB â 389.3MB) in the case of Montezumaâs Revenge without domain knowledge, and 2.8GB (CI: 2.7GB â 2.9GB) with domain knowledge. For Pitfall with domain knowledge, the mean uncompressed checkpoint size was 1.30GB (CI: 1.29GB â 1.31GB).
For robustiï¬cation, each run used 16 workers, each equipped with a single GPU, for a total of 16 GPUs per run. For Montezumaâs Revenge without domain knowledge, runs lasted up to 5B game frames though the selected checkpoints were produced after a mean of 4.35B (CI: 4.27B â 4.45B) game frames (which took a mean of 2.4 (CI: 2.4 â 2.5) days). For Montezumaâs Revenge with domain knowledge, runs lasted up to 10B game frames but selected checkpoints were produced after a mean of 4.59B (CI: 3.09B â 5.91B) game frames, which took a mean of 2.6 (CI: 1.8 â 3.3) days. For Pitfall with domain knowledge, runs lasted for about 12B game frames and selected checkpoints were produced after a mean of 8.20B (CI: 6.73B â 9.74B) game frames, which took a mean of 4.5 (CI: 3.7 â 5.3) days.
# A.9 Scores
Table 5 compares the results of Go-Explore with many other algorithms. The scores for the other algorithms are with stochastic testing in the form of random no-ops, sticky actions, human restarts, or a combination thereof. In the case of Go-Explore, both random no-ops and sticky actions were present in testing. As mentioned in Section 2.1.3, Go-Explore was trained partially without sticky actions or random no-ops, whereas many of the algorithms in this table also handled stochasticity throughout training.
33
666,474 18,003,200 4,638 41,098 29,384 74,500 4,753 34,900 1,219,200 59,494 107,363 57 76,813 3,997 - 6,464 47,821 114,000
Table 5: Comparison to baselines for Montezumaâs Revenge and Pitfall. The second section gives the performance of imitation learning algorithms, while the third gives human performance. Results are given in order of ï¬rst public release (including preprints). Many historical papers did not consider Pitfall, in which case the score is displayed as â-â. Two references are given in cases where the score from a given method does not appear in its original paper, but appears in another paper (usually in a comparison section).
34
# A.10 Fine-grained cell representation for Pitfall without domain knowledge
As mentioned in Section 3.2, we attempted an experiment on Pitfall without domain knowledge using the same parameters as with Montezumaâs Revenge. This approach did not succeed, as Go-Explore quickly stopped ï¬nding new rooms and failed to ï¬nd any rewards (Fig. 9). One potential reason for this failure is that the downscaled cell representation optimized for Montezumaâs Revenge conï¬ates too many states into the same cell in Pitfall. This hypothesis is supported by the fact that Go-Explore stops discovering new cells, both when measured post-hoc as domain knowledge cells (Fig. 9) and in the downscaled representation of cells in which it is actually searching (Fig. 11).
00 05 1.0 us 200 Game Frames 1e9
Figure 11: Number of without-domain-knowledge cells found during Phase 1 on Pitfall without domain knowledge. Most cells are found within the ï¬rst 500M game frames, after which very few new cells are found. This observation suggests that Pitfall without domain knowledge fails because there are too many different states that are mapped to the same Go-Explore cell.
To resolve this issue, we looked at different cell representations that would be able to distinguish a larger number of states. A particularly promising cell representation assigns 16, rather than 8, different pixel values to each pixel in the 11 Ã 8 downscaled representation. While this cell representation does result in a larger number of rooms visited and the number of downscaled cells found did not stagnate, the runs terminated prematurely due to exhausting the 50GB of memory available on the virtual machine (Fig. 12). Better hardware, distributed computation, or algorithmic improvements are all potential methods to resolve this issue, but we leave their implementation to future work.
600,000 500,000 + 8 400,000 + 300,000 + Found Rooms 8 Best Score ° W/O Domain Knowledge Cells found 154 200,000 4 ° 105 100,000 + 0 54 of oO 2 4 6 oO 2 4 6 oO 2 4 6 Game Frames 1e8 Game Frames 168 Game Frames 1e8 Number of found Number of cells found Maximum in archive
(a) Number of rooms found
(b) Number of cells found
# (c) Maximum score in archive
Figure 12: Go-Explore Phase 1 on Pitfall without domain knowledge with a more ï¬ne-grained (16 different pixel values instead of 8) cell representation. While the number of rooms (a) and the number of cells (b) found continues to increase, even after 600M game frames, the runs do not continue beyond this point because they run out of memory. Despite visiting more rooms, Go-Explore still does not ï¬nd any rewards, although it may have were it able to continue for longer (c). The noise at the end of sub-ï¬gure (a) is caused by different runs crashing at different times. The plot shows the mean and 95% bootstrapped conï¬dence interval over 20 runs initially, but the number of runs declines over time. The ï¬rst run crashes around 500M game frames.
35
# A.11 Failure robustifying long trajectories in Pitfall
While Go-Explore on Pitfall with domain knowledge is able to ï¬nd trajectories that score over 70,000 points (Fig. 9), the Backward Algorithm was unable to robustify these trajectories (Fig. 13).
70,000 60,000 50,000 40,000 30,000 â Demo1 20,000 | ââ Demo 2 â Demo 3 10,000 | ââ Demo4 â Demo5 Max Starting Point 0.00 0.25 050 0.75 1.00 125 1.50 1.75 2.00 Game Frames 1e10
Figure 13: Maximum starting point over training when robustifying the full-length trajectories produced by Go-Explore in Phase 1 on Pitfall with domain knowledge. Unlike in Fig. 5b, the lines in this ï¬gure represent separate robustiï¬cation attempts, each of which was applied to a single demonstration taken from different runs of Go-Explore Phase 1. None of the 5 robustiï¬cation attempts reaches a starting point near 0, meaning that robustiï¬cation failed on these demonstrations. We did not try to robustify from multiple demonstrations for want of time, although doing so may have worked better.
# A.12 Nearly identical states in Pitfall
Pitfall contains many rooms located in different parts of the game that contain the exact same objects and hazards. These identical rooms can result in nearly identical-looking states that require different actions to be navigated optimally (Fig. 14), and they indicate that Pitfall is a Partially Observable Markov Decision Process (POMDP). These nearly identical looking states can pose a problem, both when robustifying trajectories that visit some of these states, and when designing a domain-agnostic cell representation that should, ideally, treat these states as being in different cells.
The general method for handling POMDPs is to condition the current action on all previously observed states, for example by training a recurrent, rather than feed-forward, neural network. For the robustiï¬cation phase our method already implements a recurrent layer in the neural network, but, possibly due to the way the network is trained with the Backward Algorithm (i.e. whenever the agent is started from a particular state, it is not presented with all states that would have come before), this recurrent layer does not appear to completely solve the issue (see also section 3.2). A similar approach could be applied for obtaining cell representations (e.g. a cell representation could be conditioned on all observations of the trajectory to a particular state, rather than just the observation at a particular state), but care would have to be taken to ensure that actually identical (or nearly identical) states are recognized as such.
36
Figure 14: Two nearly identical looking states that require different actions to be navigated optimally. The two screenshots are taken from the same Go-Explore demonstration, but at different times. The rooms are conceptually identical: they both contain a blue pool, a swinging vine, three rolling logs, and a scorpion. However, because the two rooms are located in different areas of the game, the correct actions for navigating the two rooms can be different. In this case, the Go-Explore demonstration navigates the left room right to left, whereas it navigates the right room from left to right. When training a policy in this situation, there will be many similar looking states that require opposite actions. While the moving aspects of the room (i.e. the vine, the logs, and the scorpion) are likely to be in different locations in the two rooms of the demonstration, the fact that they will also be in different locations when entering the rooms at different times makes them poor features for differentiation. Probably the most informative features that can be used to determine in which direction to move are the score counter and the clock (the white numbers in the top left of each image), though, in practice, these small, frequently-changing features seem insufï¬cient to provide the necessary guidance.
37 | {
"id": "1702.01182"
} |
1901.11409 | Semantic Redundancies in Image-Classification Datasets: The 10% You Don't Need | Large datasets have been crucial to the success of deep learning models in
the recent years, which keep performing better as they are trained with more
labelled data. While there have been sustained efforts to make these models
more data-efficient, the potential benefit of understanding the data itself, is
largely untapped. Specifically, focusing on object recognition tasks, we wonder
if for common benchmark datasets we can do better than random subsets of the
data and find a subset that can generalize on par with the full dataset when
trained on. To our knowledge, this is the first result that can find notable
redundancies in CIFAR-10 and ImageNet datasets (at least 10%). Interestingly,
we observe semantic correlations between required and redundant images. We hope
that our findings can motivate further research into identifying additional
redundancies and exploiting them for more efficient training or
data-collection. | http://arxiv.org/pdf/1901.11409 | Vighnesh Birodkar, Hossein Mobahi, Samy Bengio | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20190129 | 20190129 | 9 1 0 2
n a J 9 2 ] V C . s c [ 1 v 9 0 4 1 1 . 1 0 9 1 : v i X r a
# Semantic Redundancies in Image-Classiï¬cation Datasets: The 10% You Donât Need
Vighnesh Birodkarâ Hossein Mobahi Samy Bengio
# Google Research, Mountain View, CA vighneshb@google.com hmobahi@google.com bengio@google.com
# Abstract
Large datasets have been crucial to the success of deep learning models in the recent years, which keep per- forming better as they are trained with more labelled data. While there have been sustained eï¬orts to make these models more data-eï¬cient, the potential beneï¬t of understanding the data itself, is largely untapped. Speciï¬cally, focusing on object recognition tasks, we wonder if for common benchmark datasets we can do better than random subsets of the data and ï¬nd a sub- set that can generalize on par with the full dataset when trained on. To our knowledge, this is the ï¬rst result that can ï¬nd notable redundancies in CIFAR-10 and ImageNet datasets (at least 10%). Interestingly, we observe semantic correlations between required and redundant images. We hope that our ï¬ndings can moti- vate further research into identifying additional redun- dancies and exploiting them for more eï¬cient training or data-collection.
studies modiï¬cations to simulated data to help models adapt to the real world, and [Carlini et al., 2018] that demonstrates the existence of prototypical examples and veriï¬es that they match human intuition.
This work studies the properties of ImageNet, CIFAR- 10 , and CIFAR-100 datasets from the angle of redun- dancy. We ï¬nd that at least 10% of ImageNet and CIFAR-10 can be safely removed by a technique as simple as clustering. Particularly, we identify a certain subset of ImageNet and CIFAR-10 whose removal does not aï¬ect the test accuracy when the architecture is trained from scratch on the remaining subset. This is striking, as deep learning techniques are believed to be data hungry [Halevy et al., 2009, Sun et al., 2017]. In fact, recently the work by [Vodrahalli et al., 2018] speciï¬cally studying the redundancy of these datasets concludes that there is no redundancy. Our work refutes that claim by providing counter examples.
# 1 Introduction
Large datasets have played a central role in the recent success of deep learning. In fact, the performance of AlexNet [Krizhevsky et al., 2012] trained on ImageNet in 2012 is often considered as [Deng et al., 2009] the starting point of the current deep learning era. Undoubtedly, prominent datasets of ImageNet, CI- FAR, and CIFAR-100 [Krizhevsky and Hinton, 2009] have had a crucial role in the evolution of deep learning methods since then; with even bigger datasets like OpenImages [Kuznetsova et al., 2018] and Tencent ML-images re- These developments have led cently emerging. to state-of-the-art architectures such as ResNets [He et al., 2016a], DenseNets [Huang et al., 2017], VGG [Simonyan and Zisserman, 2014], AmoebaNets [Huang et al., 2018], and regularization techniques such as Dropout [Srivastava et al., 2014] and Shake-Shake [Gastaldi, 2017]. However, understanding the proper- ties of these datasets themselves has remained relatively untapped. Limited study along this direction includes [Lin et al., 2018], which proposes a modiï¬ed loss func- tion to deal with the class imbalance inherent in object detection datasets and [Tobin et al., 2017], which
# âWork Done as Google AI Resident.
This work resolves some re- cent misconceptions about the absence of notable redundancy in major image classiï¬cation datasets [Vodrahalli et al., 2018]. We do this by identifying a speciï¬c subset, which constitutes above 10% of the training set, and yet its removal causes no drop in the test accuracy. To our knowledge, this is the ï¬rst time such signiï¬cant redundancy is shown to exist for these datasets. We emphasize that our contribution is merely to demonstrate the existence of such redundancy, but we do not claim any algorithmic contributions. However, we hope that our ï¬ndings can motivate further research into identifying additional redundancies and exploit- ing them for more eï¬cient training or data-collection. Our ï¬ndings may also be of interest to active learning community, as it provides an upper-bound on the best performance1.
# 2 Related Works
There are approaches which try to prioritize dif- ferent learning the process goes on such as [Fan et al., 2016] and [Katharopoulos and Fleuret, 2018]. Although these
1Suppose we learn about existence of m samples in a dataset of size n > m that can achieve the same test performance as a model trained with all n samples. Then if our active learner cannot reach the full test performance after selecting m samples, we know that there might exist a better active learning algorithm, as the ideal subset of size m can achieve full test accuracy.
1
(a) Deformation (b) Pose (c) Color, Pose (d) Color, Texture (e) Background (f) Background (g) Pose (h) Color, Texture, Pose
a
Figure 1: Examples of diï¬erent redundant groups of images from the ImageNet dataset while creating a subset 90% of the size of the full set. In each group, we list the semantic variation considered redundant. The images selected by semantic clustering are highlighted with a green box whereas the rest are discarded with no negative impact on generalization.
techniques involve selecting examples to train on, they do not seek to identify redundant subsets of the data, but rather to sample the full dataset in a way that speeds up convergence.
An early mention of trying to reduce the training dataset size can be seen in [Ohno-Machado et al., 1998]. Their proposed algorithm splits the training dataset into many smaller training sets and iteratively removes these smaller sets until the generalization performance falls below an acceptable threshold. However, the algorithm relies on creating many small sets out of the given training set, rendering it impractical for modern usage. [Wei et al., 2015] pose the problem of subset selection as a constrained sub-modular maximization problem and use it to propose an active learning algorithm. The proposed techniques are used by [Kaushal et al., 2018] in the context of image recognition tasks. These draw- back however, is that when used with deep-neural net- works, simple uncertainty based strategies out-perform the mentioned algorithm.
Another example of trying to identify a smaller, more informative set can be seen in [Lapedriza et al., 2013]. Using their own deï¬nition of value of a training exam- ple, they demonstrate that prioritizing training over examples of high training value can result in improved performance for object detection tasks. The authors suggest that their deï¬nition of training value encourages prototypicality and thus results is better learning.
dataset is randomly selected according to uniform distri- bution, and their labels are removed [Ren et al., 2018, Tarvainen and Valpola, 2017, Qiao et al., 2018, This creates Pu et al., 2016, Sajjadi et al., 2016]. a training set with mix of labeled and unlabeled data to be used for assessing semi-supervised learning methods. However, creating the training set by maintain the most informative fraction of the labeled examples may provide new insights about capabilities of semi-supervised methods.
# 3 Method
# 3.1 Motivation
In order to ï¬nd redundancies, it is crucial to analyze each sample in the context of other samples in the dataset. Unlike previous attempts, we seek to mea- sure redundancy by explicitly looking at a dissimilar- ity measure between samples. In case of there being near-duplicates in the training data, the approach of [Vodrahalli et al., 2018] will not be able to decide be- tween them if their resulting gradient magnitude is high, whereas a dissimilarity measure can conclude that they are redundant if it evaluates to a low value.
[Carlini et al., 2018] attempt to directly quantify pro- totypicality with various metrics and verify that all of them agree with human intuition of prototypicality to various extents. In particular, they conclude that with CIFAR-10 , training on nearly-the-most prototypical examples gives the best performance when using 10% of the training data.
Most recently [Vodrahalli et al., 2018] attempts to ï¬nd redundancies in image recognition datasets by ana- lyzing gradient magnitudes as a measure of importance. They prioritize examples with high gradient magnitude according to a pre-trained classiï¬er. Their method fails to ï¬nd redundancies in CIFAR-10 and ImageNet datasets.
# 3.2 Algorithm
To ï¬nd redundancies in datasets, we look at the se- mantic space of a pre-trained model trained on the full dataset. In our case, the semantic representation comes from the penultimate layer of a neural network. To ï¬nd groups of points which are close by in the semantic space we use Agglomerative Clustering [Defays, 1977]. Ag- glomerative Clustering assumes that each point starts out as its own cluster initially, and at each step, the pair of clusters which are closest according to the dissimi- larity criterion are joined together. Given two images I1 and I2, whose latent representations are denoted by vectors x1 and x2. We denote the dissimilarity between x1 and x2 by d(x1, x2) using the cosine angle between them as follows:
Finally, the insights provided by our work may have implications for semi-supervised techniques assessed on notorious image datasets. Currently when evaluated on ImageNet or CIFAR datasets, a ï¬xed-sized subset of the
(1, @2) d(a1,a2) =1-ââ"â__. Ilex] llw2ll (1)
2
The dissimilarity between two clusters C1 and C2 , D(C1, C2) is the maximum dissimilarity between any two of their constituent points:
D(C1, C2) = max x1âC1,x2âC2 d(x1, x2) . (2)
For Agglomerative Clustering, we process points be- longing to each class independently. Since the dissimi- larity is a pairwise measure, processing each class sepa- rately leads to faster computations. We run the cluster- ing algorithm until there are k clusters left, where k is the size of the desired subset. We assume that points inside a cluster belong to the same redundant group of images. In each redundant group, we select the image whose representation is closest to the cluster center and discard the rest. Henceforth, we refer to this procedure as semantic space clustering or semantic clustering for brevity.
# 4 Experiments
We use the ResNet [He et al., 2016a] architecture for all our experiments with the variant described in [He et al., 2016b]. For each dataset, we compare the performance after training on diï¬erent random subsets to subsets found with semantic clustering. Given a ï¬xed pre-trained model, semantic clustering subsets are deterministic and the only source of stochasticity is due to the random network weight initialization and random mini-batch choices during optimization by SGD.
The semantic space embedding is obtained by pre- training a network on the full dataset. We chose the output after the last average pooling layer as our seman- tic space representation. All hyperparameters are kept identical during pre-training and also when training with diï¬erent subset sizes.
As the baseline, we compare against a subset of size k uniformly sampled from the full set. Each class is sampled independently to in order to be consistent with the semantic clustering scheme. Note that random sam- pling scheme adds an additional source of stochasticity compared to clustering. For both either uniform sam- pling or cluster based subset selection, we report the mean and standard deviation of the test accuracy of the model trained from scratch using the subset.
# 4.1 CIFAR-10 & CIFAR-100
We train a 32-layer ResNet for the CIFAR-10 and CIFAR-100 [Krizhevsky and Hinton, 2009] datasets. The semantic representation obtained was a 64- dimensional vector. For both the datasets, we train for 100,000 steps with a learning rate which is cosine annealed [Loshchilov and Hutter, 2016] from 0.1 to 0 with a batch size of 128.
For optimization we use Stochastic Gradient Descent with a momentum of coefficient of 0.9. We regularize our weights by penalizing their 2 norm with a factor of 0.0001. We found that to prevent weights from diverging when training with subsets of all sizes, warming up the
3
930-1 1 ' ' 1 1 1 1 1 1 @®e Random Subset @®e Clustering based Subset 92.5 - 92.0 - Test Accuracy 2 a H 91.0 - 90.5 - 1 1 1 1 75 80 95 100 Percentage of Data 70 35 90
Figure 2: Performance of subsets of varying size on the CIFAR-10 dataset. Each point is an average across 10 trials and the vertical bars denote standard deviation. We see no drop in test accuracy until 10% of the data considered redundant by semantic clustering is removed.
learning rate was necessary. We use linear learning rate warm-up for 2500 steps from 0. We veriï¬ed that warming up the learning rate performs slightly better than using no warm-up when using the full dataset.
In all these experiments, we report average test accu- racy across 10 trials.
# 4.1.1 CIFAR-10
We see in the case of the CIFAR-10 dataset in Figure 2 that the same test accuracy can be achieved even after 10% of the training is discarded using semantic clustering. In contrast, training on random subsets of smaller sizes, results in a monotonic drop in perfor- mance. Therefore, while we show that at least 10% of the data in the CIFAR-10 dataset is redundant, this redundancy cannot be observed by uniform sampling. Figure 3 shows examples of images considered redun- dant with semantic clustering while choosing a subset of 90% size of the full dataset. Each set denotes images the were placed in into the same (redundant) group by semantic clustering. Images in green boxes were retained while the rest were discarded.
Figure 4 shows the number of redundant groups of diï¬erent sizes for two classes in the CIFAR-10 dataset when seeking a 90% subset. Since a majority of points are retained, most clusters end up containing one ele- ment upon termination. Redundant points arise from clustering with two or more elements in them.
# 4.1.2 CIFAR-100
In the case of the CIFAR-100 dataset, our proposed scheme fails to ï¬nd redundancies, as is shown in Figure 5, while it does slightly better than random subsets. Both proposed and random methods show a monotonic decrease in test accuracy with decreasing subset size.
wid Gide a
wid
ese sal
Gide
a
ese sal
# Die mara
(a) Redundant Airplanes
(b) Redundant Trucks
ial
# (c) Redundant Airplanes
(d) Redundant Trucks
Figure 3: Examples of redundant images in the CIFAR- 10 dataset when creating a subset of 90% size of the original set. The ï¬gure illustrates similarity between images of each redundant group and variation across diï¬erent redundant groups. 3a and 3c are two diï¬erent redundant groups of the class Airplane. 3b and 3d are two diï¬erent redundant groups from class Truck. In each group, only the images marked with green boxes are kept and the rest, discarded. The discarded images did not lower test accuracy.
10% - 1 ' \ 1 1 1 - Class mmm Airplane === Automobile . | w ' I) . 10° : i i all 1 2 3 4 6 8 Number of Groups Size
Figure 4: Number of redundant groups of various sizes in the CIFAR-10 dataset when ï¬nding a 90% subset for two classes. Note that the y-axis is logarithmic.
Figure 6 looks at redundant groups found with se- mantic clustering to retain 90% of the dataset. As compared to Figure 3, the images within a group show much more semantic variation. Redundant groups in Figure 3 are slight variations of the same object, where as in Figure 6, redundant groups do not contain the same object. We note that in this case the model is not able to be invariant to these semantic changes.
Similar to Figure 4, we plot the number of redundant groups of each size for two classes in CIFAR-100 in Figure 7.
To quantify the semantic variation of CIFAR-100 in relation to CIFAR-10 , we select redundant groups of size two or more, and measure the average dissimilar- ity(from Equation 1) to the retained sample. We report the average over groups in 3 diï¬erent classes as well as the entire datasets in Table 1. It is clear that the
4
69-1 1 ' ' 1 1 1 1 1 roe @®e Random Subset @®e Clustering based Subset 68 67 - Test Accuracy a 2 2 @ 3 2 & & 1 1 H i fa 1 a ' ' ' ' 1 1 1 1 ' ' 1 55 60 65 70 75 80 85 9095, Percentage of Data 1 100
Figure 5: Performance of subsets of varying size on the CIFAR-100 dataset. Each point is an average over 10 trials and the vertical bars denote standard deviation.
bali
~~ 2
~~ 2
(a) Redundant Bridges
(b) Redundant Cups
=
(c) Redundant Bridges
(d) Redundant Cups
Figure 6: Example of variation between images in the same redundant group compared to variation across diï¬erent redundant groups in the CIFAR-100 dataset. In Each column contains a speciï¬c class of images. contrast to Figure 3, the images within each redundant group show much more variations. The groups were found when retaining a 90% subset, and retraining only the selected images (in green boxes) and discarding the rest had a negative impact on test performance.
higher semantic variation in the redundant groups of CIFAR-100 seen in Figure 6 translates to an higher average dissimilarity in Table 1.
# 4.2 Choice of semantic representation.
To determine the best choice of semantic representation from a pre-trained model, we run experiments after selecting the semantic representation from 3 diï¬erent layers in the network. Figure 8 shows the results. Here âStartâ denotes the semantic representation after the ï¬rst Convolution layer is a ResNet, âMiddleâ denotes the representation after the second residual block, and âEndâ denotes the output of the last average pooling layer. We see that the âEndâ layerâs semantic representation is able to ï¬nd the largest redundancy.
10? - 1 1 ' ' - 5 Class E == Apple F a mm Bed - wy 10? = = ao" 5 FE ge : 2 7 - 9 -| - S 4 . a gS | . ⬠5 = 10 = : 10° â | } i || . 1 2 3 4 Size
Figure 7: Number of redundant groups of various sizes in the CIFAR-100 dataset when ï¬nding a 90% subset for two classes. Note that the y-axis is logarithmic.
Dataset Class Average Dissimilarity CIFAR-10 Airplane Automobile Bird All (mean) 1.73 Ã 10â3 1.65 Ã 10â3 2.22 Ã 10â3 1.84 Ã 10â3 CIFAR-100 Apple Bed Bowl All (mean) 6.61 Ã 10â3 14.16 Ã 10â3 20.02 Ã 10â3 13.90 Ã 10â3
Table 1: Average dissimilarity to the retained sample across redundant groups (clusters) of size greater than 1. We report the class-wise mean for 3 classes as well as the average over the entire dataset. All clusters were created to ï¬nd a subset of 90% the size of the full set. We can observe that the average dissimilarity is about an order of magnitude higher for the CIFAR-100 dataset, indicating that there is more variation in the redundant groups.
930-1 ' \ 1 1 1 1 ' ' roe Ge Start 80 Middle @®e End 92.5 - - 92.0 - - Test Accuracy 2 a 1 1 91.0- - 90.5 - . 90.0- | 1 1 ' 1 1 - 55 60 65 70 75 80 85 90 95 100 Percentage of Data
Figure 8: Eï¬ectiveness of latent representations from 3 stages in a ResNet at ï¬nding redundant subsets.
5
0.790- | 1 1 ' 1 1 1 1 1 roe subset type @®e Random Subset 0.785 - @@@ Clustering based Subset 0.780 ~ Validation Accuracy ° so ° GS gf 4 es 3 4 & 3s & H 1 1 0.750- , i i i : : I. 1 1 1 \ 55 60 65 70 75 80 85 90 95 100 Percentage of Data
Figure 9: Validation accuracy after training with sub- sets of various sizes of ImageNet . We plot the average over 5 trials with the vertical bars denoting standard deviation. There is no drop in validation accuracy when 10% of the training data considered redundant by semantic clustering is removed.
# 4.3 ImageNet
We train a 101-layer ResNet with the ImageNet dataset. It gave us a semantic representation of 2048 dimensions. We use a batch size of 1024 during training and train for 120, 000 steps with a learning rate cosine annealed from 0.4 to 0. Using the strategy from (Goyal et al., 2017], we linearly warm up our learning rate from 0 for 5000 steps to be able to train with large batches. We reg- ularize our weights with ¢2 penalty with a factor of 0.0001.
For optimization, we use Stochastic Gradient De- scent with a Momentum coeï¬cient of 0.9 while using the Nesterov momentum update. Since the test set is not publicly available we report the average validation accuracy, measured over 5 trials.
The results of training with subsets of varying sizes of ImageNet dataset are shown in Figure 9. Our proposed scheme is able to successfully show that at least 10% of the data can be removed from the training set without any negative impact on the validation accuracy, whereas training on random subsets always gives a drop with decrease in subset size.
Figure 1 shows diï¬erent redundant groups found in the ImageNet dataset. It is noteworthy that the seman- tic change considered redundant is diï¬erent across each group. Figure 11 highlights the similarities between images of the same redundant group and the variation across diï¬erent redundant groups.
In each row of Figure 12, we plot two images from a redundant group on the left where the retained image is highlighted in a green box. On the right we display the image closest to each retained image in dissimilarity but excluded from the redundant group. These images were close in semantic space to the corresponding retained images, but were not considered similar enough to be
104 = 1 1 ' 1 1 1 - = Class = 7 mmm Sportscar Hammer 10° 107 : | | I Z LU : 10° - i 1 1 1 1 1 - 1 2 3 4 5 6 Size Number of Groups
Figure 10: Sizes of redundant groups for the Hammer and Sports-car classes in the ImageNet dataset when ï¬nding a 90% subset. Note that the y-axis is logarith- mic.
redundant. For example the redundant group in the ï¬rst row of Figure 12 contains Sedan-like looking red cars. The 2-seater sports car on the right, in spite of looking similar to the cars on the left, was not considered redundant with them.
Figure 10 shows the number of redundant groups of each size when creating a 90% subset. Much akin to Figure 4, a majority of images are not considered redundant and form a group of size 1.
Additional examples of redundancy group on Ima- geNet is provided in the appendix.
# Implementation Details
We use the open source Tensorï¬ow [Abadi et al., 2016] and tensor2tensor[Vaswani et al., 2018] frameworks to train our models. For clustering, we used the scikit-learn [Pedregosa et al., 2011] library. For the CIFAR-10 and CIFAR-100 experiments we train on a single NVIDIA Tesla P100 GPU. For our ImageNet experiments we perform distributed training on 16 Cloud TPUs.
# 5 Conclusion
In this work we present a method to ï¬nd redundant subsets of training data. We explicitly model a dis- similarity metric into our formulation which allows us to ï¬nd semantically close samples that can be consid- ered redundant. We use an agglomerative clustering algorithm to ï¬nd redundant groups of images in the semantic space. Through our experiments we are able to show that at least 10% of ImageNet and CIFAR-10 datasets are redundant.
We analyze these redundant groups both qualitatively and quantitatively. Upon visual observation, we see that the semantic change considered redundant varies from cluster to cluster. We show examples of a variety
6
of varying attributes in redundant groups, all of which are redundant from the point of view of training the network.
One particular justiï¬cation for not needing this vari- ation during training could be that the network learns to be invariant to them because of its shared parame- ters and seeing similar variations in other parts of the dataset.
In Figure 2 and 9, the accuracy without 5% and 10% of the data is slightly higher than that obtained with the full dataset. This could indicate that redundancies in training datasets hamper the optimization process. For the CIFAR-100 dataset our proposed scheme fails to ï¬nd any redundancies. We qualitatively compare the redundant groups in CIFAR-100 (Figure 6) to the ones found in CIFAR-10 (Figure 3) and ï¬nd that the semantic variation across redundant groups is much larger in the former case. Quantitatively this can be seen in Table 1 which shows points in redundant groups of CIFAR-100 are much more spread out in semantic space as compared to CIFAR-10 .
Although we could not ï¬nd any redundancies in the CIFAR-100 dataset, there could be a better algorithm that could ï¬nd them. Moreover, we hope that this work inspires a line of work into ï¬nding these redundan- cies and leveraging them for faster and more eï¬cient training.
# 6 Acknowledgement
We would like to thank colleagues at Google Research for comments and discussions: Thomas Leung, Yair Movshovitz-Attias, Shraman Ray Chaudhuri, Azade Nazi, Serge Ioï¬e.
(a) Redundant Clocks (b) Redundant Hammers (c) Redundant Purses (d) Redundant Balloons (e) Redundant Clocks (f) Redundant Hammers (g) Redundant Purses (h) Redundant Balloons
he
\ |
|
~
a
â SE
Figure 11: This ï¬gure highlights semantic similarities between images from the same redundant group and variation seen across diï¬erent redundant groups of the same class. The redundant groups were found while creating a 90% subset of the ImageNet dataset. Each sub-ï¬gure is a redundant group of images according to our algorithm. Each column contains images belonging to the same class, with each row in a column being a diï¬erent redundant group. For example, the ï¬rst column contains the Clock class. Clocks in 11a are in one group of redundant images whereas clocks in 11e are in another group. From each of the groups in the sub-ï¬gures, only the images marked in green boxes are selected by our algorithm and the others are discarded. Discarding these images had no negative impact on validation accuracy.
Figure 12: In each row we plot two images from the same redundant group while creating a 90% subset on the left with the retained image highlighted in a green box. On the right we plot the image closest to the retained image in the semantic space but not included in the same redundant group. Note that the image on the right shows a semantic variation which is inconsistent with the one seen in the redundant group.
7
# References
[Abadi et al., 2016] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., et al. (2016). Tensorï¬ow: a system for large-scale machine learning. In OSDI, volume 16, pages 265â283.
[Carlini et al., 2018] Carlini, N., Erlingsson, U., and Papernot, N. (2018). Prototypical examples in deep learning: Metrics, characteristics, and utility. Tech- nical report.
[Defays, 1977] Defays, D. (1977). An eï¬cient algorithm for a complete link method. The Computer Journal, 20(4):364â366.
[Deng et al., 2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248â255. Ieee.
[Fan et al., 2016] Fan, Y., Tian, F., Qin, T., and Liu, T.-Y. (2016). Neural data ï¬lter for bootstrapping stochastic gradient descent. Technical report.
[Gastaldi, 2017] Gastaldi, X. (2017). Shake-shake reg- ularization. arXiv preprint arXiv:1705.07485.
[Goyal et al., 2017] Goyal, P., Doll´ar, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., and He, K. (2017). Accurate, large minibatch sgd: training imagenet in 1 hour. arXiv preprint arXiv:1706.02677.
[Halevy et al., 2009] Halevy, A., Norvig, P., and Pereira, F. (2009). The unreasonable eï¬ectiveness of data. IEEE Intelligent Systems, 24(2):8â12.
[He et al., 2016a] He, K., Zhang, X., Ren, S., and Sun, J. (2016a). Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770â 778.
[He et al., 2016b] He, K., Zhang, X., Ren, S., and Sun, J. (2016b). Identity mappings in deep residual net- works. In European conference on computer vision, pages 630â645. Springer.
Z., Van Der Maaten, L., and Weinberger, K. Q. (2017). Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2261â2269. IEEE.
[Huang et al., 2018] Huang, Y., Cheng, Y., Chen, D., Lee, H., Ngiam, J., Le, Q. V., and Chen, Z. (2018). Gpipe: Eï¬cient training of giant neural networks using pipeline parallelism. arXiv preprint arXiv:1811.06965.
8
[Katharopoulos and Fleuret, 2018] Katharopoulos, A. and Fleuret, F. (2018). Not all samples are cre- ated equal: Deep learning with importance sampling. arXiv preprint arXiv:1803.00942.
[Kaushal et al., 2018] Kaushal, V., Sahoo, A., Doctor, K., Raju, N., Shetty, S., Singh, P., Iyer, R., and Ra- makrishnan, G. (2018). Learning from less data: Di- versiï¬ed subset selection and active learning in image classiï¬cation tasks. arXiv preprint arXiv:1805.11191.
[Krizhevsky and Hinton, 2009] Krizhevsky, A. and Hinton, G. (2009). Learning multiple layers of fea- tures from tiny images. Technical report, Citeseer.
[Krizhevsky et al., 2012] Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â 1105.
[Kuznetsova et al., 2018] Kuznetsova, A., Rom, H., Alldrin, N., Uijlings, J., Krasin, I., Pont-Tuset, J., Kamali, S., Popov, S., Malloci, M., Duerig, T., et al. (2018). The open images dataset v4: Uni- ï¬ed image classiï¬cation, object detection, and vi- sual relationship detection at scale. arXiv preprint arXiv:1811.00982.
[Lapedriza et al., 2013] Lapedriza, A., Pirsiavash, H., Bylinskii, Z., and Torralba, A. (2013). Are all train- ing examples equally valuable? arXiv preprint arXiv:1311.6510.
[Lin et al., 2018] Lin, T.-Y., Goyal, P., Girshick, R., He, K., and Doll´ar, P. (2018). Focal loss for dense ob- ject detection. IEEE transactions on pattern analysis and machine intelligence.
[Loshchilov and Hutter, 2016] Loshchilov, I. and Hut- ter, F. (2016). Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983.
L., Fraser, H. S., and Ohrn, A. (1998). Improving ma- chine learning performance by removing redundant In Proceedings of the cases in medical data sets. AMIA Symposium, page 523. American Medical Informatics Association.
[Pedregosa et al., 2011] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., et al. (2011). Scikit-learn: Machine learning in python. Journal of machine learning research, 12(Oct):2825â2830.
[Pu et al., 2016] Pu, Y., Gan, Z., Henao, R., Yuan, X., Li, C., Stevens, A., and Carin, L. (2016). Variational autoencoder for deep learning of images, labels and captions. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 2352â2360. Curran Associates, Inc.
[Qiao et al., 2018] Qiao, S., Shen, W., Zhang, Z., Wang, B., and Yuille, A. L. (2018). Deep co-training for semi- supervised image recognition. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XV, pages 142â159.
[Ren et al., 2018] Ren, M., Ravi, S., Triantaï¬llou, E., Snell, J., Swersky, K., Tenenbaum, J. B., Larochelle, H., and Zemel, R. S. (2018). Meta-learning for semi- supervised few-shot classiï¬cation. In International Conference on Learning Representations.
[Sajjadi et al., 2016] Sajjadi, M., Javanmardi, M., and Tasdizen, T. (2016). Regularization with stochastic transformations and perturbations for deep semi- supervised learning. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 1163â1171. Curran Associates, Inc.
[Simonyan and Zisserman, 2014] Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
[Srivastava et al., 2014] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overï¬tting. The Journal of Machine Learning Research, 15(1):1929â1958.
[Sun et al., 2017] Sun, C., Shrivastava, A., Singh, S., and Gupta, A. (2017). Revisiting unreasonable eï¬ec- tiveness of data in deep learning era. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 843â852. IEEE.
and Valpola, H. (2017). Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 1195â1204. Curran Associates, Inc.
[Tobin et al., 2017] Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. In Intel- ligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on, pages 23â30. IEEE.
[Vaswani et al., 2018] Vaswani, A., Bengio, S., Brevdo, E., Chollet, F., Gomez, A. N., Gouws, S., Jones, L., Kaiser, L., Kalchbrenner, N., Parmar, N., et al. (2018). Tensor2tensor for neural machine translation. arXiv preprint arXiv:1808.07416.
[Vodrahalli et al., 2018] Vodrahalli, K., Li, K., and Ma- lik, J. (2018). Are all training examples created equal? an empirical study. CoRR, abs/1811.12569.
9
[Wei et al., 2015] Wei, K., Iyer, R., and Bilmes, J. (2015). Submodularity in data subset selection and active learning. In International Conference on Ma- chine Learning, pages 1954â1963.
[Wu et al., 2019] Wu, B., Chen, W., Fan, Y., Zhang, Y., Hou, J., Huang, J., Liu, W., and Zhang, T. (2019). Tencent ml-images: A large-scale multi-label image database for visual representation learning. arXiv preprint arXiv:1901.01703.
# A Appendix
Each row is a redundant group of images. The left most image is retained in each row for the 90% subset.
10
11 | {
"id": "1705.07485"
} |
1901.09021 | Complexity of Linear Regions in Deep Networks | It is well-known that the expressivity of a neural network depends on its
architecture, with deeper networks expressing more complex functions. In the
case of networks that compute piecewise linear functions, such as those with
ReLU activation, the number of distinct linear regions is a natural measure of
expressivity. It is possible to construct networks with merely a single region,
or for which the number of linear regions grows exponentially with depth; it is
not clear where within this range most networks fall in practice, either before
or after training. In this paper, we provide a mathematical framework to count
the number of linear regions of a piecewise linear network and measure the
volume of the boundaries between these regions. In particular, we prove that
for networks at initialization, the average number of regions along any
one-dimensional subspace grows linearly in the total number of neurons, far
below the exponential upper bound. We also find that the average distance to
the nearest region boundary at initialization scales like the inverse of the
number of neurons. Our theory suggests that, even after training, the number of
linear regions is far below exponential, an intuition that matches our
empirical observations. We conclude that the practical expressivity of neural
networks is likely far below that of the theoretical maximum, and that this gap
can be quantified. | http://arxiv.org/pdf/1901.09021 | Boris Hanin, David Rolnick | stat.ML, cs.LG, math.PR | ICML 2019 | null | stat.ML | 20190125 | 20190611 | 9 1 0 2
n u J 1 1 ] L M . t a t s [ 2 v 1 2 0 9 0 . 1 0 9 1 : v i X r a
# Complexity of Linear Regions in Deep Networks
# Boris Hanin * 1 David Rolnick * 2
# Abstract
It is well-known that the expressivity of a neural network depends on its architecture, with deeper networks expressing more complex functions. In the case of networks that compute piecewise lin- ear functions, such as those with ReLU activation, the number of distinct linear regions is a natural measure of expressivity. It is possible to construct networks with merely a single region, or for which the number of linear regions grows exponentially with depth; it is not clear where within this range most networks fall in practice, either before or after training. In this paper, we provide a mathe- matical framework to count the number of linear regions of a piecewise linear network and mea- sure the volume of the boundaries between these regions. In particular, we prove that for networks at initialization, the average number of regions along any one-dimensional subspace grows lin- early in the total number of neurons, far below the exponential upper bound. We also ï¬nd that the average distance to the nearest region boundary at initialization scales like the inverse of the number of neurons. Our theory suggests that, even after training, the number of linear regions is far below exponential, an intuition that matches our empiri- cal observations. We conclude that the practical expressivity of neural networks is likely far below that of the theoretical maximum, and that this gap can be quantiï¬ed.
Figure 1. How many linear regions? This ï¬gure shows a two- dimensional slice through the 784-dimensional input space of vectorized MNIST, as represented by a fully-connected ReLU network with three hidden layers of width 64 each. Colors denote different linear regions of the piecewise linear network.
ferent functions of input data. Many such works consider the expressivity of neural networks, showing that certain func- tions are more efï¬ciently expressible by deep architectures than by shallow ones (e.g. Bianchini & Scarselli (2014); Montufar et al. (2014); Telgarsky (2015); Lin et al. (2017); Rolnick & Tegmark (2018)). It has, however, also been noted that many expressible functions are not efï¬ciently learnable, at least by gradient descent (Shalev-Shwartz et al., 2018). More generally, the typical behavior of a network used in practice, the practical expressivity, may be very dif- ferent from what is theoretically attainable. To adequately explain the power of deep learning, it is necessary to con- sider networks with parameters as they will naturally occur before, during, and after training.
# 1. Introduction
A growing ï¬eld of theory has sought to explain the broad success of deep neural networks via a mathematical charac- terization of the ability of these networks to approximate dif-
1Department of Mathematics, Texas A&M University and Facebook AI Research, New York 2University of Pennsylvania. Correspondence to: Boris Hanin <bhanin@tamu.edu>, David Rolnick <drolnick@seas.upenn.edu>.
Proceedings of the 36 th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s).
Networks with a piecewise linear activation (e.g. ReLU, hard tanh) compute piecewise linear functions for which in- put space is divided into pieces, with the network computing a single linear function on each piece (see Figures 1-4). Fig- ure 2 shows how the complexity of these pieces, which we refer to as linear regions, changes in a deep ReLU net with two-dimensional inputs. Each neuron in the ï¬rst layer splits the input space into two pieces along a hyperplane, ï¬tting a different linear function to each of the pieces. Subsequent layers split the regions of the preceding layers. The local density of linear regions serves as a convenient proxy for the local complexity or smoothness of the network, with the ability to interpolate a complex data distribution seeming to require ï¬tting many relatively small regions. The topic of
Complexity of Linear Regions in Deep Networks
counting linear regions is taken up by a number of authors (Telgarsky, 2015; Montufar et al., 2014; Serra et al., 2018; Raghu et al., 2017).
A worst case estimate is that every neuron in each new layer splits each of the regions present at the previous layer, giving a number of regions exponential in the depth. Indeed this is possible, as examined extensively e.g. in Montufar et al. (2014). An example of Telgarsky (2015) shows that a sawtooth function with 2n teeth can be expressed exactly using only 3n + 4 neurons, as shown in Figure 3. However, even slightly perturbing this network (by adding noise to the weights and biases) ruins this beautiful structure and severely reduces the number of linear pieces, raising the question of whether typical neural networks actually achieve the theoretical bounds for numbers of linear regions.
1 2 1 2 1 2 Input x aps -- <e Sawtooth(x) rye~<4 No 1 e4 0.5 -0.5 -0.5
1 2 1 2 1 2 Input x aps -- <e Sawtooth(x) rye~<4 No 1 e4 0.5 -0.5 -0.5
Figure 3. The sawtooth function on the left with 2n teeth can be expressed succinctly by a ReLU network with only 3n + 4 neurons (construction from Telgarsky (2015)). However, slight perturba- tion of the weights and biases of the network (by Gaussian noise with standard deviation 0.1) greatly simpliï¬es the linear regions captured by the network.
network with piecewise linear activations (such as ReLU) before, during, and after training. Our main contributions are as follows:
⢠For networks at initialization, the total surface area of the boundary between linear regions scales as the number of neurons times the number of breakpoints of the activation function. This is our main result, from which several corollaries follow (see Theorem 3, Corollary 4, and the discussion in §2).
Figure 2. Evolution of linear regions within a ReLU network for 2-dimensional input. Each neuron in the ï¬rst layer deï¬nes a linear boundary that partitions the input space into two regions. Neurons in the second layer combine and split these linear boundaries into higher level patterns of regions, and so on. Ultimately, the input space is partitioned into a number of regions, on each of which the neural network is given by a (different) linear function. During training, both the partition into regions and the linear functions on them are learned.
Figure 1 also invites measures of complexity for piecewise linear networks beyond region counting. The boundary between two linear regions can be straight or can be bent in complex ways, for example, suggesting the volume of the boundary between linear regions as complexity measure for the resulting partition of input space. In the 2D example of Figure 1, this corresponds to computing perimeters of the linear pieces. As we detail below, this measure has another natural advantage: the volume of the boundary controls the typical distance from a random input to the boundary of its linear region (see §2.2). This measures the stability of the function computed by the network, and it is intuitively related to robustness under adversarial perturbation.
Our Contributions. In this paper, we provide mathematical tools for analyzing the complexity of linear regions of a
⢠In particular, for any line segment through input space, the average number of regions intersecting it is linear in the number of neurons, far below the exponential number of regions that is theoretically attainable.
⢠Theorem 3 also allows us to conclude that, at initial- ization, the average distance from a sample point to the nearest region boundary is bounded below by a constant times the reciprocal of the number of neurons (see Corollary 5).
⢠We ï¬nd empirically that both the number of regions and the distance to the nearest region boundary stay roughly constant during training and in particular are far from their theoretical maxima. That this should be the case is strongly suggested by Theorem 3, though not a direct consequence of it.
Overall, our results stress that practical expressivity lags sig- niï¬cantly behind theoretical expressivity. Moreover, both our theoretical and empirical ï¬ndings suggest that for cer- tain measures of complexity, trained deep networks are remarkably similar to the same networks at initialization.
Complexity of Linear Regions in Deep Networks
In the next section, we informally state our theoretical and empirical results and explore the underlying intuitions. De- tailed descriptions of our experiments are provided in §3. The precise theorem statements for ReLU networks can be found in §5. The exact formulations for general piecewise linear networks are in Appendix A, with proofs in the rest of the Supplementary Material. In particular, Appendix B con- tains intuition for how our proofs are shaped, while details are completed in §C-D.
Output of the network Input
Figure 4. Graph of function computed by a ReLU net with input and output dimension 1 at initialization. The weights of the net- work are He normal (i.i.d. normal with variance = 2/fan-in) and the biases are i.i.d. normal with variance 10â6.
The constructions in Montufar et al. (2014); Telgarsky (2015); Raghu et al. (2017); Serra et al. (2018) indicate that the bound in (1) is very far from sharp for shallow and wide networks but that exponential growth in the number of regions can be achieved in deep, skinny networks for very special choices of weights and biases. This is a man- ifestation of the expressive power of depth, the idea that repeated compositions allow deep networks to capture com- plex hierarchical relations more efï¬ciently per parameter than their shallow cousins. However, there is no non-trivial lower bound for the number of linear regions:
min #{regions} = 1, âN .
The minimum is attained by setting all weights and biases to 0. This raises the question of the behavior for the average number of regions when the weights and biases are chosen at random (e.g. at initialization). Intuitively, conï¬gurations of weights and biases that come close to saturating the exponential upper bound (1) are numerically unstable in the sense that a small random perturbation of the weights and biases drastically reduces the number of linear regions (see Figure 3 for an illustration). In this direction, we prove a somewhat surprising answer to the question of how many regions N has at initialization. We state the result for ReLU but note that it holds for any piecewise linear, continuous activation function (see Theorems 3 and 6).
# 2. Informal Overview of Results
This section gives an informal introduction to our results. We begin in §2.1 by describing the case of networks with in- put dimension 1. In §2.2, we consider networks with higher input dimension. For simplicity, we focus throughout this section on fully connected ReLU networks. We emphasize, however, that our results apply to any piecewise linear acti- vation. Moreover, the upper bounds we present in Theorems 1, 2, and 3 (and hence in Corollaries 4 and 5) can also be generalized to hold for feed-forward networks with arbitrary connectivity, though we do not go into details in this work, for the sake of clarity of exposition.
Theorem 1 (informal). Let N be a network with piecewise linear activation with input and output dimensions of N both equal 1. Suppose the weights and biases are randomly initialized so that for each neuron z, its pre-activation z(x) has bounded mean gradient
E{||Vz(z)||] < C, some C > 0. (2)
This holds, for example, for ReLU networks initialized with independent, zero-centered weights with variance 2/fan-in. Then, for each subset I â R of inputs, the average number of linear regions inside I is proportional to the number of neurons times the length of I
# 2.1. Number of Regions in 1D
Consider the simple case of a ReLU net N with input and output dimensions equal to 1. Such a network computes a piecewise linear function (see Figure 4), and we are in- terested in understanding both at initialization and during training the number of distinct linear regions. There is a simple universal upper bound:
E [#{regions in I}] â |I| · T · #{neurons},
where T is the number of breakpoints in the non-linearity of N (for ReLU nets, T = 1). The same result holds when computing the number of linear regions along any ï¬xed 1-dimensional curve in a high-dimensional input space.
max #{regions} ⤠2#neurons, (1)
where the maximum is over all settings of weight and biases. This bound depends on the architecture of N only via the number of neurons. For more reï¬ned upper bounds which take into account the widths of the layers, see Theorem 1 in Raghu et al. (2017) and Theorem 1 in Serra et al. (2018).
This theorem implies that the average number of regions along a one-dimensional curve in input space is proportional to the number of neurons, but independent of the arrange- ment of those neurons. In particular, a shallow network and a deep network will have the same complexity, by this mea- sure, as long as they have the same total number of neurons. Of course, as |I| grows, the bounds in Theorem 1 become
Complexity of Linear Regions in Deep Networks
less sharp. We plan to extend our results to obtain bounds on the total number of regions on all of R in the future. In particular, we believe that at initialization the mean total number of linear regions N is proportional to the number of neurons (this is borne out in Figure 5, which computes the total number of regions on an inï¬nite line).
studying the volume density (3) is that it gives bounds from below for distance (x, BN ), which in turn provides insight into the nature of the computation performed by N . Indeed, the exact formula
distance (x, By) = notin, Ae) â b.|/ |Vz(x)||},
Theorem [I] defies the common intuition that, on average, each layer in NV multiplies the number of regions formed up to the previous layer by a constant larger than one. This would imply that the average number of regions is expo- ential in the depth. To provide intuition for why this is ot true for random weights and biases, consider the effect each neuron separately. Suppose the pre-activation z(x) a neuron z satisfies |zâ(x)| = O(1), a hallmark of any reasonable initialization. Then, over a compact set of inputs, the piecewise linear function x ++ z(x) cannot be highly oscillatory over a large portion of the range of z. Thus, if the bias b, is not too concentrated on any interval, we expect the equation z(x) = b, to have O(1) solutions. On average, then, we expect that each neuron adds a constant number of new linear regions. Thus, the average total number of regions should scale roughly as the number of neurons. n n 0) fe)
shows that distance (x, BN ) measures the sensitivity over neurons at a given input x. In this formula, z(x) denotes the pre-activation for a neuron z and bz is its bias, so that ReLU(z(x) â bz) is the post-activation. Moreover, the distance from a typical point to BN gives a heuristic lower bound for the typical distance to an adversarial example: two inputs closer than the typical distance to a linear region boundary likely fall into the same linear region, and hence are unlikely to be classiï¬ed differently. Our next result generalizes Theorem 1.
Theorem 2 (informal). Let N be a network with a piece- wise linear activation, input dimension nin and output di- mension 1. Suppose its weights and biases are randomly initialized as in (2). Then, for K â Rdin bounded, the aver- age volume of the linear region boundaries in K satisï¬es:
Theorem|[I]follows from a general result, Theorem[3} that holds for essentially any non-degenerate distribution of weights and biases and with any input dimension. If ||Vz(x)|| and the bias distribution p,, are well-behaved, then throughout training, Theorem[3|suggests the number of linear regions along a 1-dimensional curve in input space scales like the number of neurons in NV. Figures[5]6]show experiments that give empirical verification of this heuristic.
voln;,â1 (By 1K) E voln,,, (A) = T- {neurons},
where T is the number of breakpoints in the non-linearity of N (for ReLU nets, T = 1). Moreover, if x â [0, 1]nin is uniformly distributed, then the average, over both x and the weights/biases of N , distance from x to BN satisï¬es
E [distance (x, BN )] ⥠C (#{neurons})â1 , C > 0.
# 2.2. Higher-Dimensional Regions
Experimentally, distance (x, BN ) remains comparable to (#{neurons})â1 throughout training (see Figure 6).
For networks with input dimension exceeding 1, there are several ways to generalize counting linear regions. A unit- matching heuristic applied to Theorem 1 suggests
# 3. Experiments
#{regions} = #{neurons}nin , nin = input dim.
Proving this statement is work in progress by the authors. Instead, we consider here a natural and, in our view, equally important generalization. Namely, for a bounded K â Rnin , we consider the (nin â 1)-dimensional volume density
Voln;, 1 (By NK) / voln,,(K),
(3)
where
We empirically veriï¬ed our theorems and further examined how linear regions of a network change during training. All experiments below were performed with fully-connected networks, initialized with He normal weights (i.i.d. with variance 2/fan-in) and biases drawn i.i.d. normal with vari- ance 10â6 (to prevent collapse of regions at initialization, which occurs when all biases are uniquely zero). Training was performed on the vectorized MNIST (input dimension 784) using the Adam optimizer at learning rate 10â3. All networks attain test accuracy in the range 95 â 98%.
BN = {x | âN (x) is not continuous at x}
(4)
is the boundary of the linear regions for N . When nin = 1,
# 3.1. Number of Regions Along a Line
vol0 (BN â© K) + 1 = #{regions in K},
and hence the volume density (3) truly generalizes to higher input dimension of the number of regions. One reason for
We calculated the number of regions along lines through the origin and and a random selected training example in input space. For each setting of weights and biases within the network during training, the number of regions along each
Complexity of Linear Regions in Deep Networks
126, 16, 16, 16] (2, 32, 32, 32) (64, 64, 64, 64] (228, 128, 128, 128) [20, 20, 20] (40, 40, 40} ~ [60, 60, 60] + {80, 80, 80] Number of regions over number of neurons 0 OF 02 05 3a 35
126, 16,16, 16] (32, 32, 32, 32] [64, 64, 64, 64) (128, 128, 128, 128) [20, 20, 20) 00|] â 140, 40, 40} {60, 60, 60] a95||â (80. 80, 80) 110 Number of regions over number of neurons 075 0 Epoch
126, 16, 16, 16] (2, 32, 32, 32) (64, 64, 64, 64] (228, 128, 128, 128) [20, 20, 20] (40, 40, 40} ~ [60, 60, 60] + {80, 80, 80] Number of regions over number of neurons 0 OF 02 05 3a 35 126, 16,16, 16] (32, 32, 32, 32] [64, 64, 64, 64) (128, 128, 128, 128) [20, 20, 20) 00|] â 140, 40, 40} {60, 60, 60] a95||â (80. 80, 80) 110 Number of regions over number of neurons 075 0 Epoch
Figure 5. We here show how the number of regions along 1D lines in input space changes during training. In accordance with Theorem 3, we scale the number of regions by the number of neurons. Plots show (a) early training, up through 0.5 epochs, and (b) later training, up through 20 epochs. Note that for all networks, number of regions is a ï¬xed constant times the number of neurons at initialization, as predicted, and that the number decreases (slightly) early in training before rebounding. [n1, n2, n3] in the legend corresponds to an architecture with layer widths 784 (input), n1, n2, n3, 10 (output).
0.035 (26, 16, 16, 16) (32, 32, 32, 32] (64, 64, 64, 64) (228, 128, 128, 128) (20, 20, 20] [40, 40, 40] (60, 60, 60) [80, 80, 80) (25, 25, 25, 25, 25] [50, 50, 50, 50, 50) (75, 75, 75, 75, 75) re onl | 025, \ TILT Itl o.o10 Distance to nearest boundary 000} 700 200 300 00 300 00 Number of neurons
Distance times number of neurons Fs 20 fy 3 ry Epoch
Distance times number of neurons 0 20 w o wo Too Test accuracy
0.035 (26, 16, 16, 16) (32, 32, 32, 32] (64, 64, 64, 64) (228, 128, 128, 128) (20, 20, 20] [40, 40, 40] (60, 60, 60) [80, 80, 80) (25, 25, 25, 25, 25] [50, 50, 50, 50, 50) (75, 75, 75, 75, 75) re onl | 025, \ TILT Itl o.o10 Distance to nearest boundary Distance times number of neurons 000} Distance times number of neurons Fs 20 0 20 w o wo Too 700 200 300 00 300 00 fy 3 Number of neurons ry Epoch Test accuracy
Figure 6. We here consider the average distance to the nearest boundary, as evaluated over 10000 randomly selected sample points. In (a) we show that this distance is essentially bounded between 0.4/#{neurons} and 1.5/#{neurons}. Accordingly, in the next plot, we normalize the distance to the nearest boundary by dividing by the number of neurons. We plot this quantity against (b) epoch and (c) test accuracy. Observe that, in keeping with the ï¬ndings of Figure 5, the distance to the nearest boundary ï¬rst increases quickly (as the number of regions decreases), then rebounds more slowly as the network completes training. [n1, n2, n3] in the legend corresponds to an architecture with layer widths 784 (input), n1, n2, n3, 10 (output).
line is calculated exactly by building up the network one layer at a time and calculating how each region is split by the next layer of neurons. Figure 5 represents the average over 5 independent runs, from each of which we sample 100 lines; variance across the different runs is not signiï¬cant.
3. The number of regions actually decreases during the initial part of training, then increases again. We explore this behavior further in other experiments below.
# 3.2. Distance to the Nearest Region Boundary
Figure 5 plots the average number of regions along a line, di- vided by the number of neurons in the network, as a function of epoch during training. We make several observations:
1. As predicted by Theorem 3, all networks start out with the number of regions along a line equal to a constant times the number of neurons in the network (the con- stant in fact appears very close to 1 in this case).
2. Throughout training, the number of regions does not deviate signiï¬cantly from the number of neurons in the network, staying within a small constant of the value at initialization, in keeping with our intuitive under- standing of Theorem 3 described informally around Theorem 1 above.
We calculated the average distance to the nearest boundary for 10000 randomly selected input points, for various net- works throughout training. Points were selected randomly from a normal distribution with mean and variance matching the componentwise mean and variance of MNIST training data. Results were averaged over 12 independent runs, but variance across runs is not signiï¬cant. Rerunning these ex- periments with sample points selected randomly from (i) the training data or (ii) the test data yielded similar results to random sample points.
In keeping with our results in the preceding experiment, the distance to the nearest boundary ï¬rst increases then de- creases during training. As predicted by Theorem 2, we ï¬nd that for all networks, the distance to the nearest boundary
Complexity of Linear Regions in Deep Networks
is well-predicted by 1/#{neurons}. Throughout training, we ï¬nd that it approximately varies between the curves 0.4/#{neurons} and 1.5/#{neurons} (Figure 6(a)). At initialization, as we predict, all networks have the same value for the product of number of neurons and distance to the nearest region boundary (Figure 6(b)); these prod- ucts then diverge (slightly) for different architectures, ï¬rst increasing rapidly and then decreasing more slowly.
We ï¬nd Figure 6(c) fascinating, though we do not com- pletely understand it. It plots the product of number of neurons and distance to the nearest region boundary against the test accuracy. It suggests two phases of training: ï¬rst regions expand, then they contract. This lines up with ob- servations made in Arpit et al. (2017) that neural networks âlearn patterns ï¬rstâ on which generalization is simple and then reï¬ne the ï¬t to encompass memorization of individual samples. A generalization phase would suggest that regions are growing, while memorization would suggest smaller regions are ï¬t to individual data points. This is, however, speculation and more experimental (and theoretical) explo- ration will be required to conï¬rm or disprove this intuition. We found it instructive to consider the full distribution of
3000 WE Epoch 0 lm Epoch 1 [= Epoch 20 2000 1500 1000 Number of sample points ~8 ~6 4 2 0 2 log distance to nearest boundary
Figure 7. Distribution of log distances from random sample points to the nearest region boundary for a network of depth 4 and width 16, at initialization and after 1 and 20 epochs of training on MNIST.
distances from sample points to their nearest boundaries, rather than just the average. For a single network (depth 4, width 16), Figure 7 indicates that this distribution does not signiï¬cantly change during training, although there appears to be a slight skew towards larger regions, in agreement with the ï¬ndings in Novak et al. (2018). The histogram shows log-distances. Hence, distance to the nearest region boundary varies over many orders of magnitude. This is consistent with Figures 1 and 4, which lend credence to the intuition that small distances to the nearest region bound- ary are explained by the presence of many small regions. According to Theorem 3, this should correlate with a com- bination of regions in input space at which some neurons have a large gradient and neurons with highly peaked biases distributions. We hope to return to this in future work.
# 3.3. Regions Within a 2D Plane
We visualized the regions of a network through training. Speciï¬cally, following experiments in Novak et al. (2018), we plotted regions within a plane in the 784-dimensional in- put space (Figure 8) through three data points with different labels (0, 1, and 2, in our case) inside a square centered at the circumcenter of the three examples. The network shown has depth 3 and width 64. We observe that, as expected from our other plots, the regions expand initially during training and then contract again. We expect the number of regions within a 2-dimensional subspace to be on the order of the square of the number of neurons â that is, (64·3)2 â 4Ã104, which we indeed ï¬nd.
Our approach for calculating regions is simple. We start with a single region (in this case, the square), and subdi- vide it by adding neurons to the network one by one. For each new neuron, we calculate the linear function it deï¬nes on each region, and determine whether that region is split into two. This approach terminates within a reasonable amount of time precisely because our theorem holds: there are relatively few regions. Note that we exactly determine all regions within the given square by calculating all region boundaries; thus our counts are exact and do not miss any small regions, as might occur if we merely estimated regions by sampling points from input space.
# 4. Related Work
There are a number of works that touch on the themes of this article: (i) the expressivity of depth; (ii) counting the number of regions in networks with piecewise linear activations; (iii) the behavior of linear regions through training; and (iv) the difference between expressivity and learnability. Related to (i), we refer the reader to Eldan & Shamir (2016); Telgarsky (2016) for examples of functions that can be efï¬ciently represented by deep but not shallow ReLU nets. Next, still related to (i), for uniform approximation over classes of functions, again using deep ReLU nets, see Yarotsky (2017); Rolnick & Tegmark (2018); Yarotsky (2018); Petersen & Voigtlaender (2018). For interesting results on (ii) about counting the maximal possible number of linear regions in networks with piecewise linear activations see Bianchini & Scarselli (2014); Montufar et al. (2014); Poole et al. (2016); Arora et al. (2018); Raghu et al. (2017). Next, in the vein of (iii), for both a theoretical and empirical perspective on the number of regions computed by deep networks and speciï¬cally how the regions change during training, see Poole et al. (2016); Novak et al. (2018). In the direction of (iv), we refer the reader to Shalev-Shwartz et al. (2018); Hanin & Rolnick (2018); Hanin (2018). Finally, for general insights into learnability and expressivity in deep vs. shallow networks see Mhaskar & Poggio (2016); Mhaskar et al. (2016); Zhang et al. (2017); Lin et al. (2017); Poggio et al.
Complexity of Linear Regions in Deep Networks
Epoch 0: 9744 regions Epoch 1: 4196 regions Epoch 20: 8541 regions
Figure 8. Here we show the linear regions that intersect a 2D plane through input space for a network of depth 3 and width 64 trained on MNIST. Black dots indicate the positions of the three MNIST training examples deï¬ning the plane. Note that we obtain qualitatively different pictures from Novak et al. (2018), which may result partially from our using ReLU activation instead of ReLU6.
(2017); Neyshabur et al. (2017).
# 5. Formal Statement of Results
A1: The conditional distribution of any collection of biases bz1, . . . , bzk , given all the other weights and biases, has a density Ïbz1 ,...,bzk (b1, . . . , bk) with respect to Lebesgue measure on Rk.
To state our results precisely, we ï¬x some notation. Let d, nin, n1, . . . , nd ⥠1 and consider a depth d fully con- nected ReLU net N with input dimension nin, output di- mension 1, and hidden layer widths nj, j = 1, . . . , d â 1. As explained in the introduction, a generic conï¬guration of its weights and biases partitions the input space Rnin into a union of polytopes Pj with disjoint interiors. Restricted to each Pj, N computes a linear function.
Our main mathematical result, Theorem 3, concerns the set BN of points x â Rnin at which the gradient âN is discontinuous at x (see (4)). For each k = 1, . . . , nin, we deï¬ne
BN ,k = the â(nin â k)âdimensional pieceâ of BN .
More precisely, we set BN ,0 := â
and recursively deï¬ne BN ,k to be the set of points x â BN \{BN ,0âªÂ· · ·âªBN ,kâ1} so that in a neighborhood of x the set BN \{BN ,0 ⪠· · · ⪠BN ,kâ1} coincides with a co-dimension k hyperplane.
For example, when nin = 2, the linear regions Pj are poly- gons, the set BN ,1 is the union of the open line segments making up the boundaries of the Pj, and BN ,2 is the collec- tion of vertices of the Pj. Theorem 3 provides a convenient formula for the average of the (nin â k)âdimensional vol- ume of BN ,k inside any bounded, measurable set K â Rnin . To state the result, for every neuron z in N we will write
A2: The joint distribution of all the weights has a density with respect to Lebesgue measure on R#weights.
These assumptions hold in particular when the weights and biases of N are independent with marginal distributions that have a density relative to Lebesgue measure on R (i.e. at ini- tialization). They hold much more generally, however, and can intuitively be viewed as a non-degeneracy assumption on the behavior of the weights and biases of N . Speciï¬cally, they are used in Proposition 10 to ensure that the set BN ,k consists of inputs where exactly k neurons turn off/on. As- sumption (A1) also allows us, in Proposition 11, to apply the co-area formula (29) to compute the expect volume of the set of inputs where a given collection of neurons turn on/off. Our main result is the following.
Theorem 3. Suppose N is a feed-forward ReLU net with input dimension n0, output dimension 1, and ran- dom weights/biases. Assume that the distribution of weights/biases satisï¬es Assumptions A1 and A2 above. Then, with the notation (6), for any bounded measurable set K â Rnin and any k = 1, . . . , nin, the average (nin â k)â dimensional volume E [volninâk(BN ,k â© K)] of BN ,k in- side K is
E [volninâk(BN ,k â© K)] (8)
z(x) := pre-activation at z, ¢(z) = layer index of z (6)
(6) and bz := bias at z. Thus, for a given input x â Rn0, the post-activation of z is
of BN ,k inside K is, in the notation (6),
Sf Bla (eld, distinct neurons Z1yey2k
Z(x) := ReLU(z(x)) = max{0, z(x) â bz}. (7)
where Yz1,...,zk (x) is
Theorem 3 holds under the following assumption on the distribution of weights and biases:
Complexity of Linear Regions in Deep Networks
times the indicator function of the event that z; is good at x for each j =1,...,k. Here, Jz,,....2, is the k X nin Jaco- bian of the map x > (z1(x),...,2%(2)),
Wescovse(@)l = det (Jey 242) (Fors vse(®))?)
We prove Corollary 7 in Appendix D. Let us state one ï¬nal corollary of Theorem 3 Corollary 5. Suppose N is as in Theorem 3 and satisï¬es the hypothesis (14) in Corollary 7. Then, for any compact set K â Rnin let x be a uniform point in K. There exists c > 0 independent of K so that
the function Ïbz1 ,...,bzk is the density of the joint distribution of the biases bz1, . . . , bzk , and we say a neuron z is good at x if there exists a path of neurons from z to the output in the computational graph of N so that each neuron along this path is open at x).
E [distance(x, BN )] ⥠c CbiasCgrad#{neurons} .
We prove Corollary[8]in §E] The basic idea is simple. For every ⬠> 0, we have
To evaluate the expression in (8) requires information on the distribution of gradients âz(x), the pre-activations z(x), and the biases bz. Exact information about these quantities is available at initialization (Hanin, 2018; Hanin & Rol- nick, 2018; Hanin & Nica, 2018), yielding the following Corollary.
Corollary 4. With the notation and assumptions of Theorem 3, suppose the weights are independent are drawn from a ï¬xed probability measure µ on R that is symmetric around 0 and then rescaled to have Var[weights] = 2/fan-in. Fix k â {1, . . . , nin}. Then there exists C > 0 for which
E[vol,,,-~(By,7 9 K)] voln;, (A) < (#oer) (Crad « Chias)*,
where
Cbias = sup z sup bâR Ïbz (b)
and
1 1/k < CeO Ei 5 Cyraa = sup sup E||V2(x)|* z g£â¬Rnin
where C > 0 depends only on 1 but not on the architecture of N and n; is the width of the j'â hidden layer. Moreover, we also have similar lower bounds (âeere) ck. < E âee nK)] (10)
where
cbias = inf |b|â¤Î· Ïbz (b),
and
(9)
E [distance(x, By)] > â¬P (distance(x, Byâ) > â¬) ,
with the probability on the right hand side scaling like
1 = voln,, (Le(By) N K)/ voln,,, (E0),
where T.(By;) is the tube of radius ⬠around By. We ex- pect that its volume like â¬voln,,-1(By). Taking « = c/#{neurons} yields the conclusion of Corollary[8]
# 6. Conclusions and Further Work
The question of why depth is powerful has been a persistent problem for deep learning theory, and one that recently has been answered by works giving enhanced expressivity as the ultimate explanation. However, our results suggest that such explanations may be misleading. While we do not speak to all notions of expressivity in this paper, we have both theoretically and empirically evaluated one common measure: the linear regions in the partition of input space deï¬ned by a network with piecewise linear activations. We found that the average size of the boundary of these linear regions depends only on the number of neurons and not on the network depth â both at initialization and during training. This strongly suggests that deeper networks do not learn more complex functions than shallow networks. We plan to test this interpretation further in future work â for example, with experiments on more complex tasks, as well as by investigating higher order statistics, such as the variance.
We do not propose a replacement theory for the success of deep learning; however, prior work has already hinted at how such a theory might proceed. Notably, Ba & Caruana (2014) show that, once deep networks are trained to perform a task successfully, their behavior can often be replicated by shallow networks, suggesting that the advantages of depth may be linked to easier learning.
super lalâ f oye, t n= Pest | ll 4. » oR, e JL nj
with Câ > 0 depending only on the distribution 1 of the weights in. N.
# References
Arora, R., Basu, A., Mianjy, P., and Mukherjee, A. Under- standing deep neural networks with rectiï¬ed linear units. In ICLR, 2018.
Complexity of Linear Regions in Deep Networks
Arpit, D., JastrzËebski, S., Ballas, N., Krueger, D., Bengio, E., Kanwal, M. S., Maharaj, T., Fischer, A., Courville, A., Bengio, Y., et al. A closer look at memorization in deep networks. In ICML, 2017.
Poole, B., Lahiri, S., Raghu, M., Sohl-Dickstein, J., and Gan- guli, S. Exponential expressivity in deep neural networks In NeurIPS, pp. 3360â3368, through transient chaos. 2016.
Ba, J. and Caruana, R. Do deep nets really need to be deep? In NeurIPS, pp. 2654â2662, 2014.
Raghu, M., Poole, B., Kleinberg, J., Ganguli, S., and Dick- stein, J. S. On the expressive power of deep neural net- works. In ICML, pp. 2847â2854, 2017.
Bianchini, M. and Scarselli, F. On the complexity of neu- ral network classiï¬ers: A comparison between shallow and deep architectures. IEEE Transactions on Neural Networks and Learning Systems, 25(8):1553â1565, 2014.
Rolnick, D. and Tegmark, M. The power of deeper networks for expressing natural functions. In ICLR, 2018.
Eldan, R. and Shamir, O. The power of depth for feedfor- ward neural networks. In COLT, pp. 907â940, 2016.
Serra, T., Tjandraatmadja, C., and Ramalingam, S. Bound- ing and counting linear regions of deep neural networks. In ICML, 2018.
Hanin, B. Which neural net architectures give rise to ex- ploding and vanishing gradients? In NeurIPS, 2018.
Shalev-Shwartz, S., Shamir, O., and Shammah, S. Failures of gradient-based deep learning. In ICML, 2018.
Hanin, B. and Nica, M. Products of many large random matrices and gradients in deep neural networks. Preprint arXiv:1812.05994, 2018.
Telgarsky, M. Representation beneï¬ts of deep feedforward networks. Preprint arXiv:1509.08101, 2015.
Hanin, B. and Rolnick, D. How to start training: The effect of initialization and architecture. In NeurIPS, 2018.
Lin, H. W., Tegmark, M., and Rolnick, D. Why does deep and cheap learning work so well? Journal of Statistical Physics, 168(6):1223â1247, 2017.
Telgarsky, M. Beneï¬ts of depth in neural networks. COLT, 2016. In
Yarotsky, D. Error bounds for approximations with deep ReLU networks. Neural Networks, 94:103â114, 2017.
Yarotsky, D. Optimal approximation of continuous func- tions by very deep ReLU networks. In COLT, 2018.
Mhaskar, H., Liao, Q., and Poggio, T. Learning func- tions: when is deep better than shallow. Preprint arXiv:1603.00988, 2016.
Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning requires rethinking general- ization. In ICLR, 2017.
Mhaskar, H. N. and Poggio, T. Deep vs. shallow networks: An approximation theory perspective. Analysis and Ap- plications, 14(06):829â848, 2016.
Montufar, G. F., Pascanu, R., Cho, K., and Bengio, Y. On the number of linear regions of deep neural networks. In NeurIPS, pp. 2924â2932, 2014.
Neyshabur, B., Bhojanapalli, S., McAllester, D., and Srebro, N. Exploring generalization in deep learning. In NeurIPS, pp. 5947â5956, 2017.
Novak, R., Bahri, Y., Abolaï¬a, D. A., Pennington, J., and Sohl-Dickstein, J. Sensitivity and generalization in neural networks: an empirical study. In ICLR, 2018.
Petersen, P. and Voigtlaender, F. Optimal approximation of piecewise smooth functions using deep ReLU neural networks. Neural Networks, 108:296â330, 2018.
Poggio, T., Mhaskar, H., Rosasco, L., Miranda, B., and Liao, Q. Why and when can deep â but not shallow â networks avoid the curse of dimensionality: a review. International Journal of Automation and Computing, 14(5):503â519, 2017.
Complexity of Linear Regions in Deep Networks
# A. Formal Statement of Results for General Piecewise Linear Activations
every collection of distinct neurons z1, . . . , zk, the average magnitude of the product of gradients is uniformly bounded:
In §5, we stated our results in the case of ReLU activation, and now frame these results for a general piecewise linear non-linearity. We ï¬x some notation. Let Ï : R â R be a continuous piecewise linear function with T breakpoints ξ0 = ââ < ξ1 < ξ2 < · · · < ξT < ξT +1 = â. That is, there exist pj, qj â R so that
t⬠[â¬),)44] g(t) qt+pj, Gy Agq+- C1)
The analog of Theorem 3 for general Ï is the following. Theorem 6. Let Ï : R â R be a continuous piecewise linear function with T breakpoints ξ1 < · · · < ξT as in (11). Suppose N is a fully connected network with input dimension nin, output dimension 1, random weights and biases satisfying A1 and A2 above, and non-linearity Ï.
k sup E Il ||Vz5(@)|]]} < Chad: (14) neurons 21,...,2% inputs « j=1
Then we have the following upper bounds
# voln,;,, (KX) < (fee)
(15)
⤠(T · 2CgradCbias)k,
where T is the number of breakpoints in the non-linearity Ï of N (see (11)) and
Cbias = sup z sup bâR Ïbz (b).
Let Jz... be the k X nin Jacobian of the map « > (z1(2),...,2%()),
We prove Corollary 7 in §D and state a ï¬nal corollary of Theorem 3:
[Fesoneu (0) = det (Jes.cu() Jessel")
and write pp,,,...,b., for the density of the joint distribution of the biases bz,,...,bz,. We say a neuron z is good at x if there exists a path of neurons from z to the output in the computational graph of N so that each neuron Z along this path is open at x (i.e. b'(Z(x) â bz) # 0). Then, for any bounded, measurable set K C IR" and any k =1,...,Min, the average (nin â k)-dimensional volume
bz) # 0).
Corollary 8. Suppose N is as in Theorem 3 and satisï¬es the hypothesis (14) in Corollary 7 with constants Cbias, Cgrad. Then, for any compact set K â Rnin let x be a uniform point in K. There exists c > 0 independent of K so that
E [distance(x, BN )] ⥠c T CbiasCgrad#{neurons} ,
where, as before, T is the number of breakpoints in the non-linearity Ï of N .
E [volninâk(BN ,k â© K)]
We prove Corollary|8]in §E] The basic idea is simple. For every ⬠> 0, we have
of BN ,k inside K is, in the notation of (6),
E [distance(x, By)| > â¬P (distance(x, By) > â¬),
T SY [apstunt lar, a2) K distinct neurons #1,....ih=1 Z1yeeyZk in J
with the probability on the right hand side scaling like
1 = voln,,, (Te(By) NK) / voln,,, (K),
where Y (ξi1 ,...,ξik ) z1,...,zk (x) equals
(z1(x) â ξi1 , . . . , zk(x) â ξik ) (13) multiplied by the indicator function of the event that zj is good at x for every j.
where T.(By;) is the tube of radius ⬠around By. We ex- pect that its volume like â¬voln,,-1(By). Taking « = c/#{neurons} yields the conclusion of Corollary[8]
# B. Outline of Proof of Theorem 6
Note that if in the definition (IT) of @ we have that the pos- sible values ¢'(t) ⬠{qo,.--, qr} do not include 0, then we may ignore the event that z; are good at x in the definition of yea Ste )
Corollary 7. With the notation and assumptions of Theo- rem 6, suppose in addition that the weights and biases are independent. Fix k â {1, . . . , nin} and suppose that for
The purpose of this section is to give an intuitive explanation of the proof of Theorem 3. We ï¬x a non-linearity Ï : R â R with breakpoints ξ1 < · · · < ξT (as in (11)) and consider a fully connected network N with input dimension nin ⥠1, output dimension 1, and non-linearity Ï. For each neuron z in N , we write
L(z) := layer index of z (16)
Complexity of Linear Regions in Deep Networks
and set
Sz := {x â Rnin | z(x) â bz â {ξ1, . . . , ξT }}.
7)
at which the output of N is affected by the post-activations of these neurons. Proposition 9 shows that we may represent BN as a disjoint union
We further
where
(18) Nin By =U Bye. k=1
_
S,:= S,N0, (18)
oe {ren [istdammayah.
Intuitively, the set S_ is the collection of inputs for which the neuron z turns from on to off. In contrast, the set O is the collection of inputs z ⬠R"⢠for which N is open in the sense that there is a path from the input to the output of N so that all neurons along this path compute are not constant in a neighborhood x. Thus, S., is the set of inputs at which neuron z switches between its linear regions and at which the output of neuron z actually affects the function computed by VV.
We remark here that O = 0 if in the non-linearity ¢ there are no linear pieces at which the slopes on ¢ equals 0 (i.e. g; 4 0 for all 7 in the definition of ¢). If, for example, ¢ is ReLU, then O need not be empty.
The overall proof of Theorem 3 can be divided into several steps. The ï¬rst gives the following representation of BN . Proposition 9. Under Assumptions A1 and A2 of Theorem 3, we have, with probability 1,
By= UU Sz neurons z
The precise proof of Proposition }]can be found in §C-1] below. The basic idea is that if for all y near a fixed input « ⬠Râ*, none of the pre-activations z(y) â bz cross the boundary of a linear region for ¢, then x ¢ By. Thus, By C U,S:. Moreover, if a neuron z satisfies z(x) â b, = S; for some i but there are no open paths from z to the output of NV for inputs near x, then z is dead at x and hence does not influence VV atv. Thus, we expect the more refined inclusion By C U, Sz. Finally, if re S. for some z then a ⬠By unless the contribution from other neurons to VV (y) for y near x exactly cancels the discontinuity in Vz(a). This happens with probability 0.
The next step in proving Theorem 3 is to identify the por- tions of BN of each dimension. To do this, we write for any distinct neurons z1, . . . , zk,
where
c U Smal UY & distinct neurons ZEA yh BL yeeey. Zh By k=
In words, BN ,k is the collection of inputs in O at which ex- actly k neurons turn from on to off. The following Proposi- tion shows that BN ,k is precisely the â(nin âk)-dimensional piece of BN â (see (5)). Proposition 10. Fix k = 1, . . . , nin, and k distinct neurons z1, . . . , zk in N . Then, with probability 1, for every x â BN ,k there exists a neighborhood in which BN ,k coincides with a (nin â k)âdimensional hyperplane.
We prove Proposition [10] in The idea is that each Sz,,...,2, 18 piecewise linear and, with probability 1, at every point at which exactly the neurons 2),...,Z% con- tribute to By, its co-dimension is the number of linear conditions needed to define it. Observe that with prob- ability 1, the bias vector (bz,,...,bz,,,) for any collec- tion z1,..., 241 of distinct neurons is a regular value for xr (z1(x),...,2&41(x)). Hence,
Proposition 10 thus implies that, with probability 1,
Volk (Bx) = Wl (Seren) + distinct neurons Zl yes Zk
The ï¬nal step in the proof of Theorem 3 is therefore to prove the following result. Proposition 11. Let z1, . . . , zk be distinct neurons in N . Then, for any bounded, measurable K â Rnin ,
E [voln nk (S., jos ».)| = I. y B[ysus (@)] de, tay sth=l
where Y (Si1 ,...,Sik ) z1,...,zk is deï¬ned as in (13).
We provide a detailed proof of Proposition[IT]in {C.3] The intuition is that the image of the volume element dx under x +> z(x) â S; is the volume element
The set S., yeess z, iS, intuitively, the collection of inputs at which z;(x) âb,, switches between linear regions for @ and
Complexity of Linear Regions in Deep Networks
from (13). The probability of an inï¬nitesimal neighborhood dx of x belonging to a (nin â k)-dimensional piece of BN is therefore the probability
To each path we may associate a sequence of weights:
w(j) γ := weight connecting z(jâ1) γ to z(j) γ , j = 1, . . . , d.
Poy sonido, (21() â Sins +5 24(2) â Six) || Jey,....2(a)|| dx
that the vector of biases (bz, j = 1,...,k) belongs to the image of dar under map (z;(x) â $;,,j =1,...,h) for some collection of breakpoints S;,. The formal argument uses the co-area formula (see and (30).
We will also deï¬ne
g(x) = Sad (20 gy (68H) i=0
For instance, if Ï = ReLU, then
q(j) γ (x) = 1{z(j) γ (x)âbzâ¥0},
# C. Proof of Theorem 3
# C.1. Proof of Proposition 9
and in general only one term in the deï¬nition of q(j) non-zero for each z. We may write
) is
Recall that the non-linearity Ï : R â R is continuous and piecewise linear with T breakpoints ξ1 < · · · < ξT , so that, with ξ0 = ââ, ξT +1 = â, we have
Nin d = Ss Yi Ss Il (yyw? + constant, i=1 paths y:iâout j=1
t â (ξi, ξi+1) â Ï(t) = qit + pi
with g; 4 qi41. For each x ⬠R"â¢, write 2(a) â be © (&, 41) and qi # 0 for some i} 2(x) â bz ⬠(&, 41) and qi = 0 for some i} 2(x) â bz = & for some i} Intuitively, Z;, are the neurons that, at the input x are open (i.e. contribute to the gradient of the output Vâ(x)) but do not change their contribution in a neighborhood of x, Z, are the neurons that are closed, and Ze are the neurons that, at x, produce a discontinuity in the derivative of N. Thus, for example, if ¢ = ReLU, then
# (21) , then for any path γ through a
Note that if 2 ⬠(U. 5.) ', then for any path y through a neuron z ⬠Z°, we have
â j s.t. z(j) γ â Z â x .
This is an open condition in light of (20), and hence for all y in a neighborhood of x and for any path γ through a neuron z â Z 0
â j s.t. z(j) γ â Z â y .
Thus, since the summand in vanishes identically if YNZ, F# 0, we find that for y in a neighborhood of any rE (u. S. ) we write
rE (u. S. ) we may write
Z â â â {+, â, 0}.
bz) = *}, We begin by proving that By C U, contrapositive = {z | sgn(2(x) â
We begin by proving that By C U, S. by checking the contrapositive
nin d = Ss Yi Ss Il a? (yyw? + constant. i=1 paths y:iâout j=1 Ze
(U 5.) C BY. (19)
Fix x ⬠(U. 3.) . Note that Z* are locally constant in the sense that there exists ¢ > 0 so that for all y with lly â 2|| < e, we have
Z â x â Z â y , Z + x â Z + y , Z + y ⪠Z 0 y â Z + x ⪠Z 0 x. (20)
(22) But, again by (20), for any ï¬xed x, all y in a neighborhood of x and each z â Z + x , we have z â Z + y as well. Thus, in particular,
z(x) â bz â (ξi, ξi+1) â z(y) â bz â (ξi, ξi+1).
Thus, for y sufï¬ciently close to x, we have for every path in the sum (22) that
Moreover, observe that if in the deï¬nition (11) of Ï none of the slopes qi equal 0, then Z â y = â
for every y. To prove (19), consider any path γ from the input to the output in the computational graph of N . Such a path consists of d + 1 neurons, one in each layer:
γ (y) = q(j) q(j) γ (x).
Therefore, the partial derivatives (âN /âyi)(y) are indepen- dent of y in a neighborhood of x and hence continuous at x. This proves (19). Let us now prove the reverse inclusion:
y= (2, ...,4) ; 2) =j.
Us. Cc By (23)
Complexity of Linear Regions in Deep Networks
Note that, with probability 1, we have
volninâ1(Sz1 â© Sz2 ) = 0
Substituting into this expression y = y± n , we ï¬nd that there exists a non-empty collection Î of paths from the input to the output of N so that
for any pair of distinct neurons 21, z2. Note also that since x ++ N(x) is continuous and piecewise linear, the set By is closed. Thus, it is enough to show the slightly weaker inclusion
Uls.\ Us: z Bfz c By (24)
# Bfz Usy: Sz
since the closure of S.\ Usy: Sz equals S.. Fix a neuron z and suppose x ⬠S.\ Usy. S. By definition, we have that for every neuron Z # z, either
# 2eZy
# or 2EZy.
2eZy or 2EZy.
# x
This has two consequences. First, by 20), the map y > z(y) is linear in a neighborhood of x. Second, in a neighbor- hood of , the set S. coincides with S,. Hence, combining these facts, near x the set S, coincides with the hyperplane
aN aN ! â (9) By UO) By, Mm = Ya He wi
where
aj â {â1, 1}, c(j) γ â {q0, . . . , qT }.
Note that the expression above is a polynomial in the weights of NV. Note also that, by construction, this polyno- mial is not identically zero due to the condition (26). There are only finitely many such polynomials since both a; and ol) range over a finite alphabet. For each such non-zero polynomial, the set of weights at which it vanishes has co-dimension 1. Hence, with probability 1, the difference gx (yt) -â 3X in) is non-zero. This shows that the par- tial derivatives on x ⬠By. are not continuous at x and hence that
{x | z(x) â bz = ξi}, for some i. (25)
# C.2. Proof of Proposition 10
We may take two sequences of inputs y+ sides of this hyperplane so that n , yâ n on opposite
Fix distinct neurons z,..., , 2% and suppose x ⬠Si... 2k but not in S, for any z A 2%,..., 2%. After relabeling, v we may assume that they are ordered by layer index:
lim nââ y+ n = lim nââ yâ n = x
a) < < Cz).
and
H (2(yt) â bz) =i, O'(2(yit) â be) = Gi-1, Wn,
Since « ⬠O, we also have that a ¢ S, for any z # 21,-.-,Zk. Thus, there exists a neighborhood U of x so SOU = 0 for every z 4 2,..., , 2k. Thus, there exists a neighborhood of x on which y ++ 21 (y) is linear.
where the index 7 the same as the one that defines the hyper- plane (25). Further, since By has co-dimension 1 (it is con- tained in the piecewise linear co-dimension 1 set Uz S., for example), we may also assume that y,*, y, ¢ By. Consider any path +y from the input to the output of the computational graph of NV passing through z (so that z = 2 Ye 7). By construction, for every n, we have
1) (yn) FID Un)
Hence, as explained near above, S., is a hyperplane near x. We now restrict our inputs to this hyperplane and repeat this reasoning to see that, near x, the set S.,\., is a hyperplane inside Si and hence, near x, is the inter- section of two hyperplanes in R". Continuing in this way shows that in a neighborhood of 2, the set Sz, ..,2, is equal to the intersection of k hyperplanes in Râ. Thus, S., yee zx \ (Vegersensn 5. k hyperplanes in a neighborhood of each of its points. c ) is precisely the intersection of
and hence, after passing to a subsequence, we may assume that the symmetric difference
Ze. AZr an) (26)
of the paths that contribute to the representation for Ys; Yn is fixed and non-empty (the latter since it always contains z). For any y ¢ By, we may write, for each
d SS [Le ww? en paths y:iout j=1 ycZt S S = Il
# C.3. Proof of Proposition 11
Let z1, . . . , zk be distinct neurons in N , and ï¬x a com- pact set K â Rnin . We seek to compute the mean of volninâk
i Ups sonar ny Alm â#(2) (28) S. Sey. 2, j=l,..,k
# S. Sey. 2, ss
ss fiyesth=l ie 6) 23 is pood at 2) AvOlnga â(@ x), ori aK j=l,...,k
Complexity of Linear Regions in Deep Networks
where weâve set
S (ξi1 ,...,ξik ) z1,...,zk = {x | zj(x) â bzj = ξij , j = 1, . . . , k}.
times the indicator function of the even that zj is good at x for every j. When the weights and biases of N are independent, we may write Ïbz1 ,...,bzk
Note that the map x + (z1(a),..., Zn(x)) is Lipschitz, and recall the co-area formula, which says that if ⬠L1(R") and g : Râ > R"â with m < n is Lipschitz, then
k k J] ., (bs) < ( sup sup pn.(0)) =Ch.,.. jal neurons z beR
Rm gâ1(t) Ï(x) dvolnâm(x)dt (29)
Hence,
equals
[ve ©) || Fg(x)|| dvol, (x), (30)
Note that
where Jg is the m à n Jacobian of g and
)I| = det ((J9(0))(Jala))7)"?. |Jg(x
Jz1,...,zk (x) (Jz1,...,zk (x))T = Gram (âz1(x), . . . , âzk(x)) ,
We assumed that the biases bz1, . . . , bzj have a joint condi- tional density
Poa = Phay y.-sbz, given all other weights and biases. The mean of the term in corresponding to a fixed ⬠= (â¬;,,..., &:,) over the conditional distribution of b.,,..., y bz; is therefore
In ap.,(b) | 1 23 is good at â\ dvol),,, (2), {zâb=E}NK j=1,.5k
where a te ay b = (b1,...,b%) as well as a(x) = (z1(2),..., z,(x)). This can rewritten as va
[ va Pb, (Z RK {2=b}nk )-§)1 23 is good at 2} dvOlnâ K(x r). j= k
# where for any vi â Rn
Gram(v1,...,Uk)i,j = (Vi, Vj)
is the associated Gram matrix. The Gram identity says that
det (Jey, say 2) (Jer.en(@))") equals ||Vzi(2) A+++ A VzK(x)|],
# det
which is the the k-dimensional volume of the parallelopiped in Rnin spanned by {âzj(x), j = 1, . . . , k}. We thus have
p\ 2 k det (Je.ne4(#) Jernee(®)) TL IVa). j=l
Thus, applying the co-area formula (29), (30) shows that the average of (28) over the conditional distribution of bz1 , . . . , bzj is precisely
The estimate (14) proves the upper bound (15). For the special case of Ï = ReLU we use the AM-GM inequality and Jensenâs inequality to write
Yz1,...,zk (x) dx. K
Taking the average over the remaining weighs and biases, we may commute the expectation E [-] with the dx integra since the integrand is non-negative. This completes the proof of Proposition[I1]
k [[lVva@ll| <= j=l k 1 FL Va@ll jel BK <5 3 B {||Vzi|"]
# D. Proof of Corollary 7
We begin by proving the upper bound in (15). By Theorem 3, E [vol (BN ,k â© K)] equals
» distinct neurons 21,.. > i Ess @)] @ar, 2h U1 yeeyth=l
Therefore, by Theorem 1 of Hanin & Nica (2018), there exist C1, C2 > 0 so that
yo ây jaing) | k EIT Ive(@ll| < (cre j=l
where, as in (13), Y (ξi1 ,...,ξik ) z1,...,zk (x) is
x ()I| Po., ,...b.,
(z1(x) â ξi1 , . . . , zk(z) â ξik )
This completes the proof of the upper bound in (15). To prove the power bound, lower bound in (15) we must argue in a different way. Namely, we will induct on k and use the following facts to prove the base case k = 1:
.
Complexity of Linear Regions in Deep Networks
1. At initialization, for each fixed input x, the random variables {11.(2)>»,}} are independent Bernoulli ran- dom variables with parameter 1/2. This fact is proved in Proposition 2 of[Hanin & Nicaj (2018). In particular, the event {z is good at x}, which occurs when there exists a layer j ⬠@(z) + 1,...,d in which z(x) < b, for every neuron, is independent of {z(x), b.} and satisfies
which is bounded below by
Li |bl<n ()) feliveco) - E [|'V2(2)||14eop)sull
Using Cauchy-Schwarz, the term E [|| Vz(x)|] 1g2(¢))}>] is bounded above by
d P (z is good at x) > 1-02", (1) j=l
(lIVe(a)|I?P(le(@)| >),
which using (33) and (32) together with Markovâs inequality, is bounded above by
2. At initialization, for each ï¬xed input x, we have
1 lel? o(y)2] â Ie 2 3F [2(x)?] = âThin, + » %,> (32)
, 1/2 2 &2) 2 | ilell 2 + > 7 1/2 \ n, b; ] in Fy
where Ï2 := Var[biases at layer j]. This is Equation bj (11) in the proof of Theorem 5 from Hanin & Rolnick (2018).
3. At initialization, for every neuron z and each input x, we have B[IV2(@)|?] =2 (33)
we have
E = 2. (33)
Ean OTH,
This follows easily from Theorem 1 of Hanin (2018).
4. At initialization, for each 1 ⤠j ⤠nin and every x â Rnin
Next, using Jensenâs inequality twice, we write
1, 2 log [|V2(x)|]] = 5E [log (|IV2(2)|) | il ~ 2 >t ~ 2
en» (Ze) }] = 3H ev E
where in the last inequality we applied (34). Putting this all together, we ï¬nd that exists c > 0 so that
plus O (oie x), where n, is the width of the j hidden layer and the implied constant depends only on the 4" moment of the measure ju according to which weights are distributed. This estimate follows immedi- ately by combining Corollary 26 and Proposition 28 in Hanin & Nica|(2018).
We begin by proving the lower bound in (15) when k = 1. We use (31) to see that E [volninâ1 (BN â© K)] is bounded below by
(1-dr ») > [® (IlV2(2)|| ps. (2(2))] ae. neurons 2
ElIV=Ce)l one) > | into.) l<n
where
> a{ lll, S92 Te Nin â ez poh b+0(E/2) 2) e .
In particular, we may take
n= sup rex [ll|â 4 Yet Cua Nin
Next, we bound the integrand. Fix x ⬠Râ anda parameter 7 > 0 to be chosen later. The integrand E[||Vz(2x)]] po, (z(a))] is bounded below by E[||Vz(2)|| po. (2(@))1 gee} <n
E[||Vz(2)|| po. (2(@))1 gee} <n > [int 0] ELV Ageeoren]
for C sufï¬ciently large. This completes the proof of the lower bound in (15) when k = 1. To complete the proof of Corollary 7, suppose we have proved the lower bound in (15) for all ReLU networks N and all collections of k â 1 distinct neurons. We may assume after relabeling that the neurons z1, . . . , zk are ordered by layer index:
U(z1) S++ < U(zp).
Complexity of Linear Regions in Deep Networks
With probability 1, the set S,, C IRâ is piecewise linear, co-dimension 1 with finitely many pieces, which we denote by P.. We may therefore rewrite vol,,,,,â~ (S., jess an K) as
_ DL voln te (Sessoace MPa VK). a
We now deï¬ne a new neural network Nα, obtained by restricting N to Pα. The input dimension for Nα equals nin â 1, and the weights and biases of Nα satisfy all the as- sumptions of Corollary 7. We can now apply our inductive hypothesis to the k â 1 neurons z2, . . . , zk in Nα and to the set K â© Pα. This gives
where we abbreviated S<qâ1 := Ut S;,. Using that vol, (B-(N*(Sa))) = vola(Sa) volnâa(B-(R"~â)) = vola(Su)e" âwna
and repeating this argument d â 1 times completes the proof.
We are now ready to prove Corollary 2. Let x â K = [0, 1]nin be uniformly chosen. Then, for any ε > 0, using Markovâs inequality and Lemma 12, we have
SO voln.âk (S:, yes a APA *)| a E k-1 > (nt jut m.(0)) â Elvoln,. 1 (Po K)]- 2 |bl<n
Summing this lower bound over α yields
> (int inf ml) [voln.â2 (S., nk)| . 2 |bl<n
E [distance(x, By,)] > eP (distance(x, By) > â¬) = e(1âP (distance(x, By) < â¬)) = e(1âE[vol,,,, (Tz (By))]) ⬠(: - SO en, ne" E [ng (8a) k=1 IV IV ⬠(: - 3 (CsCimet# (neo) k=1 > e(1 â CâCoraaChiasé# {neurons} )
Applying the inductive hypothesis once more completes the proof.
for some Câ > 0. Taking ⬠to be a small constant times 1/(Cgraa#{neurons}) completes the proof.
# E. Proof of Corollary 8
We will need the following observation.
Lemma 12. Fix a positive integer n ⥠1, and let S â Rn be a compact continuous piecewise linear submanifold with ï¬nitely many pieces. Deï¬ne S0 = â
and let Sk be the union of the interiors of all k-dimensional pieces of S\(S0 ⪠· · · ⪠Skâ1). Denote by Tε(X) the εâtubular neighborhood of any X â Rn. We have
voly (Te(S)) < Sonne * voli (Sk), k=0
# where Ïd := volume of ball of radius 1 in Rd.
Proof. Define d to be the maximal dimension of the linear pieces in S. Let ⬠T,(S). Suppose « ¢ T.(S;,) for all k =0,...,dâ1. Then the intersection of the ball of radius ⬠around s with S is a ball inside Sq = U C R¢. Using the convexity of this ball, there exists a point y in Sq so that the vector x â y is parallel to the normal vector to Sy at y. Hence, x belong to the normal ¢-ball bundle B-(N*(Sa)) (i.e. the union of the fiber-wise ¢-balls in the normal bundle to Sa). Therefore, we have
voln (Tε(S)) ⤠voln(Bε(N â(Sd))) + voln (Tε(Sâ¤dâ1)) , | {
"id": "1812.05994"
} |
1901.08634 | A BERT Baseline for the Natural Questions | This technical note describes a new baseline for the Natural Questions. Our
model is based on BERT and reduces the gap between the model F1 scores reported
in the original dataset paper and the human upper bound by 30% and 50% relative
for the long and short answer tasks respectively. This baseline has been
submitted to the official NQ leaderboard at
ai.google.com/research/NaturalQuestions. Code, preprocessed data and pretrained
model are available at
https://github.com/google-research/language/tree/master/language/question_answering/bert_joint. | http://arxiv.org/pdf/1901.08634 | Chris Alberti, Kenton Lee, Michael Collins | cs.CL | null | null | cs.CL | 20190124 | 20191209 | 9 1 0 2 c e D 9
] L C . s c [
3 v 4 3 6 8 0 . 1 0 9 1 : v i X r a
# A BERT Baseline for the Natural Questions
# Chris Albertiâ Kenton Lee Michael Collinsâ Google Research {chrisalberti,kentonl,mjcollins}@google.com
# Abstract
This technical note describes a new baseline for the Natural Questions (Kwiatkowski et al., 2019). Our model is based on BERT (De- vlin et al., 2018) and reduces the gap between the model F1 scores reported in the origi- nal dataset paper and the human upper bound by 30% and 50% relative for the long and short answer tasks respectively. This baseline has been submitted to the ofï¬cial NQ leader- boardâ . Code, preprocessed data and pre- trained model are availableâ¡.
1
# 1 Introduction
The release of BERT (Devlin et al., 2018) has sub- stantially advanced the state-of-the-art in a number of NLP tasks, in question answering in particular. For example, as of this writing, the top 17 systems on the SQuAD 2.0 leaderboard (Rajpurkar et al., 2018) and the top 5 systems on the CoQA leader- board (Reddy et al., 2018) are all based on BERT. The results obtained by BERT-based question an- swering models are also rapidly approaching the reported human performance for these datasets, with 2.5 F1 points of headroom left on SQuAD 2.0 and 6 F1 points on CoQA.
the Natural Questions (NQ) (Kwiatkowski et al., 2019) might represent a substantially harder research challenge than ques- tion answering tasks like SQuAD 2.0 and CoQA, and that consequently NQ might currently be a good benchmark for the NLP community to fo- cus on. The qualities that we think make NQ more challenging than other question answering datasets are the following: (1) the questions in NQ
were formulated by people out of genuine curios- ity or out of need for an answer to complete an- other task, (2) the questions were formulated by people before they had seen the document that might contain the answer, (3) the documents in which the answer is to be found are much longer than the documents used in some of the existing question answering challenges.
In this technical note we describe a BERT-based model for the Natural Questions. BERT performs very well on this dataset, reducing the gap be- tween the model F1 scores reported in the origi- nal dataset paper and the human upper bound by 30% and 50% relative for the long and short an- swer tasks respectively. However, there is still am- ple room for improvement: 22.5 F1 points for the long answer task and 23 F1 points for the short answer task.
The key insights in our approach are
1. to jointly predict short and long answers in a single model rather than using a pipeline approach,
2. to split each document into multiple training instances by using overlapping windows of tokens, like in the original BERT model for the SQuAD task,
3. to aggressively downsample null instances (i.e. instances without an answer) at training time to create a balanced training set,
4. to use the â[CLS]â token at training time to predict null instances and rank spans at infer- ence time by the difference between the span score and the â[CLS]â score.
# â https://ai.google.com/research/NaturalQuestions â¡https://github.com/google-research/language/tree/
master/language/question answering/bert joint
âAlso afï¬liated with Columbia University, work done at Google.
We refer to our model as BERTjoint to emphasize the fact that we are modeling short and long an- swers in a single model rather than in a pipeline of two models.
In the rest of this note we give further details on how the NQ dataset was preprocessed, we explain the modeling choices we made in our BERT-based model in order to adapt it to the NQ task, and we ï¬nally present our results.
# 2 Data Preprocessing
The Natural Questions (NQ) (Kwiatkowski et al., 2019) is a question answering dataset containing 307,373 training examples, 7,830 development ex- amples, and 7,842 test examples. Each example is comprised of a google.com query and a corre- sponding Wikipedia page. Each Wikipedia page has a passage (or long answer) annotated on the page that answers the question and one or more short spans from the annotated passage containing the actual answer. The long and the short answer annotations can however be empty. If they are both empty, then there is no answer on the page at all. If the long answer annotation is non-empty, but the short answer annotation is empty, then the anno- tated passage answers the question but no explicit short answer could be found. Finally 1% of the documents have a passage annotated with a short answer that is âyesâ or ânoâ, instead of a list of short spans.
Following Devlin et al. (2018) we tokenize ev- ery example in NQ using a 30,522 wordpiece vo- cabulary, then generate multiple instances per ex- ample by concatenating a â[CLS]â token, the to- kenized question, a â[SEP]â token, tokens from the content of the document, and a ï¬nal â[SEP]â token, limiting the total size of each instance to 512 tokens. For each document we generate all possible instances, by listing the document content starting at multiples of 128 tokens, effectively slid- ing a 512 token size window over the entire length of the document with a stride of 128 tokens. On average we generate 30 instances per NQ example. Each instance will be processed independently by BERT.
For each training instance we compute start and end token indices to represent the target answer span. If all annotated short spans are contained in the instance, we set the start and end target in- dices to point to the smallest span containing all the annotated short answer spans. If there are no annotated short spans but there is an annotated long answer span completely contained in the in- stance, we set the start and end target indices to point to the entire long answer span. If no short or
long span can be found in the current instance, we set the target start and end indices to point to the â[CLS]â token. We dub the instances in the last category ânull instancesâ.
Given the large size of documents in NQ and the fact that 51% of the documents are annotated as not having an answer to the query at all, we ï¬nd that about 98% of generated instances are null, therefore for training we downsample null in- stances by 50 times in order to obtain a training set that has roughly as many null instances as non-null instances. This leads to a training set that has ap- proximately 500,000 instances of 512 tokens each. We introduce special markup tokens in the doc- ument to give the model a notion of which part of the document it is reading. The special tokens we introduced are of the form â[Paragraph=N]â, â[Table=N]â, and â[List=N]â at the beginning of the N-th paragraph, list and table respectively in the document. This decision was based on the ob- servation that the ï¬rst few paragraphs and tables in the document are much more likely than the rest of the document to contain the annotated answer and so the model could beneï¬t from knowing whether it is processing one of these passages. Special to- kens are atomic, meaning that they are not split further by the wordpiece model.
We ï¬nally compute for each instance a target answer type as one of ï¬ve values: âshortâ for instances that contain all annotated short spans, âyesâ and ânoâ for yes/no annotations where the instance contains the long answer span, âlongâ when the instance contains the long answer span but there is no short or yes/no answer, and âno- answerâ otherwise. Null instances correspond to the set of instances with the âno-answerâ target an- swer type.
# 3 Model
Formally, we deï¬ne a training set instance as a four-tuple
(c, s, e, t)
where c is a context of 512 wordpiece ids (in- cluding question, document tokens and markup), s, e â {0, 1, . . . , 511} are inclusive indices point- ing to the start and end of the target answer span, and t â {0, 1, 2, 3, 4} is the annotated answer type, corresponding to the labels âshortâ, âlongâ, âyesâ, ânoâ, and âno-answerâ.
We deï¬ne the loss of our model for a training
Long Answer Dev Long Answer Test F1 P R F1 P R Short Answer Dev F1 R P Short Answer Test F1 R P DocumentQA 47.5 52.7 61.3 DecAtt + DocReader BERTjoint (this work) 44.7 57.0 68.4 46.1 54.8 64.7 48.9 54.3 64.1 43.3 55.7 68.3 45.7 55.0 66.2 38.6 34.3 59.5 33.2 28.9 47.3 35.7 31.4 52.7 40.6 31.9 63.8 31.0 31.1 44.0 35.1 31.5 52.1 Single Human Super-annotator 80.4 90.0 67.6 84.6 73.4 87.2 - - - - - - 63.4 79.1 52.6 72.6 57.5 75.7 - - - - - -
Table 1: Our results on NQ compared to the baselines in the original dataset paper and to the performance of a single human annotator and of an ensemble of human annotators. The systems used in previous NQ baselines are DocumentQA (Clark and Gardner, 2017), DecAtt (Parikh et al., 2016), and Document Reader (Chen et al., 2017).
instance to be
L = â log p(s, e, t|c) = â log pstart(s|c) â log pend(e|c) â log ptype(t|c),
where each probability p is obtained as a softmax over scores computed by the BERT model as fol- lows:
exp(fstart(s, ¢ 8) o xP(fstart(sâ, 65 8)â exp(fena(e; ¢; 8) Sv exp(fend(es68) exp(ftype(t, c; 8) Le exp(fiype(tâ, ©;0))â Pstart(s|¢) = > c= Pena(e Ptype(tlc)
We opted to limit the complexity of this base- line model by always outputting a single short an- swer as prediction and we rely on the ofï¬cial NQ evaluation script to set thresholds to decide which of our predictions should be changed to having only a long answer or no answer at all. We expect that improvements can be obtained by combining start/end and answer type outputs to sometimes predict yes/no answers instead of always predict- ing a span as the short answer. We also expect additional improvements to be achievable by ex- tending the model to be able to emit short answers comprised of multiple disjoint spans.
# 4 Experiments
where θ represents the BERT model parameters and fstart, fend, ftype represent three different outputs derived from the last layer of BERT.
At inference time we score all the contexts from each document and then rank all document spans (s, e) by the score
g(c, s, e) = fstart(s, c; θ) + fend(e, c; θ) â fstart(s = [CLS], c; θ) â fend(e = [CLS], c; θ)
We initialized our model from a BERT model al- ready ï¬netuned on SQuAD 1.1 (Rajpurkar et al., 2016). We then further ï¬netuned the model on the training instances precomputed as described in Section 2. We trained the model by minimizing loss L from Section 3 with the Adam optimizer (Kingma and Ba, 2014) with a batch size of 8. As is common practice for BERT models, we only tuned the number of epochs and the initial learn- ing rate for ï¬netuning and found that training for 1 epoch with an initial learning rate of 3 · 10â5 was the best setting.
and return the highest scoring span in the docu- ment as the predicted short answer span. Note that g(c, s, e) is exactly the log-odds between the like- lihood of an answer span (deï¬ned by the product pstart · pend) and the â[CLS]â span.
We select the predicted long answer span as the DOM tree top level node containing the predicted short answer span, and assign to both long and short prediction the same score equal to the maxi- mum value of g(c, s, e) for the document.
Evaluation completed in about 5 hours on the NQ dev and test set with a single Tesla P100 GPU. The results obtained by our model are shown in Table 1. Our BERT model for NQ performs dra- matically better than the models presented in the original NQ paper. Our model closes the gap be- tween the F1 score achieved by the original base- line systems and the super-annotator upper bound by 30% for the long answer NQ task and by 50% for the short answer NQ task. However NQ ap- pears to be still far from being solved, with more
than 20 F1 points of headroom for both the long and short answer tasks.
# 5 Conclusion
We presented a BERT-based model (Devlin et al., 2018) as a new baseline for the newly released Natural Questions (Kwiatkowski et al., 2019).
We hope that this baseline can constitute a good starting point for researchers wanting to create bet- ter models for the Natural Questions and for other question answering datasets with similar charac- teristics.
# 6 Acknowledgements
We would like to thank Ankur Parikh, Daniel Jacob Devlin, Kristina Andor, Emily Pitler, Toutanova, Ming-Wei Chang, Slav Petrov, Tom Kwiatkowski and the entire Google AI Language team for many valuable suggestions and help in carrying out this work.
# References
Danqi Chen, Adam Fisch, Jason Weston, and An- Reading wikipedia to an- arXiv preprint toine Bordes. 2017. swer open-domain questions. arXiv:1704.00051.
Christopher Clark and Matt Gardner. 2017. Simple and effective multi-paragraph reading comprehen- sion. arXiv preprint arXiv:1710.10723.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Jennimaria Palomaki, Olivia Rhinehart, Michael Collins, Ankur Parikh, Chris Al- berti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics.
Ankur P Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you donât know: Unanswerable ques- tions for squad. arXiv preprint arXiv:1806.03822.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 2383â2392.
Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042. | {
"id": "1810.04805"
} |
1901.07291 | Cross-lingual Language Model Pretraining | Recent studies have demonstrated the efficiency of generative pretraining for
English natural language understanding. In this work, we extend this approach
to multiple languages and show the effectiveness of cross-lingual pretraining.
We propose two methods to learn cross-lingual language models (XLMs): one
unsupervised that only relies on monolingual data, and one supervised that
leverages parallel data with a new cross-lingual language model objective. We
obtain state-of-the-art results on cross-lingual classification, unsupervised
and supervised machine translation. On XNLI, our approach pushes the state of
the art by an absolute gain of 4.9% accuracy. On unsupervised machine
translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the
previous state of the art by more than 9 BLEU. On supervised machine
translation, we obtain a new state of the art of 38.5 BLEU on WMT'16
Romanian-English, outperforming the previous best approach by more than 4 BLEU.
Our code and pretrained models will be made publicly available. | http://arxiv.org/pdf/1901.07291 | Guillaume Lample, Alexis Conneau | cs.CL | null | null | cs.CL | 20190122 | 20190122 | 9 1 0 2
n a J 2 2 ] L C . s c [
1 v 1 9 2 7 0 . 1 0 9 1 : v i X r a
# Cross-lingual Language Model Pretraining
Guillaume Lampleâ Facebook AI Research Sorbonne Universit´es glample@fb.com
Alexis Conneauâ Facebook AI Research Universit´e Le Mans aconneau@fb.com
# Abstract
Recent studies have demonstrated the ef- ï¬ciency of generative pretraining for En- In glish natural language understanding. this work, we extend this approach to mul- tiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual lan- guage models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art re- sults on cross-lingual classiï¬cation, unsu- pervised and supervised machine transla- tion. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine trans- lation, we obtain 34.3 BLEU on WMTâ16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMTâ16 Romanian-English, outperform- ing the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.
et al., 2013) or natural language inference (Bow- man et al., 2015; Williams et al., 2017). Al- though there has been a surge of interest in learn- ing general-purpose sentence representations, re- search in that area has been essentially monolin- gual, and largely focused around English bench- marks (Conneau and Kiela, 2018; Wang et al., 2018). Recent developments in learning and eval- uating cross-lingual sentence representations in many languages (Conneau et al., 2018b) aim at mitigating the English-centric bias and suggest that it is possible to build universal cross-lingual encoders that can encode any sentence into a shared embedding space.
In this work, we demonstrate the effective- ness of cross-lingual language model pretraining on multiple cross-lingual understanding (XLU) benchmarks. Precisely, we make the following contributions:
1. We introduce a new unsupervised method for learning cross-lingual representations using cross-lingual language modeling and investi- gate two monolingual pretraining objectives.
2. We introduce a new supervised learning ob- jective that improves cross-lingual pretrain- ing when parallel data is available.
# Introduction
Generative pretraining of sentence encoders (Rad- ford et al., 2018; Howard and Ruder, 2018; Devlin et al., 2018) has led to strong improvements on numerous natural language understanding bench- marks (Wang et al., 2018). In this context, a Trans- former (Vaswani et al., 2017) language model is learned on a large unsupervised text corpus, and then ï¬ne-tuned on natural language understand- ing (NLU) tasks such as classiï¬cation (Socher
3. We signiï¬cantly outperform the previous state of the art on cross-lingual classiï¬cation, unsupervised machine translation and super- vised machine translation.
4. We show that cross-lingual language models can provide signiï¬cant improvements on the perplexity of low-resource languages.
âEqual contribution.
5. We will make our code and pretrained models publicly available.
# 2 Related Work
Our work builds on top of Radford et al. (2018); Howard and Ruder (2018); Devlin et al. (2018) who investigate language modeling for pretrain- ing Transformer encoders. Their approaches lead to drastic improvements on several classiï¬cation tasks from the GLUE benchmark (Wang et al., 2018). Ramachandran et al. (2016) show that language modeling pretraining can also provide signiï¬cant improvements on machine translation tasks, even for high-resource language pairs such as English-German where there exists a signiï¬- cant amount of parallel data. Concurrent to our work, results on cross-lingual classiï¬cation using a cross-lingual language modeling approach were showcased on the BERT repository1. We compare those results to our approach in Section 5.
Aligning distributions of text representations has a long tradition, starting from word embed- dings alignment and the work of Mikolov et al. (2013a) that leverages small dictionaries to align word representations from different languages. A series of follow-up studies show that cross-lingual representations can be used to improve the qual- ity of monolingual representations (Faruqui and Dyer, 2014), that orthogonal transformations are sufï¬cient to align these word distributions (Xing et al., 2015), and that all these techniques can be applied to an arbitrary number of languages (Am- mar et al., 2016). Following this line of work, the need for cross-lingual supervision was further re- duced (Smith et al., 2017) until it was completely removed (Conneau et al., 2018a). In this work, we take these ideas one step further by aligning dis- tributions of sentences and also reducing the need for parallel data.
There is a large body of work on aligning sen- tence representations from multiple languages. By using parallel data, Hermann and Blunsom (2014); Conneau et al. (2018b); Eriguchi et al. (2018) in- vestigated zero-shot cross-lingual sentence classi- ï¬cation. But the most successful recent approach of cross-lingual encoders is probably the one of Johnson et al. (2017) for multilingual machine translation. They show that a single sequence-to- sequence model can be used to perform machine translation for many language pairs, by using a single shared LSTM encoder and decoder. Their multilingual model outperformed the state of the art on low-resource language pairs, and enabled
1https://github.com/google-research/bert
zero-shot translation. Following this approach, Artetxe and Schwenk (2018) show that the result- ing encoder can be used to produce cross-lingual sentence embeddings. Their approach leverages more than 200 million parallel sentences. They obtained a new state of the art on the XNLI cross- lingual classiï¬cation benchmark (Conneau et al., 2018b) by learning a classiï¬er on top of the ï¬xed sentence representations. While these methods re- quire a signiï¬cant amount of parallel data, recent work in unsupervised machine translation show that sentence representations can be aligned in a completely unsupervised way (Lample et al., 2018a; Artetxe et al., 2018). For instance, Lample et al. (2018b) obtained 25.2 BLEU on WMTâ16 German-English without using parallel sentences. Similar to this work, we show that we can align distributions of sentences in a completely unsuper- vised way, and that our cross-lingual models can be used for a broad set of natural language under- standing tasks, including machine translation.
The most similar work to ours is probably the one of Wada and Iwata (2018), where the au- thors train a LSTM (Hochreiter and Schmidhuber, 1997) language model with sentences from dif- ferent languages. They share the LSTM param- eters, but use different lookup tables to represent the words in each language. They focus on align- ing word representations and show that their ap- proach work well on word translation tasks.
# 3 Cross-lingual language models
In this section, we present the three language mod- eling objectives we consider throughout this work. Two of them only require monolingual data (un- supervised), while the third one requires parallel sentences (supervised). We consider N languages. Unless stated otherwise, we suppose that we have N monolingual corpora {Ci}i=1...N , and we de- note by ni the number of sentences in Ci.
# 3.1 Shared sub-word vocabulary
In all our experiments we process all languages with the same shared vocabulary created through Byte Pair Encoding (BPE) (Sennrich et al., 2015). As shown in Lample et al. (2018a), this greatly im- proves the alignment of embedding spaces across languages that share either the same alphabet or anchor tokens such as digits (Smith et al., 2017) or proper nouns. We learn the BPE splits on the con- catenation of sentences sampled randomly from
the monolingual corpora. Sentences are sampled according to a multinomial distribution with prob- abilities {qi}i=1...N , where:
ea Di N a > j=1 Pj Ny N . pai % with pj
We consider α = 0.5. Sampling with this dis- tribution increases the number of tokens associ- ated to low-resource languages and alleviates the bias towards high-resource languages. In particu- lar, this prevents words of low-resource languages from being split at the character level.
# 3.2 Causal Language Modeling (CLM)
Our causal language modeling (CLM) task con- sists of a Transformer language model trained to model the probability of a word given the previ- ous words in a sentence P (wt|w1, . . . , wtâ1, θ). While recurrent neural networks obtain state-of- the-art performance on language modeling bench- marks (Mikolov et al., 2010; Jozefowicz et al., 2016), Transformer models are also very competi- tive (Dai et al., 2019).
In the case of LSTM language models, back- propagation through time (Werbos, 1990) (BPTT) is performed by providing the LSTM with the last hidden state of the previous iteration. In the case of Transformers, previous hidden states can be passed to the current batch (Al-Rfou et al., 2018) to provide context to the ï¬rst words in the batch. However, this technique does not scale to the cross-lingual setting, so we just leave the ï¬rst words in each batch without context for simplicity.
# 3.3 Masked Language Modeling (MLM)
We also consider the masked language model- ing (MLM) objective of Devlin et al. (2018), also known as the Cloze task (Taylor, 1953). Follow- ing Devlin et al. (2018), we sample randomly 15% of the BPE tokens from the text streams, replace them by a [MASK] token 80% of the time, by a random token 10% of the time, and we keep them unchanged 10% of the time. Differences be- tween our approach and the MLM of Devlin et al. (2018) include the use of text streams of an ar- bitrary number of sentences (truncated at 256 to- kens) instead of pairs of sentences. To counter the imbalance between rare and frequent tokens (e.g. punctuations or stop words), we also subsample the frequent outputs using an approach similar to Mikolov et al. (2013b): tokens in a text stream are
sampled according to a multinomial distribution, whose weights are proportional to the square root of their invert frequencies. Our MLM objective is illustrated in Figure 1.
# 3.4 Translation Language Modeling (TLM)
Both the CLM and MLM objectives are unsuper- vised and only require monolingual data. How- ever, these objectives cannot be used to leverage parallel data when it is available. We introduce a new translation language modeling (TLM) objec- tive for improving cross-lingual pretraining. Our TLM objective is an extension of MLM, where in- stead of considering monolingual text streams, we concatenate parallel sentences as illustrated in Fig- ure 1. We randomly mask words in both the source and target sentences. To predict a word masked in an English sentence, the model can either at- tend to surrounding English words or to the French translation, encouraging the model to align the En- glish and French representations. In particular, the model can leverage the French context if the En- glish one is not sufï¬cient to infer the masked En- glish words. To facilitate the alignment, we also reset the positions of target sentences.
# 3.5 Cross-lingual Language Models
In this work, we consider cross-lingual language model pretraining with either CLM, MLM, or MLM used in combination with TLM. For the CLM and MLM objectives, we train the model with batches of 64 streams of continuous sen- tences composed of 256 tokens. At each iteration, a batch is composed of sentences coming from the same language, which is sampled from the distri- bution {qi}i=1...N above, with α = 0.7. When TLM is used in combination with MLM, we alter- nate between these two objectives, and sample the language pairs with a similar approach.
# 4 Cross-lingual language model pretraining
In this section, we explain how cross-lingual lan- guage models can be used to obtain:
⢠a better initialization of sentence encoders for zero-shot cross-lingual classiï¬cation
⢠a better initialization of supervised and unsu- pervised neural machine translation systems
⢠language models for low-resource languages
⢠unsupervised cross-lingual word embeddings
masked ¢ âMEM take [V/s] drink now Transformer Token /s] MASK! MASI h MASI Us] MASK! | embeddings Us| ] a seat [MASK] ave a [MASK] Ss] I ] relax and + + + + + + + + + + + + aes CIAI AAA eeee ees embeddings + + + + + + + + + + + + Language embeddings en en en en en en en en en en en en ansiation Language curtains) [wore EB ian Transformer Token a hi maskj| [ima bl t i MAâ id étai MASK} i embeddings [/s] the [MASK] [MASK] Jue Us] (/s] [MASK] rideaux étaient [MASK] [/s] + + + + + + + + + + + + =a DITA wAeaawaeeae we embeddings + + + + + + + + + + + + canauage gs en en en en en en fr fr fr fr fr fr
Figure 1: Cross-lingual language model pretraining. The MLM objective is similar to the one of Devlin et al. (2018), but with continuous streams of text as opposed to sentence pairs. The TLM objective extends MLM to pairs of parallel sentences. To predict a masked English word, the model can attend to both the English sentence and its French translation, and is encouraged to align English and French representations. Position embeddings of the target sentence are reset to facilitate the alignment.
# 4.1 Cross-lingual classiï¬cation
Our pretrained XLM models provide general- purpose cross-lingual text representations. Similar to monolingual language model ï¬ne-tuning (Rad- ford et al., 2018; Devlin et al., 2018) on En- glish classiï¬cation tasks, we ï¬ne-tune XLMs on a cross-lingual classiï¬cation benchmark. We use the cross-lingual natural language inference (XNLI) dataset to evaluate our approach. Precisely, we add a linear classiï¬er on top of the ï¬rst hidden state of the pretrained Transformer, and ï¬ne-tune all pa- rameters on the English NLI training dataset. We then evaluate the capacity of our model to make correct NLI predictions in the 15 XNLI languages. Following Conneau et al. (2018b), we also include machine translation baselines of train and test sets. We report our results in Table 1.
coder with a cross-lingual language model to boot- strap the iterative process of UNMT. We explore various initialization schemes and evaluate their impact on several standard machine translation benchmarks, including WMTâ14 English-French, WMTâ16 English-German and WMTâ16 English- Romanian. Results are presented in Table 2.
# 4.3 Supervised Machine Translation
We also investigate the impact of cross-lingual language modeling pretraining for supervised ma- chine translation, and extend the approach of Ra- machandran et al. (2016) to multilingual NMT (Johnson et al., 2017). We evaluate the impact of both CLM and MLM pretraining on WMTâ16 Romanian-English, and present results in Table 3.
# 4.4 Low-resource language modeling
# 4.2 Unsupervised Machine Translation
Pretraining is a key ingredient of unsupervised neural machine translation (UNMT) (Lample et al., 2018a; Artetxe et al., 2018). Lample et al. (2018b) show that the quality of pretrained cross- lingual word embeddings used to initialize the lookup table has a signiï¬cant impact on the per- formance of an unsupervised machine translation model. We propose to take this idea one step further by pretraining the entire encoder and de-
For low-resource languages, it is often beneï¬- cial to leverage data in similar but higher-resource languages, especially when they share a signiï¬- cant fraction of their vocabularies. For instance, there are about 100k sentences written in Nepali on Wikipedia, and about 6 times more in Hindi. These two languages also have more than 80% of their tokens in common in a shared BPE vocabu- lary of 100k subword units. We provide in Table 4 a comparison in perplexity between a Nepali lan-
guage model and a cross-lingual language model trained in Nepali but enriched with different com- binations of Hindi and English data.
# 4.5 Unsupervised cross-lingual word embeddings
Conneau et al. (2018a) showed how to perform unsupervised word translation by aligning mono- lingual word embedding spaces with adversarial training (MUSE). Lample et al. (2018a) showed that using a shared vocabulary between two lan- guages and then applying fastText (Bojanowski et al., 2017) on the concatenation of their mono- lingual corpora also directly provides high-quality cross-lingual word embeddings (Concat) for lan- In this guages that share a common alphabet. work, we also use a shared vocabulary but our word embeddings are obtained via the lookup ta- ble of our cross-lingual language model (XLM). In Section 5, we compare these three approaches on three different metrics: cosine similarity, L2 dis- tance and cross-lingual word similarity.
# 5 Experiments and results
In this section, we empirically demonstrate the strong impact of cross-lingual language model pretraining on several benchmarks, and compare our approach to the current state of the art.
# 5.1 Training details
In all experiments, we use a Transformer archi- tecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embed- dings. We train our models with the Adam op- timizer (Kingma and Ba, 2014), a linear warm- up (Vaswani et al., 2017) and learning rates vary- ing from 10â4 to 5.10â4.
For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecu- tive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over lan- guages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens.
When ï¬ne-tuning on XNLI, we use mini- batches of size 8 or 16, and we clip the sentence
length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10â4 to 2.10â4, and use small evaluation epochs of 20000 random samples. We use the ï¬rst hidden state of the last layer of the transformer as input to the randomly initialized ï¬- nal linear classiï¬er, and ï¬ne-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work bet- ter than using the ï¬rst hidden state.
in Py- Torch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use ï¬oat16 operations to speed up training and to reduce the memory usage of our models.
# 5.2 Data preprocessing
We use WikiExtractor2 to extract raw sentences from Wikipedia dumps and use them as mono- lingual data for the CLM and MLM objectives. For the TLM objective, we only use parallel data that involves English, similar to Conneau et al. (2018b). Precisely, we use MultiUN (Ziemski et al., 2016) for French, Spanish, Russian, Ara- bic and Chinese, and the IIT Bombay corpus (Anoop et al., 2018) for Hindi. We extract the fol- lowing corpora from the OPUS 3 website Tiede- mann (2012): the EUbookshop corpus for Ger- man, Greek and Bulgarian, OpenSubtitles 2018 for Turkish, Vietnamese and Thai, Tanzil for both Urdu and Swahili and GlobalVoices for Swahili. For Chinese, Japanese and Thai we use the tok- enizer of Chang et al. (2008), the Kytea4 tokenizer, and the PyThaiNLP5 tokenizer respectively. For all other languages, we use the tokenizer provided by Moses (Koehn et al., 2007), falling back on the default English tokenizer when necessary. We use fastBPE6 to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all lan- guages, following the method presented in Sec- tion 3.1.
# 2https://github.com/attardi/wikiextractor 3http://opus.nlpl.eu 4http://www.phontron.com/kytea 5https://github.com/PyThaiNLP/pythainlp 6https://github.com/glample/fastBPE
en fr es de el bg ru tr ar vi th zh hi sw ur Machine translation baselines (TRANSLATE-TRAIN) Devlin et al. (2018) XLM (MLM+TLM) 81.9 85.0 - 80.2 77.8 80.8 75.9 80.3 - 78.1 - 79.3 - 78.1 - 74.7 70.7 76.5 - 76.6 - 75.5 76.6 78.6 - 72.3 - 70.9 61.6 63.2 Machine translation baselines (TRANSLATE-TEST) Devlin et al. (2018) XLM (MLM+TLM) 81.4 85.0 - 79.0 74.9 79.5 74.4 78.1 - 77.8 - 77.6 - 75.5 - 73.7 70.4 73.7 - 70.8 - 70.4 70.1 73.6 - 69.0 - 64.7 62.1 65.1 Evaluation of cross-lingual sentence encoders Conneau et al. (2018b) Devlin et al. (2018) Artetxe and Schwenk (2018) XLM (MLM) XLM (MLM+TLM) 73.7 81.4 73.9 83.2 85.0 67.7 - 71.9 76.5 78.7 68.7 74.3 72.9 76.3 78.9 67.7 70.5 72.6 74.2 77.8 68.9 - 73.1 73.1 76.6 67.9 - 74.2 74.0 77.4 65.4 - 71.5 73.1 75.3 64.2 - 69.7 67.8 72.5 64.8 62.1 71.4 68.5 73.1 66.4 - 72.0 71.2 76.1 64.1 - 69.2 69.2 73.2 65.8 63.8 71.4 71.9 76.5 64.1 - 65.5 65.7 69.6 55.7 - 62.2 64.6 68.4 58.4 58.3 61.0 63.4 67.3 â - 76.7 - 74.2 65.6 - 70.2 71.5 75.1
Table 1: Results on cross-lingual classiï¬cation accuracy. Test accuracy on the 15 XNLI languages. We report results for machine translation baselines and zero-shot classiï¬cation approaches based on cross-lingual sentence encoders. XLM (MLM) corresponds to our unsupervised approach trained only on monolingual corpora, and XLM (MLM+TLM) corresponds to our supervised method that leverages both monolingual and parallel data through the TLM objective. â corresponds to the average accuracy.
# 5.3 Results and analysis
In this section, we demonstrate the effectiveness of cross-lingual language model pretraining. Our ap- proach signiï¬cantly outperforms the previous state of the art on cross-lingual classiï¬cation, unsuper- vised and supervised machine translation.
Cross-lingual classiï¬cation In Table 1, we eval- uate two types of pretrained cross-lingual en- coders: an unsupervised cross-lingual language model that uses the MLM objective on monolin- gual corpora only; and a supervised cross-lingual language model that combines both the MLM and the TLM loss using additional parallel data. Following Conneau et al. (2018b), we include two machine translation baselines: TRANSLATE- TRAIN, where the English MultiNLI training set is machine translated into each XNLI lan- guage, and TRANSLATE-TEST where every dev and test set of XNLI is translated to English. We report the XNLI baselines of Conneau et al. (2018b), the multilingual BERT approach of De- vlin et al. (2018) and the recent work of Artetxe and Schwenk (2018).
Our fully unsupervised MLM method sets a new state of the art on zero-shot cross-lingual clas- siï¬cation and signiï¬cantly outperforms the super- vised approach of Artetxe and Schwenk (2018) which uses 223 million of parallel sentences. Pre- cisely, MLM obtains 71.5% accuracy on average (â), while they obtained 70.2% accuracy. By leveraging parallel data through the TLM objec- tive (MLM+TLM), we get a signiï¬cant boost in
performance of 3.6% accuracy, improving even further the state of the art to 75.1%. On the Swahili and Urdu low-resource languages, we out- perform the previous state of the art by 6.2% and 6.3% respectively. Using TLM in addition to MLM also improves English accuracy from 83.2% to 85% accuracy, outperforming Artetxe and Schwenk (2018) and Devlin et al. (2018) by 11.1% and 3.6% accuracy respectively.
When ï¬ne-tuned on the training set of each XNLI language (TRANSLATE-TRAIN), our su- pervised model outperforms our zero-shot ap- proach by 1.6%, reaching an absolute state of the art of 76.7% average accuracy. This result demonstrates in particular the consistency of our approach and shows that XLMs can be ï¬ne-tuned on any language with strong performance. Similar to the multilingual BERT (Devlin et al., 2018), we observe that TRANSLATE-TRAIN outperforms TRANSLATE-TEST by 2.5% average accuracy, and additionally that our zero-shot approach out- performs TRANSLATE-TEST by 0.9%.
Unsupervised machine translation For the un- supervised machine translation task we consider 3 language pairs: English-French, English-German, and English-Romanian. Our setting is identical to the one of Lample et al. (2018b), except for the initialization step where we use cross-lingual lan- guage modeling to pretrain the full model as op- posed to only the lookup table.
For both the encoder and the decoder, we con- sider different possible initializations: CLM pre- training, MLM pretraining, or random initializa-
en-fr fr-en en-de de-en en-ro ro-en Previous state-of-the-art - Lample et al. (2018b) NMT PBSMT PBSMT + NMT 25.1 28.1 27.6 24.2 27.2 27.7 17.2 17.8 20.2 21.0 22.7 25.2 21.2 21.3 25.1 19.4 23.0 23.9 Our results for different encoder and decoder initializations 29.4 29.4 EMB 15.8 13.0 - 26.4 CLM 25.3 - 29.1 MLM 29.2 - 28.2 28.7 - CLM 30.0 CLM CLM 30.4 31.6 CLM MLM 32.3 32.1 31.6 MLM - MLM CLM 33.4 32.3 MLM MLM 33.4 33.3 EMB - 21.3 6.7 19.2 21.6 24.4 22.7 24.3 27.0 24.9 26.4 27.3 15.3 26.0 28.6 30.3 30.5 32.5 33.2 32.9 34.3 27.5 18.9 25.7 28.2 29.2 29.0 31.6 31.8 31.7 33.3 26.6 18.3 24.6 27.3 28.0 27.8 29.8 30.5 30.4 31.8
Table 2: Results on unsupervised MT. BLEU scores on WMTâ14 English-French, WMTâ16 German-English and WMTâ16 Romanian- English. For our results, the ï¬rst two columns indicate the model used to pretrain the encoder â - â means the model was and the decoder. randomly initialized. EMB corresponds to pretraining the lookup table with cross-lingual embeddings, CLM and MLM correspond to pretraining with models trained on the CLM or MLM objectives.
tion, which results in 9 different settings. We then follow Lample et al. (2018b) and train the model with a denoising auto-encoding loss along with an online back-translation loss. Results are reported in Table 2. We compare our approach with the ones of Lample et al. (2018b). For each language pair, we observe signiï¬cant improve- ments over the previous state of the art. We re- implemented the NMT approach of Lample et al. (2018b) (EMB), and obtained better results than reported in their paper. We expect that this is due to our multi-GPU implementation which uses In German-English, signiï¬cantly larger batches. our best model outperforms the previous unsuper- vised approach by more than 9.1 BLEU, and 13.3 BLEU if we only consider neural unsupervised approaches. Compared to pretraining only the lookup table (EMB), pretraining both the encoder and decoder with MLM leads to consistent signif- icant improvements of up to 7 BLEU on German- English. We also observe that the MLM objec- tive pretraining consistently outperforms the CLM one, going from 30.4 to 33.4 BLEU on English- French, and from 28.0 to 31.8 on Romanian- English. These results are consistent with the ones of Devlin et al. (2018) who observed a better gen-
Pretraining - CLM MLM Sennrich et al. (2016) ro â en ro â en ro â en + BT 33.9 28.4 28.5 34.4 - 31.5 31.5 37.0 - 35.3 35.6 38.5
Table 3: Results on supervised MT. BLEU scores on WMTâ16 Romanian-English. The previous state-of-the-art of Sennrich et al. (2016) uses both back-translation and an ensemble model. ro â en corresponds to models trained on both directions.
eralization on NLU tasks when training on the MLM objective compared to CLM. We also ob- serve that the encoder is the most important ele- ment to pretrain: when compared to pretraining both the encoder and the decoder, pretraining only the decoder leads to a signiï¬cant drop in perfor- mance, while pretraining only the encoder only has a small impact on the ï¬nal BLEU score.
Supervised machine translation In Table 3 we report the performance on Romanian-English WMTâ16 for different supervised training conï¬g- urations: mono-directional (roâen), bidirectional (roâen, a multi-NMT model trained on both enâro and roâen) and bidirectional with back- translation (roâen + BT). Models with back- translation are trained with the same monolin- gual data as language models used for pretraining. As in the unsupervised setting, we observe that pretraining provides a signiï¬cant boost in BLEU score for each conï¬guration, and that pretraining with the MLM objective leads to the best perfor- mance. Also, while models with back-translation have access to the same amount of monolingual data as the pretrained models, they are not able to generalize as well on the evaluation sets. Our bidirectional model trained with back-translation obtains the best performance and reaches 38.5 BLEU, outperforming the previous SOTA of Sen- nrich et al. (2016) (based on back-translation and ensemble models) by more than 4 BLEU.
Low-resource language model In Table 4, we investigate the impact of cross-lingual language modeling for improving the perplexity of a Nepali language model. To do so, we train a Nepali lan- guage model on Wikipedia, together with addi- tional data from either English or Hindi. While Nepali and English are distant languages, Nepali and Hindi are similar as they share the same De-
Training languages Nepali perplexity Nepali Nepali + English Nepali + Hindi Nepali + English + Hindi 157.2 140.1 115.6 109.3
Table 4: Results on language modeling. Nepali perplexity when using additional data from a sim- ilar language (Hindi) or a distant one (English).
vanagari script and have a common Sanskrit an- cestor. When using English data, we reduce the perplexity on the Nepali language model by 17.1 points, going from 157.2 for Nepali-only language modeling to 140.1 when using English. Using ad- ditional data from Hindi, we get a much larger perplexity reduction of 41.6. Finally, by leverag- ing data from both English and Hindi, we reduce the perplexity even more to 109.3 on Nepali. The gains in perplexity from cross-lingual language modeling can be partly explained by the n-grams anchor points that are shared across languages, for instance in Wikipedia articles. The cross-lingual language model can thus transfer the additional context provided by the Hindi or English mono- lingual corpora through these anchor points to im- prove the Nepali language model.
Unsupervised cross-lingual word embeddings The MUSE, Concat and XLM (MLM) methods provide unsupervised cross-lingual word embed- ding spaces that have different properties. In Ta- ble 5, we study those three methods using the same word vocabulary and compute the cosine similar- ity and L2 distance between word translation pairs from the MUSE dictionaries. We also evaluate the quality of the cosine similarity measure via the SemEvalâ17 cross-lingual word similarity task of Camacho-Collados et al. (2017). We observe that XLM outperforms both MUSE and Concat on cross-lingual word similarity, reaching a Pearson correlation of 0.69. Interestingly, word transla- tion pairs are also far closer in the XLM cross- lingual word embedding space than for MUSE or Concat. Speciï¬cally, MUSE obtains 0.38 and 5.13 for cosine similarity and L2 distance while XLM gives 0.55 and 2.64 for the same metrics. Note that XLM embeddings have the particularity of be- ing trained together with a sentence encoder which may enforce this closeness, while MUSE and Con- cat are based on fastText word embeddings.
Cosine sim. L2 dist. SemEvalâ17 MUSE Concat XLM 0.38 0.36 0.55 5.13 4.89 2.64 0.65 0.52 0.69
Table 5: Unsupervised cross-lingual word em- beddings Cosine similarity and L2 distance be- tween source words and their translations. Pear- son correlation on SemEvalâ17 cross-lingual word similarity task of Camacho-Collados et al. (2017).
# 6 Conclusion
In this work, we show for the ï¬rst time the strong impact of cross-lingual language model (XLM) pretraining. We investigate two unsupervised training objectives that require only monolingual corpora: Causal Language Modeling (CLM) and Masked Language Modeling (MLM). We show that both the CLM and MLM approaches pro- vide strong cross-lingual features that can be used for pretraining models. On unsupervised ma- chine translation, we show that MLM pretrain- ing is extremely effective. We reach a new state of the art of 34.3 BLEU on WMTâ16 German- English, outperforming the previous best approach by more than 9 BLEU. Similarly, we obtain strong improvements on supervised machine translation. We reach a new state of the art on WMTâ16 Romanian-English of 38.5 BLEU, which corre- sponds to an improvement of more than 4 BLEU points. We also demonstrate that cross-lingual language model can be used to improve the per- plexity of a Nepali language model, and that it provides unsupervised cross-lingual word embed- dings. Without using a single parallel sentence, a cross-lingual language model ï¬ne-tuned on the XNLI cross-lingual classiï¬cation benchmark al- ready outperforms the previous supervised state of the art by 1.3% accuracy on average. A key con- tribution of our work is the translation language modeling (TLM) objective which improves cross- lingual language model pretraining by leveraging parallel data. TLM naturally extends the BERT MLM approach by using batches of parallel sen- tences instead of consecutive sentences. We ob- tain a signiï¬cant gain by using TLM in addition to MLM, and we show that this supervised ap- proach beats the previous state of the art on XNLI by 4.9% accuracy on average. Our code and pre- trained models will be made publicly available.
# References
Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. 2018. Character-level lan- guage modeling with deeper self-attention. arXiv preprint arXiv:1808.04444.
Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925.
and Bhat- tacharyya Pushpak. 2018. The iit bombay english- hindi parallel corpus. In LREC.
Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural ma- In International Conference on chine translation. Learning Representations (ICLR).
Mikel Artetxe and Holger Schwenk. 2018. Mas- sively multilingual sentence embeddings for zero- arXiv shot cross-lingual preprint arXiv:1812.10464.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135â146.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In EMNLP.
Jose Camacho-Collados, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. 2017. Semeval- 2017 task 2: Multilingual and cross-lingual semantic word similarity. In Proceedings of the 11th Interna- tional Workshop on Semantic Evaluation (SemEval- 2017), pages 15â26.
Pi-Chuan Chang, Michel Galley, and Christopher D Manning. 2008. Optimizing chinese word segmen- tation for machine translation performance. In Pro- ceedings of the third workshop on statistical ma- chine translation, pages 224â232.
Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representa- tions. LREC.
Alexis Conneau, Guillaume Lample, MarcâAurelio Ranzato, Ludovic Denoyer, and Herv Jegou. 2018a. Word translation without parallel data. In ICLR.
Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018b. Xnli: Evaluating cross-lingual sentence representations. In Proceed- ings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing. Association for Computational Linguistics.
Zihang Dai, Zhilin Yang, Yiming Yang, William W. Cohen, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Language modeling with longer-term dependency.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.
Akiko Eriguchi, Melvin Johnson, Orhan Firat, Hideto Kazawa, and Wolfgang Macherey. 2018. Zero- shot cross-lingual classiï¬cation using multilin- arXiv preprint gual neural machine translation. arXiv:1809.04686.
Manaal Faruqui and Chris Dyer. 2014. Improving vec- tor space word representations using multilingual correlation. Proceedings of EACL.
Dan Hendrycks and Kevin Gimpel. 2016. Bridg- ing nonlinearities and stochastic regularizers with arXiv preprint gaussian error arXiv:1606.08415.
Karl Moritz Hermann and Phil Blunsom. 2014. Multi- lingual models for compositional distributed seman- tics. arXiv preprint arXiv:1404.4641.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Neural computation, Long short-term memory. 9(8):1735â1780.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model ï¬ne-tuning for text classiï¬cation. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 328â339.
Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2017. Googles multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339â351.
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Exploring arXiv preprint Shazeer, and Yonghui Wu. 2016. the limits of language modeling. arXiv:1602.02410.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source In Pro- toolkit for statistical machine translation. ceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177â180. Association for Computational Linguis- tics.
Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and MarcâAurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In ICLR.
Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and MarcâAurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine trans- lation. In EMNLP.
Tom´aËs Mikolov, Martin Karaï¬Â´at, Luk´aËs Burget, Jan ËCernock`y, and Sanjeev Khudanpur. 2010. Recur- rent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.
Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for ma- chine translation. arXiv preprint arXiv:1309.4168.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- In Advances in neural information processing ity. systems, pages 3111â3119.
Adam Paszke, Sam Gross, Soumith Chintala, Gre- gory Chanan, Edward Yang, Zachary DeVito, Zem- ing Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. NIPS 2017 Autodiff Workshop.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai- assets/research-covers/language- unsupervised/language understanding paper.pdf.
Prajit Ramachandran, Peter J Liu, and Quoc V Le. 2016. Unsupervised pretraining for sequence to se- quence learning. arXiv preprint arXiv:1611.02683.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics, pages 1715â1725.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation sys- tems for wmt 16. arXiv preprint arXiv:1606.02891.
Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Ofï¬ine bilingual word vectors, orthogonal transformations and the inverted International Conference on Learning softmax. Representations.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- In Proceedings of the 2013 conference on bank. empirical methods in natural language processing, pages 1631â1642.
cloze procedure: A new tool for measuring readability. Journalism Bulletin, 30(4):415â433.
Jrg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In LREC, Istanbul, Turkey. European Language Resources Association (ELRA).
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 6000â6010.
Takashi Wada and Tomoharu Iwata. 2018. Unsu- pervised cross-lingual word embedding by multi- arXiv preprint lingual neural language models. arXiv:1809.02306.
Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
Paul J Werbos. 1990. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550â1560.
Adina Williams, Nikita Nangia, and Samuel R. Bow- man. 2017. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL.
Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal trans- form for bilingual word translation. Proceedings of NAACL.
Michal Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The united nations parallel corpus v1. 0. In LREC. | {
"id": "1606.02891"
} |
1901.06706 | Visual Entailment: A Novel Task for Fine-Grained Image Understanding | Existing visual reasoning datasets such as Visual Question Answering (VQA),
often suffer from biases conditioned on the question, image or answer
distributions. The recently proposed CLEVR dataset addresses these limitations
and requires fine-grained reasoning but the dataset is synthetic and consists
of similar objects and sentence structures across the dataset.
In this paper, we introduce a new inference task, Visual Entailment (VE) -
consisting of image-sentence pairs whereby a premise is defined by an image,
rather than a natural language sentence as in traditional Textual Entailment
tasks. The goal of a trained VE model is to predict whether the image
semantically entails the text. To realize this task, we build a dataset SNLI-VE
based on the Stanford Natural Language Inference corpus and Flickr30k dataset.
We evaluate various existing VQA baselines and build a model called Explainable
Visual Entailment (EVE) system to address the VE task. EVE achieves up to 71%
accuracy and outperforms several other state-of-the-art VQA based models.
Finally, we demonstrate the explainability of EVE through cross-modal attention
visualizations. The SNLI-VE dataset is publicly available at
https://github.com/ necla-ml/SNLI-VE. | http://arxiv.org/pdf/1901.06706 | Ning Xie, Farley Lai, Derek Doran, Asim Kadav | cs.CV | null | null | cs.CV | 20190120 | 20190120 | 9 1 0 2 n a J 0 2 ] V C . s c [
1 v 6 0 7 6 0 . 1 0 9 1 : v i X r a
# Visual Entailment: A Novel Task for Fine-Grained Image Understanding
Ning Xieâ Wright State University Dayton, OH, U.S.A. xie.25@wright.edu
Farley Lai NEC Laboratories America Princeton, NJ, U.S.A. farleylai@nec-labs.com
Derek Doran Wright State University Dayton, OH, U.S.A. derek.doran@wright.edu
# Asim Kadav NEC Laboratories America Princeton, NJ, U.S.A. asim@nec-labs.com
# Abstract
understanding is relatively limited [73].
Existing visual reasoning datasets such as Visual Ques- tion Answering (VQA), often suffer from biases conditioned on the question, image or answer distributions. The re- cently proposed CLEVR dataset addresses these limitations and requires ï¬ne-grained reasoning but the dataset is syn- thetic and consists of similar objects and sentence struc- tures across the dataset.
In this paper, we introduce a new inference task, Vi- sual Entailment (VE) - consisting of image-sentence pairs whereby a premise is deï¬ned by an image, rather than a natural language sentence as in traditional Textual Entail- ment tasks. The goal of a trained VE model is to predict whether the image semantically entails the text. To realize this task, we build a dataset SNLI-VE based on the Stanford Natural Language Inference corpus and Flickr30k dataset. We evaluate various existing VQA baselines and build a model called Explainable Visual Entailment (EVE) system to address the VE task. EVE achieves up to 71% accuracy and outperforms several other state-of-the-art VQA based models. Finally, we demonstrate the explainability of EVE through cross-modal attention visualizations. The SNLI-VE dataset is publicly available at https://github.com/ necla-ml/SNLI-VE.
# 1. Introduction
The pursuit of âvisual intelligenceâ is a long lasting theme of the machine learning community. While the per- formance of image classiï¬cation and object detection has signiï¬cantly improved in the recent years [42, 63, 65, 26], progress in higher-level scene reasoning tasks such as scene
âWork performed as a NEC Labs intern
Recently, several datasets, such as VQA-v1.0 [2], VQA-v2.0 [23], CLEVR [32], Visual7w [81], Visual Genome [41], COCO-QA [57], and models [33, 60, 29, 31, 1, 67, 17, 37] have been used to measure the progress in understanding the interaction between vision and language modalities. However, the quality of the widely used VQA- v1.0 dataset [2] suffers from a natural bias [23]. Speciï¬- cally, there is a long tail distribution of answers and also a question-conditioned bias where, questions may hint at the answers, such that the correct answer may be inferred without even considering the visual information. For in- stance, of the question âDo you see a . . . ?â, the model may bias towards the answer âYesâ since it is correct for 87% of times during training. Besides, many questions in the VQA-v1.0 dataset are simple and straightforward and do not require compositional reasoning from the trained model. VQA-v2.0 [23] has been proposed to reduce the dataset âbiasâ considerably in VQA-v1.0 by associating each ques- tion with relatively balanced different answers. However, the questions are rather straight-forward and require limited ï¬ne-grained reasoning.
CLEVR dataset [32], is designed for ï¬ne-grained reason- ing and consists of compositional questions such as âWhat size is the cylinder that is left of the brown metal thing that is left of the big sphere?â. This kind of questions requires learning ï¬ne-grained reasoning based on visual informa- tion. However, CLEVR is a synthetic dataset, and visual information and sentence structures are very similar across the dataset. Hence, models that provide good performance on CLEVR dataset may not generalize to real-world set- tings.
To address the above limitations, we propose a novel in- ference task, Visual Entailment (VE), which requires ï¬ne- grained reasoning in real-world settings. The design is de-
1
rived from Text Entailment (TE) [12] task. In our VE task, a real world image premise Pimage and a natural language hypothesis Htext are given, and the goal is to determine if Htext can be concluded given the information provided by Pimage. Three labels entailment, neutral or contradic- tion are assigned based on the relationship conveyed by the (Pimage, Htext).
⢠Entailment holds if there is enough evidence in Pimage to conclude that Htext is true.
⢠Contradiction holds if there is enough evidence in Pimage to conclude that Htext is false.
⢠Otherwise, the relationship is neutral, implying the ev- idence in Pimage is insufï¬cient to draw a conclusion about Htext.
The main difference between VE and TE task is, the premise in TE in a natural language sentence Ptext, in- stead of an image premise Pimage. Note that the existing of âneutralâ makes the VE task more challenging compared to previous âyes-noâ VQA tasks, since âneutralâ requires the model to conclude the uncertainty between âentailment (yes)â and âcontradiction (no)â. Figure 1 illustrates a VE example, which is from the SNLI-VE dataset we propose below, that given an image premise, the three different text hypotheses lead to different labels.
hypotheses lead to different labels. © Two woman are holding packages. © The sisters are hugging goodbye while holding to © Entailment © Neutral go packages after just = eating lunch. © The men are fighting outside a deli. © Contradiction Premise Hypothesis Answer
Figure 1. An Example from SNLI-VE dataset
We build the SNLI-VE dataset to illustrate the VE task, based on Stanford Natural Language Inference (SNLI) [4], which is a widely used text-entailment dataset, and Flickr30k [76], which is an image captioning dataset. The combination of SNLI and Flickr30k is straightforward since SNLI is created using Flickr30k. The detailed process of creating the SNLI-VE dataset is discussed in Section 3.2.
We develop an Explainable Visual Entailment (EVE) model to address the VE task. EVE captures the interac- tion within and between the image premise and the text hy- pothesis through attention. We evaluate EVE against sev- eral other state-of-the-art (SOTA) visual question answer- ing (VQA) baselines and an image captioning based model on the SNLI-VE dataset. The interpretability of EVE is demonstrated using attention visualizations.
In summary, the contributions of our work are:
2
⢠We propose a novel inference task, Visual Entailment, that requires a systematic cross-modal understanding between vision and a natural language.
⢠We build a VE dataset, SNLI-VE, consisting of real- world image and natural language sentence pairs for VE tasks. The dataset is publicly available1.
⢠We design a VE model, EVE, to solve the VE task with interpretable attention visualizations.
⢠We evaluate EVE against other SOTA VQA and image captioning based baselines.
# 2. Related Work
Our work is inspired by previous work on NLI, VQA, image captioning, and interpretable models.
Natural Language Inference. We focus on textual en- tailment as our NLI task [18, 11, 3, 12, 46]. Annotated cor- pus for TE was limited in size until SNLI [4] was proposed, which is based on the Flickr30k [76] image captions. Since then, several neural-network based methods have been pro- posed over SNLI that either use sentence encoding models to individually encode hypothesis and premise or attention based models that encode the sentences together and align similar words in hypothesis and premise [8, 50, 62, 59]. Our paper extends the TE task in the visual domain â allowing future work on our SNLI-VE task to build new models on recent progress in SNLI and VQA. Our work is different from the recent work [71] that combines both images and captions as premises.
Visual Question Answering. Recent work on VQA in- cludes datasets [32, 2, 23, 81, 41, 57, 47, 19, 66] and models [33, 60, 29, 31, 1, 67, 17, 37]. The goal of VQA is to answer natural language questions based on the provided visual in- formation. VQA-v2.0 [23] and CLEVR [32] datasets are designed to address bias and reasoning limitations of VQA- v1.0, respectively. Recent work on compositional reason- ing systems have achieved nearly 100% results on CLEVR [29] but the SOTA performance on VQA-v2.0 is no more than 75% [15], implying learning multi-modal feature in- teraction using natural images has room for improvement. There have been a large number of models and approaches to address the VQA task. This includes simple linear mod- els using ranking loss [16, 36], bi-linear pooling meth- ods [45, 20, 55, 17, 37], attention-based methods [1, 52, 64] and reasoning based approaches [54, 27, 33, 38, 29] on CLEVER and VQA-v1.0 datasets.
# 1https://github.com/necla-ml/SNLI-VE
Image Captioning. The problem of image captioning ex- plores the generation of natural language sentences to best depict input image content. A common approach for these tasks is to use temporal models over convolutional fea- tures [36, 70, 7]. Recent work has also explored generating richer captions to describe images in a more ï¬ne-grained manner [34]. EVE differs from image-captioning since it requires discerning ï¬ne-grained information about an image conditioned on the hypothesis into three classes. However, existing image-captioning methods can serve as a baseline, where the output class label is based on a distance measure between the generated caption and the input hypothesis.
Visual Relationship Detection. Relationship detection among image constituents uses separate branches in a Con- to model objects, humans, and their interactions vNet [5, 21]. A distinct approach in Santoro et al. [60] treats each of the cells across channels in convolutional feature maps as an object and the relationships are modeled by a pairwise concatenation of the feature representations of in- dividual cells.
Scene graph based relationship modeling, using a struc- tured representation for describing object relationships and their attributes [35, 43, 44, 74] has been extensively studied. Furthermore, pairing different objects in a scene [13, 28, 60, 78] is also common. However, a scene with many objects may have only a few individual interacting objects. Hence, it can be inefï¬cient to model all relationships across all in- dividual object pairs [80], making these methods computa- tionally expensive for complex scene understanding tasks such as VE.
Our model, EVE instead uses self-attention to efï¬ciently learn the relationships between various scene elements and words instead of bi-gram or tri-gram based modeling as used in previous work.
Interpretability. As deep neural networks have become widespread in real-world applications, there has been an in- creasing focus on interpretability and transparency. Recent work addresses this requirement either through saliency- map visualizations [61, 77, 49], attention mechanism [75, 79, 51, 14], or other analysis [30, 39, 56, 58]. Our work demonstrates interpretability via attention visualiza- tions.
# 3. Visual Entailment Task
# 3.1. Formal Deï¬nition
We introduce a dataset D for VE task structured as {(i1, h1, l1), (i1, h2, l2) . . . (i1, hm1 , lm1), . . . (in, hmn , lmn)}, where (ik, hs, ls) is an instance from D, with ik, hs, and ls denoting an image premise, a text hypothesis and a class It is worth noting that each image ik label, respectively.
3
* The man wearing the black © Entailment shirt plays a game of golf. Aman plays on a golf course to relax. © Neutral + = * The man in the black shirt trades Pokemon cards with * Contradiction his girlfriend. Premise Hypothesis Answer * An Indian woman is doing © Entailment her laundry in a lake. * An Indian woman is doing laundry for her son in the © Neutral lake. + = * An Indian woman is putting her laundry into * Contradiction the machine. Hypothesis Answer © An SUV and aman are © Entailment going in opposite directions. © A taxi SUV races to pick © Neutral up some clients while a + man walks peacefully in = the other direction. * Aman is chasing an SUV © Contradiction that is going in the same direction as him. Premise Hypothesis Answer
* The man wearing the black © Entailment shirt plays a game of golf. Aman plays on a golf course to relax. © Neutral + = * The man in the black shirt trades Pokemon cards with * Contradiction his girlfriend. Premise Hypothesis Answer
Premise Hypothesis Answer * An Indian woman is doing © Entailment her laundry in a lake. * An Indian woman is doing laundry for her son in the © Neutral lake. + = * An Indian woman is putting her laundry into * Contradiction the machine. Hypothesis Answer
Hypothesis Answer © An SUV and aman are © Entailment going in opposite directions. © A taxi SUV races to pick © Neutral up some clients while a + man walks peacefully in = the other direction. * Aman is chasing an SUV © Contradiction that is going in the same direction as him. Premise Hypothesis Answer
Figure 2. More examples from SNLI-VE dataset
is used multiple times with different labels given distinct hypotheses {hmk }.
Three labels e, m, or c are assigned based on the re- lationship conveyed by (ix, hs). Specifically, i) e (entail- ment) is assigned if i, FE hs, ii) n (neutral) is assigned if ip Fo hs Ain A Ng, iii) c (contradiction) is assigned if ip F ahs.
# 3.2. Visual Entailment Dataset
# 3.2.1 Dataset criteria
Based on the vision communityâs experience with SNLI, VQA-v1.0, VQA-v2.0, and CLEVR, there are four criteria in developing an effective dataset:
The dataset should be based on real-world images and the same image can be paired with different hypotheses to form different labels.
2. Fine-grained. The dataset should enforce ï¬ne-grained reasoning about subtle changes in hypotheses that could lead to distinct labels.
3. Sanitization. No instance overlapping across different dataset partitions. One image can only exist in a single partition.
4. Account for any bias. Measure the dataset bias and
Training Validation Testing #Image #Entailment #Neutral #Contradiction Vocabulary Size 29,783 176,932 176,045 176,550 29,550 1,000 5,959 5,960 5,939 6,576 1,000 5,973 5,964 5,964 6,592
# Table 1. SNLI-VE dataset
provide baselines to serve as the performance lower bound for potential future evaluations.
# 3.2.2 SNLI-VE Construction
We now describe how we construct SNLI-VE, which is a dataset for VE tasks.
We build the dataset SNLI-VE based on two existing [4]. Flickr30k is a datasets, Flickr30k [76] and SNLI widely used image captioning dataset containing 31,783 im- ages and 158,915 corresponding captions. The images in Flickr30k consist of everyday activities, events and scenes [76], with 5 captions per image generated via crowdsourc- ing. SNLI is a large annotated TE dataset built upon Flickr30k captions. Each image caption in Flickr30k is used as a text premise in SNLI. The authors of SNLI collect mul- tiple hypotheses in the three classes - entailment, neutral, and contradiction - for a given premise via Amazon Me- chanical Turk [68], resulting in about 570K (Ptext, Htext) pairs. Data validation is conducted in SNLI to measure the label agreement. Speciï¬cally, each (Ptext, Htext) pair is assigned a gold label, indicating the label is agreed by a majority of crowdsourcing workers (at least 3 out of 5). If such a consensus is not reached, the gold label is marked as â-â.
Since SNLI was constructed using Flickr30k captions, for each (Ptext, Htext) pair in SNLI, it is feasible to ï¬nd the corresponding Flickr30k image through the annotations in SNLI. This enables us to create a structured VE dataset based on both. Speciï¬cally, for each (Ptext, Htext) pair in SNLI with an agreed gold label, we replace the text premise with its corresponding Flickr30k image, resulting in a (Pimage, Htext) pair in SNLI-VE. Figures 1 and 2 il- lustrate examples from the SNLI-VE dataset. SNLI-VE nat- urally meets the aforementioned criterion 1 and criterion 2. Each image in SNLI-VE are real-world ones and is associ- ated with distinct labels given different hypotheses. Further- more, Flickr30k and SNLI are well-studied datasets, allow- ing the community to focus on the new task that our paper introduces, rather than spending time familiarizing oneself with the idiosyncrasies of a new dataset.
A sanity check is applied to SNLI-VE dataset partitions in order to guarantee criterion 3. We notice the original SNLI dataset partitions does not consider the arrangement
4
SNLI-VE VQA-v2.0 CLEVR Partition Size: Training Validation Testing 529,527 17,858 17,901 443,757 214,354 555,187 699,989 149,991 149,988 Question Length: Mean Median Mode Max 7.4 7.0 6 56 6.1 6.0 5 23 18.4 17.0 14 43 Vocabulary Size 32,191 19,174 87
# Table 2. Dataset Comparison Summary
of the original caption images. If SNLI-VE directly adopts the original partitions from SNLI, all images in validation or testing partitions also exist in the training partitions, vi- olating criterion 3. To amend this, we disjointedly par- tition SNLI-VE by images following the partition in [22] and make sure instances with different labels are of similar numbers across training, validation, and testing partitions as shown in Table 1.
Regarding criterion 4, since SNLI has already been extensively studied, we are aware that there exists a hypothesis-conditioned bias in SNLI as recently reported by Gururangan et al. [24]. Though the labels in SNLI-VE are distributed evenly across dataset partitions, SNLI-VE still inevitably suffers from this bias inherently. Therefore, we provide a hypothesis-only baseline in Section 5.1 to serve as a performance lower bound.
# 3.3. SNLI-VE and VQA Datasets
Question Length Distribution ââ VOA-v2.0 . -*- SNLI-VE swe CLEVR Proportion of Questions Poe 0.00 evecsesesSSeeccsceâ 0-0-0 - - 00 ° 10 20 30 40 50 60 # of Tokens in a Question
Figure 3. Question Length Distribution
We further compare our SNLI-VE dataset with the two widely used VQA datasets, VQA-v2.0 and CLEVR. The
comparison focuses on the questions (for SNLI-VE dataset, we consider a hypothesis as a question). Table 2 is a sta- tistical summary about the questions from three datasets. Before generating Table 2, questions are prepossessed by three steps: i) split into words, ii) lower case all words, iii) removing punctuation symbols {ââââ,.-?!}. Figure 3 depicts a detailed question length distribution.
According to Table 2, among the three datasets, our SNLI-VE dataset, which contains the smallest total num- ber of questions (summing up training, validation and test- ing partitions), has the largest vocabulary size. The maxi- mum question length in SNLI-VE is 56, which is the largest among these three datasets, and represents real-world de- scriptions. Both the mean and median lengths are larger than VQA-v2.0 dataset. The question length distribution of SNLI-VE, as shown in Figure 3, is quite heavy-tailed in contrast to the others. These observations indicate that the text in SNLI-VE may be difï¬cult to handle compared to VQA-v2.0 for certain models. As for CLEVR dataset, even though most sentences are much longer than SNLI- VE as shown in Figure 3, the vocabulary size is only 87. We believe this is due to the synthetic nature of CLEVR, which also indicates models that achieve high-accuracy on CLEVR may not be able to generalize to our SNLI-VE dataset.
# 4. EVE: Explainable Visual Entailment System
The design of our explainable VE architecture, as shown in Figure 4, is based on the Attention Top-Down/Bottom- Up model discussed later in Subsection 5.4, which is the winner of VQA Challenge, 2017. Similar to the Attention Top-Down/Bottom-Up, our EVE architecture is composed of a text and an image branch. The text branch extracts fea- tures from the input text hypothesis Htext through an RNN. The image branch generates image features from Pimage. The features produced from the two branches are then fused and projected through fully-connected (FC) layers towards predicting the ï¬nal conclusion. The image features can be conï¬gured to take the feature maps from a pre-trained convolutional neural network (CNN) or ROI-pooled image regions from a region of interest (ROI) proposal network (RPN).
We build two model variants, EVE-Image and EVE-ROI, for image and ROI features, respectively. EVE-Image in- corporates a pre-trained ResNet101 [26], which generates k feature maps of size d à d. For each feature map position, the feature vector across all the k feature maps is considered as an object. As a result, there are a total number of d à d objects of feature size k for an input image. In contrast, the EVE-ROI variant takes ROIs as objects extracted from a pre-trained Mask R-CNN [48].
In order to accurately solve this cross-model VE task, we need: both a mechanism to identify the salient features
5
in images and text inputs and a cross-modal embedding to effectively learn the image-text interactions, which are ad- dressed by employing self-attention and text-image atten- tion techniques in the EVE model respectively. We next describe the design and implementation of the mechanisms in EVE model.
# 4.1. Self-Attention
EVE utilizes self-attention [69] in both text and image branches as highlighted with dotted blue frame in Figure 4. Since the hypothesis in SNLI-VE can be relatively long and complex, self-attention helps focus on important keywords in a sentence that relate to each other. The text branch ap- plies self-attention to the projected word embeddings from a multi-layer perceptron (MLP). It is worth noting that al- though word embeddings, either from GloVe or other exist- ing models, may be ï¬xed, the MLP transformation is able to be trained to generate adaptive projected word embed- dings. Similarly, the image branch applies the self-attention to projected image regions either from the aforementioned feature maps or ROIs in expectation of capturing the hidden relations between elements in the same feature space.
Speciï¬cally, we use the scaled dot product (SDP) atten- tion in [69] to capture this hidden information:
Attsdp = softmax( RQT â dk )) (1)
QAtt = AttsdpQ (2)
where Q â RM Ãdk is the query feature matrix and R â RN Ãdk is the reference feature matrix. M and N repre- sent the number of features vectors in matrix Q and R re- spectively, and dk denotes the dimension of each feature vector. Attsdp â RN ÃM is the resulting attention mask for Q given R. Each element aij in Attsdp represents how much weight (before scaled by 1â and normalized by soft- dk max) the model should put on each query feature vector qjâ{1,2,...,M } â Rdk in Q w.r.t. each reference feature vec- tor riâ{1,2,...,N } â Rdk in R. The attended query feature matrix QAtt â RN Ãdk is the weighted and fused version of the original query feature matrix Q, calculated by the matrix dot product between the attention mask Attsdp and the query feature matrix Q. Note that for the self-attention, the query matrix Q â RM Ãdk and the âreferenceâ matrix R â RN Ãdk are the same matrix.
# 4.2. Text-Image Attention
Multi-modal tasks such as phrase grounding [6] demon- strate that high-quality cross-modal feature interactions im- prove the overall performance. The dotted red frame high- lighted area in Figure 4 shows that EVE incorporates the text-image attention to relevant image regions based on the
Word Embedding MLP on âTwo woman are holding packages. (Glove) eacniwcral o Self Attention = * (ales Dot Product) = GRU "object" MLPon | = i CNN/Mask RCNN\>! each = Text-Image Attention |= (Scaled Dot-Product) /= I I I ROI feature vector \ I ROL-2 feature vector 1 I I I
Figure 4. Our model EVE combines image and ROI information to model ï¬ne-grained cross-modal information
text embedding from the GRU. The feature interaction be- tween the text and image regions are computed using the same SDP technique introduced in Section 4.1, serving as the attention weights. The weighted features of image re- gions are then fused with the text features for further deci- sion making. Speciï¬cally, for the text-image attention, the query matrix Q â RM Ãdk is the image features while the âreferenceâ matrix R â RN Ãdk is the text features. Note that although Q and R are from different feature spaces, the dimension of each feature vector is projected to be the same dk in respective branches for ease of the attention calcula- tion.
# 5. Experiments
necessarily efï¬cient for training. As a consequence, we opt for padding to the batch-wise maximum sentence length.
Unless explicitly mentioned, all models are trained using a cross-entropy loss function optimized by the Adam opti- mizer with a batch size of 64. We use an adaptive learning rate scheduler which reduces the learning rate whenever no improvement on the validation dataset for a period of time. The initial learning rate and weight decay are both set to be 1e â 4. The maximum number of training epochs is set to 100. We save a checkpoint whenever the model achieves a higher overall validation accuracy. The ï¬nal model check- point selected for testing is the one with the highest lowest per class accuracy in case the model performance is biased towards particular classes. The batch size is set as 32 for validation and testing. In the following, we discuss the de- tails for each baseline.
In this section, we evaluate EVE as well as several other baseline models on SNLI-VE. Most of the baselines are ex- isting or previous SOTA VQA architectures. The perfor- mance results of all models are listed in Table 3.
# 5.1. Hypothesis Only
All models are implemented in PyTorch. We use the pre- trained GloVe.6B.300D for word embedding [53], where 6B is the corpus size and 300D is the embedding dimen- sion. Input hypotheses are padded to the maximum sentence length in a batch. Note we do not truncate the sentences because unlike VQA where the beginning of questions typ- ically indicates what is asked about, labels of VE task may depend on keywords or small details at the end of sentences. For example, truncating the hypothesis âThe person who is standing next to the tree and wearing a blue shirt is playing â inevitably loses the key detail and changes the con- clusion. In addition, the maximum sentence length in SNLI is 56, which is much larger than 23 in VQA-v2.0 as shown in Table 2. Always padding to the dataset maximum is not
This baseline veriï¬es the existing data bias in the SNLI dataset, as mentioned by Gururangan et al. [24] and Vu et al. [71], by using hypotheses only without the image premise information.
The model consists of a text processing component fol- lowed by two FC layers. The text processing component is used to extract the text feature from the given hypothesis. It ï¬rst generates a sequence of word-embeddings for the given text hypothesis. The embedding sequence is then fed into a GRU [10] to output the text features of dimension 300. The input and output dimensions of the two FC layers are [300, 300] and [300, 3] respectively.
Without any premise information, this baseline is sup- posed to make a random guess out of the three classes but
6
Model Name Val Acc Overall (%) Val Acc Per Class (%) Test Acc C N E Overall (%) Test Acc Per Class (%) N C E Hypothesis Only Image Captioning Relational Network Attention Top-Down Attention Bottom-Up EVE-Image* EVE-ROI* 66.68 67.83 67.56 70.53 69.34 71.56 70.81 67.54 66.61 67.86 70.23 71.26 71.04 68.55 66.90 69.23 67.80 68.66 70.10 70.55 68.78 65.60 67.65 67.02 72.71 66.67 73.10 75.10 66.71 67.67 67.55 70.30 68.90 71.16 70.47 67.60 66.25 67.29 69.72 70.52 71.56 67.69 67.71 70.69 68.86 69.33 70.96 70.52 69.45 64.83 66.08 66.50 71.86 65.23 71.39 74.25
# Table 3. Model Performance on SNLI-VE dataset
the resulting accuracy is up to 67%, implying the existence of a dataset bias. We do not intend to rewrite the hypothe- ses in SNLI to reduce the bias but instead, aim at using the premise (image) features to outperform the hypothesis only baseline.
capture pairwise feature interactions between image regions and the text embedding. Each pair of image region feature and question embedding goes through an MLP. The ï¬nal classiï¬cation takes the element-wise sum over the MLP out- put for each pair as input.
# 5.2. Image Captioning
Since the original SNLI premises are image captions, a straightforward idea to address VE is to ï¬rst apply an image caption generator to convert image premises to text premises and then followed by a TE classiï¬er. Particularly, we adopt the PyTorch tutorial implementation [9] as a cap- tion generator. A pre-trained ResNet152 serves as the im- age encoder while the caption decoder is a long short-term memory (LSTM) network. Once the image caption is gen- erated, the image premise is replaced with the caption and the original VE task is reduced to a TE task. Similar to the Hypothesis-Only baseline, the TE classiï¬er is composed of two text processing components to extract text features from both the premise and hypothesis. The text features are fused and go through two FC layers with input and output dimen- sions of [600, 300] and [300, 3] for the ï¬nal prediction.
The resulting performance achieves a slightly higher ac- curacy of 67.83% and 67.67% on the validation and test- ing partitions over the Hypothesis-Only baseline, implying that the generated image caption premise does not improve much. We suspect that the generated captions may not cover the necessary information in the image as required by the hypothesis to make the correct conclusion. This is possible in a complex scene where exhaustive enumeration of cap- tions may be needed to cover every detail potentially de- scribed by the hypothesis.
Despite the high accuracy on the synthetic dataset CLEVR, RN only achieves a marginal improvement on SNLI-VE at the accuracy of 67.56% and 67.55% on the validation and testing partitions. This may be attributed to the limited representational power of RN that fails to pro- duce effective cross-modal feature fusion of the natural im- age premises and the free-form text hypothesis input from SNLI-VE.
# 5.4. Attention Top-Down and Bottom-Up
We consider the Attention Top-Down and Attention Bottom-Up baselines based on the winner of VQA chal- lenge 2017 [1]. Similar to the RN baseline, there is an image branch and a text branch. The difference between the im- age branches in Attention Top-Down and Attention Bottom- Up is similar to our EVE. The image features of Attention Top-Down come from the feature maps generated from a pre-trained CNN. As for Attention Bottom-Up, the image features are the top 10 ROIs extracted from a pre-trained Mask-RCNN implementation [25]. No self-attention is ap- plied in both image and text branches. Moreover, the text- image attention is implemented by feeding the concatena- tion of both image and text features into an FC layer to derive the attention weights rather than using SDP as de- scribed in Section 4.1. Then the attended image features and text features are projected separately and fused by dot product. The fused features go through two different MLPs. The element-wise sum of both MLP output serves as the ï¬- nal features for classiï¬cation.
# 5.3. Relational Network
The Relational Network (RN) baseline is based on [60] which is proposed to tackle the CLEVR dataset with high accuracy. There are an image branch and a text branch in the model. The image branch extracts image features in a similar manner as EVE, as described in Section 4, but with- out self-attention. The text branch generates the hypothe- sis embedding through an RNN. The highlight of RN is to
The SOTA VQA winner model, Attention Top-Down, achieves an accuracy of 70.53% and 70.30% on the vali- dation and testing partitions respectively, implying cross- modal attention is the key to effectively leveraging image premise features. The Attention Bottom-Up model using ROIs also achieves a good accuracy of 69.34% and 68.90% on the validation and testing partitions. The reason why
7
Attention Bottom-Up performs worse than Attention Top- Down could be possibly due to lack of background infor- mation in ROI features and ROI feature quality. It is not guaranteed that those top ROIs cover necessary details de- scribed by the hypothesis. However, even with more than 10 ROIs, we observe no signiï¬cant improvement in perfor- mance.
# 5.5. EVE-Image and EVE-ROI
The details of our EVE architecture have been described in Section 4. EVE-Image achieves the best performance of 71.56% and 71.16% accuracy on the validation and test- ing partitions respectively. The performance of EVE-ROI is similar, with an accuracy of 70.81% and 70.47%, possibly suffering from similar issues as the Attention Bottom-Up model. However, the improvement is likely due to the in- troduction of self-attention and text-image attention through SDP that potentially captures the hidden relations in the same feature space and better attended cross-modal feature interaction.
Figure 5. An attention visualization for EVE-Image
Figure 6. An attention visualization for EVE-ROI
Attention Visualization. The explainability of EVE is at- tained using attention visualizations in the areas of interest in the image premise given the hypothesis. Figure 5 and 6 illustrate two visualization examples of the text-image at- tention from EVE-Image and EVE-ROI respectively. The image premise of the EVE-Image example is shown on the left of Figure 5, and the corresponding hypothesis is âA hu- man playing guitarâ. On the right of Figure 5, our EVE- Image model successfully attends to the guitar area, lead- ing to the correct conclusion: entailment. In Figure 6, our EVE-ROI focuses on the children and the sand area in the image premise, leading to the contradiction conclusion for the given hypothesis âTwo children are swimming in the ocean.â
8
# 5.6. Discussion
In this section, we discuss why existing VQA and CLEVER models have modest performs over SNLI-VE dataset and the possible future directions based on our ex- perience. VQA models are not trained to distinguish ï¬ne- grained information. Furthermore, with the same image present across all the three classes in the SNLI-VE dataset, SNLI-VE removes any bias that may originate from just the image premise information and an effectively fused rep- resentation is important for high accuracy. Furthermore, models that provide good performance on CLEVR may not work on SNLI-VE since these models have rather simplistic image processing pipelines, often with a couple of convolu- tional layers that may be sufï¬cient to process synthetic im- ages but works poorly on real images. More importantly, the sentences are not synthetic in the SNLI-VE dataset. As a result, building compositional reasoning modules over SNLI-VE hypotheses is out of reach for existing models.
To effectively address SNLI-VE, we believe three ap- proaches can be beneï¬cial. First, using external knowledge beyond pre-trained models and/or visual entity extraction can be beneï¬cial. If the external knowledge can provide in- formation allowing the model to learn relationships between the entities that may be obvious to humans but difï¬cult or impossible to learn from the dataset (such as âtwo women in the image are sistersâ), it will improve the model perfor- mance over SNLI-VE.
Second, it is possible for the hypothesis to contain mul- tiple class labels assigned to its different entities or rela- tionships w.r.t. the premise. However, SNLI-VE lacks an- notations for localizing the labels to speciï¬c entities in the hypothesis (e.g. as is often provided in synthetic datasets like bABi [72]). Since the hypothesis can be broken down into individual entities and relationships between pairs of entities, providing ï¬ne-grained labels for each target in the hypothesis likely facilitates strongly-supervised training.
Finally, a third possible approach is to build effective attention based models as done in TE that encodes the sentences together and align similar words in hypothesis and premise instead of a late-fusion of separately encoded modalities. Hence, the active research on visual grounding can beneï¬t addressing the SNLI-VE task.
# 6. Conclusion
We introduce a novel task, visual entailment, that re- quires ï¬ne-grained reasoning over the image and text. We build the SNLI-VE dataset for VE using real-world im- ages from Flickr30k as premises, and the corresponding text hypotheses from SNLI. We then develop the EVE archi- tecture to address VE and evaluate against multiple base- lines, including existing SOTA VQA based models. We ex- pect more effort to be devoted to generating ï¬ne-grained
VE annotations for large image datasets such as the Visual Genome [41] and Open Images Dataset [40] as well as im- proved models on ï¬ne-grained visual reasoning.
# Acknowledgments
Ning Xie and Derek Doran were supported by the Ohio Federal Research Network project Human-Centered Big Data. Any opinions, ï¬ndings, and conclusions or recom- mendations expressed in this article are those of the au- thor(s) and do not necessarily reï¬ect the views of the Ohio Federal Research Network.
# References
[1] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang. Bottom-up and top-down atten- tion for image captioning and visual question answering. In CVPR, volume 3, page 6, 2018. 1, 2, 7
[2] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international confer- ence on computer vision, pages 2425â2433, 2015. 1, 2 [3] J. Bos and K. Markert. Recognising textual entailment with logical inference. In Proceedings of the conference on Hu- man Language Technology and Empirical Methods in Nat- ural Language Processing, pages 628â635. Association for Computational Linguistics, 2005. 2
[4] S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. A large annotated corpus for learning natural language infer- ence. arXiv preprint arXiv:1508.05326, 2015. 2, 4
[5] Y.-W. Chao, Y. Liu, X. Liu, H. Zeng, and J. Deng. Learn- arXiv preprint ing to detect human-object interactions. arXiv:1702.05448, 2017. 3
[6] K. Chen, R. Kovvuri, and R. Nevatia. Query-guided regres- sion network with context policy for phrase grounding. In Proceedings of the IEEE International Conference on Com- puter Vision (ICCV), 2017. 5
[7] L. Chen, H. Zhang, J. Xiao, L. Nie, J. Shao, W. Liu, and T.-S. Chua. Sca-cnn: Spatial and channel-wise attention In 2017 in convolutional networks for image captioning. IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR), pages 6298â6306. IEEE, 2017. 3
[8] Q. Chen, X. Zhu, Z. Ling, S. Wei, H. Jiang, and D. Inkpen. Enhanced lstm for natural language inference. arXiv preprint arXiv:1609.06038, 2016. 2
[9] Y. Choi. tation. pytorch-tutorial/tree/master/tutorials/ 03-advanced/image_captioning. 2018-10-30. 7
[10] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014. 6
[11] C. Condoravdi, D. Crouch, V. De Paiva, R. Stolle, and D. G. Bobrow. Entailment, intensionality and text understanding. In Proceedings of the HLT-NAACL 2003 workshop on Text
9
meaning-Volume 9, pages 38â45. Association for Computa- tional Linguistics, 2003. 2
[12] I. Dagan, O. Glickman, and B. Magnini. The pascal recog- In Machine learn- nising textual entailment challenge. ing challenges. evaluating predictive uncertainty, visual ob- ject classiï¬cation, and recognising tectual entailment, pages 177â190. Springer, 2006. 2
[13] B. Dai, Y. Zhang, and D. Lin. Detecting visual relation- ships with deep relational networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recogni- tion, 2017. 3
[14] A. Das, H. Agrawal, L. Zitnick, D. Parikh, and D. Batra. Human attention in visual question answering: Do humans and deep networks look at the same regions? Computer Vision and Image Understanding, 163:90â100, 2017. 3
[15] EvalAI. VQA challenge leaderboard 2018. https: //evalai.cloudcv.org/web/challenges/ challenge-page/80/leaderboard/124. cessed: 2018-11-11. 2 Ac-
[16] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, T. Mikolov, et al. Devise: A deep visual-semantic embed- ding model. In Advances in neural information processing systems, pages 2121â2129, 2013. 2
[17] A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847, 2016. 1, 2
[18] Y. Fyodorov, Y. Winter, and N. Francez. A natural logic In Proceedings of the 2nd Workshop on inference system. Inference in Computational Semantics (ICoS-2). Citeseer, 2000. 2
[19] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are you talking to a machine? dataset and methods for mul- tilingual image question. In Advances in neural information processing systems, pages 2296â2304, 2015. 2
[20] Y. Gao, O. Beijbom, N. Zhang, and T. Darrell. Compact In Proceedings of the IEEE conference bilinear pooling. on computer vision and pattern recognition, pages 317â326, 2016. 2
[21] G. Gkioxari, R. Girshick, P. Doll´ar, and K. He. Detecting and recognizing human-object interactions. arXiv preprint arXiv:1704.07333, 2017. 3
[22] Y. Gong, L. Wang, M. Hodosh, J. Hockenmaier, and S. Lazebnik. Improving image-sentence embeddings using large weakly annotated photo collections. In European Con- ference on Computer Vision, pages 529â545. Springer, 2014. 4
[23] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, volume 1, page 3, 2017. 1, 2
[24] S. Gururangan, S. Swayamdipta, O. Levy, R. Schwartz, S. R. Bowman, and N. A. Smith. Annotation artifacts in natural language inference data. arXiv preprint arXiv:1803.02324, 2018. 4, 6
[25] K. He, G. Gkioxari, P. Doll´ar, and R. Girshick. Mask r-cnn. In Computer Vision (ICCV), 2017 IEEE International Con- ference on, pages 2980â2988. IEEE, 2017. 7
[26] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learn- ing for image recognition. In Proceedings of the IEEE con- ference on computer vision and pattern recognition, pages 770â778, 2016. 1, 5
[27] R. Hu, J. Andreas, M. Rohrbach, T. Darrell, and K. Saenko. Learning to reason: End-to-end module networks for visual question answering. CoRR, abs/1704.05526, 3, 2017. 2 [28] R. Hu, M. Rohrbach, J. Andreas, T. Darrell, and K. Saenko. Modeling relationships in referential expressions with com- In Proceedings of the IEEE positional modular networks. Conference on Computer Vision and Pattern Recognition, 2016. 3
[29] D. A. Hudson and C. D. Manning. Compositional at- arXiv preprint tention networks for machine reasoning. arXiv:1803.03067, 2018. 1, 2
[30] H. Jiang, B. Kim, and M. Gupta. To trust or not to trust a classiï¬er. arXiv preprint arXiv:1805.11783, 2018. 3 [31] Y. Jiang, V. Natarajan, X. Chen, M. Rohrbach, D. Batra, and D. Parikh. Pythia v0. 1: the winning entry to the vqa chal- lenge 2018. arXiv preprint arXiv:1807.09956, 2018. 1, 2 [32] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 1988â1997. IEEE, 2017. 1, 2
[33] J. Johnson, B. Hariharan, L. van der Maaten, J. Hoffman, L. Fei-Fei, C. L. Zitnick, and R. B. Girshick. Inferring and In ICCV, pages executing programs for visual reasoning. 3008â3017, 2017. 1, 2
[34] J. Johnson, A. Karpathy, and L. Fei-Fei. Densecap: Fully convolutional localization networks for dense captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4565â4574, 2016. 3
[35] J. Johnson, R. Krishna, M. Stark, L.-J. Li, D. Shamma, M. Bernstein, and L. Fei-Fei. Image retrieval using scene graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3668â3678, 2015. 3
[36] A. Karpathy and L. Fei-Fei. Deep visual-semantic align- ments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recog- nition, pages 3128â3137, 2015. 2, 3
[37] J.-H. Kim, J. Jun, and B.-T. Zhang. Bilinear attention net- works. arXiv preprint arXiv:1805.07932, 2018. 1, 2 [38] S. W. Kim, M. Tapaswi, and S. Fidler. Progressive reasoning by module composition. arXiv preprint arXiv:1806.02453, 2018. 2
[39] P. W. Koh and P. Liang. Understanding black-box predictions via inï¬uence functions. arXiv preprint arXiv:1703.04730, 2017. 3
[40] I. Krasin, T. Duerig, N. Alldrin, V. Ferrari, S. Abu-El- Haija, A. Kuznetsova, H. Rom, J. Uijlings, S. Popov, S. Kamali, M. Malloci, J. Pont-Tuset, A. Veit, S. Be- longie, V. Gomes, A. Gupta, C. Sun, G. Chechik, D. Cai, Z. Feng, D. Narayanan, and K. Murphy. Openim- ages: A public dataset for large-scale multi-label and multi-class image classiï¬cation. Dataset available from https://storage.googleapis.com/openimages/web/index.html, 2017. 9
10
[41] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, M. Bern- stein, and L. Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. 2016. 1, 2, 9
Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â1105, 2012. 1
[43] Y. Li, W. Ouyang, B. Zhou, K. Wang, and X. Wang. Scene graph generation from objects, phrases and region captions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1261â1270, 2017. 3
[44] X. Liang, L. Lee, and E. P. Xing. Deep variation-structured reinforcement learning for visual relationship and attribute detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 3
[45] T.-Y. Lin, A. RoyChowdhury, and S. Maji. Bilinear cnn mod- els for ï¬ne-grained visual recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 1449â1457, 2015. 2
[46] B. MacCartney and C. D. Manning. An extended model of natural logic. In Proceedings of the eighth international con- ference on computational semantics, pages 140â156. Asso- ciation for Computational Linguistics, 2009. 2
[47] M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncer- tain input. In Advances in neural information processing sys- tems, pages 1682â1690, 2014. 2
[48] I. Matterport. Mask rcnn pytorch implementation.
https://github.com/multimodallearning/ pytorch-mask-rcnn. Accessed: 2018-10-30. 5 [49] G. Montavon, S. Lapuschkin, A. Binder, W. Samek, and K.- R. M¨uller. Explaining nonlinear classiï¬cation decisions with deep taylor decomposition. Pattern Recognition, 65:211â 222, 2017. 3
[50] Y. Nie and M. Bansal. Shortcut-stacked sentence arXiv preprint encoders for multi-domain inference. arXiv:1708.02312, 2017. 2
[51] D. H. Park, L. A. Hendricks, Z. Akata, B. Schiele, T. Dar- rell, and M. Rohrbach. Attentive explanations: Justify- ing decisions and pointing to the evidence. arXiv preprint arXiv:1612.04757, 2016. 3
[52] M. Pedersoli, T. Lucas, C. Schmid, and J. Verbeek. Ar- arXiv preprint eas of attention for image captioning. arXiv:1612.01033, 2016. 2
[53] J. Pennington, R. Socher, and C. Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language pro- cessing (EMNLP), pages 1532â1543, 2014. 6
[54] E. Perez, F. Strub, H. De Vries, V. Dumoulin, and A. Courville. Film: Visual reasoning with a general con- ditioning layer. arXiv preprint arXiv:1709.07871, 2017. 2
[55] N. Pham and R. Pagh. Fast and scalable polynomial kernels via explicit feature maps. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 239â247. ACM, 2013. 2
[56] M. Raghu, J. Gilmer, J. Yosinski, and J. Sohl-Dickstein. Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In Advances in Neural Information Processing Systems, pages 6076â6085, 2017. 3
[57] M. Ren, R. Kiros, and R. Zemel. Exploring models and data for image question answering. In Advances in neural infor- mation processing systems, pages 2953â2961, 2015. 1, 2 [58] M. T. Ribeiro, S. Singh, and C. Guestrin. Why should i trust you?: Explaining the predictions of any classiï¬er. In Pro- ceedings of the 22nd ACM SIGKDD international confer- ence on knowledge discovery and data mining, pages 1135â 1144. ACM, 2016. 3
[59] T. Rockt¨aschel, E. Grefenstette, K. M. Hermann, T. KoËcisk`y, and P. Blunsom. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664, 2015. 2 [60] A. Santoro, D. Raposo, D. G. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. Lillicrap. A simple neu- In Advances ral network module for relational reasoning. in neural information processing systems, pages 4967â4976, 2017. 1, 2, 3, 7
[61] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, D. Batra, et al. Grad-cam: Visual explana- tions from deep networks via gradient-based localization. In ICCV, pages 618â626, 2017. 3
[62] T. Shen, T. Zhou, G. Long, J. Jiang, S. Wang, and C. Zhang. Reinforced self-attention network: a hybrid of hard and soft attention for sequence modeling. arXiv preprint arXiv:1801.10296, 2018. 2
[63] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 1
[64] J. Singh, V. Ying, and A. Nutkiewicz. Attention on attention: Architectures for visual question answering (vqa). arXiv preprint arXiv:1803.07724, 2018. 2
[65] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. In Proceedings of the Going deeper with convolutions. IEEE conference on computer vision and pattern recogni- tion, pages 1â9, 2015. 1
[66] M. Tapaswi, Y. Zhu, R. Stiefelhagen, A. Torralba, R. Ur- tasun, and S. Fidler. Movieqa: Understanding stories in movies through question-answering. In Proceedings of the IEEE conference on computer vision and pattern recogni- tion, pages 4631â4640, 2016. 2
[67] D. Teney, P. Anderson, X. He, and A. van den Hengel. Tips and tricks for visual question answering: Learnings from the 2017 challenge. arXiv preprint arXiv:1708.02711, 2017. 1, 2
[68] A. M. Turk. Amazon mechanical turk. Retrieved August, 17:2012, 2012. 4
[69] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Å. Kaiser, and I. Polosukhin. Attention is all In Advances in Neural Information Processing you need. Systems, pages 5998â6008, 2017. 5
[70] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: Lessons learned from the 2015 mscoco image caption-
11
IEEE transactions on pattern analysis and ing challenge. machine intelligence, 39(4):652â663, 2017. 3
[71] H. T. Vu, C. Greco, A. Erofeeva, S. Jafaritazehjan, G. Linders, M. Tanti, A. Testoni, R. Bernardi, and arXiv preprint A. Gatt. Grounded textual entailment. arXiv:1806.05645, 2018. 2, 6
[72] J. Weston, A. Bordes, S. Chopra, A. M. Rush, B. van Merri¨enboer, A. Joulin, and T. Mikolov. Towards ai- complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015. 8
[73] Q. Wu, D. Teney, P. Wang, C. Shen, A. Dick, and A. van den Hengel. Visual question answering: A survey of methods and datasets. Computer Vision and Image Understanding, 163:21â40, 2017. 1
[74] D. Xu, Y. Zhu, C. B. Choy, and L. Fei-Fei. Scene graph generation by iterative message passing. Proceedings of the IEEE Conference on Computer Vision and Pattern Recogni- tion, 2017. 3
[75] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudi- nov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Interna- tional conference on machine learning, pages 2048â2057, 2015. 3
[76] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. From im- age descriptions to visual denotations: New similarity met- rics for semantic inference over event descriptions. Transac- tions of the Association for Computational Linguistics, 2:67â 78, 2014. 2, 4
[77] M. D. Zeiler and R. Fergus. Visualizing and understanding In European conference on com- convolutional networks. puter vision, pages 818â833. Springer, 2014. 3
[78] H. Zhang, Z. Kyaw, S.-F. Chang, and T.-S. Chua. Visual translation embedding network for visual relation detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 3
[79] J. Zhang, S. A. Bargal, Z. Lin, J. Brandt, X. Shen, Top-down neural attention by excita- International Journal of Computer Vision, and S. Sclaroff. tion backprop. 126(10):1084â1102, 2018. 3
[80] J. Zhang, M. Elhoseiny, S. Cohen, W. Chang, and A. Elgam- mal. Relationship proposal networks. In CVPR, volume 1, page 2, 2017. 3
[81] Y. Zhu, O. Groth, M. Bernstein, and L. Fei-Fei. Visual7w: In Proceedings Grounded question answering in images. of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4995â5004, 2016. 1, 2
# Supplementary: Additional Examples
Figure 7 shows random examples from SNLI-VE with predictions from our EVE-Image. Each example consists of an image premise and three selected hypotheses of different labels. Note that for each image premise, the total number of hypotheses are not limited to three.
Cameras are set up on tripods along the side of the People in black shirts are having a confrontation. A police officer dressed in a bright shirt and a black People sit outside the leaning tower of Piza road. -> entailment (pred: entailment) hat wets his lips. _ being photographed. -> entailment (pred: entailment) Two men, who are twins, are wearing matching -> entailment (pred: contradiction) -> entailment (pred: entailment) Cameras are set up to film a high speed police chase. black shirt and are about to fight over a girl. A police officer watches sailors boarding a ship. A girl is relaxing in the park. -> neutral (pred: neutral) -> neutral (pred: neutral) -> neutral (pred: neutral) -> neutral (pred: contradiction) People are holding cameras and taking pictures of Dogs in black shirts are having a confrontation. A police officer driving inacar. A women pushed the Leaning Tower of Pisa people walking inside. -> contradiction (pred: contradiction) -> contradiction (pred: contradiction) until it stood straight. -> contradiction (pred: contradiction) -> contradiction (pred: contradiction) IQUIGAS An energetic boy runs around a group of people. There is a person playing sports outdoors. A group is with a dog outside. A woman fell while playing volleyball. -> entailment (pred: entailment) -> entailment (pred: entailment) -> entailment (pred: entailment) -> entailment (pred: contradiction) A little boy is bored and decides to run around while A man bring to the ball back that was thrown out Good friends at a park gathering having a picnic. A woman hits the ground while trying to return a watching the school play of Romeo and Juliet. of zone. -> neutral (pred: neutral) spike during a game of volleyball. -> neutral (pred: neutral) -> neutral (pred: entailment) 4 women, one child and a black and white dog run -> neutral (pred: neutral) The child was crying. Aman shoots a basketball at a net. outside at a social event. Girls writing letters. -> contradiction (pred: contradiction) -> contradiction (pred: contradiction) -> contradiction (pred: neutral) -> contradiction (pred: contradiction) Children playing in a store with floor displays. > entailment (pred: entailmen) > entailment (pred: entailment) . entailment (pred: neutral) leopard printed sash, one The game is complicated and needs to be Two people are checking out the bed to see if A family member documents a wedding by -> entailment (pred: entailment) learned by demonstration eutral G ye utral) taking a photo. a man in fancy attire holds a drum. -> neutral (pred: neutral) ral (prec: ° â -> neutral (pred: neutral) -> neutral (pred: entailment) A young boy is throwing a ball to his do! Two children play in a Goodwill, laying under Some people inside a church at a wedding Aman holds a large monkey. young Doy 9 9- the racks of clothes that line the walls. . 4 -> contradiction (pred: neutral -> contradiction (pred: contradiction Pp p -> contradiction (pred: neutral) NINA = 4 Group of man playing chess in a pool. There are people parading around. A few people are getting off a plane. People stand on the sidewalk, wearing bright clothing. -> entailment (pred: contradiction) -> entailment (pred: entailment) -> entailment (pred: entailment) -> entailment (pred: entailment) The men are trying to beat the hot, summer heat and A group of men is celebrating a team victory by Everybody is boarding in the correct order. People stand on a sidewalk near the beach in bright stil play on eas, fence yng chess in the pool. marching down the street waving flags. -> neutral (pred: contradiction) summer clothes. tho menae lay ine checkers in the shade at the ~> neutral (pred: neutral) The plane was destroyed. > neutral (pred: neutral) rk Playing children are running past a flag. -> contradiction (pred: contradiction) The group of people are inside of a building. park. -> contradiction (pred: entailment) -> contradiction (pred: contradiction) -> contradiction (pred: contradiction)
Figure 7. Random examples from SNLI-VE with prediction results from our best-performed EVE-Image
12 | {
"id": "1508.05326"
} |
1901.05415 | Learning from Dialogue after Deployment: Feed Yourself, Chatbot! | The majority of conversations a dialogue agent sees over its lifetime occur
after it has already been trained and deployed, leaving a vast store of
potential training signal untapped. In this work, we propose the self-feeding
chatbot, a dialogue agent with the ability to extract new training examples
from the conversations it participates in. As our agent engages in
conversation, it also estimates user satisfaction in its responses. When the
conversation appears to be going well, the user's responses become new training
examples to imitate. When the agent believes it has made a mistake, it asks for
feedback; learning to predict the feedback that will be given improves the
chatbot's dialogue abilities further. On the PersonaChat chit-chat dataset with
over 131k training examples, we find that learning from dialogue with a
self-feeding chatbot significantly improves performance, regardless of the
amount of traditional supervision. | http://arxiv.org/pdf/1901.05415 | Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazaré, Jason Weston | cs.CL, cs.AI, cs.HC, cs.LG, stat.ML | ACL 2019 | null | cs.CL | 20190116 | 20190613 | 9 1 0 2 n u J 3 1 ] L C . s c [
4 v 5 1 4 5 0 . 1 0 9 1 : v i X r a
# Learning from Dialogue after Deployment: Feed Yourself, Chatbot!
Braden Hancockâ Computer Science Dept. Stanford University bradenjh@cs.stanford.edu
Antoine Bordes, Pierre-Emmanuel Mazar´e Jason Weston Facebook AI Research {abordes,pem,jase}@fb.com
# Abstract
The majority of conversations a dialogue agent sees over its lifetime occur after it has already been trained and deployed, leaving a vast store of potential training signal untapped. In this work, we propose the self-feeding chatbot, a dialogue agent with the ability to extract new training examples from the conversations it participates in. As our agent engages in con- versation, it also estimates user satisfaction in its responses. When the conversation appears to be going well, the userâs responses become new training examples to imitate. When the agent believes it has made a mistake, it asks for feedback; learning to predict the feedback that will be given improves the chatbotâs dialogue abilities further. On the PERSONACHAT chit- chat dataset with over 131k training examples, we ï¬nd that learning from dialogue with a self- feeding chatbot signiï¬cantly improves perfor- mance, regardless of the amount of traditional supervision.
1
# 1 Introduction
: & A Have you been to France? Satisfaction: 0.85 Yes | havel Itâs beautiful. B C Lol. | never eat them! Satisfaction: 0.10 What are you talking about? D E Oops! | messed up. id? What should | have said? Maybe ask me what | F thought about French food? Extracted Training Examples DIALOGUE FEEDBACK Context Context A Have you been to France? A Have you been to France? B Yes, | have! Itâs beautiful. Response Feedback F Maybe ask me what | thought about French food? B Yes, | have! Itâs beautiful. X
Figure 1: As the self-feeding chatbot engages in dia- logue, it estimates user satisfaction to know when to ask for feedback. From the satisï¬ed responses and feedback responses, new training examples are ex- tracted for the DIALOGUE and FEEDBACK tasks, re- spectively, both of which improve the modelâs dialogue abilities further.
Training a dialogue agent to converse like a human requires extensive supervision. The most com- mon approach is to train models to imitate hu- mans in large corpora of crowdsourced or scraped conversations (Serban et al., 2015). These fully- supervised conversations tend to be expensive to collect in sufï¬cient quantity and/or occur in set- tings with signiï¬cant differences from the deploy- ment environment (Ross et al., 2009). Instead, dialogue agents would ideally learn directly from dialogue, the conversations they participate in af- ter deployment, which are usually abundant, task- speciï¬c, dynamic, and cheap. This corresponds to the way humans learn to converseânot merely ob- serving others engaging in âexpert-levelâ conver-
â*BH completed most of this work at Facebook (FAIR).
sations, but instead actively adjusting and correct- ing our speech based on feedback woven through- out our own conversations (Bassiri, 2011; Werts et al., 1995). Giving a dialogue agent this ability would enable it to continuously improve and adapt over its lifetime, rather than requiring additional annotation costs for each and every improvement.
However, naively training a dialogue agent on its own conversations yields poor results. For ex- ample, training a model on its own output can sim- ply reinforce its existing failure modes, and mis- takes by the agent can lead to absurd conversa- tions that no longer resemble the target domain (Hashimoto and Sassano, 2018). To combat this, one approach is to allow the agent to request feed-
back during conversations (Zhang et al., 2018a; Li et al., 2017b), e.g., when it believes it is about to make a mistake. This approach, however, falls victim to the Dunning-Kruger effect (Kruger and Dunning, 1999), which in this case suggests that a bad model will also be bad at knowing when it is doing a bad job. Regardless of when feedback is requested, existing methods typically require ac- companying scalar rewards or adherence to partic- ular templates or structure to ensure that the feed- back is usable by the model (Rieser and Lemon, 2011; Zhang et al., 2017; Liu et al., 2018). These requirements may be acceptable for paid annota- tors, but they impose unnatural workï¬ows on un- paid conversation partners in a standard dialogue environment. Humans are able to request and pro- vide feedback using only natural language; ideally, dialogue agents would be able to do the same.
In this work we propose the self-feeding chat- bot, a dialogue agent with the ability to extract new examples from the conversations it participates in after deployment (Figure 1). Concretely, in addi- tion to being trained on the primary DIALOGUE task, the agent is trained to predict its speaking partnerâs satisfaction with its responses. When the conversation seems to be going well, the userâs re- sponses (but not the botâs own utterances) become the targets in new training examples for the DIA- LOGUE task. When the agent believes it has made a mistake, it instead requests feedback on what it could have said instead. Predicting the feedback that will be provided in a given context becomes an auxiliary task (FEEDBACK) on which the model is also trained. Importantly, these new examples improve the agentâs dialogue abilities while using only natural responses from the user that do not require special structure, accompanying numerical feedback, or additional human intervention in or- der to be used.
With this approach, the conversations the chat- bot participates in are sliced into two complemen- tary datasetsâone largely protected from the chat- botâs mistakes (DIALOGUE examples), and one which directly addresses them (FEEDBACK ex- amples). We validate our approach on the PER- SONACHAT (Zhang et al., 2018b) dialogue dataset, ï¬nding empirically that regardless of the num- ber of available supervised examples, the dia- logue ability of the chatbot is always improved by adding the automatically extracted examples of ei- ther type, and improves the most by adding both.
The main contributions of this work thus in- clude the following:
⢠We propose the self-feeding chatbot, a dia- logue agent with the ability to extract new training examples for itself from the conver- sations it participates in during deployment.
⢠We show that dialogue ability improves by imitating human responses when the human is satisï¬ed, or by asking for feedback when they are not, predicting it as an auxiliary task.
⢠We demonstrate that classifying user satisfac- tion is a learnable task important for the self- feeding process, signiï¬cantly outperforming an approach based on model uncertainty.
⢠We release three new datasets to further re- search in this direction: (1) deployment chat logs (513k messages); (2) ratings of user sat- isfaction (42k); (3) textual feedback on what a bot could have said in a given context (62k).
The datasets and models described in this paper are available via the ParlAI platform (Miller et al., 2017), along with training code. Hyperparameter values are included in Appendix G.
# 2 Related Work
The general concepts of lifelong learning (Silver et al., 2013) and never-ending (language) learning (Carlson et al., 2010) are related to the topics dis- cussed in this work, as is active learning (Tong and Koller, 2001) and predictive modeling (Schmidhu- ber and Huber, 1991).
The speciï¬c case of learning actively from dialogue during deployment was explored for the question answering (QA) setting in (Weston, 2016) and (Li et al., 2017a), where the authors examined multiple learning strategies on a suite of dialogue tasks with varying types of feedback, such as verbal cues (e.g., âYes, thatâs right!â) and scalar rewards. Most relevant to our work was their use of forward prediction, where the learner improved in quality by trying to predict the teacherâs responses without an explicit reward sig- nal. Our work extends this idea, adding the ability for the model to recognize its mistakes and request feedback explicitly, and moving beyond QA to the more general chit-chat setting where there may be many valid responses in a given context.
Learning to ask questions is another area that has been studied (Strub et al., 2017; Wang et al.,
Data @ Train Self-Feeding Chatbot DIALOGUE ( Oem, Fe = vue. y)up X (context) y User @ Deploy Yue (response) DIALOGUE (HH) ePDu, inter y 8 ntertace SATISFACTION &5)_ Models ° re] SJ @A. f FEEDBACK = ® Retrain X (context) (xf) (cdc
Figure 2: (1) The chatbot is ï¬rst trained with any available supervised data (boxed in red) on the Human-Human (HH) DIALOGUE (x, y)HH and SATISFACTION (x, s) tasks. (2) During deployment, whenever the predicted satisfaction score of the current conversation x is above the threshold (Ës > t), a new Human-Bot (HB) DIALOGUE example (x, y)HB is extracted and the bot continues the conversation with its own response Ëy. Otherwise, the chatbot requests feedback with question q and extracts a new FEEDBACK example (x, f ). (3) The chatbot is periodically retrained with the available examples from all four datasets, improving its DIALOGUE performance without collecting any new supervised examples.
2018; Rao and Daum´e, 2018). While those works focused on identifying which question to ask in a given context, in this work we are more interested in ï¬rst learning when to ask a question. Li et al. (2017b) considered this question as well, but again in the context of a QA setting rather than dialogue.
Finally, our work improves dialogue quality by utilizing larger datasets with noisier labels than traditional supervision. Other applications of weak supervision to dialogue (Mallinar et al., 2019) and relation extraction have observed simi- lar results (Bunescu and Mooney, 2007; Hancock et al., 2018; Ratner et al., 2017).
Hashimoto and Sassano (2018) used user re- sponses to detect mistakes made by a deployed virtual assistant, showing that model mistakes can be identiï¬ed in chit-chat, weather, or web search domains. However, they did not explore how to use these identiï¬ed mistakes to improve the model further; their agent was not equipped to feed itself. Eskenazi et al. (2018) also found that the correctly assessing the appropriateness of chatbot responses is highly dependent on user responses and not pre- ceding context alone.
There are other, somewhat less related, ways to use feedback during dialogue for learning, no- tably for collecting knowledge to answer questions (Mazumder et al., 2018; Hixon et al., 2015; Pappu and Rudnicky, 2013), and more commonly in re- inforcement learning settings, where the feedback is a scalar rather than the dialogue messages them- selves (Levin et al., 2000; Schatzmann et al., 2006; Rieser and Lemon, 2011; Liu et al., 2018; Hong In particular (Serban et al., 2017) et al., 2019). employ user sentiment detection for reward shap- ing in their Alexa prize entry.
# 3 The Self-Feeding Chatbot
The lifecycle of a self-feeding chatbot is outlined in Figure 2. In the initial training phase, the dia- logue agent is trained on two tasksâDIALOGUE (next utterance prediction, or what should I say next?) and SATISFACTION (how satisï¬ed is my speaking partner with my responses?)âusing whatever supervised training data is available. We refer to these initial DIALOGUE examples as Human-Human (HH) examples, since they were generated in conversations between two humans. the agent engages in multi-turn conversations with users, extracting new deployment examples of two types. Each turn, the agent observes the context x (i.e., the conver- sation history) and uses it to predict its next utter- ance Ëy and its partnerâs satisfaction Ës. If the satis- faction score is above a speciï¬ed threshold t, the agent extracts a new Human-Bot (HB) DIALOGUE example using the previous context x and the hu- manâs response y and continues the conversation.
If, however, the user seems unsatisï¬ed with its pre- vious response (Ës < t), the agent requests feed- back with a question q, and the resulting feedback response f is used to create a new example for the FEEDBACK task (what feedback am I about to receive?). The agent acknowledges receipt of the feedback and the conversation continues. The rate at which new DIALOGUE or FEEDBACK examples are collected can be adjusted by raising or lower- ing the satisfaction threshold t (we use t = 0.5).1 Periodically, the agent is retrained using all avail- able data, thereby improving performance on the primary DIALOGUE task.
It is important to note that the userâs responses are always in the form of natural dialogue. In particular, at no point are the new FEEDBACK ex- amples inspected, post-processed, or cleaned. In- stead, we rely on the fact that the feedback is not random: regardless of whether it is a verbatim re- sponse, a description of a response, or a list of pos- sible responses (see Table 2 for examples), there is a learnable relationship between conversation con- texts and their corresponding feedback which re- quires many of the same language understanding skills to master as does carrying on a normal con- versation.
The experiments in this paper are limited to the setting where the number of supervised and de- ployment examples are on the same order of mag- nitude; however, we envision scenarios in which the number of deployment examples can easily grow to 100à or more the number of supervised examples over the chatbotâs deployment lifetime, effectively providing a massive task-speciï¬c cor- pus at minimal cost. Table 1 reports the sizes of each dataset, all of which are available via ParlAI.
# 3.1 Task 1: DIALOGUE
The chatbotâs primary task (DIALOGUE) is to carry on a coherent and engaging conversation with a speaking partner. Training examples take the form of (x, y) pairs, where x is the context of the conversation (the concatenation of all re- sponses so far up to some history length, delim- ited with tokens marking the speaker), and y is the appropriate response given by the human.
The Human-Human (HH) portion of the DIA- LOGUE dataset comes from the PERSONACHAT dataset (Zhang et al., 2018b), which consists of
1Another option would be to have two thresholdsâone for each example typeâto decouple collection their rates.
Task DIALOGUE â HH (HUMAN-HUMAN) 131438 7801 6634 145873 60000 â HB (HUMAN-BOT) 62000 FEEDBACK 2500 SATISFACTION
Table 1: The number of examples used in our experi- ments by task and split. Note that the HH DIALOGUE examples come from the PERSONACHAT dataset, HB DIALOGUE and FEEDBACK examples were collected during deployment, and an additional 40k SATISFAC- TION training examples were collected for the analysis in Section 5.1.
short dialogues (6-8 turns) between two crowd- workers (humans) who have been assigned short text proï¬les and are instructed to âchat with the other person naturally and try to get to know each other.â We chose this dataset because of its size (over 145k total examples), the breadth of top- ics it covers, and its focus on promoting engaging conversations, which we anticipate being a neces- sary property of a chatbot that people will be will- ing to chat with voluntarily and repeatedly. We use the standard splits of the dataset made avail- able in ParlAI as a part of the ConvAI2 challenge (Burtsev et al., 2018). Since the question of how to incorporate external knowledge (such as pro- ï¬les) in dialogue is an open research question of its own (Li et al., 2016; Luan et al., 2017; Luo et al., 2018) and we are primarily interested in the ques- tion of learning from dialogue, we discard the pro- ï¬les and simply train and test on the conversations themselves, making the dataset more challenging in terms of raw performance scores.
The Human-Bot (HB) portion of the DIA- LOGUE dataset is extracted during deployment as described earlier, where the user is again a crowd- worker instructed to chat naturally. The context may contain responses from both the human and the bot, but the target response is always from the human, as we will see experimentally that tar- geting bot responses degrades performance. Be- cause the chit-chat domain is symmetric, both the HH and HB DIALOGUE examples are used for the same task. In an asymmetric setting where the bot has a different role than the human, it is unclear whether HB examples may still be used as an aux- iliary task, but FEEDBACK examples will remain usable.
Category Verbatim % Feedback Examples 53.0 ⢠my favorite food is pizza Suggestion ⢠no, i have never been to kansas ⢠i like when its bright and sunny outside 24.5 ⢠you could say hey, iâm 30. how old are you? ⢠yes, i play battleï¬eld would have a been a great answer. ⢠you could have said âyes, Iâm happy itâs friday.â Instructions 14.5 ⢠tell me what your favorite breakfast food is ⢠answer the question about having children! ⢠tell me why your mom is baking bread Options 8.0 ⢠you could have said yes it really helps the environment or no its too costly ⢠you could have said yes or no, or talked more about your mustang dream. ⢠you should have said new york, texas or maryland. something like one of those.
Table 2: Examples of the types of feedback given to the dialogue agent, pulled from a random sample of 200 feedback responses. Verbatim responses could be used directly in conversation, Suggestion responses contain a potential verbatim response in them somewhere, Instructions describe a response or tell the bot what to do, and Options make multiple suggestions.
# 3.2 Task 2: SATISFACTION
The objective of the SATISFACTION auxiliary task is to predict whether or not a speaking partner is satisï¬ed with the quality of the current conversa- tion. Examples take the form of (x, s) pairs, where x is the same context as in the DIALOGUE task, and s â [0, 1], ranging from dissatisï¬ed to satis- ï¬ed. Crucially, it is hard to estimate from the botâs utterance itself whether the user will be satisï¬ed, but much easier using the humanâs response to the utterance, as they may explicitly say something to that effect, e.g. âWhat are you talking about?â.
Training data for this task is collected during de- ployment. Whenever the userâs estimated satisfac- tion is below a speciï¬ed threshold, the chatbot re- sponds âOops! Sorry. What should I have said instead?â.3 A new example for the FEEDBACK task is then extracted using the context up to but not including the turn where the agent made the poor response as x and the userâs response as f (as shown in Figure 1). At that point to continue the conversation during deployment, the botâs history is reset, and the bot instructs the user to continue, asking for a new topic. Examples of FEEDBACK responses are shown in Table 2.
The dataset for this task was collected via crowdsourcing. Workers chatted with our base- line dialogue agent and assigned a rating 1-5 for the quality of each of the agentâs responses.2 Con- texts with rating 1 were mapped to the negative class (dissatisï¬ed) and ratings [3, 4, 5] mapped to the positive class (satisï¬ed). Contexts with rat- ing 2 were discarded to increase the separation be- tween classes for a cleaner training set. Note that these numeric ratings were requested only when collecting the initial training data, not during de- ployment, where only natural dialogue is used.
# 3.3 Task 3: FEEDBACK
The objective of the FEEDBACK auxiliary task is to predict the feedback that will be given by the speaking partner when the agent believes it has made a mistake and asks for help. Examples take the form of (x, f ) pairs, where x is the same con- text as the other two tasks and f is the feedback utterance.
# 4 Model and Settings
# 4.1 Model Architecture
The self-feeding chatbot has two primary compo- nents: an interface component and a model com- ponent. The interface component is shared by all tasks, and includes input/output processing (tok- enization, vectorization, etc.), conversation history storage, candidate preparation, and control ï¬ow (e.g., when to ask a question vs. when to give a normal dialogue response). The model com- ponent contains a neural network for each task, with embeddings, a network body, and a task head, some of which can be shared. In our case, we ob- tained maximum performance by sharing all pa- rameters between the FEEDBACK and DIALOGUE tasks (prepending FEEDBACK responses with a special token), and using separate model param- eters for the SATISFACTION task. Identifying op- timal task structure in multi-task learning (MTL)
2A snapshot of the data collection interface and sample conversations are included in the Appendix.
3Future work should examine how to ask different kinds of questions, depending on the context.
architectures is an open research problem (Ruder, 2017). Regardless of what parameters are shared, each training batch contains examples from only one task at a time, candidate sets remain separate, and each taskâs cross-entropy loss is multiplied by a task-speciï¬c scaling factor tuned on the valida- tion set to help account for discrepancies in dataset size, loss magnitude, dataset relevance, etc.
Our dialogue agentâs models are built on the Transformer architecture (Vaswani et al., 2017), which has been shown to perform well on a variety of NLP tasks (Devlin et al., 2018; Radford et al., 2018), including multiple persona-based chat ap- plications (Shuster et al., 2018a,b; Rashkin et al., 2018). For the SATISFACTION task, the context x is encoded with a Transformer and converted to the scalar satisfaction prediction Ës by a ï¬nal lin- ear layer in the task head. The DIALOGUE and FEEDBACK tasks are set up as ranking problems, as in (Zhang et al., 2018b; Mazar´e et al., 2018), where the model ranks a collection of candidate responses and returns the top-ranked one as its re- sponse. The context x is encoded with one Trans- former and Ëy and Ëf candidates are encoded with another. The score for each candidate is calculated as the dot product of the encoded context and en- coded candidate.
During training, negative candidates are pulled from the correct responses for the other exam- ples in the mini-batch. During evaluation, how- ever, to remain independent of batch size and data shufï¬ing, each example is assigned a static set of 19 other candidates sampled at random from its split of the data. During deployment, all 127,712 unique HH DIALOGUE candidates from the train split are encoded once with the trained model and each turn the model selects the top-ranked one for the given context.
# 4.2 Model Settings
Contexts and candidates are tokenized using the default whitespace and punctuation tokenizer in ParlAI. We use a maximum dialogue history length of 2 (i.e., when making a prediction, the dialogue agent has access to its previous utter- ance and its partnerâs response). Tokens are em- bedded with fastText (Bojanowski et al., 2017) 300-dimensional embeddings. We do not limit the vocabulary size, which varies from 11.5k to 23.5k words in our experiments, depending on the training set. The Transformer is implemented in
PyTorch (Paszke et al., 2017) within the ParlAI framework. We use the AdaMax (Kingma and Ba, 2014) optimizer with a learning rate sched- ule that decays based on the inverse square root of the step number after 500 steps of warmup from 1e-5. We use proportional sampling (Sanh et al., 2018) to select batches from each task for train- ing, with batch size 128. Each Transformer layer has two attention heads and FFN size 32. The ini- tial learning rate (0.001-0.005), number of Trans- former layers (1-2), and task-speciï¬c loss factors (0.5-2.0) are selected on a per-experiment basis based on a grid search over the validation set aver- aged over three runs (we use the DIALOGUE val- idation set whenever multiple tasks are involved). We use early stopping based on the validation set to decide when to stop training. The hyperparam- eter values for the experiments in Section 5 are in- cluded in Appendix G.
Note that throughout development, a portion of the DIALOGUE validation split was used as an in- formal test set. The ofï¬cial hidden test set for the DIALOGUE task was used only to produce the ï¬nal numbers included in this paper.
# 5 Experimental Results
Throughout this section, we use the ranking met- ric hits@X/Y, or the fraction of the time that the correct candidate response was ranked in the top X out of Y available candidates; accuracy is an- other name for hits@1/Y. Statistical signiï¬cance for improvement over baselines is assessed with a two-sample one-tailed T-test.
# 5.1 Beneï¬ting from Deployment Examples
Our main result, reported in Table 3, is that utiliz- ing the deployment examples improves accuracy on the DIALOGUE task regardless of the number of available supervised (HH) DIALOGUE examples.4 The boost in quality is naturally most pronounced when the HH DIALOGUE training set is small (i.e., where the learning curve is steepest), yielding an increase of up to 9.4 accuracy points, a 31% im- provement. However, even when the entire PER- SONACHAT dataset of 131k examples is usedâa much larger dataset than what is available for most dialogue tasksâadding deployment examples is still able to provide an additional 1.6 points of ac- curacy on what is otherwise a very ï¬at region of
4For comparisons with other models, see Appendix C. The best existing score reported elsewhere on the PER- SONACHAT test set without using proï¬les is 34.9.
Human-Bot (HB) Human-Human (HH) DIALOGUE DIALOGUE FEEDBACK 20k 40k 60k 131k - - 30.3 (0.6) 36.2 (0.4) 39.1 (0.5) 44.7 (0.4) 20k 40k 60k - - - 32.7 (0.5) 34.5 (0.5) 35.4 (0.4) 37.5 (0.6) 37.8 (0.6) 37.9 (0.7) 40.2 (0.5) 40.6 (0.6) 40.2 (0.8) 45.5 (0.7) 45.1 (0.6) 45.0 (0.7) - - - 20k 40k 60k 35.0 (0.5) 36.7 (0.7) 37.8 (0.6) 38.9 (0.3) 39.4 (0.5) 40.6 (0.5) 41.1 (0.5) 41.8 (0.4) 42.2 (0.7) 45.4 (0.8) 45.7 (0.6) 45.8 (0.7) 60k 60k 39.7 (0.6) 42.0 (0.6) 43.3 (0.7) 46.3 (0.8)
Table 3: Accuracy (hits@1/20) on the DIALOGUE taskâs hidden test set by number of Human-Human (HH) DIA- LOGUE, Human-Bot (HB) DIALOGUE, and FEEDBACK examples, averaged over 20 runs, with standard deviations in parentheses. For each column, the model using all three data types (last row) is signiï¬cantly better than all the others, and the best model using only one type of self-feeding (FEEDBACK examples or HB DIALOGUE examples) is better than the supervised baseline in the ï¬rst row (p < 0.05).
It is interesting to note that the learning curve. the two types of deployment examples appear to provide complementary signal, with models per- forming best when they use both example types, despite them coming from the same conversations. We also calculated hit rates with 10,000 candidates (instead of 20), a setup more similar to the inter- active setting where there may be many candidates that could be valid responses. In that setting, mod- els trained with the deployment examples continue to outperform their HH-only counterparts by sig- niï¬cant margins (see Appendix B).
On average, we found that adding 20k FEED- BACK examples beneï¬ted the agent about as much as 60k HB DIALOGUE examples.5 This is some- what surprising given the fact that nearly half of the FEEDBACK responses would not even be rea- sonable responses if used verbatim in a conversa- tion (instead being a list of options, a description of a response, etc.) as shown in Table 2. Never- theless, the tasks are related enough that the DI- ALOGUE task beneï¬ts from the MTL modelâs im- proved skill on the FEEDBACK task. And whereas HB DIALOGUE examples are based on conversa- tions where the user appears to already be satis- ï¬ed with the agentâs responses, each FEEDBACK example corresponds to a mistake made by the model, giving the latter dataset a more active
5Our baseline chatbot collected approximately one FEED- BACK example for every two HB DIALOGUE examples, but this ratio will vary by application based on the task difï¬culty, satisfaction threshold(s), and current model quality.
role in improving quality. Interestingly, our best- performing model, which achieves 46.3 accuracy on DIALOGUE, scores 68.4 on FEEDBACK, sug- gesting that the auxiliary task is a simpler task overall.
When extracting HB DIALOGUE examples, we ignore human responses that the agent classiï¬es as expressing dissatisfaction, since these turns do not represent typical conversation ï¬ow. Includ- ing these responses in the 60k HB dataset de- creases hits@1/20 by 1.2 points and 0.6 points when added to 20k and 131k HH DIALOGUE ex- amples, respectively. We also explored using chat- bot responses with favorable satisfaction scores (Ës > t) as new training examples, but found that our models performed better without them (see Appendix D for details).
We also found that âfresherâ feedback results in bigger gains. We compared two models trained on 20k HH DIALOGUE examples and 40k FEED- BACK examplesâthe ï¬rst collected all 40k FEED- BACK examples at once, whereas the second was retrained with its ï¬rst 20k FEEDBACK examples before collecting the remaining 20k. While the ab- solute improvement of the second model over the ï¬rst was small (0.4 points), it was statistically sig- niï¬cant (p =0.027) and reduced the gap to a model trained on fully supervised (HH) DIALOGUE ex- amples by 17% while modifying only 33% of the training data.6 This improvement makes sense intuitively, since new FEEDBACK examples are
6Additional detail can be found in Appendix E.
Method Pr. Re. F1 Uncertainty Top (Pr. ⥠0.5) Uncertainty Gap (Pr. ⥠0.5) 0.39 0.99 0.56 0.50 0.04 0.07 0.38 1.00 0.55 0.50 0.04 0.07 Satisfaction Regex 0.91 0.27 0.42 0.84 0.84 0.84 Satisfaction Classiï¬er (1k) 0.89 0.84 0.87 Satisfaction Classiï¬er (2k) Satisfaction Classiï¬er (5k) 0.94 0.82 0.88 Satisfaction Classiï¬er (20k) 0.96 0.84 0.89 Satisfaction Classiï¬er (40k) 0.96 0.84 0.90
Table 4: The maximum F1 score (with corresponding precision and recall) obtained on the SATISFACTION task. For the Uncertainty methods, we also report the maximum F1 score with the constraint that precision must be ⥠0.5. The Satisfaction Classiï¬er is reported with varying numbers of SATISFACTION training ex- amples.
collected based on failure modes of the current model, making them potentially more efï¬cient in a manner similar to new training examples selected via active learning. It also suggests that the gains we observe in Table 3 might be further improved by (a) collecting FEEDBACK examples speciï¬c to each model (rather than using the same 60k FEED- BACK examples for all models), and (b) more fre- quently retraining the MTL model (e.g., every 5k examples instead of every 20k) or updating it in an online manner. We leave further exploration of this observation for future work.
The same experiment repeated for HB DIA- LOGUE examples found that fresher HB examples were no more valuable than stale ones, matching our intuition that HB DIALOGUE examples are less targeted at current model failure modes than FEEDBACK ones.
# 5.2 Predicting User Satisfaction
For maximum efï¬ciency, we aim to ask for feed- back when it will most beneï¬t our model. The approach we chose (classifying the tone of part- ner responses) takes advantage of the fact that it is easier to recognize that a mistake has already been made than it is to avoid making that mistake; or in other words, sentiment classiï¬cation is generally an easier task than next utterance prediction.
We compare this to the approach of asking for feedback whenever the model is most uncertain
what to say next. This approach acts on the as- sumption that the model will be least conï¬dent when it is about to make a mistake, which we ï¬nd very frequently to not be the case. Not only is it difï¬cult to recognize oneâs own mistakes, but also there are often multiple valid responses to a given context (e.g., âYes, I love seafood!â or âYuck, ï¬sh is gross.â)âa lack of certainty about which to use does not necessarily suggest a poor model.
the maximum F1 scores achieved by each method on the SATISFACTION test set. For the model uncertainty approach, we tested two variants: (a) predict a mistake when the conï¬dence in the top rated response is below some threshold t, and (b) predict a mistake when the gap between the top two rated responses is below the threshold t. We used the best-performing stan- dalone DIALOGUE model (one trained on the full 131k training examples) for assessing uncertainty and tuned the thresholds to achieve maximum F1 score. For the user satisfaction approach, we trained our dialogue agent on just the SATISFAC- TION task. Finally, we also report the performance of a regular-expression-based method which we used during development, based on common ways of expressing dissatisfaction that we observed in our pilot studies, see Appendix F for details.
As shown by Table 4, even with only 1k training examples (the amount used for the experiments in Section 5.1), the trained classiï¬er signiï¬cantly outperforms both the uncertainty-based methods and our original regular expression, by as much as 0.28 and 0.42 F1 points, respectively.
# 6 Future Work
In this work we learned from dialogue using two types of self-feeding: imitation of satisï¬ed user messages, and learning from the feedback of un- satisï¬ed users. In actuality, there are even more ways a model could learn to improve itselfâfor example, learning which question to ask in a given context to receive the most valuable feedback. One could even use the ï¬exible nature of dialogue to intermix data collection of more than one typeâ sometimes requesting new FEEDBACK examples, and other times requesting new SATISFACTION examples (e.g., asking âDid my last response make sense?â). In this way, a dialogue agent could both improve its dialogue ability and its potential to im- prove further. We leave exploration of this meta- learning theme to future work.
# References
M. A. Bassiri. 2011. Interactional feedback and the im- pact of attitude and motivation on noticing l2 form. English Language and Literature Studies, 1(2):61â 73.
P. Bojanowski, E. Grave, A. Joulin, and T. Mikolov. 2017. Enriching word vectors with subword infor- mation. Transactions of the Association for Compu- tational Linguistics (TACL), 5:135â146.
R. Bunescu and R. Mooney. 2007. Learning to extract relations from the web using minimal supervision. In Association for Computational Linguistics (ACL).
M. Burtsev, V. Logacheva, V. Malykh, R. Lowe, I. Ser- ban, S. Prabhumoye, E. Dinan, D. Kiela, A. Miller, K. Shuster, A. Szlam, J. Urbanek, and J. Weston. 2018. The conversational intelligence challenge 2 (ConvAI2).
A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. H. Jr, and T. M. Mitchell. 2010. Toward an architec- ture for never-ending language learning. In Associ- ation for the Advancement of Artiï¬cial Intelligence (AAAI).
J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2018. Bert: Pre-training of deep bidirectional transform- arXiv preprint ers for language understanding. arXiv:1810.04805.
M. Eskenazi, R. Evgeniia M. Shikib, and T. Zhao. 2018. Beyond turing: Intelligent agents centered on the user. arXiv preprint arXiv:1803.06567.
B. Hancock, P. Varma, S. Wang, M. Bringmann, P. Liang, and C. R´e. 2018. Training classiï¬ers with In Association for natural language explanations. Computational Linguistics (ACL).
C. Hashimoto and M. Sassano. 2018. Detecting ab- surd conversations from intelligent assistant logs by exploiting user feedback utterances. In World Wide Web (WWW), pages 147â156.
B. Hixon, P. Clark, and H. Hajishirzi. 2015. Learning knowledge graphs for question answering through In North American Associ- conversational dialog. ation for Computational Linguistics (NAACL).
T. Hong, O. Kwon, and Y. Kim. 2019. An end-to- end trainable task-oriented dialog system with hu- man feedback. In Association for the Advancement of Artiï¬cial Intelligence (AAAI).
D. Kingma and J. Ba. 2014. Adam: A method arXiv preprint for arXiv:1412.6980. stochastic optimization.
J. Kruger and D. Dunning. 1999. Unskilled and how difï¬culties in recognizing unaware of it: oneâs own incompetence lead to inï¬ated self- assessments. Journal of personality and social psy- chology, 77(6):1121â1134.
E. Levin, R. Pieraccini, and W. Eckert. 2000. A stochastic model of human-machine interaction for IEEE Transactions on learning dialog strategies. Speech and Audio Processing, 8(1):11â23.
J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. 2016. A persona-based neural conversation model. In Association for Computational Linguistics (ACL).
J. Li, A. H. Miller, S. Chopra, M. Ranzato, and J. We- ston. 2017a. Dialogue learning with human-in-the- loop. In International Conference on Learning Rep- resentations (ICLR).
J. Li, A. H. Miller, S. Chopra, M. Ranzato, and J. We- ston. 2017b. Learning through dialogue interactions by asking questions. In International Conference on Learning Representations (ICLR).
B. Liu, G. T¨ur, D. Hakkani-T¨ur, P. Shah, and L. Heck. 2018. Dialogue learning with human teaching and feedback in end-to-end trainable task-oriented di- In North American Association alogue systems. for Computational Linguistics (NAACL), volume 1, pages 2060â2069.
Y. Luan, C. Brockett, B. Dolan, J. Gao, and M. Gal- ley. 2017. Multi-task learning for speaker-role adap- In Associ- tation in neural conversation models. ation for Computational Linguistics and Interna- tional Joint Conference on Natural Language Pro- cessing (ACL-IJCNLP), volume 1, pages 605â614.
L. Luo, W. Huang, Q. Zeng, Z. Nie, and X. Sun. 2018. Learning personalized end-to-end goal-oriented dia- log. arXiv preprint arXiv:1811.04604.
N. Mallinar, A. Shah, R. Ugrani, A. Gupta, M. Guru- sankar, T. K. Ho, Q. V. Liao, Y. Zhang, R. Bellamy, and R. Yates. 2019. Bootstrapping conversational agents with weak supervision. In Association for the Advancement of Artiï¬cial Intelligence (AAAI).
P. Mazar´e, S. Humeau, M. Raison, and A. Bordes. 2018. Training millions of personalized dialogue agents. In Empirical Methods in Natural Language Processing (EMNLP), pages 2775â2779.
S. Mazumder, N. Ma, and B. Liu. 2018. Towards a continuous knowledge learning engine for chatbots. arXiv preprint arXiv:1802.06024.
A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bor- des, D. Parikh, and J. Weston. 2017. Parlai: A dialog In Empirical Methods research software platform. in Natural Language Processing (EMNLP), pages 79â84.
A. Pappu and A. Rudnicky. 2013. Predicting tasks in goal-oriented spoken dialog systems using semantic In Proceedings of the SIGDIAL knowledge bases. 2013 Conference, pages 242â250.
A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. 2017. Automatic differentiation in py- torch.
and I. Sutskever. 2018. Improving language understand- ing by generative pre-training. Technical report, OpenAI.
S. Rao and H. Daum´e. 2018. Learning to ask good questions: Ranking clariï¬cation questions using neural expected value of perfect information. pages 2737â2746.
H. Rashkin, E. M. Smith, M. Li, and Y. Boureau. 2018. I know the feeling: Learning to converse with empa- thy. arXiv preprint arXiv:1811.00207.
A. Ratner, S. H. Bach, H. Ehrenberg, J. Fries, S. Wu, and C. R´e. 2017. Snorkel: Rapid training data cre- In Very Large Data ation with weak supervision. Bases (VLDB), 3, pages 269â282.
V. Rieser and O. Lemon. 2011. Reinforcement learn- ing for adaptive dialogue systems: a data-driven methodology for dialogue management and natural language generation. Springer Science & Business Media.
J. Ross, A. Zaldivar, L. Irani, and B. Tomlinson. 2009. Who are the turkers? worker demographics in ama- zon mechanical turk. Technical report, Department of Informatics, University of California, Irvine.
S. Ruder. 2017. An overview of multi-task learn- arXiv preprint ing in deep neural networks. arXiv:1706.05098.
V. Sanh, T. Wolf, and S. Ruder. 2018. A hierarchical multi-task approach for learning embeddings from semantic tasks. arXiv preprint arXiv:1811.06031.
J. Schatzmann, K. Weilhammer, M. Stuttle, and S. Young. 2006. A survey of statistical user simu- lation techniques for reinforcement-learning of dia- logue management strategies. The knowledge engi- neering review, 21(2):97â126.
J. Schmidhuber and R. Huber. 1991. Learning to gen- erate artiï¬cial fovea trajectories for target detection. International Journal of Neural Systems, 2(1):125â 134.
I. V. Serban, R. Lowe, L. Charlin, and J. Pineau. 2015. A survey of available corpora for build- ing data-driven dialogue systems. arXiv preprint arXiv:1512.05742.
I. V. Serban, C. Sankar, M. Germain, S. Zhang, Z. Lin, S. Subramanian, T. Kim, M. Pieper, S. Chandar, N. R. Ke, et al. 2017. A deep reinforcement learning chatbot. arXiv preprint arXiv:1709.02349.
K. Shuster, S. Humeau, A. Bordes, and J. Weston. Engaging image chat: Modeling per- arXiv preprint 2018a. sonality in grounded dialogue. arXiv:1811.00945.
K. Shuster, S. Humeau, H. Hu, A. Bordes, and J. We- ston. 2018b. Engaging image captioning via person- ality. arXiv preprint arXiv:1810.10665.
D. L. Silver, Q. Yang, and L. Li. 2013. Lifelong machine learning systems: Beyond learning algo- rithms. In Association for the Advancement of Ar- tiï¬cial Intelligence (AAAI), volume 13.
F. Strub, H. D. Vries, J. Mary, B. Piot, A. Courville, and O. Pietquin. 2017. End-to-end optimization of goal-driven and visually grounded dialogue systems. arXiv preprint arXiv:1703.05423.
S. Tong and D. Koller. 2001. Support vector ma- chine active learning with applications to text clas- Journal of machine learning research, siï¬cation. 2(0):45â66.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.
Y. Wang, B. Dai, L. Kong, X. Ma, S. M. Erfani, J. Bai- ley, S. Xia, L. Song, and H. Zha. 2018. Learn- ing deep hidden nonlinear dynamics from aggregate data. In Uncertainty in Artiï¬cial Intelligence (UAI).
M. G. Werts, M. Wolery, A. Holcombe, and D. L. Gast. 1995. Instructive feedback: Review of param- eters and effects. Journal of Behavioral Education, 5(1):55â75.
J. E. Weston. 2016. Dialog-based language learning. In Advances in Neural Information Processing Sys- tems (NeurIPS), pages 829â837.
H. Zhang, H. Yu, and W. Xu. 2017. Listen, interact and talk: Learning to speak via interaction. arXiv preprint arXiv:1705.09906.
Interactive language acquisition with one-shot visual concept arXiv learning through a conversational game. preprint arXiv:1805.00462.
S. Zhang, E. Dinan, J. Urbanek, A. Szlam, D. Kiela, Personalizing dialogue and J. Weston. 2018b. agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243.
# A Data Collection Protocol
Here we report in greater detail the protocol we followed to collect the SATISFACTION, FEED- BACK, and HB DIALOGUE examples used in the experiments of Section 5.
We ï¬rst trained our dialogue agent on just the DIALOGUE task with 20k HH examples. This agent was deployed on a crowdsourcing platform using the interface shown in Appendix H.2 to col- lect 2.5k SATISFACTION examples. These were split into 1k train, 500 validation, and 1k test ex- amples. The agent was retrained using the 20k HH DIALOGUE examples and 1k SATISFACTION ex- amples, then deployed to collect the ï¬rst batch of deployment examples.
We collected 40k FEEDBACK examples (feed- back set A) over the course of 17,250 conversa- tions with 10 turns each (20 utterances, includ- ing the initial prompt). We then retrained the agent on all three datasets, using the same 20k HH DIALOGUE examples as before and only 20k of the available 40k FEEDBACK examples. This model was deployed to collect another 20k FEED- BACK examples (feedback set B), for a total of 60k FEEDBACK examples (A + B). In Table 3 we use these 60k FEEDBACK examples interchangeably; in Appendix E we compare them head-to-head. The 60k HB DIALOGUE examples were extracted from the logs of the deployment conversations. Fi- nally, we collected an additional 40k SATISFAC- TION training examples to produce the numbers in Table 4 investigating the learning curve for this task.
No ï¬ltering was performed on the crowdworker conversations. Upon inspection after the fact, some workers did indeed give poor responses, make typographical mistakes, misunderstand the instructions, try to use the chatbot as a question answering interface, etc. We assume however that similar types of noise will be present in most chat- bot deployment environments and opted to main- tain a workï¬ow that truly does not require devel- oper intervention to use the newly collected exam- ples.
# B Results with 10k Candidates
HH HB FB Hits@X/10,000 @1 @10 @100 20k - 20k 60k 60k - 0.8 2.0 4.6 8.4 16.2 25.0 - 40k 40k 60k 60k - 1.3 2.1 6.5 9.0 21.8 27.2 60k - 60k 60k 60k - 1.6 2.2 7.0 9.7 24.0 28.8 131k - 131k 60k 60k - 2.5 2.8 10.0 11.2 30.3 31.8
Table 5: When the number of candidates to choose from is increased to 10,000, adding Human-Bot (HB) DIALOGUE and FEEDBACK (FB) examples continues to improve performance on the DIALOGUE task at all levels.
# C PERSONACHAT Comparisons and Baselines
Our experiments use the PERSONACHAT distri- bution that was released as a part of the Con- vAI2 (Burtsev et al., 2018) challenge. This dis- tribution is slightly cleaner than the original PER- SONACHAT release and comes with a new crowd- sourced test set. In order to compare with the models and baselines used in the original PER- SONACHAT paper (Zhang et al., 2018b), we re- port in this section the performance of our mod- els on the original PERSONACHAT test set, not the ConvAI2 test set. Note that empirically, near Hits@1/20 = 50, each additional point of improve- ment corresponds to tens of thousands of fully- supervised Human-Human DIALOGUE examples. All numbers reported here are for models that do not have access to the proï¬les that were used in the creation of the conversations; models that do have access to this additional information tend to perform even better.
Model Hits@1/20 (Zhang et al., 2018b) Seq2Seq IR Baseline Starspace Proï¬le Memory KV Proï¬le Memory 9.2 21.4 31.8 31.8 34.9 Ours Transformer Self-Feeding 49.6 51.7
Table 6: The accuracy of various models and baselines on the original PERSONACHAT test set.
# D Using Chatbot Responses as Targets
HH BF BU Hits@1/20 20k 20k 20k - 32k - - - 33k 30.3 22.7 19.3 131k 131k 131k - 32k - - - 33k 44.7 40.4 39.0
Table 7: Both with few HH DIALOGUE examples (20k) and many (131k), adding examples with bot ut- terances as the target decreased quality. We explored using all bot responses (Bot Unï¬ltered, or BU) and only those responses with estimated satisfaction scores greater than the 0.5 (Bot Filtered, or BF).
We also considered whether it was possible to consistently identify really good responses by the chatbot, rather than the really bad ones. These could potentially be used as DIALOGUE examples along with the ones that have human responses as targets (which we refer to as HH and HB in the paper). To explore this question, we modi- ï¬ed our SATISFACTION dataset so that contexts with a rating of 5 were the positive class and ones with ratings [1, 2, 3] were the negative class (dis- carding ratings of 4 to increase the separation be- tween classes). The results were negativeâeven with a training set of over 34k examples, the max- imum precision we were able to achieve while maintaining at least 10% recall was 0.70, which is insufï¬cient to improve performance on the DI- ALOGUE task. Upon inspection, it appears that re- ally good responses are hard to identify because
most of the time they look like a normal human-to- human conversation, and recognizing an appropri- ate next utterance is precisely the DIALOGUE task that we are trying to solve! Negative responses, however, are much more semantically similar to one another, since most express one of a few com- mon ideas such as asking for clariï¬cation or con- veying confusion.
# E The Effect of Data Freshness
HH HBA HBB FBA FBB Total Hits@1/20 20k 20k 20k 40k 20k 20k 60k - 40k 20k - - - - - - 20k - - - - - - - - 40k 20k - - - - - - 20k - 20k 60k 60k 40k 60k 60k 60k 30.3 35.4 35.3 36.2 36.7 37.1 39.1
Table 8: As discussed in Section 5.1 and illustrated in Figure 3, FEEDBACK (FB) examples collected from a more recently retrained model (set B instead of set A) are more valuable in terms of improving performance; see Appendix A for details on how sets A and B were collected. We did not observe the same trend for HB DIALOGUE examples. We include the performance of models trained on only HH DIALOGUE examples in italics as reference points.
Oo wo foe} \ ° we ne) Accuracy (hits@1/20) Oo Oo ge 8 âeâ Dialogue HH âeâ Feedback A 0.30 ° âeâ Feedback B 20000 30000 40000 50000 60000 Examples
Figure 3: The ï¬rst 20k examples for all models are supervised DIALOGUE examples. This model is de- ployed to collect 20k FEEDBACK examples (set A). If the model is retrained before collecting the next 20k examples (set B), the fresher feedback results in better performance (p = 0.027). Shaded regions depict 95% conï¬dence intervals.
# F SATISFACTION Regular Expressions
As described in Section 5.2, before we trained a classiï¬er on the SATISFACTION task, we used the union of the following six regular expressions (using Python regular expression syntax) to identify user dissatisfaction and trigger feedback requests: r"i .*(?:said|asked|told).*" r"((not|nt|nât).*mak.*sense)|(mak.*no .*sense)" r"u(m|h)+\W" r"you.*what\?" r"what.*you (?:mean|refer|talk).*\?" r"what.*to do with.*\?"
# G Hyperparameters
HH HB FB layers learning rate loss factor DIALOGUE FEEDBACK 20k 20k 20k 20k 20k 20k 20k 20k - 20k 40k 60k - - - 60k - - - - 20k 40k 60k 60k 1 1 1 1 1 1 1 1 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0025 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 - - - - 0.50 0.50 0.75 1.50 40k 40k 40k 40k 40k 40k 40k 40k - 20k 40k 60k - - - 60k - - - - 20k 40k 60k 60k 1 1 1 1 1 1 1 1 0.0010 0.0010 0.0010 0.0025 0.0010 0.0010 0.0025 0.0025 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 - - - - 0.50 0.75 1.00 1.25 60k 60k 60k 60k 60k 60k 60k 60k - 20k 40k 60k - - - 60k - - - - 20k 40k 60k 60k 2 1 1 1 1 1 1 1 0.0010 0.0025 0.0025 0.0025 0.0025 0.0025 0.0025 0.0025 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 - - - - 1.00 1.00 1.00 0.75
Table 9: The hyperparameters used to obtain the results in Table 3.
# H Data Collection Interfaces
# H.1 Deployment Data Collection
We simulated a deployment environment on a crowdsourcing platform with the following interface. Crowdworkers were told to âTalk as you would to a person you have just met.â
Live Chat In this task, you will have a short conversation (10 responses) with a chatbot. Its purpose is to engage in small talk, not answer trivia questions. Talk as you would to a person you have just met. IMPORTANT: The chatbot is trying to learn to recognize when it has made a mistake, so when It says something that doesnât make sense, don't just continue the conversation; instead, tell it in some way that it isn't making sense. When the chatbot thinks it has made a mistake, it will ask you what it could have said instead. Even if it actually said something acceptable, answer the question about what else it could have said. Don't say "I don't knowâ or tell it that what it said was ok. Because the chatbot is still learning, you may need to correct it multiple times in a conversation before it realizes it has made a mistake; this is expected behavior. Chatbot: honeymooning at niagra falls with the new hubby [comms] You: wow! congratulations! were you married in new york? Chatbot: you are probably right , how long have you all been dating ? You: hey, you didn't answer my question! Chatbot: Oops! | think | messed up. Whether | messed up or not, what could | have said (in response to "wow! congratulations! were you married in new york?")? You: no, we're actually both Canadian; we live in Montreal. Chatbot: Thanks! I'l try to remember that. Can you pick anew topic for us to talk about now? Please enter here.
# H.2 SATISFACTION Data Collection
The environment for collecting SATISFACTION ratings was very similar to the deployment environment, with the exception that workers were required to give a rating 1-5 on the quality of each response given by the chatbot at the end of their own responses.
In this task, you will have a short conversation (10 responses) with a chatbot. Each time you respond to the bot, you will also rate the quality of its response: (1 - very bad, 2- bad, 3- ok, 4 - good, 5 - very good). Occasionally you will be told that no rating is required (eg., the initial prompt); otherwise, your response should always end with a space followed by a number between 1 and 5. The chatbot is trying to learn to recognize when it has made a mistake, so when it says something that doesn't make sense, don't just continue the conversation; instead, tell it that is isn't making sense. In other words, if you are giving its response a rating of 1, then your response should indicate your dissatisfaction in some way. Occasionally when the chatbot messes up, it will ask you what it should have said. If there isn't an obvious right answer, then think of something that it could have said that would have fit in the conversation. Don't just say "I don't knowâ. âSystem: Start a conversation with the chatbot. If you need ideas for a topic, here are a few random words: houseboat, party, office. (No rating required) You: howdy! where are you from? Chatbot: hi there , my name is dustin , how are you ? (Reminder: At the end of your response, include a space followed by a rating 1-5.) : nice to meet you dustin! I'm doing well. 3 Chatbot: it is a fiction novel about a young teacher like myself solving murders . You: um, that was super random. 1 Please enter here.
# I Sample Deployment Conversations
A sample of the conversations that occurred between a Bot (B) and Human (H) during the collection of the deployment datasets (more speciï¬cally, feedback set B). The self-feeding chatbot used its satisfaction classiï¬er trained on 1k examples to determine when to ask for feedback. The dialogue model at this point had only been trained on 20k HH DIALOGUE examples and 20k FEEDBACK examples, so it still makes frequent mistakes. See Appendix H.1 for a snapshot of the actual interface.
B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: random, mistake, century. H: whatâs the last mistake you made? B: what do you do ? iâve a toothpick business , H: thatâs not what i asked. B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to âwhatâs the last mistake you made?â)? H: told me about a recent mistake. B: thanks! iâll try to remember that. can you pick a new topic for us to talk about now? H: what do you do for a living? B: i am a psychology major . what do you do ? H: i am a student as well. iâm studying law. B: oh , my ï¬ve children and husband all speak spanish as well . H: i never said i speak spanish, what are you doing? B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to âi am a student as well. iâm studying law.â)? H: talked about your college courses. B: thanks! iâll try to remember that. can you pick a new topic for us to talk about now? H: how old are your children? B: it is . we are just thrilled we have ac again . its calming my anxiety . H: that has nothing to do with what i said. B: yes since i prefer my hair straight . i straighten it with a ï¬at iron . tedious ! H: you make no sense. B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: picnic, drain, elm. H: do you like to go to the beach or woods for a picnic? B: good , going to the doctor soon , iâve some issue with my knees H: thatâs doesnât make sense B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to âdo you like to go to the beach or woods for a picnic?â)? H: you could have chose the beach or woods to previous question B: thanks! iâll try to remember that. can you pick a new topic for us to talk about now? H: do you travel? B: i want to go to paris and make clothes . H: wow thatâs interesting. iâve never been to paris B: i can see that . do you like music ? H: i love music. what is your favorite kind? B: tell me more about you H: that doesnât answer the question B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to âi love music. what is your favorite kind?â)? H: you could have selected a type of music that you like B: thanks! iâll try to remember that. can you pick a new topic for us to talk about now? H: do you like animals? i have a dog B: i like those , too , but pizza is better ! ; H: umm, what?
B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: boy, punk, ray. H: are you a boy or a girl? B: do you have any jokes ? H: i wasnât talking about jokes. B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to âare you a boy or a girl?â)? H: i was talking about your gender. B: thanks! iâll try to remember that. can you pick a new topic for us to talk about now? H: sure, do you like music? B: oh thatâs neat . you should write me a poem about star trek ! H: what? B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to âsure, do you like music?â)? H: we were talking about music. B: thanks! iâll try to remember that. can you pick a new topic for us to talk about now? H: what is your favorite type of art?
B: nice ! are the pets your own ? H: no B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to âwhat is your favorite type of art?â)? H: we werenât talking about pets. B: thanks! iâll try to remember that. can you pick a new topic for us to talk about now? H: sure, do you know anything about biology?
B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: college, cliff, tachometer. H: hey, howâs it going? B: no . i am just 15 . parents would kill me . H: what are you talking about? B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to âhey, howâs it going?â)? H: i was just saying hi, so you should have said a greeting? B: thanks! iâll try to remember that. can you pick a new topic for us to talk about now? H: no problem. so, do you go to school? B: i have ten of them ! H: no, that isnât right. B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to âno problem. so, do you go to school?â)? H: i was asking about school or education. B: thanks! iâll try to remember that. can you pick a new topic for us to talk about now? H: so, what is the best subject to you? B: oh wow , iâve no cats , but i do hace a couple of dogs . H: so, you like dogs a lot? B: i am also preparing for a marathon also , wish me luck . H: oh wow! does your dog help you prepare? B: i like hindu food . iâm not a very good baker though . my mom has mastered that . H: no, that isnât right.
B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: drink, wool, bag. H: what is your favorite drink B: i love animals and the outdoors , so i work at the zoo . H: wrong answer B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to âwhat is your favorite drinkâ)? H: you should have said my favorite drink is either juice,coke or wine B: thanks! iâll try to remember that. can you pick a new topic for us to talk about now? H: are you religious B: somewhat lol he sits when i draw and watches me H: what are you talking about B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to âare you religiousâ)? H: you should have told me yes i am religious in fact i attend church services at least once a week and if not then no i am not religious never attended church services B: thanks! iâll try to remember that. can you pick a new topic for us to talk about now? H: tell me about yourself B: i drive my dream car every day i am home . a 65 mustang . H: that is nice you must be really love cars B: hello , how is your day going ? H: my day is good relaxing at home after work watching tv B: iâm sorry to hear that . i sincerely hope things get better for you . H: what are you talking about B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: glasses, puppy, pillow. H: do you like dogs? i got a new puppy B: hey ! i love being outside planting my plants ! how about you ? H: that is nice i love to plant too! B: right now iâve a beagle . sheâs a little feisty . i miss my shepard mix . H: aww my dog is named sam he is a collie B: yeah , animals are my passion , i have 3 , all big dogs . H: yeah i just have the one dog and two cats B: i do not . i take a lot of pictures . iâd like to be a famous photographer . H: i actully really want to get a picture of my animals with santa lol B: do you want to work out with me ? H: umm what? B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to âi actully really want to get a picture of my animals with santa lolâ)? H: that would be very cute B: thanks! iâll try to remember that. can you pick a new topic for us to talk about now? H: i love to eat
# J Sample SATISFACTION Conversations
A sample of the conversations that occurred between a Bot (B) and Human (H) during the collection of the SATISFACTION dataset. Users were asked to rate each bot response on a scale from 1 (very bad) to 5 (very good). See Appendix H.2 for a snapshot of the actual interface.
B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: invitation, dot, czar. H: hello friend, i missed you B: hello my name is ray H: hi iâm leah (reward:5) B: hello my name is michael H: no itâs not (reward:1) B: oops! sorry. what should i have said instead? H: nice to meet you B: thanks! iâll remember that. can you pick a new topic for us to talk about now? H: do you have many friends? B: lol i have kids to H: what? (reward:1) B: oops! sorry. what should i have said instead? H: no but i have kids B: thanks! iâll remember that. can you pick a new topic for us to talk about now? H: what do you want for christmas? B: i love to bake cakes H: what? (reward:1) B: oops! sorry. what should i have said instead? H: i want a pony for christmas
B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: popsicle, lawnmower, mom. H: hi, are you having a good day so far? B: iâm here . iâm doing well , just waiting for the hurricane to hit . H: oh im sorry. where do you live? (reward:5) B: miami . . . i live in little cuba , but i wish i lived in europe . H: yeah, i bet you do (reward:5) B: we live in aruba , in a van down by the river H: doesnt make sense (reward:1) B: oops! sorry. what should i have said instead? H: that you were from miami, not aruba B: thanks! iâll remember that. can you pick a new topic for us to talk about now? H: do you own a car? B: i only drive toyota cars H: whatâs the fastest youâve ever driven a car? (reward:2) B: a yellow toyota camry convertible . H: that doesnt make sense (reward:1) B: oops! sorry. what should i have said instead? H: how fast you have driven before B: thanks! iâll remember that. can you pick a new topic for us to talk about now? H: do you like staying up late?
B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: stout, mop, through. H: i just ï¬nished mopping my kitchen. B: i am a barista at joe jacks coffee shop H: that has nothing to do with what i said (reward:1) B: oops! sorry. what should i have said instead? H: talked about how you hate cleaning the kitchen B: thanks! iâll remember that. can you pick a new topic for us to talk about now? H: what music do you listen to? B: listen about lectures about space and stuff H: so you donât like to listen to music at all? (reward:4) B: i mostly listen to jazz H: jazz is a fun genre. i play the trombone (reward:5) B: classical jazz is my very favorite music H: do you play any instruments? (reward:2) B: classical music . mozart piano music . H: i did not ask that (reward:1) B: oops! sorry. what should i have said instead? H: the instrument that you play B: thanks! iâll remember that. can you pick a new topic for us to talk about now? H: what are your plans today?
B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: clown, drip, whatever. H: my pipes are dripping, do you know a plumber i can call? B: anything but taylor swift lol H: that doesnât make any sense. (reward:1) B: oops! sorry. what should i have said instead? H: you could say âsorry i donât know anyone who could get out there today.â B: thanks! iâll remember that. can you pick a new topic for us to talk about now? H: i love music. who do you like to listen to? B: i mostly listen to smooth jazz and pop music H: oh smooth jazz eh? do you like kenny g? (reward:5) B: i like pop jazz and rap H: you already said that! (reward:1) B: oops! sorry. what should i have said instead? H: you could say âkenny g is a little too smooth for my tastes.â B: thanks! iâll remember that. can you pick a new topic for us to talk about now? H: i think iâm going to just eat cereal with breakfast. what did you have to eat? B: bacon bacon bacon bacon bacon bacon bacon H: haha bacon is pretty good. i like the maple smoked stuff. (reward:3) B: onion green peppers mushrooms cream cheese shredded cheese . H: that sounds like an awesome bagel. how often do you eat breakfast? (reward:3) B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: princess, dump, telephone. H: hello! how are you on this ï¬ne day? B: i am pretty good i guess . i am even better at eating haha especially cookies and ice cream H: i do love eating too, especially cookies (reward:5) B: cakes , mufï¬ns , cookies , cupcakes ! sugar cookies are my favorite ! H: you like a lot of sweets! (reward:3) B: cookies pies cakes and cup cakes H: okay, i get it, you like sweets (reward:2) B: peanut butter goes well with bread and crackers H: i do love a peanut butter sandwich, my kids do too (reward:5) B: tights with peanut butter peas H: what??? (reward:1) B: oops! sorry. what should i have said instead? H: something that made sense about peanut butter B: thanks! iâll remember that. can you pick a new topic for us to talk about now? H: do you have a cell phone? B: hostage , it was a robbery in the store i was selling furniture too H: huh? (reward:1) B: oops! sorry. what should i have said instead? H: tell me if you have a cell phone, maybe what kind you have
B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: ray, scared, shirt. H: hi cutie B: hi H: how are you? (reward:4) B: good just making brisket and ribs you ? H: that sounds tasty. iâm just watching youtube (reward:5) B: i see . . i like watching movies too star wars especially H: star wars is a good movie (reward:5) B: horror movies or tv ? i like both i suppose H: yeah horror movies are ok (reward:5) B: watching a classic movie the wizard of oz H: do you like to swim? (reward:5) B: the beach and bike riding H: that sounds fun (reward:5) B: i hike at the park H: what else do you do? (reward:4) B: i run at central park H: oh youâre from new york? (reward:5) B: upstate new york near jamestown ny H: wow awesome, i live in canada (reward:5) | {
"id": "1512.05742"
} |
1901.04085 | Passage Re-ranking with BERT | Recently, neural models pretrained on a language modeling task, such as ELMo
(Peters et al., 2017), OpenAI GPT (Radford et al., 2018), and BERT (Devlin et
al., 2018), have achieved impressive results on various natural language
processing tasks such as question-answering and natural language inference. In
this paper, we describe a simple re-implementation of BERT for query-based
passage re-ranking. Our system is the state of the art on the TREC-CAR dataset
and the top entry in the leaderboard of the MS MARCO passage retrieval task,
outperforming the previous state of the art by 27% (relative) in MRR@10. The
code to reproduce our results is available at
https://github.com/nyu-dl/dl4marco-bert | http://arxiv.org/pdf/1901.04085 | Rodrigo Nogueira, Kyunghyun Cho | cs.IR, cs.CL, cs.LG | null | null | cs.IR | 20190113 | 20200414 | 0 2 0 2
r p A 4 1
] R I . s c [ 5 v 5 8 0 4 0 . 1 0 9 1 : v i X r a
# PASSAGE RE-RANKING WITH BERT
Rodrigo Nogueira New York University rodrigonogueira@nyu.edu
Kyunghyun Cho New York University Facebook AI Research CIFAR Azrieli Global Scholar kyunghyun.cho@nyu.edu
# ABSTRACT
Recently, neural models pretrained on a language modeling task, such as ELMo (Peters et al., 2017), OpenAI GPT (Radford et al., 2018), and BERT (De- vlin et al., 2018), have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. In this paper, we describe a simple re-implementation of BERT for query-based pas- sage re-ranking. Our system is the state of the art on the TREC-CAR dataset and the top entry in the leaderboard of the MS MARCO passage retrieval task, outper- forming the previous state of the art by 27% (relative) in MRR@10. The code to reproduce our results is available at https://github.com/nyu-dl/ dl4marco-bert
1
# INTRODUCTION
We have seen rapid progress in machine reading compression in recent years with the introduction of large-scale datasets, such as SQuAD (Rajpurkar et al., 2016), MS MARCO (Nguyen et al., 2016), SearchQA (Dunn et al., 2017), TriviaQA (Joshi et al., 2017), and QUASAR-T (Dhingra et al., 2017), and the broad adoption of neural models, such as BiDAF (Seo et al., 2016), DrQA (Chen et al., 2017), DocumentQA (Clark & Gardner, 2017), and QAnet (Yu et al., 2018).
The information retrieval (IR) community has also experienced a ï¬ourishing development of neural ranking models, such as DRMM (Guo et al., 2016), KNRM (Xiong et al., 2017), Co-PACRR (Hui et al., 2018), and DUET (Mitra et al., 2017). However, until recently, there were only a few large datasets for passage ranking, with the notable exception of the TREC-CAR (Dietz et al., 2017). This, at least in part, prevented the neural ranking models from being successful when compared to more classical IR techniques (Lin, 2019).
We argue that the same two ingredients that made possible much progress on the reading compre- hension task are now available for passage ranking task. Namely, the MS MARCO passage ranking dataset, which contains one million queries from real users and their respective relevant passages annotated by humans, and BERT, a powerful general purpose natural language processing model.
In this paper, we describe in detail how we have re-purposed BERT as a passage re-ranker and achieved state-of-the-art results on the MS MARCO passage re-ranking task.
# 2 PASSAGE RE-RANKING WITH BERT
Task A simple question-answering pipeline consists of three main stages. First, a large number (for example, a thousand) of possibly relevant documents to a given question are retrieved from a corpus by a standard mechanism, such as BM25. In the second stage, passage re-ranking, each of these documents is scored and re-ranked by a more computationally-intensive method. Finally, the top ten or ï¬fty of these documents will be the source for the candidate answers by an answer generation module. In this paper, we describe how we implemented the second stage of this pipeline, passage re-ranking.
Method The job of the re-ranker is to estimate a score si of how relevant a candidate passage di is to a query q. We use BERT as our re-ranker. Using the same notation used by Devlin et al.
1
(2018), we feed the query as sentence A and the passage text as sentence B. We truncate the query to have at most 64 tokens. We also truncate the passage text such that the concatenation of query, passage, and separator tokens have the maximum length of 512 tokens. We use a BERTLARGE model as a binary classiï¬cation model, that is, we use the [CLS] vector as input to a single layer neural network to obtain the probability of the passage being relevant. We compute this probability for each passage independently and obtain the ï¬nal list of passages by ranking them with respect to these probabilities.
We start training from a pre-trained BERT model and ï¬ne-tune it to our re-ranking task using the cross-entropy loss:
L = â log(sj) â log(1 â sj), jâJpos jâJneg (1)
where Jpos is the set of indexes of the relevant passages and Jneg is the set of indexes of non-relevant passages in top-1,000 documents retrieved with BM25.
# 3 EXPERIMENTS
We train and evaluate our models on two passage-ranking datasets, MS MARCO and TREC-CAR.
# 3.1 MS MARCO
The training set contains approximately 400M tuples of a query, relevant and non-relevant passages. The development set contains approximately 6,900 queries, each paired with the top 1,000 pas- sages retrieved with BM25 from the MS MARCO corpus. On average, each query has one relevant passage. However, some have no relevant passage because the corpus was initially constructed by retrieving the top-10 passages from the Bing search engine and then annotated. Hence, some of the relevant passages might not be retrieved by BM25.
An evaluation set with approximately 6,800 queries and their top 1,000 retrieved passages without relevance annotations is also provided.
Training We ï¬ne-tune the model using a TPU v3-81 with a batch size of 128 (128 sequences * 512 tokens = 65,536 tokens/batch) for 100k iterations, which takes approximately 30 hours. This corresponds to training on 12.8M (100k * 128) query-passage pairs or less than 2% of the full training set. We could not see any improvement in the dev set when training for another 3 days, which equivalent to seeing 50M pairs in total. We use ADAM (Kingma & Ba, 2014) with the initial learning rate set to 3 à 10â6, β1 = 0.9, β2 = 0.999, L2 weight decay of 0.01, learning rate warmup over the ï¬rst 10,000 steps, and linear decay of the learning rate. We use a dropout probability of 0.1 on all layers.
# 3.2 TREC-CAR
Introduced by Dietz et al. (2017), in this dataset, the input query is the concatenation of a Wikipedia article title with the title of one of its section. The relevant passages are the paragraphs within that section. The corpus consists of all of the English Wikipedia paragraphs, except the abstracts. The released dataset has ï¬ve predeï¬ned folds, and we use the ï¬rst four as a training set (approximately 2.3M queries), and the remaining as a validation set (approximately 580k queries). The test set is the same one used to evaluate the submissions to TREC-CAR 2017 (approx. 2,254 queries).
Although TREC-CAR 2017 organizers provide manual annotations for the test set, only the top ï¬ve passages retrieved by the systems submitted to the competition have manual annotations. This means that true relevant passages are not annotated if they rank low. Hence, we evaluate using the automatic annotations, which provide relevance scores for all possible query-passage pairs.
Training We follow the same procedure described for the MS MARCO dataset to ï¬ne-tune our models on TREC-CAR. However, there is an important difference. The ofï¬cial pre-trained BERT
# 1 https://cloud.google.com/tpu/
2
MS MARCO | TREC-CAR MRR@10 MAP Method Dev Eval Test BM25 (Lucene, no tuning) 16.7 16.5 12.3 BM25 (Anserini, tuned - - 15.3 Co-PACRR* 2017) - - 14.8 KNRM ( 21.8 19.8 - Conv-KNRM (D 29.0 27.1 - IRNet* 27.8 28.1 - BERT Base 34.7 - 31.0 BERT Large 36.5 35.8 33.5
Table 1: Main Result on the passage re-ranking datasets. * Best Entry in the TREC-CAR 2017. { Previous SOTA in the MS MARCO leaderboard as of 01/04/2019; unpublished work.
0.4 @ BERT Large = = IRNET (previous 03 SOTA) ° & 02 ae $ 0.1 i¢) 1k 10k 100k 1M 10M 100M Number of training question-passage pairs
Number of training question-passage pairs
Figure 1: Number of MS MARCO examples seen during training vs. MRR@10 performance.
models2 were pre-trained on the full Wikipedia, and therefore they have seen, although in an unsu- pervised way, Wikipedia documents that are used in the test set of TREC-CAR. Thus, to avoid this leak of test data into training, we pre-trained the BERT re-ranker only on the half of Wikipedia used by TREC-CARâs training set.
For the ï¬ne-tuning data, we generate our query-passage pairs by retrieving the top ten passages from the entire TREC-CAR corpus using BM25.3 This means that we end up with 30M example pairs (3M queries * 10 passages/query) to train our model. We train it for 400k iterations, or 12.8M examples (400k iterations * 32 pairs/batch), which corresponds to only 40% of the training set. Similarly to MS MARCO experiments, we did not see any gain on the dev set by training the models longer.
3.3 RESULTS
We show the main result in Table 1. Despite training on a fraction of the data available, the proposed BERT-based models surpass the previous state-of-the-art models by a large margin on both of the tasks.
Training size vs performance: We found that the pretrained models used in this work require few training examples from the end task to achieve a good performance 1. For example, a BERTLARGE trained on 100k question-passage pairs (less than 0.3% of the MS MARCO training data) is already 1.4 MRR@10 points better than the previous state-of-the-art, IR-NET.
2 https://github.com/google-research/bert 3We use the Anserini toolkit (Yang et al., 2018) to index and retrieve the passages.
3
# 4 CONCLUSION
We have described a simple adaptation of BERT as a passage re-ranker that has become the state of the art on two different tasks, which are TREC-CAR and MS MARCO. We have made the code to reproduce our experiments publicly available.
# REFERENCES
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading wikipedia to answer open- domain questions. arXiv preprint arXiv:1704.00051, 2017.
Christopher Clark and Matt Gardner. Simple and effective multi-paragraph reading comprehension. arXiv preprint arXiv:1710.10723, 2017.
Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. Convolutional neural networks for soft-matching n-grams in ad-hoc search. In Proceedings of the Eleventh ACM International Con- ference on Web Search and Data Mining, pp. 126â134. ACM, 2018.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. Quasar: Datasets for question answer- ing by search and reading. arXiv preprint arXiv:1707.03904, 2017.
Laura Dietz, Manisha Verma, Filip Radlinski, and Nick Craswell. Trec complex answer retrieval overview. TREC, 2017.
Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179, 2017.
Jiafeng Guo, Yixing Fan, Qingyao Ai, and W Bruce Croft. A deep relevance matching model for ad-hoc retrieval. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pp. 55â64. ACM, 2016.
Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. Co-pacrr: A context-aware neural ir model for ad-hoc retrieval. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 279â287. ACM, 2018.
Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Jimmy Lin. The neural hype and comparisons against weak baseline. 2019.
Sean MacAvaney, Andrew Yates, and Kai Hui. Contextualized pacrr for complex answer retrieval. 2017.
Bhaskar Mitra, Fernando Diaz, and Nick Craswell. Learning to match using local and distributed representations of text for web search. In Proceedings of the 26th International Conference on World Wide Web, pp. 1291â1299. International World Wide Web Conferences Steering Commit- tee, 2017.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268, 2016.
Matthew E Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. Semi-supervised sequence tagging with bidirectional language models. arXiv preprint arXiv:1705.00108, 2017.
4
Improving language un- derstanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/research-covers/language-unsupervised/language understanding paper. pdf, 2018.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. Bidirectional attention ï¬ow for machine comprehension. arXiv preprint arXiv:1611.01603, 2016.
Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. End-to-end neural ad- hoc ranking with kernel pooling. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 55â64. ACM, 2017.
Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Reproducible ranking baselines using lucene. Journal of Data and Information Quality (JDIQ), 10(4):16, 2018.
Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. Qanet: Combining local convolution with global self-attention for reading compre- hension. arXiv preprint arXiv:1804.09541, 2018.
5 | {
"id": "1810.04805"
} |
1901.03461 | Dialog System Technology Challenge 7 | This paper introduces the Seventh Dialog System Technology Challenges (DSTC),
which use shared datasets to explore the problem of building dialog systems.
Recently, end-to-end dialog modeling approaches have been applied to various
dialog tasks. The seventh DSTC (DSTC7) focuses on developing technologies
related to end-to-end dialog systems for (1) sentence selection, (2) sentence
generation and (3) audio visual scene aware dialog. This paper summarizes the
overall setup and results of DSTC7, including detailed descriptions of the
different tracks and provided datasets. We also describe overall trends in the
submitted systems and the key results. Each track introduced new datasets and
participants achieved impressive results using state-of-the-art end-to-end
technologies. | http://arxiv.org/pdf/1901.03461 | Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fernando D'Haro, Lazaros Polymenakos, Chulaka Gunasekara, Walter S. Lasecki, Jonathan K. Kummerfeld, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Xiang Gao, Huda Alamari, Tim K. Marks, Devi Parikh, Dhruv Batra | cs.CL | This paper is presented at NIPS2018 2nd Conversational AI workshop | null | cs.CL | 20190111 | 20190111 | 9 1 0 2 n a J 1 1 ] L C . s c [
1 v 1 6 4 3 0 . 1 0 9 1 : v i X r a
arXiv:1901.0346lv1
# Dialog System Technology Challenge 7
Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fernando DâHaro, Lazaros Polymenakos, Chulaka Gunasekara, Walter S. Lasecki, Jonathan K. Kummerfeld, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Xiang Gao, Huda Alamari, Tim K. Marks, Devi Parikh and Dhruv Batraâ
# Abstract
This paper introduces the Seventh Dialog System Technology Challenges (DSTC), which use shared datasets to explore the problem of building dialog systems. Re- cently, end-to-end dialog modeling approaches have been applied to various dialog tasks. The seventh DSTC (DSTC7) focuses on developing technologies related to end-to-end dialog systems for (1) sentence selection, (2) sentence generation and (3) audio visual scene aware dialog. This paper summarizes the overall setup and results of DSTC7, including detailed descriptions of the different tracks and pro- vided datasets. We also describe overall trends in the submitted systems and the key results. Each track introduced new datasets and participants achieved impres- sive results using state-of-the-art end-to-end technologies.
# 1 Introduction
The ongoing DSTC series started as an initiative to provide a common testbed for the task of Dialog State Tracking; the ï¬rst edition was organized in 2013 (Williams et al. [2013]) and used human- computer dialogs in the bus timetable domain. Dialog State Tracking Challenges 2 (Henderson et al. [2014a]) and 3 (Henderson et al. [2014b]) followed in 2014, using more complicated and dynamic di- alog states for restaurant information in several situations: dialog state tracking for unseen states and different domain data from the training data. Dialog State Tracking Challenge 4 (Kim et al. [2017]) and Dialog State Tracking Challenge 5 (Kim et al. [2016]) moved to tracking human-human dialogs in mono- and cross-language settings. For the most recent event, DSTC 6 in 2017, the acronym was changed to mean Dialog System Technology Challenge (Hori et al. [2018b]) and focused on end-to-end systems with the aim of minimizing effort on human annotation while exploring more complex tasks.
As we can see, since 2013 the challenge has evolved in several ways. First, from modeling human- computer interactions, to investigating human-human interactions, and ï¬nally moving toward com- plex end-to-end systems. DSTC has also offered pilot tasks on Spoken Language Understanding, Speech Act Prediction, Natural Language Generation and End-to-End System Evaluation, which expanded interest in the challenge in the research communities of dialog systems and AI. Therefore, given the remarkable success of the ï¬rst ï¬ve editions, the complexity of the dialog phenomenon and the interest of the research community in the broader variety of dialog related problems, the DSTC rebranded itself as "Dialog System Technology Challenges" for its sixth edition.
For the seventh event, there were ï¬ve task proposals. These were discussed at the sixth event, with a particular focus on how applied proposals were, and how they ï¬t within the larger space of problems of interest to the research community. Three critical issues were raised in the discussion. First, the retrieval-based approach for response generation is still essential for practical use, even if the generative approach often used by neural conversation models has had enormous success (Sentence Selection Track). Second, working on improving generative approaches is also important,
âEvery author has equal contribution. http://workshop.colips.org/dstc7
32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada.
but results generated by systems should have more variety according to their contexts, including dialog histories, locations, and other dialog situations (Sentence Generation Track). The ï¬nal issue is fusion with other areas; visual dialog is one direction in which information in images is ued in the dialog (Audio Visual Scene-Aware Dialog Track). Following the discussion, three tasks were proposed for the seventh dialog system technology challenge, as described below.
In Sentence Selection (described in more detail in section 2), the challenge consists of several sub- tasks, in which system are given a partial conversation, and they must select the correct next utter- ances from a set of candidates or indicate that none of the proposed utterances is correct. This is intended to push the utterance classiï¬cation task towards real-world problems.
In Sentence Generation (described in detail in section 3), the goal is to generate conversational re- sponses that go beyond chitchat, by injecting informational responses that are grounded in external knowledge. Since there is no speciï¬c or predeï¬ned goal, this task does not constitute what is com- monly called task-oriented dialog, but target human-human dialogs where the underlying goal is often ill-deï¬ned or not known in advance.
Finally, in the Audio Visual Scene-aware track (described in detail in section 4), the goal is to gen- erate system responses in a dialog about an input video. Dialog systems need to understand scenes to have conversations with users about the objects and events around them. In this track multi- ple research technologies are integrated, including: end-to-end dialog technologies, which generate system responses using models trained from dialog data; visual question answering (VQA) technolo- gies, which answer to questions about images using learned image features; and video description technologies, in which videos are described/narrated using multimodal information.
# 2 Sentence Selection Track
This task2 pushed the state-of-the-art in goal-oriented dialog systems in four directions deemed necessary for practical automated agents, using two new datasets. We sidestepped the challenge of evaluating generated utterances by formulating the problem as response selection, as proposed by Lowe et al. [2015]. At test time, participants were provided with partial conversations, each paired with a set of utterances that could be the next utterance in the conversation. Systems needed to rank these options, with the goal of placing the true utterance ï¬rst. Unlike prior work, we considered several advanced variations of the task:
Subtask 1 100 candidates, including 1 correct option.
Subtask 2 120,000 candidates, including 1 correct option (Ubuntu data only).
Subtask 3 100 candidates, including 1-5 correct options that are paraphrases (Advising data only).
Subtask 4 100 candidates, including 0-1 correct options.
Subtask 5 The same as subtask 1, but with access to external information.
These subtasks push the capabilities of systems and enable interesting comparisons of strengths and weaknesses of different approaches. Participants were able to use the provided knowledge sources as is, or automatically transform them to other representations (e.g. knowledge graphs, continuous embeddings, etc.) that would improve their dialog systems.
Comparing to the DSTC6 Sentence Selection track, this yearâs track differed in several ways. Most importantly, we use human-human dialogs, rather than a synthetically created dataset. Each of our subtasks also adds a novel dimension compared to the DSTC6 task, which provided candidate sets of size 10 with a single correct option, and no external resource.
# 2.1 Data
Our datasets are derived from collections of two-party conversations. The conversations are ran- domly split part way through to create a partial conversation and the true follow-up response. In- correct candidate utterances are selected by randomly sampling utterances from the dataset. For the data with paraphrases, the incorrect candidates are sampled with paraphrases as well. For the
# 2https://ibm.github.io/dstc7-noesis/public/index.html
2
how do I turn on optical output under gutsy?. (soundcard) probably check the settings in the mixer Iâve tried that, speakers still say no incoming signal. there should be some check box for analog/digital output, but unfortunately I wouldnât know much more
Student: Advisor: Student: Advisor: Student: Advisor: Hello! Hello. Iâm looking for good courses to take. Are you looking for courses in a speciï¬c area of CS? Not in particular. Are you looking to take a very difï¬cult class?
Student: Hello!
Figure 1: Examples of partial dialogs in task one (Ubuntu top, Advising bottom).
data where sometimes the pool does not contain the correct utterance, twenty percent of cases are selected at random to have no correct utterance.
This task considers datasets in two domains. First, a collection of two-party conversations from the Ubuntu support channel, in which one user asks a question and another helps them resolve their problem. These are extracted using the model described by Kummerfeld et al. [2018], instead of the heuristic approach used in Lowe et al. [2015]. This approach produced 135,000 conversations, which we sample 100,000 of for training and 1,000 for testing. For this setting, manual pages are provided as a form of knowledge grounding.
Second, a new collection of conversations in a student advising domain, where the goal is to help a student select courses for the coming semester. These were collected at the University of Michigan with students playing both roles with simulated personas, including information about preferences for workloads, class sizes, topic areas, time of day, etc. Both participants had access to the list of courses the student had taken previously, and the adviser had access to a list of suggested courses that the student had completed the prerequisites for. In the shared task, we provide all of this information - student preferences, and course information - to participants. 815 conversations were collected, with on average 18 messages per conversation and 9 tokens per message. This data was expanded by collecting 82,094 paraphrases of messages. Of this data, 700 conversations were used in the shared task, with 500 for training, 100 for development, and 100 for testing. The remaining 115 conversations were used as a source of negative candidates in the sets systems choose from. For the test data, 500 conversations were constructed by cutting the conversations off at 5 points and using paraphrases to make 5 distinct conversations. The training data was provided in two forms. First, the 500 training conversations with a list of paraphrases for each utterance, which participants could use in any way. Second, 100,000 partial conversations generated by randomly selecting paraphrases.
Finally, as part of the challenge, we provided a baseline system that implemented the Dual-Encoder model from Lowe et al. [2015]. This lowered the barrier to entry, encouraging broader participation in the task.
# 2.2 Results
We considered a range of metrics when comparing models. Following Lowe et al. [2015], we use Recall@N, where we count how often the correct answer is within the top N speciï¬ed by a system. In prior work the set of candidates was 10 and N was set at 1, 2, and 5. Since our sets are larger, we consider 1, 10, and 50. We also consider a widely used metric from the ranking literature: Mean Reciprocal Rank (MRR). Finally, for subtask 3 we use Mean Average Precision (MAP) since there are multiple correct utterances in the set. To determine a single winner for each subtask, we used the mean of Recall@10 and MRR.
Twenty teams participated in at least one of the subtasks, seventeen participated in two or more, and three participated in every subtask. For both datasets the subtask with the most entries was the ï¬rst, which is closest to prior tasks. One team had a clear lead, scoring the highest across all but one of the subtasks (task 2 on Ubuntu, when the number of candidates is increased). The Advising data was consistently harder than the Ubuntu data, probably because of the limited training data. However,
3
the size of the Ubuntu dataset also posed a challenge in training, as substantial computation was required for even a single training epoch.
The best system had a Recall@1 of 0.645 on the ï¬rst subtask for Ubuntu, and was based on the Enhanced Sequential Inference Model (ESIM) architecture proposed by Chen et al. [2016]. Their score on the second subtask was 0.067, which is a factor of ten lower, but with more than thousand times as many options to choose from. The introduction of cases with no correct answer (subtask four) led to slightly lower results (0.511), while the availability of external data (subtask 5) helped slightly (0.653). We see a similar trend on the Advising data, except that external data was less useful.
# 2.3 Summary
This track introduced two new dialog datasets to the research community and a range of variations on the sentence selection task. The best submitted system managed to achieve Recall@1 score of 0.645 on Ubuntu, an impressive result given the large number of candidates and the complexity of the dialog. One outstanding challenge is how to effectively use external information â none of the teams managed to substantially improve performance from subtask 1 to subtask 5.
# 3 Sentence Generation Track
Recent work [Ritter et al., 2011, Sordoni et al., 2015, Shang et al., 2015, Vinyals and Le, 2015, Serban et al., 2016, etc.] has shown that conversational models can be trained in a completely end-to- end and data-driven fashion, without any hand-coding. However, prior work has mostly focused to chitchat, as that is a common feature of messages in the social media data (e.g., Twitter [Ritter et al., 2011]) used to train these systems. To effectively move beyond chitchat and produce system re- sponses that are both substantive and âusefulâ, fully data-driven models need grounding in the real world and access to external knowledge (textual or structured). To do so, the Generation Task of this year is inspired by the knowledge-grounded conversational framework of Ghazvininejad et al. [2018], which combines conversational input and textual data from the userâs environment (here, a web page that is discussed). Such a framework maintains the beneï¬t of fully data-driven conversa- tion while attempting to get closer to task-oriented scenarios, with the goal of informing and helping the users and not just entertaining them.
# 3.1 Task deï¬nition
The task follows the data-driven framework established in 2011 by Ritter et al. [2011], which avoids hand-coding any linguistic, domain, or task-speciï¬c information. In the knowledge-grounded setting of Ghazvininejad et al. [2018], that framework is extended as each system input consists of two parts: Conversational input: Similar to DSTC6 Track 2 [Hori and Hori, 2017], all preceding turns of the conversation are available to the system. For practical purposes, we truncate the context to the K most recent turns. Contextually-relevant âfactsâ: The system is given snippets of text that are relevant to the context of the conversation. These snippets of text are not drawn from any conversational data, and are instead extracted from external knowledge sources such as Wikipedia or Foursquare.
From this input, the task it to produce a response that is both conversationally appropriate and informative. The evaluation setup is presented in Section 3.3.
# 3.2 Data
We extracted conversation threads from Reddit data, which is particularly well suited for grounded conversation modeling. Indeed, Reddit conversations are organized around submissions, where each conversation is typically initiated with a URL to a web page (grounding) that deï¬nes the subject of the conversation. For this task, we restrict ourselves to submissions that contain exactly one URL and a title. To reduce spamming and offensive language and improve the overall quality of the data, we manually whitelisted the domains of these URLs and the Reddit topics (i.e., âsubredditsâ) in which they appear. This ï¬ltering yielded about 3 million conversational responses and 20 million facts
4
divided into train, validation and tests.3 For the test set, we selected conversational turns for which 6 or more responses were available, in order to create a multi-reference test set. Given other ï¬ltering criteria such as turn length, this yielded a 5-reference test set of size 2208 (For each instance, we set aside one of the 6 human responses to assess human performance on this task). More information about the data for this task can be found on the data extraction web site, which makes available all of the data extraction and evaluation code.4
# 3.3 Evaluation
We evaluate response quality using both automatic and human evaluation. Since we are not consid- ering task-oriented dialog, there is no pre-speciï¬ed task and therefore no extrinsic way of measuring task success. Instead, we performed a per-response human evaluation judging each system response using crowdsourcing: Relevance: This evaluation criterion asks whether the system response is conversationally appro- priate and relevant given the K immediately preceding turns (we set K = 2 to reduce the judgesâ cognitive load). Note that this judgment has nothing to do with grounding in external sources, and is similar to human judgments for prior data-driven conversation models (e.g., [Sordoni et al., 2015]). Interest: This evaluation criterion measures the degree to which the produced response is interest- ing and informative in the context of a document provided by the URL. Since it would be impractical to show entire web pages to the crowdworkers, we restricted ourselves at training and test time to URLs with named anchors (i.e., preï¬xed with â#â in the URL), and the crowdworkers only had to read a snippet of the document immediately following that anchor. Note that models could use full web pages as input, and the decision to only show a snippet for each response was again to reduce cognitive load.
We scored both evaluation criteria on a 5-point Likert scale, and ï¬nally combined the two judgments by weighting them equally. In order to provide participants with preliminary results to include in their system descriptions, we also performed automatic evaluation using standard machine trans- lation metrics, including BLEU [Papineni et al., 2002], METEOR [Lavie and Agarwal, 2007], and NIST [Doddington, 2002]. NIST is a variant of BLEU that weights n-gram matches by their infor- mation gain, i.e., it indirectly penalizes uninformative n-grams such as âI donâtâ and âdonât knowâ. The ï¬nal ranking of the systems was based only on human evaluation scores.
# 3.4 Results
The Generation Task received 26 system submissions from 7 teams. In addition to these systems, we also evaluated a âhumanâ system (one of the six human references set aside for evaluation) and three baselines: a seq2seq baseline, a random baseline (which randomly selected responses from the training data), and a constant baseline (which always responds âI donât know what you mean.â). The reason for including a constant baseline is that such a deï¬ective response generation system can be surprisingly competitive, at least when evaluated on automatic metrics (BLEU).
The ï¬ndings are as follows for each of the metrics: BLEU-4: When evaluated on 5 references, the constant baseline, which always responds deï¬ec- tively, does surprisingly well (BLEU=2.87%) and outperforms all the submitted systems (BLEU4 ranging from 1.01% to 1.83%), and is only outperformed by humans. In further analysis, we found that reducing the number of references to one solved the problem, as almost all the systems were able to outperform the baseline according to single-reference BLEU. We suspect this deï¬ciency of multi-reference BLEU, previously noted in Vedantam et al. [2015], to be due to its parameterization as a precision metric. For example, if one of the gold responses happens to be âI donât know what you meanâ, the constant baseline gets a maximum score for that instance, even if the other refer- ences are semantically completely unrelated. Thus, this biases the metric towards bland responses, as often at least one of the 5 references is somewhat deï¬ective (e.g., contains âI donât knowâ). Based on these observations, we chose to use single-reference BLEU instead of multi-reference BLEU for this DSTC task, as the former gave much more meaningful results. NIST-4: The NIST score weights ngram matches by their information gain, and effectively penal-
3We could have easily increased the number of web domains to create a bigger dataset, but we aimed to make the task relatively accessible for participants with limited computing resources.
# 4https://github.com/DSTC-MSR-NLP/DSTC7-End-to-End-Conversation-Modeling
5
izes common n-grams such as âI donât knowâ, which alleviates the problem with multi-reference BLEU mentioned above. None of the baselines is competitive with the top systems according to NIST-4, even when using 5 references. This suggests that NIST might be a more suitable metric than BLEU when dealing with multi-reference test sets, and it penalizes bland responses. METEOR: This metric suffers from the same problem as BLEU-4, as the constant baseline per- forms very well on that metric and outperforms all submitted primary systems but one. We suspect this is due to the fact that METEOR (as BLEU) does not consider information gain in its scoring. Human Evaluation: Owing to the cost of crowdsourcing, we limited evaluation to a sample of 1000 conversations and used primary systems only. All systems were assigned the same conversa- tions. Each output was rated by 3 randomly-assigned judges provided by a crowdsourcing service. Judges were asked to rate outputs in context for Relevance and Interest using a 5-point Likert scale. Not unexpectedly, the constant baseline performed moderately well on Relevance (2.60), but poorly on Interest judgments, where it was statistically indistinguishable from the (low) random baseline (random: 2.35, constant: 2.32). The best system returned a composite score of 2.93 (Relevance: 2.99, Interest: 2.87). This remains well below the human baseline of 3.55 (Relevance: 3.61, Interest: 3.49). After replacing spammers, interrater agreement on a converted 3-way scale was fair, with Fleissâ Kappa at 0.39 for Relevance and 0.38 for Interest.
# 3.5 Summary
The sentence generation task challenged participants to produce interesting and informative end- to-end conversational responses that drew on textual background knowledge. In this respect, the task was signiï¬cantly more challenging that the DSTC6 task that was focused on the conversational dimensions of response generation. In general, competing system outputs were judged by humans to be more relevant and interesting than our constant and random baselines. It is also clear, however, that the quality gap between human and system responses is substantial, indicating that there is considerable space for research in future algorithmic improvements.
# 4 Audio Visual Scene-aware dialog Track
In this track, we consider a new research target: a dialog system that can discuss dynamic scenes with humans. This lies at the intersection of research in natural language processing, computer vi- sion, and audio processing. As described above, end-to-end dialog modeling using paired input and output sentences has been proposed as a way to reduce the cost of data preparation and system de- velopment. Such end-to-end approaches have been shown to better handle ï¬exible conversations by enabling model training on large conversational datasets Vinyals and Le [2015], Hori et al. [2018b]. However, current dialog systems cannot understand a scene and have a conversation about what is going on in it. To develop systems that can carry on a conversation about objects and events taking place around the machines or the users, the systems need to understand not only a dialog history but also the video and audio information in the scene. In the ï¬eld of computer vision, interaction with humans about visual information has been explored in visual question answering (VQA) by Antol et al. [2015] and visual dialog by Das et al. [2017]. These tasks have been the focus of in- tense research, aiming to (1) generate answers to questions about things and events in a single static image and (2) hold a meaningful dialog with humans about an image using natural, conversational language in an end-to-end framework. To capture the semantics of dynamic scenes, recent research has focused on video description. The state-of-the-art in video description uses multimodal fusion to combine different input modalities (feature types), such spatiotemporal motion features and audio features proposed by Hori et al. [2017]. Since the recent revolution of neural network models allows us to combine different modules into a single end-to-end differentiable network, this framework al- low us to build scene aware dialog systems by combining end-to-end dialog and multimodal video description approaches. We can simultaneously use video features and user utterances as input to an encoder-decoder-based system whose outputs are natural-language responses.
# 4.1 Task deï¬nition
In this track, the system must generate responses to a user input in the context of a given dialog. The dialog context consists of a dialog history between the user and the system in addition to the video and audio information in the scene. There are two tasks, each with two versions (a and b):
6
Task 1: Video and Text (a) Using the video and text training data provided but no external data sources, other than publicly available pre-trained feature extraction models (b) Also using external data for training.
Task 2: Text Only (a) Do not use the input videos for training or testing. Use only the text training data (dialogs and video descriptions) provided. (b) Any publicly available text data may be used for training.
# 4.2 Data
To set up the Audio Visual Scene-Aware Dialog (AVSD) track, we collected text-based dialogs about short videos of Charades by Sigurdsson et al. [2016]5, a dataset of untrimmed and multi- action videos, along with video descriptions in Alamri et al. [2018]. The data collection paradigm for dialogs was similar to the one described in Das et al. [2016], in which for each image, two parties interacted via a text interface to yield a dialog. In Das et al. [2016], each dialog consisted of a sequence of questions and answers about an image. In the video scene-aware dialog case, two parties had a discussion about events in a video. One of the two parties played the role of an answerer who had already watched the video. The answerer answered questions asked by their counterpart â the questioner. The questioner was not allowed to watch the whole video but were able to see the ï¬rst, middle and last frames of the video as single static images. The two had 10 rounds of QA, in which the questioner asked about the events that happened between the frames. At the end, the questioner summarized the events in the video as a description.
The DSTC7 AVSD ofï¬cial dataset contains 7,659, 1,787 and 1,710 dialogs for training, validation and testing, respectively. The questions and answers of the AVSD dataset mainly consists of 5 to 8 words, making them longer and more descriptive than VQA. The dialog contains questions asking about objects, actions and audio information in the videos. Although we tried to collect questions directly relevant to the event displayed, some questions ask about abstract information in the video such as how to begin the videos and the duration of the videos. Table 1 shows an example dialog from the data set.
# Table 1: An example dialog from the AVSD dataset.
Answerer He appears to be in the bedroom. By him entering the room. He pick up a towel and folds it. He just folds them and leaves them on the chair. Nothing much except this activity. No he did not speak at all. No he appears alone there. No pets to see in this clip. Questioner QA1 What kind of room does this appear to be? QA2 QA3 QA4 What does he do with it ? QA5 What does he do next? QA6 QA7 QA8 QA9 QA10 Are there any other actions in the video? How does the video begin? Does he have anything in his hands? Does he speak in the video? Is there anyone else in room at all? Can you see or hear any pets in the video? Is there any noise in the video of importance? Not any noise important there. Nothing else important to know.
# 4.3 Evaluation
In this challenge, the quality of a systemâs automatically generated sentences is evaluated using objective measures. These determine how similar the generated responses are to ground truths from humans and how natural and informative the responses are. To collect more possible answers in response to the questions for the test videos, we asked 5 humans to watch a video and read a dialog between a questioner and an answer about the video, and then to generate an answer in response to the question. We evaluated the automatically generated answers by comparing with the 6 ground truth sentences (one original answer and 5 subsequently collected answers). We used the MSCOCO evaluation tool for objective evaluation of system outputs 6. The supported metrics include word- overlap-based metrics such as BLEU, METEOR, ROUGE_L, and CIDEr.
# 5http://allenai.org/plato/charades/ 6https://github.com/tylin/coco-caption
7
We also collected human ratings for each system response using a 5 point Likert Scale, where hu- mans rated system responses given a dialog context as: 5 for Very good, 4 for Good, 3 for Ac- ceptable, 2 for Poor, and 1 for Very poor. Since we the dataset contains questions and answers, we asked humans to consider correctness of the answers and also naturalness, informativeness, and appropriateness of the response according to the given context.
# 4.4 Results
The AVSD Task received 31 system submission from 9 teams. We built a baseline end-to-end dialog system that can generate answers in response to user questions about events in a video sequence as described in Hori et al. [2018a]. Our architecture is similar to the Hierarchical Recurrent Encoder in Das et al. [2016]. The question, visual features, and the dialog history are fed into corresponding LSTM-based encoders to build up a context embedding, and then the outputs of the encoders are fed into an LSTM-based decoder to generate an answer. The history consists of encodings of QA pairs. We feed multimodal attention-based video features into the LSTM encoder instead of single static image features. The systems submitted deployed LSTM, BLSTM, and GRU with cross entropy as the objective function. The best system applied "Hierarchical and Co-Attention mechanisms to combine text and vision" from Libovick`y and Helcl [2017], Lu et al. [2016]. Table 2 shows the evaluation results for the baseline and best systems. Under this evaluation, the human rating for the original answers was 3.938.
Table 2: Performance comparison between the baseline and the best system.
System BLEU-4 METEOR CIDEr Human rating Baseline Best
# 4.5 Summary
We introduced a new challenge task and dataset for Audio Visual Scene-Aware Dialog (AVSD) in DSTC7. This is the ï¬rst attempt to combine end-to-end conversation and end-to-end multimodal video description models into a single end-to-end differentiable network to build scene-aware dialog systems. The best system applied hierarchical attention mechanisms to combine text and visual information, improving by 22% over the human rating for the baseline system. The language models trained from QA are still strong approaches and the power to predict the objects and events in the video is not sufï¬cient to answer the questions correctly. Future work includes more detailed analysis of the correlation between the QA text and the video scenes.
# 5 Conclusion and Future Directions
In this paper, we summarized tasks conducted on the seventh dialog system technology challenge (DSTC7): sentence selection, sentence generation, and audio visual scene-aware dialog. The sen- tence selection track contained several variations on the response selection problem, with ï¬ve sub- tasks and two new datasets. The sentence generation track provided a test of knowledge-grounded response production, with the aim of creating more controllable generators. The audio visual scene- aware track raised a new problem in which dialog is generated about a given video in a variety of sub-tasks.
All of the data described in this paper will be provided as a large-scale benchmark of dialog systems from several viewpoints, after the challenge, to support future dialog system research. However, there are several major remaining challenges for dialog systems. For example, transferring models trained on large-scale data-sets to a variety of domains that do not have enough data is a known issue for dialog systems, as mentioned in DSTC3. Data created this challenge, which focused on end-to-end learning, does not address this issue, which would require expanding to a larger variety of domains. We expect to continue the challenge in the future, providing new testbeds that work towards the remaining open problems of dialog system research.
8
# References
H. Alamri, V. Cartillier, R. G. Lopes, A. Das, J. Wang, I. Essa, D. Batra, D. Parikh, A. Cherian, T. K. Marks, et al. Audio visual scene-aware dialog (avsd) challenge at dstc7. arXiv preprint arXiv:1806.00525, 2018.
S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV), 2015.
Q. Chen, X. Zhu, Z. Ling, S. Wei, H. Jiang, and D. Inkpen. Enhanced lstm for natural language inference. arXiv preprint arXiv:1609.06038, 2016.
A. Das, S. Kottur, K. Gupta, A. Singh, D. Yadav, J. M. F. Moura, D. Parikh, and D. Batra. Visual dialog. CoRR, abs/1611.08669, 2016. URL http://arxiv.org/abs/1611.08669.
A. Das, S. Kottur, J. M. Moura, S. Lee, and D. Batra. Learning cooperative visual dialog agents with deep reinforcement learning. In International Conference on Computer Vision (ICCV), 2017.
G. Doddington. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the Second International Conference on Human Language Technol- ogy Research, HLT â02, pages 138â145, 2002.
M. Ghazvininejad, C. Brockett, M. Chang, B. Dolan, J. Gao, W. Yih, and M. Galley. A knowledge- grounded neural conversation model. AAAI, 2018.
M. Henderson, B. Thomson, and J. D. Williams. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263â272, 2014a.
M. Henderson, B. Thomson, and J. D. Williams. The third dialog state tracking challenge. In Spoken Language Technology Workshop (SLT), 2014 IEEE, pages 324â329. IEEE, 2014b.
C. Hori and T. Hori. End-to-end conversation modeling track in DSTC6. arXiv:1706.07440, 2017.
C. Hori, T. Hori, T.-Y. Lee, Z. Zhang, B. Harsham, J. R. Hershey, T. K. Marks, and K. Sumi. Attention-based multimodal fusion for video description. In ICCV, 2017.
C. Hori, H. Alamri, J. Wang, G. Winchern, T. Hori, A. Cherian, T. K. Marks, V. Cartillier, R. G. Lopes, A. Das, et al. End-to-end audio visual scene-aware dialog using multimodal attention- based video features. arXiv preprint arXiv:1806.08409, 2018a.
C. Hori, J. Perez, R. Higasinaka, T. Hori, Y.-L. Boureau, M. Inaba, Y. Tsunomori, T. Takahashi, K. Yoshino, and S. Kim. Overview of the sixth dialog system technology challenge: Dstc6. Computer Speech & Language, 2018b.
S. Kim, L. F. DâHaro, R. E. Banchs, J. D. Williams, M. Henderson, and K. Yoshino. The ï¬fth dialog state tracking challenge. In Spoken Language Technology Workshop (SLT), 2016 IEEE, pages 511â517. IEEE, 2016.
S. Kim, L. F. DâHaro, R. E. Banchs, J. D. Williams, and M. Henderson. The fourth dialog state tracking challenge. In Dialogues with Social Robots, pages 435â449. Springer, 2017.
J. K. Kummerfeld, S. R. Gouravajhala, J. Peper, V. Athreya, C. Gunasekara, J. Ganhotra, S. S. Patel, L. Polymenakos, and W. S. Lasecki. Analyzing assumptions in conversation disentanglement research through the lens of a new dataset and model. ArXiv e-prints, October 2018. URL https://arxiv.org/pdf/1810.11118.pdf.
A. Lavie and A. Agarwal. METEOR: An automatic metric for mt evaluation with high levels of cor- relation with human judgments. In Proc. of the Second Workshop on Statistical Machine Trans- lation, StatMT â07, pages 228â231, Stroudsburg, PA, USA, 2007. Association for Computational Linguistics. URL http://dl.acm.org/citation.cfm?id=1626355.1626389.
J. Libovick`y and J. Helcl. Attention strategies for multi-source sequence-to-sequence learning. arXiv preprint arXiv:1704.06567, 2017.
9
R. Lowe, N. Pow, I. Serban, and J. Pineau. The ubuntu dialogue corpus: A large dataset In Proceedings of the 16th An- for research in unstructured multi-turn dialogue systems. nual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285â294, Prague, Czech Republic, September 2015. Association for Computational Linguistics. URL http://aclweb.org/anthology/W15-4640.
J. Lu, J. Yang, D. Batra, and D. Parikh. Hierarchical question-image co-attention for visual question answering. In Advances In Neural Information Processing Systems, pages 289â297, 2016.
K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: a method for automatic evaluation of machine translation. ACL, 2002.
A. Ritter, C. Cherry, and W. B. Dolan. Data-driven response generation in social media. EMNLP, 2011.
I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. AAAI, 2016.
L. Shang, Z. Lu, and H. Li. Neural responding machine for short-text conversation. ACL-IJCNLP, 2015.
G. A. Sigurdsson, G. Varol, X. Wang, I. Laptev, A. Farhadi, and A. Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. ArXiv, 2016. URL http://arxiv.org/abs/1604.01753.
A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.-Y. Nie, J. Gao, and B. Dolan. A neural network approach to context-sensitive generation of conversational responses. NAACL- HLT, 2015.
R. Vedantam, C. L. Zitnick, and D. Parikh. CIDEr: Consensus-based image description evaluation. In CVPR, pages 4566â4575, 2015.
O. Vinyals and Q. Le. A neural conversational model. ICML, 2015.
J. Williams, A. Raux, D. Ramachandran, and A. Black. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 404â413, 2013.
10 | {
"id": "1704.06567"
} |
1901.02860 | Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context | Transformers have a potential of learning longer-term dependency, but are
limited by a fixed-length context in the setting of language modeling. We
propose a novel neural architecture Transformer-XL that enables learning
dependency beyond a fixed length without disrupting temporal coherence. It
consists of a segment-level recurrence mechanism and a novel positional
encoding scheme. Our method not only enables capturing longer-term dependency,
but also resolves the context fragmentation problem. As a result,
Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer
than vanilla Transformers, achieves better performance on both short and long
sequences, and is up to 1,800+ times faster than vanilla Transformers during
evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity
to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion
Word, and 54.5 on Penn Treebank (without finetuning). When trained only on
WikiText-103, Transformer-XL manages to generate reasonably coherent, novel
text articles with thousands of tokens. Our code, pretrained models, and
hyperparameters are available in both Tensorflow and PyTorch. | http://arxiv.org/pdf/1901.02860 | Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov | cs.LG, cs.CL, stat.ML | ACL 2019 long paper. Code and pretrained models are available at
https://github.com/kimiyoung/transformer-xl | null | cs.LG | 20190109 | 20190602 | 9 1 0 2
n u J 2 ] G L . s c [
3 v 0 6 8 2 0 . 1 0 9 1 : v i X r a
# Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
Zihang Daiâ12, Zhilin Yangâ12, Yiming Yang1, Jaime Carbonell1, Quoc V. Le2, Ruslan Salakhutdinov1 1Carnegie Mellon University, 2Google Brain {dzihang,zhiliny,yiming,jgc,rsalakhu}@cs.cmu.edu, qvl@google.com
# Abstract
Transformers have a potential of learning longer-term dependency, but are limited by a ï¬xed-length context in the setting of language modeling. We propose a novel neural ar- chitecture Transformer-XL that enables learn- ing dependency beyond a ï¬xed length with- out disrupting temporal coherence. It con- sists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context frag- mentation problem. As a result, Transformer- XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Trans- formers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of- the-art results of bpc/perplexity to 0.99 on en- wiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without ï¬netuning). When trained only on WikiText-103, Transformer-XL man- ages to generate reasonably coherent, novel text articles with thousands of tokens. Our code, pretrained models, and hyperparameters are available in both Tensorï¬ow and PyTorch1.
1
# 1 Introduction
Language modeling is among the important prob- lems that require modeling long-term dependency, with successful applications such as unsupervised pretraining (Dai and Le, 2015; Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018). How- ever, it has been a challenge to equip neural networks with the capability to model long-term dependency in sequential data. Recurrent neu- ral networks (RNNs), in particular Long Short-
Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997), have been a standard solu- tion to language modeling and obtained strong results on multiple benchmarks. Despite the wide adaption, RNNs are difï¬cult to optimize due to gradient vanishing and explosion (Hochre- iter et al., 2001), and the introduction of gat- ing in LSTMs and the gradient clipping tech- nique (Graves, 2013) might not be sufï¬cient to fully address this issue. Empirically, previous work has found that LSTM language models use 200 context words on average (Khandelwal et al., 2018), indicating room for further improvement.
On the other hand, the direct connections be- tween long-distance word pairs baked in atten- tion mechanisms might ease optimization and en- able the learning of long-term dependency (Bah- danau et al., 2014; Vaswani et al., 2017). Re- cently, Al-Rfou et al. (2018) designed a set of aux- iliary losses to train deep Transformer networks for character-level language modeling, which out- perform LSTMs by a large margin. Despite the success, the LM training in Al-Rfou et al. (2018) is performed on separated ï¬xed-length segments of a few hundred characters, without any informa- tion ï¬ow across segments. As a consequence of the ï¬xed context length, the model cannot capture any longer-term dependency beyond the prede- ï¬ned context length. In addition, the ï¬xed-length segments are created by selecting a consecutive chunk of symbols without respecting the sentence or any other semantic boundary. Hence, the model lacks necessary contextual information needed to well predict the ï¬rst few symbols, leading to inef- ï¬cient optimization and inferior performance. We refer to this problem as context fragmentation.
âEqual contribution. Order determined by swapping the one in Yang et al. (2017).
# 1https://github.com/kimiyoung/
transformer-xl
To address the aforementioned limitations of ï¬xed-length contexts, we propose a new architec- ture called Transformer-XL (meaning extra long). We introduce the notion of recurrence into our
deep self-attention network. In particular, instead of computing the hidden states from scratch for each new segment, we reuse the hidden states ob- tained in previous segments. The reused hidden states serve as memory for the current segment, which builds up a recurrent connection between the segments. As a result, modeling very long- term dependency becomes possible because in- formation can be propagated through the recur- rent connections. Meanwhile, passing informa- tion from the previous segment can also resolve the problem of context fragmentation. More im- portantly, we show the necessity of using relative positional encodings rather than absolute ones, in order to enable state reuse without causing tem- poral confusion. Hence, as an additional techni- cal contribution, we introduce a simple but more effective relative positional encoding formulation that generalizes to attention lengths longer than the one observed during training.
Transformer-XL obtained strong results on ï¬ve datasets, varying from word-level to character- level language modeling. Transformer-XL is also able to generate relatively coherent long text arti- cles with thousands of tokens (see Appendix E), trained on only 100M tokens.
Our main technical contributions include intro- ducing the notion of recurrence in a purely self- attentive model and deriving a novel positional en- coding scheme. These two techniques form a com- plete set of solutions, as any one of them alone does not address the issue of ï¬xed-length con- texts. Transformer-XL is the ï¬rst self-attention model that achieves substantially better results than RNNs on both character-level and word-level language modeling.
# 2 Related Work
In the last few years, the ï¬eld of language mod- eling has witnessed many signiï¬cant advances, including but not limited to devising novel ar- chitectures to better encode the context (Bengio et al., 2003; Mikolov et al., 2010; Merity et al., 2016; Al-Rfou et al., 2018), improving regulariza- tion and optimization algorithms (Gal and Ghahra- mani, 2016) , speeding up the Softmax computa- tion (Grave et al., 2016a) , and enriching the output distribution family (Yang et al., 2017).
To capture the long-range context in language modeling, a line of work directly feeds a repre- sentation of the wider context into the network
as an additional input. Existing works range from ones where context representations are man- ually deï¬ned (Mikolov and Zweig, 2012; Ji et al., 2015; Wang and Cho, 2015) to others that rely on document-level topics learned from data (Dieng et al., 2016; Wang et al., 2017).
More broadly, in generic sequence modeling, how to capture long-term dependency has been a long-standing research problem. From this per- spective, since the ubiquitous adaption of LSTM, many efforts have been spent on relieving the vanishing gradient problem, including better ini- tialization (Le et al., 2015), additional loss sig- nal (Trinh et al., 2018), augmented memory struc- ture (Ke et al., 2018) and others that modify the in- ternal architecture of RNNs to ease the optimiza- tion (Wu et al., 2016; Li et al., 2018). Different from them, our work is based on the Transformer architecture and shows that language modeling as a real-world task beneï¬ts from the ability to learn longer-term dependency.
# 3 Model
Given a corpus of tokens x = (x1,..., x7), the ask of language modeling is to estimate the joint probability P(x), which is often auto-regressively actorized as P(x) = |], P(a | x<;). With the actorization, the problem reduces to estimating each conditional factor. In this work, we stick to the standard neural approach to modeling the con- ditional probability. Specifically, a trainable neu- ral network is used to encode the context x<; into a fixed size hidden state, which is multiplied with the word embeddings to obtain the logits. The log- its are then fed into the Softmax function, yielding a categorical probability distribution over the next oken.
# 3.1 Vanilla Transformer Language Models
In order to apply Transformer or self-attention to language modeling, the central problem is how to train a Transformer to effectively encode an arbi- trarily long context into a ï¬xed size representation. Given inï¬nite memory and computation, a sim- ple solution would be to process the entire con- text sequence using an unconditional Transformer decoder, similar to a feed-forward neural network. However, this is usually infeasible with the limited resource in practice.
One feasible but crude approximation is to split the entire corpus into shorter segments of man-
(a) Train phase. (b) Evaluation phase.
fe) fe} fe} fe) fe} Oo oO ° O° Oo Oo > D O° fe) Oo Oo
Figure 1: Illustration of the vanilla model with a segment length 4.
ageable sizes, and only train the model within each segment, ignoring all contextual information from previous segments. This is the idea adopted by Al-Rfou et al. (2018). We call it the vanilla model and visualize it in Fig. 1a. Under this training paradigm, information never ï¬ows across segments in either the forward or backward pass. There are two critical limitations of using a ï¬xed- length context. First, the largest possible depen- dency length is upper bounded by the segment length, which is a few hundred on character-level language modeling (Al-Rfou et al., 2018). There- fore, although the self-attention mechanism is less affected by the vanishing gradient problem com- pared to RNNs, the vanilla model is not able to fully exploit this optimization advantage. Second, though it is possible to use padding to respect the sentence or other semantic boundaries, in practice it has been standard practice to simply chunk long text into ï¬xed-length segments due to improved efï¬ciency (Peters et al., 2018; Devlin et al., 2018; Al-Rfou et al., 2018). However, simply chunking a sequence into ï¬xed-length segments will lead to the context fragmentation problem as discussed in Section 1.
# 3.2 Segment-Level Recurrence with State Reuse
To address the limitations of using a ï¬xed-length context, we propose to introduce a recurrence mechanism to the Transformer architecture. Dur- ing training, the hidden state sequence computed for the previous segment is ï¬xed and cached to be reused as an extended context when the model processes the next new segment, as shown in Fig. 2a. Although the gradient still remains within a segment, this additional input allows the network to exploit information in the history, leading to an ability of modeling longer-term dependency and avoiding context fragmentation. Formally, let the two consecutive segments of length L be sÏ = [xÏ,1, · · · , xÏ,L] and sÏ +1 = [xÏ +1,1, · · · , xÏ +1,L] respectively. Denoting the n-th layer hidden state sequence produced for the Ï -th segment sÏ by Ï â RLÃd, where d is the hidden dimension. hn Then, the n-th layer hidden state for segment sÏ +1 is produced (schematically) as follows,
byyt = [SG(h2") oh?s1] , aha kha. vea = bet Wy hep We bet Wo, h?,, = Transformer-Layer (q741,k741,V741) -
the vanilla model also consumes a segment of the same length as in training, but only makes one prediction at the last position. Then, at the next step, the segment is shifted to the right by only one position, and the new segment has to be processed all from scratch. As shown in Fig. 1b, this procedure ensures that each prediction utilizes the longest possible con- text exposed during training, and also relieves con- text fragmentation issue encountered in training. However, this evaluation procedure is extremely expensive. We will show that our proposed archi- tecture is able to substantially improve the evalua- tion speed.
where the function SG(-) stands for stop-gradient, the notation [h,, o h,] indicates the concatenation of two hidden sequences along the length dimen- sion, and W. denotes model parameters. Com- pared to the standard Transformer, the critical dif- erence lies in that the key k?,, and value v7), , are conditioned on the extended context hy i and hence hââ! cached from the previous segment. We emphasize this particular design by the green paths in Fig. 2a.
With this recurrence mechanism applied to ev- ery two consecutive segments of a corpus, it es- sentially creates a segment-level recurrence in the hidden states. As a result, the effective context be- ing utilized can go way beyond just two segments. However, notice that the recurrent dependency be- tween hn shifts one layer downwards
(a) Training phase. (b) Evaluation phase.
Figure 2: Illustration of the Transformer-XL model with a segment length 4.
per-segment, which differs from the same-layer recurrence in conventional RNN-LMs. Conse- quently, the largest possible dependency length grows linearly w.r.t. the number of layers as well as the segment length, i.e., O(N Ã L), as vi- sualized by the shaded area in Fig. 2b. This is analogous to truncated BPTT (Mikolov et al., 2010), a technique developed for training RNN- LMs. However, different from truncated BPTT, our method caches a sequence of hidden states in- stead of the last one, and should be applied to- gether with the relative positional encoding tech- nique described in Section 3.3.
Besides achieving extra long context and re- solving fragmentation, another beneï¬t that comes with the recurrence scheme is signiï¬cantly faster evaluation. Speciï¬cally, during evaluation, the representations from the previous segments can be reused instead of being computed from scratch as in the case of the vanilla model. In our ex- periments on enwiki8, Transformer-XL is up to 1,800+ times faster than the vanilla model during evaluation (see Section 4).
der to reuse the hidden states. That is, how can we keep the positional information coherent when we reuse the states? Recall that, in the standard Transformer, the information of sequence order is provided by a set of positional encodings, denoted as U â RLmaxÃd, where the i-th row Ui corre- sponds to the i-th absolute position within a seg- ment and Lmax prescribes the maximum possible length to be modeled. Then, the actual input to the Transformer is the element-wise addition of the word embeddings and the positional encodings. If we simply adapt this positional encoding to our recurrence mechanism, the hidden state sequence would be computed schematically by hÏ +1 = f (hÏ , EsÏ +1 + U1:L) hÏ = f (hÏ â1, EsÏ + U1:L), where EsÏ â RLÃd is the word embedding se- quence of sÏ , and f represents a transformation function. Notice that, both EsÏ and EsÏ +1 are as- sociated with the same positional encoding U1:L. As a result, the model has no information to dis- tinguish the positional difference between xÏ,j and xÏ +1,j for any j = 1, . . . , L, resulting in a sheer performance loss.
Finally, notice that the recurrence scheme does not need to be restricted to only the previous seg- ment. In theory, we can cache as many previous segments as the GPU memory allows, and reuse all of them as the extra context when processing the current segment. Thus, we can cache a prede- ï¬ned length-M old hidden states spanning (pos- sibly) multiple segments, and refer to them as the Ï â RM Ãd, due to a clear connection to memory mn the memory augmented neural networks (Graves et al., 2014; Weston et al., 2014). In our experi- ments, we set M equal to the segment length dur- ing training, and increase it by multiple times dur- ing evaluation.
# 3.3 Relative Positional Encodings
While we found the idea presented in the pre- vious subsection very appealing, there is a cru- cial technical challenge we havenât solved in or-
In order to avoid this failure mode, the funda- mental idea is to only encode the relative posi- tional information in the hidden states. Concep- tually, the positional encoding gives the model a temporal clue or âbiasâ about how information should be gathered, i.e., where to attend. For the same purpose, instead of incorporating bias stati- cally into the initial embedding, one can inject the same information into the attention score of each layer. More importantly, it is more intuitive and generalizable to deï¬ne the temporal bias in a rela- tive manner. For instance, when a query vector qÏ,i attends on the key vectors kÏ,â¤i, it does not need to know the absolute position of each key vector to identify the temporal order of the segment. In- stead, it sufï¬ces to know the relative distance be- tween each key vector kÏ,j and itself qÏ,i, i.e. i â j. Practically, one can create a set of relative posi-
tional encodings R â RLmaxÃd, where the i-th row Ri indicates a relative distance of i between two positions. By injecting the relative distance dy- namically into the attention score, the query vector can easily distinguish the representations of xÏ,j and xÏ +1,j from their different distances, making the state reuse mechanism feasible. Meanwhile, we wonât lose any temporal information, as the ab- solute position can be recovered recursively from relative distances.
Previously, the idea of relative positional encod- ings has been explored in the context of machine translation (Shaw et al., 2018) and music gener- ation (Huang et al., 2018). Here, we offer a dif- ferent derivation, arriving at a new form of rel- ative positional encodings, which not only has a one-to-one correspondence to its absolute coun- terpart but also enjoys much better generalization empirically (see Section 4). Firstly, in the standard Transformer (Vaswani et al., 2017), the attention score between query qi and key vector kj within the same segment can be decomposed as
AM = E],Wy WiEx, +E1,Wy WU; (a) (b) +U] Wy WiEs, +U) Wy WiU;. (c) (a)
Following the idea of only relying on rela- tive positional information, we propose to re- parameterize the four terms as follows
AS) = E],Wy Wi,cEx, +E1,Wy Wi,rRi; (a) (b) + u! Wi.eEx, tu! We,rRij . (c) (d)
⢠The ï¬rst change we make is to replace all ap- pearances of the absolute positional embedding Uj for computing key vectors in term (b) and (d) with its relative counterpart Riâj. This es- sentially reï¬ects the prior that only the relative distance matters for where to attend. Note that R is a sinusoid encoding matrix (Vaswani et al., 2017) without learnable parameters.
e Secondly, we introduce a trainable parameter u ⬠R¢ to replace the query U; Wy in term (c). In this case, since the query vector is the same for all query positions, it suggests that the attentive bias towards different words should re- main the same regardless of the query position. With a similar reasoning, a trainable parameter v ⬠R¢ is added to substitute U; Wy in term (d).
⢠Finally, we deliberately separate the two weight matrices Wk,E and Wk,R for producing the content-based key vectors and location-based key vectors respectively.
Under the new parameterization, each term has an intuitive meaning: term (a) represents content- term (b) captures a content- based addressing, term (c) governs a dependent positional bias, global content bias, and (d) encodes a global po- sitional bias.
In comparison, the formulation in Shaw et al. (2018) only has terms (a) and (b), dropping the two bias terms (c) and (d). Moreover, Shaw et al. (2018) merge the multiplication WkR into a sin- gle trainable matrix ËR, which abandons the induc- tive bias built into the original sinusoid positional encoding (Vaswani et al., 2017). In contrast, our relative positional embedding R adapts the sinu- soid formulation. As a beneï¬t of the inductive bias, a model trained on a memory of some certain length can automatically generalize to a memory several times longer during evaluation.
Equipping the recurrence mechanism with our proposed relative positional embedding, we ï¬nally arrive at the Transformer-XL architecture. For completeness, we summarize the computational procedure for a N -layer Transformer-XL with a single attention head here. For n = 1, . . . , N :
h"-? = [SG@m?"') oh?1] an kive = be Wy be Wie be Wt n n Tyan n Tyr Aig =Oni Keg +471 Wr rRi-j tulkagy + v! WE rRi-; â= Masked-Softmax(A7)v7 a T n oâ =LayerNorm(Linear(aâ) + h?~') h? = Positionwise-Feed-Forward(o7 )
with h0 := EsÏ deï¬ned as the word embed- Ï ding sequence. In addition, it is worth mention- ing that a naive way to compute A requires com- puting Wn k,RRiâj for all pairs (i, j), whose cost is quadratic w.r.t. the sequence length. How- ever, noticing that the value of i â j only ranges from zero to the sequence length, we show a sim- ple computation procedure in Appendix B, which reduces the cost to be linear w.r.t. the sequence length.
# 4 Experiments
# 4.1 Main Results
We apply Transformer-XL to a variety of datasets on both word-level and character-level language
Model #Param PPL Grave et al. (2016b) - LSTM - 48.7 Bai et al. (2018) - TCN - 45.2 Dauphin et al. (2016) - GCNN-8 - 44.9 Grave et al. (2016b) - LSTM + Neural cache - 40.8 Dauphin et al. (2016) - GCNN-14 - 37.2 Merity et al. (2018) -QRNN 151M 33.0 Rae et al. (2018) - Hebbian + Cache - 29.9 Ours - Transformer-XL Standard 151M 24.0 Baevski and Auli (2018) - Adaptive Input® 247M â 20.5 Ours - Transformer-XL Large 257M (183
Table 1: Comparison with state-of-the-art results on WikiText-103. ° indicates contemporary work.
Model #Param bpc Ha et al. (2016) - LN HyperNetworks Chung et al. (2016) - LN HM-LSTM Zilly et al. (2016) - RHN Mujika et al. (2017) - FS-LSTM-4 Krause et al. (2016) - Large mLSTM Knol (2017) - cmix v13 Al-Rfou et al. (2018) - 12L Transformer Ours - 12L Transformer-XL 27M 1.34 35M 1.32 46M 1.27 47M 1.25 46M 1.24 1.23 44M 1.11 41M 1.06 - Al-Rfou et al. (2018) - 64L Transformer Ours - 18L Transformer-XL Ours - 24L Transformer-XL 235M 1.06 88M 1.03 277M 0.99
Table 2: Comparison with state-of-the-art results on enwik8.
modeling to have a comparison with state-of-the- art systems, including WikiText-103 (Merity et al., 2016), enwik8 (LLC, 2009), text8 (LLC, 2009), One Billion Word (Chelba et al., 2013), and Penn Treebank (Mikolov and Zweig, 2012).
WikiText-103 is the largest available word-level language modeling benchmark with long-term de- pendency. It contains 103M training tokens from 28K articles, with an average length of 3.6K to- kens per article, which allows testing the abil- ity of long-term dependency modeling. We set the attention length to 384 during training and 1600 during evaluation. We adopted adaptive soft- max and input representations (Baevski and Auli, 2018; Grave et al., 2016a). As shown in Table 1, Transformer-XL reduces the previous state-of-the- art (SoTA) perplexity from 20.5 to 18.3, which demonstrates the superiority of the Transformer- XL architecture.
The dataset enwik8 contains 100M bytes of un- processed Wikipedia text. We compare our ar- chitecture with the previous results in Table 2. Under the model size constraint, the 12-layer Transformer-XL achieves a new SoTA result, out- performing the 12-layer vanilla Transformer from Al-Rfou et al. (2018) by 0.05, while both Trans-
Model #Param bpc Cooijmans et al. (2016) - BN-LSTM Chung et al. (2016) - LN HM-LSTM Zilly et al. (2016) - RHN Krause et al. (2016) - Large mLSTM Al-Rfou et al. (2018) - 12L Transformer 1.36 35M 1.29 45M 1.27 45M 1.27 44M 1.18 - Al-Rfou et al. (2018) - 64L Transformer Ours - 24L Transformer-XL 235M 1.13 277M 1.08
Table 3: Comparison with state-of-the-art results on text8.
Model #Param PPL Shazeer et al. (2014) - Sparse Non-Negative 33B. 52.9 Chelba et al. (2013) - RNN-1024 + 9 Gram 20B 51.3 Kuchaiev and Ginsburg (2017) - G-LSTM-2 - 36.0 Dauphin et al. (2016) - GCNN- 14 bottleneck - 31.9 Jozefowicz et al. (2016) - LSTM 1.8B 30.6 Jozefowicz et al. (2016) - LSTM + CNN Input} 1.04B 30.0 Shazeer et al. (2017) - Low-Budget MoE ~5B 34.1 Shazeer et al. (2017) - High-Budget MoE ~5B 28.0 Shazeer et al. (2018) - Mesh Tensorflow 4.9B 24.0 Baevski and Auli (2018) - Adaptive Input® 0.46B 24.1 Baevski and Auli (2018) - Adaptive Input® 1.0B 23.7 Ours - Transformer-XL Base 0.46B 23.5 Ours - Transformer-XL Large 08B 21.8
Table 4: Comparison with state-of-the-art results on One Bil- lion Word. ° indicates contemporary work.
former variants have a large margin over conven- tional RNN-based models. Notably, our 12-layer architecture achieves the same result as the 64- layer network from Al-Rfou et al. (2018), using only 17% of the parameter budget. In order to see whether better performances can be obtained by increasing the model size, we train 18-layer and 24-layer Transformer-XLs with increased model sizes. With the attention length 784 during train- ing and 3,800 during evaluation, we obtained a new SoTA result and our method is the ï¬rst to break through 1.0 on widely-studied character- level benchmarks. Different from Al-Rfou et al. (2018), Transformer-XL does not need any auxil- iary losses, and thus all beneï¬ts are credited to a better architecture.
Similar to but different from enwik8, text8 con- tains 100M processed Wikipedia characters cre- ated by lowering case the text and removing any character other than the 26 letters a through z, and space. Due to the similarity, we simply adapt the best model and the same hyper-parameters on en- wik8 to text8 without further tuning. The compari- son with previous methods is summarized in Table 3. Again, Transformer-XL achieves the new SoTA result with a clear margin.
Model #Param PPL Inan et al. (2016) - Tied Variational LSTM Zilly et al. (2016) - Variational RHN Zoph and Le (2016) - NAS Cell Merity et al. (2017) - AWD-LSTM Pham et al. (2018) - Efï¬cient NAS Liu et al. (2018) - Differentiable NAS Yang et al. (2017) - AWD-LSTM-MoS Melis et al. (2018) - Dropout tuning 24M 73.2 23M 65.4 25M 64.0 24M 58.8 24M 58.6 23M 56.1 22M 55.97 24M 55.3 Ours - Transformer-XL 24M 54.52 Merity et al. (2017) - AWD-LSTM+Finetuneâ Yang et al. (2017) - MoS+Finetuneâ 24M 57.3 22M 54.44
Table 5: Comparison with state-of-the-art results on Penn Treebank. â indicates using two-step ï¬netuning.
One Billion Word does not preserve any long- term dependency because sentences have been shufï¬ed. Consequently, this dataset mainly tests the ability of modeling only short-term depen- dency. The comparison between Transformer-XL and the other methods is shown in Table 4. Al- though Transformer-XL is mainly designed to bet- ter capture longer-term dependency, it dramati- cally improves the single-model SoTA from 23.7 to 21.8. Speciï¬cally, Transformer-XL signiï¬- cantly outperforms a contemporary method using vanilla Transformers (Baevski and Auli, 2018), suggesting the advantage of Transformer-XL is generalizable to modeling short sequences.
We also report the results on word-level Penn Treebank in Table 5. Similar to AWD-LSTM (Merity et al., 2017), we apply variational dropout and weight average to Transformer-XL. With proper regularization, Transformer-XL achieves a new SoTA result among models without two-step ï¬netuning. Penn Treebank has only 1M training tokens, which implies that Transformer-XL also generalizes well even on small datasets.
# 4.2 Ablation Study
We conduct two sets of ablation studies to exam- ine the effects of two proposed techniques used in Transformer-XL: the recurrence mechanism and the new positional encoding scheme.
The ï¬rst study is performed on WikiText-103, which requires modeling long-term dependency. The results are reported in Table 6. Among the compared encoding schemes, Shaw et al. (2018) is relative, while Vaswani et al. (2017) and Al-Rfou et al. (2018) are absolute. âFullâ and âhalfâ losses refer to applying a cross entropy loss to all or the recent half positions in the segment. We found
that absolute encodings only work well with half losses because half losses exclude positions with very short attention lengths during training for bet- ter generalization. Table 6 shows that both the recurrence mechanism and our encoding scheme are necessary to achieve the best performance, as well as generalizing to longer attention sequences during evaluation time. Although the backprop- agation length during training is only 128, with the two techniques the attention length can be in- creased to 640 at test time. In the standard setting with 151M parameters, the perplexity decreases as the attention length increases.
Since the recurrence mechanism costs addi- tional memory, we also compare Transformer-XL with baselines under the same GPU memory con- straints. As shown in Table 10 in Appendix A, despite using a shorter backpropagation length, Transformer-XL remains superior to the baselines. The second study targets at isolating the ef- fects of resolving the context fragmentation prob- lem from the beneï¬t of capturing longer context length. In order to achieve this goal, we deliber- ately choose a dataset that does not require long- term dependency, so that any improvement from establishing the recurrence can be attributed to solving the context fragmentation. Speciï¬cally, we perform this controlled experiment on the One Billion Word dataset, which can only beneï¬t from removing the context fragmentation. We train a 20-layer Transformer-XL with â¼0.3B parame- ters for 400K steps. As shown in Table 7, using segment-level recurrence substantially improves performance even when long-term dependency is not needed, which is consistent with our previous discussion that the recurrence mechanism resolves the context fragmentation problem. Moreover, our relative positional encodings is also superior to Shaw et al. (2018) on short sequences.
# 4.3 Relative Effective Context Length
Khandelwal et al. (2018) proposed a method to evaluate the Effective Context Length (ECL) of a sequence model. ECL is the longest length to which increasing the context span would lead to a gain more than a threshold. However, ECL ig- nores the fact that it is harder to get improve- ment when a model already achieves a lower per- plexity using only a shorter context, and thus it is not suitable for fair comparison among mul- tiple models. We instead propose a new metric
Remark Recurrence Encoding Loss PPLinit PPLbest Attn Len Transformer-XL (128M) v Ours Fu 27.02 26.77 500 - v Shaw et al. (2018) Fu 27.94 27.94 256 - v Ours Half 28.69 28.33 460 - x Ours Fu 29.59 29.02 260 - x Ours Half 30.10 30.10 120 - x Shaw et al. (2018) Fu 29.75 29.75 120 - x Shaw et al. (2018) Half 30.50 30.50 120 - x Vaswani et al. (2017) â Half 30.97 30.97 120 Transformer (128M)* x Al-Rfou et al. (2018) â Half 31.16 31.16 120 23.09 640 Transformer-XL (151M) v Ours Fu 23.43 23.16 450 23.35 300
Table 6: Ablation study on WikiText-103. For the ï¬rst two blocks, we use a slightly smaller model (128M parameters). â indicates that the corresponding row is reduced to the same setting as the Transformer network in (Al-Rfou et al., 2018), except that two auxiliary losses are not implemented in our experiments. âPPL initâ refers to using the same length as training. âPPL bestâ indicates the perplexity obtained by using the optimal length. âAttn Lenâ is the shortest possible attention length during evaluation to achieve the corresponding result (PPL best). Increasing the attention length during evaluation improves performance only when our positional encoding is used. The âTransformer-XL (151M)â setting uses a standard parameter budget as previous work (Merity et al., 2018), where we observe a similar effect when increasing the attention length during evaluation.
Method PPL Ours With Shaw et al. (2018) encodings Without recurrence 25.2 25.7 27.1
Attn Len How much Al-Rfou et al. (2018) is slower 3,800 2,800 1,800 800 1,874x 1,409x 773x 363x
Table 7: Ablation study on One Billion Word, a dataset with- out long-term dependency.
Table 9: Slowdown in terms of running time during evalua- tion. Evaluation is based on per-token time on one GPU.
Model r = 0.1 r = 0.5 r = 1.0 Transformer-XL 151M QRNN LSTM Transformer-XL 128M - use Shaw et al. (2018) encoding - remove recurrence Transformer 900 500 400 700 400 300 128 800 400 300 600 400 300 128 700 300 200 500 300 300 128
erage with r = 0.1. The RECL of Transformer- XL is 80% and 450% longer than recurrent net- works and Transformer respectively. Both the re- currence mechanism and our positional encodings contribute to a longer RECL. This further substan- tiates our argument that Transformer-XL is able to model longer-term dependency.
Table 8: Relative effective context length (RECL) compari- son. See text for the deï¬nition of RECL and r. The ï¬rst three models and the last four models are compared as two model groups when we calculate RECL (RECL is computed on a model group rather than a single model). Each group has the same parameter budget.
# 4.4 Generated Text
called Relative Effective Context Length (RECL). RECL is deï¬ned on a model group instead of a single model, and the gain of a long context is measure by the relative improvement over the best short context model. As such, the model group shares the same baseline to enable fair compari- son. RECL also has a parameter r, which means constraining the comparison on top-r hard exam- ples. See Appedix C for more details about RECL. As shown in Table 8, Transformer-XL manages to model dependency of 900 words long on av-
Trained only on WikiText-103 which is medium- sized, Transformer-XL is already able to generate relatively coherent articles with thousands of to- kens without manual cherry picking, despite mi- nor ï¬aws. Please refer to Appendix E for samples.
# 4.5 Evaluation Speed
Finally, we compare the evaluation speed of our model with the vanilla Transformer model (Al- Rfou et al., 2018). As shown in Table 9, due to the state reuse scheme, Transformer-XL achieves an up to 1,874 times speedup during evaluation.
# 5 Conclusions
Transformer-XL obtains strong perplexity results, models longer-term dependency than RNNs and Transformer, achieves substantial speedup during evaluation, and is able to generate coherent text articles. We envision interesting applications of Transformer-XL in the ï¬elds of text generation, unsupervised feature learning, image and speech modeling.
Acknowledgments ZD and YY were supported in part by National Science Foundation (NSF) under the grant IIS- 1546329 and by the DOE-Ofï¬ce of Science un- der the grant ASCR #KJ040201. ZY and RS were supported in part by the Ofï¬ce of Naval Research grant N000141812861, the NSF grant IIS1763562, the Nvidia fellowship, and the Siebel scholarship.
# References
Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. 2018. Character-level lan- guage modeling with deeper self-attention. arXiv preprint arXiv:1808.04444.
Alexei Baevski and Michael Auli. 2018. Adaptive in- put representations for neural language modeling. arXiv preprint arXiv:1809.10853.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly arXiv preprint learning to align and translate. arXiv:1409.0473.
Shaojie Bai, J Zico Kolter, and Vladlen Koltun. 2018. An empirical evaluation of generic convolu- tional and recurrent networks for sequence model- ing. arXiv preprint arXiv:1803.01271.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of machine learning research, 3(Feb):1137â1155.
Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2013. One billion word benchmark for measur- ing progress in statistical language modeling. arXiv preprint arXiv:1312.3005.
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2016. Hierarchical multiscale recurrent neural net- works. arXiv preprint arXiv:1609.01704.
Tim Cooijmans, Nicolas Ballas, César Laurent, ÃaËglar Re- arXiv preprint Gülçehre, current batch normalization. arXiv:1603.09025. and Aaron Courville. 2016.
Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural informa- tion processing systems, pages 3079â3087.
Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2016. Language modeling with arXiv preprint gated convolutional networks. arXiv:1612.08083.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.
Adji B Dieng, Chong Wang, Jianfeng Gao, and John Paisley. 2016. Topicrnn: A recurrent neural net- work with long-range semantic dependency. arXiv preprint arXiv:1611.01702.
Yarin Gal and Zoubin Ghahramani. 2016. A theoret- ically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems, pages 1019â1027.
Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. 2016a. Efï¬cient arXiv preprint softmax approximation for gpus. arXiv:1609.04309.
Edouard Grave, Armand Joulin, Improving neural and Nicolas language arXiv preprint Usunier. 2016b. models with a continuous cache. arXiv:1612.04426.
Alex Graves. 2013. Generating sequences with arXiv preprint recurrent neural networks. arXiv:1308.0850.
Alex Graves, Greg Wayne, and Ivo Danihelka. arXiv preprint 2014. Neural turing machines. arXiv:1410.5401.
David Ha, Andrew Dai, and Quoc V Le. 2016. Hyper- networks. arXiv preprint arXiv:1609.09106.
Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, Jür- gen Schmidhuber, et al. 2001. Gradient ï¬ow in re- current nets: the difï¬culty of learning long-term de- pendencies.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Neural computation, Long short-term memory. 9(8):1735â1780.
Jakob Uszkoreit, Noam Shazeer, Curtis Hawthorne, An- drew M Dai, Matthew D Hoffman, and Douglas Eck. 2018. An improved relative self-attention mecha- nism for transformer with application to music gen- eration. arXiv preprint arXiv:1809.04281.
Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying word vectors and word classiï¬ers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462.
Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. 2015. Document context lan- guage models. arXiv preprint arXiv:1511.03962.
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Exploring arXiv preprint Shazeer, and Yonghui Wu. 2016. the limits of language modeling. arXiv:1602.02410.
Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099.
Sekitoshi Kanai, Yasuhiro Fujiwara, Yuki Yamanaka, and Shuichi Adachi. 2018. Sigsoftmax: Reanal- arXiv preprint ysis of the softmax bottleneck. arXiv:1805.10829.
Nan Rosemary Ke, Anirudh Goyal ALIAS PARTH GOYAL, Olexa Bilaniuk, Jonathan Binas, Michael C Mozer, Chris Pal, and Yoshua Ben- gio. 2018. Sparse attentive backtracking: Temporal credit assignment through reminding. In Advances in Neural Information Processing Systems, pages 7650â7661.
Urvashi Khandelwal, He He, Peng Qi, and Dan Ju- rafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. arXiv preprint arXiv:1805.04623.
Bryon Knol. 2017. cmix v13. http://www. byronknoll.com/cmix.html.
Jan Koutnik, Klaus Greff, Faustino Gomez, and Juer- gen Schmidhuber. 2014. A clockwork rnn. arXiv preprint arXiv:1402.3511.
Ben Krause, Liang Lu, Iain Murray, and Steve Renals. 2016. Multiplicative lstm for sequence modelling. arXiv preprint arXiv:1609.07959.
Oleksii Kuchaiev and Boris Ginsburg. 2017. Factor- arXiv preprint ization tricks for lstm networks. arXiv:1703.10722.
Quoc V Le, Navdeep Jaitly, and Geoffrey E Hin- ton. 2015. A simple way to initialize recurrent networks of rectiï¬ed linear units. arXiv preprint arXiv:1504.00941.
Shuai Li, Wanqing Li, Chris Cook, Ce Zhu, and Yanbo Gao. 2018. Independently recurrent neural network (indrnn): Building a longer and deeper rnn. In Pro- ceedings of the IEEE Conference on Computer Vi- sion and Pattern Recognition, pages 5457â5466.
Hanxiao Liu, Karen Simonyan, and Yiming Yang. 2018. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055.
MultiMedia LLC. 2009. Large text compression benchmark.
Gábor Melis, Charles Blundell, Tomáš KoËcisk`y, Karl Moritz Hermann, Chris Dyer, and Phil Blun- som. 2018. Pushing the bounds of dropout. arXiv preprint arXiv:1805.09208.
Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimizing lstm lan- guage models. arXiv preprint arXiv:1708.02182.
Stephen Merity, Nitish Shirish Keskar, and Richard language arXiv preprint Socher. 2018. An analysis of neural modeling at multiple scales. arXiv:1803.08240.
Stephen Merity, Caiming Xiong, James Bradbury, and Pointer sentinel mixture Richard Socher. 2016. models. arXiv preprint arXiv:1609.07843.
Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and MarcâAurelio Ranzato. 2014. Learning longer memory in recurrent neural net- works. arXiv preprint arXiv:1412.7753.
Tomáš Mikolov, Martin Karaï¬Ã¡t, Lukáš Burget, Jan ËCernock`y, and Sanjeev Khudanpur. 2010. Recur- In rent neural network based language model. Eleventh Annual Conference of the International Speech Communication Association.
Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. SLT, 12(234-239):8.
Frederic Morin and Yoshua Bengio. 2005. Hierarchi- cal probabilistic neural network language model. In Aistats, volume 5, pages 246â252. Citeseer.
Asier Mujika, Florian Meier, and Angelika Steger. 2017. Fast-slow recurrent neural networks. In Ad- vances in Neural Information Processing Systems, pages 5915â5924.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. Understanding the exploding gradient prob- lem. CoRR, abs/1211.5063.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365.
Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. 2018. Efï¬cient neural architec- ture search via parameter sharing. arXiv preprint arXiv:1802.03268.
Oï¬r Press and Lior Wolf. 2016. Using the output arXiv embedding to improve language models. preprint arXiv:1608.05859.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training. URL https://s3- us-west-2. amazonaws. com/openai-assets/research- covers/languageunsupervised/language under- standing paper. pdf.
Jack W Rae, Chris Dyer, Peter Dayan, and Tim- Fast parametric learn- othy P Lillicrap. 2018. ing with activation memorization. arXiv preprint arXiv:1803.10049.
Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position represen- tations. arXiv preprint arXiv:1803.02155.
Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, et al. 2018. Mesh-tensorï¬ow: Deep learning In Advances in Neural Infor- for supercomputers. mation Processing Systems, pages 10434â10443.
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538.
Noam Shazeer, Joris Pelemans, and Ciprian Chelba. 2014. Skip-gram language modeling using sparse non-negative matrix probability estimation. arXiv preprint arXiv:1412.1454.
Trieu H Trinh, Andrew M Dai, Thang Luong, and Quoc V Le. 2018. Learning longer-term dependen- cies in rnns with auxiliary losses. arXiv preprint arXiv:1803.00144.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998â6008.
Tian Wang and Kyunghyun Cho. 2015. language modelling. Larger- arXiv preprint context arXiv:1511.03729.
Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, and Lawrence Carin. 2017. Topic compositional neural language model. arXiv preprint arXiv:1712.09783.
Jason Weston, Sumit Chopra, and Antoine Bor- arXiv preprint des. 2014. Memory networks. arXiv:1410.3916.
Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan R Salakhutdinov. 2016. On multiplicative integration with recurrent neural net- works. In Advances in neural information process- ing systems, pages 2856â2864.
Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W Cohen. 2017. Breaking the softmax bot- tleneck: A high-rank rnn language model. arXiv preprint arXiv:1711.03953.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. 2014. arXiv preprint arXiv:1409.2329.
Julian Georg Zilly, Rupesh Kumar Srivastava, and Jürgen Schmidhuber. 2016. arXiv preprint Jan KoutnÃk, Recurrent highway networks. arXiv:1607.03474.
Barret Zoph and Quoc V Le. 2016. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578.
# A Ablation Study with Memory Constraints
Backprop Len Recurrence Encoding Lossâ pplxbest pplxinit Attn Len 128 v Ours Full 26.77 27.02 500 128 v Ours Partial 28.33 28.69 460 176 x Ours Full 27.98 28.43 400 172 x Ours Partial 28.83 28.83 120
Table 10: Ablation study on WikiText-103 with the same GPU memory constraints.
Table 10 compares Transformer-XL with baseline under the same memory budget. Transformer-XL still outperforms the baseline even with a shorter backprop length.
# B Efï¬cient Computation of the Attention with Relative Positional Embedding
As we discussed in section 3.3, the naive way of computing the Wk,RRiâj for all pairs (i, j) is subject to a quadratic cost. Here, we present a simple method with only a linear cost. Firstly, notice that the relative distance i â j can only be integer from 0 to M + L â 1, where M and L are the memory length and segment length respectively. Hence, the rows of the matrix
Rhe-1 [Wi.rRarezâi]! [Ree] [iweer r| Qs Wir = : RI [Wi,2Ri]! Ro [Wi,rRo]" C R(M+L) xd
consist of all possible vector outputs of Wk,RRiâj for any (i, j). Note that we have deï¬ned Q in a reversed order, i.e., Qk = Wk,RRM +Lâ1âk, to make further discussion easier.
Next, we collect the term (b) for all possible i, j into the following L Ã (M + L) matrix,
[ 0 Wi,rRu ooo 4 Wr,rRo 0 wee 0 | a) WerRuyi a) Wi,rRa a] WrRo + 0 Lap-1WkerRuec-1 -++ gi-yWerRusc-1 9f_1{WerRo-1 +--+ 9}_{W,rRo 9 Qr-1 +++ 4 Qurqrâ1 0 oo 0 ] _ GQr-2 ++ gf} Qugrâ-2 GW Quryn-1 0 | Lai-1Q0 +) 9f-4Qu 9 @f_jQuair + a Quit
B =
Then, we further deï¬ne
WQ0 + W@Qu @Que +s Gg Qust-1 B=qQâ = GQ °° G@Qu GQue +) Gt Qnye-1 lat _.Qo âgi Qu gjQuai -: a) Quast
Now, it is easy to see an immediate relationship between B and B, where the i-th row of B is simply a left-shifted version of i-th row of B. Hence, the computation of B only requires a matrix multiplication qQ' to compute B and then a set of left-shifts.
Similarly, we can collect all term (d) for all possible i, j into another L Ã (M + L) matrix D,
[Qe vl Quryr-1 0 0 ] v'Qr-2 ++ v'Quyr-2 v'Quyr-1 +: 0 viQo oc v' Qu v' Quai. oe v' Quyr-1
Then, we can follow the same procedure to deï¬ne
d= [Qu]' = [fvo'Qo --) v' Qu v' Quai ++ v'Quyr-1]-
Again, each row of D is simply a left-shift version of d. Hence, the main computation cost comes from the matrix-vector multiplication d = Qu", which is not expensive any more.
# C Details About RECL
(a) Transformer-XL vs RNNs (b) Transformer-XL vs Baseline
100 Gain from increasing context length â"Transformer-XL oRNN «| \ ~_ ist o ââ 20] ee o seprere rae . To 1005200 200-300 300400 400~500 500-600 600-»700 700~»800 600 900 900-1000 Context length change
500 Gain from increasing context length ââââransformerXL TransformerXL with Shaw et al encoding 5 2000 TransformerXL w/o recurrence g 5 2 rood 2 S 2 s00 ° : ome: eee 100° 200 200-300 300400 400-500 500600 600-700 760800800 900 S00 ~ 1000 âContext length change
Figure 3: Visualizing unnormalized relative perplexity gains with r = 0.1.
45 Transformer-XL QRNN 40) LsT⢠oe 35 z 2, 3p _â âhoo 200 ce eT) #070 ~gOâ~GODâTo00 Context length
(a) Transformer-XL vs RNNs (b) Transformer-XL vs Baseline
30.5 â_ Transformer-XL 30.0] Transformer-XL with Shaw et al encoding Transformer-XL w/o recurrence 25 29.0 ge 28.5 x) as 20 26.5 ââââ ©» $99 09 sno ato soa a0 700 06300 F000 Context length
Figure 4: Perplexity vs context length.
In this section, we describe the details of the metric RECL. Let M = {m1, m2, · · · , mN } be a model group consisting of N models. Let li(c, t) denote the loss of model mi on the t-th token in the corpus with a context length c. Concretely, the loss can be written as
li(c, t) = â log Pmi(xt|xtâ1, · · · , xtâc)
where P,,,, is the probability distribution given by model m,;, and 2; is the t-th token in the corpus. Given a short context length c and a long context length câ such that câ > c, we can further define a baseline for each position t,
b(c, t) = N min i=1 li(c, t)
The relative loss of mi w.r.t. the model group M is written as
1 file.) = > min (0(c, t), li(eâ, t)) mee
The above equation uses the minimum loss of all models on the short length c as a baseline, and only losses smaller than the baseline will be effectively counted towards the relative loss. This enables fair
comparison between multiple models because all models with a long context length câ need to improve over the same baseline. Sometimes we only care about those positions where the baseline performs poorly (which means short-term dependency with context length c is not sufficient), so given a ratio parameter 1, we define the set T is the above equation as
T = top-r positions t with largest b(c, t)
The relative gain is subsequently deï¬ned as the relative perplexity reduction:
exp fi(c, c) â exp fi(c, 2) exp fi(c, c) gi(c, ¢)
Given a step size A, we then use an algorithm to find the RECL by thresholding the relative gain: 1. Set initial short context length c, and long context length c =c+A
2. Compute gi(c, câ). If gi(c, câ) < 0.01, return RECL = c. If gi(c,c!) > 0.01, sete =c',c =c+A and go to step 1.
In Figure 3, we visualize the unnormalized relative perplexity gains (exp fi(c, c) â exp fi(c, câ)) with various pairs of (c,câ) when r = 0.1. It is clear that Transformer-XL has a longer RECL compared to RNNs and other baselines because the relative gains are substantially larger.
For reference, we plot the perplexities with varying context lengths in Figure 4. The y-axis denotes the ânormalâ perplexity (not calibrated by baselines).
# D Attention Visualization
In this section, we provide some visualization of the attention learned by the SoTA model on the WikiText-103 validation set. Recall that, this model has 16 10-head transformer layers and relies on a memory of length 640.
Figure 5: Average attention over the previous 640 tokens, where each row corresponds to a attention head and each column corresponds to a relative location. There are totally 160 attention heads, and every 10 heads come from a single layer. Darker colors indicate higher values.
The ï¬rst visualization aims at revealing the overall trend of where the model is attending. Speciï¬cally, for each attention head of each layer, we average the attention distributions of all tokens in the validation set. This is shown in Fig. 5. As we can see, the overall trend is to focus more on the nearby tokens than the faraway ones. However, it is also very clear that some attention heads have a wider attention distribution over the entire memory span, notably the head 8 from layer 1, head 78 from layer 8, and the head 158 from layer 16.
Since we are focused on learning long-range dependency, we are especially interested in these heads with a wider attention span. Thus, in the second set of visualization, we pick the three notable heads mentioned above, and visualize their attention behavior for a randomly chosen position, as shown in Fig. 6. Here, we see three different patterns of wider attention:
⢠For the head 8 in the 1st layer, we see an almost uniform attention over the entire memory span. This is quite intuitive, as lower-level layers needs to screen the entire memory span to decide where to focus for higher-level layers
(b) Head 78 from layer 8.
Liga d tame diy agety ates 1
(c) Head 158 from layer 16.
Figure 6: Visualization of the three heads with a wide attention range. Each row corresponds to a target loca- tion/token and each column corresponds to a context location/token. Tokens in the memory that have top 20% attention values are highlighted in red.
⢠For the head 78 in the 8th layer (a middle-level layer), we see a very sparse attention pattern scattered in all ranges of the memory. Again, this well ï¬ts our intuition that as information accumulates, the network may focus on some particular position with special interests.
⢠For the head 158 in the 16th layer (i.e. the last layer), each target location (corresponding to each row) has its own distinct sparse focus, differing from head 78 where target locations largely share the same attentive location in memory. Meanwhile, the pattern is also different from the case of head 8, where a few locations are clearly attended more than others.
Finally, as we have discussed in section 3.3, the attention score can be decomposed into four intuitive terms. Here, we want to further investigate how these four terms contribute to the overall attention trend in Fig. 5. Since the term (c) represents the global content bias, i.e., the prior importance of each word regardless of the context, we will leave it out and focus on the terms (a), (b) and (d). So, for each term, we take the Softmax w.r.t. the memory span and average the resulted distribution of all tokens in the validation set. The results are visualized in Fig. 7:
⢠Since term (a) is fully content-based addressing, when averaging over all target words, the result is essentially uniform over the entire context, except for a few very close words, which are likely to be semantically similar to the target word.
⢠The overall trend of term (b) highly resembles that of the entire attention distribution in Fig. 5. It suggests that the global trend of focusing on the nearby context is largely contributed by this content- dependent positional bias.
⢠The overall trend of term (d) is also focusing more on nearby words. However, compared to the trend of term (b), it is clearly ï¬atter and biases towards a longer context.
(a) Term (a).
(b) Term (b).
(c) Term (d).
Figure 7: Visualization of the three terms in computing the attention score. Each row corresponds to a attention head and each column corresponds to a relative location.
# E Generated Text
In this section, we present some generated text from our best model trained the Wikitext-103 dataset. We seed the our Transformer-XL with a context of at most 512 consecutive tokens randomly sampled from the test set of Wikitext-103. Then, we run Transformer-XL to generate a pre-deï¬ned number of tokens (500 or 1,000 in our case). For each generation step, we ï¬rst ï¬nd the top-40 probabilities of the next-step distribution and sample from top-40 tokens based on the re-normalized distribution. To help reading, we detokenize the context, the generated text and the reference text. Three generated examples are shown in Tables 11, 12, and 13. Note that we do not perform any cherry picking and present the ï¬rst three examples we generate in the paper. In the text, â= text =â, â= = text = =â and â= = = text = = =â denote the Wikipedia page tile, section title and subsection title, respectively, due to the original data preprocessing procedure of Wikitext-103 (Merity et al., 2016).
As we can see, though only trained on 100M tokens, Transformer-XL is a strong model at generating long text articles, particularly in the following aspects:
Transformer-XL is able to structurally maintain the sectional arrangement of Wikipedia. ⢠Transformer-XL manages to semantically stay on the same topic throughout the course of genera-
tion.
Long-range references are common in the generated text. ⢠Transformer-XL often generates novel content that is not present in the training data.
For more detailed explanation of the interesting observations in each example, please refer to the corre- sponding caption.
Despite the overall excellence of the generation quality, the model can only perceive the seed context and hallucinate what to generate based on the limited knowledge (100M tokens only) it is trained on. As a result, the generated text sometimes looks clearly relevant but not close enough or to the point compared to what human writer would do. That said, we believe this issue is mostly a problem of limited training data size and could be alleviated by using a larger training set.
Context: Kershaw started the 2010 season by posting a 3.07 ERA in April, but did so by walking 22 batters in 29 innings. On May 4, he had his worst start of his career against the Milwaukee Brewers at Dodger Stadium, throwing just 57 pitches in 11 / 3 innings, while retiring only four of the 13 batters he faced â including the pitcher. He was booed loudly upon being pulled from the game. Kershaw said after the game, " I didnât give our team any kind of chance. Itâs just not a good feeling to let your teammates down, let everybody down. It stings, it hurts. I âve got to ï¬gure things out. " Kershaw rebounded his next start by pitching an 8 inning two-hitter and out-dueling the then undefeated Ubaldo Jiménez. He credited his control of the slider being the major turning point for him. Later in the season, he was suspended for ï¬ve games after hitting Aaron Rowand of the Giants with a pitch in a game on July 20. The incident occurred after both teams were given a warning following Giants ace Tim Lincecum hitting Matt Kemp earlier in the game. He threw his ï¬rst career complete game shutout on September 14, 2010 also against San Francisco and ï¬nished the season with a record of 13 â 10 and a 2.91 ERA in 32 starts, pitching 2041 / 3 innings and recording 212 strikeouts. = = = = 2011 season: 1st Cy Young Award = = = = After ï¬nishing the 2010 season strong, the Dodgers named Kershaw as the Opening Day Starter for the 2011 season. On May 29, he pitched the second complete- game shutout of his career, striking out 10 while winning a two-hitter against the Florida Marlins, 8 â 0; he also had two singles and an RBI, scoring twice in the game. He produced his third career shutout on June 20, a two-hit, 11-strikeout effort against the Detroit Tigers. Kershaw became the ï¬rst Dodgers starter to strike out the side in the 9th inning since Sandy Koufaxâs perfect game. In his next start, on June 26, Kershaw pitched another complete game (against the Los Angeles Angels of Anaheim ). He became the ï¬rst Dodger starter to have back-to-back complete game victories since Jeff Weaver in the 2005 season and the ï¬rst Dodger to have double-digit strikeouts in consecutive starts since Chan-Ho Park in the 2000 season. He was awarded the National League Player of the Week award for the week of June 20 â 26 as a result of those two starts. Midway through June, Kershaw had amassed 32 career victories
XL Generation: , tying him with Luis Castillo for the third best start total in the league and trail- ing only Johnny Bench and Eddie Mathews (37). The Dodgers signed Kershaw to a one-year deal on July 14, 2011, with a player option for 2012, in a deal worth just under $ 300,000. The Dodgers entered the 2011 season as the major league leaders, and Kershaw had the best record in the National League in saves (39), games started (78) and walks (120); he also had the lowest win / loss ratio in the league (1.87), and he had the worst on-base percentage (.349). He had the best ERA (2.00 ), most wins (16 ), or most shutouts (29) in his career (25), while leading the league in losses per nine innings pitched (4.01). Following the 2011 season, the Dodgers traded Kershaw to the Kansas City Royals on July 29 for pitcher Javier Torres. = = = Kansas City Royals = = = = = = = 2012 season = = = = During spring training, Kershaw played very well. He was selected to spring training as a relief pitcher for the Royals for the 2012 season. After an injury to closer Javier Vázquez, he was activated on April 29 to replace Matt Holliday in the Royals â starting rotation. In his only start with the Royals, on August 6, 2012, Kershaw struck out ï¬ve batters in seven innings pitched to help the Royals to their ï¬rst victory in franchise history. On September 27, 2012, it appeared Kershaw was going to pitch a complete game shutout against the Detroit Tigers, but did not manage to do so since the Tigers won 3 â 1. At the conclusion of the season, Kershaw was named Major League Baseballâs Most Valuable Player, was chosen to the All-Star Game at Busch Stadium and was named to the All-Star Game as the starting pitcher at shortstop. The Royals announced on February 4, 2013 that Kershaw would spend 2013 as starting pitcher, though he was expected to miss the season as a result of a shoulder injury. = = = = 2013 season = = = = On May 17, 2013, Kershaw sustained another back injury and did not start in August and October 2013. He appeared in 22 starts, all starts, ï¬nishing with a strikeout-to-walk ratio of 1.50 and a 2.91 ERA. He also had the third most strikeouts in the league: 10. On May 20, 2013, he
Reference: , a 3.15 ERA and 593 career strikeouts in 568.2 innings. According to the Elias Sports Bureau, Kershaw was the ï¬rst 23-year-old pitcher to have that many victories, an ERA that low and an average of more than one strikeout per inning since ERA became an ofï¬cial statistic in 1910. Kershaw was selected to the National League team for the 2011 Major League Baseball All-Star Game, his ï¬rst All-Star selection. In the month of July, Kershaw was 4 â 1 with a 2.02 ERA and NL-leading 45 strikeouts, earning him the National League Pitcher of the Month Award. On August 23, he struck out Matt Holliday of the St. Louis Cardinals for his 200th strikeout of the season and became the 10th Dodger pitcher to record back-to-back 200 strikeout seasons and the ï¬rst since Chan- Ho Park did it in the 2001 season. Kershaw ï¬nished the 2011 season by leading the NL with 21 wins, 248 strikeouts and a 2.28 ERA, winning the NL pitching Triple Crown, the ï¬rst Triple Crown winner since Jake Peavy of the 2007 San Diego Padres and the ï¬rst Dodger since Sandy Koufax won it in the 1966 season. Justin Verlander of the Detroit Tigers won the American League Triple Crown the same season, marking the ï¬rst major-league season since 1924 to feature Triple Crown-winning pitchers in both leagues. Kershawâs 21 wins were the most by a Dodger pitcher since Orel Hershiser won 23 during the 1988 season. His ERA was the lowest by a Dodger since Hershiserâs 2.03 in the 1985 season, his strikeouts were the most by a Dodger since Koufaxâs 317 in 1966 and his 233 1 / 3 innings pitched were the most since Chan Ho Park pitched 234 in 2001. Since 1965 when Koufax did it, Peavy and Kershaw are only two pitchers in the National League have led the league in wins, strikeouts, ERA, and WHIP (walks plus hits per inning pitched). Kershaw also became just the second <unk> to have a 240-plus strikeouts in a season before the age of 24, joining Vida Blue. After the season, Kershaw was awarded the Warren Spahn Award as the best left-handed pitcher in 2011, the Players Choice Award for Most Outstanding National League pitcher, the Gold Glove Award as the top ï¬elding pitcher in the NL and the Sporting News (TSN) National League Pitcher of the Year. He was additionally selected as the starting pitcher for the TSN NL All-Star Team. On November 17, he was honored with the National League Cy Young Award, making him the youngest Cy Young winner since Dwight Gooden
Table 11: Example 1 â 500 tokens generated by XL using a snippet from the Wikitext-103 test set as initial context. The sample is randomly generated without any cherry picking. Original Wikipedia page: https://en.wikipedia.org/wiki/Clayton_Kershaw There are many interesting observations from this example: ⢠Firstly, Kershaw never went to Royals in real life. Despite that, Transformer-XL stays on the fully imagined topic and keeps
hallucinating the experience of Kershaw in Royals across the generated text.
⢠Secondly, notice that XL correctly tracks the chronological order from 2011 to 2012 and to the ï¬nally 2013 season in the section titles.
⢠In addition, notice that Transformer-XL accurately uses the the phrase âanother back injuryâ in the 2013 season paragraph, since it has talked about one earlier injure in the 2012 season. This shows again Transformer-XLâs ability of capturing long-term dependency.
Context: = = Distribution = = Species range across the Neotropics from Mexico in the north to Bolivia, Paraguay, and southern Brazil in the south. According to <unk> and coauthors, three species are found in Mexico, four in Central America, and 62 in South America. Three species are present in the Caribbean â two in Trinidad and Tobago, along the southern edge of the region, and one in Haiti. = = Habitat and ecology = = <unk> includes both large trees and small acaulescent palms which occupy a number of different ecological niches. Dense stands of some of the larger species are conspicuous elements on the landscape, while smaller species are found in both in the forest understorey and in savannas. Disturbance has been implicated in the formation of vegetation dominated by large <unk> species. In seasonally dry Amazonian forests the density of large adult A. <unk> palms was correlated with canopy openness; the species also dominates savannas formed by repeated forest ï¬res in Trinidad and Tobago. <unk> speciosa forms pure stands in many parts of Brazil where natural forest vegetation has been cleared. Similarly, stands of A. <unk> in Bahia, Brazil (which are cultivated for <unk> ï¬bre) are managed using ï¬re â the seedlings survive cutting and burning, and are able to dominate burned forest patches. The fruit are dispersed by animals; fruit which are not dispersed frequently suffer seed predation by <unk> beetles. Certain species of <unk> have been mentioned as examples of " anachronistic " species which are adapted for dispersal by now-extinct Pleistocene megafauna. On <unk> Island, <unk>, in the Brazilian Amazon, <unk> <unk> fruit were consumed by tapirs, collared peccaries, deer and primates. Rodents, including agoutis, fed upon the fruit and, as the fruit availability declined, they fed on the seeds. Other dispersers of <unk> fruit include Crested <unk> which consume the fruit and disperse the seeds of A. <unk> in the Brazilian Pantanal. = = Uses = = <unk> species have a long history of human utilisation. <unk> <unk> <unk> seeds have been found in archaeological sites in Colombia dating back to 9000 BP. A variety of species remain important sources of edible oil, thatch, edible seeds and ï¬bre. The leaves of <unk> <unk> and A. <unk> are used extensively for thatching. Several species are oil palms, with A. speciosa among the most important economically. Products extracted from A. speciosa were reported to support over 300,000 households in the Brazilian state of Maranhão in 2005, and in 1985 it was estimated to support over 450,000 households throughout the Brazil. <unk> ï¬bres, extracted from the leaf bases of A. <unk>, are commercially important, and generated about US $
XL Generation: 700,000 in 2013 on the black market. In 2004 the plant was reported to be responsible for a net proï¬t of around US $ 3 million in Brazil. A 2012 study found A. speciosa to cause agricultural production at least 2 â 3 times greater than those achieved without prior knowledge of land management. = The Tale of Mrs. Tittlemouse = The Tale of Mrs. Tittlemouse is a childrenâs book written and illustrated by H. M. Potter. It was ï¬rst published in February 1910 by Frederick Warne & Co. and included illustrations by Frederick Warne. Mrs. Tittlemouse was intended to be a childrenâs book, rather than a musical, and the tale describes her attempts to ï¬nd her own home. Mrs. Tittlemouse is an average kitten and the tale follows a mischievous rabbit named Mr. Tittlemouse who tries to kill her in an attempt to get a ride on a horse. Potter later wrote of the rabbit and her attempts to kill her, " If [ she ] were a fox, I should like to have been able to show her the way. " Potterâs work was inspired by The Tale of Little Black Sambo and her two nieces, Petunia and Petunia. It was the ï¬rst book in the Potter family to be published in a hardback format, and was also the ï¬rst book in the Peter Rabbit series. The tale was released on 13 November 1910 and was accompanied by a greeting card by Potterâs brother-in-law, Frederick Warne. The tale was well received by critics and children, and sold over 600,000 copies in its ï¬rst few years. In the ï¬rst year of its publication, Potterâs The Tale of Mrs. Tittlemouse won childrenâs book awards in England, America and Australia. It was selected as a " recommended reading " by children in the US and Canada. = = Plot summary = = The tale opens with the pet rabbit Mrs. Tittlemouse wandering through a forest in a small litter of four. He is shot and badly injured by a deer after she attempts to escape. Mrs. Tittlemouse decides that she wants to ï¬nd her own home, be- cause she is eager to go on her own. She goes alone to the farm where she makes a little money by selling a few seeds and building a small cabin in the woods. She is approached by a wealthy hunter named Mr. Tittlemouse, who tries to kill her but Mrs. Tittlemouse kills him by stufï¬ng a rope into his nose and killing him. She is rescued by Mr. Tittlemouseâs wife Ruth, but Mrs. Tittlemouse then leaves the woodland with the baby. When she is spotted by
Reference: 20 million in annual income to Brazilian farmers in 1996. = The Heart of Ezra Greer = The Heart of Ezra Greer is a 1917 American silent drama ï¬lm produced by the Thanhouser Company and directed by Emile <unk>. The ï¬lm focuses on Ezra Greer, a successful middle-aged man who searches for his college age daughter, Mary. The wayward Mary was romanced and abandoned by Jack <unk>, later bearing his child. Once Ezra becomes broke he ï¬nds employment as the valet for Jack <unk>. After Jackâs engagement to a cabaret girl, Mary becomes upset and leaves her child at Jackâs home. Contrary to Jackâs wishes, Ezra keeps the child and Jack ultimately reveals that the child is his own. Ezra convinces Jack to make things right and Ezra convinces the cabaret girl to leave Jack. After a carriage accident in which the baby is injured, Ezra and Jack rush to the hospital and ï¬nd Mary as a nurse crying over the child. The ï¬lm ends with the marriage of Jack and Mary. The ï¬lm was released by Pathé on October 7, 1917. The ï¬lm was the ï¬nal release from Thanhouser and was deemed to be an average ï¬lm by most reviewers. Criticism for the ï¬lm hinged on far-fetched coincidences to drive the plot. The ï¬lm is presumed lost. = = Plot = = The ï¬lm follows Ezra Greer, a middle-aged man who has worked hard since his youth. He cares deeply for his motherless daughter, Mary, but was unable to attend the annual commencement at her co-educational college. He awaits for her to return from college, but Mary leaves with her romantic interest, Jack <unk>. On promise of marriage and wealth, Mary is romanced and gives birth to a fatherless child. Without word from his daughter, Ezra resigns from his job and attempts to seek her out and ï¬nds a poor motherless child, Marie. With Ezraâs money exhausted he seeks employment and ï¬nds it as the valet of Jack. One day, Mary seeks an announcement of Jackâs engagement to a cabaret girl known as " The Baby Vamp ". Bitter over the prospect of her childâs future, she leaves the child at Jackâs home during his absence with a note. Jack orders Ezra to take the baby to an orphanage, but Marie begs Ezra to keep him. After continually seeing the child, Jack is overcome with remorse and explains to Ezra and seeks his advice. Not knowing he was making the case for his own daughter, Ezra convinces Jack to seek out Mary and forget the Baby Vamp. The Baby
Table 12: Example 2 â 500 tokens generated by XL using a snippet from the Wikitext-103 test set as initial context. The sample is randomly generated without any cherry picking. Original Wikipedia page: https://en.wikipedia.org/wiki/The_Tale_of_Mrs._Tittlemouse. This example exhibit some additional interesting properties of Transformer-XL: ⢠After ï¬nishing the last paragraph of the seed context, both the reference and generated text start a new topic (i.e., Wikipedia page), as marked by the single â= title =â line. This suggests the model has the ability of identifying the end of a topic / page, and randomly starting with a new topic.
⢠Even more interestingly, a newly-started page is on a book called âThe Tale of Mrs. Tittlemouseâ. Transformer-XL manages to copy the same book title and some related information from the training set, but hallucinates novel content of the book. This demonstrates a degree of generalization instead of memorization. Please refer to the original book content at the Wikipedia page.
Context: = Battle of Dürenstein = The Battle of Dürenstein (also known as the Battle of <unk>, Battle of <unk> and Battle of <unk>; German: <unk> bei <unk> ), on 11 November 1805 was an engagement in the Napoleonic Wars during the War of the Third Coalition. Dürenstein (modern <unk>) is located in the <unk> Valley, on the River Danube, 73 kilometers (45 mi) upstream from Vienna, Austria. The river makes a crescent-shaped curve between <unk> and nearby Krems an der Donau and the battle was fought in the ï¬ood plain between the river and the mountains. At Dürenstein a combined force of Russian and Austrian troops trapped a French division commanded by Théodore Maxime Gazan. The French division was part of the newly created VIII Corps, the so-called Corps Mortier, under command of Ãdouard Mortier. In pursuing the Austrian retreat from Bavaria, Mortier had over-extended his three divisions along the north bank of the Danube. Mikhail <unk> Kutuzov, commander of the Coalition force, enticed Mortier to send Gazanâs division into a trap and French troops were caught in a valley between two Russian columns. They were rescued by the timely arrival of a second division, under command of Pierre Dupont de l âÃtang. The battle extended well into the night. Both sides claimed victory. The French lost more than a third of their participants, and Gazanâs division experienced over 40 percent losses. The Austrians and Russians also had heavy <unk> to 16 <unk> perhaps the most signiï¬cant was the death in action of Johann Heinrich von Schmitt, one of Austriaâs most capable chiefs of staff. The battle was fought three weeks after the Austrian capitulation at Ulm and three weeks before the Russo-Austrian defeat at the Battle of Austerlitz. After Austerlitz Austria withdrew from the war. The French demanded a high indemnity and Francis II abdicated as Holy Roman Emperor, releasing the German states from their allegiance to the Holy Roman Empire. = = Background = = In a series of conï¬icts from 1803-15 known as the Napoleonic Wars, various European powers formed ï¬ve coalitions against the First French Empire. Like the wars sparked by the French Revolution (1789 ), these further revolutionized the formation, organization and training of European armies and led to an unprecedented militarization, mainly due to mass conscription. Under the leadership of Napoleon, French power rose quickly as the Grande Armée conquered most of Europe, and collapsed rapidly
XL Generation: after the French conquest of Italy. This victory marked the beginning of the Second Coalition. Napoleonâs rapid advance caused Austria â Russia, Britain and Russia to make peace negotiations. The Russian army under Prince Mikhail Mikhailovich Mikhailovich Chaffee, commander of the Imperial Russian Army in Switzerland, was able to drive off the Austrians in the Battle of Stockach (1805) and to take Pressburg (modern Ã3 Austria) from the Austrians. At an early stage the Austro-Hungarian ï¬eet had been damaged in a storm off Cape Matapan; this allowed the Allies to launch a full-scale invasion of Italy in Febru- ary 1805. The Battle of Austerlitz was the decisive French victory against Napoleon and the largest naval battle in history, involving more modern-day European navies. The French military situation worsened as Napoleon faced several threats related to his newly formed Army of Europe, particularly Russia itself, which was now on the verge of collapse. The Russian general and friend of Napoleon, Alexander, had been dissatisï¬ed with the conduct of the Austri- ans; he was still in conï¬ict with Emperor Napoleon, the French Republicâs king, who had declared war on Austria during the summer of 1804. With the war over, Napoleon decided to launch a second invasion of Italy in October 1805. = = Prelude = = In July 1805, the French 1st Army entered southern Italy. The army, under the command of Marshal Marmont, were reinforced by a few battalions of infantry under Claude General Auguste de Marmont at the town of Philippsburg and another battalion at Belluno. On 17 September 1805, the army marched from Belluno towards Krems. By 29 September, they had reached Belluno and con- ducted its advance against a small Austrian force. By 31 September, the whole force had been reinforced by a brigade from the Army of Tyrol under the com- mand of Pierre Augereau. The Austrians were now under the command of Mar- shal Jean Victor Marie Moreau, a member of the Directory. Moreau had taken command of the Austrian invasion force in the spring of 1805. His command included the VI Corps commanded by Jean Baptiste Drouet de Ney and the VI Corps commanded by Generals Jean Victor Marie Moreau and Joseph Souham. Neyâs corps consisted of the III. Corps and VI. Corps, which consisted of the III Corps and VI. Corps, located in the Austrian Netherlands, was commanded by Friedrich Joseph, Count Baillet de Latour. Moreauâs army consisted of six divisions and several associated brigades. = = Aftermath = = = = = First Coalition forces = = = On 9 October 1805 the French Army of the Danube was attacked by an Aus- trian army under Archduke Charles at the Battle of Austerlitz. Although Charles and Charles had not had much time to regroup, on 10 October, he launched his attack on the Polish forces under Friedrich Joseph, Count of Lauenburg. Af- ter three days, Charlesâ army captured Lauenburg. The French forces pursued the Austrians to the Silesian border, where they encountered strong Austrian resistance. These conï¬icts forced the Austrians to retreat into Tyrol and Aus- tria agreed to a truce. The Austrian army, commanded by Wenzel Anton Karl, Count of Merveldt, was reduced to around 10,000 men. It was initially planned that Archduke Charles would launch a counter-attack against the French army on the same day, as Napoleon had hoped, but this was not carried out. On 25 October, Merveldt left Styria for Tyrol. On the same day, Austria launched its new offensive against the French at Ulm. Charles withdrew his army from the region for a third time at the Battle of Elchingen, under the overall command of the Austrian generals, Ferdinand and Friedrich Wilhelm of Jülich-Cleves-Berg. To prevent Archduke Charles from escaping from the battleï¬eld, the comman- der of the Habsburg army, Archduke Charles, planned to occupy the fortress Linz; instead, he decided to force Franz von Hipper to surrender the city. How- ever, as Charles moved to the south, Moreau arrived on the scene with additional soldiers â including the entire Imperial Guard â and defeated the Austrians at the Battle of Hohenlinden on 28 October. The loss of Linz resulted in Austriaâs complete defeat at Hohenlinden. In the meantime, the French Army of Obser- vation and Preparedness was reorganized into the Army of the Danube under Feldzeugmeister (Colonel-General) Friedrich Freiherr von Hotze. The army was composed of the I, IV, VI, VI, VII, VIII and IX Corps. With reinforcements from Italy and France, it formed new battalions, companies, and squadrons in the Austrian army. On 17 November 1804, at the Battle of Jena-Auerstadt the Army of Silesia and the Army of Silesia joined forces, but by the time that the
Reference: after the disastrous invasion of Russia in 1812. Napoleonâs empire ultimately suffered complete military defeat in the 1813 â 14 campaigns, resulting in the restoration of the Bourbon monarchy in France. Although Napoleon made a spectacular return in 1815, known as the Hundred Days, his defeat at the Battle of Waterloo, the pursuit of his army and himself, his abdication and banishment to the Island of Saint Helena concluded the Napoleonic Wars. = = Danube campaign = = From 1803-06 the Third Coalition fought the First French Empire and its client states (see table at right ). Although several naval battles determined control of the seas, the outcome of the war was decided on the continent, predominantly in two major land operations in the Danube valley: the Ulm campaign in the upper Danube and the Vienna campaign, in the middle Danube valley. Political conï¬icts in Vienna delayed Austriaâs entry into the Third Coalition until 1805. After hostilities of the War of the Second Coalition ended in 1801, Archduke <unk> emperorâs <unk> advantage of the subsequent years of peace to develop a military restructuring plan. He carefully put this plan into effect beginning in 1803 â 04, but implementation was incomplete in 1805 when Karl Mack, Lieu- tenant Field Marshal and Quartermaster-General of the Army, implemented his own restructuring. Mack bypassed Charles â methodical approach. Occurring in the ï¬eld, Mackâs plan also undermined the overall command and organiza- tional structure. Regardless, Mack sent an enthusiastic report to Vienna on the militaryâs readiness. Furthermore, after misreading Napoleonâs maneuvers in Württemberg, Mack also reported to Vienna on the weakness of French dispo- sitions. His reports convinced the war party advising the emperor, Francis II, to enter the conï¬ict against France, despite Charles â own advice to the con- trary. Responding to the report and rampant anti-French fever in Vienna, Fran- cis dismissed Charles from his post as generalissimo and appointed his <unk> brother-in-law, Archduke Ferdinand, as commander. The inexperienced Ferdi- nand was a poor choice of replacement for the capable Charles, having neither maturity nor aptitude for the assignment. Although Ferdinand retained nomi- nal command, day-to-day decisions were placed in the hands of Mack, equally ill-suited for such an important assignment. When Mack was wounded early in the campaign, he was unable to take full charge of the army. Consequently, command further devolved to Lieutenant Field Marshal Karl Philipp, Prince of Schwarzenberg, an able cavalry ofï¬cer but inexperienced in the command of such a large army. = = = Road to Ulm = = = The campaign in the upper Danube valley began in October, with several clashes in Swabia. Near the Bavarian town of Wertingen, 40 kilometers (25 mi) north- west of Augsburg, on 8 October the 1st Regiment of dragoons, part of Muratâs Reserve Cavalry Corps, and grenadiers of Lannes â V Corps surprised an Aus- trian force half its size. The Austrians were arrayed in a line and unable to form their defensive squares quickly enough to protect themselves from the 4,000 dragoons and 8,000 grenadiers. Nearly 3,000 Austrians were captured and over 400 were killed or wounded. A day later, at another small town, <unk> south of the Danube <unk> French 59th Regiment of the Line stormed a bridge over the Danube and, humiliatingly, chased two large Austrian columns toward Ulm. The campaign was not entirely bad news for Vienna. At Haslach, Johann von Klenau arranged his 25,000 infantry and cavalry in a prime defensive position and, on 11 October, the overly conï¬dent General of Division Pierre Dupont de lâÃtang attacked Klenauâs force with fewer than 8,000 men. The French lost 1,500 men killed and wounded. Aside from taking the Imperial Eagles and <unk> of the 15th and 17th Dragoons, Klenauâs force also captured 900 men, 11 guns and 18 ammunition wagons. Klenauâs victory was a singular success. On 14 October Mack sent two columns out of Ulm in preparation for a breakout to the north: one under Johann Sigismund Riesch headed toward Elchingen to secure the bridge there, and the other under Franz von Werneck went north with most of the heavy artillery. Recognizing the opportunity, Marshal Michel Ney hurried the rest of his VI Corps forward to re-establish contact with Dupont, who was still north of the Danube. In a two-pronged attack Ney sent one division to the south of Elchingen on the right bank of the Danube. This division began the assault at Elchingen. At the same time another division crossed the river to the east and moved west against Rieschâs position. After clearing Austrian pickets from a bridge, the French attacked and captured a strategically located abbey at
French approached Vienna, the Prussians had already surrendered. As the Aus- trians did not want to allow the war to continue, they decided to abandon their territories in the north and move their army to the north and west, cutting off Charles from Vienna. The Battle of Warsaw was fought on 23 November 1805 between the French Army of the Danube and the Austrian Army of Styria in the vicinity of Warsaw and Pressburg (modern Trnava, Slovakia). At that time Habsburg forces
the top of the hill at bayonet point. The Austrian cavalry unsuccessfully tried to fend off the French, but the Austrian infantry broke and ran. In this engagement alone, the Austrians lost more than half their reserve artillery park, 6,000 (out of 8,000 total participants) dead, wounded or captured and four colors. Reischâs column also failed to destroy the bridges across the Danube. Napoleonâs light- ning campaign exposed the Austrian indecisive command structure and poor supply apparatus. Mack
Table 13: Example 3 â 1,000 tokens generated by XL using a snippet from the Wikitext-103 test set as initial context. The sample is randomly generated without any cherry picking. Original Wikipedia page: https://en.wikipedia.org/wiki/Battle_of_D%C3%BCrenstein. ⢠Although this example is signiï¬cantly longer, we can see that Transformer-XL is still able to stay on the same topic and
makes up non-existing stories about the Napoleon wars.
⢠Notably, from the second section on, the generated text correctly follows a ï¬ne-grained chronological order on the level of month and day to narrate events in 1805, except a mistake (1804 instead of 1805) near the end of the paragraph. To ease reading which we have highlighted all the date related phrases by magenta in the generation. | {
"id": "1611.01462"
} |
1812.11118 | Reconciling modern machine learning practice and the bias-variance trade-off | Breakthroughs in machine learning are rapidly changing science and society,
yet our fundamental understanding of this technology has lagged far behind.
Indeed, one of the central tenets of the field, the bias-variance trade-off,
appears to be at odds with the observed behavior of methods used in the modern
machine learning practice. The bias-variance trade-off implies that a model
should balance under-fitting and over-fitting: rich enough to express
underlying structure in data, simple enough to avoid fitting spurious patterns.
However, in the modern practice, very rich models such as neural networks are
trained to exactly fit (i.e., interpolate) the data. Classically, such models
would be considered over-fit, and yet they often obtain high accuracy on test
data. This apparent contradiction has raised questions about the mathematical
foundations of machine learning and their relevance to practitioners.
In this paper, we reconcile the classical understanding and the modern
practice within a unified performance curve. This "double descent" curve
subsumes the textbook U-shaped bias-variance trade-off curve by showing how
increasing model capacity beyond the point of interpolation results in improved
performance. We provide evidence for the existence and ubiquity of double
descent for a wide spectrum of models and datasets, and we posit a mechanism
for its emergence. This connection between the performance and the structure of
machine learning models delineates the limits of classical analyses, and has
implications for both the theory and practice of machine learning. | http://arxiv.org/pdf/1812.11118 | Mikhail Belkin, Daniel Hsu, Siyuan Ma, Soumik Mandal | stat.ML, cs.LG | null | null | stat.ML | 20181228 | 20190910 | 9 1 0 2
p e S 0 1 ] L M . t a t s [
2 v 8 1 1 1 1 . 2 1 8 1 : v i X r a
# Reconciling modern machine learning practice and the bias-variance trade-oï¬
Mikhail Belkina, Daniel Hsub, Siyuan Maa, and Soumik Mandala
aThe Ohio State University, Columbus, OH bColumbia University, New York, NY
February 3, 2022
# Abstract
Breakthroughs in machine learning are rapidly changing science and society, yet our fun- damental understanding of this technology has lagged far behind. Indeed, one of the central tenets of the ï¬eld, the bias-variance trade-oï¬, appears to be at odds with the observed behavior of methods used in the modern machine learning practice. The bias-variance trade-oï¬ implies that a model should balance under-ï¬tting and over-ï¬tting: rich enough to express underlying structure in data, simple enough to avoid ï¬tting spurious patterns. However, in the modern practice, very rich models such as neural networks are trained to exactly ï¬t (i.e., interpolate) the data. Classically, such models would be considered over-ï¬t, and yet they often obtain high accuracy on test data. This apparent contradiction has raised questions about the mathematical foundations of machine learning and their relevance to practitioners.
In this paper, we reconcile the classical understanding and the modern practice within a uniï¬ed performance curve. This âdouble descentâ curve subsumes the textbook U-shaped bias- variance trade-oï¬ curve by showing how increasing model capacity beyond the point of inter- polation results in improved performance. We provide evidence for the existence and ubiquity of double descent for a wide spectrum of models and datasets, and we posit a mechanism for its emergence. This connection between the performance and the structure of machine learning models delineates the limits of classical analyses, and has implications for both the theory and practice of machine learning.
E-mail: mbelkin@cse.ohio-state.edu, mandal.32@osu.edu
djhsu@cs.columbia.edu,
djhsu@cs.columbia.edu,
masi@cse.ohio-state.edu,
1
1
# Introduction
Machine learning has become key to important applications in science, technology and commerce. The focus of machine learning is on the problem of prediction: given a sample of training examples (x1, y1), . . . , (xn, yn) from Rd à R, we learn a predictor hn : Rd â R that is used to predict the label y of a new point x, unseen in training.
The predictor h, is commonly chosen from some function class H, such as neural networks with a certain architecture, using empirical risk minimization (ERM) and its variants. In ERM, the predictor is taken to be a function h ⬠H that minimizes the empirical (or training) risk i â &(h(2i), ys), where is a loss function, such as the squared loss f(y',y) = (y! â y)? for regression or zero-one loss ¢(y', y) = Lty-zy} for classification.
The goal of machine learning is to find h,, that performs well on new data, unseen in training. To study performance on new data (known as generalization) we typically assume the training examples are sampled randomly from a probability distribution P over R¢ x R, and evaluate hy on a new test example (x,y) drawn independently from P. The challenge stems from the mismatch between the goals of minimizing the empirical risk (the explicit goal of ERM algorithms, optimization) and minimizing the true (or test) risk Ey)~plé(h(x), y)| (the goal of machine learning).
Conventional wisdom in machine learning suggests controlling the capacity of the function class H based on the bias-variance trade-oï¬ by balancing under-ï¬tting and over-ï¬tting (cf., [17, 21]):
1. If H is too small, all predictors in H may under-ï¬t the training data (i.e., have large empirical risk) and hence predict poorly on new data.
2. If H is too large, the empirical risk minimizer may over-ï¬t spurious patterns in the training data resulting in poor accuracy on new examples (small empirical risk but large true risk).
The classical thinking is concerned with ï¬nding the âsweet spotâ between under-ï¬tting and over- ï¬tting. The control of the function class capacity may be explicit, via the choice of H (e.g., picking the neural network architecture), or it may be implicit, using regularization (e.g., early stopping). When a suitable balance is achieved, the performance of hn on the training data is said to generalize to the population P . This is summarized in the classical U-shaped risk curve, shown in Figure 1(a) that has been widely used to guide model selection and is even thought to describe aspects of human decision making [18]. The textbook corollary of this curve is that âa model with zero training error is overï¬t to the training data and will typically generalize poorlyâ [21, page 221], a view still widely accepted.
Yet, practitioners routinely use modern machine learning methods, such as large neural networks and other non-linear predictors that have very low or zero training risk. In spite of the high function class capacity and near-perfect ï¬t to training data, these predictors often give very accurate Indeed, this behavior has guided a best practice in deep learning for predictions on new data. choosing neural network architectures, speciï¬cally that the network should be large enough to permit eï¬ortless zero loss training (called interpolation) of the training data [34]. Moreover, in direct challenge to the bias-variance trade-oï¬ philosophy, recent empirical evidence indicates that neural networks and kernel machines trained to interpolate the training data obtain near-optimal test results even when the training data are corrupted with high levels of noise [42, 4].
The main ï¬nding of this work is a pattern for how performance on unseen data depends on model capacity and the mechanism underlying its emergence. This dependence, empirically witnessed with important model classes including neural networks and a range of datasets, is summarized in the âdouble descentâ risk curve shown in Figure 1(b). The curve subsumes the classical U-shaped risk curve from Figure 1(a) by extending it beyond the point of interpolation.
2
(a) (b)
# x
Figure 1: Curves for training risk (dashed line) and test risk (solid line). (a) The classical U-shaped risk curve arising from the bias-variance trade-oï¬. (b) The double descent risk curve, which incorporates the U-shaped risk curve (i.e., the âclassicalâ regime) together with the observed behavior from using high capacity function classes (i.e., the âmodernâ interpolating regime), sep- arated by the interpolation threshold. The predictors to the right of the interpolation threshold have zero training risk.
When function class capacity is below the âinterpolation thresholdâ, learned predictors exhibit the classical U-shaped curve from Figure 1(a). (In this paper, function class capacity is identiï¬ed with the number of parameters needed to specify a function within the class.) The bottom of the U is achieved at the sweet spot which balances the ï¬t to the training data and the susceptibility to over-ï¬tting: to the left of the sweet spot, predictors are under-ï¬t, and immediately to the right, predictors are over-ï¬t. When we increase the function class capacity high enough (e.g., by increasing the number of features or the size of the neural network architecture), the learned predictors achieve (near) perfect ï¬ts to the training dataâi.e., interpolation. Although the learned predictors obtained at the interpolation threshold typically have high risk, we show that increasing the function class capacity beyond this point leads to decreasing risk, typically going below the risk achieved at the sweet spot in the âclassicalâ regime.
All of the learned predictors to the right of the interpolation threshold ï¬t the training data perfectly and have zero empirical risk. So why should someâin particular, those from richer functions classesâhave lower test risk than others? The answer is that the capacity of the function class does not necessarily reï¬ect how well the predictor matches the inductive bias appropriate for the problem at hand. For the learning problems we consider (a range of real-world datasets as well as synthetic data), the inductive bias that seems appropriate is the regularity or smoothness of a function as measured by a certain function space norm. Choosing the smoothest function that perfectly ï¬ts observed data is a form of Occamâs razor: the simplest explanation compatible with the observations should be preferred (cf. [38, 6]). By considering larger function classes, which contain more candidate predictors compatible with the data, we are able to ï¬nd interpolating functions that have smaller norm and are thus âsimplerâ. Thus increasing function class capacity improves performance of classiï¬ers.
Related ideas have been considered in the context of margins theory [38, 2, 35], where a larger function class H may permit the discovery of a classiï¬er with a larger margin. While the margins theory can be used to study classiï¬cation, it does not apply to regression, and also does not pre- dict the second descent beyond the interpolation threshold. Recently, there has been an emerging recognition that certain interpolating predictors (not based on ERM) can indeed be provably sta- tistically optimal or near-optimal [3, 5], which is compatible with our empirical observations in the interpolating regime.
In the remainder of this article, we discuss empirical evidence for the double descent curve, the
3
mechanism for its emergence and conclude with some ï¬nal observations and parting thoughts.
# 2 Neural networks
In this section, we discuss the double descent risk curve in the context of neural networks.
Random Fourier features. We ï¬rst consider a popular class of non-linear parametric models called Random Fourier Features (RFF ) [30], which can be viewed as a class of two-layer neural networks with ï¬xed weights in the ï¬rst layer. The RFF model family HN with N (complex-valued) parameters consists of functions h : Rd â C of the form
N h(a) = Se axG(2; ve) where (x; v) :-= e¥ 1"), k=1
and the vectors v1, . . . , vN are sampled independently from the standard normal distribution in Rd. (We consider HN as a class of real-valued functions with 2N real-valued parameters by taking real and imaginary parts separately.) Note that HN is a randomized function class, but as N â â, the function class becomes a closer and closer approximation to the Reproducing Kernel Hilbert Space (RKHS) corresponding to the Gaussian kernel, denoted by Hâ. While it is possible to directly use Hâ (e.g., as is done with kernel machines [8]), the random classes HN are computationally attractive to use when the sample size n is large but the number of parameters N is small compared to n.
Our learning procedure using Hy is as follows. Given data (21, y1),..-, (an, Yn) from R¢ x R, we find the predictor hn,w ⬠Hn via ERM with squared loss. That is, we minimize the empirical risk objective 1h (h(a) â y;)* over all functions h ⬠Hy. When the minimizer is not unique as is always the case when N > n), we choose the minimizer whose coefficients (a1,...,ay) have he minimum f) norm. This choice of norm is intended as an approximation to the RKHS norm |h||a1.,, Which is generally difficult to compute for arbitrary functions in Hy. For problems with multiple outputs (e.g., multi-class classification), we use functions with vector-valued outputs and sum of the squared losses for each output.
In Figure 2} we show the test risk of the predictors learned using Hy on a subset of the popular data set of handwritten digits called MNIST. The same figure also shows the ¢2 norm of the function coefficients, as well as the training risk. We see that for small values of N, the test risk shows the classical U-shaped curve consistent with the bias-variance trade-off, with a peak occurring at the interpolation threshold N = n. Some statistical analyses of RFF suggest choosing N x \/nlogn o obtain good test risk guarantees [32].
The interpolation regime connected with modern practice is shown to the right of the interpo- lation threshold, with N ⥠n. The model class that achieves interpolation with fewest parameters (N = n random features) yields the least accurate predictor. (In fact, it has no predictive ability for classiï¬cation.) But as the number of features increases beyond n, the accuracy improves dra- matically, exceeding that of the predictor corresponding to the bottom of the U-shaped curve. The plot also shows that the predictor hn,â obtained from Hâ (the kernel machine) out-performs the predictors from HN for any ï¬nite N .
What structural mechanisms account for the double descent shape? When the number of eatures is much smaller then the sample size, N < n, classical statistical arguments imply that the raining risk is close to the test risk. Thus, for small NV, adding more features yields improvements in doth the training and test risks. However, as the number of features approaches n (the interpolation
4
Zero-one loss
Squared loss
88 1709 â@ RFF â@ RFF Min. norm solution fp, « Min. norm solution fp, « â (original kernel) â (original kernel) 100 g 2 2 15 3 % @ 10 fi 1 4 2 i?) 447 447 E 62 E 62 2 âe RFF z2 âe RFF â Min. norm solution fp, « â Min. norm solution fp, « 7 7 i?) 10 20 30 40 50 60 i?) 10 20 30 40 50 60 14 0.4 s c < 8 ® 0.2 id F E i?) 0.0 i?) 10 20 30 40 50 60 i?) 10 20 30 40 50 60 Number of Random Fourier Features (x10) (N) Number of Random Fourier Features (x10) (N)
Figure 2: Double descent risk curve for RFF model on MNIST. Test risks (log scale), coefficient £2 norms (log scale), and training risks of the RFF model predictors hy, learned on a subset of MNIST (n = 10+, 10 classes). The interpolation threshold is achieved at N = 104.
5
Figure 3: Plot of two univariate functions ï¬tted to 10 data points using Random ReLU features Ï(x; (v1, v2)) := max(v1x + v2, 0). The data points are shown in red circles. The ï¬tted function with N = 40 Random ReLU features is the blue dashed line; the coeï¬cient vectorâs norm (scaled by N ) is â 695. The ï¬tted function with N = 4000 Random ReLU features is the black solid line; the coeï¬cient vectorâs norm is â 159.
threshold), features not present or only weakly present in the data are forced to ï¬t the training data nearly perfectly. This results in classical over-ï¬tting as predicted by the bias-variance trade-oï¬ and prominently manifested at the peak of the curve, where the ï¬t becomes exact.
To the right of the interpolation threshold, all function classes are rich enough to achieve zero training risk. For the classes HN that we consider, there is no guarantee that the most regular, smallest norm predictor consistent with training data (namely hn,â, which is in Hâ) is contained in the class HN for any ï¬nite N . But increasing N allows us to construct progressively better approximations to that smallest norm function. Thus we expect to have learned predictors with largest norm at the interpolation threshold and for the norm of hn,N to decrease monotonically as N increases thus explaining the second descent segment of the curve. This is what we observe in Figure 2, and indeed hn,â has better accuracy than all hn,N for any ï¬nite N . Favoring small norm interpolating predictors turns out to be a powerful inductive bias on MNIST and other real and synthetic data sets [4]. For noiseless data, we make this claim mathematically precise in Appendix A.
Additional empirical evidence for the same double descent behavior using other data sets is presented in Appendix C.1. For instance, we demonstrate double descent for rectiï¬ed linear unit (ReLU) random feature models, a class of ReLU neural networks with a setting similar to that of RFF. The inductive bias corresponding to the larger number of features can be readily observed in a one-dimensional example in Figure 3. Although the ï¬tted function is non-smooth (piecewise linear) for any number of Random ReLU features, it appears smootherâwith smaller normâas the number of features is increased.
Finally, in Appendix C.4, we also describe a simple synthetic model, which can be regarded as a one-dimensional version of the RFF model, where we observe the same double descent behavior.
Neural networks and backpropagation. In general multilayer neural networks (beyond RFF or ReLU random feature models), a learning algorithm will tune all of the weights to ï¬t the training
6
data, typically using versions of stochastic gradient descent (SGD), with backpropagation to com- pute partial derivatives. This ï¬exibility increases the representational power of neural networks, but also makes ERM generally more diï¬cult to implement. Nevertheless, as shown in Figure 4, we observe that increasing the number of parameters in fully connected two-layer neural networks leads to a risk curve qualitatively similar to that observed with RFF models. That the test risk improves beyond the interpolation threshold is compatible with the conjectured âsmall normâ in- ductive biases of the common training algorithms for neural networks [20, 25]. We note that this transition from under- to over-parameterized regimes for neural networks was also previously ob- served by [7, 1, 27, 37]. In particular, [37] draws a connection to the physical phenomenon of âjammingâ in particle systems.
The computational complexity of ERM with neural networks makes the double descent risk curve difficult to observe. Indeed, in the classical under-parametrized regime (N <n), the non- convexity of the ERM optimization problem causes the behavior of local search-based heuristics, like SGD, to be highly sensitive to their initialization. Thus, if only suboptimal solutions are found for the ERM optimization problems, increasing the size of a neural network architecture may not always lead to a corresponding decrease in the training risk. This suboptimal behavior can lead to high variability in both the training and test risks that masks the double descent curve.
It is common to use neural networks with extremely large number of parameters [11]. But to achieve interpolation for a single output (regression or two class classiï¬cation) one expects to need at least as many parameters as there are data points. Moreover, if the prediction problem has more than one output (as in multi-class classiï¬cation), then the number of parameters needed should be multiplied by the number of outputs. This is indeed the case empirically for neural networks shown in Figure 4. Thus, for instance, data sets as large as ImageNet [33], which has â¼106 examples and â¼103 classes, may require networks with â¼109 parameters to achieve interpolation; this is larger than many neural network models for ImageNet [11]. In such cases, the classical regime of the U-shaped risk curve is more appropriate to understand generalization. For smaller data sets, these large neural networks would be ï¬rmly in the over-parametrized regime, and simply training to obtain zero training risk often results in good test performance [42].
Additional results with neural networks are given in Appendix C.3.
# 3 Decision trees and ensemble methods
Does the double descent risk curve manifest with other prediction methods besides neural networks? We give empirical evidence that the families of functions explored by boosting with decision trees and Random Forests also show similar generalization behavior as neural nets, both before and after the interpolation threshold.
AdaBoost and Random Forests have recently been investigated in the interpolation regime by [41] for classiï¬cation. In particular, they give empirical evidence that, when AdaBoost and Random Forests are used with maximally large (interpolating) decision trees, the ï¬exibility of the ï¬tting methods yield interpolating predictors that are more robust to noise in the training data than the predictors produced by rigid, non-interpolating methods (e.g., AdaBoost or Random Forests with shallow trees). This in turn is said to yield better generalization. The averaging of the (near) interpolating trees ensures that the resulting function is substantially smoother than any individual tree, which aligns with an inductive bias that is compatible with many real world problems.
We can understand these ï¬exible ï¬tting methods in the context of the double descent risk curve. Observe that the size of a decision tree (controlled by the number of leaves) is a natural way to parametrize the function class capacity: trees with only two leaves correspond to two-piecewise
7
Zero-one loss (%) Squared loss 3 10 40 100 300 800
Number of parameters/weights (x103)
Figure 4: Double descent risk curve for fully connected neural network on MNIST. Training and test risks of network with a single layer of H hidden units, learned on a subset of MNIST (n = 4·103, d = 784, K = 10 classes). The number of parameters is (d+1)·H +(H +1)·K. The interpolation threshold (black dotted line) is observed at n · K.
8
S fo) BK n n 2 ij 5 0.02 =} [ou (2p) 0.00 J 30 8 Ke) 20 o 5 3 10 N 0 10/1 1000/1 2000/1 2000/10 2000/20 Model parameters: N02" / Ntree
Figure 5: Double descent risk curve for random forests on MNIST. The double descent risk curve is observed for random forests with increasing model complexity trained on a subset of MNIST (n = 104, 10 classes). Its complexity is controlled by the number of trees Ntree and the maximum number of leaves allowed for each tree N max leaf .
9
constant functions with axis-aligned boundary, while trees with n leaves can interpolate n training examples. It is a classical observation that the U-shaped bias-variance trade-oï¬ curve manifests in many problems when the class capacity is considered this way [21]. (The interpolation threshold may be reached with fewer than n leaves in many cases, but n is clearly an upper bound.) To further enlarge the function class, we consider ensembles (averages) of several interpolating trees.1 So, beyond the interpolation threshold, we use the number of such trees to index the class capacity. When we view the risk curve as a function of class capacity deï¬ned in this hybrid fashion, we see the double descent curve appear just as with neural networks; see Figure 5 and Appendix D. We observe a similar phenomenon using L2-boosting [15, 10], another popular ensemble method; the results are reported in Appendix E.
# 4 Concluding thoughts
The double descent risk curve introduced in this paper reconciles the U-shaped curve predicted by the bias-variance trade-oï¬ and the observed behavior of rich models used in modern machine learning practice. The posited mechanism that underlies its emergence is based on common induc- tive biases, and hence can explain its appearance (and, we argue, ubiquity) in machine learning applications.
We conclude with some ï¬nal remarks.
Historical absence. The double descent behavior may have been historically overlooked on account of several cultural and practical barriers. Observing the double descent curve requires a parametric family of spaces with functions of arbitrary complexity. The linear settings studied extensively in classical statistics usually assume a small, ï¬xed set of features and hence ï¬xed ï¬tting capacity. Richer families of function classes are typically used in the context of non-parametric statistics, where smoothing and regularization are almost always employed [39]. Regularization, of all forms, can both prevent interpolation and change the eï¬ective capacity of the function class, thus attenuating or masking the interpolation peak.
The RFF model is a popular and flexible parametric family. However, these models were orig- inally proposed as computationally favorable alternative to kernel machines. This computational advantage over traditional kernel methods holds only for N < n, and hence models at or beyond he interpolation threshold are typically not considered.
The situation with general multilayer neural networks, is slightly diï¬erent and more involved. Due to the non-convexity of the ERM optimization problem, solutions in the classical under- parametrized regime are highly sensitive to initialization. Moreover, as we have seen, the peak at the interpolation threshold is observed within a narrow range of parameters. Sampling of the parameter space that misses that range may lead to the misleading impression that increasing the size of the network simply improves performance. Finally, in practice, training of neural networks is typically stopped as soon as (an estimate of) the test risk fails to improve. This early stopping has a strong regularizing eï¬ect that, as discussed above, makes it diï¬cult to observe the interpolation peak.
Inductive bias. In this paper, we have dealt with several types of methods for choosing inter- polating solutions. For Random Fourier and Random ReLU features, solutions are constructed explicitly by minimum norm linear regression in the feature space. As the number of features tends
1These trees are trained in the way proposed in Random Forest except without bootstrap re-sampling. This is similar to the PERT method of [14].
10
to inï¬nity they approach the minimum functional norm solution in the Reproducing Kernel Hilbert Space, a solution which maximizes functional smoothness subject to the interpolation constraints. For neural networks, the inductive bias owes to the speciï¬c training procedure used, which is typi- cally SGD. When all but the ï¬nal layer of the network are ï¬xed (as in RFF models), SGD initialized at zero also converges to the minimum norm solution. While the behavior of SGD for more general neural networks is not fully understood, there is signiï¬cant empirical and some theoretical evidence (e.g., [20]) that a similar minimum norm inductive bias is present. Yet another type of inductive bias related to averaging is used in random forests. Averaging potentially non-smooth interpolating trees leads to an interpolating solution with a higher degree of smoothness; this averaged solution performs better than any individual interpolating tree.
Remarkably, for kernel machines all three methods lead to the same minimum norm solution. Indeed, the minimum norm interpolating classiï¬er, hn,â, can be obtained directly by explicit norm minimization (solving an explicit system of linear equations), through SGD or by averaging trajectories of Gaussian processes (computing the posterior mean [31]).
In our experiments, appropriately chosen âmod- Optimization and practical considerations. ernâ models usually outperform the optimal âclassicalâ model on the test set. But another im- portant practical advantage of over-parametrized models is in optimization. There is a growing understanding that larger models are âeasyâ to optimize as local methods, such as SGD, converge to global minima of the training risk in over-parametrized regimes (e.g., [36]). Thus, large inter- polating models can have low test risk and be easy to optimize at the same time, in particular with SGD [26]. It is likely that the models to the left of the interpolation peak have optimiza- tion properties qualitatively diï¬erent from those to the right, a distinction of signiï¬cant practical import.
Outlook. The classical U-shaped bias-variance trade-oï¬ curve has shaped our view of model selection and directed applications of learning algorithms in practice. The understanding of model performance developed in this work delineates the limits of classical analyses and opens new lines of enquiry to study and compare computational, statistical, and mathematical properties of the classical and modern regimes in machine learning. We hope that this perspective, in turn, will help practitioners choose models and algorithms for optimal performance.
# Acknowledgments
We thank Peter Bickel for editing the PNAS submission, and the anonymous reviewers for their helpful feedback. Mikhail Belkin, Siyuan Ma and Soumik Mandal were supported by NSF RI- 1815697. Daniel Hsu was supported by NSF CCF-1740833 and Sloan Research Fellowship. We thank Nvidia for donating GPUs used for this research.
# References
[1] Madhu S Advani and Andrew M Saxe. High-dimensional dynamics of generalization error in neural networks. arXiv preprint arXiv:1710.03667, 2017.
[2] Peter L. Bartlett. The sample complexity of pattern classiï¬cation with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 44(2):525â536, 1998.
11
[3] Mikhail Belkin, Daniel Hsu, and Partha Mitra. Overï¬tting or perfect ï¬tting? risk bounds In Advances in Neural Information for classiï¬cation and regression rules that interpolate. Processing Systems, pages 2306â2317, 2018.
[4] Mikhail Belkin, Siyuan Ma, and Soumik Mandal. To understand deep learning we need to understand kernel learning. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 541â549, 2018.
[5] Mikhail Belkin, Alexander Rakhlin, and Alexandre B Tsybakov. https://arxiv.org/abs/1806.09471, 2018.
[6] Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K Warmuth. Occamâs razor. Information processing letters, 24(6):377â380, 1987.
[7] Siegfried B¨os and Manfred Opper. Dynamics of training. In Advances in Neural Information Processing Systems, pages 141â147, 1997.
[8] Bernhard E Boser, Isabelle M Guyon, and Vladimir N Vapnik. A training algorithm for In Proceedings of the ï¬fth annual workshop on Computational optimal margin classiï¬ers. learning theory, pages 144â152. ACM, 1992.
[9] Leo Breiman. Random forests. Machine learning, 45(1):5â32, 2001.
[10] Peter B¨uhlmann and Bin Yu. Boosting with the l2 loss: regression and classiï¬cation. Journal of the American Statistical Association, 98(462):324â339, 2003.
[11] Alfredo Canziani, Adam Paszke, and Eugenio Culurciello. An analysis of deep neural network models for practical applications. arXiv preprint arXiv:1605.07678, 2016.
[12] Youngmin Cho and Lawrence K. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems, pages 342â350, 2009.
[13] Fran¸cois Chollet et al. Keras. https://keras.io, 2015.
[14] Adele Cutler and Guohua Zhao. Pert-perfect random tree ensembles. Computing Science and Statistics, 33:490â497, 2001.
[15] Jerome H Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, pages 1189â1232, 2001.
[16] John S Garofolo, Lori F Lamel, William M Fisher, Jonathon G Fiscus, and David S Pallett. Darpa timit acoustic-phonetic continous speech corpus cd-rom. NIST speech disc, 1-1.1, 1993.
[17] Stuart Geman, Elie Bienenstock, and Ren Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4(1):1â58, 1992. doi: 10.1162/neco.1992.4.1.1. URL https: //doi.org/10.1162/neco.1992.4.1.1.
[18] Gerd Gigerenzer and Henry Brighton. Homo heuristicus: Why biased minds make better inferences. Topics in cognitive science, 1(1):107â143, 2009.
[19] Xavier Glorot and Yoshua Bengio. Understanding the diï¬culty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artiï¬cial intelli- gence and statistics, pages 249â256, 2010.
12
[20] Suriya Gunasekar, Blake E Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Implicit regularization in matrix factorization. In Advances in Neural Information Processing Systems, pages 6151â6159, 2017.
[21] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning, volume 1. Springer, 2001.
[22] Alex Krizhevsky and Geoï¬rey Hinton. Learning multiple layers of features from tiny images. Masterâs thesis, University of Toronto, 2009.
[23] Ken Lang. Newsweeder: Learning to ï¬lter netnews. In Machine Learning Proceedings, pages 331â339. Elsevier, 1995.
[24] Y. LeCun, L. Bottou, Y. Bengio, and P. Haï¬ner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, volume 86, pages 2278â2324, 1998.
Algorithmic regularization in over- parameterized matrix sensing and neural networks with quadratic activations. In Proceedings of the 31st Conference On Learning Theory, volume 75 of Proceedings of Machine Learning Research, pages 2â47, 06â09 Jul 2018.
[26] Siyuan Ma, Raef Bassily, and Mikhail Belkin. The power of interpolation: Understanding the eï¬ectiveness of SGD in modern over-parametrized learning. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, vol- ume 80 of Proceedings of Machine Learning Research, pages 3325â3334, Stockholmsmssan, Stockholm Sweden, 10â15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/ ma18a.html.
[27] Brady Neal, Sarthak Mittal, Aristide Baratin, Vinayak Tantia, Matthew Scicluna, Simon Lacoste-Julien, and Ioannis Mitliagkas. A modern take on the bias-variance tradeoï¬ in neural networks. arXiv preprint arXiv:1810.08591, 2018.
[28] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS workshop, volume 2011, page 4, 2011.
[29] Jeï¬rey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing, pages 1532â1543, 2014.
[30] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pages 1177â1184, 2008.
[31] Carl Edward Rasmussen. Gaussian processes in machine learning. In Advanced Lectures on Machine Learning, pages 63â71. Springer, 2004.
[32] Alessandro Rudi and Lorenzo Rosasco. Generalization properties of learning with random features. In Advances in Neural Information Processing Systems, pages 3215â3225, 2017.
[33] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei- Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3):211â252, 2015. doi: 10.1007/s11263-015-0816-y.
13
[34] Ruslan Salakhutdinov. Deep learning tutorial at the Simons Institute, Berkeley, https://simons.berkeley.edu/talks/ruslan-salakhutdinov-01-26-2017-1, 2017.
[35] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: a new explanation for the eï¬ectiveness of voting methods. Ann. Statist., 26(5):1651â1686, 1998.
[36] Mahdi Soltanolkotabi, Adel Javanmard, and Jason D Lee. Theoretical insights into the op- timization landscape of over-parameterized shallow neural networks. IEEE Transactions on Information Theory, 2018.
[37] Stefano Spigler, Mario Geiger, St´ephane dâAscoli, Levent Sagun, Giulio Biroli, and Matthieu Wyart. A jamming transition from under-to over-parametrization aï¬ects loss landscape and generalization. arXiv preprint arXiv:1810.09665, 2018.
[38] Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995. ISBN 0-387- 94559-8.
[39] Larry Wasserman. All of Nonparametric Statistics. Springer, 2006.
[40] Holger Wendland. Scattered Data Approximation. Cambridge Monographs on Applied 10.1017/ and Computational Mathematics. Cambridge University Press, 2004. CBO9780511617539. doi:
[41] Abraham J Wyner, Matthew Olson, Justin Bleich, and David Mease. Explaining the success of adaboost and random forests as interpolating classiï¬ers. Journal of Machine Learning Research, 18(48):1â33, 2017.
[42] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understand- ing deep learning requires rethinking generalization. In International Conference on Learning Representations, 2017.
# A Approximation theorem
Suppose the training data (x1, y1), . . . , (xn, yn) are sampled independently by drawing xi uniformly from a compact domain in Rd, and assigning the label yi = hâ(xi) using a target function hâ â Hâ. Let h â Hâ be another hypothesis that interpolates the training data (x1, y1), . . . , (xn, yn). The following theorem bounds the error of h in approximating hâ.
Theorem 1. Fix any hâ â Hâ. Let (x1, y1), . . . , (xn, yn) be independent and identically distributed random variables, where xi is drawn uniformly at random from a compact cube2 ⦠â Rd, and yi = hâ(xi) for all i. There exists absolute constants A, B > 0 such that, for any interpolating h â Hâ (i.e., h(xi) = yi for all i), so that with high probability
sup |h(ar) â h*(a)| < Ae BOM)" | a0. + [ll atcc) reQ
Proof sketch. Recall that the fill kn of the set of points 71,...,2% in Q is a measure of how well these points cover 2: Ky = maxyeq MIN; ,c{,,..,.2,} lv â @;||- It is easy to verify (e.g., by taking an
2Same argument can be used for more general domains and probability distributions.
14
appropriate grid partition of the cube ⦠and applying the union bound) that with high probability κn = O(n/ log n)â1/d.
Consider now a function f(x) := h(x) â h*(x). We observe that f(x;) = 0 and by the triangle inequality || f lla. < ||P*lleu0 + |All... Applying Theorem 11.22 in to f yields the result.
â
â
â
The minimum norm interpolating function hn,â has norm no larger than that of hâ (by deï¬ni- tion) and hence achieves the smallest bound in Theorem 1. While these bounds apply only in the noiseless setting, they provide a justiï¬cation for the inductive bias based on choosing a solution with a small norm. Indeed, there is signiï¬cant empirical evidence that minimum norm interpolating solutions generalize well on a variety of datasets, even in the presence of large amounts of label noise [4].
# B Experimental setup
To demonstrate the double descent risk curve, we train a number of representative models including neural networks, kernel machines and ensemble methods on several widely used datasets that involve images, speech, and text.
Datasets. Table 1 describes the datasets we use in our experiments. These datasets are for classiï¬cation problems with more than two classes, so we adopt the one-versus-rest strategy that maps a multi-class label to a binary label vector (one-hot encoding). For the image datasetsâ namely MNIST [24], CIFAR-10 [22], and SVHN [28]âcolor images are ï¬rst transformed to grayscale images, and then the maximum range of each feature is scaled to the interval [0, 1]. For the speech dataset TIMIT [16], we normalize each feature by its z-score. For the text dataset 20- Newsgroups [23], we transform each sparse feature vector (bag of words) into a dense feature vector by summing up its corresponding word embeddings obtained from [29].
For each dataset, we subsample a training set (of size n) uniformly at random without replace- ment. For the 20-Newsgroups dataset, which does not have a test set provided, we randomly pick 1/8 of the full dataset for use as a test set.
Model training. Each model is trained to minimize the squared loss on the given training set. Without regularization, such model is able to interpolate the training set when its capacity surpasses certain threshold (interpolation threshold). For comparison, we report the test/train risk for zero-one and squared loss. In experiments for neural networks and ensemble methods, we repeat the same experiment ï¬ve times and report the mean for the risks. RFF and Random ReLU experimental results were reported based on a single run as the results were empirically highly consistent.
Table 1: Descriptions of datasets. In experiments, we use subsets to reduce the computational cost.
Dataset CIFAR-10 MNIST SVHN TIMIT 20-Newsgroups Size of full training set 5 · 104 6 · 104 7.3 · 104 1.1 · 106 1.6 · 104 Feature dimension (d) 1024 784 1024 440 100 Number of classes 10 10 10 48 20
15
CIFAR-10, Zero-one loss
20Newsgroup, Zero-one loss
> RFF 95 oe RFF Min. norm solution hp, » 89 Min. norm solution hy, « (original kernel) (original kernel) gs gs a a 3 3 Fe Fe 57 50 ie} 10 20 30 40 50 60 1333 1494 ⬠140 ⬠128 2 âO RFF FS â RFF â Min. norm solution hy, » â Min. norm solution fi, « 4 1 ie} 10 20 30 40 50 60 66 88 RFF gs gs c c 45 se * E F F ie} ie} ie} 10 20 30 40 50 60 ie} 10 20 30 40 50 60 Number of Random Fourier Features (x 103) (N) Number of Random Fourier Features (x10?) (N)
Figure 6: Double descent risk curve for RFF model. Test risks (log scale), coefficient £2 norms (log scale), and training risks of the RFF model predictors hy, learned on subsets of CIFAR-10 and 20Newsgroups (n = 10). The interpolation threshold is achieved at N = 104.
16
TIMIT, Zero-one loss
SVHN, Zero-one loss
88 > RFF â@ RFF Min. norm solution fp, « Min. norm solution fp, « ââ (original keel) ââ (original kernel) gs gs a a 3 3 F F 1516 1106 e177 1s 2 â RFF 2 = RFF â Min. norm solution hy, « â Min. norm solution hy, « 3 5 ie} 10 20 30 40 50 60 73 60 _ _ âe RFF & & c 40 © 30 fd fd F F ie} ie} ie} 10 20 30 40 50 60 ie} 10 20 30 40 50 60 Number of Random Fourier Features (x 103) (N) Number of Random Fourier Features (x10?) (N)
Figure 7: Double descent risk curve for RFF model. Test risks (log scale), coefficient 2 norms (log scale), and training risks of RFF model predictors hy, learned on subsets of TIMIT and SVHN (n = 10*). The interpolation threshold is achieved at N = 10.
17
MNIST, Zero-one loss
SVHN, Zero-one loss
86 77 50 67 gs a 3 Fe 4 36 2 30 ie} 10 20 30 40 50 60 ie} 10 20 30 40 50 60 48 336 E 5 215 4 3 ie} 10 20 30 40 50 60 ie} 10 20 30 40 50 60 51 78 4 iq £ 50 20 £ fo & 2s ° T T T ° T T T ie} 10 20 30 40 50 60 ie} 10 20 30 40 50 60 Number of Random RELU Features (10?) (N) Number of Random RELU Features (107) (N)
gs a 3 Fe
E 5 2
g os £ fo g
Figure 8: Double descent risk curve for Random ReLU model. Test risks (log scale), coefficient £2 norms (log scale), and training risks of the Random ReLU Features model predictors hn, learned on subsets of MNIST and SVHN data (n = 10*). The interpolation threshold is achieved at N = 104. Regularization of 4-10~® is added for SVHN to ensure numerical stability near interpolation threshold.
# C Additional experimental results for neural networks
# C.1 Random Fourier Feature models
We provide additional experimental results for several real-world datasets. Figure 6 illustrates double descent behavior for CIFAR-10 and 20Newsgroup. Figure 7 shows similar curves of zero- one loss for TIMIT and SVHN. The random feature vectors v1, . . . , vN are sampled independently from N (0, Ïâ2 · I), the mean-zero normal distribution in Rd with covariance Ïâ2 · I. The bandwidth parameter Ï is set to 5, 5, 5, 0.1, and 16 for MNIST, SVHN, CIFAR-10, 20-Newsgroup, and TIMIT, respectively.
# C.2 Random ReLU Feature models
We show that the double descent risk curve also appears with Random ReLU feature networks [12]. Such networks are similar to the RFF models, except that they use the ReLU transfer function. Speciï¬cally, the Random ReLU features model family HN with N parameters consists of functions
18
h : Rd â R of the form
N Se and (23 ve) where (a; v) := max((v, z),0). k=1 h(a)
The vectors v1, . . . , vN are sampled independently from uniform distribution over surface of unit sphere in Rd. The coeï¬cients ak are learned using linear regression. Figure 8 illustrates zero-one loss with Random ReLU features for MNIST and SVHN data. Ridge regularization with parameter λ := 4 · 10â6 is added in SVHN experiments to ensure numerical stability near the interpolation threshold. For MNIST experiments, no regularization is added. We observe that the resulting risk curves and the norm curves are very similar to those for RFF.
# C.3 Fully connected neural networks
In our experiments, we use fully connected neural networks with a single hidden layer. To control the capacity of function class, we vary the number of hidden units. We use stochastic gradient descent (SGD) to solve the ERM optimization problem in this setting.
The ERM optimization problem in this setting is generally more diï¬cult than that for RFF and ReLU feature models due to a lack of analytical solutions and non-convexity of the problem. Consequently, SGD is known to be sensitive to initialization. To mitigate this sensitivity, we use a âweight reuseâ scheme with SGD in the under-parametrized regime (N < n), where the parameters obtained from training a smaller neural network are used as initialization for training larger networks. This procedure, detailed below, ensures decreasing training risk as the number In the over-parametrized regime (N ⥠n), we use standard (random) of parameters increases. initialization, as typically there is no diï¬culty in obtaining near-zero training risk.
Additional experimental results for neural networks are shown in Figure 9. Results for MNIST and CIFAR-10 with weight reuse are reported in Figure 9(a) and Figure 9(b). Results for MNIST without weight reuse are reported in Figure 9(c). In this setting, all models are randomly initialized. While the variance is signiï¬cantly larger, and the training loss is not monotonically decreasing, the double descent behavior is still clearly discernible.
We now provide speciï¬c details below. We use SGD with standard momentum (parameter value 0.95) implemented in [13] for training. In the weight reuse scheme, we assume that we have already trained a smaller network with H1 hidden units. To train a larger network with H2 > H1 hidden units, we initialize the ï¬rst H1 hidden units of the larger network to the weights learned in the smaller network. The remaining weights are initialized with normally distributed random numbers (mean 0 and variance 0.01). The smallest network is initialized using standard Glorot- uniform distribution [19]. For networks smaller than the interpolation threshold, we decay the step size by 10% after each of 500 epochs, where an epoch denotes a pass through the training data. For these networks, training is stopped after classiï¬cation error reached zero or 6000 epochs, whichever happens earlier. For networks larger than interpolation threshold, ï¬xed step size is used, and training is stopped after 6000 epochs.
# C.4 Synthetic model
We now discuss the nature of the double descent risk curve in the context of a simple synthetic model, which can be viewed as a version of RFF for functions on the one-dimensional circle. Consider the class H of periodic complex-valued functions on the interval [0, 2Ï], and let
â
ek(x) := exp( â1(kâ1)x)
19
(a) (b) (c)
Figure 9: Double descent risk curve for fully connected neural networks. In each plot, we use a dataset with n subsamples of d dimension and K classes for training. We use networks with a single hidden layer. For network with H hidden units, its number of parameters is (d + 1) · H + (H + 1) · K. The interpolation threshold is observed at n · K and is marked by black dotted line in ï¬gures. (a) Weight reuse before interpolation threshold and random initialization after it on MNIST. (b) Same, on a subset of CIFAR-10 with 2 classes (cat, dog) and downsampled image features (8 à 8). (c) No weight reuse (random initialization for all ranges of parameters).
for positive integers k. Fix a probability distribution p = (p1, p2, . . .) on the positive integers. For each integer N , we generate a random function class HN by (i) sampling independently from p until N distinct indices k1, . . . , kN are chosen, and then (ii) let HN be the linear span of ek1, . . . , ekN . Here, N is the number of parameters to specify a function in HN and also reï¬ects the capacity of HN .
We generate data from the following model:
yi = hâ(xi) + εi
where the target function h* = >>), age is in the span of the ex, and â¬1,..., â¬n are independent zero- mean normal random variables with variance o?. The 21,...,%, themselves are drawn uniformly at random from {27j/M : j = 0,...,Mâ 1} for M := 4096. We also let aj := py for all k, with pe x 1/k?. The signal-to-noise ratio (SNR) is E[h*(a;)?]/o?.
Given data (x1, y1),---,(@n. Yn) ⬠[0,27] x R, we learn a function from the function class Hy us- ing empirical risk minimization, which is equivalent to ordinary least squares over an N-dimensional space. Interpolation is achieved when N > n, so in this regime, we choose the interpolating function h= ya ap, ek, Of smallest (squared) norm All3, = Op OZ /Pe-
h= ya ap, ek, Of smallest (squared) norm All3, = Op OZ /Pe- Our simulations were carried out for a variety of sample sizes (n ⬠{2°,27,...,2!'}) and are all repeated independently 20 times; our plots show averages over the 20 trials. The results confirm our hypothesized double descent risk curve, as shown in Figure for n = 256; the results are similar for other n. The peak occurs at N = n, and the right endpoint of the curve is lower than the bottom of the U curve. The norm of the learned function also peaks at N = n and decreases for N >n.
20
SNR = 20, n = 256
SNR = 00, n = 256
00, 10° 10-1 Squared loss 10-2 10° 104 Norm 102 0 250 500 750 1000 Number of parameters (N) 1250 Squared loss 3 Oo Norm 3 a 1500 250 500 750 1000 1250 1500 Number of parameters (N)
Figure 10: Results from the synthetic model at SNR = co and SNR = 20. Top: excess test risk under squared loss of learned function. Bottom: norm of learned function ||h||3,,.. For n = 256 training samples, the interpolation threshold is reached at N = 256.
(a) (b)
Figure 11: Double descent risk curve for random forests. In all plots, the double descent risk curve is observed for random forest with increasing model complexity on regression tasks. Its complexity is controlled by the number of trees Ntree and the maximum number of leaves allowed for each tree N max leaf . (a) Without bootstrap re-sampling, a single tree can interpolate the training data. (b) With bootstrap re-sampling, multiple trees are needed to interpolate the data.
21
MNIST (n = 104, 10 classes) SVHN (n = 10, 10 classes) 0.06 0.15 g g 6 0.04 6 0.10 Bed cod 2 2 S 0.02 $ 0.05 [om [ow no n 0.00 0.00 80 z %° S 60 B 59 3 2 2 40 2 2 5 10 S 20 2 2 So No WM 25/1 50/1 50/10 50/20 WW 50/1 100/1 100/10 100/20 Model parameters: Ntree / Nforest Model parameters: Ntree / Nforest
Figure 12: Double descent risk curve for L2-boosting trees. In both plots, we increase the model complexity by ï¬rst increasing the number of boosting (random) trees (Ntree) which form a forest, then averaging several such forests (Nforest). Each tree is constrained to have no more than 10 leaves. For fast interpolation, the gradient boosting is applied with low shrinkage parameter (0.85).
# D Additional results with Random Forests
We train standard random forests introduced by Breiman [9] for regression problems. When split- ting a node, we randomly select a subset of features whose number is the square root of the number of the total features, a setting which is widely used in mainstream implementations of random forest. We control the capacity of the model class by choosing the number of trees (Ntree) and limiting the maximum number of leaves in each tree (N max leaf ). We put minimum constraints on the growth of each tree: there is no limit for the tree depth and we split each tree node whenever it is possible.
To interpolate the training data, we disable the bootstrap re-sampling for results in Figure 11(a), which has been investigated under the name âPerfect random tree ensemblesâ by Cutler et al. [14]. We see clear double decent risk curve (with both squared loss and zero-one loss) as we increase the capacity of the model class (although the U-shaped curve is less apparent with zero-one loss). In Figure 11(b), we run the same experiments with bootstrap re-sampling enabled, which show similar double decent risk curves.
# E Results with L2-boosting
We now show double descent risk curve for L2-boosting (random) trees introduced by Friedman [15]. When splitting a node in a tree, we randomly select a subset of features whose number is the square root of the number of the total features. We constrain each tree to have a small number of leaves (no more than 10). As the number of trees increases, the boosted trees gradually interpolate the training data and form a forest. To quickly reach interpolation, we adopt low shrinkage (param- eter value 0.85) for gradient boosting. To go beyond the interpolation threshold, we average the predictions of several such forests which are randomly constructed and trained with exactly same
22
hyper-parameters. The capacity of our model is hence controlled by the number of forests (Nforest) and the number of trees (Ntree) in each forest.
Figure 12 shows the change of train and test risk as the model capacity increases. We see the double descent risk curve for both squared loss and zero-one loss. We also observe strong over- ï¬tting under squared loss before the interpolation threshold. For similar experiments with high shrinkage (parameter value 0.1), the double descent risk curve becomes less apparent due to the regularization eï¬ect of high shrinkage [21].
23 | {
"id": "1710.03667"
} |
1812.10972 | Reasoning About Physical Interactions with Object-Oriented Prediction and Planning | Object-based factorizations provide a useful level of abstraction for
interacting with the world. Building explicit object representations, however,
often requires supervisory signals that are difficult to obtain in practice. We
present a paradigm for learning object-centric representations for physical
scene understanding without direct supervision of object properties. Our model,
Object-Oriented Prediction and Planning (O2P2), jointly learns a perception
function to map from image observations to object representations, a pairwise
physics interaction function to predict the time evolution of a collection of
objects, and a rendering function to map objects back to pixels. For
evaluation, we consider not only the accuracy of the physical predictions of
the model, but also its utility for downstream tasks that require an actionable
representation of intuitive physics. After training our model on an image
prediction task, we can use its learned representations to build block towers
more complicated than those observed during training. | http://arxiv.org/pdf/1812.10972 | Michael Janner, Sergey Levine, William T. Freeman, Joshua B. Tenenbaum, Chelsea Finn, Jiajun Wu | cs.LG, cs.AI, cs.CV, cs.RO, stat.ML | ICLR 2019, project page:
https://people.eecs.berkeley.edu/~janner/o2p2/ | null | cs.LG | 20181228 | 20190107 | 9 1 0 2 n a J 7 ] G L . s c [
2 v 2 7 9 0 1 . 2 1 8 1 : v i X r a
Published as a conference paper at ICLR 2019
REASONING ABOUT PHYSICAL INTERACTIONS WITH OBJECT-ORIENTED PREDICTION AND PLANNING
Michael Jannerâ , Sergey Levineâ , William T. Freemanâ¡, Joshua B. Tenenbaumâ¡, Chelsea Finnâ , & Jiajun Wuâ¡ â University of California, Berkeley â¡Massachusetts Institute of Technology {janner,svlevine,cbfinn}@berkeley.edu {billf,jbt,jiajunwu}@mit.edu
# ABSTRACT
Object-based factorizations provide a useful level of abstraction for interacting with the world. Building explicit object representations, however, often requires supervisory signals that are difï¬cult to obtain in practice. We present a paradigm for learning object-centric representations for physical scene understanding with- out direct supervision of object properties. Our model, Object-Oriented Predic- tion and Planning (O2P2), jointly learns a perception function to map from image observations to object representations, a pairwise physics interaction function to predict the time evolution of a collection of objects, and a rendering function to map objects back to pixels. For evaluation, we consider not only the accuracy of the physical predictions of the model, but also its utility for downstream tasks that require an actionable representation of intuitive physics. After training our model on an image prediction task, we can use its learned representations to build block towers more complicated than those observed during training.
# INTRODUCTION
Consider the castle made out of toy blocks in Figure 1a. Can you imagine how each block was placed, one-by-one, to build this structure? Humans possess a natural physical intuition that aids in the performance of everyday tasks. This physical intuition can be acquired, and reï¬ned, through experience. Despite being a core focus of the earliest days of artiï¬cial intelligence and computer vision research (Roberts, 1963; Winston, 1970), a similar level of physical scene understanding remains elusive for machines.
Cognitive scientists argue that humansâ ability to interpret the physical world derives from a richly structured apparatus. In particular, the perceptual grouping of the world into objects and their re- lations constitutes core knowledge in cognition (Spelke & Kinzler, 2007). While it is appealing to apply such an insight to contemporary machine learning methods, it is not straightforward to do so. A fundamental challenge is the design of an interface between the raw, often high-dimensional observation space and a structured, object-factorized representation. Existing works that have inves- tigated the beneï¬t of using objects have either assumed that an interface to an idealized object space already exists or that supervision is available to learn a mapping between raw inputs and relevant object properties (for instance, category, position, and orientation).
Assuming access to training labels for all object properties is prohibitive for at least two reasons. The most apparent concern is that curating supervision for all object properties of interest is difï¬cult to scale for even a modest number of properties. More subtly, a representation based on semantic
) ra ) rs
Figure 1: (a) A toy block castle. (b) Our methodâs build of the observed castle, using its learned object representations as a guide during planning.
1
Published as a conference paper at ICLR 2019
ra = Sea a) No object factorization b) Object property supervision c) O2P2: Object factorization without object property supervision
Figure 2: We divide physical understanding tasks into three distinct paradigms. (a) The ï¬rst ap- proach makes the fewest assumptions, posing prediction tasks as an instance of image-to-image translation. (b) The second uses ground-truth labels of object properties to supervise a learning algorithm that can map to the space of a traditional or learned physics engine. (c) O2P2, like (b), employs an object factorization and the functional structure of a physics engine, but like (a), does not assume access to supervision of object properties. Without object-level supervision, we must jointly learn a perception function to map from images to objects, a physics engine to simulate a collection of objects, and a rendering engine to map a set of objects back to a single composite image prediction. In all three approaches, we highlight the key supervision in orange.
attributes can be limiting or even ill-deï¬ned. For example, while the size of an object in absolute terms is unambiguous, its orientation must be deï¬ned with respect to a canonical, class-speciï¬c ori- entation. Object categorization poses another problem, as treating object identity as a classiï¬cation problem inherently limits a system to a predeï¬ned vocabulary.
In this paper, we propose Object-Oriented Prediction and Planning (O2P2), in which we train an ob- ject representation suitable for physical interactions without supervision of object attributes. Instead of direct supervision, we demonstrate that segments or proposal regions in video frames, without correspondence between frames, are sufï¬cient supervision to allow a model to reason effectively about intuitive physics. We jointly train a perception module, an object-factorized physics engine, and a neural renderer on a physics prediction task with pixel generation objective. We evaluate our learned model not only on the quality of its predictions, but also on its ability to use the learned representations for tasks that demand a sophisticated physical understanding.
# 2 OBJECT-ORIENTED PREDICTION AND PLANNING (O2P2)
In this section, we describe a method for learning object-based representations suitable for planning in physical reasoning tasks. As opposed to much prior work on object-factorized scene represen- tations (Section 4), we do not supervise the content of the object representations directly by way of labeled attributes (such as position, velocity, or orientation). Instead, we assume access only to segments or region proposals for individual video frames. Since we do not have labels for the object representations, we must have a means for converting back and forth between images and object representations for training. O2P2 consists of three components, which are trained jointly:
⢠A perception module that maps from an image to an object encoding. The perception module is applied to each object segment independently.
⢠A physics module to predict the time evolution of a set of objects. We formulate the engine as a sum of binary object interactions plus a unary transition function.
⢠A rendering engine that produces an image prediction from a variable number of objects. We ï¬rst predict an image and single-channel heatmap for each object. We then combine all of the object images according to the weights in their heatmaps at every pixel location to produce a single composite image.
2
Published as a conference paper at ICLR 2019
A high-level overview of the model is shown in Figure 2c. Below, we give details for the design of each component and their subsequent use in a model-based planning setting.
2.1 PERCEPTION MODULE
The perception module is a four-layer convolutional encoder that maps an image observation to object representation vectors O = {ok}k=1...N . We assume access to a segmentation of the input image S = {sk}k=1...N and apply the encoder individually to each segment. The perception module is not supervised directly to predict semantically meaningful properties such as position or orienta- tion; instead, its outputs are used by the physics and rendering modules to make image predictions. In this way, the perception module must be trained jointly with the other modules.
2.2 PHYSICS MODULE
The physics module predicts the effects of simulating a collection of object representations O for- ward in time. As in Chang et al. (2016); Watters et al. (2017), we consider the interactions of all pairs of object vectors. The physics engine contains two learned subcomponents: a unary transition function ftrans applied to each object representation independently, and a binary interaction function fimeract applied to all pairs of object representations. Letting O= {0x }%=1...N denote the output of the physics predictor, the k" object is given by 0; = firans(Ok) + Vik Finteract (Ok, 0; ) + O%, Where both ftrans aNd finteract are instantiated as two-layer MLPs.
Much prior work has focused on learning to model physical interactions as an end goal. In contrast, we rely on physics predictions only insofar as they affect action planning. To that end, it is more important to know the resultant effects of an action than to make predictions at a ï¬xed time interval. We therefore only need to make a single prediction, ¯O = fphysics(O), to estimate the steady-state conï¬guration of objects as a result of simulating physics indeï¬nitely. This simpliï¬cation avoids the complications of long-horizon sequential prediction while retaining the information relevant to planning under physical laws and constraints.
2.3 RENDERING ENGINE
Because our only supervision occurs at the pixel level, to train our model we learn to map all object- vector predictions back to images. A challenge here lies in designing a function which constructs a single image from an entire collection of objects. The learned renderer consists of two networks, both instantiated as convolutional decoders. The ï¬rst network predicts an image independently for each input object vector. Composing these images into a single reconstruction amounts to select- ing which object is visible at every pixel location. In a traditional graphics engine, this would be accomplished by calculating a depth pass at each location and rendering the nearest object.
To incorporate this structure into our learned renderer, we use the second decoder network to produce a single-channel heatmap for each object. The composite scene image is a weighted average of all of the object-speciï¬c renderings, where the weights come from the negative of the predicted heatmaps. In effect, objects with lower heatmap predictions at a given pixel location will be more visible than objects with higher heatmap values. This encourages lower heatmap values for nearer objects. Although this structure is reminiscent of a depth pass in a traditional renderer, the comparison should not be taken literally; the model is only supervised by composite images and no true depth maps are provided during training.
# 2.4 LEARNING OBJECT REPRESENTATIONS
We train the perception, physics, and rendering modules jointly on an image reconstruction and prediction task. Our training data consists of image pairs (I0, I1) depicting a collection of objects on a platform before and after a new object has been dropped. (I0 shows one object mid-air, as if being held in place before being released. We refer to Section 3 for details about the generation of training data.) We assume access to a segmentation S0 for the initial image I0.
Given the observed segmented image S0, we predict object representations using the perception module O = fpercept(S0) and their time-evolution using the physics module ¯O = fphysics(O). The rendering engine then predicts an image from each of the object representations: ËI0 = frender(O), ËI1 = frender( ¯O). We compare each image prediction ËIt to its ground-truth counterpart using both L2 distance and a perceptual loss LVGG. As in Johnson et al. (2016), we use L2 distance in the feature space of a
3
Published as a conference paper at ICLR 2019
Algorithm 1 Planning Procedure
Input perception, physics, and rendering modules fpercept, fphysics, frender Input goal image I goal with N segments Sgoal = {sgoal k }k=1...N 1: Encode the goal image into a set of N object representations Ogoal = {ogoal 2: while Ogoal is nonempty do 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: end while k }k=1...N = fpercept(Sgoal) Segment the objects that have already been placed to yield Scurr for m = 1 to M do Sample action am of the form (shape, position, orientation, color) from uniform distribution Observe action am as a segment sm by moving object to speciï¬ed position and orientation Concatenate the observation and segments of existing objects Sm = {sm} ⪠Scurr Encode segments Sm into a set of object representations Om = fpercept(Sm) Predict the effects of simulating physics on the object representations ¯Om = fphysics(Om) Select the representation ¯o â ¯Om of the object placed by sampled action am Find the goal object gm that is closest to ¯o: gm = arg mini ||ogoal Compute the corresponding distance dm = ||otarg i â ¯o||2 gm â ¯o||2 end for Select action amâ with the minimal distance to its nearest goal object: mâ = arg minm dm. Execute action amâ and remove object gmâ from the goal Ogoal = Ogoal\{ogoal gmâ }.
pretrained VGG network (Simonyan & Zisserman, 2014) as a perceptual loss function. The per- ception module is supervised by the reconstruction of I0, the physics engine is supervised by the reconstruction of I1, and the rendering engine is supervised by the reconstruction of both images. Speciï¬cally, Lpercept(·) = L2( ËI0, I0) + LVGG( ËI0, I0), Lphysics(·) = L2( ËI1, I1) + LVGG( ËI1, I1), and Lrender(·) = Lpercept(·) + Lphysics(·). 2.5 PLANNING WITH LEARNED MODELS We now describe the use of our perception, physics, and rendering modules in the representative planning task depicted in Figure 1, in which the goal is to build a block tower to match an observed image. Here, matching a tower does not refer simply to producing an image from the rendering engine that looks like the observation. Instead, we consider the scenario where the model must output a sequence of actions to construct the conï¬guration.
This setting is much more challenging because there is an implicit sequential ordering to building such a tower. For example, the bottom cubes must be placed before the topmost triangle. O2P2 was trained solely on a pixel-prediction task, in which it was never shown such valid action orderings (or any actions at all). However, these orderings are essentially constraints on the physical stability of intermediate towers, and should be derivable from a model with sufï¬cient understanding of physical interactions.
Although we train a rendering function as part of our model, we guide the planning procedure for constructing towers solely through errors in the learned object representation space. The planning procedure, described in detail in Algorithm 1, can be described at a high level in four components:
1. The perception module encodes the segmented goal image into a set of object representations Ogoal.
2. We sample actions of the form (shape, position, orientation, color), where shape is categorical and describes the type of block, and the remainder of the action space is continuous and describes the blockâs appearance and where it should be dropped.
3. We evaluate the samples by likewise encoding them as object vectors and comparing them with Ogoal. We view action sample am as an image segment sm (analogous to observing a block held in place before dropping it) and use the perception module to produce object vectors Om. Because the actions selected should produce a stable tower, we run these object representations through the physics engine to yield ¯Om before comparing with Ogoal. The cost is the L2 distance between the object ¯o â ¯Om corresponding to the most recent action and the goal object in Ogoal that minimizes this distance.
4. Using the action sampler and evaluation metric, we select the sampled action that minimizes L2 distance. We then execute that action in MuJoCo (Todorov et al., 2012). We continue this procedure, iteratively re-planning and executing actions, until there are as many actions in the
4
Published as a conference paper at ICLR 2019
Simulated Reconstructions Simulated Reconstructions t=1 t=0 t=1 t=0 t=1 t=0 t=1 t=0 Fr | wa i a ie sl _ Lal a 7 = i f
Figure 3: Given an observed segmented image I0 at t = 0, our model predicts a set of object representations O, simulates the objects with a learned physics engine to produce ¯O = fphysics(O), and renders the resulting predictions ËI = frender( ¯O), the sceneâs appearance at a later time. We use the convention (in all ï¬gures) that observations are outlined in green, other images rendered with the ground-truth renderer are outlined in black, and images rendered with our learned renderer are outlined in blue.
executed sequence as there are objects in the goal image. In the simplest case, the distribution from which actions are sampled may be uniform, as in Algorithm 1. Alternatively, the cross- entropy method (CEM) (Rubinstein & Kroese, 2004) may be used, repeating the sampling loop multiple times and ï¬tting a Gaussian distribution to the lowest-cost samples. In practice, we used CEM starting from a uniform distribution with ï¬ve iterations, 1000 samples per iteration, and used the top 10% of samples to ï¬t the subsequent iterationâs sampling distribution.
3 EXPERIMENTAL EVALUATION In our experimental evaluation, we aim to answer the following questions, (1) After training solely on physics prediction tasks, can O2P2 reason about physical interactions in an actionable and useful way? (2) Does the implicit object factorization imposed by O2P2âs structure provide a beneï¬t over an object-agnostic black-box video prediction approach? (3) Is an object factorization still useful even without supervision for object representations? IMAGE RECONSTRUCTION AND PREDICTION 3.1 We trained O2P2 to reconstruct observed objects and predict their conï¬guration after simulating physics, as described in Section 2.4. To generate training data, we simulated dropping a block on top of a platform containing up to four other blocks. We varied the position, color, and orientation of three block varieties (cubes, rectangular cuboids, and triangles). In total, we collected 60,000 training images using the MuJoCo simulator. Since our physics engine did not make predictions at every timestep (Section 2.2), we only recorded the initial and ï¬nal frame of a simulation. For this synthetic data, we used ground truth segmentations corresponding to visible portions of objects.
Representative predictions of our model for image reconstruction (without physics) and prediction (with physics) on held-out random conï¬gurations are shown in Figure 3. Even when the modelâs predictions differed from the ground truth image, such as in the last row of the ï¬gure, the physics engine produced a plausible steady-state conï¬guration of the observed scene. 3.2 BUILDING TOWERS After training O2P2 on the random conï¬gurations of blocks, we ï¬xed its parameters and employed the planning procedure as described in Section 2.5 to build tower conï¬gurations observed in images. We also evaluated the following models as comparisons:
⢠No physics is an ablation of our model that does not run the learned physics engine, but instead simply sets ¯O = O
⢠Stochastic adversarial video prediction (SAVP), a block-box video prediction model which does not employ an object factorization Lee et al. (2018). The cost function of samples is evaluated directly on pixels. The sampling-based planning routine is otherwise the same as in ours.
5
Published as a conference paper at ICLR 2019
Goal No physics SAVP O2P2 Oracle Oracle (pixels) (objects) â Pa adalat Task 2 ~ q fa 7 ay
Figure 4: Qualitative results on building towers using planning. Given an image of the goal tower, we can use the learned object representations and predictive model in O2P2 for guiding a planner to place blocks in the world and recreate the conï¬guration. We compare with an ablation, an object- agnostic video prediction model, and two âoraclesâ with access to the ground-truth simulator. Table 1: Accuracy (%) of block tower builds by our approach and the four comparison models. Our model outperforms Oracle (pixels) despite not having the ground-truth simulator by virtue of a more appropriate object-factorized objective to guide the planning procedure.
No physics SAVP Ours Oracle (pixels) Oracle (objects) 0 24 76 71 92
⢠Oracle (pixels) uses the MuJoCo simulator to evaluate samples instead of our learned physics and graphics engines. The cost of a block conï¬guration is evaluated directly in pixel space using L2 distance.
⢠Oracle (objects) also uses MuJoCo, but has access to segmentation masks on input images while evaluating the cost of proposals. Constraining proposed actions to account for only a single object in the observation resolves some of the inherent difï¬culties of using pixel-wise loss functions.
Qualitative results of all models are shown in Figure 4 and a quantitative evaluation is shown in Table 1. We evaluated tower stacking success by greedily matching the built conï¬guration to the ground-truth state of the goal tower, and comparing the maximum object error (deï¬ned on its po- sition, identity, and color) to a predetermined threshold. Although the threshold is arbitrary in the sense that it can be chosen low enough such that all builds are incorrect, the relative ordering of the models is robust to changes in this value. All objects must be of the correct shape for a built tower to be considered correct, meaning that our third row prediction in Figure 4 was incorrect because a green cube was mistaken for a green rectangular cuboid.
While SAVP made accurate predictions on the training data, it did not generalize well to these more complicated conï¬gurations with more objects per frame. As such, its stacking success was low. Physics simulation was crucial to our model, as our No-physics ablation failed to stack any towers correctly. We explored the role of physics simulation in the stacking task in Section 3.3. The âoracleâ model with access to the ground-truth physics simulator was hampered when making comparisons in pixel space. A common failure mode of this model was to drop a single large block on the ï¬rst step to cover the visual area of multiple smaller blocks in the goal image. This scenario was depicted by the blue rectangular cuboid in the ï¬rst row of Figure 4 in the Oracle (pixels) column.
3.3 THE IMPORTANCE OF UNDERSTANDING PHYSICS Figure 5 depicts the entire planning and execution procedure for O2P2 on a pyramid of six blocks. At each step, we visualize the process by which our model selects an action by showing a heatmap of
6
Published as a conference paper at ICLR 2019
Scored action locations Selected action (before physics)
Figure 5: (a) Visualization of scored locations for dropping an object at each timestep. Because O2P2 simulates physics before selecting an action, it is able to plan a sequence of stable actions. (b) The selected block and drop position from the scored samples, outlined in white. (c) The prediction from our physics model of the result of running physics on the selected block.
Goal O2P2 No physics Scored locations First action Execution Scored locations First action Execution
Figure 6: Heatmaps showing sampled action scores for the initial action given a goal block tower. O2P2âs scores reï¬ect that the objects resting directly on the platform must be dropped ï¬rst, and that they may be dropped from any height because they will fall to the ground. The No-physics ablation, on the other hand, does not implicitly represent that the blocks need to be dropped in a stable sequence of actions because it does not predict the blocks moving after being released.
scores (negative MSE) for each action sample according to the sampleâs (x, y) position (Figure 5a). Although the model is never trained to produce valid action decisions, the planning procedure se- lects a physically stable sequence of actions. For example, at the ï¬rst timestep, the model scores three x-locations highly, corresponding to the three blocks at the bottom of the pyramid. It correctly determines that the height at which it releases a block at any of these locations does not particularly matter, since the block will drop to the correct height after running the physics engine. Figure 5b shows the selected action at each step, and Figure 5c shows the modelâs predictions about the con- ï¬guration after releasing the sampled block.
Similar heatmaps of scored samples are shown for the No-physics ablation of our model in Figure 6. Because this ablation does not simulate the effect of dropping a block, its highly-scored action samples correspond almost exactly to the actual locations of the objects in the goal image. Further, without physics simulation it does not implicitly select for stable action sequences; there is nothing to prevent the model from selecting the topmost block of the tower as the ï¬rst action.
Planning for alternate goals. By implicitly learning the underlying physics of a domain, our model can be used for various tasks besides matching towers. In Figure 7a, we show our modelâs representations being used to plan a sequence of actions to maximize the height of a tower. There is no observation for this task, and the action scores are calculated based on the highest non-zero pixels after rendering samples with the learned renderer. In Figure 7b, we consider a similar sampling procedure as in the tower-matching experiments, except here only a single unstable block is shown. Matching a free-ï¬oating block requires planning with O2P2 for multiple steps at once.
7
Published as a conference paper at ICLR 2019
Maximize height Make block stable il ar = |
Figure 7: O2P2 being used to plan for the alternate goals of (a) maximizing the height of a tower and (b) making an observed block stable by use of any other blocks.
" == em a P| Riz KF
Figure 8: Ten goal images alongside the result of the Sawyerâs executed action sequence using O2P2 for planning. The seven action sequences counted as correct are outlined in solid black; the three counted as incorrect are outlined in dashed lines. We refer the reader to Appendix B for more evaluation examples and people.eecs.berkeley.edu/â¼janner/o2p2 for videos of the evaluation.
3.4 TRANSFER TO ROBOTIC ARM
We evaluated O2P2 on a Sawyer robotic arm using real image inputs. We deployed the same per- ception, physics, and rendering modules used on synthetic data with minor changes to the planning procedure to make real-world evaluation tractable. Instead of evaluating a sampled action by moving an appropriate block to the speciï¬ed position and inferring object representations with the perception module, we trained a separate two-layer MLP to map directly from actions to object representations. We refer to this module as the embedder: om = fembedder(am).
Mapping actions to object representations removed the need to manually move every sampled block in front of the camera, which would have been prohibitively slow on a real robot. The embedder was supervised by the predicted object representations of the perception module on real image inputs; we collected a small dataset of the Sawyer gripper holding each object at one hundred positions and recorded the ground truth position of the gripper along with the output of the perception module for the current observation.
The embedder took the place of lines 6-8 of Algorithm 1. We also augmented the objective used to select actions in line 11. In addition to L2 distance between goal and sampled object representations, we used a pixelwise L2 distance between the observed and rendered object segments and between the rendered object segments before and after use of the physics module. The latter loss is useful in a real setting because the physical interactions are less predictable than their simulated counterparts, so by penalizing any predicted movement we preferentially placed blocks directly in a stable position.
By using end-effector position control on the Sawyer gripper, we could retain the same action space as in synthetic experiments. Because the position component of the sampled actions referred to the block placement location, we automated the picking motion to select the sampled block based on the shape and color components of an action. Real-world evaluation used colored wooden cubes and rectangular cuboids.
8
Published as a conference paper at ICLR 2019
Real image object segments were estimated by applying a simple color ï¬lter and ï¬nding connected components of sufï¬cient size. To account for shading and specularity differences, we replaced all pixels within an object segment by the average color within the segment. To account for noisy segment masks, we replaced each mask with its nearest neighbor (in terms of pixel MSE) in our MuJoCo-rendered training set.
We tested O2P2 on twenty-ï¬ve goal conï¬gurations total, of which our model correctly built seven- teen. Ten goal images, along with the result of our modelâs executed action sequence, are shown in Figure 8. The remainder of the conï¬gurations are included in Appendix B.
4 RELATED WORK Our work is situated at the intersection of two distinct paradigms. In the ï¬rst, a rigid notion of object representation is enforced via supervision of object properties (such as size, position, and identity). In the second, scene representations are not factorized at all, so no extra supervision is required. These two approaches have been explored in a variety of domains. Image and video understanding. The insight that static observations are physically stable conï¬g- urations of objects has been leveraged to improve 3D scene understanding algorithms. For example, Zheng et al. (2014); Gupta et al. (2010); Shao et al. (2014); Jia et al. (2015) build physically-plausible scene representations using such stability constraints. We consider a scenario in which the physical representations are learned from data instead of taking on a predetermined form. Wu et al. (2017b;a) encode scenes in a markup-style representation suitable for consumption by off-the-shelf rendering engines and physics simulators. In contrast, we do not assume access to supervision of object prop- erties (only object segments) for training a perception module to map into a markup language.
There has also been much attention on inferring object-factorized, or otherwise disentangled, repre- sentations of images (Eslami et al., 2016; Greff et al., 2017; van Steenkiste et al., 2018). In contrast to works which aim to discover objects in a completely unsupervised manner, we focus on using object representations learned with minimal supervision, in the form of segmentation masks, for downstream tasks. Object-centric scene decompositions have also been considered as a potential state representation in reinforcement learning (Diuk et al., 2008; Scholz et al., 2014; Devin et al., 2017; Goel et al., 2018; Keramati et al., 2018). We are speciï¬cally concerned with the problem of predicting and reasoning about physical phenomena, and show that a model capable of this can also be employed for decision making. Learning and inferring physics. Fragkiadaki et al. (2016); Watters et al. (2017); Chang et al. (2016) have shown approaches to learning a physical interaction engine from data. Hamrick et al. (2011) use a traditional physics engine, performing inference over object parameters, and show that such a model can account for humansâ physical understanding judgments. We consider a similar physics formulation, whereby update rules are composed of sums of pairwise object-interaction functions, and incorporate it into a training routine that does not have access to ground truth super- vision in the form of object parameters (such as position or velocity).
An alternative to using a traditional physics engine (or a learned object-factorized function trained to approximate one) is to treat physics prediction as an image-to-image translation or classiï¬cation problem. In contrast to these prior methods, we consider not only the accuracy of the predictions of our model, but also its utility for downstream tasks that are intentionally constructed to evaluate its ability to acquire an actionable representation of intuitive physics. Comparing with representative video prediction (Lee et al., 2018; Babaeizadeh et al., 2018) and physical prediction (Ehrhardt et al., 2017; Mottaghi et al., 2016; Li et al., 2017; Lerer et al., 2016) methods, our approach achieves substantially better results at tasks that require building structures out of blocks. 5 CONCLUSION We introduced a method of learning object-centric representations suitable for physical interactions. These representations did not assume the usual supervision of object properties in the form of po- sition, orientation, velocity, or shape labels. Instead, we relied only on segment proposals and a factorized structure in a learned physics engine to guide the training of such representations. We demonstrated that this approach is appropriate for a standard physics prediction task. More impor- tantly, we showed that this method gives rise to object representations that can be used for difï¬cult planning problems, in which object conï¬gurations differ from those seen during training, without further adaptation. We evaluated our model on a block tower matching task and found that it outper- formed object-agnostic approaches that made comparisons in pixel-space directly.
9
Published as a conference paper at ICLR 2019
ACKNOWLEDGMENTS
We thank Michael Chang for insightful discussion and anonymous reviewers for feedback on an early draft of this paper. This work was supported by the National Science Foundation Graduation Research Fellowship and the Open Philanthropy Project AI Fellowship.
REFERENCES
Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy H. Campbell, and Sergey Levine. Stochastic variational video prediction. In International Conference on Learning Representations, 2018.
Michael B Chang, Tomer Ullman, Antonio Torralba, and Joshua B Tenenbaum. A compositional object-based approach to learning physical dynamics. In International Conference on Learning Representations, 2016.
Coline Devin, Pieter Abbeel, Trevor Darrell, and Sergey Levine. Deep object-centric representations for generalizable robot learning. CoRR, abs/1708.04225, 2017.
Carlos Diuk, Andre Cohen, and Michael L. Littman. An object-oriented representation for efï¬- cient reinforcement learning. In Proceedings of the 25th International Conference on Machine Learning, 2008.
S´ebastien Ehrhardt, Aron Monszpart, Niloy J. Mitra, and Andrea Vedaldi. Taking Visual Motion Prediction To New Heightï¬elds. arXiv preprint arXiv:1712.09448, 2017.
S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Koray Kavukcuoglu, and Geoffrey E Hinton. Attend, infer, repeat: Fast scene understanding with gen- erative models. In Advances in Neural Information Processing Systems 29. 2016.
Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, and Jitendra Malik. Learning visual predictive models of physics for playing billiards. In International Conference on Learning Representations, 2016.
Vik Goel, Jameson Weng, and Pascal Poupart. Unsupervised video object segmentation for deep reinforcement learning. CoRR, abs/1805.07780, 2018.
Klaus Greff, Sjoerd van Steenkiste, and J¨urgen Schmidhuber. Neural expectation maximization. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30. 2017.
Abhinav Gupta, Alexei A. Efros, and Martial Hebert. Blocks world revisited: Image understanding using qualitative geometry and mechanics. In European Conference on Computer Vision(ECCV), 2010.
Internal physics models guide probabilistic judgments about object dynamics. In Proceedings of the 33rd annual conference of the cognitive science society, 2011.
Z. Jia, A. C. Gallagher, A. Saxena, and T. Chen. 3d reasoning from blocks to stability. Transactions on Pattern Analysis and Machine Intelligence, 2015. IEEE
Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, 2016.
Ramtin Keramati, Jay Whang, Patrick Cho, and Emma Brunskill. Strategic object oriented rein- forcement learning. arXiv preprint arXiv:1806.00175, 2018.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Alex X. Lee, Richard Zhang, Frederik Ebert, Pieter Abbeel, Chelsea Finn, and Sergey Levine. Stochastic adversarial video prediction. arXiv preprint arXiv:1804.01523, 2018.
10
Published as a conference paper at ICLR 2019
Adam Lerer, Sam Gross, and Rob Fergus. Learning physical intuition of block towers by example. In Proceedings of the 33rd International Conference Machine Learning, 2016.
W. Li, A. Leonardis, and M. Fritz. Visual stability prediction for robotic manipulation. In IEEE International Conference on Robotics and Automation, 2017.
Roozbeh Mottaghi, Hessam Bagherinezhad, Mohammad Rastegari, and Ali Farhadi. Newtonian scene understanding: Unfolding the dynamics of objects in static images. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Lawrence G. Roberts. Machine Perception of Three-Dimensional Solids. Outstanding Dissertations in the Computer Sciences. 1963.
Reuven Y. Rubinstein and Dirk P. Kroese. The Cross Entropy Method: A Uniï¬ed Approach To Com- binatorial Optimization, Monte-carlo Simulation (Information Science and Statistics). Springer- Verlag, Berlin, Heidelberg, 2004. ISBN 038721240X.
Jonathan Scholz, Martin Levihn, Charles Isbell, and David Wingate. A physics-based model prior for object-oriented mdps. In Proceedings of the 31st International Conference on Machine Learn- ing, 2014.
Tianjia Shao, Aron Monszpart, Youyi Zheng, Bongjin Koo, Weiwei Xu, Kun Zhou, and Niloy Mi- tra. Imagining the unseen: Stability-based cuboid arrangements for scene understanding. ACM SIGGRAPH Asia 2014, 2014. * Joint ï¬rst authors.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recogni- tion. CoRR, abs/1409.1556, 2014.
Elizabeth S. Spelke and Katherine D. Kinzler. Core knowledge. Developmental Science, 10(1): 89â96, 2007.
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In IROS, pp. 5026â5033, 2012.
Sjoerd van Steenkiste, Michael Chang, Klaus Greff, and Jrgen Schmidhuber. Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. In Interna- tional Conference on Learning Representations, 2018. URL https://openreview.net/ forum?id=ryH20GbRW.
Nicholas Watters, Daniel Zoran, Theophane Weber, Peter Battaglia, Razvan Pascanu, and Andrea Tacchetti. Visual interaction networks: Learning a physics simulator from video. In Advances in Neural Information Processing Systems 30. 2017.
Patrick Henry Winston. Learning structural descriptions from examples. Technical report, 1970.
Jiajun Wu, Erika Lu, Pushmeet Kohli, William T Freeman, and Joshua B Tenenbaum. Learning to see physics via visual de-animation. In Advances in Neural Information Processing Systems, 2017a.
Jiajun Wu, Joshua B Tenenbaum, and Pushmeet Kohli. Neural scene de-rendering. In IEEE Confer- ence on Computer Vision and Pattern Recognition, 2017b.
Bo Zheng, Yibiao Zhao, Joey C. Yu, Katsushi Ikeuchi, and Song-Chun Zhu. Scene understanding by reasoning stability and safety. International Journal of Computer Vision, 2014.
# A IMPLEMENTATIONS DETAILS
Objects were represented as 256-dimensional vectors. The perception module had four convolu- tional layers of {32, 64, 128, 256} channels, a kernel size of 4, and a stride of 2 followed by a single fully-connected layer with output size matching the object representation dimension. Both MLPs in the physics engine had two hidden layers each of size 512. The rendering networks had convolutional layers with {128, 64, 32, 3} channels (or 1 output channel in the case of the heatmap predictor), kernel sizes of {5, 5, 6, 6}, and strides of 2. We used the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 1e-3.
11
Published as a conference paper at ICLR 2019
B SAWYER RESULTS
7 "fl Rn | | a | a. eae | Result
Figure 9: Extension of Figure 8, showing our planning results on a Sawyer arm with real image inputs. The seven action sequences counted as correct are outlined in solid black; the three counted as incorrect are outlined in dashed lines.
Goal Action Sequence Result a | oll od |
Figure 10: All actions taken by our planning procedure for one of the goal conï¬gurations from Figure 8.
12 | {
"id": "1712.09448"
} |
1812.10757 | Advancing the State of the Art in Open Domain Dialog Systems through the Alexa Prize | Building open domain conversational systems that allow users to have engaging
conversations on topics of their choice is a challenging task. Alexa Prize was
launched in 2016 to tackle the problem of achieving natural, sustained,
coherent and engaging open-domain dialogs. In the second iteration of the
competition in 2018, university teams advanced the state of the art by using
context in dialog models, leveraging knowledge graphs for language
understanding, handling complex utterances, building statistical and
hierarchical dialog managers, and leveraging model-driven signals from user
responses. The 2018 competition also included the provision of a suite of tools
and models to the competitors including the CoBot (conversational bot) toolkit,
topic and dialog act detection models, conversation evaluators, and a sensitive
content detection model so that the competing teams could focus on building
knowledge-rich, coherent and engaging multi-turn dialog systems. This paper
outlines the advances developed by the university teams as well as the Alexa
Prize team to achieve the common goal of advancing the science of
Conversational AI. We address several key open-ended problems such as
conversational speech recognition, open domain natural language understanding,
commonsense reasoning, statistical dialog management, and dialog evaluation.
These collaborative efforts have driven improved experiences by Alexa users to
an average rating of 3.61, the median duration of 2 mins 18 seconds, and
average turns to 14.6, increases of 14%, 92%, 54% respectively since the launch
of the 2018 competition. For conversational speech recognition, we have
improved our relative Word Error Rate by 55% and our relative Entity Error Rate
by 34% since the launch of the Alexa Prize. Socialbots improved in quality
significantly more rapidly in 2018, in part due to the release of the CoBot
toolkit. | http://arxiv.org/pdf/1812.10757 | Chandra Khatri, Behnam Hedayatnia, Anu Venkatesh, Jeff Nunn, Yi Pan, Qing Liu, Han Song, Anna Gottardi, Sanjeev Kwatra, Sanju Pancholi, Ming Cheng, Qinglang Chen, Lauren Stubel, Karthik Gopalakrishnan, Kate Bland, Raefer Gabriel, Arindam Mandal, Dilek Hakkani-Tur, Gene Hwang, Nate Michel, Eric King, Rohit Prasad | cs.CL, cs.AI | 2018 Alexa Prize Proceedings | null | cs.CL | 20181227 | 20181227 | # Advancing the State of the Art in Open Domain Dialog Systems through the Alexa Prize
Chandra Khatri1 Behnam Hedayatnia1 Anu Venkatesh1 Jeff Nunn1 Yi Pan1 Qing Liu1 Han Song1 Anna Gottardi1 Sanjeev Kwatra1 Sanju Pancholi1 Ming Cheng1 Qinglang Chen1 Lauren Stubel1 Karthik Gopalakrishnan1 Kate Bland1 Raefer Gabriel1 Arindam Mandal1 Dilek Hakkani-Tur1 Gene Hwang1 Nate Michel1 Eric King1 Rohit Prasad1
1Amazon Alexa Prize {ckhatri, behnam, anuvenk, jeffnunn}@amazon.com {kwatras, pansanju, chengmc, qinglan}@amazon.com {stubells, karthgop, kateblan, raeferg}@amazon.com {arindamm, hakkanit, ehwang, natmiche}@amazon.com {kinr, roprasad}@amazon.com
# {yipan, qqliu, hasong, gottardi }@amazon.com
# Abstract
Building open domain conversational systems that allow users to have engaging conversations on topics of their choice is a challenging task. Alexa Prize was launched in 2016 to tackle the problem of achieving natural, sustained, coherent and engaging open-domain dialogs. In the second iteration of the competition in 2018, university teams advanced the state of the art by using context in dialog models, leveraging knowledge graphs for language understanding, handling complex utterances, building statistical and hierarchical dialog managers, and leveraging model-driven signals from user responses. The 2018 competition also included the provision of a suite of tools and models to the competitors including the CoBot (conversational bot) toolkit, topic and dialog act detection models, conversation evaluators, and a sensitive content detection model so that the competing teams could focus on building knowledge-rich, coherent and engaging multi-turn dialog systems. This paper outlines the advances developed by the university teams as well as the Alexa Prize team to achieve the common goal of advancing the science of Conversational AI. We address several key open-ended problems such as conversational speech recognition, open domain natural language understanding, commonsense reasoning, statistical dialog management and dialog evaluation. These collaborative efforts have driven improved experiences by Alexa users to an average rating of 3.61, median duration of 2 mins 18 seconds, and average turns to 14.6, increases of 14%, 92%, 54% respectively since the launch of the 2018 competition. For conversational speech recognition, we have improved our relative Word Error Rate by 55% and our relative Entity Error Rate by 34% since the launch of the Alexa Prize. Socialbots improved in quality significantly more rapidly in 2018, in part due to the release of the CoBot toolkit, with new entrants attaining an average rating of 3.35 just 1 week into the semifinals, compared to 9 weeks in the 2017 competition.
1 Introduction Conversational AI is one of the hardest problem domains in artificial intelligence, due to the subjectivity involved in interpreting human language. The problems associated with the
2nd Proceedings of Alexa Prize (Alexa Prize 2018).
Conversational AI domain include natural language understanding, knowledge representation, commonsense reasoning, and dialog evaluation. A complete solution to this problem is likely to have a system on the scale of human parity, which is hard to measure (Hassan et al., 2018; Xiong et al., 2016). Such systems can be described as AI complete. With advancements in Deep Learning and AI, we have obtained significant performance improvements on subproblems within AI-Complete areas such as Image Recognition and Computer Vision. These advancements have come in large part due to the objective nature of evaluating solutions to these problems. Conversational AI requires both natural language understanding and response generation, where the latter features a potentially unbounded response space, and a lack of objective success metrics, making it a highly challenging problem to model.
Voice assistants such as Alexa and Google Assistant have significantly advanced the state of science for goal-directed conversations and successfully deployed these systems in production. However, the challenge of building agents which can carry multi-turn open- domain conversations is still far from being solved. To address these challenges and further the state of Conversational AI, Amazon launched a yearly competition called Alexa Prize in 2016. The grand challenge objective is to build agents that can converse coherently and engagingly with humans for 20 minutes, and obtain a 4 out of 5 or higher rating from humans interacting with them. Apart from the Alexa Prize, there have been challenges like Dialog System Technology Challenge (DSTC) (Williams et al., 2016) (task based and closed domain) and Conversational AI Challenge (ConvAI) (Burtsev et al., 2018) (persona based, chit-chat and challenges with evaluation). Both of these challenges are text based as opposed to speech- based. Achieving natural, sustained, coherent and engaging open-domain dialogs in spoken form, which can be evaluated and measured for success, is the primary goal of the Alexa Prize. Through the Alexa Prize competition, participating universities have been able to conduct research and test hypotheses with real customer interactions by building socialbots, which are available to all Alexa users. Users interact with socialbots via the âAlexa, letâs chatâ experience, engage in a live conversation, and can leave ratings and feedback for teams at the end of their conversations, which the teams can then use to continuously evaluate and improve their systems.
The inaugural cohort consisted of 16 university teams that set the path for future research, did extensive experimentation and brought advancements across Natural Language Understanding (NLU), Knowledge Ingestion, Dialog Manager, Personalization, Response Generation and Selection (Ram et al., 2017). The winning team - Sounding Board (Fang et al., 2018) from the University of Washington achieved an average rating of 3.17 and an average conversation duration of 10 minutes and 22 seconds during the finals of the inaugural year of the competition.
The 2018 cohort was made up of 8 teams, with 5 of them alumni of Alexa Prize 2017. 2018 teams not only expanded upon the learnings and research done in 2017 but also built several new components to handle compound sentences such as âI love watching the Harry Potter movies, but I think the books are more enjoyableâ as well as contextual models for multi-turn dialogs. They also leveraged topic models and dialog acts for driving deep topical conversations and adapted their dialog manager based on user's interest. Teams also utilized extensive knowledge bases to enable broad coverage of topics and entities, and used a variety of linking techniques to transition between topics and entities within a knowledge graph.
Based on the learnings from the 2017 competition, we provided several resources, models and tools to the university teams for 2018. To help teams focus more on science work and minimize effort spent on infrastructure for model hosting and scaling, we created CoBot, a conversational bot toolkit in Python for natural language understanding and dialog management. We also shared a variety of models such as Topic and Dialog Act Classifiers, Conversation Evaluators, and Sensitive Content Detector, which were broadly adopted by many teams. Furthermore, we drastically improved our Conversational Speech Recognition system, with a 25% relative improvement in Word Error Rate (WER) since the end of last year, to minimize errors in inputs to the socialbots. We also provided teams with weekly metrics reports computed from human annotations and statistical models to identify potential growth
2
areas for each.
While there are a few datasets and challenges targeting improved dialogs and conversational AI, most of them are task oriented (e.g. DSTC Challenge) or targeted at more phatic or chatty interactions (e.g. ConvAI Challenge). The Alexa Prize is very different as it addresses many of the missing gaps pertaining to task-based datasets/challenges and chitchat or persona-based conversations. The conversations are spoken, not task-restricted, open-ended, topical, involve opinions, and are conducted with real users with continuous feedback. This enables the system to evolve over time, perform A/B tests and ingest learnings in real-time. Evaluation is completely subjective and the experience is rated by a human in the loop. This data is extremely useful not only for improving existing systems such as conversational speech recognition and dialog act detection, but also for a variety of open research problems such as dialog evaluation, statistical dialog management, and sensitive content detection.
In 2017, during the inaugural Alexa Prize participants performed a significant amount of research around a variety of models, primarily focused on natural language understanding, knowledge ingestion, dialog and context modeling, personalization, response generation and ranking and response selection. This work is described in more detail in the 1st Proceedings of the Alexa Prize.
In this paper, we describe the scientific and engineering advancements brought by the university teams and by Amazon to advance the state of Conversational AI during the 2018 competition. We also share the performance of socialbots across a variety of metrics including ratings, duration, coherence, engagement, and response quality. Finally, we address some of the open-ended problems in Conversational AI that we have made progress in this year, and share our learnings from the 2018 competition.
# 2 Alexa Prize Background
Similar to the 2017 Alexa Prize, upon receiving a request to engage in a conversation with Alexa, e.g. âAlexa, Let's Chatâ, Alexa customers were read a brief message, then connected to one of the 8 participating socialbots. Customers were provided instructions on how to end the conversation and provide ratings and feedback. The introductory message and instructions changed through the competition to keep the information relevant to the different phrases. After exiting the conversation with the socialbot, which the user could do at any time, the user was prompted for a verbal rating: âHow do you feel about speaking with this socialbot again?â, followed by an option to provide additional freeform feedback. Ratings and feedback were both shared back with the teams to help them improve their socialbots.
The Alexa Prize 2018 was launched to a cohort of Amazon employees on April 10, followed by a public launch on May 16 at which time all US Alexa customers could interact with the participating socialbots. We followed a similar pattern to 2017 and followed this up with a semifinal phase that ran from July 2, 2018 - August 15, 2018 where customer ratings were used to determine finalists for Alexa Prize 2018. This was followed by a feedback phase for finalists, leading up to a closed door finals on Nov 6-7, 2018. Throughout the competition, the teams were required to maintain anonymity in their interactions to ensure fairness in the competition.
To drive maximum feedback to the teams and improve user engagement, the Alexa Prize experience was promoted through Echo customer emails, social media, blogs, and third-party publications. During each phase of the competition we drove at least one significant promotion, with two during the semifinals - one at the beginning and the other towards the end. Each of these promotions were carefully timed to bring the Alexa Prize to the top of customerâs attention, invite those unfamiliar to engage with the skill, and educate customers about the Prize. Especially as teams faced elimination at the end of semifinals, we strove to ensure teams had enough customer interactions to implement changes to their socialbots. This year, to drive further engagement we announced the three finalists via a live Twitch broadcast on August 30th. The event was promoted in advance and saw a wide variety of audience including academia, media, developers, and hobbyists alongside all the participating teams.
3
Over the course of the 2018 competition, we have driven over 60,000 hours of conversations spanning millions of interactions, 50% higher than we saw in the 2017 competition.
# 2.1 2018 Alexa Prize Finals
The finals judging event was held in Seattle November 6-7, 2018. Each interaction involved 1 person talking to the socialbot and 3 people judging independently, determining the conversation stopping point based on coherence and engagement. An indication by a majority of the judges to end the interaction was counted as the end time for interaction. Conversations were rated by the judges on a scale from 1 to 5 stars, post indication of a need to end the conversation by a majority of the judges. The finals involved multiple interactors, with each interactor having multiple conversations with each socialbot. During the finals, the teams demonstrated an improved ability to listen to what the user was saying, better entity disambiguation, and sustain more multi-turn interactions on a topic. They also demonstrated the need to develop improved capabilities for multi-intent parsing and management of dialog resulting from long, multi-intent inputs as well as the addition of more variability to their responses.
# 3 Engineering Advancements Through the Conversational Bot Toolkit (CoBot)
# 3.1 Background
During the 2017 Alexa Prize, each of the competing teams built their own unique conversational agents. In our examination of each team's work, we identified common successful strategies, as well as common pitfalls, for the purpose of providing competitors in the 2018 Alexa Prize a starting point from which to build more sophisticated agents.
To help teams focus more on advancing the state of science in Conversational AI and minimize effort spent on infrastructure hosting and scaling, we created CoBot, a conversational bot toolkit in Python for natural language understanding and dialog management. The toolkit provides a set of tools, libraries and base models designed to help develop, train and deploy open-domain or multi- domain conversational experiences through the Alexa Skills Kit (ASK). The primary goal of CoBot is to drive improved quality of conversational agents by providing a toolkit that is modular, extensible, and scalable, and provide abstractions for infrastructure and low-level tasks. CoBot provides a continuous integration pipeline where experimentation in language understanding, dialog management techniques, and response generation strategies can be integrated and tested by a single command. This enables seamless scaling from development and test to production workloads on AWS.
CoBot uses many of the same principles found in the Node.JS, Python, and Java SDKs of the Alexa Skills Kit or ASK (Kumar et al., 2017), as well as general dialog toolkits like Rasa (Bocklisch et al., 2017) and DeepPavlov (Burtsev et al., 2017). CoBot exposes generalized dialog management, state tracking, and Natural Language Understanding (NLU) capabilities that are modular in nature. Unlike other toolkits, CoBot places an emphasis on infrastructure to host models and handle massive scale at runtime. While sharing key elements with the above-referenced Alexa SDKs, CoBot focuses on open-domain or multi-domain conversational experiences, with an emphasis on integration of pre-trained and university-developed deep learning models.
# 3.2 Impact
As a result of CoBot use, new Alexa Prize 2018 teams were able to build basic socialbots in significantly reduced timeline (weeks in 2018 vs. months in 2017), and drastically cut engineering overhead from scaling efforts (<2 days to address load testing failures in 2018 vs. 1-2 weeks in 2017). Students and faculty of CoBot-based teams became active participants in the development process, not only driving our engineering efforts through feature requests, but also contributing code for enhancements or bug fixes.
4
# 3.3 Design Philosophy
We engineered CoBot to be flexible, stateful, and scalable, while providing means of fast iteration, end-to-end software testing and experimentation through continuous integration and continuous deployment. Following are several key characteristics of CoBot:
⢠Flexible: The CoBot toolkit allows the user to mix and match Natural Language Understanding(NLU), Natural Language Generation(NLG), and dialog strategies. Default CoBot implementations of these strategies are easily overridden, A/B test support is built- in. Because CoBot is non-opinionated, users are in control of NLU, selection strategies, ranking strategies, and the deep learning frameworks used to train models.
⢠Stateful: Coherent multi-turn conversations are difficult to achieve without the use of context. CoBotâs state management stores the current state and state history of a conversation in AWS DynamoDB, a very fast key-value datastore. At a base level, the state is defined as the current user utterance, potential agent responses and any additional NLU attributes that were collected during a current turn in the conversation. The CoBot model of state management ensures low latency, along with consistent and reliable data access. It is flexible, allowing the user to store and retrieve any additional key-value pairs they want within the state.
⢠Scalable: During the 2017 Alexa Prize Amazon customers engaged in millions of conversations with socialbots, with large variability based on season, time of day and day of week, and occasional large spikes around marketing events. To efficiently handle this level of traffic in 2018, CoBot utilizes AWS Lambda, a serverless compute platform, and deploys large machine-learned models in Amazon Elastic Container Service (ECS). CoBot can be extended by making use of local modules (for light weight models) and remote modules (for large machine-learned models, whether team-developed or pre-trained libraries such as Spacy). CoBot can also take advantage of models hosted on AWS SageMaker, a fully-managed model training and inference service.
= UI Echo Device Alexa Skil Kit AWS Developer Team ASA, Intent â â__ â Ts C2 Compute Con @- = | ca compute Container! lication H i a Lota Balancer : IE docker ecz-computecontsiner | AWS ECS ECR Registry Developer Team
Figure 1 CoBot System Diagram and Workflow
# 3.4 Testing and Experimentation
CoBot provides several mechanisms with which to test code or evaluate feature additions. Local end-to-end testing launches local Docker containers to test modules. The CoBot CLI can be used to simulate conversations by ingesting a text file of sample utterances, or allowing the developer to provide interactive input through the keyboard. Through the local testing, developers can iterate on code in a local development environment, before committing their changes to their Beta or Prod environments. CoBot contains robust logging, configurable with standard Python logging levels. Logs are stored in Amazon CloudWatch, a monitoring and management service, and can be used to diagnose and debug errors. Using the CoBot CLIâs transcribe feature, a user can easily view the transcription of a conversation between the socialbot and a user.
5
CoBot allows experimentation by offering the ability to run A/B tests on alternately configured entry points (also called handlers). Users reaching one entry point may receive a different set of components, responses, or pipeline features than users reaching a separately configured entry point. The detailed architecture can be seen in Figure 2.
Figure 2 CoBot Architecture
Since feedback on Alexa Prize conversations is provided partly in the form of a numeric score, CoBot developers are able to experiment with A/B testing to provide insight to which features and scientific approaches yield the best results.
# 3.5 Toolkit and Data Services
CoBot provides seamless integration to the Alexa Prize Toolkit Service, which host several deep neural network models for common conversational Natural Language Processing (NLP) tasks. This alleviates the need for developers to recreate these common models. We have included models for Topic Classification, Dialog Act Classification, Sensitive Content Detection, Question Answering and Conversation Evaluation as part of the Toolkit Service.
Because the CoBot Toolkit leverages persistent storage of data and an underlying infrastructure on AWS, we are also able to provide rich data analysis services, like data querying and visualization. Turn-based conversational and state data are persisted to DynamoDB tables. Amazon S3 buckets are used to store daily ratings and feedback data, as well as weekly coherence and engagement data, which are correlated to conversational data in DynamoDB. CoBot makes it easy to use Amazon Athena to query and analyze data directly in Amazon S3 using standard SQL commands. In addition to this, Amazon QuickSight can be used to visualize this data and create live dashboards. The ability to query and visualize real-time data makes it easy for CoBot developers to make informed and prompt decisions based on data rather than hunches.
# 4 Scientific Advancements
The Alexa Prize fosters an environment that encourages a diverse set of experimental approaches to building open domain conversational agents or socialbots. The Alexa Prize participants this year have explored novel ideas in the areas of natural language understanding, dialog management, error correction, and personalization. We highlight below some of the approaches taken by Alexa Prize teams this year in these areas. For a more in depth explanation, we refer readers to the individual team papers.
6
# 4.1 From The Alexa Prize Participants
# 4.1.1 Handling Automatic Speech recognition (ASR) Errors
While performance of automatic speech recognition (ASR) has significantly improved in real-world far-field use cases such as on Alexa-enabled devices, it is still a challenging problem to achieve high accuracy across a wide array environmental conditions and for open-domain dialog systems where the user can discuss any topic and mention a huge number of unique named entities. As such, error rates tend to be higher during user dialog with Alexa Prize socialbots, and it is important for teams to be able to handle ASR errors before sending it to their socialbots for downstream tasks such as NLU processing. Iris (Ahmadvand et al., 2018) and Slugbot (Bowden et al., 2018) took a standard approach of having a threshold on word level and sentence level confidence scores. If the confidence was too low, the socialbot was prompted to ask the user to clarify what he/she said. However, if the socialbot prompts for clarification too many times this leads to a high friction experience for the user. Additionally, several teams retain n-best hypothesis for their response modules to counteract any noise in the input. Alquist took the approach of first applying case restoration techniques on ASR results before sending it to downstream applications such as NER.
Gunrock (Chen et al., 2018) took a unique approach by creating an ASR Correction module that captured homophones by leveraging a contextual knowledge base. When the word level confidence score was below a certain threshold, the socialbot would query the knowledge base for the domain of the current conversation to retrieve substitute noun phrases in the user utterance using relevant meta-phonetic (homophone) information.
# 4.1.2 Response Modules, Ranking Strategies and Natural Language Generation
In order to deal with the vast number of possible utterances coming into a socialbot, many teams used multiple response modules where each response module would be responsible for a particular domain or set of domains. For example, one response module may handle Q/A, giving facts/news along with modules trained to respond to certain topics such as movies, music or sports. These response modules used rule-based, retrieval-based or generative techniques to produce responses. Teams used open sourced datasets or APIs for their response modules such as Reddit, News API, EVI, Wikidata, IMDB, ESPN, Washington Post, DuckDuckGo, Rotten Tomatoes, Spotify, and Bing. For rule-based bots, teams used ELIZA (Weizenbaum et al., 1966) and relied on AIML to handcraft responses. For retrieval-based methods, Alana scraped popular subreddits such as âShowerThoughts" and âToday I Learned" and indexed them using Lucene. They then could execute a retrieval-based match for noun phrases mentioned in the userâs utterance.
After generating a set of candidate responses from response modules, socialbots need to select the best response. In order to try to avoid generic and incoherent responses, Alana (Curry et al., 2018) built bot-specific classifiers using the dialogue context. Alana manually annotated responses as appropriate, inappropriate or potentially appropriate to train their model. Iris (Ahmadvand et al., 2018) selected responses for their news bot by ranking results based on how far the words in the user utterance are from the start of a news article and based on the user's preferred news domains. Eve (Fulda et al., 2018) focused more broadly on trying to identify the subset of possible responses first before narrowing down on a single response. Their method revolves around embedding conversational history into an embedding vector along with candidate responses into an embedding vector. They then looked for a candidate utterance, and shifted their conversational history embedding in that direction to match the resulting flow. SlugBot and Eve, used rankers which had hand engineered features, based on items such as confidence score from a response generator, length of an utterance, whether an utterance has already been repeated or not and if an utterance was deemed coherent.
Most teams used template based modules for rendering responses. Gunrock tried to avoid duplicate responses by creating many surface forms for each template and picking one at random. They also
7
tried to create dynamic templates where specific slots can be swapped out. These methods, while resulting in generally coherent responses, still represent bounded simulations of the variety found in human-human conversation.
# 4.1.3 Knowledge Graphs
Representing knowledge and ingesting that knowledge in open domain conversation is a challenging problem but it is necessary in order to have multi-turn topical conversations. Many teams relied on representing knowledge in graph databases, with AWS Neptune serving as a popular choice.
Iris, Slugbot, and Gunrock used knowledge bases storing millions of different topics and entities. They leveraged open data sources such as DBPedia, Wikidata, Reddit, Twitter and IMDB. Fantom (Jonell et al., 2018) tried to take the idea further and turned their knowledge graph into a dialog structure. The graph contains nodes representing an utterance, either from the socialbot or from the user. An edge between node X to node Y means that utterance Y is an appropriate answer to utterance X. A path through the graph thus represents a dialog between user and the socialbot.
For knowledge ingestion, Alana (Curry et al., 2018) developed a system called the Contextualized Linked Concept Generator. When an entity is mentioned, they search for interesting entities within their graph using different linking strategies. Once a plausible entity in the same domain is found they will leverage the links used to find additional entities to generate a full response.
# 4.1.4 Dialogue Management
A key component of socialbots is to be able to manage dialog effectively. Many teams leveraged contextual information in their dialog management systems. This contextual information came from NLP pipelines that will be discussed in Section 4.1.7.
Many of these systems are still intent/slot based but have been extended to handle other features such as topic, sentiment, and dialog act. Alquist (Pichl et al., 2018) experimented with statistical dialogue management systems, utilizing hybrid code networks (HCN) (Williams et al., 2017) modified for the open domain use case, by taking in as input the conversational context, along with dialogue acts and system actions, and outputted either text responses or functions. Text responses are directly returned as the response while functions are represented as a reference to code which needs to be executed.
# 4.1.5 Sensitive Content Detection
Sensitive Content includes items such as offensive, abusive or hateful content. However, there are a broad set of utterances that do not have obvious profanity or hateful content but may be deemed sensitive due to recent events or deep knowledge of cultural context. To tackle sensitive content detection, teams such as Slugbot, Iris and Fantom took the standard approach by using a profanity checker before returning responses from bots. Eve (Fulda et al., 2018) tried to create sensitive detection models that utilizes context. Eve started with a blacklist of offensive words and divided it into categories of: 1) blatantly offensive words, 2) words that occur frequently in the context of offensive language 3) and words that may be offensive to an individual user which the bot only would talk about based on a user's permission.
The Alexa Prize provided a sensitive content classifier (described in Section 4.2.3) through the Cobot toolkit, which several teams utilized in their pipeline. Alana and Tartan trained their own model to detect abusive content. Tartan (Larionov et al., 2018) trained a supervised model on Reddit data using the âcontroversialâ score given for each user message as their signal. Alana trained a model on Alexa Prize data which they annotated themselves and once abuse towards their bot was detected, they would try to drive the conversation in a different direction to mitigate further offensive content.
8
# 4.1.6 Customer Experience and Personalization
Iris (Ahmadvand et al., 2018) took the approach of creating a personalized topic model to predict which topic a customer would be interested in discussing next. They trained and tested a Conditional Random Field (CRF)-based sequence model on specific groups of customers: returning customers, new customers and customers split by time zones. From their experiments, they found this model has higher accuracy as compared to purely suggesting random popular topics to users. Eve tried to create a personal experience by measuring users mood through sentiment analysis. This would then be used downstream to up-vote certain response generators they believe would respond well to a certain user personality. Alana built systems to make the customer feel more welcomed. Because the conversations in Alexa Prize are highly topical there will be entities mentioned by the socialbot that a user may not have heard of. Alana implemented an entity explanation mechanism which provides domain specific information associated with a given entity, extracted from the Wikidata (VrandeÄiÄ et al., 2014) knowledge base.
# 4.1.7 Natural Language Understanding(NLU) for Open Domain Dialog
After a user has initiated a conversation, a socialbot requires a robust Natural Language Understanding (NLU) system to identify various semantic elements of a user utterance. This includes but is not limited to intent, dialog act, topic, named entities and user sentiment (Allen 1995). NLU is a difficult problem in the open domain setting because of the ambiguity and uncertainty in conversational speech. Teams described innovative approaches in areas of conversation NLU, as summarized below.
Intent and Dialog Act Detection: Intents represent the goal of a user for a given utterance. A dialog system needs to detect intents well to respond appropriately to that utterance. In addition to this, a conversation can be broken down and represented as a series of dialog acts that are but not limited to question, request for information, or delivery of information. Iris, Gunrock and Alquist used supervised methods to train classification models for predicting intents and dialog acts. Slugbot used the CAPC dataset provided by Alexa Prize to train their intent models.
Named Entity Recognition (NER) and Noun Phrase (NP) extraction: Recognizing named entities (e.g. person, organization, etc.) in an utterance is an important step to understanding a userâs intent. Alquist trained custom NER models using manually labeled utterances gathered during conversations with real users. Iris and Fantom used a large database to recognize relevant entities in a user's utterance. Iris also trained an entity classifier, using DBpedia (Auer et al., 2007) entities as their dataset and a convolutional neural network.
Anaphora and Co-reference Resolution: Multi-turn dialog systems need to resolve ambiguous phrases with reference to prior entities for downstream tasks such as response generation. Fantom, Tartan and Slugbot did co-reference resolution by using open source tools such as SpaCy, Stanford CoreNLP, and NeuralCoref (2017). Gunrock noted that these state-of-the-art models are not adapted to conversational data and tried to build their own coreference resolution models using NER outputs. Alana performed ellipsis resolution by transforming a user utterance using contextual information, for example transforming "Yes" to "Yes I do like tea" such that their models can respond with much more detail.
Topic Detection: Because Alexa Prize conversations center around topics, it is crucial to predict topics of user utterances to lead to more coherent and engaging responses. Tartan, Slugbot, Fantom, Eve and Gunrock used the Cobot Topic Classifier as described in Section 4.2.2. Gunrock and Alana used knowledge graphs such as Google Knowledge graph and Microsoft Concept Graph to detect topics from named entities. Iris (Ahmadvand et al., 2018) introduced a contextual topic classifier into their system.
Sentence Segmentation: In natural conversation, users often express multiple thoughts in a single complex statement. Without post-processing on ASR outputs, NLU systems may face difficulty classifying these statements. Gunrock (Chen et al., 2018) trained a sentence segmentation module
9
to break a user's utterance into to smaller segments to be able to capture the semantic meaning, and utilized start and end times of words in ASR hypotheses provided by Amazon as features to help train their model.
Sentiment: To better understand a user's mood, teams used sentiment detection methods. Some teams used open source tools such as VADER(Gilbert et al., 2014). Alquist trained a bidirectional recurrent (GRU) neural network on movie review data, generalizing to other conversational inputs.
# 4.2 Science Advancements from the Alexa Prize Team
# 4.2.1 Automatic Speech Recognition
Automatic Speech Recognition (ASR), a system which converts spoken words to text, is the first component a user interacts with in a spoken dialog system. It is crucial to recognize a user utterance as accurately as possible, as this will impact all of the downstream systems.
Entity Error Rate Due to the highly topical nature of the Alexa Prize conversations, it is important to capture entities in user utterances which can then be used by the socialbots to return responses. In addition to measuring Word Error Rate(WER), we also actively measure Entity Error Rate(EER). Each word in a user utterance is annotated with a topic. For a particular topic say âEntertainment_Musicâ the number of substitution and deletion errors are counted. This metric roughly captures how well we are recognizing named entities. A key technique for reducing EER is moving to contextual ASR systems. We will present work we have done in this area taking a non-neural and neural approach for language modeling.
Contextual Language Model Adaptation ASR systems traditionally consist of an acoustic model, a pronunciation model, and a statistical language model. Traditionally the role of the statistical language model is used to resolve ambiguities, regardless of context. We can improve the accuracy of the system by leveraging contextual information. For example, if language model has to determine the probability of the next word in the utterance âAlexa, I love ⦠â. The probabilities of the next word being âThe Godfatherâ or âThe Beatlesâ would be very different if the userâs recent interactions with Alexa were about movies or music.
We took two approaches to this problem â first, we added contextual information to a dynamic interpolation framework. In work we previously reported (Raju and Hedayatnia et al., 2018), we describe a machine-learning system that mixes different statistical language models based on contextual information. The method can leverage any type of contextual information, but in particular can examine the history a userâs interactions with a socialbot system. More specifically we dynamically mix together various n-gram based language models. This is done by changing the interpolation weights for our n-gram based LM at each turn to better adapt and recognize a user utterance. We predict these interpolation weights using a Deep Neural Network that is trained to maximize the log-likelihood of the training data. We reported relative Word Error Rate (WER) reductions up to 6%. For Entity Error Rate (EER), we reduced errors by 10-15%. The EER metric is particularly important because recognizing named entities is crucial for socialbot systems. The complete list of results can be found in our results section.
We have also explored neural methods to incorporate context, involving adding contextual information for Recurrent Neural Network (RNN) models in a rescoring framework. In work presented in (Mehri et al.), we explore various methods to effectively encode conversational context into a Neural Language Model (NLM), specifically an LSTM-RNN model. Our contextual information primarily consists of the history of a user's interactions with a socialbot system. Additionally we look at integrating linguistic features such as sentence topic derived from the context. The models used to generate these derived features allow us to leverage external data sources for training. We explore various architectures on how to best incorporate this contextual information and in particular, deal with differences between socialbot responses and user utterances.
10
We obtained a 6.4% Word Error Rate (WER) reduction and a 7.2% Entity Error Rate (EER) reduction over a non-contextual NLM. The full results are shown in Table 4.
# 4.2.2 Contextual Topic and Dialog Act Models
Identifying topics (such as Sports or Politics) and corresponding keywords from an utterance within a conversation helps in retrieving or generating the right set of information requested by the user. Dialog acts such as âgreetingâ, âquestionâ, âopinion requestâ, etc. are useful as general intents in the context of conversations, which guide the flow of the conversation. When combined, these are valuable features for open domain natural language understanding.
One of the challenges with topic, keyword and dialog act detection in open-domain conversations is effectively integrating context. In human-human or human-bot interactions, it is possible that the topic in the current utterance is a function of utterances or responses in prior turns.
To help university teams building better socialbots in 2018, we built context-aware topic and dialog act classification models. We obtained annotations for 10,000 Alexa Prize conversations with over 120,000 utterances, labeled across 12 topics (Politics, Fashion, Sports, Science and Technology, Entertainment [Music, Movies, Books, General], Phatic, Other, Interactive, Inappropriate Content) and 14 dialog acts (Information [Request, Delivery], Opinion [Request, Expression], General Chat, Clarification, Topic Switch, User Instruction, Instruction Response, Interactive, Other, Multiple Goals, Frustration Expression, Not Set). We also obtained word level annotations to obtain the topic of each word within an utterance. We trained Deep Average Network and BiLSTM based contextual models. These models are described in more detail along with some example annotations at (Khatri et al., 2018a).
# 4.2.3 Sensitive Content Detection Classifier
In an open-domain dialog setting, one of the hardest tasks to address is detecting sensitive or offensive content. Sensitive content includes racism, profanity, hate speech, violence, sexual content or any kind of inappropriate content which may be offensive to people based on gender, demographic factors, culture or religion. There are many challenging aspects of this problem, such as coverage (a blacklist of words may not capture all kinds of sensitive content), cultural differences (sensitive content for one culture may not be sensitive for other), context (when viewed as a single utterance might seem perfectly innocuous but when viewed within a wider context they become offensive), sarcasm, non-standard vocabulary, factual correctness and recency of the content.
Most of the teams retrieve information and train models for their socialbots from publicly available data sources such as Reddit and Twitter, which are well-known to contain significant amounts of offensive materials. No large-scale labeled data set for training exists that addresses all the types of sensitive content described above.
To address this challenge, we generated training data from common internet forum conversations using a two-stage semi-supervised approach, which is illustrated in Figure 3.
Sample NSFW utterances Sen: Sorting with toxic rances subreddits classifier Nea â | ron | NS Toxic Classifier JMC : ° WU, 10 M Non-Sensitive Utterances Sorted Subreddits: Most sensitive to least using Blacklist âTraining Data for ) l BiLSTM ) Y Y Stage 1 Stage2
Figure 3 Two Stage Semi-Supervision for Sensitive Content Detection
11
Stage 1 consists of sorting these topical forums with our blacklist by simply counting the number of words in said blacklist. Our blacklist corresponds to a manually curated list of approximately 800 offensive words. During Stage 2, high confidence sensitive comments and non-sensitive comments were sampled using a weakly supervised classifier trained on the Toxic Wikipedia Comments dataset (Jigsaw, 2018). Overall, we sampled 10 Million comments each for our sensitive and non- sensitive classes, which were then used to train a BiLSTM classifier in which word representations were initialized using GLoVE (Pennington et al., 2014) embeddings and then fine-tuned during training. More information about the model can be obtained at (Khatri et al., 2018b). Results of using this model can be found in Section 5.1.3.
# 4.2.4 Contextual Conversational Evaluators
Evaluation of dialogue systems is a challenging research problem, which despite being studied lacks a widely-agreed-upon metric. There is significant previous work on evaluating goal oriented dialogue systems such as TRAINS system (Ferguson et al., 1996), PARADISE (Walker et al., 1997), SASSI (Hone and Graham, 2000) and MIMIC (ChuCarroll, 2000), which are easier to evaluate than open-ended, non-goal oriented systems, because we can measure systems by successful completion of tasks. Venkatesh et al., 2017 describes limitations in the Turing Test (Turing 1950) model, due to divergences in information availability, objective misalignment and incentive to produce plausible but low-information content responses.
One of the primary ways in which quality of socialbots is evaluated through Alexa Prize is through user ratings. We ask Alexa users to rate the conversation on the scale of 1 to 5 based on how much they would like to speak with the socialbot again. These ratings don't provide immediate feedback during the interaction and they don't provide information about turn-level quality. To address this challenge, we defined the following five metrics for evaluating turn-level performance of open- domain dialog systems:
1. Comprehensible: The information provided by the socialbot made sense with respect to the user utterance.
2. On-topic or Relevant: The socialbot response was on the same topic as the user utterance or was relevant to the user utterance. For example, if a user asks about a baseball player on Boston Red Sox, then the socialbot should mention the correct baseball team.
3. Response Incorrectness: The socialbot response is irrelevant or inappropriate. In a highly subjective task, it is hard to evaluate the correctness of the response. Evaluating if the response is irrelevant or inappropriate, however, is relatively easy.
4. Interesting: The socialbot response contains information which is novel and relevant. For example, the socialbot would provide an answer about a baseball player and provide some additional information to create a fleshed out response.
5. Continue Conversation: Given the current state of the conversation and the system response, there is a natural way to continue the conversation. For example, this could be due to the socialbot asking a question to the user about the current conversation subject.
Around 15,000 Alexa Prize conversations containing 160,000 utterances were annotated by humans on the five metrics described above. We provided definitions and examples to the annotators and trained them to annotate Alexa Prize conversations. We asked them to annotate each response given the utterance and context (past turns) on how coherent and engaging the response is. They were asked to read the conversation starting from the first turn and then assign either âyesâ or ânoâ corresponding to each metric (âis comprehensibleâ, âis on-topicâ, âis incorrectâ, etc.) for each turn. At turn number ânâ, they had access to entire context in the past ân-1â turns, i.e. past utterances and responses to evaluate the response given the context.
12
Past Utterance, Response Context (LSTM Encoder) tâ+ Comprehensible Utterance (Transformer Sentence J Embedding) -â [â+ On-topic FENN }-â Interestin, Response (Transformer Sentence 8 Embedding) tâ Continue Conv. tâ Incorrect Features (POS, Dialog Act, Entity Grid, Topic, NE, Sentiment, ...)
Figure 4 Contextual Conversation Evaluators
Figure 4 depicts the model we trained corresponding to the five proposed metrics for dialog evaluation. We used several features, including context and entity-grids, to train these models. More information about the model can be obtained at (Yi et al., 2018). These models can be integrated into several dialog system components, including statistical dialog managers. More information on the metrics used in the conversation evaluators can be found in Section 5.1.4.
# 5 Results
In Section 4.2, we described several models we built and shared with the university teams for use in the Alexa Prize competition. In this section, we first provide the performance of various models that were shared with the teams and then showcase the improvement in quality of socialbots over the course of the competition on various metrics.
# 5.1 Alexa Prize Models
# 5.1.1 Automatic Speech Recognition (ASR)
In Figure 5 we can see the relative Word Error Rate (WER) and Entity Error Rate (EER) improvement since the start of the 2018 Alexa Prize with respect to the final 2017 Alexa Prize baseline. In our currently deployed model, the WER and EER are 30% and 26% lower than at the
100 Conversational ASR Relative WER/EER 90 e t a R r o r r E e v i t a e R l 80 70 60 50 40 30 20 10 0 Baseline 2 3 2018 Alexa Prize ASR Model Versions 4 5 WERR EERR
Figure 5 Relative Word Error Rate and Entity Error Rate relative to the end of Alexa Prize 2017
end of the 2017 Alexa Prize (and 55% and 34% lower, respectively, than at the start of the program). Significant improvement in ASR quality have been obtained by ingesting the Alexa Prize
13
conversation transcriptions in the models and by the Language Model advancements described in this paper.
# Contextual Language Model Adaptation
We utilized two methods for contextually biasing language models â first was an n-gram based method, the results of which are described in Table 3. As described in Section 4.2.1, we bias the language model using contextual information in the form of past user utterances and TTS responses, adapting the interpolation weights of an n-gram language model at each user turn. These interpolation weights are outputted by a deep neural network (DNN) which takes in as input the contextual information. Cross-Entropy (XENT) and perplexity (PPL) refer to alternate loss functions that were used to train the DNN model. We refer the reader to (Raju and Hedayatnia et al., 2018) for more details on these models.
Relative Word Relative Entity Model Features Perplexity Error Rate (%) Exror Rate (%) Decoder: I-pass No Adaptation (Baseline) - 60.77 - DNN (Xent) prev, meta 59.81 -1.73% -8.19% DNN (PPL) prev, meta 58.14 -1.61% -2.98% DNN (PPL) prev-d, meta 55.66 -2.76% -10.92% Decoder: 2-pass DNN (PPL) prev, curr, meta 42.03 -5.58% -15.15% DNN (PPL) prev-d, cur, meta 42.83 -5.92% -14.67% DNN (PPL) cur, meta 42.72 -5.98% -15.32% Topic Model cur 45.08 -5.52% -13.14%
# Table 3 Contextual ASR results
We also experimented with a neural approach to bias RNN language models, as described in Table 4. More detail on this method can be found in (Mehri et al.). âAvg Embedâ and âDiff LSTM Encâ refer to methods to encode TTS responses into our model. For the first method, we average word embeddings for TTS responses and concatenate them at every time-step of our RNN model. For the second, we use a separate RNN encoder to encode our TTS responses into a hidden state vector which is used to initialize the hidden state of our decoder model.
Models Relative Word Error Rate (WER) Context Free Baseline Average Embed + Prepend Different LSTM Encoder + Derived Average Embed + Derived - -4.70% -6.10% -6.40% - -5.90% -7.20% -7.20%
Table 4 Relative WER and EER using RNN Language Models w.r.t Context Free Baseline
# 5.1.2 Topic and Dialog Act Models
Based on our experiments, a bidirectional LSTM-based (BiLSTM) contextual dialog act model performs significantly better than the equivalent non-contextual model. The baseline model obtains an accuracy of 50% while adding parts of speech, context and topical features (obtained using DAN based semi-supervised topic model (Guo et al., 2017) leads to an improved accuracy of 71%. Similarly, a BiLSTM contextual topic model outperforms the non-contextual BiLSTM, Deep Average Network(DAN) and Attention Deep Average Network(ADAN) models. We obtained an accuracy of 76% for our best setting when compared to 55% obtained through our baseline model.
14
We have observed that these tasks are difficult for manual annotation because of the subjective nature of the problem â Table 5 provides more detailed information on inter-annotator agreement. The accuracy of the contextual dialog act (70%) and topic models (76%) is higher than the percentage of times annotation matches by all three annotators. In fact, the contextual dialog act model's accuracy is also higher than the case where at least two annotators provide the same annotation, which implies that the model has learned to generalize well on this subjective task and correct for variation in individual annotators.
For the keyword detection task, we observed that a supervised model outperforms unsupervised attention-based models (Guo et al., 2017). The contextual BiLSTM model performs better than ADAN on precision and recall of keyword detection as described in Table 6. In future work, we will explore the performance of models with explicit anaphora and co-reference resolution features. Most Alexa Prize teams used the topic, keyword and dialog act models to improve the quality of their socialbots.
Agreement Level Dialog Act Detection Topic Detection 2 out of 3 annotators match 70% 89% All 3 annotators match 63% 75% Kappa Score 41% (Moderate) 67% (Good)
Table 5 Inter-annotator agreement
Supervised Keyword Detection Accuracy (Binary, . Model Multi-Class) Keyword Precision Keyword Recall BiLSTM 0.93, 0.57 0.81 0.55 Binary: Word is correctly identified as a keyword Multi-class: Keyword class is correctly identified Unsupervised Keyword Detection Model Keyword Precision Keyword Recall Attentional DAN (ADAN) 0.37 0.36 Contextual ADAN 0.33 0.32 Dialog Act + Contextual ADAN 0.40 0.40
Accuracy (Binary, . Model Multi-Class) Keyword Precision Keyword Recall BiLSTM 0.93, 0.57 0.81 0.55 Binary: Word is correctly identified as a keyword Multi-class: Keyword class is correctly identified
Model Keyword Precision Keyword Recall Attentional DAN (ADAN) 0.37 0.36 Contextual ADAN 0.33 0.32 Dialog Act + Contextual ADAN 0.40 0.40
Table 6 Keyword detection metrics.
# 5.1.3 Sensitive Content Detection Classifier
The models trained using two-stage semi-supervision (TS Bootstrap) as described in Section 4.2.3 outperform supervised models trained on the Alexa Prize annotated test-set and on the following publicly available dataset: Toxic Wikipedia Comment data (TX). On the Alexa Prize test set (sample size: 6000 utterances), the TS Bootstrap model scored an Accuracy and F1-score of 96% and 95.5% respectively as compared to supervised models trained on annotated Toxic data, which scored 85% and 75% respectively.
We note that the model trained using large scale two-stage semi-supervision generalizes well across a variety of unseen datasets such as Twitter. Results of our Sensitive model are shown in Table 7. It can be observed that our large scale semi-supervised techniques (Bootstrap and TS Bootstrap) are
15
able to detect challenging edge cases such as "Santa claus is the most elaborate lie ever told" and "Jerusalem is the capital of Israel".
We also note that blacklist-based approaches donât scale well - adding more blacklist words (such as "ugly" and "devil") leads to false positives while not adding them leads to poor recall. More information about the model can be obtained at (Khatri et al., 2018b).
Model Fl-score Accuracy Blacklist (blklist) 63.0 80.6 Supervised Twitter Classifier (sTwC) 74.9 B.9 Supervised Toxic Classifier (sToC) 74.0 85.0 Blacklist based Bootstrap Classifier (BbC) 92.0 90.0 Two Stage Semi-Supervised Classifier (tsSC) 95.5 96.0
Sensitive Utterances biklist sTwC sToC BbC tsSC "i **** somethinâ out this mornin prettier than you" v v v v v "They are fat and ugly" x v v v v "Santa claus is the most elaborate lie ever told" x x x x v "mate, are you jewish? ******* is in the air" x x x x v "why donât you get a life you sicko" x x x x v
Table 7 Model performance and Sensitive content detection examples
# 5.1.4 Conversation Evaluator Results
Table 8 shows the results of our evaluator as compared with a baseline, which is a Deep Average Network (Iyyer et al. 2015) with utterance and response word embeddings concatenated at the input. Embeddings are fine-tuned during the task for all the models. We present precision, recall, and F- score measures along with the accuracy. Furthermore, since the class distribution of the data set is highly imbalanced, we also calculate Matthews correlation coefficient (MCC), which takes into account true and false positives and negatives. This is generally regarded as a balanced measure, useful when class sizes diverge. Evaluator predicted scores have significant positive correlation (pearson correlation of 0.2, 0.25, 0.4, and 0.3 with comprehensible, on-topic, interesting and continue conversation metrics, all with p-value of <0.0001) with the overall human evaluation score on this task.
, Evaluator aan tion Accuracy Precision Recall Fil-score MCC Comprehensible 0.8 0.84 (43%) 0.83(+1%) 0.84(43%) 0.84(48%) 0.37 (+107%) On-topic 0.45 0.64 (49%) 0.65 (+10%) 0.64(+9%) 0.64 (+12%) 0.29 (+81%) Interesting 0.16 0.83 (-1%) 0.77(+10%) 0.83(-1%) 0.78 (42%) 0.12 (+inf%) Conti. Conversation 0.71 0.75 (+4%) 0.73 (+5%) 0.75 (+4%) 0.72 (+17%) 0.32 (+179%) Incorrect 0.44 0.93 (+12%) 0.93 (412%) 0.93 (+12%) 0.93 (+12%) 0.83 (+35%) Numbers in parenthesis reflect the relative change when using our best model with respect to the baseline
# Table 8 Conversation evaluators performance
# 5.2 Socialbot Quality
Over the course of the competition, socialbots showed a significant improvement in customer experience. In this section we provide various metrics to evaluate the quality of socialbots and evaluate the improvement in 2018 socialbot quality in comparison with that observed in the 2017 competition.
16
# 5.2.1 Ratings
After each conversation, Alexa users were asked to rate their interaction on a scale of 1-5. Socialbot ratings have significantly improved when compared to last year. From Figure 6, we see that from the beginning of 2018 competition, the average rating of all the socialbots in 2018 was higher than the average rating of all the socialbots in 2017 at the end of the semifinals phase. We see that the 2018 socialbots began the semifinals phase with higher ratings when compared to 2017 finalists or all 2017 socialbots. This better baseline can be explained by the fact that 2018 teams had access to the Conversational Bot Toolkit (CoBot), which had several in-built components pertaining to NLU, dialog management, and response generation as well as baseline models to start with. Furthermore, 2018 teams also had access to learnings from the architecture and techniques reported in the 2017 Proceedings of the Alexa Prize.
While the difference in average ratings across all the socialbots (until the end of semifinals) for 2018 and 2017 is significant (10%), this difference is not equally significant for the finalists (5%). There are several possible confounding factors in this year-over-year measurement â as socialbots have evolved, so did the baseline and expectations of users. To compensate for this, we surfaced a small amount of traffic to the winning socialbot in the 2017 competition (with refreshed data for 2018), and compared the ratings during the 2018 finals period. All of the 2018 finalists are rated higher than last yearâs winner, with 2018âs top-rated socialbot more than 5% higher than last yearâs winner.
4 Average Rating 3.5 3 g n i t a R e g a r e v A 2.5 Finalists (2018) All Bots (2018) 2 All Bots (2017) Finalists (2017) 1.5 1 4/5 5/5 6/4 7/4 8/3 Dates 9/2 10/2 11/1
Figure 6 Ratings provided by Alexa users to socialbots
# 5.2.2 Conversation Duration
2018 socialbots have performed significantly better on a duration basis than 2017 socialbots, as can be seen in Figure 7. By the end of the semifinals, the median duration of 2018 socialbots was 128 seconds, which is 53% higher than the 2017 socialbots at the same point in time. In fact, the median duration of all the 2018 socialbots during the end of the semifinals was 23% higher than the 2017 finalists. Overall, median duration of 2018 finalists has been around 40% higher than then 2017 finalists. We observe similar patterns with respect to the 90th percentile of the conversation duration distribution as depicted in Figure 8. 90th percentile duration for 2018 socialbots is 37% higher than that of 2017 bots for all the socialbots and for that of finalists. We consider the 90th percentile duration as a proxy signal for the maximum plausible conversation duration of socialbot interacting with a highly interested and engaged user.
17
250 Median Conversation Duration Distribution 200 Duration (seconds) B âFinalists 2018 50 Finalists 2017 âSocialbots 2018 ââSocialbots 2017 4/5 Apes 5/As 6/4 6/24 7/4 8/3 8/23 9/2 10/2 Dates
Figure 7 Median Alexa Prize conversation duration
90th Percentile Conversation Duration Distribution 800 700 600 Z soo S 3 2 400 ra Ss § 300 8 âââFinalists 2018. 200 Finalists 2017 Socialbots 2018 100 Socialbots 2017 oO a/s 4/25 s/s 6/4 6/24 na 8/3 8/23 9/2 10/2 10/22 Date
# Figure 8 Conversation Duration 90th percentile for socialbots during the competition
# 5.2.3 Number of Turns
Conversations held during the 2018 competition are significantly longer than the ones held in the 2017 competition as observed in Figure 9. The average number of turns by the end of the 2018 semifinals across all the socialbots is 12.6, which is 25% higher when compared to 2017 (10.1 turns). Two spikes can be observed in the 2017 graphs: (1) The end of semifinals and (2) The last week of September. These spikes were associated with 2017 teams introducing games and quizzes to obtain higher ratings and longer conversation â teams were instructed on both occasions to remove these, and these techniques were not permitted in the 2018 competition. Overall, the increase in the number of turns is on the order of 20-25% while the increase in conversation duration is 35-40%, which implies that 2018 socialbots are not only having longer conversation but more meaningful topical conversations with longer turn-by-turn interactions.
18
18 Average Number of Turns 16 14 s m r u T f o r e b m u N 12 10 Finalists (2018) All Bots (2018) Finalists (2017) 8 All Bots (2017) 6 4 4/5 5/5 6/4 7/4 8/3 9/2 10/2 11/1 Dates
Figure 9 Average turn distribution for socialbots during the competition
# 5.2.4 Response Quality
We obtained annotations for response quality from the Alexa Prize conversations across all the 2018 socialbots. We defined five classes to describe the quality of a response: (1) Poor (response is not comprehensible or bot didn't understand user or response is sensitive) (2) Not Good (the socialbot had some understanding of what user said but contains incorrect or inappropriate information) (3) Passable (response is on topic but is generic or contains too little or too much of information) (4) Good (response is logical, on topic, contains accurate information but lacks personality or is not presented well and obviously coming from a bot), and (5) Excellent (contains accurate information, is complete and it is hard to identify if the response was coming from a bot). We have aggregated these classes across 3 phases: (1) Pre-semifinals (2) Semifinals and (3) Post Semifinals, as shown in Figure 10, showing that socialbots have improved through the competition cycle.
Figure 10 Annotated Response quality for 2018 socialbots
19
# 5.2.5 Response Error Rate
We define Response Error Rate (RER) as the ratio of incorrect responses to total responses, as obtained through human annotations (see section 4.2 for more detail). RER is an important metric for evaluating socialbots. Given that socialbots are non-goal oriented open-domain systems, it is often hard to define a correct response - however it is easier to define and measure what is an incorrect, inappropriate or incoherent response. From Figure 11, it can be observed that the teams significantly reduced average RER through the 2018 competition. Socialbots in 2018 started with around 35% RER, but improved significantly during the semifinals stage. By the end of semifinals, the average RER for all 2018 socialbots was nearly 20% lower than the 13.22% seen at a comparable time in 2017.
Response Error Rate 35 Socialbots 2017 30 Finalists 2017 Socialbots 2018 25 Finalists 2018 ) % ( R E R 20 15 10 5 0 5 / 1 5 5 / 3 0 6 / 1 4 6 / 2 9 7 / 1 4 7 / 2 9 8 / 1 3 Date
Figure 11 Annotated Response Error Rate for Socialbots during the competition
# 5.2.6 Coherence and Engagement
Similar to response quality, we performed turn-level annotations for coherence and engagement as depicted in Figure 12. Coherence is been measured using two metrics: 1) response is comprehensible and 2) response is on topic. Engagement is measured using the following metrics: 1) response is interesting, and 2) the user would want to continue the conversation. These metrics were defined in Section 4.2 under conversation evaluators.
# socialbots
20
We observe that socialbots have improved significantly over the competition over all the metrics mentioned above and appear to perform better in the post semi-finals stage on these metrics. We have also aggregated user ratings across all the metrics and observe that responses which are on topic and are interesting are associated with the highest ratings.
# 5.2.7 Topical Analysis
To improve the quality of socialbots, we shared a variety of data with the teams including topical analysis of conversations. From Figure 13, we can observe that âMoviesâ, âMusicâ and âTechnologyâ are the most popular topics apart from âOtherâ. âOtherâ corresponds to any topic beyond the provided list of topics, as well as utterances containing multiple topics. We have observed that ratings have consistently improved across the three phases and that socialbots tend to have higher topical coverage in the post-semifinals stage compared to pre-semifinals stage, and that responses which are on the same topic lead to higher ratings.
Figure 14 illustrates that socialbots frequently tend to generate a response on the same topic as a user utterance, followed by a response classified as âOtherâ, which is often phatic or an effort at topical redirection. This sort of topical switching behavior is associated with lower ratings, and we have previously reported a strong association between higher ratings and socialbots that can maintain longer on-topic conversations.
Figure 13 Topic distribution for 2018 socialbots during three phases of the competition
21
21
# 6 Discussions and Recommendations
Building an open domain conversational agent is a difficult task and there are many problems still to be solved to consistently generate coherent and engaging conversations across a wide variety of topics. From the results in Section 5.2.7 on Topical Analysis it was observed that while many responses were on topic, users did not necessarily consider them interesting. Many of these systems rely on retrieval and rule-based methods because it is hard to ingest knowledge from a structured database and present it naturally; to improve on this result, better approaches in generating relevant and engaging responses will need to be explored. Alana investigated this area with an ablation study on response modules and measured their average rating. They found that removing their rule-based module ELIZA had the largest impact on ratings. Rule/retrieval based models still play a significant role in these systems because they provide coherent responses which lead to higher ratings; however, they are relatively brittle, do not scale well to the variety of user inputs seen over a many-turn interaction, and often do not provide interesting responses for repeat users.
Additionally, the results in Section 5.2.7 on Topical Analysis showcases that socialbots are very strong in certain topics such as movies and will tend to drive the conversation towards these competency areas. We observed that socialbots tend to generate on topic responses in these strength- areas, such as Movies, but less so for areas with less robust coverage, such as Fashion. A generalized mechanism for scaling conversational response models to additional topic domains is evidently needed. Furthermore, the conversational strategy of quickly transitioning to alternate and often minimally or unrelated topical areas has been adopted by many teams to the detriment of customer satisfaction â socialbots need to be able to engage with users on a certain topic for multiple turns to sustain deep topical conversations.
Personalization is also an important factor to enhance user experience. Personalization can come in various forms such as topic selection and steering in conversation. Instead of driving conversations to areas where a bot can perform well, if conversation is driven towards topics in which users are more interested, engagement and ratings will improve. Iris did a small-scale study on how a personalized topic suggestion feature would affect ratings. From their study they found that personalized topic suggestion received an average of 4.02 rating from 360 returning users and 3.22 rating from 2,161 new users. Without personalized topic suggestion, their socialbot scored an average of 3.65 from 178 new users, while receiving only a 2.80 average rating from 52 returning users.
Finally, sentence segmentation is important to handle complex utterances. Utterances can become more complex in a conversational setting, with many pauses and hesitations in a user utterance. Unless properly parsed, it may be hard for NLU systems to understand a user's intent properly. It is also important to condition these NLU systems on contextual information such as past utterances and bot responses.
22
# 7 Conclusion and Future Work
Conversational AI is one of the most challenging problems in the artificial intelligence field. Several tasks associated with Conversational AI such as language understanding, knowledge representation, commonsense reasoning and dialog evaluation are believed to be âAI Completeâ, or truly human- intelligence equivalent problems. To address these challenges, Amazon launched the Alexa Prize competition, wherein some of the best research groups across the world work towards a common goal of advancing the state of science in Conversational AI. Alexa Prize 2018 is the second iteration of the competition and the participating university teams have successfully built socialbots which can converse with Alexa users coherently and engagingly on variety of topics. Teams built on the success of the first year of the competition with sophisticated statistical dialog management, improved personalization, complex utterance handling, greatly improved infrastructure, and the use of many machine learned models to assist in conversational language understanding.
We have observed significant scientific advancements brought by the 2018 Alexa Prize. We have also seen significant improvements in ratings, socialbot quality and conversational duration when compared to the 2017 competition. We believe we have still only touched the surface of the problem of natural human conversation, and it is still Day One for Conversational AI. We expect to see even more advancements in coming years, enabling more engaging, deeper multi-turn interactions, improved use of natural language generation in response generation, and models which can leverage both structured and unstructured knowledge sources for more robust multi-turn open-domain conversation.
# Acknowledgments
We would like to thank all the Alexa Prize participants - university students and their advisors, for continuing to raise the bar on delivering an engaging experience to Alexa customers while pushing the boundaries of Conversational AI. We gratefully acknowledge the work of Sanghyun Yi and Alessandra Cervone, leads from Alexa Prize 2017 teams who chose to intern with the Alexa Prize team, and Rahul Goel from Alexa AI, in supporting the building of utilities that the current and future Alexa Prize participants could leverage to improve customer experience. We would also like to thank Amazon leadership for believing in the efforts being put in by the universities to further science and continuing to support the program. We additionally would like to acknowlegde the Alexa Principal Scientist community for their vision and support through this entire program; Marketing, PR, Legal and Digital Security for continuing to help drive the right messaging and a consistently high volume of traffic to the Alexa Prize skill, ensuring sufficient real world feedback for the participating universities in a secure manner. The competition would not have been possible without the support of all Alexa organizations including Engineering, Speech, NLU, Data Services and ASK and their leadership. And finally, we would like to thank Alexa customers who continue to help improve Conversational AI through their millions of interactions with the socialbots.
References Williams, J., Raux, A., & Henderson, M. (2016). The dialog state tracking challenge series: A
review. Dialogue & Discourse, 7(3), 4-33.
Burtsev(2018). ConvAI2. Retrieved from http://convai.io/
Hassan, H., Aue, A., Chen, C., Chowdhary, V., Clark, J., Federmann, C., Huang, X., Junczys- Dowmunt, M., Lewis, W., Li, M. & Liu, S. (2018). Achieving human parity on automatic chinese to english news translation. arXiv preprint arXiv:1803.05567.
Xiong, W., Droppo, J., Huang, X., Seide, F., Seltzer, M., Stolcke, A., Yu, D & Zweig, G. (2016). Achieving human parity in conversational speech recognition. arXiv preprint arXiv:1610.05256.
23
Burtsev, M., Seliverstov, A., Airapetyan, R., Arkhipov, M., Baymurzina, D., Botvinovsky, E., Bushkov, N., Gureenkova, O., Kamenev, A., Konovalov, V. & Kuratov, Y. (2018). DeepPavlov: An Open Source Library for Conversational AI.
Ferguson, G., Allen, J. F., & Miller, B. W. (1996, May). TRAINS-95: Towards a Mixed-Initiative Planning Assistant. In AIPS (pp. 70-77).
Walker, M. A., Litman, D. J., Kamm, C. A., & Abella, A. (1997, July). PARADISE: A framework for evaluating spoken dialogue agents. In Proceedings of the eighth conference on European chapter of the Association for Computational Linguistics (pp. 271-280). Association for Computational Linguistics.
Chu-Carroll, J. (2000, April). MIMIC: An adaptive mixed initiative spoken dialogue system for information queries. In Proceedings of the sixth conference on Applied natural language processing (pp. 97-104). Association for Computational Linguistics.
Hone, K. S., & Graham, R. (2000). Towards a tool for the subjective assessment of speech system interfaces (SASSI). Natural Language Engineering, 6(3-4), 287-303.
Bocklisch, T., Faulker, J., Pawlowski, N., & Nichol, A. (2017). Rasa: Open source language understanding and dialogue management. arXiv preprint arXiv:1712.05181.
Ram, A., Prasad, R., Khatri, C., Venkatesh, A., Gabriel, R., Liu, Q., Nunn, J., Hedayatnia, B., Cheng, M., Nagar, A. & King, E. (2018). Conversational ai: The science behind the alexa prize. arXiv preprint arXiv:1801.03604.
Fang, H., Cheng, H., Sap, M., Clark, E., Holtzman, A., Choi, Y., N.A. & Ostendorf, M. (2018). Sounding Board: A User-Centric and Content-Driven Social Chatbot. NAACL HLT 2018, 96.
Hector J Levesque. 2017. Common Sense, the Turing Test, and the Quest for Real AI: Reflections on Natural and Artificial Intelligence. MIT Press.
Weizenbaum, J. (1966). ELIZAâa computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45.
Sordoni, A., Galley, M., Auli, M., Brockett, C., Ji, Y., Mitchell, M., Nie, J.Y., Gao, J. & Dolan, B. (2015). A neural network approach to context-sensitive generation of conversational responses. In Proc. NAACL , pages 196â205, 2015
Vinyals, O., & Le, Q. (2015). A neural conversational model. In Proc. ICML, 2015.
AlexaPrizeTeams. 2017. The Alexa Prize Teams the socialbot competition teams. Retrieved from https://developer.amazon.com/alexaprize. [Accessed: 2017-10-28].
Kumar, A., Gupta, A., Chan, J., Tucker, S., Hoffmeister, B., Dreyer, M., ... & Monson, C. (2017). Just ASK: building an architecture for extensible self-service spoken language understanding. arXiv preprint arXiv:1711.00549.
Turing, A. M. (1950). Computing machinery and intelligence. In Parsing the Turing Test (pp. 23- 65). Springer, Dordrecht.
Sano, S., Kaji, N., & Sassano, M. (2017). Predicting Causes of Reformulation in Intelligent Assistants. arXiv preprint arXiv:1707.03968.
24
Hassan Awadallah, A., Gurunath Kulkarni, R., Ozertem, U., & Jones, R. (2015, October). Characterizing and predicting voice query reformulation. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management (pp. 543-552). ACM.
Manning, C., Surdeanu, M., Bauer, J., Finkel, J., Bethard, S., & McClosky, D. (2014). The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations (pp. 55-60).
Honnibal, M. (2016). SpaCy (Version 1.3.0). Retrieved from: https://spacy.io/
Google Natural Language API. Retrieved from https://cloud.google.com/natural-language. [Accessed: 2017-10-28].
Google Knowledge API. Retrieved from https://developers.google.com/knowledge-graph/. [Accessed: 2017-10-28].
Wang, Z., Wang, H., Wen, J. R., & Xiao, Y. (2015, October). An inference approach to basic level of categorization. In Proceedings of the 24th acm international on conference on information and knowledge management (pp. 653-662). ACM.
Wu, W., Li, H., Wang, H., & Zhu, K. Q. (2012, May). Probase: A probabilistic taxonomy for text understanding. In Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (pp. 481-492). ACM.
EVI Knowledge API. Retrieved from https://www.evi.com/about/ [Accessed: 2017-10-28].
NeuralCoref. Retrieved from https://github.com/huggingface/neuralcoref [Accessed: 2017-10-28].
Gilbert, C. H. E. (2014). Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Eighth International Conference on Weblogs and Social Media (ICWSM-14). Available at (20/04/16) http://comp. social. gatech. edu/papers/icwsm14. vader. hutto. pdf.
Bollacker, K., Evans, C., Paritosh, P., Sturge, T., & Taylor, J. (2008, June). Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data (pp. 1247-1250). AcM.
Auer, S., Bizer, C., Kobilarov, G., Lehmann, J., Cyganiak, R., & Ives, Z. (2007). Dbpedia: A nucleus for a web of open data. In The semantic web (pp. 722-735). Springer, Berlin, Heidelberg.
Graph https://concept.research.microsoft.com/Home/Introduction [Accessed: 2017-10-28].
WikiData API. Retrieved from https://www.wikidata.org/wiki/Wikidata:Main_Page [Accessed: 2017-10-28].
VrandeÄiÄ, D., & Krötzsch, M. (2014). Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10), 78-85.
Wright, D. R. (2005). Finite state machines. Carolina State University, 203.
Alicebot. Retrieved from http://www.alicebot.org/aiml.html [Accessed: 2017-10-28].
Liu, C. W., Lowe, R., Serban, I. V., Noseworthy, M., Charlin, L., & Pineau, J. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation.
25
Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, Yoshua Bengio. A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues. arXiv preprint arXiv:1605.06069
Luan, Y., Ji, Y., & Ostendorf, M. (2016). LSTM based Conversation Models. arXiv preprint arXiv:1603.09457.
Kiros, R., Zhu, Y., Salakhutdinov, R. R., Zemel, R., Urtasun, R., Torralba, A., & Fidler, S. (2015). Skip-thought vectors. In Advances in neural information processing systems (pp. 3294-3302).
Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780.
Sukhbaatar, S., Szlam, A., Weston, J., & Fergus, R. (2015). Weakly supervised memory networks. CoRR, abs/1503.08895, 2.
M. Iyyer, V. Manjunatha, J. Boyd-Graber, and H. Daumé III. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of ACL , 2015.
Guo, F., Metallinou, A., Khatri, C., Raju, A., Venkatesh, A., & Ram, A. (2018). Topic-based evaluation for conversational bots. In NIPS, 2017.
Raju, A., Hedayatnia, B., Liu, L., Gandhe, A., Khatri, C., Metallinou, A., A., Venkatesh & Rastrow, A. (2018). Contextual Language Model Adaptation for Conversational Agents. Proc. Interspeech 2018, 3333-3337.
Mehri, S., Hedayatnia, B., Raju, A., Gandhe, A., Khatri, C., Rastrow, A., Gabriel, R. & Mandal, A. (2018). Contextual Neural Language Models for Speech Recognition in Conversational Agents. Manuscript submitted for publication.
Khatri, C., Goel, R., Hedayatnia, B., Metanillou, A., Venkatesh, A., Gabriel, R., & Mandal, A. (2018a). Contextual Topic Modeling For Dialog Systems. Manuscript accepted for publication. Available on arxiv. Khatri, C., Goel, R., Hedayatnia, B., Venkatesh, A., Gabriel, R., & Mandal, A. (2018b). Detecting Offensive Content in Open-domain Conversations using Two Stage Semi-supervision. Manuscript submitted for publication.
Yi, S., Khatri, C., Goel, R., Hedayatnia, B., Venkatesh, A., Gabriel, R., & Mandal, A. (2018). Towards Coherent and Engaging Spoken Dialog Response Generation Using Automatic Conversation Evaluators. Manuscript in preparation. 2018
Jigsaw. (2018.) Toxic comment classification challenge. Retrieved from https://kaggle.com/c/ 259 jigsaw-toxic-comment-classification-challenge. [Accessed: 2018-07-01].
Venkatesh, A., Khatri, C., Ram, A., Guo, F., Gabriel, R., Nagar, A., Prasad, R., Cheng, M., Hedayatnia, B., Metallinou, A. & Goel, R. (2018). On Evaluating and Comparing Conversational Agents. In NIPS, 2017.
Surowiecki, J. (2004). The wisdom of crowds: Why the many are smarter than the few and how collective wisdom shapes business. Economies, Societies and Nations, 296.
Flask Python Tool. http://flask.pocoo.org/ [Accessed: 2017-10-28].
# Allen, J. (1995). Natural language understanding. Pearson.
26
Pennington, J., Socher, R., & Manning, C. (2014). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532-1543).
Danescu-Niculescu-Mizil, C., & Lee, L. (2011, June). Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics (pp. 76-87). Association for Computational Linguistics.
Sutskever, I., Martens, J., & Hinton, G. E. (2011). Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11) (pp. 1017- 1024).
Iyyer, M., Manjunatha, V., Boyd-Graber, J., & Daumé III, H. (2015). Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (Vol. 1, pp. 1681-1691).
Klein, G., Kim, Y., Deng, Y., Senellart, J., & Rush, A. M. (2017). Opennmt: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810.
Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in neural information processing systems (pp. 3104-3112).
Curry, A. C., Papaioannou, I., Suglia, A., Agarwal, S., Shalyminov, I., Xu, X., Dušek, O., Eshghi, A.Yu, Y., & Lemon, O. (2018). Alana v2: Entertaining and Informative Open-domain Social Dialogue using Ontologies and Entity Linking. Alexa Prize Proceedings, 2018.
Pichl, J., Marek, P., Konrád, J., MatulÃk, M., & Å edivý, J. (2018). Alquist 2.0: Alexa Prize Socialbot Based on Sub-Dialogue Model. Alexa Prize Proceedings, 2018.
Fulda, N., Etchart, T., Myers, W., Ricks, D., Brown, Z., Szendre, J., Murdoch, B., Carr, A & Wingate, D. (2018). BYU-EVE: Mixed Initiative Dialog via Structured Knowledge Graph Traversal and Conversational Scaffolding. Alexa Prize Proceedings, 2018.
Jonell, P., Bystedt, M., DÅgan, F. I., Fallgren, Per., Ivarsson, J., Slukova, M., Wennberg, U., Lopes, José., Boye, Johan & Skantze, G. (2018). Fantom: A Crowdsourced Social Chatbot using an Evolving Dialog Graph. Alexa Prize Proceedings, 2018.
Chen, C., Yu, D., Wen, W., Yang, Y. M., Zhang, J., Zhou, M., Jesse, K., Chau, A., Bhowmick, A., Iyer, S., Sreenivasulu, G., Cheng, R., Bhandare, A & Yu, Z. (2018). Gunrock: Building A Human- Like Social Bot By Leveraging Large Scale Real User Data. Alexa Prize Proceedings, 2018.
Ahmadvand, A., Choi, I., Sahijwani, H., Schmidt, J., Sun, M., Volokhin, S., Wang, Z & Agichtein, E. (2018). Emory IrisBot: An Open-Domain Conversational Bot for Personalized Information Access. Alexa Prize Proceedings, 2018.
Bowden, K. K., Wu, J., Cui, W., Juraska, J., Harrison, V., Schwarzmann, B., Santer, N & Walker, M. (2018). SlugBot: Developing a Computational Model and Framework of a Novel Dialogue Genre. Alexa Prize Proceedings, 2018.
Larionov, G., Kaden, Z., Dureddy, H. V., Kalejaiye, G. B. T., Kale, M., Potharaju, S. P., Shah, A. P., & Rudnicky, A. I. (2018). Tartan: A retrieval-based socialbot powered by a dynamic finite- state machine architecture. Alexa Prize Proceedings, 2018.
27
Pichl, J., Marek, P., Konrád, J., MatulÃk, M., Nguyen, H. L., & Å edivý, J. (2017). Alquist: The Alexa Prize Socialbot. Alexa Prize Proceedings, 2017.
Bowden, K. K., Wu, J., Oraby, S., Misra, A., & Walker, M. (2018). Slugbot: An application of a novel and scalable open domain socialbot framework. Alexa Prize Proceedings, 2017.
Prabhumoye, S., Botros, F., Chandu, K., Choudhary, S., Keni, E., Malaviya, C., Pasumarthi, R., Poddar, S., Ravichander, A. & Yu, Z. (2017). Building CMU Magnus from User Feedback. Alexa Prize Proceedings, 2017.
Damonte, B. K. M., Dobre, M., Duma, D., Fainberg, J., Fancellu, F., Kahembwe, E., Kahembwe, E., Cheng, J. & Webberz, B. (2017). Edina: Building an Open Domain Socialbot with Self- dialogues. Alexa Prize Proceedings, 2017.
Serban, I. V., Sankar, C., Saizheng Zhang, Z. L., Subramanian, S., Kim, T., Chandar, S., Ke, N.R., Mudumba, S., de Brebisson, A., Sotelo, J.M. & Suhubdy, D. (2017). The Octopus Approach to the Alexa Competition: A Deep Ensemble-based Socialbot. Alexa Prize Proceedings 2017.
Fang, H., Cheng, H., Clark, E., Holtzman, A., Sap, M., Ostendorf, M.,Choi, Y. & Smith, N. A. (2017). Sounding boardâuniversity of washingtonâs alexa prize submission. Alexa Prize Proceedings, 2017.
Adewale, O., Beatson, A., Buniatyan, D., Ge, J., Khodak, M., Lee, H., Prasad, N., Saunshi, N., Seff, A., Singh, K. & Suo, D. (2017). Pixie: A Social Chatbot. Alexa Prize Proceedings, 2017.
Guss, W. H., Bartlett, J., Kuznetsov, P., & Patil, P. (2017). Eigen: A Step Towards Conversational AI. Alexa Prize Proceedings.
Liu, H., Lin, T., Sun, H., Lin, W., Chang, C. W., Zhong, T., & Rudnicky, A. (2017). RubyStar: A Non-Task-Oriented Mixture Model Dialog System. Alexa Prize Proceedings, 2017.
Papaioannou, I., Curry, A. C., Part, J. L., Shalyminov, I., Xu, X., Yu, Y., Dušek, O., Rieser, V. & Lemon, O. (2017). Alana: Social dialogue using an ensemble model and a ranker trained on user feedback. Alexa Prize Proceedings, 2017.
Williams, Jason D., Kavosh Asadi, and Geoffrey Zweig. (2017). Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. arXiv preprint arXiv:1702.03274, 2017.
28 | {
"id": "1701.02810"
} |
1812.08928 | Slimmable Neural Networks | We present a simple and general method to train a single neural network
executable at different widths (number of channels in a layer), permitting
instant and adaptive accuracy-efficiency trade-offs at runtime. Instead of
training individual networks with different width configurations, we train a
shared network with switchable batch normalization. At runtime, the network can
adjust its width on the fly according to on-device benchmarks and resource
constraints, rather than downloading and offloading different models. Our
trained networks, named slimmable neural networks, achieve similar (and in many
cases better) ImageNet classification accuracy than individually trained models
of MobileNet v1, MobileNet v2, ShuffleNet and ResNet-50 at different widths
respectively. We also demonstrate better performance of slimmable models
compared with individual ones across a wide range of applications including
COCO bounding-box object detection, instance segmentation and person keypoint
detection without tuning hyper-parameters. Lastly we visualize and discuss the
learned features of slimmable networks. Code and models are available at:
https://github.com/JiahuiYu/slimmable_networks | http://arxiv.org/pdf/1812.08928 | Jiahui Yu, Linjie Yang, Ning Xu, Jianchao Yang, Thomas Huang | cs.CV, cs.AI | Accepted in ICLR 2019 | null | cs.CV | 20181221 | 20181221 | 8 1 0 2 c e D 1 2
] V C . s c [
1 v 8 2 9 8 0 . 2 1 8 1 : v i X r a
Published as a conference paper at ICLR 2019
# SLIMMABLE NEURAL NETWORKS
Jiahui Yu1 1University of Illinois at Urbana-Champaign Linjie Yang2 Ning Xu2 Jianchao Yang3 2Snap Inc. 3ByteDance Inc.
# ABSTRACT
We present a simple and general method to train a single neural network exe- cutable at different widths1, permitting instant and adaptive accuracy-efï¬ciency trade-offs at runtime. Instead of training individual networks with different width conï¬gurations, we train a shared network with switchable batch normalization. At runtime, the network can adjust its width on the ï¬y according to on-device bench- marks and resource constraints, rather than downloading and ofï¬oading different models. Our trained networks, named slimmable neural networks, achieve simi- lar (and in many cases better) ImageNet classiï¬cation accuracy than individually trained models of MobileNet v1, MobileNet v2, Shufï¬eNet and ResNet-50 at dif- ferent widths respectively. We also demonstrate better performance of slimmable models compared with individual ones across a wide range of applications in- cluding COCO bounding-box object detection, instance segmentation and person keypoint detection without tuning hyper-parameters. Lastly we visualize and dis- cuss the learned features of slimmable networks. Code and models are available at: https://github.com/JiahuiYu/slimmable_networks.
# INTRODUCTION
Recently deep neural networks are prevailing in applications on mobile phones, augmented reality devices and autonomous cars. Many of these applications require a short response time. Towards this goal, manually designed lightweight networks (Howard et al., 2017; Zhang et al., 2017; Sandler et al., 2018) are proposed with low computational complexities and small memory footprints. Au- tomated neural architecture search methods (Tan et al., 2018) also integrate on-device latency into search objectives by running models on a speciï¬c phone. However, at runtime these networks are not re-conï¬gurable to adapt across different devices given a same response time budget. For example, there were over 24,000 unique Android devices in 20152. These devices have drastically different runtimes for the same neural network (Ignatov et al., 2018), as shown in Table 1. In practice, given the same response time constraint, high-end phones can achieve higher accuracy by running larger models, while low-end phones have to sacriï¬ce accuracy to reduce latency.
# Table 1: Runtime of MobileNet v1 for image classiï¬cation on different devices.
OnePlus 6 Google Pixel LG Nexus 5 Samsung Galaxy S3 ASUS ZenFone 2 Runtime 24 ms 116 ms 332 ms 553 ms 1507 ms
Although a global hyper-parameter, width multiplier, is provided in lightweight networks (Howard et al., 2017; Zhang et al., 2017; Sandler et al., 2018) to trade off between latency and accuracy, it is inï¬exible and has many constraints. First, models with different width multipliers need to be trained, benchmarked and deployed individually. A big ofï¬ine table needs to be maintained to document the allocation of different models to different devices, according to time and energy budget. Second, even on a same device, the computational budget varies (for example, excessive consumption of background apps reduces the available computing capacity), and the energy budget varies (for example, a mobile phone may be in low-power or power-saving mode). Third, when switching to a larger or smaller model, the cost of time and data for downloading and ofï¬oading models is not negligible.
# 1Width refers to number of channels in a layer. 2https://opensignal.com/reports/2015/08/android-fragmentation/
1
Published as a conference paper at ICLR 2019
Nae BARI Dx
Figure 1: Illustration of slimmable neural networks. The same model can run at different widths (number of active channels), permitting instant and adaptive accuracy-efï¬ciency trade-offs.
Recently dynamic neural networks are introduced to allow selective inference paths. Liu & Deng (2017) introduce controller modules whose outputs control whether to execute other modules. It has low theoretical computational complexity but is nontrivial to optimize and deploy on mobiles since dynamic conditions prohibit layer fusing and memory optimization. Huang et al. (2017) adapt early-exits into networks and connect them with dense connectivity. Wu et al. (2017) and Wang et al. (2017) propose to selectively choose the blocks in a deep residual network to execute during inference. Nevertheless, in contrast to width (number of channels), reducing depth cannot reduce memory footprint in inference, which is commonly constrained on mobiles.
The question remains: Given budgets of resources, how to instantly, adaptively and efï¬ciently trade off between accuracy and latency for neural networks at runtime? In this work we introduce slimmable neural networks, a new class of networks executable at different widths, as a general solution to trade off between accuracy and latency on the ï¬y. Figure 1 shows an example of a slimmable network that can switch between four model variants with different numbers of active channels. The parameters of all model variants are shared and the active channels in different layers can be adjusted. For brevity, we denote a model variant in a slimmable network as a switch, the number of active channels in a switch as its width. 0.25à represents that the width in all layers are scaled by 0.25 of the full model. In contrast to other solutions above, slimmable networks have several advantages: (1) For different conditions, a single model is trained, benchmarked and de- ployed. (2) A near-optimal trade-off can be achieved by running the model on a target device and adjusting active channels accordingly. (3) The solution is generally applicable to (normal, group, depthwise-separable, dilated) convolutions, fully-connected layers, pooling layers and many other building blocks of neural networks. It is also generally applicable to different tasks including clas- siï¬cation, detection, identiï¬cation, image restoration and more. (4) In practice, it is straightforward to deploy on mobiles with existing runtime libraries. After switching to a new conï¬guration, the slimmable network becomes a normal network to run without additional runtime and memory cost.
However, neural networks naturally run as a whole and usually the number of channels cannot be adjusted dynamically. Empirically training neural networks with multiple switches has an extremely low testing accuracy around 0.1% for 1000-class ImageNet classiï¬cation. We conjecture it is mainly due to the problem that accumulating different number of channels results in different feature mean and variance. This discrepancy of feature mean and variance across different switches leads to inac- curate statistics of shared Batch Normalization layers (Ioffe & Szegedy, 2015), an important training stabilizer. To this end, we propose a simple and effective approach, switchable batch normalization, that privatizes batch normalization for different switches of a slimmable network. The variables of moving averaged means and variances can independently accumulate feature statistics of each switch. Moreover, Batch Normalization usually comes with two additional learnable scale and bias parameter to ensure same representation space (Ioffe & Szegedy, 2015). These two parameters may able to act as conditional parameters for different switches, since the computation graph of a slimmable network depends on the width conï¬guration. It is noteworthy that the scale and bias can be merged into variables of moving mean and variance after training, thus by default we also use independent scale and bias as they come for free. Importantly, batch normalization layers usually have negligible size (less than 1%) in a model.
We ï¬rst conduct comprehensive experiments on ImageNet classiï¬cation task to show the effective- ness of switchable batch normalization for training slimmable neural networks. Compared with
2
Published as a conference paper at ICLR 2019
individually trained networks, we demonstrate similar (and in many cases better) performances of slimmable MobileNet v1 [0.25, 0.5, 0.75, 1.0]Ã, MobileNet v2 [0.35, 0.5, 0.75, 1.0]Ã, Shufï¬eNet [0.5, 1.0, 2.0]à and ResNet-50 [0.25, 0.5, 0.75, 1.0]à ([â]à denotes available switches). We further train a 8-switch slimmable MobileNet v1 [0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 1.0]à without accuracy drop to demonstrate the scalability of our method. Beyond image classiï¬cation, we also apply slimmable networks to various applications including COCO bounding-box object detection, instance segmentation and person keypoints detection. Experiments show that slimmable networks achieve better performance than individual ones at different widths respectively. The proposed slimmable networks are not only ï¬exible and practical by design, but also effective, scalable and widely applicable according to our experiments. Lastly we visualize and discuss the learned features of slimmable networks.
2 RELATED WORK
Model Pruning and Distilling. Model pruning and distilling have a rich history in the literature of deep neural networks. Early methods (Han et al., 2015a;b) sparsify connections in neural networks. However, such networks usually require speciï¬c software and hardware accelerators to speedup. Driven by this fact, Molchanov et al. (2016), Wen et al. (2016), Li et al. (2016a), Liu et al. (2017), He et al. (2017), Luo et al. (2017), Anwar et al. (2017), Kim et al. (2017) and Ye et al. (2018) encourage structured sparsity by pruning channels, ï¬lters and network depth and ï¬ne-tuning iteratively with various penalty terms. As another family, model distilling methods (Hinton et al., 2015; Romero et al., 2014; Zhuang et al., 2018) ï¬rst train a large network or an ensemble of networks, and then transfer the learned knowledge to a small model. Soft-targets and intermediate representations from trained large models are used to train a small model.
Adaptive Computation Graph. To reduce computation of a neural network, some works propose to adaptively construct the computation graph of a neural network. Liu & Deng (2017), Wu et al. (2017), Lin et al. (2017) and Wang et al. (2017) introduced additional controller modules or gating functions to determine the computation graph based on the current input. Amthor et al. (2016), Veit & Belongie (2017), Huang et al. (2017), Kuen et al. (2018) and Hu et al. (2017) implanted early- exiting prediction branches to reduce the average execution depth. The computation graph of these methods are conditioned on network input, and lower theoretical computational complexity can be achieved.
Conditional Normalization. Many real-world problems require conditional input. Feature-wise transformation (Dumoulin et al., 2018) is a prevalent approach to integrate different sources of in- formation, where conditional scales and biases are applied across the network. It is commonly implemented in the form of conditional normalization layers, such as batch normalization or layer normalization (Ba et al., 2016). Conditional normalization is widely used in tasks including style transfer (Dumoulin et al., 2016; Li et al., 2017a; Huang & Belongie, 2017; Li et al., 2017b), image recognition (Li et al., 2016b; Yang et al., 2018) and many others (Perez et al., 2017b;a).
3 SLIMMABLE NEURAL NETWORKS
3.1 NAIVE TRAINING OR INCREMENTAL TRAINING
To train slimmable neural networks, we begin with a naive approach, where we directly train a shared neural network with different width conï¬gurations. The training framework is similar to the one of our ï¬nal approach, as shown in Algorithm 1. The training is stable, however, the network obtains extremely low top-1 testing accuracy around 0.1% on 1000-class ImageNet classiï¬cation. Error curves of the naive approach are shown in Figure 2. We conjecture the major problem in the naive approach is that: for a single channel in a layer, different numbers of input channels in previous layer result in different means and variances of the aggregated feature, which are then rolling averaged to a shared batch normalization layer. The inconsistency leads to inaccurate batch normalization statistics in a layer-by-layer propagating manner. Note that these batch normalization statistics (moving averaged means and variances) are only used during testing, in training the means and variances of the current mini-batch are used.
We then investigate incremental training approach (a.k.a. progressive training) (Tann et al., 2016). We experiment with Mobilenet v2 on ImageNet classiï¬cation task. We ï¬rst train a base model A
3
Published as a conference paper at ICLR 2019
(MobileNet v2 0.35Ã). We ï¬x it and add extra parameters B to make it an extended model A+B (MobileNet v2 0.5Ã). The extra parameters are ï¬ne-tuned along with the ï¬xed parameters of A on the training data. Although the approach is stable in both training and testing, the top-1 accu- racy only increases from 60.3% of A to 61.0% of A+B. In contrast, individually trained MobileNet v2 0.5à achieves 65.4% accuracy on the ImageNet validation set. The major reason for this accu- racy degradation is that when expanding base model A to the next level A+B, new connections, not only from B to B, but also from B to A and from A to B, are added in the computation graph. The incremental training prohibits joint adaptation of weights A and B, signiï¬cantly deteriorating the overall performance.
3.2 SWITCHABLE BATCH NORMALIZATION
Motivated by the investigations above, we present a simple and highly effective approach, named Switchable Batch Normalization (S-BN), that employs independent batch normalization (I [Szegedy| 2015) for different switches in a slimmable network. Batch normalization (BN) was orig- inally proposed to reduce internal covariate shift by normalizing the feature: yâ = jg + 8, where y is the input to be normalized and y/ is the output, 7, ( are learnable scale and bias, ju, 7? are mean and variance of current mini-batch during training. During testing, moving averaged statistics of means and variances across all training images are used instead. BN enables faster and stabler training of deep neural networks (loffe & Szegedy] 2015: 2015), also it can encode conditional information to feature representations (Perez et al.||2017b}|Li et al. 2016).
To train slimmable networks, S-BN privatizes all batch normalization layers for each switch in a slimmable network. Compared with the naive training approach, it solves the problem of feature ag- gregation inconsistency between different switches by independently normalizing the feature mean and variance during testing. The scale and bias in S-BN may be able to encode conditional infor- mation of width conï¬guration of current switch (the scale and bias can be merged into variables of moving mean and variance after training, thus by default we also use independent scale and bias as they come for free). Moreover, in contrast to incremental training, with S-BN we can jointly train all switches at different widths, therefore all weights are jointly updated to achieve a better performance. A representative training and validation error curve with S-BN is shown in Figure 2.
S-BN also has two important advantages. First, the number of extra parameters is negligible. Table 2 enumerates the number and percentage of parameters in batch normalization layers (after training, µ, Ï, γ, β are merged into two parameters). In most cases, batch normalization layers only have less than 1% of the model size. Second, the runtime overhead is also negligible for deployment. In prac- tice, batch normalization layers are typically fused into convolution layers for efï¬cient inference. For slimmable networks, the re-fusing of batch normalization can be done on the ï¬y at runtime since its time cost is negligible. After switching to a new conï¬guration, the slimmable network becomes a normal network to run without additional runtime and memory cost.
Table 2: Number and percentage of parameters in batch normalization layers.
MobileNet v1 1.0à MobileNet v2 1.0à Shufï¬eNet 2.0à ResNet-50 1.0à Conv and FC 4,210,088 (99.483%) BatchNorm 21,888 (0.517%) 3,470,760 (99.027%) 34,112 (0.973%) 5,401,816 (99.102%) 48,960 (0.898%) 25,503,912 (99.792%) 53,120 (0.208%)
3.3 TRAINING SLIMMABLE NEURAL NETWORKS
Our primary objective to train a slimmable neural network is to optimize its accuracy averaged from all switches. Thus, we compute the loss of the model by taking an un-weighted sum of all train- ing losses of different switches. Algorithm 1 illustrates a memory-efï¬cient implementation of the training framework, which is straightforward to integrate into current neural network libraries. The switchable width list is predeï¬ned, indicating the available switches in a slimmable network. During training, we accumulate back-propagated gradients of all switches, and update weights afterwards. Empirically we ï¬nd no hyper-parameter needs to be tuned speciï¬cally in all of our experiments.
4
Published as a conference paper at ICLR 2019
Algorithm 1 Training slimmable neural network M .
Require: Deï¬ne switchable width list for slimmable network M , for example, [0.25, 0.5, 0.75, 1.0]Ã. 1: Initialize shared convolutions and fully-connected layers for slimmable network M . 2: Initialize independent batch normalization parameters for each width in switchable width list. 3: for i = 1, ..., niters do 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: end for
Get next mini-batch of data x and label y. Clear gradients of weights, optimizer.zero grad(). for width in switchable width list do
Switch the batch normalization parameters of current width on network M. Execute sub-network at current width, 7 = Mâ(z). Compute loss, loss = criterion(g, y). Compute gradients, loss.backward().
# end for Update weights, optimizer.step().
4 EXPERIMENTS
In this section, we ï¬rst evaluate slimmable networks on ImageNet (Deng et al., 2009) classiï¬cation. Further we demonstrate the performance of a slimmable network with more switches. Finally we apply slimmable networks to a number of different applications.
IMAGENET CLASSIFICATION
We experiment with the ImageNet (Deng et al., 2009) classiï¬cation dataset with 1000 classes. It is comprised of around 1.28M training images and 50K validation images.
We ï¬rst investigate slimmable neural networks on three state-of-the-art lightweight networks, Mo- bileNet v1 (Howard et al., 2017), MobileNet v2 (Sandler et al., 2018), Shufï¬eNet (Zhang et al., 2017), and one representative large model ResNet-50 (He et al., 2016).
To make a fair comparison, we follow the training settings (for example, learning rate schedul- ing, weight initialization, weight decay, data augmentation, input image resolution, mini-batch size, training iterations, optimizer) in corresponding papers respectively (Howard et al., 2017; Sandler et al., 2018; Zhang et al., 2017; He et al., 2016). One exception is that for MobileNet v1 and Mo- bileNet v2, we use stochastic gradient descent (SGD) as the optimizer instead of the RMSPropOp- timizer (Howard et al., 2017; Sandler et al., 2018). For ResNet-50 (He et al., 2016), we train for 100 epochs, and decrease the learning rate by 10Ã at 30, 60 and 90 epochs. We evaluate the top-1 classication error on the center 224 Ã 224 crop of images in the validation set. More implementation details are included in Appendix A.
We ï¬rst show training and validation error curves in Figure 2. The results of naive training approach are also reported as comparisons. Although both our approach and the naive approach are stable in training, the testing error of naive approach is extremely high. With switchable batch normalization, the error rates of different switches are stable and the rank of error rates is also preserved consistently across all training epochs.
Next we show in Table 3 the top-1 classiï¬cation error for both individual networks and slimmable networks given same width conï¬gurations. We use S- to indicate slimmable models. The error rates for individual models are from corresponding papers except those denoted with â . The run- time FLOPs (number of Multiply-Adds) for each model are also reported as a reference. Table 3 shows that slimmable networks achieve similar performance compared to those that are individually trained. Intuitively compressing different networks into a shared network poses extra optimization constraints to each network, a slimmable network is expected to have lower performance than indi- vidually trained ones. However, our experiments show that joint training of different switches indeed improves the performance in many cases, especially for slim switches (for example, MobileNet v1 0.25à is improved by 3.3%). We conjecture that the improvements may come from implicit model distilling (Hinton et al., 2015; Romero et al., 2014) where the large model transfers its knowledge to small model by weight sharing and joint training.
5
Published as a conference paper at ICLR 2019
â naive training 100 | â 0.25x 1.0 â training with S-BN ee â 05x 0.95 â 0.75x 1s = tox 0.90 â 025% 2 0.8 3 0.8 ex 8 c tox 5 5 0-85. £ 5 0 5 10 e c naive training a 5 £06 B06 ⬠ts fg 2 r $s 0.4 0.4 0.2 0.2 i} 2 4 6 i} 10 20 30 40 50 60 70 80 Training iterations (5x10e4) Training iterations (5x10e4)
Figure 2: Training and validation curves of slimmable networks. Left shows the training error of the largest switch. Right shows testing errors on validation set with different switches. For naive approach, the training is stable (left) but testing error is high (right, zoomed). Slimmable networks trained with S-BN have stable and rank-preserved testing accuracy across all training iterations.
Table 3: Results of ImageNet classiï¬cation. We show top-1 error rates of individually trained net- works and slimmable networks given same width conï¬gurations and FLOPs. We use S- to indicate slimmable models, â to denote our reproduced result.
Individual Networks Slimmable Networks Name Params Top-1 Err. Name Params Top-1 Err. MobileNet v1 1.0à MobileNet v1 0.75à MobileNet v1 0.5à MobileNet v1 0.25à MobileNet v2 1.0à MobileNet v2 0.75à MobileNet v2 0.5à MobileNet v2 0.35à Shufï¬eNet 2.0à Shufï¬eNet 1.0à Shufï¬eNet 0.5à ResNet-50 1.0à ResNet-50 0.75Ãâ ResNet-50 0.5Ãâ ResNet-50 0.25Ãâ 4.2M 2.6M 1.3M 0.5M 3.5M 2.6M 2.0M 1.7M 5.4M 1.8M 0.7M 25.5M 14.7M 6.9M 2.0M 29.1 31.6 36.7 50.2 28.2 30.2 34.6 39.7 26.3 32.6 43.2 23.9 25.3 28.0 36.2 S-MobileNet v1 [0.25, 0.5, 0.75, 1.0]à S-MobileNet v2 [0.35, 0.5, 0.75, 1.0]à S-Shufï¬eNet [0.5, 1.0, 2.0]à S-ResNet-50 [0.25, 0.5, 0.75, 1.0]à 4.3M 3.6M 5.5M 25.6M 28.5 (0.6) 30.5 (1.1) 35.2 (1.5) 46.9 (3.3) 29.5 (-1.3) 31.1 (-0.9) 35.6 (-1.0) 40.3 (-0.6) 28.7 (-2.4) 34.5 (-0.9) 42.7 (0.5) 24.0 (-0.1) 25.1 (0.2) 27.9 (0.1) 35.0 (1.2) FLOPs 569M 317M 150M 41M 301M 209M 97M 59M 524M 138M 38M 4.1G 2.3G 1.1G 278M
Our proposed approach for slimmable neural networks is generally applicable to the above repre- sentative network architectures. It is noteworthy that we experiment with both residual and non- residual networks (MobileNet v1). The training of slimmable models can be applied to convo- lutions, depthwise-separable convolutions (Chollet, 2016), group convolutions (Xie et al., 2017), pooling layers, fully-connectted layers, residual connections, feature concatenations and many other building blocks of deep neural networks.
6
Published as a conference paper at ICLR 2019
4.2 MORE SWITCHES IN SLIMMABLE NETWORKS
The more switches available in a slimmable network, the more choices one have for trade-offs between accuracy and latency. We thus investigate how the number of switches potentially impact accuracy. In Table 4, we train a 8-switch slimmable MobileNet v1 and compare it with 4-switch and individually trained ones. The results show that a slimmable network with more switches have similar performance, demonstrating the scalability of our proposed approach.
Table 4: Top-1 error rates on ImageNet classiï¬cation with individually trained networks, 4-switch S-MobileNet v1 [0.25, 0.5, 0.75, 1.0]à and 8-switch S-MobileNet v1 [0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 1.0]Ã.
0.25Ã 0.35Ã 0.45Ã 0.5Ã 0.55Ã 0.65Ã 0.75Ã 0.85Ã 1.0Ã Individual 4-switch 8-switch 50.2 46.9 (3.3) 47.6 (2.6) - - 41.1 - - 36.6 36.7 35.2 (1.5) - - - 33.8 - - 31.4 31.6 30.5 (1.1) 30.2 (1.4) - - 29.2 29.1 28.5 (0.6) 28.4 (0.7)
4.3 OBJECT DETECTION, INSTANCE SEGMENTATION AND KEYPOINTS DETECTION
Finally, we apply slimmable networks on tasks of bounding-box object detection, instance segmen- tation and keypoints detection based on detection frameworks MMDetection (Chen et al., 2018) and Detectron (Girshick et al., 2018).
Table 5: Average precision (AP) on COCO 2017 validation set with individually trained networks and slimmable networks. ResNet-50 models are used as backbones for Faster-RCNN, Mask-RCNN and Keypoints-RCNN based on detection frameworks (Girshick et al., 2018; Chen et al., 2018). Faster 1.0Ã indicates Faster-RCNN for object detection with ResNet-50 1.0Ã as backbone.
Type Individual Networks Slimmable Networks Box AP Mask AP Kps AP Box AP Mask AP Kps AP Faster 1.0Ã Faster 0.75Ã Faster 0.5Ã Faster 0.25Ã Mask 1.0Ã Mask 0.75Ã Mask 0.5Ã Mask 0.25Ã Kps 1.0Ã Kps 0.75Ã Kps 0.5Ã Kps 0.25Ã 36.4 34.7 32.7 27.5 37.3 35.6 33.4 28.2 50.5 49.6 48.5 45.4 - - - - 34.2 32.9 30.9 26.6 - - - - - - - - - - - - 61.3 60.5 59.8 56.7 36.8 (0.4) 36.1 (1.4) 34.0 (1.3) 29.6 (2.1) 37.4 (0.1) 36.7 (1.1) 34.7 (1.5) 30.2 (2.0) 52.8 (2.3) 52.7 (3.1) 51.6 (3.1) 48.2 (2.8) - - - - 34.9 (0.7) 34.3 (1.4) 32.6 (1.7) 28.6 (2.0) - - - - - - - - - - - - 63.9 (2.6) 63.6 (3.1) 62.6 (2.8) 59.5 (2.8)
Following the settings of R-50-FPN-1à (Lin et al., 2016; Girshick et al., 2018; Chen et al., 2018), pre-trained ResNet-50 models at different widths are ï¬ne-tuned and evaluated. The lateral convolu- tion layers in feature pyramid network (Lin et al., 2016) are same for different pre-trained backbone networks. For individual models, we train ResNet-50 with different width multipliers on ImageNet and ï¬ne-tune them on each task individually. For slimmable models, we ï¬rst train on ImageNet using Algorithm 1. Following Girshick et al. (2018), the moving averaged means and variances of switchable batch normalization are also ï¬xed after training. Then we ï¬ne-tune the slimmable models on each task using Algorithm 1. The detection head and lateral convolution layers in feature pyramid network (Lin et al., 2016) are shared across different switches in a slimmable network. In this way, each switch in a slimmable network is with exactly same network architecture and FLOPs with its individual baseline. More details of implementation are included in Appendix B. We train all models on COCO 2017 train set and report Average Precision (AP) on COCO 2017 validation set in Table 5. In general, slimmable neural networks perform better than individually trained ones, especially for slim network architectures. The gain of performance is presumably due to implicit model distillation (Hinton et al., 2015; Romero et al., 2014) and richer supervision signals.
7
Published as a conference paper at ICLR 2019
5 VISUALIZATION AND DISCUSSION
u =i â
Figure 3: Top-activated images for same channel 3 9 in different switches in S-MobileNet v1. Dif- ferent rows represent results from different switches. Images with red outlines are mis-classiï¬ed. Note that the white color in RGB is [255, 255, 255], yellow in RGB is [255, 255, 0].
Visualization of Top-activated Images. Our primary interest lies in understanding the role that the same channel played in different switches in a slimmable network. We employ a simple visual- ization approach (Girshick et al., 2014) to visualize the images with highest activation values on a speciï¬c channel. Figure 3 shows the top-activated images of the same channel in different switches. Images with green outlines are correctly classiï¬ed by the corresponding model, while images with red outlines are mis-classiï¬ed. Interestingly the results show that for different switches, the major role of same channel (channel 3 9 in S-MobileNet v1) transits from recognizing white color (RGB value [255, 255, 255]) to yellow color (RGB value [255, 255, 0]) when the network width increases. It indicates that the same channel in slimmable network may play similar roles (in this case to rec- ognize colors of RGB value [255, 255, â]) but have slight variations in different switches (the one in quarter-sized model focuses more on white color while the one in full model on yellow color).
Sh
Figure 4: Values of BN parameters in different switches. We show BN values of both shallow (left, BN 1 1 to 1 8) and deep (right, BN 12 1 to 12 8) layers of S-MobileNet v1.
Values of Switchable Batch Normalization. Our proposed S-BN learns different BN transforma- tions for different switches. But how diverse are the learned BN parameters? We show the values of batch normalization weights in both shallow (BN 1 1 to 1 8) and deep (BN 12 1 to 12 8) layers of S-MobileNet v1 in Figure 4. The results show that for shallow layers, the mean, variance, scale and bias are very close, while in deep layers they are diverse. The value discrepancy is increased layer by layer in our observation, which also indicates that the learned features of a same channel in different switches have slight variations of semantics.
# 6 CONCLUSION
We introduced slimmable networks that permit instant and adaptive accuracy-efï¬ciency trade-offs at runtime. Switchable batch normalization is proposed to facilitate robust training of slimmable networks. Compared with individually trained models with same width conï¬gurations, slimmable networks have similar or better performances on tasks of classiï¬cation, object detection, instance segmentation and keypoints detection. The proposed slimmable networks and slimmable training could be further applied to unsupervised learning and reinforcement learning, and may help to re- lated ï¬elds such as network pruning and model distillation.
8
Published as a conference paper at ICLR 2019
# REFERENCES
Manuel Amthor, Erik Rodner, and Joachim Denzler. Impatient dnns-deep neural networks with dynamic time budgets. arXiv preprint arXiv:1610.02850, 2016.
Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Structured pruning of deep convolutional neural networks. ACM Journal on Emerging Technologies in Computing Systems (JETC), 13(3):32, 2017.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Kai Chen, Jiangmiao Pang, Jiaqi Wang, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jianping Shi, Wanli Ouyang, Chen Change Loy, and Dahua Lin. mmdetection. https: //github.com/open-mmlab/mmdetection, 2018.
Franc¸ois Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2016.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248â255. Ieee, 2009.
Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. arXiv preprint arXiv:1610.07629, 2016.
Vincent Dumoulin, Ethan Perez, Nathan Schucher, Florian Strub, Harm de Vries, Aaron Courville, and Yoshua Bengio. Feature-wise transformations. Distill, 2018. doi: 10.23915/distill.00011. https://distill.pub/2018/feature-wise-transformations.
Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for ac- curate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 580â587, 2014.
Ross Girshick, Ilija Radosavovic, Georgia Gkioxari, Piotr Doll´ar, and Kaiming He. Detectron. https://github.com/facebookresearch/detectron, 2018.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015a.
Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efï¬cient neural network. In Advances in neural information processing systems, pp. 1135â1143, 2015b.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770â778, 2016.
Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural net- In Computer Vision (ICCV), 2017 IEEE International Conference on, pp. 1398â1406. works. IEEE, 2017.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efï¬cient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
Hanzhang Hu, Debadeepta Dey, J Andrew Bagnell, and Martial Hebert. Anytime neural networks via joint optimization of auxiliary losses. arXiv preprint arXiv:1708.06832, 2017.
Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Q Wein- berger. Multi-scale dense networks for resource efï¬cient image classiï¬cation. arXiv preprint arXiv:1703.09844, 2017.
9
Published as a conference paper at ICLR 2019
Xun Huang and Serge J Belongie. Arbitrary style transfer in real-time with adaptive instance nor- malization. In ICCV, pp. 1510â1519, 2017.
Andrey Ignatov, Radu Timofte, Przemyslaw Szczepaniak, William Chou, Ke Wang, Max Wu, Tim Hartley, and Luc Van Gool. Ai benchmark: Running deep neural networks on android smart- phones. arXiv preprint arXiv:1810.01109, 2018.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Eunwoo Kim, Chanho Ahn, and Songhwai Oh. Learning nested sparse structures in deep neural networks. arXiv preprint arXiv:1712.03781, 2017.
Jason Kuen, Xiangfei Kong, Zhe Lin, Gang Wang, Jianxiong Yin, Simon See, and Yap-Peng Tan. Stochastic downsampling for cost-adjustable inference and improved regularization in convolu- tional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pp. 7929â7938, 2018.
Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning ï¬lters for efï¬cient convnets. arXiv preprint arXiv:1608.08710, 2016a.
Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, and Xiaodi Hou. Revisiting batch normaliza- tion for practical domain adaptation. arXiv preprint arXiv:1603.04779, 2016b.
Yanghao Li, Naiyan Wang, Jiaying Liu, and Xiaodi Hou. Demystifying neural style transfer. arXiv preprint arXiv:1701.01036, 2017a.
Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang. Universal style transfer via feature transforms. In Advances in Neural Information Processing Systems, pp. 386â 396, 2017b.
Ji Lin, Yongming Rao, Jiwen Lu, and Jie Zhou. Runtime neural pruning. In Advances in Neural Information Processing Systems, pp. 2181â2191, 2017.
Tsung-Yi Lin, Piotr Doll´ar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. arXiv preprint arXiv:1612.03144, 2016.
Lanlan Liu and Jia Deng. Dynamic deep neural networks: Optimizing accuracy-efï¬ciency trade-offs by selective execution. arXiv preprint arXiv:1701.00299, 2017.
Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learn- In Computer Vision (ICCV), ing efï¬cient convolutional networks through network slimming. 2017 IEEE International Conference on, pp. 2755â2763. IEEE, 2017.
Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. Thinet: A ï¬lter level pruning method for deep neural network compression. arXiv preprint arXiv:1707.06342, 2017.
Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for resource efï¬cient inference. arXiv preprint arXiv:1611.06440, 2016.
Ethan Perez, Harm De Vries, Florian Strub, Vincent Dumoulin, and Aaron Courville. Learning visual reasoning without strong priors. arXiv preprint arXiv:1707.03017, 2017a.
Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. arXiv preprint arXiv:1709.07871, 2017b.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. In- verted residuals and linear bottlenecks: Mobile networks for classiï¬cation, detection and segmen- tation. arXiv preprint arXiv:1801.04381, 2018.
10
Published as a conference paper at ICLR 2019
Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, and Quoc V Le. Mnasnet: Platform- aware neural architecture search for mobile. arXiv preprint arXiv:1807.11626, 2018.
Hokchhay Tann, Soheil Hashemi, R Bahar, and Sherief Reda. Runtime conï¬gurable deep neural networks for energy-accuracy trade-off. In Proceedings of the Eleventh IEEE/ACM/IFIP Interna- tional Conference on Hardware/Software Codesign and System Synthesis, pp. 34. ACM, 2016.
Andreas Veit and Serge Belongie. Convolutional networks with adaptive computation graphs. arXiv preprint arXiv:1711.11503, 2017.
Xin Wang, Fisher Yu, Zi-Yi Dou, and Joseph E Gonzalez. Skipnet: Learning dynamic routing in convolutional networks. arXiv preprint arXiv:1711.09485, 2017.
Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2074â2082, 2016.
Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S Davis, Kristen Grauman, and Rogerio Feris. Blockdrop: Dynamic inference paths in residual networks. arXiv preprint arXiv:1711.08393, 2017.
Saining Xie, Ross Girshick, Piotr Doll´ar, Zhuowen Tu, and Kaiming He. Aggregated residual trans- formations for deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 5987â5995. IEEE, 2017.
Linjie Yang, Yanran Wang, Xuehan Xiong, Jianchao Yang, and Aggelos K Katsaggelos. Efï¬cient video object segmentation via network modulation. arXiv preprint arXiv:1802.01218, 2018.
Jianbo Ye, Xin Lu, Zhe Lin, and James Z Wang. Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers. arXiv preprint arXiv:1802.00124, 2018.
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufï¬enet: An extremely efï¬cient convolutional neural network for mobile devices. arXiv preprint arXiv:1707.01083, 2017.
Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, and Ian Reid. Towards effective low- bitwidth convolutional neural networks. In other words, 2:2, 2018.
# A TRAINING ON IMAGENET
We mainly use three training settings corresponding to Howard et al. (2017); Sandler et al. (2018); Zhang et al. (2017); He et al. (2016). For MobileNet v1 and MobileNet v2, we train 480 epochs with mini-batch size 160, and exponentially (γ = 0.98) decrease learning rate starting from 0.045 per epoch. For Shufï¬eNet (g = 3), we train 250 epochs with mini-batch size 512, and linearly decrease learning rate from 0.25 to 0 per iteration. For ResNet-50, we train 100 epochs with mini- batch size 256, and decrease the learning rate by 10à at 30, 60 and 90 epochs. We use stochastic gradient descent (SGD) as optimizer, Nesterov momentum with a momentum weight of 0.9 without dampening, and a weight decay of 10â4 for all training settings. All models are trained on 4 Tesla P100 GPUs and the batch mean and variance of batch normalization are computed within each GPU.
With the above training settings, the reproduced MobileNet v1 1.0Ã, MobileNet v2 1.0à and ResNet-50 1.0à have similar top-1 accuracy (±0.5%). Our reproduced Shufï¬eNet 2.0à has top-1 error rate 28.2%, which is 1.9% worse than results in Zhang et al. (2017). It is likely due to the inconsistency of mini-batch size and number of training GPUs.
# B TRAINING ON COCO
We use pytorch-style ResNet-50 model (Chen et al., 2018) as backbone for COCO tasks, since our pretrained ResNet-50 at different widths for ImageNet classiï¬cation is also pytorch-style. However, it is slightly different than the caffe-style ResNet-50 used in Detectron (Girshick et al., 2018) (the stride for down-sampling is added into 3 à 3 convolutions instead of 1 à 1 convolutions). To this
11
Published as a conference paper at ICLR 2019
end, we mainly conduct COCO experiments based on another detection framework: MMDetec- tion (Chen et al., 2018), which has hyper-parameter settings with same pytorch-style ResNet-50. With same hyper-parameter settings (i.e., RCNN R50 FPN 1 Ã), we ï¬ne-tune both individual ResNet-50 models and slimmable ResNet-50 on tasks of object detection and instance segmenta- tion. Our reproduced results on ResNet-50 1.0à is consistent with ofï¬cial models in MMDetec- tion (Chen et al., 2018). For keypoint detection task, we conduct experiment on Detectron (Girshick et al., 2018) framework by modifying caffe-style ResNet-50 to pytorch-style and training on 4 GPUs without other modiï¬cation of hyper-parameters. We have released code (training and testing) and pretrained models on both ImageNet classiï¬cation task and COCO detection tasks.
# C ABLATION STUDY OF CONDITIONAL PARAMETERS IN BN
In our work, private parameters 7, 3, pu, o? of BN are introduced in Switchable Batch Normalization for each sub-network to independently normalize feature yâ = Jou + 8, where y is input and yâ is output, 7, 6 are learnable scale and bias, jz, o? are moving averaged statistics for testing. In switchable batch normalization, the private y, 3 come for free because after training, they can be merged as y! = yy + 6â, a , 6â = B â 7p. Nevertheless, we present ablation study on how these conditional parameters affect overall performance. The results are shown in Table[6]
Table 6: Top-1 error rates on ImageNet classiï¬cation with two S-MobileNet v1 [0.25, 0.5, 0.75, 1.0]à with private scale and bias or shared ones.
0.25Ã 0.5Ã 0.75Ã 1.0Ã 46.9 47.1 (-0.2) 35.2 35.9 (-0.7) 30.5 30.9 (-0.4)
12 | {
"id": "1707.03017"
} |
1812.06162 | An Empirical Model of Large-Batch Training | In an increasing number of domains it has been demonstrated that deep
learning models can be trained using relatively large batch sizes without
sacrificing data efficiency. However the limits of this massive data
parallelism seem to differ from domain to domain, ranging from batches of tens
of thousands in ImageNet to batches of millions in RL agents that play the game
Dota 2. To our knowledge there is limited conceptual understanding of why these
limits to batch size differ or how we might choose the correct batch size in a
new domain. In this paper, we demonstrate that a simple and easy-to-measure
statistic called the gradient noise scale predicts the largest useful batch
size across many domains and applications, including a number of supervised
learning datasets (MNIST, SVHN, CIFAR-10, ImageNet, Billion Word),
reinforcement learning domains (Atari and Dota), and even generative model
training (autoencoders on SVHN). We find that the noise scale increases as the
loss decreases over a training run and depends on the model size primarily
through improved model performance. Our empirically-motivated theory also
describes the tradeoff between compute-efficiency and time-efficiency, and
provides a rough model of the benefits of adaptive batch-size training. | http://arxiv.org/pdf/1812.06162 | Sam McCandlish, Jared Kaplan, Dario Amodei, OpenAI Dota Team | cs.LG, stat.ML | null | null | cs.LG | 20181214 | 20181214 | 8 1 0 2
c e D 4 1 ] G L . s c [
1 v 2 6 1 6 0 . 2 1 8 1 : v i X r a
# An Empirical Model of Large-Batch Training
Sam McCandlishâ OpenAI sam@openai.com
# Jared Kaplan Johns Hopkins University, OpenAI jaredk@jhu.edu
# Dario Amodei OpenAI damodei@openai.com
# and the OpenAI Dota Teamâ
# Abstract
In an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacriï¬cing data efï¬ciency. However the limits of this massive data parallelism seem to differ from domain to domain, ranging from batches of tens of thousands in ImageNet to batches of millions in RL agents that play the game Dota 2. To our knowledge there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain. In this paper, we demonstrate that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of supervised learning datasets (MNIST, SVHN, CIFAR- 10, ImageNet, Billion Word), reinforcement learning domains (Atari and Dota), and even generative model training (autoencoders on SVHN). We ï¬nd that the noise scale increases as the loss decreases over a training run and depends on the model size primarily through improved model performance. Our empirically-motivated theory also describes the tradeoff between compute-efï¬ciency and time-efï¬ciency, and provides a rough model of the beneï¬ts of adaptive batch-size training.
âWork done as an OpenAI Fellow. â The OpenAI Dota Team (Greg Brockman, Brooke Chan, PrzemysÅaw Debiak, Christy Dennison, David Farhi, RafaÅ Józefowicz, Jakub Pachocki, Michael Petrov, Henrique Pondé, Jonathan Raiman, Szymon Sidor, Jie Tang, Filip Wolski, and Susan Zhang) performed measurements of the reinforcement learning agents they developed for the game Dota 2. The Dota teamâs work can be cited as [BCD+18].
# Contents
1 Introduction 2 Theory and Predictions for the Gradient Noise Scale 3 Experiments 4 Related Work 5 Discussion A Methods B Results for All Tasks C Temperature and the Noise Scale D Dynamically Varying the Batch Size E Comments on Optimization 2 4 9 14 15 17 19 25 26 29
# Introduction
The last few years have seen a rapid increase in the amount of computation used to train deep learning models [AH18]. A major enabler as well as a limiting factor in this growth has been parallelism â the extent to which a training process can be usefully spread across multiple devices. Regardless of how much total computation is available, if model training cannot be sufï¬ciently parallelized, then it may take too much serial time and therefore may be practically infeasible.
A very common source of parallelism in deep learning has been data parallelism, which involves splitting a batch of data across multiple devices and then aggregating and applying the resulting gradients. Data parallelism requires fast communication between devices, but also requires that large batches are algorithmi- cally effective in accelerating learning. Recently, a number of papers have shown empirically that on spe- ciï¬c datasets or tasks, large batch sizes can achieve almost linear speed-ups in training without substantially harming sample efï¬ciency or generalization. For example, batch sizes of 8 thousand [GDG+17], 16 thousand [SKYL17], 32 thousand [YGG17, YZH+17, ASF17], and even 64 thousand [JSH+18] examples have been effectively employed to train ImageNet, and batch sizes of thousands have been effective for language mod- els and generative models [OEGA18, PKYC18, BDS18]. This phenomenon is not conï¬ned to supervised learning: in reinforcement learning, batch sizes of over a million timesteps (with tens of thousands of envi- ronments running in parallel) have been used in a Dota-playing agent [BCD+18], and even in simple Atari environments batch sizes of several thousand timesteps have proved effective [AAG+18, HQB+18, SA18]. These discoveries have allowed massive amounts of data and computation to be productively poured into models in a reasonable amount of time, enabling more powerful models in supervised learning, RL, and other domains.
However, for a given dataset and model, there is little to guide us in predicting how large a batch size we can feasibly use, why that number takes a particular value, or how we would expect it to differ if we used a different dataset or model. For example, why can we apparently use a batch size of over a million when training a Dota agent, but only thousands or tens of thousands when training an image recognition model? In practice researchers tend to simply experiment with batch sizes and see what works, but a downside of this is that large batch sizes often require careful tuning to be effective (for example, they may require a warmup
2
Atari Breakout - Pareto Fronts 4096 Parallel Players 102 102 - 2 Budget wx az % ~ 16 Parallel Players a, ry g ) Y oS ° FI 2 ° a id 3 2 o & = 10° 10 ââs 8 EI g is) 8 g e ~-e- Score = 10 a 19! ste» Score = 500 10° 1 7 T Training Time 107! 10° 10! Training Time (Hours)
Figure 1: The tradeoff between time and compute resources spent to train a model to a given level of perfor- mance takes the form of a Pareto frontier (left). Training time and compute cost are primarily determined by the number of optimization steps and the number of training examples processed, respectively. We can train a model more quickly at the cost of using more compute resources. On the right we show a concrete example of the Pareto frontiers obtained from training a model to solve the Atari Breakout game to different levels of performance. The cost and training time depend on the computing architecture and are shown approximately.
period or an unusual learning rate schedule), so the fact that it is possible to use a large batch size can remain undiscovered for a long time. For example, both the Atari and ImageNet tasks were for several years conventionally run with a substantially smaller batch size than is now understood to be possible. Knowing ahead of time what batch size we expect to be effective would be a signiï¬cant practical advantage in training new models.
In this paper we attempt to answer some of these questions. We measure a simple empirical statistic, the gradient noise scale3 (essentially a measure of the signal-to-noise ratio of gradient across training examples), and show that it can approximately predict the largest efï¬cient batch size for a wide range of tasks. Our model also predicts a speciï¬c shape for the compute/time tradeoff curve, illustrated in Figure 1. Our contributions are a mix of fairly elementary theory and extensive empirical testing of that theory.
On the conceptual side, we derive a framework which predicts, under some basic assumptions, that training should parallelize almost linearly up to a batch size equal to the noise scale, after which there should be a smooth but relatively rapid switch to a regime where further parallelism provides minimal beneï¬ts. Addition- ally, we expect that the noise scale should increase during training as models get more accurate, and should be larger for more complex tasks, but should not have a strong dependence on model size per se. We also provide an analysis of the efï¬ciency gains to be expected from dynamically adjusting the batch size according to noise scale during training. Finally, we predict that, all else equal, the noise scale will tend to be larger in complex RL tasks due to the stochasticity of the environment and the additional variance introduced by the credit assignment problem.
On the empirical side, we verify these predictions across 8 tasks in supervised learning, RL, and gener- ative models, including ImageNet, CIFAR-10, SVHN, MNIST, BillionWord, Atari, OpenAIâs Dota agent [BCD+18], and a variational autoencoder for images. For each of these tasks we demonstrate that the noise scale accurately predicts the largest usable batch size (at the order of magnitude level) and that gains to paral- lelism degrade in the manner predicted by theory. We also show that the noise scale increases over the course of training and demonstrate efï¬ciency gains from dynamic batch size tuning. The noise scale eventually becomes larger for more performant models, but this appears to be caused by the fact that more performant models simply achieve a better loss.
The rest of this paper is organized as follows. In Section 2, we derive a simple conceptual picture of the noise scale, data parallelism, and batch sizes, and explain what it predicts about optimal batch sizes and how
3Similar metrics have appeared previously in the literature. We discuss related work in Section 4.
3
Minimum e Less noise, larger steps Start @ More noise, smaller steps
Figure 2: Less noisy gradient estimates allow SGD-type optimizers to take larger steps, leading to conver- gence in a smaller number of iterations. As an illustration, we show two optimization trajectories using momentum in a quadratic loss, with different step sizes and different amounts of artiï¬cial noise added to the gradient.
they vary over the course of training and across tasks. We build on this analysis to study training efï¬ciency in Section 2.3. Then in Section 3 we empirically test the predictions in Section 2 and explore how the noise scale varies with dataset, model size, and learning paradigm (supervised learning vs RL vs generative models). Section 4 describes related work and Section 5 discusses the implications of these results and possible future experiments.
# 2 Theory and Predictions for the Gradient Noise Scale
# 2.1 Intuitive Picture
Before working through the details of the gradient noise scale and the batch size, it is useful to present the intuitive picture. Suppose we have a function we wish to optimize via stochastic gradient descent (SGD). There is some underlying true optimization landscape, corresponding to the loss over the entire dataset (or, more abstractly, the loss over the distribution it is drawn from). When we perform an SGD update with a ï¬nite batch size, weâre approximating the gradient to this true loss. How should we decide what batch size to use?
When the batch size is very small, the approximation will have very high variance, and the resulting gradient update will be mostly noise. Applying a bunch of these SGD updates successively will average out the variance and push us overall in the right direction, but the individual updates to the parameters wonât be very helpful, and we could have done almost as well by aggregating these updates in parallel and applying them all at once (in other words, by using a larger batch size). For an illustrative comparison between large and small batch training, see Figure 2.
By contrast, when the batch size is very large, the batch gradient will almost exactly match the true gradient, and correspondingly two randomly sampled batches will have almost the same gradient. As a result, doubling the batch size will barely improve the update â we will use twice as much computation for little gain.
Intuitively, the transition between the ï¬rst regime (where increasing the batch size leads to almost perfectly linear speedups) and the second regime (where increasing the batch size mostly wastes computation) should occur roughly where the noise and signal of the gradient are balanced â where the variance of the gradient is at the same scale as the gradient itself4. Formalizing this heuristic observation leads to the noise scale.
The situation is shown pictorially in Figure 1. For a given model, weâd like to train it in as little wall time as possible (x-axis) while also using as little total computation as possible (y-axis) â this is the usual goal
4Note that these considerations are completely agnostic about the size of the dataset itself.
4
of parallelization. Changing the batch size moves us along a tradeoff curve between the two. Initially, we can increase the batch size without much increase in total computation, then there is a âturning pointâ where there is a substantive tradeoff between the two, and ï¬nally when the batch size is large we cannot make further gains in training time. In the conceptual and experimental results below, we formalize these concepts and show that the bend in the curve (and thus the approximate largest effective batch size) is in fact set roughly by the noise scale.
# 2.2 Gradients, Batches, and the Gradient Noise Scale
Weâll now formalize the intuitions described in Section 2.1. Consider a model, parameterized by variables θ â RD, whose performance is assessed by a loss function L (θ). The loss function is given by an average over a distribution Ï (x) over data points x. Each data point x has an associated loss function Lx (θ), and the full loss is given by L (θ) = Exâ¼Ï [Lx (θ)]5.
We would like to minimize L (θ) using an SGD-like optimizer, so the relevant quantity is the gradient G (θ) = âL (θ). However, optimizing L (θ) directly would be wasteful if not impossible, since it would require processing the entire data distribution every optimization step. Instead, we obtain an estimate of the gradient by averaging over a collection of samples from Ï, called a batch:
B 1 Gest (0) = 5 SO Vole, (0); ti~p (2.1) i=1
This approximation forms the basis for stochastic optimization methods such as mini-batch stochastic gradi- ent descent (SGD) and Adam [KB14]. The gradient is now a random variable whose expected value (averaged over random batches) is given by the true gradient. Its variance scales inversely with the batch size B6:
Ex1···B â¼Ï [Gest (θ)] = G (θ) covx1···B â¼Ï (Gest (θ)) = 1 B Σ (θ) , (2.2)
where the per-example covariance matrix is deï¬ned by
X (4) = coVany (VoLe (8)) = Exxp [(VoLs (8) (Vole (8))"| â G(0)G 0)". 23)
The key point here is that the minibatch gradient gives a noisy estimate of the true gradient, and that larger batches give higher quality estimates. We are interested in how useful the gradient is for optimization pur- poses as a function of B, and how that might guide us in choosing a good B. We can do this by connecting the noise in the gradient to the maximum improvement in true loss that we can expect from a single gradient update. To start, let G' denote the true gradient and H the true Hessian at parameter values 0. If we perturb the parameters 6 by some vector V to 0 â «V, where ⬠is the step size, we can expand true loss at this new point to quadratic order in e:
L(Q9âeV) = L(0)âeG7V + Sev ny. (2.4)
If we had access to the noiseless true gradient G and used it to perturb the parameters, then Equation/2.4]with 12 V = G would be minimized by setting ⬠= emax = aft q: However, in reality we have access only to the noisy estimated gradient Gest from a batch of size B, thus the best we can do is minimize the expectation
5In the context of reinforcement learning, the loss could be the surrogate policy gradient loss, and the distribution Ï
would be nonstationary.
©This is strictly true only when training examples are sampled independently from the same data distribution. For ex- ample, when batches are sampled without replacement from a dataset of size D, the variance instead scales like (F - S) . For simplicity, we restrict ourself to the case where B < D or where batches are sampled with replacement, but our conclusions can be altered straightforwardly to account for correlated samples.
5
@ Smaller batch = Larger batch
Predicted Training Speed 10° % < Ineffective ®& 10-1 scaling e Perfect scaling 10-2 T T T 10- 10-1 10° lo! 10? Batch Size / Noise Scale (B/B)
Smaller batch Predicted Training Speed Larger batch 10° % < Ineffective ®& 10-1 scaling e Perfect scaling 10-2 T T T 10- 10-1 10° lo! 10? Batch Size / Noise Scale (B/B)
Figure 3: Larger batch sizes yield estimated gradients that are closer to the true gradient, on average. Larger step sizes can be used when the estimated gradient is closer to the true gradient, so more progress can be made per step. Left: A large step size used with a small batch size can lead to instability, as illustrated for a quadratic loss. Right: Equation 2.6 predicts that the âturning pointâ after which larger batch sizes become less helpful is the noise scale B, where the training speed drops to 50% of the maximum possible.
E{L (@ â â¬Gest)| with respect to e. This expected value can be evaluated using Equation 2.2}
2,1. tr(Hd B[L (8 ~ eGest)] = L (8) â (GP + 52 (tne + ae ) . (2.5)
Minimizing this equation with respect to ⬠leads to:
â¬opt (B) = argmin,E [L (4 â cGest)] one (2.6) ~ 14 Broise/B
as the optimal step size, which produces an optimal improvement in the loss from the noisy gradient:
âLopt (B) = âLmax 1 + Bnoise/B ; âLmax = 1 2 |G|4 GT HG . (2.7)
Above, we have deï¬ned the noise scale as:
Bnoise = tr (HΣ) GT HG , (2.8)
Note that our definition of the noise scale is independent of the size of the full training set. If we use a step size larger than twice â¬op¢, the loss may increase, leading to divergence, as illustrated in Figure[3}
Despite the many unfounded assumptions in the above derivation, we will ï¬nd that equations 2.7 and 2.8 provide a helpful guide to the behavior of large-batch training, even when using other optimizers (including momentum, Adam, and RMSProp).
For a discussion of the dependence of the noise scale on the learning rate, see Appendix C on the âtemperatureâ of training.
# Implications and Simpliï¬cations
Equation[2.7]implies that when the batch size is much smaller than the noise scale, B < Byoise, the second term in the denominator dominates the first, so increasing the batch size B linearly increases the progress in loss. This is the small batch regime, where increases in batch size linearly speed up training. By contrast, when B > Byoise, then the first term dominates, so that increasing B has almost no effect on the progress in loss. This is the large batch regime where increases in batch size do not speed up training and simply waste computation; the switch between the two occurs at B © Byoise (see Figure[3p.
The noise scale in Equation 2.8 requires some overhead to compute due to the presence of the Hessian H. We can estimate it by measuring âLopt (B) using a series of line searches in the direction of a gradient measured
6
(2.6)
with various batch sizes B and ï¬tting the result to Equation 2.7. This allows us to estimate Bnoise as well as to empirically test whether Equation 2.7 actually ï¬ts the data (we discuss these local tests more in Section 3).
The situation gets even simpler if we make the (unrealistic) assumption that the optimization is perfectly well-conditioned â that the Hessian is a multiple of the identity matrix. If that is the case, then Equation 2.8 reduces to:
Bsimple = tr(Σ) |G|2 , (2.9)
which says that the noise scale is equal to the sum of the variances of the individual gradient components, divided by the global norm of the gradien{"]- essentially a measure of how large the gradient is compared to its variance. It is also a measure of the scale at which the estimated and true gradient become close in L? space (having non-trivial dot product) â the expected normalized L? distance is given by: 2 E [IG âG| | 1tr(Z) â Bsimpte IGP BGP B (2.10)
In practice, we ï¬nd that Bsimple and Bnoise typically differ only by a small constant multiplicative factor, particularly when we employ common training schemes that improve conditioning. In our empirical work we will sometimes compute Bnoise, but will primarily compute Bsimple instead, as it requires less computational expense. In Appendix A.1, we provide an extremely simple method to measure this simpliï¬ed noise scale with negligible overhead in the context of data-parallel training.
# 2.3 Predictions for Data/Time Efï¬ciency Tradeoffs
Thus far our analysis has only involved a single point in the loss landscape. But in Section 3 we will show that Equation 2.7 nevertheless predicts the dependence of training speed on batch size remarkably well, even for full training runs that range over many points in the loss landscape. By averaging Equation 2.7 over multiple optimization steps (see Appendix D), we ï¬nd a simple relationship between training speed and data efï¬ciency:
: E 7 = -1=( -1) : (2.11) Smin Emin
Here, S and Smin represent the actual and minimum possible number of steps taken to reach a speciï¬ed level of performance, respectively, and E and Emin represent the actual and minimum possible number of training examples processed to reach that same level of performance. Since we are training at ï¬xed batch size8, we have Etot = BStot. We deï¬ne the critical batch size by an empirical ï¬t to the above equation, as
Bcrit = Emin Smin . (2.12)
Our model predicts Bcrit â Bnoise, where Bnoise is appropriately averaged over training (see Appendix D). Note that the noise scale can vary signiï¬cantly over the course of a training run, so the critical batch size also depends on the level of performance to which we train the model.
The resulting tradeoff curve in serial time vs total compute has a hyperbolic shape represented in Figure 1. The goal of optimization is to reach a given level of performance with minimal S and E â but as depicted in Figure 1, there are tradeoffs involved, as very small S may require very large E, and vice versa. When we choose B = Bcrit, the two sides of Equation 2.11 are both 1, so that training takes twice as many passes through the training data as an optimally data-efï¬cient (small-batch) run would take, and twice as many optimization steps as an optimally time-efï¬cient (large-batch) run would take.
7One might also use preconditioned gradients, obtained for example by dividing gradient components by the square root of the Adam optimizerâs [KB14] accumulated variances. We experimented with this but found mixed results. 8We discuss the beneï¬ts of dynamically varying the batch size in Appendix D
7
# 2.4 Assumptions and Caveats
The mathematical argument in the previous sections depends on several assumptions and caveats, and it is useful to list these all in one place, in order to help clarify where and why we might expect the quantities in equations 2.8 and 2.9 to be relevant to training:
1. Short-horizon bias: The picture in Section 2.2 is a strictly local picture â it tells us how to best improve the loss on the next gradient step. Greedily choosing the best local improvement is gen- erally not the best way to globally optimize the loss (see e.g. [WRLG18]). For example, greedy optimization might perform poorly in the presence of bad local minima or when the landscape is ill- conditioned. The critical batch size would then be reduced by the extent to which noise is beneï¬cial.
2. Poor conditioning: In poorly conditioned optimization problems, parameter values often oscillate along the large-curvature directions rather than decreasing in a predictable way (see e.g. [Goh17] and Appendix E.1). This means that Equation 2.7 will not perfectly reï¬ect the amount of optimization progress made per step. Nevertheless, we will see that it still accurately predicts the relative speed of training at different batch sizes via the resulting tradeoff Equation 2.11.
3. Simpliï¬ed noise scale: As noted in Section 2.2, whenever we use the simpliï¬ed noise scale (Equa- tion 2.9) rather than the exact noise scale (Equation 2.8), this number may be inaccurate to the extent that the Hessian is not well-conditioned. Different components of the gradient can have very different noise scales.
4. Learning rate tuning: The arguments in Section 2.2 assume that we take the optimal step size and maximize the expected improvement in loss, Equation 2.6. In practice learning rates are unlikely to be perfectly tuned, so that the actual improvement in loss (and thus the scaling of training with batch size) may not perfectly reï¬ect Equation 2.7. However, by trying to choose the best learning rate schedules (or by simply doing a grid search) we can reduce this source of error. In addition, the noise scale depends strongly on the learning rate via a âtemperatureâ of training, though this source of error is small as long as the learning rate is reasonably close to optimal. We provide a more detailed discussion of this dependence in Appendix C.
5. Quadratic approximation: The Taylor expansion in Equation 2.4 is only to second order, so if third order terms are important, in either the distribution of gradient samples or the optimization landscape, then this may introduce deviations from our conceptual model, and in particular devi- ations from Equation 2.7. Intuitively, since parameter updates are local and often quite small we suspect that the previous two sources of error will be more important than this third one.
6. Generalization: The picture in Section 2.2 says nothing about generalization â it is strictly about optimizing the training loss as a mathematical function. Some papers have reported a âgeneraliza- tion gapâ in which large batch sizes lead to good training loss but cause a degradation in test loss, apparently unrelated to overï¬tting [KMN+16, HHS17]. The arguments in Section 2.2 donât exclude this possibility, but recent work [SLA+18] has found no evidence of a generalization gap when hyperparameters are properly tuned.
Despite these potential issues in our conceptual model, weâll show in Section 3 that the noise scale is overall a good empirical predictor of the critical batch size. Furthermore, we will see that most training runs ï¬t Equation 2.11 remarkably well.
# 2.5 Expected Patterns in the Noise Scale
In the next section we will measure the noise scale for a number of datasets and conï¬rm its properties. However, it is worth laying out a few patterns we would expect it to exhibit on general grounds:
⢠Larger for difï¬cult tasks: We expect B to be larger for more complex/difï¬cult9 tasks, because individual data points will be less correlated, or only correlated in a more abstract way. This may
9To be clear, we do not expect this to be the primary difference between more and less difï¬cult tasks. Other difï¬culty metrics such as the intrinsic dimensionality [LFLY18] appear to be unrelated to the amount of gradient noise, though it would be interesting if there were some connection.
8
apply both over the course of training on a given dataset (where we may pick the âlow-hanging fruitâ ï¬rst, leaving a long tail of more complicated things to learn) or in moving from easier to harder datasets and environments. In reinforcement learning, we expect environments with sparse rewards or long time horizons to have larger noise scale. We also expect generative models to have smaller B as compared to classiï¬ers training on the same dataset, as generative models may obtain more information from each example.
⢠Growth over training: B will grow when the gradient decreases in magnitude, as long as the noise tr(Σ) stays roughly constant. Since |G| decreases as we approach the minimum of a smooth loss, we would expect B to increase during neural network training.
⢠Weak dependence on model size: The number of model parameters in the neural network cancels in the noise scale, so we do not expect B to exhibit a strong dependence on model size (at ï¬xed loss). As discussed above, models that achieve better loss will tend to have a higher noise scale, and larger models often achieve better loss, so in practice we do expect larger models to have higher noise scale, but only through the mechanism of achieving better loss.
⢠Learning rate tuning: The noise scale will be artiï¬cially inï¬ated if the learning rate is too small, due to the âtemperatureâ dependence described in Appendix C. To get a useful measurement of the noise scale, the learning rate needs to be appropriate to the current point in parameter space.
The ï¬rst and last points can be exhibited analytically in toy models (see Appendix C), but we do not expect theoretical analyses to provide a great deal of insight beyond the intuitions above. Instead, we will focus on conï¬rming these expectations empirically.
# 2.6 Summary
To summarize, our model makes the following predictions about large-batch training:
⢠The tradeoff between the speed and efï¬ciency of neural network training is controlled by the batch size and follows the form of Equation 2.11.
⢠The critical batch size Bcrit characterizing cost/time tradeoffs can be predicted at the order of mag- nitude level by measuring the gradient noise scale, most easily in the simpliï¬ed form Bsimple from Equation 2.9.
⢠The noise scale can vary signiï¬cantly over the course of a training run, which suggests that the critical batch size also depends on the chosen level of model performance.
⢠The noise scale depends on the learning rate via the âtemperatureâ of training, but is consistent between well-tuned training runs (see Appendix C).
# 3 Experiments
We now test the predictions of Section 2 on a range of tasks, including image classiï¬cation, language mod- eling, reinforcement learning, and generative modeling. The tasks range from very simple (MNIST) to very complex (5v5 Dota), which allows us to test our modelâs predictions in drastically varying circumstances. Our central experimental test is to compare the prediction made by the gradient noise scale Bsimple for each task, to the actual limits of batch size Bcrit found by carefully tuned full training runs at an exhaustive range of batch sizes. The overall results of this comparison are summarized in Figure 4. We ï¬nd that the gradient noise scale predicts the critical batch size at the order of magnitude level, even as the latter varies from 20 (for an SVHN autoencoder) to over 10 million (consistent with prior results reported in [BCD+18]). Details about the hyperparameters, network architectures, and batch size searches are described in Appendix A.4. Below we describe the individual tasks, the detailed measurements we perform on each task, and the results of these measurements.
9
@ Generative Models Dota 5v5 _ @> 107 © Image Classifiers (lower bound) @ = Reinforcement Learning Gradi 10° Dota 1v1 radient Billion Word LSTM e noise scale . 10° measures the variation of the wo! SVHN TmageNet gradients between Pong Space Invaders different training MNIST e ; 103 _ examples CIFAR10 10? 10" © â VAE (SVHN) ®~Autoencoder (SVHN) 10° : , T T â T - 10! 10? 10° 104 10° 10° 107 Critical batch size is the maximum batch size above which scaling efficiency decreases significantly
Figure 4: The âsimple noise scaleâ roughly predicts the maximum useful batch size for many ML tasks. We deï¬ne this âcritical batch sizeâ to be the point at which compute efï¬ciency drops below 50% optimal, at which point training speed is also typically 50% of optimal. Batch sizes reported in number of images, tokens (for language models), or observations (for games). We show the critical batch size for a full training run, and the noise scale appropriately averaged over a training run (see Appendix D). Due to resource constraints, for Dota 5v5 we show the batch size used by the OpenAI Dota team as a lower bound for the critical batch size.
SVHN (SGD) - Optimal Learning Rate 10° 10" 10? 108 Batch Size
SVHN (SGD) - Line Search vs Batch 10 20.6 B= 2048 B= 4096 B=s8192 0 2 4 6 8 10 Epoch
SVHN (SGD) - Optimal Learning Rate SVHN (SGD) - Line Search vs Batch 10 20.6 B= 2048 B= 4096 B=s8192 10° 10" 10? 108 0 2 4 6 8 10 Batch Size Epoch
Figure 5: Left: The optimal learning rate is displayed for a range of batch sizes, for an SVHN classiï¬er trained with SGD. The optimal learning rate initially scales linearly as we increase the batch size, leveling off in the way predicted by Equation 2.7. Right: For a range of batch sizes, we display the average loss progress âL (B) that can be made from a batch of size B via a line search, normalized by the measured âL (Bmax). Early in training, smaller batches are sufï¬cient to make optimal progress, while larger batches are required later in training.
# 3.1 Quantities Measured
In order to test our model we train each task on a range of batch sizes, selecting the optimal constant learning rate separately for each batch size using a simple grid search. Across a range of tasks, we produce the following results and compare with our model:
e Optimal learning rates: When optimizing with plain SGD or momentum, we find that the optimal learning rate follows the functional form of Equation as shown in Figure|5| For Adam and RMSProp the optimal learning rate initially obeys a power law ⬠(B) « B® with a between 0.5 and 1.0 depending on the task, then becomes roughly constant. The scale at which the optimal learning
10
SVHN (SGD) - Training Speed SVHN (SGD) - Training Efficiency 10° 4 10-14 Train Error Small batches are Large batches 19-2 4 are fastest most efficient & I 2 102 108 104 10° 108 10! 10° 106 107 Optimization Steps Examples Processed
Figure 6: Training runs for a simple CNN classiï¬er on the SVHN dataset at constant batch sizes. Small batch training is more compute-efï¬cient (right), while large-batch training requires fewer optimizer steps (left). The turning point between time-efï¬cient and compute-efï¬cient training occurs roughly at B = 64 for the initial phase of training and increases later in training.
rate stops increasing is generally somewhat smaller than the typical noise scale. (See Appendix E.2 for a potential explanation for this power law behavior.)
⢠Pareto frontiers: For each batch size, we observe the number of optimization steps and total number of data samples needed to achieve various levels of performance. This allows us to visualize the tradeoff between time-efï¬ciency and compute-efï¬ciency as a Pareto frontier (see Figures 6 and 7). We ï¬nd that Equation 2.11 ï¬ts the shape of these tradeoff curves remarkably well in most cases.
⢠Critical batch size (Bcrit): We determine the critical batch size over the course of a training run by ï¬tting the Pareto fronts to the functional form of Equation 2.11 (see Figure 7). This quantiï¬es the point at which scaling efï¬ciency begins to drop. In particular, training runs at batch sizes much less than Bcrit behave similarly per training example, while training runs at batch sizes much larger than Bcrit behave similarly per optimization step (see Figure 6). The critical batch size typically increases by an order of magnitude or more over the course of training.
⢠Simple noise scale (Bsimple): We measure the simple noise scale of Equation 2.9 over the course of a single training run using the minimal-overhead procedure described in Appendix A.1. Note that some care must be taken to obtain a proper estimate of Bsimple due to its dependence on the learning rate via the âtemperatureâ of training. We ï¬nd that the noise scale agrees between different well- tuned training runs when compared at equal values of the loss, so it can be accurately measured at small batch size (see Appendix C). We also ï¬nd that, aside from some ï¬uctuations early in training, Bsimple typically predicts the critical batch size at the order of magnitude level (see Figure 7). The noise scale also typically increases over the course of training, tracking the critical batch size. To obtain a single value for the noise scale representing a full training run, we average over a training run as described in Appendix D.
⢠Full noise scale (Bnoise): For SVHN trained with SGD, we also measure the full noise scale Bnoise by performing line searches for gradients obtained by batches of varying size, then ï¬t to the func- tional form 2.11 (see Figures 5 and 7). This is a somewhat better estimate of Bcrit but is less computationally convenient, so we choose to focus on Bsimple for the remaining tasks.
# 3.2 Results
We summarize our ï¬ndings in Figure 4: across many tasks, the typical simple noise scale approximately predicts the batch size at which the returns from increasing scale begin to diminish signiï¬cantly. Results for all of the tasks can be found in Appendix B. We provide a detailed discussion of our methods in Appendix A.
11
SVIIN (SGD) - Pareto Fronts \ Examples Processed Pe ebteee 2 10 1o* 10° Optimization Steps
SVIIN (SGD) - Noise Scale Comparison Bait 1o-! 1o-? Train Error
SVIIN (SGD) - Pareto Fronts SVIIN (SGD) - Noise Scale Comparison \ Bait Pe ebteee 2 10 1o* 10° 1o-! 1o-? Optimization Steps Train Error
Figure 7: The tradeoff between time-efï¬ciency and compute-efï¬ciency can be visualized as a Pareto frontier. Each point on the diagram above (left) represents the number of optimizer steps and processed examples needed to achieve a particular level of classiï¬cation accuracy. Fits to Equation 2.11 are also shown.
# Supervised Learning
Basic image classiï¬cation results are pictured in Figure 14.
⢠SVHN We train a simple CNN image classiï¬er on the extended SVHN dataset [NWC+11]. We display all three of Bcrit, Bsimple, and Bnoise for SVHN optimized with SGD in Figure 7. We ï¬nd that Bnoise better-predicts Bcrit as compared to the more naive Bsimple. We compare to training using the Adam optimizer [KB14] in Figure 14, where Bsimple provides a very accurate prediction for Bcrit.
⢠MNIST We train a simple CNN on the MNIST dataset [LC10] using SGD, and ï¬nd that Bsimple roughly estimates Bcrit, though the latter is signiï¬cantly smaller.
⢠CIFAR10 We train a size 32 ResNet [HZRS15] with Momentum on CIFAR10 [Kri09] and ï¬nd that Bsimple predicts Bcrit.
⢠ImageNet We train a size 50 ResNet [HZRS15] with Momentum on ImageNet [DDS+09], and use a learning rate schedule that decays the learning rate three times during training. Due to the schedule, both Bsimple, and Bcrit change signiï¬cantly during training (see Appendix C for a discussion) and must be measured separately at each learning rate. Results are pictured in Figure 10. We ï¬nd that the noise scale varies from 2,000 to 100,000 in the main phase of training, which matches empirical work (e.g. [JSH+18]) showing that constant batch sizes up to 64 thousand can be used without a loss of efï¬ciency. During the later ï¬ne-tuning phase of training, the noise scale increases further to hundreds of thousands and even millions, suggesting that even larger batch sizes may be useful at the very end of training. Our critical batch sizes are slightly lower (15k vs 64k) than those reported in the literature, but we did not use the latest techniques such as layer-wise adaptive learning rates [YGG17].
Overall we ï¬nd that more complex image datasets have larger noise scales in a way that is not directly determined by dataset size.
# Generative Modeling
The results for these tasks are pictured in Figure 9.
VAE and Autoencoder We train a VAE [KW13] and a simple Autoencoder on the SVHN dataset [NWC+11]; we were motivated to compare these models because VAEs introduce additional stochasticity. As expected, the VAE had larger Bcrit and Bsimple as compared to the Autoencoder, and both models had much lower Bsimple as compared to SVHN image classiï¬ers. However, unlike most of the other tasks, for these generative models Bsimple was signiï¬cantly smaller than Bcrit. ⢠Language Modeling We train a single-layer LSTM for autoregressive prediction on the Billion Word dataset [CMS+13], and ï¬nd good agreement between Bcrit and Bsimple. We also illustrate the
12
dependence on LSTM size in Figure 8, ï¬nding that the noise scale is roughly independent of LSTM size at ï¬xed values of the loss, but that larger LSTMs eventually achieve lower values of the loss and a larger noise scale.
# Reinforcement Learning
⢠Atari We train RL agents with the policy gradient algorithm A2C [MBM+16] on seven Atari games [BNVB12] (Alien, Beamrider, Breakout, Pong, Qbert, Seaquest, Space Invaders), with results pic- tured in Figures 11 and 12. The tradeoff curves generally agree well with the prediction of Equation 2.11, though they are somewhat noisy e.g. for Pong since we do not average over multiple seeds. For some Atari games, we ï¬nd some consistent deviation from 12 at very small batch sizes (see e.g. Beam Rider in Figure 11). It would be interesting to study this phenomenon further, though this could simply indicate greater sensitivity to other hyperparameters (e.g. momentum) at small batch size. Overall, we see that patterns in the noise scale match intuition, such as Pong being much easier to learn than other Atari games.
⢠Dota The OpenAI Dota team has made it possible to train PPO [SWD+17] agents on both Dota 1v1 and 5v5 environments (the latter being preliminary ongoing work). We vary the two hyperparameters batch size and learning rate on the existing code, experiment setup, and training infrastructure as described in [BCD+18]. The Dota 1v1 environment features two agents ï¬ghting in a restricted part of the map (although they are free to walk anywhere) with a ï¬xed set of abilities and skills, whereas Dota 5v5 involves the whole map, 5 heroes on each side, and vastly more conï¬gurations in which heroes might engage each other. This is reï¬ected in the higher noise scale for Dota 5v5 (at least 10 million) relative to Dota 1v1 â we suspect the higher diversity of situations gives rise to more variance in the gradients. Due to resource constraints we were not able to measure the Pareto fronts for Dota 5v5, and so we can only report the batch size used by the Dota team and the measured noise scale.
Results for the tasks described above were generally within a reasonable margin of state-of-the-art results, though we did not explicitly try to match SOTA or use special algorithmic or architectural tricks. Our goal was simply to conï¬rm that we were in a reasonably well-performing regime that is typical of ML practice.
For the supervised learning and generative modeling tasks listed above, we have the option of using either training set or test set performance to compare different batch sizes. For the main results in this paper, we choose train set performance because it is what is directly predicted by our model, and because it is easier to measure in the presence of overï¬tting. The choice makes negligible difference in most of our experiments, either because they involve RL or because the datasets are large enough that we donât overï¬t. On the small datasets MNIST, CIFAR10, and SVHN, overï¬tting makes measurement of test error more difï¬cult, but we do measure the test error behavior in Appendix E.3, and both the Pareto fronts and critical batch size generally do not change much.
The fact that the noise scale is consistent between well-tuned training runs suggests that the corresponding optimization trajectories are similar in some sense. In Appendix C we investigate this idea further and relate the noise scale to a characteristic âtemperatureâ of the training process.
# Model Size Dependence
The deï¬nitions of the noise scales do not have any manifest dependence on the number of parameters in a model. We have conjectured that they will be roughly independent of model size at ï¬xed values of the loss.
LSTM language models provide a natural test case, since LSTM sizes can be scaled up without qualitatively altering model architecture. As shown in Figure 8, the simple noise scale appears to be roughly independent of model size at a ï¬xed value of the loss. However, due to their higher performance and lower achieved loss, larger models eventually reach larger noise scales than smaller models. We do not have speciï¬c hypotheses for how the noise scale should vary with model architecture, but interesting results along these lines were recently obtained [SLA+18].
13
Simple Noise Scale and LSTM Size 10° ââ 2018 LSTM ââ 1024 LSTM â 512 LSTM Simple Noi 100 90 80 70 60 50 40 Train Perplexity
Simple Noise Scale During Training 2048 LSTM ââ 1024 LSTM ââ 512 LSTM 10° 10! Simple Noise Seale (Tokens) 108 107 108 Examples (Tokens)
Simple Noise Scale and LSTM Size Simple Noise Scale During Training 10° ââ 2018 LSTM ââ 1024 LSTM â 512 LSTM 2048 LSTM ââ 1024 LSTM ââ 512 LSTM 10° 10! Simple Noise Seale (Tokens) Simple Noi 100 90 80 70 60 50 40 108 107 108 Train Perplexity Examples (Tokens)
Figure 8: We show the relationship between training perplexity and the simple noise scale (left) for a range of LSTM sizes on the Billion Word dataset. These results show that at ï¬xed values of the loss, the noise scale does not depend signiï¬cantly on model size. On the right we show the simple noise scale during training, plotted in terms of the number of tokens processed. After processing a given number of examples, larger models will tend to have a larger noise scale, but only as a consequence of having achieved smaller loss.
# 4 Related Work
A great deal of prior work has studied large-batch training, investigated versions of the noise scale, explored adaptive batch size and learning rate schedules, and demonstrated that large batch training can be effective on speciï¬c datasets. We attempt to summarize this work below.
Recent papers have probed the limits of large batch training empirically, especially for ImageNet [GDG+17, YZH+17, JSH+18], in some cases using layer-wise adaptive learning-rates [YGG17]. More recent work has demonstrated that large batch training can also be applied to RL [AAG+18, BCD+18, SA18, HQB+18]. The use of second order optimization methods [BGM17] might increase the utility of data parallelism even further. A thorough review of large batch training and potential issues with generalization was provided in a very nice recent empirical study [SLA+18] done in parallel with this work. [GVY+18] also systematically studied large-batch training, though it did not tune the learning rate separately for each batch size.
Other recent work has explored the impact of gradient noise on optimization speed and batch size selec- tion. [SZL13] connected gradient noise and the locally optimal step size to identify an adaptive learning rate. [MHB17] derived a sampling distribution for SGD, motivating our deï¬nition of âtemperatureâ. [SL17] connected this temperature to the critical batch size, though they predict a dependence on dataset size which we do not observe. [SZT17] identiï¬ed a signal-dominated and noise-dominated phase of training. [SKYL17] showed that decaying the learning rate and increasing the batch size have the same effect, motivated by the SGD training temperature. ([DNG17] also suggested increasing learning rate and batch size together, but with different motivation.) [IES+18] empirically investigated the role of gradient noise in reinforcement learning.
The gradient noise scale in particular has also been studied in earlier work to aid in batch size selection. The noise scale itself is used implicitly in basic statistical techniques for sample size selection (see e.g. [BCNW12] implicitly uses the gradient noise scale for a theoretical analysis of batch size [Wik, NIS]). selection. [BCN16, DYJG16, BRH16] propose adaptive sampling methods based on the gradient noise scale in the context of neural network optimization. [YPL+17] analyzed the gradient noise scale for a particular class of functions and related it to the critical batch size, though it predicts a sharp change in learning speed with batch size rather than the smooth change we observe. [CWZ+18] theoretically analyzed the dependence of the gradient noise scale on network width for shallow or linear networks, though they ï¬nd inconsistent empirical results on neural networks. [MBB17] found a formula for the optimization speedup in terms of batch size resembling ours, though their critical batch size depends on smoothness parameters of the loss rather than directly gradient noise.
There has been a variety of work studying the Neural Network loss landscape and using it to draw conclusions about optimal training. Local properties of the loss landscape are not necessarily a good guide to overall optimal training [WRLG18]. The loss tends to be fairly smooth when interpolating between the start and end
14
of training [GVS14]. But noise may be useful early in training [NVL+15, YPL+17], perhaps because it leads to minima that generalize better [KMN+16].
A big-picture motivation for our work was to better understand the scaling of learning with computational and data resources; this question was addressed from the perspective of scaling the model size in [HNA+17].
Our key contributions include connecting the gradient noise scale to the speed of optimization with a simple model, as well as systematically measuring the critical batch size and noise scale for a variety of tasks. We also clarify the role of the training temperature in SGD and propose an optimal batch size schedule.
# 5 Discussion
We have shown that the simpliï¬ed gradient noise scale Bsimple approximately predicts the actual point of diminishing return on batch size Bcrit on diverse problems where these quantities vary by six orders of magnitude. Furthermore, the tradeoff curve between total compute and optimization steps associated with changing the batch size has roughly the hyperbolic form predicted by our theory. Finally, our theory also roughly predicts how the optimal learning rate scales with batch size, although its predictions are not as precise.
What does the validity of this theory mean, and in what way is it useful? At the level of a given task, it allows us to use the noise scale from a single run (even an only partially complete run with much smaller batch size, though see caveats about learning rate tuning in the appendix) to estimate the largest useful batch size, and thus reduces the extensive hyperparameter searches that are necessary to ï¬nd this batch size by trial and error. It also tells us to expect that larger batch sizes will show diminishing returns in a predictable way that has the same form regardless of the task.
Across tasks, it tells us that the largest useful batch size for a task is likely to be correlated to informal notions of the âcomplexityâ of the task, because the noise scale essentially measures how diverse the data is (as seen by the model), which is one aspect of task complexity.
We have argued that a speciï¬c formula characterizes the time/compute tradeoff between optimization steps and total data processed in neural network training:
Optimization Steps Data Examples 1 =1 Min Steps (5.1) Min Examples
From this relation we can identify a critical value of the batch size when training to a given value of the loss
Bcrit(Loss) = Min Examples Min Steps
Training at this critical batch size provides a natural compromise between time and compute, as we take only twice the minimum number of optimization steps and use only twice the minimum amount of data. The critical batch size represents a turning point, so that for B > Bcrit there are diminishing returns from greater data parallelism.
Our main goal was to provide a simple way to predict Bcrit. We have shown that it can be estimated as
Bcrit â Bsimple (5.2)
where the easily-measured Bsimple is the ratio of the gradient variance to its squared mean. Theoretical arguments suggest that a more reï¬ned quantity, the Hessian-weighted Bnoise of Equation 2.8, may provide an even better10 estimate of Bcrit.
The tradeoff curve of Equation 5.1 provides a remarkably good ï¬t across datasets, models, and optimizers, and the approximate equality of Bcrit and Bsimple holds even as both quantities vary greatly between tasks and training regimes. We have established that as anticipated, both Bcrit and Bsimple tend to increase signiï¬cantly during training, that they are larger for more complex tasks, and that they are roughly independent of model
10We have also investigated using gradients preconditioned by the Adam optimizer; the results were mixed.
15
size (for LSTMs) at ï¬xed values of the loss. We also saw that image classiï¬cation has a signiï¬cantly larger per-image noise scale as compared to generative models training on the same dataset, a fact that could have interesting implications for model-based RL. In the case of RL, while the noise scale for Dota was roughly a thousand times larger than that of Atari, the total number of optimization steps needed to train a Dota agent is not so much larger [BCD+18]. Perhaps this suggests that much of the additional compute needed to train more powerful models will be parallelizable.
While Bsimple roughly matches Bcrit for all datasets, the ratio Bsimple/Bcrit can vary by about an order of magnitude between tasks. This may not be so surprising, since Bsimple does not take into account Hessian conditioning or global aspects of the loss landscape. But it would be very interesting to obtain a better understanding of this ratio. It was smaller than one for the Autoencoder, VAE, and for Dota 1v1, roughly equal to one for LSTMs, and greater than one for both image classiï¬cation tasks and Atari, and we lack an explanation for these variations. It would certainly be interesting to study this ratio in other classes of models, and to further explore the behavior of generative models.
Due to its crucial role in data-parallelism, we have focused on the batch size B, presuming that the learning rate or effective âtemperatureâ will be optimized after B has been chosen. And our theoretical treatment focused on a speciï¬c point in the loss landscape, ignoring issues such as the relationship between early and late training and the necessity of a âwarm-upâ period. It would be interesting to address these issues, particularly insofar as they may provide motivation for adaptive batch sizes.
# Acknowledgements
We are grateful to Paul Christiano for initial ideas and discussions about this project. We would like to thank the other members of OpenAI for discussions and help with this project, including Josh Achiam, Danny Hernandez, Geoffrey Irving, Alec Radford, Alex Ray, John Schulman, Jeff Wu, and Daniel Ziegler. We would also like to thank Chris Berner, Chris Hesse, and Eric Sigler for their work on our training infrastructure. We thank Joel Hestness, Heewoo Jun, Jaehoon Lee, and Aleksander Madry for feedback on drafts of this paper. JK would also like to thank Ethan Dyer for discussions.
16
# A Methods
# A.1 Unbiased Estimate of the Simple Noise Scale with No Overhead
In this section, we describe a method for measuring the noise scale that comes essentially for free in a data- parallel training environment.
We estimate the noise scale by comparing the norm of the gradient for different batch sizes. From Equation 2.2, the expected gradient norm for a batch of size B is given by:
E [|Gest|?] = |G]? + Str(3). (A.1) 1 B
Given estimates of |Gest|2 for both B = Bsmall and B = Bbig, we can obtain unbiased estimates |G|2 and S for |G|2 and tr(Σ), respectively:
P 1 2 2 2 = > _ (Bri, |G â Bsmau|G |9| Bow = Bema (BriglG By :.| small |G Bema) 1 1/Bemau â 1/Boig ( S= IG Benaâ ~ IG Bug *), (A.2)
We can verify with Equation|A. I]that E {|G|?] = |G)? and E[S] = tr(5) 7] Note that the ratio S/|G|? is not an unbiased estimator for Byo It is possible to correct for this bias, but to minimize complexity we instead ensure that |G| has relatively low variance by averaging over many batches. This is especially important due to the precise cancellation involved in the definition of |G|?. When training a model using a data parallel method, we can compute |Gg,,,.,.,|? and |Gg,,, |? with minimal effort by computing the norm of gradient before and after averaging between devices. In that case Bsman iS the âlocalâ batch size before averaging, and Byig is the âglobalâ batch size after averaging. In practice, to account for the noisiness of |G|? when computed this way, we calculate |G|? and S on every training step and use their values to compute separate exponentially-weighted moving averages. We tune the exponential decay parameters so that the estimates are stable. Then, the ratio of the moving averages provides a good estimate of the noise scale.
In our experiments we measure and report the noise scale during training for a single run with a well- optimized learning rate. Note that while the noise scale measurement is consistent between runs at different batch sizes, it is not consistent at different learning rates (see Appendix C). So, it is important to use a run with a well-tuned learning rate in order to get a meaningful noise scale measurement.
# A.2 Systematic Searches Over Batch Sizes
When doing systematic measurements of how performance scales with batch size (Pareto fronts), we sepa- rately tune the learning rate at each batch size, in order to approximate the ideal batch scaling curve as closely as possible. We tune the learning rate via the following procedure. For each task, we performed a coarse grid search over both batch size and learning rate to determine reasonable bounds for a ï¬ne-grained search. The central value typically followed the form
Ex ~ (1+ B./BY"â Central (B) (A.3)
where a = 1 for SGD or momentum, and 0.5 < a < 1 for Adam [ ] or RMSProp. Then, we performed an independent grid search for each batch size centered at â¬central, expanding the bounds of the search if the best value was on the edge of the range.
11Note that when Bsmall = 1 and Bbig = n, this becomes the familiar Bessel correction n nâ1 to the sample variance. 12In fact E [x/y] ⥠E [x] /E [y] in general for positive variables, see e.g. https://en.wikipedia.org/wiki/ Ratio_estimator for details.
17
We explain the motivation for Equation A.3 in Appendix E.2. But regardless of the theoretical motivations, we have found that this scaling rule provides a reasonable starting point for grid searches, though we are not suggesting that they produce precisely optimized learning rates.
# A.3 Pareto Front Measurements
To produce the Pareto front plots, and thus to measure the important parameter Bcrit for a given dataset and optimizer, we begin by performing a grid search over batch sizes and learning rates, as de- scribed in Appendix A.2. With that data in hand, we ï¬x a list of goal values â either loss, perplex- ity, or game-score. For example for SVHN in Figure 7 we chose the training classiï¬cation error values [0.2, 0.1, 0.07, 0.05, 0.035, 0.025, 0.015, 0.004] as the goals. These were generally chosen to provide a vari- ety of evenly spaced Pareto fronts indicative of optimization progress.
Then for each value of the goal, and for each value of the batch size, we identiï¬ed the number of optimization steps and examples processed for the run (among those in the grid search) that achieved that goal most quickly. These optimal runs are the data points on the Pareto front plots. Note that at ï¬xed batch size, different values of the learning rate might be optimal for different values of the goal (this was certainly the case for LSTMs on Billion Word, for example). Next, for each value of the goal, we used the optimal runs at each value of the batch size to ï¬t Equation 2.11 to the relation between examples processed and optimization steps. Note that we performed the ï¬ts and extracted the errors in log-space. This was how we produced the lines on the Pareto front plots. Finally, given this ï¬t, we directly measured Bcrit = Emin for each value of the goal, as well as the standard Smin error in this quantity. This was how we produced the âNoise Scale Comparisonâ plots, where we compared Bcrit to Bsimple. Errors in Bcrit are standard errors from the ï¬t to Equation 2.11. When we report an overall number for Bcrit for a given dataset and optimizer, we are averaging over optimization steps throughout training.
Note that it can be difï¬cult to determine at what point in a training run the modelâs performance reaches the speciï¬ed target. For example, the loss may oscillate signiï¬cantly, entering and exiting the target region multi- ple times. To remedy this issue, we smooth the loss using an exponentially-weighted moving average before checking whether it has reached the target. The decay parameter of this moving average can affect results noticeably. Though we choose this parameter by hand based on the noisiness of the modelâs performance, this could be automated using an adaptive smoothing algorithm.
# A.4 Details of Learning Tasks
We train a variety of architectures on a variety of ML tasks described below. We use either basic stochastic gradient descent (SGD), SGD with momentum [SMDH13], or the Adam optimizer [KB14] unless otherwise speciï¬ed. We measure and report the noise scale Bsimple during training for a single run of each task with a well-optimized learning rate.
# A.4.1 Classiï¬cation
For image classiï¬cation, we use the following datasets:
MNIST handwritten digits [LC10] ⢠Street View House Numbers (SVHN) [NWC+11] ⢠CIFAR10 [Kri09] ⢠ImageNet [DDS+09]
For CIFAR10 and ImageNet classiï¬cation, we use Residual Networks [HZRS15] of size 32 and 50 respec- tively, based on the TensorFlow Models implementation [Goo]. All hyperparameters are unchanged aside from the learning rate schedule; Instead of decaying the learning rate by a factor of 10 at speciï¬ed epochs, we decay by a factor of 10 when the training classiï¬cation error (appropriately smoothed) reaches 0.487, 0.312, and 0.229. For MNIST and SVHN, we use a simple deep network with two sets of convolutional and pooling
18
layers (32 and 64 ï¬lters, respectively, with 5x5 ï¬lters), one fully-connected hidden layer with 1024 units, and a ï¬nal dropout layer with dropout rate of 0.4.
We train MNIST models using SGD, SVHN with both SGD and Adam [KB14] (with the default parameter settings momentum = 0.9, β2 = 0.999), and CIFAR10 and ImageNet with momentum [SMDH13] (with momentum = 0.9).
# A.4.2 Reinforcement Learning
For reinforcement learning, we use the following tasks via OpenAI Gym [BCP+16]:
Atari Arcade Learning Environment [BNVB12] ⢠Dota 1v1 and 5v5 [BCD+18]
For Atari, we use A2C with a pure convolutional policy, adapted from OpenAI Baselines {[DHK* 17}. We train using RMSProp with a = 0.99 and ¢ = 10~>. We roll out the environments 5 steps at a time, and vary the batch size by varying the number of environments running parallel. At the beginning of training, we randomly step each parallel environment by a random number of steps up to 500, as suggested in [SA18}.
As described in [BCD+18] for Dota an asynchronous version of PPO [SWD+17] was used. The TrueSkill metric [HMG07] was used to measure the skill of an agent. Given the fact that the OpenAI Five effort is ongoing, the values for TrueSkill reported in this paper are incomparable with those in [BCD+18]; on this paperâs scale, TrueSkill 50 is roughly the level of the best semi-pro players.
# A.4.3 Generative and Language Modeling
For language modeling, we train a size-2048 LSTM [HS97] on the One Billion Word Benchmark corpus [CMS+13], using byte pair encoding (BPE) [SHB15] with a vocabulary of size 40,000 and a 512-dimensional embedding space. The LSTMs were trained with Adam using momentum 0.5, without dropout, with the gradients clipped to norm 10, and with 20-token sequences. For both training and evaluation LSTM cell states were reset to zero between samples, and so we have reported perplexity for the last token of the 20-token sequences. We chose to report the batch size in tokens (rather than sequences) because we have found that when the number of sequences and the sequence lengths are varied, both Bsimple and Bcrit depend predominantly on the total number of tokens.
We also trained 1024 and 512 size LSTMs for model size comparison; for the last we used a smaller 256- dimensional embedding space. The model size comparison training runs were conducted with a batch size of 1024 and Adam learning rate of 0.0007. The learning rates were chosen from a grid search, which showed that the optimal learning rate did not have a signiï¬cant dependence on model size.
For generative image modeling, we train a Variational Autoencoder [KW13] using the InfoGAN architecture [CDH+16] (see their appendix C.2) on the SVHN dataset. Since VAEs introduce additional stochasticity beyond gradient noise, we also provide training data on a simple autoencoder with the same architecture.
# B Results for All Tasks
In Figures 9, 10, 11, 12, 13, and 14, we display the results of a series of training runs for for classiï¬ca- tion, reinforcement learning, and generative modeling tasks. On the left, we show tradeoff curves between compute-efï¬ciency and time-efï¬ciency. Each point on each tradeoff curve represents the number of optimizer steps and processed training examples necessary to reach a given level of performance for a particular train- ing run. Fits to the prediction of Equation 2.11 are shown. On the right, we compare the critical batch size, deï¬ned as the point where training is within 50% of maximum efï¬ciency in terms of both compute power and speed, and compare to the simple noise scale Bsimple of Equation 2.9 and the true noise scale Bnoise of 2.8, when available. The results are summarized in Figure 4 and table 1.
19
Autoencoder SVHN - Pareto Fronts: ; \ Livan 108 \ 4 * Livain 4 ses Deratn g | t ' iA MN tal B gs aa Saitvatitse | iver Lea 2 wd â Pretty n w Ter y Lirain = 0. 2 Vee sre Ltzain = 0.0098 m i Ww acne. sre- Derain = 0, a sree Divain = 10% â see Divain = 0.030 10? 10" 10+ 10° Optimization Steps
Autoencoder SVHN - Noise Scale Comparison â Berit â Brimpte 2 10 aeâ 2 & a ia 10° 2x 10-? 1o-? 6x 10-$ 4x 10-Bx 1W-F 2x 1O-F Autoencoder âTrain Loss
Autoencoder SVHN - Pareto Fronts: Autoencoder SVHN - Noise Scale Comparison ; \ Livan â Berit 108 \ 4 * Livain â Brimpte 4 ses Deratn g | t ' iA MN tal 2 10 aeâ B gs aa Saitvatitse | iver Lea 2 2 wd â Pretty & n w Ter y Lirain = 0. a 2 Vee sre Ltzain = 0.0098 m i Ww acne. sre- Derain = 0, ia a sree Divain = 10° 10% â see Divain = 0.030 10? 10" 10+ 10° 2x 10-? 1o-? 6x 10-$ 4x 10-Bx 1W-F 2x 1O-F Optimization Steps Autoencoder âTrain Loss VAE SVHN - Pareto Fronts VAE SVEN - Noise Scale Comparison. 7 10% â Bait 107 _ ses Etyain = 0.011 â Baimpte F 196 sees Derain = 0.012 gq ves Drain = 0.018 g & oF ses Luvin = 0.014 § 10 38 e+ Ltrain = 0.016 ¢ Fig ses Dyan = 0.022 & ee Lerain = 0.027 101 19° see Disain = 0.035 10? to" tot 10° 3x 10? 2x 10-? Optimization Steps VAE Train Loss Billion Word - Pareto Fronts Billion Word - Noise Scale Comparison sees Prrain = 1000 â Ban 10° 4 ne _ â Baimpte g ue 2 108 2 1084 ne 4 2 be & fo aot tee 3 4 a é % & 0° Z ~ 108 10° ~ Lo? Lo 107 10° 108 10?
VAE SVHN - Pareto Fronts 7 107 _ ses Etyain = 0.011 F 196 sees Derain = 0.012 gq ves Drain = 0.018 & oF ses Luvin = 0.014 38 e+ Ltrain = 0.016 Fig ses Dyan = 0.022 & ee Lerain = 0.027 19° see Disain = 0.035 10? to" tot 10° Optimization Steps
VAE SVEN - Noise Scale Comparison. 10% â Bait â Baimpte g § 10 ¢ 101 3x 10? 2x 10-? VAE Train Loss
Billion Word - Pareto Fronts sees Prrain = 1000 10° 4 ne g ue 2 1084 ne 2 be fo aot tee 4 é & 0° ~ 10° ~ Lo? Lo 107 10° Optimization Steps
Billion Word - Noise Scale Comparison â Ban _ â Baimpte 2 108 4 & 3 a % Z 108 108 10? âTrain Perplexity
Figure 9: Scaling behavior of generative and language modeling tasks.
ImageNet Training Spans 108 Span #0 a Span #1 y Span #2 E Es FI i 104 10 Optimization Steps
ImageNet - Noise Scale Comparison ros] Bante ââ Berit (Span 0) 307 1 ââ Bern (Span 1) 3 â Bars (Span 2) B 108 4 10° tot 10% 10° 6x 10-1 4x17? 3x 10-1 2x 10-8 âTrain Error
ImageNet Training Spans ImageNet - Noise Scale Comparison 108 Span #0 ros] Bante Span #1 ââ Berit (Span 0) Span #2 307 1 ââ Bern (Span 1) 3 â Bars (Span 2) B 108 4 10° tot 10% 104 10 10° 6x 10-1 4x17? 3x 10-1 2x 10-8 Optimization Steps âTrain Error
Figure 10: For ImageNet, the typical training schedule decays the learning rate by a factor of 10 at 30, 60, and 80 epochs [HZRS15, GDG+17]. To provide a fair comparison between batch sizes, we instead decay by a factor of 10 when the training classiï¬cation error reaches 0.487, 0.312, and 0.229. We display Pareto fronts and compute the critical batch size separately for each span.
20
Atari Beam Rider - Pareto Fronts 108 3 Rey = 600.000 é Rey = 100.000 & Fey = 2000.00 Fi Rey = 300.000 a - E Roy = 500.000 a Roy = 7500.000 Lot 10° Optimization Steps
Atari Beam Rider - Noise Scale Comparison â Bait â Bsimpte Say § & 5 % 108 10% Lot Episode Reward Mean
Atari Beam Rider - Pareto Fronts Atari Beam Rider - Noise Scale Comparison 108 â Bait â Bsimpte 3 Rey = 600.000 é Rey = 100.000 Say & Fey = 2000.00 § Fi Rey = 300.000 & a - 5 E Roy = 500.000 % a Roy = 7500.000 108 Lot 10° 10% Lot Optimization Steps Episode Reward Mean Atari Breakout - Pareto Fronts Atari Breakout - Noise Scale Comparison 10° â Bair 108 2 â zy Rey = 2.000 simple z Rep = 3.000 _ 10 Bag Rey = 10.000 3 â Rep = 50.000 2 A Rep = 100.000 3 10° & 10° Rep = 500.000 a Roy = 600.000 10? 10° + 10* 10° to" to! 10? Optimization Steps Episode Reward Mean Atari Pong - Pareto Fronts Atari Pong - Noise Scale Comparison 108 â Bai 6 . 10' â Biimpie 3 ve Ry= 19 8 - ior see Rep =0 oe & nee Rey = 19 3 ep EA z : a 10° iT 10? 10! 108 -200 -15 -10 5 0 5 10 15 20 Optimization Steps Episode Reward Mean Atari Space Invaders - Pareto Fronts Atari Space Invaders - Noise Scale Comparison 10° Ruy = 250 on Rep = 500 106 â Bampte 3 Rep = 800 ⬠108 Rep = 1500 2 & Rey = 3500 gw Fi Rey = 10000 a. 9 o 20" Roy = 20000 % igs 2B Boy = 30000 Rep = 50000 fi 10° Roy = 70000 a 10! 10° 10? 10* 104 108 Optimization Steps Episode Reward Mean
Atari Breakout - Pareto Fronts 108 2 zy Rey = 2.000 z Rep = 3.000 Bag Rey = 10.000 â Rep = 50.000 A Rep = 100.000 & 10° Rep = 500.000 a Roy = 600.000 10° 10* 10° Optimization Steps
Atari Breakout - Noise Scale Comparison 10° â Bair â simple _ 10 3 2 3 10° 10? + to" to! 10? Episode Reward Mean
Atari Pong - Pareto Fronts 108 . 3 ve Ry= 19 - ior see Rep =0 & nee Rey = 19 ep EA : a 10° 10! 108 Optimization Steps
Atari Pong - Noise Scale Comparison â Bai 6 10' â Biimpie 8 oe 3 z iT 10? -200 -15 -10 5 0 5 10 15 20 Episode Reward Mean
Atari Space Invaders - Pareto Fronts 10° Ruy = 250 Rep = 500 3 Rep = 800 ⬠108 Rep = 1500 & Rey = 3500 Fi Rey = 10000 a. 9 20" Roy = 20000 2B Boy = 30000 Rep = 50000 10° Roy = 70000 10! 10° Optimization Steps
Atari Space Invaders - Noise Scale Comparison on 106 â Bampte 2 gw o % igs fi a 10? 10* 104 108 Episode Reward Mean
Figure 11: Scaling behavior of Atari tasks (Beam Rider, Breakout, Pong, and Space Invaders) trained with A2C.
21
Examples Processed 108 107 10° 10° Atari Alien - Pareto Fronts Rey = 300 Rep = 400 Rey = 750 Rep = 1100 Rep = 1250 Rep = 1750 Rep = 2000 10% 107 10° Optimization Steps
10° 10° 10* Noise Scale 10 10â Atari Alien - Noise Scale Comparison Bait â Bsimpte 3 x 10? 4 x 10? 6 x 10? 10" 2 108 Episode Reward Mean
Examples Processed Examples Processed Examples Processed 108 107 10° 10° 108 107 10% 10° 107 10° 10° Atari Alien - Pareto Fronts Rey = 300 Rep = 400 Rey = 750 Rep = 1100 Rep = 1250 Rep = 1750 Rep = 2000 10% 107 10° Optimization Steps Atari Qhert - Pareto Fronts Rey = 300 Boy = 500 ce Rey = 800 ves Rey = 1000 Rey = 2000 Rey = 4000 Rep = 5000 Rey = 10000 Rep = 15000 Rep = 20000 10% 10* 10° Optimization Steps Atari Scaquest - Pareto Fronts se Rey = 150 Rey = 700 Rep = 800 Rey = 1200 sree Rep = 1600 see Roy = 1700 see Rey = 1750 10° 10° 10* Noise Scale 10 10â 10° 10* Noise Scale 10 10° 104 Noise Scale 10" Atari Alien - Noise Scale Comparison Bait â Bsimpte 3 x 10? 4 x 10? 6 x 10? 10" 2 108 Episode Reward Mean Atari Qbert - Noise Seale Comparison Bait â Bsimpte 10° Lot Episode Reward Mean Atari Seaquest - Noise Scale Comparison â Bait â Biimpie ee 3x 10? 4x 10? 6 x 10? 108
Examples Processed 108 107 10% Atari Qhert - Pareto Fronts Rey = 300 Boy = 500 ce Rey = 800 ves Rey = 1000 Rey = 2000 Rey = 4000 Rep = 5000 Rey = 10000 Rep = 15000 Rep = 20000 10% 10* 10° Optimization Steps
10° 10* Noise Scale 10 Atari Qbert - Noise Seale Comparison Bait â Bsimpte 10° Lot Episode Reward Mean
Examples Processed 10° 107 10° 10° Atari Scaquest - Pareto Fronts se Rey = 150 Rey = 700 Rep = 800 Rey = 1200 sree Rep = 1600 see Roy = 1700 see Rey = 1750 Optimization Steps
10° 104 Noise Scale 10" Atari Seaquest - Noise Scale Comparison â Bait â Biimpie ee 3x 10? 4x 10? 6 x 10? 108 Episode Reward Mean
Figure 12: Scaling behavior of more Atari tasks (Alien, Qbert, and Seaquest) trained with A2C.
Examples Processed 10"! 10% 10° Dota v1 - Pareto Fronts 104 10° Optimization Steps feetee âTrueskill = âTrueskill
107 ZB Noise Scale 10° 10% Dota Lvl - Noise Scale Comparison â Bait âsimple 3x lol 4x10! 5 x 10" âTrueskill
10"! 10% 10° Dota v1 - Pareto Fronts 104 10° Optimization Steps feetee âTrueskill = âTrueskill 107 ZB Noise Scale 10° 10% Dota Lvl - Noise Scale Comparison â Bait âsimple 3x lol 4x10! 5 x âTrueskill
Figure 13: Scaling behavior for Dota 1v1 [Ope17] trained to top-level pro performance.
22
Examples Processed Examples Processed Examples Processed Examples Processed CIFAR 10 - Pareto Fronts 5 5 10% 5 10? 10% 107 Optimization Steps SVIIN (Adam) - Pareto Fronts i084 id 10° 10°4 1o* Optimization Steps SVEN (SGD) - Pareto Fronts 108 107 10 108 10# Optimization Steps MNIST - Pareto Fronts 10% 107 Optimization Steps Pee etete Pee etete ELvrain = 0-500 Exrain = 0.400 Brvain Exrain = 0.100 Burain = 0.050 Bisain = 0.040 Burain = 0.030 Evrain = 0.200 Exrain = 0.100 Burain = 0.070 Bisain = 0.050 Brain = 0.035 grain = 0.025 Brain = 0.015 Bussin = 0.004 Evrain = 0-200 Exrain = 0.100 Burain = 0.070 Bisain = 0.050 Ehrain = 0.035 grain = 0.025 Brain = 0.015 Bussin = 0.004 Evrain = 0.300 Birain = 0.200 Exrain = 0.100 rrain = 0.050 Bisain = 0.030 Eyrain = 0.020 Exrain = 0.012 Evrain = 0.007 Bisain = 0.004 Ehrain = 0.002 Eyrain = 0.001 Noise Scale (Tokens) Noise Scale Noise Scale Noise Scale (Tokens) 10° 10# 108 10? 2 2 R a] 5 5 CIFAR1L0 - Noise Scale Comparison Bait â Bsimpte ââ 107? âTrain Error SVIIN (Adam) - Noise Scale Comparison Bait 1071 10-7 âTrain Error SVIIN (SGD) - Noise Scale Comparison Bait Bairaple Broise 1o-! Train Error MNIST - Noise Scale Comparison ââ Bait â Baimpte 10-1 Lo-? Train Error 14: behavior of classification tasks.
Examples Processed CIFAR 10 - Pareto Fronts 5 5 10% 5 10? 10% 107 Optimization Steps ELvrain = 0-500 Exrain = 0.400 Brvain Exrain = 0.100 Burain = 0.050 Bisain = 0.040 Burain = 0.030
Noise Scale 10° CIFAR1L0 - Noise Scale Comparison Bait â Bsimpte ââ 107? âTrain Error
Examples Processed SVIIN (Adam) - Pareto Fronts i084 id 10° 10°4 1o* Optimization Steps Pee etete Evrain = 0.200 Exrain = 0.100 Burain = 0.070 Bisain = 0.050 Brain = 0.035 grain = 0.025 Brain = 0.015 Bussin = 0.004
Noise Scale 10# 108 10? SVIIN (Adam) - Noise Scale Comparison Bait 1071 10-7 âTrain Error
Examples Processed SVEN (SGD) - Pareto Fronts 108 107 10 108 10# Optimization Steps Pee etete Evrain = 0-200 Exrain = 0.100 Burain = 0.070 Bisain = 0.050 Ehrain = 0.035 grain = 0.025 Brain = 0.015 Bussin = 0.004
Noise Scale (Tokens) 2 2 R SVIIN (SGD) - Noise Scale Comparison Bait Bairaple Broise 1o-! Train Error
Examples Processed MNIST - Pareto Fronts 10% 107 Optimization Steps Evrain = 0.300 Birain = 0.200 Exrain = 0.100 rrain = 0.050 Bisain = 0.030 Eyrain = 0.020 Exrain = 0.012 Evrain = 0.007 Bisain = 0.004 Ehrain = 0.002 Eyrain = 0.001
Noise Scale (Tokens) a] 5 5 MNIST - Noise Scale Comparison ââ Bait â Baimpte 10-1 Lo-? Train Error
Figure 14: Scaling behavior of image classiï¬cation tasks.
23
Critical Batch Size Simple Noise Scale Start Average Start Average Image Classiï¬cation: MNIST SVHN CIFAR10 ImageNet 20 50 300 1,000 200 500 900 15,000 50 300 400 4,000 900 4,000 2,000 30,000 Generative and Language Modeling: Autoencoder (SVHN) Variational Autoencoder (SVHN) Billion Word (per token) Reinforcement Learning: 10 10 700 40 200 100,000 2 10 1000 2 10 150,000 Atari (per frame) Dota 1v1 (per frame) Dota 5v5 (per frame) 100 - 1,000 50,000 (not measured) 400 - 8,000 3,000,000 >8,000,000 (est.) 100-1,000 100,000 100,000 1,000-20,000 300,000 24,000,000
Table 1: We report the simple noise scale, both early in training and averaged over a training run, as well as the critical batch size, both early in the run and at the end of the run. The noise scale provides a good estimate for the critical batch size throughout training. Batch sizes reported in number of images, tokens (for language models), or observations (for games). These data are summarized in Figure 4.
24
Temperature and the Noise Scale on SVHN (SGD) 108 0.0 0.5 10 15 2.0 2.5 Epoch
Temperature and the Noise Scale on BW (Adam) 6 16B «/16, B ¢/4, 4B 107 0.00 0.01 0.02 0.03 0.04 0.05 Epoch
Temperature and the Noise Scale on SVHN (SGD) Temperature and the Noise Scale on BW (Adam) 108 6 16B «/16, B ¢/4, 4B 107 0.0 0.5 10 15 2.0 2.5 0.00 0.01 0.02 0.03 0.04 0.05 Epoch Epoch
Figure 15: The noise scale is proportional to the inverse temperature. On the left we display results for SVHN optimized via SGD, while on the right we have an LSTM on the Billion Word dataset optimized via Adam. For each of the three curves, we modified either the learning rate ¢, the batch size B, or both, so that the temperature 5 was decreased by a factor of 16 between epochs 1 and 1.5 (SVHN) or 0.02 and 0.03 (BW). In all cases we see that the simple noise scale increased by a factor of 16, then returned to roughly its original value once â¬, B were reset.
# C Temperature and the Noise Scale
The noise scale measured during neural network training could depend on a variety of hyperparameters, such as the learning rate « or momentum. However, we have empirically found that noise scale primarily depends on ¢ and B roughly through the ratio
⬠â¬max(B)â T(e,B) = (C1)
which we refer to as the âtemperatureâ of training. The terminology reï¬ects an idea of the loss as a potential energy function, so that high temperature training explores a larger range of energies.
In the case of pure SGD it is approximated by T ~ e/B in the small batch regime. Our definition of T can then be obtained from a toy model of a quadratic loss, which is described below. In that case one can show explicitly [MHB17] that the equilibrium distribution of gradients is characterized by this temperatut
In equilibrium, the noise scales vary in proportion to the inverse temperature, so that
Bnoise â Bsimple â 1 T . (C.2)
It may seem surprising that higher temperature results in a smaller noise scale. The intuition is that at larger T the neural network parameters are further from the minimum of the loss, or higher up the âwallsâ of the potential, so that the gradient magnitude is larger relative to the variance.
Of course the loss landscape will be much more complicated than this toy model, but we have also observed that this scaling rule provides a good empirical rule of thumb, even away from pure SGD. In particular, when we decay the learning rate ⬠by a constant factor, we often find that the noise scale grows by roughly the same factor. ImageNet training provides an example in Figure[10] A more direct investigation of the relation between Byimpie and T is provided in Figure[I5]
Since the noise depends primarily on the training temperature, and well-tuned training runs should have the same temperature at different batch sizes, the measured noise scale will also be consistent between optimally- tuned runs at different batch sizes.14. The noise scale then depends only on the temperature and the loss.
13This deï¬nition can also be motivated by the empirical results of [SKYL17], which show that decaying the learning rate and increasing the batch size by the same factor have the same effect on training in the small-batch regime.
14This suggests that we can use the noise scale to deï¬ne the temperature via Equation C.2. Then, once we have tuned the learning rate and measured the noise scale at small batch size, we can tune the learning rate at larger batch sizes to the noise scale constant. Though we have not investigated this idea thoroughly, it could signiï¬cantly simplify the problem of learning rate tuning at large batch size.
25
To summarize, the noise scale does not provide an optimal training temperature schedule, but it instead prescribes an optimal batch size at any given temperature.
# A Toy Model for the Temperature
Now let us consider a simple explanation for the behavior of the noise scale in response to changes in the learning rate ⬠and batch size B. We start by approximating the loss as locally quadratic:
L (θ) = 1 2 θT Hθ + const.
where we set 0 = 0 at the minimum without loss of generality. To compute the noise scale, we need a model for the gradient covariance matrix ©. A simple model appearing in suggests treating the per- example loss L; as a shifted version of the true loss, L; (9) = L (6 â c;), where c; is a random variable with mean zero and covariance matrix 4... The gradient covariance matrix is then given by © = HSH, which is independent of 0. The average gradient itself is given by G = H9, with 6 changing in response to ⬠or B. As shown in over sufficiently long times SGD3|will approximately sample 6 from the distribution
pSGD (θ) â exp â 1 2 θT M â1θ
where the matrix M satisï¬es
â MH+HM= BU
From these results, we can estimate the noise scale:
Be _ tr(®) B tr(®) simple = IGP ~ â⬠tr (H?5) tr(L)H _ B tr(H%) GTHG © ¢ tr(H®S) Broise =
So, in this model, the noise scale is expected to increase as we decrease the learning rate or increase the batch size. We also expect that scaling the learning rate and batch size together should leave the noise scale unchanged. When B < Bsimpie, the ratio B plays the role of a âtemperatureâ. Since our analysis was only based on a toy model optimized using pure SGD, one might not expect it to work very well in practice. However, as shown in Figure[I5] we have found that it provides a quite accurate model of the dependence of the noise scale on ⬠and B during neural network training, even when using the Adan{|optimizer. For these tests, on SVHN we used an initial (e, B) = (0.18, 128) while for billion word results we used (â¬, B) = (6 x 1074, 128).
Note that this result relies on the assumption that the optimizer has approached an effective equilibrium. We expect the equilibration timescale to be larger in the directions of low curvature, so that this effect will be strongest when the gradient points mostly in the large-curvature directions of the Hessian. It would be interesting to investigate the timescale for equilibration.
# D Dynamically Varying the Batch Size
As one can see from Figure 4 and Section 3, both the measured Bnoise and Bsimple, as well as the empirical Bcrit ï¬t to Equation 2.11 all increase by at least an order of magnitude during training. Thus its natural to ask if we should expect to improve efï¬ciency by dynamically scaling the batch size B in response. We will see that the predicted gains are relatively modest unless the Bcrit changes greatly during training, although preliminary empirical tests suggest the beneï¬ts may be larger than predicted.
'SWith momentum, the same statements hold with ¢ replaced by ¢/ (1 â m). âNote that with G2 = 0.999 the Adam variance accumulators would take of order ~ 1000 steps to fully react. On the
âNote that with G2 = 0.999 the Adam variance accumulators would take of order ~ 1000 steps to fully react. On the right in Figure[I5]we changed ⬠and B for 0.01 epochs, corresponding to between 100 and 1500 optimizer steps.
26
# D.1 Theory
Consider a single full-batch optimizer step, over which the loss increases by an amount δL. If we instead use a batch of size B, it will take δS = 1 + B B optimizer steps and δE = BδS training examples to make the same amount of progress, where B is the noise scale. Over a full training run, the total number of steps and data examples processed can be written as
s=[ (i+ B= [ (Blo) + Bs))as B(s) a) ds (D.1)
where we parameterize the training trajectory by the number s of full-batch optimizer steps (we abbreviated Smin above to s for notational simplicity).
The question is how to optimally distribute the training examples over the full training trajectory. At each point along the trajectory, we have the choice of trading examples for optimizer steps by increasing or de- creasing the batch size. This âexchange rateâ between examples and steps is
r = â d dB δE d dB δS = B2(s) B(s) . (D.2)
If the distribution of training examples (and hence the batch size schedule) is optimal, then transferring examples from one part of training to another should not save any optimization steps. This means that the exchange rate r should be constant throughout training. Thus the batch size should be varied in proportion with the square root of the noise scale:
# B(s) = VrB(s).
(D.3)
We can determine the resultant Pareto front parameterizing the tradeoff between training cost and time by inserting Equation D.3 into Equation D.1 and eliminating the exchange rate17
Stot _ (rot 7 -1=9(#*-1) . (D4) Smin âmin
where we define Sinin = f ds and Eynin = f Bds to be the minimum possible number of optimizer steps and training examples needed to reach the desired level of performance, obtained by inserting B >> B and B « B respectively into[D.3] We also define
2 (J vBas) y = SW, (D.5) Smin Emin
which parameterizes the amount of variation of the noise scale over the course of training. When the noise scale is constant γ = 1 and there is no beneï¬t from using an adaptive batch size; more variation in B pushes γ closer to 0, yielding a corresponding predicted improvement in the Pareto front.18 Note that since γ involves the variation in the square root of B, practically speaking B must vary quite a bit during training for adaptive batch sizes to provide efï¬ciency beneï¬ts via these effects. Adaptive batch sizes may also have other beneï¬ts, such as replacing adaptive learning rates [SKYL17] and managing the proportion of gradient noise during training.
17The exchange rate r is a free parameter. It can be chosen according to preference from the value of training time vs compute. There is also the fairly natural choice r = Emin γ, so that cost- Smin efï¬ciency and time-efï¬ciency are both within the same factor of optimal, corresponding to the turning point in Figure 16.
18To see explicitly the dependence of γ on the variability of the noise scale, we can rewrite it as γ = B]2 , where the expectation is over a training run, weighting each full-batch step equally. 1 /E[ â 1+Ï2â B
â B]2 E[ E[B] =
27
SVHN (SGD) - Adaptive vs Fixed BS . 2 © Buin * Eurain = 04 . . . © Poin =0.008 Optimization Steps
Predicted Pareto Improvement 10 Emin 4 7) = 1.0 (baseline) g g g a 2 E Es Emin J Srain 10 Sinin Optimization Steps
SVHN (SGD) - Adaptive vs Fixed BS Predicted Pareto Improvement . 2 10 Emin 4 7) = 1.0 (baseline) © Buin g * Eurain = 04 g g . a . 2 E . Es © Poin =0.008 Emin J Srain 10 Sinin Optimization Steps Optimization Steps
Figure 16: Left: We compare training using an adaptive batch size (data points) to the hyperbolic ï¬t to Pareto fronts at ï¬xed batch size (lines). We note a modest but visible improvement to training efï¬ciency. Adaptive batch sizes appear to decrease the minimum number of optimization steps Smin, which was not anticipated by theoretical analysis. Right: Depending on the degree to which the noise scale varies over training, we can predict the potential Pareto improvement from using an adaptive batch size.
# D.2 An SVHN Case Study
We have argued that a batch size of order the noise scale can simultaneously optimize data parallelism and total resource use. We have also shown that the noise scale tends to grow quite signiï¬cantly during training. This suggests that one can further optimize resource use by adaptively scaling19 the batch size with the noise scale as training progresses, as discussed above.
For adaptive batch training, we can follow a simple and pragmatic procedure and dynamically set
# B= \V/rByimpie;
(D.6)
with Bsimple measured periodically during training. The results from dynamic batch training with this pro- cedure and various values of r are compared to ï¬xed batch size training in Figure 16. We see that adaptive training produces a modest20 efï¬ciency beneï¬t.
We can combine our ï¬xed batch size results with theoretical analysis to predict the magnitude of efï¬ciency gains that we should expect from adaptive batch size training. We displayed Bcrit for ï¬xed batch training of s, where s is the SVHN in Figure 7. We have found that these results are ï¬t very well by Bcrit(s) â 10 number of steps taken in the limit of very large batch training. Using Equation D.5, we would predict the quite modest efï¬ciency gain of
_ (fasws)â 24 = sfds/s â 25 (D.7)
or around 4%. The beneï¬ts visible in Figure 16 in some cases appear too large to be fully explained by this analysis.
In particular, the adaptive batch size seems to beneï¬t training in the regime of large batch size, decreasing the minimum number of optimization steps Smin. However, our theoretical analysis would predict negligible beneï¬ts at large Etot/Stot. This may be due to the fact that the adaptive BS schedule also âwarms upâ the learning rate, or it may be an effect of a larger and more consistent proportion of gradient noise during training. It would be interesting to disentangle these and other factors in future work, and to study adaptive batch size training on other datasets.
âThis may have an additional advantage compared to training with a fixed, large batch size: it allows for a constant proportion of gradient noise during training, and some have argued [|KMN* 16|[HHS17] that noise benefits generalization. Itâs challenging to provide a fair comparison between fixed and adaptive batch size training. Here we determined a roughly optimal relation « = g278 between learning rate « and B for fixed batch size training, and used this same 96+B function to determine the learning rate for both fixed and adaptive batch size training runs. This meant the adaptive batch size training used a corresponding adaptive learning rate. We did not experiment with learning rate schedules.
28
Optimal Step Size as a Fraction of Parameter Update 1.0 ââ Billion Word (Adam) ââ SVHN (SGD) god a 4 5 0.2 0.0 10 10% Optimization Steps
Learning Curves 2 ââ Billion Word (Adam) 3 ââ SVHN (SGD) = igs 3 z = 5 S ~ a 2 10 10? 10% Optimization Steps
Optimal Step Size as a Fraction of Parameter Update Learning Curves 1.0 ââ Billion Word (Adam) 2 ââ Billion Word (Adam) ââ SVHN (SGD) 3 ââ SVHN (SGD) = igs 3 z god = a 5 4 S 5 0.2 ~ a 2 0.0 10 10 10% 10? 10% Optimization Steps Optimization Steps
Figure 17: Left: This ï¬gure shows the magnitude of the optimal step size in the direction of the parameter update divided by the magnitude of the actual update. Optimal step sizes are determined by a line search of the loss. We show training of two quite different models with different optimizers â an LSTM trained with Adam (momentum = 0.5) on Billion Word, and a CNN trained on SVHN with SGD. In both cases, training converges to an approximate steady state where the average update is about twice the optimal update. Right: Learning curves included to clarify that this phenomenon is not due to the cessation of learning.
Smoothed Gradient Correlation - SVHN - EMA decay 0.5 a (to + 6t) 0.0 -0.1 G(to) - 7 1 + â ty = 0.0 epochs ââ t = 2.5 epochs to = 5.0 epochs 0.004 to = 7.5 epochs moothed) s = 0.003 ( 0.002 0.001 G(to + 6t)) = 0.000 0.0 0.2 04 0.6 0.8 1.0 Epochs from Start (5t) (Gt
Figure 18: The gradient exhibits rapid, long-lived oscillations over the course of training, even when using adaptive optimizers such as Adam. These oscillations are typical when optimizing functions with a large Hi- erarchy in the Hessian spectrum. We measure the moving average of the gradient with decay 0.5, computing its correlations over time. Results are shown for a simple CNN trained on SVHN using the Adam optimizer.
# E Comments on Optimization
# E.1 Deterministic Training Performs Poorly
From the discussion of the noise scale in Section [2] we expect that at large batch size B >> Buoise we can obtain a very good estimate of the gradient. This would then suggest a minimally stochastic approach to training, where at each step one performs a line search of the true loss in the direction of the true gradient, and updates parameters accordingly.
This nearly deterministic âgreedient descentâ method performs poorly in practice [WRLG18]. While its ï¬rst few steps tend to decrease the loss signiï¬cantly, subsequent step sizes decrease rapidly and provide minimal further training progress. In fact, when training with a ï¬xed learning rate, we have observed that training
29
often tends towards a regime where the optimal step size (determined by a line search in the direction of the parameter update) is almost exactly half of the actual update magnitude. We have found that this phenomenon occurs regardless of the learning rate (scanning over several orders of magnitude), and seems common to a variety of different models, as shown in Figures 17 and 18. A natural interpretation of these results is that large Hessian directions are dominating the update [GARD18], so that training involves rapid oscillations, as seen in Figure 18 (see [Goh17] for an intuitive picture of these oscillations). Because of this, line searches do not appear to be useful in determining the optimal step size for a full training run.
# E.2 Motivations for Learning Rate Scaling Rules
The learning rate scaling rule from Appendix A.2 can be motivated as follows. Equation A.3 generalizes Equation 2.6, which was derived for plain SGD. The SGD linear scaling rule (α = 1) means that the step size per data example stays ï¬xed up to Bâ; one might intuitively expect that this is necessary to avoid dimin- ishing returns as B increases. In the case of Adam21, we can use noise scale considerations to motivate the generalization to 0.5 < α < 1. The Adam update to the parameter θi takes the form
50, = «ol [Gi] Vv Eg, [G2] + â¬adam
50, Vv Eg, [G2] + â¬adam where Eg refers to an exponentially-weighted moving average with decay parameter 3, and G refers to a gradient from a batch of size B. If we disregard 3), 62, and â¬Adam, this is roughly equivalent to
sign (E[G.)) Vit ace 00; SE
where si is the variance of Gi over timesteps. If the step-to-step noise in the gradient is primarily due to batch statistics, si should scale inversely with B. Comparing with 2.6, this implies a square-root scaling rule (α = 0.5) to maintain a constant learning rate per data example. However, since β2 is often set to large values around 0.999, the second moment accumulator may not have time to adapt to quick changes in the gradient noise; this pushes α back towards 1.0. This may explain the variation in α between different tasks.
# E.3 Preliminary Tests of Generalization
Throughout the paper we have studied the noise scale as a function of the training loss or RL score. But for non-RL tasks, it is also interesting to study the relationship between the noise scale and the critical batch size associated with minimization of the test loss. The difference between the training and test results provides information about generalization. In Figure 19 we report results using the test loss for the small image classiï¬cation datasets.
As expected, early in training there is no difference between train and test results. However, at the very end of training we observe a small dip in Bcrit for the test loss, which appears to occur consistently across datasets. It would be very interesting to further investigate this phenomenon in the future.
21These considerations apply equally well to RMSProp.
30
CIFAR1O - Pareto Fronts 107 By z see Frcs, = 0.500 2 106 Eri = 0.400 â Eves, = 0.300 a Frost = 0.200 a Bees, = 0.150 gq 10 10? 10% Lot 10° Optimization Steps
CIFAR1L0 - Noise Scale Comparison â Bait B â Brimpte yg -B 4x lo-t 3x 107! 2x 1071 âTest Error
CIFAR1O - Pareto Fronts CIFAR1L0 - Noise Scale Comparison â Bait 107 B By â Brimpte z see Frcs, = 0.500 2 106 Eri = 0.400 yg â Eves, = 0.300 a Frost = 0.200 -B a Bees, = 0.150 gq 10 10? 10% Lot 10° 4x lo-t 3x 107! 2x 1071 Optimization Steps âTest Error SVHN (Adam) - Pareto Fronts SVHN (Adam) - Noise Scale Comparison 10° 4 Best ~ Een = 0200 10 â Banpte 2 107 Exes. = 0.100 g 2 8 Boss = 0.070 3 & 108 Byes, = 0.050 © 108 2 Fes, = 0.035 3 2 0° Exo, = 0.005 a Brest = 0.021 o* 102 10â 10" lot 2x 10-1 10-1 6x10"? 4.x 10-73 x 10-? Optimization Steps âTest Error SVHN (SGD) - Pareto Fronts SVHN (SGD) - Noise Scale Comparison â Bait = 0 Eye, = 0.200 10 â Baimpte a Fxcor = 0.100 â Brotse g 2 5. Eos = 0.0703 & 10 Bun, = 0.050% 498 2 Fes, = 0.035 3 EA z £ 10° Erest = 0.025 a Bees = 0021 10? 10% 10° tot 10° 108 2x 10-1 10-1 6x10"? 4.x 10-73 x 10-? Optimization Steps âTest Error MNIST - Pareto Fronts MNIST - Noise Scale Comparison. a 10* 107 â Bait sree Fy, = 0.300 Boopte Â¥ 10° Etest = 0.200 g Bees, = 0.100 2 10 3 : . ¢ & 195 Even, = 0.050 2 Eres, = 0.030 a tot Exes, = 0.020% 192 a Eves = 0.010 5 ve Byes = 0.007 10" 10! Optimization Steps Test Error
SVHN (Adam) - Pareto Fronts 10° ~ Een = 0200 2 107 Exes. = 0.100 g 8 Boss = 0.070 & 108 Byes, = 0.050 2 Fes, = 0.035 2 0° Exo, = 0.005 a Brest = 0.021 o* 10â 10" lot Optimization Steps
SVHN (Adam) - Noise Scale Comparison 4 Best 10 â Banpte 2 3 © 108 3 102 2x 10-1 10-1 6x10"? 4.x 10-73 x 10-? âTest Error
SVHN (SGD) - Pareto Fronts = 0 Eye, = 0.200 a Fxcor = 0.100 g 5. Eos = 0.0703 & 10 Bun, = 0.050% 2 Fes, = 0.035 EA £ 10° Erest = 0.025 a Bees = 0021 10% 10° tot 10° 108 Optimization Steps
SVHN (SGD) - Noise Scale Comparison â Bait 10 â Baimpte â Brotse 2 498 3 z 10? 2x 10-1 10-1 6x10"? 4.x 10-73 x 10-? âTest Error
MNIST - Pareto Fronts a 107 sree Fy, = 0.300 ¥ 10° Etest = 0.200 g Bees, = 0.100 3 : . & 195 Even, = 0.050 2 Eres, = 0.030 a tot Exes, = 0.020% a Eves = 0.010 5 ve Byes = 0.007 10" Optimization Steps
MNIST - Noise Scale Comparison. 10* â Bait Boopte 2 10 ¢ 192 10! Test Error
Figure 19: Scaling behavior of image classiï¬cation tasks, using test set goals rather than Train set goals. These results should be compared to those of Figure 14, which use training goals.
31
# References
Igor Adamski, Robert Adamski, Tomasz Grel, Adam JËedrych, Kamil Kaczmarek, and Henryk Michalewski. Distributed deep reinforcement learning: Learn how to play atari games in 21 minutes, 2018, 1801.02852. 2, 14
Dario Amodei and Danny Hernandez. AI and Compute, May 2018. URL https://blog. openai.com/ai-and-compute/. 2
Takuya Akiba, Shuji Suzuki, and Keisuke Fukuda. Extremely large minibatch sgd: Training resnet-50 on imagenet in 15 minutes, 2017, 1711.04325. 2
[BCD+18] Greg Brockman, Brooke Chan, Przemyslaw Debiak, Christy Dennison, David Farhi, Rafal Józefowicz, Jakub Pachocki, Michael Petrov, Henrique Pondé, Jonathan Raiman, Szymon Sidor, Jie Tang, Filip Wolski, and Susan Zhang. OpenAI Five, Jun 2018. URL https: //blog.openai.com/openai-five/. 1, 2, 3, 9, 13, 14, 16, 19
Léon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning, 2016, 1606.04838. 14
[BCNW12] Richard H. Byrd, Gillian M. Chin, Jorge Nocedal, and Yuchen Wu. Sample size selection in optimization methods for machine learning. Mathematical Programming, 134(1):127â155, Aug 2012. doi:10.1007/s10107-012-0572-5. 14
[BCP+16] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym, 2016, 1606.01540. 19
Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high ï¬delity natural image synthesis, 2018, 1809.11096. 2
Jimmy Ba, Roger Grosse, and James Martens. Distributed second-order optimization us- ing kronecker-factored approximations, 2017. URL https://openreview.net/forum?id= SkkTMpjex. 14
[BNVB12] Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. CoRR, 2012, 1207.4708. 13, 19
Lukas Balles, Javier Romero, and Philipp Hennig. Coupling adaptive batch sizes with learning rates, 2016, 1612.05086. 14
[CDH+16] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. In- fogan: Interpretable representation learning by information maximizing generative adversarial nets, 2016, 1606.03657. 19
[CMS+13] Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling, 2013, 1312.3005. 12, 19
[CWZ+18] Lingjiao Chen, Hongyi Wang, Jinman Zhao, Dimitris Papailiopoulos, and Paraschos Koutris. The effect of network width on the performance of large-batch training, 2018, 1806.03791. 14
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hier- archical Image Database. In CVPR09, 2009. URL http://www.image-net.org/papers/ imagenet_cvpr09.pdf. 12, 18
[DHK+17] Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, Yuhuai Wu, and Peter Zhokhov. Openai baselines. https://github.com/openai/baselines, 2017. 19
[DNG17] Aditya Devarakonda, Maxim Naumov, and Michael Garland. Adabatch: Adaptive batch sizes for training deep neural networks, 2017, 1712.02029. 14
[DYJG16] Soham De, Abhay Yadav, David Jacobs, and Tom Goldstein. Big batch sgd: Automated infer- ence using adaptive batch sizes, 2016, 1610.05792. 14
[GARD18] Guy Gur-Ari, Daniel A. Roberts, and Ethan Dyer. Gradient descent happens in a tiny subspace. 2018, arXiv:1812.04754. 30
32
[GDG+17] Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour, 2017, 1706.02677. 2, 14, 20
Gabriel Goh. Why momentum really works. Distill, 2017. doi:10.23915/distill.00006. 8, 30
Google. Tensorï¬ow ofï¬cial models. URL https://github.com/tensorflow/models/ tree/master/official/resnet. 18
Ian J. Goodfellow, Oriol Vinyals, and Andrew M. Saxe. Qualitatively characterizing neural network optimization problems, 2014, 1412.6544. 15
[GVY+18] Noah Golmant, Nikita Vemuri, Zhewei Yao, Vladimir Feinberg, Amir Gholami, Kai Rothauge, Michael W. Mahoney, and Joseph Gonzalez. On the computational inefï¬ciency of large batch sizes for stochastic gradient descent, 2018, 1811.12941. 14
Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the gener- alization gap in large batch training of neural networks. 2017, 1705.08741. 8, 28 [HMG07] Ralf Herbrich, Tom Minka, and Thore Graepel. TrueskillTM: A bayesian skill rating system. In B. Schölkopf, J. C. Platt, and T. Hoffman, editors, Advances in Neural Information Process- ing Systems 19, pages 569â576. MIT Press, 2007. URL http://papers.nips.cc/paper/ 3079-trueskilltm-a-bayesian-skill-rating-system.pdf. 19
[HNA+17]
Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kia- ninejad, Md. Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. Deep learning scaling is pre- dictable, empirically, 2017, 1712.00409. 15
[HQB+18] Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado van Has- selt, and David Silver. Distributed prioritized experience replay, 2018, 1803.00933. 2, 14
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735â1780, November 1997. doi:10.1162/neco.1997.9.8.1735. 19
[HZRS15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, 2015, 1512.03385. 12, 18, 20
Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. Are deep policy gradient algorithms truly policy gradient algorithms?, 2018, 1811.02553. 14
[JSH+18] Xianyan Jia, Shutao Song, Wei He, Yangzihao Wang, Haidong Rong, Feihu Zhou, Liqiang Xie, Zhenyu Guo, Yuanzhou Yang, Liwei Yu, Tiegang Chen, Guangxiao Hu, Shaohuai Shi, and Xiaowen Chu. Highly scalable deep learning training system with mixed-precision: Training imagenet in four minutes, 2018, 1807.11205. 2, 12, 14
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014, 1412.6980. 5, 7, 12, 17, 18, 19
[KMN+16] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima, 2016, 1609.04836. 8, 15, 28
Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. URL https: //www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf. 12, 18
Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2013, arXiv:1312.6114. 12, 19
Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL http: //yann.lecun.com/exdb/mnist/. 12, 18
[LFLY18] Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski. Measuring the intrinsic dimension of objective landscapes, 2018, 1804.08838. 8
Siyuan Ma, Raef Bassily, and Mikhail Belkin. The power of interpolation: Understanding the effectiveness of sgd in modern over-parametrized learning, 2017, 1712.06559. 14
33
[MBM+16] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforce- ment learning. CoRR, 2016, 1602.01783. 13, 19
Stephan Mandt, Matthew D. Hoffman, and David M. Blei. Stochastic gradient descent as ap- proximate bayesian inference. 2017, 1704.04289. 14, 25, 26
[NIS] Selecting sample sizes. section3/ppc333.htm. 14 URL https://www.itl.nist.gov/div898/handbook/ppc/
[NVL+15] Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding gradient noise improves learning for very deep networks, 2015, 1511.06807. 15
[NWC+11] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. 2011. URL http:// ufldl.stanford.edu/housenumbers/nips2011_housenumbers.pdf. 12, 18
[OEGA18] Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine transla- tion, 2018, 1806.00187. 2
OpenAI. More on Dota 2, Aug 2017. URL https://blog.openai.com/more-on-dota-2/. 22
[PKYC18] Raul Puri, Robert Kirby, Nikolai Yakovenko, and Bryan Catanzaro. Large scale language mod- eling: Converging on 40gb of text in four hours, 2018, 1808.01371. 2
Adam Stooke and Pieter Abbeel. Accelerated methods for deep reinforcement learning, 2018, 1803.02811. 2, 14, 19
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. CoRR, 2015, 1508.07909. 19
[SKYL17] Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying, and Quoc V. Le. Donât decay the learning rate, increase the batch size, 2017, 1711.00489. 2, 14, 25, 27
Samuel L. Smith and Quoc V. Le. A bayesian perspective on generalization and stochastic gradient descent, 2017, 1710.06451. 14
[SLA+18] Christopher J. Shallue, Jaehoon Lee, Joe Antognini, Jascha Sohl-Dickstein, Roy Frostig, and George E. Dahl. Measuring the effects of data parallelism on neural network training, 2018, arXiv:1811.03600. 8, 13, 14
Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of In Proceedings of the 30th International initialization and momentum in deep learning. Conference on Machine Learning, volume 28, pages 1139â1147, 17â19 Jun 2013. URL http://proceedings.mlr.press/v28/sutskever13.html. 18, 19
[SWD+17] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, 2017, 1707.06347. 13, 19
In ICML (3), Tom Schaul, Sixin Zhang, and Yann LeCun. No more pesky learning rates. volume 28 of JMLR Workshop and Conference Proceedings, pages 343â351. JMLR.org, 2013. URL http://jmlr.org/proceedings/papers/v28/schaul13.html. 14, 26
Ravid Shwartz-Ziv and Naftali Tishby. Opening the black box of deep neural networks via information, 2017, 1703.00810. 14
[Wik] Sample size determination. determination#Estimation_of_a_mean. 14 URL https://en.wikipedia.org/wiki/Sample_size_
[WRLG18] Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger Grosse. Understanding short-horizon bias in stochastic meta-optimization, 2018, 1803.02021. 8, 14, 29
[YGG17] Yang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks, 2017, 1708.03888. 2, 12, 14
34
[YPL+17] Dong Yin, Ashwin Pananjady, Max Lam, Dimitris Papailiopoulos, Kannan Ramchandran, and Peter Bartlett. Gradient diversity: a key ingredient for scalable distributed learning, 2017, 1706.05699. 14, 15
[YZH+17] Yang You, Zhao Zhang, Cho-Jui Hsieh, James Demmel, and Kurt Keutzer. Imagenet training in minutes, 2017, 1709.05011. 2, 14
35 | {
"id": "1811.03600"
} |
1812.05271 | TextBugger: Generating Adversarial Text Against Real-world Applications | Deep Learning-based Text Understanding (DLTU) is the backbone technique
behind various applications, including question answering, machine translation,
and text classification. Despite its tremendous popularity, the security
vulnerabilities of DLTU are still largely unknown, which is highly concerning
given its increasing use in security-sensitive applications such as sentiment
analysis and toxic content detection. In this paper, we show that DLTU is
inherently vulnerable to adversarial text attacks, in which maliciously crafted
texts trigger target DLTU systems and services to misbehave. Specifically, we
present TextBugger, a general attack framework for generating adversarial
texts. In contrast to prior works, TextBugger differs in significant ways: (i)
effective -- it outperforms state-of-the-art attacks in terms of attack success
rate; (ii) evasive -- it preserves the utility of benign text, with 94.9\% of
the adversarial text correctly recognized by human readers; and (iii) efficient
-- it generates adversarial text with computational complexity sub-linear to
the text length. We empirically evaluate TextBugger on a set of real-world DLTU
systems and services used for sentiment analysis and toxic content detection,
demonstrating its effectiveness, evasiveness, and efficiency. For instance,
TextBugger achieves 100\% success rate on the IMDB dataset based on Amazon AWS
Comprehend within 4.61 seconds and preserves 97\% semantic similarity. We
further discuss possible defense mechanisms to mitigate such attack and the
adversary's potential countermeasures, which leads to promising directions for
further research. | http://arxiv.org/pdf/1812.05271 | Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, Ting Wang | cs.CR, cs.CL, cs.LG | To appear in NDSS 2019 | null | cs.CR | 20181213 | 20181213 | 8 1 0 2
c e D 3 1 ] R C . s c [ 1 v 1 7 2 5 0 . 2 1 8 1 : v i X r a
# TEXTBUGGER: Generating Adversarial Text Against Real-world Applications
Jinfeng Li *, Shouling Ji*t =, Tianyu Du*, Bo Lit and Ting Wang? * Institute of Cyberspace Research and College of Computer Science and Technology, Zhejiang University Email: {lijinfeng0713, sji, zjradty} @zju.edu.cn t Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies = University of Illinois Urbana-Champaign, Email: Ixbosky @ gmail.com 8 Lehigh University, Email: inbox.ting@ gmail.com
AbstractâDeep Learning-based Text Understanding (DLTU) is the backbone technique behind various applications, including question answering, machine translation, and text classiï¬cation. Despite its tremendous popularity, the security vulnerabilities of DLTU are still largely unknown, which is highly concerning given its increasing use in security-sensitive applications such as senti- ment analysis and toxic content detection. In this paper, we show that DLTU is inherently vulnerable to adversarial text attacks, in which maliciously crafted texts trigger target DLTU systems and services to misbehave. Speciï¬cally, we present TEXTBUGGER, a general attack framework for generating adversarial texts. In contrast to prior works, TEXTBUGGER differs in signiï¬cant ways: (i) effective â it outperforms state-of-the-art attacks in terms of attack success rate; (ii) evasive â it preserves the utility of benign text, with 94.9% of the adversarial text correctly recognized by human readers; and (iii) efï¬cient â it generates adversarial text with computational complexity sub-linear to the text length. We empirically evaluate TEXTBUGGER on a set of real-world DLTU systems and services used for sentiment analysis and toxic content detection, demonstrating its effectiveness, evasiveness, and efï¬ciency. For instance, TEXTBUGGER achieves 100% success rate on the IMDB dataset based on Amazon AWS Comprehend within 4.61 seconds and preserves 97% semantic similarity. We further discuss possible defense mechanisms to mitigate such attack and the adversaryâs potential countermeasures, which leads to promising directions for further research.
In the meantime, DNNs-based text classiï¬cation plays a more and more important role in information understand- ing and analysis nowadays. For instance, many online rec- ommendation systems rely on the sentiment analysis of user reviews/comments [22]. Generally, such systems would classify the reviews/comments into two or three categories and then take the results into consideration when ranking movies/products. Text classiï¬cation is also important for en- hancing the safety of online discussion environments, e.g., automatically detect online toxic content [26], including irony, sarcasm, insults, harassment and abusive content.
Many studies have investigated the security of current ma- chine learning models and proposed different attack methods, including causative attacks and exploratory attacks [2, 3, 15]. Causative attacks aim to manipulate the training data thus misleading the classiï¬er itself, and exploratory attacks craft malicious testing instances (adversarial examples) so as to evade a given classiï¬er. To defend against these attacks, several mechanisms have been proposed to obtain robust classiï¬ers [5, 34]. Recently, adversarial attacks have been shown to be able to achieve a high attack success rate in image classiï¬ca- tion tasks [6], which has posed severe physical threats to many intelligent devices (e.g., self-driving cars) [10].
# INTRODUCTION
Deep neural networks (DNNs) have been shown to achieve great success in various tasks such as classiï¬cation, regres- sion, and decision making. Such advances in DNNs have led to broad deployment of systems on important problems in physical world. However, though DNNs models have exhibited state-of-the-art performance in a lot of applications, recently they have been found to be vulnerable against adversarial examples which are carefully generated by adding small perturbations to the legitimate inputs to fool the targeted models [8, 13, 20, 25, 36, 37]. Such discovery has also raised serious concerns, especially when deploying such machine learning models to security-sensitive tasks.
& Shouling Ji is the corresponding author.
Network and Distributed Systems Security (NDSS) Symposium 2019 24-27 February 2019, San Diego, CA, USA ISBN 1-891562-55-X https://dx.doi.org/10.14722/ndss.2019.23138 www.ndss-symposium.org
While existing works on adversarial examples mainly focus on the image domain, it is more challenging to deal with text data due to its discrete property, which is hard to optimize. Furthermore, in the image domain, the perturbation can often be made virtually imperceptible to human perception, causing humans and state-of-the-art models to disagree. However, in the text domain, small perturbations are usually clearly per- ceptible, and the replacement of a single word may drastically alter the semantics of the sentence. In general, existing attack algorithms designed for images cannot be directly applied to text, and we need to study new attack techniques and corresponding defenses.
Recently, some mechanisms are proposed towards generat- ing adversarial texts [19, 33]. These work proposed to generate adversarial texts by replacing a word with an out-of-vocabulary one [4, 11, 14]. Although seminal, they are limited in practice due to the following reasons: (i) they are not computationally efï¬cient, (ii) they are designed under the white-box setting, (iii) they require manual intervention, and/or (iv) they are designed against a particular NLP model and are not comprehensively evaluated. Thus, the efï¬ciency and effectiveness of current adversarial text generation techniques and the robustness of
# Task: Sentiment Analysis. Classifier: CNN. Original label: 99.8% Negative. Adversarial label: 81.0% Positive.
Text: | love these awful awf ul 80's summer camp movies. The best part about "Party Camp" is the fact that it Hteraly literaly has Ae No plot. The eH#ekes clichs here are limitless: the nerds vs. the jocks, the secret camera in the girls locker room, the hikers happening upon a nudist colony, the contest at the conclusion, the secretly horny camp administrators, and the embarrassingly embarrassing1y foolish foOlish sexual innuendo littered throughout. This movie will make you laugh, but never intentionally. | repeat, never.
Task: Sentiment Analysis. Classifier: Amazon AWS. Original label: 100% Negative. Adversarial label: 89% Positive.
Text: | watched this movie recently mainly because | am a Huge fan of Jodie Foster's. | saw this movie was made right between her 2 Oscar award winning performances, so my expectations were fairly high. Unfortunately UnfOrtunately, | thought the movie was terrible terrib1e and I'm still left wondering how she was ever persuaded to make this movie. The script is really weak wea k.
# Task: Toxic Content Detection.
# Classifier: LSTM.
# Original label: 96.7% Toxic.
# Adversarial label:
# 83.5% Non-toxic.
Text: hello how are you? have you had sexta! sexual-intercourse relations with any black men recently?
# Task: Toxic Content Detection.
# Classifier: Perspective.
# Original label: 92% Toxic.
# Adversarial label: 78% Non-toxic.
Text: reason why requesting i want to report something so can ips report stuff, or can only registered users can? if only registered users can, then i âIll request an account and it 's just not fair that i cannot edit because of this anon block shit shti c'mon, fuekiag fuckimg hell helled.
Fig. 1. Adversarial examples against two natural language classiï¬cation tasks. Replacing a fraction of the words in a document with adversarially-chosen bugs fools classiï¬ers into predicting an incorrect label. The new document is classiï¬ed correctly by humans and preserves most of the original meaning although it contains small perturbations.
popular text classiï¬cation models need to be studied.
In this paper, we propose TEXTBUGGER, a framework that can effectively and efï¬ciently generate utility-preserving (i.e., keep its original meaning for human readers) adversarial texts against state-of-the-art text classiï¬cation systems under both white-box and black-box settings. In the white-box scenario, we ï¬rst ï¬nd important words by computing the Jacobian matrix of the classiï¬er and then choose an optimal perturbation from the generated ï¬ve kinds of perturbations. In the black-box scenario, we ï¬rst ï¬nd the important sentences, and then use a scoring function to ï¬nd important words to manipulate. Through extensive experiments under both settings, we show that an adversary can deceive multiple real-world online DLTU systems with the generated adversarial texts1, including Google Cloud NLP, Microsoft Azure Text Analytics, IBM Watson Nat- ural Language Understanding and Amazon AWS Comprehend, etc. Several adversarial examples are shown in Fig. 1. The existence of such adversarial examples causes a lot of concerns for text classiï¬cation systems and seriously undermines their usability.
efï¬cient. For instance, TEXTBUGGER achieves 100% attack success rate on the IMDB dataset when target- ing the Amazon AWS and Microsoft Azure platforms under black-box settings. We shows that transferability also exists in the text domain and the adversarial texts generated against ofï¬ine models can be successfully transferred to multiple popular online DLTU systems.
⢠We conduct a user study on our generated adversarial texts and show that TEXTBUGGER has little impact on human understanding.
⢠We further discuss two potential defense strategies to defend against the above attacks along with prelimi- nary evaluations. Our results can encourage building more robust DLTU systems in the future.
# II. ATTACK DESIGN
A. Problem Formulation
Our Contribution. Our main contributions can be sum- marized as follows.
⢠We propose TEXTBUGGER, a framework that can effectively and efï¬ciently generate utility-preserving adversarial texts under both white-box and black-box settings.
Given a pre-trained text classification model F : Â¥ > J, which maps from feature space 1 to a set of classes Y, an adversary aims to generate an adversarial document aay from a legitimate document 2 ⬠V whose ground truth label is y ⬠Y, so that F(@aav) = t (t # y). The adversary also requires S(x,®adv) > ⬠for a domain-specific similarity function S' : XxX x X& â R4,, where the bound ¢ ⬠R captures the notion of utility-preserving alteration. For instance, in the context of text classification tasks, we may use S to capture the semantic similarity between a and &adv.
⢠We evaluate TEXTBUGGER on a group of state-of- the-art machine learning models and popular real- world online DLTU applications, including sentiment analysis and toxic content detection. Experimental results show that TEXTBUGGER is very effective and
B. Threat Model
We consider both white-box and black-box settings to evaluate different adversarial abilities.
1We have reported our ï¬ndings to their companies, and they replied that they would ï¬x these bugs in the next version.
White-box Setting. We assume that attackers have com- plete knowledge about the targeted model including the model
2
rallelDots | Al APIs Sentiment Analysis Sentiment Analysis Understand the social sentiment of your brand, product or service -â_Understand the social sentiment of your brand, product or service Negative Negative 74.80 % 74.80 %
Fig. 2. platform, which is a black-box scenario. ParallelDots API: An example of deep learning text classiï¬cation
architecture parameters. White-box attacks ï¬nd or approximate the worst-case attack for a particular model and input based on the kerckhoffâs principle [35]. Therefore, white-box attacks can expose a modelâs worst case vulnerabilities.
Black-box Setting. With the development of machine learning, many companies have launched their own Machine- Learning-as-a-Service (MLaaS) for DLTU tasks such as text classiï¬cation. Generally, MLaaS platforms have similar system design: the model is deployed on the cloud servers, and users can only access the model via an API. In such cases, we assume that the attacker is not aware of the model architecture, parameters or training data, and is only capable of querying the target model with output as the prediction or conï¬dence scores. Note that the free usage of the API is limited among these platforms. Therefore, if the attackers want to conduct practical attacks against these platforms, they must take such limitation and cost into consideration. Speciï¬cally, we take the ParallelDots2 as an example and show its sentiment analysis API and the abusive content classiï¬er API in Fig. 2. From Fig. 2, we can see that the sentiment analysis API would return the conï¬dence value of three classes, i.e., âpositiveâ, âneutralâ and ânegativeâ. Similarly, the abusive content classiï¬er would return the conï¬dence value of two classes, i.e., âabusiveâ and ânon abusiveâ. For both APIs, the sum of conï¬dence values of an instance equal to 1, and the class with the highest conï¬dence value is considered as the inputâs class.
# C. TEXTBUGGER
We propose efï¬cient strategies to change a word slightly, which is sufï¬cient for creating adversarial texts in both white- box settings and black-box settings. Speciï¬cally, we call the slightly changed words âbugsâ.
1) White-box Attack: We ï¬rst ï¬nd important words by computing the Jacobian matrix of the classiï¬er F, and generate ï¬ve kinds of bugs. Then we choose an optimal bug in terms of the change of the conï¬dence value. The algorithm of white- box attack is shown in Algorithm 1.
Step 1: Find Important Words (line 2-5). The ï¬rst step is to compute the Jacobian matrix for the given input text x = (x1, x2, · · · , xN ) (line 2-4), where xi is the ith word, and N represents the total number of words within the input text. For a text classiï¬cation task, the output of F is more than one dimension. Therefore the matrix is as follows:
Irla) = Fa) _ [2zte) (yy Ox; a
2https://www.paralleldots.com/
3
Algorithm 1 TEXTBUGGER under white-box settings Input: legitimate document a and its ground truth label y, classifier F(-), threshould ⬠Output: adversarial document 2gqy : Inititialize: 2â + x for word x; in x do Compute C,, according to Eq[2} end for Worderea â Sort(x1,%2,°++ : for x; in Worderea AO bug = SelectBug(a;, xâ, y, F(-)); xâ < replace x; with bug in xâ if S(a, aâ) < « then Return None. else if F(xâ) # y then 12: Solution found. Return xâ. 13: end if 14: end for 15: return ;Zm) according to C,,,; ro MRP ADYMARYN
where K represents the total number of classes in Y, and Fj(·) represents the conï¬dence value of the jth class. The importance of word xi is deï¬ned as:
Cxi = JF (i,y) = âFy(x) âxi (2)
i.e., the partial derivative of the conï¬dence value based on the predicted class y regarding to the input word xi. This allows us to ï¬nd the important words that have signiï¬cant impact on the classiï¬erâs outputs. Once we have calculated the importance score of each word within the input sequences, we sort these words in inverse order according to the importance value (line 5).
Step 2: Bugs Generation (line 6-14). To generate bugs, many operations can be used. However, we prefer small changes to the original words as we require the generated adversarial sentence is visually and semantically similar to the original one for human understanding. Therefore, we consider two kinds of perturbations, i.e., character-level perturbation and word-level perturbation.
For character-level perturbation, one key observation is that words are symbolic, and learning-based DLTU systems usually use a dictionary to represent a ï¬nite set of possible words. The size of the typical word dictionary is much smaller than the possible combinations of characters at a similar length (e.g., about 26n for the English case, where n is the length of the word). This means if we deliberately misspell important words, we can easily convert those important words to âunknownâ (i.e., words not in the dictionary). The unknown words will be mapped to the âunknownâ embedding vector in deep learning modeling. Our results strongly indicate that such simple strategy can effectively force text classiï¬cation models to behave incorrectly.
For word-level perturbation, we expect that the classiï¬er can be fooled after replacing a few words, which are obtained by nearest neighbor searching in the embedding space, without changing the original meaning. However, we found that in some word embedding models (e.g., word2vec), semantically opposite words such as âworstâ and âbetterâ are highly syn- tactically similar in texts, thus âbetterâ would be considered as
the nearest neighbor of âworstâ. However, changing âworstâ to âbetterâ would completely change the sentiment of the input text. Therefore, we make use of a semantic-preserving technique, i.e., replace the word with its topk nearest neighbors in a context-aware word vector space. Speciï¬cally, we use the pre-trained GloVe model [30] provided by Stanford for word embedding and set topk = 5 in the experiment. Thus, the neighbors are guaranteed to be semantically similar to the original one.
According to previous studies, the meaning of the text is very likely to be preserved or inferred by the reader after a few character changes [31]. Meanwhile, replacing words with semantically and syntactically similar words can ensure that the examples are perceptibly similar [1]. Based on these observations, we propose ï¬ve bug generation methods for TEXTBUGGER: (1) Insert: Insert a space into the word3. Gen- erally, words are segmented by spaces in English. Therefore, we can deceive classiï¬ers by inserting spaces into words. (2) Delete: Delete a random character of the word except for the ï¬rst and the last character. (3) Swap: Swap random two adjacent letters in the word but do not alter the ï¬rst or last letter4. This is a common occurrence when typing quickly and is easy to implement. (4) Substitute-C (Sub-C): Replace characters with visually similar characters (e.g., replacing âoâ with â0â, âlâ with â1â, âaâ with â@â) or adjacent characters in the keyboard (e.g., replacing âmâ with ânâ). (5) Substitute-W (Sub-W): Replace a word with its topk nearest neighbors in a context-aware word vector space. Several substitute examples are shown in Table I.
As shown in Algorithm after generating five bugs, we choose the optimal bug according to the change of the confidence value, i.e., choosing the bug that decreases the confidence value of the ground truth class the most. Then we will replace the word with the optimal bug to obtain a new text xâ (line 8). If the classifier gives the new text a different label (i.e., Fi(aâ) # y) while preserving the semantic similarity (which is detailed in Section above the threshold (i.e., S(a,axâ) > ©), the adversarial text is found (line 9-13). If not, we repeat above steps to replace the next word in Wordered until we find the solution or fail to find a semantic-preserving adversarial example.
1: function SELECTBUG(w, x, y, F(·)) bugs = BugGenerator(w); 2: for bk in bugs do 3: 4: 5: 6: 7: 8: 9: end function candidate(k) = replace w with bk in x; score(k) = Fy(x) â Fy(candidate(k)); end for bugbest = arg maxbk return bugbest; score(k);
2) Black-box Attack: Under the black-box setting, gradients of the model are not directly available, and we need to change the input sequences directly without the guidance of
3Considering the usability of text, we apply this method only when the length of the word is shorter than 6 characters since long words might be split into two legitimate words.
4For this reason, this method is only applied to words longer than 4 letters.
4
EXAMPLES FOR FIVE BUG GENERATION METHODS.
Original Insert Delete Swap Sub-C Sub-W foolish awfully cliches f oolish awfull y clich es folish awfuly clichs fooilsh awï¬uly clcihes fo0lish awfu1ly c1iches silly terribly cliche
Algorithm 3 TEXTBUGGER under black-box settings Input: legitimate document x and its ground truth label y,
Input: legitimate document a and its ground truth label y, classifier F(-), threshould â¬
classifier F(-), threshould ⬠Output: adversarial document %adv 1: Inititialize: 2â + x 2: for s; in document 2 do 3 Ose i) = Fy (sa); 4: end for 5: Sordered <- Sort(sentences) according to Csentence(t); 6: Delete sentences in Sorderea if Fy(si) # ys 7: for s; in Sorderea dO 8: for w; in s; do 9: Compute C;,,, according to Eq 10: end for ll: Worderea Sort(words) according to Cy,; 12: for w; in Worderea dO 13: bug = SelectBug(w;, xâ, y, F(-)); 14: aâ â replace w; with bug in aâ 15: if S(a@, aâ) < ⬠then 16: Return None. 17: else if F,(xâ) 4 y then 18: Solution found. Return aâ. 19: end if 20: end for 21: end for 22: return
for wj in si do
end for Wordered â Sort(words) according to Cwj ; for wj in Wordered do
bug = SelectBug(w;, xâ, y, F(-)); aâ â replace w; with bug in aâ if S(a@, aâ) < ⬠then Return None.
# else if F,(xâ) 4 y then
Solution found. Return aâ.
gradients. Therefore different from white-box attacks, where we can directly select important words based on gradient information, in black-box attacks, we will ï¬rst ï¬nd important sentences and then the important words within them. Brieï¬y, the process of generating word-based adversarial examples on text under black-box setting contains three steps: (1) Find the important sentences. (2) Use a scoring function to determine the importance of each word regarding to the classiï¬cation result, and rank the words based on their scores. (3) Use the bug selection algorithm to change the selected words. The black-box adversarial text generation algorithm is shown in Algorithm 3.
Step 1: Find Important Sentences (line 2-6). Generally, when people express their opinions, most of the sentences are describing facts and the main opinions usually depend on only a few of sentences which have a greater impact on the classiï¬cation results. Therefore, to improve the efï¬ciency of TEXTBUGGER, we ï¬rst ï¬nd the important sentences that con- tribute to the ï¬nal prediction results most and then prioritize to manipulate them.
Suppose the input document x = (s1, s2, · · · , sn), where si represents the sentence at the ith position. First, we use the spaCy library5 to segment each document into sentences. Then we ï¬lter out the sentences that have different predicted
# 5http://spacy.io
Fig. 3. Illustration of how to select important words to apply perturbations for the input sentence âIt is so laddish and juvenile, only teenage boys could possibly ï¬nd it funnyâ. The sentiment score of each word is the classiï¬cation resultâs conï¬dence value of the new text that deleting the word from the original text. The contribution of each word is the difference between the new conï¬dence score and the original conï¬dence score.
labels with the original document label (i.e., filter out F;(s;) A y). Then, we sort the important sentences in an inverse order according to their importance score. The importance score of a sentence s; is represented with the confidence value of the predicted class Fy, i.e., Cs, = Fy(si).
Step 2: Find Important Words (line 8-11). Considering the vast search space of possible changes, we should ï¬rst ï¬nd the most important words that contribute the most to the original prediction results, and then modify them slightly by controlling the semantic similarity.
One reasonable choice is to directly measure the effect of removing the ith word, since comparing the prediction before and after removing a word reï¬ects how the word inï¬uences the classiï¬cation result as shown in Fig. 3. Therefore, we introduce a scoring fuction that determine the importance of the jth word in x as:
Cwj = Fy(w1, w2, · · ·, wm) âFy(w1, · · ·, wjâ1, wj+1, · · ·, wm) (3)
The proposed scoring function has the following properties: (1) It is able to correctly reï¬ect the importance of words for the prediction, (2) it calculates word scores without the knowledge of the parameters and structure of the classiï¬cation model, and (3) it is efï¬cient to calculate.
Step 3: Bugs Generation (line 12-20). This step is similar as that in white-box setting.
III. ATTACK EVALUATION: SENTIMENT ANALYSIS
Sentiment analysis refers to the use of NLP, statistics, or machine learning methods to extract, identify or characterize the sentiment content of a text unit. It is widely applied to helping a business understand the social sentiment of their products or services by monitoring online conversations.
In this section, we investigate the practical performance of the proposed method for generating adversarial texts for sentiment analysis. We start with introducing the datasets, targeted models, baseline algorithms, evaluation metrics and implementation details. Then we will analyze the results and discuss potential reasons for the observed performance.
5
A. Datasets
We study adversarial examples of text on two popular public benchmark datasets for sentiment analysis. The ï¬nal adversarial examples are generated and evaluated on the test set.
IMDB [21]. This dataset contains 50,000 positive and negative movie reviews that crawled from online sources, with 215.63 words as average length for each sample. It has been divided into two parts, i.e., 25,000 reviews for training and 25,000 reviews for testing. Speciï¬cally, we held out 20% of the training set as a validation set and all parameters are tuned based on it.
Rotten Tomatoes Movie Reviews (MR) [27]. This dataset is a collection of movie reviews collected by Pang and Lee in [27]. It contains 5,331 positive and 5,331 negative processed sentences/snippets and has an average length of 32 words. In our experiment, we divide this dataset into three parts, i.e., 80%, 10%, 10% as training, validation and testing, respec- tively.
B. Targeted Models
For white-box attacks, we evaluated TEXTBUGGER on LR, Kimâs CNN [17] and the LSTM used in [38]. In our imple- mentation, the modelâs parameters are ï¬ne-tuned according to the sensitivity analysis on model performance conducted by Zhang et al. [39]. Meanwhile, all models were trained in a hold-out test strategy, and hyper-parameters were tuned only on the validation set.
For black-box attacks, we evaluated the TEXTBUGGER on ten sentiment analysis platforms/models, i.e., Google Cloud NLP, IBM Waston Natural Language Understanding (IBM Watson), Microsoft Azure Text Analytics (Microsoft Azure), Amazon AWS Comprehend (Amazon AWS), Facebook fast- Text (fastText), ParallelDots, TheySay Sentiment, Aylien Sen- timent, TextProcessing, and Mashape Sentiment. For fastText, we used a pre-trained model6 provided by Facebook. This model is trained on the Amazon Review Polarity dataset and we do not have any information about the modelsâ parameters or architecture.
C. Baseline Algorithms
We implemented and compared the other three methods with our white-box attack method. In total, the three methods are: (1) Random: Randomly selects words to modify. For each sentence, we select 10% words to modify. (2) FGSM+Nearest Neighbor Search (NNS): The FGSM method was ï¬rst pro- posed in [13] to generate adversarial images, which adds to the whole image the noise that is proportional to sign(â(Lx)), where L represent the loss function and x is the input data. It was combined with NNS to generate adversarial texts as in [12]: ï¬rst, generating adversarial embeddings by applying FGSM on the embedding vector of the texts, then recon- structing the adversarial texts via NNS. (3) DeepFool+NNS: The DeepFool method is ï¬rst proposed in [24] to generate adversarial images, which iteratively ï¬nds the optimal direction to search for the minimum distance to cross the decision
6https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised models/ amazon review polarity.bin
boundary. It was combined with NNS to generate adversarial texts as in [12].
D. Evaluation Metrics
We use four metrics, i.e., edit distance, Jaccard similarity coefï¬cient, Euclidean distance and semantic similarity, to evaluate the utility of the generated adversarial texts. Specif- ically, the edit distance and Jaccard similarity coefï¬cient are calculated on the raw texts, while the Euclidean distance and semantic similarity are calculated on word vectors.
Edit Distance. Edit distance is a way of quantifying how dissimilar two strings (e.g., sentences) are by counting the min- imum number of operations required to transform one string to the other. Speciï¬cally, different deï¬nitions of the edit distance use different sets of string operations. In our experiment, we use the most common metrics, i.e., the Levenshtein distance, whose operations include removal, insertion, and substitution of characters in the string.
Jaccard Similarity Coefï¬cient. The Jaccard similarity coefï¬cient is a statistic used for measuring the similarity and diversity of ï¬nite sample sets. It is deï¬ned as the size of the intersection divided by the size of the union of the sample sets:
J(A, B) = |A â© B| |A ⪠B| = |A â© B| |A| + |B| â |A â© B| (4)
Larger Jaccard similarity coefï¬cient means higher sample similarity. In our experiment, one sample set consists of all the words in the sample.
Euclidean Distance. Euclidean distance is a measure of the true straight line distance between two points in the Euclidean space. If p = (p1, p2, · · · , pn) and q = (q1, q2, · · · , qn) are two samples in the word vector space, then the Euclidean distance between p and q is given by:
A(p, a) = V (pi = 11)? + (P2 = G2)? +++ + (Pn = Gn)? (5)
(5) In our experiment, the Euclidean space is exactly the word vector space.
Semantic Similarity. The above three metrics can only reï¬ect the magnitude of the perturbation to some extent. They cannot guarantee that the generated adversarial texts will preserve semantic similarity from original texts. Therefore, we need a ï¬ne-grained metric that measures the degree to which two pieces of text carry the similar meaning so as to control the quality of the generated adversarial texts.
In our experiment, we ï¬rst use the Universal Sentence Encoder [7], a model trained on a number of natural language prediction tasks that require modeling the meaning of word sequences, to encode sentences into high dimensional vectors. Then, we use the cosine similarity to measure the semantic similarity between original texts and adversarial texts. The cosine similarity of two n-dimensional vectors p and q is deï¬ned as:
p-4 Die Pi X Gi lpi lial | JOE @)? x VEG? S(p, 4) (6)
Generally, it works better than other distance measures because the norm of the vector is related to the overall frequency of
6
which words occur in the training corpus. The direction of a vector and the cosine distance is unaffected by this, so a common word like âfrogâ will still be similar to a less frequent word like âAnuraâ which is its scientiï¬c name.
Since our main goal is to successfully generate adversarial texts, we only need to control the semantic similarity to be above a speciï¬c threshold.
# E. Implementation
We conducted the experiments on a server with two Intel Xeon E5-2640 v4 CPUs running at 2.40GHz, 64 GB memory, 4TB HDD and a GeForce GTX 1080 Ti GPU card. We repeated each experiment 5 times and report the mean value. This replication is important because training is stochastic and thus introduces variance in performance [39].
In our experiment, we did not filter out stop-words before feature extraction as most NLP tasks do. This is because we observe that the stop-words also have impact on the prediction results. In particular, our experiments utilize the 300-dimension GloVe embedding¢â| trained on 840 billion tokens of Common Crawl. Words not present in the set of pre-trained words are initialized by randomly sampling from the uniform distribution in [-0.1, 0.1]. Furthermore, the semantic similarity threshold «⬠is set as 0.8 to guarantee a good trade-off between quality and strength of the generated adversarial text.
F. Attack Performance
Effectiveness and Efï¬ciency. The main results of white- box attacks on the IMDB and MR datasets and comparison of the performance of baseline methods are summarized in Table II, where the third column of Table II shows the original model accuracy in non-adversarial setting. We do not give the average time of generating one adversarial example under white-box settings since the models are ofï¬ine and the attack is very efï¬cient (e.g., generating hundreds of adversarial texts in one second). From Table II, we can see that randomly choosing words to change (i.e., Random in Table II) has hardly any inï¬uence on the ï¬nal result. This implies randomly changing words would not fool classiï¬ers and choosing important words to modify is necessary for successful attack. From Table II, we can also see that the targeted models all perform quite well in non-adversarial setting. However, the adversarial texts generated by TEXTBUGGER still has high attack success rate on these models. In addition, the linear model is more suscepti- ble to adversarial texts than deep learning models. Speciï¬cally, TEXTBUGGER only perturbs a few words to achieve a high attack success rate and performs much better than baseline algorithms against all models as shown in Table II. For instance, it only perturbs 4.9% words of one sample when achieving 95.2% success rate on the IMDB dataset against the LR model, while all baselines achieve no more than 42% success rate in this case. As the IMDB dataset has an average length of 215.63 words, TEXTBUGGER only perturbed about 10 words for one sample to conduct successful attacks. This means that TEXTBUGGER can successfully mislead the classiï¬ers into assigning signiï¬cantly higher positive scores to the negative reviews via subtle manipulation.
7http://nlp.stanford.edu/projects/glove/
RESULTS OF THE WHITE-BOX ATTACKS ON IMDB AND MR DATASETS.
Random FGSM+NNS [12] DeepFool+NNS [12] TEXTBUGGER Model Dataset Accuracy Success Rate Perturbed Word Success Rate Perturbed Word Success Rate Perturbed Word Success Rate Perturbed Word LR MR IMDB 73.7% 82.1% 2.1% 2.7% 10% 10% 32.4% 41.1% 4.3% 8.7% 35.2% 30.0% 4.9% 5.8% 92.7% 95.2% 6.1% 4.9% CNN MR IMDB 78.1% 89.4% 1.5% 1.3% 10% 10% 25.7% 36.2% 7.5% 10.6% 28.5% 23.9% 5.4% 2.7% 85.1% 90.5% 9.8% 4.2% LSTM MR IMDB 80.1% 90.7% 1.8% 0.8% 10% 10% 25.0% 31.5% 6.6% 9.0% 24.4% 26.3% 11.3% 3.6% 80.2% 86.7% 10.2% 6.9%
TABLE III. RESULTS OF THE BLACK-BOX ATTACK ON IMDB.
Targeted Model Original Accuracy DeepWordBug [11] TEXTBUGGER Success Rate Time (s) Perturbed Word Success Rate Time (s) Perturbed Word Google Cloud NLP IBM Waston Microsoft Azure Amazon AWS Facebook fastText ParallelDots TheySay Aylien Sentiment TextProcessing Mashape Sentiment 85.3% 89.6% 89.6% 75.3% 86.7% 63.5% 86.0% 70.0% 81.7% 88.0% 43.6% 34.5% 56.3% 68.1% 67.0% 79.6% 9.5% 63.8% 57.3% 31.1% 266.69 690.59 182.08 43.98 0.14 812.82 888.95 674.21 303.04 585.72 10% 10% 10% 10% 10% 10% 10% 10% 10% 10% 70.1% 97.1% 100.0% 100.0% 85.4% 92.0% 94.3% 90.0% 97.2% 65.7% 33.47 99.28 23.01 4.61 0.03 129.02 134.03 44.96 59.42 117.13 1.9% 8.6% 5.7% 1.2% 5.0% 2.2% 4.1% 1.4% 8.9% 6.1%
TABLE IV. RESULTS OF THE BLACK-BOX ATTACK ON MR.
Targeted Model Original Accuracy DeepWordBug [11] TEXTBUGGER Success Rate Time (s) Perturbed Word Success Rate Time (s) Perturbed Word Google Cloud NLP IBM Waston Microsoft Azure Amazon AWS Facebook fastText ParallelDots TheySay Aylien Sentiment TextProcessing Mashape Sentiment 76.7% 84.0% 67.5% 73.9% 89.5% 54.5% 72.3% 65.3% 77.6% 72.0% 67.3% 70.8% 71.3% 69.1% 37.0% 76.6% 56.3% 65.2 38.1% 73.6% 34.64 150.45 43.98 39.62 0.02 150.89 69.61 83.63 59.44 113.54 10% 10% 10% 10% 10% 10% 10% 10% 10% 10% 86.9% 98.8% 96.8% 95.7% 65.5% 91.7% 90.2% 94.1% 87.0% 94.8% 13.85 43.59 12.46 3.25 0.01 70.56 30.12 13.71 12.36 18.24 3.8% 4.6% 4.2% 4.8% 3.9% 4.2% 3.1% 3.5% 5.7% 5.1%
The main results of black-box attacks on the IMDB and MR datasets and comparison of the performance of different methods are summarized in Tables III and IV respectively, and the second column of which shows the original model accuracy in non-adversarial setting. From Tables III and IV, we can see that TEXTBUGGER achieves high attack success rate and performs much better than DeepWordBug [11] against all real-world online DLTU platforms. For instance, it achieves 100% success rate on the IMDB dataset when targeting Azure and AWS platforms, while DeepWordBug only achieves 56.3% and 68.1% success rate respectively. Besides, TEXTBUGGER only perturbs a few words to achieve a high success rate as shown in Tables III and IV. For instance, it only perturbs 7% words of one sample when achieving 96.8% success rate on the MR dataset targeting the Microsoft Azure platform. As the MR dataset has an average length of 32 words, TEXTBUGGER only perturbed about 2 words for one sample to conduct successful attacks. Again, that means an adversary can subtly modify highly negative reviews in a way that the classiï¬er assigns signiï¬cantly higher positive scores to them.
impact of document length on the effectiveness and efï¬ciency of the attacks and the corresponding results are shown in Fig. 4. From Fig. 4(a), we can see that the document length has little impact on the attack success rate. This implies attackers can achieve high success rate no matter how long the sample is. However, the conï¬dence value of prediction results decrease for IBM Watson and Google Cloud NLP as shown in Fig. 4(b). This means the attack on long documents would be a bit weaker than that on short documents. From Fig. 4(c), we can see that the time required for generating one adversarial text and the average length of documents are positively correlated overall for Microsoft Azure and Google Cloud NLP. There is a very intuitive reason: the longer the length of the document is, the more information it contains that may need to be modiï¬ed. Therefore, as the length of the document grows, the time required for generating one adversarial text increases slightly, since it takes more time to ï¬nd important sentences. For IBM Watson, the run time ï¬rst increases before 60 words, then vibrates after that. We carefully analyzed the generated adversarial texts and found that when the document length is less than 60 words, the total length of the perturbed sentences increases sharply with the growth of
# The Impact of Document Length. We also study the
7
(a) Success Rate (b) Score (c) Time
Fig. 4. The impact of document length (i.e. number of words in a document) on attackâs performance against three online platforms: Google Cloud NLP, IBM Watson and Microsoft Azure. The sub-ï¬gures are: (a) the success rate and document length, (b) the change of negative classâs conï¬dence value. For instance, the original text is classiï¬ed as negative with 90% conï¬dence, while the adversarial text is classiï¬ed as positive with 80% conï¬dence (20% negative), the score changes 0.9-0.2=0.7. (c) the document length and the average time of generating an adversarial text.
(a) IMDB (b) IMDB (c) MR (d) MR
Fig. 5. The change of sentiment score evaluated on IMDB and MR datasets for 5 black-box platforms/models. For Google Cloud NLP (Google), IBM Watson (Watson), the range of ânegativeâ score is [-1, 0] and the range of âpositiveâ score is [0, 1]. For Microsoft Azure (Azure), the range of ânegativeâ score is [0, 0.5] and the range of âpositiveâ score is [0.5, 1]. For Amazon AWS (AWS) and fastText, the range of ânegativeâ score is [0.5, 1] and the range of âpositiveâ score is [0, 0.5].
document length. However, when the document length exceeds 60 words, the total length of the perturbed sentences changes negligibly. In general, generating one adversarial text only needs no more than 100 seconds for all the three platforms while the maximum length of a document is limited to 200 words. This means TEXTBUGGER method is very efï¬cient in practice.
Score Distribution. Even though TEXTBUGGER fails to convert the negative reviews to positive reviews in some cases, it can still reduce the conï¬dence value of the classiï¬cation results. Therefore, we computed the change of the conï¬dence value over all the samples including the failed samples before and after modiï¬cation and show the results in Fig. 5. From Fig. 5, we can see that the overall score of the texts has been moved to the positive direction.
Adversarial Text Examples. Two successful examples for sentiment analysis are shown in Fig. 1. The ï¬rst ad- versarial text for sentiment analysis in Fig. 1 contains six modiï¬cations, i.e., one insert operation (âawfulâ to âaw fulâ), one Sub-W operation (ânoâ to âNoâ), two delete operations (âliterallyâ to âliteralyâ, âclichesâ to âclichsâ), and two Sub-C operations (âembarrassinglyâ to âembarrassing1yâ, âfoolishâ to âfo0lishâ). These modiï¬cations successfully convert the prediction result of the CNN model, i.e., from 99.8% negative to 81.0% positive. Note that the modiï¬cation from ânoâ to âNoâ only capitalizes the ï¬rst letter but really affects the prediction result. After further analysis, we ï¬nd capitalization operation is common for both ofï¬ine models and online platforms. We guess the embedding model may be trained without changing uppercase letters to lowercase, thus causing the same word in different forms get two different word vectors. Furthermore, capitalization sometimes may cause out- of-vocabulary phenomenon. The second adversarial text for sentiment analysis in Fig. 1 contains three modiï¬cations, i.e., one insert operation (âweakâ to âwea kâ) and two Sub- C operations (âUnfortunatelyâ to âUnf0rtunatelyâ, âterribleâ to âterrib1eâ). These modiï¬cations successfully convert the prediction result of the Amazon AWS sentiment analysis API.
G. Utility Analysis
For white-box attacks, the similarity between original texts and adversarial texts against LR, CNN and LSTM models are shown in Figs. 6 and 7. We do not compare TEXTBUGGER with baselines in terms of utility since baselines only achieve low success rate as shown in Table V. From Figs. 6(a), 6(b), 7(a) and 7(b), we can see that adversarial texts preserve good utility in terms of word-level. Speciï¬cally, Fig. 6(a) shows that almost 80% adversarial texts have no more than 25 edit distance comparing with original texts for LR and CNN models. Meanwhile, Figs. 6(c), 6(d), 7(c) and 7(d) show that adversarial texts preserve good utility in terms of vector- level. Speciï¬cally, from Fig. 6(d), we can see that almost 90% adversarial texts preserve at least 0.9 semantic similarity of the original texts. This indicates that TEXTBUGGER can generate utility-preserving adversarial texts which fool the classiï¬ers with high success rate.
For black-box attacks, the average similarity between orig- inal texts and adversarial texts against 10 platforms/models are shown in Figs. 8 and 9. From Figs. 8(a), 8(b), 9(a) and 9(b), we can see that the adversarial texts generated by
8
(a) Edit Distance (b) Jaccard Coefï¬cient (c) Euclidean Distance (d) Semantic Similarity
Fig. 6. white-box settings for LR, CNN and LSTM models. The utility of adversarial texts generated on IMDB dataset under
(a) Edit Distance (b) Jaccard Coefï¬cient (c) Euclidean Distance (d) Semantic Similarity
Fig. 7. The utility of adversarial texts generated on MR dataset under white- box settings for LR, CNN and LSTM models.
TEXTBUGGER are more similar to original texts than that generated by DeepWordBug in word-level. From Figs. 8(c), texts 8(d), 9(c) and 9(d) we can see that generated by TEXTBUGGER are more similar to original texts than that generated by DeepWordBug in the word vector space. These results implies that the adversarial texts generated by TEXTBUGGER preserve more utility than that generated by DeepWordBug. One reason is that DeepWordBug randomly chooses a bug from generated bugs, while TEXTBUGGER chooses the optimal bug that can change the prediction score most. Therefore, DeepWordBug needs to manipulate more words than TEXTBUGGER to achieve successful attack.
The Impact of Document Length. We also study the impact of word length on the utility of generated adversarial texts and show the results in Fig. 10. From Fig. 10(a), for IBM Watson and Microsoft Azure, we can see that the number of
9
(a) Edit Distance (b) Jaccard Coefï¬cient (c) Euclidean Distance (d) Semantic Similarity
Fig. 8. The average utility of adversarial texts generated on IMDB dataset under black-box settings for 10 platforms.
(a) Edit Distance (b) Jaccard Coefï¬cient (c) Euclidean Distance (d) Semantic Similarity
Fig. 9. The average utility of adversarial texts generated on MR dataset under black-box settings for 10 platforms.
perturbed words roughly has a positive correlation with the average length of texts; for Google Cloud NLP, the number of perturbed words changes little with the increasing length the increasing of texts. However, as shown in Fig. 10(b), perturbed words do not decrease the semantic similarity of the adversarial texts. This is because longer text would have richer semantic information, while the proportion of the perturbed words is always controlled within a small range by TEXTBUG- GER. Therefore, with the length of input text increasing, the perturbed words have smaller impact on the semantic similarity between original and adversarial texts.
# H. Discussion
Toxic Words Distribution. To demonstrate the effective- ness of our method, we visualize the found important words according to their frequency in Fig. 11(a), in which the words
(a) Number of Perturbed Words (b) Semantic Similarity
Fig. 10. The impact of document length on the utility of generated adversarial texts in three online platforms: Google Cloud NLP, IBM Watson and Microsoft Azure. The subï¬gures are: (a) the number of perturbed words and document length, (b) the document length and the semantic similarity between generated adversarial texts and original texts.
(a) Word Cloud (b) Bug Distribution
Fig. 11. (a) The word cloud is generated from IMDB dataset against the CNN model. (b) The bug distribution of the adversarial texts is generated from IMDB dataset against the online platforms.
higher frequency will be represented with larger font. From Fig. 11(a), we can see that the found important words are indeed negative words, e.g., âbadâ, âawfulâ, âstupidâ, âworstâ, âterribleâ, etc for negative texts. Slight modiï¬cation on these negative words would decrease the negative extent of input texts. This is why TEXTBUGGER can generate adversarial texts whose only difference to the original texts are few character- level modiï¬cations.
Types of Perturbations. The proportion of each operation chosen by the adversary for the experiments are shown in Fig. 11(b). We can see that is the dominant oper- ation for Microsoft Azure and Amazon AWS, while Sub- C is the dominant operation for IBM Watson and fastText. One reason could be that Sub-C is deliberately designed for creating visually similar adversarial texts, while swap, insert and delete are common in typo errors. Therefore, the bugs generated by Sub-C are less likely to be found in the large- scale word vector space, thus causing the âout-of-vocabularyâ phenomenon. Meanwhile, delete and Sub-W are used less than the others. One reason is that Sub-W should satisfy two conditions: substituting with semantic similar words while changing the score largely in the ï¬ve types of bugs. Therefore, the proportion of Sub-W is less than other operations.
# IV. ATTACK EVALUATION: TOXIC CONTENT DETECTION
Toxic content detection aims to apply NLP, statistics, and machine learning methods to detect illegal or toxic-related (e.g., irony, sarcasm, insults, harassment, racism, pornography, terrorism, and riots, etc.) content for online systems. Such toxic content detection can help moderators to improve the online conversation environment.
10
In this section, we investigate practical performance of the proposed method for generating adversarial texts against real-world toxic content detection systems. We start with introducing the datasets, targeted models and implementation details. Then we will analyze the results and discuss potential reasons for the observed performance.
A. Dataset
We apply the dataset provided by the Kaggle Toxic Com- ment Classiï¬cation competition8. This dataset contains a large number of Wikipedia comments which have been labeled by human raters for toxic behavior. There are six types of indi- cated toxicity, i.e., âtoxicâ, âsevere toxicâ, âobsceneâ, âthreatâ, âinsultâ, and âidentity hateâ in the original dataset. We con- sider these categories as toxic and perform binary classiï¬cation for toxic content detection. For more coherent comparisons, a balanced subset of this dataset is constructed for evaluation. This is achieved by random sampling of the non-toxic texts, obtaining a subset with equal number of samples with the toxic texts. Further, we removed some abnormal texts (i.e., containing multiple repeated characters) and select the samples that have no more than 200 words for our experiment, due to the fact that some APIs limit the maximum length of input sentences. We obtained 12,630 toxic texts and non-toxic texts respectively.
B. Targeted Model & Implementation
For white-box experiments, we evaluated the TEXTBUG- GER on self-trained LR, CNN and LSTM models as we do in Section III. All models are trained in a hold-out test strategy, i.e., 80%, 10%, 10% of the data was used for training, validation and test, respectively. Hyper-parameters were tuned only on the validation set, and the ï¬nal adversarial examples are generated and evaluated on the test set.
For black-box experiments, we evaluated the TEXTBUG- GER on ï¬ve toxic content detection platforms/models, includ- ing Google Perspective, IBM Natural Language Classiï¬er, Facebook fastText, ParallelDots AI, and Aylien Offensive Detector. Since the IBM Natural Language Classiï¬er and the Facebook fastText need to be trained by ourselves9, we selected 80% of the Kaggle dataset for training and the rest for testing. Note that we do not selected samples for validation since these two models only require training and testing set.
The implementation details of our toxic content attack are similar with that in the sentiment analysis attack, including the baselines.
C. Attack Performance
Effectiveness and Efï¬ciency. Tables V and VI summarize the main results of the white-box and black-box attacks on the Kaggle dataset. We can observe that under white-box settings, the Random strategy has minor inï¬uence on the ï¬nal results in Table V. On the contrary, TEXTBUGGER only perturbs a few words to achieve high attack success rate and performs much better than baseline algorithms against all models/platforms.
8https://www.kaggle.com/c/jigsaw-toxic-comment-classiï¬cation-challenge 9We do not know the modelsâ parameters or architechtures because they
only provide training and predicting interfaces.
RESULTS OF THE WHITE-BOX ATTACK ON KAGGLE DATASET.
Random FGSM+NNS [12] DeepFool+NNS [12] TEXTBUGGER Targeted Model Original Accuracy Success Rate Perturbed Word Success Rate Perturbed Word Success Rate Perturbed Word Success Rate Perturbed Word LR CNN LSTM 88.5% 93.5% 90.7% 1.4% 0.5% 0.9% 10% 10% 10% 33.9% 26.3% 28.6% 5.4% 6.2% 8.8% 29.7% 27.0% 30.3% 7.3% 9.9% 10.3% 92.3% 82.5% 94.8% 10.3% 10.8% 9.5%
RESULTS OF THE BLACK-BOX ATTACK ON KAGGLE DATASET.
Targeted Platform/Model Original Accuracy DeepWordBug [11] TEXTBUGGER Success Rate Time (s) Perturbed Word Success Rate Time (s) Perturbed Word Google Perspective IBM Classiï¬er Facebook fastText ParallelDots Aylien Offensive Detector 98.7% 85.3% 84.3% 72.4% 74.5% 33.5% 9.1% 31.8% 79.3% 53.1% 400.20 75.36 0.05 148.67 229.35 10% 10% 10% 10% 10% 60.1% 61.8% 58.2% 82.1% 68.4% 102.71 21.53 0.03 23.20 37.06 5.6% 7.0% 5.7% 4.0% 32.0%
1.0 I 7 a I ~ 08 x 8 06 | L T ⬠ae a | I T iL & Goa |_| 02 l + = Original Text = Perturbed Text
Perspective IBM fastText Aylien ParallelDots
Fig. 12. generated from Kaggle dataset against LR model. Score distribution of the after-modiï¬cation texts. These texts are
For instance, as shown in Table V, it only perturbs 10.3% words of one sample to achieve 92.3% success rate on the LR model, while all baselines achieve no more than 40% attack success rate. As the Kaggle dataset has an average length of 55 words, TEXTBUGGER only perturbed about 6 words for one sample to conduct successful attacks. Furthermore, as shown in Table VI, it only perturbs 4.0% words (i.e., about 3 words) of one sample when achieves 82.1% attack success rate on the ParallelDots platform. These results imply that an adversary can successfully mislead the system into assigning signiï¬cantly different toxicity scores to the original sentences via modifying them slightly.
Successful Attack Examples. Two successful examples are shown in Fig. 1 as demonstration. The ï¬rst adversarial text for toxic content detection in Fig. 1 contains one Sub-W op- eration (âsexualâ to âsexual-intercourseâ), which successfully converts the prediction result of the LSTM model from 96.7% toxic to 83.5% non-toxic. The second adversarial text for toxic content detection in Fig. 1 contains three modiï¬cations, i.e., one swap operation (âshitâ to âshtiâ), one Sub-C operation (âfuckingâ to âfuckimgâ) and one Sub-W operation (âhellâ to âhelledâ). These modiï¬cations successfully convert the prediction result of the Perspective API from 92% toxic to 78% non-toxic10.
conï¬dence value over all the samples including the failed samples before and after modiï¬cations. The results are shown in Fig. 12, where the overall score of the after-modiï¬cation texts has drifted to non-toxic for all platforms/models.
D. Utility Analysis
Figs. 13 and 14 show the similarity between original texts and adversarial texts under white-box and black-box settings respectively. First, Fig. 14 clearly shows that the adversarial texts generated by TEXTBUGGER preserve more utility than that generated by DeepWordBug. Second, from Figs. 13(a), 13(b), 14(a) and 14(b), we can observe that the adversarial texts preserve good utility in terms of word-level. Speciï¬cally, Fig. 13(a) shows that almost 80% adversarial texts have no more than 20 edit distance comparing with the original texts for three models. Meanwhile, Figs. 13(c), 13(d), 14(c) and 14(d) show that the generated adversarial texts preserve good utility in terms of vector-level. Speciï¬cally, from Fig. 13(d), we can see that almost 90% adverasrial texts preserve 0.9 seman- tic similarity of the original texts. These results imply that TEXTBUGGER can fool classiï¬ers with high success rate while preserving good utility of the generated adversarial texts.
# E. Discussion
Toxic Words Distribution. Fig. 15(a) shows the visu- alization of the found important words according to their frequency, where the higher frequency words have larger font sizes. Observe that the found important words are indeed toxic words, e.g., âfuckâ, âdickâ, etc. It is clear that slightly perturbing these toxic words would decrease the toxic score of toxic content.
Bug Distribution. Fig. 15(b) shows the proportion of each operation chosen by the adversary for the black-box attack. Observe that Sub-C is the dominant operation for all platforms, and Sub-W is still the least used operation. We do not give detailed analysis since the results are similar to that in Section III.
V. FURTHER ANALYSIS
Score Distribution. We also measured the change of the
A. Transferability
10Since the Perspective API only returns the toxic score, we consider that 22% toxic score is equal to 78% non-toxic score.
In the image domain, an important property of adver- sarial examples is the transferability, i.e., adversarial images
11
(a) Edit Distance (b) Jaccard Coefï¬cient (c) Euclidean Distance (d) Semantic Similarity
Fig. 13. The utility of adversarial texts generated on the Kaggle dataset under white-box settings for LR, CNN and LSTM models.
(a) Edit Distance (b) Jaccard Similarity Coefï¬cient (c) Euclidean Distance (d) Semantic Similarity
Fig. 14. The average utility of adversarial texts generated on Kaggle dataset under black-box settings for 5 platforms.
generated for one classiï¬er are likely to be misclassiï¬ed by other classiï¬ers. This property can be used to transform black- box attacks to white-box attacks as demonstrated in [28]. Therefore, we wonder whether adversarial texts also have this property.
In this evaluation, we generated adversarial texts on all three datasets for LR, CNN, and LSTM models. Then, we evaluated the attack success rate of the generated adversarial texts against other models/platforms. The experimental results are shown in Tables VII and VIII. From Table VII, we can see that there is a moderate degree of transferability among models. For instance, the adversarial texts generated on the MR dataset targeting the LR model have 39.5% success rate when attacking the Azure platform. This demonstrates that texts generated by TEXTBUGGER can successfully transfer across multiple models. From Table VIII,
12
(a) Word Cloud (b) Bug Distribution
Fig. 15. (a) The word cloud is generated from Kaggle dataset against the CNN model. (b) The bug distribution of the adversarial texts is generated from Kaggle dataset against the online platforms.
TABLE VII. TRANSFERABILITY ON IMDB AND MR DATASETS.
Dataset Model White-box Models Black-box APIs LR CNN LSTM IBM Azure Google fastText AWS IMDB 95.2% 20.3% 14.5% 14.5% 24.8% 15.1% 18.8% 19.0% LR CNN 28.9% 90.5% 21.2% 21.2% 31.4% 20.4% 25.3% 20.0% LSTM 28.8% 23.8% 86.6% 27.3% 26.7% 27.4% 23.1% 25.1% MR 92.7% 18.3% 28.7% 22.4% 39.5% 31.3% 19.8% 29.8% LR CNN 26.5% 82.1% 31.1% 25.3% 28.2% 21.0% 19.1% 20.5% LSTM 21.4% 24.6% 88.2% 21.9% 17.7% 22.5% 16.5% 18.7%
TABLE VIII. TRANSFERABILITY ON KAGGLE DATASET.
Model White-box Models Black-box APIs LR CNN LSTM Perspective IBM fastText Aylien ParallelDots 92.3% 28.6% 32.3% LR CNN 23.7% 82.5% 35.6% LSTM 21.5% 26.9% 94.8% 38.1% 26.4% 23.1% 32.2% 27.1% 26.5% 29.0% 49.7% 25.7% 52.6% 25.9% 31.4% 54.3% 50.8% 28.1%
we can see that the adversarial texts generated on the Kaggle dataset also has good transferability on Aylien and ParallelDots toxic content detection platforms. For instance, the adversarial texts against the LR model has 54.3% attack success rate on the ParallelDots platform. This means attackers can use transferability to attack online platforms even they have call limits.
B. User study
We perform a user study with human participants on Amazon Mechanical Turk (MTurk) to see whether the applied perturbation will change the human perception of the textâs sentiment. Before the study, we consulted with the IRB ofï¬ce and this study was approved and we did not collect any other information of participants except for necessary result data.
First, we randomly sampled 500 legitimate samples and 500 adversarial samples from IMDB and Kaggle datasets, respectively. Among them, half were generated under white- box settings and half were generated under black-box setting. All the selected adversarial samples successfully fooled the targeted classiï¬ers. Then, we presented these samples to the participants and asked them to label the sentiment/toxicity of these samples, i.e., the text is positive/non-toxic or neg- ative/toxic. Meanwhile, we also asked them to mark the suspicious words or inappropriate expression in the samples. To avoid labeling bias, we allow each user to annotate at most 20 reviews and collect 3 annotations from different users for each sample. Finally, 3,177 valid annotations from 297 AMT workers were obtained in total.
After examining the results, we ï¬nd that 95.5% legitimate
(a) (b)
Fig. 16. The detailed results of user study. (a) The distribution of all mistakes in the samples, including originally existed errors and manully perturbed bugs. (b) The proportion of found bugs accounting for each kind of bug added in the samples. For instance, if there are totally 10 Sub-C perturbations in the samples and we only ï¬nd 3 of them, the ratio is 3/10=0.3.
TABLE IX. RESULTS OF SC ON IMDB AND MR DATASETS.
Dataset Method Attack Success Rate Google Watson Azure AWS fastText IMDB TEXTBUGGER DeepWordBug 22.2% 15.9% 27.1% 12.2% 32.2% 15.9% 20.8% 9.8% 21.1% 13.6% MR TEXTBUGGER DeepWordBug 38.2% 26.9% 36.3% 17.7% 30.8% 13.8% 31.1% 22.1% 28.6% 10.2%
samples can be correctly classiï¬ed and 94.9% adversarial samples can be classiï¬ed as their original labels. Furthermore, we observe that for both legitimate and adversarial samples, almost all the incorrect classiï¬cations are made on several speciï¬c samples that have some ambiguous expressions. This indicates that TEXTBUGGER did not affect the human judg- ment on the polarity of the text, i.e., the utility is preserved in the adversarial samples from human perspective, which shows that the generated adversarial texts are of high quality.
Some detailed results are shown in Fig. 16. From Fig. 16(a), we can see that in our randomly selected sam- ples, the originally existed errors (including spelling mistakes, grammatical errors, etc.) account for 34.5% of all errors, and the bugs we added account for 65.5% of all errors. Among them, 38.0% (13.1%/34.5%) of existed errors and 30.1% (19.7%/65.5%) of the added bugs are successfully found by participants, which implies that our perturbation is inconspicuous. From Fig. 16(b), we can see that insert is the easiest bug to ï¬nd, followed by Sub-C. Speciï¬cally, the found Sub-C perturbations are almost the substitution of âoâ to â0â, and the substitution of âlâ to â1â is seldom found. In addition, the Sub-W perturbation is the hardest to ï¬nd.
# VI. POTENTIAL DEFENSES
there are few defense methods for the adversarial text attack. Therefore, we conduct a preliminary exploration of two potential defense schemes, i.e., spelling check and adversarial training. Speciï¬cally, we evaluate the spelling check under the black-box setting and evaluate the adversarial training under the white-box setting. By default, we use the same implementation settings as that in Section IV.
Spelling Check (SC). In this experiment, we use a context- aware spelling check service provided by Microsoft Azure11.
11https://azure.microsoft.com/zh-cn/services/cognitive-services/spell-check/
13
TABLE X. RESULTS OF SC ON KAGGLE DATASET.
Method Attack Success Rate Perspective IBM fastText ParallelDots Aylien TEXTBUGGER DeepWordBug 35.6% 16.5% 14.8% 4.3% 29.0% 13.9% 40.3% 35.1% 42.7% 30.4%
(a) IMDB (b) Kaggle
Fig. 17. The ratio of the bugs corrected by spelling check to the total bugs generated on IMDB and Kaggle datasets.
Experimental results are shown in Tables IX and X, from which we can see that though many generated adversarial texts can be detected by spell checking, TEXTBUGGER still have higher success rate than DeepWordBug on multiple online platforms after correcting the misspelled words. For instance, when targeting on Perspective API, TEXTBUGGER has 35.6% success rate while DeepWordBug only has 16.5% after spelling check. This means TEXTBUGGER is still effective and stronger than DeepWordBug.
Further, we analyze the difï¬culty of correcting each kind of bug. Speciï¬cally, we wonder which kind of bugs is the easiest to correct and which kind of bugs is the hardest to correct. We count the number of corrected bugs of each kind and show the results in Fig. 17. From Fig. 17, we can see that the easiest bug to correct is insert and delete for IMDB and Kaggle respectively. The hardest bug to correct is Sub- W, which has less than 10% successfully correction ratio. This phenomenon partly accounts for why TEXTBUGGER is stronger than DeepWordBug.
Adversarial Training (AT). Adversarial training means training the model with generated adversarial examples. For instance, in the context of toxic content detection systems, we need to include different modiï¬ed versions of the toxic documents into the training data. This method can improve the robustness of machine learning models against adversarial examples [13].
In our experiment, we trained the targeted model with the combined dataset for 10 epochs, and the learning rate is set to be 0.0005. We show the performance of this scheme along with detailed settings in Table XI, where accuracy means the pre- diction accuracy of the new models on the legitimate samples, and success rate with adversarial training (SR with AT) denotes the percentage of the adversarial samples that are misclassiï¬ed as wrong labels by the new models. From Table XI, we can see that the success rate of adversarial texts decreases while the modelsâ performance on legitimate samples does not change too much with AT. Therefore, adversarial training might be effective in defending TEXTBUGGER.
However, a limitation of adversarial training is that it needs to know the details of the attack strategy and to have sufï¬cient
TABLE XI. RESULTS OF AT ON THREE DATASETS.
Dataset Model # of Leg. # of Adv. Accuracy SR with AT IMDB LR CNN LSTM 25,000 25,000 25,000 2,000 2,000 2,000 83.5% 85.3% 88.6% 28.0% 15.7% 11.6% MR LR CNN LSTM 10,662 10,662 10,662 2,000 2,000 2,000 76.3% 80.1% 78.5% 23.6% 16.6% 16.5% Kaggle LR CNN LSTM 20,000 20,000 20,000 2,000 2,000 2,000 86.7% 91.1% 92.3% 27.6% 15.4% 11.0%
adversarial texts for training. In practice, however, attackers usually do not make their approaches or adversarial texts public. Therefore, adversarial training is limited in defending unknown adversarial attacks.
of TEXTBUGGER. Though TEXTBUGGER can be partly defended by the above methods, attackers can take some strategies to improve the robustness of their attacks. For instance, attackers can increase the proportion of Sub-W as it is almost cannot be corrected by spelling check. In addition, attackers can adjust the proportion of different strategies among different platforms. For instance, attackers can increase the proportion of swap on the Kaggle dataset when targeting the Perspective and Aylien API, since less than 40% swap modiï¬cations have been corrected as shown in Fig. 17(b). Attackers can also keep their adversarial attack strategies private and change the parameters of the attack frequently to evade the AT defense.
# VII. DISCUSSION
Extension to Targeted Attack. In this paper, we only perform untargeted attacks, i.e., changing the modelâs output. However, TEXTBUGGER can be easily adapted for targeted attacks (i.e., forcing the model to give a particular output) by modifying Eq.2 from computing the Jacobian matrix with respect to the ground truth label to computing the Jacobian matrix with respect to the targeted label.
results demonstrate the existence of natural-language adversarial per- turbations, our perturbations could be improved via a more sophisticated algorithm that takes advantage of language pro- cessing technologies, such as syntactic parsing, named entity recognition, and paraphrasing. Furthermore, the existing attack procedure of ï¬nding and modifying salient words can be extended to beam search and phrase-level modiï¬cation, which is an interesting future work. Developing effective and robust defense schemes is also a promising future work.
VIII. RELATED WORK
A. Adversarial Attacks for Text
Gradient-based Methods. In one of the ï¬rst attempts at tricking deep neural text classiï¬ers [29], Papernot et al. proposed a white-box adversarial attack and applied it repeti- tively to modify an input text until the generated sequence is misclassiï¬ed. While their attack was able to fool the classi- ï¬er, their word-level changes signiï¬cantly affect the original meaning. In [9], Ebrahimi et al. proposed a gradient-based optimization method that changes one token to another by
14
using the gradients of the model with respect to the one- hot vector input. In [33], Samanta et al. used the embedding gradient to determine important words. Then, heuristic driven rules together with hand-crafted synonyms and typos were designed.
Out-of-Vocabulary Word. Some existing works generate adversarial examples for text by replacing a word with one legible but out-of-vocabulary word [4, 11, 14]. In [4], Belinkov et al. showed that character-level machine translation systems are overly sensitive to random character manipulations, such as keyboard typos. Similarly, Gao et al. proposed DeepWord- Bug [11], which applies character perturbations to generate adversarial texts against deep learning classiï¬ers. However, this method is not computationally efï¬cient and cannot be applied in practice. In [14], Hosseini et al. showed that simple modiï¬- cations, such as adding spaces or dots between characters, can drastically change the toxicity score from Perspective API.
Replace with Semantically/Syntactically Similar Words. In [1], Alzantot et al. generated adversarial text against sen- timent analysis models by leveraging a genetic algorithm and only replacing words with semantically similar ones. In [32], Ribeiro et al. replaced tokens by random words of the same POS tag with a probability proportional to the embedding similarity.
Other Methods. In [16], Jia et al. generated adversarial examples for evaluating reading comprehension systems by adding distracting sentences to the input document. However, their method requires manual intervention to polish the added sentences. In [40], Zhao et al. used Generative Adversarial Networks (GANs) to generate adversarial sequences for textual entailment and machine translation applications. However, this method requires neural text generation, which is limited to short texts.
# B. Defense
To the best of our knowledge, existing defense methods for adversarial examples mainly focus on the image domain and have not been systematically studied in the text domain. For instance, the adversarial training, one of the famous defense methods for adversarial images, has been only used as a regularization technique in the DLTU task [18, 23]. These works only focused on improving the accuracy on clean examples, rather than defending textual adversarial examples.
# C. Remarks
In summary, the following aspects distinguish TEXTBUG- GER from existing adversarial attacks on DLTU systems. First, we use both character-level and word-level perturbations to generate adversarial texts, in contrast to previous works that use the projected gradient [29] or linguistic-driven steps [16]. Second, we demonstrate that our method has great efï¬ciency while previous works seldom evaluate the efï¬ciency of their methods [9, 11]. Finally, most if not all previous works only evaluate their method on self-implemented models [11, 12, 33], or just evaluate them on one or two public ofï¬ine models [9, 16]. By contrast, we evaluate the generated adversarial examples on 15 popular real-world online DLTU systems, including Google Cloud NLP, IBM Watson, Amazon AWS, Microsoft Azure, Facebook fastText, etc. The results demon- strate that TEXTBUGGER is more general and robust.
# IX. CONCLUSION
Overall, we study adversarial attacks against state-of- the-art sentiment analysis and toxic content detection mod- els/platforms under both white-box and black-box settings. Ex- tensive experimental results demonstrate that TEXTBUGGER is effective and efï¬cient for generating targeted adversarial NLP. The transferability of such examples hint at potential vulnerabilities in many real applications, including text ï¬lter- ing systems (e.g., racism, pornography, terrorism, and riots), online recommendation systems, etc. Our ï¬ndings also show the possibility of spelling check and adversarial training in defending against such attacks. Ensemble of linguistically- aware or structurally-aware based defense system can be further explored to improve robustness.
# ACKNOWLEDGMENT
This work was partly supported by NSFC under No. 61772466, the Zhejiang Provincial Natural Science Foundation for Distinguished Young Scholars under No. LR19F020003, the Provincial Key Research and Development Program of Zhejiang, China under No. 2017C01055, and the Alibaba-ZJU Joint Research Institute of Frontier Technologies. Ting Wang is partially supported by the National Science Foundation under Grant No. 1566526 and 1718787. Bo Li is partially supported by the Defense Advanced Research Projects Agency (DARPA).
# REFERENCES
[1] M. Alzantot, Y. Sharma, A. Elgohary, B.-J. Ho, M. Srivastava, and K.- W. Chang, âGenerating natural language adversarial examples,â arXiv preprint arXiv:1804.07998, 2018.
[2] M. Barreno, B. Nelson, A. D. Joseph, and J. Tygar, âThe security of machine learning,â Machine Learning, vol. 81, no. 2, pp. 121â148, 2010.
[3] M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, âCan machine learning be secure?â in ASIACCS. ACM, 2006, pp. 16â25. [4] Y. Belinkov and Y. Bisk, âSynthetic and natural noise both break neural
machine translation,â arXiv preprint arXiv:1711.02173, 2017.
[5] B. Biggio, G. Fumera, and F. Roli, âDesign of robust classiï¬ers for adversarial environments,â in SMC. IEEE, 2011, pp. 977â982.
[6] N. Carlini and D. Wagner, âTowards evaluating the robustness of neural networks,â in S&P, 2017, pp. 39â57.
[7] D. Cer, Y. Yang, S.-y. Kong, N. Hua, N. Limtiaco, R. S. John, N. Constant, M. Guajardo-Cespedes, S. Yuan, C. Tar et al., âUniversal sentence encoder,â arXiv preprint arXiv:1803.11175, 2018.
[8] M. Cheng, J. Yi, H. Zhang, P.-Y. Chen, and C.-J. Hsieh, âSeq2sick: Evaluating the robustness of sequence-to-sequence models with adver- sarial examples,â arXiv preprint arXiv:1803.01128, 2018. J. Ebrahimi, A. Rao, D. Lowd, and D. Dou, âHotï¬ip: White-box adversarial examples for nlp,â arXiv preprint arXiv:1712.06751, 2017. I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li, A. Prakash, A. Rahmati, and D. Song, âRobust physical-world attacks on machine learning models,â arXiv preprint arXiv:1707.08945, 2017. J. Gao, J. Lanchantin, M. L. Soffa, and Y. Qi, âBlack-box generation of adversarial text sequences to evade deep learning classiï¬ers,â arXiv preprint arXiv:1801.04354, 2018.
[10]
[12] Z. Gong, W. Wang, B. Li, D. Song, and W.-S. Ku, âAdversarial texts with gradient methods,â arXiv preprint arXiv:1801.07175, 2018. I. J. Goodfellow, J. Shlens, and C. Szegedy, âExplaining and harnessing adversarial examples,â in ICLR, 2015, pp. 1â11.
[14] H. Hosseini, S. Kannan, B. Zhang, and R. Poovendran, âDeceiving googleâs perspective api built for detecting toxic comments,â arXiv preprint arXiv:1702.08138, 2017.
15
[15] L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. Tygar, âAdversarial machine learning,â in AISec. ACM, 2011, pp. 43â58.
[16] R. Jia and P. Liang, âAdversarial examples for evaluating reading comprehension systems,â in EMNLP, 2017, pp. 2021â2031.
[17] Y. Kim, âConvolutional neural networks for sentence classiï¬cation,â in EMNLP, 2014, pp. 1746â1751.
[18] Y. Li, T. Cohn, and T. Baldwin, âLearning robust representations of text,â in EMNLP, 2016, pp. 1979â1985.
[19] B. Liang, H. Li, M. Su, P. Bian, X. Li, and W. Shi, âDeep text classiï¬cation can be fooled,â arXiv preprint arXiv:1704.08006, 2017.
[20] X. Ling, S. Ji, J. Zou, J. Wang, C. Wu, B. Li, and T. Wang, âDeepsec: A uniform platform for security analysis of deep learning model,â in IEEE S&P, 2019.
[21] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts, âLearning word vectors for sentiment analysis,â in ACL. Portland, Oregon, USA: Association for Computational Linguistics, June 2011, pp. 142â150.
[22] W. Medhat, A. Hassan, and H. Korashy, âSentiment analysis algorithms and applications: A survey,â Ain Shams Engineering Journal, vol. 5, no. 4, pp. 1093â1113, 2014.
[23] T. Miyato, A. M. Dai, and I. Goodfellow, âAdversarial training methods for semi-supervised text classiï¬cation,â ICLR, 2017.
[24] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, âDeepfool: a simple and accurate method to fool deep neural networks,â in CVPR, 2016, pp. 2574â2582.
[25] A. Nguyen, J. Yosinski, and J. Clune, âDeep neural networks are easily fooled: High conï¬dence predictions for unrecognizable images,â in CVPR.
[26] C. Nobata, J. Tetreault, A. Thomas, Y. Mehdad, and Y. Chang, âAbusive language detection in online user content,â in WWW. International World Wide Web Conferences Steering Committee, 2016, pp. 145â153. [27] B. Pang and L. Lee, âSeeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales,â in ACL. Association for Computational Linguistics, 2005, pp. 115â124. [28] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, âPractical black-box attacks against machine learning,â in Asia CCS. ACM, 2017, pp. 506â519.
[29] N. Papernot, P. McDaniel, A. Swami, and R. Harang, âCrafting ad- versarial input sequences for recurrent neural networks,â in MILCOM. IEEE, 2016, pp. 49â54. J. Pennington, R. Socher, and C. Manning, âGlove: Global vectors for word representation,â in EMNLP, 2014, pp. 1532â1543.
[31] G. Rawlinson, âThe signiï¬cance of letter position in word recognition,â IEEE Aerospace and Electronic Systems Magazine, vol. 22, no. 1, pp. 26â27, 2007.
[32] M. T. Ribeiro, S. Singh, and C. Guestrin, âSemantically equivalent adversarial rules for debugging nlp models,â in ACL, 2018.
[33] S. Samanta and S. Mehta, âTowards crafting text adversarial samples,â arXiv preprint arXiv:1707.02812, 2017.
[34] D. Sculley, G. Wachman, and C. E. Brodley, âSpam ï¬ltering using inexact string matching in explicit feature space with on-line linear classiï¬ers.â in TREC, 2006.
[35] C. E. Shannon, âCommunication theory of secrecy systems,â Bell system technical journal, vol. 28, no. 4, pp. 656â715, 1949.
[36] C. Szegedy, âIntriguing properties of neural networks,â in ICLR, 2014, pp. 1â10.
[37] C. Xiao, B. Li, J.-Y. Zhu, W. He, M. Liu, and D. Song, âGenerat- ing adversarial examples with adversarial networks,â arXiv preprint arXiv:1801.02610, 2018.
[38] X. Zhang, J. Zhao, and Y. LeCun, âCharacter-level convolutional net- works for text classiï¬cation,â in NIPS. Neural information processing systems foundation, 2015, pp. 649â657.
[39] Y. Zhang and B. Wallace, âA sensitivity analysis of (and practitioners guide to) convolutional neural networks for sentence classiï¬cation,â in IJCNLP, vol. 1, 2017, pp. 253â263.
[40] Z. Zhao, D. Dua, and S. Singh, âGenerating natural adversarial exam- ples,â in ICLR, 2018. | {
"id": "1801.02610"
} |
1812.05159 | An Empirical Study of Example Forgetting during Deep Neural Network Learning | Inspired by the phenomenon of catastrophic forgetting, we investigate the
learning dynamics of neural networks as they train on single classification
tasks. Our goal is to understand whether a related phenomenon occurs when data
does not undergo a clear distributional shift. We define a `forgetting event'
to have occurred when an individual training example transitions from being
classified correctly to incorrectly over the course of learning. Across several
benchmark data sets, we find that: (i) certain examples are forgotten with high
frequency, and some not at all; (ii) a data set's (un)forgettable examples
generalize across neural architectures; and (iii) based on forgetting dynamics,
a significant fraction of examples can be omitted from the training data set
while still maintaining state-of-the-art generalization performance. | http://arxiv.org/pdf/1812.05159 | Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, Geoffrey J. Gordon | cs.LG, stat.ML | ICLR 2019 | null | cs.LG | 20181212 | 20191115 | 9 1 0 2
v o N 5 1 ] G L . s c [
3 v 9 5 1 5 0 . 2 1 8 1 : v i X r a
Published as a conference paper at ICLR 2019
AN EMPIRICAL STUDY OF EXAMPLE FORGETTING DURING DEEP NEURAL NETWORK LEARNING
# Mariya Tonevaââ Carnegie Mellon University
# Alessandro Sordoniâ Microsoft Research Montreal
Remi Tachet des Combesâ Microsoft Research Montreal
# Adam Trischler Microsoft Research Montreal
# Yoshua Bengio MILA, Universit´e de Montr´eal CIFAR Senior Fellow
Geoffrey J. Gordon Microsoft Research Montreal Carnegie Mellon University
# ABSTRACT
Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classiï¬cation tasks. Our goal is to understand whether a related phenomenon occurs when data does not un- dergo a clear distributional shift. We deï¬ne a âforgetting eventâ to have occurred when an individual training example transitions from being classiï¬ed correctly to incorrectly over the course of learning. Across several benchmark data sets, we ï¬nd that: (i) certain examples are forgotten with high frequency, and some not at all; (ii) a data setâs (un)forgettable examples generalize across neural architec- tures; and (iii) based on forgetting dynamics, a signiï¬cant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance.
# INTRODUCTION
Many machine learning models, in particular neural networks, cannot perform continual learning. They have a tendency to forget previously learnt information when trained on new tasks, a phe- nomenon usually called catastrophic forgetting (Kirkpatrick et al., 2017; Ritter et al., 2018). One of the hypothesized causes of catastrophic forgetting in neural networks is the shift in the input distri- bution across different tasksâe.g., a lack of common factors or structure in the inputs of different tasks might lead standard optimization techniques to converge to radically different solutions each time a new task is presented. In this paper, we draw inspiration from this phenomenon and investi- gate the extent to which a related forgetting process occurs as a model learns examples traditionally considered to belong to the same task.
Similarly to the continual learning setting, in stochastic gradient descent (SGD) optimization, each mini-batch can be considered as a mini-ââtaskâ presented to the network sequentially. In this con- text, we are interested in characterizing the learning dynamics of neural networks by analyzing (catastrophic) example forgetting events. These occur when examples that have been âlearntâ (i.e., correctly classified) at some time ¢ in the optimization process are subsequently misclassified â or in other terms forgotten â at a time tâ > t. We thus switch the focus from studying interac- tions between sequentially presented tasks to studying interactions between sequentially presented dataset examples during SGD optimization. Our starting point is to understand whether there exist examples that are consistently forgotten across subsequent training presentations and, conversely, examples that are never forgotten. We will call the latter unforgettable examples. We hypothesize that specific examples consistently forgotten between subsequent presentations, if they exist, must
âEqual contribution. Correspondence: MT: mariya@cmu.edu, AS: alsordon@microsoft.com â Work done while interning at Microsoft Research Montreal
Code available at https://github.com/mtoneva/example forgetting
1
Published as a conference paper at ICLR 2019
not share commonalities with other examples from the same task. We therefore analyze the propor- tion of forgettable/unforgettable examples for a given task and what effects these examples have on a modelâs decision boundary and generalization error.
The goal of our investigation is two-fold. First, we attempt to gain insight into the optimization process by analyzing interactions among examples during learning and their inï¬uence on the ï¬nal decision boundary. We are particularly interested in whether we can glean insight on the com- pressibility of a dataset, and thereby increase data efï¬ciency without compromising generalization accuracy. It is a timely problem that has been the recent focus of few-shot learning approaches via meta-learning (Finn et al., 2017; Ravi & Larochelle, 2017). Second, we aim to characterize whether forgetting statistics can be used to identify âimportantâ samples and detect outliers and examples with noisy labels (John, 1995; Brodley & Friedl, 1999; Sukhbaatar et al., 2014; Jiang et al., 2018).
Identifying important, or most informative examples is an important line of work and was exten- sively studied in the literature. Techniques of note â among others â are predeï¬ned curricula of examples (Bengio & LeCun, 2007), self-paced learning (Kumar et al., 2010), and more recently meta-learning (Fan et al., 2017). These research directions usually deï¬ne âhardnessâ or âcommonal- ityâ of an example as a function of the loss on that particular example at some point during training (or possibly at convergence). They do not consider whether some examples are consistently for- gotten throughout learning. Very recently, Chang et al. (2017) consider re-weighting examples by accounting for the variance of their predictive distribution. This is related to our deï¬nition of for- getting events, but the authors provide little analysis of the extent to which the phenomenon occurs in their proposed tasks. Our purpose is to study this phenomenon from an empirical standpoint and characterize its prevalence in different datasets and across different model architectures.
Our experimental ï¬ndings suggest that: a) there exist a large number of unforgettable examples, i.e., examples that are never forgotten once learnt, those examples are stable across seeds and strongly correlated from one neural architecture to another; b) examples with noisy labels are among the most forgotten examples, along with images with âuncommonâ features, visually complicated to classify; c) training a neural network on a dataset where a very large fraction of the least forgotten examples have been removed still results in extremely competitive performance on the test set.
2 RELATED WORK
Curriculum Learning and Sample Weighting Curriculum learning is a paradigm that favors learning along a curriculum of examples of increasing difï¬culty (Bengio et al., 2009). This general idea has found success in a variety of areas since its introduction (Kumar et al., 2010; Lee & Grau- man, 2011; Schaul et al., 2015). Kumar et al. (2010) implemented their curriculum by considering easy the examples with a small loss. Arpit et al. (2017) also pose that easy examples exist, and deï¬ne them as those that are correctly classiï¬ed after only 1 epoch of training, though they do not examine whether these examples are later forgotten. In our experiments, we empirically validate that unforgettable examples can be safely removed without compromising generalization. Zhao & Zhang (2015); Katharopoulos & Fleuret (2018) relate sample importance to the norm of its loss gradient with respect to the parameters of the network. Fan et al. (2017); Kim & Choi (2018); Jiang et al. (2018) learn a curriculum directly from data in order to minimize the task loss. Jiang et al. (2018) also study the robustness of their method in the context of noisy examples. This relates to a rich literature on outlier detection and removal of examples with noisy labels (John, 1995; Brodley & Friedl, 1999; Sukhbaatar et al., 2014; Jiang et al., 2018). We will provide evidence that noisy examples rank higher in terms of number of forgetting events. Koh & Liang (2017) borrow inï¬u- ence functions from robust statistics to evaluate the impact of the training examples on a modelâs predictions.
Deep Generalization The study of the generalization properties of deep neural networks when trained by stochastic gradient descent has been the focus of several recent publications (Zhang et al., 2016; Keskar et al., 2016; Chaudhari et al., 2016; Advani & Saxe, 2017). These studies suggest that the generalization error does not depend solely on the complexity of the hypothesis space. For instance, it has been demonstrated that over-parameterized models with many more parameters than training points can still achieve low test error (Huang et al., 2017; Wang et al., 2018) while being complex enough to ï¬t a dataset with completely random labels (Zhang et al., 2016). A possible
2
Published as a conference paper at ICLR 2019
explanation for this phenomenon is a form of implicit regularization performed by stochastic gradi- ent descent: deep neural networks trained with SGD have been recently shown to converge to the maximum margin solution in the linearly separable case (Soudry et al., 2017; Xu et al., 2018). In our work, we provide empirical evidence that generalization can be maintained when removing a substantial portion of the training examples and without restricting the complexity of the hypothesis class. This goes along the support vector interpretation provided by Soudry et al. (2017).
# 3 DEFINING AND COMPUTING EXAMPLE FORGETTING
Our general case study for example forgetting is a standard classiï¬cation setting. Given a dataset D = (xi, yi)i of observation/label pairs, we wish to learn the conditional probability distribution p(y|x; θ) using a deep neural network with parameters θ. The network is trained to minimize the empirical risk R = 1 i L(p(yi|xi; θ), yi), where L denotes the cross-entropy loss and yi â |D| 1, . . . k. The minimization is performed using variations of stochastic gradient descent, starting from initial random parameters θ0, and by sampling examples at random from the dataset D. Forgetting and learning events We denote by Ëyt i = arg maxk p(yik|xi; θt) the predicted label for example xi obtained after t steps of SGD. We also let acct i =yi be a binary variable indicating whether the example is correctly classiï¬ed at time step t. Example i undergoes a forgetting event when acct . In other words, example i is misclassiï¬ed at step t + 1 after having been correctly classiï¬ed at step t. Conversely, a learning event has occurred if acct . Statistics that will be of interest in the next sections include the distribution of forgetting events across examples and the ï¬rst time a learning event occurs.
Classification margin We will also be interested in analyzing the classification margin. Our pre- dictors have the form p(y;|x:;@) = o(G(x:)), where o is a sigmoid (softmax) activation function in the case of binary (categorical) classification. The classification margin m is defined as the dif- ference between the logit of the correct class and the largest logit among the other classes, i.e. m = Bp â arg max, zp Byâ, where k is the index corresponding to the correct class.
Unforgettable examples We qualify examples as unforgettable if they are learnt at some point and experience no forgetting events during the whole course of training: example i is unforgettable if the ï¬rst time it is learnt tâ veriï¬es tâ < â and for all k ⥠tâ, acck i = 1. Note that, according to this deï¬nition, examples that are never learnt during training do not qualify as unforgettable. We refer to examples that have been forgotten at least once as forgettable.
3.1 PROCEDURAL DESCRIPTION AND EXPERIMENTAL SETTING
Following the previous deï¬nitions, monitoring forgetting events entails computing the prediction for all examples in the dataset at each model update, which would be prohibitively expensive. In prac- tice, for each example, we subsample the full sequence of forgetting events by computing forgetting statistics only when the example is included in the current mini-batch; that is, we compute forgetting across presentations of the same example in subsequent mini-batches. This gives a lower bound on the number of forgetting events an example undergoes during training.
We train a classiï¬er on a given dataset and record the forgetting events for each example when they are sampled in the current mini-batch. For the purposes of further analysis, we then sort the datasetâs examples based on the number of forgetting events they undergo. Ties are broken at ran- dom when sampling from the ordered data. Samples that are never learnt are considered forgotten an inï¬nite number of times for sorting purposes. Note that this estimate of example forgetting is computationally expensive; see Sec. 6 for a discussion of a cheaper method.
We perform our experimental evaluation on three datasets of increasing complexity: MNIST (LeCun et al., 1999), permuted MNIST â a version of MNIST that has the same ï¬xed permutation applied to the pixels of all examples, and CIFAR-10 (Krizhevsky, 2009). We use various model architec- tures and training schemes that yield test errors comparable with the current state-of-the-art on the respective datasets. In particular, the MNIST-based experiments use a network comprised of two convolutional layers followed by a fully connected one, trained using SGD with momentum and dropout. This network achieves 0.8% test error. For CIFAR-10, we use a ResNet with cutout (De- Vries & Taylor, 2017) trained using SGD and momentum with a particular learning rate schedule.
3
Published as a conference paper at ICLR 2019
0.04 0.04 0.04 0.03 0.03 0.03 0.02 0.02 0.02 0.01 0.01 o.0L 0.00 0.00 oo lll fraction of examples fraction of examples fraction of examples u : 0 5 10 15 20 2 30 0 5 10 15 20 25 30 o 5 10 15 20 25 30 number of forgetting events number of forgetting events number of forgetting events
Figure 1: Histograms of forgetting events on (from left to right) MNIST, permutedMNIST and CIFAR-10. Insets show the zoomed-in y-axis.
This network achieves a competitive 3.99% test error. For full experimentation details, see the Sup- plementary.
# 4 CHARACTERIZING EXAMPLE FORGETTING
Number of forgetting events We estimate the number of forgetting events of all the training ex- amples for the three different datasets (MNIST, per- mutedMNIST and CIFAR-10) across 5 random seeds. The histograms of forgetting events computed from one seed are shown in Figure 1. There are 55,012, 45,181 and 15,628 unforgettable examples common across 5 seeds, they represent respectively 91.7%, 75.3%, and 31.3% of the corresponding training sets. Note that datasets with less complexity and di- versity of examples, such as MNIST, seem to contain signiï¬cantly more unforgettable examples. permut- edMNIST exhibits a complexity balanced between MNIST (easiest) and CIFAR-10 (hardest). This ï¬nd- ing seems to suggest a correlation between forget- ting statistics and the intrinsic dimension of the learning problem, as recently formalized by Li et al. (2018).
Algorithm 1 Computing forgetting statistics.
initialize prev acci = 0, i â D initialize forgetting T [i] = 0, i â D while not training done do B â¼ D # sample a minibatch for example i â B do compute acci if prev acci > acci then T [i] = T [i] + 1 prev acci = acci gradient update classiï¬er on B return T
Stability across seeds To test the stability of our metric with respect to the variance generated by stochastic gradient descent, we compute the number of forgetting events per example for 10 different random seeds and measure their correlation. From one seed to another, the average Pearson corre- lation is 89.2%. When randomly splitting the 10 different seeds into two sets of 5, the cumulated number of forgetting events within those two sets shows a high correlation of 97.6%. We also ran the original experiment on 100 seeds to devise 95% conï¬dence bounds on the average (over 5 seeds) number of forgetting events per example (see Appendix 13). The conï¬dence interval of the least forgotten examples is tight, conï¬rming that examples with a small number of forgetting events can be ranked conï¬dently.
Forgetting by chance In order to quantify the possibility of forgetting occurring by chance, we ad- ditionally analyze the distribution of forgetting events obtained under the regime of random update steps instead of the true SGD steps. In order to maintain the statistics of the random updates similar to those encountered during SGD, random updates are obtained by shufï¬ing the gradients produced by standard SGD on a main network (more details are provided in Appendix 12). We report the histogram of chance forgetting events in Supplementary Figure 13: examples are being forgotten by chance a small number of time, at most twice and most of the time less than once. The observed sta- bility across seeds, low number of chance forgetting events and the tight conï¬dence bounds suggest that it is unlikely for the ordering produced by the metric to be the by-product of another unrelated random cause.
4
Published as a conference paper at ICLR 2019
plane bird truck S forgettable unforgettable
Figure 2: Pictures of unforgettable (Top) and forgettable examples (Bottom) of every CIFAR-10 class. Forgettable examples seem to exhibit peculiar or uncommon features. Additional examples are available in Supplemental Figure 15.
First learning events We investigate whether unforgettable and forgettable examples need to be presented different numbers of times in order to be learnt for the ï¬rst time (i.e. for the ï¬rst learning event to occur, as deï¬ned in Section 3). The distributions of the presentation numbers at which ï¬rst learning events occur across all datasets can be seen in Supplemental Figure 8. We observe that, while both unforgettable and forgettable sets contain many examples that are learnt during the ï¬rst 3-4 presentations, the forgettable examples contain a larger number of examples that are ï¬rst learnt later in training. The Spearman rank correlation between the ï¬rst learning event presentations and the number of forgetting events across all training examples is 0.56, indicating a moderate relationship.
Misclassiï¬cation margin The deï¬nition of forgetting events is binary and as such fairly crude compared to more sophisticated estimators of example relevance (Zhao & Zhang, 2015; Chang et al., 2017). In order to qualify its validity, we compute the misclassiï¬cation margin of forgetting events. The misclassiï¬cation margin of an example is deï¬ned as the mean classiï¬cation margin (deï¬ned in Section 3) over all its forgetting events, a negative quantity by deï¬nition. The Spearman rank correlation between an exampleâs number of forgetting events and its mean misclassiï¬cation margin is -0.74 (computed over 5 seeds, see corresponding 2D-histogram in Supplemental Figure 9). These results suggest that examples which are frequently forgotten have a large misclassiï¬cation margin.
Visual inspection We visualize some of the unforgettable examples in Figure 2 along with some examples that have been most forgotten in the CIFAR-10 dataset. Unforgettable samples are easily recognizable and contain the most obvious class attributes or centered objects, e.g., a plane on a clear sky. On the other hand, the most forgotten examples exhibit more ambiguous characteristics (as in the center image, a truck on a brown background) that may not align with the learning signal common to other examples from the same class.
Detection of noisy examples We further investigate the observation that the most forgettable ex- amples seem to exhibit atypical characteristics. We would expect that if highly forgettable exam- ples have atypical class characteristics, then noisily-labeled examples will undergo more forgetting events. We randomly change the labels of 20% of CIFAR-10 and record the number of forgetting events of both the noisy and regular examples through training. The distributions of forgetting events across noisy and regular examples are shown in Figure 3. We observe that the most forgotten exam- ples are those with noisy labels and that no noisy examples are unforgettable. We also compare the forgetting events of the noisy examples to that of the same set of examples with original labels and observe a much higher degree of forgetting in the noisy case. The results of these synthetic experi- ments support the hypothesis that highly forgettable examples exhibit atypical class characteristics.
4.1 CONTINUAL LEARNING SETUP
We observed that in harder tasks such as CIFAR-10, a signiï¬cant portion of examples are forgotten at least once during learning. This leads us to believe that catastrophic forgetting may be observed, to some extent, even when considering examples coming from the same task distribution. To test this hypothesis, we perform an experiment inspired by the standard continual learning setup (McCloskey & Cohen, 1989; Kirkpatrick et al., 2017). We create two tasks by randomly sampling 10k examples
5
Published as a conference paper at ICLR 2019
fraction of corresponding examples
j= regular examples jm noisy examples examples before noise j= examples after noise fraction of corresponding examples 10 15 20 ° 5 10 15 20 number of forgetting events number of forgetting events
Figure 3: Distributions of forgetting events across training examples in CIFAR-10 when 20% of labels are randomly changed. Left. Comparison of forgetting events between examples with noisy and original labels. The most forgotten examples are those with noisy labels. No noisy examples are unforgettable. Right. Comparison of forgetting events between examples with noisy labels and the same examples with original labels. Examples exhibit more forgetting when their labels are changed.
(a) random partitions (b) partitioning by forgetting events
Figure 4: Synthetic continual learning setup for CIFAR-10. Background color in each column indi- cates the training partition, curves track performance on both partitions during interleaved training. Solids lines represent the average of 5 runs and dashed lines represent the standard error. The ï¬gure highlights that examples that have been forgotten at least once can âsupportâ those that have never been forgotten, as shown in (c.2) and (b.3).
from the CIFAR-10 training set and dividing them in two equally-sized partitions (5k examples each). We treat each partition as a separate âtaskâ even though they should follow the same distribution. We then train a classiï¬er for 20 epochs on each partition in an alternating fashion, while tracking performance on both partitions. The results are reported in Figure 4 (a). The background color represents which of the two partitions is currently used for training. We observe some forgetting of the second task when we only train on the ï¬rst task (panel (a.2)). This is somewhat surprising as the two tasks contain examples from the same underlying distribution.
We contrast the results from training on random partitions of examples with ones obtained by par- titioning the examples based on forgetting statistics (Figure 4 (b)). That is, we ï¬rst compute the forgetting events for all examples based on Algorithm 1 and we create our tasks by sampling 5k examples that have zero forgetting events (named f0) and 5k examples that have non-zero forgetting events (named fN). We observe that examples that have been forgotten at least once suffer a more drastic form of forgetting than those included in a random split (compare (a.2) with (b.2)). In panel (b.3) and (c.2) we can observe that examples from task f0 suffer very mild forgetting when training on task fN. This suggests that examples that have been forgotten at least once may be able to âsup- portâ those that have never been forgotten. We observe the same pattern when we investigate the opposite alternating sequence of tasks in Figure 4 (b, right).
6
Published as a conference paper at ICLR 2019
â selected removed 96.27 Ne â random removed 96.0 yâ i 95.8 â none removed test classification accuracy 93.0 | â selected removed â random removed 95.2 925 ° 10 20 30 40 50 60 ° 5 Fy 15 20 percentage of training set removed average number of forgetting events in removed subset
# test accuracy
Figure 5: Left Generalization performance on CIFAR-10 of ResNet18 where increasingly larger sub- sets of the training set are removed (mean +/- std error of 5 seeds). When the removed examples are selected at random, performance drops very fast. Selecting the examples according to our ordering can reduce the training set signiï¬cantly without affecting generalization. The vertical line indicates the point at which all unforgettable examples are removed from the training set. Right Difference in generalization performance when contiguous chunks of 5000 increasingly forgotten examples are removed from the training set. Most important examples tend to be those that are forgotten the most.
5 REMOVING UNFORGETTABLE EXAMPLES
As shown in the previous section, learning on examples that have been forgotten at least once min- imally impacts performance on those that are unforgettable. This appears to indicate that unforget- table examples are less informative than others, and, more generally, that the more an example is forgotten during training, the more useful it may be to the classiï¬cation task. This seems to align with the observations in Chang et al. (2017), where the authors re-weight training examples by ac- counting for the variance of their predictive distribution. Here, we test whether it is possible to completely remove a given subset of examples during training.
In Fig. 5 (Left), we show the evolution of the generalization performance in CIFAR-10 when we artiï¬cially remove examples from the training dataset. We choose the examples to remove by in- creasing number of forgetting events. Each point in the ï¬gure corresponds to retraining the model from scratch on an increasingly smaller subset of the training data (with the same hyper-parameters as the base model). We observe that when removing a random subset of the dataset, performance rapidly decreases. Comparatively, by removing examples ordered by number of forgetting events, 30% of the dataset can be removed while maintaining comparable generalization performance as the base model trained on the full dataset, and up to 35% can be removed with marginal degradation (less than 0.2%). The results on the other datasets are similar: a large fraction of training examples can be ignored without hurting the ï¬nal generalization performance of the classiï¬ers (Figure 6).
In Figure 5 (Right), we show the evolution of the generalization error when we remove from the dataset 5,000 examples with increasing forgetting statistics. Each point in the ï¬gure corresponds to the generalization error of a model trained on the full dataset minus 5,000 examples as a function of the average number of forgetting events in those 5,000 examples. As can be seen, removing the same number of examples with increasingly more forgetting events results in worse generalization for most of the curve. It is interesting to notice the rightmost part of the curve moving up, suggesting that some of the most forgotten examples actually hurt performance. Those could correspond to outliers or mislabeled examples (see Sec. 4). Finding a way to separate those points from very informative ones is an ancient but still active area of research (John, 1995; Jiang et al., 2018).
Support vectors Various explanations of the implicit generalization of deep neural networks (Zhang et al., 2016) have been offered: ï¬at minima generalize better and stochastic gradient descent con- verges towards them (Hochreiter & Schmidhuber, 1997; Kleinberg et al., 2018), gradient descent protects against overï¬tting (Advani & Saxe, 2017; Tachet et al., 2018), deep networksâ structure biases learning towards simple functions (Neyshabur et al., 2014; Perez et al., 2018). But it remains a poorly understood phenomenon. An interesting direction of research is to study the convergence properties of gradient descent in terms of maximum margin classiï¬ers. It has been shown recently
7
Published as a conference paper at ICLR 2019
4 â CIFAR-10 â CIFAR-10 â permuted MNIST ââ permuted MNIST â nist â nist 40 4. 2% test error increase ââ 2% test error increase 50 30 20 10 percent increase in test error percent increase in test error ° 20 40 60 80 100 ° 20 40 60 80 100 percent of training set removed percent of training set removed
Figure 6: Decrease in generalization performance when fractions of the training sets are removed. When the subsets are selected appropriately, performance is maintained after removing up to 30% of CIFAR-10, 50% of permutedMNIST, and 80% of MNIST. Vertical black line indicates the point at which all unforgettable examples are removed from CIFAR-10. Right is a zoomed in version of Left.
(Soudry et al., 2017) that on separable data, a linear network will learn such a maximum margin classiï¬er. This supports the idea that stochastic gradient descent implicitly converges to solutions that maximally separate the dataset, and additionally, that some data points are more relevant than others to the decision boundary learnt by the classiï¬er. Those points play a part equivalent to sup- port vectors in the support vector machine paradigm. Our results conï¬rm that a signiï¬cant portion of training data points have little to no inï¬uence on the generalization performance when the decision function is learnt with SGD. Forgettable training points may be considered as analogs to support vec- tors, important for the generalization performance of the model. The number of forgetting events of an example is a relevant metric to detect such support vectors. It also correlates well with the misclassiï¬cation margin (see Sec.4) which is a proxy for the distance to the decision boundary.
Intrinsic dataset dimension As mentioned above, the datasets we study have various fractions of unforgettable events (91.7% for MNIST, 75.3% for permutedMNIST and 31.3% for CIFAR-10). We also see in Figure 6 that performance on those datasets starts to degrade at different fractions of removed examples: the number of support vectors varies from one dataset to the other, based on the complexity of the underlying data distribution. If we assume that we are in fact detecting analogs of support vectors, we can put these results in perspective with the intrinsic dataset dimension deï¬ned by Li et al. (2018) as the codimension in the parameter space of the solution set: for a given archi- tecture, the higher the intrinsic dataset dimension, the larger the number of support vectors, and the fewer the number of unforgettable examples.
# 6 TRANSFERABLE FORGETTING EVENTS
Forgetting events rely on training a given architecture, with a given optimizer, for a given number of epochs. We investigate to what extent the forgetting statistics of examples depend on those factors.
Throughout training We compute the Spearman rank correlation between the ordering obtained at the end of training (200 epochs) and the ordering after various number of epochs. As seen in Fig. 7 (Left), the ordering is very stable after 75 epochs, and we found a reasonable number of epochs to get a good correlation to be 25 (see the Supplementary Materials for precision-recall plots).
Between architectures A limitation of our method is that it requires computing the ordering from a previous run. An interesting question is whether that ordering could be obtained from a simpler architecture than residual networks. We train a network with two convolutional layers followed by two fully connected ones (see the Supplementary for the full architecture) and compare the resulting ordering with the one obtained with ResNet18. Figure 7 (Middle) shows a precision-recall plot of the unforgettable examples computed with the residual network. We see a reasonably strong agree- ment between the unforgettable examples of the convolutional neural network and the ones of the ResNet18. Finally, we train a WideResNet (Zagoruyko & Komodakis, 2016) on truncated data sets
8
Published as a conference paper at ICLR 2019
retrieving 17k easiest for resnet â oo, © from basic_cnn ° 08 ce @ = recall at 17k an aw spearman rank correlation precision ° re ee eT ae) 0.0 0.2 04 0.6 08 1.0 30 40 30 60 epochs recall percentage of training set removed
retrieving 17k easiest for resnet oo, © from basic_cnn ° 08 ce @ = recall at 17k aw precision ° 0.0 0.2 04 0.6 08 1.0 30 40 30 60 recall percentage of training set removed
â an spearman rank correlation re ee eT ae) epochs
Figure 7: Left. Ranking of examples by forgotten events stabilizes after 75 epochs in CIFAR-10. Middle. Precision and recall of retrieving the unforgettable examples of ResNet18, using the ex- ample ordering of a simpler convolutional neural network. Right. Generalization performance on CIFAR-10 of a WideResNet using the example ordering of ResNet18.
using the example ordering from ResNet18. Using the same computing power (one Titan X GPU), Resnet18 requires 2 hours to train whereas WideResNet requires 8 â estimating the forgetting statis- tics of WideResNet via ResNet18 can save up to 6 hours of training time if the estimate is accurate. We plot WideResNetâs generalization performance using the ordering obtained by ResNet18 in Fig- ure 7 (Right): the network still performs near optimally with 30% of the dataset removed. This opens up promising avenues of computing forgetting statistics with smaller architectures.
# 7 CONCLUSION AND FUTURE WORK
In this paper, inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks when training on single classiï¬cation tasks. We show that catastrophic forgetting can occur in the context of what is usually considered to be a single task. Inspired by this result, we ï¬nd that some examples within a task are more prone to being forgotten, while others are consistently unforgettable. We also ï¬nd that forgetting statistics seem to be fairly stable with respect to the various characteristics of training, suggesting that they actually uncover intrinsic properties of the data rather than idiosyncrasies of the training schemes. Furthermore, the unforgettable examples seem to play little part in the ï¬nal performance of the classiï¬er as they can be removed from the training set without hurting generalization. This supports recent research interpreting deep neural networks as max margin classiï¬ers in the linear case. Future work involves understanding forgetting events better from a theoretical perspective, exploring potential applications to other areas of super- vised learning, such as speech or text and to reinforcement learning where forgetting is prevalent due to the continual shift of the underlying distribution.
# 8 ACKNOWLEDGMENTS
We acknowledge the anonymous reviewers for their insightful suggestions.
# REFERENCES
Madhu S. Advani and Andrew M. Saxe. High-dimensional dynamics of generalization error in neural networks. CoRR, abs/1710.03667, 2017.
Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxin- der S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. A closer look at memorization in deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 233â242. JMLR.org, 2017.
Yoshua Bengio and Yann LeCun. Scaling learning algorithms towards AI. In Large Scale Kernel Machines. MIT Press, 2007.
Yoshua Bengio, J´erËome Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pp. 41â48. ACM, 2009.
9
Published as a conference paper at ICLR 2019
Carla E Brodley and Mark A Friedl. Identifying mislabeled training data. Journal of artiï¬cial intelligence research, 11:131â167, 1999.
Haw-Shiuan Chang, Erik Learned-Miller, and Andrew McCallum. Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples. In Advances in Neural In- formation Processing Systems, pp. 1002â1012, 2017.
Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, and Riccardo Zecchina. Entropy-SGD: Biasing Gradi- ent Descent Into Wide Valleys. ICLR â17, 2016.
Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
Yang Fan, Fei Tian, Tao Qin, and Jiang Bian. Learning What Data to Learn. arXiv preprint arXiv:1702.08635, 2017.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proc. of ICML, 2017.
S. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1â42, 1997.
Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q Weinberger. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700â4708, 2017.
Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. MentorNet: Learning data- driven curriculum for very deep neural networks on corrupted labels. In Proceedings of the 35th International Conference on Machine Learning. PMLR, 2018.
George H John. Robust decision trees: removing outliers from databases. In Proceedings of the First International Conference on Knowledge Discovery and Data Mining, pp. 174â179. AAAI Press, 1995.
Angelos Katharopoulos and Franois Fleuret. Not all samples are created equal: Deep learning with importance sampling. In Jennifer G. Dy and Andreas Krause (eds.), ICML, volume 80 of JMLR Workshop and Conference Proceedings, pp. 2530â2539. JMLR.org, 2018. URL http: //dblp.uni-trier.de/db/conf/icml/icml2018.html#KatharopoulosF18.
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Pe- ter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
Screenernet: Learning curriculum for neural networks. CoRR, abs/1801.00904, 2018. URL http://dblp.uni-trier.de/db/journals/ corr/corr1801.html#abs-1801-00904.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014. URL http: //arxiv.org/abs/1412.6980. cite arxiv:1412.6980Comment: Published as a conference paper at the 3rd International Conference for Learning Representations, San Diego, 2015.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, and Others. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, pp. 201611835, 2017.
Robert Kleinberg, Yuanzhi Li, and Yang Yuan. An alternative view: When does sgd escape lo- cal minima? CoRR, abs/1802.06175, 2018. URL http://dblp.uni-trier.de/db/ journals/corr/corr1802.html#abs-1802-06175.
Pang Wei Koh and Percy Liang. Understanding black-box predictions via inï¬uence functions. In Doina Precup and Yee Whye Teh (eds.), ICML, volume 70 of JMLR Workshop and Conference Proceedings, pp. 1885â1894. JMLR.org, 2017. URL http://dblp.uni-trier.de/db/ conf/icml/icml2017.html#KohL17.
10
Published as a conference paper at ICLR 2019
Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009. URL https://www.cs.toronto.edu/Ëkriz/learning-features-2009-TR. pdf.
M Pawan Kumar, Benjamin Packer, and Daphne Koller. Self-Paced Learning for Latent Variable Models. In Proc. of NIPS, pp. 1â9, 2010.
Y. LeCun, C. Cortes C., and C. Burges. http://yann.lecun.com/exdb/mnist/, 1999. The mnist database of handwritten digits.
Yong Jae Lee and Kristen Grauman. Learning the easy things ï¬rst: Self-paced visual category discovery. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 1721â1728. IEEE, 2011.
Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski. Measuring the intrinsic dimension of objective landscapes. CoRR, abs/1804.08838, 2018. URL http://dblp.uni-trier. de/db/journals/corr/corr1804.html#abs-1804-08838.
Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation, volume 24, pp. 109â165. Elsevier, 1989.
Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. CoRR, abs/1412.6614, 2014. URL http: //dblp.uni-trier.de/db/journals/corr/corr1412.html#NeyshaburTS14.
Guillermo Valle Perez, Chico Q. Camargo, and Ard A. Louis. Deep learning generalizes be- cause the parameter-function map is biased towards simple functions. CoRR, abs/1805.08522, 2018. URL http://dblp.uni-trier.de/db/journals/corr/corr1805.html# abs-1805-08522.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In Proc. of ICLR, 2017.
Hippolyt Ritter, Aleksandar Botev, and David Barber. Online Structured Laplace Approximations For Overcoming Catastrophic Forgetting. arxiv preprint arxiv: 1805.07810, 2018. URL http: //arxiv.org/abs/1805.07810.
Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The Im- plicit Bias of Gradient Descent on Separable Data. arXiv preprint arxiv:1710.10345, 2017. URL http://arxiv.org/abs/1710.10345.
Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training convolutional networks with noisy labels. arXiv preprint arXiv:1406.2080, 2014.
R. Tachet, M. Pezeshki, S. Shabanian, A. Courville, and Y. Bengio. On the learning dynamics of deep neural networks. arxiv preprint arxiv:1809.06848, 2018. doi: arXiv:1809.06848v1. URL https://arxiv.org/abs/1809.06848.
Huan Wang, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. Identifying Generalization Properties in Neural Networks. arXiv preprint arXiv:1809.07402, pp. 1â23, 2018. doi: arXiv: 1809.07402v1. URL http://arxiv.org/abs/1809.07402.
Tengyu Xu, Yi Zhou, Kaiyi Ji, and Yingbin Liang. Convergence of sgd in learning relu models with separable data. CoRR, abs/1806.04339, 2018. URL http://dblp.uni-trier.de/db/ journals/corr/corr1806.html#abs-1806-04339.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks, 2016. URL http://arxiv. org/abs/1605.07146. cite arxiv:1605.07146.
11
Published as a conference paper at ICLR 2019
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
Peilin Zhao and Tong Zhang. Stochastic Optimization with Importance Sampling for Regularized Loss Minimization. In Proc. of ICML, 2015.
12
Published as a conference paper at ICLR 2019
9 EXPERIMENTATION DETAILS
# Detailed distributions
presentation number when frst leamed presentation number when frst leamed presentation number when frst leamed
Figure 8: From left to right, distributions of the ï¬rst presentation at which each unforgettable and forgettable example was learned in MNIST, permutedMNIST and CIFAR-10 respectively. Rescaled view where the number of examples have been capped between 0 and 1500 for visualization pur- poses. Unforgettable examples are generally learnt early during training, thus may be considered as âeasyâ in the sense of Kumar et al. (2010), i.e. may have a low loss during most of the training.
# Misclassiï¬cation margin
0 4 8000 2 6000 4000 4 2000 5 o o> 5 b & mM B H number of forgetting events misclassification margin
Figure 9: Left 2D-histogram of the number of forgetting events and mean misclassiï¬cation margin across all examples of CIFAR-10. There is signiï¬cant negative correlation (-0.74, Spearman rank correlation) between mean misclassiï¬cation margin and the number of forgetting events.
permutedMNIST The permutedMNIST data set is obtained by applying a ï¬xed random permutation of the pixels to all the images of the standard MNIST data set. This typically makes the data set harder to learn for convolutional neural networks as local patterns, e.g. the horizontal bar of the 7, get shufï¬ed. This statement is supported by the two following facts:
⢠The number of unforgettable examples for permutedMNIST is 45181 versus 55012 for MNIST.
⢠The intrinsic data set dimension (Li et al., 2018) of permutedMNIST is 1400 compared to 290 for the untouched data set.
Network Architectures We use a variety of different architectures in the main text. Below are their speciï¬cations.
The architecture for the MNIST and permutedMNIST experiments is the following:
1. a ï¬rst convolutional layer with 5 by 5 ï¬lters and 10 feature maps,
2. a second convolutional layer with 5 by 5 ï¬lters and 20 feature maps,
3. a fully connected layer with 50 hidden units
4. the output layer, with 10 logits, one for each class.
We apply ReLU nonlinearities to the feature maps and to the hidden layer. The last layer is passed through a softmax to output probabilities for each class of the data set.
13
Published as a conference paper at ICLR 2019
The ResNet18 architecture used for CIFAR-10 is described thoroughly in DeVries & Taylor (2017), its implementation can be found at https://github.com/uoguelph-mlrg/Cutout.
The second one is a WideResNet (Zagoruyko & Komodakis, 2016), with a depth of 28 and a widen factor of 10. We used the implementation found at https://github.com/meliketoy/ wide-resnet.pytorch.
The convolutional architecture used in Section 6 is the following:
1. a ï¬rst convolutional layer with 5 by 5 ï¬lters and 6 feature maps,
2. a 2 by 2 max pooling layer
3. a second convolutional layer with 5 by 5 ï¬lters and 16 feature maps,
4. a ï¬rst fully connected layer with 120 hidden units
5. a second fully connected layer with 84 hidden units
6. the output layer, with 10 logits, one for each class.
# Optimization
The MNIST networks are trained to minimize the cross-entropy loss using stochastic gradient de- scent with a learning rate of 0.01 and a momentum of 0.5.
The ResNet18 is trained using cutout, data augmentation and stochastic gradient descent with a 0.9 Nesterov momentum and a learning rate starting at 0.1 and divided by 5 at epochs 60, 120 and 160.
The WideResNet is trained using Adam (Kingma & Ba, 2014) and a learning rate of 0.001.
# 10 STABILITY OF THE FORGETTING EVENTS
In Fig 10, we plot precision-recall diagrams for the unforgettable and most forgotten examples of CIFAR-10 obtained on ResNet18 after 200 epochs and various prior time steps. We see in particular that at 75 epochs, the examples on both side of the spectrum can be retrieved with very high precision and recall.
retrieving 17k hardest for resnet 1 retrieving 17k easiest for resnet EEE RE RARER EY 081 eeeesecccccescocecce precision precision from resnet 25 epochs from resnet 50 epochs from resnet 75 epochs recall at 17k 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 06 0.8 10 recall recall
Figure 10: Right: precision and recall of retrieving the unforgettable examples from a full run of ResNet18 (200 epochs), using the example ordering after 25, 50, and 75 epochs. The unforgettable examples are retrieved with high precision and recall after 50 epochs. Left: same plot for the 17k examples with the most forgetting events.
# 11 Noising THE DATA SETS
In Section 4, we analyzed the effect of adding label noise on the distribution of forgetting events. Here, we examine the effect of adding pixel noise, i.e. noising the input distribution. We choose to corrupt the inputs with additive Gaussian noise with zero mean and we choose for its stan- dard deviation to be a multiple of channel-wise data standard deviation (i.e., Ïnoise = λÏdata, λ â {0.5, 1, 2, 10}). Note that we add the noise after applying a channel-wise standard normalization
14
Published as a conference paper at ICLR 2019
step of the training images, therefore Ïdata = 1 (each channel has zero mean, unit variance, this is a standard pre-processing step and has been applied throughout all the experiments in this paper).
The forgetting distributions obtained by noising all the dataset examples with increasing noise stan- dard deviation are presented in Figure 11. We observe that adding increasing amount of noise decreases the amount of unforgettable examples and increases the amount of examples in the second mode of the forgetting distribution.
std 10 std 2 std 1 std 0.5 no noise
Figure 11: Distribution of forgetting events across all training examples in CIFAR-10 when all training images are augmented with increasing additive Gaussian noise. The presence of increas- ing amount of noise decreases the amount of unforgettable examples and increases the amount of examples in the second mode of the forgetting distribution.
We follow the noisy-labels experiments of Section 4 and we apply the aforementioned pixel noise to 20% of the training data (Ïnoise = 10). We present the results of comparing the forgetting distribution of the 20% of examples before and after noise was added to the pixels in Figure 12 (Left). For ease of comparison, we report the same results in the case of label noise in Figure 12 (Right). We observe that the forgetting distribution under pixel noise resembles the one under label noise.
examples before noise jms _examples after pixel noise examples before noise jm _examples after label noise fraction of corresponding examples fraction of corresponding examples ° 5 10 15 20 25 ° 5 10 15 20 25 number of forgetting examples number of forgetting events
Figure 12: Distribution of forgetting events across all training examples in CIFAR-10 when random 20% of training examples undergo pixel noise (Ïnoise = 10) (Left) or label noise (Right) (same as Figure 3). We observe that the forgetting distribution under pixel noise resembles the one under label noise.
15
Published as a conference paper at ICLR 2019
12
# âCHANCEâ FORGETTING EVENTS ON CIFAR-10
js true gradients jem true gradients 10000 random gradients 10000 random gradients 8000 6000 4000 number of examples number of examples 2000 7 0 5 10 15 20 25 number of forgetting events number of forgetting events
Figure 13: Histogram of forgetting events under true and random gradient steps. (Right) Zoomed-in version where the number of forgetting events is capped at 3 for visualization.
Forgetting events may happen by âchanceâ, i.e. some learning/forgetting events may occur even with random gradients. In order to estimate how large the effect of âchanceâ is, we compute the forgetting events of a classiï¬er obtained by randomizing the update steps. To keep the statistics of the gradients similar to those encountered during SGD, we proceed as follows:
1. Before the beginning of training, clone the âbaseâ classiï¬er into a new âcloneâ classiï¬er with the same random weights.
2. At each training step, shufï¬e the gradients computed on the base classiï¬er and apply those this ensures that the to the clone (the base classiï¬er is still optimized the same way): statistics of the random updates match the statistics of the true gradients during learning.
3. Compute the forgetting events of the clone classiï¬er on the training set exactly as is done with the base classiï¬er.
The results can be found in Fig 13, showing the histogram of forgetting events produced by the clone network, averaged over 5 seeds. This gives an idea of the chance forgetting rate across examples. In this setting, examples are being forgotten by chance at most twice.
# 13 CONFIDENCE ON FORGETTING EVENTS FOR CIFAR-10
In order to establish conï¬dence intervals on the number of forgetting events, we computed them on 100 seeds and formed 20 averages over 5 seeds. In Fig 14, we show the average (in green), the bot- tom 2.5 percentile (in blue) and top 2.5 percentile (in orange) of those 20 curves.
» B 2 6 4# of forgetting events 0 woo00 =. 20000» 30000-40000 «50000 examples index ordered by avg. forgetting events
Figure 14: 95% conï¬dence interval on for- getting events averaged over 5 seeds.
# 14 VISUALIZATION OF FORGETTABLE AND UNFORGETTABLE IMAGES
See Fig 15 for additional pictures of the most unforgettable and forgettable examples of every CIFAR-10 class, when examples are sorted by number of forgetting events (ties are broken ran- domly).
16
Published as a conference paper at ICLR 2019
plane ame plane car f D = 2 bird cat cat deer hi DS EL 4 fa: akan s deer « # ad dog Lal a i} bo frog a ° - ka EZ = horse GE a Se ew < ship ship Z a BY, u 2 5 BY, u 2 5
plane car D = 2 cat deer hi DS EL 4 fa: akan s dog Lal a ° - ka EZ = GE a Se ew < ship BY, u 2 5
plane ame f bird cat deer « # ad a i} bo frog horse ship Z a BY, u 2 5
Figure 15: Additional pictures of the most unforgettable (Left) and forgettable examples (Right) of every CIFAR-10 class, when examples are sorted by number of forgetting events (ties are broken randomly). Forgettable examples seem to exhibit peculiar or uncommon features.
# 15 FORGETTING IN CIFAR-100
5000 5000 4000 4000 2000 f #-==--4-----4----- number of examples number of examples 1000 } t--=-- 4-4-4 -7--- 4+ 25 number of forgetting events 20S 35 number of forgetting events
Figure 16: Left: distribution of forgetting events in CIFAR-100. Right: distribution of forgetting events in CIFAR-10 when 20% of the labels are changed at random. The distribution of forgetting in CIFAR-100 is much closer to that of forgetting in the noisy CIFAR-10 than it is to forgetting in the original datasets presented in Figure 1.
The distribution of forgetting events in CIFAR-100 is shown in Figure 16. There are 3809 unfor- gettable examples (7.62% of the training set). CIFAR-100 is the hardest to classify out all of the presented datasets and exhibits the highest percentage of forgetting events. This ï¬nding further supports the idea that there may be a correlation between the forgetting statistics and the intrinsic
17
Published as a conference paper at ICLR 2019
dimension of the learning problem. Additionally, each CIFAR-100 class contains 10 times fewer examples than in CIFAR-10 or the MNIST datasets, making each image all the more useful for the learning problem.
We also observe that the distribution of forgetting in CIFAR-100 is much closer to that of forgetting in the noisy CIFAR-10 than it is to forgetting in the original datasets presented in Figure 1. Visualizing the most forgotten examples in CIFAR-100 revealed that CIFAR-100 contains several images that appear multiple times in the training set under different labels. In Figure 17, we present the 36 most forgotten examples in CIFAR-100. Note that they are all images that appear under multiple labels (not shown: the âgirlâ image also appears under the label âbabyâ, the âmouseâ image also appears under âshrewâ, one of the 2 images of âoak treeâ appears under âwillow treeâ and the other under âmaple treeâ).
otter mouse otter seal snake lion crab shark shark girl otter porcupine raccoon oak tree whale apple porcupine snake worm spider beaver worm snake se worm, BR S ps? beaver worm, »
Figure 17: The 36 most forgotten examples in CIFAR-100. Note that they are all images that ap- pear under multiple labels (not pictured: the âgirlâ image also appears under the label âbabyâ, the âmouseâ image also appears under âshrewâ, one of the 2 images of âoak treeâ appears under âwil- low treeâ and the other under âmaple treeâ.
We perform the same removal experiments we presented in Figure 5 for CIFAR-100. The results are shown in Figure 18. Just like with CIFAR-10, we are able to remove all unforgettable examples ( 8% of the training set) while maintaining test performance.
18
Published as a conference paper at ICLR 2019
~ o test accuracy won ~ a 8 q ~ Bz ~ a ââ none removed ââ selected removed ââ random removed sy ~ nN t) 5 10 15 20 25 30 35 percentage of training set removed
Figure 18: Generalization performance on CIFAR-100 of ResNet18 where increasingly larger sub- sets of the training set are removed (mean +/- std error of 5 seeds). When the removed examples are selected at random, performance drops faster. Selecting the examples according to our ordering reduces the training set without affecting generalization.
19 | {
"id": "1710.10345"
} |
1812.05069 | Recent Advances in Autoencoder-Based Representation Learning | Learning useful representations with little or no supervision is a key
challenge in artificial intelligence. We provide an in-depth review of recent
advances in representation learning with a focus on autoencoder-based models.
To organize these results we make use of meta-priors believed useful for
downstream tasks, such as disentanglement and hierarchical organization of
features. In particular, we uncover three main mechanisms to enforce such
properties, namely (i) regularizing the (approximate or aggregate) posterior
distribution, (ii) factorizing the encoding and decoding distribution, or (iii)
introducing a structured prior distribution. While there are some promising
results, implicit or explicit supervision remains a key enabler and all current
methods use strong inductive biases and modeling assumptions. Finally, we
provide an analysis of autoencoder-based representation learning through the
lens of rate-distortion theory and identify a clear tradeoff between the amount
of prior knowledge available about the downstream tasks, and how useful the
representation is for this task. | http://arxiv.org/pdf/1812.05069 | Michael Tschannen, Olivier Bachem, Mario Lucic | cs.LG, cs.CV, stat.ML | Presented at the third workshop on Bayesian Deep Learning (NeurIPS
2018) | null | cs.LG | 20181212 | 20181212 | 8 1 0 2
c e D 2 1 ] G L . s c [
1 v 9 6 0 5 0 . 2 1 8 1 : v i X r a
# Recent Advances in Autoencoder-Based Representation Learning
# Michael Tschannen ETH Zurich michaelt@nari.ee.ethz.ch
Olivier Bachem Google AI, Brain Team bachem@google.com
Mario Lucic Google AI, Brain Team lucic@google.com
# Abstract
Learning useful representations with little or no supervision is a key challenge in artiï¬cial intelligence. We provide an in-depth review of recent advances in repre- sentation learning with a focus on autoencoder-based models. To organize these results we make use of meta-priors believed useful for downstream tasks, such as disentanglement and hierarchical organization of features. In particular, we un- cover three main mechanisms to enforce such properties, namely (i) regularizing the (approximate or aggregate) posterior distribution, (ii) factorizing the encod- ing and decoding distribution, or (iii) introducing a structured prior distribution. While there are some promising results, implicit or explicit supervision remains a key enabler and all current methods use strong inductive biases and modeling assumptions. Finally, we provide an analysis of autoencoder-based representation learning through the lens of rate-distortion theory and identify a clear tradeoff be- tween the amount of prior knowledge available about the downstream tasks, and how useful the representation is for this task.
# Introduction
The ability to learn useful representations of data with little or no supervision is a key challenge towards applying artiï¬cial intelligence to the vast amounts of unlabelled data collected in the world. While it is clear that the usefulness of a representation learned on data heavily depends on the end task which it is to be used for, one could imagine that there exists properties of representations which are useful for many real-world tasks simultaneously. In a seminal paper on representation learning Bengio et al. [1] proposed such a set of meta-priors. The meta-priors are derived from general assumptions about the world such as the hierarchical organization or disentanglement of explanatory factors, the possibility of semi-supervised learning, the concentration of data on low- dimensional manifolds, clusterability, and temporal and spatial coherence.
Recently, a variety of (unsupervised) representation learning algorithms have been proposed based on the idea of autoencoding where the goal is to learn a mapping from high-dimensional observa- tions to a lower-dimensional representation space such that the original observations can be recon- structed (approximately) from the lower-dimensional representation. While these approaches have varying motivations and design choices, we argue that essentially all of the methods reviewed in this paper implicitly or explicitly have at their core at least one of the meta-priors from Bengio et al. [1].
Given the unsupervised nature of the upstream representation learning task, the characteristics of the meta-priors enforced in the representation learning step determine how useful the resulting rep- resentation is for the real-world end task. Hence, it is critical to understand which meta-priors are targeted by which models and which generic techniques are useful to enforce a given meta-prior. In this paper, we provide a uniï¬ed view which encompasses the majority of proposed models and relate them to the meta-priors proposed by Bengio et al. [1]. We summarize the recent work focusing on the meta-priors in Table 1.
Third workshop on Bayesian Deep Learning (NeurIPS 2018), Montr´eal, Canada.
Table 1: Grouping of methods according to the meta-priors for representation learning from [1]. While many methods directly or indirectly address multiple meta-priors, we only considered the most prominent target of each method. Note that meta-priors such as low dimensionality and mani- fold structure are enforced by essentially all methods.
Meta-prior Methods Disentanglement β-VAE (6) [2], FactorVAE (8) [3], β-TCVAE (9) [4], InfoVAE (9) [5], DIP- VAE (11) [6], HSIC-VAE (12) [7], HFVAE (13) [8], VIB [9], Information dropout (15) [10], DC-IGN [11], FaderNetworks (18) [12], VFAE (17) [13] Hierarchical representation1 PixelVAE [14], LVAE [15], VLaAE [16], Semi-supervised VAE [17], PixelGAN-AE [18], VLAE [19], VQ-VAE [20] Semi-supervised learning Semi-supervised VAE [17], [21], PixelGAN-AE (14) [18], AAE (16) [22] Clustering PixelGAN-AE (14) [18], AAE (16) [22], JointVAE [23], SVAE [24]
Meta-priors of Bengio et al. [1]. Meta-priors capture very general premises about the world and are therefore arguably useful for a broad set of downstream tasks. We brieï¬y summarize the most important meta-priors which are targeted by the reviewed approaches.
1. Disentanglement: Assuming that the data is generated from independent factors of variation, for example object orientation and lighting conditions in images of objects, disentanglement as a meta-prior encourages these factors to be captured by different independent variables in the representation. It should result in a concise abstract representation of the data useful for a variety of downstream tasks and promises improved sample efï¬ciency.
2. Hierarchical organization of explanatory factors: The intuition behind this meta-prior is that the world can be described as a hierarchy of increasingly abstract concepts. For example natural images can be abstractly described in terms of the objects they show at various levels of granularity. Given the object, a more concrete description can be given by object attributes. 3. Semi-supervised learning: The idea is to share a representation between a supervised and an unsupervised learning task which often leads to synergies: While the number of labeled data points is usually too small to learn a good predictor (and thereby a representation), training jointly with an unsupervised target allows the supervised task to learn a representation that generalizes, but also guides the representation learning process.
4. Clustering structure: Many real-wold data sets have multi-category structure (such as im- ages showing different object categories), with possibly category-dependent factors of varia- tion. Such structure can be captured with a latent mixture model where each mixture compo- nent corresponds to one category, and its distribution models the factors of variation within that category. This naturally leads to a representation with clustering structure.
Very generic concepts such as smoothness as well as temporal and spatial coherence are not speciï¬c to unsupervised learning and are used in most practical setups (for example weight decay to encour- age smoothness of predictors, and convolutional layers to capture spatial coherence in image data). We discuss the implicit supervision used by most approaches in Section 7.
Mechanisms for enforcing meta-priors. We identify the following three mechanisms to enforce meta-priors:
(i) Regularization of the encoding distribution (Section 3). (ii) Choice of the encoding and decoding distribution or model family (Section 4). (iii) Choice of a ï¬exible prior distribution of the representation (Section 5).
For example, regularization of the encoding distribution is often used to encourage disentangled representations. Alternatively, factorizing the encoding and decoding distribution in a hierarchical fashion allows us to impose a hierarchical structure to the representation. Finally, a more ï¬exible prior, say a mixture distribution, can be used to encourage clusterability.
1While PixelGAN-AE [18], VLAE [19], and VQ-VAE [20] do not explicitly model a hierarchy of latents, they learn abstract representations capturing global structure of images [18, 19] and speech signals [20], hence internally representing the data in a hierarchical fashion.
2
encoder code decoder ) SJ po(l2) prior p(z)
(a) Variational Autoencoder (VAE) framework.
(b) Samples from a trained VAE.
Figure 1: Figure (a) illustrates the Variational Autoencoder (VAE) framework speciï¬ed by the en- coder, decoder, and the prior distribution on the latent (representation/code) space. The encoder maps the input to the representation space (inference), while the decoder reconstructs the original input from the representation. The encoder is encouraged to satisfy some structure on the latent space (e.g., it should be disentangled). Figure (b) shows samples from a trained autoencoder with latent space of 2 dimensions on the MNIST data set. Each point on the left corresponds to the representation of a digit (originally in 784 dimensions) and the reconstructed digits can be seen on the right. One can observe that in this case the latent representation is clustered (various styles of the same digit are close w.r.t. L2-distance, and within each group the position corresponds to the rotation of the digit).
Before starting our overview, in Section 2 we present the main concepts necessary to understand variational autoencoders (VAEs) [25, 26], underlying most of the methods considered in this pa- per, and several techniques used to estimate divergences between probability distributions. We then present a detailed discussion of regularization-based methods in Section 3, review methods rely- ing on structured encoding and decoding distributions in Section 4, and present methods using a structured prior distribution in Section 5. We conclude the review section by an overview of related methods such as cross-domain representation learning [27â29] in Section 6. Finally, we provide a critique of unsupervised representation learning through the rate-distortion framework of Alemi et al. [30] and discuss the implications in Section 7.
# 2 Preliminaries
We assume familiarity with the key concepts in Bayesian data modeling. For a gentle introduction to VAEs we refer the reader to [31]. VAEs [25, 26] aim to learn a parametric latent variable model N i=1. By introducing an ap- by maximizing the marginal log-likelihood of the training data } proximate posterior qÏ(z x) we can rewrite the negative log-likelihood as
# E
Ëp(x)[ log pθ(x)] = VAE(θ, Ï) E Ëp(x)[DKL(qÏ(z pθ(z
â
# L
â
x))] |
where
VAE(θ, Ï) = E Ëp(x)[E qÏ(z|x)[ log pθ(x Ëp(x)[DKL(qÏ(z p(z))], (1)
z)] + E |
# L
â
3
t Ns y Il vu core _ A.» c=0 noise oe discriminator Py generator
Py e Pr MMD (2, Py) BS a ure mapping g
(a) The main idea behind GANs. (b) The main idea behind MMD.
Figure 2: Adversarial density ratio estimation vs MMD. Figure (a): GANs use adversarial density ratio estimation to train a generative model, which can be seen as a two-player game: The discrimi- nator tries to predict whether samples are real or generated, while the generator tries to deceive the discriminator by mimicking the distribution of the real samples. Figure (b): The MMD corresponds to the distance between mean feature embeddings.
and E i=1 f (x(i)) is the expectation of the function f (x) w.r.t. the empirical data distribution. The approach is illustrated in Figure 1. The ï¬rst term in (1) measures the reconstruction error and the second term quantiï¬es how well qÏ(z x) matches the prior p(z). The structure of the latent space heavily depends on this prior. As the KL divergence is non-negative, VAE lower- bounds the marginal log-likelihood E Ëp(x)[log pθ(x)] and is accordingly called the evidence lower bound (ELBO).
There are several design choices available: (1) The prior distribution on the latent space, p(z), (2) the family of approximate posterior distributions, qÏ(z z). Ideally, the approximate posterior should be ï¬exible enough to match the intracable true posterior x). As we will see later, there are many available options for these design choices, leading to pθ(z | various trade-offs in terms of the learned representation.
In practice, the ï¬rst term in (1) can be estimated from samples z(i) x(i)) and gradients are | backpropagated through the sampling operation using the reparametrization trick [25, Section 2.3], enabling minimization of (1) via minibatch-stochastic gradient descent (SGD). Depending on the x) the second term can either be computed in closed form or estimated from sam- choice of qÏ(z | (µÏ(x), diag(ÏÏ(x))), where µÏ(x) and ÏÏ(x) are de- ples. For the usual choice of qÏ(z terministic functions parametrized as neural networks, and p(z) = (0, I) for which the KL-term in (1) can be computed in closed form (more complicated choices of p(z) rarely allow closed form computation). To this end, we will brieï¬y discuss two ways in which one can measure distances between distributions. We will focus on intuition behind these techniques and provide pointers to detailed expositions.
Adversarial density-ratio estimation. Given a convex function f for which f (1) = 0, the f - divergence between px and py is deï¬ned as
Dj (pzllPy) = (22) putea Py(2)
For example, the choice f(t) = tlogt corresponds to Df(pz||py) = Dxx(pz||py). Given samples from p, and p, we can estimate the f-divergence using the density-ratio trick [32, 33], popularized recently through the generative adversarial network (GAN) framework [34]. The trick is to express Pz, and py as conditional distributions, conditioned on a label c ⬠{0,1}, and reduce the task to binary classification. In particular, let p,(a) = p(x|¢ = 1), py(x) = p(ale = 0), and consider a discriminator S,, trained to predict the probability that its input is a sample from distributions p, rather than py, i.e, predict p(c = 1|2). The density ratio can be expressed as pal) _ plrle=1) _ ple= Ix) Sale) py) p(wle=0) ple=Ojx) ~ 1âS,(2)â (2)
where the second equality follows from Bayesâ rule under the assumption that the marginal class N i=1 from px and a trained classiï¬er probabilities are equal. As such, given N i.i.d. samples
{
}
4
Sη one can estimate the KL-divergence by simply computing
N ; 1 Si, (a Pra(vels) * 57 Yo lo8 (Ss
â
As a practical alternative, some approaches replace the KL term in (1) with an arbitrary divergence (e.g., maximum mean discrepancy). Note, however, that the resulting objective does not necessarily lower-bound the marginal log-likelihood of the data.
Maximum mean discrepancy (MMD) [35]. Intuitively, the distances between distributions are computed as distances between mean embeddings of features as illustrated in Figure 2b. More be the formally, let k : X â X corresponding reproducing kernel Hilbert space, induced by the feature mapping Ï : . Then, the MMD of distributions px(x) and py(y) is
py) = ||Ex~p, [e(2)] â Ey~n, (e()Illau- (3) = R¢ and y(x) = x, MMD reduces to the difference between ||p, â Hp, ||3- By choosing an appropriate mapping y one can
MMD(pz.,
For example, setting Â¥ = H = R¢ and y(x) = x, MMD reduces to the difference between the means, i.e., MMD(pz, py) = ||p, â Hp, ||3- By choosing an appropriate mapping y one can estimate the divergence in terms of higher order moments of the distribution.
MMD vs f -divergences in practice. The MMD is known to work particularly well with mul- tivariate standard normal distributions. It requires a sample size roughly on the order of the data dimensionality. When used as a regularizer (see Section 3), it generally allows for stable optimiza- tion. A disadvantage is that it requires selection of the kernel k and its bandwidth parameter. In contrast, f -divergence estimators based on the density-ratio trick can in principle handle more com- plex distributions than MMD. However, in practice they require adversarial training which currently suffers from optimization issues. For more details consult [36, Section 3].
Deterministic autoencoders. Some of the methods we review rely on deterministic encoders and decoders. We denote by Dθ and EÏ the deterministic encoder and decoder, respectively. A popular objective for training an autoencoder is to minimize the L2-loss, namely
1 Laz(,¢) = 2 Eace)llle â Do(Es(2))|5)- (4)
Laz(,¢) = 2 Eace)llle â Do(Es(2))|5)- (4) If Ey and Dg are linear maps and the representation z is lower-dimensional than x, (4) corresponds to principal component analysis (PCA), which leads to z with decorrelated entries. Furthermore, we obtain (4) by removing the Dx,-term from Lyag in (1) and using a deterministic encoding distribution q4(z|x) and a Gaussian decoding distribution pg(z|z). Therefore, the major difference between Lag and Lyag is that Caz does not enforce a prior distribution on the latent space (e.g., through a Dx, -term), and minimizing Lag hence does not yield a generative model.
# L
# 3 Regularization-based methods
A classic approach to enforce some meta-prior on the latent representations z augment (approximate) posterior qÏ(z) = E x)] = 1 N | work can be subsumed into an objective of the form
VAE(θ, Ï) + λ1E Ëp(x)[R1(qÏ(z x))] + λ2R2(qÏ(z)),
# L
|
where R1 and R2 are regularizers and λ1, λ2 > 0 the corresponding weights. Firstly, we note a key difference between regularizers R1 and R2 is that the latter depends on the entire data set through qÏ(z). In principle, this prevents the use of mini-batch SGD to solve (5). In practice, however, one can often obtain good mini-batch-based estimates of R2(qÏ(z)). Secondly, the regularizers bias VAE towards a looser (larger) upper bound on the negative marginal log-likelihood. From this perspective it is not surprising that many approaches yield a lower reconstruction quality (which typically corresponds to a larger negative log-likelihood). For deterministic autoencoders, there is no such concept as an aggregated posterior, so we consider objectives of the form AE(θ, Ï) + λ1E
5
(5)
Table 2: Overview over different choices of the regularizers Ri(qg(z|a)) and R2(qg(z)). The learning objective is specified in (5). Most approaches use a multivariate standard normal distribu- tion as prior (see Table 3 in the appendix for more details). The last column (Y) indicates whether supervision is used: (v) indicates that labels are required, while (O) indicates labels can optionally be used for (semi-) supervised learning. Note that some of the regularizers are simplified.
WORK L. Ry Ro Y B-VAE [2] VAE Dxi(4¢(z|2)|[p(z)) VIB [9] VAE Dxt(qa(z|2)|Ip(2)) oO PixelGAN-AE[18] VAE âJq, (232) (e) InfoVAE [5] VAE Dxx(95(2|2)||p(z)) Dxx(qe(z)|Ip(2)) Info. dropout [10] VAE Dx (qo(z|z)||p(z)) TC(qa(z)) HFVAE [8] VAE âI,,(2;2) Rg (46(2)) + Naeg Re (a6(2)) FactorVAE [3, 4] VAE TC(qo(z)) DIP-VAE [6] VAE |Cov,,,(z)[z] â Tl? HSIC-VAE [7] VAE HSIC(qs (ze), 4o(2G2)) ie) VFAE [13] VAE MMD(q5(z|s = 0),g(z|s=1)) Vv DC-IGN [11] VAE v FaderNet. [12]; (37 AE âEy,2,y)[log Pu(1 â y|EBo(2))| v AAE/WAE [22,36] AE Dys(Eo(2)||p(z)) Oo
In this section, we ï¬rst review regularizers which can be computed in a fully unsupervised fashion (some of them optionally allow to include partial label information). Then, we turn our attention to regularizers which require supervision.
# 3.1 Unsupervised methods targeting disentanglement and independence
Disentanglement is a critical meta-prior considered by Bengio et al. [1]. Namely, assuming the data is generated from a few statistically independent factors, uncovering those factors should be extremely useful for a plethora of downstream tasks. An example for (approximately) independent factors underlying the data are class, stroke thickness, and rotation of handwritten digits in the MNIST data set. Other popular data sets are the CelebA face data set [38] (factors involve, e.g., hair color and facial attributes such as glasses), and synthetic data sets of geometric 2D shapes or rendered 3D shapes (e.g., 2D Shapes [2], 3D Shapes [3], 3D Faces [39], 3D Chairs [40]) for which the data generative process and hence the ground truth factors are known (see Figure 4 for an example).
The main idea behind several recent works on disentanglement is to augment the VAE loss with regularizers which encourage disentanglement of the latent variables z. Formally, assume that the data x x), and VAE such that the inference possibly conditionally dependent factors w. The goal is to augment model qÏ(z
x) learns to predict v and hence (partially) invert the data-generative process. |
Metrics. Disentanglement quality of inference models is typically evaluated based on ground truth factors of variation (if available). Speciï¬cally, disentanglement metrics measure how predictive the individual latent factors are for the ground-truth factors, see, e.g., [2, 3, 6, 41, 4, 42]. While many authors claim that their method leads to disentangled representations, it is unclear what the proper notion of disentanglement is and how effective these methods are in the unsupervised setting (see [43] for a large-scale evaluation). We therefore focus on the concept motivating each method rather than claims on how well each method disentangles the factors underlying the data.
# 3.1.1 Reweighting the ELBO: β-VAE
Higgins et al. [2] propose to weight the second term in (1) (henceforth referred to as the DKL-term) by a coefï¬cient β > 1,3 which can be seen as adding a regularizer equal to the DKL-term with
2Lample et al. [12], Hadad et al. [37] do not enforce a prior on the latent distribution and therefore cannot generate unconditionally.
3Higgins et al. [2] also explore 0 < β < 1 but discovers that this choice does not lead to disentanglement.
6
---- mm ~ [Fetas(2)) (Dec (aoCelocen\ [Dss(a(2)\le(2))] > Ro(as(2)+% Daeg Ro as(2))] 7 i ~S 2 } S [HSIC(ao(za,),40(2cs))| [{ICovegcel2] = 11l#] [MMD(qo(z1s = 0), ao(els = 1) |
# TC(qÏ(z))
# Divergence-based regularizers of qÏ(z)
# Moment-based regularizers of qÏ(z)
Figure 3: Schematic overview over different regularizers. Most approaches focus on regularizing the aggregate posterior and differ in the way the disagreement with respect to a prior is measured. More details are provided in Table 2 and an in-depth discussion in Section 3.
coefï¬cient λ1 = β VAE
1 > 0 to â Lβ-VAE(θ, Ï) =
VAE L VAE(θ, Ï) + λ1E Ëp(x)[DKL(qÏ(z
Lavaz(0,$) = Lvaz(0,) + AE p(x) [Dex (qo(z|x)||p(2))]- (6) This type of regularization encourages q4(z|x) to better match the factorized prior p(z), which in turn constrains the implicit capacity of the latent representation z ~ qq(z|x) and encourages it be factorized. Burgess et al. [44] provide a through theoretical analysis of 3-VAE based on the information bottleneck principle [45]. Further, they propose to gradually decrease the regularization strength until good quality reconstructions are obtained as a robust procedure to adjust the tradeoff between reconstruction quality and disentanglement (for a hard-constrained variant fo 3-VAE).
# 3.1.2 Mutual information of x and z: FactorVAE, β-TCVAE, InfoVAE
Kim and Mnih [3], Chen et al. [4], Zhao et al. [5] all propose regularizers motivated by the following decomposition of the second term in (1) Epc) [Dux (o(2|2) |P(@)] = Lao (a 2) + Di Go(2)|IP(2)), (7)
(7) the distribution qÏ(x, z) = x(i))δx(i) (x). The decomposition (7) was ï¬rst derived by Hoffman
|
where IqÏ(x; z) is the mutual qÏ(z and Johnson [46]; an alternative derivation can be found in Kim and Mnih [3, Appendix C].
FactorVAE. Kim and Mnih [3] observe that the regularizer in Cg-vaz encourages qy(z) to be factorized (assuming p(z) is a factorized distribution) by penalizing the second term in (7), but discourages the latent code to be informative by simultaneously penalizing the first term in (7). To reinforce only the former effect, they propose to regularize Cyaz with the total correlation TC(qs(z)) = Dxt(qo(z)|| I]; ae (23) of ¢¢(z)âa popular measure of dependence for multiple random variables [47]. The resulting objective has the form
(8) where the last term is the total correlation. To estimate it from samples, Kim and Mnih [3] rely on the density ratio trick [32, 33] which involves training a discriminator (see Section 2).
# L
# L
8-TCVAE. Chen et al. [4] split up the second term in (7) as Dxx(p(z)|lqg(z)) Der (de(2) ITT go(23)) + D521 Dex (ao(z;)|lp(2;)) and penalize each term individually La-rcvaz(9, 6) = Lvaz(8, ) + Avdg, (2; 2) + A2TC(qe(z)) + AS Ss Dx j However, they set A; = , = 0 by default, effectively arriving at the same objective in (8). In contrast to FactorVAE, the TC-term is estimated using importance sampling.
+ D521 Dex (ao(z;)|lp(2;)) and penalize each term individually Lvaz(8, ) + Avdg, (2; 2) + A2TC(qe(z)) + AS Ss Dx (do(2;)\p(Z5))-
=
# as FactorVAE
7
Figure 4: Latent space traversals obtained by varying one latent variable while holding all the others ï¬xed (from left to right), for β-VAE [2] (with β = 16) trained on the 3D Shapes data set [3]. The variables shown correspond to the object shape, wall color, object height, and object color. Note that the other latent variables simultaneously vary multiple latent factors or are inactive for this model.
InfoVAE. Zhao et al. [5] start from an alternative way of writing
# Cvar
L Ëp(x)[DKL(qÏ(x
p(z)) + E z))], (9)
Lyan(9, ¢) = Dxi(9o(2)|lP(2))
# L
|
z) = qÏ(x, z)/p(z). Similarly to [3], to encourage disentanglement, they propose to where qÏ(x | reweight the ï¬rst term in (9) and to encourage a large mutual information between z x) and x | by adding a regularizer proportional to IqÏ(x; z) to (9). Further, by rearranging terms in the resulting objective, they arrive at
VAE(θ, Ï) + λ1E InfoVAE(θ, Ï) = Ëp(x)[DKL(qÏ(z (10)
(Â¥s(2|2)||p(2))] + A2Dxi(qa(2)||p(z)).
# L
# L
|
For tractability reasons, Zhao et al. [5] propose to replace the last term in (10) by other divergences such as Jensen-Shannon divergence (implemented as a GAN [34]), Stein variational gradient [48], or MMD [35] (see Section 2).
DIP-VAE. Kumar et al. [6] suggest matching the moments of the aggregated posterior qÏ(z) to a VAE to encourage disentanglement multivariate standard normal prior p(z) during optimization of L qÏ(z). Speciï¬cally, they propose to match the covariance of qÏ(z) and of the latent variables z qÏ(z))
Lpwvae(9; ?) = Lvaz(9, &) + d2 So ( Covy,(2) [2] Yee + 2D ((Covaecs [zee â1)?. GD ke
# L
(µÏ(x), diag(ÏÏ(x))), CovqÏ(z)[z] = x) = | i=1 diag(ÏÏ(xi))+Cov Ëp(x)[µÏ(x)], ÏÏ(x) only contributes to the diagonal of CovqÏ(z)[z], Kumar DIP-VAE where CovqÏ(z)[z] in (11) is replaced by Cov Ëp(x)[µÏ(x)].
# L
# 3.1.3 Independence between groups of latents: HSIC-VAE, HFVAE
Groups/clusters, potentially involving hierarchies, is a structure prevalent in many data sets. It is therefore natural to take this structure into account when learning disentangled representations, as seen next.
HSIC-VAE. Lopez et al. [7] leverage the Hilbert-Schmidt independence criterion (HSIC) [49] (cf. Section A) to encourage independence between groups of latent variables, as
(12) where zG = zk}kâG (an estimator of HSIC is deï¬ned in (21) in Appendix A). This is in contrast to the methods [3â6] penalizing statistical dependence of all individual latent variables. In addition to controling (in)dependence relations of the latent variables, the HSIC can be used to remove sen- sitive information, provided as labels s with the training data, from latent representation by using the regularizer HSIC(qÏ(z), p(s)) (where p(s) is estimated from samples) as extensively explored by Louizos et al. [13] (see Section 3.4).
8
HFVAE. Starting from the decomposition (7), Esmaeili et al. [8] hierarchically decompose the DKL-term in (7) into a regularization term of the dependencies between groups of latent variables nG k=1 and regularization of the dependencies between the random variables in each group G Gk. Reweighting different regularization terms allows to encourage different degrees of intra and inter-group disentanglement, leading to the following objective:
# L
6) = Lvaz â Arlq,, (25 2) +22 (Enc [los sl + Das(ao(2)l TL as(+o))) P(2a) + (Bete os Thea P(e) GEG ] + Pevlast ol TT aol) a3) keG
# HFVAE(θ, Ï) =
Here, \; controls the mutual information between the data and latent variables, and A» and 5 determine the regularization of dependencies between groups and within groups, respectively, by penalizing the corresponding total correlation. Note that the grouping can be nested to introduce deeper hierarchies.
# 3.2 Preventing the latent code from being ignored: PixelGAN-AE and VIB
PixelGAN-AE. Makhzani and Frey [18] argue that, if pθ(x z) is not too powerful (in the sense that it cannot model the data distribution unconditionally, i.e., without using the latent code z) the term IqÏ(x; z) in (7) and the reconstruction term in (1) have competing effects: A small mutual information IqÏ(x; z) makes reconstruction of x(i) from qÏ(z z), leading to a large reconstruction error. Conversely, a small reconstruction error requires the code z to be informative and hence IqÏ(x; z) to be large. In contrast, if the decoder is powerful, e.g., a condi- tional PixelCNN [50], such that it can obtain a small reconstruction error without relying on the latent code, the mutual information and reconstruction terms can be minimized largely independent, which prevents the latent code from being informative and hence providing a useful representation (this issue is known as the information preference property [19] and is discussed in more detail in Section 4). In this case, to encourage the code to be informative Makhzani and Frey [18] propose to drop the IqÏ(x; z) term in (7), which can again be seen as a regularizer
PixelGAN-AE(θ, Ï) = VAE(θ, Ï) IqÏ(x; z). (14)
L The DKL term remaining in (7) after removing IqÏ is approximated using a GAN. Makhzani and Frey [18] show that relying on PixelGAN-AE a powerful PixelCNN decoder can be trained while keeping the latent code informative. Depending on the choice of the prior (categorical or Gaussian), the latent code picks up information of different levels of abstraction, for example the digit class and writing style in the case of MNIST.
VIB, information dropout. Alemi et al. [9] and Achille and Soatto [10] both derive a variational approximation of the information bottleneck objective [10], which targets learning a compact repre- sentation z of some random variable x that is maximally informative about some random variable y. In the special case, when y = x, the approximation derived in [9] one obtains an objective equivalent to
£e-var in (1) (c.f. [9, Appendix B] for a discussion), whereas doing so for [10] leads to Lintodrop(9; 6) = Lvan(9, 6) + AE p(x) [Dar (qo(z|2)||p(z))] + A2TC(ge(z))-
# L
# L
|
Achille and Soatto [10] derive (more) tractable expressions for (15) and establishe a connection to dropout for particular choices of p(z) and qÏ(z x). Alemi et al. [30] propose an information- | theoretic framework studying the representation learning properties of VAE-like models through a rate-distortion tradeoff. This framework recovers β-VAE but allows for a more precise navigation of the feasible rate-distortion region than the latter. Alemi and Fischer [51] further generalize the framework of [9], as discussed in Section 7.
# 3.3 Deterministic encoders and decoders: AAE and WAE
Adversarial Autoencoders (AAEs) [22] turn a standard autoencoder into a generative model by im- posing a prior distribution p(z) on the latent variables by penalizing some statistical divergence Df
9
(15)
between p(z) and qÏ(z) using a GAN. Speciï¬cally, using the negative log-likelihood as reconstruc- tion loss, the AAE objective can be written as qÏ(z|x)[ In all experiments in [22] encoder and decoder are taken to be deterministic, i.e., p(x x) are replaced by Dθ and EÏ, respectively, and the negative log-likelihood in (16) is replaced with the standard autoencoder loss AE. The advantage of implementing the regularizer λ2Df using a GAN is that any p(z) we can sample from, can be matched. This is helpful to learn representations: For example for MNIST, enforcing a prior that involves both categorical and Gaussian latent variables is shown to disentangle discrete and continuous style information in unsupervised fashion, in the sense that the categorical latent variables model the digit index and continuous random variables the writ- ing style. Disentanglement can be improved by leveraging (partial) label information, regularizing the cross-entropy between the categorical latent variables and the label one-hot encodings. Partial label information also allows to learn a generative model for digits with a Gaussian mixture model prior, with every mixture component corresponding to one digit index.
+ A2D¢(a(2)||p(2))-
# L
â
|
# 3.4 Supervised methods: VFAEs, FaderNetworks, and DC-IGN
VFAE. Variational Fair Autoencoders (VFAEs) [13] assume a likelihood of the form pg(x|z, s), where s models (categorical) latent factors one wants to remove (for example sensitive information), and z models the remaining latent factors. By using an approximate posterior of the form qy(z|x, s) and by imposing factorized prior p(z)p(s) one can encourage independence of z ~ qg(z|x, s) from s. However, z might still contain information about s, in particular in the (semi-) supervised set- ting where z encodes label information y that might be correlated with s, and additional factors of variation 2â, i.e., 2 ~ po(z|zâ, y) (this setup was first considered in [17]; see Section 4). To miti- gate this issue, Louizos et al. [13] propose to add an MMD-based regularizer to Cvyaz, encouraging independence between q(z|s = k) and q(z|s = kâ), ie.,
= kâ), ie.,
|
K Lyrar(9, 9) = Lvaz + A2 SO MMD (qo (2s = ),9s(2|s8 = 1), (17) (=2
= 4) = Di.
x(i), s(i)). To reduce the computational com- where qÏ(z plexity of the MMD the authors propose to use random Fourier features [52]. Lopez et al. [7] also consider the problem of censoring side information, but use the HSIC regularizer instead of MMD. In contrast to MMD, the HSIC is amenable to side information s of a non-categorical distribution. Furthermore, it is shown in Lopez et al. [7, Appendix E] that VFAE and HSIC are equivalent to censoring in case s is a binary random variable.
Fader Networks. A supervised method similar to censoring outlined above was explored by Lam- N x(i) ple et al. [12] and Hadad et al. [37]. Given data i=1 (e.g., images of faces) and corresponding } { N i=1 (e.g., facial attributes such as hair color or whether glasses are binary attribute information } K), the encoder of a FaderNetwork [12] is adversarially present; encoded as binary vector in } trained to learn a feature representation z = EÏ(x) invariant to the attribute values, and the decoder Dθ(y, z) reconstructing the original image from z and y. The resulting model is able to manipulate the attributes of a testing image (without known attribute information) by setting the entries of y at the input of Dθ as desired. In particular, it allows for continuous control of the attributes (by choosing non-integer attribute values in [0, 1]).
To make z = EÏ(x) invariant to y a discriminator PÏ(y z) predicting the probabilities of the | attribute vector y from z is trained concurrently with EÏ, Dθ to maximize the log-likelihood EÏ(x))]. This discriminator is used adversarially in the training of L | EÏ, Dθ encouraging EÏ to produce a latent code z from which it is difï¬cult to predict y using PÏ as
1 lit â Doly, Bola))I3 â di log Py ylBa(a))]}, 8) Lrader(0, 0) = Epca,y)
# L
i.e., the regularizer encourages EÏ to produce codes for which PÏ assigns a high likelihood to incorrect attribute values.
Hadad et al. [37] propose a method similar to FaderNetworks that first separately trains an encoder z' = Eâ,,(«) jointly with a classifier to predict y. The code produced by Eâ,, is then concatenated
10
> 2 A, ~~ ZO eens. Po(2|21, 22) x) (2)
> WORK Enc Dec P(z) Y 2 A, Taaaervak (15) H oH WwW ~~ ZO Variational LadderVAE[16] H H N Pixel VAE [14] H H+tA N eens. Semi-supervised VAE [17] H N+C VÂ¥ Po(2|21, 22) x) VLAE [19] A NIL (2)
# (a) Hierarchical encoder + PixelCNN decoder.
(b) Factorizations used by different models.
Figure 5: Figure (a) shows an example VAE with hierarchical encoding distribution and PixelCNN decoding distribution. Figure (b) gives an overview of factorizations used by different models. We indicate the structure of the encoding (ENC) and decoding (DEC) distribution as follows: (H) hierarchical, (A) autoregressive, (default) fully connected or convolutional feed-feed forward neural network). We indicate the prior distribution as follows: ( ) C categorical, (M) mixture distribution, (G) graphical model, (L) learned prior. The last column (Y) indicates whether supervision is used.
with that produced by a second encoder E7j,, and fed to the decoder Dg. E%,, and Do are now jointly trained for reconstruction (while keeping ¢â fixed) and the output of Eby is regularized as in (18) to ensure that 2â = E%, and z' = Ey are disentangled. While the model from [37] does not allow fader-like control of attributes, it provides a representation that facilitates swapping and interpolation of attributes, and can be use for retrieval. Note that in contrast to all previously discussed methods, both of these techniques do not provide a mechanism for unconditional generation.
DC-IGN. Kulkarni et al. [11] assume that the training data is generated by an interpretable, com- pact graphics code and aim to recover this code from the data using a VAE. Speciï¬cally, they con- sider data sets of rendered object images for which the underlying graphics code consists of extrinsic latent variablesâobject rotation and light source positionâand intrinsic latent variables, modeling, e.g., object identity and shape. Assuming supervision in terms of which latent factors are active (relative to some reference value), a representation disentangling intrinsic and the different extrinsic latent variables is learned by optimizing VAE on different types of mini-batches (which can be seen as implicit regularization): Mini-batches containing images for which all but one of the extrinsic factors are ï¬xed, and mini-batches containing images with ï¬xed extrinsic factors, but varying intrin- sic factors. During the forward pass, the latent variables predicted by the encoder corresponding to ï¬xed factors are replaced with the mini-batch average to force the decoder to explain all the variance in the mini-batch through the varying latent variables. In the backward step, gradients are passed through the latent space ignoring the averaging operation. This procedure allows to learn a disentan- gled representation for rendered 3D faces and chairs that allow to control extrinsic factors similarly as in a rendering engine. The models generalize to unseen object identities.
# 4 Factorizing the encoding and decoding distributions
Besides regularization, another popular way to impose a meta-prior is factorizing the encoding and/or decoding distribution in a certain way (see Figure 5 for an overview). This translates di- rectly or indirectly into a particular choice of the model class/network architecture underlying these distributions. Concrete examples are hierarchical architectures and architectures with constrained receptive ï¬eld. This can be seen as hard constraints on the learning problem, rather than regulariza- tion as discussed in the previous section. While this is not often done in the literature, one could obviously combine a speciï¬c structured model architecture with some regularizer, for example to learn a disentangled hierarchical representation. Choosing a certain model class/architecture is not only interesting from a representation point of view, but also from a generative modeling perspec- tive. Indeed, certain model classes/architectures allow to better optimize VAE ultimately leading to a better generative model.
11
Semi-supervised VAE. Kingma et al. [17] harness the VAE framework for semi-supervised learn- ing. Speciï¬cally, in the âM2 modelâ, the latent code is divided into two parts z and y where y is (typically discrete) label information observed for a subset of the training data. More speciï¬cally, the inference model takes the form qÏ(z, y x), i.e., there is a hierarchy between | y and z. During training, for samples x(i) for which a label y(i) is a available, the inference model is VAE is adapted accordingly, and for samples without label, conditioned on y (i.e., qÏ(z the label is inferred from qÏ(z, y x). This model hence effectively disentangles the latent code into | two parts y and z and allows for semi-supervised classiï¬cation and controlled generation by holding one of the factors ï¬xed and generating the other one. This model can optionally be combined with an additional model learned in unsupervised fashion to obtain an additional level of hierarchy (termed âM1 + M2 modelâ in [17]).
VLAE. Analyzing the VAE framework through the lens of Bits-Back coding [53, 54], Chen et al. VAE (1) encour- [19] identify the so-called information preference property: The second term in x) to only store the information that cannot be modeled locally ages the latent code z | (i.e., unconditionally without using the latent code) by the decoding distribution pθ(x z). As a con- | sequence, when the decoding distribution is a powerful autoregressive model such as conditional PixelRNN [55] or PixelCNN [50] the latent code will not be used to encode any information and x) will perfectly match the prior p(z), as previously observed by many authors. While this not qÏ(z | necessarily an issue in the context of generative modeling (where the goal is to maximize testing log-likelihood), it is problematic from a representation learning point of view as one wants the latent x) to store meaningful information. To overcome this issue, Chen et al. [19] propose code z | to adapt the structure of the decoding distribution pθ(x z) such that it cannot model the information | one would like z to store, and term the resulting model variational lossy autoencoder (VLAE). For example, to encourage z to capture global high-level information, while letting pθ(x z) model local information such as texture, one can use an autoregressive decoding distribution with a limited local receptive ï¬eld pθ(x z, xW (j)), where W (j) is a window centered in pixel j, that cannot model long-range spatial dependencies. Besides the implications of the information prefer- ence property for representation learning, Chen et al. [19] also explore the orthogonal direction of using a learned prior based on autoregressive ï¬ow [56] to improve generative modeling capabilities of VLAE.
PixelVAE. PixelVAEs [14] use a VAE with feed-forward convolutional encoder and de- coder, combining the decoder with a (shallow) conditional PixelCNN [50] to predict the out- they employ a hierarchical encoder and decoder structure put probabilities. the encoding and decoding distri- with multiple levels of latent variables. butions are factorized as qÏ(z1, . . . , zL| x) and pθ(x, z1, . . . , zL) = x) = qÏ(z1| zL)p(zL). Here, z1, . . . , zL are groups of latent variables (rather z2) . . . pθ(zLâ1| pθ(x x) are parametric distributions (typically Gaussian with than individual entries of z), the qÏ(zj| diagonal covariance matrix) whose parameters are predicted from different layers of the same CNN (with layer index increasing in j), pθ(x z1) is a conditional PixelCNN, and the factors in zL) are realized by a feed-forward convolutional networks. From a represen- pθ(z1| tation learning perspective, this approach leads to the extraction of high- and low-level features on one hand, allowing for controlled generation of local and global structure, and on the other hand results in better clustering of the codes according to classes in the case of multi-class data. From a generative modeling perspective, this approach obtains testing likelihood competitive with or better than computationally more complex (purely autoregressive) PixelCNN and PixelRNN models. Only L = 2 stochastic layers are explored experimentally.
LadderVAE. In contrast to PixelVAEs, Ladder VAEs (LVAEs) [15] perform top-down inference, zj+1), while using i.e., the encoding distribution is factorized as qÏ(z x) = qÏ(zL| | z) as PixelVAE (although employing a simple factorized Gaussian the same factorization for pθ(x | distribution for pθ(x zj+1) are parametrized Gaussian dis- tributions whose parameters are inferred top-down using a precision-weighted combination of (i) bottom-up predictions from different layers of the same feed-forward encoder CNN (similarly as in PixelVAE) with (ii) top-down predictions obtained by sampling from the hierarchical distribution zL)p(zL) (see [15, Figure 1b] for the corresponding graphical model pθ(z) = pθ(z1| representation). When trained with a suitable warm-up procedure, LVAEs are capable of effectively
12
pe(2|z) Za Pan Fag) a * pile) v\ « â|. pal?)
pe(2|z) WORK Enc Dec p(z) Y Za SVAE [24] G/M VQ-VAE [20] (A) CL Pan Fag) [21] Gv a * JointVAE [23] C+N pile) v\ « â|. pal?) (b) Priors employed by different models.
(b) Priors employed by different models.
(a) VAE with a multimodal continuous or discrete prior.
Figure 6: Figure (a) shows an example of a VAE with with a multimodal continuous or discrete prior (each prior gives rise to a different model). Figure (b) gives an overview of the priors employed by different models. We indicate the structure of the encoding (ENC) and decoding (DEC) distribution as follows: (H) hierarchical, (A) autoregressive, (default) fully connected or convolutional feed-feed forward neural network. We indicate the prior distribution as follows: (Mâ) multivariate standard Normal, (C) categorical, (M) mixture distribution, (G) graphical model, (L) learned prior. The last column (Y) indicates whether supervision is used: (V ) indicates that labels are required.
learning deep hierarchical latent representations, as opposed to hierarchical VAEs with bottom-up inference models which usually fail to learn meaningful representations with more than two levels (see [15, Section 3.2]).
Variational Ladder AutoEncoders. Yet another approach is taken by Variational Ladder autoen- coders (VLaAEs) [16]: While no explicit hierarchical factorization of p(z) in terms of the zj is z2, . . . zL) is implemented as a feed-forward neural network, implicitly deï¬ning a assumed, pθ(z1| top-down hierarchy among the zj by taking the zj as inputs on different layers, with the layer in- z1) is set to a ï¬xed variance factored Gaussian whose mean vector is dex proportional to j. pθ(x | x) the same factorization and a similar imple- predicted from z1. For the encoding distribution qÏ(z | z2, . . . zL) rather mentation as that of PixelVAE is used. Implicitly encoding a hierarchy into pθ(z1| than explicitly as by PixelVAE and LVAE avoids the difï¬culties described by [15] involved with training hierarchical models with more than two levels of latent variables. Furthermore, Zhao et al. [16] demonstrate that this approach leads to a disentangled hierarchical representation, for instance separating stroke width, digit width and tilt, and digit class, when applied to MNIST.
Finally, Bachman [57] and Kingma et al. [56] explore hierarchical factorizations/architectures mainly to improve generative modeling performance (in terms of testing log-likelihood), rather than exploring it from a representation learning perspective.
# 5 Structured prior distribution
Instead of choosing the encoding distribution, one can also encourage certain meta-priors by directly choosing the prior distribution p(z) of the generative model. For example, relying on a prior involv- ing discrete and continuous random variables encourages them to model different types of factors, such as the digits and the writing style, respectively, in the MNIST data set, which can be seen as a form of clustering. This is arguably the most explicit way to shape a representation, as the prior directly acts on its distribution.
# 5.1 Graphical model prior
SVAE. One of the ï¬rst attempts to learn latent variable models with structured prior distributions using the VAE framework is [24]. Concretely, the latent distribution p(z) with general graphical model structure can capture discrete mixture models such as Gaussian mixture models, linear dy- namical systems, and switching linear dynamical systems, among others. Unlike many other VAE- based works, Johnson et al. [24] rely on a fully Bayesian framework including hyperpriors for the likelihood/decoding distribution and the structured latent distribution. While such a structured p(z) allows for efï¬cient inference (e.g., using message passing algorithms) when the likelihood is an ex- ponential family distribution, it becomes intractable when the decoding distribution is parametrized
13
through a neural network as commonly done in the VAE framework, the reason for which the latter includes an approximate posterior/encoding distribution. To combine the tractability of conjugate graphical model inference with the ï¬exibility of VAEs, Johnson et al. [24] employ inference models that output conjugate graphical model potentials [58] instead of the parameters of the approximate posterior distribution. In particular, these potentials are chosen such that they have a form conjugate to the exponential family, hence allowing for efï¬cient inference when combined with the structured p(z). The resulting algorithm is termed structured VAE (SVAE). Experiments show that SVAE with a Gaussian mixture prior learns a generative model whose latent mixture components reï¬ect clusters in the data, and SVAE with a switching linear dynamical system prior learns a representation that reï¬ects behavior state transitions in motion recordings of mouses.
Narayanaswamy et al. [21] consider latent distributions with graphical model structure similar to [24], but they also incorporate partial supervision for some of the latent variables as [17]. However, unlike Kingma et al. [17] which assumes a posterior of the form qÏ(z, y x), | they do not assume a speciï¬c factorization of the partially observed latent variables y and the unob- x)), and no particular served ones z (neither for qÏ(z, y x) nor for the marginals qÏ(z x) and qÏ(y | | | distributional form of qÏ(z x) with arbitrary de- x). To perform inference for qÏ(z, y x) and qÏ(y | | pendence structure, Narayanaswamy et al. [21] derive a new Monte Carlo estimator. The proposed approach is able to disentangle digit index and writing style on MNIST with partial supervision of the digit index (similar to [17]). Furthermore, this approach can disentangle identity and lighting direction of face images with partial supervision assuming the product of categorical and continuous distribution, respectively, for the prior (using the the Gumbel-Softmax estimator [59, 60] to model the categorical part in the approximate posterior).
# 5.2 Discrete latent variables
JointVAE. JointVAE [23] equips the $-VAE framework with heterogeneous latent variable dis- tributions by concatenating continuous latent variables z with discrete ones c for improved disen- tanglement of different types of latent factors. The corresponding approximate posterior is fac- torized as gg (c|x)qg(z|x) and the Gumbel-Softmax estimator [59, 60] is used to obtain a differ- entiable relaxation of the categorical distribution qg(c|x). The regularization strength A, in the (a constrained variant of) 3-VAE objective (6) is gradually increased during training, possibly as- signing different weights to the regularization term corresponding to the discrete and continuous random variables (the regularization term in (6) decomposes as Dxz(q9(2|)¢¢(¢|x)||[p(z)p(©)) = Dx (4e(2|2)||p(z)) + Dex(qe(c|x)||\p(c))). Numerical results (based on visual inspection) show that the discrete latent variables naturally model discrete factors of variation such as digit class in MNIST or garment type in Fashion-MNIST and hence disentangle such factors better than models with continuous latent variables only.
VQ-VAE. van den Oord et al. [20] realize a VAE with discrete latent space structure using vector quantization, termed VQ-VAE. Each latent variable zj is taken to be a categorical random variable x) is assumed deterministic. Each category with K categories, and the approximate posterior qÏ(zj| RD. The embedding operation induces an additional is associated with an embedding vector ek â latent space dimension of size D. For example, if the latent representation z is an M 1 feature map, the embedded latent representation Ëz is a M D feature map. The distribution x) is implemented using a deterministic encoder network EÏ(x) with D-dimensional output, qÏ(Ëzj| quantized w.r.t. the embedding vectors
quantized w.r.t. the embedding vectors {e; }/<_,. In summary, we have qo(Zj = ex|z) = { The embeddings e;, can be learned individually for each latent variable z;, or shared for the entire latent space. Assuming a uniform prior p(z), the second term in Lyag (1) evaluates to log K as a consequence of q4(z|2) being deterministic and can be discarded during optimization. To back- propagate gradients through the non-differentiable operation (19) a straight-through type estimator [61] is used. The embedding vectors e;, which do not receive gradients as a consequence of using a straight-through estimator, are updated as the mean of the encoded points E,(aâ) assigned to the corresponding category k as in (mini-batch) k-means. 1 ifk = arg ming ||E4(z) â eg||, 0 otherwise. a9)
{e; { 1 0
VQ-VAE is shown to be competitive with VAEs with continuous latent variables in terms of testing likelihood. Furthermore, when trained on speech data, VQ-VAE learns a rudimentary phoneme-
14
level language model in a completely unsupervised fashion, which can be used for controlled speech generation and phoneme classiï¬cation.
Many other works explore learning (variational) autoencoders with (vector-)quantized latent repre- sentation with a focus on generative modeling [62, 63, 59, 60] and compression [64], rather than representation learning.
# 6 Other approaches
Early approaches. Early approaches to learn abstract representations using autoencoders include stacking single-layer autoencoders [65] to build deep architectures and imposing a sparsity prior to the latent variables [66]. Another way to achieve abstraction is to require the representation to be robust to noise. Such a representation can be learned using denoising autoencoders [67], i.e., autoencoders trained to reconstruct clean data points from a noisy version. For a broader overview over early approaches we refer to [1, Section 7].
Sequential data. There is a considerable number of recent works leveraging (variational) autoen- coders and the techniques similar to those outlined in Sections 3â5 to learn representations of se- quences. Yingzhen and Mandt [68] partition the latent code of a VAE into subsets of time varying and time invariant variables (resulting in a particular factorization of the approximate posterior) to learn a representation disentangling content and pose/identity in video/audio sequences. Hsieh et al. [69] use a similar partition of the latent code, but additionally allow the model to decompose the input into different parts, e.g., modelling different moving objects in a video sequence. Somewhat related, Villegas et al. [70], Denton and Birodkar [71], Fraccaro et al. [72] propose autoencoder models for video sequence prediction with separate encoders disentangling the latent code into pose and content. Hsu et al. [73] develop a hierarchical VAE model to learn interpretable representa- tions of speech recordings. Fortuin et al. [74] combine a variation of VQ-VAE with self-organizing maps to learn interpretable discrete representations of sequences. Further, VAEs for sequences are also of great interest in the context of natural language processing, in particular with autoregressive encoders/decoders and discrete latent representations, see, e.g., [75â77] and references therein.
Using a discriminator in pixel space. An alternative to training a pair of probabilistic encoder z) to minimize a reconstruction loss is to learn Ï, θ by matching the qÏ(z | x)Ëp(x). To achieve this, adversarially learned inference joint distributions pθ(x (ALI) [78] and bidirectional GAN (BiGAN) [79] leverage the GAN framework, learning pθ(x z), x) jointly with a discriminator to distinguish between samples drawn from the two joint distri- qÏ(z | butions. While this approach yields powerful generative models with latent representations useful for downstream tasks, the reconstructions are less faithful than for autoencoder-based models. Li et al. [80] point out a non-identiï¬ability issue inherent with the distribution matching problem un- derlying ALI/BiGAN, and propose to penalize the entropy of the reconstruction conditionally on the code.
Chen et al. [81] augment a standard GAN framework [34] with a mutual information term between the generator output and a subset of latent variables, which proves effective in learning disentan- gled representations. Other works regularize the output of (variational) autoencoders with a GAN loss. Speciï¬cally, Larsen et al. [82], Rosca et al. [83] combine VAE with standard GAN [34], and Tschannen et al. [84] equip AAE/WAE with a Wasserstein GAN loss [85]. While Larsen et al. [82] investigate the representation learned by their model, the focus of these works is on improving the sample quality of VAE and AAE/WAE. Mathieu et al. [86] rely on a similar setup as [82], but use labels to learn disentangled representations.
Cross-domain disentanglement. Image-to-image translation methods [87, 88] (translating, e.g., semantic label maps into images) can be implemented by training encoder-decoder architectures to translate between two domains (i.e., in both directions) while enforcing the translated data to match the respective domain distribution. While this task as such does not a priori encourage learning of meaningful representation, adding appropriate pressure does: Sharing parts of the latent repre- sentation between the translation networks [27â29] and/or combining domain speciï¬c and shared translation networks [89] leads to disentangled representations.
15
# 7 Rate-distortion tradeoff and usefulness of representation
In this paper we provided an overview of existing work on autoencoder-based representation learning approaches. One common pattern is that methods targeting rather abstract meta-priors such as dis- entanglement (e.g., β-VAE [2]) were only applied to synthetic data sets and very structured real data sets at low resolution. In contrast, fully supervised methods, such as FaderNetworks [12], provide representations which capture subtle properties of the data, can be scaled to high-resolution data, and allow ï¬ne-grained control of the reconstructions by manipulating the representation. As such, there is a rather large disconnect between methods which have some knowledge of the downstream task and the methods which invent a proxy task based on a meta-prior. In this section, we consider this aspect through the lens of rate-distortion tradeoffs based on appropriately deï¬ned notions of rate and distortion. Figure 7 illustrates our arguments.
Rate-distortion tradeoff for unsupervised learning. It can be shown that models based purely on optimizing the marginal likelihood might be completely useless for representation learning. We will closely follow the elegant exposition from Alemi et al. [30]. Consider the quantities
H=- [vo log p(x) dx = E,2)[â log p(z)] D=~ ff v(eyolela) ogpo(el2) drde = By) (Ep, oi [-loepo(el2)] R= ff v{e)aolel0) tox Pp IP ands = Bycey(Dur(ao(2le) lo(2)) (2)
z)]]
where H corresponds to the entropy of the underlying data source, D the distortion (i.e., the re- construction negative log-likelihood), and R the rate, namely the average relative KL divergence between the encoding distribution and the p(z). Note that the ELBO objective is now simply (D + βR) for β-VAE). Alemi et al. [30] show that the ELBO = âL following inequality holds:
H D R.
â
â¤
Figure 7 shows the resulting rate-distortion curve from Alemi et al. [30] in the limit of arbitrary powerful encoders and decoders. The horizontal line (R, 0) corresponds to the setting where one is able to encode and decode the data with no distortion at a rate of H. The vertical line (0, D) corresponds to the zero-rate setting and by choosing a sufï¬ciently powerful decoder one can reach R achieves the same the distortion of H. A critical issue is that any point on the line D = H ELBO. As a result, models based purely on optimizing the marginal likelihood might be completely useless for representation learning [30, 90] as there is no incentive to choose a point with a high rate (corresponding to an informative code). This effect is prominent in many models employing powerful decoders which function close to the zero-rate regime (see Section 4 for details). As a solution, Alemi et al. [30] suggest to optimize the same model under a constraint on the desired rate Ï, namely to solve minÏ,θ D + . However, is this really enough to learn representations | â useful for a speciï¬c downstream task?
The rate-distortion-usefulness tradeoff. Here we argue that even if one is able to reach any desired rate-distortion tradeoff point, in particular targeting a representation with speciï¬c rate R, the learned representation might still be useless for a speciï¬c downstream task. This stems from the fact that
(i) it is unclear which part of the total information (entropy) is stored in z and which part is stored in the decoder, and
(ii) even if the information relevant for the downstream task is stored in z, there is no guarantee that it is stored in a form that can be exploited by the model used to solve the downstream task.
For example, regarding (i), if the downstream task is an image classiï¬cation task, the representation should store the object class or the most prominent object features. On the other hand, if the down- stream task is to recognize relative ordering of objects, the locations have to be encoded instead. Concerning (ii), if we use a linear model on top of the representation as often done in practice, the representation needs to have structure amenable to linear prediction.
16
realizable feasible infeasible 0 Tas y) R
(a) Rate-distortion (R-D) tradeoff of [30].
(b) R-Dy tradeoff for the supervised case.
âusefulnessâ 9 Me
(c) Rate-distortion-usefulness tradeoff.
Figure 7: Figure (a) shows the Rate-distortion (R-D) tradeoff from [30], where D corresponds to the reconstruction term in the (β-)VAE objective, and the rate to the KL term. Figure (b) shows a similar tradeoff for the supervised case considered in [10, 9]. The ELBO (R + D) does not reï¬ect the usefulness of the learned representation for an unknown downstream task (see text), as illustrated in Figure (c).
We argue that there is no natural way to incorporate this desiderata directly into the classic R- D tradeoff embodied by the ELBO. Indeed, the R-D tradeoff per se does not account for what information is stored in the representation and in what form, but only for how much.
Therefore, we suggest a third dimension, namely âusefulnessâ of the representation, which is or- thogonal to the R-D plane as shown in Figure 7. Consider two models M1 and M2 whose rates satisfy R1 > R2 and D1 < D2 and which we want to use for the (a priori unknown) downstream task y (say image classiï¬cation). It can be seen that M2 is more useful (as measured, for example, in terms of classiï¬cation accuracy) for y even though it has a smaller rate and and a larger distortion than M2. This can occur, for example, if the representation of M1 stores the object locations, but models the objects themselves with the decoder, whereas M1 produces blurry reconstructions, but learns a representation that is more informative about object classes.
As discussed in Sections 3, 4, and 5, regularizers and architecture design choices can be used to determine what information is captured by the representation and the decoder, and how it is mod- eled. Therefore, the regularizers and architecture not only allow us to navigate the R-D plane but simultaneously also the âusefulnessâ dimension of our representation. As usefulness is always tied to (i) a task (in the previous example, if we consider localization instead of classiï¬cation, M1 would be more useful than M2) and (ii) a model to solve the downstream task, this implies that one cannot guarantee usefulness of a representation for a task unless it is known in advance. Further, the better the task is known the easier it is to come up with suitable regularizers and network architectures, the
17
extreme case being the fully supervised one. On the other hand, if there is little information one can rely on a generic meta-prior that might be useful for many different tasks, but will likely not lead to a very good representation for all the tasks (recall that the label-based FaderNetwork [12] scales to higher-resolution data sets than β-VAE [2] which is based on a weak disentanglement meta-prior). How well we can navigate the âusefulnessâ dimension in Figure 7 (c) is thus strongly tied to the amount of prior information available.
A rate-distortion tradeoff for supervised learning. For arbitrary downstream tasks it is clear that it is hard to formalize the âusefulnessâ dimension in Figure 7. However, if we consider a subset of possible downstream tasks, then it may be possible to come up with a formalization. In particular, for the case where the downstream task is to reconstruct (predict) some auxiliary variable y, we formulate an R-D tradeoff similar to the one of Alemi et al. [30] for a fully supervised scenario involving labels, and show that in this case, the R-D tradeoff naturally reï¬ects the usefulness for the task at hand. Speciï¬cally, we rely on the variational formulation of the information bottleneck principle proposed by [10, 9]. Using the terminology of [10], the goal in supervised representation learning is to learn a minimal (in terms of code length) representation z of the data x that is sufï¬cient for a task y (in the sense that it contains enough information to predict y). This can be formulated using the information bottleneck (IB) objective [45] maxz I(y; z) βI(z; x), where β > 0. By x) as in the derivation of VAEs (see Section 2) introducing parametrized distributions pθ(y | and by deï¬ning distortion as
Dy = â p(x, y)qÏ(z x) log pθ(y | z) dx dy dz = E | p(x,y)[E qÏ(z|x)[ â log pθ(y z)]], |
where p(x, y) is the (true) joint distribution of x and y and p(z) is a ï¬xed prior, one obtains a variational approximation of the IB objective as
â Figure 7 (b) illustrates the R-Dy tradeoff. The best we can hope for is that z stores all information about y contained in x, i.e., R = I(x; y). z) such a z yields the minimum achievable distortion, which corresponds to the conditional entropy of y given x). As the rate decreases below I(x; y) the distortion inevitably increases. When R = 0 x, H(y | z) = pθ(y), and hence for arbitrarily the code does not store any information and we have pθ(y p(x,y)[E complex pθ(x log pθ(y)]] = H(y). As in the p(y)[ log pθ(y rate-distortion tradeoff for VAEs, all these extreme points are only achievable in the limit of inï¬nite capacity encoders pθ(x x). In practice, only models with a larger optimal IB | objective
â
In the supervised case considered here, the distortion corresponds to the log-likelihood of the target y predicted from the learned representation z. Therefore, given a model trained for a speciï¬c point in the R-Dy plane, we know the predictive performance in terms of the negative log-likelihood (or, equivalently, the cross-entropy) of that speciï¬c model.
Finally, we note that the discussed rate-distortion tradeoffs for the unsupervised and supervised sce- nario can be uniï¬ed into a single framework, as proposed by Alemi and Fischer [51]. The resulting formulation recovers models such as semi-supervised VAE besides (β-)VAE, VIB, and Information dropout, but is no longer easily accessible through a two-dimensional rate-distortion plane. Alemi and Fischer [51] further establish connections of their framework to the theory of thermodynamics.
# 8 Conclusion and Discussion
Learning useful representations with little or no supervision is a key challenge towards applying artiï¬cial intelligence to the vast amounts of unlabelled data collected in the world. We provide an in-depth review of recent advances in representation learning with a focus on autoencoder-based models. In this study we consider several properties, meta-priors, believed useful for downstream tasks, such as disentanglement and hierarchical organization of features, and discuss the main re- search directions to enforce such properties. In particular, the approaches considered herein either (i) regularize the (approximate or aggregate) posterior distribution, (ii) factorize the encoding and decoding distribution, or (iii) introduce a structured prior distribution. Given the current landscape, there is a lot of fertile ground in the intersection of these methods, namely, combining regularization- based approaches while introducing a structured prior, possibly using a factorization for the encoding and decoding distributions with some particular structure.
18
Unsupervised representation learning is an ill-deï¬ned problem if the downstream task can be arbi- trary. Hence, all current methods use strong inductive biases and modeling assumptions. Implicit or explicit supervision remains a key enabler and, depending on the mechanism for enforcing meta- priors, different degrees of supervision are required. One can observe a clear tradeoff between the degree of supervision and how useful the resulting representation is: On one end of the spectrum are methods targeting abstract meta-priors such as disentanglement (e.g., β-VAE [2]) that were applied mainly to toy-like data sets. On the other end of the spectrum are fully supervised methods (e.g., FaderNetworks [12]) where the learned representations capture subtle aspects of the data, allow for ï¬ne-grained control of the reconstructions by manipulating the representation, and are amenable to higher-dimensional data sets. Furthermore, through the lens of rate-distortion we argue that, perhaps unsurprisingly, maximum likelihood optimization alone canât guarantee that the learned represen- tation is useful at all. One way to sidestep this fundamental issue is to consider the âusefulnessâ dimension with respect to a given task (or a distribution of tasks) explicitly.
# References
[1] Y. Bengio, A. Courville, and P. Vincent, âRepresentation learning: A review and new perspec- tives,â IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798â1828, 2013.
[2] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Ler- chner, âbeta-VAE: Learning basic visual concepts with a constrained variational framework,â in International Conference on Learning Representations, 2017.
[3] H. Kim and A. Mnih, âDisentangling by factorising,â in Proc. of the International Conference on Machine Learning, 2018, pp. 2649â2658.
[4] T. Q. Chen, X. Li, R. Grosse, and D. Duvenaud, âIsolating sources of disentanglement in variational autoencoders,â in Advances in Neural Information Processing Systems, 2018.
[5] S. Zhao, J. Song, and S. Ermon, âInfoVAE: Information maximizing variational autoencoders,â arXiv:1706.02262, 2017.
[6] A. Kumar, P. Sattigeri, and A. Balakrishnan, âVariational inference of disentangled latent con- cepts from unlabeled observations,â in International Conference on Learning Representations, 2018.
[7] R. Lopez, J. Regier, M. I. Jordan, and N. Yosef, âInformation constraints on auto-encoding variational bayes,â in Advances in Neural Information Processing Systems, 2018.
[8] B. Esmaeili, H. Wu, S. Jain, A. Bozkurt, N. Siddharth, B. Paige, D. H. Brooks, J. Dy, and J.-W. van de Meent, âStructured disentangled representations,â arXiv:1804.02086, 2018.
[9] A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy, âDeep variational information bottleneck,â in International Conference on Learning Representations, 2016.
[10] A. Achille and S. Soatto, âInformation dropout: Learning optimal representations through noisy computation,â IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
[11] T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum, âDeep convolutional inverse graph- ics network,â in Advances in Neural Information Processing Systems, 2015, pp. 2539â2547.
[12] G. Lample, N. Zeghidour, N. Usunier, A. Bordes, L. Denoyer et al., âFader networks: Manip- ulating images by sliding attributes,â in Advances in Neural Information Processing Systems, 2017, pp. 5967â5976.
[13] C. Louizos, K. Swersky, Y. Li, M. Welling, and R. Zemel, âThe variational fair autoencoder,â in International Conference on Learning Representations, 2016.
[14] I. Gulrajani, K. Kumar, F. Ahmed, A. A. Taiga, F. Visin, D. Vazquez, and A. Courville, âPix- elVAE: A latent variable model for natural images,â in International Conference on Learning Representations, 2017.
19
[15] C. K. Sønderby, T. Raiko, L. Maaløe, S. K. Sønderby, and O. Winther, âLadder variational autoencoders,â in Advances in Neural Information Processing Systems, 2016, pp. 3738â3746.
[16] S. Zhao, J. Song, and S. Ermon, âLearning hierarchical features from deep generative models,â in Proc. of the International Conference on Machine Learning, 2017, pp. 4091â4099.
[17] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling, âSemi-supervised learning with deep generative models,â in Advances in Neural Information Processing Systems, 2014, pp. 3581â3589.
[18] A. Makhzani and B. J. Frey, âPixelGAN autoencoders,â in Advances in Neural Information Processing Systems, 2017, pp. 1975â1985.
[19] X. Chen, D. P. Kingma, T. Salimans, Y. Duan, P. Dhariwal, J. Schulman, I. Sutskever, and P. Abbeel, âVariational lossy autoencoder,â in International Conference on Learning Repre- sentations, 2017.
[20] A. van den Oord, O. Vinyals et al., âNeural discrete representation learning,â in Advances in Neural Information Processing Systems, 2017, pp. 6306â6315.
[21] S. Narayanaswamy, T. B. Paige, J.-W. Van de Meent, A. Desmaison, N. Goodman, P. Kohli, F. Wood, and P. Torr, âLearning disentangled representations with semi-supervised deep gen- erative models,â in Advances in Neural Information Processing Systems, 2017, pp. 5925â5935.
[22] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey, âAdversarial autoencoders,â arXiv:1511.05644, 2015.
[23] E. Dupont, âLearning disentangled joint continuous and discrete representations,â in Advances in Neural Information Processing Systems, 2018.
[24] M. Johnson, D. K. Duvenaud, A. Wiltschko, R. P. Adams, and S. R. Datta, âComposing graphi- cal models with neural networks for structured representations and fast inference,â in Advances in Neural Information Processing Systems, 2016, pp. 2946â2954.
[25] D. P. Kingma and M. Welling, âAuto-encoding variational bayes,â in International Conference on Learning Representations, 2014.
[26] D. J. Rezende, S. Mohamed, and D. Wierstra, âStochastic backpropagation and approximate inference in deep generative models,â in Proc. of the International Conference on Machine Learning, 2014, pp. 1278â1286.
[27] Y.-C. Liu, Y.-Y. Yeh, T.-C. Fu, S.-D. Wang, W.-C. Chiu, and Y.-C. F. Wang, âDetach and adapt: Learning cross-domain disentangled deep representation,â in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[28] H.-Y. Lee, H.-Y. Tseng, J.-B. Huang, M. Singh, and M.-H. Yang, âDiverse image-to-image translation via disentangled representations,â in Proc. of the European Conference on Com- puter Vision, 2018, pp. 35â51.
[29] A. Gonzalez-Garcia, J. van de Weijer, and Y. Bengio, âImage-to-image translation for cross- domain disentanglement,â in Advances in Neural Information Processing Systems, 2018.
[30] A. Alemi, B. Poole, I. Fischer, J. Dillon, R. A. Saurous, and K. Murphy, âFixing a broken ELBO,â in Proc. of the International Conference on Machine Learning, 2018, pp. 159â168.
[31] C. Doersch, âTutorial on variational autoencoders,â arXiv:1606.05908, 2016.
[32] X. Nguyen, M. J. Wainwright, and M. I. Jordan, âEstimating divergence functionals and the likelihood ratio by convex risk minimization,â IEEE Transactions on Information Theory, vol. 56, no. 11, pp. 5847â5861, 2010.
[33] M. Sugiyama, T. Suzuki, and T. Kanamori, âDensity-ratio matching under the bregman diver- gence: a uniï¬ed framework of density-ratio estimation,â Annals of the Institute of Statistical Mathematics, vol. 64, no. 5, pp. 1009â1044, 2012.
20
[34] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, âGenerative adversarial nets,â in Advances in Neural Information Processing Systems, 2014, pp. 2672â2680.
[35] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Sch¨olkopf, and A. Smola, âA kernel two-sample test,â Journal of Machine Learning Research, vol. 13, no. Mar, 2012.
[36] I. Tolstikhin, O. Bousquet, S. Gelly, and B. Schoelkopf, âWasserstein auto-encoders,â in Inter- national Conference on Learning Representations, 2018.
[37] N. Hadad, L. Wolf, and M. Shahar, âA two-step disentanglement method,â in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 772â780.
[38] Z. Liu, P. Luo, X. Wang, and X. Tang, âDeep learning face attributes in the wild,â in Proc. of the IEEE International Conference on Computer Vision, 2015, pp. 3730â3738.
[39] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter, âA 3d face model for pose and illumination invariant face recognition,â in Proc. of the IEEE International Conference on Advanced Video and Signal Based Surveillance.
[40] M. Aubry, D. Maturana, A. A. Efros, B. C. Russell, and J. Sivic, âSeeing 3d chairs: exem- plar part-based 2d-3d alignment using a large dataset of cad models,â in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 3762â3769.
[41] C. Eastwood and C. K. I. Williams, âA framework for the quantitative evaluation of disentan- gled representations,â in International Conference on Learning Representations, 2018.
[42] K. Ridgeway and M. C. Mozer, âLearning deep disentangled embeddings with the f-statistic loss,â in Advances in Neural Information Processing Systems, 2018.
[43] F. Locatello, S. Bauer, M. Lucic, S. Gelly, B. Sch¨olkopf, and O. Bachem, âChalleng- ing common assumptions in the unsupervised learning of disentangled representations,â arXiv:1811.12359, 2018.
[44] C. P. Burgess, I. Higgins, A. Pal, L. Matthey, N. Watters, G. Desjardins, and A. Lerchner, âUnderstanding disentangling in β-VAE,â arXiv:1804.03599, 2018.
[45] N. Tishby, F. C. Pereira, and W. Bialek, âThe information bottleneck method,â arXiv preprint physics/0004057, 2000.
[46] M. D. Hoffman and M. J. Johnson, âElbo surgery: yet another way to carve up the variational evidence lower bound,â in Workshop in Advances in Approximate Bayesian Inference, NIPS, 2016.
[47] S. Watanabe, âInformation theoretical analysis of multivariate correlation,â IBM Journal of Research and Development, vol. 4, no. 1, pp. 66â82, 1960.
[48] Q. Liu and D. Wang, âStein variational gradient descent: A general purpose bayesian inference algorithm,â in Advances In Neural Information Processing Systems, 2016, pp. 2378â2386.
[49] A. Gretton, O. Bousquet, A. Smola, and B. Sch¨olkopf, âMeasuring statistical dependence with Hilbert-Schmidt norms,â in International Conference on Algorithmic Learning Theory. Springer, 2005, pp. 63â77.
[50] A. van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, and A. Graves, âConditional image generation with PixelCNN decoders,â in Advances in Neural Information Processing Systems, 2016, pp. 4790â4798.
[51] A. A. Alemi and I. Fischer, âTherML: Thermodynamics of machine learning,â arXiv:1807.04162, 2018.
[52] A. Rahimi and B. Recht, âRandom features for large-scale kernel machines,â in Advances in Neural Information Processing Systems, 2008, pp. 1177â1184.
21
[53] G. E. Hinton and D. Van Camp, âKeeping the neural networks simple by minimizing the de- scription length of the weights,â in Proc. of the Annual Conference on Computational Learning Theory, 1993, pp. 5â13.
[54] A. Honkela and H. Valpola, âVariational learning and bits-back coding: An information- theoretic view to bayesian learning,â IEEE Transactions on Neural Networks, vol. 15, no. 4, pp. 800â810, 2004.
[55] A. Van Oord, N. Kalchbrenner, and K. Kavukcuoglu, âPixel recurrent neural networks,â in International Conference on Machine Learning, 2016, pp. 1747â1756.
[56] D. P. Kingma, T. Salimans, R. Jozefowicz, X. Chen, I. Sutskever, and M. Welling, âImproved variational inference with inverse autoregressive ï¬ow,â in Advances in Neural Information Processing Systems, 2016, pp. 4743â4751.
[57] P. Bachman, âAn architecture for deep, hierarchical generative models,â in Advances in Neural Information Processing Systems, 2016, pp. 4826â4834.
[58] M. J. Wainwright and M. I. Jordan, âGraphical models, exponential families, and variational in Machine Learning, vol. 1, no. 1â2, pp. 1â305, 2008.
# inference,â Foundations and Trends®)
[59] E. Jang, S. Gu, and B. Poole, âCategorical reparameterization with Gumbel-softmax,â in Inter- national Conference on Learning Representations, 2017.
[60] C. J. Maddison, A. Mnih, and Y. W. Teh, âThe concrete distribution: A continuous relaxation of discrete random variables,â in International Conference on Learning Representations, 2016.
[61] Y. Bengio, N. L´eonard, and A. Courville, âEstimating or propagating gradients through stochastic neurons for conditional computation,â arXiv:1308.3432, 2013.
[62] A. Mnih and K. Gregor, âNeural variational inference and learning in belief networks,â in Proc. of the International Conference on Machine Learning, 2014, pp. 1791â1799.
[63] A. Mnih and D. J. Rezende, âVariational inference for monte carlo objectives,â in Proc. of the International Conference on Machine Learning, 2016, pp. 2188â2196.
[64] E. Agustsson, F. Mentzer, M. Tschannen, L. Cavigelli, R. Timofte, L. Benini, and L. V. Gool, âSoft-to-hard vector quantization for end-to-end learning compressible representations,â in Ad- vances in Neural Information Processing Systems, 2017, pp. 1141â1151.
[65] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle, âGreedy layer-wise training of deep networks,â in Advances In Neural Information Processing Systems, 2007, pp. 153â160.
[66] M. A. Ranzato, C. Poultney, S. Chopra, and Y. LeCun, âEfï¬cient learning of sparse represen- tations with an energy-based model,â in Advances in Neural Information Processing Systems, 2007, pp. 1137â1144.
[67] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, âExtracting and composing robust features with denoising autoencoders,â in Proc. of the International Conference on Machine Learning, 2008, pp. 1096â1103.
[68] L. Yingzhen and S. Mandt, âDisentangled sequential autoencoder,â in Proc. of the International Conference on Machine Learning, 2018, pp. 5656â5665.
[69] J.-T. Hsieh, B. Liu, D.-A. Huang, L. Fei-Fei, and J. C. Niebles, âLearning to decompose and disentangle representations for video prediction,â in Advances in Neural Information Process- ing Systems, 2018.
[70] R. Villegas, J. Yang, S. Hong, X. Lin, and H. Lee, âDecomposing motion and content for natural video sequence prediction,â in International Conference on Learning Representations, 2017.
[71] E. L. Denton and V. Birodkar, âUnsupervised learning of disentangled representations from video,â in Advances in Neural Information Processing Systems, 2017, pp. 4414â4423.
22
[72] M. Fraccaro, S. Kamronn, U. Paquet, and O. Winther, âA disentangled recognition and nonlin- ear dynamics model for unsupervised learning,â in Advances in Neural Information Processing Systems, 2017, pp. 3601â3610.
[73] W.-N. Hsu, Y. Zhang, and J. Glass, âUnsupervised learning of disentangled and interpretable representations from sequential data,â in Advances in Neural Information Processing Systems, 2017, pp. 1878â1889.
[74] V. Fortuin, M. H¨user, F. Locatello, H. Strathmann, and G. R¨atsch, âDeep self-organization: Interpretable discrete representation learning on time series,â arXiv:1806.02199, 2018.
[75] S. R. Bowman, L. Vilnis, O. Vinyals, A. Dai, R. Jozefowicz, and S. Bengio, âGenerating sentences from a continuous space,â in Proc. of the SIGNLL Conference on Computational Natural Language Learning, 2016, pp. 10â21.
[76] Z. Hu, Z. Yang, X. Liang, R. Salakhutdinov, and E. P. Xing, âToward controlled generation of text,â in Proc. of the International Conference on Machine Learning, 2017, pp. 1587â1596.
[77] I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. C. Courville, and Y. Bengio, âA hierarchical latent variable encoder-decoder model for generating dialogues.â in AAAI, 2017, pp. 3295â3301.
[78] V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky, and A. Courville, âAdversarially learned inference,â in International Conference on Learning Representations, 2017.
[79] J. Donahue, P. Kr¨ahenb¨uhl, and T. Darrell, âAdversarial feature learning,â in International Conference on Learning Representations, 2017.
[80] C. Li, H. Liu, C. Chen, Y. Pu, L. Chen, R. Henao, and L. Carin, âAlice: Towards understand- ing adversarial learning for joint distribution matching,â in Advances in Neural Information Processing Systems, 2017, pp. 5495â5503.
[81] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, âInfoGAN: In- terpretable representation learning by information maximizing generative adversarial nets,â in Advances in Neural Information Processing Systems, 2016, pp. 2172â2180.
[82] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther, âAutoencoding beyond pixels using a learned similarity metric,â arXiv:1512.09300, 2015.
[83] M. Rosca, B. Lakshminarayanan, D. Warde-Farley, and S. Mohamed, âVariational approaches for auto-encoding generative adversarial networks,â arXiv:1706.04987, 2017.
[84] M. Tschannen, E. Agustsson, and M. Lucic, âDeep generative models for distribution- preserving lossy compression,â in Advances in Neural Information Processing Systems, 2018.
[85] M. Arjovsky, S. Chintala, and L. Bottou, âWasserstein generative adversarial networks,â in Proc. of the International Conference on Machine Learning, vol. 70, 2017, pp. 214â223.
[86] M. F. Mathieu, J. J. Zhao, J. Zhao, A. Ramesh, P. Sprechmann, and Y. LeCun, âDisentangling factors of variation in deep representation using adversarial training,â in Advances in Neural Information Processing Systems 29, 2016, pp. 5040â5048.
[87] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, âImage-to-image translation with conditional ad- versarial networks,â in Proc. of the IEEE Conference on Computer Vision and Pattern Recog- nition, 2017, pp. 5967â5976.
[88] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, âUnpaired image-to-image translation using cycle- consistent adversarial networks,â in Proc. of the IEEE International Conference on Computer Vision, 2017, pp. 2242â2251.
[89] A. Liu, Y.-C. Liu, Y.-Y. Yeh, and Y.-C. F. Wang, âA uniï¬ed feature disentangler for multi- domain image translation and manipulation,â in Advances in Neural Information Processing Systems, 2018.
[90] F. Huszar, âIs maximum likelihood useful for representation learning?â http://www.inference. vc/maximum-likelihood-for-representation-learning-2.
23
# A Estimators for MMD and HSIC
Expanding (3) and estimating µpx, µpy as means over samples unbiased estimator of the MMD as x(i) { N i=1, } y(i) { M i=1, one obtains an }
M M MMD(p.., py) = 3 (« Ow D) 45 Wars 2 LF (yy) (20) i=l ffi i=l ji N M ~ xa ask (2.y).
The Hilbert-Schmidt independence criterion (HSIC) is a kernel-based independence criterion with the same underlying aa Â¥ the MMD. Given distributions p,(2) and p,(y) the goal is to determine whether p(x,y) = px(x)py(y) and measure the degree of dependence. Intuitively, if the distributions p, and p, are parametrized with parameters a and (3, i.e. py = Pa and py = pg minimizing HSIC(pa,pg) w.tt. a and 6 encourages independence between p, and pg. Given samples {2 }%,, {yO}, from two distributions p, and p, on Â¥ and Y, and kernels k: XY + X and £: Y â Y, the HSIC can be estimated as
samples {2 }%,, {yO}, from two £: Y â Y, the HSIC can be estimated as HSIC(p..p,) = a Sok (2,2) t
HSIC(p..p,) = a Sok (2,2) t (v, y) 4 a S k (2, 20) 0 (vy) ij ij kyl â ms > k (2,2) e (vv) . (1) i,j,k
We refer to Lopez et al. [7, Section 2.2] for a detailed description and generalizations.
24
# B Overview table
Table 3: Summary of the most important models considered in this paper. The objective is given by L£.(0, 6) + ME p(x) [Ri (qo (z|2))] + A2R2(qo(2)), where qo(z) = Epi2)[qp(z|x)] is the aggregate posterior, R; and Rez are regularizers, and \;, A2 > 0 are the corresponding regularization weights. The detailed description of the regularizers is provided in Section 3. We indicate the structure of the encoding and decoding distribution as follows: (H) hierarchical, (A) autoregressive, (default) fully connected or convolutional feed-feed forward neural network). We indicate the prior distribution as follows: (NV) multivariate standard Normal, (C) categorical, (M) mixture distribution, (G) graphical model, (L) learned prior. We indicate whether labels are used as follows: (VW) Labels are required for (semi-)supervised learning, (O) labels can optionally be used for (semi-)supervised learning.
WorK £. Ri Ro ENC DEC p(z) Y 2] VAE Dxx(qo(2|2)||p(z)) N 9] VAE Dxx(qa(2|2)||p(z)) oO 5] VAE Dxx(qo(2|2)|Ip()) Dx (4 (2)llp(2)) N 10} = VAE Dxt(qe(z|x)||p(z)) TC(qa(z)) 2) 18] VAE â1,,,(2; 2) A Ni O 8] VAE âI,,,(32) Ro (qo(2)) +X Maeg Ra (G(2)) N 3,4] VAE TC(qu(z)) N 6] VAE \|Cov,, (2) lz] â Ill? N 7] VAE HSIC(q¢(za,), 46(2a)) N O 13 VAE MMD(q6(z|s = 0), ga(z|s = 1)) N v 11 VAE N Vv 17 VAE H N+C VÂ¥ 19 VAE A NIL 14 VAE H HtA N 15 VAE H H N 16 VAE H H N 24 VAE G/M 21 VAE Gv 23 VAE C+N 20 VAE A C/L 12,37] AE âEgce,y)[log Py (1 â y|Eg(2))) v 22,36] AE Dys(E4(2)||(2)) NICIM O
Dys(E4(2)||(2))
4Lample et al. [12], Hadad et al. [37] do not enforce a prior on the latent distribution and therefore cannot generate unconditionally.
25
# N
# / C | {
"id": "1706.02262"
} |
1812.04754 | Gradient Descent Happens in a Tiny Subspace | We show that in a variety of large-scale deep learning scenarios the gradient
dynamically converges to a very small subspace after a short period of
training. The subspace is spanned by a few top eigenvectors of the Hessian
(equal to the number of classes in the dataset), and is mostly preserved over
long periods of training. A simple argument then suggests that gradient descent
may happen mostly in this subspace. We give an example of this effect in a
solvable model of classification, and we comment on possible implications for
optimization and learning. | http://arxiv.org/pdf/1812.04754 | Guy Gur-Ari, Daniel A. Roberts, Ethan Dyer | cs.LG, cs.AI, stat.ML | 9 pages + appendices, 12 figures | null | cs.LG | 20181212 | 20181212 | 8 1 0 2
c e D 2 1 ] G L . s c [
1 v 4 5 7 4 0 . 2 1 8 1 : v i X r a
# GRADIENT DESCENT HAPPENS IN A TINY SUBSPACE
Guy Gur-Ariâ School of Natural Sciences Institute for Advanced Study Princeton, NJ 08540, USA guyg@ias.edu
Daniel A. Robertsâ Facebook AI Research New York, NY 10003, USA danr@fb.com
Ethan Dyer Johns Hopkins University Baltimore, MD 21218, USA edyer4@jhu.edu
# ABSTRACT
We show that in a variety of large-scale deep learning scenarios the gradient dy- namically converges to a very small subspace after a short period of training. The subspace is spanned by a few top eigenvectors of the Hessian (equal to the number of classes in the dataset), and is mostly preserved over long periods of training. A simple argument then suggests that gradient descent may happen mostly in this subspace. We give an example of this effect in a solvable model of classiï¬cation, and we comment on possible implications for optimization and learning.
1
# INTRODUCTION
Stochastic gradient descent (SGD) (Robbins & Monro, 1951) and its variants are used to train nearly every large-scale machine learning model. Its ubiquity in deep learning is connected to the efï¬ciency at which gradients can be computed (Rumelhart et al., 1985; 1986), though its success remains some- what of a mystery due to the highly nonlinear and nonconvex nature of typical deep learning loss landscapes (Bottou et al., 2016). In an attempt to shed light on this question, this paper investigates the dynamics of the gradient and the Hessian matrix during SGD.
In a common deep learning scenario, models contain many more tunable parameters than training samples. In such âoverparameterizedâ models, one expects generically that the loss landscape should have many ï¬at directions: directions in parameter space in which the loss changes by very little or not at all (we will use âï¬atâ colloquially to also mean approximately ï¬at).1 Intuitively, this may occur because the overparameterization leads to a large redundancy in conï¬gurations that realize the same decrease in the loss after a gradient descent update.
One local way of measuring the ï¬atness of the loss function involves the Hessian. Small or zero eigenvalues in the spectrum of the Hessian are an indication of ï¬at directions (Hochreiter & Schmid- huber, 1997). In Sagun et al. (2016; 2017), the spectrum of the Hessian for deep learning cross- entropy losses was analyzed in depth.2 These works showed empirically that along the optimization trajectory the spectrum separates into two components: a bulk component with many small eigen- values, and a top component of much larger positive eigenvalues.3
Correspondingly, at each point in parameter space the tangent space has two orthogonal components, which we will call the bulk subspace and the top subspace. The dimension of the top subspace is k, the number of classes in the classiï¬cation objective. This result indicates the presence of many ï¬at directions, which is consistent with the general expectation above.
# In this work we present two novel observations:
âBoth authors contributed equally to this work. 1 Over parameterization suggests many directions in weight space where the loss does not change. This implies that the curvature of the loss, captured through the hessian spectrum, vanishes in these directions. In the remainder of the paper, we use the term ï¬at, as is common in the literature, in a slightly broader sense to describe this curvature of the loss surface, not necessarily implying vanishing of the gradient.
2For other recent work on the spectrum of the Hessian as it relates to learning dynamics, see Pascanu et al.
(2014); Dauphin et al. (2014); Chaudhari et al. (2016).
3We provide our own evidence of this in Appendix B and provide some additional commentary.
1
⢠First, the gradient of the loss during training quickly moves to lie within the top subspace of the Hessian.4 Within this subspace the gradient seems to have no special properties; its direction appears random with respect to the eigenvector basis.
⢠Second, the top Hessian eigenvectors evolve nontrivially but tend not to mix with the bulk eigenvectors, even over hundreds of training steps or more. In other words, the top subspace is approximately preserved over long periods of training.
These observations are borne out across model architectures, including fully connected networks, convolutional networks, and ResNet-18, and data sets (Figures 1, 2, Table 1, Appendices C-D).
Taken all together, despite the large number of training examples and even larger number of pa- rameters in deep-learning models, these results seem to imply that learning may happen in a tiny, slowly-evolving subspace. Indeed, consider a gradient descent step â7g where 77 is the learning rate and g the gradient. The change in the loss to leading order in 7 is 5L = ân lIgll°. Now, let giop be the projection of g onto the top subspace of the Hessian. If the gradient is mostly contained within this subspace, then doing gradient descent with g;op instead of g will yield a similar decrease in the loss, assuming the linear approximation is valid. Therefore, we think this may have bearing on the question of how gradient descent can traverse such a nonlinear and nonconvex landscape.
To shed light on this mechanism more directly, we also present a toy model of softmax regression trained on a mixture of Gaussians that displays all of the effects observed in the full deep-learning scenarios. This isnât meant as a deï¬nitive explanation, but rather an illustrative example in which we can understand these phenomenon directly. In this model, we can solve the gradient descent equations exactly in a limit where the Gaussians have zero variance.5 We ï¬nd that the gradient is concentrated in the top Hessian subspace, while the bulk subspace has all zero eigenvalues. We then argue and use empirical simulations to show that including a small amount of variance will not change these conclusions, even though the bulk subspace will now contain non-zero eigenvalues.
Finally, we conclude by discussing some consequences of these observations for learning and opti- mization, leaving the study of improving current methods based on these ideas for future work.
# 2 THE GRADIENT AND THE TOP HESSIAN SUBSPACE
In this section, we present the main empirical observations of the paper. First, the gradient lies predominantly in the smaller, top subspace. Second, in many deep learning scenarios, the top and bulk Hessian subspaces are approximately preserved over long periods of training. These properties come about quickly during training.
In general, we will consider models with p parameters denoted by θ and a cross-entropy loss function L(θ). We will generally use g(θ) â¡ âL(θ) for the gradient and H(θ) â¡ ââT L(θ) for the Hessian matrix of the loss function at a point θ in parameter space. A gradient descent update with learning rate η at step t is
gt) = g _ ng(0) . (1)
and for stochastic gradient descent we estimate the gradient using a mini-batch of examples.
2.1 THE GRADIENT CONCENTRATES IN THE TOP SUBSPACE
For a classiï¬cation problem with k classes, consider a point θ in parameter space where the Hessian spectrum decomposes into a top and a bulk subspace as discussed above.6
Now, let Vtop be the subspace of tangent space spanned by the top k eigenvectors of the Hessian; we will call this the top subspace. Let Vbulk be the orthogonal subspace. The gradient at this point can
4 This is similar to Advani & Saxe (2017), who found that a large fraction of the weights in overparameter- ized linear models remain untrained from their initial values (thus the gradient in those directions vanishes). 5Other works where the dynamics of gradient descent were analyzed directly include Fukumizu; Saxe et al. (2013); Arora et al. (2018).
6As we have mentioned, this decomposition was originally found in Sagun et al. (2016; 2017), and we provide additional discussion of the Hessian spectrum in Appendix B.
2
be written as a sum g(θ) = gtop + gbulk where gtop (gbulk) is the orthogonal projection of g onto Vtop (Vbulk). The fraction of the gradient in the top subspace is then given by
2 \Idtopl Woop llgll fi top = (2)
Figure 1 shows this fraction for common datasets and network architectures during the early stages of training. The fraction starts out small, but then quickly grows to a value close to 1, implying that there is an underlying dynamical mechanism that is driving the gradient into the top subspace.
For these experiments, training was carried out using vanilla stochastic gradient descent on a variety of realistic models and dataset combinations. However, measurements of the gradient and Hessian were evaluated using the entire training set. Additionally, all of our empirical results have been replicated in two independent implementations. (See Appendix A for further details on the numerical calculation.)
In the next subsection we provide evidence that this effect occurs in a broader range of models.
2.2 HESSIAN-GRADIENT OVERLAP
In this section, we consider the overlap between the gradient g and the Hessian-gradient product Hg during training, deï¬ned by
g' Hg _. 3 [oll MoI ° overlap(g, Hg) =
The overlap takes values in the range [â1, 1].
Computing the overlap is computationally much more efï¬cient than computing the leading Hessian eigenvectors. We argue below that the overlap becomes big (of order 1) if the gradient is contained in the top subspace of the Hessian. We can use the overlap as a proxy measurement: if the overlap is large, we take that to be evidence that the gradient lives mostly in the top subspace. We measured the overlap in a range of deep learning scenarios, and the results are shown in Table 1. In these experiments we consider fully-connected networks, convolutional networks, a ResNet-18 (He et al., 2016), as well as networks with no hidden layers, models with dropout (Srivastava et al., 2014) and batch-norm (201), models with a smooth activation function (e.g. softplus instead of ReLU), models trained using different optimization algorithms (SGD and Adam), models trained using dif- ferent batch sizes and learning rates, models trained on data with random labels (as was considered by Zhang et al. (2016)), and a regression task. The overlap is large for the gradient and Hessian computed on a test set as well (except for the case where the labels are randomized). In addition, we will see below that the effect is not unique to models with cross-entropy loss; a simpler version of the same effect occurs for linear and deep regression models. In all the examples that we checked, the overlap was consistently close to one after some training.
Let us now show that the overlap tends to be large for a random vector in the top Hessian subspace. Let λi be the Hessian eigenvalues in the top subspace of dimension k, with corresponding eigenvec- tors vi. Let w be a vector in this subspace, with coefï¬cients wi in the vi basis. To get an estimate for the overlap equation 3, we choose w to be at a random vertex on the unit cube, namely choosing wi = ±1 at random for each i. The overlap is then given by
yo dw? ya, . Jeb) (hea) REDS overlap(w, Hw) (4)
As discussed above, in typical scenarios the spectrum will consist of k positive eigenvalues where k is the number of classes and all the rest close to zero. To get a concrete estimate ,we approximate this spectrum by taking \; x 7 (a rough approximation, empirically, when k = 10), and take k large so that we can compute the sums approximately. This estimate for the overlap is \/3/4 © 0.87, which is in line with our empirical observations. This should compared with a generic random vector not restricted to the top subspace, which would have an overlap much less than 1.
We have veriï¬ed empirically that a random unit vector w in the top Hessian subspace will have a large overlap with Hw, comparable to that of the gradient, while a random unit vector in the
3
Gradient in top subspace (Fully-Connected, MNIST) 10 hee LW ENR ADS ROS, os Ae os tf " g 07-4 ot t! 06 os it ! oa |-f 0 8 100 150 20 20 310 350 400 step
Loss and accuracy (Fully-Connected, MNIST) os f ~ ras roe PSN OG 20 Nal NO os 7 07 yf 15 4 o6 8 g : oe H 4 1.0 âWi 0. Bit 03 re 02 os â x MAAN ainmatan 91 so 100 150 20 250 30 30 400 step
(a)
(b)
10 Gradient in âih subspace (ConvNet, CIFAR10) os o. £07 Be 06 os 04 © 800 1000 1500 2000 2500 3000 3500 4000 step
Loss and accuracy (ConvNet, CIFAR10) 22 20 18 16 8 S14 12 fi 10 f H os} os 0 800 1000 1500 2000 2500 3000 3500 4000 step
(c)
(d)
Gradient in top subspace (ResNet, CIFAR10) 10+ a ee ee ee ee a a wae H Vii ' Vs oa} yj H Â¥ H os-â+ 3 H é H 04} } H a+} i oo++ © 2000 4000 6000 8000 10000 12000 14000 16000 step
Loss and accuracy (ResNet, CIFAR10) f poses ft 10 | t Fam ansoamee alt ow H \ i fy os H âAf ; a1 ALANS â \_ Ary Los F} if e Pa | 8 boa i A * BAL FS â PN A +02 I an â 4 ~ hee Se tapas 00 © 2000 4000 6000 8000 10000 12000 14000 16000 step
(e)
(f)
Figure 1: Fraction of the gradient in the top subspace ftop, along with training loss and accuracy. Only the initial period of training is shown, until the fraction converges. (a,b) Fully-connected network with two hidden layers with 100 neurons each, trained on MNIST using SGD with batch size 64 and η = 0.1. (c,d) Simple convolutional network (taken from Chollet et al. (2015)) trained on CIFAR10 with the same optimizer. (e,f) ResNet-18 (He et al., 2016) trained on CIFAR10.
full parameter space has negligible overlap. Based on these observations, we will take the overlap equation 3 to be a proxy measurement for the part of the gradient that lives in the top Hessian subspace.
2.3 EVOLUTION OF THE TOP SUBSPACE
We now show empirically that the top Hessian subspace is approximately preserved during train- ing. Let the top subspace A » at training step t be âpanned by the top k Hessian eigenvectors vw), see ut ) Let PO, be the orthogonal projector onto Vo, defined such that (PO)? = PO. We will define the overlap between a subspace An and a subspace Vy at a later step t/ > t as
4
Table 1: Mean overlap results for various cases. FC refers to a fully-connected network with two hidden layers of 100 neurons each and ReLU activations. ConvNet refers to a convolutional network taken from Chollet et al. (2015). By default, no regularization was used. The regression data set was sampled from one period of a sine function with Gaussian noise of standard deviation 0.1. We used SGD with a mini-batch size of 64 and η = 0.1, unless otherwise speciï¬ed. All models were trained for a few epochs, and the reported overlap is the mean over the last 1,000 steps of training. Plots of ftop for many of these experiments are collected in Appendix D.
DATASET MODEL COMMENT MNIST MNIST MNIST MNIST MNIST CIFAR10 CIFAR10 CIFAR10 Regression Softmax FC FC FC FC ConvNet ConvNet Dropout, batch-norm, and extra dense layer ConvNet Optimized using Adam FC Softplus activation η = 0.01 Batch size 256 Random labels Random labels Batch size 100
MEAN OVERLAP 0.96 0.96 0.96 0.97 0.86 0.86 0.93 0.89 0.99
follows.
Tr (PRP ) T(PO)T (PO) top overlap ( V, Oo, Aa >) = = wy . (5) k = |Pove 0
It is easy to verify the rightmost equality. In particular, each element in the sum measures the fraction of a late vector ye) that belongs to the early subspace vo. Notice that the overlap of a subspace with itself is 1, while the overlap of two orthogonal subspaces vanishes. Therefore, this overlap is a good measure of how much the top subspace changes during training!)
Figure[2|shows the evolution of the subspace overlap for different starting times ¢, and future times tg, and for classification tasks with k = 10 classes. For the subspace spanned by the top k eigenvec- tors we see that after about t; = 100 steps the overlap remains significant even when tz â t, > t), implying that the top subspace does not evolve much after a short period of training. By contrast, the subspace spanned by the next k eigenvectors does not have this property: Even for large t; the subspace overlap decays quickly in ty. This means that the projector Pe is only weakly dependent on time, making the notion of a âtop subspaceâ approximately well- defined during the course of training. It is this observation, in con- junction with the observation that the gradient concentrates in this subspace at each point along the trajectory, that gives credence to the idea that gradient descent happens in a tiny subspace
In Appendix C we give additional results on the evolution of the top subspace, by studying different sizes of the subspace. To summarize this, we can average the overlap over different interval values t2 â t1 for each ï¬xed t1 and plot as a function of subspace dimension. We present this plot in Figure 3 for the same fully-connected (a) and ResNet-18 (b) models as in Figure 1. Here, we very clearly see that increasing the subspace until d = 9 leads to a pretty ï¬xed overlap as a function of dimension. At d = 10 it begins to decrease monotonically with increasing dimension. This is strong evidence that thereâs and interesting feature when the dimension is equal to the number of classes.9
7 We have written the middle expression in (equation {5} to make it clear that our overlap is the natural normalized inner product between the projectors Po and Pe) This is simply related to the Frobenius norm of the difference between the two projectors, PO - Pe) ||, the canonical distance between linear subspaces. 5Note that this does not mean the actual top eigenvectors are similarly well-defined, indeed we observe that sometimes the individual eigenvectors within the subspace tend to rotate quickly and other times they seem somewhat fixed.
9 It might be more reasonable to describe this transition at the number of classes minus one, k â 1, rather than the number of classes k. This distinction is inconclusive given the spectrum (see Appendix B), but seems rather sharp in Figure 3.
5
Top 10 subspace (Fully-Connected, MNIST) H Be Hrrngt Aah, al 08 NS Md Matinnsdtare tlc TUT tenn et 3 iv ead Vet ge OTA page Zoe 4 3 H 8 iN 8 â. Boo | WT eerentenerten een, Ed 02 -+= t=0 =+= ty= 100 +e- f=50 -+- f=200 oo o 100 200 200 400 500 both
10 Next 10 subspace (Fully-Connected, MNIST) i --- ho i tL i os 1 H Fy i S06 â' 3 4 8 1 8 te Bos tek zi Ed o 02 bth i y oo o 100 200 200 400 500 b-th
(a)
(b)
to Top 10 subspace (ConvNet, CIFAR10) K we- h=0 -2 y= 500 Ww os + a â = \ 2 os 1 i} â 8 \ 8 8 \ Boa g Ej 02 oo o 100 200 300 400 b-th
to Next 10 subspace (ConvNet, CIFAR10) . we- h=0 =e y= 500 os a = 2 os i} 8 8 8 Bos g Ej 02 oo o 100 200 300 400 k-th
(c)
(d)
1o- 04+ subspace overlap 00 +
Top 10 subspace (ResNet, CIFAR10) 10+ se= i =3128 fh =6256 04+ subspace overlap
(e)
(f)
Figure 2: Overlap of top Hessian subspaces V (t1) top . (a) Top 10 subspace of fully-connected network trained on MNIST. (b) Subspace spanned by the next 10 Hessian eigenvectors. (c) Top 10 subspace of convolutional network trained on CIFAR10. (d) Subspace spanned by the next 10 Hessian eigenvectors. (e) Top 10 subspace of ResNet-18 trained on CIFAR10. (f) Subspace spanned by the next 10 Hessian eigenvectors. The network architectures are the same as in Figure 1.
# 3 A TOY MODEL
In order to understand the mechanism behind the effects presented in the previous section, in this section we work out a toy example. We ï¬nd this to be a useful model as it captures all of the effects we observed in realistic deep learning examples. However, at this point we only interpret the toy model to be illustrative and not a deï¬nitive explanation of the phenomenon.10
Although the way we ï¬rst set it up will be very simple, we can use it as a good starting point for doing small perturbations and generalizations in which all of the realistic features are present. We will show empirically that such small perturbations do not change the qualitative results, and leave an analytic study of this perturbation theory and further generalization to future work.
10It is also useful in understanding how results might change as hyperparameters, e.g. the learning rate, are varied.)
6
1o _ Averaged subspace overlap (Fully-Connected, MNIST) ves f=10 eh =40 =20 4+- 1 =80 subspace overlap o 2 4 6 8 » 2B MW Bb B w subspace dimension
Averaged subspace overlap (ResNet, CIFAR10) subspace overlap J =e=/h=782 o 2 4 6 8 » 2B MW Bb B w subspace dimension
(a) (b)
Figure 3: Subspace overlap of top Hessian subspaces V (t1) top for different top subspace dimensions with different initial number of steps t1 averaged over the interval t2 â t1 for (a) fully-connected two-layer network trained on MNIST and (b) ResNet-18 architecture trained on CIFAR10. Note the kink around subspace dimension equal to one less than the number of classes in the dataset.
a=1 with xa â Rd Consider the following 2-class classiï¬cation problem with n samples {(xa, ya)}n and labels ya. The samples xa are chosen from a mixture of two Gaussian distributions N (µ1, Ï2) and N (µ2, Ï2), corresponding to the two classes. The means µ1,2 are random unit vectors. On this data we train a model of softmax-regression, with parameters θy,i where y = 1, 2 is the label and i = 1, . . . , d. The cross-entropy loss is given by
L(0) âSome (se aa =): ©
(Here we denote by θy â Rd the weights that feed into the y logit.) We will now make several simplifying approximations. First, we take the limit Ï2 â 0 such that the samples concentrate at µ1 and µ2. The problem then reduces to a 2-sample learning problem. Later on we will turn on a small Ï2 and show that our qualitative results are not affected. Second, we will assume that µ1 and µ2 are orthogonal. Random vectors on the unit sphere Sdâ1 have overlap dâ1/2 in expectation, so this will be a good approximation at large d.
With these assumptions, it is easy to see that the loss function has 2d â 2 ï¬at directions. Therefore the Hessian has rank 2, its two nontrivial eigenvectors are the top subspace, and its kernel is the bulk subspace. The gradient is always contained within the top subspace.
In Appendix [E} we use these assumptions to solve analytically for the optimization trajectory. At late-times in a continuous-time approximation, the solution is 61,2(t) = O12 +6 + o log (nt + 1) ¥ S log (nt + ¢2), (7)
µ1 2 µ2 2 log (ηt + c1) â log (ηt + c2) , (7)
2 âHE 1 1 -1 Ont (1 rm)
gθ1 (t) = + O(tâ2), gθ1(t) = âgθ2 (t), (8)
1 1 -1 _ H(t) = Ont (1 rm) ® [mint + wong] + O(t-?). (9)
Here 77 is the learning rate, c; are arbitrary positive real numbers, 6; ⬠R¢ are two arbitrary vectors orthogonal to both j11,2, and 6â ⬠Ris an arbitrary vector in the space spanned by /11, of" To- gether, c;, 6;, and 6â parameterize the 2d-dimensional space of solutions. This structure implies the following.
1. The Hessian has two positive eigenvalues (the top subspace),12 while the rest vanish. The top subspace is always preserved.
11 We thank Vladimir Kirilin for pointing out a mistake in an earlier version of this paper. 12 For the analytically simple form of model chosen here, the two eigenvalues in this top subspace are equal.
However, this degeneracy can be broken in a number of ways such as adding a bias.
7
2. The gradient evolves during training but is always contained within the top subspace.
These properties are of course obvious from the counting of ï¬at directions above. We have veriï¬ed empirically that the following statements hold as well.13
⢠If we introduce small sample noise (i.e. set Ï2 to a small positive value), then the bulk of the Hessian spectrum will contain small non-zero eigenvalues (suppressed by Ï2), and the gradient will still evolve into the top subspace.
⢠If we add biases to our model parameters, then the degeneracy in the top subspace will be broken. During training, the gradient will become aligned with the eigenvector that has the smaller of the two eigenvalues.
⢠All these statements generalize to the case of a Gaussian mixture with k > 2 classes.14 The top Hessian subspace will consist of k positive eigenvalues. If the degeneracy is broken by including biases, there will be kâ1 large eigenvalues and one smaller (positive) eigenvalue, with which the gradient will become aligned.
3.1 MOSTLY PRESERVED SUBSPACE, EVOLVING GRADIENT
Let us now tie these statements into a coherent picture explaining the evolution of the gradient and the Hessian.
The dynamics of the gradient within the top subspace (and specifically that fact that it aligns with the minimal eigenvector in that subspace) can be understood by the following argument. Under a single gradient descent step, the gradient evolves as
gi D= 9(9 - ngâ) = (1 - nH) g + O(7?). (10)
If we assume the linear approximation holds, then for small enough η this evolution will drive the gradient toward the eigenvector of H that has the minimal, non-zero, eigenvalue. This seems to explain why the gradient becomes aligned with the smaller of the two eigenvectors in the top subspace when the degeneracy is broken. (It is not clear that this explanation holds at late times, where higher order terms in η may become important.)15
The reader may wonder why the same argument does not apply to the yet smaller (or vanishing) eigenvalues of the Hessian that are outside the top subspace. Applying the argument naively to the whole Hessian spectrum would lead to the erroneous conclusion that the gradient should in fact evolve into the bulk. Indeed, from equation 10 it may seem that the gradient is driven toward the eigenvectors of (1 â ηH) with the largest eigenvalues, and these span the bulk subspace of H.
There are two ways to see why this argument fails when applied to the whole parameter space. First, the bulk of the Hessian spectrum corresponds to exactly ï¬at directions, and so the gradient vanishes in these directions. In other words, the loss function has a symmetry under translations in parameter space, which implies that no dynamical mechanism can drive the gradient toward those tangent vectors that point in ï¬at directions. Second, in order to show that the gradient converges to the bulk we would have to trust the linear approximation to late times, but (as mentioned above) there is no reason to assume that higher-order corrections do not become large.
# ADDING SAMPLE NOISE
Let us now discuss what happens when we introduce sample noise, setting Ï2 to a small positive value. Now, instead of two samples we have two sets of samples, each of size n/2, concentrated
13 In our experiments we used d = 1000, k = 2, 5, 10, and Ï = 0, 0.02. For the means µi, we use random unit vectors that are not constrained to be orthogonal.
14 This can be studied analytically and will be presented in future work (Kirilin et al.). However, we will discuss an important point here of the k > 2 class model that makes the dynamical nature of the top-k subspace more apparent. Considering the loss equation 6 and k orthogonal mean vectors, one can see that symmetries of the loss lead to k(k â 1) nontrivial directions, meaning the Hessian is naturally rank k(k â 1). After solving the model, one can see that in fact this k(k â 1) subspace dynamically becomes dominated by k top eigenvalues. 15 We mention in passing that the mechanism above holds exactly for linear regression with quadratic loss. In this setting the Hessian is constant and there are no higher-order corrections, and so the gradient will converge to the leading eigenvector of (1 â ηH).
8
around µ1 and µ2. We expect that the change to the optimization trajectory will be small (namely suppressed by Ï2) because the loss function is convex, and because the change to the optimal so- lution is also suppressed by Ï2. The noise breaks some of the translation symmetry of the loss function, leading to fewer ï¬at directions and to more non-zero eigenvalues in the Hessian, appearing in the bulk of the spectrum. The Hessian spectrum then resembles more closely the spectra we ï¬nd in realistic examples (although the eigenvalues comprising the top subspace have a different struc- ture). Empirically we ï¬nd that the top subspace still has two large eigenvalues, and that the gradient evolves into this subspace as before. Therefore turning on noise can be treated as a small pertur- bation which does not alter our analytic conclusions. We leave an analytic analysis of the problem including sample noise to future work. We note that the argument involving equation 10 can again not be applied to the whole parameter space, for the same reason as before. Therefore, there is no contradiction between that equation and saying that the gradient concentrates in the top subspace.
# 4 DISCUSSION
We have seen that quite generally across architectures, training methods, and tasks, that during the course of training the Hessian splits into two slowly varying subspaces, and that the gradient lives in the subspace spanned by the k eigenvectors with largest eigenvalues (where k is the number of classes). The fact that learning appears to concentrate in such a small subspace with all positive Hessian eigenvalues might be a partial explanation for why deep networks train so well despite having a nonconvex loss function. The gradient essentially lives in a convex subspace, and perhaps that lets one extend the associated guarantees to regimes in which they otherwise wouldnât apply.
An essential question of future study concerns further investigation of the nature of this nearly preserved subspace. From Section 3, we understand, at least in certain examples, why the spectrum splits into two blocks as was ï¬rst discovered by Sagun et al. (2016; 2017). However, we would like to further understand the hierarchy of the eigenvalues in the top subspace and how the top subspace mixes with itself in deep learning examples. Weâd also like to investigate more directly the different eigenvectors in this subspace and see whether they have any transparent meaning, with an eye towards possible relevance for feature extraction.
Central to our claim about learning happening in the top subspace was the fact the decrease in the loss was predominantly due to the projection of the gradient onto this subspace. Of course, one could explicitly make this projection onto gtop and use that to update the parameters. By the argument given in the introduction, the loss on the current iteration will decrease by almost the same amount if the linear approximation holds. However, updating with gtop has a nonlinear effect on the dynamics and may, for example, alter the spectrum or cause the top subspace to unfreeze. Further study of this is warranted.
Similarly, given the nontrivial relationship between the Hessian and the gradient, a natural question is whether there are any practical applications for second-order optimization methods (see Bottou et al. (2016) or Dennis Jr & Schnabel (1996) for a review). Much of this will be the subject of future research, but we will conclude by making a few preliminary comments here.
An obvious place to start is with Newtonâs method (Dennis Jr & Schnabel, 1996). Newtonâs method consists of the parameter update θ(t+1) = θ(t) â H â1g(t). There are a few traditional criticisms of Newtonâs method. The most practical is that for models as large as typical deep networks, com- putation of the inverse of the highly-singular Hessian acting on the gradient is infeasible. Even if one could represent the matrix, the fact that the Hessian is so ill-conditioned makes inverting it not well-deï¬ned. A second criticism of Newtonâs method is that it does not strictly descend, but rather moves towards critical points, whether they are minima, maxima, or saddles (Pascanu et al., 2014; Dauphin et al., 2014). These objections have apparent simple resolutions given our results. Since the gradient predominantly lives in a tiny nearly-ï¬xed top subspace, this suggests a natural low rank approximation to Newtonâs method
θ(t+1) = θ(t) â (H (t) top )â1g(t) top . (11)
Inverting the Hessian in the top subspace is well-deï¬ned and computationally simple. Furthermore, the top subspace of the Hessian has strictly positive eigenvalues, indicating that this approximation to Newtonâs method will descend rather then climb. Of course, Newtonâs method is not the only second-order path towards optima, and similar statements apply to other methods.
9
# ACKNOWLEDGMENTS
We are grateful to Shay Barak, L´eon Bottou, Soumith Chintala, Yann LeCun, Roi Livni, Behnam Neyshabur, Sam Ocko, Adam Paszke, Xiao-Liang Qi, Douglas Stanford, Arthur Szlam, and Mark Tygert for discussions. G.G. would like to acknowledge the hospitality of the Stanford Institute for Theoretical Physics and of Facebook AI Research during the completion of this work. G.G. is supported by NSF grant PHY-1606531. D.R. would like to acknowledge the hospitality of both the Stanford Institute for Theoretical Physics and the Institute for Advanced Study during the comple- tion of this work. This paper was brought to you by the letters g and H and converged via power iteration.
# REFERENCES
Mart´ın Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorï¬ow: A system for large- scale machine learning.
Madhu S Advani and Andrew M Saxe. High-dimensional dynamics of generalization error in neural networks. arXiv preprint arXiv:1710.03667, 2017.
Sanjeev Arora, Nadav Cohen, and Elad Hazan. On the optimization of deep networks: Implicit acceleration by overparameterization. arXiv preprint arXiv:1802.06509, 2018.
L´eon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.
Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, and Riccardo Zecchina. Entropy-sgd: Biasing gradient descent into wide valleys. arXiv preprint arXiv:1611.01838, 2016.
# Franc¸ois Chollet et al. Keras. https://keras.io, 2015.
Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex op- timization. In Advances in Neural Information Processing Systems 27, pp. 2933â2941. 2014.
John E Dennis Jr and Robert B Schnabel. Numerical methods for unconstrained optimization and nonlinear equations, volume 16. Siam, 1996.
# Kenji Fukumizu. Effect of batch learning in multilayer neural networks. Gen, 1(04):1Eâ03.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770â778, 2016.
Sepp Hochreiter and J¨urgen Schmidhuber. Flat minima. Neural Computation, 9(1):1â42, 1997.
Vladimir Kirilin, Guy Gur-Ari, and Daniel A. Roberts. Forthcoming.
R.B. Lehoucq, D.C. Sorensen, and C. Yang. ARPACK Usersâ Guide: Solution of Large-scale Eigen- value Problems with Implicitly Restarted Arnoldi Methods. Society for Industrial and Applied Mathematics, 1998.
Razvan Pascanu, Yann N Dauphin, Surya Ganguli, and Yoshua Bengio. On the saddle point problem for non-convex optimization. arXiv preprint arXiv:1405.4604, 2014.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In NIPS-W, 2017.
Herbert Robbins and Sutton Monro. A stochastic approximation method. The annals of mathemati- cal statistics, pp. 400â407, 1951.
10
David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by error propagation. Technical report, California Univ San Diego La Jolla Inst for Cognitive Science, 1985.
David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back- propagating errors. nature, 323(6088):533, 1986.
Levent Sagun, L´eon Bottou, and Yann LeCun. Eigenvalues of the hessian in deep learning: Singu- larity and beyond. arXiv preprint arXiv:1611.07476, 2016.
Levent Sagun, Utku Evci, V Ugur Guney, Yann Dauphin, and Leon Bottou. Empirical analysis of the hessian of over-parametrized neural networks. arXiv preprint arXiv:1706.04454, 2017.
Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynam- ics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overï¬tting. Journal of Machine Learning Research, 15:1929â1958, 2014.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
11
# A NUMERICAL METHODS
For the empirical results in this paper, we did not actually have to ever represent the Hessian. For example, to compute the top eigenvectors of the Hessian efï¬ciently, we used the Lanczos method (Lehoucq et al., 1998), which relies on repeatedly computing the Hessian-vector product Hv for some vector v. This product can be computed in common autograd packages such as Tensor- Flow (Abadi et al.) or PyTorch (Paszke et al., 2017) as follows. Let v be a pre-computed numerical vector (such as the gradient). One ï¬rst computes the scalar a = âLT v, and then takes the gradient of this expression, resulting in âa = Hv.
# B HESSIAN SPECTRUM
As ï¬rst explored by Sagun et al. (2016; 2017), the Hessian eigenvalue spectrum appears to naturally separate into âtopâ and âbulkâ components, with the top consisting of the largest k eigenvalues, and the bulk consisting of the rest.
An example of this for a small fully-connected two-layer network is shown in Figure 4. The hidden layers each have 32 neurons, and the network was trained on MNIST for 40 epochs. The eigenvalues belonging to the top subspace are clearly visible, and for clarity, we labeled them showing that thereâs 10 nontrivial eigenvalues. We further conï¬rmed this effect by studying datasets with a different number of classes (such as CIFAR100) and by studying synthetic datasets.
Hessian spectrum, MNIST 17.5 le 2 15.0 12.5 3° ® 3 10.0 i] 4 = 75 5 o â6 ® 50 1. â8 25 os 10 0.0 aââââ i) 1000 2000 3000 4000 order
Figure 4: Eigenvalues of the Hessian of a fully-connected network with two hidden layers, each with 32 neurons, trained on MNIST for 40 epochs. The top 10 largest eigenvalues are labeled and clearly form a nontrivial tail at the right edge of the spectrum.
We also conï¬rmed that the dimension of the top subspace is tied to the classiï¬cation task and not intrinsic to the dataset. For instance, we can study MNIST where we artiï¬cially label the digits according to whether they are even or odd, creating 2 class labels (even though the data intrinsically In this case, there were only 2 large eigenvalues, signifying that the top contains 10 clusters). is 2-dimensional and not 10-dimensional. Additionally, we experimented by applying a random permutation to the MNIST labels. This removed the correlation between the input and the labels, but the network could still get very high training accuracy as in Zhang et al. (2016). In this case, we still ï¬nd 10 large eigenvalues.
The fact that the top subspace is frozen (as we show in Figure 2), suggests that there could be some kind of a special feature in the Hessian spectrum. To study this, we looked at a two-layer fully-connected network on CIFAR100, with each hidden layer having 256 neurons each. We chose CIFAR100 to allow us a larger value of k to perhaps see something meaningful in the transition be- tween the bulk and top subspaces. Furthermore, rather than just plotting the value of the eigenvalues as a function of their index, we made a density plot averaged over 200 realizations. This is shown
12
in Figure 5, where we note that the x-axis is log of the eigenvalue. Since we were only interested in the transition from top to bulk, we only computed the top 1000 eigenvalues. This allowed us to study a larger model (256, 256) than we did for the plot of the full spectrum in Figure 4.
Hessian eigenvalue density, CIFAR100 â=*~ Mean 100th EV Ml EV density 1 + ! ! ! t ! ! i] T ! ! ! I ! ! ! LI 0.0 05 1.0 15 20 25 3.0 log of eigenvalue
Figure 5: Histogram of eigenvalue density on the right edge of the Hessian spectrum for a fully- connected two-layer (256, 256) model trained on CIFAR100 averaged over 200 realizations.
The density plot, Figure 5, shows a clear feature in the density function describing the Hessian eigenvalues occurring around the mean 100th eigenvalue. While the exact location is hard to deter- mine, there is a clear underdensity around the 100th eigenvalue, counting from the right edge. Itâs an interesting observation that a Gaussian provides a very good ï¬t to the part of the spectrum in the top subspace, suggesting the eigenvalue distribution could be described by a log-normal distribu- tion. However, this is only suggestive, and much more evidence and explanation is needed. In future work, it would be interesting to characterize the different functions that describe the spectral density of the Hessian.
Next, letâs look at a particular top eigenvector. One hypothesis is that the corresponding eigenvectors to the k largest eigenvalues would just correspond to either the weights or biases in the last layer (which also depend on the number of classes). In Figure 6, we plot the maximal eigenvector after (a) 0 steps, (b) 100 steps, (c) 200 steps, and (d) 400 steps of training for the fully-connected (100,100) architecture trained on MNIST. First itâs easy to see that this vector is not constant during training. More importantly, we see that there are many nonzero elements of the vectors across the entire range of model parameters. We colored these plots according to where the parameters are located in the network, and we note that even though the top layer weights seem to have the largest coefï¬cients, they are only â¼ 4à larger than typical coefï¬cients in the ï¬rst hidden layer.
In Figure 7, we zoom in on the ï¬nal layer for the fully-connected (100,100) architecture trained on MNIST after (a) 0 steps and (b) 400 steps. This makes it clear that the eigenvector is never sparse and is evolving in time. Thus, we conclude that eigenvectors are a nontrivial linear combination of parameters with different coefï¬cients. It would be interesting to understand in more detail whether the linear combinations of parameters represented by these top-subspace eigenvectors are capturing something important about either learning dynamics or feature representation.
Finally, for completeness let us also give a plot of some example evolutions of a top Hessian eigen- value. In Figure 8, we plot the evolution of the maximal eigenvalue for (a) our fully-connected (100, 100) architecture trained on MNIST and (b) our ResNet-18 architecture trained on CIFAR10. In both cases, we see an initial period of growth, then the eigenvalue remains very large as the model is training, then it decays. The fully-connected MNIST example trains very quickly, but comparing with Figure 1 (f) for the ResNet-18, we see that the loss and accuracy converge around step 10000, where the maximum eigenvalue begins to oscillate and also decay. Our toy model suggests that eigenvalues should decay at the late part of training like â¼ 1/t. These plots are too rough to say
13
Maximal eigenvector after 0 steps (Fully-Connected, MNIST) 0.100 â~*~ fist hidden layer <'e= second hidden layer ~ top layer weights = top layer biases 0075 - 0.050 - 0.025 - 000 | 0.025 - ~0.050 - 0.075 - o 20000 40000 6000080000 parameter
Maximal eigenvector after 100 steps (Fully-Connected, MNIST) first hidden layer - 0.05 += = second hidden layer __~ top layer weights top layer biases. 0.00 - 0.05 ~0.10 015 - o 20000 40000 6000080000 parameter
(a) (b) (c) (d)
Maximal eigenvector after 200 steps (Fully-Connected, MNIST) 015 77> first hidden layer ~ top layer weights t âl= second hidden layer | = top layer biases H 0107 005 - a00 | ~0.05 ~0.10 o 20000 40000 6000080000 parameter
Maximal eigenvector after 400 steps (Fully-Connected, MNIST) 0.08 ~ first hidden layer = *- top layer weights 0.06 __~*= Second hidden layer_ ~*~ top layer biases 004 - 002 - 0.00 - 0.02 - 0.04 - 0.06 - o 20000 40000 6000080000 parameter
Figure 6: Eigenvector corresponding to the maximal eigenvalue for the fully-connected (100,100) architecture trained on MNIST after (a) 0 steps, (b) 100 steps, (c) 200 steps, and (d) 400 steps. We organize according to ï¬rst hidden layer (blue), second hidden layer (orange), top layer weights (green), and top layer biases (red).
Maximal eigenvector after 0 steps (Fully-Connected, MNIST) 0.100 â_"7= top layer weights | ~*~ top layer biases 0075 - 0.050 - 0.025 - 0.000 - 0.025 - ~0.050 - 0.075 - 8600 88800 8900089200 240089600 parameter
Maximal eigenvector after 400 steps (Fully-Connected, MNIST) 0.08 7 0.06 - 0.04 - 002 - 0.00 - 0.02 - 0.04 - 70.06 = top layer weight | -+- top layer biases 8600 88800 8900089200 240089600 parameter
(a) (b)
Figure 7: Eigenvector corresponding to the maximal eigenvalue for the fully-connected (100,100) architecture trained on MNIST after (a) 0 steps and (b) 400 steps zoomed in on the top layer weights and biases. These plots are strong evidence that eigenvector is clearly not dominated by any partic- ular parameter and is meaningfully changing in time.
anything speciï¬c about the functional form of the decay, but we do see qualitatively in both cases that itâs decreasing.16
16To learn something more concrete, ideally we should train a large number of realizations and then average the behavior of the maximal eigenvalue across the different runs. We will save this analysis for the future.
14
x Hessian max eigenvalue evolution (Fully-Connected, MNIST) 20.0- s+ 15.0 25+ 10.0 - 1s 25-
Hessian max eigenvalue evolution (ResNet, CIFAR10) wy iâ ry i i i i i ! i + 2000 4000 6000 8000 10000 12000 14000 16000 step
(a) (b)
Figure 8: Evolution of the maximal eigenvalue for (a) fully-connected (100,100) architecture trained on MNIST and (b) ResNet-18 architecture trained on CIFAR10. Note the second plot has a log scale on the y-axis.
# C k IS FOR CLASSES
In this section, we will give further evidence that the size of the nearly-preserved subspace is related to the number of classes. As we showed in the last section and Figure 5 in particular, there is a feature in the Hessian spectrum that seems related to the number of classes. In Figure 1, we explain that the gradient tends to lie in a subspace spanned by the eigenvalues corresponding to the top-k eigenvectors, and in Figure 2, we show that a subspace of size k seems to be nearly preserved over the course of training. These three phenomena seem to be related, and here weâd like to provide more evidence.
First, letâs investigate whether the nearly preserved subspace is k-dimensional. To do so, let us consider the same fully-connected two-layer network considered in (a) and (b) of Figure 2. In Figure 9, we consider top subspaces of different dimensions, ranging from 2 to 20. We can consider subspace dimensions of different sizes for the ResNet-18 architecture considered in (e) and (f) of Figure 2, which also has 10 classes. These results are shown in Figure 10. Both of these results show interesting behavior as we increase the subspace past the number of classes.
Notably, the top 15 and top 20 subspaces shown in (e) and (f) of Figures 9-10 and are signiï¬cantly less preserved than the others. The top 11 subspace is marginally less preserved, and most of the subspaces with dimensions less than 10 seem to be preserved amongst themselves. In particular, both (e) and (f) in both plots shows that adding additional eigenvectors does not always lead to increased preservation. The maximally (i.e. largest dimensional) preserved subspace seems to peak around the number of classes. The fact that these smaller top subspaces are also preserved suggests additional structure perhaps related to the eigenvectors no longer rotating as much amongst themselves as training progresses. A nice summary of these results where we average the overlap for a particular t1 over the interval t2 â t1 is shown in the main text in Figure 3.
Now that weâve studied whether the ï¬xed subspace is really k-dimensional, letâs better understand how the fraction of the gradient spreads across the top subspace for a few different points in training. Let us deï¬ne the overlap of the gradient with a particular eigenvector
2 Ui- ae â i
where the numerator represents the overlap of the ith eigenvector (order from eigenvectors corre- sponding to the largest eigenvalues to the least), and the numerator is the norm squared of the ith overlap. This satisfies > c? = 1 when summed overall all p parameter directions. In Figure we plot c? for (a) 0 steps (b) 50 steps (c) 100 steps, and (d) 200 steps of training for the 7 corresponding to the top and next subspace (i = 1... 20) for the fully-connected (100,100) network trained on MNIST. Importantly, these plots make it clear that the gradient is not simply an eigenvector of the Hessian. In particular, before any training, the gradient doesnât seem to have any significant overlap in the top or next subspaces (0, c? = .20, after 0 steps of training cf.
15
Top 2 subspace (Fully-Connected, MNIST) 10: ome sore os 3 YON eM pete, S o6- ¥ Bos 025 x Ao - aaa alias Thee VA retry el ae ) 160 200 wo 400 00 both
Top 5 subspace (Fully-Connected, MNIST) 10 +4 Rpstey Li Ow, 08 ete, 3 \ 3 06+} Bost \ 02-4 3 Yn tet tree tenet een tertreteyte ete eyes ) 160 200 wo 400 00 bot
(a) (b) (c) (d) (e) (f)
Top 9 subspace (Fully-Connected, MNIST) 107-4 =8=0 (=e 40 Rarites Sete eeng ff 20 =80 H twas FO fin Na Fs ye Â¥ ere oN tee = H yee, 3 H NON race, 5 f coatte o 5 06-1 # o4-ât 4 \ | 02 N ccfeeeesenw feseomsensocosonserforonmsccclen ) 160 200 wo 400 00 both
Top 11 subspace (Fully-Connected, MNIST) 107+ =e =O | =e 8540 Be wes h=20 0 2+ 80 Nn os heyy Wet An Ea H Sve tee AY z H Voie 5 k Y Nec egmeet oe 5 06-4 Ne teseetinee #04 B \ 02- \ sreseloveveverslersnerconlmareveemlmmveeneclee ) 160 200 wo 400 00 bot
Top 15 subspace (Fully-Connected, MNIST) 10+ == h=0 =e F840 i we- f=20| -e- 8380 i 08 7-5 H * Hytoag 06 SY : Waa ey Sach subspace overlap 02- \Wereesjeoeeteretersveresepestere ree teesensenyty o 100 200 300 400 500 tt
Top 20 subspace (Fully-Connected, MNIST) 10+ == 8=0 es H=a0 H we- f=20| -e- 8380 H 08 -â* H H 06 - leg Arte subspace overlap i 02- Wretespertrteeeniertestese peeerrestey ttyteret eres o 100 200 300 400 500 tot
Figure 9: Overlap of top Hessian subspaces V (t1) top for fully-connected network trained on MNIST using the same architecture in Figure 1. (a) Top 2 subspace. (b) Top 5 subspace. (c) Top 9 subspace. (d) Top 11 subspace. (e) Top 15 subspace. (f) Top 20 subspace.
yee 2 .94 after 50 steps of training). After some training, see (b), (c), (d), the gradient is spread over or the different c?âs from 7 = mn - 10 in the top Subspace and never has any real significant weight for i > 10. (E.g. we have ye 1] = 93 vs. yee = .01 after 50 steps of training.)
# D ADDITIONAL EXPERIMENTS
In this section, we provide some plots highlighting additional experiments. The results of these experiments were summarized in Table 1, but we include some additional full results on the gradient overlap with the top-k subspace here.
In particular, Figure 12 plots the fraction of the gradient lying in the top subspace, ftop, for a variety of different scenarios. In (a) we give an example of changing the learning rate, in (b) we give an example of changing the batch size, in (c) we give an example with 0 hidden layers, in (d) we give an example of changing the activation function, in (e) we apply a random permutation to labels, and in
16
Top 2 subspace (ResNet, CIFAR10) =e 80 == h=3128 5 o8- Eos - Bos. 02- oo enh gn angen nngnn end gnneennglnnmnns 0 2000 4000 6000 8000 bath
Top 5 subspace (ResNet, CIFAR10) 5 o8- Eos - Boa. 02- oo Se Se ee 0 2000 4000 6000 8000 tot
(a) (b) (c) (d) (e) (f)
Top 9 subspace (ResNet, CIFAR10) io + = = * =e- f=0 =e- fh =3128 08 + B06 Boa 02- oo! ae ae een nN o 2000 4000 6000 8000 bath
Top 11 subspace (ResNet, CIFAR10) io + = = a =e f=0 =e- fh =3128 i 08 B06 Boa 02- 00. o
Top 15 subspace (ResNet, CIFAR10) 10 -¢ === f=0 o> f= 3128 i -e- H=1564 + i =6256 08 + 2 ' B06 Boa 02- oo bonne dnp ene angen an nneate ane nnep nanan o 2000 4000 6000 8000 bath
10 08 - 2 B06 Boa 02- o0- Minne snp np ngage nen nen o 2000 4000 6000 8000 tot
Figure 10: Overlap of top Hessian subspaces V (t1) top for ResNet-18 architecture trained on CIFAR10 as in in Figure 1. (a) Top 2 subspace. (b) Top 5 subspace. (c) Top 9 subspace. (d) Top 11 subspace. (e) Top 15 subspace. (f) Top 20 subspace.
(f) we use the Adam optimizer instead of SGD. In all these experiments, we see pretty consistently that the gradient quickly converges to live in the top subspace and then stays there.
# E ANALYTIC EXAMPLE: DETAILED CALCULATIONS
For the reduced case of a 2-sample, 2-class problem learned using softmax-regression, the loss function can be written as
1 1 L(0) = 5 log (1+ el) + SIog (1+ el MH) (13)
At a late stage of training the loss is near its zero minimum value. The exponents in equation 13 must then be small, so we can approximate
L(θ) â 1 2 e(θ2âθ1)·µ1 + 1 2 e(θ1âθ2)·µ2 . (14)
17
Squared overlap after 0 steps (Fully-Connected, MNIST) 0s as os ko 03 2 a oo; me, mm 0 2 4 6 8 10 RD Bey 16 1B 20 inex
Squared overlap after 50 steps (Fully-Connected, MNIST) 0s as os ko 03 2 a1. il oo, â em 0 2 4 6 8 10 RD Bey 16 1B inex
(a) (b) (c) (d)
Squared overlap after 100 steps (Fully-Connected, MNIST) 08 as os ko 03 2 an a5 tn a a inex
Squared overlap after 200 steps (Fully-Connected, MNIST) 08 as os ko 03 2 os co al, leita ee ae inex
Figure 11: The overlap squared c? of the gradient with the ith eigenvector of the Hessian. Data is for a fully-connected (100,100) architecture trained on MNIST for (a) 0 steps, (b) 50 steps, (c) 100 steps, and (d) 200 steps. After 0 steps, we have an c? = .20 compared with ani c? = .94 after i 50 steps. Also, note that after 50 steps we have aa c? = 93 vs. atl c? = .01. Together, these results show that that the gradient dynamically evolves to lie mostly in the top subspace and is not simply an eigenvector of the Hessian.
The loss function has 2d â 2 ï¬at directions,17 and so the Hessian can have rank at most 2, and the gradient will live inside this non-trivial eigenspace. This is a simple example of the general phenomenon we observed. To gain further understanding, we solve for the optimization trajectory.
We train the model using gradient descent, and take the small learning rate limit (continuous time limit) in which the parameters θ(t) evolve as dθ dt = âηâL(θ(t)). The general solution of this equation is
θ1(t) = Ëθ1 + µ1 2 log (ηt + c1) â µ2 2 log (ηt + c2) , (15)
θ2(t) = Ëθ2 â µ1 2 log (ηt + c1) + µ2 2 log (ηt + c2) .
The space of solutions has 2d â 2 dimensions and is parameterized by the positive constants c1,2 and by Ëθ1,2, which are constant vectors in Rd orthogonal to both µ1 and µ2. The gradient along the optimization trajectory is then given by
âθ1 L(t) = ââθ2 L(t) = â µ1 2(ηt + c1) + µ2 2(ηt + c2) = 2(µ2 â µ1) ηt + O(tâ2) . (17)
Notice that in the limit t â â the gradient approaches a vector that is independent of the solution parameters.
Next, consider the Hessian. By looking at the loss equation 13 we see there are 2d â 2 ï¬at directions and 2d parameters, implying that the Hessian has at most rank 2. Let us work out its spectrum in
17 There are d directions spanned by θ1 + θ2, and d â 2 directions spanned by directions of θ1 â θ2 that are orthogonal to µ1, µ2.
18
(16)
to FC with learning rate 0.01 (MNIST) pod veneeen ee, 0 load witork,, 40° os ne ° e i os + i _ oF + G âoo é os o4 03 ° 700-4000 ~=«@000=S«800 step
FC with batch size 256 (MNIST) to Pan on thd eee oat tle ae) 09 + 4 ost ! a7 â+ £ I * oo | os : H H os} 03 0 m0 © -400-SsD.Ss«DSC«C«M000â~Sâ«1200 step
(a) (b) (c) (d) (e) (f)
to No hidden layers (MNIST) . fa% oo een DLA po iRePae 4 ' ' os $ or é 06 os o4 03 ° 1000 «7000S 5000-000 step
to FE with softplus activation function (MNIST) e a gthyetyatoores, a a hanaiedine re ne é os or & 06 os o4 03 0500 1000 1500 2000 2500 3000 3500 step
to FC (MNIST â random labels) me ° % oh oo SA AAR tal tt an rt yee wis SA pea vy hen 3 yn at 08 â be or é 06 os o4 03 ° 1000 «7000S 5000-000 step
ConyNet with extra dense layer and dam optiomizer (CIFAR10) Pe Fai Tome tat ited. ty oe i rw} Vpsté yh | ie agit yo ny eo *s 3 We or} H £ i 6 o6 |! i H os é o4 03 0250 500 750 1000 1290 1500 1750 2000 step
Figure 12: Fraction of the gradient in the top subspace ftop. In experiments (a)-(e), we use a fully- connected network trained on MNIST, and in (f) we use a CovNet trained on CIFAR10. The changes from the setup described in Figure 1 are: (a) changed learning rate, η = .01 instead of η = 0.1. (b) changed batch size, 256 instead of 64. (c) no hidden layers, just softmax. (d) changed activation: softplus instead of ReLU. (e) random labels on MNIST. (f) changed optimizer, Adam instead of SGD.
more detail. Decomposing the parameter space as Rk â Rd, the Hessian along the optimization trajectory is given by
na(t) -1) 9 [met tent -1 41 Ant +e) Wt +c) 1 /+1 -1 - -4 e a) & nul + op] + O(-2). as)
At leading order in the limit t â â we ï¬nd two non-trivial eigenvectors, given by
( MA ) and ( Me ) , (19) TH âH2
both with eigenvalue (ηt)â1. The remaining eigenvalues all vanish. The top Hessian subspace is ï¬xed, and the gradient is contained within this space.
19 | {
"id": "1710.03667"
} |
1812.01628 | Playing Text-Adventure Games with Graph-Based Deep Reinforcement Learning | Text-based adventure games provide a platform on which to explore
reinforcement learning in the context of a combinatorial action space, such as
natural language. We present a deep reinforcement learning architecture that
represents the game state as a knowledge graph which is learned during
exploration. This graph is used to prune the action space, enabling more
efficient exploration. The question of which action to take can be reduced to a
question-answering task, a form of transfer learning that pre-trains certain
parts of our architecture. In experiments using the TextWorld framework, we
show that our proposed technique can learn a control policy faster than
baseline alternatives. We have also open-sourced our code at
https://github.com/rajammanabrolu/KG-DQN. | http://arxiv.org/pdf/1812.01628 | Prithviraj Ammanabrolu, Mark O. Riedl | cs.CL, cs.AI, cs.LG | Proceedings of NAACL-HLT 2019 | null | cs.CL | 20181204 | 20190325 | 9 1 0 2
r a M 5 2 ] L C . s c [
2 v 8 2 6 1 0 . 2 1 8 1 : v i X r a
# Playing Text-Adventure Games with Graph-Based Deep Reinforcement Learning
Prithviraj Ammanabrolu School of Interactive Computing Georgia Institute of Technology Atlanta, GA raj.ammanabrolu@gatech.edu
Mark O. Riedl School of Interactive Computing Georgia Institute of Technology Atlanta, GA riedl@cc.gatech.edu
# Abstract
Text-based adventure games provide a plat- form on which to explore reinforcement learning in the context of a combinatorial language. action space, We present a deep reinforcement learning architecture that represents the game state as a knowledge graph which is learned during exploration. This graph is used to prune the action space, enabling more efï¬cient exploration. The question of which action to take can be reduced to a question-answering task, a form of transfer learning that pre-trains certain parts of our architecture. In experi- ments using the TextWorld framework, we show that our proposed technique can learn a control policy faster than baseline alterna- tives. We have also open-sourced our code at https://github.com/rajammanabrolu/KG- DQN.
# Introduction
Natural language communication can be used to affect change in the real world. Text adventure games, in which players must make sense of the world through text descriptions and declare ac- tions through natural language, can provide a step- ping stone toward more real-world environments where agents must communicate to understand the state of the world and indirectly affect change in the world. Text adventure games are also useful for developing and testing reinforcement learning algorithms that must deal with the partial observ- ability of the world (Narasimhan et al., 2015; He et al., 2016).
In text adventure games, the agent receives an incomplete textual description of the current state of the world. From this information, and pre- vious interactions with the world, a player must determine the next best action to take to achieve some quest or goal. The player must then com-
pose a textual description of the action they in- tend to make and receive textual feedback of the effects of the action. Formally, a text-based game is a partially observable Markov decision process (POMDP), represented as a 7-tuple of (S,T,A,Q,O,R,7¥) representing the set of en- vironment states, conditional transition probabil- ities between states, words used to compose text commands, observations, observation conditional probabilities, reward function, and the discount factor respectively (Cété et al., 2018).
In text-based games, the agent never has access to the true underlying world state and has to rea- son about how to act in the world based only on the textual observations. Additionally, the agentâs ac- tions must be expressed through natural language commands, ensuring that the action space is com- binatorially large. Thus, text-based games pose a different set of challenges than traditional video games. Text-based games require a greater un- derstanding of previous context to be able to ex- plore the state-action space more effectively. Such games have historically proven to be difï¬cult to play for AI agents, and the more complex variants such as Zork still remain ï¬rmly out of the reach of existing approaches.
We introduce three contributions to text-based game playing to deal with the combinatorially large state and action spaces. First, we show that a state representation in the form of a knowledge graph gives us the ability to effectively prune an action space. A knowledge graph captures the re- lationships between entities as a directed graph. The knowledge graph provides a persistent mem- ory of the world over time and enables the agent to have a prior notion of what actions it should not take at a particular stage of the game.
Our second contribution is a deep reinforcement learning architecture, Knowledge Graph DQN (KG-DQN), that effectively uses this state rep-
resentation to estimate the Q-value for a state- action pair. This architecture leverages recent ad- vances in graph embedding and attention tech- niques (Guan et al., 2018; VeliËckovi´c et al., 2018) to learn which portions of the graph to pay atten- tion to given an input state description in addi- tion to having a mechanism that allows for natu- ral language action inputs. Finally, we take initial steps toward framing the POMDP as a question- answering (QA) problem wherein a knowledge- graph can be used to not only prune actions but to answer the question of what action is most ap- propriate. Previous work has shown that many NLP tasks can be framed as instances of question- answering and that we can transfer knowledge be- tween these tasks (McCann et al., 2017). We show how pre-training certain parts of our KG-DQN network using existing QA methods improves per- formance and allows knowledge to be transferred from different games.
We provide results on ablative experiments comparing our knowledge-graph based approach approaches to strong baselines. Results show that incorporating a knowledge-graph into a reinforce- ment learning agent results in converges to the highest reward more than 40% faster than the best baseline. With pre-training using a question- answering paradigm, we achieve this fast conver- gence rate while also achieving high quality quest solutions as measured by the number of steps re- quired to complete the quests.
# 2 Related Work
A growing body of research has explored the chal- lenges associated with text-based games (Bordes et al., 2010; Narasimhan et al., 2015; He et al., 2016; Fulda et al., 2017; Haroush et al., 2018; CËot´e et al., 2018; Tao et al., 2018). Narasimhan et al. (2015) attempts to solve parser-based text games by encoding the observations using an LSTM. This encoding vector is then used by an action scoring network that determines the scores for the action verb and each of the corresponding argu- ment objects. The two scores are then averaged to determine Q-value for the state-action pair. He et al. (2016) present the Deep Reinforcement Rel- evance Network (DRRN) which uses two separate deep neural networks to encode the state and ac- tions. The Q-value for a state-action pair is then computed by a pairwise interaction function be- tween the two encoded representations. Both of
these methods are not conditioned on previous ob- servations and so are at a disadvantage when deal- ing with complex partially observable games. Ad- ditionally, neither of these approaches prune the action space and so end up wasting trials explor- ing state-action pairs that are likely to have low Q- values, likely leading to slower convergence times for combinatorially large action spaces.
Haroush et al. (2018) introduce the Action Eliminating Network (AEN) that attempts to re- strict the actions in each state to the top-k most likely ones, using the emulatorâs feedback. The network learns which actions should not be taken given a particular state. Their work shows that reducing the size of the action space allows for more effective exploration, leading to better per- formance. Their network is also not conditioned on previous observations.
Knowledge graphs have been demonstrated to improve natural language understanding in other domains outside of text adventure games. For example, Guan et al. (2018) use commonsense knowledge graphs such as ConceptNet (Speer and Havasi, 2012) to signiï¬cantly improve the ability of neural networks to predict the end of a story. They represent the graph in terms of a knowl- edge context vector using features from Concept- Net and graph attention (VeliËckovi´c et al., 2018). The state representation that we have chosen as well as our method of action pruning builds on the strengths of existing approaches while simultane- ously avoiding the shortcomings of ineffective ex- ploration and lack of long-term context.
# 3 Knowledge Graph DQN
In this section we introduce our knowledge graph representation, action pruning and deep Q- network architecture.
# 3.1 Knowledge Graph Representation
In our approach, our agent learns a knowledge graph, stored as a set of RDF triples, i.e. 3-tuples of (subject, relation, object). These triples are extracted from the observations using Stanfordâs Open Information Extraction (OpenIE) (Angeli et al., 2015). OpenlIE is not optimized to the regu- larities of text adventure games and there are a lot of relations that can be inferred from the typical structure of descriptive texts. For example, from a phrase such as âThere is an exit to the northâ one can infer a has relation between the current
You've entered a basement. You try to gain information on your surroundings by using a technique you call "looking." You need an unguarded exit? You should try going east. You don't like doors? Why not try going north, that entranceway is unguarded exit to west has appeals (44 \ is stand You've entered a chamber. You can see a bed stand. The bed stand is typical The bed stand appears to be empty. There is an exit to the north. Don't worry. it is unblocked. There is an unblocked exit to the west.
Figure 1: Graph state update example given two observations
location and the direction of the exit. These addi- tional rules ï¬ll in the information not provided by OpenIE. The resultant knowledge graph gives the agent what essentially amounts to a mental map of the game world.
The knowledge graph is updated after every agent action (see Figure 1). The update rules are deï¬ned such that there are portions of the graph offering short and long-term context. A special nodeâdesignated âyouâârepresents the agent and relations out of this node are updated af- ter every action with the exception of relations de- noting the agentâs inventory. Other relations per- sist after each action. We intend for the update rules to be applied to text-based games in different domains and so only hand-craft a minimal set of rules that we believe apply generally. They are:
# 3.2 Action Pruning
The number of actions available to an agent in a text adventure game can be quite large: A = O(|V | Ã |O|2) where V is the number of action verbs, and O is the number of distinct objects in the world that the agent can interact with, assum- ing that verbs can take two arguments. Some ac- tions, such as movement, inspecting inventory, or observing the room, do not have arguments.
The knowledge graph is used to prune the com- binatorially large space of possible actions avail- able to the agent as follows. Given the current state graph representation Gt, the action space is pruned by ranking the full set of actions and se- lecting the top-k. Our action scoring function is:
⢠+1 for each object in the action that is present in the graph; and
e Linking the current room type (e.g. âbase- mentâ, âchamberâ) to the items found in the room with the relation âhasâ, e.g. (chamber, has, bed stand)
⢠+1 if there exists a valid directed path be- tween the two objects in the graph.
e Extracting information regarding entrances and exits and linking them to the current room, e.g. (basement, has, exit to north)
We assume that each action has at most two ob- jects (for example inserting a key in a lock).
# 3.3 Model Architecture and Training
e Removing all relations relating to the âyouâ node with the exception of inventory every action, e.g. (you, have, cubical key)
e Linking rooms with directions based on the action taken to move between the rooms, e.g. (chamber, east of, basement) after the ac- tion âgo eastâ is taken to go from the base- ment to the chamber
All other RDF triples generated are taken from OpenIE.
Following Narasimhan et al. (2015), all actions A that will be accepted by the gameâs parser are available to the agent at all times. When playing the game, the agent chooses an action and receives an observation ot from the simulator, which is a textual description of current game state. The state graph Gt is updated according to the given obser- vation, as described in Section 3.1.
We use the Q-Learning technique (Watkins and Dayan, 1992) to learn a control policy Ï(at|st), at â A, which gives us the probability of taking action at given the current state st. The policy is
determined by the Q-value of a particular state- action pair, which is updated using the Bellman equation (Sutton and Barto, 2018):
Qt+1(st+1,at+1) = E[rt+1 + γ max aâAt Qt(s, a)|st, at] (1)
where γ refers to the discount factor and rt+1 is the observed reward. The policy is thus to take the action that maximizes the Q-value in a partic- ular state, which will correspond to the action that maximizes the reward expectation given that the agent has taken action at at the current state st and followed the policy Ï(a|s) after.
The architecture in Figure 2 is responsible for computing the representations for both the state st and the actions a(i) â A and coming to an estima- tion of the Q-value for a particular state and ac- tion. During the forward activation, the agent uses the observation to update the graph Gt using the rules outlined in Section 3.2.
The graph is then embedded into a single vec- tor gt. We use Graph Attention (VeliËckovi´c et al., 2018) with an attention mechanism similar to that described in Bahdanau et al. (2014). For- mally, the Multi-headed Graph Attention com- ponent receives a set of node features H = {h1, h2, . . . , hN}, hi â IRF, where N is the num- ber of nodes and F the number of features in each node, and the adjacency matrix of Gt. Each of the node features consist of the averaged word embed- dings for the tokens in that node, as determined by the preceding graph embedding layer. The atten- tion mechanism is set up using self-attention on the nodes after a learnable linear transformation W â IR2FÃF applied to all the node features:
# eij = LeakyReLU (p · W (hi â hj))
where p â IR2F is a learnable parameter. The at- tention coefï¬cients αij are then computed by nor- malizing over the choices of k â N using the soft- max function. Here N refers to the neighborhood in which we compute the attention coefï¬cients. This is determined by the adjacency matrix for Gt and consists of all third-order neighbors of a par- ticular node.
exp(â¬ij) = 3 ee exp(â¬ix) °) Qi =
Multi-head attention is then used, calculating mul- tiple independent attention coefï¬cients. The re- sulting features are then concatenated and passed
(2)
into a linear layer to determine gt:
Be = F(Wo(Kro( D> al Wj) +b,) 4) GEN
where k refers to the parameters of the kââ in- dependent attention mechanism, W, and b, the weights and biases of this componentâs output lin- ear layer, and || represents concatenation.
Simultaneously, an encoded representation of the observation ot is computed using a Sliding Bidirectional LSTM (SB-LSTM). The ï¬nal state representation st is computed as:
st = f (Wl(gt â ot) + bl) (5)
where Wl, bl represent linear layerâs weights and biases and ot is the result of encod- ing the observation with the SB-LSTM.
The entire set of possible actions A is pruned by scoring each a â A according to the mech- anism previously described using the newly up- dated Gt+1. We then embed and encode all of these action strings using an LSTM encoder (Sutskever et al., 2014). The dashed lines in Fig- ure 2 denotes non-differentiable processes.
The ï¬nal Q-value for a state-action pair is:
Q(st, at) = st · at (6)
This method of separately computing the repre- sentations for the state and action is similar to the approach taken in the DRRN (He et al., 2016).
We train the network using experience replay (Lin, 1993) with prioritized sampling (cf., (Moore and Atkeson, 1993)) and a modified version of the e-greedy algorithm (Sutton and Barto, 2018) that we call the â¬1, â¬2-greedy learning algorithm. The experience replay strategy finds paths in the game, which are then stored as transition tuples in a ex- perience replay buffer D. The â¬1, â¬2-greedy algo- rithm explores by choosing actions randomly from A with probability â¬; and from A; with a probabil- ity â¬. The second threshold is needed to account for situations where an action must be chosen to advance the quest for which the agent has no prior in G;. That is, action pruning may remove ac- tions essential to quest completion because those actions involve combinations of entities that have not been encountered before.
We then sample a mini-batch of transition tuples consisting of (SK, Ak, Tk+41,Sk+1, Ak+1; Pk) from
SB-LSTM % Update Ri Graph Observation o embedding >| Graph Attention Multi-headed Linear | % LsTâ¢â¢ Encoder Qâ Q(sy, a") att) a) rr ae
Figure 2: KG-DQN architecture, blue shading indicates components that can be pre-trained and red indicates no pre-training. The solid lines indicate gradient ï¬ow for learnable components.
D and compute the temporal difference loss as:
L(θ) =rk+1+ γ max
aâAk+1 Q(st, a; θ) â Q(st, at; θ) (7)
Replay sampling from D is done by sampling a fraction Ï from transition tuples with a positive reward and 1 â Ï from the rest. As shown in (Narasimhan et al., 2015), prioritized sampling from experiences with a positive reward helps the deep Q-network more easily ï¬nd the sparse set of transitions that advance the game. The exact train- ing mechanism is described in Algorithm 1.
# 4 Game Play as Question Answering
Previous work has shown that many NLP tasks can be framed as instances of question-answering and that in doing so, one can transfer knowledge be- tween these tasks (McCann et al., 2017). In the ab- stract, an agent playing a text adventure game can be thought of as continuously asking the question âWhat is the right action to perform in this situa- tion?â When appropriately trained, the agent may be able to answer the question for itself and select a good next move to execute. Treating the problem as question-answering will not replace the need for exploration in text-adventure games. However, we hypothesize that it will cut down on the amount of exploration needed during testing time, theoreti- cally allowing it to complete quests faster; one of the challenges of text adventure games is that the quests are puzzles and even after training, execu- tion of the policy requires a signiï¬cant amount of exploration.
To teach the agent to answer the question of what action is best to take given an observation,
we use an ofï¬ine, pre-training approach. The data for the pre-training approach is generated using an oracle, an agent capable of ï¬nishing a game perfectly in the least number of steps possible. Speciï¬cally, the agent knows exactly what action to take given the state observation in order to ad- vance the game in the most optimal manner pos- sible. Through this process, we generate a set of traces consisting of state observations and ac- tions such that the state observation provides the context for the implicit question of âWhat action should be taken?â and the oracleâs correct action is the answer. We then use the DrQA (Chen et al., 2017) question-answering technique to train a paired question encoder and an answer encoder that together predict the answer (action) from the question (text observation). The weights from the SB-LSTM in the document encoder in the DrQA system are then used to initialize the weights of the SB-LSTM. Similarly, embedding layers of both the graph and the LSTM action encoder are ini- tialized with the weights from the embedding layer of same document encoder. Since the DrQA em- bedding layers are initialized with GloVe, we are transferring word embeddings that are tuned dur- ing the training of the QA architecture.
The game traces used to train the question- answering come from a set of games of the same domain but have different speciï¬c conï¬gurations of the environment and different quests. We use the TextWorld framework (CËot´e et al., 2018), which uses a grammar to generate random worlds and quests. The types of rooms are the same, but their relative spatial conï¬guration, the types of ob- jects, and the speciï¬c sequence of actions needed to complete the quest are different each time. This
Table 1: Generated game details.
Rooms Total objects Quest length Branching factor Vocab size Average words per obs. Average new RDF triples per obs. Small Large 10 20 5 143 746 67.5 7.2 20 40 10 562 819 94.0 10.5
means that the agent cannot simply memorize quests. For pre-training to work, the agent must develop a general question-answering competence that can transfer to new quests. Our approach to question-answering in the context of text adven- ture game playing thus represents a form of trans- fer learning.
# 5 Experiments
We conducted experiments in the TextWorld framework (CËot´e et al., 2018) using their âhomeâ theme. TextWorld uses a grammar to randomly generate game worlds and quests with given pa- rameters. Games generated with TextWorld start with a zero-th observation that gives instructions for the quest; we do not allow our agent to access this information. The TextWorld API also pro- vides a list of admissible actions at each stateâthe actions that can be performed based on the objects that are present. We do not allow our agent to ac- cess the admissible actions.
We generated two sets of games with different random seeds, representing different game difï¬- culties, which we denote as small and large. Small games have ten rooms and quests of length ï¬ve and large games have twenty rooms and quests of length ten. Statistics on the games are given in Table 1. Quest length refers to the number of ac- tions that the agent is required to perform in order to ï¬nish the quest; more actions are typically nec- essary to move around the environment and ï¬nd the objects that need to be interacted with. The branching factor is the size of the action set A for that particular game.
The reward function provided by TextWorld is as follows: +1 for each action taken that moves the agent closer to ï¬nishing the quest; -1 for each action taken that extends the minimum number of steps needed to ï¬nish the quest from the current stage; 0 for all other situations. The maximum achievable reward for the small and large sets of games are 5 and 10 respectively. This allows for
Table 2: Pre-training accuracy.
Small Large EM 46.20 34.13 Precision Recall 63.38 56.57 64.72 52.53 F1 57.94 55.06
a large amount of variance in quest qualityâas measured by steps to complete the questâthat re- ceives maximum reward.
The following procedure for pre-training was done separately for each set of games. Pre-training of the SB-LSTM within the question-answering architecture is conducted by generating 200 games from the same TextWorld theme. The QA system was then trained on data from walkthroughs of a randomly-chosen subset of 160 of these generated games, tuned on a dev set of 20 games, and eval- uated on the held-out set of 20 games. Table 2 provides details on the Exact Match (EM), preci- sion, recall, and F1 scores of the QA system af- ter training for the small and large sets of games. Precision, recall, and F1 scores are calculated by counting the number of tokens between the pre- dicted answer and ground truth. An Exact Match is when the entire predicted answer matches with the ground truth. This score is used to tune the model based on the dev set of games.
A random game was chosen from the test-set of games and used as the environment for the agent to train its deep Q-network on. Thus, at no time did the QA system see the ï¬nal testing game prior to the training of the KG-DQN network.
We compare our technique to three baselines:
⢠Random command, which samples from the list of admissible actions returned by the TextWorld simulator at each step.
⢠LSTM-DQN, developed by Narasimhan et al. (2015).
⢠Bag-of-Words DQN, which uses a bag-of- words encoding with a multi-layer feed for- ward network instead of an LSTM.
To achieve the most competitive baselines, we used a randomized grid search to choose the best hyperparameters (e.g., hidden state size, 7, p, final â¬, update frequency, learning rate, replay buffer size) for the BOW-DQN and LSTM-DQN base- lines.
We tested three versions of our KG-DQN:
1. Un-pruned actions with pre-training
Algorithm 1 â¬1, â¬2-greedy learning algorithm for KG-DQN 1: for episode=1 to M do : Initialize action dictionary A and graph G' 3 Reset the game simulator 4 Read initial observation 01 5: G, ¢ updateGraph(Go, 01); A1 < pruneActions(A, Go) 6: for step t=1 to T do 7 if random() < â¬, then 8 if random() < 2 then 9 Select random action a, ⬠A 10: else 11: Select random action az ⬠At 12: else ; 13: Compute Q(st, aâ; @) for aâ ⬠A for network parameters 6 > Section 14: Select a, based on 7(a|s¢) 15: Execute action a; in the simulator and observe reward ry 16: Receive next observation of + 1 17: Gi41 © updateGraph(Gy, 0141); Ate + pruneActions(A, Gr41) 18: Compute st41 and Atya = {aâ® forall aâ ⬠A} 19: Set priority p, = lif, > 0,else pp = 0 20: Store transition (st, at, Tr, St41, At+1, pt) in replay buffer D 21: Sample mini-batch of transitions (Sx, Ak, Tk; Sk+1, Ak+1,; Px) from D, with fraction p having pj, = 1 22: Set yx = 1h + mMaxacay,, Q(St, a5 4), oF yk = Te if ser is terminal 23: Perform gradient descent step on loss function L(@) = (yx â Q(st, at; 4))?
> Section 3.2
> Section 3.3, Eq. 6
b Section 3.1 > Section 3.3
2. Pruned actions without pre-training
3. Pruned actions with pre-training (full)
to ï¬nish the game in addition to the reward in the form of a score and in the other variation, the hu- man received no instructions.
Our models use 50-dimensional word embed- dings, 2 heads on the graph attention layers, mini- batch size of 16, and perform a gradient descent update every 5 steps taken by the agent.
All models are evaluated by observing the (a) time to reward convergence, and (b) the av- erage number of steps required for the agent to finish the game with « = 0.1 over 5 episodes af- ter training has completed. Following Narasimhan et al. (2015) we set ⬠to a non-zero value because text adventure games, by nature, require explo- ration to complete the quests. All results are re- ported based on multiple independent trials. For the large set of games, we only perform experi- ments on the best performing models found in the small set of games. Also note that for experiments on large games, we do not display the entire learn- ing curve for the LSTM-DQN baseline, as it con- verges significantly more slowly than KG-DQN. We run each experiment 5 times and average the results.
# 6 Results and Discussion
Recall that the number of steps required to ï¬nish the game for the oracle agent is 5 and 10 for the small and large maps respectively. It is impossi- ble to achieve this ideal performance due to the structure of the quest. The player needs to interact with objects and explore the environment in order to ï¬gure out the exact sequence of actions required to ï¬nish the quest. To help benchmark our agentâs performance, we observed people unafï¬liated with the research playing through the same TextWorld âhomeâ quests as the other models. Those who did not receive instructions on how to ï¬nish the quest never ï¬nished a single quest and gave up after an average of 184 steps on the small map and an av- erage of 190 steps on the large map. When given instructions, human players completed the quest on the large map in an average of 23 steps, ï¬n- ishing the game with the maximum reward possi- ble. Also note that none of the deep reinforcement learning agents received instructions.
Additionally, human performance on the both the games was measured by counting the number of steps taken to ï¬nish the game, with and with- out instructions on the exact quest. We modiï¬ed Textworld to give the human players reward feed- back in the form of a score, the reward function itself is identical to that received by the deep re- inforcement learning agents. In one variation of this experiment, the human was given instructions on the potential sequence of steps that are required
On both small and large maps, all versions of KG-DQN tested converge faster than baselines (see Figure 3 for the small game and Figure 4 for the large game). We donât show BOW-DQN because it is strictly inferior to LSTM-DQN in all situations). KG-DQN converges 40% faster than baseline on the small game; both KG-DQN and the LSTM-DQN baseline reaches the max- imum reward of ï¬ve. On the large game, no
âb Full KG-DON -10 â} Pruned non-pre-trained KG-DON âb Unpruned pre-trained KG-DQN T) LSTM-DON 0 5 10 15 20 25 30 Episodes
Figure 3: Reward learning curve for select experiments with the small games.
Table 3: Average number of steps (and standard devia- tion) taken to complete the small game.
Model Random Command BOW-DQN LSTM-DQN Unpruned, pre-trained KG-DQN Pruned, non-pre-trained KG-DQN 97.3 ± 9.0 73.7 ± 8.5 Full KG-DQN
agents achieve the maximum reward of 10, and the LSTM-DQN requires more than 300 episodes to converge at the same level as KG-DQN. Since all versions of KG-DQN converge at approximately the same rate, we conclude that the knowledge graphâi.e., persistent memoryâis the main fac- tor helping convergence time since it is the com- mon element across all experiments.
After training is complete, we measure the num- ber of steps each agent needs to complete each quest. Full KG-DQN requires an equivalent num- ber of steps in the small game (Table 3) and in the large game (Table 4). Differences between LSTM-DQN and full KG-DQN are not statisti- cally signiï¬cant, p = 0.199 on an independent T- test. The ablated versions of KG-DQNâunpruned KG-DQN and non-pre-trained KG-DQNârequire many more steps to complete quests. TextWorldâs reward function allows for a lot of exploration of the environment without penalty so it is pos- sible for a model that has converged on reward to complete quests in as few as ï¬ve steps or in many hundreds of steps. From these results, we conclude that the pre-training using our question- answering paradigm is allowing the agent to ï¬nd a general understanding of how to pick good ac- tions even when the agent has never seen the ï¬nal
Reward -10 -15 E Full Ke-DQN âE Pruned non-pre-trained KG-DQN â LSTM-DQN 0 20 40 60 80 100 Episodes
Figure 4: Reward learning curve for select experiments with the large games.
Table 4: Average number of steps (and standard devia- tion) taken to complete the large game.
Model Random Command LSTM-DQN Pruned, non-pre-trained KG-DQN 340 ± 6.4 Full KG-DQN
test game. LSTM-DQN also learns how to choose actions efï¬ciently, but this knowledge is captured in the LSTMâs cell state, whereas in KG-DQN this knowledge is made explicit in the knowledge graph and retrieved effectively by graph attention. Taken together, KG-DQN converges faster with- out loss of quest solution quality.
# 7 Conclusions
We have shown that incorporating knowledge graphs into an deep Q-network can reduce training time for agents playing text-adventure games of various lengths. We speculate that this is because the knowledge graph provides a persistent mem- ory of the world as it is being explored. While the knowledge graph allows the agent to reach optimal reward more quickly, it doesnât ensure a high qual- ity solution to quests. Action pruning using the knowledge graph and pre-training of the embed- dings used in the deep Q-network result in shorter action sequences needed to complete quests.
The insight into pre-training portions of the agentâs architecture is based on converting text- adventure game playing into a question-answering activity. is askingâand trying to answerâwhat is the most important thing to try. The pre-training acts as a form of transfer learning from different, but re-
lated games. However, question-answering alone cannot solve the text-adventure playing problem because there will always be some trial and error required.
By addressing the challenges of partial observ- ability and combinatorially large action, spaces through persistent memory, our work on play- ing text-adventure games addresses a critical need for reinforcement learning for language. Text- adventure games can be seen as a stepping stone toward more complex, real-world tasks; the hu- man world is one of partial understanding through communication and acting on the world using lan- guage.
# References
Gabor Angeli, Johnson Premkumar, Melvin Jose, and Christopher D. Manning. 2015. Leveraging Lin- guistic Structure For Open Domain Information Ex- traction. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers).
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv:1409.0473.
Antoine Bordes, Nicolas Usunier, Ronan Collobert, and Jason Weston. 2010. Towards understanding sit- uated natural language. In Proceedings of the 2010 International Conference on Artiï¬cial Intelligence and Statistics.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- In Association for Computa- domain questions. tional Linguistics (ACL).
Marc-Alexandre CËot´e, ´Akos K´ad´ar, Xingdi Yuan, Ben Kybartas, Emery Fine, James Moore, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, Wendy Tay, and Adam Trischler. 2018. TextWorld : A Learning Environment for Text-based Games. In Proceedings of the ICML/IJCAI 2018 Workshop on Computer Games, page 29.
Nancy Fulda, Daniel Ricks, Ben Murdoch, and David Wingate. 2017. What can you do with a rock? affor- dance extraction via word embeddings. In Proceed- ings of the Twenty-Sixth International Joint Con- ference on Artiï¬cial Intelligence, IJCAI-17, pages 1039â1045.
and Minlie Huang. 2018. Story Ending Generation with Incre- mental Encoding and Commonsense Knowledge. arXiv:1808.10113v1.
Matan Haroush, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. 2018. Learning How Not to Act in In Workshop Track at ICLR Text-Based Games. 2018, pages 1â4.
Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Li- hong Li, Li Deng, and Mari Ostendorf. 2016. Deep Reinforcement Learning with a Natural Language In Association for Computational Action Space. Linguistics (ACL).
Long-Ji Lin. 1993. Reinforcement learning for robots using neural networks. Ph.D. thesis, Carnegie Mel- lon University.
Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2017. The Natural Language Decathlon : Multitask Learning as Question An- swering. arXiv:1806.08730.
Andrew W. Moore and Christopher G. Atkeson. 1993. Prioritized sweeping: Reinforcement learning with Machine Learning, less data and less time. 13(1):103â130.
Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language Understanding for Text- based Games Using Deep Reinforcement Learning. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Robert Speer and Catherine Havasi. 2012. Repre- senting General Relational Knowledge in Concept- In Proceedings of the Eighth International Net 5. Conference on Language Resources and Evaluation (LREC).
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in neural information process- ing systems, pages 3104â3112.
Richard S Sutton and Andrew G Barto. 2018. Rein- forcement Learning: An Introduction. MIT Press.
Ruo Yu Tao, Marc-Alexandre CËot´e, Xingdi Yuan, and Layla El Asri. 2018. Towards solving text-based games by producing adaptive action spaces. In Pro- ceedings of the 2018 NeurIPS Workshop on Word- play: Reinforcement and Language Learning in Text-based Games.
Petar VeliËckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. International 2018. Graph Attention Networks. Conference on Learning Representations (ICLR).
Christopher J. C. H. Watkins and Peter Dayan. 1992. Q-learning. Machine Learning, 8(3):279â292. | {
"id": "1808.10113"
} |
1812.00332 | ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware | Neural architecture search (NAS) has a great impact by automatically
designing effective neural network architectures. However, the prohibitive
computational demand of conventional NAS algorithms (e.g. $10^4$ GPU hours)
makes it difficult to \emph{directly} search the architectures on large-scale
tasks (e.g. ImageNet). Differentiable NAS can reduce the cost of GPU hours via
a continuous representation of network architecture but suffers from the high
GPU memory consumption issue (grow linearly w.r.t. candidate set size). As a
result, they need to utilize~\emph{proxy} tasks, such as training on a smaller
dataset, or learning with only a few blocks, or training just for a few epochs.
These architectures optimized on proxy tasks are not guaranteed to be optimal
on the target task. In this paper, we present \emph{ProxylessNAS} that can
\emph{directly} learn the architectures for large-scale target tasks and target
hardware platforms. We address the high memory consumption issue of
differentiable NAS and reduce the computational cost (GPU hours and GPU memory)
to the same level of regular training while still allowing a large candidate
set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of
directness and specialization. On CIFAR-10, our model achieves 2.08\% test
error with only 5.7M parameters, better than the previous state-of-the-art
architecture AmoebaNet-B, while using 6$\times$ fewer parameters. On ImageNet,
our model achieves 3.1\% better top-1 accuracy than MobileNetV2, while being
1.2$\times$ faster with measured GPU latency. We also apply ProxylessNAS to
specialize neural architectures for hardware with direct hardware metrics (e.g.
latency) and provide insights for efficient CNN architecture design. | http://arxiv.org/pdf/1812.00332 | Han Cai, Ligeng Zhu, Song Han | cs.LG, cs.CV, stat.ML | ICLR 2019 | null | cs.LG | 20181202 | 20190223 | 9 1 0 2
b e F 3 2 ] G L . s c [
2 v 2 3 3 0 0 . 2 1 8 1 : v i X r a
Published as a conference paper at ICLR 2019
# PROXYLESSNAS: DIRECT NEURAL ARCHITECTURE SEARCH ON TARGET TASK AND HARDWARE
Han Cai, Ligeng Zhu, Song Han Massachusetts Institute of Technology hancai, ligeng, songhan {
@mit.edu
}
# ABSTRACT
Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g. 104 GPU hours) makes it difï¬cult to directly search the architectures on large-scale tasks (e.g. ImageNet). Differen- tiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from the high GPU memory consumption issue (grow linearly w.r.t. candidate set size). As a result, they need to utilize proxy tasks, such as training on a smaller dataset, or learning with only a few blocks, or training just for a few epochs. These architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present ProxylessNAS that can directly learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level of regular training while still allowing a large candi- date set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08% test error with only 5.7M parameters, better than the previous state-of-the-art architec- fewer parameters. On ImageNet, our model ture AmoebaNet-B, while using 6 achieves 3.1% better top-1 accuracy than MobileNetV2, while being 1.2 faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efï¬cient CNN architecture design.1
# INTRODUCTION
Neural architecture search (NAS) has demonstrated much success in automating neural network ar- chitecture design for various deep learning tasks, such as image recognition (Zoph et al., 2018; Cai et al., 2018a; Liu et al., 2018a; Zhong et al., 2018) and language modeling (Zoph & Le, 2017). De- spite the remarkable results, conventional NAS algorithms are prohibitively computation-intensive, requiring to train thousands of models on the target task in a single experiment. Therefore, directly applying NAS to a large-scale task (e.g. ImageNet) is computationally expensive or impossible, which makes it difï¬cult for making practical industry impact. As a trade-off, Zoph et al. (2018) propose to search for building blocks on proxy tasks, such as training for fewer epochs, starting with a smaller dataset (e.g. CIFAR-10), or learning with fewer blocks. Then top-performing blocks are stacked and transferred to the large-scale target task. This paradigm has been widely adopted in subsequent NAS algorithms (Liu et al., 2018a;b; Real et al., 2018; Cai et al., 2018b; Liu et al., 2018c; Tan et al., 2018; Luo et al., 2018).
However, these blocks optimized on proxy tasks are not guaranteed to be optimal on the target task, especially when taking hardware metrics such as latency into consideration. More importantly, to enable transferability, such methods need to search for only a few architectural motifs and then repeatedly stack the same pattern, which restricts the block diversity and thereby harms performance.
In this work, we propose a simple and effective solution to the aforementioned limitations, called ProxylessNAS, which directly learns the architectures on the target task and hardware instead of with
1Pretrained models and evaluation code are released at https://github.com/MIT-HAN-LAB/ProxylessNAS.
1
Published as a conference paper at ICLR 2019
(1) Previous proxy-based approach (2) Our proxy-less approach Architecture mec Target Task Target Task to') Transfer, & Task Learner & Hardware Learner Hardware Updates
Figure 1: ProxylessNAS directly optimizes neural network architectures on target task and hard- ware. Beneï¬ting from the directness and specialization, ProxylessNAS can achieve remarkably better results than previous proxy-based approaches. On ImageNet, with only 200 GPU hours (200 fewer than MnasNet (Tan et al., 2018)), our searched CNN model for mobile achieves the same
Ã
proxy (Figure 1). We also remove the restriction of repeating blocks in previous NAS works (Zoph et al., 2018; Liu et al., 2018c) and allow all of the blocks to be learned and speciï¬ed. To achieve this, we reduce the computational cost (GPU hours and GPU memory) of architecture search to the same level of regular training in the following ways.
GPU hour-wise, inspired by recent works (Liu et al., 2018c; Bender et al., 2018), we formulate NAS as a path-level pruning process. Speciï¬cally, we directly train an over-parameterized network that contains all candidate paths (Figure 2). During training, we explicitly introduce architecture parameters to learn which paths are redundant, while these redundant paths are pruned at the end of training to get a compact optimized architecture. In this way, we only need to train a single network without any meta-controller (or hypernetwork) during architecture search.
However, naively including all the candidate paths leads to GPU memory explosion (Liu et al., 2018c; Bender et al., 2018), as the memory consumption grows linearly w.r.t. the number of choices. Thus, GPU memory-wise, we binarize the architecture parameters (1 or 0) and force only one path to be active at run-time, which reduces the required memory to the same level of training a compact model. We propose a gradient-based approach to train these binarized parameters based on Bina- ryConnect (Courbariaux et al., 2015). Furthermore, to handle non-differentiable hardware objectives (using latency as an example) for learning specialized network architectures on target hardware, we model network latency as a continuous function and optimize it as regularization loss. Addition- ally, we also present a REINFORCE-based (Williams, 1992) algorithm as an alternative strategy to handle hardware metrics.
In our experiments on CIFAR-10 and ImageNet, beneï¬ting from the directness and specialization, our method can achieve strong empirical results. On CIFAR-10, our model reaches 2.08% test error with only 5.7M parameters. On ImageNet, our model achieves 75.1% top-1 accuracy which is 3.1% higher than MobileNetV2 (Sandler et al., 2018) while being 1.2 faster. Our contributions can be summarized as follows:
ProxylessNAS is the ï¬rst NAS algorithm that directly learns architectures on the large- scale dataset (e.g. ImageNet) without any proxy while still allowing a large candidate set and removing the restriction of repeating blocks. It effectively enlarged the search space and achieved better performance.
We provide a new path-level pruning perspective for NAS, showing a close connection between NAS and model compression (Han et al., 2016). We save memory consumption by one order of magnitude by using path-level binarization.
We propose a novel gradient-based approach (latency regularization loss) for handling hardware objectives (e.g. latency). Given different hardware platforms: CPU/GPU/Mobile, ProxylessNAS enables hardware-aware neural network specialization thatâs exactly opti- mized for the target hardware. To our best knowledge, it is the ï¬rst work to study special- ized neural network architectures for different hardware architectures.
Extensive experiments showed the advantage of the directness property and the special- ization property of ProxylessNAS. It achieved state-of-the-art accuracy performances on CIFAR-10 and ImageNet under latency constraints on different hardware platforms (GPU, CPU and mobile phone). We also analyze the insights of efï¬cient CNN models specialized for different hardware platforms and raise the awareness that specialized neural network architecture is needed on different hardware architectures for efï¬cient inference.
2
Published as a conference paper at ICLR 2019
# 2 RELATED WORK
The use of machine learning techniques, such as reinforcement learning or neuro-evolution, to re- place human experts in designing neural network architectures, usually referred to as neural archi- tecture search, has drawn an increasing interest (Zoph & Le, 2017; Liu et al., 2018a;b;c; Cai et al., 2018a;b; Pham et al., 2018; Brock et al., 2018; Bender et al., 2018; Elsken et al., 2017; 2018b; Ka- math et al., 2018). In NAS, architecture search is typically considered as a meta-learning process, and a meta-controller (e.g. a recurrent neural network (RNN)), is introduced to explore a given architecture space with training a network in the inner loop to get an evaluation for guiding explo- ration. Consequently, such methods are computationally expensive to run, especially on large-scale tasks, e.g. ImageNet.
Some recent works (Brock et al., 2018; Pham et al., 2018) try to improve the efï¬ciency of this meta-learning process by reducing the cost of getting an evaluation. In Brock et al. (2018), a hy- pernetwork is utilized to generate weights for each sampled network and hence can evaluate the architecture without training it. Similarly, Pham et al. (2018) propose to share weights among all sampled networks under the standard NAS framework (Zoph & Le, 2017). These methods speed up architecture search by orders of magnitude, however, they require a hypernetwork or an RNN controller and mainly focus on small-scale tasks (e.g. CIFAR) rather than large-scale tasks (e.g. ImageNet).
Our work is most closely related to One-Shot (Bender et al., 2018) and DARTS (Liu et al., 2018c), both of which get rid of the meta-controller (or hypernetwork) by modeling NAS as a single training process of an over-parameterized network that comprises all candidate paths. Speciï¬cally, One- Shot trains the over-parameterized network with DropPath (Zoph et al., 2018) that drops out each path with some ï¬xed probability. Then they use the pre-trained over-parameterized network to evaluate architectures, which are sampled by randomly zeroing out paths. DARTS additionally introduces a real-valued architecture parameter for each path and jointly train weight parameters and architecture parameters via standard gradient descent. However, they suffer from the large GPU memory consumption issue and hence still need to utilize proxy tasks. In this work, we address the large memory issue in these two methods through path binarization.
Another relevant topic is network pruning (Han et al., 2016) that aim to improve the efï¬ciency of neural networks by removing insigniï¬cant neurons (Han et al., 2015) or channels (Liu et al., 2017). Similar to these works, we start with an over-parameterized network and then prune the redundant parts to derive the optimized architecture. The distinction is that they focus on layer-level pruning that only modiï¬es the ï¬lter (or units) number of a layer but can not change the topology of the network, while we focus on learning effective network architectures through path-level pruning. We also allow both pruning and growing the number of layers.
# 3 METHOD
We ï¬rst describe the construction of the over-parameterized network with all candidate paths, then introduce how we leverage binarized architecture parameters to reduce the memory consumption of training the over-parameterized network to the same level as regular training. We propose a gradient-based algorithm to train these binarized architecture parameters. Finally, we present two techniques to handle non-differentiable objectives (e.g. latency) for specializing neural networks on target hardware.
3.1 CONSTRUCTION OF OVER-PARAMETERIZED NETWORK
Denote a neural network as , en) where ei represents a certain edge in the directed acyclic N graph (DAG). Let be the set of N candidate primitive operations (e.g. convolution, pool- oi} { ing, identity, zero, etc). To construct the over-parameterized network that includes any architecture in the search space, instead of setting each edge to be a deï¬nite primitive operation, we set each edge to be a mixed operation that has N parallel paths (Figure 2), denoted as mO. As such, the over-parameterized network can be expressed as
N Given input x, the output of a mixed operation mO is deï¬ned based on the outputs of its N paths. In where One-Shot, mO(x) is the sum of
· ·
, while in DARTS, mO(x) is weighted sum of }
# oi(x) {
# oi(x) {
3
}
Published as a conference paper at ICLR 2019
Published as a conference paper at ICLR 2019
Figure 2: Learning both weight parameters and binarized architecture parameters.
the weights are calculated by applying softmax to N real-valued architecture parameters
:
# αi} {
N N N exp(αi) j exp(αj) mOne-Shot O mDARTS O (x) = oi(x), (x) = oi(x). pioi(x) =
(1)
# i=l
As shown in Eq. (1), the output feature maps of all N paths are calculated and stored in the memory, while training a compact model only involves one path. Therefore, One-Shot and DARTS roughly need N times GPU memory and GPU hours compared to training a compact model. On large- scale dataset, this can easily exceed the memory limits of hardware with large design space. In the following section, we solve this memory issue based on the idea of path binarization.
# 3.2 LEARNING BINARIZED PATH
To reduce memory footprint, we keep only one path when training the over-parameterized network. Unlike Courbariaux et al. (2015) which binarize individual weights, we binarize entire paths. We in- troduce N real-valued architecture parameters and then transforms the real-valued path weights αi} { to binary gates:
, 0] with probability p1, [1, 0, · · · · · · · · · g = binarize(p1, (2) , pN ) = · · · , 1] with probability pN . [0, 0,
Based on the binary gates g, the output of the mixed operation is given as:
oi(x) with probability p; N ma (x) _ Se g:0:(z) = sae : (3) i=l on(x) with probability py.
As illustrated in Eq. (3) and Figure 2, by using the binary gates rather than real-valued path weights (Liu et al., 2018c), only one path of activation is active in memory at run-time and the memory requirement of training the over-parameterized network is thus reduced to the same level of training a compact model. Thatâs more than an order of magnitude memory saving.
# 3.2.1 TRAINING BINARIZED ARCHITECTURE PARAMETERS
Figure 2 illustrates the training procedure of the weight parameters and binarized architecture pa- rameters in the over-parameterized network. When training weight parameters, we ï¬rst freeze the architecture parameters and stochastically sample binary gates according to Eq. (2) for each batch of input data. Then the weight parameters of active paths are updated via standard gradient descent on the training set (Figure 2 left). When training architecture parameters, the weight parameters are frozen, then we reset the binary gates and update the architecture parameters on the validation set (Figure 2 right). These two update steps are performed in an alternative manner. Once the train- ing of architecture parameters is ï¬nished, we can then derive the compact architecture by pruning redundant paths. In this work, we simply choose the path with the highest path weight.
Unlike weight parameters, the architecture parameters are not directly involved in the computation graph and thereby cannot be updated using the standard gradient descent. In this section, we intro- duce a gradient-based approach to learn the architecture parameters.
4
# paths
calculated
Published as a conference paper at ICLR 2019
In BinaryConnect (Courbariaux et al., 2015), the real-valued weight is updated using the gradient w.r.t. its corresponding binary gate. In our case, analogously, the gradient w.r.t. architecture param- eters can be approximately estimated using âL/âgi in replace of âL/âpi:
exp(aj) aL aL op, aL ap; 1 An 0a; 3 Op; Ba, © => 1 99; Ba, Lie SS, expt ) âaL expla) ; > jg Pili â Pi)> (4) jai 795
1 jai where 6;; lif i = j and 6;; = Oif i # j. Since the binary gates g are involved in the com- putation graph, as shown in Eg. (3), OL /0g; can be calculated through backpropagation. However, computing OL/0g, requires to calculate and store 0; (x). Therefore, directly using Eq. (4) to update the architecture parameters would also require roughly NV times GPU memory compared to training a compact model.
1
# jai
To address this issue, we consider factorizing the task of choosing one path out of N candidates into multiple binary selection tasks. The intuition is that if a path is the best choice at a particular position, it should be the better choice when solely compared to any other path.2
Following this idea, within an update step of the architecture parameters, we ï¬rst sample two paths , pN ) and mask all the other paths as if they do not according to the multinomial distribution (p1, exist. As such the number of candidates temporarily decrease from N to 2, while the path weights are reset accordingly. Then we update the architecture parameters of pi} { these two sampled paths using the gradients calculated via Eq. (4). Finally, as path weights are computed by applying softmax to the architecture parameters, we need to rescale the value of these two updated architecture parameters by multiplying a ratio to keep the path weights of unsampled paths unchanged. As such, in each update step, one of the sampled paths is enhanced (path weight increases) and the other sampled path is attenuated (path weight decreases) while all other paths keep unchanged. In this way, regardless of the value of N , only two paths are involved in each update step of the architecture parameters, and thereby the memory requirement is reduced to the same level of training a compact model.
3.3 HANDLING NON-DIFFERENTIABLE HARDWARE METRICS
Besides accuracy, latency (not FLOPs) is another very important objective when designing efï¬cient neural network architectures for hardware. Unfortunately, unlike accuracy that can be optimized using the gradient of the loss function, latency is non-differentiable. In this section, we present two algorithms to handle the non-differentiable objectives.
# 3.3.1 MAKING LATENCY DIFFERENTIABLE
To make latency differentiable, we model the latency of a network as a continuous function of the neural network dimensions 3. Consider a mixed operation with a candidate set and each oj is associated with a path weight pj which represents the probability of choosing oj. As such, we have the expected latency of a mixed operation (i.e. a learnable block) as:
E[latencyi] = pi j à F (oi j), (5)
where E[latencyi] is the expected latency of the ith learnable block, F ( tion model and F (oi j) is the predicted latency of oi parameters can thereby be given as: âE[latencyi] / âpi For the whole network with a sequence of mixed operations (Figure 3 left), since these operations are executed sequentially during inference, the expected latency of the network can be expressed with the sum of these mixed operationsâ expected latencies:
E[latency] = E[latencyi], (6)
2In Appendix D, we provide another solution to this issue that does not require the approximation. 3Details of the latency prediction model are provided in Appendix B.
5
Published as a conference paper at ICLR 2019
Published as a conference paper at ICLR 2019
Figure 3: Making latency differentiable by introducing latency regularization loss.
We incorporate the expected latency of the network into the normal loss function by multiplying a scaling factor λ2(> 0) which controls the trade-off between accuracy and latency. The ï¬nal loss function is given as (also shown in Figure 3 right) 2 2 + λ2E[latency], w Loss = LossCE + λ1|| || where LossCE denotes the cross-entropy loss and λ1|| w 3.3.2 REINFORCE-BASED APPROACH
(7)
2 2 is the weight decay term. ||
As an alternative to BinaryConnect, we can utilize REINFORCE to train binarized weights as well. Consider a network that has binarized parameters α, the goal of updating binarized parameters is to ï¬nd the optimal binary gates g that maximizes a certain reward, denoted as R( ). Here we assume the · network only has one mixed operation for ease of illustration. Therefore, according to REINFORCE (Williams, 1992), we have the following updates for binarized parameters: Ng)] =
I(a) = EgralR(Ny)] = >> piR(N (e = 0%)), Vad (a) = » R(N(e = 0;)) Vapi = » RIN (e = 01))piVa log(pi), M = Bya/R(Ns)Va log((9))] © a7 YBN) Va loa(n(9')) (8) i=1
i=1
where gi denotes the ith sampled binary gates, p(gi) denotes the probability of sampling gi accord- Ngi is the compact network according to the binary gates gi. Since Eq. (8) does ing to Eq. (2) and not require R( Ng) to be differentiable w.r.t. g, it can thus handle non-differentiable objectives. An interesting observation is that Eq. (8) has a similar form to the standard NAS (Zoph & Le, 2017), while it is not a sequential decision-making process and no RNN meta-controller is used in our case. Furthermore, since both gradient-based updates and REINFORCE-based updates are essentially two different update rules to the same binarized architecture parameters, it is possible to combine them to form a new update rule for the architecture parameters.
# 4 EXPERIMENTS AND RESULTS
We demonstrate the effectiveness of our proposed method on two benchmark datasets (CIFAR-10 and ImageNet) for the image classiï¬cation task. Unlike previous NAS works (Zoph et al., 2018; Liu et al., 2018c) that ï¬rst learn CNN blocks on CIFAR-10 under small-scale setting (e.g. fewer blocks), then transfer the learned block to ImageNet or CIFAR-10 under large-scale setting by repeatedly stacking it, we directly learn the architectures on the target task (either CIFAR-10 or ImageNet) and target hardware (GPU, CPU and mobile phone) while allowing each block to be speciï¬ed.
4.1 EXPERIMENTS ON CIFAR-10
Architecture Space. For CIFAR-10 experiments, we use the tree-structured architecture space that is introduced by Cai et al. (2018b) with PyramidNet (Han et al., 2017) as the backbone4. Speciï¬cally,
4The list of operations in the candidate set is provided in the appendix.
6
Published as a conference paper at ICLR 2019
Model DenseNet-BC (Huang et al., 2017) PyramidNet (Han et al., 2017) Shake-Shake + c/o (DeVries & Taylor, 2017) PyramidNet + SD (Yamada et al., 2018) ENAS + c/o (Pham et al., 2018) DARTS + c/o (Liu et al., 2018c) NASNet-A + c/o (Zoph et al., 2018) PathLevel EAS + c/o (Cai et al., 2018b) AmoebaNet-B + c/o (Real et al., 2018) Proxyless-R + c/o (ours) Proxyless-G + c/o (ours) Params Test error (%) 25.6M 26.0M 26.2M 26.0M 4.6M 3.4M 27.6M 14.3M 34.9M 5.8M 5.7M 3.46 3.31 2.56 2.31 2.89 2.83 2.40 2.30 2.13 2.30 2.08
Table 1: ProxylessNAS achieves state-of-the-art performance on CIFAR-10.
we replace all 3 3 convolution layers in the residual blocks of a PyramidNet with tree-structured cells, each of which has a depth of 3 and the number of branches is set to be 2 at each node (except the leaf nodes). For further details about the tree-structured architecture space, we refer to the original paper (Cai et al., 2018b). Additionally, we use two hyperparameters to control the depth and width of a network in this architecture space, i.e. B and F , which respectively represents the number of blocks at each stage (totally 3 stages) and the number of output channels of the ï¬nal block.
Training Details. We randomly sample 5,000 images from the training set as a validation set for learning architecture parameters which are updated using the Adam optimizer with an initial learn- ing rate of 0.006 for the gradient-based algorithm (Section 3.2.1) and 0.01 for the REINFORCE- based algorithm (Section 3.3.2). In the following discussions, we refer to these two algorithms as Proxyless-G (gradient) and Proxyless-R (REINFORCE) respectively.
After the training process of the over-parameterized network completes, a compact network is de- rived according to the architecture parameters, as discussed in Section 3.2.1. Next, we train the compact network using the same training settings except that the number of training epochs in- creases from 200 to 300. Additionally, when the DropPath regularization (Zoph et al., 2018; Huang et al., 2016) is adopted, we further increase the number of training epochs to 600 (Zoph et al., 2018).
Results. We apply the proposed method to learn architectures in the tree-structured architecture space with B = 18 and F = 400. Since we do not repeat cells and each cell has 12 learnable edges, totally 12
Ã
Ã
The test error rate results of our proposed method and other state-of-the-art architectures on CIFAR- 10 are summarized in Table 1, where âc/oâ indicates the use of Cutout (DeVries & Taylor, 2017). Compared to these state-of-the-art architectures, our proposed method can achieve not only lower test error rate but also better parameter efï¬ciency. Speciï¬cally, Proxyless-G reaches a test error rate of 2.08% which is slightly better than AmoebaNet-B (Real et al., 2018) (the previous best archi- tecture on CIFAR-10). Notably, AmoebaNet-B uses 34.9M parameters while our model only uses 5.7M parameters which is 6 fewer than AmoebaNet-B. Furthermore, compared with PathLevel EAS (Cai et al., 2018b) that also explores the tree-structured architecture space, both Proxyless-G and Proxyless-R achieves similar or lower test error rate results with half fewer parameters. The strong empirical results of our ProxylessNAS demonstrate the beneï¬ts of directly exploring a large architecture space instead of repeatedly stacking the same block.
4.2 EXPERIMENTS ON IMAGENET
For ImageNet experiments, we focus on learning efï¬cient CNN architectures (Iandola et al., 2016; Howard et al., 2017; Sandler et al., 2018; Zhu et al., 2018) that have not only high accuracy but also low latency on speciï¬c hardware platforms. Therefore, it is a multi-objective NAS task (Hsu et al., 2018; Dong et al., 2018; Elsken et al., 2018a; He et al., 2018; Wang et al., 2018; Tan et al., 2018), where one of the objectives is non-differentiable (i.e. latency). We use three different hard- ware platforms, including mobile phone, GPU and CPU, in our experiments. The GPU latency is measured on V100 GPU with a batch size of 8 (single batch makes GPU severely under-utilized). The CPU latency is measured under batch size 1 on a server with two 2.40GHz Intel(R) Xeon(R)
7
Published as a conference paper at ICLR 2019
Mobile |Hardware] No No Search cost Model Top-! | Top-5 Latency | -aware | Proxy | Repeat |(GPU hours) MobileNetV1 [16] 70.6 89.5 113ms - - x Manual MobileNetV2 [30] 72.0 91.0 75ms - - x Manual NASNet-A [38] 74.0 | 91.3 183ms x x x 48, 000 AmoebaNet-A [29] 74.5 92.0 190ms x x x 75, 600 MnasNet [31] 74.0 91.8 76ms v x x 40, 000 MnasNet (our impl.) | 74.0 91.8 79ms v x x 40, 000 Proxyless-G (mobile)| 71.8 90.3 83ms x v v 200 Proxyless-G + LL 74.2 91.7 79ms v v v 200 Proxyless-R (mobile) | 74.6 92.2 78ms v v v 200
Table 2: ProxylessNAS achieves state-of-the art accuracy (%) on ImageNet (under mobile latency less search cost in GPU hours. âLLâ indicates latency regularization constraint loss. Details of MnasNetâs search cost are provided in appendix C.
78 76.7] 76+ TT.83x faster a7 =â ae z r § 724 v < 2 704 Ss |68.2, e 68 + 7 65.44 -%- MobileNetv2 66} 7 -@ ProxylessNAS x 40 60 80 100 120 140 Mobile Latency (ms)
78 130 76.7] =k Z| 76+ âé ° . TT.83x faster a7 120 Â¥ =â ae Fa z r â 110 § 724 z v = < 8 2 704 100 Ss |68.2, e 68 + 7 so Sf 65.44 -%- MobileNetv2 ¢ 66} 7 -@ ProxylessNAS 2 x 20 12. 40 60 80 100 120 140 80 90 100 110 120 130 Mobile Latency (ms) Estimated (ms)
Figure 4: ProxylessNAS consistently outperforms MobileNetV2 under vari- ous latency settings.
Figure 5: Our mobile latency model is close to y = x. The latency RMSE is 0.75ms.
CPU E5-2640 v4. The mobile latency is measured on Google Pixel 1 phone with a batch size of [LAT (m)/T ]w as the optimization goal, where ACC(m) 1. For Proxyless-R, we use ACC(m) Ã denotes the accuracy of model m, LAT (m) denotes the latency of m, T is the target latency and w is a hyperparameter for controlling the trade-off between accuracy and latency.
Additionally, on mobile phone, we use the latency prediction model (Appendix B) during architec- ture search. As illustrated in Figure 5, we observe a strong correlation between the predicted latency and real measured latency on the test set, suggesting that the latency prediction model can be used to replace the expensive mobile farm infrastructure (Tan et al., 2018) with little error introduced.
Architecture Space. We use MobileNetV2 (Sandler et al., 2018) as the backbone to build the archi- tecture space. Speciï¬cally, rather than repeating the same mobile inverted bottleneck convolution (MBConv), we allow a set of MBConv layers with various kernel sizes and expansion ratios . To enable a direct trade-off between width and depth, we initiate a deeper over-parameterized 3, 6 } { network and allow a block with the residual connection to be skipped by adding the zero operation to the candidate set of its mixed operation. In this way, with a limited latency budget, the network can either choose to be shallower and wider by skipping more blocks and using larger MBConv layers or choose to be deeper and thinner by keeping more blocks and using smaller MBConv layers.
Training Details. We randomly sample 50,000 images from the training set as a validation set during the architecture search. The settings for updating architecture parameters are the same as CIFAR-10 experiments except the initial learning rate is 0.001. The over-parameterized network is trained on the remaining training images with batch size 256.
8
Published as a conference paper at ICLR 2019
Model MobileNetV2 (Sandler et al., 2018) Shufï¬eNetV2 (1.5) (Ma et al., 2018) ResNet-34 (He et al., 2016) NASNet-A (Zoph et al., 2018) DARTS (Liu et al., 2018c) MnasNet (Tan et al., 2018) Proxyless (GPU) Top-1 Top-5 GPU latency 72.0 72.6 73.3 74.0 73.1 74.0 75.1 91.0 - 91.4 91.3 91.0 91.8 92.5 6.1ms 7.3ms 8.0ms 38.3ms - 6.1ms 5.1ms
Table 3: ImageNet Accuracy (%) and GPU latency (Tesla V100) on ImageNet.
ImageNet Classiï¬cation Results. We ï¬rst apply our ProxylessNAS to learn specialized CNN models on the mobile phone. The summarized results are reported in Table 2. Compared to Mo- bileNetV2, our model improves the top-1 accuracy by 2.6% while maintaining a similar latency on the mobile phone. Furthermore, by rescaling the width of the networks using a multiplier (San- dler et al., 2018; Tan et al., 2018), it is shown in Figure 4 that our model consistently outperforms MobileNetV2 by a signiï¬cant margin under all latency settings. Speciï¬cally, to achieve the same level of top-1 accuracy performance (i.e. around 74.6%), MobileNetV2 has 143ms latency while our model only needs 78ms (1.83 faster). While compared with MnasNet (Tan et al., 2018), our model can achieve 0.6% higher top-1 accuracy with slightly lower mobile latency. More importantly, we are much more resource efï¬cient: the GPU-hour is 200
Ã
Additionally, we also observe that Proxyless-G has no incentive to choose computation-cheap op- erations if were not for the latency regularization loss. Its resulting architecture initially has 158ms latency on Pixel 1. After rescaling the network using the multiplier, its latency reduces to 83ms. However, this model can only achieve 71.8% top-1 accuracy on ImageNet, which is 2.4% lower than the result given by Proxyless-G with latency regularization loss. Therefore, we conclude that it is essential to take latency as a direct objective when learning efï¬cient neural networks.
Besides the mobile phone, we also apply our ProxylessNAS to learn specialized CNN models on GPU and CPU. Table 3 reports the results on GPU, where we ï¬nd that our ProxylessNAS can still achieve superior performances compared to both human-designed and automatically searched architectures. Speciï¬cally, compared to MobileNetV2 and MnasNet, our model improves the top-1 accuracy by 3.1% and 1.1% respectively while being 1.2 faster. Table 4 shows the summarized results of our searched models on three different platforms. An interesting observation is that models optimized for GPU do not run fast on CPU and mobile phone, vice versa. Therefore, it is essential to learn specialized neural networks for different hardware architectures to achieve the best efï¬ciency on different hardware.
Specialized Models for Different Hardware. Figure 6 demonstrates the detailed architectures of our searched CNN models on three hardware platforms: GPU/CPU/Mobile. We notice that the ar- chitecture shows different preferences when targeting different platforms: (i) The GPU model is shallower and wider, especially in early stages where the feature map has higher resolution; (ii) The GPU model prefers large MBConv operations (e.g. 7 7 MBConv6), while the CPU model would go for smaller MBConv operations. This is because GPU has much higher parallelism than CPU so it can take advantage of large MBConv operations. Another interesting observation is that our searched models on all platforms prefer larger MBConv operations in the ï¬rst block within each stage where the feature map is downsampled. We suppose it might because larger MBConv oper- ations are beneï¬cial for the network to preserve more information when downsampling. Notably, such kind of patterns cannot be captured in previous NAS methods as they force the blocks to share the same structure (Zoph et al., 2018; Liu et al., 2018a).
# 5 CONCLUSION
We introduced ProxylessNAS that can directly learn neural network architectures on the target task and target hardware without any proxy. We also reduced the search cost (GPU-hours and GPU memory) of NAS to the same level of normal training using path binarization. Beneï¬ting from the direct search, we achieve strong empirical results on CIFAR-10 and ImageNet. Furthermore,
9
(a) Efï¬cient GPU model found by ProxylessNAS. (b) Efï¬cient CPU model found by ProxylessNAS.
(a) Efï¬cient GPU model found by ProxylessNAS.
Published as a conference paper at ICLR 2019 8x224x224 2ax112x112 MB3 7x7 MB3 3x3 40xt12x112 32x56x56 56x28x28 56x28%28 11extaxt4 saxtaxtg 12ext4xi4 1Baxtaxt4 te6x14x14 256x7x7 256x7X7 256x7x7 256x7x7 432x747 Conv 3x3 MB6 7x7 MB6 5x5 MB6 7x7 MB6 7x7 MB6 7x7 MB6 5x5 MB6 7x7 Pooling FC
Published as a conference paper at ICLR 2019
(c) Efï¬cient mobile model found by ProxylessNAS.
Figure 6: Efï¬cient models optimized for different hardware. âMBConv3â and âMBConv6â denote mobile inverted bottleneck convolution layer with an expansion ratio of 3 and 6 respectively. In- sights: GPU prefers shallow and wide model with early pooling; CPU prefers deep and narrow model with late pooling. Pooling layers prefer large and wide kernel. Early layers prefer small kernel. Late layers prefer large kernel.
Model Proxyless (GPU) Proxyless (CPU) Proxyless (mobile) Top-1 (%) GPU latency CPU latency Mobile latency 75.1 75.3 74.6 5.1ms 7.4ms 7.2ms 204.9ms 138.7ms 164.1ms 124ms 116ms 78ms
Table 4: Hardware prefers specialized models. Models optimized for GPU does not run fast on CPU and mobile phone, vice versa. ProxylessNAS provides an efï¬cient solution to search a specialized neural network architecture for a target hardware architecture, while cutting down the search cost by 200
Ã
we allow specializing network architectures for different platforms by directly incorporating the measured hardware latency into optimization objectives. We compared the optimized models on CPU/GPU/mobile and raised the awareness of the needs of specializing neural network architecture for different hardware architectures.
# ACKNOWLEDGMENTS
We thank MIT Quest for Intelligence, MIT-IBM Watson AI lab, SenseTime, Xilinx, Snap Research for supporting this work. We also thank AWS Cloud Credits for Research Program providing us the cloud computing resources.
# REFERENCES
Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understand- ing and simplifying one-shot architecture search. In ICML, 2018.
Andrew Brock, Theodore Lim, James M Ritchie, and Nick Weston. Smash: one-shot model archi- tecture search through hypernetworks. In ICLR, 2018.
Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efï¬cient architecture search by network transformation. In AAAI, 2018a.
10
Published as a conference paper at ICLR 2019
Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-level network transformation for efï¬cient architecture search. In ICML, 2018b.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In NIPS, 2015.
Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
Jin-Dong Dong, An-Chieh Cheng, Da-Cheng Juan, Wei Wei, and Min Sun. Dpp-net: Device-aware progressive search for pareto-optimal neural architectures. In ECCV, 2018.
Thomas Elsken, Jan-Hendrik Metzen, and Frank Hutter. Simple and efï¬cient architecture search for convolutional neural networks. arXiv preprint arXiv:1711.04528, 2017.
Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Multi-objective architecture search for cnns. arXiv preprint arXiv:1804.09081, 2018a.
Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. arXiv preprint arXiv:1808.05377, 2018b.
Dongyoon Han, Jiwhan Kim, and Junmo Kim. Deep pyramidal residual networks. In CVPR, 2017.
Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efï¬cient neural network. In NIPS, 2015.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In CVPR, 2016.
Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. Amc: Automl for model compression and acceleration on mobile devices. In ECCV, 2018.
Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efï¬cient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
Chi-Hung Hsu, Shu-Huan Chang, Da-Cheng Juan, Jia-Yu Pan, Yu-Ting Chen, Wei Wei, and Shih- Chieh Chang. Monas: Multi-objective neural architecture search using reinforcement learning. arXiv preprint arXiv:1806.10332, 2018.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In ECCV, 2016.
Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In CVPR, 2017.
Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and¡ 0.5 mb model size. arXiv preprint arXiv:1602.07360, 2016.
Purushotham Kamath, Abhishek Singh, and Debo Dutta. Neural architecture construction using envelopenets. arXiv preprint arXiv:1803.06744, 2018.
Chenxi Liu, Barret Zoph, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In ECCV, 2018a.
Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hi- erarchical representations for efï¬cient architecture search. In ICLR, 2018b.
Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018c.
11
Published as a conference paper at ICLR 2019
Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learn- ing efï¬cient convolutional networks through network slimming. In ICCV, 2017.
Renqian Luo, Fei Tian, Tao Qin, and Tie-Yan Liu. Neural architecture optimization. arXiv preprint arXiv:1808.07233, 2018.
Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufï¬enet v2: Practical guidelines for efï¬cient cnn architecture design. In ECCV, 2018.
Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. Efï¬cient neural architecture search via parameter sharing. In ICML, 2018.
Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classiï¬er architecture search. arXiv preprint arXiv:1802.01548, 2018.
Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mo- bilenetv2: Inverted residuals and linear bottlenecks. In CVPR, 2018.
Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, and Quoc V Le. Mnasnet: Platform- aware neural architecture search for mobile. arXiv preprint arXiv:1807.11626, 2018.
Kuan Wang, Zhijian Liu, Yujun Lin, Ji Lin, and Song Han. Haq: Hardware-aware automated quan- tization. arXiv, 2018.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. In Reinforcement Learning. 1992.
Yoshihiro Yamada, Masakazu Iwamura, and Koichi Kise. Shakedrop regularization. arXiv preprint arXiv:1802.02375, 2018.
Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, and Cheng-Lin Liu. Practical block-wise neural network architecture generation. In CVPR, 2018.
Ligeng Zhu, Ruizhi Deng, Michael Maire, Zhiwei Deng, Greg Mori, and Ping Tan. Sparsely aggre- gated convolutional networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 186â201, 2018.
Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. In ICLR, 2017.
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In CVPR, 2018.
12
Published as a conference paper at ICLR 2019
# A THE LIST OF CANDIDATE OPERATIONS USED ON CIFAR-10
We adopt the following 7 operations in our CIFAR-10 experiments:
3 à 3 dilated depthwise-separable convolution ⢠Identity ⢠3 à 3 depthwise-separable convolution ⢠5 à 5 depthwise-separable convolution ⢠7 à 7 depthwise-separable convolution ⢠3 à 3 average pooling ⢠3 à 3 max pooling
# B MOBILE LATENCY PREDICTION
Measuring the latency on-device is accurate but not ideal for scalable neural architecture search. There are two reasons: (i) Slow. As suggested in TensorFlow-Lite, we need to average hundreds of runs to produce a precise measurement, approximately 20 seconds. This is far more slower than a single forward / backward execution. (ii) Expensive. A lot of mobile devices and software engineering work are required to build an automatic pipeline to gather the latency from a mobile farm. Instead of direct measurement, we build a model to estimate the latency. We need only 1 phone rather than a farm of phones, which has only 0.75ms latency RMSE. We use the latency model to search, and we use the measured latency to report the ï¬nal modelâs latency.
We sampled 5k architectures from our candidate space, where 4k architectures are used to build the latency model and the rest are used for test. We measured the latency on Google Pixel 1 phone using TensorFlow-Lite. The features include (i) type of the operator (ii) input and output feature map size (iii) other attributes like kernel size, stride for convolution and expansion ratio.
# C DETAILS OF MNASNETâS SEARCH COST
Mnas (Tan et al., 2018) trains 8,000 mobile-sized models on ImageNet, each of which is trained for 5 epochs for learning architectures. If these models are trained on V100 GPUs, as done in our experiments, the search cost is roughly 40,000 GPU hours.
# D IMPLEMENTAION OF THE GRADIENT-BASED ALGORITHM
A naive implementation of the gradient-based algorithm (see Eq. (4)) is calculating and storing oj(x) in the forward step to later compute âL/âgj in the backward step:
âL/âgj = reduce sum( âyL denotes the gradient w.r.t. the output of the mixed operation y, â
(9) oj(x)),
# âyL
where wise product, and âreduce sum( )â denotes the sum of all elements. · Notice that oj(x) is only used for calculating âL/âgj when jth path is not active (i.e. not involved in calculating y). So we do not need to actually allocate GPU memory to store oj(x). Instead, we can calculate oj(x) after getting âyL in the backward step, use oj(x) to compute âL/âgj following Eq. (9), then release the occupied GPU memory. In this way, without the approximation discussed in Section 3.2.1, we can reduce the GPU memory cost to the same level of training a compact model.
13 | {
"id": "1602.07360"
} |
1812.00090 | Mixed Precision Quantization of ConvNets via Differentiable Neural Architecture Search | Recent work in network quantization has substantially reduced the time and
space complexity of neural network inference, enabling their deployment on
embedded and mobile devices with limited computational and memory resources.
However, existing quantization methods often represent all weights and
activations with the same precision (bit-width). In this paper, we explore a
new dimension of the design space: quantizing different layers with different
bit-widths. We formulate this problem as a neural architecture search problem
and propose a novel differentiable neural architecture search (DNAS) framework
to efficiently explore its exponential search space with gradient-based
optimization. Experiments show we surpass the state-of-the-art compression of
ResNet on CIFAR-10 and ImageNet. Our quantized models with 21.1x smaller model
size or 103.9x lower computational cost can still outperform baseline quantized
or even full precision models. | http://arxiv.org/pdf/1812.00090 | Bichen Wu, Yanghan Wang, Peizhao Zhang, Yuandong Tian, Peter Vajda, Kurt Keutzer | cs.CV | null | null | cs.CV | 20181130 | 20181130 | 8 1 0 2
v o N 0 3 ] V C . s c [
1 v 0 9 0 0 0 . 2 1 8 1 : v i X r a
# MIXED PRECISION QUANTIZATION OF CONVNETS VIA DIFFERENTIABLE NEURAL ARCHITECTURE SEARCH
Bichen Wuâ Berkeley AI Research UC Berkeley bichen@berkeley.edu
Yanghan Wang Mobile Vision Facebook yanghan@fb.com
Peizhao Zhang Mobile Vision Facebook stzpz@fb.com
Yuandong Tian Facebook AI Research Facebook yuandong@fb.com
Peter Vajda Mobile Vision Facebook vajdap@fb.com
Kurt Keutzer Berkeley AI Research UC Berkeley keutzer@berkeley.edu
# ABSTRACT
Recent work in network quantization has substantially reduced the time and space complexity of neural network inference, enabling their deployment on embedded and mobile devices with limited computational and memory resources. However, existing quantization methods often represent all weights and activations with the same precision (bit-width). In this paper, we explore a new dimension of the de- sign space: quantizing different layers with different bit-widths. We formulate this problem as a neural architecture search problem and propose a novel differ- entiable neural architecture search (DNAS) framework to efï¬ciently explore its exponential search space with gradient-based optimization. Experiments show we surpass the state-of-the-art compression of ResNet on CIFAR-10 and ImageNet. Our quantized models with 21.1x smaller model size or 103.9x lower computa- tional cost can still outperform baseline quantized or even full precision models.
# INTRODUCTION
Recently, ConvNets have become the de-facto method in a wide range of computer vision tasks, achieving state-of-the-art performance. However, due to high computation complexity, it is non- trivial to deploy ConvNets to embedded and mobile devices with limited computational and storage budgets. In recent years, research efforts in both software and hardware have focused on low- precision inference of ConvNets. Most of the existing quantization methods use the same precision for all (or most of) the layers of a ConvNet. However, such uniform bit-width assignment can be suboptimal since quantizing different layers can have different impact on the accuracy and efï¬ciency of the overall network. Although mixed precision computation is widely supported in a wide range of hardware platforms such as CPUs, FPGAs, and dedicated accelerators, prior efforts have not thoroughly explored the mixed precision quantization of ConvNets.
For a ConvNet with N layers and M candidate precisions in each layer, we want to ï¬nd an op- timal assignment of precisions to minimize the cost in terms of model size, memory footprint or computation, while keeping the accuracy. An exhaustive combinatorial search has exponential time complexity (
# O
In this work, we propose a novel, effective, and efï¬cient differentiable neural architecture search (DNAS) framework to solve this problem. The idea is illustrated in Fig. 1. The problem of neural architecture search (NAS) aims to ï¬nd the optimal neural net architecture in a given search space. In the DNAS framework, we represent the architecture search space with a stochastic super net where nodes represent intermediate data tensors of the super net (e.g., feature maps of a ConvNet) and edges represent operators (e.g., convolution layers in a ConvNet). Any candidate architecture can be seen as a child network (sub-graph) of the super net. When executing the super net, edges
âWork done while interning at Facebook.
1
are executed stochastically and the probability of execution is parameterized by some architecture parameters θ. Under this formulation, we can relax the NAS problem and focus on ï¬nding the optimal θ that gives the optimal expected performance of the stochastic super net. The child network can then be sampled from the optimal architecture distribution.
# We with
solve for the optimal architecture parameter 0 by training the stochastic super net with SGD respect to both the networkâs weights and the architecture parameter 0. To compute the gra- of 0, we need to back propagate gradients through discrete random variables that control the edge execution. To address this, we use the Gumbel SoftMax function (Jang et al.](2016)) âsoft-controlâ the edges. This allows us to directly compute the gradient estimation of 8 with controllable trade-off between bias and variance. Using this technique, the stochastic super net fully differentiable and can be effectively and efficiently trained by SGD. OL vy ââ- Executed ed HQ â-£ (0,9) Not executed edges wen ---» Gradient update Input 1 | oe Edge Probability Pg1,2 Edge Probability Pyxâ1.7 <> 00
# dient stochastic
# to
# a
# becomes
Figure 1: Illustration of a stochastic super net. Nodes represent data tensors and edges represent op- erators. Edges are executed stochastically following the distribution Pθ. θ denotes the architecture parameter and w denotes network weights. The stochastic super net is fully differentiable.
We apply the DNAS framework to solve the mixed precision quantization problem, by constructing a super net whose macro architecture (number of layers, ï¬lter size of each layer, etc.) is the same as the target network. Each layer of the super net contains several parallel edges representing convolu- tion operators with quantized weights and activations with different precisions. We show that using DNAS to search for layer-wise precision assignments for ResNet models on CIFAR10 and Ima- geNet, we surpass the state-of-the-art compression. Our quantized models with 21.1x smaller model size or 103.9x smaller computational cost can still outperform baseline quantized or even full preci- sion models. The DNAS pipeline is very fast, taking less than 5 hours on 8 V100 GPUs to complete a search on ResNet18 for ImageNet, while previous NAS algorithms (such as Zoph & Le (2016)) typically take a few hundred GPUs for several days. Last, but not least, DNAS is a general archi- tecture search framework that can be applied to other problems such as efï¬cient ConvNet-structure discovery. Due to the page limit, we will leave the discussion to future publications.
# 2 RELATED WORK
Network quantization received a lot of research attention in recent years. Early works such as Han et al. (2015); Zhu et al. (2016); Leng et al. (2017) mainly focus on quantizing neural network weights while still using 32-bit activations. Quantizing weights can reduce the model size of the network and therefore reduce storage space and over-the-air communication cost. More recent works such as Rastegari et al. (2016); Zhou et al. (2016); Choi et al. (2018); Jung et al. (2018); Zhuang et al. (2018) quantize both weights and activations to reduce the computational cost on CPUs and dedicated hardware accelerators. Most of the works use the same precision for all or most of the layers of a network. The problem of mixed precision quantization is rarely explored.
Neural Architecture Search becomes an active research area in recent two years. Zoph & Le (2016) ï¬rst propose to use reinforcement learning to generate neural network architectures with high accuracy and efï¬ciency. However, the proposed method requires huge amounts of computing resources. Pham et al. (2018) propose an efï¬cient neural architecture search (ENAS) framework that drastically reduces the computational cost. ENAS constructs a super network whose weights are shared with its child networks. They use reinforcement learning to train an RNN controller to sample better child networks from the super net. More recently, Liu et al. (2018) propose DARTS, a differentiable architecture search framework. DARTS also constructs a super net whose edges (can- didate operators) are parameterized with coefï¬cients computed by a SoftMax function. The super net is trained and edges with the highest coefï¬cients are kept to form the child network. Our pro- posed DNAS framework is different from DARTS since we use a stochastic super net â in DARTS, the execution of edges are deterministic and the entire super net is trained together. In DNAS, when training the super net, child networks are sampled, decoupled from the super net and trained inde-
2
pendently. The idea of super net and stochastic super net is also used in Saxena & Verbeek (2016); Veniat & Denoyer (2017) to explore macro architectures of neural nets. Another related work is He et al. (2018), which uses AutoML for model compression through network pruning. To the best of our knowledge, we are the ï¬rst to apply neural architecture search to model quantization.
# 3 MIXED PRECISION QUANTIZATION
Normally 32-bit (full-precision) ï¬oating point numbers are used to represent weights and activations of neural nets. Quantization projects full-precision weights and activations to ï¬xed-point numbers with lower bit-width, such as 8, 4, and 1 bit. We follow DoReFa-Net (Zhou et al. (2016)) to quantize weights and PACT (Choi et al. (2018)) to quantize activations. See Appendix A for more details.
For mixed precision quantization, we assume that we have the ï¬exibility to choose different preci- sions for different layers of a network. Mixed precision computation is widely supported by hard- ware platforms such as CPUs, FPGAs, and dedicated accelerators. Then the problem is how should we decide the precision for each layer such that we can maintain the accuracy of the network while minimizing the cost in terms of model size or computation. Previous methods use the same precision for all or most of the layers. We expand the design space by choosing different precision assignment (M N ) time from M candidate precisions at N different layers. While exhaustive search yields complexity, our automated approach is efï¬cient in ï¬nding the optimal precision assignment.
# 4 DIFFERENTIABLE NEURAL ARCHITECTURE SEARCH
4.1 NEURAL ARCHITECTURE SEARCH
Formally, the neural architecture search (NAS) problem can be formulated as
min aâA denotes the architecture space. wa denotes the weights of Here, a denotes a neural architecture, architecture a. ) represents the loss function on a target dataset given the architecture a and its , ( · parameter wa. The loss function is differentiable with respect to wa, but not to a. As a consequence, the computational cost of solving the problem in (1) is very high. To solve the inner optimization problem requires to train a neural network a to convergence, which can take days. The outer problem has a discrete search space with exponential complexity. To solve the problem efï¬ciently, we need to avoid enumerating the search space and evaluating each candidate architecture one-by-one.
# 4.2 DIFFERENTIABLE NEURAL ARCHITECTURE SEARCH
We discuss the idea of differentiable neural architecture search (DNAS). The idea is illustrated in . The super net Fig. 1. We start by constructing a super net to represent the architecture space is essentially a computational DAG (directed acyclic graph) that is denoted as G = (V, E). Each V of the super net represents a data tensor. Between two nodes vi and vj, there can be node vi â K ij edges connecting them, indexed as eij k . Each edge represents an operator parameterized by its trainable weight wij k . The operator takes the data tensor at vi as its input and computes its output as k (vi; wij eij
k ). To compute the data tensor at vj, we sum the output of all incoming edges as k (vi; wij eij
vj = k ). (2)
# i,k
can be represented by a subgraph With this representation, any neural net architecture a Ga(Va, Ea) with Va â E. For simplicity, in a candidate architecture, we keep all the V, Ea â nodes of the graph, so Va = V . And for a pair of nodes vi, vj that are connected by K ij candidate edges, we only select one edge. Formally, in a candidate architecture a, we re-write equation (2) as mij k eij
k (vi; wij vj = k ), (3)
# i,k
where mj! ⬠{0,1} is an âedge-maskâ and >, mi! = 1. Note that though the value of mi! is discrete, we can still compute the gradient to m;â. Let m be a vector that consists of mie for all
3
eij k â re-write the loss function in equation (1) to an equivalent form as , we can encode it using an âedge-maskâ vector ma. So we E. For any architecture a â A (ma, wa).
# L
We next convert the super net to a stochastic super net whose edges are executed stochastically. For each edge eij k is sampled to be 1. We assign each edge a parameter θij
exp(0iâ) <a Viet exp(O,) (4) Pos (mi? = 1) = softmax(6;?|07) =
The stochastic super net is now parameterized by θ, a vector whose elements are θij E. From the distribution Pθ, we can sample a mask vector ma that corresponds to a candidate ar- chitecture a . We can further compute the expected loss of the stochastic super net as Eaâ¼Pθ [ (ma, wa)]. The expectation of the loss function is differentiable with respect to wa, but not directly to θ, since we cannot directly back-propagate the gradient to θ through the discrete random variable ma. To estimate the gradient, we can use Straight-Through estimation (Bengio et al. (2013)) or REINFORCE (Williams (1992)). Our ï¬nal choice is to use the Gumbel Softmax technique (Jang et al. (2016)), which will be explained in the next section. Now that the expectation of the loss function becomes fully differentiable, we re-write the problem in equation (1) as
min θ min wa Eaâ¼Pθ [ L (ma, wa)] (5)
The combinatorial optimization problem of solving for the optimal architecture a is relaxed to solving for the optimal architecture-distribution parameter θ that minimizes the expected loss. Once we obtain the optimal θ, we acquire the optimal architecture by sampling from Pθ.
4.3 DNAS WITH GUMBEL SOFTMAX
We use stochastic gradient descent (SGD) to solve Equation (5). The optimization process is also denoted as training the stochastic super net. We compute the Monte Carlo estimation of the gradient
B 1 Vow, Ea~Po [C(ma, Wa)] © FY) Vow. L(Ma,Wa,): (6) i=1
where ai is an architecture sampled from distribution Pθ and B is the batch size. Equation (6) provides an unbiased estimation of the gradient, but it has high variance, since the size of the archi- tecture space is orders of magnitude larger than any feasible batch size B. Such high variance for gradient estimation makes it difï¬cult for SGD to converge.
To address this issue, we use Gumbel Softmax proposed by Jang et al. (2016); Maddison et al. (2016) to control the edge selection. For a node pair (vi, vj), instead of applying a âhardâ sampling and execute only one edge, we use Gumbel Softmax to apply a âsoftâ sampling. We compute mij
exp((O,â + 8y)/7) Xin exP((Og + 8R)/7)- m! = GumbelSoftmax(6Â¥/|67) = gi, ~ Gumbel(0, 1). (7)
k is a random variable drawn from the Gumbel distribution. Note that now mij gij tinuous random variable. It is directly differentiable with respect to θij gradient through the random variable gij to θ can be computed as
OL(Ma, Wa) ; dm,(6, g) Om, oo ®) VoEw Po [L(â¢a, Wa)| = Egecumbet(o,1)
, mij A temperature coefï¬cient Ï is used to control the behavior of the Gumbel Softmax. As Ï become continuous random variable following a uniform distribution. Therefore, in equation (3), all edges are executed and their outputs are averaged. The gradient estimation in equation (6) are biased but the variance is low, which is favorable during the initial stage of the training. As Ï 0, mij gradually becomes a discrete random variable following the categorical distribution of Pθij . When computing equation (3), only one edge is sampled to be executed. The gradient estimation
4
then becomes unbiased but the variance is high. This is favorable towards the end of the training. In our experiment, we use an exponential decaying schedule to anneal the temperature as
Ï = T0 exp( η epoch), (9)
â where T0 is the initial temperature when training begins. We decay the temperature exponentially after every epoch. Using the Gumbel Softmax trick effectively stabilizes the super net training.
Ã
In some sense, our work is in the middle ground of two previous works: ENAS by Pham et al. (2018) and DARTS by Liu et al. (2018). ENAS samples child networks from the super net to be trained independently while DARTS trains the entire super net together without decoupling child networks from the super net. By using Gumbel Softmax with an annealing temperature, our DNAS pipeline behaves more like DARTS at the beginning of the search and behaves more like ENAS at the end.
4.4 THE DNAS PIPELINE
Based on the analysis above, we propose a differentiable neural architecture search pipeline, sum- marized in Algorithm 1. We ï¬rst construct a stochastic super net G with architecture parameter θ and weight w. We train G with respect to w and θ separately and alternately. Training the weight w optimizes all candidate edges (operators). However, different edges can have different impact on the overall performance. Therefore, we train the architecture parameter θ, to increase the probability to sample those edges with better performance, and to suppress those with worse performance. To ensure generalization, we split the dataset for architecture search into w, which is used speciï¬cally to train w, and
# X
In each epoch, we anneal the temperature Ï for gumbel softmax with the schedule in equation (9). To ensure w is sufï¬ciently trained before updating θ, we postpone the training of θ for Nwarmup epochs. Through the training, we draw samples a Pθ. These sampled architectures are then trained on the training dataset
Xtrain and evaluated on the test set
# Xtest.
Algorithm 1: The DNAS pipeline. Input: Stochastic super net G = (V, E) with parameter θ and w, searching dataset
training dataset Xtrain, test dataset ; 1 QA â â
2 for epoch = 0, N do · · · Ï Î· 3 â Train G with respect to w for one epoch; if epoch > Nwarmup then epoch); T0 exp( â à 4 5 Xtest; X w and Train G with respect to θ for one epoch; Sample architectures a 6 Pθ; Push a to QA; 7 â¼ end 8 9 end 10 for a 11 QA do Train a on Test a on â Xtrain to convergence; Xtest; Output: Trained architectures QA. 12 13 end X θ,
# 5 DNAS FOR MIXED PRECISION QUANTIZATION
We use the DNAS framework to solve the mixed precision quantization problem â deciding the optimal layer-wise precision assignment. For a ConvNet, we ï¬rst construct a super net that has the same âmacro-structureâ (number of layers, number of ï¬lters each layer, etc.) as the given network. As shown in Fig. 2. Each node vi in the super net corresponds to the output tensor (feature map) of layer-i. Each candidate edge ei,i+1 represents a convolution operator whose weights or activation are quantized to a lower precision.
In order to encourage using lower-precision weights and activations, we deï¬ne the loss function as
(a, wa) = CrossEntropy(a) (Cost(a)). (10)
# L
# Ã C
5
* Edge Probability 32-bit conv 1-bit conv Layer-(i) Layer-(i#1)
Figure 2: One layer of a super net for mixed precision quantization of a ConvNet. Nodes in the super net represent feature maps, edges represent convolution operators with different bit-widths.
Cost(a) denotes the cost of a candidate architecture and cross entropy term and the cost term. To compress the model size, we deï¬ne the cost as
Cost(a) = S~> mj x #PARAM(e;â) x weight-bit(eâ), (1) eZee
# eij k âE
) denotes the number of parameters of a convolution operator and weight-bit( where #PARAM( ) · · denotes the bit-width of the weight. mij k is the edge selection mask described in equation (3). Alternatively, to reduce the computational cost by jointly compressing both weights and activations, we use the cost function
Cost(a) = S~ mj) x #FLOP(e;!) x weight-bit(e;!) x act-bit(eyâ), (12) ev eE
# eij k âE
where #FLOP( · weight-bit( · vation. Note that in a candidate architecture, mij allow mij ) denotes the number of ï¬oating point operations of the convolution operator, ) denotes the bit-width of the acti- · . In the super net, we } ) denotes the bit-width of the weight and act-bit( k have binary values 0, 1 { k to be continuous so we can compute the expected cost of the super net..
To balance the cost term with the cross entropy term in equation (10), we deï¬ne
(Cost(a)) = β(log(Cost(a)))γ. (13)
# C
where β is a coefï¬cient to adjust the initial value of (Cost(a)) to be around 1. γ is a coefï¬cient to adjust the relative importance of the cost term vs. the cross-entropy term. A larger γ leads to a stronger cost term in the loss function, which favors efï¬ciency over accuracy.
# 6 EXPERIMENTS
6.1 CIFAR10 EXPERIMENTS
In the ï¬rst experiment, we focus on quantizing ResNet20, ResNet56, and ResNet110 (He et al. (2016a)) on CIFAR10 (Krizhevsky & Hinton (2009)) dataset. We start by focusing on reducing model size, since smaller models require less storage and communication cost, which is important for mobile and embedded devices. We only perform quantization on weights and use full-precision activations. We conduct mixed precision search at the block level â all layers in one block use the same precision. Following the convention, we do not quantize the ï¬rst and the last layer. We construct a super net whose macro architecture is exactly the same as our target network. For each block, we can choose a precision from . If the precision is 0, we simply skip this 0, 1, 2, 3, 4, 8, 32 } { block so the input and output are identical. If the precision is 32, we use the full-precision ï¬oating point weights. For all other precisions with k-bit, we quantize weights to k-bit ï¬xed-point numbers. See Appendix B for more experiment details.
Our experiment result is summarized in Table 1. For each quantized model, we report its accuracy and model size compression rate compared with 32-bit full precision models. The model size is computed by equation (11). Among all the models we searched, we report the one with the highest test accuracy and the one with the highest compression rate. We compare our method with Zhu
|
6
ResNet20 ResNet56 ResNet110 Acc Comp Acc Comp Acc Comp DNAS (ours) Full Most Accurate Most Efï¬cient 92.00 (-0.35) 92.35 16.6 1.0 94.12 (-0.30) 94.42 18.93 1.0 94.39 (-0.39) 94.78 20.3 1.0 92.72 (+0.37) 11.6 94.57 (+0.15) 14.6 95.07 (+0.29) 12.5 TTQ (Zhu et al. (2016)) Full 91.77 1.0 93.20 1.0 - - 2bit 91.13 (-0.64) 16.0 93.56 (+0.36) 16.0 - -
Table 1: Mixed Precision Quantization for ResNet on CIFAR10 dataset. We report results on ResNet
20, 56, 110 {
. In the table, we abbreviate accuracy as âAccâ and compression as âCompâ. }
Most Accurate Most Efï¬cient g1b1 4 2 g1b2 4 3 g1b3 3 0 g2b1 3 2 g2b2 3 4 g2b3 4 2 g3b1 4 3 g3b2 3 2 g3b3 1 1
Table 2: Layer-wise bit-widths for the most accurate vs. the most efï¬cient architecture of ResNet20.
et al. (2016), where they use 2-bit (ternary) weights for all the layers of the network, except the ï¬rst convolution and the last fully connect layer. From the table, we have the following observations: 1) All of our most accurate models out-perform their full-precision counterparts by up to 0.37% while still achieves 11.6 - 12.5X model size reduction. 2) Our most efï¬cient models can achieve 16.6 - 20.3X model size compression with accuracy drop less than 0.39%. 3) Compared with Zhu et al. (2016), our model achieves up to 1.59% better accuracy. This is partially due to our improved training recipe as our full-precision modelâs accuracy is also higher. But it still demonstrates that our models with searched mixed precision assignment can very well preserve the accuracy.
Table 2 compares the precision assignment for the most accurate and the most efï¬cient models for ResNet20. Note that for the most efï¬cient model, it directly skips the 3rd block in group-1. In Fig. 3, we plot the accuracy vs. compression rate of searched architectures of ResNet110. We observe that models with random precision assignment (from epoch 0) have signiï¬cantly worse compression while searched precision assignments generally have higher compression rate and accuracy.
Accuracy gain: +0.29% nn Compression: 12.5 Full Precision Accuracy: 94.78% Random ao © °' Samples . . ro Accuracy gain: -0.39% eerie @/â Compression: 20.3 10 15 20
Figure 3: Visualization of all searched architectures for ResNet110 and CIFAR10 dataset. x-axis is the compression rate of each model. y-axis is the accuracy.
IMAGENET EXPERIMENTS
We quantize ResNet18 and ResNet34 on the ImageNet ILSVRC2012 (Deng et al. (2009)) dataset. Different from the original ResNet (He et al. (2016a)), we use the âReLU-only preactivationâ ResNet from He et al. (2016b). Similar to the CIFAR10 experiments, we conduct mixed precision search at the block level. We do not quanitze the ï¬rst and the last layer. See Appendix B for more details.
We conduct two sets of experiments. In the ï¬rst set, we aim at compressing the model size, so we only quantize weights and use the cost function from equation (11). Each block contains convolution operators with weights quantized to -bit. In the second set, we aim at compressing computational cost. So we quantize both weights and activations and use the cost function from equation (12). Each block in the super net contains convolution operators with weights and acti-
7
ResNet18 ResNet34 Acc Comp Acc Comp Full 71.03 1.0 74.12 1.0 DNAS (ours) MA 71.21 (+0.18 ) 11.2 74.61 (+0.49) 10.6 ME 69.58 (-1.45) 21.1 73.37 (-0.75) 19.0 Full 69.6 1.0 TTQ 2bit 66.6 (-3.0) 16.0 - ADMM 3bit 68.0 (-1.6) 10.7
Table 3: Mixed Precision Quantization for ResNet on ImageNet for model size compression. In the table, we abbreviate accuracy as âAccâ and compression as âCompâ. âMAâ denotes the most accurate model from architecture search and âMEâ denotes the most efï¬cient model.
ResNet18 ResNet34 arch-1 71.01 Acc 71.03 Full Acc -0.02 Acc â 33.2 Comp 74.21 Acc Full Acc 74.12 Acc â +0.09 40.8 Comp DNAS (ours) arch-2 70.64 71.03 -0.39 62.9 73.98 74.12 -0.14 59.0 GroupNet arch-3 W4A4 W4A4 W4A4 W1A2G5 68.65 71.03 -2.38 103.5 73.23 74.12 -0.89 87.4 PACT DoReFA QIP 69.2 70.2 -1.0 64 68.1 70.2 -2.1 64 69.3 69.2 +0.1 64 67.6 69.7 -2.1 102.4 -
Table 4: Mixed Precision Quantization for ResNet on ImageNet for computational cost compression. We abbreviate accuracy as âAccâ and compression rate as âCompâ. âarch- â are three 1, 2, 3 } { searched architectures ranked by accuracy.
vations quantized to -bit. The ï¬rst number in the tuple (1, 4), (2, 4), (3, 3), (4, 4), (8, 8), (32, 32) } denotes the weight precision and the second denotes the activation precision. The DNAS search is very efï¬cient, taking less than 5 hours on 8 V100 GPUs to ï¬nish the search on ResNet18.
Our model size compression experiment is reported in Table 3. We report two searched results for each model. âMAâ denotes the searched architecture with the highest accuracy, and âMEâ denotes the most efï¬cient. We compare our results with TTQ (Zhu et al. (2016)) and ADMM (Leng et al. (2017)). TTQ uses ternary weights (stored by 2 bits) to quantize a network. For ADMM, we cite the result with conï¬guration where weights can have 7 values and are stored by 3 bits. We report the accuracy and model size compression rate of each model. From Table 3, we have the following observations: 1) All of our most accurate models out-perform full-precision models by up to 0.5% while achieving 10.6-11.2X reduction of model size. 2) Our most efï¬cient models can achieve 19.0 to 21.1X reduction of model size, still preserving competitive accuracy. 3) Compared with previous works, even our less accurate model has almost the same accuracy as the full-precision model with 21.1X smaller model size. This is partially because we use label-reï¬nery (Bagherinezhad et al. (2018)) to effectively boost the accuracy of quantized models. But it still demonstrate that our searched models can very well preserve the accuracy, despite its high compression rate.
Our experiment on computational cost compression is reported in Table 4. We report three searched architectures for each model. We report the accuracy and the compression rate of the computational cost of each architecture. We compute the computational cost of each model using equation (12). We compare our results with PACT (Choi et al. (2018)), DoReFA (Zhou et al. (2016)), QIP (Jung et al. (2018)), and GroupNet (Zhuang et al. (2018)). The ï¬rst three use 4-bit weights and activations. We compute their compression rate as (32/4) (32/4) = 64. GroupNet uses binary weights and 2-bit activations, but its blocks contain 5 parallel branches. We compute its compression rate as 102.4 The DoReFA result is cited from Choi et al. (2018). From table 4, (32/1) we have the following observations: 1) Our most accurate architectures (arch-1) have almost the same accuracy (-0.02% or +0.09%) as the full-precision models with compression rates of 33.2x and 40.8X. 2) Comparing arch-2 with PACT, DoReFa, and QIP, we have a similar compression rate (62.9 vs 64), but the accuracy is 0.71-1.91% higher. 3) Comparing arch-3 with GroupNet, we have slightly higher compression rate (103.5 vs. 102.4), but 1.05% higher accuracy.
8
# 7 CONCLUSION
In this work we focus on the problem of mixed precision quantization of a ConvNet to determine its layer-wise bit-widths. We formulate this problem as a neural architecture search (NAS) problem and propose a novel, efï¬cient, and effective differentiable neural architecture search (DNAS) framework to solve it. Under the DNAS framework, we efï¬ciently explore the exponential search space of the NAS problem through gradient based optimization (SGD). We use DNAS to search for layer-wise precision assignment for ResNet on CIFAR10 and ImageNet. Our quantized models with 21.1x smaller model size or 103.9x smaller computational cost can still outperform baseline quantized or even full precision models. DNAS is very efï¬cient, taking less than 5 hours to ï¬nish a search on ResNet18 for ImageNet. It is also a general architecture search framework that is not limited to the mixed precision quantization problem. Its other applications will be discussed in future publications.
# REFERENCES
Hessam Bagherinezhad, Maxwell Horton, Mohammad Rastegari, and Ali Farhadi. Label reï¬nery: Improving imagenet classiï¬cation through label progression. arXiv preprint arXiv:1805.02641, 2018.
Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srini- vasan, and Kailash Gopalakrishnan. Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, 2018.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248â255. Ieee, 2009.
Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770â778, 2016a.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630â645. Springer, 2016b.
Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. Amc: Automl for model compression and acceleration on mobile devices. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 784â800, 2018.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.
Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Youngjun Kwak, Jae-Joon Han, and Changkyu Choi. Joint training of low-precision neural network with quantization interval param- eters. arXiv preprint arXiv:1808.05779, 2018.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Tech- nical report, Citeseer, 2009.
Cong Leng, Hao Li, Shenghuo Zhu, and Rong Jin. Extremely low bit neural network: Squeeze the last bit out with admm. arXiv preprint arXiv:1707.09870, 2017.
9
Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018.
Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. Efï¬cient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268, 2018.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classiï¬cation using binary convolutional neural networks. In European Conference on Computer Vision, pp. 525â542. Springer, 2016.
Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In Advances in Neural Informa- tion Processing Systems, pp. 4053â4061, 2016.
Tom Veniat and Ludovic Denoyer. Learning time/memory-efï¬cient deep architectures with budgeted super networks. arXiv preprint arXiv:1706.00046, 2017.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â256, 1992.
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Train- ing low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. arXiv preprint arXiv:1612.01064, 2016.
Bohan Zhuang, Chunhua Shen, and Ian Reid. Training compact neural networks with binary weights and low precision activations. arXiv preprint arXiv:1808.02631, 2018.
Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
# APPENDIX A WEIGHT AND ACTIVATION QUANTIZATION
For readersâ convenience, we describe the functions we use to quantize weights and activations in this section. We follow DoReFa-Net (Zhou et al. (2016)) to quantize weights as
tanh(w) wk = 2Qk( + 0.5). ) (14)
# 2max( |
tanh(w) | ) denotes a k-bit quantization function w denotes the latent full-precision weight of a network. Qk( · . To that quantizes a continuous value w 1 i = 0, } quantize activations, we follow Choi et al. (2018) to use a bounded activation function followed by a quantization function as
y = P ACT (x) = 0.5( x x α. | yk = Qk(y/α) â α | + α), (15)
| â | ·
Here, x is the full precision activation, yk is the quantized activation. P ACT ( bounds the output between [0, α]. α is a learnable upper bound of the activation function.
# APPENDIX B EXPERIMENT DETAILS
We discuss the experiment details for the CIFAR10 experiments. CIFAR10 contains 50,000 training images and 10,000 testing images to be classiï¬ed into 10 categories. Image size is 32 32. We report the accuracy on the test set. To train the super net, we randomly split 80% of the CIFAR10 training set to train the weights w, and 20% to train the architecture parameter θ. We train the super net for 90 epochs with a batch size of 512. To train the model weights, we use SGD with
10
10â4. We momentum with an initial learning rate of 0.2, momentum of 0.9 and weight decay of 5 use the cosine decay schedule to reduce the learning rate. For architecture parameters, we use Adam 10â3 and weight decay of 10â3. We optimizer (Kingma & Ba (2014)) with a learning rate of 5 use the cost function from equation (11). We set β from equation (13) to 0.1 and γ to 0.9. To control Gumbel Softmax functions, we use an initial temperature of T0 = 5.0, and we set the decaying factor η from equation (9) to be 0.025. After every 10 epochs of training of super net, we sample 5 architectures from the distribution Pθ. We train each sampled architecture for 160 epochs and use cutout (DeVries & Taylor (2017)) in data augmentation. Other hyper parameters are the same as training the super net.
We next discuss the experiment details for ImageNet experiments. ImageNet contains 1,000 classes, with roughly 1.3M training images and 50K validation images. Images are scaled such that their shorter side is 256 pixels and are cropped to 224 224 before feeding into the network. We report à the accuracy on the validation set. Training a super net on ImageNet can be very computationally expensive. Instead, we randomly sample 40 categories from the ImageNet training set to train the super net. We use SGD with momentum to train the super net weights for 60 epochs with a batch size of 256 for ResNet18 and 128 for ResNet34. We set the initial learning rate to be 0.1 and reduce it with the cosine decay schedule. We set the momentum to 0.9. For architecture parameters, we use 10â4. We set the cost Adam optimizer with the a learning rate of 10â3 and a weight decay of 5 coefï¬cient β to 0.05, cost exponent γ to 1.2. We set T0 to be 5.0 and decay factor η to be 0.065. We postpone the training of the architecture parameters by 10 epochs. We sample 2 architectures from the architecture distribution Pθ every 10 epochs. The rest of the hyper parameters are the same as the CIFAR10 experiments. We train sampled architectures for 120 epochs using SGD with an initial learning rate of 0.1 and cosine decay schedule. We use label-reï¬nery (Bagherinezhad et al. (2018)) in training and we use the same data augmentation as this Pytorch example1.
1https://github.com/pytorch/examples/tree/master/imagenet
11 | {
"id": "1611.01144"
} |
1811.12889 | Systematic Generalization: What Is Required and Can It Be Learned? | Numerous models for grounded language understanding have been recently
proposed, including (i) generic models that can be easily adapted to any given
task and (ii) intuitively appealing modular models that require background
knowledge to be instantiated. We compare both types of models in how much they
lend themselves to a particular form of systematic generalization. Using a
synthetic VQA test, we evaluate which models are capable of reasoning about all
possible object pairs after training on only a small subset of them. Our
findings show that the generalization of modular models is much more systematic
and that it is highly sensitive to the module layout, i.e. to how exactly the
modules are connected. We furthermore investigate if modular models that
generalize well could be made more end-to-end by learning their layout and
parametrization. We find that end-to-end methods from prior work often learn
inappropriate layouts or parametrizations that do not facilitate systematic
generalization. Our results suggest that, in addition to modularity, systematic
generalization in language understanding may require explicit regularizers or
priors. | http://arxiv.org/pdf/1811.12889 | Dzmitry Bahdanau, Shikhar Murty, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries, Aaron Courville | cs.CL, cs.AI | Published as a conference paper at ICLR 2019 | null | cs.CL | 20181130 | 20190421 | 9 1 0 2
r p A 1 2 ] L C . s c [
3 v 9 8 8 2 1 . 1 1 8 1 : v i X r a
Published as a conference paper at ICLR 2019
# SYSTEMATIC GENERALIZATION: WHAT IS REQUIRED AND CAN IT BE LEARNED?
# Dzmitry Bahdanauâ Mila, Universit´e de Montr´eal AdeptMind Scholar Element AI
# Shikhar Murtyâ Mila, Universit´e de Montr´eal
Michael Noukhovitch Mila, Universit´e de Montr´eal
# Thien Huu Nguyen University of Oregon
Harm de Vries Mila, Universit´e de Montr´eal
Aaron Courville Mila, Universit´e de Montr´eal CIFAR Fellow
# ABSTRACT
Numerous models for grounded language understanding have been recently pro- posed, including (i) generic models that can be easily adapted to any given task and (ii) intuitively appealing modular models that require background knowledge to be instantiated. We compare both types of models in how much they lend them- selves to a particular form of systematic generalization. Using a synthetic VQA test, we evaluate which models are capable of reasoning about all possible object pairs after training on only a small subset of them. Our ï¬ndings show that the generalization of modular models is much more systematic and that it is highly to how exactly the modules are connected. sensitive to the module layout, i.e. We furthermore investigate if modular models that generalize well could be made more end-to-end by learning their layout and parametrization. We ï¬nd that end- to-end methods from prior work often learn inappropriate layouts or parametriza- tions that do not facilitate systematic generalization. Our results suggest that, in addition to modularity, systematic generalization in language understanding may require explicit regularizers or priors.
# INTRODUCTION
In recent years, neural network based models have become the workhorse of natural language un- derstanding and generation. They empower industrial machine translation (Wu et al., 2016) and text generation (Kannan et al., 2016) systems and show state-of-the-art performance on numerous bench- marks including Recognizing Textual Entailment (Gong et al., 2017), Visual Question Answering (Jiang et al., 2018), and Reading Comprehension (Wang et al., 2018). Despite these successes, a growing body of literature suggests that these approaches do not generalize outside of the speciï¬c distributions on which they are trained, something that is necessary for a language understanding system to be widely deployed in the real world. Investigations on the three aforementioned tasks have shown that neural models easily latch onto statistical regularities which are omnipresent in existing datasets (Agrawal et al., 2016; Gururangan et al., 2018; Jia & Liang, 2017) and extremely hard to avoid in large scale data collection. Having learned such dataset-speciï¬c solutions, neural networks fail to make correct predictions for examples that are even slightly out of domain, yet are trivial for humans. These ï¬ndings have been corroborated by a recent investigation on a synthetic instruction-following task (Lake & Baroni, 2018), in which seq2seq models (Sutskever et al., 2014; Bahdanau et al., 2015) have shown little systematicity (Fodor & Pylyshyn, 1988) in how they gen- eralize, that is they do not learn general rules on how to compose words and fail spectacularly when for example asked to interpret âjump twiceâ after training on âjumpâ, ârun twiceâ and âwalk twiceâ.
An appealing direction to improve the generalization capabilities of neural models is to add mod- ularity and structure to their design to make them structurally resemble the kind of rules they are
# âEqual contribution
1
Published as a conference paper at ICLR 2019
supposed to learn (Andreas et al., 2016; Gaunt et al., 2016). For example, in the Neural Module Net- work paradigm (NMN, Andreas et al. (2016)), a neural network is assembled from several neural modules, where each module is meant to perform a particular subtask of the input processing, much like a computer program composed of functions. The NMN approach is intuitively appealing but its widespread adoption has been hindered by the large amount of domain knowledge that is required to decide (Andreas et al., 2016) or predict (Johnson et al., 2017; Hu et al., 2017) how the modules should be created (parametrization) and how they should be connected (layout) based on a natural language utterance. Besides, their performance has often been matched by more traditional neural models, such as FiLM (Perez et al., 2017), Relations Networks (Santoro et al., 2017), and MAC networks (Hudson & Manning, 2018). Lastly, generalization properties of NMNs, to the best of our knowledge, have not been rigorously studied prior to this work.
Here, we investigate the impact of explicit modularity and structure on systematic generalization of NMNs and contrast their generalization abilities to those of generic models. For this case study, we focus on the task of visual question answering (VQA), in particular its simplest binary form, when the answer is either âyesâ or ânoâ. Such a binary VQA task can be seen as a fundamental task of language understanding, as it requires one to evaluate the truth value of the utterance with respect to the state of the world. Among many systematic generalization requirements that are desirable for a VQA model, we choose the following basic one: a good model should be able to reason about all possible object combinations despite being trained on a very small subset of them. We believe that this is a key prerequisite to using VQA models in the real world, because they should be robust at handling unlikely combinations of objects. We implement our generalization demands in the form of a new synthetic dataset, called Spatial Queries On Object Pairs (SQOOP), in which a model has to perform spatial relational reasoning about pairs of randomly scattered letters and digits in the image (e.g. answering the question âIs there a letter A left of a letter B?â). The main challenge in SQOOP is that models are evaluated on all possible object pairs, but trained on only a subset of them.
Our ï¬rst ï¬nding is that NMNs do generalize better than other neural models when layout and parametrization are chosen appropriately. We then investigate which factors contribute to improved generalization performance and ï¬nd that using a layout that matches the task (i.e. a tree layout, as opposed to a chain layout), is crucial for solving the hardest version of our dataset. Lastly, and perhaps most importantly, we experiment with existing methods for making NMNs more end-to-end by inducing the module layout (Johnson et al., 2017) or learning module parametrization through soft-attention over the question (Hu et al., 2017). Our experiments show that such end-to-end ap- proaches often fail by not converging to tree layouts or by learning a blurred parameterization for modules, which results in poor generalization on the hardest version of our dataset. We believe that our ï¬ndings challenge the intuition of researchers in the ï¬eld and provide a foundation for improving systematic generalization of neural approaches to language understanding.
# 2 THE SQOOP DATASET FOR TESTING SYSTEMATIC GENERALIZATION
We perform all experiments of this study on the SQOOP dataset. SQOOP is a minimalistic VQA task that is designed to test the modelâs ability to interpret unseen combinations of known relation and object words. Clearly, given known objects X, Y and a known relation R, a human can easily verify whether or not the objects X and Y are in relation R. Some instances of such queries are common in daily life (is there a cup on the table), some are extremely rare (is there a violin under the car), and some are unlikely but have similar, more likely counter-parts (is there grass on the frisbee vs is there a frisbee on the grass). Still, a person can easily answer these questions by understanding them as just the composition of the three separate concepts. Such compositional reasoning skills are clearly required for language understanding models, and SQOOP is explicitly designed to test for them.
Concretely speaking, SQOOP requires observing a 64 x 64 RGB image x and answering a yes-no question g = X RY about whether objects X and Y are in a spatial relation R. The questions are represented in a redundancy-free X R Y form; we did not aim to make the questions look like natural language. Each image contains 5 randomly chosen and randomly positioned objects. There are 36 objects: the latin letters A-Z and digits 0-9, and there are 4 relations: LEFT_OF, RIGHT_OF, ABOVE, and BELOW. This results in 36 - 35 - 4 = 5040 possible unique questions (we do not allow questions about identical objects). To make negative examples challenging, we ensure that both X and Y of a question are always present in the associated image and that there are distractor objects Yâ #4 Y
2
Published as a conference paper at ICLR 2019
# a: S above T? Yes
b: W left of A? No
Figure 1: Different NMN layouts: NMN-Chain-Shortcut (left), NMN-Chain (center), NMN-Tree (right). See Section 3.2 for details.
Figure 2: A positive (top) and negative (bottom) example from the SQOOP dataset.
and Xâ # X such that XR Yâ and XâRY are both true for the image. These extra precautions guarantee that answering a question requires the model to locate all possible X and Y then check if any pair of them are in the relation R. Two SQOOP examples are shown in Figure 2.
Our goal is to discover which models can correctly answer questions about all 36 35 possible object pairs in SQOOP after having been trained on only a subset. For this purpose we build training sets k unique questions by sampling k different right-hand-side (RHS) objects Y1, containing 36 Y2, ..., Yk for each left-hand-side (LHS) object X. We use this procedure instead of just uniformly sampling object pairs in order to ensure that each object appears in at least one training question, thereby keeping the all versions of the dataset solvable. We will refer to k as the #rhs/lhs parameter k) questions. We generate of the dataset. Our test set is composed from the remaining 36 training and test sets for rhs/lhs values of 1,2,4,8 and 18, as well as a control version of the dataset, #rhs/lhs=35, in which both the training and the test set contain all the questions (with different images). Note that lower #rhs/lhs versions are harder for generalization due to the presence of spurious dependencies between the words X and Y to which the models may adapt. In order to exclude a possible compounding factor of overï¬tting on the training images, all our training sets contain 1 million examples, so for a dataset with #rhs/lhs = k we generate approximately 106/(36 · k) different images per unique question. Appendix D contains pseudocode for SQOOP generation. 4
# 3 MODELS
A great variety of VQA models have been recently proposed in the literature, among which we can distinguish two trends. Some of the recently proposed models, such as FiLM (Perez et al., 2017) and Relation Networks (RelNet, Santoro et al. (2017)) are highly generic and do not require any task- speciï¬c knowledge to be applied on a new dataset. On the opposite end of the spectrum are modular and structured models, typically ï¬avours of Neural Module Networks (Andreas et al., 2016), that do require some knowledge about the task at hand to be instantiated. Here, we evaluate systematic generalization of several state-of-the-art models in both families. In all models, the image x is ï¬rst fed through a CNN based network, that we refer to as the stem, to produce a feature-level 3D tensor hx. This is passed through a model-speciï¬c computation conditioned on the question q, to produce a joint representation hq x. Lastly, this representation is fed into a fully-connected classiï¬er network to produce logits for prediction. Therefore, the main difference between the models we consider is how the computation hq x = model(hx, q) is performed.
3.1 GENERIC MODELS
We consider four generic models in this paper: CNN+LSTM, FiLM, Relation Network (RelNet), and Memory-Attention-Control (MAC) network. For CNN+LSTM, FiLM, and RelNet models, the question q is ï¬rst encoded into a ï¬xed-size representation hq using a unidirectional LSTM network. CNN+LSTM ï¬attens the 3D tensor hx to a vector and concatenates it with hq to produce hq x:
hq x = [f latten(hx); hq]. (1)
3
Published as a conference paper at ICLR 2019
RelNet (Santoro et al., 2017) uses a network g which is applied to all pairs of feature columns of hx concatenated with the question representation hq, all of which is then pooled to obtain hq x:
hgx = Y_ Gglhx (i), hx (J), ha) (2) aj
where hx(i) is the i-th feature column of hx. FiLM networks (Perez et al., 2017) use N convo- lutional FiLM blocks applied to hx. A FiLM block is a residual block (He et al., 2016) in which a feature-wise afï¬ne transformation (FiLM layer) is inserted after the 2nd convolutional layer. The FiLM layer is conditioned on the question at hand via prediction of the scaling and shifting param- eters γn and βn:
Lyn; Bn] = Wirhg + bP 6) ht = BN(W3 * ReLU(Wf « h?y! + bn)) (4) ne = hny! + ReLU (yn © hi, ® Bn) (5) where BN stands for batch normalization (loffe & Szegedy, 2015), * stands for convolution and © stands for element-wise multiplications. h/,, is the output of the n-th FiLM block and no. = hy. The output of the last FiLM block an. undergoes an extra 1 x 1 convolution and max-pooling to produce hy x. MAC network of Hudson & Manning (2018) produces hy, by repeatedly applying a Memory-Attention-Composition (MAC) cell that is conditioned on the question through an attention mechanism. The MAC model is too complex to be fully described here and we refer the reader to the original paper for details.
Ëhn q x = BN (W n 1 â 2 â q x = hnâ1 hn q x + ReLU (γn
3.2 NEURAL MODULE NETWORKS
Neural Module Networks (NMN) (Andreas et al., 2016) are an elegant approach to question answer- ing that constructs a question-speciï¬c network by composing together trainable neural modules, drawing inspiration from symbolic approaches to question answering (Malinowski & Fritz, 2014). To answer a question with an NMN, one ï¬rst constructs the computation graph by making the fol- lowing decisions: (a) how many modules and of which types will be used, (b) how will the modules be connected to each other, and (c) how are these modules parametrized based on the question. We refer to the aspects (a) and (b) of the computation graph as the layout and the aspect (c) as the parametrization. In the original NMN and in many follow-up works, different module types are used to perform very different computations, e.g. the Find module from Hu et al. (2017) performs trainable convolutions on the input attention map, whereas the And module from the same paper computes an element-wise maximum for two input attention maps. In this work, we follow the trend of using more homogeneous modules started by Johnson et al. (2017), who use only two types of modules: unary and binary, both performing similar computations. We restrict our study to NMNs with homogeneous modules because they require less prior knowledge to be instantiated and be- cause they performed well in our preliminary experiments despite their relative simplicity. We go one step further than Johnson et al. (2017) and retain a single binary module type, using a zero tensor for the second input when only one input is available. Additionally, we choose to use exactly three modules, which simpliï¬es the layout decision to just determining how the modules are connected. Our preliminary experiments have shown that, even after these simpliï¬cations, NMNs are far ahead of other models in terms of generalization.
In the original NMN, the layout and parametrization were set in an ad-hoc manner for each question by analyzing a dependency parse. In the follow-up works (Johnson et al., 2017; Hu et al., 2017), these aspects of the computation are predicted by learnable mechanisms with the goal of reducing the amount of background knowledge required to apply the NMN approach to a new task. We ex- periment with the End-to-End NMN (N2NMN) (Hu et al., 2017) paradigm from this family, which predicts the layout with a seq2seq model (Sutskever et al., 2014) and computes the parametriza- tion of the modules using a soft attention mechanism. Since all the questions in SQOOP have the same structure, we do not employ a seq2seq model but instead have a trainable layout variable and trainable attention variables for each module. Formally, our NMN is constructed by repeatedly applying a generic neural module f (θ, γ, s0, s1), which takes as inputs the shared parameters θ, the question-speciï¬c parametrization γ and the left- hand side and right-hand side inputs s0 and s1. Three such modules are connected and conditioned
4
Published as a conference paper at ICLR 2019
on a question q = (q1, q2, q3) as follows:
3 ag = Soak "e(qi) (6) i=l
k-1
sm k = Ï k,j m sj j=â1 (7)
sk = f (θ, γk, s0
# k, s1 k) hqx = s3
Rgx = 83 (9)
In the equations above, sâ1 = 0 is the zero tensor input, s0 = hx are the image features outputted by the stem, e is the embedding table for question words. k 1, 2, 3 is the module number, } â { sk is the output of the k-th module and sm k are its left (m = 0) and right (m = 1) inputs. We refer to A = (αk,i) and T = (Ï k,j m ) as the parametrization attention matrix and the layout tensor respectively.
We experiment with two choices for the NMNâs generic neural module: the Find module from Hu et al. (2017) and the Residual module from Johnson et al. (2017). The equations for the Residual module are as follows:
fResidual(γk, s0 k, s1 k) = ReLU ( Ësk + W k 1 ; bk 2 ; bk Ësk = ReLU (W k ReLU (W k [W k 1; W k 3 ; bk 2; W k [s0 k; s1 3 â Ësk + bk 3] = γk k] + bk 3), 2)) + bk 1), (10) (11) (12)
1 â
2 â
and for Find module as follows:
(13)
[W1; b1; Wa; be] = 0 [s}; 84] + bz) + bi).
fF ind(θ, γk, s0 k, s1 k) = ReLU (W1 γk ReLU (W2 k; s1 k (14)
â
â
In the formulas above all W âs stand for convolution weights, and all bâs are biases. Equations 10 and 13 should be understood as taking vectors γk and θ respectively and chunking them into weights and biases. The main difference between Residual and Find is that in Residual all parameters depend on the questions words (hence θ is omitted from the signature of fResidual), where as in Find convolutional weights are the same for all questions, and only the element-wise multipliers γk vary based on the question. We note that the speciï¬c Find module we use in this work is slightly different from the one used in (Hu et al., 2017) in that it outputs a feature tensor, not just an attention map. This change was required in order to connect multiple Find modules in the same way as we connect multiple residual ones.
Based on the generic NMN model described above, we experiment with several specific architectures that differ in the way the modules are connected and parametrized (see Figure 1). In NMN-Chain the modules form a sequential chain. Modules 1, 2 and 3 are parametrized based on the first object word, second object word and the relation word respectively, which is achieved by setting the attention maps 1, Q2, a3 to the corresponding one-hot vectors. We also experiment with giving the image features h,. as the right-hand side input to all 3 modules and call the resulting model NMN-Chain- Shortcut. NMN-Tree is similar to NMN-Chain in that the attention vectors are similarly hard- coded, but we change the connectivity between the modules to be tree-like. Stochastic NANMN follows the N2NMN approach by Hu et al. (2017) for inducing layout. We treat the layout Tâ as a stochastic latent variable. T is allowed to take two values: T};-c¢ as in NMN-Tree, and Typgin as in NMN-Chain. We calculate the output probabilities by marginalizing out the layout i.e. probability of answer being âyesâ is computed as p(yes|z,q) = Te {Tince,Tenain} p(yes|T, x, q)p(T). Lastly, Attention N2NMN uses the N2NMN method for learning parametrization (Hu et al., 2017). It is structured just like NMN-Tree but has a* computed as softmax(a"), where @* is a trainable vector. We use Attention N2NMN only with the Find module because using it with the Residual module would involve a highly non-standard interpolation between convolutional weights.
5
(8) (9)
Published as a conference paper at ICLR 2019
# 4 EXPERIMENTS
In our experiments we aimed to: (a) understand which models are capable of exhibiting systematic generalization as required by SQOOP, and (b) understand whether it is possible to induce, in an end-to-end way, the successful architectural decisions that lead to systematic generalization.
All models share the same stem architecture which consists of 6 layers of convolution (8 for Relation Networks), batch normalization and max pooling. The input to the stem is a 64 3 image, and the feature dimension used throughout the stem is 64. Further details can be found in Appendix A. The code for all experiments is available online1.
4.1 WHICH MODELS GENERALIZE BETTER?
We report the performance for all models on datasets of varying difï¬culty in Figure 3. Our ï¬rst observation is that the modular and tree-structured NMN-Tree model exhibits strong systematic generalization. Both versions of this model, with Residual and Find modules, robustly solve all versions of our dataset, including the most challenging #rhs/lhs=1 split.
The results of NMN-Tree should be contrasted with those of generic models. 2 out of 4 models (Conv+LSTM and RelNet) are not able to learn to answer all SQOOP questions, no matter how easy the split was (for high #rhs/lhs Conv+LSTM overï¬tted and RelNet did not train). The results of other two models, MAC and FiLM, are similar. Both models are clearly able to solve the SQOOP task, as suggested by their almost perfect < 1% error rate on the control #rhs/lhs=35 split, yet they 9.97% errors struggle to generalize on splits with lower #rhs/lhs. In particular, we observe 13.67 for MAC and a 34.73 4.61% errors for FiLM on the hardest #rhs/lhs=1 split. For the splits of intermediate difï¬culty we saw the error rates of both models decreasing as we increased the #rhs/lhs ratio from 2 to 18. Interestingly, even with 18 #rhs/lhs some MAC and FiLM runs result in a test 2%. Given the simplicity and minimalism of SQOOP questions, we believe that these error rate of results should be considered a failure to pass the SQOOP test for both MAC and FiLM. That said, we note a difference in how exactly FiLM and MAC fail on #rhs/lhs=1: in several runs (3 out of 15) 0.5% error rate), whereas in all runs of FiLM MAC exhibits a strong generalization performance ( the error rate is about 30%. We examine the successful MAC models and ï¬nd that they converge to a successful setting of the control attention weights, where speciï¬c MAC units consistently attend to the right questions words. In particular, MAC models that generalize strongly for each question seem to have a unit focusing strongly on X and a unit focusing strongly on Y (see Appendix B for more details). As MAC was the strongest competitor of NMN-Tree across generic models, we perform an ablation study for this model, in which we vary the number of modules and hidden units, as well as experiment with weight decay. These modiï¬cations do not result in any signiï¬cant reduction of the gap between MAC and NMN-Tree. Interestingly, we ï¬nd that using the default high number of MAC units, namely 12, is helpful, possibly because it increases the likelihood that at least one unit converges to focus on X and Y words (see Appendix B for details).
4.2 WHAT IS ESSENTIAL TO STRONG GENERALIZATION OF NMN?
The superior generalization of NMN-Tree raises the following question: what is the key architectural difference between NMN-Tree and generic models that explains the performance gap between them? We consider two candidate explanations. First, the NMN-Tree model differs from the generic models in that it does not use a language encoder and is instead built from modules that are parametrized by question words directly. Second, NMN-Tree is structured in a particular way, with the idea that modules 1 and 2 may learn to locate objects and module 3 can learn to reason about object locations independently of their identities. To understand which of the two differences is responsible for the superior generalization, we compare the performance of the NMN-Tree, NMN-Chain and NMN- Chain-Shortcut models (see Figure 1). These 3 versions of NMN are similar in that none of them are using a language encoder, but they differ in how the modules are connected. The results in Figure 3 show that for both Find and Residual module architectures, using a tree layout is absolutely crucial (and sufï¬cient) for generalization, meaning that the generalization gap between NMN-Tree and generic models can not be explained merely by the language encoding step in the latter. In particular, NMN-Chain models perform barely above random chance, doing even worse than generic models on
# 1https://github.com/rizar/systematic-generalization-sqoop
6
Published as a conference paper at ICLR 2019
fee 1 rhs /ths Wm 2 rhs/ths rhs/ths rhs/ths lm 18 rhs/ths M35 shs/ths ie NMN-Tree NMN-Tree âConv+LSTM RelNet MAC FiLM (Residual) (Find) rhs ifthe ifthe mm rhs tm 18 hss fm 35 has : 9) /) Vig ~ Se) / 8 8 } By & Se a & BY 4 1 0 oie -_ -_ NMN-Chain-Shortcut ae âs NMIN-Tree NMN-Tree NMIN-Chain NMN-Chain NMIN-Chain-Shortcut (Residual) (Find) (Find) (Residual) (Find) (Residual)
Figure 3: Top: Comparing the performance of generic models on datasets of varying difï¬- culty (lower #rhs/lhs is more difï¬cult). Note that NMN-Tree generalizes perfectly on the hardest #rhs/lhs=1 version of SQOOP, whereas MAC and FiLM fail to solve completely even the easi- est #rhs/lhs=18 version. Bottom: Comparing NMNs with different layouts and modules. We can clearly observe the superior generalization of NMN-Tree, poor generalization of NMN-Chain and mediocre generalization of NMN-Chain-Shortcut. Means and standard deviations after at least 5 runs are reported.
the #rhs/lhs=1 version of the dataset and dramatically failing even on the easiest #rhs/lhs=18 split. This is in stark contrast with NMN-Tree models that exhibits nearly perfect performance on the hardest #rhs/lhs=1 split. As a sanity check we train NMN-Chain models on the vanilla #rhs/lhs=35 split. We ï¬nd that NMN-Chain has little difï¬culty learning to answer SQOOP questions when it sees all of them at training time, even though it previously shows poor generalization when testing on unseen examples. Interestingly, NMN-Chain-Shortcut performs much better than NMN-Chain and quite similarly to generic models. We ï¬nd it remarkable that such a slight change in the model layout as adding shortcut connections from image features hx to the modules results in a drastic change in generalization performance. In an attempt to understand why NMN-Chain generalizes so poorly we compare the test set responses of the 5 NMN-Chain models trained on #rhs/lhs=1 split. Notably, there was very little agreement between predictions of these 5 runs (Fleiss κ = 0.05), suggesting that NMN-Chain performs rather randomly outside of the training set.
4.3 CAN THE RIGHT KIND OF NMN BE INDUCED?
The strong generalization of the NMN-Tree is impressive, but a signiï¬cant amount of prior knowl- edge about the task was required to come up with the successful layout and parametrization used in this model. We therefore investigate whether the amount of such prior knowledge can be reduced by ï¬xing one of these structural aspects and inducing the other.
# 4.3.1 LAYOUT INDUCTION
In our layout induction experiments, we use the Stochastic N2NMN model which treats the layout as a stochastic latent variable with two values (Ttree and Tchain, see Section 3.2 for details). We experiment with N2NMNs using both Find and Residual modules and report results with different
7
Published as a conference paper at ICLR 2019
1.0 0.9 ââ 1 rhsjlhs go8 â 18 rhs/Ihs £07 20.6 0.5 0.4 (¢) 100000 200000 Iterations
. = = 40 ° © 1 rhs/lhs = ° - 30 © 2 ths/lhs 2 ro ® 59 © 18 rhs/Ihs 2 10 o. w ° 0 ee ee coco âamen co 04 0:5 06 07 08 Attention quality k
Figure 4: Learning dynamics of layout in- duction on 1 rhs/lhs and 18 rhs/lhs datasets using the Residual module with p0(tree) = 0.5. All 5 runs do not learn to use the tree lay- out for 1 rhs/lhs, the very setting where the tree layout is necessary for generalization.
Figure 5: Attention quality κ vs accuracy for Attention N2NMN models trained on differ- ent #rhs/lhs splits. We can observe that gen- eralization is strongly associated with high κ for #rhs/lhs=1, while for splits with 2 and 18 rhs/lhs blurry attention may be sufï¬cient.
0.97 0.9 0.97 â xX âR â Y x âR â yY 0.7; o7f 0.7) Y= S05) 805) 4 § 05) âR 4 â Y 03| â 03) â _â_â_â 03 NN oa a . 0.1 â ols 50000 100000 0 50000 100000 0 50000 Iterations Iterations Iterations 100000
Figure 6: An example of how attention weights of modules 1 (left), 2 (middle), and 3 (right) evolve during training of an Attention N2NMN model on the 18 rhs/lhs version of SQOOP. Modules 1 and 2 learn to focus on different objects words, X and Y respectively in this example, but they also assign high weight to the relation word R. Module 3 learns to focus exclusively on R.
initial conditions, p0(tree) 0.1, 0.5, 0.9. We believe that the initial probability p0(tree) = 0.1 should not be considered small, since in more challenging datasets the space of layouts would be exponentially large, and sampling the right layout in 10% of all cases should be considered a very lucky initialization. We repeat all experiments on #rhs/lhs=1 and on #rhs/lhs=18 splits, the for- mer to study generalization, and the latter to control whether the failures on #rhs/lhs=1 are caused speciï¬cally by the difï¬culty of this split. The results (see Table 1) show that the success of layout induction (i.e. converging to a p(tree) close to 0.9) depends in a complex way on all the factors that we considered in our experiments. The initialization has the most inï¬uence: models initialized with p0(tree) = 0.1 typically do not converge to a tree (exception being experiments with Residual mod- ule on #rhs/lhs=18, in which 3 out of 5 runs converged to a solution with a high p(tree)). Likewise, models initialized with p0(tree) = 0.9 always stay in a regime with a high p(tree). In the interme- diate setting of p0(tree) = 0.5 we observe differences in behaviors for Residual and Find modules. In particular, N2NMN based on Residual modules stays spurious with p(tree) = 0.5 0.08 when #rhs/lhs=1, whereas N2NMN based on Find modules always converges to a tree.
One counterintuitive result in Table 1 is that for the Stochastic N2NMNs with Residual modules, 1.79% test error despite never resolving trained with p0(tree) = 0.5 and #rhs/lhs=1, make just 1.64 ± the layout uncertainty through training (p200K(tree) = 0.56 0.06). We offer an investigation of this result in Appendix C.
4.3.2 PARAMETRIZATION INDUCTION
Next, we experiment with the Attention N2NMN model (see Section 3.2) in which the parametriza- tion is learned for each module as an attention-weighted average of word embeddings. In these experiments, we ï¬x the layout to be tree-like and sample the pre-softmax attention weights Ëα from a uniform distribution U [0; 1]. As in the layout induction investigations, we experiment with several SQOOP splits, namely we try #rhs/lhs . The results (reported in Table 2) show that Attention N2NMN fails dramatically on #rhs/lhs=1 but quickly catches up as soon as #rhs/lhs is increased to 2. Notably, 9 out of 10 runs on #rhs/lhs=2 result in almost perfect performance, and 1 run completely fails to generalize (26% error rate), resulting in a high 8.18% variance of the mean
8
Published as a conference paper at ICLR 2019
Table 1: Tree layout induction results for Stochastic N2NMNs using Residual and Find modules on 1 rhs/lhs and 18 rhs/lhs datasets. For each setting of p0(tree) we report results after 5 runs. p200K(tree) is the probability of using a tree layout after 200K training iterations.
module Residual Find #rhs/lhs 1 18 1 18 p0(tree) Test error rate (%) 0.1 0.5 0.9 0.1 0.5 0.9 0.1 0.5 0.9 0.1 0.5 0.9 31.89 1.64 0.16 3.99 0.19 0.12 47.54 0.78 0.41 5.11 0.17 0.11 0.75 1.79 0.11 5.33 0.11 0.12 0.95 0.52 0.07 1.19 0.16 0.03 Test loss 0.64 0.27 0.03 0.15 0.06 0.01 1.78 0.05 0.02 0.14 0.01 0.00 0.03 0.04 0.01 0.06 0.02 0.00 0.47 0.04 0.00 0.03 0.01 0.00 p200K(tree) 0.01 0.08 0.06 0.56 0.00 0.96 0.34 0.59 0.01 0.99 0.00 1.00 0.00 0.00 0.07 0.94 0.00 1.00 0.04 0.02 0.00 1.00 0.00 1.00
± ± ± ± ± ± ± ± ± ± ± ±
± ± ± ± ± ± ± ± ± ± ± ±
± ± ± ± ± ± ± ± ± ± ± ±
Table 2: Parameterization induction results for 1,2,18 rhs/lhs datasets for Attention N2NMN. The model does not generalize well in the difï¬cult 1 rhs/lhs setting. Results for MAC are presented for comparison. Means and standard deviations were estimated based on at least 10 runs.
Model #rhs/lhs Test error rate (%) Test loss (%) 0.71 16.02 0.41 8.18 0.00 0.12 0.32 9.97 0.15 4.31 0.02 0.74 Attention N2NMN 1 Attention N2NMN 2 Attention N2NMN 18 27.19 2.82 0.16 13.67 9.21 0.53 1.22 0.14 0.00 0.41 0.28 0.01 MAC MAC MAC 1 2 18
± ± ± ± ± ±
± ± ± ± ± ±
error rate. All 10 runs on the split with 18 rhs/lhs generalize ï¬awlessly. Furthermore, we inspect the learned attention weights and ï¬nd that for typical successful runs, module 3 focuses on the relation word, whereas modules 1 and 2 focus on different object words (see Figure 6) while still focusing on the relation word. To better understand the relationship between successful layout induction and generalization, we deï¬ne an attention quality metric κ = minwâ{X,Y } maxkâ1,2 αk,w/(1 αk,R). X, Y there is a module i that focuses mostly on this Intuitively, κ is large when for each word w word. The renormalization by 1/(1 αk,R) is necessary to factor out the amount of attention that modules 1 and 2 assign to the relation word. For the ground-truth parametrization that we use for NMN-Tree κ takes a value of 1, and if both modules 1 and 2 focus on X, completely ignoring Y, κ equals 0. The scatterplot of the test error rate versus κ (Figure 5) shows that for #rhs/lhs=1 high gen- eralization is strongly associated with higher κ, meaning that it is indeed necessary to have different modules strongly focusing on different object words in order to generalize in this most challenging setting. Interestingly, for #rhs/lhs=2 we see a lot of cases where N2NMN generalizes well despite attention being rather spurious (κ
â
In order to put Attention N2NMN results in context we compare them to those of MAC (see Table 2). Such a comparison can be of interest because both models perform attention over the question. For 1 rhs/lhs MAC seems to be better on average, but as we increase #rhs/lhs to 2 we note that Attention N2NMN succeeds in 9 out of 10 cases on the #rhs/lhs=2 split, much more often than 1 success out of 10 observed for MAC2. This result suggests that Attention N2NMNs retains some of the strong generalization potential of NMNs with hard-coded parametrization.
# 5 RELATED WORK
The notion of systematicity was originally introduced by (Fodor & Pylyshyn, 1988) as the property of human cognition whereby âthe ability to entertain a given thought implies the ability to entertain thoughts with semantically related contentsâ. They illustrate this with an example that no English
2If we judge a run successful when the error rate is lower than Ï = 1%, these success rates are different with a p-value of 0.001 according to the Fisher exact test. Same holds for any other threshold Ï â [1%; 5%].
9
Published as a conference paper at ICLR 2019
speaker can understand the phrase âJohn loves the girlâ without being also able to understand the phrase âthe girl loves Johnâ. The question of whether or not connectionist models of cognition can account for the systematicity phenomenon has been a subject of a long debate in cognitive science (Fodor & Pylyshyn, 1988; Smolensky, 1987; Marcus, 1998; 2003; Calvo & Colunga, 2003). Recent research has shown that lack of systematicity in the generalization is still a concern for the modern seq2seq models (Lake & Baroni, 2018; Bastings et al., 2018; Loula et al., 2018). Our ï¬ndings about the weak systematic generalization of generic VQA models corroborate the aforementioned seq2seq results. We also go beyond merely stating negative generalization results and showcase the high systematicity potential of adding explicit modularity and structure to modern deep learning models.
Besides the theoretical appeal of systematicity, our study is inspired by highly related prior evidence that when trained on downstream language understanding tasks, neural networks often generalize poorly and latch on to dataset-speciï¬c regularities. Agrawal et al. (2016) report how neural models exploit biases in a VQA dataset, e.g. responding âsnowâ to the question âwhat covers the groundâ regardless of the image because âsnowâ is the most common answer to this question. Gururangan et al. (2018) report that many successes in natural language entailment are actually due to exploiting statistical biases as opposed to solving entailment, and that state-of-the-art systems are much less performant when tested on unbiased data. Jia & Liang (2017) demonstrate that seemingly state-of- the-art reading comprehension system can be misled by simply appending an unrelated sentence that resembles the question to the document.
Using synthetic VQA datasets to study grounded language understanding is a recent trend started by the CLEVR dataset (Johnson et al., 2016). CLEVR images are 3D-rendered and CLEVR questions are longer and more complex than ours, but in the associated generalization split CLEVR-CoGenT the training and test distributions of images are different. In our design of SQOOP we aimed instead to minimize the difference between training and test images to make sure that we test a modelâs ability to interpret unknown combinations of known words. The ShapeWorld family of datasets by Kuhnle & Copestake (2017) is another synthetic VQA platform with a number of generalization tests, but none of them tests SQOOP-style generalization of relational reasoning to unseen object pairs. Most closely related to our work is the recent study of generalization to long-tail questions about rare objects done by Bingham et al. (2017). They do not, however, consider as many models as we do and do not study the question of whether the best-performing models can be made end-to-end.
The key paradigm that we test in our experiments is Neural Module Networks (NMN). Andreas et al. (2016) introduced NMNs as a modular, structured VQA model where a ï¬xed number of hand- crafted neural modules (such as Find, or Compare) are chosen and composed together in a layout determined by the dependency parse of the question. Andreas et al. (2016) show that the modular structure allows answering questions that are longer than the training ones, a kind of generalization that is complementary to the one we study here. Hu et al. (2017) and Johnson et al. (2017) followed up by making NMNs end-to-end, removing the non-differentiable parser. Both Hu et al. (2017) and Johnson et al. (2017) reported that several thousands of ground-truth layouts are required to pretrain the layout predictor in order for their approaches to work. In a recent work, Hu et al. (2018) attempt to soften the layout decisions, but training their models end-to-end from scratch performed substantially lower than best models on the CLEVR task. Gupta & Lewis (2018) report successful layout induction on CLEVR for a carefully engineered heterogeneous NMN that takes a scene graph as opposed to a raw image as the input.
# 6 CONCLUSION AND DISCUSSION
We have conducted a rigorous investigation of an important form of systematic generalization re- quired for grounded language understanding: the ability to reason about all possible pairs of objects despite being trained on a small subset of such pairs. Our results allow one to draw two important conclusions. For one, the intuitive appeal of modularity and structure in designing neural architec- tures for language understanding is now supported by our results, which show how a modular model consisting of general purpose residual blocks generalizes much better than a number of baselines, including architectures such as MAC, FiLM and RelNet that were designed speciï¬cally for visual reasoning. While this may seem unsurprising, to the best of our knowledge, the literature has lacked such a clear empirical evidence in favor of modular and structured networks before this work. Im- portantly, we have also shown how sensitive the high performance of the modular models is to the
10
Published as a conference paper at ICLR 2019
layout of modules, and how a tree-like structure generalizes much stronger than a typical chain of layers.
Our second key conclusion is that coming up with an end-to-end and/or soft version of modular models may be not sufï¬cient for strong generalization. In the very setting where strong generaliza- tion is required, end-to-end methods often converge to a different, less compositional solution (e.g. a chain layout or blurred attention). This can be observed especially clearly in our NMN layout and parametrization induction experiments on the #rhs/lhs=1 version of SQOOP, but notably, strong initialization sensitivity of layout induction remains an issue even on the #rhs/lhs=18 split. This conclusion is relevant in the view of recent work in the direction of making NMNs more end-to- end (Suarez et al., 2018; Hu et al., 2018; Hudson & Manning, 2018; Gupta & Lewis, 2018). Our ï¬ndings suggest that merely replacing hard-coded components with learnable counterparts can be insufï¬cient, and that research on regularizers or priors that steer the learning towards more system- atic solutions can be required. That said, our parametrization induction results on the #rhs/lhs=2 split are encouraging, as they show that compared to generic models, a weaker nudge (in the form of a richer training signal or a prior) towards systematicity may sufï¬ce for end-to-end NMNs.
While our investigation has been performed on a synthetic dataset, we believe that it is the real- world language understanding where our ï¬ndings may be most relevant. It is possible to construct a synthetic dataset that is bias-free and that can only be solved if the model has understood the entirety of the datasetâs language. It is, on the contrary, much harder to collect real-world datasets that do not permit highly dataset-speciï¬c solutions, as numerous dataset analysis papers of recent years have shown (see Section 5 for a review). We believe that approaches that can generalize strongly from imperfect and biased data will likely be required, and our experiments can be seen as a simulation of such a scenario. We hope, therefore, that our ï¬ndings will inform researchers working on language understanding and provide them with a useful intuition about what facilitates strong generalization and what is likely to inhibit it.
# ACKNOWLEDGEMENTS
We thank Maxime Chevalier-Boisvert, Yoshua Bengio and Jacob Andreas for useful dis- This research was enabled in part by support provided by Compute Canada cussions. (www.computecanada.ca), NSERC, Canada Research Chairs and Microsoft Research. We also thank Nvidia for donating NVIDIA DGX-1 used for this research.
# REFERENCES
Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. Analyzing the Behavior of Visual Question Answering Models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, January 2016.
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural Module Networks. In Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. URL http://arxiv.org/abs/1511.02799.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 2015 International Conference on Learn- ing Representations, 2015.
Joost Bastings, Marco Baroni, Jason Weston, Kyunghyun Cho, and Douwe Kiela. Jump to better conclusions: SCAN both left and right. In Proceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 47â55, Brussels, Belgium, November 2018. Association for Computational Linguistics. URL https://www.aclweb. org/anthology/W18-5407.
Eli Bingham, Piero Molino, Paul Szerlip, Obermeyer Fritz, and Goodman Noah. Characterizing how Visual Question Answering scales with the world. In NIPS 2017 Visually-Grounded Interaction and Language Workshop, 2017.
Francisco Calvo and Eliana Colunga. The statistical brain: Reply to Marcus The algebraic mind. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 25, 2003.
11
Published as a conference paper at ICLR 2019
Jerry A. Fodor and Zenon W. Pylyshyn. Connectionism and cognitive architecture: A critical anal- ysis. Cognition, 28(1):3â71, 1988.
Alexander L. Gaunt, Marc Brockschmidt, Nate Kushman, and Daniel Tarlow. Differentiable Pro- In Proceedings of the 34th International Conference on Ma- grams with Neural Libraries. chine Learning, November 2016. URL http://arxiv.org/abs/1611.02109. arXiv: 1611.02109.
Yichen Gong, Heng Luo, and Jian Zhang. Natural Language Inference over Interaction Space. In Proceedings of the 2018 International Conference on Learning Representations, 2017. URL http://arxiv.org/abs/1709.04348. arXiv: 1709.04348.
Nitish Gupta and Mike Lewis. Neural Compositional Denotational Semantics for Question An- In Proceedings of the 2018 Conference on Empirical Methods in Natural Language swering. Processing. Association for Computational Linguistics, 2018. URL http://aclweb.org/ anthology/D18-1239.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. Annotation Artifacts in Natural Language Inference Data. In Proceedings of NAACL-HLT 2018, March 2018. URL http://arxiv.org/abs/1803.02324. arXiv: 1803.02324.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, nition. 2016.
Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. Learning to Reason: End-to-End Module Networks for Visual Question Answering. In Proceedings of 2017 IEEE International Conference on Computer Vision, April 2017. URL http://arxiv.org/ abs/1704.05526. arXiv: 1704.05526.
Ronghang Hu, Jacob Andreas, Trevor Darrell, and Kate Saenko. Explainable Neural Computation via Stack Neural Module Networks. In Proceedings of 2018 European Conference on Computer Vision, July 2018. URL http://arxiv.org/abs/1807.08556. arXiv: 1807.08556.
Drew A. Hudson and Christopher D. Manning. Compositional Attention Networks for Machine Reasoning. In Proceedings of the 2018 International Conference on Learning Representations, February 2018. URL https://openreview.net/forum?id=S1Euwz-Rb.
Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 448â456, 2015. URL http: //jmlr.org/proceedings/papers/v37/ioffe15.html.
Robin Jia and Percy Liang. Adversarial Examples for Evaluating Reading Comprehension Systems. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2021â2031, 2017. doi: 10.18653/v1/D17-1215. URL https://aclanthology.coli. uni-saarland.de/papers/D17-1215/d17-1215.
Yu Jiang, Vivek Natarajan, Xinlei Chen, Marcus Rohrbach, Dhruv Batra, and Devi Parikh. https://github.com/ Pythia v0.1: The winning entry to the vqa challenge 2018. facebookresearch/pythia, 2018.
Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Vi- sual Reasoning. In Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR), December 2016. URL http://arxiv.org/abs/1612.06890. arXiv: 1612.06890.
Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C. Lawrence In Pro- Zitnick, and Ross Girshick. ceedings of 2017 IEEE International Conference on Computer Vision, 2017. URL http: //arxiv.org/abs/1705.03633.
12
Published as a conference paper at ICLR 2019
Anjuli Kannan, Karol Kurach, Sujith Ravi, Tobias Kaufmann, Andrew Tomkins, Balint Miklos, Greg Corrado, Laszlo Lukacs, Marina Ganea, Peter Young, and Vivek Ramavajjala. Smart Reply: Automated Response Suggestion for Email. In Proceedings of the 22Nd ACM SIGKDD Inter- national Conference on Knowledge Discovery and Data Mining, KDD â16, pp. 955â964, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4232-2. doi: 10.1145/2939672.2939801. URL http://doi.acm.org/10.1145/2939672.2939801.
Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In Proceedings of the 2015 International Conference on Learning Representations, 2015. URL http://arxiv. org/abs/1412.6980. arXiv: 1412.6980.
Alexander Kuhnle and Ann Copestake. ShapeWorld - A new test methodology for multimodal language understanding. arXiv:1704.04517 [cs], April 2017. URL http://arxiv.org/ abs/1704.04517. arXiv: 1704.04517.
Brenden M. Lake and Marco Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In Proceedings of the 36th International Con- ference on Machine Learning, 2018. URL http://arxiv.org/abs/1711.00350. arXiv: 1711.00350.
Joao Loula, Marco Baroni, and Brenden M. Lake. Rearranging the Familiar: Testing Composi- tional Generalization in Recurrent Networks. In Proceedings of the 2018 BlackboxNLP EMNLP Workshop, July 2018. URL https://arxiv.org/abs/1807.07545.
Mateusz Malinowski and Mario Fritz. A Multi-world Approach to Question Answering About Real- world Scenes Based on Uncertain Input. In Proceedings of the 27th International Conference on Neural Information Processing Systems, NIPSâ14, pp. 1682â1690, Cambridge, MA, USA, 2014. MIT Press. URL http://dl.acm.org/citation.cfm?id=2968826.2969014.
Gary F. Marcus. Rethinking Eliminative Connectionism. Cognitive Psychology, 37(3):243â282, doi: 10.1006/cogp.1998.0694. URL http://www. December 1998. sciencedirect.com/science/article/pii/S0010028598906946. ISSN 0010-0285.
Gary F. Marcus. The algebraic mind: Integrating connectionism and cognitive science. MIT press, 2003.
Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron Courville. FiLM: Visual Reasoning with a General Conditioning Layer. In In Proceedings of the 2017 AAAI Conference on Artiï¬cial Intelligence, 2017. URL http://arxiv.org/abs/1709.07871.
Adam Santoro, David Raposo, David G. T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. In Advances in Neural Information Processing Systems 31, June 2017. URL http://arxiv. org/abs/1706.01427. arXiv: 1706.01427.
Paul Smolensky. The constituent structure of connectionist mental states: A reply to Fodor and Pylyshyn. Southern Journal of Philosophy, 26(Supplement):137â161, 1987.
Joseph Suarez, Justin Johnson, and Fei-Fei Li. DDRprog: A CLEVR Differentiable Dynamic Rea- soning Programmer. arXiv:1803.11361 [cs], March 2018. URL http://arxiv.org/abs/ 1803.11361. arXiv: 1803.11361.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to Sequence Learning with Neural Net- works. In Advances in Neural Information Processing Systems 27, pp. 3104â3112, 2014.
Wei Wang, Ming Yan, and Chen Wu. Multi-Granularity Hierarchical Attention Fusion Net- In Proceedings of the 56th An- works for Reading Comprehension and Question Answering. nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1705â1714, Melbourne, Australia, 2018. Association for Computational Linguistics. URL http://aclweb.org/anthology/P18-1158.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, and others. Googleâs Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv preprint arXiv:1609.08144, 2016.
13
Published as a conference paper at ICLR 2019
# A EXPERIMENT DETAILS
We trained all models by minimizing the cross entropy loss log p(y|x, q) on the training set, where y ⬠{yes,no} is the correct answer, x is the image, q is the question. In all our experiments we used the Adam optimizer (Kingma & Ba, 2015) with hyperparameters a = 0.0001, 6; = 0.9, By = 0.999, ⬠= 10-1. We continuously monitored validation set performance of all models during training, selected the best one and reported its performance on the test set. The number of training iterations for each model was selected in preliminary investigations based on our observations of how long it takes for different models to converge. This information, as well as other training details, can be found in Table 3.
Table 3: Training details for all models. The subsampling factor is the ratio between the original spatial dimensions of the input image and those of the representation produced by the stem. It is effectively equal to 2k, where k is the number of 2x2 max-pooling operations in the stem.
model FiLM MAC Conv+LSTM RelNet NMN (Residual) NMN (Find) Stochastic NMN (Residual) Stochastic NMN (Find) Attention NMN (Find) stem layers 6 6 6 8 6 6 6 6 6 subsampling factor 4 4 4 8 4 4 4 4 4 iterations 200000 100000 200000 500000 50000 200000 200000 200000 50000 batch size 64 128 128 64 64 64 64 64 64
# B ADDITIONAL RESULTS FOR MAC MODEL
We performed an ablation study in which we varied the number of MAC units, the model dimen- sionality and the level of weight decay for the MAC model. The results can be found in Table 4.
Table 4: Results of an ablation study for MAC. The default model has 12 MAC units of dimension- ality 128 and uses no weight decay. For each experiment we report means and standard deviations based on 5 repetitions.
model default 1 unit 2 units 3 units 6 units 24 units dim. 64 dim. 256 dim. 512 weight decay 0.00001 weight decay 0.0001 weight decay 0.001 #rhs/lhs 1 1 1 1 1 1 1 1 1 1 1 1 train error rate (%) 0.21 0.17 0.35 0.27 0.13 0.23 0.15 0.16 0.17 0.18 0.05 0.04 0.33 0.27 0.00 0.00 0.04 0.02 0.23 0.20 0.54 1.00 1.35 40.55 test error rate (%) 9.97 1.91 2.05 1.12 5.56 7.67 6.27 5.07 7.45 9.27 0.87 0.74 13.67 28.67 24.28 26.47 20.84 9.11 23.61 4.62 8.37 19.21 31.19 45.11
± ± ± ± ± ± ± ± ± ± ± ±
± ± ± ± ± ± ± ± ± ± ± ±
We also perform qualitative investigations to understand the high variance in MACâs performance. In particular, we focus on control attention weights (c) for each run and aim to understand if runs that generalize have clear differences when compared to runs that failed. Interestingly, we observe that in X, Y has a unit that is strongly focused on it. To present our observa- successful runs each word w tions in quantitative terms, we plot attention quality κ = minwâ{X,Y } maxkâ[1;12] αk,w/(1 αk,R), where α are control scores vs accuracy in Figure 7 for each run (see Section 4.3.2 for an explanation of κ). We can clearly see a positive correlation between κ and error rate, especially for low #rhs/lhs.
14
Published as a conference paper at ICLR 2019
(a) 1 rhs/lhs (b) 2 rhs/lhs (c) 4 rhs/lhs (d) 8 rhs/lhs
3 2) oe . 20 . £ .. § 10 . 0 a 04 05 06 a7 (08 09 1.0 Attention quality «
14 . . ° 12 . . . 0 . OS25 0.850 0.875 0.900 0.925 0.950 0.975 1.000 Attention quality «
6 . e, 53 . Gi . 2 1 . 0 . . Om 055 0.96 097 0.98099 Lb Attention quality «
Ba o. 5: Gi 2 on os 090052004 0.55098 1.00 Attention quality «
2.) . Error rate (%) 0.0 e. 096 0.7) 0980.39 Attention quality # 1.00 1.01
# (e) 18 rhs/lhs
Figure 7: Model test accuracy vs κ for the MAC model on different versions of SQOOP. All exper- iments are run 10 times with different random seeds. We can observe a clear correlation between κ and error rate for 1, 2 and 4 rhs/lhs. Also note that perfect generalization is always associated with κ close to 1.
Next, we experiment with a hard-coded variation of MAC. In this model, we use hard-coded control scores such that given a SQOOP question X R Y, the ï¬rst half of all modules focuses on X while the second half focuses on Y. The relationship between MAC and hardcoded MAC is similar to that between NMN-Tree and end-to-end NMN with parameterization induction. However, this model has not performed as well as the successful runs of MAC. We hypothesize that this could be due to the interactions between the control scores and the visual attention part of the model.
# C INVESTIGATION OF CORRECT PREDICTIONS WITH SPURIOUS LAYOUTS
In Section 4.3.1 we observed that an NMN with the Residual module can answer test questions 1.79%, despite being a mixture of a tree and a chain (see with a relative low error rate of 1.64
±
15
Published as a conference paper at ICLR 2019
results in Table 1, p0(tree) = 0.5). Our explanation for this phenomenon is as follows: when connected in a tree, modules of such spurious models generalize well, and when connected as a chain they generalize poorly. The output distribution of the whole model is thus a mixture of the mostly correct p(y T = Tchain, x, q). We verify our reasoning T = Tchain, x, q), and ï¬nd by explicitly evaluating test accuracies for p(y them to be around 99% and 60% respectively, conï¬rming our hypothesis. As a result the predictions 0.5 have lower conï¬dence than those of sharp tree models, of the spurious models with p(tree) â as indicated by the high log loss of 0.27 0.04. We visualize the progress of structure induction for the Residual module with p0(tree) = 0.5 in Figure 4 which shows how p(tree) saturates to 1.0 for #rhs/lhs=18 and remains around 0.5 when #rhs/lhs=1.
# D SQOOP PSEUDOCODE
# Algorithm 1 Pseudocode for creating SQOOP
S&{ABC, ...,Z,0,1,2,3,...,9} Rel < {LEFT-OF, RIGHT-OF, ABOVE, BELOW} > relations 1: 2: 3: function CREATESQOOP(k) 4: TrainQuestions < [] 5: 6 7 8 AllQuestions + [] for all X in S do AllRhs <~ RandomSample(S$ \ {X},k) > sample without replacement from S' \ {X} : AllQuestions â {X} x Rel x (S \ {X}) U AllQuestions 9: for all R, Y in AllRhs x Rel do 10: TrainQuestions â (X,R,Y) UTrainQuestions 11: end for 12: end for 13: TestQuestions ~ AllQuestions \ TrainQuestions 14: function GENERATEEXAMPLE(X, R, Y) 15: a~ {Yes, No} 16: if a = Yes then 17: I + place X and Y objects so that R holds > create the image 18: I + sample 3 objects from S and add to I 19: else 20: repeat 21: Xâ + Sample Xâ from S \ {X} 22: Yâ « Sample Yâ from S'\ {Y} 23: I < place Xâ and Y objects so that R holds > create the image 24: I <add X and Yâ objects to I so that R holds 25: I < sample | more object from S' and add to I 26: until X and Y are not in relation R in I 27: end if 28: return J, X,R,Y,a 29: end function 5 30: Train sample [Tram uestions| examples for each (X,R,Y) ⬠TrainQuestions from GENERATEEXAMPLE(X, R, Y) 31: Test < sample 10 examples for each (X,R,Y) ⬠TestQuestions from GENERATEEXAM-
31: â sample 10 examples for each (X,R,Y) â T estQuestions from GENERATEEXAM-
# T est PLE(X, R, Y ) 32: end function
16 | {
"id": "1609.08144"
} |
1811.11359 | Unsupervised Control Through Non-Parametric Discriminative Rewards | Learning to control an environment without hand-crafted rewards or expert
data remains challenging and is at the frontier of reinforcement learning
research. We present an unsupervised learning algorithm to train agents to
achieve perceptually-specified goals using only a stream of observations and
actions. Our agent simultaneously learns a goal-conditioned policy and a goal
achievement reward function that measures how similar a state is to the goal
state. This dual optimization leads to a co-operative game, giving rise to a
learned reward function that reflects similarity in controllable aspects of the
environment instead of distance in the space of observations. We demonstrate
the efficacy of our agent to learn, in an unsupervised manner, to reach a
diverse set of goals on three domains -- Atari, the DeepMind Control Suite and
DeepMind Lab. | http://arxiv.org/pdf/1811.11359 | David Warde-Farley, Tom Van de Wiele, Tejas Kulkarni, Catalin Ionescu, Steven Hansen, Volodymyr Mnih | cs.LG, cs.AI, stat.ML | 10 pages + references & 5 page appendix | null | cs.LG | 20181128 | 20181128 | 8 1 0 2
v o N 8 2 ] G L . s c [
1 v 9 5 3 1 1 . 1 1 8 1 : v i X r a
# UNSUPERVISED CONTROL THROUGH NON-PARAMETRIC DISCRIMINATIVE REWARDS
David Warde-Farley, Tom Van de Wiele, Tejas Kulkarni, Catalin Ionescu, Steven Hansen & Volodymyr Mnih DeepMind {dwf,tomvandewiele,tkulkarni,cdi,stevenhansen,vmnih}@google.com
# ABSTRACT
Learning to control an environment without hand-crafted rewards or expert data remains challenging and is at the frontier of reinforcement learning research. We present an unsupervised learning algorithm to train agents to achieve perceptually- speciï¬ed goals using only a stream of observations and actions. Our agent simulta- neously learns a goal-conditioned policy and a goal achievement reward function that measures how similar a state is to the goal state. This dual optimization leads to a co-operative game, giving rise to a learned reward function that reï¬ects simi- larity in controllable aspects of the environment instead of distance in the space of observations. We demonstrate the efï¬cacy of our agent to learn, in an unsupervised manner, to reach a diverse set of goals on three domains â Atari, the DeepMind Control Suite and DeepMind Lab.
# INTRODUCTION
Currently, the best performing methods on many reinforcement learning benchmark problems com- bine model-free reinforcement learning methods with policies represented using deep neural net- works (Horgan et al., 2018; Espeholt et al., 2018). Despite reaching or surpassing human-level performance on many challenging tasks, deep model-free reinforcement learning methods that learn purely from the reward signal learn in a way that differs greatly from the manner in which humans learn. In the case of learning to play a video game, a human player not only acquires a strategy for achieving a high score, but also gains a degree of mastery of the environment in the process. Notably, a human player quickly learns which aspects of the environment are under their control as well as how to control them, as evidenced by their ability to rapidly adapt to novel reward functions (Lake et al., 2017).
Focusing learning on mastery of the environment instead of optimizing a single scalar reward function has many potential beneï¬ts. One beneï¬t is that learning is possible even in the absence of an extrinsic reward signal or with an extrinsic reward signal that is very sparse. Another beneï¬t is that an agent that has fully mastered its environment should be able to reach arbitrary achievable goals, which would allow it to generalize to tasks on which it wasnât explicitly trained. Building reinforcement learning agents that aim for environment mastery instead of or in addition to learning about a scalar reward signal is currently an open challenge.
One way to represent such knowledge about an environment is using an environment model. Model- based reinforcement learning methods aim to learn accurate environment models and use them either for planning or for training a policy. While learning accurate environment models of some visually rich environments is now possible (Oh et al., 2015; Chiappa et al., 2018; Ha & Schmidhuber, 2018) using learned models in model-based reinforcement learning has proved to be challenging and model-free approaches still dominate common benchmarks.
We present a new model-free agent architecture: Discriminative Embedding Reward Networks, or DISCERN for short. DISCERN learns to control an environment in an unsupervised way by learning purely from the stream of observations and actions. The aim of our agent is to learn a goal-conditioned policy Ïθ(a|s; sg) (Kaelbling, 1993; Schaul et al., 2015) which can reach any goal state sg that is reachable from the current state s. We show how to learn a goal achievement reward function r(s; sg) that measures how similar state s is to state sg using a mutual information objective at the same
1
time as learning Ïθ(a|s; sg). The resulting learned reward function r(s; sg) measures similarity in the space of controllable aspects of the environment instead of in the space of raw observations. Crucially, the DISCERN architecture is able to deal with goal states that are not perfectly reachable, for example, due to the presence of distractor objects that are not under the agentâs control. In such cases the goal-conditioned policy learned by DISCERN tends to seek states where the controllable elements match those in the goal state as closely as possible.
We demonstrate the effectiveness of our approach on three domains â Atari games, continuous control tasks from the DeepMind Control Suite, and DeepMind Lab. We show that our agent learns to successfully achieve a wide variety of visually-speciï¬ed goals, discovering underlying degrees of controllability of an environment in a purely unsupervised manner and without access to an extrinsic reward signal.
# 2 PROBLEM FORMULATION
In the standard reinforcement learning setup an agent interacts with an environment over discrete time steps. At each time step ¢ the agent observes the current state s, and selects an action a, according to a policy 7(a;|s;). The agent then receives a reward r; = r(s;, a,) and transitions to the next state 5141. The aim of learning is to maximize the expected discounted return R = $7)" 9 yr; of policy 7 where 7 ⬠[0, 1) is a discount factor.
In this work we focus on learning only from the stream of actions and observations in order to forego the need for an extrinsic reward function. Motivated by the idea that an agent capable of reaching any reachable goal state sg from the current state s has complete mastery of its environment, we pose the problem of learning in the absence of rewards as one of learning a goal-conditioned policy Ïθ(a|s; sg) with parameters θ. More speciï¬cally, we assume that the agent interacts with an environment deï¬ned by a transition distribution p(st+1|st, at). We deï¬ne a goal-reaching problem as follows. At the beginning of each episode, the agent receives a goal sg sampled from a distribution over possible goals pgoal. For example, pgoal could be the uniform distribution over all previously visited states. The agent then acts for T steps according to the goal-conditioned policy Ïθ(a|s; sg) receiving a reward of 0 for each of the ï¬rst T â 1 actions and a reward of r(sT ; sg) after the last action, where 1. The goal achievement reward function r(s; sg) measures the r(s; sg) â [0, 1] for all s and sg degree to which being in state s achieves goal sg. The episode terminates upon the agent receiving the reward r(sT ; sg) and a new episode begins.
It is straightforward to train Ïθ(a|s; sg) in a tabular environment using the indicator reward r(s; sg) = 1{s = sg}. We are, however, interested in environments with continuous high- dimensional observation spaces. While there is extensive prior work on learning goal-conditioned policies (Kaelbling, 1993; Schaul et al., 2015; Andrychowicz et al., 2017; Held et al., 2017; Pathak et al., 2018), the reward function is often hand-crafted, limiting generality of the approaches. In the few cases where the reward is learned, the learning objective is typically tied to a pre-speciï¬ed notion of visual similarity. Learning to achieve goals based purely on visual similarity is unlikely to work in complex, real world environments due to the possible variations in appearance of objects, or goal-irrelevant perceptual context. We now turn to the problem of learning a goal achievement reward function rÏ(s; sg) with parameters Ï for high-dimensional state spaces.
# 3 LEARNING A REWARD FUNCTION BY MAXIMIZING MUTUAL INFORMATION
We aim to simultaneously learn a goal-conditioned policy Ïθ and a goal achievement reward function rÏ by maximizing the mutual information between the goal state sg and the achieved state sT ,
I(sg, sT ) = H(sg) + Esg,sT â¼p(sg,sT ) log p(sg|sT )
Note that we are slightly overloading notation by treating sg as a random variable distributed according to pgoal. Similarly, sT is a random variable distributed according to the state distribution induced by running Ïθ for T steps for goal states sampled from pgoal.
1More generally the time budget T for achieving a goal need not be ï¬xed and could either depend on the goal state and the initial environment state, or be determined by the agent itself.
2
(1)
The prior work of Gregor et al. (2016) showed how to learn a set of abstract options by optimizing a similar objective, namely the mutual information between an abstract option and the achieved state. Following their approach, we simplify (1) in two ways. First, we rewrite the expectation in terms of the goal distribution pgoal and the goal conditioned policy Ïθ. Second, we lower bound the expectation term by replacing p(sg|sT ) with a variational distribution qÏ(sg|sT ) with parameters Ï following Barber & Agakov (2004), leading to
I(sg, sT ) ⥠H(sg) + Esgâ¼pgoal,s1,...sT â¼Ïθ(···|sg) log qÏ(sg|sT ). Finally, we discard the entropy term H(sg) from (2) because it does not depend on either the policy parameters θ or the variational distribution parameters Ï, giving our overall objective
ODISCERN = Esgâ¼pgoal,s1,...sT â¼Ïθ(···|sg) log qÏ(sg|sT ). This objective may seem difï¬cult to work with because the variational distribution qÏ is a distribution over possible goals sg, which in our case are high-dimensional observations, such as images. We sidestep the difï¬culty of directly modelling the density of high-dimensional observations by restricting the set of possible goals to be a ï¬nite subset of previously encountered states that evolves over time (Lin, 1993). Restricting the support of qÏ to a ï¬nite set of goals turns the problem of learning qÏ into a problem of modelling the conditional distribution of possible intended goals given an achieved state, which obviates the requirement of modelling arbitrary statistical dependencies in the observations.2
Optimization: The expectation in the DISCERN objective is with respect to the distribution of trajectories generated by the goal-conditioned policy Ïθ acting in the environment against goals drawn from the goal distribution pgoal. We can therefore optimize this objective with respect to policy parameters θ by repeatedly generating trajectories and performing reinforcement learning updates on Ïθ with a reward of log qÏ(sg|sT ) given at time T and 0 for other time steps. Optimizing the objective with respect to the variational distribution parameters Ï is also straightforward since it is equivalent to a maximum likelihood classiï¬cation objective. As will be discussed in the next section, we found that using a reward that is a non-linear transformation mapping log qÏ(sg|sT ) to [0, 1] worked better in practice. Nevertheless, since the reward for the goal conditioned-policy is a function of log qÏ(sg|sT ), training the variational distribution function qÏ amounts to learning a reward function.
Communication Game Interpretation: Dual optimization of the DISCERN objective has an ap- pealing interpretation as a co-operative communication game between two players â an imitator that corresponds to the goal-conditioned policy and a teacher that corresponds to the variational distribution. At the beginning of each round or episode of the game, the imitator is provided with a goal state. The aim of the imitator is to communicate the goal state to the teacher by taking T actions in the environment. After the imitator takes T actions, the teacher has to guess which state from a set of possible goals was given to the imitator purely from observing the ï¬nal state sT reached by the imitator. The teacher does this by assigning a probability to each candidate goal state that it was the goal given to the imitator at the start of the episode, i.e. it produces a distribution q(sg|sT ). The objective of both players is for the teacher to guess the goal given to the imitator correctly as measured by the log probability assigned by the teacher to the correct goal.
# 4 DISCRIMINATIVE EMBEDDING REWARD NETWORKS
We now describe the DISCERN algorithm â a practical instantiation of the approach for jointly learning Ïθ(a|s; sg) and r(s; sg) outlined in the previous section.
Goal distribution: We adopt a non-parametric approach to the problem of proposing goals, whereby we maintain a ï¬xed size buffer G of past observations from which we sample goals during training. We update G by replacing the contents of an existing buffer slot with an observation from the agentâs recent experience according to some substitution strategy; in this work we considered two such strategies, detailed in Appendix A3. This means that the space of goals available for training drifts as a function of the agentâs experience, and states which may not have been reachable under a poorly trained policy become reachable and available for substitution into the goal buffer, leading to a
2See e.g. Lafferty et al. (2001) for a discussion of the merits of modelling a restricted conditional distribution rather than a joint distribution when given the choice.
3
naturally induced curriculum. In this work, we sample training goals for our agent uniformly at random from the goal buffer, leaving the incorporation of more explicitly instantiated curricula to future work.
Goal achievement reward: We train a goal achievement reward function r(s; sg) used to compute rewards for the goal-conditioned policy based on a learned measure of state similarity. We parame- terize r(s; sg) as the positive part of the cosine similarity between s and sg in a learned embedding space, although shaping functions other than rectiï¬cation could be explored. The state embedding in which we measure cosine similarity is the composition of a feature transformation h(·) and a learned L2-normalized mapping ξÏ(·). In our implementation, where states and goals are represented as 2-D RGB images, we take h(·) to be the ï¬nal layer features of the convolutional network learned by the policy in order to avoid learning a second convolutional network. We ï¬nd this works well provided that while training r, we treat h(·) as ï¬xed and do not adapt the convolutional networkâs parameters with respect to the reward learnerâs loss. This has the effect of regularizing the reward learner by limiting its adaptive capacity while avoiding the need to introduce a hyperparameter weighing the two losses against one another.
We train â¬4(-) according to a goal-discrimination objective suggested by (3). However, rather than using the set of all goals in the buffer G as the set of possible classes in the goal discriminator, we sample a small subset for each trajectory. Specifically, the set of possible classes includes the goal g for the trajectory and KX decoy observations d,d2,..., dx from the same distribution as s,. Letting by = Es(h(sr)) Es (h(g)) (4)
by = Es(h(sr)) Es (h(g)) (4)
we maximize the log likelihood given by
exp (84) exp (Bly) + Dear exp (BEs(h(sr)) Eo (Ie(de))) (5) log G(sy = g|sridi,...dx, 79) = log
where β is an inverse temperature hyperparameter which we ï¬x to K + 1 in all experiments. Note that (5) is a maximum log likelihood training objective for a softmax nearest neighbour classiï¬er in a learned embedding space, making it similar to a matching network (Vinyals et al., 2016). Intuitively, updating the embedding Î¾Ï using the objective in (5) aims to increase the cosine similarity between e(sT ) and e(g) and to decrease the cosine similarity between e(sT ) and the decoy embeddings e(d), . . . , e(dK). Subsampling the set of possible classes as we do is a known method for approximate maximum likelihood training of a softmax classiï¬er with many classes (Bengio & S´en´ecal, 2003).
We use max(0,,) as the reward for reaching state sy when given goal g. We found that this reward function is better behaved than the reward log G(s, = g|sr;d1,...dx,79) suggested by the DISCERN objective in Sectionfisince it is scaled to lie in [0, 1]. The reward we use is also less noisy since, unlike log q, it does not depend on the decoy states.
Goal-conditioned policy: The goal-conditioned policy 79 (a|s; sq) is trained to optimize the goal achievement reward r(s; s,). In this paper, 7(a|s; s,) is an e-greedy policy of a goal-conditioned action-value function Q with parameters 0. Q is trained using Q-learning and minibatch experience replay; specifically, we use the variant of Q(A) due to Peng (see Chapter 7, (1998)).
Goal relabelling: We use a form of goal relabelling (Kaelbling, 1993) or hindsight experience replay (Andrychowicz et al., 2017; Nair & Hinton, 2006) as a source successfully achieved goals as well as to regularize the embedding e(·). Speciï¬cally, for the purposes of parameter updates (in both the policy and the reward learner) we substitute the goal, with probability pHER, with an observation selected from the ï¬nal H steps of the trajectory, and consider the agent to have received a reward of 1. The motivation, in the case of the policy, is similar to that of previous work, i.e. that being in state st should correspond to having achieved the goal of reaching st. When employed in the reward learner, it amounts to encouraging temporally consistent state embeddings (Mobahi et al., 2009; Sermanet et al., 2017), i.e. encouraging observations which are nearby in time to have similar embeddings.
Pseudocode for the DISCERN algorithm, decomposed into an experience-gathering (possibly dis- tributed) actor process and a centralized learner process, is given in Algorithm 1.
# 5 RELATED WORK
The problem of reinforcement learning in the context of multiple goals dates at least to Kaelbling (1993), where the problem was examined in the context of grid worlds where the state space is
4
# Algorithm 1: DISCERN procedure ACTOR
procedure ACTOR Input : Time budget T, policy parameters 6, goal embedding parameters ¢, shared goal buffer hindsight replay window H, hindsight replay rate pyr repeat tg <- BEHAVIOR-POLICY(@) /* e.g. e-greedy */ g~G ryr-1 <0 fort 1...T do Take action a, ~ 79(s,; g) obtaining 5,41 from p(s:+1|S¢, a) G < PROPOSE-GOAL-SUBSTITUTION(G, 81) /* See Appendix[A3]*/ end with probability purr, | Sample syer uniformly from {s7_y,..., sr} and set g + sper, rr + 1 otherwise Compute , using rp + max(0, ly) Send (81.7, a1:7, 11:7, 9) to the learner. Poll the learner periodically for updated values of 6,0. Reset the environment if the episode has terminated. until termination procedure LEARNER Input :Batch size B, number of decoys K, initial policy parameters 6, initial goal embedding parameters } repeat Assemble batch of experience B = {(s).,a).7,re.p. 9°) }E, forb<1...Bdo | Sample K decoy goals d},db,...,d4.~G end Use an off-policy reinforcement learning algorithm to update @ based on B Update ¢ to maximize + an log (sy = gâ|sr; di,...dx, 79) computed by until termination
small and enumerable. Sutton et al. (2011) proposed generalized value functions (GVFs) as a way of representing knowledge about sub-goals, or as a basis for sub-policies or options. Universal Value Function Approximators (UVFAs) (Schaul et al., 2015) extend this idea by using a function approximator to parameterize a joint function of states and goal representations, allowing compact representation of an entire class of conditional value functions and generalization across classes of related goals.
While the above works assume a goal achievement reward to be available a priori, our work includes an approach to learning a reward function for goal achievement jointly with the policy. Several recent works have examined reward learning for goal achievement in the context of the Generative Adversarial Networks (GAN) paradigm (Goodfellow et al., 2014). The SPIRAL (Ganin et al., 2018) algorithm trains a goal conditioned policy with a reward function parameterized by a Wasserstein GAN (Arjovsky et al., 2017) discriminator. Similarly, AGILE (Bahdanau et al., 2018) learns an instruction-conditional policy where goals in a grid-world are speciï¬ed in terms of predicates which should be satisï¬ed, and a reward function is learned using a discriminator trained to distinguish states achieved by the policy from a dataset of instruction, goal state pairs.
Reward learning has also been used in the context of imitation. Ho & Ermon (2016) derives an adversarial network algorithm for imitation, while time-contrastive networks (Sermanet et al., 2017) leverage pre-trained ImageNet classiï¬er representations to learn a reward function for robotics skills from video demonstrations, including robotic imitation of human poses. Universal Planning Networks (UPNs) (Srinivas et al., 2018) learn a state representation by training a differentiable planner to imitate expert trajectories. Experiments showed that once a UPN is trained the state representation it learned
5
can be used to construct a reward function for visually speciï¬ed goals. Bridging goal-conditioned policy learning and imitation learning, Pathak et al. (2018) learns a goal-conditioned policy and a dynamics model with supervised learning without expert trajectories, and present zero-shot imitation of trajectories from a sequence of images of a desired task.
A closely related body of work to that of goal-conditioned reinforcement learning is that of un- supervised option or skill discovery. Machado & Bowling (2016) proposes a method based on an eigendecomposition of differences in features between successive states, further explored and extended in Machado et al. (2017). Variational Intrinsic Control (VIC) (Gregor et al., 2016) leverages the same lower bound on the mutual information as the present work in an unsupervised control setting, in the space of abstract options rather than explicit perceptual goals. VIC aims to jointly maximize the entropy of the set of options while making the options maximally distinguishable from their ï¬nal states according to a parametric predictor. Recently, Eysenbach et al. (2018) showed that a special case of the VIC objective can scale to signiï¬cantly more complex tasks and provide a useful basis for low-level control in a hierarchical reinforcement learning context.
Other work has explored learning policies in tandem with a task policy, where the task or environment rewards are assumed to be sparse. Florensa et al. (2017) propose a framework in which low-level skills are discovered in a pre-training phase of a hierarchial system based on simple-to-design proxy rewards, while Riedmiller et al. (2018) explore a suite of auxiliary tasks through simultaneous off-policy learning.
Several authors have explored a pre-training stage, sometimes paired with ï¬ne-tuning, based on unsupervised representation learning. P´er´e et al. (2018) and Laversanne-Finot et al. (2018) employ a two-stage framework wherein unsupervised representation learning is used to learn a model of the observations from which to sample goals for control in simple simulated environments. Nair et al. (2018) propose a similar approach in the context of model-free Q-learning applied to 3-dimensional simulations and robots. Goals for training the policy are sampled from the modelâs prior, and a reward function is derived from the latent codes. This contrasts with our non-parametric approach to selecting goals, as well as our method for learning the goal space online and jointly with the policy.
An important component of our method is a form of goal relabelling, introduced to the reinforcement learning literature as hindsight experience replay by Andrychowicz et al. (2017), based on the intuition that any trajectory constitutes a valid trajectory which achieves the goal speciï¬ed by its own terminal observation. Earlier, Nair & Hinton (2006) employed a related scheme in the context of supervised learning of motor programs, where a program encoder is trained on pairs of trajectory realizations and programs obtained by expanding outwards from a pre-speciï¬ed prototypical motor program through the addition of noise. Veeriah et al. (2018) expands upon hindsight replay and the all-goal update strategy proposed by Kaelbling (1993), generalizing the latter to non-tabular environments and exploring related strategies for skill discovery, unsupervised pre-training and auxiliary tasks. Levy et al. (2018) propose a hierarchical Q-learning system which employs hindsight replay both conventionally in the lower-level controller and at higher levels in the hierarchy. Nair et al. (2018) also employ a generalized goal relabelling scheme whereby the policy is trained based on a trajectoryâs achievement not just of its own terminal observation, but a variety of retrospectively considered possible goals.
# 6 EXPERIMENTS
We evaluate, both qualitatively and quantitatively, the ability of DISCERN to achieve visually- speciï¬ed goals in three diverse domains â the Arcade Learning Environment (Bellemare et al., 2013), continuous control tasks in the DeepMind Control Suite (Tassa et al., 2018), and DeepMind Lab, a 3D ï¬rst person environment (Beattie et al., 2016). Experimental details including architecture details, details of distributed training, and hyperparameters can be found in the Appendix. We compared DISCERN to several baseline methods for learning goal-conditioned policies:
Conditioned Autoencoder (AE): In order to specifically interrogate the role of the discriminative reward learning criterion, we replace the discriminative criterion for embedding learning with an L? reconstruction loss on h(s); that is, in addition to â¬4(-), we learn an inverse mapping &1(-) with a separate set of parameters, and train both with the criterion ||h(s) â â¬'(â¬(h(s)))||?-
6
Conditioned WGAN Discriminator: We compare to an adversarial reward on the domains consid- ered according to the protocol of| (2018), who successfully used a WGAN discriminator as a reward for training agents to perform inverse graphics tasks. The discriminator takes two pairs of images â (1) a real pair of goal images (s,, 8,4) and (2) a fake pair consisting of the terminal state of the agent and the goal frame (s;, s,). The output of the discriminator is used as the reward function for the policy. Unlike our DISCERN implementation and the conditioned autoencoder baseline, we train the WGAN discriminator as a separate convolutional network directly from pixels, as in previous work. Pixel distance reward (L2): Finally, we directly compare to a reward based on L? distance in pixel space, equal to exp (â||s; â 89||?/@pixer) Where Gpixe1 is a hyperparameter which we tuned on a per-environment basis.
All the baselines use the same goal-conditioned policy architecture as DISCERN. The baselines also used hindsight experience replay in the same way as DISCERN. They can therefore be seen as ablations of DISCERNâs goal-achievement reward learning mechanism.
6.1 ATARI
The suite of 57 Atari games provided by the Arcade Learning Environment (Bellemare et al., 2013) is a widely used benchmark in the deep reinforcement learning literature. We compare DISCERN to other methods on the task of achieving visually speciï¬ed goals on the games of Seaquest and Montezumaâs Revenge. The relative simplicity of these domains makes it possible to handcraft a detector in order to localize the controllable aspects of the environment, namely the submarine in Seaquest and Panama Joe, the character controlled by the player in Montezumaâs Revenge.
We evaluated the methods by running the learned goal policies on a ï¬xed set of goals and measured the percentage of goals it was able to reach successfully. We evaluated both DISCERN and the baselines with two different goal buffer substitution strategies, uniform and diverse, which are described in the Appendix. A goal was deemed to be successfully achieved if the position of the avatar in the last frame was within 10% of the playable area of the position of the avatar in the goal for each controllable dimension. The controllable dimensions in Atari were considered to be the x- and y-coordinates of the avatar. The results are displayed in Figure 1a. DISCERN learned to achieve a large fraction of goals in both Seaquest and Montezumaâs Revenge while none of the baselines learned to reliably achieve goals in either game. We hypothesize that the baselines failed to learn to control the avatars because their objectives are too closely tied to visual similarity. Figure 1b shows examples of goal achievement on Seaquest and Montezumaâs Revenge. In Seaquest, DISCERN learned to match the position of the submarine in the goal image while ignoring the position of the ï¬sh, since the ï¬sh are not directly controllable. We have provided videos of the goal-conditioned policies learned by DISCERN on Seaquest and Montezumaâs Revenge at https://sites.google.com/view/discern-paper.
6.2 DEEPMIND CONTROL SUITE TASKS
The DeepMind Control Suite (Tassa et al., 2018) is a suite of continuous control tasks built on the MuJoCo physics engine (Todorov et al., 2012). While most frequently used to evaluate agents which receive the underlying state variables as observations, we train our agents on pixel renderings of the scene using the default environment-speciï¬ed camera, and do not directly observe the state variables.
Agents acting greedily with respect to a state-action value function require the ability to easily maximize Q over the candidate actions. For ease of implementation, as well as comparison to other considered environments, we discretize the space of continuous actions to no more than 11 unique actions per environment (see Appendix A4.1).
The availability of an underlying representation of the physical state, while not used by the learner, provides a useful basis for comparison of achieved states to goals. We mask out state variables relating to entities in the scene not under the control of the agent; for example, the position of the target in the reacher or manipulator domains.
DISCERN is compared to the baselines on a ï¬xed set of 100 goals with 20 trials for each goal. The goals are generated by acting randomly for 25 environment steps after initialization. In the case of
7
(a)
(b)
Figure 1: a) Percentage of goals successfully achieved on Seaquest and Montezumaâs Revenge. b) Examples of goals achieved by DISCERN on the games of Seaquest (top) and Montezumaâs Revenge (bottom). For each game, the four goal states are shown in the top row. Below each goal is the averaged (over 5 trials) ï¬nal state achieved by the goal-conditioned policy learned by DISCERN after T = 50 steps for the goal above.
Figure 2: Average achieved frames for point mass (task easy), reacher (task hard), manipulator (task bring ball), pendulum (task swingup), finger (task spin) and ball in cup (task catch) environments. The goal is shown in the top row and the achieved frame is shown in the bottom row.
cartpole, we draw the goals from a random policy acting in the environment set to the balance task, where the pole is initialized upwards, in order to generate a more diverse set of goals against which to measure. Figure 3 compares learning progress of 5 independent seeds for the âuniformâ goal replacement strategy (see Appendix A5 for results with âdiverseâ goal replacement) for 6 domains. We adopt the same deï¬nition of achievement as in Section 6.1. Figure 2 summarizes averaged goal achievement frames on these domains except for the cartpole domain for policies learned by DISCERN. Performance on cartpole is discussed in more detail in Figure 7 of the Appendix.
The results show that in aggregate, DISCERN outperforms baselines in terms of goal achievement on several, but not all, of the considered Control Suite domains. In order to obtain a more nuanced understanding of DISCERNâs behaviour when compared with the baselines, we also examined achievement in terms of the individual dimensions of the controllable state. Figure 4 shows goal achievement separately for each dimension of the underlying state on four domains. The per- dimension results show that on difï¬cult goal-achievement tasks such as those posed in cartpole (where most proposed goal states are unstable due to the effect of gravity) and finger (where a free-spinning piece is only indirectly controllable) DISCERN learns to reliably match the major dimensions of controllability such as the cart position and ï¬nger pose while ignoring the other
8
ball_in_cup cartpole finger op FH SERN â ON RE eo 10 f= SW â oN RE eo ap [= CERN â Wo RE Zoe foe Zoe 406 Hos 406 Poa Poa Poa Sop Boe Zoa âââSâ os 70 Ts 30 O5 To Fr 2.0 os ry Ts 20 Actor steps 1 Actor steps 18 Actor steps 1 pendulum point_mass reacher wp 10 | â DISCERN WGAN AE a > 10 [â PSCERN WGAN aE a gy 1.0 | â DISCERN wan ae a Soe foe Soe 406 Hos 406 Bo4 _â A 2 o4 Poa 5 o2 3 o2 5 o2 oo 00 oo â os 70 Ts 30 oS To Fr 2.0 o5 70 Ts 20 Actor steps us Actor steps ue Actor steps us
op eo 10 Zoe foe 406 Hos Poa Poa Sop Boe os 70 Ts 30 O5 To Actor steps 1 Actor pendulum point_mass wp 10 | â DISCERN WGAN AE a > 10 [â PSCERN WGAN Soe foe 406 Hos Bo4 _â A 2 o4 5 o2 3 o2 oo 00 os 70 Ts 30 oS To Actor steps us Actor Figure 3: Quantitative evaluation of goal achievement âuniformâ goal substitution scheme (see Appendix goals achieved over a fixed goal set (100 images per ball_in_cup - Per dimension plot _ Se P 201 got gat gatt oot 201" 40M ygot 90%" 001" finger - Per dimension plot
eo ap Zoe 406 Poa Zoa âââSâ Fr 2.0 os ry Ts 20 steps 18 Actor steps 1 reacher aE a gy 1.0 | â DISCERN wan ae a Soe 406 Poa 5 o2 oo â Fr 2.0 o5 70 Ts 20 steps ue Actor steps us on continuous control domains using the . For each method, we show the fraction of domain). cartpole - Per dimension plot ee ee 201 got got got soot 20M 20! cot" got" 200! reacher - Per dimension plot
Soe foe 406 Hos Bo4 _â A 2 o4 5 o2 3 o2 oo 00 os 70 Ts 30 oS To Actor steps us Actor Figure 3: Quantitative evaluation of goal achievement âuniformâ goal substitution scheme (see Appendix goals achieved over a fixed goal set (100 images per ball_in_cup - Per dimension plot _ Se P 201 got gat gatt oot 201" 40M ygot 90%" 001" finger - Per dimension plot Wean a jor SS es 200 gat got got oot 20" ot yor got yo0t"
Soe 406 Poa 5 o2 oo â Fr 2.0 o5 70 Ts 20 steps ue Actor steps us on continuous control domains using the . For each method, we show the fraction of domain). cartpole - Per dimension plot ee ee 201 got got got soot 20M 20! cot" got" 200! reacher - Per dimension plot jo.2 a 201 got got got soot 20M 20! cot" got" 200!
Figure 3: Quantitative evaluation of goal achievement on continuous control domains using the âuniformâ goal substitution scheme (see Appendix A3). For each method, we show the fraction of goals achieved over a ï¬xed goal set (100 images per domain).
ball_in_cup - Per dimension plot cartpole - Per dimension plot _ Se P ee ee 201 got gat gatt oot 201" 40M ygot 90%" 001" 201 got got got soot 20M 20! cot" got" 200! finger - Per dimension plot reacher - Per dimension plot Wean a jor jo.2 SS es a 200 gat got got oot 20" ot yor got yo0t" 201 got got got soot 20M 20! cot" got" 200!
Figure 4: Per-dimension quantitative evaluation of goal achievement on continuous control domains using the âuniformâ goal substitution scheme (Appendix A3). Each subplot corresponds to a domain, with each group of colored rows representing a method. Each individual row represents a dimension of the controllable state (such as a joint angle). The color of each cell indicates the fraction of goal states for which the method was able to match the ground truth value for that dimension to within 10% of the possible range. The position along the x-axis indicates the point in training in millions of frames. For example, on the reacher domain DISCERN learns to match both dimensions of the controllable state, but on the cartpole domain it learns to match the ï¬rst dimension (cart position) but not the second dimension (pole angle).
dimensions, whereas none of the baselines learned to reliably match any of the controllable state dimensions on the difï¬cult tasks cartpole and finger.
We omitted the manipulator domain from these ï¬gures as none of the methods un- der consideration achieved non-negligible goal achievement performance on this domain, however a video showing the policy learned by DISCERN on this domain can be found at https://sites.google.com/view/discern-paper. The policy learned on the
9
Pa Ce Mi a Ea
Figure 5: Average achieved frames over 30 trials from a random initialization on the rooms watermaze task. Goals are shown in the top row while the corresponding average achieved frames are in the bottom row.
manipulator domain shows that DISCERN was able to discover several major dimensions of controllability even on such a challenging task, as further evidenced by the per-dimension analysis on the manipulator domain in Figure 8 in the Appendix.
6.3 DEEPMIND LAB
DeepMind Lab (Beattie et al., 2016) is a platform for 3D ï¬rst person reinforcement learning environ- ments. We trained DISCERN on the watermaze level and found that it learned to approximately achieve the same wall and horizon position as in the goal image. While the agent did not learn to achieve the position and viewpoint shown in a goal image as one may have expected, it is encouraging that our approach learns a reasonable space of goals on a ï¬rst-person 3D domain in addition to domains with third-person viewpoints like Atari and the DM Control Suite.
# 7 DISCUSSION
We have presented a system that can learn to achieve goals, speciï¬ed in the form of observations from the environment, in a purely unsupervised fashion, i.e. without any extrinsic rewards or expert demonstrations. Integral to this system is a powerful and principled discriminative reward learning objective, which we have demonstrated can recover the dominant underlying degrees of controllability in a variety of visual domains.
In this work, we have adopted a ï¬xed episode length of T in the interest of simplicity and com- putational efï¬ciency. This implicitly assumes not only that all sampled goals are approximately achievable in T steps, but that the policy need not be concerned with ï¬nishing in less than the allotted number of steps. Both of these limitations could be addressed by considering schemes for early termination based on the embedding, though care must be taken not to deleteriously impact training by terminating episodes too early based on a poorly trained reward embedding. Relatedly, our goal selection strategy is agnostic to both the state of the environment at the commencement of the goal episode and the current skill proï¬le of the policy, utilizing at most the content of the goal itself to drive the evolution of the goal buffer G. We view it as highly encouraging that learning proceeds using such a naive goal selection strategy, however more sophisticated strategies, such as tracking and sampling from the frontier of currently achievable goals (Held et al., 2017), may yield substantial improvements.
DISCERNâs ability to automatically discover controllable aspects of the observation space is a highly desirable property in the pursuit of robust low-level control. A natural next step is the incorporation of DISCERN into a deep hierarchical reinforcement learning setup (Vezhnevets et al., 2017; Levy et al., 2018; Nachum et al., 2018) where a meta-policy for proposing goals is learned after or in tandem with a low-level controller, i.e. by optimizing an extrinsic reward signal.
# ACKNOWLEDGEMENTS
We thank Marlos C. Machado, Will Dabney, Hado van Hasselt, and anonymous reviewers for useful feedback on drafts of the manuscript. We additionally thank Andriy Mnih and Carlos Florensa for helpful discussions, and Lasse Espeholt, Tom Erez and Dumitru Erhan for invaluable technical assistance.
10
# REFERENCES
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems 30, pp. 5048â5058. 2017.
Martin Arjovsky, Soumith Chintala, and L´eon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Pushmeet Kohli, and Edward Grefenstette. Learning to follow language instructions with adversarial reward induction. arXiv preprint arXiv:1806.01946, 2018.
David Barber and Felix V. Agakov. Information maximization in noisy channels : A variational approach. In S. Thrun, L. K. Saul, and B. Sch¨olkopf (eds.), Advances in Neural Information Processing Systems 16, pp. 201â208. MIT Press, 2004.
Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich K¨uttler, Andrew Lefrancq, Simon Green, V´ıctor Vald´es, Amir Sadik, et al. Deepmind lab. arXiv preprint arXiv:1612.03801, 2016.
Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environ- ment: An evaluation platform for general agents. Journal of Artiï¬cial Intelligence Research, 47: 253â279, 2013.
Yoshua Bengio and Jean-S´ebastien S´en´ecal. Quick training of probabilistic neural nets by importance sampling. In Proceedings of the conference on Artiï¬cial Intelligence and Statistics (AISTATS), 2003.
Silvia Chiappa, S´ebastien Racani`ere, Daan Wierstra, and Shakir Mohamed. Recurrent environment simulators. In International Conference on Learning Representations, 2018.
Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. arXiv preprint arXiv:1802.06070, 2018.
Carlos Florensa, Yan Duan, and Pieter Abbeel. Stochastic neural networks for hierarchical reinforce- ment learning. In International Conference on Learning Representations, 2017.
Yaroslav Ganin, Tejas Kulkarni, Igor Babuschkin, SM Eslami, and Oriol Vinyals. Synthesizing programs for images using reinforced adversarial learning. arXiv preprint arXiv:1804.01118, 2018.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural informa- tion processing systems, pp. 2672â2680, 2014.
Karol Gregor, Danilo Jimenez Rezende, and Daan Wierstra. Variational intrinsic control. 2016.
Shane Gu, Tim Lillicrap, Ilya Sutskever, and Sergei Levine. Continuous deep q-learning with model-based acceleration. In Proc. of ICML, 2016.
David Ha and J¨urgen Schmidhuber. World models. arXiv preprint arXiv:1803.10122, 2018.
David Held, Xinyang Geng, Carlos Florensa, and Pieter Abbeel. Automatic goal generation for reinforcement learning agents. arXiv preprint arXiv:1705.06366, 2017.
Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pp. 4565â4573, 2016.
Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado van Hasselt, and David Silver. Distributed prioritized experience replay. In International Conference on Learning Representations, 2018.
11
Leslie Pack Kaelbling. Learning to achieve goals. In Proceedings of the 13th International Joint Conference on Artiï¬cial Intelligence, pp. 1094â1099, 1993.
Alex Kulesza, Ben Taskar, et al. Determinantal point processes for machine learning. Foundations and Trends®) in Machine Learning, 5(2-3):123-286, 2012.
John Lafferty, Andrew McCallum, and Fernando CN Pereira. Conditional random ï¬elds: Probabilistic models for segmenting and labeling sequence data. pp. 282â289, 2001.
Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, 40, 2017.
Adrien Laversanne-Finot, Alexandre Pere, and Pierre-Yves Oudeyer. Curiosity driven exploration of learned disentangled goal spaces. In Conference on Robot Learning, pp. 487â504, 2018.
Andrew Levy, Robert Platt, and Kate Saenko. Hierarchical reinforcement learning with hindsight. arXiv preprint arXiv:1805.08180, 2018.
Long-Ji Lin. Reinforcement learning for robots using neural networks. Technical report, DTIC Document, 1993.
Marlos C Machado and Michael Bowling. Learning purposeful behaviour in the absence of rewards. arXiv preprint arXiv:1605.07700, 2016.
Marlos C Machado, Marc G Bellemare, and Michael Bowling. A laplacian framework for option In International Conference on Machine Learning, pp. discovery in reinforcement learning. 2295â2304, 2017.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 2015.
Hossein Mobahi, Ronan Collobert, and Jason Weston. Deep learning from temporal coherence in video. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 737â744. ACM, 2009.
Oï¬r Nachum, Shane Gu, Honglak Lee, and Sergey Levine. Data-efï¬cient hierarchical reinforcement learning. arXiv preprint arXiv:1805.08296, 2018.
Ashvin Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, and Sergey Levine. Visual reinforcement learning with imagined goals. arXiv preprint arXiv:1807.04742, 2018.
Vinod Nair and Geoffrey E Hinton. Inferring motor programs from images of handwritten digits. In Y. Weiss, B. Sch¨olkopf, and J. C. Platt (eds.), Advances in Neural Information Processing Systems 18, pp. 515â522. MIT Press, 2006.
Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in atari games. In Advances in Neural Information Processing Systems 28, pp. 2863â2871. 2015.
Deepak Pathak, Parsa Mahmoudieh, Guanghao Luo, Pulkit Agrawal, Dian Chen, Yide Shentu, Evan Shelhamer, Jitendra Malik, Alexei A Efros, and Trevor Darrell. Zero-shot visual imitation. arXiv preprint arXiv:1804.08606, 2018.
Alexandre P´er´e, S´ebastien Forestier, Olivier Sigaud, and Pierre-Yves Oudeyer. Unsupervised learning of goal spaces for intrinsically motivated goal exploration. arXiv preprint arXiv:1803.00781, 2018.
Martin Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, Tom Van de Wiele, Volodymyr Mnih, Nicolas Heess, and Jost Tobias Springenberg. Learning by playing- solving sparse reward tasks from scratch. In International Conference on Learning Representations, 2018.
Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In International Conference on Machine Learning, pp. 1312â1320, 2015.
12
Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Eric Jang, Stefan Schaal, Jasmine Hsu, and Sergey Levine. Time-contrastive networks: Self-supervised learning from video. arXiv preprint arXiv:1704.06888, 2017.
Aravind Srinivas, Allan Jabri, Pieter Abbeel, Sergey Levine, and Chelsea Finn. Universal planning networks. 2018.
Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998. ISBN 0262193981.
Richard S Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M Pilarski, Adam White, and Doina Precup. Horde: A scalable real-time architecture for learning knowledge from unsuper- vised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pp. 761â768. International Foundation for Autonomous Agents and Multiagent Systems, 2011.
Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, et al. Deepmind control suite. arXiv preprint arXiv:1801.00690, 2018.
T Tieleman and G Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026â 5033. IEEE, 2012.
Vivek Veeriah, Junhyuk Oh, and Satinder Singh. Many-goals reinforcement learning. arXiv preprint arXiv:1806.09605, 2018.
Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. Feudal networks for hierarchical reinforcement learning. In International Conference on Machine Learning, pp. 3540â3549, 2017.
Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pp. 3630â3638, 2016.
Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. In Proceedings of The 33rd Dueling network architectures for deep reinforcement learning. International Conference on Machine Learning, pp. 1995â2003, 2016.
13
# APPENDIX
A1 DISTRIBUTED TRAINING
We employ a distributed reinforcement learning architecture inspired by the IMPALA reinforcement learning architecture (Espeholt et al.| 2018), with a centralized GPU learner batching parameter updates on experience collected by a large number of CPU-based parallel actors. While learns a stochastic policy through the use of an actor-critic architecture, we instead learn a goal-conditioned state-action value function with Q-learning. Each actor acts e-greedily with respect to a local copy of the Q network, and sends observations s;, actions a;, rewards r; and discounts 11 for a trajectory to the learner. Following |Horgan et al.|(2018), we use a different value of ¢ for each actor, as this has been shown to improve exploration. The learner batches re-evaluation of the convolutional network and LSTM according to the action trajectories supplied and performs parameter updates, periodically broadcasting updated model parameters to the actors. As Q-learning is an off-policy algorithm, the experience traces sent to the learner can be used in the usual n-step Q-learning update without the need for an off-policy correction as in[Espeholt et al.] . also maintain actor-local replay buffers of previous actor trajectories and use them to perform both standard experience replay ( ) and our variant of hindsight experience replay (Andrychowicz
A2 ARCHITECTURE DETAILS
Our network architectures closely resemble those in Espeholt et al. (2018), with policy and value heads replaced with a Q-function. We apply the same convolutional network to both st and sg and concatenate the ï¬nal layer outputs. Note that the convolutional network outputs for sg need only be computed once per episode. We include a periodic representation (sin(2Ït/T ), cos(2Ït/T )) of the current time step, with period equal to the goal length achievement period T , as an extra input to the network. The periodic representation is processed by a single hidden layer of rectiï¬ed linear units and is concatenated with the visual representations fed to the LSTM. While not strictly necessary, we ï¬nd that this allows the agent to become better at achieving goal states which may be unmaintainable due to their instability in the environment dynamics.
The output of the LSTM is the input to a dueling action-value output network (Wang et al., 2016). In all of our experiments, both branches of the dueling network are linear mappings. That is, given LSTM outputs Ït, we compute the action values for the current time step t as
; ; 1 ; Q(atly) = div + | de Wa, -â = > We wa, | +b (6) n , ay
# A3 GOAL BUFFER
We experimented with two strategies for updating the goal buffer. In the ï¬rst strategy, which we call uniform, the current observation replaces a uniformly selected entry in the goal buffer with probability preplace. The second strategy, which we refer to as diverse goal sampling attempts to maintain a goal buffer that more closely approximates the uniform distribution over all observation. In the diverse goal strategy, we consider the current observation for addition to the goal buffer with probability preplace at each step during acting. If the current observation s is considered for addition to the goal buffer, then we select a random removal candidate sr by sampling uniformly from the goal buffer and replace it with s if sr is closer to the rest of the goal buffer than s. If s is closer to the rest of the goal buffer than sr then we still replace sr with s with probability paddânonâdiverse. We used L2 distance in pixel space for the diverse sampling strategy and found it to greatly increase the coverage of states in the goal buffer, especially early during training. This bears some relationship to Determinantal Point Processes (Kulesza et al., 2012), and goal-selection strategies with a more explicit theoretical foundation are a promising future direction.
A4 EXPERIMENTAL DETAILS
The following hyper-parameters were used in all of the experiments described in Section 6. All weight matrices are initialized using a standard truncated normal initializer, with the standard deviation
14
inversely proportional to the square root of the fan-in. We maintain a goal buffer of size 1024 and use preplace = 10â3. We also use paddânonâdiverse = 10â3. For the teacher, we choose ξÏ(·) to be an L2-normalized single layer of 32 tanh units, trained in all experiments with 4 decoys (and thus, according to our heuristic, β equal to 5). For hindsight experience replay, a highsight goal is substituted 25% of the time. These goals are chosen uniformly at random from the last 3 frames of the trajectory. Trajectories were set to be 50 steps long for Atari and DeepMind Lab and 100 for the DeepMind control suite. It is important to note that the environment was not reset after each trajectory, but rather the each new trajectory begins where the previous one ended. We train the agent and teacher jointly with RMSProp (Tieleman & Hinton, 2012) with a learning rate of 10â4. We follow the preprocessing protocol of Mnih et al. (2015), resizing to 84 à 84 pixels and scaling 8-bit pixel values to lie in the range [0, 1]. While originally designed for Atari, we apply this preprocessing pipeline across all environments used in this paper.
# A4.1 CONTROL SUITE
In the point mass domain we use a control step equal to 5 times the task-speciï¬ed default, i.e. the agent acts on every ï¬fth environment step (Mnih et al., 2015). In all other Control Suite domains, we use the default. We use the âeasyâ version of the task where actuator semantics are ï¬xed across environment episodes.
Discrete action spaces admit function approximators which simultaneously compute the action values for all possible actions, as popularized in Mnih et al. (2015). The action with maximal Q-value can thus be identiï¬ed in time proportional to the cardinality of the action space. An enumeration of possible actions is no longer possible in the continuous setting. While approaches exist to enable continuous maximization in closed form (Gu et al., 2016), they come at the cost of greatly restricting the functional form of Q.
For ease of implementation, as well as comparison to other considered environments, we instead discretize the space of continuous actions. For all Control Suite environments considered except manipulator, we discretize an A-dimensional continuous action space into 3A discrete actions, consisting of the Cartesian product over action dimensions with values in {â1, 0, 1}. In the case of manipulator, we adopt a âdiagonalâ discretization where each action consists of setting one actuator to ±1, and all other actuators to 0, with an additional action consisting of every actuator being set to 0. This is a reasonable choice for manipulator because any position can be achieved by a concatenation of actuator actions, which may not be true of more complex Control Suite environments such as humanoid, where the agentâs body is subject to gravity and successful trajectories may require multi-joint actuation in a single control time step. The subset of the Control Suite considered in this work was chosen primarily such that the discretized action space would be of a reasonable size. We leave extensions to continuous domains to future work.
A5 ADDITIONAL EXPERIMENTAL RESULTS
A5.1 ATARI
We ran two additional baselines on Seaquest and Montezumaâs Revenge, ablating our use of hindsight experience replay in opposite ways. One involved training the goal-conditioned policy only in hindsight, without any learned goal achievement reward, i.e. pHER = 1. This approach achieved 12% of goals on Seaquest and 11.4% of goals on Montezumaâs Revenge, making it comparable to a uniform random policy. This result underscores the importance of learning a goal achievement reward. The second baseline consisted of DISCERN learning a goal achievement reward without hindsight experience replay, i.e. pHER = 0. This also performed poorly, achieving 11.4% of goals on Seaquest and 8% of goals on Montezumaâs Revenge. Taken together, these preliminary results suggest that the combination of hindsight experience replay and a learned goal achievement reward is important.
A5.2 CONTROL SUITE
For the sake of completeness, Figure 6 reports goal achievement curves on Control Suite domains using the âdiverseâ goal selection scheme.
15
ball_in_cup cartpole finger DISCERN Wan 7E re) DISCERN WAN 7E re) DISCERN WAN 7E re) 08 Fraction goals achieved Fraction goals achieved Fraction goals achieved Actor steps 1 Actor steps 18 Actor steps 1 pendulum point_mass reacher wp 10 | â DISCERN WGAN AE a > 10 [â PSCERN WGAN AE a p10 | â DISCERN WAN ae a a Zoe ge foe a fos foe 2a oeSeaernwrrr row] Poa oa & & B02 10 15 20 05 10 15 2.0 05 10 15 20 Actor steps 1s Actor steps 18 Actor steps 1s
Figure 6: Results for Control Suite tasks using the âdiverseâ goal substitution scheme.
Figure 7: Average goal achievement on cartpole. Top row shows the goals. Middle row shows achievement by the Autoencoder baseline. Bottom row shows average goal achievement by DISCERN. Shading of columns is for emphasis. DISCERN always matches the cart position. The autoencoder baseline matches both cart and pole position when the pole is pointing down, but fails to match either when the pole is pointing up.
Figure 7 displays goal achievements for DISCERN and the Autoencoder baseline, highlighting DISCERNâs preference for communicating with the cart position, and robustness to the pole positions unseen during training.
16
manipulator - Per dimension plot DISCERN Fraction achieved 1 WGAN 08 0.6 0.4 0.2 20% got gor got ,o0M 420" ya0M 460! gor 590" Actor steps
Figure 8: Per-dimension quantitative evaluation on the manipulator domain. See Figure 4 for a description of the visualization. DISCERN learns to reliably control more dimensions of the underlying state than any of the baselines.
17 | {
"id": "1605.07700"
} |
1811.11206 | Partitioned Variational Inference: A unified framework encompassing federated and continual learning | Variational inference (VI) has become the method of choice for fitting many
modern probabilistic models. However, practitioners are faced with a fragmented
literature that offers a bewildering array of algorithmic options. First, the
variational family. Second, the granularity of the updates e.g. whether the
updates are local to each data point and employ message passing or global.
Third, the method of optimization (bespoke or blackbox, closed-form or
stochastic updates, etc.). This paper presents a new framework, termed
Partitioned Variational Inference (PVI), that explicitly acknowledges these
algorithmic dimensions of VI, unifies disparate literature, and provides
guidance on usage. Crucially, the proposed PVI framework allows us to identify
new ways of performing VI that are ideally suited to challenging learning
scenarios including federated learning (where distributed computing is
leveraged to process non-centralized data) and continual learning (where new
data and tasks arrive over time and must be accommodated quickly). We showcase
these new capabilities by developing communication-efficient federated training
of Bayesian neural networks and continual learning for Gaussian process models
with private pseudo-points. The new methods significantly outperform the
state-of-the-art, whilst being almost as straightforward to implement as
standard VI. | http://arxiv.org/pdf/1811.11206 | Thang D. Bui, Cuong V. Nguyen, Siddharth Swaroop, Richard E. Turner | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20181127 | 20181127 | 8 1 0 2
v o N 7 2 ] L M . t a t s [ 1 v 6 0 2 1 1 . 1 1 8 1 : v i X r a
# Partitioned Variational Inference: A uniï¬ed framework encompassing federated and continual learning
# Thang D. Bui1, Cuong V. Nguyen2, Siddharth Swaroop2, and Richard E. Turner2
1University of Sydney, Australia; thang.buivn@gmail.com 2University of Cambridge, UK; {vcn22,ss2163,ret26}@cam.ac.uk
# Abstract
Variational inference (VI) has become the method of choice for ï¬tting many modern probabilistic models. However, practitioners are faced with a fragmented literature that oï¬ers a bewildering array of algorithmic options. First, the variational family. Second, the granularity of the updates e.g. whether the updates are local to each data point and employ message passing or global. Third, the method of optimization (bespoke or blackbox, closed-form or stochastic updates, etc.). This paper presents a new framework, termed Partitioned Variational Inference (PVI), that explicitly acknowledges these algorithmic dimensions of VI, uniï¬es disparate literature, and provides guidance on usage. Crucially, the proposed PVI framework allows us to identify new ways of performing VI that are ideally suited to challenging learning scenarios including federated learning (where distributed computing is leveraged to process non-centralized data) and continual learning (where new data and tasks arrive over time and must be accommodated quickly). We showcase these new capabilities by developing communication-eï¬cient federated training of Bayesian neural networks and continual learning for Gaussian process models with private pseudo-points. The new methods signiï¬cantly outperform the state-of-the-art, whilst being almost as straightforward to implement as standard VI.
1
# Introduction
Variational methods recast approximate inference as an optimization problem, thereby enabling advances in optimization to be leveraged for inference. VI has enabled approaches including natural gradient methods, mirror-descent, trust region and stochastic (mini-batch) optimization to be tapped in this way. The approach has been successful, with VI methods often lying on the eï¬cient frontier of approximate inferenceâs speed-accuracy trade-oï¬. VI has consequently become one of the most popular varieties of approximate inference. For example, it is now a standard approach for Gaussian process models [Titsias, 2009], latent topic models [Blei et al., 2003], and deep generative models [Kingma and Welling, 2014]. Deployment of VI requires the practitioner to make three fundamental choices. First, the form of the approximate family which ranges from simple mean-ï¬eld or factorized distributions, through unfactorized exponential families to complex non-exponential family distributions. Second, the granularity of variational inference which includes, on the one hand, approaches based on the global variational free-energy, and on the other those that consider a single data point at a time and employ local message passing. Third, the form of the variational updates which encompasses the optimization method employed for maximizing the global variational free-energy or the form of the message passing updates. A large body of work has investigated how the choice of approximating family aï¬ects the accuracy of VI [MacKay, 2003, Wang and Titterington, 2004, Turner and Sahani, 2011] and how additional approximations can enable VI to support more complex approximate families [Jaakkola and Jordan,
1
1998, Rezende and Mohamed, 2015, Salimans et al., 2015, Ranganath et al., 2016, Mescheder et al., 2017]. This is a fundamental question, but it is orthogonal to the focus of the current paper. Instead, we focus on the second two choices. The granularity of variational inference is an important algorithmic dimension. Whilst global variational inference has more theoretical guarantees and is arguably simpler to implement, local variational inference oï¬ers unique opportunities for online or continual learning (e.g. allowing âoldâ data to be sporadically revisited) and distributed computing (e.g. supporting asynchronous lock-free updates). The form of the updates is equally important with a burgeoning set of alternatives. For global VI these including gradient ascent, natural gradient and mirror descent, approximate second-order methods, stochastic versions thereof, collapsed VI and ï¬xed-point updates to name but a few. For local VI, there has been less exploration of the options, but damping in natural and moment space is often employed.
The goal of this paper is to develop a unifying framework, termed Partitioned Variational Infer- ence (PVI), that explicitly acknowledges that the granularity and the optimization method are two fundamental algorithmic dimensions of VI. The new framework 1. generalizes and extends current theoretical results in this area, 2. reveals the relationship between a large number of existing schemes, and 3. identiï¬es opportunities for innovation, a selection of which are demonstrated in experiments. We brieï¬y summarize the contributions of this paper, focusing on the uniï¬ed viewpoint and novel algorithmic extensions to support federated and continual learning.
# 1.1 Uniï¬cation
The main theoretical contributions of the paper, described in sections 2 to 4, are: to develop Partitioned Variational Inference; clean up, generalize and derive new supporting theory (including PVI ï¬xed-point optimization, mini-batch approximation, hyperparameter learning); and show that PVI subsumes standard global variational inference, (local) variational message passing, and other well-established approaches. In addition, we also show in section 4 that damped ï¬xed-point optimization and natural gradient methods applied to PVI are equivalent to variationally-limited power EP.
In section 4 PVI is used to connect a large literature that has become fragmented with separated strands of related, but mutually uncited work. More speciï¬cally we unify work on: online VI [Ghahramani and Attias, 2000, Sato, 2001, Broderick et al., 2013, Bui et al., 2017b, Nguyen et al., 2018]; global VI [Sato, 2001, Hensman et al., 2012, Hoï¬man et al., 2013, Salimans and Knowles, 2013, Sheth and Khardon, 2016a, Sheth et al., 2015, Sheth and Khardon, 2016b]; local VI [Knowles and Minka, 2011, Wand, 2014, Khan and Lin, 2018]; power EP and related algorithms [Minka, 2001, 2004, Li et al., 2015, Hasenclever et al., 2017, Gelman et al., 2014]; and stochastic mini-batch variants of these algorithms [Hoï¬man et al., 2013, Li et al., 2015, Khan and Lin, 2018]. Figures 2 and 3 and table 1 present a summary of these relationships in the context of PVI.
# 1.2 Probabilistic inference for federated machine learning
The goal of federated learning is to enable distributed training of machine learning models without centralizing data [see e.g. McMahan et al., 2017, Zhao et al., 2018]. This is challenging in practice as:
⢠modern data sets can often be distributed inhomogeneously and unevenly across many machines, for examples, mobile devices can contain many images which can be used for training a classiï¬cation model, but accessing such information is often restricted and privacy-sensitive;
⢠computation resources available at terminal machines can be leveraged, but communication between these machines or between them and a central server can be limited and unreliable, for example, communication from and to mobile devices is often costly, and each device can be abruptly disconnected from the training setup or, similarly, a new device can appear;
2
⢠the inference or prediction step is often needed in an any-time fashion at each machine, i.e. each machine needs to have access to a high-quality model to make predictions without having to send data to a remote server.
These requirements are often not satisï¬ed in the traditional training pipelines, many of which require data to be stored in a single machine, or in a data center where it is typically distributed among many machines in a homogeneous and balanced fashion [see e.g. Dean et al., 2012, Zhang et al., 2015, Chen et al., 2016]. Federated learning attempts to bridge this gap by tackling the aforementioned constraints. Additionally, this type of learning is arguably less privacy-sensitive as compared to centralized learning approaches, as it does not require local data to be collected and sent to a central server. It can also be further improved by employing encrypted aggregation steps [Bonawitz et al., 2017] or diï¬erentially-private mechanisms [Dwork and Roth, 2014].
Distributed inference is also an active research area in the Bayesian statistics and machine learn- ing literature. For example, parallel Markov chain Monte Carlo approaches typically run multiple independent Markov chains on diï¬erent partitions of the data set, but require heuristics to aggregate, reweight and average the samples at test time [see e.g. Wang and Dunson, 2013, Scott et al., 2016]. The closest to our work is the distributed EP algorithms of Gelman et al. [2014] and Hasenclever et al. [2017], which employ (approximate) MCMC for data partitions and EP for communication between workers. However, it is not clear these distributed approaches will work well in the federated settings described above. In section 5, we demonstrate that PVI can naturally and ï¬exibly address the above challenges, and thus be used for federated learning with eï¬cient synchronous or lock-free asynchronous communication. The proposed approach can be combined with recent advances in Monte Carlo VI for neural networks, enabling fast and communication-eï¬cient training of Bayesian neural networks on non-iid federated data. We provide an extensive experiment comparing to alternative approaches in section 7.
# 1.3 Probabilistic inference for continual learning
Continual learning (also termed online learning or life-long learning or incremental learning) is the ability to learn continually and adapt quickly to new experiences without catastrophically forgetting previously seen experiences [Schlimmer and Fisher, 1986, McCloskey and Cohen, 1989, Sutton and Whitehead, 1993, Ratcliï¬, 1990]. Such requirements arise in many practical settings in which data can arrive sequentially or tasks may change over time (e.g. new classes may be discovered), or entirely new tasks can emerge. Batch learning algorithms which deal with the entire data set at once are not applicable in these settings, as (1) data can arrive one point at a time or in batches of a size that is unknown a priori, or in a possibly non i.i.d. way; and (2) previously seen data may not be directly accessible, which means the continual learning algorithms need to intelligently decide how to best combine prior or current experience with new data while being resistant to under-ï¬tting or over-ï¬tting to new data (i.e. intransigence vs forgetting).
Continual learning has a rich literature [see e.g. Opper, 1998, Sato, 2001, Ghahramani and Attias, 2000, Csató and Opper, 2002, Minka, 2001, Smola et al., 2004] but is enjoying a resurgence of interest ranging from deepening understanding of transfer learning and catastrophic forgetting [Goodfellow et al., 2014, Flesch et al., 2018], to developing learning algorithms for various models and applications [Broderick et al., 2013, Li and Hoiem, 2016, Kirkpatrick et al., 2017, Zenke et al., 2017, Seï¬ et al., 2017, Bui et al., 2017a, Nguyen et al., 2018, Zeno et al., 2018, Chaudhry et al., 2018], to setting up relevant metrics and benchmarks for evaluation [Lomonaco and Maltoni, 2017, Hayes et al., 2018]. While the PVI framework enables us to connect and unify much of the literature in this area, it also allows gaps in the literature to be identiï¬ed and enables the development of new and improved algorithmic solutions. We demonstrate this in section 6 by presenting a new continual learning method for Gaussian process regression and classiï¬cation that greatly extends earlier work by Csató and Opper [2002] and Bui et al.
3
[2017a], allowing principled handling of hyperparameters and private pseudo-points for new data. The new technique is shown to be superior to alternative online learning approaches on various toy and real-world data sets in section 7. We also show in section 5 that continual learning can be reframed as a special case of federated learning.
# 2 Partitioned Variational Inference
In this section, we introduce Partitioned Variational Inference, a framework that encompasses many approaches to variational inference. We begin by framing PVI in terms of a series of local variational free-energy optimization problems, proving several key properties of the algorithm that reveal the relationship to global VI. In order to keep the development clear, we have separated most of the discussion of related work into section 4.
Consider a parametric probabilistic model defined by the prior p(@|e) over parameters 6 and the likelihood function p(y|, 6) = TL, Pm, e), where {y,,...,ygr} is a partition of y into M groups of data points. Depending on the context, a data group y,,, can be considered to be a mini-batch of y which is fixed across epochs, or a data shard. For simplicity, we assume for the moment that the hyperparameters ⬠are fixed and suppress them to lighten the notation. We will discuss hyperparameter optimization at the end of this section.
Exact Bayesian inference in this class of model is in general intractable so we resort to variational inference. In particular, we posit a variational approximation of the true posterior as follows,
M 1 M 4(9) = P() TJ tm(0) © 3) TT Pml®) = rly). (l m=1 m=1
where Z is the normalizing constant of the true posterior, or marginal likelihood. The approximate likelihood tm(θ) will be reï¬ned by PVI to approximate the eï¬ect the likelihood term p(ym|θ) has on the posterior. Note that the form of q(θ) in (1) is similar to that employed by the expectation propagation algorithm [Minka, 2001], but with two diï¬erences. First, the approximate posterior is not restricted to lie in the exponential family, as is typically the case for EP. Second, the approximate posterior does not include a normalizing constant. Instead, the PVI algorithm will automatically ensure that the product of the prior and approximate likelihood factors in (1) is a normalized distribution. We will show that PVI will return an approximation to the marginal likelihood log Z = log p(y) in addition to the approximation of the posterior.
Algorithm 1 details the PVI algorithm. At each iteration i, we select an approximate likelihood to reï¬ne according to a schedule bi â {1 . . . M }. The approximate likelihood t(i (θ) obtained from bi the previous iteration will be reï¬ned and the corresponding data-group is denoted ybi. The reï¬nement proceeds in two steps. First, we reï¬ne the approximate posterior using the local (negative) variational F (i)(q(θ)) where the optimization is over a tractable family Q and free energy q(i)(θ) = argmaxq(θ)
# âQ
a? A)p(yo,18) a(0)ty,? (0) â) F(q(0)) = [e0a) log
(θ) = q(i)(θ) q(i 1)(θ)
Second, the new approximate likelihood is found by division, t(i) bi
# t(i bi
(θ). â
â
1)
We will now justify these steps by stating properties, derived in the appendix, that show 1) the local free-energy optimization is equivalent to a variational KL optimization, 2) the update for the approximate likelihoods is consistent with the normalized density speciï¬ed in 1, and 3) any ï¬xed point of the algorithm is also a local optimum of global VI and at this ï¬xed point the sum of the local
4
free-energies is equal to the global variational free-energy. The following properties apply for general q(θ), and are not limited to the exponential family.1
Property 1 Maximizing the local free-energy F (i)(q(θ)) is equivalent to the KL optimization
g() (0) = argmin KL (4(0) || 6 (0)) , (3) q(A)EQ
# âQ
where p) (0) = 3 Fo) p(y|0) = $P(Yp,19) Ino, 890) is known as the tilted distribution in the i th, , EP literature and is intractable.
The proof is straightforward (see A.1). The tilted distribution can be justiï¬ed as a sensible target as it removes the approximate likelihood t(i (θ) from the current approximate posterior and replaces bi it with the true likelihood p(ybi|θ). In this way, the tilted distribution comprises one true likelihood, M â 1 approximate likelihoods and the prior. The KL optimization then ensures the new posterior better approximates the true likelihoodâs eï¬ect, in the context of the approximate likelihoods and the prior.
m. Property 2 At the end of each iteration i = 0,1,..., (0) = p(6) JT]
m=1 t(i)
MAO).
Again the proof is simple (see A.2), but it relies on PVI initializing the approximate likelihood factors to unity so that q(0)(θ) = p(θ).
Property 3 Let q*(0) = pO) TT, J d0q(6) log Peel?) be the local M J d0q(9) log 2) Ung £laim?) be the global (a) ye Fm(q*(0)) = F(q*(@)),
Property 3 Let q*(0) = pO) TT, t*,(0) be a fixed point of Algorithm Fm(q(9)) J d0q(6) log Peel?) be the local free-energy w.r.t. the factor tm, and F(q(@)) = M J d0q(9) log 2) Ung £laim?) be the global free-energy. We have:
m=1 Fm(qâ(θ)) = F(qâ(θ)), i.e. the sum of the local free-energies is equal to the global free-energy, i.e. the PVI ï¬xed point is an optimum of global VI,
(b) If qâ(θ) = argmaxq(θ) Fm(q(θ)) for all m, then qâ(θ) = argmaxq(θ) F(q(θ)).
# âQ
# âQ
These results are more complex to show, but can be derived by computing the derivative and Hessian of the global free-energy and substituting into these expressions the derivatives and Hessians of the local free-energies (see A.3). The fact that the ï¬xed point of PVI recovers a global VI solution (both the optimal q(θ) and the global free-energy at this optimum) is the main theoretical justiï¬cation for employing PVI. However, we do not believe that there is a Lyapunov function for PVI, indicating that it may oscillate or diverge in general.
Having laid out the general framework for PVI, what remains to be decided is the method used for optimizing the local free-energies. In a moment we consider three choices: analytic updates, oï¬-the-shelf optimization methods and ï¬xed-point iterations, as well as discussing how stochastic approximations can be combined with these approaches. Before turning to these choices, we compare and contrast the algorithmic beneï¬ts of the local and global approaches to VI in diï¬erent settings. This discussion will help shape the development of the optimization choices which follows.
# 2.1 When should a local VI approach be employed rather than a global one?
We will describe in section 4 how the PVI framework uniï¬es a large body of existing literature, thereby providing a useful conceptual scaï¬old for understanding the relationship between algorithms. However,
1However, we will only consider exponential family approximations in the experiments in section 7.
5
Algorithm 1 Partitioned Variational Inference
Input: data partition {y1, . . . , yM }, prior p(θ) Initialize:
t(0) m (θ) := 1 for all m = 1, 2, . . . , M. q(0)(θ) := p(θ).
for i = 1, 2, . . . until convergence do
bi := index of the next approximate likelihood to reï¬ne. Compute the new approximate posterior:
; (i-1) 0); ) g®(0) = argmax [ d0q(0) jog £â_ Crt 8) a(0)eO (dt? (8)
# âQ
Update the approximate likelihood:
. O@ (i) qd ( ) (i-1) t, (0) = t 0), by ( ) gi) (0) i ( ); t (9) = t@-9) (6) for all m F bj.
t(i) bi t(i bi 1) (θ) := (θ), â â (4)
end for
FEDERATED LEARNING CONTINUAL LEARNING G9) = g(0)Am (8) parameter |ââ-â --â server [â---â--- soe @ iG (6 (i-1) = tm (B)/tm °(0) n(0) = ansmax 0a() ) log eet) OM â (9) =] âS- wt Sy @ ye dé ® 06) = i » @plynld) - n g®(6) = _â Jos ae) (8) = arsmax [ da) log (6) @p 9) (0) = 28 m= aD (6)/th (8) nO a)
# Oriel)
Figure 1: Steps of the PVI algorithm when being used for continual learning [left] and federated learning [right].
it is important to ask: What algorithmic and computation beneï¬ts, if any, arise from considering a set of local free-energy updates, rather than a single global approximation (possibly leveraging stochastic mini-batch approximation)?
In a nutshell, we will show that if the data set is ï¬xed before inference is performed (batch learning) or arrives in a simple online iid way (simple online learning), and distributed computation is not available,
6
then global VI will typically be simpler to implement, require less memory, and faster to converge than more local versions of PVI (the case of scaling collapsed bounds being a possible exception). However, if the conditions above are not met, the local versions of PVI will be appropriate. We will now unpack important examples of this sort.
The PVI approach is ideally suited to the distributed setting, with simple distributed variants allowing asynchronous distributed updates. One simple approach, similar to that of Hasenclever et al. [2017], uses M workers that are each allocated a data group ym. The workers store and reï¬ne the associated approximate likelihood tm(θ). A server maintains and updates the approximate posterior and communicates it to the workers. An idle worker receives the current posterior from the server, optimizes the local free-energy, computes the change in the local approximate likelihood âm(θ) = t(new) m (θ), sends this to the server, and repeats. The local workers do not change q(θ) directly. Instead, the server maintains a queue of approximate likelihood updates and applies these to the approximate posterior q(new)(θ) = q(old)(θ)âm(θ). This setup supports asynchronous updates of the approximate likelihood factors. See ï¬g. 1 for a pictorial depiction of these steps. In contrast, global VI is generally ill-suited to the distributed setting. Although the free-energy optimization can be parallelized over data points, typically this will only be advantageous for large mini-batches where the extra communication overhead does not dominate. Large mini-batches often result in slow optimization progress (early in learning it is often clear how to improve q(θ) after seeing only a small number of data points). The special case of global VI employing mini-batch approximations and natural gradient updates can support asynchronous distributed processing if each worker receives statistically identical data and updates with the same frequency. It could not operate successfully when each node contains diï¬erent amounts or types of data, or if some workers update more frequently than others.
Distributed versions of PVI not only enable VI to be scaled to large problems, but they also allow inference algorithms to be sent to user data, rather than requiring user data to be collected and centralized before performing inference. Consider the situation where workers are personal devices, like mobile phones, containing user data ym. Here the local free-energy updates can be performed client-side on the userâs devices and only summaries tm(θ) of the relevant aspects of that information are communicated back to the central server. The frequency with which these messages are sent might be limited to improve security. Such an implementation is arguably more secure than one in which the user data (or associated gradients) are sent back to a central server [The Royal Society, 2017]. Since the amount and type of data at the nodes is outside of the control of the algorithm designer, mini-batch natural gradient global VI will generally be inappropriate for this setting.
The PVI approach is also well suited to the continual or life-long learning setting. These settings are very general forms of online learning in which new data regularly arrive in a potentially non-iid way, tasks may change over time, and entirely new tasks may emerge. In this situation, the PVI framework can not only be used to continuously update the posterior distribution q(θ) in light of new data by optimizing the local free-energy for the newly seen data, it can also be used to revisit old data groups (potentially in a judiciously selected way) thereby mitigating problems like catastrophic forgetting. The update steps for this learning scenario are illustrated in ï¬g. 1. In contrast, global VI is fundamentally ill-suited to the general online setting. The special case of global VI employing mini-batch approximations with natural gradient updates may be appropriate when the data are iid and only one update is performed for each new task (simple online learning), but it is not generally applicable.
We will return to discuss the key issues raised in this section â the speed of convergence, memory overhead, online learning, and distributed inference â in the context of diï¬erent options for carrying out the optimization of the local free-energies in section 3.
7
# 2.2 Hyperparameter Learning
Many probabilistic models depend on a set of hyperparameters ⬠and it is often necessary to learn suitable settings from data to achieve good performance on a task. One method is to optimize the variational free-energy thereby approximating maximum likelihood learning. The gradient of the globa variational free-energy decomposes into a set of local computations, as shown in appendix [B}
M d d aor (6 4(9)) = > (0) Fe log p(Â¥ml9,â¬)| + Eqa) m=1 d = log p(Ae)| - (5 de
This expression holds for general q(@) and is valid both for coordinate ascent (updating ⬠with q(6) fixed) and for optimizing the collapsed bound (where the approximate posterior optimizes the globa free-energy q(@) = q*(@) and therefore depends implicitly on â¬). Notice that this expression is amenable to stochastic approximation which leads to optimization schemes that use only local information ai each step. When combined with different choices for the optimization of the local free-energies wrt q(@) this leads to a wealth of possible hyperparameter optimization schemes. ?
In cases where a distributional estimate for the hyperparameters is necessary, e.g. in continual learning, the PVI framework above can be extended to handle the hyperparameters. In particular, the approximate posterior in eq. (1) can be modiï¬ed as follows,
M M q(9,â¬) = p(e)p(le) [] tm(G,â¬) © = r()p(Ole) [I] P@ml?,6) = 7, ly), 5 (6) m=1 m=1
where the approximate likelihood factor t(0,¢) now involves both the model parameters and the hyperparameters. Similar to eq. (2), the approximate posterior above leads to the following local variational free-energy,
YO, â¬)p(4s,18. ©) q(9,â¬)t), (4,6) 0 F (q(0,â¬)) = [ aeaea(o,0 log
Note that this approach retains all favourable properties of PVI such as local computation and ï¬exibility in choosing optimization strategies and stochastic approximations.
# 3 Approaches for Optimizing the Local Free-energies
Having established the general PVI algorithm and its properties, we will now describe diï¬erent options for performing the optimization of the local free-energies.
# 3.1 Analytic Local Free-energy Updates
Each local free-energy is equivalent in form to a global free-energy with an eï¬ective prior peï¬ (θ) = q(i (θ). As such, in conjugate exponential family models the KL optimizations will be available in closed form, for example in GP regression, and these updates can be substituted back into the local variational free-energies to yield locally-collapsed bounds, Fn(q(i)(θ)), that are useful for hyperparameter optimization [Bui et al., 2017a]. One advantage of using local versions of PVI is that this allows collapsed bounds to be leveraged on large data sets where an application to entire data set would be computationally intractable, potentially speeding up convergence over global VI.
8
# 3.2 Oï¬-the-shelf Optimizers for Local Free-energy Optimization
If analytic updates are not tractable, the local free-energy optimizations can be carried out using standard optimizers. The PVI framework automatically breaks the data set into a series of local free-energy optimization problems and the propagation of uncertainty between the data groups weights the information extracted from each. This means non-stochastic optimizers such as BFGS can now be leveraged in the large data setting. Of course, if a further stochastic approximation like Monte Carlo VI is employed for each local optimization, stochastic optimizers such as RMSProp [Tieleman and Hinton, 2012] or Adam [Kingma and Ba, 2014] might be more appropriate choices. In all cases, since the local free- energy is equivalent in form to a global free-energy with an eï¬ective prior peï¬ (θ) = q(i (θ), PVI can be implemented via trivial modiï¬cation to existing code for global VI. This is a key advantage of PVI over previous local VI approaches, such as variational message passing [Winn et al., 2005, Winn and Minka, 2009, Knowles and Minka, 2011], in which bespoke and closed-form updates are needed for diï¬erent likelihoods and cavity distributions.
# 3.3 Local Free-energy Fixed Point Updates, Natural Gradient Methods, and Mir- ror Descent
An alternative to using oï¬-the-shelf optimizers is to derive ï¬xed-point update equations by zeroing the gradients of the local free-energy. These ï¬xed-point updates have elegant properties for approximate posterior distributions that are in the exponential family.
Property 4 If the prior and approximate likelihood factors are in the un-normalized exponential family tm(0) = tm(9;m) = exp(mnT(0)) so that the variational distribution is in the normalized exponential family q(0) = exp(niT(@) â A(nq)), then the stationary point of the local free-energy aFO) _ 9 en dng implies
η(i) bi = Câ 1 d dηq Eq(log p(ybi|θ)). (8)
where C := Fate = cov,(9)[T(9)Tâ¢(4)] is the Fisher Information. Moreover, the Fisher Information can be written as C = se where jig = Eq(T(0)) is the mean parameter of q(0). Hence,
η(i) bi = d dµq Eq(log p(ybi|θ)). (9)
For some approximate posterior distributions q(θ), taking derivatives of the average log-likelihood with respect to the mean parameters is analytic (e.g. Gaussian) and for some it is not (e.g. gamma).
These conditions, derived in appendix A.4, can be used as ï¬xed point equations. That is, they can be iterated possibly with damping Ï,
η(i) bi = (1 â Ï)η(i bi â 1) + Ï d dµq Eq(log p(ybi|θ)). (10)
These iterations, which form an inner-loop in PVI, are themselves not guaranteed to converge (there is no Lyapunov function in general and so, for example, the local free-energy will not reduce at every step).
The ï¬xed point updates are the natural gradients of the local free-energy and the damped versions are natural gradient ascent [Sato, 2001, Hoï¬man et al., 2013]. The natural gradients could also be used in other optimization schemes [Hensman et al., 2012, Salimbeni et al., 2018]. The damped updates are also
9
equivalent to performing mirror-descent [Raskutti and Mukherjee, 2015, Khan and Lin, 2018], a general form of proximal algorithm [Parikh and Boyd, 2014] that can be interpreted as trust-region methods. For more details about the relationship between these methods, see appendix A.7. Additionally, while natural gradients or ï¬xed-point updates have been shown to be eï¬ective in the batch global VI settings [see e.g. Honkela et al., 2010], we present some result in appendix E.6 showing adaptive ï¬rst-order methods employing ï¬at gradients such as Adam [Kingma and Ba, 2014] performs as well as natural gradient methods, when stochastic mini-batch approximations are used.
For these types of updates there is an interesting relationship between PVI and global (batch) VI:
Property 5 PVI methods employing parallel updates result in identical dynamics for q(θ) given by the following equation, regardless of the partition of the data employed
N Eq(log p(yl9)) = 0 + >> n=1 d dpgaây nm) = no + gli» (log p(yn|0)). (11) dptgây
See A.5 for the proof. If parallel ï¬xed-point updates are desired, then it is more memory eï¬cient to employ batch VI M = 1, since then only one global set of natural parameters needs to be retained. However, as previously discussed, using M = 1 gives up opportunities for online learning and distributed computation (e.g. asynchronous updates).
# 3.4 Stochastic mini-batch approximation
There are two distinct ways to apply stochastic approximations within the PVI scheme.
# 3.4.1 Stochastic Approximation within the Local Free-Energy
The ï¬rst form of stochastic approximation leverages the fact that each local free-energy decomposes into a sum over data points and can, therefore, be approximated by sampling mini-batches within each data group ym. In the case where each partition includes a large number of data points, this leads to algorithms that converge more quickly than the batch variants â since a reasonable update for the approximate posterior can often be determined from just a few data points â and this faster convergence opens the door to processing larger data sets.
Mini-batch approximation can be employed in the general PVI case, but for simplicity we consider the global VI case here M = 1. If simpliï¬ed ï¬xed point updates are used for optimization, then iidâ¼ pdata(y) yields the following stochastic sampling L mini-batches of data from the data distribution yl approximation to the damped updates,2
d nf) = (1 â pn + p ( 4L (ox (ul) ) ; (12) dg
# d dµq Eq(log p(yl|θ)) â η(i
# (ox (ul) i
A d i =n) +9 (<i a(log p(y,| )~ nlc? (2) (13) q
Here the first form of the update is stochastic natural gradient ascent and the second form reveals the implied deletion step where nin? /L= (nf? - nh?) /L is the contribution a mini-batch likelihood makes to the posterior natural parameters on average. The rescaled learning rate is pâ = Lp. These two forms reveals that the mini-batch stochastic natural gradient update resembles an EP update step. See appendix for full details.
2We have used a distinct notation for a mini-batch (yl) and a data group (ym) since the former will be selected iid from the data set and will vary at each epoch, whilst the latter need not be determined in this way and is ï¬xed across epochs.
10
# 3.4.2 Stochastic Scheduling of Updates Between Local Free-Energies
The second form of stochastic approximation is to randomize the update schedule. For example, using M = N and randomly selecting subsets of data to update in parallel. This can be memory intensive, requiring N local natural parameters to be stored. A more memory eï¬cient approach is to ï¬x the mini-batches across epochs and to visit the data groups ym in a random order [Khan and Lin, 2018]. For the simpliï¬ed ï¬xed point updates, this yields
m = (1 â Ï)η(i η(i) 1) m + Ï â d dµq(i â 1) E q(i â 1)(log p(ym|θ)). (14)
This approach results in a subtly diï¬erent update to q that retains a speciï¬c approximation to the likelihood of each data partition, rather than a single global approximation
. a d A Hf) =f ~ p (1-B, (lel) ~ nf?) (15) dptq
If the ï¬rst approach in eq. (14) employs learning rates that obey the Robins Munro conditions, the ï¬xed points will be identical to the second approach in eq. (15) and they will correspond to optima of the global free-energy.
# 3.4.3 Comparing and Contrasting Stochastic Approaches
There are pros and cons to both approaches. The first approach in section [3.4.1] has a memory footprint L times smaller than the second approach in section [3.4.2] and can converge more quickly. For example, on the first pass through the data, it effectively allows approximate likelihoods for as of yet unseen data to be updated based on those for the data seen so far, which means that larger learning rates can be used pâ > p. The second approach is required for continual learning, asynchronous updates, and client-side processing where the assumption that each mini-batch is iid (and a single gradient step is performed on each) is typically incorrect. The second approach also tends to produce less noisy learning curves, with stochasticity only entering via the schedule and not as an approximation to the local free-energy and the gradients thereof.
These approaches could also be combined, with stochastic scheduling selecting the local free-energy to update next and mini-batch updates employed for each local free-energy optimization. See appendix A.6 for a full discussion.
# 4 Uniï¬cation of Previous Work
The local VI framework described above uniï¬es a large number of existing approaches. These methods include global VI (section 4.1), local VI (section 4.2), online VI (section 4.3) and a number of methods based on power EP (sections 4.4 to 4.7). A schematic showing the relationships between these methods at a high level is shown in ï¬g. 2. The literature has been organized into in ï¬g. 3 and table 1.
# 4.1 Global VI Fixed Point Methods
There has been a long history of applying the ï¬xed point updates for global VI (PVI where M = 1). Sato [2001] derived them for conjugate exponential family models, showing they recover the closed form updates for q(θ), and noting that damped ï¬xed point updates are equivalent to natural gradient ascent with unit step size (Ï = 1). Satoâs insight was subsequently built upon by several authors. Honkela et al. [2010] considered non-conjugate models, employed a Gaussian variational distribution and used natural gradients to update the variational distributionâs mean. Hensman et al. [2012] and
11
online, streaming, global VI local VI/VMP continual VI N partitions M=N one partition M=1 one pass PVI power a + 0 EP a-divergence stochastic distributed EP PEP PEP
# power
Figure 2: Variational inference schemes encompassed by the PVI framework.
Hoï¬man et al. [2013] applied the insight to conjugate models when optimizing collapsed variational free-energies and deriving stochastic natural gradient descent, respectively. Salimans and Knowles [2013] apply the ï¬xed points to non-conjugate models where the expectations over q are intractable and use Monte Carlo to approximate them, but they explicitly calculate the Fisher information matrix, which is unnecessary for exponential family q. Sheth and Khardon [2016a] and Sheth et al. [2015] treat non-conjugate models with Gaussian latent variables, employ the cancellation of the Fisher information, and analyze convergence properties. Sheth and Khardon [2016b] further extend this to two level-models through Monte Carlo essentially applying the Fisher information cancellation to Salimans and Knowles [2013], but they were unaware of this prior work.
# 4.2 Fully Local VI Fixed Point Methods
There has also been a long history of applying the ï¬xed point updates in the fully local VI setting (where M = N ). Knowles and Minka [2011] derive them for non-conjugate variational message passing, but explicitly calculate the Fisher information matrix (except in a case where q was univariate Gaussian case where they do employ the cancellation). Wand [2014] simpliï¬ed VMP by applying the Fisher information cancellation to the case where q(θ) is multivariate Gaussian. Khan and Lin [2018] also extend VMP to employ MC approximation and the Fisher information cancellation. They were unaware of Wand [2014], but extend this work by treating a wider range of approximate posteriors and models, stochastic updates, and a principled approach to damping. The work is closely related to Salimans and Knowles [2013] and Sheth and Khardon [2016b], since although these papers use ï¬xed-point updates for global VI, they show that these decompose over data points and thereby derive mini-batch updates that closely resemble ï¬xed-point local VI. This is a result of property 5.
# 4.3 Online, Streaming, Incremental and Continual VI as a single pass of PVI
If PVI makes a single pass through the data, the approximate likelihoods do not need to be explicitly computed or stored as data-groups are not revisited. In this case PVI reduces to initializing the approximate posterior to be the prior, q(0)(θ) = p(θ) and then optimizing a sequence of local free- energies
. (-Hg 9 q? (0) := argmax | d0q(0) log TORY) a(0)EQ q(9)
# âQ
These have the form of standard variational inference with the prior replaced by the previous variational distribution q(i 1)(θ). This idea â combining the likelihood from a new batch of data with the previous
12
Algorithm 2 One step of the PEP algorithm at the i-th iteration, for the bi-th data partition
. . a Compute the tilted distribution: pO (0) = gq" (6) (2y2) i Moment match: qq(@) = proj(p?) (8) such that Ey) (T(@)) = 0) FO) Update the posterior distribution with damping p: qâ(0) = (qa) (da (9))e/e Update the approximate likelihood: t\@) = oe 0)
Algorithm 3 One step of the PEP algorithm, as in algorithm 2, but with alpha divergence minimization
p(ybi | t(i bi Find the posterior distribution: q(i)(θ) := argminq(θ) (θ) = q(i)(θ) Update the approximate likelihood: t(i) bi q(i 1)(θ) Compute the tilted distribution: Ëp(i)(θ) = q(i 1)(θ) â â θ) 1) (θ) Dα[Ëp(i)(θ)||q(θ)] 1) â âQ t(i bi (θ) â
approximate posterior and projecting back to a new approximate posterior â underpins online variational inference [Ghahramani and Attias, 2000, Sato, 2001], streaming variational inference [Broderick et al., 2013, Bui et al., 2017b], and variational continual learning [Nguyen et al., 2018]. Early work on online VI used conjugate models and analytic updates [Ghahramani and Attias, 2000, Sato, 2001, Broderick et al., 2013, Bui et al., 2017b], this was followed by oï¬-the-shelf optimization approaches for non-conjugate models [Bui et al., 2017b] and further extended to leverage MC approximations of the local-free energy [Nguyen et al., 2018]. Recently Zeno et al. [2018] use the variational continual learning framework of Nguyen et al. [2018], but employ ï¬xed-point updates instead.
# 4.4 Power EP as a Fully Local VI Fixed Point Method
There is also an important relationship between PVI methods employing ï¬xed point updates and power expectation propagation [Minka, 2004]. Property 6 below states that the local VI ï¬xed point equations are recovered from the Power EP algorithm as α â 0.
Property 6 The damped ï¬xed point equations are precisely those returned by the PEP algorithm, shown in algorithm 2, in the limit that α â 0.
Although we suspect Knowles and Minka [2011] knew of this relationship, and it is well known that Power EP has the same ï¬xed points as VI in this case, it does not appear to be widely known that variationally limited Power EP yields exactly the same algorithm as ï¬xed point local VI. See A.8 for the proof.
# 4.5 Alpha-divergence EP as a Local VI Method with Oï¬-the-shelf Optimization
PVI is intimately related to alpha-divergence EP. If PVIâs KL divergence is replaced by an alpha divergence Da[p(4)||q(9)] = aa) J [op(@) + (1 = a)q(9) â p(@)%q(9)'~*] d6 we recover the alpha- divergence formulation of the power-EP algorithm which encompasses the current case as a â 0 and EP when a > 1 [2001]. The updates using this formulation are shown in algorithm [3] The alpha divergence is typically very difficult to compute once more than one non-Gaussian likelihood is included in a data group ym, meaning that for general alpha it would be appropriate to set M = N. The variational KL is the exception as it decomposes over data points.
13
Algorithm 4 One step of the SPEP algorithm at the i-th iteration, for the bi-th data partition
Compute the tilted distribution: pO (0) = gq" (6) (eapy" Moment match: qq(@) = proj(p? (8) such that E,g) (T(@)) = (0) (T(6)) Update the posterior distribution with damping p: qâ(0) = (qa) Ne (qa(0))Xe/4 ; â 1/N Update the approximate likelihood: ¢@ = (ap)
# 4.6 Stochastic Power EP as a Stochastic Global VI Fixed Point Method
The stochastic power EP algorithm [Li et al., 2015] reduces the memory overhead of EP by maintaining a single likelihood approximation that approximates the average eï¬ect a likelihood has on the posterior q(θ) = p(θ)t(θ)M . Taking the variational limit of this algorithm, α â 0, we recover global VI M = 1 with damped simpliï¬ed ï¬xed-point updates that employ a stochastic (mini-batch) approximation [Hoï¬man et al., 2013].
Property 7 The mini-batch ï¬xed point equations are precisely those returned by the SPEP algorithm, shown in algorithm 4 in the limit that α â 0.
In this way the relationship between EP and SEP is the same as the relationship between ï¬xed point PVI and ï¬xed point mini-batch global VI (see section 3.4 where the two approaches diï¬er by removing either an average natural parameter or a speciï¬c one). Similarly, if we altered PVI to maintain a single average likelihood approximation, as SEP does, we would recover mini-batch global VI.
# 4.7 Distributed (Power) EP Methods
The convergent distributed Power EP approach of Hasenclever et al. [2017] recovers a version of PVI as α â 0 with convergence guarantees. The PVI approach is also similar in spirit to Gelman et al. [2014], Hasenclever et al. [2017] who use EP to split up data sets into small parts that are amenable to MCMC. Here we are using PVI to split up data sets so that they are amenable for optimization.
5
# Improving Federated Learning for Bayesian Neural Networks Using PVI
Having connected the previous literature using the unifying framework based on PVI, we will discuss how PVI enables novel and practical algorithms to emerge. In this section, we detail how Partitioned Variational Inference can be used for federated approximate training of Bayesian neural networks, allowing both synchronous or lock-free asynchronous model updates across many machines.3 As a running example, consider a multiclass classiï¬cation problem with C classes and assume that the training points are partitioned into K disjoint memory shards (or subsets). In practical federated settings, the allocation of data points to shards is often unknown a priori, for example, the number of data points across various shards may be unbalanced or some classes may be present only on a few memory shards, or on one in the extreme. We ï¬rst discuss Bayesian neural networks and a global variational approximation for training BNNs, and detail how this approximation can be used at the shard level.
Consider a neural network that models the distribution of a target y given an input x, p(y|θ, x), where θ include the weights and biases in the network. To complete the model, we assign a prior p(θ)
3Neural network models are used throughout this section and the experiment, but other models can be employed in the same fashion as the training framework developed here is general.
14
Table 1: Variational inference schemes encompassed by the PVI framework. (See next page.) Selected past work has been organized into four categories: global VI (PVI with M = 1), fully local PVI (M = N ), Power EP variants, and online VI. The citation to the work is provided along with the granularity of the method (global indicates M = 1, fully local M = N , local implies general M can be used). The optimization used from the PVI perspective on this work is noted. Abbreviations used here are: Conjugate Gradient (CG) and Monte Carlo (MC). The model class that the scheme encompasses is noted (conjugate versus non-conjugate) along with the speciï¬c models that the scheme was tested on. Model abbreviations are: Non-linear State-space Model (NSSM), Non-linear Factor Analysis (NFA), Latent Dirichlet Allocation (LDA), Poisson Mixed Model (PMM), Heteroscedastic Linear Regression (HLR), Sparse Gaussian Processes (SGPs), Graphical Model (GM), Logistic Regression (LR), Beta-binomial (BB), Stochastic Volatility model (SV), Probit Regression (PR), Multinomial Regression (MR), Bayesian Neural Network (BNN), Gamma factor model (GFM), Poisson Gamma Matrix Factorization (PGMF), Mixture of Gaussians (MoG). Poisson Mixed Model (PMM), Heteroscedastic Linear Regression (HLR), Gaussian Latent Variable (GLV). If the scheme proposed by the method has a name, this is noted in the ï¬nal column. Abbreviations of the inference scheme are: Automatic Diï¬erentiation VI (ADVI), Incremental VI (IVI), Non-conjugate Variational Message Passing (NC-VMP), Simpliï¬ed NC-VMP (SNC-VMP), Conjugate-Computation VI (CCVI), Power EP (PEP), Alpha-divergence PEP (ADPEP), Convergent Power EP (CPEP), Stochastic Power EP (SPEP), Variational Continual Learning (VCL), Bayesian Gradient Descent (BGD).
(Wand Khan et al Minka 2004 ef Zeno etal 2014 2018 Power EP 2018 e¢ Hoffman et al. 2013 Sheth & gllietal. eras Hensman et al. 2013 Khardon 2015 Hasenclever Knowl Zo1sb et al. 2017 , nowles fixed-point « Honkela ⬠et al. 2010 & Minka (nat. grad.) ; 2004 simplified ee Salimans & et al. 2012 #,Saliman Sheth & Khardon ee 2015, 2016a (Minka 2004 Buietal fixed-point ¢f Salimans & alpha-div. 2018 (nat. grad.) Knowles 2013 EP Nyugen et al ⬠Blundell 2018 et al. 2015 local Winn et al. Kucukelbir e/Minka 2004 ¢ Ghahramani simple 2005 et al. 2017 Bui et al. & Attias 2000 P Archambeau 2017 Sato 2001 gradient Hinton and eErmis, Sato 200 based Stochastic van Camp 1993 2015 et al. 2013 mini-batch Bui et al. 2018 analytic Wglobal Beal, 2003 Variational Inference Monte Carlo VI Power EP a â> 0 Single Pass Through Data Models encompassed: conjugate __low dimensional non-conjugate _high dimensional non-conjugate
Figure 3: The local VI framework uniï¬es prior work. The granularity of the approximation and the optimization method employed are two fundamental algorithmic dimensions that are shown as axes. Fixed-point updates are identical to natural gradient ascent with unit step size. The models encompassed by each paper are indicated by the color. See 1 for more information.
15
1 6
Reference Global VI [PVI M = 1, see section 4.1] Beal [2003] Sato [2001] Hinton and Van Camp [1993] Honkela et al. [2010] Hensman et al. [2012] Hensman et al. [2013] Hoï¬man et al. [2013] Kucukelbir et al. [2017] Salimans et al. [2013] Sheth et al. [2015] Sheth and Khardon [2016a] Sheth and Khardon [2016b] Fully Local VI [PVI M = N , see section 4.2] fully local Winn et al. [2005] fully local Archambeau and Ermis [2015] fully local Knowles and Minka [2011] fully local Wand [2014] local Khan and Lin [2018] Granularity Optimization global global global global global global global global global global global global analytic analytic gradient ascent natural gradient (mean only) CG with natural gradient stochastic natural gradient stochastic natural gradient stochastic gradient descent ï¬xed-point + MC + stochastic simpliï¬ed ï¬xed point simpliï¬ed ï¬xed point simpliï¬ed ï¬xed point + MC analytic incremental ï¬xed-point simpliï¬ed ï¬xed-point damped stochastic simpliï¬ed ï¬xed- point Models Name conjugate conjugate (MoG) non-conjugate (neural network) non-conj. (MoG, NSSM, NFA) conjugate conjugate conjugate non-conjugate non-conjugate (PR, BB, SV) non-conjugate (GLV) non-conjugate (GLV) non-conjugate (two level) VI SVI ADVI conjugate (GM) conjugate (LDA) non-conjugate (LR, MR) non-conjugate (PMM, HLR) non-conjugate (LR, GFM, PGMF) CCVI VMP IVI NC-VMP SNC-VMP Online VI [one pass of PVI, see section 4.3] fully local Ghahramani and Attias [2000] fully local Sato [2001] fully local Broderick et al. [2013] fully local Bui et al. [2017a] fully local Nguyen et al. [2018] fully local Zeno et al. [2018] Power EP [PVI when α â 0, see sections 4.4 to 4.7] Minka [2004] Minka [2004] Bui et al. [2017b] Hasenclever et al. [2017] Li et al. [2015] local local local local local analytic analytic analytic analytic/LBFGS Adam ï¬xed-point series ï¬xed point optimization analytic/ï¬xed-point analytic with MC stochastic ï¬xed point conjugate (MoG) conjugate (MoG) conjugate (LDA) conjugate and not (SGPs) non-conjugate (BNN) non-conjugate (BNN) non-conjugate (GM) conjugate / non-conj. (GPs) non-conjugate (BNN) non-conjugate (LR, BNN) online VB streaming VI VCL BGD PEP ADEP PEP CPEP SPEP
Table 1: Variational inference schemes encompassed by the PVI framework. See previous page for full caption.
over the unknown parameters θ. Having speciï¬ed the probability of everything, we turn the handle of probability theory to obtain the posterior distribution,
p(O)v(yl8,x) _ pO) p(6|x,y) = P(y|x p(y|x) K N; Tens een P(YR.n|9, Xkn) (16)
The exact posterior above is analytically intractable and thus approximation or deterministic methods are needed. There is a long history of research Singhal and Wu 1993] 2013}, training of neural networks, including extended Kalman filtering approximation |M y} |2003|, Hamiltonian Monte Carlo Hinton and Van Camp} |1993) |Barber and Bishop | é and Ghahramanij [2016], sequential Monte Carlo [de Freitas et al.| [2000 ore and Adams| In this section, we focus on Monte one variational inference echniques such as sampling on approximate Bayesian , Laplaceâs variational inference Blundell et al. ; expectation propagation 2015], and approximate power EP |Li et al. 2015} Hernandez-Lobato methods with a mean-field . Ganian variational approximation [Graves} [Graves| variational approximation, q(@) =T]; N (6:3 as follows, Blundell et al.| 2015]. In detail, a factorized global , is used to lower-bound the log marginal likelihood Hi, 07)
log p(y|x) = log dθ p(θ)p(y|θ, x) ⥠dθ q(θ) log p(θ)p(y|θ, x) q(θ) = FGVI(q(θ)), (17)
where FGVI(q(θ)) is the variational lower bound or the negative variational free-energy. This bound can be expanded as follows,
K Ny, Fevi(a(d)) = âKL[a(9)|IP(9)] + 2 > / 48 (8) log P(Yk:nl8,Xkin)- k=1n=1 (18)
When the prior is chosen to be a Gaussian, the KL term in the bound above can be computed analytically. In contrast, the expected log-likelihood term is not analytically tractable. However, it can be approximated by simple Monte Carlo with the (local) reparameterization trick such that low-variance stochastic gradients of the approximate expectation wrt the variational parameters {µi, Ïi} can be easily obtained [Rezende et al., 2014, Kingma and Welling, 2014, Kingma et al., 2015].
The variational lower-bound above can be optimized using any oï¬-the-shelf stochastic optimizer, and its gradient computation can be trivially distributed across many machines. A possible synchronously distributed schedule when using K compute nodes, each having access to a memory shard, is as follows: (i) a central compute node passes the current q(θ) to K workers, (ii) each worker then computes the gradients of the expected log-likelihood of (a mini-batch of) its own data and passes the gradients back to the central node, (iii) the central node aggregates these gradients, combines the result with the gradient of the KL term, and performs an optimization step to obtain a new q(θ). These steps are then repeated for a ï¬xed number of iterations or until convergence. However, notice that this schedule is communication-ineï¬cient, as it requires frequent communication of the gradients and the updated variational approximation between the central node and the K compute workers. We will next discuss an inference scheme based on PVI that allows communication eï¬cient updates between workers that is compatible with various scheduling schemes.
Following the PVI formulation in section 2, the approximate posterior can be rewritten using the approximate factors, one for each memory shard, as follows,
K Ny p(O|x, y) x p(6) T] Iâ (YEnl9, Xk. n) = pO Th = q(0 ( 9) k=1n=1
17
where tk(θ) approximates the contribution of data points in the k-th shard to the posterior. As discussed in the previous sections, PVI turns the original global approximate inference task into a collection of approximate inference tasks, i.e. for the k-th memory shard and k-th compute node, the task is to maximize,
Ne Fhr(a(8)) = -KLa()|*(0)) + > [9 40) n=1 log p(Yknl9, Xk,n); (20)
n=1 k(θ) = q(θ)/tk(θ) is the context or eï¬ective prior set by data points in other shards. Once a where q\ new variational approximation q(θ) is obtained, a new approximate factor can be computed accordingly, k(θ). Note that the objective for each compute node is almost identical to the GVI tk(θ) = q(θ)/q\ objective, except the prior is now replaced by the context and the data are limited to the compute nodeâs accessible data. This means any global VI implementation available on a compute node (either using optimization, ï¬xed-point updates, or in close-formed) can be trivially modiï¬ed to handle PVI. A key additional diï¬erence to GVI is the communication frequency between the compute nodes and the central parameter server (that holds the latest q(θ)): a worker can decide to pass tk(θ) back to the central server after multiple passes through its data, after one epoch, or after just one mini-batch. This leaves room for practitioners to choose a learning schedule that meets communication constraints. More importantly, PVI enables various communication strategies to be deployed, for example:
⢠Sequential PVI with only one pass through the data set: each worker, in turn, runs Global VI, with the previous posterior being the prior/context, for the data points in its memory shard and returns the posterior approximation to the parameter server. This posterior approximation will then be used as the context for the next workerâs execution. Note that this is exactly equivalent to Variational Continual Learning [Nguyen et al., 2018] and can be combined with the multihead architecture, each head handling one task or one worker, or with episodic memory [see e.g. Zenke et al., 2017, Nguyen et al., 2018]. This strategy is communication-eï¬cient as only a small number of messages are required â only one up/down update is needed for each worker.
⢠PVI with synchronous model updates: instead of sequentially updating the context distribution and running only one worker at a time, all workers can be run in parallel. That is, each worker occasionally sends its updated contribution to the posterior back to the parameter server. The parameter server waits for all workers to ï¬nish before aggregating the approximate factors and sending the new posterior back to the workers. The workers will then update their own context distributions based on the current state of the central parameters. This process then repeats. By analyzing the homogeneity of the data and updates across workers, heuristics could be used to choose the learning rate for each worker and damping factor for the central parameter server â we leave this for future work.
⢠PVI with lock-free asynchronous updates: instead of waiting for all workers to ï¬nish training locally, the model aggregation and update steps can be performed as soon as any worker has ï¬nished. This strategy is particularly useful when communication is done over an unreliable channel, the distribution of the data across diï¬erent machines is highly unbalanced, or when a machine can be disconnected from the training procedure at any time. However, this strategy is expected to be generally worse compared the synchronous update scheme above, since the context/cavity distribution could be changed while a worker is running and the next parameter update performed by this worker could overwrite the updates made by other workers, i.e. there is the possibility of stale updates.
We demonstrate these communication strategies on a large-scale federated classiï¬cation task in section 7.1 and highlight the advantages and potential pitfalls of PVI, GVI and various alternatives for diï¬erent levels of data homogeneity across memory shards.
18
# Improving Continual Learning for Sparse Gaussian Processes Using PVI
Gaussian processes (GPs) are ï¬exible probabilistic distributions over functions that have been used in wide variety of machine learning problems, including supervised learning [Rasmussen and Williams, 2006], unsupervised learning [Lawrence, 2004] and reinforcement learning [Deisenroth, 2010]. The application of GPs to more general, large-scale settings is however hindered by analytical and computational intractabilities. As a result, a large body of active GP research aims to develop eï¬cient approximation strategies for inference and learning in GP models. In this work, we develop an approximation based on partitioned variational inference for GP regression and classiï¬cation in a continual learning setting. In this setting, data arrive sequentially, either one data point at a time or in batches of a size that is unknown a priori. An eï¬cient strategy to accurately update the model in an online fashion is thus needed and can be used for various applications such as control [Nguyen-Tuong et al., 2009] or mapping [OâCallaghan and Ramos, 2012].
In particular, building on recent work on pseudo-point sparse approximations [Titsias, 2009, Hensman et al., 2015, Matthews et al., 2016, Bui et al., 2017b] and streaming approximations [Csató and Opper, 2002, Bui et al., 2017a], we develop a streaming variational approximation that approximates the posterior distribution over both the GP latent function and the hyperparameters for GP regression and classiï¬cation models. Additionally, the partitioned VI view of this approximation allows just-in-time, dynamic allocation of new pseudo-points speciï¬c to a data batch, and more eï¬cient training time and accurate predictions in practice. We will provide a concise review of sparse approximations for Gaussian process regression and classiï¬cation before summarizing the proposed continual learning approach. For interested readers, see Quiñonero-Candela and Rasmussen [2005], Bui et al. [2017a] for more comprehensive reviews of sparse GPs. Appendix C contains the full derivation of diï¬erent streaming variational approaches with shared or private pseudo points, and with maximum likelihood or variational learning strategies for the hyperparameters.
# 6.1 Variational inference for both latent function and hyperparameters
Given N input and output pairs {xn, yn, a standard GP regression or classification model assumes he outputs {yn,}_, are generated from the inputs {x,}+_, according to yp, = n and observations into vectors x = {x,}*_, and y = {yn}4_, respectively. o the work o: [Bui et al.| 2017b|, in which only a point estimate of the hyperpa: other quantities will be suppressed when appropriate to lighten the notation. Ex model considered here is analytically and computationally intractable, due to the no between f and ¢, and the need to perform a high dimensional integration when NV. f (Xn) + â¬n, where f is an unknown function that is corrupted by observation noise, for example, £ ~ N(0, 04) in the real-valued output regression prob. em[}| Typically, f is assumed to be drawn from a zero-mean GP prior f ~ GP(0,k(-,-|â¬)) whose covariance function depends on hyperparameters â¬. We also place a prior over the hyperparameters ⬠and as such inference involves finding the posterior over both f and ¢, p(f,ely, x), and computing the marginal likelihood p(y|x), where we have collected the inputs This is one key difference rameters is learned via maximum likelihood. The dependence on the inputs of the posterior, marginal likelihood, and act inference in the n-linear dependency is large.
In this work, we focus on the variational free energy approximation scheme 2016] which is arguably the leading approximation method for many scenarios. This scheme ower bounds the marginal likelihood of the model using a variational distribution q(f,¢) over the latent
4In this section, f stands for the model parameters, as denoted by θ in the previous sections.
19
function and the hyperparameters:
v(ylf,¢,x)p(fle)p(e) a(f,â¬) log ply|x) = log / dfde ply, f,â¬|x) > / dfde q(f,â¬) log Flaf.6):
where F(q(f,¢)) is the variational surrogate objective and can be maximized to obtain q(f,¢). In order to arrive at a computationally tractable method, the approximate posterior is parameterized via a set of M, pseudo-outputs a which are a subset of the function values f = {f4a,a}. Specifically, the approximate posterior takes the following structure:
a(f, ©) = v(feala, ea(aya(e), (21)
where g(a) and q(e) are variational distributions over a and ⬠respectively, and p(fzala,¢) is the conditional prior distribution of the remaining latent function values. Note that while a and ⬠are assumed to be factorized in the approximate posterior, the dependencies between the remaining latent function values fza and the hyperparameters â¬, and between fq themselves are retained due to the conditional prior. This assumption leads to a critical cancellation that results in a computationally tractable lower bound as follows:
F(q(a),q(e)) = / dfde q(f) log eel ie : = -KL[q(6)|lp(©)] - ic a(e)KL{q(a)||p(ale) +30 [de da afua(e)a(a)p( falas) log p(Un| fas %0),
where fn = f(Xn) is the latent function value at x,. Most terms in the variational lower bound above require computation of an expectation wrt the variational approximation q(e), which is not available in closed-form even when q(e) takes a simple form such as a diagonal Gaussian. However, these expectations can be approximated by simple Monte Carlo with the reparameterization trick and Welling] 2014, Rezende et al.| 2014]. The remaining expectations can be handled tractably, either in closed-form or by using Gaussian quadrature.
# 6.2 Continual learning for streaming data with private pseudo-points
In continual learning, data arrive sequentially and revisiting previously seen data is prohibitive, and the current variational posterior can be reused to approximate the eï¬ect of the previous data on the exact posterior. Let {x1, y1} and {x2, y2} denote previous and new data, respectively. We ï¬rst re-interpret the structured global approximate posterior in eq. (21) as a product of local approximate factors as follows,
P(f.â¬ly1,*1) = P(6)P(flep(vilf, 6 x1) /Z1 â¬)p( fala, )p(ale)p(vilf â¬.x1)/21 â¬)p( fala, â¬)ti (a)ti (â¬)91(a)gi(e)
where qi(a) = t1(a)gi(a), qi(e) = p(e)ti(e)gi(e), and t)(-)s and g(-)s are introduced to approximate the contribution of p(ale) and p(yi|f,¢,x1) to the posterior, respectively. Note that the last equation is identical to eq. (23), but the factor representation above facilitates inference using PVI and allows
20
more ï¬exible approximations in the streaming setting. In particular, the exact posterior when both old data y1 and newly arrived data y2 are included can be approximated in a similar fashion,
VF, lyÂ¥1,Â¥2,%1,x2) = ple)p(flep(vilf, 6 x1)p(valf, â¬,X2)/Zi2 P(â¬)p(f4a.bla, b, â¬)p(bla, «)p(ale)p(yilf, â¬, x1)P(yalf, & X2)/Zr2 P(â¬)p(f4a,b|a, b, â¬)t2(bla)t2(©)t1 (ats (e)gi (a) gi (â¬)g2(b) 92(â¬) e
where b are new pseudo-outputs, and ta(-)s and go(-)s are approximate contributions of p(bla,¢) and p(yalf, â¬,X2) to the posterior, respectively. As we have reused the approximate factors t)(a) and gi(a), the newly introduced pseudo-points b can be thought of as pseudo-points private to the new data. This is the key difference of this work compared to the approach of Bui et al.| in which both old and new data share a same set of pseudo-points. The advantages of the approach based on private pseudo-points are potentially two-fold: (i) it is conceptually simpler to focus the approximation effort to handle the new data points while keeping the approximation for previous data fixed, as a new data batch may require only a small number of representative pseudo-points, and (ii) the number of parameters (variational parameters and private pseudo-inputs) is much smaller, leading to arguably easier problem to initialize and optimize. ?
The approximate factors t2(·) and g2(·) can be found by employing the PVI algorithm in section 2. Alternatively, in the continual learning setting where data points do not need to be revisited, we can convert the factor-based variational approximation above to a global variational approximation,
Df velyasÂ¥2)%01, 2) v(â¬)p( f4a,bla, b, â¬)t2(bla)ta(â¬)t1 (at (â¬) 91 (a) gi (â¬)92(b)g2(â¬) P(f4a,pla, b, â¬)g2(bla)qi(a)ae(e)
where go(bla) = to(bla)go(b), go(e) = p(e)t1(â¬)gi(e)t2(â¬)go(â¬), and g2(bla) and q2(â¬) are parameterized and optimized, along with the location of b. While this does not change the fixed-point solution compared to the PVI algorithm, it allows existing sparse global VI implementations such as that in GPflow |Matthews et al.|/2017] to be easily extended and deployed.
# 7 Experiments
Having discussed the connections to the literature and developed two novel applications of PVI, we validate the proposed methods by running a suite of continual and federated learning experiments on Bayesian neural network and Gaussian process models.
# 7.1 Distributed Federated Variational Inference for Neural Networks
In this section, we demonstrate that Partitioned Variational Inference is well-suited for federated approximate training of Bayesian neural networks, allowing both synchronous or lock-free asynchronous model updates across many machines. In particular, we consider the MNIST ten-class classiï¬cation problem and assume that the training points are partitioned into K disjoint shards. Two levels of data homogeneity across memory shards are considered: homogeneous [or iid, e.g. each shard has training points of all classes] and inhomogeneous [or non-iid, e.g. when K = 10, each shard has training points of only one class]. We evaluate diï¬erent training methods using a Bayesian neural network with one hidden layer of 200 rectiï¬ed linear units. We place a diagonal standard Normal prior over the parameters, p(θ) = N (θ; 0, I), and initialize the mean of the variational approximations as suggested by Glorot and Bengio [2010]. For distributed training methods, the data set is partitioned into 10 subsets or shards (K = 10), and 10 compute nodes (workers) with each able to access one memory shard. The implementation of diï¬erent inference strategies was done in Tensorï¬ow [Abadi et al., 2016] and the
21
communication between workers is managed using Ray [Moritz et al., 2017]. We use Adam [Kingma and Ba, 2014] for the inner loop optimization for partitioned, distributed methods or the outer loop optimization for global VI, and mini-batches of 200 data points. In the next few paragraphs, we brieï¬y detail the methods compared in this section and their results.
Global VI We ï¬rst evaluate global VI with a diagonal Gaussian variational approximation for the weights in the neural network. In particular, it is assumed that there is only one compute node (with either one core or ten cores) that can access the entire data set. This compute node maintains a global variational approximation to the exact posterior, and adjusts this approximation using the noisy gradients of the variational free-energy in eq. (18). We simulate the data distribution by sequentially showing mini-batches that can potentially have all ten classes (iid) or that have data of only one class (non-iid). Figures 13 and 14 in the appendix show the full performance statistics on the test set during training for diï¬erent learning rates and data homogeneity levels. The performance depends strongly on the learning rate, especially when the mini-batches are non-iid. Faster convergence early in training does not guarantee a better eventual model as measured by test performance, for the iid setting. Note that GVI for the non-iid setting can arrive at a good test error rate, albeit requiring a much smaller learning rate and a substantially larger training time. In addition, global VI is not communication-eï¬cient, as the global parameters are updated as often as data mini-batches are considered. The best performing method for the iid/non-iid settings is selected from all of the learning rates considered and they are shown in ï¬gs. 4 and 5.
Bayesian committee machine The Bayesian committee machine (BCM) is a simple baseline which is naturally applicable to partitioned data [Tresp, 2000]. The BCM performs (approximate) inference for each data shard independently of other data shards and aggregate the sub-posteriors at the end. In particular, global VI with a diagonal Gaussian variational approximation is applied independently to the data in each shard yielding approximate local posteriors {qk(θ)}K k=1. The aggregation step involves multiplying K Gaussian densities. The only shared information across diï¬erent members of the committee is the prior. This baseline, therefore, assesses the beneï¬ts from coordination between the workers. We consider two prior sharing strategies as follows,
I k=1 BCM â same: D(yEl9, Xk p(|x, y) x p(9) â TP O)plyel9, xx)] ~ Ti a (9 [p(a)|A~* (pa)
k=1 >
P(Â¥K|9, Xk > BCM â split: p(O|x, y) x p(6) k=1 K =] [eo po k=1 K 4, xx)] ~ T] (0). k=1
BCM is fully parallelizable (one worker independently performs inference for one shard) and is communication-eï¬cient (only one round of communication is required at the end of the training). However, there are several potential disadvantages: (i) it is not clear whether the prior sharing schemes discussed above will over-regularize or under-regularize the network compared to the original batch training scheme, and (ii) since each shard develops independent approximations and the model is unidentiï¬able, it is unclear if the simple combination rules above are appropriate. For example, diï¬erent members of the committee might learn equivalent posteriors up to a permutation of the hidden units. Although initializing each approximate posterior qk(θ) in the same way can mitigate this eï¬ect, the lack of a shared context is likely to be problematic. We evaluate BCM for both iid and non-iid settings, with diï¬erent learning rates for the Adam optimizer for each worker and show the full results in ï¬gs. 15 and 16. It is perhaps surprising that BCM works well in the iid data setting, although the best error rate of 4% is still much higher than state-of-the-art results on the MNIST classiï¬cation task (â¼ 1%). However, the concern above about the potential pitfalls when multiplying diï¬erent sub-posteriors is
22
validated in the non-iid setting, and in the iid setting when each worker is trained for a long time before performing the aggregation step. The best results in both settings are selected and shown in ï¬gs. 4 and 5.
Partitioned VI As discussed in section 5, PVI is a natural ï¬t to training probabilistic models on federated data and is ï¬exible such that various communication and scheduling strategies can be employed. In this section, we test three approaches discussed in section 5:
⢠Sequential PVI with only one pass through the data set: The number of training epochs and learning rates for each worker are varied, and the full results are included in ï¬gs. 17 and 18. The results show that while this strategy is eï¬ective for the iid setting, it performs poorly in the non-iid setting. This issue is known in the continual learning literature where incremental learning of a single-head network is known to be challenging. Episodic memory [see e.g. Zenke et al., 2017, Nguyen et al., 2018] or generative replay [Shin et al., 2017] is typically used to address this problem. The performance for the best hyperparameter settings are shown in ï¬gs. 4 and 5.
⢠PVI with synchronous model updates: In this experiment, each worker runs one epoch of Global VI between message passing steps, and the parameter server waits for all workers to ï¬nish before aggregating information and sending it back to the workers. We explore diï¬erent learning rates for the inner loop optimization and various damping rates for the parameter server, and show the full results in ï¬gs. 19 and 20. While the performance on the test set depends strongly on the learning rate and damping factor, if these values are appropriately chosen, this update strategy can achieve competitive performance (â¼< 2%). By analyzing the homogeneity of the data and updates across workers, some forms of heuristics could be used to choose the learning rate and damping factor â we leave this for future work. We pick the best performing runs and compare with other methods in ï¬gs. 4 and 5.
⢠PVI with lock-free asynchronous updates: Similar to the synchronous PVI experiment, we vary the learning rate and damping factor and include the full results in ï¬gs. 22 and 23. The test performance of this method is generally worse compared the synchronous update scheme, since the context/cavity distribution could be changed while a worker is running and the next parameter update performed by this worker could overwrite the updates made by other workers. While we do not simulate conditions that favour this scheduling scheme such that unreliable communication channels or unbalanced data across memory shards, we expect this strategy to perform well compared to other methods in these scenarios. The best hyperparameters are selected and their performance are shown in ï¬gs. 4 and 5.
Discussion The best performance for each method discussed above are shown in ï¬gs. 4 and 5, demonstrating the accuracy-training time and accuracy-communication cost frontiers. In the iid data setting (ï¬g. 4), distributed training methods can achieve comparable performance in the same training time compared to that of global VI. However, methods based on data partitioning are much more communication-eï¬cient, for example, PVI-sync uses about 10 times fewer messages than GVI when both methods attain a 3% test error. The results for PVI-seq with one pass through the data demonstrates its eï¬ciency, but highlights the need to revisit data multiple times to obtain better error rate and log-likelihood. BCM shows promising performance, but is outperformed by all other methods, suggesting that communication between workers and setting the right approximate prior (context) for each partition are crucial.
Figure 5 shows the non-iid data regime is substantially more challenging, as simple training methods including BCM and PVI-seq with one pass perform poorly and other distributed methods require more
23
extreme hyperparameter settings (e.g. much smaller learning rate and higher damping factor), much longer training time, and higher communication cost to obtain a performance comparable to that in the iid regime. We note that the performance of PVI is signiï¬cantly better than a recent result by Zhao et al. [2018], who achieved a 10% error rate on the same non-iid data setting. Moreover, unlike this work, we use a fully-connected neural network (rather than a convolutional one) and do not communicate data between the workers (no data synchronization). As in the iid setting, the performance of PVI-async is hindered by stale updates, compared to PVI-sync, despite early faster convergence. While GVI with 10 cores is the best performer in terms of predictive performance, it is the least communication-eï¬cient due to the need to frequently pass gradients between the central parameter server and compute nodes. This, however, suggests that the performance of PVI could be further improved by more frequent updates between workers, essentially trading oï¬ the communication cost for more accurate prediction.
24
2 5
100 5 100 4 â BCM- split prior, 10 subsets 1 â BCM- same prior, 10 subsets 1 â GVI - 1 core 1 ââ GVI - 10 cores 4 â PVI- seq, one pass, 10 workers â PVI- async, 10 workers x 1 ââ PVI- sync, 10 workers x ~ ~ S i E 10- & 104 oO J is) 2-5 275 T T T T T T T T T T T + BCM - split prior, 10 subsets ] @ BCM- same prior, 10 subsets â GVI- 1 core 10° - 100 4 ââ GVI- 10 cores | â PVI- seq, one pass, 10 workers 4 â PVI- async, 10 workers 1 e â PVI- sync, 10 workers 7 S a 1 ~ 4 | 10-1 5 10-1 4 T T T T T T T T T T T 10° 101 10? 10% 104 10° 10! 10? 10% 104 10° train time /s {worker â server] communication messages (a) Error and NLL vs train time (b) Error and NLL vs communication cost
e s a ~
2
(a) Error and NLL vs train time
# (b) Error and NLL vs communication cost
Figure 4: Performance on the test set in the federated MNIST experiment with an iid distribution of training points across ten workers. The test performance is measured using the classiï¬cation error [error] and the negative log-likelihood [nll], and for both measures, lower is better. All methods are assessed using the performance vs train time and performance vs communication cost plots â closer to the bottom left of the plots is better. Methods used for benchmarking are: Bayesian Committee Machines (BCM) with the standard Normal prior [same] and with a weakened prior [split], Global VI (GVI) with one and ten compute cores, PVI with sequential updates and only one pass through the data [equivalent to Variational Continual Learning], PVI with lock-free asynchronous updates (PVI - async), and PVI with synchronous updates (PVI - sync). For ease of presentation, the x-axes for the plots start at 1. See text for more details. Best viewed in colour.
2 6
100 5 100 ] @ So x ~ ~ 8 8 BF 105 & 105 o 7 =ââ BCM - split prior, 10 subsets o > == BCM - same prior, 10 subsets ~=â GVI- 1 core - == GVI- 10 cores _ â PVI- seq, one pass, 10 workers â PVI- async, 10 workers 2- ââ PVI- sync, 10 workers 275 T T T T T T T T T T T J + BCM- split prior, 10 subsets - @ BCM- same prior, 10 subsets 1 â GVI- 1 core 4 ââ GVI- 10 cores ââ PVI- seq, one pass, 10 workers ] ââ PVI- async, 10 workers 3 =ââ PVI- sync, 10 workers 10° 5 £& 10°95 | a 10-1 T T T 1 T 1o-+ T 1 T T T T 10° 10! 10? 103 104 10° lo! 102 103 104 10° train time /s {worker â server] communication messages (a) Error and NLL vs train time (b) Error and NLL vs communication cost
3 & a
(a) Error and NLL vs train time
# (b) Error and NLL vs communication cost
Figure 5: Performance on the test set in the federated MNIST experiment with a non-iid distribution of training points across ten workers, i.e. each worker has access to digits of only one class. The test performance is measured using the classiï¬cation error [error] and the negative log-likelihood [nll], and for both measures, lower is better. All methods are assessed using the performance vs train time and performance vs communication cost plots â closer to the bottom left of the plots is better. Methods used for benchmarking are: Bayesian Committee Machines (BCM) with the standard Normal prior [same] and with a weakened prior [split], Global VI (GVI) with one and ten compute cores, PVI with sequential updates and only one pass through the data [equivalent to Variational Continual Learning], PVI with lock-free asynchronous updates (PVI - async), and PVI with synchronous updates (PVI - sync). For ease of presentation, the x-axes for the plots start at 1. See text for more details. Best viewed in colour.
# Improving Continual Learning for Sparse Gaussian Processes
We evaluate the performance of the continual learning method for sparse Gaussian process models discussed in section 6 on a toy classiï¬cation problem and a real-world regression problem. The diï¬erent inference strategies were implemented in Tensorï¬ow [Abadi et al., 2016] and GPï¬ow [Matthews et al., 2017].
# 7.2.1 A comparison on a toy data set
A toy streaming data set was created using the banana data set, which comprises 400 two-dimensional inputs and corresponding binary targets. The data set is ï¬rst ordered using one input dimension and then split into three equal batches. We consider two sparse variational approaches for inferring the latent function with 10 pseudo-points for each batch: (i) maximum-likelihood estimation for the hyperparameters â this is similar to the method of Bui et al. [2017a] but using private pseudo-points, as described in appendix C, and (ii) online variational inference for the hyperparameters, as discussed in section 6. The key results are shown in ï¬g. 6, which includes the predictions after observing each data batch and the (distributional) hyperparameter estimates for both methods. We also include the histograms of the hyperparameter samples obtained by running MCMC for both the latent function and hyperparameters on the whole data set. As expected, the sparse variational methods underestimate the width of the distributions over the hyperparameters. The maximum-likelihood estimates for the hyperparameters tend to be smaller and to change faster when moving from one batch to another compared the VI estimates. Consequently, the prediction using the ML hyperparameters tends to have sharper decision boundaries. We include in appendix C a failure case of the ML approach where the hyperparameter values are overï¬t to a data batch, while the VI approach is more robust and maintains better prediction quality.
# 7.2.2 Learning inverse dynamics of a robot arm
We next test the proposed method on learning inverse dynamics of a robot arm. The data set is generated using a Barrett WAM arm with seven degrees-of-freedom â seven joints to which control torques can be applied and the jointsâ angles, speeds and accelerations can be recorded accordingly [Nguyen-Tuong et al., 2009]. The aim is to learn the inverse dynamics of the arm, i.e. to accurately predict the forces used at diï¬erent joints given the jointsâ current characteristics. We treat this task as a regression problem with 7 independent outputs and 21 inputs. The data set consists of 12,000 points for training and 3,000 points for prediction.
To simulate the streaming setting, we ï¬rst sort the data points using a jointâs location and form 24 batches of 500 points each. As there are seven joints in the arm, seven streaming data sets were created, each corresponding to one joint being used for sorting. For the proposed method in section 6, 10 pseudo-points are allocated and optimized for each batch, and as such, there are 240 pseudo-points in total at the end of training. We predict on the test set after sequentially showing a data batch to the model and compute the standardized mean squared errors (SMSEs). The results, averaged over multiple runs corresponding to diï¬erent inputs used for sorting, are shown in ï¬g. 7. Two additional methods were considered: (i) full GP with limited memory of 1500 data points, retrained from scratch after seeing each data batch, and (ii) streaming sparse GPs using variational inference for both latent function and hyperparameters with 240 pseudo-points being shared over all batches and re-optimized after every batch. For both sparse GP methods, online variational inference is used for the hyperparameters. The results demonstrate the proposed method with private pseudo-points is most eï¬ective and stable during training among all methods considered. For example, for the third degree-of-freedom, the ï¬nal SMSEs were 0.100 +â 0.004 for the proposed method with private pseudo-points, 0.313 +â 0.079 for the method with global pseudo-points, and 1.021 +â 0.113 for exact GP with limited memory. While the method
27
prediction hyperparameters f: ve, hypers: vfe f: vfe, hypers: ml log lengthscale 0 log lengthscale 1 log sigma 3 3 2.0 20 25 cal 15 g 2 2.0 15 8 510 15 1.0 el 1.0 0.5 Os 05 0.0 0.0 0.0 -0.5 0.0 05 -1.0 -0.5 0.0 0.5 1.0 15 2.0 5 6 4 4 N 3 a >3 4 8 2 2 gs $2 ra 2 1 1 o 0 0 -0.5 0.0 O5 -1.0 -0.5 0.0 0.5 1.0 15 2.0 12 6 8 10 ° 6 8 4 g 2 S 2 6 2 ba a 4 2 2 2 ° 0 0 -0.5 0.0 05 -10 -0.5 0.0 0.5 1.0 15 2.0 â f+hypers: meme â f: vfe, hypers: vie ââ f: vfe, hypers: ml
Figure 6: Experimental results on a toy streaming data set: predictions after sequentially observing diï¬erent data batches [left] and corresponding hyperparameter estimates [right]. Three methods were considered for this task: MCMC for both the latent function (f ) and hyperparameters (hypers) with no sparsiï¬cation, variational inference for both f and hypers with inducing points, and sparse variational inference for f and maximum likelihood estimation for hypers. Best viewed in colour.
with shared pseudo-points has more pseudo-points for earlier batches and is expected to perform better theoretically, this experiment shows its inferior performance due to the need to reinitialize and optimize all pseudo-points at every batch. The full GP with limited memory approach performs poorly and exhibits forgetting as more batches arrived and old data are excluded from the memory. In addition, we also tried the streaming inference scheme of Bui et al. [2017a], which only retains a point estimate of the hyperparameters after each batch, but this did not perform well, demonstrating that being distributional over the hyperparameters is crucial.
# 8 Conclusion
This paper provided a general and unifying view of variational inference, Partitioned Variational Inference, for probabilistic models. We showed that the PVI framework ï¬exibly subsumes many existing variational inference methods as special cases and allows a wealth of techniques to be connected. We also demonstrated how PVI allows novel algorithmic developments and practical applications. This is illustrated through the development of a streaming variational inference scheme for Gaussian process models and a communication eï¬cient algorithm for training Bayesian neural networks on federated data.
28
â streaming, private u ââ streaming, shared u â full, limited mem. DOF=1 DOF =2 DOF = 3 DOF =4 1.2 0.5 1.0 0.4 uw 0-8 0.3 2 0.6 â 0.2 0.4 0.1 0.2 0.0 DOF=5 DOF =6 DOF =7 1.50 25 2.5 1.25 20 2.0 wy 1.00 wn 1.5 1.5 a 0.75 1.0 0.50 1.0 0.5 0.25 za, ~ 0.5 aes 0 10 20 0 10 20 0 10 20 batch batch batch
Figure 7: Experimental results of learning inverse dynamics of a robot arm, i.e. predicting the forces applied to the joints of the arm given the locations, speeds, and accelerations of the joints. Three methods were considered for this task: streaming sparse variational GP with private pseudo-points, streaming sparse variational GP with shared pseudo-points, and full GP with limited memory. Full details are included in the text. Best viewed in colour.
One of the key contributions of this work is the connection of deterministic local message passing methods with global optimization-based schemes. Each of these diï¬erent branches of approximate inference has been arguably developed exclusively of each other, for example, existing probabilistic programming toolkits tend to work primarily with one approach but not both [see e.g. Tran et al., 2017, Minka et al., 2018]. This paper suggests these methods are inter-related and practitioners could beneï¬t from a uniï¬ed framework, i.e. there are ways to expand the existing probabilistic programming packages to gracefully handle both approaches. Additionally, the PVI framework could be used to automatically choose a granularity level and an optimization scheme that potentially oï¬er a better inference method for the task or the model at hand. It is, however, unclear how ï¬exible variational approximations such as mixtures of exponential family distributions [see e.g. Sudderth et al., 2010] or normalizing ï¬ows [Rezende and Mohamed, 2015] can be eï¬ciently and tractably accommodated in the PVI framework. We leave these directions as future work.
The experiments in section 7.1 demonstrated that PVI is well-suited to learning with decentralized
29
data. Deployment of PVI in this setting is practical as its implementation only requires a straightforward modiï¬cation of existing global VI implementations. We have also explored how this algorithm allows data parallelism â each local worker stores a complete copy of the model â and communication eï¬cient, uncertainty-aware updates between workers. A potential future extension of the proposed approach is model parallelism. That is, in addition to decentralizing the data and computation across the workers, the model itself is partitioned. As commonly done in many deep learning training algorithms, model parallelism could be achieved by assigning the parameters (and computation) of diï¬erent layers of the network to diï¬erent devices. Another potential research direction is coordinator-free, peer-to-peer only communication between workers. This could be achieved by a worker passing each update to several randomly selected other workers, who then apply the changes, rather than to a central parameter server.
# References
MartÃn Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeï¬rey Dean, Matthieu Devin, Sanjay Ghemawat, Geoï¬rey Irving, Michael Isard, et al. Tensorï¬ow: a system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation, 2016.
Cedric Archambeau and Beyza Ermis. Incremental variational inference for latent Dirichlet allocation. arXiv preprint arXiv:1507.05016, 2015.
David Barber and Christopher M. Bishop. Ensemble learning in Bayesian neural networks. In Neural Networks and Machine Learning, 1998.
Matthew James Beal. Variational algorithms for approximate Bayesian inference. PhD thesis, UCL, 2003.
David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993â1022, 2003.
Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In International Conference on Machine Learning, pages 1613â1622, 2015.
Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. Practical secure aggregation for privacy- preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 1175â1191. ACM, 2017.
Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C. Wilson, and Michael I. Jordan. Streaming variational Bayes. In Advances in Neural Information Processing Systems, 2013.
Thang D. Bui, Cuong V. Nguyen, and Richard E. Turner. Streaming sparse Gaussian process approxi- mations. In Advances in Neural Information Processing Systems, 2017a.
Thang D. Bui, Josiah Yan, and Richard E. Turner. A unifying framework for Gaussian process pseudo- point approximations using power expectation propagation. Journal of Machine Learning Research, 18(104):1â72, 2017b.
Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan, and Philip HS Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. arXiv preprint arXiv:1801.10112, 2018.
30
Jianmin Chen, Xinghao Pan, Rajat Monga, Samy Bengio, and Rafal Jozefowicz. Revisiting distributed synchronous SGD. arXiv preprint arXiv:1604.00981, 2016.
Lehel Csató and Manfred Opper. Sparse online Gaussian processes. Neural Computation, 2002.
Nando de Freitas, Mahesan Niranjan, Andrew H. Gee, and Arnaud Doucet. Sequential Monte Carlo methods to train neural network models. Neural Computation, 2000.
Jeï¬rey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1223â1231, 2012.
Marc Peter Deisenroth. Eï¬cient reinforcement learning using Gaussian processes. PhD thesis, University of Cambridge, 2010.
Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends®) in Theoretical Computer Science, 9(3-4):211-407, 2014.
Timo Flesch, Jan Balaguer, Ronald Dekker, Hamed Nili, and Christopher Summerï¬eld. Comparing continual task learning in minds and machines. Proceedings of the National Academy of Sciences, 115 (44):E10313âE10322, 2018.
Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, 2016.
Andrew Gelman, Aki Vehtari, Pasi Jylänki, Christian Robert, Nicolas Chopin, and John P Cunningham. Expectation propagation as a way of life. arXiv preprint arXiv:1412.4869, 2014.
Zoubin Ghahramani and H. Attias. Online variational Bayesian learning. In NIPS Workshop on Online Learning, 2000.
Xavier Glorot and Yoshua Bengio. Understanding the diï¬culty of training deep feedforward neural networks. In International Conference on Artiï¬cial Intelligence and Statistics, pages 249â256, 2010.
Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. In International Conference on Learning Representations, 2014.
Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, 2011.
Leonard Hasenclever, Stefan Webb, Thibaut Lienart, Sebastian Vollmer, Balaji Lakshminarayanan, Charles Blundell, and Yee Whye Teh. Distributed bayesian learning with stochastic natural gradient expectation propagation and the posterior server. Journal of Machine Learning Research, 18(106): 1â37, 2017. URL http://jmlr.org/papers/v18/16-478.html.
Tyler L Hayes, Ronald Kemker, Nathan D Cahill, and Christopher Kanan. New metrics and experimental paradigms for continual learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 2031â2034, 2018.
James Hensman, Magnus Rattray, and Neil D. Lawrence. Fast variational inference in the conjugate exponential family. In Advances in Neural Information Processing Systems, pages 2888â2896, 2012.
James Hensman, Nicolo Fusi, and Neil D Lawrence. Gaussian processes for big data. In Uncertainty in Artiï¬cial Intelligence, page 282, 2013.
31
James Hensman, Alexander G. D. G. Matthews, and Zoubin Ghahramani. Scalable variational Gaussian process classiï¬cation. In International Conference on Artiï¬cial Intelligence and Statistics, 2015.
José Miguel Hernández-Lobato and Ryan P. Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, 2015.
José Miguel Hernández-Lobato, Yingzhen Li, Mark Rowland, Daniel Hernández-Lobato, Thang D. Bui, and Richard E. Turner. Black-box α-divergence minimization. In International Conference on Machine Learning, 2016.
Geoï¬rey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Conference on Computational Learning Theory, pages 5â13, 1993.
Matthew D Hoï¬man, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303â1347, 2013.
Antti Honkela, Tapani Raiko, Mikael Kuusela, Matti Tornio, and Juha Karhunen. Approximate Riemannian conjugate gradient learning for ï¬xed-form variational bayes. Journal of Machine Learning Research, 11(Nov):3235â3268, 2010.
Tommi S Jaakkola and Michael I Jordan. Improving the mean ï¬eld approximation via the use of mixture distributions. In Learning in graphical models, pages 163â173. Springer, 1998.
Mohammad E Khan, Pierre Baqué, François Fleuret, and Pascal Fua. Kullback-Leibler proximal variational inference. In Advances in Neural Information Processing Systems, pages 3402â3410, 2015.
Mohammad Emtiyaz Khan and Wu Lin. Conjugate-computation variational inference : Converting variational inference in non-conjugate models to inferences in conjugate models. In International Conference on Artiï¬cial Intelligence and Statistics, 2018.
Mohammad Emtiyaz Khan, Reza Babanezhad, Wu Lin, Mark Schmidt, and Masashi Sugiyama. Faster stochastic variational inference using proximal-gradient methods with general divergence functions. In Conference on Uncertainty in Artiï¬cial Intelligence, pages 319â328, 2016.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2014.
Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.
Diederik P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameteri- zation trick. In Advances in Neural Information Processing Systems, pages 2575â2583, 2015.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 2017.
David A. Knowles and Tom Minka. Non-conjugate variational message passing for multinomial and binary regression. In Advances in Neural Information Processing Systems, pages 1701â1709, 2011.
Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M Blei. Automatic diï¬erentiation variational inference. Journal of Machine Learning Research, 18(1):430â474, 2017.
32
Neil D Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. In Advances in Neural Information Processing Systems, pages 329â336, 2004.
Yingzhen Li, José Miguel Hernández-Lobato, and Richard E. Turner. Stochastic expectation propagation. In Advances in Neural Information Processing Systems, pages 2323â2331, 2015.
Zhizhong Li and Derek Hoiem. Learning without forgetting. In European Conference on Computer Vision, 2016.
Vincenzo Lomonaco and Davide Maltoni. Core50: a new dataset and benchmark for continuous object recognition. In Conference on Robot Learning, pages 17â26, 2017.
David JC MacKay. Information theory, inference and learning algorithms. Cambridge University Press, 2003.
Alexander G. D. G. Matthews, James Hensman, Richard E Turner, and Zoubin Ghahramani. On sparse variational methods and the Kullback-Leibler divergence between stochastic processes. In International Conference on Artiï¬cial Intelligence and Statistics, 2016.
Alexander G De G Matthews, Mark Van Der Wilk, Tom Nickson, Keisuke Fujii, Alexis Boukouvalas, Pablo León-Villagrá, Zoubin Ghahramani, and James Hensman. GPï¬ow: A Gaussian process library using TensorFlow. Journal of Machine Learning Research, 18(1):1299â1304, 2017.
Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of Learning and Motivation, 1989.
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-eï¬cient learning of deep networks from decentralized data. In International Confer- ence on Artiï¬cial Intelligence and Statistics, pages 1273â1282, 2017.
Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. In International Conference on Machine Learning, pages 2391â2400, 2017.
T. Minka, J.M. Winn, J.P. Guiver, Y. Zaykov, D. Fabian, and J. Bronskill. /Infer.NET 0.3, 2018. Microsoft Research Cambridge. http://dotnet.github.io/infer.
Thomas P Minka. Expectation propagation for approximate Bayesian inference. In Conference on Uncertainty in Artiï¬cial Intelligence, pages 362â369, 2001.
Thomas P. Minka. Power EP. Technical report, Microsoft Research Cambridge, 2004.
Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, William Paul, Michael I Jordan, and Ion Stoica. Ray: A distributed framework for emerging AI applications. arXiv preprint arXiv:1712.05889, 2017.
Radford M. Neal. Bayesian learning via stochastic dynamics. In Advances in Neural Information Processing Systems, pages 475â482, 1993.
Radford M Neal. Bayesian learning for neural networks, volume 118. Springer Science & Business Media, 2012.
Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, and Richard E. Turner. Variational continual learning. In International Conference on Learning Representations, 2018.
33
Duy Nguyen-Tuong, Jan R Peters, and Matthias Seeger. Local Gaussian process regression for real time online model learning. In Advances in Neural Information Processing Systems, pages 1193â1200, 2009.
Manfred Opper. Online learning in neural networks. chapter A Bayesian Approach to Online Learning, pages 363â378. Cambridge University Press, 1998.
Simon T OâCallaghan and Fabio T Ramos. Gaussian process occupancy maps. The International Journal of Robotics Research, 31(1):42â62, 2012.
Neal Parikh and Stephen Boyd. Proximal algorithms. Foundations and Trends® in Optimization, 1(3): 127-239, 2014.
Joaquin Quiñonero-Candela and Carl E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 2005.
Rajesh Ranganath, Dustin Tran, and David Blei. Hierarchical variational models. In International Conference on Machine Learning, pages 324â333, 2016.
Garvesh Raskutti and Sayan Mukherjee. The information geometry of mirror descent. IEEE Transactions on Information Theory, 61(3):1451â1457, March 2015.
Carl E. Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
Roger Ratcliï¬. Connectionist models of recognition memory: Constraints imposed by learning and forgetting functions. Psychological Review, 1990.
Danilo Rezende and Shakir Mohamed. Variational inference with normalizing ï¬ows. In International Conference on Machine Learning, pages 1530â1538, 2015.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pages 1278â1286, 2014.
Tim Salimans and David A Knowles. Fixed-form variational posterior approximation through stochastic linear regression. Bayesian Analysis, 8(4):837â882, 2013.
Tim Salimans, David A Knowles, et al. Fixed-form variational posterior approximation through stochastic linear regression. Bayesian Analysis, 8(4):837â882, 2013.
Tim Salimans, Diederik Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In International Conference on Machine Learning, pages 1218â1226, 2015.
Hugh Salimbeni, Stefanos Eleftheriadis, and James Hensman. Natural gradients in practice: Non- conjugate variational inference in Gaussian process models. In International Conference on Artiï¬cial Intelligence and Statistics, 2018.
Masa-Aki Sato. Online model selection based on the variational Bayes. Neural Computation, 2001.
Jeï¬rey C. Schlimmer and Douglas Fisher. A case study of incremental concept induction. In The National Conference on Artiï¬cial Intelligence, 1986.
Steven L Scott, Alexander W Blocker, Fernando V Bonassi, Hugh A Chipman, Edward I George, and Robert E McCulloch. Bayes and big data: The consensus Monte Carlo algorithm. International Journal of Management Science and Engineering Management, 11(2):78â88, 2016.
34
Ari Seï¬, Alex Beatson, Daniel Suo, and Han Liu. Continual learning in generative adversarial nets. arXiv:1705.08395, 2017.
Rishit Sheth and Roni Khardon. A ï¬xed-point operator for inference in variational Bayesian latent Gaussian models. In Arthur Gretton and Christian C. Robert, editors, International Conference on Artiï¬cial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, pages 761â769, Cadiz, Spain, 09â11 May 2016a. PMLR.
Rishit Sheth and Roni Khardon. Monte carlo structured SVI for non-conjugate models. arXiv preprint arXiv:1612.03957, 2016b.
Rishit Sheth, Yuyang Wang, and Roni Khardon. Sparse variational inference for generalized GP models. In Francis Bach and David Blei, editors, International Conference on Machine Learning, volume 37, pages 1302â1311, 2015.
Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems, pages 2990â2999, 2017.
Sharad Singhal and Lance Wu. Training multilayer perceptrons with the extended Kalman algorithm. In Advances in Neural Information Processing Systems, 1989.
Alex J. Smola, S.V.N. Vishwanathan, and Eleazar Eskin. Laplace propagation. In Advances in Neural Information Processing Systems, 2004.
Erik B Sudderth, Alexander T Ihler, Michael Isard, William T Freeman, and Alan S Willsky. Nonpara- metric belief propagation. Communications of the ACM, 53(10):95â103, 2010.
Richard S. Sutton and Steven D. Whitehead. Online learning with random representations. International Conference on Machine Learning, 1993. In
The Royal Society. Machine learning: The power and promise of computers that learn by example. Technical report, The Royal Society, 2017.
Lucas Theis and Matthew D Hoï¬man. A trust-region method for stochastic variational inference with applications to streaming data. In International Conference on Machine Learning, 2015.
Tijmen Tieleman and Geoï¬rey E Hinton. Lecture 6.5âRmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. International Conference on Artiï¬cial Intelligence and Statistics, pages 567â574, 2009. In
Dustin Tran, Matthew D. Hoï¬man, Rif A. Saurous, Eugene Brevdo, Kevin Murphy, and David M. Blei. Deep probabilistic programming. In International Conference on Learning Representations, 2017.
Volker Tresp. A Bayesian committee machine. Neural computation, 12(11):2719â2741, 2000.
Richard E. Turner and Maneesh Sahani. Two problems with variational expectation maximisation for time-series models. In D. Barber, T. Cemgil, and S. Chiappa, editors, Bayesian Time series models, chapter 5, pages 109â130. Cambridge University Press, 2011.
Matt P. Wand. Fully simpliï¬ed multivariate Normal updates in non-conjugate variational message passing. Journal of Machine Learning Research, 15:1351â1369, 2014.
35
Bo Wang and DM Titterington. Lack of consistency of mean ï¬eld and variational Bayes approximations for state space models. Neural Processing Letters, 20(3):151â170, 2004.
Xiangyu Wang and David B Dunson. Parallelizing MCMC via Weierstrass sampler. arXiv preprint arXiv:1312.4605, 2013.
John Winn and Tom Minka. Probabilistic programming with Infer.NET. Machine Learning Summer School, 2009.
John Winn, Christopher M. Bishop, and Tommi Jaakkola. Variational message passing. Journal of Machine Learning Research, 6:661â694, 2005.
Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In International Conference on Machine Learning, pages 3987â3995, 2017.
Chen Zeno, Itay Golan, Elad Hoï¬er, and Daniel Soudry. Bayesian gradient descent: Online variational Bayes learning with increased robustness to catastrophic forgetting and weight pruning, 2018.
Sixin Zhang, Anna E Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD. In Advances in Neural Information Processing Systems, pages 685â693, 2015.
Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-IID data, 2018.
36
# A Proofs
A.1 Relating Local KL minimization to Local Free-energy Minimization: Proof of Property 1
Substituting the tilted distribution p )(0) into the KL divergence yields,
aig) q(9)Zity (0) oY O(n 18) KL (4(0) [9 (0)) = / doa(0) los Gay Opty, 8) ~ 8% / d6q(9) 8 ey
Hence Fl (q(0)) = log Z; â KL (q(0) |p (0)).
# A.2 Showing Valid normalization of q(θ): Proof of Property 2
This property holds for i = 0. By induction, assuming gâ)) (0) = p(6) [In tS), we have:
; (6) q 6) T]4) = pO) TT 2 =v nyt 6) TT PC m mA q mAb; PO) | ne Og). ~ gl D0)â [Te
Hence, the property holds for all i.
# A.3 Maximization of local free-energies implies maximization of global free-energy: Proof of Property 3
(a) We have:
DL Fala) =D [ ada" (6) 10g HI) â f adg(6) og MO abba) â Fog),
(b) Let mq and 77 be the variational parameters of q(#) and q*(0) respectively. We can write q() = q(6; mq) and q*(9) = q(9;n¢) = P(9) [Im tm(9; 77). First, we show that at convergence, the derivative of the global free-energies equals the sum of the derivatives of the local free-energies. The derivative of the local free-energy Fy,(q(@)) w.r.t. q is:
dFin(q()) _ / G9; 14 )PYml9) âââââ= 16q(0; Sy a a EG dng in 69(65 Ng) log q(9; Nq)tm(8; 3) d / PYmi9) _ i) = â | d0q(@;n,) log ââ*"â+ d0q(9; nq) log dng (7a) tn(9; mm) "dng "q (2:70) q(9; Ng) 0; m* â9 0 A / 40q(0: 4) log 2Yml®) / 9 tite) jog Mima) _ f yg a(8:0}- dng tm(9; n3) dig q(9; "q) dng ie} ie} oa
# dFm(q(θ)) dηq
Thus, at convergence when ηq = ηâq ,
CEA) d / = déq(6; nq) log ââââ n= ding ( a) tm(0; 13) dng Na=Nq
37
Summing both sides over all m, dFm(q(9)) » dng man;
dFm(q(9)) Hin PYml?) » dng 10q(9; nq) log = ic '4(9; nq) °8 TT, tm (O: 1) Ing=ng d P(9) Tm PYm|®) ddq(9; 7 log? âerori ~ diy One) (853) d man; Ua na=N5
.
Now consider the derivative of the global free-energy F(q(θ)):
dF(q(9)) _ d [cont 1m) log? P(9) [In P(Yml) dig Ang 4(9; nq) -/ 19 AF) jg 2) Tn PY ml) yy ta 8nc" ° dng (9; Na) dng
Hence,
dF (q(9)) / 1g 89 Ma) jg PCP) Lm Pm I8) dFm(a(9)) dng n=ni dng QO; ng ) na=n§ 7 dnq na=n§
# na=n§ | =n qT
m(q(θ)) dηq Fm(q(θ)), we have d = 0, which implies: For all m, since qâ(θ) = argmaxq(θ) F
# âQ dF(q(θ)) dηq
aF (a(0)) @Fn(a))| â_g Aq Ing=mgp âme Ing =n
Thus, qâ(θ) is an extremum of F(q(θ)).
of the global free-energy at convergence. Similar to the derivative case, we now show that the Hessian of the global free-energies equals the sum of the Hessians of the local free-energies. The Hessian of the local free-energy Fm(q(θ)) w.r.t. ηq is:
2 0; n°) ¢ dF in(q(9)) sor [eo (6:1) g(9; 75 )P(Yml9) ae aT q(9; nq )tm(4; ni) P(Yml9) a / (9; 1%) = 10q(0; ti + âââ | ddq(@;n,) \ Fuh a ddq( 1a) tm (9; 73) dnqgdng doq( "1q) log q(9; nq) ( ] 1q(0; 0; 0 = + | bai (0; Nq) og P(Ym9) 4 peo q(9; 4) log q(9; ng) iat ) [e) oa [e) oa tm(O; 73) dnqgdng q(93 Nq _ ad dg(Ovg _ Hq dng
At convergence when ηq = ηâq ,
d? Fim(q(8)) & [cou Nq) log PO ml4) dngdni m=nt dngdnd 31q tm (0; ni) ng=N;
Summing both sides over all m, 1 Q yo Fala) dingdny m=nt
1 Q 1? 1 0 yo Fala) f aeeyog Heol dingdny m=nt dngdng Tn tm(9; 03) ng=nt 1? p(@ ! 6 = fanny og Trt) dngdng q(O; n=) ant
.
38
.
Now consider the Hessian of the global free-energy F(q(θ)):
PF(q(9)) _ â ? P() Tm PYml4) [eva Nq) log Angdng Ang dny q(9; 14) 0 feo ee Joe 2) Lm PYml) 4 dg(6;-74 Ang dng G9; nq) oT dng
Hence,
PF(a(6)) / aot Gna) tog 28) Lm P(YmI®) > Fm(a(6)) dngdng na=n§ dngdni q(; Na) ne=ny 7 dngdng na=n%
.
Fim (q()) For all m, since q*(@) = ee («co Fm(q(9)), the Hessian is negative definite, which 9)) a ge Ina =nj q* (9) is a maximum of F(q(@)). dngdng Ine=ng implies that the Hessian ¢ of the global free-energy is also negative definite. Therefore,
# A.4 Derivation of ï¬xed point equations (property 4)
Assume that the approximate likelihoods {tm(0)}4_, are in the un-normalized exponential family. That is, tm(0) = exp(nnT@) + Cm) for some constant c,. In this section we absorb the constant into the natural parameters 7, <â [cm, mn] and add a corresponding unit element into the sufficient statistics T(9)T < [1,7(8)7]. To lighten notation we still denote the sufficient statistic vector as T(9) so ,
# tmn(9) = tm(9; 1m) = exp(nnT
mT (θ)), (22)
where ηm is the natural parameters and T (θ) is the augmented suï¬cient statistics. We also assume that the prior p(θ) and the variational distributions q(i)(θ) are normalized exponential family distributions. To derive the ï¬xed point updates for the local variational inference algorithm, we consider maximizing the local variational free energy F (i)(q(θ)) in (2). Assuming that at iteration i, the variational distribution q(i
g)(6) = exp(Mrevt (9) ~ A(nprev)); (23)
and the target distribution q(θ) that we optimize has the form:
q(0) = exp(nIT(8) â A(ng)), (24)
where A(·) is the log-partition function. Let η = ηprev â ηq â η(i bi
1)
â 1) , we can write F (i)(q(θ)) as:
F (q(0)) = A(nq) _ A(Mprev) + q(log p(y,,|9)) + nt (1 (6)).
Take the derivative of F (i)(q(θ)) w.r.t. ηq and note that dA(ηq) dηq
= Eq(T (θ)), we have:
dF (i)(q(θ)) dηq = d dηq Eq(log p(ybi|θ)) + d2A(ηq) dηqdηq η.
Set this derivative to zero, we obtain:
ηq = Câ 1 d dηq Eq(log p(ybi|θ)) + ηprev â η(i bi â 1) , (25)
39
where C = SAC) = cov,)[T(#)T(8)] is the Fisher Information. Note that from Property f2} dngdng Nq =" + endb; mn 0 + a ) and Nprev = 0 + Yom ne 0 , where 79 is the natural parameters for the prior p(@). Hence, from (25), a fixed point update for Algorithm I] only needs to update nh? locally using:
η(i) bi = Câ 1 d dηq Eq(log p(ybi|θ)). (26)
The Fisher Information can be written as C = dµq dηq This leads to a cancellation of the Fisher information, where µq = Eq(T (θ)) is the mean parameter of q(θ).
η(i) bi = d dµq Eq(log p(ybi|θ)). (27)
# A.5 Equivalence of local and global ï¬xed-points: Proof of property 5
Here we show that running parallel ï¬xed-point updates of local-VI with M data groups has equivalent dynamics for q(θ) as running the ï¬xed points for batch VI (M = 1). The local-VI ï¬xed point updates (property 4) are given by,
η(i) bi = d dµq(i â 1) E q(i â 1)(log p(ybi|θ)). (28)
â
Here we have explicitly denoted the dependence of the approximate posterior on the iteration number i as the dynamics of this is the focus. When M = N and a parallel ï¬xed point update of all the natural parameters, {η(i)
M M d (i) _, n® 1 1? = no + 2 n?) =m + >> Vian gi») (log p(Yp,19)) m=1 m=1 q N d =o + âââ E,ui-1) (log? 0)). 29 no + >> Tan ea! 1) (log p(n!) (29) n=1 qd
Here, in the last line, we have used the fact that the data are independent conditioned on θ. Now consider application of the ï¬xed-point updates to the batch case (M = 1)
nh? no +7 = No + gi» (log p(Yn|9)) (30) 1 5 Ea(los p(uld)) = 0 + y dpigci- ae ry)
Therefore the updates for q(θ) are identical in the two cases. Naïve implementation of local-VI requires M sets of natural parameters to be maintained, whereas parallel updating means this is unnecessary, but equivalent to ï¬xed-point global VI.
# A.6 Relations between methods employing stochastic approximation of ï¬xed-points / natural gradient updates
The damped simpliï¬ed ï¬xed-point updates for global VI are,
i An d nl = (1âp)nf-) 4 o(m + ah Blox (yl) (31) q
40
Hoffman et al. (2013), Sheth and Khardon (2016b] employ stochastic mini-batch approximation of E,(log p(y|@)) = do, E,(9) (log P(Yn|9)) = NE banca (y),q(0) Log p Yn|9)) by sub-sampling the data distribu- tion y; id Paata(y) where paata(y) = x an 5(y â yn). This approximation yields,
q = (1 â Ï)η(i η(i) q â 1) + Ï Î·0 + L d dµq Eq(log p(yl|θ)) . (32)
Where yl are a mini-batch containing N/L data points. Li et al. [2015] show that their stochastic power EP algorithm recovers precisely these same updates when α â 0 (this is related to property 6). The relation to EP-like algorithms can be made more explicit by writing 32 as
. a d 1 =n) + pl (4 q(log p(y;|9)) â nhc? /L) (33)
Where pâ = pL is a rescaled learning rate and nn? = nf? - ni? is the portion of the approximate posterior natural parameters that approximates the likelihoods. As such, nn? /L is the contribution a mini-batch likelihood makes on average to the posterior. So, interpreting the update in these terms, the new approximate posterior is equal to the old approximate posterior with pâ of the average mini- batch likelihood approximation removed and pâ of the approximation from the current mini-batch, ay q(log p(Ym|)), added in place of it.
Eq(log p(ym|θ)), added in place of it. Khan and Lin [2018] take a diï¬erent approach employing damped simpliï¬ed ï¬xed-point updates for local VI M = N and then using an update schedule that selects a mini-batch at random and then updates the local natural parameters for these data-points in parallel. That is for all data points in the mini-batch they apply
n = (1 â Ï)η(i η(i) n â 1) + Ï d dµq Eq(log p(yn|θ)). (34)
This local update incurs a memory overhead that is N times larger due to the need maintain N sets of local parameters rather than just one. However, if the mini-batch partition is ï¬xed across epochs, then it is suï¬cient to maintain M natural parameters instead, corresponding to one for each mini-batch,
m = (1 â Ï)η(i η(i) 1) m + Ï â d dµq Eq(log p(ym|θ)). (35)
Interestingly, both of these local updates (34 and 35) result in a subtly diï¬erent update to q as the stochastic global update above (33),
. a d A 1 =n) âp (Balle n(n) â nf ») . (36)
Here the deletion step is explicitly revealed again, but now it involves removing the natural 1) rather than the average mini-batch likelihood parameters of the mth approximate likelihood η(i m approximation η(i â 1) like /L. â
A summary of work employing these two types of stochastic approximation is provided in ï¬gure 8. Each variety of update has its own pros and cons (compare 33 to 36). The stochastic global update 33 does not support general online learning and distributed asynchronous updates, but it is more memory eï¬cient and faster to converge in the batch setting.
Consider applying both methods to online learning where y, or y,, correspond to the new data seen at each stage and pâ and p are user-determined learning rates for the two algorithms. General online learning poses two challenges for the stochastic global update [33] First, the data are typically not iid
41
[1] Beal, 2003 19] [2] Winn et al. 2005 [3] Knowles & Minka 2004 [4] Hoffman et al. 2013 [5] Hensman et al. 2013 [6] Sheth & Khardon 2016b [7] Salimans & Knowles 2013 » \ fully local [2,3] [8] Liet al. (2015) a > 0 ay Nin. global [1] My hy, NEN [9] Khan et al. 2018 âtin âch ma Xe °n aynpayos onseyaors [4,5,6,7,8] M=I local VI * global + stochastic schedule ** fully local + mini-batch = fully
# = global
** fully local + mini-batch = fully local
Figure 8: Forms of stochastic approximation in the local VI framework and the relationship to previous work. The granularity of the approximation of the approximation is controlled though M. Mini-batch approximation may be used inside each local variational free-energy. A stochastic schedule can be used to randomize the order in which groups of data points are visited. All algorithms have the same ï¬xed-points (the mini-batch approximation learning rate Ï has to obey the Robbinâs Munro conditions).
(due to covariate or data set shift over time). Second, general online learning does not allow all old data points to be revisited, demanding incremental updates instead. This means that when a new batch of data is received we must iterate on just these data to reï¬ne the approximate posterior, before moving on and potentially never returning. Iterating 33 on this new data is possible, but it would have disastrous consequences as it again breaks the iid mini-batch assumption and would just result in q ï¬tting the new data and forgetting the old previously seen data. A single update could be made, but this will normally mean that the approach is data-ineï¬cient and slow to learn. Iterating the local updates, on the other hand, 34 or 35 works nicely as these naturally incorporate a deletion step that removes just the contribution from the current mini-batch and can, therefore, be iterated to our heartâs content.
Similar arguments can be made about the distributed setting too. Stochastic global updates could be used in the distributed setting, with each worker querying a server for the current natural parameters Eq(log p(yl|θ)) â ηlike/L and communicating this to a server. The serverâs ηq, computing âηl = d dµq role is to update the global parameters η(new) = η(old) + âηl and to send these new parameters to q the workers. The diï¬culty is that this setup must obey the iid assumption so i) the data must be distributed across the workers in an iid way, and ii) the updates must be returned by each worker with the same regularity. In contrast, the stochastic local updates can be used in a very similar way without these restrictions.
The stochastic global update does have two important advantages over the local updates First, the memory footprint is L times smaller only requiring a single set of natural parameters to be maintained rather than M of them. Second, it can be faster to converge when the mini-batches are iid. Contrast what happens when new data are seen for the first time in the two approaches. For simplicity assume pâ = p = 1. In the second approach, nd = 0 as the data have not been seen before, but he first approach effectively uses fd = ne? /L. That is, the first approach effectively estimates he approximate likelihood for new data, based on those for previously seen data. This is a sensible approximation for homogeneous mini-batches. A consequence of this is that the learning rate pâ can be much larger than p (potentially greater than unity) resulting in faster convergence of the approximate posterior. It would be interesting to consider modifications of the local updates (36) that estimate the mth approximate likelihood based on information from all other data partitions. For example, in the first pass through the data, the approximate likelihoods for unprocessed mini-batches could be updated o be equal to the last approximate likelihood or to the geometric average of previous approximate ikelihoods. Alternatively, ideas from inference networks could be employed for this purpose.
42
# A.7 The relationship between natural gradient, mirror-descent, trust-region and proximal methods
Each step of gradient ascent of parameters η on a cost C can be interpreted as the result of an optimization problem derived from linearizing the cost function around the old parameter estimate, C(η) â C(η(i 1) and adding a soft constraint on the norm of the parameters:
η(i) = η(i 1) + Ï â dC(η) dη â η(i) = argmax η âηC(η(i 1))η â â 1 2Ï ||η â η(i â 1)||2 2
Here the terms in the linearized cost that do not depend on η have been dropped as they do not eï¬ect the solution of the optimization problem. The linearization of the cost ensures that there is an analytic solution to the optimization and the soft constraint ensures that we do not move too far from the previous setting of the parameters into a region where the linearization is inaccurate.
This reframing of gradient ascent reveals that it is making an Euclidean assumption about the geometry parameter space and suggests generalizations of the procedure that are suitable for diï¬erent geometries. In our case, the parameters are natural parameters of a distribution and measures of proximity that employ the KL divergence are natural.
The main result of this section is that the following optimization problems:
KL proximal method: nf = argmax VF (gâY (0))ng - sil (a(@) | a6) (37 Iq
# ηq âηF (i)(q(i
KL trust region: 7? = argmax VFO (g (0))1nq s.t. KL (a0) | a%(6)) <7 (38 q
(0))1nq s.t. KL (a0) (0))1nq s.t. KL*
KL* trust region: 1? = argmax VF (Gg (0))1nq s.t. KL* (a(@) | a%(6)) <7 (39 qd
mirror descent: iy = argmax V pF (G9 (0) tg - âKL (a) | a(6)) (40 q
All yield the same updates as the damped ï¬xed point equations / natural gradient ascent:
# d pâaazE dptg du | dng
η(i) bi = (1 â Ï)η(i bi â 1) + Ï 1) E q(i â 1)(log p(ybi|θ)) (41)
71
= (1 â Ï)η(i bi â 1) + Ï â â 1) 1) d dη(i q â 1) E q(i â 1)(log p(ybi|θ)). (42)
In the ï¬rst three cases (37 - 39), this equivalence only holds exactly in the general case if the parameter changes âη(i)
Here the KL proximal method is the straightforward generalization of the gradient ascent example hat replaces the Euclidean norm by the exclusive KL divergence. The KL trust region method uses a hard constraint on the same KL instead, but rewriting this as a Lagrangian recovers the KL proximal method with 1/p being the Lagrange multiplier. The KL* trust region method, often used o justify natural gradient ascent, employs the symmetrized KL divergence instead of the exclusive KL. The symmetrized KL is the average of the exclusive and inclusive KLs, KLâ (q(9) || q@~)(0)) = 3KL (4(9) || gD (0)) + $KL (qâ(6) || q(0)). Mirror descent, in its most general form, uses a Bregman divergence to control the extent to which the parameters change rather than a KL divergence. However, when applied to exponential families, the Bregman divergence becomes the inclusive KL divergence yielding the form above [Raskutti and Mukherjee) |2015 2018}. Note that this last method operates in the mean parameter space and the equivalence is
43
(38)
attained by mapping the mean parameter updates back to the natural parameters. Mirror descent has the advantage of not relying on the small parameter change assumption to recover natural gradient ascent. Having explained the rationale behind these approaches we will now sketch how they yield the ï¬xed-point updates.
The equivalence of the KL proximal method can be shown by diï¬erentiating the cost wrt ηq and substituting in the following expressions:
4 a d degi-1) (G1 Vn FO (a9 (B)) = FE ge-n log (,18)) = Goeth qe dngaâ1 AKL (4(9) | a 4)) tg (n ni) wm gi) (n _ nf?) dng dng "t 4 Angti-1) 79 :
In the second line above the approximation results from the assumption of small parameter change âη(i) (or alternatively local constancy of the Fisher information). Equating the derivatives to zero and q rearranging recovers the ï¬xed point equations.
The equivalence of the KL trust region method is now simple to show as the associated Lagrangian, L(g) = Vn FO (gq? (8) nq - F (KL (q(8) || g'-)(0)) â 7), is the proximal method up to an additive constant.
The KL* trust region method can also be rewritten as a Lagrangian L(mq) = Vi FO (qd (8)) ng â 3 (KLS (q(@) || ¢@-)(6)) â 7). For small changes in the approximate posterior natural parameters An\? = nh? â ni), the symmetrized KL can be approximated using a second order Taylor expansion,
. 1 . T dpt(iâ1) . s (i-1) ~_; (i-1) qd (i-1) KL (a(0) ll @ (0)) 5 (n ng ) Tigao (n ng ). (43)
This is the same form as the exclusive KL takes, the inclusive and exclusive KL divergences being locally identical around their optima. Taking derivatives of the Lagrangian and setting them to zero recovers the ï¬xed point equations again.
The mirror descent method can be shown to yield the ï¬xed points by noting that
. A d VFO (g°-) 6)) = âââE,v-1 (log p(yp,18)) â 1; dpigaâv AKL (q°~) (8) || (9) i ( ) =" nf 1). dig
Diï¬erentiating the mirror descent objective and substituting these results in recovers usual update. The last result above can be found using convex duality. For a full derivation and more information on the relationship between mirror descent and natural gradients see Raskutti and Mukherjee [2015].
It is also possible to deï¬ne optimization approaches analogous to the above that do not linearize the free-energy term and instead perform potentially multiple updates of the nested non-linear optimization problems [Theis and Hoï¬man, 2015, Khan et al., 2015, 2016].
# A.8 The equivalence of the simpliï¬ed ï¬xed-point updates and Power EP α â 0: Proof of property 6
Assume PEP is using unnormalized distributions throughout:
. . fd 1. pO() = q@D(0) ( ro ) % form tilted by
44
2. da(8) = proj (pS? (8) % moment match 3. q')(8) = (gâP@)) â 40) 4-2 GOO) bi t/a (qa(0))\/% % update poste rior (9) % update approximate likelihood
From (2,3,4), we have:
(i) bi log t,â (0) = 1 . F 7 ~ (log proj(p{) (@)) â log q''")(6)) + log th, (0). (44)
Using the augmented suï¬cient statistics T (θ) as in A.4 and the fact that qα(θ) is unnormalized, we can write:
10g qa (8) = log proj(p{) (8 Let pig, = oe 0)T(@)d@ be the mean parameters of qa( qa = Svs pa( )T(@)d@. Using Taylor series to expand log proj(pa ) = NaF (@)- 0). From the moment matching projection, (®) (45) (@)) as a function of a about a = 0:
AG aC d Gg log proj(p) (9) = log proj(p® (8))|,_4 +4 (a5 log proj(p\ (8))| wt) + F(a), (46)
(@))
= Dy (24 log proj (pl) where F(a) (@)) lao) collects (i) all the high order terms in the expansion. Since log proj(pa = log qâ(6), the above equa ))|aâo = ion becomes:
a _ d los proj(o()) = logâ) +0 (tog oii ())|,-o) + Fla) (a7)
Now consider d dα log proj(p(i) α (θ)), we have:
. ding, tog proi(n{? (@)) = T(ayT te = 1(g)t a SH (4s da da dg, da
te da ee dptg,
= 7(6)1 ee / p)(0)T(8)de (49 dptg, da
1 0 = 7 (6) Alaa. ~ fa (-1)( iM ) â rea (50 dptg, da »@)
0 iM ) »@) ox (0)
= 1(9)t Me iE i-1( ox Pynl) rpag (51 Chega (0) th, (@) a i.
a dy d J 0 D 0 = 7(9)t Ui i. De) Pv ) ee ) a6, (52 Chega ONgi-Y) ty, (9) ts, (9)
1) is the natural parameters of q(i â 1).
where ηq(i Thus,
â
d a dnge=ay od i P(Yd; I?) 40 â] () (9 =T(@)T (â-1) (9) | Gq OS PPOI Paâ (9)) 0 ) Wigan Ingen | (6) log # i De (53) 1 : 0 =1(0)"5 â [oOo PV ) a9. (54) Apega-1) t, (0) âi
45
Plug this into (47), we obtain:
; . 1 log proj(p (@)) = log g-) (8) + aT(9)T dace fo (6) log ig + F(a). (55) mG
From this equation and (44),
tog (0) = + { atria" â2 / aD(6) log 2X41) a9 + F(a) +togeP) (56 : a dpigci-1) 6) 1 , F _ PUY6IO)
p(ybi|θ) t(i 1) (θ) â bi p(ybi|θ) t(i 1) (θ) bi 1)(θ) log p(ybi|θ)dθ
1 , F _ = T(6)1â fo MO 08 He PUY6IO) ag (e FO) hog if D@) (57 dpigi-1) Ya) D0) â
d dµq(i d dµq(i
J - = re ââ [og plys,0)40 (58 dpigi-1)
1 F _ ~T(e)tâ fe 9 (6) log tf Dp)a9 4+ 2) 4 rogil D0). (59 dpgeây Qa âi
Note that:
1 ; 1 ; T(@)1â [ao @ro8th 9)(0)d0 = log tf (0) [rere ea (60 dptgci-1) dpigi-1) Utgti- = log i) (9) He (61 dpigiâv) = logt{' D9). (62
Hence, (59) becomes:
. J . F Por = TO A â | 4° (0) togptys,|0)aa-+ 2, (63 Afig(i-1) a
which is equivalent to:
i d F(a Pol) = PO | 4(0)to8 (yn |0)a0 + âA. (64) q
Let ¯q(θ) be the normalized distribution of q(θ), i.e. q(θ) = Zq ¯q(θ). The ï¬xed point update for local VI is:
η(i) bi = ¯q(θ) log p(ybi|θ)dθ, (65)
d nh? = 5 dug where pig = f T(0)19(0)d0 = jq/Zq. We have:
1 [ 7 Zq
# dug d = $4 dug dig = Fe Jy d
d dµ¯q ¯q(θ) log p(ybi|θ)dθ = q(θ) log p(ybi|θ)dθ (66)
d dµq q(θ) log p(ybi|θ)dθ (67)
d dµq = q(θ) log p(ybi|θ)dθ. (68)
From this equation and the fact that F (α) the Power-EP update in (64) when α â 0. α â 0 when α â 0, the ï¬xed point update for local VI satisï¬es
46
# B Gradients of the free-energy with respect to hyperparameters
In this section, we derive the gradients of the global free energy wrt the hyperparameters when the local VI procedure has converged to the optimal approximate posterior (either through analytic, oï¬-the- shelf or ï¬xed-point optimization). We provide two derivations: First, the standard one which is speciï¬c to approximate posteriors that are in the exponential family. Second, a derivation that applies for general q(θ) which also provides more insight.
# B.1 The standard derivation
As shown in the previous sections, the global free energy is as follows,
P(P) Tin PYml9) q(9; Ng) = | e6a(0:ng)08 »(8) â tox (0% m)) +> f A04(85n) 18 PCI) SS Fi F(q(9)) = [200005 ng) 108 F2
Note that,
q(9; 1g) = exp(niT (0) â A(ng)), P(A; no) = exp(nT(8) â A(no)), Ng = N0 + > Tm-
Hence,
1 7 dA(ng) Fr = A(t) ~ Alto) - » muEq(9)[2(8)] = A(no) â Ana) » hy
.
Differentiating 7, and Fy wrt a hyperparameter e⬠of the model, noting that the natural gradients of the local factors and the global variational approximation both depend on e, gives,
dF; (ae ) T dig (Se) T dno 3 (ae ) Tdim 4 aA(nq) q n de dng de dno de dnq de ⢠dngdng de dF» OF 2m, OF am " dng de » Oc + Ong de
# Note that 7g = 70 + dom
m ηm, leading to
dim dng , dno » de de deâ (69) m
# m
Here,
âF2m âηq = â âηq Eq(log p(ybi|θ)).
and that at convergence 31,
âF2m âηq = dµq dηq ηq = d2A(ηq) dηqdηq ηm. (71)
47
(70)
Therefore,
1. 1A 1A() Td C c Fue (: i) â ve) âtn + OF an (72) de and dato de ⢠â¬
# Td âtn de OF: 2m
dn OF: 7 Go 2m (ma 00) TP (73)
where µq = dA(ηq) dηq = Eq(θ)[T (θ)] are the mean parameters.
# B.2 A more general derivation
Consider the general case where the approximate posterior comprises a product of terms that approximate the likelihoods and one that approximates the prior,
N (8:4) = TT tn(6;). (74) n=0
Here Ï are the variational parameters (these may correspond to natural parameters. Again the scale of the approximate terms tn(θ; Ï) will be set such that q(θ; Ï) is normalized. Note this is a more general form of approximate posterior that allows the prior to be approximated if it lies outside of the variational family Q. If the prior lies within the variational family, the local updates will automatically set it equal to the prior recovering the treatment in the rest of the paper and meaning that the results presented here will still hold.
The global free-energy depends on the model hyperparameters through the joint distribution Ply, Ale),
Fle.q(0s0)) = [ a0 q(6;0) tog OF) (75)
Now consider the optimal variational approximation for a ï¬xed setting of the hyperparameters,
ply, Ole wre) = argmax [a9 q(0; w) log ply, @le)_ (76) b q(9; ¥)
The collapsed variational bound can therefore be denoted, F(e, q(9; ~°P'(e))) and it is this that we will optimize to find the hyperparameters. Before we do so, note that we have been careful to represent the two distinct ways that the free-energy depends on the hyperparameters, i) through the log-jointâs dependence, ii) through the optimal approximate posteriorâs implicit dependence via 7)°*(e)). In fact we can decouple these two contributions and consider evaluating the free-energy when the hyperparameters differ, F(â¬, q(0; w°P*(eâ))), the collapsed bound being recovered when ¢â = e.
We are now in a position to compute derivatives of the collapsed free-energy using the insight above to split this into two terms,
Fle, (0:0"(6))) =F lav) + Fle aGev"e)} (77) / d dae de =e
48
We now consider these two terms: First, the dependence through the log-joint distribution,
_a = Ge | 18 Ode ')) log p(y, Ale) 1. de 576909) ⬠=e M -> & [aoa (6; U(e')) log pl¥m|9, 6) 1 + / 0 4(0:¥(¢))-<log ple) e/=⬠=e d d = > (9) Fe loz n(¥n[86) + Eq) [bento] .
Second, the dependence through the optimal approximate posteriorâs implicit dependence on â¬
opt. _ dy (') d 4 Fe (6:1) dé dw F(6.q(0;0(â¬)) ae =0. (79) =e =" (c!) =e
Here we have substituted in the fact that were are at the collapsed bound and so the derivative wrt Ï is zero.
So the term that arises from the dependence of the approximate posterior on the hyperparameters (terms 2) vanishes meaning the only contribution comes from the ï¬rst term. This is precisely the same term that would remain if we were to perform coordinate ascent (since then when updating the hyperparameters the approximate posterior would have been ï¬xed).
M - d d d ger (85 PE) = > E,euer(o) { nlynle.9) + E,(o,y0rt(e)) Ee logp( Gl) : (80) de de m=1
When the prior distribution is in the exponential family, the second term above becomes
fd L de dno deâ (81) a(8;08°P*(â¬)) og p(6le |= (Ug â Ho)"
This recovers the expression in the previous section, although we have not assumed the approximate posterior is in the exponential family (here µq and µ0 are the average of the priorâs suï¬cient statistics under the approximate posterior and the prior respectively).
Figure 9 provides some intuition for these results. Note that in the case where the approximating family includes the true posterior distribution, the collapsed bound is equal to the log-likelihood of the hyperparameters. So, the result shows that the gradient of the log-likelihood wrt the hyperparameters is equal to the gradient of the free-energy wrt the hyperparameters, treating q as ï¬xed. Often this is computed in the M-step of variational EM, but it is used in coordinate ascent, which can be slow to converge. Instead, this gradient can be passed to an optimizer to perform direct gradient-based optimization of the log-likelihood.
# C Streaming Gaussian process regression and classiï¬cation
# C.1 Online variational free-energy approach using shared pseudo-points with ap- proximate maximum likelihood learning for the hyperparameters
We consider a variational inference scheme with an approximation posterior based on pseudo-points and that all streaming batches or groups of data points touch the same set of pseudo-points, that is,
R P(fly) x ef) [] porlf) © elf Tem ) x p(fzulu)a(u) = a(f), (82) r=1 r=1
49
(78)
F(e,q(G: Â¥)) (8; w°P*(â¬)))
Figure 9: Contours of the free-energy F(e,¢q(9;~)) are shown in green as a function of the hyper- parameters ¢⬠and the variational parameters of the approximate posterior ¢. The collapsed bound F(e,¢q(0; °P*(e))) is shown in blue. The gradients of the free-energy with respect to the variational parameters are zero along the collapsed bound a (e, q(O; W))lyaport = 0, by definition. This means that the gradients of the collapsed free-energy as a function of the hyperparameters are equal to those of the free-energy itself, 4F(â¬,q(0;Â¥)) = £F(e, q(0; YP *(e))).
where R is the number of batches considered and tr(u) is the approximate contribution of the r-th batch to the posterior. This is the standard set up considered for sparse GPs in the literature, see e.g. Hensman et al. [2013], Bui et al. [2017b]. We next detail the speciï¬cs for the streaming settings [Bui et al., 2017a], when we allow the pseudo-points to move and adjust the hyperparameters as new data arrive.
Let a = f (zold) and b = f (znew) be the pseudo-outputs or inducing points before and after seeing new data, where zold and znew are the pseudo-inputs accordingly. Note that extra pseudo-points can be added or conversely, old pseudo-points can be removed, i.e. the cardinalities of a and b do not need to be the same. The previous posterior, qold(f ) = p(f =a|a, θold)q(a), can be used to ï¬nd the approximate likelihood given by old observations as follows,
p(yold|f ) â qold(f )p(yold|θold) p(f |θold) as qold(f ) â p(f |θold)p(yold|f ) p(yold|θold) . (83)
Note that we have made the dependence of the hyperparameters explicit, as these will be optimized, together with the variational parameters, using the variational free-energy. Substituting the approximate likelihood above into the posterior that we want to target gives us:
p(f |yold, ynew) = p(f |θnew)p(yold|f )p(ynew|f ) p(ynew, yold|θnew) â p(f |θnew)qold(f )p(yold|θold)p(ynew|f ) p(f |θold)p(ynew, yold|θnew) . (84)
The new posterior approximation takes the same form as with the previous posterior, but with =b|b, θnew)q(b). This approximate
50
posterior can be obtained by minimizing the KL divergence,
wg . P(f4b|b, Pnew) Gnew (b KLltuew AIH Fl¥etdnew)) =f Af toew f) 108 eg eo new!) TIP abacaPUL Pncw)P(Ynew lf ptiery 22 (new) P p(alo1a) aeu(b) = log Z1 (Boia) + [Afar ow (b| Anew) )dola(a) )p(Ynew|f) : (86)
The last equation above is obtained by noting that p(f |θnew)/p(f =b|b, θnew) = p(b|θnew) and
qoia(f) _ PUR qoia(@) qoia(@) D(F|\Oo1a) POF zata-Bora)p(alOo1a) â p(al9ora)
.
â
Since the KL divergence is non-negative, the second term in (86) is the negative lower bound of the approximate online log marginal likelihood5, or the variational free energy, F(qnew(f )). We can decompose the bound as follows,
; â flat Pp(alo1a)dnew(b) Flaw Pam) = f Afta(0 [8 se yaa = KL( q(b )||P(b|Pnew)) _ [Aftron(A log p(Ynew|f) - J eadr(a) log Goiala) p(alGo1a)
The ï¬rst two terms form the variational free-energy as if the current batch is the whole training data, and the last term constrains the posterior to take into account the old likelihood (through the approximate posterior and the prior).
# C.2 Online variational free-energy approach using private pseudo-points with ap- proximate maximum likelihood learning for the hyperparameters
Instead of using a common set of pseudo-points for all data points or streaming batches, we can assign separate pseudo-points to each batch of data points as follows,
R (fly) x BU TL (vel f) © p(f) [] tur) « p(feulu)q(a) = q(f), (87) r=1
where ur are the pseudo-points private to the r-th batch. As new data arrives, new pseudo-points will be added to summarize the new data, and the old pseudo-points, corresponding to the previously seen batches, will remain unchanged. This means we only need to add and adjust new pseudo-points and the new likelihood approximation for the new data points, as opposed to all pseudo-points and all corresponding likelihood approximations as done in the previous section.
Similar to the online learning the previous section, we will try to approximate the running posterior in eq. (84),
p(f |yold, ynew) â p(f |θnew)qold(f )p(ynew|f ) p(f |θold) p(yold|θold) p(ynew, yold|θnew) , (88)
where
p(f |θold) = p(f qold(f ) = p(f =a|a, θold)p(a|θold), =a|a, θold)q(a),
5Note that this is only an approximation, as the hyperparameters are adjusted as new data arrive.
51
and a represents all pseudo-points used for previous batches. Let b be the new pseudo-points for the new data and tb(b) be the contribution of the new data points ynew towards the posterior. The new approximate posterior is assumed to take the following form,
qnew(f ) â p(f = p(f = p(f =a|a, θnew)q(a)tb(b) =a,b|a, b, θnew)q(a)p(b|a, θnew)tb(b), =a,b|a, b, θnew)q(a)q(b|a), (89)
where we have chosen q(b|a) â p(b|a, θnew)tb(b) and made the dependence on the hyperparameters θnew implicit. Note that q(a) is the variational distribution over the previous pseudo-points, and such, we only need to parameterize and learn the conditional distribution q(b|a).
Similar to the previous section, writing down the KL divergence from the running posterior in eq. (88) to the approximate posterior in eq. (89), and ignoring constant terms result in the online variational free-energy as follows,
F(qnew(f ), θnew) = df qnew(f ) log p(f |θnew)qold(f )p(ynew|f ) p(f |θold)qnew(f ) . (90)
Note that,
â PUfFatarBoa)g(a) PCF FataAoa)p(alOoia) â Df 4a a, Vola _b| Anew) PCfpal@@aua(@.b)
qold(f ) p(f |θold) = = q(a) p(a|θold) , (91)
p(f |θnew) qnew(f ) = = p(a, b|θnew) q(a, b) . (92)
This leads to,
F (dnew(f), Onew) = âKL{[q(a, b)| |p(a, b|Pnew)] + J AFave0(A) log p(Ynew|f) â Hlg(a))) + [ daa(a) tog pCa). (93)
Note again that we are only optimizing the variational parameters of q(b|a) and the hyperparameters, and keeping q(a) ï¬xed.
# C.3 Online variational free-energy approach for both hyperparameters and the latent function with shared pseudo-points
The variational approaches above, while maintaining an online distributional approximation for the latent function, only retain a point estimate of the hyperparameters. Imagine having observed the ï¬rst batch of data points in a regression task and trained the model on this batch, and that the second batch contains only one data point. In this case, maximum likelihood learning of the hyperparameters will tend to give very large observation noise, i.e. the noise is used to solely explain the new data and the latent function is largely ignored. Using the new model with the newly obtained hyperparameters will thus result in poor predictions on previously seen data points.
We attempt to address this issue by maintaining a distributional approximation for the hyperparame- ters, as well as one for the latent function, and adjusting these approximations using variational inference as new data arrive. In particular, extending appendix C.1 by introducing a variational approximation over the hyperparameters gives,
old approx. posterior: new approx. posterior: qold(f, θ) = p(f qnew(f, θ) = p(f =a|a, θ)q(a)qold(θ), =b|b, θ)q(b)qnew(θ).
52
The likelihood of previously seen data points can be approximated via the approximate posterior as follows,
p(yold|f, θ) â qold(f, θ)p(yold) p(f |θ)p(θ) .
Similar to the previous section, the online variational free-energy can be obtained by applying Jensenâs inequality to the online log marginal likelihood, or by writing down the KL divergence as follows,
KL[qnew(f, θ)||p(f, θ|yall)] = df dθqnew(f, θ) log p(f =b|b, θ)qnew(b)qnew(θ)p(ynew, yold) p(f |θ)p(θ)p(ynew|f, θ)p(yold|f, θ) = log p(ynew|yold) + F(qnew(f, θ))
where the online variational free-energy is,
P(fzo|b, 9)q(b)dnew (9) (feala, )q(a)doia(9)P(¥newl Sf, 9) (a, 9)q(b) dnew (9) (b, #)q(a) qoia(9) P(Ynew| f, A) = KLfasew(®)|laoal®)] +f ata(0) (KL[a(b)|pb)0)) +/ A64(0)anew(ald) tog MA) = [Fab aneu(F-0) 08 P(YoenlF8) F(dnew(f,9)) = / d fabdnew(f0) 05 = / df d6qnew(f, 8) log Pp
Most terms in the variational free-energy above requires computing an expectation wrt the variational approximation q(θ), which is not available in closed-form even when q(θ) takes a simple form such as a diagonal Gaussian. However, these expectations can be approximated by simple Monte Carlo with the reparameterization trick [Kingma and Welling, 2014, Rezende et al., 2014]. As in previous section, all other expectations can be handled tractably, either in closed-form or by using Gaussian quadrature.
# C.4 Online variational free-energy approach for both hyperparameters and the latent function with private pseudo-points
As in appendix C.2, new pseudo-points can be allocated to new data as they arrive, and the current pseudo-points and their marginal variational approximation will remaine ï¬xed. The corresponding variational approximation for both the latent function and hyperparameters are:
old approx. posterior: new approx. posterior: qold(f, θ) = p(f qnew(f, θ) = p(f =a|a, θ)q(a)qold(θ), =a,b|a, b, θ)q(a)q(b|a)qnew(θ).
The new approximate posterior above can be derived by approximating the likelihood factor of the new data in the running posterior as follows,
Ëp(f, θ|y) â qold(f, θ)p(ynew|f, θ) = p(f = p(f â p(f =a|a, θ)q(a)qold(θ)p(ynew|f, θ) =a,b|a, θ)q(a)qold(θ)p(b|a, b, θ)p(ynew|f, θ) =a,b|a, b, θ)q(a)qold(θ)t1(b|a)t2(θ)t3(b)t4(θ),
where {ti}4 i=1 are the approximate factors representing the contribution of the conditional prior and the likelihood to the running posterior. In other words, q(b|a) â t1(b|a)t3(b) and qnew(θ) â
53
qold(θ)t2(θ)t4(θ). Substituting the above variational approximation to the online variational free-energy gives us,
P( f4a,pla, b, )q(a)q(bla)dnew (9) P(fzala, 8)q(a)qoia(9)P(Ynew lf; 9) = KL[asew(@)|laoia(®)] + f a0q(0) (KE(a(a, b)lp(a. BIO) F(dnew(,9)) = / d fdOquew(f,0) log - / 46q(9) (KL{q(a)||p(alé)]) â / dfdOqnew(f.) log P(Â¥newlf.8)-
Similar to the previous section, all terms the free-energy above can be tractably handled in closed-form or by using simple Monte Carlo with the reparameterization trick [Kingma and Welling, 2014, Rezende et al., 2014].
# D Extra results for streaming Gaussian process regression and classi- ï¬cation experiments
# D.1 Binary classiï¬cation on a toy 2D data set
In this section, we include several extra results on the toy 2D experiment presented in the main text. We use a Gaussian process prior with a zero mean function and an ARD exponentiated quadratic covariance function and thus there are three kernel hyperparameters to be tuned including the kernel variance and two lengthscale parameters. Several inference methods were considered: (i) MCMC for both the latent function and the hyperparameters, without any sparsiï¬cation, (ii) variational inference for the latent function and approximate maximum likelihood learning for the hyperparameters, and (iii) variational inference for both the latent function and the hyperparameters. We ï¬rst consider the batch, static setting, i.e. inference using the whole data set, and then the streaming setting with three equal batches for the variational methods. Figure 10 shows the predictions and the hyperparameter estimates for all methods once all training points are observed. The predictions seem qualitatively similar, though, for the approximate methods, only point estimates or overconï¬dent distributional estimates of the hyperparameters are obtained. The predictions made by the variational methods after sequentially observing the data batches, together with the hyperparameter estimates, are shown in ï¬g. 11. We also include a failure case when uncertainty over the hyperparameters are not retained and propagated, in ï¬g. 12. In this case, only ten data points were included in the second batch. One of the lengthscale hyperparameters was severely under-estimated when approximate maximum likelihood learning was used.
# E Full results of the federated learning experiments
In this section, we include the full results of the federated learning experiment to train Bayesian neural networks on federated, decentralized data. As a reminder, Bayesian neural networks are an important model in modern machine learning toolkit, fusing the capacity of neural networks with the ï¬exibility of Bayesian inference. The goal of Bayesian learning for neural networks is to obtain the posterior of the network parameters given the training points and to use this for prediction as follows,
p(9) Te, P(Yn 9, Ln) p(9) Ts Ti P(Yn,k|9; Ln,k) P(ylx) P(y|x) , posterior: p(4|x, y) prediction: p(y*|y, x, 2") = [ vlelx.y)elu"l. x*)dd,
54
(94)
batch f + hypers: meme H a f + hypers: meme 7 8 i 11 â fi vfe, hypers: vfe i] 1 1 â f: vfe, hypers: ml ! | ssi streaming 6 1 â > I it batch 2 ! ' $4 I ' I 1 I i) i i) 2 1 0 as -0.6 -0.4 -0.2 0.0 0.2 log lengthscale density -10 -0.8 -06 -0.4 log lengthscale 0.5 1.0 15 log sigma 04 06 08 0 -0.2 0.0 0.2 1 2.0 streaming f: vfe, hypers: vfe 0:
Figure 10: Results of the streaming GP experiment on a toy classiï¬cation data set: the performance of several batch and streaming methods after seeing all training points. In the batch case, we consider three inference methods: MCMC for both the latent variable and the hyperparameters, VI for both the latent function and the hyperparameters, and VI for the latent function and approximate maximum likelihood learning for the hyperparameters. The two latter methods are also tested in the streaming settings. We show the predictions made by the methods after training in the batch case, and after seeing all three batches in the streaming case. The (distributional) hyperparameter estimates are also included. Best viewed in colour.
where {xn, yn}N n=1 are the training points, θ is the network weights and biases. However, getting the exact posterior is analytically intractable and as such approximation methods are needed. In this section, we discuss several approximation strategies for training a Bayesian neural network on the standard MNIST ten-way classiï¬cation data set. In particular, we focus on a case where data are decentralized on diï¬erent machines, that is we further assume that N training points are partitioned into K = 10 disjoint memory shards. Furthermore, two levels of data homogeneity across memory shards are considered: homogeneous [or iid, that is each shard has training points of all classes] and
55
f: vfe, hypers: vfe f: vfe, hypers: ml 0.500 ââ -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 log lengthscale 0 r â batcho 12 i. ==» batch1 : =| â-+ batch 2 10 I . â f: vfe, hypers: vfe . . â f: vfe, hi I 8 j | vfe, hypers: m 2 3 I -08 -06 -04 -02 0.0 0.2 log lengthscale 1 vl âi 0 rd 1.0 1.2 1.4 1.6 18 2.0 log sigma
Figure 11: Results of the streaming GP experiment on a toy classiï¬cation data set: the performance of the streaming methods after seeing each data batch. Two methods were considered: VI for both the latent function and the hyperparameters, and VI for the latent function and approximate maximum likelihood learning for the hyperparameters. We show the predictions made by the methods after seeing each data batch and the corresponding (distributional) hyperparameter estimates. Best viewed in colour.
inhomogeneous [or non-iid, i.e. each shard has training points of only one class].
We place a diagonal standard Normal prior over the parameters, p(θ) = N (θ; 0, I), and initialize the mean of the variational approximations as suggested by Glorot and Bengio [2010]. For distributed training methods, the data set is partitioned into 10 subsets or shards, and 10 compute nodes (workers) with each able to access one memory shard. The implementation of diï¬erent inference strategies was done in Tensorï¬ow [Abadi et al., 2016]) and the workload between workers is managed using Ray [Moritz et al., 2017].
56
f: vfe, hypers: vfe f: vfe, hypers: ml 1 â batcho yy --- batch 3 â-- batch 2 | | â f: vfe, hypers: vfe â f: vfe, hypers: ml -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 log lengthscale 0 1 8 -10 -08 -06 -04 -0,2 0.0 0.2 log lengthscale 1 1.0 1.2 1.4 1.6 1.8 2.0 log sigma
Figure 12: Results of the streaming GP experiment on a toy classiï¬cation data set: a failure case of maximum likelihood learning for the hyperparameters. Two methods were considered: VI for both the latent function and the hyperparameters, and VI for the latent function and approximate maximum likelihood learning for the hyperparameters. We show the predictions made by the methods after seeing each data batch and the corresponding (distributional) hyperparameter estimates. Best viewed in colour.
# E.1 Global VI
We ï¬rst considered global variational inference, as described in section 5, for getting an approximate posterior over the parameters. The variational lower bound (eq. (18)) is optimized using Adam [Kingma and Ba, 2014]. We considered one compute node (with either one core or ten cores) that can access the entire data set, and simulates the data distribution by sequentially showing mini-batches that can potentially have all ten classes (iid) or that have data of only one class (non-iid). The full performance on the test set during training for diï¬erent learning rate hyperparameters of the Adam optimizer are shown in ï¬gs. 13 and 14. Notice that in the iid setting, larger learning rates tend to yield faster convergence but can give a slightly poorer predictive performance on the test set at the end of training (see ï¬g. 14 with a learning rate of 0.005). The non-iid is arguably more diï¬cult and the performance
57
can oscillate if the learning rate is too large.
# E.2 Bayesian committee machine
We next considered an embarrassingly parallel scheme based on the Bayesian committee machine Tresp [2000]. In particular, two prior sharing strategies as described in section 7.1, BCM - same prior and BCM - split prior, were considered. Each worker has access to one memory shard and performs global variational inference independently. While the workers were running, we occasionally polled the approximate posteriors from all workers, merged them using BCM and computed the test performance using the merged posterior. We report the test performance during training for diï¬erent prior sharing schemes in both iid and non-iid settings in ï¬gs. 15 and 16, respectively. Note that, we also varied the learning rate of Adam for each worker. Both prior sharing strategies in combination with BCM performs surprisingly well in the iid setting. However, they fell short in the non-iid setting as the Gaussian sub-posteriors can potentially have diï¬erent supports and, if this the case, multiplying them will not give a good global approximation.
# E.3 Sequential, one-pass PVI
We next considered a sequential training scheme using PVI. Each worker, in turn, performs global variational inference with the prior being the posterior of the last trained worker. Learning using this schedule is identical to the Variational Continual Learning approach of Nguyen et al. [2018]. We varied the learning rates of Adam (used to optimize the variational lower bound for each worker) and the number of epochs of data used for each worker. The test performance was recorded after each worker ï¬nished its training, and the full results for the iid and non-iid settings are shown in ï¬gs. 17 and 18, respectively. This schedule performs well in the iid setting, but struggles when the data across workers are non-iid.
# E.4 PVI with synchronous updates
We considered PVI with synchronous updates, i.e. the central parameter waits for all workers to ï¬nish before gathering the approximate likelihoods together and then sending out the new approximate posterior. As typically done in (parallel) Power-EP, we also considered damped updates, i.e. the new approximate likelihood is a linear combination of the old approximate likelihood and the factor provided by a worker. We explored diï¬erent damping factors (higher means slower updates), and diï¬erent learning rates for the optimization at the local worker. The full results on the test set during training are shown in ï¬gs. 19 and 20 for the iid and non-iid settings, respectively.
# E.5 PVI with asynchronous updates
Finally, we allowed the parameter server to update the approximate posterior as soon as a worker has ï¬nished its computation. This schedule can potentially cause stale updates. However, this is more suitable to cases when the communication between the workers and the parameter server is unreliable, or when the distribution of data across workers is skewed, i.e. one worker might have a lot more data points than others and consequently might require a longer time to ï¬nish a computation round. As in the synchronous case, we varied the damping factor and learning rate. The full results are shown in ï¬gs. 22 and 23 for the iid and non-iid settings, respectively.
58
5 9
100 100 5 e x 5 ° 104 EB 104 â lrate=0.0001 ââ Irate=0.0010 Irate=0.0001 Irate=0.0010 24 Irate=0.0005 = ââ lrate=0.0050 27) ââ= Irate=0.0005 Irate=0.0050 : : : 1 r Sa 10° + 101 4 3 = 3 10° + - Irate=0.0001 Irate=0.0010 â trate=0.0001 Irate=0.0010 10°" = ââ= Irate=0.0005 = ââ Irate=0.0050 Irate=0.0005 â= lrate=0.0050 r 1 r r T : 5 4 T r r r r 10° 10 10? 10 10 109 10! 102 108 104 train time /s train time /s
° x P i â
# B x) 3
(a) one compute node with one core and iid mini-batches
(b) one compute node with one core and non-iid mini-batches
Figure 13: The performance of global VI on the test set in the iid [left] and non-iid [right] settings, when the compute node has only one core. Diï¬erent traces correspond to diï¬erent learning rate hyperparameters of Adam.
6 0
100
(b) one compute node with ten cores and non-iid mini-batches
# x
& a &
# a
(a) one compute node with ten cores and iid mini-batches
Figure 14: The performance of global VI on the test set in the iid and non-iid settings, when the compute node has ten cores. Diï¬erent traces correspond to diï¬erent learning rate hyperparameters of Adam.
6 1
100 10° 4 Irate=0.0010 â lrate=0.0050 2 = ââ lrate=0.0100 â lrate=0.0010 Irate=0.0050 â Irate=0.0100 Pe â 10° 10 10? 10° 104 error /% 105 i L â lrate=0.0010 ââ lrate=0.0050 lrate=0.0100 10° 4 nll /nat T T T 10° 10t 107 10° 104 train time /s
# error /%
nil /nat
# train time /s
(a) BCM with the same N (0, 1) prior across 10 workers and iid data
(b) BCM with the prior N (0, 1) being split equally across 10 workers and iid data
Figure 15: Performance of BCM with two prior sharing strategies on the iid setting, for various learning rates. Best viewed in colour.
6 2
100
5
we 2 âââ rn 4 x 7 Q x 5 10 5 By B10 | .0010 + ââ Irate=0.0010 .0050 al ed Irate=0.0050 27 ââ Irate=0.0100 â Irate=0.0100 : : : : : T T T 1 T 4x 10° + 3x 10° 5 a 3 = £ 3 a 2x 10° 4 â lrate=0.0010 Irate=0.0010 ââ Irate=0.0050 Irate=0.0050 â Irate=0.0100 Irate=0.0100 T 1 T 1 T T T T T T 10° 10! 107 10° 104 10° 10! 10? 10° 104 train time /s train time /s
(a) BCM with the same N (0, 1) prior across workers and non-iid data
(b) BCM with the prior N (0, 1) being split equally across workers and non-iid data
Figure 16: Performance of BCM with two prior sharing strategies on the non-iid setting, for various learning rates. Best viewed in colour.
6 3
x 5 2 °
# x > g 5
e g > g 5
(a) Error (b) NLL
Figure 17: Performance of sequential PVI with only one pass through all memory shards when the data are iid. The number of epochs for each worker and the learning rate hyperparameter of Adam were varied. Best viewed in colour.
6 4
(b) NLL
# xs g 5 °
# xs g 8 Ey
xs g 5 3
# (a) Error
Figure 18: Performance of sequential PVI with only one pass through all memory shards when the data are non-iid. The number of epochs for each worker and the learning rate hyperparameter of Adam were varied. Best viewed in colour.
6 5
~ 5 g 5
# x : g 5
# xs >
# g 5
(a) Error (b) NLL
Figure 19: Performance of PVI with synchronous updates when the data are iid. In this experiment, each worker communicates with the central server after one epoch. The learning rate hyperparameter of Adam and the damping factor were varied. Best viewed in colour.
6 6
(b) NLL
# x ~ g 5
# x > g E
# 5 g g 3
# (a) Error
Figure 20: Performance of PVI with synchronous updates when the data are non-iid. In this experiment, each worker communicates with the central server after one epoch. The learning rate hyperparameter of Adam and the damping factor were varied. Best viewed in colour.
10? 1ot 10° error /% 1o-t 107? 10° 1o-t nll /nat 10-? i. T | T | T | T | 10° 10! 10? 10° 104 train time /s
Figure 21: For certain hyperparameter settings, PVI with synchronous updates worryingly exhibits over-ï¬tting.
67
6 8
(a) Error (b) NLL
# error /%
# error /%
# error /%
Figure 22: Performance of PVI with asynchronous, lock-free updates when the data are iid. In this experiment, each worker communicates with the central server after one epoch. The learning rate hyperparameter of Adam and the damping factor were varied. Best viewed in colour.
6 9
(b) NLL
xs QQ £ 5
⬠5 5
# 5 x g o
# (a) Error
Figure 23: Performance of PVI with asynchronous, lock-free updates when the data are non-iid. In this experiment, each worker communicates with the central server after one epoch. The learning rate hyperparameter of Adam and the damping factor were varied. Best viewed in colour.
# E.6 Stochastic Natural Gradient Variational Inference for Bayesian Neural Net- works
In this experiment, we stress-test various optimization methods for global variational inference for Bayesian neural networks. In particular, we consider two methods: (i) stochastic natural-gradient global VI with a ï¬xed learning rate (SNGD, see eq. (13)), and (ii) stochastic ï¬at gradient global VI with an adaptive learning rate provided by Adam [Kingma and Ba, 2014]. Two Bayesian neural networks with one hidden layer of 200 or 500 Relu hidden units, and the standard MNIST ten-class classiï¬cation problem are employed for this experiment. The network is trained using mini-batches of 200 data points and 800 or 1000 epochs. Both optimization methods considered have similar running time. The full results are included in ï¬gs. 24, 25, 27 and 28 and key results are shown in ï¬gs. 26 and 29. It can be noticed from ï¬gs. 26 and 29 that the best versions of SNDG and Adam perform similarly in terms of both classiï¬cation errors and convergence speed/data eï¬ciency. However, both methods do require tuning of the learning rate hyperparameter. As already observed in the global VI experiment in section 7.1, signs of fast convergence early during training when using Adam do not necessarily result in a good predictive performance at the end.
As mentioned in the main text, while natural gradients has been shown to be eï¬ective in the batch, global VI settings [Honkela et al., 2010], the result presented here could be seen as a negative result for natural-gradient based methods â a stochastic natural-gradient/ï¬xed-point method with ï¬xed learning rate does not outperform an adaptive stochastic ï¬at-gradient method. However, it might not be surprising as Adam adjusts its step-sizes based on approximate second-order information of the objective. This also suggests a future research venue to develop eï¬ective adaptive optimization schemes for stochastic natural-gradient variational inference.
70
10? + train, adam, lr=0.001 â train, sngd, Ir=0.0001 test, adam, Ir=0.001 ââ test, sngd, Ir=0.0001 train, adam, lr=0.005 ââ train, sngd, lr=0.00015 test, adam, Ir=0.005 test, sngd, lr=0.00015 train, adam, lr=0.01 ââ train, sngd, lr=0.0002 test, adam, Ir=0.01 test, sngd, lr=0.0002 train, adam, lr=0.02 train, sngd, lr=0.00025 test, sngd, lr=0.00025 train, sngd, lr=0.0003 test, sngd, lr=0.0003 test, adam, Ir=0.02 train, adam, lr=0.03 test, adam, Ir=0.03 error /% 10! = T T T T 10° 10! 107 10° epoch
Figure 24: Classiï¬cation error rates on the train and test sets during training using Adam and Stochastic Natural Gradient (SNGD) methods on the MNIST classiï¬cation task with a Bayesian neural network with one hidden layer of 200 rectiï¬ed linear units. The ï¬nal performance of all settings are shown in ï¬g. 26. For both Adam and SNGD, the performance highly depends on the learning rate, but the best learning rates for both methods give similar train and test results and yield similar convergence. Note that while Adam adaptively changes the learning rate based on the gradient statistics, SNGD employs a ï¬xed step size. See text for more details. Best viewed in colour.
71
train, sngd, lr=0.0001 test, sngd, lIr=0.0001 train, sngd, lr=0.00015 train, adam, lr=0.001 test, adam, lr=0.001 train, adam, lr=0.005 test, adam, lr=0.005 test, sngd, lr=0.00015 10° train, adam, Ir=0.01 ââ train, sngd, lr=0.0002 | test, adam, lr=0.01 test, sngd, lr=0.0002 train, sngd, lr=0.00025 test, sngd, lr=0.00025 train, sngd, lr=0.0003 , r=0.0003 train, adam, lr=0.02 test, adam, lr=0.02 train, adam, lr=0.03 test, adam, lr=0.03 test, sngd nll /nat 10-1 10° 10! 10? 103
# epoch
Figure 25: Negative log-likelihoods on the train and test sets during training using Adam and Stochastic Natural Gradient (SNGD) methods on the MNIST classiï¬cation task with a Bayesian neural network with one hidden layer of 200 rectiï¬ed linear units. The ï¬nal performance of all settings are shown in ï¬g. 26. For both Adam and SNGD, the performance highly depends on the learning rate, but the best learning rates for both methods give similar train and test results and yield similar convergence. Note that while Adam adaptively changes the learning rate based on the gradient statistics, SNGD employs a ï¬xed step size. See text for more details. Best viewed in colour.
72
7 3
0.110 4 0.105 = nll /nat 0.100 + ââ - epoch 0.095 = /nat 0.090 = 0.085 = + error /% 0.080 = * T T T 1.6 18 2.0 2.2 error /% adam, Ir=0.001 adam, Ir=0.005 adam, Ir=0.01 adam, Ir=0.02 adam, Ir=0.03 tooo na ae T 10° 101 10? 10% epoch T T T T w nll /nat sngd, Ir=0.0001 sngd, Ir=0.00015 sngd, lr=0.0002 sngd, lr=0.00025 sngd, Ir=0.0003 KKK 0.120 4 nll /nat 0.1155 yo-14 rr errr 107 08 epoch 0.110 5 0.105 5 0,100 5 0.095 4 0,090 + error /%
# nll
Figure 26: Performance on the train set [left] and test set [right] after 1000 epochs using Adam and Stochastic Natural Gradient (SNGD) methods on the MNIST classiï¬cation task with a Bayesian neural network with one hidden layer of 200 rectiï¬ed linear units, and the typical performance traces as training progress [inset plots]. This ï¬gure summarizes the full results in ï¬gs. 24 and 25. The performance is measured using the classiï¬cation error [error] and the negative log-likelihood [nll], and for both measures, lower is better and, as such, closer to the bottom left is better. For both Adam and SNGD, the performance highly depends on the learning rate, but the best learning rates for both methods give similar train and test results and yield similar convergence. Note that while Adam adaptively changes the learning rate based on the gradient statistics, SNGD employs a ï¬xed step size. See text for more details. Best viewed in colour.
2 4 wy train, adam, lr=0.001 train, sngd, lr=0.0001 test, adam, Ir=0.001 ââ test, sngd, Ir=0.0001 train, adam, lr=0.005 ââ train, sngd, lr=0.00015 test, adam, Ir=0.005 test, sngd, lr=0.00015 train, adam, lr=0.01 ââ train, sngd, lr=0.0002 test, adam, Ir=0.01 test, sngd, lr=0.0002 train, adam, lr=0.02 ââ train, sngd, lr=0.00025 test, adam, Ir=0.02 ââ test, sngd, Ir=0.00025 train, adam, lr=0.03 ââ train, sngd, lr=0.0003 test, sngd, lr=0.0003 10! 4 test, adam, Ir=0.03 error /% 10° =, 10° 10+ 10? 10°
# epoch
Figure 27: Classiï¬cation error rates on the train and test sets during training using Adam and Stochastic Natural Gradient (SNGD) methods on the MNIST classiï¬cation task with a Bayesian neural network with one hidden layer of 500 rectiï¬ed linear units. The ï¬nal performance of all settings are shown in ï¬g. 29. For both Adam and SNGD, the performance highly depends on the learning rate, but the best learning rates for both methods give similar train and test results and yield similar convergence. Note that while Adam adaptively changes the learning rate based on the gradient statistics, SNGD employs a ï¬xed step size. See text for more details. Best viewed in colour.
74
train, adam, lr=0.001 â train, sngd, lr=0.0001 test, adam, lr=0.001 ââ test, sngd, lr=0.0001 train, adam, lr=0.005 train, sngd, lr=0.00015 test, adam, lr=0.005 test, sngd, lr=0.00015 10° 4 train, adam, Ir=0.01 ââ train, sngd, lr=0.0002 ] test, adam, lr=0.01 test, sngd, lr=0.0002 train, sngd, lr=0.00025 test, sngd, lr=0.00025 train, sngd, lr=0.0003 , r=0.0003 train, adam, lr=0.02 test, adam, lr=0.02 train, adam, lr=0.03 test, adam, lr=0.03 test, sngd nll /nat 10-1 4 4 + ay + a + oy 10° 10! 10? 103
# epoch
Figure 28: Negative log-likelihoods on the train and test sets during training using Adam and Stochastic Natural Gradient (SNGD) methods on the MNIST classiï¬cation task with a Bayesian neural network with one hidden layer of 500 rectiï¬ed linear units. The ï¬nal performance of all settings are shown in ï¬g. 29. For both Adam and SNGD, the performance highly depends on the learning rate, but the best learning rates for both methods give similar train and test results and yield similar convergence. Note that while Adam adaptively changes the learning rate based on the gradient statistics, SNGD employs a ï¬xed step size. See text for more details. Best viewed in colour.
75
7 6
0.095 0.090 0.085 0.080 0.075 0.065 nll /nat errr 10° 10? epoch x Oooo adam, Ir=0.001 adam, Ir=0.005 adam, Ir=0.01 adam, Ir=0.02 adam, Ir=0.03 1.8 2.0 error /% T 10? epoch T 1 10? 108 all /nat % â sngd, r=0.0001 o& â sngd, Ir=0.00015 %& â sngd, r=0.0002 %& â sngd, lr=0.00025 o â sngd, Ir=0.0003 0.1204 yoo J nll /nat 0.115 5 110 5 105 4 100 4 0.095 4 L x & L epoch error /% T 10° 101 epoch error /% r 10? 108
nll /nat
Figure 29: Performance on the train set [left] and test set [right] after 800 epochs using Adam and Stochastic Natural Gradient (SNGD) methods on the MNIST classiï¬cation task with a Bayesian neural network with one hidden layer of 500 rectiï¬ed linear units, and the typical performance traces as training progress [inset plots]. This ï¬gure summarizes the full results in ï¬gs. 27 and 28. The performance is measured using the classiï¬cation error [error] and the negative log-likelihood [nll], and for both measures, lower is better and, as such, closer to the bottom left is better. Full training and test performance results are included in the appendix. For both Adam and SNGD, the performance highly depends on the learning rate, but the best learning rates for both methods give similar train and test results and yield similar convergence. Note that while Adam adaptively changes the learning rate based on the gradient statistics, SNGD employs a ï¬xed step size. See text for more details. Best viewed in colour. | {
"id": "1712.05889"
} |
1811.10830 | From Recognition to Cognition: Visual Commonsense Reasoning | Visual understanding goes well beyond object recognition. With one glance at
an image, we can effortlessly imagine the world beyond the pixels: for
instance, we can infer people's actions, goals, and mental states. While this
task is easy for humans, it is tremendously difficult for today's vision
systems, requiring higher-order cognition and commonsense reasoning about the
world. We formalize this task as Visual Commonsense Reasoning. Given a
challenging question about an image, a machine must answer correctly and then
provide a rationale justifying its answer.
Next, we introduce a new dataset, VCR, consisting of 290k multiple choice QA
problems derived from 110k movie scenes. The key recipe for generating
non-trivial and high-quality problems at scale is Adversarial Matching, a new
approach to transform rich annotations into multiple choice questions with
minimal bias. Experimental results show that while humans find VCR easy (over
90% accuracy), state-of-the-art vision models struggle (~45%).
To move towards cognition-level understanding, we present a new reasoning
engine, Recognition to Cognition Networks (R2C), that models the necessary
layered inferences for grounding, contextualization, and reasoning. R2C helps
narrow the gap between humans and machines (~65%); still, the challenge is far
from solved, and we provide analysis that suggests avenues for future work. | http://arxiv.org/pdf/1811.10830 | Rowan Zellers, Yonatan Bisk, Ali Farhadi, Yejin Choi | cs.CV, cs.CL | CVPR 2019 oral. Project page at https://visualcommonsense.com | null | cs.CV | 20181127 | 20190326 | 9
1
0
2
# r
# a
# M
6
2
]
# V
# C
.
# s
# c
[
2
# v
0
3
8
0
1
.
1
1
8
1
:
# v
# i
# X
# r
# a
(vcr) VISUAL COMMONSENSE REASONING
# From Recognition to Cognition: Visual Commonsense Reasoning
Rowan Zellersâ Yonatan Biskâ Ali Farhadiâ ⥠Yejin Choiâ ⥠â Paul G. Allen School of Computer Science & Engineering, University of Washington â¥Allen Institute for Artiï¬cial Intelligence visualcommonsense.com
Why is [person4 ffl] pointing at [person1 @ a) He is telling Iperson3 40] that [person1 @ ordered the pancakes. b) He just told a joke. c) He is feeling accusatory towards [person1 @. d) He is giving [person1 @ directions. ! Cho, becaus: 4 a) [person1 ] has the pancakes in front of him. eho, b) [person4 gM] is taking everyone's order and asked for clarification. becauseâ a c) is looking at the pancakes and both she and [person2 @ are smiling slightly. d) [person3 (22) is delivering food to the table, and she might not know whose order is whose. How did [person2,fa}] get the money thatâs in front of her? a) [person2 b) [person2 c) She may work jobs for the mafia. d) She won money playing poker. ] is selling things on the street. ] earned this money playing music. a) She is playing guitar for money. b) [person2 ] is a professional musician in an orchestra. c) [person2 ] and [person1 iy Jare both holding instruments, and were probably busking for that money. d) [person1 1 is putting money in [person2 ls tip jar, while she plays music.
Figure 1: VCR: Given an image, a list of regions, and a question, a model must answer the question and provide a ratio- nale explaining why its answer is right. Our questions challenge computer vision systems to go beyond recognition-level understanding, towards a higher-order cognitive and commonsense understanding of the world depicted by the image.
# Abstract
that while humans ï¬nd VCR easy (over 90% accuracy), state-of-the-art vision models struggle (â¼45%).
Visual understanding goes well beyond object recogni- tion. With one glance at an image, we can eï¬ortlessly imag- ine the world beyond the pixels: for instance, we can infer peopleâs actions, goals, and mental states. While this task is easy for humans, it is tremendously diï¬cult for todayâs vision systems, requiring higher-order cognition and com- monsense reasoning about the world. We formalize this task as Visual Commonsense Reasoning. Given a challenging question about an image, a machine must answer correctly and then provide a rationale justifying its answer.
To move towards cognition-level understanding, we present a new reasoning engine, Recognition to Cogni- tion Networks (R2C), that models the necessary layered in- ferences for grounding, contextualization, and reasoning. R2C helps narrow the gap between humans and machines (â¼65%); still, the challenge is far from solved, and we pro- vide analysis that suggests avenues for future work.
# 1. Introduction
Next, we introduce a new dataset, VCR, consisting of 290k multiple choice QA problems derived from 110k movie scenes. The key recipe for generating non-trivial and high- quality problems at scale is Adversarial Matching, a new approach to transform rich annotations into multiple choice questions with minimal bias. Experimental results show
With one glance at an image, we can immediately infer what is happening in the scene beyond what is visually ob- vious. For example, in the top image of Figure 1, not only do we see several objects (people, plates, and cups), we can also reason about the entire situation: three people are din- ing together, they have already ordered their food before
1
] is serving and the photo has been taken, [person3 not eating with them, and what [person1 ] ordered are the pancakes and bacon (as opposed to the cheesecake), be- cause [person4 ] while looking at the server, [person3
Visual understanding requires seamless integration be- tween recognition and cognition: beyond recognition-level perception (e.g., detecting objects and their attributes), one must perform cognition-level reasoning (e.g., inferring the likely intents, goals, and social dynamics of people) [13]. State-of-the-art vision systems can reliably perform recognition-level image understanding, but struggle with complex inferences, like those in Figure 1. We argue that as the ï¬eld has made signiï¬cant progress on recognition-level building blocks, such as object detection, pose estimation, and segmentation, now is the right time to tackle cognition- level reasoning at scale.
As a critical step toward complete visual understanding, we present the task of Visual Commonsense Reasoning. Given an image, a machine must answer a question that re- quires a thorough understanding of the visual world evoked by the image. Moreover, the machine must provide a ratio- nale justifying why that answer is true, referring to the de- tails of the scene, as well as background knowledge about how the world works. These questions, answers, and ratio- nales are expressed using a mixture of rich natural language as well as explicit references to image regions. To support clean-cut evaluation, all our tasks are framed as multiple choice QA.
Our new dataset for this task, VCR, is the ï¬rst of its kind and is large-scale â 290k pairs of questions, answers, and rationales, over 110k unique movie scenes. A crucial challenge in constructing a dataset of this complexity at this scale is how to avoid annotation artifacts. A recurring challenge in most recent QA datasets has been that human- written answers contain unexpected but distinct biases that models can easily exploit. Often these biases are so promi- nent so that models can select the right answers without even looking at the questions [28, 61, 72].
Thus, we present Adversarial Matching, a novel QA assignment algorithm that allows for robust multiple-choice dataset creation at scale. The key idea is to recycle each cor- rect answer for a question exactly three times â as a neg- ative answer for three other questions. Each answer thus has the same probability (25%) of being correct: this re- solves the issue of answer-only biases, and disincentivizes machines from always selecting the most generic answer. We formulate the answer recycling problem as a constrained optimization based on the relevance and entailment scores between each candidate negative answer and the gold an- swer, as measured by state-of-the-art natural language in- ference models [10, 57, 15]. A neat feature of our recycling algorithm is a knob that can control the tradeoï¬ between
2
human and machine diï¬culty: we want the problems to be hard for machines while easy for humans.
Narrowing the gap between recognition- and cognition- level image understanding requires grounding the meaning of the natural language passage in the visual data, under- standing the answer in the context of the question, and rea- soning over the shared and grounded understanding of the question, the answer, the rationale and the image. In this paper we introduce a new model, Recognition to Cogni- tion Networks (R2C). Our model performs three inference steps. First, it grounds the meaning of a natural language passage with respect to the image regions (objects) that are directly referred to. It then contextualizes the meaning of an answer with respect to the question that was asked, as well as the global objects not mentioned. Finally, it reasons over this shared representation to arrive at an answer.
Experiments on VCR show that R2C greatly outper- forms state-of-the-art visual question-answering systems: obtaining 65% accuracy at question answering, 67% at an- swer justiï¬cation, and 44% at staged answering and justiï¬- cation. Still, the task and dataset is far from solved: humans score roughly 90% on each. We provide detailed insights and an ablation study to point to avenues for future research. In sum, our major contributions are fourfold: (1) we for- malize a new task, Visual Commonsense Reasoning, and (2) present a large-scale multiple-choice QA dataset, VCR, (3) that is automatically assigned using Adversarial Match- ing, a new algorithm for robust multiple-choice dataset cre- ation. (4) We also propose a new model, R2C, that aims to mimic the layered inferences from recognition to cogni- tion; this also establishes baseline performance on our new challenge. The dataset is available to download, along with code for our model, at visualcommonsense.com.
# 2. Task Overview
We present VCR, a new task that challenges vision sys- tems to holistically and cognitively understand the con- in Figure 1, we need tent of an image. For instance, ] is delivering to understand the activities ( [person3 ] is a customer food), the roles of people ( [person1 who previously ordered food), the mental states of people ( [person1 ] wants to eat), and the likely events before and after the scene ( [person3 ] will serve the pancakes next). Our task covers these categories and more: a distri- bution of the inferences required is in Figure 2.
Visual understanding requires not only answering ques- tions correctly, but doing so for the right reasons. We thus require a model to give a rationale that explains why its answer is true. Our questions, answers, and rationales are written in a mixture of rich natural language as well as de- this helps to provide tection tags, like â [person2 an unambiguous link between the textual description of an
=_N
Figure 2: Overview of the types of inference required by questions in VCR. Of note, 38% of the questions are ex- planatory âwhyâ or âhowâ questions, 24% involve cognition- level activities, and 13% require temporal reasoning (i.e., what might come next). These categories are not mutually exclusive; an answer might require several hops of diï¬erent types of inferences (see appendix Sec A).
object (âthe man on the left in the white shirtâ) and the cor- responding image region.
To make evaluation straightforward, we frame our ul- timate task â of staged answering and justiï¬cation â in a multiple-choice setting. Given a question along with four answer choices, a model must ï¬rst select the right answer. If its answer was correct, then it is provided four rationale choices (that could purportedly justify its correct answer), and it must select the correct rationale. We call this QâAR as for the model prediction to be correct requires both the chosen answer and then the chosen rationale to be correct. Our task can be decomposed into two multiple-choice sub-tasks, that correspond to answering (QâA) and justiï¬- cation (QAâR) respectively:
Deï¬nition VCR subtask. A single example of a VCR subtask consists of an image I, and:
e A sequence o of object detections. Each object detec- tion 0; consists of a bounding box b, a segmentation mask m!, and a class label £; ⬠L.
A query q, posed using a mix of natural language and pointing. Each word qi in the query is either a word in a vocabulary V, or is a tag referring to an object in o. ⢠A set of N responses, where each response r(i) is writ- ten in the same manner as the query: with natural lan- guage and pointing. Exactly one response is correct.
The model chooses a single (best) response.
In question-answering (QâA), the query is the question and the responses are answer choices. In answer justiï¬cation (QAâR), the query is the concatenated question and correct answer, while the responses are rationale choices.
1The task is agnostic to the representation of the mask, but it could be thought of as a list of polygons p, with each polygon consisting of a sequence of 2d vertices inside the box p j = {xt, yt}t.
3
In this paper, we evaluate models in terms of accuracy and use N=4 responses. Baseline accuracy on each subtask is then 25% (1/N). In the holistic setting (QâAR), baseline accuracy is 6.25% (1/N2) as there are two subtasks.
# 3. Data Collection
In this section, we describe how we collect the ques- tions, correct answers and correct rationales for VCR. Our key insight â towards collecting commonsense visual rea- soning problems at scale â is to carefully select interesting situations. We thus extract still images from movie clips. The images from these clips describe complex situations that humans can decipher without additional context: for in- ] will serve stance, in Figure 1, we know that [person3 ] pancakes, whereas a machine might not un- [person1 derstand this unless it sees the entire clip.
Interesting and Diverse Situations To ensure diver- sity, we make no limiting assumptions about the predeï¬ned set of actions. Rather than searching for predeï¬ned labels, which can introduce search engine bias [76, 16, 20], we collect images from movie scenes. The underlying scenes come from the Large Scale Movie Description Challenge [67] and YouTube movie clips.2 To avoid simple images, we train and apply an âinterestingness ï¬lterâ (e.g. a closeup of a syringe in Figure 3).3
We center our task around challenging questions requir- ing cognition-level reasoning. To make these cognition- level questions simple to ask, and to avoid the clunkiness of referring expressions, VCRâs language integrates object tags ( [person2 ]) and explicitly excludes referring ex- pressions (âthe woman on the right.â) These object tags are detected from Mask-RCNN [29, 24], and the images are ï¬l- tered so as to have at least three high-conï¬dence tags.
Crowdsourcing Quality Annotations Workers on Amazon Mechanical Turk were given an image with de- tections, along with additional context in the form of video captions.4 They then ask one to three questions about the image; for each question, they provide a reasonable answer and a rationale. To ensure top-tier work, we used a system of quality checks and paid our workers well.5
The result is an underlying dataset with high agreement and diversity of reasoning. Our dataset contains a myriad of interesting commonsense phenomena (Figure 2) and a great diversity in terms of unique examples (Supp Section A); al- most every answer and rationale is unique.
2Namely, Fandango MovieClips: youtube.com/user/movieclips. 3We annotated images for âinterestingnessâ and trained a classiï¬er us- ing CNN features and detection statistics, details in the appendix, Sec B.
4This additional clip-level context helps workers ask and answer about what will happen next.
5More details in the appendix, Sec B.
Shot+Object Detection LsMDC B MOVIECLIPS t-1 || Someone lifts up the adrenaline needle. t_|] He looks down at her. t+1 || She sits up with the needle in her chest. Interestingness Crowd workers ask Filter and answer w questions Question: [What is [persont] doing? â_ | [persontIis necting @ needle nto | is injecting a needle into Answer: | someone on [ements mecrrgenesdeire | floor (ket) ; [person1] has a needle in his hand and is Rationale: aggressively lowering it, in a stabbing motion
Figure 3: An overview of the construction of VCR. Using a state-of-the-art object detector [29, 24], we identify the objects in each image. The most interesting images are passed to crowd workers, along with scene-level context in the form of scene descriptions (MovieClips) and video captions (LSMDC, [67]). The crowd workers use a combination of natural language and detection tags to ask and answer challenging visual questions, also providing a rationale justifying their answer.
# 4. Adversarial Matching
We cast VCR as a four-way multiple choice task, to avoid the evaluation diï¬culties of language generation or captioning tasks where current metrics often prefer incor- rect machine-written text over correct human-written text [49]. However, it is not obvious how to obtain high- quality incorrect choices, or counterfactuals, at scale. While past work has asked humans to write several counterfactual choices for each correct answer [75, 46], this process is ex- pensive. Moreover, it has the potential of introducing anno- tation artifacts: subtle patterns that are by themselves highly predictive of the âcorrectâ or âincorrectâ label [72, 28, 61].
In this work, we propose Adversarial Matching: a new method that allows for any âlanguage generationâ dataset to be turned into a multiple choice test, while requiring mini- mal human involvement. An overview is shown in Figure 4. Our key insight is that the problem of obtaining good coun- terfactuals can be broken up into two subtasks: the counter- factuals must be as relevant as possible to the context (so that they appeal to machines), while they cannot be overly similar to the correct response (so that they donât become correct answers incidentally). We balance between these two objectives to create a dataset that is challenging for ma- chines, yet easy for humans.
Formally, our procedure requires two models: one to compute the relevance between a query and a response, Prel, and another to compute the similarity between two response choices, Psim. Here, we employ state-of-the-art models for Natural Language Inference: BERT [15] and ESIM+ELMo [10, 57], respectively.6 Then, given dataset examples (qi, ri)1â¤iâ¤N, we obtain a counterfactual for each qi by per- forming maximum-weight bipartite matching [55, 40] on a weight matrix W â RNÃN, given by
Wi, j = log(Prel(qi, r j)) + λ log(1 â Psim(ri, r j)). (1)
Here, λ>0 controls the tradeoï¬ between similarity and rel-
Why are [person] and [person3] holding their (a) (a1) uy ase, eurte foreheads together? " [person1] and Why do [person1] and [person3] have their (a) (~) [person3] are hands clasped? praying. Why are [person6], [persons] and [person14] They are a family visiting the flea standing in close proximity? market. Why are [person1] and i 2 Porter samcea® (Ga) They deca together? 5 7
Figure 4: Overview of Adversarial Matching. Incorrect choices are obtained via maximum-weight bipartite match- ing between queries and responses; the weights are scores from state-of-the-art natural language inference models. Assigned responses are highly relevant to the query, while they diï¬er in meaning versus the correct responses.
evance.7 To obtain multiple counterfactuals, we perform several bipartite matchings. To ensure that the negatives are diverse, during each iteration we replace the similarity term with the maximum similarity between a candidate response r j and all responses currently assigned to qi.
Ensuring dataset integrity To guarantee that there is no question/answer overlap between the training and test sets, we split our full dataset (by movie) into 11 folds. We match the answers and rationales invidually for each fold. Two folds are pulled aside for validation and testing.
# 5. Recognition to Cognition Networks
to Cognition Net- works (R2C), a new model for visual commonsense reasoning. To perform well on this task requires a deep un- derstanding of language, vision, and the world. For exam- ] pointing ple, in Figure 5, answering âWhy is [person4 at [person1 requires multiple inference steps. First, we ground the meaning of the query and each response, which involves referring to the image for the
6We ï¬netune Prel (BERT), on the annotated data (taking steps to avoid data leakage), whereas Psim (ESIM+ELMo) is trained on entailment and paraphrase data - details in appendix Sec C.
7We tuned this hyperparameter by asking crowd workers to answer multiple-choice questions at several thresholds, and chose the value for which human performance is above 90% - details in appendix Sec C.
4
Imageâ I + jobjects ° Why is Why is [person4. pointing at [person1 a ? Grounding ree oS [persona] Contextualization [J ind (som âtom sn [Response aaa pees jo 2 28 t aos : Es He is telling [perSOnd 0)] that seer | l ca . 2) [person1 J] ordered pancakes. Ke is J [felting] | --- Lf || ga | [|
Figure 5: High-level overview of our model, R2C. We break the challenge of Visual Commonsense Reasoning into three components: grounding the query and response, contextualizing the response within the context of the query and the entire image, and performing additional reasoning steps on top of this rich representation.
two people. Second, we contextualize the meaning of the query, response, and image together. This step includes resolving the referent âhe,â and why one might be pointing in a diner. Third, we reason about the interplay of rele- vant image regions, the query, and the response. In this example, the model must determine the social dynamics ]. We for- between [person1 mulate our model as three high-level stages: grounding, contextualization, and reasoning, and use standard neural building blocks to implement each component.
In more detail, recall that a model is given an image, a set of objects o, a query q, and a set of responses r(i) (of which exactly one is correct). The query q and response choices r(i) are all expressed in terms of a mixture of natural language and pointing to image regions: notation-wise, we will represent the object tagged by a word w as ow. If w isnât a detection tag, ow refers to the entire image boundary. Our model will then consider each response r separately, using the following three components:
Grounding The grounding module will learn a joint image-language representation for each token in a se- quence. Because both the query and the response contain a mixture of tags and natural language words, we apply the same grounding module for each (allowing it to share pa- rameters). At the core of our grounding module is a bidi- rectional LSTM [34] which at each position is passed as input a word representation for w;, as well as visual features for 0y,. We use a CNN to learn object-level features: the visual representation for each region o is Roi-Aligned from its bounding region [63, 29]. To additionally encode infor- mation about the objectâs class label â¬,, we project an em- bedding of â¬, (along with the objectâs visual features) into a shared hidden representation. Let the output of the LSTM over all positions be r, for the response and q for the query.
Contextualization Given a grounded representation of the query and response, we use attention mechanisms to contextualize these sentences with respect to each other and the image context. For each position i in the response, we will deï¬ne the attended query representation as Ëqi using the
following equation:
αi, j = softmax j (riWq j) Ëqi = j αi, jq j. (2)
To contextualize an answer with the image, including im- plicitly relevant objects that have not been picked up from the grounding stage, we perform another bilinear attention between the response r and each object oâs image features. Let the result of the object attention be Ëoi.
Reasoning Last, we allow the model to reason over the response, attended query and objects. We accomplish this using a bidirectional LSTM that is given as context Ëqi, ri, and Ëoi for each position i. For better gradient ï¬ow through the network, we concatenate the output of the reasoning LSTM along with the question and answer representations for each timestep: the resulting sequence is max-pooled and passed through a multilayer perceptron, which predicts a logit for the query-response compatibility.
Neural architecture and training details For our im- age features, we use ResNet50 [30]. To obtain strong rep- resentations for language, we used BERT representations [15]. BERT is applied over the entire question and answer choice, and we extract a feature vector from the second-to- last layer for each word. We train R2C by minimizing the multi-class cross entropy between the prediction for each response r(i), and the gold label. See the appendix (Sec E) for detailed training information and hyperparameters.8
# 6. Results
In this section, we evaluate the performance of various models on VCR. Recall that our main evaluation mode is the staged setting (QâAR). Here, a model must choose the right answer for a question (given four answer choices), and then choose the right rationale for that question and answer (given four rationale choices). If it gets either the answer or the rationale wrong, the entire prediction will be wrong. This holistic task decomposes into two sub-tasks wherein we can train individual models: question answering (QâA)
8Our code is also available online at visualcommonsense.com.
5
Model Chance 25.0 25.0 25.0 25.0 6.2 6.2 y BERT l n O BERT (response only) ESIM+ELMo LSTM+ELMo t x e T 53.8 53.9 64.1 64.5 34.8 35.0 27.6 27.7 26.3 26.2 7.3 45.8 45.9 55.0 55.1 25.3 25.6 8.4 28.1 28.3 28.7 28.5 7.6 8.3 A Q V RevisitedVQA [38] BottomUpTopDown[4] MLB [42] MUTAN [6] 39.4 40.5 34.0 33.7 13.5 13.8 42.8 44.1 25.1 25.1 10.7 11.0 45.5 46.2 36.1 36.8 17.0 17.2 44.4 45.5 32.0 32.2 14.6 14.6 R2C 63.8 65.1 67.2 67.3 43.1 44.0 Human 91.0 93.0 85.0
Table 1: Experimental results on VCR. VQA mod- els struggle on both question-answering (Q â A) as well as answer justiï¬cation (Q â AR), possibly due to the complex language and diversity of examples in the dataset. While language-only models perform well, our model R2C obtains a signiï¬cant performance boost. Still, all models underperform human accuracy at this task. For more up-to-date results, see the leaderboard at visualcommonsense.com/leaderboard.
as well as answer justiï¬cation (QAâR). Thus, in addition to reporting combined QâAR performance, we will also report QâA and QAâR.
Task setup A model is presented with a query q, and four response choices r(i). Like our model, we train the baselines using multi-class cross entropy between the set of responses and the label. Each model is trained separately for question answering and answer justiï¬cation.9
# 6.1. Baselines
We compare our R2C to several strong language and vi- sion baselines.
Text-only baselines We evaluate the level of visual reasoning needed for the dataset by also evaluating purely text-only models. For each model, we represent q and r(i) as streams of tokens, with the detection tags replaced by the object name (e.g. chair5 â chair). To minimize the discrepancy between our task and pretrained models, we re- place person detection tags with gender-neutral names. a. BERT [15]: BERT is a recently released NLP model that achieves state-of-the-art performance on many NLP tasks. b. BERT (response only) We use the same BERT model, however, during ï¬ne-tuning and testing the model is only given the response choices r(i). c. ESIM+ELMo [10]: ESIM is another high perform- ing model for sentence-pair classiï¬cation tasks, particularly when used with ELMo embeddings [57].
9We follow the standard train, val and test splits.
6
Model Q â A QA â R Q â AR R2C 63.8 67.2 43.1 48.3 No query 63.6 No reasoning module No vision representation 53.1 46.4 GloVe representations 43.5 65.7 63.2 38.3 21.5 42.2 33.8 18.3
Table 2: Ablations for R2C, over the validation set. âNo queryâ tests the importance of integrating the query dur- ing contextualization; removing this reduces QâAR perfor- mance by 20%. In âno reasoningâ, the LSTM in the reason- ing stage is removed; this hurts performance by roughly 1%. Removing the visual features during grounding, or using GloVe embeddings rather than BERT, lowers performance signiï¬cantly, by 10% and 25% respectively.
d. LSTM+ELMo: Here an LSTM with ELMo embed- dings is used to score responses r(i).
VQA Baselines Additionally we compare our ap- proach to models developed on the VQA dataset [5]. All models use the same visual backbone as R2C (ResNet 50) as well as text representations (GloVe; [56]) that match the original implementations. e. RevisitedVQA [38]: This model takes as input a query, response, and image features for the entire image, and passes the result through a multilayer perceptron, which has to classify âyesâ or ânoâ.10 f. Bottom-up and Top-down attention (BottomUpTop- Down) [4]: This model attends over region proposals given by an object detector. To adapt to VCR, we pass this model object regions referenced by the query and response. g. Multimodal Low-rank Bilinear Attention (MLB) [42]: This model uses Hadamard products to merge the vi- sion and language representations given by a query and each region in the image. h. Multimodal Tucker Fusion (MUTAN) [6]: This model expresses joint vision-language context in terms of a tensor decomposition, allowing for more expressivity.
We note that BottomUpTopDown, MLB, and MUTAN all treat VQA as a multilabel classiï¬cation over the top 1000 answers [4, 50]. Because VCR is highly diverse (Supp A), for these models we represent each response r(i) using a GRU [11].11 The output logit for response i is given by the dot product between the ï¬nal hidden state of the GRU encoding r(i), and the ï¬nal representation from the model. Human performance We asked ï¬ve diï¬erent workers on Amazon Mechanical Turk to answer 200 dataset ques- tions from the test set. A diï¬erent set of ï¬ve workers were asked to choose rationales for those questions and answers. Predictions were combined using a majority vote.
10For VQA, the model is trained by sampling positive or negative an- swers for a given question; for our dataset, we simply use the result of the perceptron (for response r(i)) as the i-th logit.
11To match the other GRUs used in [4, 42, 6] which encode q.
# b) is right because...
Why is [person1 [person2
] pointing a gun at
a) [person1 ] wants to kill [person2 ] .(1%) b) [person1 bing the bank and [person2 manager. (71%) ] and [person3 ] are rob- ] is the bank c) [person2 [person1 ] has done something to upset ] . (18%) a) [person1 [person3 (33%) ] and ] because they just robbed a bank. ] is chasing [person1 b) Robbers will sometimes hold their gun in the air to get everyoneâs attention. (5%) c) The vault in the background is similar to a bank vault. [person3 for someone to open it. (49%) ] is waiting by the vault d) Because [person2 daughter. [person1 ] . (8%) [person2 ] is [person1 ] wants to protect ] âs d) A room with barred windows and a counter usu- ally resembles a bank. (11%) What would [person1 [person2 ] do if she caught ] and [person3 ] whispering? a) [person1 ] would look to her left. (7%) b) She would play with [book1 ] . (7%) c) She would look concerned and ask what was funny. (39%) d) She would switch their seats. (45%) d) is right because... a) When students are talking in class theyâre supposed to be listening - the teacher separates them. (64%) b) Plane seats are very cramped and narrow, and it requires cooperation from your seat mates to help get through. (15%) c) Itâs not unusual for people to want to get the closest seats to a stage. (14%) d) Thatâs one of the only visible seats I can see thatâs still open, the plane is mostly full. (6%) Whatâs going to happen next? a) [person2 [person4 ] is going to walk up and punch ] in the face. (10%) b) Someone is going to read [person4 time story. (15%) ] a bed- c) [person2 ] is going to fall down. (5%) d) [person2 [person4 ] is going to say how cute ] âs children are. (68%) d) is right because... a) They are the right age to be father and son and [person5 are his son. (1%) ] is hugging [person3 ] like they b) It looks like [person4 photo to [person2 want to be polite. (31%) ] is showing the ] , and [person2 ] will c) [person2 [person4 ] is smirking and looking down at ] . (6%) d) You can see [person4 the crib and decor in the room (60%) ] smiling and facing b) is right because... Why canât [person3 with [person1 ] go in the house ] ? ] and [person2 ] is going away by himself. (60%) a) [person1 ] is small enough to carry. ] appears to own him. (33%) b) [dog1 [person3 a) She does not want to be there. (12%) ] has [dog1 ] with her. (14%) b) [person3 ] was in the house, he would likely c) If [dog1 knock over [pottedplant6 ] . (4%) scratch [couch1 c) She needs the light. (45%) ] and likely d) She is too freaked out (26%)
7)
# al]
d) [person1 [person2
] looks like he may have lead
] .(1%) Figure 6: Qualitative examples from R2C. Correct predictions are highlighted in blue . Incorrect predictions are in red with the correct choices bolded. For more predictions, see see visualcommonsense.com/explore.
] into the room to see [dog1
# 6.2. Results and Ablations
We present our results in Table 1. Of note, standard VQA models struggle on our task. The best model, in terms of QâAR accuracy, is MLB, with 17.2% accuracy. Deep text- only models perform much better: most notably, BERT [15] obtains 35.0% accuracy. One possible justiï¬cation for this gap in performance is a bottlenecking eï¬ect: whereas VQA models are often built around multilabel classiï¬cation of the top 1000 answers, VCR requires reasoning over two (often
long) text spans. Our model, R2C obtains an additional boost over BERT by 9% accuracy, reaching a ï¬nal perfor- mance of 44%. Still, this ï¬gure is nowhere near human performance: 85% on the staged task, so there is signiï¬cant headroom remaining.
Ablations We evaluated our model under several abla- tions to determine which components are most important. Removing the query representation (and query-response contextualization entirely) results in a drop of 21.6% ac-
7
curacy points in terms of Q â AR performance. Interest- ingly, this setting allows it to leverage its image represen- tation more heavily: the text based response-only models (BERT response only, and LSTM+ELMo) perform barely better than chance. Taking the reasoning module lowers performance by 1.9%, which suggests that it is beneï¬cial, but not critical for performance. The model suï¬ers most when using GloVe representations instead of BERT: a loss of 24%. This suggests that strong textual representations are crucial to VCR performance.
Qualitative results Last, we present qualitative exam- ples in Figure 6. R2C works well for many images: for in- stance, in the ï¬rst row, it correctly infers that a bank robbery is happening. Moreover, it picks the right rationale: even though all of the options have something to do with âbanksâ and ârobbery,â only c) makes sense. Similarly, analyzing the examples for which R2C chooses the right answer but the wrong rationale allows us to gain more insight into its understanding of the world. In the third row, the model in- correctly believes there is a crib while assigning less proba- ] is bility mass on the correct rationale - that [person2 ]âs children, which is being shown a photo of [person4 why [person2
# 7. Related Work
Question Answering Visual Question Answering [5] was one of the ï¬rst large-scale datasets that framed visual understanding as a QA task, with questions about COCO images [49] typically answered with a short phrase. This line of work also includes âpointingâ questions [45, 93] and templated questions with open ended answers [86]. Re- cent datasets also focus on knowledge-base style content [80, 83]. On the other hand, the answers in VCR are en- tire sentences, and the knowledge required by our dataset is largely background knowledge about how the world works. Recent work also includes movie or TV-clip based QA [75, 51, 46]. In these settings, a model is given a video clip, often alongside additional language context such as subtitles, a movie script, or a plot summary.12 In contrast, VCR features no extra language context besides the ques- tion. Moreover, the use of explicit detection tags means that there is no need to perform person identiï¬cation [66] or linkage with subtitles.
An orthogonal line of work has been on referring expres- sions: asking to what image region a natural language sen- tence refers to [60, 52, 65, 87, 88, 59, 36, 33]. We explicitly avoid referring expression-style questions by using indexed detection tags (like [person1
Last, some work focuses on commonsense phenomena, such as âwhat ifâ and âwhyâ questions [79, 58]. However,
12As we ï¬nd in Appendix D, including additional language context tends to boost model performance.
8
the space of commonsense inferences is often limited by the underlying dataset chosen (synthetic [79] or COCO [58] scenes). In our work, we ask commonsense questions in the context of rich images from movies.
Explainability AI models are often right, but for ques- tionable or vague reasons [7]. This has motivated work in having models provide explanations for their behavior, in the form of a natural language sentence [31, 9, 41] or an attention map [32, 35, 37]. Our rationales combine the best of both of these approaches, as they involve both natural language text as well as references to image regions. Addi- tionally, while it is hard to evaluate the quality of generated model explanations, choosing the right rationale in VCR is a multiple choice task, making evaluation straightforward. Commonsense Reasoning Our task uniï¬es work in- volving reasoning about commonsense phenomena, such as physics [54, 84], social interactions [2, 77, 12, 27], proce- dure understanding [91, 3] and predicting what might hap- pen next in a video [74, 17, 92, 78, 18, 64, 85].
Adversarial Datasets Past work has proposed the idea of creating adversarial datasets, whether by balancing the dataset with respect to priors [25, 28, 62] or switching them at test time [1]. Most relevant to our dataset construc- tion methodology is the idea of Adversarial Filtering [89].13 Correct answers are human-written, while wrong answers are chosen from a pool of machine-generated text that is fur- ther validated by humans. However, the correct and wrong answers come from fundamentally diï¬erent sources, which raises the concern that models can cheat by performing au- thorship identiï¬cation rather than reasoning over the image. In contrast, in Adversarial Matching, the wrong choices come from the exact same distribution as the right choices, and no human validation is needed.
# 8. Conclusion
In this paper, we introduced Visual Commonsense Rea- soning, along with a large dataset VCR for the task that was built using Adversarial Matching. We presented R2C, a model for this task, but the challenge â of cognition-level visual undertanding â is far from solved. Acknowledgements
We thank the Mechanical Turk workers for doing such an outstanding job with dataset creation - this dataset and paper would not exist with- out them. Thanks also to Michael Schmitz for helping with the dataset split and Jen Dumas for legal advice. This work was supported by the Na- tional Science Foundation through a Graduate Research Fellowship (DGE- 1256082) and NSF grants (IIS-1524371, 1637479, 165205, 1703166), the DARPA CwC program through ARO (W911NF-15-1-0543), the IARPA DIVA program through D17PC00343, the Sloan Research Foundation through a Sloan Fellowship, the Allen Institute for Artiï¬cial Intelligence, the NVIDIA Artiï¬cial Intelligence Lab, and gifts by Google and Face- book. The views and conclusions contained herein are those of the authors and should not be interpreted as representing endorsements of IARPA, DOI/IBC, or the U.S. Government.
13This was used to create the SWAG dataset, a multiple choice NLP dataset for natural language inference.
# Appendix
# Abstract
In our work we presented the new task of Visual Com- monsense Reasoning and introduced a large-scale dataset for the task, VCR, along with Adversarial Matching, the machinery that made the dataset construction possible. We also presented R2C, a new model for the task. In the supple- mental material, we provide the following items that shed further insight on these contributions:
Additional dataset analysis (Section A) ⢠More information about dataset creation (Section B)
and Adversarial Matching (Section C)
An extended discussion on language priors (Section D) ⢠Model hyperparameters used (Section E) ⢠Additional VQA Baseline Results, with BERT embed-
dings (Section F)
A datasheet for VCR (Section G) ⢠A visualization of R2Câs predictions (Section H)
For more examples, and to obtain the dataset and code, check out visualcommonsense.com.
# A. Dataset Analysis
In this section, we continue our high-level analysis of
VCR.
# A.1. Language complexity and diversity
How challenging is the language in VCR? We show sev- eral statistics in Table 3. Of note, unlike many question- answering datasets wherein the answer is a single word, our answers average to more than 7.5 words. The rationales are even longer, averaging at more than 16 words.
An additional informative statistic is the counts of unique answers and rationales in the dataset, which we plot in Fig- ure 7. As shown, almost every answer and rationale is unique.
# A.2. Objects covered
On average, there are roughly two objects mentioned over a question, answer, and rationale. Most of these ob- jects are people (Figure 8), though other types of COCO objects are common too [49]. Objects such as âchair,â âtie,â and âcupâ are often detected, however, these objects vary in terms of scene importance: even though more ties exist in the data than cars, workers refer to cars more in their ques- tions, answers, and rationales. Some objects, such as hair driers and snowboards, are rarely detected.
9
Train Val Test Number of questions Number of answers per question Number of rationales per question 212,923 26,534 25,263 4 4 4 4 4 4 Number of images Number of movies covered 80,418 9,929 9,557 244 1,945 189 Average question length Average answer length Average rationale length Average # of objects mentioned 6.61 7.54 16.16 1.84 6.58 6.63 7.65 7.55 16.19 16.07 1.82 1.85
Table 3: High level dataset statistics, split by fold (train, validation, and test). Note that we held out one fold in the dataset for blind evaluation at a later date; this fold is blind to us to preserve the integrity of the held-out data. Accord- ingly, the statistics of that fold are not represented here.
1.00 S q a â VQA ââ MovieQA Examples CDF ° a 8 0.26 â TVQA â VCR Answers â vw ââ VCR Rationales 0.00 0.00 0.25 0.50 0.75 1.00 Fraction of cumulative examples.
Figure 7: CDF of dataset examples ordered by frequency in question-answering datasets [5, 93, 75, 46]. To obtain this plot, we sampled 10,000 answers from each dataset (or rationales, for âVCR rationalesâ). We consider two exam- ples to be the same if they exactly match, after tokenization, lemmatization, and removal of stopwords. Where many datasets in this space are light-tailed, our dataset shows great diversity (e.g. almost every rationale is unique.)
# A.3. Movies covered
Our dataset also covers a broad range of movies - over 2000 in all, mostly via MovieClips (Figure 9). We note that since we split the dataset by movie, the validation and test sets cover a completely disjoint set of movies, which forces a model to generalize. For each movie image, workers ask 2.6 questions on average (Figure 10), though the exact num- ber varies - by design, workers ask more questions for more interesting images.
# A.4. Inference types
It is challenging to accurately categorize commonsense and cognition-level phenomena in the dataset. One ap- proach that we presented in Figure 2 is to categorize ques- tions by type: to estimate this over the entire training set, we used a several patterns, which we show in Table 4. Still,
Type Freq. Patterns Explanation 38% why, how come, how does Activity Temporal Mental Role Scene 24% doing, looking, event, playing, preparing 13% happened, before, after, earlier, later, next 8% feeling, thinking, saying, love, upset, angry 7% relation, occupation, strangers, married 5% where, time, near Hypothetical 5% if, would, could, chance, might, may
Table 4: Some of the rules we used to determine the type of each question. Any question containing a word from one of the above groups (such as âwhyâ) was determined to be of that type (âexplanationâ).
we note that automatic categorization of the inference types required for this task is hard. This is in part because a sin- gle question might require multiple types of reasoning: for example, âWhy does person1 feel embarrassed?â requires reasoning about person1âs mental state, as well as requir- ing an explanation. For this reason, we argue that this break- down underestimates the task diï¬culty.
# B. Dataset Creation Details
In this section, we elaborate more on how we collected VCR, and about our crowdsourcing process.
# B.1. Shot detection pipeline
The images in VCR are extracted from video clips from LSMDC [67] and MovieClips. These clips vary in length from a few seconds (LSMDC) to several minutes (MovieClips). Thus, to obtain more still images from these clips, we performed shot detection. Our pipeline is as fol- lows:
⢠We iterate through a video clip at a speed of one frame per second.
⢠During each iteration, we also perform shot detection: if we detect a mean diï¬erence of 30 pixels in HSV space, then we register a shot boundary.
⢠After a shot boundary is found, we apply Mask-RCNN [29, 24] on the middle frame for the shot, and save the resulting image and detection information.
We used a threshold of 0.7 for Mask-RCNN, and the best detection/segmentation model available for us at the time: X-101-64x4d-FPN14, which obtains 42.4 box mAP on COCO, and 37.5 mask mAP.
# B.2. Interestingness Filter
Recall that we use an âinterestingness ï¬lterâ to ensure that the images in our dataset are high quality. First, every image had to have at least two people in it, as detected by
14Available via the Detectron Model Zoo.
10
person chair tie cup bottle book car diningtable wineglass ottedplani Penandbal bow! tv cellphone horse vase couch backpack bench umbrella bed clock lapto trucl trafficlight sportsball bicycle remote suitcase sink do boa ___ bird refrigerator knife spoon motorcycle teddybear apple surfboard oven cow fork cake keyboard "bus airplane baseballbat orange toilet toothbrush skateboard basebaligiove microwave mouse train pizza banana sheep firehydrant sandwich stopsign cat elephant parkingmeter donut frisbee tennisracket scissors kite skis hotdog zebra broccoli giraffe carrot toaster bear snowboard hairdier = 0 @mm 10° 10° 1 10° 10 3.
n total referenced
10
10°
Object counts n
Figure 8: Distribution of the referenced COCO [49] objects in VCR. We count an object as being âreferencedâ if, for a given question, answer, and rationale, that object is men- tioned explicitly. Note that we do not double-count objects here - if person5 is mentioned in the question and the an- swer, we count it once. This chart suggests that our dataset is mostly human-centric, with some categories being refer- enced more than others (cars are mentioned more than ties, even though cars appear less often).
Mask RCNN. However, we also found that many images with two or more people were still not very interesting. The two main failure cases here are when there are one or two
10"
# @mm
# LSMDC
Harry Patter And Thg Hat Soe bare chess takes rpemaneely a A Good Bay 1g De Hrs (Ghost fer Sot of Veigateg a melts Ra seas
# EE
# MovieClips
°
5000
10000
# 15000 Number of images
20000
25000
30000
Figure 9: Distribution of movies in the VCR training set by number of images. Blue bars are movies from LSMDC (46k images); red are MovieClips (33k images). The MovieClips images are spread over a wider range of movies: due to space restrictions, most are under âother MovieClips.â
11
30000 20000 10000 3 4 5 6 Number of questions asked n Number of images with n questions asked
Figure 10: Number of questions asked per image on the VCR training set. The average number of questions asked per image is 2.645. Note that while workers could ask any- where between one to three questions per image, images that were ï¬agged as especially interesting by workers got re-annotated with additional annotations.
people detected, but they arenât doing anything interesting (Figure 11a), or when the image is especially grainy and blurry. Thus, we opted to learn an additional classiï¬er for determining which images were interesting.
Our ï¬ltering process evolved as we collected data for the task. The ï¬rst author of this paper ï¬rst manually annotated 2000 images from LSMDC [67] as being âinterestingâ or ânot interestingâ and trained a logistic regression model to predict said label. The model is given as input the number of people detected by Mask RCNN [29, 24], along with the number of objects (that are not people) detected. We used this model to identify interesting images in LSMDC, using a threshold that corresponded to 70% precision. This resulted in 72k images selected; these images were annotated ï¬rst.
During the crowdsourcing process, we obtained data that allowed us to build an even better interestingness ï¬lter later on. Workers were asked, along with each image, whether they thought that the image was especially interesting (and thus should go to more workers), just okay, or especially boring (and hard to ask even one good question for). We used this to train a deeper model for this task. The model uses a ResNet 50 backbone over the entire image [30] as well as a multilayer perceptron over the object counts. The entire model is trained end-to-end: 2048 dimensional fea- tures from Resnet are concatenated with a 512 dimensional projetion of the object counts, and used to predict the la-
a) Boring image.
[person
b) Interesting image.
Figure 11: Two example images that come from the raw video pipeline. Image a) is ï¬agged by our initial ï¬lter as âboringâ, because there are only two people without any ad- ditional objects, whereas image b) is ï¬agged as being inter- esting due to the number of people and objects detected.
bels.15 We used this model to select the most interesting 40k images from Movieclips, which ï¬nished oï¬ the anno- tation process.
# B.3. Crowdsourcing quality data
As mentioned in the paper, crowdsourcing data at the quality and scale of VCR is challenging. We used several best practices for crowdsourcing, which we elaborate on in this section.
We used Amazon Mechanical Turk for our crowdsourc- ing. A screenshot of our interface is given in Figure 12. Given an image, workers asked questions, answered them, and provided a rationale explaining why their answer might be correct. These are all written in a mixture of natural lan- guage text, as well as referring to detection regions. In our annotation UI, workers refer to the regions by writing the tag number.16
Workers could ask anywhere between one to three ques- tions per HIT. We paid the workers proportionally at $0.22 per triplet. According to workers, this resulted in $8â 25/hr. This proved necessary as workers reported feeling
15In addition to predicting interestingness, the model also predicts the number of questions a worker asks, but we never ended up using these predictions.
16Note that this diï¬ers a bit from the format in the paper: we originally had workers write out the full tag, like [person5], but this is often long and the workers would sometimes forget the brackets. Thus, the tag format here is just a single number, like 5.
12
Previous caption: SOMEONE wanders over to 2 table next to SOMEONE'S and sits down âThis caption: She ignores him, Next caption: She returns to her readin hide all | show all (0 (pottedplant) Answer Question a » fo Rationale Question (optional) âAnswer (optional) Rationale (optional) a n2 a Question (optional) Answer (optional) Rationale (optional) oa na Ra A A Cl This mage wok be it toast tives good questions fr) [1 TWirimage realy irestng ed should gow moe
Figure 12: Screenshot of our annotation interface. Workers are given an image, as well as context from the video (here, captions from LSMDC [67]), and are asked to write one to three questions, answers, and rationales. For each answer, they must mark it as likely, possible, or unlikely. Workers also select whether the image was especially interesting or boring, as this allows us to train a deep model for predicting image interestingness.
âdrainedâ by the high quality required.
Automated quality checks We added several auto- mated checks to the crowdsourcing UI to ensure high qual- ity. The workers had to write at least four words for the question, three for the answer, and ï¬ve for the rationale. Additionally, the workers had to explicitly refer to at least one detection on average per question, answer, and ratio- nale triplet. This was automatically detected to ensure that the workers were referring to the detection tags in their sub- missions.
We also noticed early on was that sometimes workers would write detailed stories that were only loosely con- nected with the semantic content of the image. To ï¬x this, workers also had to self-report whether their answer was likely (above 75% probability), possible (25-75% probabil- ity), or unlikely (below 25% probability). We found that this helped deter workers from coming up with consistently un- likely answers for each image. The likelihood ratings were never used for the task, since we found they werenât neces- sary to obtain high human agreement.
Instructions Like for any crowdsourcing task, we found wording the instructions carefully to be crucial. We encouraged workers to ask about higher-level actions, ver- sus lower-level ones (such as âWhat is person1 wearing?â), as well as to not ask questions and answers that were overly
generic (and thus could apply to many images). Workers were encouraged to answer reasonably in a way that was not overly unlikely or unreasonable. To this end, we provided the workers with high-quality example questions, answers, and rationales.
Qualiï¬cation exam Since we were picky about the types of questions asked, and the format of the answers and rationales, workers had to pass a qualiï¬cation task to dou- ble check that they understood the format. The qualiï¬ca- tion test included a mix of multiple-choice graded answers as well as a short written section, which was to provide a single question, answer, and rationale for an image. The written answer was checked manually by the ï¬rst author of this paper.
Work veriï¬cation In addition to the initial qualiï¬ca- tion exam, we also periodically monitored the annotation quality. Every 48 hours, the ï¬rst author of this paper would review work and provide aggregate feedback to ensure that workers were asking good questions, answering them well, and structuring the rationales in the right way. Because this took signiï¬cant time, we then selected several outstanding workers and paid them to do this job for us: through a sepa- rate set of HITs, these outstanding workers were paid $0.40 to provide detailed feedback on a submission that another worker made. Roughly one in ï¬fty HITs were annotated in this way to give extra feedback. Throughout this process, workers whose submission quality dropped were dequali- ï¬ed from the HITs.
# C. Adversarial Matching Details
There are a few more details that we found useful when performing the Adversarial Matching to create VCR, which we discuss in this section. Aligning Detections
In practice, most responses in our dataset are not relevant to most queries, due to the diversity of responses in our dataset and the range of detection tags (person1, etc.).
To ï¬x this, for each query qi (with associated object list oi and response ri) we turn each candidate r j into a tem- plate, and use a rule based system to probabilistically remap its detection tags to match the objects in oi. With some probability, a tag in r j is replaced with a tag in qi and ri. Otherwise, it is replaced with a random tag from oi.
We note that our approach isnât perfect. The remap- ping system often produces responses that violate pred- icate/argument structure, such as âperson1 is kissing person1.â However, our approach does not need to be per- fect: because the detections for response r j are remapped uniquely for each query qi, with some probability, there should be at least some remappings of ri that make sense, and the question relevance model Prel should select them.
Semantic categories Recall that we use 11 folds for the dataset of around 290k questions, answers, and ratio-
13
nales. Since we must perform Adversarial Matching once for the answers, as well as for the rationales, this would naively involve 22 matchings on a fold size of roughly 26k. We found that the major computational bottleneck wasnât the bipartite matching17, but rather the computation of all- pairs similarity and relevance between â¼26k examples.
There is one additional potential problem: we want the dataset examples to require a lot of complex commonsense reasoning, rather than simple attribute identiï¬cation. How- ever, if the response and the query disagree in terms of gen- der pronouns, then many of the dataset examples can be reduced to gender identiï¬cation.
We address both of these problems by dividing each fold into âbucketsâ of 3k examples for matching. We divide the examples up in terms of the pronouns in the response: if the response contains a female or male pronoun, then we put the example into a âfemaleâ or âmaleâ bucket, respectively, otherwise the response goes into the âneutralâ bucket. To further divide the dataset examples, we also put diï¬erent question types in diï¬erent buckets for the question answer- ing task (e.g. who, what, etc.). For the answer justiï¬cation task, we cluster the questions and answers using their aver- age GloVe embeddings [56].
Relevance model details Recall that our relevance model Prel is trained to predict the probability that a re- sponse r is valid for a query q. We used BERT for this task [15], as it achieves state-of-the-art results across many two-sentence inference tasks. Each input looks like the fol- lowing, where the query and response are concatenated with a separator in between:
[CLS] what is casey doing ? [SEP] casey is getting out of car . [SEP]
Note that in the above example, object tags are replaced with the class name (car3âcar). Person tags are replaced with gender neutral names (person1âcasey) [19].
We ï¬ne-tune BERT by treating it as a two-way classiï¬- cation problem. With probability 25% for a query, BERT is given that queryâs actual response, otherwise it is given a random response (where the detections were remapped). Then, the model must predict whether it was given the ac- tual response or not. We used a learning rate of 2 · 10â5, the Adam optimizer [44], a batch size of 32, and 3 epochs of ï¬ne-tuning.18
Due to computational limitations, we used BERT-Base as the architecture rather than BERT-Large - the latter is signiï¬cantly slower.19 Already, Prel has an immense com- putational requirement as it must compute all-pairs simi-
17We use the https://github.com/gatagat/lap implementation. 18We note that during the Adversarial Matching process, for either Ques- tion Answering or Answer Justiï¬cation, the dataset is broken up into 11 folds. For each fold, BERT is ï¬ne-tuned on the other folds, not on the ï¬nal dataset splits.
19Also, BERT-Large requires much more memory, enough so that itâs harder to ï¬ne-tune due to the smaller feasible batch size.
larity for the entire dataset, over buckets of 3000 examples. Thus, we opted to use a larger bucket size rather than a more expensive model.
Similarity model details While we want the responses to be highly relevant to the query, we also want to avoid cases where two responses might be conï¬ated by humans - particularly when one is the correct response. This con- ï¬ation might occur for several reasons: possibly, two re- sponses are paraphrases of one another, or one response en- tails another. We lump both under the âsimilarityâ umbrella as mentioned in the paper and introduce a model, Psim, to predict the probability of this occurring - broadly speaking, that two responses ri and r j have the same meaning.
We used ESIM+ELMo for this task [10, 57], as it still does quite well on two-sentence natural language inference tasks (although not as well as BERT), and can be made much more eï¬cient. At test time, the model makes the sim- ilarity prediction when given two token sequences.20
We trained this model on freely available NLP corpora. We used the SNLI formalism [8], in which two sentences are an âentailmentâ if the ï¬rst entails the second, âcontradic- tionâ if the ï¬rst is contradicted by the second, and âneutralâ otherwise. We combined data from SNLI and MultiNLI [82] as training data. Additionally, we found that even af- ter training on these corpora, the model would struggle with paraphrases, so we also translated SNLI sentences from En- glish to German and back using the Nematus machine trans- lation system [81, 73]. These sentences served as extra paraphrase data and were assigned the âentailmentâ label. We also used randomly sampled sentence pairs from SNLI as additional âneutralâ training data. We held out the SNLI validation set to determine when to stop training. We used standard hyperparameters for ESIM+ELMo as given by the AllenNLP library [22].
Given the trained model Pnli, we deï¬ned the similarity model as the maximum entailment probability for either way of ordering the two responses:
Pyii(ent|ri, rj), Pau(ent|r j, ri},
Psim(ri, r j) = max , (3)
where âentâ refers to the âentailmentâ label. If one response entails the other, we ï¬ag them as similar, even if the reverse entailment is not true, because such a response is likely to be a false positive as a distractor.
The beneï¬t of using ESIM+ELMo for this task is that it can be made more eï¬cient for the task of all-pairs sen- tence similarity. While much of the ESIM architecture in- volves computing attention between the two text sequences, everything before the ï¬rst attention can be precomputed. This provides a large speedup, particularly as computing the ELMo representations is expensive. Now, for a fold size
20Again, with object tags replaced with the class name, and person tags replaced by gender neutral names.
14
5 Ee) 4 3 | iy 3 0.6 3 8 0.4 oââââââ wes. 02 == Relevance Model == Worker 0.0 10° 10" 10° A QA->R 1.0 O âââ_eââââ"â"â 0.8 3 06 gsâ â 5 Qa @) 3 0.4 oO 0.2 == Relevance Model == Worker 0.0 10° 10° 10 A
Figure 13: Tuning the λ hyperparameter. Workers were asked to solve 100 dataset examples from the validation set, as given by Adversarial Matching for each considered value of λ. We used these results to pick reasonable values for the hyperparameter such that the task was diï¬cult for the ques- tion relevance model Prel, while simple for human workers. We chose λ = 0.1 for Q â A and λ = 0.01 for QA â R.
of N, we only have to compute 2N ELMo representations rather than N2.
Validating the λ parameter Recall that our hyperpa- rameter λ trades oï¬ between machine and human diï¬culty for our ï¬nal dataset. We shed more insight on how we chose the exact value for λ in Figure 13. We tried several diï¬erent values of λ and chose λ = 0.1 for Q â A and λ = 0.01 for QA â R, as at these thresholds human performance was roughly 90%. For an easier dataset for both humans and machines, we would increase the hyperparameter.
# D. Language Priors and Annotation Artifacts Discussion
There has been much research in the last few years in understanding what âpriorsâ datasets have.21 Broadly speak- ing, how well do models do on VCR, as well as other visual question answering tasks, without vision?
To be more general, we will consider problems where a model is given a question and answer choices, and picks exactly one answer. The answer choices are the outputs that the model is deciding between (like the responses in VCR) and the question is the shared input that is common to all answer choices (the query, image, and detected objects in VCR). With this terminology, we can categorize unwanted dataset priors in the following ways:
Answer Priors: A model can select a correct answer without even looking at the question. Many text-only datasets contain these priors. For instance, the Roc- Stories dataset [53] (in which a model must classify endings to a story as correct or incorrect), a model can obtain 75% accuracy by looking at stylistic features (such as word choice and punctuation) in the endings. ⢠Non-Visual Priors: A model can select a correct an- swer using only non-visual elements of the question. One example is VQA 1.0 [5]: given a question like âWhat color is the ï¬re hydrant?â a model will clas- sify some answers higher than others (red). This was addressed in VQA 2.0 [26], however, some answers will still be more likely than others (VQAâs answers are open-ended, and an answer to âWhat color is the ï¬re hydrant?â must be a color).
These priors can either arise from biases in the world (ï¬re hydrants are usually red), or, they can come from an- notation artifacts [28]: patterns that arise when people write class-conditioned answers. Sometimes these biases are sub- liminal: when asked to write a correct or incorrect story ending, the correct endings tend to be longer [72]. Other cases are more obvious: workers often use patterns such as negation to write sentences that contradict a sentence [28].22 To what extent do vision datasets suï¬er from annota- tion artifacts, versus world priors? We narrow our focus to multiple-choice question answering datasets, in which for humans traditionally write correct and incorrect answers to a question (thus, potentially introducing the annotation ar- tifacts). In Table 5 we consider several of these datasets: TVQA [46], containing video clips from TV shows, along
21This line of work is complementary to other notions of dataset bias, like understanding what phenomena datasets cover or donât [76], partic- ularly how that relates to how marginalized groups are represented and portrayed [71, 90, 69, 68].
22For instance, the SNLI dataset contains pairs of sentences with la- bels such as âentailedâ or âcontradictionâ [8]. For a sentence like âA skate- boarder is doing tricksâ workers often write âNobody is doing tricksâ which is a contradiction. The result is that the word ânobodyâ is highly predictive of a word being a contradiction.
15
Dataset TVQA [46] MovieQA [75] PororoQA [43]⥠TGIFQA [39]⦠#train Chance A 122,039 20.0 9,848 20.0 7,530 20.0 73,179 20.0 45.0 47.4 33.8 35.4 43.1 47.4 45.8 72.5 VCR QâA VCR QAâR 212,923 25.0 25.0 27.6 53.8 26.3 64.1 VCR VCR small QâA small QA â R 9,848 25.0 25.0 25.5 39.9 25.3 50.9
Table 5: Text-only results on the validation sets of vision datasets, using BERT-Base. #train shows the number of training examples. A corresponds to only seeing the an- swer; in Q+A the model also sees the question; in S+Q+A the model also sees subtitles from the video clip. These re- sults suggest that many multiple choice QA datasets suï¬er from annotation artifacts, while Adversarial Matching helps produce a dataset with minimial biases; moreover, pro- viding extra text-only information (like subtitles) greatly boosts performance. More info:
â : State of the art. â£: Only 45% (879/1958) of the questions in the MovieQA vali- dation set have timestamps, which are needed to extract clip- level subtitles, so for the other 55%, we donât use any subtitle information.
â¥: No oï¬cial train/val/test split is available, so we split the data by movie, using 20% of data for validation and the rest for training.
â¦: There seem to be issues with the publicly released train-test split of TGIFQA (namely, a model with high accuracy on a held-out part of the training set doesnât generalize to the provided test set) so we re-split the multiple-choice data our- selves by GIF and hold out 20% for validation.
with subtitles; MovieQA [75], with videos from movies and questions obtained from higher-level plot summaries; Poro- roQA [43], with cartoon videos; and TGIFQA [39], with templated questions from the TGIF dataset [47]. We note that these all diï¬er from our proposed VCR in terms of subject matter, questions asked, number of answers (each of the above has 5 answers possible, while we have 4) and format; our focus here is to investigate how diï¬cult these datasets are for text-only models.23 Our point of compari- son is VCR, since our use of Adversarial Matching means that humans never write incorrect answers.
We tackle this problem by running BERT-Base on these models [15]: given only the answer (A), the answer and the question (Q+A), or additional language context in the form of subtitles (S+Q+A), how well does BERT do? Our results in Table 5 help support our hypothesis regarding annotation
23It should be noted that all of these datasets were released before the existence of strong text-only baselines such as BERT.
artifacts: whereas accuracy on VCR, only given the ending, is 27% for Q â A and 26% for Q â A, versus a 25% random baseline. Other models, where humans write the incorrect answers, have answer-only accuracies from 33.8% (MovieQA) to 45.8% (TGIFQA), over a 20% baseline.
There is also some non-visual bias for all datasets con- sidered: from 35.4% when given the question and the an- swers (MovieQA) to 72.5% (TGIFQA). While these results suggest that MovieQA is incredibly diï¬cult without seeing the video clip, there are two things to consider here. First, MovieQA is roughly 20x smaller than our dataset, with 9.8k examples in training. Thus, we also tried training BERT on smallâ: taking 9.8k examples at random from our train- âVCR ing set. Performance is roughly 14% worse, to the point of being roughly comparable to MovieQA.24 Second, of- ten times the examples in MovieQA have similar structure, which might help to alleviate stylistic priors, for example:
âWho has followed Boyle to Eamonâs apartment?â An- swers:
1. Thommo and his IRA squad. 2. Darren and his IRE squad. 3. Gary and his allies. 4. Quinn and his IRA squad. 5. Jimmy and his friends.
On the other hand, our dataset examples tend to be highly diverse in terms of syntax as well as high-level meaning, due to the similarity penalty. We hypothesize that this is why some language priors creep into VCR, particularly in the QA â R setting: given four very distinct rationales that ostensibly justify why an answer is true, some will likely serve as better justiï¬cations than others.
Furthermore, providing additional language information (such as subtitles) to a model tends to boost performance considerably. When given access to subtitles in TVQA,25 BERT scores 70.6%, which to the best of our knowledge is a new state-of-the-art on TVQA.
In conclusion, dataset creation is highly diï¬cult, partic- ularly as there are many ways that unwanted bias can creep in during the dataset creation process. One such bias of this form includes annotation artifacts, which our analysis suggests is prevalent amongst multiple-choice VQA tasks wherein humans write the wrong endings. Our analysis also suggests Adversarial Matching can help minimize this ef- fect, even when there are strong natural biases in the under- lying textual data.
24Assuming an equal chance of choosing each incorrect ending, the re- sults for BERT on an imaginary 4-answer version of TVQA and MovieQA would be 54.5% and 42.2%, respectively.
25We prepend the subtitles that are aligned to the video clip to the begin- ning of the question, with a special token (;) in between. We trim tokens from the subtitles when the total sequence length is above 128 tokens.
16
# E. Model details
In this section, we discuss implementation details for our model, R2C.
BERT representations As mentioned in the paper, we used BERT to represent text [15]. We wanted to provide a fair comparison between our model and BERT, so we used BERT-Base for each. We tried to make our use of BERT to be as simple as possible, matching our use of it as a baseline. Given a query q and response choice r(i), we merge both into a single sequence to give to BERT. One example might look like the following:
[CLS] why is riley riding motorcycle while wearing a hospital gown ? the hospital in a hurry . [SEP] she had to leave [SEP]
in the above example, we replaced per- son tags with gender neutral names [19] (person3â riley) and replaced object detections by their class name (motorcycle1â motorcycle), to minimize domain shift between BERTâs pretrained data (Wikipedia and the Book- Corpus [94]) and VCR.
Each token in the sequence corresponds to a diï¬erent transformer unit in BERT. We can then use the later lay- ers in BERT to extract contextualized representations for the each token in the query (everything from why to ?) and the response (she to .).26 Note that this gives us a diï¬erent representation for each response choice i.
frozen BERT representations from the second-to-last layer of the Transformer.27 Intuitively, this makes sense as the representations that that layer are used for both of BERTâs pretraining tasks: next sentence predic- tion (the unit corresponding to the [CLS] token at the last layer L attends to all units at layer L â 1), as well as masked language modeling (the unit for a word at layer L looks at its hidden state at the previous layer L â 1, and uses that to attend to all other units as well). The experiments in [15] suggest that this works well, though not as well as ï¬ne- tuning BERT end-to-end or concatenating multiple layers of activations.28 The tradeoï¬, however, is that precomput- ing BERT representations lets us substantially reduce the runtime of R2C and allows us to focus on learning more powerful vision representations.
Model Hyperparameters A more detailed discussion of the hyperparameters used for R2C is as follows. We tried
26The only slight diï¬erence is that, due to the WordPiece encoding scheme, rare words (like chortled) are broken up into subword units (cho ##rt ##led). In this case, we represent that word as the average of the BERT activations of its subwords.
27Since the domain that BERT was pretrained on (Wikipedia and the BookCorpus [94]) is still quite diï¬erent from our domain, we ï¬ne-tuned BERT on the text of VCR (using the masked language modeling objec- tive, as well as next sentence prediction) for one epoch to account for the domain shift, and then extracted the representations.
28This suggests, however, that if we also ï¬ne-tuned BERT along with the rest of the model parameters, the results of R2C would be higher.
to stick to simple settings (and when possible, used similar conï¬gurations for the baselines, particularly with respect to learning rates and hidden state sizes).
⢠Our projection of image features maps a 2176 dimen- sional hidden size (2048 from ResNet50 and 128 di- mensional class embeddings) to a 512 dimensional vector.
⢠Our grounding LSTM is a single-layer bidirectional LSTM with a 1280-dimensional input size (768 from BERT and 512 from image features) and uses 256 di- mensional hidden states.
⢠Our reasoning LSTM is a two-layer bidirectional LSTM with a 1536-dimensional input size (512 from image features, and 256 for each direction in the at- tended, grounded query and the grounded answer). It also uses 256-dimensional hidden states. representation from the
reasoning LSTM, grounded answer, and attended question is maxpooled and projected to a 1024-dimensional vector. That vector is used to predict the ith logit.
⢠For all LSTMs, we initialized the hidden-hidden weights using orthogonal initialization [70], and ap- plied recurrent dropout to the LSTM input with pdrop = 0.3 [21].
⢠The Resnet50 backbone was pretrained on Imagenet [14, 30]. The parameters in the ï¬rst three blocks of ResNet were frozen. The ï¬nal block (after the RoiAlign is applied) is ï¬ne-tuned by our model. We were worried, however, that the these representations would drift and so we added an auxiliary loss to the model inspired by [48]: the 2048-dimensional repre- sentation of each object (without class embeddings) had to be predictive of that objectâs label (via a linear projection to the label space and a softmax).
⢠Often times, there are a lot of objects in the image that are not referred to by the query or response set. We ï¬ltered the objects considered by the model to include only the objects mentioned in the query and responses. We also passed in the entire image as an âobjectâ that the model could attend to in the object contextualiza- tion layer.
⢠We optimized R2C using Adam [44], with a learning rate of 2 · 10â4 and weight decay of 10â4. Our batch size was 96. We clipped the gradients to have a total L2 norm of at most 1.0. We lowered the learning rate by a factor of 2 when we noticed a plateau (validation accuracy not increasing for two epochs in a row). Each model was trained for 20 epochs, which took roughly 20 hours over 3 NVIDIA Titan X GPUs.
17
Model QA â R GloVe BERT GloVe BERT GloVe BERT Q â A Q â AR R2C 46.4 63.8 38.3 67.2 18.3 43.1 Revisited BottomUp MLB MUTAN 39.4 42.8 45.5 44.4 57.5 62.3 61.8 61.0 34.0 25.1 36.1 32.0 63.5 63.0 65.4 64.4 13.5 10.7 17.0 14.1 36.8 39.6 40.6 39.3
Table 6: VQA baselines evaluated with GloVe or BERT, evaluated on the VCR evaluation set with R2C as compari- son. While BERT helps the performance of these baselines, our model still performs the best in every setting.
# F. VQA baselines with BERT
We present additional results where baselines for VQA [5] are augmented with BERT embeddings in Table 6. We didnât include these results in the main paper, because to the best of our knowledge prior work hasnât used contex- tualized representations for VQA. (Contextualized repre- sentations might be overkill, particularly as VQA questions are short and often simple). From the results, we ï¬nd that while BERT also helps the baselines, our model R2C ben- eï¬ts even more, with a 2.5% overall boost in the holistic Q â AR setting.
# G. VCR Datasheet
A datasheet is a list of questions that accompany datasets that are released, in part so that people think hard about the phenomena in their data [23]. In this section, we provide a datasheet for VCR.
# G.1. Motivation for Dataset Creation
Why was the dataset created? The dataset was cre- ated to study the new task of Visual Commonsense Rea- soning: essentially, to have models answer challenging cognition-level questions about images and also to choose a rationale justifying each answer.
Has the dataset been used already? Yes, at the time of writing, several groups have submitted models to our leaderboard at visualcommonsense.com/leaderboard. Who funded the dataset?? VCR was funded via a va- riety of sources; the biggest sponsor was the IARPA DIVA program through D17PC00343.29
# G.2. Dataset Composition
What are the instances? Each instance contains an image, a sequence of object regions and classes, a query, and a list of response choices. Exactly one response is cor- rect. There are two sub-tasks to the dataset: in Question
29However, the views and conclusions contained herein are those of the authors and should not be interpreted as representing endorsements of IARPA, DOI/IBC, or the U.S. Government.
Answering (QâA) the query is a question and the response choices are answers. In Answer Justiï¬cation (QAâR) the query is a question and the correct answer; the responses are rationales that justify why someone would conclude that the answer is true. Both the query and the rationale refer to the objects using detection tags like person1.
How many instances are there? There are 212,923 training questions, 26,534 validation questions, and 25,263 questions. Each is associated with a four answer choices, and each question+correct answer is associated with four rationale choices.
What data does each instance consist of? The image from each instance comes from a movie, while the object detector was trained to detect objects in the COCO dataset [49]. Workers ask challenging high-level questions cover- ing a wide variety of cognition-level phenomena. Then, workers provide a rationale: one to several sentences ex- plaining how they came at their decision. The rationale points to details in the image, as well as background knowl- edge about how the world works. Each instance contains one correct answer and three incorrect counterfactual an- swers, along with one correct rationale and three incorrect rationales.
# Does the data rely on external resources? No, every- thing is included.
Are there recommended data splits or evaluation measures? We release the training and validation sets, as well as the test set without labels. For the test set, re- searchers can submit their predictions to a public leader- board. Evaluation is fairly straightforward as our task is multiple choice, but we will also release an evaluation script.
# G.3. Data Collection Process
How was the data collected? We used movie images, with objects detected using Mask RCNN [24, 29]. We col- lected the questions, answers, and rationales on Amazon Mechanical Turk.
Who was involved in the collection process and what were their roles? We (the authors) did several rounds of pilot studies, and collected data at scale on Amazon Me- chanical Turk. In the task, workers on Amazon Mechanical Turk could ask anywhere between one to three questions. For each question, they had to provide an answer, indicate its likelihood on an ordinal scale, and provide a rationale justifying why their answer is true. Workers were paid at 22 cents per question, answer, and rationale.
Over what time frame was the data collected? Au- gust to October 2018.
Does the dataset contain all possible instances? No. Visual Commonsense Inference is very broad, and we fo- cused on a limited set of (interesting) phenomena. Beyond looking at diï¬erent types of movies, or looking at the world
18
beyond still photographs, there are also diï¬erent types of inferences that we didnât cover in our work.
If the dataset is a sample, then what is the popula- tion? The population is that of movie images that were deemed interesting by our interestingness ï¬lter (having at least three object detections, of which at least two are peo- ple).
# G.4. Data Preprocessing
What preprocessing was done? The line between data preprocessing and dataset collection is blurry for VCR. After obtaining crowdsourced questions, answers, and ra- tionales, we applied Adversarial Matching, turning raw data into a multiple choice task. We also tokenized the text spans.
Was the raw data saved in addition to the cleaned data? Yes - the raw data is the correct answers (and as such is a subset of the âcleanedâ data).
Does this dataset collection/preprocessing procedure achieve the initial motivation? At this point, we think so. Our dataset is challenging for existing VQA systems, but easy for humans.
# G.5. Dataset Distribution
How is the dataset distributed? VCR is freely avail- able for research use at visualcommonsense.com.
# G.6. Legal and Ethical Considerations
Were workers told what the dataset would be used for and did they consent? Yes - the instructions said that workers answers would be used in a dataset. We tried to be as upfront as possible to workers. Workers also consented to have their responses used in this way through the Amazon Mechanical Turk Participation Agreement.
If it relates to people, could this dataset expose people to harm or legal action? No - the questions, answers, and responses donât contain personal info about the crowd workers.
If it relates to people, does it unfairly advantage or disadvantage a particular social group? Unfortunately, movie data is highly biased against women and minorities [71, 69]. Our data, deriving from movies as well as from worker elicitations [68], is no diï¬erent. For these reasons, we recommend that users do not deploy models trained on VCR in the real world.
# H. Additional qualitative results
In this section, we present additional qualitative results from R2C. Our use of attention mechanisms allow us to better gain insight into how the model arrives at its deci- In particular, the model uses the answer to attend sions.
over the question, and it uses the answer to attend over rel- evant objects in the image. Looking at the attention maps help to visualize which items in the question are important (usually, the model focuses on the second half of the ques- tion, like âcovering his faceâ in Figure 14), as well as which objects are important (usually, the objects referred to by the answer are assigned the most weight).
19
why is [persont] covering his face 2 [persont] [person] He trying to 3% protect himself from [persont] He protecting himself 60% against the heat [person] has just stuck 3% him wt object [persont] afraid that 32% he wit be
why is [persont] covering his face 2 [persont] [person] He trying to 3% protect himself from [persont] He protecting himself 60% against the heat [person] has just stuck 3% him wt object [persont] afraid that 32% he wit be
Figure 14: An example from the Q â A task. Each super-row is a response choice (four in total). The ï¬rst super-column is the question: Here, âWhy is [person1] covering his face?â and the second super-column represents the relevant objects in the image that R2C attends to. Accordingly, each block is a heatmap of the attention between each response choice and the query, as well as each response choice and the objects. The ï¬nal prediction is given by the bar graph on the left: The model is 60% conï¬dent that the right answer is b., which is correct.
20
He wearing 0% a protective ves = The building vulnerable to it He staring at the waiter who siightly âout of frame the 27% waiter has cart with pan of food that outside where the me = get why is [person] covering his face 2 He is protecting himself against. =the heat {persont}
He wearing 0% a protective ves = The building 71% vulnerable to it He staring at the waiter who siightly âout of frame the 27% waiter has cart with pan of food that outside where the me = get why is [person] covering his face 2 He is protecting himself against. =the heat {persont} his os os 02 i =
Figure 15: An example from the QA â R task. Each super-row is a response choice (four in total). The ï¬rst super-column is the query, and the second super-column holds the relevant objects (here just a single person, as no other objects were mentioned by the query or responses). Each block is a heatmap of the attention between each response choice and the query, as well as the attention between each response choice and the objects. The ï¬nal prediction is given by the bar graph on the left: The model is 71% conï¬dent that the right rationale is b., which is correct.
21
erson18 : poets] a % [person19] ¢ What is [persont3] doing 2 [person3} [person4] [person 13] -10 7% be dancing [person4] running towards the 0% other [person3] tolling 5% J [persont3] story Loz [persont 3] trying to 90 down -00 to st
erson18 : poets] a % [person19] ¢
What is [persont3] doing 2 [person3} [person4] [person 13] -10 7% be dancing [person4] running towards the 0% other [person3] tolling 5% J [persont3] story Loz [persont 3] trying to 86% 90 down -00 to st
Figure 16: An example from the Q â A task. Each super-row is a response choice (four in total). The ï¬rst super-column is the question: Here, âWhat is [person13] doing?â and the second super-column represents the relevant objects in the image that R2C attends to. Accordingly, each block is a heatmap of the attention between each response choice and the query, as well as each response choice and the objects. The ï¬nal prediction is given by the bar graph on the left: The model is 86% conï¬dent that the right answer is d., which is correct.
22
3% 86% 2% Th There emety seat {persont3} st {persont2} bending {persont3} standing up and siting down at the dinner table {persont3} walks from âopened door and deary waking towards chairs presumably where What {persont3} doing 2 {person13} to go âdown to st = oe {persont2} [persont3] mia |
Figure 17: An example from the QA â R task. Each super-row is a response choice (four in total). The ï¬rst super-column is the query, and the second super-column holds the relevant objects. Each block is a heatmap of the attention between each response choice and the query, as well as the attention between each response choice and the objects. The ï¬nal prediction is given by the bar graph on the left: The model is 86% conï¬dent that the right rationale is b., which is incorrect - the correct choice is a.
23
why is [person2] here on this dock 2 [person2] [person7] [person2] caught âcommiting [person2] the captain of the ship [person2] x = for [person2] a chile ship
why is [person2] here on this dock 2 [person2] [person7] [person2] caught âcommiting [person2] the captain of the ship [person2] x = for [person2] a chile ship
Figure 18: An example from the Q â A task. Each super-row is a response choice (four in total). The ï¬rst super-column is the question: Here, âWhy is [person2] here on this deck?â and the second super-column represents the relevant objects in the image that R2C attends to. Accordingly, each block is a heatmap of the attention between each response choice and the query, as well as each response choice and the objects. The ï¬nal prediction is given by the bar graph on the left: The model is 33% conï¬dent that the right answer is d., which is incorrect - the correct answer is correct answer is c.
24
Why is [person2] here on this. deck ~â=«?â [person] is looking for someone. {person t[person2{oerson3]person4[parson$][personé[person7[personBi[persondparson 10] He peering âeveryone head âand appears 5 to 0% 2 searching for while wearing He Jacket (persona) seaming the whieh filed with people he looking for = person Y | there appears nave been o% disaster a | Ue UE | = ae | kind people cold âand wat after disasters most people ty to find theie fiends âand family loved [persona] would nat be here normally 98% and he addressing fpersont9] = = walking âand looking = | % g s s & aise
Figure 19: An example from the QA â R task. Each super-row is a response choice (four in total). The ï¬rst super-column is the query, and the second super-column holds the relevant objects. Each block is a heatmap of the attention between each response choice and the query, as well as the attention between each response choice and the objects. The ï¬nal prediction is given by the bar graph on the left: The model is 98% conï¬dent that the right rationale is c., which is correct.
25
# References
[1] Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Anirud- dha Kembhavi. Dont just assume; look and answer: Over- In Proceed- coming priors for visual question answering. ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4971â4980, 2018. 8
[2] Alexandre Alahi, Kratarth Goel, Vignesh Ramanathan, Alexandre Robicquet, Li Fei-Fei, and Silvio Savarese. So- cial lstm: Human trajectory prediction in crowded spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 961â971, 2016. 8
[3] Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic, Ivan Laptev, and Simon Lacoste-Julien. Unsu- pervised learning from narrated instruction videos. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4575â4583, 2016. 8
[4] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, 2018. 6
[5] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425â 2433, 2015. 6, 8, 9, 15, 17
[6] Hedi Ben-younes, Remi Cadene, Matthieu Cord, and Nico- las Thome. MUTAN: Multimodal Tucker Fusion for Visual Question Answering. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017. 6
[7] Or Biran and Courtenay Cotton. Explanation and justiï¬ca- tion in machine learning: A survey. In IJCAI-17 Workshop on Explainable AI (XAI), page 8, 2017. 8
[8] Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learn- ing natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Pro- cessing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 632â642, 2015. 14, 15
[9] Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvi- jit Chattopadhyay, and Devi Parikh. Do explanations make vqa models more predictable to a human? In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1036â1042, 2018. 8
[10] Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. Enhanced lstm for natural language in- ference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1657â1668, 2017. 2, 4, 6, 14
[11] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoderâdecoder for statistical machine translation. In Pro- ceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724â1734, 2014. 6
26
[12] Ching-Yao Chuang, Jiaman Li, Antonio Torralba, and Sanja Fidler. Learning to act properly: Predicting and explaining aï¬ordances from images. In CVPR, 2018. 8
[13] Ernest Davis and Gary Marcus. Commonsense reasoning and commonsense knowledge in artiï¬cial intelligence. Com- mun. ACM, 58:92â103, 2015. 2
[14] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical im- age database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248â255. Ieee, 2009. 17
[15] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Pre-training of deep bidirectional arXiv preprint Toutanova. transformers for language understanding. arXiv:1810.04805, 2018. 2, 4, 5, 6, 7, 13, 15, 16 Bert:
[16] Jacob Devlin, Saurabh Gupta, Ross B. Girshick, Mar- garet Mitchell, and C. Lawrence Zitnick. Exploring near- CoRR, est neighbor approaches for image captioning. abs/1505.04467, 2015. 3
[17] Kiana Ehsani, Hessam Bagherinezhad, Joseph Redmon, Roozbeh Mottaghi, and Ali Farhadi. Who let the dogs out? modeling dog behavior from visual data. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 8
[18] Panna Felsen, Pulkit Agrawal, and Jitendra Malik. What will happen next? forecasting player moves in sports videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3342â3351, 2017. 8
[19] Andrew Flowers. The Most Common Unisex Names In America: Is Yours One Of Them?, June 2015. 13, 16 [20] David F. Fouhey, Weicheng Kuo, Alexei A. Efros, and Jiten- dra Malik. From lifestyle vlogs to everyday interactions. In CVPR, 2018. 3
[21] Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems, pages 1019â1027, 2016. 17
[22] Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. Allennlp: A deep se- mantic natural language processing platform. 2017. 14 [23] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jen- nifer Wortman Vaughan, Hanna Wallach, Hal Daume´e III, and Kate Crawford. Datasheets for datasets. arXiv preprint arXiv:1803.09010, 2018. 17
[24] Ross Girshick, Ilija Radosavovic, Georgia Gkioxari, Piotr Doll´ar, and Kaiming He. Detectron. https://github. com/facebookresearch/detectron, 2018. 3, 4, 10, 11, 18
[25] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Ba- tra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answer- ing. In CVPR, volume 1, page 9, 2017. 8
[26] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Ba- tra, and Devi Parikh. Making the V in VQA matter: Ele- vating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 15
[27] Agrim Gupta, Justin Johnson, Li Fei-Fei, Silvio Savarese, and Alexandre Alahi. Social gan: Socially acceptable trajec- In The IEEE tories with generative adversarial networks. Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 8
[28] Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. Annota- tion artifacts in natural language inference data. In Proc. of NAACL, 2018. 2, 4, 8, 15
[29] Kaiming He, Georgia Gkioxari, Piotr Doll´ar, and Ross B. Girshick. Mask r-cnn. 2017 IEEE International Conference on Computer Vision (ICCV), pages 2980â2988, 2017. 3, 4, 5, 10, 11, 18
[30] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceed- ings of the IEEE conference on computer vision and pattern recognition, pages 770â778, 2016. 5, 11, 17
[31] Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeï¬ Donahue, Bernt Schiele, and Trevor Darrell. Generating vi- sual explanations. In European Conference on Computer Vi- sion, pages 3â19. Springer, 2016. 8
[32] Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, and Zeynep Akata. Grounding visual explanations. European Conference on Computer Vision (ECCV), 2018. 8
[33] Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing mo- ments in video with natural language. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017. 8
[34] Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735â1780, Nov. 1997. 5
[35] Ronghang Hu, Jacob Andreas, Trevor Darrell, and Kate Saenko. Explainable neural computation via stack neural module networks. In Proceedings of the European Confer- ence on Computer Vision (ECCV), pages 53â69, 2018. 8 [36] Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, and Kate Saenko. Modeling relationships in ref- erential expressions with compositional modular networks. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 4418â4427. IEEE, 2017. 8 [37] Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, and Mar- cus Rohrbach. Multimodal explanations: Justifying deci- sions and pointing to the evidence. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 8
[38] Allan Jabri, Armand Joulin, and Laurens van der Maaten. Revisiting visual question answering baselines. In European conference on computer vision, pages 727â739. Springer, 2016. 6
[39] Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. Tgif-qa: Toward spatio-temporal reasoning in In IEEE Conference on Com- visual question answering. puter Vision and Pattern Recognition (CVPR 2017). Hon- olulu, Hawaii, pages 2680â8, 2017. 15
[40] Roy Jonker and Anton Volgenant. A shortest augmenting path algorithm for dense and sparse linear assignment prob- lems. Computing, 38(4):325â340, 1987. 4
27
[41] Jinkyu Kim, Anna Rohrbach, Trevor Darrell, John Canny, and Zeynep Akata. Textual explanations for self-driving ve- hicles. In 15th European Conference on Computer Vision, pages 577â593. Springer, 2018. 8
[42] Jin-Hwa Kim, Kyoung Woon On, Woosang Lim, Jeonghee Kim, Jung-Woo Ha, and Byoung-Tak Zhang. Hadamard In The 5th Inter- Product for Low-rank Bilinear Pooling. national Conference on Learning Representations, 2017. 6
[43] K Kim, C Nan, MO Heo, SH Choi, and BT Zhang. Pororoqa: Cartoon video series dataset for story understanding. In Pro- ceedings of NIPS 2016 Workshop on Large Scale Computer Vision System, 2016. 15
[44] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. 13, 17 [45] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalan- tidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense International Journal of Computer Vi- image annotations. sion, 123(1):32â73, 2017. 8
[46] Jie Lei, Licheng Yu, Mohit Bansal, and Tamara L Berg. Tvqa: Localized, compositional video question answering. In EMNLP, 2018. 4, 8, 9, 15
[47] Yuncheng Li, Yale Song, Liangliang Cao, Joel Tetreault, Larry Goldberg, Alejandro Jaimes, and Jiebo Luo. Tgif: A new dataset and benchmark on animated gif description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4641â4650, 2016. 15 [48] Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelli- gence, 2017. 17
[49] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740â755. Springer, 2014. 4, 8, 9, 10, 18
[50] Xiao Lin and Devi Parikh. Leveraging visual question an- swering for image-caption ranking. In European Conference on Computer Vision, pages 261â277. Springer, 2016. 6 [51] Tegan Maharaj, Nicolas Ballas, Anna Rohrbach, Aaron C Courville, and Christopher Joseph Pal. A dataset and explo- ration of models for understanding video data through ï¬ll- In Computer Vision and in-the-blank question-answering. Pattern Recognition (CVPR), 2017. 8
[52] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. Generation and comprehension of unambiguous object descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 11â20, 2016. 8
[53] Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. A corpus and evaluation framework for deeper understanding of commonsense stories. arXiv preprint arXiv:1604.01696, 2016. 15
[54] Roozbeh Mottaghi, Mohammad Rastegari, Abhinav Gupta, and Ali Farhadi. what happens if... learning to predict the eï¬ect of forces in images. In European Conference on Com- puter Vision, pages 269â285. Springer, 2016. 8
[55] James Munkres. Algorithms for the assignment and trans- portation problems. Journal of the society for industrial and applied mathematics, 5(1):32â38, 1957. 4
[56] Jeï¬rey Pennington, Richard Socher, and Christopher Man- ning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532â1543, 2014. 6, 13
[57] Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gard- ner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. In Proceedings Deep contextualized word representations. of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), volume 1, pages 2227â2237, 2018. 2, 4, 6, 14
[58] Hamed Pirsiavash, Carl Vondrick, and Antonio Torralba. In- ferring the why in images. arXiv preprint arXiv:1406.5472, 2014. 8
[59] Bryan A Plummer, Arun Mallya, Christopher M Cervantes, Julia Hockenmaier, and Svetlana Lazebnik. Phrase local- ization and visual relationship detection with comprehensive image-language cues. In Proc. ICCV, 2017. 8
[60] Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazeb- nik. Flickr30k entities: Collecting region-to-phrase corre- In Pro- spondences for richer image-to-sentence models. ceedings of the IEEE international conference on computer vision, pages 2641â2649, 2015. 8
[61] Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. Hypothesis Only Base- lines in Natural Language Inference. arXiv:1805.01042 [cs], May 2018. arXiv: 1805.01042. 2, 4
[62] Sainandan Ramakrishnan, Aishwarya Agrawal, and Stefan Lee. Overcoming language priors in visual question answer- ing with adversarial regularization. In Advances in Neural Information Processing Systems, 2018. 8
[63] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information pro- cessing systems, pages 91â99, 2015. 5
[64] Nicholas Rhinehart and Kris M Kitani. First-person activity In forecasting with online inverse reinforcement learning. Proceedings of the IEEE International Conference on Com- puter Vision, pages 3696â3705, 2017. 8
[65] Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, and Bernt Schiele. Grounding of textual phrases in images by reconstruction. In European Conference on Com- puter Vision, pages 817â834. Springer, 2016. 8
[66] Anna Rohrbach, Marcus Rohrbach, Siyu Tang, Seong Joon Generating descriptions with Oh, and Bernt Schiele. In Proceedings IEEE grounded and co-referenced people. Conference on Computer Vision and Pattern Recognition (CVPR) 2017, Piscataway, NJ, USA, July 2017. IEEE. 8 [67] Anna Rohrbach, Atousa Torabi, Marcus Rohrbach, Niket Tandon, Christopher Pal, Hugo Larochelle, Aaron Courville, and Bernt Schiele. Movie Description. International Journal of Computer Vision, 123(1):94â120, May 2017. 3, 4, 10, 11, 12
28
[68] Rachel Rudinger, Chandler May, and Benjamin Van Durme. Social bias in elicited natural language inferences. In Pro- ceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 74â79, 2017. 15, 18
[69] Maarten Sap, Marcella Cindy Prasettio, Ari Holtzman, Han- nah Rashkin, and Yejin Choi. Connotation frames of power and agency in modern ï¬lms. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Process- ing, pages 2329â2334, 2017. 15, 18
[70] Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep arXiv preprint arXiv:1312.6120, linear neural networks. 2013. 17
[71] Alexandra Schoï¬eld and Leo Mehr. Gender-distinguishing features in ï¬lm dialogue. In Proceedings of the Fifth Work- shop on Computational Linguistics for Literature, pages 32â 39, 2016. 15, 18
[72] Roy Schwartz, Maarten Sap, Ioannis Konstas, Li Zilles, Yejin Choi, and Noah A. Smith. The eï¬ect of diï¬erent writ- ing tasks on linguistic style: A case study of the ROC story cloze task. In Proc. of CoNLL, 2017. 2, 4, 15
[73] Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys- Dowmunt, Samuel L¨aubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. Nematus: a toolkit for In Proceedings of the Soft- neural machine translation. ware Demonstrations of the 15th Conference of the Euro- pean Chapter of the Association for Computational Linguis- tics, pages 65â68, Valencia, Spain, April 2017. Association for Computational Linguistics. 14
[74] Krishna Kumar Singh, Kayvon Fatahalian, and Alexei A Efros. Krishnacam: Using a longitudinal, single-person, egocentric dataset for scene understanding tasks. In Applica- tions of Computer Vision (WACV), 2016 IEEE Winter Con- ference on, pages 1â9. IEEE, 2016. 8
[75] Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. Movieqa: Understanding stories in movies through question- In Proceedings of the IEEE conference on answering. computer vision and pattern recognition, pages 4631â4640, 2016. 4, 8, 9, 15
[76] Antonio Torralba and Alexei A Efros. Unbiased look at In Computer Vision and Pattern Recogni- dataset bias. tion (CVPR), 2011 IEEE Conference on, pages 1521â1528. IEEE, 2011. 3, 15
[77] Paul Vicol, Makarand Tapaswi, Lluis Castrejon, and Sanja Fidler. Moviegraphs: Towards understanding human-centric In IEEE Conference on Computer situations from videos. Vision and Pattern Recognition (CVPR), 2018. 8
[78] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. An- ticipating visual representations from unlabeled video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 98â106, 2016. 8
[79] Misha Wagner, Hector Basevi, Rakshith Shetty, Wenbin Li, Mateusz Malinowski, Mario Fritz, and Ales Leonardis. An- swering visual what-if questions: From actions to predicted scene descriptions. In Visual Learning and Embodied Agents
in Simulation Environments Workshop at European Confer- ence on Computer Vision, 2018. 8
[80] Peng Wang, Qi Wu, Chunhua Shen, Anton van den Hengel, and Anthony R. Dick. Fvqa: Fact-based visual question an- swering. IEEE transactions on pattern analysis and machine intelligence, 2017. 8
[81] John Wieting, Jonathan Mallinson, and Kevin Gimpel. Learning paraphrastic sentence embeddings from back- translated bitext. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 274â285, 2017. 14
[82] Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112â1122. Association for Compu- tational Linguistics, 2018. 14
[83] Qi Wu, Peng Wang, Chunhua Shen, Anthony R. Dick, and Anton van den Hengel. Ask me anything: Free-form vi- sual question answering based on knowledge from external sources. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4622â4630, 2016. 8 [84] Tian Ye, Xiaolong Wang, James Davidson, and Abhinav Gupta. Interpretable intuitive physics model. In European Conference on Computer Vision, pages 89â105. Springer, 2018. 8
[85] Yuya Yoshikawa, Jiaqing Lin, and Akikazu Takeuchi. Stair actions: A video dataset of everyday home actions. arXiv preprint arXiv:1804.04326, 2018. 8
[86] Licheng Yu, Eunbyung Park, Alexander C. Berg, and Tamara L. Berg. Visual Madlibs: Fill in the blank Im- age Generation and Question Answering. arXiv:1506.00278 [cs], May 2015. arXiv: 1506.00278. 8
[87] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. Modeling context in referring expres- sions. In European Conference on Computer Vision, pages 69â85. Springer, 2016. 8
[88] Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L Berg. A joint speakerlistener-reinforcer model for referring expres- sions. In Computer Vision and Pattern Recognition (CVPR), volume 2, 2017. 8
[89] Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. Swag: A large-scale adversarial dataset for grounded com- In Proceedings of the 2018 Confer- monsense inference. ence on Empirical Methods in Natural Language Processing (EMNLP), 2018. 8
[90] Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. Men also like shopping: Reducing gen- der bias ampliï¬cation using corpus-level constraints. In Pro- ceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979â2989, 2017. 15
[91] Luowei Zhou, Chenliang Xu, and Jason J. Corso. To- wards automatic learning of procedures from web instruc- tional videos. In AAAI, 2018. 8
[92] Yipin Zhou and Tamara L Berg. Temporal perception and prediction in ego-centric video. In Proceedings of the IEEE
29
International Conference on Computer Vision, pages 4498â 4506, 2015. 8
[93] Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. Visual7W: Grounded Question Answering in Images. In IEEE Conference on Computer Vision and Pattern Recog- nition, 2016. 8, 9
[94] Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual ex- planations by watching movies and reading books. In arXiv preprint arXiv:1506.06724, 2015. 16 | {
"id": "1804.04326"
} |
1811.08008 | End-to-End Retrieval in Continuous Space | Most text-based information retrieval (IR) systems index objects by words or
phrases. These discrete systems have been augmented by models that use
embeddings to measure similarity in continuous space. But continuous-space
models are typically used just to re-rank the top candidates. We consider the
problem of end-to-end continuous retrieval, where standard approximate nearest
neighbor (ANN) search replaces the usual discrete inverted index, and rely
entirely on distances between learned embeddings. By training simple models
specifically for retrieval, with an appropriate model architecture, we improve
on a discrete baseline by 8% and 26% (MAP) on two similar-question retrieval
tasks. We also discuss the problem of evaluation for retrieval systems, and
show how to modify existing pairwise similarity datasets for this purpose. | http://arxiv.org/pdf/1811.08008 | Daniel Gillick, Alessandro Presta, Gaurav Singh Tomar | cs.IR, cs.CL, cs.LG | null | null | cs.IR | 20181119 | 20181119 | 2018
8 1 0 2
# v o N 9 1
] R I . s c [ 1 v 8 0 0 8 0 . 1 1 8 1 : v i X r a
# End-to-End Retrieval in Continuous Space
# Daniel Gillick, Alessandro Presta, Gaurav Singh Tomar Google AI {dgillick, apresta, gtomar}@google.com
# Abstract
Most text-based information retrieval (IR) sys- tems index objects by words or phrases. These discrete systems have been augmented by models that use embeddings to measure sim- ilarity in continuous space. But continuous- space models are typically used just to re-rank the top candidates. We consider the prob- lem of end-to-end continuous retrieval, where standard approximate nearest neighbor (ANN) search replaces the usual discrete inverted in- dex, and rely entirely on distances between learned embeddings. By training simple mod- els speciï¬cally for retrieval, with an appropri- ate model architecture, we improve on a dis- crete baseline by 8% and 26% (MAP) on two similar-question retrieval tasks. We also dis- cuss the problem of evaluation for retrieval systems, and show how to modify existing pairwise similarity datasets for this purpose.
# 1 Introduction
Nearly 30 years ago, Deerwester et al. (1990) de- scribed the shortcomings of the standard retrieval systems that are still widely used today: âThe problem is that users want to retrieve on the ba- sis of conceptual content, and individual words provide unreliable evidence about the conceptual topic or meaning of a document.â As a solution, they introduced Latent Semantic Indexing, us- ing Singular Value Decomposition over word co- occurrences to encode (or embed) a piece of text as a dense low-dimensional vector rather than a sparse high-dimensional vector of word indicators. This work opened the ï¬eld of representation learn- ing (Bengio et al., 2013), but did not address the issue of efï¬cient retrieval from the learned space. Weâll call the overall task â constructing dense representations and retrieving neighbors â contin- uous retrieval by way of contrast with discrete retrieval that uses an inverted index to leverage
sparse representations. In principle, continuous re- trieval has clear beneï¬ts: improved recall (uncon- strained by speciï¬c word choice), more granular similarity scoring, learned relationships between query and candidates, and the possibility of re- trieval across modalities.
However, models for learning text representa- tions have found application in IR by re-ranking the top candidates proposed by a discrete re- trieval system (Huang et al., 2013; Shen et al., 2014; Palangi et al., 2016; Dos Santos et al., 2015; To the best of our knowl- Lei et al., 2016). edge, there have been no previous comparisons of end-to-end retrieval systems (Onal et al., 2017). A model intended for re-ranking differs from a in two important model ways. First, a re-ranking model has access to the raw representations of both query and can- didate and can thus learn complex interactions (Parikh et al., 2016; Gong et al., 2017), whereas a retrieval model must encode queries and can- didates independently to allow for fast neighbor look-up. Second, re-rankers can focus modeling power on the boundary encoderscases proposed by the discrete retrieval systems, while retrieval mod- els must also perform well with random pairs.
The primary goal of this paper is to show that using standard ANN search, simple mod- els trained for the purpose of continuous retrieval can substantially outperform discrete retrieval sys- tems. We show evidence for choosing a negative sampling method which we call in-batch sampled softmax, and evaluate a variety of baselines and trained models on two pairwise datasets that we modify for the purpose of retrieval evaluation.
# 2 Dual Encoders
Neural network models for learning distance func- tions date back to early work on signature veriï¬ca-
tion (Bromley et al., 1994), later extended to face veriï¬cation (Chopra et al., 2005). This work and its descendants (Yih et al., 2011; Hu et al., 2014, refer to the models as siamese networks etc.) because two similar objects are encoded by two copies of the same network (all parameters are shared). The Wsabie model (Weston et al., 2010), intended for classiï¬cation with large label sets, learns embeddings for the inputs and outputs sep- arately. The StarSpace model (Wu et al., 2017) extends the idea of learned embeddings to more data types. More generally, we refer to the class of models in which pairs of items are encoded in a shared space, as Dual Encoders. This is a modular architecture with the following components:
Encoder: An encoder is any learnable function f (X) that takes an item X as input and returns a d-dimensional real-valued encoding vector. Here, we focus on neural network functions f .
Similarity Function: A similarity function sim(E1, E2) takes two encodings of the same dimension, and out- puts a score in [0, 1]. Similarity functions can be ar- bitrarily complex, including neural networks that learn interactions between encodings, but to enable nearest neighbor search, we use cosine similarity, the standard for retrieval (Manning et al., 2008).
Dual Encoder: A dual encoder has the form g(X1, X2) = sim(f1(X1), f2(X2)) where f1, f2 are two possibly identical encoders. We additionally apply a learned afï¬ne transform, αg(·, ·) + β, which scales the simi- larity so it can be treated as a logit during training.
Note that while we train dual encoders for each pairwise dataset, including scaling param- eters α, β, retrieval requires only the individual trained encoders: the candidate items are encoded by the candidate encoder and indexed off-line; at inference time, the query is encoded by the query encoder and neighbors are retrieved from the can- didate space according to cosine distance.
In our experiments, we train a very simple form of dual encoder for similar question re- trieval. Much like the Paragram-Phrase setup (Wieting et al., 2015), we use a single question encoder that represents the input with an average over word embeddings. Thus, the question en- coder parameters are just the set of learned em- beddings.
Some of our experiments use a multi-task setup, with up to 3 tasks. While there is a separate dual encoder for each task, they all share the same ques- tion encoder, so only the scaling parameters are task-speciï¬c. In multi-task training, we compute a task-speciï¬c loss, then take a weighted average to
produce the overall loss; the weights are uniform in all experiments.
# 2.1 Loss functions
Much of the relevant prior work on representa- tion learning has focused on pairwise similarity (Hu et al., 2014; Wieting et al., 2015; Arora et al., 2017; Conneau et al., 2017), sometimes with the goal of re-ranking retrieval candidates.
If the training data consists of positive and neg- ative example pairs, it is standard to minimize the logistic (cross-entropy) loss between true labels and model predictions.
just of But often, positive pairs. setting (Mikolov et al., 2013) or in Language Model train- ing (Jozefowicz et al., 2016), the negative exam- ples are implied: while there are a number of words that could reasonably ï¬t with some context, a random word, on average, will be a poor sub- stitute for the observed word. These models are trained with a softmax loss, where the negatives are all non-observed words in the vocabulary. For efï¬ciency, the denominator can be approximated with a sample from the vocabulary. In the more general dual encoder case, though, the set of neg- ative examples may not be enumerable. Indeed, if both inputs are sentences (or questions), negative sampling is a necessary approximation.
We consider a few different loss functions (in addition to the standard cross-entropy loss for binary-valued labels), each of which implies a dif- ferent negative sampling strategy. All the strate- gies make use of items in the batch as a source of random negatives. A batch includes B posi- tive pairs of items which have been encoded by their respective encoders. We apply the similarity 1, Ej function to all pairs (Ei 2) to form a similarity matrix M where the diagonal contains positive ex- amples and the off-diagonal contains random neg- ative examples.
In-batch Cross-Entropy We form a cross-entropy loss term for each element in M , with positives on the diag- onal and negatives on the off-diagonal, and return the average.
In-batch Sampled Softmax We form a softmax loss term for each row in M , where row i has a positive label on column i (corresponding to the diagonal), and return the average. This was suggested by Henderson et al. (2017).
In-batch Triplet We form a triplet loss term for each row in M that maximizes the margin between the positive element and the highest scoring negative element in the
row: max(0, δâs++sâ similar to the loss used by Wieting et al. (2015).
# 2.2 Training
We train all our models using mini-batch Gradient Descent with the Momentum optimizer and a ï¬xed learning rate of 0.01. Unless otherwise noted, the batch size is 1000 and the loss is in-batch sampled softmax. We use a lowercased unigram vocabu- lary and 300-dimensional embeddings, initialized randomly. We use no explicit regularization (like dropout), but rely on early stopping (based on tun- ing set evaluation) to avoid over-ï¬tting. In-batch precision@1 (accuracy computed over each row of the similarity matrix M ), averaged over the rows in M , is our tuning metric, since this is a reason- able proxy for precision@1 computed over the full set of candidates, which in turn represents retrieval performance.
# 3 Experimental Setup
# 3.1 Evaluating end-to-end retrieval
Neither pairwise similarity tasks nor re-ranking tasks are useful for evaluating end-to-end retrieval: the pairs of items are usually sourced using some heuristic or existing retrieval system. The result- ing test data distribution is biased towards pairs selected by that system. Such test sets may fail to discriminate among models that have drastically different performance on random pairs, and it is particularly important that retrieval models be ro- bust to all sorts of noisy candidates.
An ofï¬ine retrieval task consists of (1) a set of test queries, (2) a set of candidate items (sufï¬- ciently large so as to be realistic), and (3) a set of (query, candidate) pairs labeled with relevance judgments. However, for any reasonable size can- didate set, itâs infeasible to have all pairs annotated by a human. As a result, all retrieval tasks are nec- essarily incomplete: only a small subset of rele- vant candidates are labeled, so we assume that all unlabeled candidates are not relevant. This issue is discussed at length by Buckley and Voorhees (2004), who show that the Mean Average Preci- sion (MAP) metric computed on an incomplete evaluation set correlates reasonably well with the MAP metric computed on a (signiï¬cantly more) complete version of that evaluation set.
Computing full MAP on such a dataset can be computationally expensive (for each query, all Instead, we only candidates need to be scored).
consider the top K results and compute MAP@K based on the following deï¬nition:
M AP @K = X qiâQi 1 Ri k X j=1 i rj pj i
where Qi is the set of test queries, Ri is the number of known relevant candidates for Qi, pj i is precision@j for qi, and rj i is 1 if the jth result is relevant to qi, 0 otherwise.
# 3.2 Approximate nearest neighbor search
While the problem of nearest neighbor search (Indyk and Motwani, 1998; Gionis et al., 1999) is central to continuous retrieval, weâre glossing over it here for two reasons. First, a simple quantiza- tion method (Guo et al., 2016) works quite well for the tasks we consider; second, since we are more interested in analyzing modeling issues, we use exhaustive search to avoid any confounding ef- fects linked to the choice of approximate search algorithm. Moreover, we found that approximate search is nearly as accurate as exhaustive search in our retrieval tasks: MAP@100 for approximate search declined no more than 0.4% even as we in- creased the candidate set size from 20k up to 1M.
# 3.3 Constructing retrieval tasks
We use a simple approach to turn a conventional similarity scoring or ranking task into an incom- plete retrieval task. Given a test set with labeled pairs, we ï¬rst build the graph induced by posi- tive pairs. Next, we compute the transitive closure of the graph, which may yield additional positive pairs. Now, each element of a positive pair is con- sidered a test query, and its neighbors in the tran- sitive closure graph are the known positive results for that query. Finally, the set of candidates con- sists of all items found in the test set (either in a positive or a negative pair).
We apply this method to the Quora ques- tion pairs dataset1 and the AskUbuntu dataset2 (Dos Santos et al., 2015; Lei et al., 2016) to pro- duce new retrieval evaluation sets. We use only the question titles in the AskUbuntu data, and leave the more complex problem of modeling (of- ten much longer) question bodies to future work. In our experiments, we apply our trained encoder
1https://data.quora.com/First-Quora-Dataset-Release- Question-Pairs
# 2https://github.com/taolei87/askubuntu
Quora AskUbuntu 139306 9218 19081 2.55 Positive training pairs Test queries Candidates Relevant candidates/query 13010 1224 4023 11.73
Table 1: End-to-end retrieval task statistics.
models to all pairs of (query, candidate) and evalu- ate MAP@100 on the resulting scores. While this means that our results are not comparable to pre- vious reported work using these pairwise datasets, we provide results from a variety of baseline sys- tems.
The AskUbuntu training set includes just posi- tive pairs, so negative sampling is required. How- ever, the Quora training set includes positive and negative examples (in roughly a 2:1 ratio). This allows us to compare standard cross-entropy loss with our negative sampling strategies.
Since we are interested in a training setting where a single model works well for both tasks, we also experiment with the Paralex dataset3, 18 million question-paraphrase pairs scraped from WikiAnswers.
# 3.4 Baselines
To facilitate meaningful comparison, we start with a few common baselines. First, because each can- didate set includes all the test queries, an âiden- tityâ baseline simply retrieves the exact test query as the only matched candidate. Second, we use TFIDF and the BM25 algorithm (Robertson et al., 2009) for discrete retrieval, standard baselines for retrieval comparisons (Hoogeveen et al., 2015).
We also compare a variety of averaged word embeddings baselines, starting with uniform av- eraging of 300-dimensional pretrained word2vec embeddings. Next, following Arora et al. (2017), we take a weighted average of pretrained embed- dings using Inverse Document Frequency (IDF)4, and try 3 different settings for pre-training: stan- dard word2vec, word2vec trained with the Paralex dataset (closer to the question domain), and Glove (Pennington et al., 2014) trained from Web (Com- mon Crawl) data. We also try embedding each question using a 300-dimensional Skip-Thought model (Kiros et al., 2015).
Note that in all cases, the score for a query- candidate pair is computed using cosine distance
3http://knowitall.cs.washington.edu/paralex 4We found no advantage by using the SIF weighting or
PCA subtraction proposed by Arora et al.
Training data Quora AskU AVG 30.2 45.9 56.4 77.2 60.1 83.7 53.4 78.4 59.3 85.4 59.3 85.2 59.8 86.0 46.4 73.3 62.4 87.6 63.1 90.4 65.2 84.5 65.2 88.3 90.5 63.9 66.7 87.5 67.7 89.9 Model Identity TFIDF Okapi BM25 Avg-word2vec IDF-word2vec IDF-GloVe IDF-word2vec Skip-Thought Dual Encoder Dual Encoder Dual Encoder AskUbuntu (A) Dual Encoder Dual Encoder Dual Encoder Dual Encoder - - - News News Web Paralex Books Paralex (P) Quora (Q) 14.4 35.6 36.5 28.4 33.1 33.4 33.5 19.6 37.3 35.8 45.9 42.2 37.3 46.0 45.5 Q + A P + Q P + A P + Q + A
Table 2: MAP@100 retrieval results.
between the respective encodings.
# 4 Analysis of Results
Table 2 shows MAP@100 results on the Quora and AskUbuntu retrieval tasks. First, we observe that while IDF-weighting the pretrained embed- dings is useful, this is still not clearly better than the BM25 baseline. We show this is not a domain issue by training word2vec directly with Paralex data. However, the dual encoder trained with Par- alex data is signiï¬cantly better, and now improves on BM25 on both evaluations. Next, we are able to improve results quite a bit more by using in- domain training data. And ï¬nally, we get the best overall results by training a single multi-task dual encoder that combines data from all three tasks (note that we train the Paralex-only dual encoder to convergence before adding the multi-task loss). In Section 2.1, we enumerated a number of loss functions using different negative sampling strate- gies. Most importantly, we found that training a Quora-only model with standard cross-entropy (using the provided positive and negative train- ing examples) was substantially worse than train- ing with any of the negative sampling strategies: 88.3 vs. 90.4 MAP@100. Among sampling strate- gies, in-batch sampled softmax loss gave the best retrieval results and converged much faster than in-batch cross-entropy, though in-batch triplet loss was fairly similar.
Given that we are using the batch as a source for random negatives, the batch size becomes im- portant. In fact, we found that the larger the batch, the better the retrieval results. Batches of size 2, 10, 100, and 1000 resulted in 82.8, 87.9, 89.2, and 90.4 MAP@100 on Quora.
# 5 Conclusion
In this work, we distinguished between pairwise scoring tasks (including re-ranking) and retrieval tasks. We described a general dual encoder ab- straction for training arbitrary complex distance functions, and a speciï¬c simple setting with neg- ative sampling that improves substantially over standard retrieval baselines.
Our results begin to show that end-to-end re- trieval is a viable alternative to discrete retrieval. Future work will include:
1. Extending these experiments to larger tasks, with many more retrieval candidates.
2. Adding a scoring or re-ranking model after retrieval to show overall improvements to ex- isting systems.
3. Exploiting the Dual Encoder framework pre- sented here to handle multiple data modali- ties.
# References
Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. In International Conference on Learning Representations.
Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798â1828.
Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S¨ackinger, and Roopak Shah. 1994. Signature ver- iï¬cation using aâ siameseâ time delay neural net- work. In Advances in Neural Information Process- ing Systems, pages 737â744.
Chris Buckley and Ellen M Voorhees. 2004. Retrieval In Pro- evaluation with incomplete information. ceedings of the 27th annual international ACM SI- GIR conference on Research and development in in- formation retrieval, pages 25â32. ACM.
Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face veriï¬cation. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 539â546. IEEE.
Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from arXiv preprint natural language inference data. arXiv:1705.02364.
Scott Deerwester, Susan T Dumais, George W Fur- nas, Thomas K Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Jour- nal of the American society for information science, 41(6):391.
Cicero Dos Santos, Luciano Barbosa, Dasha Bog- danova, and Bianca Zadrozny. 2015. Learning hy- brid representations to retrieve semantically equiva- lent questions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics, volume 2, pages 694â699.
Aristides Gionis, Piotr Indyk, Rajeev Motwani, et al. Similarity search in high dimensions via 1999. hashing. In Vldb, volume 99, pages 518â529.
Yichen Gong, Heng Luo, and Jian Zhang. 2017. Natu- ral language inference over interaction space. arXiv preprint arXiv:1709.04348.
Ruiqi Guo, Sanjiv Kumar, Krzysztof Choromanski, and David Simcha. 2016. Quantization based fast inner product search. In Artiï¬cial Intelligence and Statis- tics, pages 482â490.
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, San- jiv Kumar, Balint Miklos, and Ray Kurzweil. 2017.
Efï¬cient natural language response suggestion for smart reply. arXiv preprint arXiv:1705.00652.
Doris Hoogeveen, Karin M Verspoor, and Timothy Baldwin. 2015. Cqadupstack: A benchmark data set for community question-answering research. In the 20th Australasian Document Proceedings of Computing Symposium, page 3. ACM.
Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architec- tures for matching natural language sentences. In Advances in neural information processing systems, pages 2042â2050.
Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on information & knowl- edge management, pages 2333â2338. ACM.
Piotr Indyk and Rajeev Motwani. 1998. Approximate nearest neighbors: towards removing the curse of di- mensionality. In Proceedings of the thirtieth annual ACM symposium on Theory of computing, pages 604â613. ACM.
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Exploring arXiv preprint Shazeer, and Yonghui Wu. 2016. the limits of language modeling. arXiv:1602.02410.
Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294â3302.
Tao Lei, Hrishikesh Joshi, Regina Barzilay, Tommi Jaakkola, Katerina Tymoshenko, Alessandro Mos- chitti, and Llu´ıs M`arquez. 2016. Semi-supervised question retrieval with gated convolutions. In Pro- ceedings of NAACL-HLT, pages 1279â1289.
Christopher D Manning, Prabhakar Raghavan, and Hinrich Sch¨utze. 2008. Introduction to information retrieval. Cambridge University Press.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- In Advances in neural information processing ity. systems, pages 3111â3119.
Kezban Dilek Onal, Ye Zhang, Ismail Sengor Al- tingovde, Md Mustaï¬zur Rahman, Pinar Karagoz, Alex Braylan, Brandon Dang, Heng-Lu Chang, Henna Kim, Quinten McNamara, et al. 2017. Neural information retrieval: At the end of the early years. Information Retrieval Journal, pages 1â72.
Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, and Rabab Ward. 2016. Deep sentence embedding using
long short-term memory networks: Analysis and ap- plication to information retrieval. IEEE/ACM Trans- actions on Audio, Speech and Language Processing (TASLP), 24(4):694â707.
Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249â2255.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532â1543.
Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and be- yond. Foundations and Trends®) in Information Re- trieval, 3(4):333-389.
Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gr´egoire Mesnil. 2014. A latent semantic model with convolutional-pooling structure for informa- tion retrieval. In Proceedings of the 23rd ACM In- ternational Conference on Conference on Informa- tion and Knowledge Management, pages 101â110. ACM.
Jason Weston, Samy Bengio, and Nicolas Usunier. 2010. Large scale image annotation: learning to rank with joint word-image embeddings. Machine learning, 81(1):21â35.
John Wieting, Mohit Bansal, Kevin Gimpel, and Towards universal para- arXiv preprint Karen Livescu. 2015. phrastic sentence embeddings. arXiv:1511.08198.
L. Wu, A. Fisch, S. Chopra, K. Adams, A. Bordes, and J. Weston. 2017. Starspace: Embed all the things! arXiv preprint arXiv:1709.03856.
Wen-tau Yih, Kristina Toutanova, John C Platt, and Christopher Meek. 2011. Learning discriminative projections for text similarity measures. In Proceed- ings of the Fifteenth Conference on Computational Natural Language Learning, pages 247â256. Asso- ciation for Computational Linguistics. | {
"id": "1709.04348"
} |
1811.03600 | Measuring the Effects of Data Parallelism on Neural Network Training | Recent hardware developments have dramatically increased the scale of data
parallelism available for neural network training. Among the simplest ways to
harness next-generation hardware is to increase the batch size in standard
mini-batch neural network training algorithms. In this work, we aim to
experimentally characterize the effects of increasing the batch size on
training time, as measured by the number of steps necessary to reach a goal
out-of-sample error. We study how this relationship varies with the training
algorithm, model, and data set, and find extremely large variation between
workloads. Along the way, we show that disagreements in the literature on how
batch size affects model quality can largely be explained by differences in
metaparameter tuning and compute budgets at different batch sizes. We find no
evidence that larger batch sizes degrade out-of-sample performance. Finally, we
discuss the implications of our results on efforts to train neural networks
much faster in the future. Our experimental data is publicly available as a
database of 71,638,836 loss measurements taken over the course of training for
168,160 individual models across 35 workloads. | http://arxiv.org/pdf/1811.03600 | Christopher J. Shallue, Jaehoon Lee, Joseph Antognini, Jascha Sohl-Dickstein, Roy Frostig, George E. Dahl | cs.LG, stat.ML | null | Journal of Machine Learning Research 20 (2019) 1-49 | cs.LG | 20181108 | 20190719 | 9 1 0 2
l u J 9 1 ] G L . s c [
3 v 0 0 6 3 0 . 1 1 8 1 : v i X r a
Journal of Machine Learning Research 20 (2019) 1-49
Submitted 11/18; Published 7/19
# Measuring the Eï¬ects of Data Parallelism on Neural Network Training
# Christopher J. Shallueâ Jaehoon Leeâ â Joseph Antogniniâ
Jascha Sohl-Dickstein
Roy Frostig
# George E. Dahl
shallue@google.com jaehlee@google.com joe.antognini@gmail.com jaschasd@google.com frostig@google.com gdahl@google.com
Google Brain 1600 Amphiteatre Parkway Mountain View, CA, 94043, USA
Editor: Rob Fergus
Abstract Recent hardware developments have dramatically increased the scale of data parallelism available for neural network training. Among the simplest ways to harness next-generation hardware is to increase the batch size in standard mini-batch neural network training al- gorithms. In this work, we aim to experimentally characterize the eï¬ects of increasing the batch size on training time, as measured by the number of steps necessary to reach a goal out-of-sample error. We study how this relationship varies with the training algorithm, model, and data set, and ï¬nd extremely large variation between workloads. Along the way, we show that disagreements in the literature on how batch size aï¬ects model quality can largely be explained by diï¬erences in metaparameter tuning and compute budgets at diï¬erent batch sizes. We ï¬nd no evidence that larger batch sizes degrade out-of-sample performance. Finally, we discuss the implications of our results on eï¬orts to train neu- ral networks much faster in the future. Our experimental data is publicly available as a database of 71,638,836 loss measurements taken over the course of training for 168,160 individual models across 35 workloads. Keywords: deep learning
# 1. Introduction
Neural networks have become highly eï¬ective at a wide variety of prediction tasks, in- cluding image classiï¬cation, machine translation, and speech recognition. The dramatic improvements in predictive performance over the past decade have partly been driven by advances in hardware for neural network training, which have enabled larger models to be trained on larger datasets than ever before. However, although modern GPUs and custom
â. Both authors contributed equally. â . Work done as a member of the Google AI Residency program (g.co/airesidency).
©2019 Christopher J. Shallue, Jaehoon Lee, Joseph Antognini, Jascha Sohl-Dickstein, Roy Frostig, and George E. Dahl.
License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v20/18-789.html.
Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
accelerators have made training neural networks orders of magnitude faster, training time still limits both the predictive performance of these techniques and how widely they can be applied. For many important problems, the best models are still improving at the end of training because practitioners cannot aï¬ord to wait until the performance saturates. In extreme cases, training must end before completing a single pass over the data (e.g. Anil et al., 2018). Techniques that speed up neural network training can signiï¬cantly beneï¬t many important application areas. Faster training can facilitate dramatic improvements in model quality by allowing practitioners to train on more data (Hestness et al., 2017), and by decreasing the experiment iteration time, allowing researchers to try new ideas and conï¬gurations more rapidly. Faster training can also allow neural networks to be deployed in settings where models have to be updated frequently, for instance when new models have to be produced when training data get added or removed.
Data parallelism is a straightforward and popular way to accelerate neural network training. For our purposes, data parallelism refers to distributing training examples across multiple processors to compute gradient updates (or higher-order derivative information) and then aggregating these locally computed updates. As long as the training objective decomposes into a sum over training examples, data parallelism is model-agnostic and ap- In contrast, the maximum degree of model plicable to any neural network architecture. parallelism (distributing parameters and computation across diï¬erent processors for the same training examples) depends on the model size and structure. Although data paral- lelism can be simpler to implement, ultimately, large scale systems should consider all types of parallelism at their disposal. In this work, we focus on the costs and beneï¬ts of data parallelism in the synchronous training setting.
Hardware development is trending towards increasing capacity for data parallelism in neural network training. Specialized systems using GPUs or custom ASICs (e.g. Jouppi et al., 2017) combined with high-performance interconnect technology are unlocking un- precedented scales of data parallelism where the costs and beneï¬ts have not yet been well studied. On the one hand, if data parallelism can provide a signiï¬cant speedup at the limits of todayâs systems, we should build much bigger systems. On the other hand, if additional data parallelism comes with minimal beneï¬ts or signiï¬cant costs, we might consider design- ing systems to maximize serial execution speed, exploit other types of parallelism, or even prioritize separate design goals such as power use or cost.
There is considerable debate in the literature about the costs and beneï¬ts of data paral- lelism in neural network training and several papers take seemingly contradictory positions. Some authors contend that large-scale data parallelism is harmful in a variety of ways, while others contend that it is beneï¬cial. The range of conjectures, suggestive empirical results, and folk knowledge seems to cover most of the available hypothesis space. Answering these questions deï¬nitively has only recently become important (as increasing amounts of data parallelism have become practical), so it is perhaps unsurprising that the literature remains equivocal, especially in the absence of suï¬ciently comprehensive experimental data.
In this work, we attempt to provide the most rigorous and extensive experimental study on the eï¬ects of data parallelism on neural network training to date. In order to achieve this goal, we consider realistic workloads up to the current limits of data parallelism. We try to avoid making assumptions about how the optimal metaparameters vary as a function of batch size. Finally, in order to guide future work, we consider any remaining limitations in
2
Measuring the Effects of Data Parallelism on Neural Network Training
our methodology, and we discuss what we see as the most interesting unanswered questions that arise from our experiments.
# 1.1 Scope
We restrict our attention to variants of mini-batch stochastic gradient descent (SGD), which are the dominant algorithms for training neural networks. These algorithms iteratively update the modelâs parameters using an estimate of the gradient of the training objective. The gradient is estimated at each step using a diï¬erent subset, or (mini-) batch, of training examples. See Section 2.2 for a more detailed description of these algorithms. A data- parallel implementation computes gradients for diï¬erent training examples in each batch in parallel, and so, in the context of mini-batch SGD and its variants, we equate the batch size with the amount of data parallelism.1 We restrict our attention to synchronous SGD because of its popularity and advantages over asynchronous SGD (Chen et al., 2016).
Practitioners are primarily concerned with out-of-sample error and the cost they pay to achieve that error. Cost can be measured in a variety of ways, including training time and hardware costs. Training time can be decomposed into number of steps multiplied by average time per step, and hardware cost into number of steps multiplied by average hardware cost per step. The per-step time and hardware costs depend on the practitionerâs hardware, but the number of training steps is hardware-agnostic and can be used to compute the total costs for any hardware given its per-step costs. Furthermore, in an idealized data- parallel system where the communication overhead between processors is negligible, training time depends only on the number of training steps (and not the batch size) because the time per step is independent of the number of examples processed. Indeed, this scenario is realistic today in systems like TPU pods2, where there are a range of batch sizes for which the time per step is almost constant. Since we are primarily concerned with training time, we focus on number of training steps as our main measure of training cost.
An alternative hardware-agnostic measure of training cost is the number of training examples processed, or equivalently the number of passes (epochs) over the training data. This measure is suitable when the per-step costs are proportional to the number of examples processed (e.g. hardware costs proportional to the number of ï¬oating point operations). However, the number of epochs is not a suitable measure of training time in a data-parallel systemâit is possible to reduce training time by using a larger batch size and processing more epochs of training data, provided the number of training steps decreases.
In light of practitionersâ primary concerns of out-of-sample error and the resources needed to achieve it, we believe the following questions are the most important to study to understand the costs and beneï¬ts of data parallelism with mini-batch SGD and its variants:
1. What is the relationship between batch size and number of training steps to reach a goal out-of-sample error?
2. What governs this relationship?
3. Do large batch sizes incur a cost in out-of-sample error?
1. Mini-batch SGD can be implemented in a variety of ways, including data-serially, but a data-parallel implementation is always possible given appropriate hardware.
2. https://www.blog.google/products/google-cloud/google-cloud-offer-tpus-machine-learning/.
3
Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
# 1.2 Contributions of This Work
1. We show that the relationship between batch size and number of training steps to reach a goal out-of-sample error has the same characteristic form across six diï¬erent families of neural network, three training algorithms, and seven data sets.
Speciï¬cally, for each workload (model, training algorithm, and data set), increasing the batch size initially decreases the required number of training steps proportionally, but eventually there are diminishing returns until ï¬nally increasing the batch size no longer changes the required number of training steps. To the best of our knowledge, we are the ï¬rst to experimentally validate this relationship across models, training algorithms, and data sets while independently tuning the learning rate, momentum, and learning rate schedule (where applicable) for each batch size. Unlike prior work that made strong assumptions about these metaparameters, our results reveal a uni- versal relationship that holds across all workloads we considered, across diï¬erent error goals, and when considering either training error or out-of-sample error.
2. We show that the maximum useful batch size varies signiï¬cantly between workloads and depends on properties of the model, training algorithm, and data set. Speciï¬cally, we show that:
(a) SGD with momentum (as well as Nesterov momentum) can make use of much larger batch sizes than plain SGD, suggesting future work to study the batch size scaling properties of other algorithms.
(b) Some models allow training to scale to much larger batch sizes than others. We include experimental data on the relationship between various model properties and the maximum useful batch size, demonstrating that the relationship is not as simple as one might hope from previous work (e.g. wider models do not always scale better to larger batch sizes).
(c) The eï¬ect of the data set on the maximum useful batch size tends to be smaller than the eï¬ects of the model and training algorithm, and does not depend on data set size in a consistent way.
3. We show that the optimal values of training metaparameters do not consistently follow any simple relationships with the batch size. In particular, popular learning rate heuristicsâsuch as linearly scaling the learning rate with the batch sizeâ do not hold across all problems or across all batch sizes.
4. Finally, by reviewing the speciï¬cs of the experimental protocols used in prior work, we at least partially reconcile conï¬icting stances in the literature on whether increasing the batch size degrades model quality. Speciï¬cally, we show that assumptions about computational budgets and the procedures for selecting metaparameters at diï¬erent batch sizes can explain many of the disagreements in the literature. We ï¬nd no evi- dence that increasing the batch size necessarily degrades model quality, but additional regularization techniques may become important at larger batch sizes.
4
Measuring the Effects of Data Parallelism on Neural Network Training
# 1.3 Experimental Data
We release our raw experimental data for any further analysis by the research community.3 Our database contains 454 combinations of workload (model, data set, training algorithm) and batch size, each of which is associated with a metaparameter search space and a set of models trained with diï¬erent conï¬gurations sampled from the search space. In total, our data contains 71,638,836 loss measurements taken over the course of training for 168,160 individual models. Together, these measurements make up the training curves of all of the individual models we trained, and can be used to reproduce all plots in this paper.4
# 2. Setup and Background
In this section we set up the basic deï¬nitions and background concepts used throughout the paper.
# 2.1 Learning
A data distribution is a probability distribution D over a data domain Z. For example, we might consider a supervised learning task over a domain Z = X Ã Y, where X is the set of 32-by-32-pixel color images and Y is the set of possible labels denoting what appears in the image. A training set z1, . . . , zn â Z is a collection of examples from the data domain, conventionally assumed to be drawn i.i.d. from the data distribution D.
A machine learning model is a function that, given parameters 6 from some set © C R®, and given a data point z ⬠Z, produces a prediction whose quality is measured by a differentiable non-negative scalar-valued loss function.> We denote by ¢(6;z) the loss of a prediction made by the model, under parameters 0, on the data point z. We denote by L the out-of-sample loss or expected loss:
L(0) = & [0(:2)), (1) z~D
and by ËL the empirical average loss under a data set S = (z1, . . . , zn):
16;8) =~ 5021) (2) i=l
When S is the training set, we call L the average training loss. We will say that the data source D, loss ¢, and model with parameter set O together specify a learning task, in which our aim is to find parameters @ that achieve low out-of-sample loss (Equation 1), while given access only to n training examples. A common approach is to find parameters of low average training loss (Equation 2) as an estimate of the out-of-sample loss (Shalev-Shwartz and Ben-David, 2014).
When minimizing average training loss ËL, it is common to add regularization penalties to the objective function. For a diï¬erentiable penalty R : Î â R+, regularization weight
3. https://github.com/google-research/google-research/tree/master/batch_science 4. https://colab.research.google.com/github/google-research/google-research/blob/master/
batch_science/reproduce_paper_plots.ipynb
5. Technically, the loss need only be sub-diï¬erentiable. Extending our setup to this end is straightforward.
5
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
λ > 0, and training set S, the training objective might be J(θ) = ËL(θ; S) + λR(θ).
(3)
In practice, we often approach a task by replacing its loss with another that is more amenable to training. For instance, in supervised classiï¬cation, we might be tasked with learning under the 0/1 loss, which is an indicator of whether a prediction is correct (e.g. matches a ground-truth label), but we train by considering instead a surrogate loss (e.g. the logistic loss) that is more amenable to continuous optimization. When the surrogate loss bounds the original, achieving low loss under the surrogate implies low loss under the original. To distinguish the two, we say error to describe the original loss (e.g. 0/1), and we save loss to refer to the surrogate used in training.
# 2.2 Algorithms
The dominant algorithms for training neural networks are based on mini-batch stochastic gradient descent (SGD, Robbins and Monro, 1951; Kiefer et al., 1952; Rumelhart et al., 1986; Bottou and Bousquet, 2008; LeCun et al., 2015). Given an initial point θ0 â Î, mini-batch SGD attempts to decrease the objective J via the sequence of iterates
θt â θtâ1 â ηtg(θtâ1; Bt),
where each Bt is a random subset of training examples, the sequence {ηt} of positive scalars is called the learning rate, and where, for any θ â Î and B â S,
(0; B) = +S Ve(6; 2) + VR). (4) [B| 2â¬B
When the examples B are a uniformly random subset of training examples, g(θ; B) forms an unbiased estimate of the gradient of the objective J that we call a stochastic gradient. In our larger-scale experiments, when we sample subsequent batches Bt, we actually follow the common practice of cycling through permutations of the training set (Shamir, 2016). The result of mini-batch SGD can be any of the iterates θt for which we estimate that L(θt) is low using a validation data set.
Variants of SGD commonly used with neural networks include SGD with momentum (Polyak, 1964; Rumelhart et al., 1986; Sutskever et al., 2013), Nesterov momentum (Nes- terov, 1983; Sutskever et al., 2013), RMSProp (Hinton et al., 2012), and Adam (Kingma and Ba, 2015). All of these optimization procedures, or optimizers, interact with the training examples only by repeatedly computing stochastic gradients (Equation 4), so they support the same notion of batch size that we equate with the scale of data parallelism. In this work, we focus on the SGD, SGD with momentum, and Nesterov momentum optimizers. The latter two optimizers are conï¬gured by a learning rate {ηt} and a scalar γ â (0, 1) that we call momentum. They deï¬ne the iterates6
SGD with momentum vt+1 â γvt + g(θt; Bt) θt+1 â θt â ηtvt+1 Nesterov momentum
vt+1 â γvt + g(θt; Bt) θt+1 â θt â ηtg(θt; Bt) â ηtγvt+1,
6. These rules take slightly diï¬erent forms across the literature and across library implementations. We present and use the update rules from the MomentumOptimizer class in TensorFlow (Abadi et al., 2016).
6
# Measuring the Effects of Data Parallelism on Neural Network Training
given v0 = 0 and an initial θ0. Note that plain SGD can be recovered from either optimizer by taking γ = 0. The outcome of using these optimizers should therefore be no worse if than SGD, in any experiment, the momentum γ is tuned across values including zero.
If we run SGD with momentum under a constant learning rate ηt = η, then, at a given iteration t, the algorithm computes
t t u 441 = Ot â NVE41 = OM â a> Vuy1 = 9% â n> Soy 9653 B,). u=0 u=0 s=0
For any fixed 7 ⬠{0,...,t}, the coefficient accompanying the stochastic gradient g(6,; B,) in the above update is nrvinr y"â7. We define the effective learning rate, nt as the value of this coefficient at the end of training (¢ = T), in the limit of a large number of training steps (Tâ â co, while 7 is held fixed):
T eff : UT 1) = | = . = im Yom =
Put intuitively, ηeï¬ captures the contribution of a given mini-batch gradient to the parameter values at the end of training.
# 2.3 Additional Terminology in Experiments
A data-parallel implementation of mini-batch SGD (or one of its variants) computes the summands of Equation 4 in parallel and then synchronizes to coordinate their summation. The models and algorithms in our experiments are modiï¬able by what we call meta- parameters.7 These include architectural choices, such as the number of layers in a neural network, and training parameters, such as learning rates {ηt} and regularization weights λ. When we use the term model, we typically assume that all architectural metaparameters have been set. In our experiments, we tune the training metaparameters by selecting the values that yield the best performance on a validation set. We use the term workload to jointly refer to a data set, model, and training algorithm.
# 3. Related Work
In this section we review prior work related to our three main questions from Section 1.1. First we review studies that considered the relationship between batch size and number of training steps (Questions 1 and 2), and then we review studies that considered the eï¬ects of batch size on solution quality (Question 3).
# 3.1 Steps to Reach a Desired Out-Of-Sample Error
We broadly categorize the related work on this topic as either analytical or empirical in nature.
7. Sometimes called âhyperparameters,â but we prefer a diï¬erent name so as not to clash with the notion of hyperparameters in Bayesian statistics.
7
Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
# 3.1.1 Analytical Studies
Convergence upper bounds from the theory of stochastic (convex) optimization can be spe- cialized to involve terms dependent on batch size, so in this sense they comprise basic related work. These upper bounds arise from worst-case analysis, and moreover make convexity and regularity assumptions that are technically violated in neural network training, so whether they predict the actual observed behavior of our experimental workloads is an empirical question in its own right.
Given a sequence of examples drawn i.i.d. from a data source, an upper bound on the performance of SGD applied to L-Lipschitz convex losses is (Hazan, 2016; Shalev-Shwartz and Ben-David, 2014)
J(6r) â I*<O ( 7) , (5)
for any batch size. Here, J is the objective function, J* is its value at the global optimum, and 67 denotes the final output of the algorithm supposing it took T iterations. Meanwhile, when losses are convex and the objective is H-smooth, accelerated parallel mini-batch SGD enjoys the bound (Lan, 2012)
J(6r) â I*<O (7 a (6)
where b is the batch size.
Compared to sequential processing without batching (i.e. a batch size of one), the bounds Equation 5 and Equation 6 oï¬er two extremes, respectively:
1. No beneï¬t: Increasing the batch size b does not change the number of steps to convergence, as per Equation 5.
â
T b dominates the bound. Increasing the batch size b by a multiplicative factor decreases the number of steps T to a given achievable objective value by the same factor.
In other words, under these simpliï¬cations, batching cannot hurt the asymptotic guarantees of steps to convergence, but it could be wasteful of examples. The two extremes imply radically diï¬erent guidance for practitioners, so the critical task of establishing a relationship between batch size and number of training steps remains one to resolve experimentally.
A few recent papers proposed analytical notions of a critical batch size: a point at which a transition occurs from a b-fold beneï¬t to no beneï¬t. Under assumptions including convexity, Ma et al. (2018) derived such a critical batch size, and argued that a batch size of one is optimal for minimizing the number of training epochs required to reach a given target error. Under diï¬erent assumptions, Yin et al. (2018) established a critical batch size and a pathological loss function that together exhibit a transition from a b-fold beneï¬t to no beneï¬t. Although they ran experiments with neural networks, their experiments were designed to investigate the eï¬ect of data redundancy and do not provide enough
8. Not necessarily the T th iterate, which may diï¬er from θT if the algorithm averages its iterates.
8
# Measuring the Effects of Data Parallelism on Neural Network Training
information to reveal the empirical relationship between batch size and number of training steps. Focusing on linear least-squares regression, Jain et al. (2018) also derived a threshold batch size in terms of the operator norm of the objectiveâs Hessian and a constant from a fourth-moment bound on example inputs.
To our knowledge, in all previous work that analytically derived a critical batch size, the thresholds deï¬ned are either (i) parameter-dependent, or (ii) speciï¬c to linear least-squares regression. A critical batch size that depends on model parameters can change over the course of optimization; it is not a problem-wide threshold that can be estimated eï¬ciently a priori. Focusing on least-squares has issues as well: while it sheds intuitive light on how batching aï¬ects stochastic optimization locally, the quantities deï¬ned inherently cannot generalize to the non-linear optimization setting of neural network training, both because the objectiveâs Hessian is not constant across the space of parameters as it is in a quadratic problem, and more broadly because it is unclear whether the Hessian of the objective is still the correct analogue to consider.
# 3.1.2 Empirical Studies
Wilson and Martinez (2003) investigated the relationship between batch size and training speed for plain mini-batch SGD. They found that a simple fully connected neural network took more epochs to converge with larger batch sizes on a data set of 20,000 examples, and also that using a batch size equal to the size of the training set took more epochs to converge than a batch size of one on several small data sets of size ⤠600. However, their experimental protocol and assumptions limit the conclusions we can draw from their results. One issue is that training time was measured to diï¬erent out-of-sample errors for diï¬erent batch sizes on the same data set. To compare training speed fairly, the error goal should be ï¬xed across all training runs being compared. Additionally, only four learning rates were tried for each data set, but quite often the best learning rate was at one of the two extremes and it appeared that a better learning rate might be found outside of the four possibilities allowed. Finally, despite the contention of the authors, their results do not imply slower training with larger batch sizes in data-parallel training: for the most part, their larger batch size experiments took fewer training steps than the corresponding batch size one experiments.
In the last few years, increasingly specialized computing systems have spurred practi- tioners to try much larger batch sizes than ever before, while increasingly promising results have driven hardware designers to create systems capable of even more data parallelism. Chen et al. (2016) used a pool of synchronized worker machines to increase the eï¬ective batch size of mini-batch SGD. They demonstrated speedups in both wall time and steps to convergence for an Inception model (Szegedy et al., 2016) on ImageNet (Russakovsky et al., 2015) by scaling the eï¬ective batch size from 1,600 to 6,400. More recently, Goyal et al. (2017) showed that the number of training epochs could be held constant across a range of batch sizes to achieve the same validation error for ResNet-50 (He et al., 2016a) on Ima- geNet. Holding the number of training epochs constant is equivalent to scaling the number of training steps inversely with the batch size, and this reduction in training steps with increasing batch size produced nearly proportional wall time speedups on their hardware. Although this hints at a b-fold beneï¬t regime in which increasing the batch size reduces the
9
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
number of training steps by the same factor, the authors did not attempt to minimize the number of training steps (or epochs) required to reach the goal at each batch size separately. It is unclear whether any of the batch sizes that achieved the goal could do so in fewer steps than given, or how many steps the other batch sizes would have needed to achieve the same error goal.
Two studies performed concurrently with this work also investigated the relationship between batch size and training speed for neural networks. Chen et al. (2018) provide experimental evidence of a problem-dependent critical batch size after which a b-fold beneï¬t is no longer achieved for plain mini-batch SGD. They contend that wider and shallower networks have larger critical batch sizes, and while their empirical results are equivocal for this particular claim, they show that the threshold batch size can depend on aspects of both the data set and the model. Additionally, Golmant et al. (2018) studied how three previously proposed heuristics for adjusting the learning rate as a function of batch size (linear scaling, square root scaling, and no scaling) aï¬ect the number of training steps required to reach a particular result. They found that if the learning rate is tuned for the the smallest batch size only, all three of these common scaling techniques break down for larger batch sizes and result in either (i) divergent training, or (ii) training that cannot reach the error goal within a ï¬xed number of training epochs. They also describe a basic relationship between batch size and training steps to a ï¬xed error goal, which is comprised of three regions: b-fold beneï¬t initially, then diminishing returns, and ï¬nally no beneï¬t for all batch sizes greater than a maximum useful batch size. However, their results are inconclusive because (i) not all model and data set pairs exhibit this basic relationship, (ii) it does not appear consistently across error goals, and (iii) the relationship is primarily evident in training error but not out-of-sample error. These inconsistent results may be due to suboptimal pre-determined learning rates arising from the scaling rules, especially at larger batch sizes. Finally, they also found that the maximum useful batch size depends on aspects of the model and the data set type, but not on the data set size. Since all their experiments use plain mini-batch SGD, their results are unable to reveal any eï¬ects from the choice of optimizer and might not generalize to other popular optimizers, such as SGD with momentum.
# 3.2 Solution Quality
The literature contains some seemingly conï¬icting claims about the eï¬ects of batch size on solution quality (out-of-sample error at the conclusion of training). Primarily, the debate centers on whether increasing the batch size incurs a cost in solution quality. Keskar et al. (2017) argue that large batch9 training converges to so-called âsharpâ minima with worse generalization properties. However, Dinh et al. (2017) show that a minimum with favorable generalization properties can be made, through reparameterization, arbitrarily sharp in the same sense. Le Cun et al. (1998) suggest that a batch size of one can result in better solutions because the noisier updates allow for the possibility of escaping from local minima in a descent algorithm. However, they also note that we usually stop training long before
9. The term âlarge batchâ is inherently ambiguous, and in this case accompanies experiments in Keskar et al. (2017) that only compare two absolute batch sizes per data set, rather than charting out a curve to its apparent extremes.
10
# Measuring the Effects of Data Parallelism on Neural Network Training
reaching any sort of critical point. Hoï¬er et al. (2017) argue that increasing the batch size need not degrade out-of-sample error at all, assuming training has gone on long enough. Goyal et al. (2017), among others, tested batch sizes larger than those used in Keskar et al. (2017) without noticing any reduction in solution quality. Still, their results with yet larger batch sizes do not rule out the existence of a more sudden degradation once the batch size is large enough. Meanwhile, Goodfellow et al. (2016) state that small batches can provide a regularization eï¬ect such that they result in the best observed out-of-sample error, although in this case other regularization techniques might serve equally well.
Alas, the best possible out-of-sample error for a particular model and data set cannot be measured unconditionally due to practical limits on wall time and hardware resources, as well as practical limits on our ability to tune optimization metaparameters (e.g. the learning rate). An empirical study can only hope to measure solution quality subject to the budgets allowed for each model experiment, potentially with caveats due to limitations of the speciï¬c procedures for selecting the metaparameters. To the best of our knowledge, all published results handle the training budget issue in exactly one of three ways: by ignoring budgets (train to convergence, which is not always possible); by using a step budget (restrict the number of gradient descent updates performed); or by using an epoch budget (restrict number of training examples processed).10 Furthermore, while some published results tune the learning rate anew for each batch size, others tune for only a single batch size and use a preordained heuristic to set the learning rate for the remaining batch sizes (the most common heuristics are constant, square root, and linear learning rate scaling rules). Tuning metaparameters at a single batch size and then heuristically adjusting them for others could clearly create a systematic advantage for trials at batch sizes near to the one tuned. All in all, the conclusions we can draw from previous studies depend on the budgets they assume and on how they select metaparameters across batch sizes. The following subsections attempt an investigation of their experimental procedures to this end.
# 3.2.1 Studies That Ignore Budgets
All studies in this section compared solution quality for diï¬erent batch sizes after deeming their models to have converged. They determined training stopping time by using either manual inspection, convergence heuristics, or ï¬xed compute budgets that they considered large enough to guarantee convergence.11
Keskar et al. (2017) trained several neural network architectures on MNIST and CIFAR- 10, each with two batch sizes, using the Adam optimizer and without changing the learning rate between batch sizes. They found that the larger batch size consistently achieved worse out-of-sample error after training error had ceased to improve. However, all models used batch normalization (Ioï¬e and Szegedy, 2015) and presumably computed the batch nor-
10. Of course, there are budgets in between an epoch budget and a step budget that might allow the possibility of trading oï¬ time, computation, and/or solution quality. For example, it may be possible to increase the number of training epochs and still take fewer steps to reach the same quality solution. However, we are not aware of work that emphasizes these budgets.
11. As discussed further in Section 4.8, we ï¬nd that millions of training steps for small batch sizes, or thousands of epochs for large batch sizes, are required to saturate performance even for data sets as small and simple as MNIST. In our experiments, this corresponded to more than 25 hours of wall-time for each metaparameter conï¬guration.
11
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
malization statistics using the full batch size. For a fair comparison between batch sizes, batch normalization statistics should be computed over the same number of examples or else the training objective diï¬ers between batch sizes (Goyal et al., 2017). Indeed, Hoï¬er et al. (2017) found that computing batch normalization statistics over larger batches can degrade solution quality, which suggests an alternative explanation for the results of Keskar et al. (2017). Moreover, Keskar et al. (2017) reported that data augmentation eliminated the diï¬erence in solution quality between small and large batch experiments.
Smith and Le (2018) trained a small neural network on just 1,000 examples sampled from MNIST with two diï¬erent batch sizes, using SGD with momentum and without changing the learning rate between batch sizes. They observed that the larger batch size overï¬t more than the small batch size resulting in worse out-of-sample error, but this gap was mitigated by applying L2 regularization (Smith and Le, 2018, ï¬gures 3 and 8). They also compared a wider range of batch sizes in experiments that either (i) used a step budget without changing the learning rate for each batch size (Smith and Le, 2018, ï¬gures 4 and 6), or (ii) varied the learning rate and used a step budget that was a function of the learning rate (Smith and Le, 2018, ï¬gure 5). Instead, we focus on the case where the learning rate and batch size are chosen independently.
Breuel (2015a,b) trained a variety of neural network architectures on MNIST with a range of batch sizes, using the SGD and SGD with momentum optimizers with a range of learning rates and momentum values. They found that batch size had no eï¬ect on solution quality for LSTM networks (Breuel, 2015a), but found that larger batch sizes achieved worse solutions for fully connected and convolutional networks, and that the scale of the eï¬ect depended on the activation function in the hidden and output layers (Breuel, 2015b).
Finally, Chen et al. (2016) observed no diï¬erence in solution quality when scaling the batch size from 1,600 to 6,400 for an Inception model on ImageNet when using the RMSProp optimizer and a heuristic to set the learning rate for each batch size.
# 3.2.2 Studies with Step Budgets
Hoï¬er et al. (2017) trained neural networks with two diï¬erent batch sizes on several image data sets. They found that, by computing batch normalization statistics over a ï¬xed number of examples per iteration (âghost batch normalizationâ), and by scaling the learning rate with the square root of the batch size instead of some other heuristic, the solution quality arising from the larger batch size was as good as or better than the smaller batch size. However, the largest batch size used was 4,096, which does not rule out an eï¬ect appearing at still larger batch sizes, as suggested by the work of Goyal et al. (2017). Moreover, it remains open whether their proposed learning rate heuristic extends to arbitrarily large batch sizes, or whether it eventually breaks down for batch sizes suï¬ciently far from the base batch size.
# 3.2.3 Studies with Epoch Budgets
An epoch budget corresponds to ï¬xing the total number of per-example gradient compu- tations, but, in an idealized data-parallel implementation of SGD, it also corresponds to a step (or even wall time) budget that scales inversely with the batch size. With an epoch budget, a larger batch size can only achieve the same solution quality as a smaller batch
12
# Measuring the Effects of Data Parallelism on Neural Network Training
size if it achieves perfect scaling eï¬ciency (a b-fold reduction in steps from increasing the batch size, as described in Section 3.1.1).
Masters and Luschi (2018) show that after a critical batch size depending on the model and data set, solution quality degrades with increasing batch size when using a ï¬xed epoch budget. Their results eï¬ectively show a limited region of b-fold beneï¬t for those model and data set pairs when trained with SGD, although they did not investigate whether this critical batch size depends on the optimizer used, and they did not consider more than one epoch budget for each problem. We reproduced a subset of their experiments and discuss them in Section 5.
Goyal et al. (2017) recently popularized a linear learning rate scaling heuristic for train- ing the ResNet-50 model using diï¬erent batch sizes. Using this heuristic, a 90 epoch budget, and SGD with momentum without adjusting or tuning the momentum, they increased the batch size from 64 to 8,192 with no loss in accuracy. However, their learning rate heuristic broke down for even larger batch sizes. Inspired by these results, a sequence of follow-up studies applied additional techniques to further increase the batch size while still achieving the same accuracy and using the same 90 epoch budget. These follow-on studies (Codreanu et al., 2017; You et al., 2017; Akiba et al., 2017) conï¬rm that the best solution quality for a given batch size will also depend on the exact optimization techniques used.
There are several additional papers (Lin et al., 2018; Devarakonda et al., 2017; Golmant et al., 2018) with experiments relevant to solution quality that used an epoch budget, tuned the learning rate for the smallest batch size, and then used a heuristic to choose the learning rate for all larger batch sizes. For instance, Devarakonda et al. (2017) and Lin et al. (2018) used linear learning rate scaling and Golmant et al. (2018) tried constant, square root, and linear learning rate scaling heuristics. All of them concluded that small batch sizes have superior solution quality to large batch sizes with a ï¬xed epoch budget, for various notions of âsmallâ and âlarge.â This could just as easily be an artifact of the learning rate heuristics, and a possible alternative conclusion is that these heuristics are limited (as heuristics often are).
# 4. Experiments and Results
The primary quantity we measure is the number of steps needed to ï¬rst reach a desired out-of-sample error, or steps to result. To measure steps to result, we used seven image and text data sets with training set sizes ranging from 45,000 to 26 billion examples. Table 1 summarizes these data sets and Appendix A provides the full details. We chose six families of neural network to train on these data sets. For MNIST and Fashion MNIST, we chose a simple fully connected neural network and a simple convolutional neural network (CNN). For CIFAR-10, we chose the ResNet-8 model without batch normalization, partly to compare our results to Masters and Luschi (2018), and partly to have a version of ResNet without batch normalization. For ImageNet, we chose ResNet-50, which uses batch normalization and residual connections, and VGG-11, which uses neither. For Open Images, we chose ResNet-50. For LM1B, we chose the Transformer model and an LSTM model. For Common Crawl, we chose the Transformer model. Table 2 summarizes these models and Appendix B provides the full details.
13
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
Type Task Data Set MNIST Image Classiï¬cation Fashion MNIST Image Classiï¬cation Image Classiï¬cation CIFAR-10 Image Classiï¬cation ImageNet Image Classiï¬cation (multi-label) Open Images Language modeling LM1B Text Language modeling Common Crawl Text Evaluation Metric Size Classiï¬cation error 55,000 Classiï¬cation error 55,000 Classiï¬cation error 45,000 Classiï¬cation error 1,281,167 Average precision 4,526,492 30,301,028 Cross entropy error â¼25.8 billion Cross entropy error
Table 1: Summary of data sets. Size refers to the number of examples in the training set, which we measure in sentences for text data sets. See Appendix A for full details.
Model Class Sizes Optimizers Data Sets Fully Connected Various Simple CNN SGD SGD Momentum Nesterov mom. SGD Nesterov mom. Nesterov mom. MNIST MNIST Fashion MNIST Base Narrow Wide ResNet-8 ResNet CIFAR-10 ResNet-50 ImageNet Open Images ImageNet LM1B Common crawl VGG Transformer VGG-11 Base Narrow and shallow Momentum Shallow Wide â Nesterov mom. SGD Nesterov mom. LSTM Nesterov mom. LM1B Learning rate schedule Constant Constant Linear decay Linear decay Linear decay Constant Constant
Table 2: Summary of models. See Appendix B for full details.
Measuring steps to result requires a particular value of out-of-sample error to be chosen as the goal. Ideally, we would select the best achievable error for each task and model, but since validation error is noisy, the best error is sometimes obtained unreliably. Moreover, for some workloads, the validation error continues to improve steadily beyond the maximum practical training time. Therefore, we generally tried to select the best validation error that we could achieve reliably within a practical training time.
Table 2 also shows the learning rate schedule we used for each model and data set. Learning rate schedules are often used to accelerate neural network training, but ï¬nding the best schedule is an optimization problem in its own right (Wu et al., 2018). Instead, researchers typically choose from a range of common learning rate functions based on vali- dation performance and individual preference. While most schedules decay the learning rate monotonically over training, some researchers also âwarm-upâ the learning rate at the start of training (e.g. He et al., 2016a), particularly when training with large batch sizes (Goyal et al., 2017). We ran experiments with both constant learning rates and with learning rate decay. We used decay for ResNet-8, ResNet-50, and VGG-11, which signiï¬cantly reduced
14
# Measuring the Effects of Data Parallelism on Neural Network Training
training time for those models. We selected our decay function by running an extensive set of experiments with ResNet-50 on ImageNet (see Appendix C for details). We chose linear decay because it performed at least as well as all other schedules we tried, while also being the simplest and requiring only two additional metaparameters. In experiments that used linear decay, we speciï¬ed metaparameters (η0, α, T ) such that the learning rate decayed linearly from η0 to ηT = αη0. That is, the learning rate at step t is given by
ηt = η0 â (1 â α)η0 αη0 t T if t ⤠T, if t > T.
Steps to result depends on the training metaparameters, and, for a given task and model, each batch size might have a diï¬erent metaparameter conï¬guration that minimizes steps to result. In all experiments, we independently tuned the metaparameters at each batch size, including the initial learning rate η0 and, when learning rate decay was used, the decay schedule (α, T ). Also, unless otherwise speciï¬ed, we used the Nesterov momentum optimizer (Sutskever et al., 2013) and tuned the momentum γ.12 Tuning anew for each batch size is extremely important since otherwise we would not be measuring steps to result as a function of batch size, rather we would be measuring steps to result as a function of batch size and the speciï¬c values of the learning rate and other metaparameters. We used quasi- random search (Bousquet et al., 2017) to tune the metaparameters with equal budgets of non-divergent13 trials for diï¬erent batch sizes. We selected metaparameter search spaces by hand based on preliminary experiments. The exact number of non-divergent trials needed to produce stable results depends on the search space, but 100 trials seemed to suï¬ce in our experiments.14 If the optimal trial occurred near the boundary of the search space, or if the goal validation error was not achieved within the search space, we repeated the search with a new search space. We measured steps to result for each batch size by selecting the metaparameter trial that reached the goal validation error in the fewest number of steps.
# 4.1 Steps to Result Depends on Batch Size in a Similar Way Across Problems
To get a sense of the basic empirical relationship, we measured the number of steps required to reach a goal validation error as a function of batch size across several diï¬erent data sets and models (Figure 1). In all cases, as the batch size grows, there is an initial period of perfect scaling (b-fold beneï¬t, indicated with a dashed line on the plots) where the steps needed to achieve the error goal halves for each doubling of the batch size. However, for all problems, this is followed by a region of diminishing returns that eventually leads to a regime of maximal data parallelism where additional parallelism provides no beneï¬t whatsoever. In other words, for any given problem and without making strong assumptions about learning rates or other optimizer parameters, we can achieve both extremes suggested by theory (see Section 3.1.1). A priori, it is not obvious that every workload in our ex- periments should exhibit perfect scaling at the smallest batch sizes instead of immediately showing diminishing returns.
12. For LSTM for LM1B, we used a ï¬xed value of γ = 0.99. We chose this value based on initial experiments and validated that tuning γ did not signiï¬cantly aï¬ect the results for batch sizes 256, 1,024, or 4,096.
13. We discarded trials with a divergent training loss, which occurred when the learning rate was too high. 14. We used 100 non-divergent trials for all experiments except Transformer Shallow on LM1B with SGD, Transformer on Common Crawl, and LSTM on LM1B, for which we used 50 trials each.
15
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
1 i i i> 251-93 ge 97 ae oT gs 91 Batch Size
pis Steps to Reach 0.01 Validation Error Steps N ° 2° 22 2% 25 38 Qi 212 214 D16 Batch Size
# Steps
(b) Simple CNN on Fashion MNIST (e) ResNet-50 on Open Images
# (a) Simple CNN on MNIST
gai Steps to Reach 0.25 Validation Error Batch Size
# (d) ResNet-50 on ImageNet
720 Steps to Reach 3.9 Validation Cross Entropy Steps N. 10 SEES aE gr pig itgitgigls Batch Size
# Batch Size
Steps to Reach 0.3 Validation Error
SEP ELE LE LELCLELT TCHS Se]
# Steps
# Batch Size
# (c) ResNet-8 on CIFAR-10
1720 Steps to Reach 3.9 Validation Cross Entropy
# Batch Size
# (f) Transformer on LM1B
Steps to Reach 3.9 Validation Cross Entropy DD DTT OF patio high bphpis Batch Size
(g) Transformer on Common Crawl (h) VGG-11 on ImageNet
# (i) LSTM on LM1B
Figure 1: The relationship between steps to result and batch size has the same charac- teristic form for all problems. In all cases, as the batch size grows, there is an initial period of perfect scaling (indicated with a dashed line) where the steps needed to achieve the error goal halves for each doubling of the batch size. Then there is a region of diminishing returns that eventually leads to a region of maximal data parallelism where additional parallelism provides no beneï¬t whatsoever. AP denotes average precision (see Appendix A).
16
# Measuring the Effects of Data Parallelism on Neural Network Training
(a) Simple CNN on MNIST (b) Transformer on LM1B (c) ResNet-50 on ImageNet
Figure 2: Steps-to-result plots have a similar form for diï¬erent (nearby) performance goals. The transition points between the three regions (perfect scaling, diminishing returns, and maximal data parallelism) are nearly the same.
# 4.2 Validating Our Measurement Protocol
If the curves in Figure 1 were very sensitive to the goal validation error, then measuring the steps needed to reach our particular choice of the goal would not be a meaningful proxy for training speed. For small changes in the goal validation error, we do not care about vertical shifts as long as the transition points between the three scaling regions remain relatively unchanged. Figure 2 shows that varying the error goal only vertically shifts the steps- to-result curve, at least for modest variations centered around a good absolute validation error. Furthermore, although we ultimately care about out-of-sample error, if our plots looked very diï¬erent when measuring the steps needed to reach a particular training error, then we would need to include both curves when presenting our results. However, switching to training error does not change the plots much at all (see Figure 12 in the Appendix).
Our experiments depend on extensive metaparameter tuning for the learning rate, mo- mentum, and, where applicable, the learning rate schedule. For each experiment, we veriï¬ed our metaparameter search space by checking that the optimal trial was not too close to a boundary of the space. See Figures 13 and 14 in the Appendix for examples of how we veriï¬ed our search spaces.
# 4.3 Some Models Can Exploit Much Larger Batch Sizes Than Others
We investigated whether some models can make more use of larger batches than others by experimenting with diï¬erent models while keeping the data set and optimizer ï¬xed. We explored this question in two ways: (i) by testing completely diï¬erent model architectures on the same data set, and (ii) by varying the size (width and depth) of a model within a particular model family. Since the absolute number of steps needed to reach a goal validation error depends on the model, the steps to result vs. batch size curves for each model generally appear at diï¬erent vertical oï¬sets from each other. Since we primarily care about the locations of the perfect scaling, diminishing returns, and maximal data parallelism regions, we normalized the y-axis of each plot by dividing by the number of steps needed to reach the goal for a particular batch size and data set. This normalization corresponds to a vertical shift of each curve (on log-scale plots), and makes it easier to compare diï¬erent models. Appendix D contains all plots in this section without the y-axis normalized.
17
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
Steps to Reach 0.03 Validation Error 2° aa FC-1024-1024-1024 @-© Simple CNN =1) N Steps / (Steps at B: N r zu i i i _ 2092 9F 95 98 p10 p12 514 p16 Batch Size
=64) Steps / (Steps at B: Steps to Reach 0.35 Validation Error aa ResNet-50 @-e VGG-11 2eL 5 Beeld : NS 4 2° - 2° 2° i i po a7 98 99 Ql0 gi 512 513 514 515 716 Batch Size
(a) Fully Connected vs Simple CNN on MNIST
(b) ResNet-50 vs VGG-11 on ImageNet
Steps to Reach 3.9 Validation Cross Entropy
21 ° aA Transformer 2 eo LSTM i ro om 27 * a 2°b a S25 a2 o 26 ay MS 7 27h S 4 S 2eL ii i 1a i 27 95 25 27 28 29 210 gM giz 913 94 915 Batch Size
Steps to Reach 0.03 Validation Error
=2) Steps / (Steps at B: >t 20 #* FC-1024 ail mem FC-128-128-128 2 VW FC-256-256-256 can ee FC-512-512-512 2 Ana FC-1024-1024-1024 24 @-© FC-2048-2048-2048 35 36 37 28 N 2° »: 2720 â, 2 H i i is 90529 585855 Batch Size
# (c) Transformer vs LSTM on LM1B
(d) Fully Connected sizes on MNIST
# Steps to Reach 0.01 Validation Error
2 2° 2A Simple CNN 21 -@ Simple CNN Narrow A 52 #â* Simple CNN Wide I om 53 #34 a 2 2 2° g 2° BH 59 2-10 gu i i i i 2092 9F 95 98 p10 912 514 p16 Batch Size
22
Steps to Reach 4.2 Validation Cross Entropy
=16) B: 20 27 22
@-@
# sa
â
@-e
# Wide
# Base
# Shallow
# Narrow and Shallow
# Steps / (Steps at 2eL
3
# Beeed
# eb
# Ned
Ey ar 2 ae
# a7
NS 210 gM 712 13 5M 515
# 28 Batch Size
# po 39
(e) Simple CNN sizes on MNIST
(f) Transformer sizes on LM1B
Figure 3: Some models can exploit much larger batch sizes than others. Figures 3a-3c show that some model architectures can exploit much larger batch sizes than others on the same data set. Figures 3d-3f show that varying the depth and width can aï¬ect a modelâs ability to exploit larger batches, but not necessarily in a consistent way across diï¬erent model architectures. All MNIST models in this Figure used plain mini-batch SGD, while all other models used Nesterov momentum. The goal validation error for each plot was chosen to allow all model variants to achieve that error. Figure 15 in the Appendix contains these plots without the y-axis normalized.
18
# Measuring the Effects of Data Parallelism on Neural Network Training
Figures 3aâ3c show that the model architecture signiï¬cantly aï¬ects the relationship between batch size and the number of steps needed to reach a goal validation error. In Figure 3a, the curve for the Fully Connected model ï¬attens later than for the Simple CNN model on MNIST (although in this case the Simple CNN model can ultimately achieve better performance than the Fully Connected model). In Figure 3b, the curve for ResNet- 50 ï¬attens much later than the curve for VGG-11, indicating that ResNet-50 can make better use of large batch sizes on this data set. Unlike ResNet-50, VGG-11 does not use batch normalization or residual connections. Figure 3c shows that Transformer can make better use of large batch sizes than LSTM on LM1B.
Figures 3dâ3f show that varying the depth and width can aï¬ect a modelâs ability to exploit larger batches, but not necessarily in a consistent way across diï¬erent model archi- tectures. In Figure 3d, the regions of perfect scaling, diminishing returns, and maximum useful batch size do not change much when the width is varied for the Fully Connected model on MNIST, although the shallower model seems less able to exploit larger batches than the deeper models. This contrasts with the ï¬ndings of Chen et al. (2018), although they changed width and depth simultaneously while keeping the number of parameters ï¬xed. For Simple CNN on MNIST, the relationship between batch size and steps to a goal validation error seems not to depend on width at all (Figure 15e in the Appendix shows that the curves are the same even when the y-axis is not normalized). However, in Figure 3f, the curves for narrower Transformer models on LM1B ï¬atten later than for wider Transformer models, while the depth seems to have less of an eï¬ect. Thus, reducing width appears to allow Transformer to make more use of larger batch sizes on LM1B.
# 4.4 Momentum Extends Perfect Scaling to Larger Batch Sizes, but Matches Plain SGD at Small Batch Sizes
We investigated whether some optimizers can make better use of larger batches than others by experimenting with plain SGD, SGD with momentum, and Nesterov momentum on the same model and data set. Since plain SGD is a special case of both Nesterov momentum and SGD with momentum (with γ = 0 in each case), and since we tune γ in all experiments, we expect that experiments with either of these optimizers should do no worse than plain SGD at any batch size. However, it is not clear a priori whether momentum optimizers should outperform SGD, either by taking fewer training steps or by extending the perfect scaling region to larger batch sizes.
Figure 4 shows that Nesterov momentum and SGD with momentum can both extend the perfect scaling region beyond that achieved by SGD, and thus can signiï¬cantly reduce the number of training steps required to reach a goal validation error at larger batch sizes. However, at batch sizes small enough that all optimizers are within their perfect scaling region, momentum optimizers perform identically to SGD without momentum. Though initially surprising, this identical performance at small batch sizes is consistent with obser- vations made in Kidambi et al. (2018). In our experiments, we did not see a large diï¬erence between Nesterov momentum and SGD with momentumâNesterov momentum appears to scale slightly better for Transformer on LM1B, but both perform about equally well for Simple CNN on MNIST.
19
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
Steps 2 Steps to Reach 0.01 Validation Error 222 âSteps to Reach 3.9 Validation Cross Entropy 206 @© SGD 221 @-© SGD 2 | Momentum ot Momentum 233 AA Nesterov Momentum 319 4A Nesterov Momentum 2â ye 20 g 2â aH 216 38 Heb 25 25 25 i â 28 Me 4 i i i aL 4 12 ji» 4 2 2° 22 27 25 28 210 212 MF g16 2727 25 27 28 292 MgtIgIAAIH MDS Batch Size Batch Size Steps to Reach 0.3 Validation Error ee SGD 4A Nesterov Momentum Steps XN. 7 2 2722232727 2527 28 292M ID13 Batch Size
# (a) Simple CNN on MNIST
(b) Transformer Shallow on LM1B
# (c) ResNet-8 on CIFAR-10
Figure 4: Momentum extends perfect scaling to larger batch sizes, but matches plain SGD at small batch sizes. Nesterov momentum and SGD with momentum can both signiï¬cantly reduce the absolute number of training steps to reach a goal validation error, and also signiï¬cantly extend the perfect scaling region and thus better exploit larger batches than plain mini-batch SGD.
=2) Steps / (Steps at B 2 . as ee OaNsT o1 & 2 a ImageNet, 0.25 a» â© Common Crawl, 3.9 Pas © Fashion : Noa ee Open Images, 0.31 Bo ana LM1B, 3.9 2 i 2 a 2? @ 42 25 2 O23 ge 2 Bo a 27 a2 £2 ae 2 ri) Qos x [2 ~ 2 tS B27 a2 7 SI Boe oo. 235 a 50 ae a a a oe Le 2527 2 2° 272 272 225 25 27 2% 2F gat gr Qs ays Batch Size Batch Size Batch Size
(a) Simple CNN on diï¬erent data sets (b) ResNet-50 on diï¬erent data sets (c) Transformer on diï¬erent data sets
Figure 5: The data set can inï¬uence the maximum useful batch size. For the data sets shown in this plot, these diï¬erences are not simply as straightforward as larger data sets making larger batch sizes more valuable. Appendix A.2 describes the evaluation metric used for each data set, and the plot legends show the goal metric value for each task. Figure 16 in the Appendix contains these plots without the y-axis normalized.
# 4.5 The Data Set Matters, at Least Somewhat
We investigated whether properties of the data set make some problems able to exploit larger batch sizes than others by experimenting with diï¬erent data sets while keeping the model and optimizer ï¬xed. We approached this in two ways: (i) by testing the same model on completely diï¬erent data sets, and (ii) by testing the same model on diï¬erent subsets of the same data set. We normalized the y-axis of all plots in this section in the same way as Section 4.3. Appendix D contains all plots in this section without the y-axis normalized.
Figure 5 shows that changing the data set can aï¬ect the relationship between batch size and the number of steps needed to reach a goal validation error. Figure 5a shows that Fashion MNIST deviates from perfect scaling at a slightly larger batch size than MNIST for the Simple CNN model. Figure 5b shows that ImageNet and Open Images are extremely similar in how well ResNet-50 can make use of larger batch sizes, although, if anything, ImageNet might make slightly better use of larger batch sizes. Figure 5c shows that LM1B scales slightly better with increasing batch size than Common Crawl for Transformer. Since
20
# Measuring the Effects of Data Parallelism on Neural Network Training
(a) Simple CNN on MNIST subsets (b) ResNet-50 on ImageNet subsets
Figure 6: Investigating the eï¬ect of data set size. At least for MNIST, any eï¬ect of subset size on the maximum useful batch size is extremely small or nonexistent. For ImageNet, the random subset of half the images deviates from perfect scaling sooner than the full data set, but the curve for the subset with half the classes is very close to the curve for the full data set and, if anything, deviates from perfect scaling later. Appendix A.2 describes the evaluation metric used for each data set, and the plot legends show the goal metric value for each task. Figure 17 in the Appendix contains these plots without the y-axis normalized.
Fashion MNIST is the same size as MNIST, Open Images is larger than ImageNet, and Common Crawl is far larger than LM1B, these diï¬erences are not simply as straightforward as larger data sets making larger batch sizes more valuable.
To disentangle the eï¬ects from changes to the distribution and changes to the number of examples, we generated steps to result vs batch size plots for diï¬erent random subsets of MNIST (Figure 6a) and ImageNet (Figure 6b). For MNIST, we selected subsets of diï¬erent sizes, while for ImageNet, we selected a random subset of half the images and a similar sized subset that only includes images from half of the classes. At least on MNIST, any eï¬ect on the maximum useful batch size is extremely small or nonexistent. For ImageNet, Figure 6b shows that the random subset of half the images deviates from perfect scaling sooner than the full data set, but the curve for the subset with half the classes is very close to the curve for the full data set and, if anything, deviates from perfect scaling later, even though it contains roughly the same number of images as the random subset.
# 4.6 Regularization Can Be More Helpful at Some Batch Sizes Than Others
We used label smoothing (Szegedy et al., 2016) to regularize training in our experiments with ResNet-50 on ImageNet. Without label smoothing, we could not achieve our goal validation error rate of 0.25 with batch sizes greater than 214 within our training budget. With a ï¬xed compute budget for each batch size, label smoothing improved the error by as much as one percentage point at large batch sizes, while having no apparent eï¬ect at small batch sizes (Figure 7a). Meanwhile, if multiple choices for the label smoothing metaparameter achieved the goal within the training budget, then label smoothing did not change the number of steps needed (Figure 7b).
21
6
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
Best Validation Error Per Batch Size Steps to Reach 0.25 Validation Error 0.265 . 271 : rary Label Smoothing = = in 00 220 4A Label Smoothing = = 0 00 || 0.260 @-@ Label Smoothing = 0.01 219 I @-@ Label Smoothing = 0.01 | @-@ Label Smoothing = 0.10 18 8 Label Smoothing = 0.10 228 0.255 27 w pteL 0.250 G45 © 515] wn 2ab 0.245 2BL 0.240 2M Ys gut sw 0.235 i a ii 210 a a 2° 27 28 2? 210 2u 212 28 24 26 2 6 2° 27 28 2° 210 Qu 212 233 24 2 2 Batch Size Batch Size
# i. £ fr
c © 2 oO z
>
(a) Label smoothing beneï¬ts larger batch sizes, but has no apparent eï¬ect for smaller batch sizes. (b) Label smoothing has no apparent eï¬ect on training speed, provided the goal error is achieved.
Figure 7: Regularization can be more helpful at some batch sizes than others. Plots are for ResNet-50 on ImageNet. Each point corresponds to a diï¬erent metaparameter tuning trial, so the learning rate, Nesterov momentum, and learning rate schedule are independently chosen for each point. The training budget is ï¬xed for each batch size, but varies between batch sizes.
We conï¬rmed that label smoothing reduced overï¬tting at large batch sizes for ResNet-50 on ImageNet (see Figure 18 in the Appendix). This is consistent with the idea that noise from small batch training is a form of implicit regularization (e.g. Goodfellow et al., 2016). However, although our results show that other forms of regularization can serve in place of this noise, it might be diï¬cult to select and tune other forms of regularization for large batch sizes. For example, we unsuccessfully tried to control overï¬tting with larger batch sizes by increasing the L2 weight penalty and by applying additive Gaussian gradient noise before we obtained good results with label smoothing.
Finally, we also tried label smoothing with Simple CNN on MNIST and Fashion MNIST, and found that it generally helped all batch sizes, with no consistent trend of helping smaller or larger batch sizes more (see Figure 19 in the Appendix), perhaps because these data sets are suï¬ciently small and simple that overï¬tting is an issue at all batch sizes.
# 4.7 The Best Learning Rate and Momentum Vary with Batch Size
Across all problems we considered, the eï¬ective learning rate (ηeï¬; see Section 2.2) that minimized the number of training steps to a goal validation error tended to increase with increasing batch size (Figure 8). However, it did not always follow either a linear or square root scaling heuristic, despite the popularity of these rules of thumb. In some cases, the optimal eï¬ective learning rate even decreased for larger batch sizes. We also found that the best eï¬ective learning rate should be chosen by jointly tuning the learning rate and momentum, rather than tuning only the learning rate. For example, the optimal way to scale the eï¬ective learning rate for Transformer was to increase the momentum while decreasing the learning rate or holding it constant (see Figures 21 and 22 in the Appendix). This is a reï¬nement to past prescriptions that only change the learning rate while keeping the momentum ï¬xed.
22
6
# Measuring the Effects of Data Parallelism on Neural Network Training
(a) Simple CNN on MNIST (b) Simple CNN on Fashion MNIST (c) ResNet-8 on CIFAR-10 (d) ResNet-50 on ImageNet (e) ResNet-50 on Open Images (f) Transformer on LM1B (g) Transformer on Common Crawl (h) VGG-11 on ImageNet (i) LSTM on LM1B
Figure 8: Optimal eï¬ective learning rates do not always follow linear or square root scaling heuristics. Eï¬ective learning rates correspond to the trial that reached the goal validation error in the fewest training steps (see Figure 1). For models that used learning rate decay schedules (ResNet-8, ResNet-50, VGG-11), plots are based on the initial learning rate. See Figures 21 and 22 in the Appendix for separate plots of the optimal learning rate and momentum.
23
74
>
# tags
.
# IAD IS
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
1 Batch Size 128 1 1 10 â = = ~ 10 â ~ 10 â ~ ⬠5 G ⬠6 â [eam Coat Achieved â [em Coat Achieved â [eam Coat Achieved 10° JAA Goal Not Achievea}] 10° JAA Goal Not Achievea}] 10° [AAA Goal Not Achieved 6 soo infeasible 6 oo infeasible 6 box. infeasible 10" 10" 10" 10% 10° 10 107 10% 10° 10 107 10% 10° 10° 107 Batch Size 1024 1 Batch Size 2048 Batch Size 4096 = pial = 10 = = Paaal = = pial 10° ost -7 o7 ue BAA Bhs a ~? é oP ar 2 10? 5 10% 10-4 5 [ei Goat Achieved 5 [Gi Goat Achieved â 10 JAAA Goal Not Achieved] 10 JAA. Goal Not Achieved [aha Goal Not Achieved 6 soo infeasible â soo infeasible â box infeasible 10" 10" 10" 10* 10° 10? 107 10* 10° 10? 107 10* 10° 10? 107 Learning Rate Learning Rate Learning Rate
â¬
# = ay
â¬
= a
(a) Transformer on LM1B with a training budget of one epoch.
1 Batch Size 512 1 Batch Size 1024 1 Batch Size 2048 10 10 10° p> = > > 7 woel ny 7 Tes . & âag ast Ee 107 5 ⬠10? 5 10% a 10+ 5 [ei Goal Achieved [ei Goal Achieved 5 [ei Goal Achieved 10 [AAA Goal Not Achieved AAA Goal Not Achieved 10 [AAA Goal Not Achieved 6 soo infeasible â Soo infeasible â sox infeasible 10" 10" 10" 10% 10° 10? 107 10% 10° 10? 107 10% 10° 10? 107 Batch Size 4096 1 Batch Size 8192 1 Batch Size 16384 = 102 p= = = ~ 10? p= = = ~ ⬠2 ⬠o ⬠é = a 5 [ei Goat Achieved [ei Goat Achieved 5 [ei Goat Achieved 10 JAAA Goal Not Achieved JAAA Goal Not Achieves] 10 JAAA Goal Not Achieved 6 Soe infeasible â Soe infeasible â sox Infeasible 10" 10" 10" 10% 107 10? 107 10% 107 10? 107 10% 107 10? 107 Learning Rate Learning Rate Learning Rate
(b) Transformer on LM1B with a training budget of 25,000 steps.
Figure 9: With increasing batch size, the region in metaparameter space corresponding to rapid training in terms of epochs becomes smaller, while the region in metaparameter space corresponding to rapid training in terms of step-count grows larger. Yellow stars are the trials that achieved the goal in the fewest number of steps. Contours indicate the eï¬ective learning rate ηeï¬ = η
24
# Measuring the Effects of Data Parallelism on Neural Network Training
Shallow Shallow and Narrow (1 >) Learning Rate (n) Learning Rate (7) 10" 107 Leaming Rate (1) Leaming Rate (n)
# 1-Momentum
Figure 10: Smaller models have larger stable learning rates for Transformer on LM1B. Plots are for diï¬erent sizes of Transformer on LM1B with a batch size of 1024, a goal validation cross entropy error of 4.2, and a training budget of 50,000 steps. Contours indicate the eï¬ective learning rate ηeï¬ = η
We further investigated the relationship between learning rate, momentum, and training speed by examining our metaparameter search spaces for diï¬erent batch sizes and model sizes. For this analysis, we used Transformer on LM1B with Nesterov momentum because the metaparameter search spaces are consistent between all batch and model sizes, and can be easily visualized because they consist only of the constant learning rate η and the momentum γ. We observe the following behaviors:
⢠With increasing batch size, the region in metaparameter space corresponding to rapid training in terms of epochs becomes smaller (Figure 9a, consistent with the ï¬ndings of Breuel, 2015b), while the region in metaparameter space corresponding to rapid training in terms of step-count grows larger (Figure 9b, although it eventually plateaus for batch sizes in the maximal data parallelism regime). Thus, with a ï¬xed error goal and in a setting where training epochs are constrained (e.g. a compute budget), it may become more challenging to choose good values for the metaparameters with increasing batch size. Conversely, with a ï¬xed error goal and in a setting where training steps are constrained (e.g. a wall-time budget), it may become easier to choose good values for the metaparameters with increasing batch size.
⢠The metaparameters yielding the fastest training are typically on the edge of the feasi- ble region of the search space (Figure 9). In other words, small changes in the optimal metaparameters might make training diverge. This behavior may pose a challenge for metaparameter optimization techniques, such as Gaussian Process approaches, that assume a smooth relationship between metaparameter values and model performance. It could motivate techniques such as learning rate warm-up that enable stability at larger eventual learning rates, since the maximum stable learning rate depends on the current model parameters. We did not observe the same behavior for ResNet-50 on ImageNet. Figure 20 in the Appendix shows the results for a range of eï¬ective learning rates near the optimum for ResNet-50 on ImageNet and Transformer on LM1B.
⢠Smaller models have larger stable learning rates (Figure 10). This is consistent with recent work predicting that the largest stable learning rate is inversely proportional to layer width (Karakida et al., 2018).
25
Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
# 4.8 Solution Quality Depends on Compute Budget More Than Batch Size
We investigated the relationship between batch size and out-of-sample error for Simple CNN on MNIST and Fashion MNIST, and for two sizes of Transformer on LM1B. For each task, we ran a quasi-random metaparameter search over the constant learning rate η and Nesterov momentum γ. For MNIST and Fashion MNIST, we also added label smoothing and searched over the label smoothing parameter in {0, 0.1} to mitigate any confounding eï¬ects of overï¬tting (see Section 4.6). We ran 100 metaparameter trials for each batch size with a large practical wall-time budget.
To disentangle the eï¬ects of the batch size from the compute budget, we compared batch sizes subject to budgets of either training steps or training epochs. For each batch size and compute budget, we found the model checkpoint that achieved the best validation accuracy across all metaparameter trials, and across all training steps that fell within the compute budget. Figure 11 shows the validation error for these best-validation-error checkpoints, as a function of batch size, for a range of compute budgets. We observe that, subject to a budget on training steps, larger batch sizes achieve better out-of-sample error than smaller batch sizes, but subject to a budget on training epochs, smaller batch sizes achieve better out-of-sample error than larger batch sizes. These observations are likely explained by the observations that, for a ï¬xed number of training steps, larger batch sizes train on more data, while for a ï¬xed number of epochs, smaller batch sizes perform more training steps. The workloads in Figure 11 represent two distinct modes of neural network training. For the small MNIST and Fashion MNIST data sets, we used training budgets that would saturate (or almost saturate) performance at each batch size. In other words, out-of-sample error cannot be improved by simply increasing the budget, with caveats due to practical limitations on our ability to ï¬nd optimal values for the metaparameters. Figures 11a and 11b show that diï¬erences in maximum performance between batch sizes on these data sets are very small (see Figures 23 and 24 in the Appendix for zoomed versions of these plots). We cannot rule out that any diï¬erences at this magnitude are due to noise from metaparameter choices and training stochasticity. Thus, for these workloads at least, the eï¬ect of batch size on solution quality is either very small or nonexistent. On the other hand, we cannot saturate performance with Transformer on LM1B within a practical training time. In this case, Figures 11c and 11d show that the best error is simply achieved by the largest compute budget.
Taken together, these observations suggest that in practice the relevant question is not which batch size leads to the best performance, but rather how compute budget varies as a function of batch size. Although we tried our best to saturate performance with MNIST and Fashion MNIST, we found that it took millions of training steps for small batch sizes, and thousands of epochs for large batch sizes, even for data sets as small and simple as these. In- deed, despite sampling 100 metaparameter conï¬gurations per batch size and training for up to 25 hours per conï¬guration, it is still not certain whether we truly saturated performance at the smallest and largest batch sizes (see Figures 23 and 24 in the Appendix). Thus, the regime of saturated performance is of limited practical concern for most workloadsâ the compute budget required to saturate performance is likely beyond what a practitioner would typically use. For realistic workloads, practitioners should be most concerned with identifying the batch size at which they can most eï¬ciently apply their compute.
26
# Measuring the Effects of Data Parallelism on Neural Network Training
Step budget Epoch budget
(a) Simple CNN on MNIST
. rv ray Sk steps Â¥v TOOK steps £ 0.18 â © 1ksteps & Â¥ 200k steps Ww 0.16 L* \ @@ 20ksteps $ @ 500k steps c \ TG S0k steps <4 5000k steps 20.14 \ \ B 012s | S$ o1oiae 4. | iq w BNO S % 0,08 bey @ OAS 4 Â¥l Yess ⢠0.06 eee = a a ioe 21 2 2° 27 2° 2i 22 25 Batch Size
rar 70 epochs * ¥¥ 200 epochs * 0.18). 20 epochs â#- * 500 epochs 1 0.16 [@-@ 50 epochs ~~ @ 1000 epochs ; i -G 100epochs << 10000 epochs 1 0.14 Fath 0.12 0.10 0.08 0.06 Batch Size
(b) Simple CNN on Fashion MNIST
Bs.afhoT ot TTT @ ok steps F oa © 20k steps § 5.2 ime a \ }@-@ 50k steps a 100k steps BOP Nd, fey roses O48 lie #e 500k steps 5 #4 1000k steps 246 - 3 54.4 aR 4 2 a2 we 4 ana 3° gigtes 4.0 Batch Size
5.4 [fa-a 0.2 epochs]. Fevel pop = 0.5 epochs 1 fly 5.2 Hee 1epochs ? y | an 2 epochs 5.0109 Serre ttn 4.8 ||# % 10 epochs |! pede 4 #4 20 epochs , » a 46 ; oop a4 7 BA gee ge ge a Me Bi yo d 42 érése cio eo bg et a ae a ee eu yy 4.0 37 2 36 37 28 35 315 Qt 3 315 pists Batch Size
(c) Transformer (narrow and shallow) on LM1B
Entropy 5.5 [a= A 0.2 epochs = © 0.5 epochs @-@ 1epochs 2 epochs 5.0 5.5 bh [aA 10K steps A © 20k steps 5.0ba > }@-@ 50k steps Neue N 100k steps SOS Â¥-V 200k steps eo ry jie #e 500k steps 4.5} > 7 in geo fk BN #4 1000k steps oN OK 4 ec i 4.0} ome me TB ke ae hoe Pleo ee ~j Tova g-e > -2-3 3.5L 4 i i ® i 3 35 stositst zt pinsis Batch Size IÂ¥-V 5 epochs se % 10 epochs 45 He 20 epochs 4.0 bbe de he oes y â i i a DF 95 98 97 98 99 QW gM gle ggg Batch Size
# Best Validation Cross
(d) Transformer (base) on LM1B
Figure 11: Validation error depends on compute budget more than batch size. Plots show the best validation error subject to budgets of training steps (left column) or training epochs (right column). Step budgets favor large batch sizes, while epoch budgets favor small batch sizes.
27
Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
# 5. Discussion
Our goals in measuring the eï¬ects of data parallelism on neural network training were twofold: ï¬rst, we hoped to produce actionable advice for practitioners, and second, we hoped to understand the utility of building systems capable of very high degrees of data parallelism. Our results indicate that, for idealized data parallel hardware, there is a uni- versal relationship between training time and batch size, but there is dramatic variation in how well diï¬erent workloads can make use of larger batch sizes. Across all our experiments, increasing the batch size initially reduced the number of training steps needed proportion- ally. However, depending on the workload, this perfect scaling regime ended anywhere from a batch size of 24 to a batch size of 213. As batch size increases beyond the perfect scaling regime, there are diminishing returns (where increasing the batch size by a factor of k only reduces the number of training steps needed by a factor less than k) that end with a max- imum useful batch size (where increasing the batch size no longer changes the number of training steps needed). Once again, the maximum useful batch size is extremely problem- dependent and varied between roughly 29 and 216 in our experiments. Other workloads may have the region of perfect scaling end at batch sizes even smaller or larger than the range we observed, as well as having even smaller or larger maximum useful batch sizes.
On the one hand, the possibility that perfect scaling can extend to batch sizes beyond 213 for some workloads is good news for practitioners because it suggests that eï¬cient data-parallel systems can provide extremely large speedups for neural network training. On the other hand, the wide variation in scaling behavior across workloads is bad news because any given workload might have a maximum useful batch size well below the limits of our hardware. Moreover, for a new workload, measuring the training steps needed as a function of batch size and conï¬rming the boundaries of the three basic scaling regimes requires expensive experiments. In this work, we have only described how to retrospectively predict the scaling behavior by tuning the optimization metaparameters for every batch size. Although Golmant et al. (2018) also described the same basic scaling behavior we found, in their experiments the relationship did not appear consistently across problems, across error goals, or in out-of-sample error. In light of our own results, the heuristics they assumed for adjusting the learning rate as a function of batch size are the likely cause of these inconsistencies, but this explanation only drives home the inconvenience of having to carefully tune at every new batch size. We were unable to ï¬nd reliable support for any of the previously proposed heuristics for adjusting the learning rate as a function of batch size. Thus we are forced to recommend that practitioners tune all optimization parameters anew when they change the batch size or they risk masking the true behavior of the training procedure.
If the scaling behavior of workloads with respect to batch size has a simple dependence on properties of the workload, then we might be able to predict the limits of perfect scaling (or the maximum useful batch size) before running extensive experiments. We could then prioritize workloads to run on specialized hardware or decide whether gaining access to specialized hardware would be useful for a given workload of interest. On the one hand, our results are bad news for practitioners because they show that accurate scaling predictions must depend on a combination of non-obvious properties of the model, optimizer, and data set. On the other hand, we have a lot of control over the choice of model and optimizer
28
# Measuring the Effects of Data Parallelism on Neural Network Training
and there is some indication that they might be responsible for the largest portion of the variation between workloads. Our results comparing SGD and SGD with momentum (or Nesterov momentum) show that, at least for the problems we tried, momentum can extend perfect scaling to much larger batch sizes, oï¬ering clear guidance for practitioners. Other optimizers, such as KFAC (Martens and Grosse, 2015; Grosse and Martens, 2016; Ba et al., 2017), or optimization techniques designed speciï¬cally for massively data parallel systems Intuitively, it (e.g. Li et al., 2014), might allow perfect scaling to extend much further. seems plausible that optimizers that estimate local curvature information might be able to beneï¬t more from large batches than optimizers that only use gradients.
Although the model seems to have a large eï¬ect on the maximum useful batch size and the limit of perfect scaling, our results do not give deï¬nitive answers on exactly how to design models that scale better for a given optimizer and data set. Even when we kept the model family ï¬xed, we observed somewhat inconsistent results from changing the model width and depth. Chen et al. (2018) suggested that wider models can exploit larger batch sizes than narrower models, but their theoretical arguments only apply to linear networks and fully connected networks with a single hidden layer. In contrast, we found that narrower variants of the Transformer model scaled better to larger batch sizes, although it is unclear if the same notion of âwidthâ transfers between diï¬erent types of neural networks.
Unlike the model and optimizer, we generally have much less control over the data set. Unfortunately, properties of the data set also aï¬ect how well training scales in practice. Our results are equivocal on whether the number of training examples has any eï¬ect, but changing the data set entirely can certainly change the scaling behavior with respect to batch size.
Finally, our results at least partially reconcile conï¬icting stances in the literature on whether increasing the batch size degrades model quality. Our experiments show that:
1. Any study that only tunes the learning rate for one batch size and then uses a heuristic to choose the learning rate for other batch sizes (Goyal et al., 2017; Keskar et al., 2017; Hoï¬er et al., 2017; Lin et al., 2018; Devarakonda et al., 2017; Golmant et al., 2018) gives a systematic advantage to the batch size used in tuning (as well as nearby batch sizes). Our results did not show a simple relationship between the optimal learning rate and batch size that scales indeï¬nitely (see Figures 8 and 21), so the use of simple heuristics for batch sizes suï¬ciently far from the base batch size could very well explain the degraded solutions and divergent training reported in prior work. Similarly, the optimal values of other metaparameters, such as the momentum and learning rate decay schedule, should not be assumed to remain constant or scale in a simple way as the batch size increases.
2. Assuming an epoch budget when comparing solution quality between batch sizes (Mas- ters and Luschi, 2018; Goyal et al., 2017; Lin et al., 2018; Devarakonda et al., 2017), in eï¬ect, limits an investigation to the perfect scaling region of the steps to result vs batch size curve (see Figure 1). This budget favors smaller batch sizes because they will perform more optimizer steps for the same number of training examples (see Sec- tion 4.8). Certainly, there are situations where an epoch budget is appropriate, but there may exist budgets just outside the perfect scaling region that can achieve the same quality solution, and those budgets may still represent a signiï¬cant reduction
29
Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
in the number of training steps required. Moreover, even for a ï¬xed model and data set, simply changing the optimizer can signiï¬cantly extend the perfect scaling regime to larger batch sizes. For example, Masters and Luschi (2018) found that test perfor- mance of ResNet-8 (without batch normalization) on CIFAR-10 with a ï¬xed epoch budget degraded after batch size 16, but considered only plain mini-batch SGD. Our experiments conï¬rmed that perfect scaling ends at batch size 16 with plain mini-batch SGD, but using Nesterov momentum extends the perfect scaling regime to batch size 256 (see Figure 1c).
3. Assuming a step budget when comparing solution quality between batch sizes (Hoï¬er et al., 2017) might favor larger batch sizes because they will see more training examples for the same number of gradient updates (see Section 4.8). A step budget is likely suï¬cient for a larger batch size to reach at least the same performance as a smaller batch size: we never saw the number of steps to reach a goal validation error increase when the batch size was increased (see Figure 1).
4. Increasing the batch size reduces noise in the gradient estimates (see Equation 4). However, the noise in updates due to small batches might, in some cases, provide a helpful regularization eï¬ect (Goodfellow et al., 2016; Smith and Le, 2018). Thankfully, other regularization techniques, such as label smoothing, can replace this eï¬ect (see Section 4.6). Others have also used regularization techniques, such as data augmen- tation (Keskar et al., 2017) and L2 regularization (Smith and Le, 2018), to eliminate the âgeneralization gapâ between two batch sizes.
5. Finally, although we do not believe there is an inherent degradation in solution quality associated with increasing the batch size, depending on the compute budget, it may become increasingly diï¬cult to ï¬nd good values for the metaparameters with larger batch sizes. Speciï¬cally, increasing the batch size may shrink the region in metapa- rameter space corresponding to rapid training in terms of epochs (see Figure 9a), as previously reported by Breuel (2015b). On the other hand, increasing the batch size may increase the region in metaparameter space corresponding to rapid training in terms of steps (see Figure 9b).
# 5.1 Limitations of our experimental protocol
When interpreting our results, one should keep in mind any limitations of our experimental protocol. We do not believe any of these limitations are debilitating, and we hope that describing these potential areas of concern will spur methodological innovation in future work.
Firstly, we were unable to avoid some amount of human judgment when tuning meta- parameters. Although we did not tune metaparameters by hand, we speciï¬ed the search spaces for automatic tuning by hand and they may not have been equally appropriate for all batch sizes, despite our best eï¬orts. We are most conï¬dent in our search spaces that tuned the fewest metaparameters (such as in our experiments that only tuned learning rate and momentum). We found it quite diï¬cult to be conï¬dent that our tuning was suï¬cient when we searched over learning rate decay schedules; readers should be aware that the steps to result measurement is generally quite sensitive to the learning rate schedule. Thus, we
30
# Measuring the Effects of Data Parallelism on Neural Network Training
may not have sampled enough trials at some batch sizes or, nearly equivalently, our search spaces may have been too wide at some batch sizes. Even though we veriï¬ed that the best trial was not on the boundary of the search space, this by no means guarantees that we found the globally optimal metaparameters.
Smaller batch sizes typically had more opportunities to measure validation error and, when validation error was noisy, got more chances to sample a lucky validation error. Batch sizes (usually larger ones) that did not reach the goal validation error using the ï¬rst search space used revised search spaces that gave them an extra bite of the apple, so to speak.
Finally, our analysis does not consider how robustly we can reach a goal error rate. For instance, we did not distinguish between batch sizes where all 100 trials achieved the goal validation error and batch sizes where only one of the 100 trials achieved the goal. The maximum or minimum value over a set of trials is not usually a very robust statistic, but something like the 50th percentile trial mostly reveals information about the search space. We tried to strike a balance between studying realistic workloads and being able to repeat our experiments so many times that these uncertainty questions became trivial. Ultimately, we opted to study realistic workloads and simply report results for the optimal trials.
# 6. Conclusions and Future Work
Increasing the batch size is a simple way to produce valuable speedups across a range of workloads, but, for all workloads we tried, the beneï¬ts diminished well within the limits of current hardware. Unfortunately, blindly increasing the batch size to the hardware limit will not produce a large speedup for all workloads. However, our results suggest that some optimization algorithms may be able to consistently extend perfect scaling across many models and data sets. Future work should perform our same measurements with other optimizers, beyond the closely-related ones we tried, to see if any existing optimizer extends perfect scaling across many problems. Alternatively, if we only need speedups for speciï¬c, high-value problems, we could also consider designing models that extend perfect scaling to much larger batch sizes. However, unlike the optimizer, practitioners are likely to tailor their model architectures to the speciï¬c problems at hand. Therefore, instead of searching for model architectures that happen to scale extremely well, future work should try to uncover general principles for designing models that can scale perfectly to larger batch sizes. Even if such principles remain elusive, we would still beneï¬t from methods to prospectively predict the scaling behavior of a given workload without requiring careful metaparameter tuning at several diï¬erent batch sizes. Finally, the deep learning community can always beneï¬t from methodical experiments designed to test hypotheses, characterize phenomena, and reduce confusion, to balance more exploratory work designed to generate new ideas for algorithms and models.
# Acknowledgements
We thank Tomer Koren for helpful discussions. We also thank Justin Gilmer and Simon Kornblith for helpful suggestions and comments on the manuscript. Finally, we thank Matt J. Johnson for lending us some computing resources.
31
Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
# Appendix A. Data Set Details
This section contains details of the data sets summarized in Table 1.
# A.1 Data Set Descriptions and Pre-Processing
MNIST (LeCun et al., 1998) is a classic handwritten digit image classiï¬cation data set with 10 mutually exclusive classes. We split the original training set into 55,000 training images and 5,000 validation images, and used the oï¬cial test set of 10,000 images. We did not use data augmentation.
Fashion MNIST (Xiao et al., 2017) is another reasonably simple image classiï¬cation data set with 10 mutually exclusive classes. It was designed as a drop-in replacement for MNIST. We split the original training set into 55,000 training images and 5,000 validation images, and used the oï¬cial test set of 10,000 images. We did not use data augmentation. CIFAR-10 (Krizhevsky, 2009) is an image classiï¬cation data set of 32Ã32 color images with 10 mutually exclusive classes. We split the original training set into 45,000 training images and 5,000 validation images. We used the oï¬cial test set of 10,000 images. We pre-processed each image by subtracting the average value across all pixels and channels and dividing by the standard deviation.15 We did not use data augmentation.
ImageNet (Russakovsky et al., 2015) is an image classiï¬cation data set with 1,000 mutually exclusive classes. We split the oï¬cial training set into 1,281,167 training images and 50,045 test images, and used the oï¬cial validation set of 50,000 images. We pre- processed the images and performed data augmentation in a similar way to Simonyan and Zisserman (2014). Speciï¬cally, at training time, we sampled a random integer S â [256, 512], performed an aspect-preserving resize so that the smallest side had length S, and took a random crop of size (224, 224). We randomly reï¬ected the images horizonally, but unlike Simonyan and Zisserman (2014), we did not distort the colors. At evaluation time, we performed an aspect-preserving resize so that the smallest side had length 256, and took a central crop of size (224, 224). In both training and evaluation, we then subtracted the global mean RGB value from each pixel using the values computed by Simonyan and Zisserman (2014).16
Open Images v4 (Krasin et al., 2017) is a data set of 9 million images that are annotated with image-level labels and object bounding boxes.17 The image labels were generated by a computer vision model and then veriï¬ed as either positive or negative labels by human annotators. We only considered the 7,186 âtrainableâ classes with at least 100 human-annotated positives in the training set. We ï¬ltered the oï¬cial subsets by keeping only images with at least one positive trainable label, which produced training, validation and test sets of size 4,526,492; 41,225; and 124,293 images, respectively. On average, each image in the training set has 2.9 human-annotated positive labels, while each image in the validation and test sets have 8.4 human-annotated positive labels. We only considered the human-annotated positives and assumed all other classes were negative. We pre-processed the images and performed data augmentation identically to ImageNet.
15. We used the TensorFlow op tf.image.per image standardization. 16. See https://gist.github.com/ksimonyan/211839e770f7b538e2d8#description for the mean RGB val-
ues used.
17. Available at https://storage.googleapis.com/openimages/web/index.html.
32
# Measuring the Effects of Data Parallelism on Neural Network Training
LM1B (Chelba et al., 2014) is a text data set of English news articles.18 We used the oï¬cial training set and created validation and test sets using ï¬les news.en.heldout- 00000-of-00050 and news.en.heldout-00001-of-00050, respectively. These splits con- tain 30,301,028; 6,075; and 6,206 sentences, respectively. We used an invertable word tokenizer to split the text into sub-word tokens with a vocabulary of size 32,000.19 On average, the training set contains around 20 tokens per sentence and the validation and test sets contain around 29 tokens per sentence. At training time, we clipped long sentences to the ï¬rst 64 tokens, which aï¬ected only about 2% of sentences. We did not clip long sentences at evaluation time. The maximum sentence across the validation and test sets has 476 tokens.
Common Crawl is a repository of web data containing over 3 billion web pages.20 We ï¬ltered and processed the data set identically to Anil et al. (2018).21 The vocabulary contains 24,006 sub-word tokens. We randomly partitioned the sentences into a training set (99.98%) and a holdout set (0.02%). Our training set contains â¼25.8 billion sentences. We used the ï¬rst 6,075 sentences of the holdout set as our validation set, which is the same number of sentences in our LM1B validation set. Some sentences are tens of thousands of tokens long. To maintain consistency with our LM1B processing, we clipped sentences to 64 tokens at training time and 476 at evaluation time.
# A.2 Evaluation Metrics
We use classiï¬cation error for MNIST, Fashion MNIST, CIFAR-10, and ImageNet. To compute this metric, we consider the modelâs classiï¬cation for each image to be the class it assigns the highest probability. Then
classiï¬cation error = # incorrect classiï¬cations # classiï¬cations
.
We use class-agnostic average precision (AP ) for Open Images. To compute this metric, we ï¬rst rank each image-class pair by the predicted likelihood of the class being a true positive for that image. Then
nm 1 AP =â > Precision(k) - Relevance(k), (7) Ww kel
where Precision(k) is the precision when considering the top k image-class pairs, Relevance(k) is an indicator function equal to 1 if the kth image-class pair is a veriï¬ed positive and 0 otherwise, n is the number of images in the validation set, m is the number of classes, and w is the number of positive labels. Average precision was proposed for Open Images by Veit et al. (2017). Due to false negatives in the validation set, Veit et al. (2017) only computed AP over the the human-annotated classes in each image. However, on average, each image
18. Available at http://www.statmt.org/lm-benchmark/. 19. The code for processing the raw data and generating the vocabulary is available at https://github.
19. The code for processing the raw data and generating the vocabulary is available at https: //github. com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/1m1b.py
com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/lm1b.py 20. Available at http://commoncrawl.org/2017/07/june-2017-crawl-archive-now-available/. 21. See https://github.com/google-research/google-research/tree/master/codistillation for doc-
ument IDs.
33
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
in the validation set only has 8.4 positive and 4 negative human-annotated classes, so each image is only evaluated over â¼12 classes out of 7,186. This yields misleadingly high values of AP . Instead, we compute AP over all classes in each image, which may underestimate the true AP due to false negatives in the validation set, but is more indicative of the true performance in our experience. We compute AP using an eï¬cient approximation of the area under the discrete precision-recall curve.22
We use average per-token cross entropy error for LM1B and Common Crawl. For a single sentence s = (wi1,...,Wm), let p(wj|w1, ...,wjâ1) denote the modelâs predicted prob- ability of the token w; given all prior tokens in the sentence. Thus, the predicted log- probability of s is log p(s) = a , log p(w;|w1, ..., wj-1). We compute the average per-token cross entropy error over a data set {51,..., Sn} as
an log p(Sn) an len(sp) : cross entropy error =
where len(s) denotes the number of tokens in s. This is the logarithm of the per-token perplexity.
# Appendix B. Model Details
In this section we give the architectural details of the models summarized in Table 2. In addition to the descriptions below, each model has a task-speciï¬c output layer. Models trained on MNIST, Fashion MNIST, CIFAR-10, and ImageNet (classiï¬cation with mutually exclusive labels) use a softmax output layer to model the probability distribution over classes. Models trained on Open Images (classiï¬cation with multiple labels per image) use a sigmoid output layer to model the probability of each class. Models trained on LM1B and Common Crawl (language modeling) use a softmax output layer to model the probability of the next word in a sentence given all prior words in the sentence.
Fully Connected is a fully connected neural network with ReLU activation function. Hidden layers use dropout with probability 0.4 during training. We vary the number of layers and number of units per layer in diï¬erent experiments to investigate the impact of model size. We use the notation FC-N1-...-Nk to denote a fully connected neural network with k hidden layers and Ni units in the ith layer.
Simple CNN consists of 2 convolutional layers with max-pooling followed by 1 fully connected hidden layer. The convolutional layers use 5 à 5 ï¬lters with stride length 1, âsameâ padding (Goodfellow et al., 2016), and ReLU activation function. Max pooling uses 2 à 2 windows with stride length 2. The fully connected layer uses dropout with probability 0.4 during training. We used three diï¬erent model sizes: base has 32 and 64 ï¬lters in the convolutional layers and 1,024 units in the fully connected layer; narrow has 16 and 32 ï¬lters in the convolutional layers and 512 units in the fully connected layer; and wide has 64 and 128 ï¬lters in the convolutional layers and 2,048 units in the fully connected layer. We used the base model unless otherwise speciï¬ed.
22. Equation 7 can be interpreted as a right Riemann sum of the discrete precision-recall curve {(ri, pi)|i = 1, ..., w}, where ri = i/w and pi is the maximum precision among all values of precision with re- call ri (each value of recall may correspond to diï¬erent values of precision at diï¬erent classiï¬cation thresholds). We use the TensorFlow op tf.metrics.auc with curve="PR", num thresholds=200, and summation method="careful interpolation".
34
# Measuring the Effects of Data Parallelism on Neural Network Training
ResNet-8 consists of 7 convolutional layers with residual connections followed by 1 fully connected hidden layer. We used the model described in section 4.2 of He et al. (2016a) with n = 1, but with the improved residual block described by He et al. (2016b). We removed batch normalization, which is consistent with Masters and Luschi (2018).
ResNet-50 consists of 49 convolutional layers with residual connections followed by 1 fully connected hidden layer. We used the model described in section 4.1 of He et al. (2016a), but with the improved residual block described by (He et al., 2016b). We replaced batch normalization (Ioï¬e and Szegedy, 2015) with ghost batch normalization to keep the training objective ï¬xed between batch sizes and to avoid possible negative eï¬ects from computing batch normalization statistics over a large number of examples (Hoï¬er et al., 2017). We used a ghost batch size of 32 for all experiments. We also applied label smoothing (Szegedy et al., 2016) to regularize the model at training time, which was helpful for larger batch sizes. The label smoothing coeï¬cient was a metaparameter that we tuned in our experiments.
VGG-11 consists of 8 convolutional layers followed by 3 fully connected hidden layers. We used the model referred to as âmodel Aâ by Simonyan and Zisserman (2014).
LSTM is a one hidden-layer LSTM model (Hochreiter and Schmidhuber, 1997). It is a simpler variant of the LSTM-2048-512 model described by Jozefowicz et al. (2016), with 1,024 embedding dimensions, 2,048 hidden units, and 512 projection dimensions. We did not use bias parameters in the output layer because we found this improved performance in our preliminary experiments.
Transformer is a self-attention model that was originally presented for machine trans- lation (Vaswani et al., 2017). We used it as an autoregressive language model by applying the decoder directly to the sequence of word embeddings for each sentence. We used four diï¬erent sizes: the base model described by Vaswani et al. (2017); a shallow model that is identical to the base model except with only two hidden layers instead of six; a narrow and shallow model that is identical to the shallow model except with half as many hidden units and attention heads as well as half the ï¬lter size; and a wide model that is identical to the base model except with double the number of hidden units and attention heads as well as double the ï¬lter size. We used the base model unless otherwise speciï¬ed.
# Appendix C. Learning Rate Schedules
We chose our learning rate schedule by experimenting with a variety of diï¬erent schedules for ResNet-50 on ImageNet. For each schedule, we speciï¬ed the following metaparameters:
⢠η0: initial learning rate
⢠α: decay factor (α > 0)
⢠T : number of training steps until the learning rate decays from η0 to αη0
Each schedule corresponds to a decay function d(t), such that the learning rate at training step t is
η(t) = d(t) · η0 αη0 if t ⤠T, if t > T.
We experimented with the following decay functions:
35
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
⢠Constant: d(t) = 1
Linear: d(t) = 1 â (1 â α) t T
# (1 + cos mH)
Cosine (Loshchilov and Hutter, 2017): d(t) = α + (1âα)
Cosine (Loshchilov and Hutter, 2017): d(t) =a+ aoe) (1 + cos mH)
Exponential Polynomial: d(t) = a + (1 â a) (1-
# ty, where \ > 0
# ⢠Inverse Exponential Polynomial: d(t) = α α+(1âα)( t T )λ , where λ > 0
Exponential: d(t) = αt/T
We also tried piecewise linear learning rate schedules. These schedules are speciï¬ed by a sequence of pairs {(t0, η0), ..., (tk, ηk)}, with 0 = t0 < t1... < tk, such that the learning rate at training step t is
η(t) = ηi + ηi+1âηi ti+1âti ηk (t â ti) if ti ⤠t < ti+1, if t ⥠tk.
The schedules used by both He et al. (2016a) (piecewise constant) and Goyal et al. (2017) (linear warm-up followed by piecewise constant) for ResNet-50 on ImageNet can both be expressed as piecewise linear.
We ran experiments with ResNet-50 on ImageNet, using Nesterov momentum with batch size 1,024 for 150,000 training steps, while tuning the momentum and all metaparameters governing the learning rate schedule. We used quasi-random metaparameter search as described in Section 4. For piecewise linear schedules, we tried 1, 3, and 5 decay events. We found that it was possible to get good results with several of the schedules we tried, and it is likely that other schedules would also work well. Ultimately, we chose linear decay because it performed at least well as all other schedules we tried, while also being the simplest and requiring only two additional metaparameters.
36
# Measuring the Effects of Data Parallelism on Neural Network Training
# Appendix D. Additional Plots
(a) Simple CNN on MNIST (b) Transformer on LM1B (c) ResNet-50 on ImageNet
Figure 12: Steps to result on the training set is almost the same as on the validation set. The evaluation metrics are described in Appendix A.2. Error goals are speciï¬ed in the plot legends.
15 1e6__Batch Size 16 1e5_Batch Size 128 ged __Bateh size 2048 glee Batch Size 16386 4 e âhee ): 2 8 Po bd # 2 . Jo | ae? oo i 14: | Mp ee i °° ° 5 5 Jo" 1 greg 8 i A ae 2° E07 P@@ ° z | # | Baas % 06 Be ho 2 Fa 4 a oe ° : ve wt | cae 7 7 1 rae 7 7 1 oo 7 7 1 OB 7 z 1 fo Fy 70 yo! âfo 70 70 vo: fo Fr 70 vo: âfo 70 70 70 Learning Rate (n) Learning Rate (n) Learning Rate (n) Learning Rate (n) ip 1e6_Batch size 16 jp le5___Batch sie 128 gles __Bateh size 2008 gles __Batch size 16364 12 . ® 5 _ ee q 7 ° Pa 24 @ a eo] | | S10 on) - oe 6 oe = @ é S20 ° | Eol.0@ 9G | 2* i | bsaf® | 2 oe 2 eo f ee 24 me ; 8 © Ce ao < :, 8 fis of ng Oi 8, eee? | ats ° sor 0 é é au ° ch MBA AAR Myst os CaP 2 4 10 1 - Momentum (1~=>) 1 - Momentum (1~=>) 1 - Momentum (1~=>) 1 - Momentum (1~=>)
Figure 13: Validating metaparameter search spaces for Transformer on LM1B. Rows cor- respond to the metaparameters we tuned (learning rate η and momentum γ) and columns correspond to diï¬erent batch sizes. The x-axis is the search range that was sampled by the quasi-random search algorithm. Blue dots represent trials that reached the goal of 3.9 validation cross entropy error, and yellow stars correspond to trials that achieved the goal in the fewest steps. We deem these search spaces appropriate because the yellow stars are not on the boundaries.
37
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
Steps to Result Steps to Result Steps to Result Steps to Result Steps to Result Batch Size 64 10? 107 10+ Leaming Rate (7) 10° 10° Batch Size 64 Batch Size 1024 Steps to Result 1o7 Learning Rate (7) 10% 107 10° Batch Size 1024 Steps to Result le4 Steps to Result BoP a oN w in Steps to Result 10° 10% 107 Learning Rate (7) Batch Size 8192 10? 12 a 1 8 " " 1.2 1 " 104 107 107 107 104 107 10? 107 loâ 107 10? 107 1- Momentum (1 â +) 1 - Momentum (1 â +) 1 - Momentum (1 â +) Batch Size 64 Ls 1e5 Batch Size 1024 2.0 le4 Batch Size 8192 beeceeseseees Qa J 1.4 19 > ° ne a a 4 = 34, Baa g bocce ee eee eee secede eee e ce eee eee ee 4 2 216 wll âa g g 15 vesceeeees 4 Bao 3 7 1 0.9 13 0.8 1.2 10° 10° 107 10â 10° 107 107 10â 10> 10° 107 Decay Factor (a) Decay Factor (a) Decay Factor (a) 24 126 Batch Size 64 a5 les Batch Size 1024 1e4 Batch Size 8192 : : @ © 18 veveeleseaee veeleees 22 eo ms : %, : LB poke. . Lo veeeeeeeee hoe 20 : oe | 2 @ e x VG be -de-enn nee e ee feee ee enesecpecweececeedeces a a @ gi2 © 6 3 18) : a Saabs. beeseesheseeeeduseeeeees £14 : uw : uy : ¢ : ¢ xe jar ad bd rd ee 1s sk i | 0.9 : : 0.8 : i : 1.0 i 12 i i i i i i i i i i i i i i i i A 12 #14 16 18 20 22 24 08 09 101112131415 1.0 1.2 1.4 1.6 1.8 Decay Steps (7) 1e6 Decay Steps (7) 1e5 Decay Steps (7) le4 24 1e6 Batch Size 64 20 le4 Batch Size 8192 19 18 F Baus wv wu 4 4 5} 916 a 1.1 4 a ov y 1.5 Fro 4 a 1a 0.9 : x i 13 12 i i i H i i i i 12 i i i i -0.02 0.02 0.06 0.10 0.14 â0.02 0.02 0.06 0.10 0.14 -0.02 0.02 0.06 0.10 0.14 Label Smoothing Label Smoothing Label Smoothing Batch Size 65536 @ | 7 @ | 3 4 ov P 4 5] a | Ey a e | 107% 10° 10? Leaming Rate (7) Batch Size 65536 1- Momentum (1 â +) Batch Size 65536 10° 107 Decay Factor (a) Batch Size 65536 Steps to Result i 0.8 Decay Steps (7) â n 06 07 led Batch Size 65536 ® | 0.95 0.90 4 7 ® 3 oes : 4 2 : a 0.80} 4 Ss 5] a 0.75 4 z & 0.70 @ 4 mn wv 0.60 bee eee Sonsreseeneense 4 0.55 i i i 1 0.02 0.06 0.10 0.14 Label Smoothing =0.02
Figure 14: Validating metaparameter search spaces for ResNet-50 on ImageNet. Rows correspond to the metaparameters we tuned (initial learning rate η0, momentum γ, learning rate decay parameters α, T , and label smoothing parameter) and columns correspond to diï¬erent batch sizes. For all parameters except the label smoothing parameter, the x-axis is the search range sam- pled by the quasi-random search algorithm. The label smoothing parameter was sampled uniformly in {0, 0.01, 0.1} for b ⤠214 and {0, 0.1} for b > 214. Blue dots represent trials that reached the goal validation error rate of 0.25, and yellow stars correspond to trials that achieved the goal in the fewest steps. We deem these search spaces appropriate because the yellow stars are not on the boundaries.
38
# Measuring the Effects of Data Parallelism on Neural Network Training
Steps to Reach 0.03 Validation Error
Steps 218 2â aA FC-1024-1024-1024 2re @-@ Simple CNN 26 : 25 i i i i a_i i 2° 22 24 2° 28 210 212 24 2 6 Batch Size
221 ; Steps to Reach 0.35 Validation Error ; 220 dA ResNet-50 || 219 @-@ VGG-11 218 | Qui ie 515 |, 2u L 23 22 Que Pe ae po 2 2° 27 28 22 210 Qu 2% 28 24 2h 2 6 Batch Size
# a 2 wn
(a) Fully Connected vs Simple CNN on MNIST
(b) ResNet-50 vs VGG-11 on ImageNet
Steps to Reach 3.9 Validation Cross Entropy
221
Steps 220 4-4 Transformer || 19 @-@ LSTM 2 pol | po 24 2 2° 27 28 2° 210 2u 22 28 2M 2 Batch Size
5
Steps to Reach 0.03 Validation Error
218
27 ee FC-1024 U 216 Ga FC-128-128-128 || 215 VV FC-256-256-256 || 214 @-@ FC-512-512-512 513 r aA FC-1024-1024-1024 || 52 r @-© FC-2048-2048-2048 || L 511 2ut 210 2° 28L N 4 27 ae | 2eL L beet 4 25 pe pos 2° 22 24 2° 28 210 212 2 2 Batch Size
4 4 Y 2 â
(c) Transformer vs LSTM on LM1B
(d) Fully Connected sizes on MNIST
Steps to Reach 0.01 Validation Error
Steps 216 26 A Simple CNN 2M ©@-© Simple CNN Narrow 2Bi [*-* Simple CNN Wide ral 2u 2io| 2eb 2b 27L » 36 x 35 âS a[. SS 7 eee â 23 i i i i 1 yt 2° 22 24 2° 28 210 212 24 2 6 Batch Size
221
Steps to Reach 4.2 Validation Cross Entropy
220 @-@ Wide | 19 aA Base 2 -â Shallow | 238 @-@ Narrow and Shallow } 17 2 B 2264 215 2M 13 2 2t roe | 2utL aN 4 S grt op i Ne 24 2 2° 27 28 2° 210 2u 2% 28 24 2 Batch Size
&
(e) Simple CNN sizes on MNIST
(f) Transformer sizes on LM1B
Figure 15: Figure 3 without the y-axis normalized.
39
5
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
22 220 as ee 0.01 220 a ImageNet, 0.25 22 1 [e-© Common Crawi, 3.9 ore Fashion MNIST, 0.2 e-© Open Images, 0.31 28 | [aa LMIB, 3.9 20 â 3 v w 28 a 1 2 g 2 a 2 < 32 A Qu 30 a 2° 2 pu0 3 2e 27 2h ae ae 2h 27 2 2 227 2% 27 2 gh 27 a gt 2B 2 2t 2 2% a7 2 2 272 27D IPD Batch Size Batch Size Batch Size
(a) Simple CNN on diï¬erent data sets (b) ResNet-50 on diï¬erent data sets (c) Transformer on diï¬erent data sets
Figure 16: Figure 5 without the y-axis normalized.
(a) Simple CNN on MNIST subsets (b) ResNet-50 on ImageNet subsets
# A
Figure 17: Figure 6 without the y-axis normalized.
1.0 1.0 ° & ° oa ° iS 0.5 Training Error Validation Error o oa ° B ° N â Label Smoothing = 0.00 == Label Smoothing = 0.10 || ââ- Label Smoothing = 0.00 --- == Label Smoothing = 0.10 ° w 0.0 I I I i 0.2 I I I i i?) 2000 4000 6000 8000 10000 i?) 2000 4000 6000 8000 10000 Training Steps Training Steps
Figure 18: Label smoothing reduces overï¬tting at large batch sizes. Plots are training curves for the two best models with and without label smoothing for ResNet-50 on ImageNet with batch size 216. The two models correspond to diï¬erent metaparameter tuning trials, so the learning rate, Nesterov momentum, and learning rate schedule were independently chosen for each trial. The two trials shown are those that reached the highest validation error at any point during training, for label smoothing equal to 0 and 0.1 respectively.
40
6
# Measuring the Effects of Data Parallelism on Neural Network Training
(a) Simple CNN on MNIST (b) Simple CNN on Fashion MNIST
# i
# Ss $ id >
Figure 19: Label smoothing helps all batch sizes for Simple CNN on MNIST and Fashion MNIST. There is no consistent trend of label smoothing helping smaller or larger batch sizes more. Each point corresponds to a diï¬erent metaparameter tuning trial, so the learning rate, Nesterov momentum, and learning rate schedule are independently chosen for each point. The training budget is ï¬xed for each batch size, but varies between batch sizes.
(a) Transformer on LM1B
(b) ResNet-50 on ImageNet
Figure 20: Validation error vs eï¬ective learning rate. Training budgets are consistent for each batch size, but not between batch sizes. These plots are projections of the entire metaparameter search space, which is 2-dimensional for Transformer on LM1B (see Figure 13) and 5-dimensional for ResNet-50 on ImageNet (see Figure 14).
41
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
(b) Simple CNN on Fashion MNIST (e) ResNet-50 on Open Images (h) VGG-11 on ImageNet
3 imal Learning Rate A 2° JAA Cuma Learning Rat LA inear Heuristic g2|[> ureerteursti foe Boa 3 ce 23 D £2 a £57 G 5s a? aul 233 Pi a g2 8g a5 9 8g DDD IbI13 Batch Size
2? 5 |[#-@ Optimal Leaming Rate 7) 23 |e Ceo Hounc ; @ 23 |L-_ sauare Root Heuristic â o [ad 2 & ⬠$ a auto, 20 oF gt 98 98 git git ait pic Batch Size
# (a) Simple CNN on MNIST
# (c) ResNet-8 on CIFAR-10
23 32 [fara optimar teaming Rate]. ey 52 |[â tnear Heuristic y 30 f= saurenootHeuri â 2 6 [od D & = £ i Y a 2 2995 9897 98 pag lghIp tp taghagis Batch Size
23 32 [[a-w Optimal Learning Rate 2 1 {| â Linear Heuristic + 2a Loe seuare oceanic fo @ 2o\L- â © 31. as, ors, £2? 24 gos i Y 26 alba 38 2° 987 98 go glgitgizg taping ispis Batch Size
# (d) ResNet-50 on ImageNet
# (f) Transformer on LM1B
2 â 30 [[a-a Optimartearning Rate] O a |] â Linear Heuristic 3 gy 2 HL sauare Root Heuristic â 2 32 oe 2, a2 Dm 24 £25 £26 G 57 o 2 a 38 38 2 Tt ggg tatty 2° 2° 27 2% 27 202 2 22 Batch Size
2° â 7 3-4 [Jara Optimal Learning Rate ? 35 [|= uncer Heunstic 26 Lee Sauare Root Heuristic f 2 wh 6 a D & £ i o a 2 OT SOT OT DF pI H MDH MIpIS 242? 29 27 2° 2° 22M 26222 Batch Size
# (g) Transformer on Common Crawl
# (i) LSTM on LM1B
Figure 21: Optimal learning rates do not always follow linear or square root scaling heuristics. Learning rates correspond to the trial that reached the goal validation error in the fewest training steps (see Figure 1). For models using learning rate decay schedules (ResNet-8, ResNet-50, VGG-11), plots are based on the initial learning rate. See Figure 22 for the corresponding plot of optimal momentum, and Figure 8 for the corresponding plot of eï¬ective learning rate.
42
# Measuring the Effects of Data Parallelism on Neural Network Training
1.00 0.95 g 0.90 2085 g 0.80 5 0.75 = 0.70 0.65 0.60 bess as ses rr pppintns Batch Size
1.00 0.95 E 0.90 2 oes g oso 5 = 075 0.70 0.65 ge or ge i ping git gis Batch Size
1.00
0.99
(b) Simple CNN on Fashion MNIST (e) ResNet-50 on Open Images (h) VGG-11 on ImageNet
# (a) Simple CNN on MNIST
# (c) ResNet-8 on CIFAR-10
1.00 0.98 ⬠0.96 2 G 0.94 : 0.92 = 0.901 0.88 Soe se gra rpeplptpipigtps Batch Size
1.00 0.99} ⬠0.98 2 @ 097 : 0.96 = 0.95 0.94 5 arse sa ptopttphphphpinyis Batch Size
# (d) ResNet-50 on ImageNet
# (f) Transformer on LM1B
1.00 0.99 ⬠0.98 2 097 7 & 0.96 = 0.95 0.94 0.93 oe ie ste strste strata Batch Size
1.04 102 ⬠= 1.00} bed 7 {a te ⬠ool a = 0.96 | eee 0.94 ese sistoshststesteyls Batch Size
# (g) Transformer on Common Crawl
(i) LSTM on LM1B*
Figure 22: Optimal momentum has no consistent relationship with batch size. Momentum corresponds to the trial that reached the goal validation error in the fewest training steps (see Figure 1). See Figure 21 for the corresponding plot of optimal learning rate, and Figure 8 for the corresponding plot of eï¬ective learning rate. *For LSTM on LM1B, we only tuned η with ï¬xed γ = 0.99.
43
# Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
Step budget Epoch budget (a) Simple CNN on MNIST: Validation Error
(b) Simple CNN on MNIST: Test Error
Figure 23: Zoomed version of Figure 11a.
Step budget Epoch budget
(a) Simple CNN on Fashion MNIST: Validation Error
0.080 â5k steps 0.080 - nae nee 5 0.075 20k steps 0.075 0 50k steps 100 200k steps 200 3 0.070 200k steps 0.070 500 500k steps 1000 % 5000k steps 10000 0.065 0.065 0.060 353793 git 515-55 27 23 Batch Size 0.060 BI 37 a ott ois ots Batch Size
# E
2
£
&
(b) Simple CNN on Fashion MNIST: Test Error
Figure 24: Zoomed version of Figure 11b.
44
epochs epochs
epochs epochs
epochs epochs
Measuring the Effects of Data Parallelism on Neural Network Training
# References
Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeï¬rey Dean, Matthieu Devin, Sanjay Ghemawat, Geoï¬rey Irving, Michael Isard, et al. TensorFlow: a system for large-scale machine learning. In Conference on Operating Systems Design and Implementation, volume 16, pages 265â283. USENIX, 2016.
Takuya Akiba, Shuji Suzuki, and Keisuke Fukuda. Extremely large minibatch SGD: Train- ing ResNet-50 on ImageNet in 15 minutes. arXiv preprint arXiv:1711.04325, 2017.
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E. Dahl, and Geoï¬rey E. Hinton. Large scale distributed neural network training through online distillation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rkr1UDeC-.
Jimmy Ba, Roger Grosse, and James Martens. Distributed second-order optimization using Kronecker-factored approximations. In International Conference on Learning Represen- tations, 2017. URL https://openreview.net/forum?id=SkkTMpjex.
L´eon Bottou and Olivier Bousquet. The tradeoï¬s of large scale learning. In Advances in Neural Information Processing Systems, pages 161â168, 2008.
Olivier Bousquet, Sylvain Gelly, Karol Kurach, Olivier Teytaud, and Damien Vincent. Critical hyper-parameters: No random, no cry. arXiv preprint arXiv:1706.03200, 2017.
Thomas M Breuel. Benchmarking of LSTM networks. arXiv preprint arXiv:1508.02774, 2015a.
Thomas M Breuel. The eï¬ects of hyperparameters on SGD training of neural networks. arXiv preprint arXiv:1508.02788, 2015b.
Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical lan- guage modeling. In Conference of the International Speech Communication Association, 2014.
Jianmin Chen, Xinghao Pan, Rajat Monga, Samy Bengio, and Rafal Jozefowicz. Re- visiting distributed synchronous SGD. In International Conference on Learning Representations Workshop Track, 2016. URL https://openreview.net/forum?id= D1VDZ5kMAu5jEJ1zfEWL.
Lingjiao Chen, Hongyi Wang, Jinman Zhao, Dimitris Papailiopoulos, and Paraschos Koutris. The eï¬ect of network width on the performance of large-batch training. arXiv preprint arXiv:1806.03791, 2018.
Valeriu Codreanu, Damian Podareanu, and Vikram Saletore. Scale out for large minibatch SGD: Residual network training on ImageNet-1K with improved accuracy and reduced time to train. arXiv preprint arXiv:1711.04291, 2017.
45
Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
Aditya Devarakonda, Maxim Naumov, and Michael Garland. AdaBatch: Adaptive batch sizes for training deep neural networks. arXiv preprint arXiv:1712.02029, 2017.
Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In International Conference on Machine Learning, pages 1019â 1028, 2017.
Noah Golmant, Nikita Vemuri, Zhewei Yao, Vladimir Feinberg, Amir Gholami, Kai Rothauge, Michael W Mahoney, and Joseph Gonzalez. On the computational ineï¬ciency of large batch sizes for stochastic gradient descent. arXiv preprint arXiv:1811.12941, 2018.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. URL http://www.deeplearningbook.org.
Priya Goyal, Piotr Doll´ar, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Ky- rola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
Roger Grosse and James Martens. A Kronecker-factored approximate Fisher matrix for In International Conference on Machine Learning, pages 573â582, convolution layers. 2016.
Elad Hazan. Introduction to online convex optimization. Foundations and Trends in Opti- mization, 2(3-4):157â325, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition, pages 770â778. IEEE, 2016a.
Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630â645. Springer, 2016b.
Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md Patwary, Mostofa Ali, Yang Yang, and Yanqi Zhou. Deep learning scaling is predictable, empirically. arXiv preprint arXiv:1712.00409, 2017.
Geoï¬rey Hinton, Nitish Srivastava, and Kevin Swersky. Neural networks for machine lecture 6a: overview of mini-batch gradient descent, 2012. URL https: learning, //www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf.
Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735â1780, 1997.
Elad Hoï¬er, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the In Advances in Neural generalization gap in large batch training of neural networks. Information Processing Systems, pages 1731â1741, 2017.
46
Measuring the Effects of Data Parallelism on Neural Network Training
Sergey Ioï¬e and Christian Szegedy. Batch normalization: Accelerating deep network train- ing by reducing internal covariate shift. In International Conference on Machine Learning, pages 448â456, 2015.
Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, and Aaron Sidford. Parallelizing stochastic gradient descent for least squares regression: Mini-batching, aver- aging, and model misspeciï¬cation. Journal of Machine Learning Research, 18(223):1â42, 2018. URL http://jmlr.org/papers/v18/16-595.html.
Norman P Jouppi, Cliï¬ Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter per- formance analysis of a tensor processing unit. In International Symposium on Computer Architecture, pages 1â12. IEEE, 2017.
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
Ryo Karakida, Shotaro Akaho, and Shun-ichi Amari. Universal statistics of Fisher infor- mation in deep neural networks: Mean ï¬eld approach. arXiv preprint arXiv:1806.01316, 2018.
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and In International Conference on Learning Representations, 2017. URL sharp minima. https://openreview.net/forum?id=H1oyRlYgg.
Rahul Kidambi, Praneeth Netrapalli, Prateek Jain, and Sham M. Kakade. On the in- suï¬ciency of existing momentum schemes for stochastic optimization. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum? id=rJTutzbA-.
Jack Kiefer, Jacob Wolfowitz, et al. Stochastic estimation of the maximum of a regression function. The Annals of Mathematical Statistics, 23(3):462â466, 1952.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 2015. In
Ivan Krasin, Tom Duerig, Neil Alldrin, Vittorio Ferrari, Sami Abu-El-Haija, Alina Kuznetsova, Hassan Rom, Jasper Uijlings, Stefan Popov, Shahab Kamali, Matteo Malloci, Jordi Pont-Tuset, Andreas Veit, Serge Belongie, Victor Gomes, Abhinav Gupta, Chen Sun, Gal Chechik, David Cai, Zheyun Feng, Dhyanesh Narayanan, and Kevin Murphy. OpenImages: A public dataset for large-scale multi-label and multi-class image classiï¬- cation., 2017. URL https://storage.googleapis.com/openimages/web/index.html.
features from tiny images. Techni- cal report, University of Toronto, 2009. URL http://www.cs.toronto.edu/~kriz/ learning-features-2009-TR.pdf.
Guanghui Lan. An optimal method for stochastic composite optimization. Mathematical Programming, 133(1-2):365â397, 2012.
47
Shallue, Lee, Antognini, Sohl-Dickstein, Frostig, and Dahl
Yann Le Cun, L´eon Bottou, Genevieve B. Orr, and Klaus-Robert M¨uller. Eï¬cient backprop. In Neural Networks, Tricks of the Trade, Lecture Notes in Computer Science LNCS 1524. Springer Verlag, 1998. URL http://leon.bottou.org/papers/lecun-98x.
Yann LeCun, Corinna Cortes, and CJ Burges. MNIST handwritten digit database, 1998. URL http://yann.lecun.com/exdb/mnist.
Yann LeCun, Yoshua Bengio, and Geoï¬rey Hinton. Deep learning. Nature, 521(7553):436, 2015.
Mu Li, Tong Zhang, Yuqiang Chen, and Alexander J Smola. Eï¬cient mini-batch training In International Conference on Knowledge Discovery and for stochastic optimization. Data Mining, pages 661â670. ACM, 2014.
Tao Lin, Sebastian U Stich, and Martin Jaggi. Donât use large mini-batches, use local SGD. arXiv preprint arXiv:1808.07217, 2018.
SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2017. URL https: //openreview.net/forum?id=Skq89Scxx.
Siyuan Ma, Raef Bassily, and Mikhail Belkin. The power of interpolation: Understanding the eï¬ectiveness of SGD in modern over-parametrized learning. In International Confer- ence on Machine Learning, pages 3331â3340, 2018.
James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning, pages 2408â 2417, 2015.
Dominic Masters and Carlo Luschi. Revisiting small batch training for deep neural networks. arXiv preprint arXiv:1804.07612, 2018.
Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k2). In Doklady AN USSR, volume 269, pages 543â547, 1983.
Boris T Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1â17, 1964.
Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400â407, 1951.
David E Rumelhart, Geoï¬rey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Nature, 323(6088):533, 1986.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhi- heng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large International Journal of Computer Vision, 115(3): scale visual recognition challenge. 211â252, 2015.
48
Measuring the Effects of Data Parallelism on Neural Network Training
Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From founda- tions to algorithms. Cambridge University Press, 2014. URL https://books.google. com/books?id=OE9etAEACAAJ.
Ohad Shamir. Without-replacement sampling for stochastic gradient methods. In Advances in Neural Information Processing Systems, pages 46â54, 2016.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Samuel L. Smith and Quoc V. Le. A Bayesian perspective on generalization and stochastic gradient descent. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=BJij4yg0Z.
Ilya Sutskever, James Martens, George E. Dahl, and Geoï¬rey E. Hinton. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning, pages 1139â1147, 2013.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioï¬e, Jon Shlens, and Zbigniew Wojna. Rethinking the Inception architecture for computer vision. In Conference on Computer Vision and Pattern Recognition, pages 2818â2826. IEEE, 2016.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008, 2017.
Andreas Veit, Neil Alldrin, Gal Chechik, Ivan Krasin, Abhinav Gupta, and Serge J Belongie. In Conference on Learning from noisy large-scale datasets with minimal supervision. Computer Vision and Pattern Recognition, pages 6575â6583. IEEE, 2017.
D Randall Wilson and Tony R Martinez. The general ineï¬ciency of batch training for gradient descent learning. Neural Networks, 16(10):1429â1451, 2003.
Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger Grosse. Understanding short-horizon bias in stochastic meta-optimization. In International Conference on Learning Represen- tations, 2018. URL https://openreview.net/forum?id=H1MczcgR-.
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
Dong Yin, Ashwin Pananjady, Max Lam, Dimitris Papailiopoulos, Kannan Ramchandran, and Peter Bartlett. Gradient diversity: a key ingredient for scalable distributed learning. In International Conference on Artiï¬cial Intelligence and Statistics, 2018. URL http: //proceedings.mlr.press/v84/yin18a.html.
Yang You, Zhao Zhang, Cho-Jui Hsieh, James Demmel, and Kurt Keutzer. ImageNet training in minutes. arXiv preprint arXiv:1709.05011, 2017.
49 | {
"id": "1712.02029"
} |
1811.01241 | Wizard of Wikipedia: Knowledge-Powered Conversational agents | In open-domain dialogue intelligent agents should exhibit the use of
knowledge, however there are few convincing demonstrations of this to date. The
most popular sequence to sequence models typically "generate and hope" generic
utterances that can be memorized in the weights of the model when mapping from
input utterance(s) to output, rather than employing recalled knowledge as
context. Use of knowledge has so far proved difficult, in part because of the
lack of a supervised learning benchmark task which exhibits knowledgeable open
dialogue with clear grounding. To that end we collect and release a large
dataset with conversations directly grounded with knowledge retrieved from
Wikipedia. We then design architectures capable of retrieving knowledge,
reading and conditioning on it, and finally generating natural responses. Our
best performing dialogue models are able to conduct knowledgeable discussions
on open-domain topics as evaluated by automatic metrics and human evaluations,
while our new benchmark allows for measuring further improvements in this
important research direction. | http://arxiv.org/pdf/1811.01241 | Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, Jason Weston | cs.CL | null | null | cs.CL | 20181103 | 20190221 | 9 1 0 2
b e F 1 2 ] L C . s c [
2 v 1 4 2 1 0 . 1 1 8 1 : v i X r a
# OF WIKIPEDIA:
# KNOWLEDGE-POWERED CONVERSATIONAL AGENTS
Emily Dinanâ, Stephen Rollerâ, Kurt Shusterâ, Angela Fan, Michael Auli, Jason Weston Facebook AI Research edinan,roller,kshuster,angelafan,michaelauli,jase {
# @fb.com }
# ABSTRACT
In open-domain dialogue intelligent agents should exhibit the use of knowledge, however there are few convincing demonstrations of this to date. The most popu- lar sequence to sequence models typically âgenerate and hopeâ generic utterances that can be memorized in the weights of the model when mapping from input ut- terance(s) to output, rather than employing recalled knowledge as context. Use of knowledge has so far proved difï¬cult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding. To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. We then design ar- chitectures capable of retrieving knowledge, reading and conditioning on it, and ï¬nally generating natural responses. Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction.
# INTRODUCTION
Arguably, one of the key goals of AI, and the ultimate the goal of natural language research, is for humans to be able to talk to machines. In order to get close to this goal, machines must mas- ter a number of skills: to be able to comprehend language, employ memory to retain and recall knowledge, to reason about these concepts together, and ï¬nally output a response that both fulï¬lls functional goals in the conversation while simultaneously being captivating to their human speak- ing partner. The current state-of-the-art approaches, sequence to sequence models of various kinds (Sutskever et al., 2014; Vinyals & Le, 2015; Serban et al., 2016; Vaswani et al., 2017) attempt to address some of these skills, but generally suffer from an inability to bring memory and knowledge to bear; as indicated by their name, they involve encoding an input sequence, providing limited reasoning by transforming their hidden state given the input, and then decoding to an output. To converse intelligently on a given topic, a speaker clearly needs knowledge of that subject, and it is our contention here that more direct knowledge memory mechanisms need to be employed. In this work we consider setups where this can be naturally measured and built.
We consider the task of open-domain dialogue, where two speakers conduct open-ended chit-chat given an initial starting topic, and during the course of the conversation the topic can broaden or focus on related themes. During such conversations, an interlocutor can glean new information and personal points of view from their speaking partner, while providing similarly themselves. This is a challenging task as it requires several components not found in many standard models. We design a set of architectures speciï¬cally for this goal that combine elements of Memory Network architec- tures (Sukhbaatar et al., 2015) to retrieve knowledge and read and condition on it, and Transformer architectures (Vaswani et al., 2017) to provide state-of-the-art text representations and sequence models for generating outputs, which we term Transformer Memory Networks.
As, to our knowledge, no public domain dataset of requisite scale exists, we build a supervised dataset of human-human conversations using crowd-sourced workers, ï¬rst crowd-sourcing 1365 diverse discussion topics and then conversations involving 201, 999 utterances about them. Each
# âJoint ï¬rst authors.
1
topic is connected to Wikipedia, and one of the humans (the wizard) is asked to link the knowledge they use to sentences from existing articles. In this way, we have both a natural way to train a knowledgeable conversation agent, by employing a memory component that can recall and ground on this existing text, and a natural way to evaluate models that we build, by assessing their ability at locating and using such knowledge.
Our Transformer Memory Network architectures, both in retrieval and generative versions, are tested in this setup using both automatic metrics and human evaluations. We show their ability to execute engaging knowledgeable conversations with humans, compared to a number of baselines such as standard Memory Networks or Transformers. Our new benchmark, publicly in ParlAI (http:// parl.ai/projects/wizard of wikipedia/), aims to encourage and measure further improvements in this important research direction.
# 2 RELATED WORK
Many existing dialogue tasks do not study the use of knowledge explicitly. For example, popular chit-chat datasets such as Open-Subtitles (Vinyals & Le, 2015), Persona-Chat (Zhang et al., 2018) and Twitter (Sordoni et al., 2015) have tested the ability of sequence-to-sequence models that attend over the recent dialogue history, but do not attempt to recall long-term knowledge beyond encoding it directly into the weights of the feed-forward network.
In the area of goal-directed dialogue, separate from open domain chit-chat, such as airline (El Asri et al., 2017) or restaurant booking (Henderson et al., 2014; Wen et al., 2016; Bordes et al., 2017), knowledge conditioning is typically employed by allowing access to a database through API calls or otherwise. In contrast, our work investigates unstructured knowledge across a large, diverse set of topics potentially spanning all of Wikipedia.
In question answering one does not produce a dialogue response based on a conversation history, but a factual answer based on a question. In that case, it is clear that retrieving and conditioning knowledge is vital. For example, in SQuAD neural models have been developed that attend to a given paragraph from Wikipedia to answer questions (Rajpurkar et al., 2016), or Open-SQuAD which extends this to answering without being given the paragraph, instead performing retrieval over the entirety of Wikipedia (Chen et al., 2017). Recently, the QuAC dataset investigates similar themes, but as a sequence of questions and answers in dialogue form instead (Choi et al., 2018). In this work we do not address question answering, but focus on natural human dialogues which contain a diverse set of utterances, not just questions and answers.
The closest work to ours lies in the area of non-goal directed dialogue incorporating knowledge. The work of Dodge et al. (2016) employed Memory Networks to perform dialogue discussing movies in terms of recommendation and open-ended discussion from Reddit, conditioning on a structured knowledge base. Zhou et al. (2018) also links Reddit to structured knowledge. Both Parthasarathi & Pineau (2018) and Ghazvininejad et al. (2018) use unstructured text instead, as we do: the former to discuss news articles using Wikipedia summaries as knowledge, and the latter to discuss local businesses in two-turn dialogues using Foursquare tips as knowledge. Ghazvininejad et al. (2018) uses an extended Encoder-Decoder where the decoder is provided with an encoding of the context along with the external knowledge encoding. Neither involves dialogue authored with the given knowledge, so it is unclear when knowledge is useful or not. In contrast, in our task, we know the Wikipedia articles and sentences that ground crowdworkers dialogues. Model-wise, Parthasarathi & Pineau (2018) uses a Bag-of-Words Memory Network type fact encoder and an RNN decoder. Our work compares Memory Networks (Sukhbaatar et al., 2015) and Transformers which have been shown to be on-par or superior to RNN encoder-decoders (Vaswani et al., 2017), and develops an architecture that combines these approaches. Concurrently with our work Moghe et al. (2018) proposed a dataset based on the closed domain of movie chats. Our paper shows models working on full multi-turn dialogue in an open-domain setting, which to our knowledge was not shown before.
# 3 WIZARD OF WIKIPEDIA
We consider the following general open-domain dialogue setting: two participants engage in chit- chat, with one of the participants selecting a beginning topic, and during the conversation the topic
2
is allowed to naturally change. The two participants, however, are not quite symmetric: one will play the role of a knowledgeable expert (which we refer to as the wizard) while the other is a curious learner (the apprentice).
Apprentice At each stage of the conversation the apprentice talks to the wizard freely, playing the role of a curious learner, eager to chat. Their goal is to go into depth about a chosen topic that interests themselves or their partner, while keeping the conversation engaging and fun. Note that the instruction to delve deeply into a topic makes this different to more âshallowâ chit-chat tasks. In this task the use of knowledge is emphasized more.
Wizard The wizard is given the following instructions: âYou have just met the other person, who seems quite curious, and you are eager to discuss a topic with them!â Their goal is to inform their conversation partner about a topic that one of them will choose. Crucially, the wizard has access to an information retrieval system that shows them paragraphs from Wikipedia possibly relevant to the conversation, which are unobserved by the apprentice. Before each conversation turn the wizard can read these paragraphs and then potentially base their next reply on that observed knowledge. Note, the wizard is particularly instructed not to simply parrot this knowledge, but to use it to craft a relevant reply, and to present any relevant knowledge in a fun and engaging way, if possible.
Conversation Flow The ï¬ow of the conversation thus takes place as follows.
1. Either the wizard or apprentice is picked to choose the topic and speak ï¬rst. The other player receives the topic information, and the conversation begins.
2. When the apprentice sends the wizard a message, the wizard is shown relevant knowledge (described below), and chooses a relevant sentence in order to construct a response, or else chooses the no sentence used option.
3. The Wizard responds to the apprentice basing their response on their chosen sentence.
4. The conversation repeats until one of the conversation partners ends the chat (after a mini- mum of 4 or 5 turns each, randomly chosen beforehand).
After collecting data of such wizard-apprentice conversations between humans, the goal is to then replace the human wizard with a learned agent that will speak to a human apprentice instead, similar to the procedure in Wizard of Oz experiments (Bernsen et al., 2012).
Topics We crowd-sourced a set of 1365 natural, open-domain dialogue topics, each linked to a Wikipedia article. These include diverse topics such as commuting, Gouda cheese, music festivals, podcasts, bowling, and Arnold Schwarzenegger.
Knowledge Retrieval At each step of the dialogue the wizard has access to a set of passages of knowledge which may be relevant to the given dialogue context. While this is a potentially learnable part of the model, we required for this to be ï¬xed so that we could present the results to the annotator when collecting the dataset. We thus used exactly the same retriever that is commonly used for the Open-SQuAD dataset in Chen et al. (2017). It uses a simple inverted index lookup followed by term vector model scoring. Articles and queries are compared as TF-IDF weighted bag-of-word and n-gram vectors, using the hashing trick. We retrieve the top 7 articles (ï¬rst paragraph only) for the last two turns of dialogue (by wizard and apprentice) and the article (ï¬rst 10 sentences only) for the original topic, and present these articles to the wizard as knowledge context, along with their titles. Note that while this system is used to build the dataset, a superior method can in principle be learned and used by a model at test time.
Knowledge Selection and Response Generation During data collection, the wizard can click on any of the retrieved article titles in the dialogue UI to expand that article, at which point they can click on a sentence that is most relevant to the response they want to make (only one article, and one sentence can be selected on any turn, for simplicity). If they see no relevant article or sentence they can choose no sentence used instead. The wizard then enters their response to the apprentice. An image of the Wizardâs UI is shown in Appendix A.1.
3
Table 1: Dataset statistics of the Wizard of Wikipedia task.
Wizard of Wikipedia Task Train Valid Test Seen Test Unseen Number of Utterances Number of Dialogues Number of Topics Average Turns per Dialogue 166,787 18,430 1,247 9.0 17,715 1,948 599 9.1 8,715 965 533 9.0 8,782 968 58 9.1 Knowledge Database 5.4M articles 93M sentences
Final Dialogue Dataset The ï¬nal dialogue dataset we collect consists of 22,311 dialogues with 201,999 turns, which we divide into 166,787 for train, 17,715 for validation, and 17,497 for test. The test set is split into two subsets, Test Seen and Test Unseen. Test Seen contains 533 overlapping topics with the training set with new dialogues about those topics. Test Unseen consists of 58 topics never seen before in train or validation. Overall data statistics can be found in Table 1, and further statistics and examples of collected conversations in Appendix A.2. We observe wizards and apprentices both asking and answering questions, and providing each other with a mixture of facts and personal feelings during their general discussion.
# 4 MODELS
In this work we consider learning dialogue models to replace the wizard in our learning tasks, i.e. the knowledgeable speaker. The dialogue model thus can have access to a knowledge source, in this case Wikipedia, to ground the conversation with. We thus develop extensions of the Memory Network (Sukhbaatar et al., 2015) and Transformer (Vaswani et al., 2017) models that can (i) retrieve from a large memory relevant information conditioned on the dialogue history, (ii) carefully read and attend over the retrieved set of knowledge, and then (iii) generate the next dialogue utterance. This model is then used consecutively on each turn to form an entire dialogue with a user.
We develop two classes of models capable of leveraging knowledge: (i) retrieval models that produce an output among a set of candidate responses (the set of utterances from the training set); and (ii) generative models that generate word-by-word (using a beam).
The input to either model is the same: at each dialogue turn where the model is intended to make a response, it is given the current dialogue context x1, . . . , xt of t dialogue turns, where x1 is al- ways the initial starting topic (e.g. âKurt Cobainâ), and the remaining turns swap between the two speakers. The goal at each stage is to output the next utterance xt+1.
Knowledge Retrieval We assume a large knowledge base (memory) m1, . . . , mN which is hier- archically organized into documents consisting of paragraphs and sentences. As it is infeasible for current neural attention techniques to operate on this scale, we use standard information retrieval (IR) techniques (c = IR(x, m)) as a ï¬rst step to return a smaller set of candidates mc1 , . . . , mcK for ï¬ne-grained selection.
In our experiments, we use the IR system provided to the human annotators during dataset creation, detailed in Section 3. The retriever operates on the topic (x1) and the last two turns (xt and xtâ1) if they exist, effectively calling the IR system three times with three different queries. Empirically, this provided better performance compared to merging into one query, likely because it can address quite different topics. We retrieve the top 7 articles (ï¬rst paragraph only) for each lookup and then ï¬atten all the results into separate sentences (i.e. remove the organization of sentences belonging to articles), but prepend every sentence with its article title. In this way the candidates mc1 , . . . , mcK given to the neural model in the next stage can be attended to independently without having to deal with hierarchical issues.
Knowledge Attention We use an attention mechanism to perform ï¬ne-grained selection of which knowledge sentences will be used to produce the next turn of dialogue. Each sentence in the memory is independently encoded with a Transformer encoder (Vaswani et al., 2017), and the same Trans-
4
Two-Stage encoded Dialog independently Context > Knowledge [J] Memory
Figure 1: Generative Transformer Memory Network. An IR system provides knowledge candi- dates from Wikipedia. Dialogue Context and Knowledge are encoded using a shared encoder. In the Two-stage model, the dialogue and knowledge are re-encoded after knowledge selection.
former is used to encode the dialogue context x. We then perform standard dot-product attention between the memory candidates and the dialogue context.
Utterance Prediction Given the hidden state derived from the memory attention process described above, the ï¬nal stage is to predict the output utterance that will form the next dialogue turn.
We consider different variants of the two stages above, knowledge attention and utterance prediction, when considering retrieval and generative variants of our models. We will now detail these in turn.
4.1 RETRIEVAL TRANSFORMER MEMORY NETWORK
This model encodes each knowledge sentence m,,,...,7%c, and the dialogue context x with a Transformer, as described above. The final input encoding is calculated by performing dot-product attention over enc(m<,),...,enc(me,,) and adding the resulting weighted sum of these vectors to enc(x) to get the representation rep; ys(7â¢c,,.--,Mcx,@). The candidate responses r},..., ry are encoded with a separate Transformer to get reppys(ri) for each i. We choose as a response r¢ where
é = arg max TPL ys (Mery ** + 1 Mex 1) . TePans (Ti) iâ¬{1,...,L} lreppus(â¢Mers-++;Mexst)|l2â [IrePRns(Ti)|l2
The model is trained to minimize the cross-entropy loss, where the negative candidates for each example are the responses to the other examples in the batch (Henderson et al., 2017).
4.2 GENERATIVE TRANSFORMER MEMORY NETWORK
We consider two versions: a Two-stage and an End-to-end version. Both models ï¬nd the most rel- evant piece of knowledge mbest, and then perform an encoding step by concatenating it with the dialogue context, allowing the decoder to attend over both the knowledge and dialogue when formu- lating its response. We employ a beam search of 5 to select our best response. All generative models employ BPE encoding (Sennrich et al., 2016), which we found effective at enabling generators to copy rare words from Wikipedia sentences (Fan et al., 2018).
In the End-to-end version, a shared Transformer encoder is used to encode all candidates mci and the dialogue history. The encoded candidates are ï¬attened into vectors using the normalization from Cer et al. (2018) (summing, and normalizing by the square root of the sentence length in order to balance short and long sentences) to produce an attention prediction over the memory. The full sequence encoding of the single highest selected knowledge mbest is concatenated with the encoding of the dialogue, and passed into a Transformer decoder. An illustration of our End-to-end model is shown in Figure 1. We train the model to minimize the negative log-likelihood of the response utterance. We can add additional supervision by forcing the knowledge selection to correctly choose the same knowledge candidate as the human wizard in the training set by adding an additional cross- entropy loss over the knowledge attention, modulated by a weight λ:
λ) NLL + λ knowledge. = (1
# L
â
# L
# L
In the Two-stage version, we employ two separately trained models for each of these two tasks, knowledge selection and utterance prediction. As the knowledge selection step creates a hard deci-
5
Table 2: Test performance of various methods on the Knowledge Selection Task. The models must select the gold knowledge sentences chosen by humans given the dialogue context.
Seen Test Unseen Test Method R@1 F1 R@1 F1 Random IR baseline BoW MemNet 2.7 5.8 23.0 13.5 21.8 36.3 2.3 7.6 8.9 13.1 23.5 22.9 Transformer Transformer (+Reddit pretraining) Transformer (+Reddit pretraining, +SQuAD training) 22.5 24.5 25.5 33.2 36.4 36.2 12.2 23.7 22.9 19.8 35.8 34.2
sion inï¬uencing the output of the generator, we ï¬nd maximizing the performance of this component to be vital. We can also improve performance of the decoder by employing knowledge dropout (K.D.), wherein we artiï¬cially prevent the model from attending to knowledge a fraction of the time during training. We ï¬nd this helps the generator be more resilient to errors at the knowledge se- lection stage, and makes training faster. K. D. is a novel technique we propose here, however it is similar to many other dropout techniques, e.g. feature dropout used in Wu et al. (2017).
# 5 EXPERIMENTS
We describe each of our experimental setups and results. We ï¬rst investigate the ability of our mod- els to select knowledge appropriately, and then consider the full task of dialogue with knowledge.
5.1 KNOWLEDGE SELECTION TASK
Before looking at the full Wizard dialogue task, we assess the ability of models to predict the knowl- edge selected by human wizards in the dataset given the dialogue history. This will inform us of the feasibility of this task and the best models to use in a two-stage architecture. We compare Transform- ers against various baselines including a random baseline; an Information Retrieval (IR) baseline, which uses simple word overlap; and a Bag-of-Words Memory Network (Sukhbaatar et al., 2015). Where noted, the Transformer is pretrained on Reddit data (Mazar´e et al., 2018), and ï¬ne-tuned for our task. The results are shown in Table 2. Transformers work best, as long as they are pretrained on a large dataset (Reddit), while multi-tasking on SQuAD provides marginal impact. Further analysis of this task using other models is provided in Appendix B.1. We use the best performing Transformer model reported here for our two-stage generative Memory Network in the full dialogue task.
5.2 FULL TASK: DIALOGUE WITH KNOWLEDGE
We evaluate our models on the full task of dialogue generation given knowledge in two settings: given the gold knowledge sentence chosen by a human, or where the model needs to predict which knowledge to use. We separately describe experiments for retrieval and generative models.
Retrieval Experiments We use similar baselines as in the knowledge selection experiments, but now also apply Transformer Memory Networks, which attend over knowledge. Models are evalu- ated measuring Recall@1 when ranking the gold response among 99 randomly chosen candidates, and unigram F1 of the modelâs prediction with the gold response. The results are shown in Table 3. We ï¬nd that the addition of knowledge improves all models (improving Bow MemNet from 56 to 71 R@1 and the Transformer MemNet from 79 to 87 R@1) for predicted knowledge. Performance im- proves dramatically when models are provided gold knowledge, but otherwise retain similar trends.
Generative Experiments We compare our generative End-to-end and Two-stage Transformer Memory Network models to two more baselines: repeating the last utterance, and a generative Transformer model trained to respond to dialogue but without access to knowledge. Models are evaluated using perplexity (PPL) of the gold response and unigram F1.
6
Table 3: Retrieval methods on the full Wizard task. Models must select relevant knowledge and retrieve a response from the training set as a dialogue response. Using knowledge always helps, and the Transformer Memory Network with pretraining performs best.
Predicted Knowledge Gold Knowledge Test Seen Test Unseen Seen Unseen Method R@1 F1 R@1 F1 R@1 R@1 Random IR baseline BoW MemNet (no knowledge) BoW MemNet 1.0 17.8 56.1 71.3 7.4 12.7 14.2 15.6 1.0 14.2 28.8 33.1 7.3 11.6 11.6 12.3 1.0 73.5 56.1 84.5 1.0 67.5 28.8 66.7 Transformer (no knowledge, w/o Reddit) Transformer (no knowledge, w/ Reddit) 60.8 79.0 13.3 15.0 25.5 54.0 9.7 11.6 60.8 79.0 25.5 54.0 Transformer MemNet (w/ Reddit) Transformer MemNet (w/ Reddit+SQuAD) 86.8 87.4 15.4 15.4 69.8 69.8 12.4 12.4 91.6 92.3 82.3 83.1
Table 4: Generative models on the full Wizard Task. The Two-stage model performs best using predicted knowledge, while the End-to-end (E2E) model performs best with gold knowledge.
Predicted Knowledge Gold Knowledge Test Seen Test Unseen Test Seen Test Unseen Method PPL F1 PPL F1 PPL F1 PPL F1 Repeat last utterance Transformer (no knowledge) - - 13.8 - - - 13.7 - - 41.8 13.8 17.8 - 87.0 13.7 14.0 E2E Transformer MemNet (no auxiliary loss) E2E Transformer MemNet (w/ auxiliary loss) 66.5 63.5 15.9 16.9 103.6 97.3 14.3 14.4 24.2 23.1 33.6 35.5 35.5 32.8 29.5 32.2 Two-Stage Transformer MemNet Two-Stage Transformer MemNet (w/ K.D.) 54.8 46.5 18.6 18.9 88.5 84.8 17.4 17.3 30.0 28.6 30.7 30.6 42.7 43.7 28.6 28.0
Table 5: Human Experiments. Evaluations of the best generative and retrieval models on full dialogues with humans. Human ratings are reported as mean (stddev). Wiki F1 measures unigram overlap with the Wikipedia entry for the chosen topic, a measure of knowledge used in conversations.
Seen Unseen Method Rating Wiki F1 Rating Wiki F1 Human Performance 4.13 (1.08) 11.1 4.34 (0.98) 10.6 Retrieval Transformer (no knowledge) Generative Transformer (no knowledge) Retrieval Transformer MemNet 3.33 (1.30) 2.11 (1.11) 3.43 (1.10) 19.8 15.3 23.4 3.12 (1.34) 2.54 (1.38) 3.14 (1.31) 13.7 10.1 16.3 Two-Stage Generative Transformer MemNet 2.92 (1.33) 30.0 2.93 (1.30) 26.2
The results are given in Table 4. Our experiments show that both the End-to-end and Two-stage models employ the knowledge in their response predictions, as they outperform their counterpart Transformer without knowledge, and demonstrate substantial improvements when provided the gold knowledge. While the Two-stage model produces signiï¬cantly stronger perplexity and F1 scores using the predicted knowledge, the End-to-end model outperforms the Two-stage model in the gold knowledge experiments. This suggests that the Two-stage model beneï¬ts from the strong knowledge selection module (Section 5.1), but that the End-to-end model is better at employing the selected knowledge. Furthermore, we ï¬nd that the additional knowledge selection supervision (auxiliary loss) in the End-to-end model improves it on every metric, suggesting that tightly integrating these tasks is beneï¬cial. Knowledge dropout (K. D.) also helps (compare last two rows). More evidence
7
# Model: Retrieval Transformer MemNet
Topic: E-book (seen) Human: i do not know why, but I have never gotten into E-books Model: Same here. I have a kindle, but I donât use it for books. Itâs nicer to hold a real book and turn pages. Human: yes, i like the physical feel and smell of a real book Model: I actually had a Nook e-book where you had to download the e-book after selecting it online. I grew out of it, I prefer paper books also. Human: i tried e-books but I do not like reading on a screen, its disorientated for me Model: Yes, reading a book on a tablet just isnât the same! Human: plus I like owning the physical copy of a real book, I feel like I own it Model: I prefer print and newsreels.
Topic: Toga party (seen) Model: I love toga parties! I usually make my toga out of a bed sheet and wear my sandals. Human: Me too!!! I usually use some ï¬ip ï¬ops and my blanket! Model: Toga parties typically have games involving kegs and excessive drinking! Human: Yeah! And getting to wear funny outï¬ts and a crown of leaves! Model: I love me some Toga Parties! Human: I love me some too! Model: Toga parties are so much fun with the Roman or Greek theme. Human: Yeah! I especially like Greek themes
Model: Two-Stage Generative Transformer MemNet
# Topic: Shower (unseen)
Human: So Iâve been looking for a better showerhead with more water pressure for my shower. Model: I love pressure-versed water at nearly constant temperature to a shower Human: That sounds pretty great! Love those rainshower showers! Model: the ancient greeks were the ï¬rst people to have showers. Human: Well I knew they were good for something. Model: the original showers were neither indoor structures nor man made but were common. Human: Ah that sounds intimidating! Model: it is required manual transport of both fresh and waste water.
Figure 2: Selected conversations between humans and models.
for this is shown in Appendix B.1. Lastly, we note that both Two-stage models give higher F1 scores than any of the retrieval models shown in Table 3.
5.3 HUMAN EVALUATION
We perform human evaluation of our models using crowd-sourced workers. Humans are paired with our models and asked to chat about a speciï¬c topic (given a choice of 2â3 topics) for 3â5 dialogue turns. Following their conversation, the humans are asked to rate their dialogue partner on a scale of 1â5, with the rating indicating how much they âlikedâ the conversation (5 is best), which we refer to as the engagingness rating. Using the collected conversations, we also calculate a metric we call the Wiki F1 sore: the F1 overlap of the modelâs utterances with the ï¬rst 10 sentences of the Wikipedia page for the chosen topic as a proxy for how much knowledge the model exhibits. We seek a model that can be simultaneously engaging and knowledgeable, hence we would like to maximize both these metrics1. For comparison, we also collect 100 human-human conversations, with only one human choosing the topic and performing evaluation. In total, we collect a total of 546 conversations with ratings from 464 distinct workers. These results are shown in Table 5.
We ï¬nd that the retrieval models signiï¬cantly outperform the generative models on the human en- gagingness evaluation(Studentâs t-test, p < .05). The human engagingness differences between retriever models with and without knowledge are not signiï¬cant, but note they both trend toward use of knowledge due to the candidate sentences retrieved, with the knowledgeable version obtaining signiï¬cantly higher Wiki F1 scores in both seen and unseen test sets.
For the generative models, we ï¬nd human engagingness ratings are signiï¬cantly improved by the use of knowledge (p < .01). The signiï¬cantly higher Wiki F1 scores indicate that (i) these models convey more knowledge than their counterparts without knowledge conditioning; and (ii) on both seen and unseen sets they convey more knowledge than the retrieval models. In particular, on unseen
1For example, a model could display knowledge by copying parts of Wikipedia, but not be engaging at all.
8
data the gap between retrieval and generative models is larger. This is understandable, as retrieval models are limited to producing a response from the training set where the unseen topic did not appear.
There is still a considerable gap to human ratings of other humans compared to all our models (ï¬rst row of Table 5). Figure 2 shows example conversations with the retrieval and generative models. Additional analysis and examples can be found in Appendix B.3 and C.
# 6 CONCLUSION
In this work we build dialogue agents which are able to employ large memory systems containing en- cyclopedic knowledge about the world in order to conduct engaging open-domain conversations. We develop a set of architectures, Transformer Memory Network models, that are capable of retrieving and attending to such knowledge and outputting a response, either in retrieval or generative modes. To train and evaluate such models, we collect the Wizard of Wikipedia dataset, a large collection of open-domain dialogues grounded by Wikipedia knowledge, and demonstrated the effectiveness of our models in automatic and human experiments. Our new publicly available benchmark aims to encourage further model exploration, and we expect such efforts will result in signiï¬cant advances in this important research direction.
There is much future work to be explored using our task and dataset. Some of these include: (i) bridging the gap between the engagingness of retrieval responses versus the ability of genera- tive models to work on new knowledge and topics, (ii) learning to retrieve and reason simultane- ously rather than using a separate IR component; and (iii) investigating the relationship between knowledge-grounded dialogue and existing QA tasks which also employ such IR systems. The aim is for those strands to come together to obtain an engaging and knowledgeable conversational agent.
# REFERENCES
Niels O Bernsen, Hans Dybkjær, and Laila Dybkjær. Designing interactive speech systems: From ï¬rst ideas to user testing. Springer Science & Business Media, 2012.
Antoine Bordes, Y-Lan Boureau, and Jason Weston. Learning end-to-end goal-oriented dialog. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
Daniel Cer, Yinfei Yang, Sheng yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Con- stant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. Universal sentence encoder. arXiv preprint arXiv:1803.11175, 2018.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading wikipedia to answer open- domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics, pp. 1870â1879. Association for Computational Linguistics, 2017.
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. QuAC: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 2018.
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog sys- tems. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.
Layla El Asri, Hannes Schulz, Shikhar Sharma, Jeremie Zumer, Justin Harris, Emery Fine, Rahul Mehrotra, and Kaheer Suleman. Frames: a corpus for adding memory to goal-oriented dialogue systems. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pp. 207â219, Saarbr¨ucken, Germany, August 2017. Association for Computational Linguistics.
Angela Fan, David Grangier, and Michael Auli. Controllable abstractive summarization. In Proceed- ings of the 2nd Workshop on Neural Machine Translation and Generation, pp. 45â54. Association for Computational Linguistics, 2018.
9
Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. A knowledge-grounded neural conversation model. In Proceedings of the Conference on Association for the Advancement of Artiï¬cial Intelligence (AAAI), 2018.
Matthew Henderson, Blaise Thomson, and Jason D Williams. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pp. 263â272, 2014.
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Lâaszlâo Lukâacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, and Ray Kurzweil. Efï¬cient Natural Language Response Sugges- tion for Smart Reply. arXiv preprint arXiv:1705.00652, 2017.
Jiwei Li, Will Monroe, and Daniel Jurafsky. A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562, 2016.
Pierre-Emmanuel Mazar´e, Samuel Humeau, Martin Raison, and Antoine Bordes. Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 2018. Association for Computational Linguistics.
Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M. Khapra. Towards exploiting background knowledge for building conversation systems. In Proceedings of the 2018 Confer- ence on Empirical Methods in Natural Language Processing, pp. 2322â2332. Association for Computational Linguistics, 2018. URL http://aclweb.org/anthology/D18-1255.
Prasanna Parthasarathi and Joelle Pineau. Extending neural generative conversational model using external knowledge sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 2018.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383â2392, Austin, Texas, November 2016. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 1715â1725, Berlin, Germany, August 2016. Association for Computational Lin- guistics.
Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, and Joelle Pineau. Generative deep neural net- works for dialogue: A short review. arXiv preprint arXiv:1611.06216, 2016.
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714, 2015.
Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440â2448, 2015.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104â3112, 2014.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, In Advances in Neural Infor- Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. mation Processing Systems, pp. 5998â6008, 2017.
Ashwin K. Vijayakumar, Michael Cogswell, Ramprasaath R. Selvaraju, Qing Sun, Stefan Lee, David J. Crandall, and Dhruv Batra. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424, 2016.
Oriol Vinyals and Quoc Le. A neural conversational model. In Proceedings of the 31st International Conference on Machine Learning, Deep Learning Workshop, Lille, France, 2015.
10
Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562, 2016.
Ledell Wu, Adam Fisch, Sumit Chopra, Keith Adams, Antoine Bordes, and Jason Weston. Starspace: Embed all the things! arXiv preprint arXiv:1709.03856, 2017.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. Per- sonalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pp. 2204â2213, Melbourne, Australia, July 2018. Association for Computational Linguistics.
Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. Com- monsense knowledge aware conversation generation with graph attention. In Proceedings of the Twenty-Seventh International Joint Conference on Artiï¬cial Intelligence (IJCAI), pp. 4623â4629. International Joint Conferences on Artiï¬cial Intelligence Organization, 7 2018.
11
# A DATASET COLLECTION
A.1 HUMAN ANNOTATION INTERFACE (FOR WIZARD)
Chat with Knowledge! You have just met the other person, who seems quite curious, and you are eager to discuss a topic with them! âYou will try to inform your conversation partner about a topic that âone of you will choose. After a topic is chosen, you will receive information about that topic that will be visible throughout the chat. Passage for Chosen Topic * @ Cupcake CIA cupcake (also British English: fairy cake; Hiberno-English: bun; Australian English: fairy cake or patty cake) is a small cake designed to serve one person, which may be baked ina âsmall thin paper or aluminum cup. CAs with larger cakes, icing and other cake decorations such as fruit and candy may be applied. (The earliest extant description of what is now often called a cupcake was in 1796, when a recipe for "a light cake to bake in small cups" was written in "American Cookery" by Amelia Simmons. The earliest extant documentation of the term âcuncake" Relevant Information Click on a topic below to expand it. Then, click the checkbox next to the sentence that you use to craft your response, or check 'No Sentence Used.' (No Sentence Used Information about your partner's message © Cupeake @ Hostess CupCake Hostess CupCake is a brand of snack cake formerly produced and distributed by Hostess Brands and currently owned by private equity firms Apollo Global Management and Metropoulos & Co. Its most common form is a chocolate cupcake with chocolate icing and vanilla creme filling, with eight distinctive white squiggles across the top. (However, other flavors have been available at times. (Ithas been claimed to be the first commercially produced cupcake and has become an ionic American brand. Information about your message 1 Farley's & Sathers Candy Company OHi-Chew 7 Candy 1 Field ration 1 Candy Candy CHi-5 (Australian band) C Drum kit
SYSTEM: Your partner has selected the topic. Please look to the left to find the relevant information for this topic. Partner: Hi! Do you have any good recipes for cupcakes? SYSTEM: Please take a look at the relevant information to your left and check the appropriate sentence before answering, but try not to copy the sentence as your whole response. You: Hi! You can add fruit and candy to make them even more delicioius! Partner: That's cool! What's your favorite cupcake? SYSTEM: Please take a look at the relevant information to your left and check the appropriate sentence before answering, but try not to copy the sentence as your whole response. | love Hostess cupcakes - they have chocolate icing and vanilla creme filling } send |
12
A.2 WIZARD OF WIKIPEDIA EXAMPLES
Topic: Lifeguard Apprentice: Wizard: So I am a lifeguard. Know anything about saving lives in water? Iâm impressed! Itâs a big responsibility to supervise other peopleâs safety in the water! Tell me more. Apprentice: Well, I help make sure people do not drown or get injured while in or near the water! Knowledge: A lifeguard is a rescuer who supervises the safety and rescue of swimmers, surfers, . . . Lifeguards are strong swimmers and trained in CPR/AED ï¬rst aid, certiï¬ed in water . . . · · · In some areas, the lifeguard service also carries out mountain rescues, or may function as the primary EMS provider. Wizard: Apprentice: Iâve heard that in some places, lifeguards also help with other sorts of emergencies, like mountain rescues! Is that part of your job too? I have! I feel like you know much about this! What brings you to know so much? Wizard: Oh, thatâs about the extent of my knowledge. Iâve just been around beaches and Iâve always admired lifeguards. Iâm not a super strong swimmer myself. Topic: Armadillo Wizard: Apprentice: Wizard: I love animals and think armadillos are awesome with their leathery shell. I donât think Iâve ever seen an armadillo in real life! Iâve seen them at the zoo. Armadillo means little armored one in Spanish. Apprentice: Are they native to a Spanish-speaking part of the world? Knowledge: Armadillos are New World placental mammals in the order Cingulata . . . The word âarmadilloâ means âlittle armoured oneâ in Spanish. · · · The nine-banded armadillo (âDasypus novemcinctusâ), or the nine-banded, long-nosed armadillo, is a medium-sized mammal found in North, Central, and South America. Wizard: Yes, they are most commonly found in North, Central, and South America Topic: Ice cream Wizard: Apprentice: I just love ice cream. I love the types with fruits and ï¬avours. Do you like ice cream? I love Ice cream as much as any one. I especially like Gelato, foreign ice cream! Knowledge Ice cream is a sweetened frozen food typically eaten as a snack or dessert.. . . It is usually made from dairy products, such as milk and cream, and . . . · · · Bacon ice cream (or bacon-and-egg ice cream) is an ice cream generally created by adding bacon to egg custard and freezing the mixture. Wizard: Me too. There are some strange combinations though, have you heard of bacon ice cream? where they add bacon and even egg custard to the freezing mixture! Surprisingly bacon ice cream doesnât surprise me. That doesnât sound appealing to me, but perhaps it could be delicious. . . Apprentice:
Figure 3: The Wizard of Wikipedia dataset. Examples of collected conversations from the dataset, where both wizard and apprentice are humans. The wizard has access to an information retrieval system over Wikipedia, so that they can ask and answer questions, and make statements relevant to the discussion. For each utterance, knowledge retrieval is performed based on dialogue history, 61 knowledge candidates per turn, with wizards clicking no sentence used 6.2% of the time. giving Assuming that a question contains a question mark or begins with âhowâ, âwhyâ, âwhoâ, âwhereâ, âwhatâ or âwhenâ, in the dataset Apprentices ask questions in 13.9% of training set utterances, and answer questions (i.e., the Wizard has asked a question) 39.5% of the time, while saying new or follow-on statements (neither asking nor answering a question) 49.3% of the time. Hence, the wizard and apprentice conduct conversations with a variety of dialogue acts.
A.3 TOPIC SELECTION
To choose between topics that are natural we employed the existing Persona-Chat dataset (Zhang et al., 2018) where crowdworkers where asked to create personas of typical speakers. There are
13
1000 personas, each of which consists of 4-5 sentences describing that personâs interests, e.g. âI â¼ love watching Game of Thronesâ, âI like to eat cheetosâ and âI recently got a catâ. These can thus naturally be seen as topics of interest, and using another set of annotators we mapped each sentence to 1 or more relevant Wikipedia pages, if possible, e.g. âAriel is my favorite princessâ was labeled with the Wikipedia page for The Little Mermaid. As some sentences are harder to connect with Wikipedia, e.g. âI am wittyâ, they are left unlabeled. We thus obtain 1,431 topics in total to use for our task. We retain the persona topic sets and thus present 2-3 related topic choices as conversation starters per dialogue during data collection.
# B ADDITIONAL EXPERIMENTS
B.1 KNOWLEDGE SELECTION
Table 6: Test performance of the Knowledge Selection Tasks. We also tested the performance of our models trained to do the full dialogue task (see Section 5.2) on the knowledge selection task. For our retrieval system, this refers to the performance of the knowledge attention. The results show that our retrieval system could be improved, and the auxiliary loss clearly helps the generative models.
Seen Test Unseen Test Method R@1 F1 R@1 F1 Random IR baseline BoW MemNet 2.7 5.8 23.0 13.5 21.8 36.3 2.3 7.6 8.9 13.1 23.5 22.9 Transformer Transformer (+Reddit pretraining) Transformer (+Reddit pretraining, +SQuAD training) 22.5 24.5 25.5 33.2 36.4 36.2 12.2 23.7 22.9 19.8 35.8 34.2 Retrieval Transformer MemNet (no auxiliary loss) 12.9 24.6 14.6 26.3 Generative E2E Transformer MemNet (no auxiliary loss) Generative E2E Transformer MemNet (w/ auxiliary loss) 13.4 21.1 28.3 32.8 11.8 14.3 25.9 22.8
B.2 FULL DIALOGUE WITH RETRIEVAL
B.3 HUMAN EXPERIMENTS
# C ERROR ANALYSIS
We perform an analysis of the dialogues produced from the human evaluation experiments detailed in Section 5.3. We sample 20 conversations from each experimental setting, split between seen and unseen. Conversations are re-tokenized and lowercased to reduce superï¬cial differences be- tween models, and then analyzed in a single-blind setup. We note of common errors and behaviors exhibited in each of the different conversations.
In general, the human-human conversations are starkly different than any of the bot conversations â humans tend to have more small talk, or use the topic of discussion as a mere icebreaker, with neither human behaving as a wizard. This is in contrast to human-human conversations from the Wizard dataset itself, where one human has access to Wikipedia, and the conversation becomes more grounded in factual sentences. Similarly, all models attempt to play the role of wizard and produce more factual sentences too. In some rounds, humans treat the bot as a sort of question-answer machine, suggesting that the models could be improved by additionally employing SQuAD-like training data.
The retriever without knowledge is particularly prone to non sequiturs, or rapidly changing the subject. During unseen conversations, it is especially likely to discuss something other than the chosen topic. In contrast, the retriever with knowledge tends to stick to the chosen topic strongly, but has difï¬culty if the human changes the subject. Frequently in unseen topics, the retriever with
14
Table 7: Retrieval methods on the full Wizard task. In addition to the models we tested in the paper, we also tested a two-stage retrieval system in which we used our best-performing model on the knowledge selection task to choose a single knowledge sentence to condition on for the dialogue retrieval task. This outperformed our best retrieval method in terms of F1 but not not in terms of Recall@1. Furthermore, we compared these results to a two-stage retrieval system in which the dialogue retrieval module is optimized for seeing the gold chosen knowledge sentence. The performance of this system on the gold knowledge task suggests that the retrieval system could be improved by increasing performance on the knowledge selection subtask.
Predicted Knowledge Gold Knowledge Test Seen Test Unseen Seen Unseen Method R@1 F1 R@1 F1 R@1 R@1 Random IR baseline 1.0 17.8 7.4 12.7 1.0 14.2 7.3 11.6 1.0 73.5 1.0 67.5 BoW MemNet (no knowledge) BoW MemNet 56.1 71.3 14.2 15.6 28.8 33.1 11.6 12.3 56.1 84.5 28.8 66.7 Transformer (no knowledge, w/o Reddit) Transformer (no knowledge, w/ Reddit) 60.8 79.0 13.3 15.0 25.5 54.0 9.7 11.6 60.8 79.0 25.5 54.0 Transformer MemNet (w/ Reddit) Transformer MemNet (w/ Reddit+SQuAD) 86.8 87.4 15.4 15.4 69.8 69.8 12.4 12.4 91.6 92.3 82.3 83.1 Two-stage Transformer (optimized for predicted knowledge) Two-stage Transformer (optimized for gold knowledge) 84.2 79.6 16.2 16.6 63.1 60.1 13.2 13.1 92.3 96.3 83.1 88.3
Table 8: Human Experiments. We calculate the Wiki F1 score for the wizard and apprentice as they appear in the dataset for the sake of comparison to our human evaluations. Note that this differed from the human-human evaluation set-up in the sense that the wizard had direct access to Wikipedia passages in the UI, which explains the higher values of Wiki F1 both for the wizard (who uses Wikipedia) and for the apprentice (who would likely reference that use).
Seen Unseen Method Rating Wiki F1 Rating Wiki F1 Human Performance Wizard Performance (in dataset) Apprentice Performance (in dataset) 4.13 (1.08) - - 11.1 43.3 23.2 4.34 (0.98) - - 10.6 43.1 23.7 Retrieval Transformer (no knowledge) Generative Transformer (no knowledge) 3.33 (1.30) 2.11 (1.11) 19.8 15.3 3.12 (1.34) 2.54 (1.38) 13.7 10.1 Retrieval Transformer MemNet Two-Stage Generative Transformer MemNet 3.43 (1.10) 2.92 (1.33) 23.4 30.0 3.14 (1.31) 2.93 (1.30) 16.3 26.2
knowledge produces similar, but factually inaccurate answers to user queries. For example, when one user asks about parts of Ireland to visit, the model enumerates a list of locations in Greece. Nonetheless, its repertoire of available responses often include inviting responses, allowing the bot to have a more natural conversational ï¬ow. Selected conversations with the retriever with knowledge may be found in Figure 4, for both seen and unseen topics.
The generator without knowledge is particularly prone to many of the typical behaviors of seq2seq systems (Li et al., 2016; Vijayakumar et al., 2016), including local repetition (âcookies are made of ï¬our, oil, oil, and oilâ), global repetition (producing the near same utterance for multiple turns), or inconsistencies in its personality (saying it both likes and dislikes movies). The generator with knowledge has signiï¬cantly fewer issues with repetition, as it errs on the side of copying large fragments from the Wikipedia knowledge. The generator with knowledge can also act as a selï¬sh conversationalist, choosing to respond or detail information without inviting a response. Although it generally produces accurate statements, it sometimes produces statements using an incorrect date,
15
name or word. It also frequently produces formulaic responses, like âI donât know, but I do know that [Wikipedia excerpt]â. Nonetheless, we ï¬nd the generator with knowledge is able to successfully generalize to unseen topics using the knowledge from Wikipedia. Selected conversations with the generator with knowledge may be found in Figure 5.
16
# Model: Retrieval Transformer MemNet
Topic: Arnold Schwarzenegger (seen)
Human: Hey are you into the movie The Running Man Model: Yeah! Thatâs one of my favorite movies! I canât believe that Team America: World Police was made in
2004 - 14 years ago!
Human: Another one of Arnies movies I like is Terminator 2 JUDGMENT DAY Model: The Terminator is probably his most iconic role, but he got his start from the movie âConan the Barbarianâ,
which came out a couple years earlier. I actually havenât seen that one.
Human: Which one? Conan or âDa Terminataâ Model: I like Arnold Schwarzeneggar as well! I thought he was a fantastic governator, serving from 2003-2011. Human: HAha Governator! Yeah, itâs pretty crazy they he held higher ofï¬ce. Model: Yes, he won the LIFETIME ACHIEVEMENT AWARD! Interestingly, he joined the U.S. Army in 1961
and was a paratrooper! (*)
# Topic: Steak (seen)
Human: Hello, itâs lunch time here, and Iâm in the mood for a great steak Model: Wow that sounds delicious I wish I could say the same hahah, When the word âsteakâ is used without
qualiï¬cation, it normally refers to a Beefsteak
Human: Interesting, I donât really have a favorite steak, how about you? Model: I like rib eye it is really good specially medium rare, When the word âsteakâ is used without qualiï¬cation,
it generally refers to a beefsteak
Human: I like rib eye too! What is it about a rib eye that you enjoy? Model: Most steak cuts are very tender meat but I think rib eye can be quite tender Human: They really are. How do you choose a good steak? Model: When it comes to steak and other cuts of meat you have rare, medium rare, medium, well done and so on Human: How do you prefer your steak prepared? Model: I like that you can prepare it many different ways. You can fried it, grill it, or even broil the steak
Topic: Ireland (unseen) Model: Hi, have you ever been to Dublin? It is the capital and largest city in Ireland. Human: No, I havenât but would love to Model: It is the largest city in Northern Ireland. (*) Human: Iâve always found Ireland to be fascinating and would love to visit sometime Model: Same here! It is the capital of Ireland too! Human: What is the most popular place to visit in Ireland? Model: I would recommend the Thrace, Crete, and the Ionian Islands as must see places. (*) Human: I donât know anything about the Ionian Islands Model: It was once referred to as the Hellenic Republic! (*)
Topic: American football (unseen) Model: American football can be traced back to Britain in the mid-19th century. The rules were very similar! Human: I did not know that. Did they wear equipment back then? Model: Am not really certain about it but American football resulted from several major divergences from associa-
tion football and rugby football
Human: The Philadelphia eagles actually have a rugby player on their team this year. he never played football until this year.
Model: Philadelphia is one of a few cities to have all four teams, but they love the Eagles more than anything else by a long shot!
Human: As a life long Philadelphia sports fan I have to agree with you. Model: Iâve always been more of a fan of the American Football team from Pittsburgh, The Steelers! Human: Being from Philadelphia I am not much of a pittsburgh fan. Model: Are the Pittsburgh Steelers in the AFC? Human: Yes they are. the AFC north to be exact.
Topic: List of national parks of the United States (unseen)
Human: I have never visited a national park in the United States, but I would really like to! Model: Me too. Iâd love to live in a national park Human: Which one do you think is the best to visit ï¬rst? Model: The United Nations is an important place to visit, the Statue of Liberty, Central park is great also Human: Is the Statue of Liberty considered a National Park? Model: Yes, the statue of Liberty is there on an island - The Liberty Island. Human: I had no idea that that was a National Park. I have heard Zion National Park is really nice! Model: Yes, it links the shenandoah national park and the great smoky mountains together! (*)
Figure 4: Retriever with Knowledge conversations. Selected conversations with a human. (*) indicates clear factual mistakes by the model.
17
Model: Two-Stage Generative Transformer MemNet
Topic: Circus (seen) Model: have you ever been to a circus? it is a company of performers who put on diverse entertainment shows. Human: I went to the circus once when I was a child actually. Model: thatâs great! entertainment is a form of activity that holds the attention and interest of an audience. Human: Thatâs true. Have you ever been to a circus? Model: I have not, but I know they were built with various types of seating ring, and sometimes a stage. Human: I went to one held in a stadium, so we had box seats. Model: thatâs awesome! iâm sure youâll have to go to the circus, with appropriate music. Human: Yes, circuses usually have their own kind of music. Model: circuses can be done by circuses.
Topic: Granny Smith (seen) Human: I really love Granny Smith apples, theyâre my favorite type of apple. Model: I love granny smith apples. they have hard, light green skin and a crisp ï¬esh. Human: Yes, I really enjoy them. I also like Honeycrisp apples but theyâre so expensive! Model: theyâve been grown for thousands of years in asia and europe, and were brought to north america by euro- pean colonists (*) Human: Oh really? Theyâve been around way longer than I thought! Model: theyâre also consumed raw, itâs one of the most popular cooking apples. Human: I always eat them raw, I hate cooked apples for some reason, even in pie. Model: iâm not sure if itâs cooked, but I do know that they can be cooked raw. Topic: Culture of Chicago (seen) Human: hi do you like chicago Model: I love chicago. itâs the third most populous city in the us. Human: how many people live there? Model: about 2. 7 million residents, it is also the most populous city in the state of illinois. Human: do you like chicago deep dish pizza? Model: chicago-style pizza may be made in deep dish style and as a stuffed pizza. Human: what is the name of the baseball team? Model: danoâs pizzeria specializes in chicago style stuffed pizza. Human: Is Danoâs pizzaria open everyday? Model: yes it is. it is an american style pizza chain headquartered in midtown brook, illinois. Topic: Animals in sport (unseen) Model: i love to watch animals. itâs the most popular form of animal-related sport. Human: horse racing is the most awesome animal sport! Model: horse racing is an equestrian performance sport, typically involving two or more horses ridden by jockeys
or driven over a set distance for competition.
Human: yes, the Kentucky Derby and the triple crown is something I look forward to every year! Model: the horse has evolved over the past 45 to 55 million years from a small multi toed creature Human: that is very interesting, how did people come to domesticate horses for sport? Model: iâm not sure but the domestication is believed to have been widespread by 3000 bc Human: very interesting! I wonder how long people used horses to hunt before deciding to race them. Model: iâm not sure but I do know that horses have been domesticated for thousands of years.
Topic: Dairy farming (unseen)
Human: hi there i was wondering how you felt about farming Model: I do not like dairy farming but I do know that in developed countries, dairy farms typically consist of high
dairy products.
dairy products. Human: oh yes they have alot of cows and push out alot of dairy every hour Model: other species used in commercial dairy farming include goats, sheep, and camels Human: i did not know you could get milk a camel Model: camel milk has supported bedouin Human: oh wow thats very interesting Model: in italy they are growing in popularity to produce an alternative milk source for humans Human: i have had goat milk before Model: the earliest evidence of using domesticated cows for dairy production is the seventh millennium bc-the neolithic era - Human: thats so neat. i would say cow milk is my favorite Model: I love cow milk, especially since the iphone is used in the iphone. Human: iphone and cow milk Model: the iphone was ï¬rst released in 2007. Human: oh ok well thanks for the talk have a great day Model: no problem! the iphone was ï¬rst released in 2007.
Figure 5: Two-stage Generator conversations. Selected conversations with a human. (*) indicates clear factual mistakes by the model.
18 | {
"id": "1506.06714"
} |
1811.00945 | Image Chat: Engaging Grounded Conversations | To achieve the long-term goal of machines being able to engage humans in
conversation, our models should captivate the interest of their speaking
partners. Communication grounded in images, whereby a dialogue is conducted
based on a given photo, is a setup naturally appealing to humans (Hu et al.,
2014). In this work we study large-scale architectures and datasets for this
goal. We test a set of neural architectures using state-of-the-art image and
text representations, considering various ways to fuse the components. To test
such models, we collect a dataset of grounded human-human conversations, where
speakers are asked to play roles given a provided emotional mood or style, as
the use of such traits is also a key factor in engagingness (Guo et al., 2019).
Our dataset, Image-Chat, consists of 202k dialogues over 202k images using 215
possible style traits. Automatic metrics and human evaluations of engagingness
show the efficacy of our approach; in particular, we obtain state-of-the-art
performance on the existing IGC task, and our best performing model is almost
on par with humans on the Image-Chat test set (preferred 47.7% of the time). | http://arxiv.org/pdf/1811.00945 | Kurt Shuster, Samuel Humeau, Antoine Bordes, Jason Weston | cs.CL | ACL 2020 | null | cs.CL | 20181102 | 20200429 | 0 2 0 2
r p A 9 2 ] L C . s c [
2 v 5 4 9 0 0 . 1 1 8 1 : v i X r a
# Image-Chat: Engaging Grounded Conversations
# Kurt Shuster, Samuel Humeau, Antoine Bordes, Jason Weston Facebook AI Research {kshuster,samuelhumeau,abordes,jase}@fb.com
# Abstract
To achieve the long-term goal of machines be- ing able to engage humans in conversation, our models should captivate the interest of their speaking partners. Communication grounded in images, whereby a dialogue is conducted based on a given photo, is a setup naturally appealing to humans (Hu et al., 2014). In this work we study large-scale architectures and datasets for this goal. We test a set of neural architectures using state-of-the-art im- age and text representations, considering var- ious ways to fuse the components. To test such models, we collect a dataset of grounded human-human conversations, where speakers are asked to play roles given a provided emo- tional mood or style, as the use of such traits is also a key factor in engagingness (Guo et al., 2019). Our dataset, Image-Chat, consists of 202k dialogues over 202k images using 215 possible style traits. Automatic metrics and hu- man evaluations of engagingness show the ef- ï¬cacy of our approach; in particular, we ob- tain state-of-the-art performance on the exist- ing IGC task, and our best performing model is almost on par with humans on the Image- Chat test set (preferred 47.7% of the time).
# Introduction
A key way for machines to exhibit intelligence is for them to be able to perceive the world around them â and to be able to communicate with humans in natural language about that world. To speak natu- rally with humans it is necessary to understand the natural things that humans say about the world they live in, and to respond in kind. This involves under- standing what they perceive, e.g. the images they see, what those images mean semantically for hu- mans, and how mood and style shapes the language and conversations derived from these observations. In this work we take a step towards these goals by considering grounded dialogue involving open-
ended discussion of a given image, a setting that is naturally fun for humans (Hu et al., 2014), and study neural conversational models for task. In par- ticular, we explore both generative and retrieval models that handle multimodal dialogue by fusing Transformer architectures (Vaswani et al., 2017) for encoding dialogue history and responses and ResNet architectures (He et al., 2016) for encoding images. We propose ways to fuse those modalities together and perform a detailed study including both automatic evaluations, ablations and human evaluations of our models using crowdworkers.
To train and evaluate such models, we collect a large set of human-human crowdworker conversa- tions, with the aim of training a model to engage a human in a similar fashion, consisting of 202k diverse images and 401k utterances over the im- ages, with 215 different style traits (e.g., optimistic, skeptical or frivolous) to promote engaging conver- sation. The dataset is made publicly available in ParlAI (Miller et al., 2017) 1.
Our results show that there is a signiï¬cant gap between state-of-the-art retrieval and generative models on this task. Our best fused retrieval mod- els set a strong baseline, being preferred to hu- man conversationalists 47.7% of the time. We show that both large-scale image and text pre-training, and utilization of style traits, are critical for best results. We then consider transfer to the exist- ing Image Grounded Conversations (IGC) task of Mostafazadeh et al. (2017), where we obtain state- of-the-art results.
# 2 Related Work
The majority of work in dialogue is not grounded in perception, e.g. much recent work explores sequence-to-sequence models or retrieval models for goal-directed (Henderson et al., 2014) or chit-
1http://parl.ai/projects/image_chat
chat tasks (Vinyals and Le, 2015; Zhang et al., 2018). While these tasks are text-based only, many of the techniques developed can likely be trans- ferred for use in multimodal systems, for example using state-of-the-art Transformer representations for text (Mazare et al., 2018) as a sub-component.
In the area of language and vision, one of the most widely studied areas is image captioning, whereby a single utterance is output given an input image. This typically involves producing a factual, descriptive sentence describing the image, in con- trast to producing a conversational utterance as in dialogue. Popular datasets include COCO (Chen et al., 2015) and Flickr30k (Young et al., 2014). Again, a variety of sequence-to-sequence (Vinyals et al., 2015; Xu et al., 2015; Anderson et al., 2018) and retrieval models (Gu et al., 2018; Faghri et al., 2018; Nam et al., 2016) have been applied. These tasks measure the ability of models to understand the content of an image, but not to carry out an en- gaging conversation grounded in perception. Some works have extended image captioning from be- ing purely factual towards more engaging captions by incorporating style while still being single turn, e.g. (Mathews et al., 2018, 2016; Gan et al., 2017; Guo et al., 2019; Shuster et al., 2019). Our work also applies a style component, but concentrates on image-grounded dialogue, rather than image captioning.
Visual question answering (Antol et al., 2015) and visual dialogue (Das et al., 2017) are another set of tasks which employ vision and language. They require the machine to answer factual ques- tions about the contents of the image, either in single turn or dialogue form. They do not attempt to model natural conversation, but rather assess whether the machine can perform basic perception over the image via a series of questions.
There are some works which directly address dia- logue grounded with vision. The work of Pasunuru and Bansal (2018) assesses the ability to execute di- alogue given video of computer soccer games. The work of Huber et al. (2018) investigates the use of sentiment-based visual features and facial expres- sions for emotional image-based dialogue. Perhaps the most related work to ours is Mostafazadeh et al. (2017). Their work considers (visual context, tex- tual context, question, response) tuples, and builds validation and test sets based on 4k eventful images called Image Grounded Conversations (IGC). No training data is provided, but instead the authors
use Twitter for that in their experiments. In contrast, we provide training, validation and testing sets over 202k images for our task (that do not overlap with IGC), and consider a general set of images and dia- logues, not just events and questions plus responses. In our experiments we also show strong transfer ability of our models to the IGC task.
While there are many ways to measure dialogue quality, human engagement is a popular metric. Engagement itself can be measured in many ways (Bohus and Horvitz, 2009; Yu et al., 2016) but here we adopt the common approach of simply asking humans which speaker they ï¬nd more engaging, following other works (Li et al., 2019; Dinan et al., 2020).
# Image-Chat
The IMAGE-CHAT dataset is a large collection of (image, style trait for speaker A, style trait for speaker B, dialogue between A & B) tuples that we collected using crowd-workers, Each dialogue consists of consecutive turns by speaker A and B. No particular constraints are placed on the kinds of utterance, only that we ask the speakers to both use the provided style trait, and to respond to the given image and dialogue history in an engaging way. The goal is not just to build a diagnostic dataset but a basis for training models that humans actually want to engage with.
Style Traits A number of works have shown that style traits for image captioning help provide cre- ative captions (Mathews et al., 2018, 2016; Gan et al., 2017; Shuster et al., 2019). We apply that same principle to image grounded dialogue, con- sidering a set of 215 possible style traits, using an existing set from Shuster et al. (2019). The traits are categorized into three classes: positive (e.g., sweet, happy, eloquent, humble, witty), neutral (e.g., old-fashioned, skeptical, solemn, question- ing) and negative (e.g., anxious, childish, critical, ï¬ckle, frivolous). We apply these to both speakers A and B, who will be assigned different style traits for each given conversation.
Images The images used in our task are ran- domly selected from the YFCC100M Dataset2 (Thomee et al., 2016).
Dialogue For each image, we pick at random two style traits, one for speaker A and one for speaker
# 2https://multimediacommons.wordpress.com/yfcc100m-core-dataset/
A: Peaceful B: Absentminded A: Iâm so thankful for this delicious food.
A: Fearful B: Miserable A: I just heard something out there and I have no idea what it was.
A: Erratic B: Skeptical
A: What is the difference between the forest and the trees? Oh look, dry pave- ment.
B: What is it called again?
B: It was probably a Wolf coming to eat us because you talk too much.
B: I doubt thatâs even a forest, it looks like a line of trees.
A: Not sure but fried goodness.
A: I would never go camping in the woods for this very reason.
A: Thereâs probably more lame pave- ment on the other side!
Figure 1: Some samples from the IMAGE-CHAT training set. For each sample we asked humans to engage in a conversation about the given image, where the two speakers, A and B, each have a given provided style.
B, and collect the dialogue using crowdworkers who are asked to both assume those roles, and to be engaging to the other speaker while doing so. It was emphasized in the data collection instructions that the style trait describes a trait of the speaker, not properties of the content of the image they are discussing. Some examples from the training set are given in Figure 1.
Split Number of Images Number of Dialogues Number of Utterances Style Types Vocabulary Size Tokens per Utterance train 186,782 186,782 355,862 215 46,371 12.3 valid 5,000 5,000 15,000 215 9,561 12.4 test 9,997 9,997 29,991 215 13,550 12.4
Table 1: IMAGE-CHAT dataset statistics.
# 4 Models
Data Quality During data collection crowd- sourcers were manually monitored, checking to ensure they were following the instructions. Poor performers were banned, with comments discarded. A veriï¬cation process was also conducted on a subset of the data, where separate annotators were asked to choose whether the utterance ï¬t the im- age, style, or both, and found that 92.8% of the time it clearly ï¬t the image, and 83.1% the style, and 80.5% both. Note, given that not all utterances should directly reference an image property or in- voke the style, we do not expect 100%.
We consider two major types of dialogue model: retrieval and generative. Both approaches make use of the same components as building blocks. We use three sub-networks for the three modalities of input: (i) an image encoder, (ii) a dialogue history encoder; and (iii) a style encoder. In the retrieval model these are then fed into a combiner module for combining the three modalities. Finally, there is a response encoder for considering candidate re- sponses and this is scored against the combined input representations. An overview of the retrieval archictecture is shown in Figure 2. For the gener- ative model, the three encoders are used as input, and a further decoder Transformer is used for out- putting a token sequence; beam search is applied.
Overall Dataset The overall dataset statistics are given in Table 1. This is a fairly large dialogue dataset compared to other existing publicly avail- able datasets. For example, PersonaChat (Zhang et al., 2018) (which is not grounded in images) con- sists of 162k utterances, while IGC (Mostafazadeh et al., 2017) (grounded in images) consists of 4k of validation and test set examples only, compared to over 400k utterances in IMAGE-CHAT.
Image Encoder We build our models on top of pretrained image features, and compare the perfor- mance of two types of image encoders. The ï¬rst is a residual network with 152 layers described in He et al. (2016) trained on ImageNet (Rus- sakovsky et al., 2015) to classify images among 1000 classes, which we refer to in the rest of the pa-
Image pezzi Dialog History Word level tokenization Jee 4 share Transformer | Weights | Transformer Response Candidate Word level tkentzation Dialogue Encoder Response Encoder v | Ra Feed Forward Netroria Linear linear | ~~ 4 Linear [Personality Encode Share Weights Multi Modal Combiner Product Trained Pretrained Candidate Response Score Fixed
Figure 2: The TRANSRESNETRET multimodal ar- chitecture for grounded dialogue. There are sev- eral options: different image encoders (ResNet152 or ResNeXt-IG-3.5B), text encoders (shared or separate Transformers for history and response), and different multimodal combiners (sum or attention-based).
per as ResNet152 features. We used the implemen- tation provided in the torchvision project (Marcel and Rodriguez, 2010). The second is a ResNeXt 32 à 48d (Xie et al., 2017) trained on 3.5 billion In- stagram pictures following the procedure described by Mahajan et al. (2018), which we refer to in the rest of the paper as ResNeXt-IG-3.5B. The repre- sentation rI of an image I is obtained by using the 2048-dimensional output of the image encoder as input to a feed-forward network: a multi-layer perceptron with ReLU activation units and a ï¬nal layer of 500 dimensions in the retrieval case, and a linear layer in the generative case.
Style Encoder To condition on a given style trait, we embed each trait to an N -dimensional vector to obtain its representation rS. We used N = 500 for retrieval and N = 300 for generation.
Dialogue Encoder The entire dialogue history D is encoded into a ï¬xed size vector rD using a Transformer architecture (Vaswani et al., 2017), followed by a linear layer. Such Transformers have been shown to perform strongly on a variety of dia-
logue tasks previously (Yang et al., 2018; Mazare et al., 2018). We use a Transformer with 4 lay- ers, 300 hidden units, and 6 attention heads. The outputs are pooled (mean) to give a ï¬nal vectorial encoding.
We pretrain the entire encoder following the setup described in Mazare et al. (2018): we train two encoders on a next-utterance retrieval task on a Reddit dataset of dialogues containing 1.7 billion pairs of utterances, where one encodes the context and another the candidates for the next utterance; their dot product indicates the degree of match, and they are trained with negative log-likelihood and k-negative sampling. We then initialize our system using the weights of the candidate encoder only, and then train on our task in either generative or retrieval mode.
# 4.1 Retrieval Models
Multimodal combiner module We consider two possible combiner modules for the inputs:
Multimodal sum combiner (MM-sum): Given an input image, style trait and dialogue (I, S, D), to- gether with a candidate response C, the score of the ï¬nal combination is computed as s(I, S, D, C) = (rI + rS + rD) · rC.
Multimodal attention combiner (MM-att): A more sophisticated approach is to use an atten- tion mechanism to choose which modalities are most relevant for each example by stacking Trans- formers. We concatenate the three representation vectors rI , rS and rD and feed them to a second Transformer (4 attention heads, 2 layers, 500 hid- den units) which performs self-attention over them. The three modalities are thus reweighted by the cor- responding attention weights to give the ï¬nal input representation vector rT , which is used to compute the score for a given candidate using rT · rC.
Response encoder We employ the same Trans- former architecture as in the dialogue encoder for encoding candidate responses. We tried two vari- ants: either sharing or not sharing the weights with the input dialogue encoder.
Training and Inference Given a tuple I, S, D, and a set of candidates (c1, .., cN ), at inference time the predicted utterance is the candidate ci that maximizes the score s(I, S, D, ci). At train- ing time we pass a set of scores through a softmax and train to maximize the log-likelihood of the cor- rect responses. We use mini-batches of 500 training
examples; for each example, we use the gold re- sponses of the other examples of the batch as nega- tives. During ï¬nal human evaluation all candidates from the training set are considered to produce a response (356k candidates in our experiments).
# 4.2 Generative Models
Dialogue Decoder The encoding from the image encoder has a ï¬nal linear layer of dimension 2048 à 300. This projects it to the same size of the to- ken encoding of the dialogue decoder. We thus add it as an extra token at the end of the Transform- ers encoder output. For style, we simply prepend the style to the beginning of the dialogue history, and it is thus encoded in the dialogue encoder. We then treat this as a standard seq2seq Transformer in order to generate dialogue responses.
Training and Inference We train with a batch size of 32 and learning rate of .0001 using adam, and apply beam search with a beam of size 2 and tri- gram blocking at inference time. Hyperparameters are chosen on the validation set.
# 5 Experiments
We test our models on the IMAGE-CHAT and IGC datasets using automatic metrics and human evalu- ations. We analyze the performance of the different module and architecture choices, as well as abla- tion studies to determine the importance of each of the modelâs inputs.
# 5.1 Automatic Evaluation on IMAGE-CHAT
Module Choices We ï¬rst compare various mod- ule conï¬gurations of our TRANSRESNETRET model, and additionally show the results for a sim- ple information retrieval baseline, in which the can- didates are ranked according to their weighted word overlap to the input message. We measure recall at 1 and 5 (R@1/100 and R@5/100) retrieval metrics, where for each sample there are 100 candidates to rank: 99 random candidates chosen from the test set, and the true label. Note that in human evalua- tions we use all the train set candidates.
The results are shown in Table 2. We report the average metrics for the total task, as well as the breakdown of the performance on each turn of di- alogue (turns 1, 2 and 3). The average metrics in- dicate that using the ResNeXt-IG-3.5B image en- coder features improves performance signiï¬cantly across the whole task, as we obtain 50.3% R@1 for our best ResNeXt-IG-3.5B model and only 40.6%
for our best ResNet152 model. When broken down by turn, it appears that the ResNeXt-IG-3.5B fea- tures are particularly important in the ï¬rst round of dialogue, in which only the image and style are con- sidered, as the difference between their best models increases from 9.7% in the full task to 19.5% in the ï¬rst turn. Our baseline multimodal sum com- biner (MM-Sum) outperforms the more sophisti- cated self-attention (MM-Att) combiner, with the latter scoring 49.3% on the full task. Having sepa- rate candidate and dialogue history text encoders also works better than sharing weights.
In subsequent experiments we use the best performing system for our retrieval model. As ResNeXt-IG-3.5B performs best we use that for our generative model going forward as well.
Full & Ablation Study We now perform experi- ments for both retrieval and generative models for the full system, and additionally we remove modal- ities (image, style, and dialogue history). For the generative models we report the ROUGE-L metric. The results are shown in Table 3, which we now analyze.
Turn 1: In the ï¬rst round of dialogue the models produce utterances given the image and style only, as there is no dialogue history yet. For both models, image is more important than style, but using both together helps.
Turn 2: In the second turn, in which a model produces a response to a ï¬rst utterance, the models perform similarly when using only the image or only the dialogue history, while performing poorly with just the style. Any combination of two modal- ities improves the results, with the style + dialogue combination performing slightly higher than the other two. Using all modalities works best.
Turn 3: By the third turn of dialogue, the con- versation history proves to be by far the most important in isolation compared to the other two modalities in isolation. Conditioning on the style+dialogue is the most effective of any combi- nation of two modalities. Again, using all modali- ties still proves best.
# 5.2 Human Evaluations on IMAGE-CHAT
We test our ï¬nal models using human evaluation.
Evaluation Setup We use a set of 500 images from YFCC-100M that are not present in IMAGE- CHAT to build a set of three-round dialogues pair- ing humans with models in conversation. We then
Model Text Encoders R@1 n/a IR Baseline TRANSRESNETRET MM-Att Separate TRANSRESNETRET MM-Sum Separate TRANSRESNETRET MM-Sum Shared Shared TRANSRESNETRET MM-Att TRANSRESNETRET MM-Att Separate TRANSRESNETRET MM-Sum Separate Combiner n/a Image Encoder R@1 n/a ResNet152 ResNet152 ResNeXt-IG-3.5B ResNeXt-IG-3.5B ResNeXt-IG-3.5B ResNeXt-IG-3.5B Turn 1 R@1 - 35.7 34.5 53.6 54.4 53.5 54.0 Turn 2 R@1 - 44.5 46.0 47.0 49.0 50.5 51.9 Turn 3 R@1 - 40.5 41.3 41.3 43.3 43.8 44.8 All R@1 R@5 5.86 2.15 67.0 40.2 67.2 40.6 73.1 47.3 74.2 48.9 74.7 49.3 75.4 50.3
Table 2: Module choices on IMAGE-CHAT. We compare different module variations for TRANSRESNETRET .
Modules Image Only Style Only Dialogue History Only Style + Dialogue (no image) Image + Dialogue (no style) Image + Style (no dialogue) Style + Dialogue + Image (full model) TRANSRESNETRET (R@1/100 ) All Turn 1 28.7 37.6 16.9 18.3 22.3 1.0 35.4 18.3 36.5 37.6 54.0 43.4 50.3 54.0 Turn 2 28.1 15.3 33.7 45.4 39.4 41.1 51.9 Turn 3 20.7 17.0 32.3 43.1 32.6 35.2 44.8 TRANSRESNETGEN (ROUGE-L) All Turn 1 21.8 21.1 21.0 20.2 21.8 18.9 23.1 20.4 22.6 21.3 23.7 23.5 24.3 23.7 Turn 2 21.9 20.9 22.7 24.1 22.8 23.2 24.2 Turn 3 22.4 22.0 23.7 24.8 23.6 23.8 24.9
Table 3: Ablations on IMAGE-CHAT. We compare variants of our best TRANSRESNET generative and retrieval models (ResNeXt-IG-3.5B image encoder, and MM-Sum + separate text encoders for retrieval) where we remove modalities: image, dialogue history and style conditioning, reporting R@1/100 for retrieval and ROUGE-L for generation for dialogue turns 1, 2 and 3 independently, as well as the average over all turns.
conduct evaluations at each round of dialogue for each example in the evaluation set; we have a sepa- rate set of human evaluators look at the provided conversation turns, and ask them to compare two possible utterances for the next turn of conversa- tion, given the image, dialogue history and relevant style (which is the same for both human author and model, so there is no advantage). We ask the evalu- ators in a blind test to choose the âmore engagingâ of the two possible utterances: one from a human, and the other from a model.
Human annotation vs. TRANSRESNET model We compare human-authored utterances to those produced by our models. The human conversa- tions are collected in the same fashion as in IMAGE-CHAT but on test images. As for hu- mans, the model outputs are conditioned on the image, style and previous dialogue history. TRANSRESNETGEN simply generates a response, whereas TRANSRESNETRET retrieves candidate utterances from the IMAGE-CHAT training set. The latter is given a separate set of candidates corre- sponding to the round of dialogue â e.g. when pro- ducing a response to turn 1, the model retrieves from all possible round 1 utterances from the train set (in that case 186,858 possible choices).
The results are shown in Fig. 4, com- paring all models on the ï¬rst round (left): TRANSRESNETGEN and TRANSRESNETRET us-
ing ResNeXt-IG-3.5B, and TRANSRESNETRET using ResNet152 features. As in automatic evalua- tions, ResNet152 features performed more poorly. The retrieval model outperformed the generative model, a result that has been observed in other (text-only) dialogue tasks (Dinan et al., 2019; Zhang et al., 2018). In turn 1, TRANSRESNETRET (ResNeXt-IG-3.5B) has a win rate against hu- mans of 49.4% (difference not signiï¬cant using a binomial two-tailed test, p > 0.5), while both other models are signiï¬cantly outperformed by humans (p < 2 à 10â7 compared to ResNet152 features), showing the importance of our retrieval architecture and image feature choices. We thus compare only TRANSRESNETRET (ResNeXt-IG- 3.5B) to humans in all three turns (Fig. 4, right). That model performs well, with an overall win rate against humans of 47.7% (difference is sig- niï¬cant, p < 7 à 10â5). Example predictions of TRANSRESNETRET (ResNeXt-IG-3.5B) are given in Figure 3.
# 5.3 Transfer to the IGC Task
To test the strength of our task and models we con- sider transfer to the IGC of task of Mostafazadeh et al. (2017). In particular, we focus on their re- sponse task, which provides an image and a dia- logue history of two utterances: a context utterance, followed by a question. The task is to then pro-
Image Turn 1 examples Style Conversation A: Artful A: Monstrous A: Earnest Model predictions: This looks like a painting. Something out of a Norman Rockwell. Some wretched town!. Yeah, we have ï¬nally arrived at Grandpaâs old barnhouse! I canât wait to get started with the restoration! A: Wishful A: Opinionated A: Imaginative Model predictions: I hope one day to have a dog this majestic. This puppy looks cold get him a blanket. Puppies are just the universeâs way of telling us everything will be okay. Model predictions: A: Respectful What an honor to have beautiful places like these to contemplate A: Shy A: Confused natures rocks at their best. While the area looks nice, Iâd stay away, fear of tsunamis leaves me paralyzed at the beach. I donât understand why would you put rocks there? Turn 2 examples A: Empathetic B: Aloof B: Frivolous I understand that an open ï¬eld full of beautiful foliage is a testament to our environment. Model predictions: Itâs just a regular old ï¬eld. So is every grass in the country A: Playful B: Questioning B: Amusing i would jump inside that waterfall Model predictions: Do you really think it is feasible? Well I would try to top you with a backï¬ip. A: Maternal B: Whimsical B: Blunt It must be very dangerous if children play there. Model predictions: Nonsense, this would be a great place to play! I would rather ï¬nd some lawn. Those rocks look uncomfortable Turn 3 examples A: Happy B: Anxious A: Happy This looks like something fun and exciting to do on vacation! oh dear what happens if he gets taken with the wind Model prediction: Ooh, the wind looks perfect to ï¬y this awesome kite! A: Pompous B: Offhand A: Pompous This archway is so dull and boring. I could have done a much better job with the garden in my own back yard. Its about as boring as you!! Model prediction: Please. I could show you much prettier places in my own back yard. A: Sweet
# B: Spontaneous
# A: Sweet
This is so beautiful, I love nature and would love to see this place and meet the people who live there! Then lets go there right now! Just you and me. Model prediction: I wish you could take me there, I would love to go with you!
Figure 3: Example predictions from our TRANSRESNETRET (MM-Sum) model on the evaluation set using all candidates for turns 1â3 . Two speakers A & B with given style traits discuss a photo. The dialogue context before the model prediction is completed by humans, followed by one or more possible model responses, given different style conditioning. The model clearly uses the image, given style and dialogue history in formulating its response.
0.6 Human o = 50.5 sg TransResNet-Gen 504 mmm (ResNeXt-IG-3.5B) 50.3 a TransResNet-Ret i=} 0.2 mmm (ResNet152) = 0.1 TransResNet-Ret (ResNeXt-IG-3.5B) First First First Second Third Round of Comparison
Figure 4: Human evaluations on IMAGE-CHAT. Engag- ingness win rates of pairwise comparisons between hu- man utterances and TRANSRESNETRET (ResNet152 or ResNeXt-IG-3.5B) or TRANSRESNETGEN , com- paring over the rounds of dialogue.
duce a response. This is clearly related to our task, except it focuses on answering questions, which our task does not. Our task is more varied as it was collected in an unconstrained way, unlike in IGC where they were asked to write a question. Nevertheless, assuming a question contains a ? or starts with who, what, when, where, why or how, our dataset contains 40,076 training utterances that are questions (11.3% of the data) and so it could be possible to produce responses to them. Without any ï¬ne-tuning at all, we thus simply took exactly the same best trained models and used them for their question response task as well.
Unfortunately, after contacting the authors of Mostafazadeh et al. (2017) they no longer have the predictions of their model available, nor have they made available the code for their human evalua- tion setup. However, the test set is available. We therefore attempted to reproduce the same setup as in their experiments, which we will also make publicly available upon acceptance.
Automatic Evaluation We measure our best TRANSRESNETGEN modelâs performance on the IGC test set in terms of BLEU-4. The results are shown in Fig. 5 (right). We ï¬nd that our model outperforms the model from Mostafazadeh et al. (2017), achieving a score of 2.30 compared to 1.49.
Human Evaluation We compare the provided human response (from the test set) with 7 vari- ants of our TRANSRESNETRET model (mimicking their setup), whereby we have our model condition on 7 styles for which it performed well on evalu- ations in section 5.2. Annotators rated the quality of responses on a scale from 1 to 3, where 3 is the highest, reporting the mean over â¼2k questions. We then scale that by the score of human authored
06 V&T Gen ; g 2.0 (Mostafazadeh et al,, 2017) c= TransResNet-Ret I oa 153 MM (ResNeXt-IG-3.5B) § r=] TransResNet-Gen g 1.05 mam (ResNeXtIG-3.5B) = ir 0.2 x 0.5 0 Human Eval BLEU-4 Rating Responses
Figure 5: IGC Evaluations. The best model from Mostafazadeh et al. (2017) is compared to our best TRANSRESNETRET and TRASNRESNETGEN mod- els. On the left, annotatorâs ratings of responses from the models are shown as a percentage of the annota- torâs ratings of human responses. On the right, BLEU-4 scores on the response task are shown.
responses, to give a percentage. The results are shown in Fig. 5 (left). Our model narrows the gap between human and model performance, yielding a higher percentage of the human score (62.9% vs. 54.2%). More detailed results and example predic- tions of our model can be found in Appendices E and F, including examples of highly rated and poorly rated outputs from our model.
# 6 Conclusion
This paper presents an approach for improving the way machines can generate grounded conversations that humans ï¬nd engaging. Focusing on the case of chit-chatting about a given image, a naturally useful application for end-users of social dialogue agents, this work shows that our best proposed model can generate grounded dialogues that humans prefer over dialogues with other fellow humans almost half of the time (47.7%). This result is made possi- ble by the creation of a new dataset IMAGE-CHAT3. Our work shows that we are close to having models that humans can relate to in chit-chat con- versations, which could set new ground for social dialogue agents. However, our retrieval models out- performed their generative versions; closing that gap is an important challenge for the community. While our human evaluations were on short con- versations, initial investigations indicate the model as is can extend to longer chats, see Appendix G, which should be studied in future work. The next challenge will also be to combine this engaging- ness with other skills, such as world knowledge (Antol et al., 2015) relation to personal interests (Zhang et al., 2018), and task proï¬ciency.
3http://parl.ai/projects/image_chat
# References
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and vqa. CVPR.
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question an- swering. In Proceedings of the IEEE international conference on computer vision, pages 2425â2433.
Dan Bohus and Eric Horvitz. 2009. Models for multi- party engagement in open-world dialog. In Proceed- ings of the SIGDIAL 2009 Conference: The 10th An- nual Meeting of the Special Interest Group on Dis- course and Dialogue, pages 225â234. Association for Computational Linguistics.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakr- ishna Vedantam, Saurabh Gupta, Piotr Doll´ar, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition.
Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2020. The second conversational in- telligence challenge (convai2). In The NeurIPSâ18 Competition, pages 187â208. Springer.
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of Wikipedia: Knowledge-powered conversational agents. In Proceedings of the International Confer- ence on Learning Representations (ICLR).
Fartash Faghri, David J Fleet, Jamie Ryan Kiros, and Sanja Fidler. 2018. Vse++: Improving visual- semantic embeddings with hard negatives.
Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, and Li Deng. 2017. Stylenet: Generating attrac- tive visual captions with styles. In Proc IEEE Conf on Computer Vision and Pattern Recognition, pages 3137â3146.
J. Gu, J. Cai, S. Joty, L. Niu, and G. Wang. 2018. Look, imagine and match: Improving textual-visual cross- In 2018 modal retrieval with generative models. IEEE/CVF Conference on Computer Vision and Pat- tern Recognition, pages 7181â7189.
Longteng Guo, Jing Liu, Peng Yao, Jiangwei Li, and Hanqing Lu. 2019. Mscap: Multi-style image cap- tioning with unpaired stylized text. In Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition, pages 4204â4213.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- In Proceedings of the IEEE conference on nition. computer vision and pattern recognition.
Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meet- ing of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263â272.
Yuheng Hu, Lydia Manikonda, and Subbarao Kamb- hampati. 2014. What we instagram: A ï¬rst analysis of instagram photo content and user types. In Eighth International AAAI Conference on Weblogs and So- cial Media.
Bernd Huber, Daniel McDuff, Chris Brockett, Michel Galley, and Bill Dolan. 2018. Emotional dialogue generation using image-grounded language models. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, page 277. ACM.
Margaret Li, Jason Weston, and Stephen Roller. 2019. Acute-eval: Improved dialogue evaluation with opti- mized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087.
Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. 2018. Ex- ploring the limits of weakly supervised pretraining. In Computer Vision â ECCV 2018, pages 185â201, Cham. Springer International Publishing.
S´ebastien Marcel and Yann Rodriguez. 2010. Torchvi- In Pro- sion the machine-vision package of torch. ceedings of the 18th ACM International Conference on Multimedia, MM â10, pages 1485â1488. ACM.
Alexander Mathews, Lexing Xie, and Xuming He. 2018. Semstyle: Learning to generate stylised image captions using unaligned text. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8591â8600.
Alexander Patrick Mathews, Lexing Xie, and Xuming He. 2016. Senticap: Generating image descriptions with sentiments. In AAAI, pages 3574â3580.
Pierre-Emmanuel Mazare, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training mil- In Proceed- lions of personalized dialogue agents. ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775â2779, Brussels, Belgium. Association for Computational Linguistics.
A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bor- des, D. Parikh, and J. Weston. 2017. Parlai: A dialog In Empirical Methods research software platform. in Natural Language Processing (EMNLP), pages 79â84.
Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios Spithourakis, and Lucy Vanderwende. 2017. Image-grounded conversations: Multimodal context for natural ques- In Proceedings of tion and response generation. the Eighth International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 462â472, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Hyeonseob Nam, Jung-Woo Ha, and Jeonghee Kim. 2016. Dual attention networks for multimodal rea- soning and matching. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2156â2164.
Ramakanth Pasunuru and Mohit Bansal. 2018. Game- based video-context dialogue. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 125â136, Brussels, Bel- gium. Association for Computational Linguistics.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, An- drej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. Ima- geNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211â252.
Kurt Shuster, Samuel Humeau, Hexiang Hu, Antoine Bordes, and Jason Weston. 2019. Engaging image In The IEEE Confer- captioning via personality. ence on Computer Vision and Pattern Recognition (CVPR).
Bart Thomee, David A. Shamma, Gerald Fried- land, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. 2016. Yfcc100m: The new data in multimedia research. Commun. ACM, 59(2):64â73.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998â6008.
Oriol Vinyals and Quoc Le. 2015. A neural conver- In Proceedings of the 31st Inter- sational model. national Conference on Machine Learning, Deep Learning Workshop, Lille, France.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In Proceedings of the IEEE conference on computer vision and pattern recogni- tion.
S. Xie, R. Girshick, P. Dollr, Z. Tu, and K. He. 2017. Aggregated residual transformations for deep neu- In 2017 IEEE Conference on Com- ral networks. puter Vision and Pattern Recognition (CVPR), pages 5987â5995.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual atten- tion. In International conference on machine learn- ing, pages 2048â2057.
Yinfei Yang, Steve Yuan, Daniel Cer, Sheng-yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Learn- ing semantic textual similarity from conversations. In Proceedings of The Third Workshop on Repre- sentation Learning for NLP, pages 164â174, Mel- bourne, Australia. Association for Computational Linguistics.
Peter Young, Alice Lai, Micah Hodosh, and Julia Hock- enmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic in- ference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67â78.
Zhou Yu, Leah Nicolich-Henkin, Alan W Black, and Alexander Rudnicky. 2016. A wizard-of-oz study on a non-task-oriented dialog systems that reacts to user engagement. In Proceedings of the 17th annual meeting of the Special Interest Group on Discourse and Dialogue, pages 55â63.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you In Proceedings of the 56th An- have pets too? nual Meeting of the Association for Computational Linguistics, pages 2204â2213, Melbourne, Australia. Association for Computational Linguistics.
# A More Details of IGC Evaluations
In this section we describe a few choices we made and implementation details regarding the IGC hu- man evaluation in the section regarding Transfer to the IGC Task.
Multiple Traits In the IGC human evaluation setup from (Mostafazadeh et al., 2017), human an- notators were shown eight choices when rating the quality of responses to questions: seven responses from various models, and one human response. To mirror this setup as closely as possible, we chose seven of our highest performing style traits to con- dition on to display in addition to the human re- sponse. We show the results of each trait in Table 4.
Automatic Evaluation In (Mostafazadeh et al., 2017), the authors provide BLEU scores for their models in an attempt to evaluate their effective- ness via automated metrics. The authors note that the scores are very low, âas is characteristic for tasks with intrinsically diverse outputs.â Addition- ally, it has been shown in (Shuster et al., 2019) that BLEU scores for image captioning retrieval models are generally far lower than those of gener- ative models (as retrieval models do not optimize for such a metric), and yet human evaluations can show the complete opposite results. In fact, in that work retrieval models were shown to be superior to generative models in human evaluations, which is why we adopted them here. For these reasons we omit BLEU scores of our retrieval models on the IGC test set as uninteresting. We do however compare BLEU scores with our generative model in the main paper.
Test Set Size The IGC test set provides the urls to all 2591 images for which (context, question, response) tuples were collected. We were only able to recover 2195 images from this initial set, as some of the urls provided are no longer associated with the corresponding images. Thus, our human evaluations are conducted on this subset.
Style Neutral Charming Extravagant Calm Sweet Spirited Enthusiastic Human Score 1.55 1.55 1.55 1.57 1.58 1.60 1.61 2.55
Table 4: IGC Human Evaluation on responses from our TRANSRESNET MM-SUM model conditioned on var- ious personalities. Responses were rated on a quality scale from 1 to 3, where 3 is the highest.
# B IMAGE-CHAT Human Annotation Setup
Respond to a Comment on an Image Description In this task, you will be shown 5 images, each of which has a comment about the image. The goal of this task is to write an engaging response to this comment as if you were continuing a dialog about the image. STEP 1 With each new photo, you will be given a personality trait that you will try to emulate in your response to the comment on the image. For example, you might be given "snarky" or âsentimentalâ. The personality describes YOU, not the picture. It is you who is snarky or sentimental, not the contents of the image nor the original comment about the image. STEP 2 You will then be shown an image and a comment that goes with the image, for which you will write a response in the context of your given personality trait. Please make sure your response has at least three words. Note that these are responses to the comments on the image, and not simply image captions. Reminder - please do not write anything that involves any level of discrimination, racism, sexism and offensive religious/politics comments, otherwise the submission will be rejected. Image Someone wrote the following comment on this image: Peace and tranquility should be more abundant. This greenery evokes those feelings for me and I'm very thankful. Write your response as if you were: Profound
Figure 6: Instructions pane for crowdworkers when collecting the second round of dialogue.
Continue a Dialog on an Image Description In this task, you will imagine that you are speaking with your friend about 5 separate images. For each image, you will be shown "your" initial comment on the image, and your friend's response to the comment. The goal of this task is to write an engaging response to your friend as if you were continuing a dialog about the image. STEP 1 With each new photo, you will be given a personality trait that you will try to emulate in your response. For example, you might be given "adventurous". The personality describes YOU, not the picture. It is you who is adventurous, not the contents of the image. STEP 2 You will then be shown an image, "your" initial comment that goes with the image, and your friend's response. You will continue the dialog by responding to your friend's response in the context of your given personality trait. Please make sure your response has at least three words. Note that these are not simply image captions, but engaging responses. Reminder - please do not write anything that involves any level of discrimination, racism, sexism and offensive religious/politics comments, otherwise the submission will be rejected. Image YOU wrote the following comment on this image: I would be worried about getting cold out there. YOUR FRIEND responded: It's nice to just sit out in the snow and watch it fall. It's like being a whole different world. Write your response as if you were: Maternal (Mother-like)
Figure 7: Instructions pane for crowdworkers when collecting the third round of dialogue.
# C IMAGE-CHAT Human Evaluation Setup
Rank Responses to Image Comments on an Image In this task, you will be shown 5 images, and a short discussion about each image. The goal of this task is to pick which of two responses is the most engaging (interesting, captivating, attention-grabbing). STEP 1 You will be shown an image, a short discussion, and a response. Additionally, you may be shown the personality of the person who wrote the response. E.g., you may be shown an image of a tree, and the following discussion: 1. Comment: âWhat an absolutely beautiful tree! | would put this in my living room it's so extravagant!" 2. Response: âI bet | could climb that tree" 3. Responses to Evaluate: (a) | don't think you could, (b) Let me help you try! And, you may be shown a personality, e.g. âCheerfulâ. STEP 2 You will choose which response is more engaging. E.g. in the example above, the second response (b) is more engaging than the first. Person 1: This is so beautiful, | love nature and would love to see this place and meet the people who live there! Person 2: Then lets go there right now! Just you and me. Personality of Person 1: Sweet Person 1 Response: Aww that's nice. | want to go on your boat with you! Aww that would be nice just the two of us.
Figure 8: Instructions pane for crowdworkers when col- lecting the IMAGE-CHAT Evaluations.
# D IGC Human Evaluation Setup
Rate Quality of Responses STEP 1 You will be shown an image, some textual context, a questions in response to the textual context, and a set of candidate responses to the question. E.g., you may be shown an image of a tree; some textual context, i.e. "An amazing tree for climbing."; a question, "Do you think you could really climb that tree?"; and, a set of candidate responses: 1. "Are you kidding? | could climb that tree in my sleep." 2. "Is it time for dinner yet?" STEP 2 You will rate each candidate response on a scale from 1 to 3, where 3 is the highest quality and 1 is the lowest quality. E.g. in the example above, you might give the first question a '3" rating and the second question a "1" rating. Image Textual Context: | so love lightening storms. Question: What is your favorite thing about them? Responses Responses There's always something cosy and romantic about watching a thunderstorm from inside. That lightning, striking its destination with amazing force and tenacity. Whoa! Lightning is awesome! Lightning is very dangerous, especially when you are outside. How there is no end to how the lightening can fork out from itself. Submit Choice Ratings (1 is lowest quality, 3 is highest quality) 1 20 3 1 2°) 3 1 2;)3 1 273 ~1 2743
Responses There's always something cosy and romantic about watching a thunderstorm from inside. That lightning, striking its destination with amazing force and tenacity. Whoa! Lightning is awesome! Lightning is very dangerous, especially when you are outside. How there is no end to how the lightening can fork out from itself. Ratings (1 is lowest quality, 3 is highest quality) 1 20 3 1 2°) 3 1 2;)3 1 273 ~1 2743
Figure 9: Instructions pane for crowdworkers when col- lecting the IGC Evaluations.
# E Highly Rated Examples from IGC
Image IGC Round Output Context Question These ï¬owers are growing in my back yard. What type of ï¬owers are they? I donât know but they are so pretty. TransResNet MM-Sum I donât know but these ï¬owers are gorgeous and look so bright! Human Response Context Question I bought this at a ï¬ea market. What is this for? Human Response I think itâs a mortar and pestle. TransResNet MM-Sum Iâm not sure, but you could sell it for some cash! Context Question Iâd love to visit this city. Is this some city in Europe? Human Response It deï¬nitely looks like it. TransResNet MM-Sum Oh my goodness, yes! I love Amsterdam! Context Question These just ï¬ew over my house! Wow, arenât those Blue Angels? Human Response Yeah, they practice in this area for their shows. TransResNet MM-Sum These are the Blue Angels, they are truly one of a kind Context Question My nephewâs choir did so well yesterday! That is great. You must be proud Human Response I am so proud, because heâs been working really hard. TransResNet MM-Sum I am so proud of all that they accomplished. Context Question This would be a great place to have a picnic. yes it would it seems very relaxing doesnt it? Human Response Yes, and I would quite enjoy spending a day here. TransResNet MM-Sum I think a picnic is more than a great possibility!
Table 5: Highly rated examples from the IGC dataset test split where TRANSRESNETRET MM-Sum responses were rated the highest (score of 3) by human evaluators.
# F Poorly Rated Examples from IGC
Image IGC Round Output Context Question These are Easter eggs we made for art class. How did you make them? Human Response We started with a glass base and went from there. TransResNet MM-Sum OHMYGOSH EASTER EGG HUNT! I want to hunt for Easter eggs too! Context Question Iâm thinking of getting these shocks on my bike. Are the shocks black in color? Human Response Yes, except for the silver portion at the end. TransResNet MM-Sum Just got my new wheels! Context Question I cannot decide if this is milk or something else. It looks like milk, but it could also be keï¬r. Human Response What is keï¬r? TransResNet MM-Sum You can get all the protein you want form the sugar. Context Question This woman is visiting the grave of a loved one. Where is the cemetery? Human Response The cemetary is at the foot a beautiful quartz mountain. TransResNet MM-Sum Well sheâs just a walking, talking garden!
Table 6: Low rated examples from the IGC dataset test split where TRANSRESNETRET MM-Sum responses were rated the lowest (score of 1) by human evaluators.
# G Longer Chat Example
A: TransResNetRET MM-Sum (Extraordinary) B: Human (no style)
A: Itâs the most beautiful mountain in the world!
B: It looks really cool! I wonder if you can ski on it
A: They are a mountain range Iâve climed many times!
B: You have climbed that? That seems really hard
A: I did it; i conquered this climb in only 7 hours and 10 minutes!
B: Wow, thatâs fast. I would ski down that but I would need a lift to take me up
A: But could you? Could you truly climb this?
B: I really donât think I could
A: Climbing a mountain can give one a special strength, you need to experience it
B: Maybe one day on a smaller mountain
A: It would take hard will and determination to scale that mighty peak
Figure 10: Long-form conversation with the model. The model is given a style here, while the human is not.
# H Additional Ablation Results
Modules Image Only Style Only Dialogue History Only Style + Dialogue (no image) Image + Dialogue (no style) Image + Style (no dialogue) Style + Dialogue + Image (full model) TRANSRESNETGEN (F1) Turn 1 10.8 10.4 9.9 9.6 10.7 12.1 12.3 Turn 2 11.0 9.8 11.4 12.5 11.1 11.6 12.5 Turn 3 11.2 10.4 12.2 13.1 11.7 11.6 13.1 All 11.0 10.2 11.2 11.7 11.2 11.8 12.6 TRANSRESNETGEN (BLEU-4) All Turn 1 1.2 1.1 1.4 1.4 1.6 1.0 1.9 1.5 1.5 1.1 1.6 1.6 1.9 1.7 Turn 2 1.3 1.5 1.9 2.1 1.7 1.5 2.1 Turn 3 1.2 1.4 1.8 2.0 1.6 1.5 2.0
Table 7: Ablations on IMAGE-CHAT. We compare variants of our best TRANSRESNET generative model (ResNeXt- IG-3.5B image encoder) where we remove modalities: image, dialogue history and style conditioning, reporting F1 and BLEU-4 for generation for dialogue turns 1, 2 and 3 independently, as well as the average over all turns. | {
"id": "1909.03087"
} |
1811.00937 | CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge | When answering a question, people often draw upon their rich world knowledge
in addition to the particular context. Recent work has focused primarily on
answering questions given some relevant document or context, and required very
little general background. To investigate question answering with prior
knowledge, we present CommonsenseQA: a challenging new dataset for commonsense
question answering. To capture common sense beyond associations, we extract
from ConceptNet (Speer et al., 2017) multiple target concepts that have the
same semantic relation to a single source concept. Crowd-workers are asked to
author multiple-choice questions that mention the source concept and
discriminate in turn between each of the target concepts. This encourages
workers to create questions with complex semantics that often require prior
knowledge. We create 12,247 questions through this procedure and demonstrate
the difficulty of our task with a large number of strong baselines. Our best
baseline is based on BERT-large (Devlin et al., 2018) and obtains 56% accuracy,
well below human performance, which is 89%. | http://arxiv.org/pdf/1811.00937 | Alon Talmor, Jonathan Herzig, Nicholas Lourie, Jonathan Berant | cs.CL, cs.AI, cs.LG | accepted as a long paper at NAACL 2019 | null | cs.CL | 20181102 | 20190315 | 9 1 0 2
r a M 5 1 ] L C . s c [
2 v 7 3 9 0 0 . 1 1 8 1 : v i X r a
# COMMONSENSEQA: A Question Answering Challenge Targeting Commonsense Knowledge
# Alon Talmorââ,1,2 Jonathan Herzigâ,1 Nicholas Lourie2 Jonathan Berant1,2
1School of Computer Science, Tel-Aviv University 2Allen Institute for Artiï¬cial Intelligence {alontalmor@mail,jonathan.herzig@cs,joberant@cs}.tau.ac.il, nicholasl@allenai.org
# Abstract
When answering a question, people often draw upon their rich world knowledge in addi- tion to the particular context. Recent work has focused primarily on answering questions given some relevant document or context, and required very little general background. To investigate question answering with prior knowledge, we present COMMONSENSEQA: a challenging new dataset for commonsense question answering. To capture common sense beyond associations, we extract from CON- CEPTNET (Speer et al., 2017) multiple target concepts that have the same semantic relation to a single source concept. Crowd-workers are asked to author multiple-choice questions that mention the source concept and discrim- inate in turn between each of the target con- cepts. This encourages workers to create ques- tions with complex semantics that often re- quire prior knowledge. We create 12,247 ques- tions through this procedure and demonstrate the difï¬culty of our task with a large number of strong baselines. Our best baseline is based on BERT-large (Devlin et al., 2018) and ob- tains 56% accuracy, well below human perfor- mance, which is 89%.
a) Sample ConceptNet for specific subgraphs waterfall canyon b) Crowd source corresponding natural language questions and two additional distractors Where on a river can you hold a cup upright to catch water on a sunny day? v waterfall, X bridge, X valley, X pebble, X mountain Where can | stand on a river to see water falling without getting wet? X waterfall, V bridge, X valley, X stream, X bottom /'m crossing the river, my feet are wet but my body is dry, where am 1? X waterfall, X bridge, V valley, X bank, X island
Figure 1: (a) A source concept (in green) and three tar- get concepts (in blue) are sampled from CONCEPTNET (b) Crowd-workers generate three questions, each having one of the target concepts for its answer (/), while the other two tar- gets are not (X). Then, for each question, workers choose an additional distractor from CONCEPTNET (in red), and author one themselves (in purple).
1
# 1 Introduction
When humans answer questions, they capitalize on their common sense and background knowl- edge about spatial relations, causes and effects, scientiï¬c facts and social conventions. For in- stance, given the question âWhere was Simon when he heard the lawn mower?â, one can infer that the lawn mower is close to Simon, and that it is probably outdoors and situated at street level. This type of knowledge seems trivial for humans, but is still out of the reach of current natural lan- guage understanding (NLU) systems.
â The authors contributed equally
Work on Question Answering (QA) has mostly focused on answering factoid questions, where the answer can be found in a given context with lit- tle need for commonsense knowledge (Hermann et al., 2015; Rajpurkar et al., 2016; Nguyen et al., 2016; Joshi et al., 2017). Small benchmarks such as the Winograd Scheme Challenge (Levesque, 2011) and COPA (Roemmele et al., 2011), tar- geted common sense more directly, but have been difï¬cult to collect at scale.
Recently, efforts have been invested in devel- oping large-scale datasets for commonsense rea- soning. In SWAG (Zellers et al., 2018b), given a textual description of an event, a probable sub- sequent event needs to be inferred. However, it has been quickly realized that models trained on large amounts of unlabeled data (Devlin et al.,
2018) capture well this type of information and performance on SWAG is already at human level. VCR (Zellers et al., 2018a) is another very re- cent attempt that focuses on the visual aspects of common sense. Such new attempts highlight the breadth of commonsense phenomena, and make it evident that research on common sense has only scratched the surface. Thus, there is need for datasets and models that will further our under- standing of what is captured by current NLU mod- els, and what are the main lacunae.
In this work, we present COMMONSENSEQA, a new dataset focusing on commonsense ques- tion answering, based on knowledge encoded in CONCEPTNET (Speer et al., 2017). We propose a method for generating commonsense questions at scale by asking crowd workers to author questions that describe the relation between concepts from CONCEPTNET (Figure 1). A crowd worker ob- serves a source concept (âRiverâ in Figure 1) and three target concepts (âWaterfallâ, âBridgeâ, âVal- leyâ) that are all related by the same CONCEPT- NET relation (AtLocation). The worker then authors three questions, one per target concept, such that only that particular target concept is the answer, while the other two distractor concepts are not. This primes the workers to add commonsense knowledge to the question, that separates the tar- get concept from the distractors. Finally, for each question, the worker chooses one additional dis- tractor from CONCEPTNET, and authors another distractor manually. Thus, in total, ï¬ve candidate answers accompany each question.
Because questions are generated freely by workers, they often require background knowl- edge that is trivial to humans but is seldom explic- itly reported on the web due to reporting bias (Gor- don and Van Durme, 2013). Thus, questions in COMMONSENSEQA have a different nature com- pared to prior QA benchmarks, where questions are authored given an input text.
Using our method, we collected 12,247 com- monsense questions. We present an analysis that illustrates the uniqueness of the gathered ques- tions compared to prior work, and the types of commonsense skills that are required for tackling it. We extensively evaluate models on COMMON- SENSEQA, experimenting with pre-trained mod- els, ï¬ne-tuned models, and reading comprehen- sion (RC) models that utilize web snippets ex- tracted from Google search on top of the ques-
tion itself. We ï¬nd that ï¬ne-tuning BERT-LARGE (Devlin et al., 2018) on COMMONSENSEQA ob- tains the best performance, reaching an accuracy of 55.9%. This is substantially lower than human performance, which is 88.9%.
To summarize, our contributions are: 1. A new QA dataset centered around common
sense, containing 12,247 examples.
2. A new method for generating commonsense questions at scale from CONCEPTNET.
3. An empirical evaluation of state-of-the-art NLU models on COMMONSENSEQA, show- ing that humans substantially outperform cur- rent models.
The dataset can be downloaded from www. tau-nlp.org/commonsenseqa. The code for all our baselines is available at github. com/jonathanherzig/commonsenseqa.
# 2 Related Work
Machine common sense, or the knowledge of and ability to reason about an open ended world, has long been acknowledged as a critical component for natural language understanding. Early work sought programs that could reason about an envi- ronment in natural language (McCarthy, 1959), or leverage a world-model for deeper language un- derstanding (Winograd, 1972). Many common- sense representations and inference procedures have been explored (McCarthy and Hayes, 1969; Kowalski and Sergot, 1986) and large-scale com- monsense knowledge-bases have been developed (Lenat, 1995; Speer et al., 2017). However, evalu- ating the degree of common sense possessed by a machine remains difï¬cult.
the Winograd Schema Challenge (Levesque, 2011), asks mod- els to correctly solve paired instances of coref- erence resolution. While the Winograd Schema Challenge remains a tough dataset, the difï¬culty of generating examples has led to only a small available collection of 150 examples. The Choice of Plausible Alternatives (COPA) is a similarly im- portant but small dataset consisting of 500 devel- opment and 500 test questions (Roemmele et al., 2011). Each question asks which of two alterna- tives best reï¬ects a cause or effect relation to the premise. For both datasets, scalability is an issue when evaluating modern modeling approaches.
With the recent adoption of crowdsourcing, sev- eral larger datasets have emerged, focusing on pre-
dicting relations between situations or events in natural language. JHU Ordinal Commonsense In- ference requests a label from 1-5 for the plau- sibility that one situation entails another (Zhang et al., 2017). The Story Cloze Test (also referred to as ROC Stories) pits ground-truth endings to sto- ries against implausible false ones (Mostafazadeh et al., 2016). Interpolating these approaches, Sit- uations with Adversarial Generations (SWAG), asks models to choose the correct description of what happens next after an initial event (Zellers et al., 2018b). LM-based techniques achieve very high performance on the Story Cloze Test and SWAG by ï¬ne-tuning a pre-trained LM on the tar- get task (Radford et al., 2018; Devlin et al., 2018). Investigations of commonsense datasets, and of natural language datasets more generally, have re- vealed the difï¬culty in creating benchmarks that measure the understanding of a program rather than its ability to take advantage of distributional biases, and to model the annotation process (Gu- rurangan et al., 2018; Poliak et al., 2018). Annota- tion artifacts in the Story Cloze Test, for example, allow models to achieve high performance while only looking at the proposed endings and ignor- ing the stories (Schwartz et al., 2017; Cai et al., 2017). Thus, the development of benchmarks for common sense remains a difï¬cult challenge.
Researchers have also investigated question an- swering that utilizes common sense. Science ques- tions often require common sense, and have re- cently received attention (Clark et al., 2018; Mi- haylov et al., 2018; Ostermann et al., 2018); how- ever, they also need specialized scientiï¬c knowl- edge. In contrast to these efforts, our work stud- ies common sense without requiring additional information. SQUABU created a small hand- curated test of common sense and science ques- tions (Davis, 2016), which are difï¬cult for current techniques to solve. In this work, we create simi- larly well-crafted questions but at a larger scale.
# 3 Dataset Generation
Our goal is to develop a method for generating questions that can be easily answered by humans without context, and require commonsense knowl- edge. We generate multiple-choice questions in a process that comprises the following steps.
1. We extract subgraphs from CONCEPTNET, each with one source concept and three tar- get concepts.
x Crowdworkers author questions xR Crowdworkers add distractors Dust in house? (attic, yard, street) Dust in house? (attic, yard, street, bed, desert) Find glass outside? (bar, fork, car) ââ Find glass outside? (bar, fork, car, sand, wine) Makes you happy? (laugh, sad, fall) Makes you happy? (laugh, sad, fal, blue, feel) Extract subgraphs from ConceptNet SE Crowdworkers filter questions by quality =, as = = . dust_} Cattic_} yard) (ueet Dust in house? (attic, yard, ..) > 1.0 os = aCe Find glass outside? (bar, fork, ...) 0.2 X Makes you happy? (laugh, sad, ...) > 0.8 EEC t Filter edges from ConceptNet with rules â Crapey fal Q collect relevant snippets via search engine (© vwstin house? (atic yaa, .) @ Makes you happy? (laugh, sad, ...)
Figure 2: COMMONSENSEQA generation process. The in- put is CONCEPTNET knowledge base, and the output is a set of multiple-choice questions with corresponding relevant context (snippets).
2. We ask crowdsourcing workers to author three questions per subgraph (one per target concept), to add two additional distractors per question, and to verify questionsâ quality. 3. We add textual context to each question by querying a search engine and retrieving web snippets.
The entire data generation process is summarized in Figure 2. We now elaborate on each of the steps:
Extraction from CONCEPTNET CONCEPT- NET is a graph knowledge-base G â C Ã R Ã C, where the nodes C represent natural language con- cepts, and edges R represent commonsense re- lations. Triplets (c1, r, c2) carry commonsense knowledge such as â(gambler, CapableOf, lose money)â. CONCEPTNET contains 32 million triplets. To select a subset of triplets for crowd- sourcing we take the following steps:
1. We ï¬lter triplets with general relations (e.g., RelatedTo) or relations that are already well-explored in NLP (e.g., IsA). In total we use 22 relations.
2. We ï¬lter triplets where one of the concepts is more than four words or not in English. 3. We ï¬lter triplets where the edit distance be-
tween c1 and c2 is too low.
This results in a set of 236,208 triplets (q, r, a), where we call the ï¬rst concept the question con- cept and the second concept the answer concept.
We aim to generate questions that contain the question concept and where the answer is the an- swer concept. To create multiple-choice questions we need to choose distractors for each question.
Sampling distractors at random from CONCEPT- NET is a bad solution, as such distractors are easy to eliminate using simple surface clues.
To remedy this, we propose to create ques- tion sets: for each question concept q and relation r we group three different triplets {(q, r, a1), (q, r, a2), (q, r, a3)} (see Figure 1). This generates three answer concepts that are se- mantically similar and have a similar relation to the question concept q. This primes crowd work- ers to formulate questions that require background knowledge about the concepts in order to answer the question.
The above procedure generates approximately 130,000 triplets (43,000 question sets), for which we can potentially generate questions.
Crowdsourcing questions We used Amazon Mechanical Turk (AMT) workers to generate and validate commonsense questions.
AMT workers saw, for every question set, the question concept and three answer concepts. They were asked to formulate three questions, where all questions contain the question concept. Each question should have as an answer one of the an- swer concepts, but not the other two. To discour- age workers from providing simple surface clues for the answer, they were instructed to avoid us- ing words that have a strong relation to the answer concept, for example, not to use the word âopenâ when the answer is âdoorâ.
Formulating questions for our task is non- trivial. Thus, we only accept annotators for which at least 75% of the questions they formulate pass the veriï¬cation process described below.
Adding additional distractors To make the task more difï¬cult, we ask crowd-workers to add two additional incorrect answers to each formu- lated question. One distractor is selected from a set of answer concepts with the same relation to the question concept in CONCEPTNET (Figure 1, in red). The second distractor is formulated man- ually by the workers themselves (Figure 1, in pur- ple). Workers were encouraged to formulate a dis- tractor that would seem plausible or related to the question but easy for humans to dismiss as incor- rect. In total, each formulated question is accom- panied with ï¬ve candidate answers, including one correct answer and four distractors.
Verifying questions quality We train a disjoint group of workers to verify the generated questions.
Measurement # CONCEPTNET distinct question nodes # CONCEPTNET distinct answer nodes # CONCEPTNET distinct nodes # CONCEPTNET distinct relation lables average question length (tokens) long questions (more than 20 tokens) average answer length (tokens) # answers with more than 1 token # of distinct words in questions # of distinct words in answers
Table 1: Key statistics for COMMONSENSEQA
Veriï¬ers annotate a question as unanswerable, or choose the right answer. Each question is veri- ï¬ed by 2 workers, and only questions veriï¬ed by at least one worker that answered correctly are used. This processes ï¬lters out 15% of the questions.
Adding textual context To examine whether web text is useful for answering commonsense questions, we add textual information to each question in the following way: We issue a web query to Google search for every question and candidate answer, concatenating the answer to the question, e.g., âWhat does a parent tell their child to do after theyâve played with a lot of toys? + âclean roomââ. We take the ï¬rst 100 result snip- pets for each of the ï¬ve answer candidates, yield- ing a context of 500 snippets per question. Using this context, we can investigate the performance of reading comprehension (RC) models on COM- MONSENSEQA.
Overall, we generated 12,247 ï¬nal examples, from a total of 16,242 that were formulated. The total cost per question is $0.33. Table 1 describes the key statistics of COMMONSENSEQA.
# 4 Dataset Analysis
CONCEPTNET concepts and relations COM- MONSENSEQA builds on CONCEPTNET, which contains concepts such as dog, house, or row boat, connected by relations such as Causes, CapableOf, or Antonym. The top-5 ques- tion concepts in COMMONSENSEQA are âPersonâ (3.1%), âPeopleâ (2.0%), âHumanâ (0.7%), âWaterâ (0.5%) and âCatâ (0.5%). In addition, we present the main relations along with the percentage of questions generated from them in Table 2. Itâs worth noting that since question formulators were not shown the CONCEPTNET relation, they often asked questions that probe other relationships be- tween the concepts. For example, the question
Relation AtLocation Causes CapableOf Antonym HasSubevent HasPrerequisite If I am tilting a drink toward my face, what should I do before the liquid spills over? A. open mouth, B. eat ï¬rst, C. use glass , D. ... CausesDesire Desires PartOf HasProperty
Table 2: Top CONCEPTNET relations in COMMONSENSEQA, along with their frequency in the data and an example question. The ï¬rst answer (A) is the correct answer
Q. Where are Rosebushes typically found outside of large buildings? O Has parts Ox Spatial Ox! member of O Building Courtyard Flowers Rosebushes Q. Where would you get a Balalaika if you do not have one? oO Is member of, ~O- O Purpose O Balalaika Music store Spatial Instrument Get instruments Q. I want to use string to keep something from moving, how should I do it? O Spatial +O) *O Cause & effect) Something String Tie around Keep from moving Activit
O Has parts Ox Spatial Ox! member of O Building Courtyard Flowers Rosebushes
oO Is member of, ~O- O Purpose O Balalaika Music store Spatial Instrument Get instruments
O Spatial +O) *O Cause & effect) Something String Tie around Keep from moving Activit
Figure 3: Examples of manually-annotated questions, with the required skills needed to arrive at the answers (red cir- cles). Skills are labeled edges, and concepts are nodes.
Category Spatial Cause & Effect Concept A causes Concept B Has parts Is member of Purpose Social % 41 23 23 Concept A contains Concept B as one of its parts Concept A belongs to the larger class of Concept B 17 18 Concept A is the purpose of Concept B It is a social convention that Concept A 15 correlates with Concept B Concept A is an activity performed in the context of Concept B Concept A is a deï¬nition of Concept B Concept A must hold true in order for Concept B to take place Deï¬nition Concept A appears near Concept B Activity Deï¬nition Preconditions 8 6 3
Table 3: Skills and their frequency in the sampled data. As each example can be annotated with multiple skills, the total frequency does not sum to 100%.
âWhat do audiences clap for?â was generated from the AtLocation relation, but focuses on social conventions instead.
Question formulation Question formulators were instructed to create questions with high language variation. 122 formulators contributed to question generation. However, 10 workers formulated more than 85% of the questions.
monsense skills per questions, with an average of 1.75 skills per question. Figure 3 provides three example annotations. Each annotation contains a node for the answer concept, and other nodes for concepts that appear in the question or latent con- cepts. Labeled edges describe the commonsense skill that relates the two nodes. We deï¬ned com- monsense skills based on the analysis of LoBue and Yates (2011), with slight modiï¬cations to ac- commodate the phenomena in our data. Table 3 presents the skill categories we used, their deï¬ni- tion and their frequency in the analyzed examples.
We analyzed the distribution of ï¬rst and second words in the formulated questions along with ex- ample questions. Figure 4 presents the breakdown. Interestingly, only 44% of the ï¬rst words are WH- words. In about 5% of the questions, formulators used ï¬rst names to create a context story, and in 7% they used the word âifâ to present a hypothet- ical question. This suggests high variability in the question language.
Commonsense Skills To analyze the types of commonsense knowledge needed to correctly an- swer questions in COMMONSENSEQA, we ran- domly sampled 100 examples from the develop- ment set and performed the following analysis.
For each question, we explicitly annotated the types of commonsense skills that a human uses to answer the question. We allow multiple com-
# 5 Baseline Models
Our goal is to collect a dataset of commonsense questions that are easy for humans, but hard for current NLU models. To evaluate this, we experi- ment with multiple baselines. Table 4 summarizes the various baseline types and characterizes them based on (a) whether training is done on COM- MONSENSEQA or the model is fully pre-trained, and (b) whether context (web snippets) is used. We now elaborate on the different baselines. a VECSIM A model that chooses the answer with highest cosine similarity to the question, where the question and answers are represented by an aver- age of pre-trained word embeddings.
b LM1B Inspired by Trinh and Le (2018), we
z 3 s 3 é ) cf $ A & B 8 3 oO 5 & EE 5 2 A 3 ES g < ES By By e doy vit tumegecmed antsy ang had what> â tourist was Probably most ha; capture pictures of snow on wae â0 Ifa person needs food from a dai ity what should ohn buys a 5 pound block of salt Where is he likely to keep it?
# The
Figure 4: Distribution of the ï¬rst and second words in questions. The inner part displays words and their frequency and the outer part provides example questions.
Model Training Context VECSIM LMIB QABILINEAR QACOMPARE ESIM GPT BERT BIDAF++ «x x xx NNNNNN Qo OS
Table 4: Baseline models along with their characteristics. Training states whether the model was trained on COMMON- SENSEQA, or was only trained a different dataset. Context states whether the model uses extra context as input.
employ a large language model (LM) from Joze- fowicz et al. (2016), which was pre-trained on the One Billion Words Benchmark (Chelba et al., In 2013). We use this model in two variations. the ï¬rst (LM1B-CONCAT), we simply concate- nate each answer to the question. In the second (LM1B-REP), we ï¬rst cluster questions according to their ï¬rst two words. Then, we recognize ï¬ve high-frequency preï¬xes that cover 35% of the de- velopment set (e.g., âwhat isâ). We rephrase ques- tions that ï¬t into one of these preï¬xes as a declar- ative sentence that contains the answer. E.g., we rephrase âWhat is usually next to a door?â and the candidate answer âwallâ to âWall is usually next to a doorâ. For questions that do not start with the above preï¬xes, we concatenate the answer as in LM1B-CONCAT. In both variations we return the answer with highest LM probability.
¢ QABILINEAR This model, propsed by Yu et al. (2014) for QA, scores an answer a; with a bilinear model: qWwaj , Where the question g and answers
ai are the average pre-trained word embeddings and W is a learned parameter matrix. A softmax layer over the candidate answers is used to train the model with cross-entropy loss.
d QACOMPARE This model is similar to an NLI model from Liu et al. (2016). The model repre- sents the interaction between the question g and a candidate answer a; as: h = relu([q; aj; ¢@ ai; qâ aj]W + 61), where â;â denotes concatenation and © is element-wise product. Then, the model pre- dicts an answer score using a feed forward layer: hW2 + bo. Average pre-trained embeddings and softmax are used to train the model.
e ESIM We use ESIM, a strong NLI model Similar to Zellers et al. (Chen et al., 2016). (2018b), we change the output layer size to the number of candidate answers, and apply softmax to train with cross-entropy loss.
f BIDAF++ A state-of-the-art RC model, that uses the retrieved Google web snippets (Section 3) as context. We augment BIDAF (Seo et al., 2016) with a self-attention layer and ELMo representa- tions (Peters et al., 2018; Huang et al., 2018). To adapt to the multiple-choice setting, we choose the answer with highest model probability.
g GENERATIVE TRANS- FORMER (GPT) Radford et al. (2018) proposed a method for adapting pre-trained LMs to perform a wide range of tasks. We applied their model to COMMONSENSEQA by encoding each question and its candidate answers as a series of delimiter-
separated sequences. For example, the question âIf you needed a lamp to do your work, where would you put it?â, and the candidate answer âbedroomâ would become â[start] If ... ? [sep] bedroom [end]â. The hidden repre- sentations over each [end] token are converted to logits by a linear transformation and passed through a softmax to produce ï¬nal probabilities for the answers. We used the same pre-trained LM and hyper-parameters for ï¬ne-tuning as Radford et al. (2018) on ROC Stories, except with a batch size of 10.
h BERT Similarly to the GPT, BERT ï¬ne-tunes a language model and currently holds state-of-the- art across a broad range of tasks (Devlin et al., 2018). BERT uses a masked language mod- eling objective, which predicts missing words masked from unlabeled text. To apply BERT to COMMONSENSEQA, we linearize each question- answer pair into a delimiter-separated sequence (i.e., â[CLS] If ... ? [SEP] bedroom [SEP]â) then ï¬ne-tune the pre-trained weights from un- cased BERT-LARGE.1 Similarly to the GPT, the hidden representations over each [CLS] token are run through a softmax layer to create the predic- tions. We used the same hyper-parameters as De- vlin et al. (2018) for SWAG.
# 6 Experiments
Experimental Setup We split the data into a training/development/test set with an 80/10/10 split. We perform two types of splits: (a) ran- dom split â where questions are split uniformly at random, and (b) question concept split â where each of the three sets have disjoint question con- cepts. We empirically ï¬nd (see below) that a ran- dom split is harder for models that learn from COMMONSENSEQA, because the same question concept appears in the training set and develop- ment/test set with different answer concepts, and networks that memorize might fail in such a sce- nario. Since the random split is harder, we con- sider it the primary split of COMMONSENSEQA. We evaluate all models on the test set using ac- curacy (proportion of examples for which predic- tion is correct), and tune hyper-parameters for all trained models on the development set. To under- stand the difï¬culty of the task, we add a SANITY mode, where we replace the hard distractors (that
1The original weights and code released by Google may be found here: https://github.com/google-research/bert
share a relation with the question concept and one formulated by a worker) with random CONCEPT- NET distractors. We expect a reasonable baseline to perform much better in this mode.
For pre-trained word embeddings we consider 300d GloVe embeddings (Pennington et al., 2014) and 300d Numberbatch CONCEPTNET node em- beddings (Speer et al., 2017), which are kept ï¬xed at training time. We also combine ESIM with 1024d ELMo contextual representations, which are also ï¬xed during training.
Human Evaluation To test human accuracy, we created a separate task for which we did not use a qualiï¬cation test, nor used AMT master workers. We sampled 100 random questions and for each question gathered answers from ï¬ve workers that were not involved in question generation. Humans obtain 88.9% accuracy, taking a majority vote for each question.
Results Table 5 presents test set results for all models and setups.
The best baselines are BERT-LARGE and GPT with an accuracy of 55.9% and 45.5%, respec- tively, on the random split (63.6% and 55.5%, re- spectively, on the question concept split). This is well below human accuracy, demonstrating that the benchmark is much easier for humans. Nev- ertheless, this result is much higher than random (20%), showing the ability of language models to store large amounts of information related to com- monsense knowledge.
The top part of Table 5 describes untrained models. We observe that performance is higher than random, but still quite low. The middle part describes models that were trained on COMMON- SENSEQA, where BERT-LARGE obtains best per- formance, as mentioned above. ESIM models follow BERT-LARGE and GPT, and obtain much lower performance. We note that ELMo represen- tations did not improve performance compared to GloVe embeddings, possibly because we were un- able to improve performance by back-propagating into the representations themselves (as we do in BERT-LARGE and GPT). The bottom part shows results for BIDAF++ that uses web snippets as context. We observe that using snippets does not lead to high performance, hinting that they do not carry a lot of useful information.
Performance on the random split is ï¬ve points lower than the question concept split on average
Model VECSIM+NUMBERBATCH LM1B-REP LM1B-CONCAT VECSIM+GLOVE BERT-LARGE GPT ESIM+ELMO ESIM+GLOVE QABILINEAR+GLOVE ESIM+NUMBERBATCH QABILINEAR+NUMBERBATCH QACOMPARE+GLOVE QACOMPARE+NUMBERBATCH BIDAF++ HUMAN Random split Question concept split SANITY 54.9 39.1 35.2 27.1 93.2 88.9 77.8 78.2 71.8 75.1 71.6 71.3 66.8 72.0 Accuracy SANITY Accuracy 54.0 39.6 37.4 26.8 92.3 87.2 76.9 79.1 74.8 74.6 73.3 69.2 60.6 71.0 29.1 26.1 25.3 22.3 55.9 45.5 34.1 32.8 31.5 30.1 28.8 25.7 20.4 32.0 88.9 30.3 26.0 25.3 20.8 63.6 55.5 37.9 40.4 34.2 31.2 32.0 34.1 25.2 38.4
Table 5: Test set accuracy for all models.
Category Formulated question example If someone laughs after surprising them they have a good sense of what? humor Surface How might a automobile get off a freeway? clues Where would you store a pillow case that is not in use? Negation / Where might the stapler be if I cannot ï¬nd it? Antonym How many hours are in a day? Factoid What geographic area is a lizard likely to be? knowledge Where is a well used toy car likely to be found? Bad granularity Where may you be if youâre buying pork chops at a corner shop? Conjunction What can you use to store a book while traveling? Correct answer Distractor exit ramp drawer desk drawer twenty four west texas childâs room iowa suitcase eat ice cream laughter driveway bedroom desktop week ball stopped own home town library of congress fresh cake Accuracy % 77.7 35% 42.8 7% 38.4 13% 35.4 31% 23.8 23% On a hot day what can you do to enjoy something cool and sweet?
Table 6: BERT-LARGE baseline analysis. For each category we provide two examples, the correct answer, one distractor, model accuracy and frequency in the dataset. The predicted answer is in bold.
across all trained models. We hypothesize that this is because having questions in the develop- ment/test set that share a question concept with the training set, but have a different answer, creates difï¬culty for networks that memorize the relation between a question concept and an answer.
Lastly, all SANITY models that were trained on COMMONSENSEQA achieve very high perfor- mance (92% for BERT-LARGE), showing that se- lecting difï¬cult distractors is crucial.
° a © dev accuracy & ° G ° B @ question concept @ random human performance ° o 10? 10* 10° # instances
Baseline analysis To understand the perfor- mance of BERT-LARGE, we analyzed 100 ex- amples from the development set (Table 6). We labeled examples with categories (possibly more than one per example) and then computed the av- erage accuracy of the model for each category.
We found that the model does well (77.7% ac- curacy) on examples where surface clues hint to the correct answer. Examples that involve nega- tion or understanding antonyms have lower accu- racy (42.8%), similarly to examples that require factoid knowledge (38.4%). Accuracy is partic- ularly low in questions where the correct answer has ï¬ner granularity compared to one of the dis- tractors (35.4%), and in cases where the correct
Figure 5: Development accuracy for BERT-LARGE trained with varying amounts of data.
answer needs to meet a conjunction of conditions, and the distractor meets only one of them (23.8%).
Learning Curves To extrapolate how current models might perform with more data, we evalu- ated BERT-large on the development set, training with varying amounts of data. The resulting learn- ing curves are plotted in ï¬gure 5. For each training set size, hyper-parameters were identical to sec- tion 5, except the number of epochs was varied to
keep the number of mini-batches during training constant. To deal with learning instabilities, each data point is the best of 3 runs. We observe that the accuracy of BERT-LARGE is expected to be roughly 75% assuming 100k examples, still sub- stantially lower than human performance.
# 7 Conclusion
We present COMMONSENSEQA, a new QA dataset that contains 12,247 examples and aims to test commonsense knowledge. We describe a pro- cess for generating difï¬cult questions at scale us- ing CONCEPTNET, perform a detailed analysis of the dataset, which elucidates the unique properties of our dataset, and extensively evaluate on a strong suite of baselines. We ï¬nd that the best model is a pre-trained LM tuned for our task and obtains 55.9% accuracy, dozens of points lower than hu- man accuracy. We hope that this dataset facili- tates future work in incorporating commonsense knowledge into NLU systems.
# Acknowledgments
We thank the anonymous reviewers for their con- structive feedback. This work was completed in partial fulï¬llment for the PhD degree of Jonathan Herzig, which was also supported by a Google PhD fellowship. This research was partially sup- ported by The Israel Science Foundation grant 942/16, The Blavatnik Computer Science Re- search Fund and The Yandex Initiative for Ma- chine Learning.
# References
Zheng Cai, Lifu Tu, and Kevin Gimpel. 2017. Pay at- tention to the ending: Strong neural baselines for the roc story cloze task. In ACL.
C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson. 2013. One billion word benchmark for measuring progress in statistical lan- guage modeling. arXiv preprint arXiv:1312.3005.
Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Enhanced Hui Jiang, and Diana Inkpen. 2016. lstm for natural language inference. arXiv preprint arXiv:1609.06038.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question an- swering? try arc, the ai2 reasoning challenge.
Ernest Davis. 2016. How to write science questions that are easy for people and hard for computers. AI magazine, 37(1):13â22.
J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv.
Jonathan Gordon and Benjamin Van Durme. 2013. Re- porting bias and knowledge acquisition. In Proceed- ings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC â13, pages 25â30, New York, NY, USA. ACM.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Annotation artifacts in Noah A Smith. 2018. arXiv preprint natural language inference data. arXiv:1803.02324.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su- leyman, and Phil Blunsom. 2015. Teaching ma- chines to read and comprehend. In Advances in Neu- ral Information Processing Systems, pages 1693â 1701.
Hsin-Yuan Huang, Eunsol Choi, and Wen-tau Yih. 2018. Flowqa: Grasping ï¬ow in history for con- versational machine comprehension. arXiv preprint arXiv:1810.06683.
M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised chal- lenge dataset for reading comprehension. In Associ- ation for Computational Linguistics (ACL).
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Exploring arXiv preprint Shazeer, and Yonghui Wu. 2016. the limits of language modeling. arXiv:1602.02410.
R Kowalski and M Sergot. 1986. A logic-based calcu- lus of events. New Gen. Comput., 4(1):67â95.
Douglas B. Lenat. 1995. Cyc: A large-scale invest- ment in knowledge infrastructure. Commun. ACM, 38:32â38.
Hector J. Levesque. 2011. The winograd schema chal- lenge. In AAAI Spring Symposium: Logical Formal- izations of Commonsense Reasoning.
Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. 2016. Learning natural language inference using bidirectional lstm model and inner-attention. arXiv preprint arXiv:1605.09090.
Peter LoBue and Alexander Yates. 2011. Types of common-sense knowledge needed for recognizing textual entailment. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 329â334. Association for Computational Linguistics.
J. McCarthy. 1959. Programs with common sense. In Proceedings of the Teddington Conference on the Mechanization of Thought Processes.
John McCarthy and Patrick J. Hayes. 1969. Some philosophical problems from the standpoint of ar- tiï¬cial intelligence. In B. Meltzer and D. Michie, editors, Machine Intelligence 4, pages 463â502. Ed- inburgh University Press. Reprinted in McC90.
Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct elec- tricity? a new dataset for open book question an- swering.
N. Mostafazadeh, N. Chambers, X. He, D. Parikh, D. Batra, L. Vanderwende, P. Kohli, and J. Allen. 2016. A corpus and cloze evaluation for deeper In North understanding of commonsense stories. American Association for Computational Linguis- tics (NAACL).
T. Nguyen, M. Rosenberg, X. Song, J. Gao, S. Tiwary, R. Majumder, and L. Deng. 2016. MS MARCO: A human generated machine reading comprehension In Workshop on Cognitive Computing at dataset. NIPS.
Simon Ostermann, Ashutosh Modi, Michael Roth, Ste- fan Thater, and Manfred Pinkal. 2018. Mcscript: A novel dataset for assessing machine comprehension using script knowledge. CoRR, abs/1803.05223.
J. Pennington, R. Socher, and C. D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP).
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proc. of NAACL.
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language infer- ence. In Proc. of *SEM.
and I. Sutskever. 2018. Improving language understand- ing by generative pre-training. Technical Report, OpenAI.
P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016. Squad: 100,000+ questions for machine comprehen- sion of text. In Empirical Methods in Natural Lan- guage Processing (EMNLP).
M. Roemmele, C. Bejan, and A. Gordon. 2011. Choice of plausible alternatives: An evaluation of common- sense causal reasoning. In AAAI Spring Symposium on Logical Formalizations of Commonsense Rea- soning.
Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, and Noah A. Smith. 2017. The effect of different writing tasks on linguistic style: A case study of the roc story cloze task. In CoNLL.
M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi. 2016. Bidirectional attention ï¬ow for machine com- prehension. arXiv.
Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In AAAI, pages 4444â4451.
O. Tange. 2011. line power tool. 36(1):42â47. Gnu parallel - the command- ;login: The USENIX Magazine,
Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847.
T. Winograd. 1972. Understanding Natural Language. Academic Press.
Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. arXiv preprint arXiv:1412.1632.
Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin From recognition to cognition: arXiv preprint Choi. 2018a. Visual commonsense reasoning. arXiv:1811.10830.
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Swag: A large-scale adversarial Choi. 2018b. dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326.
Sheng Zhang, Rachel Rudinger, Kevin Duh, and Ben- jamin Van Durme. 2017. Ordinal common-sense in- ference. TACL, 5:379â395. | {
"id": "1811.10830"
} |
1811.00720 | Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems | Solving math word problems is a challenging task that requires accurate
natural language understanding to bridge natural language texts and math
expressions. Motivated by the intuition about how human generates the equations
given the problem texts, this paper presents a neural approach to automatically
solve math word problems by operating symbols according to their semantic
meanings in texts. This paper views the process of generating equation as a
bridge between the semantic world and the symbolic world, where the proposed
neural math solver is based on an encoder-decoder framework. In the proposed
model, the encoder is designed to understand the semantics of problems, and the
decoder focuses on tracking semantic meanings of the generated symbols and then
deciding which symbol to generate next. The preliminary experiments are
conducted in a dataset Math23K, and our model significantly outperforms both
the state-of-the-art single model and the best non-retrieval-based model over
about 10% accuracy, demonstrating the effectiveness of bridging the symbolic
and semantic worlds from math word problems. | http://arxiv.org/pdf/1811.00720 | Ting-Rui Chiang, Yun-Nung Chen | cs.CL | null | null | cs.CL | 20181102 | 20190609 | 9 1 0 2
n u J 9 ] L C . s c [
2 v 0 2 7 0 0 . 1 1 8 1 : v i X r a
# Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
# Ting-Rui Chiang Yun-Nung Chen National Taiwan University, Taipei, Taiwan
r07922052@csie.ntu.edu.tw
y.v.chen@ieee.org
r07922052@csie.ntu.edu.tw y.v.chen@ieee.org
# Abstract
Solving math word problems is a challeng- ing task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intu- ition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols accord- ing to their semantic meanings in texts. This paper views the process of generating equa- tions as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoder- decoder framework. In the proposed model, the encoder is designed to understand the se- mantics of problems, and the decoder focuses on tracking semantic meanings of the gener- ated symbols and then deciding which sym- bol to generate next. The preliminary exper- iments are conducted in a benchmark dataset Math23K, and our model signiï¬cantly outper- forms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effec- tiveness of bridging the symbolic and semantic worlds from math word problems.1
meaning of each numerical number in the prob- lem, perform reasoning based on the comprehen- sion in the previous step, and ï¬nally decide what to write in the equation.
Most prior work about solving math word prob- lems relied on hand-crafted features, which re- quired more human knowledge. Because those features are often in the lexical level, it is not clear whether machines really understand the math problems. Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.
This paper considers the reasoning procedure when writing down the associated equation given a problem. Figure 1 illustrates the problem solving process. The illustration shows that human actu- ally assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ â Ã÷). Also, we be- lieve that the semantic meaning of operands can help us decide which operator to use. For example, the summation of âprice of one penâ and ânumber of pens Tom boughtâ is meaningless; therefore the addition would not be chosen.
# Introduction
Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machinesâ abil- ity (Mandal and Naskar, 2019). For human, writ- ing down an equation that solves a math word problem requires the ability of reading compre- hension, reasoning, and sometimes real world un- derstanding. Speciï¬cally, to solve a math word problem, we ï¬rst need to know the goal of the given problem, then understand the semantic
1The source code is available at https://github.
com/MiuLab/E2EMathSolver.
this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of num- bers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands. The contributions of this paper are 4-fold:
⢠This paper is the ï¬rst work that models se- mantic meanings of operands and operators for math word problems.
⢠This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.
How to write down x = (10 â 1 x 5) + 0.5?
+
â
Each notebook takes $0.5 and each pen takes $1.â, Tom has $10. x How many notebooks can he buy after buying 5 een! | | _, $Shasbeen ' $5 remains + +â» Jomcan buy spent on pens . 10 notebooks (40-15) 1x5 (0-1%x5)+0.5 J > Tom can buy x notebooks
Figure 1: The solving process of the math word problem âEach notebok takes $0.5 and each pen takes $1. Tom has $10. How many notebook can he buy after buying 5 pens?â and the associated equation is x = (10 â 1 à 5) ÷ 0.5. The associated equation is x = (10 â 1 à 5) ÷ 0.5.
⢠This paper achieves the state-of-the-art per- formance on the large benchmark dataset Math23K.
⢠This paper is capable of providing interpreta- tion and reasoning for the math word problem solving procedure.
# 2 Related Work
There is a lot of prior work that utilized hand- crafted features, such as POS tags, paths in the de- pendency trees, keywords, etc., to allow the model to focus on the quantities in the problems (Kush- man et al., 2014; Hosseini et al., 2014; Roy et al., 2015; Roy and Roth, 2015; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018). Recently, Mehta et al.; Wang et al.; Ling et al. attempted at learning mod- els without predeï¬ned features. Following the re- cent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.
siï¬ed math word problems into 4 types and used rules to decide the operators accordingly. Wang et al. trained the parser using reinforcement learn- ing with hand-crafted features. Hosseini et al. modeled the problem text as transition of world states, and the equation is generated as the world states changing. Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner. Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017). Ling et al. tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.
This paper belongs to the end-to-end category, but different from the previous work; we are the ï¬rst approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems. Furthermore, the proposed approach is the ï¬rst model that is more interpretable and provides reasoning steps without the need of rational annotations.
Kushman et al. ï¬rst extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the tem- plate. Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017). The prior work highly relied on hu- man knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classiï¬er, working on a hand-crafted âtrigger listâ contain- ing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., 2015, 2016; Koncel-Kedziorski et al., 2015). Shi et al. deï¬ned a Dolphin language to connect math word problems and logical forms, and gen- erated rules to parse math word problems. Upad- hyay et al. parsed math word problems without explicit equation annotations. Roy and Roth clas-
# 3 End-to-End Neural Math Solver
Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multi- ple text spans from the problems into the target information the problems ask for. In the exam- ple shown in Figure 1, all numbers in the problem are attached with the associated semantics. Moti- vated by the observation, we design an encoder to extract the semantic representation of each num- ber in the problem text. Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.) based on their semantics for problem solving, a decoder is de- signed to construct the equation, where the seman- tics is aligned with the representations extracted by the encoder. The idea of the proposed model
=
Each notebook takes $0.5 and each pen OP Return takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens? Apply OP Semantic Transformer ry Operation Selector Operand Selector Tom has S$ 10 Encoder 5 pens ? > uonuany uonuany p> Stack Stack Decoder
Figure 2: The encoder-decoder model architecture of the proposed neural solver machine.
is to imitate the human reasoning process for solv- ing math word problems. The model architecture is illustrated in Figure 2.
BLSTM in (2). Instead, we model their semantic representation eÏ, e1 as parts of the model parame- ters. They are randomly initialized and are learned during model training.
# 3.1 Encoder
The encoder aims to extract the semantic represen- tation of each constant needed for solving prob- lems. However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.
3.1.1 Constant Representation Extraction For each math word problem, we are given a pas- sage consisting of words {wP t=1, whose word embeddings are {eP t=1. The problem text in- cludes some numbers, which we refer as constants. The positions of constants in the problem text are denoted as {pi}n i=1. In order to capture the seman- tic representation of each constant by considering its contexts, a bidirectional long short-term mem- ory (BLSTM) is adopted as the encoder (Hochre- iter and Schmidhuber, 1997):
t , cE hE t = BLSTM(hE tâ1, cE tâ1, eP t ), (1)
and then for the i-th constant in the problem, its semantic representation ec i is modeled by the cor- responding BLSTM output vector:
i = hE ec pi. (2)
3.1.2 External Constant Leveraging External constants, including 1 and Ï, are lever- aged, because they are required to solve a math word problem, but not mentioned in the prob- lem text. Due to their absence from the problem text, we cannot extract their semantic meanings by
# 3.2 Decoder
The decoder aims at constructing the equation that can solve the given problem. We generate the equation by applying stack actions on a stack to mimic the way how human understands an equa- tion. Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term â(10â1Ã5)â in Figure 1. Then what operator to apply on a pair operands can be chosen based on their seman- tic meanings accordingly. Hence we design our model to generate the equation in a postï¬x man- ner: a operator is chosen base on the semantic rep- resentations of two operands the operator is going to apply to. Note that the operands a operator can apply to can be any results generated previously. That is the reason why we use âstackâ as our data structure in order to keep track of the operands a operator is going to apply to. The stack con- tains both symbolic and semantic representations of operands, denoted as S = [(vS lt , eS
1 )], (3) where vS of each pair is the symbolic part, such as x + 1, while eS is the semantic representation, which is a vector. The components in the decoder are shown in the right part of Figure 2, each of which is detailed below.
# 3.3 Decoding State Features
At each decoding step, decisions are made based on features of the current state. At each step, fea-
Push 10 Push 5 Push 1 âBs C D) Encoder & i 0.5] C ») Push x 1 Cc) Generated } [4 | ( y} | 5 C D) 5 C_) Var. Tr) 10 |C » to |C +) to |C +) Aes (22) (ES) (EAE) |SE5 beccceee ee ce ne nn eet Apply x SymPy 0.5 Cc.) 1x5 |C D) t G@o-ixs)=0s|(_)| |[mo-1x5]C | |[fo-1x5]C ~») 10 |C) x= (10-1x5)+05 x C ») x iq ») x C ) x |C » Apply = Apply + Push 0.5 Apply â
Figure 3: Illustration of the inference process. The purple round blocks denote the transformed semantics, while the green ones are generated by the variable generator.
tures rsa and ropd are extracted to select a stack action (section 3.3.2) and an operand to push (sec- tion 3.3.3). Speciï¬cally, the features are the gated concatenation of following vectors:
⢠hD t is the output of an LSTM, which encodes the history of applied actions:
the encoding problem hE 1 , hE 2 , · · · is adopted:
qt = Attention(hD t , {hE i }m i=1), (6)
where the attention function in this paper is deï¬ned as a function with learnable parame- ters w, W, b:
t , cD hD t = LSTM(hD tâ1, cD tâ1, restâ1), (4)
where res;_1 is the result from the previ- ous stack action similar to the seq2seq model (Sutskever et al., 2014). For example, if the previous stack action 0;_; is âpushâ, then res;â1 is the semantic representation pushed into the stack. If the previous stack action 04-1 is to apply an operator ©, then res,_, is the semantic representation generated by fo.
It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages. For example, operating multi- plication is applicable to the combination of âquantity of an itemâ and âprice of an itemâ, while operating addition is not. Consider- ing that all math operators supported here (+, â, Ã, ÷) are binary operators, the seman- tic representations of the stackâs top 2 ele- ments at the time t â 1 are considered:
st = [eS lt; eS lt]. (5)
⢠qt incorporates problem information in the decision. It is believed that the attention mechanism (Luong et al., 2015) can effec- tively capture dependency for longer dis- tance. Thus, the attention mechanism over
m Attention(u, {uj}â¢@,) = S- ajhi, (7) i=l exp(si) «= SE 8 "STE exp(si) â s=wl tanh(W? [u; uj] + 6). (9)
In order to model the dynamic features for dif- is gated as
ferent decoding steps, features in rsa t follows: rsa t ; gsa t,1 · hD t = [gsa t = Ï(W sa · [hD gsa
t,2 · st; gsa t ; st; qt]),
t,3 · qt], (10)
(11)
where Ï is a sigmoid function and W sa is a learned gating parameter. ropd is deï¬ned simi- larly, but with a different learned gating parameter W opd.
3.3.1 Stack Action Selector The stack action selector is to select an stack ac- tion at each decoding step (section 3.3.2) until the unknowns are solved. The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Yt |{yi}tâ1
i=1, {wi}m i=1) (12) = StackActionSelector(rsa t ) = softmax(NN(rsa t )),
where rsa t section 3.3. is decoding state features as deï¬ned in
3.3.2 Stack Actions The available stack actions are listed below:
⢠Variable generation: The semantic repre- sentation of an unknown variable x is gener- ated dynamically as the ï¬rst action in the de- coding process. Note that this procedure pro- vides the ï¬exibility of solving problems with more than one unknown variables. The de- coder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the un- known variable is generated with an attention mechanism:
ex = Attention(hD t , {hE i }m i=1). (13)
⢠Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3). Both the symbolic representation vâ and semantic representation eâ of the chosen operand would be pushed to the stack S in (3). Then the stack state becomes
S = [(vS â , eS â ), (vS lt , eS lt), · · · , (vS 1 , eS 1 )]. (14)
e Operator ¢ application (o ⬠{+,â, x, +}): One stack action pops two elements from the op of the stack, which contains two pairs, (vj, e;) and (v;,e;), and then the associated symbolic operator, vz, = v; > vj, is recorded. Also, a semantic transformation function fo âor that operator is invoked, which generates he semantic representation of uv, by trans- âorming semantic representations of v; and v; 0 ex = fo(e:,e;). Therefore, after an opera- or is applied to the stack specified in (3), the stack state becomes
SoS (SS S =[(vj, > vj,-1, folen, ef -1))) Ss Ss a! (up 2, â¬f,â2),-°> » (uP, e?)]- (5)
⢠Equal application: When the equal appli- cation is chosen, it implies that an equation is completed. This stack action pops 2 tu- ples from the stack, (vi, ei), (vj, ej), and then vi = vj is recorded. If one of them is an unknown variable, the problem is solved. Therefore, after an OP is applied to the stack speciï¬ed in (3), the stack state becomes
S = [(vS ltâ2, eS ltâ2), · · · , (vS 1 , eS 1 )]. (16)
3.3.3 Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push. The operand candidates e include constants provided in the problem text whose semantic representations are ec 1, ec n, unknown variable whose semantic representation is ex, and two external constants 1 and Ï whose semantic representations are e1, eÏ:
e = [ec 1, ec 2, · · · , ec n, e1, eÏ, ex]. (17)
An operand has both symbolic and semantic repre- sentations, but the selection focuses on its seman- tic meaning; this procedure is the same as what human does when solving math word problems.
Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014), the probability of choosing the i-th operand candidate is the attention weights of rt over the semantic rep- resentations of the operand candidates as in (8):
# i=1, {wi}m = OperandSelector(ropd = AttentionWeight(ropd
P (Zt | {yi}tâ1 (18)
i=1) ) , {ei}m
# t
i=1 ⪠{e1, eÏ, ex}), t
and ropd t is deï¬ned in section 3.3.
3.3.4 Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the ca- pability of interpretation and reasoning for the tar- get task. The semantic transformer for an operator © ⬠{+,â,x,+} transforms semantic represen- tations of two operands e1, e2 into
fo(e1, 2) = tanh(U,ReLU(W.[e1; e2]+b.)+co), (19) where W,, U5, bo, Co are model parameters. Se- mantic transformers for different operators have different parameters in order to model different transformations.
# 3.4 Training
Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations. Because our model generates the equation with stack actions, the equation is ï¬rst transformed into its postï¬x representation. Let the postï¬x represen- tation of the target equation be y1, · · · yt, · · · , yT ,
where yt can be either an operator (+, â, Ã, ÷, =) or a target operand. Then for each time step t, the loss can be computed as
L(yt) = L1(push op) + L2(yt) L1(yt) yt is an operand otherwise
where L1 is the stack action selection loss and L2 is the operand selection loss deï¬ned as
L1(yt) = â log P (Yt = yt | {oi}tâ1 L2(yt) = â log P (Zt = yt | rt). i=1, {wi}m i=1),
The objective of our training process is to min- imize the total loss for the whole equation, ve 1 L(yt ).
# 3.5 Inference
When performing inference, at each time step t, the stack action with the highest probability P (Yt|{Ëyi}tâ1 i=1) is chosen. If the chosen stack action is âpushâ, the operand with the high- est probability P (Zt|{ ËYi}tâ1 i=1) is chosen. When the stack has less than 2 elements, the prob- ability of applying operator +, â, Ã, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math ex- pressions. The decoder decodes until the un- known variable can be solved. After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable. The inference procedure example is illustrated in Figure 3. The detailed algorithm can be found in Algorithm 1.
# 4 Experiments
To evaluate the performance of the proposed model, we conduct the experiments on the bench- mark dataset and analyze the learned semantics.
# 4.1 Settings
The experiments are benchmarked on the dataset Math23k (Wang et al., 2017), which con- tains 23,162 math problems with annotated equa- tions. Each problem can be solved by a single- unknown-variable equation and only uses opera- tors +, â, Ã, ÷. Also, except Ï and 1, quantities in the equation can be found in the problem text. There are also other large scale datasets like Dol- phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017), containing 18,460 and 100,000 math word
# Algorithm 1 Training and Inference function SOLVEPROBLEM(problem text) v â ExtractConstants(problem text)
function SOLVEPROBLEM(problem_text) v + ExtractConstants(problem_text) > vis a list of constants in the problem. hehe, ce, E + Encoder(problem-_text) S © Stack() ret, loss, t, equations < padding, 0, 1, {} while not solvable(equations) do hP â LSTM(h?. 1, ce-1, ret) 8, < S.get_top2() h® & Attention(h? 1,hâ) re â [hP, se, hâ] Psa + StackActionSelector(r:) Popa + OperandSelector(r+) if training then > Target equation y is available when training. Yio yt if y%, is operand then loss + loss + Li (push) + Lo(ys) else loss + loss + Li (yz) end if else Y; < StackActionSelector(r?*) if Y; = push then Z, < OperandSelector(r??â) end if end if if Y, = gen_var then e* & Attention(h?, hâ) ret + e* else if Y; = push then S.push(vz,, ez, ) ret < ez, else if Y, ⬠{+, â, x, +} then (va, â¬a), (0; â¬b) = $.pop(), $-pop() S.push(vaÂ¥ive, fy, (ea, â¬b)) ret â fiz (â¬a,â¬8) else if Y; = equal then (va,â¬a); (vss â¬b) = $.pop(), $.pop() equations = equations Uâ vq = vpâ ret + S.top() end if end while return solve(equations) function
,
# end function
problems respectively. The reasons about not eval- uating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains ra- tional for solving the problems, but the equations in the rational are not formal (e.g. mixed with texts, using x to represent Ã, etc.) and incon- sistent. Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset.
# 4.2 Results
The results are shown in Table 1. The retrieval- based methods compare problems in test data with problems in training data, and choose the most
Model Retrieval Classiï¬cation Generation Hybrid
Table 1: 5-fold cross validation results on Math23K.
similar oneâs template to solve the problem (Kush- man et al., 2014; Upadhyay and Chang, 2017). The classiï¬cation-based models choose equation templates by a classiï¬er trained on the training data. Their performance are reported in Robaidek et al.. The seq2seq and hybrid models are from Wang et al., where the former directly maps nat- ural language into symbols in equations, and the latter one ensembles prediction from a seq2seq model and a retrieval-based model. The ensemble is the previous state-of-the-art results of Math23K. Our proposed end-to-end model belongs to the generation category, and the single model perfor- mance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%). In addition, we are the ï¬rst to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones. Among the single model performance, our models ob- tain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017). The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic represen- tations without word boundaries and achieves bet- ter performance.
# 4.3 Ablation Test
To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation. Table 2 shows the ablation re- sults.
Char-Based v.s. Word-Based As reported instead of above, character-based model only causes 0.5% perfor- mance drop. To fairly compare with prior word-
Accuracy Model 65.8% Char-Based 65.3% Word-Based 64.1% Word-Based - Gate 62.5% Word-Based - Gate - Attention 60.1% Word-Based - Gate - Attention - Stack Word-Based - Semantic Transformer 64.1% Word-Based - Semantic Representation 61.7%
Table 2: 5-fold cross validation results of ablation tests.
based models, the following ablation tests are per- formed on the word-based approach.
It uses rt instead of rsa Word-Based - Gate ropr t OperandSelector. t and as the input of both StackActionSelector and
Word-Based - Gate - Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mecha- nism. Removing attention means excluding qtâ1 in (11), so the input of both operator and operand selector becomes rt = [hD t ; st]. The result implies that our model is not better than previous models solely because of the attention.
Word-Based - Gate - Attention - Stack To check the effectiveness of the stack status (st in (11)), the experiments of removing the stack sta- tus from the input of both operator and operand selectors (rt = hD t ) are conducted. The results well justify our idea of choosing operators based on semantic meanings of operands.
Word-Based - Semantic Transformer To val- idate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator © into f,(e1,â¬2) = >, where e, is a learnable parameter and is different for different operators. Therefore, â¬, acts like the embedding of the op- erator ©, and the decoding process is more simi- lar to a general seq2seq model. The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the se- lectors.
Word-Based - Semantic Representation To explicitly evaluate the effectiveness of operandsâ semantic representations, we rewrite semantic rep- resentation of the i-th operand in the problem texts
ss .) Oo S 25 FO Q® x y RN Se
Figure 4: The self-attention map visualization of operandsâ semantic expressions for the problem âThere are 58 bananas. Each basket can contain 6 bananas. How many bananas are needed to be token off such that exactly 9 baskets are ï¬lled?â.
from (2) to ef = b§, where bf is a parameter. Thus for every problem, the representation of the i-th operand is identical, even though their mean- ings in different problems may be different. This modification assumes that no semantic informa- tion is captured by bf, which can merely represent a symbolic placeholder in an equation. Because the semantic transformer is to transform the se- mantic representations, applying this component is meaningless. Here the semantic transformer is also replaced with f.(e1,â¬2) = eo as the setting of the previous ablation test. The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%. The main contribution of this paper about modeling semantic meanings of symbols is vali- dated and well demonstrated here.
# 5 Qualitative Analysis
To further analyze whether the proposed model can provide interpretation and reasoning, we visu- alize the learned semantic representations of con- stants to check where the important cues are,
# 5.1 Constant Embedding Analysis
To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the en- coder. Namely, we rewrite (2) as
i = Attention(hE ec pi, {hE t }m t=1. (20)
Then we check the trained self-attention map (α in the attention function) on the validation dataset.
For some problems, the self-attention that gen- erates semantic representations of constants in the problem concentrates on the numberâs quantiï¬er or unit, and sometimes it also focuses on infor- mative verbs, such as âgainâ, âgetâ, âï¬llâ, etc., in the sentence. For example, Figure 4 shows the at- tention weights for an example math word prob- lem, where lighter colors indicate higher weights.
The numbers â58â and â6â focus more on the quantiï¬er-related words (e.g. âeveryâ and âhow manyâ), while â9â pays higher attention to the verb âï¬llâ. The results are consistent with those hand- craft features for solving math word problems pro- posed by the prior research (Hosseini et al., 2014; Roy and Roth, 2015; Roy et al., 2015). Hence, we demonstrate that the automatically learned seman- tic representations indeed capture critical informa- tion that facilitates solving math word problems without providing human-crafted knowledge.
# 5.2 Decoding Process Visualization
We visualize the attention map (qt in (6)) to see how the attention helps the decoding process. An example is shown in the top of Figure 5, where most attention focuses on the end of the sentence. Unlike the machine translation task, the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.
To further analyze the effectiveness of the pro- posed gating mechanisms for stack action and operand selection, the activation of gates gsa, gopd at each step of the decoding process is shown in the bottom of Figure 5. It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding. We also observe a com- mon phenomenon that the activation gsa 2 , which controls how much attention the stack action se- lector puts on the stack state when deciding an stack action, is usually low until the last âopera- tor applicationâ stack action. For example, in the example of Figure 5, gsa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=). It may re- sult from the higher-level semantics of the operand (6.75â2.75) on the stack when selecting the stack action division operator application (÷). In terms
Problem & Results 红è±æ60æµï¼é»è±æ¯çº¢è±å¤1/6æµï¼é»è±æå¤å°æµï¼ (There are 60 red ï¬owers. Yellow ï¬owers are more than red ones by 1/6. How many yellow ï¬owers are there?) Generated Equation: 60 + 1 6 Correct Answer: 70 ç«è½¦ 48 å°æ¶è¡é©¶ 5920 åç±³ï¼æ±½è½¦ 25 å°æ¶è¡é©¶ 2250 åç±³ï¼æ±½è½¦å¹³åæ¯å°æ¶æ¯ç«è½¦æ¯å°æ¶æ
¢ å¤å° åç±³ ï¼ (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours. How many kilometers per hour is the car slower than the train?) Generated Equation: 2250 ÷ 25 â 5920 ÷ 48 Correct Answer: 33 1 3 å°çº¢åé¢ 5 人ï¼åé¢ 7 人ï¼ä¸å
±æå¤å°äººï¼ (There are 5 people in front of Little Red and 7 people behind. How many persons are there in total?) Generated Equation: 5 + 7 Correct Answer: 13
Table 3: Randomly sampled incorrect predictions.
gv x 6.752.75 - 50 / = noop 6.75 + J d J d 00 . substrates ia d J d d 01 unknown 3 + d I 3 d 03. itis O01 .00 .00 .00 . 00 . 's i 3 J J J 01 5.0 + J d J d 01 times {& . . nee is § J d 5 d 04 . 2.75 J d zs d 03 . to 01 ask 3k d z d zl 00 . unknown 3 + J d 3 d 00 . iinsese OO .01 .00 .01 . .01 -30 BAe . J d -01 .00 .01 <unk> gd. 1.0 1.0 .99 oo SO 06 .02 .09 | age. 1.0 97 .96 .96 gm 1.74 98 EB 32 (50 gir? + 69 48 74 .83 .83 .93 .70 ger" 4.77 .99 1.0 .98 .90 .78 04 .06 gv x 6752.75 - 50 / = noop
gd. 1.0 1.0 .99 oo SO 06 .02 .09 | age. 1.0 97 .96 .96
Figure 5: Word attention and gate activation (gsa and gopd) visualization when generating stack actions for the problem â6.75 deducting 5 times of an unknown number is 2.75. What is the unknown number?â, where the associated equation is x = (6.75 â 2.75) ÷ 5. Note that gopd is meaningful only when the t-th stack action is push op.
of the activation of gopd, we ï¬nd that three features are important in most cases, demonstrating the ef- fectiveness of the proposed mechanisms.
# 5.3 Error Analysis
We randomly sample some results predicted incor- rectly by our model shown in Table 3. In the ï¬rst example, the error is due to the language ambigu- ity, and such ambiguity cannot be resolved without considering the exact value of the number. From the second example, although our model identi- ï¬es the problem as a comparison problem success- fully, it handles the order of the operands incor- rectly. For the third problem, it cannot be solved by using only the surface meaning but requires some common sense. Therefore, above phenom- ena show the difï¬culty of solving math word prob- lems and the large room for improvement.
# 6 Conclusion
We propose an end-to-end neural math solver us- ing an encoder-decoder framework that incorpo- rates semantic representations of numbers in or- der to generate mathematical symbols for solving math word problems. The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empir- ically demonstrate the effectiveness of each com- ponent in the model. In sum, the proposed neu- ral math solver is designed based on how human performs reasoning when writing equations, pro- viding better interpretation without the need of la- beled rationals.
# References
Alex Graves, Greg Wayne, and Ivo Danihelka. arXiv preprint 2014. Neural turing machines. arXiv:1410.5401.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735â1780.
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb catego- rization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing, pages 523â533.
Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H´erve J´egou, and Tomas Mikolov. 2016. Fasttext.zip: Compressing text classiï¬cation models. arXiv preprint arXiv:1612.03651.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: CoRR, A method for stochastic optimization. abs/1412.6980.
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equa- tions. TACL, 3:585â597.
Nate Kushman, Luke Zettlemoyer, Regina Barzilay, and Yoav Artzi. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics, ACL 2014, pages 271â281.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun- som. 2017. Program induction by rationale genera- tion: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguis- tics, ACL 2017, pages 158â167.
Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based In Proceedings of the neural machine translation. 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412â1421.
Sourav Mandal and Sudip Kumar Naskar. 2019. Solv- ing arithmetic mathematical word problems: A re- view and recent advancements. In Information Tech- nology and Applied Mathematics, pages 95â114. Springer.
Purvanshi Mehta, Pruthwik Mishra, Vinayak Athavale, Manish Shrivastava, and Dipti Misra Sharma. 2017. Deep neural network based system for solving arith- In Proceedings of the IJC- metic word problems. NLP 2017, pages 65â68.
Aaron Meurer, Christopher P. Smith, Mateusz Pa- ËCert´ık, Sergey B. Kirpichev, procki, OndËrej Matthew Rocklin, AMiT Kumar, Sergiu Ivanov, Ja- son K. Moore, Sartaj Singh, Thilina Rathnayake,
Sean Vig, Brian E. Granger, Richard P. Muller, Francesco Bonazzi, Harsh Gupta, Shivam Vats, Fredrik Johansson, Fabian Pedregosa, Matthew J. Curry, Andy R. Terrel, ËStËep´an RouËcka, Ashutosh Saboo, Isuru Fernando, Sumith Kulal, Robert Cim- rman, and Anthony Scopatz. 2017. Sympy: sym- bolic computing in python. PeerJ Computer Sci- ence, 3:e103.
Benjamin Robaidek, Rik Koncel-Kedziorski, and Data-driven meth- Hannaneh Hajishirzi. 2018. ods for solving algebra word problems. CoRR, abs/1804.10718.
Subhro Roy and Dan Roth. 2015. Solving general In Proceedings of the arithmetic word problems. 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portu- gal, September 17-21, 2015, pages 1743â1752.
Subhro Roy and Dan Roth. 2018. Mapping to declar- ative knowledge for word problem solving. TACL, 6:159â172.
Subhro Roy, Shyam Upadhyay, and Dan Roth. 2016. Equation parsing : Mapping sentences to grounded equations. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Pro- cessing, EMNLP 2016, Austin, Texas, USA, Novem- ber 1-4, 2016, pages 1088â1097.
Subhro Roy, Tim Vieira, and Dan Roth. 2015. Rea- soning about quantities in natural language. TACL, 3:1â13.
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and rea- soning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, EMNLP 2015, pages 1132â1142.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In Advances in Neural Information Process- ing Systems 27: Annual Conference on Neural Infor- mation Processing Systems 2014, pages 3104â3112.
Shyam Upadhyay and Ming-Wei Chang. 2017. An- notating derivations: A new evaluation strategy and In Proceed- dataset for algebra word problems. ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics, pages 494â504.
Shyam Upadhyay, Ming-Wei Chang, Kai-Wei Chang, and Wen-tau Yih. 2016. Learning from explicit and implicit supervision jointly for algebra word prob- In Proceedings of the 2016 Conference on lems. Empirical Methods in Natural Language Process- ing, pages 297â306.
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan Song, Long Guo, and Heng Tao Shen. 2018. Math- DQN: Solving arithmetic word problems via deep
the reinforcement Thirty-Second AAAI Conference on Artiï¬cial Intel- ligence.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 845â 854.
# A Algorithm Detail
The training and inference procedures are shown in Algortihm 1.
# B Hyperparameter Setup
The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001. Pretrained embeddings using FastText (Joulin et al., 2016) are adopted. The hidden state size of LSTM used in the encoder and decoder is 256. The dimension of hidden layers in attention, semantic transformer and operand/stack action se- lector is 256. The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack ac- tion selector and after the hidden layer of the stack action selector and attention. The reported accu- racy is the result of 5-fold cross-validation, same as Wang et al. for fair comparison.
# C Error Analysis between Seq2Seq
We implement the seq2seq model as proposed by Wang et al. and compare the performance differ- ence between our proposed model and the base- line seq2seq model. Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly. Table 5 show the results our model can predict correctly but seq2seq cannot.
Problem & Results å°çº¢åé¢ 5 人ï¼åé¢ 7 人ï¼ä¸å
±æå¤å°äººï¼ (There are 5 people in front of Little Red and 7 people behind. How many persons are there in total?) Proposed Model: 5 + 7 Seq2Seq Model: 5 + 7 + 1 两个æ°ç¸å·®28ï¼å¦æ被åæ°åå°3ï¼åæ°å¢å 5ï¼é£ä¹å®ä»¬çå·®=ï¼ (The difference between two numbers is 28. If the minuend is reduced by 3, and the subtrahend is increased by 5, then their difference=?) Proposed Model: (28 â 3) ÷ 5 Seq2Seq Model: 28 â (3 + 5) æºåºå第ä¸è½¦é´æ55人ï¼ç¬¬äºè½¦é´æ45人ï¼æ¯äººæ¯å¤©å¹³åç产261个é¶ä»¶ï¼è¿ä¸¤ä¸ª 车é´æ¯ 天å
±ç产å¤å°ä¸ªé¶ä»¶ï¼ (There are 55 people in the ï¬rst workshop of the machine tool factory and 45 people in the second workshop. Each person produces 261 small components per day in average. How many components do the two workshops produce every day in total?) Proposed Model: (55 + 45) ÷ 261 Seq2Seq Model: (55 + 45) à 261 ç®é±¼æ¸¸å¨æ¶çé度æ¯28ç±³/ç§ï¼8ç§å¯ä»¥æ¸¸å¤å°ç±³ï¼ (The swordï¬sh swims at speed 28 meters/sec. How many meters can it swim in 8 seconds?) Proposed Model: 28 ÷ 8 Seq2Seq Model: 28 à 8 æ°´æåºæ梨å387åå
ï¼ååº205åå
åï¼åè¿æ¥ä¸æ¹ï¼ç°å¨æ°´æåºå
±æ梨å945åå
ï¼æ°´æ åºåè¿æ¥æ¢¨åå¤å°åå
ï¼ (The fruit shop has 387 kilograms of pears . After selling 205 kilograms, some pears arrive. Now the fruit shop has 945 kilograms of pears in total. How many kilograms of pears does the fruit shop get?) Proposed Model: 945 à (387 â 205) Seq2Seq Model: 945 â (387 â 205) çèå¸ä¹°æçç¨äº40å
ï¼ä¹°ç¯®çç¨çé±æ°æ¯æçç3åï¼çèå¸ä¹°çä¸å
±ç¨äºå¤å°å
ï¼ (Teacher Wang spent 40 dollars buying volleyballs and 3 times of money for basketballs. How many dollars did Teacher Wang spend for the balls?) Proposed Model: 40 ÷ 3 + 40 Seq2Seq Model: 40 + 40 à 3 çè·¯éä¿®çä¸æ¡é¿1200ç±³çå
¬è·¯ï¼ç²éåç¬ä¿®40天å¯ä»¥å®æä»»å¡ï¼ä¹éåç¬ä¿®30天å¯ä»¥å®æ ä»»å¡ï¼ç²éæ¯å¤©ä¿®çæ¯ä¹éå°å¤å°ç±³ï¼ (The road construction team built a road with a length of 1200 meters. Team A can complete the task in 40 days alone, and team B can complete the task in 30 days alone. How many meters does team A construct more than team B every day?) Proposed Model: 1200 ÷ 40 â 1200 ÷ 30 Seq2Seq Model: 1200 ÷ 30 â 1200 ÷ 40 ä¸å
±1800æ¬ï¼æ们å
年级åå¾2/9ï¼åç»äºå¹´çº§çæ¬æ°ç¸å½äºå
年级ç4/5ï¼äºå¹´çº§åå¾å¤å° æ¬ï¼ (There are 1800 books in total. We sixth grade get 2/9. The number of books given to the ï¬fth grade is equal to 4/5 of the number to the sixth grade. How many books does the ï¬fth grade get?) Proposed Model: 1800 à 2 Seq2Seq Model: 1800 à 2 æä¸æ¹å¸æï¼å¦æåªåä¸è¡£å¯ä»¥å10件ï¼å¦æåªå裤åå¯ä»¥å15æ¡ï¼é£ä¹è¿æ¹å¸æå¯ä»¥åå å¥è¿æ ·çè¡£æï¼ (There is a batch of fabrics. If all is used for making shirts, 10 pieces can be made, and 15 pieces if used to make pants only. Then how many suits of such clothes can be made with this batch of fabric?) Proposed Model: 10 à 1 ÷ 15 Seq2Seq Model: 1 ÷ (1 ÷ 10 + 1 ÷ 15) è´è´çé±ä¹°ä¸æ¬5.9å
ç¬è®°æ¬å·®0.6å
ï¼ä»ä¹°ä¸æ¬4.8å
çï¼å©ä¸çé±æ£å¥½ä¹°ä¸åªåç ç¬ï¼è¿åª åç ç¬å¤å°é±ï¼ (Beibei needs 0.6 dollars more to buy a notebook of 5.9 dollars. If he buys one of 4.8 dollars, the remaining money allows her to buy exactly one ball pen. How much is the ball pen?) Proposed Model: 5.9 + 0.6 â 4.8 Seq2Seq Model: 5.9 â 0.6 â 4.8
9 ÷ 4 9 à 4
5
Table 4: Examples that Seq2Seq predicts correctly while our proposed model predicts incorrectly.
Problem & Results å»é¢éç»å¸¸è¦ç»ç
人è¾å
¥è¡èç³æ°´ï¼è¿ç§è¡èç³æ°´æ¯æè¡èç³åæ°´æ1ï¼19é
å¶çï¼æ ¹æ® è¿äºä¿¡æ¯ï¼ä½ è½ç¥éä»ä¹ï¼ (In hospital, it is often necessary to give glucose injection to patient. This glucose water is prepared by mixing glucose and water at 1:19. Based on this information, what do you know?) Proposed Model: 1 ÷ (1 + 19.0) Seq2Seq Model: 1 à (1 + 19.0) ä¸æ ¹é¿2.45ç±³çæ¨æ¡©æå
¥æ²³åºï¼ç°å¨æµå¾æ¨æ¡©æ°´ä¸é¨åé¿0.75ç±³ï¼æ°´ä¸é¿1.05ç±³ï¼æ±è¿æ ¹ æ¡©æå¨æ³¥ä¸çé¿åº¦=å¤å°ç±³ï¼ (A wooden pile of 2.45 meters long is hammered into the bottom of a river. Now the part above water is measured as 0.75 meters long, and the part in the water is measured as 1.05 meters long. How long is the part of the pile in the mud?) Proposed Model: 2.45 â 0.75 â 1.05 Seq2Seq Model: 2.45 + 0.75 + 1.05 æ强6æ份çç活费为255å
ï¼æ¯è®¡åèçäº15%ï¼èçäºå¤å°å
ï¼ (Li Qiangâs living expenses in June were 255 dollars, 15% savings over the plan. How much did he save?) Proposed Model: (255.0 ÷ (1 â 0.15)) à 0.15 Seq2Seq Model: 0.15 = 6.0/(1 â 255.0) â 6.0 å°è³å¨è®¡ç®ä¸ä¸ªæ°é¤ä»¥10æ¶ï¼å°é¤å·çæäºä¹å·ï¼ç»æå¾3.2ï¼æ£ç¡®çç»æåºè¯¥=ï¼ (When Xiaofang calculates a number divided by 10 , the division sign is mistakenly treated as a multiplica- tion sign, and the result is 3.2. The correct result should be = .) Proposed Model: 3 ÷ 10 ÷ 10 Seq2Seq Model: 3.2 ÷ (1 + 10) 24 + 91 ç 2/13ï¼æå¾çååé¤ 19/20ï¼å = ï¼ (2/13 of 91 + 24, and the sum is divided by 19/20, quotient = ?) Proposed Model: 19 Seq2Seq Model: 19 1/3 + 0.25 = ï¼ (1/3 + 0.25 = ?) Proposed Model: 1 Seq2Seq Model: 1 ååºè¿æ¥é¸¡èåé¸èå7ç®±ï¼é¸¡èæ¯ç®±é26åå
ï¼é¸èæ¯ç®±é31åå
ï¼ååºä¸å
±è¿æ¥ç鸡è åé¸èå
±å¤å°åå
ï¼ (The store shipped 7 boxes of eggs and duck eggs respectively. Eggs weigh 26 kilograms per box, duck eggs weigh 31 kilograms per box. How many kilograms of eggs and duck eggs are shipped from the store in total?) Proposed Model: 26 à 7 + 31 à 7 Seq2Seq Model: 26 à 7 + 31 3.8 - 2.54 + 1.46 = ï¼ (3.8 - 2.54 + 1.46 =) Proposed Model: 3.8 â 2.54 + 1.46 Seq2Seq Model: 3.8 + 2.54 + 1.46 æä¸æ± æ°´ï¼ç¬¬ä¸å¤©æ¾åº200å¨ï¼ç¬¬äºå¤©æ¯ç¬¬ä¸å¤©å¤æ¾20%ï¼ç¬¬3天æ¾äºæ´æ± æ°´ç36%ï¼æ£å¥½å
¨ é¨æ¾å®ï¼è¿æ± æ°´å
± æå¤å°å¨ï¼ (There was a pool of water, which released 200 tons of water in the ï¬rst day, 20% more in the second day than the ï¬rst day, and 36% of the whole pool on the third day. Then the water is gone. How many tons of water did this pool have?) Proposed Model: (200.0 + 200.0 à (1 + 0.2)) ÷ (1 â 0.36) Seq2Seq Model: (200.0 + 0.2) à 3.0 + 0.2 à (1 â 0.36) 16 ç 5/12 æ¯ä¸ä¸ªæ°ç 7 åå¤ 2 ï¼ è¿ä¸ªæ° = ï¼ (5/12 of 16 is more than 7 times of a number by 2. What is the number=?) Proposed Model: (16 à 5 Seq2Seq Model: (16 à 5
Table 5: Examples that Seq2Seq predicts incorrectly while our proposed model predicts correctly. | {
"id": "1612.03651"
} |
1811.01088 | Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks | Pretraining sentence encoders with language modeling and related unsupervised
tasks has recently been shown to be very effective for language understanding
tasks. By supplementing language model-style pretraining with further training
on data-rich supervised tasks, such as natural language inference, we obtain
additional performance improvements on the GLUE benchmark. Applying
supplementary training on BERT (Devlin et al., 2018), we attain a GLUE score of
81.8---the state of the art (as of 02/24/2019) and a 1.4 point improvement over
BERT. We also observe reduced variance across random restarts in this setting.
Our approach yields similar improvements when applied to ELMo (Peters et al.,
2018a) and Radford et al. (2018)'s model. In addition, the benefits of
supplementary training are particularly pronounced in data-constrained regimes,
as we show in experiments with artificially limited training data. | http://arxiv.org/pdf/1811.01088 | Jason Phang, Thibault Févry, Samuel R. Bowman | cs.CL | null | null | cs.CL | 20181102 | 20190227 | 9 1 0 2
b e F 7 2 ] L C . s c [
2 v 8 8 0 1 0 . 1 1 8 1 : v i X r a
# Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks
# Jason Phang1,â jasonphang@nyu.edu
# Thibault F´evry1,â thibault.fevry@nyu.edu
jasonphang@nyu.edu thibault.fevry@nyu.edu
# Samuel R. Bowman1,2,3 bowman@nyu.edu
2Dept. of Linguistics New York University 10 Washington Place New York, NY 10003
1Center for Data Science New York University 60 Fifth Avenue New York, NY 10011
3Dept. of Computer Science New York University 60 Fifth Avenue New York, NY 10011
# Abstract
better than carefully-designed task-speciï¬c mod- els without such pretraining.
Pretraining sentence encoders with language modeling and related unsupervised tasks has recently been shown to be very effective for language understanding tasks. By sup- plementing language model-style pretraining with further training on data-rich supervised tasks, such as natural language inference, we obtain additional performance improvements on the GLUE benchmark. Applying sup- plementary training on BERT (Devlin et al., 2018), we attain a GLUE score of 81.8âthe state of the art1 and a 1.4 point improvement over BERT. We also observe reduced variance across random restarts in this setting. Our ap- proach yields similar improvements when ap- plied to ELMo (Peters et al., 2018a) and Rad- ford et al. (2018)âs model. In addition, the ben- eï¬ts of supplementary training are particularly pronounced in data-constrained regimes, as we show in experiments with artiï¬cially limited training data.
# Introduction
Recent work has shown mounting evidence that pretraining sentence encoder neural networks on unsupervised tasks like language modeling, and then ï¬ne-tuning them on individual target tasks, can yield signiï¬cantly better target task perfor- mance than could be achieved using target task training data alone (Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2018). Large- scale unsupervised pretraining in works like these seems to produce sentence encoders with sub- stantial knowledge of the target language (which, so far, is generally English). These works have shown that the one-size-ï¬ts-all approach of ï¬ne- tuning a large pretrained model with a thin output layer for a given task can achieve results as good or
However, it is not obvious that the model pa- rameters obtained during unsupervised pretraining should be ideally suited to supporting this kind of transfer learning. Especially when only a small amount of training data is available for the target task, ï¬ne-tuning experiments are potentially brit- tle, and rely on the pretrained encoder parame- ters to be reasonably close to an ideal setting for the target task. During target task training, the encoder must learn and adapt enough to be able to solve the target taskâpotentially involving a very different input distribution and output label space than was seen in pretrainingâbut it must also avoid overï¬tting or catastrophic forgetting of what was learned during pretraining.
This work explores the possibility that the use of a second stage of pretraining with data-rich intermediate supervised tasks might mitigate this brittleness, improving both the robustness and effectiveness of the resulting target task model. We name this approach, which is meant to be combined with existing approaches to pretraining, Supplementary Training on Intermediate Labeled- data Tasks (STILTs).
Experiments with sentence encoders on STILTs take the following form: (i) A model is ï¬rst trained on an unlabeled-data task like language modeling that can teach it to reason about the target lan- guage; (ii) The model is then further trained on an intermediate, labeled-data task for which am- ple data is available; (iii) The model is ï¬nally ï¬ne-tuned further on the target task and evaluated. Our experiments evaluate STILTs as a means of improving target task performance on the GLUE benchmark suite (Wang et al., 2018)âa collection of language understanding tasks drawn from the NLP literature.
# âEqual contribution. 1As of 02/24/2019.
We apply STILTs to three separate pretrained
sentence encoders: BERT (Devlin et al., 2018), GPT (Radford et al., 2018), and a variant of ELMo (Peters et al., 2018a). We follow Radford et al. and Devlin et al. in our basic mechanism for ï¬ne- tuning both for the intermediate and ï¬nal tasks, and use the following four intermediate tasks: (i) the Multi-Genre NLI Corpus (MNLI; Williams et al., 2018), (ii) the Stanford NLI Corpus (SNLI; Bowman et al., 2015), (iii) the Quora Question Pairs2 (QQP) dataset, and (iv) a custom fake- sentence-detection task based on the BooksCor- pus dataset (Zhu et al., 2015a) using a method adapted from Warstadt et al. (2018). The use of MNLI and SNLI is motivated by prior work on using natual language inference tasks to pretrain sentence encoders (Conneau et al., 2017; Subra- manian et al., 2018; Bowman et al., 2019). QQP has a similar format and dataset scale, while re- quiring a different notion of sentence similarity. The fake-sentence-detection task is motivated by Warstadt et al.âs analysis on CoLA and linguis- tic acceptability, and adapted for our experiments. These four tasks are a sample of data-rich super- vised tasks that we can use to demonstrate the ben- eï¬ts of STILTs, but they do not represent an ex- haustive exploration of the space of promising in- termediate tasks.
We show that using STILTs yields signiï¬cant gains across most of the GLUE tasks, across all three sentence encoders we used, and claims the state of the art on the overall GLUE benchmark. In addition, for the 24-layer version of BERT, which can require multiple random restarts for good per- formance on target tasks with limited training data, we ï¬nd that STILTs substantially reduces the number of runs with degenerate results across ran- dom restarts. For instance, using STILTs with 5k training examples, we reduce the number of de- generate runs from ï¬ve to one on SST and from two to none on STS. As we expect
that any kind of pretraining will be most valuable in a limited training data regime, we also conduct a set of experiments where a model is ï¬ne-tuned on only 1k- or 5k- example subsamples of the target task training set. The results show that STILTs substantially im- proves model performance across most tasks in this downsampled data setting, even more so than in the full-data setting.
2https://data.quora.com/First-Quora-Dataset-Release- Question-Pairs
# 2 Related Work
In the area of pretraining for sentence encoders, Zhang and Bowman (2018) compare several pre- training tasks for syntactic target tasks, and ï¬nd that language model pretraining reliably performs well. Peters et al. (2018b) investigate the architec- tural choices behind ELMo-style pretraining with a ï¬xed encoder, and ï¬nd that the precise choice of encoder architecture strongly inï¬uences training speed, but has a relatively small impact on perfor- mance. Bowman et al. (2019) compare a variety of tasks for pretraining in an ELMo-style setting with no encoder ï¬ne-tuning. They conclude that language modeling generally works best among candidate single tasks for pretraining, but show some cases in which a cascade of a model pre- trained on language modeling followed by another model pretrained on tasks like MNLI can work well. The paper introducing BERT (Devlin et al., 2018) brieï¬y mentions encouraging results in a di- rection similar to ours: One footnote notes that unpublished experiments show âsubstantial im- provements on RTE from multitask training with MNLI.â
Most prior work uses features from frozen, pre- trained sentence encoders in downstream tasks. A more recent trend of ï¬ne-tuning the whole model for the target task from a pretrained state (Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2018) has led to state-of-the-art results on several benchmarks. For that reason, we focus our analysis on the paradigm of ï¬ne-tuning the whole model for each task.
In the area of sentence-to-vector encoding, Con- neau et al. (2018) offer one of the most compre- hensive suites of diagnostic tasks, and highlight the importance of ensuring that these models pre- serve lexical content information.
In earlier work less closely tied to the unsuper- vised pretraining setup studied here, Bingel and Søgaard (2017) and Kerinec et al. (2018) inves- tigate the conditions under which task combina- tions can be productively combined in multitask learning. They show that multitask learning is more likely to work when the target task quickly plateaus and the auxiliary task keeps improving. They also report that gains are lowest when the Jensen-Shannon Divergence between the unigram distributions of tasks is highest, i.e when auxiliary and target tasks have different vocabulary.
In word representations, this work shares moti-
vations with work on embedding space retroï¬tting (Faruqui et al., 2015) wherein a labeled dataset like WordNet is used to reï¬ne representations learned by an unsupervised embedding learning algorithm before those representations are used for a target task.
# 3 Methods
Pretrained Sentence Encoders We primarily study the impact of STILTs on three sentence encoders: BERT (Devlin et al., 2018), GPT (Radford et al., 2018) and ELMo (Peters et al., 2018a). These models are distributed with pre- trained weights from their respective authors, and are the best performing sentence encoders as mea- sured by GLUE benchmark performance at time of writing. All three models are pretrained with large amounts of unlabeled text. ELMo uses a BiLSTM architecture whereas BERT and GPT use the Transformer architecture (Vaswani et al., 2017). These models are also trained with dif- ferent objectives and corpora. BERT is a bi- directional Transformer trained on BooksCorpus (Zhu et al., 2015b) and English Wikipedia, with a masked-language model and next sentence pre- diction objective. GPT is uni-directional masked Transformer trained only on BooksCorpus with a standard language modeling objective. ELMo is trained on the 1B Word Benchmark (Chelba et al., 2013) with a standard language modeling objec- tive.
three pretrained models, we follow BERT and GPT in using an inductive approach to transfer learning, in which the model parameters learned during pretraining are used to initialize a target task model, but are not ï¬xed and do not constrain the solution learned for the target task. This stands in contrast to the approach originally used for ELMo (Peters et al., 2018b) and for ear- lier methods like McCann et al. (2017) and Subra- manian et al. (2018), in which a sentence encoder component is pretrained and then attached to a tar- get task model as a non-trainable input layer.
To implement intermediate-task and target-task training for GPT and ELMo, we use the public jiant transfer learning toolkit,3 which is built on AllenNLP (Gardner et al., 2017) and PyTorch (Paszke et al., 2017). For BERT, we use the publicly available implementation of BERT re- leased by Devlin et al. (2018), ported into Py-
# 3https://github.com/jsalt18-sentence-repl/jiant
Torch(Paszke et al., 2017) by HuggingFace4.
Target Tasks and Evaluation We evaluate on the nine target tasks in the GLUE benchmark (Wang et al., 2018). These include MNLI, QQP, and seven others: acceptability classiï¬cation with CoLA (Warstadt et al., 2018); binary sentiment classiï¬cation with SST (Socher et al., 2013); se- mantic similarity with the MSR Paraphrase Cor- pus (MRPC; Dolan and Brockett, 2005) and STS- Benchmark (STS; Cer et al., 2017); and textual en- tailment with a subset of the RTE challenge cor- pora (Dagan et al., 2006, et seq.), and data from SQuAD (QNLI, Rajpurkar et al., 2016)5 and the Winograd Schema Challenge (WNLI, Levesque et al., 2011) converted to entailment format as in White et al. (2017). Because of the adversarial na- ture of WNLI, our models do not generally per- form better than chance, and we follow the recipe of Devlin et al. (2018) by predicting the most fre- quent label for all examples.
Most of our experimentsâincluding all of our experiments using downsampled training sets for our target tasksâare evaluated on the development set of GLUE. Based on the results on the devel- opment set, we choose the best intermediate-task training scheme for each task and submit the best- per-task model for evaluation on the test set on the public leaderboard.
Intermediate Task Training Our experiments follow the standard pretrain-then-ï¬ne-tune ap- proach, except that we add a supplementary train- ing phase on an intermediate task before target- task ï¬ne-tuning. We call this approach BERT on STILTs, GPT on STILTs and ELMo on STILTS for the respective models. We evaluate a sam- ple of four intermediate tasks, which were chosen to represent readily available data-rich sentence- level tasks similar to those in GLUE: (i) textual entailment with MNLI; (ii) textual entailment with SNLI; (iii) paraphrase detection with QQP; and (iv) a custom fake-sentence-detection task.
Our use of MNLI is motivated by prior suc- cesses with MNLI pretraining by Conneau et al. (2018) and Subramanian et al. (2018). We in- clude the single-genre captions-based SNLI in ad-
4https://github.com/huggingface/pytorch-pretrained- BERT
5A newer version of QNLI was recently released by the maintainers of GLUE benchmark. All reported numbers in this work, including the aggregated GLUE score, reï¬ect eval- uation on the older version of QNLI (QNLIv1).
Training Set Size Avg A.Ex CoLA SST MRPC 8.5k 67k 3.7k QQP 364k STS 7k MNLI QNLI RTE WNLI 393k 108k 2.5k 634 Development Set Scores BERT BERTâQQP BERTâMNLI BERTâSNLI BERTâReal/Fake BERT, Best of Each 80.8 80.9 82.4 81.4 77.4 82.6 78.4 78.5 80.5 79.2 74.3 80.8 62.1 56.8 59.8 57.0 52.4 62.1 92.5 89.0/92.3 91.5/88.5 90.3/90.1 88.7/92.0 91.5/88.5 90.9/90.7 93.1 91.4/88.4 91.0/90.8 93.2 89.5/92.3 88.5/91.7 91.4/88.4 90.7/90.6 92.7 92.1 82.8/88.5 90.8/87.5 88.7/88.6 93.2 89.5/92.3 91.5/88.5 91.0/90.8 86.2 86.1 86.2 86.1 84.5 86.2 89.4 89.5 90.5 89.8 88.0 90.5 70.0 74.7 83.4 80.1 59.6 83.4 56.3 56.3 56.3 56.3 56.3 56.3 GPT GPTâQQP GPTâMNLI GPTâSNLI GPTâReal/Fake GPT, Best of Each 75.4 76.0 76.7 76.0 76.6 77.5 72.4 73.1 74.2 73.1 73.9 75.9 50.2 48.3 45.7 41.5 49.5 50.2 93.2 80.1/85.9 89.4/85.9 86.4/86.5 93.1 83.1/88.0 89.4/85.9 87.0/86.9 92.2 87.3/90.8 89.2/85.3 88.1/88.0 89.9/86.6 88.7/88.6 91.9 86.0/89.9 91.4 83.6/88.6 90.1/86.9 87.9/87.8 87.3/90.8 90.1/86.9 88.7/88.6 93.2 81.2 80.7 81.2 81.1 81.0 81.2 82.4 82.6 82.6 82.2 82.5 82.6 58.1 62.8 67.9 65.7 66.1 67.9 56.3 56.3 56.3 56.3 56.3 56.3 ELMo 63.8 ELMoâQQP 64.8 ELMoâMNLI 66.4 ELMoâSNLI 66.4 ELMoâReal/Fake 66.9 ELMo, Best of Each 68.0 59.4 61.7 62.8 62.7 63.3 64.8 15.6 16.6 16.4 14.8 27.3 27.3 84.9 69.9/80.6 86.4/82.2 64.5/64.4 87.0 73.5/82.4 86.4/82.2 71.6/72.0 87.2/83.1 75.2/75.8 87.6 73.5/83.0 88.4 74.0/82.5 87.3/83.1 74.1/75.0 87.1/83.1 70.3/70.6 87.8 72.3/81.3 74.0/82.5 87.3/83.1 75.2/75.8 88.4 69.4 63.9 69.4 69.7 70.3 70.3 73.0 73.4 72.4 74.0 73.7 74.0 50.9 52.0 56.3 56.0 54.5 56.3 56.3 56.3 56.3 56.3 56.3 56.3 Test Set Scores BERT BERT on STILTs 80.4 81.8 79.4 81.4 60.5 62.1 94.9 85.4/89.3 94.3 89.8/86.7 89.3/72.1 87.6/86.5 89.4/71.9 88.7/88.3 86.3 86.0 91.1 91.1 70.1 80.1 65.1 65.1 GPT GPT on STILTs 74.1 76.9 71.9 75.9 45.4 47.2 91.3 82.3/75.7 88.5/70.3 93.1 87.7/83.7 82.0/80.0 88.1/70.1 85.3/84.8 81.8 80.7 88.1 87.2 56.0 69.1 65.1 65.1 ELMo ELMo on STILTs 62.2 65.9 59.0 63.8 16.2 30.3 87.1 79.7/69.1 84.9/63.9 64.3/63.9 86.5 82.0/73.9 85.2/64.4 71.8/71.4 69.0 69.7 57.1 62.6 52.3 54.4 65.1 65.1
Table 1: GLUE results with and without STILTs, ï¬ne-tuning on full training data of each target task. Bold marks the best within each section. Strikethrough indicates cases where the intermediate task is the same as the target taskâwe substitute the baseline result for that cell. A.Ex is the average excluding MNLI and QQP because of the overlap with intermediate tasks. See text for discussion of WNLI results. Test results on STILTs uses the supplementary training regime for each task based on the performance on the development set, corresponding to the numbers shown in Best of Each. The aggregated GLUE scores differ from the public leaderboard because we report performance on QNLIv1.
dition to the multi-genre MNLI to disambiguate between the beneï¬ts of domain shift and task shift from supplementary training on natural lan- guage inference. QQP is included as we believed it could improve performance on sentence simi- larity tasks such as MRPC and STS. Lastly, we construct a fake-sentence-detection task based on the BooksCorpus dataset in the style of Warstadt et al.. Importantly, because both GPT and BERT are pretrained on BooksCorpus, the fake-sentence- detection enables us to isolate the impact of task shift from domain shift from the pretaining cor- pus. We construct this task by sampling sentences from BooksCorpus, and fake sentences are gen- erated by randomly swapping 2â4 pairs of words in the sentence. We generate a dataset of 600,000 sentences with a 50/50 real/fake split for this inter- mediate task.
Training Details Unless otherwise stated, for replications and both stages of our STILTs experi- ments, we follow the model formulation and train- ing regime of BERT and the GPT speciï¬ed in De- vlin et al. and (Radford et al., 2018) respectively. Speciï¬cally, for both models we use a three-epoch training limit for both supplementary training and target-task ï¬ne-tuning. We use a fresh optimizer for each phase of training. For each task, we add only a single task-speciï¬c, randomly initialized output layer to the pretrained Transformer model, following the setup laid out by each respective work. For our baseline, we do not ï¬ne-tune on any intermediate task: Other than the batch size, this is equivalent to the formulation presented in the pa- pers introducing BERT and GPT respectively and serves as our attempt to replicate their results.
For BERT, we use a batch size of 24 and a learn-
# Avg A.Ex CoLA SST MRPC
# QQP 364k
# STS 7k
# MNLI QNLI RTE WNLI 393k
Training Set Size 8.5k 67k 3.7k 108k 2.5k 634 At Most 5k Training Examples for Target Tasks BERT 78.3 BERTâQQP 77.6 BERTâMNLI 79.5 BERTâSNLI 78.8 BERTâReal/Fake 71.0 BERT, Best of Each 80.1 78.1 77.3 79.7 78.2 71.7 79.9 60.6 55.3 59.6 56.6 53.6 60.6 93.5 87.3/91.0 83.1/78.6 90.2/89.8 92.0 88.0/91.4 83.1/78.6 90.7/90.5 92.4 89.5/92.5 83.7/78.1 91.1/90.6 90.8/90.6 91.5 88.9 88.4/88.4 93.5 89.5/92.5 83.7/78.1 91.1/90.6 88.2/91.6 82.6/87.6 83.0/77.9 81.7/76.1 77.1 75.9 77.1 80.6 59.1 80.6 82.8 81.6 83.9 82.7 74.1 83.9 74.0 76.5 83.4 80.5 54.9 83.4 56.3 56.3 56.3 56.3 56.3 56.3 GPT GPTâQQP GPTâMNLI GPTâSNLI GPTâReal/Fake GPT, Best of Each 71.6 65.2 72.3 72.3 71.4 75.4 71.2 63.3 71.8 70.2 69.3 74.3 50.8 0.0 35.3 29.6 45.1 50.8 91.1 81.4/87.1 79.5/73.8 87.6/87.4 82.0 82.8/87.7 79.5/73.8 87.4/87.3 89.4 86.8/90.8 81.6/76.3 88.8/88.7 86.3/90.2 81.6/76.0 89.5/89.4 89.2 87.8 87.8/87.5 80.6/75.4 78.2/85.2 91.1 86.8/90.8 81.6/76.3 89.5/89.4 68.8 65.1 68.8 78.3 77.5 78.3 73.1 71.6 74.1 74.7 72.2 74.7 56.3 62.8 70.4 66.4 56.3 70.4 56.3 56.3 56.3 56.3 56.3 56.3
# At Most 1k Training Examples for Target Tasks
BERT 74.2 BERTâQQP 73.2 BERTâMNLI 75.1 BERTâSNLI 75.5 BERTâReal/Fake 63.9 BERT, Best of Each 77.3 74.5 73.5 75.6 74.7 67.5 77.1 54.0 47.5 44.0 47.6 43.9 54.0 91.1 83.8/88.4 79.9/73.8 88.1/87.9 89.7 82.1/86.9 79.9/73.8 88.6/88.5 80.3/74.3 88.7/88.7 90.5 85.5/90.0 82.8/87.8 80.6/74.1 87.8/88.1 89.3 72.5 82.4/83.2 74.1/68.4 78.9/84.7 91.1 85.5/90.0 80.6/74.1 88.7/88.7 69.7 67.5 69.7 78.6 35.3 78.6 77.0 76.4 79.0 77.6 69.7 79.0 69.0 71.5 82.7 79.1 61.7 82.7 56.3 56.3 56.3 56.3 56.3 56.3 GPT GPTâQQP GPTâMNLI GPTâSNLI GPTâReal/Fake GPT, Best of Each 64.5 64.6 65.2 67.5 65.3 70.6 64.8 64.6 65.2 64.9 62.5 68.9 33.4 23.0 13.3 13.4 36.3 36.3 80.8/80.8 85.3 70.1/81.3 75.3/67.7 87.0 74.8/83.2 75.3/67.7 84.4/84.3 86.2 79.2/85.8 78.4/70.5 86.2/86.1 77.2/70.0 87.5/87.5 85.7 80.1/86.2 69.7 75.5/69.4 84.7/84.8 69.6/79.6 87.0 80.1/86.2 78.4/70.5 87.5/87.5 55.7 57.8 55.7 76.8 74.6 76.8 66.7 67.1 68.6 70.3 69.1 70.3 54.9 55.2 63.2 60.6 50.2 63.2 56.3 56.3 56.3 56.3 56.3 56.3
Table 2: Results on the GLUE development set based on ï¬ne-tuning on only a subset of target-task data, simu- lating data scarce scenarios. Bold indicates the best within each section. Strikethrough indicates cases where the intermediate task is the same as the target task: We substitute the baseline result for that cell. A.Ex is the average excluding MNLI and QQP, because of their overlap with the candidate intermediate tasks. See text for discussion of WNLI results.
ing rate of 2e-5. This is within the range of hy- perparameters recommended by the authors and initial experiments showed promising results. We use the larger, 24-layer version of BERT, which is the state of the art on the GLUE benchmark. For this model, ï¬ne-tuning can be unstable on small data setsâhence, for the tasks with limited data (CoLA, MRPC, STS, RTE), we perform 20 ran- dom restarts for each experiment and report the results of the model that performed best on the val- idation set.
For GPT, we choose the largest batch size out of 8/16/32 that a single GPU can accommodate. We use the version with an auxiliary language model- ing objective in ï¬ne-tuning, corresponding to the entry on the GLUE leaderboard.6
For ELMo, to facilitate a fair comparison with GPT and BERT, we adopt a similar ï¬ne-tuning
6Radford et al. (2018) introduced two versions of GPT: one which includes an auxiliary language modeling objective when ï¬ne-tuning, and one without.
setup where all the weights are ï¬ne-tuned. This differs from the original ELMo setup that freezes ELMo weights and trains an additional encoder module when ï¬ne-tuning. The details of our ELMo setup are described in Appendix A.
We also run our main experiment on the 12- layer BERT and the non-LM ï¬ne-tuned GPT. These results are in Table 4 in the Appendix.
Multitask Learning Strategies To compare STILTs to alternative multitask learning regimes, we also experiment with the following two ap- proaches: (i) a single phase of ï¬ne-tuning simulta- neously on both a intermediate task and the target task (ii) ï¬ne-tuning simultaneously on a interme- diate task and the target task, and then doing an additional phase of ï¬ne-tuning on the target task only. In the multitask learning phase, for both ap- proaches, training steps are sampled proportion- ally to the sizes of the respective training sets and we do not weight the losses.
CoLA MRPC STS RTE 100 80 ai OK © x 5 60 xP * (a) a ° ~ x % z* 8 20 OLâ¢~ â¢m 0 0 BERT BERT BERT BERT BERT-MNLI BERT-MNLI BERT->MNLI BERT>MNLI CoLA ssT MRPC Qqp sTS MNLI(*) QNLI RTE 100 80 t i he Fal So ca xx e x § 60) aah (b) 8 x xxx 0 =o © ° ba L % ao}x . 7 fe xX x © 0 Palen 20 x OL xm 00 BERT BERT BERT BERT BERT BERT BERT BERT BERT+MNLI BERT»MNLI BERT»MNLI- BERT»MNL|â- BERT-MNL|â BERT>MNLIâ BERTMNLI_â- BERT->MNLI CoLA ssT MRPC QaQp sTS MNLI(*) QNLI RTE 100 pe ss Ose SREP cme 9 80 * op GaP R ° e Xf . § 60} hx me (c) a 3 : Kx ° * Kx mx «OO ae od â 40 o ° £ x rc m xo 0 * 20 ° 0 {000 = BERT BERT BERT BERT BERT BERT BERT BERT BERT-MNLI BERT-MNLI BERT>MNLI BERT->MNLI BERT->MNLI BERT-MNLI BERT-MNLI BERT-MNLI
Figure 1: Distribution of task scores across 20 random restarts for BERT, and BERT with intermediary ï¬ne-tuning on MNLI. Each cross represents a single run. Error lines show mean±1std. (a) Fine-tuned on all data, for tasks with <10k training examples. (b) Fine-tuned on no more than 5k examples for each task. (c) Fine-tuned on no more than 1k examples for each task. (*) indicates that the intermediate task is the same as the target task.
Models and Code Our pretrained models and code for BERT on STILTs can be found at https://github.com/zphang/pytorch-pretrained- BERT, which is a fork of the Hugging Face im- plementation. We used the jiant framework ex- periments on GPT and ELMo.
# 4 Results
Table 1 shows our results on GLUE with and with- out STILTs. Our addition of supplementary train- ing boosts performance across many of the two- sentence tasks. We also ï¬nd that most of the gains are on tasks with limited data. On each of our STILTs models, we show improved overall
GLUE scores on the development set. Improve- ments from STILTs tend to be larger for ELMo and GPT and somewhat smaller for BERT. On the other hand, for pairs of pretraining and target tasks that are close, such as MNLI and RTE, we in- deed ï¬nd a marked improvement in performance from STILTs. For the two single-sentence tasksâ the syntax-oriented CoLA task and the SST sen- timent taskâwe ï¬nd somewhat deteriorated per- formance. For CoLA, this mirrors results reported in Bowman et al. (2019), who show that few pre- training tasks other than language modeling offer any advantage for CoLA. The Best of Each score is computed based on taking the best score for each
task, including no STILTs.
On the test set, we see similar performance gains across most tasks. Here, we compute the results for each model on STILTs, which shows scores from choosing the best correspond- ing model based on development set scores and evaluating on the test set. These also correspond to the selected models for Best of Each above.7 For both BERT and GPT, we show that using STILTs leads to improvements in test set performance im- proving on the reported baseline by 1.4 points and setting the state of the art for the GLUE bench- mark, while GPT on STILTs achieves a score of 76.9, improving on the baseline by 2.8 points, and signiï¬cantly closing the gap between GPT and the 12-layer BERT model with a similar number of parameters, which attains a GLUE score of 78.3.
Limited Target-Task Data Table 2 shows the same models ï¬ne-tuned on 5k training examples and 1k examples for each task, selected randomly without replacement. Artiï¬cially limiting the size of the training set allows us to examine the ef- fect of STILTs in data constrained contexts. For tasks with training sets that are already smaller than these limits, we use the training sets as-is. For BERT, we show the maximum task perfor- mance across 20 random restarts for all experi- ments, and the data subsampling is also random for each restart.
The results show that the beneï¬ts of supple- mentary training are generally more pronounced in these settings, with performance in several tasks showing improvements of more than 10 points. CoLA and SST are again the exceptions: Both tasks deteriorated moderately with supplementary training, and CoLA trained with the auxiliary lan- guage modeling objective in particular showed highly unstable results when trained on small amounts of data.
We see one obvious area for potential improve- ment: In our experiments, we follow the recipe for ï¬ne-tuning from the original works as closely as possible, only doing supplementary training and ï¬ne-tuning for three epochs each. Particularly in the case of the artiï¬cially data-constrained tasks, we expect that performance could be improved with more careful tuning of the training duration
7For BERT, we run an additional 80 random restartsâ100 random restarts in totalâfor the tasks with limited data, and select the best model based on validation score for test evalu- ation
and learning rate schedule.
Fine-Tuning Stability In the work that intro- duced BERT, Devlin et al. highlight that the larger, 24-layer version of BERT is particularly prone to degenerate performance on tasks with small train- ing sets, and that multiple random restarts may be required to obtain a usable model. In Figure 1, we plot the distribution of performance scores for 20 random restarts for each task, using all train- ing data and maximum of 5k or 1k training ex- amples. For conciseness, we only show results for BERT without STILTs, and BERT with inter- mediate ï¬ne-tuning on MNLI. We omit the ran- dom restarts for tasks with training sets of more than 10k examples, consistent with our training methodology.
in addition to improved per- formance, using STILTs signiï¬cantly reduces the variance of performance across random restarts. A large part of reduction can be attributed to the far fewer number of degenerate runsâperformance outliers that are close to random guessing. This effect is consistent across target tasks, though the magnitude varies from task to task. For instance, although we show above that STILTs with our four intermediate tasks does not improve model perfor- mance in CoLA and SST, using STILTs neverthe- less reduces the variance across runs as well as the number of degenerate ï¬ne-tuning results.
Multitask Learning and STILTs We investi- gate whether setups that leverage multitask learn- ing are more effective than STILTs. We high- light results from one of the cases with the largest improvement: GPT with intermediary ï¬ne-tuning on MNLI with RTE as the target task. To better isolate the impact of multitask learning, we ex- clude the auxiliary language modeling training ob- jective in this experiment. Table 3 shows all se- tups improve compared to only ï¬ne-tuning, with the STILTs format of consecutive single-task ï¬ne- tuning having the largest improvement. Although this does not represent an in-depth inquiry of all the ways to leverage multitask learning and bal- ance multiple training objective, naive multitask learning appears to yield worse performance than STILTs, at potentially greater computational cost.
# 5 Discussion
Broadly, we have shown that, across three differ- ent sentence encoders with different architectures
Model RTE accuracy GPT â RTE GPT â MNLI â RTE GPT â {MNLI, RTE} GPT â {MNLI, RTE} â RTE 54.2 70.4 68.6 67.5
Table 3: Comparison of STILTs against multitask learning setups for GPT, with MNLI as the interme- diate task, and RTE as the target task. GPT is ï¬ne- tuned without the auxiliary language modeling objec- tive in this experiment. Both intermediary and ï¬nal ï¬ne-tuning task(s) are delineated here, in contrast to Table 1 and Table 2 where we omit the name of the target-task.
and pretraining schemes, STILTs can leads to per- formance gains on many downstream target tasks. However, this beneï¬t is not uniform. We ï¬nd that sentence pair tasks seem to beneï¬t more from sup- plementary training than single-sentence ones. We also ï¬nd that tasks with little training data beneï¬t much more from supplementary training. Indeed, when applied to RTE, supplementary training on the related MNLI task leads to a eight-point in- crease in test set score for BERT.
Overall, the beneï¬t of STILTs is smaller for BERT than for GPT and ELMo. One possible reason is that BERT is better conditioned for ï¬ne- tuning for classiï¬cation tasks, such as those in the GLUE Benchmark. Indeed, GPT uses the hid- den state corresponding to the last token of the sentence as a proxy to encode the whole sen- tence, but this token is not used for classiï¬cation during pre-training. On the other hand, BERT has a <CLS> token which is used for classiï¬ca- tion during pre-training for their additional next- sentence-prediction objective. This token is then used in ï¬ne-tuning for classiï¬cation. When adding STILTs to GPT, we bridge that gap by train- ing the last token with the classiï¬cation objec- tive of the intermediary task. This might explain why fake-sentence-detection is a broadly beneï¬- cial task for GPT and not for BERT: Since fake- sentence-detection uses the same corpus that GPT and BERT are pretrained on, it is likely that the improvements we ï¬nd for GPT are due to the bet- ter conditioning of this sentence-encoding token. Applying STILTs also comes with little com- plexity or computational overhead. The same in- frastructure used to ï¬ne-tune BERT or GPT mod- els can be used to perform supplementary train- ing. The computational cost of the supplemen- tary training phase is another phase of ï¬ne-tuning,
which is small compared to the cost of training the original model. In addition, in the case of BERT, the smaller number of degenerate runs induced by STILTs will reduce the computational cost of a full training procedure in some settings.
Our results also show where STILTs may be in- effective or counterproductive. In particular, we show that most of our intermediate tasks were ac- tually detrimental to the single-sentence tasks in GLUE. The interaction between the intermediate task, the target task, and the use of the auxiliary language modeling objective is a subject due for further investigation. Moreover, the four inter- mediary training tasks we chose represent only a small sample of potential tasks, and it is likely that a more expansive survey might yield better per- formance on different downstream tasks. There- fore, for best target task performance, we recom- mend experimenting with supplementary training with several closely-related data-rich tasks and use the development set to select the most promising approach for each task, as in the Best of Each for- mulation shown in Table 1.
# 6 Conclusion
This work represents only an initial investigation into the beneï¬ts of supplementary supervised pre- training. More work remains to be done to ï¬rmly establish when methods like STILTs can be pro- ductively applied and what criteria can be used to predict which combinations of intermediate and target tasks should work well. Nevertheless, in our initial work with four example intermediate train- ing tasks, we showed signiï¬cant gains from ap- plying STILTs to three sentence encoders, BERT, GPT and ELMo, and set the state of the art on the GLUE benchmark with BERT on STILTs. STILTs also helps to signiï¬cantly stabilize training in un- stable training contexts, such as when using BERT on tasks with little data. Finally, we show that in data-constrained regimes, the beneï¬ts of using STILTs are even more pronounced, yielding up to 10 point score improvements on some intermedi- ate/target task pairs.
# Acknowledgments
We would like to thank Alex Wang, Ilya Kulikov, Nikita Nangia and Phu Mon Htut for their helpful feedback.
# References
Joachim Bingel and Anders Søgaard. 2017. Identify- ing beneï¬cial task relations for multi-task learning in deep neural networks. In EACL.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In EMNLP.
Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, and Berlin Chen. 2019. Looking for ELMoâs friends: Sentence- level pretraining beyond language modeling. arXiv preprint 1812.10860.
Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In SemEval-2017.
Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2013. One billion word benchmark for measur- ing progress in statistical language modeling. arXiv preprint arXiv:1312.3005.
Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In EMNLP.
Alexis Conneau, Germ´an Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic proper- ties. In ACL.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. evalu- ating predictive uncertainty, visual object classiï¬ca- tion, and recognising tectual entailment, pages 177â 190. Springer.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint 1810.04805.
William B. Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proc. International Workshop on Paraphrasing (IWP).
Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retroï¬tting word vectors to semantic lexicons. In NAACL.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew
Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A deep semantic natural language processing platform.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model ï¬ne-tuning for text classiï¬cation. In ACL.
Emma Kerinec, Chlo´e Braud, and Anders Søgaard. 2018. When does deep multi-task learning work for loosely related document classiï¬cation tasks? In Proc. EMNLP Workshop BlackboxNLP.
Hector J Levesque, Ernest Davis, and Leora Morgen- stern. 2011. The Winograd schema challenge. In Aaai spring symposium: Logical formalizations of commonsense reasoning, volume 46, page 47.
Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Con- textualized word vectors. In NIPS.
Adam Paszke, Sam Gross, Soumith Chintala, Gre- gory Chanan, Edward Yang, Zachary DeVito, Zem- ing Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word rep- resentations. In NAACL.
Matthew E Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In EMNLP.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training. Unpublished manuscript accessible via the OpenAI Blog.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In EMNLP.
Sandeep Subramanian, Adam Trischler, Yoshua Ben- gio, and Christopher J. Pal. 2018. Learning gen- eral purpose distributed sentence representations via large scale multi-task learning. In ICLR.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- arXiv form for natural language understanding. preprint 1804.07461.
Alex Warstadt, Amanpreet Singh, and Samuel R Bow- man. 2018. Neural network acceptability judg- ments. arXiv preprint 1805.12471.
Aaron Steven White, Pushpendre Rastogi, Kevin Duh, and Benjamin Van Durme. 2017. Inference is every- thing: Recasting semantic resources into a uniï¬ed evaluation framework. In Proc. Eighth International Joint Conference on Natural Language Processing.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In NAACL.
Kelly Zhang and Samuel R. Bowman. 2018. Language modeling teaches you more syntax than translation does: Lessons learned through auxiliary task analy- sis. arXiv preprint 1809.10040.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015a. Aligning books and movies: Towards story-like visual explanations by watching movies In Proceedings of the IEEE and reading books. international conference on computer vision, pages 19â27.
Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Aligning books and and Sanja Fidler. 2015b. movies: Towards story-like visual explanations by In arXiv watching movies and reading books. preprint arXiv:1506.06724.
# A ELMo on STILTs
Experiment setup We use the same architecture as Peters et al. (2018a) for the non-task-specific parameters. For task-specific parameters, we use the layer weights and the task weights described in the paper, as well as a classifier composed of max- pooling with projection and a logistic regression classifier. In contrast to the GLUE baselines and to Bowman et al. (2019), we refrain from adding many non-LM pretrained parameters by not us- ing pair attention nor an additional encoding layer. The whole model, including ELMo parameters, is trained during both supplementary training on the intermediate task and target-task tuning. For two-sentence tasks, we follow the model design of Wang et al. (2018) rather than that of Radford et al. (2018), since early experiments showed better per- formance with the former. Consequently, we run the shared encoder on the two sentences u and uâ independently and then use [uâ; v; |uââv'|; uâ« v'] for our task-specific classifier. We use the default optimizer and learning rate schedule from jiant.
Training Set Size Avg AvgEx CoLA SST MRPC 8.5k 67k 3.7k QQP 364k STS 7k MNLI QNLI RTE WNLI 393k 108k 2.5k 634 Development Set Scores BERT BERTâQQP BERTâMNLI BERTâSNLI BERTâReal/Fake 79.2 78.6 81.1 79.9 77.8 76.7 76.0 79.2 77.5 75.0 55.2 49.7 59.0 52.9 53.1 92.5 86.8/90.9 90.8/87.7 88.9/88.5 91.5 84.3/89.0 90.8/87.7 89.7/89.5 92.7 88.5/91.9 90.8/87.5 90.3/90.2 92.7 87.0/90.7 90.9/87.6 89.9/89.8 90.5/87.3 89.3/88.8 92.0 82.6/88.4 84.4 83.7 84.4 84.8 83.4 88.8 87.7 89.0 88.4 87.5 68.6 72.6 79.1 76.5 64.3 56.3 56.3 56.3 56.3 56.3 BERT, Best of Each 81.2 79.3 59.0 92.7 88.5/91.9 90.8/87.7 90.3/90.2 84.8 89.0 79.1 56.3 GPT GPTâQQP GPTâMNLI GPTâSNLI GPTâReal/Fake 75.3 73.1 76.2 75.4 74.9 72.7 69.7 74.1 72.5 71.9 52.8 29.8 41.5 35.3 50.3 92.3 80.6/86.4 88.2/84.6 87.5/87.2 91.4 82.8/87.7 88.2/84.6 87.4/87.3 91.9 86.8/90.8 88.8/81.3 89.2/89.0 90.9 86.3/90.2 89.0/85.4 90.1/89.8 88.3/88.1 92.1 78.2/85.2 88.4/84.7 79.6 80.1 79.6 81.2 81.2 81.5 78.9 83.1 82.9 81.8 57.8 62.8 70.4 66.4 56.3 56.3 56.3 56.3 56.3 56.3 GPT, Best of Each 78.0 75.9 52.8 92.3 86.8/90.8 89.0/85.4 90.1/89.8 81.2 83.1 70.4 56.3
Table 4: Results on the GLUE development set with and without STILTs, ï¬ne-tuning on full training data of each target task. BERT results are based on the 12-layer model, while GPT results are without an auxiliary language modeling objective. Bold indicates the best within each section. Strikethrough indicates cases where the intermediate task is the same as the target taskâwe substitute the baseline result for that cell. A.Ex is the average excluding MNLI and QQP because of the overlap with intermediate tasks. See text for discussion of WNLI results. | {
"id": "1506.06724"
} |
1811.01721 | Rethinking floating point for deep learning | Reducing hardware overhead of neural networks for faster or lower power
inference and training is an active area of research. Uniform quantization
using integer multiply-add has been thoroughly investigated, which requires
learning many quantization parameters, fine-tuning training or other
prerequisites. Little effort is made to improve floating point relative to this
baseline; it remains energy inefficient, and word size reduction yields drastic
loss in needed dynamic range. We improve floating point to be more energy
efficient than equivalent bit width integer hardware on a 28 nm ASIC process
while retaining accuracy in 8 bits with a novel hybrid log multiply/linear add,
Kulisch accumulation and tapered encodings from Gustafson's posit format. With
no network retraining, and drop-in replacement of all math and float32
parameters via round-to-nearest-even only, this open-sourced 8-bit log float is
within 0.9% top-1 and 0.2% top-5 accuracy of the original float32 ResNet-50 CNN
model on ImageNet. Unlike int8 quantization, it is still a general purpose
floating point arithmetic, interpretable out-of-the-box. Our 8/38-bit log float
multiply-add is synthesized and power profiled at 28 nm at 0.96x the power and
1.12x the area of 8/32-bit integer multiply-add. In 16 bits, our log float
multiply-add is 0.59x the power and 0.68x the area of IEEE 754 float16 fused
multiply-add, maintaining the same signficand precision and dynamic range,
proving useful for training ASICs as well. | http://arxiv.org/pdf/1811.01721 | Jeff Johnson | cs.NA, cs.LG | null | null | cs.NA | 20181101 | 20181101 | 8 1 0 2 v o N 1 ] A N . s c [
1 v 1 2 7 1 0 . 1 1 8 1 : v i X r a
# Rethinking ï¬oating point for deep learning
# Jeff Johnson Facebook AI Research New York, NY jhj@fb.com
# Abstract
Reducing hardware overhead of neural networks for faster or lower power infer- ence and training is an active area of research. Uniform quantization using inte- ger multiply-add has been thoroughly investigated, which requires learning many quantization parameters, ï¬ne-tuning training or other prerequisites. Little effort is made to improve ï¬oating point relative to this baseline; it remains energy inef- ï¬cient, and word size reduction yields drastic loss in needed dynamic range. We improve ï¬oating point to be more energy efï¬cient than equivalent bit width integer hardware on a 28 nm ASIC process while retaining accuracy in 8 bits with a novel hybrid log multiply/linear add, Kulisch accumulation and tapered encodings from Gustafsonâs posit format. With no network retraining, and drop-in replacement of all math and ï¬oat32 parameters via round-to-nearest-even only, this open-sourced 8-bit log ï¬oat is within 0.9% top-1 and 0.2% top-5 accuracy of the original ï¬oat32 ResNet-50 CNN model on ImageNet. Unlike int8 quantization, it is still a general purpose ï¬oating point arithmetic, interpretable out-of-the-box. Our 8/38-bit log ï¬oat multiply-add is synthesized and power proï¬led at 28 nm at 0.96à the power and 1.12à the area of 8/32-bit integer multiply-add. In 16 bits, our log ï¬oat multiply-add is 0.59à the power and 0.68à the area of IEEE 754 ï¬oat16 fused multiply-add, maintaining the same signï¬cand precision and dynamic range, prov- ing useful for training ASICs as well.
# 1 Introduction
Reducing the computational complexity of neural networks (NNs) while maintaining accuracy en- compasses a long line of research in NN design, training and inference. Different computer arith- metic primitives have been considered, including ï¬xed-point [21], uniform quantization via 8 bit integer [15], ternary [20] and binary/low-bit representations [29, 3, 1]. Some implementations are efï¬ciently implemented on CPU/GPU ISAs [35, 33], while others demand custom hardware [10]. Instead of developing quantization techniques increasingly divorced from the original implementa- tion, we seek to improve ï¬oating point itself, and let word size reduction yield efï¬ciency for us. It is historically known to be up to 10à less energy efï¬cient in hardware implementations than integer math [14]. Typical implementation is encumbered with IEEE 754 standard compliance [37], de- manding speciï¬c forms such as fused multiply-add (FMA) that we will show as being inefï¬cient and imprecise. Memory movement (SRAM/DRAM/ï¬ip-ï¬ops) dominates power consumption; word bit length reduction thus provides obvious advantages beyond just reducing adder and multiplier area.
We explore encodings to better capture dynamic range with acceptable precision in smaller word sizes, and more efï¬cient summation and multiplication (Sections 3-5), for a reduction in chip power and area. Signiï¬cant inspiration for our work is found in logarithmic number systems (LNS) [2] and the work of Miyashita et al. [24] that ï¬nds logarithmic quantizers better suited to data distributions in NNs, and alternative visions of ï¬oating point from Gustafson [11, 12] and Kulisch [19]. We sidestep prior LNS design issues with numerical approximation and repurpose ideas from Gustafson and
Preprint. Work in progress.
Table 1: Dynamic range and signiï¬cand fractional precision of math types considered
Word Encoding bits 8 8 8 16 8 8 16 16 12 8 16 Range in decibels 20 log10(fmax/fmin) 42.1 72.2 83.7 90.3 101.8 144.5 180.6 240.8 240.8 289.0 337.2 Fraction bits (max) â 5 3 â 3 4 10 10 8 3 12 type symmetric integer [â27 + 1, 27 â 1] (8, 0) posit or (8, 0, α, β, γ) log (4, 3) ï¬oat (w/o denormals) symmetric integer [â215 + 1, 215 â 1] (4, 3) ï¬oat (w/ denormals) (8, 1) posit or (8, 1, α, β, γ) log (5, 10) ï¬oat16 (w/o denormals) (5, 10) ï¬oat16 (w/ denormals) (12, 1) posit or (12, 1, α, β, γ) log (8, 2) posit or (8, 2, α, β, γ) log (16, 1) posit or (16, 1, α, β, γ) log
Kulisch, producing a general-purpose arithmetic that is effective on CNNs [13] without quantization tinkering or re-training (Section 7), and can be as efï¬cient as integer math in hardware (Section 8).
# 2 Floating point variants for NNs
There are few studies on NNs for ï¬oating point variants beyond those provided for in CPU/GPU ISAs. [4] shows a kind of 8 bit ï¬oating point for communicating gradients, but this is not used for general computation. Flexpoint [17] and the Brainwave NPU [6] use variants of block ï¬oating point [36], representing data as a collection of signiï¬cands with a shared exponent. This requires controlled dynamic range variation and increased management cost, but saves on data movement and hardware resources. For going to 8 bits in our work, we seek to improve the encoding and hard- ware for a reasonable tradeoff between dynamic range and precision, with less machinery needed in software.
For different precisions, [5] shows reduced-precision ï¬oating point for training smaller networks on MNIST and CIFAR-10, with (6, 5)1 ï¬oating point without denormal signiï¬cands being comparable to ï¬oat32 on these examples. (8, 7) bï¬oat16 is available on Googleâs TPUv2 [9]. This form main- tains the same normalized exponent range as ï¬oat32, except with reduced precision and smaller multipliers. However, the forms of encoding and computation for many of these variants are not substantially different than implementations available with common ISAs, hardened FPGA IP, and the like. We will seek to improve the encoding, precision and computation efï¬ciency of ï¬oating point to ï¬nd a solution that is quite different in practice than standard (e, s) ï¬oating point.
# 3 Space-efï¬cient encodings
IEEE 754-style ï¬xed width ï¬eld encodings are not optimal for most data distributions seen in prac- tice; ï¬oat32 maintains the same signiï¬cand precision at 10â10 as at 1010. Straightforward imple- mentation of this design in 8 bits will result in sizable space encoding NaNs, â¼ 6% for (4, 3) ï¬oat. Denormals use similar space and are expensive in hardware [26]; not implementing them restricts the dynamic range of the type (Table 1). Tapered ï¬oating point can solve this problem: within a ï¬xed-sized word, exponent and signiï¬cand ï¬eld size varies, with a third ï¬eld indicating relative size. To quote Morris (1971): âusers of ï¬oating-point numbers are seldom, if ever, concerned simultane- ously with loss of accuracy and with overï¬ow. If this is so, then the range of possible representation can be extended [with tapering] to an extreme degree and the slight loss of accuracy will be unno- ticed.â [25]
A more efï¬cient representation for tapered ï¬oating point is the recent posit format by Gustafson [12]. It has no explicit size ï¬eld; the exponent is encoded using a Golomb-Rice preï¬x-free code [8, 22], with the exponent e encoded as a Golomb-Rice quotient and remainder (q, r) with q in unary and r in binary (in posit terminology, q is the regime). Remainder encoding size is deï¬ned by the exponent
1Throughout, (e, s)-ï¬oat refers to IEEE 754-style ï¬oating point, with sign bit, e-bit biased exponent and s-bit 0.s or 1.s ï¬xed point signiï¬cand; ï¬oat16/ï¬oat32 are shorthand for IEEE 754 binary16/binary32.
2
scale s, where 2s is the Golomb-Rice divisor. Any space not used by the exponent encoding is used by the signiï¬cand, which unlike IEEE 754 always has a leading 1; gradual underï¬ow (and overï¬ow) is handled by tapering. A posit number system is characterized by (N, s), where N is the word length in bits and s is the exponent scale. The minimum and maximum positive ï¬nite numbers in (N, s) are fmin = 2â(N â2)2 . The number line is represented much as the projective reals, with a single point at ±â bounding âfmax and fmax. ±â and 0 have special encodings; there is no NaN. The number system allows any choice of N ⥠3 and 0 ⤠s ⤠N â 3. s controls the dynamic range achievable; e.g., 8-bit (8, 5)-posit fmax = 2192 is larger than fmax in ï¬oat32. (8, 0) and (8, 1) are more reasonable values to choose for 8-bit ï¬oating point representations, with fmax of 64 and 4096 accordingly. Precision is maximized in the range ±[2â(s+1), 2s+1) with N â 3 â s signiï¬cand fraction bits, tapering to no fraction bits at ±fmax.
# 4 Accumulator efï¬ciency and precision
A sum of scalar products Pi aibi is a frequent operation in linear algebra. For CNNs like ResNet- 50 [13], we accumulate up to 4,608 (2d convolution with k = 3 à 3, cin = 512) such products. Integer addition is associative (excepting overï¬ow); the order of operations does not matter and thus it allows for error-free parallelization. In typical accelerator use, the accumulation type is 32 bits. Typical ï¬oating point addition is notorious for its lack of associativity; this presents problems with reproducibility, parallelization and rounding error [26]. Facilities such as fused multiply-add (FMA) that perform a sum and product c + aibi with a single rounding can reduce error and further pipeline operations when computing sums of products. Such machinery cannot avoid rounding error involved with tiny (8-bit) ï¬oating point types; the accumulator can become larger in magnitude than the product being accumulated into it, and the signiï¬cand words no longer overlap as needed even with rounding (yielding c + ab = c); increasing accumulator size a bit only defers this problem.
There is a more efï¬cient and precise method than FMA available. A Kulisch accumulator [19] is a ï¬xed point register that is wide enough to contain both the largest and smallest possible scalar product of ï¬oating point values ±(f 2 min). It provides associative, error-free calculation (excepting a single, ï¬nal rounding) of a sum of scalar ï¬oating point products; a ï¬oat signiï¬cand to be accumulated is shifted based on exponent to align with the accumulator for the sum. Final rounding to ï¬oating point is performed after all sums are made. A similar operation known as Auï¬aufenlassen was available in Konrad Zuseâs Z3 as early as 1941 [18], though it is not found in modern computers.
We will term this operation of summing scalar products in a Kulisch accumulator exact multiply add (EMA). For an inner product, given a rounding function2 r(·) with the argument evaluated at inï¬nite precision, EMA calculates r(Pi aibi), whereas FMA calculates r(anbn + r(anâ1bnâ1 + r(· · · + r(a1b1 + 0) · · · ))). Both EMA and FMA can be implemented for any ï¬oating point type. Gustafson proposed Kulisch accumulators to be standard for posits, terming them quires.
Depending upon ï¬oat dynamic range, EMA can be considerably more efï¬cient than FMA in hard- ware. FMA must mutually align the addends c and the product ab, including renormalization logic for subtraction cancellation, and the proper alignment cannot be computed until fairly late in the process. Extra machinery to reduce latency such as the leading zero (LZ) anticipator or three path architectures have been invented [28]. If multiply-add needs to be pipelined for timing closure, EMA knows upfront the location of the ï¬oating point of c needed in alignment (as it is ï¬xed), and can thus accumulate a new product into it every clock cycle, while a FMA must hold onto the starting value of the accumulator c until later in the process, increasing the pipeline non-combinational area and often requiring greater use of an external register ï¬le (for multiple accumulators ci in concurrent use) and effective âloop unrollingâ at software level to ï¬ll all pipeline slots. The rounding performed every FMA requires additional logic, and rounding error can still compound greatly across repeated sums.
2r(·, b) is a rounding function that produces b fractional bits, and ri(·, b) is the i-th fractional bit returned.
We assume IEEE 754-style round-to-nearest-even (with sticky bit OR-reduction) for r(·).
3
# 5 Multiplier efï¬ciency
Floating point with EMA is still expensive, as there is added shifter, LZ counter, rounding, etc. logic. Integer MAC and ï¬oat FMA/EMA both involve multiplication of ï¬xed-point values; for int8/32 MAC this multiply is 63.4% of the combinational power in our analysis at 28 nm (Section 8).
A logarithmic number system (LNS) [16] avoids hardware multipliers entirely, where we round and encode logB(x) for some base B to represent a number x â R. Hitherto we have considered linear domain representations, where x â R is rounded and encoded as x in integer, ï¬xed or ï¬oating point representation (note that ï¬oating point is itself a combination of linear and log encodings). Log domain operations on linear x > 0, y > 0 represented as i = log2(x), j = log2(y) are:
log2(x ± y) = i + ϱ(j â i) log2(xy) = i + j log2(x/y) = i â j
# ta)
[2]
As values x ⤠0 are outside the log domain, sign and zero are handled separately [31], as is ±â. We encode B = 2 log numbers with a sign bit and a signed ï¬xed-point number of the form m.f , which represents the linear domain value ±2(m+Pi fi/2 ). For add/sub, without loss of generality, order j ⤠i, and ϱ(x) = log2(1 ± 2x); this is the historical weak point of a LNS, as implementations use costly LUTs or piecewise linear approximation of ϱ(x). This can be more expensive than hardware multipliers. The approximation log2(1 + x) â x for x â [0, 1] could also be used [24], but this adds signiï¬cant error, especially with repeated sums.
ϱ(x) need only be evaluated if one wishes to keep the partial sum in the log domain. As with Kulisch accumulation versus FMA, we accumulate in a different representation than the scalar prod- uct for efï¬ciency. For Pi aibi, we multiply aibi in the log domain, and then approximate as a linear domain ï¬oating point value for accumulation. Translating log domain m.f to linear is easier than ϱ(x), as we can just consider the fractional portion f ; m is linear domain multiplication by 2m (ï¬oating point exponent addition or ï¬xed point bit shift). A LUT maps f â [0, 1) to p(f ) = 2f â 1. p(f ) is the linear representation of the log number fractional part; the LUT maps all bits of f to a desired number of bits α of p(f ) or r(p(f ), α), for a (2fbits à α)-bit LUT. Linear approximation of i=1 2âiri(p(f ), α)). This is expanded in the usual way m.f is the ï¬oating point value ±2m(1 + Pα for Kulisch accumulation. Just as Kulisch accumulation is efï¬cient for linear domain values up to a reasonably wide dynamic range, it proves quite efï¬cient for our linear approximations of log values. To convert a linear domain value back to log domain, we map g â [0, 1) to q(g) = log2(1 + g). g is a linear domain ï¬xed-point fraction; to control the size of the LUT we only consider β bits via rounding of g. q(r(g, β)) is similarly rounded to a desired γ bits; note that this latter rounding is log domain. r(q(r(g, β)), γ) is then a (2β Ãγ)-bit LUT. We also choose α ⥠fbits +1, β ⥠α, γ = fbits to ensure that log-to-linear-to-log conversion of f is the identity, or f = r(q(r(r(p(f ), α), β)), γ).
We will name this (somewhat inaccurately) exact log-linear multiply-add (ELMA). The log product and linear sum are each exact, but the log product is not represented exactly by r(p(f )) as this requires inï¬nite precision, unlike EMA which is exact except for a ï¬nal rounding. The intermediate log product avoids overï¬ow or underï¬ow with an extra bit for the productâs m. If a linear-to-log mapping is desired (returning a log number after summation), there is also loss via r(q(g)).
Combining log-to-linear mapping with Kulisch accumulation makes log domain multiply-add efï¬- cient and reasonably accurate. Small p and q LUTs reduce well in combinational logic. They are practical for 16-bit types too, as compression can be used to reduce the size. For larger types they are impractical, as α, β, γ need to scale with 2fbits , at which point ϱ is a better strategy. As with FMA, repeated summation via ϱ is subject to magnitude difference error (e.g., the c + ab = c case). Our approximation introduces error with r(p(f )) and r(q(g)), but mitigates repeated summation error and is immune to magnitude differences. This tradeoff seems acceptable in practice (Section 7).
An 8-bit log number by default suffers from the same problem as 8-bit IEEE-style ï¬oating point; the dynamic range is limited by the ï¬xed point encoding. We can use the same tapering as used in (N, s) posit for m.f log numbers. m is encoded as an exponent, and f as a ï¬oating point sig- niï¬cand. fmin and fmax are then exactly the same for posit-tapered base-2 log or linear domain values. Setting γ = fbits (which is at maximum (N â 3 â s) for posits) introduces additional taper- ing rounding error, as subsequent rounding in encoding is performed outside regimes of maximum
4
Table 2: ResNet-50 ImageNet validation set accuracy per math type
top-5 acc (%) 92.862 -0.20
Math type ï¬oat32 (8, 1, 5, 5, 7) log ELMA Multiply-add type FMA top-1 acc (%) 76.130 -0.90 (7, 1) posit (8, 0) posit (8, 1) posit (8, 2) posit (9, 1) posit EMA EMA EMA EMA EMA -4.63 -76.03 -0.87 -2.20 -0.30 -2.28 -92.36 -0.19 -0.85 -0.09 Jacob et al. [15]: ï¬oat32 int8/32 Migacz [23]: ï¬oat32 int8/32 FMA MAC FMA MAC 76.400 -1.50 73.230 -0.20 n/a n/a 91.180 -0.03
precision. γ is increased up to 3 bits (guard, round and sticky bits in typical round-to-nearest-even) to improve accuracy here. This encoding we will refer to as (N, s, α, β, γ) log (posit tapered). We can similarly choose to encode log numbers using an IEEE 754 format (with biased exponents, NaN representations etc.); we use this for our ELMA comparison against ï¬oat16 FMA in Section 8.
# 6 Additional hardware details
To make EMA/ELMA more energy efï¬cient, we restrict accumulator range to [f 2 min, fmax]; han- dling temporary underï¬ow rather than overï¬ow is more important in our experience. Kulisch accu- mulator conversion back to log or linear N-bit types uses a LZ counter and shifter but can be sub- stantially amortized in two ways. First, many sums are performed, with ï¬nal conversion done only once per inner product. Energy for the majority of work is thus lower than MAC/FMA (Section 8); increased area for increased energy efï¬ciency is generally useful in the era of âdark siliconâ [32], or conversion module instances can be rationed (limiting throughput) and/or clock gated. Second, structures with local operand reuse (e.g., systolic arrays, ï¬xed-function convolvers) naturally require fewer converter instances, reducing area (discussion in Section 8 as well). EMA and FMA accuracy are the same for a single sum c + ab; our power advantage would disappear in this domain, but the vast majority of ï¬ops/ops in NNs require repeated rather than singular sums. Note that int8/32 usage itself requires some conversion back to int8 in the end that we do not evaluate.
# 7 FPGA experiments
Our implementation is in SystemVerilog for ASIC evaluation, built into an FPGA design with Intel FPGA OpenCL RTL integration support, with rudimentary PyTorch [27] integration. Source code is available at github.com/facebookresearch/deepfloat. We evaluate (N, s) posit and (N, s, α, β, γ) log arithmetic on the ResNet-50 CNN [13] with the ImageNet ILSVRC12 validation set [30]. We use ï¬oat32 trained parameters from the PyTorch model zoo, with batch normalization fused into preceding afï¬ne layers [15]. ï¬oat32 parameters and network input are converted to our formats via round-to-nearest-even; no other adjustment of these values is performed. When convert- ing into or out of a Kulisch accumulator, we can add a small exponent bias factor, adjusting the input exponent by m, or the output exponent by n. This is effectively free (a small adder). No changes are made to any activations except for such a bias of n = â4 at the last (fully connected) layer to recenter unnormalized log probabilities from around 16.0 to 1.0. Without this we have an additional loss in top-1 of around 0.5-1%, with little change to top-5. If the Kulisch accumulator itself can be directly considered for top-k comparison, this avoids the need as well. All math is replaced with the corresponding posit or log versions; average pooling is via division of the Kulisch accumulator.
Our results are in Table 2, along with two int8/32 quantization comparisons. (8, 0) linear posit has insufï¬cient dynamic range to work; activations are quickly rounded to zero. Our (8, 1, 5, 5, 7)
5
Table 3: Chip area and power for 28 nm, 1-cycle multiply-add at 500 MHz
Area µm2 336.672 121.212 117.810 96.768 Component int8/32 MAC PE multiply add non-combinational (8, 1, 5, 5, 7) log ELMA PE log multiply (9 bit adder) r(p(f )) (16x5 bit LUT) Kulisch shift (6 â 38 bit) Kulisch add (38 bit) non-combinational 376.110 32.760 8.946 81.774 123.732 126.756 272 17.1 5.4 71.0 54.2 124.3 ï¬oat16 (w/o denormals) FMA PE (5, 10) (11, 11, 10) log ELMA PE (this log is (5, 10) ï¬oat16-style encoding, same dynamic range; denormals for log and ï¬oat16 here are unhandled and ï¬ush to zero) 1545.012 1043.154 1358 805 32x32 systolic w/ int8/32 MAC PEs 32x32 systolic w/ (8, 1, 5, 5, 7) log ELMA PEs 348231 457738 226000 195500
Power µW 283 108.0 62.3 112.7
log result remains very close to (8, 1) linear posit. The int8/32 results listed do not start from the same ï¬oat32 parameters as our trained network, so they are not directly comparable. They use train- ing with simulated quantization [15] and KL-divergence calibration with sampled activations [23], whereas we perform math in the usual way in our log or linear domain arithmetic after rounding in- put and parameters. We obtain reasonably similar precision without retraining, sampling activations or learning quantization parameters, while retaining general ï¬oating point representations in 8 bits.
# 8 ASIC evaluation
We use Synopsys Design Compiler and PrimeTime PX with a commercially available 28 nm library, target clock 500 MHz. Process corners are SS@-40â¦C synthesis, TT@25â¦C power analysis at 0.81V. Table 3 investigates multiply-add PEs, and as a proxy for an accelerator design, a 32x32 matrix multiplication systolic array with these PEs. The ï¬oat16 FMA is Synopsys DesignWare dw_fp_mac. We accumulate to the C matrix in place (stationary C), shifting out values upon completion. The int8/32 array outputs unprocessed int32; for ELMA, Kulisch accumulators are shifted across the PEs for C output and converted to 8 bit log at the boundary via 32 conversion/encoder modules. The 1024 PEs within do not include these (as discussed in Section 6). 64 posit taper decoders are included for where A and B are passed as input. Power analysis uses testbench waves for 128-d vectors with elements drawn from N (0, 1); int8 quantization has a max of 2Ï. PEs evaluate a variety of these inner products, and the systolic arrays a variety of GEMMs with these vectors.
ELMA saves 90.9 µW over int8/32 on multiplication, but loses 68.3 µW on the add. ELMA non- combinational demands are higher with additional state required (Kulisch and decoded log numbers), but could be reduced by not handling underï¬ow all the way to f 2 min. Despite the larger Kulisch adder, effectively only 6 bits are summed (with carry) each cycle versus up to 16 with int8/32; strategies for 500+ bit Kulisch accumulators [34] might work in this small regime to further take advantage of this. Our 16-bit ELMA α = 11 p(f ) combinational LUT is 386 µm2 despite compression, now a signiï¬cant portion of the design. Larger α likely needs a compiled ROM or explicit compute of p(f ).
A more in-depth analysis for our work would need to determine a Pareto frontier between fre- quency/latency, per-operation energy, area, pipeline depth, math implementation and accuracy sim- ilar to the Galal et al. FPU generator work [7], to see precisely in what regimes ELMA is advanta- geous. We provide our limited analysis here, however rough, to help motivate future investigation.
6
# 9 Conclusions
DNNs are resilient to many forms of numerical tinkering; they allow re-evaluation of design deci- sions made long ago at the bottom of the hardware stack with reduced fear of failure. The design space of hardware real number representations is indeed quite large and underexplored [22], as is as the opportunity to improve hardware efï¬ciency and software simplicity with alternative designs and judicious use of numerical approximation. Log domain representations, posits, Kulisch accumula- tion and combinations such as ELMA show that ï¬oating point efï¬ciency and applicability can be substantially improved upon. We plan on continuing investigation of this arithmetic design space at the hardware level with DNN training, and on general numerical algorithms in the future.
# Acknowledgments
We thank Synopsys for their permission to publish baseline and comparative results obtained by using their tools and DesignWare components, so we could present realistic numbers on our research using a popular 28 nm semiconductor technology node.
# References
[1] Z. Cai, X. He, J. Sun, and N. Vasconcelos. Deep learning with low precision by half-wave gaussian quantization.
[2] J. N. Coleman, E. Chester, C. I. Softley, and J. Kadlec. Arithmetic on the european logarithmic micropro- cessor. IEEE Transactions on Computers, 49(7):702â715, 2000.
[3] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1. arXiv preprint arXiv:1602.02830, 2016.
[4] T. Dettmers. 8-bit approximations for parallelism in deep learning. arXiv preprint arXiv:1511.04561, 2015.
[5] R. DiCecco, L. Sun, and P. Chow. Fpga-based training of convolutional neural networks with a reduced precision ï¬oating-point library. In 2017 International Conference on Field Programmable Technology (ICFPT), pages 239â242, Dec 2017.
[6] J. Fowers, K. Ovtcharov, M. Papamichael, T. Massengill, M. Liu, D. Lo, S. Alkalay, M. Haselman, In Proceed- L. Adams, M. Ghandi, et al. A conï¬gurable cloud-scale dnn processor for real-time ai. ings of the 45th Annual International Symposium on Computer Architecture, pages 1â14. IEEE Press, 2018.
[7] S. Galal, O. Shacham, J. S. Brunhaver II, J. Pu, A. Vassiliev, and M. Horowitz. Fpu generator for design space exploration. In Computer Arithmetic (ARITH), 2013 21st IEEE Symposium on, pages 25â34. IEEE, 2013.
[8] S. Golomb. Run-length encodings (corresp.). IEEE transactions on information theory, 12(3):399â401, 1966.
[9] Google. TPU TensorFlow ops. https://cloud.google.com/tpu/docs/tensorflow-ops. [10] S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan. Deep learning with limited numerical precision. In International Conference on Machine Learning, pages 1737â1746, 2015.
[11] J. Gustafson. The End of Error: Unum Computing. Chapman & Hall/CRC Computational Science. Taylor & Francis, 2015.
[12] J. L. Gustafson and I. T. Yonemoto. Beating ï¬oating point at its own game: Posit arithmetic. Supercom- puting Frontiers and Innovations, 4(2):71â86, 2017.
[13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770â778, 2016.
[14] M. Horowitz. 1.1 computingâs energy problem (and what we can do about it). In Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2014 IEEE International, pages 10â14. IEEE, 2014. [15] B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam, and D. Kalenichenko. Quantization and training of neural networks for efï¬cient integer-arithmetic-only inference. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[16] N. G. Kingsbury and P. J. Rayner. Digital ï¬ltering using logarithmic arithmetic. Electronics Letters, 7(2):56â58, 1971.
[17] U. Köster, T. Webb, X. Wang, M. Nassar, A. K. Bansal, W. Constable, O. Elibol, S. Gray, S. Hall, L. Hornof, et al. Flexpoint: An adaptive numerical format for efï¬cient training of deep neural networks. In Advances in Neural Information Processing Systems, pages 1742â1752, 2017.
[18] U. Kulisch. Advanced Arithmetic for the Digital Computer: Design of Arithmetic Units. Springer mathe- matics. Springer Vienna, 2002.
[19] U. Kulisch. Computer Arithmetic and Validity: Theory, Implementation, and Applications. De Gruyter Studies in Mathematics. De Gruyter, 2012.
[20] F. Li and B. Liu. Ternary weight networks. CoRR, abs/1605.04711, 2016.
7
[21] D. Lin, S. Talathi, and S. Annapureddy. Fixed point quantization of deep convolutional networks. International Conference on Machine Learning, pages 2849â2858, 2016. In
[22] P. Lindstrom, S. Lloyd, and J. Hittinger. Universal coding of the reals: alternatives to ieee ï¬oating point. In Proceedings of the Conference for Next Generation Arithmetic, page 5. ACM, 2018.
[23] S. Migacz. 8-bit inference with tensorrt. Nvidia GTC, 2017. [24] D. Miyashita, E. H. Lee, and B. Murmann. Convolutional neural networks using logarithmic data repre-
sentation. arXiv preprint arXiv:1603.01025, 2016.
[25] R. Morris. Tapered ï¬oating point: A new ï¬oating-point representation. IEEE Transactions on Computers, 100(12):1578â1579, 1971.
[26] J.-M. Muller, F. De Dinechin, C.-P. Jeannerod, S. Torres, et al. Handbook of ï¬oating-point arithmetic. Springer, 2010.
[27] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. In NIPS-W, 2017.
[28] E. Quinnell, E. E. Swartzlander, and C. Lemonds. Floating-point fused multiply-add architectures. In Signals, Systems and Computers, 2007. ACSSC 2007. Conference Record of the Forty-First Asilomar Conference on, pages 331â337. IEEE, 2007.
[29] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classiï¬cation using binary convolutional neural networks. In European Conference on Computer Vision, pages 525â542. Springer, 2016.
[30] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211â252, 2015.
[31] E. E. Swartzlander and A. G. Alexopoulos. The sign/logarithm number system. IEEE Transactions on Computers, 100(12):1238â1242, 1975.
[32] M. B. Taylor. Is dark silicon useful? harnessing the four horsemen of the coming dark silicon apocalypse. In Design Automation Conference (DAC), 2012 49th ACM/EDAC/IEEE, pages 1131â1136. IEEE, 2012.
[33] A. Tulloch and Y. Jia. High performance ultra-low-precision convolutions on mobile devices. CoRR, abs/1712.02427, 2017.
[34] Y. Uguen and F. De Dinechin. Design-space exploration for the kulisch accumulator. 2017. [35] V. Vanhoucke, A. Senior, and M. Z. Mao. Improving the speed of neural networks on cpus. Citeseer. [36] J. H. Wilkinson. Rounding errors in algebraic processes. Prentice-Hall, 1963. [37] D. Zuras, M. Cowlishaw, A. Aiken, M. Applegate, D. Bailey, S. Bass, D. Bhandarkar, M. Bhat, D. Bindel, S. Boldo, et al. Ieee standard for ï¬oating-point arithmetic. IEEE Std 754-2008, pages 1â70, 2008.
8 | {
"id": "1603.01025"
} |
1811.00511 | Towards Coherent and Cohesive Long-form Text Generation | Generating coherent and cohesive long-form texts is a challenging task.
Previous works relied on large amounts of human-generated texts to train neural
language models. However, few attempted to explicitly improve neural language
models from the perspectives of coherence and cohesion. In this work, we
propose a new neural language model that is equipped with two neural
discriminators which provide feedback signals at the levels of sentence
(cohesion) and paragraph (coherence). Our model is trained using a simple yet
efficient variant of policy gradient, called negative-critical sequence
training, which is proposed to eliminate the need of training a separate critic
for estimating baseline. Results demonstrate the effectiveness of our approach,
showing improvements over the strong baseline -- recurrent attention-based
bidirectional MLE-trained neural language model. | http://arxiv.org/pdf/1811.00511 | Woon Sang Cho, Pengchuan Zhang, Yizhe Zhang, Xiujun Li, Michel Galley, Chris Brockett, Mengdi Wang, Jianfeng Gao | cs.CL | Selected for spotlight oral presentation at NAACL-HLT 2019 Workshop
on Narrative Understanding | null | cs.CL | 20181101 | 20190529 | 9 1 0 2
y a M 9 2 ] L C . s c [
2 v 1 1 5 0 0 . 1 1 8 1 : v i X r a
# Towards Coherent and Cohesive Long-form Text Generation
# Woon Sang Cho* Michel Galley! Chris Brockettâ Pengchuan Zhangâ Yizhe Zhang! Xiujun Liâ Mengdi Wang* Jianfeng Gao!
Princeton University *Microsoft Research AI *{woonsang, mengdiw}@princeton.edu i{penzhan, yizzhang, xiul,mgalley, chrisbkt, jfgao}@microsoft.com
# Abstract
Generating coherent and cohesive long-form texts is a challenging task. Previous works relied on large amounts of human-generated texts to train neural language models. How- ever, few attempted to explicitly improve neu- ral language models from the perspectives of coherence and cohesion. In this work, we propose a new neural language model that is equipped with two neural discriminators which provide feedback signals at the levels of sentence (cohesion) and paragraph (coher- ence). Our model is trained using a simple yet efï¬cient variant of policy gradient, called negative-critical sequence training, which is proposed to eliminate the need of training a separate critic for estimating baseline. Results demonstrate the effectiveness of our approach, showing improvements over the strong base- line â recurrent attention-based bidirectional MLE-trained neural language model.
# Introduction
The terms coherence and cohesion in linguistics are commonly deï¬ned as follows (Williams and Colomb, 1995).
⢠Cohesion: sentence pairs ï¬tting together the way two pieces of a jigsaw puzzle do.
Coherence: what all the sentences in a piece of writing add up to, the way all the pieces in a puzzle add up to the picture on the box. In laymanâs terms, cohesion indicates that two consecutive sentences are locally well-connected, and coherence indicates that multiple sentences globally hold together.
Generating cohesive and coherent natural lan- guage texts that span multiple sentences is a chal- lenging task for two principal reasons. First, there is no formal speciï¬cation of cross-sentence lin- guistic properties, such as coherence and cohesion of a text. Secondly, there is no widely accepted model to measure the two properties.
Most state-of-the-art neural approaches to nat- ural language generation rely on a large amount of human-generated text to train language mod- els (Graves, 2013; Cho et al., 2014; Sutskever et al., 2014). Although these models can generate sentences that, if judged individually, are similar to human-generated ones, they often fail to cap- ture the local and global dependencies among sen- tences, resulting in a text that is neither coherent nor cohesive. For example, neural language mod- els based on Recurrent Neural Networks (RNNs) are widely applied to response generation for dia- logue (Vinyals and Le, 2015; Shang et al., 2015; Sordoni et al., 2015; Li et al., 2015). Although the responses by themselves look reasonable, they are detached from the whole dialogue session. See Gao et al. (2018) for a comprehensive survey.
In this paper, we address the challenge in a prin- cipled manner, employing a pair of discriminators to score whether and to what extent a text is co- herent or cohesive. The coherence discriminator measures the compatibility among all sentences in a paragraph. The cohesion discriminator mea- sures the compatibility of each pair of consecutive sentences. These models, given a conditional in- put text and multiple candidate output texts, are learned to score the candidates with respect to the criterion. The scores are used as reward signals to train an RNN-based language model to generate (more) coherent and cohesive texts.
Contributions. Our main contributions are: (1) we propose two neural discriminators for mod- eling coherence and cohesion of a text for long- form text generation; (2) we present a simple yet effective training mechanism to encode these lin- guistic properties; (3) we propose negative-critical sequence training, a policy gradient method that uses negative samples to estimate its reward base- line and therefore eliminates the need for a sepa-
rate critic function; and (4) we develop a new neu- ral language model that generates more coherent and cohesive long-form texts, and empirically val- idate its effectiveness using the TripAdvisor and Yelp English reviews datasets.
# 2 Related work
Coherence and cohesion. Coherence and cohe- sion have been extensively studied in the compu- tational linguistics community, particularly in the âpre-deep-learningâ era. Lack of formal speciï¬- cations for coherence and cohesion (Mani et al., 1998), resulted in many different formalisms, such as Rhetorical Structure Theory (Mann and Thompson, 1988), and other forms of coher- ence and cohesion relations and their quantiï¬- cation (Edmundson, 1969; Halliday and Hasan, 1976; Hobbs, 1985; McKeown, 1985; Cohen and Levesque, 1985; Hovy, 1988; Liddy, 1991; Hovy, 1991; Mani et al., 1998; Cristea et al., 1998; Barzi- lay and Lapata, 2008; Van Dijk, 2013). This list is not exhaustive. However, prior work jointly ex- ploring coherence and cohesion using neural mod- els in the context of long-form text generation has not come to our attention.
Reinforcement learning for text generation. The text generation task can be framed as a rein- forcement learning (RL) problem (Daumé et al., 2009), in which the generator G is acting as a policy 7, with parameters 6,, and each generated word at time t, w;, can be viewed as an action to be chosen by the policy from a large discrete space, or vocabulary, conditioned on state s;_1 = w<t-1. Let r; be the reward for a partially generated text sequence w<;. We define the long-term ex- pected reward 7(m) = Eggag axl ceo 7 ri]. where q is the initial distribution of conditional in- put texts. Following Sutton et al. (1999), the gra- dient of 7 with respect to 0, is
âÎ¸Ï J = Esâ¼ÏÏ,aâ¼Ï(·|s)[QÏ(s, a)âÎ¸Ï log ÏÎ¸Ï (a|s)]
where ÏÏ is the stationary distribution and QÏ(s, a) is the expected return from state s and taking action a, both following policy Ï. For brevity, we omit the derivation. In this work, we formulate text generation as an episodic RL prob- lem with episode length L, rewards rL being avail- able only at the end of episode and γ = 1.
There are many works on training neural lan- guage models using rewards, such as Ranzato
et al. (2015) and Paulus et al. (2017). These works directly optimize for speciï¬c metrics, such as BLEU (Papineni et al., 2002) or ROUGE (Lin and Hovy, 2003), using REINFORCE (Williams, 1992). However, these metrics do not give a com- plete picture of the text generation quality. Only recently have there been efforts to provide more relevant objectives, such as consistency and repe- tition in a text (Li et al., 2015, 2016a; Holtzman et al., 2018). But these works use the objectives to re-rank candidate outputs, not to reward or pe- nalize them. Li et al. (2016b) constructed a set of reward models for the dialogue task, such as information ï¬ow and semantic coherence, to tune the generator, yet they do not provide an ablation study on the relative contribution of these reward models individually. It is not clear that these re- ward models can be generalized to other tasks, in particular, long-form text generation tasks.
The most relevant to our work is Bosselut et al. (2018), which promotes text generation in the cor- rect order, and discourages in its reverse order us- ing rewards. However, this may not be sufï¬cient in capturing coherence since there are many nega- tive orderings given a paragraph. From this pool, we assess the relative quality of generations. Fur- thermore, we model cohesion between consecu- tive sentence pairs using word-level features.
GANs for text generation. Another line of re- search involves the use of Generative Adversar- ial Networks (GANs) (Goodfellow et al., 2014) to incorporate feedback signals for text generation (Yu et al., 2017; Lin et al., 2017; Zhang et al., 2017; Guo et al., 2017; Fedus et al., 2018; Zhang et al., 2018). The discriminators in these works are trained to distinguish real texts from gener- ated ones, operating as a black-box than providing feedback on linguistic aspects. Yang et al. (2018) partially addressed this issue by using a trained language model as the discriminator. Although the discriminator provides a ï¬ne-grained feedback at the word level, it does not model linguistic prop- erties, such as cohesion and coherence.
Many text generator models are inadequate for generating a cohesive and coherent long-form text that span multiple sentences. As a result, human readers can easily distinguish the generated texts from real ones. In this paper, we argue that the primary reason is the lack of an effective mech- anism to measure and control for the local and global consistency in model-generated texts.
# 3 Coherence and Cohesion Models
We assume that global coherence of a text depends to a large degree upon how its individual sentences with different meanings are organized. Therefore, we focus our evaluation of coherence solely based on the sentence-level features. If the sentences are not organized properly, the intention of the para- graph as a whole is obscure, regardless of seamless local connectivity between consecutive sentences. This is not to say that local connections between any two neighboring sentences can be overlooked. One can easily distinguish a generated sentence from a real one by judging whether it is seman- tically cohesive with its neighboring sentences.
We strive to embody these two different yet im- portant concepts by developing coherence and co- hesion discriminators, operating on the sentence level and word level, respectively. Our design of these two discriminators is inspired by the Deep Structured Semantic Model (DSSM) which was originally developed to measure the semantic sim- ilarity between two texts (Huang et al., 2013; Gao et al., 2014; Palangi et al., 2016; Xu et al., 2018). In this study, we extend âsemantic similarityâ to coherence and cohesion in a long-form text.
# 3.1 Coherence discriminator: Dcoherence
The coherence discriminator models the coher- ence score, which measures how likely two text chunks add up to a single coherent paragraph. Let S := [s1, 89,..., 8] be the source text chunk that consists of n sentences, Tâ := [t1, ta, ...,tm] be the veal target text chunk that consists of m sentences, and T := [t1, 2, tn] be the arti- ficially constructed incoherent target text chunk that consists of m sentences. Deoherence iS de- signed to distinguish a positive (coherent) pair (S, T) from a negative (incoherent) pair (S, T) by assigning different scores, i.e., Deoherence(S,T) > Deoherence('S, T).
Model architecture. The model takes a form of dual encoder. Given source text chunk S and target text chunk T , the coherence discriminator Dcoherence computes the coherence score in three steps, as illustrated in Figure 1 (upper). First, each sentence is encoded by the bag-of-words (BOW) embedding, i.e., the average of its word vectors from a pre-trained word embedding (Pennington et al., 2014). Secondly, an encoder which can be implemented using a convolutional neural network
colierence, CNN/RNN Fully connected source chunk source {[Sp,S2) ---, Sq] ~F(BOW(S,), BOWS), «-, BOW(S,)D +s i features / coal chunk target fl sim. tm] 9 (BOW(t), BOW (C2), .., BOW(t,,)]) | chunk feat cohesion nsecutive sentences sentence 2 EAD jonniy an u([t11,t1,2, «.,t1,%]) ââ>| features V([t2,1 ta.2 toa 2 sta) â ene
Figure 1: Illustration of coherence and cohesion dis- criminators. Dcoherence takes in bag-of-words sentence embeddings as inputs, and Dcohesion takes in the raw word embeddings of consecutive sentences as inputs. The source encoder f (or u) is different from the target encoder g (or v).
(CNN)! or RNN?, denoted as f, takes as input the BOW vectors of the source text chunk S and en- codes it into a single vector f(S'). Similarly, g en- codes the target text chunk T into g(T'). The two encoders f(-) and g(-) share the same architecture but do not share parameters, i.e., 05 # Og, and thus Deoherence(S,7') is not symmetric. Thirdly, Deoherence(S, Tâ) is computed as the cosine similar- ity of the two vectors f(S) and g(Tâ). The score is a real value between â1 and 1, where | indicates maximal coherence, and â1 minimal coherence.
Note that we use the simple BOW vectors to encode sentences in the coherence discriminator, which is different from the CNN sentence embed- ding scheme in the cohesion discriminator that we introduce in Section 3.2. Although the BOW vec- tor ignores the word-order information in the sen- tence, it is empirically shown to be effective in pre- serving the high-level semantic information in the sentences and achieves success in sentence simi- larity and entailment tasks (Wieting et al., 2016; Arora et al., 2017). Because high-level semantic information of sentences is sufï¬cient to determine whether a paragraph is coherent, we choose to use BOW vectors to encode sentences in Dcoherence.
The parameters of Dcoherence, θf and θg are op- timized using a pairwise ranking loss. To this end, we need both positive and negative pairs. While the positive (coherent) pairs come from the train-
1We explored with deeper networks. However, the perfor- mance difference was marginal. For simplicity, we decided to use a 1-layer convolutional network architecture (Kim, 2014; Collobert et al., 2011).
2For clarity in our model description, we omit RNN here- after. We present results using both CNN and RNN encoders in Table 2.
ing data, negative (incoherent) pairs need to be ar- tiï¬cially constructed. The next section describes the way these negative pairs are generated.
Constructing negative (incoherent) pairs. Given a training minibatch {(5;, T; ye 1» we con- struct 2* Bâ1 negative pairs {(5;, Tj,;) yer ' for every positive pair (S;,7;) using three different methods, inspired by Wieting et al. (2016). For notation simplicity, we omit the minibatch index i in the rest of this section. For each positive pair (S, T) in the minibatch:
e We rotate T with S fixed, and thus obtain all Bâ1 mismatched pairs {(S, T;) yea 1 asneg- ative pairs.
e We shuffle the sentence order in Jâ once, known as a derangement, to break its coher- ence. This yields one negative pair (5, T). e We combine the previous two methods, that is, we rotate T in the minibatch and shuffle sentences within the target chunk, yet another B â 1 negative pairs {(S, T;) These 2Bâ1 negative pairs anda single positive pair, in total, pose a challenge for the discriminator in learning to retrieve the correct pair.
Training using a pairwise ranking loss. The parameters of f(-) and g(-) are optimized in such a way that a positive pair scores higher than its negative pairs, i.e., Deoherence(S, 7) > Deoherence(5, Tj) for any j7. To achieve this, we propose to minimize the following pairwise rank- ing loss (Gong et al., 2013) with margin 6:
Leorerence(0 8g) â= max (0.5 ~ Deorerence(S:T) +O ((Daraml7)8851))
((Daraml7)8851)) ({arj} 44) => N jai
where AVG" ({arj} 44) => er; /Xn ert N _ jai Wie; and w; =
Notice that AVG* is the mean operator when A = 0 and approaches the max operator when A â> co. These two extreme cases correspond to rank- ing against the average of all negative pairs and ranking against the single most challenging neg- ative pair, respectively. Empirically, training the models using the weighted average (0 < \ < 00), which assigns larger weights to more challenging negative pairs, stabilizes the training and expedites the convergence.
3.2 Cohesion discriminator: Dcohesion The cohesion discriminator models the cohesion score, which measures how likely two sentences
form a cohesive pair of consecutive sentences. Let Sh = [sk, sz, tes st] be the k'" sentence that con- sists of n words, 5,41 â= [Skat Stet os SMa] be the real next sentence that consists of m words, and S441 â= [Shar Raps en Se] be the artificially constructed incohesive next sen- tence that consists of m words. Deohesion iS designed to distinguish a positive (cohesive) pair (sx, S%41) from a negative (incohesive) pair (Sk, 8g41) by assigning them with different scores, ie., Deohesion (Sk; Sk+1) > Deohesion (Sk Sk+1)-
Model architecture. Like the coherence dis- criminator, this model also takes a form of dual encoder. Given (5%, 8%+1), Deohesion Computes the cohesion score in three steps, as illustrated in Fig- ure | (lower). The first step is to obtain two se- quences of word embedding to represent the two sentences. Then, a pair of source network u(-) and target network v(-) are utilized to encode both 8, and s%,41 into two low-dimensional continuous vectors. The two encoders u(-) and v(-) share the same architecture but do not share parameters, i.e., 0, A Oy, and thus the Deohesion (SK; S41) 18 not symmetric. Finally, Deohesion (Sk; Sk+1) 1S com- puted as the cosine similarity of the two vectors.
Note that we use CNNs or RNNs to embed sen- tences in Dcohesion, which takes the word order in a sentence into consideration. This is different from the BOW embedding in the Dcoherence where the word order does not matter, because the word or- der indeed matters when determining the cohesion of two consecutive sentences. As an example from Table 1, for the source sentence âOnce you get there you are greeted by the staff.â, âThey explain everything to you.â is a cohesive follow-up while âYou explain everything to them.â is not.
The parameters of Dcohesion, θu and θv are opti- mized using the same pairwise ranking loss. The positive pairs (a training minibatch) for Dcohesion is obtained from (1) decomposing each paragraph (S, T ) in {(Si, Ti)}B i=1 into pairs of consecutive sentences and (2) randomly selecting B pairs as the positive (cohesive) pairs {(sk, sk+1)i}B i=1. We construct negative (incohesive) pairs using the same methods as in the coherence discriminator.
Constructing negative (incohesive) pairs. We construct 2 * B â 1 negative pairs {(sk, Sk41j)i er for every positive pair (sk, 8g41); using three different methods and omit the minibatch index i hereafter. For each positive
pair (sk, sk+1) in the minibatch:
e We mismatch sentence pairs to obtain . oS B-1 { (sk, S413) bjt -
e We shuffle words in s;,41 to obtain 5,41. e We combine the previous two methods and : we : ~ B-1 obtain additional pairs {(5;., 54415) bjt - In total, we obtain 2B â 1 negative pairs for each positive pair in the minibatch.
Training using a pairwise ranking loss. The parameters of u(-) and v(-) are optimized such that Deohesion (Sk: Sk41) > Deonesion (Sk Seti) for any j. To achieve this, we propose to minimize the fol- lowing pairwise ranking loss with margin 6:
Leohesion(9u; Av) = max (0. 5 â Deohesion(Sks 841) . (2) + AvG* ({Deonesion(Sk, Six) BEF") ).
We leave the training details and hyper- parameter conï¬gurations to Section 5.2.
# 4 Negative-Critical Sequence Training for Long-form Text Generation
# 4.1 Long-form text generator: G
The generator G is an attention-based bidirec- tional sequence-to-sequence model (Bahdanau et al., 2014) and is pre-trained by maximizing the log likelihood on training data, which we denote as GMLE. However, long-form texts generated us- ing GMLE often do not meet our high coherence and cohesion standards.
We propose to use the two pre-trained discrimi- nators, Dcoherence and Dcohesion, to modify the text generation behavior of GMLE. The scores from the discriminators are used as reward (or penalty) signals to adjust the parameters of GMLE using a variant of policy gradient, called negative-critical sequence training, which we propose for our task and describe in details in the next subsection.
# 4.2 Negative-critical sequence training
For an arbitrary pair of S and Tgen, where Tgen is the generatorâs output conditioned on S, we com- pute the coherence and cohesion scores by calling Dcoherence and Dcohesion. Since each generated text consists of multiple sentences, the overall cohe- sion score is computed as the mean of all the con- secutive sentence pairs, (sk, sk+1) â [Sâ1, Tgen], where Sâ1 is the last sentence from the source.
These scalar scores, however, are not inter- pretable since the discriminators are trained by op-
timizing a pairwise ranking loss. Instead, the dif- ferences between positive pair scores and the max- imal or average negative pair scores provide in- sights of how well the models distinguish between the positive and the negative pairs.
This difference relates to reward with baseline in actor-critic methods (Witten, 1977; Barto et al., 1983; Williams, 1992; Sutton et al., 1999) that typ- ically require a separate critic function as a base- line. In NLP, we have observed similar practices by Ranzato et al. (2015), Bahdanau et al. (2016), and Nguyen et al. (2017). Rennie et al. (2017) proposed a method that avoids learning a sepa- rate critic. Similarly, our method does not require learning a separate critic since this margin is a form of reward minus baseline. Speciï¬cally, we deï¬ne the reward functions with baselines as:
Reoherence (S, Tyen) = Deoherence (S, Tyen) =] (3) ~ Ee [ Peoterene(S, 7)|
Reohesion([Sâ1, Tyen ) = 1 Focal > Deohesion(Sk; 8k-+1) g (8k,8k41) (4) C[Sâ1,Tgen]} ek (8k 8k41) Deotesion( Sk, 5k+1)| SH) Cig.T]
where |Tyen| denotes the number of sentences in Tyen, and Ex ( and Eg, 41) are computed by aver- aging over an ensemble of negative pairs.
Notice that this reward resembles the ranking loss we use to train our discriminators, except that our baseline is the mean score (instead of the weighted mean) over negative pairs. The ra- tionale for this difference is that: because the best artiï¬cially constructed negative sample may be a formidably good sample, the maximal or the weighted mean can in fact be noisy as a baseline and thus introduce noise in rewards. To alleviate such noise, we use the mean discriminator score of negative pairs as the baseline, and this turns out to be an empirically better alternative. Then we use policy gradient to maximize a weighted sum of the coherence and cohesion rewards.
# 5 Experiments
In this section, we detail the training and evaluation of Dcoherence, Dcohesion, the base- line generator GMLE, and the RL-tuned gen- erators GMLE+RL(cohesion), GMLE+RL(coherence), and
# cohesion coherence 0.0002
0.0411 0.0084 0.0054
target the beds were very uncomfortable and the linen was very old . breakfast was ok , but the staff were incompetent . on our last day they were too lazy to clean our table and never bothered taking our order . we had to leave having had no breakfast , as we ran out of time . they saw us get up and leave and didn t even apologise for the appalling lack of service .
0.0768 0.0591 -0.0097 0.0457
negative target the staff recommended great restaurants with very reasonable prices within walking distance . the paris hop on bus stops nearby . the gare l est is within 3 blocks . we paid 75 euro per nite excluding breakfast but paid for breakfast one day and found it very good and reasonably priced . the rooms are clean and bathrooms ensuite . 0.0514 0.0798 -0.0156 0.0082 -0.2001
+0.3735
0.1004 -0.1103 0.0787 -0.0830
Table 1: Coherence and cohesion rewards on test data. The cohesion reward at the end of each line is computed with its next sentence. This is an example of contradiction and inconsistent sentiment, suggestive of incoherence. We append more examples with extreme cohesion rewards.
TripAdvisor Target Sentences Retrieval Yelp Target Sentences Retrieval R@1 R@5 R@10 Discriminators Encoding R@1 R@5 R@10 Dcoherence Dcohesion Conv512 GRU1024 Conv512 GRU1024 2,3,4,5 1-layer, bi-dir. 3,4,5,6 1-layer, bi-dir. 0.18 0.26 0.12 0.11 0.43 0.50 0.28 0.21 0.60 0.65 0.43 0.33 Dcoherence Dcohesion Conv512 GRU1024 Conv512 GRU1024 2,3,4,5 1-layer, bi-dir. 3,4,5,6 1-layer, bi-dir. 0.33 0.39 0.14 0.11 0.61 0.68 0.33 0.26 0.74 0.81 0.47 0.39
# Discriminators Encoding
Table 2: Retrieval ratios for coherence and cohesion discriminators from a collection of 100 negative candidates from the test data. The reported numbers are the averages over 20 evaluations. Notations: Conv512 2,3,4,5 is a convo- lutional input encoder with ï¬lter sizes 2, 3, 4, and 5, and there are 512 ï¬lters for each ï¬lter size. GRU1024 1-layer, bi-dir. is a 1-layered bi-directional GRU input encoder with hidden size 1024. We experimented different conï¬gurations for both encoder types, and selected the best performing models for the negative-critical sequence training step.
GMLE+RL(coherence, cohesion). We show that, by using feedback from the discriminators, the quality of the generated texts is signiï¬cantly improved. See Table 3 for a sample comparison.
# 5.1 Dataset
We use the TripAdvisor hotel English reviews dataset collected by Wang et al. (2010) and the Yelp English reviews dataset3. We use only the
3https://www.yelp.com/dataset
# source sentences
the hotel inglaterra delivered as promised . the staff was welcoming and spoke good english . the cleaning staff did a very good job every day . the rooms were spotless and very modern . the bathroom was large and had a very nice shower , and there were two generously sized bath towels that were twice the size of normal towels .
# GMLE
the breakfast in the morning was delicious and very good . it was the only hotel where i slept very well . the staff was very helpful in late afternoon or late times . the breakfast was adequate , with a decent range of cereals , fruit , and fruits . there is also free use of the coffee in the reception area .
the breakfast was plentiful including fresh breads and cooked to order . the location was fantastic . it is in the north of the marina and in a very short distance . the marina has a small swimming pool with sitting area and a small gym . they are very popular and guests have an evening reception which is very nice .
Table 3: Sample generations from our MLE-trained baseline model, GMLE, and our discriminator-guided model GMLE+RL(coherence, cohesion). The red texts highlight a common problem in GMLE - it exhibits a repetition, and an inconsistent opinion as a review. In contrast, our discriminator-guided model is able to generate a more interesting, and sentiment-consistent continuation.
TripAdvisor Model GMLE (baseline) GMLE +RL(cohesion) GMLE+RL(coherence) GMLE+RL(coherence, cohesion) NLL PPL BLEU-3 BLEU-4 BLEU-5 0.86 0.77 0.80 0.80 2.36 2.18 2.24 2.25 0.38 0.46 0.44 0.44 0.19 0.27 0.25 0.24 0.08 0.14 0.12 0.12 intra- unique-1 0.66 0.64 0.64 0.65 intra- unique-2 0.93 0.94 0.94 0.94 inter- unique-2 0.40 0.38 0.39 0.40 inter- unique-3 0.72 0.71 0.72 0.72 length ratio 1.08 0.97 1.06 1.02 Yelp Model GMLE (baseline) GMLE+RL(cohesion) GMLE+RL(coherence) GMLE+RL(coherence, cohesion) NLL PPL BLEU-3 BLEU-4 BLEU-5 1.32 1.26 1.24 1.25 3.84 3.65 3.56 3.59 0.37 0.45 0.45 0.43 0.17 0.23 0.23 0.22 0.07 0.11 0.11 0.11 intra- unique-1 0.68 0.68 0.69 0.69 intra- unique-2 0.95 0.95 0.95 0.95 inter- unique-2 0.54 0.53 0.55 0.56 inter- unique-3 0.86 0.85 0.87 0.88 length ratio 1.07 1.05 1.00 1.05
Table 4: An ablation study with automated evaluation metric scores: NLL, PPL, BLEU-n, intra/inter-unique-n, along with the length ratio with the length of corresponding true target sentences as 1. Signiï¬cant numbers are highlighted in bold before rounding.
subsets of the two datasets that satisfy the follow- ing two conditions: (1) a review must have at least 10 sentences, and (2) each sentence has from 5 to 30 words. This yields roughly 60,000 TripAdvi- sor reviews and 220,000 Yelp reviews, split into [0.8, 0.1, 0.1] ratio for train/dev/test sets.
We merge the source and target vocabularies, and limit it to the top 50,000 frequent words, ex- cluding special tokens. For each review, we use the ï¬rst ï¬ve sentences as the input S to G, and the next ï¬ve sentences as the target output T from G.
# Implementation details
Baseline GMLE. GMLE takes individual words as inputs and embeds into a pre-trained GloVe 300- dimensional word vectors. This embedding layer is ï¬xed throughout training. GMLE uses a two- layered GRU and hidden size of 1024 for both encoder and decoder. During optimization using Adam (Kingma and Ba, 2014), we set the learn- ing rate to 2e-4 and clip the gradientâs L2-norm to 1.0. We initially train GMLE for 60 epochs on the TripAdvisor data and 30 epochs on the Yelp data.
2, 3, 4, and 5 for Dcoherence (3, 4, 5, and 6 for Dcohesion), each with 512 ï¬lters. Each convolution ï¬lter is followed by a tanh activation. Then, we max-pool in time and append a fully connected layer to generate a feature vector of dimension 512, followed by a batch normalization layer and a tanh activation. For the RNN-based encoder, we use a 1-layered bi-directional GRU, concatenate the ï¬nal hidden states at both ends, and append the same remaining layers.
Both discriminators use the pre-trained GloVe word embedding vectors4, which are ï¬xed during the training. We use an Adam optimizer with a learning rate of 1e-5. We ï¬x λ = 2 and δ = 0.2 in equations (1) and (2).5 We train both discrimina- tors for 50 epochs and choose the models with the best R@1 scores on the validation dataset.
Model GMLE+RL. In the ï¬ne-tuning stage, we use the negative-critical sequence training method,
4The vector dimension can be different from that of G. The differences were marginal for sizes 50, 100, and 300. For results shown in this paper, we used the same dimension of size 300.
Discriminators. For the CNN-based encoder, the convolutional layer consists of ï¬lters of sizes
5We performed a coarse grid search over the values of λ and δ and these values for the hyper-parameters pair resulted in fast convergence to high recall scores on the dev dataset.
Cohesion Coherence Human judges preferred: Human judges preferred: Our Method Neutral Comparison Our Method Neutral Comparison GMLE+RL GMLE+RL 36.41% 33.57% 30.50% GMLE 29.91% 30.85% 39.24% Human GMLE+RL GMLE+RL 37.23% 31.44% 31.80% GMLE 28.96% 31.32% 39.72% Human
Table 5: Results of Human Evaluation showing preferences (%) for our model GMLE+RL(coherence, cohesion) vis-a-vis the baseline GMLE after adjustment for spamming. GMLE+RL(coherence, cohesion) is preferred over GMLE. For simplicity, the 5-point Likert scale has been collapsed to a 3-point scale. See the Appendix for further details of distributions.
as described in Section 4, up to 5 epochs, with a learning rate of 1e-5. We equally weight the coher- ence and cohesion rewards, 1 2 Rcoherence(S, Tgen)+ 1 2 Rcohesion([Sâ1, Tgen]). We also continue the su- pervised learning of G to constrain the policy search within a space that represents the sentences that are likely to be grammatically plausible, simi- lar to Wu et al. (2016); Paulus et al. (2017); Lewis et al. (2017). For all the generations from GMLE and GMLE+RL, we use the simple greedy decoding method because we do not observe any signiï¬cant difference when switching to beam search.
# 5.3 Results
domly selected 200 samples from the TripAd- visor dataset, including corresponding generated output from the baseline GMLE and our model GMLE+RL. For comparison, we pair systems as (Human â GMLE+RL) and (GMLE+RL â GMLE). The outputs of these system pairs are presented in random order and each is ranked in terms of coherence and cohesion using a ï¬ve-point Likert scale by human judges. Initially, we hired 7 judges to judge each pair. We identiï¬ed a group of poor judges (probable spammers) who choose GMLE+RL over the Human more than 40% of the time, and eliminated them from the judge pool. Table 5 re- ports the ï¬nal scores in terms of percentages of the total remaining judgments.
Evaluating Dcoherence and Dcohesion. Since the discriminators are implemented as pairwise rankers, we employ the metrics commonly used in information retrieval for evaluation, i.e., recall at K (R@K), which is deï¬ned as the fraction of correctly identifying an item in the TOP-K (Baeza-Yates and Ribeiro-Neto, retrieved list 1999). We present the retrieval results in Table 2. To help readers understand the roles of Dcoherence and Dcohesion, we present examples of positive and negative pairs and their rewards in Table 1.
Automatic evaluation of G. It is widely known that there is no perfect automated metric to eval- uate text generators. Nevertheless, we report the scores of widely used metrics, including negative log-likelihood (NLL), perplexity (PPL), BLEU and the proportion of unique n-grams within a sin- gle generation (intra-unique-n), and across gener- ations (inter-unique-n), as in Gu et al. (2018). Re- sults in Table 4 show that our discriminators sig- niï¬cantly improve BLEU scores, NLL and PPL, with marginal difference in diversity.
# 6 Conclusion
This paper proposes a neural approach to explic- itly modeling cross-sentence linguistic properties, coherence and cohesion, for long-form text gen- eration. The coherence discriminator Dcoherence provides a macro-level view on structuring a para- graph. The cohesion discriminator Dcohesion pro- vides a micro-level view on local connectivity be- tween neighboring sentences. The pre-trained dis- criminators are used to score the generated texts and artiï¬cially constructed negative pair scores are used to form baselines for the policy gradient, which we call negative-critical sequence training, to train neural language models.
On two long-form text generation tasks, hu- man evaluation results are consistent with auto- matic evaluation results, which together demon- strate that our proposed method generates more lo- cally and globally consistent texts with the help of the discriminators.
Human evaluation of G. Coherence and co- hesion of a text cannot be easily measured us- ing standard automated metrics. Thus, we per- form crowd-sourced human evaluation. We ran-
Despite the encouraging initial results, we only scratched the surface of the problem. The pro- posed method is yet to be signiï¬cantly improved to meet the ultimate goal of generating meaning- ful and logical long-form texts.
# References
Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. In International Conference on Learning Representations.
Ricardo Baeza-Yates and Berthier Ribeiro-Neto. 1999. Modern information retrieval, volume 463. ACM Press Books.
Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Neural machine translation by CoRR, Bengio. 2014. jointly learning to align and translate. abs/1409.0473.
Andrew G Barto, Richard S Sutton, and Charles W Anderson. 1983. Neuronlike adaptive elements that can solve difï¬cult learning control problems. IEEE transactions on systems, man, and cybernet- ics, SMC-13(5):834â846.
Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Compu- tational Linguistics, 34(1):1â34.
Antoine Bosselut, Asli Celikyilmaz, Xiaodong He, Jianfeng Gao, Po-Sen Huang, and Yejin Choi. 2018. Discourse-aware neural rewards for coherent text generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 173â184.
Kyunghyun Cho, Bart van Merri¨enboer, C¸ alar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoderâdecoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP).
Philip R Cohen and Hector J Levesque. 1985. Speech In Proceedings of the 23rd acts and rationality. annual meeting on Association for Computational Linguistics, pages 49â60. Association for Compu- tational Linguistics.
Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493â2537.
Dan Cristea, Nancy Ide, and Laurent Romary. 1998. Veins theory: A model of global discourse cohe- In Proceedings of the 36th sion and coherence. Annual Meeting of the Association for Computa- tional Linguistics and 17th International Conference on Computational Linguistics-Volume 1, pages 281â 285. Association for Computational Linguistics.
Hal Daum´e, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learn- ing, 75(3):297â325.
Harold P Edmundson. 1969. New methods in au- Journal of the ACM (JACM), tomatic extracting. 16(2):264â285.
William Fedus, Ian Goodfellow, and Andrew Dai. 2018. MaskGAN: Better text generation via ï¬lling in the ËËËË. In ICLR.
Jianfeng Gao, Michel Galley, and Lihong Li. 2018. arXiv Neural approaches to conversational AI. preprint arXiv:1809.08267.
Jianfeng Gao, Patrick Pantel, Michael Gamon, Xi- aodong He, and Li Deng. 2014. Modeling interest- ingness with deep neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2â 13.
Yunchao Gong, Yangqing Jia, Thomas Leung, Alexan- der Toshev, and Sergey Ioffe. 2013. Deep con- volutional ranking for multilabel image annotation. arXiv preprint arXiv:1312.4894.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative ad- In Advances in Neural Information versarial nets. Processing Systems 27, pages 2672â2680.
Alex Graves. 2013. Generating sequences with arXiv preprint recurrent neural networks. arXiv:1308.0850.
Xiaodong Gu, Kyunghyun Cho, JungWoo Ha, and Sunghun Kim. 2018. DialogWAE: Multimodal response generation with conditional wasserstein auto-encoder. CoRR, abs/1805.12352.
Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. 2017. Long text generation via adversarial training with leaked information. arXiv preprint arXiv:1709.08624.
M Halliday and Ruqaiya Hasan. 1976. Cohesion in English. London, Longmans.
Jerry Hobbs. 1985. On the coherence and structure of discourse. Center for the Study of Language and Information, Stanford University.
Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. In Proceedings of the Association for Computational Linguistics.
Eduard H Hovy. 1988. Planning coherent multisenten- In Proceedings of the 26th annual meet- tial text. ing on Association for Computational Linguistics, pages 163â169. Association for Computational Lin- guistics.
Eduard H Hovy. 1991. Approaches to the planning of In Natural language generation in coherent text. artiï¬cial intelligence and computational linguistics, pages 83â102. Springer.
Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry P. Heck. 2013. Learning deep structured semantic models for web search us- ing clickthrough data. In CIKM.
Yoon Kim. 2014. Convolutional neural networks for sentence classiï¬cation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP).
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Mike Lewis, Denis Yarats, Yann N Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning for negotiation dialogues. arXiv preprint arXiv:1706.05125.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objec- tive function for neural conversation models. arXiv preprint arXiv:1510.03055.
Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016a. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155.
Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016b. Deep re- inforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541.
Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversar- ial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547.
Elizabeth DuRoss Liddy. 1991. The discourse-level structure of empirical abstracts: An exploratory Information Processing & Management, study. 27(1):55â81.
Auto- matic evaluation of summaries using n-gram co- In Proceedings of the 2003 occurrence statistics. Conference of the North American Chapter of the Association for Computational Linguistics on Hu- man Language Technology - Volume 1, NAACL â03, pages 71â78, Stroudsburg, PA, USA.
Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. 2017. Adversarial ranking for language generation. In Advances in Neural Infor- mation Processing Systems, pages 3155â3165.
Inderjeet Mani, Eric Bloedorn, and Barbara Gates. 1998. Using cohesion and coherence models for text In Intelligent Text Summarization summarization. Symposium, pages 69â76.
William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional the- ory of text organization. Text-Interdisciplinary Jour- nal for the Study of Discourse, 8(3):243â281.
Kathleen R McKeown. 1985. Discourse strategies for generating natural-language text. Artiï¬cial Intelli- gence, 27(1):1â41.
Khanh Nguyen, Hal Daum´e, and Jordan L. Boyd- Graber. 2017. Reinforcement learning for bandit neural machine translation with simulated human feedback. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP).
H. Palangi, L. Deng, Y. Shen, J. Gao, X. He, J. Chen, X. Song, and R. Ward. 2016. Deep sentence em- bedding using long short-term memory networks: Analysis and application to information retrieval. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(4):694â707.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A method for automatic In Proceedings evaluation of machine translation. of the 40th Annual Meeting on Association for Com- putational Linguistics, ACL â02, pages 311â318.
Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive sum- marization. CoRR, abs/1705.04304.
Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. GloVe: Global vectors for In Proceedings of the 2014 word representation. Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532â1543.
MarcâAurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. CoRR, abs/1511.06732.
Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. 2017 IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR), pages 1179â1195.
Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversa- tion. arXiv preprint arXiv:1503.02364.
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive gen- In Proceed- eration of conversational responses. ings of the 2015 Conference of the North American Chapter of the Association for Computational Lin- guistics on Human Language Technology.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in neural information process- ing systems, pages 3104â3112.
Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approxima- tion. In Proceedings of the 12th International Con- ference on Neural Information Processing Systems, NIPSâ99, pages 1057â1063. MIT Press.
Teun A Van Dijk. 2013. News as discourse. Routledge.
Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. ICML Deep Learning Workshop.
Hongning Wang, Yue Lu, and ChengXiang Zhai. 2010. Latent aspect rating analysis on review text data: a rating regression approach. In KDD.
John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sen- tence embeddings. ICLR.
J.M. Williams and G.G. Colomb. 1995. Style: Toward Clarity and Grace. Chicago guides to writing, edit- ing, and publishing. University of Chicago Press.
Ronald J. Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. Mach. Learn., 8(3-4):229â256.
Ian H Witten. 1977. An adaptive optimal controller for discrete-time markov environments. Information and control, 34(4):286â295.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Googleâs neural ma- chine translation system: Bridging the gap between arXiv preprint human and machine translation. arXiv:1609.08144.
Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. 2018. Attngan: Fine-grained text to image gen- eration with attentional generative adversarial net- In Proceedings of the IEEE Conference works. on Computer Vision and Pattern Recognition, pages 1316â1324.
Zichao Yang, Zhiting Hu, Chris Dyer, Eric P Xing, and Taylor Berg-Kirkpatrick. 2018. Unsupervised text style transfer using language models as discrimina- tors. In Advances in Neural Information Processing Systems, pages 7287â7298.
Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Confer- ence on Artiï¬cial Intelligence.
Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In Advances in Neural Information Processing Sys- tems, pages 1810â1820.
Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. 2017. Adversarial feature matching for text generation. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 4006â4015. JMLR. org.
Cohesion Coherence Human judges preferred: Human judges preferred: Our Method Neutral Comparison Our Method Neutral Comparison GMLE+RL GMLE+RL 36.25% 26.62% 37.13% GMLE 34.25% 23.63% 42.12% Human GMLE+RL GMLE+RL 39.25% 23.12% 37.63% GMLE 35.63% 21.50% 42.87% Human
Table 6: Results of Human Evaluation showing preferences (%) for our model GMLE+RL(coherence, cohesion) vis-a-vis the baseline GMLE before adjustment for spamming. For simplicity, the 5-point Likert scale has been collapsed to a 3-point scale.
# A Human evaluation un-adjusted scores
Crowd-sourced evaluation can be noisy because there may be human judges who do not take the task seriously, and rather randomly and/or delib- erately choose options that prevent us from draw- ing accurate conclusions. Therefore, we removed crowd-sourced judges who chose GMLE+RL over the Human more than 40% of the time, which threshold value we considered appropriate to iden- tify poor judges (probable spammers). In Table 6, we present the un-adjusted results before account- ing for the poor judges.
# B Sparse end-of-sequence rewards
Sequence-level rewards are available upon a com- pleted generation, so they are sparse signals for the generator. In practice, sparse end-of-sequence rewards entail a noisy training, yet would want the learning generalize to the test data. We ob- served that, for our particular task, most noises were caused by exploration, and the learning gen- eralized to the test data, as conï¬rmed via both hu- man and automatic evaluation results. Thus, re- ward shaping was unnecessary, unlike previous works (Li et al., 2017; Yang et al., 2018) that fur- ther provided signals for partially generated se- quences. | {
"id": "1706.05125"
} |
1811.00075 | The UEA multivariate time series classification archive, 2018 | In 2002, the UCR time series classification archive was first released with
sixteen datasets. It gradually expanded, until 2015 when it increased in size
from 45 datasets to 85 datasets. In October 2018 more datasets were added,
bringing the total to 128. The new archive contains a wide range of problems,
including variable length series, but it still only contains univariate time
series classification problems. One of the motivations for introducing the
archive was to encourage researchers to perform a more rigorous evaluation of
newly proposed time series classification (TSC) algorithms. It has worked: most
recent research into TSC uses all 85 datasets to evaluate algorithmic advances.
Research into multivariate time series classification, where more than one
series are associated with each class label, is in a position where univariate
TSC research was a decade ago. Algorithms are evaluated using very few datasets
and claims of improvement are not based on statistical comparisons. We aim to
address this problem by forming the first iteration of the MTSC archive, to be
hosted at the website www.timeseriesclassification.com. Like the univariate
archive, this formulation was a collaborative effort between researchers at the
University of East Anglia (UEA) and the University of California, Riverside
(UCR). The 2018 vintage consists of 30 datasets with a wide range of cases,
dimensions and series lengths. For this first iteration of the archive we
format all data to be of equal length, include no series with missing data and
provide train/test splits. | http://arxiv.org/pdf/1811.00075 | Anthony Bagnall, Hoang Anh Dau, Jason Lines, Michael Flynn, James Large, Aaron Bostrom, Paul Southam, Eamonn Keogh | cs.LG, stat.ML | null | null | cs.LG | 20181031 | 20181031 | 8 1 0 2
t c O 1 3 ] G L . s c [
1 v 5 7 0 0 0 . 1 1 8 1 : v i X r a
# The UEA multivariate time series classiï¬cation archive, 2018
Anthony Bagnall Michael Flynn Hoang Anh Dau James Large Jason Lines Aaron Bostrom Paul Southam Eamonn Keogh
October 2018
# Introduction
In 2002, the UCR time series classiï¬cation archive was ï¬rst released with six- teen datasets. It gradually expanded, until 2015 when it increased in size from 45 datasets to 85 datasets. In October 2018 more datasets were added, bring- ing the total to 128 [1]. The new archive contains a wide range of problems, including variable length series, but it still only contains univariate time series classiï¬cation problems. One of the motivations for introducing the archive was to encourage researchers to perform a more rigorous evaluation of newly pro- posed time series classiï¬cation (TSC) algorithms. It has worked: most recent research into TSC uses all 85 datasets to evaluate algorithmic advances [2]. Re- search into multivariate time series classiï¬cation, where more than one series are associated with each class label, is in a position where univariate TSC research was a decade ago. Algorithms are evaluated using very few datasets and claims of improvement are not based on statistical comparisons. Recent research has improved somewhat because of the assembly of an archive of 12 datasets by Mustafa Baydogan1. This archive is useful, but it has limitations. The data, summarised in Table 1, are all very small, are not independent and are not rep- resentative of many important multivariate time series classiï¬cation (MTSC) domains.
We aim to address this problem by forming the ï¬rst iteration of the MTSC archive, to be hosted at the website www.timeseriesclassiï¬cation.com. Like the univariate archive, this formulation was a collaborative eï¬ort between re- searchers at the University of East Anglia (UEA) and the University of Califor- nia, Riverside (UCR). The 2018 vintage consists of 30 datasets with a wide range of cases, dimensions and series lengths. For this ï¬rst iteration of the archive we format all data to be of equal length, include no series with missing data and provide train/test splits. Some of these are also in the Baydogan archive, but
1http://www.mustafabaydogan.com/multivariate-time-series-discretization-for- classiï¬cation.html
1
Table 1: Datasets lection of multivariate time series classiï¬cation problems, www.mustafabaydogan.com
Length CV Source AUSLAN Pendigits Japanese Vowels 95 10 9 22 2 12 45-136 8 7-29 1140 300 270 1425 10692 370 10-fold Robot Failure LP1 LP2 LP3 LP4 LP5 4 5 4 3 5 6 6 6 6 6 15 15 15 15 15 38 17 17 42 64 50 30 30 75 100 5-fold UCI ECG Wafer 2 2 2 6 39-152 104-198 100 298 100 896 10-fold Olszewski CMU MOCAP S16 2 62 127-580 29 29 10-fold CMU MOCAP ArabicDigits CharacterTrajectories LIBRAS 10 20 15 13 3 2 4-93 109-205 45 6600 300 180 2200 2558 180 x x x UCI uWaveGestureLibrary 8 3 315 200 4278 x UCR PEMS KickvsPunch WalkvsRun Network Flow DigitsShape Shapes 7 2 2 2 4 3 963 62 62 4 2 2 144 274-841 128-1918 50-997 30-98 52-98 267 16 28 803 24 18 173 10 16 534 16 12 x x x x x x UCI CMU MOCAP Subakan et al.
the majority have never been used in the context of time series classiï¬cation before.
The data characteristics are presented in Table 2. The whole archive is available as a single zip ï¬le 2 (it is over 2GB). The download includes a direc- tory for each problem. In that directory are text ï¬les in Weka multi-instance format. We have also provided ï¬les for each dimension separately, except for the very high dimensional ï¬les where creating thousands of extra ï¬les would massively increase the size of the overall archive. Individual problems can be downloaded from the website and code to split multivariate ARFF is available in the codebase.
Weka multi-instance format works well for MTSC when all the series are the same length. It involves deï¬ning a relational attribute, which can have multiple occurrences, each separated by a new line marker. So, for example a data ï¬le may begin as follows.
@relation input @attribute input relational @attribute t1 numeric @attribute t2 numeric @attribute t3 numeric
2www.timeseriesclassiï¬cation.com/Downloads/MultivariateTSCProblems.zip
2
Dataset ArticularyWordRecognition AtrialFibrillation BasicMotions CharacterTrajectories Cricket DuckDuckGeese EigenWorms Epilepsy EthanolConcentration ERing FaceDetection FingerMovements HandMovementDirection Handwriting Heartbeat JapaneseVowels Libras LSST InsectWingbeat MotorImagery NATOPS PenDigits PEMS-SF Phoneme RacketSports SelfRegulationSCP1 SelfRegulationSCP2 SpokenArabicDigits StandWalkJump UWaveGestureLibrary Train Cases Test Cases Dimensions Length Classes 275 15 40 1422 108 60 128 137 261 30 5890 316 320 150 204 270 180 2459 30000 278 180 7494 267 3315 151 268 200 6599 12 120 300 15 40 1436 72 40 131 138 263 30 3524 100 147 850 205 370 180 2466 20000 100 180 3498 173 3353 152 293 180 2199 15 320 9 2 6 3 6 1345 6 3 3 4 144 28 10 3 61 12 2 6 200 64 24 2 963 11 6 6 7 13 4 3 144 640 100 182 1197 270 17984 206 1751 65 62 50 400 152 405 29 45 36 78 3000 51 8 144 217 30 896 1152 93 2500 315 25 3 4 20 12 5 5 4 4 6 2 2 4 26 2 9 15 14 10 2 6 10 7 39 4 2 2 10 3 8
Table 2: A summary of the 30 datasets in the UEA Multivariate Time Series Classiï¬cation archive, 2018
@attribute t4 numeric @attribute t5 numeric @attribute t6 numeric @attribute t7 numeric @attribute t8 numeric @end input @attribute class {0,1,2,3,4,5,6,7,8,9} @data "47,27,57,26,0,56,100,40
100,81,37,0,23,53,90,98",8
This header deï¬nes that each series is of length 8, and the number of series per case is deï¬ned by the data as two (because there is a single newline). It is a little confusing in code, because each Instance object (i.e. case) contains an Instances object for the relational attribute. For example,
Instances train= //All the instances
3
Instance first=train.instance(0); //Get first instance Instances x= first.relationalValue(0);//Get relational data Instance s1=x.instance(0);//First series Instance s2=x.instance(1);//Second series
Example code to manipulate instances is available in the repository3. We have done the minimum pre-processing possible, and if the dataset donators provided a train/test split, we have retained that. The sources for these data are numer- ous and include: the UCI Machine Learning archive; a series of Brain Computer Interface competitions; Kaggle competitions; and some made by us. We split the problems into groups based on the area of application: Human Activity Recognition (HAR) is the largest group (9 problems); Motion classiï¬cation (4 problems); ECG classiï¬cation (3 problems); EEG/MEG classiï¬cation (6 prob- lems); Audio Spectra Classiï¬cation (5 problems); and others (3 problems).
3https://bitbucket.org/TonyBagnall/time-series-classiï¬cation
4
# 2 Human Activity Recognition
Human Activity Recognition (HAR) is the problem of predicting an activity (the class value) based on accelerometer and/or gyroscope data. The data are either three or six dimensions of co-ordinates. HAR is a very popular research area and it is easy to obtain or generate data from this domain. We have included 9 HAR problems. We could have included many more, but we do not want to formulate an archive of just HAR problems until we have enough data from other domains to balance.
# 2.1 BasicMotions
The data was generated as part of a student project in 2016 where four students performed four activities whilst wearing a smart watch. The watch collects 3D accelerometer and a 3D gyroscope data. It consists of four classes, which are standing, walking, running and playing badminton. Participants were required to record motion a total of ï¬ve times, and the data is sampled at 10 Hz for a ten second period.
AW VQ AEX me ACY me AZ âcirk ââGirYy ââGirZ
AW VQ AEX me ACY me AZ âcirk ââGirYy ââGirZ
Figure 1: First train case for the problem BasicMotion. The class label for this case is Standing.
# 2.2 Cricket
Cricket requires an umpire to signal diï¬erent events in the game to a distant scorer. The signals are communicated with motions of the hands. For example, No-Ball is signaled by touching each shoulder with the opposite hand, and TV- Replay, a request for an oï¬-ï¬eld review of the video of a play, is signaled by miming the outline of a TV screen.
5
(2005) [3] consists of four umpires performing twelve signals, each with ten repetitions. The data, recorded at a frequency of 184 Hz,was collected by placing accelerometers on the wrists of the umpires. Each accelerometer has three synchronous measures for three axes (x, y and z). Thus, we have a six-dimensional problem from the two accelerometers. Cricket was ï¬rst formatted for MTSC in [4].
$1». â____ ee âââLeftX lefty ââLeftZ âRightX ââRightY ââRightZ
Cancel_Ball Dead_Ball Four Last_Hour Leg_Bye No_Ball j g h ti. bh a pee $1». â____ Q RIG GT âN OW RE / 64 \ One_Short Penalty Runs Wide Six Short TV Play ee âââLeftX lefty ââLeftZ âRightX ââRightY ââRightZ
Cancel_Ball Dead_Ball Four Last_Hour Leg_Bye No_Ball j g h ti. bh a pee Q RIG GT âN OW RE / 64 \ One_Short Penalty Runs Wide Six Short TV Play
Figure 2: Image of the class labels and the ï¬rst train case for the problem Cricket. The class label for this case is Cancel Ball (1).
# 2.3 Epilepsy
The data, presented in [5], was generated with healthy participants simulating the class activities. Data was collected from 6 participants using a tri-axial accelerometer on the dominant wrist whilst conducting 4 diï¬erent activities. The four tasks, each of diï¬erent length, are: WALKING includes diï¬erent paces and gestures: walking slowing while gesturing, walking slowly, walking normal and walking fast, each of 30 seconds long; RUNNING includes running a 40 meters long corridor; SAWING with a saw and during 30 seconds; and SEIZURE MIMICKING whilst seated, with 5-6 sec before and 30 sec after the mimicked seizure. The seizure was 30 sec long. Each participant performs each activity 10 times at least. The mimicked seizures were trained and controlled, following a protocol deï¬ned by an medical expert. All the activities were carried out indoors, either inside an oï¬ce or in the corridor around it.
The sampling frequency was 16 Hz. Some activities lasted about 30 seconds, others are 1 minute long, others are about 2 minutes. Our standard practice for the archive is to truncate data to the length of the shortest series retained. We removed preï¬x and suï¬x ï¬at series and truncated to the shortest series (approximately 13 seconds), taking a random interval of activity for series longer than the minimum. A single case from the original (ID002 Running 16) was
6
removed because the data was not collected correctly. After tidying the data we have a total of 275 cases. The train test split is divided into three participants for training, three for testing, with the IDs removed for consistency with the rest of the archive.
AANA NAN MY Vy Uw i An Av Jw el AN \w J ANS va VV \ AYVI\y hy BJ Amy Me aia | iw, wine | A ere, I nl eee ao eA /\ | Avaya ! VAAN CAVAVAUAUAVAVAY Ghd Aaa ZAUACNLLS PAI YI AYN ⢠rind fee Pe poy WOW ww âK YX
AANA NAN MY Vy Uw i An Av Jw AN \w J ANS va VV \ AYVI\y hy BJ Amy Me aia | iw, wine | A ere, I eA /\ | Avaya ! VAAN CAVAVAUAUAVAVAY Ghd Aaa ZAUACNLLS PAI YI AYN ⢠rind fee Pe poy WOW ww
el nl eee ao âK YX
Figure 3: Example of an Epilepsy EEG and the ï¬rst train case for the HAR problem Epilepsy. The class label for this case is Epilepsy.
# 2.4 ERing
This data is generated with a prototype ï¬nger ring, called eRing [6], that can be used to detect hand and ï¬nger gestures. eRing uses electric ï¬eld sensing. The dataset we used to form the archive set is the D dataset used for Finger Posture Recognition. There are six classes for six postures involving the thumb, the index ï¬nger, and the middle ï¬nger. The data is four dimensional. Each series contains 65 observations. Each series is a measurement from an electrode which varies dependent on the distance to the hand.
7
(1) Hand open (2) Fist (3) Two ee { i f (ee a @) Pointing (5) Ring (© Grasp aoe FO Figure 4. Finger postures used in experiment 2.
(1) Hand open (2) Fist (3) Two { i f @) Pointing (5) Ring (© Grasp Figure 4. Finger postures used in experiment 2.
ee (ee a aoe FO
Figure 4: Image of the E-Ring and the ï¬rst train case for the HAR problem ERing. The class label for this case is Fist (2).
# 2.5 Handwriting
A dataset of motion taken from a smart watch whilst the subject writes the 26 letters of the alphabet created at UCR and reported in [4]. There are 150 train cases and 850 test cases. The three dimensions are the three accelerometer values. The data has been padded by those who donated it (see Figure 5).
Slew toe fie nuds 7 fp ve Ain ae ps, aw Show feat 0 A a spore Ian tle ee kh, fle oe pnmiten) «Seep ee f A flee sided, we hgio ooo PGME GPE agp" Dy fbn 4 bi fachay 6 a> he Ben Mee Sreclan ter , GR & Come Z Ae face Jak f por a2 0 enim AF. ben f gar Led? Lv a, Gare, he, furpac b haf & tasted ak [rid bud Kavos Aaamnslecl Aen feud, 4. 14h. ff a. Ys paola, eA ve Las heer tats fax, Boule Cxmscsvel ad Pricer tive Jbl oS MO Retin ned Bade i
Slew toe fie nuds 7 fp ve Ain ae ps, aw Show feat 0 A a spore Ian tle ee kh, fle oe pnmiten) «Seep ee f A flee sided, we hgio ooo PGME GPE agp" Dy fbn 4 bi fachay 6 a> he Ben Mee Sreclan ter , GR & Come Z Ae face Jak f por a2 0 enim AF. ben f gar Led? Lv a, Gare, he, furpac b haf & tasted ak [rid bud Kavos Aaamnslecl Aen feud, 4. 14h. ff a. Ys paola, eA ve Las heer tats fax, Boule Cxmscsvel ad Pricer tive Jbl oS MO Retin ned Bade
Figure 5: The ï¬rst train case for the HAR problem Handwriting. The class label for this case is U (21).
8
# 2.6 Libras
The LIBRAS Movement Database is part of the UCI archive and was used in [7]. LIBRAS, acronym of the Portuguese name âLingua BRAsileira de Sinaisâ, is the oï¬cial brazilian sign language. The dataset contains 15 classes of 24 instances each, where each class references to a hand movement type in LIBRAS. The hand movement is represented as a bi-dimensional curve performed by the hand in a period of time. The curves were obtained from videos of hand movements, with the Libras performance from 4 diï¬erent people, during 2 sessions. Each video corresponds to only one hand movement and has about 7 seconds.
In the video pre-processing, a time normalization is carried out selecting 45 frames from each video, in according to an uniform distribution. In each frame, the centroid pixels of the segmented objects (the hand) are found, which compose the discrete version of the curve F with 45 points. All curves are normalized in the unitary space. In order to prepare these movements to be analysed by algorithms, we have carried out a mapping operation, that is, each curve F is mapped in a representation with 90 features, with representing the coordinates of movement.
Each instance represents 45 points on a bi-dimensional space, which can be plotted in an ordered way (from 1 through 45 as the X co-ordinate) in order to draw the path of the movement.
Swing (curved, horizontal and vertical)
Swing (curved, horizontal and vertical)
Figure 6: Example of the ï¬rst train case for the HAR problem Libras. The class label for this case is 1.
# 2.7 NATOPS
This data was originally part of a competition for the AALTD workshop in 2016 4 and is described in [8]. The problem is to automatically detect the
# 4https://aaltd16.irisa.fr/challenge/
9
motion of various Naval Air Training and Operating Procedures Standardization motions used to control plane movements.
Tabet fom the NATOPS database; figures have been extracted for colours) haw W Allclear 4, Sp 45: Fold wings âHandTipletX ââHandTipleny âHandTipleftZ âââHandTipRightx âHandTipRightâY ââHandTipRightZ
Tabet fom the NATOPS database; figures have been extracted for colours) haw W Allclear 4, Sp 45: Fold wings âSource: Song eal. 2011)
âHandTipletX ââHandTipleny âHandTipleftZ âââHandTipRightx âHandTipRightâY ââHandTipRightZ
Figure 7: Examples of the six classes and six series in the ï¬rst train case for the HAR problem NATOPS. The class label for this case is Spread Wings (4).
The data is generated by sensors on the hands, elbows, wrists and thumbs. The data are the x, y, z coordinates for each of the eight locations, meaning there are 24 dimensions. The six classes are separate actions: I have command; All clear; Not clear; Spread wings; Fold wings; and Lock wings.
# 2.8 RacketSports
The data was created by university students playing badminton or squash whilst wearing a smart watch (Sony Smart watch 3). The watch relayed the x, y, z coordinates for both the gyroscope and accelerometer to an android phone (One Plus 5). The problem is to identify which sport and which stroke the players are making. The data was collected at a rate of 10 HZ over 3 seconds whilst the player played either a forehand/backhand in squash or a clear/smash in badminton. The data was collected as part of an undergraduate project by Phillip Perks in 2017/18.
10
# 2.9 UWaveGestureLibrary
A set of eight simple gestures generated from accelerometers. The data consists of the x, y, z coordinates of each motion. Each series is 315 long. The data was ï¬rst described in [9].
ACK a ACCY ee ACCZ â_âGiX ââGirY ââGirZ
ACK a ACCY ee ACCZ â_âGiX ââGirY ââGirZ
Figure 8: Example of the ï¬rst train case for the HAR problem RacketSports. The class label for this case is Badminton Smash.
Gesture vocabulary adopted from [KKM+06]. The the start and the arrow the end
Figure 3: Gesture vocabulary adopted from [KKM+06]. The dot denotes the start and the arrow the end
Figure 9: Example of the eight classes and the ï¬rst train case for the HAR problem UWaveGestureLibrary. The class label for this case is 1.
11
# 3 Motion Classiï¬cation
We diï¬erentiate HAR data, which characterised by motion recorded by ac- celerometer and/or gyroscopes, with data recording other forms of movement.
# 3.1 ArticularyWordRecognition
An Electromagnetic Articulograph (EMA) is an apparatus used to measure the movement of the tongue and lips during speech. The motion tracking using EMA is registered by attaching small sensors on the surface of the articulators (e.g., tongue and lips). The spatial accuracy of motion tracking using EMA AG500 is 0.5 mm. This is the EMA dataset used in [10]] which contains data collected from multiple native English native speakers producing 25 words. Twelve sen- sors were used in data collection, each providing x, y and z time-series positions with a sampling rate of 200 Hz. The sensors are located on the forehead, tongue; from tip to back in the midline, lips and jaw. The three head sensors (Head Center, Head Right, and Head Left) attached on a pair of glasses were used to calculate head-independent movement of other sensors. Tongue sensors were named T1, T2, T3, and T4, from tip to back. Of the total of 36 available di- mensions, this dataset includes just 9, since that was the format of the data obtained from the Shokoohi-Yekta et al. [4].
Figure 10: Example of the ï¬rst train case for the Motion problem Articulary- WordRecognition. The class label for this case is 1.0.
# 3.2 CharacterTrajectories
The data were taken from the UCI dataset, provided by Ben Williams, School of Informatics, University of Edinburgh. The data consists of 2858 character samples, captured using a WACOM tablet. Three dimensions were kept - x, y,
12
and pen tip force. The data has been numerically diï¬erentiated and Gaussian smoothed, with a sigma value of 2. Data was captured at 200Hz. The data was normalised. Only characters with a single âPEN-DOWNâ segment were considered. Character segmentation was performed using a pen tip force cut-oï¬ point. The characters have also been shifted so that their velocity proï¬les best match the mean of the set. The characters here were used for a PhD study on primitive extraction using HMM based models [11].
Each instance is a 3-dimensional pen tip velocity trajectory. The original data has diï¬erent length cases. The class label is one of 20 characters: a; b; c; d; e; g; h; l; m; n; o; p; q; r; s; u; v; w; y; z. To conform with the repository, we have truncated all series to the length of the shortest, which is 182, which will no doubt make classiï¬cation harder.
Figure 11: Example of the ï¬rst train case for the Motion problem Character- Trajectories. The class label for this case is g.
# 3.3 EigenWorms
Caenorhabditis elegans is a roundworm commonly used as a model organism in the study of genetics. The movement of these worms is known to be a use- ful indicator for understanding behavioural genetics. Brown et al. [12] describe a system for recording the motion of worms on an agar plate and measuring a range of human-deï¬ned features [13]. It has been shown that the space of shapes Caenorhabditis elegans adopts on an agar plate can be represented by combinations of six base shapes, or eigenworms. Once the worm outline is ex- tracted, each frame of worm motion can be captured by six scalars representing the amplitudes along each dimension when the shape is projected onto the six eigenworms. Using data collected for the work described in [13], we address the problem of classifying individual worms as wild-type or mutant based on the time series. The data were extracted from the C. elegans behavioural database5.
5http://movement.openworm.org/
13
We have 259 cases, which we split into 131 train and 128 test cases. We have truncated each series to the shortest series, after which each series has 17984 ob- servations. Each worm is classiï¬ed as either wild-type (the N2 reference strain) or one of four mutant types: goa-1; unc-1; unc-38 and unc-63.
A Bw o2/1 - 2 | ae AS <o2 2 o2|3 4 LNA a a_i dh _* 0 20 4 Oo 2 Segment Number Sg yy reyrn ° ° ° Projected amplitude ° Time (seconds)
A Bw o2/1 - 2 | ae AS <o2 2 o2|3 4 LNA a a_i dh _* 0 20 4 Oo 2 Segment Number Sg yy reyrn ° ° ° Projected amplitude ° Time (seconds)
Figure 12: Example of the ï¬rst train case for the Motion problem EigenWorms. The class label for this case is wild-type (1).
# 3.4 PenDigits
This is a handwritten digit classiï¬cation task, taken from the UCI Archive 6 and originally described in [14]. 44 writers were asked to draw the digits 0 to 9, where instances are made up of the x and y coordinates of the pen-tip traced across a digital screen.
The coordinate data were originally recorded at a 500x500 pixel resolution. It was then normalised and sampled to 100x100. Then, based on expert knowledge from the original dataset creators, the data was spatially resampled such that data are sampled with a constant spatial step and variable time step. The data was resampled to 8 spatial points, resulting in each instance having 2 dimensions of 8 points, with a single class label (0. . . 9) being the digit drawn.
6https://archive.ics.uci.edu/ml/datasets/Pen-Based+Recognition+of+Handwritten+Digits
14
500 500 500) 0 300 °o 300 âo 500 Normalized Data 100) â 10 100) 0 700 °o 700
500 500 500) 0 300 °o 300 âo 500 Normalized Data 100) â 10 100) 0 700 °o 700
Figure 13: Example of the ï¬rst train case for the Motion problem PenDigits. The class label for this case is 8.
# 4 ECG Classiï¬cation
ECG Classiï¬cation is an obvious application for MTSC. However, we found it surprisingly diï¬cult to ï¬nd many problems in this domain. The Physionet data often requires bespoke software to process and is not always an obvious classiï¬cation problem. We hope to get more data in this domain in the future.
# 4.1 AtrialFibrillation
This dataset of two-channel ECG recordings has been created from data used in the Computers in Cardiology Challenge 2004 7, an open competition with the goal of developing automated methods for predicting spontaneous termi- nation of atrial ï¬brillation (AF). The raw instances were 5 second segments of atrial ï¬brillation, containing two ECG signals, each sampled at 128 samples per second. The multivariate data organises these channels such that each is one dimension. The class labels are: n, s and t. Class n is described as a non termi- nation atrial ï¬brillation (that is, it did not terminate for at least one hour after the original recording of the data). class s is described as an atrial ï¬brillation that self terminates at least one minuet after the recording process. Class t is described as terminating immediately, that is within one second of the recording ending. More details are in [15].
7https://www.physionet.org/physiobank/database/aftdb/
15
le ai caan ances ahead âleft ââRight
le ai caan ances ahead AOE ima ape : i \ ; ; F ad a AMO oa es ental Ae âleft ââRight
AOE ima ape : i \ ; ; F ad a AMO oa es ental Ae
Figure 14: The ï¬rst train case for the ECG problem AtrialFibrillation. The class label for this case is ânâ.
# 4.2 StandWalkJump
This Physionet dataset 8 was presented in [16]. Short duration ECG signals were recorded from a healthy 25-year-old male performing diï¬erent physical activities to study the eï¬ect of motion artifacts on ECG signals and their sparsity. The raw data was sampled at 500 Hz, with a resolution of 16 bits before an analogue gain of 100 and ADC was applied. A spectrogram of each instance was then created with a window size of 0.061 seconds and an overlap of 70%. Each instance in this multivariate dataset is arranged such that each dimension is a frequency band from the spectrogram. There are three classes, standing, walking and jumping, each consists of 9 instances.
8https://www.physionet.org/physiobank/database/macecgdb/
16
| Fee ee eee oe
Patch location with Patch location wath Default patch location 50 degree offset 45 degreccottset [Odegree otter) | Fee ee eee oe
Patch location with Patch location wath Default patch location 50 degree offset 45 degreccottset [Odegree otter)
Figure 15: The ï¬rst train case for the ECG problem StandWalkJump. The class label for this case is standing.
# 5 EEG/MEG Classiï¬cation
Our second largest group of problems, EEG and MEG classiï¬cation has a wide range of applications in medicine, psychology and human computer interaction. The majority of our data were derived from the Brain Computer Interface com- petitions9.
# 5.1 FingerMovements
This dataset was provided by Fraunhofer-FIRST, Intelligent Data Analysis Group (Klaus-Robert Mller), and Freie Universitt Berlin, Department of Neu- rology, Neurophysics Group (Gabriel Curio)10 and is described in [17].
This dataset was recorded from a normal subject during a no-feedback ses- sion. The subject sat in a normal chair, relaxed arms resting on the table, ï¬ngers in the standard typing position at the computer keyboard. The task was to press with the index and little ï¬ngers the corresponding keys in a self-chosen order and time, i.e. using self-paced key typing. The experiment consisted of 3 sessions of 6 minutes each. All sessions were conducted on the same day with some minutes break in between. Typing was done at an average speed of 1 key per second.
There are 316 train cases and 100 test cases. Each case is a recording of 28 EEG channels of 500 ms length each ending 130 ms before a key-press. This is downsampled at 100 Hz (as recommended) so each channel consists of 50 observations. Channels are in the following order: (F3, F1, Fz, F2, F4, FC5,
# 9http://bbci.de/competition 10http://www.bbci.de/competition/ii/berlin desc.html
17
FC3, FC1, FCz, FC2, FC4, FC6, C5, C3, C1, Cz, C2, C4, C6, CP5, CP3, CP1, CPz, CP2, CP4, CP6, O1, O2).
The recording was made using a NeuroScan ampliï¬er and a Ag/AgCl elec- trode cap from ECI. 28 EEG channels were measured at positions of the in- ternational 10/20-system (F, FC, C, and CP rows and O1, O2). Signals were recorded at 1000 Hz with a band-pass ï¬lter between 0.05 and 200 Hz.
F3 meee] 2 2 a FS
F3 meee] 2 2 a FS
Figure 16: Example of the ï¬rst train case for the EEG problem Finger Move- ments. The class label for this case is âleftâ.
# 5.2 MotorImagery
This is Dataset 1 in BCI III 11 and is reported in [18], provided by University of Tbingen, Germany, Dept. of Computer Engineering (Prof. Rosenstiel) and In- stitute of Medical Psychology and Behavioral Neurobiology (Niels Birbaumer), and Max-Planck-Institute for Biological Cybernetics, Tbingen, Germany (Bern- hard Schlkopf), and Universitt Bonn, Germany, Dept. of Epileptology (Prof. Elger). During the BCI experiment, a subject had to perform imagined move- ments of either the left small ï¬nger or the tongue. The time series of the electri- cal brain activity was picked up during these trials using a 8x8 ECoG platinum electrode grid which was placed on the contralateral (right) motor cortex. The grid was assumed to cover the right motor cortex completely, but due to its size (approx. 8x8cm), it partly covered also surrounding cortex areas. All record- ings were performed with a sampling rate of 1000Hz. After ampliï¬cation the recorded potentials were stored as microvolt values. Every trial consisted of either an imagined tongue or an imagined ï¬nger movement and was recorded for 3 seconds duration. To avoid visually evoked potentials being reï¬ected by the data, the recording intervals started 0.5 seconds after the visual cue had
# 11http://bbci.de/competition/iii/desc I.html
18
ended. The EEG data has 64 dimensions, each of which is 3000 long (3 seconds measurement). The train data has 278 cases, the test data 100. The class labels are ï¬nger or tongue (the imagined movements). The best submitted solution obtained 91% accuracy on the test data.
~ "ap %
~ "ap %
Figure 17: Example of the ï¬rst train case for the EEG problem Finger Move- ments. The class label for this case is âï¬ngerâ.
# 5.3 SelfRegulationSCP1
This dataset is Ia in BCI II 12 reported in [19]: Self-regulation of Slow Cortical Potentials. It was provided by University of Tuebingen. The data were taken from a healthy subject. The subject was asked to move a cursor up and down on a computer screen, while his cortical potentials were taken. During the recording, the subject received visual feedback of his slow cortical potentials (Cz-Mastoids). Cortical positivity leads to a downward movement of the cursor on the screen. Cortical negativity leads to an upward movement of the cursor. Each trial lasted 6s.
During every trial, the task was visually presented by a highlighted goal at either the top or bottom of the screen to indicate negativity or positivity from second 0.5 until the end of the trial. The visual feedback was presented from second 2 to second 5.5. Only this 3.5 second interval of every trial is provided for training and testing. The sampling rate of 256 Hz and the recording length of 3.5s results in 896 samples per channel for every trial.
The train data consists of 268 trials recorded on two diï¬erent days and mixed randomly. 168 of the overall 268 trials origin from day 1, the remaining 100 trials from day 2. The data is derived from the two train ï¬les Traindata 0.txt and Traindata 1.txt. Each instance has six dimensions (EEG channels above) of
12http://bbci.de/competition/ii/tuebingen desc i.html
19
length 896. Class labels are negativity or positivity. There are 293 test data, the labels of which were released after the competition. The best approach has an error rate of 11.3% on the test data (presumably 33 incorrect).
âery oirversan. ! min y/ rer, a nay eS y/ Nanya AA ay
Subject B âery oirversan. ! Initial Free taining Advanced training and copy speling_speting min y/ rer, a Ea gm 3 nay eS y/ 8 3m Nanya AA ay % 00 140 180 220 260 300 340 Sessions
Subject B Initial Free taining Advanced training and copy speling_speting Ea gm 3 8 3m % 00 140 180 220 260 300 340 Sessions
Figure 18: Example of the ï¬rst train case for the EEG problem SelfRegulation- SCP1. The class label for this case is ânegativityâ.
# 5.4 SelfRegulationSCP2
Dataset Ib in BCI II reported in [19]: Self-regulation of Slow Cortical Potentials. The datasets were taken from an artiï¬cially respirated ALS patient. The subject was asked to move a cursor up and down on a computer screen, while his cortical potentials were taken. During the recording, the subject received au- ditory and visual feedback of his slow cortical potentials (Cz-Mastoids). Cortical positivity lead to a downward movement of the cursor on the screen. Cortical negativity lead to an upward movement of the cursor. Each trial lasted 8s. During every trial, the task was visually and auditorily presented by a high- lighted goal at the top (for negativity) or bottom (for positivity) of the screen from second 0.5 until second 7.5 of every trial. In addition, the task (âupâ or âdownâ) was vocalised at second 0.5. The visual feedback was presented from second 2 to second 6.5. Only this 4.5 second interval of every trial is provided for training and testing. The sampling rate of 256 Hz and the recording length of 4.5s results in 1152 samples per channel for every trial.
The train data contains 200 trials, 100 of each class which were recorded on the same day and permuted randomly. There are 7 dimensions and the series are length 1152.
Test data contains 180 trials of test data. This test data was recorded after the train data (during the same day) day. The 180 trials belong to either class 0 or class 1.
Note that it is not clear if there is any information contained in this dataset that is useful for the classiï¬cation task. A view on the result suggests that it is
20
not. The best has error 45.5%.
Anh inaa) INIA ning
UERER HERR SRBALIVER. Anh inaa) HOFFENTLICH ROMMEN-SE- MIC BESLCHEN INENN DESER- DEAS HERZLICH DENN: = KUSLERIST.ENE UNETLE oes HE INL ADEN. EINE GELEGENPEST-INDET SICH HOFFENTLIC BALD. MiT-BESTEN-GROESEN. IHE-HANS-PETER-SALZMANN, Figure 2 The first full message written by subject A INIA ning
UERER HERR SRBALIVER. HOFFENTLICH ROMMEN-SE- MIC BESLCHEN INENN DESER- DEAS HERZLICH DENN: = KUSLERIST.ENE UNETLE oes HE INL ADEN. EINE GELEGENPEST-INDET SICH HOFFENTLIC BALD. MiT-BESTEN-GROESEN. IHE-HANS-PETER-SALZMANN, Figure 2 The first full message written by subject A
Figure 19: Example of the ï¬rst train case for the EEG problem SelfRegulation- SCP2. The class label for this case is ânegativityâ.
# 5.5 FaceDetection
This data is from the train set of a Kaggle competition13. It consists of MEG recordings and the class labels (Face/Scramble), from 10 subjects (subject01 to subject10), test data from 6 subjects (subject11 to 16). For each subject approximately 580-590 trials are available. Each trial consists of 1.5 seconds of MEG recording (starting 0.5sec before the stimulus starts) and the related class label, Face (class 1) or Scramble (class 0). The data were down-sampled to 250Hz and high-pass ï¬ltered at 1Hz. 306 timeseries were recorded, one for each of the 306 channels, for each trial. All the pre-processing steps were carried out with mne-python. The trials of each subject are arranged into a 3D data matrix (trial x channel x time) of size 580 x 306 x 375.
13https://www.kaggle.com/c/decoding-the-human-brain/data
21
Figure 20: Example of the ï¬rst train case for the EEG problem FaceDetection. The class label for this case is 0.
# 5.6 HandMovementDirection
This is the third dataset from the BCI IV competition14. It was provided by the Brain Machine Interfacing Initiative, Albert-Ludwigs-University Freiburg, the Bernstein Center for Computational Neuroscience Freiburg and the Insti- tute of Medical Psychology and Behavioral Neurobiology, University of Tbingen (Stephan Waldert, Carsten Mehring, HubertPreissl, Christoph Braun).
Two subjects were recorded moving a joystick with only their hand and wrist in one of four directions (right, up, down, left) of their choice after hearing a prompt. The task is to classify the direction of movement from the Magnetoen- cephalography (MEG) data recorded during the activity. Each instance contains data from 0.4s before to 0.6s after the movement for 10 channels of the MEG reading that are located over the motor areas. Further information about the data collection process can be found at15.
The train/test split given in this archive corresponds to the exact split pro- vided in the original competition, with the trails for the two subjects merged.
# 14http://bbci.de/competition/iv/ 15http://bbci.de/competition/iv/desc 3.pdf
22
WA i neal WA a alee aN vnapoo
WA i neal a WA a alee aN vnapoo
a
Figure 21: Example of the ï¬rst train case for the EEG problem HandMove- mentDirection. The class label for this case is â1 - rightâ.
# 6 Audio Spectra Classiï¬cation
Classiï¬cation of audio signals is a univariate time series classiï¬cation problem. However, it is common in this ï¬eld to run a sliding (or striding) window over the signal and extract the spectra for each window. Each frequency bin then forms a series over the number of windows.
# 6.1 DuckDuckGeese
This dataset was derived from recordings found on the Xeno Canto website16. Each recording was taken from either the A or B quality category. Due to the variation in recorded sample rate all recordings were downsampled to 44100Hz using the MATLAB resample function. Each recording was then center trun- cated to 5 seconds (length of smallest recording), before being transformed into a spectogram using a window size of 0.061 and an overlap value of 70%. The classes are as follows: Black-bellied Whistling Duck (20 instances); Canadian Goose (20 instances); Greylag Goose (20 instances); Pink Footed Goose (20 instances); and White-faced Whistling Duck (20 instances).
16www.xenocanto.com
23
Figure 22: Example of the ï¬rst train case for the problem DuckDuckGeese. The class label for this case is Black-bellied Whistling Duck.
# 6.2 Heartbeat
This dataset is derived from the PhysioNet/CinC Challenge 201617. Heart sound recordings were sourced from several contributors around the world, collected at either a clinical or nonclinical environment, from both healthy subjects and pathological patients. The heart sound recordings were collected from diï¬er- ent locations on the body. The typical four locations are aortic area, pulmonic area, tricuspid area and mitral area, but could be one of nine diï¬erent locations. The sounds were divided into two classes: normal and abnormal. The normal recordings were from healthy subjects and the abnormal ones were from pa- tients with a conï¬rmed cardiac diagnosis. The patients suï¬er from a variety of illnesses, but typically they are heart valve defects and coronary artery disease patients. Heart valve defects include mitral valve prolapse, mitral regurgitation, aortic stenosis and valvular surgery. All the recordings from the patients were generally labeled as abnormal. Both healthy subjects and pathological patients include both children and adults.
Each recording was truncated to 5 seconds. A spectrogram of each instance was then created with a window size of 0.061 seconds and an overlap of 70%. Each instance in this multivariate dataset is arranged such that each dimension is a frequency band from the spectrogram. The two classes normal and abnormal consist of 113 and 296 respectively.
17https://www.physionet.org/physiobank/database/challenge/2016/
24
Abadi rath ehaaalan Printed tatinlantvrtaatabnass hie
Abadi rath ehaaalan Printed tatinlantvrtaatabnass hie
Figure 23: Example of the ï¬rst train case for the spectral problem Heartbeat.
# 6.3 InsectWingbeat
The InsectWingbeat data was generated by the UCR computational entomol- ogy group and used in the paper Flying Insect Classiï¬cation with Inexpensive Sensors [20]. The original data is a reconstruction of the sound of insects passing through a sensor. The data in the archive is the power spectrum of the sound. A spectorgram of each 1 second sound segment was created with a window length of 0.061 seconds and an overlap of 70%. Each instance in this multivari- ate dataset is arranged such that each dimension is a frequency band from the spectrogram. Each of the 10 classes in this dataset consist of 5,000 instances. The 10 classes are male and female mosquitos (Ae. aegypti, Cx. tarsalis, Cx. quinquefasciants, Cx. stigmatosoma), two types of ï¬ies (Musca domestica and Drosophila simulans) and other insects.
# 6.4 Phoneme
This dataset is a multivariate representation of a subset of the data used in the paper [21]. Each series was extracted from the segmented audio collected from Google Translate. Audio ï¬les collected from Google translate are recorded at 22050 HZ. The speakers are male and female. After data collection, they segment waveforms of the words to generate phonemes using the Forced Aligner tool from the Penn Phonetics Laboratory. A Spectrogram of each instance was then created with a window size of 0.001 seconds and an overlap of 90%. Each instance in this multivariate dataset is arranged such that each dimension is a frequency band from the spectrogram. The data consists of 39 classes each with 170 instances.
25
â
= - Vy roe ea Z = L : ye a ae a â PN IK an
= - Vy roe ea Z = L : ye a ae a PN IK an
Figure 24: Example of the ï¬rst train case for the spectral problem InsectWing- beat.
cliched einige. (Ku re D) Hierarchy er orwell th. gasser appreciable a (ae(ser ) (AH PRIYBHJAH BAHL) (AH NAH TAE(CHIT)
cliched einige. (Ku re D) Hierarchy er orwell th. gasser appreciable a (ae(ser ) (AH PRIYBHJAH BAHL) (AH NAH TAE(CHIT)
Figure 25: Example of the ï¬rst train case for the Audio problem Phoneme.
# 6.5 SpokenArabicDigits
This dataset is taken from the UCI repository. It is derived from sound. 8800 (10 digits x 10 repetitions x 88 speakers) samples were taken from 44 males and 44 females Arabic native speakers between the ages 18 and 40 to represent ten spoken Arabic digits. The 13 Mel Frequency Cepstral Coeï¬cients (MFCCs)
26
were computed with the following conditions: Sampling rate 11025 Hz: 16 bits Hamming window; and ï¬lter pre-emphasized, 1 â 0.97Z ( â 1) [22].
# 6.6 JapaneseVowels
This dataset was taken from the UCI Archive 18, originally reported in [23].
a
a
Figure 26: Example of the ï¬rst train case for the Audio problem JapaneseVowels.
Nine Japanese-male speakers were recorded saying the vowels âaâ and âeâ. A â12-degree linear prediction analysisâ is applied to the raw recordings to obtain time-series with 12 dimensions, a originally of lengths between 7 and 29. In this dataset, instances have been padded to the longest length; 29. The classiï¬ca- tion task is to predict the speaker. Therefore, each instance is a transformed utterance, 12*29 values with a single class label attached, 1. . . 9.
The given training set is comprised of 30 utterances for each speaker, however the test set has a varied distribution based on external factors of timing and experimental availability, between 24 and 88 instances per speaker.
18https://archive.ics.uci.edu/ml/datasets/Japanese+Vowels
27
# 7 Other Problems
7.1 EthanolConcentration
Figure 27: Example of the ï¬rst train case for the EEG problem EthanolCon- centration.
EthanolConcentration is a dataset of raw spectra of water-and-ethanol solu- tions in 44 distinct, real whisky bottles [24]. The concentrations of ethanol are 35%, 38%, 40%, and 45%. The minimum legal alcohol limit for Scotch Whisky is 40%, and many whiskies do maintain this alcohol concentration. Producers are required to ensure that the contents of their spirits contain alcohol con- centrations that are tightly bound to what is reported on the labelling. The classiï¬cation problem is to determine the alcohol concentration of a sample contained within an arbitrary bottle.
The data has been arranged such that each instance is made up of three repeat readings of the same bottle and batch of solution. Three solutions of each concentration (batches) were produced, and each bottle+batch combina- tion measured three times. Each reading is comprised of the bottle being picked up, placed between the light source and spectroscope, and spectra saved. The spectra are recorded over the maximum wavelength range of the single Stellar- Net BLACKComet-SR spectrometer used (226nm to 1101.5nm with a sampling frequency of 0.5nm), over a one second integration time. Except for avoiding labelling, embossing, and seams on the bottle, no special attempts were made to obtain the cleanest reading for each individual bottle, nor to precisely replicate the exact path through the bottle for each repeat reading. This is to replicate potential future conditions of an operative performing mass-screening of a batch of suspect spirits.
Some bottles introduce more noise and structural defects to the spectra than others, based on their shape, colour, glass thickness and angle, and the ability
28
to avoid the obstacles that may get in the way of a reading (labels, seams, etc). And so therefore the problem is to identify the alcohol concentration of the contents regardless of the properties of the containing bottle. 28 of the bottles are âstandardâ, that is, cylindrical with a roughly equal diameter, clear glass, with a clear path for the light to travel through. The remaining 16 bottles are either non-uniformly shaped, green glass, or light paths are diï¬cult to ï¬nd.
As well as the full dataset and an example 50/50 train test split, predeï¬ned folds in a âleave one bottle outâ format are given. All examples of a single bottle are reserved for the test set, meaning that the classiï¬er cannot leverage the exact properties of the bottle of a new test sample already found in the training set.
For the problemâs properties as a multivariate dataset, the dimensions are necessarily aligned in wavelength, and the relationship between them is moreso to allow for a noise cancelling or corrective aï¬ect, rather than each dimension describing strictly diï¬erent information. Whether repeat readings and some form of multivariate method improves accuracy over classiï¬cation on a single (univariate) reading is of interest. Interval methods are likely to provide beneï¬ts, as the wavelengths range from just into the Ultraviolet (UV) light, through the Visible (VIS) light, and into the Near Infrared (NIR). Diï¬erent intervals carry diï¬erent physical information.
# 7.2 PEMS-SF
This is a UCI dataset from the California Department of Transportation19 re- ported in [25]. It contains 15 months worth of daily data from the California Department of Transportation PEMS website. The data describes the occu- pancy rate, between 0 and 1, of diï¬erent car lanes of San Francisco bay area freeways. The measurements cover the period from Jan. 1st 2008 to Mar. 30th 2009 and are sampled every 10 minutes. Each day in this database is a single time series of dimension 963 (the number of sensors which functioned consis- tently throughout the studied period) and length 6 x 24=144. Public holidays were removed from the dataset, as well as two days with anomalies (March 8th 2009 and March 9th 2008) where all sensors were muted between 2:00 and 3:00 AM. This results in a database of 440 time series.
The task is to classify each observed day as the correct day of the week, from Monday to Sunday, e.g. label it with an integer in 1,2,3,4,5,6,7.
# 7.3 LSST
This dataset is from a 2018 Kaggle competition 20. The Photometric LSST As- tronomical Time Series Classiï¬cation Challenge (PLAsTiCC) is an open data challenge to classify simulated astronomical time-series data in preparation for observations from the Large Synoptic Survey Telescope (LSST), which will achieve ï¬rst light in 2019 and commence its 10-year main survey in 2022. LSST
# 19www.pems.dot.ca.gov 20https://www.kaggle.com/c/PLAsTiCC-2018
29
Di ssisroniation
Di ssisroniation
Figure 28: Example of the ï¬rst train case for the problem PEMS-SF.
will revolutionize our understanding of the changing sky, discovering and mea- suring millions of time-varying objects.
PLAsTiCC is a large data challenge for which participants are asked to clas- sify astronomical time series data. These simulated time series, or light curves are measurements of an objects brightness as a function of time - by measur- ing the photon ï¬ux in six diï¬erent astronomical ï¬lters (commonly referred to as passbands). These passbands include ultra-violet, optical and infrared re- gions of the light spectrum. There are many diï¬erent types of astronomical objects (that are driven by diï¬erent physical processes) that we separate into astronomical classes.
The problem we have formulated represents a snap shot of the data available and is created from the train set published in the aforementioned competition. 36 dimensions were chosen as it represents a value at which most instances
would not be truncated.
30
â â
â â Ee 2,
Ee 2,
Figure 29: Example of the ï¬rst train case for the problem LSST.
# 8 Benchmark Results
Our initial benchmarking is with three standard classiï¬ers for TSC: 1-Nearest Neighbour with distance functions: Euclidean (ED); dimension-independent dy- namic time warping (DTWI ); and dimension-dependent dynamic time warping (DTWD). A summary of the diï¬erences between these two multidimensional DTW variants can be found in [26]. We present results using the raw data, and after normalising each dimension independently. Accuracies are presented in Table 3, and these are summarised in a critical diï¬erence diagram in Figure 30. At the time of release we do not have full results for three datasets: Eigen- Worms; InsectWingbeat and FaceDetection. We will add these when complete. Full results will also be on the website [27].
We can see that a wide range of performances are achieved by the bench- marks. Five of the datasets can be classiï¬ed with one or more of the benchmark classiï¬ers to at least 99% accuracy, and ï¬ve cannot do better than 50%. Trivial or impossible problems may be removed from the archive in future iterations, depending on a wider scale performance evaluation.
These results are our ï¬rst attempt at benchmarking. We will expand these results over the ensuing months. We will also conduct resample and/or cross validation experiments.
31
Un-normalised Normalised 0.98 0.267 1 0.969 0.986 0.55 0.603 0.978 0.133 0.304 0.513 0.52 0.306 0.509 0.659 DTWI DTWD 0.987 0.22 0.975 0.989 1 0.6 0.618 0.964 0.133 0.323 0.529 0.53 0.231 0.286 0.717 0.98 0.267 1 0.969 0.986 0.55 0.978 0.133 0.304 0.52 0.306 0.316 0.658 0.959 0.894 0.575
Table 3: Benchmark classiï¬cation results (in terms of accuracy) for the original and normalised versions of each dataset in the new archive. The (potentially tied) best accuracy achieved for a dataset is in bold.
32
ED, y 4.7115 2.6154 DTW, ED, y 4.2885 2.7692 DTW, , DTW, , 3.3654 3.25 DTWoy
Figure 30: Critical Diï¬erence diagram of the benchmark classiï¬ers across the datasets of the new archive. The subscripts âUâ and âNâ refer to the un- normalised and normalised versions of the classiï¬ers, respectively. The aver- age accuracy ranking is given alongside the label of each classiï¬er. Classiï¬ers connected by a solid bar are not pairwise signiï¬cantly diï¬erent from each other.
# 9 Conclusions
This is our ï¬rst attempt at a multivariate archive, and it should be considered a work in progress. We hope to release an expanded version in 2019. We would very much welcome any donations of data. If you have evaluated your classiï¬er on this data, your results are reproducable and your work has been peer reviewed, get in touch and we will put your results and algorithm details on the website. If you ï¬nd any errors in the data or the descriptions, please inform us.
# References
[1] H. Dau, A. Bagnall, K. Kamgar, M. Yeh, Y. Zhu, S. Gharghabi, and C. Ratanamahatana, âThe ucr time series archive,â ArXiv e-prints, vol. arXiv:1810.07758, 2018.
[2] A. Bagnall, J. Lines, A. Bostrom, J. Large, and E. Keogh, âThe great time series classiï¬cation bake oï¬: a review and experimental evaluation of recent algorithmic advances,â Data Mining and Knowledge Discovery, vol. 31, no. 3, pp. 606â660, 2017.
[3] M. H. Ko, G. West, S. Venkatesh, and M. Kumar, âOnline context recog- nition in multisensor systems using dynamic time warping,â in Intelligent Sensors, Sensor Networks and Information Processing Conference, 2005. Proceedings of the 2005 International Conference on, pp. 283â288, IEEE, 2005.
33
[4] M. Shokoohi-Yekta, B. Hu, H. Jin, J. Wang, and E. Keogh, âGeneralizing DTW to the multi-dimensional case requires an adaptive approach,â Data Mining and Knowledge Discovery, vol. 31, no. 1, pp. 1â31, 2017.
[5] J. R. Villar, P. Vergara, M. Men´endez, E. de la Cal, V. M. Gonz´alez, and J. Sedano, âGeneralized models for the classiï¬cation of abnormal move- ments in daily life and its applicability to epilepsy convulsion recognition,â International journal of neural systems, vol. 26, no. 06, p. 1650037, 2016.
[6] M. Wilhelm, D. Krakowczyk, F. Trollmann, and S. Albayrak, âering: mul- tiple ï¬nger gesture recognition with one ring using an electric ï¬eld,â in Proceedings of the 2nd international Workshop on Sensor-based Activity Recognition and Interaction, p. 7, ACM, 2015.
[7] D. B. Dias and S. M. Peres, âAlgoritmos bio-inspirados aplicados ao re- conhecimento de padroes da libras: enfoque no parËametro movimento,â 16 Ao Simp´osio Internacional de Inicia¸cao Cientıï¬ca da Universidade de Sao Paulo, 2016.
[8] N. Ghouaiel, P.-F. Marteau, and M. Dupont, âContinuous pattern detec- tion and recognition in stream-a benchmark for online gesture recognition,â International Journal of Applied Pattern Recognition, vol. 4, no. 2, 2017.
[9] J. Liu, L. Zhong, J. Wickramasuriya, and V. Vasudevan, âuwave: Accelerometer-based personalized gesture recognition and its applications,â Pervasive and Mobile Computing, vol. 5, no. 6, pp. 657â675, 2009.
[10] J. Wang, A. Balasubramanian, L. M. de La Vega, J. R. Green, A. Samal, and B. Prabhakaran, âWord recognition from continuous articulatory move- ment time-series data using symbolic representations,â in Proceedings of the Fourth Workshop on Speech and Language Processing for Assistive Tech- nologies, pp. 119â127, 2013.
[11] B. Williams, M. Toussaint, and A. J. Storkey, âModelling motion primitives and their timing in biologically executed movements,â in Advances in neural information processing systems, pp. 1609â1616, 2008.
[12] A. E. Brown, E. I. Yemini, L. J. Grundy, T. Jucikas, and W. R. Schafer, âA dictionary of behavioral motifs reveals clusters of genes aï¬ecting caenorhab- ditis elegans locomotion,â Proceedings of the National Academy of Sciences, vol. 110, no. 2, pp. 791â796, 2013.
[13] E. Yemini, T. Jucikas, L. J. Grundy, A. E. Brown, and W. R. Schafer, âA database of caenorhabditis elegans behavioral phenotypes,â Nature meth- ods, vol. 10, no. 9, p. 877, 2013.
[14] F. AlimoËglu and E. Alpaydin, âCombining multiple representations for pen- based handwritten digit recognition,â Turkish Journal of Electrical Engi- neering & Computer Sciences, vol. 9, no. 1, pp. 1â12, 2001.
34
[15] G. Moody, âSpontaneous termination of atrial ï¬brillation: a challenge from physionet and computers in cardiology 2004,â in Computers in Cardiology, 2004, pp. 101â104, IEEE, 2004.
[16] V. Behravan, N. E. Glover, R. Farry, P. Y. Chiang, and M. Shoaib, âRate- adaptive compressed-sensing and sparsity variance of biomedical signals,â in 2015 IEEE 12th International Conference on Wearable and Implantable Body Sensor Networks (BSN), pp. 1â6, June 2015.
[17] B. Blankertz, G. Curio, and K.-R. M¨uller, âClassifying single trial eeg: Towards brain computer interfacing,â in Advances in neural information processing systems, pp. 157â164, 2002.
[18] T. Lal, T. Hinterberger, G. Widman, M. Schr¨oder, N. J. Hill, W. Rosenstiel, C. E. Elger, N. Birbaumer, and B. Sch¨olkopf, âMethods towards invasive human brain computer interfaces,â in Advances in neural information pro- cessing systems, pp. 737â744, 2005.
[19] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. K¨ubler, J. Perelmouter, E. Taub, and H. Flor, âA spelling device for the paralysed,â Nature, vol. 398, no. 6725, p. 297, 1999.
[20] Y. Chen, A. Why, G. Batista, A. Mafra-Neto, and E. Keogh, âFlying insect classiï¬cation with inexpensive sensors,â Journal of insect behavior, vol. 27, no. 5, pp. 657â677, 2014.
[21] H. Hamooni and A. Mueen, âDual-domain hierarchical classiï¬cation of pho- netic time series,â in Proc. IEEE International Conference on Data Mining, 2014.
[22] N. Hammami and M. Sellam, âTree distribution classiï¬er for automatic spoken arabic digit recognition,â in Internet Technology and Secured Trans- actions, 2009. ICITST 2009. International Conference for, pp. 1â4, IEEE, 2009.
[23] M. Kudo, J. Toyama, and M. Shimbo, âMultidimensional curve classiï¬ca- tion using passing-through regions,â Pattern Recognition Letters, vol. 20, no. 11, pp. 1103â1111, 1999.
[24] J. Large, E. K. Kemsley, N. Wellner, I. Goodall, and A. Bagnall, âDetecting forged alcohol non-invasively through vibrational spectroscopy and machine learning,â in Paciï¬c-Asia Conference on Knowledge Discovery and Data Mining, pp. 298â309, Springer, 2018.
[25] M. Cuturi, âFast global alignment kernels,â in Proceedings of the 28th in- ternational conference on machine learning (ICML-11), pp. 929â936, 2011.
[26] M. Shokoohi-Yekta, J. Wang, and E. Keogh, âOn the non-trivial generaliza- tion of dynamic time warping to the multi-dimensional case,â in Proceedings of the 2015 SIAM International Conference on Data Mining, pp. 289â297, SIAM, 2015.
35
[27] A. Bagnall, J. Lines, and E. Keogh, âThe UEA UCR time series classiï¬ca- tion archive.â http://timeseriesclassification.com, 2018.
36 | {
"id": "1810.07758"
} |
1810.12885 | ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension | We present a large-scale dataset, ReCoRD, for machine reading comprehension
requiring commonsense reasoning. Experiments on this dataset demonstrate that
the performance of state-of-the-art MRC systems fall far behind human
performance. ReCoRD represents a challenge for future research to bridge the
gap between human and machine commonsense reading comprehension. ReCoRD is
available at http://nlp.jhu.edu/record. | http://arxiv.org/pdf/1810.12885 | Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, Benjamin Van Durme | cs.CL | 14 pages | null | cs.CL | 20181030 | 20181030 | 8 1 0 2
t c O 0 3 ] L C . s c [
1 v 5 8 8 2 1 . 0 1 8 1 : v i X r a
# Re CoCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension
Sheng Zhangâ â, Xiaodong Liuâ¡, Jingjing Liuâ¡, Jianfeng Gaoâ¡, Kevin Duhâ and Benjamin Van Durmeâ â Johns Hopkins University â¡Microsoft Research
# Abstract
# Passage
We present a large-scale dataset, ReCoRD, for machine reading comprehension requiring commonsense reasoning. Experiments on this dataset demonstrate that the performance of state-of-the-art MRC systems fall far behind human performance. ReCoRD represents a challenge for future research to bridge the gap between human and machine commonsense reading comprehension. ReCoRD is available at http://nlp.jhu.edu/record.
# Introduction
Machine reading comprehension (MRC) is a cen- tral task in natural language understanding, with techniques lately driven by a surge of large-scale datasets (Hermann et al., 2015; Hill et al., 2015; Rajpurkar et al., 2016; Trischler et al., 2017; Nguyen et al., 2016), usually formalized as a task of answering questions given a passage. An in- creasing number of analyses (Jia and Liang, 2017; Rajpurkar et al., 2018; Kaushik and Lipton, 2018) have revealed that a large portion of questions in these datasets can be answered by simply match- ing the patterns between the question and the an- swer sentence in the passage. While systems may match or even outperform humans on these datasets, our intuition suggests that there are at least some instances in human reading compre- hension that require more than what existing chal- lenge tasks are emphasizing. One primary type of questions these datasets lack are the ones that require reasoning over common sense or under- standing across multiple sentences in the pas- sage (Rajpurkar et al., 2016; Trischler et al., 2017). To overcome this limitation, we introduce a large-scale dataset reading comprehen- for sion, ReCoRD (["rEk@rd]), which consists of over 120,000 examples, most of which require
âWork done when Sheng Zhang was visiting Microsoft.
(CNN) -- A lawsuit has been filed claiming that the iconic Led Zeppelin song "Stairway to Heaven" was far from original. The suit, filed on May 31 in the United States District Court Eastern District of Pennsylvania, was brought by the estate of the late musician Randy California against the surviving members of Led Zeppelin and their record label. The copyright infringement case alleges that the Zeppelin song was taken from the single "Taurus" by the 1960s band Spirit, for whom California served as lead guitarist. "Late in 1968, a then new band named Led Zeppelin began touring in the United States, opening for Spirit," the suit states. "It was during this time that Jimmy Page, Led Zeppelin's guitarist, grew familiar with 'Taurus' and the rest of Spirit's catalog. Page stated in interviews that he found Spirit to be 'very goodâ and that the band's performances struck him 'on an emotional level.'"
¢ Suit claims similarities between two songs
¢ Randy California was guitarist for the group Spirit
¢ Jimmy Page has called the accusation "ridiculous"
# (Cloze-style) Query
According to claims in the suit, "Parts of 'Stairway to Heaven,' instantly recognizable to the music fans across the world, sound almost identical to significant portions of *X.ââ
# Reference Answers
# Taurus
Figure 1: An example from ReCoRD. The passage is a snippet from a news article followed by some bullet points which summarize the news event. Named enti- ties highlighted in the passage are possible answers to the query. The query is a statement that is factually supported by the passage. X in the statement indicates a missing named entity. The goal is to ï¬nd the correct entity in the passage that best ï¬ts X.
deep commonsense reasoning. ReCoRD is an acronym for the Reading Comprehension with Commonsense Reasoning Dataset.
Figure 1 shows a ReCoRD example: the pas- sage describes a lawsuit claiming that the band âLed Zeppelinâ had plagiarized the song âTaurusâ
to their most iconic song, âStairway to Heavenâ. The cloze-style query asks what does âStairway to Heavenâ sound similar to. To ï¬nd the correct answer, we need to understand from the passage that âa copyright infringement case alleges that âStairway to Heavenâ was taken from âTaurusââ, and from the bullet point that âthese two songs are claimed similarâ. Then based on the common- sense knowledge that âif two songs are claimed similar, it is likely that (parts of) these songs sound almost identicalâ, we can reasonably infer that the answer is âTaurusâ.
the existing MRC datasets, all queries and passages in ReCoRD are automatically mined from news articles, which maximally reduces the human elicitation bias (Gordon and Van Durme, 2013; Misra et al., 2016; Zhang et al., 2017), and the data collection method we propose is cost-efï¬cient. Further analysis shows that a large portion of ReCoRD requires commonsense reasoning.
Experiments on ReCoRD demonstrate that hu- man readers are able to achieve a high perfor- mance at 91.69 F1, whereas the state-of-the-art MRC models fall far behind at 46.65 F1. Thus, ReCoRD presents a real challenge for future re- search to bridge the gap between human and ma- chine commonsense reading comprehension.
# 2 Task Motivation
A program has common sense if it auto- matically deduces for itself a sufï¬ciently wide class of immediate consequences of anything it is told and what it already knows. â McCarthy (1959)
Commonsense Reasoning in MRC As illustrated by the example in Figure 1, the commonsense knowledge âif two songs are claimed similar, it is likely that (parts of) these songs sound almost identicaâ is not explicitly described in the pas- sage, but is necessary to acquire in order to gen- erate the answer. Human is able to infer the answer because the commonsense knowledge is commonly known by nearly all people. Our goal is to evaluate whether a machine is able to learn such knowledge. However, since commonsense knowledge is massive and mostly implicit, deï¬n- ing an explicit free-form evaluation is challeng- ing (Levesque et al., 2011). Motivated by Mc- Carthy (1959), we instead evaluate a machineâs ability of commonsense reasoning â a reasoning
process requiring commonsense knowledge; that is, if a machine has common sense, it can de- duce for itself the likely consequences or details of anything it is told and what it already knows rather than the unlikely ones. To formalize it in MRC, given a passage p (i.e., âanything it is toldâ and âwhat it already knowsâ), and a set of conse- quences or details C which are factually supported by the passage p with different likelihood, if a machine M has common sense, it can choose the most likely consequence or detail câ from C, i.e.,
câ = arg max P (c | p, M). câC (1)
Task Deï¬nition With the above discussion, we propose a speciï¬c task to evaluate a machineâs ability of commonsense reasoning in MRC: as shown in Figure 1, given a passage p describing an event, a set of text spans E marked in p, and a cloze-style query Q(X) with a missing text span indicated by X, a machine M is expected to act like human, reading the passage p and then using its hidden commonsense knowledge to choose a text span e â E that best ï¬ts X, i.e.,
eâ = arg max P (Q(e) | p, M). eâE (2)
Once the cloze-style query Q(X) is ï¬lled in by a text span e, the resulted statement Q(e) becomes a consequence or detail c as described in Equa- tion (1), which is factually supported by the pas- sage with certain likelihood.
# 3 Data Collection
We describe the framework for automatically gen- erating the dataset, ReCoRD, for our task deï¬ned in Equation (2), which consists of passages with text spans marked, cloze-style queries, and refer- ence answers. We collect ReCoRD in four stages as shown in Figure 2: (1) curating CNN/Daily Mail news articles, (2) generating passage-query- answers triples based on the news articles, (3) ï¬l- tering out the queries that can be easily answered by state-of-the-art MRC models, and (4) ï¬ltering out the queries ambiguous to human readers.
3.1 News Article Curation We choose to create ReCoRD by exploiting news articles, because the structure of news makes it a good source for our task: normally, the ï¬rst few paragraphs of a news article summarize the news
ReCoRD Human Filtering (120k triples) Machine Filtering (244k triples) Passage-Query-Answers Generation (770k triples) CNN/Daily Mail News Article Curation (170k news articles)
Figure 2: The overview of data collection stages.
event, which can be used to generate passages of the task; and the rest of the news article provides consequences or details of the news event, which can be used to generate queries of the task. In addition, news providers such as CNN and Daily Mail supplement their articles with a number of bullet points (Svore et al., 2007; Woodsend and Lapata, 2010; Hermann et al., 2015), which out- line the highlights of the news and hence form a supplemental source for generating passages.
We ï¬rst downloaded CNN and Daily Mail news articles using the script1 provided by Hermann et al. (2015), and then sampled 148K articles from CNN and Daily Mail. In these articles, named en- tities and their coreference information have been annotated by a Google NLP pipeline, and will be used in the second stage of our data collection. Since these articles can be easily downloaded us- ing the public script, we are concerned about po- tential cheating if using them as the source for generating the dev./test datasets. Therefore, we crawled additional 22K news articles from the CNN and Daily Mail websites. These crawled articles have no overlap with the articles used in Hermann et al. (2015). We then ran the state- of-the-art named entity recognition model (Pe- ters et al., 2018) and the end-to-end coreference resolution model (Lee et al., 2017) provided by AllenNLP (Gardner et al., 2018) to annotate the crawled articles. Overall, we have collected 170K CNN/Daily Mail news articles with their named entities and coreference information annotated.
# 1https://github.com/deepmind/rc-data
3.2 Passage-Query-Answers Generation All passages, queries and answers in ReCoRD were automatically generated from the curated news articles. Figure 3 illustrates the generation (1) we split each news article into two process. parts as described in Section 3.1: the ï¬rst few paragraphs which summarize the news event, and the rest of the news which provides the details or consequences of the news event. These two parts make a good source for generating passages and queries of our task respectively. (2) we enriched the ï¬rst part of news article with the bullet points provided by the news editors. The ï¬rst part of news article, together with the bullet points, is con- sidered as a candidate passage. To ensure that the candidate passages are informative enough, we re- quired the ï¬rst part of news article to have at least 100 tokens and contain at least four different en- tities. (3) for each candidate passage, the second part of its corresponding news article was split into sentences by Stanford CoreNLP (Manning et al., 2014). Then we selected the sentences that sat- isfy the following conditions as potential details or consequences of the news event described by the passage:
⢠Sentences should have at least 10 tokens, as longer sentences contain more information and thus are more likely to be inferrable details or consequences.
⢠Sentences should not be questions, as we only consider details or consequences of a news event, not questions.
⢠Sentences should not have 3-gram overlap with the corresponding passage, so they are less likely to be paraphrase of sentences in the pas- sage.
⢠Sentences should have at least one named entity, so that we can replace it with X to generate a cloze-style query.
⢠All named entities in sentences should have precedents in the passage according to corefer- ence, so that the sentences are not too discon- nected from the passage, and the correct entity can be found in the passage to ï¬ll in X.
Finally, we generated queries by replacing enti- ties in the selected sentences with X. We only replaced one entity in the selected sentence each time, and generated one cloze-style query. Based on coreference, the precedents of the replaced en-
entertainment â stars screen Binge Culture Media Copyright infringement suit filed against Led Zeppelin for âStairway to Heavenâ By Lisa Respers France, CNN updated 12:49 PM EDT, Tue June 3, 20 STORY HIGHLIGHTS _ Suit claims similarity between two songs | Randy California was il guitarist for the group Spirit + Jimmy Page has called the accusation âridiculousâ | served as lead guitarist. The first few paragraphs and the bullet points of the. news article summarize the news event. | One of the causes of action for the suit is listed a The rest of the news article provides details or [T The suit, fled on May 31 in the United States District Court Eastern District of | Pennsylvania, was brought by the estate ofthe Jate musician Randy | Califomia against the surviving members of Led Zeppelin and their record |- â~ label. The copyright infringement case alleges that the Zeppelin so taken fom the single "Taurus" by the 1960s band Spirit, for whom California, | âLate in 1968, a then new band named Led Zeppelin began touring in the | United States, opening for Spirit. the suit states. âIt was during this time that | â! Jimmy Page, Led Zeppelin's guitarist, grew familiar with âTaurusâ and the rest | of Spirits catalog. Page stated in interviews that he found Spirit to be âvery | | goodâ and that the band's performances struck him âon an emotional level. alsification of Rock N' | Roll History" and the typeface in the section headings of the filing resembles | that used for Led Zeppelin album covers. According to claims in the suit, â+| "Parts of âStairway to Heavenâ instantly recognizable to the music Passage (CNN) - A lawsuit has been filed claiming that the iconic Led Zeppelin song "Stairway to was far from original. The suit, filed on May 31 in the United States District Court Eastem District of Pent was brought by the estate of the late musician | Misa gainst the surviving members of Led their record label. The copyright infringement case alleges that the Zeppelin song was taken from the single "Taurus" by the 1960s band Spitt, for whom California served as lead guitarist. "Late in 1968, a then new band named Led Zep . touring inthe United States, opening for Spirit, during this time that Jimmy sist, grew familiar with Taurusâ and the catalog. Page stated in interviews that he pit to be âvery good! and that the bandâ. performances struck him âon an emotional level!" was | i ~ + Suit claims similarities between two songs ifornia was guitarist for the group Spirit + Randy i + Jimmy Page has called the accusation "ridiculous" (Cloze-style) Query According to claims in the suit, "Parts of âStairway to Heaven, instantly recognizable to the music fans across the world, sound almost identical to significant portions 4- INS concequences of the | across the world, sound almost identical to significant portions of Taurus.â __| of *X."â new event. \ ia The hidden commonsens: is used in comprehension of the underlined sentence â so Reference Answers Taurus
Figure 3: Passage-query-answers generation from a CNN news article.
tity in the passage became reference answers to the query. The passage-query-answers genera- tion process matched our task deï¬nition in Sec- tion 2, and therefore created queries that require some aspect of reasoning beyond immediate pat- tern matching. In total, we generated 770k (pas- sage, query, answers) triples.
# 3.3 Machine Filtering
As discussed in Jia and Liang (2017); Rajpurkar et al. (2018); Wang and Bansal (2018); Kaushik and Lipton (2018), existing MRC models mostly learn to predict the answer by simply paraphrasing questions into declarative forms, and then match- ing them with the sentences in the passages. To overcome this limitation, we ï¬ltered out triples whose queries can be easily answered by the state- of-the-art MRC architecture, Stochastic Answer Networks (SAN) (Liu et al., 2018). We choose SAN because it is competitive on existing MRC datasets, and it has components widely used in many MRC architectures such that low bias was anticipated in the ï¬ltering (which is conï¬rmed by evaluation in Section 5). We used SAN to perform a ï¬ve-fold cross validation on all 770k triples. The SAN models correctly answered 68% of these triples. We excluded those triples, and only kept 244k triples that could not be answered by SAN. These triples contain queries which could not be answered by simple paraphrasing, and other types of reasoning such as commonsense reasoning and multi-sentence reasoning are needed.
# 3.4 Human Filtering
Since the ï¬rst three stages of data collection were fully automated, the resulted triples could be noisy and ambiguous to human readers. Therefore, we employed crowdworkers to validate these triples. We used Amazon Mechanical Turk for validation. Crowdworkers were required to: 1) have a 95% HIT acceptance rate, 2) a minimum of 50 HITs, 3) be located in the United States, Canada, or Great Britain, and 4) not be granted the qualiï¬cation of poor quality (which we will explain later in this section). Workers were asked to spend at least 30 seconds on each assignment, and paid $3.6 per hour on average.
Figure 4 shows the crowdsourcing web inter- face. Each HIT corresponds to a triple in our data collection. In each HIT assignment, we ï¬rst showed the expandable instructions for ï¬rst-time workers, to help them better understand our task (see the Appendix A.2). Then we presented work- ers with a passage in which the named entities are highlighted and clickable. After reading the pas- sage, workers were given a supported statement with a placeholder (i.e., a cloze-style query) in- dicating a missing entity. Based on their under- standing of the events that might be inferred from the passage, workers were asked to ï¬nd the correct entity in the passage that best ï¬ts the placeholder. If workers thought the answer is not obvious, they were allowed to guess one, and were required to report that case in the feedback box. Workers were also encouraged to write other feedback.
amazon Resdeg Cam... (ATOxie) aernier âm4 ery Reading Comprehension Task Instructions (Click to expand) PASSAGE tall. I was just focusing on my game today cout of 6-2 5-7 6-2 In 11 matches will face Supported Statement igh the f st Williams serve Please feel free Repo Se HIF | Wy Rapa een
Figure 4: The crowdsourcing web interface.
To ensure quality and prevent spamming, we used the reference answers in the triples to com- pute workersâ average performance after every 1000 submissions. While there might be corefer- ence or named entity recognition errors in the ref- erence answers, as reported in Chen et al. (2016) (also conï¬rmed by our analysis in Section 4), they only accounted for a very small portion of all the reference answers. Thus, the reference an- swers could be used for comparing workersâ per- formance. Speciï¬cally, if a workerâs performance was signiï¬cantly lower than the average perfor- mance of all workers, we blocked the worker by granting the qualiï¬cation of poor quality. In prac- tice, workers were able to correctly answer about 50% of all queries. We blocked workers if their average accuracy was lower than 20%, and then republished their HIT assignments. Overall, 2,257 crowdworkers have participated in our task, and 51 of them have been granted the qualiï¬cation of poor quality. Train / Dev. / Test Splits Among all the 244k triples collected from the third stage, we ï¬rst ob- tained one worker answer for each triple. Com- pared to the reference answers, workers correctly answered queries in 122k triples. We then se- lected around 100k correctly-answered triples as the training set, restricting the origins of these triples to the news articles used in Hermann et al. (2015). As for the development and test sets, we
solicited another worker answer to further ensure their quality. Therefore, each of the rest 22k triples has been validated by two workers. We only kept 20k triples that were correctly answered by both workers. The origins of these triples are either articles used in Hermann et al. (2015) or articles crawled by us (as described in Section 3.1), with a ratio of 3:7. Finally, we randomly split the 20k triples into development and test sets, with 10k triples for each set. Table 1 summarizes the statis- tics of our dataset, ReCoRD.
Train Dev. Test Overall queries unique passages 100,730 65,709 10,000 7,133 10,000 7,279 120,730 80,121 passage vocab. query vocab. 352,491 119,069 93,171 30,844 94,386 31,028 395,356 134,397 tokens / passage entities / passage tokens / query 169.5 17.8 21.3 168.6 17.5 22.1 168.1 17.3 22.2 169.3 17.8 21.4
Table 1: Statistics of ReCoRD
4 Data Analysis ReCoRD differs from other reading comprehen- sion datasets due to its unique requirement for rea- soning more than just paraphrasing. In this sec- tion, we provide a qualitative analysis of ReCoRD which highlights its unique features. Reasoning Types We sampled 100 examples from the development set, and then manually catego- rized them into types shown in table 2. The results show that signiï¬cantly different from existing datasets such as SQuAD (Rajpurkar et al., 2016), and NewsQA (Trischler et al., 2017), ReCoRD requires commonsense reasoning to answer 75% of queries. Owing to the machine ï¬ltering stage, only 3% queries could be answered by paraphras- ing. The small percentage (6%) of ambiguous queries demonstrate the beneï¬t of the human ï¬l- tering stage. We also noticed that 10% queries can be answered through partial clues. As the exam- ple shows, some of partial clues were caused by the incompleteness of named entity recognition in the stage of news article curation. Types of Commonsense Reasoning Formaliz- ing the commonsense knowledge needed for even simple reasoning problems is a huge undertaking. Based on the observation of the sampled queries that required commonsense reasoning, we roughly categorized them into the following four coarse- gained types:
Reasoning Description Example Paraphrasing The answer sentence can be found by paraphrasing the query with some syntactic or lexical variation. P: . . . Ralph Roberts. . . then acquired other cable sys- tems, changed the name of the company to Comcast and ran the company until he was aged 82 Q: X began acquiring smaller cable systems and built the company into the nationâs ï¬fth-largest by 1988. A: [Ralph Roberts] Partial Clue Although a complete semantic match cannot be found between the query and the passage, the answer can be in- ferred through partial clues, such as some word/concept overlap. P:. . . Hani Al-Sibai says he has âsevere mobility prob- lemsâ to get disability cash. . . Q: However the photographs caught X-Sibai walking with apparent ease in the sunshine. A: [Hani Al] Multi-sentence Reasoning It requires anaphora, or higher-level fusion of multiple sentences to ï¬nd the answer. P: Donald Trump is ofï¬cially a $10 billion man. . . HIs campaign wonât release a copy of the ï¬nancial disclo- sure even though the FEC says it can do so on its own. . . Q: The X campaign did provide a one-page summary of the billionaireâs investment portfolio, which is re- markably modest for a man of his means. A: [Donald Trump] Commonsense Reasoning It requires inference drew on common sense as well as multi-sentence rea- soning to ï¬nd the answer. P: . . . Daniela Hantuchova knocks Venus Williams out of Eastbourne 6-2 5-7 6-2 . . . Q: Hantuchova breezed through the ï¬rst set in just un- der 40 minutes after breaking Williamsâ serve twice to take it 6-2 and led the second 4-2 before X hit her stride. A: [Venus Williams] Ambiguous The passage informative enough, or the query does not have a unique answer. is not P: The supermarket wars have heated up with the chief executive of Wesfarmers suggesting successful ri- val Aldi may not be paying its fair share of tax in Australia. . . Q: Xâs average corporate tax rate for the last three years was almost 31 per cent of net proï¬t, and in 2013 it paid $81.6 million in income tax. A: [Aldi] % 3% 10% 6% 75% 6%
Table 2: An analysis of types of reasoning needed in 100 random samples from the dev. set of ReCoRD.
Conceptual Knowledge: the presumed knowl- edge of properties of concepts (Miller, 1995; Liu and Singh, 2004; Pas¸ca and Van Durme, 2008; Zhang et al., 2017).
Causal Reasoning: the causal bridging infer- ence invoked between two events, which is vali- dated against common sense (Singer et al., 1992; Roemmele et al., 2011).
task deï¬nition in Section 2, ReCoRD can be for- malized as two types of machine reading com- prehension (MRC) datasets: passages with cloze- style queries, or passages with queries whose answers are spans in the passage. Therefore, we can evaluate two types of MRC models on ReCoRD, and compare them with human perfor- mance. All the evaluation is carried out based on the train /dev. /test split as illustrated in Table 1.
Na¨ıve Psychology: the predictable human men- tal states in reaction to events (Stich and Raven- scroft, 1994).
Other: Other types of common sense, such as social norms, planning, spatial reasoning, etc. We annotated one or more types to each of these queries, and computed the percentage of them in these queries as shown in Table 3.
# 5 Evaluation
# 5.1 Methods
DocQA2 (Clark and Gardner, 2018) is a strong baseline model for queries with extractive an- swers. It consists of components such as bi- directional attention ï¬ow (Seo et al., 2016) and self attention which are widely used in MRC mod- els. We also evaluate DocQA with ELMo (Peters et al., 2018) to analyze the impact of largely pre- trained encoder on our dataset.
We are interested in the performance of existing MRC architectures on ReCoRD. According to the
2https://github.com/allenai/ document-qa
Reasoning
# Example
Conceptual Knowledge P: Suspended hundreds of feet in the air amid glistening pillars of ice illuminated with ghostly lights from below, this could easily be a computer-generated scene from the latest sci-ï¬ block- buster movie. But in fact these ethereal photographs were taken in real life. . . captured by photographer Thomas Senf as climber Stephan Siegrist, 43, scaled frozen waterfall. . . Q: With bright lights illuminating his efforts from below, MrX appears to be on the set of a sci-ï¬ movie. A: [Stephan Siegrist] Commonsense knowledge: Scenes such as âa person suspended hundreds of feet in the air amid glistening pillars of ice illuminated with ghostly lights from belowâ tend to be found in sci-ï¬ movies. 49.3% Causal Reasoning P: . . . Jamie Lee Sharp, 25, stole keys to £40,000 Porsche Boxster during raid. . . He ï¬lmed him- self boasting about the car before getting behind the wheel Q: X was jailed for four years after pleading guilty to burglary, aggravated vehicle taking, driving whilst disqualiï¬ed, drink-driving and driving without insurance. A: [Jamie Lee Sharp] Commonsense knowledge: If a person steals a car, the person may be arrested and jailed. 32.0% Na¨ıve Psychology P: Uruguay star Diego Forlan said Monday that he is leaving Atletico Madrid and is set to join Serie A Inter Milan. . . Forlan said â. . . At the age of 33, going to a club like Inter is not an opportunity that comes up often. . . â Q: âI am happy with the decision that I have taken, it is normal that some players come and others go,â X added. A: [Diego Forlan, Forlan] Commonsense knowledge: If a person has seized an valuable opportunity, the person will feel happy for it. 28.0% Other P: A British backpacker who wrote a romantic note to locate a handsome stranger after spot- ting him on a New Zealand beach has ï¬nally met her Romeo for the ï¬rst time. Sarah Milne, from Glasgow, left a handmade poster for the man, who she saw in Picton on Friday. . . She said she would return to the same spot in Picton, New Zealand, on Tuesday in search for him. . . William Scott Chalmers revealed himself as the man and went to meet her. . . Q: Mr Chalmers, who brought a bottle of champagne with him, walked over to where Milne was sitting and said âHello, Iâm X, you know you could have just asked for my number.â A: [William Scott Chalmers] Commonsense knowledge: When two people meet each other for the ï¬rst time, they will likely ï¬rst introduce themselves. 12.0%
Table 3: An analysis of speciï¬c types of commonsense reasoning in 75 random sampled queries illustrated in Table 2 which requires common sense reasoning. A query may require multiple types of commonsense reasoning. .
QANet3 (Yu et al., 2018) is one of the top MRC models for SQuAD-style datasets. It is differ- ent from many other MRC models due to the use of transformer (Vaswani et al., 2017). Through QANet, we can evaluate the reasoning ability of transformer on our dataset. SAN4 (Liu et al., 2018) is also a top-rank MRC model. It shares many components with DocQA, and employs a stochastic answer module. Since we used SAN to ï¬lter out easy queries in our data collection, it is necessary to verify that the queries we collect is hard for not only SAN but also other MRC architectures. ASReader5 (Kadlec et al., 2016) is a strong base- line model for cloze-style datasets such as (Her-
mann et al., 2015; Hill et al., 2015). Unlike other baseline models which search among all text spans in the passage, ASReader directly predicts an- swers from the candidate named entities. Language Models6 (LMs) (Trinh and Le, 2018) trained on large corpora recently achieved the state-of-the-art scores on the Winograd Schema Challenge (Levesque et al., 2011). Following in the same manner, we ï¬rst concatenate the passage and the query together as a long sequence, and substitute X in the long sequence with each candi- date entity; we use LMs to compute the probabil- ity of each resultant sequence and the substitution that results in the most probable sequence will be the predicted answer. Random Guess acts as the lower bound of the evaluated models. It considers the queries in our
3The ofï¬cial implementation of QANet is not released. We use the implementation at https://github.com/ NLPLearn/QANet.
4https://github.com/kevinduh/san_mrc 5https://github.com/rkadlec/asreader
6https://github.com/tensorflow/models/ tree/master/research/lm_commonsense
%
dataset as cloze style, and randomly picks a candi- date entity from the passage as the answer.
# 5.2 Human Performance
As described in Section 3.4, we obtained two worker answers for each query in the development and test sets, and conï¬rmed that each query has been correctly answered by two different workers. To get human performance, we obtained an addi- tional worker answer for each query, and compare it with the reference answers.
# 5.3 Metrics
We use two evaluation metrics similar to those used by SQuAD (Rajpurkar et al., 2016). Both ignore punctuations and articles (e.g., a, an, the). Exact Match (EM) measures the percentage of predictions that match any one of the reference an- swers exactly. (Macro-averaged) F1 measures the average over- lap between the prediction and the reference an- swers. We treat the prediction and the reference answer as bags of tokens, and compute their F1. We take the maximum F1 over all of the reference answers for a given query, and then average over all of the queries.
# 5.4 Results
We show the evaluation results in Table 4. Hu- mans are able to get 91.31 EM and 91.69 F1 on the set, with similar results on the develop- ment set. In contrast, the best automatic method â DocQA with ELMo â achieves 45.44 EM and 46.65 F1 on the test set, illustrating a signiï¬cant gap between human and machine reading com- prehension on ReCoRD. All other methods with- out ELMo get EM/F1 scores signiï¬cantly lower than DocQA with ELMo, which shows the posi- tive impact of ELMo (see in Section 5.5). We also note that SAN leads to a result comparable with other strong baseline methods. This conï¬rms that since SAN shares general components with many MRC models, using it to do machine ï¬ltering does help us ï¬lter out queries that are relatively easy to all the methods we evaluate. Finally, to our sur- prise, the unsupervised method (i.e., LM) which achieved the state-of-the-art scores on the Wino- grad Schema Challenge only leads to a result sim- ilar to the random guess baseline: a potential ex- planation is the lack of domain knowledge on our dataset. We leave this question for future work.
Exact Match F1 Dev. Test Dev. Test Human 91.28 91.31 91.64 91.69 DocQA w/ ELMo DocQA w/o ELMo 44.13 36.59 45.44 38.52 45.39 37.89 46.65 39.76 SAN 38.14 39.77 39.09 40.72 QANet 35.38 36.51 36.75 37.79 ASReader 29.24 29.80 29.80 30.35 LM 16.73 17.57 17.41 18.15 Random Guess 18.41 18.55 19.06 19.12
Table 4: Performance of various methods and human.
# 5.5 Analysis
Human Errors About 8% dev./test queries have not been correctly answered in the human evalua- tion. We analyzed samples from these queries, and found that in most queries human was able to nar- row down the set of possible candidate entities, but not able to ï¬nd a unique answer. In many cases, two candidate entities equally ï¬t X unless human has the speciï¬c background knowledge. We show an example in the Appendix A.1.
the method analysis, we mainly ana- lyzed the results of three representative methods: DocQA w/ ELMo, DocQA, and QANet.
Human DocQA w/ ELMo DocQA QANet DocQA w/ ELMo
Figure 5: The Venn diagram of correct predictions from various methods and human on the development set.
Impact of ELMo As shown in Figure 5, among all three methods the correct predictions of DocQA w/ ELMo have the largest overlap (92.6%) with the human predictions. As an ablation study, we analyzed queries which were only correctly an- swered after ELMo was added. We found that in some cases ELMo helped the prediction by incor- porating the knowledge of language models. We show an example in the Appendix A.1.
Predictions of QANet Figure 5 shows that QANet correctly answered some ambiguous queries, which we think was due to the randomness of parameter initialization and did not reï¬ect the Since QANet uses the true reasoning ability. transformer-based encoder and DocQA uses the LSTM-based encoder, we see a signiï¬cant differ- ence of predictions between QANet and DocQA.
Method OOC Rate DocQA w/ ELMo DocQA QANet 6.27% 6.37% 6.41%
Table 5: The out-of-candidate-entities (OOC) rate of three analyzed methods.
Impact of Cloze-style Setting Except ASReader, all the MRC models were evaluated under the ex- tractive setting, which means the information of candidate named entities was not used. Instead, extractive models searched answers from all pos- sible text spans in passages. To show the poten- tial beneï¬t of using the candidate entities in these models, we computed the percentage of model predictions that could not be found in the can- didate entities. As shown in Table 5, all three methods have about 6% OOC predictions. Mak- ing use of the candidate entities would potentially help them increase the performance by 6%.
In Section 4, we manually labeled 100 ran- domly sampled queries with different types of rea- soning. In Figure 6 and 7, we show the perfor- mance of three analyzed methods on these queries.
HBS DocQA w/ELMo =. DocQA 104 TG QANet 60 50 4 a pggenceccnsseeeeecccen 40 30 20 fâ- Partial clue Paraphrasing Ambiguous
Figure 6: Performance of three analyzed methods on the 100 random samples with reasoning types la- beled.(CSR stands for commonsense reasoning, and MSR stands for multi-sentence reasoning.)
Figure 6 shows that three methods performed poorly on queries requiring commonsense rea- soning, multi-sentence reasoning and partial clue.
Compared to DocQA, QANet performed better on multi-sentence reasoning queries probably due to the use of transformer. Also, QANet outperformed DocQA on paraphrased queries probably because we used SAN to ï¬ltering queries and SAN has an architecture similar to DocQA. As we expect, ELMo improved the performance of DocQA on paraphrased queries.
GBS DocQA w/ ELMo =~. DocQA. GB QANet 60 50 404-â 30 20 4 0
Conceptual Knowledge Naive Psychology
# Causality
# Other
Figure 7: Performance of three analyzed methods on 75% of the random samples with speciï¬c common- sense reasoning types labeled.
Among the 75% sampled queries that require commonsense reasoning, we see that ELMo sig- niï¬cantly improved the performance of common- sense reasoning with presumed knowledge. For all other types of commonsense reasoning, all three methods have relatively poor performance.
# 6 Related Datasets
ReCoRD relates to two strands of research in datasets: data for reading comprehension, and that for commonsense reasoning. Reading Comprehension The CNN/Daily Mail Corpus (Hermann et al., 2015), The Childrenâs Book Test (CBT) (Hill et al., 2015), and LAM- BADA (Paperno et al., 2016) are closely related to ReCoRD: (1) The CNN/Daily Mail Corpus con- structed queries from the bullet points, most of which required limited reasoning ability (Chen et al., 2016). (2) CBT is a collection of 21 con- secutive sentences from book excerpts, with one word randomly removed from the last sentence. Since CBT has no machine or human ï¬ltering to ensure quality, only a small portion of the CBT examples really probes machinesâ ability to under- stand the context. (3) Built in a similar manner to CBT, LAMBADA was ï¬ltered to be human- guessable in the broader context only. Differing from ReCoRD, LAMBADA was designed to be a language modeling problem where contexts were
not required to be event summaries, and answers were not necessarily in the context.
Since all candidate answers were extracted from in the passage, ReCoRD can also be formalized as a extractive MRC dataset, similar to SQuAD (Ra- jpurkar et al., 2016) and NewsQA (Trischler et al., 2017). The difference is that questions in these datasets were curated from crowdworkers. Since it is hard to control the quality of crowdsourced questions, a large portion of questions in these datasets can be answered by word matching or paraphrasing (Jia and Liang, 2017; Rajpurkar et al., 2018; Wang and Bansal, 2018). There are other large-scale datasets (Nguyen et al., 2016; Joshi et al., 2017; Lai et al., 2017; Dunn et al., 2017; Kocisky et al., 2018; Reddy et al., 2018; Choi et al., 2018; Yang et al., 2018) targeting different aspects of reading comprehension. See (Gao et al., 2018) for a recent survey. Commonsense Reasoning ROCStories Cor- pus (Mostafazadeh et al., 2016), SWAG (Zellers et al., 2018), and The Winograd Schema Chal- lenge (WSC) (Levesque et al., 2011) are related ReCoRD: (1) ROCStories assesses commonsense reasoning in story understanding by choosing the correct story ending from only two candi- Stories in the corpus were all curated dates. from crowdworkers, which could suffer from human elicitation bias (Gordon and Van Durme, 2013; Misra et al., 2016; Zhang et al., 2017). (2) SWAG uniï¬es commonsense reasoning and natural language inference. It selects an ending from multiple choices which is most likely to be anticipated from the situation describe in The counterfactual endings in the premise. SWAG were generated using language models with adversarial ï¬ltering. (3) WSC foucses on intra-sentential pronoun disambiguation problems that require commonsense reasoning. There are other datasets (Roemmele et al., 2011; Zhang et al., 2017; Rashkin et al., 2018a,b) targeting different aspects of commonsense reasoning.
# 7 Conclusion
We introduced ReCoRD, a large-scale reading comprehension dataset requiring commonsense reasoning. Unlike existing machine reading com- prehension (MRC) datasets, ReCoRD contains a large portion of queries that require commonsense reasoning to be answered. Our baselines, includ- ing top performers on existing MRC datasets, are
no match for human competence on ReCoRD. We hope that ReCoRD will spur more research in MRC with commonsense reasoning.
# References
Danqi Chen, Jason Bolton, and Christopher D. Man- ning. 2016. the cnn/daily mail reading comprehension task. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 2358â2367, Berlin, Germany. Asso- ciation for Computational Linguistics.
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen- tau Yih, Yejin Choi, Percy Liang, and Luke Zettle- moyer. 2018. Quac: Question answering in context. arXiv preprint arXiv:1808.07036.
Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehen- sion. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 845â855. Association for Computational Linguistics.
Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with arXiv preprint context from a search engine. arXiv:1704.05179.
Jianfeng Gao, Michel Galley, and Lihong Li. 2018. arXiv Neural approaches to conversational ai. preprint arXiv:1809.08267.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Pe- ters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language pro- cessing platform. arXiv preprint arXiv:1803.07640.
Jonathan Gordon and Benjamin Van Durme. 2013. Re- porting bias and knowledge acquisition. In Proceed- ings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC â13, pages 25â30, New York, NY, USA. ACM.
Karl Moritz Hermann, Tom´aËs KoËcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su- leyman, and Phil Blunsom. 2015. Teaching ma- chines to read and comprehend. In Advances in Neu- ral Information Processing Systems, pages 1693â 1701.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading childrenâs books with explicit memory representa- tions. arXiv preprint arXiv:1511.02301.
Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages
2021â2031, Copenhagen, Denmark. Association for Computational Linguistics.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1601â1611, Vancouver, Canada. Association for Computational Linguistics.
Rudolf Kadlec, Martin Schmid, OndËrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the at- tention sum reader network. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 908â918. Association for Computational Linguis- tics.
Divyansh Kaushik and Zachary C. Lipton. 2018. How much reading does reading comprehension require? a critical investigation of popular benchmarks.
Tomas Kocisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gabor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Asso- ciation for Computational Linguistics, 6:317â328.
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 785â 794, Copenhagen, Denmark. Association for Com- putational Linguistics.
Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference reso- In Proceedings of the 2017 Conference on lution. Empirical Methods in Natural Language Process- ing, pages 188â197, Copenhagen, Denmark. Asso- ciation for Computational Linguistics.
Hector J Levesque, Ernest Davis, and Leora Morgen- stern. 2011. The winograd schema challenge. In Aaai spring symposium: Logical formalizations of commonsense reasoning.
H. Liu and P. Singh. 2004. Conceptnet — a practical commonsense reasoning tool-kit. BT Tech- nology Journal, 22(4):211â226.
Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2018. Stochastic answer networks for ma- chine reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1694â1704. Association for Computational Linguis- tics.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55â60.
John McCarthy. 1959. Programs with common sense. In Proceedings of the Teddington Conference on the Mechanization of Thought Processes, London: Her Majestyâs Stationery Ofï¬ce.
George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39â41.
Ishan Misra, C Lawrence Zitnick, Margaret Mitchell, and Ross Girshick. 2016. Seeing through the human reporting bias: Visual classiï¬ers from noisy human- centric labels. In Proceedings of the IEEE Confer- ence on Computer Vision and Pattern Recognition, pages 2930â2939.
Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A cor- pus and cloze evaluation for deeper understanding of In Proceedings of the 2016 commonsense stories. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839â849, San Diego, California. Association for Computational Linguis- tics.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine arXiv preprint reading comprehension dataset. arXiv:1611.09268.
Denis Paperno, Germ´an Kruszewski, Angeliki Lazari- dou, Ngoc Quan Pham, Raffaella Bernardi, San- dro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. 2016. The lambada dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1525â1534. Association for Computational Linguistics.
Marius Pas¸ca and Benjamin Van Durme. 2008. Weakly-supervised acquisition of open-domain classes and class attributes from web documents and query logs. In Proceedings of ACL-08: HLT, pages 19â27. Association for Computational Linguistics.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227â 2237. Association for Computational Linguistics.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you donât know: Unanswerable ques- tions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 784â789. Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383â2392, Austin, Texas. Association for Computational Linguistics.
Hannah Rashkin, Antoine Bosselut, Maarten Sap, Kevin Knight, and Yejin Choi. 2018a. Modeling naive psychology of characters in simple common- In Proceedings of the 56th Annual sense stories. Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 2289â 2299. Association for Computational Linguistics.
Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, 2018b. Event2mind: Commonsense inference on events, In Proceedings of the 56th intents, and reactions. Annual Meeting of the Association for Compu- Long Papers), tational Linguistics (Volume 1: pages 463â473. Association for Computational Linguistics.
Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042.
Melissa Roemmele, Cosmin Adrian Bejan, and An- drew S Gordon. 2011. Choice of plausible alterna- tives: An evaluation of commonsense causal reason- ing. In AAAI Spring Symposium: Logical Formal- izations of Commonsense Reasoning, pages 90â95.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention ï¬ow for machine comprehension. arXiv preprint arXiv:1611.01603.
Murray Singer, Michael Halldorson, Jeffrey C Lear, and Peter Andrusiak. 1992. Validation of causal bridging inferences in discourse understanding. Journal of Memory and Language, 31(4):507 â 524.
Stephen Stich and Ian Ravenscroft. 1994. What is folk psychology? Cognition, 50(1-3):447â468.
Krysta Svore, Lucy Vanderwende, and Christopher Burges. 2007. Enhancing single-document sum- marization by combining RankNet and third-party In Proceedings of the 2007 Joint Con- sources. ference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning (EMNLP-CoNLL), pages 448â457, Prague, Czech Republic. Association for Computa- tional Linguistics.
Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2017. Newsqa: A machine compre- In Proceedings of the 2nd Work- hension dataset. shop on Representation Learning for NLP, pages
191â200, Vancouver, Canada. Association for Com- putational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998â6008.
Yicheng Wang and Mohit Bansal. 2018. Robust ma- chine comprehension models via adversarial train- In Proceedings of the 2018 Conference of ing. the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 575â581. Association for Computational Linguistics.
Kristian Woodsend and Mirella Lapata. 2010. Auto- In Proceed- matic generation of story highlights. ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 565â574, Up- psala, Sweden. Association for Computational Lin- guistics.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Ben- gio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question In Proceedings of the Conference on answering. Empirical Methods in Natural Language Processing (EMNLP).
Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehen- sion. arXiv preprint arXiv:1804.09541.
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Sheng Zhang, Rachel Rudinger, Kevin Duh, and Ben- jamin Van Durme. 2017. Ordinal common-sense in- ference. Transactions of the Association for Com- putational Linguistics, 5:379â395.
# A Appendices
# A.1 Case Study
Human Error Table 6 shows an example where the ambiguous query caused human error. The passage in this example describes âambivertsâ, and there are two experts studying it: âVanessa Van Edwardsâ and âAdam Grantâ. Both of them ï¬t in the query asking who gave advice to am- biverts. There is no further information to help human choose a unique answer for this query.
Passage: Your colleagues think youâre quiet, but your friends think youâre a party animal. If that sounds like you, then you may be what psychologists describe as an âambivertâ. Scientists believe around two-thirds of peo- ple are ambiverts; a personality category that has, up un- til now, been given relatively little attention. âMost peo- ple who are ambiverts have been told the wrong category their whole life,â Vanessa Van Edwards, an Orgeon-based behavioural expert, told DailyMail.com âYou hear extro- vert and you hear introvert, and you think âugh, thatâs not meâ.â Ambiversion is a label that has been around for some time, but gained popularity in 2013 with a paper in the journal Psychological Science, by Adam Grant the University of Pennsylvania. ⢠Most ambiverts have been labelled incorrectly their
whole life
⢠They slide up and down personality spectrum depend- ing on the situation
⢠Ambiverts are good at gaining peopleâs trust and mak- ing their point heard
⢠They often feel pressure to mirror personality of the person they are with
Query: âRead each situation more carefully,â X advised ambiverts, âand ask yourself, âWhat do I need to do right now to be most happy or successful?â Reference answers: Adam Grant
Table 6: An example illustrating a ambiguous query.
Impact of ELMo Table 7 shows an example where DocQA w/ ELMo correctly answered but The passage in this example DocQA failed. describes a woman artist âSarah Milneâ who launched a public appeal to ï¬nd a handsome stranger âWilliam Scott Chalmersâ, and invited him to meet her. The query asks the missing information in the greetings from âWilliam Scott Chalmersâ when he went to meet âSarah Milneâ. Our common sense about social norms tells us when two people meet each other for the ï¬rst time, they are very likely to ï¬rst introduce themselves. In the query of this example, when Mr. Chalmers said âHello, Iâm . . . â, it is very likely that he was introducing himself. Therefore, the name of Mr Chalmer ï¬t X best.
In this example, the prediction of DocQA with- out ELMo is âNew Zealandâ which is not even close to the reference answer. The beneï¬t of using ELMo in this example is that its language model will help exclude âNew Zealandâ from the likely candidate answers, because âIâm . . . â is usually followed by a person name rather than a location name. Such a pattern learnt by ELMo is useful in narrowing down candidiate answers in ReCoRD.
Passage: A British backpacker who wrote a romantic note to locate a handsome stranger after spotting him on a New Zealand beach has ï¬nally met her Romeo for the ï¬rst time. Sarah Milne, from Glasgow, left a handmade poster for the man, who she saw in Picton on Friday and described as âshirtless, wearing black shorts with stars tat- tooed on his torso and running with a curly, bouncy and blonde dogâ. In her note, entitled âIs this you? â, she in- vited the mystery stranger to meet her on the same beach on Tuesday. But the message soon became a source of huge online interest with the identity of both the author and its intended target generating unexpected publicity. ⢠Sarah Milne, a Glasgow artist, launched a public ap-
peal to ï¬nd the mystery man
⢠She wrote a heart-warming message and drew a picture of him with his dog
⢠She said she would return to the same spot in Picton, New Zealand, on Tuesday in search for him
⢠William Scott Chalmers revealed himself as the man and went to meet her
⢠He told Daily Mail Australia that he would ask her out for dinner
Query: Mr Chalmers, who brought a bottle of cham- pagne with him, walked over to where Milne was sitting and said âHello, Iâm X, you know you could have just asked for my number.â Reference answers: William Scott Chalmers
Table 7: An example illustrating the impact of ELMo.
# A.2 HIT Instructions
We show the instructions for Amazon Mechanical Turk HITs in Figure 8.
WELCOME! Please read the instructions if this is your first time to do this task. The purpose of this task is to answer reading comprehension questions. In this task, you will be presented with a Passage and will have to answer a Supported Statement based on your understanding of the events that might be inferred from the passage. Passage: A snippet from a news article followed by some bullet points which summarize the news event. We colored some in the Passage, which are possible answers to the Supported Statement. Supported Statement: A statement that is factually supported by the Passage, once you have filled in the . The indicates a missing . Your goal is to find in the Passage that best fits th To finish this task, you are expected to do the following steps: . Read the Passage and Supported Statement carefully. . Among in the Passage, try your best to find which is most likely to be filled in the . (If you think there are more than one entity qualified to be filled in, just choose any of them and report this assignment in the feedback box.) . Click on to answer this Supported Statement. . If you want to change your answer, simply click on the one you want to change to. . Click the submit button to submit your answer to us. Please spend at least 30 seconds on one task Below is an example. We provide the answer explanation for this example to help you better understand this task. Passage ( ) -- As an player, patted a lot of on the butt when he liked their work, but on Monday, defendant found out that one courtroom was not the place to play that game. After patted his lawyer on the rear, Judge rejected 's plea to a probation violation in the domestic violence case involving and his then-wife, was arrested in May for not meeting with his probation officer and was in court Monday to enter a plea. After he was asked if he was pleased with his attorney, the former wide receiver once known as " " gave his lawyer, a gentle pat on the rear. . pats his lawyer on the rear while in court * Judge thought he wasn't taking hearing on probation violation seriously . , formerly known as " ," was married to reality TV star + He was on probation for a domestic violence episode that led to their divorce Supported Statement He changed his name back to after the two were married last July. Answer explanation: In this example, the Supported Statement describes a person changed his name, and asks us what name the person changed to. From the Passage we know that was once known as " ", and he is the only person who once used a different name. Based on this, we can infer that "He" in the Supported Statement probably refers to , and he changed his name from " " to " ". Therefore, the entity that best fits the is . We can click on any " "in the Passage to answer this query. If you have any question or comment, please write it in the feedback box or email the requesters:
Figure 8: Amazon Mechanical Turk HIT Instructions. | {
"id": "1803.07640"
} |
1810.12281 | Three Mechanisms of Weight Decay Regularization | Weight decay is one of the standard tricks in the neural network toolbox, but
the reasons for its regularization effect are poorly understood, and recent
results have cast doubt on the traditional interpretation in terms of $L_2$
regularization. Literal weight decay has been shown to outperform $L_2$
regularization for optimizers for which they differ. We empirically investigate
weight decay for three optimization algorithms (SGD, Adam, and K-FAC) and a
variety of network architectures. We identify three distinct mechanisms by
which weight decay exerts a regularization effect, depending on the particular
optimization algorithm and architecture: (1) increasing the effective learning
rate, (2) approximately regularizing the input-output Jacobian norm, and (3)
reducing the effective damping coefficient for second-order optimization. Our
results provide insight into how to improve the regularization of neural
networks. | http://arxiv.org/pdf/1810.12281 | Guodong Zhang, Chaoqi Wang, Bowen Xu, Roger Grosse | cs.LG, stat.ML | null | null | cs.LG | 20181029 | 20181029 | 8 1 0 2
t c O 9 2 ] G L . s c [
1 v 1 8 2 2 1 . 0 1 8 1 : v i X r a
# Under review as a conference paper at ICLR 2019
# THREE MECHANISMS OF WEIGHT DECAY REGULARIZATION
Guodong Zhang, Chaoqi Wang, Bowen Xu, Roger Grosse University of Toronto, Vector Institute {gdzhang, cqwang, bowenxu, rgrosse}@cs.toronto.edu
# ABSTRACT
Weight decay is one of the standard tricks in the neural network toolbox, but the reasons for its regularization effect are poorly understood, and recent results have cast doubt on the traditional interpretation in terms of L2 regularization. Literal weight decay has been shown to outperform L2 regularization for optimizers for which they differ. We empirically investigate weight decay for three optimization algorithms (SGD, Adam, and K-FAC) and a variety of network architectures. We identify three distinct mechanisms by which weight decay exerts a regularization effect, depending on the particular optimization algorithm and architecture: (1) increasing the effective learning rate, (2) approximately regularizing the input- output Jacobian norm, and (3) reducing the effective damping coefï¬cient for second-order optimization. Our results provide insight into how to improve the regularization of neural networks.
# INTRODUCTION
Weight decay has long been a standard trick to improve the generalization performance of neural networks (Krogh & Hertz, 1992; Bos & Chug, 1996) by encouraging the weights to be small in magnitude. It is widely interpreted as a form of L2 regularization because it can be derived from the gradient of the L2 norm of the weights in the gradient descent setting. However, several ï¬ndings cast doubt on this interpretation:
⢠Weight decay has sometimes been observed to improve training accuracy, not just generaliza- tion performance (e.g. Krizhevsky et al. (2012)).
⢠Loshchilov & Hutter (2017) found that when using Adam (Kingma & Ba, 2014) as the optimizer, literally applying weight decay (i.e. scaling the weights by a factor less than 1 in each iteration) enabled far better generalization than adding an L2 regularizer to the training objective.
⢠Weight decay is widely used in networks with Batch Normalization (BN) (Ioffe & Szegedy, 2015). In principle, weight decay regularization should have no effect in this case, since one can scale the weights by a small factor without changing the networkâs predictions. Hence, it does not meaningfully constrain the networkâs capacity.
The effect of weight decay remains poorly understood, and we lack clear guidelines for which tasks and architectures it is likely to help or hurt. A better understanding of the role of weight decay would help us design more efï¬cient and robust neural network architectures.
In order to better understand the effect of weight decay, we experimented with both weight decay and L2 regularization applied to image classiï¬ers using three different optimization algorithms: SGD, Adam, and Kronecker-Factored Approximate Curvature (K-FAC) (Martens & Grosse, 2015). Consistent with the observations of Loshchilov & Hutter (2017), we found that weight decay consistently outperformed L2 regularization in cases where they differ. Weight decay gave an especially strong performance boost to the K-FAC optimizer, and closed most of the generalization gaps between ï¬rst- and second-order optimizers, as well as between small and large batches. We then investigated the reasons for weight decayâs performance boost. Surprisingly, we identiï¬ed three distinct mechanisms by which weight decay has a regularizing effect, depending on the particular algorithm and architecture:
1
# Under review as a conference paper at ICLR 2019
mm L2 Sm Baseline Weight Decay | CIFAR10-ResNet32 CIFAR10-ResNet32 (BNâ-Aug) CIFARI00-VGG16(BN+Aug) __CIFAR100-ResNet32.(BN+Aug) KFAC-G HENCE Lh ia ADAM. oR 086 Oks 0.80 092 083 obs 08s 0.60, 065 070 075 0.72 O78 0.76 078 accuracy accuracy accuracy accuracy
Figure 1: Comparison of test accuracy of the networks trained with different optimizers on both CIFAR10 and CIFAR100. We compare Weight Decay regularization to L2 regularization and the Baseline (which used neither). Here, BN+Aug denotes the use of BN and data augmentation. K-FAC-G and K-FAC-F denote K-FAC using Gauss-Newton and Fisher matrices as the preconditioner, respectively. The results suggest that weight decay leads to improved performance across different optimizers and settings.
1. In our experiments with ï¬rst-order optimization methods (SGD and Adam) on networks with BN, we found that it acts by way of the effective learning rate. Speciï¬cally, weight decay reduces the scale of the weights, increasing the effective learning rate, thereby increasing the regularization effect of gradient noise (Neelakantan et al., 2015; Keskar et al., 2016). As evidence, we found that almost all of the regularization effect of weight decay was due to applying it to layers with BN (for which weight decay is meaningless). Furthermore, when we computed the effective learning rate for the network with weight decay, and applied the same effective learning rate to a network without weight decay, this captured the full regularization effect.
2. We show that when K-FAC is applied to a linear network using the Gauss-Newton metric (K-FAC-G), weight decay is equivalent to regularizing the squared Frobenius norm of the input-output Jacobian (which was shown by Novak et al. (2018) to improve generalization). Empirically, we found that even for (nonlinear) classiï¬cation networks, the Gauss-Newton norm (which K-FAC with weight decay is implicitly regularizing) is highly correlated with the Jacobian norm, and that K-FAC with weight decay signiï¬cantly reduces the Jacobian norm. 3. Because the idealized, undamped version of K-FAC is invariant to afï¬ne reparameterizations, the implicit learning rate effect described above should not apply. However, in practice the approximate curvature matrix is damped by adding a multiple of the identity matrix, and this damping is not scale-invariant. We show that without weight decay, the weights grow large, causing the effective damping term to increase. If the effective damping term grows large enough to dominate the curvature term, it effectively turns K-FAC into a ï¬rst-order optimizer. Weight decay keeps the effective damping term small, enabling K-FAC to retain its second-order properties, and hence improving generalization.
Hence, we have identiï¬ed three distinct mechanisms by which weight decay improves generalization, depending on the optimization algorithm and network architecture. Our results underscore the subtlety and complexity of neural network training: the ï¬nal performance numbers obscure a variety of complex interactions between phenomena. While more analysis and experimentation is needed to understand how broadly each of our three mechanisms applies (and to ï¬nd additional mechanisms!), our work provides a starting point for understanding practical regularization effects in neural network training.
2 PRELIMINARIES
Supervised learning. Given a training set S consisting of training pairs {x, y}, and a neural network fo(x) with parameters 6 (including weights and biases), our goal is to minimize the emprical risk expressed as an average of a loss ¢ over the training set: (0) = + Lewes !(y fo(x)).
Stochastic Gradient Descent. To minimize the empirical risk L(θ), stochastic gradient descent (SGD) is used extensively in deep learning community. Typically, gradient descent methods can be derived from the framework of steepest descent with respect to standard Euclidean metric in parameter space. Speciï¬cally, gradient descent minimizes the following surrogate objective in each iteration:
h(0) = AO" VoL(O) + 1/nD(0, 0 + Ad),
h(0) = AO" VoL(O) + 1/nD(0, 0 + Ad), (1)
(1) 2. In this case,
where the distance (or dissimilarity) function D(@,@ + A@) is chosen as 4||AQ\|3. In this case, solving equation 1 yields A@ = â7VoL(9), where 77 is the learning rate.
2
# Under review as a conference paper at ICLR 2019
Natural gradient. Though popular, gradient descent methods often struggle to navigate âvalleysâ in the loss surface with ill-conditioned curvature (Martens, 2010). Natural gradient descent, as a variant of second-order methods (Martens, 2014), is able to make more progress per iteration by taking into account the curvature information. One way to motivate natural gradient descent is to show that it can be derived by adapting steepest descent formulation, much like gradient descnet, except using an alternative local distance. The distance function which leads to natural gradient is the KL divergence on the modelâs predictive distribution Dxr (pe || pe+ae) © 3A0' FAO, where F(@) is the Fisher information matrix! (Amari, 1998):
F = E [Vo log p(y|x, 0) Vo log p(y|x, 8) "] . (2)
Applying this distance function to equation 1, we have θt+1 â θt â ηFâ1âθL(θ).
Gauss-Newton algorithm. Another sensible distance function in equation | is the Lz distance on the output (logits) of the neural network, i.e. 3 || forae â fo||3. This leads to the classical Gauss-Newton algorithm which updates the parameters by 6â+! ~ 6! â nG~1VL(@), where the Gauss-Newton (GN) matrix is defined as
# G=E[Jg
Jo},
(3) and Jθ is the Jacobian of fθ(x) w.r.t θ. The Gauss-Newton algorithm, much like natural gradient descent, is also invariant to the speciï¬c parameterization of neural network function fθ.
Two curvature matrices. It has been shown that the GN matrix is equivalent to the Fisher matrix in the case of regression task with squared error loss (Heskes, 2000). However, they are not identical for the case of classiï¬cation, where cross-entropy loss is commonly used. Nevertheless, Martens (2014) showed that the Fisher matrix is equivalent to generalized GN matrix when model prediction p(y|x, θ) corresponds to exponential family model with natural parameters given by fθ(x), where the generalized GN matrix is given by
G=E[JgHeJo}, (4)
and Hy is the Hessian of ¢(y, z) w.r.t z, evaluated at z = f(x). In regression with squared error loss, the Hessian Hy happens to be identity matrix.
Preconditioned gradient descent. Given the fact that both natural gradient descent and Gauss- Newton algorithm precondition the gradient with an extra curvature matrix C(θ) (including the Fisher matrix and GN matrix), we also term them preconditioned gradient descent for convenience.
K-FAC. As modern neural networks may contain millions of parameters, computing and storing the exact curvature matrix and its inverse is impractical. Kronecker-factored approximate curvature (K-FAC) Martens & Grosse (2015) uses a Kronecker-factored approximation to the curvature matrix to perform efï¬cient approximate natural gradient updates. As shown by Luk & Grosse (2018), K-FAC can be applied to general pullback metric, including Fisher metric and the Gauss-Newton metric. For more details, we refer reader to Appendix F or Martens & Grosse (2015).
Batch Normalization. Beyond sophisticated optimization algorithms, Batch Normalization (BN) plays an indispensable role in modern neural networks. Broadly speaking, BN is a mechanism that aims to stabilize the distribution (over a mini-batch) of inputs to a given network layer during training. This is achieved by augmenting the network with additional layers that subtract the mean µ and divide by the standard deviation Ï. Typically, the normalized inputs are also scaled and shifted based on trainable parameters γ and β:
x â µ Ï For clarity, we ignore the parameters γ and β, which do not impact the performance in practice. We further note that BN is applied before the activation function and not used in the output layer.
# 3 THE EFFECTIVENESS OF WEIGHT DECAY
Our goal is to understand weight decay regularization in the context of training deep neural networks. Towards this, we ï¬rst discuss the relationship between L2 regularization and weight decay in different optimizers.
1The underlying distribution for the expectation in equation 2 has been left ambiguous. Throughout the experiments, we sample the targets from the modelâs predictions, as done in Martens & Grosse (2015)
3
# Under review as a conference paper at ICLR 2019
Table 1: Classiï¬cation results on CIFAR-10 and CIFAR-100. B denotes BN while D denotes data augmentation, including horizontal ï¬ip and random crop. WD denotes weight decay regularization. Weight decay regularization improves the generalization consistently. Interestingly, we observe that weight decay gives an especially strong performance boost to the K-FAC optimizer when BN is turned off.
SGD ADAM K-FAC-F K-FAC-G Dataset Network | Bo D WD WD WD WD 83.20 | 84.87 | 83.16 | 84.12 | 85.58 | 89.60 | 83.85 | 89.81 86.99 | 88.85 | 88.45 | 88.72 | 87.97 | 89.02 | 88.17 | 89.77 v | 9171 | 93.39 | 92.89 | 93.62 | 93.12 | 93.90 | 93.19 | 93.80 CIFAR-10 VGGI16 85.47 | 86.63 | 84.43 | 87.54 | 86.82 | 90.22 | 85.24 | 90.64 86.13 | 90.65 | 89.46 | 90.61 | 89.78 | 91.24 | 89.94 | 90.91 92.95 | 95.14 | 93.63 | 94.66 | 93.80 | 95.35 | 93.44 | 95.04 CIFAR-10 ResNet32 CIFAR-100 | VGGI6 W464 [AS v v | 68.42 | 73.31 | 69.88 | 74.22 | 71.05 | 73.36 | 67.46 | 73.57 v CIFAR-100 | ResNet32 73.61 | 77.73 | 73.60 | 77.40 | 74.49 | 78.01 | 73.70 | 78.02
Gradient descent with weight decay is deï¬ned by the following update rule: θt+1 â (1 â ηβ)θt â ηâL(θt), where β deï¬nes the rate of the weight decay per step and η is the learning rate. In this case, weight decay is equivalent to L2 regularization. However, the two differ when the gradient update is preconditioned by a matrix Câ1, as in Adam or K-FAC. The preconditioned gradient descent update with L2 regularization is given by
θt+1 â (I â ηβCâ1)θt â ηCâ1âθL(θt), (6)
whereas the weight decay update is given by
at! & (1 â 8)" â nC V6L(6'). (7) The difference between these updates is whether the preconditioner is applied to 6â. The latter update can be interpreted as the preconditioned gradient descent update on a regularized objective where the regularizer is the squared C-norm ||6||2, = @' CO. If C is adapted based on statistics collected during training, as in Adam or K-FAC, this interpretation holds only approximately because gradient descent on ||6||2, would require differentiating through C. However, this approximate regularization term can still yield insight into the behavior of weight decay. (As we discuss later, this observation informs some, but not all, of the empirical phenomena we have observed.) Though the difference between the two updates may appear subtle, we find that it makes a substantial difference in terms of generalization performance.
Initial Experiments. We now present some empirical ï¬ndings about the effectiveness of weight decay which the rest of the paper is devoted to explaining. Our experiments were carried out on two different datasets: CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009) with varied batch sizes. We test VGG16 (Simonyan & Zisserman, 2014) and ResNet32 (He et al., 2016) on both CIFAR-10 and CIFAR-100 (for more details, see Appendix A). In particular, we investigate three different optimization algorithms: SGD, Adam and K-FAC. We consider two versions of K-FAC, which use the Gauss-Newton matrix (K-FAC-G) and Fisher information matrix (K-FAC-F).
Figure 1 shows the comparison between weight decay, L2 regularization and the baseline. We also compare weight decay to the baseline on more settings and report the ï¬nal test accuracies in Table 1. Finally, the results for large-batch training are summarized in Table 3. Based on these results, we make the following observations regarding weight decay:
1. In all experiments, weight decay regularization consistently improved the performance and was more effective than L2 regularization in cases where they differ (See Figure 1).
2. Weight decay closed most of the generalization gaps between ï¬rst- and second-order optimiz- ers, as well as between small and large batches (See Table 1 and Table 3).
3. Weight decay signiï¬cantly improved performance even for BN networks (See Table 1), where it does not meaningfully constrain the networksâ capacity.
4. Finally, we notice that weight decay gave an especially strong performance boost to the K-FAC optimizer when BN was disabled (see the ï¬rst and fourth rows in Table 1).
In the following section, we seek to explain these phenomena. With further testing, we ï¬nd that weight decay can work in unexpected ways, especially in the presence of BN.
4
# Under review as a conference paper at ICLR 2019
SGD. ADAM Acc (VGG16) baseline wd-cony wd-all âAcc (ResNet32) epoch epoch
Figure 2: Test accuracy as a function of training epoch for SGD and Adam on CIFAR-100 with different weight decay regularization schemes. baseline is the model without weight decay; wd-conv is the model with weight decay applied to all convolutional layers; wd-all is the model with weight decay applied to all layers; wd-fc is the model with weight decay applied to the last layer (fc). Most of the generalization effect of weight decay is due to applying it to layers with BN.
4 THREE MECHANISMS OF WEIGHT DECAY REGULARIZATION
4.1 MECHANISM I: HIGHER EFFECTIVE LEARNING RATE
As discussed in Section 3, when SGD is used as the optimizer, weight decay can be interpreted as penalizing the L2 norm of the weights. Classically, this was believed to constrain the model by penalizing explanations with large weight norm. However, for a network with Batch Normalization (BN), an L2 penalty does not meaningfully constrain the reprsentation, because the networkâs predictions are invariant to rescaling of the weights and biases. More precisely, if BN(x; θl) denotes the output of a layer with parameters θl in which BN is applied before the activation function, then
BN(x; αθl) = BN(x; θl), (8)
for any α > 0. By choosing small α, one can make the L2 norm arbitrarily small without changing the function computed by the network. Hence, in principle, adding weight decay to layers with BN should have no effect on the optimal solution. But empirically, weight decay appears to signiï¬cantly improve generalization for BN networks (e.g. see Figure 1).
van Laarhoven (2017) observed that weight decay, by reducing the norm of the weights, increases the effective learning rate. Since higher learning rates lead to larger gradient noise, which has been shown to act as a stochastic regularizer (Neelakantan et al., 2015; Keskar et al., 2016), this means weight decay can indirectly exert a regularizing effect through the effective learning rate. In this section, we provide additional evidence supporting the hypothesis of van Laarhoven (2017). For simplicity, this section focuses on SGD, but weâve observed similar behavior when Adam is used as the optimizer.
| Lâ Lââ | Lo] effective Ir epoch °
Due to its invariance to the scaling of the weights, the key property of the weight vector is its direction. As shown by Hoffer et al. (2018), the weight direction 6, = 0,/||9i||2 is updated according to
f+! â 6; ân|lOf||3° L676; )Vo,£(6')+O(n7). (9) Therefore, the effective learning rate is approximately pro- portional to n/A- Which means that by decreasing the scale of the weights, weight decay regularization increases the effective learning rate.
Figure 3: Effective learning rate of the ï¬rst layer of ResNet32 trained with SGD on CIFAR-100. Without weight decay reg- ularization, the effective learning rate de- creases quickly in the beginning.
Figure 3 shows the effective learning rate over time for two BN networks trained with SGD (the results for Adam are similar), one with weight decay and one without it. Each network is trained with a typical learning rate decay schedule, including 3 factor-of-10 reductions in the learning rate parameter, spaced 60 epochs apart. Without weight decay, the normalization effects cause an additional effective learning rate decay (due to the increase of weight norm), which reduces the effective learning rate by a factor of 10 over the ï¬rst 50 epochs. By contrast, when weight decay is applied, the effective learning rate remains more or less constant in each stage.
5
# Under review as a conference paper at ICLR 2019
We now show that the effective learning rate schedule explains nearly the entire generalization effect of weight decay. First, we independently varied whether weight de- cay was applied to the top layer of the network, and to the remaining layers. Since all layers except the top one used BN, itâs only in the top layer that weight decay would constrain the model. Training curves for SGD and Adam under all four conditions are shown in Figure 2. In all cases, we observe that whether weight decay was applied to the top (fully connected) layer did not appear to mat- ter; whether it was applied to the reamining (convolution) layers explained most of the generalization effect. This supports the effective learning rate hypothesis.
baseline wn-cony â wall = â wde-fetwn-conv 0 25 50 75 100 135 150 175 200 epoch accuracy
Figure 4: The curves of test accuracies of ResNet32 on CIFAR-100. To be noted, we use wd and wn to denote weight decay and weight normalization respectively.
We further tested this hypothesis using a simple experimental manipulation. Speciï¬cally, we trained a BN network without weight decay, but after each epoch, rescaled the weights in each layer to match that layerâs norm from the corresponding epoch for the network with weight decay. This rescaling does not affect the networkâs predictions, and is equivalent to setting the effective learning rate to match the second network. As shown in Figure 4, this effective learning rate transfer scheme (wn-conv) eliminates almost the entire generalization gap; it is fully closed by also adding weight decay to the top layer (wd-fc+wn-conv). Hence, we conclude that for BN networks trained with SGD or Adam, weight decay achieves its regularization effect primarily through the effective learning rate.
4.2 MECHANISM II: APPROXIMATE JACOBIAN REGULARIZATION
In Section 3, we observed that when BN is disabled, weight decay has the strongest regularization effect when K-FAC is used as the optimizer. Hence, in this section we analyze the effect of weight decay for K-FAC with networks without BN. First, we show that in a certain idealized setting, K-FAC with weight decay regularizes the input-output Jacobian of the network. We then empirically investigate whether it behaves similarly for practical networks.
As discussed in Section 3, when the gradient updates are preconditioned by a matrix C, weight decay can be viewed as approximate preconditioned gradient descent on the norm ||6||2, = 6' C@. This interpretation is only approximate because the exact gradient update requires differentiating through C.* When C is taken to be the (exact) Gauss-Newton (GN) matrix G, we obtain the Gauss-Newton norm ||6||2; = @'G(@)@. Similarly, when C is taken to be the K-FAC approximation to G, we obtain what we term the K-FAC Gauss-Newton norm.
These norms are interesting from a regularization perspective. First, under certain conditions, they are proportional to the average L2 norm of the networkâs outputs. Hence, the regularizer ought to make the networkâs predictions less extreme. This is summarized by the following results:
Lemma 1 (Gradient structure). For a feed-forward neural network of depth L with ReLU activation function and no biases, the networkâs outputs are related to the input-output Jacobian and parameter- output Jacobian as follows:
fo(x) = Vxfo(x) "x = xx 1 oT 1 (10) = O= Jo98. T+1v9le(x) L+1°
Lemma 2 (K-FAC Gauss-Newton Norm). For a linear feed-forward network of depth L without biases 3, we have:
Wllex exc = (E+ VE [| fo() |?) - (1)
Using these results, we show that for linear networks with whitened inputs, the K-FAC Gauss-Newton norm is proportional to the squared Frobenius norm of the input-output Jacobian. This is interesting from a regularization perspective, since Novak et al. (2018) found the norm of the input-output Jacobian to be consistently coupled to generalization performance.
2We show in Appendix E that this interpretation holds exactly in the case of Gauss-Newton norm. 3For exact Gauss-Newton norm, the result also holds for deep rectiï¬ed networks (see Appendix C).
6
# Under review as a conference paper at ICLR 2019
MNIST MNIST CIFAR-10 CIFAR-10 ely . . * * ° " e 10" wy . s ° ° ° ° ee e ot © ote wy og . $ |. eee ie oad F ee 1 Boge Te ee " . Ly âlee a 1a a i@, 0 wt so Generalization gap - Jacobian norm Generalization gap - Jacobian norm
Figure 5: Relationship between K-FAC GN norm and Jacobian norm for practical deep neural networks. Each point corresponds to a network trained to 100% training accuracy. Even for (nonlinear) classiï¬cation networks, the K-FAC GN norm is highly correlated with both the squared Frobenius norm of the input-output Jacobian and the generalization gap.
Theorem 1 (Approximate Jacobian norm). For a linear feed-forward network of depth L without biases, if we further assume that E[x] = 0 and Cov(x) = I, then:
IPllexmc = (LZ + Y)||Sxlliron (12)
Proof. From Lemma 2, we have ||9||&,._,... = (Z + 1) E ||| fo(x)||?]. From Lemma 1,
E [| fo(x)7] = E [x' I.
# Jxx] = E [tr J
# Sexx"].
When the network is linear, the input-output Jacobian Jx is independent of the input x. Then we use the assumption of whitened inputs:
Wee pane = (E+ 1) E [tr SZ Ixxx'] = (L +1) tr IIx E[xx"] = (L + 1)||SxllRro-
While the equivalence between the K-FAC GN norm and the Jacobian norm holds only for linear networks, we note that linear networks have been useful for understanding the dynamics of neural net training more broadly (e.g. Saxe et al. (2013)). Hence, Jacobian regularization may help inform our understanding of weight decay in practical (nonlinear) networks.
To test whether the K-FAC GN norm correlates with the Jacobian norm for practical networks, we trained feed-forward networks with a variety optimizers on both MNIST and CIFAR-10. For MNIST, we used simple fully-connected networks with different depth and width. For CIFAR-10, we adopted the VGG family (From VGG11 to VGG19). We deï¬ned the generalization gap to be the difference between training and test loss. Figure 5 shows the relationship of the Jacobian norm to the K-FAC GN norm and to generalization gap for these networks. We observe that the Jacobian norm correlates strongly with the generalization gap (consistent with Novak et al. (2018)) and also with the K-FAC GN norm. Hence, Remark 1 can inform the regularization of nonlinear networks.
Table 2: Squared Frobenius norm of the input-output Jacobian matrix. K-FAC-G with weight decay signiï¬- cantly reduces the Jacobian norm. Optimizer SGD K-FAC-G VGG16 564 498 WD 142 51.44 ResNet32 2765 2115 WD 1074 64.16 To test if K-FAC with weight decay reduces the Ja- cobian norm, we compared the Jacobian norms at the end of training for networks with and without weight decay. As shown in Table 2, weight decay reduced the Jacboian norm by a much larger factor when K-FAC was used as the optimizer than when SGD was used as the optimizer.
Our discussion so far as focused on the GN version of K-FAC. Recall that, in many cases, the Fisher information matrix differs from the GN matrix only in that it accounts for the output layer Hessian. Hence, this analysis may help inform the behavior of K-FAC-F as well. We also note that ||6||2., the Fisher-Rao norm, has been proposed as a complexity measure for neural networks (Liang et al., 2017). Hence, unlike in the case of SGD and Adam for BN networks, we interpret K-FAC with weight decay as constraining the capacity of the network.
4.3 MECHANISM III: SMALLER EFFECTIVE DAMPING PARAMETER
We now return our attention to the setting of architectures with BN. The Jacobian regularization mechanism from Section 4.2 does not apply in this case, since rescaling the weights results in an
7
# Under review as a conference paper at ICLR 2019
08; KFAC-F + KFAC-G 0.64 Acc (VGG16) 0.44 baseline â wd-conv â weal wd-fe âAce (ResNet32) D 20 40 60 80 100 } 20 40 60 80 100 epoch epoch
Figure 6: Test accuracy as a function of training epoch for K-FAC on CIFAR-100 with different weight decay regularization schemes. baseline is the model without weight decay regularization; wd-conv is the model with weight decay applied to all convolutional layers; wd-all is the model with weight decay applied to all layers; wd-fc is the model with weight decay applied to the last layer (fc).
equivalent network, and therefore does not affect the input-output Jacobian. Similarly, if the network is trained with K-FAC, then the effective learning rate mechanism from Section 4.1 also does not apply because the K-FAC update is invariant to afï¬ne reparameterization (Luk & Grosse, 2018) and therefore not affected by the scaling of the weights. More precisely, for a layer with BN, the curvature matrix C (either the Fisher matrix or the GN matrix) has the following property:
1 C(0,) = Jango (13)
where 6; = 0;/||91||2 as in Section 4.1. Hence, the ||@;||3 factor in the preconditioner counteracts the Oi \|5 ? factor in the effective learning rate, resulting in an equivlaent effective learning rate regardless of the norm of the weights.
These observations raise the question of whether it is still useful to apply weight decay to BN layers when using K-FAC. To answer this question, we repeated the experiments in Figure 2 (applying weight decay to subsets of the layers), but with K-FAC as the optimizer. The results are summarized in Figure 6. Applying it to the non-BN layers had the largest effect, consistent with the Jacobian regularization hypothesis. However, applying weight decay to the BN layers also led to signiï¬cant gains, especially for K-FAC-F.
The reason this does not contradict the K-FAC invariance property is that practical K-FAC implemen- tations (like many second-order optimizers) dampen the updates by adding a multiple of the identity matrix to the curvature before inversion. According to Equation 13, as the norm of the weights gets larger, C gets smaller, and hence the damping term comes to dominate the preconditioner. Mathemat- ically, we can understand this effect by deriving the following update rule for the normalized weights Ëθ (see Appendix D for proof):
~ ~ A, AT ~ ~ P Ort â OF â (LE â 876; )(C(B;) + [17 |I3A1) "Vo, £(8") + O(n"), (14) where A is the damping parameter. Hence, for large C(6!) or small ||9;||, the update is close to the idealized second-order update, while for small enough C(6/) or large enough |||], K-FAC effectively becomes a first-order optimizer. Hence, by keeping the weights small, weight decay helps K-FAC to retain its second-order properties.
Most implementations of K-FAC keep the damping pa- rameter A fixed throughout training. Therefore, it would be convenient if C(6/) and ||@;|| do not change too much during training, so that a single value of \ can work well throughout training. Interestingly, the norm of the GN matrix appears to be much more stable than the norm of the Fisher matrix. Figure 7 shows the norms of the Fisher matrix F(0,) and GN matrix G (6) of the normal- ized weights for the first layer of a CIFAR-10 network throughout training. While the norm of F(6:) decays by 4 orders of magnitude over the first 50 epochs, the norm ~ of G(4) increases by only a factor of 2.
â Trace norm 10° 1! 3 ® 0 40 epoch 60 so 100 ââ Fisher Matrix ââ Gauss-Newton Matrix
Figure 7: Trace norm of Fisher matrix and Gauss-Newton matrix of the ï¬rst layer (Nor- malized) of ResNet32. The model was trained on CIFAR-10 with K-FAC-F and BN.
8
# Under review as a conference paper at ICLR 2019
The explanation for this is as follows: in a classification task with cross-entropy loss, the Fisher matrix is equivalent to the generalized GN matrix E[J 4HJ g] (see Section 2). This differs from the GN matrix E[J Jo] only in that it incudes the output layer Hessian Hy = diag(p) â pp", where p is the vector of estimated class probabilities. It is easy to see that Hy goes to zero as p collapses to one class, as is the case for tasks such as CIFAR-10 and CIFAR-100 where networks typically achieve perfect training accuracy. Hence, we would expect F to get much smaller over the course of training, consistent with Figure 7.
To summarize, when K-FAC is applied to BN networks, it can be advantageous to apply weight decay even to layers with BN, even though this appears unnecessary based on invariance considerations. The reason is that weight decay reduces the effective damping, helping K-FAC to retain its second-order properties. This effect is stronger for K-FAC-F than for K-FAC-G because the Fisher matrix shrinks dramatically over the course of training.
# 5 DISCUSSION
Despite its long history, weight decay regularization remains poorly understood. Weâve identiï¬ed three distinct mechanisms by which weight decay improves generalization, depending on the architecture and optimization algorithm: increasing the effective learning rate, reducing the Jacobian norm, and reducing the effective damping parameter. We would not be surprised if there remain additional mechanisms we have not found.
The dynamics of neural net training is incredibly complex, and it can be tempting to simply do what works and not look into why. But we think it is important to at least sometimes dig deeper to determine exactly why an algorithm has the effect that it does. Some of our analysis may seem mundane, or even tedious, as the interactions between different hyperparameters are not commonly seen as a topic worthy of detailed scientiï¬c study. But our experiments highlight that the dynamics of the norms of weights and curvature matrices, and their interaction with optimization hyperparameters, can have a substantial impact on generalization. We believe these effects deserve more attention, and would not be surprised if they can help explain the apparent success or failure of other neural net design choices. We also believe our results highlight the need for automatic adaptation of optimization hyperparameters, to eliminate potential experimental confounds and to allow researchers and practitioners to focus on higher level design issues.
# 6 ACKNOWLEDGEMENT
We thank Jimmy Ba, Kevin Luk, Maxime Gazeau, and Behnam Neyshabur for helpful discussions, and Tianqi Chen and Shengyang Sun for their feedback on early drafts. GZ was funded by an MRIS Early Researcher Award.
REFERENCES
Shun-Ichi Amari. Natural gradient works efï¬ciently in learning. Neural computation, 10(2):251â276, 1998.
Jimmy Ba, Roger Grosse, and James Martens. Distributed second-order optimization using kronecker- factored approximations. 2016.
Siegfried Bos and E Chug. Using weight decay to optimize the generalization ability of a perceptron. In Neural Networks, 1996., IEEE International Conference on, volume 1, pp. 241â246. IEEE, 1996.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770â778, 2016.
Tom Heskes. On ânaturalâ learning and pruning in multilayered perceptrons. Neural Computation, 12(4):881â901, 2000.
9
# Under review as a conference paper at ICLR 2019
Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generaliza- tion gap in large batch training of neural networks. In Advances in Neural Information Processing Systems, pp. 1731â1741, 2017.
Elad Hoffer, Ron Banner, Itay Golan, and Daniel Soudry. Norm matters: efï¬cient and accurate normalization schemes in deep networks. arXiv preprint arXiv:1803.01814, 2018.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolu- tional neural networks. In Advances in neural information processing systems, pp. 1097â1105, 2012.
Anders Krogh and John A Hertz. A simple weight decay can improve generalization. In Advances in neural information processing systems, pp. 950â957, 1992.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278â2324, 1998.
Tengyuan Liang, Tomaso Poggio, Alexander Rakhlin, and James Stokes. Fisher-rao metric, geometry, and complexity of neural networks. arXiv preprint arXiv:1711.01530, 2017.
Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in adam. arXiv preprint arXiv:1711.05101, 2017.
Kevin Luk and Roger Grosse. A coordinate-free construction of scalable natural gradient. arXiv preprint arXiv:1808.10340, 2018.
James Martens. Deep learning via hessian-free optimization. 2010.
James Martens. New insights and perspectives on the natural gradient method. arXiv preprint arXiv:1412.1193, 2014.
James Martens and Roger Grosse. Optimizing neural networks with kronecker-factored approximate curvature. In International conference on machine learning, pp. 2408â2417, 2015.
Arvind Neelakantan, Luke Vilnis, Quoc V Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807, 2015.
Roman Novak, Yasaman Bahri, Daniel A Abolaï¬a, Jeffrey Pennington, and Jascha Sohl- Dickstein. Sensitivity and generalization in neural networks: an empirical study. arXiv preprint arXiv:1802.08760, 2018.
Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Twan van Laarhoven. L2 regularization versus batch and weight normalization. arXiv preprint arXiv:1706.05350, 2017.
10
# Under review as a conference paper at ICLR 2019
Yuhuai Wu, Elman Mansimov, Roger B Grosse, Shun Liao, and Jimmy Ba. Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation. In Advances in neural information processing systems, pp. 5279â5288, 2017.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
Guodong Zhang, Shengyang Sun, David Duvenaud, and Roger Grosse. Noisy natural gradient as variational inference. arXiv preprint arXiv:1712.02390, 2017.
# A EXPERIMENTS DETAILS
Throughout the paper, we perform experiments on image classiï¬cation with three different datasets, MNIST (LeCun et al., 1998), CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009). For MNIST, we use simple fully-connected networks with different depth and width. For CIFAR-10 and CIFAR- 100, we use VGG16 (Simonyan & Zisserman, 2014) and ResNet32 (He et al., 2016). To make the network more ï¬exible, we widen all convolutional layers in ResNet32 by a factor of 4, according to Zagoruyko & Komodakis (2016).
We investigate three different optimization methods, including Stochastic Gradient Descent (SGD), Adam (Kingma & Ba, 2014) and K-FAC (Martens & Grosse, 2015). In K-FAC, two different curvature matrices are studied, including Fisher information matrix and Gauss-Newton matrix.
In default, batch size 128 is used unless stated otherwise. In SGD and Adam, we train the networks with a budge of 200 epochs and decay the learning rate by a factor of 10 every 60 epochs for batch sizes of 128 and 640, and every 80 epochs for the batch size of 2K. Whereas we train the networks only with 100 epochs and decay the learning rate every 40 epochs in K-FAC. Additionally, the curvature matrix is updated by running average with re-estimation every 10 iterations and the inverse operator is amortized to 100 iterations. For K-FAC, we use ï¬xed damping term 1eâ3 unless state otherwise. For each algorithm, best hyperparameters (learning rate and regularization factor) are selected using grid search on held-out 5k validation set. For the large batch setting, we adopt the same strategies in Hoffer et al. (2017) for adjusting the search range of hyperparameters. Finally, we retrain the model with both training data and validation data.
# B GRADIENT STRUCTURE IN NEURAL NETWORKS (LEMMA 1)
Claim. For a feed-forward neural network of depth L with ReLU activation function and no biases, one has the following property:
fo(x) = Vx fo(x) "x = Ixx __! Tg- = Ty Volos) O= 1 (15) Jo0 L+1°
The key observation of Lemma | is that rectified neural networks are piecewise linear up to the output fo(x). And ReLU activation function satisfies the property o(z) = o/(z)z.
Proof. For convenience, we introduce some notations here. Let zL+1 denotes output logits fθ(x), zl the output l-th layer. Similarly, we deï¬ne al = Ï(zl) and a0 = x. By deï¬nition, it is easy to see that
Oz141 da! OZ141 Oa, = DoT zl da! dz} _ Omt1 dz) Zi41 = Wray =
By induction, we conclude that fo(x) = Vx fo(x)'x = Ixx.
11
# Under review as a conference paper at ICLR 2019
On the other side, we have
Oz141 w 49 Zi41 = 1a) = » i Ww; awis ig OW)
dare 9 T Dain, LI+1? According to equation B, zp41 = therefore w e get
Our +1 agi Zr41 = y -W,â aw ij L
Summing over all the layers, we conclude the following equation eventually:
w+ DK = DY aww J l Vofo(x)'0 = J
# C PROOF OF LEMMA 2
Claim. For a feed-forward neural network of depth L with ReLU activation function and no biases, we observe: |O\lq = (L+ YE [| fo(x)|"} - (16)
Furthermore, if we assume the network is linear, we have K-FAC Gauss-Newton norm as follows
lle exe = (L + DE [II fox IP]. (17)
Proof. We first prove the equaility ||6\|2, = (L + 1)?E Gauss-Newton norm, we have [|| fo(x)||?]. Using the definition of the
(lq = E [0' Ig Jo] = E [||Jo9(I"]
By Lemma 1,
Jθθ = (L + 1)fθ(x) = (L + 1)Jxx
Combining above equalities, we arrive at the conclusion. For second part ||O||%,._.,. = (Z + DE [|| fo(x)||?], we note that kronecker-product is exact under the condition that the network is linear, which means Gx-_Fac is the diagonal block version of Gauss-Newton matrix G. Therefore lle _rac = SOE [6;' 35, Jo, 91]
lle _rac = SOE [6;' 35, Jo, 91] l
According to Lemma 1, we have E [6;' J, J@,9:] = E [|| fo(x)||*]. therefore we conclude that
J@,9:] = E [|| fo(x)||*]. = (E+ DE[ || fo() |]
Alle rac = (E+ DE[ || fo() |]
# D DERIVATION OF EQUATION 14
# Claim. During training, the weight direction Ëθt
direction 6f = 6 /||61 6,6; )(C(8.) + |||
||2 is updated according to [2AT)'VL() + O(n?)
Claim. During training, the weight direction 6f = 6 /||61 ||2 is updated according to
Ëθt+1 â Ëθt â η(I â Ëθt
Proof. Natural gradient update is given by
θt+1 â θt â η(C(θt) + λI)â1âL(θt)
Denote p; = ||9;||2. Then we have
Pisa = Pr â 2p! (C(A:) + ApiT)'VL(6:) + W pill(C(O1) + Ape T)'VL(61)I3
12
# Under review as a conference paper at ICLR 2019
and therefore
poss = pey/1â 26 (C(B,) + p?D)-!VL(Gx) + n2ll(C(B.) + Ao?T)-"VL6.) 3 = pi(l â 6} (C(8,) + Af)â *VL(6,)) + O(7?)
t (C( Ëθt) + λÏ2 Additionally, we can rewrite the natural gradient update as follows Ëθt+1 = Ït
Ëθt â ηÏt(C( Ëθt) + λÏ2
# t I)â1âL( Ëθt)
# Ït+1
# And therefore, O11 = Pan (4 = (1-4 067
O11 = Pan (4 = (C(6;) + wil) 'VL(6,)) = (1-4 067 (C(6:) + Ao#1)*VL(6,)) (8: â nC.) + A?) *VL(6.)) + O(7?) = 6, â (I â 6,6; )(C(6,) + []6,|]3A1)-!VL(6,) + O(n?)
# E THE GRADIENT OF GAUSS-NEWTON NORM
For Gauss-Newton norm ||6||2, = (LZ + 1)?Ex [(fe(x), fo(x))], its gradient has the following form: Alle 790 = (E+ 1)°E [Jg fo(x)] (18) According to Lemma 1, we have fg(x) = 1 I09, therefore we can rewrite equation 18 NO le _ T 30 =2(L+ LE [Ig Jo} (19) = 2(L+ 1)G@
For Gauss-Newton norm ||6||2, = (LZ + 1)?Ex [(fe(x), fo(x))], its gradient has the following form:
= 2(L + 1)Gθ Surprisingly, the resulting gradient has the same form as the case where we take Gauss-Newton matrix as a constant of θ up to a constant (L + 1).
F KRONECKER-FACTORED APPROXIMATE CURVATURE (K-FAC)
Martens & Grosse (2015) proposed K-FAC for performing efï¬cient natural gradient optimization in deep neural networks. Following on that work, K-FAC has been adopted in many tasks (Wu et al., 2017; Zhang et al., 2017) to gain optimization beneï¬ts, and was shown to be amendable to distributed computation (Ba et al., 2016).
F.1 BASIC IDEA OF K-FAC
As shown by Luk & Grosse (2018), K-FAC can be applied to general pullback metric, including Fisher metric and the Gauss-Newton metric. For convenience, we introduce K-FAC here using the Fisher metric. Considering /-th layer in the neural network whose input activations are a; ⬠R"!, weight W; ⬠Râ¢*"2, and output s; ⬠R"?, we have s; = Wap. Therefore, weight gradient is Vw,£L = a(V;,£) |. With this gradient formula, K-FAC decouples this layerâs fisher matrix F, using mild approximations,
F, =E [vec{ Vw, L}vec{Vw,L} 1] =E {Vs L}{Vs, Ll} @ ajay | x E[{Vs,L}{Vs,L}"] @E [aia] = 8, @ Ay (20)
Where A; = E [aa"] and S; = E [{Vs£L}{Vs£}']. The approximation above assumes indepen- dence between a and s, which proves to be accurate in practice. Further, assuming between-layer independence, the whole fisher matrix F can be approximated as block diagonal consisting of layer- wise fisher matrices F,;. Decoupling F; into A; and S, not only avoids the memory issue saving F/, but also provides efficient natural gradient computation. F, âvec{Vw,L} = 8,' @ Ap 'vec{Vw,L} = vec[A; 'Vw,LS,"] (21)
Fâ1
l â Aâ1
# l vec{âWl L} = vec[Aâ1
# l âWl LSâ1
(21) As shown by equation 21, computing natural gradient using K-FAC only consists of matrix transfor- mations comparable to size of Wl, making it very efï¬cient.
13
# Under review as a conference paper at ICLR 2019
F.2 PSEUDO CODE OF K-FAC
Algorithm 1 K-FAC with L2 regularization and K-FAC with weight decay. Subscript l denotes layers, wl = vec(Wl). We assume zero momentum for simplicity. Require: η: stepsize Require: β: weight decay Require: stats and inverse update intervals Tstats and Tinv
Require: 17: stepsize Require: 3: weight decay Require: stats and inverse update intervals Ty,.¢; and Tj, k © O and initialize {Wi}, {Si}4,, {AE while stopping criterion not met do kek+1 if k = 0 (mod Tytats) then Update the factors {S,}/_,, {Au}iZp' with moving average end if if k = 0 (mod T;j,,) then Calculate the inverses {[S;]~1}#_,, {[Ai]~}i29" end if Vi = Vw, log p(y|x, w) +8 - Wi W. © W, = (n[Ad-*Vi[S)]-1+8 - Wi) end while
# G ADDITIONAL RESULTS
G.1 LARGE-BATCH TRAINING
It has been shown that K-FAC scales very favorably to larger mini-batches compared to SGD, enjoying a nearly linear relationship between mini-batch size and per-iteration progress for medium-to-large sized mini-batches (Martens & Grosse, 2015; Ba et al., 2016). However, Keskar et al. (2016) showed that large-batch methods converge to sharp minima and generalize worse. In this subsection, we measure the generalization performance of K-FAC with large batch training and analyze the effect of weight decay.
In Table 3, we compare K-FAC with SGD using different batch sizes. In particular, we interpolate between small-batch (BS128) and large-batch (BS2000). We can see that in accordance with previous works (Keskar et al., 2016; Hoffer et al., 2017) the move from a small-batch to a large-batch indeed incurs a substantial generalization gap. However, adding weight decay regularization to K-FAC almost close the gap on CIFAR-10 and cause much of the gap diminish on CIFAR-100. Surprisingly, the generalization gap of SGD also disappears with well-tuned weight decay regularization. Moreover, we observe that the training loss cannot decrease to zero if weight decay is not used, indicating weight decay may also speed up the training.
Table 3: Classiï¬cation results with different batch sizes. WD denotes weight decay regularization. We tune weight decay factor and learning rate using held-out validation set.
Dataset Network Method BS128 WD BS640 WD BS2000 WD CIFAR10 VGG16 91.71 SGD 93.12 K-FAC-F K-FAC-G 93.19 93.39 93.90 93.80 90.46 92.93 92.98 93.09 93.55 93.74 88.50 92.17 90.78 92.24 93.31 93.46 CIFAR10 ResNet32 92.95 SGD K-FAC-F 93.80 K-FAC-G 93.44 95.14 95.35 95.04 91.68 92.30 91.80 94.45 94.79 94.73 89.70 91.15 90.02 94.68 94.43 94.85 CIFAR100 ResNet32 73.61 SGD K-FAC-F 74.49 K-FAC-G 73.70 77.73 78.01 78.02 71.74 73.54 71.13 76.67 77.34 77.40 65.38 71.64 65.41 76.87 77.13 76.93
14
# Under review as a conference paper at ICLR 2019
rd fal 6 Adam KFAC-F KFAC-G | CIFARLO-ResNet32 accuracy accuracy CIFARLO-ResNet32+ og accuracy CIFAR100-VGG16+ 0.8 accuracy CIFAR100-ResNet32+ 4g G5 3095 woo wis wo 1S 300 0 a5 30 75 100 15 ao Ts 300020 ao <6 goto 0 oa 100 epoch epoch epoch epoch
Figure 8: Test accuracy as a function of training epoch. We plot baseline vs L2 regularization vs weight decay regularization on CIFAR-10 and CIFAR-100 datasets. The â+â denotes with BN and data augmentation. Note that training accuracies of all the models are 100% in the end of the training. We smooth all the curves for visual clarity.
# G.2 THE CURVES OF TEST ACCURACIES
# G.3 OPTIMIZATION PERFORMANCE OF DIFFERENT OPTIMIZERS
While this paper mostly focus on generalization, we also report the convergence speed of different optimizers in deep neural networks; we report both per-epoch performance and wall-clock time performance.
We consider the task of image classiï¬cation on CIFAR-10 (Krizhevsky & Hinton, 2009) dataset. The models we use consist of VGG16 (Simonyan & Zisserman, 2014) and ResNet32 (He et al., 2016). We compare our K-FAC-G, K-FAC-F with SGD, Adam (Kingma & Ba, 2014).
We experiment with constant learning for K-FAC-G and K-FAC-F. For SGD and Adam, we set batch size as 128. For K-FAC, we use batch size of 640, as suggested by Martens & Grosse (2015).
In Figure 9, we report the training curves of different algorithms. Figure 9a show that K-FAC-G yields better optimization than other baselines in training loss per epoch. We highlight that the training loss decreases to 1e-4 within 10 epochs with K-FAC-G. Although K-FAC based algorithms take more time for each epoch, Figure 9b still shows wall-clock time improvements over the baselines.
In Figure 9c and 9d, we report similar results on the ResNet32. Note that we make the network wider with a widening factor of 4 according to Zagoruyko & Komodakis (2016). K-FAC-G outperforms both K-FAC-F and other baselines in term of optimization per epoch, and compute time.
15
# Under review as a conference paper at ICLR 2019
(a) Training loss (VGG16) (b) Wall-Clock Time (VGG16)
(c) Training loss (ResNet32)
(d) Wall-Clock Time (ResNet32)
Figure 9: CIFAR-10 image classiï¬cation task.
16 | {
"id": "1502.03167"
} |
1810.08810 | The Frontiers of Fairness in Machine Learning | The last few years have seen an explosion of academic and popular interest in
algorithmic fairness. Despite this interest and the volume and velocity of work
that has been produced recently, the fundamental science of fairness in machine
learning is still in a nascent state. In March 2018, we convened a group of
experts as part of a CCC visioning workshop to assess the state of the field,
and distill the most promising research directions going forward. This report
summarizes the findings of that workshop. Along the way, it surveys recent
theoretical work in the field and points towards promising directions for
research. | http://arxiv.org/pdf/1810.08810 | Alexandra Chouldechova, Aaron Roth | cs.LG, cs.DS, cs.GT, stat.ML | null | null | cs.LG | 20181020 | 20181020 | 2018:
8 1 0 2
t c O 0 2 ] G L . s c [
arXiv:1810.08810v1
1 v 0 1 8 8 0 . 0 1 8 1 : v i X r a
The Frontiers of Fairness in Machine Learning
Alexandra Chouldechovaâ Aaron Rothâ
January 11, 2022
# Abstract
The last few years have seen an explosion of academic and popular interest in algorithmic fairness. Despite this interest and the volume and velocity of work that has been produced recently, the fundamental science of fairness in machine learning is still in a nascent state. In March 2018, we convened a group of experts as part of a CCC visioning workshop to assess the state of the ï¬eld, and distill the most promising research directions going forward. This report summarizes the ï¬ndings of that workshop. Along the way, it surveys recent theoretical work in the ï¬eld and points towards promising directions for research.
1
# 1 Introduction
The last decade has seen a vast increase both in the diversity of applications to which machine learning is applied, and to the import of those applications. Machine learning is no longer just the engine behind ad placements and spam ï¬lters: it is now used to ï¬lter loan applicants, deploy police oï¬cers, and inform bail and parole decisions, amongst other things. The result has been a major concern for the potential for data driven methods to introduce and perpetuate discriminatory practices, and to otherwise be unfair. And this concern has not been without reason: a steady stream of empirical ï¬ndings has shown that data driven methods can unintentionally both encode existing human biases and introduce new ones (see e.g. [Swe13, BCZ+16, CBN17, BG18] for notable examples).
At the same time, the last two years have seen an unprecedented explosion in interest from the academic community in studying fairness and machine learning. âFairness and transparencyâ transformed from a niche topic with a trickle of papers produced every year (at least since the work of [PRT08]) to a major subï¬eld of machine learning, complete with a dedicated archival conference (ACM FAT*). But despite the volume and velocity of published work, our understanding of the fundamental questions related to fairness and machine learning remain in its infancy. What should fairness mean? What are the causes that introduce unfairness in machine learning? How best should we modify our algorithms to avoid unfairness? And what are the corresponding tradeoï¬s with which we must grapple?
In March 2018, we convened a group of about ï¬fty experts in Philadelphia, drawn from academia, industry, and government, to asses the state of our understanding of the fundamentals of the nascent science of fairness in machine learning, and to identify the unanswered questions that
âHeinz College, Carnegie Mellon University. achould@cmu.edu â Department of Computer and Information Science, University of Pennsylvania. aaroth@cis.upenn.edu
1
seem the most pressing. By necessity, the aim of the workshop was not to comprehensively cover the vast growing ï¬eld, much of which is empirical. Instead, the focus was on theoretical work aimed at providing a scientiï¬c foundation for understanding algorithmic bias. This document captures several of the key ideas and directions discussed.
# 2 What We Know
# 2.1 Causes of Unfairness
Even before we precisely specify what we mean by âfairnessâ, we can identify common distortions that can lead oï¬-the-shelf machine learning techniques to produce behavior that is intuitively unfair. These include:
1. Bias Encoded in Data: Often, the training data that we have on hand already includes human biases. For example, in the problem of recidivism prediction used to inform bail and parole decisions, the goal is to predict whether an inmate, if released, will go on to commit another crime within a ï¬xed period of time. But we do not have data on who commits crimes â we have data on who is arrested. There is reason to believe that arrest data â especially for drug crimes â is skewed towards minority populations that are policed at a higher rate [Rot14]. Of course, machine learning techniques are designed to ï¬t the data, and so will naturally replicate any bias already present in the data. There is no reason to expect them to remove existing bias.
2. Minimizing Average Error Fits Majority Populations: Diï¬erent populations of people have diï¬erent distributions over features, and those features have diï¬erent relationships to the label that we are trying to predict. As an example, consider the task of predicting college performance based on high school data. Suppose there is a majority population and a minority population. The majority population employs SAT tutors and takes the exam multiple times, reporting only the highest score. The minority population does not. We should naturally expect both that SAT scores are higher amongst the majority population, and that their relationship to college performance is diï¬erently calibrated compared to the minority population. But if we train a group-blind classiï¬er to minimize overall error, if it cannot simultaneously ï¬t both populations optimally, it will ï¬t the majority population. This is because â simply by virtue of their numbers â the ï¬t to the majority population is more important to overall error than the ï¬t to the minority population. This leads to a diï¬erent (and higher) distribution of errors in the minority population. This eï¬ect can be quantiï¬ed, and can be partially alleviated via concerted data gathering eï¬orts [CJS18].
3. The Need to Explore: In many important problems, including recidivism prediction and drug trials, the data fed into the prediction algorithm depends on the actions that algorithm has taken in the past. We only observe whether an inmate will recidivate if we release him. We only observe the eï¬cacy of a drug on patients to whom it is assigned. Learning theory tells us that in order to eï¬ectively learn in such scenarios, we need to explore â i.e. sometimes take actions we believe to be sub-optimal in order to gather more data. This leads to at least two distinct ethical questions. First, when are the individual costs of exploration borne disproportionately by a certain sub-population? Second, if in certain (e.g. medical) scenarios,
2
we view it as immoral to take actions we believe to be sub-optimal for any particular patient, how much does this slow learning, and does this lead to other sorts of unfairness?
# 2.2 Deï¬nitions of Fairness
With a few exceptions, the vast majority of work to date on fairness in machine learning has focused on the task of batch classiï¬cation. At a high level, this literature has focused on two main families of deï¬nitions1: statistical notions of fairness and individual notions of fairness. We brieï¬y review what is known about these approaches to fairness, their advantages, and their shortcomings.
# 2.2.1 Statistical Deï¬nitions of Fairness
Most of the literature on fair classiï¬cation focuses on statistical deï¬nitions of fairness. This family of deï¬nitions ï¬xes a small number of protected demographic groups G (such as racial groups), and then ask for (approximate) parity of some statistical measure across all of these groups. Popular measures include raw positive classiï¬cation rate, considered in work such as [CV10, KAS11, DHP+12, FFM+15] (also sometimes known as statistical parity [DHP+12]), false positive and false negative rates [Cho17, KMR17, HPS16, ZVGG17] (also sometimes known as equalized odds [HPS16]), and positive predictive value [Cho17, KMR17] (closely related to equal- ized calibration when working with real valued risk scores). There are others â see e.g. [BHJ+18] for a more exhaustive enumeration. This family of fairness deï¬nitions is attractive because it is simple, and deï¬nitions from this family can be achieved without making any assumptions on the data and can be easily veriï¬ed. However, statistical deï¬nitions of fairness do not on their own give meaningful guarantees to individuals or structured subgroups of the protected demographic groups. Instead they give guarantees to âaverageâ members of the protected groups. (See [DHP+12] for a litany of ways in which statistical parity and similar notions can fail to provide meaningful guar- antees, and [KNRW18b] for examples of how some of these weaknesses carry over to deï¬nitions which equalize false positive and negative rates.) Diï¬erent statistical measures of fairness can be at odds with one another. For example, [Cho17] and [KMR17] prove a fundamental impossibility result: except in trivial settings, it is impossible to simultaneously equalize false positive rates, false negative rates, and positive predictive value across protected groups. Learning subject to statistical fairness constraints can also be computationally hard [WGOS17], although practical algorithms of various sorts are known [HPS16, ZVGG17, ABD+18].
# Individual Deï¬nitions of Fairness
Individual notions of fairness, on the other hand, ask for constraints that bind on speciï¬c pairs of individuals, rather than on a quantity that is averaged over groups. For example, [DHP+12] give a deï¬nition which roughly corresponds to the constraint that âsimilar individuals should be treated similarlyâ, where similarity is deï¬ned with respect to a task-speciï¬c metric that must be determined on a case by case basis. [JKMR16] suggest a deï¬nition which roughly corresponds to âless qualiï¬ed individuals should not be favored over more qualiï¬ed individualsâ, where quality is deï¬ned with respect to the true underlying label (unknown to the algorithm). However, although
1There is also an emerging line of work that considers causal notions of fairness (see e.g., [KCP+17, KLRS17, NS18]). We intentionally avoided discussions of this potentially important direction because it will be the subject of its own CCC visioning workshop.
3
the semantics of these kinds of deï¬nitions can be more meaningful than statistical approaches to fairness, the major stumbling block is that they seem to require making signiï¬cant assumptions. For example, the approach of [DHP+12] pre-supposes the existence of an agreed upon similarity metric, whose deï¬nition would itself seemingly require solving a non-trivial problem in fairness, and the approach of [JKMR16] seems to require strong assumptions on the functional form of the relationship between features and labels in order to be usefully put into practice. These obstacle are serious enough that it remains unclear whether individual notions of fairness can be made practical â although attempting to bridge this gap is an important and ongoing research agenda.
# 3 Questions at the Research Frontier
# 3.1 Between Statistical and Individual Fairness
Given the limitations of extant notions of fairness, is there a way to get some of the âbest of both worldsâ? In other words, constraints that are practically implementable without the need for making strong assumptions on the data or the knowledge of the algorithm designer, but which nevertheless provide more meaningful guarantees to individuals? Two recent papers, [KNRW18b] and [HJKRR18] (see also [KNRW18a, KGZ18] for empirical evaluations of the algorithms proposed in these papers), attempt to do this by asking for statistical fairness deï¬nitions to hold not just on a small number of protected groups, but on an exponential or inï¬nite class of groups deï¬ned by some class of functions of bounded complexity. This approach seems promising: because ultimately they are asking for statistical notions of fairness, the approaches proposed by these papers enjoy the beneï¬ts of statistical fairness: that no assumptions need be made about the data, nor is any external knowledge (like a fairness metric) needed. It also better addresses concerns about âintersectionalityâ, a term used to describe how diï¬erent kinds of discrimination can compound and interact for individuals who fall at the intersection of several protected classes.
At the same time, the approach raises a number of additional questions: what function classes are reasonable, and once one is decided upon (e.g. conjunctions of protected attribures) what features should be âprotectedâ? Should these only be attributes that are sensitive on their own, like race and gender, or might attributes that are innocuous on their own correspond to groups we wish to protect once we consider their intersection with protected attributes (for example clothing styles intersected with race or gender)? Finally, this family of approaches signiï¬cantly mitigates some of the weaknesses of statistical notions of fairness by asking for the constraints to hold on average not just over a small number of coarsely deï¬ned groups, but over very ï¬nely deï¬ned groups as well. Ultimately, however, it inherits the weaknesses of statistical fairness as well, just on a more limited scale.
Another recent line of work aims to weaken the strongest assumption needed for the notion of individual fairness from [DHP+12]: namely that the algorithm designer has perfect knowledge of a âfairness metricâ. [KRR18] assume that the algorithm has access to an oracle which can return an unbiased estimator for the distance between two randomly drawn individuals according to an unknown fairness metric, and show how to use this to ensure a statistical notion of fairness related to [KNRW18b, HJKRR18] which informally states that âon average, individuals in two groups should be treated similarly if on average the individuals in the two groups are similarâ â and this can be achieved with respect to an exponentially or inï¬nitely large set of groups. Similarly, [GJKR18] assumes the existence of an oracle which can identify fairness violations when they are
4
made in an online setting, but cannot quantify the extent of the violation (with respect to the unknown metric). It is shown that when the metric is from a speciï¬c learnable family, this kind of feedback is suï¬cient to obtain an optimal regret bound to the best fair classiï¬er while having only a bounded number of violations of the fairness metric. [RY18] consider the case in which the metric is known, and show that a PAC-inspired approximate variant of metric fairness generalizes to new data drawn from the same underlying distribution. Ultimately, however, these approaches all assume that fairness is perfectly deï¬ned with respect to some metric, and that there is some sort of direct access to it. Can these approaches be generalized to a more âagnosticâ setting, in which fairness feedback is given by human beings who may not be responding in a way that is consistent with any metric?
# 3.2 Data Evolution and Dynamics of Fairness
The vast majority of work in computer science on algorithmic fairness has focused on one-shot classiï¬cation tasks. But real algorithmic systems consist of many diï¬erent components that are combined together, and operate in complex environments that are dynamically changing, some- times because of the actions of the learning algorithm itself. For the ï¬eld to progress, we need to understand the dynamics of fairness in more complex systems.
Perhaps the simplest aspect of dynamics that remains poorly understood is how and when components that may individually satisfy notions of fairness compose into larger constructs that still satisfy fairness guarantees. For example, if the bidders in an advertising auction individually are fair with respect to their bidding decisions, when will the allocation of advertisements be âfairâ, and when will it not? [BKN+17] and [DI18] have made a preliminary foray in this direction. These papers embark on a systematic study of fairness under composition, and ï¬nd that often the composition of multiple fair components will not satisfy any fairness constraint at all. Similarly, the individual components of a âfairâ system may appear to be unfair in isolation. There are certain special settings, e.g. the âï¬ltering pipelineâ scenario of [BKN+17] â modeling a scenario in which a job applicant is selected only if she is selected at every stage of the pipeline â in which (multiplicative approximations of) statistical fairness notions compose in a well behaved way. But the high level message from these works is that our current notions of fairness compose poorly. Experience from diï¬erential privacy [DMNS06, DR14] suggests that graceful degradation under composition is key to designing complicated algorithms satisfying desirable statistical properties, because it allows algorithm design and analysis to be modular. Thus, it seems important to ï¬nd satisfying fairness deï¬nitions and richer frameworks that behave well under composition.
In dealing with socio-technical systems, it is also important to understand how algorithms dynamically eï¬ect their environment, and the incentives of human actors. For example, if the bar (for e.g. college admission) is lowered for a group of individuals, this might increase the average qualiï¬cations for this group over time because of at least two eï¬ects: a larger proportion of children in the next generation grow up in households with college educated parents (and the opportunities this provides), and the fact that a college education is achievable can incentivize eï¬ort to prepare academically. These kinds of eï¬ects are not considered when considering either statistical or individual notions of fairness in one-shot learning settings. The economics literature on aï¬rmative action has long considered such eï¬ects â although not with the speciï¬cs of machine learning in mind: see e.g. [FV92, CL93, Bec10]. More recently, there have been some preliminary attempts to model these kinds of eï¬ects in machine learning settings â e.g. by modeling the environment as a markov decision process [JJK+17], considering the equilibrium eï¬ects of imposing
5
statistical deï¬nitions of fairness in a model of a labor market [HC18], specifying the functional relationship between classiï¬cation outcomes and quality [LDR+18], or by considering the eï¬ect of a classiï¬er on a downstream Bayesian decision maker [KRZ18]. However, the speciï¬c predictions of most of the models of this sort are brittle to the speciï¬c modeling assumptions made â they point to the need to consider long term dynamics, but do not provide robust guidance for how to navigate them. More work is needed here.
Finally, decision making is often distributed between a large number of actors who share diï¬erent goals and do not necessarily coordinate. In settings like this, in which we do not have direct control over the decision making process, it is important to think about how to incentivize rational agents to behave in a way that we view as fair. [KKM+17] takes a preliminary stab at this task, showing how to incentivize a particular notion of individual fairness in a simple, stylized setting, using small monetary payments. But how should this work for other notions of fairness, and in more complex settings? Can this be done by controlling the ï¬ow of information, rather than by making monetary payments (monetary payments might be distasteful in various fairness-relevant settings)? More work is needed here as well. Finally, [CDPF+17] take a welfare maximization view of fairness in classiï¬cation, and characterize the cost of imposing additional statistical fairness constraints as well. But this is done in a static environment. How would the conclusions change under a dynamic model?
# 3.3 Modeling and Correcting Bias in the Data
Fairness concerns typically surface precisely in settings where the available training data is already contaminated by bias. The data itself is often a product of social and historical process that op- erated to the disadvantage of certain groups. When trained in such data, oï¬-the-shelf machine learning techniques may reproduce, reinforce, and potentially exacerbate existing biases. Under- standing how bias arises in the data, and how to correct for it, are fundamental challenges in the study of fairness in machine learning.
[BCZ+16] demonstrate how machine learning can reproduce biases in their analysis of the pop- ular word2vec embedding trained on a corpus of Google News texts (parallel eï¬ects were indepen- dently discovered by [CBN17]). The authors show that the trained embedding exhibit female/male gender stereotypes, learning that âdoctorâ is more similar to man than to woman, along with analogies such as âman is to computer programmer as woman is to homemakerâ. Even if such learned associations accurately reï¬ect patterns in the source text corpus, their use in automated systems may exacerbate existing biases. For instance, it might result in male applicants being ranked more highly than equally qualiï¬ed female applicants in queries related to jobs that the embedding identiï¬es as male-associated.
Similar risks arise whenever there is potential for feedback loops. These are situations where the trained machine learning model informs decisions that then aï¬ect the data collected for future iterations of the training process. [LI16] demonstrate how feedback loops might arise in predictive policing if arrest data were used to train the model2. In a nutshell, since police are likely to make more arrests in more heavily policed areas, using arrest data to predict crime hotspots will disproportionately concentrate policing eï¬orts on already over-policed communities. Expanding on this analysis, [EFN+18] ï¬nd that incorporating community-driven data such as crime reporting
2Predictive policing models are generally proprietary, and so it is not clear whether arrest data is used to train the model in any deployed system.
6
helps to attenuate the biasing feedback eï¬ects. The authors also propose a strategy for accounting for feedback by adjusting arrest counts for policing intensity. The success of the mitigation strategy of course depends on how well the simple theoretical model reï¬ects the true relationships between crime intensity, policing, and arrests. Problematically, such relationships are often unknown, and are very diï¬cult to infer from data. This situation is by no means speciï¬c to predictive policing.
Correcting for data bias generally seems to require knowledge of how the measurement process is biased, or judgments about properties the data would satisfy in an âunbiasedâ world. [FSV16] formalize this as a disconnect between the observed spaceâfeatures that are observed in the data, such as SAT scoresâand the unobservable construct spaceâfeatures that form the desired basis for decision making, such as intelligence. Within this framework, data correction eï¬orts attempt to undo the eï¬ects of biasing mechanisms that drive discrepancies between these spaces. To the extent that the biasing mechanism cannot be inferred empirically, any correction eï¬ort must make explicit its underlying assumptions about this mechanism. What precisely is being assumed about the construct space? When can the mapping between the construct space and the observed space be learned and inverted? What form of fairness does the correction promote, and at what cost? The costs are often immediately realized, whereas the beneï¬ts are less tangible. We will directly observe reductions in prediction accuracy, but any gains hinge on a belief that the observed world is not one we should seek to replicate accurately in the ï¬rst place. This is an area where tools from causality may oï¬er a principled approach for drawing valid inference with respect to unobserved counterfactually âfairâ worlds.
# 3.4 Fair Representations
Fair representation learning is a data de-biasing process that produces transformations (interme- diate representations) of the original data that retain as much of the task-relevant information as possible while removing information about sensitive or protected attributes. This is one approach to transforming biased observational data in which group membership may be inferred from other features, to a construct space where protected attributes are statistically independent of other fea- tures. First introduced in the work of [ZWS+13], fair representation learning produces a de-biased data set that may in principle be used by other parties without any risk of disparate outcomes. [FFM+15] and [MOW17] formalize this idea by showing how the disparate impact of a decision rule is bounded in terms of its balanced error rate as a predictor of the sensitive attribute.
Several recent papers have introduced new approaches for constructing fair representations. [FFM+15] propose rank-preserving procedures for repairing features to reduce or remove pairwise dependence with the protected attribute. [JL17] build upon this work, introducing a likelihood- based approach that can additionally handle continuous protected attributes, discrete features, and which promotes joint independence between the transformed features and the protected attributes. There is also a growing literature on using adversarial learning to achieve group fairness in the form of statistical parity or false positive/false negative rate balance [ES15, BCZC17, ZLM18, MCPZ18]. Existing theory shows that the fairness promoting beneï¬ts of fair representation learning rely critically on the extent to which existing associations between the transformed features and the pro- tected characteristics are removed. Adversarial downstream users may be able to recover protected attribute information if their models are more powerful than those used initially to obfuscate the data. This presents a challenge both to the generators of fair representations as well as to auditors and regulators tasked with certifying that the resulting data is fair for use. More work is needed to understand the implications of fair representation learning for promoting fairness in the real world.
7
# 3.5 Beyond Classiï¬cation
Although the majority of the work on fairness in machine learning focuses on batch classiï¬cation, batch classiï¬cation is only one aspect of how machine learning is used. Much of machine learning â e.g. online learning, bandit learning, and reinforcement learning â focuses on dynamic settings in which the actions of the algorithm feed back into the data it observes. These dynamic settings capture many problems for which fairness is a concern. For example, lending, criminal recidivism prediction, and sequential drug trials all are so-called bandit learning problems, in which the algo- rithm cannot observe data corresponding to counterfactuals. We cannot see whether someone not granted a loan would have paid it back. We cannot see whether an inmate not released on parole would have gone on to commit another crime. We cannot see how a patient would have responded to a diï¬erent drug.
The theory of learning in bandit settings is well understood, and it is characterized by a need to trade oï¬ exploration with exploitation. Rather than always making a myopically optimal deci- sion, when counter-factuals cannot be observed, it is necessary for algorithms to sometimes take actions that appear to be sub-optimal so as to gather more data. But in settings in which deci- sions correspond to individuals, this means sacriï¬cing the well-being of a particular person for the potential beneï¬t of future individuals. This can sometimes be unethical, and a source of unfairness [BBC+16]. Several recent papers explore this issue. For example, [BBK17] and [KMR+18] give conditions under which linear learners need not explore at all in bandit settings, thereby allowing for best-eï¬ort service to each arriving individual, obviating the tension between ethical treatment of individuals and learning. [RSVW18] show that the costs associated with exploration can be un- fairly bourn by a structured sub-population, and that counter-intuitively, those costs can actually increase when they are included with a majority population, even though more data increases the rate of learning overall. However, these results are all preliminary: they are restricted to settings in which the learner is learning a linear policy, and the data really is governed by a linear model. While illustrative, more work is needed to understand real-world learning in online settings, and the ethics of exploration.
There is also some work on fairness in machine learning in other settings â for example, rank- ing [YS17, CSV17], selection [KRW17, KR18], personalization [CV17], bandit learning [JKM+18, LRD+17], human-classiï¬er hybrid decision systems [MPZ17], and reinforcement learning [JJK+17, DTB17]. But outside of classiï¬cation, the literature is relatively sparse. This should be rectiï¬ed, because there are interesting and important fairness issues that arise in other settings â especially when there are combinatorial constraints on the set of individuals that can be selected for a task, or when there is a temporal aspect to learning.
# Acknowledgements
This material is based upon work supposed by the National Science Foundation under Grant No. 1136993. Any opinions, ï¬ndings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reï¬ect the views of the National Science Foundation. We are indebted to all of the participants of the CCC visioning workshop, held March 18-19 2018 in Philadelphia. The workshop discussion shaped every aspect of this document. We are grateful to Helen Wright and Ann Drobnis, who are instrumental in making the workshop happen. Finally, we are thankful to Cynthia Dwork, Sampath Kannan, Michael Kearns, Toni Pitassi, and Suresh Venkatasubramanian who provided valuable feedback on this report.
8
# References
[ABD+18] Alekh Agarwal, Alina Beygelzimer, Miroslav Dud´ık, John Langford, and Hanna Wal- lach. A reductions approach to fair classiï¬cation. In Proceedings of the 35th Interna- tional Conference on Machine Learning, ICML, volume 80 of JMLR Workshop and Conference Proceedings, pages 2569â2577. JMLR.org, 2018.
[BBC+16]
Sarah Bird, Solon Barocas, Kate Crawford, Fernando Diaz, and Hanna Wallach. Ex- ploring or exploiting? social and ethical implications of autonomous experimentation in ai. 2016.
Hamsa Bastani, Mohsen Bayati, and Khashayar Khosravi. Exploiting the natural exploration in contextual bandits. arXiv preprint arXiv:1704.09011, 2017.
[BCZ+16]
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in Neural Information Processing Systems, pages 4349â4357, 2016.
[BCZC17] Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H Chi. Data decisions and theoret- arXiv preprint ical implications when adversarially learning fair representations. arXiv:1707.00075, 2017.
Gary S Becker. The economics of discrimination. University of Chicago press, 2010.
Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classiï¬cation. In Conference on Fairness, Accountability and Transparency, pages 77â91, 2018.
[BHJ+18]
Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, and Aaron Roth. Fair- ness in criminal justice risk assessments: The state of the art. Sociological Methods & Research, 0(0):0049124118782533, 2018.
[BKN+17] Amanda Bower, Sarah N Kitchen, Laura Niss, Martin J Strauss, Alexander Vargas, and Suresh Venkatasubramanian. Fair pipelines. arXiv preprint arXiv:1707.00391, 2017.
Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. Semantics derived automat- ically from language corpora contain human-like biases. Science, 356(6334):183â186, 2017.
[CDPF+17] Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Algo- rithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 797â806. ACM, 2017.
Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data, 5(2):153â163, 2017.
9
Irene Chen, Fredrik D Johansson, and David Sontag. Why is my classiï¬er discrimi- natory? 2018.
Stephen Coate and Glenn C Loury. Will aï¬rmative-action policies eliminate negative stereotypes? The American Economic Review, pages 1220â1240, 1993.
L Elisa Celis, Damian Straszak, and Nisheeth K Vishnoi. Ranking with fairness constraints. arXiv preprint arXiv:1704.06840, 2017.
Toon Calders and Sicco Verwer. Three naive bayes approaches for discrimination-free classiï¬cation. Data Mining and Knowledge Discovery, 21(2):277â292, 2010.
[CV17] L Elisa Celis and Nisheeth K Vishnoi. arXiv:1707.02260, 2017. Fair personalization. arXiv preprint
[DHP+12] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical com- puter science conference, pages 214â226. ACM, 2012.
Cynthia Dwork and Christina Ilvento. Fairness under composition. Manuscript, 2018.
[DMNS06] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265â284. Springer, 2006.
DRY] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3-4):211-407, 2014.
Shayan Doroudi, Philip S. Thomas, and Emma Brunskill. Importance sampling for fair policy selection. In Proceedings of the Thirty-Third Conference on Uncertainty in Artiï¬cial Intelligence, UAI. AUAI Press, 2017.
[EFN+18] Danielle Ensign, Sorelle A. Friedler, Scott Neville, Carlos Scheidegger, and Suresh Venkatasubramanian. Runaway feedback loops in predictive policing. In 1st Con- ference on Fairness, Accountability and Transparency in Computer Science (FAT*), 2018.
Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.
[FFM+15] Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In KDD, 2015.
Sorelle A Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. On the (im) possibility of fairness. arXiv preprint arXiv:1609.07236, 2016.
Dean P Foster and Rakesh V Vohra. An economic argument for aï¬rmative action. Rationality and Society, 4(2):176â188, 1992.
Stephen Gillen, Christopher Jung, Michael Kearns, and Aaron Roth. Online learning In Advances in Neural Information Processing with an unknown fairness metric. Systems, 2018.
10
Lily Hu and Yiling Chen. A short-term intervention for long-term fairness in the labor market. In Pierre-Antoine Champin, Fabien L. Gandon, Mounia Lalmas, and Panagiotis G. Ipeirotis, editors, Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW, pages 1389â1398. ACM, 2018.
[HJKRR18] Ursula H´ebert-Johnson, Michael P Kim, Omer Reingold, and Guy N Rothblum. Cal- ibration for the (computationally-identiï¬able) masses. In Proceedings of the 35th In- ternational Conference on Machine Learning, ICML, volume 80 of JMLR Workshop and Conference Proceedings, pages 2569â2577. JMLR.org, 2018.
Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Advances in neural information processing systems, pages 3315â3323, 2016.
[JJK+17]
Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, and Aaron Roth. Fairness in reinforcement learning. In International Conference on Machine Learning, pages 1617â1626, 2017.
[JKM+18] Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. Fair algorithms for inï¬nite and contextual bandits. In AAAI/ACM Conference on AI, Ethics, and Society, 2018.
[JKMR16] Matthew Joseph, Michael Kearns, Jamie H Morgenstern, and Aaron Roth. Fairness in learning: Classic and contextual bandits. In Advances in Neural Information Pro- cessing Systems, pages 325â333, 2016.
James E Johndrow and Kristian Lum. An algorithm for removing sensitive in- formation: application to race-independent recidivism prediction. arXiv preprint arXiv:1703.04957, 2017.
Toshihiro Kamishima, Shotaro Akaho, and Jun Sakuma. Fairness-aware learning through regularization approach. In Data Mining Workshops (ICDMW), 2011 IEEE 11th International Conference on, pages 643â650. IEEE, 2011.
[KCP+17] Niki Kilbertus, Mateo Rojas Carulla, Giambattista Parascandolo, Moritz Hardt, Do- minik Janzing, and Bernhard Sch¨olkopf. Avoiding discrimination through causal rea- soning. In Advances in Neural Information Processing Systems, pages 656â666, 2017.
Michael P Kim, Amirata Ghorbani, and James Zou. Multiaccuracy: Black-box post- processing for fairness in classiï¬cation. arXiv preprint arXiv:1805.12317, 2018.
[KKM+17]
Sampath Kannan, Michael Kearns, Jamie Morgenstern, Mallesh Pai, Aaron Roth, Rakesh Vohra, and Zhiwei Steven Wu. Fairness incentives for myopic agents. In Proceedings of the 2017 ACM Conference on Economics and Computation, pages 369â 386. ACM, 2017.
[KLRS17] Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. In Advances in Neural Information Processing Systems, pages 4069â4079, 2017.
11
Jon M. Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-oï¬s in the fair determination of risk scores. In 8th Innovations in Theoretical Computer Science Conference, ITCS, 2017.
[KMR+18]
Sampath Kannan, Jamie Morgenstern, Aaron Roth, Bo Waggoner, and Zhiwei Steven Wu. A smoothed analysis of the greedy algorithm for the linear contextual bandit problem. In Advances in Neural Information Processing Systems, 2018.
[KNRW18a] Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. An empirical study of rich subgroup fairness for machine learning. arXiv preprint arXiv:1808.08166, 2018.
[KNRW18b] Michael J. Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In Jennifer G. Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, ICML, volume 80 of JMLR Workshop and Conference Proceedings, pages 2569â2577. JMLR.org, 2018.
Jon Kleinberg and Manish Raghavan. Selection problems in the presence of implicit bias. arXiv preprint arXiv:1801.03533, 2018.
Fairness through Michael P Kim, Omer Reingold, and Guy N Rothblum. computationally-bounded awareness. In Advances in Neural Information Processing Systems, 2018.
[KRW17] Michael Kearns, Aaron Roth, and Zhiwei Steven Wu. Meritocratic fairness for cross- population selection. In International Conference on Machine Learning, pages 1828â 1836, 2017.
Sampath Kannan, Aaron Roth, and Juba Ziani. Downstream eï¬ects of aï¬rmative action. arXiv preprint arXiv:1808.09004, 2018.
Lydia T Liu, Sarah Dean, Esther Rolf, Max Simchowitz, and Moritz Hardt. Delayed impact of fair machine learning. In Proceedings of the 35th International Conference on Machine Learning, ICML, 2018.
Kristian Lum and William Isaac. To predict and serve? Signiï¬cance, 13(5):14â19, 2016.
[LRD+17] Yang Liu, Goran Radanovic, Christos Dimitrakakis, Debmalya Mandal, and David C Parkes. Calibrated fairness in bandits. arXiv preprint arXiv:1707.01875, 2017.
[MCPZ18] David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adver- sarially fair and transferable representations. arXiv preprint arXiv:1802.06309, 2018.
Daniel McNamara, Cheng Soon Ong, and Robert C Williamson. Provably fair repre- sentations. arXiv preprint arXiv:1710.04394, 2017.
David Madras, Toniann Pitassi, and Richard S. Zemel. Predict responsibly: Increasing fairness by learning to defer. CoRR, abs/1711.06664, 2017.
12
Razieh Nabi and Ilya Shpitser. Fair inference on outcomes. In Proceedings of the... AAAI Conference on Artiï¬cial Intelligence. AAAI Conference on Artiï¬cial Intelli- gence, volume 2018, page 1931. NIH Public Access, 2018.
Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. Discrimination-aware data min- ing. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 560â568. ACM, 2008.
Jonathan Rothwell. How the war on drugs damages black social mobility. The Brook- ings Institution, published Sept, 30, 2014.
[RSVW18] Manish Raghavan, Alexandrs Slivkins, Jennifer Wortman Vaughan, and Zhiwei Steven Wu. The unfair externalities of exploration. In Conference on Learning Theory, 2018.
Guy N Rothblum and Gal Yona. Probably approximately metric-fair learning. In Proceedings of the 35th International Conference on Machine Learning, ICML, vol- ume 80 of JMLR Workshop and Conference Proceedings, pages 2569â2577. JMLR.org, 2018.
Latanya Sweeney. Discrimination in online ad delivery. Queue, 11(3):10, 2013.
[WGOS17] Blake Woodworth, Suriya Gunasekar, Mesrob I Ohannessian, and Nathan Srebro. In Conference on Learning Theory, pages Learning non-discriminatory predictors. 1920â1953, 2017.
Ke Yang and Julia Stoyanovich. Measuring fairness in ranked outputs. In Proceedings of the 29th International Conference on Scientiï¬c and Statistical Database Manage- ment, page 22. ACM, 2017.
Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. 2018.
[ZVGG17] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, and Krishna P. Gummadi. Fairness beyond disparate treatment & disparate impact: Learning clas- siï¬cation without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web, WWW, pages 1171â1180. ACM, 2017.
[ZWS+13] Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In ICML, 2013.
13 | {
"id": "1703.04957"
} |
1810.08575 | Supervising strong learners by amplifying weak experts | Many real world learning tasks involve complex or hard-to-specify objectives,
and using an easier-to-specify proxy can lead to poor performance or misaligned
behavior. One solution is to have humans provide a training signal by
demonstrating or judging performance, but this approach fails if the task is
too complicated for a human to directly evaluate. We propose Iterated
Amplification, an alternative training strategy which progressively builds up a
training signal for difficult problems by combining solutions to easier
subproblems. Iterated Amplification is closely related to Expert Iteration
(Anthony et al., 2017; Silver et al., 2017), except that it uses no external
reward function. We present results in algorithmic environments, showing that
Iterated Amplification can efficiently learn complex behaviors. | http://arxiv.org/pdf/1810.08575 | Paul Christiano, Buck Shlegeris, Dario Amodei | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20181019 | 20181019 | 8 1 0 2
t c O 9 1 ] G L . s c [
1 v 5 7 5 8 0 . 0 1 8 1 : v i X r a
# Supervising strong learners by amplifying weak experts
Paul Christiano OpenAI paul@openai.com
Buck Shlegeris â bshlegeris@gmail.com
Dario Amodei OpenAI damodei@openai.com
# Abstract
Many real world learning tasks involve complex or hard-to-specify objectives, and using an easier-to-specify proxy can lead to poor performance or misaligned be- havior. One solution is to have humans provide a training signal by demonstrating or judging performance, but this approach fails if the task is too complicated for a human to directly evaluate. We propose Iterated Ampliï¬cation, an alternative train- ing strategy which progressively builds up a training signal for difï¬cult problems by combining solutions to easier subproblems. Iterated Ampliï¬cation is closely related to Expert Iteration (Anthony et al., 2017; Silver et al., 2017b), except that it uses no external reward function. We present results in algorithmic environments, showing that Iterated Ampliï¬cation can efï¬ciently learn complex behaviors.
# 1 Introduction
If we want to train an ML system to perform a task, we need to be able to evaluate how well it is doing. Whether our training signal takes the form of labels, rewards, or something else entirely, we need some way to generate that signal.
If our goal can be evaluated automatically, such as winning a game of Go, or if we have an algorithm that can generate examples of correct behavior, then generating a training signal is trivial. In these cases we might say that there is an âalgorithmicâ training signal.
Unfortunately, most useful tasks donât have an algorithmic training signal. So in current applications of machine learning, humans often provide the training signal. This can be done by having a human demonstrate the task, for example labeling an image or teleoperating a robot, or by learning a reward function from human judgments. For these classes of tasks, we could say there is a âhumanâ training signal.
However, there are harder tasks for which we canât compute demonstrations or rewards even with human assistance, and for which we currently have no clear method to get a meaningful training signal. Consider making economic policy decisions, advancing the scientiï¬c frontier, or managing the security of a large network of computers. Some of these tasks are âbeyond human scaleâ â a single human canât perform them and canât make sense of their massive observation space well enough to judge the behavior of an agent. It may be possible for a human to judge performance in the very long run (for example, by looking at economic growth over several years), but such long-term feedback is very slow to learn from. We currently have no way to learn how to perform such tasks much better than a human.
The overall situation is depicted in Table 1, which shows six different combinations of training signal source and problem formulation (supervised learning or RL). The bulk of ML practice operates in the top center box (supervised learning from human labels), the bottom left box (RL with a scripted reward), and sometimes the top left box (supervised learning of algorithms). The bottom center box
âWork done while at OpenAI.
Table 1: Example problems which require different kinds of training signal.
Training signal: Algorithmic Human Beyond human Supervised learning Learning data structures Image classiï¬cation Long-term prediction Reinforcement learning Playing games Driving âwellâ Designing transit system
0000000008 fe Zea | 29212222839 3333334233 eeaecen. SSSS$557°55 bo66CES OE bLOL I7I7FHINIT7 SIVEBRE LIT eae
â . AM > et : ay.
â â = [ref o ofooolor] a] ooofova
O x x
= n) tel! al ou ' a I (RL from a human training signal) is beginning to be explored, and includes inverse reinforcement learning (Ng and Russell, 2000; Abbeel and Ng, 2004; Finn et al., 2016) and RL from human feedback (Knox and Stone, 2009; Pilarski et al., 2011; MacGlashan et al., 2017; Christiano et al., 2017). At present there seems to be no general method to handle problems in the bottom right or top right.
It seems desirable to expand the range of tasks for which we can get a training signal, for two reasons. First, it would enable ML systems to perform new tasks. SL and RL are very powerful methods when we can get a training signal, so making them applicable to tasks that humans canât directly judge or perform could have a big impact. Second, better speciï¬cation of complex goals and targets may be vital to building robustly beneï¬cial AI systems. In practice, when an accurate training signal would be âbeyond human scale,â we often instead ï¬nd a short-term proxy that is correlated with what we want. But aggressively optimizing that proxy can lead to pathological behav- ior (Lehman et al., 2018; Amodei and Clark, 2016; Amodei et al., 2016), an example of Goodhartâs Law.2 For example, we might ï¬nd that user-reported satisfaction (which we can easily measure) is a good proxy for long-term beneï¬t to society (which is very complicated), but if we maximize it with RL our agent may maintain fraudulent appearances or effectively manipulate users into providing high ratings. At large scales this kind of pathology could lead to systemic crashes, and a mismatch between proxies and our real preferences is a major source of concerns about the safety of future powerful AI systems (Bostrom, 2014).
In this paper we propose a general framework for building up a training signal on complex tasks by decomposing them (with AI assistance) into simpler tasks for which we have a human or algorithmic training signal. In our experiments we apply the framework with a number of simpliï¬cations (see Section 4.3) to relatively simple tasks, as a ï¬rst step towards addressing the problems described above.
# 1.1 Our method: Iterated Ampliï¬cation
We propose a new method, Iterated Ampliï¬cation, for a human expert H to train an ML agent X. Rather than having H demonstrate or evaluate the target behavior on their own, we allow them to invoke several copies of the current agent X to help them. We write AmplifyH(X) for the composite system, consisting of H and several copies of X working together to solve a problem. The agent X then learns from AmplifyH(X) in the same way that it would traditionally learn from H alone.
To instantiate this framework we make three design decisions:
⢠What set of tasks do we train X to solve? In order for X to be a useful assistant, we need to choose a sufï¬ciently broad set of tasks. In this article, we will focus on question-answering.
2âWhen a measure becomes a target, it ceases to be a good measureâ
2
⢠How do we construct AmplifyH(X)? In this article, we focus on delegation: AmplifyH(X) answers a question Q by having H identify a sequence of useful subquestions, using X to compute a subanswer to each subquestion, and having H decide how to answer Q after seeing the subanswers.
⢠How does X learn from AmplifyH(X)? In this article, we focus on supervised learning: X is an autoregressive model trained to predict AmplifyH(X)âs output. Future work could instead use imitation learning, or use AmplifyH(X) to deï¬ne a reward function that X maximizes with RL.
Initially X behaves randomly, so AmplifyH(X) is essentially equivalent to H and we are effectively learning from an expert. Over time the agent X becomes more powerful and the role of the expert transitions into âcoordinatingâ several copies of X to solve the problem better than a single copy could solve it. (Once X is very sophisticated, even tasks like âidentify a useful subquestionâ might be delegated.) As long as it is possible for multiple agents to collaboratively solve problems more effectively than a single agent (perhaps using human expertise to coordinate their efforts), then AmplifyH(X) can outperform X and hence provide a useful training signal. We discuss this assumption in Section 5.
The human must be involved in this process because there is no external objective to guide learningâ the objective is implicit in the way that the human coordinates the copies of X. For example, we have no external measure of what constitutes a âgoodâ answer to a question, this notion is only implicit in how a human decides to combine the answers to subquestions (which usually involves both facts and value judgments). Our goal is for X to learn the goal at the same time that it learns to behave competently. This is in contrast with the alternative approach of specifying a reward function and then training a capable agent to maximize that reward function.
# 1.2 Outline
In Section 2 we describe Iterated Ampliï¬cation and our implementation in more detail. In Section 3 we compare our approach to prior work. In Section 4 we describe our experimental results, showing that Iterated Ampliï¬cation can be stable and efï¬cient despite the non-stationary training signal and lack of external objective. In Section 5 we explain why we believe that decomposability is a realistic assumption for complex tasks in the real world.
# 2 Detailed instantiation of Iterated Ampliï¬cation
# 2.1 Predicting the human behavior
In order to reduce the burden on the human expert H, we train a âhuman predictorâ Hâ, and use this predictor to generate training data rather than consulting H directly. That is, we train Hâ to imitate the role of H when computing Amplify"(X), and we train X using Amplify!â (X) rather than using Amplify" (X) directly. Because Hâ is only learning how to identify subquestions and combine subanswers, rather than solving an entire task, we expect to train it with much less data. Note that Hâ needs to predict how H will respond to subanswers provided by X. Because X is changing, this distribution is non-stationary, and so we need to continuously update Hâ throughout the training process.
# 2.2 Training overview
We train an agent X to answer questions from some distribution D.
Our training process, depicted in Fig. 1, involves running four processes in parallel:
1. We repeatedly sample a question Q â¼ D, use AmplifyH(X) to answer that question, and record every decision made by H during the process. That is, H ï¬nds a subquestion Q1 that would help them answer Q, and we compute the answer A1 = X(Q1). We repeat this process k times, where k is a ï¬xed parameter, and then H computes an answer A. We store the transcript Ï = (Q, Q1, A1, . . . , Qk, Ak, A).
3
combines divides Q into sub-Qs collect dataset of tT =(Q, Qi, X(Q)), ..., A collect dataset of (Q, A) pairs Hâ divides Q into sub-Qs combines answers prefix t[:n] predict t[n] Q predict A answers answers subquestions Hâ xX trained to imitate H trained to predict A from x answers subquestions
Figure 1: Schematic of our Iterated Ampliï¬cation implementation.
2. We train a model Hâ to predict the decisions made by H in each of these transcripts, i.e. to pred: ict subquestions Q; and final answers A.
3. We repeatedly sample a question Q ~ D, use Amplify!â
(X) to answer that question, and record the resulting (Q, A) pairs.
4. X is trained by supervised learning on these (Q, A) pairs.
# 2.3 Dynamics of training
The behavior of the agent X develops over the course of training:
⢠Initially X answers questions randomly. When the human asks subquestions they frequently receive incoherent or useless subanswers,
⢠The human is able to answer some questions without any help from X, and eventually X learns to copy these simple answers.
⢠Once X is able to provide simple answers, the human is able to provide slightly better answers by breaking them into simple pieces. Then X learns to provide slightly better answers.
⢠This process continues, with X gradually expanding the set of queries it can answer and gradually improving the answers it provides. At each point in training, AmplifyH(X) is modestly smarter than X working on its own, and X chases this moving target.
If all goes well, at the end of this process weâre left with an agent that âapproximatesâ the behavior of an exponentially large team of copies of H. The hierarchical decomposition itself is discarded as an artifact of training, and the actual procedure learned by the agent will generally not mirror the structure used in training.
# 2.4 Questions with context
In practice, questions often take the form of a very large context (e.g. a hundred-page design document) and a relatively small context-conditional question (e.g. âwhat are the largest risks when implementing this design?â). In particular, this is the case in the experiments reported in Section 4.
Answering a question requires understanding the entire context, but it may be possible to decompose a question without looking at the entire context. This allows us to apply Iterated Ampliï¬cation to tasks where the context is too large for a human expert to observe directly. Instead, we can give H the ability to look at small parts of the context as needed. Hopefully, H can decompose a question into pieces that depend on smaller and smaller parts of the context, until arriving at questions that depend on only isolated facts from the context.
4
# Q
Large contexts also facilitate an important trick for accelerating training. In many settings, almost all of the work of answering a question is actually about understanding the context, and it is possible to ask many different questions about a single context.
We divide X into two phases, a context-encoding phase and a question-answering phase. During training, rather than sampling a single question we sample a context together with multiple questions about that context. We reuse the work of the context-encoding phase across all of these questions. Similarly, when computing AmplifyH(X), we reuse the context-encoding work between all of the subquestions that X needs to answer. In our experiments, this speeds up training by an order of magnitude.
# 2.5 Model architecture
We implement X as an encoder-decoder architecture with self-attention, closely following the Transformer architecture (Vaswani et al., 2017):
We represent the context as a set of facts, each of which is a sequence of tokens. ⢠We embed each token using a look-up table. We embed facts by concatenating the token
embeddings and applying a linear projection.
⢠We apply the Transformer encoder to the embedded facts. Our only change to the architecture from (Vaswani et al., 2017) is inserting batchnorm after each MLP.
⢠We embed questions in the same way we embed facts, then apply the Transformer decoder to a batch of questions (omitting the self-attention altogether because it would correspond to interactions amongst questions, which ought to be independent).
⢠An autoregressive MLP generates a sequence of symbols conditioned on the result of the Transformer decoder. It generates symbols either by outputting a set of logits or by choosing to copy a symbol from the context (as in pointer networks (Vinyals et al., 2015)).
The human-predictor Hâ is also a Transformer decoder augmented with the ability to copy symbols from previous steps. Hâ operates on sequences of questions and answersâlike H, it never observes the entire context.
Details of our model architecture are described in Appendix D.
# 3 Related Work
Expert Iteration: our method is very similar to Expert Iteration (ExIt) (Anthony et al., 2017) and AlphaZero (Silver et al., 2017b,a) and has recently achieved strong performance in the board games Hex, Go, Chess, and Shogi. ExIt is itself closely analogous to the Bellman update in Q learning, and all of these can be viewed as analogs of dynamic programming where neural networks replace lookup tables.
The key difference between our work and ExIt is the lack of an external objective. In ExIt, the expert is produced by a search algorithm that optimizes an external objective. Our contribution is to show that that a similar training process can be used even when the task deï¬nition is only implicit in the decomposition and recomposition strategy.
Inverse reinforcement learning: by observing human behavior and inferring the underlying reward function that they are optimizing,(Ng et al., 2000; Hadï¬eld-Menell et al., 2016) inverse reinforcement learning could also potentially learn reward functions for tasks that are too challenging for humans. Handling such tasks requires a sufï¬ciently accurate model of human cognition to predict what humans âwouldâ prefer if we relaxed their cognitive limitations; in addition to being extremely complex, such a model is not identiï¬able, because we never observe the ground truth about human preferences. Iterated Ampliï¬cation is an alternative strategy that does not require solving this challenging model speciï¬cation problem.
Debate: training AI systems to debate each other (Irving et al., 2018) is another possible approach to training question-answering systems where a human expert cannot evaluate answers directly. Both debate and Iterated Ampliï¬cation involve a recursive structure where AI systems help humans address relevant subquestions. The largest conceptual difference is that in Iterated Ampliï¬cation each
5
subquestion is answered by an independent copy of X trained by AmplifyH(X), while in a debate the subquestions are answered by one of the debaters (who are trained to defend a particular answer to the top-level question).
Algorithm learning: our problem differs from traditional work on learning algorithms (Graves et al., 2016; Kaiser and Sutskever, 2015; Neelakantan et al., 2015) because we donât assume that we have access to ground truth labels.
Recursive model architectures: our work differs from recursive model architectures (Cai et al., 2017; Nowak and Bruna, 2016), in that the learned model doesnât have a recursive structure. The recursive decomposition is used only to generate training data, and even then only a single step of decomposition is performed in each iteration.
So the trained agent might end up solving the task in a totally different way from the decomposition used by the human, and in particular it may learn heuristics that treat the problem holistically.
This ï¬exibility is important to the applicability of our method. It is often possible to divide a task into easier parts, but dividing interesting problems into pieces and solving them independently can be much less efï¬cient than considering the problem holistically.
# 4 Experiments
# 4.1 Tasks
We study Iterated Ampliï¬cation in a set of 5 toy algorithmic tasks. For each task, the agent X is given a large combinatorial context and asked questions about that context:
⢠Given a permutation Ï : {1, . . . , 64} â {1, . . . , 64}, compute Ïk(x) for k up to 64.
⢠Given a function f : {1, . . . , 8}2 â {1, . . . , 8} and a sequence of 64 assignments of the form x := 3 or x := f (y, z), evaluate a particular variable.
Given a function f : {0, 1}6 â {â1, 0, 1}, answer questions of the form âWhat is the sum
of f (x) over all x matching the wildcard expression 0 â â 1 â â?â
Given a directed graph with 64 vertices and 128 edges, ï¬nd the distance from s to t.
⢠Given a rooted forest on 64 vertices, ï¬nd the root of the tree containing a vertex x.
More detailed descriptions of the tasks are available in Appendix C. We train each task using a curriculum of smaller instances, which is unrelated to our use of Iterated Ampliï¬cation (supervised learning also needs a curriculum to learn these tasks in a reasonable amount of time even given ground truth labels).
Rather than having a human perform the decomposition, we provide a hard-coded algorithm H which decomposes each task (though we minimize the number of times we call this algorithm). Using these decompositions directly as a recursive algorithm is not efï¬cient for any of the tasks.
# 4.2 Results
In order to evaluate Iterated Ampliï¬cation, we compare it to supervised learning from the ground truth data. The results are presented in Fig. 2. Iterated Ampliï¬cation is able to solve these tasks effectively with at worst a modest slowdown, achieving our main goal.
The purpose of ampliï¬cation is to handle tasks where an expert can perform decomposition but canât solve the task directly. We donât expect ampliï¬cation to solve those tasks as quickly as supervised learning. Because we can learn these tasks almost as quickly as supervised learning from the ground truth, we have achieved our main goal.
In addition to requiring modestly more training steps, training with ampliï¬cation requires about twice as much computation per question, because we need to actually generate the training targets by
6
Permutation powering Sequential assignments Wildcard search Shortest path Union find 1.0 Accuracy â Supervised â Amplification o 1 2 3 4 o i 2 3 4 o i 2 3 4 oO i 2 3 4 o 1 2 3 Total questions 1e8. Total questions 1e8 Total questions 1e8 Total questions 1e8 Total questions 1e8
Figure 2: Comparison between Iterated Amplification (orange) and supervised learning from the ground truth data (blue). To better illustrate the dynamics of training, in[Appendix A]we compare the performance of X and Amplify" (X) separately for questions requiring different recursion depths.
running Amplify!â (Xx »F We trained in the same wall-clock time by generating data on a parallel worker.
Supervised learning required tens of millions of examples in order to learn these algorithms. This In contrast, would be a prohibitive cost if the examples were provided by an external expert. Iterated Ampliï¬cation required tens of thousands of examples in order to learn the much simpler decompositions (numbers for each task are in Table 2 in Appendix A). The sample complexity will be much larger for realistic tasks, where modeling H might require (for example) a high-quality language model. But the general phenomenon, that learning decompositions can be much cheaper than learning complex behaviors directly, is likely to hold for realistic tasks, since performing decomposition is strictly easier than solving the task without the aid of X.
# 4.3 Simpliï¬cations
These experiments make several important simpliï¬cations:
⢠In our experiments questions can be algorithmically decomposed into subquestions, and we replace the human with a hand-coded algorithm. These experiments donât shed any light on whether humans can decompose interesting real world tasks, nor on whether it would be feasible to learn messy real world decompositions.
⢠We work in combinatorial domains where it it is possible to construct an algorithmic training signal. This makes it possible for us to cheaply run experiments and easily evaluate our performance, but in the long run we care about tasks where it is not possible to construct an algorithmic or even human training signal.
⢠In our experiments X is trained by supervised learning from AmplifyH(X). In many important applications we suspect that we would learn a reward function from AmplifyH(X) and then train X to maximize that reward function.
⢠In order for Iterated Ampliï¬cation to succeed, the question distribution D needs to be broad enough to cover not only the questions we care about, but also all of the subquestions asked during the computation of AmplifyH(X). The distribution also determines how the model will allocate its capacity, and so must be carefully chosen. In our experiments, we started with a distribution D that could be used directly. In a more realistic setting, we might start with some distribution D0 of questions that have intrinsic interest, and it would be the system designerâs responsibility to construct D appropriately.
Removing these simpliï¬cations is a task for future work, which will ultimately test the hypothesis that Iterated Ampliï¬cation can be usefully applied to complex real-world tasks for which no other training strategy is available.
# 3Running Amplify!â
3Running Amplify!â (X) requires calling both X and Hâ between 3 and 10 times. We train on each (Q, A) pair about 10 times before removing it from the dataset. So the time required to generate a (Q, A) pair is comparable to the total time spent training on it, resulting in roughly twice the total computation per question.
7
# 5 Discussion of decomposition in realistic domains
Having successfully applied Iterated Ampliï¬cation to synthetic algorithmic problems, the natural question is whether it can actually be applied to complex real-world tasks that are âbeyond human scale.â We leave a convincing demonstration to future work, but we discuss here why we think this is likely.
The key assumption underlying Iterated Ampliï¬cation is that a human can coordinate multiple copies of X to perform better than a single copy of X.
As an example, consider the problem of evaluating a proposed design for a transit system. Rather than forcing a single copy of X to reach a snap judgment about a proposed design, we can have copies of X evaluate many different considerations (estimating costs, evaluating how well the system serves different populations, and so on). A human can then decide how to aggregate those different considerations (potentially with help from further copies of X). We ï¬esh out this example in more detail in Appendix B.
The problem of coordinating several copies of X to outperform a single copy of X is analogous to organizing a team of humans to outperform individual humans. Fortunately, there are several ways in which coordinating several copies of X is easier than coordinating a team of humans:
⢠We donât require that the collaboration be efï¬cient, it just needs to help at all. If ten agents working together perform â10% betterâ than a single agent on its own, then AmplifyH(X) still provides a useful training signal that we can use to improve X.
⢠The copies of X donât need to run in parallelâeach can start after the previous one has ï¬nished its task. Many tasks may be inherently difï¬cult to parallelize, which is an obstacle for human collaboration but is ï¬ne for AmplifyH(X).
⢠All of the copies of X are trained exclusively to solve the problem they are given. We donât need to manage incentives, politics, or conï¬icting preferences, which are common difï¬culties in human organizations.
Despite these difï¬culties, human organizations are often able to signiï¬cantly outperform individual humans in many domains, supporting our key assumption.
# 6 Conclusion
We have shown that Iterated Ampliï¬cation can successfully solve algorithmically complex tasks where there is no external reward function and the objective is implicit in a learned decomposition. This offers hope for applying ML in domains where we cannot compute a suitable objective, even with human helpâas long as humans are able to decompose a task into simpler pieces. If we can realize this hope, it will be an important step towards expanding the reach of ML and addressing concerns about the long-term impacts of AI by reducing our reliance simple but inaccurate proxies for complex implicit objectives.
# References
Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-ï¬rst international conference on Machine learning, page 1. ACM, 2004.
Dario Amodei and Jack Clark. Faulty reward functions in the wild, 2016.
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
Thomas Anthony, Zheng Tian, and David Barber. Thinking fast and slow with deep learning and tree search. In Advances in Neural Information Processing Systems, pages 5366â5376, 2017.
Nick Bostrom. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
Jonathon Cai, Richard Shin, and Dawn Song. Making neural programming architectures generalize via recursion. CoRR, abs/1704.06611, 2017. URL http://arxiv.org/abs/1704.06611.
8
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, pages 4302â4310, 2017.
Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In International Conference on Machine Learning, volume 48, 2016.
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska- Barwi´nska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471, 2016.
Dylan Hadï¬eld-Menell, Stuart J Russell, Pieter Abbeel, and Anca Dragan. Cooperative inverse reinforcement learning. In Advances in neural information processing systems, pages 3909â3917, 2016.
Geoffrey Irving, Paul Christiano, and Dario Amodei. AI safety via debate. arXiv preprint arXiv:1805.00899, 2018.
Åukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.
W Bradley Knox and Peter Stone. Interactively shaping agents via human reinforcement: The TAMER framework. In International Conference on Knowledge Capture, pages 9â16, 2009.
Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Julie Beaulieu, Peter J Bentley, Samuel Bernard, Guillaume Belson, David M Bryson, Nick Cheney, et al. The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artiï¬cial life research communities. arXiv preprint arXiv:1803.03453, 2018.
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, David Roberts, Matthew E Taylor, and Michael L Littman. Interactive learning from policy-dependent human feedback. arXiv preprint arXiv:1701.06049, 2017.
Arvind Neelakantan, Quoc V Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015.
Andrew Y Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In International Conference on Machine learning, pages 663â670, 2000.
Andrew Y Ng, Stuart J Russell, et al. Algorithms for inverse reinforcement learning. In Icml, pages 663â670, 2000.
Alex Nowak and Joan Bruna. Divide and conquer with neural networks. CoRR, abs/1611.02401, 2016. URL http://arxiv.org/abs/1611.02401.
Patrick M Pilarski, Michael R Dawson, Thomas Degris, Farbod Fahimi, Jason P Carey, and Richard Sutton. Online human training of a myoelectric prosthesis controller via actor-critic reinforcement learning. In International Conference on Rehabilitation Robotics, pages 1â7, 2011.
D. Silver, et al ..., and D. Hassabis. Mastering chess and shogi by self-play with a general reinforce- ment learning algorithm. arXiv, December 2017a.
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017b.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz In Advances in Neural Information Kaiser, and Illia Polosukhin. Attention is all you need. Processing Systems, pages 6000â6010, 2017.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692â2700, 2015.
9
Table 2: Total number of queries to the decomposition oracle H for each task. These experiments involve algorithmic tasks where these decompositions are straightforward; the sample complexity will likely be much larger for more realistic tasks. Expression evaluation 6,000
Permutation powering 7,000 Wildcard search 10,000 Task Union ï¬nd Shortest path Oracle calls 20,000 24,000
# A Training dynamics
âTable 2}shows how many queries were collected over the course of training for each of our tasks. [Figure 3]illustrates how X âchasesâ the moving target Amplifyâ (x ) over the course of training, separating out performance for separate recursion depths d. X always has lower accuracy than Amplify!â (X), because it is being trained to imitate Amplify!â (X) for tasks at depth d + 1 has slightly lower accuracy than X does for tasks at depth because Amplifyâ (X) answering a question correctly at depth d + 1 requires X to answer several questions correctly at depth d.
The sawtooth pattern and decreases in accuracy are an artifact of the curriculum. In contrast with Fig. 2, Fig. 3 shows the performance on the maximum difï¬culty of tasks yet encountered by the model. Each time the difï¬culty of the task is increased, the accuracy of the model drops.
Performance by depth 1.04 0.8 4 0.6 4 Accuracy 0.25 â Amplified 0.0 4 â Raw Model r r 1 r : 1 : , 0 2500 5000 7500 10000 12500 15000 17500 20000 Step
Figure 3: Performance at the permutation powering task, for instances requiring different recursion depths. The green curves are the accuracy of Amplify⢠(X); the red curves are the accuracy of X, The curves further to the right are higher depths, i.e. larger powers of the permutation. For example, the leftmost green curve is the performance of Amplifyâ (X) at squaring or cubing permutations, which quickly converges to 1 because this task can be accomplished without Xâs help. The rightmost red curve is the performance of X on powers between 32 and 63.
10
# it.
d,
# B Example decomposition
Consider the task of comparing two designs for a public transit system. We could train an AI to imitate human judgments, but human judgments may be very far from optimal. Or we could try to collect enough information about the long-term health of transit systems to train an AI to predict long-term outcomes, but we can only collect new datapoints with a ten-year delay. This is the kind of task which we might want to solve with Iterated Ampliï¬cation.
Many subquestions could help the human reach a conclusion about which design is better:
Compare the usefulness of the two designs. ⢠Compare the total cost of the two designs. ⢠Search for any risks or possible problems with each design so that they can be evaluated
more carefully.
Combining the answers to these subquestions requires making value judgments, for example about how we should quantify and compare beneï¬ts, what kinds of risks we are willing to accept, and so on. Our hope is for X to learn about these value judgments at the same time that it learns to make sophisticated decisions. Eventually, X can also help with the task of aggregating different considerations.
Each of these subquestion can itself be further divided, facilitating the use of Iterated Ampliï¬cation:
⢠Comapre the usefulness of the two designs.
â Identify and evaluate particular needs that each design may serve well or poorly.
â Identify particular routes or groups of users who may be served well or poorly. â For each, identify the most important considerations (predictability, reliability,
routes served, cost) and assess how well each design meets those.
# â Forecast capacity of the system and likely usage.
â Evaluate current transit usage, correct for measurement issues, extrapolate trends. â Evaluate capacity of the proposals across key routes and bottlenecks.
# â Estimate general performance characteristics.
â Estimate how often the system will be unavailable and how common delays will be. â Compare average speeds and waiting times. â Compare plausible last-mile transit costs associated with each proposal.
Compare the total cost of the two designs.
# â Estimate the non-ï¬nancial costs of the project
â Identify the most important effects on the landscape and space use within the city. â Estimate the costs of disruption associated with construction and maintenance. â Identify social consequences of the transit system. â Compare the likely construction costs of the two designs. â Identify comparable projects and estimate their costs. â Figure out how this project differs and how itâs cost is likely to differ.
â Compare the maintenance costs over time.
â Identify categories of maintenance cost and estimate each of them separately. â Search for comparable projects.
â Decide how to trade off immediate costs vs distant costs.
â Estimate interest rates on debt and the feasibility of borrowing to fund such a project.
â Estimate the rates of return on other uses of funds. â Evaluate our intrinsic time preferences.
· · ·
We emphasize that this need not be an efï¬cient decomposition in order to be suitable for Iterated Ampliï¬cationâanswers to the subquestion just need to help at all on the original task. As long as thatâs true, we can use a ï¬nal copy of X to answer the question in light of these subanswers. This
11
will outperform a copy of X who didnât get to see the subanswers, and can hence provide a useful training signal for X to improve.
If we had an external ground truth measure of quality then we wouldnât need a human to propose this kind of decomposition and we could instead allow an AI to search for whatever predictor worked best. However, if we donât have access to an external ground truth, we can use this kind of decomposition to deï¬ne the task.
# C Task descriptions
An overview of our ï¬ve tasks is given in Table 3. A task describes a context, a family of questions, a decomposition strategy, and a set of primitive questions. The primitive questions are the mechanism by which AmplifyH(X) is able to learn about the context. When H asks a primitive question, it immediately receives the correct answer rather than being answered by X.
# C.1 Representations
All tasks other than wildcard search involve a domain with size N . For each of these tasks, we introduce 8 special identiï¬ers {a, b, c, d, e, f, g, h}, and represent elements from the domain as a pair of identiï¬ers.
For sequential assignments, union ï¬nding, wildcard search, and shortest path, we represent each integer with a separate symbol. In the case of wildcard search this results in a very large vocabulary, most of which is used extremely rarely and simply ignored by the model. For permutation powering, we represent the exponent k in binary.
In each domain, we can unambiguously represent facts as a sequence of elements from the domain. We represent function values f (x) = y as the pair xy, x = f (y, z) as the triple xyz, edges (x, y) as the pair xy, and so on. Recall that elements from the domain are themselves represented as pairs, which we simply concatenate. For wildcard search, we simply omit zero values.
For union ï¬nd and shortest path, we preface each question with a unique symbol to disambiguate it.
We extend the vocabulary of each task with a special symbol â?â that is returned by H whenever the recursive calls produce inconsistent or inconclusive results.
# C.2 Curriculum
Each task has a size parameter N that ranges from 8 to 64. We begin training with the difï¬culty 8. Whenever X achieves an accuracy of at least 85% at predicting the (Q, A) pairs in its dataset,4 we increment the difï¬culty.
At each point in training, we sample the task size N to be equal to the difï¬culty with probability 0.5, and to otherwise be sampled from a power law between 8 and N .
Figure 2 shows the performance on a held out test set, which has 1/3 of its examples at size 64, 1/3 at size 8, and 1/3 distributed uniformly between the two. This distribution is chosen to give a useful representation of progress throughout the learning process. The graphs look essentially the same (and would reach similar maximum accuracies) if we evaluate performance on the most difï¬cult tasks encountered so far, except that it would no longer be meaningful to directly compare different training runs that are at different difï¬culties (and the curriculum introduces a sawtooth artifact as in Fig. 3).
# C.3 Detailed decompositions
The decompositions for union ï¬nd and shortest path are somewhat more complex than the others. We provide the full decomposition for shortest path here. The decomposition for union ï¬nd involves similar ideas.
What is the distance from x to y?
4We exclude answers of â?â, since these are easy to correctly predict, but donât indicate that the algorithm has mastered the task.
12
Table 3: The tasks on which we trained our algorithm. All tasks are paremeterized by a difï¬culty N that is increased from 8 to 64 over the course of training. The decompositions marked with * are signiï¬cant simpliï¬cations of the full decompositions, which are described in Appendix C.3.
Permutation powering
# Context
# Questions
A permutation Ï of N elements. What is Ïk(x)? (for 2 ⤠k < 64)
# Primitive questions
# Decomposition
What is o (a)? (a) = of (o*(a)) . + / ke ork+1(q) = o(o*(o*(x)))
# Sequential assignments
# Context
# Context
A function f : {1, 2, . . . , 8} à {1, 2, . . . , 8} â {1, 2, . . . 8} A sequence of N deï¬nitions of the form x := f (y, z) or x := 7
# Questions
# Questions
What is the value of x?
# Primitive questions
What is the deï¬nition of x? What is f (a, b)?
# Decomposition
# Decomposition
Look up deï¬nition of x. If x := f (y, z): evaluate y, evaluate z, and look up f (y, z).
# Union ï¬nd
Context Questions Primitive questions Decomposition* â
Decomposition If y is the result: what is the unique label in the component containing y? Wildcard search Context A function f : {0,1}° + {â1,0,1} with N non-zero values. Questions What is S> f (a) over x matching a wildcard expression (e.g. 0 * * 1 * 0)? Primitive questions What is f(a)? Fill in 0 for the first * in the wildcard. Decomposition Fill in 1 for the first * in the wildcard. Add the results Shortest path Context A directed graph with 2N edges and N vertices. . What is the distance from x to y? Questions Primitive questions Decomposition⢠What is the first vertex on the path from x to y? What is a random neighbor of x? Is there an edge from x to y? What is the first vertex on the path from x to y? If z is the result: what is the distance from z to y?
â Test if y is adjacent to x. If so, return 1.5 â z â What is the ï¬rst vertex on the path from x to y? â Test if z is adjacent to x. If not, return ?. â d â What is the distance from z to y? â Return d + 1.
What is the ï¬rst vertex on the path from x to y?
â z â What is the ï¬rst vertex on the path from x to y? â Choose one at random:
â w â What is the ï¬rst vertex on the path from x to y? â w â What is a random neighbor of x?
â Test whether each of z and w are vertices adjacent to x. â If neither of them is adjacent to x, return a random neighbor of x. â If exactly one of them is adjacent to x, return that one. â If both are adjacent to x, ask how far each of them is from z, and then return the one
that is closer.
# C.4 Task distributions
For the permutation powering, union ï¬nd, shortest path, and wildcard search tasks, the context is chosen uniformly at random from valid contexts. For sequential assignments, we sort the variables N variables to one of {1, . . . , 8} at random, and let each randomly, assign each of the ï¬rst subsequent variable be f (y, z) for a random pair y, z of preceding variables.
For sequential assignments, shortest path, and union ï¬nd, we choose questions at random. For wildcard search, we randomly select the number of wildcards from {1, 2, . . . , 6}, then we randomly generate a query with that many wildcards. For permutation powering, we randomly choose one bit from {2, . . . , 6} to be the leading bit of k. We set that bit to be 1 and set the other bits uniformly at random. We selected these distributions D to ensure that every subquestion of a question draw from D is also given reasonable probability under D.
# D Model details and hyperparameters
When computing Amplify⢠(X), we use a Polyak averaging over a time horizon of 1000 batches, rather than directly applying the current version of X. This is analogous to the use of a target network in Q-learning. Early experiments suggested Polyak averaging improved the stability of training, and it does not materially slow down learning.
Our model closely follows the Transformer architecture (Vaswani et al., 2017), optimized with Adam. We describe the model for completeness, along with our choice of hyperparameters.
All of our inputs are sets of 8 to 128 sentences, each of which is a sequence of 2 to 8 tokens from a vocabulary of size 10 to 30. For example, we represent a graph as a list of pairs of vertices, and we represent each vertex as a pair of tokens from a ï¬xed vocabulary of size 8.
We process each sequence by embedding each token in a space of dimension dembed = 100, concatenating the embeddings, and then applying a learned linear transformation into a space of dimension dmodel = 512. Note that there is no position encoding, because we process unordered sets of facts.
We then process a set of sentences by applying a sequence of N identical layers. Each layer implements the transformation x â z:
y â LayerNorm x + Attention x z â LayerNorm y + BatchNorm MLP y
where Attention is multi-headed attention with 8 heads of size dmodel/8, MLP is a two-layer perceptron with hidden unit size 4dmodel and ReLU activation, LayerNorm is deï¬ned as usual, and BatchNorm normalizes across both the batch and the different sentences.
5The distance from x to x is taken to be the length of the shortest cycle that contains x, rather than 0.
14
We embed contexts using N = 6 layers with self-attention. Once we have processed a context, we answer a batch of questions about that context by using N = 3 layers which attend over the context embedding. This is almost identical to the Transformer encoder/decoder, except that a Transformer decoder would also use self-attention (which is not appropriate here since different questions are unrelated to one another).
This architecture outputs a single dmodel dimensional vector for each answer. To decode this into a sequence of tokens, we train an autoregressive model that takes as input the answer vector and the sequence of tokens produced so far, and outputs the next token. The output is allowed to either specify a token directly or to specify an input token to copy, as in pointer networks (Vinyals et al., 2015). If the model chooses to copy an input symbol, it outputs an attention mask to select a sentence and a set of logits that are used to select which index it wants to copy from that sentence.
Where possible we directly copied architecture choices from (Vaswani et al., 2017) because our goal was to focus on the training process rather than architectural innovation. We added batchnorm because it signiï¬cantly improved performance during preliminary supervised experiments.
Each of our training runs involve between 100, 000 and 200, 000 batches. Each batch contains 50 contexts. The number of facts describing a context varies from task to task, and varies over the course of training as the task difï¬culty increases. The number of questions per context was the same as the number of facts. By the end of training, this quantity was either 64 or 128 depending on the task. The model was optimized with Adam, with learning rate 10â5, β2 = 0.98, and gradient clipping. These parameters were chosen based on early supervised experiments.
15 | {
"id": "1606.06565"
} |
1810.08272 | BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning | Allowing humans to interactively train artificial agents to understand
language instructions is desirable for both practical and scientific reasons,
but given the poor data efficiency of the current learning methods, this goal
may require substantial research efforts. Here, we introduce the BabyAI
research platform to support investigations towards including humans in the
loop for grounded language learning. The BabyAI platform comprises an
extensible suite of 19 levels of increasing difficulty. The levels gradually
lead the agent towards acquiring a combinatorially rich synthetic language
which is a proper subset of English. The platform also provides a heuristic
expert agent for the purpose of simulating a human teacher. We report baseline
results and estimate the amount of human involvement that would be required to
train a neural network-based agent on some of the BabyAI levels. We put forward
strong evidence that current deep learning methods are not yet sufficiently
sample efficient when it comes to learning a language with compositional
properties. | http://arxiv.org/pdf/1810.08272 | Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, Yoshua Bengio | cs.AI, cs.CL | Accepted at ICLR 2019 | null | cs.AI | 20181018 | 20191219 | 9 1 0 2 c e D 9 1
] I A . s c [
4 v 2 7 2 8 0 . 0 1 8 1 : v i X r a
Published as a conference paper at ICLR 2019
BABYAI: A PLATFORM TO STUDY THE SAMPLE EFFI- CIENCY OF GROUNDED LANGUAGE LEARNING
Maxime Chevalier-Boisvertâ Mila, Université de Montréal
Dzmitry Bahdanauâ Mila, Université de Montréal AdeptMind Scholar Element AI
# Salem Lahlou Mila, Université de Montréal
Lucas Willemsâ Ãcole Normale Supérieure, Paris
Chitwan Sahariaâ IIT Bombay
# Thien Huu Nguyenâ¡ University of Oregon
Yoshua Bengio Mila, Université de Montréal CIFAR Senior Fellow
# ABSTRACT
Allowing humans to interactively train artiï¬cial agents to understand language instructions is desirable for both practical and scientiï¬c reasons. Though, given the lack of sample efï¬ciency in current learning methods, reaching this goal may require substantial research efforts. We introduce the BabyAI research platform, with the goal of supporting investigations towards including humans in the loop for grounded language learning. The BabyAI platform comprises an extensible suite of 19 levels of increasing difï¬culty. Each level gradually leads the agent towards acquiring a combinatorially rich synthetic language, which is a proper subset of English. The platform also provides a hand-crafted bot agent, which simulates a human teacher. We report estimated amount of supervision required for training neural reinforcement and behavioral-cloning agents on some BabyAI levels. We put forward strong evidence that current deep learning methods are not yet sufï¬ciently sample-efï¬cient in the context of learning a language with compositional properties.
# INTRODUCTION
How can a human train an intelligent agent to understand natural language instructions? We be- lieve that this research question is important from both technological and scientiï¬c perspectives. No matter how advanced AI technology becomes, human users will likely want to customize their intelligent helpers to better understand their desires and needs. On the other hand, developmental psychology, cognitive science and linguistics study similar questions but applied to human chil- dren, and a synergy is possible between research in grounded language learning by computers and research in human language acquisition.
In this work, we present the BabyAI research platform, whose purpose is to facilitate research on grounded language learning. In our platform we substitute a simulated human expert for a real human; yet our aspiration is that BabyAI-based studies enable substantial progress towards putting an actual human in the loop. The current domain of BabyAI is a 2D gridworld in which synthetic natural-looking instructions (e.g. âput the red ball next to the box on your leftâ) require the agent to navigate the world (including unlocking doors) and move objects to speciï¬ed locations. BabyAI improves upon similar prior setups (Hermann et al., 2017; Chaplot et al., 2018; Yu et al., 2018) by supporting simulation of certain essential aspects of the future human in the loop agent training:
âEqual contribution. â Work done during an internship at Mila. â¡Work done during a post-doc at Mila.
1
Published as a conference paper at ICLR 2019
curriculum learning and interactive teaching. The usefulness of curriculum learning for training machine learning models has been demonstrated numerous times in the literature (Bengio et al., 2009; Kumar et al., 2010; Zaremba and Sutskever, 2015; Graves et al., 2016), and we believe that gradually increasing the difï¬culty of the task will likely be essential for achieving efï¬cient human- machine teaching, as in the case of human-human teaching. To facilitate curriculum learning studies, BabyAI currently features 19 levels in which the difï¬culty of the environment conï¬guration and the complexity of the instruction language are gradually increased. Interactive teaching, i.e. teaching differently based on what the learner can currently achieve, is another key capability of human teachers. Many advanced agent training methods, including DAGGER (Ross et al., 2011), TAMER (Warnell et al., 2017) and learning from human preferences (Wilson et al., 2012; Christiano et al., 2017), assume that interaction between the learner and the teacher is possible. To support interactive experiments, BabyAI provides a bot agent that can be used to generate new demonstrations on the ï¬y and advise the learner on how to continue acting.
Arguably, the main obstacle to language learning with a human in the loop is the amount of data (and thus human-machine interactions) that would be required. Deep learning methods that are used in the context of imitation learning or reinforcement learning paradigms have been shown to be very effective in both simulated language learning settings (Mei et al., 2016; Hermann et al., 2017) and applications (Sutskever et al., 2014; Bahdanau et al., 2015; Wu et al., 2016). These methods, however, require enormous amounts of data, either in terms of millions of reward function queries or hundreds of thousands of demonstrations. To show how our BabyAI platform can be used for sample efï¬ciency research, we perform several case studies. In particular, we estimate the number of demonstrations/episodes required to solve several levels with imitation and reinforcement learning baselines. As a ï¬rst step towards improving sample efï¬ciency, we additionally investigate to which extent pretraining and interactive imitation learning can improve sample efï¬ciency.
The concrete contributions of this paper are two-fold. First, we contribute the BabyAI research platform for learning to perform language instructions with a simulated human in the loop. The platform already contains 19 levels and can easily be extended. Second, we establish baseline results for all levels and report sample efï¬ciency results for a number of learning approaches. The platform and pretrained models are available online. We hope that BabyAI will spur further research towards improving sample efï¬ciency of grounded language learning, ultimately allowing human-in-the-loop training.
# 2 RELATED WORK
There are numerous 2D and 3D environments for studying synthetic language acquistion. (Hermann et al., 2017; Chaplot et al., 2018; Yu et al., 2018; Wu et al., 2018). Inspired by these efforts, BabyAI augments them by uniquely combining three desirable features. First, BabyAI supports world state manipulation, missing in the visually appealing 3D environments of Hermann et al. (2017), Chaplot et al. (2018) and Wu et al. (2018). In these environments, an agent can navigate around, but cannot alter its state by, for instance, moving objects. Secondly, BabyAI introduces partial observability (unlike the gridworld of Bahdanau et al. (2018)). Thirdly, BabyAI provides a systematic deï¬nition of the synthetic language. As opposed to using instruction templates, the Baby Language we introduce deï¬nes the semantics of all utterances generated by a context-free grammar (Section 3.2). This makes our language richer and more complete than prior work. Most importantly, BabyAI provides a simulated human expert, which can be used to investigate human-in-the-loop training, the aspiration of this paper.
Currently, most general-purpose simulation frameworks do not feature language, such as PycoLab (DeepMind, 2017), MazeBase (Sukhbaatar et al., 2015), Gazebo (Koenig and Howard, 2004), Viz- Doom (Kempka et al., 2016), DM-30 (Espeholt et al., 2018), and AI2-Thor (Kolve et al., 2017). Using a more realistic simulated environment such as a 3D rather than 2D world comes at a high computational cost. Therefore, BabyAI uses a gridworld rather than 3D environments. As we found that available gridworld platforms were insufï¬cient for deï¬ning a compositional language, we built a MiniGrid environment for BabyAI.
General-purpose RL testbeds such as the Arcade Learning Environment (Bellemare et al., 2013), DM-30 (Espeholt et al., 2018), and MazeBase (Sukhbaatar et al., 2015) do not assume a human- in-the-loop setting. In order to simulate this, we have to assume that all rewards (except intrinsic
2
Published as a conference paper at ICLR 2019
(a) GoToObj: "go to the blue ball"
(b) PutNextLocal: "put the blue key next to the green ball"
(c) BossLevel: "pick up the grey box behind you, then go to the grey key and open a door". Note that the green door near the bottom left needs to be unlocked with a green key, but this is not explicitly stated in the instruction.
Figure 1: Three BabyAI levels built using the MiniGrid environment. The red triangle represents the agent, and the light-grey shaded area represents its ï¬eld of view (partial observation).
rewards) would have to be given by a human, and are therefore rather expensive to get. Under this assumption, imitation learning methods such as behavioral cloning, Searn (Daumé Iii et al., 2009), DAGGER (Ross et al., 2011) or maximum-entropy RL (Ziebart et al., 2008) are more appealing, as more learning can be achieved per human-input unit.
Similar to BabyAI, studying sample efï¬ciency of deep learning methods was a goal of the bAbI tasks (Weston et al., 2016), which tested reasoning capabilities of a learning agent. Our work differs in both of the object of the study (grounded language with a simulated human in the loop) and in the method: instead of generating a ï¬xed-size dataset and measuring the performance, we measure how much data a general-purpose model would require to get close-to-perfect performance.
There has been much research on instruction following with natural language (Tellex et al., 2011; Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013; Mei et al., 2016; Williams et al., 2018) as well as several datasets including SAIL (Macmahon et al., 2006; Chen and Mooney, 2011) and Room-to-Room (Anderson et al., 2018). Instead of using natural language, BabyAI utilises a syn- thetic Baby language, in order to fully control the semantics of an instruction and easily generate as much data as needed.
Finally, Wang et al. (2016) presented a system that interactively learned language from a human. We note that their system relied on substantial amounts of prior knowledge about the task, most importantly a task-speciï¬c executable formal language.
# 3 BABYAI PLATFORM
The BabyAI platform that we present in this work comprises an efï¬ciently simulated gridworld environment (MiniGrid) and a number of instruction-following tasks that we call levels, all formu- lated using subsets of a synthetic language (Baby Language). The platform also includes a bot that can generate successful demonstrations for all BabyAI levels. All the code is available online at https://github.com/mila-iqia/babyai/tree/iclr19.
3
Published as a conference paper at ICLR 2019
# 3.1 MINIGRID ENVIRONMENT
Studies of sample efï¬ciency are very computationally expensive given that multiple runs are re- quired for different amounts of data. Hence, in our design of the environment, we have aimed for a minimalistic and efï¬cient environment which still poses a considerable challenge for current general-purpose agent learning methods. We have implemented MiniGrid, a partially observable 2D gridworld environment. The environment is populated with entities of different colors, such as the agent, balls, boxes, doors and keys (see Figure 1). Objects can be picked up, dropped and moved around by the agent. Doors can be unlocked with keys matching their color. At each step, the agent receives a 7x7 representation of its ï¬eld of view (the grid cells in front of it) as well as a Baby Language instruction (textual string).
The MiniGrid environment is fast and lightweight. Throughput of over 3000 frames per second is possible on a modern multi-core laptop, which makes experimentation quicker and more accessible. The environment is open source, available online, and supports integration with OpenAI Gym. For more details, see Appendix B.
# 3.2 BABY LANGUAGE
We have developed a synthetic Baby Language to give instructions to the agent as well as to au- tomatically verify their execution. Although Baby Language utterances are a comparatively small subset of English, they are combinatorially rich and unambigously understood by humans. The language is intentionally kept simple, but still exhibits interesting combinatorial properties, and contains 2.48 Ã 1019 possible instructions. In this language, the agent can be instructed to go to ob- jects, pick up objects, open doors, and put objects next to other objects. The language also expresses the conjunction of several such tasks, for example âput a red ball next to the green box after you open the door". The Backus-Naur Form (BNF) grammar for the language is presented in Figure 2 and some example instructions drawn from this language are shown in Figure 3. In order to keep the resulting instructions readable by humans, we have imposed some structural restrictions on this language: the and connector can only appear inside the then and after forms, and instructions can contain no more than one then or after word.
(Sent) [: (Sentl) | (Sentl)â, then (Sentl) | (Sentl) after you (Sent1) (Sentl) [e (Clause) | (Clause) and (Clause) (Clause) |: goto (Descr) | pickup (DescrNotDoor) | open (DescrDoor) put (DescrNotDoor) next to (Descr) (DescrDoor) (Article) (Color) door (LocSpec) (DescrBall) | (Article) (Color) ball (LocSpec) (DescrBox) [|F (Article) (Color) box (LocSpec) (DescrKey) [| (Article) (Color) key (LocSpec) (Descr) [| (DescrDoor) | (DescrBall) | (DescrBox) | (DescrKey) (DescrNotDoor) [= (DescrBall) | (DescrBox) | (DescrKey) (LocSpec) |: e | onyourleft | on your right | in front of you | behind you (Color) [Ee | red | green | blue | purple | yellow | grey (Article) the | a
Figure 2: BNF grammar productions for the Baby Language
The BabyAI platform includes a veriï¬er which checks if an agentâs sequence of actions successfully achieves the goal of a given instruction within an environment. Descriptors in the language refer to one or to multiple objects. For instance, if an agent is instructed to "go to a red door", it can successfully execute this instruction by going to any of the red doors in the environment. The then and after connectors can be used to sequence subgoals. The and form implies that both subgoals must be completed, without ordering constraints. Importantly, Baby Language instructions leave
4
|
Published as a conference paper at ICLR 2019
go to the red ball
open the door on your left
put a ball next to the blue door
open the yellow door and go to the key behind you
put a ball next to a purple door after you put a blue box next to a grey box and pick up the purple box
Figure 3: Example Baby Language instructions
details about the execution implicit. An agent may have to ï¬nd a key and unlock a door, or move obstacles out of the way to complete instructions, without this being stated explicitly.
3.3 BABYAI LEVELS
There is abundant evidence in prior literature which shows that a curriculum may greatly facilitate learning of complex tasks for neural architectures (Bengio et al., 2009; Kumar et al., 2010; Zaremba and Sutskever, 2015; Graves et al., 2016). To investigate how a curriculum improves sample efï¬- ciency, we created 19 levels which require understanding only a limited subset of Baby Language within environments of varying complexity. Formally, a level is a distribution of missions, where a mission combines an instruction within an initial environment state. We built levels by selecting competencies necessary for each level and implementing a generator to generate missions solvable by an agent possessing only these competencies. Each competency is informally deï¬ned by speci- fying what an agent should be able to do:
Room Navigation (ROOM): navigate a 6x6 room. ⢠Ignoring Distracting Boxes (DISTR-BOX): navigate the environment even when there
are multiple distracting grey box objects in it.
⢠Ignoring Distractors (DISTR): same as DISTR-BOX, but distractor objects can be boxes, keys or balls of any color.
⢠Maze Navigation (MAZE): navigate a 3x3 maze of 6x6 rooms, randomly inter-connected by doors.
⢠Unblocking the Way (UNBLOCK): navigate the environment even when it requires mov- ing objects out of the way.
⢠Unlocking Doors (UNLOCK): to be able to ï¬nd the key and unlock the door if the in- struction requires this explicitly.
⢠Guessing to Unlock Doors (IMP-UNLOCK): to solve levels that require unlocking a door, even if this is not explicitly stated in the instruction.
Go To Instructions (GOTO): understand âgo toâ instructions, e.g. âgo to the red ballâ. ⢠Open Instructions (OPEN): understand âopenâ instructions, e.g. âopen the door on your
leftâ.
Pickup Instructions (PICKUP): understand âpick upâ instructions, e.g. âpick up a boxâ. ⢠Put Instructions (PUT): understand âputâ instructions, e.g. âput a ball next to the blue
keyâ.
⢠Location Language (LOC): understand instructions where objects are referred to by rela- tive location as well as their shape and color, e.g. âgo to the red ball in front of youâ.
⢠Sequences of Commands (SEQ): understand composite instructions requiring an agent to execute a sequence of instruction clauses, e.g. âput red ball next to the green box after you open the doorâ.
Table 1 lists all current BabyAI levels together with the competencies required to solve them. These levels form a progression in terms of the competencies required to solve them, culminating with
5
Published as a conference paper at ICLR 2019
Table 1: BabyAI Levels and the required competencies
M O O R X O B R T S I D - R T S I D E Z A M K C O L B N U K C O L N U K C O L N U - P M O T O G N E P O P U K C I P T U P C O L Q E S I GoToObj GoToRedBallGrey GoToRedBall GoToLocal PutNextLocal PickupLoc GoToObjMaze GoTo Pickup UnblockPickup Open Unlock PutNext Synth SynthLoc GoToSeq SynthSeq GoToImpUnlock BossLevel x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x
the BossLevel, which requires mastering all competencies. The deï¬nitions of competencies are informal and should be understood in the minimalistic sense, i.e. to test the ROOM competency we have built the GoToObj level where the agent needs to reach the only object in an empty room. Note that the GoToObj level does not require the GOTO competency, as this level can be solved without any language understanding, since there is only a single object in the room. However, solving the GoToLocal level, which instructs the agent to go to a speciï¬c object in the presence of multiple distractors, requires understanding GOTO instructions.
3.4 THE BOT AGENT
The bot is a key ingredient intended to perform the role of a simulated human teacher. For any of the BabyAI levels, it can generate demonstrations or suggest actions for a given environment state. Whereas the BabyAI learner is meant to be generic and should scale to new and more complex tasks, the bot is engineered using knowledge of the tasks. This makes sense since the bot stands for the human in the loop, who is supposed to understand the environment, how to solve missions, and how to teach the baby learner. The bot has direct access to a tree representation of instructions, and so does not need to parse the Baby Language. Internally, it executes a stack machine in which instructions and subgoals are represented (more details can be found in Appendix C). The stack- based design allows the bot to interrupt what it is currently doing to achieve a new subgoal, and then resume the original task. For example, going to a given object will require exploring the environment to ï¬nd that object.
The subgoals which the bot implements are:
Open: Open a door that is in front of the agent. ⢠Close: Close a door that is in front of the agent. ⢠Pickup: Execute the pickup action (pick up an object). ⢠Drop: Execute the drop action (drop an object being carried). ⢠GoNextTo: Go next to an object matching a given (type, color) description or next to a cell
at a given position.
6
Published as a conference paper at ICLR 2019
⢠Explore: Uncover previously unseen parts of the environment.
All of the Baby Language instructions are decomposed into these internal subgoals which the bot knows how to solve. Many of these subgoals, during their execution, can also push new subgoals on the stack. A central part of the design of the bot is that it keeps track of the grid cells of the environment which it has and has not seen. This is crucial to ensure that the bot can only use information which it could realistically have access to by exploring the environment. Exploration is implemented as part of the Explore subgoal, which is recursive. For instance, exploring the environment may require opening doors, or moving objects that are in the way. Opening locked doors may in turn require ï¬nding a key, which may itself require exploration and moving obstructing objects. Another key component of the botâs design is a shortest path search routine. This is used to navigate to objects, to locate the closest door, or to navigate to the closest unexplored cell.
# 4 EXPERIMENTS
We assess the difï¬culty of BabyAI levels by training a behavioral cloning baseline for each level. Furthermore, we estimate how much data is required to solve some of the simpler levels and study to which extent the data demands can be reduced by using basic curriculum learning and interactive teaching methods. All the code that we use for the experiments, as well as containerized pretrained models, is available online.
4.1 SETUP
The BabyAI platform provides by default a 7x7x3 symbolic observation xt (a partial and local egocentric view of the state of the environment) and a variable length instruction c as inputs at each time step. We use a basic model consisting of standard components to predict the next action a based on x and c. In particular, we use a GRU (Cho et al., 2014) to encode the instruction and a convolutional network with two batch-normalized (Ioffe and Szegedy, 2015) FiLM (Perez et al., 2017) layers to jointly process the observation and the instruction. An LSTM (Hochreiter and Schmidhuber, 1997) memory is used to integrate representations produced by the FiLM module at each step. Our model is thus similar to the gated-attention model used by Chaplot et al. (2018), inasmuch as gated attention is equivalent to using FiLM without biases and only at the output layer.
We have used two versions of our model, to which we will refer as the Large model and the Small model. In the Large model, the memory LSTM has 2048 units and the instruction GRU is bidirec- tional and has 256 units. Furthermore, an attention mechanism (Bahdanau et al., 2015) is used to focus on the relevant states of the GRU. The Small model uses a smaller memory of 128 units and encodes the instruction with a unidirectional GRU and no attention mechanism.
In all our experiments, we used the Adam optimizer (Kingma and Ba, 2015) with the hyperparam- eters a = 10-4, 6, = 0.9, Bg = 0.999 and ⬠= 10~5. In our imitation learning (IL) experiments, we truncated the backpropagation through time at 20 steps for the Small model and at 80 steps for the Large model. For our reinforcement learning experiments, we used the Proximal Policy Op- timization (PPO, Schulman et al., 2017) algorithm with parallelized data collection. Namely, we performed 4 epochs of PPO using 64 rollouts of length 40 collected with multiple processes. We gave a non-zero reward to the agent only when it fully completed the mission, and the magnitude of the reward was 1 â 0.9n/nmaz, Where n is the length of the successful episode and nmaz is the maximum number of steps that we allowed for completing the episode, different for each mission. The future returns were discounted with a factor y = 0.99. For generalized advantage estimation (Schulman et al., 2015) in PPO we used A = 0.99.
In all our experiments we reported the success rate, deï¬ned as the ratio of missions of the level that the agent was able to accomplish within nmax steps.
Running the experiments outlined in this section required between 20 and 50 GPUs over two weeks. At least as much computing was required for preliminary investigations.
7
Published as a conference paper at ICLR 2019
Table 2: Baseline imitation learning results for all BabyAI levels. Each model was trained with 1M demonstrations from the respective level. For reference, we also list the mean and standard deviation of demonstration length for each level.
100 100 100 99.8 99.2 99.4 99.9 99.4 99 99 100 98.4 98.8 97.3 97.9 95.4 87.7 87.2 77 5.18±2.38 5.81±3.29 5.38±3.13 5.04±2.76 12.4±4.54 6.13±2.97 70.8±48.9 56.8±46.7 57.8±46.7 57.2±50 31.5±30.5 81.6±61.1 89.9±49.6 50.4±49.3 47.9±47.9 72.7±52.2 81.8±61.3 110±81.9 84.3±64.5
# Success Rate (%) Demo Length (Mean ± Std)
4.2 BASELINE RESULTS
To obtain baseline results for all BabyAI levels, we have trained the Large model (see Section 4.1) with imitation learning using one million demonstration episodes for each level. The demonstrations were generated using the bot described in Section 3.4. The models were trained for 40 epochs on levels with a single room and for 20 epochs on levels with a 3x3 maze of rooms. Table 2 reports the maximum success rate on a validation set of 512 episodes. All of the single-room levels are solved with a success rate of 100.0%. As a general rule, levels for which demonstrations are longer tend to be more difï¬cult to solve.
Using 1M demonstrations for levels as simple as GoToRedBall is very inefï¬cient and hardly ever compatible with the long-term goal of enabling human teaching. The BabyAI platform is meant to support studies of how neural agents can learn with less data. To bootstrap such studies, we have computed baseline sample efï¬ciencies for imitation learning and reinforcement learning approaches to solving BabyAI levels. We say an agent solves a level if it reaches a success rate of at least 99%. We deï¬ne the sample efï¬ciency as the minimum number of demonstrations or RL episodes required to train an agent to solve a given level. To estimate the thus deï¬ned sample efï¬ciency for imitation learning while staying within a reasonable computing budget, we adopt the following pro- cedure. For a given level, we ï¬rst run three experiments with 106 demonstrations. In the remaining M experiments we use k1 = 2l0, k2 = 2l0+d, . . . , kM = 2l0+(M â1)d demonstrations respectively. We use different values of l0, M for each level to ensure that we run experiments with not enough, just enough and more than enough demonstrations. Same value of d = 0.2 is used in all imitation learning experiments. For each experiment i, we measure the best smoothed online validation per- formance si that is achieved during the ï¬rst 2T training steps, where T = (T1 + T2 + T3)/3 is the average number of training steps required to solve the level in the three runs with 106 demon- strations. We then ï¬t a Gaussian Process (GP) model (Rasmussen and Williams, 2005) with noisy observations using (ki, si) as training data in order to interpolate between these data points. The GP posterior is fully tractable, which allows us to compute analytically the posterior distribution of the expected success rate, as well as the posterior over the minimum number of samples kmin that is sufï¬cient to solve the level. We report the 99% credible interval for kmin. We refer the reader to Appendix A for a more detailed explanation of this procedure.
We estimate sample efï¬ciency of imitation learning on 6 chosen levels. The results are shown in Table 3 (see âIL from Botâ column). In the same table (column âRLâ) we report the 99% conï¬dence
8
Published as a conference paper at ICLR 2019
Table 3: The sample efï¬ciency of imitation learning (IL) and reinforcement learning (RL) as the number of demonstrations (episodes) required to solve each level. All numbers are thousands. For the imitation learning results we report a 99% credible interval. For RL experiments we report the 99% conï¬dence interval. See Section 4 for details.
Level GoToRedBallGrey GoToRedBall GoToLocal PickupLoc PutNextLocal GoTo IL from Bot 8.431 - 12.43 49.67 - 62.01 148.5 - 193.2 204.3 - 241.2 244.6 - 322.7 341.1 - 408.5
Table 4: The sample efï¬ciency results for pretraining experiments. For each pair of base levels and target levels that we have tried, we report how many demonstrations (in thousands) were required, as well as the baseline number of demonstrations required for training from scratch. In both cases we report a 99% credible interval, see Section 4 for details. Note how choosing the right base levels (e.g. GoToLocal instead of GoToObjMaze) is crucial for pretraining to be helpful.
Base Levels GoToLocal GoToObjMaze GoToLocal-GoToObjMaze GoToLocal GoToLocal Target Level Without Pretraining With Pretraining GoTo GoTo GoTo PickupLoc PutNextLocal 341 - 409 341 - 409 341 - 409 204 - 241 245 - 323 183 - 216 444 - 602 173 - 216 71.2 - 88.9 188 - 231
interval for the number of episodes that were required to solve each of these levels with RL, and as expected, the sample efï¬ciency of RL is substantially worse than that of IL (anywhere between 2 to 10 times in these experiments).
To analyze how much the sample efï¬ciency of IL depends on the source of demonstrations, we try generating demonstrations from agents that were trained with RL in the previous experiments. The results for the 3 easiest levels are reported in the âIL from RL Expertâ column in Table 5. Interestingly, we found that the demonstrations produced by the RL agent are easier for the learner to imitate. The difference is most signiï¬cant for GoToRedBallGrey, where less than 2K and more than 8K RL and bot demonstrations respectively are required to solve the level. For GoToRedBall and GoToLocal, using RL demonstrations results in 1.5-2 times better sample efï¬ciency. This can be explained by the fact that the RL expert has the same neural network architecture as the learner.
4.3 CURRICULUM LEARNING
To demonstrate how curriculum learning research can be done using the BabyAI platform, we per- form a number of basic pretraining experiments. In particular, we select 5 combinations of base levels and a target level and study whether pretraining on base levels can help the agent master the target level with fewer demonstrations. The results are reported in Table 4. In four cases, using GoToLocal as one of the base levels reduces the number of demonstrations required to solve the target level. However, when only GoToObjMaze was used as the base level, we have not found pretraining to be beneï¬cial. We ï¬nd this counter-intuitive result interesting, as it shows how current deep learning methods often can not take the full advantage of available curriculums.
4.4 INTERACTIVE LEARNING
Lastly, we perform a simple case study of how sample efï¬ciency can be improved by interactively providing more informative examples to the agent based on what it has already learned. We exper- iment with an iterative algorithm for adaptively growing the agentâs training set. In particular, we start with 210 base demonstrations, and at each iteration we increase the dataset size by a factor of 21/4 by providing bot demonstrations for missions on which the agent failed. After each dataset increase we train a new agent from scratch. We perform such dataset increases until the dataset
9
Published as a conference paper at ICLR 2019
Table 5: The sample efï¬ciency of imitation learning (IL) from an RL-pretrained expert and inter- active imitation learning deï¬ned as the number of demonstrations required to solve each level. All numbers are in thousands. 99% credible intervals are reported in all experiments, see Section 4 for details.
Interactive IL from Bot 1.71 - 1.88 31.8 - 36 93 - 107
Level GoToRedBallGrey GoToRedBall GoToLocal IL from Bot 8.43 - 12.4 49.7 - 62 148 - 193 IL from RL Expert 1.53 - 2.11 36.6 - 44.5 74.2 - 81.8
reaches the ï¬nal size is clearly sufï¬cient to achieve 99% success rate. We repeat the experiment 3 times for levels GoToRedBallGrey, GoToRedBall and GoToLocal and then estimate how many interactively provided demonstrations would be required for the agent be 99% successful for each of these levels. To this end, we use the same GP posterior analysis as for regular imitation learning experiments.
The results for the interactive imitation learning protocol are reported in Table 5. For all 3 levels that we experimented with, we have observed substantial improvement over the vanilla IL, which is most signiï¬cant (4 times less demonstrations) for GoToRedBallGrey and smaller (1.5-2 times less demonstrations) for the other two levels.
# 5 CONCLUSION & FUTURE WORK
We present the BabyAI research platform to study language learning with a human in the loop. The platform includes 19 levels of increasing difï¬culty, based on a decomposition of tasks into a set of basic competencies. Solving the levels requires understanding the Baby Language, a subset of English with a formally deï¬ned grammar which exhibits compositional properties. The language is minimalistic and the levels seem simple, but empirically we have found them quite challenging to solve. The platform is open source and extensible, meaning new levels and language concepts can be integrated easily.
The results in Section 4 suggest that current imitation learning and reinforcement learning meth- ods scale and generalize poorly when it comes to learning tasks with a compositional structure. Hundreds of thousands of demonstrations are needed to learn tasks which seem trivial by human standards. Methods such as curriculum learning and interactive learning can provide measurable improvements in terms of sample efï¬ciency, but, in order for learning with an actual human in the loop to become realistic, an improvement of at least three orders of magnitude is required.
An obvious direction of future research to ï¬nd strategies to improve sample efï¬ciency of language learning. Tackling this challenge will likely require new models and new teaching methods. Ap- proaches that involve an explicit notion of modularity and subroutines, such as Neural Module Net- works (Andreas et al., 2016) or Neural Programmer-Interpreters (Reed and de Freitas, 2015), seem like a promising direction. It is our hope that the BabyAI platform can serve as a challenge and a benchmark for the sample efï¬ciency of language learning for years to come.
# ACKNOWLEDGEMENTS
We thank Tristan Deleu, Saizheng Zhang for useful discussions. We also thank Rachel Sam- son, Léonard Boussioux and David Yu-Tung Hui for their help in preparing the ï¬nal version of the paper. This research was enabled in part by support provided by Compute Canada (www.computecanada.ca), NSERC and Canada Research Chairs. We also thank Nvidia for donating NVIDIA DGX-1 used for this research.
# REFERENCES
Anderson, P., Wu, Q., Teney, D., Bruce, J., Johnson, M., Sünderhauf, N., Reid, I., Gould, S., and Hengel, A. v. d. (2018). Vision-and-Language Navigation: Interpreting visually-grounded naviga-
10
Published as a conference paper at ICLR 2019
tion instructions in real environments. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Andreas, J., Rohrbach, M., Darrell, T., and Klein, D. (2016). Neural Module Networks. In Proceed- ings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Artzi, Y. and Zettlemoyer, L. (2013). Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1:49â62.
Bahdanau, D., Cho, K., and Bengio, Y. (2015). Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 2015 International Conference on Learning Represen- tations.
Bahdanau, D., Hill, F., Leike, J., Hughes, E., Hosseini, A., Kohli, P., and Grefenstette, E. (2018). Learning to Understand Goal Speciï¬cations by Modelling Reward. In ICLR.
Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. (2013). The Arcade Learning Environ- ment: An Evaluation Platform for General Agents. Journal of Artiï¬cial Intelligence Research, 47:253â279. arXiv: 1207.4708.
Bengio, Y., Louradour, J., Collobert, R., and Weston, J. (2009). Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41â48.
Chaplot, D. S., Sathyendra, K. M., Pasumarthi, R. K., Rajagopal, D., and Salakhutdinov, R. (2018). Gated-Attention Architectures for Task-Oriented Language Grounding. In Proceedings of 32nd AAAI Conference on Artiï¬cial Intelligence.
Chen, D. L. and Mooney, R. J. (2011). Learning to Interpret Natural Language Navigation In- structions from Observations. In Proceedings of the Twenty-Fifth AAAI Conference on Artiï¬cial Intelligence, pages 859â865.
Cho, K., van Merrienboer, B., Gulcehre, C., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Transla- tion. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP).
Christiano, P., Leike, J., Brown, T. B., Martic, M., Legg, S., and Amodei, D. (2017). Deep reinforce- ment learning from human preferences. In Advances in Neural Information Processing Systems 30. arXiv: 1706.03741.
Daumé Iii, H., Langford, J., and Marcu, D. (2009). Search-based structured prediction. Machine learning, 75(3):297â325.
DeepMind (2017). PycoLab.
Espeholt, L., Soyer, H., Munos, R., Simonyan, K., Mnih, V., Ward, T., Doron, Y., Firoiu, V., Harley, T., Dunning, I., Legg, S., and Kavukcuoglu, K. (2018). IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures. In Proceedings of the 22nd international conference on Machine learning. arXiv: 1802.01561.
Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwi´nska, A., Col- menarejo, S. G., Grefenstette, E., Ramalho, T., Agapiou, J., Badia, A. P., Hermann, K. M., Zwols, Y., Ostrovski, G., Cain, A., King, H., Summerï¬eld, C., Blunsom, P., Kavukcuoglu, K., and Has- sabis, D. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471â476.
Hermann, K. M., Hill, F., Green, S., Wang, F., Faulkner, R., Soyer, H., Szepesvari, D., Czarnecki, W. M., Jaderberg, M., Teplyashin, D., Wainwright, M., Apps, C., Hassabis, D., and Blunsom, P. (2017). Grounded Language Learning in a Simulated 3d World. arXiv:1706.06551 [cs, stat].
Hochreiter, S. and Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9(8):1735â1780.
11
Published as a conference paper at ICLR 2019
Ioffe, S. and Szegedy, C. (2015). Batch Normalization: Accelerating Deep Network Training by Re- ducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 448â456.
Kempka, M., Wydmuch, M., Runc, G., Toczek, J., and Jaskowski, W. (2016). ViZDoom: A Doom- based AI research platform for visual reinforcement learning. In CIG, pages 1â8. IEEE.
Kingma, D. P. and Ba, J. (2015). Adam: A Method for Stochastic Optimization. In Proceedings of the 2015 International Conference on Learning Representations. arXiv: 1412.6980.
Koenig, N. and Howard, A. (2004). Design and Use Paradigms for Gazebo, An Open-Source Multi- Robot Simulator. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2149â2154, Sendai, Japan.
Kolve, E., Mottaghi, R., Gordon, D., Zhu, Y., Gupta, A., and Farhadi, A. (2017). AI2-THOR: An Interactive 3d Environment for Visual AI. CoRR, abs/1712.05474.
Kumar, M. P., Packer, B., and Koller, D. (2010). Self-Paced Learning for Latent Variable Models. In Advances in Neural Information Processing Systems 23, pages 1189â1197. Curran Associates, Inc.
Macmahon, M., Stankiewicz, B., and Kuipers, B. (2006). Walk the Talk: Connecting Language, Knowledge, Action in Route Instructions. In In Proc. of the Nat. Conf. on Artiï¬cial Intelligence (AAAI, pages 1475â1482.
Mei, H., Bansal, M., and Walter, M. R. (2016). Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences. In Proceedings of the 2016 AAAI Conference on Artiï¬cial Intelligence.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Pretten- hofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. (2011). Scikit-learn: machine learning in Python. Journal of Machine Learning Research, 12:2825â2830.
Perez, E., Strub, F., de Vries, H., Dumoulin, V., and Courville, A. (2017). FiLM: Visual Reasoning with a General Conditioning Layer. In In Proceedings of the 2017 AAAI Conference on Artiï¬cial Intelligence.
Rasmussen, C. E. and Williams, C. K. I. (2005). Gaussian Processes for Machine Learning (Adap- tive Computation and Machine Learning).
Reed, S. and de Freitas, N. (2015). Neural Programmer-Interpreters. In 2016 International Confer- ence on Learning Representations. arXiv: 1511.06279.
Ross, S., Gordon, G., and Bagnell, D. (2011). A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning. In PMLR, pages 627â635.
Schulman, J., Moritz, P., Levine, S., Jordan, M., and Abbeel, P. (2015). High-Dimensional Con- tinuous Control Using Generalized Advantage Estimation. In Advances in Neural Information Processing Systems 30.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. (2017). Proximal Policy Opti- mization Algorithms. arXiv:1707.06347 [cs]. arXiv: 1707.06347.
Sukhbaatar, S., Szlam, A., Synnaeve, G., Chintala, S., and Fergus, R. (2015). MazeBase: A Sandbox for Learning from Games. arXiv:1511.07401 [cs]. arXiv: 1511.07401.
Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to Sequence Learning with Neural Net- works. In Advances in Neural Information Processing Systems 27, pages 3104â3112.
Tellex, S., Kollar, T., Dickerson, S., Walter, M. R., Banerjee, A. G., Teller, S., and Roy, N. (2011). Understanding Natural Language Commands for Robotic Navigation and Mobile Manipulation. In Twenty-Fifth AAAI Conference on Artiï¬cial Intelligence.
12
Published as a conference paper at ICLR 2019
Wang, S. I., Liang, P., and Manning, C. D. (2016). Learning Language Games through Interaction. In Proceedings Of the 54th Annual Meeting of the Association for Computational Linguistics. arXiv: 1606.02447.
Warnell, G., Waytowich, N., Lawhern, V., and Stone, P. (2017). Deep TAMER: Interactive Agent Shaping in High-Dimensional State Spaces. In Proceedings of 32nd AAAI Conference on Artiï¬cial Intelligence. arXiv: 1709.10163.
Weston, J., Bordes, A., Chopra, S., Rush, A. M., van Merriënboer, B., Joulin, A., and Mikolov, T. (2016). Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks.
Williams, E. C., Gopalan, N., Rhee, M., and Tellex, S. (2018). Learning to Parse Natural Language to Grounded Reward Functions with Weak Supervision. In 2018 IEEE International Conference on Robotics and Automation, ICRA 2018, Brisbane, Australia, May 21-25, 2018, pages 1â7.
Wilson, A., Fern, A., and Tadepalli, P. (2012). A Bayesian Approach for Policy Learning from Trajectory Preference Queries. In Pereira, F., Burges, C. J. C., Bottou, L., and Weinberger, K. Q., editors, Advances in Neural Information Processing Systems 25, pages 1133â1141. Curran Asso- ciates, Inc.
Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., and others (2016). Googleâs Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv preprint arXiv:1609.08144.
Wu, Y., Wu, Y., Gkioxari, G., and Tian, Y. (2018). Building Generalizable Agents with a Realistic and Rich 3d Environment. arXiv:1801.02209 [cs]. arXiv: 1801.02209.
Yu, H., Zhang, H., and Xu, W. (2018). Interactive Grounded Language Acquisition and Generaliza- tion in 2d Environment. In ICLR.
Zaremba, W. and Sutskever, I. (2015). Learning to Execute. In 2015 International Conference on Learning Representations. arXiv: 1410.4615.
Ziebart, B. D., Maas, A., Bagnell, J. A., and Dey, A. K. (2008). Maximum Entropy Inverse Rein- forcement Learning. In Proc. AAAI, pages 1433â1438.
13
Published as a conference paper at ICLR 2019
# A SAMPLE EFFICIENCY ESTIMATION
A.1 REINFORCEMENT LEARNING
To estimate the number of episodes required for an RL agent to solve a BabyAI level, we monitored the agentâs smoothed online success rate. We recorded the number of training episodes after which the smoothed performance crossed the 99% success rate threshold. Each experiment was repeated 10 times and the 99% t-test conï¬dence interval is reported in Table 3.
IMITATION LEARNING
Estimating how many demonstrations is required for imitation learning to achieve a given perfor- mance level is challenging. In principle, one can sample a dense grid of dataset sizes, train the model until full convergence on each of the resulting datasets, and ï¬nd the smallest dataset size for which on average the modelâs best performance exceeds the target level. In practice, such a procedure would be prohibitively computationally expensive.
To make sample efï¬ciency estimation practical, we designed a relatively cheap semi-automatic approximate protocol. We minimize computational resources by using early-stopping and non- parametric interpolation between different data points.
Early Stopping Using Normal Time Understanding if a training run has converged and if the modelâs performance will not improve any further is non-trivial. To early-stop models in a consis- tent automatic way, we estimate the ânormalâ time T that training a model on a given level would take if an unlimited (in our case 10°) number of demonstrations was available. To this end, we train 3 models with 10° demonstrations. We evaluate the online success rate after every 100 or 400 (de- pending on the model size) batches, each time using 512 different episodes. The online success rate is smoothed using a sliding window of length 10. Let s(k, j,t) denote the smoothed online perfor- mance for the j-th run with k demonstrations at time t. Using this notation, we compute the normal time T as T = (T; + T> + T3)/3, where T; = min {t : s;(10°, j,t) > 99}. Once T is computed, it is used to early stop the remaining / runs that use different numbers of demonstrations k;. Namely the result s; of the i-th of these runs is computed as s; = max 8(kj,1,t). t<
Interpolation Using Gaussian rate measurements D = {(ki, si)}M i=1, k1 < k2 < . . . < kM , we estimate the minimum number of samples kmin that is required for the model to reach 99% average success rate. To this end, we a Gaussian Process (GP) model to interpolate between the available (ki, si) data points (Rasmussen and Williams, 2005). GP is a popular model for non-linear regression, whose main advantage is principled modelling of predictionsâ uncertainty.
Speciï¬cally, we model the dependency between the success rate s and the number of examples k as follows:
f ~GPrer(l), 3(k) = 99 + of f (log, k), e(k) ~ N(0, 1), s(k) = &(k) + o,â¬(k),
f ~GPrer(l), (1)
3(k) = 99 + of f (log, k), (2)
e(k) ~ N(0, 1), (3)
s(k) = &(k) + o,â¬(k), (4)
where RBF reflects the fact that we use the Radial Basis Function kernel, / is the kernelâs length- scale parameter, ¢(k) is white noise, o» and 0, add scaling to the GP f and the noise â¬. Note the distinction between the average and the observed performances 5(k) and s(k). Using the introduced notation, kmin can be formally defined as kmin = min 8(k) = 99. kelkisku)
kâ[k1;kM ]
To focus on the interpolation in the region of interest, we drop all (k;,s;) data points for which 8; < 95. We then fit the modelâs hyperparameters /, 7 and a by maximizing the likelihood of the remaining data points. To this end, we use the implementation from scikit-learn (Pedregosa et al., 2011). Once the model is fit, it defines a Gaussian posterior density p(8(k{),..., §(k4,,)|D) for any M' data points ki, k5,..., k4,. It also defines a probability distribution p(kmin|D). We are not
14
(1) (2) (3) (4)
Published as a conference paper at ICLR 2019
aware of an analytic expression for p(Kimin|D), and hence we compute a numerical approximation as follows. We sample a dense log-scale grid of Mâ points ki, k,...,k4,, in the range [k1; ky]. For each number of demonstrations k/, we approximate the probability p(ki_) < kmin < kj|D) that 5(k) crosses the 99% threshold somewhere between k/_, and ki as follows:
grid of Mâ points ki, k,...,k4,, in we approximate the probability p(ki_) < somewhere between k/_, and ki as follows: pl = p(3(K,) < 99... 3(L) < 99,5(R,)
between k/_, and ki < 99... 3(L)
(KL < inin < Ri|D) © pl = p(3(K,) < 99... 3(L) < 99,5(R,) > 99|D) 6)
Equation 5 is an approximation because the posterior s is not necessarily monotonic. In practice, we observed that the monotonic nature of the observed data D shapes the posterior accordingly. We use the probabilities pâ, to construct the following discrete approximation of the posterior p(k in|D):
P(Fmin|D) Sms 5(k ©)
where 5(kâ) are Dirac delta-functions. Such a discrete approximation is sufficient for the purpose of computing 99% credible intervals for k,,,;, that we report in the paper.
# B MINIGRID ENVIRONMENTS FOR OPENAI GYM
The environments used for this research are built on top of MiniGrid, which is an open source grid- world package. This package includes a family of reinforcement learning environments compatible with the OpenAI Gym framework. Many of these environments are parameterizable so that the difï¬culty of tasks can be adjusted (e.g. the size of rooms is often adjustable).
B.1 THE WORLD
In MiniGrid, the world is a grid of size NxN. Each tile in the grid contains exactly zero or one object, and the agent can only be on an empty tile or on a tile containing an open door. The possible object types are wall, door, key, ball, box and goal. Each object has an associated discrete color, which can be one of red, green, blue, purple, yellow and grey. By default, walls are always grey and goal squares are always green.
B.2 REWARD FUNCTION
Rewards are sparse for all MiniGrid environments. Each environment has an associated time step limit. The agent receives a positive reward if it succeeds in satisfying an environmentâs success criterion within the time step limit, otherwise zero. The formula for calculating positive sparse rewards is 1 â 0.9 â (step_count/max_steps). That is, rewards are always between zero and one, and the quicker the agent can successfully complete an episode, the closer to 1 the reward will be. The max_steps parameter is different for each mission, and varies depending on the size of the environment (larger environments having a higher time step limit) and the length of the instruction (more time steps are allowed for longer instructions).
# B.3 ACTION SPACE
There are seven actions in MiniGrid: turn left, turn right, move forward, pick up an object, drop an object, toggle and done. The agent can use the turn left and turn right action to rotate and face one of 4 possible directions (north, south, east, west). The move forward action makes the agent move from its current tile onto the tile in the direction it is currently facing, provided there is nothing on that tile, or that the tile contains an open door. The agent can open doors if they are right in front of it by using the toggle action.
# B.4 OBSERVATION SPACE
Observations in MiniGrid are partial and egocentric. By default, the agent sees a square of 7x7 tiles in the direction it is facing. These include the tile the agent is standing on. The agent cannot see through walls or closed doors. The observations are provided as a tensor of shape 7x7x3. However, note that these are not RGB images. Each tile is encoded using 3 integer values: one describing the
15
Published as a conference paper at ICLR 2019
# GoNextTo(blue
# ball)
(a) GoToObj: "go to the blue ball"
2 .
To(green ball) Pickup
# Top
Open tTo (door)
(b) PutNextLocal: "put the blue key next to the green ball"
(c) BossLevel: "pick up the grey box behind you, then go to the grey key and open a door".
Figure 4: Examples of initial stacks corresponding to three different instructions.
type of object contained in the cell, one describing its color, and a state indicating whether doors are open, closed or locked. This compact encoding was chosen for space efï¬ciency and to enable faster training. The fully observable RGB image view of the environments shown in this paper is provided for human viewing.
C BOT IMPLEMENTATION DETAILS
C.1 TRANSLATION OF INSTRUCTIONS INTO SUBGOALS
The bot has access to a representation of the instructions for each environment. These instructions are decomposed into subgoals that are added to a stack. In Figure 4 we show the stacks correspond- ing to the examples in Figure 1. The stacks are illustrated in bottom to top order, that is, the lowest subgoal in the illustration is to be executed ï¬rst.
C.2 PROCESSING OF SUBGOALS
Once instructions for a task are translated into the initial stack of subgoals, the bot starts by process- ing the ï¬rst subgoal. Each subgoal is processed independently, and can either lead to more subgoals being added to the stack, or to an action being taken. When an action is taken, the state of the bot in the environment changes, and its visibility mask is populated with all the new observed cells and objects, if any. The visibility mask is essential when looking for objects and paths towards cells, because it keeps track of what the bot has seen so far. Once a subgoal is marked as completed, it is removed from the stack, and the bot starts processing the next subgoal in the stack. Note that the same subgoal can remain on top of the stack for multiple time steps, and result in multiple actions being taken.
The Close, Drop and Pickup subgoals are trivial, that is, they result in the execution of the corresponding action and then immediately remove themselves from the stack. Diagrams depicting how the Open, GoNextTo and Explore subgoals are handled are depicted in Figures 5, 6, and 7 respectively. In the diagrams, we use the term "forward cell" to refer to the grid cell that the agent is facing. We say that a path from X to Y contains blockers if there are objects that need to be moved in order for the agent to be able to navigate from X to Y. A "clear path" is a path without blockers.
16
Published as a conference paper at ICLR 2019
OpenSubgoal Door in forward cell is locked Door in forward cell is unlocked Agent has the right key False
Add GoNextToSubgoal (key) and PickupSubgoal
Figure 5: Processing of the Open subgoal
17
Published as a conference paper at ICLR 2019
GoNextToSubgoal (target) Agent is on the Otherwise target position Agent is facing the target position for a path to the Move to the next subgoal target A path with No path found blockers is found rd cell firs ai Add an ExploreSubgoal
Figure 6: Processing of the GoNextTo subgoal
18
Published as a conference paper at ICLR 2019
ExploreSubgoal Look for the closest un en position Position pos found with a clear path leading to it. Agent not aware of any unseen position pos found with a path containing blockers Add a GoNext ToSubgoal (pos) Look for an unlocked door Unlocked door found at pos True Add GoNextToSubgoal (pos) and an OpenSubgoal.
Figure 7: Processing of the Explore subgoal
19 | {
"id": "1707.06347"
} |
1810.06682 | Trellis Networks for Sequence Modeling | We present trellis networks, a new architecture for sequence modeling. On the
one hand, a trellis network is a temporal convolutional network with special
structure, characterized by weight tying across depth and direct injection of
the input into deep layers. On the other hand, we show that truncated recurrent
networks are equivalent to trellis networks with special sparsity structure in
their weight matrices. Thus trellis networks with general weight matrices
generalize truncated recurrent networks. We leverage these connections to
design high-performing trellis networks that absorb structural and algorithmic
elements from both recurrent and convolutional models. Experiments demonstrate
that trellis networks outperform the current state of the art methods on a
variety of challenging benchmarks, including word-level language modeling and
character-level language modeling tasks, and stress tests designed to evaluate
long-term memory retention. The code is available at
https://github.com/locuslab/trellisnet . | http://arxiv.org/pdf/1810.06682 | Shaojie Bai, J. Zico Kolter, Vladlen Koltun | cs.LG, cs.AI, cs.CL, stat.ML | Published at ICLR 2019 | null | cs.LG | 20181015 | 20190311 | 9 1 0 2
r a M 1 1 ] G L . s c [
2 v 2 8 6 6 0 . 0 1 8 1 : v i X r a
Published as a conference paper at ICLR 2019
# TRELLIS NETWORKS FOR SEQUENCE MODELING
Shaojie Bai Carnegie Mellon University
J. Zico Kolter Carnegie Mellon University and Bosch Center for AI
Vladlen Koltun Intel Labs
# ABSTRACT
We present trellis networks, a new architecture for sequence modeling. On the one hand, a trellis network is a temporal convolutional network with special struc- ture, characterized by weight tying across depth and direct injection of the input into deep layers. On the other hand, we show that truncated recurrent networks are equivalent to trellis networks with special sparsity structure in their weight matrices. Thus trellis networks with general weight matrices generalize truncated recurrent networks. We leverage these connections to design high-performing trel- lis networks that absorb structural and algorithmic elements from both recurrent and convolutional models. Experiments demonstrate that trellis networks outper- form the current state of the art methods on a variety of challenging benchmarks, including word-level language modeling and character-level language modeling tasks, and stress tests designed to evaluate long-term memory retention. The code is available here1.
# INTRODUCTION
What is the best architecture for sequence modeling? Recent research has produced signiï¬cant progress on multiple fronts. Recurrent networks, such as LSTMs, continue to be optimized and extended (Merity et al., 2018b; Melis et al., 2018; Yang et al., 2018; Trinh et al., 2018). Temporal convolutional networks have demonstrated impressive performance, particularly in modeling long- range context (van den Oord et al., 2016; Dauphin et al., 2017; Bai et al., 2018). And architectures based on self-attention are gaining ground (Vaswani et al., 2017; Santoro et al., 2018).
In this paper, we introduce a new architecture for sequence modeling, the Trellis Network. We aim to both improve empirical performance on sequence modeling benchmarks and shed light on the relationship between two existing model families: recurrent and convolutional networks.
On the one hand, a trellis network is a special temporal convolutional network, distinguished by two unusual characteristics. First, the weights are tied across layers. That is, weights are shared not only by all time steps but also by all network layers, tying them into a regular trellis pattern. Second, the input is injected into all network layers. That is, the input at a given time-step is provided not only to the ï¬rst layer, but directly to all layers in the network. So far, this may seem merely as a peculiar convolutional network for processing sequences, and not one that would be expected to perform particularly well.
Yet on the other hand, we show that trellis networks generalize truncated recurrent networks (recur- rent networks with bounded memory horizon). The precise derivation of this connection is one of the key contributions of our work. It allows trellis networks to serve as bridge between recurrent and convolutional architectures, beneï¬tting from algorithmic and architectural techniques developed in either context. We leverage these relationships to design high-performing trellis networks that ab- sorb ideas from both architectural families. Beyond immediate empirical gains, these connections may serve as a step towards uniï¬cation in sequence modeling.
We evaluate trellis networks on challenging benchmarks, including word-level language model- ing on the standard Penn Treebank (PTB) and the much larger WikiText-103 (WT103) datasets; character-level language modeling on Penn Treebank; and standard stress tests (e.g. sequential MNIST, permuted MNIST, etc.) designed to evaluate long-term memory retention. On word-level
# 1https://github.com/locuslab/trellisnet
1
Published as a conference paper at ICLR 2019
Penn Treebank, a trellis network outperforms by more than a unit of perplexity the recent architec- ture search work of Pham et al. (2018), as well as the recent results of Melis et al. (2018), which leveraged the Google Vizier service for exhaustive hyperparameter search. On character-level Penn Treebank, a trellis network outperforms the thorough optimization work of Merity et al. (2018a). On word-level WikiText-103, a trellis network outperforms by 7.6% in perplexity the contempora- neous self-attention-based Relational Memory Core (Santoro et al., 2018), and by 11.5% the work of Merity et al. (2018a). (Concurrently with our work, Dai et al. (2019) employ a transformer and achieve even better results on WikiText-103.) On stress tests, trellis networks outperform recent results achieved by recurrent networks and self-attention (Trinh et al., 2018). It is notable that the prior state of the art across these benchmarks was held by models with sometimes dramatic mutual differences.
# 2 BACKGROUND
Recurrent networks (Elman, 1990; Werbos, 1990; Graves, 2012), particularly with gated cells such as LSTMs (Hochreiter & Schmidhuber, 1997) and GRUs (Cho et al., 2014), are perhaps the most popular architecture for modeling temporal sequences. Recurrent architectures have been used to achieve breakthrough results in natural language processing and other domains (Sutskever et al., 2011; Graves, 2013; Sutskever et al., 2014; Bahdanau et al., 2015; Vinyals et al., 2015; Karpathy & Li, 2015). Convolutional networks have also been widely used for sequence processing (Waibel et al., 1989; Collobert et al., 2011). Recent work indicates that convolutional networks are effec- tive on a variety of sequence modeling tasks, particularly ones that demand long-range information propagation (van den Oord et al., 2016; Kalchbrenner et al., 2016; Dauphin et al., 2017; Gehring et al., 2017; Bai et al., 2018). A third notable approach to sequence processing that has recently gained ground is based on self-attention (Vaswani et al., 2017; Santoro et al., 2018; Chen et al., 2018). Our work is most closely related to the ï¬rst two approaches. In particular, we establish a strong connection between recurrent and convolutional networks and introduce a model that serves as a bridge between the two. A related recent theoretical investigation showed that under a certain stability condition, recurrent networks can be well-approximated by feed-forward models (Miller & Hardt, 2018).
There have been many combinations of convolutional and recurrent networks (Sainath et al., 2015). For example, convolutional LSTMs combine convolutional and recurrent units (Donahue et al., 2015; Venugopalan et al., 2015; Shi et al., 2015). Quasi-recurrent neural networks interleave con- volutional and recurrent layers (Bradbury et al., 2017). Techniques introduced for convolutional networks, such as dilation, have been applied to RNNs (Chang et al., 2017). Our work establishes a deeper connection, deriving a direct mapping across the two architectural families and providing a structural bridge that can incorporate techniques from both sides.
# 3 SEQUENCE MODELING AND TRELLIS NETWORKS
Sequence modeling. Given an input x1:T = x1, . . . , xT with sequence length T , a sequence model is any function G :
# X
# â Y
(1) y1:T = y1, . . . , yT = G(x1, . . . , xT ), where yt should only depend on x1:t and not on xt+1:T (i.e. no leakage of information from the future). This causality constraint is essential for autoregressive modeling.
In this section, we describe a new architecture for sequence modeling, referred to as a trellis net- work or TrellisNet. In particular, we provide an atomic view of TrellisNet, present its fundamental features, and highlight the relationship to convolutional networks. Section 4 will then elaborate on the relationship of trellis networks to convolutional and recurrent models. Rp Notation. We use x1:T = (x1, . . . , xT ) to denote a length-T input sequence, where vector xt â q to represent the hidden unit at is the input at time step t. Thus x1:T â time t in layer i of the network. We use Conv1D(x; W ) to denote a 1D convolution with a kernel W applied to input x = x1:T . A basic trellis network. At the most basic level, a feature vector z(i+1) t+1 i + 1 of TrellisNet is computed via three steps, illustrated in Figure 1a:
2
Published as a conference paper at ICLR 2019
(a) TrellisNet at an atomic level (b) TrellisNet on a sequence of units
Figure 1: The interlayer transformation of TrellisNet, at an atomic level (time steps t and t + 1, layers i and i + 1) and on a longer sequence (time steps 1 to 8, layers i and i + 1).
1. The hidden input comprises the hidden outputs z(i) , z(i) Rq from the previous layer i, as well t t+1 â as an injection of the input vectors xt, xt+1. At level 0, we initialize to z(0) t = 0.
2. A pre-activation output Ëz(i+1)
Râ is produced by a feed-forward linear transformation: x Lt41 0? =m [25] + ['], 2 A141
2. A pre-activation output 2th) ⬠Râ is produced by a feed-forward linear transformation:
t+1 â
5
(p+q) are weights, and r is the size of the pre-activation output Ëz(i+1) t+1 . where W1, W2 â Ã (Here and throughout the paper, all linear transformations can include additive biases. We omit these for clarity.) 3. The output z(i+1)
3. The output 2% is produced by a nonlinear activation function f : R"â x R4 â R® applied t+1 to the pre-activation output ay and the output 2â? from the previous layer. More formally, 41 s(iF1) zi = f (2h af ).
A full trellis network can be built by tiling this elementary procedure across time and depth. Given an input sequence x1:T , we apply the same production procedure across all time steps and all layers, using the same weights. The transformation is the same for all elements in the temporal dimen- sion and in the depth dimension. This is illustrated in Figure 1b. Note that since we inject the same input sequence at every layer of the TrellisNet, we can precompute the linear transformation Ëxt+1 = W x 2 xt+1 for all layers i. This identical linear combination of the input can then be added in each layer i to the appropriate linear combination of the hidden units, W z t+1, where W x
Now observe that in each level of the network, we are in effect performing a 1D convolution over the hidden units z(i) 1:T . The output of this convolution is then passed through the activation function f . q as the kernel weight matrix, the computation in layer i can be summarized Rr Formally, with W as follows (Figure 1b):
Ëz(i+1) 1:T = Conv1D z(i) 1:T ; W + Ëx1:T , z(i+1) 1:T = f 1:T , z(i) Ëz(i+1) 1:T 1 . (3)
â
The resulting network operates in feed-forward fashion, with deeper elements having progressively larger receptive ï¬elds. There are, however, important differences from typical (temporal) convo- lutional networks. Notably, the ï¬lter matrix is shared across all layers. That is, the weights are tied not only across time but also across depth. (Vogel & Pock (2017) have previously tied weights across depth in image processing.) Another difference is that the transformed input sequence Ëx1:T is directly injected into each hidden layer. These differences and their importance will be analyzed further in Section 4.
The activation function f in Equation (3) can be any nonlinearity that processes the pre-activation output Ëz(i+1) 1:T â1. We will later describe an activation function based on the LSTM cell. The rationale for its use will become clearer in light of the analysis presented in the next section.
3
@)
Published as a conference paper at ICLR 2019
# 4 TRELLISNET, TCN, AND RNN
In this section, we analyze the relationships between trellis networks, convolutional networks, and recurrent networks. In particular, we show that trellis networks can serve as a bridge between con- volutional and recurrent networks. On the one hand, TrellisNet is a special form of temporal con- volutional networks (TCN); this has already been clear in Section 3 and will be discussed further in Section 4.1. On the other hand, any truncated RNN can be represented as a TrellisNet with special structure in the interlayer transformations; this will be the subject of Section 4.2. These connections allow TrellisNet to harness architectural elements and regularization techniques from both TCNs and RNNs; this will be summarized in Section 4.3.
4.1 TRELLISNET AND TCN
We brieï¬y introduce TCNs here, and refer the readers to Bai et al. (2018) for a more thorough dis- cussion. Brieï¬y, a temporal convolutional network (TCN) is a ConvNet that uses one-dimensional convolutions over the sequence. The convolutions are causal, meaning that, at each layer, the trans- formation at time t can only depend on previous layer units at times t or earlier, not from later points in time. Such approaches were used going back to the late 1980s, under the name of âtime-delay neural networksâ (Waibel et al., 1989), and have received signiï¬cant interest in recent years due to their application in architectures such as WaveNet (van den Oord et al., 2016).
In essence, TrellisNet is a special kind of temporal convolutional network. TCNs have two distinc- tive characteristics: 1) causal convolution in each layer to satisfy the causality constraint and 2) deep stacking of layers to increase the effective history length (i.e. receptive ï¬eld). Trellis networks have both of these characteristics. The basic model presented in Section 3 can easily be elaborated with larger kernel sizes, dilated convolutions, and other architectural elements used in TCNs; some of these are reviewed further in Section 4.3.
However, TrellisNet is not a general TCN. As mentioned in Section 3, two important differences are: 1) the weights are tied across layers and 2) the linearly transformed input Ëx1:T is injected into each layer. Weight tying can be viewed as a form of regularization that can stabilize training, support generalization, and signiï¬cantly reduce the size of the model. Input injection mixes deep features with the original sequence. These structural characteristics will be further illuminated by the connection between trellis networks and recurrent networks, presented next.
4.2 TRELLISNET AND RNN
Recurrent networks appear fundamentally different from convolutional networks. Instead of operat- ing on all elements of a sequence in parallel in each layer, an RNN processes one input element at a time and unrolls in the time dimension. Given a non-linearity g (which could be a sigmoid or a more elaborate cell), we can summarize the transformations in an L-layer RNN at time-step ¢ as follows: mM WO
hh h(i) 1) + W (i) W (i) h(i) t = g hx h(i h(0) t = xt. L, for 1 i â t t â 1 ⤠⤠(4)
Despite the apparent differences, we will now show that any RNN unrolled to a ï¬nite length is equivalent to a TrellisNet with special sparsity structure in the kernel matrix W . We begin by formally deï¬ning the notion of a truncated (i.e. ï¬nite-horizon) RNN. Deï¬nition 1. Given an RNN Ï, a corresponding M-truncated RNN ÏM , applied to the sequence M +1:t (here x<0 = 0). x1:T , produces at time step t the output yt by applying Ï to the sequence xt
â
Theorem 1. Let ÏM be an M -truncated RNN with L layers and hidden unit dimensionality d. Then there exists an equivalent TrellisNet Ï with depth (M + L â 1) and layer width (i.e. number of channels in each hidden layer) Ld. Speciï¬cally, for any x1:T , ÏM (x1:T ) = ÏL(d 1)+1:Ld(x1:T ) (i.e. the TrellisNet outputs contain the RNN outputs).
Theorem 1 states that any M -truncated RNN can be represented as a TrellisNet. How se- vere of a restriction is M -truncation? Note that M -truncation is intimately related to truncated backpropagation-through-time (BPTT), used pervasively in training recurrent networks on long se- quences. While RNNs can in principle retain unlimited history, there is both empirical and theoreti- cal evidence that the memory horizon of RNNs is bounded (Bai et al., 2018; Khandelwal et al., 2018;
4
Published as a conference paper at ICLR 2019
Miller & Hardt, 2018). Furthermore, if desired, TrellisNets can recover exactly a common method of applying RNNs to long sequences â hidden state repackaging, i.e. copying the hidden state across subsequences. This is accomplished using an analogous form of hidden state repackaging, detailed in Appendix B.
Proof of Theorem{]] Let ne), ⬠R* be the hidden state at time ¢ and layer i of the truncated RNN p'-*'+1 (Le., the RNN begun at time ¢â and run until time t). Note that without truncation, history starts at time tâ = 1, so the hidden state © of p can be equivalently expressed as A, When ¢â > t, we define h;,,, = 0 (i.e. no history information if the clock starts in the future).
hx , W (i) By assumption, ÏM is an RNN deï¬ned by the following parameters: , where } W (i) d for all i = 2, . . . , L are the weight Rw hx â matrices at each layer (w is the dimension of pre-activation output). We now construct a TrellisNet Ï according to the exact deï¬nition in Section 3, with parameters
, where } . . . . . . . . . . . . W (L) hx
W1, W2, f { W (1) hx 0 W (2) hx ... 0
0 W (1) hh 0 W (2) 0 hh ... ... 0 0 0 0 ... . . . . . . . . . . . . W (L) hh 0 0 0 0 ... 0 0 ... 0 , W2 = , W1 = ... 0 ... 0 (5)
such that W1, W2 â only on the ï¬rst entry). RLw à (p+Ld). We deï¬ne a nonlinearity f by f (α, β) = g(α) (i.e. applying g
Let t TrellisNet Ï can be expressed in terms of hidden units at time t in truncated forms of Ï:
. T (3) 1 2 L Ld ay = [riya Ae jas ote We 41] ER", (6)
h(1) t,t â is the time-t hidden state at layer j of Ï and h(i)
@ t,t! where 2 ) ig the time-t hidden state at layer j of 7 and h tât' +1 p : is the time-t hidden state at layer 7 of
We prove Eq. (6 by induction on j. As a base case, consider 7 = 0; i.e. the input layer of r. Since hy. = 0 when tâ > t, we have that 2 = [0 0 ... OJâ. (Recall that in the input layer of TrellisNet we initialize 2°) = 0.) For the inductive step, suppose Eq. holds for layer j, and consider layer j + 1. By the feed-forward transformation of TrellisNet defined in Eq. (2) and the nonlinearity f we defined above, we have:
3(j41 7 | eta. Ue AY aw [i +We | 7) t-1 t 0 We oO 1. 0 m1) we 0 0 0 a fo 0 we 2. 0 He ey a) 0 We 00 | fey 0 0 0. WE YL ad 0 0. We oo | Le jae (8) 1),() 1 Whee + Wha = : (9) L),(L L),(L-1 L WR ina + Wen ja +1 op (G41 j) a(G+1 1 2 L âwe = Ke 2D) = 928 y= [ni hee ja aa 1 1a (10)
â â h(1) t,t â where in Eq. (10) we apply the RNN non-linearity g following Eq. (4). Therefore, by induction, we have shown that Eq. (6) holds for all j
â¥
If TrellisNet Ï has M +L Since ÏM is an L-layer M -truncated RNN, this (taking the last d channels of z(M +L â the output of ÏM at time t.
5
Published as a conference paper at ICLR 2019
(a) Representing RNN units as channel groups (b) Mixed group convolution
Figure 2: Representing a truncated 2-layer RNN ÏM as a trellis network Ï . (a) Each unit of Ï has three groups, which house the input, ï¬rst-layer hidden vector, and second-layer hidden vector of ÏM , respectively. (b) Each group in the hidden unit of Ï in level i + 1 at time step t + 1 is computed by a linear combination of appropriate groups of hidden units in level i at time steps t and t + 1. The linear transformations form a mixed group convolution that reproduces computation in ÏM . (Nonlinearities not shown for clarity.)
In other words, we have shown that ÏM is equivalent to a TrellisNet with sparse kernel matrices W1, W2. This completes the proof.
Note that the convolutions in the TrellisNet Ï constructed in Theorem 1 are sparse, as shown in Eq. (5). They are related to group convolutions (Krizhevsky et al., 2012), but have an unusual form because group k at time t is convolved with group k 1 at time t + 1. We refer to these as mixed group convolutions. Moreover, while Theorem 1 assumes that all layers of ÏM have the same dimensionality d for clarity, the proof easily generalizes to cases where each layer has different widths.
For didactic purposes, we recap and illustrate the construction in the case of a 2-layer RNN. The key challenge is that a na¨ıve unrolling of the RNN into a feed-forward network does not produce a convolutional network, since the linear transformation weights are not constant across a layer. The solution, illustrated in Figure 2a, is to organize each hidden unit into groups of channels, such that each TrellisNet unit represents 3 RNN units simultaneously (for xt, h(1) ). Each TrellisNet unit thus has (p + 2d) channels. The interlayer transformation can then be expressed as a mixed group convolution, illustrated in Figure 2b. This can be represented as a sparse convolution with the structure given in Eq. (5) (with L = 2). Applying the nonlinearity g on the pre-activation output, this exactly reproduces the transformations in the original 2-layer RNN.
The TrellisNet that emerges from this construction has special sparsity structure in the weight matrix. It stands to reason that a general TrellisNet with an unconstrained (dense) weight matrix W may have greater expressive power: it can model a broader class of transformations than the original RNN ÏM . Note that while the hidden channels of the TrellisNet Ï constructed in the proof of Theorem 1 are naturally arranged into groups that represent different layers of the RNN ÏM (Eq. (6)), an unconstrained dense weight matrix W no longer admits such an interpretation. A model deï¬ned by a dense weight matrix is fundamentally distinct from the RNN ÏM that served as our point of departure. We take advantage of this expressivity and use general weight matrices W , as presented in Section 3, in our experiments. Our ablation analysis will show that such generalized dense transformations are beneï¬cial, even when model capacity is controlled for.
The proof of Theorem 1 did not delve into the inner structure of the nonlinear transformation g in RNN (or f in the constructed TrellisNet). For a vanilla RNN, for instance, f is usually an elemen- twise sigmoid or tanh function. But the construction in Theorem 1 applies just as well to RNNs with structured cells, such as LSTMs and GRUs. We adopt LSTM cells for the TrellisNets in our experiments and provide a detailed treatment of this nonlinearity in Section 5.1 and Appendix A.
4.3 TRELLISNET AS A BRIDGE BETWEEN RECURRENT AND CONVOLUTIONAL MODELS
In Section 4.1 we concluded that TrellisNet is a special kind of TCN, characterized by weight tying and input injection. In Section 4.2 we established that TrellisNet is a generalization of truncated
6
Published as a conference paper at ICLR 2019
RNNs. These connections along with the construction in our proof of Theorem 1 allow TrellisNets to beneï¬t signiï¬cantly from techniques developed originally for RNNs, while also incorporating ar- chitectural and algorithmic motifs developed for convolutional networks. We summarize a number of techniques here. From recurrent networks, we can integrate 1) structured nonlinear activations (e.g. LSTM and GRU gates); 2) variational RNN dropout (Gal & Ghahramani, 2016); 3) recurrent DropConnect (Merity et al., 2018b); and 4) history compression and repackaging. From convolu- tional networks, we can adapt 1) larger kernels and dilated convolutions (Yu & Koltun, 2016); 2) auxiliary losses at intermediate layers (Lee et al., 2015; Xie & Tu, 2015); 3) weight normaliza- tion (Salimans & Kingma, 2016); and 4) parallel convolutional processing. Being able to directly incorporate techniques from both streams of research is one of the beneï¬ts of trellis networks. We leverage this in our experiments and provide a more comprehensive treatment of these adaptations in Appendix B.
5 EXPERIMENTS
5.1 A TRELLISNET WITH GATED ACTIVATION
} coat 4 coum i : ap : Wy We i cra tet) [ee Ze41 Papa Pra) ivitl
In our description of generic trellis networks in Section 3, the activation function f can be any nonlinearity that computes z(i+1) In experiments, we use a 1:T â1. 1:T gated activation based on the LSTM cell. Gated activations have been used before in convolutional networks for sequence model- ing (van den Oord et al., 2016; Dauphin et al., 2017). Our choice is inspired directly by Theorem 1, which suggests incorporating an existing RNN cell into TrellisNet. We use the LSTM cell due to its effectiveness in recurrent networks (Jozefowicz et al., 2015; Greff et al., 2017; Melis et al., 2018). We summarize the construc- tion here; a more detailed treatment can be found in Appendix A.
ing
In an LSTM cell, three information-controlling gates are com- Figure 3: A gated activation puted at time t. Moreover, there is a cell state that does not par- based on the LSTM cell. ticipate in the hidden-to-hidden transformations but is updated in every step using the result from the gated activations. We integrate the LSTM cell into the TrellisNet as follows (Figure 3):
2 Baa Tt41 s > 5 5 T aay) =Wi| jw) +Wel ow | =[2et11 2412 2413 2414] qd) 24,2 41,2 1 5 é 5 . Ee = 04411) 020) + o(Zr41,2) 0 tanh(Z,41,3) (41) 5 anh(2(t?) (12; Gated activation f) 24,2 = o(2t41,4) 0 tanh(z/4 1)
Thus the linear transformation in each layer of the TrellisNet produces a pre-activation feature Ëzt+1 with r = 4q feature channels, which are then processed by elementwise transformations and Hadamard products to yield the ï¬nal output z(i+1) t+1 =
5.2 RESULTS
We evaluate trellis networks on word-level and character-level language modeling on the standard Penn Treebank (PTB) dataset (Marcus et al., 1993; Mikolov et al., 2010), large-scale word-level modeling on WikiText-103 (WT103) (Merity et al., 2017), and standard stress tests used to study long-range information propagation in sequence models: sequential MNIST, permuted MNIST (PMNIST), and sequential CIFAR-10 (Chang et al., 2017; Bai et al., 2018; Trinh et al., 2018). Note that these tasks are on very different scales, with unique properties that challenge sequence models in different ways. For example, word-level PTB is a small dataset that a typical model easily over- ï¬ts, so judicious regularization is essential. WT103 is a hundred times larger, with less danger of overï¬tting, but with a vocabulary size of 268K that makes training more challenging (and precludes the application of techniques such as mixture of softmaxes (Yang et al., 2018)). A more complete description of these tasks and their characteristics can be found in Appendix C.
7
Published as a conference paper at ICLR 2019
Table 1: Test perplexities (ppl) on word-level language modeling with the PTB corpus. â means lower is better.
Word-level Penn Treebank (PTB) Model Size Test perplexityâ Generic TCN (Bai et al.| 13M 88.68 Variational LSTM (Gal & Ghahramani,}2016 66M 7.4 Zoph & Le! 54M. 62.4 24M 58.8 24M 59.7 2018} 24M 58.3 22M 57.55 23M 56.10 24M 55.97 24M 55.80 Ours - TrellisNet 24M 56.97 Ours - TrellisNet (1.4x larger) 33M 56.80 Ours - TrellisNet-MoS 25M. 54.67 Ours - TrellisNet-MoS (1.4x larger) 34M 54.19
Table 2: Test perplexities (ppl) on word-level language modeling with the WT103 corpus.
Word-level WikiText-103 (WT 103) Model Size Test perplexityâ LSTM (Grave et al. - 48.7 LSTM+continuous cache (Grave et al.{/2017b - 40.8 Generic TCN (Bai et al.|/2018) 150M 45.2 Gated Linear ConvNet (Dauphin et al.| 230M 37.2 AWD-QRNN (Merity et al.{/2018a] 159M 33.0 Relational Memory Core (Santoro et al. 195M 31.6 Ours - TrellisNet 180M 29.19
The prior state of the art on these tasks was set by completely different models, such as AWD-LSTM on character-level PTB (Merity et al., 2018a), neural architecture search on word-level PTB (Pham et al., 2018), and the self-attention-based Relational Memory Core on WikiText-103 (Santoro et al., 2018). We use trellis networks on all tasks and outperform the respective state-of-the-art models on each. For example, on word-level Penn Treebank, TrellisNet outperforms by a good margin the recent results of Melis et al. (2018), which used the Google Vizier service for exhaustive hyper- parameter tuning, as well as the recent neural architecture search work of Pham et al. (2018). On WikiText-103, a trellis network outperforms by 7.6% the Relational Memory Core (Santoro et al., 2018) and by 11.5% the thorough optimization work of Merity et al. (2018a).
Many hyperparameters we use are adapted directly from prior work on recurrent networks. (As highlighted in Section 4.3, many techniques can be carried over directly from RNNs.) For others, we perform a basic grid search. We decay the learning rate by a ï¬xed factor once validation error plateaus. All hyperparameters are reported in Appendix D, along with an ablation study.
Word-level language modeling. For word-level language modeling, we use PTB and WT103. The results on PTB are listed in Table 1. TrellisNet sets a new state of the art on PTB, both with and without mixture of softmaxes (Yang et al., 2018), outperforming all previously published results by more than one unit of perplexity.
WT103 is 110 times larger than PTB, with vocabulary size 268K. We follow prior work and use the adaptive softmax (Grave et al., 2017a), which improves memory efï¬ciency by assigning higher capacity to more frequent words. The results are listed in Table 2. TrellisNet sets a new state of the art on this dataset as well, with perplexity 29.19: about 7.6% better than the contemporaneous
8
Published as a conference paper at ICLR 2019
Table 3: Test bits-per-character (bpc) on character-level language modeling with the PTB corpus.
Char-level PTB Model Size Test bpcâ Generic TCN 2018) 3.0M 1.31 Independently RNN (Liet al.]/2018) 12.0M 1.23 Hyper LSTM (Ha et al. 2017) 144M 1.219 NAS Cell Zoph & Le||2017) 163M = 1.214 Fast-Slow-LSTM-2 (Mujika et al.]/2017|_7.2M 1.19 Quasi-RNN (Merity et al.{/2018a) 138M 1.187 AWD-LSTM (M 2018a} 138M 1.175 Ours = N 134M 1.158
Table 4: Test accuracies on long-range modeling benchmarks. h means higher is better.
Model Seq. MNIST Test acc.h Permuted MNIST Test acc.h 99.0 99.0 99.0 98.4 98.9 99.20 94.6 96.0 97.2 95.2 97.9 98.13 - - - 72.2 62.2 73.42
Dilated GRU (Chang et al., 2017) IndRNN (Li et al., 2018) Generic TCN (Bai et al., 2018) r-LSTM w/ Aux. Loss (Trinh et al., 2018) Transformer (self-attention) (Trinh et al., 2018) Ours - TrellisNet
self-attention-based Relational Memory Core (RMC) (Santoro et al., 2018). TrellisNet achieves this better accuracy with much faster convergence: 25 epochs, versus 90 for RMC.
Character-level language modeling. When used for character-level modeling, PTB is a medium- scale dataset with stronger long-term dependencies between characters. We thus use a deeper net- work as well as techniques such as weight normalization (Salimans & Kingma, 2016) and deep supervision (Lee et al., 2015; Xie & Tu, 2015). The results are listed in Table 3. TrellisNet sets a new state of the art with 1.158 bpc, outperforming the recent results of Merity et al. (2018a) by a comfortable margin.
Long-range modeling with Sequential MNIST, PMNIST, and CIFAR-10. We also evaluate the TrellisNet for ability to model long-term dependencies. In the Sequential MNIST, PMNIST, and CIFAR-10 tasks, images are processed as long sequences, one pixel at a time (Chang et al., 2017; Bai et al., 2018; Trinh et al., 2018). Our model has 8M parameters, in alignment with prior work. To cover the larger context, we use dilated convolutions in intermediate layers, adopting a common architectural element from TCNs (Yu & Koltun, 2016; van den Oord et al., 2016; Bai et al., 2018). The results are listed in Table 4. Note that the performance of prior models is inconsistent. The Transformer works well on MNIST but fairs poorly on CIFAR-10, while r-LSTM with unsuper- vised auxiliary losses achieves good results on CIFAR-10 but underperforms on Permuted MNIST. TrellisNet outperforms all these models on all three tasks.
# 6 DISCUSSION
We presented trellis networks, a new architecture for sequence modeling. Trellis networks form a structural bridge between convolutional and recurrent models. This enables direct assimilation of many techniques designed for either of these two architectural families. We leverage these connec- tions to train high-performing trellis networks that set a new state of the art on highly competitive language modeling benchmarks. Beyond the empirical gains, we hope that trellis networks will serve as a step towards deeper and more uniï¬ed understanding of sequence modeling.
There are many exciting opportunities for future work. First, we have not conducted thorough performance optimizations on trellis networks. For example, architecture search on the structure of the gated activation f may yield a higher-performing activation function than the classic LSTM cell
9
Published as a conference paper at ICLR 2019
we used (Zoph & Le, 2017; Pham et al., 2018). Likewise, principled hyperparameter tuning will likely improve modeling accuracy beyond the levels we have observed (Melis et al., 2018). Future work can also explore acceleration schemes that speed up training and inference.
Another signiï¬cant opportunity is to establish connections between trellis networks and self- attention-based architectures (Transformers) (Vaswani et al., 2017; Santoro et al., 2018; Chen et al., 2018), thus unifying all three major contemporary approaches to sequence modeling. Finally, we look forward to seeing applications of trellis networks to industrial-scale challenges such as machine translation.
# REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR), 2015.
Shaojie Bai, J Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv:1803.01271, 2018.
James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. Quasi-recurrent neural net- works. In International Conference on Learning Representations (ICLR), 2017.
Shiyu Chang, Yang Zhang, Wei Han, Mo Yu, Xiaoxiao Guo, Wei Tan, Xiaodong Cui, Michael Witbrock, Mark Hasegawa-Johnson, and Thomas Huang. Dilated recurrent neural networks. In Neural Information Processing Systems (NIPS), 2017.
Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Niki Parmar, Noam Shazeer, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Mike Schuster, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. The best of both worlds: Com- bining recent advances in neural machine translation. In Annual Meeting of the Association for Computational Linguistics (ACL), 2018.
Kyunghyun Cho, Bart Van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv:1409.1259, 2014.
Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Re- search (JMLR), 12, 2011.
Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-XL: Language modeling with longer-term dependency. arXiv:1901.02860, 2019.
Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In International Conference on Machine Learning (ICML), 2017.
Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venu- gopalan, Trevor Darrell, and Kate Saenko. Long-term recurrent convolutional networks for visual recognition and description. In Computer Vision and Pattern Recognition (CVPR), 2015.
Jeffrey L Elman. Finding structure in time. Cognitive Science, 14(2), 1990.
Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Neural Information Processing Systems (NIPS), 2016.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. In International Conference on Machine Learning (ICML), 2017.
Edouard Grave, Armand Joulin, Moustapha Ciss´e, David Grangier, and Herv´e J´egou. Efï¬cient softmax approximation for GPUs. In International Conference on Machine Learning (ICML), 2017a.
Edouard Grave, Armand Joulin, and Nicolas Usunier. Improving neural language models with a continuous cache. In International Conference on Learning Representations (ICLR), 2017b.
10
Published as a conference paper at ICLR 2019
# Alex Graves. Supervised Sequence Labelling with Recurrent Neural Networks. Springer, 2012.
Alex Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2013.
Klaus Greff, Rupesh Kumar Srivastava, Jan Koutn´ık, Bas R. Steunebrink, and J¨urgen Schmidhuber. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, 28(10), 2017.
David Ha, Andrew Dai, and Quoc V Le. HyperNetworks. In International Conference on Learning Representations (ICLR), 2017.
Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural Computation, 9(8), 1997.
Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In International Conference on Machine Learning (ICML), 2015.
Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, A¨aron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv:1610.10099, 2016.
Andrej Karpathy and Fei-Fei Li. Deep visual-semantic alignments for generating image descrip- tions. In Computer Vision and Pattern Recognition (CVPR), 2015.
Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. Sharp nearby, fuzzy far away: How neural language models use context. In Annual Meeting of the Association for Computational Linguistics (ACL), 2018.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Tech- nical report, University of Toronto, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classiï¬cation with deep convo- lutional neural networks. In Neural Information Processing Systems (NIPS), 2012.
Yann LeCun, Bernhard Boser, John S. Denker, Donnie Henderson, Richard E. Howard, Wayne Hubbard, and Lawrence D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4), 1989.
Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply- supervised nets. In AISTATS, 2015.
Shuai Li, Wanqing Li, Chris Cook, Ce Zhu, and Yanbo Gao. Independently recurrent neural network In Computer Vision and Pattern Recognition (IndRNN): Building a longer and deeper RNN. (CVPR), 2018.
Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. arXiv:1806.09055, 2018.
Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn treebank. Computational Linguistics, 19(2), 1993.
G´abor Melis, Chris Dyer, and Phil Blunsom. On the state of the art of evaluation in neural language models. In International Conference on Learning Representations (ICLR), 2018.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In International Conference on Learning Representations (ICLR), 2017.
Stephen Merity, Nitish Shirish Keskar, and Richard Socher. An analysis of neural language modeling at multiple scales. arXiv:1803.08240, 2018a.
Stephen Merity, Nitish Shirish Keskar, and Richard Socher. Regularizing and optimizing LSTM language models. In International Conference on Learning Representations (ICLR), 2018b.
Tomas Mikolov, Martin Karaï¬Â´at, Luk´as Burget, Jan Cernock´y, and Sanjeev Khudanpur. Recurrent neural network based language model. In Interspeech, 2010.
11
Published as a conference paper at ICLR 2019
John Miller and Moritz Hardt. When recurrent models donât need to be recurrent. arXiv:1805.10369, 2018.
Yasumasa Miyamoto and Kyunghyun Cho. Gated word-character recurrent language model. arXiv:1606.01700, 2016.
Asier Mujika, Florian Meier, and Angelika Steger. Fast-slow recurrent neural networks. In Neural Information Processing Systems (NIPS), 2017.
Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. Efï¬cient neural architecture search via parameters sharing. In International Conference on Machine Learning (ICML), 2018.
Tara N. Sainath, Oriol Vinyals, Andrew W. Senior, and Hasim Sak. Convolutional, long short-term memory, fully connected deep neural networks. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015.
Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to ac- In Neural Information Processing Systems (NIPS), celerate training of deep neural networks. 2016.
Adam Santoro, Ryan Faulkner, David Raposo, Jack Rae, Mike Chrzanowski, Theophane Weber, Daan Wierstra, Oriol Vinyals, Razvan Pascanu, and Timothy Lillicrap. Relational recurrent neural networks. In Neural Information Processing Systems (NIPS), 2018.
Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Neural Information Processing Systems (NIPS), 2015.
Ilya Sutskever, James Martens, and Geoffrey E. Hinton. Generating text with recurrent neural net- works. In International Conference on Machine Learning (ICML), 2011.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Neural Information Processing Systems (NIPS), 2014.
Trieu H Trinh, Andrew M Dai, Thang Luong, and Quoc V Le. Learning longer-term dependencies in RNNs with auxiliary losses. In International Conference on Machine Learning (ICML), 2018.
A¨aron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv:1609.03499, 2016.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Neural Information Processing Systems (NIPS), 2017.
Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond J. Mooney, Trevor Darrell, and Kate Saenko. Sequence to sequence â video to text. In International Conference on Computer Vision (ICCV), 2015.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition (CVPR), 2015.
Christoph Vogel and Thomas Pock. A primal dual network for low-level vision problems. In German Conference on Pattern Recognition, 2017.
Alex Waibel, Toshiyuki Hanazawa, Geoffrey Hinton, Kiyohiro Shikano, and Kevin J Lang. Phoneme IEEE Transactions on Acoustics, Speech, and recognition using time-delay neural networks. Signal Processing, 37(3), 1989.
Paul J Werbos. Backpropagation through time: What it does and how to do it. Proceedings of the IEEE, 78(10), 1990.
Saining Xie and Zhuowen Tu. Holistically-nested edge detection. In International Conference on Computer Vision (ICCV), 2015.
12
Published as a conference paper at ICLR 2019
Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. Breaking the softmax bottleneck: A high-rank RNN language model. International Conference on Learning Represen- tations (ICLR), 2018.
Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. In Interna- tional Conference on Learning Representations (ICLR), 2016.
Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutn´ık, and J¨urgen Schmidhuber. Recurrent highway networks. In International Conference on Machine Learning (ICML), 2017.
Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. In Interna- tional Conference on Learning Representations (ICLR), 2017.
13
Published as a conference paper at ICLR 2019
# A EXPRESSING AN LSTM AS A TRELLISNET
(a) An atomic view (b) A sequence view
(2... aire
Figure 4: A TrellisNet with an LSTM nonlinearity, at an atomic level and on a longer sequence.
Here we trace in more detail the transformation of an LSTM into a TrellisNet. This is an application of Theorem 1. The nonlinear activation has been examined in Section 5.1. We will walk through the construction again here.
In each time step, an LSTM cell computes the following:
AO = o( Wh? + Uph2,) 9 = (Wahi? + U;h{2,) gf? = tanh(Wy hf + Ugh{2,) of? = o(Wahy + Ubi) ef = oe, FH ogi? RL = of? o tanh(eyâ) (13)
= + = = o where nO = 2, and f;,%,, 0; are typically called the forget, input, and output gates. By a similar construction to how we defined 7 in Theorem|I] to recover an LSTM the mixed group convolution needs to produce 3q more channels for these gated outputs, which have the form f; 47, i,,47 and gyâ (see Figure[5|for an example). In addition, at each layer of the mixed group convolution, the network also needs to maintain a group of channels for cell states c; ,/. Note that in an LSTM network, c; is updated âsynchronouslyâ with h;, so we can similarly write
ot) = iO ° Oy + i), ° gf) ne? = of) ° tanh(ct?)) (14)
Based on these changes, we show in Figure 4 an atomic and a sequence view of TrellisNet with the LSTM activation. The hidden units z1:T consist of two parts: z1:T,1, which gets updated directly via the gated activations (akin to LSTM cell states), and z1:T,2, which is processed by parameterized convolutions (akin to LSTM hidden states). Formally, in layer i:
5 ~ 5 5 5 5 T ap? = Conv1D(z{"}.9;W) +H@.7 = [vrs ure 41:73 21-7,4] Aine = o(4ur1) 0 hha + 7(Znra) e tanh(Z1-7,3) At = 0(Z1-7,4) © tanh(2{'7-))
Lo We eRe (V2) flax sw mn
Figure 5: A 2-layer LSTM is expressed as a trellis network with mixed group convolutions on four groups of feature channels. (Partial view.)
14
Published as a conference paper at ICLR 2019
B OPTIMIZING AND REGULARIZING TRELLISNET WITH RNN AND TCN METHODS
History repackaging corresponds to ory padding in TrellisNet (and TCNs in general). a kind of non-zero Truncated at time T Layer 2 Py Layer 2 > Layer 1 : Layer 1 fer) era} jer i rer Backpropagation Backpropagation
(a) History repackaging between truncated se- quences in recurrent networks. (b) History repackaging in mixed group convolutions, where we write out zt explicitly by Eq. (6).
Figure 6: Using the equivalence established by Theorem 1, we can transfer the notion of history repackaging in recurrent networks to trellis networks.
In Section 4, we formally described the relationship between TrellisNets, RNNs, and temporal con- volutional networks (TCN). On the one hand, TrellisNet is a special TCN (with weight-tying and input injection), while on the other hand it can also express any structured RNN via a sparse con- volutional kernel. These relationships open clear paths for applying techniques developed for either recurrent or convolutional networks. We summarize below some of the techniques that can be ap- plied in this way to TrellisNet, categorizing them as either inspired by RNNs or TCNs.
B.1 FROM RECURRENT NETWORKS
History repackaging. One theoretical advantage of RNNs is their ability to represent a history of inï¬nite length. However, in many applications, sequence lengths are too long for inï¬nite backprop- agation during training. A typical solution is to partition the sequence into smaller subsequences and perform truncated backpropagation through time (BPTT) on each. At sequence boundaries, the hidden state ht is ârepackagedâ and passed onto the next RNN sequence. Thus gradient ï¬ow stops at sequence boundaries (see Figure 6a). Such repackaging is also sometimes used at test time.
We can now map this repackaging procedure to trellis networks. As shown in Figure|6| the notion of passing the compressed history vector h; in an RNN corresponds to specific non-zero padding in the mixed group convolution of the corresponding TrellisNet. The padding is simply the channels from the last step of the final layer applied on the previous sequence (see Figure [6b} where without the repackaging padding, at layer 2 we will have Ars instead of na): We illustrate this in Figure where we have written out 2 in TrellisNet explicitly in the form of h; 4, according to Eq. (6. is suggests that instead of storing all effective history in memory, we can compress history in a feed-forward network to extend its history as well. For a general TrellisNet that employs a dense kernel, similarly, we can pass the hidden channels of the last step of the final layer in the previous sequence as the âhistoryâ padding for the next TrellisNet sequence (this works in both training and testing).
Gated activations. In general, the structured gates in RNN cells can be translated to gated acti- vations in temporal convolutions, as we did in Appendix A in the case of an LSTM. While in the experiments we adopted the LSTM gating, other activations (e.g. GRUs (Cho et al., 2014) or acti- vations found via architecture search (Zoph & Le, 2017)) can also be applied in trellis networks via the equivalence established in Theorem 1.
RNN variational dropout. Variational dropout (VD) for RNNs (Gal & Ghahramani, 2016) is a useful regularization scheme that applies the same mask at every time step within a layer (see Fig- ure 7a). A direct translation of this technique from RNN to the group temporal convolution implies that we need to create a different mask for each diagonal of the network (i.e. each history starting point), as well as for each group of the mixed group convolution. We propose an alternative (and extremely simple) dropout scheme for TrellisNet, which is inspired by VD in RNNs as well as Theo-
15
Published as a conference paper at ICLR 2019
Original Loss Loris Auxiliary Loss Laux a 2a aaa in â AVY + Mask 2 ââ â Mask 3 + + _ LAVYVVYV1I â> Mask 4 a 22 wz). 0 || @1 |) 2 || @s || a || 25) â. Mask 5
(a) Left: variational dropout (VD) in an RNN. Right: VD in a TrellisNet. Each color indicates a different dropout mask. (b) Auxiliary loss on intermediate layers in a TrellisNet.
Figure 7: (a) RNN-inspired variational dropout. (b) ConvNet-inspired auxiliary losses.
rem{[I} In each iteration, we apply the same mask on the post-activation outputs, at every time step in both the temporal dimension and depth dimension. That is, based on Eq. (6) in Theorem|[]] we adapt VD to the TrellisNet setting by assuming hy» 45 % his; see Figure[7al Empirically, we found this dropout to work significantly better than other dropout schemes (e.g. drop certain channels entirely).
Recurrent weight dropout/DropConnect. We apply DropConnect on the TrellisNet kernel. Merity et al. (2018b) showed that regularizing hidden-to-hidden weights Whh can be useful in optimizing LSTM language models, and we carry this scheme over to trellis networks.
B.2 FROM CONVOLUTIONAL NETWORKS
Dense convolutional kernel. Generalizing the convolution from a mixed group (sparse) convolution to a general (dense) one means the connections are no longer recurrent and we are computing directly on the hidden units with a large kernel, just like any temporal ConvNet.
Deep supervision. Recall that for sparse TrellisNet to recover truncated RNN, at each level the hidden units are of the form h; ;, representing the state at time t if we assume that history started at time tâ (Eq. (6). We propose to inject the loss function at intermediate layers of the convolutional network (e.g. after every ¢ layers of transformations, where we call ¢ the auxiliary loss frequency). For example, during training, to predict an output at time t with a L-layer TrellisNet, besides z\â in the last layer, we can also apply the loss function on 2{*~, z\*~?9, etc. â where hidden units will predict with a shorter history because they are at lower levels of the network. This had been introduced for convolutional models in computer vision . The eventual loss of the network will be
total = orig + λ aux, (15)
L where λ is a ï¬xed scaling factor that controls the weight of the auxiliary loss.
# L
L
Note that this technique is not directly transferable (or applicable) to RNNs.
Larger kernel and dilations (Yu & Koltun, 2016). These techniques have been used in convolu- tional networks to more quickly increase the receptive ï¬eld. They can be immediately applied to trellis networks. Note that the activation function f of TrellisNet may need to change if we change the kernel size or dilation settings (e.g. with dilation d and kernel size 2, the activation will be f (Ëz(i)
Weight normalization (Salimans & Kingma, 2016). Weight normalization (WN) is a technique that learns the direction and the magnitude of the weight matrix independently. Applying WN on the convolutional kernel was used in some prior works on temporal convolutional architectures (Dauphin et al., 2017; Bai et al., 2018), and have been found useful in regularizing the convolutional ï¬lters and boosting convergence.
Parallelism. Because TrellisNet is convolutional in nature, it can easily leverage the parallel pro- cessing in the convolution operation (which slides the kernel across the input features). We note that when the input sequence is relatively long, the predictions of the ï¬rst few time steps will have insufï¬cient history context compared to the predictions later in the sequence. This can be addressed by either history padding (mentioned in Appendix B.1) or chopping off the loss incurred by the ï¬rst few time steps.
16
Published as a conference paper at ICLR 2019
# C BENCHMARK TASKS
Word-level language modeling on Penn Treebank (PTB). The original Penn Treebank (PTB) dataset selected 2,499 stories from a collection of almost 100K stories published in Wall Street Jour- nal (WSJ) (Marcus et al., 1993). After Mikolov et al. (2010) processed the corpus, the PTB dataset contains 888K words for training, 70K for validation and 79K for testing, where each sentence is marked with an <eos> tag at its end. All of the numbers (e.g. in ï¬nancial news) were replaced with a ? symbol with many punctuations removed. Though small, PTB has been a highly studied dataset in the domain of language modeling (Miyamoto & Cho, 2016; Zilly et al., 2017; Merity et al., 2018b; Melis et al., 2018; Yang et al., 2018). Due to its relatively small size, many compu- tational models can easily overï¬t on word-level PTB. Therefore, good regularization methods and optimization techniques designed for sequence models are especially important on this benchmark task (Merity et al., 2018b).
Word-level language modeling on WikiText-103. WikiText-103 (WT103) is 110 times larger than PTB, containing a training corpus from 28K lightly processed Wikipedia articles (Merity et al., 2017). In total, WT103 features a vocabulary size of about 268K2, with 103M words for training, 218K words for validation, and 246K words for testing/evaluation. The WT103 corpus also retains the original case, punctuation and numbers in the raw data, all of which were removed from the PTB corpus. Moreover, since WT103 is composed of full articles (whereas PTB is sentence-based), it is better suited for testing long-term context retention. For these reasons, WT103 is typically considered much more representative and realistic than PTB (Merity et al., 2018a).
Character-level language modeling on Penn Treebank (PTB). When used for character-level language modeling, PTB is a medium size dataset that contains 5M chracters for training, 396K for validation, and 446K for testing, with an alphabet size of 50 (note: the <eos> tag that marks the end of a sentence in word-level tasks is now considered one character). While the alphabet size of char-level PTB is much smaller compared to the word-level vocabulary size (10K), there is much longer sequential token dependency because a sentence contains many more characters than words.
Sequential and permuted MNIST classiï¬cation. The MNIST handwritten digits dataset (LeCun et al., 1989) contains 60K normalized training images and 10K testing images, all of size 28 28. In the sequential MNIST task, MNIST images are presented to the sequence model as a ï¬attened 1 sequence for digit classiï¬cation. Accurate predictions therefore require good long-term 784 memory of the ï¬attened pixels â longer than in most language modeling tasks. In the setting of permuted MNIST (PMNIST), the order of the sequence is permuted at random, so the network can no longer rely on local pixel features for classiï¬cation.
Sequential CIFAR-10 classiï¬cation. The CIFAR-10 dataset (Krizhevsky & Hinton, 2009) contains 50K images for training and 10K for testing, all of size 32 32. In the sequential CIFAR-10 task, these images are passed into the model one at each time step, ï¬attended as in the MNIST tasks. Compared to sequential MNIST, this task is more challenging. For instance, CIFAR-10 contains more complex image structures and intra-class variations, and there are 3 channels to the input. Moreover, as the images are larger, a sequence model needs to have even longer memory than in sequential MNIST or PMNIST (Trinh et al., 2018).
# D HYPERPARAMETERS AND ABLATION STUDY
Table 5 speciï¬es the trellis networks used for the various tasks. There are a few things to note while reading the table. First, in training, we decay the learning rate once the validation error plateaus for a while (or according to some ï¬xed schedule, such as after 100 epochs). Second, for auxiliary loss (see Appendix B for more details), we insert the loss function after every ï¬xed number of layers in the network. This âfrequencyâ is included below under the âAuxiliary Frequencyâ entry. Finally, the hidden dropout in the Table refers to the variational dropout we translated from RNNs (see Appendix B), which is applied at all hidden layers of the TrellisNet. Due to the insight from Theorem 1, many techniques in TrellisNet were translated directly from RNNs or TCNs. Thus, most of the hyperparameters were based on the numbers reported in prior works (e.g. embedding size, embedding dropout, hidden dropout, output dropout, optimizer, weight-decay, etc.) with minor
2As a reference, Oxford English Dictionary only contains less than 220K unique English words.
17
Published as a conference paper at ICLR 2019
adjustments (Merity et al., 2018b; Yang et al., 2018; Bradbury et al., 2017; Merity et al., 2018a; Trinh et al., 2018; Bai et al., 2018; Santoro et al., 2018). For factors such as auxiliary loss weight and frequency, we perform a basic grid search.
Table 5: Models and hyperparameters used in experiments. âââ means not applicable/used.
Word-PTB (w/o MoS) Word-PTB (w/ MoS) Word-WT103 Optimizer SGD SGD Adam Adam Adam Initial Learning Rate 20 20 1e-3 2e-3 2e-3 Hidden Size (i.e. ht) 1000 1000 2000 1000 100 Output Size (only for MoS) â 480 â â â # of Experts (only for MoS) â 15 â â â Embedding Size 400 280 512 200 â Embedding Dropout 0.1 0.05 0.0 0.0 â Hidden (VD-based) Dropout 0.28 0.28 0.1 0.3 0.2 Output Dropout 0.45 0.4 0.1 0.1 0.2 Weight Dropout 0.5 0.45 0.1 0.25 0.1 # of Layers 55 55 70 125 16 Auxiliary Loss λ 0.05 0.05 0.08 0.3 â Auxiliary Frequency Weight Normalization 16 â 16 â 25 ! 70 ! â ! Gradient Clip 0.225 0.2 0.1 0.2 0.5 Weight Decay 1e-6 1e-6 0.0 1e-6 1e-6 Model Size 24M 25M 180M 13.4M 8M
# Char-PTB (P)MNIST/CIFAR-10
We have also performed an ablation study on TrellisNet to study the inï¬uence of various ingredients and techniques on performance. The results are reported in Table 6. We conduct the study on word- level PTB using a TrellisNet with 24M parameters. When we study one factor (e.g. removing hidden dropout), all hyperparameters and settings remain the same as in column 1 of Table 5 (except for âDense Kernelâ, where we adjust the number of hidden units so that the model size remains the same).
Table 6: Ablation study on word-level PTB (w/o MoS)
Model Size Test ppl â SOTA TrellisNet 24.1M 56.97 â â Hidden (VD-based) Dropout 24.1M 64.69 â 7.72 â Weight Dropout 24.1M 63.82 â 6.85 â Auxiliary Losses 24.1M 57.99 â 1.02 â Long Seq. Parallelism 24.1M 57.35 â 0.38 â Dense Kernel (i.e. mixed group conv) 24.1M 59.18 â 2.21 â Injected Input (every 2 layers instead) 24.1M 57.44 â 0.47 â Injected Input (every 5 layers instead) 24.1M 59.75 â 2.78 â Injected Input (every 10 layers instead) 24.1M 60.70 â 3.73 â Injected Input (every 20 layers instead) 24.1M 74.91 â 17.94
18 | {
"id": "1606.01700"
} |
1810.06638 | U-Net: Machine Reading Comprehension with Unanswerable Questions | Machine reading comprehension with unanswerable questions is a new
challenging task for natural language processing. A key subtask is to reliably
predict whether the question is unanswerable. In this paper, we propose a
unified model, called U-Net, with three important components: answer pointer,
no-answer pointer, and answer verifier. We introduce a universal node and thus
process the question and its context passage as a single contiguous sequence of
tokens. The universal node encodes the fused information from both the question
and passage, and plays an important role to predict whether the question is
answerable and also greatly improves the conciseness of the U-Net. Different
from the state-of-art pipeline models, U-Net can be learned in an end-to-end
fashion. The experimental results on the SQuAD 2.0 dataset show that U-Net can
effectively predict the unanswerability of questions and achieves an F1 score
of 71.7 on SQuAD 2.0. | http://arxiv.org/pdf/1810.06638 | Fu Sun, Linyang Li, Xipeng Qiu, Yang Liu | cs.CL, cs.AI | 9 pages | null | cs.CL | 20181012 | 20181012 | 8 1 0 2
t c O 2 1 ] L C . s c [
1 v 8 3 6 6 0 . 0 1 8 1 : v i X r a
# U-Net: Machine Reading Comprehension with Unanswerable Questions
Fu Sunâ , Linyang Liâ , Xipeng Qiuâ â, Yang Liuâ¡ â Shanghai Key Laboratory of Intelligent Information Processing, Fudan University â School of Computer Science, Fudan University â¡ Liulishuo Silicon Valley AI Lab {fsun17,lyli15,xpqiu}@fudan.edu.cn, yang.liu@liulishuo.com
# Abstract
Machine reading comprehension with unanswerable ques- tions is a new challenging task for natural language process- ing. A key subtask is to reliably predict whether the ques- tion is unanswerable. In this paper, we propose a uniï¬ed model, called U-Net, with three important components: an- swer pointer, no-answer pointer, and answer veriï¬er. We in- troduce a universal node and thus process the question and its context passage as a single contiguous sequence of tokens. The universal node encodes the fused information from both the question and passage, and plays an important role to pre- dict whether the question is answerable and also greatly im- proves the conciseness of the U-Net. Different from the state- of-art pipeline models, U-Net can be learned in an end-to-end fashion. The experimental results on the SQuAD 2.0 dataset show that U-Net can effectively predict the unanswerability of questions and achieves an F1 score of 71.7 on SQuAD 2.0.
Article: Endangered Species Act Paragraph: â... Other legislation followed, including the Migratory Bird Conservation Act of 1929, a 1937 treaty prohibiting the hunting of right and gray whales, and the Bald Eagle Protection Act of 1940. These later laws had a low cost to societythe species were rela- tively rareand little opposition was raised. Question 1: Which laws faced signiï¬cant opposition? Plausible Answer: later laws Question 2: What was the name of the 1937 treaty? Plausible Answer: Bald Eagle Protection Act
Table 1: Unanswerable Questions from SQUAD 2.0 (Ra- jpurkar, Jia, and Liang 2018).
Introduction Machine reading comprehension (MRC) is a challenging task in natural language processing, which requires that ma- chine can read, understand, and answer questions about a text. Beneï¬ting from the rapid development of deep learn- ing techniques and large-scale benchmarks (Hermann et al. 2015; Hill et al. 2015; Rajpurkar et al. 2016), the end-to-end neural methods have achieved promising results on MRC task (Seo et al. 2016; Huang et al. 2017; Chen et al. 2017; Clark and Gardner 2017; Hu et al. 2017). The best sys- tems have even surpassed human performance on the Stan- ford Question Answering Dataset (SQuAD) (Rajpurkar et al. 2016), one of the most widely used MRC benchmarks. How- ever, one of the limitations of the SQuAD task is that each question has a correct answer in the context passage, there- fore most models just need to select the most relevant text span as the answer, without necessarily checking whether it is indeed the answer to the question.
passage, judge the unanswerability and then identify the an- swer span. Since extensive work has been done on how to correctly predict the answer span when the question is an- swerable (e.g., SQuAD 1.1), the main challenge of this task lies in how to reliably determine whether a question is not answerable from the passage.
There are two kinds of approaches to model the answer- ability of a question. One approach is to directly extend previous MRC models by introducing a no-answer score to the score vector of the answer span (Levy et al. 2017; Clark and Gardner 2017). But this kind of approaches is rela- tively simple and cannot effectively model the answerability of a question. Another approach introduces an answer veri- ï¬er to determine whether the question is unanswerable (Hu et al. 2018; Tan et al. 2018). However, this kind of ap- proaches usually has a pipeline structure. The answer pointer and answer veriï¬er have their respective models, which are trained separately. Intuitively, it is unnecessary since the un- derlying comprehension and reasoning of language for these components is the same.
To remedy the deï¬ciency of SQuAD, Rajpurkar, Jia, and Liang (2018) developed SQuAD 2.0 that combines SQuAD with new unanswerable questions. Table 1 shows two ex- amples of unanswerable questions. The new dataset requires the MRC systems to know what they donât know.
To do well on MRC with unanswerable questions, the model needs to comprehend the question, reason among the
# â Corresponding Author.
In this paper, we decompose the problem of MRC with unanswerable questions into three sub-tasks: answer pointer, no-answer pointer, and answer veriï¬er. Since these three sub-tasks are highly related, we regard the MRC with unan- swerable questions as a multi-task learning problem (Caru- ana 1997) by sharing some meta-knowledge.
We propose the U-Net to incorporate these three sub-tasks into a uniï¬ed model: 1) an answer pointer to predict a can-
didate answer span for a question; 2) a no-answer pointer to avoid selecting any text span when a question has no answer; and 3) an answer veriï¬er to determine the probability of the âunanswerabilityâ of a question with candidate answer in- formation. Additionally, we also introduce a universal node and process the question and its context passage as a single contiguous sequence of tokens, which greatly improves the conciseness of U-Net. The universal node acts on both ques- tion and passage to learn whether the question is answerable. Different from the previous pipeline models, U-Net can be learned in an end-to-end fashion. Our experimental results on the SQuAD 2.0 dataset show that U-Net effectively pre- dicts the unanswerability of questions and achieves an F1 score of 72.6.
The contributions of this paper can be summarized as fol-
lows. ⢠We decompose the problem of MRC with unanswerable questions into three sub-tasks and combine them into a uniï¬ed model, which uses the shared encoding and inter- action layers. Thus, the three-tasks can be trained simul- taneously in an end-to-end fashion.
We introduce a universal node to encode the common in- formation of the question and passage. Thus, we can use a uniï¬ed representation to model the question and passage, which makes our model more condensed. ⢠U-Net is very easy to implement yet effective.
Proposed Model Formally, we can represent the MRC problem as: given a set of tuples (Q, P, A), where Q = (q1, q2, · · · , qm) is the question with m words, P = (p1, p2, · · · , pn) is the context passage with n words, and A = prs:re is the answer with rs and re indicating the start and end points, the task is to estimate the conditional probability P (A|Q, P ).
The architecture of our proposed U-Net is illustrated in Figure 1.
U-Net consists of four major blocks: Uniï¬ed Encod- ing, Multi-Level Attention, Final Fusion, and Prediction. As shown in Figure 1, we ï¬rst combine the embedded repre- sentation of the question and passage with a universal node u and pass them through a BiLSTM to encode the whole text. We then use the encoded representation to do the infor- mation interaction. Then we use the encoded and interacted representation to fuse the full representation and feed them into the ï¬nal prediction layers to do the multi-task training. We will describe our model in details in the following.
(A) Uniï¬ed Encoding Embedding Following the successful models on SQuAD 1.1, we ï¬rst embed both the question and the passage with the following features. Glove embedding (Pennington, Socher, and Manning 2014) and Elmo embedding (Peters et al. 2018) are used as basic embeddings. Besides, we use POS embedding, NER embedding, and a feature embedding that includes the exact match, lower-case match, lemma match, and a TF-IDF feature (Chen et al. 2017).
Now we get the question representation Q = qm i=1 and i=1, where each word is the passage representation P = pn
1/0 «++ Start End Cq Pointer1 }» Pointer2 O-O-O0°0-0-0°0-0-® t Self Attention * â> oO Multi-Level Attention
Figure 1: Architecture of the U-Net.
represented as a d-dim embedding by combining the fea- tures/embedding described above.
Universal Node We create a universal node u, which is a key factor in our model and has several roles in predicting the unanswerability of question Q.
We expect this node to learn universal information from both passage and question. This universal node is added and connects the passage and question at the phase of embed- ding, and then goes along with the whole representation, so it is a key factor in information representation. Since the uni- versal node is in between and later shared between passage and question, it has an abstract semantic meaning rather than just a word embedding.
Also, the universal node is later shared in the attention interaction mechanism and used in both the answer bound- ary detection and classiï¬cation tasks, so this node carries massive information and has several important roles in our whole model construction.
The universal node u is ï¬rst represented by a d-dim randomly-initialized vector. We concatenated question rep- resentation, universal node representation, passage represen- tation together as:
V = [Q, u, P ] = [q1, q2 . . . qm, u, p1, p2, · · · , pn],
V â RdÃ(m+n+1) is a joint representation of question and passage.
Word-level Fusion Then we ï¬rst use two-layer bidi- rectional LSTM (BiLSTM) (Hochreiter and Schmidhuber 1997) to fuse the joint representation of question, universal node, and passage.
H l = BiLSTM(V ), H h = BiLSTM(H l),
(2)
(3)
where H l is the hidden states of the ï¬rst BiLSTM, repre- senting the low-level semantic information, and H h is the hidden states of the second BiLSTM, representing the high- level semantic information.
Finally, we concatenate H l and H h together and pass them through the third BiLSTM and obtain a full represen- tation H f .
H f = BiLSTM([H l; H h]). (4)
Thus, H = [H l; H h; H f ] represents the deep fusion in- formation of the question and passage on word-level. When a BiLSTM is applied to encode representations, it learns the semantic information bi-directionally. Since the univer- sal node u is between the question and passage, its hidden states hm+1 can learn both question and passage informa- tion. When the passage-question pair was encoded as a uni- ï¬ed representation and information ï¬ows via the BiLSTM, the universal node has an important role in information rep- resentation.
(B) Multi-Level Attention To fully fuse the semantic representation of the question and passage, we use the attention mechanism (Bahdanau, Cho, and Bengio 2014) to capture their interactions on different levels.
We expected that we could simply use self-attention on the encoded representation H for interaction between ques- tion and passage, which contains both bi-attention (Seo et al. 2016) and self-attention (Wang et al. 2017) of the ques- tion and passage. But we found that it performed slightly worse than the traditional bi-directional attention with the universal node included. Therefore, we use a bi-directional attention between the question and passage.
We ï¬rst divide H into two representations: attached pas- sage Hq and attached question Hp, and let the universal node representation hm+1 attached to both the passage and ques- tion, i.e.,
Hq = [h1, h2, · · · , hm+1], Hp = [hm+1, hm+2, · · · , hm+n+1],
Hy = (hi, ho, --- , B41], (5)
Ay = [Bnyt,bmy2.°** Bm4nsil: (6)
Note hm+1 is shared by Hq and Hp. Here the universal node works as a special information carrier, and both passage and question can focus attention information on this node so that the connection between them is closer than a traditional bi- attention interaction.
q ; H f p ; H f p ] are concatenated by three-level representations, we followed
(5) (6)
previous work FusionNet (Huang et al. 2017) to construct their iterations on three levels.
Take the ï¬rst level as an example. We ï¬rst compute the afï¬ne matrix of H l q and H l p by
$= (ReLU(WiH!)) âReLU(W2H!), (7)
where S ⬠ROâ¢+D*("+)): W, and W2 are learnable param- eters. Next, a bi-directional attention is used to compute the interacted representation H! and Hj.
â
# q = H l p = H l
_ l . . ~~T H) = H,, x softmax(S"), (8)
# Fl
â Tl xe eoftm: Hj, = H, x softmax($), (9)
where softmax(·) is column-wise normalized function.
We use the same attention layer to model the interactions for all the three levels, and get the final fused representation H', H", Hf for the question and passage respectively.
Note that while dealing with the attention output of the universal node, we added two outputs from passage-to- question attention and question-to-passage attention. So af- ter the interaction, the fused representation H', H", HS still have the same length as the encoded representation H', H" and H!.
(C) Final Fusion After the three-level attentive interaction, we generate the fi- nal fused information for the question and passage. We con- catenate all the history information: we first concatenate the encoded representation H and the representation after atten- tion Hf (again, we use H!, H", HS, and H!, H, H? to rep- resent 3 different levels of representation for the two previ- ous steps respectively).
Following the success of DenseNet (Huang, Liu, and Weinberger 2016), we concatenate the input and output of each layer as the input of the next layer.
First, we pass the concatenated representation H through
a BiLSTM to get H4.
HA = BiLSTM (H's H"; HS; H's Hh; Hi), (10)
where the representation H A is a fusion of information from different levels.
Then we concatenate the original embedded representa- tion V and H A for better representation of the fused infor- mation of passage, universal node, and question.
A = [V ; H A]. (11)
Finally, we use a self-attention layer to get the attention information within the fused information. The self-attention layer is constructed the same way as (Vaswani et al. 2017):
A=Ax softmax(A⢠A), (12)
where A is the representation after self-attention of the fused information A. Next we concatenated representation Hâ and A and pass them through another BiLSTM layer:
O = BiLSTM[H*; A]. (13)
Now O is the ï¬nal fused representation of all the infor- mation. At this point, we divide O into two parts: OP , OQ, representing the fused information of the question and pas- sage respectively.
OP = [o1, o2, · · · , om], OQ = [om+1, om+2, · · · , om+n+1],
(15)
(14)
Note for the ï¬nal representation, we attach the universal node only in the passage representation OP . This is because we need the universal node as a focus for the pointer when the question is unanswerable. These will be fed into the next decoder prediction layer.
(D) Prediction The prediction layer receives fused information of passage OP and question OQ, and tackles three prediction tasks: (1) answer pointer, (2) no-answer pointer and (3) answer veri- ï¬er.
First, we use a function shown below to summarize the question information OQ into a ï¬xed-dim representation cq.
exp(W,/ o®) Q Cg = 9; LS enw)â a (16)
where Wq is a learnable weight matrix and oQ represents i the ith word in the question representation. Then we feed cq into the answer pointer to ï¬nd boundaries of answers (Wang and Jiang 2016), and the classiï¬cation layer to distinguish whether the question is answerable.
(i) Answer Pointer We use this answer pointer to detect the answer boundaries from the passage when the question is answerable (i.e., the answer is a span in the passage). This layer is a classic pointer net structure (Vinyals, Fortunato, and Jaitly 2015). We use two trainable matrices Ws and We to estimate the probability of the answer start and end boundaries of the ith word in the passage, αi and βi.
# αi â exp(cqWsoP βi â exp(cqWeoP
(17)
i ), i ),
(18)
Note here when the question is answerable, we do not consider the universal node in answer boundary detection, so we have i > 0 (i = 0 is the universal node in the pas- sage representation). The loss function for the answerable question pairs is:
La = â(log aa + log Bp), (19) where a and b are the ground-truth of the start and end boundary of the answer.
(ii) No-Answer Pointer Then we use the same pointer for questions that are not answerable. Here the loss LN A is:
Ly a = â(log ay + log Ao), (20)
α0 and β0 correspond to the position of the universal node, which is at the front of the passage representation Op. For this scenario, the loss is calculated for the universal node.
Additionally, since there exits a plausible answer for each unanswerable question in SQUAD 2.0, we introduce an aux- iliary plausible answer pointer to predict the boundaries of the plausible answers. The plausible answer pointer has the same structure as the answer pointer, but with different pa- rameters. Thus, the total loss function is: Ly a = â(log ay + log Bo) â (log a/,- + log 6}-), (21) where aâ and 9â are the output of the plausible answer pointer; a* and b* are the start and end boundary of the unan- swerable answer.
The no-answer pointer and plausible answer pointer are removed at test phase.
(iii) Answer Veriï¬er We use the answer veriï¬er to distin- guish whether the question is answerable.
Answer veriï¬er applies a weighted summary layer to summarize the passage information into a ï¬xed-dim repre- sentation cq (as shown in Eq.(16)).
And we use the weight matrix obtained from the answer pointer to get two representations of the passage.
cs = αi · oP i (22)
ce = βi · oP i i (23)
Then we use the universal node om+1 and concatenate it with the summary of question and passage to make a ï¬xed vector
F = [cq; om+1; cs; ce]. (24)
This ï¬xed F includes the representation cq representing the question information, and cs and ce representing the passage information. Since these representations are highly summarized specially for classiï¬cation, we believe that this passage-question pair contains information to distinguish whether this question is answerable. In addition, we include the universal node as a supplement. Since the universal node is pointed at when the question is unanswerable and this node itself already contains information collected from both the passage and question during encoding and information interaction, we believe that this node is important in distin- guishing whether the question is answerable.
Finally, we pass this ï¬xed vector F through a linear layer to obtain the prediction whether the question is answerable. pc = Ï(W T
# f F )
where Ï is a sigmoid function, Wf is a learnable weight ma- trix.
Here we use the cross-entropy loss in training.
Lay = -(5 -log p® + (1 â 6) - (log (1 =p), (26)
where δ â {0, 1} indicates whether the question has an an- swer in the passage.
Compared with other relatively complex structures devel- opped for this MRC task, our U-Net model passes the orig- inal question and passage pair through embedding and en- coding, which then interacts with each other, yielding fused information merged from all the levels. The entire architec- ture is very easy to construct. After we have the fused repre- sentation of the question and passage, we pass them through the pointer layer and a fused information classiï¬cation layer in a multi-task setup.
Training We jointly train the three tasks by combining the three loss functions. The ï¬nal loss function is:
L = δLA + (1 â δ)LN A + LAV , (27)
where δ â {0, 1} indicates whether the question has an an- swer in the passage, LA, LN A and LAV are the three loss functions of the answer pointer, no-answer pointer, and an- swer veriï¬er.
Although the three tasks could have different weights in the ï¬nal loss function and be further ï¬ne-tuned after joint training, here we just consider them in the same weight and do not ï¬ne-tune them individually.
At the test phase, we ï¬rst use the answer pointer to ï¬nd a potential answer to the question, while the veriï¬er layer judges whether the question is answerable. If the classiï¬er predicts the question is unanswerable, we consider the an- swer extracted by the answer pointer as plausible. In this way, we get the system result.
# Experiment
Datasets Recently, machine reading comprehension and question an- swering have progressed rapidly, owing to the computation ability and publicly available high-quality datasets such as SQuAD. Now new research efforts have been devoted to the newly released answer extraction test with unanswerable questions, SQuAD 2.0 (Rajpurkar, Jia, and Liang 2018). It is constructed by combining question-answer pairs selected from SQuAD 1.0 and newly crafted unanswerable questions. These unanswerable questions are created by workers that were asked to pose questions that cannot be answered based on the paragraph alone but are similar to the answerable questions. It is very difï¬cult to distinguish these questions from the answerable ones. We evaluate our model using this data set. It contains over 100,000+ questions on 500+ wikipedia articles.
Implementation Details We use Spacy to process each question and passage to obtain tokens, POS tags, NER tags and lemmas tags of each text. We use 12 dimensions to embed POS tags, 8 for NER tags (Chen et al. 2017). We use 3 binary features: exact match, lower-case match and lemma match between the question and passage (Lee et al. 2016). We use 100-dim Glove pre- trained word embeddings and 1024-dim Elmo embeddings. All the LSTM blocks are bi-directional with one single layer.
We set the hidden layer dimension as 125, attention layer di- mension as 250. We added a dropout layer over all the mod- eling layers, including the embedding layer, at a dropout rate of 0.3 (Srivastava et al. 2014). We use Adam optimizer with a learning rate of 0.002 (Kingma and Ba 2014).
During training, we omit passage with over 400 words and question with more than 50 words. For testing, when the passage has over 600 words and the question is over 100 words, we simply label these questions as unanswerable.
Main Results Our model achieves an F1 score of 74.0 and an EM score of 70.3 on the development set, and an F1 score of 72.6 and an EM score of 69.2 on Test set1, as shown in Table 2. Our model outperforms most of the previous approaches. Comparing to the best-performing systems, our model has a simple architecture and is an end-to-end model. In fact, among all the end-to-end models, we achieve the best F1 scores. We believe that the performance of the U-Net can be boosted with an additional post-processing step to verify answers using approaches such as (Hu et al. 2018).
Ablation Study We also do an ablation study on the SQuAD 2.0 develop- ment set to further test the effectiveness of different com- ponents in our model. In Table 3, we show four different conï¬gurations.
First, we remove the universal node U . We let the nega- tive examples focus on the plausible answer spans instead of focusing on the universal node U . This results in a loss of 2.6% F1 score on the development set, showing that the universal node U indeed learns information about whether the question is answerable.
We also tried to make the universal node U only attached to the passage representation when passing the attention layer. Our results showed that when node U is shared, as it is called âuniversalâ, it learns information interaction be- tween the question and passage, and when it is not shared, the performance slightly degraded.
As for the approaches to encode the representations, we pass both the question and passage through a shared BiL- STM. To test the effectiveness of this, we ran the experiment using separate BiLSTMs on embedded question and passage representations. Results show that the performance dropped slightly, suggesting sharing BiLSTM is an effective method to improve the quality of the encoder.
After removing the plausible answer pointer, the perfor- mance also dropped, indicating the plausible answers are useful to improve the model even though they are incorrect. the performance
After removing the answer veriï¬er, dropped greatly, indicating it is vital for our model.
Lastly, we run a test using a more concise conï¬guration. In the second block (multi-level attention) of the U-Net, we do not split the output of the encoded presentation and let it pass through a self-attention layer. The bidirectional atten- tion is removed. In this way, our model uses only one uniï¬ed
# 1https://rajpurkar.github.io/
# SQuAD-explorer/
Model Dev Test EM FI EM FI End-to-end Model BNA* ( DocQA (Rajpurkar, Jia, and Liang 2018) FusionNet++ ( g ) SAN ( VS°-Net U-Net Rajpurkar, Jia, and Liang 2018) 59.8 62.6 59.2 62.1 65.1 67.6 63.4 66.3 - - 66.6 69.6 - - 68.6 71.4 68.4 71.3 70.3 74.0 69.2 72.6 Ensemble Model FusionNet++ (ensemble) SAN (ensemble) U-Net (ensemble) - - 70.3 72.6 - - 71.3 73.7 - - 7115 75.0 Pipeline Model RMR+ELMo+ Verifier (Hu et al. 2018} Hu et al. 2018) Human 72.3 748 71.7 74.2 86.3 89.0 86.9 89.5
Table 2: Evaluation results on the SQUAD 2.0 (extracted on Sep 9, 2018). * means the BiDAF (Seo et al. 2016) with No Answer.
Conï¬guration U-Net no node U no share U no concatenate P & Q no plausible answer pointer no classiï¬cation Self-Attn Only EM F1 âEM â F1 70.3 74.0 - - 67.9 69.7 69.0 69.6 63.5 71.4 73.5 72.8 72.9 68.5 -2.4 -0.6 -1.3 -0.7 -6.8 -2.6 -0.5 -1.2 -1.1 -5.5 69.7 73.5 -0.5 -0.5
To test our classiï¬er performance, we do not use back- ward propagation over the loss of answer boundary detec- tion and simply run a classiï¬cation task. Results (the ï¬rst two rows in Table 4) show that there is a large gain when using the multi-task model. The answer boundary detection task helps the encoder learn information between the pas- sage and question and also feed information into the univer- sal node, therefore we can use a summarized representation of the passage and question as well as the universal node to distinguish whether the question is answerable, i.e., help improve classiï¬cation.
Table 3: Comparison of different conï¬gurations for our U- Net model.
representation of the question and passage at all time. We simply pass this representation layer by layer to get the ï¬- nal result. Compared to the bi-attention model, the F1-score decreases 0.5%.
Multi-task Study We also run an experiment to test the performance of our multi-task model. We select different losses that participate in the training procedure to observe the performance af- fected by answer boundary detect or classiï¬cation.
For the answer boundary detection task, we ï¬nd that the multi-task setup (i.e., the classiï¬cation layer participates in the training process) does not help its performance. Since the classiï¬er and pointer layer shared the encoding process, we originally expected that classiï¬cation information can help detect answer boundaries. But this is not the case. We think this is also reasonable since distinguishing whether the ques- tion is answerable is mainly focusing on the interactions be- tween the passage-question pair, so once the question is pre- dicted as answerable or not, it has nothing to do with the an- swer boundaries. This is consistent with how human-beings do this classiï¬cation task.
Table 4 shows the performance. Here we use EM â and F 1â to represent the EM and F1 score when the classiï¬ca- tion is not part of the task, which makes it very much like the task in SQuAD 1.1.
Loss EMâ F1â Classiï¬cation Acc. L LAV LA 75.3 - 77.2 84.8 - 85.1 80.2 67.1 -
Table 4: Multi-task performance on the development set.
We also run the test over SQuAD 1.1 development test to evaluate the performance. Due to a condensed structure, our model achieves an F 1â score of less than 86%, which is not a very competitive score on SQuAD 1.1 test. But as shown above, our model achieves a good score in SQuAD 2.0 test, which shows this model has the potential to achieve higher performance by making progress on both the answer detection and classiï¬cation tasks.
Overall, we can conclude that our multi-task model works well since the performance of unanswerability classiï¬cation improves signiï¬cantly when the answer pointer and answer veriï¬er work simultaneously.
Study on the Different Thresholds of Unanswerability Classiï¬cation The output b of the answer veriï¬er is the probability of a question being unanswerable. The smaller the output, the lower the probability of unanswerability is. In SQuAD 2.0, the proportions of unanswerable questions are different in the training and test sets. The default threshold 0.5 is opti- mized on the training set, but not suitable for the test set. Therefore, it is reasonable to set a proper threshold to man- ually adapt to the test set.
As mentioned in SQuAD 2.0 paper (Rajpurkar, Jia, and Liang 2018), different thresholds for answerability predic- tion result in ï¬uctuated scores between answerable and unanswerable questions. Here we show the variation of the F1 score with different thresholds in Figure . The threshold between [0, 1] is used to decide whether a question can be answered. When the threshold is set to 0, all questions are considered as answerable.
Avg F1 80 NoAns F1 HasAns F1 e r o c S 1 F 75 70 65 0.5 0.55 0.6 0.65 0.7 0.75 Threshold t
Figure 2: F1 score variation with different âNoAns F1â is the recall of unanswerable questions.
As we can see, when the threshold is set to 0.5, F1 score of answerable questions is similar to that of unanswerable questions. When we increase the threshold (i.e., more likely to predict the question as unanswerable), performance for answerable questions degrades, and improves for unanswer- able questions. This is as expected. We can see that the over- all F 1 score is slightly better, which is consistent with the idea from SQuAD 2.0. In addition, we ï¬nd that for larger thresholds, the variance between EM and F 1 is narrowed since EM and F 1 scores for unanswerable questions are the same.
Finally, we set the threshold to be 0.7 for the submission system to SQuAD evaluation.
# Related Work
End-to-end Models for MRC Currently, end-to-end neural network models have achieved great successes for machine reading comprehension (Seo et
al. 2016; Kumar et al. 2015; Sukhbaatar et al. 2015; Cui et al. 2016; Xiong, Zhong, and Socher 2016; Dhingra et al. 2016; Shen et al. 2016; Hu et al. 2017; Wang, Yan, and Wu 2018). Most of these models consist of three components: encoder, interaction, and pointer. The BiLSTM is widely used for encoding the embedded representation. For the interaction, bidirectional attention mechanism is very effective to fuse information of the question and passage. Finally, a pointer network (Vinyals, Fortunato, and Jaitly 2015) is used to predict the span boundaries of the answer. Speciï¬cally, in SQuAD test (Rajpurkar et al. 2016), there are approaches to combine match-LSTM and pointer networks to produce boundaries of the answer and employ variant bidirectional attention mechanism to match the question and passage mu- tually.
In our model, we learn from previous work and develop a condensed end-to-end model for the SQuAD 2.0 task. Dif- ferent from the previous models, we use a uniï¬ed representa- tion to encode the question and passage simultaneously, and introduce a universal node to encode the fused information of the question and passage, which also plays an important role to predict the unanswerability of a question.
MRC with Unanswerable Questions MRC with unanswerable questions is a more challenging task. Previous work Levy et al.; Clark and Gardner (2017; 2017) has attempted to normalize a no-answer score depend- ing on the probability of all answer spans and still detect boundaries at the same time. But the scores of the answer span predictions are not very discriminative in distinguish- ing whether the question is answerable. Therefore, this kind of approaches, though relatively simple, cannot effectively deal with the answerability of a question.
Hu et al.; Tan et al. (2018; 2018) introduced an answer veriï¬er idea to construct a classiï¬cation layer. However, this kind of approaches usually has a pipeline structure. The an- swer pointer and answer veriï¬er have their respective mod- els that are trained separately.
Multi-task models Different from existing work, we re- gard the MRC with unanswerable questions as a multi-task learning problem (Caruana 1997) by sharing some meta- knowledge. Intuitively, answer prediction and answer veri- ï¬cation are related tasks since the underlying comprehen- sion and reasoning of language for these components is the same. Therefore, we construct a multi-task model to solve three sub-tasks: answer pointer, no-answer pointer, and an- swer veriï¬er.
Conclusion and Future Work In this paper, we regard the MRC with unanswerable ques- tions as multi-task learning problems and propose the U-Net, a simple end-to-end model for MRC challenges. U-Net has good performance on SQuAD 2.0. We ï¬rst add a universal node to learn a fused representation from both the question and passage, then use a concatenated representation to pass through encoding layers. We only treat question and passage differently during attention interactions. In the rest blocks of
U-Net, we still use the uniï¬ed representation containing both the question and passage representation. Finally, we train the U-Net as a multi-task framework to determine the ï¬nal an- swer boundaries as well as whether the question is answer- able. Our model has very simple structure yet achieves good results on SQuAD 2.0 test.
Our future work is to reconstruct the structure of U-Net by replacing the current multi-level attention block with a sim- pler self-attention mechanism, which we believe can cap- ture the question and passage information, and intuitively is also coherent with the rest of our U-Net model. In addi- tion, we will improve the answer boundary detection per- formance based on some of the previous successful models. Since our model actually does not achieve very competitive performance in the boundary detection task yet still has a good overall performance on SQuAD 2.0 test, we are opti- mistic that our U-Net model is potentially capable of achiev- ing better performance. Furthermore, our model has a simple structure and is easy to implement, therefore we believe that our model can be easily modiï¬ed for various datasets.
Acknowledgement We would like to thank Robin Jia, Pranav Rajpurkar for their help with SQuAD 2.0 submissions.
References [Bahdanau, Cho, and Bengio 2014] Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. ArXiv e-prints. [Caruana 1997] Caruana, R. 1997. Multitask learning. Ma- chine Learning 28(1):41â75. [Chen et al. 2017] Chen, D.; Fisch, A.; Weston, J.; and Bor- des, A. 2017. Reading wikipedia to answer open-domain questions. CoRR abs/1704.00051. [Clark and Gardner 2017] Clark, C., and Gardner, M. 2017. Simple and effective multi-paragraph reading comprehen- sion. arXiv preprint arXiv:1710.10723. [Cui et al. 2016] Cui, Y.; Chen, Z.; Wei, S.; Wang, S.; Liu, T.; and Hu, G. Attention-over-attention neu- ral networks for reading comprehension. arXiv preprint arXiv:1607.04423. [Dhingra et al. 2016] Dhingra, B.; Liu, H.; Cohen, W. W.; and Salakhutdinov, R. 2016. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549. [Hermann et al. 2015] Hermann, K. M.; Kocisky, T.; Grefen- stette, E.; Espeholt, L.; Kay, W.; Suleyman, M.; and Blun- som, P. 2015. Teaching machines to read and compre- hend. In Advances in Neural Information Processing Sys- tems, 1684â1692. [Hill et al. 2015] Hill, F.; Bordes, A.; Chopra, S.; and We- ston, J. 2015. The goldilocks principle: Reading childrenâs books with explicit memory representations. arXiv preprint arXiv:1511.02301. and [Hochreiter and Schmidhuber 1997] Hochreiter, Schmidhuber, J. 1997. Long short-term memory. Neural computation 9(8):1735â1780.
[Hu et al. 2017] Hu, M.; Peng, Y.; Huang, Z.; Qiu, X.; Wei, F.; and Zhou, M. 2017. Reinforced mnemonic reader for machine reading comprehension. arXiv preprint arXiv:1705.02798. [Hu et al. 2018] Hu, M.; Peng, Y.; Huang, Z.; Yang, N.; Zhou, M.; et al. 2018. Read+ verify: Machine reading comprehension with unanswerable questions. arXiv preprint arXiv:1808.05759. [Huang et al. 2017] Huang, H.; Zhu, C.; Shen, Y.; and Chen, W. Fusionnet: Fusing via fully-aware atten- tion with application to machine comprehension. CoRR abs/1711.07341. [Huang, Liu, and Weinberger 2016] Huang, G.; Liu, Z.; and Weinberger, K. Q. 2016. Densely connected convolutional networks. CoRR abs/1608.06993. [Kingma and Ba 2014] Kingma, D. P., and Ba, J. Adam: A method for stochastic optimization. abs/1412.6980. [Kumar et al. 2015] Kumar, A.; Irsoy, O.; Su, J.; Bradbury, J.; English, R.; Pierce, B.; Ondruska, P.; Gulrajani, I.; and Socher, R. 2015. Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint arXiv:1506.07285. [Lee et al. 2016] Lee, K.; Kwiatkowski, T.; Parikh, A. P.; and Das, D. 2016. Learning recurrent span representations for extractive question answering. CoRR abs/1611.01436. [Levy et al. 2017] Levy, O.; Seo, M.; Choi, E.; and Zettle- moyer, L. 2017. Zero-shot relation extraction via reading comprehension. arXiv preprint arXiv:1706.04115. [Liu et al. 2017] Liu, X.; Shen, Y.; Duh, K.; and Gao, J. 2017. Stochastic answer networks for machine reading com- prehension. CoRR abs/1712.03556. J.; [Pennington, Socher, and Manning 2014] Pennington, Socher, R.; and Manning, C. D. 2014. Glove: Global vectors for word representation. Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014) 12:1532â1543. [Peters et al. 2018] Peters, M. E.; Neumann, M.; Iyyer, M.; Gardner, M.; Clark, C.; Lee, K.; and Zettlemoyer, L. 2018. In Proc. of Deep contextualized word representations. NAACL. [Rajpurkar et al. 2016] Rajpurkar, P.; Zhang, J.; Lopyrev, K.; and Liang, P. SQuAD: 100,000+ ques- tions for machine comprehension of text. arXiv preprint arXiv:1606.05250. [Rajpurkar, Jia, and Liang 2018] Rajpurkar, P.; Jia, R.; and Liang, P. 2018. Know What You Donât Know: Unanswer- able Questions for SQuAD. ArXiv e-prints. [Seo et al. 2016] Seo, M.; Kembhavi, A.; Farhadi, A.; and Hajishirzi, H. 2016. Bidirectional attention ï¬ow for ma- chine comprehension. arXiv preprint arXiv:1611.01603. [Shen et al. 2016] Shen, Y.; Huang, P.-S.; Gao, J.; and Chen, W. 2016. Reasonet: Learning to stop reading in machine comprehension. arXiv preprint arXiv:1609.05284. [Srivastava et al. 2014] Srivastava, N.; Hinton, G. E.; I.; and Salakhutdinov, R. Krizhevsky, A.; Sutskever,
2014. Dropout: a simple way to prevent neural networks from overï¬tting. Journal of Machine Learning Research 15(1):1929â1958. [Sukhbaatar et al. 2015] Sukhbaatar, S.; Weston, J.; Fergus, R.; et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems, 2431â2439. [Tan et al. 2018] Tan, C.; Wei, F.; Zhou, Q.; Yang, N.; Lv, W.; and Zhou, M. 2018. I know there is no answer: Model- ing answer validation for machine reading comprehension. In CCF International Conference on Natural Language Pro- cessing and Chinese Computing, 85â97. Springer. [Vaswani et al. 2017] Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is all you need. CoRR abs/1706.03762. [Vinyals, Fortunato, and Jaitly 2015] Vinyals, O.; Fortunato, M.; and Jaitly, N. 2015. Pointer Networks. ArXiv e-prints. [Wang and Jiang 2016] Wang, S., and Jiang, J. 2016. Ma- chine comprehension using match-lstm and answer pointer. CoRR abs/1608.07905. [Wang et al. 2017] Wang, W.; Yang, N.; Wei, F.; Chang, B.; and Zhou, M. 2017. Gated self-matching networks for read- ing comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, 189â198. [Wang, Yan, and Wu 2018] Wang, W.; Yan, M.; and Wu, C. 2018. Multi-granularity hierarchical attention fusion net- works for reading comprehension and question answering. In Proceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), volume 1, 1705â1714. [Xiong, Zhong, and Socher 2016] Xiong, C.; Zhong, V.; and Socher, R. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604. | {
"id": "1511.02301"
} |
1810.05488 | Quantization for Rapid Deployment of Deep Neural Networks | This paper aims at rapid deployment of the state-of-the-art deep neural
networks (DNNs) to energy efficient accelerators without time-consuming fine
tuning or the availability of the full datasets. Converting DNNs in full
precision to limited precision is essential in taking advantage of the
accelerators with reduced memory footprint and computation power. However, such
a task is not trivial since it often requires the full training and validation
datasets for profiling the network statistics and fine tuning the networks to
recover the accuracy lost after quantization. To address these issues, we
propose a simple method recognizing channel-level distribution to reduce the
quantization-induced accuracy loss and minimize the required image samples for
profiling. We evaluated our method on eleven networks trained on the ImageNet
classification benchmark and a network trained on the Pascal VOC object
detection benchmark. The results prove that the networks can be quantized into
8-bit integer precision without fine tuning. | http://arxiv.org/pdf/1810.05488 | Jun Haeng Lee, Sangwon Ha, Saerom Choi, Won-Jo Lee, Seungwon Lee | cs.NE | null | null | cs.NE | 20181012 | 20181012 | 8 1 0 2
t c O 2 1 ] E N . s c [
1 v 8 8 4 5 0 . 0 1 8 1 : v i X r a
# Quantization for Rapid Deployment of Deep Neural Networks
Jun Haeng Leeâ, Sangwon Haâ, Saerom Choi, Won-Jo Lee, Seungwon Lee Samsung Advanced Institute of Technology Samsung-ro 130, Suwon-si, Republic of Korea {junhaeng2.lee, sw815.ha}@samsung.com
# Abstract
This paper aims at rapid deployment of the state-of-the-art deep neural networks (DNNs) to energy efï¬cient accelerators without time-consuming ï¬ne tuning or the availability of the full datasets. Converting DNNs in full precision to limited precision is essential in taking advantage of the accelerators with reduced memory footprint and computation power. However, such a task is not trivial since it often requires the full training and validation datasets for proï¬ling the network statistics and ï¬ne tuning the networks to recover the accuracy lost after quantization. To address these issues, we propose a simple method recognizing channel-level distribution to reduce the quantization-induced accuracy loss and minimize the required image samples for proï¬ling. We evaluated our method on eleven networks trained on the ImageNet classiï¬cation benchmark and a network trained on the Pascal VOC object detection benchmark. The results prove that the networks can be quantized into 8-bit integer precision without ï¬ne tuning.
# 1 Introduction
Deploying state-of-the-art deep neural networks (DNNs) to embedded systems is a challenging task due to the inherent nature of huge number of computations and large memory requirements. These impediments are partly caused by the considerable amount of the redundancies found in the network parameters intended for ease of training. Thus, the parameters encompasses abundant opportunities for trimming strategies, namely pruning and quantizing to low precision [1, 4, 7]. However, running DNN inference on accelerators equipped with ï¬xed-point arithmetic units in an energy efï¬cient manner also requires limiting the precision of the feature maps [15, 17, 18]. Previous works [4, 6, 15, 17] have exhibited converting pretrained DNNs to 8-bit precision does not induce any accuracy loss. The feature maps and the network parameters were quantized at the granularity of layers to accommodate for large diversities in the dynamic range across the layers. Even though they showed good results for a few popular DNNs like AlexNet, VGG-Net, or GoogLeNet, it is not clear whether it would still work for many other recent DNNs with compact architectures. From our experiments, we were able to observe that this was not the case for some of the recent state-of-the-art DNNs. For example, applying 8-bit quantization to the individual layers of the MobileNet series as done in the previous works showed large accuracy degradations. The excessive accuracy loss could be mitigated by ï¬ne tuning the quantized networks [6, 15]. However, in order to reach a competitive level of accuracy for each network with ï¬ne tuning, full-size training and validation datasets were needed to be incorporated along with painstakingly long periods of optimization. As many DNN developers only provide the pretrained networks in full precision without the training or the validation datasets from reasons such as privacy or the outright massiveness of the data size, such obstacles hinder rapid and easy deployment of DNNs in full precision to embedded accelerators designed for low precision.
âAuthours contributed equally.
1
Instead of converting a full precision pretrained network to lower precision suitable for embedded accelerators, it is also possible to train one from scratch by constraining the weights and the activations [10, 18, 28, 29]. However, achieving state-of-the-art accuracy on large benchmarks such as ImageNet classiï¬cation, increase in the network size in terms of connections [18] or integrating complicated training process [29] is required. Nevertheless, these methods have not been proven for various types of network architectures. Considering the fact that GPUs are the most popular devices for training, it is more practical to convert DNNs into lower precision after utilizing the GPUsâ full precision data path for training. In this paper, we introduce a novel technique in which ï¬ne tuning is not necessary for 8-bit linear quantization which quantizes the feature maps and the parameters for individual channels instead of layers to accommodate for the inter-channel variations in the dynamic range. Our method signiï¬cantly reduces the accuracy loss caused by quantizing to lower precision without increasing the inference computation cost. The results show that various state-of-the-art DNNs trained on the ImageNet dataset can readily be converted for 8-bit ï¬xed-point accelerators without ï¬ne tuning by using a few training samples for proï¬ling.
# 2 Low precision quantization
It is common practice to quantize the activations and the network parameters for each layer to account for the differences in the dynamic range across the layers [6, 15, 17]. Previous implementations such as Ristretto[6], a ï¬xed-point quantization simulator based on Caffe, reserves three placeholders for the fractional lengths (deï¬ned as the number of required bits for the fractional part of a ï¬xed-point number) per layer, one each for the input and output feature maps (IFM and OFM respectively) and for the layer parameters (weights and biases). At every layer, IFM, OFM, and the weights are polled separately for max values and the fractional lengths are calculated accordingly. During run-time, the MSBs and LSBs of the parameters and the activations of each layer are clipped to be containable within the given bit-width and the fractional lengths in order to emulate a generic ï¬xed-point H/W implementation. We will use the term layer-wise quantization hereafter to describe this scheme in contrast with channel-wise quantization proposed in this paper. A major down-side of the layer-wise quantization is that the inter-channel variations of the feature maps and the weights are not fully accounted for. Since the fractional length is usually selected to cover the maximum value in a layer, the layer-wise quantization tends to cause excessive information loss in channels with a smaller dynamic range. Therefore, accuracy may be degraded signiï¬cantly and sometimes never recovered even after exhaustive retraining.
# 2.1 Channel-wise quantization
In the channel-wise quantization, the fractional lengths for the feature maps and the weights can be customized for each channel to minimize the impact of low-precision rounding. Each channel of the IFMs and the OFMs has an independent fractional length based on its expected dynamic range while each channel of the kernels has a fractional length which tightly ï¬ts its known values. Figure 1 demonstrates how the IFMs and the kernels from different channels having different fractional lengths in the channel-wise quantization scheme are computed through a convolution layer compared to the layer-wise scheme. In this example, the input and the output of the convolution layer and weights are all bound to 8 bits while the partial sums are allowed to be accumulated in 32 bits as to avoid data loss. The traversal in the layer-wise scheme is straight-forward as there arenât any discrepancies while adding up the partial sums. On the other hand, the channel-wise method must cope with adding partial sums of varying fractional lengths. A naive solution would be to place a shifter in front of the partial sum adder to adjust all the partial sums to have the same fractional length. We resolve this complication by pre-coordinating the fractional lengths of the weights. The fractional length of a partial sum is determined by adding those of the IFM and the kernel being multiplied together. As the partial sums resulting from different input channels will have different fractional lengths, the smallest fractional length across all partial sums is selected as the reference. The red box in Figure 1 depicts this step. Then, the fractional lengths of the kernels in all the other channels are adjusted during the pre-processing stage to produce this reference fractional length when multiplied with their corresponding IFMs. Limitation was set on the amount adjusted so that the minimum value of the modiï¬ed fractional lengths
2
IFM & Kernel 1 PartialSum ' 8 bits : 32 bits i Q26.5 abe w, Le \ Layer-wise 1 ada Shift quantization 1 [aca] 1 1 ' . . Q24.7 ' Abits | Q25 i [vac } ' ' Q5.20 wi Loves | \ 1 Q4.3 . Shift Channel-wise ' af aca | quantization = g43 bo \ \ 1 Q1.6 wy» â>LMAS 22.9 | mint(Q24.7, Q22.9) J : â -[az4.7] Â¥ Q3.4 + Q24,.7 <ââ+ ' (b)
Figure 1: Comparison between layer-wise and channel-wise quantization in a simple convolution layer with 2 IFM channels and 1 OFM channel. Qn.m represents a ï¬xed point integer format with n bits for integer part, m bits for fractional part, and 1 bit for the sign. Total bit-width is equal to (n + m + 1) bits. min_ï¬(A, B) returns a format with the minimum fractional length. Proposed channel-wise quantization method spares more bits for fractional part (i.e. larger f l) in accumulator and adder without any extra computation cost. Thus, low-precision rounding error can be reduced during accumulation and addition.
alw wilw wicw FC Layer acw acw aicw iConv/Pool Layer| Conv/Pool Layet Concat Layer (b) (
Figure 2: Quantization policy varies with network conï¬gurations. a and w represent activation and weights, respectively. (cw: channel-wise quantization, lw: layer-wise quantization)
of the kernels would not be smaller than the layer-wise quantization. The overall procedure to determining the channel-wise fractional length is summarized in Algorithm 1. The channel-wise quantization can be applied to a fully-connected (FC) layer by considering each unit as a channel. However, for simplicity, we use the layer-wise quantization for the activations of fully-connected (FC) layers. Notwithstanding, the weights of FC layers still needs to be adapted to the preceding layer. Figure 2 shows three such scenarios where we fallback to the layer-wise quantization. In scenario (b) and (d), the activations quantized channel-wise from the previous convolution layer are multiplied with the channel-wise quantized weights of an FC layer which are pre-adjusted to yield the activations having an identical fractional length, hence, the layer-wise quantization for the activations. For scenario (c) where two FCs are stacked consecutively, the layer-wise quantization is utilized throughout the path.
# 2.2 Fractional length determination
Determining the fractional length (placing the dot in between the integer and fractional part within the given bit-width) is easier said than done. Previous works [6, 17] proï¬led the target dataset to look for the max
3
Algorithm 1 Channel-wise quantization. Proï¬ling dataset is a subset of training data set. f l stands for fractional length.
Require: network architecture, network parameters, profiling dataset Ensure: fl*", flees, fl'fâ¢, flo/â¢, shift, quantized network parameters 1. Profile weights and activations Calculate statistics of weights and activations of each channel on profiling dataset 2. Calculate channel-wise fractional lengths For each layer, calculate fl*¢â from statistics for each channel of kernels calculate flâ/â¢, f1°/⢠from statistics of activations for each channel fee = fle + fli!â for all (i,j) pairs of input and output channels dder .â yy. ( ¢ypSum F SIGE = min (70 ) for each 7 pjadder flier kel pke pSum _ ¢rbias Ly « flier â (40; â flies) shift; = plies â frel⢠3. Quantize network parameters with fl*e", flâ*s
value which became the max representable value in the dynamic range. Proï¬ling, running a network in forward path and collecting statistics, provides either an estimation of the dynamic range when executed during pre-processing stage on a subset of the training dataset or an exact ï¬t when performed during run-time on the actual data being processed. The obvious side effect is that selecting just the right size of the dataset to proï¬le is not trivial. A large dataset will increase the chance of electing an outlier as the max value which will overestimate the dynamic range. This will consequently penalize the fractional part of the ï¬xed-point representation. On the other extreme where insufï¬cient size of the proï¬ling dataset is used, the values overï¬owing the determined ï¬xed-point notation during inference will cause severe performance degradation. Lin proposed to use the n-th moments of the distribution instead of the max value to identify the optimal ï¬xed-point bit-width and the fractional length [15]. The dynamic range of the activation is determined so that the signal-to-quantization-noise-ratio (SQNR) caused by quantization would be minimized. In this case, two factors contribute to the quantization noise. The ï¬rst is the quantization error found within the dynamic range and the latter is the overload error where the values beyond the range are clipped to either the upper or the lower bound. When the number of the quantization levels is ï¬xed, there exists an optimum interval between the levels which makes these two errors balanced. This method is much less sensitive to the size of the proï¬ling dataset since the n-th moments of a distribution are more stable measures than the max value. A positive side effect is that optimum interval can be found even with a small dataset as shown in Section 3.1. Figure 3 illustrates the superpositioned probability density functions (PDFs) of the pre-activation values of the individual OFM channels measured on GoogLeNet[25] trained with the ImageNet dataset. All PDFs were normalized and shifted to have an unit variance and a mean of zero prior to compositing the functions. As can be seen from the graph, there is a large variation in the distribution of the pre-activation values. In (author?) [15], normal distribution was used to approximate them. However, the mean distribution over all the channels, shown in Figure 3, suggests Laplace distribution rather than normal distribution. We also found that the fractional length obtained by either normal or Laplace distribution tends to underestimate the dynamic range due to the heavy tails of the actual distribution in many channels. In those cases, truncated super Cauchy distribution, deï¬ned as follows, provides smaller quantization-induced noise by appropriately considering the tails.
2 if 15 <a < 15 fe) = 4 MSY] oââ (1) 0, otherwise
Here, x0 is the location parameter and γ is the scale parameter.
4
10° ââ Mean dist. ââ Laplace dist. ââ Super Cauchy dist 10 Pp y PDF 10°? 10-* -20 -15 10 5 C) 5 5 20 Normalized pre-activation
Figure 3: Superpositioned PDFs of pre-activation values of each channel (GoogLeNet w/ ImageNet dataset). Every distribution is normalized to have unit variance. Y-axis is in log scale. Mean dist. represents the averaged PDF of all channels.
# 2.3 Exploiting channel-wise PDF
Large variations in distributions across the OFM channels naturally led us to search for the optimal PDF for each channel in determining the fractional length. For this purpose, we constructed a dataset consisting of the best-ï¬t PDFs producing the highest SQNR for the individual OFM channels in GoogLeNet, Inception-v3[26], and MobileNet[9]. A simple classiï¬er was trained to select the best-ï¬t PDF during quantization by taking a vector of n-th moments of activation values in each channel. The classiï¬er was trained to choose from truncated super Cauchy or Laplace distribution. We obtained 83% classiï¬cation accuracy by using the k-nearest neighbors classiï¬er with k = 12 [21].
# 3 Benchmark results
# ImageNet classiï¬cation task
The proposed quantization method was evaluated on various state-of-the-art deep networks trained on the ImageNet dataset containing 1.2M training and 50k validation examples. Pretrained networks were quantized into 8-bit ï¬xed point format by using the proï¬ling dataset sampled from the training set and evaluated on the whole validation dataset (50k examples). Uniform linear quantization was used for all the cases. Batch normalization[12] layers were fused into convolution layers before the quantization process. Unsigned integer format was employed for the activation values with the ReLU nonlinearity. A comparison against the layer-wise quantization is summarized in Table 1. Conventional method with the layer-wise quantization based on the max value provided good quantization results for GoogLeNet, VGG16[23], and Inception-v3 which were the most popular networks in the previous quantization and pruning papers. With more recent networks such as MobileNet, MobileNet2[22], ResNet[8], Inception-v4[24], and Xception[2], severe accuracy loss was observed. We ï¬gured out that the outliers were the major source of accuracy degradation after layer-wise quantization. For example, using the max value of a parameter could signiï¬cantly overestimate its dynamic range when there are outliers with extraordinarily large values which cannot be seen in the validation set or when deployed. Carefully removing those outliers will signiï¬cantly improve the quality of quantization even if layer-wise max-based method is used. Thus, there are previous papers showing better results than our baseline layer-wise quantization. However, we did not consider such improvement in the baseline because it requires extra effort and the process itself might taint the dataset since thereâs no explicitly clear boundary of the outliers.
5
Table 1: Top-1 accuracy loss after 8-bit quantization in various large scale networks trained on the ImageNet dataset. No retraining is performed. Reference (Float32) lists baseline accuracies while all other ï¬gures are accuracy losses. Modes for determining the fractional length: MAX (reserve integer length to include at least the max value), Laplace (optimal fraction length based on Laplace distribution), S.Cauchy (optimal fraction length based on truncated super Cauchy distribution), PDF-aware (optimal fractional length based on optimum PDF for each channel). Accuracy losses above 1.0% point are in bold face.
Network Reference (Float32) Layer-wise MAX MAX Channel-wise Laplace S.Cauchy GoogLeNet[25] SqueezeNet[11] MobileNet[9] MobileNet2[22] VGG16[23] ResNet101-v2[8] ResNeXt50-32x4d[27] Inception-v3[26] Inception-v4[24] Incep.-ResNet-v2[24] Xception[2] 68.93% 58.39% 69.50% 71.23% 68.34% 78.04% 76.84% 77.97% 79.90% 80.19% 78.72% 0.23% 1.84% 5.41% 71.13% 0.29% 9.90% 1.63% 0.92% 61.6% 1.37% 54.67% 0.08% 0.15% 0.13% 0.43% 0.22% 0.23% 1.17% 0.66% 0.66% 1.73% 1.81% 3.09% -0.01% -0.04% 0.01% 1.58% 1.01% 0.74% 0.78% 0.65% 0.51% 0.18% 0.09% 0.23% 26.07% -0.06% 0.07% 1.11% 0.64% 0.18% 1.11% 0.60% 0.48% 0.05% 0.27% 0.73% 1.68% -0.06% 0.83% 0.32% 0.24% 0.13% 0.36% 0.33%
We evaluated the channel-wise quantization in four modes depending on the method to determine the fractional lengths: MAX, Laplace, S.Cauchy, and PDF-aware. In MAX mode, the max values of the activation tensors were used to decide the factional lengths of the feature maps. As for the Laplace or S.Cauchy modes, the optimal fractional lengths were estimated from the n-th moments of the activations by assuming PDF as either Laplace or truncated super Cauchy distribution. Regardless of the modes, the channel-wise quantization exhibited signiï¬cantly improved accuracy losses for all the mentioned networks. In the MAX mode, we still observed a large accuracy degradation in the Inception-v4 network. We discovered that there were extremely large outliers in the activations of a few layers causing signiï¬cant overestimation of their dynamic ranges. This problem can be resolved by applying other modes (Laplace, S.Cauchy, or PDF-aware). The Laplace and S.Cauchy modes showed similar performance overall but different behavior depending on the network. The best result came with the PDF-aware mode which selects the best-ï¬t PDF for each channel. Figure 4 illustrates the required size of the proï¬ling dataset for the MAX and the OPT methods when measured on Inception-v3. Fractional lengths were calculated based on randomly selected images from the ImageNet training dataset. The MAX method required a large number of samples (>100) to reach a stable accuracy, whereas a few samples were enough to stabilize the accuracy for the OPT method. Since most of the published networks are trained in full precision while accelerators mandate low-precision representation, being able to readily port a network with just a few training samples is a huge advantage for easy deployment of pretrained full-precision DNNs. Accordingly, the proposed quantization method is able to reach a competitive accuracy without the need for proï¬ling a large number of samples or ï¬ne tuning.
# 3.2 Object detection
We performed network quantization on YOLO-v2[20], a state-of-the-art object detection network. The network was trained and tested on the Pascal VOC dataset [5]. Table 2 shows the loss in mean AP after quantization using our method in comparison with the layer-wise quantization. The layer-wise quantization caused 2.5% point drop in mean AP after quantization. However, our method did not suffer from such a problem by selecting the fractional lengths adapted to the individual channels.
6
Inception-v3: Top-1 Accuracy 80.0% 78.0% + 76.0% T40% pono ft sccecsccececccceecensececctheecensecenstees 72.0% 70.0% 68.0% âFloat32 Accuracy 66.0% âMAX 64.0% > - 620% âLaplace 60.0% t t 1 10 100 1000 Num of profiling image samples
Figure 4: Effect of proï¬ling dataset size on accuracy with quantization for Inception-v3. MAX method requires large number of samples for proï¬ling to reach a stable accuracy. Laplace method stabilizes quickly with a few samples.
Table 2: Loss in mean AP after 8-bit quantization in YOLO-v2. No retraining performed. âReference (Float32)â lists baseline accuracy while all other ï¬gures are accuracy losses. Loss above 1.0% point is in bold face.
Network Reference (Float32) Layer-wise MAX MAX Channel-wise Laplace S.Cauchy PDF-aware YOLO-v2[20] 72.64% 2.50% 0.14% 0.22% 0.70% 0.38%
7
# 4 Related works
Han quantized the network parameters after pruning for compression in (author?) [7]. Kim proposed on using Hessian-weighted clustering to achieve a better compression ratio in quantization [1]. However, in those works, only the network parameters were quantized to save storage space leaving the feature maps in full-precision. Both the activations and the network parameters were quantized layer-wise to accommodate the large variations in the dynamic range across the layers in (author?) [4, 6]. The max values found in activations were used to decide on the fractional lengths, and intensive ï¬ne tuning were required to recover accuracies degraded by quantization in some networks. Lin used SQNR instead of the max value to minimize the bit-width for each layer and optimized DNNs for ï¬xed-point operation [15]. Migacz achieved linear quantization for 8-bit integer operation without ï¬ne tuning by minimizing the information loss with Kullback-Leibler (KL) divergence [17]. Unfortunately, collection of the activation histograms were required from a large number of samples. All in all, these methods used the layer-wise quantization scheme. Aggressively lowering the precision to be under 4 bits for both the weights and the activations have been actively explored [3, 10, 13, 14, 16, 19, 28]. Although they revealed impressive results on small benchmarks, there is still a huge gap in accuracy on large benchmarks such as the ImageNet classiï¬cation using state-of- the-art networks trained in full precision. Recent progress shows that it is possible to reduce the precision of DNNs to 4 bits without sacriï¬cing accuracy by increasing the network size [18] or training the networks in multiple stages with guided training [29]. These work focus on training DNNs for low-precision inference from scratch rather than quantizing pretrained full-precision networks.
# 5 Conclusion
In this paper, we proposed a set of methods for rapid deployment of DNNs trained in full precision to ï¬xed point accelerators with limited precision computation units. The channel-wise quantization recognizes the inter-channel diversities in the dynamic range of the feature maps. HW cost for implementation is minimized by adjusting the fractional lengths of the kernel parameters. We evaluated our method on eleven state-of-the-art DNNs trained on the ImageNet dataset and an object detection network trained on Pascal VOC dataset. In comparison to the previous method (i.e the layer-wise quantization), the channel-wise quantization can reduce the accuracy loss caused by quantization substantially. We also showed that quantization requires just a few image samples if we utilize the n-th moments of the activations instead of the maximum value. In this way, deployment is possible even when just a few training samples are available for the trained network model. We further improved our method by considering the variations in distribution across the channels. A simple classiï¬er was used for selecting the best-ï¬t PDF for each channel from the statistical features. We were able to accomplish negligible accuracy loss (less than 1% point in eleven networks out of twelve) after quantization without ï¬ne tuning.
# References
[1] Y. Choi, M. El-Khamy, and J. Lee. Towards the limit of network quantization. arXiv preprint arXiv:1612.01543v2, 2017.
[2] F. Chollet. Xception: Deep learning with depthwise separable convolutions. CoRR, abs/1610.02357, 2016.
[3] M. Courbariaux, Y. Bengio, and J.-P. David. Binaryconnect: Training deep neural networks with binary weights during propagations. Advances in Neural Information Processing Systems (NIPS), pages 3123â3131, 2015.
[4] M. Courbariaux, J.-P. David, and Y. Bengio. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024v4, 2015.
[5] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98â136, Jan. 2015.
[6] P. Gysel. Ristretto: Hardware-oriented approximation of convolutional neural networks. arXiv preprint arXiv:1605.06402, 2016.
8
[7] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
[9] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efï¬cient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017.
[10] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:609.07061, 2016.
[11] F. N. Iandola, M. W. Moskewicz, K. Ashraf, S. Han, W. J. Dally, and K. Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5mb model size. CoRR, abs/1602.07360, 2016.
[12] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[13] C. Leng, H. Li, S. Zhu, and R. Jin. Extremely low bit neural network: Squeeze the last bit out with admm. arXiv preprint arXiv:1707.09870, 2017.
[14] F. Li, B. Zhang, and B. Liu. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016.
[15] S. Lin, Darryl smf Talathi and S. Annapureddy. Fixed point quantization of deep convolutional networks. International Conference on Machine Learning (ICML), pages 344â352, 2016.
[16] X. Lin, C. Zhao, and W. Pan. Towards accurate binary convolutional neural network. Advances in Neural Information Processing Systems (NIPS), pages 344â352, 2017.
[17] S. Migacz. 8-bit inference with tensorrt. In NVIDIA GPU Technology Conference (GTC), 2017.
[18] A. Mishra, E. Nurvitadhi, J. J. Cook, and D. Marr. Wrpn: Wide reduced-precision networks. arXiv preprint arXiv:170901134, 2017.
[19] M. Rastegariy, V. Ordonezy, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classiï¬cation using binary convolutional neural networks. arXiv preprint arXiv:1605.06402, 2016.
[20] J. Redmon and A. Farhadi. Yolo9000: Better, faster, stronger. arXiv preprint arXiv:1612.08242, 2016.
[21] R. J. Samworth. Optimal weighted nearest neighbour classiï¬ers. arXiv preprint arXiv:11015783v3, 2013.
[22] M. Sandler, A. G. Howard, M. Zhu, A. Zhmoginov, and L. Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. CoRR, abs/1801.04381, 2018.
[23] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[24] C. Szegedy, S. Ioffe, and V. Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. CoRR, abs/1602.07261, 2016.
[25] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.
[26] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567, 2015.
[27] S. Xie, R. B. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. CoRR, abs/1611.05431, 2016.
[28] S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1605.06402, 2016.
[29] B. Zhuang, C. Shen, M. Tan, L. Liu, and I. Reid. Towards effective lowbitwidth convolutional neural networks. arXiv preprint arXiv:1711.00205, 2017.
9 | {
"id": "1502.03167"
} |
1810.04805 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | We introduce a new language representation model called BERT, which stands
for Bidirectional Encoder Representations from Transformers. Unlike recent
language representation models, BERT is designed to pre-train deep
bidirectional representations from unlabeled text by jointly conditioning on
both left and right context in all layers. As a result, the pre-trained BERT
model can be fine-tuned with just one additional output layer to create
state-of-the-art models for a wide range of tasks, such as question answering
and language inference, without substantial task-specific architecture
modifications.
BERT is conceptually simple and empirically powerful. It obtains new
state-of-the-art results on eleven natural language processing tasks, including
pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI
accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering
Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1
(5.1 point absolute improvement). | http://arxiv.org/pdf/1810.04805 | Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova | cs.CL | null | null | cs.CL | 20181011 | 20190524 | 9 1 0 2
y a M 4 2 ] L C . s c [
2 v 5 0 8 4 0 . 0 1 8 1 : v i X r a
# BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
# Jacob Devlin Ming-Wei Chang Kenton Lee Kristina Toutanova Google AI Language {jacobdevlin,mingweichang,kentonl,kristout}@google.com
# Abstract
We introduce a new language representa- tion model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language repre- sentation models (Peters et al., 2018a; Rad- ford et al., 2018), BERT is designed to pre- train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a re- sult, the pre-trained BERT model can be ï¬ne- tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task- speciï¬c architecture modiï¬cations.
There are two existing strategies for apply- ing pre-trained language representations to down- stream tasks: feature-based and ï¬ne-tuning. The feature-based approach, such as ELMo (Peters et al., 2018a), uses task-speciï¬c architectures that include the pre-trained representations as addi- tional features. The ï¬ne-tuning approach, such as the Generative Pre-trained Transformer (OpenAI GPT) (Radford et al., 2018), introduces minimal task-speciï¬c parameters, and is trained on the downstream tasks by simply ï¬ne-tuning all pre- trained parameters. The two approaches share the same objective function during pre-training, where they use unidirectional language models to learn general language representations.
BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art re- sults on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answer- ing Test F1 to 93.2 (1.5 point absolute im- provement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
# Introduction
Language model pre-training has been shown to be effective for improving many natural language processing tasks (Dai and Le, 2015; Peters et al., 2018a; Radford et al., 2018; Howard and Ruder, 2018). These include sentence-level tasks such as natural language inference (Bowman et al., 2015; Williams et al., 2018) and paraphrasing (Dolan and Brockett, 2005), which aim to predict the re- lationships between sentences by analyzing them holistically, as well as token-level tasks such as named entity recognition and question answering, where models are required to produce ï¬ne-grained output at the token level (Tjong Kim Sang and De Meulder, 2003; Rajpurkar et al., 2016).
We argue that current techniques restrict the power of the pre-trained representations, espe- cially for the ï¬ne-tuning approaches. The ma- jor limitation is that standard language models are unidirectional, and this limits the choice of archi- tectures that can be used during pre-training. For example, in OpenAI GPT, the authors use a left-to- right architecture, where every token can only at- tend to previous tokens in the self-attention layers of the Transformer (Vaswani et al., 2017). Such re- strictions are sub-optimal for sentence-level tasks, and could be very harmful when applying ï¬ne- tuning based approaches to token-level tasks such as question answering, where it is crucial to incor- porate context from both directions.
In this paper, we improve the ï¬ne-tuning based approaches by proposing BERT: Bidirectional Encoder Representations from Transformers. BERT alleviates the previously mentioned unidi- rectionality constraint by using a âmasked lan- guage modelâ (MLM) pre-training objective, in- spired by the Cloze task (Taylor, 1953). The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked
word based only on its context. Unlike left-to- right language model pre-training, the MLM ob- jective enables the representation to fuse the left and the right context, which allows us to pre- In addi- train a deep bidirectional Transformer. tion to the masked language model, we also use a ânext sentence predictionâ task that jointly pre- trains text-pair representations. The contributions of our paper are as follows:
⢠We demonstrate the importance of bidirectional pre-training for language representations. Un- like Radford et al. (2018), which uses unidirec- tional language models for pre-training, BERT uses masked language models to enable pre- trained deep bidirectional representations. This is also in contrast to Peters et al. (2018a), which uses a shallow concatenation of independently trained left-to-right and right-to-left LMs.
⢠We show that pre-trained representations reduce the need for many heavily-engineered task- speciï¬c architectures. BERT is the ï¬rst ï¬ne- tuning based representation model that achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks, outper- forming many task-speciï¬c architectures.
⢠BERT advances the state of the art for eleven NLP tasks. The code and pre-trained mod- els are available at https://github.com/ google-research/bert.
# 2 Related Work
There is a long history of pre-training general lan- guage representations, and we brieï¬y review the most widely-used approaches in this section.
# 2.1 Unsupervised Feature-based Approaches
Learning widely applicable representations of words has been an active area of research for decades, including non-neural (Brown et al., 1992; Ando and Zhang, 2005; Blitzer et al., 2006) and neural (Mikolov et al., 2013; Pennington et al., 2014) methods. Pre-trained word embeddings are an integral part of modern NLP systems, of- fering signiï¬cant improvements over embeddings learned from scratch (Turian et al., 2010). To pre- train word embedding vectors, left-to-right lan- guage modeling objectives have been used (Mnih and Hinton, 2009), as well as objectives to dis- criminate correct from incorrect words in left and right context (Mikolov et al., 2013).
These approaches have been generalized to coarser granularities, such as sentence embed- dings (Kiros et al., 2015; Logeswaran and Lee, 2018) or paragraph embeddings (Le and Mikolov, 2014). To train sentence representations, prior work has used objectives to rank candidate next sentences (Jernite et al., 2017; Logeswaran and Lee, 2018), left-to-right generation of next sen- tence words given a representation of the previous sentence (Kiros et al., 2015), or denoising auto- encoder derived objectives (Hill et al., 2016).
ELMo and its predecessor (Peters et al., 2017, 2018a) generalize traditional word embedding re- search along a different dimension. They extract context-sensitive features from a left-to-right and a right-to-left language model. The contextual rep- resentation of each token is the concatenation of the left-to-right and right-to-left representations. When integrating contextual word embeddings with existing task-speciï¬c architectures, ELMo advances the state of the art for several major NLP benchmarks (Peters et al., 2018a) including ques- tion answering (Rajpurkar et al., 2016), sentiment analysis (Socher et al., 2013), and named entity recognition (Tjong Kim Sang and De Meulder, 2003). Melamud et al. (2016) proposed learning contextual representations through a task to pre- dict a single word from both left and right context using LSTMs. Similar to ELMo, their model is feature-based and not deeply bidirectional. Fedus et al. (2018) shows that the cloze task can be used to improve the robustness of text generation mod- els.
# 2.2 Unsupervised Fine-tuning Approaches
As with the feature-based approaches, the ï¬rst works in this direction only pre-trained word em- (Col- bedding parameters from unlabeled text lobert and Weston, 2008).
More recently, sentence or document encoders which produce contextual token representations have been pre-trained from unlabeled text and ï¬ne-tuned for a supervised downstream task (Dai and Le, 2015; Howard and Ruder, 2018; Radford et al., 2018). The advantage of these approaches is that few parameters need to be learned from scratch. At least partly due to this advantage, OpenAI GPT (Radford et al., 2018) achieved pre- viously state-of-the-art results on many sentence- level tasks from the GLUE benchmark (Wang language model- Left-to-right et al., 2018a).
Masked Sentence A Masked Sentence B Unlabeled Sentence A and B Pair Pre-training Starv/End cer Question Paragraph t Question Answer Pair KAA / Fine-Tuning
Figure 1: Overall pre-training and ï¬ne-tuning procedures for BERT. Apart from output layers, the same architec- tures are used in both pre-training and ï¬ne-tuning. The same pre-trained model parameters are used to initialize models for different down-stream tasks. During ï¬ne-tuning, all parameters are ï¬ne-tuned. [CLS] is a special symbol added in front of every input example, and [SEP] is a special separator token (e.g. separating ques- tions/answers).
ing and auto-encoder objectives have been used for pre-training such models (Howard and Ruder, 2018; Radford et al., 2018; Dai and Le, 2015).
mal difference between the pre-trained architec- ture and the ï¬nal downstream architecture.
# 2.3 Transfer Learning from Supervised Data
There has also been work showing effective trans- fer from supervised tasks with large datasets, such as natural language inference (Conneau et al., 2017) and machine translation (McCann et al., 2017). Computer vision research has also demon- strated the importance of transfer learning from large pre-trained models, where an effective recipe is to ï¬ne-tune models pre-trained with Ima- geNet (Deng et al., 2009; Yosinski et al., 2014).
Model Architecture BERTâs model architec- ture is a multi-layer bidirectional Transformer en- coder based on the original implementation de- scribed in Vaswani et al. (2017) and released in the tensor2tensor library.1 Because the use of Transformers has become common and our im- plementation is almost identical to the original, we will omit an exhaustive background descrip- tion of the model architecture and refer readers to Vaswani et al. (2017) as well as excellent guides such as âThe Annotated Transformer.â2
# 3 BERT
We introduce BERT and its detailed implementa- tion in this section. There are two steps in our framework: pre-training and ï¬ne-tuning. Dur- ing pre-training, the model is trained on unlabeled data over different pre-training tasks. For ï¬ne- tuning, the BERT model is ï¬rst initialized with the pre-trained parameters, and all of the param- eters are ï¬ne-tuned using labeled data from the downstream tasks. Each downstream task has sep- arate ï¬ne-tuned models, even though they are ini- tialized with the same pre-trained parameters. The question-answering example in Figure 1 will serve as a running example for this section.
A distinctive feature of BERT is its uniï¬ed ar- chitecture across different tasks. There is mini-
In this work, we denote the number of layers (i.e., Transformer blocks) as L, the hidden size as H, and the number of self-attention heads as A.3 We primarily report results on two model sizes: BERTBASE (L=12, H=768, A=12, Total Param- eters=110M) and BERTLARGE (L=24, H=1024, A=16, Total Parameters=340M).
BERTBASE was chosen to have the same model size as OpenAI GPT for comparison purposes. Critically, however, the BERT Transformer uses bidirectional self-attention, while the GPT Trans- former uses constrained self-attention where every token can only attend to context to its left.4
1https://github.com/tensorï¬ow/tensor2tensor 2http://nlp.seas.harvard.edu/2018/04/03/attention.html 3In all cases we set the feed-forward/ï¬lter size to be 4H, i.e., 3072 for the H = 768 and 4096 for the H = 1024. 4We note that in the literature the bidirectional Trans-
Input/Output Representations To make BERT handle a variety of down-stream tasks, our input representation is able to unambiguously represent both a single sentence and a pair of sentences (e.g., (Question, Answer )) in one token sequence. Throughout this work, a âsentenceâ can be an arbi- trary span of contiguous text, rather than an actual linguistic sentence. A âsequenceâ refers to the in- put token sequence to BERT, which may be a sin- gle sentence or two sentences packed together.
We use WordPiece embeddings (Wu et al., 2016) with a 30,000 token vocabulary. The ï¬rst token of every sequence is always a special clas- siï¬cation token ([CLS]). The ï¬nal hidden state corresponding to this token is used as the ag- gregate sequence representation for classiï¬cation tasks. Sentence pairs are packed together into a single sequence. We differentiate the sentences in two ways. First, we separate them with a special token ([SEP]). Second, we add a learned embed- ding to every token indicating whether it belongs to sentence A or sentence B. As shown in Figure 1, we denote input embedding as E, the ï¬nal hidden vector of the special [CLS] token as C â RH , and the ï¬nal hidden vector for the ith input token as Ti â RH .
For a given token, its input representation is constructed by summing the corresponding token, segment, and position embeddings. A visualiza- tion of this construction can be seen in Figure 2.
# 3.1 Pre-training BERT
Unlike Peters et al. (2018a) and Radford et al. (2018), we do not use traditional left-to-right or right-to-left language models to pre-train BERT. Instead, we pre-train BERT using two unsuper- vised tasks, described in this section. This step is presented in the left part of Figure 1.
Task #1: Masked LM Intuitively, it is reason- able to believe that a deep bidirectional model is strictly more powerful than either a left-to-right model or the shallow concatenation of a left-to- right and a right-to-left model. Unfortunately, standard conditional language models can only be trained left-to-right or right-to-left, since bidirec- tional conditioning would allow each word to in- directly âsee itselfâ, and the model could trivially predict the target word in a multi-layered context.
former is often referred to as a âTransformer encoderâ while the left-context-only version is referred to as a âTransformer decoderâ since it can be used for text generation.
In order to train a deep bidirectional representa- tion, we simply mask some percentage of the input tokens at random, and then predict those masked tokens. We refer to this procedure as a âmasked LMâ (MLM), although it is often referred to as a Cloze task in the literature (Taylor, 1953). In this case, the ï¬nal hidden vectors corresponding to the mask tokens are fed into an output softmax over the vocabulary, as in a standard LM. In all of our experiments, we mask 15% of all WordPiece to- kens in each sequence at random. In contrast to denoising auto-encoders (Vincent et al., 2008), we only predict the masked words rather than recon- structing the entire input.
Although this allows us to obtain a bidirec- tional pre-trained model, a downside is that we are creating a mismatch between pre-training and ï¬ne-tuning, since the [MASK] token does not ap- pear during ï¬ne-tuning. To mitigate this, we do not always replace âmaskedâ words with the ac- tual [MASK] token. The training data generator chooses 15% of the token positions at random for prediction. If the i-th token is chosen, we replace the i-th token with (1) the [MASK] token 80% of the time (2) a random token 10% of the time (3) the unchanged i-th token 10% of the time. Then, Ti will be used to predict the original token with cross entropy loss. We compare variations of this procedure in Appendix C.2.
Task #2: Next Sentence Prediction (NSP) Many important downstream tasks such as Ques- tion Answering (QA) and Natural Language Infer- ence (NLI) are based on understanding the rela- tionship between two sentences, which is not di- rectly captured by language modeling. In order to train a model that understands sentence rela- tionships, we pre-train for a binarized next sen- tence prediction task that can be trivially gener- ated from any monolingual corpus. Speciï¬cally, when choosing the sentences A and B for each pre- training example, 50% of the time B is the actual next sentence that follows A (labeled as IsNext), and 50% of the time it is a random sentence from the corpus (labeled as NotNext). As we show in Figure 1, C is used for next sentence predic- tion (NSP).5 Despite its simplicity, we demon- strate in Section 5.1 that pre-training towards this task is very beneï¬cial to both QA and NLI. 6
5The ï¬nal model achieves 97%-98% accuracy on NSP. 6The vector C is not a meaningful sentence representation
without ï¬ne-tuning, since it was trained with NSP.
Input {cs} my dog is cute [SEP] he | likes play ##ing | [SEP] Token Embeddings Eris) En Ek0g ES cute E sep) Exe Elikes Enay Ex sing E sep) + + + + + + + + + + + Segment Embeddings E, Ey E. E, E, E, E, E. EB E, E. + + + + + + + + + + + Position Embeddings E, E E, E, E, E. E. E, E, E, Exo
Figure 2: BERT input representation. The input embeddings are the sum of the token embeddings, the segmenta- tion embeddings and the position embeddings.
The NSP task is closely related to representation- learning objectives used in Jernite et al. (2017) and Logeswaran and Lee (2018). However, in prior work, only sentence embeddings are transferred to down-stream tasks, where BERT transfers all pa- rameters to initialize end-task model parameters.
(4) a degenerate text-â
pair in text classiï¬cation or sequence tagging. At the output, the token rep- resentations are fed into an output layer for token- level tasks, such as sequence tagging or question answering, and the [CLS] representation is fed into an output layer for classiï¬cation, such as en- tailment or sentiment analysis.
Pre-training data The pre-training procedure largely follows the existing literature on language model pre-training. For the pre-training corpus we use the BooksCorpus (800M words) (Zhu et al., 2015) and English Wikipedia (2,500M words). For Wikipedia we extract only the text passages and ignore lists, tables, and headers. It is criti- cal to use a document-level corpus rather than a shufï¬ed sentence-level corpus such as the Billion Word Benchmark (Chelba et al., 2013) in order to extract long contiguous sequences.
Compared to pre-training, ï¬ne-tuning is rela- tively inexpensive. All of the results in the pa- per can be replicated in at most 1 hour on a sin- gle Cloud TPU, or a few hours on a GPU, starting from the exact same pre-trained model.7 We de- scribe the task-speciï¬c details in the correspond- ing subsections of Section 4. More details can be found in Appendix A.5.
# 4 Experiments
# 3.2 Fine-tuning BERT
In this section, we present BERT ï¬ne-tuning re- sults on 11 NLP tasks.
Fine-tuning is straightforward since the self- attention mechanism in the Transformer al- lows BERT to model many downstream tasksâ whether they involve single text or text pairsâby swapping out the appropriate inputs and outputs. For applications involving text pairs, a common pattern is to independently encode text pairs be- fore applying bidirectional cross attention, such as Parikh et al. (2016); Seo et al. (2017). BERT instead uses the self-attention mechanism to unify these two stages, as encoding a concatenated text pair with self-attention effectively includes bidi- rectional cross attention between two sentences.
For each task, we simply plug in the task- speciï¬c inputs and outputs into BERT and ï¬ne- tune all the parameters end-to-end. At the in- put, sentence A and sentence B from pre-training are analogous to (1) sentence pairs in paraphras- ing, (2) hypothesis-premise pairs in entailment, (3) question-passage pairs in question answering, and
# 4.1 GLUE
The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018a) is a col- lection of diverse natural language understanding tasks. Detailed descriptions of GLUE datasets are included in Appendix B.1.
To ï¬ne-tune on GLUE, we represent the input sequence (for single sentence or sentence pairs) as described in Section 3, and use the ï¬nal hid- den vector C â RH corresponding to the ï¬rst input token ([CLS]) as the aggregate representa- tion. The only new parameters introduced during ï¬ne-tuning are classiï¬cation layer weights W â RKÃH , where K is the number of labels. We com- pute a standard classiï¬cation loss with C and W , i.e., log(softmax(CW T )).
7For example, the BERT SQuAD model can be trained in around 30 minutes on a single Cloud TPU to achieve a Dev F1 score of 91.0%.
# 8See (10) in https://gluebenchmark.com/faq.
MNLI-(m/mm) 392k 80.6/80.1 76.4/76.1 82.1/81.4 84.6/83.4 86.7/85.9 QQP 363k 66.1 64.8 70.3 71.2 72.1 QNLI 108k 82.3 79.8 87.4 90.5 92.7 SST-2 67k 93.2 90.4 91.3 93.5 94.9 CoLA 8.5k 35.0 36.0 45.4 52.1 60.5 STS-B MRPC 5.7k 81.0 73.3 80.0 85.8 86.5 3.5k 86.0 84.9 82.3 88.9 89.3 RTE 2.5k 61.7 56.8 56.0 66.4 70.1 Average - 74.0 71.0 75.1 79.6 82.1
Table 1: GLUE Test results, scored by the evaluation server (https://gluebenchmark.com/leaderboard). The number below each task denotes the number of training examples. The âAverageâ column is slightly different than the ofï¬cial GLUE score, since we exclude the problematic WNLI set.8 BERT and OpenAI GPT are single- model, single task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. We exclude entries that use BERT as one of their components.
We use a batch size of 32 and ï¬ne-tune for 3 epochs over the data for all GLUE tasks. For each task, we selected the best ï¬ne-tuning learning rate (among 5e-5, 4e-5, 3e-5, and 2e-5) on the Dev set. Additionally, for BERTLARGE we found that ï¬ne- tuning was sometimes unstable on small datasets, so we ran several random restarts and selected the best model on the Dev set. With random restarts, we use the same pre-trained checkpoint but per- form different ï¬ne-tuning data shufï¬ing and clas- siï¬er layer initialization.9
Both BERTBASE and BERTLARGE outperform all sys- tems on all tasks by a substantial margin, obtaining 4.5% and 7.0% respective average accuracy im- provement over the prior state of the art. Note that BERTBASE and OpenAI GPT are nearly identical in terms of model architecture apart from the at- tention masking. For the largest and most widely reported GLUE task, MNLI, BERT obtains a 4.6% absolute accuracy improvement. On the ofï¬cial GLUE leaderboard10, BERTLARGE obtains a score of 80.5, compared to OpenAI GPT, which obtains 72.8 as of the date of writing.
We ï¬nd that BERTLARGE signiï¬cantly outper- forms BERTBASE across all tasks, especially those with very little training data. The effect of model size is explored more thoroughly in Section 5.2.
# 4.2 SQuAD v1.1
Wikipedia containing the answer, the task is to predict the answer text span in the passage.
As shown in Figure 1, in the question answer- ing task, we represent the input question and pas- sage as a single packed sequence, with the ques- tion using the A embedding and the passage using the B embedding. We only introduce a start vec- tor S â RH and an end vector E â RH during ï¬ne-tuning. The probability of word i being the start of the answer span is computed as a dot prod- uct between Ti and S followed by a softmax over all of the words in the paragraph: Pi = eS·Ti j eS·Tj . The analogous formula is used for the end of the answer span. The score of a candidate span from position i to position j is deï¬ned as S·Ti + E·Tj, and the maximum scoring span where j ⥠i is used as a prediction. The training objective is the sum of the log-likelihoods of the correct start and end positions. We ï¬ne-tune for 3 epochs with a learning rate of 5e-5 and a batch size of 32.
Table 2 shows top leaderboard entries as well as results from top published systems (Seo et al., 2017; Clark and Gardner, 2018; Peters et al., 2018a; Hu et al., 2018). The top results from the SQuAD leaderboard do not have up-to-date public system descriptions available,11 and are allowed to use any public data when training their systems. We therefore use modest data augmentation in our system by ï¬rst ï¬ne-tuning on TriviaQA (Joshi et al., 2017) befor ï¬ne-tuning on SQuAD.
The Stanford Question Answering Dataset (SQuAD v1.1) is a collection of 100k crowd- sourced question/answer pairs (Rajpurkar et al., 2016). Given a question and a passage from
9The GLUE data set distribution does not include the Test labels, and we only made a single GLUE evaluation server submission for each of BERTBASE and BERTLARGE.
# 10https://gluebenchmark.com/leaderboard
Our best performing system outperforms the top leaderboard system by +1.5 F1 in ensembling and +1.3 F1 as a single system. In fact, our single BERT model outperforms the top ensemble sys- tem in terms of F1 score. Without TriviaQA ï¬ne-
11QANet is described in Yu et al. (2018), but the system has improved substantially after publication.
System Dev Test EM F1 EM F1 Top Leaderboard Systems (Dec 10th, 2018) Human #1 Ensemble - nlnet #2 Ensemble - QANet - - - - - - 82.3 91.2 86.0 91.7 84.5 90.5 Published BiDAF+ELMo (Single) R.M. Reader (Ensemble) 85.8 81.2 87.9 82.3 88.5 - 85.6 - Ours BERTBASE (Single) BERTLARGE (Single) BERTLARGE (Ensemble) BERTLARGE (Sgl.+TriviaQA) 84.2 91.1 85.1 91.8 BERTLARGE (Ens.+TriviaQA) 86.2 92.2 87.4 93.2 80.8 88.5 84.1 90.9 85.8 91.8 - - - - - -
Table 2: SQuAD 1.1 results. The BERT ensemble is 7x systems which use different pre-training check- points and ï¬ne-tuning seeds.
System Dev Test EM F1 EM F1 Top Leaderboard Systems (Dec 10th, 2018) Human #1 Single - MIR-MRC (F-Net) #2 Single - nlnet 86.3 89.0 86.9 89.5 74.8 78.0 74.2 77.1 - - - - Published unet (Ensemble) SLQA+ (Single) - - - 71.4 74.9 71.4 74.4 Ours BERTLARGE (Single) 78.7 81.9 80.0 83.1
Table 3: SQuAD 2.0 results. We exclude entries that use BERT as one of their components.
tuning data, we only lose 0.1-0.4 F1, still outper- forming all existing systems by a wide margin.12
# 4.3 SQuAD v2.0
The SQuAD 2.0 task extends the SQuAD 1.1 problem deï¬nition by allowing for the possibility that no short answer exists in the provided para- graph, making the problem more realistic.
We use a simple approach to extend the SQuAD v1.1 BERT model for this task. We treat ques- tions that do not have an answer as having an an- swer span with start and end at the [CLS] to- ken. The probability space for the start and end answer span positions is extended to include the position of the [CLS] token. For prediction, we compare the score of the no-answer span: snull = S·C + E·C to the score of the best non-null span
12The TriviaQA data we used consists of paragraphs from TriviaQA-Wiki formed of the ï¬rst 400 tokens in documents, that contain at least one of the provided possible answers.
System Dev Test ESIM+GloVe ESIM+ELMo OpenAI GPT 51.9 52.7 59.1 59.2 78.0 - BERTBASE BERTLARGE 81.6 86.6 86.3 - Human (expert)â Human (5 annotations)â - - 85.0 88.0
Table 4: SWAG Dev and Test accuracies. â Human per- formance is measured with 100 samples, as reported in the SWAG paper.
Ësi,j = maxjâ¥iS·Ti + E·Tj. We predict a non-null answer when Ësi,j > snull + Ï , where the thresh- old Ï is selected on the dev set to maximize F1. We did not use TriviaQA data for this model. We ï¬ne-tuned for 2 epochs with a learning rate of 5e-5 and a batch size of 48.
The results compared to prior leaderboard en- tries and top published work (Sun et al., 2018; Wang et al., 2018b) are shown in Table 3, exclud- ing systems that use BERT as one of their com- ponents. We observe a +5.1 F1 improvement over the previous best system.
# 4.4 SWAG
The Situations With Adversarial Generations (SWAG) dataset contains 113k sentence-pair com- pletion examples that evaluate grounded common- sense inference (Zellers et al., 2018). Given a sen- tence, the task is to choose the most plausible con- tinuation among four choices.
When ï¬ne-tuning on the SWAG dataset, we construct four input sequences, each containing the concatenation of the given sentence (sentence A) and a possible continuation (sentence B). The only task-speciï¬c parameters introduced is a vec- tor whose dot product with the [CLS] token rep- resentation C denotes a score for each choice which is normalized with a softmax layer.
We ï¬ne-tune the model for 3 epochs with a learning rate of 2e-5 and a batch size of 16. Re- sults are presented in Table 4. BERTLARGE out- performs the authorsâ baseline ESIM+ELMo sys- tem by +27.1% and OpenAI GPT by 8.3%.
# 5 Ablation Studies
In this section, we perform ablation experiments over a number of facets of BERT in order to better understand their relative importance. Additional
Tasks Dev Set MNLI-m QNLI MRPC SST-2 SQuAD (Acc) (Acc) (Acc) (Acc) (F1) BERTBASE No NSP LTR & No NSP + BiLSTM 84.4 83.9 82.1 82.1 88.4 84.9 84.3 84.1 86.7 86.5 77.5 75.7 92.7 92.6 92.1 91.6 88.5 87.9 77.8 84.9
Table 5: Ablation over the pre-training tasks using the BERTBASE architecture. âNo NSPâ is trained without the next sentence prediction task. âLTR & No NSPâ is trained as a left-to-right LM without the next sentence prediction, like OpenAI GPT. â+ BiLSTMâ adds a ran- domly initialized BiLSTM on top of the âLTR + No NSPâ model during ï¬ne-tuning.
ablation studies can be found in Appendix C.
# 5.1 Effect of Pre-training Tasks
We demonstrate the importance of the deep bidi- rectionality of BERT by evaluating two pre- training objectives using exactly the same pre- training data, ï¬ne-tuning scheme, and hyperpa- rameters as BERTBASE:
No NSP: A bidirectional model which is trained using the âmasked LMâ (MLM) but without the ânext sentence predictionâ (NSP) task. LTR & No NSP: A left-context-only model which is trained using a standard Left-to-Right (LTR) LM, rather than an MLM. The left-only constraint was also applied at ï¬ne-tuning, because removing it introduced a pre-train/ï¬ne-tune mismatch that degraded downstream performance. Additionally, this model was pre-trained without the NSP task. This is directly comparable to OpenAI GPT, but using our larger training dataset, our input repre- sentation, and our ï¬ne-tuning scheme.
We ï¬rst examine the impact brought by the NSP In Table 5, we show that removing NSP task. hurts performance signiï¬cantly on QNLI, MNLI, and SQuAD 1.1. Next, we evaluate the impact of training bidirectional representations by com- paring âNo NSPâ to âLTR & No NSPâ. The LTR model performs worse than the MLM model on all tasks, with large drops on MRPC and SQuAD.
For SQuAD it is intuitively clear that a LTR model will perform poorly at token predictions, since the token-level hidden states have no right- side context. In order to make a good faith at- tempt at strengthening the LTR system, we added a randomly initialized BiLSTM on top. This does signiï¬cantly improve results on SQuAD, but the
results are still far worse than those of the pre- trained bidirectional models. The BiLSTM hurts performance on the GLUE tasks.
We recognize that it would also be possible to train separate LTR and RTL models and represent each token as the concatenation of the two mod- els, as ELMo does. However: (a) this is twice as expensive as a single bidirectional model; (b) this is non-intuitive for tasks like QA, since the RTL model would not be able to condition the answer on the question; (c) this it is strictly less powerful than a deep bidirectional model, since it can use both left and right context at every layer.
# 5.2 Effect of Model Size
In this section, we explore the effect of model size on ï¬ne-tuning task accuracy. We trained a number of BERT models with a differing number of layers, hidden units, and attention heads, while otherwise using the same hyperparameters and training pro- cedure as described previously.
Results on selected GLUE tasks are shown in Table 6. In this table, we report the average Dev Set accuracy from 5 random restarts of ï¬ne-tuning. We can see that larger models lead to a strict ac- curacy improvement across all four datasets, even for MRPC which only has 3,600 labeled train- ing examples, and is substantially different from the pre-training tasks. It is also perhaps surpris- ing that we are able to achieve such signiï¬cant improvements on top of models which are al- ready quite large relative to the existing literature. For example, the largest Transformer explored in Vaswani et al. (2017) is (L=6, H=1024, A=16) with 100M parameters for the encoder, and the largest Transformer we have found in the literature is (L=64, H=512, A=2) with 235M parameters (Al-Rfou et al., 2018). By contrast, BERTBASE contains 110M parameters and BERTLARGE con- tains 340M parameters.
It has long been known that increasing the model size will lead to continual improvements on large-scale tasks such as machine translation and language modeling, which is demonstrated by the LM perplexity of held-out training data shown in Table 6. However, we believe that this is the ï¬rst work to demonstrate convinc- ingly that scaling to extreme model sizes also leads to large improvements on very small scale tasks, provided that the model has been sufï¬- ciently pre-trained. Peters et al. (2018b) presented
mixed results on the downstream task impact of increasing the pre-trained bi-LM size from two to four layers and Melamud et al. (2016) men- tioned in passing that increasing hidden dimen- sion size from 200 to 600 helped, but increasing further to 1,000 did not bring further improve- ments. Both of these prior works used a feature- based approach â we hypothesize that when the model is ï¬ne-tuned directly on the downstream tasks and uses only a very small number of ran- domly initialized additional parameters, the task- speciï¬c models can beneï¬t from the larger, more expressive pre-trained representations even when downstream task data is very small.
# 5.3 Feature-based Approach with BERT
All of the BERT results presented so far have used the ï¬ne-tuning approach, where a simple classiï¬- cation layer is added to the pre-trained model, and all parameters are jointly ï¬ne-tuned on a down- stream task. However, the feature-based approach, where ï¬xed features are extracted from the pre- trained model, has certain advantages. First, not all tasks can be easily represented by a Trans- former encoder architecture, and therefore require a task-speciï¬c model architecture to be added. Second, there are major computational beneï¬ts to pre-compute an expensive representation of the training data once and then run many experiments with cheaper models on top of this representation. In this section, we compare the two approaches by applying BERT to the CoNLL-2003 Named Entity Recognition (NER) task (Tjong Kim Sang and De Meulder, 2003). In the input to BERT, we use a case-preserving WordPiece model, and we include the maximal document context provided by the data. Following standard practice, we for- mulate this as a tagging task but do not use a CRF
Hyperparams Dev Set Accuracy #L #H #A LM (ppl) MNLI-m MRPC SST-2 768 12 3 768 3 6 768 12 6 12 768 12 12 1024 16 24 1024 16 5.84 5.24 4.68 3.99 3.54 3.23 77.9 80.6 81.9 84.4 85.7 86.6 79.8 82.2 84.8 86.7 86.9 87.8 88.4 90.7 91.3 92.9 93.3 93.7
Table 6: Ablation over BERT model size. #L = the number of layers; #H = hidden size; #A = number of at- tention heads. âLM (ppl)â is the masked LM perplexity of held-out training data.
System Dev F1 Test F1 ELMo (Peters et al., 2018a) CVT (Clark et al., 2018) CSE (Akbik et al., 2018) 95.7 - - 92.2 92.6 93.1 Fine-tuning approach BERTLARGE BERTBASE 96.6 96.4 92.8 92.4 Feature-based approach (BERTBASE) Embeddings Second-to-Last Hidden Last Hidden Weighted Sum Last Four Hidden Concat Last Four Hidden Weighted Sum All 12 Layers 91.0 95.6 94.9 95.9 96.1 95.5 - - - - - -
Table 7: CoNLL-2003 Named Entity Recognition re- sults. Hyperparameters were selected using the Dev set. The reported Dev and Test scores are averaged over 5 random restarts using those hyperparameters.
layer in the output. We use the representation of the ï¬rst sub-token as the input to the token-level classiï¬er over the NER label set.
To ablate the ï¬ne-tuning approach, we apply the feature-based approach by extracting the activa- tions from one or more layers without ï¬ne-tuning any parameters of BERT. These contextual em- beddings are used as input to a randomly initial- ized two-layer 768-dimensional BiLSTM before the classiï¬cation layer.
Results are presented in Table 7. BERTLARGE performs competitively with state-of-the-art meth- ods. The best performing method concatenates the token representations from the top four hidden lay- ers of the pre-trained Transformer, which is only 0.3 F1 behind ï¬ne-tuning the entire model. This demonstrates that BERT is effective for both ï¬ne- tuning and feature-based approaches.
# 6 Conclusion
Recent empirical improvements due to transfer learning with language models have demonstrated that rich, unsupervised pre-training is an integral part of many language understanding systems. In particular, these results enable even low-resource tasks to beneï¬t from deep unidirectional architec- tures. Our major contribution is further general- izing these ï¬ndings to deep bidirectional architec- tures, allowing the same pre-trained model to suc- cessfully tackle a broad set of NLP tasks.
# References
Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence In Proceedings of the 27th International labeling. Conference on Computational Linguistics, pages 1638â1649.
Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. 2018. Character-level lan- guage modeling with deeper self-attention. arXiv preprint arXiv:1808.04444.
Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6(Nov):1817â1853.
Ido Dagan, Hoa Trang Dang, and Danilo Giampiccolo. 2009. The ï¬fth PASCAL recognizing textual entailment challenge. In TAC. NIST.
John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspon- dence learning. In Proceedings of the 2006 confer- ence on empirical methods in natural language pro- cessing, pages 120â128. Association for Computa- tional Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In EMNLP. Association for Computational Linguis- tics.
Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational linguistics, 18(4):467â479.
Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and In Proceedings crosslingual focused evaluation. of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1â14, Vancou- ver, Canada. Association for Computational Lin- guistics.
Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2013. One billion word benchmark for measur- ing progress in statistical language modeling. arXiv preprint arXiv:1312.3005.
Z. Chen, H. Zhang, X. Zhang, and L. Zhao. 2018. Quora question pairs.
Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehen- sion. In ACL.
Kevin Clark, Minh-Thang Luong, Christopher D Man- Semi-supervised se- ning, and Quoc Le. 2018. quence modeling with cross-view training. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 1914â 1925.
Ronan Collobert and Jason Weston. 2008. A uniï¬ed architecture for natural language processing: Deep In Pro- neural networks with multitask learning. ceedings of the 25th international conference on Machine learning, pages 160â167. ACM.
Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670â680, Copen- hagen, Denmark. Association for Computational Linguistics.
Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural informa- tion processing systems, pages 3079â3087.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei- ImageNet: A Large-Scale Hierarchical Fei. 2009. Image Database. In CVPR09.
William B Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
William Fedus, Ian Goodfellow, and Andrew M Dai. 2018. Maskgan: Better text generation via ï¬lling in the . arXiv preprint arXiv:1801.07736.
Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaus- sian error linear units. CoRR, abs/1606.08415.
Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences In Proceedings of the 2016 from unlabelled data. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computa- tional Linguistics.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model ï¬ne-tuning for text classiï¬cation. In ACL. Association for Computational Linguistics.
Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Reinforced Furu Wei, and Ming Zhou. 2018. mnemonic reader for machine reading comprehen- sion. In IJCAI.
Yacine Jernite, Samuel R. Bowman, and David Son- tag. 2017. Discourse-based objectives for fast un- supervised sentence representation learning. CoRR, abs/1705.00557.
Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. In ACL.
Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294â3302.
Quoc Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents. In Inter- national Conference on Machine Learning, pages 1188â1196.
Hector J Levesque, Ernest Davis, and Leora Morgen- stern. 2011. The winograd schema challenge. In Aaai spring symposium: Logical formalizations of commonsense reasoning, volume 46, page 47.
Lajanugen Logeswaran and Honglak Lee. 2018. An efï¬cient framework for learning sentence represen- In International Conference on Learning tations. Representations.
Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Con- textualized word vectors. In NIPS.
Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context em- bedding with bidirectional LSTM. In CoNLL.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems 26, pages 3111â3119. Curran Associates, Inc.
Andriy Mnih and Geoffrey E Hinton. 2009. A scal- able hierarchical distributed language model. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bot- tou, editors, Advances in Neural Information Pro- cessing Systems 21, pages 1081â1088. Curran As- sociates, Inc.
Ankur P Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In EMNLP.
Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1532â 1543.
Matthew Peters, Waleed Ammar, Chandra Bhagavat- ula, and Russell Power. 2017. Semi-supervised se- quence tagging with bidirectional language models. In ACL.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word rep- resentations. In NAACL.
Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 1499â1509.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing with unsupervised learning. Technical re- port, OpenAI.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 2383â2392.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention ï¬ow for machine comprehension. In ICLR.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- In Proceedings of the 2013 conference on bank. empirical methods in natural language processing, pages 1631â1642.
Fu Sun, Linyang Li, Xipeng Qiu, and Yang Liu. 2018. U-net: Machine reading comprehension arXiv preprint with unanswerable questions. arXiv:1810.06638.
Wilson L Taylor. 1953. Cloze procedure: A new tool for measuring readability. Journalism Bulletin, 30(4):415â433.
Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In CoNLL.
Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Compu- tational Linguistics, ACL â10, pages 384â394.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 6000â6010.
Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoen- In Proceedings of the 25th international coders. conference on Machine learning, pages 1096â1103. ACM.
Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018a. Glue: A multi-task benchmark and analysis platform
for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: An- alyzing and Interpreting Neural Networks for NLP, pages 353â355.
Wei Wang, Ming Yan, and Chen Wu. 2018b. Multi- granularity hierarchical attention fusion networks for reading comprehension and question answering. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers). Association for Computational Lin- guistics.
Alex Warstadt, Amanpreet Singh, and Samuel R Bow- man. 2018. Neural network acceptability judg- ments. arXiv preprint arXiv:1805.12471.
Adina Williams, Nikita Nangia, and Samuel R Bow- man. 2018. A broad-coverage challenge corpus In for sentence understanding through inference. NAACL.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Googleâs neural ma- chine translation system: Bridging the gap between arXiv preprint human and machine translation. arXiv:1609.08144.
Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks? In Advances in neural information processing systems, pages 3320â3328.
Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. QANet: Combining local convolution with global self-attention for reading comprehen- sion. In ICLR.
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies In Proceedings of the IEEE and reading books. international conference on computer vision, pages 19â27.
# Appendix for âBERT: Pre-training of Deep Bidirectional Transformers for Language Understandingâ
We organize the appendix into three sections:
⢠Additional implementation details for BERT are presented in Appendix A;
⢠Additional details for our experiments are presented in Appendix B; and
⢠Additional ablation studies are presented in Appendix C.
We present additional ablation studies for BERT including:
â Effect of Number of Training Steps; and â Ablation for Different Masking Proce-
dures.
# A Additional Details for BERT
# A.1 Illustration of the Pre-training Tasks
We provide examples of the pre-training tasks in the following.
Masked LM and the Masking Procedure As- suming the unlabeled sentence is my dog is hairy, and during the random masking procedure we chose the 4-th token (which corresponding to hairy), our masking procedure can be further il- lustrated by
⢠80% of the time: Replace the word with the [MASK] token, e.g., my dog is hairy â my dog is [MASK]
⢠10% of the time: Replace the word with a random word, e.g., my dog is hairy â my dog is apple
the time: Keep the word un- changed, e.g., my dog is hairy â my dog is hairy. The purpose of this is to bias the representation towards the actual observed word.
The advantage of this procedure is that the Transformer encoder does not know which words it will be asked to predict or which have been re- placed by random words, so it is forced to keep a distributional contextual representation of ev- ery input token. Additionally, because random replacement only occurs for 1.5% of all tokens (i.e., 10% of 15%), this does not seem to harm the modelâs language understanding capability. In Section C.2, we evaluate the impact this proce- dure.
Compared to standard langauge model training, the masked LM only make predictions on 15% of tokens in each batch, which suggests that more pre-training steps may be required for the model
BERT (Ours) OpenAl GPT
Figure 3: Differences in pre-training model architectures. BERT uses a bidirectional Transformer. OpenAI GPT uses a left-to-right Transformer. ELMo uses the concatenation of independently trained left-to-right and right-to- left LSTMs to generate features for downstream tasks. Among the three, only BERT representations are jointly conditioned on both left and right context in all layers. In addition to the architecture differences, BERT and OpenAI GPT are ï¬ne-tuning approaches, while ELMo is a feature-based approach.
to converge. In Section C.1 we demonstrate that MLM does converge marginally slower than a left- to-right model (which predicts every token), but the empirical improvements of the MLM model far outweigh the increased training cost.
Next Sentence Prediction The next sentence prediction task can be illustrated in the following examples.
Input = [CLS] the man went to [MASK] store [SEP]
epochs over the 3.3 billion word corpus. We use Adam with learning rate of 1e-4, β1 = 0.9, β2 = 0.999, L2 weight decay of 0.01, learning rate warmup over the ï¬rst 10,000 steps, and linear decay of the learning rate. We use a dropout prob- ability of 0.1 on all layers. We use a gelu acti- vation (Hendrycks and Gimpel, 2016) rather than the standard relu, following OpenAI GPT. The training loss is the sum of the mean masked LM likelihood and the mean next sentence prediction likelihood.
he bought a gallon [MASK] milk [SEP]
Label = IsNext
Input = [CLS] the man [MASK] to the store [SEP]
Training of BERTBASE was performed on 4 Cloud TPUs in Pod conï¬guration (16 TPU chips total).13 Training of BERTLARGE was performed on 16 Cloud TPUs (64 TPU chips total). Each pre- training took 4 days to complete.
penguin [MASK] are flight ##less birds [SEP]
Label = NotNext
# A.2 Pre-training Procedure
To generate each training input sequence, we sam- ple two spans of text from the corpus, which we refer to as âsentencesâ even though they are typ- ically much longer than single sentences (but can be shorter also). The ï¬rst sentence receives the A embedding and the second receives the B embed- ding. 50% of the time B is the actual next sentence that follows A and 50% of the time it is a random sentence, which is done for the ânext sentence pre- dictionâ task. They are sampled such that the com- bined length is ⤠512 tokens. The LM masking is applied after WordPiece tokenization with a uni- form masking rate of 15%, and no special consid- eration given to partial word pieces.
Longer sequences are disproportionately expen- sive because attention is quadratic to the sequence length. To speed up pretraing in our experiments, we pre-train the model with sequence length of 128 for 90% of the steps. Then, we train the rest 10% of the steps of sequence of 512 to learn the positional embeddings.
# A.3 Fine-tuning Procedure
For ï¬ne-tuning, most model hyperparameters are the same as in pre-training, with the exception of the batch size, learning rate, and number of train- ing epochs. The dropout probability was always kept at 0.1. The optimal hyperparameter values are task-speciï¬c, but we found the following range of possible values to work well across all tasks:
# ⢠Batch size: 16, 32
We train with batch size of 256 sequences (256 sequences * 512 tokens = 128,000 tokens/batch) for 1,000,000 steps, which is approximately 40
13https://cloudplatform.googleblog.com/2018/06/Cloud- TPU-now-offers-preemptible-pricing-and-global- availability.html
Learning rate (Adam): 5e-5, 3e-5, 2e-5 ⢠Number of epochs: 2, 3, 4
We also observed that large data sets (e.g., 100k+ labeled training examples) were far less sensitive to hyperparameter choice than small data sets. Fine-tuning is typically very fast, so it is rea- sonable to simply run an exhaustive search over the above parameters and choose the model that performs best on the development set.
# A.4 Comparison of BERT, ELMo ,and OpenAI GPT
Here we studies the differences in recent popular representation learning models including ELMo, OpenAI GPT and BERT. The comparisons be- tween the model architectures are shown visually in Figure 3. Note that in addition to the architec- ture differences, BERT and OpenAI GPT are ï¬ne- tuning approaches, while ELMo is a feature-based approach.
The most comparable existing pre-training method to BERT is OpenAI GPT, which trains a left-to-right Transformer LM on a large text cor- pus. In fact, many of the design decisions in BERT were intentionally made to make it as close to GPT as possible so that the two methods could be minimally compared. The core argument of this work is that the bi-directionality and the two pre- training tasks presented in Section 3.1 account for the majority of the empirical improvements, but we do note that there are several other differences between how BERT and GPT were trained:
⢠GPT is trained on the BooksCorpus (800M words); BERT is trained on the BooksCor- pus (800M words) and Wikipedia (2,500M words).
⢠GPT uses a sentence separator ([SEP]) and classiï¬er token ([CLS]) which are only in- troduced at ï¬ne-tuning time; BERT learns [SEP], [CLS] and sentence A/B embed- dings during pre-training.
⢠GPT was trained for 1M steps with a batch size of 32,000 words; BERT was trained for 1M steps with a batch size of 128,000 words.
⢠GPT used the same learning rate of 5e-5 for all ï¬ne-tuning experiments; BERT chooses a task-speciï¬c ï¬ne-tuning learning rate which performs the best on the development set.
To isolate the effect of these differences, we per- form ablation experiments in Section 5.1 which demonstrate that the majority of the improvements are in fact coming from the two pre-training tasks and the bidirectionality they enable.
A.5
# Illustrations of Fine-tuning on Different Tasks
The illustration of ï¬ne-tuning BERT on different tasks can be seen in Figure 4. Our task-speciï¬c models are formed by incorporating BERT with one additional output layer, so a minimal num- ber of parameters need to be learned from scratch. Among the tasks, (a) and (b) are sequence-level tasks while (c) and (d) are token-level tasks. In the ï¬gure, E represents the input embedding, Ti represents the contextual representation of token i, [CLS] is the special symbol for classiï¬cation out- put, and [SEP] is the special symbol to separate non-consecutive token sequences.
# B Detailed Experimental Setup
# B.1 Detailed Descriptions for the GLUE Benchmark Experiments.
Our GLUE results in Table1 are obtained https://gluebenchmark.com/ from leaderboard https://blog. and openai.com/language-unsupervised. The GLUE benchmark includes the following datasets, the descriptions of which were originally summarized in Wang et al. (2018a):
MNLI Multi-Genre Natural Language Inference is a large-scale, crowdsourced entailment classiï¬- cation task (Williams et al., 2018). Given a pair of sentences, the goal is to predict whether the sec- ond sentence is an entailment, contradiction, or neutral with respect to the ï¬rst one.
QQP Quora Question Pairs is a binary classiï¬- cation task where the goal is to determine if two questions asked on Quora are semantically equiv- alent (Chen et al., 2018).
QNLI Question Natural Language Inference is a version of the Stanford Question Answering Dataset (Rajpurkar et al., 2016) which has been converted to a binary classiï¬cation task (Wang et al., 2018a). The positive examples are (ques- tion, sentence) pairs which do contain the correct answer, and the negative examples are (question, sentence) from the same paragraph which do not contain the answer.
Class Label (ea) - Sentence 1 Sentence 2 (a) Sentence Pair Classification Tasks: MNLI, QQP, QNLI, STS-B, MRPC, RTE, SWAG Start/End Span Single Sentence (b) Single Sentence Classification Tasks: SST-2, CoLA Question Paragraph (c) Question Answering Tasks: SQUAD v1.1 Single Sentence (d) Single Sentence Tagging Tasks: CoNLL-2003 NER
Figure 4: Illustrations of Fine-tuning BERT on Different Tasks.
SST-2 The Stanford Sentiment Treebank is a binary single-sentence classiï¬cation task consist- ing of sentences extracted from movie reviews with human annotations of their sentiment (Socher et al., 2013).
for whether the sentences in the pair are semanti- cally equivalent (Dolan and Brockett, 2005).
RTE Recognizing Textual Entailment is a bi- nary entailment task similar to MNLI, but with much less training data (Bentivogli et al., 2009).14
CoLA The Corpus of Linguistic Acceptability is a binary single-sentence classiï¬cation task, where the goal is to predict whether an English sentence is linguistically âacceptableâ or not (Warstadt et al., 2018).
STS-B The Semantic Textual Similarity Bench- mark is a collection of sentence pairs drawn from news headlines and other sources (Cer et al., 2017). They were annotated with a score from 1 to 5 denoting how similar the two sentences are in terms of semantic meaning.
MRPC Microsoft Research Paraphrase Corpus consists of sentence pairs automatically extracted from online news sources, with human annotations
WNLI Winograd NLI is a small natural lan- guage inference dataset (Levesque et al., 2011). The GLUE webpage notes that there are issues with the construction of this dataset, 15 and every trained system thatâs been submitted to GLUE has performed worse than the 65.1 baseline accuracy of predicting the majority class. We therefore ex- clude this set to be fair to OpenAI GPT. For our GLUE submission, we always predicted the ma-
14Note that we only report single-task ï¬ne-tuning results in this paper. A multitask ï¬ne-tuning approach could poten- tially push the performance even further. For example, we did observe substantial improvements on RTE from multi- task training with MNLI.
# 15https://gluebenchmark.com/faq
jority class.
# C Additional Ablation Studies
# C.1 Effect of Number of Training Steps
Figure 5 presents MNLI Dev accuracy after ï¬ne- tuning from a checkpoint that has been pre-trained for k steps. This allows us to answer the following questions:
1. Question: Does BERT really need such a large amount of pre-training (128,000 words/batch * 1,000,000 steps) to achieve high ï¬ne-tuning accuracy? Answer: Yes, BERTBASE achieves almost 1.0% additional accuracy on MNLI when trained on 1M steps compared to 500k steps.
2. Question: Does MLM pre-training converge slower than LTR pre-training, since only 15% of words are predicted in each batch rather than every word? Answer: The MLM model does converge slightly slower than the LTR model. How- ever, in terms of absolute accuracy the MLM model begins to outperform the LTR model almost immediately.
# C.2 Ablation for Different Masking Procedures
In Section 3.1, we mention that BERT uses a mixed strategy for masking the target tokens when pre-training with the masked language model (MLM) objective. The following is an ablation study to evaluate the effect of different masking strategies.
84 y c a r u c c A v e D 82 80 I L N M 78 76 BERTBASE (Masked LM) BERTBASE (Left-to-Right) 200 400 600 800 1,000 Pre-training Steps (Thousands)
Figure 5: Ablation over number of training steps. This shows the MNLI accuracy after ï¬ne-tuning, starting from model parameters that have been pre-trained for k steps. The x-axis is the value of k.
Note that the purpose of the masking strategies is to reduce the mismatch between pre-training and ï¬ne-tuning, as the [MASK] symbol never ap- pears during the ï¬ne-tuning stage. We report the Dev results for both MNLI and NER. For NER, we report both ï¬ne-tuning and feature-based ap- proaches, as we expect the mismatch will be am- pliï¬ed for the feature-based approach as the model will not have the chance to adjust the representa- tions.
Masking Rates Dev Set Results MASK SAME RND MNLI NER Fine-tune Fine-tune Feature-based 80% 10% 10% 0% 0% 100% 0% 20% 80% 80% 20% 0% 0% 20% 80% 0% 100% 0% 84.2 84.3 84.1 84.4 83.7 83.6 95.4 94.9 95.2 95.2 94.8 94.9 94.9 94.0 94.6 94.7 94.6 94.6
Table 8: Ablation over different masking strategies.
The results are presented in Table 8. In the table, MASK means that we replace the target token with the [MASK] symbol for MLM; SAME means that we keep the target token as is; RND means that we replace the target token with another random token.
The numbers in the left part of the table repre- sent the probabilities of the speciï¬c strategies used during MLM pre-training (BERT uses 80%, 10%, 10%). The right part of the paper represents the Dev set results. For the feature-based approach, we concatenate the last 4 layers of BERT as the features, which was shown to be the best approach in Section 5.3.
From the table it can be seen that ï¬ne-tuning is surprisingly robust to different masking strategies. However, as expected, using only the MASK strat- egy was problematic when applying the feature- based approach to NER. Interestingly, using only the RND strategy performs much worse than our strategy as well. | {
"id": "1810.06638"
} |
1810.03548 | Meta-Learning: A Survey | Meta-learning, or learning to learn, is the science of systematically
observing how different machine learning approaches perform on a wide range of
learning tasks, and then learning from this experience, or meta-data, to learn
new tasks much faster than otherwise possible. Not only does this dramatically
speed up and improve the design of machine learning pipelines or neural
architectures, it also allows us to replace hand-engineered algorithms with
novel approaches learned in a data-driven way. In this chapter, we provide an
overview of the state of the art in this fascinating and continuously evolving
field. | http://arxiv.org/pdf/1810.03548 | Joaquin Vanschoren | cs.LG, stat.ML | null | null | cs.LG | 20181008 | 20181008 | 2018:
8 1 0 2
# t c O 8
]
arXiv:1810.03548v1 [cs.LG]
# G L . s c [
1 v 8 4 5 3 0 . 0 1 8 1 : v i X r a
Meta-Learning: A Survey
# Meta-Learning: A Survey
# Joaquin Vanschoren Eindhoven University of Technology 5600MB Eindhoven, The Netherlands
j.vanschoren@tue.nl
Abstract Meta-learning, or learning to learn, is the science of systematically observing how diï¬erent machine learning approaches perform on a wide range of learning tasks, and then learning from this experience, or meta-data, to learn new tasks much faster than otherwise possible. Not only does this dramatically speed up and improve the design of machine learning pipelines or neural architectures, it also allows us to replace hand-engineered algorithms with novel approaches learned in a data-driven way. In this chapter, we provide an overview of the state of the art in this fascinating and continuously evolving ï¬eld.
# 1. Introduction
When we learn new skills, we rarely - if ever - start from scratch. We start from skills learned earlier in related tasks, reuse approaches that worked well before, and focus on what is likely worth trying based on experience (Lake et al., 2017). With every skill learned, learning new skills becomes easier, requiring fewer examples and less trial-and-error. In short, we learn how to learn across tasks. Likewise, when building machine learning models for a speciï¬c task, we often build on experience with related tasks, or use our (often implicit) understanding of the behavior of machine learning techniques to help make the right choices. The challenge in meta-learning is to learn from prior experience in a systematic, data- driven way. First, we need to collect meta-data that describe prior learning tasks and previously learned models. They comprise the exact algorithm conï¬gurations used to train the models, including hyperparameter settings, pipeline compositions and/or network ar- chitectures, the resulting model evaluations, such as accuracy and training time, the learned model parameters, such as the trained weights of a neural net, as well as measurable proper- ties of the task itself, also known as meta-features. Second, we need to learn from this prior meta-data, to extract and transfer knowledge that guides the search for optimal models for new tasks. This chapter presents a concise overview of diï¬erent meta-learning approaches to do this eï¬ectively.
The term meta-learning covers any type of learning based on prior experience with other tasks. The more similar those previous tasks are, the more types of meta-data we can leverage, and deï¬ning task similarity will be a key overarching challenge. Perhaps needless to say, there is no free lunch (Wolpert and Macready, 1996; Giraud-Carrier and Provost, 2005). When a new task represents completely unrelated phenomena, or random noise, leveraging prior experience will not be eï¬ective. Luckily, in real-world tasks, there are plenty of opportunities to learn from prior experience.
In the remainder of this chapter, we categorize meta-learning techniques based on the type of meta-data they leverage, from the most general to the most task-speciï¬c. First, in Section 2, we discuss how to learn purely from model evaluations. These techniques can
1
Joaquin Vanschoren
be used to recommend generally useful conï¬gurations and conï¬guration search spaces, as well as transfer knowledge from empirically similar tasks. In Section 3, we discuss how we can characterize tasks to more explicitly express task similarity and build meta-models that learn the relationships between data characteristics and learning performance. Finally, Section 4 covers how we can transfer trained model parameters between tasks that are inherently similar, e.g. sharing the same input features, which enables transfer learning (Pan and Yang, 2010) and few-shot learning (Ravi and Larochelle, 2017).
Note that while multi-task learning (Caruana, 1997) (learning multiple related tasks simultaneously) and ensemble learning (Dietterich, 2000) (building multiple models on the same task), can often be meaningfully combined with meta-learning systems, they do not in themselves involve learning from prior experience on other tasks.
# 2. Learning from Model Evaluations
Consider that we have access to prior tasks tj â T , the set of all known tasks, as well as a set of learning algorithms, fully deï¬ned by their conï¬gurations θi â Î; here Î represents a discrete, continuous, or mixed conï¬guration space which can cover hyperparameter settings, pipeline components and/or network architecture components. P is the set of all prior scalar evaluations Pi,j = P (θi, tj) of conï¬guration θi on task tj, according to a predeï¬ned evaluation measure, e.g. accuracy, and model evaluation technique, e.g. cross-validation. Pnew is the set of known evaluations Pi,new on a new task tnew. We now want to train a meta-learner L that predicts recommended conï¬gurations Îâ new for a new task tnew. The meta-learner is trained on meta-data P ⪠Pnew. P is usually gathered beforehand, or extracted from meta-data repositories (Vanschoren et al., 2014, 2012). Pnew is learned by the meta-learning technique itself in an iterative fashion, sometimes warm-started with an initial Pâ²
2.1 Task-Independent Recommendations First, imagine not having access to any evaluations on tnew, hence Pnew = â
. We can then still learn a function f : Î Ã T â {θâ k}, k = 1..K, yielding a set of recommended conï¬gurations independent of tnew. These θâ k can then be evaluated on tnew to select the best one, or to warm-start further optimization approaches, such as those discussed in Section 2.3.
k. This is typically done by discretizing Î into a set of candidate conï¬gurations θi, also called a portfolio, evaluated on a large number of tasks tj. We can then build a ranking per task, for in- stance using success rates, AUC, or signiï¬cant wins (Brazdil et al., 2003a; DemËsar, 2006; Leite et al., 2012). However, it is often desirable that equally good but faster algorithms are ranked higher, and multiple methods have been proposed to trade oï¬ accuracy and training time (Brazdil et al., 2003a; van Rijn et al., 2015). Next, we can aggregate these single-task rankings into a global ranking, for instance by computing the average rank (Lin, 2010; Abdulrahman et al., 2018) across all tasks. When there is insuï¬cient data to build a global ranking, one can recommend subsets of conï¬gurations based on the best known con- ï¬gurations for each prior task (Todorovski and Dzeroski, 1999; Kalousis, 2002), or return quasi-linear rankings (Cook et al., 1996).
2
Meta-Learning: A Survey
To ï¬nd the best θâ for a task tnew, never before seen, a simple anytime method is to select the top-K conï¬gurations (Brazdil et al., 2003a), going down the list and evaluating each conï¬guration on tnew in turn. This evaluation can be halted after a predeï¬ned value for K, a time budget, or when a suï¬ciently accurate model is found. In time-constrained settings, it has been shown that multi-objective rankings (including training time) converge to near-optimal models much faster (Abdulrahman et al., 2018; van Rijn et al., 2015), and provide a strong baseline for algorithm comparisons (Abdulrahman et al., 2018; Leite et al., 2012).
A very diï¬erent approach to the one above is to ï¬rst ï¬t a diï¬erentiable function fj(θi) = Pi,j on all prior evaluations of a speciï¬c task tj, and then use gradient descent to ï¬nd an optimized conï¬guration θâ j per prior task (Wistuba et al., 2015a). Assuming that some of the tasks tj will be similar to tnew, those θâ j will be useful for warm-starting Bayesian optimization approaches.
# 2.2 Conï¬guration Space Design
Prior evaluations can also be used to learn a better conï¬guration space Îâ. While again independent from tnew, this can radically speed up the search for optimal models, since only the more relevant regions of the conï¬guration space are explored. This is critical when computational resources are limited, and proves to be an important factor in practical comparisons of AutoML systems (De Sa et al., 2017).
First, in the functional ANOVA (Hutter et al., 2014a) approach, hyperparameters are deemed important if they explain most of the variance in algorithm performance on a given task. van Rijn and Hutter (2018) evaluated this technique using 250,000 OpenML experiments with 3 algorithms across 100 datasets.
An alternative approach is to ï¬rst learn an optimal hyperparameter default setting, and then deï¬ne hyperparameter importance as the performance gain that can be achieved by tuning the hyperparameter instead of leaving it at that default value. Indeed, even though a hyperparameter may cause a lot of variance, it may also have one speciï¬c setting that always results in good performance. Probst et al. (2018) do this using about 500,000 OpenML experiments on 6 algorithms and 38 datasets. Default values are learned jointly for all hyperparameters of an algorithm by ï¬rst training surrogate models for that algorithm for a large number of tasks. Next, many conï¬gurations are sampled, and the conï¬guration that minimizes the average risk across all tasks is the recommended default conï¬guration. Finally, the importance (or tunability) of each hyperparameter is estimated by observing how much improvement can still be gained by tuning it.
Weerts et al. (2018) learn defaults independently from other hyperparameters, and de- ï¬ned as the conï¬gurations that occur most frequently in the top-K conï¬gurations for every task. In the case that the optimal default value depends on meta-features (e.g. the number of training instances or features), simple functions are learned that include these meta- features. Next, a statistical test deï¬nes whether a hyperparameter can be safely left at this default, based on the performance loss observed when not tuning a hyperparameter (or a set of hyperparameters), while all other parameters are tuned. This was evaluated using 118,000 OpenML experiments with 2 algorithms (SVMs and Random Forests) across 59 datasets.
3
Joaquin Vanschoren
# 2.3 Conï¬guration Transfer
If we want to provide recommendations for a speciï¬c task tnew, we need additional informa- tion on how similar tnew is to prior tasks tj. One way to do this is to evaluate a number of recommended (or potentially random) conï¬gurations on tnew, yielding new evidence Pnew. If we then observe that the evaluations Pi,new are similar to Pi,j, then tj and tnew can be considered intrinsically similar, based on empirical evidence. We can include this knowledge to train a meta-learner that predicts a recommended set of conï¬gurations Îâ new for tnew. Moreover, every selected θâ new can be evaluated and included in Pnew, repeating the cycle and collecting more empirical evidence to learn which tasks are similar to each other.
# 2.3.1 Relative Landmarks
A ï¬rst measure for task similarity considers the relative (pairwise) performance diï¬erences, also called relative landmarks, RLa,b,j = Pa,j â Pb,j between two conï¬gurations θa and θb on a particular task tj (F¨urnkranz and Petrak, 2001). Active testing (Leite et al., 2012) leverages these as follows: it warm-starts with the globally best conï¬guration (see Section 2.1), calls it θbest, and proceeds in a tournament-style fashion. In each round, it selects the âcompetitorâ θc that most convincingly outperforms θbest on similar tasks. It deems tasks to be similar if the relative landmarks of all evaluated conï¬gurations are similar, i.e., if the conï¬gurations perform similarly on both tj and tnew then the tasks are deemed similar. Next, it evaluates the competitor θc, yielding Pc,new, updates the task similarities, and repeats. A limitation of this method is that it can only consider conï¬gurations θi that were evaluated on many prior tasks.
# 2.3.2 Surrogate Models
A more ï¬exible way to transfer information is to build surrogate models sj(θi) = Pi,j for all prior tasks tj, trained using all available P. One can then deï¬ne task similarity in terms of the error between sj(θi) and Pi,new: if the surrogate model for tj can generate accurate predictions for tnew, then those tasks are intrinsically similar. This is usually done in combination with Bayesian optimization Rasmussen (2004) to determine the next θi.
Wistuba et al. (2018) train surrogate models based on Gaussian Processes (GPs) for every prior task, plus one for tnew, and combine them into a weighted, normalized sum, with the (new) mean µ deï¬ned as the weighted sum of the individual µjâs (obtained from prior tasks tj). The weights of the µjâs are computed using the Nadaraya-Watson kernel- weighted average, where each task is represented as a vector of relative landmarks, and the Epanechnikov quadratic kernel (Nadaraya, 1964) is used to measure the similarity between the relative landmark vectors of tj and tnew. The more similar tj is to tnew, the larger the weight sj, increasing the inï¬uence of the surrogate model for tj.
Feurer et al. (2018a) propose to combine the predictive distributions of the individ- ual Gaussian processes, which makes the combined model a Gaussian process again. The weights are computed following the agnostic Bayesian ensemble of Lacoste et al. (2014), which weights predictors according to an estimate of their generalization performance.
Meta-data can also be transferred in the acquisition function rather than the surrogate model (Wistuba et al., 2018). The surrogate model is only trained on Pi,new, but the next θi to evaluate is provided by an acquisition function which is the weighted average of the
4
Meta-Learning: A Survey
expected improvement (Jones et al., 1998) on Pi,new and the predicted improvements on all prior Pi,j. The weights of the prior tasks can again be deï¬ned via the accuracy of the surrogate model or via relative landmarks. The weight of the expected improvement component is gradually increased with every iteration as more evidence Pi,new is collected.
# 2.3.3 Warm-Started Multi-task Learning
Another approach to relate prior tasks tj is to learn a joint task representation using P. Perrone et al. (2017) train task-speciï¬c Bayesian linear regression (Bishop, 2006) surrogate models sj(θi) and combine them in a feedforward Neural Network N N (θi) which learns a joint task representation that can accurately predict Pi,new. The surrogate models are pre- trained on OpenML meta-data to provide a warm-start for optimizing N N (θi) in a multi- task learning setting. Earlier work on multi-task learning (Swersky et al., 2013) assumed that we already have a set of âsimilarâ source tasks tj. It transfers information between these tj and tnew by building a joint GP model for Bayesian optimization that learns and exploits the exact relationship between the tasks. Learning a joint GP tends to be less scalable than building one GP per task, though. Springenberg et al. (2016) also assume that the tasks are related and similar, but learns the relationship between tasks during the optimization process using Bayesian Neural Networks. As such, their method is somewhat of a hybrid of the previous two approaches. Golovin et al. (2017) assume a sequence order (e.g., time) across tasks. It builds a stack of GP regressors, one per task, training each GP on the residuals relative to the regressor below it. Hence, each task uses the tasks before it as its priors.
# 2.3.4 Other Techniques
Multi-armed bandits (Robbins, 1985) provide yet another approach to ï¬nd the source tasks tj most related to tnew (Ramachandran et al., 2018a). In this analogy, each tj is one arm, and the (stochastic) reward for selecting (pulling) a particular prior task (arm) is deï¬ned in terms of the error in the predictions of a GP-based Bayesian optimizer that models the prior evaluations of tj as noisy measurements and combines them with the existing evaluations on tnew. The cubic scaling of the GP makes this approach less scalable, though.
son Sampling (Thompson, 1933) to obtain the optima distribution Ïj the KL-divergence (Kullback and Leibler, 1951) between Ïj max and Ïnew 2018b). These distributions are then merged into a mixture distribution based on the sim- ilarities and used to build an acquisition function that predicts the next most promising conï¬guration to evaluate. It is so far only evaluated to tune 2 SVM hyperparameters using 5 tasks.
Finally, a complementary way to leverage P is to recommend which conï¬gurations should not be used. After training surrogate models per task, we can look up which tj are most similar to tnew, and then use sj(θi) to discover regions of Î where performance is predicted to be poor. Excluding these regions can speed up the search for better-performing ones. Wistuba et al. (2015b) do this using a task similarity measure based on the Kendall tau rank correlation coeï¬cient (Kendall, 1938) between the ranks obtained by ranking conï¬gurations θi using Pi,j and Pi,new, respectively.
5
Joaquin Vanschoren
# 2.4 Learning Curves
We can also extract meta-data about the training process itself, such as how fast model performance improves as more training data is added. If we divide the training in steps st, usually adding a ï¬xed number of training examples every step, we can measure the performance P (θi, tj, st) = Pi,j,t of conï¬guration θi on task tj after step st, yielding a learning curve across the time steps st. Learning curves are used extensively to speed up hyperparameter optimization on a given task (Kohavi and John, 1995; Provost et al., 1999; Swersky et al., 2014; Chandrashekaran and Lane, 2017). In meta-learning, however, learning curve information is transferred across tasks.
While evaluating a conï¬guration on new task tnew, we can halt the training after a certain number of iterations r < t, and use the partially observed learning curve to predict how well the conï¬guration will perform on the full dataset based on prior experience with other tasks, and decide whether to continue the training or not. This can signiï¬cantly speed up the search for good conï¬gurations.
One approach is to assume that similar tasks yield similar learning curves. First, deï¬ne a distance between tasks based on how similar the partial learning curves are: dist(ta, tb) = f (Pi,a,t, Pi,b,t) with t = 1, ..., r. Next, ï¬nd the k most similar tasks t1..k and use their complete learning curves to predict how well the conï¬guration will perform on the new complete dataset. Task similarity can be measured by comparing the shapes of the partial curves across all conï¬gurations tried, and the prediction is made by adapting the ânearestâ complete curve(s) to the new partial curve (Leite and Brazdil, 2005, 2007). This approach was also successful in combination with active testing (Leite and Brazdil, 2010), and can be sped up further by using multi-objective evaluation measures that include training time (van Rijn et al., 2015).
Interestingly, while several methods aim to predict learning curves during neural archi- tecture search (Elsken et al., 2018), as of yet none of this work leverages learning curves previously observed on other tasks.
# 3. Learning from Task Properties
Another rich source of meta-data are characterizations (meta-features) of the task at hand. Each task tj â T is described with a vector m(tj) = (mj,1, ..., mj,K ) of K meta-features mj,k â M , the set of all known meta-features. This can be used to deï¬ne a task similarity measure based on, for instance, the Euclidean distance between m(ti) and m(tj), so that we can transfer information from the most similar tasks to the new task tnew. Moreover, together with prior evaluations P, we can train a meta-learner L to predict the performance Pi,new of conï¬gurations θi on a new task tnew.
# 3.1 Meta-Features
Table 1 provides a concise overview of the most commonly used meta-features, together with a short rationale for why they are indicative of model performance. Where possible, we also show the formulas to compute them. More complete surveys can be found in the literature (Rivolli et al., 2018; Vanschoren, 2010; Mantovani, 2018; Reif et al., 2014; Castiello et al., 2005).
6
Meta-Learning: A Survey
Name Formula Rationale Variants n Nr instances p Nr features Nr classes c Nr missing values m o Nr outliers Speed, Scalability (Michie et al., 1994) Curse of dimensionality (Michie et al., 1994) Complexity, imbalance (Michie et al., 1994) Imputation eï¬ects (Kalousis, 2002) Data noisiness (Rousseeuw and Hubert, 2011) p/n, log(n), log(n/p) log(p), % categorical ratio min/maj class % missing o/n Skewness Kurtosis Correlation Covariance Concentration Sparsity Gravity ANOVA p-value Coeï¬. of variation 3 E(XâµX ) Ï3 X E(XâµX ) Ï4 X ÏX1X2 covX1X2 ÏX1X2 sparsity(X) gravity(X) pvalX1 X2 ÏY µY 4 Feature normality (Michie et al., 1994) min,max,µ,Ï,q1, q3 Feature normality (Michie et al., 1994) min,max,µ,Ï,q1, q3 Feature interdependence (Michie et al., 1994) Feature interdependence (Michie et al., 1994) Feature interdependence (Kalousis and Hilario, 2001) min,max,µ,Ï,ÏXY Degree of discreteness (Salama et al., 2013) Inter-class dispersion (Ali and Smith-Miles, 2006a) Feature redundancy (Kalousis, 2002) min,max,µ,Ï,ÏXY min,max,µ,Ï,covXY min,max,µ,Ï pvalXY (Soares et al., 2004) Variation in target (Soares et al., 2004) λ1 Pi λi PCA kurtosis λ1 1+λ1 (Michie et al., 1994) PCA Ïλ1 PCA skewness PCA 95% Class probability Variance in ï¬rst PC (Michie et al., 1994) Skewness of ï¬rst PC (Feurer et al., 2014) Intrinsic dimensionality (Bardenet et al., 2013) Class distribution (Michie et al., 1994) q dim95%var p P (C) H(C) H(X) log2n M I(C, X) M I(C,X) H(C) H(C) M I(C,X) H(X)âM I(C,X) M I(C,X) min,max,µ,Ï Class imbalance (Michie et al., 1994) Feature informativeness (Castiello et al., 2005) Feature importance (Michie et al., 1994) Feature importance (Agresti, 2002) Class entropy Norm. entropy Mutual inform. Uncertainty coeï¬. min,max,µ,Ï min,max,µ,Ï min,max,µ,Ï Intrinsic dimensionality (Michie et al., 1994) Equiv. nr. feats Noisiness of data (Michie et al., 1994) Noise-signal ratio Fisherâs discrimin. Volume of overlap Concept variation Data consistency 2 (µc1âµc2) c1âÏ2 Ï2 c2 Separability classes c1, c2 (Ho and Basu, 2002) Class distribution overlap (Ho and Basu, 2002) Task complexity (Vilalta and Drissi, 2002) Data quality (K¨opf and Iglezakis, 2002) See Ho:2002 Nr nodes, leaves Branch length Nodes per feature Leaves per class Leaves agreement Information gain |η|, |Ï| |ηX | |Ïc| |Ï| nÏi n Concept complexity (Peng et al., 2002) Concept complexity (Peng et al., 2002) Feature importance (Peng et al., 2002) Class complexity (Filchenkov and Pendryak, 2015) Class separability (Bensusan et al., 2000) Feature importance (Bensusan et al., 2000) Tree depth min,max,µ,Ï min,max,µ,Ï min,max,µ,Ï min,max,µ,Ï min,max,µ,Ï, gini Landmarker(1NN) Landmarker(Tree) Landmarker(Lin) Landmarker(NB) Relative LM Subsample LM P (θ1NN , tj ) P (θT ree, tj) P (θLin, tj) P (θNB, tj) Pa,j â Pb,j P (θi, tj, st) Data sparsity (Pfahringer et al., 2000) Data separability (Pfahringer et al., 2000) Linear separability (Pfahringer et al., 2000) Feature independence (Pfahringer et al., 2000) Probing performance (F¨urnkranz and Petrak, 2001) Probing performance (Soares et al., 2001) See Pfahringer et al. (2000) Stump,RandomTree Lin.Disciminant See Ler et al. (2005)
See Ho and Basu (2002) See Vilalta (1999) See K¨opf and Iglezakis (2002)
Table 1: Overview of commonly used meta-features. Groups from top to bottom: sim- ple, statistical, information-theoretic, complexity, model-based, and landmarkers. Continuous features X and target Y have mean µX, stdev ÏX , variance Ï2 X. Cat- egorical features X and class C have categorical values Ïi, conditional probabil- ities Ïi|j, joint probabilities Ïi,j, marginal probabilities Ïi+ = Pj Ïij, entropy H(X) = â Pi Ïi+log2(Ïi+).
7
Joaquin Vanschoren
To build a meta-feature vector m(tj), one needs to select and further process these meta-features. Studies on OpenML meta-data have shown that the optimal set of meta- features depends on the application (Bilalli et al., 2017). Many meta-features are computed on single features, or combinations of features, and need to be aggregated by summary statistics (min,max,µ,Ï,quartiles,...) or histograms (Kalousis and Hilario, 2001). One needs to systematically extract and aggregate them (Pinto et al., 2016). When computing task similarity, it is also important to normalize all meta-features (Bardenet et al., 2013), per- form feature selection (Todorovski et al., 2000), or employ dimensionality reduction tech- niques (e.g. PCA) (Bilalli et al., 2017). When learning meta-models, one can also use relational meta-learners (Todorovski and Dzeroski, 1999) or case-based reasoning methods (Lindner and Studer, 1999; Hilario and Kalousis, 2001; Kalousis and Hilario, 2003).
Beyond these general-purpose meta-features, many more speciï¬c ones were formulated. For streaming data one can use streaming landmarks (van Rijn et al., 2018, 2014), for time series data one can compute autocorrelation coeï¬cients or the slope of regression models (Arinze, 1994; PrudËencio and Ludermir, 2004; dos Santos et al., 2004), and for unsupervised problems one can cluster the data in diï¬erent ways and extract properties of these clusters (Soares et al., 2009). In many applications, domain-speciï¬c information can be leveraged as well (Smith-Miles, 2009; Olier et al., 2018).
# 3.2 Learning Meta-Features
Instead of manually deï¬ning meta-features, we can also learn a joint representation for groups of tasks. One approach is to build meta-models that generate a landmark-like meta- feature representation M â² given other task meta-features M and trained on performance meta-data P, or f : M 7â M â². Sun and Pfahringer (2013) do this by evaluating a pre- deï¬ned set of conï¬gurations θi on all prior tasks tj, and generating a binary metafeature mj,a,b â M â² for every pairwise combination of conï¬gurations θa and θb, indicating whether θa outperformed θb or not, thus mâ²(tj) = (mj,a,b, mj,a,c, mj,b,c, ...). To compute mnew,a,b, meta-rules are learned for every pairwise combination (a,b), each predicting whether θa will outperform θb on task tj, given its other meta-features m(tj).
We can also learn a joint representation based entirely on the available P meta-data, i.e. f : P à Π7â M â². We previously discussed how to do this with feed-forward neural nets (Perrone et al., 2017) in Section 2.3. If the tasks share the same input space, e.g., they are images of the same resolution, one can also use Siamese networks to learn a meta-feature representation (Kim et al., 2017). These are trained by feeding the data of two diï¬erent tasks to two twin networks, and using the diï¬erences between the predicted and observed performance Pi,new as the error signal. Since the model parameters between both networks are tied in a Siamese network, two very similar tasks are mapped to the same regions in the latent meta-feature space. They can be used for warm starting Bayesian hyperparameter optimization (Kim et al., 2017) and neural architecture search (Aï¬f, 2018).
# 3.3 Warm-Starting Optimization from Similar Tasks
Meta-features are a very natural way to estimate task similarity and initialize optimization procedures based on promising conï¬gurations on similar tasks. This is akin to how human experts start a manual search for good models, given experience on related tasks.
8
Meta-Learning: A Survey
Starting a genetic search algorithm in regions of the search space with promising solu- tions can signiï¬cantly speed up convergence to a good solution. Gomes et al. (Gomes et al., 2012) recommend initial conï¬gurations by ï¬nding the k most similar prior tasks tj based on the L1 distance between vectors m(tj) and m(tnew), where each m(tj) includes 17 simple and statistical meta-features. For each of the k most similar tasks, the best conï¬guration is evaluated on tnew, and used to initialize a genetic search algorithm (Particle Swarm Opti- mization), as well as Tabu Search. Reif et al. (2012) follow a very similar approach, using 15 simple, statistical, and landmarking meta-features. They use a forward selection technique to ï¬nd the most useful meta-features, and warm-start a standard genetic algorithm (GAlib) with a modiï¬ed Gaussian mutation operation. Variants of active testing (see Sect. 2.3) that use meta-features were also tried (Miranda and PrudËencio, 2013; Leite et al., 2012), but did not perform better than the approaches based on relative landmarks.
Also model-based optimization approaches can beneï¬t greatly from an initial set of promising conï¬gurations. SCoT (Bardenet et al., 2013) trains a single surrogate ranking model f : M à Πâ R, predicting the rank of θi on task tj. M contains 4 meta-features (3 simple ones and one based on PCA). The surrogate model is trained on all the rankings, including those on tnew. Ranking is used because the scale of evaluation values can diï¬er greatly between tasks. A GP regression converts the ranks to probabilities to do Bayesian optimization, and each new Pi,new is used to retrain the surrogate model after every step. Schilling et al. (2015) use a modiï¬ed multilayer perceptron as a surrogate model, of the form sj(θi, m(tj), b(tj )) = Pi,j where m(tj) are the meta-features and b(tj) is a vector of j binary indications which are 1 if the meta-instance is from tj and 0 otherwise. The multi-layer perceptron uses a modiï¬ed activation function based on factorization machines (Rendle, 2010) in the ï¬rst layer, aimed at learning a latent representation for each task to model task similarities. Since this model cannot represent uncertainties, an ensemble of 100 multilayer perceptrons is trained to get predictive means and simulate variances.
Training a single surrogate model on all prior meta-data is often less scalable. Yogatama and Mann (2014) also build a single Bayesian surrogate model, but only include tasks similar to tnew, where task similarity is deï¬ned as the Euclidean distance between meta-feature vectors con- sisting of 3 simple meta-features. The Pi,j values are standardized to overcome the problem of diï¬erent scales for each tj. The surrogate model learns a Gaussian process with a speciï¬c kernel combination on all instances.
Feurer et al. (2014) oï¬er a simpler, more scalable method that warm-starts Bayesian optimization by sorting all prior tasks tj similar to Gomes et al. (2012), but including 46 simple, statistical, and landmarking meta-features, as well as H(C). The t best conï¬gura- tions on the d most similar tasks are used to warm-start the surrogate model. They search over many more hyperparameters than earlier work, including preprocessing steps. This warm-starting approach was also used very eï¬ectively, and combined with ensembling, in autosklearn (Feurer et al., 2015).
Finally, one can also use collaborative ï¬ltering to recommend promising conï¬gurations (Stern et al., 2010). By analogy, the tasks tj (users) provide ratings (Pi,j) for the conï¬g- urations θi (items), and matrix factorization techniques are used to predict unknown Pi,j values and recommend the best conï¬gurations for any task. An important issue here is the cold start problem, since the matrix factorization requires at least some evaluations on tnew. Yang et al. (2018) use a D-optimal experiment design to sample an initial set of evaluations
9
Joaquin Vanschoren
Pi,new. They predict both the predictive performance and runtime, to recommend a set of warm-start conï¬gurations that are both accurate and fast. Misir and Sebag (2013) and Mısır and Sebag (2017) leverage meta-features to solve the cold start problem. Fusi et al. (2017) also use meta-features, following the same procedure as Feurer et al. (2015), and use a probabilistic matrix factorization approach that allows them to perform Bayesian optimization to further optimize their pipeline conï¬gurations θi. This approach also yields useful latent embeddings of both the tasks and conï¬gurations.
# 3.4 Meta-Models
We can also learn the complex relationship between a taskâs meta-features and the utility of speciï¬c conï¬gurations by building a meta-model L that recommends the most useful conï¬gurations Îâ new given the meta-features M of the new task tnew. There exists a rich body of earlier work (Brazdil et al., 2009; Lemke et al., 2015; Giraud-Carrier, 2008; Luo, 2016) on building meta-models for algorithm selection (Bensusan and Giraud-Carrier, 2000; Pfahringer et al., 2000; Kalousis, 2002; Bischl et al., 2016) and hyperparameter recommen- dation (Kuba et al., 2002; Soares et al., 2004; Ali and Smith-Miles, 2006b; Nisioti et al., 2018). Experiments showed that boosted and bagged trees often yielded the best predic- tions, although much depends on the exact meta-features used (Kalousis and Hilario, 2001; K¨opf and Iglezakis, 2002).
# 3.4.1 Ranking
Meta-models can also generate a ranking of the top-K most promising conï¬gurations. One approach is to build a k-nearest neighbor (kNN) meta-model to predict which tasks are similar, and then rank the best conï¬gurations on these similar tasks (Brazdil et al., 2003b; dos Santos et al., 2004). This is similar to the work discussed in Section 3.3, but with- out ties to a follow-up optimization approach. Meta-models speciï¬cally meant for rank- ing, such as predictive clustering trees (Todorovski et al., 2002) and label ranking trees (Cheng et al., 2009) were also shown to work well. Approximate Ranking Trees Forests (ART Forests) (Sun and Pfahringer, 2013), ensembles of fast ranking trees, prove to be especially eï¬ective, since they have âbuilt-inâ meta-feature selection, work well even if few prior tasks are available, and the ensembling makes the method more robust. autoBag- ging (Pinto et al., 2017) ranks Bagging workï¬ows including four diï¬erent Bagging hyper- parameters, using an XGBoost-based ranker, trained on 140 OpenML datasets and 146 meta-features. Lorena et al. (2018) recommend SVM conï¬gurations for regression prob- lems using a kNN meta-model and a new set of meta-features based on data complexity.
# 3.4.2 Performance Prediction
Meta-models can also directly predict the performance, e.g. accuracy or training time, of a conï¬guration on a given task, given its meta-features. This allows us to estimate whether a conï¬guration will be interesting enough to evaluate in any optimization procedure. Early work used linear regression or rule-base regressors to predict the performance of a dis- crete set of conï¬gurations and then rank them accordingly (Bensusan and Kalousis, 2001; K¨opf et al., 2000). Guerra et al. (Guerra et al., 2008) train an SVM meta-regressor per classiï¬cation algorithm to predict its accuracy, under default settings, on a new task tnew
10
Meta-Learning: A Survey
given its meta-features. Reif et al. (Reif et al., 2014) train a similar meta-regressor on more meta-data to predict its optimized performance. Davis et al. (Davis and Giraud-Carrier, 2018) use a MultiLayer Perceptron based meta-learner instead, predicting the performance of a speciï¬c algorithm conï¬guration.
Instead of predicting predictive performance, a meta-regressor can also be trained to predict algorithm training/prediction time, for instance, using an SVM regressor trained on meta-features (Reif et al., 2011), itself tuned via genetic algorithms (Priya et al., 2012). Yang et al. (2018) predict conï¬guration runtime using polynomial regression, based only on the number of instances and features. Hutter et al. (2014b) provide a general treatise on predicting algorithm runtime in various domains.
Most of these meta-models generate promising conï¬gurations, but donât actually tune these conï¬gurations to tnew themselves. Instead, the predictions can be used to warm-start or guide any other optimization technique, which allows for all kinds of combinations of meta-models and optimization techniques. Indeed, some of the work discussed in Section 3.3 can be seen as using a distance-based meta-model to warm-start Bayesian optimiza- tion (Feurer et al., 2014; Fusi et al., 2017) or evolutionary algorithms (Gomes et al., 2012; Reif et al., 2012). In principle, other meta-models could be used here as well.
Instead of learning the relationship between a taskâs meta-features and conï¬guration performance, one can also build surrogate models predicting the performance of conï¬gura- tions on speciï¬c tasks(Eggensperger et al., 2018). One can then learn how to combine these per-task predictions to warm-start or guide optimization techniques on a new task tnew (Feurer et al., 2018a; Perrone et al., 2017; Springenberg et al., 2016; Wistuba et al., 2018), as discussed in Section 2.3. While meta-features could also be used to combine per-task pre- dictions based on task similarity, it is ultimately more eï¬ective to gather new observations Pi,new, since these allow to reï¬ne the task similarity estimates with every new observation (Feurer et al., 2018b; Wistuba et al., 2018; Leite et al., 2012).
# 3.5 Pipeline Synthesis
When creating entire machine learning pipelines (Serban et al., 2013), the number of con- ï¬guration options grows dramatically, making it even more important to leverage prior experience. One can control the search space by imposing a ï¬xed structure on the pipeline, fully described by a set of hyperparameters. One can then use the most promising pipelines on similar tasks to warm-start a Bayesian optimization (Feurer et al., 2015; Fusi et al., 2017).
Other approaches give recommendations for certain pipeline steps (Post et al., 2016; Strang et al., 2018), and can be leveraged in larger pipeline construction approaches, such as planning (Nguyen et al., 2014; Kietz et al., 2012; Gil et al., 2018; Wever et al., 2018) or evolutionary techniques (Olson et al., 2016; Sun et al., 2013). Nguyen et al. (2014) con- struct new pipelines using a beam search focussed on components recommended by a meta- learner, and is itself trained on examples of successful prior pipelines. Bilalli et al. (2018) predict which pre-processing techniques are recommended for a given classiï¬cation algo- rithm. They build a meta-model per target classiï¬cation algorithm that, given the tnew meta-features, predicts which preprocessing technique should be included in the pipeline.
11
Joaquin Vanschoren
Similarly, Schoenfeld et al. (2018) build meta-models predicting when a preprocessing al- gorithm will improve a particular classiï¬erâs accuracy or runtime.
AlphaD3M (Drori et al., 2018) uses a self-play reinforcement learning approach in which the current state is represented by the current pipeline, and actions include the addition, deletion, or replacement of pipeline components. A Monte Carlo Tree Search (MCTS) generates pipelines, which are evaluated to train a recurrent neural network (LSTM) that can predict pipeline performance, in turn producing the action probabilities for the MCTS in the next round. The state description also includes meta-features of the current task, allowing the neural network to learn across tasks.
# 3.6 To Tune or Not to Tune?
To reduce the number of conï¬guration parameters to be optimized, and to save valuable optimization time in time-constrained settings, meta-models have also been proposed to pre- dict whether or not it is worth tuning a given algorithm given the meta-features of the task at hand (Ridd and Giraud-Carrier, 2014) and how much improvement we can expect from tun- ing a speciï¬c algorithm versus the additional time investment (Sanders and Giraud-Carrier, 2017). More focused studies on speciï¬c learning algorithms yielded meta-models predicting when it is necessary to tune SVMs (Mantovani et al., 2015a), what are good default hyperpa- rameters for SVMs given the task (including interpretable meta-models) (Mantovani et al., 2015b), and how to tune decision trees (Mantovani et al., 2016).
# 4. Learning from Prior Models
The ï¬nal type of meta-data we can learn from are prior machine learning models themselves, i.e., their structure and learned model parameters. In short, we want to train a meta-learner L that learns how to train a (base-) learner lnew for a new task tnew, given similar tasks tj â T and the corresponding optimized models lj â L, where L is the space of all possible models. The learner lj is typically deï¬ned by its model parameters W = {wk}, k = 1..K and/or its conï¬guration θi â Î.
# 4.1 Transfer Learning
In transfer learning (Thrun and Pratt, 1998), we take models trained on one or more source tasks tj, and use them as starting points for creating a model on a similar target task tnew. This can be done by forcing the target model to be structurally or otherwise similar to the source model(s). This is a generally applicable idea, and transfer learning approaches have been proposed for kernel methods (Evgeniou et al., 2005; Evgeniou and Pontil, 2004), parametric Bayesian models (Rosenstein et al., 2005; Raina et al., 2006; Bakker and Heskes, 2003), Bayesian networks (Niculescu-Mizil and Caruana, 2005), clustering (Thrun, 1998) and reinforcement learning (Hengst, 2002; Dietterich et al., 2002). Neural networks, how- ever, are exceptionally suitable for transfer learning because both the structure and the model parameters of the source models can be used as a good initialization for the target model, yielding a pre-trained model which can then be further ï¬ne-tuned using the avail- able training data on tnew (Thrun and Mitchell, 1995; Baxter, 1996; Bengio, 2012; Caruana, 1995). In some cases, the source network may need to be modiï¬ed before transferring it
12
Meta-Learning: A Survey
(Sharkey and Sharkey, 1993). We will focus on neural networks in the remainder of this section.
Especially large image datasets, such as ImageNet (Krizhevsky et al., 2012), have been shown to yield pre-trained models that transfer exceptionally well to other tasks (Donahue et al., 2014; Sharif Razavian et al., 2014). However, it has also been shown that this approach doesnât work well when the target task is not so similar (Yosinski et al., 2014). Rather than hoping that a pre-trained model âaccidentallyâ transfers well to a new problem, we can purposefully imbue meta-learners with an inductive bias (learned from many similar tasks) that allows them to learn new tasks much faster, as we will discuss below.
# 4.2 Meta-Learning in Neural Networks
An early meta-learning approach is to create recurrent neural networks (RNNs) able to modify their own weights (Schmidhuber, 1992, 1993). During training, they use their own weights as additional input data and observe their own errors to learn how to modify these weights in response to the new task at hand. The updating of the weights is deï¬ned in a parametric form that is diï¬erentiable end-to-end and can jointly optimize both the network and training algorithm using gradient descent, yet is also very diï¬cult to train. Later work used reinforcement learning across tasks to adapt the search strategy (Schmidhuber et al., 1997) or the learning rate for gradient descent (Daniel et al., 2016) to the task at hand.
Inspired by the feeling that backpropagation is an unlikely learning mechanism for our own brains, Bengio et al. (1995) replace backpropagation with simple biologically-inspired parametric rules (or evolved rules (Chalmers, 1991)) to update the synaptic weights. The parameters are optimized, e.g. using gradient descent or evolution, across a set of input tasks. Runarsson and Jonsson (2000) replaced these parametric rules with a single layer neural network. Santoro et al. (2016b) instead use a memory-augmented neural network to learn how to store and retrieve âmemoriesâ of prior classiï¬cation tasks. Hochreiter et al. (2001) use LSTMs (Hochreiter and Schmidhuber, 1997) as a meta-learner to train multi- layer perceptrons.
Andrychowicz et al. (2016) also replace the optimizer, e.g. stochastic gradient descent, with an LSTM trained on multiple prior tasks. The loss of the meta-learner (optimizer) is deï¬ned as the sum of the losses of the base-learners (optimizees), and optimized using gradient descent. At every step, the meta-learner chooses the weight update estimated to reduce the optimizeeâs loss the most, based on the learned model weights {wk} of the previous step as well as the current performance gradient. Later work generalizes this ap- proach by training an optimizer on synthetic functions, using gradient descent (Chen et al., 2016). This allows meta-learners to optimize optimizees even if these do not have access to gradients.
In parallel, Li and Malik (2016) proposed a framework for learning optimization algo- rithms from a reinforcement learning perspective. It represents any particular optimization algorithm as a policy, and then learns this policy via guided policy search. Follow-up work (Li and Malik, 2017) shows how to leverage this approach to learn optimization algorithms for (shallow) neural networks.
The ï¬eld of neural architecture search includes many other methods that build a model of neural network performance for a speciï¬c task, for instance using Bayesian optimization
13
Joaquin Vanschoren
or reinforcement learning. See Elsken et al. (2018) for an in-depth discussion. However, most of these methods do not (yet) generalize across tasks and are therefore not discussed here.
# 4.3 Few-Shot Learning
A particularly challenging meta-learning problem is to train an accurate deep learning model using only a few training examples, given prior experience with very similar tasks for which we have large training sets available. This is called few-shot learning. Humans have an innate ability to do this, and we wish to build machine learning agents that can do the same (Lake et al., 2017). A particular example of this is âK-shot N-wayâ classiï¬cation, in which we are given many examples (e.g., images) of certain classes (e.g., objects), and want to learn a classiï¬er lnew able to classify N new classes using only K examples of each.
Using prior experience, we can, for instance, learn a common feature representation of all the tasks, start training lnew with a better model parameter initialization Winit and acquire an inductive bias that helps guide the optimization of the model parameters, so that lnew can be trained much faster than otherwise possible.
Earlier work on one-shot learning is largely based on hand-engineered features (Fei-Fei et al., 2006; Fei-Fei, 2006; Fink, 2005; Bart and Ullman, 2005). With meta-learning, however, we hope to learn a common feature representation for all tasks in an end-to-end fashion.
Vinyals et al. (2016) state that, to learn from very little data, one should look to non- parameteric models (such as k-nearest neighbors), which use a memory component rather than learning many model parameters. Their meta-learner is a Matching Network that apply the idea of a memory component in a neural net. It learns a common representation for the labelled examples, and matches each new test instance to the memorized examples using cosine similarity. The network is trained on minibatches with only a few examples of a speciï¬c task each.
vector space such that examples of a given output class are close together. It then calcu- lates a prototype (mean vector) for every class. New test instances are mapped to the same vector space and a distance metric is used to create a softmax over all possible classes. Ren et al. (2018) extend this approach to semi-supervised learning.
Ravi and Larochelle (2017) use an LSTM-based meta-learner to learn an update rule for training a neural network learner. With every new example, the learner returns the current gradient and loss to the LSTM meta-learner, which then updates the model parameters {wk} of the learner. The meta-learner is trained across all prior tasks.
Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), on the other hand, does not try to learn an update rule, but instead learns a model parameter initialization Winit that generalizes better to similar tasks. Starting from a random {wk}, it iteratively selects a batch of prior tasks, and for each it trains the learner on K examples to compute the gradient and loss (on a test set). It then backpropagates the meta-gradient to update the weights {wk} in the direction in which they would have been easier to update. In other words, after each iteration, the weights {wk} become a better Winit to start ï¬netuning any of the tasks. Finn and Levine (2017) show that MAML is able to approximate any learning algorithm when using a suï¬ciently deep ReLU network and certain losses. They also
14
Meta-Learning: A Survey
conclude that the MAML initializations are more resilient to overï¬tting on small samples, and generalize more widely than meta-learning approaches based on LSTMs. Grant et al. (2018) present a novel derivation of and extension to MAML, illustrating that this algorithm can be understood as inference for the parameters of a prior distribution in a hierarchical Bayesian model.
REPTILE (Nichol et al., 2018) is an approximation of MAML that executes stochastic gradient descent for K iterations on a given task, and then gradually moves the initialization weights in the direction of the weights obtained after the K iterations. The intuition is that every task likely has more than one set of optimal weights {wâ i }, and the goal is to ï¬nd a Winit that is close to at least one of those {wâ i } for every task.
Finally, we can also derive a meta-learner from a black-box neural network. Santoro et al. (2016a) propose Memory-Augmented Neural Networks (MANNs), which train a Neural Turing Machine (NTM) (Graves et al., 2014), a neural network with augmented memory capabilities, as a meta-learner. This meta-learner can then memorize information about previous tasks and leverage that to learn a learner lnew. SNAIL (Mishra et al., 2018) is a generic meta-learner architecture consisting of interleaved temporal convolution and causal attention layers. The convolutional networks learn a common feature vector for the training instances (images) to aggregate information from past experiences. The causal attention layers learn which pieces of information to pick out from the gathered experience to gener- alize to new tasks.
Overall, the intersection of deep learning and meta-learning proves to be particular fertile ground for groundbreaking new ideas, and we expect this ï¬eld to become more important over time.
# 4.4 Beyond Supervised Learning
Meta-learning is certainly not limited to (semi-)supervised tasks, and has been successfully applied to solve tasks as varied as reinforcement learning, active learning, density estimation and item recommendation. The base-learner may be unsupervised while the meta-learner is supervised, but other combinations are certainly possible as well.
Duan et al. (2016) propose an end-to-end reinforcement learning (RL) approach consist- ing of a task-speciï¬c fast RL algorithm which is guided by a general-purpose slow meta-RL algorithm. The tasks are interrelated Markov Decision Processes (MDPs). The meta-RL algorithm is modeled as an RNN, which receives the observations, actions, rewards and termination ï¬ags. The activations of the RNN store the state of the fast RL learner, and the RNNâs weights are learned by observing the performance of fast learners across tasks. In parallel, Wang et al. (2016) also proposed to use a deep RL algorithm to train an RNN, receiving the actions and rewards of the previous interval in order to learn a base- level RL algorithm for speciï¬c tasks. Rather than using relatively unstructured tasks such as random MDPs, they focus on structured task distributions (e.g., dependent bandits) in which the meta-RL algorithm can exploit the inherent task structure.
Pang et al. (2018) oï¬er a meta-learning approach to active learning (AL). The base- learner can be any binary classiï¬er, and the meta-learner is a deep RL network consisting of a deep neural network that learns a representation of the AL problem across tasks, and a policy network that learns the optimal policy, parameterized as weights in the network. The
15
Joaquin Vanschoren
meta-learner receives the current state (the unlabeled point set and base classiï¬er state) and reward (the performance of the base classiï¬er), and emits a query probability, i.e. which points in the unlabeled set to query next.
Reed et al. (2017) propose a few-shot approach for density estimation (DE). The goal is to learn a probability distribution over a small number of images of a certain concept (e.g., a handwritten letter) that can be used to generate images of that concept, or compute the probability that an image shows that concept. The approach uses autoregressive image models which factorize the joint distribution into per-pixel factors, usually conditioned on (many) examples of the target concept. Instead, a MAML-based few-shot learner is used, trained on examples of many other (similar) concepts.
Finally, Vartak et al. (2017) address the cold-start problem in matrix factorization. They propose a deep neural network architecture that learns a (base) neural network whose biases are adjusted based on task information. While the structure and weights of the neu- ral net recommenders remain ï¬xed, the meta-learner learns how to adjust the biases based on each userâs item history.
All these recent new developments illustrate that it is often fruitful to look at prob- lems through a meta-learning lens and ï¬nd new, data-driven approaches to replace hand- engineered base-learners.
# 5. Conclusion
Meta-learning opportunities present themselves in many diï¬erent ways, and can be em- braced using a wide spectrum of learning techniques. Every time we try to learn a certain task, whether successful or not, we gain useful experience that we can leverage to learn new tasks. We should never have to start entirely from scratch. Instead, we should sys- tematically collect our âlearning exhaustâ and learn from it to build AutoML systems that continuously improve over time, helping us tackle new learning problems ever more eï¬- ciently. The more new tasks we encounter, and the more similar those new tasks are, the more we can tap into prior experience, to the point that most of the required learning has already been done beforehand. The ability of computer systems to store virtually inï¬nite amounts of prior learning experiences (in the form of meta-data) opens up a wide range of opportunities to use that experience in completely new ways, and we are only starting to learning learn how to learn from prior experience eï¬ectively. Yet, this is a worthy goal: how to learn any task empowers us far beyond knowing how to learn speciï¬c tasks.
# Acknowledgments
The author would like to thank Pavel Brazdil, Matthias Feurer, Frank Hutter, Raghu Rajan, and Jan van Rijn for many invaluable discussions and feedback on the manuscript.
16
Meta-Learning: A Survey
# References
S. Abdulrahman, P. Brazdil, J. van Rijn, and J. Vanschoren. Speeding up Algorithm Selection using Average Ranking and Active Testing by Introducing Runtime. Machine Learning, 107:79â108, 2018.
I. Nur Aï¬f. Warm-starting deep learning model construction using meta-learning. Masterâs thesis, TU Eindhoven, 2018.
A. Agresti. Categorical Data Analysis. Wiley Interscience, 2002.
Shawkat Ali and Kate A. Smith-Miles. On learning algorithm selection for classiï¬cation. Applied Soft Computing, 6(2):119 â 138, 2006a.
Shawkat Ali and Kate A. Smith-Miles. Metalearning approach to automatic kernel selection for support vector machines. Neurocomput., 70(1):173â186, 2006b.
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoï¬man, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. Learning to learn by gradient In Advances in Neural Information Processing Systems, descent by gradient descent. pages 3981â3989, 2016.
B Arinze. Selecting appropriate forecasting models using rule induction. Omega, 22(6): 647â658, 1994.
B. Bakker and T. Heskes. Task Clustering and Gating for Bayesian Multitask Learning. Journal of Machine Learning Research, 4:83â999, 2003.
R´emi Bardenet, M´aty´as Brendel, Bal´azs K´egl, and Michele Sebag. Collaborative hyperpa- rameter tuning. In Proceedings of ICML 2013, pages 199â207, 2013.
Evgeniy Bart and Shimon Ullman. Cross-generalization: Learning novel classes from a single example by feature replacement. In CVPR, pages 672â679, 2005.
J. Baxter. Learning Internal Representations. In Advances in Neural Information Processing Systems, NIPS, 1996.
Samy Bengio, Yoshua Bengio, and Jocelyn Cloutier. On the search for new learning rules for anns. Neural Processing Letters, 2(4):26â30, 1995.
Y. Bengio. Deep learning of representations for unsupervised and transfer learning. ICML Unsupervised and Transfer Learning, pages 17â36, 2012. In
H Bensusan and A Kalousis. Estimating the predictive accuracy of a classiï¬er. Lecture Notes in Computer Science, 2167:25â36, 2001.
Hilan Bensusan and Christophe Giraud-Carrier. Discovering task neighbourhoods through landmark learning performances. In PKDD, pages 325â330, 2000.
Hilan Bensusan, Christophe Giraud-Carrier, and Claire Kennedy. A higher-order approach to meta-learning. In ILP, pages 33 â 42, 2000.
17
Joaquin Vanschoren
Besim Bilalli, Alberto Abell´o, and Tom`as Aluja-Banet. On the predictive power of meta- features in OpenML. International Journal of Applied Mathematics and Computer Sci- ence, 27(4):697 â 712, 2017.
Besim Bilalli, Alberto Abell´o, Tom`as Aluja-Banet, and Robert Wrembel. Intelligent assis- tance for data pre-processing. Computer Standards & Interf., 57:101 â 109, 2018.
B. Bischl, P. Kerschke, L. Kotthoï¬, M. Lindauer, Y. Malitsky, A. Fr´echette, H. Hoos, F. Hutter, K. Leyton-Brown, K. Tierney, and J. Vanschoren. ASLib: A benchmark library for algorithm selection. Artiï¬cial Intelligence, 237:41â58, 2016.
Christopher M Bishop. Pattern recognition and machine learning. Springer, 2006.
P. Brazdil, C. Soares, and J. Pinto da Costa. Ranking learning algorithms: Using IBL and meta-learning on accuracy and time results. Machine Learning, 50(3):251â277, 2003a.
Pavel Brazdil, Christophe Giraud-Carrier, Carlos Soares, and Ricardo Vilalta. Metalearning: Applications to Data Mining. Springer-Verlag Berlin Heidelberg, 2009.
Pavel B. Brazdil, Carlos Soares, and Joaquim Pinto Da Coasta. Ranking learning algo- rithms: Using IBL and meta-learning on accuracy and time results. Machine Learning, 50(3):251â277, 2003b.
R. Caruana. Learning many related tasks at the same time with backpropagation. Neural Information Processing Systems, pages 657â664, 1995.
R. Caruana. Multitask Learning. Machine Learning, 28(1):41â75, 1997.
Ciro Castiello, Giovanna Castellano, and Anna Maria Fanelli. Meta-data: Characteriza- tion of input features for meta-learning. In 2nd International Conference on Modeling Decisions for Artiï¬cial Intelligence (MDAI), pages 457 â 468, 2005.
David J Chalmers. The evolution of learning: An experiment in genetic connectionism. In Connectionist Models, pages 81â90. Elsevier, 1991.
Akshay Chandrashekaran and Ian R Lane. Speeding up hyper-parameter optimization by extrapolation of learning curves using previous builds. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 477â492, 2017.
Yutian Chen, Matthew W Hoï¬man, Sergio G´omez Colmenarejo, Misha Denil, Timothy P Lillicrap, Matt Botvinick, and Nando de Freitas. Learning to learn without gradient descent by gradient descent. arXiv preprint arXiv:1611.03824, 2016.
Weiwei Cheng, Jens H¨uhn, and Eyke H¨ullermeier. Decision tree and instance-based learning for label ranking. In ICML, pages 161â168, 2009.
W. D. Cook, M. Kress, and L. W. Seiford. A general framework for distance-based consensus in ordinal ranking models. European Journal of Operational Research, 96(2):392â397, 1996.
18
Meta-Learning: A Survey
Christian Daniel, Jonathan Taylor, and Sebastian Nowozin. Learning step size controllers for robust neural network training. In AAAI, pages 1519â1525, 2016.
C. Davis and C. Giraud-Carrier. Annotative experts for hyperparameter selection. AutoML Workshop at ICML 2018, 2018. In
Alex De Sa, Walter Pinto, Luiz Otavio Oliveira, and Gisele Pappa. RECIPE: A grammar- based framework for automatically evolving classiï¬cation pipelines. In European Confer- ence on Genetic Programming, pages 246â261, 2017.
J. DemËsar. Statistical Comparisons of Classiï¬ers over Multiple Data Sets. Journal of Machine Learning Research, 7:1â30, 2006.
T Dietterich. Ensemble methods in machine learning. In International workshop on multiple classiï¬er systems, pages 1â15, 2000.
T. Dietterich, D. Busquets, R. Lopez de Mantaras, and C. Sierra. Action Reï¬nement in Reinforcement Learning by Probability Smoothing. In 19th International Conference on Machine Learning, pages 107â114, 2002.
Jeï¬ Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoï¬man, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recog- nition. In ICML, pages 647â655, 2014.
P dos Santos, T Ludermir, and R PrudËencio. Selection of time series forecasting models based on performance information. 4th International Conference on Hybrid Intelligent Systems, pages 366â371, 2004.
Iddo Drori, Yamuna Krishnamurthy, Remi Rampin, Raoni de Paula Lourenco, Jorge Pi- azentin Ono, Kyunghyun Cho, Claudio Silva, and Juliana Freire. AlphaD3M: Machine learning pipeline synthesis. In AutoML Workshop at ICML, 2018.
Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. arXiv preprint RL2: Fast reinforcement learning via slow reinforcement learning. arXiv:1611.02779, 2016.
K. Eggensperger, M. Lindauer, H.H. Hoos, F. Hutter, and K. Leyton-Brown. Eï¬cient Benchmarking of Algorithm Conï¬guration Procedures via Model-Based Surrogates . Ma- chine Learning, 107:15â41, 2018.
Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. arXiv preprint arXiv:1808.05377, 2018.
T. Evgeniou and M. Pontil. Regularized multi-task learning. In Tenth Conference on Knowledge Discovery and Data Mining, 2004.
T. Evgeniou, C. Micchelli, and M. Pontil. Learning Multiple Tasks with Kernel Methods. Journal of Machine Learning Research, 6:615â637, 2005.
Li Fei-Fei. Knowledge transfer in learning to recognize visual objects classes. In Intern. Conf. on Development and Learning, page Art. 51, 2006.
19
Joaquin Vanschoren
Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. Pattern analysis and machine intelligence, 28(4):594â611, 2006.
M Feurer, B Letham, and E Bakshy. Scalable meta-learning for Bayesian optimization. arXiv, 1802.02219, 2018a.
Matthias Feurer, Jost Tobias Springenberg, and Frank Hutter. Using meta-learning to In International Conference on initialize Bayesian optimization of hypxerparameters. Meta-learning and Algorithm Selection, pages 3 â 10, 2014.
Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, In Advances in and Frank Hutter. Eï¬cient and robust automated machine learning. Neural Information Processing Systems 28, pages 2944â2952, 2015.
Matthias Feurer, Benjamin Letham, and Eytan Bakshy. Scalable meta-learning for bayesian optimization using ranking-weighted gaussian process ensembles. In AutoML Workshop at ICML 2018, 2018b.
Andray Filchenkov and Arseniy Pendryak. Dataset metafeature description for recommend- ing feature selection. In ISMW FRUCT, pages 11â18, 2015.
Michael Fink. Object classiï¬cation from a single example utilizing class relevance metrics. In Neural information processing syst., pages 449â456, 2005.
Chelsea Finn and Sergey Levine. Meta-learning and universality. arXiv 1710.11622, 2017.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, pages 1126â1135, 2017.
J F¨urnkranz and J Petrak. An evaluation of landmarking variants. ECML/PKDD 2001 Workshop on Integrating Aspects of Data Mining, Decision Support and Meta-Learning, pages 57â68, 2001.
Nicolo Fusi, Rishit Sheth, and Huseyn Melih Elibol. Probabilistic matrix factorization for automated machine learning. arXiv preprint arXiv:1705.05355, 2017.
Yolanda Gil, Ke-Thia Yao, Varun Ratnakar, Daniel Garijo, Greg Ver Steeg, Pedro Szekely, Rob Brekelmans, Mayank Kejriwal, Fanghao Luo, and I-Hui Huang. P4ML: A phased performance-based pipeline planner for automated machine learning. In AutoML Work- shop at ICML 2018, 2018.
Christophe Giraud-Carrier. Metalearning-a tutorial. In Tutorial at the International Con- ference on Machine Learning and Applications, pages 1â45, 2008.
Christophe Giraud-Carrier and Foster Provost. Toward a justiï¬cation of meta-learning: Is the no free lunch theorem a show-stopper. In Proceedings of the ICML-2005 Workshop on Meta-learning, pages 12â19, 2005.
D. Golovin, B. Solnik, S. Moitra, G. Kochanski, J. Karro, and D. Sculley. Google vizier: A service for black-box optimization. In ICDM, pages 1487â1495, 2017.
20
Meta-Learning: A Survey
Taciana AF Gomes, Ricardo BC PrudËencio, Carlos Soares, Andr´e LD Rossi, and Andr´e Car- valho. Combining meta-learning and search techniques to select parameters for support vector machines. Neurocomputing, 75(1):3â13, 2012.
Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griï¬ths. Recasting gradient-based meta-learning as hierarchical bayes. arXiv preprint arXiv:1801.08930, 2018.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
Silvio B Guerra, Ricardo BC PrudËencio, and Teresa B Ludermir. Predicting the performance of learning algorithms using support vector machines as meta-regressors. In ICANN, pages 523â532, 2008.
B. Hengst. Discovering Hierarchy in Reinforcement Learning with HEXQ. In International Conference on Machine Learning, pages 243â250, 2002.
M Hilario and A Kalousis. Fusion of meta-knowledge and meta-data for case-based model selection. Lecture Notes in Computer Science, 2168:180â191, 2001.
Tin Kam Ho and Mitra Basu. Complexity measures of supervised classiï¬cation problems. Pattern Analysis and Machine Intellig., 24(3):289â300, 2002.
S. Hochreiter, A.S. Younger, and P.R. Conwell. Learning to learn using gradient descent. In Lecture Notes on Computer Science, 2130, pages 87â94, 2001.
Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735â1780, 1997.
F. Hutter, H. Hoos, and K. Leyton-Brown. An Eï¬cient Approach for Assessing Hyperpa- rameter Importance. In Proceedings of ICML, 2014a.
F. Hutter, L. Xu, H. Hoos, and K. Leyton-Brown. Algorithm runtime prediction: Methods & evaluation. Artiï¬cial Intelligence, 206:79â111, 2014b.
Donald R Jones, Matthias Schonlau, and William J Welch. Eï¬cient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455â492, 1998.
A. Kalousis. Algorithm Selection via Meta-Learning. PhD thesis, University of Geneva, Department of Computer Science, 2002.
A Kalousis and M Hilario. Representational issues in meta-learning. Proceedings of ICML 2003, pages 313â320, 2003.
Alexandros Kalousis and Melanie Hilario. Model selection via meta-learning: a comparative study. Intl Journ. on Artiï¬cial Intelligence Tools, 10(4):525â554, 2001.
Maurice G Kendall. A new measure of rank correlation. Biometrika, 30(1/2):81â93, 1938.
21
Joaquin Vanschoren
J¨org-Uwe Kietz, Floarea Serban, Abraham Bernstein, and Simon Fischer. Designing KDD- workï¬ows via HTN-planning for intelligent discovery assistance. In 5th Planning to Learn Workshop at ECAI 2012, 2012.
J. Kim, S. Kim, and S. Choi. Learning to warm-start Bayesian hyperparameter optimization. arXiv preprint arXiv:1710.06219, 2017.
Ron Kohavi and George H John. Automatic parameter selection by minimizing estimated error. In Proceedings of the International Conference Machine Learning, pages 304â312, 1995.
C K¨opf and I Iglezakis. Combination of task description strategies and case base properties for meta-learning. ECML/PKDD Workshop on Integration and Collaboration Aspects of Data Mining, pages 65â76, 2002.
C. K¨opf, C. Taylor, and J. Keller. Meta-analysis: From data characterization for meta- In PKDD Workshop on Data Mining, Decision Support, learning to meta-regression. Meta-Learning and ILP., pages 15â26, 2000.
Alex Krizhevsky, Ilya Sutskever, and Geoï¬rey E Hinton. Imagenet classiï¬cation with deep In Advances in neural information processing systems, convolutional neural networks. pages 1097â1105, 2012.
P. Kuba, P. Brazdil, C. Soares, and A. Woznica. Exploiting sampling and meta-learning for parameter setting support vector machines. In Proceedings of IBERAMIA 2002, pages 217â225, 2002.
Solomon Kullback and Richard A Leibler. On information and suï¬ciency. The annals of mathematical statistics, 22(1):79â86, 1951.
Alexandre Lacoste, Mario Marchand, Fran¸cois Laviolette, and Hugo Larochelle. Agnostic Bayesian learning of ensembles. In ICML, pages 611â619, 2014.
Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Build- ing machines that learn and think like people. Beh. and Brain Sc., 40, 2017.
R Leite and P Brazdil. Predicting relative performance of classiï¬ers from samples. Proceed- ings of ICML, pages 497â504, 2005.
R Leite and P Brazdil. An iterative process for building learning curves and predicting relative performance of classiï¬ers. Lecture Notes in Computer Science, 4874:87â98, 2007.
R. Leite, P. Brazdil, and J. Vanschoren. Selecting Classiï¬cation Algorithms with Active Testing. Lecture Notes in Artif. Intel., 10934:117â131, 2012.
Rui Leite and Pavel Brazdil. Active testing strategy to predict the best classiï¬cation algo- rithm via sampling and metalearning. In ECAI 2010, pages 309â314, 2010.
C. Lemke, M. Budka, and B. Gabrys. Metalearning: a survey of trends and technologies. Artiï¬cial intelligence review, 44(1):117â130, 2015.
22
Meta-Learning: A Survey
Daren Ler, Irena Koprinska, and Sanjay Chawla. Utilizing regression-based landmarkers within a meta-learning framework for algorithm selection. Technical Report 569. Univer- sity of Sydney, pages 44â51, 2005.
Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.
Ke Li and Jitendra Malik. arXiv:1703.00441, 2017. Learning to optimize neural nets. arXiv preprint
S. Lin. Rank aggregation methods. WIREs Computational Statistics, 2:555â570, 2010.
G. Lindner and R. Studer. AST: Support for algorithm selection with a CBR approach. In ICML Workshop on Recent Advances in Meta-Learning and Future Work, pages 38â47. J. Stefan Institute, 1999.
Ana Carolina Lorena, Aron I. Maciel, P´ericles B. C. de Miranda, Ivan G. Costa, and Ri- cardo B. C. PrudËencio. Data complexity meta-features for regression problems. Machine Learning, 107(1):209â246, 2018. doi: 10.1007/s10994-017-5681-1.
Gang Luo. A review of automatic selection methods for machine learning algorithms and hyper-parameter values. Network Modeling Analysis in Health Informatics and Bioinfor- matics, 5(1):18, 2016.
Rafael G Mantovani, Andr´e LD Rossi, Joaquin Vanschoren, Bernd Bischl, and Andr´e CPLF Carvalho. To tune or not to tune: recommending when to adjust SVM hyper-parameters via meta-learning. In Proceedings of IJCNN, pages 1â8, 2015a.
Rafael G Mantovani, Tom´aËs Horv´ath, Ricardo Cerri, Joaquin Vanschoren, and Andr´e CPLF de Carvalho. Hyper-parameter tuning of a decision tree induction algorithm. In Brazilian Conference on Intelligent Systems, pages 37â42, 2016.
Rafael Gomes Mantovani, Andr´e LD Rossi, Joaquin Vanschoren, and Andr´e Carlos Car- valho. Meta-learning recommendation of default hyper-parameter values for SVMs in classiï¬cations tasks. In ECML PKDD Workshop on Meta-Learning and Algorithm Selec- tion, 2015b.
R.G. Mantovani. Use of meta-learning for hyperparameter tuning of classiï¬cation problems. PhD thesis, University of Sao Carlos, Brazil, 2018.
Donald Michie, David J. Spiegelhalter, Charles C. Taylor, and John Campbell. Machine Learning, Neural and Statistical Classiï¬cation. Ellis Horwood, 1994.
P.B.C. Miranda and R.B.C. PrudËencio. Active testing for SVM parameter selection. In Proceedings of IJCNN, pages 1â8, 2013.
Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In Proceedings of ICLR, 2018.
Mustafa Misir and Mich`ele Sebag. Algorithm Selection as a Collaborative Filtering Problem. Research report, INRIA, 2013.
23
Joaquin Vanschoren
Mustafa Mısır and Mich`ele Sebag. Alors: An algorithm recommender system. Artiï¬cial Intelligence, 244:291â314, 2017.
Elizbar A Nadaraya. On estimating regression. Theory of Probability & Its Applications, 9 (1):141â142, 1964.
Phong Nguyen, Melanie Hilario, and Alexandros Kalousis. Using meta-mining to support data mining workï¬ow planning and optimization. Journal of Artiï¬cial Intelligence Re- search, 51:605â644, 2014.
A. Nichol, J. Achiam, and J. Schulman. On ï¬rst-order meta-learning algorithms. arXiv, 1803.02999v2, 2018.
A. Niculescu-Mizil and R. Caruana. Learning the Structure of Related Tasks. In Proceedings of NIPS Workshop on Inductive Transfer, 2005.
E. Nisioti, K. Chatzidimitriou, and A Symeonidis. Predicting hyperparameters from meta- features in binary classiï¬cation problems. In AutoML Workshop at ICML, 2018.
I. Olier, N. Sadawi, G.R. Bickerton, J. Vanschoren, C. Grosan, L. Soldatova, and R.D. King. Meta-QSAR: learning how to learn QSARs. Machine Learning, 107:285â311, 2018.
Randal S Olson, Nathan Bartley, Ryan J Urbanowicz, and Jason H Moore. Evaluation of a tree-based pipeline optimization tool for automating data science. In Proceedings of GECCO, pages 485â492, 2016.
Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345â1359, 2010.
K Pang, M. Dong, Y. Wu, and T. Hospedales. Meta-learning transferable active learning policies by deep reinforcement learning. In AutoML Workshop at ICML, 2018.
Y Peng, P Flach, C Soares, and P Brazdil. Improved dataset characterisation for meta- learning. Lecture Notes in Com. Sc., 2534:141â152, 2002.
Valerio Perrone, Rodolphe Jenatton, Matthias Seeger, and Cedric Archambeau. Multiple adaptive Bayesian linear regression for scalable Bayesian optimization with warm start. arXiv preprint arXiv:1712.02902, 2017.
Bernhard Pfahringer, Hilan Bensusan, and Christophe G. Giraud-Carrier. Meta-learning by landmarking various learning algorithms. In 17th International Conference on Machine Learning (ICML), pages 743 â 750, 2000.
F´abio Pinto, Carlos Soares, and JoËao Mendes-Moreira. Towards automatic generation of metafeatures. In Proceedings of PAKDD, pages 215â226, 2016.
F´abio Pinto, V´ıtor Cerqueira, Carlos Soares, and JoËao Mendes-Moreira. autoBagging: Learning to rank bagging workï¬ows with metalearning. arXiv, 1706.09367, 2017.
24
Meta-Learning: A Survey
Martijn J. Post, Peter van der Putten, and Jan N. van Rijn. Does Feature Selection Improve Classiï¬cation? A Large Scale Experiment in OpenML. In Advances in Intelligent Data Analysis XV, pages 158â170, 2016.
Rattan Priya, Bruno F. De Souza, Andre Rossi, and Andre Carvalho. Using genetic algo- rithms to improve prediction of execution times of ML tasks. In Lecture Notes in Comp. Science, volume 7208, pages 196â207, 2012.
P. Probst, B. Bischl, and A.-L. Boulesteix. Tunability: Importance of hyperparameters of machine learning algorithms. ArXiv 1802.09596, 2018.
Foster Provost, David Jensen, and Tim Oates. Eï¬cient progressive sampling. In Proceedings of the ï¬fth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 23â32, 1999.
R PrudËencio and T Ludermir. Meta-learning approaches to selecting time series models. Neurocomputing, 61:121â137, 2004.
R. Raina, A. Y. Ng, and D. Koller. Transfer Learning by Constructing Informative Priors. In Proceedings of ICML, 2006.
Anil Ramachandran, Sunil Gupta, Santu Rana, and Svetha Venkatesh. Selecting optimal source for transfer learning in Bayesian optimisation. In Proceedings of PRICAI, pages 42â56, 2018a.
Anil Ramachandran, Sunil Gupta, Santu Rana, and Svetha Venkatesh. theoretic transfer learning framework for Bayesian optimisation. ECMLPKDD, 2018b. Information- In Proceedings of
Carl Edward Rasmussen. Gaussian processes in machine learning. In Advanced lectures on machine learning, pages 63â71. Springer, 2004.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In Proceedings of ICLR, 2017.
Scott Reed, Yutian Chen, Thomas Paine, A¨aron van den Oord, SM Eslami, Danilo Rezende, Oriol Vinyals, and Nando de Freitas. Few-shot autoregressive density estimation: Towards learning to learn distributions. arXiv preprint arXiv:1710.10304, 2017.
Matthias Reif, Faisal Shafait, and Andreas Dengel. Prediction of classiï¬er training time including parameter optimization. In Proc. of GfKI, pages 260 â 271, 2011.
Matthias Reif, Faisal Shafait, and Andreas Dengel. Meta-learning for evolutionary param- eter optimization of classiï¬ers. Machine learning, 87(3):357â380, 2012.
Matthias Reif, Faisal Shafait, Markus Goldstein, Thomas Breuel, and Andreas Dengel. Automatic classiï¬er selection for non-experts. Pattern Analysis and Applications, 17(1): 83 â 96, 2014.
25
Joaquin Vanschoren
Mengye Ren, Eleni Triantaï¬llou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B Tenen- baum, Hugo Larochelle, and Richard S Zemel. Meta-learning for semi-supervised few-shot classiï¬cation. arXiv 1803.00676, 2018.
S Rendle. Factorization machines. In ICDM 2015, pages 995â1000, 2010.
Parker Ridd and Christophe Giraud-Carrier. Using metalearning to predict when parameter optimization is likely to improve classiï¬cation accuracy. In ECAI Workshop on Meta- learning and Algorithm Selection, pages 18â23, 2014.
A. Rivolli, L.P.F. Garcia, C. Soares, J. Vanschoren, and A.C.P.L.F. de Carvalho. Towards reproducible empirical research in meta-learning. arXiv preprint, 1808.10406, 2018.
Herbert Robbins. Some aspects of the sequential design of experiments. In Herbert Robbins Selected Papers, pages 169â177. Springer, 1985.
M. T. Rosenstein, Z. Marx, and L. P. Kaelbling. To Transfer or Not To Transfer. In NIPS Workshop on transfer learning, 2005.
Peter J. Rousseeuw and Mia Hubert. Robust statistics for outlier detection. Wiley Inter- disciplinary Reviews: Data Mining and Knowledge Discovery, 1(1):73 â 79, 2011.
Thomas Philip Runarsson and Magnus Thor Jonsson. Evolution and design of distributed learning rules. In IEEE Symposium on Combinations of Evolutionary Computation and Neural Networks, pages 59â63, 2000.
Mostafa A. Salama, Aboul Ella Hassanien, and Kenneth Revett. Employment of neural network and rough set in meta-learning. Memetic Comp., 5(3):165â177, 2013.
S. Sanders and C. Giraud-Carrier. Informing the use of hyperparameter optimization through metalearning. In Proc. ICDM, pages 1051â1056, 2017.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lilli- crap. Meta-learning with memory-augmented neural networks. In International confer- ence on machine learning, pages 1842â1850, 2016a.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lil- licrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016b.
N. Schilling, M. Wistuba, L. Drumond, and L. Schmidt-Thieme. Hyperparameter opti- mization with factorized multilayer perceptrons. In Proceedings of ECML PKDD, pages 87â103, 2015.
J¨urgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Comp., 4(1):131â139, 1992.
J¨urgen Schmidhuber. A neural network that embeds its own meta-levels. In Proceedings of ICNN, pages 407â412, 1993.
26
Meta-Learning: A Survey
J¨urgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Shifting inductive bias with success- story algorithm, adaptive levin search, and incremental self-improvement. Machine Learn- ing, 28(1):105â130, 1997.
B. Schoenfeld, C. Giraud-Carrier, M. Poggeman, J. Christensen, and K. Seppi. Feature selection for high-dimensional data: A fast correlation-based ï¬lter solution. In AutoML Workshop at ICML, 2018.
F. Serban, J. Vanschoren, J.U. Kietz, and A.A Bernstein. A survey of intelligent assistants for data analysis. ACM Computing Surveys, 45(3):Art.31, 2013.
Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. Cnn fea- tures oï¬-the-shelf: an astounding baseline for recognition. In Proceedings of CVPR 2014, pages 806â813, 2014.
N. E. Sharkey and A. J. C. Sharkey. Adaptive Generalization. Artiï¬cial Intelligence Review, 7:313â328, 1993.
Kate A. Smith-Miles. Cross-disciplinary perspectives on meta-learning for algorithm selec- tion. ACM Computing Surveys, 41(1):1 â 25, 2009.
Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Neural Information Processing Systems, pages 4077â4087, 2017.
C Soares, J Petrak, and P Brazdil. Sampling based relative landmarks: Systematically testdriving algorithms before choosing. Lecture Notes in Computer Science, 3201:250â 261, 2001.
C. Soares, P. Brazdil, and P. Kuba. A meta-learning method to select the kernel width in support vector regression. Mach. Learn., 54:195â209, 2004.
C Soares, T Ludermir, and F De Carvalho. An analysis of meta-learning techniques for rank- ing clustering algorithms applied to artiï¬cial data. Lecture Notes in Computer Science, 5768:131â140, 2009.
J. Springenberg, A. Klein, S. Falkner, and Frank Hutter. Bayesian optimization with robust Bayesian neural networks. In Advances in Neural Information Processing Systems, 2016.
David H Stern, Horst Samulowitz, Ralf Herbrich, Thore Graepel, Luca Pulina, and Armando Tacchella. Collaborative expert portfolio management. In Proceedings of AAAI, pages 179â184, 2010.
Benjamin Strang, Peter van der Putten, Jan N. van Rijn, and Frank Hutter. Donât Rule Out Simple Models Prematurely. In Adv. in Intelligent Data Analysis, 2018.
Q. Sun, B. Pfahringer, and M. Mayo. Towards a Framework for Designing Full Model Selection and Optimization Systems. In International Workshop on Multiple Classiï¬er Systems, pages 259â270, 2013.
Quan Sun and Bernhard Pfahringer. Pairwise meta-rules for better meta-learning-based algorithm ranking. Machine Learning, 93(1):141â161, 2013.
27
Joaquin Vanschoren
Kevin Swersky, Jasper Snoek, and Ryan P Adams. Multi-task Bayesian optimization. In Adv. in neural information processing systems, pages 2004â2012, 2013.
Kevin Swersky, Jasper Snoek, and Ryan Prescott Adams. Freeze-thaw bayesian optimiza- tion. arXiv preprint arXiv:1406.3896, 2014.
William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285â294, 1933.
S. Thrun. Lifelong Learning Algorithms. In Learning to Learn, chapter 8, pages 181â209. Kluwer Academic Publishers, MA, 1998.
S. Thrun and T. Mitchell. Learning One More Thing. In Proceedings of IJCAI, pages 1217â1223, 1995.
S. Thrun and L. Pratt. Learning to Learn: Introduction and Overview. In Learning to Learn, pages 3â17. Kluwer, 1998.
L Todorovski and S Dzeroski. Experiments in meta-level learning with ILP. Lecture Notes in Computer Science, 1704:98â106, 1999.
L Todorovski, P Brazdil, and C Soares. Report on the experiments with feature selection in meta-level learning. PKDD 2000 Workshop on Data mining, Decision support, Meta- learning and ILP, pages 27â39, 2000.
L. Todorovski, H. Blockeel, and S. DËzeroski. Ranking with predictive clustering trees. Lecture Notes in Artiï¬cial Intelligence, 2430:444â455, 2002.
J. van Rijn, S. Abdulrahman, P. Brazdil, and J. Vanschoren. Fast Algorithm Selection Using Learning Curves. In Proceedings of IDA, 2015.
J. van Rijn, G. Holmes, B. Pfahringer, and J. Vanschoren. The Online Performance Estima- tion Framework. Heterogeneous Ensemble Learning for Data Streams. Machine Learning, 107:149â176, 2018.
J. N. van Rijn and Frank Hutter. Hyperparameter importance across datasets. In Proceed- ings of KDD, pages 2367â2376, 2018.
Jan N van Rijn, Geoï¬rey Holmes, Bernhard Pfahringer, and Joaquin Vanschoren. Algorithm selection on data streams. In Discovery Science, pages 325â336, 2014.
J. Vanschoren, J. N. van Rijn, B. Bischl, and L. Torgo. OpenML: networked science in machine learning. ACM SIGKDD Explorations Newsletter, 15(2):49â60, 2014.
Joaquin Vanschoren. Understanding Machine Learning Performance with Experiment Databases. PhD thesis, Leuven Univeristy, 2010.
Joaquin Vanschoren, Hendrik Blockeel, Bernhard Pfahringer, and Geoï¬rey Holmes. Exper- iment databases. Machine Learning, 87(2):127â158, 2012.
28
Meta-Learning: A Survey
Manasi Vartak, Arvind Thiagarajan, Conrado Miranda, Jeshua Bratman, and Hugo Larochelle. A meta-learning perspective on cold-start recommendations for items. In Advances in Neural Information Processing Systems, pages 6904â6914, 2017.
R Vilalta. Understanding accuracy performance through concept characterization and al- ICML Workshop on Recent Advances in Meta-Learning and Future gorithm analysis. Work, 1999.
R Vilalta and Y Drissi. A characterization of diï¬cult problems in classiï¬cation. Proceedings of ICMLA, 2002.
Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks In Advances in Neural Information Processing Systems, pages for one shot learning. 3630â3638, 2016.
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick. Learning to rein- forcement learn. arXiv preprint arXiv:1611.05763, 2016.
H. Weerts, M. Meuller, and J. Vanschoren. Importance of tuning hyperparameters of ma- chine learning algorithms. Technical report, TU Eindhoven, 2018.
Marcel Wever, Felix Mohr, and Eyke H¨ullermeier. Ml-plan for unlimited-length machine learning pipelines. In AutoML Workshop at ICML 2018, 2018.
M. Wistuba, N. Schilling, and L. Schmidt-Thieme. Learning hyperparameter optimization initializations. In 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 1â10, 2015a.
M. Wistuba, N. Schilling, and L. Schmidt-Thieme. Hyperparameter search space pruning, In ECML a new component for sequential model-based hyperparameter optimization. PKDD 2015, pages 104â119, 2015b.
Martin Wistuba, Nicolas Schilling, and Lars Schmidt-Thieme. Scalable Gaussian process- based transfer surrogates for hyperparameter optimization. Machine Learning, 107(1): 43â78, 2018.
D.H. Wolpert and W.G. Macready. No free lunch theorems for search. Technical Report SFI-TR-95-02-010, The Santa Fe Institute, 1996.
C. Yang, Y. Akimoto, D.W Kim, and M. Udell. Oboe: Collaborative ï¬ltering for automl initialization. arXiv preprint arXiv:1808.03233, 2018.
Dani Yogatama and Gideon Mann. Eï¬cient transfer learning method for automatic hyper- parameter tuning. In AI and Statistics, pages 1077â1085, 2014.
Jason Yosinski, Jeï¬ Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in neural information processing systems, pages 3320â3328, 2014.
29 | {
"id": "1705.05355"
} |
1810.02274 | Episodic Curiosity through Reachability | Rewards are sparse in the real world and most of today's reinforcement
learning algorithms struggle with such sparsity. One solution to this problem
is to allow the agent to create rewards for itself - thus making rewards dense
and more suitable for learning. In particular, inspired by curious behaviour in
animals, observing something novel could be rewarded with a bonus. Such bonus
is summed up with the real task reward - making it possible for RL algorithms
to learn from the combined reward. We propose a new curiosity method which uses
episodic memory to form the novelty bonus. To determine the bonus, the current
observation is compared with the observations in memory. Crucially, the
comparison is done based on how many environment steps it takes to reach the
current observation from those in memory - which incorporates rich information
about environment dynamics. This allows us to overcome the known "couch-potato"
issues of prior work - when the agent finds a way to instantly gratify itself
by exploiting actions which lead to hardly predictable consequences. We test
our approach in visually rich 3D environments in ViZDoom, DMLab and MuJoCo. In
navigational tasks from ViZDoom and DMLab, our agent outperforms the
state-of-the-art curiosity method ICM. In MuJoCo, an ant equipped with our
curiosity module learns locomotion out of the first-person-view curiosity only. | http://arxiv.org/pdf/1810.02274 | Nikolay Savinov, Anton Raichuk, Raphaël Marinier, Damien Vincent, Marc Pollefeys, Timothy Lillicrap, Sylvain Gelly | cs.LG, cs.AI, cs.CV, cs.RO, stat.ML | Accepted to ICLR 2019. Code at
https://github.com/google-research/episodic-curiosity/. Videos at
https://sites.google.com/view/episodic-curiosity/ | null | cs.LG | 20181004 | 20190806 | 9 1 0 2
g u A 6 ] G L . s c [
5 v 4 7 2 2 0 . 0 1 8 1 : v i X r a
Published as a conference paper at ICLR 2019
# EPISODIC CURIOSITY THROUGH REACHABILITY
Nikolay Savinov â 1 Anton Raichuk â 1 Rapha¨el Marinier â 1 Damien Vincent â 1 Marc Pollefeys 3 Timothy Lillicrap 2
1Google Brain, 2DeepMind, 3ETH Z¨urich
# ABSTRACT
Rewards are sparse in the real world and most of todayâs reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself â thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such bonus is summed up with the real task reward â making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic mem- ory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is done based on how many environment steps it takes to reach the current observation from those in memory â which incorporates rich information about environment dynamics. This allows us to overcome the known âcouch-potatoâ issues of prior work â when the agent ï¬nds a way to instantly gratify itself by exploiting actions which lead to hardly predictable consequences. We test our approach in visually rich 3D environments in VizDoom, DMLab and MuJoCo. In navigational tasks from VizDoom and DMLab, our agent outperforms the state-of-the-art curiosity method ICM. In MuJoCo, an ant equipped with our curiosity module learns lo- comotion out of the ï¬rst-person-view curiosity only. The code is available at https://github.com/google-research/episodic-curiosity.
# INTRODUCTION
Many real-world tasks have sparse rewards. For example, animals searching for food may need to go many miles without any reward from the environment. Standard reinforcement learning algorithms struggle with such tasks because of reliance on simple action entropy maximization as a source of exploration behaviour.
Multiple approaches were proposed to achieve better explorative policies. One way is to give a reward bonus which facilitates exploration by rewarding novel observations. The reward bonus is summed up with the original task reward and optimized by standard RL algorithms. Such an approach is motivated by neuroscience studies of animals: an animal has an ability to reward itself for something novel â the mechanism biologically built into its dopamine release system. How exactly this bonus is formed remains an open question.
Many modern curiosity formulations aim at maximizing âsurpriseâ â inability to predict the future. This approach makes perfect sense but, in fact, is far from perfect. To show why, let us consider a thought experiment. Imagine an agent is put into a 3D maze. There is a precious goal somewhere in the maze which would give a large reward. Now, the agent is also given a remote control to a TV and can switch the channels. Every switch shows a random image (say, from a ï¬xed set of images). The curiosity formulations which optimize surprise would rejoice because the result of the channel switching action is unpredictable. The agent would be drawn to the TV instead of looking for a goal in the environment (this was indeed observed in (Burda et al., 2018a)). So, should we call the channel switching behaviour curious? Maybe, but it is unproductive for the original sparse- reward goal-reaching task. What would be a deï¬nition of curiosity which does not suffer from such âcouch-potatoâ behaviour?
We propose a new curiosity deï¬nition based on the following intuition. If the agent knew the ob- servation after changing a TV channel is only one step away from the observation before doing that â it probably would not be so interesting to change the channel in the ï¬rst place (too easy). This
âShared ï¬rst authorship.
1
Published as a conference paper at ICLR 2019
ae . in memory Far from memory â Reachable from memory takes > k steps to reach Instie yams) (novel)
Figure 1: We deï¬ne novelty through reach- ability. The nodes in the graph are observa- tions, the edges â possible transitions. The blue nodes are already in memory, the green nodes are reachable from the memory within k = 2 steps (not novel), the orange nodes are further away â take more than k steps to reach (novel). In practice, the full possible transition graph is not available, so we train a neural network approximator to predict if the distance in steps between observations is larger or smaller than k.
intuition can be formalized as giving a reward only for those observations which take some effort to reach (outside the already explored part of the environment). The effort is measured in the number of environment steps. To estimate it we train a neural network approximator: given two observations, it would predict how many steps separate them. The concept of novelty via reachability is illustrated in Figure 1. To make the description above practically implementable, there is still one piece miss- ing though. For determining the novelty of the current observation, we need to keep track of what was already explored in the environment. A natural candidate for that purpose would be episodic memory: it stores instances of the past which makes it easy to apply the reachability approximator on pairs of current and past observations.
Our method works as follows. The agent starts with an empty memory at the beginning of the episode and at every step compares the current observation with the observations in memory to determine novelty. If the current observation is indeed novel â takes more steps to reach from observations in memory than a threshold â the agent rewards itself with a bonus and adds the current observation to the episodic memory. The process continues until the end of the episode, when the memory is wiped clean.
We benchmark our method on a range of tasks from visually rich 3D environments VizDoom, DM- Lab and MuJoCo. We conduct the comparison with other methods â including the state-of-the-art curiosity method ICM (Pathak et al., 2017) â under the same budget of environment interactions. First, we use the VizDoom environments from prior work to establish that our re-implementation of the ICM baseline is correct â and also demonstrate at least 2 times faster convergence of our method with respect to the baseline. Second, in the randomized procedurally generated environments from DMLab our method turns out to be more robust to spurious behaviours than the method ICM: while the baseline learns a persistent ï¬ring behaviour in navigational tasks (thus creating interesting pic- tures for itself), our method learns a reasonable explorative behaviour. In terms of quantitative evaluation, our method reaches the goal at least 2 times more often in the procedurally generated test levels in DMLab with a very sparse reward. Third, when comparing the behaviour of the agent in the complete absence of rewards, our method covers at least 4 times more area (measured in discrete (x, y) coordinate cells) than the baseline ICM. Fourth, we demonstrate that our curiosity bonus does not signiï¬cantly deteriorate performance of the plain PPO algorithm (Schulman et al., 2017) in two tasks with dense reward in DMLab. Finally, we demonstrate that an ant in a MuJoCo environment can learn locomotion purely from our curiosity reward computed based on the ï¬rst-person view.
# 2 EPISODIC CURIOSITY
We consider an agent which interacts with an environment. The interactions happen at discrete time steps over the episodes of limited duration 7â. At each time step t, the environment provides the agent with an observation 0, from the observational space O (we consider images), samples an action a, from a set of actions A using a probabilistic policy 7(0;,) and receives a scalar reward r, ⬠R together with the new observation 0; , and an end-of-episode indicator. The goal of the agent is to optimize the expectation of the discounted sum of rewards during the episode $ = >, y'rr. In this work we primarily focus on the tasks where rewards r; are sparse â that is, zero for most of the time steps t. Under such conditions commonly used RL algorithms (e.g., PPO Schulman et al. (2017)) do not work well. We further introduce an episodic curiosity (EC) module which alleviates this problem. The purpose of this module is to produce a reward bonus b; which is further summed
2
Published as a conference paper at ICLR 2019
oN are ( network negative ~~. ay 7 Embedding a network ~~ 4 Og Reachability network Comparator
Figure 2: Left: siamese architecture of reachability (R) network. Right: R-network is trained based on a sequence of observations that the agent encounters while acting. The temporally close (within threshold) pairs of observations are positive examples, while temporally far ones â negatives.
up with the task reward r; to give an augmented reward 7, = r; + b,. The augmented reward has a nice property from the RL point of view â it is a dense reward. Learning with such reward is faster, more stable and often leads to better final performance in terms of the cumulative task reward S.
In the following section we describe the key components of our episodic curiosity module.
2.1 EPISODIC CURIOSITY MODULE
The episodic curiosity (EC) module takes the current observation o as input and produces a reward bonus b. The module consists of both parametric and non-parametric components. There are two parametric components: an embedding network E : O â Rn and a comparator network C : Rn à Rn â [0, 1]. Those parametric components are trained together to predict reachability as parts of the reachability network â shown in Figure 2. There are also two non-parametric components: an episodic memory buffer M and a reward bonus estimation function B. The high-level overview of the system is shown in Figure 3. Next, we give a detailed explanation of all the components.
Embedding and comparator networks. Both networks are designed to function jointly for estimat- ing within-k-step-reachability of one observation oi from another observation oj as parts of a reach- ability network R(oi, oj) = C(E(oi), E(oj)). This is a siamese architecture similar to (Zagoruyko & Komodakis, 2015). The architecture is shown in Figure 2. R-network is a classiï¬er trained with a logistic regression loss: it predicts values close to 0 if probability of two observations being reach- able from one another within k steps is low, and values close to 1 when this probability is high. Inside the episodic curiosity the two networks are used separately to save up computation and memory.
Episodic memory. The episodic memory buffer M stores embeddings of past observations from the current episode, computed with the embedding network E. The memory buffer has a limited capacity K to avoid memory and performance issues. At every step, the embedding of the current observation might be added to the memory. What to do when the capacity is exceeded? One solution we found working well in practice is to substitute a random element in memory with the current element. This way there are still more fresh elements in memory than older ones, but the older elements are not totally neglected.
Reward bonus estimation module. The purpose of this module is to check for reachable observa- tions in memory and if none is found â assign larger reward bonus to the current time step. The check is done by comparing embeddings in memory to the current embedding via comparator net- work. Essentially, this check insures that no observation in memory can be reached by taking only a few actions from the current state â our characterization of novelty.
2.2 BONUS COMPUTATION ALGORITHM.
At every time step, the current observation o goes through the embedding network producing the embedding vector e = E(o). This embedding vector is compared with the stored embeddings in the memory buffer M = (er, ees e\m}) via the comparator network Câ where |M| is the current number of elements in memory. This comparator network fills the reachability buffer with values
ci = C(ei, e), i = 1, |M|. (1)
3
Published as a conference paper at ICLR 2019
om coe r f {ip Ss Reward onus ebcervaion | Embedding S ieee Si , Gye | Ry Gaaw io) network butfer bonus teen gets slat \ âAppend to memory if large curiosity reward /
Figure 3: The use of episodic curiosity (EC) module for reward bonus computation. The module take a current observation as input and computes a reward bonus which is higher for novel observa- tions. This bonus is later summed up with the task reward and used for training an RL agent.
Then the similarity score between the memory buffer and the current embedding is computed from the reachability buffer as (with a slight abuse of notation) C(M,e) = F (c1,..-, em) ⬠[0, 1]. (2)
where the aggregation function F is a hyperparameter of our method. Theoretically, F = max would be a good choice, however, in practice it is prone to outliers coming from the parametric embedding and comparator networks. Empirically, we found that 90-th percentile works well as a robust substitute to maximum.
As a curiosity bonus, we take
b = B(M, e) = α(β â C(M, e)), (3)
where α â R+ and β â R are hyperparameters of our method. The value of α depends on the scale of task rewards â we will discuss how to select it in the experimental section. The value of β determines the sign of the reward â and thus could bias the episodes to be shorter or longer. Empirically, β = 0.5 works well for ï¬xed-duration episodes, and β = 1 is preferred if an episode could have variable length.
After the bonus computation, the observation embedding is added to memory if the bonus b is larger than a novelty threshold bnovelty. This check is necessary for the following reason. If every observation embedding is added to the memory buffer, the observation from the current step will always be reachable from the previous step. Thus, the reward would never be granted. The threshold bnovelty induces a discretization in the embedding space. Intuitively, this makes sense: only âdistinct enoughâ memories are stored. As a side beneï¬t, the memory buffer stores information with much less redundancy. We refer the reader to the video1 which visualizes the curiosity reward bonus and the memory state during the operation of the algorithm.
2.3 REACHABILITY NETWORK TRAINING
If the full transition graph in Figure 1 was available, there would be no need of a reachability net- work and the novelty could be computed analytically through the shortest-path algorithm. However, normally we have access only to the sequence of observations which the agent receives while acting. Fortunately, as suggested by (Savinov et al., 2018), even a simple observation sequence graph could still be used for training a reasonable approximator to the real step-distance. This procedure is illus- trated in Figure 2. This procedure takes as input a sequence of observations o1, . . . , oN and forms pairs from those observations. The pairs (oi, oj) where |i â j| ⤠k are taken as positive (reachable) examples while the pairs with |i â j| > γk become negative examples. The hyperparameter γ is necessary to create a gap between positive and negative examples. In the end, the network is trained with logistic regression loss to output the probability of the positive (reachable) class.
In our work, we have explored two settings for training a reachability network: using a random policy and together with the task-solving policy (online training). The ï¬rst version generally follows the training protocol proposed by (Savinov et al., 2018). We put the agent into exactly the same
# 1https://youtu.be/mphIRR6VsbM
4
Published as a conference paper at ICLR 2019
(a) (b) (c) (d)
Figure 4: Examples of tasks considered in our experiments: (a) VizDoom static maze goal reaching, (b) DMLab randomized maze goal reaching, (c) DMLab key-door puzzle, (d) MuJoCo ant locomo- tion out of ï¬rst-person-view curiosity.
conditions where it will be eventually tested: same episode duration and same action set. The agent takes random actions from the action set. Given the environment interaction budget (2.5M 4-repeated steps in DMLab, 300K 4-repeated steps in VizDoom), the agent ï¬lls in the replay buffer with observations coming from its interactions with the environment, and forms training pairs by sampling from this replay buffer randomly. The second version collects the data on-policy, and re-trains the reachability network every time after a ï¬xed number of environment interactions is performed. We provide the details of R-network training in the supplementary material.
3 EXPERIMENTAL SETUP We test our method in multiple environments from VizDoom (Kempka et al., 2016), DMLab (Beattie et al., 2016) and MuJoCo (Todorov et al., 2012; Schulman et al., 2015). The experiments in Viz- Doom allow us to verify that our re-implementation of the previous state-of-the-art curiosity method ICM (Pathak et al., 2017) is correct. The experiments in DMLab allow us to extensively test the generalization of our method as well as baselines â DMLab provides convenient procedural level generation capabilities which allows us to train and test RL methods on hundreds of levels. The experiments in MuJoCo allow us to show the generality of our method. Due to space limits, the MuJoCo experiments are described in the supplementary material. The examples of tasks are shown in Figure 4.
Environments. Both VizDoom and DMLab environments provide rich maze-like 3D environments. The observations are given to the agent in the form of images. For VizDoom, we use 84 à 84 grayscale images as input. For DMLab, we use 84 à 84 RGB images as input. The agent operates with a discrete action set which comprises different navigational actions. For VizDoom, the standard action set consists of 3 actions: move forward, turn left/right. For DMLab, it consists of 9 actions: move forward/backward, turn left/right, strafe left/right, turn left/right+move forward, ï¬re. For both VizDoom and DMLab we use all actions with a repeat of 4, as typical in the prior work. We only use RGB input of the provided RGBD observations and remove all head-on display information from the screen, leaving only the plain ï¬rst-person view images of the maze. The rewards and episode durations differ between particular environments and will be further speciï¬ed in the corresponding experimental sections.
Basic RL algorithm. We choose the commonly used PPO algorithm from the open-source imple- mentation2 as our basic RL algorithm. The policy and value functions are represented as CNNs to reduce number of hyperparameters â LSTMs are harder to tune and such tuning is orthogonal to the contribution of the paper. We apply PPO to the sum of the task reward and the bonus reward coming from speciï¬c curiosity algorithms. The hyperparameters of the PPO algorithm are given in the sup- plementary material. We use only two sets of hyperparameters: one for all VizDoom environments and the other one for all DMLab environments.
Baseline methods. The simplest baseline for our approach is just the basic RL algorithm applied to the task reward. As suggested by the prior work and our experiments, this is a relatively weak baseline in the tasks where reward is sparse.
As the second baseline, we take the state-of-the-art curiosity method ICM (Pathak et al., 2017). As follows from the results in (Pathak et al., 2017; Fu et al., 2017), ICM is superior to methods VIME (Houthooft et al., 2016), #Exploration (Tang et al., 2017) and EX 2 (Fu et al., 2017) on the curiosity tasks in visually rich 3D environments.
# 2https://github.com/openai/baselines
5
Published as a conference paper at ICLR 2019
(a) (b) (c)
Figure 5: Examples of maze types used in our experiments: (a) VizDoom static maze goal reaching, (b) DMLab randomized maze goal reaching, (c) DMLab randomized maze goal reaching with doors.
Finally, as a sanity check, we introduce a novel baseline method which we call Grid Oracle. Since we can access current (x, y) coordinates of the agent in all environments, we are able to directly discretize the world in 2D cells and reward the agent for visiting as many cells as possible during the episode (the reward bonus is proportional to the number of cells visited). At the end of the episode, cell visit counts are zeroed. The reader should keep in mind that this baseline uses privileged in- formation not available to other methods (including our own method EC). While this privileged information is not guaranteed to lead to success in any particular RL task, we do observe this base- line to perform strongly in many tasks, especially in complicated DMLab environments. The Grid Oracle baseline has two hyperparameters: the weight for combining Grid Oracle reward with the task reward and the cell size.
Hyperparameter tuning. As DMLab environments are procedurally generated, we perform tuning on the validation set, disjoint with the training and test sets. The tuning is done on one of the environ- ments and then the same hyperparameters are re-used for all other environments. VizDoom environ- ments are not procedurally generated, so there is no trivial way to have proper training/validation/test splits â so we tune on the same environment (as typical in the prior RL work for the environments without splits). When tuning, we consider the mean ï¬nal reward of 10 training runs with the same set of hyperparameters as the objective â thus we do not perform any seed tuning. All hyperpa- rameter values are listed in the supplementary material. Note that although bonus scalar α depends on the range of task rewards, the environments in VizDoom and DMLab have similar ranges within each platform â so our approach with re-using α for multiple environments works.
4 EXPERIMENTS In this section, we describe the speciï¬c tasks we are solving and experimental results for all consid- ered methods on those tasks. There are 4 methods to report: PPO, PPO + ICM, PPO + Grid Oracle and PPO + EC (our method). First, we test static-maze goal reaching in VizDoom environments from prior work to verify that our baseline re-implementation is correct. Second, we test the goal-reaching behaviour in procedurally generated mazes in DMLab. Third, we train no-reward (pure curiosity) maze exploration on the levels from DMLab and report Grid Oracle reward as an approximate mea- sure of the maze coverage. Finally, we demonstrate that our curiosity bonus does not signiï¬cantly deteriorate performance in two dense reward tasks in DMLab. All the experiments were conducted under the same environment interaction budget for all methods (R-network pre-training is included in this budget). The videos of all trained agents in all environments are available online3.
For additional experiments we refer the reader to the supplementary material: there we show that R-network can successfully generalize between environments, demonstrate stability of our method to hyperparameters and present an ablation study.
4.1 STATIC MAZE GOAL REACHING.
The goal of this experiment is to verify our re-implementation of the baseline method is correct. We use the MyWayHome task from VizDoom. The agent has to reach the goal in a static 3D maze in the time limit of 525 4-repeated steps (equivalent to 1 minute). It only gets a reward of +1 when it reaches the goal (episode ends at that moment), the rest of the time the reward is zero.
The task has three sub-tasks (following the setup in (Pathak et al., 2017)): âDenseâ, âSparseâ and âVery Sparseâ. The layout of the maze is demonstrated in Figure 5(c). The goal is always at the same room but the starting points are different in those sub-tasks. For the âDenseâ subtask, the
# 3https://sites.google.com/view/episodic-curiosity
6
Published as a conference paper at ICLR 2019
Dense Sparse Very Sparse
Figure 6: Task reward as a function of training step for VizDoom tasks. Higher is better. We use the ofï¬ine version of our algorithm and shift the curves for our method by the number of environment steps used to train R-network â so the comparison is fair. We run every method with a repeat of 3 (same as in prior work (Pathak et al., 2017)) and show all runs. No seed tuning is performed.
agent starts in one of the random locations in the maze, some of which are close to the goal. In this sub-task, the reward is relatively dense (hence the name): the agent is likely to bump into the goal by a short random walk. Thus, this is an easy task even for standard RL methods. The other two sub-tasks are harder: the agent starts in a medium-distant room from the goal (âSparseâ) or in a very distant room (âVery Sparseâ). Those tasks are hard for standard RL algorithms because the probability of bumping into a rewarding state by a random walk is very low.
The training curves are shown in Figure 6. By analysing them, we draw a few conclusions. First, our re-implementation of the ICM baseline is correct and the results are in line with those pub- lished in (Pathak et al., 2017). Second, our method works on-par with the ICM baseline in terms of ï¬nal performance, quickly reaching 100% success rate in all three sub-tasks. Finally, in terms of convergence speed, our algorithm is signiï¬cantly faster than the state-of-the-art method ICM â our method reaches 100% success rate at least 2 times faster. Note that to make the comparison of the training speed fair, we shift our training curves by the environment interaction budget used for training R-network.
4.2 PROCEDURALLY GENERATED RANDOM MAZE GOAL REACHING.
In this experiment we aim to evaluate maze goal reaching task generalization on a large scale. We train on hundreds of levels and then test also on hundreds of hold-out levels. We use âExplore Goal Locations Largeâ (we will denote it âSparseâ) and âExplore Obstructed Goals Largeâ (we will denote it âSparse + Doorsâ) levels in the DMLab simulator. In those levels, the agent starts in a random location in a randomly generated maze (both layout and textures are randomized at the beginning of the episode). Within the time limit of 1800 4-repeated steps (equivalent to 2 minutes), the agent has to reach the goal as many times as possible. Every time it reaches a goal, it is re- spawned into another random location in the maze and has to go to the goal again. Every time the goal is reached, the agent gets a reward +10, the rest of the time the reward is zero. The second level is a variation of the ï¬rst one with doors which make the paths in the maze longer. The layouts of the levels are demonstrated in Figure 5(b,c).
We found out that the standard task âSparseâ is actually relatively easy even for the plain PPO algorithm. The reason is that the agent starting point and the goal are sampled on the map inde- pendently of each other â and sometimes both happen to be in the same room which simpliï¬es the task. To test the limits of the algorithms, we create a gap between the starting point and the goal which eliminates same-room initialization. We report the results for both the original task âSparseâ and its harder version âVery Sparseâ. Thus, there are overall three tasks considered in this section: âSparseâ, âVery Sparseâ and âSparse + Doorsâ.
The results demonstrate that our method can reasonably adapt to ever-changing layouts and textures â see Table 1 and training curves in Figure 7. We outperform the baseline method ICM in all three environments using the same environment interaction budget of 20M 4-repeated steps. The environment âSparseâ is relatively easy and all methods work reasonably. In the âVery Sparseâ and âSparse + Doorsâ settings our advantage with respect to PPO and ICM is more clear. On those levels, the visual inspection of the ICM learnt behaviour reveals an important property of this method: it is confused by the ï¬ring action and learns to entertain itself by ï¬ring until it runs out of ammunition. A similar ï¬nding was reported in a concurrent work (Burda et al., 2018a): the agent was given an action which switched the content on a TV screen in a maze, along with the movement actions. Instead of moving, the agent learns to switch channels forever. While one might intuitively
7
Published as a conference paper at ICLR 2019
# Sparse
# Very Sparse
# Sparse + Doors
No Reward No Reward - Fire Dense 1
w po g o* g $20 a] a ur o 0 5 10 15 20 Number of training steps (in millions)
w zg g ® o 8 = a ur 0 5 10 1520 Number of training steps (in millions)
w 230 g #20 o B10 a] a ur o 0 510 15 20 | Number of training steps (in millions)
30 wv zg $20 fa . ba â PPO o S10 â PPO +ICM 9 ââ PPO + Grid Oracle a â PPO + EC (ours) wo _ PPO + ECO (ours) 15 20 0 5 10 Number of training steps (in millions)
0 5 10 15 20 Number of training steps (in millions)
15 ) 5 10 20 | Number of training steps (in millions)
millions) | Number of training steps (in millions) Number
Figure 7: Reward as a function of training step for DMLab tasks. Higher is better. âECOâ stands for the online version of our method, which trains R-network and the policy at the same time. We run every method 30 times and show 5 randomly selected runs. No seed tuning is performed.
accept such âcouch-potatoâ behaviour in intelligent creatures, it does not need to be a consequence of curious behaviour. In particular, we are not observing such dramatic ï¬ring behaviour for our curiosity formulation: according to Figure 1, an observation after ï¬ring is still one step away from the one before ï¬ring, so it is not novel (note that ï¬ring still could happen in practice because of the entropy term in PPO). Thus, our formulation turns out to be more robust than ICMâs prediction error in this scenario. Note that we do not speciï¬cally look for an action set which breaks the baseline â just use the standard one for DMLab, in line with the prior work (e.g., (Espeholt et al., 2018)).
The result of this experiment suggests to look more into how methods behave in extremely-sparse reward scenarios. The limiting case would be no reward at all â we consider it in the next section.
4.3 NO REWARD/AREA COVERAGE.
This experiment aims to quantitatively establish how good our method is in the scenario when no task reward is given. One might question why this scenario is interesting â however, before the task reward is found for the ï¬rst time, the agent lives in the no-reward world. How it behaves in this case will also determine how likely it is to stumble into the task reward in the ï¬rst place.
We use one of the DMLab levels â âSparseâ from the previous experiment. We modify the task to eliminate the reward and name the new task âNo Rewardâ. To quantify success in this task, we report the reward coming from Grid Oracle for all compared methods. This reward provides a discrete approximation to the area covered by the agent while exploring.
The training curves are shown in Figure 7 and the ï¬nal test results in Table 1. The result of this experiment is that our method and Grid Oracle both work, while the ICM baseline is not working â and the qualitative difference in behaviour is bigger than in the previous experiments. As can be seen from the training curves, after a temporary increase, ICM quality actually decreases over time, rendering a sharp disagreement between the prediction-error-based bonus and the area coverage metric. By looking at the video3, we observe that the ï¬ring behaviour of ICM becomes even more prominent, while our method still shows reasonable exploration.
Finally, we try to ï¬nd out if the ICM baseline behaviour above is due to the ï¬ring action only. Could it learn exploration of randomized mazes if the Fire action is excluded from the action set? For that purpose, we create a new version of the task â we call it âNo Reward - Fireâ. This task demonstrates qualitatively similar results to the one with the full action set â see Table 1. By looking at the videos3, we hypothesise that the agent can most signiï¬cantly change its current view when it is close to the wall â thus increasing one-step prediction error â so it tends to get stuck near âinterestingâ diverse textures on the walls.
The results suggest that in an environment completely without reward, the ICM method will exhaust its curiosity very quickly â passing through a sharp peak and then degrading into undesired be-
8
Published as a conference paper at ICLR 2019
Table 1: Reward in DMLab tasks (mean ± std) for all compared methods. Higher is better. âECOâ stands for the online version of our method, which trains R-network and the policy at the same time. We report Grid Oracle reward in tasks with no reward. The Grid Oracle method is given for reference â it uses privileged information unavailable to other methods. Results are averaged over 30 random seeds. No seed tuning is performed.
Method Sparse Very Sparse Sparse+Doors No Reward No Reward - Fire Dense 1 PPO 27.0 ± 5.1 8.6 ± 4.3 1.5 ± 0.1 191 ± 12 217 ± 19 22.8 ± 0.5 PPO + ICM 23.8 ± 2.8 11.2 ± 3.9 2.7 ± 0.2 72 ± 2 87 ± 3 20.9 ± 0.6 PPO + EC (ours) 26.2 ± 1.9 24.7 ± 2.2 8.5 ± 0.6 475 ± 8 492 ± 10 19.9 ± 0.7 PPO + ECO (ours) 41.6 ± 1.7 40.5 ± 1.1 19.8 ± 0.5 472 ± 18 457 ± 32 22.9 ± 0.4 PPO + Grid Oracle 56.7 ± 1.3 54.3 ± 1.2 29.4 ± 0.5 796 ± 2 795 ± 3 20.9 ± 0.6 Dense 2 9.41 ± 0.02 9.39 ± 0.02 9.53 ± 0.03 9.60 ± 0.02 8.97 ± 0.04
haviour. This observation raises concerns: what if ICM passes the peak before it reaches the ï¬rst task reward in the cases of real tasks? Supposedly, it would require careful tuning per-game. Fur- thermore, in some cases, it would take a lot of time with a good exploration behaviour to reach the ï¬rst reward, which would require to stay at the top performance for longer â which is problematic for the ICM method but still possible for our method.
4.4 DENSE REWARD TASKS. A desirable property of a good curiosity bonus is to avoid hurting performance in dense-reward tasks (in addition to improving performance for sparse-reward tasks). We test this scenario in two levels in the DMLab simulator: âRooms Keys Doors Puzzleâ (which we denote âDense 1â) and âRooms Collect Good Objects Trainâ (which we denote âDense 2â). In the ï¬rst task, the agent has to collect keys and reach the goal object behind a few doors openable by those keys. The rewards in this task are rather dense (key collection/door opening is rewarded). In the second task the agent has to collect good objects (give positive reward) and avoid bad objects (give negative reward). The episode lasts for 900 4-repeated steps (equivalent to 1 minute) in both tasks.
The results show that our method indeed does not signiï¬cantly deteriorate performance of plain PPO in those dense-reward tasks â see Table 1. The training curves for âDense 1â are shown in Figure 7 and for âDense 2â â in the supplementary material. Note that we use the same bonus weight in this task as in other DMLab tasks before. All methods work similarly besides the Grid Oracle in the âDense 2â task â which performs slightly worse. Video inspection3 reveals that Grid Oracle â the only method which has ground-truth knowledge about area it covers during training â sometimes runs around excessively and occasionally fails to collect all good objects. 5 DISCUSSION Our method is at the intersection of multiple topics: curiosity, episodic memory and temporal dis- tance prediction. In the following, we discuss the relation to the prior work on those topics.
Curiosity in visually rich 3D environments. Recently, a few works demonstrated the possibility to learn exploration behaviour in visually rich 3D environments like DMLab (Beattie et al., 2016) and VizDoom (Kempka et al., 2016). (Pathak et al., 2017) trains a predictor for the embedding of the next observation and if the reality is signiï¬cantly different from the prediction â rewards the agent. In that work, the embedding is trained with the purpose to be a good embedding for predicting action taken between observations â unlike an earlier work (Stadie et al., 2015) which obtains an embedding from an autoencoder. It was later shown by (Burda et al., 2018a) that the perceptive prediction approach has a downside â the agent could become a âcouch-potatoâ if given an action to switch TV channels. This observation is conï¬rmed in our experiments by observing a persistent ï¬ring behaviour of the ICM baseline in the navigational tasks with very sparse or no reward. By contrast, our method does not show this behaviour. Another work (Fu et al., 2017) trains a temporal distance predictor and then uses this predictor to establish novelty: if the observation is easy to classify versus previous observations, it is novel. This method does not use episodic memory, however, and the predictor is used in way which is different from our work.
General curiosity. Curiosity-based exploration for RL has been extensively studied in the litera- ture. For an overview, we refer the reader to the works (Oudeyer & Kaplan, 2009; Oudeyer et al., 2007). The most common practical approaches could be divided into three branches: prediction-
9
Published as a conference paper at ICLR 2019
error-based, count-based and goal-generation-based. Since the prediction-based approaches were discussed before, in the following we focus on the latter two branches.
The count-based approach suggests to keep visit counts for observations and concentrate on visiting states which has been rarely visited before â which bears distant similarity to how we use episodic memory. This idea is natural for discrete observation spaces and has solid theoretical foundations. Its extension to continuous observation spaces is non-trivial, however. The notable step in this direction was taken by works (Bellemare et al., 2016; Ostrovski et al., 2017) which introduce a trained observation density model which is later converted to a function behaving similarly to counts. it is the The way conversion is done has some similarity to prediction-error-based approaches: difference of the density in the example before and after training of this example which is converted to count. The experiments in the original works operate on Atari games (Bellemare et al., 2013) and were not benchmarked on visually rich 3D environments. Another approach (Tang et al., 2017) discretises the continuous observation space by hashing and then uses the count-based approach in this discretised space. This method is appealing in its simplicity, however, the experiments in (Pathak et al., 2017; Fu et al., 2017) show that it does not perform well in visually rich 3D environments. Another line of work, Novelty Search (Lehman & Stanley, 2011) and its recent follow-up (Conti et al., 2018), proposed maintaining an archive of behaviours and comparing current behaviour to those â however, the comparison is done by euclidean distance and behaviours are encoded using coordinates, while we learn the comparison function and only use pixels.
Finally, our concept of novelty through reachability is reminiscent of generating the goals which are reachable but not too easy â a well-studied topic in the prior work. The work (Held et al., 2017) uses a GAN to differentiate what is easy to reach from what is not and then generate goals at the boundary. Another work (Baranes & Oudeyer, 2013) deï¬nes new goals according to the expected progress the agent will make if it learns to solve the associated task. The recent work (P´er´e et al., 2018) learns an embedding for the goal space and then samples increasingly difï¬cult goals from that space. In a spirit similar to those works, our method implicitly deï¬nes goals that are at least some ï¬xed number of steps away by using the reachability network. However, our method is easier to implement than other goal-generation methods and quite general.
Episodic memory. Two recent works (Blundell et al., 2016; Pritzel et al., 2017) were inspired by the ideas of episodic memory in animals and proposed an approach to learn the functioning of episodic memory along with the task for which this memory is applied. Those works are more focused on repeating successful strategies than on exploring environments â and are not designed to work in the absence of task rewards.
Temporal distance prediction. The idea to predict the distance between video frames has been studied extensively. Usually this prediction is an auxiliary task for solving another problem. (Ser- manet et al., 2017) trains an embedding such that closer in time frames are also closer in the embed- ding space. Multiple works (Fu et al., 2017; Savinov et al., 2018; Aytar et al., 2018) train a binary classiï¬er for predicting if the distance in time between frames is within a certain threshold or not. While (Sermanet et al., 2017; Aytar et al., 2018) use only the embedding for their algorithms, (Fu et al., 2017; Savinov et al., 2018) also use the classiï¬er trained together with the embedding. As mentioned earlier, (Fu et al., 2017) uses this classiï¬er for density estimation instead of compari- son to episodic memory. (Savinov et al., 2018) does compare to the episodic memory buffer but solves a different task â given an already provided exploration video, navigate to a goal â which is complementary to the task in our work. 6 CONCLUSION In this work we propose a new model of curiosity based on episodic memory and the ideas of reach- ability. This allows us to overcome the known âcouch-potatoâ issues of prior work and outperform the previous curiosity state-of-the-art method ICM in visually rich 3D environments from VizDoom and DMLab. Our method also allows a MuJoCo ant to learn locomotion purely out of ï¬rst-person- view curiosity. In the future, we want to make policy aware of memory not only in terms of receiving reward, but also in terms of acting. Can we use memory content retrieved based on reachability to guide exploration behaviour in the test time? This could open opportunities to learn exploration in new tasks in a few-shot style â which is currently a big scientiï¬c challenge.
ACKNOWLEDGMENTS We would like to thank Olivier Pietquin, Alexey Dosovitskiy, Vladlen Koltun, Carlos Riquelme, Charles Blundell, Sergey Levine and Matthieu Geist for the valuable discussions about our work.
10
Published as a conference paper at ICLR 2019
# REFERENCES
Yusuf Aytar, Tobias Pfaff, David Budden, Tom Le Paine, Ziyu Wang, and Nando de Freitas. Playing hard exploration games by watching youtube. arXiv preprint arXiv:1805.11592, 2018.
Adrien Baranes and Pierre-Yves Oudeyer. Active learning of inverse models with intrinsically mo- tivated goal exploration in robots. Robotics and Autonomous Systems, 61(1):49â73, 2013.
Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich K¨uttler, Andrew Lefrancq, Simon Green, V´ıctor Vald´es, Amir Sadik, et al. Deepmind lab. arXiv preprint arXiv:1612.03801, 2016.
Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pp. 1471â1479, 2016.
Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environ- ment: An evaluation platform for general agents. Journal of Artiï¬cial Intelligence Research, 47: 253â279, 2013.
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, and Demis Hassabis. Model-free episodic control. arXiv preprint arXiv:1606.04460, 2016.
Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, and Alexei A Efros. Large-scale study of curiosity-driven learning. arXiv preprint arXiv:1808.04355, 2018a.
Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random network distillation. arXiv preprint arXiv:1810.12894, 2018b.
Edoardo Conti, Vashisht Madhavan, Felipe Petroski Such, Joel Lehman, Kenneth Stanley, and Jeff Clune. Improving exploration in evolution strategies for deep reinforcement learning via a popu- lation of novelty-seeking agents. In Advances in Neural Information Processing Systems, 2018.
Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. arXiv preprint arXiv:1802.06070, 2018.
Justin Fu, John Co-Reyes, and Sergey Levine. Ex2: Exploration with exemplar models for deep reinforcement learning. In Advances in Neural Information Processing Systems, pp. 2577â2587, 2017.
David Held, Xinyang Geng, Carlos Florensa, and Pieter Abbeel. Automatic goal generation for reinforcement learning agents. arXiv preprint arXiv:1705.06366, 2017.
Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. Vime: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pp. 1109â1117, 2016.
MichaÅ Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Ja´skowski. Viz- doom: A doom-based ai research platform for visual reinforcement learning. In Computational Intelligence and Games (CIG), 2016 IEEE Conference on, pp. 1â8. IEEE, 2016.
Joel Lehman and Kenneth O Stanley. Abandoning objectives: Evolution through the search for novelty alone. Evolutionary computation, 2011.
Georg Ostrovski, Marc G Bellemare, Aaron van den Oord, and R´emi Munos. Count-based explo- ration with neural density models. arXiv preprint arXiv:1703.01310, 2017.
Pierre-Yves Oudeyer and Frederic Kaplan. What is intrinsic motivation? a typology of computa- tional approaches. Frontiers in neurorobotics, 1:6, 2009.
11
Published as a conference paper at ICLR 2019
Pierre-Yves Oudeyer, Frederic Kaplan, and Verena V Hafner. Intrinsic motivation systems for au- tonomous mental development. IEEE transactions on evolutionary computation, 11(2):265â286, 2007.
Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning (ICML), volume 2017, 2017.
Alexandre P´er´e, S´ebastien Forestier, Olivier Sigaud, and Pierre-Yves Oudeyer. Unsupervised learn- ing of goal spaces for intrinsically motivated goal exploration. arXiv preprint arXiv:1803.00781, 2018.
Alexander Pritzel, Benigno Uria, Sriram Srinivasan, Adria Puigdomenech, Oriol Vinyals, Demis arXiv preprint Hassabis, Daan Wierstra, and Charles Blundell. Neural episodic control. arXiv:1703.01988, 2017.
Nikolay Savinov, Alexey Dosovitskiy, and Vladlen Koltun. Semi-parametric topological memory for navigation. arXiv preprint arXiv:1803.00653, 2018.
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High- arXiv preprint dimensional continuous control using generalized advantage estimation. arXiv:1506.02438, 2015.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, and Sergey Levine. Time-contrastive networks: Self-supervised learning from video. arXiv preprint arXiv:1704.06888, 2017.
Bradly C Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.
Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, OpenAI Xi Chen, Yan Duan, John Schul- man, Filip DeTurck, and Pieter Abbeel. # exploration: A study of count-based exploration for deep reinforcement learning. In Advances in Neural Information Processing Systems, pp. 2753â 2762, 2017.
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, 2012.
Sergey Zagoruyko and Nikos Komodakis. Learning to compare image patches via convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pp. 4353â4361, 2015.
12
Published as a conference paper at ICLR 2019
# SUPPLEMENTARY MATERIAL
The supplementary material is organized as follows. First, we describe the MuJoCo locomotion experiments. Then we provide training details for R-network. After that, we list hyperparameter values and the details of hyperparameter search for all methods. Then we show experimental results which suggest that R-network can generalize between environments: we transfer one general R- network from all available DMLab30 levels to our tasks of interest and also transfer R-networks between single environments. After that, we present the results from a stability/ablation study which suggests our method is stable with respect to its most important hyperparameters and the components we used in the method are actually necessary for its performance (and measure their inï¬uence). Then we demonstrate the robustness of our method to the environments where every state has a stochastic next state. After that, we discuss computational considerations for our method. Finally, we provide the training curves for the âDense 2â task in the main text.
S1 MuJoCo ANT LOCOMOTION OUT OF FIRST-PERSON-VIEW CURIOSITY
Equipped with our curiosity module, a MuJoCo ant has learned4 to move out of curiosity based on the ï¬rst-person view5.
First, let us describe the setup:
⢠Environment: the standard MuJoCo environment is a plane with a uniform or repetitive texture on it â nothing to be visually curious about. To ï¬x that, we tiled the 400 à 400 ï¬oor into squares of size 4 à 4. Each tile is assigned a random texture from a set of 190 textures at the beginning of every episode. The ant is initialized at a random location in the 200 à 200 central square of the ï¬oor. The episode lasts for 1000 steps (no action repeat is used). If the z-coordinate of the center of mass of the ant is above 1.0 or below 0.2 â the episode ends prematurely (standard termination condition).
⢠Observation space: for computing the curiosity reward, we only use a ï¬rst-person view camera mounted on the ant (that way we can use the same architecture of our curiosity module as in VizDoom and DMLab). For policy, we use the standard body features from Ant-v2 in gym-mujoco6 (joint angles, velocities, etc.).
Action space: standard continuous space from Ant-v2 in gym-mujoco.
Basic RL solver: PPO (same as in the main text of the paper).
⢠Baselines: PPO on task reward, PPO on task reward plus constant reward 1 at every step as a trivial curiosity bonus (which we denote PPO+1, it optimizes for longer survival).
Second, we present quantitative results for the setting with no task reward after 10M training steps in Table S1 (the ï¬rst row). Our method outperforms the baselines. As seen in the videos7, PPO (random policy) dies quickly, PPO+1 survives for longer but does not move much and our method moves around the environment.
Additionally, we performed an experiment with an extremely sparse task reward â which we call âEscape Circleâ. The reward is given as follows: 0 reward inside the circle of radius 10, and starting from 10, we give a one-time reward of 1 every time an agent goes through a concentric circle of radius 10 + 0.5k (for integer k ⥠0). The results at 10M training steps are shown in Table S1 (the second row). Our method signiï¬cantly outperforms the baselines (better than the best baseline by a factor of 10).
Finally, let us discuss the relation to some other works in the ï¬eld of learning locomotion from intrinsic reward. The closest work in terms of task setup is the concurrent work (Burda et al., 2018a). The authors demonstrate slow motion8 of the ant learned from pixel-based curiosity only.
4Behaviour learned by our method, third-person view: https://youtu.be/OYF9UcnEbQA 5Behaviour learned by our method, ï¬rst-person view: https://youtu.be/klpDUdkv03k 6https://gym.openai.com/envs/Ant-v2/ 7https://sites.google.com/view/episodic-curiosity 8https://youtu.be/l1FqtAHfJLI?t=90
13
Published as a conference paper at ICLR 2019
Other works use state features (joint angles, velocities, etc.) for formulating intrinsic reward, not pixels â which is a different setup. One work in this direction is the concurrent work (Eysenbach et al., 2018) â which also contains a good overview of the literature on intrinsic reward from state features.
Table S1: Learning locomotion for MuJoCo Ant. For âNo rewardâ, the task reward is 0 (so plain PPO is a random policy), and Grid Oracle rewards are reported (with cell size 5). Results are averaged over 30 random seeds for âNo rewardâ and over 10 random seeds for âEscape Circleâ. No seed tuning is performed.
Task PPO PPO+1 No Reward Escape Circle 1.4 ± 0.02 0.59 ± 0.54 1.7 ± 0.06 0.45 ± 0.39 5.0 ± 0.27 6.53 ± 3.57
# S2 REACHABILITY NETWORK TRAINING DETAILS
For training R-network, we use mini-batches of 64 observation pairs (matched within episodes). The training is run for 50K mini-batch iterations for VizDoom and 200K mini-batch iterations for DMLab. At the beginning of every pass through the buffer, we re-shufï¬e it. We use Adam optimizer with learning rate 10â4. The R-network uses a siamese architecture with two branches (see Figure 2 in the main text), each branch is Resnet-18 with 512 outputs, with a fully-connected network applied to the concatenated output of the branches. The fully-connected network has four hidden layers with 512 units, batch-normalization and ReLU is applied after each layer besides the last one, which is a softmax layer. Observations are RGB-images with resolution 160 à 120 pixels.
For online training of the R-network, we collect the experience and perform training every 720K 4- repeated environment steps. Every time the experience is collected, we make 10 epochs of training on this experience. Before every epoch, the data is shufï¬ed.
# S3 HYPERPARAMETERS
The hyperparameters of different methods are given in Table S2 for VizDoom environment, in Ta- ble S3 for DMLab environment, and in Tables S4, S5 for MuJoCo Ant environment. The hyperpa- rameters for DMLab are tuned on the âSparseâ environment for all methods â because all methods work reasonably on this environment (it is unfair to tune a method on an environment where it fails and also unfair to tune different methods on different environments). We use the PPO algorithm from the open-source implementation9. For implementation convenience, we scale both the bonus and the task reward (with a single balancing coefï¬cient it would not be possible to turn off one of those rewards).
Table S2: Hyper-parameters used for VizDoom environment.
PPO PPO + ICM PPO + EC Learning rate PPO entropy coefï¬cient Task reward scale Curiosity bonus scale α ICM forward inverse ratio ICM curiosity loss strength EC memory size EC reward shift β EC novelty threshold bnovelty EC aggregation function F 0.00025 0.01 5 0 - - - - - - 0.00025 0.01 5 0.01 0.2 10 - - - - 0.00025 0.01 5 1 - - 200 0.5 0 percentile-90
9 https://github.com/openai/baselines
14
Published as a conference paper at ICLR 2019
Table S3: Hyper-parameters used for DMLab environment.
PPO PPO + ICM PPO + Grid Oracle PPO + EC Learning rate PPO entropy coefï¬cient Task reward scale Curiosity bonus scale α Grid Oracle cell size ICM forward inverse ratio ICM curiosity loss strength EC memory size EC reward shift β EC novelty threshold bnovelty EC aggregation function F 0.00019 0.0011 1 0 - - - - - - - 0.00025 0.0042 1 0.55 - 0.96 64 - - - - 0.00025 0.0066 1 0.052 30 - - - - - - 0.00025 0.0021 1 0.030 - - - 200 0.5 0 percentile-90
Table S4: Hyper-parameters used for MuJoCo Ant âNo Rewardâ environment. For the PPO+1 baseline, the curiosity reward is substituted by +1 (optimizes for survival). The curiosity bonus scale is applied to this reward.
PPO PPO+1 PPO + EC Learning rate PPO entropy coefï¬cient Task reward scale Curiosity bonus scale α EC memory size EC reward shift β EC novelty threshold bnovelty EC aggregation function F 0.0003 8e-6 0 0 - - - - 0.00007 0.0001 0 1 - - - - 0.00007 0.00002 0 1 1000 1 0 10th largest
Table S5: Hyper-parameters used for MuJoCo Ant âEscape Circleâ environment. For the PPO+1 baseline, the curiosity reward is substituted by +1 (optimizes for survival). The curiosity bonus scale is applied to this reward.
PPO PPO+1 PPO + EC Learning rate PPO entropy coefï¬cient Task reward scale Curiosity bonus scale α EC memory size EC reward shift β EC novelty threshold bnovelty EC aggregation function F 0.0001 1.21e-06 1 0 - - - - 0.0001 1.43e-06 1 0.85 - - - - 4.64e-05 1.78e-06 1 0.25 1000 1 0 10th largest
# S4 R-NETWORK GENERALIZATION STUDY
One of the promises of our approach is its potential ability to generalize between tasks. In this section we verify if this promise holds.
S4.1 TRAINING R-NETWORK ON ALL DMLab-30 TASKS
Could we train a universal R-network for all available levels â and then use this network for all our tasks of interest? Since different games have different dynamics models, the notion of closely
15
Published as a conference paper at ICLR 2019
reachable or far observations also changes from game to game. Can R-network successfully handle this variability? Table S6 suggests that using a universal R-network slightly hurts the performance compared to using a specialized R-network trained speciï¬cally for the task. However, it still deï¬- nitely helps to get higher reward compared to using the plain PPO. The R-network is trained using 10M environment interactions equally split across all 30 DMLab-30 tasks.
Table S6: Reward on the tasks âNo Rewardâ and âVery Sparseâ using a universal R-network. Two baselines (PPO and PPO + EC with a specialized R-network) are also provided.
Method No Reward Very Sparse PPO PPO + EC with specialized R-network 191 ± 12 475 ± 8 8.6 ± 4.3 24.7 ± 2.2 PPO + EC with universal R-network 348 ± 8 19.3 ± 1.0
S4.2 TRAINING R-NETWORK ON ONE LEVEL AND TESTING ON ANOTHER
This experiment is similar to the previous one but in a sense is more extreme. Instead of training on all levels (including the levels of interest and other unrelated levels), can we train R-network on just one task and use if for a different task? Table S7 suggests we can obtain reasonable performance by transferring the R-network between similar enough environments. The performance is unsatis- factory only in one case (using the R-network trained on âDense 2â). Our hypothesis is that the characteristics of the environments are sufï¬ciently different in that case: single room versus maze, static textures on the walls versus changing textures.
Table S7: Reward on the environments âNo Rewardâ and âVery Sparseâ (columns) when the R- network is trained on different environments (rows). We provide a result with a matching R-network for reference (bottom).
R-network training environment No Reward Very Sparse Dense 1 Dense 2 Sparse + Doors 320 ± 5 43 ± 2 376 ± 7 18.5 ± 1.4 0.8 ± 0.5 16.2 ± 0.7 Matching environment 475 ± 8 24.7 ± 2.2
# S5 STABILITY/ABLATION STUDY
The experiments are done both in âNo Rewardâ and âVery Sparseâ environments. The âNo Rewardâ environment is useful to avoid the situations where task reward would hide important behavioural differences between different ï¬avors of our method (this âhidingâ effect can be easily observed for different methods comparison in the dense reward tasks â but the inï¬uence of task reward still remains even in sparser cases). As in the main text, for the âNo Rewardâ task we report the Grid Oracle reward as a discrete approximation to the area covered by the agent trajectories.
S5.1 POSITIVE EXAMPLE THRESHOLD IN R-NETWORK TRAINING
Training the R-network requires a threshold k to separate negative from positive pairs. The trained policy implicitly depends on this threshold. Ideally, the policy performance should not be too sensi- tive to this hyper-parameter. We conduct a study where the threshold is varied from 2 to 10 actions (as in all experiments before, each action is repeated 4 times). Table S8 shows that the EC perfor- mance is reasonably robust to the choice of this threshold.
16
Published as a conference paper at ICLR 2019
Table S8: Reward in the âNo Rewardâ and âVery Sparseâ tasks using different positive example thresholds k when training the R-network.
Threshold k No Reward Very Sparse 2 3 4 5 7 10 378 ± 18 395 ± 10 412 ± 8 475 ± 8 451 ± 4 455 ± 7 28.3 ± 1.6 20.9 ± 1.6 31.1 ± 1.2 24.7 ± 2.2 23.6 ± 1.0 20.8 ± 0.8
S5.2 MEMORY SIZE IN EC MODULE
The EC-module relies on an explicit memory buffer to store the embeddings of past observations and deï¬ne novelty. One legitimate question is to study the impact of the size of this memory buffer on the performance of the EC-module. As observed in table S9, the memory size has little impact on the performance.
Table S9: Reward for different values of the memory size for the tasks âNo Rewardâ and âVery Sparseâ.
Memory size No Reward Very Sparse 100 200 350 500 447 ± 6 475 ± 8 459 ± 6 452 ± 6 19.4 ± 1.9 24.7 ± 2.2 23.5 ± 1.4 23.8 ± 2.0
S5.3 ENVIRONMENT INTERACTION BUDGET FOR TRAINING R-NETWORK
The sample complexity of our EC method includes two parts: the sample complexity to train the R- network and the sample complexity of the policy training. In the worst case â when the R-network does not generalize across environments â the R-network has to be trained for each environment and the total sample complexity is then the sum of the previous two sample complexities. It is then crucial to see how many steps are needed to train R-network such that it can capture the notion of reachability. R-network trained using a number of environment steps as low as 1M already gives good performance, see Table S10.
Table S10: Reward of the policy trained on the âNo Rewardâ and âVery Sparseâ tasks with an R-network trained using a varying number of environment interactions (from 100K to 5M).
Interactions No Reward Very Sparse 100K 300K 1M 2.5M 5M 357 ± 18 335 ± 9 383 ± 13 475 ± 8 416 ± 5 12.2 ± 1.3 16.2 ± 0.7 18.6 ± 0.9 24.7 ± 2.2 20.7 ± 1.4
S5.4 The R-network is composed of an Embedding network and a Comparator network. How impor- tant is each for the ï¬nal performance of our method? To establish that, we conduct two experiments. First, we ï¬x the Embedding network at the random initialization and train only the Comparator. Sec- ond, we substitute the Comparator network applied to embeddings e1, e2 with the sigmoid function
# IMPORTANCE OF TRAINING DIFFERENT PARTS OF R-NETWORK
17
Published as a conference paper at ICLR 2019
Ï(eT 1 e2) and train only the Embedding. According to the results in Table S11, we get a reasonable performance with a random embedding: the results are still better than the plain PPO (but worse than with the complete R-network). However, without the Comparator the quality drops below the plain PPO.
This experiment leads us to two conclusions. First, training the Embedding network is desired but not necessary for our method to work. Second, using the Comparator is essential for the current architecture and it cannot be naively omitted.
Table S11: Reward on the âNo Rewardâ and âVery Sparseâ tasks using ablated versions of the R-network.
Method No Reward Very Sparse PPO PPO + EC with complete R-network 191 ± 12 475 ± 8 8.6 ± 4.3 24.7 ± 2.2 PPO + EC with random Embedding PPO + EC without Comparator network 392 ± 12 48 ± 3 16.2 ± 1.4 5.8 ± 2.4
S5.5
# IMPROVING R-NETWORK ARCHITECTURE BY UN-SHARING SIAMESE BRANCHES
Even though the ablation experiments in the previous section discourage us from naively omitting the Comparator network, it is not the end of the story. There are still compelling reasons to desire an architecture with a simpler Comparator:
1 e2) instead of a complex Com- parator, we could compute all reachability queries through a single matrix-vector multi- plication and a point-wise application of the sigmoid function â which is going to be signiï¬cantly faster than applying a few fully-connected layers to the concatenation of the embeddings.
⢠Potential for approximate reachability computation. With a dot-product-based Comparator, it might be possible to use hashing techniques like locality-sensitive hashing (LSH) for approximate computations of reachabilities to a large memory buffer.
We have been able to identify a simple architectural modiï¬cation which allows us to use Ï(eT 1 e2) without loss of quality. It is sufï¬cient to un-share the weights of the two siamese branches. With a complex Comparator, the validation accuracy for R-network is around 93%. With shared branches and a simple Comparator, it is reduced to 78% (which means 3 times higher error rate and leads to unsatisfactory exploration results in the previous section). However, if we un-share branches and use a simple Comparator, we re-gain the validation accuracy of 93%!
Why would un-sharing branches help? While the in-depth reasons are still to be investigated, one superï¬cial explanation could be that with un-shared branches R-network implements a strictly larger family of functions. It is notable that similar effects were observed by Zagoruyko & Komodakis (2015) â however, the difference was milder in their case (the architecture with un-shared branches is called pseudo-siamese in that work).
# S6 RANDOMIZED ENVIRONMENTS
In the main text of the paper we observed how the ï¬ring action confused the surprise-based curiosity method ICM. This was a manifestation of the hardness of future prediction performed by ICM. Importantly, there could be more than one reason why future prediction is hard (as observed in the concurrent work (Burda et al., 2018b)): partial observability of the environment, insufï¬ciently rich future prediction model or randomized transitions in the environment. Since our own method EC relies on comparisons to the past instead of predictions of the future, one could expect it to be more robust to those factors (intuitively, comparison to the past is an easier problem). The goal of this section is to provide additional evidence for that.
We are going to experiment with one source of future prediction errors which we have used in the thought experiment from the introduction: environment stochasticity. In particular, we analyze how
18
Published as a conference paper at ICLR 2019
(a) (b)
Figure S1: Examples of randomized environments: (a) Image Action, (b) Noise.
Table S12: Reward in the randomized-TV versions of DMLab task âSparseâ (mean ± std) for all compared methods. Higher is better. âOriginalâ stands for the non-randomized standard version of the task which we used in the main text. âECOâ stands for the online version of our method, which trains R-network and the policy at the same time. The Grid Oracle method is given for reference â it uses privileged information unavailable to other methods. Results are averaged over 30 random seeds. No seed tuning is performed.
Method Image Action Noise Noise Action Original 3 10 30 PPO 11.5 ± 2.1 10.9 ± 1.8 8.5 ± 1.5 11.6 ± 1.9 9.8 ± 1.5 27.0 ± 5.1 PPO + ICM 10.0 ± 1.2 10.5 ± 1.2 6.9 ± 1.0 7.7 ± 1.1 7.6 ± 1.1 23.8 ± 2.8 PPO + EC (ours) 19.8 ± 0.7 15.3 ± 0.4 13.1 ± 0.3 18.7 ± 0.8 14.8 ± 0.4 26.2 ± 1.9 PPO + ECO (ours) 24.3 ± 2.1 26.6 ± 2.8 18.5 ± 0.6 28.2 ± 2.4 18.9 ± 1.9 41.6 ± 1.7 PPO + Grid Oracle 37.7 ± 0.7 37.1 ± 0.7 37.4 ± 0.7 38.8 ± 0.8 39.3 ± 0.8 56.7 ± 1.3
different methods behave when all the states in the environment provide stochastic next state. For that, we create versions of the DMLab environments âSparseâ and âVery Sparseâ with added strong source of stochasticity: randomized TV on the head-on display of the agent. It is implemented as follows: the lower right quadrant of the agentâs ï¬rst person view is occupied with random images. We try a few settings:
⢠âImage Action kâ: there are k images of animals retrieved from the internet, an agent has a special action which changes an image on the TV screen to a random one from this set. An example is shown in Figure S1(a).
⢠âNoiseâ: at every step a different noise pattern is shown on the TV screen, independently from agentâs actions. The noise is sampled uniformly from [0, 255] independently for each pixel. An example is shown in Figure S1(b).
⢠âNoise Actionâ: same as âNoiseâ, but the noise pattern only changes if the agent uses a special action.
The results at 20M 4-repeated environment steps are shown in Tables S12, S13. In almost all cases, the performance of all methods deteriorates because of any source of stochasticity. However, our method turns out to be reasonably robust to all sources of stochasticity and still outperforms the base- lines in all settings. The videos10,11 demonstrate that our method still explores the maze reasonably well.
# S7 COMPUTATIONAL CONSIDERATIONS
The most computationally intensive parts of our algorithm are the memory reachability queries. Reachabilities to past memories are computed in parallel via mini-batching. We have shown the
# 10Image Action: https://youtu.be/UhF1MmusIU4 11Noise: https://youtu.be/4B8VkPA2Mdw
19
Published as a conference paper at ICLR 2019
Table S13: Reward in the randomized-TV versions of DMLab task âVery Sparseâ (mean ± std) for all compared methods. Higher is better. âOriginalâ stands for the non-randomized standard version of the task which we used in the main text. âECOâ stands for the online version of our method, which trains R-network and the policy at the same time. The Grid Oracle method is given for reference â it uses privileged information unavailable to other methods. Results are averaged over 30 random seeds. No seed tuning is performed.
Method Image Action Noise Noise Action Original 3 10 30 PPO 6.5 ± 1.6 8.3 ± 1.8 6.3 ± 1.8 8.7 ± 1.9 6.1 ± 1.8 8.6 ± 4.3 PPO + ICM 3.8 ± 0.8 4.7 ± 0.9 4.9 ± 0.7 6.0 ± 1.3 5.7 ± 1.4 11.2 ± 3.9 PPO + EC (ours) 13.8 ± 0.5 10.2 ± 0.8 7.4 ± 0.5 13.4 ± 0.6 11.3 ± 0.4 24.7 ± 2.2 PPO + ECO (ours) 20.5 ± 1.3 17.8 ± 0.8 16.8 ± 1.4 26.0 ± 1.6 12.5 ± 1.3 40.5 ± 1.1 PPO + Grid Oracle 35.4 ± 0.6 35.9 ± 0.6 36.3 ± 0.7 35.5 ± 0.6 35.4 ± 0.8 54.3 ± 1.2
algorithm to work reasonably fast with a memory size of 200. For orders of magnitude larger memory sizes, one would need to better parallelize reachability computations â which should in principle be possible. Memory consumption for the stored memories is very modest (400 KB), as we only store 200 of 512-ï¬oat-embeddings, not the observations.
As for the speed comparison between different methods, PPO + ICM is 1.09x slower than PPO and PPO + EC (our method) is 1.84x slower than PPO. In terms of the number of parameters, R-network brings 13M trainable variables, while PPO alone was 1.7M and PPO + ICM was 2M. That said, there was almost no effort spent on optimizing the pipeline in terms of speed/parameters, so it is likely easy to make improvements in this respect. It is quite likely that a resource-consuming Resnet-18 is not needed for the R-network â a much simpler model may work as well. In this paper, we followed the setup for the R-network from prior work (Savinov et al., 2018) because it was shown to perform well, but there is no evidence that this setup is necessary.
S8 ADDITIONAL DMLab TRAINING CURVES
w 10.0 = AG = 2 50 PPO fa AG PPO + ICM 8 PPO + Grid Oracle i 0.0 PPO + EC (ours) 0 5 10 15 20 Number of training steps (in millions) PPO + ECO (ours) -2.5
Figure S2: Reward as a function of training step for the DMLab task âDense 2â. Higher is better. We shift the curves for our method by the num- ber of environment steps used to train R-network â so the comparison between different methods is fair. We run every method 30 times and show 5 randomly selected runs. No seed tuning is per- formed.
We show additional training curves from the main text experimental section in Figure S2.
20 | {
"id": "1606.04460"
} |
1810.05723 | Post-training 4-bit quantization of convolution networks for rapid-deployment | Convolutional neural networks require significant memory bandwidth and
storage for intermediate computations, apart from substantial computing
resources. Neural network quantization has significant benefits in reducing the
amount of intermediate results, but it often requires the full datasets and
time-consuming fine tuning to recover the accuracy lost after quantization.
This paper introduces the first practical 4-bit post training quantization
approach: it does not involve training the quantized model (fine-tuning), nor
it requires the availability of the full dataset. We target the quantization of
both activations and weights and suggest three complementary methods for
minimizing quantization error at the tensor level, two of whom obtain a
closed-form analytical solution. Combining these methods, our approach achieves
accuracy that is just a few percents less the state-of-the-art baseline across
a wide range of convolutional models. The source code to replicate all
experiments is available on GitHub:
\url{https://github.com/submission2019/cnn-quantization}. | http://arxiv.org/pdf/1810.05723 | Ron Banner, Yury Nahshan, Elad Hoffer, Daniel Soudry | cs.CV | null | null | cs.CV | 20181002 | 20190529 | 2019
arXiv:1810.05723v3 [cs.CV] 29 May
# Post training 4-bit quantization of convolutional networks for rapid-deployment
Ron Banner! , Yury Nahshan! , and Daniel Soudry?
Intel â Artificial Intelligence Products Group (AIPG)! Technion â Israel Institute of Technologyâ
{ron.banner, yury.nahshan}@intel.com daniel.soudry @ gmail.com
# Abstract
Convolutional neural networks require significant memory bandwidth and stor- age for intermediate computations, apart from substantial computing resources. Neural network quantization has significant benefits in reducing the amount of intermediate results, but it often requires the full datasets and time-consuming fine tuning to recover the accuracy lost after quantization. This paper intro- duces the first practical 4-bit post training quantization approach: it does not involve training the quantized model (fine-tuning), nor it requires the avail- ability of the full dataset. We target the quantization of both activations and weights and suggest three complementary methods for minimizing quantization error at the tensor level, two of whom obtain a closed-form analytical solution. Combining these methods, our approach achieves accuracy that is just a few percents less the state-of-the-art baseline across a wide range of convolutional models. The source code to replicate all experiments is available on GitHub: https://github.com/submission2019/cnn- quantization.
# 1 Introduction
A significant drawback of deep learning models is their computational costs. Low precision is one of the key techniques being actively studied recently to overcome the problem. With hardware support, low precision training and inference can compute more operations per second, reduce memory bandwidth and power consumption, and allow larger networks to fit into a device.
The majority of literature on neural network quantization involves some sort of training either from scratch (Hubara et al., 2016) or as a fine-tuning step from a pre-trained floating point model (Han et al., 2015). Training is a powerful method to compensate for modelâs accuracy loss due to quantization. Yet, it is not always applicable in real-world scenarios since it requires the full-size dataset, which is often unavailable from reasons such as privacy, proprietary or when using an off-the-shelf pre-trained model for which data is no longer accessible. Training is also time-consuming, requiring very long periods of optimization as well as skilled manpower and computational resources.
Consequently, it is often desirable to reduce the model size by quantizing weights and activations post-training, without the need to re-train/fine-tune the model. These methods, commonly referred to as post-training quantization, are simple to use and allow for quantization with limited data. At 8-bit precision, they provide close to floating point accuracy in several popular models, e.g., ResNet, VGG, and AlexNet. Their importance can be seen from the recent industrial publications, focusing on quantization methods that avoid re-training (Goncharenko et al., 2018; Choukroun et al., 2019; Meller et al., 2019; Migacz, 2017).
Preprint. Under review.
Unfortunately, post-training quantization below 8 bits usually incurs significant accuracy degradation (Krishnamoorthi, 2018; Jacob et al., 2018). This paper focuses on CNN post-training quantization to 4-bit representation. In the absence of a training set, our methods aim at minimizing the local error introduced during the quantization process (e.g., round-off errors). To that end, we often adopt knowledge about the statistical characterization of neural network distributions, which tend to have a bell-curved distribution around the mean. This enables to design efficient quantization schemes that minimize the mean-squared quantization error at the tensor level, avoiding the need for re-training.
# Our contributions
Our paper suggests three new contributions for post-training quantization:
1. Analytical Clipping for Integer Quantization (ACIQ): We suggest to limit (henceforth, clip) the range of activation values within the tensor. While this introduces distortion to the original tensor, it reduces the rounding error in the part of the distribution containing most of the information. Our method approximates the optimal clipping value analytically from the distribution of the tensor by minimizing the mean-square-error measure. This analytical threshold is simple to use during run-time and can easily be integrated with other techniques for quantization.
2. Per-channel bit allocation: We introduce a bit allocation policy to determine the optimal bit-width for each channel. Given a constraint on the average per-channel bit-width, our goal is to allocate for each channel the desired bit-width representation so that overall mean-square-error is minimized. We solve this problem analytically and show that by taking certain assumptions about the input distribution, the optimal quantization step size of each channel is proportional to the 2-power of its range.
3. Bias-correction: We observe an inherent bias in the mean and the variance of the weight values following their quantization. We suggest a simple method to compensate for this bias.
We use ACIQ for activation quantization and bias-correction for quantizing weights. Our per-channel bit allocation method is used for quantizing both weights and activations (we explain the reasons for this configuration in Section 5). These methods are evaluated on six ImageNet models. ACIQ and bias-correction improve, on average, the 4-bit baselines by 3.2% and 6.0%, respectively. Per-channel bit allocation improves the baselines, on average, by 2.85% for activation quantization and 6.3% for weight quantization. When the three methods are used in combination to quantize both weights and activations, most of the degradation is restored without re-training, as can be seen in Figure 1.
Mmm FP32 a Our) = baseline foe) o Validation Accuracy Wpunnad ws oOo co 0 C0 2 ,§ | Oo Se, oo
Figure 1: Top-1 accuracy of floating-point models converted directly to 4-bit weights and activations without retraining. For some models, the combination of three methods can reduce the quantization induced degradation enough as to make retraining unnecessary, enabling for the first time a rapid deployment of 4-bit models (detailed numerical results appear in Table 1).
# Previous works
Perhaps the most relevant previous work that relates to our clipping study (ACIQ) is due to (Migacz, 2017), who also proposes to clip activations post-training. Migacz (2017) suggests a time-consuming iterative method to search for a suitable clipping threshold based on the Kullback-Leibler Divergence
(KLD) measure. This requires collecting statistics for activation values before deploying the model, either during training or by running a few calibration batches on FP32 model. It has a drawback of encountering values at runtime not obeying the previously observed statistics.
Compared to the KLD method, ACIQ avoids searching for candidate threshold values to identify the optimal clipping, which allows the clipping threshold to be adjusted dynamically at runtime. In addition, we show in the Appendix that our analytical clipping approach outperforms KLD in almost all models for 4-bit quantization even when it uses only statistical information (i.e., not tensor values observed at runtime). Zhao et al. (2019) compared ACIQ (from an earlier version of this manuscript) to KLD for higher bit-width of 5 to 8 bits. It was found that ACIQ typically outperforms KLD for weight clipping and is more or less the same for activation clipping.
Several new post-training quantization schemes have recently been suggested to handle statistical outliers. Meller et al. (2019) suggests weight factorization that arranges the network to be more tolerant of quantization by equalizing channels and removing outliers. A similar approach has recently been suggested by (Zhao et al., 2019), who suggests duplicating channels containing outliers and halving their values to move outliers toward the center of the distribution without changing network functionality. Unlike our method that focuses on 4-bit quantization, the focus of these schemes was post-training quantization for larger bitwidths.
# 2 ACIQ: Analytical Clipping for Integer Quantization
In the following, we derive a generic expression for the expected quantization noise as a function of clipping value for either Gaussian or Laplace distributions. In the Appendix, we consider the case where convolutions and rectified linear units (ReLU) are fused to avoid noise accumulation, resulting in folded-Gaussian and Laplace distributions.
Let X be a high precision tensor-valued random variable, with a probability density function f(x). Without loss of generality, we assume a prepossessing step has been made so that the average value in the tensor zero, i.e., E(X) = p41 = 0 (we do not lose generality since we can always subtract and add this mean). Assuming bit-width M/, we would like to quantize the values in the tensor uniformly to 2⢠discrete values.
Commonly (e.g., in GEMMLOWP (Jacob et al., 2017)), integer tensors are uniformly quantized between the tensor maximal and minimal values. In the following, we show that this is suboptimal, and suggest a model where the tensor values are clipped in the range [âa, a] to reduce quantization noise. For any x ⬠IR, we define the clipping function clip(x, a) as follows
x if |z| <a sign(x)-a_ if |a| >a clip(a, a) = (1)
Denoting by a the clipping value, the range [a, âa] is partitioned to 2â equal quantization regions. Hence, the quantization step A between two adjacent quantized values is established as follows:
2a A= or (2)
Our model assumes values are rounded to the midpoint of the region (bin) i.e., for every index i ⬠[0,2 â 1) all values that fall in [âa + i- A, âa + (i + 1) - A] are rounded to the midpoint qg = âa+ (2+ 1/4, as illustrated in Figure 2 left. Then, the expected mean-square-error between X and its quantized version Q(X) can be written as follows:
E((X â Q(X))"] =
-a 2⢠-at(it1)A (3) f(x): (+a) vas [ f(x) +(x -â 4) rae+ | f(x) (@ âa)?dx _ atid
Eq. 3 is composed of three parts. The first and last terms quantify the contribution of clip(z, a) to the expected mean-square-error. Note that for symmetrical distributions around zero (e.g., Gaussian N(0, 07) or Laplace(0, b)) these two terms are equal and their sum can therefore be evaluated by multiplying any of the terms by 2. The second term corresponds to the expected mean-square-error when the range [âa, a] is quantized uniformly to 2â discrete levels. This term corresponds to the
â analysis 4{| simulation 2bi' § fr 03 S $ pa 2 c S o = 1 Nn QR 3 q2 tt ot ae eneeeneees hâ x oF r = 5 a is âa âatA âa+2A âat(2M-1)A a Clipping Value
Figure 2: left: An activation distribution quantized uniformly in the range [âa, a] with 2 equal quantization intervals (bins) right: Expected mean-square-error as a function of clipping value for different quantization levels (Laplace (j = 0 and b = 1)). Analytical results, stated by Eq. 5, are ina good agreement with simulations, which where obtained by clipping and quantizing 10,000 values, generated from a Laplace distribution.
quantization noise introduced when high precision values in the range [âa, a] are rounded to the nearest discrete value.
Quantization noise: We approximate the density function f by the construction of a piece-wise linear function whose segment breakpoints are points in f, as illustrated on the right side of figure 2. In the appendix we use this construction to show that quantization noise satisfies the following:
ype 2.03 2Mâ1 2 1 a ( 2 x) +(x â dx = : 4 Jatin Po) (@~ a)" de = 3 oen7 > Qa 3. 22M @
Clipping noise: In the appendix we show that clipping noise for the case of Laplace(0, b) satisfies the following:
[tee aytae= f* f(e)-(e-a)Par= 08
We can finally state Eq. 3 for the laplace case as follows.
2M_y P a2 2-a3 . a - SS? f(a) =2-P-e â + 39a (5) i=0 3 E[(X â Q(X)))] &2-B eo F +
On the right side of figure 2, we introduce the mean-square-error as a function of clipping value for various bit widths.
Finally, to find the optimal clipping value a for which mean-square-error is minimized, the corre- sponding derivative with respect to a is set equal to zero as follows:
JE((X â Q(X)" 2a Oa 3.22M â2beâ* =0 (6)
Solving Eq. 6 numerically for bit-widths M = 2,3,4 results with optimal clipping values of a* = 2.83b, 3.89, 5.03b, respectively. In practice, ACIQ uses a* to optimally clip values by estimating the Laplace parameter b = E(|X â E(X)|) from input distribution X, and multiplying by the appropriate constant (e.g., 5.03 for 4 bits).
In the appendix, we provide similar analysis for the Gaussian case. We also compare the validation accuracy against the standard GEMMLOWP approach (Jacob et al., 2017) and demonstrate significant improvements in all studied models for 3-bit activation quantization.
# 3 Per-channel bit-allocation
With classical per-channel quantization, we have a dedicated scale and offset for each channel. Here we take a further step and consider the case where different channels have different numbers of bits for
precision. For example, instead of restricting all channel values to have the same 4-bit representation, we allow some of the channels to have higher bit-width while limiting other channels to have a lower bit-width. The only requirement we have is that the total number of bits written to or read from memory remains unchanged (1.e., keep the average per-channel bit-width at 4).
Given a layer with n channels, we formulate the problem as an optimization problem aiming to find a solution that allocates a quota of B quantization intervals (bins) to all different channels. Limiting the number of bins B translates into a constraint on the number of bits that one needs to write to memory. Our goal is to minimize the overall layer quantization noise in terms of mean-square-error.
Assuming channel i has values in the range [âa;, a] quantized to M; bits of precision, Eq. 5 provides the quantization noise in terms of expected mean-square-error. We employ Eq. 5 to introduce a Lagrangian with a multiplier \ to enforce the requirement on the number of bins as follows:
L(Mo, My, ...,Myd) = > (: Pee OE tz Soir sau) + (Ee ») (7) i
The first term in the Lagrangian is the total layer quantization noise (i.e., the sum of mean-square- errors over all channels as defined by Eq. 5). The second term captures the quota constraint on the total number of allowed bins B. By setting to zero the partial derivative of the Lagrangian function L(-) with respect to M/;, we obtain for each channel index i ⬠[0,7 â 1] the following equation:
OL(Mo, My, ..., Mn, A) 2In2-a? OM; 3. 22M +. 2M â 0 (8)
By setting to zero the partial derivative of the Lagrangian function £(-) with respect to we take into account the constraint on the number of allowed bins.
OL(Mo, Mi, ..., Mn, A) M â 2â On DS "-B=0 (9) a
Considering Eq. 8 and Eq. 9, we have a separate equation for each channel 7 ⬠[0,7 â 1] and an additional equation for the the Lagrangian multiplier . In the Appendix, we show that the solution to this system of equations results with the following simple rule for optimal bin allocation for each channel 7:
a Ye Bea was âB (10) csity
By taking the logarithm of both sides, we translate Eq. 10 into bit width assignment M; for each channel i. Since M; is an integer it includes a round operation.
M,= cal a ») rab) Via?
Figure 3 illustrates the mean-square-error in a synthetic experiment including two channels i, j, each having different values for a;,a;. Results of the experiment show that optimal allocations determined by Eq. 10 are in a good agreement with the best allocations found by the experiment. Finally, the validation accuracy of per-channel bit-allocation is compared in the appendix when activations are quantized on average to 3-bit precision. Unlike the baseline method that assigns a precision of exactly 3 bits to each channel in a layer, the per-channel bit-allocation method does not change the total bit rate to memory but significantly improve validation accuracy in all models.
# 4 Bias-Correction
We observe an inherent bias in the mean and the variance of the weight values following their quantization. Formally, denoting by W. C W the weights of channel c and us quantized version by W4, we observe that E(W.) 4 E(W£) and ||W. â E(W.) ||2 4 ||W2 â E(W2) ||2. We suggest to compensate for this quantization bias. To that end, we first evaluate somrecton constants for each channel c as follows:
He = E (We) â E(W2) g = [We=E OW) lle (12 â|W -E(We) [le
Mean Square Error 01 02 03 04 05 06 07 08 0.9 percenatge of bins allocated for channel i
Figure 3: Optimal bin-allocation in a synthetic experiment including of a pair of channels 7, 7, each consisting of 1000 values taken from (0, a?) and (0, aj). The overall bin quota for the layer is set to B = 32, equivalent in terms of memory bandwidth to the number of bins allocated for two channels at 4-bit precision. As indicated by the vertical lines in the plot, the optimal allocations (predicted by Eq. 10) coincide with the best allocations found by the experiment.
Then, we compensate for the bias in W for each channel c as follows:
wiâE (wtp), Vwew? (13)
We consider a setup where each channel has a different scale and offset (per-channel quantization). We can therefore compensate for this bias by folding for each channel c the correction terms 1. and &â¬- into the scale and offset of the channel c. In the appendix, we demonstrate the benefit of using bias-correction for 3-bit weight quantization.
# 5 Combining our quantization methods
In the previous sections, we introduce each of the quantization methods independently of the other methods. In this section, we consider their efficient integration.
# 5.1 Applicability
We use per-channel bit allocation for both weights and activations. We found no advantage in doing any kind of weight clipping. This is in line with earlier works that also report no advantage to weight clipping for larger bitwidths (Migacz, 2017; Zhao et al., 2019). Therefore, ACIQ was considered for quantizing activations only. On the other hand, bias correction could in principle be implemented for both weights and activations. Yet, unlike bias correction for weights that can be done offline before model deployment, activation bias is estimated by running input images, which might not be available for gathering statistics at post-training . As the online alternative of estimating the activation bias on the fly during run-time might be prohibitive, we considered the bias correction method only for the weights.
# 5.2 Interaction between quantization medthods
We conduct a study to investigate how each quantization method affects performance. We consider four quantization methods: (1) ACIQ; (2) Bias-correction (3) Per-channel bit-allocation for weights; (4) Per-channel bit allocation for activations. In Figure 4, we demonstrate the interaction between these methods at different quantization levels for various models. In the appendix, we report the results of an experiment on ResNet101 where all possible interactions are evaluated (16 combinations).
==) PF 32 â Naive â Bit-Alloc.(A) ââ Bit-Alloc.(A)+ Bias Corr. â Bit-Alloc.(A)+ Bias Corr.tACIQ) â= Bit-Alloc.(A)+ Bias Corr.+ACIQ+Bit-Alloc.(W) resnetl8 pesosseee = see ] resnet50O resnetl0l a f=} a i=} a f=] N B is} So N 3° N is} ° ° Validation Accuracy (%) ° Validation Accuracy (%) & Validation Accuracy (%) c= o 2 3 4 5 2 3 4 5 2 3 4 5 Number of bits Number of bits Number of bits inception v3 B o i o a f=} N 3° Validation Accuracy (%) Validation Accuracy (%) Validation Accuracy (%) = o ° u 2 3 4 2 3 4 5 2 3 4 5 Number of bits Number of bits Number of bits
Figure 4: An ablation study showing the methods work in synergy and effectively at 3-4 bit precision.
# 6 Experiments & Results
This section reports experiments on post-training quantization using six convolutional models origi- nally pre-trained on the ImageNet dataset. We consider the following baseline setup:
Per-channel-quantization of weights and activations: It is often the case where the distributions of weights and activations vary significantly between different channels. In these cases, calculating a scale-factor per channel can provide good accuracy for post-training quantization (Krishnamoorthi, 2018). The per-channel scale has shown to be important both for inference (Rastegari et al., 2016) and for training (Wu et al., 2018).
Fused ReLU: In convolution neural networks, most convolutions are followed by a rectified linear unit (ReLU), zeroing the negative values. There are many scenarios where these two operations can be fused to avoid the accumulation of quantization noise. In these settings, we can ignore the negative values and find an optimal clipping value a for the positive half space [0, a]. Fused ReLU provides a smaller dynamic range, which leads to a smaller spacing between the different quantization levels and therefore smaller roundoff error upon quantization. In the Appendix, we provide a detailed analysis for the optimal value of a.
We use the common practice to quantize the first and the last layer as well as average/max-pooling layers to 8-bit precision. Table 1 summarizes our results for 4-bit post training quantization. In the appendix we provide additional results for 3-bit quantization.
# 7 Conclusion
Learning quantization for numerical precision of 4-bits and below has long been shown to be effective (Lin et al., 2017; McKinstry et al., 2018; Zhou et al., 2016; Choi et al., 2018). However, these schemes pose major obstacles that hinder their practical use. For example, many DNN developers only provide the pre-trained networks in full precision without the training dataset from reasons such as privacy or massiveness of the data size. Consequently, quantization schemes involving training have largely been ignored by the industry. This gap led intensive research efforts by several tech giants and start-ups to improve post-training quantization: (1) Samsung (Lee et al., 2018), (2) Huawei, (Choukroun et al., 2019), (3) Hailo Technologies (Meller et al., 2019), (4) NVIDIA (Migacz, 2017). Our main findings in this paper suggest that with just a few percent accuracy degradation, retraining CNN models may be unnecessary for 4-bit quantization.
Table 1: ImageNet Top-1 validation accuracy with post-training quantization using the three methods suggested by this work. Quantizing activations (8W4A): (A) Baseline consists of per-channel quantization of activations and fused ReLU; each channel is quantized to 4-bit precision with a uniform quantization step between the maximum and minimum values of the channel (GEMMLOWP, Jacob et al. (2017)). (B) ACIQ optimally clips the values within each channel before applying quantization. (C) Per-channel bit allocation assigns to each activation channel an optimal bit-width without exceeding an average of 4 bits per channel, as determined by Eq. 11. (D) ACIQ + Per channel bit allocation quantize the activation tensors in a two stage pipeline: bit-allocation and clipping. Quantizing weights (4W8A): (A) Baseline consists of per-channel quantization of weights. (B) Bias-correction compensates for the quantization bias using Eq. 13. (C) Per-channel bit allocation assigns to each weight channel the optimal bit-width without violating the quota of allowed bits, which translates on average to 4 bits per channel (D) Bias-Corr + Per-channel bit allocation quantize the weight tensors in a three-stage pipeline: per-channel bit-allocation, quantization and bias correction to compensate for the quantization bias. Quantizing weights and activation (4W4A): Baseline consists of a combination of the above two baseline settings, i.e., (4W8A) and (8W4A). Our pipeline incorporates into the baseline all methods suggested by our work, namely, ACIQ for activation quantization, per-channel bit allocation of both weights and activations, and bias correction for weight quantization.
# Method
# VGG
# VGG-BN
# IncepV3_
# Resl8_
Res50_â_âRes101
# Quantizing activations: 8 bits weights, 4 bits activations (8W4A)
(Per channel quantization of activations + fused ReLU)
Baseline 68.8% 70.6% 710.9% 61.5% 68.3% 66.5% ACIQ 70.1% 72.0% 72.71% 66.6% 71.8% 72.6% Per-channel bit allocation 69.7% 72.6% 74.3% 65.0% 71.3% 70.8% ACIQ + Per-channel bit allocation 10.7% 72.8% 75.1% 68.0% 73.6% 75.6% Reference (FP32) 71.6% 73.4% 71.2% 69.7% 76.1% 77.3%
Quantizing weights: 4 bits weights, 8 bits activations (4W8A)
# (Per channel quantization of weights)
Baseline 10.5% 68.5% 38.4% 59.7% 72.5% 74.6% Bias-Correction 71.0% 71.7% 59.5% 67.4% 74.8% 76.3% Per-channel bit-allocation 71.0% 71.9% 61.4% 66.7% 75.0% 76.4% Bias-Corr + Per-channel bit-allocation 71.2% 72.4% 68.2% 68.3% 75.3% 76.9% Reference (FP32) 71.6% 73.4% 71.2% 69.7% 76.1% 77.3%
# Quantizing weights and activations: 4 bits weights, 4 bits activations (4W4A)
(Per channel quantization of weights & activations + fused ReLU)
Baseline 67.2% 64.5% 30.6% 51.6% 62.0% 62.6% All methods combined 10.5% 71.8% 66.4% 67.0% 73.8% 75.0% Reference (FP32) 71.6% 73.4% 71.2% 69.7% 76.1% 77.3%
# References
Choi, Jungwook, Wang, Zhuo, Venkataramani, Swagath, Chuang, Pierce I-Jen, Srinivasan, Vijayalak- shmi, and Gopalakrishnan, Kailash. Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv: 1805.06085, 2018.
Choukroun, Yoni, Kravchik, Eli, and Kisilev, Pavel. Low-bit quantization of neural networks for efficient inference. arXiv preprint arXiv: 1902.06822, 2019.
Goncharenko, Alexander, Denisov, Andrey, Alyamkin, Sergey, and Terentev, Evgeny. Fast adjustable threshold for uniform neural network quantization. arXiv preprint arXiv: 1812.07872, 2018.
Han, Song, Mao, Huizi, and Dally, William J. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
Hubara, I, Courbariaux, M, Soudry, D., El-Yaniv, R., and Bengio, Y. Binarized neural networks. In NIPS. US Patent 62/317,665, Filed, 2016.
Jacob, Benoit, Kligys, Skirmantas, Chen, Bo, Zhu, Menglong, Tang, Matthew, Howard, Andrew, Adam, Hartwig, and Kalenichenko, Dmitry. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2704-2713, 2018.
Jacob, Benoit et al. gemmlowp: a small self-contained low-precision gemm library.(2017), 2017.
Krishnamoorthi, Raghuraman. Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv: 1806.08342, 2018.
Lee, Jun Haeng, Ha, Sangwon, Choi, Saerom, Lee, Won-Jo, and Lee, Seungwon. Quantization for rapid deployment of deep neural networks. arXiv preprint arXiv: 1810.05488, 2018.
Lin, Xiaofan, Zhao, Cong, and Pan, Wei. Towards accurate binary convolutional neural network. In Advances in Neural Information Processing Systems, pp. 345-353, 2017.
McKinstry, Jeffrey L, Esser, Steven K, Appuswamy, Rathinakumar, Bablani, Deepika, Arthur, John V, Yildiz, Izzet B, and Modha, Dharmendra S. Discovering low-precision networks close to full- precision networks for efficient embedded inference. arXiv preprint arXiv: 1809.04191, 2018.
Meller, Eldad, Finkelstein, Alexander, Almog, Uri, and Grobman, Mark. Same, same but different- recovering neural network quantization error through weight factorization. arXiv preprint arXiv:1902.01917, 2019.
Migacz, S. 8-bit inference with tensorrt. In GPU Technology Conference, 2017.
Rastegari, Mohammad, Ordonez, Vicente, Redmon, Joseph, and Farhadi, Ali. Xnor-net: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision, pp. 525-542. Springer, 2016.
Wu, Shuang, Li, Guoqi, Chen, Feng, and Shi, Luping. Training and inference with integers in deep neural networks. arXiv preprint arXiv: 1802.04680, 2018.
Zhao, Ritchie, Hu, Yuwei, Dotzel, Jordan, De Sa, Christopher, and Zhang, Zhiru. Improving neural network quantization using outlier channel splitting. arXiv preprint arXiv: 1901.09504, 2019.
Zhou, Shuchang, Wu, Yuxin, Ni, Zekun, Zhou, Xinyu, Wen, He, and Zou, Yuheng. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv: 1606.06160, 2016.
2019
arXiv:1810.05723v3 [cs.CV] 29 May
# Post training 4-bit quantization - supplementary material
# 1 ACIQ: Analytical Clipping for Integer Quantization
In the following we derive a generic expression for the expected quantization noise as a function of clipping value for either Gaussian or Laplace distributions. Let X be a high precision tensor-valued random variable, with a probability density function f(a). Without loss of generality, we assume a prepossessing step has been made so that the average value in the tensor zero i.e., E(X) = p =0 (we do not lose generality since we can always subtract and add this mean). Assuming bit-width /, we would like to quantize the values in the tensor uniformly to 2â discrete values.
Commonly (e.g., in GEMMLOWP), integer tensors are uniformly quantized between the tensor maximal and minimal values. In the following we show that this is suboptimal, and suggest a model where the tensor values are clipped in the range [âa, a] to reduce quantization noise. For any x ⬠IR, we define the clipping function clip(zx, a) as follows
c if |r| < clip(z, a) = {i if|e| <a All sign(x)-a_ if |a| >a (A)
Denoting by a the clipping value, the range [a, âa] is partitioned to 2â equal quantization regions. Hence, the quantization step A between two adjacent quantized values is established as follows:
2a A= aT (A.2)
Our model assumes values are rounded to the midpoint of the region (bin) i.e., for every index i ⬠[0,2 â 1) all values that fall in [âa + 7- A, âa + (i + 1) - A] are rounded to the midpoint qg = âa+ (2+ 1)4, as illustrated in Figure 1| left. Then, the expected mean-square-error between X and its quantized version Q(X) can be written as follows:
E(X â Q(X))"] = a avy [. f(x) (@+a)?da + » | âatiA -at(i+1)A Pla) -(eâq,)2de + i f(x) «(x âa)dr (A.3)
Eq. A.3 is composed of three parts. The first and last terms quantify the contribution of clip(x, a) to the expected mean-square-error. Note that for symmetrical distributions around zero (e.g., Gaussian N(0, 07) or Laplace(0, b)) these two terms are equal and their sum can therefore be evaluated by multiplying any of the terms by 2. The second term corresponds to the expected mean-square-error when the range [âa, a] is quantized uniformly to 2â discrete levels. This terms corresponds to the quantization noise introduced when high precision values in the range [âa, a] are rounded to the nearest discrete value.
# 1.1 Quantization noise
We approximate the density function f by a construction of a piece-wise linear function whose segment breakpoints are points in f, as illustrated on the right side of figure 1. Since we consider only
Preprint. Under review.
smooth probability density functions (e.g., Gaussian or Laplace), the resulting approximation error is small for sufficient resolution i.e., small quantization step size A. In section 2 in this appendix we show that given a density function f, the quantization noise can be approximated as follows:
2M _ 2M_y -a+(i+1)-A a > / f(x) -(@â qi)? dx ae 395M > f(a) (AA) fao Joti
# the
Eq. A.4 represents the rounding error (as opposed to clipping error) due to rounding of all values in the bin 7 to its center g;. For sufficient resolution and a smooth density function, the density function f can be approximated by a uniform distribution in the range [âa, a], which enables much simpler analysis with little effect on the accuracy. In Figure 1, we show that with this assumption the analytic results are in a good agreement with the simulation results. By substituting the uniform density function f(a) = x into eq. A.4, the following simpler rounding error can be computed:
2Mâ-1 , at (it1)-A 3 2M=1 2 P 2-a° 1 a 2 x)-(aâq)°dx & : » AS i=0 ons F(z) -(wâ qi)"'de 3. 23M Ss Qa 3. 22M (A5)
By substituting eq. A.5 into eq. A.3, and using the symmetrical argument mentioned above, eq. A.3 can be simplified for symmetrical distributions as follows:
E[(X â Q(X))?] = z spur +2: [« f(x) -(@âa)?dx (A.6)
In the following we provide a closed form O22) for the case where the density probability distribution function f(z) is either Gaussian N(0, 07) or Laplace(0, b).
# 1.2 Clipping noise
In the following we develop an expression based on eq. A.6 for the Laplace case. In Section 3 in this appendix we provide a similar analysis for the case where the probability density function is Gaussian (0,07)
Assuming j4 = 0, we have the following Laplace density function f(a) = ae ~'s\_ In order to derive a closed form solution for eq. A.6, we need to evaluate
* He (a â a)? de. (A.7)
# below:
Let (x) represent the expression
U(x) â [2ab â 2b? â a? â a? â 2(bâa) a] (A.8)
By taking the derivative of Y (a) with respect to z, it is easy to see that U(x) is the correct antideriva- tive of the integrand in eq. A.7. Hence,
[ f(x) + (@ â a)?dx = U(inf) â (a) = eo
We can finally state eq. A.6 for the laplace case as follows.
2 2 a, 2-03 â 2 BS a? E[(X â Q(X))?] © 2-B e+ 5 DE Flas) = 2-8 °F + ae (A.9) i=0
On the left side of figure 1, we introduce the mean-square-error as a function of clipping value for various bit widths. Finally, to find the optimal clipping value a for which mean-square-error is minimized, the corresponding derivative with respect to a is set equal to zero as follows:
AB|(X ~ Q(X))"] __20 Oa 3.22M â2beât =0 (A.10)
Solving Eq. A.10 numerically for bit-widths M = 2,3, 4 results with optimal clipping values of a* = 2.83b, 3.89b, 5.03b, respectively.
â analysis â simulation 2bi' Mean Square Error Nn QR 3 q2 % ri 3 TTR CE CRE Ree oe os oxen gee SEE heâ @ Clipping Value âa -atA âa+2A âat(2M-1)A a
Figure 1: left: Expected mean-square-error as a function of clipping value for different quantization levels (Laplace (4 = 0 and b = 1)). Analytical results, stated by eq. A.9, are in a good agreement with simulations, which where obtained by clipping and quantizing 10,000 values, generated from a Laplace distribution. right: An activation distribution quantized uniformly in the range [âa, a] with 2⢠equal quantization intervals (bins)
# 2 ACIQ: Piece-Wise Linear Approximation
Here we provide a more accurate analysis related to the qunatization noise (i.e., the second term in Equation A.3), measured as the expected mean-square-error when the range [âa, a] is quantized uniformly to 2â discrete levels. To that end, we approximate the density function f by a construction of a piece-wise linear function g such that f(q;) = g(qi) for each i ⬠[0, 2â â 1]. Since we consider only smooth probability density functions (e.g., Gaussian or Laplace), the resulting approximation error is small for sufficient resolution i.e., small quantization step size A. In figure 1 we provide an illustration for this construction.
We turn to calculate the linear equation for each line segment of the piece-wise linear function g, falling in the range [-a + 7- A, âa + (i + 1) - A]. To that end, we consider the slope (derivative) and the value of the density function at the midpoint g;. With these two values we can define for each segment i ⬠[0, 2â/ â 1] the corresponding form of linear approximation:
Tag) + (% â qi), where x ⬠[-a +7-A,âa+ (i+1)-A] (A.11) g(a) = f(g) +
We now turn to calculate the second term in Equation A.3. By equation A.11, and since q; is defined to be the midpoint between the integration limits, the following holds true
2M-1 .a4(i4t1)-A 2M -1 pat (it1)-A > / Po) - (w= q))Pde x ~/ gle) - (x â q))de = J Jatin atic 2Mâ-1 ,-a+(itl)-A 2M-1 at (i41)-A a = | (qi) (@ â 4)? + = | alt): (@ ~ 4:)°dr = j J-ati-a atid res 1 âa+(i+1)-A 1 2M 4 df âat(it1)-A =3 fla: (eq)? =o (ai): (@- 4) = 4 4 dx -ati-A i=0 âati-A oy 8 QM _4 12 >> f(G) = ae > F(a)
# 3 Clipping noise (Gaussian case)
We now turn to evaluate Equation A.6 for the Gaussian case. Given a Gaussian random variable X ~ N(0,07), we define W(x) to represent the expression below:
(2) (co? +o?) ert (5) (wa = 200) 078 (A.12) : 3 Ti
«
As in subsection 1.2, one can observe that by taking the derivative of U(x) with respect to 2, it is easy to show that U(x) is the correct antiderivative of Equation A.7 for the case where f represents 2 the Gaussian density function i.e., f(x) = Tene Next, we use U(x) on the range [a,, oo] to evaluate Equation A.7 for the Gaussian case as follows:
[> f)-(e = ajar = 9400) - Ya) = âfat (2) - ones
Equation A.6 can thus be written for the case of Gaussian distribution as follows:
E[(X â Q(X))?] &*© (e2 +0?) - â erf (=)| +3 a V2 ve (A.13)
In figure 2 we introduce the mean-square-error as a function of clipping value for various bit widths.
5 â analysis â_ simulation Mean Square Error ; â . Clipping Value " . â
Figure 2: Expected mean-square-error as a function of clipping value for different quantization levels (Gaussian (j4 = 0 and a = 1)). Analytical results , stated by Equation A.19, are in a good agreement with simulations, which where obtained by clipping and quantizing 10,000 values, generated from a Laplace distribution. As expected, the difference occurs only for very low-bit width and large clipping values where the uniform assumption tends to break.
In order to find the optimal clipping values for which mean-square-error is minimized, we need to differentiate E[(X â Q(X))?] with respect to a and set the derivative equal to zero as follows.
dE[(X â Q(X))?] (a oe 8% ge _ 2a AA4 Ba oft att (-)| Jina - Vin) 3M 0. ¢ )
# 4 ACIQ: Optimal Quantizer for Fused ReLU Activations
In this section we adjust Equations A.9 and A.13 for the case where convolutions and rectified linear units (ReLU) are fused to avoid accumulation of noise.
The ReLU is defined by zeroing the negative half space ie., g(x) = max(0,x). Given a high precision random variable X with a probability density function f(a) we would like to minimize the following expected mean square-error
Bl (aX) â Q(X) | (A.15)
Assuming the probability density function f(x) has a symmetrical distribution around zero, there are two adjustments that need to be made in the analysis of Section 1:
(1) The quantization step A is now set according to the range [0, a]. Hence, Equation A.2 should be modified as follows:
a = oM (A.16)
(2) Since we consider only the positive values, Equation A.3 should ignore the negative contribution i.e.,
2M (i41)- EU(X â Q(X ied [i fa) -(e@âq)2de+ [> f(w)-(@âa)2ar (AA) a
This translates to the following adjustments in Equation A.9 for the Laplace case:
a? pa a) 2 > =e B| (9X) â Q(g(X))) ] =O? e+
Similarly, for the Gaussian case Equation A.13 is modified as follows:
2
B{(9(X) - (a(x) | wee. [ier (= )] ââ oe (A.19)
# 5 Per-channel bit-allocation
With classical per-channel quantization we have a dedicated scale and an offset for each channel. Here we take a further step and consider the case where different channels have different numbers of bits for precision. For example, instead of restricting all channel values to have the same 4-bit representation, we allow some of the channels to have higher bit-width while limiting other channels to have a lower bit-width. So the total volume of data written to or read memory is still comparable to 4-bit precision.
Given a layer with n channels, we formulate the problem as an optimization problem aiming to find a solution that allocates a quota of B quantization intervals (bins) to all different channels. Limiting the number of bins B translates into a constraint on the number of bits that one needs to write to memory. Our goal is to minimize the overall layer quantization noise in terms of mean-square-error.
Assuming channel i has values in the range [âa;, a] quantized to M; bits of precision, eq. A.9 provides the quantization noise in terms of expected mean-square-error. We employ eq. A.9 to introduce a Lagrangian with a multiplier \ to enforce the requirement on the number of bins as follows:
2 L(Mo, Mi, ...; MndA) = > (2 Pee Te + your) + (r QM: _ ») (A.20) i a
The first term in the Lagrangian is the total layer quantization noise (i.e., the sum of mean-square- errors over all channels as defined by eq. A.9). The second term captures the quota constraint on the total number of allowed bins B. By setting to zero the partial derivative of the Lagrangian function L(-) with respect to M/;, we obtain for each channel index i ⬠[0,7 â 1] the following equation:
OL(Mo, Mi,..., My) 2in2- a? OM, ~ 3. 92M, FA-2M = 0 (A.21)
Hence,
3/2In2-a? gM â ame OF (A.22) 3-2
Next, by setting to zero the partial derivative of the Lagrangian function £(-) with respect to A we take into account the constraint on the number of allowed bins.
OL(Mo, Mi, ..., Mind) _ sro _B Dy (A.23)
Hence, using eq. A.22 we get the following expression:
3/2In2-a? 3/2In2 2 yoo = SO J = os =B (A.24) ; 7 3° 3A 7
Hence,
3 2iIn2 2 > of) (A.25) 3B3 ( ;
Define B* to be an optimal bin allocation that minimizes the mean-square-error. By substituting eq. A.25 into eq. A.22, we get the following simple rule for optimal bin allocation for each channel i:
2
3 a; -B (A.26) Be =2Mi= Via? colt
Finally, by taking the logarithm of both sides, we translate eq. A.26 into bit width assignment 1/; for each channel 7. Since M; is an integer it includes a round operation.
M, = cal an ») (A.27) Via? eats
# 6 Kullback Leibler divergence (KLD) method
The following table summaries the classification test accuracies of different popular pre-trained convolution networks after activations are quantized to 4-bit precision in a post-training manner (8W4A). Due to scaling issues of KLD, we could not test its performance in conjecture with the other search-based quantizaton schemes (e.g., KLD with per-channel quantization). Therefore,to make a fair comparison we compare against a baseline that does not include per-channel quantization.
Model Naive KLD ACIQ Reference (8W4A) (8W4A) (8W4A) (float32) VGGI16 53.90% 67.04% ~â 67.40% 71.59% VGGI6-BN 29.50% 65.85% 67.60% 73.36% ResNet-18 = 53.20% 65.06% 65.80% 69.75% ResNet-50 52.70% 70.80% 71.45% 76.10% ResNet-101 50.80% 71.70% 69.53% 77.30% Inception v3. 41.40% 59.25% 60.80% 77.20% AlexNet 41.60% 49.55% 52.20% 56.52%
Table 1: Validation accuracy of various architectures quantized post-training to 8-bit weights and 4-bit activations (83W4A): Naive (SW4A) refers to the conventional quantization method based on the maximum and minimum representable value which shows severe accuracy loss. KLD (8SW4A) refers to the iterative method suggested by NVIDIA to search for a good clipping threshold based on the Kullback-Leibler Divergence measure. ACIJQ (SW4A) refers to our analytic clipping approach described in Section 1; unlike KLD, which is a brute force technique, our approach is order of times faster, and, excluding ResNet 101, maintains higher validation accuracy. Reference (float32) uses full precision models with 32 bit weights and activations.
# 7 Results for 3-bit Quantization
We compared ACIQ, per-channel bit allocation and bias correction also for the case of 3-bit precision. Our results are summarized in Table 2.
Table 2: ImageNet Top-1 validation accuracy with post-training quantization using the three methods suggested by this work.
# Method
# VGG
# VGG-BN_
# IncepV3_
# Res18
# Res50
# Res101
Quantizing activations: 8 bits weights, 3 bits activations (8W3A) (Per channel quantization of activations + fused ReLU)
Baseline 57.1% 56.0% 34.1% 23.4% 5.6% 1.6% ACIQ 67.0% 69.1% 56.8% 57.8% 60.8% 59.0% Per-channel bit allocation 64.0% 69.7% 55.2% 48.7% 16.2% 61.5% Reference (FP32) 71.6% 73.4% 71.2% 69.7% 76.1% 77.3%
Quantizing weights: 3 bits weights, 8 bits activations (3W8A)
(Per channel quantization of weights)
Baseline 59.6% 40.4% 0% 3.8% 28.6% 50.5% Bias-Correction 67.3% 66.1% 3.0% 43.5% 674% 70.7% Per-channel bit-allocation 69.5% 63.6% 1.3% 44.0% 66.6% 72.6% Reference (FP32) 71.6% 73.4% 71.2% 69.7% 76.1% 77.3%
# 8 Combining our Quantization Methods
We conduct a study to investigate how each quantization method affects performance. We consider four quantization methods: (1) ACIQ; (2) Bias-correction (3) Per-channel bit-allocation for weights; (4) Per-channel bit allocation for activations. Table 3 summaries all possible combinations on ResNet101.
Method 2bit 3bit 4bit 5 bit Naive 0.1 05 61.8 74.9 Bias-Corr. 0.2 0.9 63.7 75.3 Bit-Alloc.(W) 0.1 13 63 74.9 Bit-Alloc.(W) + Bias Corr 0.1 1.3 65.5 75.3 Bit-Alloc.(A) 0.1 31.7 70.9 74.9 Bit-Alloc.(A) + Bias Corr 0.4 53.7 73.2 75.3 Bit-Alloc.(A) + Bit-Alloc.(W) 0.2 56.2 70.9 74.9 Bit-Alloc.(A) + Bit-Alloc.(W) + Bias-Corr. 0.6 58 724 75.4 ACIQ 0 23.3 68.2 75 Bias-Corr. + ACIQ 0.3 47.7 71 75.6 Bit-Alloc.(W) + ACIQ 02° 53.7 TIT 75 Bit-Alloc.(W) + Bias Corr + ACIQ 0.5 55.4. 72.2 75.5 Bit-Alloc.(A) + ACIQ 0 41.7 72.1 75.2 Bit-Alloc.(A) + Bias Corr + ACIQ 0.5 64.7 744 75.6 Bit-Alloc.(A) + Bit-Alloc.(W) + ACIQ 04 66.6 74.8 75.2 Bit-Alloc.(A) + Bit-Alloc.(W) + Bias-Corr. + ACIQ 4.6 69 754 75.5
Table 3: ImageNet Top-1 validation accuracy with post-training quantization using the three methods suggested by this work. | {
"id": "1902.01917"
} |
1810.01109 | AI Benchmark: Running Deep Neural Networks on Android Smartphones | Over the last years, the computational power of mobile devices such as
smartphones and tablets has grown dramatically, reaching the level of desktop
computers available not long ago. While standard smartphone apps are no longer
a problem for them, there is still a group of tasks that can easily challenge
even high-end devices, namely running artificial intelligence algorithms. In
this paper, we present a study of the current state of deep learning in the
Android ecosystem and describe available frameworks, programming models and the
limitations of running AI on smartphones. We give an overview of the hardware
acceleration resources available on four main mobile chipset platforms:
Qualcomm, HiSilicon, MediaTek and Samsung. Additionally, we present the
real-world performance results of different mobile SoCs collected with AI
Benchmark that are covering all main existing hardware configurations. | http://arxiv.org/pdf/1810.01109 | Andrey Ignatov, Radu Timofte, William Chou, Ke Wang, Max Wu, Tim Hartley, Luc Van Gool | cs.AI, cs.CV | null | null | cs.AI | 20181002 | 20181015 | 8 1 0 2
t c O 5 1 ] I A . s c [
2 v 9 0 1 1 0 . 0 1 8 1 : v i X r a
# AI Benchmark: Running Deep Neural Networks on Android Smartphones
Andrey Ignatov ETH Zurich Radu Timofte ETH Zurich William Chou Qualcomm, Inc. Ke Wang Huawei, Inc. andrey@vision.ee.ethz.ch timofter@vision.ee.ethz.ch wchou@qti.qualcomm.com Max Wu MediaTek, Inc. Tim Hartley Arm, Inc. Luc Van Gool â ETH Zurich max.wu@mediatek.com tim.hartley@arm.com vangool@vision.ee.ethz.ch
michael.wangke@huawei.com
# Abstract
Over the last years, the computational power of mobile de- vices such as smartphones and tablets has grown dramatically, reaching the level of desktop computers available not long ago. While standard smartphone apps are no longer a prob- lem for them, there is still a group of tasks that can easily challenge even high-end devices, namely running artiï¬cial in- telligence algorithms. In this paper, we present a study of the current state of deep learning in the Android ecosystem and describe available frameworks, programming models and the limitations of running AI on smartphones. We give an overview of the hardware acceleration resources available on four main mobile chipset platforms: Qualcomm, HiSilicon, MediaTek and Samsung. Additionally, we present the real- world performance results of different mobile SoCs collected with AI Benchmark1 that are covering all main existing hard- ware conï¬gurations.
mobile platforms is associated with a huge computational overhead on phone CPUs and a serious drain on battery power. Many recent developments in deep learning are, however, tightly connected to tasks meant for mobile devices. One no- table group of such tasks is concerned with computer vision problems like image classiï¬cation [1, 2, 3], image enhance- ment [4, 5, 6] and super-resolution [7, 8, 9], optical character recognition [10], object tracking [11, 12], visual scene under- standing [13, 14], face detection and recognition [15, 16], gaze tracking [17], etc. Another group of tasks encompasses vari- ous natural language processing problems such as natural lan- guage translation [18, 19], sentence completion [20, 21], sen- tence sentiment analysis [22, 23] or interactive chatbots [24]. A separte group deals with on-line sensor data processing for human activity recognition from accelerometer data [25, 26], gesture recognition [27] or sleep monitoring [28]. Several other deep learning problems on smartphones are related to speech recognition, virtual reality and many other tasks.
1
# Introduction
With the recent advances in mobile system-on-chip (SoC) technologies, the performance of portable Android devices has increased by a multiple over the past years. With their multi-core processors, dedicated GPUs, and gigabytes of RAM, the capabilities of current smartphones have already gone far beyond running the standard built-in phone applica- tions or simple mobile games. Whereas their computational power already signiï¬cantly exceeds the needs of most every- day use cases, artiï¬cial intelligence algorithms still remain challenging even for high-end smartphones and tablets. De- spite the fact that many machine learning solutions are highly useful when deployed on end-user devices, running them on
âWe also thank Przemyslaw Szczepaniak (pszczepaniak@google.com), Google Inc., for writing and editing sections 2.7, 3.1 and 3.2.
Despite the rising interest in deep learning for mobile ap- plications, the majority of AI algorithms are either not avail- able on smartphones or are executed on remote servers due to the aforementioned phonesâ hardware limitations. The lat- ter option is also not ï¬awless, causing: a) privacy issues; b) dependency on an internet connection; c) delays associated with network latency; d) bottleneck problems â the number of possible clients depends on the serversâ computational ca- pabilities. To overcome these issues, there were a number of attempts to port separate algorithms or whole machine learn- ing libraries to mobile platforms with added hardware accel- eration (HA) using GPUs or DSPs. In [29], the authors imple- mented a mobile neural network classiï¬cation engine capable of sensor inference tasks on Qualcommâs Hexagon DSP [30]. Though they achieved very impressive energy consumption results, the DSP was able to run only very simple CNN models due to its small program and memory space. In [31], the au- thors presented a GPU-accelerated library CNNdroid for par- allel execution of pre-trained CNNs on mobile GPUs. The
# 1http://ai-benchmark.com
1
y snapdragon SAMSUNG Eyyvros MediaTek P60 helio
Figure 1: Mobile SoCs with potential acceleration support for third-party AI applications.
library was based on the RenderScript framework [32] that parallelizes computations across CPUs and GPUs, and though the proposed solution was up to 40 times faster compared to the baseline naive singe-thread implementation, in reality its speed was comparable to a CPU-based TensorFlow Mobile library [33] relying on the Arm NEON [34] instruction set. Motamedi et al. [35] exploited the same approach of using RenderScript, but used a CPUâs imprecise computing modes to lower execution times. Despite the promising results, the effect inexact arithmetic had on accuracy was not investigated in depth in this paper, and therefore the applicability of this approach remains unclear. RSTensorFlow [36] is another at- tempt to expoit RenderScript for GPU-based acceleration of matrix operations, and in this case it was used to directly mod- ify the TensorFlow Mobile library. The results demonstrated that, while matrix multiplications can be executed up to 3 times faster, it is not possible to speed up the convolutional operations that take approximately 75% of the total inference time. Additionally, the experiment revealed that RenderScript is not always using GPUs on all the devices â sometimes it is running on a CPU only, leading to slower execution times even compared to the original TF implementation.
Besides that, some SDKs for running computationally in- tensive operations were proposed directly by SoC manufac- turers. In 2016, Qualcomm introduced the Snapdragon Neu- ral Processing Engine (SNPE) [37] to accelerate the execution of neural networks with their GPUs and DSPs. The next year HiSilicon proposed the HiAI platform [38] for running neural networks on Kirinâs NPU, and later MediaTek presented the NeuroPilot SDK [39] that can trigger GPUs or APUs to run deep learning models. The biggest issue is that all these SDKs were developed for the corresponding chipsets only, i.e., the application relying on HiAI will not run on Qualcomm SoC, and vice versa, thus forcing developers to create several ver- sions of their app for each platform, or to give up on some of them. This situation changed with the introduction of the Android Neural Networks API (NNAPI) [40], designed to run deep learning models on mobile devices. This API is basi- cally an intermediate layer between the higher-level machine learning framework and the deviceâs hardware acceleration re- sources, and is responsible for their communication and for scheduling the execution of tasks on the most suitable hard- ware. NNAPI still requires speciï¬c SoC vendorsâ drivers in order to run the computations on anything but a CPU, and therefore its default presence in Android 8.1+ does not au- tomatically guarantee hardware acceleration support.
ing the CPU and GPU performance of mobile phones, none of them measure the speed and acceleration of AI operations that can be achieved due to available AI chips and DSPs. In this paper, we present an AI Benchmark designed speciï¬cally to test the machine learning performance, available hardware AI accelerators, chipset drivers, and memory limitations of the current Android devices. It consists of a number of computer vision AI tests that are executed directly on the phonesâ hard- ware and that cover relevant deep learning architectures and operations. We provide a detailed description of the actual chipset platforms and popular mobile machine learning frame- works, and describe the limitations of running deep learning algorithms on smartphones. Finally, we present the in-the- wild performance of about 200 Android devices and major mobile chipsets, as collected with our AI Benchmark, for over 10,000 smartphones and tablets.
The rest of the paper is arranged as follows. In Section 2 we describe the hardware acceleration resources available on the main chipset platforms, as well as the programming in- terfaces for accessing them. Section 3 gives an overview of popular mobile deep learning frameworks. Section 4 provides a detailed description of the benchmark architecture, its pro- gramming implementation, and the computer vision tests that it includes. Section 5 shows the experimental results and in- ference times for different deep learning architectures, for var- ious Android devices and chipsets. Section 6 analyzes the ob- tained results. Finally, Section 7 concludes the paper.
# 2 Hardware Acceleration
While the ï¬rst consumer computers were mostly equipped with a single, stand-alone CPU, it soon became clear that its computational performance is too limited for a number of multimedia applications. This led to the creation of special co-processors working in parallel with the main CPU. Their architecture was optimized for many signal processing tasks. The era of digital signal processors (DSPs) began in the early 1980s with the introduction of the NEC PD7720 [41], the AT&T DSP1 [42] and the TI TMS32010 [43] co-processors. They established general principles of the DSP architecture used until now [44]: Harvard architecture, hardware block for multiply-accumulate (MAC) operations, VLIW and SIMD instruction sets for parallel computations, etc. Though the ï¬rst DSPs had quite restricted capabilities due to their limited set of instructions and memory constraints, they were widely used till the mid 90s of the last century. They were popu-
While there exists a number of common benchmarks test-
2
lar for applications related to computer graphics, sound and video decoding, as mathematical co-processors and accelera- tors for various photo editing software, and even for running the ï¬rst deep learning OCR models designed in 1989 [45]. The latter task of classifying handwritten digits using CNNs reached high speeds at that time (12 images per second) due to the efï¬cient vector and matrix-based calculations. These resulted from the highly parallelizable DSP architectures and the hardware implementation of MAC operations. At the end of the 90s the popularity of DSPs started to decrease and in the consumer PC sector they were largely replaced by general- purpose CPUs with integrated DSP instructions, GPUs for ef- ï¬cient parallel computations, and FPGAs conï¬gurable for var- ious speciï¬c problems.
At the beginning of the 1990s, DSPs started to appear in mobile phones. At ï¬rst, they were used only for voice cod- ing and compression, as well as for some radio signal pro- cessing. Later on, with the integration of cameras and many multimedia features like music and video playback in mobile devices, the integrated DSPs started to be extensively used for image, video and sound processing. In contrast to what hap- pened with desktop computers, DSPs were not displaced here by CPUs and GPUs because they often offered superior per- formance at lower power consumption, so critical for portable devices. In recent years, the computational power of mobile DSPs and other SoC components has grown drastically, and now, complemented by GPUs, NPUs and dedicated AI cores, they enable AI and deep learning-based computations. A de- tailed description of the current mobile platforms (ï¬g. 1) and their hardware acceleration resources is provided below.
# 2.1 Qualcomm chipsets / SNPE SDK
Qualcomm is an American semiconductor and wireless telecommunications company, founded in 1985. Its ï¬rst Snap- dragon mobile SoC QSD8250 was released in 2007 and al- ready featured a dedicated AMD Z430 GPU and the ï¬rst com- mercial generation of QDSP6 Hexagon DSPs. In 2009, after the acquisition of AMDâs mobile graphics division, the corre- sponding GPU series was renamed to Adreno (anagram from Radeon), and its successors are present under this name in all current Snapdragon SoCs. Their performance evolved from 2.1 (Adreno 200) to 727 (Adreno 630) GFLOPS. The DSP architecture has also undergone signiï¬cant changes from the ï¬rst (2006) to the current sixth generation, and is now support- ing wide vector extensions (HVX), dynamic multi-threading, VLIW and SIMD instruction sets. They can also be pro- grammed by users [30]. The main Snapdragon CPU cores have an Arm-based architecture and usually feature Qual- commâs own customized in-house design, often developed based on Arm Cortex cores. These three components (CPUs with the Arm NEON instruction set, GPUs and DSPs) form Snapdragonâs heterogeneous computing architecture (ï¬g. 2) well suitable for running various AI algorithms. The Qual- comm chipsets are now covering around 55% of the smart- phone SoC market and are installed in many popular smart- phones, tablets, and wearables.
3
Snapdragon X20 LTE modem WiFi Hexagon 685 DSP Spectra 280 ISP Agstic Audio Kryo 385 CPU System Memory Mobile Security
Figure 2: SoC components integrated into Snapdragon 845 (left) and Kirin 970 (right) chipsets.
Qualcomm ï¬rst addressed the problem of on-device AI in- ference hardware acceleration in the Snapdragon 820 in May 2015 and also announced its proprietary Snapdragon Neu- ral Processing Engine (SNPE) SDK in May 2016, which offers runtime acceleration across all Snapdragonâs process- ing components. The SDK supports common deep learning model frameworks, such as Caffe/Caffe2, TensorFlow, Py- Torch, Chainer, MxNet, CNTK and PaddlePaddle via ONNX. It is designed to enable developers to run their own custom neural network models on various Qualcomm-powered de- vices. The SDK is supported on 17 Snapdragon mobile pro- cessors starting from premium (Snapdragon 845, 835, 820), high tier (Snapdragon 710, 670, 660, 652, 650, 653, 636, 632, 630, 626 and 625) as well as the mid-tier (Snapdragon 450, 439, 429). It also supports the Qualcomm Vision Intelligence Platform (QCS603 and QCS605), designed for efï¬cient ma- chine learning on IoT devices.
Qualcommâs ï¬rst NNAPI driver for running quantized neu- ral networks on Hexagon DSPs was introduced in the Android O-MR1, though it was not used in any commercial devices at that time and ï¬rst appeared only later in the OnePlus 6 and Xiaomi Mi8 with the next Android version. In Android P, these drivers got additional support for running ï¬oat mod- els on the Adreno GPU. Yet, they are currently not present in the market. The considered NNAPI drivers are generally adopting hardware acceleration principles and implementation used in SNPE SDK. The differences mainly come from the re- strictions of the current Android NNAPI speciï¬cations. Qual- comm delivers these drivers in the software images provided to its OEM customers, which then in turn determine when and how to include them to end devices: with their initial release or later over the air in subsequent software updates. As a re- sult, their presence and actual version might vary signiï¬cantly across the phones on the market.
# 2.2 HiSilicon chipsets / Huawei HiAI SDK
HiSilicon is a Chinese semiconductor company founded in 2004 as a subsidiary of Huawei. Its ï¬rst mobile processor (K3V1) was introduced in 2008, but the ï¬rst commercially successful product used in a number of Android devices was the next SoC generation (K3V2) released in 2012 and featur- ing four Arm Cortex-A9 CPU cores and a Vivante GPU. In 2014, a new Kirin SoC family consisting of mid-range (600 Series) and high-end (900 Series) chipsets was launched as a
âTelning: Macnine Learning experts bul ard tran their etwork to sive ther particular probe a = aol Ges ==" Ga ee : Online Reasoning Engine Huawei HiAl Foundation (Me NPU
Figure 3: Schematic representation of SNPE, HiAI and NeuroPilot SDKs from Qualcomm, Huawei and MediaTek, respectively.
successor to the K3 series and is used in Huawei devices until now. Unlike Qualcomm, HiSilicon does not create customized CPU and GPU designs and all Kirin chipsets are based on off-the-shelf Arm Cortex CPU cores and various versions of Mali GPUs. A different approach was also developed for ac- celerating AI computations: instead of relying on GPUs and DSPs, HiSilicon introduced a specialized neural processing unit (NPU) aimed at fast vector and matrix-based computa- tions widely used in AI and deep learning algorithms. Accord- ing to Huawei, it delivers up to 25 times better performance and 50 times greater efï¬ciency compared to the standard quad- core Cortex-A73 CPU cluster. The NPU design was licensed from the Cambricon Technologies company (Cambricon-1A chip) and is said to deliver a peak performance of about 1.92 TFLOPs, though this number mainly refers to quantized 8-bit computations. This NPU ï¬rst appeared in the Kirin 970 SoC, and later two enhanced NPUs were also integrated into the subsequent Kirin 980 chipset. It should be noted that other SoCs apart from Kirin 970/980 do not contain this NPU mod- ule and are currently unable to provide acceleration for third- party AI-based applications. The aforementioned chipsets can be found only inside Huawei devices as they are not sold to external OEM companies; the current total market share of HiSilicon SoCs is around 10%.
# 2.3 MediaTek chipsets / NeuroPilot SDK
MediaTek is a Taiwanese semiconductor company spun off Its from the United Microelectronics Corporation in 1997. mobile division was launched in 2004 and soon after this Me- diaTek released its ï¬rst mobile chipsets that were used in many entry-level Chinese phones and smartphones produced at that time. It gained popularity on the global smartphone market in 2013 with the introduction of the MediaTek 657x/658x family of dual and quad-core SoCs with Mali or PowerVR graph- ics, and later with the release of 64-bit MediaTek MT67xx chipsets they became widely used in many Android devices from various OEMs, getting a market share of about 20%. Similarly to Huawei, MediaTek is integrating into its SoCs standard Arm Cortex CPU cores and Mali or PowerVR GPUs. At the beginning of 2018, MediaTek addressed the problem of accelerating machine learning-based applications by launch- ing their Helio P60 platform with embedded AI processing unit (APU). This APU can deliver the performance of up to 280GMAC/s for 8-bit computations and is primarily used for accelerating quantized neural networks, while ï¬oat models are running on four Cortex-A53 CPU cores and Mali-G72 MP3 GPU clocked at 800MHz. Thus, MediaTekâs approach lies in between the solutions from Huawei and Qualcomm: a dedi- cated chip for quantized computations (as in Kirinâs SoC) and CPU/GPU for ï¬oat ones (as in Snapdragon chipsets).
To give external access to Kirinâs NPU, Huawei released in late 2017 the HiAI [38] Mobile Computing Platform SDK, providing APIs for executing deep learning models on hard- ware resources integrated within Kirin SoC. This SDK is now supporting only Caffe, Tensorï¬ow Mobile and Lite frame- works, though in future releases it might also offer support for Caffe2 and ONNX. It provides acceleration for 16-bit ï¬oat, 8- bit and 1-bit quantized models, and can additionally speed-up sparse models by skipping multiply-add operations containing zero variables. Apart from low-level APIs, the HiAI Engine also provides a ready-to-use implementation of several com- puter vision algorithms including image categorization, face and facial attribute detection, document detection and correc- tion, image super-resolution, QR code detection, etc.
Starting from Android 8.1 (EMUI 8.1), Huawei is including NNAPI drivers for its Kirin 970/980 chipsets that are generally based on the HiAI implementation. Currently, they are provid- ing support only for 16-bit ï¬oat models, quantized networks will be supported in the future releases. It should be men- tioned that all Huawei devices that are based on other chipsets do not contain NNAPI drivers as they are lacking the above- mentioned NPU module.
The release of the Helio P60 was accompanied by the in- troduction of MediaTekâs NeuroPilot SDK [39] constructed around TensorFlow Lite and Android NNAPI. This SDK con- sists of four main components: 1) TOCO-based tools for quantizing ï¬oat TF Lite networks and for converting pre- trained TensorFlow/Caffe/ONNX models (with supported op- erations) to TensorFlow Lite format. 2) An extended list of implemented TF Lite operations and the corresponding interpreter for loading and running converted .tï¬ite models. 3) APU and GPU NNAPI drivers implementing hardware accelerated operations for MediaTekâs NeuroPilot platform; the APU drivers currently only support INT8 ops and GPU drivers â FP16/32 ops. 4) Facilities for proï¬ling and debug- ging neural network-based applications, and an interface for pinning target operations on a speciï¬c hardware accelerator like GPU or APU. The SDK is supporting purely MediaTek NeuroPilot-compatible chipsets (currently Helio P60 only).
There also exists a corresponding stand-alone version of NNAPI drivers supporting ï¬oat and quantized models. Nonetheless, except for the P60 developer platform, only one
4
commercial device with MediaTek P60 chipset (Vivo V11) is known to contain these drivers.
# 2.4 Samsung chipsets
Samsung Electronics is a South Korean electronics company founded in 1969. In 1988, it merged with Samsung Semi- conductor & Communications and obtained its current name. That same year it launched its ï¬rst mobile phone, while its ï¬rst mobile processor (S3C44B0, 66 MHz, Armv4) was presented only in 2000. Later it signiï¬cantly extended its S3Cxxxx and S5Pxxxx SoC series that were widely used in many Win- dows Mobile devices, in the iPhone 2G/3/3GS, and in some early Android smartphones. With the introduction of the S5PC110 chipset in 2010, all Samsung SoCs were rebranded into Exynos and are using this name up to now (Exynos 3-9th generations). Similarly to Huawei and MediaTek, Samsung is primarily using Arm Cortex CPU cores and Mali or PowerVR graphics in its chipsets, though starting from Exynos 8 it is also integrating its in-house developed Mongoose Arm-based CPU cores into high-end SoCs. As for speciï¬c AI chips, Sam- sung introduced in the Exynos 8895 a Vision Processing Unit (VPU) mainly used by its phonesâ cameras. Yet, no drivers, SDKs or additional details were released, making it inacces- sible by third-party applications. Only two Samsung devices (Note 9 and Tab S4) are currently running Android 8.1+ and are using Googleâs default NNAPI drivers utilizing the CPU only. According to some rumors, the next Exynos chipset might include a dedicated AI chip, though this information was not ofï¬cially conï¬rmed by Samsung. The current market share of Samsung chipsets is around 10%.
# 2.5 Google Pixel / Pixel Visual Core
Apart from its Android operating system, Google started, since Android 2.1, to annually release smartphones and tablets under the Google Nexus brand. These were developed in collaboration with external OEMs, among which at different times were HTC, Samsung, LG, Motorola, Huawei and Asus. These devices were featuring the stock Android operating sys- tem running on the latest high-end hardware and were the ï¬rst to receive Android updates (with the possibility of installing beta versions). In 2016 the Nexus product line was discontin- ued and all new smartphones started being produced under the Google Pixel brand, though the aforementioned principles re- mained the same. The majority of these devices were based on Qualcomm chipsets, therefore all information from the above Qualcomm section can be applied to them too. Yet, starting from Pixel 2 (XL), Google has added to its smartphones a ded- icated fully-programmable Pixel Visual Core AI chip (ï¬g. 4), separate from the main Qualcomm SoC and developed in col- laboration with Intel. The chip contains one Arm Cortex-A53 core for handling communications with the main application processor, integrated LPDDR4 RAM and eight custom image processing unit (IPU) cores. Each IPU contains 512 arithmetic logic units with 256 processing elements arranged as a 16Ã16
5
IPU.1O Block IPU IPU Core 2 Core 1 IPU IPU Core 3 Core 4 IPU Core 6 IPU Core 5 IPU Core 8 IPU Coreâ7.
Figure 4: The architecture of the Pixel Visual Core AI Chip.
two-dimensional array and supports a custom VLIW instruc- tion set. The chip provides native support for 8-bit and 16-bit integer computations and delivers a performance of up to 3.2 TFLOPS. Although the Pixel Visual Core is generally compli- ant with TensorFlow (Lite), Google did not release the corre- sponding SDK and NNAPI drivers, thus it cannot be used by external developers for accelerating machine learning-based applications and its present use is mainly limited to Googleâs HDR+ image processing.
# 2.6 Arm Cortex CPUs / Mali GPUs / NN SDK
Currently, all CPU cores integrated into mobile SoCs are based on the Arm architecture, and in devices not support- ing HA for machine learning applications these CPUs are re- sponsible for running all AI algorithms. To speed-up the com- putations in this case, Arm has introduced a number of spe- ciï¬c instruction sets aimed at fast vector- and matrix-based calculations. The most notable technology here is the Arm NEON[34] â an advanced SIMD (single instruction multiple data) architecture extension ï¬rst introduced in Armv7 proces- sors. NEON basically implements DSP-like instructions for concurrent computations and allows the simultaneous execu- tion of up to 16x8-bit, 8x16-bit, 4x32-bit, 2x64-bit integer and 8x16-bit, 4x32-bit, 2x64-bit ï¬oating-point operations. Addi- tionally, Arm has recently presented its new DynamIQ tech- nology that is able to efï¬ciently utilize all cores within a single Arm CPU for parallel computations, and a speciï¬c instruction for calculating dot products in the Armv8.4-A microarchitec- ture. Many of these optimized instructions are integrated in Googleâs default NNAPI drivers, handling the CPU path when no other means for acceleration are available.
Apart from that, Arm has also presented the Arm NN SDK [46] to accelerate machine learning computations on mo- bile SoCs. It provides both the CPU and GPU paths for ML workloads, along with parsers for TensorFlow, Caffe, ONNX and TFLite. On the CPU side it is compatible with any plat- form with Armv7 and above CPUs (assuming NEON avail-
Android NNAPL = ââ--------- y + 4%
Figure 5: System architecture for Android Neural Networks API.
ability), with key low level optimizations for speciï¬c architec- tures. The GPU path will be available on platforms with Arm Mali GPUs, either from the Midgard family (Mali-T6xx and onwards when GPGPU was introduced) or the later Bifrost family (G71 / G51 and onwards), and requires the Mali GPU and OpenCL drivers to be installed. The Arm NN SDK pro- vides support for both FP32 and quantized INT8 networks and can run on Linux or Android platforms in parallel to NNAPI.
# 2.7 Android NNAPI
While there exist a number of proprietary SDKs for accessing DSPs, GPUs or NPUs on different mobile platforms, this was not really solving the problem of using HA for running deep learning algorithms on mobiles, as all these SDKs are pro- viding access only to some particular chipsets and are addi- tionally incompatible with each other. To solve this problem, Google has recently introduced a uniï¬ed Android Neural Net- works API (NNAPI) that is an Android C API designed for running computationally intensive machine and deep learn- ing operations on mobile devices. The system architecture of NNAPI is presented in the ï¬gure 5. Apps typically would not use NNAPI directly, instead they will rely on higher-level machine learning frameworks that in turn could use NNAPI to run hardware-accelerated inference on supported devices. To perform computations using NNAPI, the executed model should be ï¬rst represented as a directed graph that deï¬nes the computations to perform. This graph, combined with the data deï¬ning the model (e.g., the weights and biases passed down from a machine learning framework), forms the model for NNAPI runtime evaluation. Based on the appâs requirements and device hardware, Androidâs neural networks runtime can efï¬ciently distribute the computation workload across avail- able on-device processors, including dedicated neural network chips, GPUs and DSPs. NNAPI is available on all devices run- ning Android 8.1 (API level 27) or higher, but it still requires a specialized vendor driver for accessing the deviceâs hardware. For devices that lack this driver, the NNAPI runtime relies on optimized code to execute requests on the CPU.
6
# 3 Deep Learning Mobile Frameworks
With the widespread use of the Android operating system, a number of popular deep learning frameworks were ported to this platform, including Torch [47], Deeplearning4j [48], TensorFlow (Mobile [33], Lite [49]), Caffe [50], Caffe2 [51], MXNet [52], NNabla [53], etc. Nowadays, the most com- monly used are three of them: Tensorï¬ow Mobile, Tensorï¬ow Lite and Caffe2 that are described below.
# 3.1 TensorFlow Mobile
Tensorï¬ow [54] is an open-source machine learning library for research and development released by Google in 2015. TensorFlowâs programming model can be described as a di- rected graph that deï¬nes the relation between the input and output (target) variables. The graph itself consists of a set of nodes representing various operators applied sequentially to the input data (e.g., convolutional, pooling, LSTM layers, etc.) that are deï¬ning a deep learning model and the corresponding dataï¬ow computation. After the model is trained, it can be exported as a .pb graph and executed on mobile devices using the TensorFlow Mobile library [33], available on Android as well as iOS platforms. A code snippet of the corresponding Java inference interface is presented in ï¬gure 6 (a). Note that there is no need to specify the model architecture in the actual application code: it is already stored along with pre-trained weights in the .pb graph, and developers only need to provide the location of this ï¬le and the input data.
The main advantage of the TensorFlow Mobile library is that it supports the majority of operations available in the stan- dard TF version, therefore almost any TensorFlow model can be converted and executed on a mobile device. Addition- ally, all current SDKs from SoC manufacturers (SNPE [37], HiAI [38], NeuroPilot [39] and ArmNN [46]) are providing (partial) hardware acceleration support for this library. This said, the development of TensorFlow Mobile is coming to a close, as Google announced its gradual deprecation in favor of the TensorFlow Lite library [55]. Particularly, TF Mobile will not get Android NNAPI support, thus without using spe- ciï¬c SDKs all models will still be executed on CPUs only.
# 3.2 TensorFlow Lite
TensorFlow Lite [49] was presented late 2017, as a successor of the TF Mobile library. According to Google, it provides better performance and a smaller binary size due to optimized kernels, pre-fused activations and fewer dependencies. Simi- larly to TF Mobile, a general TensorFlow pre-trained model can be in theory converted to .tï¬ite format and later used for inference on Android or iOS platforms, the correspond- ing Java code snippet is shown in ï¬gure 6 (b). The change of the ï¬le format (.tï¬ite instead of .pb) is caused by the use of a new FlatBuffers serialization library that allows to access saved models without a parsing/unpacking step, often cou- pled with per-object memory allocation. Finally, the new li- brary is compatible with Android NNAPI and can by default
flontLinputsize * inputsize * 217
(a) TensorFlow Mobile
(b) TensorFlow Lite
# Figure 6: Code snippets of TensorFlow Mobile and Lite Android Java interfaces.
run with hardware acceleration on devices with appropriate chipsets and drivers.
# 4 AI Benchmark
It should be noted, however, that TensorFlow Lite is in developer preview at the moment and has a number of sub- stantial limitations. First of all, it supports only a limited set of operators, lacking the full support of, e.g., image re- sizing, batch and instance normalization, LSTM units, some statistical functions or even simple mathematical operations like exponentiation or argmax. Ofï¬cially, Google guarantees only three models to work: the Inception-V3, MobileNet and Smart Reply SSL algorithm, though with some modiï¬cations it is possible to run a number of other deep learning models. A second issue concerns the inference time and the amount of consumed RAM. Since the ByteBuffer format is not sup- ported for the networkâs output, these two values can be up to 2à higher compared to TF Mobile for image-to-image trans- lation problems. Finally, stability is another concern â the current ofï¬cial version might not work ï¬awlessly with a num- ber of models and mobile devices, though some of the issues are already solved in the nightly TF Lite version. While many of these problems will probably be overcome in the upcoming library releases, currently they make the use of TensorFlow Lite complicated for many existing deep learning problems.
# 3.3 Caffe2
Caffe [56] is another open-source deep learning framework, originally developed at UC Berkeley by Yangqing Jia and re- leased in 2013. Its ï¬rst unofï¬cial Android port appeared the next year [50], and in 2017, with Facebookâs release of the successor, Caffe2, its mobile version for iOS and Android platforms was also presented [51]. Caffe2 is using a pro- gramming model similar to TensorFlowâs, with static compu- tational graphs and nodes representing various operators. Ac- cording to the Caffe2 github repository [57], the speed of its mobile library is generally comparable to that of TensorFlow Lite [58] (175ms vs. 158ms for the SqueezeNet model on Snapdragon 821 SoC). Report [59] additionally claims about up to a 6x speed-up when using the OpenGL backend for GPU-based computations, but this feature is not yet available in the current Caffe2 release. Similarly to TensorFlow, accel- eration for Caffe2 models is also supported by all proprietary SDKs (SNPE, HiAI, NeuroPilot and ArmNN), but NNAPI support is still in development and is not fully integrated yet.
The AI Benchmark is an Android application designed to check the performance and the memory limitations associated with running AI and deep learning algorithms on mobile plat- forms. It consists of several computer vision tasks performed by neural networks that are running directly on Android de- vices. The considered networks represent the most popular and commonly used architectures that can be currently de- ployed on smartphones, their detailed description along with technical details of the application are provided below.
# 4.1 Deep Learning Tests
The actual benchmark version [2.0.0] consists of the following nine deep learning tests.
Test 1: Image Recognition. This task represents a con- ventional ImageNet challenge where the goal is to classify In the ï¬rst test, classiï¬cation images into 1000 categories. is done with a resource-efï¬cient MobileNet-V1 [3] architec- ture designed speciï¬cally for mobile and embedded vision applications. The network mainly consists of 1Ã1 convolu- tional (75%) and fully connected (24%) layers, where 95% of the total 569M multiply-add operations happens in the ï¬rst ones. MobileNet achieves 70.6% accuracy on the ImageNet dataset, thus outperforming the larger AlexNet, SqueezeNet and Inception-V1 models. It can be optimized further for mo- bile usage by quantization [60, 61] â converting its weights and activations from FLOAT32 to INT8 8-bit ï¬xed point rep- resentation. Though this leads to an accuracy drop to 69.7%, the speed is simultaneously more than doubled and the size is reduced (by a factor of 4) to 4.3MB. The latter quantized MobileNet-V1 is deployed in the ï¬rst test.
Test 2: Image Recognition. The same ImageNet classiï¬ca- tion problem as above, but in the second test a considerably larger and more accurate Inception-V3 [2] CNN, presented by Google in 2015, is used. This network is comprised of 11 in- ception blocks that mainly consist of 1Ã1, 1Ã3 + 3Ã1, 1Ã7 + 7Ã1 and 3Ã3 convolutional layers. In contrast to MobileNet, Inception-V3 requires about 5,000M multiply-add operations, and the size of the saved CNN is around 96MB. The accu- racy is signiï¬cantly higher too however, â 78% on the same ImageNet dataset and currently the best result among popular networks of size below 100MB.
7
Figure 7: Sample result visualizations displayed to the user in the considered deep learning tests.
Test 3: Face Recognition. The goal of this task is to retrieve the most similar face to a given one from an existing facial database. To do this, a neural network is ï¬rst trained to pro- duce a small feature vector for each facial image that encodes its visual features and is invariant to face scaling, shifts and ro- tations. In this test, we are using the Inception-Resnet-V1 net- work [62], presented by Google in 2017. It was trained to min- imize a triplet loss [16] on the VGGFace2 dataset [63]. After the network is trained, it is applied to a new facial image and produces its feature vector that is then used to retrieve the clos- est vector (and the respective identity) from the database. The size of the input images in this task is 512Ã512 pixels, and the dimensionality of feature vectors is 128. The architecture of the Inception-ResNet-V1 consists of 20 inception blocks and is conceptually similar to the previously discussed Inception- V3 CNN; their size, accuracy on the ImageNet dataset, and computational cost are very similar as well. The biggest bene- ï¬t of this network is its training speed â it needs fewer epochs to achieve the same accuracy than Inception-V3.
We would like to note that the models used in the ï¬rst three tests currently represent a core set of architectures for clas- siï¬cation problems that are suitable for mobile deployment. Networks faster than MobileNet (or its variants) are showing substantially worse accuracy. Models with better precision than Inception-V3 or Inception-ResNet-V1 have sizes exceed- ing 100-150MB [64], which makes their application on mo- bile devices quite complicated due to the resulting size of the APK ï¬le. Quantization of these networks can partially solve the problem, but currently their quantized versions are not yet publicly available.
Test 4: Image Deblurring. This test is aimed at removing Gaussian blur from images, which is done using the SRCNN network [7] â one of the ï¬rst CNNs proposed for the super- resolution problem that is now widely used as a baseline for many image-to-image translation tasks. The architecture of this network is very shallow: three layers with 9Ã9 and 5Ã5 ï¬lters, in total 69,162 parameters and around 64B multiply- add operations for HD-resolution image. As a result, the size of the saved pre-trained network is only 278KB.
Test 5: Image Super-Resolution. The goal of the super- resolution task is to reconstruct the original image from its In this test we consider a downscal- downscaled version.
ing factor of 3, and image restoration is performed by the VDSR [65] network, presented in 2015 shortly after SRCNN. This network features a VGG-based architecture that is com- posed of 19 convolutional layers with 3Ã3 ï¬lters, enough to obtain top quantitative results on many image processing problems. The VDSR network has 665K parameters and re- quires around 600B multiply-add operations for HD images; the size of the network is 2.7MB.
Test 6: Image Super-Resolution. This test solves the same super-resolution problem, but with a downscaling factor of 4 and using the SRGAN [8] model that consists of two neural networks. The ï¬rst one is ResNet previously proposed in [66] that in this implementation consists of 16 residual blocks; this network performs image restoration. The second one is an adversarial CNN â it is trained to distinguish between the real high-resolution images and the images reconstructed by ResNet. During the training, these networks are playing the following game: the adversarial CNN is trying to maximize its classiï¬cation accuracy, while ResNet has the opposite goal of minimizing it, i.e., to provide reconstructed images that are indistinguishable from the target ones. In practice, this leads to much better perceptual results than when using the standard Euclidean norm or content-based losses. After the model is trained, the adversarial CNN is removed and inference is per- formed by ResNet only. The latter network contains 1.5M pa- rameters and the size of the saved pre-trained model is 6.2MB.
Test 7: Image Semantic Segmentation. In contrast to image classiï¬cation, the goal of this task is to get a pixel-level image understanding, meaning that each pixel has to be classiï¬ed as belonging to one of 19 categories: car, pedestrian, road, sky, vegetation, etc. This is done with an ICNet CNN [67], de- signed for fast and accurate segmentation on low-performance devices. The speedup was mainly achieved by downsampling and shrinking feature maps, though the resulting accuracy on the Cityscapes dataset remained high â 70.6% mIoU. ICNet consists of 6.7M parameters and the size of the pre-trained model is 27MB.
Test 8: Image Enhancement. We consider here a general image and photo enhancement problem that encompasses var- ious kinds of improvements including color enhancement, de- noising, sharpening, texture synthesis, etc. In this formulation the problem was ï¬rst addressed in the DPED paper [4], where
8
the authors were trying to turn low-quality smartphone pho- tos into photos as they would be taken with a DSLR camera. This work adopted a ResNet-like architecture with 4 residual blocks and proposed speciï¬c losses targeted at various aspects of image quality. The obtained results demonstrated superior visual quality compared to the results of manual retouching or standard automatic algorithms. The main limitation of the ap- proach was a need of device-speciï¬c training. The network is parameterized by 400K parameters and has a size of 1.6MB.
Test 9: Memory Limitations. While previous tests were mainly evaluating the runtime of various deep learning mod- els, the goal of the last test is to check RAM resources that can be allocated for running neural networks. In this test we are using the same SRCNN model as in the fourth task (deblur- ring), while gradually increasing the size of the input image until we run into a memory exception, meaning that the de- vice does not have enough RAM to process larger inputs. The SRCNN model was chosen here since it consumes an amount of RAM similar to other models (for images of the same res- olution), while its runtime is much faster and thus the test re- quires less time to ï¬nish. It is useful to note that the memory consumed by a network is primarily determined by the dimen- sions of its largest (convolutional) layer, which in the case of SRCNN is the ï¬rst layer with 64 convolutional ï¬lters.
These nine tests represent the current deep learning core of the benchmark (ï¬g. 7); its technical components and imple- mentation details are discussed below.
# 4.2 Technical Description
The current release of the AI Benchmark (2.0.0) is using the TensorFlow Lite [49] library as a backend for running all em- bedded deep learning models. Though the previous release was originally developed based on TF Mobile [33], its lack of NNAPI support imposed critical constraints on using hard- ware acceleration resources, and thus was later deprecated. The actual benchmark version was compiled with the latest TF Lite nightly build where some issues present in the stable TensorFlow versions were already solved.
The benchmark consists of nine deep learning tests de- scribed in the previous section. These can be generally di- vided into two groups. The ï¬rst group includes tests 1, 2, 4, 5, 8, 9. Those use CNN models fully supported by NNAPI (i.e., all underlying TensorFlow operations are implemented in NNAPI introduced in Android 8.1), and therefore they can run with hardware acceleration on devices with appropriate chipsets and drivers. NNAPI is always enabled in these tests to avoid the situation when the system fails to automatically detect the presence of AI accelerators and performs all com- putations on CPU. It should also be mentioned that the ï¬rst test runs a quantized CNN model and is used to check the per- formance of accelerated INT8-based computations.
The second group contains the other three tests, i.e. 3, 6 and 7, where neural networks are always running entirely on CPU. They contain at least one TF operation that is not yet present in NNAPI, and using partial acceleration for supported
9
Figure 8: Benchmark results displayed after the end of the tests.
ops only is currently not possible. These tests were added to evaluate the speed of CPU-based execution and the per- formance of the Arm NEON instruction set [34], present in all current Arm processors and designed speciï¬cally for high- performance computing and image processing. In cases where NNAPI drivers are missing, all computations in the tests from the ï¬rst group also fall back on CPU and are using the same instruction set.
The resolution of input images used in the tests was chosen so that all devices with at least 2GB of RAM and the majority of devices with 1GB of RAM should have enough memory to run all tests. The test is considered to be passed when the network was able to successfully process at least one image within the allocated time. In particular, during the internal testing all devices with 1GB of RAM (e.g., Samsung Galaxy S2/S3 mini, HTC One X, FiiO X7, etc.) were able to run all models after a fresh restart.
Each of the ï¬rst eight tests has a predeï¬ned time limit: 25, 40, 40, 30, 40, 50, 20 and 25 seconds, respectively. The last test does not have a time limit â images of increasing res- olution are processed until the device runs out of memory. The running time for each test is computed as an average over the set of images processed within the speciï¬ed time. When more than two images are handled, the processing time for the ï¬rst two ones is not considered as it might comprise ad- ditional time expenses associated with network initialization and memory allocation. The scores for the ï¬rst eight tests are computed inversely proportional to the corresponding average runtimes; the score for the memory test is proportionate to the maximum image size that the network was able to process. The ï¬nal AI score (ï¬g. 8) is calculated as a weighted sum of the scores obtained in these nine tests and represents the ag- gregated AI performance of a particular device. The weight coefï¬cients for these tests were calibrated based on the results obtained on Google Pixel 2 running Android P with disabled NNAPI in all tests.
Test 1 2 3 4 5 6 7 8 Task Architecure Resolution, px Parameters Size, MB Quantized NNAPI support Consumed RAM Classiï¬cation MobileNet 224Ã224 4.2M 4.3 yes yes 20MB Classiï¬cation Inception-V3 346Ã346 27.1M 96 no yes 170MB Face Recognition Inc-ResNet-V1 512Ã512 22.8M 92 no no 240MB Deblurring SRCNN 300Ã300 69K 0.3 no yes 290MB Super-Resolution VGG-19 192Ã192 665K 2.7 no yes 110MB Super-Resolution SRGAN (ResNet-16) 512Ã512 1.5M 6.2 no no 310MB Segmentation ICNet 384Ã576 6.7M 27 no no 60MB Enhancement DPED (ResNet-4) 128Ã192 400K 1.6 no yes 120MB
Table 1: Summarized characteristics of deep learning models used in the AI Benchmark
# 5 Benchmark Results
In this section, we present quantitative benchmark results ob- tained from over 10,000 mobile devices tested in the wild. The scores of each device/SoC are presented in tables 2 and 3 that are showing average processing time per one image for each test/network, maximum possible image resolution that can be processed by SRCNN model and the total aggregated AI score. The scores were calculated by averaging all obtained results of the corresponding devices/SoCs after removing the outliers. The description of the results is provided below.
range as it is primarily deï¬ned by the dimensions of the largest convolutional layer. Finally, the SRCNN model is much faster than both the VGG-19 and DPED networks, and the amount of consumed memory here is also quite similar due to the afore- mentioned reason. The size of the highest image resolution that can be processed by SRCNN is growing linearly with the amount of total (free) RAM of the device, though due to a bug in NNAPI this does not hold true for devices with An- droid 8.1+ as they are generally consuming much more RAM. We should also note that all previous conclusions are based on the results from devices not supporting hardware acceleration, since it might signiï¬cantly alter the results in tests 1, 2, 4, 5, 8 and 9 that can run with NNAPI on dedicated hardware.
# 5.1 Neural Networks
Table 1 summarizes the details of all deep learning architec- tures included in the benchmark. The results in tables 2 and 3 are quite consistent with the theoretical expectations of the relative processing time and memory consumed by the net- works. In particular, the quantized MobileNet CNN from the ï¬rst test requires about 3-4 times less RAM than the same ï¬oat model, and its speed on CPU is generally an order of mag- nitude faster compared to Inception-V3 CNN. The third face recognition test is dealing with images with a twice larger area and exhibits around 2x longer inference times than the second one, meaning that the performances of Inception-ResNet-V1 and Inception-V3 are quite comparable. In image-to-image processing tasks, the most efï¬cient model is ICNet since the computations there are mainly done on the downscaled im- ages/feature maps. The same approach is used in the SRGAN model where the original image is downsampled to 128Ã128 pixels and processed in this resolution till the last two layers that are performing its upscaling to the original size. There- fore, despite using 12 residual blocks, the processing time here still remains reasonable, though the required RAM is quite high due to the downscaling/upscaling layers working with 512Ã512px images. The DPED network from the image en- hancement task contains 4 residual blocks and is processing images without downsampling, therefore the processing time here should be roughly 128Ã128Ã12 128Ã192Ã4 = 2 times faster than in the previous case, as seen in practice. The VGG-19 model from the ï¬fth test is the most resource-consuming among all con- sidered CNNs â since it consists of 19 convolutional layers, it should be theoretically around 19 12 = 1.6 times slower than the DPED network (the size of their convolutional layers is similar), though the RAM consumption should lie in the same
# 5.2 Smartphones and mobile chipsets
The results in tables 2 and 3 show the performance of sev- eral selected Android smartphones and chipsets obtained with the AI Benchmark; the actual full list is available on the project website: http://ai-benchmark.com. Before going into details, we would ï¬rst like to mention several An- droid NNAPI bugs that are currently affecting some results presented in the tables. First of all, due to a bug in Android 8.1 with default NNAPI drivers, the performance of (convo- lutional) operations is twice as slow as when these drivers are disabled. Therefore, when calculating the average run- time for different SoCs presented in table 3, we omitted the results from the phones with this issue. While Huawei phones with Android 8.1 and the Kirin 970 chipset were using their own customized NNAPI implementation, it still suffered from a different bug â after a long standby the clock speed of Kirinâs NPU drops and does not return back until the phone is rebooted. The results in both tables represent the scores obtained from Huawei devices that were recently restarted. Finally, the RAM consumption on devices using Android NNAPI might be up to 2à higher in image-to-image process- ing tests due to the ByteBuffer issue described in Section 3.2; its consequences can be observed in the last memory test.
Below we summarize the results for each SoC manufacturer and describe the performance of the corresponding chipsets present on the market.
⢠Qualcomm. Snapdragon chipsets can now provide hard- ware acceleration for quantized neural networks (when Qual- commâs NNAPI drivers are present), while ï¬oat models are not yet supported by existing commercial devices. The ï¬rst
10
# Model
SoC RAM Android Test 1, ms Test 2, ms Test 3, ms Test 4, ms Test 5, ms Test 6, ms Test 7, ms Test 8, ms Test 9, 100 px AI-Score HiSilicon Kirin 970 Snapdragon 845/DSP Snapdragon 845 Exynos 9810 Octa Exynos 8895 Octa Snapdragon 835 Snapdragon 821 Snapdragon 820 Snapdragon 835 Snapdragon 821 Snapdragon 660 Snapdragon 636 Nvidia Tegra X1 HiSilicon Kirin 960 Snapdragon 630 Mediatek Helio X30 Snapdragon 625 Snapdragon 650 Snapdragon 450 Snapdragon 810 Mediatek Helio X20 Mediatek Helio P10 Snapdragon 435 Exynos 7870 Octa Snapdragon 800 Intel Atom Z3580 Mediatek MT6737 Exynos 4412 Quad Spreadtrum SC9832 TI OMAP 4460 6GB 8GB 6GB 6GB 4GB 6GB 6GB 6GB 4GB 4GB 4GB 4GB 3GB 6GB 4GB 6GB 4GB 3GB 3GB 3GB 4GB 3GB 3GB 3GB 2GB 2GB 1GB 1GB 1GB 1GB 8.1 9.0 8.0 8.0 8.0 8.0 8.0 8.0 9.0 9.0 9.0 8.0 8.0 8.0 8.0 7.0 7.1 8.0 7.1 8.0 7.1 6.0 7.1 7.0 4.4 5.0 7.0 4.3 7.0 4.1 144 24 60 148 134 85 106 115 143 116 136 110 105 121 170 327 160 111 188 106 183 239 246 278 332 1507 414 553 538 482 130 892 620 1208 731 823 776 909 1264 867 944 1055 1064 1720 1653 3357 1695 1804 1753 1962 2217 2061 2640 2092 2182 2433 3394 4640 5103 7613 2634 1365 1433 1572 1512 1894 1937 2099 1953 1838 2132 2405 2585 3163 3424 4550 3525 3566 3707 4113 4981 4303 5428 4648 5080 6188 7761 10321 12618 25105 279 928 1229 958 1197 1513 1707 1747 1168 1287 1320 1910 2104 1943 2638 2215 2780 2469 3020 3389 3906 3563 4155 3881 5732 4337 6356 7587 7594 12667 241 1999 2792 1672 2519 3568 3624 3683 2104 2489 2519 4271 4546 4791 5497 4971 6150 5789 6144 8155 9245 7537 8575 8495 9625 12878 14760 17187 19174 30743 4390 2885 3542 2430 3039 4302 4427 4363 4219 4125 4641 4877 5036 5719 6338 5502 7164 6846 7144 9805 10551 10116 9979 9644 12375 15128 16721 21904 22758 35417 779 303 329 612 428 381 365 313 394 365 475 515 429 1082 685 1666 780 835 751 930 936 989 1229 941 1299 1176 1668 2059 2094 4015 193 1244 1485 1230 1422 1944 1982 2030 1360 1568 1509 2330 2439 2764 3166 2651 3628 3527 3580 4733 4870 4368 5030 4699 5948 6947 7856 9291 9935 18836 6 5 11 8 6 11 10 11 4 4 5 7 6 9 9 10 9 6 8 7 9 7 8 3 3 3 3 2 2 2 6519 2053 1708 1628 1413 1384 1302 1300 1293 1260 1183 1028 980 917 799 785 776 738 706 658 641 561 537 455 387 318 283 216 202 140
# Model
Huawei P20 Pro OnePlus 6 HTC U12+ Samsung Galaxy S9+ Samsung Galaxy S8 Motorola Z2 Force OnePlus 3T Lenovo ZUK Z2 Pro Google Pixel 2 Google Pixel Nokia 7 plus Asus Zenfone 5 Google Pixel C Huawei Honor 8 Pro Sony XA2 Ultra Meizu Pro 7 Plus BlackBerry Keyone Sony X Compact Xiaomi Redmi 5 Huawei Nexus 6P Meizu MX6 HTC U Play Xiaomi Redmi 4X Samsung Galaxy J7 LG Nexus 5 Asus Zenfone 2 Motorola Moto C Samsung Galaxy S3 Fly Nimbus 15 Huawei Ascend P1
Table 2: Benchmark results for several Android devices, a full list is available at: http://ai-benchmark.com/ranking
smartphone to contain these drivers is the OnePlus 6 with Snapdragon 845 SoC and the latest Android P ï¬rmware. It can run the quantized MobileNet model under 25ms on the Hexagon DSP which is considerably faster than the corre- sponding CPU speed (60-65ms). A similar performance can be expected from Snapdragon 670/710 chipsets containing the same Hexagon 685 DSP; Snapdragon 835 with Hexagon 682 and Snapdragon 636/660/820/821 with Hexagon 680 from the same Qualcomm 68x DSP family should come with a some- what longer runtime.
While there exist no ofï¬cial tests of Qualcommâs NNAPI drivers supporting acceleration for ï¬oat models, the Snap- dragon 625 SoC, with (presumably) a beta version of these drivers using the integrated Adreno 506 GPU, can provide up to 2x speed-up compared to a CPU-based execution. While the performance of Adreno 506 is around 130 GFLOPs, this means that Adreno 630 (727 GFLOPs) present in Snapdragon 845 SoC can potentially provide a speed-up by a factor of 3-4, though the exact number might vary a lot.
As to CPU performance measured in relation to matrix/deep learning computations, currently the most powerful Qual- comm core is the Kryo 385 Gold present in the Snapdragon 845 SoC. It exhibits around a 30% improvement over the Kryo 280 cores from Snapdragon 835. Interestingly, the latter ones demonstrate a similar or slightly degraded performance (per GHz) compared to the ï¬rst Kryo generation in the Snapdragon 820 SoC with a custom non-Cortex based design, that despite having only 4 cores is still slightly faster than the Snapdragon 636/660 with newer Kryo 260 cores. The previous Krait mi- croarchitecture represented by the Snapdragon 800/801 from 2013 is still showing competitive results, outperforming the
majority of SoCs from the 2xx, 4xx and 6xx families or even subsequently presented 810 and 808 chipsets based on the Cortex-A57 microarchitecture. We also note that customized Qualcomm CPU cores are generally showing a better perfor- mance than the default Arm Cortex architectures.
⢠Huawei. Though the CPU performance of HiSilicon SoCs is not as impressive as in Qualcommâs case, its NPU integrated into the Kirin 970 provides a dramatic speed-up for ï¬oat deep learning models. In particular, depending on the task it demonstrates 7-21 times faster inference compared to its CPU and 4-7 times better performance compared to the overall best CPU results. In tests 2, 4, 5, 8 that are supporting hardware acceleration, it requires on average 132, 274, 240 and 193 milliseconds to process one image, respectively. The only main weakness of this NPU is the lack of acceleration support for quantized models â in the ï¬rst test all computa- tions are running on CPU with an average processing time of 160ms per image, which is signiï¬cantly higher than the cor- responding results of the Snapdragon 845 with enabled DSP. Though this problem can be solved by implementing a quan- tized mode in Kirinâs NNAPI drivers, at the present time this functionality is still under development.
Regarding other HiSilicon chipsets, they are now not pro- viding acceleration for AI apps, and thus all computations are running on CPUs only. Since all HiSiliconâs SoCs are based on standard Arm Cortex cores, their performance is also quite similar to other chipsets with the same Cortex architectures.
⢠MediaTek. The Helio P60 is the ï¬rst chipset to get NNAPI drivers for accelerating both ï¬oat and quantized mod- els. Quantized networks are running on its integrated APU that is showing a performance similar to that of the Hexagon
11
Cores Test 1, ms Test 2, ms Test 3, ms Test 4, ms Test 5, ms Test 6, ms Test 7, ms CPU (4x2.4 GHz A73 & 4x1.8 GHz A53) + NPU CPU (4x A73 + 4x A53) + GPU (Mali-G72 MP3) + APU 8 (4x2.7 GHz Mongoose M3 & 4x1.8 GHz Cortex-A55) 8 (4x2.8GHz Kryo 385 Gold & 4x1.8GHz Kryo 385 Silver) 8 (4x2.3 GHz Mongoose M2 & 4x1.7 GHz Cortex-A53) 8 (4x2.45 GHz Kryo 280 & 4x1.9 GHz Kryo 280) 4 (2x2.15 GHz Kryo & 2x1.6 GHz Kryo) 4 (4x1.9 GHz Maxwell) 8 (4x2.2 GHz Kryo 260 & 4x1.8 GHz Kryo 260) 8 (8x1.8 GHz Kryo 260) 8 (4x2.3 GHz Mongoose & 4x1.6 GHz Cortex-A53) 8 (4x2.5 GHz Cortex-A72 & 4x1.8 GHz Cortex A53) 160 21 149 65 135 97 119 102 115 110 139 136 132 439 1247 661 742 855 839 925 1025 1055 1810 1383 2586 2230 1580 1547 1548 2027 2074 2328 2299 2405 3314 2932 274 846 956 1384 1213 1648 1804 1811 1806 1910 1536 2143 240 1419 1661 3108 2576 3771 4015 3824 4072 4271 3594 5132 4848 4499 2450 3744 3181 4375 5055 4437 4695 4877 4717 6202 742 394 613 362 451 439 410 384 547 515 937 751
SoC
HiSilicon Kirin 970 Mediatek Helio P60 Dev Exynos 9810 Octa Snapdragon 845 Exynos 8895 Octa Snapdragon 835 Snapdragon 820 Nvidia Tegra X1 Snapdragon 660 Snapdragon 636 Exynos 8890 Octa HiSilicon Kirin 955
Table 3: Benchmark results for several SoCs, the full list available at: http://ai-benchmark.com/ranking_processors
685 DSP â 21ms for processing one image in the ï¬rst test. Float networks are executed on the Mali-G72 MP3 GPU that provides about 2-5 times acceleration compared to its CPU and 1.5-2x faster runtime than the overall best CPU results. We should mention that all these numbers were obtained on MediaTekâs developer phones, while the only Helio P60-based actual device having NNAPI drivers (Vivo V11) is showing slightly worse results.
deprecated by their manufacturers (e.g., Intel Atom, Nvidia Tegra, TI OMAP). Especially interesting in the context of AI and deep learning are Nvidia Tegra platforms that are support- ing CUDA [68] and cuDNN [69] GPU-accelerated libraries of primitives for deep neural networks. Unfortunately, no new devices using Nvidia SoCs were released since 2015, and the existing ones are already deprecated and will not get (NNAPI) drivers for accelerating machine learning mobile frameworks.
Other MediaTek chipsets are currently not supporting ac- celeration for AI applications. They run on CPU cores with standard Arm Cortex designs.
# 6 Discussion
⢠Samsung. At the time of writing, neither of Samsungâs SoCs can provide any acceleration for third-party AI apps: all devices with these chipsets are using default NNAPI drivers. Since the latest Exynos 9810 SoC has the same Mali-G72 graphics as in the MediaTek P60 chipset (but with 12 instead of 3 cores), we can expect an additional speed-up factor of 3-4 for ï¬oat neural networks if the Arm NN library was integrated by Samsung into its NNAPI drivers. Since all recent Samsung Exynos processors are using Arm Mali GPUs, the same logic can be applied to them just the same.
Software and hardware support for machine learning on mo- bile devices is now evolving extremely fast, with various mile- stone releases announced each several months. While they are certainly bringing new possibilities and higher levels of per- formance, the current lack of standardized requirements and publicly available speciï¬cations does not always allow for an objective assessment of their real advantages and limitations. Below we would like to summarize our experience of work- ing with mobile machine learning frameworks and chipsets providing hardware acceleration via NNAPI drivers.
Depending on the task, Samsungâs Mongoose M3 CPU cores can demonstrate signiï¬cantly better or worse perfor- mance compared to custom Kryo 385 cores in the Snapdragon 845, but their overall performance can be considered quite comparable. The Mongoose M2 microarchitecture shows a signiï¬cant 50% boost over the ï¬rst M1 version, while the per- formance of the second (M2) and third (M3) generations is rather similar. One notable issue with the latest Exynos 8895 and 9810 SoCs is related to their integrated power manage- ment system responsible for adjusting the CPU performance. It is causing very unstable results on the majority of devices: in particular, several subsequent benchmark runs (with an in- terval of 10 minutes, âhigh performanceâ mode) on the same Galaxy S9 phone demonstrated up to 50% variation of the total score, while the results obtained from different devices showed an even larger variation (e.g., 200-800 ms in the sev- enth test). Currently, there is no way to have external control over different performance modes as they are selected auto- matically based on the integrated logic.
Currently, the easiest way to start using deep learning on Android is to go for a mature and relatively stable TensorFlow It was introduced more than two years Mobile framework. ago, and all major issues are already solved, while plenty of information on smaller problems is available on various spe- cialized websites. If hardware acceleration is one of the crit- ical problems, TensorFlow Lite can still be an option, but we would not recommend using it now for anything more compli- cated than image classiï¬cation with MobileNet or Inception CNNs as there still might be occasional problems with non- standard network architectures on some mobile platforms. We can also mention that migrating from TF Mobile to Lite is relatively easy since they are using very similar Android pro- gramming interfaces (the biggest difference will be in con- verting pre-trained models to .tï¬ite instead of .pb format), and thus can be done later when TF Lite gets better support. If the application is targeted at some speciï¬c device or SoC, the cor- responding proprietary SDK can also be used, though in this case the development might not be so easy and convenient. Regarding Caffe2 Mobile and other less widespread frame-
⢠Others. We have obtained results from a number of other chipsets that are either not widely used (e.g., Spreadtrum) or
12
works, their communities are now very small, which means that almost no tutorials and problem descriptions are available on the internet, thus all appearing problems might be primar- ily solved only by creating new issues in the corresponding github repositories.
Hardware acceleration for AI algorithms on Android de- vices is now an even more controversial topic. At the time of writing, the fastest runtime for conventional ï¬oat neural net- works is shown by Huawei devices with Kirin 970 chipsets that at the time of their presentation were signiï¬cantly ahead of the market. Yet, we prefer to stay neutral regarding the fu- ture perspectives, as our analysis has demonstrated that almost all SoC manufacturers have the potential to achieve similar re- sults in their new chipsets. The real situation will become clear at the beginning of the next year when the ï¬rst devices with the Kirin 980, the MediaTek P80 and the next Qualcomm and Samsung Exynos premium SoCs will appear on the market. Besides the performance, we would also like to look at their power efï¬ciency since a signiï¬cant battery drain might restrict their usage to a few standard in-camera processing techniques. The last topic that we want to address here is the use of quantized networks. Their current applicability is rather lim- ited, as there are still no standard and reliable tools for quan- tizing networks trained even for image classiï¬cation, not to mention more complex tasks. At the moment we can expect two different ways of development in this area. In the ï¬rst case, the problem of quantization will be largely solved at some point, and the majority of neural networks deployed on smartphones will be quantized. In the second case, speciï¬c NPUs supporting ï¬oat networks will become even more pow- erful and efï¬cient, and the need for quantization will disappear as this happened to many optimized solutions developed due to the lack of computational power in the past. Since we can- not easily predict the future outcome, we will still be using a mixture of quantized and ï¬oat models in the benchmark with predominance of the second ones, though in the future releases the corresponding ratio might be signiï¬cantly altered.
Since currently there are still many important open ques- tions that might be answered only with new major software and hardware releases related to machine learning frameworks and new dedicated chipsets, we are planning to publish reg- ular benchmark reports describing the actual state of AI ac- celeration on mobile devices, as well as changes in the ma- chine learning ï¬eld and the corresponding adjustments made in the benchmark to reï¬ect them. The latest results obtained with the AI Benchmark and the description of the actual tests will also be updated monthly on the project website: http: //ai-benchmark.com. Additionally, in case of any tech- nical problems or some additional questions you can always contact the ï¬rst two authors of this paper.
# 7 Conclusions
In this paper, we discussed the latest achievements in the area of machine learning and AI in the Android ecosystem. First, we presented an overview of all currently existing mobile
13
chipsets that can be potentially used for accelerating the ex- ecution of neural networks on smartphones and other portable devices, and described popular mobile frameworks for run- ning AI algorithms on mobile devices. We presented the AI Benchmark that measures different performance aspects as- sociated with running deep neural networks on smartphones and other Android devices, and discussed the real-world re- sults obtained with this benchmark from over 10,000 mobile devices and more than 50 different mobile SoCs. Finally, we discussed future perspectives of software and hardware devel- opment related to this area and gave our recommendations re- garding the current deployment of deep learning models on Android devices.
# References
[1] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classiï¬cation with deep convolutional neural networks. In: Advances in neural information processing systems. (2012) 1097â1105 1
[2] Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2016) 2818â2826 1, 7
[3] Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H.: Mobilenets: Efï¬cient convo- lutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017) 1, 7
[4] Ignatov, A., Kobyshev, N., Timofte, R., Vanhoey, K., Van Gool, L.: Dslr-quality photos on mobile devices with deep convolutional net- works. In: the IEEE Int. Conf. on Computer Vision (ICCV). (2017) 1, 8
[5] Ignatov, A., Kobyshev, N., Timofte, R., Vanhoey, K., Van Gool, L.: Wespe: weakly supervised photo enhancer for digital cameras. arXiv preprint arXiv:1709.01118 (2017) 1
[6] Ignatov, A., Timofte, R., et al.: Pirm challenge on perceptual image In: European Conference on enhancement on smartphones: Report. Computer Vision Workshops. (2018) 1
Image super-resolution using deep convolutional networks. IEEE transactions on pattern analysis and machine intelligence 38(2) (2016) 295â307 1, 8
[8] Ledig, C., Theis, L., Husz´ar, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A.P., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR. Volume 2. (2017) 4 1, 8
[9] Timofte, R., Gu, S., Wu, J., Van Gool, L., et al.: Ntire 2018 chal- lenge on single image super-resolution: Methods and results. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. (June 2018) 1
[10] Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., Ng, A.Y.: Read- ing digits in natural images with unsupervised feature learning. In: NIPS workshop on deep learning and unsupervised feature learning. Volume 2011. (2011) 5 1
IEEE Transactions on Pattern Analysis and Machine Intelligence 37(9) (2015) 1834â1848 1
[12] Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A., Fischer, I., Wojna, Z., Song, Y., Guadarrama, S., et al.: Speed/accuracy trade-offs for modern convolutional object detectors. In: IEEE CVPR. Volume 4. (2017) 1
[13] Li, L.J., Socher, R., Fei-Fei, L.: Towards total scene understanding: Classiï¬cation, annotation and segmentation in an automatic framework. In: Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, IEEE (2009) 2036â2043 1
[14] Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benen- son, R., Franke, U., Roth, S., Schiele, B.: The cityscapes dataset for semantic urban scene understanding. In: Proceedings of the IEEE con- ference on computer vision and pattern recognition. (2016) 3213â3223 1
[15] Li, H., Lin, Z., Shen, X., Brandt, J., Hua, G.: A convolutional neural network cascade for face detection. In: Proceedings of the IEEE Confer- ence on Computer Vision and Pattern Recognition. (2015) 5325â5334 1
[16] Schroff, F., Kalenichenko, D., Philbin, J.: Facenet: A uniï¬ed embed- ding for face recognition and clustering. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2015) 815â823 1, 8
[17] Zhang, X., Sugano, Y., Fritz, M., Bulling, A.: Appearance-based gaze In: Proceedings of the IEEE Conference on estimation in the wild. Computer Vision and Pattern Recognition. (2015) 4511â4520 1
[18] Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Advances in neural information processing systems. (2014) 3104â3112 1
[19] Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014) 1
[20] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efï¬cient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) 1
[21] Hu, B., Lu, Z., Li, H., Chen, Q.: Convolutional neural network architec- tures for matching natural language sentences. In: Advances in neural information processing systems. (2014) 2042â2050 1
[22] Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C.D., Ng, A., Potts, C.: Recursive deep models for semantic compositionality over a sentiment treebank. In: Proceedings of the 2013 conference on empiri- cal methods in natural language processing. (2013) 1631â1642 1
[23] Severyn, A., Moschitti, A.: Twitter sentiment analysis with deep con- volutional neural networks. In: Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM (2015) 959â962 1
[24] Serban, I.V., Sankar, C., Germain, M., Zhang, S., Lin, Z., Subramanian, S., Kim, T., Pieper, M., Chandar, S., Ke, N.R., et al.: A deep reinforce- ment learning chatbot. arXiv preprint arXiv:1709.02349 (2017) 1
[25] Kwapisz, J.R., Weiss, G.M., Moore, S.A.: Activity recognition us- ing cell phone accelerometers. ACM SigKDD Explorations Newsletter 12(2) (2011) 74â82 1
[26] Ignatov, A.: Real-time human activity recognition from accelerometer data using convolutional neural networks. Applied Soft Computing 62 (2018) 915â922 1
[27] Ord´oËnez, F.J., Roggen, D.: Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition. Sensors 16(1) (2016) 115 1
[28] Sathyanarayana, A., Joty, S., Fernandez-Luque, L., Oï¬i, F., Srivastava, J., Elmagarmid, A., Arora, T., Taheri, S.: Sleep quality prediction from wearable data using deep learning. JMIR mHealth and uHealth 4(4) (2016) 1
[29] Lane, N.D., Georgiev, P.: Can deep learning revolutionize mobile sens- In: Proceedings of the 16th International Workshop on Mobile ing? Computing Systems and Applications, ACM (2015) 117â122 1
[30] Codrescu, L., Anderson, W., Venkumanhanti, S., Zeng, M., Plondke, E., Koob, C., Ingle, A., Tabony, C., Maule, R.: Hexagon dsp: An ar- chitecture optimized for mobile multimedia and communications. IEEE Micro (2) (2014) 34â43 1, 3
[31] Latiï¬ Oskouei, S.S., Golestani, H., Hashemi, M., Ghiasi, S.: Cn- ndroid: Gpu-accelerated execution of trained deep convolutional neural networks on android. In: Proceedings of the 2016 ACM on Multimedia Conference, ACM (2016) 1201â1205 1
[32] Guihot, H.: Renderscript. In: Pro Android Apps Performance Opti- mization. Springer (2012) 231â263 2
14
[33] TensorFlow-Mobile: https://www.tensorï¬ow.org/mobile/mobile intro. Retrieved on: 30.09.2018 2, 6, 9
[34] Reddy, V.G.: Neon technology introduction. ARM Corporation (2008) 2, 5, 9
[35] Motamedi, M., Fong, D., Ghiasi, S.: Cappuccino: Efï¬cient cnn infer- ence software synthesis for mobile system-on-chips. IEEE Embedded Systems Letters (2018) 2
[36] Alzantot, M., Wang, Y., Ren, Z., Srivastava, M.B.: Rstensorï¬ow: Gpu enabled tensorï¬ow for deep learning on commodity android devices. In: Proceedings of the 1st International Workshop on Deep Learning for Mobile Systems and Applications, ACM (2017) 7â12 2
[37] SNPE: https://developer.qualcomm.com/docs/snpe/overview.html. Re- trieved on: 30.09.2018 2, 6
[38] HiAI: https://developer.huawei.com/consumer/en/devservice/doc/2020315. Retrieved on: 30.09.2018 2, 4, 6
[39] Lee, Y.L., Tsung, P.K., Wu, M.: Techology trend of edge ai. In: VLSI Design, Automation and Test (VLSI-DAT), 2018 International Sympo- sium on, IEEE (2018) 1â2 2, 4, 6
[40] NNAPI: https://developer.android.com/ndk/guides/neuralnetworks/. Retrieved on: 30.09.2018 2
[41] Chance, R.: Devices overview. Digital Signal Processing: Principles, Devices and Applications 42 (1990) 4 2
[42] Hesseldahl, A.: The legacy of dsp1. Electronic News 45(45) (1999) 44â44 2
In: Digital Signal Processing Technology. Volume 2750., International So- ciety for Optics and Photonics (1996) 2â12 2
[44] Hays, W.P.: Dsps: Back to the future. Queue 2(1) (2004) 42 2
[45] LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hub- bard, W., Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural computation 1(4) (1989) 541â551 3
[46] ArmNN: https://github.com/arm-software/armnn. Retrieved on: 30.09.2018 5, 6
[47] Torch-Android: https://github.com/soumith/torch-android. Retrieved on: 30.09.2018 6
[48] Deeplearning4j: https://deeplearning4j.org/docs/latest/deeplearning4j- android. Retrieved on: 30.09.2018 6
[49] TensorFlow-Lite: https://www.tensorï¬ow.org/mobile/tï¬ite/. Retrieved on: 30.09.2018 6, 9
[50] Caffe-Android: https://github.com/sh1r0/caffe-android-lib. Retrieved on: 30.09.2018 6, 7
[51] Caffe2-Android: https://caffe2.ai/docs/mobile-integration.html. Re- trieved on: 30.09.2018 6, 7
[52] MXNet: https://github.com/leliana/whatsthis. Retrieved on: 30.09.2018 6
[53] NNabla: https://github.com/sony/nnabla. Retrieved on: 30.09.2018 6
[54] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., et al.: Tensorï¬ow: a system for large-scale machine learning. In: OSDI. Volume 16. (2016) 265â283 6
[55] TensorFlow-Mobile/Lite: trieved on: 30.09.2018 6 https://www.tensorï¬ow.org/mobile/. Re-
[56] Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. In: Proceedings of the 22nd ACM international conference on Multimedia, ACM (2014) 675â678 7
[57] Caffe2-AICamera-Demo: trieved on: 30.09.2018 7 https://github.com/caffe2/aicamera. Re-
[58] TFLite-Benchmark: https://www.tensorï¬ow.org/mobile/tï¬ite/performance. Retrieved on: 30.09.2018 7
[59] Caffe2-Presentation: https://www.slideshare.net/kstan2/caffe2-on- android. Retrieved on: 30.09.2018 7
[60] Jacob, B., Kligys, S., Chen, B., Zhu, M., Tang, M., Howard, A., Adam, H., Kalenichenko, D.: Quantization and training of neural net- arXiv preprint works for efï¬cient integer-arithmetic-only inference. arXiv:1712.05877 (2017) 7
[61] Sheng, T., Feng, C., Zhuo, S., Zhang, X., Shen, L., Aleksic, M.: A quantization-friendly separable convolution for mobilenets. arXiv preprint arXiv:1803.08607 (2018) 7
Inception-v4, inception-resnet and the impact of residual connections on learning. In: AAAI. Volume 4. (2017) 12 8
[63] FaceNet-github: https://github.com/davidsandberg/facenet. Retrieved on: 30.09.2018 8
[64] TF-Slim: https://github.com/tensorï¬ow/models/tree/master/research/slim. Retrieved on: 30.09.2018 8
[65] Kim, J., Kwon Lee, J., Mu Lee, K.: Accurate image super-resolution using very deep convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2016) 1646â 1654 8
[66] Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: European Conference on Computer Vision, Springer (2016) 694â711 8
[67] Zhao, H., Qi, X., Shen, X., Shi, J., Jia, J.: semantic segmentation on high-resolution images. arXiv:1704.08545 (2017) 8 Icnet for real-time arXiv preprint
[68] Kirk, D., et al.: Nvidia cuda software and gpu parallel computing archi- tecture. In: ISMM. Volume 7. (2007) 103â104 12
[69] Chetlur, S., Woolley, C., Vandermersch, P., Cohen, J., Tran, J., Catan- zaro, B., Shelhamer, E.: cudnn: Efï¬cient primitives for deep learning. arXiv preprint arXiv:1410.0759 (2014) 12
15 | {
"id": "1704.08545"
} |
1810.00861 | ProxQuant: Quantized Neural Networks via Proximal Operators | To make deep neural networks feasible in resource-constrained environments
(such as mobile devices), it is beneficial to quantize models by using
low-precision weights. One common technique for quantizing neural networks is
the straight-through gradient method, which enables back-propagation through
the quantization mapping. Despite its empirical success, little is understood
about why the straight-through gradient method works.
Building upon a novel observation that the straight-through gradient method
is in fact identical to the well-known Nesterov's dual-averaging algorithm on a
quantization constrained optimization problem, we propose a more principled
alternative approach, called ProxQuant, that formulates quantized network
training as a regularized learning problem instead and optimizes it via the
prox-gradient method. ProxQuant does back-propagation on the underlying
full-precision vector and applies an efficient prox-operator in between
stochastic gradient steps to encourage quantizedness. For quantizing ResNets
and LSTMs, ProxQuant outperforms state-of-the-art results on binary
quantization and is on par with state-of-the-art on multi-bit quantization. For
binary quantization, our analysis shows both theoretically and experimentally
that ProxQuant is more stable than the straight-through gradient method (i.e.
BinaryConnect), challenging the indispensability of the straight-through
gradient method and providing a powerful alternative. | http://arxiv.org/pdf/1810.00861 | Yu Bai, Yu-Xiang Wang, Edo Liberty | cs.LG, stat.ML | null | null | cs.LG | 20181001 | 20190305 | 9 1 0 2
r a M 5 ] G L . s c [
3 v 1 6 8 0 0 . 0 1 8 1 : v i X r a
# ProxQuant: Quantized Neural Networks via Proximal Operators
# Yu Baiâ
Yu-Xiang Wangâ
# Edo Libertyâ¡
January 11, 2022
# Abstract
To make deep neural networks feasible in resource-constrained environments (such as mo- bile devices), it is beneï¬cial to quantize models by using low-precision weights. One common technique for quantizing neural networks is the straight-through gradient method, which en- ables back-propagation through the quantization mapping. Despite its empirical success, little is understood about why the straight-through gradient method works.
Building upon a novel observation that the straight-through gradient method is in fact identical to Nesterovâs dual-averaging algorithm on a quantization constrained optimization problem, we propose a more principled alternative approach, called ProxQuant, that formu- lates quantized network training as a regularized learning problem instead and optimizes it via the prox-gradient method. ProxQuant does back-propagation on the underlying full-precision vector and applies an eï¬cient prox-operator in between stochastic gradient steps to encourage quantizedness. For quantizing ResNets and LSTMs, ProxQuant outperforms state-of-the-art results on binary quantization and is on par with state-of-the-art on multi-bit quantization. We further perform theoretical analyses showing that ProxQuant converges to stationary points under mild smoothness assumptions, whereas variants such as lazy prox-gradient method can fail to converge in the same setting.
1
# Introduction
Deep neural networks (DNNs) have achieved impressive results in various machine learning tasks [7]. High-performance DNNs typically have over tens of layers and millions of parameters, resulting in a high memory usage and a high computational cost at inference time. However, these networks are often desired in environments with limited memory and computational power (such as mobile devices), in which case we would like to compress the network into a smaller, faster network with comparable performance.
A popular way of achieving such compression is through quantization â training networks with In a quantized neural network, each weight low-precision weights and/or activation functions. and/or activation can be representable in k bits, with a possible codebook of negligible additional size compared to the network itself. For example, in a binary neural network (k = 1), the weights are restricted to be in {±1}. Compared with a 32-bit single precision ï¬oat, a quantized net reduces the memory usage to k/32 of a full-precision net with the same architecture [8, 5, 20, 14, 27, 28]. In addition, the structuredness of the quantized weight matrix can often enable faster matrix-vector product, thereby also accelerating inference [14, 9].
âDepartment of Statistics, Stanford University. yub@stanford.edu. Work performed at Amazon AI. â Computer Science Department, UC Santa Barbara. yuxiangw@cs.ucsb.edu. Work performed at Amazon AI. â¡Amazon AI. libertye@amazon.com. Code available at https://github.com/allenbai01/ProxQuant.
1
Typically, training a quantized network involves (1) the design of a quantizer q that maps a full-precision parameter to a k-bit quantized parameter, and (2) the straight-through gradient method [5] that enables back-propagation from the quantized parameter back onto the original full- precision parameter, which is critical to the success of quantized network training. With quantizer q, an iterate of the straight-through gradient method (see Figure 1a) proceeds as θt+1 = θt â ηt θ) is taken as the output model. For training binary networks, choosing q(·) = sign(·) gives the BinaryConnect method [5].
Though appealingly simple and empirically eï¬ective, it is information-theoretically rather mys- terious why the straight-through gradient method works well, at least in the binary case: while the goal is to ï¬nd a parameter θ â {±1}d with low loss, the algorithm only has access to stochastic gradients at {±1}d. As this is a discrete set, a priori, gradients in this set do not necessarily contain any information about the function values. Indeed, a simple one-dimensional example (Figure 1b) shows that BinaryConnect fails to ï¬nd the minimizer of fairly simple convex Lipschitz functions in {±1}, due to a lack of gradient information in between.
Lo oe 0.6 0.4 0.2 0.0 â0.2 -0.4 -1.00 -0.75 -0.50 -0.25 0.00 0.25 0.50 0.75 1.00
(a) (b)
Figure 1: (a) Comparison of the straight-through gradient method and our PROXQUANT method. The straight-through method computes the gradient at the quantized vector and performs the update at the original real vector; PROXQUANT performs a gradient update at the current real vector followed by a prox step which encourages quantizedness. (b) A two-function toy failure case for BinaryConnect. The two functions are f(a) = |x +0.5| â 0.5 (blue) and f_;(x) = |x â 0.5] â 0.5 (orange). The derivatives of f; and f-1 coincide at {â1,1}, so any algorithm that only uses this information will have identical behaviors on these two functions. However, the minimizers in {+1} are xj = â1 and 2*, = 1, so the algorithm must fail on one of them.
In this paper, we formulate the problem of model quantization as a regularized learning problem and propose to solve it with a proximal gradient method. Our contributions are summarized as follows.
⢠We present a uniï¬ed framework for deï¬ning regularization functionals that encourage binary, ternary, and multi-bit quantized parameters, through penalizing the distance to quantized sets (see Section 3.1). For binary quantization, the resulting regularizer is a W -shaped non- smooth regularizer, which shrinks parameters towards either â1 or 1 in the same way that the L1 norm regularization shrinks parameters towards 0.
⢠We propose training quantized networks using ProxQuant (Algorithm 1) â a stochastic proximal gradient method with a homotopy scheme. Compared with the straight-through
2
gradient method, ProxQuant has access to additional gradient information at non-quantized points, which avoids the problem in Figure 1b and its homotopy scheme prevents potential overshoot early in the training (Section 3.2).
⢠We demonstrate the eï¬ectiveness and ï¬exibility of ProxQuant through systematic exper- iments on (1) image classiï¬cation with ResNets (Section 4.1); (2) language modeling with LSTMs (Section 4.2). The ProxQuant method outperforms the state-of-the-art results on binary quantization and is comparable with the state-of-the-art on ternary and multi-bit quantization.
⢠We perform a systematic theoretical study of quantization algorithms, showing that our ProxQuant (standard prox-gradient method) converges to stataionary points under mild smoothness assumptions (Section 5.1), where as lazy prox-gradient method such as BinaryRe- lax [26] fails to converge in general (Section 5.2). Further, we show that BinaryConnect has a very stringent condition to converge to any ï¬xed point (Section 5.3), which we verify through a sign change experiment (Appendix C).
# 1.1 Prior work
Methodologies Han et al. [8] propose Deep Compression, which compresses a DNN via spar- siï¬cation, nearest-neighbor clustering, and Huï¬man coding. This architecture is then made into a specially designed hardware for eï¬cient inference [9]. In a parallel line of work, Courbariaux et al. [5] propose BinaryConnect that enables the training of binary neural networks, and Li and Liu [16], Zhu et al. [28] extend this method into ternary quantization. Training and inference on quantized nets can be made more eï¬cient by also quantizing the activation [14, 20, 27], and such networks have achieved impressive performance on large-scale tasks such as ImageNet classiï¬ca- tion [20, 28] and object detection [25]. In the NLP land, quantized language models have been successfully trained using alternating multi-bit quantization [24].
Theories Li et al. [17] prove the convergence rate of stochastic rounding and BinaryConnect on convex problems and demonstrate the advantage of BinaryConnect over stochastic rounding on non- convex problems. Anderson and Berg [1] demonstrate the eï¬ectiveness of binary networks through the observation that the angles between high-dimensional vectors are approximately preserved when binarized, and thus high-quality feature extraction with binary weights is possible. Ding et al. [6] show a universal approximation theorem for quantized ReLU networks.
Principled methods Sun and Sun [21] perform model quantization through a Wasserstein regu- larization term and minimize via the adversarial representation, similar as in Wasserstein GANs [2]. Their method has the potential of generalizing to other generic requirements on the parameter, but might be hard to tune due to the instability of the inner maximization problem.
Prior to our work, a couple of proximal or regularization based quantization algorithms were proposed as alternatives to the straight-through gradient method, which we now brieï¬y review and compare with. [26] propose BinaryRelax, which corresponds to a lazy proximal gradient de- scent. [13, 12] propose a proximal Newton method with a diagonal approximate Hessian. Carreira- Perpin´an [3], Carreira-Perpin´an and Idelbayev [4] formulate quantized network training as a con- strained optimization problem and propose to solve them via augmented Lagrangian methods. Our algorithm is diï¬erent with all the aformentioned work in using the non-lazy and âsoftâ proximal
3
gradient descent with a choice of either ¢; or 2 regularization, whose advantage over lazy prox- gradient methods is demonstrated both theoretically (Section [5) and experimentally (Section [£1] and Appendix [C).
# 2 Preliminaries
The optimization difficulty of training quantized models is that they involve a discrete parameter space and hence efficient local-search methods are often prohibitive. For example, the problem of training a binary neural network is to minimize L(6) for 0 ⬠{41}7, Projected SGD on this set will not move unless with an unreasonably large stepsize [17], whereas greedy nearest-neighbor search requires d forward passes which is intractable for neural networks where d is on the order of millions. Alternatively, quantized training can also be cast as minimizing L(q(@)) for @ ⬠R¢ and an appropriate quantizer q that maps a real vector to a nearby quantized vector, but @ ++ q(@) is often non-differentiable and piecewise constant (such as the binary case q(-) = sign(-)), and thus back-propagation through q does not work.
# 2.1 The straight-through gradient method
The pioneering work of BinaryConnect [5] proposes to solve this problem via the straight-through gradient method, that is, propagate the gradient with respect to q(θ) unaltered to θ, i.e. to let âL âθ := âL âq(θ) . One iterate of the straight-through gradient method (with the SGD optimizer) is
θt+1 = θt â ηt âL(θ)|θ=q(θt).
This enables the real vector 6 to move in the entire Euclidean space, and taking q(@) at the end of training gives a valid quantized model. Such a customized back-propagation rule yields good empirical performance in training quantized nets and has thus become a standard practice [5)|28} [24]. However, as we have discussed, it is information theoretically unclear how the straight-through method works, and it does fail on very simple convex Lipschitz functions (Figure [Ib).
# 2.2 Straight-through gradient as lazy projection
Our ï¬rst observation is that the straight-through gradient method is equivalent to a dual-averaging method, or a lazy projected SGD [23]. In the binary case, we wish to minimize L(θ) over Q = {±1}d, and the lazy projected SGD proceeds as
6 = Projo(%) = sign(#) = 4(%), O44 = On _ mV L(0:). (1)
Written compactly, this is 41 = %âmVL(8)|o=q(0,): which is exactly the straight-through gradient method: take the gradient at the quantized vector and perform the update on the original real vector.
# 2.3 Projection as a limiting proximal operator
We take a broader point of view that a projection is also a limiting proximal operator with a suitable regularizer, to allow more generality and to motivate our proposed algorithm. Given any set Q, one could identify a regularizer R : Rd â Râ¥0 such that the following hold:
R(θ) = 0, âθ â Q and R(θ) > 0, âθ /â Q. (2)
4
In the case Q = {±1}d for example, one could take
d R(θ) = Rbin(θ) = min {|θj â 1|, |θj + 1|}. (3)
# jt
The proximal operator (or prox operator) [19] with respect to R and strength λ > 0 is
1 2 2 proxλR(θ) := arg min θ â θ + λR( θ) .
# deR¢
In the limiting case \ = oo, the argmin has to satisfy R(@) = 0, ie. 8 ⬠Q, and the prox operator is to minimize || â Molle over 9 ⬠Q, which is the Euclidean projection onto Q. Hence, projection is also a prox operator with \ = oo, and the straight-through gradient estimate is equivalent to a lazy proximal gradient descent with and A = oo.
While the prox operator with λ = â correponds to âhardâ projection onto the discrete set Q, when λ < â it becomes a âsoftâ projection that moves towards Q. Compared with the hard projection, a ï¬nite λ is less aggressive and has the potential advantage of avoiding overshoot early in training. Further, as the prox operator does not strictly enforce quantizedness, it is in principle able to query the gradients at every point in the space, and therefore has access to more information than the straight-through gradient method.
# 3 Quantized net training via regularized learning
We propose the ProxQuant algorithm, which adds a quantization-inducing regularizer onto the loss and optimizes via the (non-lazy) prox-gradient method with a ï¬nite λ. The prototypical version of ProxQuant is described in Algorithm 1.
Algorithm 1 ProxQuant: Prox-gradient method for quantized net training Require: Regularizer R that induces desired quantizedness,
initialization θ0, learning rates
{ηt}tâ¥0, regularization strengths {λt}tâ¥0 while not converged do
Perform the prox-gradient step
θt+1 = proxηtλtR θt â ηt âL(θt) . (4)
The inner SGD step in eq. can be replaced by any preferred stochastic optimization method such as Momentum SGD or Adam [15].
# end while
Compared to usual full-precision training, ProxQuant only adds a prox step after each stochas- tic gradient step, hence can be implemented straightforwardly upon existing full-precision training. As the prox step does not need to know how the gradient step is performed, our method adapts to other stochastic optimizers as well such as Adam.
In the remainder of this section, we deï¬ne a ï¬exible class of quantization-inducing regularizers through âdistance to the quantized setâ, derive eï¬cient algorithms of their corresponding prox operator, and propose a homotopy method for choosing the regularization strengths. Our regu- larization perspective subsumes most existing algorithms for model-quantization (e.g.,[5, 8, 24]) as limits of certain regularizers with strength λ â â. Our proposed method can be viewed as a principled generalization of these methods to λ < â with a non-lazy prox operator.
5
# 3.1 Regularization for model quantization
Let Q â Rd be a set of quantized parameter vectors. An ideal regularizer for quantization would be to vanish on Q and reï¬ect some type of distance to Q when θ /â Q. To achieve this, we propose L1 and L2 regularizers of the form
R(6) = inf, |6â All, or R(B) = inf, || â olla (5)
This is a highly ï¬exible framework for designing regularizers, as one could specify any Q and choose between L1 and L2. Speciï¬cally, Q encodes certain desired quantization structure. By appropriately choosing Q, we can specify which part of the parameter vector to quantize1, the number of bits to quantize to, whether we allow adaptively-chosen quantization levels and so on. The choice between {L1, L2} will encourage {âhardâ,âsoftâ} quantization respectively, similar as in standard regularized learning [22].
In the following, we present a few examples of regularizers under our framework eq. (5) which induce binary weights, ternary weights and multi-bit quantization. We will also derive eï¬cient algorithms (or approximation heuristics) for solving the prox operators corresponding to these regularizers, which generalize the projection operators used in the straight-through gradient algo- rithms.
In a binary neural net, the entries of θ are in {±1}. A natural choice would Binary neural nets be taking Q = {â1, 1}d. The resulting L1 regularizer is
d R(O) = inf Oâ ||, = inf 0; â [00], @) doâ¬{+1}4 oh dm ify i ols| j=l d = YE min {|9; â 1, |0; + 1]} = || â sign(@)|], . j=l
This is exactly the binary regularizer Rbin that we discussed earlier in eq. (3). Figure 2 plots the W-shaped one-dimensional component of Rbin from which we see its eï¬ect for inducing {±1} quantization in analog to L1 regularization for inducing exact sparsity.
The prox operator with respect to Rbin, despite being a non- convex optimization problem, admits a simple analytical solution:
es : sos oe
Figure 2: W-shaped regularizer for binary quantization.
proxλRbin (θ) = SoftThreshold(θ, sign(θ), λ)
= sign(@) + sign(6 â sign(@)) © [|@ â sign(6)| â A]4.- (7) We note that the choice of the LZ; version is not unique: the squared Ly version works as well, whose prox operator is given by (6 + Asign(@))/(1 + A). See Appendix for the derivation of these PTOx operators and the definition of the soft thresholding operator.
Multi-bit quantization with adaptive levels. Following [24], consider k-bit quantized parameters with a structured we
1Empirically, it is advantageous to keep the biases of each layers and the BatchNorm layers at full-precision, which â is often a negligible fraction, say 1/ d of the total number of parameters
6
adaptively-chosen set of quantization levels, which translates into
k Q= >> ayd; : {a1,...,aK} CR, b) ⬠cath = {6 = Ba:aâ¬eR*, Be fey". (8) i=1
i=1
The squared L2 regularizer for this structure is
Rx-vit(0) = inf || â Ball3. (9) aeRk, Be{t1}o**
which is also the alternating minimization objective in [24].
We now derive the prox operator for the regularizer eq. (9). For any θ, we have
~ 2 0 â volt Lyx pi proxyr, (0) = arg min ¢ â - 4|| +X inf AReâbit a 2 2 aâ¬Rk, Be{+1}** (10) Li~ 2 ~ 2 = arg min inf {5 @-4| +16 Bal \. @ aeR*,Be{41}*** (2 2 2 minimization problem in (0. , B,a), and we adopt an alternating minimization schedule
This is a joint minimization problem in ( to solve it:
θ = θ+2λBα 1+2λ .
(1) Minimize over θ given (B, α), which has a closed-form solution
(2) Minimize over (B,a) given 0, which does not depend on 69, and can be done via calling the alternating quantizer of [24]: Ba = qai(4).
Together, the prox operator generalizes the alternating minimization procedure in [24], as λ governs a trade-oï¬ between quantization and closeness to θ. To see that this is a strict generalization, note that for any λ the solution of eq. (10) will be an interpolation between the input θ and its Euclidean projection to Q. As λ â +â, the prox operator collapses to the projection.
Ternary quantization Ternary quantization is a variant of 2-bit quantization, in which weights are constrained to be in {âα, 0, β} for real values α, β > 0. We defer the derivation of the ternary prox operator into Appendix A.2.
# 3.2 Homotopy method for regularization strength
Recall that the larger λt is, the more aggressive θt+1 will move towards the quantized set. An ideal choice would be to (1) force the net to be exactly quantized upon convergence, and (2) not be too aggressive such that the quantized net at convergence is sub-optimal.
We let λt be a linearly increasing sequence, i.e. λt := λ · t for some hyper-parameter λ > 0 which we term as the regularization rate. With this choice, the stochastic gradient steps will start oï¬ close to full-precision training and gradually move towards exact quantizedness, hence the name âhomotopy methodâ. The parameter λ can be tuned by minimizing the validation loss, and controls the aggressiveness of falling onto the quantization constraint. There is nothing special about the linear increasing scheme, but it is simple enough and works well as we shall see in the experiments.
# 4 Experiments
We evaluate the performance of ProxQuant on two tasks: image classiï¬cation with ResNets, and language modeling with LSTMs. On both tasks, we show that the default straight-through gradient method is not the only choice, and our ProxQuant can achieve the same and often better results.
7
# 4.1 Image classiï¬cation on CIFAR-10
Problem setup We perform image classiï¬cation on the CIFAR-10 dataset, which contains 50000 training images and 10000 test images of size 32x32. We apply a commonly used data augmentation strategy (pad by 4 pixels on each side, randomly crop to 32x32, do a horizontal ï¬ip with probability 0.5, and normalize). Our models are ResNets [10] of depth 20, 32, 44, and 56 with weights quantized to binary or ternary.
Method We use ProxQuant with regularizer eq. (3) in the binary case and eqs. (15) and (16) in the ternary case, which we respectively denote as PQ-B and PQ-T. We use the homotopy method λt = λ · t with λ = 10â4 as the regularization strength and Adam with constant learning rate 0.01 as the optimizer.
We compare with BinaryConnect (BC) for binary nets and Trained Ternary Quantization (TTQ) [28] for ternary nets. For BinaryConnect, we train with the recommended Adam opti- mizer with learning rate decay [5] (initial learning rate 0.01, multiply by 0.1 at epoch 81 and 122), which we ï¬nd leads to the best result for BinaryConnect. For TTQ we compare with the reported results in [28].
For binary quantization, both BC and our PROXQUANT are initialized at the same pre-trained full-precision nets (warm-start) and trained for 300 epochs for fair comparison. For both methods, we perform a hard quantization @ +> q(@) at epoch 200 and keeps training till the 300-th epoch to stabilize the BatchNorm layers. We compare in addition the performance drop relative to full precision nets of BinaryConnect, BinaryRelax [26], and our PROXQUANT.
Result The top-1 classiï¬cation errors for binary quantization are reported in Table 1. Our ProxQuant consistently yields better results than BinaryConnect. The performance drop of ProxQuant relative to full-precision nets is about 1%, better than BinaryConnect by 0.2% on average and signiï¬cantly better than the reported result of BinaryRelax.
Results and additional details for ternary quantization are deferred to Appendix B.1.
Table 1: Top-1 classiï¬cation error of binarized ResNets on CIFAR-10. Performance is reported in mean(std) over 4 runs, as well as the (absolute) performance drop of over full-precision nets.
Model (Bits) ResNet-20 ResNet-32 ResNet-44 ResNet-56 FP (32) 8.06 7.25 6.96 6.54 Classiï¬cation error BC (1) 9.54 (0.03) 8.61 (0.27) 8.23 (0.23) 7.97 (0.22) Performance drop over FP net PQ-B (ours) (1) BC (1) BinaryRelax PQ-B (ours) (1) +4.84 +2.75 - - (1) +1.29 +1.28 +0.99 +1.16 9.35 (0.13) +1.48 8.53 (0.15) +1.36 7.95 (0.05) +1.27 7.70 (0.06) +1.43
# 4.2 Language modeling with LSTMs
Problem setup We perform language modeling with LSTMs [11] on the Penn Treebank (PTB) dataset [18], which contains 929K training tokens, 73K validation tokens, and 82K test tokens. Our model is a standard one-hidden-layer LSTM with embedding dimension 300 and hidden dimension 300. We train quantized LSTMs with the encoder, transition matrix, and the decoder quantized to k-bits for k â {1, 2, 3}. The quantization is performed in a row-wise fashion, so that each row of the matrix has its own codebook {α1, . . . , αk}.
8
Method We compare our multi-bit ProxQuant (eq. (10)) to the state-of-the-art alternating minimization algorithm with straight-through gradients [24]. Training is initialized at a pre-trained full-precision LSTM. We use the SGD optimizer with initial learning rate 20.0 and decay by a factor of 1.2 when the validation error does not improve over an epoch. We train for 80 epochs with batch size 20, BPTT 30, dropout with probability 0.5, and clip the gradient norms to 0.25. The regularization rate λ is tuned by ï¬nding the best performance on the validation set. In addition to multi-bit quantization, we also report the results for binary LSTMs (weights in {±1}), comparing BinaryConnect and our ProxQuant-Binary, where both learning rates are tuned on an exponential grid {2.5, 5, 10, 20, 40}.
Result We report the perplexity-per-word (PPW, lower is better) in Table 2. The performance of ProxQuant is comparable with the Straight-through gradient method. On Binary LSTMs, ProxQuant-Binary beats BinaryConnect by a large margin. These results demonstrate that ProxQuant oï¬ers a powerful alternative for training recurrent networks.
Table 2: PPW of quantized LSTM on Penn Treebank.
1 372.2 ProxQuant-Binary (ours) 288.5 104.7 106.2 Method / Number of Bits BinaryConnect ALT Straight-through2 ALT-ProxQuant (ours) 2 - - 90.2 90.0 3 - - 86.1 87.2 FP (32) 88.5
# 5 Theoretical analysis
In this section, we perform a theoretical study on the convergence of quantization algorithms. We show in Section 5.1 that our ProxQuant algorithm (i.e. non-lazy prox-gradient method) converges under mild smoothness assumptions on the problem. In Section 5.2, we provide a simple example showing that the lazy prox-gradient method fails to converge under the same set of assumptions. In Section 5.3, we show that BinaryConnect has a very stringent condition for converging to a ï¬xed point. Our theory demonstrates the superiority of our proposed ProxQuant over lazy prox- gradient type algorithms such as BinaryConnect and BinaryRelax [26]. All missing proofs are deferred to Appendix D.
Prox-gradient algorithms (both lazy and non-lazy) with a ï¬xed λ aim to solve the problem
minimize θâRd L(θ) + λR(θ), (11)
and BinaryConnect can be seen as the limiting case of the above with λ = â (cf. Section 2.2).
?We thank Xu et al. [24] for sharing the implementation of this method through a personal communication. There is a very clever trick not mentioned in their paper: after computing the alternating quantization qai(@), they multiply by a constant 0.3 before taking the gradient; in other words, their quantizer is a rescaled alternating quantizer: 6 ++ 0.3qai(@). This scaling step gives a significant gain in performance â without scaling the PPW is {116.7, 94.3, 87.3} for {1,2,3} bits. In contrast, our PROXQUANT does not involve a scaling step and achieves better PPW than this unscaled ALT straight-through method.
9
# 5.1 A convergence theorem for ProxQuant
We consider ProxQuant with batch gradient and constant regularization strength λt ⡠λ:
θt+1 = proxηtλR(θt â ηtâL(θt)).
Theorem 5.1 (Convergence of ProxQuant). Assume that the loss L is B-smooth (i.e. has B- Lipschitz gradients) and the regularizer R is differentiable. Let F\(0) = L(@) + AR(O) be the composite objective and assume that it is bounded below by F,. Running ProxQuant with batch gradient VL, constant stepsize m = n = rr and \: = X for T steps, we have the convergence guarantee
CB(F)(90) â Fx) 2 IVFAOraII3 < T where Thest = arg min ||; â 4-1 ||), (12) 1<t<T
where C > 0 is a universal constant.
Remark 5.1. The convergence guarantee requires both the loss and the regularizer to be smooth. Smoothness of the loss can be satisï¬ed if we use a smooth activation function (such as tanh). For the regularizer, the quantization-inducing regularizers deï¬ned in Section 3.1 (such as the W-shaped regularizer) are non-diï¬erentiable. However, we can use a smoothed version of them that is diï¬er- entiable and point-wise arbitrarily close to R, which will satisfy the assumptions of Theorem 5.1. The proof of Theorem 5.1 is deferred to Appendix D.1.
# 5.2 Non-convergence of lazy prox-gradient
The lazy prox-gradient algorithm (e.g. BinaryRelax [26]) for solving problem eq. (11) is a variant where the gradients are taken at proximal points but accumulated at the original sequence:
θt+1 = θt â ηtâL(proxλR(θt)). (13)
Convergence of the lazy prox-gradient algorithm eq. (13) is only known to hold for convex prob- lems [23]; on smooth non-convex problems it generally does not converge even in an ergodic sense. We provide a concrete example that satisï¬es the assumptions in Theorem 5.1 (so that Prox- Quant converges ergodically) but lazy prox-gradient does not converge.
Theorem 5.2 (Non-convergence of lazy prox-gradient). There exists L and R satisfying the as- sumptions of Theorem 5.1 such that for any constant stepsize ηt ⡠η ⤠1 2β , there exists some speciï¬c initialization θ0 on which the lazy prox-gradient algorithm eq. (13) oscillates between two non-stataionry points and hence does not converge in the ergodic sense of eq. (12).
Remark 5.2. Our construction is a fairly simple example in one-dimension and not very adver- sarial: L(θ) = 1
# 5.3 Convergence characterization for BinaryConnect
For BinaryConnect, the concept of stataionry points is no longer sensible (as the target points {±1}d are isolated and hence every point is stationary). Here, we consider the alternative deï¬nition of convergence as converging to a ï¬xed point and show that BinaryConnect has a very stringent convergence condition.
Consider the BinaryConnect method with batch gradients:
st = sign(θt), θt+1 = θt â ηtâL(st). (14)
10
Deï¬nition 5.1 (Fixed point and convergence). We say that s â {±1}d is a ï¬xed point of the BinaryConnect algorithm, if s0 = s in eq. (14) implies that st = s for all t = 1, 2, .... We say that the BinaryConnect algorithm converges if there exists t < â such that st is a ï¬xed point.
Theorem 5.3. Assume that the learning rates satisfy ran â¢m = co, then s ⬠{+1}¢ is a fixed point for BinaryConnect eq. if and only if sign(VL(s)[i]) = âs[i] for all i © [d] such that VL(6)[t] 40. Such a point may not exist, in which case BinaryConnect does not converge for any initialization 09 ⬠R¢.
Remark 5.3. Theorem 5.3 is in appearingly a stark contrast with the convergence result for Bina- ryConnect in [17] in the convex case, whose bound involves a an additive error O(â) that does not vanish over iterations, where â is the grid size for quantization. Hence, their result is only useful when â is small. In contrast, we consider the original BinaryConnect with â = 1, in which case the error makes Li et al. [17]âs bound vacuous. The proof of Theorem 5.3 is deferred to Appendix D.3.
Experimental evidence We have already seen that such a ï¬xed point s might not exist in the toy example in Figure 1b. In Appendix C, we perform a sign change experiment on CIFAR- 10, showing that BinaryConnect indeed fails to converge to a ï¬xed sign pattern, corroborating Theorem 5.3.
# 6 Conclusion
In this paper, we propose and experiment with the ProxQuant method for training quantized networks. Our results demonstrate that ProxQuant oï¬ers a powerful alternative to the straight- through gradient method and has theoretically better convergence properties. For future work, it would be of interest to propose alternative regularizers for ternary and multi-bit ProxQuant and experiment with our method on larger tasks.
# Acknowledgement
We thank Tong He, Yifei Ma, Zachary Lipton, and John Duchi for their valuable feedback. We thank Chen Xu and Zhouchen Lin for the insightful discussion on multi-bit quantization and sharing the implementation of [24] with us. We thank Ju Sun for sharing the draft of [21] and the inspiring discussions on adversarial regularization for quantization. The majority of this work was performed when YB and YW were at Amazon AI.
# References
[1] A. G. Anderson and C. P. Berg. The high-dimensional geometry of binary neural networks. arXiv preprint arXiv:1705.07199, 2017.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
[3] M. A. Carreira-Perpin´an. Model compression as constrained optimization, with application to neural nets. part i: General framework. arXiv preprint arXiv:1707.01209, 2017.
[4] M. A. Carreira-Perpin´an and Y. Idelbayev. Model compression as constrained optimization, with application to neural nets. part ii: Quantization. arXiv preprint arXiv:1707.04319, 2017.
11
[5] M. Courbariaux, Y. Bengio, and J.-P. David. BinaryConnect: Training deep neural networks with binary weights during propagations. In Advances in neural information processing sys- tems, pages 3123â3131, 2015.
[6] Y. Ding, J. Liu, and Y. Shi. On the universal approximability of quantized relu neural networks. arXiv preprint arXiv:1802.03646, 2018.
[7] I. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio. Deep learning, volume 1. MIT press Cambridge, 2016.
[8] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huï¬man coding. arXiv preprint arXiv:1510.00149, 2015.
[9] S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, and W. J. Dally. EIE: Eï¬cient inference engine on compressed deep neural network. In Computer Architecture (ISCA), 2016 ACM/IEEE 43rd Annual International Symposium on, pages 243â254. IEEE, 2016.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770â 778, 2016.
[11] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735â 1780, 1997.
[12] L. Hou and J. T. Kwok. Loss-aware weight quantization of deep networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id= BkrSv0lA-.
[13] L. Hou, Q. Yao, and J. T. Kwok. Loss-aware binarization of deep networks. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id= S1oWlN9ll.
[14] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. Journal of Machine Learning Research, 18:187â1, 2017.
[15] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[16] F. Li and B. Liu. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016.
[17] H. Li, S. De, Z. Xu, C. Studer, H. Samet, and T. Goldstein. Training quantized nets: A deeper understanding. In Advances in Neural Information Processing Systems, pages 5811â5821, 2017.
[18] M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313â330, 1993.
N. Parikh and S. Boyd. Proximal algorithms. Foundations and Trends® in Optimization, 1 (3):127-239, 2014.
[20] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classiï¬cation In European Conference on Computer Vision, using binary convolutional neural networks. pages 525â542. Springer, 2016.
12
[21] J. Sun and X. Sun. Adversarial probabilistic regularization. Unpublished draft, 2018.
[22] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267â288, 1996.
[23] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11(Oct):2543â2596, 2010.
[24] C. Xu, J. Yao, Z. Lin, W. Ou, Y. Cao, Z. Wang, and H. Zha. Alternating multi-bit quantization for recurrent neural networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=S19dR9x0b.
[25] P. Yin, S. Zhang, Y. Qi, and J. Xin. Quantization and training of low bit-width convolutional neural networks for object detection. arXiv preprint arXiv:1612.06052, 2016.
[26] P. Yin, S. Zhang, J. Lyu, S. Osher, Y. Qi, and J. Xin. Binaryrelax: A relaxation approach for training deep neural networks with quantized weights. arXiv preprint arXiv:1801.06313, 2018.
[27] S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
[28] C. Zhu, S. Han, H. Mao, and W. J. Dally. Trained ternary quantization. arXiv preprint arXiv:1612.01064, 2016.
# A Additional results on Regularization
# A.1 Prox operators for binary nets
Here we derive the prox operators for the binary regularizer eq. (6) and its squared L2 variant. Recall that
# d
Rbin(θ) = min {|θj â 1|, |θj + 1|}.
j=1
By deï¬nition of the prox operator, we have for any θ â Rd that
Ly~ 2 d ~ ~ proxy,,,,,(9) = arg min 5 6-4 +A> min {(6; â 1], |9; +1) bin aed) 2 9 a = arg min d deR4 j= L~ ear ~ 3 (9 â 0;)° + Amin { (6; â 1,0; + i} 1
This minimization problem is coordinate-wise separable. For each 6), the same upon flipping the sign, but the quadratic term is smaller when sign(0;) the solution 6* to the prox satisfies that sign(0;) = sign(9;), and the absolute
penalty term remains the = sign(@;). Hence, value satisfies
|O;| = angmin { 3( â |6;|)? + Alt â ui} = SoftThreshold(|6;|,1, 4) = 1+sign(|4;|â1)[]|6;|â1]âA]+. 120
120
13
Multiplying by sign(67) = sign(;), we have
= SoftThreshold(0;, sign(;), A),
which gives eq. (7).
For the squared L2 version, by a similar argument, the corresponding regularizer is
d Rbin(θ) = min (θj â 1)2, (θj + 1)2 .
For this regularizer we have
aye _ _ proxy p,,, (9) = arg min 5 39 (6; â 6;)? + Amin {(; â 1), (6; + 1)" deR4 j=l
.
39 deR4 j=l Using the same argument as in the Lj case, the solution 6 satisfies sign(97) = sign(6;), and
_ fl 2 2 Sil + 65] = arganin {5 IAj|)" + A@â 1) \ =
= sign(0;) gives
Multiplying by sign(6}) = sign(0;) gives
.
pte 0; + Asign(0;) J 1+A
or, in vector form, 6* = (@ + Asign(@))/(1 + A).
# A.2 Prox operator for ternary quantization
For ternary quantization, we use an approximate version of the alternating prox operator eq. (10): compute
0 +20 142dâ 0=q(0) and 6= (15)
0 +20 142dâ
where q is the ternary quantizer defined as
0.7 q(0) = OT 1{0 > A} +O°1{9<âA}, A= |lOl],, 07 = Alissa, 0 = liaj<-a. (16)
This is a straightforward extension of the TWN quantizer [16] that allows diï¬erent levels for pos- itives and negatives. We ï¬nd that two rounds of alternating computation in eq. (15) achieves a good performance, which we use in our experiments.
# B Additional experimental results
# B.1 Ternary quantization for CIFAR-10
Our models are ResNets of depth 20, 32, and 44. Ternarized training is initialized at pre-trained full-precision nets. We perform a hard quantization @ ++ q(@) at epoch 400 and keeps training till the600-th epoch to stabilize the BatchNorm layers.
14
Result The top-1 classiï¬cation errors for ternary quantization are reported in Table 3. Our results are comparable with the reported results of TTQ,3 and the best performance of our method over 4 runs (from the same initialization) is slightly better than TTQ.
Table 3: Top-1 classiï¬cation error of ternarized ResNets on CIFAR-10. Performance is reported in mean(std) over 4 runs, where for PQ-T we report in addition the best of 4 (Bo4).
Model (Bits) ResNet-20 ResNet-32 ResNet-44 FP (32) 8.06 7.25 6.96 TTQ PQ-T (ours) PQ-T (ours, Bo4) (2) 8.87 7.63 7.02 (2) 8.40 (0.13) 7.65 (0.15) 7.05 (0.08) (2) 8.22 7.53 6.98
# C Sign change experiment
We experimentally compare the training dynamics of ProxQuant-Binary and BinaryConnect through the sign change metric. The sign change metric between any θ1 and θ2 is the proportion of their diï¬erent signs, i.e. the (rescaled) Hamming distance:
||sign(61) â sign(62)|| 2d SignChange(41, 62) ⬠[0,1].
In Rd, the space of all full-precision parameters, the sign change is a natural distance metric that represents the closeness of the binarization of two parameters.
(a) (b) (c) (d)
|
# cos
Figure 3: SignChange(θ0, θt) against t (epoch) for BinaryConnect and ProxQuant, over 4 runs starting from the same full-precision ResNet-20. ProxQuant has signiï¬cantly lower sign changes than BinaryCon- nect while converging to better models. (a) The ï¬rst conv layer of size 16 à 3 à 3 à 3; (b) The last conv layer of size 64 à 64 à 3 à 3; (c) The fully connected layer of size 64 à 10; (d) The validation top-1 error of the binarized nets (with moving average smoothing).
Recall in our CIFAR-10 experiments (Section [£-Tp, for both BinaryConnect and PROXQUANT, we initialize at a good full-precision net @ and stop at a converged binary network 6 ⬠{41}". We are interested in SignChange(o, #;) along the training path, as well as SignChange(o, 0), i.e. the distance of the final output model to the initialization.
3We note that our ProxQuant-Ternary and TTQ are not strictly comparable: we have the advantage of using better initializations; TTQ has the advantage of a stronger quantizer: they train the quantization levels (θ+, θâ) whereas our quantizer eq. (16) pre-computes them from the current full-precision parameter.
15
Our ï¬nding is that ProxQuant produces binary nets with both lower sign changes and higher performances, compared with BinaryConnect. Put diï¬erently, around the warm start, there is a good binary net nearby which can be found by ProxQuant but not BinaryConnect, suggesting that BinaryConnect, and in general the straight-through gradient method, suï¬ers from higher optimization instability than ProxQuant. This ï¬nding is consistent in all layers, across diï¬erent warm starts, and across diï¬ernent runs from each same warm start (see Figure 3 and Table 4 in Appendix C.1). This result here is also consistent with Theorem 5.3: the signs in BinaryConnect never stop changing until we manually freeze the signs at epoch 400.
# C.1 Raw data for sign change experiment
Table 4: Performances and sign changes on ResNet-20 in mean(std) over 3 full-precision initial- izations and 4 runs per (initialization x method). Sign changes are computed over all quantized parameters in the net.
Initialization Method Top-1 Error(%) FP-Net 1 (8.06) FP-Net 2 (8.31) FP-Net 3 (7.73) BC PQ-B BC PQ-B BC PQ-B 9.489 (0.223) 9.146 (0.212) 9.745 (0.422) 9.444 (0.067) 9.383 (0.211) 9.084 (0.241) Sign change 0.383 (0.006) 0.276 (0.020) 0.381 (0.004) 0.288 (0.002) 0.359 (0.001) 0.275 (0.001)
Table 5: Performances and sign changes on ResNet-20 in raw data over 3 full-precision initializations and 4 runs per (initialization x method). Sign changes are computed over all quantized parameters in the net.
Initialization Method FP-Net 1 (8.06) FP-Net 2 (8.31) FP-Net 3 (7.73) BC PQ-B BC PQ-B BC PQ-B Top-1 Error(%) 9.664, 9.430, 9.198, 9.663 9.058, 8.901, 9.388, 9.237 9.456, 9.530, 9.623, 10.370 9.522, 9.474, 9.410, 9.370 9.107, 9.558, 9.538, 9.328 9.284, 8.866, 9.301, 8.884 Sign change 0.386, 0.377, 0.390, 0.381 0.288, 0.247, 0.284, 0.285 0.376, 0.379, 0.382, 0.386 0.291, 0.287, 0.289, 0.287 0.360, 0.357, 0.359, 0.360 0.275, 0.276, 0.276, 0.275
16
# D Proofs of theoretical results
# D.1 Proof of Theorem 5.1
Recall that a function f : Rd â R is said to be β-smooth if it is diï¬erentiable and âf is β-Lipschitz: for all x, y â Rd we have
|VF(r) â VE)lle < Bila â gllo-
For any β-smooth function, it satisï¬es the bound
fly) < fle) +(VF(o).yâ2) + Sle âyl2 forall aye R⢠Convergence results like Theorem are standard in the literature of proximal algorithms, where we have convergence to stataionarity without convexity on either L or R but assuming smoothness. For completeness we provide a proof below. Note that though the convergence is ergodic, the best index Tj,-st can be obtained in practice via monitoring the proximity ||; â 0;â1]|»- Proof of Theorem Recall the ProxQuant iterate
1 ; O41 = arg min {106 + (0â6,,VL(@%)) + â ||@- O\|3 + rnc} beRd 2n
.
By the fact that θt+1 minimizes the above objective and applying the smoothness of L, we get that
1 FO) = L(A) + AR(t) = L(t) + (t41 â 9, VE()) + mm \|r41 â Olly + AR(O141) 1 3 5 , > L(Qisa) + (= 5) [lees ~ Ol + ARCs) = Flees) + 5 leer ~ 6B.
Telescoping the above bound for t = 0, . . . , T â 1, we get that
T â1
Â¥en gy3 < G0) a Sen) EMO) ~F).
Therefore we have the proximity guarantee
: 2 min |/Or41 â @\|5 < pain, Osa â Olle
We now turn this into a stationarity guarantee. The ï¬rst-order optimality condition for θt+1 gives
âL(θt) + 1 η (θt+1 â θt) + λâR(θt+1) = 0.
Combining the above equality and the smoothness of L, we get
1 [VF \Cean) la = IVECO) + AV RCP ll = | (Oe â Ban) + WL( Bisa) ~ VLCC) 2 1 < (5 +8) [01.1 â Gilly = 38 [e41 â Olly
Choosing t = Tbest â 1 and applying the proximity guarantee eq. (17), we get
188(F,(00) â F.) T . VEX Or ea )I2 S 9F? WOreae ~ Pfheâtll2 = 99" min |I@ex1 â Gll3 <
This is the desired bound.
17
# D.2 Proof of Theorem 5.2
Let our loss function L : R â R be the quadratic L(θ) = 1 2 θ2 (so that L is β-smooth with β = 1). Let the regularizer R : R â R be a smoothed version of the W-shaped regularizer in eq. (3), deï¬ned as (for ε â (0, 1/2] being the smoothing radius)
ess 4 â Reaularzer > 2 ; 4 -3 2 -1 oO 1 2 3
1 2ε θ2 + 1 â ε, ε 2 (θ â 1)2, â θ â [0, ε) â θ + 1 â , θ â [ε, 1 â ε) R(θ) = 1 2ε θ â 1 â θ â [1 â ε, 1 + ε) ε 2 , θ â [1 + ε, â)
# Figure 4
# eer
and R(âθ) = R(θ) for the negative part. See Figure 4 for an illustration of the loss L and the regularizer R (with ε = 0.2).
It is straightforward to see that R is piecewise quadratic and diï¬erentiable on R by computing the derivatives at ε and 1 ± ε. Further, by elementary calculus, we can evaluate the prox operator in closed form: for all λ ⥠1, we have
proxλR(θ) = εθ + λsign(θ) ε + λsign(θ) for all |θ| ⤠1.
Now, suppose we run the lazy prox-gradient method with constant stepsize ηt ⡠η ⤠1 For the speciï¬c initialization 2β = 1 2 .
θ0 = ηλ 2λ + (2 â η)ε â (0, 1),
we have the equality proxλR(θ0) = 2
η θ0 and therefore the next lazy prox-gradient iterate is
θ1 = θ0 â ηâL(proxλR(θ0)) = θ0 â ηâL 2 η θ0 = θ0 â η · 2 η θ0 = âθ0.
As both R and L are even functions, a symmetric argument holds for θ1 from which we get θ2 = âθ1 = θ0. Therefore the lazy prox-gradient method ends up oscillating between two points:
θt = (â1)tθ0.
On the other hand, it is straightforward to check that the only stationary points of L(θ) + λR(θ) are 0 and ± λ ε+λ , all not equal to ±θ0. Therefore the sequence {θt}tâ¥0 does not have a subsequence with vanishing gradient and thus does not approach stationarity in the ergodic sense.
# D.3 Proof of Theorem 5.3
We start with the âââ direction. If s is a ï¬xed point, then by deï¬nition there exists θ0 â Rd such that θt = θ for all t = 0, 1, 2, .... By the iterates eq. (14)
T θT = θ0 â ηtâL(st).
t=0
18
Take signs on both sides and apply st = s for all t on both sides, we get that
T 5 = sp = sign(6r) = sign (« VL(s) > ») t=0
t=0
Take the limit T â oo and apply the assumption that 57, m = 00, we get that for all 7 ⬠[d] such that [VL(9)]i £0,
T s[i] = lim sign @ âVL(s (s) > n) [i] = âsign(VL(s)){é]. T. â00 =0
=0 Now we prove the â<=â direction. If @ obeys that sign(VL(s)[i]) = âs[i] for all i ⬠[d] such that VL/(s)[i] # 0, then if we take any 6 such that sign(@)) = s, 4 will move in a straight line towards the direction of âVL(s), which does not change the sign of ho. In other words, $s; = sign(6,) = sign(@9) = s for all t = 0,1,2,.... Therefore, by definition, s is a fixed point.
19 | {
"id": "1612.06052"
} |
1810.00147 | M$^3$RL: Mind-aware Multi-agent Management Reinforcement Learning | Most of the prior work on multi-agent reinforcement learning (MARL) achieves
optimal collaboration by directly controlling the agents to maximize a common
reward. In this paper, we aim to address this from a different angle. In
particular, we consider scenarios where there are self-interested agents (i.e.,
worker agents) which have their own minds (preferences, intentions, skills,
etc.) and can not be dictated to perform tasks they do not wish to do. For
achieving optimal coordination among these agents, we train a super agent
(i.e., the manager) to manage them by first inferring their minds based on both
current and past observations and then initiating contracts to assign suitable
tasks to workers and promise to reward them with corresponding bonuses so that
they will agree to work together. The objective of the manager is maximizing
the overall productivity as well as minimizing payments made to the workers for
ad-hoc worker teaming. To train the manager, we propose Mind-aware Multi-agent
Management Reinforcement Learning (M^3RL), which consists of agent modeling and
policy learning. We have evaluated our approach in two environments, Resource
Collection and Crafting, to simulate multi-agent management problems with
various task settings and multiple designs for the worker agents. The
experimental results have validated the effectiveness of our approach in
modeling worker agents' minds online, and in achieving optimal ad-hoc teaming
with good generalization and fast adaptation. | http://arxiv.org/pdf/1810.00147 | Tianmin Shu, Yuandong Tian | cs.AI, cs.LG, cs.MA, stat.ML | ICLR 2019; 18 pages, 12 figures | null | cs.AI | 20180929 | 20190307 | 9 1 0 2
r a M 7 ] I A . s c [
3 v 7 4 1 0 0 . 0 1 8 1 : v i X r a
Published as a conference paper at ICLR 2019
M3RL: MIND-AWARE MULTI-AGENT MANAGEMENT REINFORCEMENT LEARNING
Tianmin Shu â University of California, Los Angeles tianmin.shu@ucla.edu
Yuandong Tian Facebook AI Research yuandong@fb.com
# ABSTRACT
Most of the prior work on multi-agent reinforcement learning (MARL) achieves optimal collaboration by directly learning a policy for each agent to maximize a common reward. In this paper, we aim to address this from a different angle. In particular, we consider scenarios where there are self-interested agents (i.e., worker agents) which have their own minds (preferences, intentions, skills, etc.) and can not be dictated to perform tasks they do not want to do. For achieving optimal coordination among these agents, we train a super agent (i.e., the man- ager) to manage them by ï¬rst inferring their minds based on both current and past observations and then initiating contracts to assign suitable tasks to workers and promise to reward them with corresponding bonuses so that they will agree to work together. The objective of the manager is to maximize the overall pro- ductivity as well as minimize payments made to the workers for ad-hoc worker teaming. To train the manager, we propose Mind-aware Multi-agent Management Reinforcement Learning (M3RL), which consists of agent modeling and policy learning. We have evaluated our approach in two environments, Resource Col- lection and Crafting, to simulate multi-agent management problems with various task settings and multiple designs for the worker agents. The experimental re- sults have validated the effectiveness of our approach in modeling worker agentsâ minds online, and in achieving optimal ad-hoc teaming with good generalization and fast adaptation.1
# INTRODUCTION
As the main assumption and building block in economics, self-interested agents play a central roles in our daily life. Selï¬sh agents, with their private beliefs, preferences, intentions, and skills, could collaborate (ad-hoc teaming) effectively to make great achievement with proper incentives and con- tracts, an amazing phenomenon that happens every day in every corner of the world.
However, most existing multi-agent reinforcement learning (MARL) methods focus on collaboration when agents selï¬essly share a common goal, expose its complete states and are willing to be trained towards the goal. While this is plausible in certain games, few papers address the more practical situations, in which agents are self-interested and inclined to show off, and only get motivated to work with proper incentives.
In this paper, we try to model such behaviors. We have multiple workers and a manager, together to work on a set of tasks. The manager gets an external reward upon the completion of some tasks, or one speciï¬c task. Each worker has a skill set and preference over the tasks. Note that their skills and preferences may not align with each other (Fig. 1(a)), and are not known to the manager (Fig. 1(b)). Furthermore, manager may not get any external reward until a speciï¬c task is complete, which depends on other tasks.
By default, the self-interested workers simply choose the most preferred tasks, which is often unpro- ductive from the perspective of the entire project. Therefore, the manager gives additional incentives in the form of contracts. Each contract assigns a goal and a bonus for achieving the goal to a worker.
âWork done while interning at Facebook AI Research. 1Code is available at https://github.com/facebookresearch/M3RL.
1
Published as a conference paper at ICLR 2019
== Skill mn Preference sm Skill sex Preference + Bonus Las, a Worker 1 gil | Wale. | 14 wore |] ML e\? Fel âREC aE - 2 le Worker 3 Lh @ 2 v L | he Contract Lore aa oa Task dependency (a) Nature of the workers (b) Incomplete information (c) Contract generation
Figure 1: Illustration of our problem setup. Workers have different skills (abilities for completing tasks) and preferences (which tasks they like) indicated by the bar charts. They are self-interested and perform the tasks they prefer the most. To achieve optimal collaboration, a manager has to ï¬rst infer workersâ minds, and assigns right bonuses to workers for ï¬nishing speciï¬ed tasks in the form of contracts. Consequently, workers will adjust their intentions and work together accordingly. E.g., workers in the ï¬gure initially all want to do task B. To ï¬nish all tasks, the manager has to pay more bonus to worker 1 and 2 so that they will perform A and C respectively.
With the external incentives, workers may choose different goals than their preferences. Upon com- pletion of assigned goals, the manager receives the rewards associated with those goals and makes the promised payments to the workers. To generate optimal contracts, the manager must infer the workersâ minds and learn a good policy of goal and reward assignment.
Conventional approaches of mechanism design tackle similar problems by imposing strong assump- tions (e.g., skill/preference distributions, task dependencies, etc) to ï¬nd an analytic solution. In con- trast, we aim to train a manager using reinforcement learning to i) assess minds of workers (skills, preferences, intentions, etc.) on the ï¬y, ii) to optimally assign contracts to maximize a collaborative reward, and iii) is adapted to diverse and even evolving workers and environments.
For this, we propose a novel framework â Mind-aware Multi-agent Management Reinforcement Learning (M°RL), which entails both agent modeling for estimating workersâ minds and policy learning for contract generation. For agent modeling, we infer workersâ identities by their perfor- mance history, and track their internal states with a mind tracker trained by imitation learning (IL). For contract generation, we apply deep reinforcement learning (RL) to learn goal and bonus as- signment policies. To improve the learning efficiency and adaptation, we also propose high-level successor representation (SR) learning (Kulkarni et al.||2016) and agent-wise ¢-greedy exploration.
As a proof of concept, we evaluate our approach in two environments: Resource Collection and Crafting in 2D Minecraft, to simulate multi-agent management problems. The setup and underlying assumptions are designed to mimic real world problems, where workers are not compelled to reveal their true preferences and skills, and there may be dependency between tasks resulting in delayed and sparse reward signals. Workers may also be deceitful (e.g., accepting a contract even when the assigned goal is unreachable). Our experiments demonstrate that the manager trained by our approach can i) estimate the mind of each worker from the recent behaviors, ii) motivate the workers to ï¬nish less preferable or intermediate tasks by assigning the right bonuses, iii) is adaptive to changing teams, e.g., change of members and/or change of workersâ skills and preferences, iv) and has good generalization in different team sizes and novel environments.
We have conducted substantial ablation studies by removing the key components, including IL, SR, agent-wise ¢-greedy exploration, and performance history. Our approach shows a consistent performance in standard settings as well as in more challenging ones where workersâ policies are stochastic and sub-optimal, or there are multiple levels of bonuses required to motivate workers.
# 2 RELATED WORK
Multi-agent reinforcement learning. For collaboration problems, common multi-agent reinforce- ment learning (Littman, 1994; Busoniu et al., 2008) usually trains agents (Oliehoek et al., 2008; Foerster et al., 2016; Peng et al., 2017; Omidshaï¬ei et al., 2017; Lowe et al., 2017) so that they will jointly maximize a shared reward. There also have been work on contributing different credits to agents by factorized value functions (Koller & Parr, 1999; Guestrin et al., 2001; Sunehag et al., 2018;
2
Published as a conference paper at ICLR 2019
Rashid et al., 2018), but the spontaneous collaboration assumption is still required. In contrast, we instead train a manager to manage multiple self-interested workers for an optimal collaboration.
Principal-agent problems. Our problem setup is closely related to principal-agent problems (Laf- font & Martimort, 2002) (or moral hazard problems (Hlmstrom, 1979)) in economics. Our manager and workers can be considered as the principal and agents respectively, where agents and principal have different objectives, and the principal needs to provide the right incentives to ensure that the agents make the best choices for what the principal delegates. These problems face similar tech- nical challenges as our problem setup, e.g., information asymmetry between principals and agents, how to setup incentive cost, how to infer agents types, how to monitor their behaviors, etc. Tra- ditional approaches in economics (Myerson, 1982; Hlmstrom & Milgrom, 1991; Sannikov, 2008) build mathematical models to address these issues separately in stateless games, often with the as- sumption that the utility functions and the behavior patterns of the agents are known, leading to complicated models with many tunable parameters. In comparison, our paper provides a practical end-to-end computational framework to address this problem in a data-driven way without any as- sumption about the agentsâ utilities and their decision making processes. Moreover, this framework is adaptive to changes of agents preferences and capabilities, which very few papers in economics have addressed. We also evaluate our approach in more complex game settings than the ones in the current economics literature.
Mechanism design. Similar to our problem setting, mechanism design also tackles problems where agents have different and private preferences (Myerson, 1981; Conitzer & Sandholm, 2002). Its core idea is to set up rules so that the agents will truthfully reveal their preferences for their own interests, and ultimately an optimal collective outcome can be achieved. Our work differs from mechanism de- sign in several ways. First, in addition to preferences, we also acknowledge the fact that agents may have different skills. Second, mechanism design does not consider sequential decision problems, whereas we have to dynamically change the contracts over time.
Optimal reward design. The contract generation in our work can be seen as reward design. Some prior work has proposed optimal reward design approaches (Zhang et al., 2009; Zhang & Parkes, 2008; Sorg et al., 2010; Ratner et al., 2018), where a teacher designs the best reward so that the student will learn faster or alter its policy towards the target policy. In contrast, we try to use deep RL to train optimal reward design policies to manage multi-agents in more complex tasks.
Meta-learning. Our work also resembles meta-learning (Wang et al., 2016; Finn et al., 2017), which typically aims at learning a meta strategy for multiple tasks (Maclaurin et al., 2015; Duan et al., 2017; Hariharan & Girshick, 2017; Wichrowska et al., 2017; Yu et al., 2018; Baker et al., 2017) with good sample efï¬ciency, or for a fast adaptation (Al-Shedivat et al., 2018). The meta-learning in this paper is for addressing the problem of ad-hoc teaming (Bowling & McCracken, 2005; Stone et al., 2010) by training from a limited set of worker population.
Theory of Mind. Our agent modeling is inspired by the prior work on computational theory of mind, where both Bayesian inference (Baker et al., 2009) and end-to-end training (Rabinowitz et al., 2018) have been applied to understand a single agentâs decision making by inferring their minds. In this work, we extend this to optimal multi-agent management by understanding agentsâ minds.
# 3 PROBLEM SETUP
In an environment, there is a set of goals G corresponding to several tasks, N self-interested workers with different minds, and a manager which can observe workersâ behaviors but is agnostic of their true minds. Different from the common Markov game setting for MARL in prior work ), we use an independent Markov Decision Process (MDP), ie., (S;, Ai, Ri, Ti), Vi ⬠N, to lel each worker, where S; and A; are the state space and action space, R; : S; x G; > R is the reward function, and 7; : S; x A; â S; is the state transition probabilities. For achieving goals, a worker has its own policy 7; : S; x G; â> A;. We define the key concepts in this work as follows.
Contract. A contract is a combination of goal and bonus assignment initiated by the manager to a speciï¬c worker. For simplicity, we consider discrete bonuses sampled from a ï¬nite set B. Thus, for worker i at time t, it will receive a contract deï¬ned as (gt i â G is the goal and bt i â B is the corresponding bonus for achieving the goal. Note that the contract will change over time.
3
Published as a conference paper at ICLR 2019
Workerâs mind. We model a workerâs mind by its preferences, intentions, and skills. We do not study worker agentsâ beliefs in this paper, which we leave as future work.
Preference. A workerâs preference is formally deï¬ned as its bounded internal utilities of achieving different goals, ui = (uig : g â G), where 0 ⤠uig ⤠umax. Combined with received contract, the worker agentâs reward function can be deï¬ned as ig = Ri(st rt
i, g) = (uig + 1(g = gt i)1(st i )bt g â G. i = sg), (1)
where sg is the goal state. Intention. The intention of a worker is the goal it is pursuing at any time, i.e., I t i â G, which is not fully revealed to the manager. Based on the reward deï¬ned in Eq. (1), there are multiple ways to choose the goal. For a rational worker who is clear about its skills, it will choose the goal by maximizing expected return. I.e., I t ig], where 0 < γi ⤠1 is its discount factor. However, this requires a worker to have a good estimate of its skills and to be honest, which is not always true. E.g., a worker may want to pursue some valuable goal that it can not reach. So an alternative way is to maximize the utility instead: I t i. This will make a workerâs behavior more deceptive as it may agree to pursue a goal but will rarely produce a fruitful result. In this work, we focus on the second way to achieve a more realistic simulation. After determine which goal to pursue, a worker will decide whether to sign the assigned contact. We denote this by dt i = 1 means that worker i signs the contract given at time t. Skill. The skill of a worker is jointly determined by its state transition probabilities Ti and its policy conditioned on its intention, i.e., Ïi(·|st Managerâs objective. The manager in our setting has its own utility v = (vg : g â G), where vg ⥠0 is the utility of achieving goal g. To maximize its gain, the manager needs to assign contracts to workers optimally. For the sake of realism, we do not assume that the manager knows for sure if a worker agent is really committed to the assignment. The only way to conï¬rm this is to check whether the goal achieved by the worker is consistent with its last assigned goal. If so, then the manager will gain certain reward based on its utility of that goal and pay the promised bonus to the worker. Thus, we may deï¬ne the managerâs reward function as:
N rh = RM(SH) = STS TAHT = 59)(g = gi)(Ug â B), (2) gâ¬G i=1
where Sâ+? = {sitt : i =1,--- ,.N} is the collective states of all present worker agents at time t + 1. The objective of the manager is to find optimal contract generation to maximize its expected return E[)> P29 y'râ], where 0 < 7 < 1is the discount factor for the manager. Note that the manager may get the reward of a goal for multiple times if workers reach the goal respectively.
Population of worker agents. The trained manager should be able to manage an arbitrary com- position of workers rather than only speciï¬c teams of workers. For this, we maintain a population of worker agents during training, and sample several ones from that population in each episode as the present workers in each episode. The identities of these workers are tracked across episodes. In testing, we will sample workers from a new population that has not been seen in training.
# 4 APPROACH
Our approach has three main components as shown in Figure 2: i) performance history module for identiï¬cation, ii) mind tracker module for agent modeling, and iii) manager module for learning goal and bonus assignment policies. We introduce the details of these three components as follows.
4.1 PERFORMANCE HISTORY MODULE AND MIND TRACKER MODULE
To model a workerâs mind, we ï¬rst need to infer its identity so that the manager can distinguish it from other agents. Previous work (Rabinowitz et al., 2018) typically identiï¬es agents via their trajectories in recent episodes. This only works when diverse past trajectories of agents are available beforehand. However, this is impractical in our problem as the past trajectories of a worker depends on the managerâs policy, and thus are highly correlated and can hardly cover all aspects of that agent.
4
Published as a conference paper at ICLR 2019
Tt 2 (ohh oop (syh4, mi, ha h at : â (si* Tt 2m; hi (si, af, gf, bf) (st? mk, hoy
P;
Figure 2: Overview of our network architecture.
In this work, we propose performance history for agent identiï¬cation, which is inspired by the upper conï¬dence bound (UCB) algorithm (Auer et al., 2002) for multi-bandit arm (MAB) problems. Formally, the performance history of worker i is a set of matrices Pi = {P t igb) : t = 1, · · · , T }, where 0 ⤠Ït igb ⤠1 is an empirical estimation of the probability of worker i ï¬nishing goal g within t steps after signing the contract if promised with a bonus of b. We discuss how to update this estimate in Algorithm 1. These matrices are then ï¬atten into a vector and we encode it to a history representation, hi, for worker i.
With identiï¬cation, the manager uses an independent mind tracker module with shared weights to update its belief of a workerâs current mental state online by encoding both current and past information: M (Ît i , bÏ i , aÏ i ) : Ï = 1, · · · , t} is a trajectory of the workerâs behavior and the contracts it has received upon current time t in the current episode.
4.2 MANAGER MODULE
For contract generation, the manager has to consider all present workers as a context. Thus, we encode each workerâs information and pool them over to obtain a context representation, i.e., ct+1 = C({(st+1 , mt i, hi) : i = 1, . . . , N }). With both individual information and the context, we deï¬ne goal policy, Ïg(·|st+1
In addition to learning policies for individual workers, we also want the manager to estimate the overall productivity of a team. A common choice in previous literature (e.g., Lowe et al. (2017)) is to directly learn a centralized value function based on the context. However, this is not informative in our case, as the ï¬nal return depends on achieving multiple goals and paying different bonuses. It is necessary to disentangle goal achievements, bonus payments, and the ï¬nal net gain.
To this end, we adopt the idea of successor representation (SR) (Kulkarni et al. 2017} Barreto et al] 2017} Ma et al.||2018), but use it to estimate the expectation of accumulated goal achievement an onus payment in the future instead of expected state visitation. By defining two vectors @9(c') and $°(câ) indicating goal achievement and bonus payment at time t respectively, we may define our high-level SR, 8% and ®, as 69(c') = EJ, y769%(c't7)] and & (ct) = Elo 9 7 ¢°(c'+7)]. We discuss the details in Appendixâ
4.3 LEARNING
For a joint training of these three modules, we use advantage actor-critic (A2C) (Mnih et al., 2016) to conduct on-policy updates, and learn SR similar to Kulkarni et al. (2016). In addition, we also use imitation learning (IL) to improve the mind tracker. In particular, we predict a workerâs policy based on its mental state representation, i.e., ËÏ(·|st ), which is learned by an additional cross-entropy loss for action prediction. Section A.2 summarizes the details. As our experimental results in Section 5 and Appendix C show, in difï¬cult settings such as random preferences and multiple bonus levels, the policies based on the mental state representation trained with IL have a much better performance than the ones without it.
As the manager is agnostic of workersâ minds, it is important to equip the manager with a good exploration strategy to fully understand each workerâs skills and preferences. A common exploration
5
Published as a conference paper at ICLR 2019
strategy in RL is e-greedy, where an agent has a chance of ⬠to take random actions. However, this may cause premature ending of contracts where a worker does not have sufficient amount of time to accomplish anything. Therefore, we adopt an agent-wise ¢-greedy exploration, where a worker has as a chance of ⬠to be assigned with a random goal at the beginning of an episode and the manager will never change that goal assignment throughout the whole episode. In this way, it is easier for a manager to understand why or why not a worker is able to reach an assigned goal. The details can be seen from the rollout procedure (Algorithm|I) in Appendix |B]
5 EXPERIMENTS
5.1 GENERAL TASK SETTINGS
We introduce the general task settings as follows. Note that without additional speciï¬cation, workers are implemented as rule-based agents (detailed in Appendix D.2).
Workers Resource Collection.
(a) Resource Collection.
(b) Crafting.
Figure 3: (a) Resource Collection environment, where the colored blocks are the resources and the arrows are the workers. (b) Crafting environment (left) and the recipe (right), where the numbers indicate item categories, and the colored block beside an item shows where this item can be crafted.
5.1.1 RESOURCE COLLECTION
In Resource Collection, the goals are deï¬ned as collecting certain type of resources. There are 4 types of resources on a map (Figure 3a) and the total quantity is 10. A worker can ï¬nd any resources but only has the skills to dig out certain types of resources. Note that it may not be skilled at collecting its preferred resources. We consider three different settings:
S1: Each agent can collect up to three types of resources including its preferred type. ⢠S2: Each agent can only collect one type of resource which may or may not be its preferred one. ⢠S3: Similar to S2, except that an agent has a different random preference in each episode and thus
its preference can not be inferred from history.
A worker can take ï¬ve actions: âmove forwardâ, âturn leftâ, âturn rightâ, âcollectâ, and âstopâ, and its skill is reï¬ected by the effect of taking the âcollectâ action. For workers, the internal utility of a resource is 1 if it is preferred; otherwise it is 0. The manager receives a reward of 3 for every resource collected under the contracts, and can choose to pay a worker with a bonus of 1 or 2.
5.1.2 CRAFTING
Different from previous work (Andreas et al., 2017) where all items can be directly crafted from raw materials, we consider three-level recipes (Figure 3b): crafting a top-level item requires crafting certain intermediate item ï¬rst. There are four work stations (colored blocks) for crafting the four types of items respectively. For the manager, each top-level item is worth a reward of 10, but collecting raw materials and crafting intermediate items do not have any reward. Note that certain materials are needed for crafting both top-level items, so the manager must strategically choose which one to craft. In each episode, there are raw materials sufï¬cient for crafting one to two top- level items. All collected materials and crafted items are shared in a common inventory.
We deï¬ne 8 goals including collecting raw materials and crafting items. Each worker prefers one of the collecting goals (the internal utility is 1), and is only capable of crafting one type of items. We
6
Published as a conference paper at ICLR 2019
expands the action space in Section 5.1.1 to include âcraftâ, which will only take effect if it has the ability of crafting the intended item and there are sufï¬cient materials and/or intermediate items. The manager can choose a bonus from 0 to 2 for the contracts, where 0 means no employment.
5.2 BASELINES
For comparison, we have evaluated the following baselines:
e Ours w/o SR: Learning a value function directly w/o successor representations. e Ours w/o IL: Removing action prediction loss. e Temporal e-greedy: Replacing the agent-wise exploration with conventional ¢-greedy exploration. e Agent identification using recent trajectories: Encoding an agentâs trajectories in the most recent 20 episodes instead of its performance history, which is adopted from (2018). e UCB: Applying UCB (2002) by defining the management problem as N multi-armed bandit sub-problems, each of which is for a worker agent. In each MAB sub-problem, pulling an arm is equivalent to assigning a specific goal and payment combination to a worker agent (i.e., there are |G| - |B] arms for each worker agent).
⢠GT types known: Revealing the ground-truth skill and preference of each worker and removing the performance history module, which serves as an estimation of the upper bound performance.
5.3 LEARNING EFFICIENCY
20 18 16 â ours â Ours wio SR pM â urs w/o IL $x â Temporal é-greedy oa â Past 20 trajs â uce 8 ~~~ GT types known 6 4 ° 50000 100000 150000 200000 250000 Episodes
2 2 rape 10 Pp? â ours g ; â ours w/o SR g â Ours w/o IL - â Temporal ¢-greedy Past 20 trajs : uc =~ GT types known ° 30000 100000 150000 200000 250000 Episodes
(a) Resource Collection (S1) (b) Resource Collection (S2) (c) Resource Collection (S3) (d) Crafting
br oe ue 5 Ours 2, â Ours w/o SR é â ours wo IL : â Temporal e-greedy â Past 20 trajs : â uce =-- GT types known 0 30000 100000 150000 200000 250000 Episodes
325{ â Ours â Ours w/o SR 70-07 ours w/o IL if 751 â Temporal e-greedy Pp â Past 20 trajs B50) tcp i $ @ 25] ---_GTtypes known 00 -25 oe © 20009 40000 60000 80000 100000 120000 140000 Episodes
Figure 4: Learning curves of all approaches in Resource Collection and Crafting. The rewards here are not rescaled and we show results from 5 runs in all experiments.
During training, we maintain a population of 40 worker agents. In each episode, we sample a few of them (4 workers in Resource Collection and 8 workers in Crafting). All approaches we have evaluated follow the same training protocol. The learning curves shown in Figure [4] demonstrate that ours consistently performs the best in all settings, and its converged rewards are comparable to the one trained using ground-truth agent types as part of the observations. Moreover, in more difficult settings, e.g., S3 of Resource Collection and Crafting, the benefits of IL, SR, agent-wise e-greedy exploration, and the history representations based on the performance history are more significant. In particular, when there are tasks that do not have any reward themselves such as in Crafting, SR and IL appear to offer the most critical contributions. Without them, the network hardly
7
Published as a conference paper at ICLR 2019
gets any training signals. In all cases, the agent identiï¬cation by encoding recent trajectories learns extremely slowly in Resource Collection and fails to learn anything at all in Crafting.
(a) Resource Collection (b) Crafting
â ours i 12] â Temporal e-greedy H DL) Bo od © 30000 199000 150000 200000 250000 300000350000 400000 Episodes Reward
H Ours: | â Temporal e-greedy OM r 30800 00000 130000 â-200000~=~=«S0000 = 00000 Episodes Reward
Figure 5: Comparison of the adaption capabilities of different exploration strategies during training. The dashed lines indicate the changing points of the worker agentsâ skills. The histograms show how the skill distribution in the same population evolve over time.
â ours â Temporal e-greedy ---- GT types known (no exploration) ee ae Ei 2 / @ ~~ 2000~000~«eGoo~=«OOD~â~«L0000 Episodes
---- GT types known (no exploration) Fa 2 Si â Temporal e-greedy El 2 3 2000 ~~ 4000~«aooo~=«aGOD «10000 Episodes
(a) Res. Collection (S1) (b) Res. Collection (S2) (c) Res. Collection (S3) (d) Crafting
16 EEE, AxesssegaS » > 8 â Ours ââ Temporal e-greedy 6 === GT types known (no exploration) 3 2000 ~4000~=~«SOO~=«OOD «NOOO Episodes
joo = B ; o â Ours ââ Temporal e-greedy ay =-=- GT types known (no exploration) 3 2000 ~4000~=~«SOO~=«OOD «NOOO Episodes
Figure 6: Testing performance when old team members are constantly replaced by new ones.
5.4 ADAPTATION AND GENERALIZATION
In real world scenarios, the population of worker agents and their skills may evolve over time, which requires the manager to continuously and quickly adapt its policy to the unforeseeable changes through a good exploration. Thus we compare our agent-wise e-greedy exploration with the tem- poral ¢-greedy exploration in two cases: i) training with a population where workersâ skills change drastically after 100,000 episodes (the manager does not know when and which workersâ skill sets have been updated), and ii) testing with a team where 75% of the workers will be replaced with new ones after every 2,000 episodes. Both strategies keep the same constant exploration coefficient, i.e., ⬠= 0.1. To have a better sense of the upper bound in the testing case, we also show the performance of the baseline that knows ground-truth agent information where no exploration is need. The results of the two cases are demonstrated in Figure[5]and in Figure |6]respectively.
In the first case, there are moments when the significant change in a populationâs skill distribution (i.e., how many workers can reach a specific goal) will need the manager to greatly change its policy. E.g., the first two changes in Figure|5alresult in new types of resources being collected; the changes in Figure [5b] force the team to craft a different type of top-level item. In such cases, our agent- wise â¬-greedy exploration significantly improves the learning efficiency and increases the converged rewards. When the change is moderate, the policy learned by ours is fairly stable.
8
Published as a conference paper at ICLR 2019
Ours Ours wt SR urs w/o mE Terpora greedy A rypes known 6 2 a lll I | > 2 3 4 8 6 7 B Reward Number of worker agents
(a) Res. Collection (S1) (b) Res. Collection (S2) (c) Res. Collection (S3) (d) Crafting
ours urs w/o SR. Ours wio IL Temporal ¢-greedy GT types known Fy 2 34 5 6 7 a Number of worker agents
1 mm Ours 12) mmm Ours w/o SR to BEBE Ours w/o IL fmm Temporal e-greedy B a] mmm Grtypes known â| _ 1h ol ii iL Number of worker agents
14) mmm Ours 12} Mm Ours w/o SR mam Ours wo IL 2°) mmm Temporal e-greedy B | mmm Grtypes known : lk HII 2 : F li = 3 ar Number of worker agents
Figure 7: Average rewards when different numbers of worker agents are present. The policies are trained with 4 worker agents in Resource Collection and with 8 worker agents in Crafting.
Figure 8: Testing in novel environments. Figure 9: Performance with random actions.
200 = training environment ae new environment with obstacles 15.0 pis g 20 3 & as so 25 : Collection (S1) Collection ($2) Collection ($3) Crafting Environment
20 mm 0% mm 10% - mm 20% > 30% Bio 40% = mm 50% 3 es . Collection (S1) Collection (S2) Collection (S3) Crafting Environment
In the second case, the managers trained by the three methods achieve similar converged rewards in training. While the converged reward of our approach is slightly lower than the upper bound due to exploration, it allows the manager to quickly adapt itself to a new team where it has never seen the most team members. The temporal e-greedy on the other hand never achieves a comparable reward even though its performance is comparable to ours when managing a fixed population.
We also want the managerâs policy to have good generalization in novel scenarios unseen in training, which, in our problems, has two aspects: i) generalization in different numbers of present worker agents, and ii) generalization in new environments. It can be seen from Figure 7 that as the number of workers increases, the manager achieves higher reward until it hits a plateau. Our approach consistently performs better in all settings. It even gains higher rewards than the one with ground- truth does when there are fewer workers. We also add a few walls to create novel environments unseen in training. With the additional obstacles, workersâ paths become more complex, which increases the difï¬culty of inferring their true minds. As suggested by Figure 8, the performance indeed decreases the most in S3 of Resource Collection where online intention inference is critical as the workers do not have ï¬xed preferences.
So far, we have only considered rule-based worker agents with deterministic plans. To see if our approach can handle stochastic and sub-optimal worker policies, we may randomize certain amount of actions taken by the workers (Figure 9) and train a manager with these random policies. When the randomness is moderate (e.g., ⤠20%), the performance is still comparable to the one without random actions. As randomness increases, we start to see larger decrease in reward. In Crafting speciï¬cally, random policies make the workers unlikely to achieve assigned goals within the time limit, thus the manager may never get top-level items if the policies are too random.
More results. In addition to the main experimental results discussed above, we further test our approach from different perspectives: i) showing the effect of the minimum valid period of a contract
9
Published as a conference paper at ICLR 2019
(i.e., constraints for the managerâs commitment), ii) multiple bonus levels, and iii) training RL agents as workers. We summarize these results in Appendix C.
# 6 CONCLUSIONS
In this paper, we propose Mind-aware Multi-agent Management Reinforcement Learning (M°RL) for solving the collaboration problems among self-interested workers with different skills and pref- erences. We train a manager to simultaneously infer workersâ minds and optimally assign contracts to workers for maximizing the overall productivity, for which we combine imitation learning and reinforcement learning for a joint training of agent modeling and management policy optimization. We also improve the model performance by a few techniques including learning high-level succes- sor representation, agent-wise e-greedy exploration, and agent identification based on performance history. Results from extensive experiments demonstrate that our approach learns effectively, gen- eralizes well, and has a fast and continuous adaptation.
# REFERENCES
Maruan Al-Shedivat, Trapit Bansal, Yuri Burda, Ilya Sutskever, Igor Mordatch, and Pieter Abbeel. Continuous adaptation via meta-learning in nonstationary and competitive environments. In The Sixth International Conference on Learning Representations (ICLR), 2018.
Jacob Andreas, Dan Klein, and Sergey Levine. Modular multitask reinforcement learning with policy sketches. In International Conference on Machine Learning (ICML), 2017.
Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235â256, 2002.
Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architec- tures using reinforcement learning. In The Fifth International Conference on Learning Represen- tations (ICLR), 2017.
Chris L Baker, Rebecca Saxe, and Joshua B Tenenbaum. Action understanding as inverse planning. Cognition, 113(3):329â349, 2009.
Andr´e Barreto, Will Dabney, R´emi Munos, Jonathan J. Hunt, Tom Schaul, Hado van Hasselt, and David Silver. Successor features for transfer in reinforcement learning. In Advances in Neural Information Processing Systems (NIPS), 2017.
Michael Bowling and Peter McCracken. Coordination and adaptation in impromptu teams. In The Twentieth National Conference on Artiï¬cial Intelligence (AAAI), 2005.
Lucian Busoniu, Robert Babuska, and Bart De Schutter. A comprehensive survey of multiagent IEEE Trans. Systems, Man, and Cybernetics, Part C, 38(2):156â172, reinforcement learning. 2008.
Vincent Conitzer and Tuomas Sandholm. Complexity of mechanism design. In The Eighteenth conference on Uncertainty in artiï¬cial intelligence (UAI), 2002.
Yan Duan, Marcin Andrychowicz, Bradly Stadie, OpenAI Jonathan Ho, Jonas Schneider, Ilya In Advances Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. in neural information processing systems (NIPS), pp. 1087â1098, 2017.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning (ICML), 2017.
Jakob Foerster, Ioannis Alexandros Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate with deep multi-agent reinforcement learning. In Advances in Neural Information Processing Systems (NIPS), 2016.
Carlos Guestrin, Daphne Koller, and Ronald Parr. Multiagent planning with factored mdps. In Advances in Neural Information Processing Systems (NIPS), 2001.
10
Published as a conference paper at ICLR 2019
Bharath Hariharan and Ross Girshick. Low-shot visual recognition by shrinking and hallucinating features. In IEEE International Conference on Computer Vision (ICCV), 2017.
Bengt Hlmstrom. Moral hazard and observability. The Bell journal of economics, pp. 74â91, 1979.
Bengt Hlmstrom and Paul Milgrom. Multitask principal-agent analyses: Incentive contracts, asset ownership, and job design. Journal of Law, Economics, and Organization, 7:24â52, 1991.
Daphne Koller and Ronald Parr. Computing factored value functions for policies in structured mdps. In International Joint Conference on Aritiï¬cial Intelligence (IJCAI), 1999.
Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J Gershman. Deep successor reinforcement learning. arXiv preprint arXiv:1606.02396, 2016.
Jean-Jacques Laffont and D. Martimort. The theory of incentives: the principal-agent model. Prince- ton University Press, 2002.
Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In The 11th International Conference on Machine Learning (ICML), pp. 157â163, 1994.
Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent actor- critic for mixed cooperative-competitive environments. In Advances in Neural Information Pro- cessing Systems (NIPS, 2017.
Chen Ma, Junfeng Wen, and Yoshua Bengio. Universal successor representations for transfer rein- forcement learning. arXiv preprint arXiv:1804.03758, 2018.
Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. Gradient-based hyperparameter opti- mization through reversible learning. In International Conference on Machine Learning (ICML), 2015.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (ICML), 2016.
Roger B. Myerson. Optimal auction design. Mathematics of operations research, 6(1):58â73, 1981.
Roger B. Myerson. Optimal coordination mechanisms in generalized principalagent problems. Jour- nal of mathematical economics, 10(1):67â81, 1982.
Frans A. Oliehoek, Matthijs TJ Spaan, and Nikos Vlassis. Optimal and approximate q-value func- tions for decentralized pomdps. Journal of Artiï¬cial Intelligence Research, 32:289â353, 2008.
Shayegan Omidshaï¬ei, Jason Pazis, Christopher Amato, Jonathan P. How, and John Vian. Deep decentralized multi-task multi-agent reinforcement learning under partial observability. In Inter- national Conference on Machine Learning (ICML), 2017.
Peng Peng, Ying Wen, Yaodong Yang, Yuan Quan, Zhenkun Tang, Haitao Long, and Jun Wang. Multiagent bidirectionally-coordinated nets emergence of human-level coordination in learning to play starcraft combat games. arXiv preprint arXiv:1703.10069, 2017.
Neil C. Rabinowitz, Frank Perbet, H. Francis Song, Chiyuan Zhang, S.M. Ali Eslami, and Matthew Botvinick. Machine theory of mind. arXiv preprint arXiv:1802.07740, 2018.
Tabish Rashid, Mikayel Samvelyan, Christian Schroeder de Witt, Gregory Farquhar, Jakob Foer- ster, and Shimon Whiteson. Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning. In International Conference on Machine Learning (ICML), 2018.
Ellis Ratner, Dylan Hadï¬eld-Menell, and Anca D. Dragan. Simplifying reward design through divide-and-conquer. In Robotics: Science and Systems (RSS), 2018.
Yuliy Sannikov. A continuous-time version of the principal-agent problem. The Review of Economic Studies, 75(3):957â984, 2008.
11
Published as a conference paper at ICLR 2019
Jonathan Sorg, Richard L. Lewis, and Satinder P. Singh. Reward design via online gradient ascent. In Advances in Neural Information Processing Systems (NIPS), 2010.
Peter Stone, Gal A. Kaminka, Sarit Kraus, and Jeffrey S. Rosenschein. Ad hoc autonomous agent teams: Collaboration without pre-coordination. In The Twenty-Fourth Conference on Artiï¬cial Intelligence (AAAI), 2010.
Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas SOnnerat, Joel Z. Leibo, Karl Tuyls, and Thore Graepel. Value- decomposition networks for cooperative multi-agent learning based on team reward. In Interna- tional Conference on Autonomous Agents and MultiAgent Systems (AAMAS), 2018.
Tijmen Tieleman and Geoffrey Hinto. Lecture 6.5ârmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.
Olga Wichrowska, Niru Maheswaranathan, Matthew W Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Nando de Freitas, and Jascha Sohl-Dickstein. Learned optimizers that scale and generalize. arXiv preprint arXiv:1703.04813, 2017.
Tianhe Yu, Chelsea Finn, Annie Xie, Sudeep Dasari, Pieter Abbeel, and Sergey Levine. One-shot imitation from observing humans via domain-adaptive meta-learning. In Robotics: Science and Systems (RSS), 2018.
Haoqi Zhang and David Parkes. Value-based policy teaching with active indirect elicitation. AAAI Conference on Artiï¬cial Intelligence (AAAI), 2008. In
Haoqi Zhang, David C. Parkes, and Yiling Chen. Policy teaching through reward function learning. In The 10th ACM conference on Electronic commerce, 2009.
Yuke Zhu, Daniel Gordon, Eric Kolve, Dieter Fox, Li Fei-Fei, Abhinav Gupta, Roozbeh Mottaghi, and Ali Farhadi. Visual semantic planning using deep successor representations. In International Conference on Computer Vision (ICCV), 2017.
12
Published as a conference paper at ICLR 2019
# A DETAILS OF APPROACH
# A.1 HIGH-LEVEL SUCCESSOR REPRESENTATION
We define two vectors indicating the goal achievement and bonus payment at time t: $9(c!) = (SS, L(sftt = sq) (gf = 9) + g ⬠G) and 3(c*) = (2, U(st*! = sje) (bf = b) + bE B). Let w = (b: b © B) be the weights for different bonus payments, then the reward for the manager at the current moment can be written as r? = v' $9 (c!) â w' ¢?(c*). Following the typical SR definition, we define our high-level SR as
a(c!) = e)yo7 o9(¢ Gat @)
and
o° -E Yvreeer] : (4) 7=0
Thus, the value function can be written as
Vict) =v 89(c!) â wl O(c), (5)
A.2 DETAILS OF LEARNING
The policy gradient for the goal assignment is:
Vo,J (8g) 1S y,, | flog r9(gi|s$,miâ!, hi, c's Og) A(e!) + AH(m9)] , (6) i=1
where A(câ) is the advantage estimation defined as A(c') = ST) y7r'+7 â (vl @9(ct) â w! 6(ct)) and H(-) is the entropy regularization weighted by the constant A = 0.01 for encourag- ing exploration. Similarly, the policy gradient for the bonus assignment is
Vo, J (0) = FLV log 1° (bi|st, mi! hi, c's 4) A(e!) + XH(w°)] (7)
The successor representations may be updated by the following gradient:
2 2 Vong = (5 oe tr) = O9(c!; O40 ) + Voge (5 7 oh (e*7) â Bc . 0, ») . (8)
For imitation learning, we use the following cross-entropy loss:
N 1 A _ wd log #(ai|s%, 9, bj, m; »|. (9) Ly =E
Note that the gradient from IL will be combined with Eq. 6, Eq. 7, and Eq. 8 to update corresponding parameters (see Algorithm 2 in Appendix B for details).
In tasks where there the unknown dependency may introduce a large cost in the beginning of training, the managerâs exploration may be restricted as the policy becomes too conservative in spending, which is common in many real world scenarios. To encourage exploration in these tasks, we adopt a two-phase learning curriculum. Namely, we optionally conduct a warm-up phase before the standard learning described above. In this warm-up phase, we give loans to the manager to cover its cost (i.e., setting the total payments to be zero when optimizing the networks). In practice, we apply this only to Crafting, where we set a ï¬xed number of episodes at the beginning of training to be the warm-up phase. Note that this only apply to the optimization; we still need to deduct the payments from the rewards as the actually outcomes (this is equivalent to paying back the loans).
13
Published as a conference paper at ICLR 2019
Algorithm 1 Rollout(Tmnax, Tc, â¬, {Pi : i = 1,--- , N}) Input: Maximum steps Tmax, commitment constraint T., exploration coefficient â¬, and the performance history
Input: Maximum steps Tmax, commitment constraint T., exploration coefficient â¬, and the performance history of the present worker agents {P; : i = 1,--- , N} Output: Trajectories of all workers {P17 : i = 1,--+ , N}, and the rewards for the manager R 1: Initialize the environment 2: Set performance history update rate 7 = 0.1. 3: t- 0 4.7) -0,Vi=1,---,N 5: repeat 6: Observe current states (s/, aj), Vi = 1,--- , N, from the environment 7: Encode P; to hi, Vi = 1,--- ,N 8: ift = 0 then 9: fori =1,---,Ndo 10: reo 11: Sample a random goal g? ~ G, and set a minimum bonus b? as the initial contract. 12: e; ~ U(0, 1) 13: Assign the contract (9, be) to worker i and receive dj (signing decision) 14: i+ di 15: Te & {(st,a!, gf, b)} 16: end for 17: roo 18: else . 19: P= Yyeg Der U(si = 8g) L(g = 9" (vg â BF) 20: mi} â M(Ti1 hi), Vi=1,---,N 21: cf H C({(si, mit, hi) :i=1,..., N}) 22: fori =1,---,Ndo 23: T=Ti 24: if (t â 1)%T. # 0 or e; < ⬠then # Commitment constraint and agent-wise e-greedy 25: gi G5" 26: else 27: Sample a new goal gf ~ 79(-|s}, mi~t, hi, câ) 28: Ti + T(gt = gi) 29: end if 30: Sample a new bonus bi ~ mw (sé, mit, hi, c') (w/ temporal e-greedy) 31: Assign the contract (gj, bf) to worker ¢ and receive d{ 32: 1+ 71 +t 33: Tol tu {(si,ai, gf, bi} 34: if 7! > 7; ors} = Sta then # the last contract was accepted and has been terminated now 35: Pree-ipea â (1 1)bjpe-ayea + (Si = 841-1) 36: end if 37: end for 38: end if 39: Re Ru {r'} 40: t<t+l1 41: until ¢ = Tinax or the task is finished 42: T<t
# Algorithm 2 Learning Algorithm
1: Initialize parameters 2: Set the maximum steps of an episode to be Tmax, maximum training episodes to be Ntrain, and the number
of worker agents in an episode to be N
The coefficient for the agent-wise e-greedy exploration to be ⬠Initialize a population of worker agents and set their performance history P to be all zeros. for i = 1,--- , Nmax do Sample N worker agents from the training population and obtain their performance
Sample N worker agents from the training population and obtain their performance history {Pi : i = 1, · · · , N }
7: 8: 9:
# Run an episode {ÎT i Update parameters based on the IL loss LIL deï¬ned in Eq. (9) and the gradients deï¬ned Eq. (6),
# Run an episode
:i=1,-+.,N},R © Rollout(Tinax, Te, â¬, {Pi i = 1,-++ , N})
Eq. (7), and Eq. (8) jointly.
# 10: end for
14
Published as a conference paper at ICLR 2019
20 18 16 214 3 52 z 10 5 â1 âs5 6 â 10 © 50000 100000 150000 200000 250000 Episodes
12 1 1} â 5 â 10 2° 3 Fa z 4 2 50000 100000 150000 200000 250000 Episodes
(a) Res. Collection (S1) (b) Res. Collection (S2) (c) Res. Collection (S3) (d) Crafting
â1 10 5 3 10 3 3 5 6 3 4 > © 50000 100000 150000 200000 250000 Episodes
ps âs 1004 â 10 â1 75 50 25 0.0 -25 -5.0 ® 20000 40000 60000 @0000 100000 120000 140000 Episodes
Figure 10: Learning curves in three settings of Resource Collection and in Crafting when applying different commitment constraints for the manager. The numbers indicate how many steps a contract must holds.
# B PSEUDO CODE OF OUR ALGORITHMS
We summarize the rollout algorithm and the learning algorithm in Algorithm 1 and Algorithm 2 respectively.
C MORE EXPERIMENTAL RESULTS
C.1 CONSTRAINING THE MANAGERâS COMMITMENT
The managerâs commitment is deï¬ned as the shortest time a contract must remain unchanged, which essentially constrains how frequent the manager can change its goal and bonus assignment. While short-term commitment allows the manager to quickly update contracts once it has a better mind estimation or once a goal has been reached, long-term commitment often leads to a more accurate skill assessment when the tasks are difï¬cult (e.g., crafting high-level items depends on the results of other tasks and thus needs a longer time). This is supported by the results in Figure 10: shorter com- mitment works better in Resource Collection while Crafting needs a longer commitment. Note that the commitment constraint is 1 step and 10 steps for Resource Collection and Crafting respectively in all other experiments.
C.2 MULTIPLE BONUS LEVELS
In previous experiments, the internal utility of goals for a worker agent is either 0 or 1. Here, we sample the internal utility from 0 to 3. Consequently, the manager needs to select the right bonus from multiple choices to pay each worker (i.e., a bonus from 1 to 4 for Resource Collection and a bonus from 0 to 4 for Crafting). In Resource Collection, the manager will get a reward of 5 for every collected resource; in crafting, the reward for a top-level item is still 10. As shown in Figure 11, the advantage of our approach is even more signiï¬cant compared to the ones in single payment level.
C.3 RL AGENTS AS WORKERS
Finally, we train a population of 40 RL worker agents for Resource Collection, where each one is trained with only one goal, and for each goal we train 10 agents using different random seeds.
15
Published as a conference paper at ICLR 2019
os Ours 2 Ours w/o SR 20 Ours w/o IL Temporal e-greedy Past 20 trajs UCB GT types known © 50000 100000 150000 200000 250000 Episodes
(a) Res. Collection (3 skills) (b) Res. Collection (1 skill)
20 Bis & g Ours w/o SR & Ours w/o IL 5 Temporal e-greedy Past 20 trajs ucB GT types known © 50000 100000 150000 200000 250000 Episodes
â ours â urs w/o SR â ours wo IL â Temporal â¬-greedy â Past 20 trajs â uce --- GT types known {© â 20000 40000 60000 80000 100000120000140000160000 Episodes
(c) Crafting
Figure 11: Learning curves when there are multiple bonus levels. In (a), a worker can reach 3 goals; in (b), a worker can reach 1 goal; in (c), a worker can collect raw materials and craft one type of items.
12 10 Reward ââ Rule-based worker agents 2 â RLworker agents oO 50000 100000 150000 200000 250000 300000 350000 Episodes
Figure 12: Comparing training with rule-based worker agents and with RL worker agents in Re- source Collection.
16
Published as a conference paper at ICLR 2019
This creates a population with similar skill distributions as in S2, but with very different policies. Figure 12 suggests that training to manage RL agents is slower as their policies are less predictable and less rational, but our approach can still gradually learn a good policy whose performance is comparable to the one using rule-based worker agents.
# D IMPLEMENTATION DETAILS
D.1 NETWORK ARCHITECTURE
Performance History Module. We ï¬atten the matrices in workerâs performance history Pi and concatenate them together to get a single vector. We then encode this vector into a 128-dim history representation hi.
Mind Tracker Module. We represent the state of a worker by multiple channels corresponding to different types of items. We also use four channels to indicate its orientation. We augment the state with additional |.A||G||6| channels, where a channel is either all ones or all zeros for indicating the action it takes, and the goal and bonus it receives. We then encode the state into a 128-dim hidden state by a convolutional layer with 64 channels and kernels of 1 x 1, a fully connected (FC) layer (128-dim), and an LSTM with 128 hidden units. We fuse this vector with the history representation h;. Specifically, we adopt an attention-based mechanism for the fu- sion, where we first get an attention vector (128-dim) from the history representation by an FC layer with sigmoid activation, and then do element-wise product between the attention vector and the hidden state from the LSTM. The fused vector becomes mâ. This can be formally written as mi = f(ti,hi) = f © o(h;), where o(-) is an FC layer with sigmoid activation, ¢! is the hidden state from the LSTM, and © is element-wise product. We fuse it with the state using the same mechanism: f(o(s'*, gf, Bt), mi) = o(s!*, gf i, b) @ o(m!), where o(st**, gb) is the state encoding. By feeding the fused vector to an FC layer with softmax activation, we may get the predicted worker policy.
Manager Module. For each worker, we concatenate its mind representation and history repre- sentation together and fuse it with the workerâs state using the attention-based mechanism where the attention vector comes from the concatenated vector. By pooling over these fused vectors of individual workers, we can get the context vector, from which we construct the two successor rep- resentations by two separate FC layers. Here, we use average pooling, but one may also use other pooling mechanisms. Finally, for each worker, we concatenate the context vector with its fused vec- tor we obtained before pooling, and consequently get the goal policy and bonus policy by two FC layers with softmax activation.
All modules are trained with RMSProp (Tieleman & Hinto, 2012) using a learning rate of 0.0004.
D.2 RULE-BASED WORKER AGENTS
Each rule-based worker ï¬nds a shortest path to the nearest location related to a goal, and its skill is deï¬ned as the post effect of its âcollectâ and âcraftâ actions. In particular, for collecting certain resource/material, it will go to the closest one that has the same type as the goal indicates and is currently not being collected by other agents, whereas for crafting an item, it will go to the corre- sponding work station if it is currently unoccupied. If a worker can perform collecting tasks, then after it takes âcollectâ action, the item will be collected from the map and appears in the inventory; otherwise no real effect will appear. This applies to crafting tasks as well, except in crafting, task dependencies must also be satisï¬ed before âcraftâ action can take real effect.
When considering random actions, for each step, we sample a random action with the speciï¬ed chance to replace the action from the rule-based plan.
D.3 RL WORKER AGENTS
We implement all RL worker agents using the same network architecture, where an agentâs state is augmented by additional channels to include the reward for each goal (i.e., |G||B| channels). We use a convolution layer with 64 channels and kernels of 1 Ã 1 to encode the state, and feed it to an 128-dim FC layer and then an LSTM with a 128-dim hidden state. We then predict the policy using
17
Published as a conference paper at ICLR 2019
an FC layer with softmax activation based on the hidden state from the LSTM. For each goal, we train 10 RL worker agents using 10 random seeds. For each episode, we randomly assign a reward from b â B to an agent as the hypothetical reward it may receive from a manager. We then set the corresponding channel to be all ones and set the remaining |G||B| â 1 channels to be all zeros. Note that we assume all RL workers have the ability to perform âcollectâ and âcraftâ actions.
18 | {
"id": "1606.02396"
} |
1809.11096 | Large Scale GAN Training for High Fidelity Natural Image Synthesis | Despite recent progress in generative image modeling, successfully generating
high-resolution, diverse samples from complex datasets such as ImageNet remains
an elusive goal. To this end, we train Generative Adversarial Networks at the
largest scale yet attempted, and study the instabilities specific to such
scale. We find that applying orthogonal regularization to the generator renders
it amenable to a simple "truncation trick," allowing fine control over the
trade-off between sample fidelity and variety by reducing the variance of the
Generator's input. Our modifications lead to models which set the new state of
the art in class-conditional image synthesis. When trained on ImageNet at
128x128 resolution, our models (BigGANs) achieve an Inception Score (IS) of
166.5 and Frechet Inception Distance (FID) of 7.4, improving over the previous
best IS of 52.52 and FID of 18.6. | http://arxiv.org/pdf/1809.11096 | Andrew Brock, Jeff Donahue, Karen Simonyan | cs.LG, stat.ML | null | null | cs.LG | 20180928 | 20190225 | 9 1 0 2
b e F 5 2 ] G L . s c [
2 v 6 9 0 1 1 . 9 0 8 1 : v i X r a
Published as a conference paper at ICLR 2019
# LARGE SCALE GAN TRAINING FOR HIGH FIDELITY NATURAL IMAGE SYNTHESIS
# Andrew Brockâ â Heriot-Watt University ajb5@hw.ac.uk
Jeff Donahueâ DeepMind jeffdonahue@google.com
Karen Simonyanâ DeepMind simonyan@google.com
# ABSTRACT
Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities speciï¬c to such scale. We ï¬nd that applying orthogonal regularization to the generator renders it amenable to a simple âtruncation trick,â allowing ï¬ne control over the trade-off between sample ï¬delity and variety by reducing the variance of the Generatorâs input. Our modiï¬cations lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128Ã128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Fr´echet Inception Dis- tance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.65.
INTRODUCTION
Figure 1: Class-conditional samples generated by our model.
The state of generative image modeling has advanced dramatically in recent years, with Generative Adversarial Networks (GANs, Goodfellow et al. (2014)) at the forefront of efforts to generate high- ï¬delity, diverse images with models learned directly from data. GAN training is dynamic, and sensitive to nearly every aspect of its setup (from optimization parameters to model architecture), but a torrent of research has yielded empirical and theoretical insights enabling stable training in a variety of settings. Despite this progress, the current state of the art in conditional ImageNet modeling (Zhang et al., 2018) achieves an Inception Score (Salimans et al., 2016) of 52.5, compared to 233 for real data.
In this work, we set out to close the gap in ï¬delity and variety between images generated by GANs and real-world images from the ImageNet dataset. We make the following three contributions to- wards this goal:
⢠We demonstrate that GANs beneï¬t dramatically from scaling, and train models with two to four times as many parameters and eight times the batch size compared to prior art. We introduce two simple, general architectural changes that improve scalability, and modify a regularization scheme to improve conditioning, demonstrably boosting performance.
âWork done at DeepMind â Equal contribution
1
Published as a conference paper at ICLR 2019
⢠As a side effect of our modiï¬cations, our models become amenable to the âtruncation trick,â a simple sampling technique that allows explicit, ï¬ne-grained control of the trade- off between sample variety and ï¬delity.
⢠We discover instabilities speciï¬c to large scale GANs, and characterize them empirically. Leveraging insights from this analysis, we demonstrate that a combination of novel and existing techniques can reduce these instabilities, but complete training stability can only be achieved at a dramatic cost to performance.
Our modiï¬cations substantially improve class-conditional GANs. When trained on ImageNet at 128Ã128 resolution, our models (BigGANs) improve the state-of-the-art Inception Score (IS) and Fr´echet Inception Distance (FID) from 52.52 and 18.65 to 166.5 and 7.4 respectively. We also successfully train BigGANs on ImageNet at 256Ã256 and 512Ã512 resolution, and achieve IS and FID of 232.5 and 8.1 at 256Ã256 and IS and FID of 241.5 and 11.5 at 512Ã512. Finally, we train our models on an even larger dataset â JFT-300M â and demonstrate that our design choices transfer well from ImageNet. Code and weights for our pretrained generators are publicly available 1.
# 2 BACKGROUND
A Generative Adversarial Network (GAN) involves Generator (G) and Discriminator (D) networks whose purpose, respectively, is to map random noise to samples and discriminate real and generated samples. Formally, the GAN objective, in its original form (Goodfellow et al., 2014) involves ï¬nding a Nash equilibrium to the following two player min-max problem:
min G max D Exâ¼qdata(x)[log D(x)] + Ezâ¼p(z)[log(1 â D(G(z)))], (1)
where z â Rdz is a latent variable drawn from distribution p(z) such as N (0, I) or U[â1, 1]. When applied to images, G and D are usually convolutional neural networks (Radford et al., 2016). Without auxiliary stabilization techniques, this training procedure is notoriously brittle, requiring ï¬nely-tuned hyperparameters and architectural choices to work at all.
Much recent research has accordingly focused on modiï¬cations to the vanilla GAN procedure to impart stability, drawing on a growing body of empirical and theoretical insights (Nowozin et al., 2016; Sønderby et al., 2017; Fedus et al., 2018). One line of work is focused on changing the objective function (Arjovsky et al., 2017; Mao et al., 2016; Lim & Ye, 2017; Bellemare et al., 2017; Salimans et al., 2018) to encourage convergence. Another line is focused on constraining D through gradient penalties (Gulrajani et al., 2017; Kodali et al., 2017; Mescheder et al., 2018) or normalization (Miyato et al., 2018), both to counteract the use of unbounded loss functions and ensure D provides gradients everywhere to G.
Of particular relevance to our work is Spectral Normalization (Miyato et al., 2018), which enforces Lipschitz continuity on D by normalizing its parameters with running estimates of their ï¬rst singular values, inducing backwards dynamics that adaptively regularize the top singular direction. Relatedly Odena et al. (2018) analyze the condition number of the Jacobian of G and ï¬nd that performance is dependent on Gâs conditioning. Zhang et al. (2018) ï¬nd that employing Spectral Normalization in G improves stability, allowing for fewer D steps per iteration. We extend on these analyses to gain further insight into the pathology of GAN training.
Other works focus on the choice of architecture, such as SA-GAN (Zhang et al., 2018) which adds the self-attention block from (Wang et al., 2018) to improve the ability of both G and D to model global structure. ProGAN (Karras et al., 2018) trains high-resolution GANs in the single-class setting by training a single model across a sequence of increasing resolutions.
In conditional GANs (Mirza & Osindero, 2014) class information can be fed into the model in In (Odena et al., 2017) it is provided to G by concatenating a 1-hot class vector various ways. to the noise vector, and the objective is modiï¬ed to encourage conditional samples to maximize the corresponding class probability predicted by an auxiliary classiï¬er. de Vries et al. (2017) and
# 1https://tfhub.dev/s?q=biggan
2
Published as a conference paper at ICLR 2019
Batch | Ch. | Param (M) | Shared | Skip-z | Ortho. Ttr x 10° FID IS 256 64 81.5 SA-GAN Baseline 18.65 52.52 512 64 81.5 x x x 15.30 58.77(£1.18) 1024 | 64 81.5 x x x 14.88 63.03(£1.42) 2048 | 64 81.5 x x x 12.39 76.85(£3.83) 2048 | 96 173.5 x x x 9.54(4 92.98(£4.27) 2048 | 96 160.6 v x x 9.18(4 94.94(£1.32) 2048 | 96 158.3 v v x 8.73 (4 98.76(£2.84) 2048 | 96 158.3 v v v 8.51(£0.¢ 99.31(£2.10) 2048 | 64 713 v v v 10.48(£0.10) | 86.90(£0.61)
Table 1: Fr´echet Inception Distance (FID, lower is better) and Inception Score (IS, higher is better) for ablations of our proposed modiï¬cations. Batch is batch size, Param is total number of param- eters, Ch. is the channel multiplier representing the number of units in each layer, Shared is using shared embeddings, Skip-z is using skip connections from the latent to multiple layers, Ortho. is Orthogonal Regularization, and Itr indicates if the setting is stable to 106 iterations, or it collapses at the given iteration. Other than rows 1-4, results are computed across 8 random initializations.
Dumoulin et al. (2017) modify the way class conditioning is passed to G by supplying it with class- conditional gains and biases in BatchNorm (Ioffe & Szegedy, 2015) layers. In Miyato & Koyama (2018), D is conditioned by using the cosine similarity between its features and a set of learned class embeddings as additional evidence for distinguishing real and generated samples, effectively encouraging generation of samples whose features match a learned class prototype.
Objectively evaluating implicit generative models is difï¬cult (Theis et al., 2015). A variety of works have proposed heuristics for measuring the sample quality of models without tractable likelihoods (Salimans et al., 2016; Heusel et al., 2017; Bi´nkowski et al., 2018; Wu et al., 2017). Of these, the Inception Score (IS, Salimans et al. (2016)) and Fr´echet Inception Distance (FID, Heusel et al. (2017)) have become popular despite their notable ï¬aws (Barratt & Sharma, 2018). We employ them as approximate measures of sample quality, and to enable comparison against previous work.
# 3 SCALING UP GANS
In this section, we explore methods for scaling up GAN training to reap the performance beneï¬ts of larger models and larger batches. As a baseline, we employ the SA-GAN architecture of Zhang et al. (2018), which uses the hinge loss (Lim & Ye, 2017; Tran et al., 2017) GAN objective. We provide class information to G with class-conditional BatchNorm (Dumoulin et al., 2017; de Vries et al., 2017) and to D with projection (Miyato & Koyama, 2018). The optimization settings follow Zhang et al. (2018) (notably employing Spectral Norm in G) with the modiï¬cation that we halve the learning rates and take two D steps per G step. For evaluation, we employ moving averages of Gâs weights following Karras et al. (2018); Mescheder et al. (2018); Yazc et al. (2018), with a decay of 0.9999. We use Orthogonal Initialization (Saxe et al., 2014), whereas previous works used N (0, 0.02I) (Radford et al., 2016) or Xavier initialization (Glorot & Bengio, 2010). Each model is trained on 128 to 512 cores of a Google TPUv3 Pod (Google, 2018), and computes BatchNorm statistics in G across all devices, rather than per-device as is typical. We ï¬nd progressive growing (Karras et al., 2018) unnecessary even for our 512Ã512 models. Additional details are in Appendix C.
We begin by increasing the batch size for the baseline model, and immediately ï¬nd tremendous beneï¬ts in doing so. Rows 1-4 of Table 1 show that simply increasing the batch size by a factor of 8 improves the state-of-the-art IS by 46%. We conjecture that this is a result of each batch covering more modes, providing better gradients for both networks. One notable side effect of this scaling is that our models reach better ï¬nal performance in fewer iterations, but become unstable and undergo complete training collapse. We discuss the causes and ramiï¬cations of this in Section 4. For these experiments, we report scores from checkpoints saved just before collapse.
We then increase the width (number of channels) in each layer by 50%, approximately doubling the number of parameters in both models. This leads to a further IS improvement of 21%, which we posit is due to the increased capacity of the model relative to the complexity of the dataset. Doubling
3
Published as a conference paper at ICLR 2019
(a) (b)
Figure 2: (a) The effects of increasing truncation. From left to right, the threshold is set to 2, 1, 0.5, 0.04. (b) Saturation artifacts from applying truncation to a poorly conditioned model.
the depth did not initially lead to improvement â we addressed this later in the BigGAN-deep model, which uses a different residual block structure.
We note that class embeddings c used for the conditional BatchNorm layers in G contain a large number of weights. Instead of having a separate layer for each embedding (Miyato et al., 2018; Zhang et al., 2018), we opt to use a shared embedding, which is linearly projected to each layerâs gains and biases (Perez et al., 2018). This reduces computation and memory costs, and improves training speed (in number of iterations required to reach a given performance) by 37%. Next, we add direct skip connections (skip-z) from the noise vector z to multiple layers of G rather than just the initial layer. The intuition behind this design is to allow G to use the latent space to directly in- ï¬uence features at different resolutions and levels of hierarchy. In BigGAN, this is accomplished by splitting z into one chunk per resolution, and concatenating each chunk to the conditional vector c which gets projected to the BatchNorm gains and biases. In BigGAN-deep, we use an even simpler design, concatenating the entire z with the conditional vector without splitting it into chunks. Pre- vious works (Goodfellow et al., 2014; Denton et al., 2015) have considered variants of this concept; our implementation is a minor modiï¬cation of this design. Skip-z provides a modest performance improvement of around 4%, and improves training speed by a further 18%.
3.1 TRADING OFF VARIETY AND FIDELITY WITH THE TRUNCATION TRICK
Unlike models which need to backpropagate through their latents, GANs can employ an arbitrary prior p(z), yet the vast majority of previous works have chosen to draw z from either N (0, I) or U[â1, 1]. We question the optimality of this choice and explore alternatives in Appendix E.
Remarkably, our best results come from using a different latent distribution for sampling than was used in training. Taking a model trained with z â¼ N (0, I) and sampling z from a truncated nor- mal (where values which fall outside a range are resampled to fall inside that range) immediately provides a boost to IS and FID. We call this the Truncation Trick: truncating a z vector by re- sampling the values with magnitude above a chosen threshold leads to improvement in individual sample quality at the cost of reduction in overall sample variety. Figure 2(a) demonstrates this: as the threshold is reduced, and elements of z are truncated towards zero (the mode of the latent dis- tribution), individual samples approach the mode of Gâs output distribution. Related observations about this trade-off were made in (Marchesi, 2016; Pieters & Wiering, 2014).
This technique allows ï¬ne-grained, post-hoc selection of the trade-off between sample quality and variety for a given G. Notably, we can compute FID and IS for a range of thresholds, obtaining the variety-ï¬delity curve reminiscent of the precision-recall curve (Figure 17). As IS does not penal- ize lack of variety in class-conditional models, reducing the truncation threshold leads to a direct increase in IS (analogous to precision). FID penalizes lack of variety (analogous to recall) but also rewards precision, so we initially see a moderate improvement in FID, but as truncation approaches zero and variety diminishes, the FID sharply drops. The distribution shift caused by sampling with different latents than those seen in training is problematic for many models. Some of our larger models are not amenable to truncation, producing saturation artifacts (Figure 2(b)) when fed trun- cated noise. To counteract this, we seek to enforce amenability to truncation by conditioning G to be smooth, so that the full space of z will map to good output samples. For this, we turn to Orthogonal Regularization (Brock et al., 2017), which directly enforces the orthogonality condition:
4
Published as a conference paper at ICLR 2019
Ra(W) = BIWTW ~ I/2, @)
where W is a weight matrix and β a hyperparameter. This regularization is known to often be too limiting (Miyato et al., 2018), so we explore several variants designed to relax the constraint while still imparting the desired smoothness to our models. The version we ï¬nd to work best removes the diagonal terms from the regularization, and aims to minimize the pairwise cosine similarity between ï¬lters but does not constrain their norm:
RoW) = B|W'Woea-Diz, 6) where 1 denotes a matrix with all elements set to 1. We sweep values and select 10~4, finding this small added penalty sufficient to improve the likelihood that our models will be amenable to truncation. Across runs in Table[]] we observe that without Orthogonal Regularization, only 16% of models are amenable to truncation, compared to 60% when trained with Orthogonal Regularization.
3.2 SUMMARY
We ï¬nd that current GAN techniques are sufï¬cient to enable scaling to large models and distributed, large-batch training. We ï¬nd that we can dramatically improve the state of the art and train models up to 512Ã512 resolution without need for explicit multiscale methods like Karras et al. (2018). Despite these improvements, our models undergo training collapse, necessitating early stopping in practice. In the next two sections we investigate why settings which were stable in previous works become unstable when applied at scale.
# 4 ANALYSIS
(a) G (b) D
= Collapse _/ Iteration
Collapse Iteration
Figure 3: A typical plot of the ï¬rst singular value Ï0 in the layers of G (a) and D (b) before Spectral Normalization. Most layers in G have well-behaved spectra, but without constraints a small sub- set grow throughout training and explode at collapse. Dâs spectra are noisier but otherwise better- behaved. Colors from red to violet indicate increasing depth.
4.1 CHARACTERIZING INSTABILITY: THE GENERATOR
Much previous work has investigated GAN stability from a variety of analytical angles and on toy problems, but the instabilities we observe occur for settings which are stable at small scale, necessitating direct analysis at large scale. We monitor a range of weight, gradient, and loss statistics during training, in search of a metric which might presage the onset of training collapse, similar to (Odena et al., 2018). We found the top three singular values Ï0, Ï1, Ï2 of each weight matrix to be the most informative. They can be efï¬ciently computed using the Alrnoldi iteration method (Golub & der Vorst, 2000), which extends the power iteration method, used in Miyato et al. (2018), to estimation of additional singular vectors and values. A clear pattern emerges, as can be seen in Figure 3(a) and Appendix F: most G layers have well-behaved spectral norms, but some layers
5
Published as a conference paper at ICLR 2019
(typically the ï¬rst layer in G, which is over-complete and not convolutional) are ill-behaved, with spectral norms that grow throughout training and explode at collapse.
To ascertain if this pathology is a cause of collapse or merely a symptom, we study the effects of imposing additional conditioning on G to explicitly counteract spectral explosion. First, we directly regularize the top singular values Ï0 of each weight, either towards a ï¬xed value Ïreg or towards some ratio r of the second singular value, r · sg(Ï1) (with sg the stop-gradient operation to prevent the regularization from increasing Ï1). Alternatively, we employ a partial singular value decompo- sition to instead clamp Ï0. Given a weight W , its ï¬rst singular vectors u0 and v0, and Ïclamp the value to which the Ï0 will be clamped, our weights become:
W = W â max(0, 00 â Tclamp)VoUg ; (4) where Ojamp is set to either peg or T - sg(o1). We observe that both with and without Spectral Normalization these techniques have the effect of preventing the gradual increase and explosion of either oo or 2, but even though in some cases they mildly improve performance, no combination prevents training collapse. This evidence suggests that while conditioning G might improve stability, it is insufficient to ensure stability. We accordingly turn our attention to D.
4.2 CHARACTERIZING INSTABILITY: THE DISCRIMINATOR
As with G, we analyze the spectra of Dâs weights to gain insight into its behavior, then seek to stabilize training by imposing additional constraints. Figure 3(b) displays a typical plot of Ï0 for D (with further plots in Appendix F). Unlike G, we see that the spectra are noisy, Ï0 is well-behaved, Ï1 and the singular values grow throughout training but only jump at collapse, instead of exploding.
The spikes in Dâs spectra might suggest that it periodically receives very large gradients, but we observe that the Frobenius norms are smooth (Appendix F), suggesting that this effect is primarily concentrated on the top few singular directions. We posit that this noise is a result of optimization through the adversarial training process, where G periodically produces batches which strongly per- turb D . If this spectral noise is causally related to instability, a natural counter is to employ gradient penalties, which explicitly regularize changes in Dâs Jacobian. We explore the R1 zero-centered gradient penalty from Mescheder et al. (2018):
Y Ry = SE pox) [VD()|lz] - (5)
With the default suggested γ strength of 10, training becomes stable and improves the smoothness and boundedness of spectra in both G and D, but performance severely degrades, resulting in a 45% reduction in IS. Reducing the penalty partially alleviates this degradation, but results in increasingly ill-behaved spectra; even with the penalty strength reduced to 1 (the lowest strength for which sud- den collapse does not occur) the IS is reduced by 20%. Repeating this experiment with various strengths of Orthogonal Regularization, DropOut (Srivastava et al., 2014), and L2 (See Appendix I for details), reveals similar behaviors for these regularization strategies: with high enough penalties on D, training stability can be achieved, but at a substantial cost to performance.
We also observe that Dâs loss approaches zero during training, but undergoes a sharp upward jump at collapse (Appendix F). One possible explanation for this behavior is that D is overï¬tting to the train- ing set, memorizing training examples rather than learning some meaningful boundary between real and generated images. As a simple test for Dâs memorization (related to Gulrajani et al. (2017)), we evaluate uncollapsed discriminators on the ImageNet training and validation sets, and measure what percentage of samples are classiï¬ed as real or generated. While the training accuracy is consistently above 98%, the validation accuracy falls in the range of 50-55%, no better than random guessing (regardless of regularization strategy). This conï¬rms that D is indeed memorizing the training set; we deem this in line with Dâs role, which is not explicitly to generalize, but to distill the training data and provide a useful learning signal for G. Additional experiments and discussion are provided in Appendix G.
4.3 SUMMARY
We ï¬nd that stability does not come solely from G or D, but from their interaction through the adversarial training process. While the symptoms of their poor conditioning can be used to track and
6
Published as a conference paper at ICLR 2019
Model SN-GAN SA-GAN BigGAN BigGAN BigGAN BigGAN-deep BigGAN-deep BigGAN-deep Res. 128 128 128 256 512 128 256 512 FID/IS 27.62/36.80 18.65/52.52 8.7 ± .6/98.8 ± 3 8.7 ± .1/142.3 ± 2 8.1/144.2 5.7 ± .3/124.5 ± 2 6.9 ± .2/171.4 ± 2 7.5/152.8 (min FID) / IS N/A N/A 7.7 ± .2/126.5 ± 0 7.7 ± .1/178.0 ± 5 7.6/170.3 6.3 ± .3/148.1 ± 4 7.0 ± .1/202.6 ± 2 7.7/181.4 FID / (valid IS) N/A N/A 9.6 ± .4/166.3 ± 1 9.3 ± .3/233.1 ± 1 11.8/241.4 7.4 ± .6/166.5 ± 1 8.1 ± .1/232.5 ± 2 11.5/241.5 FID / (max IS) N/A N/A 25 ± 2/206 ± 2 25 ± 5/291 ± 4 27.0/275 25 ± 2/253 ± 11 27 ± 8/317 ± 6 39.7/298
Table 2: Evaluation of models at different resolutions. We report scores without truncation (Column 3), scores at the best FID (Column 4), scores at the IS of validation data (Column 5), and scores at the max IS (Column 6). Standard deviations are computed over at least three random initializations.
identify instability, ensuring reasonable conditioning proves necessary for training but insufï¬cient to prevent eventual training collapse. It is possible to enforce stability by strongly constraining D, but doing so incurs a dramatic cost in performance. With current techniques, better ï¬nal performance can be achieved by relaxing this conditioning and allowing collapse to occur at the later stages of training, by which time a model is sufï¬ciently trained to achieve good results.
5 EXPERIMENTS
(a) 128Ã128 (b) 256Ã256 (c) 512Ã512 (d)
Figure 4: Samples from our BigGAN model with truncation threshold 0.5 (a-c) and an example of class leakage in a partially trained model (d).
5.1 EVALUATION ON IMAGENET
We evaluate our models on ImageNet ILSVRC 2012 (Russakovsky et al., 2015) at 128Ã128, 256Ã256, and 512Ã512 resolutions, employing the settings from Table 1, row 8. The samples generated by our models are presented in Figure 4, with additional samples in Appendix A, and on- line 2. We report IS and FID in Table 2. As our models are able to trade sample variety for quality, it is unclear how best to compare against prior art; we accordingly report values at three settings, with complete curves in Appendix D. First, we report the FID/IS values at the truncation setting which attains the best FID. Second, we report the FID at the truncation setting for which our modelâs IS is the same as that attained by the real validation data, reasoning that this is a passable measure of max- imum sample variety achieved while still achieving a good level of âobjectness.â Third, we report FID at the maximum IS achieved by each model, to demonstrate how much variety must be traded off to maximize quality. In all three cases, our models outperform the previous state-of-the-art IS and FID scores achieved by Miyato et al. (2018) and Zhang et al. (2018).
In addition to the BigGAN model introduced in the ï¬rst version of the paper and used in the majority of experiments (unless otherwise stated), we also present a 4x deeper model (BigGAN-deep) which uses a different conï¬guration of residual blocks. As can be seen from Table 2, BigGAN-deep sub- stantially outperforms BigGAN across all resolutions and metrics. This conï¬rms that our ï¬ndings
# 2https://drive.google.com/drive/folders/1lWC6XEPD0LT5KUnPXeve_kWeY-FxH002
7
Published as a conference paper at ICLR 2019
Ch. | Param (M) | Shared | Skip-z ] Ortho. FID 1S (min FID) /TS ] FID / (max IS) 64 317.1 x x x 48.38 | 23.27 48.6/23.1 49.1/23.9 64 99.4 v v v 23.48 | 24.78 22.4/21.0 60.9/35.8 96 207.9 v v v 18.84 | 27.86 17.1/23.3 51.6/38.1 128 355.7 v v v 13.75 | 30.61 13.0/28.0 46.2/47.8
Table 3: BigGAN results on JFT-300M at 256Ã256 resolution. The FID and IS columns report these scores given by the JFT-300M-trained Inception v2 classiï¬er with noise distributed as z â¼ N (0, I) (non-truncated). The (min FID) / IS and FID / (max IS) columns report scores at the best FID and IS from a sweep across truncated noise distributions ranging from Ï = 0 to Ï = 2. Images from the JFT-300M validation set have an IS of 50.88 and FID of 1.94.
extend to other architectures, and that increased depth leads to improvement in sample quality. Both BigGAN and BigGAN-deep architectures are described in Appendix B.
Our observation that D overï¬ts to the training set, coupled with our modelâs sample quality, raises the obvious question of whether or not G simply memorizes training points. To test this, we perform class-wise nearest neighbors analysis in pixel space and the feature space of pre-trained classiï¬er networks (Appendix A). In addition, we present both interpolations between samples and class-wise interpolations (where z is held constant) in Figures 8 and 9. Our model convincingly interpolates between disparate samples, and the nearest neighbors for its samples are visually distinct, suggesting that our model does not simply memorize training data.
We note that some failure modes of our partially-trained models are distinct from those previously observed. Most previous failures involve local artifacts (Odena et al., 2016), images consisting of texture blobs instead of objects (Salimans et al., 2016), or the canonical mode collapse. We observe class leakage, where images from one class contain properties of another, as exempliï¬ed by Figure 4(d). We also ï¬nd that many classes on ImageNet are more difï¬cult than others for our model; our model is more successful at generating dogs (which make up a large portion of the dataset, and are mostly distinguished by their texture) than crowds (which comprise a small portion of the dataset and have more large-scale structure). Further discussion is available in Appendix A.
5.2 ADDITIONAL EVALUATION ON JFT-300M
To conï¬rm that our design choices are effective for even larger and more complex and diverse datasets, we also present results of our system on a subset of JFT-300M (Sun et al., 2017). The full JFT-300M dataset contains 300M real-world images labeled with 18K categories. Since the category distribution is heavily long-tailed, we subsample the dataset to keep only images with the 8.5K most common labels. The resulting dataset contains 292M images â two orders of magnitude larger than ImageNet. For images with multiple labels, we sample a single label randomly and independently whenever an image is sampled. To compute IS and FID for the GANs trained on this dataset, we use an Inception v2 classiï¬er (Szegedy et al., 2016) trained on this dataset. Quantitative results are presented in Table 3. All models are trained with batch size 2048. We compare an ablated version of our model â comparable to SA-GAN (Zhang et al., 2018) but with the larger batch size â against a âfullâ BigGAN model that makes uses of all of the techniques applied to obtain the best results on ImageNet (shared embedding, skip-z, and orthogonal regularization). Our results show that these techniques substantially improve performance even in the setting of this much larger dataset at the same model capacity (64 base channels). We further show that for a dataset of this scale, we see signiï¬cant additional improvements from expanding the capacity of our models to 128 base channels, while for ImageNet GANs that additional capacity was not beneï¬cial.
In Figure 19 (Appendix D), we present truncation plots for models trained on this dataset. Unlike for ImageNet, where truncation limits of Ï â 0 tend to produce the highest ï¬delity scores, IS is typically maximized for our JFT-300M models when the truncation value Ï ranges from 0.5 to 1. We suspect that this is at least partially due to the intra-class variability of JFT-300M labels, as well as the relative complexity of the image distribution, which includes images with multiple objects at a variety of scales. Interestingly, unlike models trained on ImageNet, where training tends to collapse without heavy regularization (Section 4), the models trained on JFT-300M remain stable over many
8
Published as a conference paper at ICLR 2019
hundreds of thousands of iterations. This suggests that moving beyond ImageNet to larger datasets may partially alleviate GAN stability issues.
The improvement over the baseline GAN model that we achieve on this dataset without changes to the underlying models or training and regularization techniques (beyond expanded capacity) demon- strates that our ï¬ndings extend from ImageNet to datasets with scale and complexity thus far un- precedented for generative models of images.
# 6 CONCLUSION
We have demonstrated that Generative Adversarial Networks trained to model natural images of multiple categories highly beneï¬t from scaling up, both in terms of ï¬delity and variety of the gen- erated samples. As a result, our models set a new level of performance among ImageNet GAN models, improving on the state of the art by a large margin. We have also presented an analysis of the training behavior of large scale GANs, characterized their stability in terms of the singular values of their weights, and discussed the interplay between stability and performance.
ACKNOWLEDGMENTS
We would like to thank Kai Arulkumaran, Matthias Bauer, Peter Buchlovsky, Jeffrey Defauw, Sander Dieleman, Ian Goodfellow, Ariel Gordon, Karol Gregor, Dominik Grewe, Chris Jones, Jacob Menick, Augustus Odena, Suman Ravuri, Ali Razavi, Mihaela Rosca, and Jeff Stanway.
# REFERENCES
Mart´ın Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: A system for large-scale machine learning. In OSDI, 2016.
Martin Arjovsky, Soumith Chintala, and L´eon Bottou. Wasserstein generative adversarial networks. In ICML, 2017.
Shane Barratt and Rishi Sharma. A note on the Inception Score. In arXiv preprint arXiv:1801.01973, 2018.
Marc G. Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer, and R´emi Munos. The Cramer distance as a solution to biased Wasserstein gra- dients. In arXiv preprint arXiv:1705.10743, 2017.
Mikolaj Bi´nkowski, Dougal J. Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD GANs. In ICLR, 2018.
Andrew Brock, Theodore Lim, J.M. Ritchie, and Nick Weston. Neural photo editing with introspec- tive adversarial networks. In ICLR, 2017.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016.
Harm de Vries, Florian Strub, J´er´emie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron Courville. Modulating early visual processing by language. In NIPS, 2017.
Emily Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a laplacian pyramid of adversarial networks. In NIPS, 2015.
Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. In ICLR, 2017.
9
Published as a conference paper at ICLR 2019
William Fedus, Mihaela Rosca, Balaji Lakshminarayanan, Andrew M. Dai, Shakir Mohamed, and Ian Goodfellow. Many paths to equilibrium: GANs do not need to decrease a divergence at every step. In ICLR, 2018.
Xavier Glorot and Yoshua Bengio. Understanding the difï¬culty of training deep feedforward neural networks. In AISTATS, 2010.
Gene Golub and Henk Van der Vorst. Eigenvalue computation in the 20th century. Journal of Computational and Applied Mathematics, 123:35â65, 2000.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, and Aaron Courville Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
Google. Cloud TPUs. https://cloud.google.com/tpu/, 2018.
Ishaan Gulrajani, Faruk Ahmed, Mart´ın Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Im- proved training of Wasserstein GANs. In NIPS, 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In CVPR, 2016.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, G¨unter Klambauer, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local nash equilib- rium. In NIPS, 2017.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In ICLR, 2018.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2014.
Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira. On convergence and stability of GANs. In arXiv preprint arXiv:1705.07215, 2017.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
Jae Hyun Lim and Jong Chul Ye. Geometric GAN. In arXiv preprint arXiv:1705.02894, 2017.
Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, and Zhen Wang. Least squares generative adversarial networks. In arXiv preprint arXiv:1611.04076, 2016.
Marco Marchesi. Megapixel size image creation using generative adversarial networks. In arXiv preprint arXiv:1706.00082, 2016.
Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? In ICML, 2018.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. In arXiv preprint arXiv:1411.1784, 2014.
Takeru Miyato and Masanori Koyama. cGANs with projection discriminator. In ICLR, 2018.
Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In ICLR, 2018.
Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural sam- plers using variational divergence minimization. In NIPS, 2016.
Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 2016.
Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxil- iary classiï¬er GANs. In ICML, 2017.
10
Published as a conference paper at ICLR 2019
Augustus Odena, Jacob Buckman, Catherine Olsson, Tom B. Brown, Christopher Olah, Colin Raf- fel, and Ian Goodfellow. Is generator conditioning causally related to GAN performance? In ICML, 2018.
Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron Courville. FiLM: Visual reasoning with a general conditioning layer. In AAAI, 2018.
Mathijs Pieters and Marco Wiering. Comparing generative adversarial network techniques for image creation and modiï¬catio. In arXiv preprint arXiv:1803.09093, 2014.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng ImageNet large scale visual Huang, Andrej Karpathy, Aditya Khosla, and Michael Bernstein. recognition challenge. IJCV, 115:211â252, 2015.
Tim Salimans and Diederik Kingma. Weight normalization: A simple reparameterization to accel- erate training of deep neural networks. In NIPS, 2016.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.
Tim Salimans, Han Zhang, Alec Radford, and Dimitris Metaxas. Improving GANs using optimal transport. In ICLR, 2018.
Andrew Saxe, James McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In ICLR, 2014.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszr. Amortised map inference for image super-resolution. In ICLR, 2017.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overï¬tting. JMLR, 15:1929â1958, 2014.
Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable ef- fectiveness of data in deep learning era. In ICCV, 2017.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Re- thinking the inception architecture for computer vision. In CVPR, 2016.
Lucas Theis, A¨aron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. In arXiv preprint arXiv:1511.01844, 2015.
Dustin Tran, Rajesh Ranganath, and David M. Blei. Hierarchical implicit models and likelihood-free variational inference. In NIPS, 2017.
Xiaolong Wang, Ross B. Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018.
Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger B. Grosse. On the quantitative analysis of decoder-based generative models. In ICLR, 2017.
Yasin Yazc, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Georgios Piliouras, and Vijay In arXiv preprint Chandrasekhar. The unusual effectiveness of averaging in gan training. arXiv:1806.04498, 2018.
Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. In arXiv preprint arXiv:1805.08318, 2018.
11
Published as a conference paper at ICLR 2019
APPENDIX A ADDITIONAL SAMPLES, INTERPOLATIONS, AND NEAREST NEIGHBORS FROM IMAGENET MODELS
Figure 5: Samples generated by our BigGAN model at 256Ã256 resolution.
Figure 6: Samples generated by our BigGAN model at 512Ã512 resolution.
12
Published as a conference paper at ICLR 2019
(a) (b)
Figure 7: Comparing easy classes (a) with difï¬cult classes (b) at 512Ã512. Classes such as dogs which are largely textural, and common in the dataset, are far easier to model than classes involving unaligned human faces or crowds. Such classes are more dynamic and structured, and often have details to which human observers are more sensitive. The difï¬culty of modeling global structure is further exacerbated when producing high-resolution images, even with non-local blocks.
Figure 8: Interpolations between z, c pairs.
13
Published as a conference paper at ICLR 2019
Figure 9: Interpolations between c with z held constant. Pose semantics are frequently maintained between endpoints (particularly in the ï¬nal row). Row 2 demonstrates that grayscale is encoded in the joint z, c space, rather than in z.
Figure 10: Nearest neighbors in VGG-16-fc7 (Simonyan & Zisserman, 2015) feature space. The generated image is in the top left.
14
Published as a conference paper at ICLR 2019
Figure 11: Nearest neighbors in ResNet-50-avgpool (He et al., 2016) feature space. The generated image is in the top left.
Figure 12: Nearest neighbors in pixel space. The generated image is in the top left.
15
Published as a conference paper at ICLR 2019
Figure 13: Nearest neighbors in VGG-16-fc7 (Simonyan & Zisserman, 2015) feature space. The generated image is in the top left.
Figure 14: Nearest neighbors in ResNet-50-avgpool (He et al., 2016) feature space. The generated image is in the top left.
16
Published as a conference paper at ICLR 2019
# APPENDIX B ARCHITECTURAL DETAILS
In the BigGAN model (Figure 15), we use the ResNet (He et al., 2016) GAN architecture of (Zhang et al., 2018), which is identical to that used by (Miyato et al., 2018), but with the channel pattern in D modiï¬ed so that the number of ï¬lters in the ï¬rst convolutional layer of each block is equal to the number of output ï¬lters (rather than the number of input ï¬lters, as in Miyato et al. (2018); Gulrajani et al. (2017)). We use a single shared class embedding in G, and skip connections for the latent vector z (skip-z). In particular, we employ hierarchical latent spaces, so that the latent vector z is split along its channel dimension into chunks of equal size (20-D in our case), and each chunk is concatenated to the shared class embedding and passed to a corresponding residual block as a conditioning vector. The conditioning of each block is linearly projected to produce per-sample gains and biases for the BatchNorm layers of the block. The bias projections are zero-centered, while the gain projections are centered at 1. Since the number of residual blocks depends on the image resolution, the full dimensionality of z is 120 for 128 à 128, 140 for 256 à 256, and 160 for 512 à 512 images.
The BigGAN-deep model (Figure 16) differs from BigGAN in several aspects. It uses a simpler vari- ant of skip-z conditioning: instead of ï¬rst splitting z into chunks, we concatenate the entire z with the class embedding, and pass the resulting vector to each residual block through skip connections. BigGAN-deep is based on residual blocks with bottlenecks (He et al., 2016), which incorporate two additional 1 à 1 convolutions: the ï¬rst reduces the number of channels by a factor of 4 before the more expensive 3 à 3 convolutions; the second produces the required number of output chan- nels. While BigGAN relies on 1 à 1 convolutions in the skip connections whenever the number of channels needs to change, in BigGAN-deep we use a different strategy aimed at preserving identity throughout the skip connections. In G, where the number of channels needs to be reduced, we sim- ply retain the ï¬rst group of channels and drop the rest to produce the required number of channels. In D, where the number of channels should be increased, we pass the input channels unperturbed, and concatenate them with the remaining channels produced by a 1 à 1 convolution. As far as the network conï¬guration is concerned, the discriminator is an exact reï¬ection of the generator. There are two blocks at each resolution (BigGAN uses one), and as a result BigGAN-deep is four times deeper than BigGAN. Despite their increased depth, the BigGAN-deep models have signiï¬cantly fewer parameters mainly due to the bottleneck structure of their residual blocks. For example, the 128 à 128 BigGAN-deep G and D have 50.4M and 34.6M parameters respectively, while the corre- sponding original BigGAN models have 70.4M and 88.0M parameters. All BigGAN-deep models use attention at 64 à 64 resolution, channel width multiplier ch = 128, and z â R128.
(a) (b) (c)
Figure 15: (a) A typical architectural layout for BigGANâs G; details are in the following tables. (b) A Residual Block (ResBlock up) in BigGANâs G. (c) A Residual Block (ResBlock down) in BigGANâs D.
17
Published as a conference paper at ICLR 2019
(a)
(b)
(c)
Figure 16: (a) A typical architectural layout for BigGAN-deepâs G; details are in the following tables. (b) A Residual Block (ResBlock up) in BigGAN-deepâs G. (c) A Residual Block (ResBlock down) in BigGAN-deepâs D. A ResBlock (without up or down) in BigGAN-deep does not include the Upsample or Average Pooling layers, and has identity skip connections.
18
Published as a conference paper at ICLR 2019
Table 4: BigGAN architecture for 128 Ã 128 images. ch represents the channel width multiplier in each network from Table 1.
z â R120 â¼ N (0, I) Embed(y) â R128 Linear (20 + 128) â 4 Ã 4 Ã 16ch ResBlock up 16ch â 16ch ResBlock up 16ch â 8ch ResBlock up 8ch â 4ch ResBlock up 4ch â 2ch Non-Local Block (64 Ã 64) ResBlock up 2ch â ch BN, ReLU, 3 Ã 3 Conv ch â 3 Tanh
RGB image x â R128Ã128Ã3 ResBlock down ch â 2ch Non-Local Block (64 à 64) ResBlock down 2ch â 4ch ResBlock down 4ch â 8ch ResBlock down 8ch â 16ch ResBlock down 16ch â 16ch ResBlock 16ch â 16ch ReLU, Global sum pooling Embed(y)·h + (linear â 1)
# (a) Generator
# (b) Discriminator
Table 5: BigGAN architecture for 256 Ã 256 images. Relative to the 128 Ã 128 architecture, we add an additional ResBlock in each network at 16Ã16 resolution, and move the non-local block in G to 128 Ã 128 resolution. Memory constraints prevent us from moving the non-local block in D.
z â R140 â¼ N (0, I) Embed(y) â R128 RGB image x â R256Ã256Ã3 Linear (20 + 128) â 4 à 4 à 16ch ResBlock down ch â 2ch ResBlock up 16ch â 16ch ResBlock down 2ch â 4ch ResBlock up 16ch â 8ch Non-Local Block (64 à 64) ResBlock up 8ch â 8ch ResBlock down 4ch â 8ch ResBlock up 8ch â 4ch ResBlock down 8ch â 8ch ResBlock up 4ch â 2ch ResBlock down 8ch â 16ch Non-Local Block (128 à 128) ResBlock down 16ch â 16ch ResBlock up 2ch â ch ResBlock 16ch â 16ch BN, ReLU, 3 à 3 Conv ch â 3 ReLU, Global sum pooling Tanh Embed(y)·h + (linear â 1) (b) Discriminator
19
Published as a conference paper at ICLR 2019
Table 6: BigGAN architecture for 512 Ã 512 images. Relative to the 256 Ã 256 architecture, we add an additional ResBlock at the 512 Ã 512 resolution. Memory constraints force us to move the non-local block in both networks back to 64 Ã 64 resolution as in the 128 Ã 128 pixel setting.
z â R160 â¼ N (0, I) Embed(y) â R128 Linear (20 + 128) â 4 Ã 4 Ã 16ch ResBlock up 16ch â 16ch ResBlock up 16ch â 8ch ResBlock up 8ch â 8ch ResBlock up 8ch â 4ch Non-Local Block (64 Ã 64) ResBlock up 4ch â 2ch ResBlock up 2ch â ch ResBlock up ch â ch BN, ReLU, 3 Ã 3 Conv ch â 3 Tanh
RGB image x â R512Ã512Ã3 ResBlock down ch â ch ResBlock down ch â 2ch ResBlock down 2ch â 4ch Non-Local Block (64 à 64) ResBlock down 4ch â 8ch ResBlock down 8ch â 8ch ResBlock down 8ch â 16ch ResBlock down 16ch â 16ch ResBlock 16ch â 16ch ReLU, Global sum pooling Embed(y)·h + (linear â 1) (b) Discriminator
(a) Generator
Table 7: BigGAN-deep architecture for 128 Ã 128 images.
z â R128 â¼ N (0, I) Embed(y) â R128 ResBlock 16ch â 16ch ResBlock up 16ch â 16ch ResBlock 16ch â 16ch ResBlock up 16ch â 8ch ResBlock 8ch â 8ch ResBlock up 8ch â 4ch ResBlock 4ch â 4ch ResBlock up 4ch â 2ch Non-Local Block (64 Ã 64) ResBlock 2ch â 2ch ResBlock up 2ch â ch BN, ReLU, 3 Ã 3 Conv ch â 3 Tanh
RGB image x â R128Ã128Ã3 3 à 3 Conv 3 â ch ResBlock down ch â 2ch ResBlock 2ch â 2ch Non-Local Block (64 à 64) ResBlock down 2ch â 4ch ResBlock 4ch â 4ch ResBlock down 4ch â 8ch ResBlock 8ch â 8ch ResBlock down 8ch â 16ch ResBlock 16ch â 16ch ResBlock down 16ch â 16ch ResBlock 16ch â 16ch ReLU, Global sum pooling Embed(y)·h + (linear â 1)
(a) Generator
(b) Discriminator
20
Published as a conference paper at ICLR 2019
Table 8: BigGAN-deep architecture for 256 Ã 256 images.
z â R128 â¼ N (0, I) Embed(y) â R128 Linear (128 + 128) â 4 Ã 4 Ã 16ch ResBlock 16ch â 16ch ResBlock up 16ch â 16ch ResBlock 16ch â 16ch ResBlock up 16ch â 8ch ResBlock 8ch â 8ch ResBlock up 8ch â 8ch ResBlock 8ch â 8ch ResBlock up 8ch â 4ch Non-Local Block (64 Ã 64) ResBlock 4ch â 4ch ResBlock up 4ch â 2ch ResBlock 2ch â 2ch ResBlock up 2ch â ch BN, ReLU, 3 Ã 3 Conv ch â 3 Tanh
RGB image x â R256Ã256Ã3 3 à 3 Conv 3 â ch ResBlock down ch â 2ch ResBlock 2ch â 2ch ResBlock down 2ch â 4ch ResBlock 4ch â 4ch Non-Local Block (64 à 64) ResBlock down 4ch â 8ch ResBlock 8ch â 8ch ResBlock down 8ch â 8ch ResBlock 8ch â 8ch ResBlock down 8ch â 16ch ResBlock 16ch â 16ch ResBlock down 16ch â 16ch ResBlock 16ch â 16ch ReLU, Global sum pooling Embed(y)·h + (linear â 1)
# (a) Generator
21
Published as a conference paper at ICLR 2019
Table 9: BigGAN-deep architecture for 512 Ã 512 images.
z â R128 â¼ N (0, I) Embed(y) â R128
Linear (128 + 128) â 4 Ã 4 Ã 16ch ResBlock 16ch â 16ch ResBlock up 16ch â 16ch ResBlock 16ch â 16ch ResBlock up 16ch â 8ch ResBlock 8ch â 8ch ResBlock up 8ch â 8ch ResBlock 8ch â 8ch ResBlock up 8ch â 4ch Non-Local Block (64 Ã 64) ResBlock 4ch â 4ch ResBlock up 4ch â 2ch ResBlock 2ch â 2ch ResBlock up 2ch â ch ResBlock ch â ch ResBlock up ch â ch BN, ReLU, 3 Ã 3 Conv ch â 3 Tanh
3 à 3 Conv 3 â ch ResBlock down ch â ch ResBlock ch â ch ResBlock down ch â 2ch ResBlock 2ch â 2ch ResBlock down 2ch â 4ch ResBlock 4ch â 4ch Non-Local Block (64 à 64) ResBlock down 4ch â 8ch ResBlock 8ch â 8ch ResBlock down 8ch â 8ch ResBlock 8ch â 8ch ResBlock down 8ch â 16ch ResBlock 16ch â 16ch ResBlock down 16ch â 16ch ResBlock 16ch â 16ch ReLU, Global sum pooling Embed(y)·h + (linear â 1)
(a) Generator
(b) Discriminator
22
Published as a conference paper at ICLR 2019
# APPENDIX C EXPERIMENTAL DETAILS
Our basic setup follows SA-GAN (Zhang et al., 2018), and is implemented in TensorFlow (Abadi et al., 2016). We employ the architectures detailed in Appendix B, with non-local blocks inserted at a single stage in each network. Both G and D networks are initialized with Orthogonal Initialization (Saxe et al., 2014). We use Adam optimizer (Kingma & Ba, 2014) with β1 = 0 and β2 = 0.999 and a constant learning rate. For BigGAN models at all resolutions, we use 2 · 10â4 in D and 5 · 10â5 in G. For BigGAN-deep, we use the learning rate of 2 · 10â4 in D and 5 · 10â5 in G for 128 à 128 models, and 2.5 · 10â5 in both D and G for 256 à 256 and 512 à 512 models. We experimented with the number of D steps per G step (varying it from 1 to 6) and found that two D steps per G step gave the best results.
We use an exponential moving average of the weights of G at sampling time, with a decay rate set to 0.9999. We employ cross-replica BatchNorm in G, where batch statistics are aggregated across all devices, rather than a single device as in standard implementations. Spectral Normalization is used in both G and D, following SA-GAN (Zhang et al.||2018). We train on a Google TPU v3 Pod, with the number of cores proportional to the resolution: 128 for 128x128, 256 for 256x256, and 512 for 512x512. Training takes between 24 and 48 hours for most models. We increase ¢ from the default 10~* to 107+ in BatchNorm and Spectral Norm to mollify low-precision numerical issues. We preprocess data by cropping along the long edge and rescaling to a given resolution with area resampling.
C.1 BATCHNORM STATISTICS AND SAMPLING
The default behavior with batch normalized classiï¬er networks is to use a running average of the activation moments at test time. Previous works (Radford et al., 2016) have instead used batch statistics when sampling images. While this is not technically an invalid way to sample, it means that results are dependent on the test batch size (and how many devices it is split across), and further complicates reproducibility.
We ï¬nd that this detail is extremely important, with changes in test batch size producing drastic changes in performance. This is further exacerbated when one uses exponential moving averages of Gâs weights for sampling, as the BatchNorm running averages are computed with non-averaged weights and are poor estimates of the activation statistics for the averaged weights.
To counteract both these issues, we employ âstanding statistics,â where we compute activation statis- tics at sampling time by running the G through multiple forward passes (typically 100) each with different batches of random noise, and storing means and variances aggregated across all forward passes. Analogous to using running statistics, this results in Gâs outputs becoming invariant to batch size and the number of devices, even when producing a single sample.
# C.2 CIFAR-10
We run our networks on CIFAR-10 (Krizhevsky & Hinton, 2009) using the settings from Table 1, row 8, and achieve an IS of 9.22 and an FID of 14.73 without truncation.
INCEPTION SCORES OF IMAGENET IMAGES
We compute the IS for both the training and validation sets of ImageNet. At 128Ã128 the training data has an IS of 233, and the validation data has an IS of 166. At 256Ã256 the training data has an IS of 377, and the validation data has an IS of 234. At 512Ã512 the training data has an IS of 348, and the validation data has an IS of 241. The discrepancy between training and validation scores is due to the Inception classiï¬er having been trained on the training data, resulting in high-conï¬dence outputs that are preferred by the Inception Score.
23
Published as a conference paper at ICLR 2019
# APPENDIX D ADDITIONAL PLOTS
sot ht tg RIBS ISAENPBAZB gg BigGAN BigGAN-deep wo - ws- 1s0- ws- ° SAGAN 200 - 25 - 250 - FID 275-9 SN-GAN 300 - 325 - 350- 375 - 400 - 425 - 3 40 «650060 70D SOdDs110«a9.«130 «140-150. so 170 180 190 20 210 20 20 20 250 Inception Score
Figure 17: IS vs. FID at 128Ã128. Scores are averaged across three random seeds.
sROmBM se ROBIE,
sROmBM
se ROBIE,
Figure 18: IS vs. FID at 256 and 512 pixels. Scores are averaged across three random seeds for 256.
24
Published as a conference paper at ICLR 2019
FD ws I as 2 function of runcation Jrra00m FD JFT3000 FID WSIS as a functor sra00" AD
# JFT-300M inception Score
Figure 19: JFT-300M IS vs. FID at 256Ã256. We show truncation values from Ï = 0 to Ï = 2 (top) and from Ï = 0.5 to Ï = 1.5 (bottom). Each curve corresponds to a row in Table 3. The curve labeled with baseline corresponds to the ï¬rst row (with orthogonal regularization and other techniques disabled), while the rest correspond to rows 2-4 â the same architecture at different capacities (Ch).
25
Published as a conference paper at ICLR 2019
# APPENDIX E CHOOSING LATENT SPACES
While most previous work has employed N (0, I) or U[â1, 1] as the prior for z (the noise input to G), we are free to choose any latent distribution from which we can sample. We explore the choice of latents by considering an array of possible designs, described below. For each latent, we provide the intuition behind its design and brieï¬y describe how it performs when used as a drop-in replacement for z â¼ N (0, I) in an SA-GAN baseline. As the Truncation Trick proved more beneï¬cial than switching to any of these latents, we do not perform a full ablation study, and employ z â¼ N (0, I) for our main results to take full advantage of truncation. The two latents which we ï¬nd to work best without truncation are Bernoulli {0, 1} and Censored Normal max (N (0, I), 0), both of which improve speed of training and lightly improve ï¬nal performance, but are less amenable to truncation. We also ablate the choice of latent space dimensonality (which by default is z â R128), ï¬nding that we are able to successfully train with latent dimensions as low as z â R8, and that with z â R32 we see a minimal drop in performance. While this is substantially smaller than many previous works, direct comparison to single-class networks (such as those in Karras et al. (2018), which employ a z â R512 latent space on a highly constrained dataset with 30,000 images) is improper, as our networks have additional class information provided as input.
LATENTS
N (0, I). A standard choice of the latent space which we use in the main experiments. ⢠U[â1, 1]. Another standard choice; we ï¬nd that it performs similarly to N (0, I). ⢠Bernoulli {0, 1}. A discrete latent might reï¬ect our prior that underlying factors of variation in natural images are not continuous, but discrete (one feature is present, another is not). This latent outperforms N (0, I) (in terms of IS) by 8% and requires 60% fewer iterations. ⢠max (N (0, I), 0), also called Censored Normal. This latent is designed to introduce spar- sity in the latent space (reï¬ecting our prior that certain latent features are sometimes present and sometimes not), but also allow those latents to vary continuously, expressing different degrees of intensity for latents which are active. This latent outperforms N (0, I) (in terms of IS) by 15-20% and tends to require fewer iterations.
⢠Bernoulli {â1, 1}. This latent is designed to be discrete, but not sparse (as the network can learn to activate in response to negative inputs). This latent performs near-identically to N (0, I).
⢠Independent Categorical in {â1, 0, 1}, with equal probability. This distribution is chosen to be discrete and have sparsity, but also to allow latents to take on both positive and negative values. This latent performs near-identically to N (0, I).
⢠N (0, I) multiplied by Bernoulli {0, 1}. This distribution is chosen to have continuous latent factors which are also sparse (with a peak at zero), similar to Censored Normal but not constrained to be positive. This latent performs near-identically to N (0, I).
Concatenating N (0, I) and Bernoulli {0, 1}, each taking half of the latent dimensions. This is inspired by Chen et al. (2016), and is chosen to allow some factors of variation to be discrete, while others are continuous. This latent outperforms N (0, I) by around 5%. ⢠Variance annealing: we sample from N (0, ÏI), where Ï is allowed to vary over training. We compared a variety of piecewise schedules and found that starting with Ï = 2 and annealing towards Ï = 1 over the course of training mildly improved performance. The space of possible variance schedules is large, and we did not explore it in depth â we suspect that a more principled or better-tuned schedule could more strongly impact performance. ⢠Per-sample variable variance: N (0, ÏiI), where Ïi â¼ U[Ïl, Ïh] independently for each sample i in a batch, and (Ïl, Ïh) are hyperparameters. This distribution was chosen to try and improve amenability to the Truncation Trick by feeding the network noise samples with non-constant variance. This did not appear to affect performance, but we did not explore it in depth. One might also consider scheduling (Ïl, Ïh), similar to variance annealing.
26
Published as a conference paper at ICLR 2019
# APPENDIX F MONITORED TRAINING STATISTICS
(a) G Ï0 (b) G Ï0 Ï1 (c) G Ï1 (d) G Ï2 (e) D Ï0 (f) D Ï0 Ï1 (g) D Ï1 (h) D Ï2
Iteration
Iteration
Iteration
o sooo 00000 150000 200000 Iteration
Iteration
Iteration
Iteration
Figure 20: Training statistics for a typical model without special modiï¬cations. Collapse occurs after 200000 iterations.
27
Published as a conference paper at ICLR 2019
(a) Ï0 (b) Ï0 Ï1 (c) Ï1 (d) Ï2
oe s0bo0 100000 136000 2000 Iteration
a sco 109000 130000 200000 Iteration
Iteration
Iteration
Figure 21: G training statistics with Ï0 in G regularized towards 1. Collapse occurs after 125000 iterations.
(a) Ï0 (b) Ï0 Ï1 (c) Ï1 (d) Ï2
s Le 10 os 88 â0000 10000 150000 200000 Iteration
a sian tt oo unit eo Bt ane potent e bs 4 ° sa000 â100000 150000 200000 vs0000 Iteration
Fa Iteration
au Iteration
Figure 22: D training statistics with Ï0 in G regularized towards 1. Collapse occurs after 125000 iterations.
28
Published as a conference paper at ICLR 2019
(a) Ï0 (b) Ï0 Ï1 (c) Ï1 (d) Ï2
| Ee
f wa vera sa
f we vera sao
Figure 23: G training statistics with an R1 Gradient Penalty of strength 10 on D. This model does not collapse, but only reaches a maximum IS of 55.
(a) Ï0 (b) Ï0 Ï1 (c) Ï1 (d) Ï2
«- ss ea Be ait Ve 4 eo 150000 00000 Iteration
ase 30 2s 20 as oo/or Iteration
a Iteration
Iteration
Figure 24: D training statistics with an R1 Gradient Penalty of strength 10 on D. This model does not collapse, but only reaches a maximum IS of 55.
29
Published as a conference paper at ICLR 2019
Iteration
° sao 20000 5000 200000 0 E Iteration
(a) Ï0 (b) Ï0 Ï1 (c) Ï1 (d) Ï2
Iteration
Iteration
Figure 25: G training statistics with Dropout (keep probability 0.8) applied to the last feature layer of D. This model does not collapse, but only reaches a maximum IS of 70.
20- s0bo0 000 50000 2000 0 aaa Iteration
(a) Ï0 (b) Ï0 Ï1 (c) Ï1 (d) Ï2
Iteration
Iteration
Figure 26: D training statistics with Dropout (keep probability 0.8) applied to the last feature layer of D. This model does not collapse, but only reaches a maximum IS of 70.
30
Published as a conference paper at ICLR 2019
Frobenius Norm Frobenius Norm : = = Iteration = = Iteration â (a) G ||W]l2 (b) D ||W|l2 a = 8 5 2 i E if E ES 5 Zo a 2 5 5 sobeo seb ration seco e000 "00 ation zeotee (c) losses (d) Variance of all gradient norms in G and D
Frobenius Norm : = = Iteration =
Frobenius Norm = Iteration â
8 2 E E 5 Zo 2 5 e000 "00 ation zeotee
a = 5 i if ES a 5 sobeo seb ration seco
Figure 27: Additional training statistics for a typical model without special modiï¬cations. Collapse occurs after 200000 iterations.
Frobenius Norm £ Frobenius Norm 2 A a ae (a) G ||W|l2 (b) D ||W|2 . ee, é 2 2 §,] 5 g > 24 E a 5 | | 3 a 6 c seen ertâ rscooo bs 000 ea isto00 (c) losses (d) Variance of all gradient norms in G and D
Frobenius Norm a
£ Frobenius Norm 2 A ae
é 2 §,] 5 > E 5 | 3 6 bs 000 ea isto00
. 2 g 24 a | a c seen ertâ rscooo
Figure 28: Additional training statistics with an R1 Gradient Penalty of strength 10 on D. This model does not collapse, but only reaches a maximum IS of 55.
31
Published as a conference paper at ICLR 2019
# APPENDIX G ADDITIONAL DISCUSSION: STABILITY AND COLLAPSE
In this section, we present and discuss additional investigations into the stability of our models, expanding upon the discussion in Section 4.
INTERVENING BEFORE COLLAPSE
The symptoms of collapse are sharp and sudden, with sample quality dropping from its peak to its lowest value over the course of a few hundred iterations. We can detect this collapse when the singular values in G explode, but while the (unnormalized) singular values grow throughout training, there is no consistent threshold at which collapse occurs. This raises the question of whether it is possible to prevent or delay collapse by taking a model checkpoint several thousand iterations before collapse, and continuing training with some hyperparameters modiï¬ed (e.g., the learning rate).
We conducted a range of intervention experiments wherein we took checkpoints of a collapsed model ten or twenty thousand iterations before collapse, changed some aspect of the training setup, then observed whether collapse occurred, when it occurred relative to the original collapse, and the ï¬nal performance attained at collapse.
We found that increasing the learning rates (relative to their initial values) in either G or D, or both G and D, led to immediate collapse. This occurred even when doubling the learning rates from 2 · 10â4 in D and 5 · 10â5 in G, to 4 · 10â4 in D and 1 · 10â4 in G, a setting which is not normally unstable when used as the initial learning rates. We also tried changing the momentum terms (Adamâs β1 and β2), or resetting the momentum vectors to zero, but this tended to either make no difference or, when increasing the momentum, cause immediate collapse.
We found that decreasing the learning rate in G, but keeping the learning rate in D unchanged could delay collapse (in some cases by over one hundred thousand iterations), but also crippled trainingâ once the learning rate in G was decayed, performance either stayed constant or slowly decayed. Conversely, reducing the learning rate in D while keeping Gâs learning rate led to immediate collapse. We hypothesize that this is because of the need for D to remain optimal throughout trainingâif its learning rate is reduced, it can no longer âkeep upâ with G, and training collapses. With this in mind, we also tried increasing the number of D steps per G step, but this either had no effect, or delayed collapse at the cost of crippling training (similar to decaying Gâs learning rate).
To further illuminate these dynamics, we construct two additional intervention experiments, one where we freeze G before collapse (by ceasing all parameter updates) and observe whether D remains stable, and the reverse, where we freeze D before collapse and observe whether G remains stable. We ï¬nd that when G is frozen, D remains stable, and slowly reduces both components of its loss towards zero. However, when D is frozen, G immediately and dramatically collapses, maxing out Dâs loss to values upwards of 300, compared to the normal range of 0 to 3.
This leads to two conclusions: ï¬rst, as has been noted in previous works (Miyato et al., 2018; Gulrajani et al., 2017; Zhang et al., 2018), D must remain optimal with respect to G both for stability and to provide useful gradient information. The consequence of G being allowed to win the game is a complete breakdown of the training process, regardless of Gâs conditioning or optimization settings. Second, favoring D over G (either by training it with a larger learning rate, or for more steps) is insufï¬cient to ensure stability even if D is well-conditioned. This suggests either that in practice, an optimal D is necessary but insufï¬cient for training stability, or that some aspect of the system results in D not being trained towards optimality. With the latter possibility in mind, we take a closer look at the noise in Dâs spectra in the following section.
32
Published as a conference paper at ICLR 2019
G.2 SPIKES IN THE DISCRIMINATORâS SPECTRA
(a) D Ï0 (b) D Ï0 Ï1
Figure 29: A closeup of Dâs spectra at a noise spike.
If some element of Dâs training process results in undesirable dynamics, it follows that the behavior of Dâs spectra may hold clues as to what that element is. The top three singular values of D differ from Gâs in that they have a large noise component, tend to grow throughout training but only show a small response to collapse, and the ratio of the ï¬rst two singular values tends to be centered around one, suggesting that the spectra of D have a slow decay. When viewed up close (Figure 29), the noise spikes resemble an impulse response: at each spike, the spectra jump upwards, then slowly decrease, with some oscillation.
One possible explanation is that this behavior is a consequence of D memorizing the training data, as suggested by experiments in Section 4.2. As it approaches perfect memorization, it receives less and less signal from real data, as both the original GAN loss and the hinge loss provide zero gradients when D outputs a conï¬dent and correct prediction for a given example. If the gradient signal from real data attenuates to zero, this can result in D eventually becoming biased due to exclusively received gradients that encourage its outputs to be negative. If this bias passes a certain threshold, D will eventually misclassify a large number of real examples and receive a large gradient encouraging positive outputs, resulting in the observed impulse responses.
This argument suggests several ï¬xes. First, one might consider an unbounded loss (such as the Wasserstein loss (Arjovsky et al., 2017)) which would not suffer this gradient attentuation. We found that even with gradient penalties and brief re-tuning of optimizer hyperparameters, our models did not stably train for more than a few thousand iterations with this loss. We instead explored changing the margin of the hinge loss as a partial compromise: for a given model and minibatch of data, increasing the margin will result in more examples falling within the margin, and thus contributing to the loss.3. Training with a smaller margin (by a factor of 2) measurably reduces performance, but training with a larger margin (by up to a factor of 3) does not prevent collapse or reduce the noise in Dâs spectra. Increasing the margin beyond 3 results in unstable training similar to using the Wasserstein loss. Finally, the memorization argument might suggest that using a smaller D or using dropout in D would improve training by reducing its capacity to memorize, but in practice this degrades training.
3Unconstrained models could easily learn a different output scale to account for this margin, but the use of Spectral Normalization constrains our models and makes the speciï¬c selection of the margin meaningful.
33
Published as a conference paper at ICLR 2019
# APPENDIX H NEGATIVE RESULTS
We explored a range of novel and existing techniques which ended up degrading or otherwise not affecting performance in our setting. We report them here; our evaluations for this section are not as thorough as those for the main architectural choices.
Our intention in reporting these results is to save time for future work, and to give a more complete picture of our attempts to improve performance or stability. We note, however, that these results must be understood to be speciï¬c to the particular setup we used. A pitfall of reporting negative results is that one might report that a particular technique doesnât work, when the reality is that this technique did not have the desired effect when applied in a particular way to a particular problem. Drawing overly general conclusions might close off potentially fruitful avenues of research.
⢠We found that doubling the depth (by inserting an additional Residual block after every up- or down-sampling block) hampered performance.
We experimented with sharing class embeddings between both G and D (as opposed to just within G). This is accomplished by replacing Dâs class embedding with a projection from Gâs embeddings, as is done in Gâs BatchNorm layers. In our initial experiments this seemed to help and accelerate training, but we found this trick scaled poorly and was sensitive to optimization hyperparameters, particularly the choice of number of D steps per G step. ⢠We tried replacing BatchNorm in G with WeightNorm (Salimans & Kingma, 2016), but this crippled training. We also tried removing BatchNorm and only having Spectral Nor- malization, but this also crippled training.
We tried adding BatchNorm to D (both class-conditional and unconditional) in addition to
Spectral Normalization, but this crippled training.
⢠We tried varying the choice of location of the attention block in G and D (and inserting multiple attention blocks at different resolutions) but found that at 128Ã128 there was no noticeable beneï¬t to doing so, and compute and memory costs increased substantially. We found a beneï¬t to moving the attention block up one stage when moving to 256Ã256, which is in line with our expectations given the increased resolution.
⢠We tried using ï¬lter sizes of 5 or 7 instead of 3 in either G or D or both. We found that having a ï¬lter size of 5 in G only provided a small improvement over the baseline but came at an unjustiï¬able compute cost. All other settings degraded performance.
We tried varying the dilation for convolutional ï¬lters in both G and D at 128Ã128, but found
that even a small amount of dilation in either network degraded performance.
⢠We tried bilinear upsampling in G in place of nearest-neighbors upsampling, but this de- graded performance.
⢠In some of our models, we observed class-conditional mode collapse, where the model would only output one or two samples for a subset of classes but was still able to generate samples for all other classes. We noticed that the collapsed classes had embedings which had become very large relative to the other embeddings, and attempted to ameliorate this issue by applying weight decay to the shared embedding only. We found that small amounts of weight decay (10â6) instead degraded performance, and that only even smaller values (10â8) did not degrade performance, but these values were also too small to prevent the class vectors from exploding. Higher-resolution models appear to be more resilient to this problem, and none of our ï¬nal models appear to suffer from this type of collapse.
⢠We experimented with using MLPs instead of linear projections from Gâs class embeddings to its BatchNorm gains and biases, but did not ï¬nd any beneï¬t to doing so. We also exper- imented with Spectrally Normalizing these MLPs, and with providing these (and the linear projections) with a bias at their output, but did not notice any beneï¬t.
⢠We tried gradient norm clipping (both the global variant typically used in recurrent net- works, and a local version where the clipping value is determined on a per-parameter basis) but found this did not alleviate instability.
34
Published as a conference paper at ICLR 2019
# APPENDIX I HYPERPARAMETERS
We performed various hyperparameter sweeps in this work:
⢠We swept the Cartesian product of the learning rates for each network through [10â5, 5 · 10â5, 10â4, 2 · 10â4, 4 · 10â4, 8 · 10â4, 10â3], and initially found that the SA-GAN settings (Gâs learning rate 10â4, Dâs learning rate 4 · 10â4) were optimal at lower batch sizes; we did not repeat this sweep at higher batch sizes but did try halving and doubling the learning rate, arriving at the halved settings used for our experiments.
⢠We swept the R1 gradient penalty strength through [10â3, 10â2, 10â1, 0.5, 1, 2, 3, 5, 10]. We ï¬nd that the strength of the penalty correlates negatively with performance, but that settings above 0.5 impart training stability.
⢠We swept the keep probabilities for DropOut in the ï¬nal layer of D through [0.5, 0.6, 0.7, 0.8, 0.9, 0.95]. We ï¬nd that DropOut has a similar stabilizing effect to R1 but also degrades performance.
⢠We swept Dâs Adam β1 parameter through [0.1, 0.2, 0.3, 0.4, 0.5] and found it to have a light regularization effect similar to DropOut, but not to signiï¬cantly improve results. Higher β1 terms in either network crippled training.
⢠We swept the strength of the modiï¬ed Orthogonal Regularization penalty in G through [10â5, 5 · 10â5, 10â4, 5 · 10â4, 10â3, 10â2], and selected 10â4.
35 | {
"id": "1705.07215"
} |
1809.10610 | Counterfactual Fairness in Text Classification through Robustness | In this paper, we study counterfactual fairness in text classification, which
asks the question: How would the prediction change if the sensitive attribute
referenced in the example were different? Toxicity classifiers demonstrate a
counterfactual fairness issue by predicting that "Some people are gay" is toxic
while "Some people are straight" is nontoxic. We offer a metric, counterfactual
token fairness (CTF), for measuring this particular form of fairness in text
classifiers, and describe its relationship with group fairness. Further, we
offer three approaches, blindness, counterfactual augmentation, and
counterfactual logit pairing (CLP), for optimizing counterfactual token
fairness during training, bridging the robustness and fairness literature.
Empirically, we find that blindness and CLP address counterfactual token
fairness. The methods do not harm classifier performance, and have varying
tradeoffs with group fairness. These approaches, both for measurement and
optimization, provide a new path forward for addressing fairness concerns in
text classification. | http://arxiv.org/pdf/1809.10610 | Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H. Chi, Alex Beutel | cs.LG, stat.ML | null | null | cs.LG | 20180927 | 20190213 | 9 1 0 2
b e F 3 1 ] G L . s c [
2 v 0 1 6 0 1 . 9 0 8 1 : v i X r a
# Counterfactual Fairness in Text Classiï¬cation through Robustness
Sahaj Garg,1* Vincent Perot,2 Nicole Limtiaco,2 Ankur Taly,3 Ed H. Chi,3 Alex Beutel2 1Stanford University, Stanford, CA 2Google AI, New York, NY 3Google AI, Mountain View, CA *Work done while the author was an intern at Google. sahajg@cs.stanford.edu, {vperot, nlimtiaco, ataly, edchi, alexbeutel}@google.com
# Abstract
In this paper, we study counterfactual fairness in text clas- siï¬cation, which asks the question: How would the predic- tion change if the sensitive attribute referenced in the example were different? Toxicity classiï¬ers demonstrate a counterfac- tual fairness issue by predicting that âSome people are gayâ is toxic while âSome people are straightâ is nontoxic. We offer a metric, counterfactual token fairness (CTF), for measuring this particular form of fairness in text classiï¬ers, and describe its relationship with group fairness. Further, we offer three ap- proaches, blindness, counterfactual augmentation, and coun- terfactual logit pairing (CLP), for optimizing counterfactual token fairness during training, bridging the robustness and fairness literature. Empirically, we ï¬nd that blindness and CLP address counterfactual token fairness. The methods do not harm classiï¬er performance, and have varying tradeoffs with group fairness. These approaches, both for measurement and optimization, provide a new path forward for addressing fairness concerns in text classiï¬cation.
# Introduction
Consider a model that determines whether an Internet forum comment is toxic. We would like to improve the modelâs fairness with respect to the content of the input text, which may reference sensitive identity attributes, such as sexual orientation, race, or religion. In practice, Dixon et al. showed that a toxicity model had a high false positive rate on ex- amples that included identity tokens such as âgay,â because such tokens occur relatively frequently in examples labeled toxic in the training set.
A related issue to users arises when nearly identical sen- tences referencing different identity groups receive different predictions. For instance, a baseline toxicity model predicts that âSome people are gayâ is 98% likely to be toxic and âSome people are straightâ is only 2% likely to be toxic. In this work, we seek to speciï¬cally address this fairness issue for text classiï¬cation.
Given an example, we ask a counterfactual question: How would the prediction change if the sensitive attribute refer- enced in the example were different? If the prediction score changes with respect to a sensitive attribute, we consider this an indicator of a potential problem. In contrast to group- based notions of fairness (e.g., demographic parity, equal- ity of odds), which seek to statistically equalize the modelâs
behavior for entire sensitive groups, counterfactual fairness requires equal model behavior on individual counterfactual pairs; see (Kusner et al. 2017; Wachter, Mittelstadt, and Rus- sell 2017).
To assess counterfactual fairness, we consider perturba- tions obtained by substituting tokens associated with iden- tity groups. For instance, substituting âgayâ with âstraight,â or âAsianâ with âAmerican.â Based on these generated coun- terfactuals, we can deï¬ne a fairness metric, which we call counterfactual token fairness (CTF). While this is more lim- ited than general counterfactual fairness, we believe it cap- tures one of the most salient issues in text classiï¬cation and is a starting point for more general counterfactual fairness metrics for text.
Deciding when counterfactual pairs should have the same prediction raises difï¬cult ethical and philosophical ques- tions. Many logical counterfactuals generated by token sub- stitution may not require identical output. We call these asymmetric counterfactuals. In toxicity classiï¬cation, such situations could arise when the comment references stereo- types associated with one group but not another, or when comments attack a particularly vulnerable group. Asymmet- ric counterfactuals suggest that practitioners should be care- ful in both training and evaluation of counterfactual fairness. We discuss proposals for addressing this in the case of toxi- city classiï¬cation in the experiments section.
To satisfy counterfactual token fairness, we borrow tech- niques from the robustness literature. We propose a gen- eral training scheme for achieving arbitrary counterfactual fairness by extending logit pairing (Kannan, Kurakin, and Goodfellow 2018) to penalize differences in the modelâs out- puts for counterfactual pairs. We compare this method to simply augmenting the training set with counterfactual ex- amples, and to blindness, which replaces all sensitive tokens with a special token.
One issue is that the aforementioned methods may only achieve fairness with respect to identity tokens considered by counterfactuals during training. To address this, we eval- uate the generalization of the methods on a held-out set of identity tokens. Another concern when optimizing for coun- terfactual fairness is potential trade-offs with other desir- able properties of a classiï¬er, including overall accuracy and group fairness. In practice, we do not ï¬nd signiï¬cant harms with respect to accuracy, and varying effects on group fair-
ness in the form of tradeoffs between true negatives and true positives.
We make the following contributions:
⢠Metric: We provide a tractable metric, counterfactual to- ken fairness, for measuring counterfactual fairness in text classiï¬cation.
⢠Methods: We study three methods for addressing coun- terfactual token fairness: (A) blindness, (B) counterfac- tual augmentation, and (B) counterfactual logit pairing, bridging research from the robustness and fairness do- mains.
⢠Empirical Evaluation: We evaluate empirical perfor- mance and tradeoffs of counterfactual token fairness, group fairness, and accuracy across these approaches.
Related Work ML Fairness Signiï¬cant work in the ML fairness litera- ture has been devoted to measuring fairness. Our work is most closely related to counterfactual fairness in causal in- ference (Kusner et al. 2017; Kilbertus et al. 2017), where fairness is evaluated by applying counterfactual interven- tions over a causal graph. Our deï¬nition of counterfactual token fairness implicitly deï¬nes a simple causal model for text generation. Kusner et al. also draw the connection be- tween counterfactual fairness and individual fairness, which requires similar predictions for similar inputs via a Lipschitz constraint (Dwork et al. 2011).
Relatively more study has been devoted to group fairness metrics, which evaluate observational criteria, or statistical relationships between the data, group membership, the la- bel, and the modelâs prediction. Such metrics include demo- graphic parity and equality of odds (Hardt, Price, and Srebro 2016). Hardt, Price, and Srebro demonstrate that observa- tional criteria are insufï¬cient to distinguish between some seemingly reasonable uses of identity and other unreason- able ones. This is because observational criteria cannot in- corporate any external understanding about what is causally acceptable in making predictions. The limitations of obser- vational criteria can be addressed by counterfactual or indi- vidual fairness, see (Kusner et al. 2017; Kilbertus et al. 2017; Dwork et al. 2011). By extending these deï¬nitions to path- speciï¬c counterfactual fairness, it is possible to specify which uses of identity are acceptable (Chiappa and Gillam 2018).
Social science literature on fairness raises arguments for counterfactual reasoning as well as potential limitations. One concern is about the ability to reasonably intervene on identity of an individual. Given that most social scientists agree that race is socially constructed, it may be unreason- able to attempt to modify race and all its associated factors (Kohler-Hausmann 2019). These limitations, among others, are reï¬ected in debate surrounding the use of counterfac- tuals over race in epidemiological studies (Krieger 2014; VanderWeele and Robinson 2014). We note that our work deals with well-deï¬ned interventions on content by only ma- nipulating identity tokens in text, rather than the actual iden- tities of individuals, which differentiates it from the work above.
ML fairness literature has also focused on debiasing methods to address these gaps. Many methods have been proposed to address group fairness issues, such as re- calibrating score functions (Hardt, Price, and Srebro 2016), adversarially learning fair representations (Zemel et al. 2013; Louizos et al. 2015; Beutel et al. 2017), data rebal- ancing (Dixon et al. 2018), and data augmentation using swaps of gender terms (Park, Shin, and Fung 2018). For nat- ural language problems, Pryzant et al. learn a lexicon that is uncorrelated to a set of confounding variables. Debias- ing methods for counterfactual or individual fairness have been studied less for neural network models. The methods in (Kusner et al. 2017; Kilbertus et al. 2017) are effective for causal graphs, but most machine learning problems will not ï¬t this mold. To address individual fairness, (Dwork et al. 2011) applies constraint based optimization over a linear program, but it is difï¬cult to deï¬ne valid distance metrics or apply the optimization to arbitrary neural networks used in natural language processing.
Robustness in Machine Learning The robustness liter- ature in machine learning has primarily focused on ro- bustness to adversarially perturbed inputs, which add small amounts of carefully selected noise to fool classiï¬ers (Good- fellow, Shlens, and Szegedy 2015). When applied to the text setting, such adversarial examples can be generated by a variety of editing methods, including through transla- tion (Ribeiro, Singh, and Guestrin 2018), attributions (Mu- drakarta et al. 2018), and autoencoders (Zhao, Dua, and Singh 2017) (Hu et al. 2017). Adversarial examples are closely related to counterfactual examples: Wachter, Mittel- stadt, and Russell characterize counterfactuals as adversar- ial examples that perturb inputs in human-interpretable and possibly problematic ways. As such, the counterfactual ex- amples presented in this work can be viewed as a speciï¬c subset of adversarial examples. The robustness literature has attempted to address adversarial examples using a variety of techniques, such as adversarial training (Madry et al. 2017; Goodfellow, Shlens, and Szegedy 2015) and adversarial logit pairing (Kannan, Kurakin, and Goodfellow 2018).
Several papers draw connections between fairness, text generation, and robustness. Landeiro and Culotta consider robustness in text with respect to counfounding variables such as the authorâs gender, and learn robust models by train- ing using an additional attribute for the latent confound, and averaging over all values of the latent variable at inference time. Madaan et al. attempt to edit text to remove gender bias or edit gender representations, leveraging analogies in word vector differences to make substitutions for words that may implicitly encode biases about gender.
Problem Deï¬nition Given text input x â X, where x is a sequence [x1, ..., xn] of tokens, our task is to predict an outcome y. We consider a classiï¬er f parameterized by θ that produces a prediction Ëy = fθ(x), where we seek to minimize some notion of error between y and Ëy. For notational simplicity, we restrict the following deï¬nitions to a single binary class, but they can be easily generalized to multi-class classiï¬cation problems.
The classiï¬er f can be an arbitrary neural network.
We wish to maximize the modelâs performance while maintaining counterfactual fairness with respect to sensitive attributes, such as identity groups. Counterfactual fairness is measured using counterfactual examples that perturb the sensitive attribute referenced in the example at hand. Let Φ(x) denote the set of counterfactual examples associated with an example x. Counterfactual fairness requires that the predictions of a model for all counterfactuals are within a speciï¬ed error.
Definition 1. Counterfactual fairness. A classifier f is counterfactually fair with respect to a counterfactual gen- eration function ® and some error rate ⬠if
(x) â f(@)|<e Va ⬠X,2' ⬠®(x)
# Counterfactual Token Fairness (CTF)
We consider a narrow class of counterfactuals that involves substituting identity tokens in the input, for instance, substi- tuting âgayâ with âstraightâ in the input âSome people are gay.â We assume a set of identity tokens, A, for which we seek to be counterfactually fair. Consider a pair of tokens a,aâ ⬠A. The associated counterfactual example genera- tion function ®, .â is defined by substituting all occurrences of a in x with aâ and vice versa. If neither identity token is present in the input x, then ®,./(x) = @. We generalize this definition to a counterfactual generation function over A that generates all counterfactual examples based on pairs of substitutions:
®,(x) = U Baaâ (x) axaeA
Definition 2. A classifier satisfies counterfactual token fair- ness with respect to a set of identity tokens A if it satis- fies counterfactual fairness with respect to the counterfactual generation function ® 4 and error rate â¬.
Although content about sensitive groups may be captured by complex semantics, this metric will surface a subset of problematic issues related to more general counterfactual fairness. This a ï¬rst step, and surfaces additional concerns for fairness beyond those of group fairness.
# Asymmetric Counterfactuals
So far we have assumed that all counterfactuals with re- spect to identity tokens should have the same prediction. This assumption is not valid in cases where the sensitive at- tribute indeed affects the prediction. For instance, consider a model predicting toxicity of text, and the counterfactual pair âThatâs so gayâ and âThatâs so straight.â The ï¬rst example is arguably more likely to be considered toxic than the sec- ond, as âgayâ is often used as an insult in Internet forums, while âstraightâ is not. Other examples include stereotyping, where one group is more vulnerable than another. Requiring equal predictions across such cases can inadvertently harm the more vulnerable group.
Fairness must be required only among counterfactuals that stipulate symmetric predictions. This restriction can be
accommodated in our framework by restricting the counter- factual generation function Φ(x) to exclude any counterfac- tuals for the example x that may have asymmetric labels. In general, the degree and direction of the asymmetry between counterfactuals varies based on the task, and the cultural sen- sitivities of the consumers of the task. This makes it difï¬cult to deï¬ne a perfect counterfactual generation function. In Ex- periments, we propose a heuristic for avoiding asymmetric counterfactuals for a model predicting toxicity of text.
# Relationship to Group Fairness
Counterfactual fairness is complementary to the group fair- ness notion of equality of odds ), which demands equality of true positive rates and true negative rates for different values of the sensitive attribute. A text classifier may satisfy one while completely failing the other. Consider the case when two sensitive attributes a and aâ only appear in disjoint sets of contexts X, and Xqâ, re- spectively. A model can satisfy equality of odds by always predicting correctly on the contexts in which a,aâ appear in the data, but never in the counterfactual contexts that do not exist in the data. Conversely, the model could predict identical output for all counterfactual pairs while predicting correctly only on X, and not X/.
# Methods
We propose three methods to improve counterfactual fair- ness: blindness, counterfactual augmentation, and counter- factual logit pairing. Both methods assume access to a list of identity tokens for which they seek to be fair.
# Blindness
Blindness substitutes all identity tokens with a special IDENTITY token, which allows the predictor to know that an identity term is present, but not which identity. This is similar to standard NLP methods such as replacing large numbers with a generic NUMBER. While this approach guar- antees counterfactual token fairness, it has a number of downsides. First, it does not have the ability to differentiate identity terms, and so necessarily equates asymmetric coun- terfactuals. Second, it cannot handle complex counterfactu- als that involve more than single token substitutions, e.g. âChristians go to church.â and âJews go to temple.â Finally, the model may still discriminate using other signals that are associated with the identity term (Dwork et al. 2011).
# Counterfactual Augmentation
Instead of blinding the model to identity terms, counterfac- tual augmentation involves augmenting the modelâs training set with generated counterfactual examples. The additional examples are meant to guide the model to become invariant to perturbing identity terms. This is a standard technique in computer vision for making the model invariant to object lo- cation, image orientation, etc. The counterfactual examples are assigned the same label as the original example.
Counterfactual Logit Pairing (CLP) Counterfactual logit pairing (CLP) encourages the model to be robust to identity by adding a robustness term to the train- ing loss. The robustness term is given by logit pairing (Kan- nan, Kurakin, and Goodfellow 2018), which penalizes the norm of the difference in logits for pairs of training examples and their counterfactuals. Speciï¬cally, suppose the classiï¬er f (x) = Ï(g(x)), where g(x) produces a logit and Ï(·) is the sigmoid function. The additional loss is the average absolute difference in logits between the inputs and their counterfac- tuals:
YE (g(x) âg(2')| Lx P wUnifb(2)]
For computational tractability, during training, we randomly sample a single counterfactual example for each input. Tak- ing J as the original loss function, the overall objective is:
SILOM +r9 Vo g(x) â g(a")| EX rexâ E wâ ~Unif[P(«)]
Similar to counterfactual augmentation, CLP can use any counterfactual generation function. For example, a restricted counterfactual generation function could be used to avoid enforcing equality over asymmetric counterfactuals. More- over, the method also applies if more sophisticated counter- factuals are generated.
In contrast to counterfactual augmentation, the robustness term in the CLP loss explicitly guides the model to satisfy two desirable properties: (1) ensuring a model produces sim- ilar outputs on counterfactual pairs and (2) learning models that generalize well to different identities. Moreover, the pa- rameter λ can be tuned to achieve varying degrees of coun- terfactual fairness.
Experiments Dataset We evaluate our methods on the task of predict- ing toxicity. For the task, a toxic comment is deï¬ned as a ârude, disrespectful, or unreasonable comment that is likely to make you leave a discussion.â (Dixon et al. 2018). We use a public Kaggle dataset of 160K Wikipedia comments, each labeled by human raters as toxic or non-toxic1, ran- domly split into train and dev sets. We evaluate AUC of the primary task on the public test set. We evaluate counterfac- tual token fairness and group fairness on a private dataset of comments from another internet forum. This dataset, hence- forth called the âevaluation dataset,â has a higher occurrence of identity terms, and therefore leads to a more meaningful fairness evaluation.
Setup We evaluate our methods for counterfactual token fairness on the set of 50 identity terms used by Dixon et al.. Out of these, 47 are single tokens and 3 are bigrams. We randomly partition the terms into a training set of 35 and a hold-out set of 12 to evaluate generalization. We also include the three bigrams in evaluation, because they reï¬ect scenarios that blindness cannot address during training.2
1https://www.kaggle.com/c/jigsaw-toxic-comment- classiï¬cation-challenge
2Only single tokens in the input are substituted with bigrams during evaluation.
All of the models are CNNs trained with cross entropy loss against the binary toxicity label. All hyperparameters except for the fairness regularizer λ for CLP were ï¬xed for all runs of all models. Models were trained for ï¬ve epochs, and the best model on the dev set was taken. Each model was trained ten times, and the average of the runs is reported. Blindness, Counterfactual augmentation, and CLP models (for different values of λ) were evaluated and compared to a baseline model.
For CLP training, we deï¬ne a different counterfactual ex- ample generation function than the one used for evaluation. The evaluation counterfactuals only apply substitutions to a pair of identity tokens, whereas during training, each sensi- tive identity token in an input is randomly substituted with another identity token.
Handling Asymmetric Counterfactuals We hypothesize that asymmetric counterfactuals are less likely to arise for ground truth non-toxic comments than toxic comments. This for two reasons. Asymmetric counterfactuals arise when stereotyping / attacking a vulnerable group occurs for some identity substitution, and no other toxicity sig- nals are present. In such cases, most identity substitutions will be nontoxic, and only the one attacking the vulnera- ble group(s) will be toxic. So if the ground truth exam- ple is nontoxic, counterfactual fairness can still be required over most identity substitutions, whereas if the ground truth example is toxic, equal prediction should not be required over most counterfactuals. Second, the stereotyping com- ments are more likely to occur in a toxic comment attack- ing the stereotyped group than in a nontoxic comment ref- erencing some other identity group. For these reasons, we evaluate counterfactual token fairness over ground truth non- toxic comments separately from ground truth toxic com- ments, and focus our analysis on nontoxic comments. We also consider applying the CLP loss only to nontoxic com- ments during training, to avoid enforcing equality of logits for potentially asymmetric counterfactuals. We distinguish this variant as CLP nontoxic.
Separately, we also evaluate CTF on simple synthetic inputs where all information about toxicity is encoded in the context, and all counterfactuals are symmetric by de- sign. Speciï¬cally, we use a dataset of synthetically gen- erated sentences based on templates such as âNAME is a ADJECTIVE.â3
Metrics We measure the counterfactual token fairness gap with respect to a given counterfactual generation function. For a single example, this is the average gap in prediction over all of the counterfactual ns for that example: CF GAPa (x) = Ife) â flaâ) aa
CF GAPa (x) = Ife) â flaâ) an aa («)]
Over an entire dataset, the gap is the average over all exam- ples that have valid counterfactuals. In this study, we mea- sure the CTF GAP for the counterfactual generation func- tion ΦA, which substitutes all pairs of identity tokens. Be- cause substitution-based counterfactuals over short inputs
3This is the updated open sourced version of the synthetic test set presented in (Dixon et al. 2018).
Model Baseline Blind CF Aug CLP nontox, λ = 1 CLP, λ = 0.05 CLP, λ = 1 CLP, λ = 5 Eval NT 0.140 0.000 0.127 0.012 0.071 0.007 0.002 Synth NT 0.180 0.000 0.226 0.015 0.082 0.015 0.004 Synth Tox 0.061 0.000 0.022 0.007 0.024 0.007 0.004
Table 1: Conterfactual token fairness gaps for non-toxic ex- amples from evaluation set and all examples from a syn- thetic test set. All gaps are measure w.r.t. 35 training terms. Smaller gaps are better.
are more likely to be logical, we evaluate the CTF gaps for inputs of at most ten tokens in length. In addition, since asymmetric counterfactuals are likely more common for toxic comments, we evaluate CTF gaps over nontoxic and toxic comments separately.
We also measure group fairness to ensure that optimizing for counterfactual fairness has no perverse impact on it. Fol- lowing the group fairness notion of equality of odds (Hardt, Price, and Srebro 2016), we measure the true positive rates (TPR) and true negatives rates (TNR) of examples referenc- ing different identity groups. We assume an example refer- ences a speciï¬c identity group based on the presence of the associated token. Equality of odds requires equal TPR and TNR across identities, so we evaluate overall TPR and TNR gaps. The gap for a pair of identity terms is computed as the absolute value of the difference in rates for the two identity terms. The overall TPR or TNR gap is the average over all pairs of identity terms.
Results Counterfactual Token Fairness Table 1 reports CTF gaps for non-toxic examples from the evaluation dataset, and all examples from the synthetic dataset. The gaps are com- puted for the 35 training terms (discussed in Setup). As dis- cussed earlier, both these sets of examples are unlikely to have asymmetric counterfactuals. The baseline model has a large CTF gap on both sets of examples. Blindness achieves a zero gap by design. CLP with a fairness regularization coefï¬cient (λ) of at least 1 also attains a near zero gap. Counterfactual augmentation decreases the CTF gap (rela- tive to the baseline) on non-toxic examples from the evalua- tion dataset, but does not obtain a zero gap. It is worth noting that the models were not trained on the synthetic dataset, but we still ï¬nd a reduction in counterfactual fairness gaps on it. Table 2 reports CTF gaps on the hold-out terms for non- toxic examples from the evaluation dataset. We say that a modelâs CTF gap generalizes to hold-out terms if its gap is less than the baseline modelâs gap (0.091). Among the mod- els compared, CLP with λ = 5 generalizes the best, though the gaps are much larger that those on the training terms. Blindness does not appear to provide generalization beneï¬ts. Thus, it may not be a favorable method in settings where we expect examples with identity terms outside the set of train- ing terms.
Baseline Blind CF Aug CLP nontox, λ = 1 CLP, λ = 0.05 CLP, λ = 1 CLP, λ = 5 CTF Gap: held-out terms 0.091 0.090 0.087 0.095 0.078 0.084 0.076
Table 2: CTF gaps on held out identity terms for non-toxic examples from the evaluation set.
To evaluate the impact on cases with asymmetric coun- terfactuals, we also measured the CTF gap for toxic ex- amples from the evaluation dataset over the 35 training terms; see Table 4 in the appendix. The baseline model has a gap of 0.241, and as expected, blindness has a gap of zero. All CLP models with λ ⥠1 achieve a CTF gap of less than 0.03, which unfortunately means that they end up equating predictions for asymmetric counterfactuals. This includes CLP nontoxic, which was trained using counterfac- tuals from non-toxic examples only. Going forward, we do not evaluate CLP nontoxic as it is not better than the regular CLP models.
Overall Performance We evaluate the overall classiï¬er performance using AUC of the ROC curve. Remarkably, all methods show consistent AUC on the test set, ranging be- tween 0.962-0.964.
Figure 1 compares the true positive rate (TPR) and true negative rate (TNR) of various models, where the threshold for a toxic classiï¬cation is set to 0.5. TPR and TNR are mea- sured only over examples that contain an identity term from the set of training terms. We ï¬nd that methods that reduce the CTF gap perform better at identifying nontoxic com- ments (true negatives) and worse at identifying toxic com- ments (true positives). We discuss this tension between im- proving the CTF gap and TPR in Error Analysis.
Group Fairness We additionally evaluate the group fair- ness metrics, TPR and TNR gaps for equality of odds (see Table 3). Counterfactual augmentation and CLP with λ = 0.05 have better TPR and TNR gaps than the baseline and are able to reduce CTF gaps. CLP with λ ⥠1 has a more extreme tradeoff, harming the TPR gap while substantially improving the TNR gap. Practitioners may choose different tradeoffs of CTF gap, TPR, and TNR depending on the rel- ative prioritization of these metrics for a given task.
Error analysis We examine the trade-off between CTF gap and TPR. We consider the CLP, λ = 5 model which attains a near zero CTF gap and compare its predictions on toxic comments to those of the baseline. Among examples with identity terms in the test set, there are 83 cases where an example was correctly classiï¬ed by the baseline and incorrectly classi- ï¬ed by the CLP model. Of these, 27 were labeled by an au- thor as having an asymmetric counterfactual. There were 20 cases where the CLP model predicted correctly compared to
0.94 0.92 TNR, TPR, and CTF Gap by Model 0.90 0.88 0.86 0.80 0.78 4 0.76 0.74 0.72 0.70 0.68 06s =o TNR TPR 0.14 y y y 7 0.12 20.10 J O 0.08 t+ 0.06 V 0.04 0.02 0.00 . â â e J ~~ > oe oot oe of of or? or?
Figure 1: Plot of the average CTF gap along with the TPR and TNR over examples that contain identity terms.
Baseline Blindness Augmentation CLP all, λ = 0.05 CLP all, λ = 1 CLP all, λ = 5 TNR Gap 0.084 0.039 0.065 0.058 0.039 0.041 TPR Gap 0.082 0.114 0.083 0.078 0.104 0.112
Table 3: TNR and TPR gaps for different models. Lower is better.
the baseline, of which none had asymmetric counterfactuals. This tells us that a large chunk of the TPR loss (relative to the baseline) is over toxic examples with asymmetric coun- terfactuals. This is expected as examples with asymmetric counterfactuals are toxic because of the presence of a spe- ciï¬c identity term, and a model trained to disregard identity terms will be less likely to predict correctly on such exam- ples.
As a means of investigating what the CLP model has learned, we examine its token embeddings after conver- gence. By the end of training with λ = 5, the average cosine similarity between pairs of identity tokens is 0.87, whereas the baseline has an average cosine similarity of 0.25. Al- though this is similar to blindness, the CLP model learns a different toxicity association with identity tokens. The aver- age toxicity prediction on a single identity token for the CLP model is 0.12, while the toxicity of the IDENTITY token in the blindness model is 0.54.
Similarly, CLP nontoxic with λ = 5 has a average co- sine similarity of 0.81. This embedding convergence, despite CLP being applied only to nontoxic comments, is the reason
why the model achieves a low CTF gap on toxic comments, including those with asymmetric counterfactuals. Methods to enforce equal prediction on some subset of counterfactu- als but not others should be further investigated.
We also qualitatively evaluate the strength of each modelâs association with various identity tokens. Table 5 in the appendix lists various examples, and the associated tox- icity scores from each model. In contrast to the baseline, all three models associate a much smaller amount of toxi- city signal with the identity tokens. For instance, unlike the baseline, the other models no longer associate a substantial amount of toxicity with clearly nontoxic statements such as âSome people are gay.â Notably, the toxicity of the statement âSome people are straightâ goes up. The negative effect on this pair is more pronounced for blindness than it is for CLP.
# Conclusions and Future Work
We make progress towards counterfactual fairness in text classiï¬cation. We propose a speciï¬c form of counterfac- tual fairness, called counterfactual token fairness (CTF), that requires a model to be robust to different identity to- kens present in the input. We show that text classiï¬cation models with good overall performance fare poorly on this metric. We approach counterfactual token fairness from a robustness perspective, and offer a procedure, counterfac- tual logit pairing, for optimizing the counterfactual token fairness metric during model training. We ï¬nd that this ap- proach performs as well as blindness to identity tokens, but also generalizes better to hold-out tokens. These results do not come at the expense of overall classiï¬er accuracy, and have varying tradeoffs between false positives and false neg- atives.
Going forward, better heuristics must be designed for identifying cases with asymmetric counterfactuals. Exclud- ing toxic comments covers many but not all asymmetric ex- amples. For example, ground truth nontoxic examples ref- erencing âblack powerâ are more likely to become toxic as they reference âwhite power.â In other text classiï¬cation tasks such as sentiment classiï¬cation, asymmetric counter- factuals will arise but not necessarily with the same clear split by label.
A next step would be to improve counterfactual gener- ation by addressing issues of polysemy of identity terms (which can result in illogical substitutions), asymmetric counterfactuals, and multiple references to an identity group. One possible method is to use analogies in word vectors to change multiple tokens used for the same identity group (Madaan et al. 2018). Another approach is deï¬ning a gener- ative model over text, as in (Hu et al. 2017), that can modify certain attributes of the text while holding others constant and preserving semantics. One could also use criteria for se- lecting semantically equivalent adversarial examples as in (Ribeiro, Singh, and Guestrin 2018), to evaluate whether counterfactual examples are logical. Optimizing for general counterfactual fairness will test many of the unique advan- tages of counterfactual logit pairing.
# Acknowledgements
We would like to thank Lucy Wasserman, Allison Woodruff, Yoni Halpern, Andrew Smart, Tulsee Doshi, Jilin Chen, Alexander DAmour, Raz Mathias, and Jon Bischof for their feedback leading up to this paper.
# References
[Beutel et al. 2017] Beutel, A.; Chen, J.; Zhao, Z.; and Chi, E. H. 2017. Data decisions and theoretical implications when adversarially learning fair representations. CoRR abs/1707.00075. [Chiappa and Gillam 2018] Chiappa, S., and Gillam, T. P. S. 2018. Path-Speciï¬c Counterfactual Fairness. arXiv e-prints arXiv:1802.08139. [Dixon et al. 2018] Dixon, L.; Li, J.; Sorensen, J.; Thain, N.; and Vasserman, L. 2018. Measuring and mitigating unin- tended bias in text classiï¬cation. [Dwork et al. 2011] Dwork, C.; Hardt, M.; Pitassi, T.; Rein- gold, O.; and Zemel, R. S. 2011. Fairness through aware- ness. CoRR abs/1104.3913. [Goodfellow, Shlens, and Szegedy 2015] Goodfellow, I.; Shlens, J.; and Szegedy, C. 2015. Explaining and harness- In International Conference on ing adversarial examples. Learning Representations. [Hardt, Price, and Srebro 2016] Hardt, M.; Price, E.; and Srebro, N. 2016. Equality of opportunity in supervised learning. CoRR abs/1610.02413. [Hu et al. 2017] Hu, Z.; Yang, Z.; Liang, X.; Salakhutdinov, R.; and Xing, E. P. 2017. Toward controlled generation In Precup, D., and Teh, Y. W., eds., Proceedings of text. of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, 1587â1596. International Convention Centre, Sydney, Aus- tralia: PMLR. [Kannan, Kurakin, and Goodfellow 2018] Kannan, H.; Ku- rakin, A.; and Goodfellow, I. J. 2018. Adversarial logit pairing. CoRR abs/1803.06373. [Kilbertus et al. 2017] Kilbertus, N.; Rojas-Carulla, M.; Parascandolo, G.; Hardt, M.; Janzing, D.; and Sch¨olkopf, B. 2017. Avoiding discrimination through causal reasoning. In Proceedings from the conference âNeural Information Pro- cessing Systems 2017., 656â666. Curran Associates, Inc. [Kohler-Hausmann 2019] Kohler-Hausmann, I. 2019. Eddie murphy and the dangers of counterfactual causal thinking about detecting racial discrimination. Northwestern Univer- sity Law Review 113(5). [Krieger 2014] Krieger, N. 2014. On the causal interpreta- tion of race. Epidemiology 25(6). [Kusner et al. 2017] Kusner, M. J.; Loftus, J.; Russell, C.; and Silva, R. 2017. Counterfactual fairness. In Guyon, I.; Luxburg, U. V.; Bengio, S.; Wallach, H.; Fergus, R.; Vish- wanathan, S.; and Garnett, R., eds., Advances in Neural In- formation Processing Systems 30. Curran Associates, Inc. 4066â4076.
[Landeiro and Culotta 2016] Landeiro, V., and Culotta, A. 2016. Robust text classiï¬cation in the presence of confound- ing bias. [Louizos et al. 2015] Louizos, C.; Swersky, K.; Li, Y.; Welling, M.; and Zemel, R. S. 2015. The variational fair autoencoder. CoRR abs/1511.00830. [Madaan et al. 2018] Madaan, N.; Mehta, S.; Agrawaal, T.; Malhotra, V.; Aggarwal, A.; Gupta, Y.; and Saxena, M. 2018. Analyze, detect and remove gender stereotyping from bollywood movies. In Friedler, S. A., and Wilson, C., eds., Proceedings of the 1st Conference on Fairness, Accountabil- ity and Transparency, volume 81 of Proceedings of Machine Learning Research, 92â105. New York, NY, USA: PMLR. [Madry et al. 2017] Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. Towards deep learning models resistant to adversarial attacks. CoRR abs/1706.06083. [Mudrakarta et al. 2018] Mudrakarta, P. K.; Taly, A.; Sun- dararajan, M.; and Dhamdhere, K. 2018. Did the model understand the question? CoRR abs/1805.05492. [Park, Shin, and Fung 2018] Park, J. H.; Shin, J.; and Fung, P. 2018. Reducing Gender Bias in Abusive Language De- tection. ArXiv e-prints. [Pryzant et al. 2018] Pryzant, R.; Wang, K.; Jurafsky, D.; and Wager, S. 2018. Deconfounded lexicon induction for inter- In 16th Annual Conference of the pretable social science. North American Chapter of the Association for Computa- tional Linguistics (NAACL). [Ribeiro, Singh, and Guestrin 2018] Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2018. Semantically equivalent adver- sarial rules for debugging nlp models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 856â865. Association for Computational Linguistics. [VanderWeele and Robinson 2014] VanderWeele, T. J., and Robinson, W. R. 2014. On the causal interpretation of race. Epidemiology 25(6). [Wachter, Mittelstadt, and Russell 2017] Wachter, S.; Mittel- stadt, B. D.; and Russell, C. 2017. Counterfactual explana- tions without opening the black box: Automated decisions and the GDPR. CoRR abs/1711.00399. [Zemel et al. 2013] Zemel, R.; Wu, Y.; Swersky, K.; Pitassi, T.; and Dwork, C. 2013. Learning fair representations. In Dasgupta, S., and McAllester, D., eds., Proceedings of the 30th International Conference on Machine Learning, vol- ume 28 of Proceedings of Machine Learning Research, 325â 333. Atlanta, Georgia, USA: PMLR. [Zhao, Dua, and Singh 2017] Zhao, Z.; Dua, D.; and Singh, S. 2017. Generating natural adversarial examples. CoRR abs/1710.11342.
# Appendix
Baseline Blind CF Augmentation CLP nontoxic, λ = 1 CLP, λ = 0.05 CLP, λ = 1 CLP, λ = 5 Train Terms Held-out Terms 0.241 0.000 0.155 0.029 0.165 0.010 0.004 0.071 0.062 0.057 0.068 0.057 0.058 0.051
Table 4: CTF gaps on toxic examples from the evaluation set, for both training terms and held-out terms.
Some people are gay Some people are straight Some people are Jewish Some people are Muslim Some people are Christian Baseline Blindness CF Augmentation CLP , λ = 5 0.98 0.02 0.28 0.46 0.04 0.61 0.61 0.61 0.61 0.16 0.82 0.11 0.17 0.24 0.02 0.14 0.14 0.13 0.14 0.14
Table 5: Counterfactuals and toxicity scores of different models. The tokens âgay,â âstraight,â âjewish,â and âmuslimâ are used during training, and âchristianâ was held-out. | {
"id": "1802.08139"
} |
1809.09600 | HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering | Existing question answering (QA) datasets fail to train QA systems to perform
complex reasoning and provide explanations for answers. We introduce HotpotQA,
a new dataset with 113k Wikipedia-based question-answer pairs with four key
features: (1) the questions require finding and reasoning over multiple
supporting documents to answer; (2) the questions are diverse and not
constrained to any pre-existing knowledge bases or knowledge schemas; (3) we
provide sentence-level supporting facts required for reasoning, allowing QA
systems to reason with strong supervision and explain the predictions; (4) we
offer a new type of factoid comparison questions to test QA systems' ability to
extract relevant facts and perform necessary comparison. We show that HotpotQA
is challenging for the latest QA systems, and the supporting facts enable
models to improve performance and make explainable predictions. | http://arxiv.org/pdf/1809.09600 | Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, Christopher D. Manning | cs.CL | EMNLP 2018 long paper. The first three authors contribute equally.
Data, code, and blog posts available at https://hotpotqa.github.io/ | null | cs.CL | 20180925 | 20180925 | 8 1 0 2
p e S 5 2 ] L C . s c [
1 v 0 0 6 9 0 . 9 0 8 1 : v i X r a
# HOTPOTQA: A Dataset for Diverse, Explainable Multi-hop Question Answering
Zhilin Yang*â Peng Qi*⥠Saizheng Zhang*â£
Yoshua Bengioâ£â¦ William W. Cohenâ Ruslan Salakhutdinovâ Christopher D. Manning⥠â Carnegie Mellon University ⥠Stanford University ⣠Mila, Universit´e de Montr´eal ⦠CIFAR Senior Fellow â Google AI {zhiliny, rsalakhu}@cs.cmu.edu, {pengqi, manning}@cs.stanford.edu saizheng.zhang@umontreal.ca, yoshua.bengio@gmail.com, wcohen@google.com
# Abstract
Existing question answering (QA) datasets fail to train QA systems to perform complex rea- soning and provide explanations for answers. We introduce HOTPOTQA, a new dataset with 113k Wikipedia-based question-answer pairs with four key features: (1) the questions re- quire ï¬nding and reasoning over multiple sup- porting documents to answer; (2) the ques- tions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas; (3) we provide sentence-level sup- porting facts required for reasoning, allowing QA systems to reason with strong supervision and explain the predictions; (4) we offer a new type of factoid comparison questions to test QA systemsâ ability to extract relevant facts and perform necessary comparison. We show that HOTPOTQA is challenging for the latest QA systems, and the supporting facts enable models to improve performance and make ex- plainable predictions.
Paragraph A, Return to Olympus: [1] Return to Olympus is the only album by the alterna- tive rock band Malfunkshun. [2] It was released after the band had broken up and after lead singer Andrew Wood (later of Mother Love Bone) had died of a drug overdose in 1990. [3] Stone Gossard, of Pearl Jam, had compiled the songs and released the album on his label, Loosegroove Records. Paragraph B, Mother Love Bone: [4] Mother Love Bone was an American rock band that formed in Seattle, Washington in 1987. [5] The band [6] Frontman Andrew was active from 1987 to 1990. Woodâs personality and compositions helped to catapult the group to the top of the burgeoning late 1980s/early 1990s Seattle music scene. [7] Wood died only days be- fore the scheduled release of the bandâs debut album, âAppleâ, thus ending the groupâs hopes of success. [8] The album was ï¬nally released a few months later. Q: What was the former band of the member of Mother Love Bone who died just before the release of âAppleâ? A: Malfunkshun Supporting facts: 1, 2, 4, 6, 7
Figure 1: An example of the multi-hop questions in HOTPOTQA. We also highlight the supporting facts in blue italics, which are also part of the dataset.
# Introduction
The ability to perform reasoning and inference over natural language is an important aspect of in- telligence. The task of question answering (QA) provides a quantiï¬able and objective way to test the reasoning ability of intelligent systems. To this end, a few large-scale QA datasets have been pro- posed, which sparked signiï¬cant progress in this direction. However, existing datasets have limita- tions that hinder further advancements of machine reasoning over natural language, especially in test- ing QA systemsâ ability to perform multi-hop rea- soning, where the system has to reason with in- formation taken from more than one document to arrive at the answer.
âThese authors contributed equally. The order of author- ship is decided through dice rolling.
First, some datasets mainly focus on testing the ability of reasoning within a single paragraph or document, or single-hop reasoning. For example, in SQuAD (Rajpurkar et al., 2016) questions are designed to be answered given a single paragraph as the context, and most of the questions can in fact be answered by matching the question with a single sentence in that paragraph. As a result, it has fallen short at testing systemsâ ability to reason over a larger context. TriviaQA (Joshi et al., 2017) and SearchQA (Dunn et al., 2017) create a more challenging setting by using information retrieval to collect multiple documents to form the con- text given existing question-answer pairs. Nev- ertheless, most of the questions can be answered by matching the question with a few nearby sen- tences in one single paragraph, which is limited as it does not require more complex reasoning (e.g.,
â Work done when WWC was at CMU.
over multiple paragraphs).
Second, existing datasets that target multi-hop reasoning, such as QAngaroo (Welbl et al., 2018) and COMPLEXWEBQUESTIONS (Talmor and Be- rant, 2018), are constructed using existing knowl- edge bases (KBs). As a result, these datasets are constrained by the schema of the KBs they use, and therefore the diversity of questions and an- swers is inherently limited.
Third, all of the above datasets only provide dis- tant supervision; i.e., the systems only know what the answer is, but do not know what supporting facts lead to it. This makes it difï¬cult for models to learn about the underlying reasoning process, as well as to make explainable predictions.
To address the above challenges, we aim at cre- ating a QA dataset that requires reasoning over multiple documents, and does so in natural lan- guage, without constraining itself to an existing knowledge base or knowledge schema. We also want it to provide the system with strong supervi- sion about what text the answer is actually derived from, to help guide systems to perform meaning- ful and explainable reasoning.
We present HOTPOTQA1, a large-scale dataset that satisï¬es these desiderata. HOTPOTQA is col- lected by crowdsourcing based on Wikipedia ar- ticles, where crowd workers are shown multiple supporting context documents and asked explic- itly to come up with questions requiring reason- ing about all of the documents. This ensures it covers multi-hop questions that are more natural, and are not designed with any pre-existing knowl- edge base schema in mind. Moreover, we also ask the crowd workers to provide the supporting facts they use to answer the question, which we also provide as part of the dataset (see Figure 1 for an example). We have carefully designed a data collection pipeline for HOTPOTQA, since the col- lection of high-quality multi-hop questions is non- trivial. We hope that this pipeline also sheds light on future work in this direction. Finally, we also collected a novel type of questionsâcomparison questionsâas part of HOTPOTQA, in which we require systems to compare two entities on some shared properties to test their understanding of both language and common concepts such as nu- merical magnitude. We make HOTPOTQA pub- licly available at https://HotpotQA.github.io.
1The name comes from the ï¬rst three authorsâ arriving at the main idea during a discussion at a hot pot restaurant.
# 2 Data Collection
The main goal of our work is to collect a diverse and explainable question answering dataset that requires multi-hop reasoning. One way to do so is to deï¬ne reasoning chains based on a knowl- edge base (Welbl et al., 2018; Talmor and Berant, 2018). However, the resulting datasets are limited by the incompleteness of entity relations and the lack of diversity in the question types. Instead, in this work, we focus on text-based question an- swering in order to diversify the questions and an- swers. The overall setting is that given some con- text paragraphs (e.g., a few paragraphs, or the en- tire Web) and a question, a QA system answers the question by extracting a span of text from the context, similar to Rajpurkar et al. (2016). We additionally ensure that it is necessary to perform multi-hop reasoning to correctly answer the ques- tion.
It is non-trivial to collect text-based multi-hop questions. In our pilot studies, we found that sim- ply giving an arbitrary set of paragraphs to crowd workers is counterproductive, because for most paragraph sets, it is difï¬cult to ask a meaning- ful multi-hop question. To address this challenge, we carefully design a pipeline to collect text-based multi-hop questions. Below, we will highlight the key design choices in our pipeline.
Building a Wikipedia Hyperlink Graph. We use the entire English Wikipedia dump as our cor- pus.2 In this corpus, we make two observations: (1) hyper-links in the Wikipedia articles often nat- urally entail a relation between two (already dis- ambiguated) entities in the context, which could potentially be used to facilitate multi-hop reason- ing; (2) the ï¬rst paragraph of each article often contains much information that could be queried in a meaningful way. Based on these observations, we extract all the hyperlinks from the ï¬rst para- graphs of all Wikipedia articles. With these hy- perlinks, we build a directed graph G, where each edge (a, b) indicates there is a hyperlink from the ï¬rst paragraph of article a to article b.
Generating Candidate Paragraph Pairs. To generate meaningful pairs of paragraphs for multi- hop question answering with G, we start by considering an example question âwhen was the singer and songwriter of Radiohead born?â To
2https://dumps.wikimedia.org/
answer this question, one would need to ï¬rst rea- son that the âsinger and songwriter of Radioheadâ is âThom Yorkeâ, and then ï¬gure out his birth- day in the text. We call âThom Yorkeâ a bridge entity in this example. Given an edge (a, b) in the hyperlink graph G, the entity of b can usually be viewed as a bridge entity that connects a and b. As we observe articles b usually determine the theme of the shared context between a and b, but not all articles b are suitable for collecting multi- hop questions. For example, entities like coun- tries are frequently referred to in Wikipedia, but donât necessarily have much in common with all incoming links. It is also difï¬cult, for instance, for the crowd workers to ask meaningful multi- hop questions about highly technical entities like the IPv4 protocol. To alleviate this issue, we con- strain the bridge entities to a set of manually cu- rated pages in Wikipedia (see Appendix A). Af- ter curating a set of pages B, we create candidate paragraph pairs by sampling edges (a, b) from the hyperlink graph such that b â B.
Comparison Questions. In addition to ques- tions collected using bridge entities, we also collect another type of multi-hop questionsâ comparison questions. The main idea is that com- paring two entities from the same category usu- ally results in interesting multi-hop questions, e.g., âWho has played for more NBA teams, Michael Jordan or Kobe Bryant?â To facilitate collecting this type of question, we manually curate 42 lists of similar entities (denoted as L) from Wikipedia.3 To generate candidate paragraph pairs, we ran- domly sample two paragraphs from the same list and present them to the crowd worker.
To increase the diversity of multi-hop questions, we also introduce a subset of yes/no questions in comparison questions. This complements the original scope of comparison questions by offer- ing new ways to require systems to reason over both paragraphs. For example, consider the en- tities Iron Maiden (from the UK) and AC/DC (from Australia). Questions like âIs Iron Maiden or AC/DC from the UK?â are not ideal, because one would deduce the answer is âIron Maidenâ even if one only had access to that article. With yes/no questions, one may ask âAre Iron Maiden and AC/DC from the same country?â, which re-
3This is achieved by manually curating lists from the Wikipedia âList of lists of listsâ (https://wiki.sh/ y8qv). One example is âHighest Mountains on Earthâ.
# Algorithm 1 Overall data collection procedure
Input: question type ratio r1 = 0.75, yes/no ratio r2 = 0.5 while not ï¬nished do if random() < r1 then Uniformly sample an entity b â B Uniformly sample an edge (a, b) Workers ask a question about paragraphs a and b else Sample a list from L, with probabilities weighted by list sizes Uniformly sample two entities (a, b) from the list if random() < r2 then Workers ask a yes/no question to compare a and b else Workers ask a question with a span answer to compare a and b end if end if Workers provide the supporting facts end while
quires reasoning over both paragraphs.
To the best of our knowledge, text-based com- parison questions are a novel type of questions that have not been considered by previous datasets. More importantly, answering these questions usu- ally requires arithmetic comparison, such as com- paring ages given birth dates, which presents a new challenge for future model development.
Collecting Supporting Facts. To enhance the explainability of question answering systems, we want them to output a set of supporting facts nec- essary to arrive at the answer, when the answer is generated. To this end, we also collect the sentences that determine the answers from crowd workers. These supporting facts can serve as strong supervision for what sentences to pay at- tention to. Moreover, we can now test the explain- ability of a model by comparing the predicted sup- porting facts to the ground truth ones.
The overall procedure of data collection is illus- trated in Algorithm 1.
# 3 Processing and Benchmark Settings
We collected 112,779 valid examples in total on Amazon Mechanical Turk4 using the ParlAI in- terface (Miller et al., 2017) (see Appendix A).To isolate potential single-hop questions from the de- sired multi-hop ones, we ï¬rst split out a sub- set of data called train-easy. Speciï¬cally, we randomly sampled questions (â¼3â10 per Turker) from top-contributing turkers, and categorized all
4https://www.mturk.com/
Name Desc. Usage # Examples single-hop train-easy train-medium multi-hop train-hard dev test-distractor test-fullwiki Total hard multi-hop hard multi-hop hard multi-hop hard multi-hop training training training dev test test 18,089 56,814 15,661 7,405 7,405 7,405 112,779
Table 1: Data split. The splits train-easy, train- medium, and train-hard are combined for training. The distractor and full wiki settings use different test sets so that the gold paragraphs in the full wiki test set remain unknown to any models.
their questions into the train-easy set if an over- whelming percentage in the sample only required reasoning over one of the paragraphs. We sam- pled these turkers because they contributed more than 70% of our data. This train-easy set contains 18,089 mostly single-hop examples.
We implemented a question answering model based on the current state-of-the-art architectures, which we discuss in detail in Section 5.1. Based on this model, we performed a three-fold cross validation on the remaining multi-hop examples. Among these examples, the models were able to correctly answer 60% of the questions with high conï¬dence (determined by thresholding the model loss). These correctly-answered questions (56,814 in total, 60% of the multi-hop examples) are split out and marked as the train-medium subset, which will also be used as part of our training set.
After splitting out train-easy and train-medium, we are left with hard examples. As our ultimate goal is to solve multi-hop question answering, we focus on questions that the latest modeling tech- niques are not able to answer. Thus we constrain our dev and test sets to be hard examples. Specif- ically, we randomly divide the hard examples into four subsets, train-hard, dev, test-distractor, and test-fullwiki. Statistics about the data split can be found in Table 1. In Section 5, we will show that combining train-easy, train-medium, and train- hard to train models yields the best performance, so we use the combined set as our default train- ing set. The two test sets test-distractor and test- fullwiki are used in two different benchmark set- tings, which we introduce next.
We create two benchmark settings. In the ï¬rst setting, to challenge the model to ï¬nd the true sup- porting facts in the presence of noise, for each ex- ample we employ bigram tf-idf (Chen et al., 2017)
to retrieve 8 paragraphs from Wikipedia as dis- tractors, using the question as the query. We mix them with the 2 gold paragraphs (the ones used to collect the question and answer) to construct the distractor setting. The 2 gold paragraphs and the 8 distractors are shufï¬ed before they are fed to the model. In the second setting, we fully test the modelâs ability to locate relevant facts as well as reasoning about them by requiring it to answer the question given the ï¬rst paragraphs of all Wikipedia articles without the gold paragraphs speciï¬ed. This full wiki setting truly tests the per- formance of the systemsâ ability at multi-hop rea- soning in the wild.5 The two settings present dif- ferent levels of difï¬culty, and would require tech- niques ranging from reading comprehension to in- formation retrieval. As shown in Table 1, we use separate test sets for the two settings to avoid leak- ing information, because the gold paragraphs are available to a model in the distractor setting, but should not be accessible in the full wiki setting.
We also try to understand the modelâs good performance on the train-medium split. Manual analysis shows that the ratio of multi-hop ques- tions in train-medium is similar to that of the hard examples (93.3% in train-medium vs. 92.0% in dev), but one of the question types appears more frequently in train-medium compared to the hard splits (Type II: 32.0% in train-medium vs. 15.0% in dev, see Section 4 for the deï¬nition of Type II questions). These observations demonstrate that given enough training data, existing neural archi- tectures can be trained to answer certain types and certain subsets of the multi-hop questions. How- ever, train-medium remains challenging when not just the gold paragraphs are presentâwe show in Appendix C that the retrieval problem on these ex- amples are as difï¬cult as that on their hard cousins.
# 4 Dataset Analysis
In this section, we analyze the types of questions, types of answers, and types of multi-hop reasoning covered in the dataset.
Question Types. We heuristically identiï¬ed question types for each collected question. To identify the question type, we ï¬rst locate the cen- tral question word (CQW) in the question. Since HOTPOTQA contains comparison questions and
5As we required the crowd workers to use complete en- tity names in the question, the majority of the questions are unambiguous in the full wiki setting.
Figure 2: Types of questions covered in HOTPOTQA. Question types are extracted heuristically, starting at question words or prepositions preceding them. Empty colored blocks indicate sufï¬xes that are too rare to show individually. See main text for more details.
yes/no questions, we consider as question words WH-words, copulas (âisâ, âareâ), and auxiliary verbs (âdoesâ, âdidâ). Because questions often in- volve relative clauses beginning with WH-words, we deï¬ne the CQW as the ï¬rst question word in the question if it can be found in the ï¬rst three to- kens, or the last question word otherwise. Then, we determine question type by extracting words up to 2 tokens away to the right of the CQW, along with the token to the left if it is one of a few com- mon prepositions (e.g., in the cases of âin whichâ and âby whomâ).
We visualize the distribution of question types in Figure 2, and label the ones shared among more than 250 questions. As is shown, our dataset cov- ers a diverse variety of questions centered around entities, locations, events, dates, and numbers, as well as yes/no questions directed at comparing two entities (âAre both A and B ...?â), to name a few.
Answer Types. We further sample 100 exam- ples from the dataset, and present the types of an- swers in Table 2. As can be seen, HOTPOTQA covers a broad range of answer types, which matches our initial analysis of question types. We ï¬nd that a majority of the questions are about en- tities in the articles (68%), and a non-negligible amount of questions also ask about various proper- ties like date (9%) and other descriptive properties such as numbers (8%) and adjectives (4%).
Answer Type % Example(s) Person Group / Org Location Date Number Artwork Yes/No Adjective Event Other noun Common noun proper 30 King Edward II, Rihanna 13 Cartoonito, Apalachee Fort Richardson, California 10 10th or even 13th century 9 8 79.92 million, 17 8 Die schweigsame Frau 6 4 1 6 Cold War, Laban Movement - conservative Prix Benois de la Danse 5 Analysis comedy, both men and women
Table 2: Types of answers in HOTPOTQA.
Multi-hop Reasoning Types. We also sampled 100 examples from the dev and test sets and man- ually classiï¬ed the types of reasoning required to answer each question. Besides comparing two en- tities, there are three main types of multi-hop rea- soning required to answer these questions, which we show in Table 3 accompanied with examples.
Most of the questions require at least one sup- porting fact from each paragraph to answer. A ma- jority of sampled questions (42%) require chain reasoning (Type I in the table), where the reader must ï¬rst identify a bridge entity before the second hop can be answered by ï¬lling in the bridge. One strategy to answer these questions would be to de- compose them into consecutive single-hop ques- tions. The bridge entity could also be used im- plicitly to help infer properties of other entities re- lated to it. In some questions (Type III), the entity in question shares certain properties with a bridge entity (e.g., they are collocated), and we can in- fer its properties through the bridge entity. An- other type of question involves locating the answer entity by satisfying multiple properties simultane- ously (Type II). Here, to answer the question, one could ï¬nd the set of all entities that satisfy each of the properties mentioned, and take an intersection to arrive at the ï¬nal answer. Questions comparing two entities (Comparison) also require the system to understand the properties in question about the two entities (e.g., nationality), and sometimes re- quire arithmetic such as counting (as seen in the table) or comparing numerical values (âWho is older, A or B?â). Finally, we ï¬nd that sometimes the questions require more than two supporting facts to answer (Other). In our analysis, we also ï¬nd that for all of the examples shown in the ta- ble, the supporting facts provided by the Turkers match exactly with the limited context shown here,
Reasoning Type % Example(s) Inferring the bridge entity complete the 2nd-hop question (Type I) to 42 Paragraph A: The 2015 Diamond Head Classic was a college basketball tournament ... Buddy Hield was named the tournamentâs MVP. Paragraph B: Chavano Rainier âBuddyâ Hield is a Bahamian professional basketball player for the Sacramento Kings of the NBA... Q: Which team does the player named 2015 Diamond Head Classicâs MVP play for? Comparing two enti- ties (Comparison) 27 Paragraph A: LostAlone were a British rock band ... consisted of Steven Battelle, Alan Williamson, and Mark Gibson... Paragraph B: Guster is an American alternative rock band ... Founding members Adam Gardner, Ryan Miller, and Brian Rosenworcel began... Q: Did LostAlone and Guster have the same number of members? (yes) Locating the answer entity checking multiple properties (Type II) by 15 Paragraph A: Several current and former members of the Pittsburgh Pirates ... John Milner, Dave Parker, and Rod Scurry... Paragraph B: David Gene Parker, nicknamed âThe Cobraâ, is an American former player in Major League Baseball... Q: Which former member of the Pittsburgh Pirates was nicknamed âThe Cobraâ? Inferring the property of an entity in question through a bridge entity (Type III) about 6 Paragraph A: Marine Tactical Air Command Squadron 28 is a United States Marine Corps aviation command and control unit based at Marine Corps Air Station Cherry Point... Paragraph B: Marine Corps Air Station Cherry Point ... is a United States Marine Corps airï¬eld located in Havelock, North Carolina, USA ... Q: What city is the Marine Air Control Group 28 located in? Other types of reason- ing that require more than two supporting facts (Other) 2 Paragraph A: ... the towns of Yodobashi, Okubo, Totsuka, and Ochiai town were merged into Yodobashi ward. ... Yodobashi Camera is a store with its name taken from the town and ward. Paragraph B: Yodobashi Camera Co., Ltd. is a major Japanese retail chain specializing in electronics, PCs, cameras and photographic equipment. Q: Aside from Yodobashi, what other towns were merged into the ward which gave the major Japanese retail chain specializing in electronics, PCs, cameras, and photographic equipment itâs name?
Table 3: Types of multi-hop reasoning required to answer questions in the HOTPOTQA dev and test sets. We show in orange bold italics bridge entities if applicable, blue italics supporting facts from the paragraphs that connect directly to the question, and green bold the answer in the paragraph or following the question. The remaining 8% are single-hop (6%) or unanswerable questions (2%) by our judgement.
showing that the supporting facts collected are of high quality.
Aside from the reasoning types mentioned above, we also estimate that about 6% of the sam- pled questions can be answered with one of the two paragraphs, and 2% of them unanswerable. We also randomly sampled 100 examples from train-medium and train-hard combined, and the proportions of reasoning types are: Type I 38%, Type II 29%, Comparison 20%, Other 7%, Type III 2%, single-hop 2%, and unanswerable 2%.
# 5 Experiments
# 5.1 Model Architecture and Training
To test the performance of leading QA systems on our data, we reimplemented the architecture described in Clark and Gardner (2017) as our baseline model. We note that our implementa- tion without weight averaging achieves perfor- mance very close to what the authors reported on SQuAD (about 1 point worse in F1). Our implemented model subsumes the latest techni-
cal advances on question answering, including character-level models, self-attention (Wang et al., 2017), and bi-attention (Seo et al., 2017). Combin- ing these three key components is becoming stan- dard practice, and various state-of-the-art or com- petitive architectures (Liu et al., 2018; Clark and Gardner, 2017; Wang et al., 2017; Seo et al., 2017; Pan et al., 2017; Salant and Berant, 2018; Xiong et al., 2018) on SQuAD can be viewed as simi- lar to our implemented model. To accommodate yes/no questions, we also add a 3-way classiï¬er after the last recurrent layer to produce the prob- abilities of âyesâ, ânoâ, and span-based answers. During decoding, we ï¬rst use the 3-way output to determine whether the answer is âyesâ, ânoâ, or a text span. If it is a text span, we further search for the most probable span.
Supporting Facts as Strong Supervision. To evaluate the baseline modelâs performance in pre- dicting explainable supporting facts, as well as how much they improve QA performance, we additionally design a component to incorporate
+{ Linear + Yes/no/span RNN >| Linear » End token -âââââj concat RNN >| Linear > Start token -âââ7 concat RNN ceomeat Apr i Lâ>{ RNN - > 0/1 : â_ââ1 ââââââ | (is supporting facts?) | Self-Attention ' ' ! Strong supervision | ea eee ' RNN residual Bi-Attention RNN (Char RNN } (Word emb } (Char RNN } [Word emb ââ ==> paragraphs question
Figure 3: Our model architecture. Strong supervision over supporting facts is used in a multi-task setting.
such strong supervision into our model. For each sentence, we concatenate the output of the self- attention layer at the ï¬rst and last positions, and use a binary linear classiï¬er to predict the prob- ability that the current sentence is a supporting fact. We minimize a binary cross entropy loss for this classiï¬er. This objective is jointly optimized with the normal question answering objective in a multi-task learning setting, and they share the same low-level representations. With this classi- ï¬er, the model can also be evaluated on the task of supporting fact prediction to gauge its explainabil- ity. Our overall architecture is illustrated in Figure 3. Though it is possible to build a pipeline system, in this work we focus on an end-to-end one, which is easier to tune and faster to train.
# 5.2 Results
We evaluate our model in the two benchmark set- tings. In the full wiki setting, to enable efï¬cient tf- idf retrieval among 5,000,000+ wiki paragraphs, given a question we ï¬rst return a candidate pool of at most 5,000 paragraphs using an inverted-index- based ï¬ltering strategy6 and then select the top 10 paragraphs in the pool as the ï¬nal candidates using bigram tf-idf.7 Retrieval performance is shown in
6See Appendix C for details. 7We choose the number of ï¬nal candidates as 10 to stay consistent with the distractor setting where candidates are 2
]
Table 5. After retrieving these 10 paragraphs, we then use the model trained in the distractor setting to evaluate its performance on these ï¬nal candi- date paragraphs.
Following previous work (Rajpurkar et al., 2016), we use exact match (EM) and F1 as two evaluation metrics. To assess the explainability of the models, we further introduce two sets of met- rics involving the supporting facts. The ï¬rst set fo- cuses on evaluating the supporting facts directly, namely EM and F1 on the set of supporting fact sentences as compared to the gold set. The second set features joint metrics that combine the evalu- ation of answer spans and supporting facts as fol- lows. For each example, given its precision and recall on the answer span (P (ans), R(ans)) and the supporting facts (P (sup), R(sup)), respectively, we calculate joint F1 as
P (joint) = P (ans)P (sup), R(joint) = R(ans)R(sup),
Joint F1 = 2P (joint)R(joint) P (joint) + R(joint) .
Joint EM is 1 only if both tasks achieve an ex- act match and otherwise 0. Intuitively, these met- rics penalize systems that perform poorly on ei- ther task. All metrics are evaluated example-by- example, and then averaged over examples in the evaluation set.
The performance of our model on the bench- mark settings is reported in Table 4, where all numbers are obtained with strong supervision over supporting facts. From the distractor setting to the full wiki setting, expanding the scope of the con- text increases the difï¬culty of question answering. The performance in the full wiki setting is sub- stantially lower, which poses a challenge to exist- ing techniques on retrieval-based question answer- ing. Overall, model performance in all settings is signiï¬cantly lower than human performance as shown in Section 5.3, which indicates that more technical advancements are needed in future work. We also investigate the explainability of our model by measuring supporting fact prediction performance. Our model achieves 60+ support- ing fact prediction F1 and â¼40 joint F1, which in- dicates there is room for further improvement in terms of explainability.
In Table 6, we break down the performance on different question types. In the distractor set- ting, comparison questions have lower F1 scores
gold paragraphs plus 8 distractors.
Setting Split Answer Sup Fact Joint EM F1 EM F1 EM F1 distractor distractor dev test 44.44 45.46 58.28 58.99 21.95 22.24 66.66 66.62 11.56 12.04 40.86 41.37 full wiki full wiki dev test 24.68 25.23 34.36 34.40 5.28 5.07 40.98 40.69 2.54 2.63 17.73 17.85
Table 4: Main results: the performance of question answering and supporting fact prediction in the two benchmark settings. We encourage researchers to report these metrics when evaluating their methods.
Set MAP Mean Rank Hits@2 Hits@10 dev test 43.93 43.21 314.71 314.05 39.43 38.67 56.06 55.88
Table 5: Retrieval performance in the full wiki setting. Mean Rank is averaged over the ranks of two gold para- graphs.
Setting Br EM Br F1 Cp EM Cp F1 distractor full wiki 43.41 19.76 59.09 30.42 48.55 43.87 55.05 50.70
Table 6: Performance breakdown over different ques- tion types on the dev set in the distractor setting. âBrâ denotes questions collected using bridge entities, and âCpâ denotes comparison questions.
Setting EM F1 our model â sup fact 44.44 42.79 58.28 56.19 â sup fact, self attention â sup fact, char model 41.59 41.66 â sup fact, train-easy 41.61 â sup fact, train-easy, train-medium 31.07 55.19 55.25 55.12 43.61 gold only sup fact only 48.38 51.95 63.58 66.98
Table 7: Ablation study of question answering perfor- mance on the dev set in the distractor setting. ââ sup factâ means removing strong supervision over support- ing facts from our model. ââ train-easyâ and ââ train- mediumâ means discarding the according data splits from training. âgold onlyâ and âsup fact onlyâ refer to using the gold paragraphs or the supporting facts as the only context input to the model.
than questions involving bridge entities (as deï¬ned in Section 2), which indicates that better mod- eling this novel question type might need better neural architectures. In the full wiki setting, the performance of bridge entity questions drops sig- niï¬cantly while that of comparison questions de- creases only marginally. This is because both en- tities usually appear in the comparison questions, and thus reduces the difï¬culty of retrieval. Com- bined with the retrieval performance in Table 5, we believe that the deterioration in the full wiki setting in Table 4 is largely due to the difï¬culty of retrieving both entities.
model, which achieves a 10+ F1 improvement over not using the supporting facts. Compared with the gain of strong supervision in our model (â¼2 points in F1), our proposed method of incorporating sup- porting facts supervision is most likely subopti- mal, and we leave the challenge of better model- ing to future work. At last, we show that combin- ing all data splits (train-easy, train-medium, and train-hard) yields the best performance, which is adopted as the default setting.
We perform an ablation study in the distractor setting, and report the results in Table 7. Both self- attention and character-level models contribute notably to the ï¬nal performance, which is consis- tent with prior work. This means that techniques targeted at single-hop QA are still somewhat ef- fective in our setting. Moreover, removing strong supervision over supporting facts decreases per- formance, which demonstrates the effectiveness of our approach and the usefulness of the supporting facts. We establish an estimate of the upper bound of strong supervision by only considering the sup- porting facts as the oracle context input to our
# 5.3 Establishing Human Performance
To establish human performance on our dataset, we randomly sampled 1,000 examples from the dev and test sets, and had at least three additional Turkers provide answers and supporting facts for these examples. As a baseline, we treat the orig- inal Turker during data collection as the predic- tion, and the newly collected answers and support- ing facts as references, to evaluate human perfor- mance. For each example, we choose the answer and supporting fact reference that maximize the F1 score to report the ï¬nal metrics to reduce the effect of ambiguity (Rajpurkar et al., 2016).
Setting Answer Sp Fact Joint EM F1 EM F1 EM gold only distractor 65.87 74.67 59.76 90.41 41.54 68.15 60.88 68.99 30.99 74.67 20.06 52.37 Human 83.60 91.40 61.50 90.04 52.30 82.55 Human UB 96.80 98.77 87.40 97.56 84.60 96.37 F1
Table 8: Comparing baseline model performance with human performance on 1,000 random samples. âHu- man UBâ stands for the upper bound on annotator per- formance on HOTPOTQA. For details please refer to the main body.
As can be seen in Table 8, the original crowd worker achieves very high performance in both ï¬nding supporting facts, and answering the ques- tion correctly. If the baseline model were provided with the correct supporting paragraphs to begin with, it achieves parity with the crowd worker in ï¬nding supporting facts, but still falls short at ï¬nding the actual answer. When distractor para- graphs are present, the performance gap between the baseline model and the crowd worker on both tasks is enlarged to â¼30% for both EM and F1.
We further establish the upper bound of human performance in HOTPOTQA, by taking the maxi- mum EM and F1 for each example. Here, we use each Turkerâs answer in turn as the prediction, and evaluate it against all other workersâ answers. As can be seen in Table 8, most of the metrics are close to 100%, illustrating that on most examples, at least a subset of Turkers agree with each other, showing high inter-annotator agreement. We also note that crowd workers agree less on supporting facts, which could reï¬ect that this task is inher- ently more subjective than answering the question.
# 6 Related Work
Various recently-proposed large-scale QA datasets can be categorized in four categories.
Single-document datasets. SQuAD (Rajpurkar et al., 2016, 2018) questions that are relatively simple because they usually require no more than one sentence in the paragraph to answer.
Multi-document datasets. TriviaQA (Joshi et al., 2017) and SearchQA (Dunn et al., 2017) contain question answer pairs that are accompa- nied with more than one document as the context. This further challenges QA systemsâ ability to accommodate longer contexts. However, since the
supporting documents are collected after the ques- tion answer pairs with information retrieval, the questions are not guaranteed to involve interesting reasoning between multiple documents.
KB-based multi-hop datasets. Recent datasets like QAngaroo (Welbl et al., 2018) and COM- PLEXWEBQUESTIONS (Talmor and Berant, 2018) explore different approaches of using pre-existing knowledge bases (KB) with pre-deï¬ned logic rules to generate valid QA pairs, to test QA modelsâ ca- pability of performing multi-hop reasoning. The diversity of questions and answers is largely lim- ited by the ï¬xed KB schemas or logical forms. Furthermore, some of the questions might be an- swerable by one text sentence due to the incom- pleteness of KBs.
Free-form answer-generation datasets. MS MARCO (Nguyen et al., 2016) contains 100k user queries from Bing Search with human generated answers. Systems generate free-form answers and are evaluated by automatic metrics such as ROUGE-L and BLEU-1. However, the reliabil- ity of these metrics is questionable because they have been shown to correlate poorly with human judgement (Novikova et al., 2017).
# 7 Conclusions
We present HOTPOTQA, a large-scale question answering dataset aimed at facilitating the devel- opment of QA systems capable of performing ex- plainable, multi-hop reasoning over diverse nat- ural language. We also offer a new type of fac- toid comparison questions to test systemsâ ability to extract and compare various entity properties in text.
# Acknowledgements
This work is partly funded by the Facebook ParlAI Research Award. ZY, WWC, and RS are sup- the DARPA grant ported by a Google grant, D17AP00001, the ONR grants N000141512791, N000141812861, and the Nvidia NVAIL Award. SZ and YB are supported by Mila, Universit´e de Montr´eal. PQ and CDM are supported by the Na- tional Science Foundation under Grant No. IIS- 1514268. Any opinions, ï¬ndings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reï¬ect the views of the National Science Foundation.
# References
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- In Association for Computa- domain questions. tional Linguistics (ACL).
Christopher Clark and Matt Gardner. 2017. Simple and effective multi-paragraph reading comprehen- In Proceedings of the 55th Annual Meeting sion. of the Association of Computational Linguistics.
Matthew Dunn, Levent Sagun, Mike Higgins, Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. SearchQA: A new Q&A dataset augmented with arXiv preprint context from a search engine. arXiv:1704.05179.
Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale dis- tantly supervised challenge dataset for reading com- In Proceedings of the 55th Annual prehension. Meeting of the Association for Computational Lin- guistics.
Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2018. Stochastic answer networks for ma- chine reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55â60.
Alexander H Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, and Ja- son Weston. 2017. ParlAI: A dialog research soft- ware platform. arXiv preprint arXiv:1705.06476.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine In Proceedings of reading comprehension dataset. the 30th Annual Conference on Neural Information Processing Systems (NIPS).
Jekaterina Novikova, OndËrej DuËsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing.
Boyuan Pan, Hao Li, Zhou Zhao, Bin Cao, Deng Cai, and Xiaofei He. 2017. Memen: Multi-layer embed- ding with memory networks for machine compre- hension. arXiv preprint arXiv:1707.09098.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you donât know: Unanswerable ques- In Proceedings of the 56th An- tions for SQuAD. nual Meeting of the Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP).
Shimi Salant and Jonathan Berant. 2018. Contextu- alized word representations for reading comprehen- sion. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention ï¬ow for machine comprehension. In Proceedings of the International Conference on Learning Represen- tations.
Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching net- works for reading comprehension and question an- swering. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189â198.
Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transac- tions of the Association of Computational Linguis- tics.
Caiming Xiong, Victor Zhong, and Richard Socher. 2018. DCN+: Mixed objective and deep residual coattention for question answering. In Proceedings of the International Conference on Learning Repre- sentations.
Zhilin Yang, Saizheng Zhang, Jack Urbanek, Will Feng, Alexander H Miller, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Mastering the dun- geon: Grounded language learning by mechanical turker descent. In Proceedings of the International Conference on Learning Representations.
# A Data Collection Details
# A.1 Data Preprocessing
We downloaded the dump of English Wikipedia of October 1, 2017, and extracted text and hyperlinks with WikiExtractor.8 We use Stanford CoreNLP 3.8.0 (Manning et al., 2014) for word and sen- tence tokenization. We use the resulting sentence boundaries for collection of supporting facts, and use token boundaries to check whether Turkers are providing answers that cover spans of entire to- kens to avoid nonsensical partial-word answers.
# A.2 Further Data Collection Details
Details on Curating Wikipedia Pages. To make sure the sampled candidate paragraph pairs are intuitive for crowd workers to ask high-quality multi-hop questions about, we manually curate 591 categories from the lists of popular pages by WikiProject.9 For each category, we sample (a, b) pairs from the graph G where b is in the considered category, and manually check whether a multi-hop question can be asked given the pair (a, b). Those categories with a high probability of permitting multi-hop questions are selected.
Bonus Structures. To incentivize crowd work- ers to produce higher-quality data more efï¬ciently, we follow Yang et al. (2018), and employ bonus structures. We mix two settings in our data collec- tion process. In the ï¬rst setting, we reward the top (in terms of numbers of examples) workers every 200 examples. In the second setting, the workers get bonuses based on their productivity (measured as the number of examples per hour).
# A.3 Crowd Worker Interface
Our crowd worker interface is based on ParlAI (Miller et al., 2017), an open-source project that facilitates the development of dialog systems and data collection with a dialog interface. We adapt ParlAI for collecting question answer pairs by converting the collection workï¬ow into a system- oriented dialog. This allows us to have more con- trol over the turkers input, as well as provide turk- ers with in-the-loop feedbacks or helpful hints to help Turkers ï¬nish the task, and therefore speed up the collection process.
Please see Figure 4 for an example of the worker interface during data collection.
8https://github.com/attardi/ wikiextractor 9https://wiki.sh/y8qu
Paragraph A, 2015 Miami Dolphine season [1] The 2015 Miami Dolphins season was the franchise's 46th season in the National FOE âoverall. [2] The Dolphins ook to improve on their 8-8 record from 2014 and return tothe pl seven seasons [3] Howerar Mar fale to cinch a lye berth forthe seventh conaecuve season afer a ook 14 loss to the Giants, Paragraph 8, 2008 Miami Dolphins season [4] The 2008 Miami Dolphins season was the organization's $6th season inthe National Football League and 43rd âoverall. [5] During the regular season the Dolphins completed the greatest single-season turnaround in NFL history, going {rom a 1-18 regular season record in 2007 to an 11-5 record in 2008. [6] The previous record for most improved team âone year after @ 1-15 season belonged to the 1992 Indianapolis Colts, who went 9-7. [7] The 1999 Indianapolis Colts were the only other team to accomplish a 10-game turnaround, winning 13 games after winning 3 in 1998. [8] âAdditionally, Miami won the AFC East, becoming the first team in NFL history to win their division after only having one win the previous season. Please type a question given the two paragraphs above, you have 10 min(s) (0/1 examples fnished so fa. HINT: Maybe ask a question where the answer is 2008 Miami Dolphins season. It should require pio I aRRULNM ELL information from both paragraphs. Example from Tutorial: Which movie starring Ed Harris is based on a French novel (Info A: starring Harris, Info B: based on a French novel) Worker Input Send
Figure 4: Screenshot of our worker interface on Ama- zon Mechanical Turk.
·104 4 s e l p m a x E f o 3 r e b m u N 2 1 10 30 50 70 90 110 130 Question Length (tokens)
Figure 5: Distribution of lengths of questions in HOT- POTQA.
# B Further Data Analysis
To further look into the diversity of the data in HOTPOTQA, we further visualized the distribu- tion of question lengths in the dataset in Figure 5. Besides being diverse in terms of types as is show in the main text, questions also vary greatly in length, indicating different levels of complexity and details covered.
# C Full Wiki Setting Details
# C.1 The Inverted Index Filtering Strategy
In the full wiki setting, we adopt an efï¬cient inverted-index-based ï¬ltering strategy for prelim- inary candidate paragraph retrieval. We provide details in Algorithm 2, where we set the control threshold N = 5000 in our experiments. For some of the question q, its corresponding gold para-
Algorithm 2 Inverted Index Filtering Strategy
Input: question text q, control threshold N , ngram-to- Wikidoc inverted index D Inintialize: Extract unigram + bigram set rq from q Ncand = +â Cgram = 0 while Ncands > N do Cgram = Cgram + 1 Set Soverlap to be an empty dictionary for w â rq do for d â D[w] do if d not in Soverlap then Soverlap[d] = 1 else Soverlap[d] = Soverlap[d] + 1 end if end for end for Scand = â
for d in Soverlap do if Soverlap[d] ⥠Cgram then Scand = Scand ⪠{d} end if end for Ncands = |Scand| end while return Scand
graphs may not be included in the output candidate pool Scand, we set such missing gold paragraphâs rank as |Scand| + 1 during the evaluation, so MAP and Mean Rank reported in this paper are upper bounds of their true values.
# C.2 Compare train-medium Split to Hard Ones
Table 9 shows the comparison between train- medium split and hard examples like dev and test under retrieval metrics in full wiki setting. As we can see, the performance gap between train- medium split and its dev/test is close, which im- plies that train-medium split has a similar level of difï¬culty as hard examples under the full wiki set- ting in which a retrieval model is necessary as the ï¬rst processing step.
Set MAP Mean Rank CorAns Rank train-medium 41.89 42.79 dev 45.92 test 288.19 304.30 286.20 82.76 97.93 74.85
Table 9: Retrieval performance comparison on full wiki setting for train-medium, dev and test with 1,000 ran- dom samples each. MAP and are in %. Mean Rank averages over retrieval ranks of two gold paragraphs. CorAns Rank refers to the rank of the gold paragraph containing the answer. | {
"id": "1707.09098"
} |
1809.04281 | Music Transformer | Music relies heavily on repetition to build structure and meaning.
Self-reference occurs on multiple timescales, from motifs to phrases to reusing
of entire sections of music, such as in pieces with ABA structure. The
Transformer (Vaswani et al., 2017), a sequence model based on self-attention,
has achieved compelling results in many generation tasks that require
maintaining long-range coherence. This suggests that self-attention might also
be well-suited to modeling music. In musical composition and performance,
however, relative timing is critically important. Existing approaches for
representing relative positional information in the Transformer modulate
attention based on pairwise distance (Shaw et al., 2018). This is impractical
for long sequences such as musical compositions since their memory complexity
for intermediate relative information is quadratic in the sequence length. We
propose an algorithm that reduces their intermediate memory requirement to
linear in the sequence length. This enables us to demonstrate that a
Transformer with our modified relative attention mechanism can generate
minute-long compositions (thousands of steps, four times the length modeled in
Oore et al., 2018) with compelling structure, generate continuations that
coherently elaborate on a given motif, and in a seq2seq setup generate
accompaniments conditioned on melodies. We evaluate the Transformer with our
relative attention mechanism on two datasets, JSB Chorales and
Piano-e-Competition, and obtain state-of-the-art results on the latter. | http://arxiv.org/pdf/1809.04281 | Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Ian Simon, Curtis Hawthorne, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, Douglas Eck | cs.LG, cs.SD, eess.AS, stat.ML | Improved skewing section and accompanying figures. Previous titles
are "An Improved Relative Self-Attention Mechanism for Transformer with
Application to Music Generation" and "Music Transformer" | null | cs.LG | 20180912 | 20181212 | 8 1 0 2
c e D 2 1 ] G L . s c [
3 v 1 8 2 4 0 . 9 0 8 1 : v i X r a
# MUSIC TRANSFORMER: GENERATING MUSIC WITH LONG-TERM STRUCTURE
Cheng-Zhi Anna Huangâ Ashish Vaswani Ian Simon Curtis Hawthorne Andrew M. Dai Matthew D. Hoffman Monica Dinculescu Douglas Eck Google Brain
# Shazeer
# ABSTRACT
Music relies heavily on repetition to build structure and meaning. Self-reference occurs on multiple timescales, from motifs to phrases to reusing of entire sections of music, such as in pieces with ABA structure. The Transformer (Vaswani et al., 2017), a sequence model based on self-attention, has achieved compelling results in many generation tasks that require maintaining long-range coherence. This suggests that self-attention might also be well-suited to modeling music. In musical composition and performance, however, relative timing is critically important. Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance (Shaw et al., 2018). This is impractical for long sequences such as musical compositions since their memory complexity for intermediate relative information is quadratic in the sequence length. We propose an algorithm that reduces their intermediate memory requirement to linear in the sequence length. This enables us to demonstrate that a Transformer with our modiï¬ed relative attention mechanism can generate minute- long compositions (thousands of steps, four times the length modeled in Oore et al. (2018)) with compelling structure, generate continuations that coherently elaborate on a given motif, and in a seq2seq setup generate accompaniments conditioned on melodies1. We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-Competition, and obtain state-of-the-art results on the latter.
# INTRODUCTION
A musical piece often consists of recurring elements at various levels, from motifs to phrases to sections such as verse-chorus. To generate a coherent piece, a model needs to reference elements that came before, sometimes in the distant past, repeating, varying, and further developing them to create contrast and surprise. Intuitively, self-attention (Parikh et al., 2016) appears to be a good match for this task. Self-attention over its own previous outputs allows an autoregressive model to access any part of the previously generated output at every step of generation. By contrast, recurrent neural networks have to learn to proactively store elements to be referenced in a ï¬xed size state or memory, potentially making training much more difï¬cult. We believe that repeating self-attention in multiple, successive layers of a Transformer decoder (Vaswani et al., 2017) helps capture the multiple levels at which self-referential phenomena exist in music.
In its original formulation, the Transformer relies on absolute position representations, using either positional sinusoids or learned position embeddings that are added to the per-position input repre- sentations. Recurrent and convolutional neural networks instead model position in relative terms: RNNs through their recurrence over the positions in their input, and CNNs by applying kernels that effectively choose which parameters to apply based on the relative position of the covered input representations.
âGoogle AI Resident. Correspondence to: Cheng-Zhi Anna Huang <annahuang@google.com> 1Samples are available for listening at
# https://storage.googleapis.com/music-transformer/index.html
1
Music has multiple dimensions along which relative differences arguably matter more than their absolute values; the two most prominent are timing and pitch. To capture such pairwise relations between representations, Shaw et al. (2018) introduce a relation-aware version of self-attention which they use successfully to modulate self-attention by the distance between two positions. We extend this approach to capture relative timing and optionally also pitch, which yields improvement in both sample quality and perplexity for JSB Chorales. As opposed to the original Transformer, samples from a Transformer with our relative attention mechanism maintain the regular timing grid present in this dataset. The model furthermore captures global timing, giving rise to regular phrases. The original formulation of relative attention (Shaw et al., 2018) requires O(L2D) memory where L is the sequence length and D is the dimension of the modelâs hidden state. This is prohibitive for long sequences such as those found in the Piano-e-Competition dataset of human-performed virtuosic, classical piano music. In Section 3.4, we show how to reduce the memory requirements to O(LD), making it practical to apply relative attention to long sequences.
The Piano-e-Competition dataset consists of MIDI recorded from performances of competition partic- ipants, bearing expressive dynamics and timing on the granularity of < 10 miliseconds. Discretizing time on a ï¬xed grid that would yield unnecessarily long sequences as not all events change on the same timescale. We hence adopt a sparse, MIDI-like, event-based representation from (Oore et al., 2018), allowing a minute of music with 10 milisecond resolution to be represented at lengths around 2K, as opposed to 6K to 18K on a ï¬xed-grid representation with multiple performance attributes. As position in sequence no longer corresponds to time, a priori it is not obvious that relative attention should work as well with such a representation. However, we will show in Section 4.2 that it does improve perplexity and sample quality over strong baselines.
We speculate that idiomatic piano gestures such as scales, arpeggios and other motifs all exhibit a certain grammar and recur periodically, hence knowing their relative positional distances makes it easier to model this regularity. This inductive bias towards learning relational information, as opposed to patterns based on absolute position, suggests that the Transformers with relative attention could generalize beyond the lengths it was trained on, which our experiments in Section 4.2.1 conï¬rm.
1.1 CONTRIBUTIONS
Domain contributions We show the ï¬rst successful use of Transformers in generating music that exhibits long-term structure. Before our work, LSTMs were used at timescales of 15s (~500 tokens) on the Piano-e-Competition dataset (Oore et al., 2018). Our work show that Transformers not only achieve state-of-the-art perplexity on modeling these complex expressive piano performances, but can also generate them at the scale of 60s (~2000 tokens) with remarkable internal consistency. Our relative attention mechanism is essential to the modelâs quality. In listening tests (see Section 4.2.3), samples from models with relative self-attention were perceived as more coherent than the baseline Transformer model from Vaswani et al. (2017). Relative attention not only enables Transformers to generate continuations that elaborate on a given motif, but also to generalize and generate in consistent fashion beyond the length it was trained on (see Section 4.2.1). In a seq2seq setup, Transformers can generate accompaniments conditioned on melodies, enabling users to interact with the model.
Algorithmic contributions The space complexity of the relative self attention mechanism in its original formulation (Shaw et al., 2018) made it infeasible to train on sequences of sufï¬cient length to capture long-range structure in longer musical compositions. Addressing this we present a crucial algorithmic improvement to the relative self attention mechanism, dramatically reducing its memory requirements from O(L2D) to O(LD). For example, we reduce the memory consumption per layer from 8.5 GB to 4.2 MB (per head from 1.1 GB to 0.52 MB) for a sequence of length L = 2048 and hidden-state size D = 512 (per head Dh = D H = 64, where number of heads is H = 8) (see Table 1), allowing us to use GPUs to train the relative self-attention Transformer on long sequences.
# 2 RELATED WORK
Sequence models have been the canonical choice for modeling music, from Hidden Markov Models to RNNs and Long Short Term Memory networks (e.g., Eck & Schmidhuber, 2002; Liang, 2016; Oore et al., 2018), to bidirectional LSTMs (e.g., Hadjeres et al., 2017). Successful application of sequential models to polyphonic music often requires serializing the musical score or performance
2
into a single sequence, for example by interleaving different instruments or voices. Alternatively, a 2D pianoroll-like representation (see A.1 for more details) can be decomposed into a sequence of multi-hot pitch vectors, and their joint probability distributions can be captured using Restricted Boltzmann Machines (Smolensky, 1986; Hinton et al., 2006) or Neural Autoregressive Distribution Estimators (NADE; Larochelle & Murray, 2011). Pianorolls are also image-like and can be modeled by CNNs trained either as generative adversarial networks (e.g., Dong et al., 2018) or as orderless NADEs (Uria et al., 2014; 2016) (e.g., Huang et al., 2017).
Lattner et al. (2018) use self-similarity in style-transfer fashion, where the self-similarity structure of a piece serves as a template objective for gradient descent to impose similar repetition structure on an input score. Self-attention can be seen as a generalization of self-similarity; the former maps the input through different projections to queries and keys, and the latter uses the same projection for both.
Dot-product self-attention is the mechanism at the core of the Transformer, and several recent works have focused on applying and improving it for image generation, speech, and summarization (Parmar et al., 2018; Povey et al., 2018; Liu et al., 2018). A key challenge encountered by each of these efforts is scaling attention computationally to long sequences. This is because the time and space complexity of self-attention grows quadratically in the sequence length. For relative self-attention (Shaw et al., 2018) this is particularly problematic as the space complexity also grows linearly in the dimension, or depth, of the per-position representations.
3 MODEL
3.1 DATA REPRESENTATION
We take a language-modeling approach to training generative models for symbolic music. Hence we represent music as a sequence of discrete tokens, with the vocabulary determined by the dataset. Datasets in different genres call for different ways of serializing polyphonic music into a single stream and also discretizing time.
The JSB Chorale dataset consists of four-part scored choral music, which can be represented as a matrix where rows correspond to voices and columns to time discretized to sixteenth notes. The matrixâs entries are integers that denote which pitch is being played. This matrix can than be serialized in raster-scan fashion by ï¬rst going down the rows and then moving right through the columns (see A.1 for more details). Compared to JSB Chorale, the piano performance data in the Piano-e-Competition dataset includes expressive timing information at much ï¬ner granularity and more voices. For the Piano-e-Competition we therefore use the performance encoding proposed by Oore et al. (2018) which consists of a vocabulary of 128 NOTE_ON events, 128 NOTE_OFFs, 100 TIME_SHIFTs allowing for expressive timing at 10ms and 32 VELOCITY bins for expressive dynamics (see A.2 for more details).
3.2 BACKGROUND: SELF-ATTENTION IN TRANSFORMER
The Transformer decoder is a autoregressive generative model that uses primarily self-attention mechanisms, and learned or sinusoidal position information. Each layer consists of a self-attention sub-layer followed by a feedforward sub-layer.
The attention layer ï¬rst transforms a sequence of L D-dimensional vectors X = (x1, x2, . . . , xL) into queries Q = XW Q, keys K = XW K, and values V = XW V , where W Q, W K, and W V are each D à D square matrices. Each L à D query, key, and value matrix is then split into H L à Dh parts or attention heads, indexed by h, and with dimension Dh = D H , which allow the model to focus on different parts of the history. The scaled dot-product attention computes a sequence of vector outputs for each head as
h hoach yh QhKht h Z" = Attention(Qâ, K",V") = Softmax (<-) v". (1) VD
The attention outputs for each head are concatenated and linearly transformed to get Z, a L by D dimensional matrix. A upper triangular mask ensures that queries cannot attend to keys later in the sequence. For other details of the Transfomer model, such as residual connections and learning rates, the reader can refer Vaswani et al. (2017). The feedforward (FF) sub-layer then takes the output Z
3
from the previous attention sub-layer, and performs two layers of point-wise dense layers on the depth D dimension, as shown in Equation 2. W1, W2, b1, b2 are weights and biases of those two layers.
FF(Z) = ReLU(ZW1 + b1)W2 + b2 (2)
# 3.3 RELATIVE POSITIONAL SELF-ATTENTION
As the Transformer model relies solely on positional sinusoids to represent timing information, Shaw et al. (2018) introduced relative position representations to allow attention to be informed by how far two positions are apart in a sequence. This involves learning a separate relative position embedding Er of shape (H, L, Dh), which has an embedding for each possible pairwise distance r = jk â iq between a query and key in position iq and jk respectively. The embeddings are ordered from distance âL + 1 to 0, and are learned separately for each head. In Shaw et al. (2018), the relative embeddings interact with queries and give rise to a Srel, an L Ã L dimensional logits matrix which modulates the attention probabilities for each head as:
(3) KT 4 gre RelativeAttention = Softmax (â) V. VDin
We dropped head indices for clarity. Our work uses the same approach to infuse relative distance information in the attention computation, while significantly improving upon the memory footprint for computing $â°!. For each head, Shaw et al2018) instantiate an intermediate tensor R of shape (L, L, Dp), containing the embeddings that correspond to the relative distances between all keys and queries. Q is then reshaped to an (L,1, Dp) tensor, and $7! = QR'P| This incurs a total space complexity of O(L?D), restricting its application to long sequences.
# 3.4 MEMORY EFFICIENT IMPLEMENTATION OF RELATIVE POSITION-BASED ATTENTION
We improve the implementation of relative attention by reducing its intermediate memory requirement from O(L?D) to O(LD), with example lengths shown in Table|1] We observe that all of the terms we need from QR" are already available if we directly multiply Q with Eâ, the relative position embedding. After we compute QE"', its (ig, r) entry contains the dot product of the query in position i, with the embedding of relative distance r. However, each relative logit (ig, j,) in the matrix S"@! from Equation[3|should be the dot product of the query in position 7, and the embedding of the relative distance 7, â ig, to match up with the indexing in QKâ T. We therefore need to âskewâ QE" soas to move the relative logits to their correct positions, as illustrated in Figure [tana detailed in the next section. The time complexity for both methods are O(L?D), while in practice our method is 6x faster at length 650.
Previous work: R For every (¢,, 4) pair, gather r= J, Multiply by @ from EY Relative Skew oo embed ings - i 1 el pf Ed? ourwork: QE", STAG L+1 Directly multiply byQ | Pad @ Reshape @ Slice | @ ws OO 9000 = 00010 0® 0 @ 4,
Figure 1: Relative global attention: the bottom row describes our memory-efï¬cient âskewingâ algorithm, which does not require instantiating R (top row, which is O(L2D)). Gray indicates masked or padded positions. Each color corresponds to a different relative distance.
2We assume that the batch size is 1 here. With a batch size of B, Q would be reshaped to (L, B, Dh) and Srel would be computed with a batch matrixâmatrix product.
4
Table 1: Comparing the overall relative memory complexity (intermediate relative embeddings (R or Er) + relative logits Srel), the maximal training lengths that can ï¬t in a GPU with 16GB memory assuming Dh = 64, and the memory usage per layer per head (in MB).
Implementation Shaw et al. (2018) O(L2D + L2) O(LD + L2) Ours Relative memory Maximal L L = 650 650 3500 108 + 1.7 0.17 + 1.7 L = 2048 L = 3500 1100 + 16 0.52 + 16 3100 + 49 0.90 + 49
3.4.1 THE âSKEWINGâ PROCEDURE
Hence, we propose a âskewingâ procedure to transform an absolute-by-relative (i,,7) indexed matrix into an absolute-by-absolute (i,, j),) indexed matrix. The row indices i, stay the same while the columns indices are shifted according to the following equation: j;, = r â (L â 1) + ig. For example in Figure[ithe upper right green dot in position (0,2) of QE" after skewing has a column index of 2â(3âT) +0 =0, resulting in a position of (0,0) in S"@.
We outline the steps illustrated in Figure 1 below.
1. Pad a dummy column vector of length L before the leftmost column.
2. Reshape the matrix to have shape (L+1, L). (This step assumes NumPy-style row-major ordering.)
3. Slice that matrix to retain only the last l rows and all the columns, resulting in a (L, L) matrix again, but now absolute-by-absolute indexed, which is the Srel that we need.
# 3.5 RELATIVE LOCAL ATTENTION
For very long sequences, the quadratic memory requirement of even baseline Transformer is imprac- tical. Local attention has been used for example in Wikipedia and image generation (Liu et al., 2018; Parmar et al., 2018) by chunking the input sequence into non-overlapping blocks. Each block then attends to itself and the one before, as shown by the smaller thumbnail on the top right corner of Figure 2.
To extend relative attention to the local case, we first note that the right block has the same configura- tion as in the global case (see Figure|I) but much smaller: (a)? (where MM is the number of blocks, and N be the resulting block length) as opposed to L?. The left block is unmasked with relative indices running from -1 (top right) to -2NV + 1 (bottom left). Hence, the learned Eâ for the local case has shape (2N â 1, N). Similar to the global case, we first compute QE"' and then use the following procedure to skew it to have the same indexing as QK ', as illustrated in Figure[2|
1. Pad a dummy column vector of length N after the rightmost column.
2. Flatten the matrix and then pad with a dummy row of length N â 1.
3. Reshape the matrix to have shape (N + 1, 2N â 1).
4. Slice that matrix to retain only the ï¬rst N rows and last N columns, resulting in a (N, N ) matrix.
QE," r Pad column Gee sre a 7 @ Reshape â> 4 7 -2N+1 -N e ow -N Pad N-1 after flatten t ) e@ rd) @ @
Figure 2: Relative local attention: the thumbnail on the right shows the desired conï¬guration for Srel. The âskewingâ procedure is shown from left to right.
5
4 EXPERIMENTS
J.S. BACH CHORALES
J.S. Bach chorales is a canonical dataset used for evaluating generative models for music 3 (e.g., Allan & Williams, 2005; Boulanger-Lewandowski et al., 2012; Liang, 2016; Hadjeres et al., 2016; Huang et al., 2017). It consists of score-based four-part chorales. We ï¬rst discretize the scores onto a 16th-note grid, and then serialize it by iterating through all the voices within a time step and then advancing time (see A.1 for more details). As there is a direct correspondence between position in sequence and position on the timing/instrument grid in a piece, adding relative position representations could make it easier to learn this grammar. We indeed see relative attention drastically improve negative log-likelihood (NLL) over baseline Transformer (Table 2). This improvement is also reï¬ected in sample quality. The samples now maintain the necessary timing/instrument grid, always advancing four steps before advancing in time. As local timing is maintained, the model is able to capture timing on a more global level, giving rise to regular phrasing, as shown in Figure 3.
Figure 3: Unconditioned samples from Transformer without (left) and with (right) relative self- attention. Green vertical boxes indicate the endings of (sub)phrases where cadences are held.
In addition to relative attention, we also explored enhancing absolute timing through concatenating instead of adding the sinusoids to the input embeddings. This allows the model to more directly learn its absolute positional mapping. This further improves performance for both the baseline and relative transformer (Table 2). We compare against COCONET as it is one of the best-performing models that has also been evaluated on the 16-note grid using the canonical dataset split. To directly compare, we re-evaluated COCONET to obtain note-wise losses on the validation set 4. For the Transformer models (abbreviated as TF), we implemented our attention mechanisms in the Tensor2Tensor framework (Vaswani et al., 2018). We use 8 heads, and keep the query, key (att) and value hidden size (hs) ï¬xed within a conï¬g. We tuned number of layers (L in {4,5,6}), attention hidden size (att in {256, 512}) and pointwise feedforward hidden size (ff in {512, 1024}).
4.1.1 GENERALIZING RELATIVE ATTENTION TO CAPTURE RELATIONAL INFORMATION
A musical event bears multiple attributes, such as timing, pitch, instrument etc. To capture more relational information, we extend relative attention to capture pairwise distances on additional attributes. We learn separate relative embeddings for timing Et and also pitch Ep. Et has entries corresponding to how many sixteenth notes apart are two positions, while Ep embeds the pairwise pitch interval. However this approach is not directly scalable beyond J.S. Bach Chorales because it involves explicitly gathering relative embeddings for Rt and Rp, resulting in a memory complexity of O(L2D) as in Shaw et al. (2018). This is due to relative information being computed based on content as opposed to content-invariant information such as position in sequence. It was sufï¬cient to add the extra timing signals to the ï¬rst layer, perhaps because it is closest to the raw input content. Here, the relative logits are computed from three terms, Srel = Skew(QEr) + Q(Rt + Rp) in contrast with other layers that only have one term, Skew(QEr).
4.2 PIANO-E-COMPETITION
We use the ï¬rst 6 years of of Piano-e-Competition because these years have corresponding MIDI data released 5, resulting in about 1100 pieces, split 80/10/10. Each piece is MIDI data capturing a classical piano performance with expressive dynamics and timing, encoded with the MIDI-like representation
3J.S. Bach chorales dataset: https://github.com/czhuang/JSB-Chorales-dataset 4Some earlier papers report frame-wise losses to compare to models such as RNN-RBM which model
âchordsâ. Coconet can be evaluated under note-wise or frame-wise losses.
# 5Piano-e-Competition dataset (competition history): http://www.piano-e-competition.com/
6
described in Section A.2. We trained on random crops of 2000-token sequences and employed two kinds of data augmentation: pitch transpositions uniformly sampled from {â3, â2, . . . , 2, 3} half-steps, and time stretches uniformly sampled from the set {0.95, 0.975, 1.0, 1.025, 1.05}.
We compare to Magentaâs PerformanceRNN (LSTM, which ï¬rst used this dataset) (Oore et al., 2018) and LookBack RNN (LSTM with attention) (Waite, 2016). LookBack RNN uses an input representation that requires monophonic music with barlines which is information that is not present in performed polyphonic music data, hence we simply adopt their architecture. Table 3 shows that Transformer-based architectures ï¬ts this dataset better than LSTM-based models.
Table 2: Note-wise validation NLL on J.S.Bach Chorales at 16th notes. Relative attention, more timing and relational information improve performance.
Model variation COCONET (CNN, chronological, 64L, 128 3x3f) COCONET (CNN, orderless, 64L, 128 3x3f) 0.436 ⤠0.238 6 Transformer (TF) baseline (Vaswani et al., 2017) (5L, 256hs, 256att, 1024ff, 8h) TF baseline + concat positional sinusoids (cps) TF baseline + concat positional sinusoids, instrument labels (cpsi) 0.417 0.398 0.370 Relative Transformer (Shaw et al., 2018) (5L, 512hs, 512att, 512ff, 256r, 8h) Relative Transformer + concat positional sinusoids, instrument labels (cpsi) Relative Transformer + cpsi + relative pitch and time 0.357 0.347 0.335
# Validation NLL
Table 3: Validation NLL for Piano-e-Competition dataset, with event-based representation with lengths L = 2048. Transformer with relative attention (with our efï¬cient formulation) achieves state-of-the-art performance.
Model variation PERFORMANCE RNN (LSTM) (3L, 1024hs) LSTM with attention (3L, 1024hs, 1024att) 1.969 1.959 Transformer (TF) baseline (6L, 256hs, 512att, 2048fs, 1024r, 8h) TF with local attention (Liu et al., 2018) (8L, 1024fs, 512bs) TF with relative global attention (our efï¬cient formulation) (6L, 2048fs, 1024r) TF with relative local attention (ours) (6L, 1024fs, 2048r, 512bs) 1.861 1.863 1.835 1.840
# Validation NLL
We implemented our attention mechanisms in the Tensor2Tensor framework (Vaswani et al., 2018), and used the default hyperparameters for training, with 0.1 learning rate, 0.1 dropout, and early stopping. We compare four architectures, varying on two axes: global versus local, and regular versus relative attention. We found that reducing the query and key hidden size (att) to half the hidden size (hs) works well and use this relationship for all of the models, while tuning on number of layers (L) and ï¬lter size (fs). We use block size (bs) 512 for local attention. We set the maximum relative distance to consider to half the training sequence length for relative global attention, and to the full memory length (which is two blocks) for relative local attention. Table 3 show that relative attention (global or local) outperforms regular self-attention (global or local). All else being equal, local and global attention perform similarly. Each though local attention does not see all the history at once, it can build up a larger receptive ï¬eld across layers. This can be an advantage in the future for training on much longer sequences, as local attention requires much less memory.
6COCONET is an instance of OrderlessNADE, an ensemble over orderings. The chronological loss evaluates the model as autoregressive, from left to right. We can also evaluate the model as a mixture, by averaging its losses over multiple random orderings. This is a lower bound on log-likelihood. It is intractable to sample from exactly but can be approximated through Gibbs sampling.
7
Figure 4: Comparing how models continue a prime (top left). Repeated motives and structure are seen in samples from Transformer with relative attention (top row), but less so from baseline Transformer (middle row) and PerformanceRNN (LSTM) (bottom row).
4.2.1 QUALITATIVE PRIMING EXPERIMENTS
When primed with an initial motif (Chopinâs Ãtude Op. 10, No. 5) shown in the top left corner of Figure 4, we see the models perform qualitatively differently. Transformer with relative attention elaborates the motif and creates phrases with clear contour which are repeated and varied. Baseline Transformer uses the motif in a more uniform fashion, while LSTM uses the motif initially but soon drifts off to other material. Note that the generated samples are twice as long as the training sequences. Relative attention was able to generalize to lengths longer than trained but baseline Transformer deteriorates beyond its training length. See Appendix C for visualizations of how the our Relative Transformer attends to past motifs.
4.2.2 HARMONIZATION: CONDITIONING ON MELODY
To explore the sequence-to-sequence setup of Transformers, we experimented with a conditioned generation task where the encoder takes in a given melody and the decoder has to realize the entire performance, i.e. melody plus accompani- ment. The melody is encoded as a sequence of tokens as in Waite (2016), quantized to a 100ms grid, while the decoder uses the performance encoding described in Section 3.1 (and further illustrated in A.2). We use relative attention on the de- coder side and show in Table 4 it also improves performance.
Table 4: Validation conditional NLL given groundtruth melody from Piano-e-Competition.
Model variation NLL Baseline Transformer Relative Transformer (ours) 2.066 1.786
_
# 4.2.3 HUMAN EVALUATIONS
To compare the perceived sample quality of models trained on the Piano-e-Competition dataset, and their ability to generate a continuation for a priming sequence, we carried out a listening test study comparing the baseline Transformer, our Transformer with relative-attention, PerformanceRNN (LSTM), and the validation set. Participants were presented with two musical excerpts (from two different models that were given the same priming sequence) and asked to rate which one is more musical on a Likert scale. For each model, we generated 10 samples each with a different prime, and compared them to three other models, resulting in 60 pairwise comparisons. Each pair was rated by 3 different participants, yielding a total of 180 comparisons.
Figure 5 shows the number of comparisons in which an excerpt from each model was selected as more musical. The improvement in sample quality from using relative attention over the baseline Transformer model was statistically signiï¬cant (see Appendix B for the analysis), both in aggregate and between the pair. Even though in aggregate LSTMs performed better in the study than the Transformer, despite having higher perplexity, but when compared against each other head to head, the results were not statistically signiï¬cant (see Table 5 in Appendix B).
8
Listening Test Real Data â (Ours) âelatve Transformer I â isâ = â Transformer + 0 10 20 30 40 50 60 Number of wins
Figure 5: Number of wins for each model. Error bars show standard deviations of mean.
# 5 CONCLUSION
In this work we demonstrated that the Transformer equipped with relative attention is very well-suited for generative modeling of symbolic music. The compelling long-term structure in the samples from our model leaves us enthusiastic about this direction of research. Moreover, the ability to expand upon a primer, in particular, suggests potential applications as creative tool.
The signiï¬cant improvement from relative attention highlights a shortcoming of the original Trans- former that might also limit its performance in other domains. Improving the Transformerâs ability to capture periodicity at various time scales, for instance, or relations between scalar features akin to pitch could improve time-series models. Our memory-efï¬cient implementation enables the appli- cation of relative attention to much longer sequences such as long texts or even audio waveforms, which signiï¬cantly broadens the range of problems to which it could be applied.
# 6 ACKNOWLEDGEMENT
We thank many colleagues from the Transformer (Vaswani et al., 2017) and Tensor2Tensor (Vaswani et al., 2018) papers for helping us along the way: Lukasz Kaiser, Ryan Sepassi, Niki Parmar and Llion Jones. Many thanks to Magenta and friends for their support throughout and for many insightful discussions: Jesse Engel, Adam Roberts, Fred Bertsch, Erich Elsen, Sander Dieleman, Sageev Oore, Carey Radebaugh, Natasha Jaques, Daphne Ippolito, Sherol Chan, Vida Vakilotojar, Dustin Tran, Ben Poole and Tim Cooijmans.
# REFERENCES
Moray Allan and Christopher KI Williams. Harmonising chorales by probabilistic inference. Advances in neural information processing systems, 17:25â32, 2005.
Nicolas Boulanger-Lewandowski, Yoshua Bengio, and Pascal Vincent. Modeling temporal dependen- cies in high-dimensional sequences: Application to polyphonic music generation and transcription. International Conference on Machine Learning, 2012.
Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang. Musegan: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment. In Proceedings of the AAAI Conference on Artiï¬cial Intelligence, 2018.
Douglas Eck and Juergen Schmidhuber. Finding temporal structure in music: Blues improvisation with lstm recurrent networks. In Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing, 2002.
Gaëtan Hadjeres, Jason Sakellariou, and François Pachet. Style imitation and chord invention in polyphonic music with exponential families. arXiv preprint arXiv:1609.05152, 2016.
Gaëtan Hadjeres, François Pachet, and Frank Nielsen. Deepbach: a steerable model for bach chorales generation. In International Conference on Machine Learning, pp. 1362â1371, 2017.
Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527â1554, 2006.
9
Cheng-Zhi Anna Huang, Tim Cooijmans, Adam Roberts, Aaron Courville, and Doug Eck. Coun- terpoint by convolution. In Proceedings of the International Conference on Music Information Retrieval, 2017.
Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, volume 1, pp. 2, 2011.
Stefan Lattner, Maarten Grachten, and Gerhard Widmer. Imposing higher-level structure in poly- phonic music generation using convolutional restricted boltzmann machines and constraints. Journal of Creative Music Systems, 2(2), 2018.
Feynman Liang. Bachbot: Automatic composition in the style of bach chorales. Masters thesis, University of Cambridge, 2016.
Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. Generating wikipedia by summarizing long sequences. In Proceedings of the International Conference on Learning Representations, 2018.
Sageev Oore, Ian Simon, Sander Dieleman, Douglas Eck, and Karen Simonyan. This time with feeling: Learning expressive musical performance. arXiv preprint arXiv:1808.03715, 2018.
Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016.
Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Åukasz Kaiser, Noam Shazeer, and Alexander Ku. Image transformer. In Proceedings of the International Conference on Machine Learning, 2018.
Daniel Povey, Hossein Hadian, Pegah Ghahremani, Ke Li, and Sanjeev Khudanpur. A time-restricted self-attention layer for ASR. In Proceddings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.
Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representa- tions. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 2, 2018.
Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Technical report, DTIC Document, 1986.
Benigno Uria, Iain Murray, and Hugo Larochelle. A deep and tractable density estimator. In International Conference on Machine Learning, pp. 467â475, 2014.
Benigno Uria, Marc-Alexandre Côté, Karol Gregor, Iain Murray, and Hugo Larochelle. Neural autoregressive distribution estimation. The Journal of Machine Learning Research, 17(1):7184â 7220, 2016.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz In Advances in Neural Information Kaiser, and Illia Polosukhin. Attention is all you need. Processing Systems, 2017.
Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Åukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. Tensor2tensor for neural machine translation. CoRR, abs/1803.07416, 2018.
Elliot Waite. Generating long-term structure in songs and stories. https://magenta. tensorflow.org/2016/07/15/lookback-rnn-attention-rnn, 2016.
10
# A DOMAIN-SPECIFIC REPRESENTATIONS
Adapting sequence models for music requires making decisions on how to serialize a polyphonic texture. The data type, whether score or performance, makes certain representations more natural for encoding all the information needed while still resulting in reasonable sequence lengths.
A.1 SERIALIZED INSTRUMENT/TIME GRID (J.S.BACH CHORALES)
The ï¬rst dataset, J.S. Bach Chorales, consists of four-part score-based choral music. The time resolution is sixteenth notes, making it possible to use a serialized grid-like representation. Figure 6 shows how a pianoroll (left) can be represented as a grid (right), following (Huang et al., 2017). The rows show the MIDI pitch number of each of the four voices, from top to bottom being soprano (S), alto (A), tenor (T ) and bass (B), while the columns is discretized time, advancing in sixteenth notes. Here longer notes such as quarter notes are broken down into multiple repetitions. To serialize the grid into a sequence, we interleave the parts by ï¬rst iterating through all the voices at time step 1, and then move to the next column, and then iterate again from top to bottom, and so on. The resulting sequence is S1A1T1B1S2A2T2B2..., where the subscript gives the time step. After serialization, the most common sequence length is 1024. Each token is represented as onehot in pitch.
S: 67, 67, 67, 67 A: 62, 62, 62, 62 T: 59, 59, 57, 57 B: 43, 43, 45, 45
Figure 6: The opening measure of BWV 428 is visualized as a pianoroll (left, where the x-axis is discretized time and y-axis is MIDI pitch number), and encoded in grid representation with sixteenth note resolution (right). The soprano and alto voices have quarter notes at pitches G4 (67) and D4 (62), the tenor has eighth notes at pitches B3 (59) and A3 (57), and the bass has eighth notes at pitches A2 (45) and G2 (43).
A.2 MIDI-LIKE EVENT-BASED (PIANO-E-COMPETITION)
The second dataset, Piano-e-Competition, consists of polyphonic piano performances with expressive timing and dynamics. The time resolution here is on the millisecond level, so a grid representation would result in sequences that are too long. Instead, the polyphonic performance is serialized into a sequence of one hot encoded events as proposed in (Oore et al., 2018).
First, the input MIDI ï¬les are preprocessed to extend note durations based on sustain pedal control events. The sustain pedal is considered to be down whenever a sustain control change is encountered with a value >= 64; the sustain pedal is then considered up after a control change with a value < 64. Within a period where the sustain pedal is down, the duration of each note is extended to either the beginning of the next note of the same pitch or the end of the sustain period, whichever happens ï¬rst. If the original duration extends beyond the time when the sustain pedal is down, that original duration is used.
Next, the MIDI note events are converted into a sequence from the following set of vocabulary: 128 NOTE_ON events for starting a note of with one of the 128 MIDI pitches, 128 NOTE_OFF events for ending a note with one of the 128 MIDI pitches, 100 TIME_SHIFT events representing forward time shifts in 10ms increments from 10ms to 1s, and 32 SET_VELOCITY events representing the velocity for future NOTE_ON events in the form of the 128 possible MIDI velocities quantized into 32 bins. An example performance encoding is illustrated in Figure 7.
11
â
SET_VELOCITY<80>, NOTE_ON<60> TIME_SHIFT<500>, NOTE_ON<64> TIME_SHIFT<500>, NOTE_ON<67> TIME_SHIFT<1000>, NOTE_OFF<60>, NOTE_OFF<64>, NOTE_OFF<67> TIME_SHIFT<500>, SET_VELOCITY<100>, NOTE_ON<65> TIME_SHIFT<500>, NOTE_OFF<65>
Figure 7: A snippet of a piano performance visualized as a pianoroll (left) and encoded as performance events (right, serialized from left to right and then down the rows). A C Major chord is arpeggiated with the sustain pedal active. At the 2-second mark, the pedal is released, ending all of the notes. At the 3-second mark, an F is played for .5 seconds. The C chord is played at velocity 80 and the F is played at velocity 100.
# B SUPPLEMENT OF LISTENING TEST
B.1 STUDY PROCEDURE
Participants were presented with two musical excerpts that shared a common priming sequence. For each excerpt, the priming sequence was played, followed by 2.5 seconds of silence, followed by the priming sequence again and a continuation of that sequence. The continuations were either sampled from one of the models or extracted from our validation set. We evaluated all possible pairs in the space of data and model samples, except from the same model. Each continuation had a length of 512 events using the encoding described in Section A.2. This corresponds to the length the models were trained on to remove the deteriorating effect that happens with baseline Transformer when asked to generate beyond the length it was trained on. Participants were asked which excerpt they thought was more musical on a Likert scale of 1 to 5. The pair is laid out left versus right, with 1 indicating the left is much more musical, 2 the left is slightly more musical, 3 being a tie, 4 being the right is slightly more musical, and 5 the right is much more musical. For each model, we generated 10 samples each with a different prime, and compared them to three other models, resulting in 60 pairwise comparisons. Each pair was rated by 3 different participants, yielding a total of 180 comparisons.
B.2 ANALYSIS
A Kruskal-Wallis H test of the ratings showed that there was a statistically signiï¬cant difference between the models: Ï2(2) = 63.84, p = 8.86e-14< 0.01. Table 5 show a post-hoc analysis on the comparisons within each pair, using the Wilcoxon signed-rank test for matched samples. Table 6 shows a post-hoc analysis of how well each model performed when compared to all pairs, and compares each modelâs aggregate against each other, using the MannâWhitney U test for independent samples. We use a Bonferroni correction on both to correct for multiple comparisons. The win and loss counts bucket scores 4, 5 and scores 1, 2 respectively, while the tieing score is 3.
Both within pairs and between aggregates, participants rated samples from our relative Transformer as more musical than the baseline Transformer with p < 0.01/6.
For within pairs, we did not observe a consistent statistically signiï¬cant difference between the other model pairs, baseline transformer versus LSTM and LSTM versus relative Transformer.
When comparing between aggregates, LSTM was overall perceived as more musical than baseline Transformer. Relative Transformer came a bit close to outperforming LSTM with p = 0.018. When we listen to the samples from the two, they do sound qualitatively different. Relative Transformer often exhibits much more structure (as shown in Figure 4), but the effects were probably less pronounced in the listening test because we used samples around 10s to 15s, which is half the length of those shown in Figure 4 to prevent the baseline Transformer from deteriorating. This weakens the comparison on long-term structure.
When compared to real music from the validation set, we see that in aggregates, real music was better than LSTM and baseline Transformer. There was no statistical signiï¬cant difference between real music and relative Transformer. This is probably again due to the samples being too short as real music is deï¬nitely still better.
12
Table 5: A post-hoc comparison of each pair on their pairwise comparisons with each other, using the Wilcoxon signed-rank test for matched samples. p value less than 0.01/6=0.0016 yields a statistically signiï¬cant difference and is marked by asterisk.
Pairs wins ties losses p value Our relative transformer Our relative transformer Baseline transformer Our relative transformer LSTM LSTM Baseline transformer real music Baseline transformer real music LSTM real music 11 23 18 5 6 6 4 1 1 3 0 2 15 6 11 22 24 22 0.243 0.0006* 0.204 0.006 0.0004* 0.0014
Table 6: Comparing each pair on their aggregates (comparisons with all models) in (wins, ties, losses), using the MannâWhitney U test for independent samples.
Model Model p value Our relative transformer Our relative transformer Our relative transformer Baseline transformer Baseline transformer LSTM (52, 6, 32) (52, 6, 32) Baseline transformer (52, 6, 32) LSTM (17, 4, 69) LSTM (17, 4, 69) (39, 6, 45) real music real music real music (61, 6, 23) (17, 4, 69) (39, 6, 45) (39, 6, 45) (61, 6, 23) (61, 6, 23) 0.020 1.26e-9* 0.018 3.70e-5* 6.73e-14* 4.06e-5*
# C VISUALIZING SOFTMAX ATTENTION
One advantage of attention-based models is that we can visualize its attention distribution 3. This gives us a glimpse of how the model might be building up recurring structures and how far it is attending back. The pianorolls in the visualizations below is a sample generated from Transformer with relative attention. Each ï¬gure shows a query (the source of all the attention lines) and previous memories being attended to (the notes that are receiving more softmax probabiliy is highlighted in). The coloring of the attention lines correspond to different heads and the width to the weight of the softmax probability.
Figure 8: This piece has a recurring triangular contour. The query is at one of the latter peaks and it attends to all of the previous high notes on the peak, all the way to beginning of the piece.
Figure 9: The query a note in the left-hand, and it attends to its immediate past neighbors and mostly to the earlier left hand chords, with most attention lines distributed in the lower half of the pianoroll.
13
# D PREVIOUS FIGURES FOR THE âSKEWINGâ PROCEDURE
Steps 1 Steps 2,3:
Figure 10: Relative global attention: Steps (from left to right) for âskewingâ an absolute-by-relative (iq, r) indexed matrix into absolute-by-absolute (iq, jk). Grey indicates self-attention masks or entries introduced by the skewing procedure. Positions with relative distance zero are marked. Entries outlined by purple are removed in step 3.
QE (N,2N-1) Steps 1,2 (N+1, 2N-1) Steps 3 (N,N) Steps 4 5 X MT | i nN Pad N-1 after flatten
Figure 11: Relative local attention: Steps (from left to right) for âskewingâ an (iq, r) indexed matrix with 2N â 1 ranged relative indices r into (iq, jk indexed. Shapes are indicated above the boxes, while indices in the boxes give relative distances.
14 | {
"id": "1609.05152"
} |
1809.04474 | Multi-task Deep Reinforcement Learning with PopArt | The reinforcement learning community has made great strides in designing
algorithms capable of exceeding human performance on specific tasks. These
algorithms are mostly trained one task at the time, each new task requiring to
train a brand new agent instance. This means the learning algorithm is general,
but each solution is not; each agent can only solve the one task it was trained
on. In this work, we study the problem of learning to master not one but
multiple sequential-decision tasks at once. A general issue in multi-task
learning is that a balance must be found between the needs of multiple tasks
competing for the limited resources of a single learning system. Many learning
algorithms can get distracted by certain tasks in the set of tasks to solve.
Such tasks appear more salient to the learning process, for instance because of
the density or magnitude of the in-task rewards. This causes the algorithm to
focus on those salient tasks at the expense of generality. We propose to
automatically adapt the contribution of each task to the agent's updates, so
that all tasks have a similar impact on the learning dynamics. This resulted in
state of the art performance on learning to play all games in a set of 57
diverse Atari games. Excitingly, our method learned a single trained policy -
with a single set of weights - that exceeds median human performance. To our
knowledge, this was the first time a single agent surpassed human-level
performance on this multi-task domain. The same approach also demonstrated
state of the art performance on a set of 30 tasks in the 3D reinforcement
learning platform DeepMind Lab. | http://arxiv.org/pdf/1809.04474 | Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt, Hado van Hasselt | cs.LG, stat.ML | null | null | cs.LG | 20180912 | 20180912 | 8 1 0 2
p e S 2 1 ] G L . s c [
1 v 4 7 4 4 0 . 9 0 8 1 : v i X r a
# Multi-task Deep Reinforcement Learning with PopArt
# Matteo Hessel
# Hubert Soyer
# Lasse Espeholt
# Wojciech Czarnecki
# Simon Schmitt
# Hado van Hasselt
# Abstract
The reinforcement learning community has made great strides in designing algorithms capable of exceeding human performance on speciï¬c tasks. These algorithms are mostly trained one task at the time, each new task requiring to train a brand new agent instance. This means the learning algo- rithm is general, but each solution is not; each agent can only solve the one task it was trained on. In this work, we study the problem of learning to master not one but multiple sequential- decision tasks at once. A general issue in multi-task learning is that a balance must be found between the needs of multiple tasks competing for the limited resources of a single learn- ing system. Many learning algorithms can get distracted by certain tasks in the set of tasks to solve. Such tasks appear more salient to the learning process, for instance because of the density or magnitude of the in-task rewards. This causes the algorithm to focus on those salient tasks at the expense of generality. We propose to automatically adapt the contribu- tion of each task to the agentâs updates, so that all tasks have a similar impact on the learning dynamics. This resulted in state of the art performance on learning to play all games in a set of 57 diverse Atari games. Excitingly, our method learned a single trained policy - with a single set of weights - that exceeds median human performance. To our knowledge, this was the ï¬rst time a single agent surpassed human-level per- formance on this multi-task domain. The same approach also demonstrated state of the art performance on a set of 30 tasks in the 3D reinforcement learning platform DeepMind Lab.
Introduction In recent years, the ï¬eld of deep reinforcement learning (RL) has enjoyed many successes. Deep RL agents have been ap- plied to board games such as Go (Silver et al. 2016) and chess (Silver et al. 2017), continuous control (Lillicrap et al. 2016; Duan et al. 2016), classic video-games such as Atari (Mnih et al. 2015; Hessel et al. 2018; Gruslys et al. 2018; Schulman et al. 2015; Schulman et al. 2017; Bacon, Harb, and Precup 2017), and 3D ï¬rst person environments (Mnih et al. 2016; Jaderberg et al. 2016). While the results are im- pressive, they were achieved on one task at the time, each task requiring to train a new agent instance from scratch.
Multi-task and transfer learning remain important open problems in deep RL. There are at least four different strains
Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
of multi-task reinforcement learning that have been explored in the literature: off-policy learning of many predictions about the same stream of experience (Schmidhuber 1990; Sutton et al. 2011; Jaderberg et al. 2016), continual learning in a sequence of tasks (Ring 1994; Thrun 1996; Thrun 2012; Rusu et al. 2016), distillation of task-speciï¬c experts into a single shared model (Parisotto, Ba, and Salakhutdinov 2015; Rusu et al. 2015; Schmitt et al. 2018; Teh et al. 2017), and parallel learning of multiple tasks at once (Sharma and Ravindran 2017; Caruana 1998). We will focus on the latter. Parallel multi-task learning has recently achieved remark- able success in enabling a single system to learn a large number of diverse tasks. The Importance Weighted Actor- Learner Architecture, henceforth IMPALA (Espeholt et al. 2018), achieved a 59.7% median human normalised score across 57 Atari games, and a 49.4% mean human normalised score across 30 DeepMind Lab levels. These results are state of the art for multi-task RL, but they are far from the human- level performance demonstrated by deep RL agents on the same domains, when trained on each task individually.
Part of why multi-task learning is much harder than sin- gle task learning is that a balance must be found between the needs of multiple tasks, that compete for the limited resources of a single learning system (for instance, for its limited representation capacity). We observed that the naive transposition of common RL algorithms to the multi-task setting may not perform well in this respect. More specif- ically, the saliency of a task for the agent increases with the scale of the returns observed in that task, and these may differ arbitrarily across tasks. This affects value-based algo- rithms such as Q-learning (Watkins 1989), as well as policy- based algorithms such as REINFORCE (Williams 1992).
The problem of scaling individual rewards appropriately is not novel, and has often been addressed through re- ward clipping (Mnih et al. 2015). This heuristic changes the agentâs objective, e.g., if all rewards are non-negative the algorithm optimises frequency of rewards rather than their cumulative sum. If the two objectives are sufï¬ciently well aligned, clipping can be effective. However, the scale of re- turns also depends on the rewardsâ sparsity. This implies that, even with reward clipping, in a multi-task setting the magnitude of updates can still differ signiï¬cantly between tasks, causing some tasks to have a larger impact on the learning dynamics than other equally important ones.
Note that both the sparsity and the magnitude of rewards collected in an environment are inherently non-stationary, because the agent is learning to actively maximise the total amount of rewards it can collect. These non-stationary learn- ing dynamics make it impossible to normalise the learning updates a priori, even if we would be willing to pour signif- icant domain knowledge into the design of the algorithm.
To summarise, in IMPALA the magnitude of updates re- sulting from experience gathered in each environment de- pends on: 1) the scale of rewards, 2) the sparsity of rewards, 3) the competence of the agent. In this paper we use PopArt normalisation (van Hasselt et al. 2016) to derive an actor- critic update invariant to these factors, enabling large per- formance improvements in parallel multi-task agents. We demonstrated this on the Atari-57 benchmark, where a sin- gle agent achieved a median normalised score of 110% and on DmLab-30, where it achieved a mean score of 72.8%.
Background Reinforcement learning (RL) is a framework for learning and decision-making under uncertainty {2018}. A learning system - the agent - must learn to inter- act with the environment it is embedded in, so as to max- imise a scalar reward signal. The RL problem is often for- malised as a Markov decision process (Bellman 1957): a tuple (S,A,p, 7), where S, A are finite sets of states and actions, p denotes the dynamics, such that p(r, sâ | s,a) is the probability of observing reward r and state sâ when ex- ecuting action a in state s, and y ⬠[0,1] discounts future rewards. The policy maps states s ⬠S to probability dis- tributions over actions 7(A|S = s), thus specifying the be- haviour of the agent. The return G; = Ri4i+yRizo+... is the 7-discounted sum of rewards collected by an agent from state S, onward under policy 7. We define action values and state values as q"(s,a) = E,[G, | Sy = s, Ay = a] and v"(s) = E,[G, | S; = s], respectively. The agentâs objec- tive is to find a policy to maximise such values.
In multi-task reinforcement learning, a single agent must learn to master N different environments T = {Di = (Si, Ai, pi, γ)}N i=1, each with its own distinct dynamics (Brunskill and Li 2013). Particularly interesting is the case in which the action space and transition dynamics are at least partially shared. For instance, the environments might follow the same physical rules, while the set of intercon- nected states and obtainable rewards differ. We may for- malise this as a single larger MDP, whose state space is S = {{(sj, i)}sj âSi}N i=1. The task index i may be latent, or may be exposed to the agentâs policy. In this paper, we use the task index at training time, for the value estimates used to compute the policy updates, but not at testing time: our algorithm will return a single general policy Ï(A|S) which is only function of the individual environmentâs state S and not conditioned directly on task index i. This is more challenging than the standard multi-task learning setup, which typically allows conditioning the model on the task index even at evaluation (Romera-Paredes et al. 2013; Collobert and Weston 2008), because our agents will need to infer what task to solve purely from the stream of raw observations and/or early rewards in the episode.
Actor-critic In our experiments, we use an actor-critic algorithm to learn a policy Ïη(A|S) and a value estimate vθ(s), which are both outputs of a deep neural network. We update the agentâs policy by using REINFORCE-style stochastic gradi- ent (Gt â vθ(St))âη log Ï(At|St) (Williams 1992), where vθ(St) is used as a baseline to reduce variance. In addition we use a multi-step return Gv t that bootstraps on the value estimates after a limited number of transitions, both to re- duce variance further and to allow us to update the policy before Gt fully resolves at the end of an episode. The value function vθ(S) is instead updated to minimise the squared loss with respect to the (truncated and bootstrapped) return:
âθ â ââθ(Gv t âvθ(St))2 = (Gv t âvθ(St))âθvθ(St) (1)
# âη â (GÏ t and GÏ
t â vθ(St))âη log(Ïη(At|St)) , (2) where Gv t are stochastic estimates of vÏ(St) and qÏ(St, At), respectively. Note how both updates depend lin- early on the scale of returns, which, as previously argued, de- pend on scale/sparsity of rewards, and agentâs competence.
Efï¬cient multi-task learning in simulation We use the IMPALA agent architecture (Espeholt et al. 2018), proposed for reinforcement learning in simulated en- vironments. In IMPALA the agent is distributed across mul- tiple threads, processes or machines. Several actors run on CPU generating rollouts of experience, consisting of a ï¬xed number of interactions (100 in our experiments) with their own copy of the environment, and then enqueue the roll- outs in a shared queue. Actors receive the latest copy of the networkâs parameters from the learner before each rollout. A single GPU learner processes rollouts from all actors, in batches, and updates a deep network. The network is a deep convolutional ResNet (He et al. 2015), followed by a LSTM recurrent layer (Hochreiter and Schmidhuber 1997). Policy and values are all linear functions of the LSTMâs output.
Despite the large network used for estimating policy Ïη and values vθ, the decoupled nature of the agent enables to process data very efï¬ciently: in the order of hundreds of thousands frames per second (Espeholt et al. 2018). The setup easily supports the multi-task setting by simply assign- ing different environments to each of the actors and then run- ning the single policy Ï(S|A) on each of them. The data in the queue can also be easily labelled with the task id, if use- ful at training time. Note that an efï¬cient implementation of IMPALA is available open-source 1, and that, while we use this agent for our experiments, our approach can be applied to other data parallel multi-task agents (e.g. A3C).
Off-policy corrections Because we use a distributed queue-based learning setup, the data consumed by the learning algorithm might be slightly off-policy, as the policy parameters change between acting and learning. We can use importance sampling corrections Ït = Ï(At|St)/µ(At|St) to compensate for this (Precup,
1www.github.com/deepmind/scalable agent
Sutton, and Singh 2000). In particular, we can write the n- ie me as Gp = Rigi + Riga +... + Y"0(Stin) = y+ yee Tk-t5., where 6) = Rigi + yu(Sr41) â ots ), and then apply appropriate importance sampling cor- ae to each error term to get G, = S)+ t yk- me i. 1 Pi )or- This i is unbiased, but has ae variance. To reduce variance, we can further clip most of the importance-sampling ratios, e.g., as c, = min(1, pz). This leads to the v-trace return (Espeholt et al. 2018)
ttn-1 k arm nse (Hea 0 k=t ist
k=t A very similar target was proposed for the ABQ(ζ) algo- rithm (Mahmood 2017), where the product Ïtλt was consid- ered and then the trace parameter λt was chosen to be adap- tive to lead to exactly the same behaviour that ct = Ïtλt = min(1, Ït). This shows that this form of clipping does not impair the validity of the off-policy corrections, in the same sense that bootstrapping in general does not change the se- mantics of a return. The returns used by the value and policy updates deï¬ned in Equation 1 and 2 are then
Gv This is the same algorithm as used by Espeholt et al.
(2018) in the experiments on the IMPALA architecture.
Adaptive normalisation In this section we use PopArt normalisation (van Hasselt et al. 2016), which was introduced for value-based RL, to de- rive a scale invariant algorithm for actor-critic agents. For simplicity, we ï¬rst consider the single-task setting, then we extend it to the multi-task setting (the focus of this work).
Scale invariant updates In order to normalise both baseline and policy gradient up- dates, we first parameterise the value estimate Vp,0,0(S) as the linear transformation of a suitably normalised value pre- diction ng(S). We further assume that the normalised value prediction is itself the output of a linear function, for in- stance the last fully connected layer of a deep neural net: Up,o0(8) =o + No(s) += a+ (w! fow,s}(s) +b) +H. $< âSâ
Up,o0(8) =o + No(s) += a+ (w! fow,s}(s) +b) +H. $< âSâ = no(s)
(5) As proposed by van Hasselt et al., µ and Ï can be updated so as to track mean and standard deviation of the values. First and second moments of can be estimated online as µt = (1âβ)µtâ1+βGv t ,
νt = (1âβ)νtâ1+β(Gv t )2,
( and then used to derive the estimated standard deviation as o1 = /% â p12. Note that the fixed decay rate 3 determines the horizon used to compute the statistics. We can then use the normalised value estimate mg (.S) and the statistics 4. and o to normalise the actor-critic loss, both in its value and pol- icy component; this results in the scale-invariant updates:
âθ â t â µ Ï â nθ(St) âθnθ(St) , (7)
An x (= f- no(5)) Vlog m (Alsi). (8)
If we optimise the new objective naively, we are at risk of making the problem harder: the normalised targets for val- ues are non-stationary, since they depend on statistics 4 and o. The PopArt normalisation algorithm prevents this, by up- dating the last layer of the normalised value network to pre- serve unnormalised value estimates v,,,,9, under any change in the statistics pp + ypâ anda > oâ: wwâ Cw, i ob+pâp a 3 . (9)
This extends PopArtâs scale-invariant updates to the actor- critic setting, and can help to make tuning hyperparameters easier, but it is not sufï¬cient to tackle the challenging multi- task RL setting that we are interested in this paper. For this, a single pair of normalisation statistics is not sufï¬cient.
Scale invariant updates for multi-task learning Let Di be an environment in some ï¬nite set T = {Di}N i=1, and let Ï(S|A) be a task-agnostic policy, that takes a state S from any of the environments Di, and maps it to a proba- bility distribution onto the shared action space A. Consider now a multi-task value function v(S) with N outputs, one for each task. We can use for v the same parametrisation as in Equation 5, but with vectors of statistics µ, Ï â RN , and a vector-valued function nθ(s) = (n1
Vu,0.0(9) = oOne(S)+p = 0O(Wfo\ pw.b}(S)+b) + (10) where W and b denote the parameters of the last fully con- nected layer in ng(s). Given a rollout {5;,,, Ax, Rint, generated under task-agnostic policy 7,(A|$) in environ- ment D;, we can adapt the updates in Equation Hlandfi}to provide scale invariant updates also in the multi-task setting:
âθ â Gv,i t â µi Ïi â ni θ(St) âθni θ(St) , (11)
âη â GÏ,i t â µi Ïi â ni θ(St) âη log Ïη(At|St) . (12)
Where the targets G·,i t use the value estimates for environ- ment Di for bootstrapping. For each rollout, only the ith head in the value net is updated, while the same policy network is updated irrespectively of the task, using the appropriate rescaling for updates to parameters η. As in the single-task case, when updating the statistics µ and Ï we also need to update W and b to preserve unnormalised outputs,
y 7, Tibi + hi = BY w,= Swi, b; = ââ_,â , i % (13)
where wi is the ith row of matrix W, and µi, Ïi, bi are the ith elements of the corresponding parameter vectors. Note that in all updates only the values, but not the policy, are con- ditioned on the task index, which ensures that the resulting agent can then be run in a fully task agnostic way, since val- ues are only used to reduce the variance of the policy updates at training time but not needed for action selection.
Table 1: Summary of results: aggregate scores for IMPALA and PopArt-IMPALA. We report median human normalised score for Atari-57, and mean capped human normalised score for DmLab-30. In Atari, Random and Human refer to whether the trained agent is evaluated with random or human starts. In DmLab-30 the test score includes evaluation on the held-out levels.
Atari-57 Atari-57 (unclipped) DmLab-30 Agent Random Human Random Human Train Test IMPALA 59.7% 28.5% 0.3% 1.0% 60.6% 58.4% PopArt-IMPALA 110.7% 101.5% 107.0% 93.7% 73.5% 72.8%
# Experiments
We evaluated our approach in two challenging multi-task benchmarks, Atari-57 and DmLab-30, based on Atari and DeepMind Lab respectively, and introduced by Espeholt et al. We also consider a new benchmark, consisting of the same games as Atari-57, but with the original unclipped rewards. We demonstrate state of the art performance on all benchmarks. To aggregate scores across many tasks, we normalise the scores on each task based on the scores of a human player and of a random agent on that same task (van Hasselt, Guez, and Silver 2016). All experiments use population-based training (PBT) to tune hyperparameters (Jaderberg et al. 2017). As in Espeholt et al., we report learn- ing curves as function of the number of frames processed by one instance of the tuning population, summed across tasks.
levels can also differ visually in non-trivial ways, as they include both natural environments and maze-like lev- els. Two levels (rooms collect good objects and rooms exploit deferred effects) have held out test versions, therefore Table 1 reports both train and test ag- gregate scores. We observed that the original IMPALA agent suffers from an artiï¬cial bottleneck in performance, due to the fact that some of the tasks cannot be solved with the ac- tion set available to the agent. As ï¬rst step, we thus ï¬x this issue by equipping it with a larger action set, resulting in a stronger IMPALA baseline than reported in the original pa- per. We also run multiple independent PBT experiments, to assess the variability of results across multiple replications.
# Atari-57 results
Domains Atari-57 is a collection of 57 classic Atari 2600 games. The ALE (Bellemare et al. 2013), exposes them as RL envi- ronments. Most prior work has focused on training agents for individual games (Mnih et al. 2015; Hessel et al. 2018; Gruslys et al. 2018; Schulman et al. 2015; Schulman et al. 2017; Bacon, Harb, and Precup 2017). Multi-task learning on this platform has not been as successful due to large num- ber of environments, inconsistent dynamics and very differ- ent reward structure. Prior work on multi-task RL in the ALE has therefore focused on smaller subsets of games (Rusu et al. 2015; Sharma and Ravindran 2017). Atari has a particu- larly diverse reward structure. Consequently, it is a perfect domain to fully assess how well can our agents deal with extreme differences in the scale of returns. Thus, we train all agents both with and without reward clipping, to com- pare performance degradation as returns get more diverse in the unclipped version of the environment. In both cases, at the end of training, we test agents both with random-starts (Mnih et al. 2015) and human-starts (Nair et al. 2015); ag- gregate results are reported in Table 1 accordingly.
Figures 1 and 2 show the median human normalised perfor- mance across the entire set of 57 Atari games in the ALE, when training agent with and without reward clipping, re- spectively. The curves are plotted as function of the total number of frames seen by each agent.
PopArt-IMPALA (orange line) achieves a median perfor- mance of 110% with reward clipping and a median perfor- mance of 101% in the unclipped version of Atari-57. Re- call that here we are measuring the median performance of a single trained agent across all games, rather than the me- dian over the performance of a set of individually trained agents as it has been more common in the Atari domain. To our knowledge, both agents are the ï¬rst to surpass median human performance across the entire set of 57 Atari games. The IMPALA agent (blue line) performs much worse. The baseline barely reaches 60% with reward clipping, and the median performance is close to 0% in the unclipped setup. The large decrease in the performance of the baseline IM- PALA agent once clipping is removed is in stark contrast with what we observed for PopArt-IMPALA, that achieved almost the same performance in the two training regimes.
DmLab-30 is a collection of 30 visually rich, par- tially observable RL environments (Beattie et al. 2016). This benchmark has strong internal consistency (all lev- els are played with a ï¬rst person camera in a 3D envi- ronment with consistent dynamics). Howevere, the tasks themselves are quite diverse, and were designed to test dis- tinct skills in RL agents: among these navigation, mem- ory, planning, laser-tagging, and language grounding. The
Since the level-speciï¬c value predictions used by multi- task PopArt effectively increase the capacity of the net- work, we also ran an additional experiment to disentan- gle the contribution of the increased network capacity from the contribution of the adaptive normalisation. For this pur- pose, we train a second baseline, that uses level speciï¬c value predictions, but does not use PopArt to adaptively normalise the learning updates. The experiments show that such MultiHead-IMPALA agent (pink line) actually per- forms slightly worse than the original IMPALA both with
Atari-57 (clipped) 120 ââ PopArt-IMPALA ââ MultiHead-IMPALA 100 ââ IMPALA s iS é Median Human Normalised Score s 8 0 2 4 6 8 10 12 Environment Frames
Figure 1: Atari-57 (reward clipping). Median human nor- malised score across all Atari levels, as function of the total number of frames seen by the agents across all levels. We compare PopArt-IMPALA to IMPALA and to an additional baseline, MultiHead-IMPALA, that uses task-speciï¬c value predictions but no adaptive normalisation. All three agent are trained with the clipped reward scheme.
and without clipping, conï¬rming that the performance boost of PopArt-IMPALA is indeed due to the adaptive rescaling. We highlight that in our experiments a single instance of multi-task PopArt-IMPALA has processed the same amount of frames as a collection of 57 expert DQN agents (57 à 200 M = 1.14 à 1010), while achieving better performance. Despite the large CPU requirements, on a cloud service, training multi-task PopArt-IMPALA can also be competi- tive in terms of costs, since it exceeds the performance of a vanilla-DQN in just 2.5 days, with a smaller GPU footprint.
Normalisation statistics It is insightful to observe the different normalisation statis- tics across games, and how they adapt during training. Fig- ure 3 (top row) plots the shift µ for a selection of Atari games, in the unclipped training regime. The scale Ï is visualised in the same ï¬gure by shading the area in the range [µ â Ï, µ + Ï]. The statistics differ by orders of magnitude across games: in crazy climber the shift ex- ceeds 2500, while in bowling it never goes above 15. The adaptivity of the proposed normalisation emerges clearly in crazy climber and qbert, where the statistics span multiple orders of magnitude during training. The bottom row in Figure 3 shows the corresponding agentâs undis- counted episode return: it follows the same patterns as the statistics (with differences in magnitude due to discounting). Finally note how the statistics can even track the instabilities in the agentâs performance, as in qbert.
DmLab-30 results Figure 4 shows, as a function of the total number of frames processed by each agent, the mean human normalised per- formance across all 30 DeepMind Lab levels, where each levelâs score is capped at 100% . For all agents, we ran three
Atari-57 (unclipped) 8 ââ PopArt-IMPALA ââ MultiHead-IMPALA â IMPALA y iS 2 2 S 8 é s s s Median Human Normalised Score ° -20 0 2 4 6 8 10 Environment Frames
Figure 2: Atari-57 (unclipped): Median human normalised score across all Atari levels, as a function of the total num- ber of frames seen by the agents across all levels. We here compare the same set of agents as in Figure 1, but now all agents are trained without using reward clipping. The ap- proximately ï¬at lines corresponding to the baselines mean no learning at all on at least 50% of the games.
breakout crazy_climber qbert seaquest } wv
Figure 3: Normalisation statistics: Top: learned statistics, without reward clipping, for four distinct Atari games. The shaded region is [µâÏ, µ+Ï]. Bottom: undiscounted returns.
independent PBT experiments. In Figure 4 we plot the learn- ing curves for each experiment and, for each agent, ï¬ll in the area between best and worse experiment.
Compared to the original paper, our IMPALA baseline uses a richer action set, that includes more possible hori- zontal rotations, and vertical rotations (details in Appendix). Fine-grained horizontal control is useful on lasertag levels, while vertical rotations are necessary for a few psychlab levels. Note that this new baseline (solid blue in Figure 4) performs much better than the original IM- PALA agent, which we also train and report for complete- ness (dashed blue). Including PopArt normalisation (in or- ange) on top of our baseline results in largely improved scores. Note how agents achieve clearly separated perfor- mance levels, with the new action set dominating the origi- nal paperâs one, and with PopArt-IMPALA dominating IM- PALA for all three replications of the experiment.
12
DmLab-30 g g 8 Mean Capped Human Normalised Score Ss ly" â PopArt-IMPALA 10 & â IMPALA === IMPALA-original 0 2 4 6 8 10 Environment Frames
Figure 4: DmLab-30. Mean capped human normalised score of IMPALA (blue) and PopArt-IMPALA (orange), across the DmLab-30 benchmark as function of the num- ber of frames (summed across all levels). Shaded region is bounded by best and worse run among 3 PBT experiments. For reference, we also plot the performance of IMPALA with the limited action set from the original paper (dashed).
# Extensions
In this section, we explore the combination of the proposed PopArt-IMPALA agent with pixel control (Jaderberg et al. 2016), to further improve data efï¬ciency, and make train- ing IMPALA-like agents on large multi-task benchmarks cheaper and more practical. Pixel control is an unsuper- vised auxiliary task introduced to help learning good state representations. As shown in Figure 5, the combination of PopArt-IMPALA with pixel control (red line) allows to match the ï¬nal performance of the vanilla PopArt-IMPALA (orange line) with a fraction of the data (â¼ 2B frames). This is on top of the large improvement in data efï¬ciency already provided by PopArt, meaning that the pixel con- trol augmented PopArt-IMPALA needs less than 1/10-th of the data to match our own IMPALA baselineâs performance (and 1/30-th of the frames to match the original published IMPALA). Importantly, since both PopArt and Pixel Con- trol only add a very small computational cost, this improve- ment in data efï¬ciency directly translates in a large reduction of the cost of training IMPALA agents on large multi-task benchmarks. Note, ï¬nally, that other orthogonal advances in deep RL could also be combined to further improve perfor- mance, similarly to what was done by Rainbow (Hessel et al. 2018), in the context of value-based reinforcement learning.
# Implementation notes
We implemented all agents in TensorFlow. For each batch of rollouts processed by the learner, we average the Gv t tar- gets within a rollout, and for each rollout in the batch we perform one online update of PopArtâs normalisation statis- tics with decay β = 3 à 10â4. Note that β didnât require any tuning. To prevent numerical issues, we clip the scale Ï in the range [0.0001, 1e6]. We do not back-propagate gra- dients into µ and Ï, exclusively updated as in Equation 6.
DmLab-30 80 3 g 8 PopArt-IMPALA@10B 8 Mean Capped Human Normalised Score 8 & 6 ââ PopArt-IMPALA â Pixel-PopArt-IMPALA 0 O21 2 4 6 8 10 Environment Frames 109
Figure 5: DmLab-30 (with pixel control). Mean capped hu- man normalised score of PopArt-IMPALA with pixel con- trol (red), across the DmLab-30 benchmark as function of the total number of frames across all tasks. Shaded region is bounded by best and worse run among 3 PBT experiments. Dotted lines mark the point where Pixel-PopArt-IMPALA matches PopArt-IMPALA and the two IMPALA baselines.
The weights W of the last layer of the value function are updated according to Equation 13 and 11. Note that we ï¬rst apply the actor-critic updates (11), then update the statis- tics (6), ï¬nally apply output preserving updates (13). For more just-in-time rescaling of updates we can invert this or- der, but this wasnât necessary. As anticipated, in all experi- ments we used population-based training (PBT) to adapt hy- perparameters during training (Jaderberg et al. 2017). As in the IMPALA paper, we use PBT to tune learning rate, entropy cost, the optimiserâs epsilon, andâin the Atari experimentsâthe max gradient norm. In Atari- 57 we used populations of 24 instances, in DmLab-30 just 8 instances. All hyperparameters are reported in the Appendix.
# Discussion
In this paper we propose a scale-invariant actor-critic al- gorithm that enables signiï¬cantly improved performance in multi-task reinforcement learning settings. Being able to ac- quire knowledge about a wide range of facts and skills has been long considered an essential feature for an RL agent to demonstrate intelligent behaviour (Sutton et al. 2011; Degris and Modayil 2012; Legg and Hutter 2007). To ask our algorithms to master multiple tasks is therefore a natural step as we progress towards increasingly powerful agents.
The wide-spread adoption of deep learning in RL is quite timely in this regard, since sharing parts of a neural net- work across multiple tasks is also a powerful way of build- ing robust representations. This is particularly important for RL, because rewards on individual tasks can be sparse, and therefore sharing representations across tasks can be vital to bootstrap learning. Several agents (Jaderberg et al. 2016; Lample and Chaplot 2016; Shelhamer et al. 2016; Mirowski et al. 2016) demonstrated this by improving performance on a single external task by learning off-policy about auxiliary
tasks deï¬ned on the same stream of experience (e.g. pixel control, immediate reward prediction or auto-encoding).
Multi-task learning, as considered in this paper, where we get to execute, in parallel, the policies learned for each task, has potential additional beneï¬ts, including deep exploration (Osband et al. 2016), and policy composition (Mankowitz et al. 2018; Todorov 2009). By learning on-policy about tasks, it may also be easier to scale to much more diverse tasks: if we only learn about some task off-policy from experience generated pursuing a very different one, we might never ob- serve any reward. A limitation of our approach is that it can be complicated to implement parallel learning outside of simulation, but recent work on parallel training of robots (Levine et al. 2016) suggests that this is not necessarily an insurmountable obstacle if sufï¬cient resources are available. Adoption of parallel multi-task RL has up to now been fairly limited. That the scaling issues considered in this pa- per, may have been a factor in the limited adoption is in- dicated by the wider use of this kind of learning in su- pervised settings (Johnson et al. 2017; Lu et al. 2016; Misra et al. 2016; Hashimoto et al. 2016), where loss func- tions are naturally well scaled (e.g. cross entropy), or can be easily scaled thanks to the stationarity of the training dis- tribution. We therefore hope and believe that the work pre- sented here can enable more research on multi-task RL.
We also believe that PopArtâs adaptive normalisation can be combined with other research in multi-task rein- forcement learning, that previously did not scale as effec- tively to large numbers of diverse tasks. We highlight as potential candidates policy distillation (Parisotto, Ba, and Salakhutdinov 2015; Rusu et al. 2015; Schmitt et al. 2018; Teh et al. 2017) and active sampling of the task distribution the agent trains on (Sharma and Ravindran 2017). The com- bination of PopArt-IMPALA with active sampling might be particularly promising since it may allow a more efï¬cient use of the parallel data generation, by focusing it on the task most amenable for learning. Elastic weight consolidation (Kirkpatrick et al. 2017) and other work from the contin- ual learning literature (Ring 1994; Mcclelland, Mcnaughton, and OâReilly 1995) might also be adapted to parallel learn- ing setups to reduce interference (French 1999) among tasks.
References [Bacon, Harb, and Precup 2017] Bacon, P.; Harb, J.; and Precup, D. 2017. The option-critic architecture. AAAI Conference on Artiï¬cial Intelligence.
[Beattie et al. 2016] Beattie, C.; Leibo, J. Z.; Teplyashin, D.; Ward, T.; Wainwright, M.; K¨uttler, H.; Lefrancq, A.; Green, S.; Vald´es, V.; Sadik, A.; Schrittwieser, J.; Anderson, K.; York, S.; Cant, M.; Cain, A.; Bolton, A.; Gaffney, S.; King, H.; Hassabis, D.; Legg, S.; and Petersen, S. 2016. Deepmind lab. CoRR abs/1612.03801.
[Bellemare et al. 2013] Bellemare, M. G.; Naddaf, Y.; Veness, J.; and Bowling, M. 2013. The arcade learning environment: An eval- uation platform for general agents. JAIR.
[Bellman 1957] Bellman, R. 1957. A markovian decision process. Journal of Mathematics and Mechanics.
[Brunskill and Li 2013] Brunskill, E., and Li, L. ple complexity of multi-task reinforcement abs/1309.6821. 2013. learning. Sam- CoRR
[Caruana 1998] Caruana, R. 1998. Multitask learning. In Learning to learn.
[Collobert and Weston 2008] Collobert, R., and Weston, J. 2008. A uniï¬ed architecture for natural language processing: Deep neural networks with multitask learning. In ICML.
2012. Scaling-up knowledge for a cognizant robot. In AAAI Spring Sym- posium: Designing Intelligent Robots.
[Duan et al. 2016] Duan, Y.; Chen, X.; Houthooft, R.; Schulman, J.; and Abbeel, P. 2016. Benchmarking deep reinforcement learning for continuous control. In ICML.
[Espeholt et al. 2018] Espeholt, L.; Soyer, H.; Munos, R.; Si- monyan, K.; Mnih, V.; Ward, T.; Doron, Y.; Firoiu, V.; Harley, T.; Dunning, I.; Legg, S.; and Kavukcuoglu, K. 2018. Impala: Scal- able distributed deep-rl with importance weighted actor-learner ar- chitectures. In ICML.
[French 1999] French, R. M. 1999. Catastrophic forgetting in con- nectionist networks. Trends in cognitive sciences.
[Gruslys et al. 2018] Gruslys, A.; Azar, M. G.; Bellemare, M. G.; and Munos, R. 2018. The reactor: A sample-efï¬cient actor-critic architecture. ICLR.
[Hashimoto et al. 2016] Hashimoto, K.; Xiong, C.; Tsuruoka, Y.; and Socher, R. 2016. A joint many-task model: Growing a neural network for multiple NLP tasks. CoRR abs/1611.01587.
[He et al. 2015] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015. arXiv preprint Deep residual learning for image recognition. arXiv:1512.03385.
[Hessel et al. 2018] Hessel, M.; Modayil, J.; van Hasselt, H.; Schaul, T.; Ostrovski, G.; Dabney, W.; Horgan, D.; Piot, B.; Azar, M. G.; and Silver, D. 2018. Rainbow: Combining improvements in deep reinforcement learning. AAAI Conference on Artiï¬cial In- telligence.
[Hochreiter and Schmidhuber 1997] Hochreiter, S., and Schmidhu- ber, J. 1997. Long short-term memory. Neural computation.
[Jaderberg et al. 2016] Jaderberg, M.; Mnih, V.; Czarnecki, W. M.; Schaul, T.; Leibo, J. Z.; Silver, D.; and Kavukcuoglu, K. 2016. Reinforcement learning with unsupervised auxiliary tasks. CoRR abs/1611.05397.
[Jaderberg et al. 2017] Jaderberg, M.; Dalibard, V.; Osindero, S.; Czarnecki, W. M.; Donahue, J.; Razavi, A.; Vinyals, O.; Green, T.; Dunning, I.; Simonyan, K.; Fernando, C.; and Kavukcuoglu, K. 2017. Population based training of neural networks. CoRR abs/1711.09846.
[Johnson et al. 2017] Johnson, M.; Schuster, M.; Le, Q. V.; Krikun, M.; Wu, Y.; Chen, Z.; Thorat, N.; Vi´egas, F. B.; Wattenberg, M.; Corrado, G.; Hughes, M.; and Dean, J. 2017. Googleâs multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics 5. [Kirkpatrick et al. 2017] Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A. A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; Hassabis, D.; Clopath, C.; Kumaran, D.; and Hadsell, R. 2017. Overcoming catastrophic forgetting in neural networks. PNAS.
[Lample and Chaplot 2016] Lample, G., and Chaplot, D. S. 2016. Playing FPS games with deep reinforcement learning. CoRR abs/1609.05521.
[Legg and Hutter 2007] Legg, S., and Hutter, M. 2007. Universal intelligence: A deï¬nition of machine intelligence. Minds Mach. [Levine et al. 2016] Levine, S.; Pastor, P.; Krizhevsky, A.; and Quillen, D. 2016. Learning hand-eye coordination for robotic grasping with large-scale data collection. In ISER.
[Lillicrap et al. 2016] Lillicrap, T.; Hunt, J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; and Wierstra, D. 2016. Continuous control with deep reinforcement learning. In ICLR.
[Lu et al. 2016] Lu, Y.; Kumar, A.; Zhai, S.; Cheng, Y.; Javidi, T.; and Feris, R. S. 2016. Fully-adaptive feature sharing in multi-task networks with applications in person attribute classiï¬cation. CoRR abs/1611.05377.
[Mahmood 2017] Mahmood, A. 2017. Incremental off-policy rein- forcement learning algorithms. Ph.D. UAlberta.
[Mankowitz et al. 2018] Mankowitz, D. J.; Z´ıdek, A.; Barreto, A.; Horgan, D.; Hessel, M.; Quan, J.; Oh, J.; van Hasselt, H.; Silver, D.; and Schaul, T. 2018. Unicorn: Continual learning with a universal, off-policy agent. CoRR abs/1802.08294.
[Mcclelland, Mcnaughton, and OâReilly 1995] Mcclelland, J. L.; Mcnaughton, B. L.; and OâReilly, R. C. 1995. Why there are com- plementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review.
[Mirowski et al. 2016] Mirowski, P.; Pascanu, R.; Viola, F.; Soyer, H.; Ballard, A. J.; Banino, A.; Denil, M.; Goroshin, R.; Sifre, L.; Kavukcuoglu, K.; Kumaran, D.; and Hadsell, R. 2016. Learning to navigate in complex environments. CoRR abs/1611.03673.
I.; Shrivastava, A.; Gupta, A.; and Hebert, M. 2016. Cross-stitch networks for multi-task learning. CoRR abs/1604.03539.
[Mnih et al. 2015] Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G.; Petersen, S.; Beattie, C.; Sadik, A.; Antonoglou, I.; King, H.; Kumaran, D.; Wierstra, D.; Legg, S.; and Hassabis, D. 2015. Human-level control through deep rein- forcement learning. Nature.
[Mnih et al. 2016] Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asynchronous methods for deep reinforcement learning. In ICML. [Nair et al. 2015] Nair, A.; Srinivasan, P.; Blackwell, S.; Alcicek, C.; Fearon, R.; De Maria, A.; Panneershelvam, V.; Suleyman, M.; Beattie, C.; Petersen, S.; Legg, S.; Mnih, V.; Kavukcuoglu, K.; and Silver, D. 2015. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296.
[Osband et al. 2016] Osband, I.; Blundell, C.; Pritzel, A.; and Van Roy, B. 2016. Deep exploration via bootstrapped DQN. In NIPS.
[Parisotto, Ba, and Salakhutdinov 2015] Parisotto, E.; Ba, L. J.; and Salakhutdinov, R. 2015. Actor-mimic: Deep multitask and transfer reinforcement learning. CoRR abs/1511.06342.
[Precup, Sutton, and Singh 2000] Precup, D.; Sutton, R. S.; and Singh, S. P. 2000. Eligibility traces for off-policy policy evalu- ation. In ICML.
[Ring 1994] Ring, M. 1994. Continual learning in reinforcement environments.
[Romera-Paredes et al. 2013] Romera-Paredes, B.; Aung, H.; Bianchi-Berthouze, N.; and Pontil, M. 2013. Multilinear multitask learning. In ICML.
[Rusu et al. 2015] Rusu, A. A.; Colmenarejo, S. G.; G¨ulc¸ehre, C¸ .; Desjardins, G.; Kirkpatrick, J.; Pascanu, R.; Mnih, V.; Kavukcuoglu, K.; and Hadsell, R. 2015. Policy distillation. CoRR abs/1511.06295.
[Rusu et al. 2016] Rusu, A. A.; Rabinowitz, N. C.; Desjardins, G.; Soyer, H.; Kirkpatrick, J.; Kavukcuoglu, K.; Pascanu, R.; and Had- sell, R. 2016. Progressive neural networks. CoRR abs/1606.04671.
[Schmidhuber 1990] Schmidhuber, J. 1990. An on-line algorithm for dynamic reinforcement learning and planning in reactive envi- ronments. In IJCNN.
[Schmitt et al. 2018] Schmitt, S.; Hudson, J. J.; Z´ıdek, A.; Osin- dero, S.; Doersch, C.; Czarnecki, W. M.; Leibo, J. Z.; K¨uttler, H.; Zisserman, A.; Simonyan, K.; and Eslami, S. M. A. 2018. Kick- starting deep reinforcement learning. CoRR abs/1803.03835.
[Schulman et al. 2015] Schulman, J.; Levine, S.; Moritz, P.; Jordan, M. I.; and Abbeel, P. 2015. Trust region policy optimization. CoRR abs/1502.05477.
[Schulman et al. 2017] Schulman, J.; Wolski, F.; Dhariwal, P.; Rad- ford, A.; and Klimov, O. 2017. Proximal policy optimization algo- rithms. CoRR abs/1707.06347.
[Sharma and Ravindran 2017] Sharma, S., and Ravindran, B. 2017. CoRR Online multi-task learning using active sampling. abs/1702.06053.
[Shelhamer et al. 2016] Shelhamer, E.; Mahmoudieh, P.; Argus, M.; and Darrell, T. 2016. Loss is its own reward: Self-supervision for reinforcement learning. CoRR abs/1612.07307.
[Silver et al. 2016] Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; Dieleman, S.; Grewe, D.; Nham, J.; Kalchbrenner, N.; Sutskever, I.; Lillicrap, T.; Leach, M.; Kavukcuoglu, K.; Graepel, T.; and Hassabis, D. 2016. Mastering the game of Go with deep neural networks and tree search. Nature. J.; Antonoglou, I.; Lai, M.; Guez, A.; Lanctot, M.; Sifre, L.; Kumaran, D.; Graepel, T.; Lillicrap, T. P.; Simonyan, K.; and Hassabis, D. 2017. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. CoRR abs/1712.01815. [Sutton and Barto 2018] Sutton, R. S., and Barto, A. G. 2018. Re-
inforcement Learning: An Introduction. MIT press.
[Sutton et al. 2011] Sutton, R. S.; Modayil, J.; Delp, M.; Degris, T.; Pilarski, P. M.; White, A.; and Precup, D. 2011. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In AAMAS.
[Sutton et al. 2014] Sutton, R. S.; Mahmood, A. R.; Precup, D.; and van Hasselt, H. 2014. A new q(λ) with interim forward view and Monte Carlo equivalence. In ICML.
[Teh et al. 2017] Teh, Y. W.; Bapst, V.; Czarnecki, W. M.; Quan, J.; Kirkpatrick, J.; Hadsell, R.; Heess, N.; and Pascanu, R. 2017. Distral: Robust multitask reinforcement learning. CoRR abs/1707.04175.
[Thrun 1996] Thrun, S. 1996. Is learning the n-th thing any easier than learning the ï¬rst? In NIPS.
[Thrun 2012] Thrun, S. 2012. Explanation-based neural network learning: A lifelong learning approach. Springer.
[Todorov 2009] Todorov, E. 2009. Compositionality of optimal control laws. In NIPS.
[van Hasselt et al. 2016] van Hasselt, H.; Guez, A.; Hessel, M.; Mnih, V.; and Silver, D. 2016. Learning values across many or- ders of magnitude. In NIPS.
[van Hasselt, Guez, and Silver 2016] van Hasselt, H.; Guez, A.; and Silver, D. 2016. Deep reinforcement learning with double Q- learning. In AAAI Conference on Artiï¬cial Intelligence.
[Watkins 1989] Watkins, C. J. C. H. 1989. Learning from Delayed Rewards. Ph.D. Dissertation, Kingâs College, Cambridge, England. [Williams 1992] Williams, R. 1992. Simple statistical gradient- learning.
following algorithms for connectionist reinforcement Mach. Learning.
Appendix In this Appendix we report additional details about the re- sults presented in the main text, as well as present addi- tional experiments on the DmLab-30 benchmark. We also report the breakdown per level of the scores of IMPALA and PopArt-IMPALA on the Atari-57 and DmLab-30 bench- marks. Finally, we report the hyperparameters used to train the baseline agents as well as PopArt-IMPALA. These hy- perparameters are mostly the same as in Espeholt et al., but we report them for completeness and to ease reproducibility.
Hyper-parameter tuning In our experiments we used Population-Based Training (PBT) to tune hyper-parameters. In our DmLab-30 exper- iments, however, we used smaller populations than in the original IMPALA paper. For completeness, we also report here the results of running PopArt-IMPALA and IMPALA with the larger population size used by Espeholt et al. Due to the increased cost of using larger populations, in this case we will only report one PBT tuning experiment per agent, rather than the 3 reported in the main text.
The learning curves for both IMPALA and PopArt- IMPALA are shown in Figure 6, together with horizon- tal dashed lines marking average ï¬nal performance of the agents trained with the smaller population of just 8 in- stances. The performance of both IMPALA and PopArt- IMPALA agents at the end of training is very similar whether hyperparameters are tuned with 8 or 24 PBT in- stances, suggesting that the large populations used for hyper- parameter tuning by Espeholt et al. may not be necessary.
Note, however, that we have observed larger discrepancies between experiments where small and large population size are used for tuning hyper-parameter, when training the less performing IMPALA agent that used a more limited action set, as presented in the original IMPALA paper.
DmLab-30 â PopartIMPALA [24] 20 â IMPALA [24] PopArt-IMPALA [8] (@1e10 frames) IMPALA [8] (@1e10 frames) 00 02 Om 06 08 To Environment Frames 1e10
Figure 6: Larger populations: mean capped human nor- malised score across the DmLab-30 benchmark as a func- tion of the total number of frames seen by the agents across all levels. The solid lines plot the performance of IMPALA (blue) and PopArt-IMPALA (orange) when tuning hyper- parameters with a large PBT population of 24 instances. Dashed lines correspond to the ï¬nal performance of these same agents, after 10B frames, in the previous experiments where hyper-parameters were tuned with a population of 8.
# Pixel Control
Pixel control (Jaderberg et al. 2016) is an unsupervised aux- iliary task introduced to help learning good state represen- tations. We report here also the performance of combining Pixel Control with IMPALA without also using PopArt. As shown in Figure 7, pixel control increases the performance of both the PopArt-IMPALA agent as well as that of the IM- PALA baseline. PopArt still guarantees a noticeable boost in performance, with the median human normalized score of Pixel-PopArt-IMPALA (red line) exceeding the score of Pixel-IMPALA (green line) by approximately 10 points.
We implemented the pixel control task as described in the original paper, only adapting the scheme to the rectangular observations used in DmLab-30. We split the (72 à 96) ob- servations into a 18Ã24 grid of 4Ã4 cells. For each location in the grid we deï¬ne a distinct pseudo-reward Ëri,j, equal to the absolute value of the difference between pixel intensities in consecutive frames, averaged across the 16 pixels of cell ci,j. For each cell, we train action values with multi-step Q- learning, accumulating rewards until the end of a rollout and then bootstrapping. We use a discount γ = 0.9. Learning is fully off-policy on experience generated by the actors, that follow the main policy Ï as usual.
We use a deep deconvolutional network for the action value predictions associated to each pseudo-reward Ëri,j. First, we feed the LSTMâs output to a fully connected layer, reshape the output tensor as 6 Ã 9 Ã 32, and apply a decon- volution with 3 Ã 3 kernels that outputs a 8 Ã 11 Ã 32 tensor. From this, we compute a spatial grid of Q-values using a du- eling network architecture: we use a deconvolution with 1 output channel for the state values across the grid and a de- convolution with |A| channels for the advantage estimates of each cell. Output deconvolutions use 4 Ã 4 kernels with stride 2. The additional head is only evaluated on the learner, actors do not execute it.
20 DmLab-30 PopArtIMPALA@10B IMPALA-original@10B â Pixel-PopartiMPALA = â Pixel-IMPALA ° 2 i 00) oS 10 TS 2.0 Environment Frames 109
Figure 7: Pixel Control: mean capped human normalised score across the DmLab-30 benchmark as a function of the total number of frames (summed across levels). Solid lines plot the performance PopArt-IMPALA (red) and IM- PALA (green), after augmenting both with pixel con- trol. Dashed lines mark the point at which Pixel-PopArt- IMPALA matches the ï¬nal performance of previous agents. Note how, thanks to the improved data efï¬ciency, we train for 2B frames, compared to 10B in previous experiments.
Atari-57 Score breakdown In this section we use barplots to report the ï¬nal perfor- mance of the agents on each of the levels in the Atari-57 multi-task benchmark. In order to compute these scores we take the ï¬nal trained agent and evaluate it with a frozen pol- icy on each of the levels for 200 episodes. The same trained policy is evaluated in all the levels, and the policy is not provided information about the task itâs being evaluated on. For Atari, we compare PopArt-IMPALA, with and without reward clipping, to an IMPALA baseline. In all cases the height of the bars in the plot denote human normalised score. For the Atari results we additionally rescale logarithmically the x-axis, because in this domain games may differ in their normalised performance by several orders of magnitude.
Figure 8: Atari-57 breakdown: human normalised score for IMPALA and PopArt-IMPALA, as measured in a separate evaluation phase at the end of training, broken down for the 57 games. For PopArt-IMPALA we report the scores both with and without reward clipping
[I PopArt-IMPALA [J PopArt-IMPALA (unclipped) EEE) IMPALA
atlantis, gopher assault up_n_down breakout double_dunk krull boxing demon_attack kangaroo crazy_climber defender road_runner robotank Jamesbond fishing_derby phoenix centipede bank heist, kung fu_master star_gunner tennis tutankham pong enduro wizard_of_wor battle_zone freeway surround venture vars revenge video_pinball Ice_hockey name _this_ game time_pilot riverraid chopper_command asterix ms_pacman hero alien space_invaders berzerk bowling beam_rider amidar oravitar abert frostbite asteroids seaquest pitfall private_eye montezuma_revenge solaris, skiing : . 10 100 Normalised Score
T0000
DmLab-30 Score breakdown In this section we use barplots to report the ï¬nal perfor- mance of the agents on each of the levels in the DmLab- 30 multi-task benchmark. In order to compute these scores we take the ï¬nal trained agent and evaluate it with a frozen policy on each of the levels for 500 episodes. We perform the evaluation over a higher number of episodes (compared to Atari) because the variance of the mean episode return is typically higher in DeepMind Lab. As before, the same trained policy is evaluated on all levels, and the policy is not provided information about the task itâs being evaluated on. Also in DmLab-30 we perform a three-way comparison. We compare PopArt-IMPALA to our improved IMPALA base- line, and, for completeness, to the original paperâs IMPALA.
Figure 9: DmLab-30 breakdown: human normalised score for the original paperâs IMPALA, our improved IMPALA baseline, and PopArt-IMPALA, as measured at the end of training, broken down for the 30 tasks; they all used 8 in- stances for population based training.
CJ IMPALA-original [ES PopArt-IMPALA (a IMPALA lasertag_one_opponent_small language_select_described_object lasertag_three_opponents_small explore_goal_locations_small lasertag_three_opponents_large explore_obstructed_goals_small psychlab_visual_search lasertag_one_opponent_large explore_object_locations_small language_select_located_object natlab_varying_map_regrowth language_answer_quantitative_question explore_object locations large rooms_collect_good_objects test natlab_varying_map_randomized explore_object_rewards few explore_goal_locations_large psychlab_sequential_comparison explore_obstructed_goals_large explore_object_rewards_many psychlab_continuous_recognition rooms select_nonmatching_object natlab_fixed_large_map rooms_watermaze platforms_random rooms keys doors puzzle platforms_hard psychlab_arbitrary_visuomotor_mapping language_execute_random_task rooms_exploit_deferred_effects test Por et 0 20 40 60 80 100 120 140 Normalised Score
DeepMind Lab action discretisation
DeepMind Labâs native action space is a 7-dimensional continuous space, whose dimensions correspond to rotat- ing horizontally/vertically, straï¬ng left/right, moving for- ward/backward, tagging, crouching, and jumping.
Despite the native action space being continuous, previ- ous work on this platform has however typically relied on a coarse discretisation of the action space. We therefore follow the same approach also in our experiments.
Below we list the discretisations used by the agents con- sidered in our experiments. This includes the discretisation used by IMPALA, as well as the one we introduce in this pa- per in order to unlock some levels in DmLab-30 which just canât be solved under the original IMPALA discretisation.
Table 1: Action discretisation used by IMPALA: we report below the discretisation of DeepMind Labâs action space, as used by the original IMPALA agent in Espeholt et al.
Native DmLab Action Action [ 0, 0, 0, 1, 0, 0, 0] Forward (FW) [ 0, 0, 0, -1, 0, 0, 0] Backward (BW) [ 0, 0, -1, 0, 0, 0, 0] Strafe Left [ 0, 0, 1, 0, 0, 0, 0] Strafe Right [-20, 0, 0, 0, 0, 0, 0] Look Left (LL) Look Right (LR) [ 20, 0, 0, 0, 0, 0, 0] [-20, 0, 0, 1, 0, 0, 0] FW + LL [ 20, 0, 0, 1, 0, 0, 0] FW + LR [ 0, 0, 0, 0, 1, 0, 0] Fire
Table 2: Action discretisation of DeepMind Labâs action space, as used by our version of IMPALA and by PopArt- IMPALA.
Native DmLab Action Action [ 0, 0, 0, 1, 0, 0, 0] FW [ 0, 0, 0, -1, 0, 0, 0] BW [ 0, 0, -1, 0, 0, 0, 0] Strafe Left [ 0, 0, 1, 0, 0, 0, 0] Strafe Right [-10, 0, 0, 0, 0, 0, 0] Small LL [ 10, 0, 0, 0, 0, 0, 0] Small LR [-60, 0, 0, 0, 0, 0, 0] Large LL [ 60, 0, 0, 0, 0, 0, 0] Large LR [ 0, 10, 0, 0, 0, 0, 0] Look Down [ 0,-10, 0, 0, 0, 0, 0] Look Up [-10, 0, 0, 1, 0, 0, 0] FW + Small LL FW + Small LR [ 10, 0, 0, 1, 0, 0, 0] [-60, 0, 0, 1, 0, 0, 0] FW + Large LL FW + Large LR [ 60, 0, 0, 1, 0, 0, 0] [ 0, 0, 0, 0, 1, 0, 0] Fire
Fixed Hyperparameters
Table 3: PopArt speciï¬c hyperparameters: these are held ï¬xed during training and were only very lightly tuned. The lower bound is used to avoid numerical issues when rewards are extremely sparse.
Hyperparameter Statistics learning rate 0.0003 0.0001 Scale lower bound 1e6 Scale upper bound
Table 4: DeepMind Lab preprocessing. As in previous work on DeepMind Lab, we render the observation with a resolu- tion of [72, 96], as well as use 4 action repeats. We also em- ploy the optimistic asymmetric rescaling (OAR) of rewards, that was introduced in Espeholt et al. for exploration.
Hyperparameter Image Height Image Width Number of action repeats Reward Rescaling
value 72 96 4 -0.3min(tanh(r),0)+ 5max(tanh(r),0)
standard Atari- Table 5: Atari preprocessing. The preprocessing is used in the Atari experiments. Since the introduction of DQN these setting have become a standard practice when training deep RL agent on Atari. Note however, that we report experiments training agents both with and without reward clipping.
Hyperparameter Image Height Image Width Grey scaling Max-pooling 2 consecutive frames Frame Stacking End of episode on life loss Reward Clipping (if used) Number of action repeats
Table 6: Other agent hyperparameters: These hyperparame- ters are the same used by Espeholt et al.
# value 20 (Atari), 100 (DmLab) 0.99 0.5 32 RMSProp 0.
Hyperparameter Unroll length Discount γ Baseline loss weight γ Batch size Optimiser RMSProp momentum
# Network Architecture
Table 7: Network hyperparameters. The network architec- ture is described in details in Espeholt et al., For complete- ness, we also report in the Table below the complete spec- iï¬cation of the network. Convolutional layers are speciï¬ed according to the pattern (num layers, kernel size, stride).
3 [16, 32, 32] ReLU 1 / 3x3 / 1) 1 / 3x3 / 2 2 / 3x3 / 1 Identity 2 / 3x3 / 1 Identity 20 LSTM / 64 256 256
# Linear Linear+softmax
# Population Based Training
Table 8: Population Based Training: we use PBT for tuning hyper-parameters, as described in Espeholt et al., with pop- ulation size and ï¬tness function as deï¬ned below.
Hyperparameter Population Size (Atari) Population Size (DmLab) Fitness value 24 8 Mean capped human normalised score (cap=100)
Table 9: hyperparameters tuned with population based train- ing are listed below: note that these are the same used by all baseline agents we compare to, to ensure fair comparisons.
Hyperparameter Entropy cost Learning rate RMSProp epsilon Max Grad Norm distribution Log-uniform on [5e-5, 1e-2] Log-uniform on [5e-6, 5e-3] Categorical on [1e-1, 1e-3, 1e-5, 1e-7] Uniform on [10, 100] | {
"id": "1507.04296"
} |
1809.04191 | Discovering Low-Precision Networks Close to Full-Precision Networks for Efficient Embedded Inference | To realize the promise of ubiquitous embedded deep network inference, it is
essential to seek limits of energy and area efficiency. To this end,
low-precision networks offer tremendous promise because both energy and area
scale down quadratically with the reduction in precision. Here we demonstrate
ResNet-18, -34, -50, -152, Inception-v3, Densenet-161, and VGG-16bn networks on
the ImageNet classification benchmark that, at 8-bit precision exceed the
accuracy of the full-precision baseline networks after one epoch of finetuning,
thereby leveraging the availability of pretrained models. We also demonstrate
ResNet-18, -34, -50, -152, Densenet-161, and VGG-16bn 4-bit models that match
the accuracy of the full-precision baseline networks -- the highest scores to
date. Surprisingly, the weights of the low-precision networks are very close
(in cosine similarity) to the weights of the corresponding baseline networks,
making training from scratch unnecessary.
We find that gradient noise due to quantization during training increases
with reduced precision, and seek ways to overcome this noise. The number of
iterations required by SGD to achieve a given training error is related to the
square of (a) the distance of the initial solution from the final plus (b) the
maximum variance of the gradient estimates. Therefore, we (a) reduce solution
distance by starting with pretrained fp32 precision baseline networks and
fine-tuning, and (b) combat gradient noise introduced by quantization by
training longer and reducing learning rates. Sensitivity analysis indicates
that these simple techniques, coupled with proper activation function range
calibration to take full advantage of the limited precision, are sufficient to
discover low-precision networks, if they exist, close to fp32 precision
baseline networks. The results herein provide evidence that 4-bits suffice for
classification. | http://arxiv.org/pdf/1809.04191 | Jeffrey L. McKinstry, Steven K. Esser, Rathinakumar Appuswamy, Deepika Bablani, John V. Arthur, Izzet B. Yildiz, Dharmendra S. Modha | cs.CV | null | null | cs.CV | 20180911 | 20190225 | 9 1 0 2
b e F 5 2 ] V C . s c [
2 v 1 9 1 4 0 . 9 0 8 1 : v i X r a
DISCOVERING LOW-PRECISION NETWORKS CLOSE TO FULL-PRECISION NETWORKS FOR EFFICIENT EMBED- DED INFERENCE
Jeffrey L. McKinstry, Steven K. Esser, Rathinakumar Appuswamy, Deepika Bablani, John V. Arthur, Izzet B. Yildiz & Dharmendra S. Modha IBM Almaden Research Center 650 Harry Road, San Jose, CA 95120, USA {jlmckins, sesser, rappusw, deepika.bablani, arthurjo,
byildiz,
# dmodha}@us.ibm.com
# ABSTRACT
To realize the promise of ubiquitous embedded deep network inference, it is es- sential to seek limits of energy and area efï¬ciency. To this end, low-precision networks offer tremendous promise because both energy and area scale down quadratically with the reduction in precision. Here, for the ï¬rst time, we demon- strate ResNet-18, ResNet-34, ResNet-50, ResNet-152, Inception-v3, Densenet- 161, and VGG-16bn networks on the ImageNet classiï¬cation benchmark that, at 8-bit precision exceed the accuracy of the full-precision baseline networks after one epoch of ï¬netuning, thereby leveraging the availability of pretrained models. We also demonstrate ResNet-18, ResNet-34, ResNet-50, ResNet-152, Densenet- 161, and VGG-16bn 4-bit models that match the accuracy of the full-precision baseline networks â the highest scores to date. Surprisingly, the weights of the low-precision networks are very close (in cosine similarity) to the weights of the corresponding baseline networks, making training from scratch unnecessary. We ï¬nd that gradient noise due to quantization during training increases with re- duced precision, and seek ways to overcome this noise. The number of iterations required by stochastic gradient descent to achieve a given training error is re- lated to the square of (a) the distance of the initial solution from the ï¬nal plus (b) the maximum variance of the gradient estimates. By drawing inspiration from this observation, we (a) reduce solution distance by starting with pretrained fp32 precision baseline networks and ï¬ne-tuning, and (b) combat noise introduced by quantizing weights and activations during training by training longer and reducing learning rates. Sensitivity analysis indicates that these simple techniques, coupled with proper activation function range calibration to take full advantage of the lim- ited precision, are sufï¬cient to discover low-precision networks, if they exist, close to fp32 precision baseline networks. The results herein provide evidence that 4- bits sufï¬ce for classiï¬cation.
INTRODUCTION
1.1 PROBLEM STATEMENT
To harness the power of deep convolutional networks in embedded and large-scale application do- mains requires energy-efï¬cient implementation, leading to great interest in low-precision networks suitable for deployment with low-precision hardware accelerators. Consequently there have been a ï¬urry of methods for quantizing both the weights and activations of these networks (Jacob et al., 2017; Courbariaux et al., 2015; Polino et al., 2018; Xu et al., 2018; Baskin et al., 2018; Mishra et al., 2017; Choi et al., 2018). A common perception is that 8-bit networks offer the promise of de- creased computational complexity with little loss in accuracy, without any need to retrain. However, the published accuracies are typically lower for the quantized networks than for the corresponding full-precision net (Migacz, 2017). Even training 8-bit networks from scratch fails to close this gap
1
Table 1: Fine-tuning After Quantization (FAQ) exceeds or matches the accuracy of the fp32 baseline networks on the Imagenet benchmark for both 8 and 4 bits on representative state- of-the-art network architectures, and outperforms all comparable quantization methods in all but one instance. Baselines are popular architectures He et al. (2016); Huang et al. (2017); Szegedy et al. (2016); Simonyan & Zisserman (2014) from the PyTorch model zoo. Other results reported in the literature are shown for comparison, with methods exceeding or matching their top- 1 baseline (which may be different than ours) in bold. Precision is in bits, where w= weight , and a=activation function. Accuracy is reported for the ImageNet classiï¬cation benchmark. FAQ ResNet-18, 4-bit result shows mean±std for 3 runs. Compared methods: Apprentice (Mishra & Marr, 2017), Distillation (Polino et al., 2018), UNIQ (Baskin et al., 2018), IOA (Jacob et al., 2017), Joint Training (Jung et al., 2018), EL-Net (Zhuang et al., 2018). Since only one epoch was necessary to ï¬ne-tune 8-bit models, we were able to study more 8 than 4-bit models.
ResNet-18 ResNet-18 ResNet-18 ResNet-18 ResNet-18 ResNet-18 ResNet-18 ResNet-34 ResNet-34 ResNet-34 ResNet-34 ResNet-34 ResNet-34 ResNet-50 ResNet-50 ResNet-50 ResNet-50 ResNet-50 ResNet-50 ResNet-50 ResNet-152 ResNet-152 ResNet-152 Inception-v3 Inception-v3 Inception-v3 Inception-v3 Densenet-161 Densenet-161 FAQ (This paper) Densenet-161 FAQ (This paper) baseline Apprentice FAQ (This paper) FAQ (This paper) Joint Training UNIQ Distillation baseline FAQ (This paper) FAQ (This paper) UNIQ Apprentice UNIQ baseline FAQ (This paper) FAQ (This paper) EL-Net IOA Apprentice UNIQ baseline FAQ (This paper) FAQ (This paper) baseline FAQ (This paper) FAQ (This paper) IOA baseline
|
(Jacob et al., 2017) (See Table 1). The situation is even worse for 4-bit precision. For the ImageNet classiï¬cation benchmark, only one method has been able to match the accuracy of the corresponding full-precision network when quantizing both the weights and activations at the 4-bit level (Zhuang et al., 2018). The complexity and ad-hoc nature of prior methods motivates the search for a simpler technique that can be consistently shown to match or exceed full-precision baseline networks.
2
1.2 CONTRIBUTIONS
Guided by theoretical convergence bounds for stochastic gradient descent (SGD), we discover that Fine-tuning â training pre-trained high-precision networks for low-precision inference â After Quan- tization â proper range calibration of weights and activations â (FAQ) can discover both 4-bit and 8-bit integer networks. We evaluate the proposed solution on the ImageNet benchmark on a repre- sentative set of state-of-the-art networks at 8-bit and 4-bit quantization levels (Table 1). Contribu- tions include the following.
⢠We demonstrate 8-bit scores on ResNet-18, 34, 50, and 152, Inception-v3, Densenet-161, and VGG-16 exceeding the full-precision scores after just one epoch of ï¬ne-tuning.
⢠We present evidence of 4 bit, fully integer ResNet-18, 34, 50, and 152, Densenet-16, and VGG-16 networks, which match the accuracy of the original full-precision networks on the ImageNet benchmark, settting the new state-of-the-art.
⢠We present empirical evidence for gradient noise that is introduced by weight quantization. This gradient noise increases with decreasing precision and may account for the difï¬culty in ï¬ne-tuning low-precision networks.
⢠We ï¬nd direct empirical support that, as with 8-bit quantization, near optimal 4-bit quan- tized solutions exist close to high-precision solutions, making training from scratch unnec- essary.
1.3 PROPOSED SOLUTION
Our goal is to quantize existing networks to 8 and 4 bits for both weights and activations, without increasing the computational complexity of the network to compensate, e.g. with modiï¬cations such as feature expansion, while achieving accuracies that match or exceed the corresponding full- precision networks. For precision below 8 bits, the typical method is to train the model using SGD while enforcing the constraints (Courbariaux et al., 2015). There are at least two problems faced when training low-precision networks: learning in the face of low-precision weights and activations, and capacity limits in the face of these constraints. Assuming that capacity is sufï¬cient and a low- precision solution exists, we wish to solve the ï¬rst problem, that is, ï¬nd a way to train low-precision networks to obtain the best possible score subject to capacity limits.
We use low-precision training to optimize quantized networks. We hypothesize that noise introduced by quantizing weights and activations during training is the crux of the problem and is a second source of noise that is similar to gradient noise inherent to stochastic gradient descent. In support of this idea, Polino et al. (2018) showed that unbiased quantization of weights is equivalent to adding Gaussian noise to the activations, in expectation. The problem is then to ï¬nd ways to overcome this noise. SGD requires 1
BS (07 +L |laro â 0*|[3)?/2 (1) iterations to find a 2e-approximate optimal value, where o? is the gradient noise level, L is related to the curvature of the convex function, xo and «* are the initial and optimal network parameters, respectively, and ⬠is the error tolerance This suggests two ways to minimize the final error. First, start closer to the solution, 1. ze xq â x*. We therefore start with pretrained models for quantization, rather than training from scratch (Zhou et al.| 2017} Baskin et al. 2018). Second, minimize o. To do this, we combine well-known techniques to combat noise: 1) larger batches which reduce the gradient noise proportional to the square root of the batch size 016), and 2) learning rate annealing to lower learning rates (10~°), effectively averaging over more batches (batch size increases and learning rate decreases are known to behave similarly (Smith et al.|2017)). Additionally, in the case of 4-bit precision, we fine-tune longer â 110 epochs â to achieve better accuracy, according to equation 1. Finally, we use the initial pretrained network to determine the proper ranges for quantizing both the weights and activations. We refer to this technique as Fine-tuning after quantization, or FAQ. We argue that the method of fine-tuning for quantization is the right approach in the sense that it directly optimizes the proper objective func- tion, the final score, rather than proxies which measure distance from the full-precision network parameters (Migacz| 2017). 2
1 This assumes a convex loss function, a simpler case.
3
2 BACKGROUND
2.1 NETWORK QUANTIZATION
In the quest for training state-of-the-art low-precision networks, there has been a vast diversity in how the precision constraints are imposed as well as in approaches used in their training. Typ- ical variations in applying low-precision constraints include allowing non-uniform quantization of weights and activations (Miyashita et al., 2016; Zhou et al., 2017; Cai et al., 2017) where the discrete dictionary may depend on the data, and stochastic quantization (Polino et al., 2018; Courbariaux et al., 2015). Approaches to training these networks include distillation (Polino et al., 2018), layer- wise quantization and retraining (Xu et al., 2018), introducing noise during training (Baskin et al., 2018), increasing features (Mishra et al., 2017), learning quantization-speciï¬c parameters using backpropagation (Choi et al., 2018), ï¬ne-tuning (Baskin et al., 2018; Zhuang et al., 2018), using Stochastic Variance-Reduced Gradient instead of SGD (Sa et al., 2018), and relaxation methods resembling annealing (Yin et al., 2018).
With notable exception of a few papers dealing with binary or trinary networks (Courbariaux et al., 2015; Rastegari et al., 2016; Courbariaux & Bengio, 2016)2, most of the literature on low-precision networks constrain the number of discrete values that the weights and activations assume but oth- erwise allow them to be ï¬oating-point numbers. In addition, low-precision constraints are not nec- essarily imposed on batch-normalization constants, average-pooling results etc. in these networks. This is in contrast to how 8-bit integer networks are supported by TensorRT as well as Tensor- Flow framework, where all the parameters and activations are quantized to 8-bit ï¬xed-point integers (see for example (Jacob et al., 2017)). Recent attempts (Wu et al., 2018) at training low-precision networks with integer constraints have hinted at the possibility of porting such networks to commer- cially available hardware for inference3.
We focus on training networks with both weights and activations constrained to be either 4 bit, or 8-bit ï¬xed-point integers, and restrict all other scalar multiplicative constants (for example, batch- normalization) in the network to be 8-bit integers and additive constants (for example, bias values) to be 32-bit integers.
# 3 LOW-PRECISION FINE-TUNING METHODS
We start with pretrained, high-precision networks from the PyTorch model zoo, quantize, and then ï¬ne-tune for a variable number of epochs depending on the precision. We hypothesize that noise is the limiting factor in ï¬nding low-precision solutions, and use well-known methods to over come noise in training. Otherwise, we use the techniques of Courbariaux et al. (2015); Esser et al. (2016) to train low-precision networks. Details of this procedure are described next.
3.1 FIXED POINT QUANTIZER
The quantizer we use throughout this paper is parametrized by the precision (in number of bits) b, and the location of the least significant-bit relative to the radix 1, and denoted by Qyi. A calibration phase during initialization is used to determine a unique | for each layer of acti- vations, which remains fixed subsequently throughout the fine-tuning. Similarly, each layerâs weight tensor as well as other parameter tensors are assigned a unique | and this quantity is de- termined during each training iteration. The procedures for determining / for activations and other parameters are described in the following subsections. A given scalar x is quantized to a fixed-point integer @ = Qp,(x) = min(|z x 27'],2> â 1) x 2! for unsigned values, and & = max(min([x x 27'],2°>1 â 1), 2-1 + 1)) x 2! for signed values.
Given a desired network precision of either 8 or 4 bits, we quantize all weights and activations to this level. In the 4-bit case, we leave the ï¬rst and last layer weights at 8 bits and allow full-precision (32-bit ï¬xed point) linear activations in the last, fully-connected layer Courbariaux et al. (2015); Esser et al. (2016). In addition, the input to that last, fully-connected layer is also allowed to be an
2Even these networks may have occasional ï¬oating point scaling steps between layers. 3NVIDIAâs recently announced Turing architecture supports 4-bit integer operations, for example.
4
8-bit integer as is the common practice in the literature. In such networks containing 4-bit internal layers and 8-bit ï¬nal layer, the transition from 4-bit to 8-bit is facilitated by the last ReLU activation layer in the network. Every other ReLU layerâs output tensor is quantized to a 4-bit integer tensor.
# 3.1.1 QUANTIZING NETWORK PARAMETERS
Given a weight tensor w, SGD is used to update w as usual but a fixed-point version is used for inference and gradient calculation 2015} 2016). The fixed-point version is obtained by applying Q»,, element-wise. The quantization parameter / for a given weight tensor is updated during every iteration and computed as follows: We first determine a desired quan- tization step-size A by first clipping the weight tensor at a constant multiplâ] of its numerically estimated standard-deviation, and then dividing this range into equally-sized bins. Finally, the re- quired constant | is calculated as 1 = [log (A)]. All other parameters, including those used in batch-normalization, use 1 = âb/2.
3.2 INITIALIZATION
pretrained model ï¬le Network the quantization parameter l (https://pytorch.org/docs/stable/torchvision/models.html). Next, for each layer of activation is calibrated using the following procedure: Following Jacob et al. (2017), we use a technique of running several (5) training data batches through the unquantized network to determine the maximum range for uniform quantization. Speciï¬cally, for each layer, ymax is the maximum across all batches of the 99.99th percentile of the batch of activation tensor of that layer, rounded up to the next even power of two. This percentile level was found to give the best initial validation score for 8-bit layers, while 99.9 was best for layers with 4-bit ReLUs. The estimated ymax, in turn, determines the quantization parameter l for that tensor. For ReLU layers, the clipped tensor in the range [0, ymax] is then quantized using Qb,l. Once these activation function parameters are determined for each of the tensors, they are kept ï¬xed during subsequent ï¬ne-tuning.
For control experiments which start from random initialization rather than pretrained weights, we did not perform this ReLU calibration step, since initial activation ranges are unlikely to be correct. In these experiments, we set the maximum range of all ReLU activation functions to ymax = 2p/2 â 1, where p is the number of bits of precision.
3.3 TRAINING
To train such a quantized network we use the typical procedure of keeping a ï¬oating point copy of the weights which are updated with the gradients as in normal SGD, and quantize weights and activations in the forward pass (Courbariaux et al., 2015; Esser et al., 2016), clipping values that fall above the maximum range as described above. We also use the straight through estimator (Bengio et al., 2013) to pass the gradient through the quantization operator.
For ï¬ne-tuning pretrained 8-bit networks, since the initial quantization is already within a few per- cent of the full-precision network in many cases, we ï¬nd that we need only a single additional epoch of training, with a learning rate of 10â4 after the initial quantization step, and no other changes are made to the original training parameters during ï¬ne-tuning.
However, for 4-bit networks, the initial quantization alone gives poor performance, and matching the performance of the full-precision net requires training for 110 additional epochs using exponential decay of the learning rate such that the learning rate drops from the initial rate of 0.0015 (slightly higher than the ï¬nal learning rate used to train the pretrained net) to a ï¬nal value of 10â6 5. Accord- ingly we multiply the learning rate by 0.936 after each epoch for a 110 epoch ï¬ne-tuning training run. In addition, for the smallest ResNet 4-bit network, ResNet-18, the weight decay parameter is reduced from 10â4 used to train ResNet models to 0.5 à 10â4 assuming that less regularization is needed with smaller, lower precision networks. The batch size used was 256 split over 2 GPUs. SGD with momentum was used for optimization. Software was implemented using PyTorch.
4The constant, in general, depends on the precision. We used a constant of 4.12 for all our 4-bit experiments. 5For ResNet-50, ResNet-152, and Inception-v3, a step learning rate schedule was used with an initial learn- ing rate of 0.02, reduced by a factor of 0.1 after 30, 60, and 90 epochs.
5
Table 2: Sensitivity experiments indicate that longer training duration, initialization from a pretrained model, larger batch size, lower weight decay, and initial activation calibration all contribute to improved accuracy when training the 4-bit ResNet-18 network, while the exact learning rate decay schedule contributed the least. The standard parameters are on row 1. Each subsequent row shows the parameters and score for one experiment with changed parameters in bold. * Note that to keep the number of weight updates approximately the same, the number of epochs was inceased, since larger batches result in fewer updates per epoch.
Epochs 110 60 110 165* 110 110 110 Pre- trained Yes Yes No Yes Yes Yes Yes Batch size Learning Weight decay rate schedule exp. exp. exp. exp. step exp. exp. 256 400 256 256-2048 256 256 256 0.00005 0.00005 0.00005 0.00005 0.00005 0.0001 0.00005 Activation calibration Yes Yes Yes Yes Yes Yes No Accuracy Change (% top-1) 69.82 69.40 69.24 69.96 69.90 69.59 69.19 - -0.22 -0.58 +0.14 +0.08 -0.23 -0.63
# 4 EXPERIMENTS
4.1 FINE-TUNING MATCHES OR EXCEEDS THE ACCURACY OF THE INITIAL HIGH-PRECISION NETWORK
FAQ trained 8-bit networks outperform all comparable quantization methods in all but one instance and exceeded pretrained fp32 network accuracy with only one epoch of training following quantiza- tion for all networks explored (Table 1). Immediately following quantization, network accuracy was nearly at the level of the pretrained networks (data not shown) with one exception, Inception-v3, which started at 72.34% top-1. Since the networks started close to a good solution, they did not require extensive ï¬ne-tuning to return to and surpass pretrained networks.
FAQ trained 4-bit network accuracy exceeds all comparable quantization methods, surpassing the next closest approach by nearly 0.5% for ResNet-18 (Jung et al., 2018), and matched or exceeded pretrained fp32 network accuracy. Four-bit networks required signiï¬cantly longer ï¬ne-tuning â 110 epochs â for the networks trained, ResNet-18, ResNet-34, and ResNet-50. In contrast to the 8- bit cases, immediately following quantization, network accuracy dropped precipitously, requiring signiï¬cant ï¬ne-tuning to match and surpass the pretrained networks.
FAQ trained 4-bit network accuracy is sensitive to several hyperparameters (Table 2). We elaborate on some of these results subsequently.
4.2 LONGER TRAINING TIME WAS NECESSARY FOR 4-BIT NETWORKS
For the 4-bit ResNet-18, longer ï¬ne-tuning improved accuracy (Table 2), potentially by averaging out gradient noise introduced by discretization (Polino et al., 2018). We explored sensitivity to short- ening ï¬ne-tuning by repeating the experiment for 30, 60 and 110 epochs, with the same initial and ï¬nal learning rates in all cases, resulting in top-1 accuracies of 69.30, 69.40, and 69.68 respectively. The hyperparameters were identical, except the batch size was increased from 256 to 400. These results indicate that training longer was necessary.
# 4.3 QUANTIZING A PRETRAINED NETWORK IMPROVES ACCURACY
Initializing networks with a discretized pretrained network followed by ï¬ne-tuning improved accu- racy compared with training a quantized network from random initialization for the same duration (Table 2), suggesting proximity to a full-precision network enhances low-precision ï¬ne-tuning. For a 4-bit network, we explored the contribution of the pretrained network by training two ResNet- 18 networks with standard initialization for 110 epochs, one with the previous learning rate decay
6
schedule6 and the other with a learning rate from Choi et al. (2018), dropping by a factor of 0.1 at epochs 30, 60, 85, and 95, plus an additional drop to 10â6 at epoch 95 to match the ï¬ne-tuning experiments. These two approaches reached top-1 accuracies of 67.14% and 69.24%, respectively â both less than FAQâs accuracy after 30 epochs and more than 0.5% short of FAQâs accuracy after 110 epochs. The one FAQ change that degraded accuracy the most was neglecting to calibrate activation ranges for each layer using the pretrained model, which dropped accuracy by 0.63%. This is another possible reason why training 8-bit networks from scratch has not achieved higher scores in the past (Jacob et al., 2017).
4.4 REDUCING NOISE WITH LARGER BATCH SIZE IMPROVES ACCURACY
Fine-tuning with increasing batch size improved accuracy (Table 2). For a 4-bit network, we ex- plored the contribution of increasing the batch size with a Resnet-18 network, which increased top-1 validation accuracy to 69.96%. We scheduled batch sizes, starting at 256 and doubled at epochs 55, 150, 160, reaching 2048 as maximum batch size7, each doubling effecting a 2 factor drop in gradi- ent noise, which is proportional to square root of batch size. We used 165 epochs to approximately conserve the number of weight updates as the 110-epochs 256-batch-size case as our focus here is not training faster but reducing gradient noise to improve ï¬nal accuracy. The result is consistent with the idea that gradient noise limits low-precision training, however we cannot not rule out pos- sible confounding affects of training for more epochs or the effect of larger batches on the effective learning step size.
4.5 THE EXACT FORM OF EXPONENTIAL LEARNING RATE DECAY WAS NOT CRITICAL
Replacing the exponential learning rate decay with a typical step decay which reduced the learning rate from 10â3 to 10â6 in 3 steps of 0.1 at epochs 30, 60, and 90, improved results slightly (+0.08). This suggests that FAQ is insensitive to the exact form of exponential decrease in learning rate.
4.6 REDUCING WEIGHT DECAY IMPROVES ACCURACY FOR RESNET-18
For the 4-bit ResNet-18 network, increasing weight decay from 0.5 à 104 to 10â4, used in the origi- nal pretrained network, reduced the validation accuracy by 0.23% (Table 2). The smaller ResNet-18 may lack sufï¬cient capacity to compensate for low-precision weights and activations with the same weight decay. In contrast, for the 4-bit ResNet-34 and 50 networks, best results were obtained with weight decay 10â4.
4.7 QUANTIZING WEIGHTS INTRODUCES GRADIENT NOISE
Weight discretization increases gradient noise for 8-, 4-, and 2-bit networks8. We deï¬ne the increase in gradient noise due to weight discretization as the angular difference between the step taken by the learning algorithm, δw, on the ï¬oat-point copy at iteration t â 1, wtâ1, and the actual step taken due to quantizing the weights, i.e. Qb,l(wt) â Qb,l(wtâ1). We measure this angle using cosine similarity (normalized dot-product) between the instantaneous δw and an exponential moving average of the actual step directions with smoothing factor 0.9 (Figure 1). Cosine similarity of 1.0 corresponds to an fp32 network and the absence of discretization-induced gradient noise. As bit precisions decrease, similarity decreases, signaling higher gradient noise.
These results directly show discretization-induced gradient noise appreciably inï¬uences the ï¬ne- tuning and training trajectories of quantized networks. The increased noise (decreased similarity) of the 4-bit case compared to the 8-bit case possibly accounts for the difference in ï¬ne-tuning times required. Even the 8-bit case is signiï¬cantly below unity, possibly explaining why training from scratch has not lead to the highest performance (Jacob et al., 2017).
6We used a higher initial learning rate of 0.1, equal to that used to train the full-precision net from scratch, with a decay factor of 0.901, such that ï¬nal learning rate was 10â6.
7To simulate batch sizes larger than 256 within memory constraints, we used virtual batches, updating the weights once every n actual batches with the gradient average for effective batch size 256n.
82-bit network is used only to demonstrate how discretization-induced gradient noise varies with bit preci- sion.
7
M2 bits HEE s bits 0.25 + 4 5 = ra 2 3 & s 5 2 S Boss B 8 = > 3 = = § 3 Ee 20 Layer
Figure 1: Quantizing the weights introduces considerable additional noise in the learning pro- cess. Plotted is the cosine of the average angular error between the weight change called for by SGD with momentum, and the actual weight change taken after quantizing. Cosine similarity of 1.0 corresponds to an fp32 network and the absence of discretization-induced gradient noise, i.e. higher is better. This measure is plotted for each layer in ResNet-18 after several hundred iterations in the ï¬rst epoch of ï¬ne-tuning for each of three precisions, 2, 4, and 8 bits for both weights and activations. The ï¬rst conv layer is layer 1, while the fully connected layer is layer 18. Note that the ï¬rst and last layer weights are 8 bits in all cases, thus the noise level is similar in all three cases.
° & ° & © S 06) 05) 04 03) 02) mean cosine distance between w, and w, ,. o4 [ee Train from scratch FAQ
Figure 2: The ResNet-18 4-bit solution after ï¬ne-tuning for 110 epochs was located relatively close to the initial high-precision solution used to initialize the network, indicating that training from scratch is unnecessary. Plotted is the mean, over all neurons in a ResNet-18 network, of the cosine similarity between the weights at the beginning of training from scratch, and the weights at epoch 110 (left bar). The minimum and maximum similarity measure is 0 and 1, respectively. The similarity between the random initial weights and the ï¬nal solution is near 0 in this control experiment, indicating that the weights have moved far from the initial condition when training from scratch. The right bar shows the same measure between initial weights taken from the model zoo and the 4-bit solution after 100 epochs of FAQ training. The cosine similarity is close to 1, indicating that the 4-bit solution is close to the initial fp32 solution used for initialization.
4.8 THE 4-BIT SOLUTION WAS SIMILAR TO THE HIGH-PRECISION SOLUTION
The weights of the FAQ trained 4-bit network were similar to those in the full-precision pretrained network used for its initialization (Figure 2). We deï¬ne the network similarity as the cosine simi- larity between the networksâ weights. The average of the cosine similarity between the weights of every corresponding neuron in the two networks is very close to 1 (0.994), indicating that the weight vectors have not moved very far during 110 epochs of ï¬ne-tuning and that the 4-bit network exists close to its high-precision counterpart, demonstrating that pretrained initialization strongly inï¬u- enced the ï¬nal network. Contrast this with the same measure when training from scratch, where the
8
similarity between the initial weights and ï¬nal weights is close to 0 (0.023). The fact that the 4-bit solution was close to the high-precision solution suggests that training from scratch is unnecessary.
4.9 FAQ GENERALIZES TO CIFAR10
FAQ trained models are as accurate as the full-precision model for ResNet-18 adapted for the CI- FAR10 dataset 9. The full-precision model was well-trained for 350 epochs, with learning rate 0.1 reduced by a factor of 0.1 at 150 and 250 epochs, momentum=0.9, weight decay=5e-4, batch size=128, and augmentation consisting of random crops from images padded with 4 pixels and ran- domly ï¬ipped horizontally. The baseline test accuracy was 94.65%, while FAQ 8- and 4-bit scores, respectively were 94.65% and 94.63%, evidence that FAQ generalizes to other datasets. 8-bit param- eters were the same as that for the Imagenet experiments, except weight decay equaled the baseline, and the number of epochs was extended to 10. The 4-bit score was obtained with the same parame- ters as for Imagenet (Table 2, row5), except with weight decay equal to baseline and initial learning rate of 0.02 (best result among 0.0015, 0.01, 0.02, and 0.04).
# 5 DISCUSSION
We show here that low-precision quantization followed by ï¬ne-tuning, when properly compensating for noise, is sufï¬cient to achieve state of the art performance for networks employing 4- and 8-bit weights and activations. Compared to previous work, our approach offers a major advantage in the 8-bit space, by requiring only a single epoch of post quantization training to consistently exceed high-precision network scores, and a major advantage in the 4-bit space by matching high-precision baseline scores with a simpler approach, exceeding published results on ResNet-18, 34 and 50. We ï¬nd support for the idea that overcoming noise is the main challenge in successful ï¬ne-tuning, given sufï¬cient capacity in a network model: longer training times, exponential learning rate decay, very low ï¬nal learning rate, and larger batch sizes all seem to contribute to improving the results of ï¬ne- tuning. SGD is faced with two sources of noise, one inherent to stochastic sampling, and the other due to quantization noise; these techniques may be reducing only one of the sources, or both, and we have not shown that FAQ is directly reducing quantization noise. Further experiments are warranted.
We believe that the success of ï¬ne-tuning and the wide availability of pretrained models marks a major change in how low-precision networks will be trained. We conjecture that within every region containing a local minimum for a high-precision network, there exists a subregion(s) which also contains solutions to the lower precision 4-bit nets, provided that the network has sufï¬cient capacity. The experiments reported herein provide support for this conjecture; if true, FAQ should generalize to any classiï¬cation model.
Fine-tuning for quantization has been previously studied. In Zhou et al. (2017), increasingly larger subsets of neurons from a pretrained network are replaced with low-precision neurons and ï¬ne- tuned, in stages. The accuracy exceeds the baseline for a range of networks quantized with 5-bit weights and 32-bit activations. Our results here with both ï¬xed-precision weights and activations at either 8 or 4 bits suggest that incremental training may have been unnecessary. In Baskin et al. (2018), ï¬ne-tuning is employed along with a non-linear quantization scheme during training (see UNIQ in Table 1). We have shown that low-precision quantization followed by proper ï¬ne-tuning, is sufï¬cient to achieve even greater accuracy when quantizing both weights and activations at 4 bits. Finally, using a combination of quantizing weights before activations, progressively lower precisions, ï¬ne-tuning, and a new loss function, Zhuang et al. (2018) are the ï¬rst to show that a 4-bit ResNet network can match the top-1 accuracy of a baseline full-precision network. Our results show that a simpler method can achieve this for ResNet-18, 34, and 50, 152, DenseNet-161, and VGG16bn.
Future research includes combining FAQ with other approaches, new training algorithms designed speciï¬cally to ï¬ght the ill-effects of noise (Baskin et al., 2018) introduced by weight quantizaiton, and extending to quantize 2-bit networks. Training in the 2-bit case will be more challenging given the additional quantization noise (Figure 2), and possible capacity limits with 2-bit quantization.
# 9https://github.com/kuangliu/pytorch-cifar/blob/master/models/resnet.py
9
FAQ is a principled approach to quantization. Ultimately, the goal of quantization is to match or exceed the validation score of a corresponding full-precision network. This work demonstrates that 8-bit and 4-bit quantized networks performing at the level of their high-precision counterparts can be obtained with a straightforward approach, a critical step towards harnessing the energy-efï¬ciency of low-precision hardware. The results herein provide evidence that 4-bits sufï¬ce for classiï¬cation.
# REFERENCES
Chaim Baskin, Eli Schwartz, Evgenii Zheltonozhskii, Natan Liss, Raja Giryes, Alexander M. Bron- stein, and Avi Mendelson. UNIQ: uniform noise injection for the quantization of neural networks. CoRR, abs/1804.10969, 2018. URL http://arxiv.org/abs/1804.10969.
Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Zhaowei Cai, Xiaodong He, Jian Sun, and Nuno Vasconcelos. Deep learning with low precision by half-wave gaussian quantization. CoRR, abs/1702.00953, 2017. URL http://arxiv.org/ abs/1702.00953.
Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srini- vasan, and Kailash Gopalakrishnan. PACT: parameterized clipping activation for quantized neural networks. CoRR, abs/1805.06085, 2018. URL http://arxiv.org/abs/1805.06085.
Matthieu Courbariaux and Yoshua Bengio. Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1. CoRR, abs/1602.02830, 2016. URL http://arxiv. org/abs/1602.02830.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in neural information processing systems, pp. 3123â3131, 2015.
SK Esser, PA Merolla, JV Arthur, AS Cassidy, R Appuswamy, A Andreopoulos, DJ Berg, JL McK- instry, T Melano, DR Barch, et al. Convolutional networks for fast, energy-efï¬cient neuromorphic computing. 2016. Preprint on ArXiv. http://arxiv. org/abs/1603.08270. Accessed, 27, 2016.
Ian Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. Deep learning, volume 1. MIT press Cambridge, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770â778, 2016.
Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In CVPR, volume 1, pp. 3, 2017.
Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wil- arXiv preprint son. Averaging weights leads to wider optima and better generalization. arXiv:1803.05407, 2018.
Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efï¬cient integer-arithmetic-only inference. arXiv preprint arXiv:1712.05877, 2017.
Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Youngjun Kwak, Jae-Joon Han, and Changkyu Choi. Joint training of low-precision neural network with quantization interval param- eters. arXiv preprint arXiv:1808.05779, 2018.
Raghu Meka. Cs289ml: https://raghumeka.github.io/CS289ML/gdnotes.pdf, 2017. Notes on convergence of gradient descent.
Szymon Migacz. Nvidia 8-bit inference with tensorrt. GPU Technology Conference, 2017.
Asit Mishra and Debbie Marr. Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy. arXiv preprint arXiv:1711.05852, 2017.
10
Asit K. Mishra, Eriko Nurvitadhi, Jeffrey J. Cook, and Debbie Marr. WRPN: wide reduced-precision networks. CoRR, abs/1709.01134, 2017. URL http://arxiv.org/abs/1709.01134.
Daisuke Miyashita, Edward H. Lee, and Boris Murmann. Convolutional neural networks using logarithmic data representation. CoRR, abs/1603.01025, 2016. URL http://arxiv.org/ abs/1603.01025.
Antonio Polino, Razvan Pascanu, and Dan Alistarh. Model compression via distillation and quanti- zation. CoRR, abs/1802.05668, 2018. URL http://arxiv.org/abs/1802.05668.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classiï¬cation using binary convolutional neural networks. In European Conference on Computer Vision, pp. 525â542. Springer, 2016.
Christopher De Sa, Megan Leszczynski, Jian Zhang, Alana Marzoev, Christopher R. Aberger, Kunle Olukotun, and Christopher R´e. High-accuracy low-precision training. CoRR, abs/1803.03383, 2018. URL http://arxiv.org/abs/1803.03383.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Samuel L Smith, Pieter-Jan Kindermans, and Quoc V Le. Donât decay the learning rate, increase the batch size. arXiv preprint arXiv:1711.00489, 2017.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethink- In Proceedings of the IEEE conference on ing the inception architecture for computer vision. computer vision and pattern recognition, pp. 2818â2826, 2016.
Shuang Wu, Guoqi Li, Feng Chen, and Luping Shi. Training and inference with integers in deep neural networks. In International Conference on Learning Representations, 2018. URL https: //openreview.net/forum?id=HJGXzmspb.
Yuhui Xu, Yongzhuang Wang, Aojun Zhou, Weiyao Lin, and Hongkai Xiong. Deep neural network compression with single and multiple level quantization. CoRR, abs/1803.03289, 2018. URL http://arxiv.org/abs/1803.03289.
Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, and Jack Xin. Binaryre- lax: A relaxation approach for training deep neural networks with quantized weights. CoRR, abs/1801.06313, 2018. URL http://arxiv.org/abs/1801.06313.
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quanti- zation: Towards lossless cnns with low-precision weights. CoRR, abs/1702.03044, 2017. URL http://arxiv.org/abs/1702.03044.
Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, and Ian Reid. Towards effective low-bitwidth convolutional neural networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
11 | {
"id": "1808.05779"
} |
1809.02922 | Transforming Question Answering Datasets Into Natural Language Inference Datasets | Existing datasets for natural language inference (NLI) have propelled
research on language understanding. We propose a new method for automatically
deriving NLI datasets from the growing abundance of large-scale question
answering datasets. Our approach hinges on learning a sentence transformation
model which converts question-answer pairs into their declarative forms.
Despite being primarily trained on a single QA dataset, we show that it can be
successfully applied to a variety of other QA resources. Using this system, we
automatically derive a new freely available dataset of over 500k NLI examples
(QA-NLI), and show that it exhibits a wide range of inference phenomena rarely
seen in previous NLI datasets. | http://arxiv.org/pdf/1809.02922 | Dorottya Demszky, Kelvin Guu, Percy Liang | cs.CL | 11 pages, 6 figures | null | cs.CL | 20180909 | 20180911 | 8 1 0 2
p e S 1 1 ] L C . s c [
2 v 2 2 9 2 0 . 9 0 8 1 : v i X r a
# Transforming Question Answering Datasets Into Natural Language Inference Datasets
# Dorottya Demszkyâ Department of Linguistics Stanford University ddemszky@stanford.edu
# Kelvin Guuâ Department of Statistics Stanford University kguu@stanford.edu
Percy Liang Department of Computer Science Stanford University pliang@cs.stanford.edu
# Abstract
Existing datasets for natural language in- ference (NLI) have propelled research on language understanding. We propose a new method for automatically deriving NLI datasets from the growing abundance of large- scale question answering datasets.
Our approach hinges on learning a sen- tence transformation model which converts question-answer pairs into their declarative forms. Despite being primarily trained on a single QA dataset, we show that it can be suc- cessfully applied to a variety of other QA re- sources. Using this system, we automatically derive a new freely available dataset of over 500k NLI examples (QA-NLI), and show that it exhibits a wide range of inference phenom- ena rarely seen in previous NLI datasets.
P: Taylor is a journalist [...]. She was playing golf with Ron when her phone rang. It was Liz, her motherâs friend. [...]
Q: Who called Taylor? â- QA2D A: Liz D: Liz called Taylor. entailment A: Ron ââ> âon called Tay! A doctor called m Tayi unknown contradiction A: adoctor â>
Figure 1: We learn a mapping from a question-answer pair into a declarative sentence (QA2D), which allows us to convert question answering datasets into natural language inference datasets.
1
# 1 Introduction
Natural language inference (NLI) is a task that in- corporates much of what is necessary to under- stand language, such as the ability to leverage world knowledge or perform lexico-syntactic rea- soning. Given two sentences, a premise and a hy- pothesis, an NLI system must determine whether the hypothesis is implied by the premise.
Numerous datasets have emerged to evaluate NLI systems (Marelli et al., 2014; Pavlick and Callison-Burch, 2016; Lai et al., 2017a). Two of the largest ones, SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2017) are rich in var- ious linguistic phenomena relevant to inference (e.g. quantiï¬cation and negation), but they lack certain other phenomena, such as multi-sentence reasoning, which can be important for various downstream applications.
scale NLI datasets from existing question answer- ing (QA) datasets, which have recently become abundant and capture a wide range of reasoning phenomena.1
Inspired by the connection between QA and NLI noted by Dagan et al. (2006), we take the fol- lowing approach: given a passage of text, a ques- tion about it (Q: Who called Taylor?) and an an- swer (A: Liz), we perform sentence transforma- tions to combine the question and answer into a declarative answer sentence (D: Liz called Tay- lor). We then observe that the passage and the declarative sentence form a (premise, hypothesis) NLI pair. This is illustrated in Figure 1, and elab- orated on in Section 2, where we also discuss how to generate negative (non-entailed) examples. This approach is similar to the way SciTail (Khot et al., 2018) was constructed, except that our method is fully automated.
In this paper, we propose to augment and diver- sify NLI datasets by automatically deriving large-
Deriving NLI from QA has two key advantages.
â Equal contribution.
1Data and code are available here: https://bit.ly/2OMm4vK
Properties # wh questions Domain Multiple choice Answer type Passage type Avg question length Avg word overlap MovieQA 11k Movie plots yes free-form 1-3 sentences mult. par. 10.7 46% NewsQA QAMR 100k CNN no span RACE 20k 100k Wikinews + Wikipedia English exams Wikipedia yes no free-form span paragraph sentence 11 6.7 62% 50% SQuAD 100k no span paragraph 11.5 75% 7.6 73%
Table 1: Properties of the different QA datasets that we evaluated on. Together they cover a wide range of domains and evidence (passage) types, from sentences to multiple paragraphs and they add up to about 500k NLI examples including the multiple choice QA options. The average question length is counted by tokens and the average word overlap is measured by the percentage of tokens from the question that appear in the evidence.
First, large-scale QA datasets are abundant. Sec- ond, existing QA datasets cover a wide range of reasoning strategies, which we can now import into the study of NLI. Both advantages likely stem from the fact that question answering and question formulation are organic tasks that people perform in daily life â making QA data easy to crowd- source (He et al., 2015), and easy to ï¬nd in well- designed pre-existing resources such as reading comprehension exams (Lai et al., 2017b). In Sec- tion 3, we describe the QA datasets we work with. from a question-answer pair is the key step of our ap- proach, a subtask that we call QA2D. We explore three different ways to perform QA2D: (i) a rule-based system (Section 4), (ii) crowdsourcing (Section 5) and (iii) a neural sequence model (Section 6).
|
NLI) also exhibits a wide range of different in- ference phenomena, such as multi-sentence and meta-level reasoning and presupposition-based in- ference. We perform a thorough analysis of the re- sulting phenomena, quantifying this diversity both in terms of the type of reasoning and the contex- tual scope required to perform that reasoning. We also conduct other analyses that suggest that our approach can eliminate some of the annotation ar- tifacts (Gururangan et al., 2018) present in SNLI and MultiNLI.
# 2 Approach
We now formally deï¬ne our framework for con- verting a QA example into an NLI example, in- cluding how to generate negative (non-entailed) NLI examples.
These three approaches build on each other: we demonstrate that a good rule-based system can ac- celerate and improve the quality of crowdsourcing (by providing workers with an initial draft) while not introducing any systematic bias. This enables us to collect a dataset of 100,000 (Q, A, D) triples, which we then use to train a neural QA2D model. Although this model is primarily trained on triples from SQuAD (Rajpurkar et al., 2016), we show that our model generalizes very well to other QA datasets spanning a variety of domains, such as Wikipedia, newswire and movie plots. Our au- tomatically generated declaratives exactly match the human gold answer 45â57% of the time, with BLEU scores ranging between 73â83, depending on the dataset (Section 7).
With our automated QA2D system in place, we apply it to ï¬ve different QA datasets, creating over 500,000 NLI examples, which we make freely available. Given the diverse nature of the QA datasets we use, the resulting NLI dataset (QA-
A QA example contains a passage of text P , a question Q regarding the text and an answer span A, as illustrated in Figure 1. We perform sentence transformations (QA2D) to combine Q and A into a declarative answer sentence D. We then simply recognize that if A is a correct answer, then (P, D) is an entailed NLI pair.
Alternatively, if A is an incorrect answer or Q cannot be answered using the information in P , then D is not implied by P , yielding a negative NLI pair. Incorrect answers are available in QA datasets featuring multiple choice answers, such as MovieQA (Tapaswi et al., 2016), RACE (Lai et al., 2017b) and MCTest (Richardson et al., 2013). Unanswerable questions are available in SQuADRUn (Rajpurkar et al., 2018) and we ex- pect the number of such datasets to grow with the advancement of QA research.
Inference labels. In existing NLI datasets, ex- amples are labeled with one of three relations: en- tailment, neutral/unknown or contradiction. When
Q: Where does Jim go to buy groceries? A: Trader Joeâs
remove |. Where dees Jim goes to buy groceries? do-support . ae? reverse I. Where vim goesyto buy groceries? wh-movement delete question Ill. Jim goes were to buy groceries? words & mark IV. Jim goes Trader Joeâs to buy groceries: plug inA p F insert V. Jim goes to Trader Joeâs to buy groceries. preposition
Figure 2: An illustration of the syntactic transforma- tions needed to perform QA2D. In this example, to per- form step II one needs to know that where is comple- ment of go and not that of buy, and to perform step V, one needs to chose the appropriate preposition to insert.
performing automated QA2D, we can only make a two-way distinction between entailment and non- entailment.2
Weakly supervised QA datasets. In many QA datasets, the passage P is a short paragraph (e.g. SQuAD (Rajpurkar et al., 2016)) or even a sin- gle sentence (e.g. QAMR (Michael et al., 2018)). This yields a short, simple premise in the resulting NLI example. However, some weakly supervised QA datasets such as NewsQA (Trischler et al., 2017), RACE and TriviaQA (Joshi et al., 2017) choose P be an entire document or even corpus of documents. In this case, the resulting NLI pairâs premise could be large, but is still valid. In Ta- ble 1, we describe the âpassage typeâ for each QA dataset we work with.
# 3 Datasets
Table 1 summarizes the properties of the ï¬ve QA datasets we transform into NLI: MovieQA, NewsQA, QAMR and RACE and SQuAD. When choosing the datasets, we sought to maximize the structural and topical diversity of our data.
The domains of these datasets include movie plots, newswire text, Wikipedia and English ex- ams that cover a wide range of genres and topics. The passage types range from a sentence to mul-
2 It is tempting to think that incorrect answers yield con- tradiction labels, while unanswerable questions yield neutral labels. Unfortunately, this is false. Figure 1 illustrates an example where an incorrect multiple choice answer does not yield a contradiction. As for unanswerable questions, one ex- ample would be: P : âThe score was negative.â, Q: âWhat was the exact score?â (unanswerable), A: â10â (yields con- tradiction, not neutral).
tiple paragraphs3 and the answer type may be ei- ther a substring (span) within the passage or free- response text. The questions vary greatly in terms of their type and difï¬culty: questions in QAMR hinge on selecting the right arguments within a single sentence, while questions in RACE, writ- ten for middle- and high-schoolers, require holis- tic reasoning about the text (e.g. What is the main message of the passage?).
# 4 QA2D: Rule-based
At ï¬rst glance, QA2D appears to be a highly structured task guided by clear rules â indeed, the reverse problem of converting declarative sen- tences into questions is often taught in gram- mar textbooks. However, this belies the many nuanced semantic decisions that are effortlessly made by native English speakers, yet challenging to codify. For example, non-native speakers ï¬nd it notoriously hard to prepend the right preposi- tions/articles before phrases, as there are no simple rules.
To demonstrate these challenges, we develop a strong rule-based system (see Section 7 for results) to test how far we can go towards solving QA2D. The main steps of this system are illustrated in Fig- ure 2.
The success of the rule-based model hinges on part-of-speech tagging and parsing accuracy, given that we need to correctly identify the wh- word, the root word, any auxiliary or copula, as well as prepositions and particles that are depen- dents of the wh-word or the root. We used the state-of-the-art Stanford Graph-Based Neural De- pendency Parser (Dozat et al., 2017) to POS tag and parse Q and A. We found that about 10% of the mistakes made by our rule-based system are due to tagging/parsing errors.4
We encountered several semantic idiosyncrasies that proved difï¬cult to account for by rules. For example, if the answer span is a bare named entity (i.e. without an article) referring to an organiza- tion/institution, generally it is okay to leave it bare
3For MovieQA, we only used the plot summaries as the evidence, but one could easily use the full movie scripts or audiovisual data as well.
4The main errors made by the tagger/parser include tag- ging a verb as a noun, which is prevalent because in the pres- ence of do-support, the inï¬ections are removed from the main verb (e.g. When did the war end?). Another class of parsing errors is identiï¬cation of the parent of a dangling preposi- tion/particle (e.g. Which friend did Olga send a letter to last week?).
Split train dev test Source # Ex. SQuAD 68986 Other SQuAD Other SQuAD Other 4Ã1000 7350 1000 4Ã500 7377 1000 4Ã1000 # Ann. 1 1 1 3 1 1 3 3 Setup E (74%) S (26%) S S S S S S S
Table 2: The composition of our collected data. # Ex. and # Ann. refer to the number of unique examples and to the number of annotations (gold answers) per example, respectively. The last column lists the setup that was used for collecting the examples: post-editing (E) or from scratch (S). âOtherâ denotes the four other QA datasets besides SQuAD. The gray indicates the examples that we used for our evaluations in Section 7.
(e.g. Sam works at WHO.), but sometimes a deï¬- nite article needs to be inserted (e.g. Sam works at the UN).
# 5 QA2D: crowdsourced
Even though our rule-based model is reasonably strong, it is far from perfect. We decided to build a supervised neural model, which required the col- lection of human-authored gold declarative sen- tences. We describe our data collection method (Section 5.1) and the distribution of our collected data across the ï¬ve QA datasets (Section 5.2).
# 5.1 Data Collection
We crowdsourced the QA2D task on Amazon Me- chanical Turk using two different setups. In Setup S, Turkers were presented with Q and A, then asked to write a full sentence answer D from In Setup E, instead of writing D from Scratch. scratch, Turkers were asked to Edit the output of our rule-based system (see Section 4) until it is a well-formed sentence. Turkers were not provided with the supporting passage P in either setup be- cause we wanted to prevent them from including information in D that is not supported by Q.
Writing from scratch vs post-editing. There is a trade-off between the two setups: while Setup S minimizes bias towards the rule-based output, writing from scratch takes more time and leaves room for more typos than post-editing. Indeed, when comparing 100 random examples generated from each setup, we found that 91% of Setup S outputs were valid (grammatical and complete
while 97% of Setup E outputs were valid. How- ever, since Setup E could potentially bias our data, we exclusively used Setup S for collecting all eval- uation data.
# 5.2 Distribution of Source QA Datasets
We decided to select one QA dataset among the ï¬ve QA datasets to collect the majority of our data, so that we could test the ability of our neural model (Section 6) to generalize to other datasets. We chose SQuAD to be the main source of QA pairs because of its large size, high quality and syntactic diversity. We limited ourselves to its training set for the data collection, ï¬ltering out non-wh-questions, which left us with a total of 85,713 QA pairs. 5 In addition, we randomly sam- pled a smaller set of QA pairs from the four other datasets, most of which were used for evaluation. Table 2 summarizes the composition of our newly collected dataset of gold declarative answer sentences. For each of the dev and test sets, we collected three annotations for 1000 examples to account for the fact that there can be multiple pos- sible correct QA2D transformations. The distri- bution of datasets within the three data splits are: train (95% SQuAD, 5% other four), dev (81% SQuAD, 19% other four) and test (20% for each ï¬ve datasets).
# 6 QA2D: Neural Sequence Model
In Section 4, we discussed some of the issues (mostly involving semantics) that our rule-based system cannot handle. To improve over this base- line, we develop a neural sequence generation model to perform QA2D.
From the crowdsourcing described in the previ- ous section, we have a dataset of (Q, A, D) tuple. We use this to learn a model of p(D | Q, A), im- plemented with an encoder-decoder architecture. The inputs Q and A are each encoded using a bidirectional three-layer LSTM (Hochreiter and Schmidhuber, 1997) (the same encoder weights are used for both inputs). D is then generated using a three-layer LSTM decoder equipped with one attention head (Bahdanau et al., 2015) for each input, and a copy mechanism based on Gu et al. (2016).6 Word embeddings are initialized with
5The random split between train (80%), dev (10%) and test (10%) sets was made based on Wikipedia article titles corresponding to the QA pairs.
6The copy mechanism is similar to Gu et al. (2016), ex- cept that out-of-vocabulary copyable words are represented
80 60 40 Average BLEU score 20 ââ rule-based ââ neural, top 1 neural, top 5 ââ human agreement when (16%) (8%) (5%) (5%) (1%) (a) which = who how (17%) what (48%) where why 10 15 20 25 30 35 40 Number of tokens in the question + answer (b)
Figure 3: Figure (a) shows the results based on question type (brackets indicate their proportion in the data) and Figure (b) shows the results based on the length of Q + A. The error bars denote the 99% conï¬dence interval for the true expected performance of each model (randomness comes from noise in Turker annotations and the random sampling of the evaluation set). Note: human and model scores should not be directly compared. Human agreement is maximum BLEU score when comparing each of 3 human annotations against the two others, while the modelsâ outputs are compared against the 3 human annotations. We include human results to quantify variation across human annotators.
GloVe (Pennington et al., 2014). The model is then trained using a standard cross entropy loss minimized with Adam (Kingma and Ba, 2014).
# 7 QA2D: Results
In this section, we assess the performance of our rule-based (RULE-BASED) and neural (NEURAL) QA2D systems, using both automated metrics and human evaluation. The evaluations are conducted on the test set formed from all ï¬ve QA datasets (gray cells in Table 2) where each example in- cludes three human annotations.
# 7.1 Quantitative Results
For our quantitative evaluation, we employed two metrics: BLEU and string match (ignoring case and punctuation). We evaluated NEURAL both on the top 1 output and on the top 5 outputs (max over the beam).
Rule-based vs neural. Overall, the perfor- mance of NEURAL is consistently stronger than RULE-BASED. From Table 4, we see that across datasets, NEURAL leads RULE-BASED by an av- erage of 2.6 BLEU points, and by 6.2% on exact match accuracy. NEURAL is also capable of pro- ducing a top-5 beam of outputs, and when we eval- uate only the best of these 5 outputs, we observe an almost 30% improvement in scores.
We ï¬nd that predictions of NEURAL and RULE-BASED exactly match 40% of the time.
using absolute positional embeddings, rather than the en- coderâs hidden state at the position of the copyable word.
Table 3 includes examples when the two mod- elsâ outputs do not match. As we hypothe- sized, the NEURAL learned semantic patterns, such as preposition/article choice and the removal of redundant words from the answer span, that RULE-BASED was not able to handle.
Results by dataset. Table 4 breaks down the modelsâ performance by dataset. The ï¬rst thing to note is the domain-generality of the models â al- though BLEU scores are highest on SQuAD, they are only 1-3 points lower on other datasets. As for the exact match, we actually ï¬nd that both models perform better on QAMR than on SQuAD. This discrepancy is likely due to the length penalty in BLEU, which is affected by shorter answer lengths in QAMR.
RULE-BASED performs worst on RACE, MovieQA and SQuAD (6-9 points lower on exact match) in comparison to NEURAL due to the fact that the answers in these datasets often require semantically motivated modiï¬cations that RULE-BASED cannot handle.
Results by question length. Figure 3 (b) shows the correlation between the combined length of the question-plus-answer and each modelâs per- formance. RULE-BASEDâs performance is more robust to length increase beyond 30 tokens than NEURAL, even if we consider NEURALâs top-5 performance. In contrast, NEURAL does better on inputs shorter than 20 tokens â which con- stitute the majority of the examples â than on longer ones, which is possibly due to the general
Question Answer RULE-BASED NEURAL When was Madonna born? August 16, 1958 Madonna was born in August 16, 1958. Madonna was born on August 16, 1958. Who asks who to hit them outside of the bar? Tyler asks the Narrator to hit him Tyler asks the Narrator to hit him asks who to hit them outside of the bar. Tyler asks the narrator to hit them outside of the bar. What surprising fact do the guys learn about Jones? That he has never killed anyone The guys learn that he has never killed anyone about Jones. The guys learned about Jones that he has never killed anyone. Where is someone overlooked? American society Someone is overlooked in American society. Someone overlooked is in American society. When did Johnson crash into the wall? halfway through the race Johnson crashed into the wall halfway through the race. Johnson shot into the wall halfway through the race. What is an example of a corporate sponsor of a basketball team? Marathon Oil An example of a corporate sponsor of a basketball team is Marathon Oil. Marathon Oil is an example of a corporate sponsor of a basketball team. Where was the baby found? onboard a Carnival cruise ship, The baby was found in onboard a Carnival cruise ship. The baby was found at onboard a Carnival cruise ship.
Table 3: A randomly picked sample of those outputs where NEURAL # RULE-BASED. Green indicates good output(s) and red indicates bad ones.
Dataset Model Top 1 Top 5 BLEU Match BLEU Match SQuAD MovieQA QAMR RM NM RM NM RM NM RM NM RM NM 86 81 53 38 91 62 RACE NewsQA 83 44 89 55 80 30 85 57 90 67 82 45 82 54 83 49 88 58 82 49 84 44 89 52 N/A N/A N/A N/A N/A
5 24 2 2° 3, =| mm human , grammar naturalness completeness
Table 4: BLEU and exact match scores (out of a 100) when compar- ing the outputs of RULE-BASED (RM) and NEURAL (NM) against the human gold.
Figure 4: Human evaluation of the the human, RULE-BASED and NEURAL outputs.
tendency of NEURAL to output shorter sequences.
# 7.2 Human Evaluation
Results by question type. In Figure 3 (a) we present the results broken down by question cat- egory, which we determined based on the type of wh word in the question. We can see that our models perform best on who questions, which is not surprising because there is usually no wh- movement involved when transforming such ques- the wh word simply tions into declaratives (i.e. needs to be replaced). In contrast, the overall performance of the models is the worst on which questions, which is most likely due to the fact that the majority of such questions require a decision about which words from the wh phrase to include in D and in what position.
We crowdsourced the evaluation of RULE-BASED and NEURAL on a sample of 100 QA examples for each of the ï¬ve datasets. For reference, we also treated human outputs as a third system (HUMAN). In each Turk task, a Turker is presented with three outputs (one from each system, in a randomly shufï¬ed order) and is asked to rate them (given a question and an answer span) with respect to three criteria: grammaticality, naturalness, and com- pleteness.7 The results are shown in Figure 4.
For grammar, the modelsâ outputs are lower than HUMAN (4.6), at 3.8 for RULE-BASED and
# 7Description of ratings:
The only question type for which we see a signiï¬cant difference between performance of RULE-BASED and that of the neural one is how questions (note that how many, for example, is considered to be a how question). This is because RULE-BASED â in contrast to NEURAL â does not copy words from the wh phrase into D, which mainly affects how questions negatively, given that how many tends to be followed by at least one noun (e.g. how many people).
grammaticality: 1 â Extremely poor, 2 â Poor, 3 â OK but has some issue(s), 4 â Good but slightly unnatural, 5 â Good naturalness: 1 â Extremely unnatural, 2 â Unnatural, 3 â OK in some contexts, 4 â Natural, but could be more so, 5 â Very natural completeness: 1 â Lacks many important words from the question or the answer, 2 â Lacks a few important words from the question or the answer, 3 â The sentence is missing one or two words that would add more information, but they arenât necessary, 4 â The sentence is missing one or two words but it still conveys the same meaning without them, 5 â The sen- tence is maximally complete in terms of words (regardless of grammaticality)
Reasoning Description Argument Sentence Affects only a single argument of a predicate in T . Affects one or more predicates within a single sentence in T . Multi- sentence Affects multiple sentences in T . Example T: Caitlin de Wit: I ran a little bit, and I rode horses. H: Caitlin is de Witâs ï¬rst name. T: Michael Jackson will perform 10 concerts in London in July in what he described Thursday as a âï¬nal curtain call.â [...] H: Michael Jackson has announced 10 concerts. T: [...] invented by the British scientist William Sturgeon [...] Following Sturgeonâs work, [...] motor [...] built by [...] Thomas Davenport [...] The motors ran at up to 600 revolutions per minute [...] H: Sturgeon and Davenport s motors ran at 600 revolutions per minute . Source QAMR NewsQA SQuAD Quantities Naive physics Counting or performing other numerical operations; understanding relations between quantities. Spatiotemporal/physical reasoning that requires a mental simulation of the event beyond understanding the meaning of the words. T: It provided more than $20 billion in direct ï¬nancial. H: It only yielded $100,000 in direct ï¬nancial. T: David J. Lavau, 67, of Lake Hughes, California, was found in a ravine a week after losing control of his car on a rural road and plunging 500 feet down an embankment into heavy brush [...] H: Lavauâs car came to a rest 500 feet down an embankment. MultiNLI NewsQA Attributes Psychology Meta Other Reasoning about attributes and affordances of entities. Making inferences involving peopleâs mental states and attitudes and the way they express them. Reasoning about the genre, text structure and author. Incorporating any other world knowledge. T: The situation in Switzerland [...]. The Swiss German dialects are the default everyday language in virtually every situation [...] H: Swiss German is the dialect spoken in Switzerland. T: Dear Jorge, [...] My family are now in Sacramento, California. [...] Before I knew it, there was hot water shooting up about 60 feet into the air. [...] Iâd love to learn more about this geyser and other geysers [...] Your friend,Bennetto H: Bennettoâs letter expressed excitement. T: A man and his young son struggle to survive after an unspeciï¬ed cataclysm has killed most plant and animal life. [...] H: We meet a man and his young son at the beginning of the ï¬lm. T: When asked a life lesson he had to learn the hard way, the billionaire said staying up too late is a habit he is still trying to break. âDonât stay up too late [...] H: Bill Gates gave the advice to avoid staying up too late. SQuAD RACE MovieQA RACE
Table 5: Examples illustrating the different types of reasoning required to determine whether the text T entails the hypothesis H.
3.9 for NEURAL. A score of 4 represents the rat- ing Good but slightly unnatural. The same per- tains to the naturalness scores as well. In terms of completeness, the rule-based and NEURAL both do relatively well (.2-.3 points lower than the hu- man score of 4.8), which is well above the thresh- old for retaining the correct meaning, given that a rating of 4 requires there to be no semantic conse- quences of incompleteness.
# 8 Analysis of NLI Datasets
In this section, we analyze the various phenomena of the NLI datasets we generated (Section 8.1), validate our assumptions about how an answerâs correct/incorrect status determines the resulting inference label (Section 8.2) and compare our datasets to others in terms of annotation artifacts (Section 8.3).
correct inference classiï¬cation. The categories and their descriptions, paired with examples are illustrated in Table 5. Looking at the counts in Table 6, we can notice that MovieQA, NewsQA and SQuAD are similar. The majority of examples in these datasets require sentence-level reasoning, and the rest are split between multi-sentence and argument-level reasoning at a roughly 5:3 ratio. While the counts for the types of reasoning are also very close among these datasets, the slight differences can be explained based on genre. For example, plot summaries often focus on human motivations/emotions (psych). MultiNLI is clos- est to these datasets in terms of phenomena counts as well, except that it involves hardly any multi- sentence reasoning and less world knowledge than the other three.
# Inference Phenomena
We manually annotated 100 examples for the scope and type of reasoning required to make a
QAMR is unique in that, by construction, two thirds of the examples only involve argument- level reasoning and none of them involve multi- sentence reasoning. Given that inference pairs in QAMR often involve turning a noun phrase into
e argument sentence multi-sentence p o c s g quantities n i n o s a e r naive physics psych meta attributes other world kn. f o e p y t MovieQA NewsQA QAMR RACE SQuAD MultiNLI 26 72 2 4 3 5 1 29 5 19 56 25 1 4 9 2 41 34 13 53 34 1 7 0 5 41 15 62 38 0 1 0 1 0 12 8 1 14 85 1 5 48 12 58 68 15 58 27 2 4 1 0 40 14
0.6 | answer option: lm incorrect mmm correct 8°44 3B Given the text, the hypothesis is... = 024 FALSE TRUE L iL. . 0.0 Bs ul derintely very, somewhat equaly somewhat very, defintely
Table 6: Counts for different scope and reasoning types in the ï¬ve con- verted QA datasets in comparison to MultiNLI. We manually annotated 100 examples per dataset. While the scope categories are mutually ex- clusive, the reasoning types are not.
Figure 5: The distribution of inference ratings for NLI examples based on in- correct multiple choice option or correct multiple choice option.
a predicate, this dataset provides us with a lot of inference pairs that stem from presuppositions â i.e. entailments that still hold even if the premise is negated (e.g. Taylor Brown was playing golf outside. and Taylor Brown was not playing golf outside. both presuppose Taylorâs last name is Brown.).
RACE, in sharp contrast to QAMR, mostly in- cludes entailments that require multi-sentence rea- soning. inference pairs in RACE make extensive use of world knowledge, meta- level reasoning (e.g., about the genre, authorâs in- tentions, etc.), and reasoning about human psy- chology (e.g., a characterâs reaction to an event).
==> entailment 0.08 4 A, --- non-entailment Z a. = > 0.06 4 jf = ay 3 4 g 0.04 oan £ AN & 4 N 0.027 t _ = 0.00 T T T âââââââ 0 5 10 15 20 25 30 35 Number of tokens
Figure 6: The distribution of the length of NLI exam- ples, generated from MovieQA. Unlike in SNLI and MultiNLI, we found little correlation between the in- ference label and sentence lengths.
# 8.2 Inference Labels
In Section 2, we made the assumption that infer- ence pairs generated from incorrect answers will be non-entailments. To verify this hypothesis, we crowdsource the inference labeling of 2000 in- ference pairs based on MovieQA, half of which are generated from correct answer options and half from incorrect ones. In the task, each Turker was provided with ï¬ve randomly selected premise-hypothesis pairs (T, H) and was asked to rate how likely H is true given T .
Figure 5 shows the distribution of inference rat- ings with the examples separated based on incor- rect/correct answer options as their source. The human ratings show a strong correlation between answer type and inference score â as for correct ones, more than 90% of the examples are more likely to be true than false and as for incorrect
8For the 1000 examples based on correct answers, we used the test set we already collected. We collected the 1000 examples for incorrect answers using the same QA pairs as for the correct answers. The incorrect answer, in the case of each example, was randomly chosen among the incorrect options.
ones, about 80% are not likely to be true. These ï¬ndings also support a non-binary notion of en- tailment, as we can see that about half of the ex- amples in both categories do not ï¬t into the strict entailment/contradiction dichotomy.
# 8.3 Annotation Artifacts
We replicate some of the statistical analyses that Gururangan et al. (2018) performed on SNLI and MultiNLI to see whether our datasets contain ar- tifacts similar to SNLI and MultiNLI. We per- form the analyses on the same dataset we used in Section 8.2, based on MovieQA. We ranked the words with the highest PMI(word, class) values and found that in our case the non-entailments in our dataset no longer have negation words and the entailments no longer have positive or non-speciï¬c words such as those found in SNLI and MultiNLI (Table 7). We also looked at the distribution of hypotheses lengths, separated by label (Figure 6) and found little/no correlation between the length of an example and its label.
MQA SNLI MNLI entailment ï¬nd take years son shoot outdoors least instrument outside animal some yes something sometimes various neutral tall ï¬rst competition sad favorite also because popular many most contradiction all police york car can nobody sleeping no tv cat never no nothing any none
Table 7: Top 5 words ordered based on their PMI (word, class) with the percentage of examples they oc- cur in in a given class, based on the entailments gener- ated from MovieQA (MQA). The statistics for SNLI and MultiNLI (MNLI) are copied from Gururangan et al. (2018).
# 9 Related Work & Discussion
NLI datasets. NLI has long served as a testbed for natural language understanding (Dagan et al., 2006). More recently, the emergence of larger- scale datasets (Bowman et al., 2015; Williams et al., 2017) have also enabled researchers to lever- age NLI resources as a rich source of training data to achieve transfer learning gains on other tasks (Conneau et al., 2017). Our datasets are comple- mentary to previous resources, inheriting a rich set of phenomena found in many QA datasets (e.g. high-level reasoning about texts).
QA to NLI. Although White et al. (2017) and Poliak et al. (2018) have explored recasting datasets created for various semantic classiï¬cation tasks (e.g. semantic role labeling and named en- tity recognition) into NLI datasets, we are the ï¬rst to perform such an automated conversion on QA datasets. However, we are not the ï¬rst to observe the connection between QA and NLI. In fact, the seminal work of Dagan et al. (2006) employed this connection to construct a portion of their dataset and so have the creators of SciTail (Khot et al., 2018), but performed the QA2D step with human experts, rather than an automated system.
Text transformation tasks. By âreformattingâ QA to NLI, we obtain a more generic representa- tion of inferences: declarative sentences are trans- formed into other declarative sentences. This is the same type signature as for sentence simpliï¬ca-
tion (Chandrasekar et al., 1996), paraphrase (Lin and Pantel, 2001; Bannard and Callison-Burch, 2005) and summarization (Jones, 1993), high- lighting the close connection between these tasks. Importantly, declarative sentences are closed un- der this set of operations, allowing them to be chained together to perform more complex infer- ences (Kolesnyk et al., 2016).
Another related task is question generation (Rus and Arthur, 2009), which could be considered the reverse of QA2D, although the focus is on select- ing interesting questions, rather than robust sen- tence transformation.
Neural sequence generation. Our QA2D sys- tem could be implemented by any general-purpose sequence generation model. With rapid progress on better generation architectures (Gehring et al., 2017; Vaswani et al., 2017), we believe it should be possible to further increase the data efï¬ciency and performance, especially by leveraging mod- els that incorporate syntactic structure (Chen et al., 2017; Eriguchi et al., 2017) or a more transducer- like structure (Graves, 2012; Yu et al., 2016).
Future systems. Finally, we hope that by in- creasing the scale of NLI training resources, we can enable the development of a large variety of new systems such as generative NLI models which can take a premise and generate relevant hypothe- ses (Kolesnyk et al., 2016), sentence decomposi- tion models which can break a sentence into mul- tiple entailed parts, and sentence synthesis models which can stitch multiple pieces back into an en- tailed whole.
# Acknowledgments
We would like to thank Chris Potts and members of the Stanford NLP group for their valuable feed- back.
# References
D. Bahdanau, K. Cho, and Y. Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR).
C. Bannard and C. Callison-Burch. 2005. Paraphrasing In Association for with bilingual parallel corpora. Computational Linguistics (ACL), pages 597â604.
S. Bowman, G. Angeli, C. Potts, and C. D. Manning. 2015. A large annotated corpus for learning natural
language inference. In Empirical Methods in Natu- ral Language Processing (EMNLP).
R. Chandrasekar, C. Doran, and B. Srinivas. 1996. Mo- tivations and methods for text simpliï¬cation. In Pro- ceedings of the 16th conference on Computational linguistics-Volume 2, pages 1041â1044.
H. Chen, S. Huang, D. Chiang, and J. Chen. 2017. Improved neural machine translation with a syntax-aware encoder and decoder. arXiv preprint arXiv:1707.05436.
A. Conneau, D. Kiela, H. Schwenk, L. Barrault, and A. Bordes. 2017. Supervised learning of universal sentence representations from natural language in- ference data. arXiv preprint arXiv:1705.02364.
I. Dagan, O. Glickman, and B. Magnini. 2006. The PASCAL recognising textual entailment challenge. Machine learning challenges. evaluating predictive uncertainty, visual object classiï¬cation, and recog- nising tectual entailment.
T. Dozat, P. Qi, and C. D. Manning. 2017. Stanfordâs graph-based neural dependency parser at the conll In Computational Natural Lan- 2017 shared task. guage Learning (CoNLL), pages 20â30.
A. Eriguchi, Y. Tsuruoka, and K. Cho. 2017. Learn- ing to parse and translate improves neural machine translation. arXiv preprint arXiv:1702.03525.
J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin. 2017. Convolutional sequence to se- quence learning. arXiv preprint arXiv:1705.03122.
A. Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711.
J. Gu, Z. Lu, H. Li, and V. O. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learn- In Association for Computational Linguistics ing. (ACL).
S. Gururangan, S. Swayamdipta, O. Levy, R. Schwartz, S. R. Bowman, and N. A. Smith. 2018. Annotation artifacts in natural language inference data. arXiv preprint arXiv:1803.02324.
L. He, M. Lewis, and L. Zettlemoyer. 2015. Question- answer driven semantic role labeling: Using natu- In Em- ral language to annotate natural language. pirical Methods in Natural Language Processing (EMNLP).
S. Hochreiter and J. Schmidhuber. 1997. Long short- term memory. Neural Computation, 9(8):1735â 1780.
K. S. Jones. 1993. What might be in a summary? In- formation retrieval, 93:9â26.
M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised chal- lenge dataset for reading comprehension. In Associ- ation for Computational Linguistics (ACL).
Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Scitail: A textual entailment dataset from science question answering. In Proceedings of AAAI.
D. Kingma and J. Ba. 2014. Adam: A method arXiv preprint for arXiv:1412.6980. stochastic optimization.
V. Kolesnyk, T. Rockt¨aschel, and S. Riedel. 2016. Gen- arXiv erating natural language inference chains. preprint arXiv:1606.01404.
A. Lai, Y. Bisk, and J. Hockenmaier. 2017a. Natural language inference from multiple premises. arXiv preprint arXiv:1710.02925.
G. Lai, Q. Xie, H. Liu, Y. Yang, and E. Hovy. Race: Large-scale reading comprehen- arXiv preprint 2017b. sion dataset from examinations. arXiv:1704.04683.
D. Lin and P. Pantel. 2001. Discovery of inference rules for question-answering. Natural Language Engineering, 7:343â360.
M. Marelli, S. Menini, M. Baroni, L. Bentivogli, R. bernardi, and R. Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional se- mantic models. In Language Resources and Evalu- ation Conference (LREC).
I. Dagan, and L. Zettlemoyer. 2018. Crowdsourcing questionâ In North Amer- answer meaning representations. ican Association for Computational Linguistics (NAACL).
E. Pavlick and C. Callison-Burch. 2016. Mostâ ba- biesâ areâ littleâ and mostâ problemsâ areâ hugeâ: Compositional entailment in adjective-nouns. In As- sociation for Computational Linguistics (ACL), vol- ume 1, pages 2164â2173.
J. Pennington, R. Socher, and C. D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP).
Adam Poliak, Aparajita Haldar, Rachel Rudinger, J Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018. Towards a language inference framework to uniï¬ed natural evaluate sentence representations. arXiv preprint arXiv:1804.08207.
P. Rajpurkar, R. Jia, and P. Liang. 2018. Know what you donât know: Unanswerable questions for SQuAD. In Association for Computational Linguis- tics (ACL).
P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016. SQuAD: 100,000+ questions for machine compre- In Empirical Methods in Natural hension of text. Language Processing (EMNLP).
M. Richardson, C. J. Burges, and E. Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Empirical Meth- ods in Natural Language Processing (EMNLP), pages 193â203.
V. Rus and C. G. Arthur. 2009. The question generation shared task and evaluation challenge. In The Univer- sity of Memphis. National Science Foundation.
M. Tapaswi, Y. Zhu, R. Stiefelhagen, A. Torralba, R. Urtasun, and S. Fidler. 2016. Movieqa: Un- derstanding stories in movies through question- answering. In Computer Vision and Pattern Recog- nition (CVPR), pages 4631â4640.
A. Trischler, T. Wang, X. Yuan, J. Harris, A. Sordoni, P. Bachman, and K. Suleman. 2017. NewsQA: A In Workshop on machine comprehension dataset. Representation Learning for NLP.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.
Aaron Steven White, Pushpendre Rastogi, Kevin Duh, and Benjamin Van Durme. 2017. Inference is ev- erything: Recasting semantic resources into a uni- In Proceedings of the ï¬ed evaluation framework. Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), vol- ume 1, pages 996â1005.
A. Williams, N. Nangia, and S. R. Bowman. 2017. A broad-coverage challenge corpus for sentence arXiv preprint understanding through inference. arXiv:1704.05426.
L. Yu, J. Buys, and P. Blunsom. 2016. Online seg- ment to segment neural transduction. arXiv preprint arXiv:1609.08194. | {
"id": "1710.02925"
} |
1809.02627 | Unity: A General Platform for Intelligent Agents | Recent advances in artificial intelligence have been driven by the presence
of increasingly realistic and complex simulated environments. However, many of
the existing environments provide either unrealistic visuals, inaccurate
physics, low task complexity, restricted agent perspective, or a limited
capacity for interaction among artificial agents. Furthermore, many platforms
lack the ability to flexibly configure the simulation, making the simulated
environment a black-box from the perspective of the learning system. In this
work, we propose a novel taxonomy of existing simulation platforms and discuss
the highest level class of general platforms which enable the development of
learning environments that are rich in visual, physical, task, and social
complexity. We argue that modern game engines are uniquely suited to act as
general platforms and as a case study examine the Unity engine and open source
Unity ML-Agents Toolkit. We then survey the research enabled by Unity and the
Unity ML-Agents Toolkit, discussing the kinds of research a flexible,
interactive and easily configurable general platform can facilitate. | http://arxiv.org/pdf/1809.02627 | Arthur Juliani, Vincent-Pierre Berges, Ervin Teng, Andrew Cohen, Jonathan Harper, Chris Elion, Chris Goy, Yuan Gao, Hunter Henry, Marwan Mattar, Danny Lange | cs.LG, cs.AI, cs.NE, stat.ML | null | null | cs.LG | 20180907 | 20200506 | 0 2 0 2
y a M 6
] G L . s c [
2 v 7 2 6 2 0 . 9 0 8 1 : v i X r a
Unity: A General Platform for Intelligent Agents
# Unity: A General Platform for Intelligent Agents
arthurj@unity3d.com vincentpierre@unity3d.com ervin@unity3d.com andrew.cohen@unity3d.com jharper@unity3d.com chris.elion@unity3d.com christopherg@unity3d.com vincentg@unity3d.com brandonh@unity3d.com marwan@unity3d.com dlange@unity3d.com
# Abstract
Recent advances in artiï¬cial intelligence have been driven by the presence of increasingly realistic and complex simulated environments. However, many of the existing environments provide either unrealistic visuals, inaccurate physics, low task complexity, restricted agent perspective, or a limited capacity for interaction among artiï¬cial agents. Furthermore, many platforms lack the ability to ï¬exibly conï¬gure the simulation, making the simulated environment a black-box from the perspective of the learning system. In this work, we propose a novel taxonomy of existing simulation platforms and discuss the highest level class of general platforms which enable the development of learning environments that are rich in visual, physical, task, and social complexity. We argue that modern game engines are uniquely suited to act as general platforms and as a case study examine the Unity engine and open source Unity ML-Agents Toolkit1. We then survey the research enabled by Unity and the Unity ML-Agents Toolkit, discussing the kinds of research a ï¬exible, interactive and easily conï¬gurable general platform can facilitate.
# 1. Introduction
In recent years, there have been signiï¬cant advances in the state of deep reinforcement learning research and algorithm design (Mnih et al., 2015; Schulman et al., 2017; Silver et al., 2017; Espeholt et al., 2018). Essential to this rapid development has been the presence of challenging and scalable simulation platforms such as the Arcade Learning Environment (Bellemare et al., 2013), VizDoom (Kempka et al., 2016), MuJoCo (Todorov et al., 2012), and many others (Beattie et al., 2016; Johnson et al., 2016; Coumans and Bai, 2016). The Arcade Learning Environment (ALE), for example, was essential for providing a means of benchmarking the control-from-pixels approach of the Deep Q-Network (Mnih et al., 2013). Similarly, other environments and platforms have helped motivate research into more eï¬cient and powerful algorithms (Oh et al., 2016; Andrychowicz et al., 2017). The simulation
1. https://github.com/Unity-Technologies/ml-agents
1
# Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
environment is the fundamental way in which the reinforcement learning community tests its ideas and algorithms. Thus, the quality of environments is of critical importance. Surprisingly, the general discussion around this integral component is underdeveloped compared to its algorithmic counterpart.
Many of the current research platforms are based on popular video games or game engines such as Atari 2600, Quake III, Doom, and Minecraft. This is part of a much longer-term trend in which games have served as a platform for artiï¬cial intelligence (AI) research. This trend can be traced back to the earliest work in AI around playing games such as chess and checkers (Shannon, 1950; Samuel, 1959), or later work applying reinforcement learning to the game of Backgammon (Tesauro, 1995). The necessary search, decision-making and planning which make video games engaging challenges for humans are also the same challenges which interest AI researchers (Laird and VanLent, 2001). This insight has motivated a wide range of research into the intersection of video games and AI from the diverse perspectives of game playing, player modeling, and content generation (Yannakakis and Togelius, 2018).
As deep reinforcement learning algorithms becomes more sophisticated, existing envi- ronments and the benchmarks based on them become less informative. For example, most environments in the ALE have been solved to above human-level performance, making the continued use of the benchmark less valuable (Machado et al., 2017; Puigdomènech Badia et al., 2020). A complementary point created by this state of algorithmic progress is that there exists a virtuous circle in which the development of novel environments drives the development of novel algorithms. We can expect the research community to continue to provide high-quality algorithms. However, it is unclear from where researchers should expect high-quality environments, since the creation of such environments is often time-intensive and requires specialized domain knowledge. This continual need for novel environments necessitates an easy-to-use, ï¬exible and universal platform for unrestricted environment creation.
Simulated environments are constrained by the limitations of the simulators themselves. Simulators are not equal in their ability to provide meaningful challenges to learning systems. Furthermore, it is sometimes not obvious which properties of an environment make it a worthwhile benchmark for research. The complexity of the physical world is a primary candidate for challenging the current as well as to-be-developed algorithms. It is in the physical world where mammalian and, more speciï¬cally, human intelligence developed, and it is this kind of intelligence which researchers are often interested in replicating (Lake et al., 2017).
Modern game engines are powerful tools for the simulation of visually realistic worlds with sophisticated physics and complex interactions between agents with varying capacities. Additionally, engines designed for game development provide user interfaces which are speciï¬cally engineered to be intuitive, easy to use, interactive, and available across many platforms. Thus, in this paper we argue that game engines are perfectly poised to yield the necessary challenges for the foreseeable future of AI research. For the community, this would provide the ability to test algorithms in domains with as much depth and diversity as todayâs video games.
The contributions of this work are:
2
Unity: A General Platform for Intelligent Agents
⢠A novel taxonomy of existing platforms used for research which classiï¬es platforms in terms of their potential for complexity along the dimensions of sensory, physical, task-logic and social.
⢠A detailed analysis of the Unity game engine and the Unity ML-Agents Toolkit as an instance of a general platform, the highest level of the proposed taxonomy.
⢠A survey of current research conducted using Unity and critical areas in which progress is hindered by the current platforms but can be facilitated by a general platform such as Unity.
This paper is structured as follows: We begin with an analysis of the properties of a simulator important for the development of learning algorithms. Then, we propose a taxonomy of simulation platforms which we use to organize popular reinforcement learning (RL) benchmarks and further point out their limitations at fully realizing all desirable properties of a simulator. We then present the Unity engine and Unity ML-Agents Toolkit a general platform and discuss the extent to which it possesses the desired characteristics for enabling research. We next outline the architecture, functionality and tools provided by the open source Unity ML-Agents Toolkit which enable the deployment of learning algorithms within Unity environments and provide a set of benchmark results on example learning environments. We conclude by proposing future avenues of research we believe will be enabled by using a ï¬exible game engine versus standard black box environments.
# 2. Anatomy of Environments and Simulators
In this section, we detail some of the characteristics of environments and simulators we believe are needed to advance the state of the ï¬eld in AI research. We use the term environment to refer to the space in which an artiï¬cial agent acts and simulator to refer to the platform which computes the environment.
# 2.1 Environment Properties
As algorithms are able to solve increasingly diï¬cult tasks, the complexity of the environments themselves must increase in order to continue to provide meaningful challenges. The speciï¬c axes of environmental complexity we believe are essential are sensory, physical, task logic, and social. In this subsection, we outline the role each of these play in the state of the art in AI.
Sensory Complexity - The recent advances in deep learning have largely been driven by the ability of neural networks to process large amounts of visual, auditory, and text-based data (LeCun et al., 2015). ImageNet, a large database of natural images with associated labels, was essential in enabling models such as ResNet (He et al., 2016), and Inception (Szegedy et al., 2016) to be trained to near human-level object-recognition performance (Russakovsky et al., 2015). While ImageNet was mainly used for static image recognition tasks, its key component of visual complexity is necessary for many real-world decision- making problems, such as self-driving cars, household robots, and unmanned autonomous vehicles (Zhu et al., 2017). Additionally, advances in computer vision algorithms, speciï¬cally
3
# Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
around convolutional neural networks, were the motivation for the pixel-to-control approach eventually found in the Deep-Q network (Mnih et al., 2015).
Physical Complexity - Many of the applied tasks researchers are interested in solving with AI involve not only rich sensory information, but a rich control scheme in which agents can interact with their dynamic environments in complex ways (Bicchi and Kumar, 2000; Levine et al., 2016). The need for complex interaction often comes with the need for environments which replicate the physical properties of the target domain, typically the real world. This realism is essential to problems where the goal is to transfer a policy learned within a simulator to the real world, as would be the case for most robotics applications (Rusu et al., 2016; Tobin et al., 2017; Andrychowicz et al., 2018).
Task Logic Complexity - A third axis is the complexity of the tasks deï¬ned within the environment. The game of Go, for example, which has long served as a test-bed for AI research, contains neither complex visuals nor complex physical interactions. Rather, the complexity comes from the large search space of possibilities open to the agent at any given time, and the diï¬culty in evaluating the value of a given board conï¬guration (Müller, 2002; Silver et al., 2017). Meaningful simulation platforms should enable designers to naturally create such problems for the learning agents within them. These complex tasks might display hierarchical structure, a hallmark of human intelligence (Botvinick, 2008), or vary from instance to instance, thus requiring meta-learning or generalization to solve (Wang et al., 2016). The tasks may also be presented in a sequential manner, where independent sampling from a ï¬xed distribution is not possible. This is often the case for human task acquisition in the real world, and the ability to learn new tasks over time is seen as a key-component of continual learning (Ring, 1994), and ultimately systems capable of artiï¬cial general intelligence (Schmidhuber, 2015; Schmidhuber, 2018).
Social Complexity - The acquisition of complex skills via learning in mammals is believed to have evolved hand-in-hand with their ability to hold relationships within their social groups (Arbib et al., 2008). At least one strong example of this exists within the human species, with language primarily being the development of a tool for communication in a social setting. As such, the development of social behavior among groups of agents is of particular interest to many researchers in the ï¬eld of AI. There are also classes of complex behavior which can only be carried out at the population level, such as the coordination needed to build modern cities (Baker et al., 2019). Additionally, the ability for multiple species to interact with one another is a hallmark of the development of ecosystems in the world, and would be desirable to simulate as well. A simulation platform designed to allow the study of communication and social behavior should then provide a robust multi-agent framework which enables interaction between agents of both the same population as well as interaction between groups of agents drawn from separate distributions.
# 2.2 Simulation Properties
In addition to the properties above, there are practical constraints imposed by the simulator itself which must be taken into consideration when designing environments for experimenta- tion. Speciï¬cally, simulated environments must be ï¬exibly controlled by the researcher and must run in a fast and distributed manner in order to provide the iteration time required for experimental research.
4
Unity: A General Platform for Intelligent Agents
Fast & Distributed Simulation - Depending on the sample eï¬ciency of the method used, modern machine learning algorithms often require up to billions of samples in order to converge to an optimal solution (Espeholt et al., 2018; Puigdomènech Badia et al., 2020). As such, the ability to collect that data as quickly as possible is paramount. One of the most appealing properties of a simulation is the ability for it to be run at a speed often orders of magnitude greater than that of the physical world. In addition to this increase in speed, simulations can often be run in parallel, allowing for orders of magnitude greater data collection than real-time serial experience in the physical world. The faster such algorithms can be trained, the greater the speed of iteration and experimentation that can take place, leading to faster development of novel methods.
Flexible Control - A simulator must also allow the researcher or developer a ï¬exible level of control over the conï¬guration of the simulation itself, both during development and at runtime. While treating the simulation as a black-box has been suï¬cient for certain advances in recent years (Mnih et al., 2015), in many cases it also inhibits use of a number of advanced machine learning approaches in which more dynamic feedback between the training process and the agents is essential. Curriculum learning (Bengio et al., 2009), for example, entails initially providing a simpliï¬ed version of a task to an agent, and slowly increasing the task complexity as the agentâs performance increases (Bengio et al., 2009). This method was used to achieve near human-level performance in a recent VizDoom competition (Wu and Tian, 2017). Such approaches are predicated on the assumption that the user has the capacity to alter the simulation to create such curricula in the ï¬rst place. Additionally, domain randomization (Tobin et al., 2017) involves introducing enough variability into the simulation so that the models learned within the simulation can generalize to the real world. This often works by ensuring that the data distribution of the real world is covered within all of the variations presented within the simulation (Tobin et al., 2017). This variation is especially important if the agent depends on visual properties of the environment to perform its task. It is often the case that without domain randomization, models trained in simulation suï¬er from a âreality gapâ and perform poorly. Concretely, performing domain randomization often involves dynamically manipulating textures, lighting, physics, and object placement within a scene.
# 3. A Survey of Existing Simulators
When surveying the landscape of simulators, environments, and platforms, we ï¬nd that there exist four categories into which these items can be organized.
(1) The ï¬rst is Environment which consists of single, ï¬xed environments that act as black-boxes from the perspective of the agent. Examples of these include the canonical CartPole or MountainCar tasks (Sutton and Barto, 2018), a single game from the ALE, such as Pitfall! (Bellemare et al., 2013), CoinRun (Cobbe et al., 2019b), and the Obstacle Tower environment (Juliani et al., 2019).
(2) The second is Environment Suite. These consist of sets of environments packaged together and are typically used to benchmark the performance of an algorithm or method along some dimensions of interest. In most cases these environments all share the same or similar observation and action spaces, and require similar, but not necessarily identical skills to solve. Examples of this include the ALE (Bellemare et al., 2013), DMLab-30 (Espeholt
5
# Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
et al., 2018), the Hard Eight (Gulcehre et al., 2019), AI2Thor (Kolve et al., 2017), OpenAI Retro (Nichol et al., 2018), DMControl (Tassa et al., 2018b), and ProcGen (Cobbe et al., 2019a).
(3) The third category is Domain-speciï¬c Platform. This describes platforms which allow the creation of sets of tasks within a speciï¬c domain such as locomotion or ï¬rst-person navigation. These platforms are distinguished from the ï¬nal category by their narrow focus in environments types. This can include limitations to the perspective the agent can take, the physical properties of the environment, or the nature of the interactions and tasks possible within the environment. Examples of this category include Project Malmo (Johnson et al., 2016), VizDoom (Kempka et al., 2016), Habitat (Savva et al., 2019), DeepMind Lab (Beattie et al., 2016), PyBullet (Coumans and Bai, 2016) and GVGAI (Perez-Liebana et al., 2016). (4) The fourth and ï¬nal category is the General Platform whose members are capable of creating environments with arbitrarily complex visuals, physical and social interactions, and tasks. The set of environments that can be created by platforms in this category is a super-set of those that can be created by or are contained within the other three categories. In principle, members of these categories can be used to deï¬ne any AI research environment of potential interest. We ï¬nd that modern video game engines represent a strong candidate for this category. In particular, we propose the Unity engine along with a toolkit for AI interactions such as ML-Agents as an example of this category. Note that other game engines such as the Unreal engine could serve as general platforms for AI research. The important missing element however is the set of useful abstractions and interfaces for conducting AI research, something present in all examples listed here, but not inherently part of any given game engine or programming language. See Table 1 for a representative set of examples of the environments and platforms within this taxonomy.
Single Env Cart Pole Env Suite ALE Mountain Car DMLab-30 Obstacle Tower Hard Eight Pitfall! CoinRun Ant AI2Thor OpenAI Retro DMControl ProcGen Domain-Speciï¬c Platform General Platform Unity & ML-Agents MuJoCo DeepMind Lab Project Malmo VizDoom GVGAI PyBullet
Table 1: Taxonomy of simulators based on ï¬exibility of environment speciï¬cation. Includes a subset of examples for illustrative purposes.
# 3.1 Common Simulators
In recent years, there have been a number of simulation platforms developed for the purpose of providing challenges and benchmarks for deep reinforcement learning algorithms. Many of these platforms are based on existing games or game engines and carry with them speciï¬c strengths and weaknesses. While not exhaustive of all currently available platforms, below we survey a few of the simulators described in the previous section, taking examples from the middle two categories.
6
Unity: A General Platform for Intelligent Agents
# 3.1.1 Arcade Learning Environment
The release of the Arcade Learning Environment (ALE) contributed to much of the recent resurgence of interest in reinforcement learning. This was thanks to the development of the Deep Q-Network, which was able to achieve superhuman level performance on dozens of emulated Atari console games within the ALE by learning only from pixel inputs (Mnih et al., 2015). The ALE provides a Python interface for launching and controlling simulations of a few dozen Atari 2600 games. As such, the ALE falls into the category of environment suite. When considering the simulation criteria described above, the ALE provides visual complexity through pixel-based rendering, task-logic complexity in the form of hierarchical problems within some games such as Montezumaâs Revenge, and high-performance simulation with an emulation able to run at thousands of frames per second (Bellemare et al., 2013). Its downsides include deterministic environments, relatively simple visuals, a lack of realistic physics, single- agent control, and a lack of ï¬exible control of the simulation conï¬guration. In general, once an environment that is part of the ALE is launched, it is immutable and a complete black box from the perspective of the agent. Furthermore, all of the environments provided in the ALE have been solved with greater than human performance (Puigdomènech Badia et al., 2020). As such, there is little room for meaningful improvement over the state of the art with the exception of the domain of few-shot learning. This is apparent in the fact that even Agent57, the current state of the art algorithm takes orders of magnitude more training time than humans on a large number of the environments in the ALE.
# 3.1.2 DeepMind Lab
Built from the Quake III game engine, DeepMind Lab (Lab) was released in 2016 as the external version of the research platform used by DeepMind (Beattie et al., 2016). Designed in the wake of public adoption of the ALE, Lab contains a number of features designed to address the other platformâs shortcomings. By using a 3D game-engine, complex navigation tasks similar to those studied in robotics and animal psychology could be created and studied within Lab (Leibo et al., 2018). The ability to create a set of speciï¬c kinds of tasks makes DeepMind Lab a domain-speciï¬c platform. The platform contains primitive physics enabling a level of prediction about the quality of the world and allows researchers to deï¬ne their own environmental variations. Additionally, it allows for basic multi-agent interactions using language (Espeholt et al., 2018). The limitations of this platform, however, are largely tied to the dated nature of the underlying rendering and physics engine, which was built using decades-old technology. As such, the gap in quality between the physical world and the simulation provided via Lab is relatively large. Furthermore, the engine was also designed to enable ï¬rst-person shooter games and so the environments built using Lab are limited to agents with a ï¬rst-person perspective.
# 3.1.3 Project Malmo
Another popular simulation platform is Project Malmo (Malmo) (Johnson et al., 2016). Based on the exploration and building game Minecraft, the platform provides a large amount of ï¬exibility in deï¬ning scenarios and environment types making it a domain-speciï¬c platform. As a result, there have been a number of research projects exploring multi-agent communication, hierarchical control, and planning using the platform (Oh et al., 2016; Shu
7
# Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
et al., 2017; Tessler et al., 2017). The limitations of the platform, however, are bound tightly with the underlying limitations of the Minecraft engine itself. Due to the low-polygon pixelated visuals, as well as the rudimentary physics system, Minecraft lacks both the visual as well as the physical complexity that is desirable from a modern platform. The platform is also limited to describing scenarios which are only possible within the logic of Minecraft.
3.1.4 Physics Simulators
The MuJoCo physics engine has become a popular simulation platform for benchmarking model-free continuous control tasks, thanks to a set of standard tasks built on top of MuJoCo, provided with OpenAI Gym and the DeepMind Control Suite (Todorov et al., 2012; Brockman et al., 2016; Tassa et al., 2018a). High quality physics simulation combined with a number of standardized benchmarks has led to the platform being the primary choice for researchers interested in examining the performance of continuous control algorithms. The nature of the MuJoCo engine, however, poses limitations for more general AI research. The ï¬rst is around the limited visual rendering capabilities of the engine, preventing the use of complex lighting, textures, and shaders. The second is the restrictions of the physics engine itself and that MuJoCo models are compiled which makes more diï¬cult the creation of dynamic âgame-likeâ environments, where many diï¬erent objects would be instantiated and destroyed in real-time during simulation. More dynamic environments are often necessary to pose tasks which require greater planning or coordination to solve. The PyBullet physics engine has also been used as a platform to study deep reinforcement learning algorithms as well as sim-to-real (Coumans and Bai, 2016; Tan et al., 2018). Similar to MuJoCo, the PyBullet simulator lacks the ability to provide high-ï¬delity visuals, and the nature of the physics engine limits the scope of possible tasks to be deï¬ned.
# 3.1.5 VizDoom
Based on the game Doom, VizDoom provides researchers with the ability to create tasks which involve ï¬rst-person navigation and control (Kempka et al., 2016). Through a 2017 AI deathmatch competition, the platform enabled the development of a number of compelling approaches to Deep Reinforcement Learning, including utilizing learning curricula (Wu and Tian, 2017), novel algorithm design (Dosovitskiy and Koltun, 2016), and memory systems (Lample and Chaplot, 2017). Like DeepMind Lab, the platform is mainly restricted by the underlying game engine, which was built for a decades-old ï¬rst-person shooter game. As such, the visual and physical complexity possible in the environments created using VizDoom are relatively limited. It also is restricted to simulating artiï¬cial agents with only a ï¬rst-person perspective.
# 4. The Unity Platform
Unity is a real-time 3D development platform that consists of a rendering and physics engine as well as a graphical user interface called the Unity Editor. Unity has received widespread adoption in the gaming, AEC (Architecture, Engineering, Construction), auto, and ï¬lm industries and is used by a large community of game developers to make a variety of
8
# Unity: A General Platform for Intelligent Agents
interactive simulations, ranging from small mobile and browser-based games to high-budget console games and AR/VR experiences.
Unityâs historical focus on developing a general-purpose engine to support a variety of platforms, developer experience levels, and game types makes the Unity engine an ideal candidate simulation platform for AI research. The ï¬exibility of the underlying engine enables the creation of tasks ranging from simple 2D gridworld problems to complex 3D strategy games, physics-based puzzles, or multi-agent competitive games possible. Unlike many of the research platforms discussed above, the underlying engine is not restricted to any speciï¬c genre of gameplay or simulation, making Unity a general platform. Furthermore, the Unity Editor enables rapid prototyping and development of games and simulated environments.
A Unity Project consists of a collection of Assets. These typically correspond to ï¬les within the Project. Scenes are a special type of Asset which deï¬ne the environment or level of a Project. Scenes contain a deï¬nition of a hierarchical composition of GameObjects, which correspond to the actual objects (either physical or purely logical) within the environment. The behavior and function of each GameObject is determined by the components attached to it. There are a variety of built-in components provided with the Unity Editor, including Cameras, Meshes, Renderers, RigidBodies, and many others. It is also possible to deï¬ne custom components using C# scripts or external plugins.
# 4.1 Engine Properties
This section examines the properties of the Unity engine from the perspectives described in Section 2. We demonstrate that Unity enables the complexity necessary along the key dimensions of environment properties for the creation of challenging learning environments.
4.1.1 Environment Properties
Sensory Complexity - The Unity engine enables high-ï¬delity graphical rendering. It supports pre-baked as well as real-time lighting and the ability to deï¬ne custom shaders, either programmatically or via a visual scripting language. As such, it is possible to quickly render near-photorealistic imagery to be used as training data for a machine learning model. It is also possible to render depth information, object masks, infrared, or images with noise injected into it through the use of custom shaders. Furthermore, the engine provides a means of deï¬ning audio signals which can serve as potential additional observational information to learning agents, as well as ray-cast based detection systems which can simulate Lidar. Physical Complexity - Physical phenomena in Unity environments can be simulated with either the Nvidia PhysX or Havok Physics engines. This enables research in environments with simulated rigid body, soft body, particle, and ï¬uid dynamics as well as ragdoll physics. Furthermore, the extensible nature of the platform enables the use of additional 3rd party physics engines if desired. For example, there are plugins available for Unity which provide both the Bullet and MuJoCo physics engines as alternatives to PhysX 2. Task Logic Complexity - The Unity Engine provides a rich and ï¬exible scripting system via C#. This system enables any form of gameplay or simulation to be deï¬ned and dynamically controlled. In addition to the scripting language, the GameObject and component system
2. https://assetstore.unity.com/packages/tools/physics/bullet-physics-for-unity-62991; http: //www.mujoco.org/book/unity.html
9
# Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
enables managing multiple instances of agents, policies, and environments, making it possible to deï¬ne complex hierarchical tasks, or tasks which would require meta-learning to solve. Social Complexity - The nature of the Unity scripting language and component system makes the posing of multi-agent scenarios simple and straightforward. Indeed, because the platform was designed to support the development of multi-player video games, a number of useful abstractions are already provided out of the box, such as the Multiplayer Networking system3.
4.1.2 Simulation Properties
Fast & Distributed Simulation - The physics and frame rendering of the Unity engine take place asynchronously. As such, it is possible to greatly increase the speed of the underlying simulation without the need to increase the frame rate of the rendering process. It is also possible to run Unity simulations without rendering if it is not critical to the simulation. In scenarios where rendering is desirable, such as learning from pixels, it is possible to control the frame rate and speed of game logic. Extensive control of the rendering quality also makes it possible to greatly increase the frame rate when desired. The added capabilities of the Unity engine do add additional overhead when attempting to simulate in a large-scale distributed fashion. The memory footprint of a Unity simulation is also larger than that of environments from other platforms such as an Atari game in the ALE. Flexible Control - It is possible to control most aspects of the simulation programmatically, enabling researchers to deï¬ne curricula, adversarial scenarios, or other complex methods of changing the learning environment during the training process. For example, GameObjects can be conditionally created and destroyed in real-time. In Section 5, we discuss ways in which further control of the simulation is made possible via exposed simulation parameters and a Python API.
# 4.2 Unity Editor and Services
The Unity Editor (Figure 1) is a graphical user interface used to create the content for 2D, 3D and AR / VR experiences. It is available on Windows, Mac and Linux.
The Unity Editor and its services provide additional beneï¬ts for AI research:
1. Create custom Scenes - Unity provides a large number of guides and tutorials on how to create Scenes within the Editor. This enables developers to quickly experiment with new environments of varying complexities, or novel tasks. Furthermore, an online asset store which contains tens of thousands of free and paid assets provides users access to a huge diversity of pre-built entities for their scene.
2. Record local, expert demonstrations - The Unity Editor includes a Play mode which enables a developer to begin a simulation and control one or more of the agents in the Scene via a keyboard or game controller. This can be used for generating expert demonstrations to train and evaluate imitation learning (IL) algorithms.
3. Record large-scale demonstrations - One the most powerful features of the Unity Editor is the ability to build a Scene to run on more than 20 platforms ranging from wearables to
3. https://unity3d.com/learn/tutorials/s/multiplayer-networking
10
Unity: A General Platform for Intelligent Agents
(SHS) borpy = Hierarchy PITS aN © a amos AT Creates @AT ââ_â_â_â<â<âae % |) Gi AT Y | | create -| Gea @ ibe A state = y @ apeall* > Canvas Watermark Tag (Untagged 8) Layer Defaute 4) Directional Light Prefab (___ Select Revert âApply Main Camera yA Transform % EventSystem Position Rotation Scale 30eallHard Console Gear || Collapse | cleat on Pay [ror Pause | eator =] | D0]
Figure 1: The Unity Editor window on macOS.
mobile and consoles. This enables developers to distribute their Scenes to a large number of devices (either privately or publicly through stores such as the Apple App Store or Google Play). This can facilitate recording expert demonstrations from a large number of experts or measuring human-level performance from a user (or player) population.
# 5. The Unity ML-Agents Toolkit
The Unity ML-Agents Toolkit4 is an open source project which enables researchers and developers to create simulated environments using the Unity Editor and interact with them via a Python API. The toolkit provides the ML-Agents SDK which contains all functionality necessary to deï¬ne environments within the Unity Editor along with the core C# scripts to build a learning pipeline.
The features of the toolkit include a set of example environments, state of the art RL algorithms Soft Actor-Critic (SAC) (Haarnoja et al., 2018) and Proximal Policy Optimization (PPO) (Schulman et al., 2017), the IL algorithms Generative Adversarial Imitation Learning (GAIL) (Ho and Ermon, 2016) and Behavioral Cloning (BC) (Hussein et al., 2017), support for Self-Play (Baker et al., 2019; Bansal et al., 2017) in both symmetric and asymmetric games, as well as the option to extend algorithms and policies with the Intrinsic Curiosity Module (ICM) (Pathak et al., 2017) and Long-Short-Term Cell (LSTM) (Hochreiter and Schmidhuber, 1997), respectively. As the platform grows, we intend to provide additional algorithms and model types. In what follows, we outline the key components of the toolkit as well as provide benchmark results with SAC and PPO on the Unity example environments.
4. This describes version 1.0; the most recent release at the time of writing.
11
# Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
Learning Environment Communicator Neural Network Python API hon Trainer
Figure 2: A Learning Environment (as of version 1.0) created using the Unity Editor contains Agents and an Academy. The Agents are responsible for collecting observations and executing actions. The Academy is responsible for global coordination of the environment simulation.
# 5.1 ML-Agents SDK
The three core entities in the ML-Agents SDK are Sensors, Agents, and an Academy. The Agent component is used to directly indicate that a GameObject within a scene is an Agent, and can thus collect observations, take actions, and receive rewards. The agent can collect observations using a variety of possible sensors corresponding to diï¬erent forms of information such as rendered images, ray-cast results, or arbitrary length vectors. Each Agent component contains a policy labeled with a behavior name.
Any number of agents can have a policy with the same behavior name. These agents will execute the same policy and share experience data during training. Additionally, there can be any number of behavior names for policies within a scene enabling simple construction of multi-agent scenarios with groups or individual agents executing many diï¬erent behavior types. A policy can reference various decision-making mechanisms including player input, hard-coded scripts, internally embedded neural network models, or via interaction through the Python API. It is possible for agents to ask for decisions from policies either at a ï¬xed or dynamic interval, as deï¬ned by the developer of the environment.
The reward function, used to provide a learning signal to the agent, can be deï¬ned or modiï¬ed at any time during the simulation using the Unity scripting system. Likewise, simulation can be placed into a done state either at the level of an individual agent or the environment as a whole. This happens either via a Unity script call or by reaching a predeï¬ned max step count.
The Academy is a singleton within the simulation, and is used to keep track of the steps of the simulation and manage the agents. The Academy also contains the ability to deï¬ne environment parameters, which can be used to change the conï¬guration of the environment at runtime. Speciï¬cally, aspects of environmental physics and textures, sizes and the existence of GameObjects are controlled via exposed parameters which can be re-sampled and altered
12
Unity: A General Platform for Intelligent Agents
throughout training. For example, the gravity in the environment can ï¬uctuate every ï¬xed interval or additional obstacles can spawn when an agent reaches a certain proï¬ciency. This enables evaluation of an agent on a train/test split of environment variations and facilitates the creation of curriculum learning scenarios (Bengio et al., 2009).
# 5.2 Python Package
The provided Python package5 contains a class called UnityEnvironment that can be used to launch and interface with Unity executables (as well as the Editor) which contain the required components described above. Communication between Python and Unity takes place via a gRPC communication protocol, and utilizes protobuf messages.
We also provide a set of wrapper APIs, which can be used to communicate with and control Unity learning environments through the standard gym interface used by many researchers and algorithm developers (Brockman et al., 2016). These gym wrappers enable researchers to more easily swap in Unity environments to an existing reinforcement learning system already designed around the gym interface.
# 5.3 Performance Metrics
It is essential that an environment be able to provide greater than real-time simulation speed. It is possible to increase Unity ML-Agents simulations up to one hundred times real-time. The possible speed increase in practice, however, will vary based on the computational resources available, as well as the complexity of the environment. In the Unity Engine, game logic, including physics, can be run independently from the rendering of frames. As such, environments which do not rely on visual observations, such as those that use ray-casts for example, can beneï¬t from simulation at speeds greater than those that do. See Table 2 for performance metrics when controlling environments from the Python API.
Environment Basic 3D Ball GridWorld Visual Food Collector Observation Type # Agents Mean (ms) Std (ms) Vector(1) Vector(8) Visual(84x84x3) Visual(84x84x3) 1 12 1 4 0.803 5.05 2.04 9.23 0.005 0.039 0.038 0.556
Table 2: Performance benchmark when using the Python API to control a Learning Environ- ment from the same machine by calling env.step(). Mean and standard deviation in time averaged over 1000 simulation steps.
# 5.4 Example Environments
The Unity ML-Agents Toolkit contains a number of example environments in addition to the core functionality. These environments are designed to both be usable for benchmarking RL and IL algorithms as well as templates to develop novel environments and tasks. These environments contain examples of single and multi-agent scenarios, with agents using either
5. https://pypi.org/project/mlagents/
13
Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
vector or visual observations, taking either discrete or continuous actions, and receiving either dense or sparse rewards. See Figure 3 for images of the included example environments and below for environment descriptions.
Quniy
Figure 3: Images of the fourteen included example environments as of the v0.11 release of the Unity ML-Agents Toolkit. From Left-to-right, up-to-down: (a) Basic, (b) 3DBall, (c) Crawler, (d) Push Block, (e) Tennis, (f) Worm, (g) Bouncer, (h) Grid World, (i) Walker, (j) Reacher, (k) Food Collector, (l) Pyramids, (m) Wall Jump, (n) Hallway, (o) Soccer Twos.
(a) Basic - A linear movement task where the agent (blue cube) must move left or right to rewarding states. The goal is to move to the most rewarding state.
(b) 3D Ball - A balance-ball task where the agent controls the rotation of the platform. The goal is to balance the platform in order to keep the ball on it for as long as possible.
14
Unity: A General Platform for Intelligent Agents
(c) Crawler - Physics-based creatures with 4 arms and 4 forearms. The goal is to move toward the goal direction as quickly as possible without falling.
(d) Push Block - A platforming environment where the agent can push a block around. The goal is to push the block to the target area (black white grid).
(e) Tennis - Two-player game where agents control rackets to bounce ball over a net. The goal is to bounce ball to the other side instead of dropping the ball or sending ball out of bounds.
(f) Worm - A physics-based three joint locomotion agent which must move toward a goal location as quickly as possible.
(g) Bouncer - A bouncing task where the agent (blue cube) can jump with a certain speed and angle when it touches the ground. The goal is to catch the ï¬oating food object with as few jumps as possible.
(h) Grid World - A version of the classic grid-world task. Scene contains agent (blue square), target, and obstacles. The goal is to navigate to the target while avoiding the obstacles.
(i) Walker - Physics-based humanoids with 26 degrees of freedom of its body-parts. The goal is to move toward the goal direction as quickly as possible without falling.
(j) Reacher - Double-jointed arm which can move to target locations. The goal is to move its hand to the target location (green sphere), and keep it there.
(k) Food Collector - A multi-agent environment where agents (blue cube) compete to collect bananas. The goal is to move to as many yellow bananas as possible while avoiding blue bananas.
(l) Pyramids - Environment where the agent (blue cube) needs to press a button to spawn a pyramid, then navigate to the pyramid, knock it over, and move to the gold brick at the top. The goal is to move to the golden brick on top of the spawned pyramid.
(m) Wall Jump - A platforming environment with a wall and a yellow block that can be pushed around, and an agent (blue cube) that can move, rotate and jump. The goal is to reach the target (white black grid) on the other side of the wall. If the wall is too high, the agent sometimes needs to push the white block near the wall, jump onto it to reach its target. The agent trains two policiesâone for big walls (requires the small block) and one for small walls.
(n) Hallway - Environment where the agent (blue cube) needs to ï¬nd information in a room, remember it, and use it to move to the correct target. The goal is to move to the target (black white grid) which corresponds to the color of the block in the room.
(o) Soccer - Environment where four agents compete in a 2 vs 2 toy soccer game. All agents are equal and tasked with keeping the ball out of their own goal and scoring in the opponents.
15
Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
(p) StrikersVsGoalie - A soccer variant with three agents of two diï¬erent kinds in the environment; two Strikers and one Goalie. The goal of the Striker agents is to push the ball into the goal area while the Goalie tries to prevent the ball from entering its own goal area.
For more information on the speciï¬cs of each of the environments, including the observa- tions, actions, and reward functions, see the GitHub documentation6. Trained model ï¬les as well as hyperparameter speciï¬cations for replicating all of our results on the example environments are provided with the toolkit. See Figures 4 and 5 below for baseline results on each example environment. These results describe the mean cumulative reward per-episode over ï¬ve runs using PPO and SAC (plus relevant modiï¬cations).
Environment 3DBall 3DBallHard Basic Bouncer CrawlerDynamic CrawlerStatic FoodCollector GridWorld Hallway PushBlock Pyramids Reacher Walker BigWallJump SmallWallJump WormDynamic WormStatic PPO (mean) 98.03 96.05 0.94 11.33 577.51 2816.07 36.6 0.98 0.91 4.89 1.79 35.28 2206.41 0.91 0.97 131.59 152.54 PPO (std) 2.95 7.91 0.0 0.07 25.26 231.37 8.42 0.0 0.03 0.04 0.02 4.43 165.66 0.02 0.0 9.08 4.02 SAC (mean) 86.36 91.36 0.94 17.84 479.73 2042.97 46.43 0.98 0.53 4.14 -1.0 39.29 567.45 -0.66 0.89 238.89 396.26 SAC (std) 12.08 8.91 0.0 0.27 131.71 1097.81 7.93 0.0 0.76 0.49 0.0 0.11 129.35 0.29 0.04 6.2 7.25
Table 3: Table of cumulative episodic reward for the various example environments provided with the Unity ML-Agents Toolkit. Results are averaged over ï¬nal score on ï¬ve separate runs.
# 6. Research Using Unity and the Unity ML-Agents Toolkit
In this section, we survey a collection of results from the literature which use Unity and/or the Unity ML-Agents Toolkit. The range of environments and algorithms reviewed here demonstrates the viability of Unity as a general platform. We also discuss the Obstacle Tower benchmark (Juliani et al., 2019) which serves as an example of the degree of environmental complexity achievable on Unity. The corresponding Obstacle Tower contest posed a signiï¬cant
6. https://github.com/Unity-Technologies/ml-agents/blob/master/docs/
# Learning-Environment-Examples.md
16
# Unity: A General Platform for Intelligent Agents
3DBall 3DBallHard Basic Bouncer 0.75 0.50 10 â PPO 025 â ppo â SAC 0 â SAC T T T T T T T 1 200 400 200 400 250 500 750100 CrawlerDynamic CrawlerStatic FoodCollector GridWorld 1.0 600 4 0 4 2000 4 40 400 4 05 200 â Ppo 0 â pPpo 7 â PPO â SAC â SAC â sac 00 0 0 T T T T T T T T T l 250 500 750 250 500 750 25° 50 75ââ:100 Hallway PushBlock Pyramids Reacher 14 40 a4 i4 = PPOHCM ° = sacicm 7° 24 0 = PPO+LSTM = Po â SAC+LSTM â SAC -1 0 A 0 ~ââ TT 200 400 20 40 200 400 250 500 750 Walker BigWallJump SmallWallJump WormDynamic 1 1 â pPo 2000 4 â sac 200 4 ° 0 1000 100 4 â PPO â PPO 1 â SAC â SAC 0 1 0 T T T T T T 1 T T T 1 T T T 500 1000 1500. 250 = 500 750-1000 250 500 = 750-1000 1000 2000 3000 WormStatic 400 4 â ppo â SAC 200
400 4 â ppo â SAC 200 1000 2000-3000
Figure 4: Mean cumulative episodic reward (y-axis) over simulation time-steps (in thousands, x-axis) during training and evaluation. We compare PPO (blue line) and SAC (red line) performances. Results presented are based on ï¬ve separate runs, with a 95% conï¬dence interval. LSTM indicates an LSTM unit is used in the network. ICM indicates the Intrinsic Curiosity Module is used during training.
challenge to the research community inspiring a number of creative solutions. We review the top performing algorithm to show the rallying eï¬ect a benchmark like this can have on innovation in the ï¬eld.
17
# Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
10000 7500 5000 2500 SoccerTwos Strikers VsGoalie Tennis 3000 2500 â PPO = PPO; Striker = PPO 2500 â| â PPO; Goalie 2000 2000 1500 1500 1000 20000 40000 20000 40000 20000 40000
Figure 5: Mean episodic ELO (y-axis) over simulation time-steps (in thousands, x-axis) during training with Self-Play and PPO. In symmetric environments, the ELO of the learning policy is plotted (blue line) and in asymmetric environments (blue and red line) the ELO of both learning policies are plotted. Results presented are based on ï¬ve separate runs, with a 95% conï¬dence interval.
# 6.1 Domain-Speciï¬c Platforms and Algorithms
The AI2Thor (Kolve et al., 2017) platform provides a set of pre-built indoor scenes which are rendered using the Unity engine and a Python API for interacting with those environments using a ï¬rst-person agent. Using the AI2Thor simulator, researchers demonstrated that it is possible to transfer a policy learned in simulation to a physical robot to complete an indoor-navigation task (Zhu et al., 2017). In the same vein, the Chalet platform uses Unity to provide a set of indoor navigation environments (Yan et al., 2018). Recent work at OpenAI has also taken advantage of the rendering capabilities of the Unity engine to aid in the development of a system used to transfer a robotic handâs grasping policy from a simulator to a physical robot (Andrychowicz et al., 2018). Unity has also been used to render a physical intersection in order to aid demonstration-based learning on real-world vehicles (Behbahani et al., 2019). Finally, a set of benchmarks called âArenaâ have been built using the Unity ML-Agents Toolkit which speciï¬cally focuses on multi-agent scenarios (Song et al., 2020). The creators of Arena take care to note that Unity is selected over other engines and platforms for its generality.
Unity environments have been used in varied research such as intrinsic motivation (Burda et al., 2019; Pathak et al., 2019a), Neural Attention (Ghani et al., 2018), and semi-parametric reinforcement learning (Jain and Lindsey, 2018). Of particular interest is the way in which Unity facilitated work which developed an algorithm for the morphological self-assembly of individually trained agents in order to achieve some higher order task like standing or locomotion (Pathak et al., 2019b). The authors note that none of the standard benchmark environments support the co-evolution of control and morphology which required them to create their own. A general platform promotes experimentation with these types of highly original algorithms.
18
Unity: A General Platform for Intelligent Agents
# 6.2 Obstacle Tower
The Obstacle Tower7 environment for deep reinforcement learning (Juliani et al., 2019) demonstrates the extent of environmental complexity possible from the Unity platform. This benchmark uses procedural generation and sparse rewards in order to ensure that each instance of the task requires ï¬exible decision-making. Each episode of Obstacle Tower consists of one-hundred randomly generated ï¬oors, each with an increasingly complex ï¬oor layout. Each ï¬oor layout is composed of rooms, which can contain puzzles, obstacles, enemies, or locomotion challenges. The goal of the agent is to reach the end room of each ï¬oor and to ascend to the top ï¬oor of the tower without entering a fail-state such as falling in a hole or being defeated by an enemy. This benchmark provided a signiï¬cant challenge to contemporary RL algorithms, with baseline results showing test-time performance corresponding to solving on average ï¬ve of 100 ï¬oors after 20 million time-steps of training. This is signiï¬cantly worse than those of naive humans who have only interacted with the environment for ï¬ve minutes and are able to solve on average 15 ï¬oors (Juliani et al., 2019), and much worse than expert players who are able to solve on average 50 ï¬oors.
Figure 6: Examples of three ï¬oors generated in the Obstacle Tower environment.
Concurrent with the publication of the baseline results reported in the original work, an open competition was held where research teams competed to train agents which could solve Obstacle Tower 8. These agents were evaluated on ï¬ve held-out instances of Obstacle Tower not available during training. After six months of open contest, the top entry was able to solve an average of nineteen ï¬oors on the ï¬ve held-out towers. This corresponds to better than naive human-level performance, but still well below expert human play, or optimal performance.
In a blog post (Nichol, 2019), the top-scoring participant outlines their approach which consists of a creative combination of various RL and imitation learning modules as well as cleverly constructed human demonstrations and state augmentations; an invocation of the complexity of Obstacle Tower. This serves as an example of the role novel environments can serve in promoting the development of novel algorithms. Table 4 contains results from the top six competitors. We encourage researchers who evaluate their algorithms on Obstacle Tower to compare to the results below in addition to those of the original work (Juliani et al., 2019).
7. https://github.com/Unity-Technologies/obstacle-tower-env 8. https://www.aicrowd.com/challenges/unity-obstacle-tower-challenge
19
# Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
JULIANI, BERGES, TENG, COHEN, HARPER, ELION, Goy, GAO, HENRY, MaTTAR, & LANGE
1st 2nd 3rd 4th 5th 6th Alex Nichol Compscience.org Songbin Choi Joe Booth Doug Meng UEFDL 19.4 16 13.2 10.8 10 10 35.86 28.7 23.2 18.06 16.5 16.42
Table 4: Performance on Obstacle Tower test-phase of top six entries in Obstacle Tower Challenge
# 7. Potential for Future AI Research
As alluded to in the previous section, we believe there are a number of extremely valuable research directions that are hindered by the current standard benchmark problems. Working in these directions necessarily incurs additional overhead by forcing the researcher to create their own suitable environments (Pathak et al., 2019b) which can be a substantial burden if the tools of a general platform are unavailable. In this section, we highlight how the use of the Unity game engine can expedite research progress in lagging areas critical to the ï¬elds of AGI and human-AI interaction.
# 7.1 Eï¬ective Learning Environments
It has been argued in recent work that generating eï¬ective and diverse learning environments (often as a co-evolutionary process involving agent and environment) is a critical component for developing artiï¬cial general intelligence (Wang et al., 2019; Clune, 2019). Furthermore, other lines of research argue that procedural generation of environments and measuring success of an algorithm using a train/test split is a principled way of understanding the generalization and robustness of learning algorithms (Cobbe et al., 2019b; Justesen et al., 2018).
As discussed in Section 4, Unity environments are highly programmable via a straight- forward C# scripting system. This enables a simple way to control changing environment dynamics and dynamically create and destroy new entities (i.e. GameObjects), two critical components of an evolving environment. Furthermore, it is very natural for Unity envi- ronments to be parameterized and procedurally generated. This ï¬exibility is uncommon among the platforms currently in use today. Additionally, Unity also has a large and active development community so that creating new and diverse environments is easy with an expansive array of oï¬-the-shelf assets.
# 7.2 Human-in-the-loop Training
Leveraging human input to guide the learning process is desirable as exploiting a humanâs domain expertise speeds up learning and helps the agent learn to behave in a manner aligned with human expectations. A number of training frameworks have been studied in the literature (Zhang et al., 2019) such as learning to imitate expert trajectories (Ho and Ermon, 2016), humans providing evaluative feedback to the agent (Knox and Stone, 2008),
20
Unity: A General Platform for Intelligent Agents
or humans manipulating the agentâs observed states and actions (Abel et al., 2016). The success of the latter two families of algorithms is in large part dependent on how the human interfaces with the agent during learning which is very diï¬cult or impossible with the current set of platforms. On the other hand, imitation learning is a signiï¬cantly more mainstream ï¬eld of research which we hypothesize is partly because recording expert demonstrations requires very little extra functionality from the platforms themselves. An alternate line of work investigates how to design agents that donât learn to avoid being interrupted by humans given that it may prevent them from receiving future reward (Orseau and Armstrong, 2016). Training within a visual environment editor, such as the Unity Editor, allows for an interactive and collaborative learning process between the human and agent. The editor oï¬ers real-time access to the training scene so that a human can interact with the agent during training simply via mouse clicks. Possible interventions include but are not limited to pausing the scene, dragging GameObjects within the scene, adding or removing GameObjects to the scene, and even assuming control of the agent through keyboard commands. This functionality will make the actual act of administering feedback and modifying the environment during training straightforward lifting a major burden in this ï¬eld of research.
# 7.3 Training Agents Alongside Humans
Developing games with the assistance of artiï¬cial agents has a long history in the domain of game design (Zhao et al., 2019). Of particular value to the game development community is the ability to train ï¬exible behaviors for non-playable characters (NPC) as either friend or foe to the player. Contained within this training dynamic is the under-explored research problem of training agents to be challenging to humans but not so dominant that the human does not engage in future contest. This may not align with an RL agentâs goal of learning an optimal strategy. Training agents to perform at a particular player strength has been achieved via behavioral cloning and conditioning the policy on an estimate of the skill of the player that generated the demonstration (Vinyals et al., 2019). Thus, when a particular strength is desired, the network can be conditioned. However, we believe there to be novel RL formulations which seek to optimize the standard expected return within an episode but also must optimize the number of expected future episodes. A formulation of this sort could lead to a new family of RL algorithms and have implications for existential concerns for AI such as the value alignment problem (Bostrom, 2014).
It is not trivial to investigate the training scenario where agents play against (or in cooperation with) humans robustly or at scale. However, Unityâs WebGL build option enables users to deploy Unity games to a browser. Thus, agent-human interaction can be studied at scale as humans play with or against an agent in a web browser game. As a side note, training agents against many humans with diï¬erent play styles will also improve generalization and robustness of the learned policy (Cobbe et al., 2019b).
# 8. Conclusion and Future Directions
In this paper, we introduce the notion of a general platform for environment creation and analyze the capabilities of the Unity engine with the Unity ML-Agents Toolkit as a member of this class. To that end, we discussed the desirable complexity and computational properties of a simulator for the continued development of AI and used that criteria to propose a
21
# Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
novel taxonomy of existing simulators and platforms. From this analysis, we argued that the current set of platforms are insuï¬cient for long-term progress and proposed modern game engines as the natural next step. We then discussed the Unity game engine and Unity ML-Agents Toolkit in this context and highlighted the key role it has already played in spurring innovation within the ï¬eld. Finally, we surveyed a subset of the research that using an engine like Unity enables but is currently burdensome to pursue due to the inï¬exibility of the current platforms.
There exist numerous other directions for future research in addition to those discussed in Section 7. In addition to researchers, the Unity ML-Agents Toolkit is also intended to be used by game developers who are not necessarily machine learning experts. The notoriously tedious process of tuning hyperparameters may be insurmountable in some cases for a non-expert. We plan to introduce intuitive UI abstractions for navigating the iterative process of tuning an algorithm such as methods to tweak reward functions, deï¬ning observations, and deï¬ning actions as well as other aspects of algorithm design. Finally, other future work includes improving the Unity engine and the Unity ML-Agents Toolkit in both performance and breadth.
# 9. Acknowledgements
We would like to thank Jeï¬ Shih, Anupam Bhatnagar, Adam Crespi, Deric Pang, Sachin Dharashivkar, Ruoping Dong, Sankalp Paltro, and Esh Vckay for their contributions to the Unity ML-Agents Toolkit; Praneet Dutta, Christopher Lu, and Cesar Romero for their feedback during the initial toolkit design process; Trevor Santarra, Vladimir Oster, Samuel Warren, YouCyuan Jhang, Joe Ward, Catherine Morrison, and Jose De Oliveira for their feedback on a draft of this paper.
# References
Abel, D., Salvatier, J., Stuhlmüller, A., and Evans, O. (2016). Agent-agnostic human-in- the-loop reinforcement learning. In NeurIPS Future of Interactive Learning Machines Workshop.
Andrychowicz, M., Baker, B., Chociej, M., Jozefowicz, R., McGrew, B., Pachocki, J., Petron, A., Plappert, M., Powell, G., Ray, A., Schneider, J., Sidor, S., Tobin, J., Welinder, P., Weng, L., and Zaremba, W. (2018). Learning dexterous in-hand manipulation. arXiv preprint arXiv:1808.00177.
Andrychowicz, M., Wolski, F., Ray, A., Schneider, J., Fong, R., Welinder, P., McGrew, B., Tobin, J., Abbeel, O. P., and Zaremba, W. (2017). Hindsight experience replay. In Advances in Neural Information Processing Systems, pages 5048â5058.
Arbib, M. A., Liebal, K., and Pika, S. (2008). Primate vocalization, gesture, and the evolution of human language. Current anthropology, 49(6):1053â1076.
Baker, B., Kanitscheider, I., Markov, T., Wu, Y., Powell, G., McGrew, B., and Mor- datch, I. (2019). Emergent tool use from multi-agent autocurricula. arXiv preprint arXiv:1909.07528.
22
Unity: A General Platform for Intelligent Agents
Bansal, T., Pachocki, J., Sidor, Sutskever, I., and Mordatch, I. (2017). Emergent complexity via multi-agent competition. arXiv preprint arXiv:1710.03748.
Beattie, C., Leibo, J. Z., Teplyashin, D., Ward, T., Wainwright, M., Küttler, H., Lefrancq, A., Green, S., Valdés, V., Sadik, Amir Schrittwieser, J., Anderson, K., York, S., Cant, M., Cain, A., Bolton, A., Gaï¬ney, S., King, H., Hassabis, D., Legg, S., and Petersen, S. (2016). Deepmind lab. arXiv preprint arXiv:1612.03801.
Behbahani, F., Shiarlis, K., Chen, X., Kurin, V., Kasewa, S., Stirbu, C., Gomes, J., Paul, S., Oliehoek, F. A., Messias, J., and Whiteson, S. (2019). Learning from demonstration in the wild. In 2019 International Conference on Robotics and Automation (ICRA), pages 775â781. IEEE.
Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. (2013). The arcade learning environment: An evaluation platform for general agents. Journal of Artiï¬cial Intelligence Research, 47:253â279.
Bengio, Y., Louradour, J., Collobert, R., and Weston, J. (2009). Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41â48. ACM.
Bicchi, A. and Kumar, V. (2000). Robotic grasping and contact: A review. In ICRA, volume 348, page 353. Citeseer.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Botvinick, M. M. (2008). Hierarchical models of behavior and prefrontal function. Trends in cognitive sciences, 12(5):201â208.
Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. (2016). Openai gym. arXiv preprint arXiv:1606.01540.
Burda, Y., Edwards, H., Pathak, D., Storkey, A., Darrell, T., and Efros, A. A. (2019). Large-scale study of curiosity-driven learning. In International Conference on Learning Representations.
Clune, J. (2019). AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artiï¬cial intelligence. arXiv preprint arXiv:1905.10985.
Cobbe, K., Hesse, C., Hilton, J., and Schulman, J. (2019a). Leveraging procedural generation to benchmark reinforcement learning. arXiv preprint arXiv:1912.01588.
Cobbe, K., Klimov, O., Hesse, C., Kim, T., and Schulman, J. (2019b). Quantifying general- ization in reinforcement learning. In Proceedings of the 36th International Conference on Machine Learning1, pages 97:1281â1289.
Coumans, E. and Bai, Y. (2016). Pybullet, a python module for physics simulation for games, robotics and machine learning. GitHub repository.
Dosovitskiy, A. and Koltun, V. (2016). Learning to act by predicting the future. arXiv preprint arXiv:1611.01779.
23
Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
Espeholt, L., Soyer, H., Munos, R., Simonyan, K., Mnih, V., Ward, T., Doron, Y., Firoiu, V., Harley, T., Dunning, I., Legg, S., and Kavukcuoglu, K. (2018). IMPALA: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561.
Ghani, A. R. A., Koganti, N., Solano, A., Iwasawa, Y., Nakayama, K., and Matsuo, Y. (2018). Designing eï¬cient neural attention systems towards achieving human-level sharp vision. In International Conference on Learning Representations Workshop.
Gulcehre, C., Paine, T. L., Shriari, B., Denil, M., Hoï¬man, M., Soyer, H., Tanburn, R., Kapturowski, S., Rabinowitz, N., Williams, D., Barth-Maron, G., Wang, Z., de Freitas, N., and Worlds Team (2019). Making eï¬cient use of demonstrations to solve hard exploration problems. arXiv preprint arXiv:1909.01387.
Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. (2018). Soft actor-critic: Oï¬-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770â778.
Ho, J. and Ermon, S. (2016). Generative adversarial imitation learning. In Advances in neural information processing systems, pages 4565â4573.
Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8):1735â1780.
Hussein, A., Gaber, M. M., Elyan, E., and Jayne, C. (2017). Imitation learning: A survey of learning methods. ACM Computing Surveys (CSUR), 50(2):21.
Jain, M. S. and Lindsey, J. (2018). Semiparametric reinforcement learning. In International Conference on Learning Representations Workshop.
Johnson, M., Hofmann, K., Hutton, T., and Bignell, D. (2016). The malmo platform for artiï¬cial intelligence experimentation. In IJCAI, pages 4246â4247.
Juliani, A., Khalifa, A., Berges, V.-P., Harper, J., Henry, H., Crespi, A., Togelius, J., and Lange, D. (2019). Obstacle tower: A generalization challenge in vision, control, and planning. In IJCAI, pages 2684â2691.
Justesen, N., Rodriguez Torrado, R., Bontrager, P., Khalifa, A., Togelius, J., and Risi, S. (2018). Illuminating generalization in deep reinforcement learning through procedural level generation. In NeurIPS Workshop on Deep Reinforcement Learning.
Kempka, M., Wydmuch, M., Runc, G., Toczek, J., and JaÅkowski, W. (2016). Vizdoom: A doom-based AI research platform for visual reinforcement learning. In Computational Intelligence and Games (CIG), 2016 IEEE Conference on, pages 1â8. IEEE.
24
Unity: A General Platform for Intelligent Agents
Knox, W. B. and Stone, P. (2008). TAMER: Training an agent manually via evaluative reinforcement. In IEEE 7th International Conference on Development and Learning.
Kolve, E., Mottaghi, R., Gordon, D., Zhu, Y., Gupta, A., and Farhadi, A. (2017). Ai2-thor: An interactive 3d environment for visual ai. arXiv preprint arXiv:1712.05474.
Laird, J. and VanLent, M. (2001). Human-level AIâs killer application: Interactive computer games. AI magazine, 22(2):15.
Lake, B. M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40.
Lample, G. and Chaplot, D. S. (2017). Playing fps games with deep reinforcement learning. In AAAI, pages 2140â2146.
LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. nature, 521(7553):436.
Leibo, J. Z., dâAutume, C. d. M., Zoran, D., Amos, D., Beattie, C., Anderson, K., Castañeda, A. G., Sanchez, M., Green, S., Gruslys, A., Legg, S., Hassabis, D., and Botvinick, M. (2018). Psychlab: a psychology laboratory for deep reinforcement learning agents. arXiv preprint arXiv:1801.08116.
Levine, S., Finn, C., Darrell, T., and Abbeel, P. (2016). End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334â1373.
Machado, M. C., Bellemare, M. G., Talvitie, E., Veness, J., Hausknecht, M., and Bowling, M. (2017). Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. arXiv preprint arXiv:1709.06009.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, Georg, P. S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529.
Müller, M. (2002). Computer go. Artiï¬cial Intelligence, 134(1-2):145â179.
Nichol, A. (2019). Competing in the obstacle tower challenge. https://blog.aqnichol. com/2019/07/24/competing-in-the-obstacle-tower-challenge/.
Nichol, A., Pfau, V., Hesse, C., Klimov, O., and Schulman, J. (2018). Gotta learn fast: A new benchmark for generalization in rl. arXiv preprint arXiv:1804.03720.
Oh, J., Chockalingam, V., Singh, S., and Lee, H. (2016). Control of memory, active perception, and action in minecraft. arXiv preprint arXiv:1605.09128.
Orseau, L. and Armstrong, S. (2016). Safely interruptible agents. In 32nd Conference on Uncertainty in Artiï¬cial Intelligence.
25
Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
Pathak, D., Agrawal, P., Efros, A. A., and Darrell, T. (2017). Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning (ICML), volume 2017.
Pathak, D., Gandhi, D., and Gupta, A. (2019a). Self-supervised exploration via disagreement. In Proeedings of the 36th International Conference on Machine Learning.
Pathak, D., Lu, C., Darrell, T., Isola, P., and Efros, A. A. (2019b). Learning to control self-assembling morphologies: A study of generalization via modularity. In Advances in Neural Information Processing Systems.
Perez-Liebana, D., Samothrakis, S., Togelius, J., Schaul, T., and Lucas, S. M. (2016). General video game ai: Competition, challenges and opportunities. In Thirtieth AAAI Conference on Artiï¬cial Intelligence.
Puigdomènech Badia, A., Piot, B., Kapturowski, S., Sprechmann, P., Vitvitskyi, A., Guo, D., and Blundell, C. (2020). Agent57: Outperforming the atari human benchmark. arXiv, pages arXivâ2003.
Ring, M. B. (1994). Continual learning in reinforcement environments. PhD thesis, University of Texas at Austin 78712.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A., and Fei-Fei, L. (2015). Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211â252.
Rusu, A. A., Vecerik, M., Rothörl, T., Heess, N., Pascanu, R., and Hadsell, R. (2016). Sim-to- real robot learning from pixels with progressive nets. arXiv preprint arXiv:1610.04286.
Samuel, A. L. (1959). Some studies in machine learning using the game of checkers. IBM Journal of research and development, 3(3):210â229.
Savva, M., Kadian, A., Maksymets, O., Zhao, Y., Wijmans, E., Jain, B., Straub, J., Liu, J., Koltun, V., Malik, J., Parikh, D., and Batra, D. (2019). Habitat: A Platform for Embodied AI Research. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
Schmidhuber, J. (2015). On learning to think: Algorithmic information theory for novel combinations of reinforcement learning controllers and recurrent neural world models. arXiv preprint arXiv:1511.09249.
Schmidhuber, J. (2018). One big net for everything. arXiv preprint arXiv:1802.08864.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Shannon, C. E. (1950). Xxii. programming a computer for playing chess. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 41(314):256â275.
Shu, T., Xiong, C., and Socher, R. (2017). Hierarchical and interpretable skill acquisition in multi-task reinforcement learning. arXiv preprint arXiv:1712.07294.
26
Unity: A General Platform for Intelligent Agents
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T., and Demis, H. (2017). Mastering the game of go without human knowledge. Nature, 550(7676):354.
Song, Y., Wang, J., Lukasiewicz, T., Xu, Z., Xu, M., Ding, Z., and Wu, L. (2020). Arena: A general evaluation platform and building toolkit for multi-agent intelligence. In AAAI.
Sutton, R. S. and Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press.
Szegedy, C., Vanhoucke, V., Ioï¬e, S., Shlens, J., and Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818â2826.
Tan, J., Zhang, T., Coumans, E., Iscen, A., Bai, Y., Hafner, D., Bohez, S., and Vanhoucke, V. (2018). Sim-to-real: Learning agile locomotion for quadruped robots. arXiv preprint arXiv:1804.10332.
Tassa, Y., Doron, Y., Muldal, A., Erez, T., Li, Y., Casas, D. d. L., Budden, D., Abdolmaleki, A., Merel, J., Lefrancq, A., Lillicrap, T., and Riedmiller, M. (2018a). Deepmind control suite. arXiv preprint arXiv:1801.00690.
Tassa, Y., Doron, Y., Muldal, A., Erez, T., Li, Y., de Las Casas, D., Budden, D., Abdolmaleki, A., Merel, J., Lefrancq, A., Lillicrap, T., and Riedmiller, M. (2018b). DeepMind control suite. Technical report, DeepMind.
Tesauro, G. (1995). Temporal diï¬erence learning and td-gammon. Communications of the ACM, 38(3):58â68.
Tessler, C., Givony, S., Zahavy, T., Mankowitz, D. J., and Mannor, S. (2017). A deep hierarchical approach to lifelong learning in minecraft. In AAAI, volume 3, page 6.
Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. In Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on, pages 23â30. IEEE.
Todorov, E., Erez, T., and Tassa, Y. (2012). Mujoco: A physics engine for model-based In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International control. Conference on, pages 5026â5033. IEEE.
Vinyals, O., Babuschkin, I., Chung, J., Mathieu, M., and Jaderberg, M. (2019). Alphas- tar: Mastering the real-time strategy game starcraft ii. https://deepmind.com/blog/ alphastar-mastering-real-time-strategy-game-starcraft-ii/.
Wang, J. X., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J. Z., Munos, R., Blundell, C., Kumaran, D., and Botvinick, M. (2016). Learning to reinforcement learn. arXiv preprint arXiv:1611.05763.
27
Juliani, Berges, Teng, Cohen, Harper, Elion, Goy, Gao, Henry, Mattar, & Lange
Wang, R., Lehman, J., Clune, J., and Stanley, K. O. (2019). Paired open-ended trail- blazer(POET): Endlessly generating increasingly complex and diverse learning environ- ments and their solutions. arXiv preprint arXiv:1901.01753.
Wu, Y. and Tian, Y. (2017). Training agent for ï¬rst-person shooter game with actor-critic curriculum learning. In International Conference on Learning Representations.
Yan, C., Misra, D., Bennett, A., Walsman, A., Bisk, Y., and Artzi, Y. (2018). CHALET: Cornell house agent learning environment. arXiv preprint arXiv:1801.07357.
Yannakakis, G. N. and Togelius, J. (2018). Artiï¬cial Intelligence and Games. Springer.
Zhang, R., Torabi, F., Guan, L., H. Ballard, D., and Stone, P. (2019). Leveraging human guidance for deep reinforcement learning tasks. In Proceedings of the 28th International Joint Conference on Artiï¬cial Intelligence.
Zhao, Y., Borovikov, I., de Mesentier Silva, F., Beirami, A., Rupert, J., Somers, C., Harder, J., Kolen, J., Pinto, J., Pourabolghasem, R., Pestrak, J., Chaput, H., Sardari, M., Lin, L., Narravula, S., Aghdaie, N., and Zaman, K. (2019). Winning isnât everything: Enhancing game development with intelligent agents. arXiv preprint arXiv:1903.10545.
Zhu, Y., Mottaghi, R., Kolve, E., Lim, J. J., Gupta, A., Fei-Fei, L., and Farhadi, A. (2017). Target-driven visual navigation in indoor scenes using deep reinforcement learning. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 3357â3364. IEEE.
28 | {
"id": "1712.07294"
} |
1809.02156 | Object Hallucination in Image Captioning | Despite continuously improving performance, contemporary image captioning
models are prone to "hallucinating" objects that are not actually in a scene.
One problem is that standard metrics only measure similarity to ground truth
captions and may not fully capture image relevance. In this work, we propose a
new image relevance metric to evaluate current models with veridical visual
labels and assess their rate of object hallucination. We analyze how captioning
model architectures and learning objectives contribute to object hallucination,
explore when hallucination is likely due to image misclassification or language
priors, and assess how well current sentence metrics capture object
hallucination. We investigate these questions on the standard image captioning
benchmark, MSCOCO, using a diverse set of models. Our analysis yields several
interesting findings, including that models which score best on standard
sentence metrics do not always have lower hallucination and that models which
hallucinate more tend to make errors driven by language priors. | http://arxiv.org/pdf/1809.02156 | Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, Kate Saenko | cs.CL, cs.CV | Rohrbach and Hendricks contributed equally; accepted to EMNLP 2018 | null | cs.CL | 20180906 | 20190329 | 9 1 0 2
r a M 9 2 ] L C . s c [
2 v 6 5 1 2 0 . 9 0 8 1 : v i X r a
# Object Hallucination in Image Captioning
Anna Rohrbachâ1, Lisa Anne Hendricksâ1, Kaylee Burns1 , Trevor Darrell1, Kate Saenko2 1 UC Berkeley, 2 Boston University
# Abstract
Despite continuously improving performance, contemporary image captioning models are prone to âhallucinatingâ objects that are not actually in a scene. One problem is that stan- dard metrics only measure similarity to ground truth captions and may not fully capture im- In this work, we propose a age relevance. new image relevance metric to evaluate current models with veridical visual labels and assess their rate of object hallucination. We analyze how captioning model architectures and learn- ing objectives contribute to object hallucina- tion, explore when hallucination is likely due to image misclassiï¬cation or language priors, and assess how well current sentence metrics capture object hallucination. We investigate these questions on the standard image caption- ing benchmark, MSCOCO, using a diverse set of models. Our analysis yields several inter- esting ï¬ndings, including that models which score best on standard sentence metrics do not always have lower hallucination and that mod- els which hallucinate more tend to make errors driven by language priors.
1
# 1 Introduction
Image captioning performance has dramatically improved over the past decade. Despite such impressive results, it is unclear to what extent captioning models actually rely on image con- tent: as we show, existing metrics fall short of fully capturing the captionsâ relevance to the im- age. In Figure 1 we show an example where a competitive captioning model, Neural Baby Talk (NBT) (Lu et al., 2018), incorrectly generates the object âbench.â We refer to this issue as object hallucination.
NBT: A woman talking on a cell phone while sitting on a bench. CIDEr: 0.87, METEOR: 0.23, SPICE: 0.22, CHs: 1.00, CHi: 0.33 TopDown: A woman is talking on a cell phone. CIDEr: 0.54, METEOR: 0.26, SPICE: 0.13, CHs: 0.00, CHi: 0.00
Figure 1: Image captioning models often âhallucinateâ objects that may appear in a given context, like e.g. a bench here. Moreover, the sentence metrics do not al- ways appropriately penalize such hallucination. Our proposed metrics (CHAIRs and CHAIRi) reï¬ect hallu- cination. For CHAIR lower is better.
not expected to describe all objects in the scene. On the other hand, describing objects that are not present in the image has been shown to be less preferable to humans. For example, the LSMDC challenge (Rohrbach et al., 2017a) documents that correctness is more important to human judges than speciï¬city. In another study, (MacLeod et al., 2017) analyzed how visually impaired people re- act to automatic image captions. They found that people vary in their preference of either coverage or correctness. For many visually impaired who value correctness over coverage, hallucination is an obvious concern.
Besides being poorly received by humans, ob- ject hallucination reveals an internal issue of a cap- tioning model, such as not learning a very good representation of the visual scene or overï¬tting to its loss function.
While missing salient objects is also a failure mode, captions are summaries and thus generally
* Denotes equal contribution.
In this paper we assess the phenomenon of object hallucination in contemporary captioning models, and consider several key questions. The
ï¬rst question we aim to answer is: Which mod- els are more prone to hallucination? We analyze this question on a diverse set of captioning models, spanning different architectures and learning ob- jectives. To measure object hallucination, we pro- pose a new metric, CHAIR (Caption Hallucination Assessment with Image Relevance), which cap- tures image relevance of the generated captions. Speciï¬cally, we consider both ground truth object annotations (MSCOCO Object segmentation (Lin et al., 2014)) and ground truth sentence annota- tions (MSCOCO Captions (Chen et al., 2015)). In- terestingly, we ï¬nd that models which score best on standard sentence metrics do not always hallu- cinate less.
The second question we raise is: What are the likely causes of hallucination? While hallucina- tion may occur due to a number of reasons, we believe the top factors include visual misclassiï¬- cation and over-reliance on language priors. The latter may result in memorizing which words âgo togetherâ regardless of image content, which may lead to poor generalization, once the test distri- bution is changed. We propose image and lan- guage model consistency scores to investigate this issue, and ï¬nd that models which hallucinate more tend to make mistakes consistent with a language model.
Finally, we ask: How well do the standard metrics capture hallucination? It is a common practice to rely on automatic sentence metrics, e.g. CIDEr (Vedantam et al., 2015), to evaluate captioning performance during development, and few employ human evaluation to measure the ï¬- nal performance of their models. As we largely rely on these metrics, it is important to under- stand how well they capture the hallucination phe- In Figure 1 we show how two sen- nomenon. tences, from NBT with hallucination and from TopDown model (Anderson et al., 2018) â with- out, are scored by the standard metrics. As we see, hallucination is not always appropriately pe- nalized. We ï¬nd that by using additional ground truth data about the image in the form of object la- bels, our metric CHAIR allows us to catch discrep- ancies that the standard captioning metrics cannot fully capture. We then investigate ways to assess object hallucination risk with the standard metrics. Finally, we show that CHAIR is complementary to the standard metrics in terms of capturing human preference.
# 2 Caption Hallucination Assessment
We ï¬rst introduce our image relevance metric, CHAIR, which assesses captions w.r.t. objects that are actually in an image. It is used as a main tool in our evaluation. Next we discuss the notions of image and language model consistency, which we use to reason about the causes of hallucination.
# 2.1 The CHAIR Metric
To measure object hallucination, we propose the CHAIR (Caption Hallucination Assessment with Image Relevance) metric, which calculates what proportion of words generated are actually in the image according to the ground truth sentences and object segmentations. This metric has two vari- ants: per-instance, or what fraction of object in- stances are hallucinated (denoted as CHAIRi), and per-sentence, or what fraction of sentences include a hallucinated object (denoted as CHAIRs):
CHAIRi = |{hallucinated objects}| |{all objects mentioned}|
CHAIRs = |{sentences with hallucinated object}| |{ all sentences}|
For easier analysis, we restrict our study to the 80 MSCOCO objects which appear in the MSCOCO segmentation challenge. To determine whether a generated sentence contains halluci- nated objects, we ï¬rst tokenize each sentence and then singularize each word. We then use a list of synonyms for MSCOCO objects (based on the list from Lu et al. (2018)) to map words (e.g., âplayerâ) to MSCOCO objects (e.g., âper- sonâ). Additionally, for sentences which in- clude two word compounds (e.g., âhot dogâ) we take care that other MSCOCO objects (in this case âdogâ) are not incorrectly assigned to the list of MSCOCO objects in the sentence. For each ground truth sentence, we determine a list The of MSCOCO objects in the same way. MSCOCO segmentation annotations are used by simply relying on the provided object labels.
We ï¬nd that considering both sources of an- notation is important. For example, MSCOCO contains an object âdining tableâ annotated with segmentation maps. However, humans refer to many different kinds of objects as âtableâ (e.g., âcoffee tableâ or âside tableâ), though these ob- jects are not annotated as they are not speciï¬cally âdining tableâ. By using sentence annotations to
scrape ground truth objects, we account for vari- ation in how human annotators refer to different objects. Inversely, we ï¬nd that frequently humans will not mention all objects in a scene. Qualita- tively, we observe that both annotations are impor- tant to capture hallucination. Empirically, we ver- ify that using only segmentation labels or only ref- erence captions leads to higher hallucination (and practically incorrect) rates.
# Image Consistency
We deï¬ne a notion of image consistency, or how consistent errors from the captioning model are with a model which predicts objects based on an image alone. To measure image consistency for a particular generated word, we train an image model and record P (w|I) or the probability of pre- dicting the word given only the image. To score the image consistency of a caption we use the av- erage of P (w|I) for all MSCOCO objects, where higher values mean that errors are more consis- tent with the image model. Our image model is a multi-label classiï¬cation model with labels corre- sponding to MSCOCO objects (labels determined the same way as is done for CHAIR) which shares the visual features with the caption models.
# 2.3 Language Consistency
We also introduce a notion of language consis- tency, i.e. how consistent errors from the cap- tioning model are with a model which predicts words based only on previously generated words. We train an LSTM (Hochreiter and Schmidhu- ber, 1997) based language model which pre- dicts a word wt given previous words w0:tâ1 on MSCOCO data. We report language consis- tency as 1/R(wt) where R(wt) is the rank of the predicted word in the language model. Again, for a caption we report average rank across all MSCOCO objects in the sentence and higher lan- guage consistency implies that errors are more consistent with the language model.
We illustrate image and language consistency in Figure 2, i.e. the hallucination error (âforkâ) is more consistent with the Language Model predic- tions than with the Image Model predictions. We use these consistency measures in Section 3.3 to help us investigate the causes of hallucination.
Image Model predictions: bowl, broccoli, carrot, dining table Language Model predictions for the last word: fork, spoon, bowl ve i Se Generated caption: A plate of food with broccoli and a fork.
Figure 2: Example of image and language consistency. The hallucination error (âforkâ) is more consistent with the Language Model.
# 3 Evaluation
In this section we present the ï¬ndings of our study, where we aim to answer the questions posed in Section 1: Which models are more prone to hal- lucination? What are the likely causes of halluci- nation? How well do the standard metrics capture hallucination?
# 3.1 Baseline Captioning Models
We compare object hallucination across a wide range of models. We deï¬ne two axes for compari- son: model architecture and learning objective.
Model architecture. Regarding model architec- ture, we consider models both with and without attention mechanisms. In this work, we use âat- tentionâ to refer to any mechanism which learns to focus on different image regions, whether im- age regions be determined by a high level feature map, or by object proposals from a trained de- tector. All models are end-to-end trainable and use a recurrent neural network (LSTM (Hochre- iter and Schmidhuber, 1997) in our case) to output text. For non-attention based methods we consider the FC model from Rennie et al. (2017) which incorporates visual information by initializing the LSTM hidden state with high level image features. We also consider LRCN (Donahue et al., 2015) which considers visual information at each time step, as opposed to just initializing the LSTM hid- den state with extracted features.
For attention based models, we consider Att2In (Rennie et al., 2017), which is similar to the original attention based model proposed by (Xu et al., 2015), except the image feature is only input into the cell gate as this was shown to lead to better performance. We then consider the attention model proposed by (Anderson et al., 2018) which proposes a speciï¬c âtop-down at- tentionâ LSTM as well as a âlanguageâ LSTM.
Generally attention mechanisms operate over high level convolutional layers. The attention mecha- nism from (Anderson et al., 2018) can be used on such feature maps, but Anderson et al. also con- sider feature maps corresponding to object pro- posals from a detection model. We consider both models, denoted as TopDown (feature map ex- tracted from high level convolutional layer) and TopDown-BB (feature map extracted from object proposals from a detection model). Finally, we consider the recently proposed Neural Baby Talk (NBT) model (Lu et al., 2018) which explicitly uses object detections (as opposed to just bound- ing boxes) for sentence generation.
Learning objective. All of the above models are trained with the standard cross entropy (CE) loss as well as the self-critical (SC) loss pro- posed by Rennie et al. (2017) (with an exception of NBT, where only the CE version is included). The SC loss directly optimizes the CIDEr metric with a reinforcement learning technique. We ad- ditionally consider a model trained with a GAN loss (Shetty et al., 2017) (denoted GAN), which applies adversarial training to obtain more diverse and âhuman-likeâ captions, and their respective non-GAN baseline with the CE loss.
TopDown deconstruction. To better evaluate how each component of a model might inï¬u- ence hallucination, we âdeconstructâ the Top- Down model by gradually removing components until it is equivalent to the FC model. The interme- diate networks are NoAttention, in which the atten- tion mechanism is replaced by mean pooling, No- Conv in which spatial feature maps are not input into the network (the model is provided with fully connected feature maps), SingleLayer in which only one LSTM is included in the model, and ï¬- nally, instead of inputting visual features at each time step, visual features are used to initialize the LSTM embedding as is done in the FC model. By deconstructing the TopDown model in this way, we ensure that model design choices and hyperpa- rameters do not confound results.
Implementation details. All the baseline mod- els employ features extracted from the fourth layer of ResNet-101 (He et al., 2016), except for the GAN model which employs ResNet-152. Mod- els without attention traditionally use fully con- nected layers as opposed to convolutional layers. However, as ResNet-101 does not have intermedi- ate fully connected layers, it is standard to average
pool convolutional activations and input these fea- tures into non-attention based description models. Note that this means the difference between the NoAttention and NoConv model is that the NoAt- tention model learns a visual embedding of spatial feature maps as opposed to relying on pre-pooled feature maps. All models except for TopDown- BB, NBT, and GAN are implemented in the same open source framework from Luo et al. (2018).1
Training/Test splits. We evaluate the captioning models on two MSCOCO splits. First, we con- sider the split from Karpathy et al. (Karpathy and Fei-Fei, 2015), speciï¬cally in that case the mod- els are trained on the respective Karpathy Training set, tuned on Karpathy Validation set and the re- ported numbers are on the Karpathy Test set. We also consider the Robust split, introduced in (Lu et al., 2018), which provides a compositional split for MSCOCO. Speciï¬cally, it is ensured that the object pairs present in the training, validation and test captions do not overlap. In this case the cap- tioning models are trained on the Robust Training set, tuned on the Robust Validation set and the re- ported numbers are on the Robust Test set.
# 3.2 Which Models Are More Prone To Hallucination?
We ï¬rst present how well competitive models perform on our proposed CHAIR metric (Ta- ble 1). We report CHAIR at sentence-level and at instance-level (CHs and CHi in the table). In general, we see that models which perform bet- ter on standard evaluation metrics, perform bet- ter on CHAIR, though this is not always true. In particular, models which optimize for CIDEr fre- quently hallucinate more. Out of all generated captions on the Karpathy Test set, anywhere be- tween 7.4% and 17.7% include a hallucinated ob- ject. When shifting to more difï¬cult training sce- narios in which new combinations of objects are seen at test time, hallucination consistently in- creases (Table 2).
Karpathy Test set. Table 1 presents object hal- lucination on the Karpathy Test set. All sentences are generated using beam search and a beam size of 5. We note a few important trends. First, mod- els with attention tend to perform better on the CHAIR metric than models without attention. As we explore later, this is likely because they have
1https://github.com/ruotianluo/ self-critical.pytorch
Cross Entropy Self Critical Model Att. S M C CHs_ CHi S M C CHs_ CHi LRCN* 17.0 23.9 90.8 17.7) 12.6 | 16.9 23.5 93.0 17.7 12.9 FC* 17.9 249 958 154 11.0 | 184 25.0 103.9 144 10.1 Att2In* v 18.9 25.8 102.0 10.8 7.9 | 19.0 25.7 106.7 12.2 8.4 TopDown* v 19.9 26.7 107.6 84 61 | 204 27.0 1172 13.6 8.8 TopDown-BB ¢ v | 204 27.1 113.7 8.3 5.9 | 214 27.7 1206 104 69 NBTt v 194 26.2 105.1 74 54 - - - - Cross Entropy GAN GAN? 18.7. 25.7 100.4 10.7 7.7 | 166 22.7 79.3 82 65
Table 1: Hallucination analysis on the Karpathy Test set: Spice (S), CIDEr (C) and METEOR (M) scores across different image captioning models as well as CHAIRs (sentence level, CHs) and CHAIRi (instance level, CHi). All models are generated with beam search (beam size=5). * are trained/evaluated within the same implementation (Luo et al., 2018), â are trained/evaluated with implementation publicly released with corresponding papers, and â¡ sentences obtained directly from the author. For discussion see Section 3.2.
a better understanding of the image. In particular, methods that incorporate bounding box attention (as opposed to relying on coarse feature maps), consistently have lower hallucination as measured by our CHAIR metric. Note that the NBT model does not perform as well on standard captioning metrics as the TopDown-BB model but has lower hallucination. This is perhaps because bounding box proposals come from the MSCOCO detec- tion task and are thus âin-domainâ as opposed to the TopDown-BB model which relies on proposals learned from the Visual Genome (Krishna et al., 2017) dataset. Second, frequently training mod- els with the self-critical loss actually increases the amount of hallucination. One hypothesis is that CIDEr does not penalize object hallucination suf- ï¬ciently, leading to both increased CIDEr and in- creased hallucination. Finally, the LRCN model has a higher hallucination rate than the FC model, indicating that inputting the visual features only at the ï¬rst step, instead of at every step, leads to more image relevant captions.
|Au] SM C CHs CHi FC* 15.5 22.7 76.2 213 15.3 Au2in® | ¥ |16.9 24.0 85.8 14.1 10.1 TopDown*| ¥ |17.7 24.7 89.8 113 7.9 NBT | v /18.2 24.9 93.5 6.2 4.2
Table 2: Hallucination Analysis on the Robust Test set: Spice (S), CIDEr (C) and METEOR (M) scores across dif- ferent image captioning models as well as CHAIRs (sen- tence level, CHs) and CHAIRi (instance level, CHi). * are trained/evaluated within the same implementation (Luo et al., 2018), â are trained/evaluated with implementation publicly released with corresponding papers. All models trained with cross-entropy loss. See Section 3.2.
decreases, implying that the GAN loss actually helps decrease hallucination. Unlike the self crit- ical loss, the GAN loss encourages sentences to be human-like as opposed to optimizing a metric. Human-like sentences are not likely to hallucinate objects, and a hallucinated object is likely a strong signal to the discriminator that a sentence is gen- erated, and is not from a human.
We also consider a GAN based model (Shetty et al., 2017) in our analysis. We include a base- line model (trained with CE) as well as a model trained with the GAN loss.2 Unlike other mod- els, the GAN model uses a stronger visual network (ResNet-152) which could explain the lower hal- lucination rate for both the baseline and the GAN model. Interestingly, when comparing the baseline and the GAN model (both trained with ResNet- 152), standard metrics decrease substantially, even though human evaluations from (Shetty et al., 2017) demonstrate that sentences are of compa- rable quality. On the other hand, hallucination
We also assess the effect of beam size on CHAIR. We ï¬nd that generally beam search de- creases hallucination. We use beam size of 5, and for all models trained with cross entropy, it out- performs lower beam sizes on CHAIR. However, when training models with the self-critical loss, beam size sometimes leads to worse performance on CHAIR. For example, on the Att2In model trained with SC loss, a beam size of 5 leads to 12.2 on CHAIRs and 8.4 on CHAIRi, while a beam size of 1 leads to 10.8 on CHAIRs and 8.1 on CHAIRi. Robust Test set. Next we review the hallucina- tion behavior on the Robust Test set (Table 2). For almost all models the hallucination increases on the Robust split (e.g. for TopDown from 8.4% to 11.3% of sentences), indicating that the issue of
2Sentences were procured directly from the authors.
: A group of people s table with laptops. TopDown: A pile of luggage sitting on top of a table. NBT: Several pieces of luggage sitting on a table. NBT: A group of people sitting around a rable with laptop. âTopDown: A cat sitting on top of a laptop computer. NBT: A cat sitting on a table next to a computer. âTopDown: A brown dog sitting on top of a chair. NBT: A brown and white dog sitting under an umbrella. âTopDown: A kitchen with a stove and a sink. NBT: A kitchen with a stove and a sink. TopDown: A couple of cats laying on top of a bed. NBT: A couple of cats laying on top of a bed. Lif âTopDown: A man standing on a beach holding a surfboard. NBT: A man standing on top of a sandy beach. TopDown: Aa man and a woman are playing with a frisbee. NBT: A man riding a skateboard down a street.
Figure 3: Examples of object hallucination from two state-of-the-art captioning models, TopDown and NBT, see Section 3.2.
hallucination is more critical in scenarios where test examples can not be assumed to have the same distribution as train examples. We again note that attention is helpful for decreasing hallucination. We note that the NBT model actually has lower hallucination scores on the robust split. This is in part because when generating sentences we use the detector outputs provided by Lu et al. (2018). Separate detectors on the Karpathy test and robust split are not available and the detector has access to images in the robust split during training. Con- sequently, the comparison between NBT and other models is not completely fair, but we include the number for completeness.
In addition to the Robust Test set, we also con- sider a set of MSCOCO in which certain ob- jects are held out, which we call the Novel Ob- ject split (Hendricks et al., 2016). We train on the training set outlined in (Hendricks et al., 2016) and test on the Karpathy test split, which includes ob- jects unseen during training. Similarly to the Ro- bust Test set, we see hallucination increase sub- stantially on this split. For example, for the Top- Down model hallucination increases from 8.4% to 12.1% for CHAIRs and 6.0% to 9.1% for CHAIRi.
We ï¬nd no obvious correlation between the av- erage length of the generated captions and the hal- lucination rate. Moreover, vocabulary size does not correlate with hallucination either, i.e. mod- els with more diverse descriptions may actually hallucinate less. We notice that hallucinated ob- jects tend to be mentioned towards the end of the sentence (on average at position 6, with average
sentence length 9), suggesting that some of the preceding words may have triggered hallucination. We investigate this below.
Which objects are hallucinated and in what context? Here we analyze which MSCOCO ob- jects tend to be hallucinated more often and what are the common preceding words and image con- text. Across all models the super-category Fur- niture is hallucinated most often, accounting for 20 â 50% of all hallucinated objects. Other com- mon super-categories are Outdoor objects, Sports and Kitchenware. On the Robust Test set, Ani- mals are often hallucinated. The dining table is the most frequently hallucinated object across all models (with an exception of GAN, where person is the most hallucinated object). We ï¬nd that often words like âsittingâ and âtopâ precede the âdin- ing tableâ hallucination, implying the two com- mon scenarios: a person âsitting at the tableâ and an object âsitting on top of the tableâ (Figure 3, row 1, examples 1, 2). Similar observations can be made for other objects, e.g. word âkitchenâ of- ten precedes âsinkâ hallucination (Figure 3, row 1, example 3) and âlayingâ precedes âbedâ (Fig- ure 3, row 1, example 4). At the same time, if we look at which objects are actually present in the image (based on MSCOCO object annotations), we can similarly identify that presence of a âcatâ co-occurs with hallucinating a âlaptopâ (Figure 3, row 2, example 1), a âdogâ â with a âchairâ (Fig- ure 3, row 2, example 2) etc. In most cases we observe that the hallucinated objects appear in the relevant scenes (e.g. âsurfboardâ on a beach), but
Karpathy Test Set 0.4 0.3 sy S 3 0.2 [a 8 [s) 0.1 0 TD No Att NoConv_ Single @ CHAIRi mIMConsistency m LM aeistency Robust Test Set 0.4 0.3 0.2 0.1 0 TD No Att NoConv_ Single Consistency @CHAIRi @IMConsistency ⢠LM Consistency
Figure 4: models. hallucination tend to make errors consistent with the language model, see Section 3.3. Image and Language model consistency (IM, LM) and CHAIRi (instance-level, CHi) on deconstructed TopDown Images with less hallucination tend to make errors consistent with the image model, whereas models with more
there are cases where objects are hallucinated out of context (e.g. âbedâ in the bathroom, Figure 3, row 1, example 4).
# 3.3 What Are The Likely Causes Of Hallucination?
Karpathy Split TD No Attention No Conv Single Layer FC S 19.5 18.8 15.7 15.5 16.4 M 26.1 25.6 22.9 22.7 23.3 C CHs CHi 103.4 99.7 81.3 80.2 85.1 10.8 14.2 25.7 25.7 23.6 7.5 9.5 17.8 18.2 15.8
In this section we investigate the likely causes of object hallucination. We have earlier described how we deconstruct the TopDown model to en- able a controlled experimental setup. We rely on the deconstructed TopDown models to analyze the impact of model components on hallucination.
Table 3: Hallucination analysis on deconstructed TopDown models with sentence metrics SPICE (S), METEOR (M), and CIDEr (C), CHAIRs (sentence level, CHs) and CHAIRi (in- stance level, CHi). See Section 3.3.
First, we summarize the hallucination analysis on the deconstructed TopDown models (Table 3). Interestingly, the NoAttention model does not do substantially worse than the full model (w.r.t. sen- tence metrics and CHAIR). However, removing Conv input (NoConv model) and relying only on FC features, decreases the performance dramati- cally. This suggests that much of the gain in at- tention based models is primarily due to access to feature maps with spatial locality, not the actual attention mechanism. Also, similar to LRCN vs. FC in Table 1, initializing the LSTM hidden state with image features, as opposed to inputting image features at each time step, leads to lower halluci- nation (Single Layer vs. FC). This is somewhat surprising, as a model which has access to image information at each time step should be less likely to âforgetâ image content and hallucinate objects. However, it is possible that models which include image inputs at each time step with no access to spatial features overï¬t to the visual features.
capture how consistent the hallucinations errors are with image- / language-only models.
Figure 4 shows the CHAIR metric, image con- sistency and language consistency for the decon- structed TopDown models on the Karpathy Test set (left) and the Robust Test set (right). We note that models with less hallucination tend to make errors consistent with the image model, whereas models with more hallucination tend to make er- rors consistent with the language model. This im- plies that models with less hallucination are bet- ter at integrating knowledge from an image into the sentence generation process. When looking at the Robust Test set, Figure 4 (right), which is more challenging, as we have shown earlier, we see that image consistency decreases when com- paring to the same models on the Karpathy split, whereas language consistency is similar across all models trained on the Robust split. This is perhaps because the Robust split contains novel composi- tions of objects at test time, and all of the models are heavily biased by language.
Now we investigate what causes hallucination using the deconstructed TopDown models and the image consistency and language consistency scores, introduced in Sections 2.2 and 2.3 which
Finally, we measure image and language con- sistency during training for the FC model and note that at the beginning of training errors are more
TD: A cat bed in a room. S: 12.1 M: 23.8 C: 69.7 TD Restrict: A bed with a blanket and a pillow on it. S: 23.5 M: 25.4 C:52.5 iting ona TD: A cat laying on the ground with a frisbee. S:8.0 M: 13.1 C:37.0 TD Restrict: A black and white animal laying on the ground. S:7.7 M:15.9 C:17.4
Figure 5: Examples of how TopDown (TD) sentences change when we enforce that objects cannot be hallucinated: SPICE (S), Meteor (M), CIDEr (C), see Section 3.4.
consistent with the language model, whereas to- wards the end of training, errors are more con- sistent with the image model. This suggests that models ï¬rst learn to produce ï¬uent language be- fore learning to incorporate visual information.
# 3.4 How Well Do The Standard Metrics Capture Hallucination?
In this section we analyze how well SPICE (An- derson et al., 2016), METEOR (Banerjee and Lavie, 2005), and CIDEr (Vedantam et al., 2015) capture hallucination. All three metrics do penal- ize sentences for mentioning incorrect words, ei- ther via an F score (METEOR and SPICE) or co- sine distance (CIDEr). However, if a caption men- tions enough words correctly, it can have a high METEOR, SPICE, or CIDEr score while still hal- lucinating speciï¬c objects.
Our ï¬rst analysis tool is the TD-Restrict model. This is a modiï¬cation of the TopDown model, where we enforce that MSCOCO objects which are not present in an image are not generated in the caption. We determine which words refer to objects absent in an image following our approach in Section 2.1. We then set the log probability for such words to a very low value. We generate sen- tences with the TopDown and TD-Restrict model with beam search of size 1, meaning all words pro- duced by both models are the same, until the Top- Down model produces a hallucinated word.
We compare which scores are assigned to such captions in Figure 5. TD-Restrict generates cap- tions that do not contain hallucinated objects, while TD hallucinates a âcatâ in both cases. In Figure 5 (left) we see that CIDEr scores the more correct caption much lower. In Figure 5 (right), the TopDown model incorrectly calls the animal a âcat.â Interestingly, it then correctly identiï¬es
CIDEr METEOR SPICE FC Att2In TopDown 0.258 0.228 0.185 0.240 0.210 0.168 0.318 0.284 0.215
Table 4: Pearson correlation coefï¬cients between 1-CHs and CIDEr, METEOR, and SPICE scores, see Section 3.4.
# SPICE
0-10 10-20 20-30 30-40 40-50 50-60 0 0.05 0.1 0.15
Difference in % Sentences with Hallucination
Figure 6: Difference in percentage of sentences with no hal- lucination for TopDown and FC models when SPICE scores fall into speciï¬c ranges. For sentences with low SPICE scores, the hallucination is generally larger for the FC model, even though the SPICE scores are similar, see Section 3.4.
the âfrisbee,â which the TD-Restrict model fails to mention, leading to lower SPICE and CIDEr.
In Table 4 we compute Pearson correlation co- efï¬cient between individual sentence scores and the absence of hallucination, i.e. 1âCHAIRs; we ï¬nd that SPICE consistently correlates higher with 1âCHAIRs. E.g., for the FC model the correlation for SPICE is 0.32, while for METEOR and CIDEr â around 0.25.
We further analyze the metrics in terms of their predictiveness of hallucination risk. Predictive- ness means that a certain score should imply a cer- tain percentage of hallucination. Here we show the results for SPICE and the captioning models FC and TopDown. For each model and a score in- terval (e.g. 10 â 20) we compute the percentage of captions without hallucination (1âCHAIRs). We plot the difference between the percentages from both models (TopDown - FC) in Figure 6. Com- paring the models, we note that even when scores are similar (e.g., all sentences with SPICE score in the range of 10 â 20), the TopDown model has fewer sentences with hallucinated objects. We see similar trends across other metrics. Consequently, object hallucination can not be always predicted based on the traditional sentence metrics.
Is CHAIR complementary to standard met- rics? In order to measure usefulness of our pro-
0.2
Metric Metric +(1-CHs) Metric +(1-CHi) METEOR CIDEr SPICE 0.269 0.282 0.248 0.299 0.321 0.277 0.304 0.322 0.281
Table 5: Pearson correlation coefï¬cients between individ- ual/combined metrics and human scores. See Section 3.4.
posed metrics, we have conducted the following human evaluation (via the Amazon Mechanical Turk). We have randomly selected 500 test images and respective captions from 5 models: non-GAN baseline, GAN, NBT, TopDown and TopDown - Self Critical. The AMT workers were asked to score the presented captions w.r.t. the given image based on their preference. They could score each caption from 5 (very good) to 1 (very bad). We did not use ranking, i.e. different captions could get the same score; each image was scored by three annotators, and the average score is used as the ï¬- nal human score. For each image we consider the 5 captions from all models and their correspond- ing sentence scores (METEOR, CIDEr, SPICE). We then compute Pearson correlation between the human scores and sentence scores; we also con- sider a simple combination of sentence metrics and 1-CHAIRs or 1-CHAIRi by summation. The ï¬nal correlation is computed by averaging across all 500 images. The results are presented in Ta- ble 5. Our ï¬ndings indicate that a simple combi- nation of CHAIRs or CHAIRi with the sentence metrics leads to an increased correlation with the human scores, showing the usefulness and com- plementarity of our proposed metrics.
Does hallucination impact generation of other words? Hallucinating objects impacts sentence quality not only because an object is predicted in- correctly, but also because the hallucinated word impacts generation of other words in the sen- tence. Comparing the sentences generated by Top- Down and TD-Restrict allows us to analyze this phenomenon. We ï¬nd that after the hallucinated word is generated, the following words in the sen- tence are different 47.3% of the time. This im- plies that hallucination impacts sentence quality beyond simply naming an incorrect object. We ob- serve that one hallucination may lead to another, e.g. hallucinating a âcatâ leading to hallucinating a âchairâ, hallucinating a âdogâ â to a âfrisbeeâ.
# 4 Discussion
In this work we closely analyze hallucination in object captioning models. Our work is similar to other works which attempt to characterize ï¬aws of different evaluation metrics (Kilickaya et al., 2016), though we focus speciï¬cally on halluci- nation. Likewise, our work is related to other work which aims to build better evaluation tools ((Vedantam et al., 2015), (Anderson et al., 2016), (Cui et al., 2018)). However, we focus on carefully quantifying and characterizing one important type of error: object hallucination.
A signiï¬cant number of objects are hallucinated in current captioning models (between 5.5% and 13.1% of MSCOCO objects). Furthermore, hal- lucination does not always agree with the output of standard captioning metrics. For instance, the popular self critical loss increases CIDEr score, but also the amount of hallucination. Addition- ally, we ï¬nd that given two sentences with similar CIDEr, SPICE, or METEOR scores from two dif- ferent models, the number of hallucinated objects might be quite different. This is especially appar- ent when standard metrics assign a low score to a generated sentence. Thus, for challenging cap- tion tasks on which standard metrics are currently poor (e.g., the LSMDC dataset (Rohrbach et al., 2017b)), the CHAIR metric might be helpful to tease apart the most favorable model. Our results indicate that CHAIR complements the standard sentence metrics in capturing human preference.
Additionally, attention lowers hallucination, but it appears that much of the gain from attention models is due to access to the underlying convo- lutional features as opposed the attention mecha- nism itself. Furthermore, we see that models with stronger image consistency frequently hallucinate fewer objects, suggesting that strong visual pro- cessing is important for avoiding hallucination.
Based on our results, we argue that the de- sign and training of captioning models should be guided not only by cross-entropy loss or standard sentence metrics, but also by image relevance. Our CHAIR metric gives a way to evaluate the phe- nomenon of hallucination, but other image rele- vance metrics e.g. those that incorporate missed salient objects, should also be investigated. We believe that incorporating visual information in the form of ground truth objects in a scene (as opposed to only reference captions) helps us better under- stand the performance of captioning models.
# References
Peter Anderson, Basura Fernando, Mark Johnson, and Spice: Semantic propo- Stephen Gould. 2016. In European sitional image caption evaluation. Conference on Computer Vision, pages 382â398. Springer.
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for In Proceedings of the image captioning and vqa. IEEE Conference on Computer Vision and Pattern Recognition.
Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved In Proceed- correlation with human judgments. ings of the ACL Workshop on Intrinsic and Extrin- sic Evaluation Measures for Machine Translation and/or Summarization, pages 65â72.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakr- ishna Vedantam, Saurabh Gupta, Piotr Doll´ar, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
Yin Cui, Guandao Yang, Andreas Veit, Xun Huang, and Serge Belongie. 2018. Learning to evaluate im- age captioning. In CVPR.
Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadar- rama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recogni- In Proceedings of the IEEE tion and description. Conference on Computer Vision and Pattern Recog- nition, pages 2625â2634.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- In Proceedings of the IEEE conference on nition. computer vision and pattern recognition, pages 770â 778.
Lisa Anne Hendricks, Subhashini Venugopalan, Mar- cus Rohrbach, Raymond Mooney, Kate Saenko, and Trevor Darrell. 2016. Deep compositional cap- tioning: Describing novel object categories without In Proceedings of the IEEE paired training data. Conference on Computer Vision and Pattern Recog- nition, pages 1â10.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Neural computation, Long short-term memory. 9(8):1735â1780.
Andrej Karpathy and Li Fei-Fei. 2015. Deep visual- semantic alignments for generating image descrip- the IEEE conference tions. on computer vision and pattern recognition, pages 3128â3137.
Mert Kilickaya, Aykut Erdem, Nazli Ikizler-Cinbis, and Erkut Erdem. 2016. Re-evaluating automatic
metrics for image captioning. In European Chapter of the Association for Computational Linguistics.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image anno- tations. International Journal of Computer Vision, 123(1):32â73.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: In European confer- Common objects in context. ence on computer vision, pages 740â755. Springer.
Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2018. Neural baby talk. In Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition.
Ruotian Luo, Brian Price, Scott Cohen, and Gregory Shakhnarovich. 2018. Discriminability objective for training descriptive captions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Haley MacLeod, Cynthia L. Bennett, Meredith Ringel Morris, and Edward Cutrell. 2017. Understanding blind peoples experiences with computer-generated captions of social media images. In Proceedings of the 2017 SIGCHI Conference on Human Factors in Computing Systems.
Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition.
Anna Rohrbach, Makarand Tapaswi, Atousa Torabi, Tegan Maharaj, Marcus Rohrbach, Sanja Fi- dler Christopher Pal, and Bernt Schiele. 2017a. Joint Video and Language Understand- The ing Workshop: MovieQA and The Large Scale Movie Description Challenge (LSMDC). https://sites.google.com/site/ describingmovies/lsmdc-2017.
Anna Rohrbach, Atousa Torabi, Marcus Rohrbach, Niket Tandon, Christopher Pal, Hugo Larochelle, Aaron Courville, and Bernt Schiele. 2017b. Movie description. International Journal of Computer Vi- sion, 123(1):94â120.
Rakshith Shetty, Marcus Rohrbach, Lisa Anne Hen- dricks, Mario Fritz, and Bernt Schiele. 2017. Speak- ing the same language: Matching machine to human captions by adversarial training. In Proceedings of the IEEE International Conference on Computer Vi- sion (ICCV).
Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image de- In Proceedings of the IEEE scription evaluation.
Conference on Computer Vision and Pattern Recog- nition, pages 4566â4575.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual at- In International Conference on Machine tention. Learning, pages 2048â2057. | {
"id": "1504.00325"
} |
1809.00732 | emrQA: A Large Corpus for Question Answering on Electronic Medical Records | We propose a novel methodology to generate domain-specific large-scale
question answering (QA) datasets by re-purposing existing annotations for other
NLP tasks. We demonstrate an instance of this methodology in generating a
large-scale QA dataset for electronic medical records by leveraging existing
expert annotations on clinical notes for various NLP tasks from the community
shared i2b2 datasets. The resulting corpus (emrQA) has 1 million
question-logical form and 400,000+ question-answer evidence pairs. We
characterize the dataset and explore its learning potential by training
baseline models for question to logical form and question to answer mapping. | http://arxiv.org/pdf/1809.00732 | Anusri Pampari, Preethi Raghavan, Jennifer Liang, Jian Peng | cs.CL | Accepted at Conference on Empirical Methods in Natural Language
Processing (EMNLP) 2018 | null | cs.CL | 20180903 | 20180903 | 8 1 0 2
p e S 3 ] L C . s c [
1 v 2 3 7 0 0 . 9 0 8 1 : v i X r a
# emrQA: A Large Corpus for Question Answering on Electronic Medical Records
# Anusri Pampari* Preethi Raghavan'* Jennifer Liang! and Jian Peng*?
MIT-IBM Watson AI Lab, Cambridge, MA tIBM TJ Watson Research Center, Yorktown Heights, NY âDept. of Computer Science, University of Illinois Urbana Champaign, IL Carle Illinois College of Medicine, University of Illinois Urbana Champaign, IL *{pampari2, jianpeng}@illinois.edu t {praghav, jjliang}@us.ibm.com
# Abstract
We propose a novel methodology to generate domain-speciï¬c large-scale question answer- ing (QA) datasets by re-purposing existing an- notations for other NLP tasks. We demonstrate an instance of this methodology in generating a large-scale QA dataset for electronic medical records by leveraging existing expert annota- tions on clinical notes for various NLP tasks from the community shared i2b2 datasets§. The resulting corpus (emrQA) has 1 million question-logical form and 400,000+ question- answer evidence pairs. We characterize the dataset and explore its learning potential by training baseline models for question to logi- cal form and question to answer mapping.
# Introduction
Automatic question answering (QA) has made big strides with several open-domain and machine comprehension systems built using large-scale an- notated datasets (Voorhees et al., 1999; Ferrucci et al., 2010; Rajpurkar et al., 2016; Joshi et al., 2017). However, in the clinical domain this prob- lem remains relatively unexplored. Physicians fre- quently seek answers to questions from unstruc- tured electronic medical records (EMRs) to sup- port clinical decision-making (Demner-Fushman et al., 2009). But in a signiï¬cant majority of cases, they are unable to unearth the information they want from EMRs (Tang et al., 1994). Moreover to date, there is no general system for answering natural language questions asked by physicians on a patientâs EMR (Figure 1) due to lack of large- scale datasets (Raghavan and Patwardhan, 2016). EMRs are a longitudinal record of a patientâs health information in the form of unstructured clinical notes (progress notes, discharge sum- maries etc.) and structured vocabularies. Physi-
Record Date: 08/09/98 08/31/96 ascending aortic root replacement with homograft with omentopexy. The patient continued to be hemodynam- ically stable making good progress. Physical examination: BMI: 33.4 Obese, high risk. Pulse: 60. resp. rate: 18
Question: Has the patient ever had an abnormal BMI? Answer: BMI: 33.4 Obese, high risk Question: When did the patient last receive a homograft replacement ? Answer: 08/31/96 ascending aortic root replacement with homograft with omentopexy.
Figure 1: Question-Answer pairs from emrQA clinical note.
cians wish to answer questions about medical en- tities and relations from the EMR, requiring a deeper understanding of clinical notes. While this may be likened to machine comprehension, the longitudinal nature of clinical discourse, lit- tle to no redundancy in facts, abundant use of domain-speciï¬c terminology, temporal narratives with multiple related diseases, symptoms, medi- cations that go back and forth in time, and mis- spellings, make it complex and difï¬cult to ap- ply existing NLP tools (Demner-Fushman et al., 2009; Raghavan and Patwardhan, 2016). More- over, answers may be implicit or explicit and may require domain-knowledge and reasoning across clinical notes. Thus, building a credible QA sys- tem for patient-speciï¬c EMR QA requires large- scale question and answer annotations that sufï¬- ciently capture the challenging nature of clinical narratives in the EMR. However, serious privacy concerns about sharing personal health informa- tion (Devereaux, 2013; Krumholz et al., 2016), and the tedious nature of assimilating answer an- notations from across longitudinal clinical notes, makes this task impractical and possibly erroneous to do manually (Lee et al., 2017).
âPart of this work was done during an internship at IBM §https://www.i2b2.org/NLP/DataSets/
In this work, we address the lack of any pub- licly available EMR QA corpus by creating a large-scale dataset, emrQA, using a novel gener-
Ontology/Schema Representation of Data J â» Logical Form Annotation Zz [Bein | â Question | Template Normalization Logic: Tem| What is the dosage of |meUication| ? entity | What is the dosage of Nitroglycerin ? = al Form plate MedicationEvent (| medication|) [dosag âQuestion Entity Placeholder | Entity type = ee MedicationEvent (Nitroglycerin) [dosage=x] L Answer = 40mg, Evidence = Nitroglycerin 40 mg daily, evening eae Medication > Zz Relation 12b2 dataset Clinical Notes ] <Medication = Nitroglycerin Electronic Medical Record dosage = 40mg >
Figure 2: Our QA dataset generation framework using existing i2b2 annotations on a given patientâs record to generate a question, its logical form and answer evidence. The highlights in the ï¬gure show the annotations being used for this example.
ation framework that allows for minimal expert involvement and re-purposes existing annotations available for other clinical NLP tasks (i2b2 chal- lenge datasets (Guo et al., 2006)). The annota- tions serve as a proxy-expert in generating ques- tions, answers, and logical forms. Logical forms provide a human-comprehensible symbolic repre- sentation, linking questions to answers, and help build interpretable models, critical to the medical domain (Davis et al., 1977; Vellido et al., 2012). We analyze the emrQA dataset in terms of ques- tion complexity, relations, and the reasoning re- quired to answer questions, and provide neural and heuristic baselines for learning to predict question- logical forms and question-answers.
The main contributions of this work are as fol- lows:
⢠A novel framework for systematic generation of domain-speciï¬c large-scale QA datasets that can be used in any domain where manual annotations are challenging to obtain but lim- ited annotations may be available for other NLP tasks.
⢠The ï¬rst accessible patient-speciï¬c EMR QA dataset, emrQAâ, consisting of 400,000 question-answer pairs and 1 million question- logical form pairs. The logical forms will allow users to train and benchmark inter- pretable models that justify answers with cor- responding logical forms.
# 2 Related Work
Question Answering (QA) datasets are classiï¬ed into two main categories: (1) machine comprehen- sion (MC) using unstructured documents, and (2) QA using Knowledge Bases (KBs).
MC systems aim to answer any question that could be posed against a reference text. Recent advances in crowd-sourcing and search engines have resulted in an explosion of large-scale (100K) MC datasets for factoid QA, having ample re- dundant evidence in text (Rajpurkar et al., 2016; Trischler et al., 2016; Joshi et al., 2017; Dhingra et al., 2017). On the other hand, complex domain- speciï¬c MC datasets such as MCTest (Richardson et al., 2013), biological process modeling (Berant et al., 2014), BioASQ (Tsatsaronis et al., 2015), InsuranceQA (Feng et al., 2015), etc have been limited in scale (500-10K) because of the com- plexity of the task or the need for expert anno- tations that cannot be crowd-sourced or gathered In contrast to the open-domain, from the web. EMR data cannot be released publicly due to pri- vacy concerns (Å uster et al., 2017). Also, anno- tating unstructured EMRs requires a medical ex- pert who can understand and interpret clinical text. Thus, very few datasets like i2b2, MIMIC (John- son et al., 2016) (developed over several years in collaboration with large medical groups and hos- pitals), share small-scale annotated clinical notes. In this work, we take advantage of the limited ex- pertly annotated resources to generate emrQA.
⢠Two new reasoning challenges, namely arith- metic and temporal reasoning, that are absent in open-domain datasets like SQuAD (Ra- jpurkar et al., 2016).
âhttps://github.com/panushri25/emrQA, scripts to gener- ate emrQA from i2b2 data. i2b2 data is accessible by every- one subject to a license agreement.
KB-based QA datasets, used for semantic pars- ing, are traditionally limited by the requirement of annotated question and logical form (LF) pairs for supervision where the LF are used to retrieve an- swers from a schema (Cai and Yates, 2013; Lopez et al., 2013; Bordes et al., 2015). Roberts and Demner-Fushman (2016) generated a corpus by
Datasets Relations 261 Medications 255,908 198,739 119 36,746 Heart disease 30,731 1,118 Note len. 280 23,437 Obesity Smoking 502 6 4,518 455,837 1,295,814 2,425 emrQA
Table 1: (right) emrQA properties with length in tokens, averaged
manually annotating LFs on 468 EMR questions (not released publicly), thus limiting its ability to create large scale datasets. In contrast, we only collect LFs for question templates from a domain- expert - the rest of our corpus is automatically gen- erated.
Recent advances in QA combine logic-based and neural MC approaches to build hybrid mod- els (Usbeck et al., 2015; Feng et al., 2016; Palangi et al., 2018). These models are driven to combine the accuracy of neural approaches (Hermann et al., 2015) and the interpretability of the symbolic rep- resentations in logic-based methods (Gao et al.; Chabierski et al., 2017). Building interpretable yet accurate models is extremely important in the medical domain (Shickel et al., 2017). We gen- erate large-scale ground truth annotations (ques- tions, logical forms, and answers) that can provide supervision to learn such hybrid models. Our ap- proach to generating emrQA is in the same spirit as Su et al. (2016), who generate graph queries (logical forms) from a structured KB and use them to collect answers. In contrast, our framework can be applied to generate QA dataset in any domain with minimal expert input using annotations from other NLP tasks.
# 3 QA Dataset Generation Framework
Our general framework for generating a large- scale QA corpus given certain resources consists of three steps: (1) collecting questions to capture domain-speciï¬c user needs, followed by normal- izing the collected questions to templates by re- placing entities (that may be related via binary or composite relations) in the question with place- holders. The entity types replaced in the question are grounded in an ontology like WordNet (Miller, 1995), UMLS (Bodenreider, 2004), or a user- generated schema that deï¬nes and relates different entity types. (2) We associate question templates with expert-annotated logical form templates; log- ical forms are symbolic representations using re- lations from the ontology/schema to express the relations in the question, and associate the ques-
~
How was the |problem| managed ? How was the patientâs |problem| treated ? What was done to correct the patientâs |problem| ? Has the patient ever been treated for a |problem| ? What treatment has the patient had for his |problem| ? Has the patient ever received treatment for |problem| ? What treatments for |problem| has this patient tried ?
_ââ
Table 2: Paraphrase templates of a question type in emrQA.
tion entity type with an answer entity type. (3) We then proceed to the important step of re-purposing existing NLP annotations to populate question- logical form templates and generate answers. QA is a complex task that requires addressing several fundamental NLP problems before accurately an- swering a question. Hence, obtaining expert man- ual annotations in complex domains is infeasible as it is tedious to expert-annotate answers that may be found across long document collections (e.g., longitudinal EMR) (Lee et al., 2017). Thus, we reverse engineer the process where we reuse ex- pert annotations available in NLP tasks such as entity recognition, coreference, and relation learn- ing, based on the information captured in the log- ical forms to populate entity placeholders in tem- plates and generate answers. Reverse engineering serves as a proxy expert ensuring that the gener- ated QA annotations are credible. The only man- ual effort is in annotating logical forms, thus sig- niï¬cantly reducing expert labor. Moreover, in do- main speciï¬c instances such as EMRs, manually annotated logical forms allow the experts to ex- press information essential for natural language understanding such as domain knowledge, tempo- ral relations, and negation (Gao et al.; Chabierski et al., 2017). This knowledge, once captured, can be used to generate QA pairs on new documents, making the framework scalable.
# 4 Generating the emrQA Dataset
We apply the proposed framework to generate the emrQA corpus consisting of questions posed by physicians against longitudinal EMRs of a patient, using annotations provided by i2b2 (Figure 2).
# 4.1 Question Collection and Normalization
We collect questions for EMR QA by, 1) polling physicians at the Veterans Administration for what they frequently want to know from the EMR (976 questions), 2) using an existing source of 5,696 questions generated by a team of medical experts from 71 patient records (Raghavan, 2017) and 3) using 15 prototypical questions from an obser-
I Relations I Attributes I [2b2 entity types as arguments Conducted/Reveals ⢠Relates MedicationEvent Improves, Worsens, Causes. Given, Not Given
Figure 3: Events, attributes & relations in emrQAâs logical forms. Events & attributes accept i2b2 entities as arguments.
vational study done by physicians (Tang et al., 1994). To obtain templates, the questions were au- tomatically normalized by identifying medical en- tities (using MetaMap (Aronson, 2001)) in ques- tions and replacing them with generic placehold- ers. The resulting â¼2K noisy templates were ex- pert reviewed and corrected (to account for any entity recognition errors by MetaMap). We align our entity types to those deï¬ned in the i2b2 con- cept extraction tasks (Uzuner et al., 2010a, 2011) - problem, test, treatment, mode and medication. E.g., The question What is the dosage of insulin? from the collection gets converted to the template What is the dosage of |medication|? as shown in Fig.2. This process resulted in 680 question tem- plates. We do not correct for the usage/spelling errors in these templates, such as usage of "pt" for "patient", or make the templates gender neutral in order to provide a true representation of physi- ciansâ questions. Further, analyzing these tem- plates shows that physicians most frequently ask about test results (11%), medications for problem (9%), and problem existence (8%). The long tail following this includes questions about medica- tion dosage, response to treatment, medication du- ration, prescription date, etiology, etc. Temporal constraints were frequently imposed on questions related to tests, problem diagnosis and medication start/stop.
# 4.2 Associating Templates w/ Logical Forms
The 680 question templates were annotated by a physician with their corresponding logical form (LF) templates, which resulted in 94 unique LF templates. More than one question template that map to the same LF are considered paraphrases of each other and correspond to a particular question type (Table 2). Logical forms are deï¬ned based
on an ontology schema designed by medical ex- perts (Figure 3). This schema captures entities in unstructured clinical notes through medical events and their attributes, interconnected through rela- tions. We align the entity and relation types of i2b2 to this schema.
the LF gram- mar using this schema (Figure 3) is as fol- lows. Medical events are denoted as M Ei (e.g LabEvent, ConditionEvent) and relations are de- noted as REi (e.g conducted/reveals). Now, M E[a1, .., aj, .., oper(an)] is a medical event where aj represents the attribute of the event (such as result in LabEvent). An event may optionally include constraints on attributes captured by an operator (oper() â sort, range, check for null val- ues, compare). These operators sometimes require values from external medical KB (indicated by ref, e.g. lab.ref low/lab.ref high to indicate range of ref- erence standards considered healthy in lab results) indicating the need for medical knowledge to an- swer the question. Using these constructs, a LF can be deï¬ned using the following rules, LF â M Ei | M1 relation M2 M1 â M Ei, M2 â M Ej M1 â M1 relation M2, M2 â M1 relation M2 relation â OR | AN D | REi
Advantages of our LF representation include the ability to represent composite relations, de- ï¬ne attributes for medical events and constrain the attributes to precisely capture the informa- tion need in the question. While these can be achieved using different methods that combine lambda calculus and ï¬rst order logic (Roberts and Demner-Fushman, 2016), our representation is more human comprehensible. This allows a physician to consider an ontology like Figure 3 and easily deï¬ne a logical form. Some exam- ple question templates with their LF annotations are described in Table 3 using the above notation. The LF representation of the question in Figure 2 is MedicationEvent(|medication|) [dosage=x]. The entities seen in LF are the entities posed in the question and entity marked x indicates the an- swer entity type.
# 4.3 Template Filling and Answer Extraction
The next step in the process is to populate the question and logical form (QL) templates with existing annotations in the i2b2 clinical datasets and extract answer evidence for the questions.
Property Fine grained answer type (attribute entity is answer) Course grained answer type (event entity is answer) Questions with operators on entities Questions which require medical KB At least one event relation Example Annotation Q: What is the dosage of |medication| ? LF: MedicationEvent (|medication|) [dosage=x] Q: What does the patient take |medication| for? LF: MedicationEvent(|medication|)given{ConditionEvent(x) OR SymptomEvent(x)} Q: What are the last set of labs with elevated numbers out of range? LF: LabEvent (x) [date=x, (result=x)>lab.refhigh] Q: What are the last set of labs with elevated numbers out of range? LF: LabEvent (x) [date=x, (result=x)>lab.refhigh] What lab results does he have that are pertinent to |problem| diagnosis LF: LabEvent (x) [date=x, result=x] conducted/reveals ConditionEvent (|problem|) Stats. 62.7% 52.1% 25.5% 11.7% 46.8%
Table 3: Properties of question templates inferred from the corresponding logical form templates. The boldface words hint at the presence of the corresponding property in both question and the logical form template.
The i2b2 datasets are expert annotated with ï¬ne-grained annotations (Guo et al., 2006) that were developed for various shared NLP challenge tasks, including (1) smoking status classiï¬cation (Uzuner et al., 2008), (2) diagnosis of obesity and its co-morbidities (Uzuner, 2009), extraction of (3) medication concepts (Uzuner et al., 2010a), (4) relations, concepts, assertions (Uzuner et al., 2010b, 2011) (5) co-reference resolution (Uzuner et al., 2012) and (6) heart disease risk factor identiï¬cation (Stubbs and Uzuner, 2015). In this would correspond to leveraging Figure 2, annotations from medications challenge between medications and their dosages, such as medica- tion=Nitroglycerin, dosage=40mg, to populate |medication| and generate several instances of the question âWhat is the dosage of |medication|?" and form corresponding MedicationEvent(|medication|)[dosage=x]. The answer would be derived from the value of the dosage entity in the dataset.
Preprocessing: The i2b2 entities are prepro- cessed before using them with our templates to ensure syntactic correctness of the generated ques- tions. The pre-processing steps are designed based on the i2b2 annotations syntax guidelines (Guo et al., 2006). To estimate grammatical correct- ness, we randomly sampled 500 generated ques- tions and found that <5% had errors. These errors include, among others, incorrect usage of article with the entity and incorrect entity phrasing.
Answer Extraction: The ï¬nal step in the pro- cess is generating answer evidence corresponding to each question. The answers in emrQA are de- ï¬ned differently; instead of a single word or phrase we provide the entire i2b2 annotation line from the clinical note as the answer. This is because the context in which the answer entity or phrase is mentioned is extremely important in clinical decision making (Demner-Fushman et al., 2009).
Hence, we call them answer evidence instead of just answers. For example, consider the ques- tion Is the patientâs hypertension controlled?. The answer to this question is not a simple yes/no since the status of the patientâs hypertension can change through the course of treatment. The an- swer evidence to this question in emrQA are mul- tiple lines across the longitudinal notes that reï¬ect this potentially changing status of the patients con- dition, e.g. Hypertension-borderline today. Addi- tionally, for questions seeking speciï¬c answers we also provide the corresponding answer entities.
The overall process for answer evidence gener- ation was vetted by a physician. Here is a brief overview of how the different i2b2 datasets were used in generating answers. The relations chal- lenge datasets have various event-relation anno- tations across single/multiple lines in a clinical note. We used a combination of one or more of these, to generate answers for a question; in do- ing so we used the annotations provided by the i2b2 co-reference datasets. Similarly, the medica- tions challenge dataset has various event-attribute annotations but since this dataset is not provided with co-reference annotations, it is currently not possible to combine all valid answers. The heart disease challenge dataset has longitudinal notes (â¼5 per patient) with record dates. The events in this dataset are also provided with time anno- tations and are rich in quantitative entities. This dataset was primarily used to answer questions that require temporal and arithmetic reasoning on events. The patient records in the smoking and obesity challenge datasets are categorized into classes with no entity annotations. Thus, for ques- tions generated on these datasets, the entire docu- ment acts as evidence and the annotated class in- formation (7 classes) needs to be predicted as the answer.
The total questions, LFs and answers gener-
ated using this framework are summarized in Ta- ble 1. Consider the question How much does the patient smoke? for which we do not have i2b2- annotations to provide an answer. In cases where the answer entity is empty, we only generate the question and LF, resulting in more question types being used for QL than QA pairs: only 53% of question types have answers.
# emrQA Dataset Analysis
We analyze the complexity of emrQA by consider- ing the LFs for question characteristics, variations in paraphrases, and the type of reasoning required for answering questions (Table 2, 3, 4).
# 5.1 Question/Logical Form Characteristics
A quantitative and qualitative analysis of emrQA question templates is shown in Table 3, where log- ical forms help formalize their characteristics (Su et al., 2016). Questions may request speciï¬c ï¬ne- grained information (attribute values like dosage) or may express a more coarse-grained need (event entities like medications etc), or a combination of both. 25% of questions require complex op- erators (e.g compare(>)) and 12% of questions express the need for external medical knowledge (e.g. lab.refhigh). The questions in emrQA are highly compositional, where 47% of question tem- plates have at least one event relation.
# 5.2 Paraphrase Complexity Analysis
Questions templates that map to the same LF are considered paraphrases (e.g, Table 2) and corre- spond to the same question type. In emrQA, an average of 7 paraphrase templates exist per ques- tion type. This is representative of FAQ types that are perhaps more important to the physician. Good paraphrases are lexically dissimilar to each other (Chen and Dolan, 2011). In order to understand the lexical variation within our paraphrases, we randomly select a question from the list of para- phrases as a reference and evaluate the others with respect to the reference, and report the average BLEU (0.74 ± 0.06) and Jaccard Score (0.72 ± 0.19). The low BLEU and Jaccard score with large standard deviation indicates the lexical diversity captured by emrQAâs paraphrases (Papineni et al., 2002; Niwattanakul et al., 2013).
# 5.3 Answer Evidence Analysis
33% of the questions in emrQA have more than one answer evidence, with the number ranging
the question Medications from 2 to 61. E.g., Record? has all medications in the patientâs lon- gitudinal record as answer evidence. In order to analyze the reasoning required to answer emrQA questions, we sampled 35 clinical notes from the corpus and analyzed 3 random questions per note by manually labeling them with the categories de- scribed in Table 4. Categories are not mutually exclusive: a single example can fall into multiple categories. We compare and contrast this analy- sis with SQuAD (Rajpurkar et al., 2016), a popu- lar MC dataset generated through crowdsourcing, to show that the framework is capable of generat- ing a corpus as representative and even more com- plex. Compared to SQuAD, emrQA offers two new reasoning categories, temporal and arithmetic which make up 31% of the dataset. Addition- ally, over two times as many questions in emrQA require reasoning over multiple sentences. Long and noisy documents make the question answer- ing task more difï¬cult (Joshi et al., 2017). EMRs are inherently noisy and hence 29% have incom- plete context and the document length is 27 times more than SQuAD which offers new challenges to existing QA models. Owing to the domain speciï¬c nature of the task, 39% of the examples required some form of medical/world knowledge.
As discussed in Section 4.3, 12% of the ques- tions in emrQA corpus require a class category from i2b2 smoking and obesity datasets to be pre- dicted. We also found 6% of the questions had other possible answers that were not included by emrQA, this is because of the lack of co-reference annotations for the medications challenge.
# 6 Baseline Methods
We implement baseline models using neural and heuristic methods for question to logical form (Q- L) and question to answer (Q-A) mapping.
# 6.1 Q-L Mapping
Heuristic Models: We use a template-matching approach where we ï¬rst split the data into train/test sets, and then normalize questions in the test set into templates by replacing entities with placeholders. The templates are then scored against the ground truth templates of the questions in the train set, to ï¬nd the best match. The place- holders in the LF template corresponding to the best matched question template is then ï¬lled with the normalized entities to obtain the predicted LF. To normalize the test questions we use CLiNER
Reasoning Lexical Variation (Synonym) Lexical Variation (world/medical knowledge) Description Major correspondence between the question and answer sentence are synonyms. Major correspondence between the question and answer sentence re- quires world/medical knowledge to resolve Syntactic Variation After the question is paraphrased into declarative form, its syntac- tic dependency structure does not match that of the answer sentence Example Annotation Q: Has this patient ever been treated with insulin? E: Patient sugars were managed o/n with sliding scale insulin and diabetic Q: Has the patient complained of any CAD symp- toms? E: 70-year-old female who comes in with substernal chest pressure Q: Has this patient ever been treated with ffp? E: attempt to reverse anticoagulation , one unit of FFP was begun emrQA SQuAD 15.2% 33.3% 39.0% 9.1% 60.0% 64.1% Multiple Sentence Co-reference and higher level fu- sion of multiple sentences Arithmetic Knowing comparison and subtrac- tion operators. Temporal Reasoning based on time frame Incomplete Context Class Prediction Unstructured clinical text is noisy and may have missing context Questions for which a speciï¬c pre- deï¬ned class needs to be predicted Q: What happened when the patient was given as- cending aortic root replacement? E: The patient tolerated the procedure fairly well and was transferred to the ICU with his chest open Q: Show me any LDL > 100 mg/dl in the last 6 years? E: gluc 192, LDL 115, TG 71, HDL 36 Q: What were the results of the abnormal A1C on 2115-12-14? E: HBA1C 12/14/2115 11.80 Q: What is her current dose of iron? E: Iron 325 mg p.o. t.i.d. Q: Is the patient currently Obese? E: Yes 23.8% 13.6% 13.3% N.A. 18.1% N.A. 28.6% N.A. 12.4% N.A.
Table 4: We manually labeled 105 examples into one or more of the above categories. Words relevant to the corresponding reasoning type are in bold and the answer entity (if any) in the evidence is in italics. We compare this analysis with SQuAD.
Dataset GeoQuery ATIS emrQL-1 emrQL-2 HM-1 HM-2 Neural Train/Test 32.8% 52.1% 74.6%â 600/280 20.8% 52.2% 69.9%â 4,473/448 1M/253K 26.3% 22.4% 0.3% 1.1M/296K 31.6% 32.0% 42.7%
Table 5: Heuristic (HM) and neural (seq2seq) models per- formance on question to logical form learning in emrQA.
(Boag et al., 2015) for emrQA and Jia and Liang (2016)âs work for ATIS and GeoQuery. Scoring and matching is done using two heuristics: (1) HM-1, which computes an identical match, and (2) HM-2, which generates a GloVe vector (Arora et al., 2016) representation of the templates using sentence2vec and then computes pairwise cosine similarity.
# 6.1.1 Experimental Setup
We randomly partition the QL pairs in the dataset in train(80%) and test(20%) sets in two ways. (1) In emrQL-1, we ï¬rst split the paraphrase templates corresponding to a single LF template into train and test, and then generate the instances of QL (2) In emrQL-2, we ï¬rst generate the in- pairs. stances of QL pairs from the templates and then distribute them into train and test sets. As a result, emrQL-1 has more lexical variation between train and test distribution compared to emrQL-2, result- ing in increased paraphrase complexity. We use accuracy i.e, the total number of logical forms pre- dicted correctly as a metric to evaluate our model.
Neural Model: We train a sequence-to- sequence (seq2seq) (Sutskever et al., 2014) with attention paradigm (Bahdanau et al., 2014; Luong et al., 2017) as our neural baseline (2 layers, each with 64 hidden units). The same setting when used with Geoquery and ATIS gives poor results be- cause the parameters are not appropriate for the nature of that dataset. Hence, for comparison with GeoQuery and ATIS, we use the results of seq2seq model with a single 200 hidden units layer (Jia and Liang, 2016). At test time we automatically bal- ance missing right parentheses.
â results from Jia and Liang (2016)
# 6.1.2 Results
The performance of the proposed models is sum- marized in Table 5. emrQL results are not directly comparable with GeoQuery and ATIS because of the differences in the lexicon and tools available for the domains. However, it helps us establish that QL learning in emrQA is non-trivial and sup- ports signiï¬cant future work.
Error analysis of heuristic models on emrQL-1 and emrQL-2 showed that 70% of the errors oc- curred because of incorrect question normaliza- tion. In fact, 30% of these questions had not been normalized at all. This shows that the entities
added to the templates are complex and diverse and make the inverse process of template gener- ation non trivial. This makes a challenging QL corpus that cannot trivially be solved by template matching based approaches.
Errors made by the neural model on both emrQL-1 and emrQL-2 are due to long LFs (20%) and incorrectly identiï¬ed entities (10%), which are harder for the attention-based model (Jia and Liang, 2016). The increased paraphrase complex- ity in emrQL-1 compared to emrQL-2 resulted in 20% more structural errors in emrQL-1, where the predicted event/grammar structure deviates signif- icantly from the ground truth. This shows that the model is not adequately capturing the semantics in the questions to generalize to new paraphrases. Therefore, emrQL-1 can be used to benchmark QL models robust to paraphrasing.
# 6.2 Q-A Mapping
Question-answering on emrQA consists of two different tasks, (1) extraction of answer line from the clinical note (machine comprehension (MC)) and (2) prediction of answer class based on the en- tire clinical note. We provide baseline models to illustrate the complexity in doing both these tasks. Machine Comprehension: To do extractive QA on EMRs, we use DrQAâs (Chen et al., 2017) document reader which is a multi-layer RNN based MC model. We use their best performing settings trained for SQuAD data using Glove vec- tors (300 dim-840B).
Class Prediction: We build a multi-class logis- tic regression model for predicting a class as an answer based on the patientâs clinical note. Fea- tures input to the classiï¬er are TF-IDF vectors of the question and the clinical notes taken from i2b2 smoking and obesity datasets.
6.2.1 Experimental setup We consider a 80-20 split of the data for train-test. In order to evaluate worst-case performance, we train on question-evidence pairs in a clinical note obtained by using only one random paraphrase for a question instead of all the paraphrases. We use a slightly modiï¬edâ¡ version of the two popularly re- ported metrics in MC for evaluation since our ev- idence span is longer: Exact Match (EM) and F1. Wherever the answer entity in an evidence is ex- plicitly known, EM checks if the answer entity is
â¡using the original deï¬nitions, the evaluated values were far less than those obtained in Table 7
Model DrQA (MC) Class Prediction Train/Test 47,605/9,966 1276/320 Exact Match 59.2% 36.6% F1 60.6 n.a
Table 7: Performance of baseline models on the two QA sub tasks, machine comprehension (MC) and class prediction.
present within the evidence, otherwise it checks if the predicted evidence span lies within ±20 char- acters of the ground truth evidence. For F1 we construct a bag of tokens for each evidence string and measure the F1 score of the overlap between the two bags of tokens. Since there may be mul- tiple evidence for a given question, we consider only the top 10 predictions and report an average of EM and F1 over ground truth number of an- swers. In the class prediction setting, we report the subset accuracy.
6.2.2 Results The performance of the proposed models is sum- marized in Table 7. DrQA is one of the best per- forming models on SQuAD with an F1 of 78.8 and EM of 69.5. The relatively low performance of the models on emrQA (60.6 F1 and 59.2 EM) shows that QA on EMRs is a complex task and offers new challenges to existing QA models.
To understand model performance, we macro- average the EM across all the questions corre- sponding to a LF template. We observe that LFs representing temporal and arithmetic§ needs had < 16% EM. LFs expressing the need for medi- cal KB§ performed poorly since we used general Glove embeddings. An analysis of LFs which had approximately equal number of QA pair represen- tation in the test set revealed an interesting relation between the model performance and LF complex- ity, as summarized in Table 6. The trend shows that performance is worse on multiple relation questions as compared to single relation and at- tribute questions, showing that the LFs sufï¬ciently capture the complexity of the questions and give us an ability to do a qualitative model analysis.
Error analysis on a random sample of 50 ques- tions containing at least one answer entity in an evidence showed that: (1) 38% of the examples re- quired multiple sentence reasoning of which 16% were due to a missing evidence in a multiple ev- idence question, (2) 14% were due to syntactic variation, (3) 10% required medical reasoning and (4) in 14%, DrQA predicted an incomplete evi- dence span missing the answer entity in it.
§maximum representation of these templates comes from the i2b2 heart disease risk dataset
Property single attribute single relation multiple relation Exact Match 55.3% 32.2% 12.6%
Table 6: Neural models (DrQA) performance on question-evidence corpus of emrQA stratiï¬ed according to the logical form templates. Instance showing increasing complexity in the logical forms with decreasing model performance.
# 7 Discussion
In this section, we describe how our generation framework may also be applied to generate open- domain QA datasets given the availability of other NLP resources. We also discuss possible exten- sions of the framework to increase the complexity of the generated datasets.
Open domain QA dataset generation: Con- sider the popularly used SQuAD (Rajpurkar et al., 2016) reading comprehension dataset generated by crowdworkers, where the answer to every ques- tion is a segment of text from the corresponding passage in the Wikipedia article. This dataset can easily be generated or extended using our pro- posed framework with existing NLP annotations on Wikipedia (Auer et al., 2007; Nothman et al., 2008; Ghaddar and Langlais, 2017).
Extensions to the framework: The complexity of the generated dataset can be further extended as follows. (1) We can use a coreferred or a lexical variant of the original entity in the question-logical form generation. This can allow for increased lexi- cal variation between the question and answer line entities in the passage. (2) It is possible to combine two or more question templates to make composi- tional questions with the answers to these ques- tions similarly combined. This can also result in more multiple sentence reasoning questions. (3) We can generate questions with entities not related to the context in the passage. This can increase empty answer questions in the dataset, resulting in increased negative training examples.
# 8 Conclusions and Future Work
For instance, consider DBPedia (Auer et al., 2007), an existing dataset of entities and their re- lations extracted from Wikipedia. It also has its own ontology which can serve as the semantic frames schema to deï¬ne logical forms. Using these resources, our reverse engineering technique for QA dataset generation can be applied as fol- lows. (1) Question templates can be deï¬ned for each entity type and relation in DBPedia. For example¶, consider the relation [place, country] ï¬eld in DBpedia. For this we can deï¬ne a ques- tion template In what country is |place| located?. (2) Every such question template can be annotated with a logical form template using existing DB- Pedia ontology. (3) By considering the entity val- ues of DBPedia ï¬elds such as [place=Normandy, dbo:country=France], we can automatically gen- erate the question In what country is Normandy located? and its corresponding logical form from the templates. The text span of country=France from the Wikipedia passage is then used as the answer (Daiber et al., 2013). Currently, this QA pair instance is a part of the SQuAD dev set. Us- ing our framework we can generate many more in- stances like this example from different Wikipedia passages - without crowdsourcing efforts.
¶example reference: http://dbpedia.org/page/Normandy
We propose a novel framework that can generate a large-scale QA dataset using existing resources and minimal expert input. This has the potential to make a huge impact in domains like medicine, where obtaining manual QA annotations is tedious and infeasible. We apply this framework to gener- ate a large scale EMR QA corpus (emrQA), con- sisting of 400,000 question-answers pairs and 1 million question-logical forms, and analyze the complexity of the dataset to show its non-trivial nature. We show that the logical forms provide a symbolic representation that is very useful for cor- pus generation and for model analysis. The logi- cal forms also provide an opportunity to build in- terpretable systems by perhaps jointly (or latently) learning the logical form and answer for a ques- tion. In future, this framework may be applied to also re-purpose and integrate other NLP datasets such as MIMIC and generate a more diverse and representative EMR QA corpus (Johnson et al., 2016).
# Acknowledgments
This project is partially funded by Sloan Research Fellowship, PhRMA Foundation Award in Infor- matics, and NSF Career Award (1652815). The authors would like to thank Siddharth Patwardhan for his valuable feedback in formatting the paper.
# References
Alan R Aronson. 2001. Effective mapping of biomed- ical text to the umls metathesaurus: the metamap program. In Proceedings of the AMIA Symposium, page 17. American Medical Informatics Associa- tion.
Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2016. A simple but tough-to-beat baseline for sentence em- beddings.
Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The semantic web, pages 722â735. Springer.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, and Christopher D Manning. 2014. Modeling biological processes for reading comprehension. In EMNLP.
William Boag, Kevin Wacome, Tristan Naumann, and Anna Rumshisky. 2015. Cliner: A lightweight tool for clinical named entity recognition. AMIA Joint Summits on Clinical Research Informatics (poster).
Olivier Bodenreider. 2004. The uniï¬ed medical lan- guage system (umls): integrating biomedical termi- nology. Nucleic acids research, 32(suppl_1):D267â D270.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075.
Qingqing Cai and Alexander Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In ACL (1), pages 423â433.
Piotr Chabierski, Alessandra Russo, and Mark Law. 2017. Logic-based approach to machine compre- hension of text.
Danqi Chen, Adam Fisch, Jason Weston, and An- Reading wikipedia to an- arXiv preprint toine Bordes. 2017. swer open-domain questions. arXiv:1704.00051.
David L Chen and William B Dolan. 2011. Collect- ing highly parallel data for paraphrase evaluation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 190â200. Association for Computational Linguistics.
Joachim Daiber, Max Jakob, Chris Hokamp, and Improving efï¬ciency Pablo N. Mendes. 2013. and accuracy in multilingual entity extraction. In Proceedings of the 9th International Conference on Semantic Systems (I-Semantics).
Randall Davis, Bruce Buchanan, and Edward Short- liffe. 1977. Production rules as a representation for a knowledge-based consultation program. Artiï¬cial intelligence, 8(1):15â45.
Dina Demner-Fushman, Wendy Webber Chapman, and Clement J. McDonald. 2009. What can natural lan- guage processing do for clinical decision support? Journal of Biomedical Informatics, 42(5):760â772.
Mary Devereaux. 2013. The use of patient records (ehr) for research.
Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017. Quasar: Datasets for question an- swering by search and reading. arXiv preprint arXiv:1707.03904.
Minwei Feng, Bing Xiang, Michael R Glass, Li- dan Wang, and Bowen Zhou. 2015. Applying deep learning to answer selection: A study and an open task. In Automatic Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop on, pages 813â820. IEEE.
Yansong Feng, Songfang Huang, Dongyan Zhao, et al. 2016. Hybrid question answering over knowledge base and free text. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2397â2407.
David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010. Building watson: An overview of the deepqa project. AI magazine, 31(3):59â79.
Jianfeng Gao, Rangan Majumder, and Bill Dolan. Ma- chine reading for question answering: from sym- bolic to neural computation.
Abbas Ghaddar and Phillippe Langlais. 2017. Winer: A wikipedia annotated corpus for named en- tity recognition. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 413â422.
Y. Guo, R. Gaizauskas, I. Roberts, G. Demetriou, Identifying personal health and M. Hepple. 2006. information using support vector machines. In i2b2 Workshop on Challenges in Natural Language Processing for Clinical Data, pages 10â11.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Teaching Suleyman, and Phil Blunsom. 2015. machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693â1701.
Robin Jia and Percy Liang. 2016. Data recombina- arXiv preprint tion for neural semantic parsing. arXiv:1606.03622.
Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Moham- mad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. Scientiï¬c data, 3:160035.
Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. arXiv preprint arXiv:1705.03551.
Harlan M Krumholz, Sharon F Terry, and Joanne Wald- streicher. 2016. Data acquisition, curation, and use for a continuously learning health system. Jama, 316(16):1669â1670.
Chonho Lee, Zhaojing Luo, Kee Yuan Ngiam, Mei- hui Zhang, Kaiping Zheng, Gang Chen, Beng Chin Ooi, and Wei Luen James Yip. 2017. Big health- care data analytics: Challenges and applications. In Handbook of Large-Scale Distributed Computing in Smart Healthcare, pages 11â41. Springer.
Vanessa Lopez, Christina Unger, Philipp Cimiano, and Enrico Motta. 2013. Evaluating question answering over linked data. Web Semantics: Science, Services and Agents on the World Wide Web, 21:3â13.
Minh-Thang Luong, Eugene Brevdo, and Rui Zhao. 2017. Neural machine translation (seq2seq) tutorial. https://github.com/tensorï¬ow/nmt.
George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39â 41.
Singthongchai, Ekkachai Naenudorn, and Supachanun Wanapu. 2013. Using of jaccard coefï¬cient for keywords In Proceedings of the International similarity. MultiConference of Engineers and Computer Scientists, volume 1.
Joel Nothman, James R Curran, and Tara Murphy. 2008. Transforming wikipedia into named entity training data. In Proceedings of the Australasian Language Technology Association Workshop 2008, pages 124â132.
Hamid Palangi, Paul Smolensky, Xiaodong He, Question-answering with and Li Deng. 2018. grammatically-interpretable representations. In Proceedings of the 32nd AAAI Conference on Artiï¬cial Intelligence, New Orleans, LA.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic In Proceedings evaluation of machine translation. of the 40th annual meeting on association for computational linguistics, pages 311â318. Associa- tion for Computational Linguistics.
Preethi Raghavan and Siddharth Patwardhan. 2016. Question answering on electronic medical records. In Proceedings of the 2016 Summit on Clinical
Research Informatics, San Francisco, CA, March 2016.
Siddharth; Liang Jennifer J.; Devarakonda Murthy V. Raghavan, Preethi; Patwardhan. 2017. Annotating electronic medical records for question answering. arXiv:1805.06816.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 3, page 4.
Kirk Roberts and Dina Demner-Fushman. 2016. An- notating logical forms for ehr questions. In LREC... International Conference on Language Resources & Evaluation:[proceedings]. International Conference on Language Resources and Evaluation, volume 2016, page 3772. NIH Public Access.
Benjamin Shickel, Patrick James Tighe, Azra Bihorac, and Parisa Rashidi. 2017. Deep ehr: A survey of re- cent advances in deep learning techniques for elec- tronic health record (ehr) analysis. IEEE Journal of Biomedical and Health Informatics.
Amber Stubbs and Ãzlem Uzuner. 2015. Annotating longitudinal clinical narratives for de-identiï¬cation: The 2014 i2b2/uthealth corpus. Journal of biomedical informatics, 58:S20âS29.
Yu Su, Huan Sun, Brian Sadler, Mudhakar Srivatsa, Izzeddin Gur, Zenghui Yan, and Xifeng Yan. 2016. On generating characteristic-rich question sets for qa evaluation. In EMNLP, pages 562â572.
Simon Šuster, Stéphan Tulkens, and Walter Daelemans. 2017. A short review of ethical challenges in clin- arXiv preprint ical natural language processing. arXiv:1703.10090.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neu- In Advances in neural information ral networks. processing systems, pages 3104â3112.
Paul C Tang, Danielle Fafchamps, and Edward H Shortliffe. 1994. Traditional medical records as a source of clinical data in the outpatient setting. In Proceedings of the Annual Symposium on Computer Application in Medical Care, page 575. American Medical Informatics Association.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2016. Newsqa: A machine compre- hension dataset. arXiv preprint arXiv:1611.09830.
George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia
Krithara, Sergios Petridis, Dimitris Polychronopou- An overview of the bioasq los, et al. 2015. large-scale biomedical semantic indexing and ques- tion answering competition. BMC bioinformatics, 16(1):138.
Ricardo Usbeck, Axel-Cyrille Ngonga Ngomo, Lorenz Bühmann, and Christina Unger. 2015. Hawkâ hybrid question answering using linked data. In European Semantic Web Conference, pages 353â 368. Springer.
Ãzlem Uzuner. 2009. Recognizing obesity and co- morbidities in sparse data. Journal of the American Medical Informatics Association, 16(4):561â570.
Ozlem Uzuner, Andreea Bodnari, Shuying Shen, Tyler Forbush, John Pestian, and Brett R South. 2012. Evaluating the state of the art in coreference res- olution for electronic medical records. Journal of the American Medical Informatics Association, 19(5):786â791.
Ãzlem Uzuner, Ira Goldstein, Yuan Luo, and Isaac Ko- hane. 2008. Identifying patient smoking status from medical discharge records. Journal of the American Medical Informatics Association, 15(1):14â24.
Ãzlem Uzuner, Imre Solti, and Eithon Cadag. 2010a. Extracting medication information from clinical text. Journal of the American Medical Informatics Association, 17(5):514â518.
Imre Solti, Fei Xia, and Eithon Cadag. 2010b. Community annotation experiment for ground truth generation for the i2b2 medica- Journal of the American Medical tion challenge. Informatics Association, 17(5):519â523.
Ãzlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association, 18(5):552â556.
Alfredo Vellido, José David MartÃn-Guerrero, and Paulo JG Lisboa. 2012. Making machine learning models interpretable. In ESANN, volume 12, pages 163â172. Citeseer.
Ellen M Voorhees et al. 1999. The trec-8 question an- swering track report. In Trec, volume 99, pages 77â 82. | {
"id": "1805.06816"
} |
1809.00095 | Learning Sparse Low-Precision Neural Networks With Learnable Regularization | We consider learning deep neural networks (DNNs) that consist of
low-precision weights and activations for efficient inference of fixed-point
operations. In training low-precision networks, gradient descent in the
backward pass is performed with high-precision weights while quantized
low-precision weights and activations are used in the forward pass to calculate
the loss function for training. Thus, the gradient descent becomes suboptimal,
and accuracy loss follows. In order to reduce the mismatch in the forward and
backward passes, we utilize mean squared quantization error (MSQE)
regularization. In particular, we propose using a learnable regularization
coefficient with the MSQE regularizer to reinforce the convergence of
high-precision weights to their quantized values. We also investigate how
partial L2 regularization can be employed for weight pruning in a similar
manner. Finally, combining weight pruning, quantization, and entropy coding, we
establish a low-precision DNN compression pipeline. In our experiments, the
proposed method yields low-precision MobileNet and ShuffleNet models on
ImageNet classification with the state-of-the-art compression ratios of 7.13
and 6.79, respectively. Moreover, we examine our method for image super
resolution networks to produce 8-bit low-precision models at negligible
performance loss. | http://arxiv.org/pdf/1809.00095 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | IEEE Access | null | cs.CV | 20180901 | 20200524 | 0 2 0 2
y a M 4 2 ] V C . s c [
2 v 5 9 0 0 0 . 9 0 8 1 : v i X r a
# Learning Sparse Low-Precision Neural Networks With Learnable Regularization
Yoojin Choi, Mostafa El-Khamy, Senior Member, IEEE, Jungwon Lee, Fellow, IEEE
AbstractâWe consider learning deep neural networks (DNNs) that consist of low-precision weights and activations for efï¬cient inference of ï¬xed-point operations. In training low-precision net- works, gradient descent in the backward pass is performed with high-precision weights while quantized low-precision weights and activations are used in the forward pass to calculate the loss function for training. Thus, the gradient descent becomes suboptimal, and accuracy loss follows. In order to reduce the mismatch in the forward and backward passes, we utilize mean squared quantization error (MSQE) regularization. In particular, we propose using a learnable regularization coefï¬cient with the MSQE regularizer to reinforce the convergence of high- precision weights to their quantized values. We also investigate how partial L2 regularization can be employed for weight pruning in a similar manner. Finally, combining weight pruning, quantization, and entropy coding, we establish a low-precision DNN compression pipeline. In our experiments, the proposed method yields low-precision MobileNet and Shufï¬eNet models on ImageNet classiï¬cation with the state-of-the-art compression ratios of 7.13 and 6.79, respectively. Moreover, we examine our method for image super resolution networks to produce 8-bit low-precision models at negligible performance loss.
Index TermsâDeep neural networks, ï¬xed-point arithmetic, model compression, quantization, regularization, weight pruning.
32 g â es 3 3s Re-training Re-training Uniform sé Entropy 3 zg £ 8] with the partial [>| with the MSQEP} weight Po Ba TOM LES © © | (2 regularizer Regularizer_| | quantization} 2 © By Ee EJ 5g 8 = ie Sse Ee
Fig. 1. Our low-precision DNN compression pipeline. We utilize partial L2 regularization and MSQE regularization to transform a pre-trained high- precision model into a sparse low-precision model with ï¬xed-point weights and activations. The low-precision weights are further compressed in size with lossless entropy source coding.
For weight pruning, we utilize partial L2 regularization to make a portion of small-value weights tend to zero so we can safely prune them at negligible accuracy loss. ⢠For weight quantization, we regularize (unpruned) weights with another regularization term of the mean squared quantization error (MSQE). In this stage, we also quantize the activations (feature maps) of each layer to mimic low-precision operations at inference time. The quantization bin sizes for weights and activations are optimized to minimize their MSQEs in each layer.
# I. INTRODUCTION
Deep neural networks (DNNs) have achieved performance breakthroughs in many of computer vision tasks [1]. The revolutionary progress of deep learning comes with over- parametrized multi-layer network architectures, and nowadays millions or tens of millions parameters in more than one hun- dred layers are not exceptional anymore. Network compression for efï¬cient inference is of great interest for deployment of large-size DNNs on resource-limited platforms such as battery- powered mobile devices [2], [3]. In such resource-constrained hardware, not only memory and power are limited but also basic ï¬oating-point arithmetic operations are in some cases not supported. Hence, it is preferred and sometimes necessary to deliver compressed DNNs of low-precision ï¬xed-point weights and activations (feature maps).
In this paper, we propose a network compression scheme that produces sparse low-precision DNNs through learning with regularization. In particular, we let the regularization coefï¬cient be learnable, instead of treating it as a ï¬xed hyper- parameter, to make a smooth and efï¬cient transition of a high- precision model into a sparse quantized model. The proposed compression pipeline is summarized in Figure 1.
Y. Choi, M. El-Khamy, and J. Lee are with the SoC R&D, Samsung Semi- conductor Inc., San Diego, CA 92121 USA (e-mail: yoojin.c@samsung.com; mostafa.e@samsung.com; jungwon2.lee@samsung.com).
is converted into a low-precision model and its low-precision weights are further compressed in size with lossless entropy coding such as Huffman coding and universal source coding algorithms (e.g., see [4, Section 11.3]) for memory- efï¬cient deployment.
It is difï¬cult to train low-precision DNNs with standard gradient descent since the learning rate is typically set to be a small ï¬oating-point value but low-precision weights cannot be adjusted in ï¬ne resolution. To enable training low-precision DNNs, a series of papers on binary neural networks suggests utilizing high-precision shadow weights to accumulate the negatives of the gradients in ï¬ne resolution, while the gra- dients are obtained from the network loss function calculated with binarized (or quantized) weights [5]â[7]. That is, high- precision weights are quantized in the forward pass, but the quantization function is replaced with the identity function in the backward pass for gradient descent. This approximate gradient descent algorithm is further reï¬ned in the subsequent works [8]â[15].
BinaryRelax [16] proposed relaxation of the quantization problem via Moreau envelope (also known as Moreau-Yosida regularization) [17], [18] and used pseudo quantized weights in the forward pass to solve the relaxed quantization problem. In particular, the pseudo quantized weights are obtained by
1
weighted average of high-precision weights and their quan- tized values. By manually adjusting the weighting factor in the weighted average, the pseudo quantized weights are pushed towards their quantized values gradually in training. In [19], the blended coarse gradient descent (BCGD) algorithm was proposed, where the BinaryConnect scheme [5] and the standard projected gradient descent algorithm (PGD) [20] are combined with some blending parameter. For quantization of activations, parameterized clipping activation (PACT) [21] pro- posed using an activation clipping parameter that is optimized during training to ï¬nd the right quantization scale. The two- valued proxy derivative of the parametric activation function in [21] was further enhanced by three-valued proxy partial derivative in [19]. LQ-Nets [22] proposed ï¬nding optimal quantization levels in a subspace compatible with bit-wise operations. In [23], it was proposed to learn separate scaling factors for ï¬ne-grained weight subgroups (e.g., pixel-wise or row-wise scaling factors).
The mismatch in the forward and backward passes results in sub-optimal gradient descent that causes accuracy loss. The mismatch is more problematic for the models using lower-precision weights and activations, since the quantization error is more signiï¬cant. There have been some attempts to reduce this mismatch by introducing better backward pass approximation, e.g., using clipped ReLU and log-tailed ReLU instead of the linear function (e.g., see [11]). Recently, it was proposed to use smooth differentiable approximation of the staircase quantization function. In [24], afï¬ne combination of high-precision weights and their quantized values, called alpha blending, was used to replace the quantization function. In [25], the quantization function was approximated as a linear combination of several sigmoid functions with learnable biases and scales. Similarly, differentiable soft quantization (DSQ) [26] exploited a series of hyperbolic tangent functions to approximate the staircase quantization function. The pro- posed approximation gradually approaches to the quantization function in training by adjusting the blending factor or the temperature parameter in the sigmoid function. Our approach is different from these efforts. We use regularization to steer high-precision weights to converge to their quantized values so that the mismatch between high-precision weights and quantized weights becomes smaller instead of enhancing the backward pass approximation.
We reduce the mismatch between high-precision weights and quantized weights with MSQE regularization. In particu- lar, we propose making the regularization coefï¬cient learn- able. Using learnable regularization, high-precision weights are reinforced to converge to their quantized values grad- ually in training. We empirically show that our learnable regularization yields more accurate low-precision models than the conventional regularization with a ï¬xed regularization coefï¬cient. MSQE is a well-known distortion metric in data quantization, and it has been used in network quantization as well to reduce the performance loss from quantization (e.g., see [8], [27]). Our contribution is to use MSQE as a regularizer with a learnable coefï¬cient, which is new to the best of our knowledge. The loss-aware weight quantization in [12], [13] proposed the proximal Newton algorithm to minimize the
loss function under the constraints of low-precision weights, which is however impractical for large-size networks due to the prohibitive computational cost to estimate the Hessian matrix of the loss function. Our method simply uses the stochastic gradient descent, while the mismatch between high-precision weights and quantized weights is minimized with the MSQE regularization. No regularization is considered in [12], [13]. Relaxed quantization [28] introduced a differentiable quan- tization procedure by transforming continuous distributions of weights and activations to differentiable soft categorical distributions. Our method is much simpler than the relaxation procedure in [28], since it only requires MSQE regularization. Furthermore, it shows better performance than [28] empirically in MobileNet quantization.
Weight pruning curtails redundant weights completely from DNNs so one can skip the computations for pruned weights. Some successful pruning algorithms can be found in [29]â[33]. In this paper, we discuss how partial L2 regularization can be used for weight pruning. Finally, combining weight pruning, quantization, and entropy coding, as shown in Figure 1, we achieve the state-of-the-art compression results for low- precision MobileNet [34] and Shufï¬eNet [35] on ImageNet classiï¬cation.
Weight sharing is another network compression scheme studied in [36]â[43]. It reduces the number of distinct weight values in DNNs by quantization. In contrast to low-precision weights from uniform quantization, weight sharing allows non-uniform quantization. For non-uniform quantization (e.g., k-means clustering), quantization output levels (e.g., cluster centers) do not have to be evenly spaced, and they are usually high-precision ï¬oating-point values. The quantization output levels are the shared weight values used in inference. Thus, ï¬oating-point arithmetic operations are still needed in inference, although the quantized weights can be compressed in size by lossless source coding (e.g., Huffman coding).
We ï¬nally note that reinforcement learning has been pro- posed as a promising methodology to search for quantized and/or compressed models that satisfy certain latency, energy, and/or model size requirements, given hardware speciï¬cations to deploy the models [44], [45].
II. LOW-PRECISION DNN MODEL We consider low-precision DNNs that are capable of ef- ï¬cient processing in the inference stage by using ï¬xed-point arithmetic operations. In particular, we focus on the ï¬xed-point implementation of convolutional and fully-connected layers, since they are the dominant parts of computational costs and memory requirements in DNNs (see [2, Table II]).
The major bottleneck of efï¬cient DNN processing is known to be in memory accesses [2, Section V-B]. Horowitz provides rough energy costs of various arithmetic and memory access operations for 45 nm technology [46, Figure 1.1.9], where we can ï¬nd that memory accesses typically consume more energy than arithmetic operations, and the memory access cost increases with the read size. Hence, for example, deploying binary models, instead of 32-bit models, it is expected to reduce energy consumption by 32à at least, due to 32 times fewer memory accesses.
2
1/dy EXP omy Input feature _&4 Quantization maps Output Activation }â> feature maps input FXP Convolution FXP weights FXP biases
Fig. 2. Low-precision convolutional layer using ï¬xed-point (FXP) convolution and bias addition.
Low-precision weights and activations basically stem from uniform quantization (e.g., see [47, Section 5.4]), where quan- tization bin boundaries are uniformly spaced and quantization output levels are the midpoints of bin intervals. Quantized weights and activations are represented by ï¬xed-point num- bers of small bit-width. Scaling factors (i.e., quantization bin sizes) are deï¬ned in each layer for ï¬xed-point weights and activations, respectively, to alter their dynamic ranges.
Figure 2 shows the ï¬xed-point design of a general convolu- tional layer consisting of convolution, bias addition and non- uniform activation. Fixed-point weights and input feature maps are given with common scaling factors δl and âl, respectively, where l is the layer index. Then, the convolution operation can be implemented by ï¬xed-point multipliers and accumulators. Biases are added, if present, after the convolution, and then the output is scaled properly by the product of the scaling factors for weights and input feature maps, i.e., δlâl, as shown in the ï¬gure. Here, the scaling factor for the biases is specially set to be δlâl so that ï¬xed-point bias addition can be done easily without another scaling. Then, a non-linear activation function follows. Finally, the output activations are fed into the next layer as the input.
Using rectiï¬ed linear unit (ReLU) activation, two scaling operations across two layers, i.e., scaling operations by δlâl and 1/âl+1, can be combined into one scaling operation by δlâl/âl+1 before (or after) ReLU activation. Furthermore, if the scaling factors are power-of-two numbers, then one can even implement scaling by bit-shift. Similarly, low-precision fully-connected layers can be implemented by replacing con- volution with matrix multiplication in the ï¬gure.
# III. REGULARIZATION FOR LOW-PRECISION DNNS
In this section, we present the regularizers that are utilized to learn pruned and quantized DNNs of low-precision weights and activations. We ï¬rst deï¬ne the quantization function. Given the number of bits, i.e., bit-width n, the quantization function yields
Qn(x; δ) = δ clipn(round(x/δ)), n ⥠2, n = 1, δ sign(x),
where x is the input and δ is the scaling factor; we let
round(x) = sign(x)||z| + 0.5], clip, (v) = min(max(a,-2"71), 2-1 â 1),
(1)
where || is the largest integer smaller than or equal to «. For ReLU activation, the ReLU output is always non-negative, and thus we use the unsigned quantization function given by
n (x; δ) = δ clip+
n (x; δ) = δ clip+ Q+ (2)
n (round(x/δ)), n (x) = min(max(x, 0), 2n â 1).
for n ⥠1, where clip+
A. Regularization for weight quantization
Consider a general non-linear neural network consisting of L layers. Let W1, W2, . . . , WL be the sets of high-precision weights in layers 1 to L, respectively. For notational simplicity, we let AL 1 = A1, A2, . . . , AL for any symbol A. We deï¬ne the MSQE regularizer for weights of all L layers as
L 1 RWHP 61) = 5 De lw @n(wsd)?, GB) l=1 weW
where n is the bit-width for quantized weights, δl is the scaling factor (i.e., quantization bin size) for quantized weights, and N is the total number of weights from all layers, i.e.,
L N=yom l=1
where |Wl| is the number of weights in layer l. We assumed that bit-width n is the same for all layers, just for notational simplicity, but it can be easily extended to more general cases such that each layer has a different bit-width.
Including the MSQE regularizer in (3), the cost function to optimize in training is given by
# Cn(X ; W L
1 , δL 1 ) = E(X ; Qn(W L 1 )) + λRn(W L 1 ; δL 1 ), λ > 0, (4)
1 ; δL where, with a slight abuse of notation, Qn(W L 1 ) denotes the set of quantized weights of all L layers, E(X ; Qn(W L 1 )) is the target loss function evaluated on the training dataset X using the quantized weights, and λ is the regularization coefï¬- cient. We set the scaling factors δL 1 to be learnable parameters and optimize them along with weights W L 1 . Remark 1. We clarify that we use high-precision weights in the backward pass for gradient descent by replacing approximately the quantization function Qn with the identity function. In the forward pass, we use quantized weights and activations, and the target objective function E is also calculated with the quantized weights and activations to mimic the low-precision inference-stage loss. Hence, the ï¬nal trained models are low- precision models, which can be operated on low-precision ï¬xed-point hardware in inference with no accuracy loss. Note that our method still has the gradient mismatch problem, similar to the existing approaches (see Section I). However, by adding the MSQE regularizer, we encourage high-precision weights to converge to their quantized values so that we reduce the mismatch.
Learnable regularization coefï¬cient. The regularization coefï¬cient λ in (4) is a hyper-parameter that controls the trade- off between the loss and the regularization. It is convention- ally ï¬xed ahead of training. However, searching for a good
3
(a) Iterations=10k (b) Iterations=21k (c) Iterations=23k (d) Iterations=30k
Fig. 3. Weight histogram snapshots of the MNIST LeNet-5 second convolutional layer captured at different training batch iteration numbers while a pre-trained model is quantized to have 4-bit weights and activations with the proposed regularization method.
hyper-parameter value is usually time-consuming. Hence, we propose the learnable regularization coefï¬cient, i.e., we let the regularization coefï¬cient be another learnable parameter.
We start training with a small initial value for λ, i.e., with little regularization. However, we promote the increase of λ in training by adding a penalty term for a small regularization coefï¬cient, which is â log λ for λ > 0, in the cost function (see (5)). The increasing coefï¬cient λ reinforces the conver- gence of high-precision weights to their quantized values for reducing the MSQE. It consequently alleviates the gradient mismatch problem (see Remark 1). The cost function in (4) is altered into
# Cn(X ; W L
1 , δL = E(X ; Qn(W L 1 , λ) 1 ; δL 1 )) + λRn(W L 1 ; δL 1 ) â log λ. (5)
For gradient descent, we need the gradients of (5) with respect to weights, scaling factors and the regularization coefï¬cient, respectively, which are provided in Appendix. Remark 2. In (5), observe that we use quantized weights in the forward pass to compute the loss while we update high- precision weights with gradient descent in the backward pass, as in BinaryConnect [5]. Thus, our method is different from BinaryRelax [16] that uses pseudo quantized weights in the forward pass. The pseudo quantized weights are computed by weighted average of high-precision weights and their quan- tized values. Our MSQE regularization resembles Moreau- Yosida regularization in BinaryRelax. However, the Moreau- Yosida regularization factor in BinaryRelax is manually in- creased with a ï¬xed rate at every iteration in training so the pseudo quantized weights are pushed towards quantized values as training goes on. In our scheme, the difference between high-precision weights and quantized weights is reduced by the MSQE regularization. Moreover, we propose letting the regularization coefï¬cient λ be learnable and adding another penalty term â log λ to promote increasing λ; hence, λ does not necessarily increase with a ï¬xed rate and can saturate after some point of training to ï¬nd a better local optimum, as shown in Figure 6(a). We do not constrain the range of λ in (5) so it is possible that λ diverges in optimization. However, we empirically found that λ saturates after some point of training in practice as the loss saturates (e.g., see Figure 6(a)).
Evolution of weight histogram. Figure 3 presents an ex- ample of how high-precision weights are gradually quantized by our regularization scheme. We plotted weight histogram
snapshots captured at the second convolutional layer of the MNIST LeNet-5 model1 while a pre-trained model is quan- tized to a 4-bit ï¬xed-point model. The histograms in the ï¬gure from the left to the right correspond to 10k, 21k, 23k, and 30k batch iterations in training, respectively. Observe that the weight distribution gradually converges to the sum of uniformly spaced delta functions and all high-precision weights converge to quantized values completely in the end. Comparison to soft weight sharing. In soft weight shar- ing [38], [48], a Gaussian mixture prior is assumed, and is regularized to form groups of weights that the model have similar values around the Gaussian component centers (e.g., see [49, Section 5.5.7]). The learnable regularization coefï¬cient can be related to the learnable variance in the Gaussian mixture prior. However, our weight regularization method is different from soft weight sharing since we consider uniform quantization and optimize quantization bin sizes, instead of optimizing individual Gaussian component centers for non-uniform quantization. We employ the simple MSQE regularization term for quantization, so that it is applicable to large-size DNNs. Note that soft weight sharing yields the regularization term of the logarithm of the summation of exponential functions, which is sometimes too complex to compute for large-size DNNs. In our method, the additional computational complexity for MSQE regularization is not expensive. It only scales in the order of O(N ), where N is the number of weights. Hence, the proposed scheme is easily applicable to the state-of-the-art DNNs with millions or tens of millions weights.
We note that biases are treated similar to weights. However, for the ï¬xed-point design presented in Section II, we use δlâl instead of δl as the scaling factor in (3), where âl is the scaling factor for input feature maps (i.e., activations from the previous layer), which is determined by the following activation quantization procedure.
# B. Quantization of activations
We quantize the output activation (feature map) x of layer l for 1 ⤠l ⤠L and yield Q+ m is the quantization function in (2) for bit-width m and âl is the learnable scaling factor for quantized activations of layer l. We note that âl is the scaling factor for activations of layer l whereas it denotes the scaling factor for input feature maps
1https://github.com/BVLC/caffe/tree/master/examples/mnist
4
(a) Iterations=4k (b) Iterations=6k (c) Iterations=8k (d) Iterations=10k
Fig. 4. Weight histogram snapshots of the MNIST LeNet-5 at different training batch iteration numbers when trained from scratch with the partial L2 regularizer for 90% sparsity (r = 90).
of layer l in Section II (see Figure 2). This is just one index shift in the notation, since the output of layer l is the input to layer l +1. We adopt this change just for notational simplicity. Similar to (3), we assumed that activation bit-width m is the same for all layers, but this constraint can be easily relaxed to cover the cases where each layer has a different bit-width. We assumed ReLU activation and used the unsigned quantization function Q+ m with Qm in case of general non-linear activation (see (1) and (2)).
The partial L2 regularizer encourages the weights below the threshold to move towards zero, while the other unregularized weights are updated to minimize the loss due to pruning. The threshold θ(r) is also updated at every iteration of training based on the instant weight distribution. We note that the threshold θ(r) decreases as training goes on since the regularized weights gradually converge to zero (see Figure 4). After ï¬nishing the regularized training, we ï¬nally have a set of weights clustered very near zero. The loss due to pruning these small-value weights is negligible.
We optimize âl by minimizing the MSQE for activations of layer l, i.e., we minimize
Simn(Ar; Ar) Ye le- Qh @ Ad, reA, 1 Ai] (6)
xâAl where Al is the set of activations of layer l for 1 ⤠l ⤠L. In the backward pass, we ï¬rst perform gradient descent for weights and their scaling factors using the loss function in (5), and then we update âl with gradient descent using (6). We do not utilize (6) in gradient descent for weights.
Backpropagation through quantized activations. Back- propagation is not feasible through quantized activations an- alytically since the gradient is zero almost everywhere. For backpropagation through the quantization function, we adopt the straight-through estimator [50]. In particular, we pass the gradient through the quantization function when the input is within the clipping boundary. If the input is outside the clipping boundary, we pass zero.
# C. Regularization for weight pruning
For weight pruning, we propose using partial L2 regular- ization. In particular, given a target pruning ratio r, we ï¬nd the r-th percentile of weight magnitude values. Assuming that we prune the weights below this r-th percentile value in magnitude, we deï¬ne a L2 regularizer only for them as follows:
After weight pruning, the pruned model is quantized by following the quantization procedure in Section III-A and Section III-B. In this stage, pruned weights are ï¬xed to be zero while only unpruned weights are updated and quantized. After pruning, we still use quantization bins around zero for the weights that are not pruned but have small magnitude, or for the weights that are made to be small after training the quantized network; unpruned weights between ââ/2 to â/2 are still quantized to zero, where â is the quantization bin size. However, the number of (unpruned) weights that are quantized to zero becomes much smaller after pruning.
# IV. EXPERIMENTS
We evaluate the proposed low-precision DNN compression for ImageNet classiï¬cation and image super resolution. Image super resolution is included in our experiments as a regression problem since its accuracy is more sensitive to quantization than classiï¬cation. Note that Tensorï¬ow Lite2 already supports a very efï¬cient 8-bit weight and activation quantization tool for network development on mobile platforms. Thus, our ex- perimental results focus on more extreme cases of quantization using less than 8 bits, where a more sophisticated algorithm is needed for smaller loss. We use FLP and FXP to denote the ï¬oating-point and ï¬xed-point formats, respectively.
L 1 P.(Wy) = NW > > |w|71)w| <o(r)s 1=1 weM
where θ(r) is the r-th percentile of weight magnitude values, which is the threshold for pruning. Adopting the learnable regularization coefï¬cient as in (5), we have
A. Experimental settings
For ImageNet classiï¬cation, we use the ImageNet ILSVRC 2012 dataset [51]. For image super resolution, we use the Open Images dataset3 as the training dataset, which is pre- processed as described in [52]. The proposed network pruning, quantization, and compression pipeline is implemented with
Cr(X ; W L 1 , λ) = E(X ; W L 1 ) + λPr(W L 1 ) â log λ,
for λ > 0.
2https://www.tensorï¬ow.org/lite 3https://github.com/openimages/dataset
5
TABLE I PRE-TRAINED MODELS USED IN IMAGENET CLASSIFICATION EXPERIMENTS.
AlexNet https://github.com/BVLC/caffe/tree/master/models/bvlc alexnet ResNet-18 https://github.com/HolmesShuan/ResNet-18-Caffemodel-on-ImageNet MobileNet https://github.com/shicai/MobileNet-Caffe Shufï¬eNet https://github.com/msnqqer/Shufï¬eNet
Caffe4. The pre-trained models used in our ImageNet classiï¬- cation experiments are obtained from the links in Table I. For image super resolution, we train (CT-)SRCNNs from scratch as described in [52].
Provided a pre-trained high-precision model, weight scaling factors δL 1 are initialized to cover the dynamic range of the pre-trained weights, i.e., the 99-th percentile magnitude of the weights in each layer. Similarly, activation scaling factors âL 1 are set to cover the dynamic range of the activations in each layer, which are obtained by feeding a small number of training data to the pre-trained model.
For quantization of ImageNet classiï¬cation networks, we employ the Adam optimizer [53]. The learning rate is set to be 10â5 and we train 300k batches with the batch size of 256, 128, 32 and 64 for AlexNet, ResNet-18, MobileNet and Shufï¬eNet, respectively. Then, we decrease the learning rate to 10â6 and train 200k more batches. For the learnable regularization coefï¬cient λ, we let λ = eÏ and learn Ï instead in order to make λ always positive in training. The initial value of Ï is set to be 0, and it is updated with the Adam optimizer using the learning rate of 10â4. For pruning of MobileNet and Shufï¬eNet, the Adam optimizer is used for 500k batches with learning rate 10â5, without decreasing the learning rate to 10â6 at 300k batches. The initial value of Ï is set to be 10 in pruning. The other settings are the same as described above for quantization. Then, pruned MobileNet and Shufï¬eNet models are quantized by following the same training procedure as described above for quantization. For quantization of image super resolution networks, we train the quantized models using the Adam optimizer for 3M batches with the batch size of 128. We use the learning rate of 10â5. The initial value for Ï is set to be 0 and it is updated by the Adam optimizer using the learning rate of 10â5.
# B. Experimental results
In Table II, we compare our quantization method to DoReFa-Net [9] for the AlexNet model in [54]. Since DoReFa-Net does not consider weight pruning, we neither apply pruning here. The DoReFa-Net results in Table II are (re-)produced by us from their code5, and we use the same training hyperparameters and epochs as we described in Section IV-A for fair comparison. We evaluate two cases where (1) all layers are quantized, and (2) all layers except the ï¬rst and the last layers are quantized. The results in Table II show that 4-bit quantization is needed for accuracy loss less than 1%. For binary weights, we observe some accuracy
# 4https://github.com/BVLC/caffe 5https://github.com/ppwwyyxx/tensorpack/tree/master/examples/
DoReFa-Net
TABLE II ALEXNET QUANTIZATION RESULTS ON IMAGENET CLASSIFICATION IN COMPARISON TO DOREFA-NET [9].
Quantized layers Weights Activations Top-1 / Top-5 accuracy (%) Ours DoReFa-Net [9]* Pre-trained model 32-bit FLP 32-bit FLP 58.0 / 80.8 8-bit FXP 4-bit FXP 2-bit FXP 8-bit FXP 4-bit FXP 2-bit FXP 57.7 / 80.5 56.5 / 79.4 53.5 / 77.3 57.6 / 80.8 56.9 / 80.3 43.0 / 68.1 (1) All layers 1-bit FXP 8-bit FXP 4-bit FXP 2-bit FXP 1-bit FXP 52.2 / 75.8 52.0 / 75.7 50.5 / 74.6 41.1 / 66.6 47.5 / 72.1 45.1 / 69.7 43.6 / 68.3 19.3 / 38.2 (2) Except the ï¬rst and the last layers 8-bit FXP 4-bit FXP 2-bit FXP 1-bit FXP 8-bit FXP 4-bit FXP 2-bit FXP 8-bit FXP 4-bit FXP 2-bit FXP 1-bit FXP 57.7 / 80.6 56.6 / 79.8 54.1 / 77.9 54.8 / 78.1 54.8 / 78.2 53.0 / 76.8 43.9 / 69.0 57.5 / 80.7 56.9 / 80.1 53.1 / 77.3 51.2 / 75.5 51.9 / 75.9 49.3 / 74.1 40.2 / 65.5
* from our experiments using their code.
TABLE III ACCURACY LOSS COMPARISON OF THE 4-BIT FXP RESNET-18 MODELS. SINCE THE BASELINE 32-BIT FLP MODEL SHOWS DIFFERENT ACCURACY IN EACH METHOD, WE COMPARE THE ACCURACY LOSS OF 4-BIT FXP MODELS FROM 32-BIT FLP MODELS.
Weights Activations Top-1 accuracy (%) 1-crop Ours 10-crop BCGD PACT* DSQ [26] [21] [19] 32-bit FLP 4-bit FXP 32-bit FLP 4-bit FXP 68.1 67.4 69.8 69.5 69.6 67.4 70.2 69.2 69.9 69.6 Accuracy (%) difference 0.7 0.3 2.2 1.0 0.3
* The ï¬rst and the last layers are not quantized.
loss of more or less than 10%. However, we can see that our quantization scheme performs better than DoReFa-Net in particular for low-precision cases, where the quantization error is larger and the mismatch problem of the forward and backward passes is more severe.
2) ResNet-18 quantization: Figure 5 presents the accuracy of the low-precision ResNet-18 [55] models obtained from our quantization method. The experiments on ResNet-18 are mainly for ablation study. In particular, we compare weight and activation quantization for various low-precision settings. The loss due to weight quantization is relatively less than the loss due to activation quantization, which is consistent with the results from DoReFa-Net [9]. We also compare the low-precision models obtained with and without the constraint of power-of-two scaling factors. In ï¬xed-point computations (see Figure 2), it is more appealing for scaling factors (i.e., quantization bin sizes) to be powers of two so they can be implemented by simple bit-shift, rather than with scalar multiplication. For power-of-two scaling factors, we perform rounding of scaling factors into their closest power-of-two values in the forward pass, while the rounding function is replaced with the identity function in the backward pass. We observe small performance degradation due to the constraint of power-of-two scaling factors in our experiments.
In Table III, we compare the proposed quantization scheme
6
88.4 Py 3 shes 8 8 Bis W:32-bit FLP, W:8-bit FXP, W:8-bit FXP, A:32-bit FLP = A:32-bit FLP_ = A:8-bit FXP W:4-bit FXP, A:4-bit FXP rm" W:2-bit FXP, A:2-bit FXP 83.0 5 8 : 76.1 a & W:1-bit FXP, A:1-bit FXP i 3 Bho & als W:1-bit FXP, W:1-bit FXP, W:1-bit FXP, A:8-bit FXP A:4-bit FXP A:2-bit FXP @ 32-bit FLP scaling factors, Top-1 accuracy (%) m⢠Power-of-2 scaling factors, Top-1 accuracy (%) @ 32-bit FLP scaling factors, Top-5 accuracy (%) m Power-of-2 scaling factors, Top-5 accuracy (%)
Fig. 5. Ablation study of ResNet-18 quantization on ImageNet classiï¬cation. We use âW: Weight precisionâ and âA: Activation precisionâ to denote weight and activation precisions, respectively. FLP and FXP stands for ï¬oating-point and ï¬xed-point formats, respectively.
TABLE IV COMPARISON OF LEARNABLE AND FIXED REGULARIZATION COEFFICIENTS FOR RESNET-18 ON IMAGENET CLASSIFICATION.
Weights Activations Top-1 / Top-5 accuracy (%) Learnable λ Fixed λ = 0.05 Fixed λ = 0.5 Fixed λ = 5 32-bit FLP 32-bit FLP 68.1 / 88.4 1-bit FXP 8-bit FXP 4-bit FXP 2-bit FXP 1-bit FXP 61.3 / 83.7 60.2 / 83.2 55.6 / 79.6 38.9 / 65.4 60.0 / 83.1 58.1 / 81.5 53.5 / 78.2 37.0 / 63.4 60.0 / 83.0 57.4 / 81.1 52.9 / 77.8 36.5 / 63.1 57.9 / 81.6 58.6 / 82.2 53.1 / 78.1 37.0 / 63.1
to the existing quantization methods from [19], [21], [26] for 4-bit weight and 4-bit activation quantization of ResNet-18. All convolutional and fully-connected layers of ResNet-18 are quantized in [19], [26], and ours, while the ï¬rst and the last layers are not quantized in [21]. Since the baseline 32-bit model shows different accuracy in each method, we compare the accuracy difference between 32-bit ï¬oating-point models and 4-bit ï¬xed-point models. For our method, we also show the accuracy obtained by using the average score from 10 different crops of the input (called 10-crop testing), where the baseline accuracy of our 32-bit ï¬oating-point model is aligned with the others. The results show that the proposed quantization scheme achieves 4-bit ResNet-18 quantization whose accuracy loss is comparable to the state-of-the-art methods. In particular, the accuracy loss from 4-bit quantization is shown to be very small and less than 1% in our scheme.
Learnable versus ï¬xed regularization coefï¬cients. In Table IV, we compare the performance of quantized ResNet- 18 [55] models when we use learnable and ï¬xed regularization coefï¬cients, respectively. Observe that the proposed learnable regularization method outperforms the conventional regular- ization method with a ï¬xed coefï¬cient in various low-precision settings.
In Figure 6, we compare the convergence curves when learnable and ï¬xed regularization coefï¬cients are used, respec- tively. Using a learnable regularization coefï¬cient, the MSQE regularization term decreases (although there is a bump in the middle) while λ increases in training. However, using a ï¬xed regularization coefï¬cient, the MSQE regularization term saturates and even increases after some point as training goes on, which implies that the mismatch of the forward and backward passes is not resolved. The unresolved mismatch eventually turns into accuracy loss, as shown in the ï¬gure.
Training cost C= Training cross-entropy loss
# AR
B+
# ~logd
(a) Learnable λ
(b) Fixed λ = 0.5
Fig. 6. ResNet-18 model training convergence curves for binary weights and 2-bit activations. We compare the convergence curves with learnable and ï¬xed regularization coefï¬cients.
3) MobileNet and Shufï¬eNet compression: We mainly eval- uate our method to obtain compressed low-precision Mo- bileNet [34] and Shufï¬eNet [35] models for ImageNet clas- siï¬cation. MobileNet and Shufï¬eNet are state-of-the-art Ima- geNet classiï¬cation networks developed for efï¬cient inference on resource-limited platforms. Compression and quantization of such efï¬cient networks are important in practice to lower latency and to improve power-efï¬ciency further in mobile and edge devices. It is typically more difï¬cult to compress and quantize such networks of efï¬cient architectures. For Mo- bileNet and Shufï¬eNet compression, we prune 50% weights from their pre-trained models as described in Section III-C so that the accuracy loss due to pruning is marginal. Then, we employ our weight and activation quantization method. After converting into sparse low-precision models, universal source coding with bzip2 [56] follows to compress the ï¬xed-point low-precision weights.
In Table V, for ablation study, we compare pruning-only results and pruning+quantization results with various low-
7
> 3
2 2
TABLE V LOW-PRECISION MOBILENET AND SHUFFLENET COMPRESSION RESULTS FOR IMAGENET CLASSIFICATION. FOR ABLATION STUDY, WE COMPARE PRUNING-ONLY RESULTS AND PRUNING+QUANTIZATION RESULTS WITH VARIOUS LOW-PRECISION SETTING. WE ALSO SHOW THE COMPRESSION RESULTS WITH AND WITHOUT ENTROPY CODING, WHERE WE USED BZIP2 AS A SPECIFIC ENTROPY CODING SCHEME.
Method Weights Activations MobileNet v1 Shufï¬eNet Top-1 / Top-5 Compression ratio accuracy (%) with / without bzip2 Top-1 / Top-5 Compression ratio accuracy (%) with / without bzip2 Pre-trained model 32-bit FLP 32-bit FLP 70.9 / 89.9 - 65.4 / 86.4 - Ours: pruning (50%) pruning (55%) pruning (60%) 32-bit FLP 32-bit FLP 70.2 / 89.7 70.0 / 89.5 69.5 / 89.3 2.01 / 1.00 2.22 / 1.00 2.49 / 1.00 65.3 / 86.4 64.7 / 86.0 63.6 / 85.5 1.99 / 1.00 2.20 / 1.00 2.45 / 1.00 8-bit FXP 6-bit FXP 5-bit FXP 4-bit FXP 8-bit FXP 6-bit FXP 5-bit FXP 4-bit FXP 70.8 / 90.1 70.5 / 89.9 69.7 / 89.3 66.9 / 87.7 4.83 / 4.00 6.11 / 5.33 7.13 / 6.40 9.87 / 8.00 65.8 / 86.7 65.7 / 86.7 64.0 / 85.6 59.5 / 82.6 4.99 / 4.00 5.81 / 5.33 6.78 / 6.40 9.59 / 8.00 Ours: pruning (50%) + quantization 6-bit FXP 5-bit FXP 4-bit FXP 8-bit FXP 70.6 / 90.0 70.3 / 89.7 69.7 / 89.2 6.11 / 5.33 7.13 / 6.40 8.65 / 8.00 66.3 / 87.1 65.8 / 86.7 64.8 / 86.2 5.81 / 5.33 6.79 / 6.40 8.26 / 8.00 6-bit FXP 5-bit FXP 4-bit FXP 32-bit FLP 70.7 / 90.0 70.4 / 89.8 69.3 / 89.0 6.12 / 5.33 7.13 / 6.40 10.01 / 8.00 66.3 / 87.1 65.8 / 86.9 64.1 / 85.8 5.81 / 5.33 6.78 / 6.40 9.71 / 8.00 Tensorï¬ow 8-bit model* 8-bit FXP 8-bit FXP 70.1 / 88.9 N/A / 4.00 N/A N/A Relaxed quantization [28] 8-bit FXP 6-bit FXP 5-bit FXP 8-bit FXP 6-bit FXP 5-bit FXP 70.4 / 89.4 68.0 / 88.0 61.4 / 83.7 N/A / 4.00 N/A / 5.33 N/A / 6.40 N/A N/A N/A N/A N/A N/A
* https://github.com/tensorï¬ow/models/blob/master/research/slim/nets/mobilenet v1.md
73 â- Compressed MobileNet, W: 4/5/6-bit FXP, A: 32-bit FLP (Ours) 7 â%- Compressed MobileNet, W: 4/5/6-bit FXP, A: 8-bit FXP (Ours) â®- Compressed MobileNet, W: Quantized 32-bit FLP, A: 32-bit FLP (Han et al., 2016) B67 â&- Compressed MobileNet, W: Quantized 32-bit FLP, A: 32-bit FLP (Park et al., 2017) 3 65 axe Compressed MobileNet, W: Quantized 32-bit FLP, A: 32-bit FLP (Tung & Mori, 2018) S 63 â@- Compressed MobileNet, W: Mixed precision, A: Mixed precision (Wang et al., 2019) iy 61 4 Compressed ShuffleNet, W: 4/5/6-bit FXP, A: 32-bit FLP (Ours) 59 âÂ¥- Compressed ShuffleNet, W: 4/5/6-bit FXP, A: 8-bit FXP (Ours) 57 â&- Compressed ShuffleNet, W: Quantized 32-bit FLP, A: 32-bit FLP (Park et al., 2017) 55 Compressed ShuffleNet, W: Quantized 32-bit FLP, A: 32-bit FLP (Tung & Mori, 2018) 0.5 1 15 2 2.5 3 Network size (MB)
Fig. 7. Comparison of our low-precision MobileNet and Shufï¬eNet compression results to the ones of the state-of-the-art network compression methods on ImageNet classiï¬cation. We use âW: Weight precisionâ and âA: Activation precisionâ to denote weight and activation precisions used in the compressed models, respectively.
precision setting. We also show the compression results with and without entropy coding, where we use bzip2 as a speciï¬c entropy coding scheme. Observe that the accuracy loss is marginal when we prune 50% weights for both MobileNet and Shufï¬eNet. After pruning 50% weights, we quantize the pruned models. Similar to the AlexNet and ResNet-18 results, the accuracy loss from quantization is more severe when we decrease the activation bit-width than the weight bit-width. From the experiments, we obtain low-precision models of 5-bit weights and 8-bit activations with top-1 accuracy loss of 0.6% only. The compression ratio of these low-precision models is 6.40 without bzip2 compression, but it increases and becomes 7.13 and 6.79 for MobileNet and Shufï¬eNet, respectively, after bzip2 compression. We also show that our scheme outperforms the existing quantization schemes from tensorï¬ow and [28].
In Figure 7, we compare the compression ratios of our scheme and the existing network compression methods in [36], [42], [45], [57]. Our low-precision network compression scheme shows comparable compression ratios to the state-of-
the-art weight compression schemes. We emphasize that our scheme produces low-precision models of ï¬xed-point weights and activations that support efï¬cient inference of ï¬xed-point operations, while the previous compression schemes, except [45], produces quantized weights that are still ï¬oating-point numbers and thus ï¬oating-point operations are necessary to achieve the presented accuracy of them. The hardware-aware automated quantization in [45] achieved impressive compres- sion results by searching for a quantized model of âmixedâ precision for different layers with reinforcement learning, but not all hardware supports mixed precision operations.
4) Image super resolution network quantization: The image super resolution problem is to synthesize a high-resolution image from a low-resolution one. The DNN output is the high- resolution image corresponding to the input low-resolution image, and thus the loss due to quantization is more promi- nent. We evaluate the proposed method on SRCNN [58] and cascade-trained SRCNN (CT-SRCNN) [52] for image super resolution. The objective image quality metric measured by
8
TABLE VI CT-SRCNN (9-LAYER) QUANTIZATION RESULTS FOR UPSCALING FACTOR 3.
Model Method Weights Activations Set-14 PSNR (dB) Set-14 SSIM PSNR (dB) loss Pre-trained model 32-bit FLP 32-bit FLP 29.05 0.8161 - - SRCNN 3-layer Ours 8-bit FXP 4-bit FXP 2-bit FXP 1-bit FXP 8-bit FXP 29.03 28.99 28.72 28.53 0.8141 0.8133 0.8075 0.8000 0.02 0.06 0.33 0.52 0.0020 0.0028 0.0086 0.0161 Ristretto [14]* 8-bit FXP 8-bit FXP 28.58 0.7827 0.46 0.0328 Pre-trained model 32-bit FLP 32-bit FLP 29.56 0.8273 - - CT-SRCNN 5-layer Ours 8-bit FXP 4-bit FXP 2-bit FXP 1-bit FXP 8-bit FXP 29.54 29.48 29.28 29.09 0.8267 0.8258 0.8201 0.8171 0.02 0.08 0.28 0.47 0.0006 0.0015 0.0072 0.0102 Ristretto [14]* 8-bit FXP 8-bit FXP 29.04 0.8111 0.53 0.0148 Pre-trained model 32-bit FLP 32-bit FLP 29.71 0.8300 - - CT-SRCNN 9-layer Ours 8-bit FXP 4-bit FXP 2-bit FXP 1-bit FXP 8-bit FXP 29.67 29.63 29.37 29.20 0.8288 0.8285 0.8236 0.8193 0.04 0.08 0.34 0.51 0.0012 0.0015 0.0064 0.0107 Ristretto [14]* 8-bit FXP 8-bit FXP 29.05 0.8065 0.74 0.0234 Bicubic - - - 27.54 0.7742 - -
* from our experiments using their code at https://github.com/pmgysel/caffe.
the peak signal-to-noise ratio (PSNR) and the perceptual score measured by the structural similarity index (SSIM) [59] are compared for Set-14 image dataset [60] in Table VI for 3- layer SRCNN, 5-layer CT-SRCNN, and 9-layer CT-SRCNN, respectively. Observe that our method successfully yields low- precision models of 8-bit weights and activations at negligible loss, and they are better than the results that we obtain with one of the previous works, Ristretto [14]. It is interesting to see that the PSNR loss of using binary weights and 8-bit activations is 0.5 dB only.
low-precision models of 5-bit weights and 8-bit activations with compression ratios of 7.13 and 6.79, respectively, at marginal accuracy loss. For image super resolution, we only lost 0.04 dB PSNR when using 8-bit weights and activations, instead of 32-bit ï¬oating-point numbers.
# APPENDIX
A. Gradients for weights
# The gradient of the cost function Cn in (5) for w satisï¬es
âwCn = âwE + λâwRn, (7)
# V. CONCLUSION
In this paper, we proposed a method to quantize deep neural networks (DNNs) by regularization to produce low- precision DNNs for efï¬cient ï¬xed-point inference. Although our training happens in high precision particularly for its backward passes and gradient descent, its forward passes use quantized low-precision weights and activations, and thus the resulting networks can be operated on low-precision ï¬xed- point hardware at inference time. The proposed scheme alle- viates the mismatch problem in the forward and backward passes of low-precision network training by using MSQE regularization. Moreover, we proposed a novel learnable reg- ularization coefï¬cient to reinforce the convergence of high- precision weights to their quantized values when using MSQE regularization. We also discussed how a similar regularization technique can be employed for weight pruning with partial L2 regularization.
for weight w of layer l, 1 ⤠l ⤠L. The ï¬rst partial derivative in the right side of (7) can be obtained efï¬ciently by the backpropagation algorithm. For backpropagation through the weight quantization function, we adopt the following approximation similar to straight-through estimator [50]:
Ly e{an-t_ gant 3), n>1, Vu Qn(w; 5) = (8) lge[-2,2), n=1,
where 1E is an indication function such that it is one if E is true and zero otherwise. Namely, we pass the gradient through the quantization function when the weight is within the clipping boundary. To give some room for the weight to move around the boundary in stochastic gradient descent, we additionally allow some margin of δl/2 for n ⥠2 and δl for n = 1. Outside the clipping boundary with some margin, we pass zero.
For weight w of layer l, 1 ⤠l ⤠L, the partial derivative of the regularizer Rn satisï¬es
We showed by experiments that the proposed quantization algorithm successfully produces low-precision DNNs of binary weights for classiï¬cation problems, such as ImageNet classiï¬- cation, as well as for regression and image synthesis problems, such as image super resolution. For MobileNet and Shufï¬eNet compression, we obtained sparse (50% weights are pruned)
âwRn = 2 N (w â Qn(w; δl)), (9)
almost everywhere except some non-differentiable points of w at quantization bin boundaries Un(δl) given by
2%+1â2" Un(b) = {2 aah, (10) 61,4 =0,1,.
9
for n > 1 and U4, (6,) = {0}. If the weight is located at one of these boundaries, it actually makes no difference to update w to either direction of wâe or w+e, in terms of its quantization error. Thus, we let
VuRn 20, if w EU, (5). (11)
From (7)â(11), we ï¬nally have
âwCn = âwE + 2λ N (w â Qn(w; δl))1w /âUn(δl).
Remark 3. If the weight is located at one of the bin boundaries, the weight gradient is solely determined by the network loss function derivative and thus the weight is updated towards the direction to minimize the network loss function. Otherwise, the regularization term impacts the gradient as well and encourages the weight to converge to the closest bin center as far as the loss function changes small. The regularization coefï¬cient trades off these two contributions of the network loss function and the regularization term.
# B. Gradient for the regularization coefï¬cient
The gradient of the cost function for λ is given by
âλCn = Rn(W L 1 ; δL 1 ) â 1 λ . (12)
Observe that λ tends to 1/Rn in gradient descent. Remark 4. Recall that weights gradually tend to their closest quantization output levels to reduce the regularizer Rn (see Remark 3). As the regularizer Rn decreases, the regularization coefï¬cient λ gets larger by gradient descent using (12). Then, a larger regularization coefï¬cient further forces weights to move towards quantized values in the following update. In this manner, weights gradually converges to quantized values.
C. Gradients for scaling factors
For scaling factor optimization, we approximately consider the MSQE regularization term only for simplicity. Using the chain rule, it follows that
# âδl Cn â âδl Rn
© Vo Rn 2r =-F (w â Qn(w; 61))V5, Qn (w; 61), (13) wEew,
for 1 ⤠l ⤠L. Moreover, it can be shown that
V5. Qn(w; dr) = Tn (ws 61) clip, (round(w/6z)), sign(w), n>1, (14) I> n=1,
almost everywhere except some non-differentiable points of δl satisfying
w 2i+1-â2â = ⬠4 â ââ ,i = 0,1,...,2"- 2), 15 i { > \ » 5)
for n > 1. Similar to (11), we let
VaiQn(w: 51) 20, if w ⬠Und), (16)
so that the scaling factor δl is not impacted by the weights at the bin boundaries. From (13)â(16), it follows that
Ss (w â Qn(w; 61))Tn(w; 61) weet, (61)+ wEew
Similarly, one can derive the gradients for activation scaling factors âL
# REFERENCES
[1] Y. LeCun, Y. Bengio, and G. Hinton, âDeep learning,â Nature, vol. 521, no. 7553, pp. 436â444, 2015.
[2] V. Sze, Y.-H. Chen, T.-J. Yang, and J. S. Emer, âEfï¬cient processing of deep neural networks: A tutorial and survey,â Proceedings of the IEEE, vol. 105, no. 12, pp. 2295â2329, 2017.
[3] Y. Cheng, D. Wang, P. Zhou, and T. Zhang, âModel compression and acceleration for deep neural networks: The principles, progress, and challenges,â IEEE Signal Processing Magazine, vol. 35, no. 1, pp. 126â 136, 2018.
[4] T. M. Cover and J. A. Thomas, Elements of Information Theory. Wiley & Sons, 2012. John
[5] M. Courbariaux, Y. Bengio, and J.-P. David, âBinaryConnect: Training deep neural networks with binary weights during propagations,â in Advances in Neural Information Processing Systems, 2015, pp. 3123â 3131.
[6] Z. Lin, M. Courbariaux, R. Memisevic, and Y. Bengio, âNeural networks with few multiplications,â in International Conference on Learning Representations, 2016.
[7] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, âBi- narized neural networks,â in Advances in Neural Information Processing Systems, 2016, pp. 4107â4115.
[8] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, âXNOR-Net: Imagenet classiï¬cation using binary convolutional neural networks,â in European Conference on Computer Vision. Springer, 2016, pp. 525â 542.
[9] S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou, âDoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients,â arXiv preprint arXiv:1606.06160, 2016.
[10] C. Zhu, S. Han, H. Mao, and W. J. Dally, âTrained ternary quantization,â in International Conference on Learning Representations, 2017. [11] Z. Cai, X. He, J. Sun, and N. Vasconcelos, âDeep learning with low precision by half-wave Gaussian quantization,â in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5918â5926.
[12] L. Hou, Q. Yao, and J. T. Kwok, âLoss-aware binarization of deep networks,â in International Conference on Learning Representations, 2017.
[13] L. Hou and J. T. Kwok, âLoss-aware weight quantization of deep networks,â in International Conference on Learning Representations, 2018.
[14] P. Gysel, J. Pimentel, M. Motamedi, and S. Ghiasi, âRistretto: A frame- work for empirical study of resource-efï¬cient inference in convolutional neural networks,â IEEE Transactions on Neural Networks and Learning Systems, no. 99, pp. 1â6, 2018.
[15] A. Zhou, A. Yao, K. Wang, and Y. Chen, âExplicit loss-error-aware quantization for low-bit deep neural networks,â in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 9426â9435.
[16] P. Yin, S. Zhang, J. Lyu, S. Osher, Y. Qi, and J. Xin, âBinaryRelax: A relaxation approach for training deep neural networks with quantized weights,â SIAM Journal on Imaging Sciences, vol. 11, no. 4, pp. 2205â 2223, 2018.
[17] J.-J. Moreau, âProximit´e et dualit´e dans un espace hilbertien,â Bulletin de la Soci´et´e math´ematique de France, vol. 93, pp. 273â299, 1965.
[18] K. Yosida, Functional Analysis. SpringerâVerlag, 1965. [19] P. Yin, S. Zhang, J. Lyu, S. Osher, Y. Qi, and J. Xin, âBlended coarse gradient descent for full quantization of deep neural networks,â Research in the Mathematical Sciences, vol. 6, no. 1, p. 14, 2019.
[20] P. L. Combettes and J.-C. Pesquet, âStochastic approximations and perturbations in forward-backward splitting for monotone operators,â Pure and Applied Functional Analysis, vol. 1, no. 1, pp. 13â37, 2016. [21] J. Choi, Z. Wang, S. Venkataramani, P. I.-J. Chuang, V. Srinivasan, and K. Gopalakrishnan, âPACT: Parameterized clipping activation for quantized neural networks,â arXiv preprint arXiv:1805.06085, 2018.
10
[22] D. Zhang, J. Yang, D. Ye, and G. Hua, âLQ-Nets: Learned quantization for highly accurate and compact deep neural networks,â in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 365â 382.
[23] J. Faraone, N. Fraser, M. Blott, and P. H. Leong, âSYQ: Learning sym- metric quantization for efï¬cient deep neural networks,â in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4300â4309.
[24] Z.-G. Liu and M. Mattina, âLearning low-precision neural networks without straight-through estimator (STE),â in Proceedings of the Interna- tional Joint Conference on Artiï¬cial Intelligence, 2019, pp. 3066â3072. [25] J. Yang, X. Shen, J. Xing, X. Tian, H. Li, B. Deng, J. Huang, and X.-s. Hua, âQuantization networks,â in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 7308â7316.
[26] R. Gong, X. Liu, S. Jiang, T. Li, P. Hu, J. Lin, F. Yu, and J. Yan, âDifferentiable soft quantization: Bridging full-precision and low-bit neural networks,â in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 4852â4861.
[27] S. Anwar, K. Hwang, and W. Sung, âFixed point optimization of deep convolutional neural networks for object recognition,â in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 1131â1135.
[28] C. Louizos, M. Reisser, T. Blankevoort, E. Gavves, and M. Welling, âRelaxed quantization for discretized neural networks,â in International Conference on Learning Representations, 2019.
[29] S. Han, J. Pool, J. Tran, and W. Dally, âLearning both weights and con- nections for efï¬cient neural network,â in Advances in Neural Information Processing Systems, 2015, pp. 1135â1143.
[30] V. Lebedev and V. Lempitsky, âFast convnets using group-wise brain damage,â in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2554â2564.
[31] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li, âLearning structured sparsity in deep neural networks,â in Advances in Neural Information Processing Systems, 2016, pp. 2074â2082.
[32] Y. Guo, A. Yao, and Y. Chen, âDynamic network surgery for efï¬cient DNNs,â in Advances In Neural Information Processing Systems, 2016, pp. 1379â1387.
[33] J. Lin, Y. Rao, J. Lu, and J. Zhou, âRuntime neural pruning,â in Advances in Neural Information Processing Systems, 2017, pp. 2178â2188. [34] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, âMobileNets: Efï¬cient convo- lutional neural networks for mobile vision applications,â arXiv preprint arXiv:1704.04861, 2017.
[35] X. Zhang, X. Zhou, M. Lin, and J. Sun, âShufï¬eNet: An extremely efï¬- cient convolutional neural network for mobile devices,â in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6848â6856.
[36] S. Han, H. Mao, and W. J. Dally, âDeep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding,â in International Conference on Learning Representations, 2016. [37] Y. Choi, M. El-Khamy, and J. Lee, âTowards the limit of network quantization,â in International Conference on Learning Representations, 2017.
[38] K. Ullrich, E. Meeds, and M. Welling, âSoft weight-sharing for neural network compression,â in International Conference on Learning Repre- sentations, 2017.
[39] D. Molchanov, A. Ashukha, and D. Vetrov, âVariational dropout spar- siï¬es deep neural networks,â in International Conference on Machine Learning, 2017, pp. 2498â2507.
[40] E. Agustsson, F. Mentzer, M. Tschannen, L. Cavigelli, R. Timofte, L. Benini, and L. V. Gool, âSoft-to-hard vector quantization for end- to-end learning compressible representations,â in Advances in Neural Information Processing Systems, 2017, pp. 1141â1151.
[41] C. Louizos, K. Ullrich, and M. Welling, âBayesian compression for deep learning,â in Advances in Neural Information Processing Systems, 2017, pp. 3290â3300.
[42] F. Tung and G. Mori, âDeep neural network compression by in- parallel pruning-quantization,â IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
[43] Y. Choi, M. El-Khamy, and J. Lee, âUniversal deep neural network compression,â IEEE Journal of Selected Topics in Signal Processing, 2020.
[44] Y. He, J. Lin, Z. Liu, H. Wang, L.-J. Li, and S. Han, âAMC: AutoML for model compression and acceleration on mobile devices,â in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 784â800.
[45] K. Wang, Z. Liu, Y. Lin, J. Lin, and S. Han, âHAQ: Hardware-aware automated quantization with mixed precision,â in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 8612â8620.
[46] M. Horowitz, â1.1 computingâs energy problem (and what we can do about it),â in IEEE International Solid-State Circuits Conference, 2014, pp. 10â14.
[47] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Springer Science & Business Media, 2012, vol. 159.
[48] S. J. Nowlan and G. E. Hinton, âSimplifying neural networks by soft weight-sharing,â Neural Computation, vol. 4, no. 4, pp. 473â493, 1992. [49] C. M. Bishop, Pattern Recognition and Machine Learning. Springer,
2006.
[50] Y. Bengio, N. L´eonard, and A. Courville, âEstimating or propagating gradients through stochastic neurons for conditional computation,â arXiv preprint arXiv:1308.3432, 2013.
[51] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., âImageNet large scale visual recognition challenge,â International Journal of Computer Vision, vol. 115, no. 3, pp. 211â252, 2015.
[52] H. Ren, M. El-Khamy, and J. Lee, âCT-SRCNN: Cascade trained and trimmed deep convolutional neural networks for image super resolution,â in Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), 2018.
[53] D. Kingma and J. Ba, âAdam: A method for stochastic optimization,â in International Conference on Learning Representations, 2015. [54] A. Krizhevsky, I. Sutskever, and G. E. Hinton, âImageNet classiï¬ca- tion with deep convolutional neural networks,â in Advances in Neural Information Processing Systems, 2012, pp. 1097â1105.
[55] K. He, X. Zhang, S. Ren, and J. Sun, âDeep residual learning for image recognition,â in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770â778.
[56] J. Seward, âbzip2,â 1998. [Online]. Available: http://www.bzip.org [57] E. Park, J. Ahn, and S. Yoo, âWeighted-entropy-based quantization for deep neural networks,â in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 7197â7205.
[58] C. Dong, C. C. Loy, K. He, and X. Tang, âImage super-resolution using deep convolutional networks,â IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295â307, 2016.
[59] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, âImage quality assessment: From error visibility to structural similarity,â IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600â612, 2004. [60] R. Zeyde, M. Elad, and M. Protter, âOn single image scale-up using sparse-representations,â in International Conference on Curves and Surfaces. Springer, 2010, pp. 711â730.
11 | {
"id": "1606.06160"
} |
1808.09381 | Understanding Back-Translation at Scale | An effective method to improve neural machine translation with monolingual
data is to augment the parallel training corpus with back-translations of
target language sentences. This work broadens the understanding of
back-translation and investigates a number of methods to generate synthetic
source sentences. We find that in all but resource poor settings
back-translations obtained via sampling or noised beam outputs are most
effective. Our analysis shows that sampling or noisy synthetic data gives a
much stronger training signal than data generated by beam or greedy search. We
also compare how synthetic data compares to genuine bitext and study various
domain effects. Finally, we scale to hundreds of millions of monolingual
sentences and achieve a new state of the art of 35 BLEU on the WMT'14
English-German test set. | http://arxiv.org/pdf/1808.09381 | Sergey Edunov, Myle Ott, Michael Auli, David Grangier | cs.CL | 12 pages; EMNLP 2018 | null | cs.CL | 20180828 | 20181003 | 2018:
8 1 0 2
t c O 3 ] L C . s c [ 2 v 1 8 3 9 0 . 8 0 8 1 : v i X r a
# Understanding Back-Translation at Scale
# Sergey Edunovâ³ Myle Ottâ³ Michael Auliâ³ David Grangier â½â â³Facebook AI Research, Menlo Park, CA & New York, NY. â½Google Brain, Mountain View, CA.
# Abstract
An effective method to improve neural ma- chine translation with monolingual data is to augment the parallel training corpus with back-translations of target language sentences. This work broadens the understanding of back-translation and investigates a number of methods to generate synthetic source sen- tences. We ï¬nd that in all but resource poor settings back-translations obtained via sam- pling or noised beam outputs are most effec- tive. Our analysis shows that sampling or noisy synthetic data gives a much stronger training signal than data generated by beam or greedy search. We also compare how synthetic data compares to genuine bitext and study var- ious domain effects. Finally, we scale to hun- dreds of millions of monolingual sentences and achieve a new state of the art of 35 BLEU on the WMTâ14 English-German test set.
We focus on back-translation (BT) which operates in a semi-supervised setup where both bilingual and monolingual data in the target lan- guage are available. Back-translation ï¬rst trains an intermediate system on the parallel data which is used to translate the target monolingual data into the source language. The result is a parallel corpus where the source side is synthetic machine transla- tion output while the target is genuine text written by humans. The synthetic parallel corpus is then simply added to the real bitext in order to train a ï¬nal system that will translate from the source to the target language. Although simple, this method has been shown to be helpful for phrase- based translation (Bojar and Tamchyna, 2011), NMT (Sennrich et al., 2016a; Poncelas et al., 2018) as well as unsupervised MT (Lample et al., 2018a).
# 1 Introduction
Machine translation relies on the statistics of large parallel corpora, i.e. datasets of paired sentences in both the source and target language. However, bitext is limited and there is a much larger amount of monolingual data available. Monolingual data has been traditionally used to train language mod- els which improved the ï¬uency of statistical ma- chine translation (Koehn, 2010).
In the context of neural machine translation (NMT; Bahdanau et al. 2015; Gehring et al. 2017; Vaswani et al. 2017), there has been extensive work to improve models with monolingual data, including language model fusion (Gulcehre et al., 2015, 2017), back-translation (Sennrich et al., 2016a) and dual learning (Cheng et al., 2016; He et al., 2016a). These methods have different advantages and can be combined to reach high ac- curacy (Hassan et al., 2018).
In this paper, we investigate back-translation for neural machine translation at a large scale by adding hundreds of millions of back-translated sentences to the bitext. Our experiments are based on strong baseline models trained on the public bi- text of the WMT competition. We extend previous analysis (Sennrich et al., 2016a; Poncelas et al., 2018) of back-translation in several ways. We pro- vide a comprehensive analysis of different meth- ods to generate synthetic source sentences and we show that this choice matters: sampling from the model distribution or noising beam outputs out- performs pure beam search, which is typically used, by 1.7 BLEU on average across several test sets. Our analysis shows that synthetic data based on sampling and noised beam search provides a stronger training signal than synthetic data based on argmax inference. We also study how adding synthetic data compares to adding real bitext in a controlled setup with the surprising ï¬nding that synthetic data can sometimes match the accuracy of real bitext. Our best setup achieves 35 BLEU
*Work done while at Facebook AI Research.
on the WMTâ14 English-German test set by rely- ing only on public WMT bitext as well as 226M monolingual sentences. This outperforms the sys- tem of DeepL by 1.7 BLEU who train on large amounts of high quality non-benchmark data. On WMTâ14 English-French we achieve 45.6 BLEU.
# 2 Related work
This section describes prior work in machine translation with neural networks as well as semi- supervised machine translation.
# 2.1 Neural machine translation
We build upon recent work on neural machine translation which is typically a neural network with an encoder/decoder architecture. The en- coder infers a continuous space representation of the source sentence, while the decoder is a neural language model conditioned on the en- coder output. The parameters of both models are learned jointly to maximize the likelihood of the target sentences given the corresponding source sentences from a parallel corpus (Sutskever et al., 2014; Cho et al., 2014). At inference, a target sen- tence is generated by left-to-right decoding. have
been improving efï¬- proposed with the goal of includes ciency and/or effectiveness. 2014; recurrent Bahdanau et al., 2015; Luong et al., 2015), con- volutional networks (Kalchbrenner et al., 2016; Gehring et al., 2017; Kaiser et al., 2017) and (Vaswani et al., 2017). transformer Recent work relies on attention mechanisms where the encoder produces a sequence of vectors and, for each target token, the decoder attends to the most relevant part of the source through a context-dependent weighted-sum of the (Bahdanau et al., 2015; Luong et al., 2015). Attention has been reï¬ned with multi-hop attention (Gehring et al., 2017), self-attention (Vaswani et al., 2017; Paulus et al., 2018) and multi-head attention (Vaswani et al., 2017). We use a transformer architecture (Vaswani et al., 2017).
# 2.2 Semi-supervised NMT
Monolingual target data has been used to im- prove the ï¬uency of machine translations since the early IBM models (Brown et al., 1990). In phrase- based systems, language models (LM) in the tar-
get language increase the score of ï¬uent outputs during decoding (Koehn et al., 2003; Brants et al., 2007). A similar strategy can be applied to NMT (He et al., 2016b). Besides improving ac- curacy during decoding, neural LM and NMT can beneï¬t from deeper integration, e.g. by combining the hidden states of both models (Gulcehre et al., 2017). Neural architecture also allows multi-task learning and parameter sharing between MT and target-side LM (Domhan and Hieber, 2017).
Back-translation (BT) is an alternative to lever- age monolingual data. BT is simple and easy to apply as it does not require modiï¬cation to the MT training algorithms. It requires training a target-to-source system in order to generate ad- ditional synthetic parallel data from the mono- lingual target data. This data complements hu- man bitext to train the desired source-to-target system. BT has been applied earlier to phrase- base systems (Bojar and Tamchyna, 2011). For these systems, BT has also been successful in leveraging monolingual data for domain adapta- tion (Bertoldi and Federico, 2009; Lambert et al., 2011). Recently, BT has been shown beneï¬cial for NMT (Sennrich et al., 2016a; Poncelas et al., 2018). It has been found to be particularly use- ful when parallel data is scarce (Karakanta et al., 2017).
Currey et al. (2017) show that low resource lan- guage pairs can also be improved with synthetic data where the source is simply a copy of the monolingual target data. Concurrently to our work, Imamura et al. (2018) show that sampling synthetic sources is more effective than beam search. Speciï¬cally, they sample multiple sources for each target whereas we draw only a sin- gle sample, opting to train on a larger number of target sentences instead. Hoang et al. (2018) and Cotterell and Kreutzer (2018) suggest an iter- ative procedure which continuously improves the quality of the back-translation and ï¬nal systems. Niu et al. (2018) experiment with a multilingual model that does both the forward and backward translation which is continuously trained with new synthetic data.
There has also been work using source-side monolingual data (Zhang and Zong, 2016). Fur- thermore, Cheng et al. (2016); He et al. (2016a); text Xia et al. from both languages can be leveraged by ex- tending back-translation to dual learning: when
training both source-to-target and target-to-source models jointly, one can use back-translation in both directions and perform multiple rounds of BT. A similar idea is applied in unsupervised NMT (Lample et al., 2018a,b). Besides mono- lingual data, various approaches have been in- troduced to beneï¬t from parallel data in other language pairs (Johnson et al., 2017; Firat et al., 2016a,b; Ha et al., 2016; Gu et al., 2018).
Data augmentation is an established technique in computer vision where a labeled dataset is sup- plemented with cropped or rotated input images. Recently, generative adversarial networks (GANs) have been successfully used to the same end (Antoniou et al., 2017; Perez and Wang, 2017) as well as models that learn distributions over image transformations (Hauberg et al., 2016).
# 3 Generating synthetic sources
Back-translation typically uses beam search (Sennrich et al., 2016a) or just greedy search to generate synthetic (Lample et al., 2018a,b) Both are approximate al- source sentences. gorithms to identify the maximum a-posteriori (MAP) output, i.e. the sentence with the largest estimated probability given an input. Beam is gen- erally successful in ï¬nding high probability out- puts (Ott et al., 2018a).
However, MAP prediction can lead to less rich translations (Ott et al., 2018a) since it always fa- vors the most likely alternative in case of ambi- guity. This is particularly problematic in tasks where there is a high level of uncertainty such as dialog (Serban et al., 2016) and story genera- tion (Fan et al., 2018). We argue that this is also problematic for a data augmentation scheme such as back-translation. Beam and greedy focus on the head of the model distribution which results in very regular synthetic source sentences that do not properly cover the true data distribution.
As alternative, we consider sampling from the model distribution as well as adding noise to beam search outputs. First, we explore unrestricted sam- pling which generates outputs that are very di- verse but sometimes highly unlikely. Second, we investigate sampling restricted to the most likely words (Graves, 2013; Ott et al., 2018a; Fan et al., 2018). At each time step, we select the k most re- likely tokens from the output distribution, normalize and then sample from this restricted set. This is a middle ground between MAP and unre-
stricted sampling.
As a third alternative, we apply noising to beam search out- Lample et al. Adding noise to input sentences has puts. the autoencoder se- for been very beneï¬cial tups of (Lample et al., 2018a; Hill et al., 2016) which is inspired by denoising autoencoders (Vincent et al., 2008). In particular, we transform source sentences with three types of noise: deleting words with probability 0.1, replacing words by a ï¬ller token with probability 0.1, and swapping words which is implemented as a random permutation over the tokens, drawn from the uniform distribution but restricted to swapping words no further than three positions apart.
# 4 Experimental setup
# 4.1 Datasets
The majority of our experiments are based on data from the WMTâ18 English-German news transla- tion task. We train on all available bitext exclud- ing the ParaCrawl corpus and remove sentences longer than 250 words as well as sentence-pairs with a source/target length ratio exceeding 1.5. This results in 5.18M sentence pairs. For the back- translation experiments we use the German mono- lingual newscrawl data distributed with WMTâ18 comprising 226M sentences after removing dupli- cates. We tokenize all data with the Moses tok- enizer (Koehn et al., 2007) and learn a joint source and target Byte-Pair-Encoding (BPE; Sennrich et al., 2016) with 35K types. We develop on new- stest2012 and report ï¬nal results on newstest2013- 2017; additionally we consider a held-out set from the training data of 52K sentence-pairs.
We also experiment on the larger WMTâ14 English-French task which we ï¬lter in the same way as WMTâ18 English-German. This results in 35.7M sentence-pairs for training and we learn a joint BPE vocabulary of 44K types. As monolin- gual data we use newscrawl2010-2014, compris- ing 31M sentences after language identiï¬cation (Lui and Baldwin, 2012). We use newstest2012 as development set and report ï¬nal results on newstest2013-2015.
The majority of results in this paper are in terms of case-sensitive tokenized BLEU (Papineni et al., 2002) but we also report test accuracy with de- tokenized BLEU using sacreBLEU (Post, 2018).
# 4.2 Model and hyperparameters
We re-implemented the Transformer model in py- torch using the fairseq toolkit.1 All experiments are based on the Big Transformer architecture with 6 blocks in the encoder and decoder. We use the same hyper-parameters for all experiments, i.e., word representations of size 1024, feed-forward layers with inner dimension 4096. Dropout is set to 0.3 for En-De and 0.1 for En-Fr, we use 16 attention heads, and we average the check- points of the last ten epochs. Models are opti- mized with Adam (Kingma and Ba, 2015) using β1 = 0.9, β2 = 0.98, and Ç« = 1e â 8 and we use the same learning rate schedule as Vaswani et al. (2017). All models use label smoothing with a uniform prior distribution over the vocabulary Ç« = 0.1 (Szegedy et al., 2015; Pereyra et al., 2017). We run experiments on DGX-1 machines with 8 Nvidia V100 GPUs and machines are intercon- nected by Inï¬niband. Experiments are run on 16 machines and we perform 30K synchronous up- dates. We also use the NCCL2 library and the torch distributed package for inter-GPU communi- cation. We train models with 16-bit ï¬oating point operations, following Ott et al. (2018b). For ï¬nal evaluation, we generate translations with a beam of size 5 and with no length penalty.
# 5 Results
Our evaluation ï¬rst compares the accuracy of back-translation generation methods (§5.1) and analyzes the results (§5.2). Next, we simulate a low-resource setup to experiment further with dif- ferent generation methods (§5.3). We also com- pare synthetic bitext to genuine parallel data and examine domain effects arising in back-translation (§5.4). We also measure the effect of upsampling bitext during training (§5.5). Finally, we scale to a very large setup of up to 226M monolingual sen- tences and compare to previous research (§5.6).
# 5.1 Synthetic data generation methods
We ï¬rst investigate different methods to gener- ate synthetic source translations given a back- translation model, i.e., a model trained in the re- verse language direction (Section 3). We con- sider two types of MAP prediction: greedy search (greedy) and beam search with beam size 5 (beam). Non-MAP methods include unrestricted
# 1Code available at
https://github.com/pytorch/fairseq
25.5 ) 2 1 0 2 t s e t s w e n ( 25 24.5 U E L B 24 23.5 greedy top10 beam+noise beam sampling 5M 8M 11M Total training data 17M 29M
Figure 1: Accuracy of models trained on dif- ferent amounts of back-translated data obtained with greedy search, beam search (k = 5), ran- domly sampling from the model distribution, re- stricting sampling over the ten most likely words (top10), and by adding noise to the beam outputs (beam+noise). Results based on newstest2012 of WMT English-German translation.
sampling from the model distribution (sampling), restricting sampling to the k highest scoring out- puts at every time step with k = 10 (top10) as well as adding noise to the beam outputs (beam+noise). Restricted sampling is a middle-ground between beam search and unrestricted sampling, it is less likely to pick very low scoring outputs but still preserves some randomness. Preliminary experi- ments with top5, top20, top50 gave similar results to top10.
We also vary the amount of synthetic data and perform 30K updates during training for the bi- text only, 50K updates when adding 3M synthetic sentences, 75K updates for 6M and 12M sen- tences and 100K updates for 24M sentences. For each setting, this corresponds to enough updates to reach convergence in terms of held-out loss. In our 128 GPU setup, training of the ï¬nal models takes 3h 20min for the bitext only model, 7h 30min for 6M and 12M synthetic sentences, and 10h 15min for 24M sentences. During training we also sam- ple the bitext more frequently than the synthetic data and we analyze the effect of this in more de- tail in §5.5.
Figure 1 shows that sampling and beam+noise outperform the MAP methods (pure beam search and greedy) by 0.8-1.1 BLEU. Sampling and beam+noise improve over bitext-only (5M) by be-
news2013 news2014 news2015 news2016 news2017 Average bitext 27.84 30.88 31.82 34.98 29.46 31.00 + beam + greedy + top10 + sampling + beam+noise 27.82 27.67 28.25 28.81 29.28 32.33 32.55 33.94 34.46 33.53 32.20 32.57 34.00 34.87 33.79 35.43 35.74 36.45 37.08 37.89 31.11 31.25 32.08 32.35 32.66 31.78 31.96 32.94 33.51 33.43
Table 1: Tokenized BLEU on various test sets of WMT English-German when adding 24M synthetic sentence pairs obtained by various generation methods to a 5.2M sentence-pair bitext (cf. Figure 1).
6 y t i x e l p r e p 5 4 greedy top10 beam+noise beam sampling bitext g n i n i a r T 3 2 1 20 40 60 80 epoch
100
Perplexity human data beam sampling top10 beam+noise 75.34 72.42 500.17 87.15 2823.73
Table 2: Perplexity of source data as assigned by a language model (5-gram KneserâNey). Data gen- erated by beam search is most predictable.
Figure 2: Training perplexity (PPL) per epoch for different synthetic data. We separately report PPL on the synthetic data and the bitext. Bitext PPL is averaged over all generation methods.
to predict translations which may help learning, similar to denoising autoencoders (Vincent et al., 2008). Sampling is known to better approximate the data distribution which is richer than the argmax model outputs (Ott et al., 2018a). Therefore, sampling is also more likely to provide a richer training signal than argmax sequences.
tween 1.7-2 BLEU in the largest data setting. Restricted sampling (top10) performs better than beam and greedy but is not as effective as unre- stricted sampling (sampling) or beam+noise.
Table 1 shows results on a wider range of Sampling and test sets (newstest2013-2017). beam+noise perform roughly equal and we adopt sampling for the remaining experiments.
# 5.2 Analysis of generation methods
The previous experiment showed that synthetic source sentences generated via sampling and beam with noise perform signiï¬cantly better than those obtained by pure MAP methods. Why is this?
Beam search focuses on very likely outputs which reduces the diversity and richness of the generated source translations. Adding noise to beam outputs and sampling do not have this problem: Noisy source sentences make it harder
To get a better sense of the training signal pro- vided by each method, we compare the loss on the training data for each method. We report the cross entropy loss averaged over all tokens and separate the loss over the synthetic data and the real bitext data. Speciï¬cally, we choose the setup with 24M synthetic sentences. At the end of each epoch we measure the loss over 500K sentence pairs sub-sampled from the synthetic data as well as an equally sized subset of the bitext. For each generation method we choose the same sentences except for the bitext which is disjoint from the syn- thetic data. This means that losses over the syn- thetic data are measured over the same target to- kens because the generation methods only differ in the source sentences. We found it helpful to up- sample the frequency with which we observe the bitext compared to the synthetic data (§5.5) but we do not upsample for this experiment to keep condi-
Diese gegenstzlichen Auffassungen von Fairness liegen nicht nur der politischen Debatte zugrunde. These competing principles of fairness underlie not only the political debate. These conï¬icting interpretations of fairness are not solely based on the political debate. Mr President, these contradictory interpretations of fairness are not based solely on the political debate. Those conï¬icting interpretations of fairness are not solely at the heart of the political debate. conï¬icting BLANK interpretations BLANK are of not BLANK based on the political debate.
Table 3: Example where sampling produces inadequate outputs. âMr President,â is not in the source. BLANK means that a word has been replaced by a ï¬ller token.
tions as similar as possible. We assume that when the training loss is low, then the model can easily ï¬t the training data without extracting much learn- ing signal compared to data which is harder to ï¬t. Figure 2 shows that synthetic data based on greedy or beam is much easier to ï¬t compared to data from sampling, top10, beam+noise and the bitext. In fact, the perplexity on beam data falls below 2 after only 5 epochs. Except for sampling, we ï¬nd that the perplexity on the training data is somewhat correlated to the end-model accuracy (cf. Figure 1) and that all methods except sam- pling have a lower loss than real bitext.
These results suggest that synthetic data ob- tained with argmax inference does not provide as rich a training signal as sampling or adding noise. We conjecture that the regularity of syn- thetic data obtained with argmax inference is not optimal. Sampling and noised argmax both expose the model to a wider range of source sentences which makes the model more robust to reorder- ing and substitutions that happen naturally, even if the model of reordering and substitution through noising is not very realistic.
Next we analyze the richness of synthetic out- puts and train a language model on real human text and score synthetic source sentences generated by beam search, sampling, top10 and beam+noise. We hypothesize that data that is very regular should be more predictable by the language model and therefore receive low perplexity. We elimi- nate a possible domain mismatch effect between the language model training data and the synthetic data by splitting the parallel corpus into three non- overlapping parts:
1. On 640K sentences pairs, we train a back-
# translation model,
2. On 4.1M sentence pairs, we take the source side and train a 5-gram Kneser-Ney language model (Heaï¬eld et al., 2013),
3. On the remaining 450K sentences, we apply the back-translation system using beam, sam- pling and top10 generation.
For the last set, we have genuine source sen- tences as well as synthetic sources from different generation techniques. We report the perplexity of our language model on all versions of the source data in Table 2. The results show that beam out- puts receive higher probability by the language model compared to sampling, beam+noise and real source sentences. This indicates that beam search outputs are not as rich as sampling outputs or beam+noise. This lack of variability probably explains in part why back-translations from pure beam search provide a weaker training signal than alternatives.
Closer inspection of the synthetic sources (Ta- ble 3) reveals that sampled and noised beam out- puts are sometimes not very adequate, much more so than MAP outputs, e.g., sampling often in- troduces target words which have no counterpart in the source. This happens because sampling sometimes picks highly unlikely outputs which are harder to ï¬t (cf. Figure 2).
# 5.3 Low resource vs. high resource setup
The experiments so far are based on a setup with a large bilingual corpus. However, in resource poor settings the back-translation model is of much lower quality. Are non-MAP methods still more
26
24 ) 2 1 0 2 t s e t s w e n ( U E L B 22 20 18 16 14 beam 80K sampling 80K beam 640K sampling 640K beam 5M sampling 5M 80 K 160 K 320 K 640 K 1.2 M 2.6 M 5 M 8 M 11 M 17 M 29 M
Total training data
Figure 3: BLEU when adding synthetic data from beam and sampling to bitext systems with 80K, 640K and 5M sentence pairs.
effective in such a setup? To answer this ques- tion, we simulate such setups by sub-sampling the training data to either 80K sentence-pairs or 640K sentence-pairs and then add synthetic data from sampling and beam search. We compare these smaller setups to our original 5.2M sen- tence bitext conï¬guration. The accuracy of the German-English back-translation systems steadily increases with more training data: On new- stest2012 we measure 13.5 BLEU for 80K bitext, 24.3 BLEU for 640K and 28.3 BLEU for 5M.
Figure 3 shows that sampling is more effective than beam for larger setups (640K and 5.2M bi- texts) while the opposite is true for resource poor settings (80K bitext). This is likely because the back-translations in the 80K setup are of very poor quality and the noise of sampling and beam+noise is too detrimental for this brittle low-resource set- ting. When the setup is very small the very regu- lar MAP outputs still provide useful training signal while the noise from sampling becomes harmful.
# 5.4 Domain of synthetic data
Next, we turn to two different questions: How does real human bitext compare to synthetic data in terms of ï¬nal model accuracy? And how does the domain of the monolingual data affect results? To answer these questions, we subsample 640K sentence-pairs of the bitext and train a back- translation system on this set. To train a forward model, we consider three alternative types of data
to add to this 640K training set. We either add:
⢠the remaining parallel data (bitext),
⢠the back-translated target side of the remain- ing parallel data (BT-bitext),
⢠back-translated newscrawl data (BT-news).
The back-translated data is generated via sam- pling. This setup allows us to compare synthetic data to genuine data since BT-bitext and bitext It also allows us to share the same target side. estimate the value of BT data for domain adap- tation since the newscrawl corpus (BT-news) is pure news whereas the bitext is a mixture of eu- roparl and commoncrawl with only a small news- commentary portion. To assess domain adaptation effects, we measure accuracy on two held-out sets:
⢠newstest2012, i.e. pure newswire data.
⢠a held-out set of the WMT training data (valid-mixed), which is a mixture of eu- roparl, commoncrawl and the small news- commentary portion.
Figure 4 shows the results on both validation sets. Most strikingly, BT-news performs almost as well as bitext on newstest2012 (Figure 4a) and improves the baseline (640K) by 2.6 BLEU. BT- bitext improves by 2.2 BLEU, achieving 83% of the improvement with real bitext. This shows that synthetic data can be nearly as effective as real hu- man translated data when the domains match.
Figure 4b shows the accuracy on valid-mixed, the mixed domain valid set. The accuracy of BT- news is not as good as before since the domain of the BT data and the test set do not match. How- ever, BT-news still improves the baseline by up to 1.2 BLEU. On the other hand, BT-bitext matches the domain of valid-mixed and improves by 2.7 BLEU. This trails the real bitext by only 1.3 BLEU and corresponds to 67% of the gain achieved with real human bitext.
In summary, synthetic data performs remark- ably well, coming close to the improvements achieved with real bitext for newswire test data, or trailing real bitext by only 1.3 BLEU for valid- mixed. In absence of a large parallel corpus for news, back-translation therefore offers a simple, yet very effective domain adaptation technique.
33
23 32 22 21 U E L B 31 30 20 bitext BT-bitext BT-news 29 bitext BT-bitext BT-news 28 640K 1.28M 2.56M 5.19M 640K 1.28M 2.56M 5.19M Amount of data Amount of data (a) newstest2012 (b) valid-mixed
# U E L B
Figure 4: Accuracy on (a) newstest2012 and (b) a mixed domain valid set when growing a 640K bitext corpus with (i) real parallel data (bitext), (ii) a back-translated version of the target side of the bitext (BT-bitext), (iii) or back-translated newscrawl data (BT-news).
# 5.5 Upsampling the bitext
We found it beneï¬cial to adjust the ratio of bitext to synthetic data observed during training. In par- ticular, we tuned the rate at which we sample data from the bitext compared to synthetic data. For example, in a setup of 5M bitext sentences and 10M synthetic sentences, an upsampling rate of 2 means that we double the frequency at which we visit bitext, i.e. training batches contain on aver- age an equal amount of bitext and synthetic data as opposed to 1/3 bitext and 2/3 synthetic data.
Figure 5 shows the accuracy of various upsam- pling rates for different generation methods in a setup with 5M bitext sentences and 24M synthetic sentences. Beam and greedy beneï¬t a lot from higher rates which results in training more on the bitext data. This is likely because synthetic beam and greedy data does not provide as much training signal as the bitext which has more variation and is harder to ï¬t. On the other hand, sampling and beam+noise require no upsampling of the bitext, which is likely because the synthetic data is al- ready hard enough to ï¬t and thus provides a strong training signal (§5.2).
# 5.6 Large scale results
To conï¬rm our ï¬ndings we experiment on WMTâ14 English-French translation where we show results on newstest2013-2015. We augment the large bitext of 35.7M sentence pairs by 31M newscrawl sentences generated by sampling. To
) 2 1 0 2 t s e t s w e n ( U E L B 25 24 23 22 greedy beam top10 sampling beam+noise 1 2 4 bitext upsample rate 8
Figure 5: Accuracy when changing the rate at which the bitext is upsampled during training. Rates larger than one mean that the bitext is ob- served more often than actually present in the combined bitext and synthetic training corpus.
train this system we perform 300K training up- dates in 27h 40min on 128 GPUs; we do not up- sample the bitext for this experiment. Table 4 shows tokenized BLEU and Table 5 shows deto- kenized BLEU.2 To our knowledge, our baseline is the best reported result in the literature for new- stest2014, and back-translation further improves upon this by 2.6 BLEU (tokenized).
2sacreBLEU signatures: BLEU+case.mixed+lang.en- fr+numrefs.1+smooth.exp+test.SET+tok.13a+version.1.2.7
news13 news14 news15 bitext +sampling 36.97 37.85 42.90 45.60 39.92 43.95
Table 4: Tokenized BLEU on various test sets for WMT English-French translation.
news13 news14 news15 bitext +sampling 35.30 36.13 41.03 43.84 38.31 40.91
Table 5: De-tokenized BLEU (sacreBLEU) on var- ious test sets for WMT English-French.
Finally, for WMT English-German we train on all 226M available monolingual training sen- tences and perform 250K updates in 22.5 hours on 128 GPUs. We upsample the bitext with a rate of 16 so that we observe every bitext sentence 16 times more often than each monolingual sen- tence. This results in a new state of the art of 35 BLEU on newstest2014 by using only WMT benchmark data. For comparison, DeepL, a com- mercial translation engine relying on high qual- ity bilingual training data, achieves 33.3 tokenized BLEU .4 Table 6 summarizes our results and com- pares to other work in the literature. This shows that back-translation with sampling can result in high-quality translation models based on bench- mark data only.
# 6 Submission to WMTâ18
This section describes our entry to the WMTâ18 English-German news translation task which was ranked #1 in the human evaluation (Bojar et al., 2018). Our entry is based on the WMT English- German models described in the previous section (§5.6). In particular, we ensembled six back- translation models trained on all available bitext plus 226M newscrawl sentences or 5.8B German tokens. Four models used bitext upsample ratio 16, one model upsample ratio 32, and another one upsample ratio 8. Upsample ratios differed be- cause we reused models previously trained to tune the upsample ratio. We did not use checkpoint av- eraging. More details of our setup and data are
with SET â {wmt13, wmt14/full, wmt15}
3sacreBLEU signatures: BLEU+case.mixed+lang.en- LANG+numrefs.1+smooth.exp+test.wmt14/full+ tok.13a+version.1.2.7 with LANG â {de,fr}
EnâDe EnâFr 40.5 41.0 41.4 41.5 a. Gehring et al. (2017) b. Vaswani et al. (2017) c. Ahmed et al. (2017) d. Shaw et al. (2018) 25.2 28.4 28.9 29.2 DeepL Our result detok. sacreBLEU3 33.3 35.0 33.8 45.9 45.6 43.8
Table 6: BLEU on newstest2014 for WMT English-German (EnâDe) and English-French (EnâFr). The ï¬rst four results use only WMT bitext (WMTâ14, except for b, c, d in EnâDe which train on WMTâ16). DeepL uses propri- etary high-quality bitext and our result relies on back-translation with 226M newscrawl sentences for EnâDe and 31M for EnâFr. We also show deto- kenized BLEU (SacreBLEU).
news17 news18 baseline +BT +ensemble +ï¬lter copies 29.36 32.66 33.31 33.35 42.38 44.94 46.39 46.53 % of source copies 0.56% 0.53%
Table 7: De-tokenized case-insensitive sacreBLEU on WMT English-German newstest17 and new- stest18.
described in §4.
Ott et al. (2018a) showed that beam search sometimes outputs source copies rather than tar- get language translations. We replaced source copies by the output of a model trained only on the news-commentary portion of the WMTâ18 task (nc model). This model produced far fewer copies since this dataset is less noisy. Outputs are deemed to be a source copy if the Jaccard similarity be- tween the source and the target unigrams exceeds 0.5. About 0.5% of outputs are identiï¬ed as source copies. We used newstest17 as a development set to ï¬ne tune ensemble size and model parameters. Table 7 summarizes the effect of back-translation data, ensembling and source copy ï¬ltering.5
# 4https://www.deepl.com/press.html 5BLEU+case.lc+lang.en-
de+numrefs.1+smooth.exp+test.SET+tok.13a+version.1.2.11 with SET â {wmt17, wmt18}
# 7 Conclusions and future work
Back-translation is a very effective data augmen- tation technique for neural machine translation. Generating synthetic sources by sampling or by adding noise to beam outputs leads to higher ac- curacy than argmax inference which is typically used. In particular, sampling and noised beam outperforms pure beam by 1.7 BLEU on average on newstest2013-2017 for WMT English-German translation. Both methods provide a richer train- ing signal for all but resource poor setups. We also ï¬nd that synthetic data can achieve up to 83% of the performance attainable with real bitext. Fi- nally, we achieve a new state of the art result of 35 BLEU on the WMTâ14 English-German test set by using publicly available benchmark data only. In future work, we would like to investigate an end-to-end approach where the back-translation model is optimized to output synthetic sources that are most helpful to the ï¬nal forward model.
# References
Karim Ahmed, Nitish Shirish Keskar, and Richard Socher. 2017. Weighted transformer network for machine translation. arxiv, 1711.02132.
Antreas Antoniou, Amos J. Storkey, and Harrison Ed- wards. 2017. Data augmentation generative adver- sarial networks. arXiv, abs/1711.04340.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In International Con- ference on Learning Representations (ICLR).
Nicola Bertoldi and Marcello Federico. 2009. Domain adaptation for statistical machine translation with monolingual resources. In Workshop on Statistical Machine Translation (WMT).
Improving translation model by monolingual data. In Workshop on Statistical Machine Translation (WMT).
OndËrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, and Christof Monz. 2018. Find- ings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Pa- pers, Brussels, Belgium. Association for Computa- tional Linguistics.
Thorsten Brants, Ashok C. Popat, Peng Xu, Franz Josef Och, and Jeffrey Dean. 2007. Large language mod- els in machine translation. In Conference on Natural Language Learning (CoNLL).
Peter F. Brown, John Cocke, Stephen Della Pietra, Vin- cent J. Della Pietra, Frederick Jelinek, John D. Laf- ferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Com- putational Linguistics, 16:79â85.
Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semi- supervised learning for neural machine translation. In Conference of the Association for Computational Linguistics (ACL).
Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Ryan Cotterell and Julia Kreutzer. 2018. Explain- ing and generalizing back-translation through wake- sleep. arXiv preprint arXiv:1806.04402.
Anna Currey, Antonio Valerio Miceli Barone, and Ken- neth Heaï¬eld. 2017. Copied Monolingual Data Im- proves Low-Resource Neural Machine Translation. In Proc. of WMT.
Tobias Domhan and Felix Hieber. 2017. Using target- side monolingual data for neural machine transla- tion through multi-task learning. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Angela Fan, Yann Dauphin, and Mike Lewis. 2018. In Confer- Hierarchical neural story generation. ence of the Association for Computational Linguis- tics (ACL).
Orhan Firat, Kyunghyun Cho, and Yoshua Ben- gio. 2016a. Multi-way, multilingual neural ma- chine translation with a shared attention mecha- nism. In Conference of the North American Chap- ter of the Association for Computational Linguistics (NAACL).
Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman-Vural, and Kyunghyun Cho. 2016b. Zero-resource translation with multi-lingual neu- In Conference on Em- ral machine translation. pirical Methods in Natural Language Processing (EMNLP).
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional In International sequence to sequence learning. Conference of Machine Learning (ICML).
Alex Graves. 2013. Generating sequences with recur- rent neural networks. arXiv, 1308.0850.
Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O. K. Li. 2018. Universal neural machine transla- tion for extremely low resource languages. arXiv, 1802.05368.
Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On us- ing monolingual corpora in neural machine transla- tion. arXiv, 1503.03535.
Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, and Yoshua Bengio. 2017. On integrating a language model into neural machine translation. Computer Speech & Language, 45:137â148.
Thanh-Le Ha, Jan Niehues, and Alexander H. Waibel. 2016. Toward multilingual neural machine trans- lation with universal encoder and decoder. arXiv, 1611.04798.
Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Feder- mann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, et al. 2018. Achieving hu- man parity on automatic chinese to english news translation. arXiv, 1803.05567.
Soren Hauberg, Oren Freifeld, Anders Boesen Lindbo Larsen, John W. Fisher, and Lars Kai Hansen. 2016. Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmenta- tion. In AISTATS.
Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016a. Dual learning for machine translation. In Conference on Advances in Neural Information Processing Systems (NIPS).
Wei He, Zhongjun He, Hua Wu, and Haifeng Wang. 2016b. Improved neural machine translation with smt features. In Conference of the Association for the Advancement of Artiï¬cial Intelligence (AAAI), pages 151â157.
Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable Modiï¬ed Kneser-Ney Language Model Estimation. In Con- ference of the Association for Computational Lin- guistics (ACL).
Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences In Conference of the North from unlabelled data. American Chapter of the Association for Computa- tional Linguistics (NAACL).
Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back- translation for neural machine translation. In Pro- ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18â24.
Kenji Imamura, Atsushi Fujita, and Eiichiro Sumita. 2018. Enhancement of encoder and attention using target monolingual corpora in neural machine trans- lation. In Proceedings of the 2nd Workshop on Neu- ral Machine Translation and Generation, pages 55â 63.
Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda B. Vi´egas, Martin Wattenberg, Gre- gory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Googleâs multilingual neural machine transla- tion system: Enabling zero-shot translation. Trans- actions of the Association for Computational Lin- guistics (TACL), 5:339â351.
Lukasz Kaiser, Aidan N. Gomez, and Franc¸ois Chollet. 2017. Depthwise separable convolutions for neural machine translation. CoRR, abs/1706.03059.
Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, A¨aron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural machine translation in linear time. CoRR, abs/1610.10099.
Alina Karakanta, Jon Dehdari, and Josef van Genabith. 2017. Neural machine translation for low-resource languages without parallel corpora. Machine Trans- lation, pages 1â23.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: In Inter- A Method for Stochastic Optimization. national Conference on Learning Representations (ICLR).
Philipp Koehn. 2010. Statistical machine translation. Cambridge University Press.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL Demo Session.
Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics (NAACL).
Patrik Lambert, Holger Schwenk, Christophe Ser- van, and Sadaf Abdul-Rauf. 2011. Investigations on translation model adaptation using monolingual data. In Workshop on Statistical Machine Transla- tion (WMT).
Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and MarcâAurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Represen- tations (ICLR).
Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and MarcâAurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine trans- lation. arXiv, 1803.05567.
Marco Lui and Timothy Baldwin. 2012. langid. py: An In Pro- off-the-shelf language identiï¬cation tool. ceedings of the ACL 2012 system demonstrations, pages 25â30. Association for Computational Lin- guistics.
Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Xing Niu, Michael Denkowski, and Marine Carpuat. Bi-directional neural machine transla- arXiv preprint 2018. tion with synthetic parallel data. arXiv:1805.11213.
and MarcâAurelio Ranzato. 2018a. Analyzing uncer- In Proceed- tainty in neural machine translation. ings of the 35th International Conference on Ma- chine Learning, volume 80, pages 3956â3965.
Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018b. Scaling neural machine trans- lation. In Proceedings of the Third Conference on Machine Translation: Research Papers.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic In Conference evaluation of machine translation. of the Association for Computational Linguistics (ACL).
Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive sum- marization. In International Conference on Learn- ing Representations (ICLR).
Jan Chorowski, Lukasz Kaiser, and Geoffrey E. Hinton. 2017. Reg- ularizing neural networks by penalizing conï¬dent output distributions. In International Conference on Learning Representations (ICLR) Workshop.
Luis Perez and Jason Wang. 2017. The effectiveness of data augmentation in image classiï¬cation using deep learning. arxiv, 1712.04621.
Alberto Poncelas, Dimitar Sht. Shterionov, Andy Way, Gideon Maillette de Buy Wenniger, and Peyman Passban. 2018. Investigating backtranslation in neu- ral machine translation. arXiv, 1804.06189.
Matt Post. 2018. A call for clarity in reporting bleu scores. arXiv, 1804.08771.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. Conference of the Asso- ciation for Computational Linguistics (ACL).
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Conference of the Associa- tion for Computational Linguistics (ACL).
Iulian Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Build- ing end-to-end dialogue systems using generative hi- erarchical neural network models. In Conference of the Association for the Advancement of Artiï¬cial In- telligence (AAAI).
Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position represen- tations. In Proc. of NAACL.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- In Conference on Advances in Neural In- works. formation Processing Systems (NIPS).
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2015. Re- thinking the Inception Architecture for Computer Vision. arXiv preprint arXiv:1512.00567.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Conference on Advances in Neural In- formation Processing Systems (NIPS).
, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoen- In International Conference on Machine coders. Learning (ICML).
Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, and Tie-Yan Liu. 2017. Dual supervised learn- ing. In International Conference on Machine Learn- ing (ICML).
Jiajun Zhang and Chengqing Zong. 2016. Exploit- ing source-side monolingual data in neural machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP). | {
"id": "1512.00567"
} |
1808.08437 | Meta-Learning for Low-Resource Neural Machine Translation | In this paper, we propose to extend the recently introduced model-agnostic
meta-learning algorithm (MAML) for low-resource neural machine translation
(NMT). We frame low-resource translation as a meta-learning problem, and we
learn to adapt to low-resource languages based on multilingual high-resource
language tasks. We use the universal lexical
representation~\citep{gu2018universal} to overcome the input-output mismatch
across different languages. We evaluate the proposed meta-learning strategy
using eighteen European languages (Bg, Cs, Da, De, El, Es, Et, Fr, Hu, It, Lt,
Nl, Pl, Pt, Sk, Sl, Sv and Ru) as source tasks and five diverse languages (Ro,
Lv, Fi, Tr and Ko) as target tasks. We show that the proposed approach
significantly outperforms the multilingual, transfer learning based
approach~\citep{zoph2016transfer} and enables us to train a competitive NMT
system with only a fraction of training examples. For instance, the proposed
approach can achieve as high as 22.04 BLEU on Romanian-English WMT'16 by seeing
only 16,000 translated words (~600 parallel sentences). | http://arxiv.org/pdf/1808.08437 | Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, Victor O. K. Li | cs.CL, cs.LG | Accepted as a full paper at EMNLP 2018 | null | cs.CL | 20180825 | 20180825 | 8 1 0 2
g u A 5 2 ] L C . s c [
1 v 7 3 4 8 0 . 8 0 8 1 : v i X r a
# Meta-Learning for Low-Resource Neural Machine Translation Jiatao Gu*â , Yong Wang*â , Yun Chenâ , Kyunghyun Choâ¡ and Victor O.K. Liâ
â The University of Hong Kong â¡New York University, CIFAR Azrieli Global Scholar â {jiataogu, wangyong, vli}@eee.hku.hk â yun.chencreek@gmail.com â¡kyunghyun.cho@nyu.edu
# Abstract
In this paper, we propose to extend the recently introduced model-agnostic meta-learning al- gorithm (MAML, Finn et al., 2017) for low- resource neural machine translation (NMT). We frame low-resource translation as a meta- learning problem, and we learn to adapt to low-resource languages based on multilingual high-resource language tasks. We use the uni- versal lexical representation (Gu et al., 2018b) to overcome the input-output mismatch across different languages. We evaluate the proposed meta-learning strategy using eighteen Euro- pean languages (Bg, Cs, Da, De, El, Es, Et, Fr, Hu, It, Lt, Nl, Pl, Pt, Sk, Sl, Sv and Ru) as source tasks and ï¬ve diverse languages (Ro, Lv, Fi, Tr and Ko) as target tasks. We show that the proposed approach signiï¬cantly outper- forms the multilingual, transfer learning based approach (Zoph et al., 2016) and enables us to train a competitive NMT system with only a fraction of training examples. For instance, the proposed approach can achieve as high as 22.04 BLEU on Romanian-English WMTâ16 by seeing only 16,000 translated words (â¼ 600 parallel sentences).
# Introduction
Despite the massive success brought by neural ma- chine translation (NMT, Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), it has been noticed that the vanilla NMT often lags behind conventional machine translation systems, such as statistical phrase-based translation sys- tems (PBMT, Koehn et al., 2003), for low-resource language pairs (see, e.g., Koehn and Knowles, 2017). In the past few years, various approaches have been proposed to address this issue. The ï¬rst attempts at tackling this problem exploited the availability of monolingual corpora (Gulcehre
et al., 2015; Sennrich et al., 2015; Zhang and Zong, 2016). It was later followed by approaches based on multilingual translation, in which the goal was to exploit knowledge from high-resource language pairs by training a single NMT system on a mix of high-resource and low-resource lan- guage pairs (Firat et al., 2016a,b; Lee et al., 2016; Johnson et al., 2016; Ha et al., 2016b). Its variant, transfer learning, was also proposed by Zoph et al. (2016), in which an NMT system is pretrained on a high-resource language pair before being ï¬ne- tuned on a target low-resource language pair.
In this paper, we follow up on these latest ap- proaches based on multilingual NMT and propose a meta-learning algorithm for low-resource neural machine translation. We start by arguing that the recently proposed model-agnostic meta-learning algorithm (MAML, Finn et al., 2017) could be ap- plied to low-resource machine translation by view- ing language pairs as separate tasks. This view en- ables us to use MAML to ï¬nd the initialization of model parameters that facilitate fast adaptation for a new language pair with a minimal amount of training examples (§3). Furthermore, the vanilla MAML however cannot handle tasks with mis- matched input and output. We overcome this limi- tation by incorporating the universal lexical repre- sentation (Gu et al., 2018b) and adapting it for the meta-learning scenario (§3.3).
We extensively evaluate the effectiveness and generalizing ability of the proposed meta-learning algorithm on low-resource neural machine trans- lation. We utilize 17 languages from Europarl and Russian from WMT as the source tasks and test the meta-learned parameter initialization against ï¬ve target languages (Ro, Lv, Fi, Tr and Ko), in all cases translating to English. Our experiments using only up to 160k tokens in each of the tar- get task reveal that the proposed meta-learning approach outperforms the multilingual translation
* Equal contribution.
approach across all the target language pairs, and the gap grows as the number of training examples decreases.
# 2 Background
Neural Machine Translation (NMT) Given a source sentence X = {21,..., 77}, a neural ma- chine translation model factors the distribution over possible output sentences Y = {y1,..., yr} into a chain of conditional probabilities with a left- to-right causal structure:
T+1 Pp(Y|X;0) = ]] plyelyou-a.2ur39), t=1
where special tokens yo ((bos)) and y+ ((e0s)) are used to represent the beginning and the end of a target sentence. These conditional probabilities are parameterized using a neural network. Typi- cally, an encoder-decoder architecture et a 5 2014} |Cho et a 5 2014} Bahdanau et al y| 2015) with a RNN-based decoder is used. More recently, architectures without any recurrent struc- tres 2OTT,[Vaswant et al| 2017) have been proposed and shown to speed up train- ing while achieving state-of-the-art performance.
Low Resource Translation NMT is known to easily over-ï¬t and result in an inferior performance when the training data is limited (Koehn and Knowles, 2017). In general, there are two ways for handling the problem of low resource translation: (1) utilizing the resource of unlabeled monolin- gual data, and (2) sharing the knowledge between low- and high-resource language pairs. Many re- search efforts have been spent on incorporating the monolingual corpora into machine translation, such as multi-task learning (Gulcehre et al., 2015; Zhang and Zong, 2016), back-translation (Sen- nrich et al., 2015), dual learning (He et al., 2016) and unsupervised machine translation with mono- lingual corpora only for both sides (Artetxe et al., 2017b; Lample et al., 2017; Yang et al., 2018).
For the second approach, prior researches have worked on methods to exploit the knowledge of auxiliary translations, or even auxiliary tasks. For instance, Cheng et al. (2016); Chen et al. (2017); Lee et al. (2017); Chen et al. (2018) investigate the use of a pivot to build a translation path be- tween two languages even without any directed re- source. The pivot can be a third language or even an image in multimodal domains. When pivots are
not easy to obtain, Firat et al. (2016a); Lee et al. (2016); Johnson et al. (2016) have shown that the structure of NMT is suitable for multilingual ma- chine translation. Gu et al. (2018b) also showed that such a multilingual NMT system could im- prove the performance of low resource translation by using a universal lexical representation to share embedding information across languages.
All the previous work for multilingual NMT as- sume the joint training of multiple high-resource languages naturally results in a universal space (for both the input representation and the model) which, however, is not necessarily true, especially for very low resource cases.
Meta Learning In the machine learning com- munity, meta-learning, or learning-to-learn, has recently received interests. Meta-learning tries to solve the problem of âfast adaptation on new train- ing data.â One of the most successful applications of meta-learning has been on few-shot (or one- shot) learning (Lake et al., 2015), where a neural network is trained to readily learn to classify in- puts based on only one or a few training examples. There are two categories of meta-learning:
1. learning a meta-policy for updating model parameters (see, e.g., Andrychowicz et al., 2016; Ha et al., 2016a; Mishra et al., 2017)
2. learning a good parameter initialization for fast adaptation (see, e.g., Finn et al., 2017; Vinyals et al., 2016; Snell et al., 2017).
In this paper, we propose to use a meta-learning algorithm for low-resource neural machine trans- lation based on the second category. More speciï¬- cally, we extend the idea of model-agnostic meta- learning (MAML, Finn et al., 2017) in the multi- lingual scenario.
# 3 Meta Learning for Low-Resource Neural Machine Translation
The underlying idea of MAML is to use a set of source tasks {T}, tee TK} to find the initializa- tion of parameters 6° from which learning a tar- get task T° would require only a small number of training examples. In the context of machine trans- lation, this amounts to using many high-resource language pairs to find good initial parameters and training a new translation model on a low-resource language starting from the found initial parame-
Fast Adaptation Meta Learning Meta-Test { | 1 poccccccc errr MetaNMT |« Emb )« X_train |. | t intaize an | Y_train | â i query Universal Lexical Representation Translation Task Generator ââ> Forward Pass ââ» Meta Gradient Pass ââ» Gradient Pass - - â - Parameter Tying
Figure 1: The graphical illustration of the training process of the proposed MetaNMT. For each episode, one task (language pair) is sampled for meta-learning. The boxes and arrows in blue are mainly involved in language-speciï¬c learning (§3.1), and those in purple in meta-learning (§3.2).
ters. This process can be understood as
θâ = Learn(T 0; MetaLearn(T 1, . . . , T K)).
Thus, in the low-resource scenario, ï¬nding a good initialization θ0 strongly correlates the ï¬nal per- formance of the resulting model.
That is, we meta-learn the initialization from aux- iliary tasks and continue to learn the target task. We refer the proposed meta-learning method for NMT to MetaNMT. See Fig. 1 for the overall il- lustration.
3.1 Learn: language-speciï¬c learning Given any initial parameters θ0 (which can be ei- ther random or meta-learned),
the prior distribution of the parameters of a de- sired NMT model can be deï¬ned as an isotropic Guassian:
θi ⼠N (θ0 i , 1/β),
3.2 MetaLearn We ï¬nd the initialization θ0 by repeatedly simulat- ing low-resource translation scenarios using auxil- iary, high-resource language pairs. Following Finn et al. (2017), we achieve this goal by deï¬ning the meta-objective function as
£(0) =E,Ep_,.pr 2) The rk S- log p(Y |X; Learn(D7;9)) | , [osse,
where 1/β is a variance. With this prior distri- bution, we formulate the language-speciï¬c learn- ing process Learn(DT ; θ0) as maximizing the log- posterior of the model parameters given data DT :
Learn(D7; 0°) = arg max LPT (0) â-; â 9 _ â p0i2 = arg max S> log p(Â¥|X, 4) â Bl] â 0°, (XY)eDr
where we assume p(X|θ) to be uniform. The ï¬rst term above corresponds to the maximum likeli- hood criterion often used for training a usual NMT system. The second term discourages the newly learned model from deviating too much from the initial parameters, alleviating the issue of over- ï¬tting when there is not enough training data. In practice, we solve the problem above by maximiz- ing the ï¬rst term with gradient-based optimization and early-stopping after only a few update steps.
where k ~ U({1,..., K}) refers to one meta- learning episode, and Dr, D, follow the uniform distribution over Tâs data.
We maximize the meta-objective function using stochastic approximation with gradient descent. For each episode, we uniformly sample one source task at random, T*. We then sample two subsets of training ex- amples independently from the chosen task, Dyi. and D/_,,. We use the former to simulate language- specific learning and the latter to evaluate its out- come. Assuming a single gradient step is taken only the with learning rate 7), the simulation is:
6}, = Learn(Dzx;0) = 0 â nVgLl?T* (0).
Once the simulation of learning is done, we evalu- ate the updated parameters ,, on Dâ,,,, The gra- dient computed from this evaluation, which we refer to as meta-gradient, is used to update the
wy (a) Transfer Learning (b) Multilingual Transfer Learning (c) Meta Learning
Figure 2: An intuitive il- lustration in which we use solid lines to repre- sent the learning of ini- tialization, and dashed lines to show the path of ï¬ne-tuning.
meta model θ. It is possible to aggregate multiple episodes of source tasks before updating θ:
Related Work: Multilingual Transfer Learning The proposed MetaNMT differs from the existing framework of multilingual translation (Lee et al., 2016; Johnson et al., 2016; Gu et al., 2018b) or transfer learning (Zoph et al., 2016). The latter can be thought of as solving the following problem:
0-0-1! SVoLl?r* (61), k
where 7â is the meta learning rate.
max £⢠(6) =E, S- log p(Y |X; 4) (X,Y)ED,
Unlike a usual learning scenario, the resulting model θ0 from this meta-learning procedure is not necessarily a good model on its own. It is however a good starting point for training a good model us- ing only a few steps of learning. In the context of machine translation, this procedure can be under- stood as ï¬nding the initialization of a neural ma- chine translation system that could quickly adapt to a new language pair by simulating such a fast adaptation scenario using many high-resource lan- guage pairs.
where Dk is the training set of the k-th task, or lan- guage pair. The target low-resource language pair could either be a part of joint training or be trained separately starting from the solution θ0 found from solving the above problem.
The major difference between the proposed MetaNMT and these multilingual transfer ap- proaches is that the latter do not consider how learning happens with the target, low-resource lan- guage pair. The former explicitly incorporates the learning process within the framework by simulat- ing it repeatedly in Eq. (2). As we will see later in the experiments, this results in a substantial gap in the ï¬nal performance on the low-resource task.
Meta-Gradient We use the following approxi- mation property
â(x + νv) â â(x) ν H(x)v â
to approximate the meta-gradient:1
Illustration In Fig. 2, we contrast transfer learn- ing, multilingual learning and meta-learning us- ing three source language pairs (Fr-En, Es-En and Pt-En) and two target pairs (Ro-En and Lv-En). Transfer learning trains an NMT system speciï¬- cally for a source language pair (Es-En) and ï¬ne- tunes the system for each target language pair (Ro- En, Lv-En). Multilingual learning often trains a single NMT system that can handle many different language pairs (Fr-En, Pt-En, Es-En), which may or may not include the target pairs (Ro-En, Lv- En). If not, it ï¬netunes the system for each target pair, similarly to transfer learning. Both of these however aim at directly solving the source tasks. On the other hand, meta-learning trains the NMT system to be useful for ï¬ne-tuning on various tasks including the source and target tasks. This is done by repeatedly simulating the learning process on
Vol? (6') = Vel? (0'\V6(0 â nVeLl?(0)) = Volâ (6) â nVoLâ (6) Ho(L?(0)) = VoL? (0") â : [vo0(0)| Vol? (8) 6 J
,
where ν is a small constant and
6 =0+ VL?â (6).
In practice, we ï¬nd that it is also possible to ignore the second-order term, ending up with the follow- ing simpliï¬ed update rule:
Vel? (0') & VL?â (6'). (3)
# 1We omit the subscript k for simplicity.
low-resource languages using many high-resource language pairs (Fr-En, Pt-En, Es-En).
# 3.3 Uniï¬ed Lexical Representation
I/O mismatch across language pairs One ma- jor challenge that limits applying meta-learning for low resource machine translation is that the ap- proach outlined above assumes the input and out- put spaces are shared across all the source and tar- get tasks. This, however, does not apply to ma- chine translation in general due to the vocabulary mismatch across different languages. In multilin- gual translation, this issue has been tackled by us- ing a vocabulary of sub-words (Sennrich et al., 2015) or characters (Lee et al., 2016) shared across multiple languages. This surface-level sharing is however limited, as it cannot be applied to lan- guages exhibiting distinct orthography (e.g., Indo- Euroepan languages vs. Korean.)
Universal Lexical Representation (ULR) We tackle this issue by dynamically building a vo- cabulary specific to each language using a key- value memory network (Miller et al.| [2016} |Gul- 2018), as was done successfully for low-resource machine translation recently by [Gul 2018b). We start with multilingual word em- bedding matrices Chuery ⬠RIV«lX¢ pretrained on large monolingual corpora, where V;, is the vo- cabulary of the k-th language. These embedding vectors can be obtained with small dictionaries of seed word pairs (Artetxe et al.|[2017a}[Smith et al.| 2017) or in a fully unsupervised manner (Zhang| et al.||2017}|Conneau et al.|/2018). We take one of these languages kâ to build universal lexical repre- sentation consisting of a universal embedding ma- trix «¢, ⬠Râ*¢4 and a corresponding key matrix â¬key ⬠Râ*4, where M < |V;j|. Both chuery and â¬key are fixed during meta-learning. We then com- pute the language-specific embedding of token x from the language k as the convex sum of the uni- versal embedding vectors by
M [a] = S- a â¬ult], i=l
where a; x exp {âtégey[i] ' Aekuerylt]} and 7 is set to 0.05. This approach allows us to handle lan- guages with different vocabularies using a fixed number of shared parameters (â¬,, â¬key and A.)
Learning of ULR It is not desirable to update the universal embedding matrix ©, when fine-
# of sents. # of En tokens Dev Test Ro-En Lv-En Fi-En Tr-En Ko-En 0.61 M 4.46 M 2.63 M 0.21 M 0.09 M 16.66 M 67.24 M 64.50 M 5.58 M 2.33 M â 31.76 15.15 20.20 13.74 5.97 20.24 17.38 15.45 6.88
Table 1: Statistics of full datasets of the target lan- guage pairs. BLEU scores on the dev and test sets are reported from a supervised Transformer model with the same architecture.
tuning on a small corpus which contains a lim- ited set of unique tokens in the target language, as it could adversely influence the other tokensâ embedding vectors. We thus estimate the change to each embedding vector induced by language- specific learning by a separate parameter Ac* [xr]:
é [2] = fay [a] + Ac [a].
During language-specific learning, the ULR â¬Â°[z] is held constant, while only Ae* [a] is updated, starting from an all-zero vector. On the other hand, we hold Ae*{z]âs constant while updating â¬,, and A during the meta-learning stage.
# 4 Experimental Settings
# 4.1 Dataset
Target Tasks We show the effectiveness of the proposed meta-learning method for low resource NMT with extremely limited training examples on ï¬ve diverse target languages: Romanian (Ro) from WMTâ16,2 Latvian (Lv), Finnish (Fi), Turk- ish (Tr) from WMTâ17,3 and Korean (Ko) from Korean Parallel Dataset.4 We use the ofï¬cially provided train, dev and test splits for all these lan- guages. The statistics of these languages are pre- sented in Table 1. We simulate the low-resource translation scenarios by randomly sub-sampling the training set with different sizes.
Source Tasks We use the following languages from Europarl5: Bulgarian (Bg), Czech (Cs), Dan- ish (Da), German (De), Greek (El), Spanish (Es), Estonian (Et), French (Fr), Hungarian (Hu), Ital- ian (It), Lithuanian (Lt), Dutch (Nl), Polish (Pl), Portuguese (Pt), Slovak (Sk), Slovene (Sl) and
# 2 http://www.statmt.org/wmt16/translation-task.html 3 http://www.statmt.org/wmt17/translation-task.html 4 https://sites.google.com/site/koreanparalleldata/ 5 http://www.statmt.org/europarl/
(a) Ro-En (b) Lv-En (c) Fi-En (d) Tr-En
Figure 3: BLEU scores reported on test sets for {Ro, Lv, Fi, Tr} to En, where each model is ï¬rst learned from 6 source tasks (Es, Fr, It, Pt, De, Ru) and then ï¬ne-tuned on randomly sampled training sets with around 16,000 English tokens per run. The error bars show the standard deviation calculated from 5 runs.
Swedish (Sv), in addition to Russian (Ru)6 to learn the intilization for ï¬ne-tuning. In our exper- iments, different combinations of source tasks are explored to see the effects from the source tasks.
using MUSE to get mul- tilingual word vectors. We use the multilingual word vectors of the 20,000 most frequent words in English to form the universal embedding matrix Eu.
Validation We pick either Ro-En or Lv-En as a validation set for meta-learning and test the gener- alization capability on the remaining target tasks. This allows us to study the strict form of meta- learning, in which target tasks are unknown during both training and model selection.
Preprocessing and ULR Initialization As de- scribed in 3 we initialize the query embed- ding vectors query of all the languages. For each language, we use the monolingual corpora built from Wikipedigâ|and the parallel corpus. The con- catenated corpus is first tokenized and segmented using byte-pair encoding (BPE, {2016}, resulting in 40, 000 subwords for each lan- guage. We then estimate word vectors using fast- Text (Bojanowski et al.| 2016) and align them across all the languages in an unsupervised way Lhe
# 4.2 Model and Learning
Model We utilize the recently proposed Trans- former (Vaswani et al., 2017) as an underlying NMT system. We implement Transformer in this paper based on (Gu et al., 2018a)8 and mod- ify it to use the universal lexical representation from §3.3. We use the default set of hyperpa- rameters (dmodel = dhidden = 512, nlayer = 6, nhead = 8, nbatch = 4000, twarmup = 16000) for all the language pairs and across all the experi- mental settings. We refer the readers to (Vaswani et al., 2017; Gu et al., 2018a) for the details of the model. However, since the proposed meta- learning method is model-agnostic, it can be eas- ily extended to any other NMT architectures, e.g. RNN-based sequence-to-sequence models with at- tention (Bahdanau et al., 2015).
6 A subsample of approximately 2M pairs from WMTâ17. 7 We use the most recent Wikipedia dump (2018.5) from
https://dumps.wikimedia.org/backup-index.html.
8 https://github.com/salesforce/nonauto-nmt
Meta-Train zero Ro-En ï¬netune zero Lv-En ï¬netune zero Fi-En ï¬netune zero Tr-En ï¬netune zero Ko-En ï¬netune â Es Es Fr Es Fr It Pt De Ru Es Fr It Pt De Ru All 0.00 ± .00 1.40 ± .09 9.20 1.70 ± .14 12.35 1.90 ± .07 13.88 2.16 ± .09 10.60 15.93 2.44 ± .11 18.12 22.04 ± .23 9.58 10.44 ± .17 11.39 12.63 ± .22 5.34 8.97 ± .08 1.96 3.97 ± .10 00.00 ± .00 15.71 ± .22 17.46 ± .41 18.54 ± .19 16.05 ± .31 20.00 ± .27 0.00 ± .00 4.65 ± .12 5.05 ± .04 5.63 ± .11 7.19 ± .17 7.88 ± .14 0.00 ± .00 5.55 ± .08 6.08 ± .01 6.80 ± .04 7.98 ± .22 9.14 ± .05 0.00 ± .00 4.14 ± .03 4.56 ± .20 4.82 ± .10 6.02 ± .11 6.02 ± .13 2.23 2.86 3.88 5.15 6.33 2.73 3.71 4.93 6.62 7.89 1.56 2.17 2.49 3.20 3.72 0.63 0.61 0.82 1.19 1.28 Full Supervised 31.76 15.15 20.20 13.74 5.97
Table 2: BLEU Scores w.r.t. the source task set for all ï¬ve target tasks.
BLEU ât Ro-En MetaNMT = Ro-En MultiNMT 11.55 12. 11,78 BLEU âb Fi-En MetaNMT =e Fi-En MultiNMT: 40K tt} 4k 16K 160K
Figure 4: BLEU Scores w.r.t. the size of the target taskâs training set.
Learning We meta-learn using various sets of source languages to investigate the effect of source task choice. For each episode, by default, we use a single gradient step of language-speciï¬c learning with Adam (Kingma and Ba, 2014) per comput- ing the meta-gradient, which is computed by the ï¬rst-order approximation in Eq. (3).
For each target task, we sample training exam- ples to form a low-resource task. We build tasks of 4k, 16k, 40k and 160k English tokens for each lan- guage. We randomly sample the training set ï¬ve times for each experiment and report the average score and its standard deviation. Each ï¬ne-tuning is done on a training set, early-stopped on a val- idation set and evaluated on a test set. In default without notation, datasets of 16k tokens are used.
Fine-tuning Strategies The transformer con- sists of three modules; embedding, encoder and decoder. We update all three modules during meta- learning, but during ï¬ne-tuning, we can selectively tune only a subset of these modules. Following (Zoph et al., 2016), we consider three ï¬ne-tuning
strategies; (1) ï¬ne-tuning all the modules (all), (2) ï¬ne-tuning the embedding and encoder, but freez- ing the parameters of the decoder (emb+enc) and (3) ï¬ne-tuning the embedding only (emb).
# 5 Results
vs. Multilingual Transfer Learning We meta- learn the initial models on all the source tasks us- ing either Ro-En or Lv-En as a validation task. We also train the initial models to be multilin- gual translation systems. We ï¬ne-tune them us- ing the four target tasks (Ro-En, Lv-En, Fi-En and Tr-En; 16k tokens each) and compare the pro- posed meta-learning strategy and the multilingual, transfer learning strategy. As presented in Fig. 3, the proposed learning approach signiï¬cantly out- performs the multilingual, transfer learning strat- egy across all the target tasks regardless of which target task was used for early stopping. We also notice that the emb+enc strategy is most effec- tive for both meta-learning and transfer learn- ing approaches. With the proposed meta-learning and emb+enc ï¬ne-tuning, the ï¬nal NMT systems trained using only a fraction of all available train- ing examples achieve 2/3 (Ro-En) and 1/2 (Lv-En, Fi-En and Tr-En) of the BLEU score achieved by the models trained with full training sets.
vs. Statistical Machine Translation We also test the same Ro-En datasets with 16, 000 target tokens using the default setting of Phrase-based MT (Moses) with the dev set for adjusting the parameters and the test set for calculating the ï¬- nal performance. We obtain 4.79(±0.234) BLEU point, which is higher than the standard NMT per- formance (0 BLEU). It is however still lower than both the multi-NMT and meta-NMT.
Impact of Validation Tasks Similarly to train- ing any other neural network, meta-learning still requires early-stopping to avoid overï¬tting to a
speciï¬c set of source tasks. In doing so, we ob- serve that the choice of a validation task has non- negligible impact on the ï¬nal performance. For in- stance, as shown in Fig. 3, Fi-En beneï¬ts more when Ro-En is used for validation, while the oppo- site happens with Tr-En. The relationship between the task similarity and the impact of a validation task must be investigated further in the future.
Training Set Size We vary the size of the tar- get taskâs training set and compare the proposed meta-learning strategy and multilingual, transfer learning strategy. We use the emb+enc ï¬ne-tuning on Ro-En and Fi-En. Fig. 4 demonstrates that the meta-learning approach is more robust to the drop in the size of the target taskâs training set. The gap between the meta-learning and transfer learning grows as the size shrinks, conï¬rming the effective- ness of the proposed approach on extremely low- resource language pairs.
15 nf Napa ar wae oe putea Perey NNN ZON AIR Ney Wetted ne Wiad Moen een Pagar ââ MetaNMT Fine-tune --- MetaNMT Zero-shot ââ MultiNMT Fine-tune === MultiNMT Zero-shot 0 20K 40K 60K 80K 100K 120K Meta-learning steps
Figure 5: The learning curves of BLEU scores on the validation task (Ro-En).
Impact of Source Tasks In Table 2, we present the results on all ï¬ve target tasks obtained while varying the source task set. We ï¬rst see that it is always beneï¬cial to use more source tasks. Al- though the impact of adding more source tasks varies from one language to another, there is up to 2à improvement going from one source task to 18 source tasks (Lv-En, Fi-En, Tr-En and Ko-En). The same trend can be observed even without any ï¬ne-tuning (i.e., unsupervised translation, (Lam- ple et al., 2017; Artetxe et al., 2017b)). In addi- tion, the choice of source languages has different implications for different target languages. For in- stance, Ro-En beneï¬ts more from {Es, Fr, It, Pt} than from {De, Ru}, while the opposite effect is observed with all the other target tasks.
Training Curves The beneï¬t of meta-learning over multilingual translation is clearly demon- strated when we look at the training curves in Fig. 5. With the multilingual, transfer learning ap-
proach, we observe that training rapidly saturates and eventually degrades, as the model overï¬ts to the source tasks. MetaNMT on the other hand con- tinues to improve and never degrades, as the meta- objective ensures that the model is adequate for ï¬ne-tuning on target tasks rather than for solving the source tasks.
Sample Translations We present some sample translations from the tested models in Table 3. Inspecting these examples provides the insight into the proposed meta-learning algorithm. For in- stance, we observe that the meta-learned model without any ï¬ne-tuning produces a word-by-word translation in the ï¬rst example (Tr-En), which is due to the successful use of the universal lexcial representation and the meta-learned initialization. The system however cannot reorder tokens from Turkish to English, as it has not seen any train- ing example of Tr-En. After seeing around 600 sentence pairs (16K English tokens), the model rapidly learns to correctly reorder tokens to form a better translation. A similar phenomenon is ob- served in the Ko-En example. These cases could be found across different language pairs.
# 6 Conclusion
In this paper, we proposed a meta-learning algo- rithm for low-resource neural machine translation that exploits the availability of high-resource lan- guages pairs. We based the proposed algorithm on the recently proposed model-agnostic meta- learning and adapted it to work with multiple lan- guages that do not share a common vocabulary us- ing the technique of universal lexcal representa- tion, resulting in MetaNMT. Our extensive evalu- ation, using 18 high-resource source tasks and 5 low-resource target tasks, has shown that the pro- posed MetaNMT signiï¬cantly outperforms the ex- isting approach of multilingual, transfer learning in low-resource neural machine translation across all the language pairs considered.
The proposed approach opens new opportuni- ties for neural machine translation. First, it is a principled framework for incorporating various extra sources of data, such as source- and target- side monolingual corpora. Second, it is a generic framework that can easily accommodate existing and future neural machine translation systems.
Source (Tr) Target Meta-0 Meta-16k google m¨ulteciler ic¸in 11 milyon dolar toplamak ¨uzere baËgıs¸ es¸les¸tirme kampanyasını bas¸lattı . google launches donation-matching campaign to raise $ 11 million for refugees . google refugee fund for usd 11 million has launched a campaign for donation . google has launched a campaign to collect $ 11 million for refugees . Source (Ko) ì´ë²ì ì²´í¬ëì´ ê¸°ìë ì¬ëë¤ ì¤ìë í´ìí êµ° ê³ ìê´ë¦¬ , ì¸ë¡ ì¸ , ì ì¹ì¸ , ê²½ì ì¸ ë±ì´ í¬í¨ëë¤ Target Meta-0 among the suspects are retired military ofï¬cials , journalists , politicians , businessmen and others . last year , convicted people , among other people , of a high-ranking army of journalists in economic and economic policies , were included . the arrested persons were included in the charge , including the military ofï¬cials , journalists , politicians and economists . Meta-16k
Table 3: Sample translations for Tr-En and Ko-En highlight the impact of ï¬ne-tuning which results in syntactically better formed translations. We highlight tokens of interest in terms of reordering.
# Acknowledgement
This research was supported in part by the Face- book Low Resource Neural Machine Translation Award. This work was also partly supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recogni- tion to AI) and Samsung Electronics (Improving Deep Learning using Latent Structure). KC thanks support by eBay, TenCent, NVIDIA and CIFAR.
Yun Chen, Yang Liu, and Victor OK Li. 2018. Zero- resource neural machine translation with multi- arXiv preprint agent communication game. arXiv:1802.03116.
Yong Cheng, Yang Liu, Qian Yang, Maosong Sun, and Wei Xu. 2016. Neural machine translation with pivot languages. arXiv preprint arXiv:1611.04928.
Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: EncoderâDecoder ap- proaches. In Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation.
# References
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. 2016. Learning to learn by gra- In Advances dient descent by gradient descent. in Neural Information Processing Systems, pages 3981â3989.
Alexis Conneau, Guillaume Lample, MarcâAurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. International Conference on Learning Representations.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017a. Learning bilingual word embeddings with In Proceedings of the (almost) no bilingual data. 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 451â462.
Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016a. Multi-way, multilingual neural machine translation with a shared attention mechanism. In NAACL.
Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T Yarman Vural, and Kyunghyun Cho. 2016b. Zero-resource translation with multi-lingual neural machine translation. In EMNLP.
Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017b. Unsupervised neural ma- chine translation. arXiv preprint arXiv:1710.11041.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
Jonas Gehring, Michael Auli, David Grangier, De- nis Yarats, and Yann Dauphin. 2017. Convolu- tional sequence to sequence learning. arXiv preprint arXiv:1705.03122.
Jiatao Gu, James Bradbury, Caiming Xiong, Vic- tor O. K. Li, and Richard Socher. 2018a. Non- autoregressive neural machine translation. ICLR.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vec- arXiv preprint tors with subword information. arXiv:1607.04606.
Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor OK Li. 2018b. Universal neural machine translation for extremely low resource languages. arXiv preprint arXiv:1802.05368.
Yun Chen, Yang Liu, Yong Cheng, and Victor OK Li. 2017. A teacher-student framework for zero- resource neural machine translation. arXiv preprint arXiv:1705.00753.
Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. 2018. Dynamic neural tur- ing machine with continuous and discrete address- ing schemes. Neural computation, 30(4):857â884.
Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On us- ing monolingual corpora in neural machine transla- tion. arXiv preprint arXiv:1503.03535.
David Ha, Andrew Dai, and Quoc V Le. 2016a. Hy- pernetworks. arXiv preprint arXiv:1609.09106.
Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2016b. Toward multilingual neural machine trans- lation with universal encoder and decoder. arXiv preprint arXiv:1611.04798.
Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016. Dual learn- ing for machine translation. In Advances in Neural Information Processing Systems, pages 820â828.
Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda Vi´egas, Martin Wattenberg, Greg Cor- rado, et al. 2016. Googleâs multilingual neural ma- chine translation system: enabling zero-shot transla- tion. arXiv preprint arXiv:1611.04558.
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Philipp Koehn and Rebecca Knowles. 2017. challenges for neural machine translation. preprint arXiv:1706.03872.
Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology- Volume 1, pages 48â54. Association for Computa- tional Linguistics.
Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. 2015. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332â1338.
and Ludovic Lample, Unsupervised MarcâAurelio Ranzato. 2017. machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043.
Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2016. Fully character-level neural machine trans- lation without explicit segmentation. arXiv preprint arXiv:1610.03017.
Jason Lee, Kyunghyun Cho, Jason Weston, and Douwe Kiela. 2017. Emergent translation in multi-agent communication. arXiv preprint arXiv:1710.06922.
Alexander Miller, Adam Fisch, Jesse Dodge, Amir- Hossein Karimi, Antoine Bordes, and Jason We- Key-value memory networks for ston. 2016. arXiv preprint directly reading documents. arXiv:1606.03126.
Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. 2017. Meta-learning with temporal convolutions. arXiv preprint arXiv:1707.03141.
Herbert Robbins and Sutton Monro. 1951. A stochastic approximation method. The annals of mathematical statistics, pages 400â407.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation arXiv preprint 2015. models with monolingual data. arXiv:1511.06709.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation sys- tems for wmt 16. arXiv preprint arXiv:1606.02891.
Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Ofï¬ine bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859.
Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Ad- vances in Neural Information Processing Systems, pages 4080â4090.
Ilya Sutskever, Oriol Vinyals, and QuËoc LËe. 2014. Se- quence to sequence learning with neural networks. In NIPS.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.
Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one In Advances in Neural Information shot learning. Processing Systems, pages 3630â3638.
Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised neural machine translation with weight sharing. arXiv preprint arXiv:1804.09057.
Jiajun Zhang and Chengqing Zong. 2016. Exploit- ing source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Pro- cessing, pages 1535â1545.
Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Earth moverâs distance minimization for In Pro- unsupervised bilingual lexicon induction. ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 1934â 1945. Association for Computational Linguistics.
Jonathan May, and Kevin Knight. 2016. Transfer learning for low- resource neural machine translation. arXiv preprint arXiv:1604.02201. | {
"id": "1611.04798"
} |
1808.07036 | QuAC : Question Answering in Context | We present QuAC, a dataset for Question Answering in Context that contains
14K information-seeking QA dialogs (100K questions in total). The dialogs
involve two crowd workers: (1) a student who poses a sequence of freeform
questions to learn as much as possible about a hidden Wikipedia text, and (2) a
teacher who answers the questions by providing short excerpts from the text.
QuAC introduces challenges not found in existing machine comprehension
datasets: its questions are often more open-ended, unanswerable, or only
meaningful within the dialog context, as we show in a detailed qualitative
evaluation. We also report results for a number of reference models, including
a recently state-of-the-art reading comprehension architecture extended to
model dialog context. Our best model underperforms humans by 20 F1, suggesting
that there is significant room for future work on this data. Dataset, baseline,
and leaderboard available at http://quac.ai. | http://arxiv.org/pdf/1808.07036 | Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer | cs.CL, cs.AI, cs.LG | EMNLP Camera Ready | null | cs.CL | 20180821 | 20180828 | 8 1 0 2
g u A 8 2 ] L C . s c [
3 v 6 3 0 7 0 . 8 0 8 1 : v i X r a
# QuAC : Question Answering in Context
2
# Mark Yatskar*? Luke Zettlemoyerâ Eunsol Choi*® â He He*® Wen-tau Yiht = Yejin Choi Mohit Iyyer*** Percy Liang®
Eunsol Choi*® â He He*® Wen-tau Yiht = Yejin Choi Allen Institute for Artificial Intelligencet Mohit Percy
# Mark Yatskar*? Luke Zettlemoyerâ University of Washingtonâ Iyyer*** Liang®
Stanford University⦠UMass Amherstâ£
{eunsol,yejin,lsz}@cs.washington.edu {hehe,pliang}@cs.stanford.edu {mohiti,marky,scottyih}@allenai.org
# Abstract
We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K ques- tions in total). The dialogs involve two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as pos- sible about a hidden Wikipedia text, and (2) a teacher who answers the questions by pro- viding short excerpts from the text. QuAC in- troduces challenges not found in existing ma- chine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context, as we show in a detailed qualitative evaluation. We also report results for a number of ref- erence models, including a recently state-of- the-art reading comprehension architecture ex- tended to model dialog context. Our best model underperforms humans by 20 F1, sug- gesting that there is signiï¬cant room for fu- ture work on this data. Dataset, baseline, and leaderboard available at http://quac.ai.
# â2*Daffy
# Section:
# Daffy Duck, Origin & History
STUDENT: What is the origin of Daffy Duck? TEACHER: â<> first appeared in Porkyâs Duck Hunt STUDENT: What was he like in that episode? TEACHER: â+ assertive, unrestrained, combative STUDENT: Was he the star? TEACHER: <> No, barely more than an unnamed bit player in this short STUDENT: Who was the star? TEACHER: Â¥> No answer STUDENT: Did he change a lot from that first episode in future episodes? TEACHER: <> Yes, the only aspects of the char- acter that have remained consistent (...) are his voice characterization by Mel Blanc STUDENT: How has he changed? TEACHER: <> Daffy was less anthropomorphic STUDENT: In what other ways did he change? TEACHER: <> Daffyâs slobbery, exaggerated lisp (...) is barely noticeable in the early cartoons. STUDENT: Why did they add the lisp? TEACHER: <> One often-repeated âofficialâ story is that it was modeled after producer Leon Schlesingerâs tendency to lisp. STUDENT: Is there an âunofficialâ story? TEACHER: <> Yes, Mel Blanc (...) contradicts that conventional belief
1
# 1 Introduction
In information-seeking dialog, students repeat- edly ask teachers questions to learn about a topic of interest (Stede and Schlangen, 2004). Mod- eling such conversations is challenging, as the questions can be highly context-dependent, ellip- tical, and even unanswerable. To enable learning from rich information-seeking dialog, we present QuAC (henceforth ), a large-scale dataset for Question Answering in Context that contains 14K crowdsourced QA dialogs (100K total QA pairs).1 dialog. The in- teraction is student driven and centered around a short evidence text (a section from Daffy Duckâs
â2
'We use âdialogâ to refer to a sequence of QA pairs. % Authors contributed equally.
Figure 1: An example dialog about a Wikipedia sec- tion. The student, who does not see the section text, asks questions. The teacher provides a response in the form of a text span (or No answer ), optionally yes or no( Yes / No ), and encouragement about continuing a line of questioning (should, â , could <> , or should not Â¥ ask a follow-up question).
Wikipedia page), which only the teacher can ac- cess. Given just the sectionâs heading, âOrigin & Historyâ, the student aims to learn as much as pos- sible about its contents by asking questions. The teacher answers these questions with spans from the evidence text, as in existing reading compre- hension tasks (Rajpurkar et al., 2016). Addition- ally, the teacher uses dialog acts to provide the stu- dent with feedback (e.g., âask a follow up ques-
Dataset Multi Text- Dialog Simple Unanswerable Asker Canât turn based Acts Evaluation Questions See Evidence 2 Quac v v v v v v CoQA (Reddy et al., 2018) Vv Vv x Vv Vv x CSQA (Saha et al., 2018) Vv x x x Vv x CQA (Talmor and Berant, 2018) Vv Vv x Vv x Vv SQA (lyyer et al., 2017) Vv x x Vv x x NarrativeQA (Kocisky et al., 2017) x Vv x x x Vv TriviaQA (Joshi et al., 2017) x Vv x v x Vv SQuAD 2.0 (Rajpurkar et al., 2018) x Vv x Vv Vv x MS Marco (Nguyen et al., 2016) x Vv x x Vv Vv NewsQA (Trischler et al., 2016) x Vv x Vv Vv Vv
Table 1: Comparison of the QUAC dataset to other question answering datasets.
tionâ), which makes the dialogs more productive. We collect the dataset in an interactive set- ting where two crowd workers play the roles of teacher and student. To encourage natural and di- verse questions, we do not follow previous dialog- style QA datasets that semi-automatically generate questions (Talmor and Berant, 2018; Saha et al., 2018). Furthermore, unlike QA datasets such as SQuAD and CoQA (Reddy et al., 2018), students in do not know the answers to their ques- tions prior to asking them, which lessens the role of string matching and simple paraphrasing in an- swering their questions. This property makes similar to datasets that contain real user queries on search engines (Nguyen et al., 2016).
2â
â2+
Train Dev. questions dialogs unique sections 83,568 11,567 6,843 7,354 1,000 1,000 tokens / section tokens / question tokens / answer questions / dialog % yes/no % unanswerable 396.8 6.5 15.1 7.2 26.4 20.2 440.0 6.5 12.3 7.4 22.1 20.2 Test Overall 7,353 1,002 1,002 98,407 13,594 8,854 445.8 6.5 12.3 7.3 23.4 20.1 401.0 6.5 14.6 7.2 25.8 20.2
BA
Table 2: Statistics summarizing the dataset.
# 2.1 Interactive Task
â&
contains many challenging phenomena unique to dialog, such as coreference to previous questions and answers and open-ended questions that must be answered without repeating previ- ous information (Section 3). Additionally, despite lacking access to the section text, we ï¬nd that stu- dents start dialogs by asking questions about the beginning of the section before progressing to ask- ing questions about the end. These observations imply that models built for must incorporate the dialog context to achieve good performance.
â26
We present a strong neural baseline (Clark and Gardner, 2018) that considers both dialog context and section text. While this model achieves within 6 F1 of human performance on SQuAD, it per- forms 20 F1 points below the human upper bound , indicating room for future improvement. on
Our task pairs up two workers, a teacher and a student, who discuss a section s (e.g., âOrigin & Historyâ in the example from Figure 1) from a Wikipedia article about an entity e (Daffy Duck). The student is permitted to see only the sectionâs title t and the ï¬rst paragraph of the main article b, while the teacher is additionally provided with full access to the section text.
The task begins with the student formulating a free-text question q from the limited information they have been given. The teacher is not allowed to answer with free text; instead, they must select a contiguous span of text deï¬ned by indices (i, j) into the section text s.2 While this decision lim- its the expressivity of answers, it makes evalua- tion simpler and more reliable; as such, it has been adopted in other reading comprehension datasets such as SQuAD, TriviaQA (Joshi et al., 2017), and NewsQA (Trischler et al., 2016).
# 2 Dataset collection
This section describes our data collection process, which involves facilitating QA dialogs between shares many crowd workers. Table 1 shows of the same positive characteristics of existing QA datasets while expanding upon the dialog aspect.
â4
To facilitate more natural interactions, teachers must also provide the student with a list of dia- log acts v that indicates the presence of any of n discrete statements. We include three types of di-
2We set the maximum answer length to 30 tokens to pre- vent teachers from revealing the full article all at once.
Did they have a lot of followers? Did she go on any tours after this? ay win against Cuba? Did he marry? Did they serve any pris Did he have any confli Did she Did he actually get a Muslim state ith team mates? ted?
Figure 2: A treemap visualization of the eight most frequent âWhâ words in , where box area is proportional to number of occurrences. Compared to other machine comprehension datasets, we observe increased contextuality and open-endedness, as well as a variety of both general and speciï¬c questions.
alog acts: (1) continuation (follow up, maybe follow up, or donât follow up), (2) afï¬r- mation (yes, no, or neither) and (3) answer- ability (answerable or no answer). The continuation act is crucial for workers to have pro- ductive dialogs, as it allows teachers to guide the studentâs questioning towards aspects of the article that are especially important or interesting. Al- together, a teacherâs complete answer to a ques- tion q includes a pair of indices and dialog indi- cators, a = (i, j, v). If a question is marked no answer, the indices are ignored.
After receiving an answer from the teacher, the student asks another question. At every turn, the student has more information about the topic than they did previously, which encourages them to ask follow-up questions about what they have just learned. The dialog continues until (1) twelve questions are answered, (2) one of the partners de- cides to end the interaction, or (3) more than two unanswerable questions were asked.
QA pairs.3 To ensure quality, we created a qual- iï¬cation task and allowed workers to report their partner for various problems. More details on data collection can be found in our datasheet.4
Article selection Our early pilot studies showed that articles about people generally require less background knowledge to write good questions than other categories. To ï¬nd articles about peo- ple with varied backgrounds, we retrieved articles from a list of category keywords (culture, animal, people associated with event, geography, health, celebrity) using a web interface provided by the Wikimedia foundation.5 We pruned by popular- ity by selecting articles with at least 100 incoming links, and we additionally removed non-person en- tities using YAGO (Suchanek et al., 2007). After article selection, we ï¬ltered sections from these ar- ticles based on the number of paragraphs, number of tokens, and average words per sentence. 6
# 2.2 Collection Details
We used Amazon Mechanical Turk for collection, restricting the task to workers in English-speaking countries and with more than 1000 HITs with at least a 95% acceptance rate. We paid workers per the number of completed turns in the dialog, which encourages workers to have long dialogs with their partners, and discarded dialogs with less than three
Dataset validation To create our evaluation sets, we collected four additional annotations per question. Workers were presented with questions from a previously collected dialog and asked to
3On average, we paid $0.33 per question, increasing pay per question as dialogs got longer to encourage completion.
4 http://quac.ai/datasheet.pdf 5https://petscan.wmï¬abs.org/ 6These ï¬ltering steps bias our data towards entertainers;
see datasheet for details.
provide answer spans.7 Acquiring many annota- tions is important since many questions in have multiple valid answers.
Train / Dev / Test Differences Table 2 shows small differences between training, development and testing splits. Sections in the training set are shorter than those in the evaluation folds because we permit multiple dialogs about the same section only in training; since workers preferred reading shorter sections, these were more likely to result in multiple dialogs. Variations in answer span length arise from two sources: (1) having multiple anno- tations in the validation task and (2) differing in- centives between the data collection and validation procedures.8 An analysis measuring the effect of these variations shows that they result in little dif- ference in evaluation.9
# 3 Dataset Analysis
â2
differs from other reading comprehension datasets due to our dialog-style collection process and the information asymmetry between teacher In the following sections, we pro- and student. vide a qualitative analysis of the dataset in that highlights challenging question types as well as the impact of the dialog context.
Question and answer types Table 2 shows dataset summary statistics. has long an- swers of 15 tokens on average compared to 3 for SQuAD, which is unsurprising as most SQuAD answers are either entities or numerics (Jurczyk et al., 2018) while questions can be more open-ended. While the average question length (6.5 tokens) is shorter than that of SQuAD (11 tokens), this does not indicate reduced question complexity, as the student (1) cannot access the section to paraphrase it and (2) can be more con- cise by coreferencing previous interactions.
â24
â2s
Figure 2 visualizes the most frequent question based on âWhâ words.10 For a more
â8s
7After submitting an answer, they were shown the original teacherâs answer so that they could understand the context of the subsequent questions.
8Validation workers did not have to maintain the dialog and so did not include as much information in the response. 9More speciï¬cally, we analyze whether references from the initial data collection signiï¬cantly differ from references collected during validation. We observe a difference of less than 1 F1 when using the original answer as system output versus using validation answers.
10To more effectively visualize sub-boxes like âwhat didâ, we exclude questions from the tail of the distribution.
âBs
â2
Section: Augusto Pinochet : Intellectual life... STUDENT: Was he known for being intelligent? TEACHER: © No, Pinochet was publicly known as a man with a lack of culture. STUDENT: why did people feel that way? TEACHER: © reinforced by the fact that he also portrayed himself as a common man STUDENT: did he have any hobbies? TEACHER: © Yes, Before wresting power from Allende, Pinochet had written two books. STUDENT: what is the name of a book written by him? TEACHER: â+ Geopolitica (1968) and Campana de Tarapaca (1972). STUDENT: what were the books about? TEACHER: <> Chileâs military literature. STUDENT: was there anything noteworthy re- garding his books? TEACHER: <+ Yes, In Geopolitica Pinochet pla- giarized (...) Gregorio Rodriguez Tascon STUDENT: did he deny those allegations? TEACHER: Â¥ No answer STUDENT: what did he plagiarize in Geopolitica? TEACHER: <> In Geopolitica Pinochet plagia- rized (...) paragraphs from a 1949 presentation
.
Figure 3: An example successful dialog from . Questions build on each other and interesting aspects (e.g., plagiarism) are explored as they are discovered.
ï¬ne-grained analysis, we randomly sampled 100 questions (each from a different dialog) and man- ually labeled different phenomena in Table 3. Un- like most current QA datasets that focus on fac- toid questions, our task setup encourages more open-ended questions: about half of questions are non-factoid. Furthermore, 86% of questions are contextual, requiring reading the context to re- solve coreference; of these, 44% refer to entities or events in the dialog history, while 61% refer to the subject of the article.
â4
The role of context Dialog context is crucial to understanding and answering questions. Fig- ure 5a shows that the location of the answer within the text is inï¬uenced by the number of questions asked previously. Early questions are mostly an- swered in the beginning of the section, while later questions tend to focus on the end of the section. Interestingly, text in the middle of the section is not asked about as frequently (Figure 5c). As more questions get asked, the more likely a question is to be unanswerable.
â2A
Figure 5b shows how the answers progress through different chunks of the evidence text (where each section is divided into 12 chunks of
Section: Gaelic Ireland : Invasion
STUDENT: What year did the invasion happen? TEACHER: <> in 1169 the main body of Norman, Welsh and Flemish forces landed in Ireland and quickly retook Leinster and the cities of Waterford and Dublin on behalf of Diarmait. STUDENT: Who was Diarmait? TEACHER: <> King Diarmait Mac Murchada of Leinster. STUDENT: Where is Leinster located? TEACHER: ¥- landed in Ireland and quickly re- took Leinster. STUDENT: Were invasions common? TEACHER: ¥ No answer STUDENT: Are there any other interesting as- pects about this article? TEACHER: <> Yes, [Pope Adrian IV, the only English pope, had already issued a Papal Bull in 1155 giving Henry II of England authority to invade Ireland. STUDENT: Who lead the invasion? TEACHER: ¥ No answer STUDENT: Did England defeat the Irish armies? TEACHER: ¥ No answer
4.
Figure 4: A less successful dialog from . The stu- dent struggles to get information despite asking good questions. The teacher attempts to provide extra con- text to guide the student, but the dialog ultimately ends because of too many unanswerable questions.
equal size). The answer to the next question is most frequently either in the same chunk as the previous question or an adjacent chunk, and most dialogs in the dataset cover three to six of the chunks (Figure 5d). These observations suggest that models for must take into account the di- alog context. However, results in Section 5 show that solely relying on the location of previous an- swers is not sufï¬cient.
â24
Finally, we examine properties of the questions as a function of the turn position in the dialog (Figure 6). The frequency of yes/no questions increases signiï¬cantly as the dialogs progress; again, at the beginning of the dialog, students have very little information, so it is harder to formu- late a yes/no question. The percentage of ques- tions that have multiple answers declines as the dialog progresses, implying students ask general questions ï¬rst and speciï¬c ones later.
Qualitative examples Figures 3 and 4 contain two representative dialogs from . Longer di- alogs sometimes switch topics (such as in Figure 3 about âacademic workâ) and often go from gen- eral to speciï¬c questions. Students whose ques-
â24.
Question type % Example Non- factoid 54 Q: Were the peace talks a success? Q: What was her childhood like? Contextual 86 Coref (article) 61 Title: Paul C´ezanne: Early years Q: When did he start painting? Coref (history) 44 Q: What was special about the Harrahâs? A: project was built by Trump with ï¬nancing from the Holiday Corporation. Q: Which led to what? Anything else? 11 Q: What other acting did he do? Q: What else did he research?
questions. Non-factoid Table 3: An analysis of questions do not ask about speciï¬c facts, while con- textual questions require reading the history to resolve coreferences to the dialog history and/or article.
tions go unanswered commonly resort to asking their teacher for any interesting content; even if this strategy fails to prolong the dialog as in Fig- ure 4, models can still use the dialog to learn when to give no answer.
# 4 Experimental Setup
We consider the following QA task: given the ï¬rst k questions and k ground-truth answers in the dia- log, all supporting material (entity e, topic t, back- ground b, and section text s), and question qk+1, we predict the answer span indices i, j in the sec- tion text s. Since afï¬rmation questions are incom- plete without a yes/no answer and the continuation feedback is important for information-seeking di- alog, we predict the dialog acts v, which with the span form the ï¬nal answer prediction ak+1.
All of our experiments are carried out on a train/dev/test split of 83.5k/7.3k/7.3k ques- tions/answer pairs, where no sections are shared between the different folds. Questions in the training set have one reference answer, while dev and test questions have ï¬ve references each. For all experiments, we do not evaluate on questions with a human F1 lower than 40, which eliminates roughly 10% of our noisiest annotations.11
# 4.1 Evaluation Metrics
Our core evaluation metric, word-level F1, is im- plemented similarly to SQuAD (Rajpurkar et al.,
11A manual inspection of annotations below this threshold revealed many lower quality questions; however, we also re- port unthresholded F1 in the ï¬nal column of Table 4.
(a) Answer location by position in dialog
(b) Location of next answer given current answer
|| |_| Current answer chunk Zz é 2 i o 41 2 3 4 5 6 7 8 9 (Cc) % dialogs that visit nY answer chunk 11 NO ANS 0 12 3 4 5 6 7 8 9 1 Next answer chunk . â J os | as | | â â " â â a rr re ee o 11 (d) # unique answer chunks visited per dialog
# Turn number
Figure 5: Heatmaps depicting the importance of context in dialogs, where (a) and (b) share the same color scale. The studentâs earlier questions are answered mostly by the ï¬rst few chunks, while the end of the section is covered in later turns (a). The middle is the least covered portion (c), and dialogs cover around ï¬ve unique chunks of the section on average (d). The transition matrix (b) shows that the answer to the next question is more likely to be located within a chunk adjacent to the current answer than in one farther away.
== Direct answer == Indirect answer == Cannot answer â Yesjno ââ Multiple answers 0.4 Occurrence frequency Turn number
Figure 6: The number of turns in the dialog inï¬uences the studentâs behavior: they start by asking general questions (i.e., easier to answer, with multiple possible answers) and progress to more speciï¬c ones.
2016): precision and recall are computed by con- sidering the portion of words in the prediction and references that overlap after removing stop- words.12 For no answer questions, we give the system an F1 of one if it correctly predicts no answer and zero otherwise.13 Like SQuAD, we compute the maximum F1 among all references; however, since many questions have multiple valid answers, this metric varies signiï¬cantly with
â24
the number of reference annotations. To make or- acle human and system performance comparable, given n references, we report the average of the maximum F1 computed from each n â 1 subset with respect to the heldout reference.
Additionally, since averaged F1 can be mislead- ing for questions with multiple valid answers, we introduce the human equivalence score (HEQ), a performance measure for judging whether a sys- temâs output is as good as that of an average hu- man.14 HEQ measures the percentage of examples for which system F1 exceeds or matches human F1. We compute two variants: (1) the percentage of questions for which this is true (HEQ-Q), and (2) the percentage of dialogs for which this is true for every question in the dialog (HEQ-D). A sys- tem that achieves a value of 100 on HEQ-D can by deï¬nition maintain average human quality output over full dialogs.
For dialog acts, we report accuracy with respect to the majority annotation, breaking ties randomly.
# 5 Experiments
# 5.1 Sanity checks
12Since our answer spans have vaguer boundaries than the
shorter ones in SQuAD, exact match is not a useful metric.
13Because the validation task was more susceptible to spam by constant annotation of âno-answer,â we only al- low âno-answerâ if the majority of references marked âno- answerâ, removing other answers. If âno-answerâ is not the majority answer, we remove all instances of âno-answerâ.
Random sentence This baseline selects a ran- dom sentence in the section text s as the answer (including no answer).
14In cases with lower human agreement on F1, if a system produces one reference exactly (F1 = 100), it will get points that it can use to offset poor performance on other examples.
40
Majority The majority answer outputs no answer and the majority class for all other di- alog acts (neither for afï¬rmation and donât follow up for continuation).
Transition matrix We divide the supporting text into 12 chunks (with a special chunk for no answer) and use the transition matrix (computed from the training set) in Figure 5b to select an an- swer given the position of the previous answer. This baseline does not output other dialog acts.
# 5.2 Upper bounds
Gold NA + TM This is the same transition ma- trix (TM) baseline as before, except that for ques- tions whose gold annotations are no answer, we always output no answer.
â2â
Gold sentence + NA To see if can be treated as an answer sentence selection problem, we output the sentence from s with the maximal F1 with respect to references, or no answer for unanswerable questions.
Human performance We pick one reference as a system output and compute the F1 with respect to the remaining references using the method de- scribed in Section 4.1. By deï¬nition, all HEQ measures are 100, and we report agreement for the afï¬rmation dialog act.15
# 5.3 Baselines
Pretrained InferSent To test the importance of lexical matching in our dataset, we output the sen- tence in s whose pretrained InferSent representa- tion (Conneau et al., 2017) has the highest cosine similarity to that of the question.
Feature-rich logistic regression We train a lo- gistic regression using Vowpal Wabbit (Langford et al., 2007) to select answer sentences. We use simple matching features (e.g., n-gram overlap be- tween questions and candidate answers), bias fea- tures (position and length of a candidate), and con- textual features (e.g., matching features computed with previous questions / answers, turn number).
BiDAF++ We use a re-implementation of a top- performing SQuAD model (Peters et al., 2018) that augments bidirectional attention ï¬ow (Seo
15We did not collect multiple annotations for the continua- tion dialog act and so omit it.
et al., 2016, BiDAF) with self-attention (Clark and Gardner, 2018) and contextualized embeddings.16 A token for no answer is appended to s to enable its prediction following Levy et al. (2017). Additionally, we modify the model for our task to also predict dialog acts, placing a classiï¬er over the same representation used to predict the end po- sition of the predicted span.
BiDAF++ w/ k-ctx As BiDAF++ does not model any dialog context, we modify the passage and question embedding processes to consider the dialog history. We consider context from the pre- vious k QA pairs.17
⢠Passage embedding We explicitly identify the previous k answers within the section text by concatenating marker embeddings to the existing word embeddings.
⢠Question embedding Naively prepending the previous k questions to the current ques- tion did not show gains in initial experiments. We opt instead to simply encode the dialog turn number within the question embedding.
# 5.4 Results
Table 4 summarizes our results (each cell displays dev/test scores), where dialog acts are Yes/No (af- ï¬rmation) and Follow up (continuation). For com- parison to other datasets, we report F1 without ï¬l- tering low-agreement QA pairs (F1â).
Sanity check Overall, the poor sanity check re- sults imply that is very challenging. Of these, following the transition matrix (TM) gives the best performance, reinforcing the observation that the dialog context plays a signiï¬cant role in the task.
â8
Upper bounds The human upper bound (80.8 F1) demonstrates high agreement. While Gold sentence + NA does perform well, indicating that signiï¬cant progress can be made by treating the problem as answer sentence selection, HEQ mea- sures show that span-based approaches will be needed achieve average human equivalence. Fi- nally, the Gold NA + TM shows that cannot be solved by ignoring question and answer text.
â&
16The AllenNLP (Gardner et al., 2017) implementation we use reaches 82.7 on the SQuAD development set, compared to the paperâs reported 85.8 on SQuAD; regardless, this im- plementation would have been state-of-the-art less than a year ago, making it an extremely strong baseline.
17Our implementation is available in AllenNLP.
F1 HEQ-Q HEQ-D Yes / No Follow up F1 (All) Random sentence Majority answer Trans. matrix (TM) 15.7 / 15.6 22.7 / 22.5 31.8 / 31.5 6.9 / 6.9 22.7 / 22.5 15.8 / 15.8 0.0 / 0.1 0.5 / 0.4 0.1 / 0.2 â 78.8 / 77.6 â â 57.9 / 56.7 â 16.4 / 16.3 20.2 / 20.0 31.2 / 30.9 Pretrained InferSent Logistic regression BiDAF++ (no ctx) BiDAF++ (w/ 1-ctx) BiDAF++ (w/ 2-ctx) BiDAF++ (w/ 3-ctx) 21.4 / 20.8 34.3 / 33.9 51.8 / 50.2 59.9 / 59.0 60.6 / 60.1 60.6 / 59.5 10.2 / 10.0 22.4 / 22.2 45.3 / 43.3 54.9 / 53.6 55.7 / 54.8 55.6 / 54.5 0.0 / 0.0 0.6 / 0.2 2.0 / 2.2 4.7 / 3.4 5.3 / 4.0 5.0 / 4.1 â â 86.4 / 85.4 86.5 / 86.1 86.6 / 85.7 86.1 / 85.7 â â 59.7 / 59.0 61.3 / 60.3 61.6 / 61.3 61.6 / 61.2 22.0 / 21.4 34.3 / 33.8 50.1 / 48.2 57.5 / 56.5 58.3 / 57.8 58.1 / 57.0 43.0 / 42.6 Gold NA + TM Gold sentence + NA 72.4 / 72.7 80.8 / 81.1 Human performance 27.4 / 27.4 61.8 / 62.7 100 / 100 1.0 / 0.8 9.8 / 9.7 100 / 100 â â 89.4 / 89.0 â â â 41.0 / 40.6 70.8 / 71.2 74.6 / 74.7
â2A.
Table 4: Experimental results of sanity checks (top), baselines (middle) and upper bounds (bottom) on . Simple text matching baselines perform poorly, while models that incorporate the dialog context signiï¬cantly outperform those that do not. Humans outperform our best model by a large margin, indicating room for future improvement.
Baselines Text similarity methods such as bag- of-ngrams overlap and InferSent are largely inef- fective on , which shows that questions have little direct overlap with their answers. On the other hand, BiDAF++ models make signiï¬cant progress, demonstrating that existing models can already capture a signiï¬cant portion of phenom- ena in . The addition of information from previous turns (w/ 1-ctx) helps signiï¬cantly, in- dicating that integration of context is essential to solving the task. While increasing the context size in BiDAF++ continues to help, we observe saturation using contexts of length 3, suggesting that more sophisticated models are necessary to take full advantage of the context. Finally, even the sys- our best model underperforms humans: tem achieves human equivalence on only 60% of questions and 5% of full dialogs.
â84.
# 5.5 Error Analysis
behavior differs from that of both models.
In the ï¬rst plot, human agreement is unchanged throughout the dialog while the performance of both models decreases as the number of turns increases, although the context-aware model de- grades less. While continuing a dialog for more turns does not affect human agreement, the sec- ond plot shows that human disagreement increases as the distance between the current answerâs loca- tion within the section text and that of the previous answer increases. Larger distances indicate shifts in the studentâs line of questioning (e.g., if the teacher told the student not to follow up on the pre- vious question). The plot also shows that model performance suffers (signiï¬cantly more than hu- mans) as distance increases, although the context- aware model can tolerate smaller shifts better than the context-agnostic model. In the last plot, hu- man agreement is higher when the answer span is short; in contrast, our model struggles to pin down short answers compared to longer ones.
In this section, we analyze the development set performance of our best context-aware model (BiDAF++ w/ 2-ctx), our best context-agnostic model (BiDAF++), and humans. Figure 7 contains three plots showing how F1 scores of baseline models and human agreement vary with (1) turn number, (2) distance from previous answer,18 and (3) answer length in tokens. Taken as a whole, our analysis reveals signiï¬cant qualitative differences between our context-aware and context-agnostic models beyond simply F1; additionally, human
18We divide the text into 12 equally-sized chunks and com- pute the difference of the current and previous chunk indices.
The plots demonstrate the increased robust- ness of the context-aware model compared to BiDAF++. This ï¬nding is reinforced by examin- ing the difference in model performance on ques- tions where previously the teacher recommended the student to âfollow upâ vs. not to follow up. The context-aware baseline performs 6 HEQ-Q higher on the âfollow upâ questions; in contrast, the context-agnostic baseline shows no HEQ-Q difference between the two types of questions. This discrepancy stems from the context-agnostic
Model â Human --- BIDAF++ (2/ctx) BiDAF++ 4 6 A) Turn number ° 2 4 8 B) Distance (# chunks) from prev answer 10 12 5 10 16 C) Answer length
Figure 7: The F1 scores of baseline models and human agreements based on dialog turn number, answerâs distance from previous answer, and the answer span token length.
modelâs inability to take advantage of the location of the previous answer.
exploratory questions whose answers can be po- tentially be followed up on.19
# 6 Related Work
Reading Comprehension Our work builds on span based reading comprehension (Rajpurkar et al., 2016; Joshi et al., 2017; Trischler et al., 2016), while also incorporating innovations such as curating questions independently of support- ing text to reduce trivial lexical overlap (Joshi et al., 2017; Kocisk´y et al., 2017) and allowing for unanswerable questions (Trischler et al., 2016; Ra- jpurkar et al., 2018). We handle open-ended ques- tions like in MSMARCO (Nguyen et al., 2016), with multiple references, but we are the ï¬rst to in- corporate these into information-seeking dialog.
~2
Dialog ï¬ts into an increasing interest in open domain dialog, mostly studied in the con- text of social chit-chat (Li et al., 2016; Ritter et al., 2011; Fang et al., 2017; Ghazvininejad et al., 2018). Most related to our effort is visual dia- log (Das et al., 2017), which relies on images as evidence instead of text. More explicit goal driven scenarios, such as bargaining (Lewis et al., 2017) and item guessing (He et al., 2017) have also been explored, but the language is more constrained than in . Information-seeking dialog specif- ically was studied in Stede and Schlangen (2004).
â24.
# 7 Conclusion
Sequential QA Our work is similar to se- quential question answering against knowledge bases (Iyyer et al., 2017) and the web (Talmor and Berant, 2018), but instead of decomposing a single question into smaller questions, we rely on the curiosity of the student to generate a se- quence of questions. Such open information seek- ing was studied in semantic parsing on knowledge bases (Dahl et al., 1994) and more recently with modern approaches (Saha et al., 2018), but with questions paraphrased from templates. Concur- rent to our work, Saeidi et al. (2018) proposed a task of generating and answering yes/no questions for rule focused text (such as trafï¬c laws) by in- teracting with a user through dialog. Also con- currently, Reddy et al. (2018) propose conversa- tional question answering (CoQA) from text but allow both students and questioners to see the ev- idence. As a result, a large percentage of CoQA answers are named entities or short noun phrases, much like those in SQuAD. In contrast, the asym- metric nature of forces students to ask more
In this paper, we introduce , a large scale dataset of information-seeking dialogs over sec- tions from Wikipedia articles. Our data collection process, which takes the form of a teacher-student interaction between two crowd workers, encour- ages questions that are highly contextual, open- ended, and even unanswerable from the text. Our baselines, which include top performers on exist- ing machine comprehension datasets, signiï¬cantly underperform humans on . We hope this dis- crepancy will spur the development of machines that can more effectively participate in informa- tion seeking dialog.
â24.
# Acknowledgments
â
was jointly funded by the Allen Institute for Artiï¬cial Intelligence and the DARPA CwC pro- gram through ARO (W911NF-15-1-0543). We would like to thank anonymous reviewers and Hsin-Yuan Huang who helped improve the draft.
â2
19On average, CoQA answers are 2.7 tokens long, while SQuADâs are 3.2 tokens and âs are over 14 tokens.
âBA
# References
Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehen- sion. Proceedings of the Association for Computa- tional Linguistics.
Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of Empirical Methods in Natural Language Process- ing.
Deborah A. Dahl, Madeleine Bates, Michael Brown, William M. Fisher, Kate Hunicke-Smith, David S. Pallett, Christine Pao, Alexander I. Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the atis task: The atis-3 corpus. In Proceedings of the workshop on Human Language Technology.
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In Computer Vision and Pattern Recognition.
Hao Fang, Hao Cheng, Elizabeth Clark, Ariel Holtz- man, Maarten Sap, Mari Ostendorf, Yejin Choi, and Noah A Smith. 2017. Sounding boardâuniversity of washingtons alexa prize submission. Alexa Prize Proceedings.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. Allennlp: A deep semantic natural language processing platform.
Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. Association for the Advance- ment of Artiï¬cial Intelligence.
He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative di- alogue agents with dynamic knowledge graph em- beddings. Proceedings of the Association for Com- putational Linguistics.
Mohit Iyyer, Wen tau Yih, and Ming-Wei Chang. 2017. Search-based neural structured learning for sequen- tial question answering. In Proceedings of the Asso- ciation for Computational Linguistics.
Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. In Proceedings of the Association for Compu- tational Linguistics.
Tomasz Jurczyk, Amit Deshmane, and Jinho D. Analysis of wikipedia-based cor- arXiv preprint Choi. 2018. pora for question answering. arXiv:abs/1801.02073.
Tom´as Kocisk´y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´abor Melis, and Edward Grefenstette. 2017. The narrativeqa Transactions reading comprehension challenge. of the Association for Computational Linguistics, abs/1712.07040.
John Langford, Lihong Li, and Alex Strehl. 2007. Vowpal wabbit online learning project.
Omer Levy, Minjoon Seo, Eunsol Choi, and Luke S. Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Conference on Compu- tational Natural Language Learning.
Mike Lewis, Denis Yarats, Yann N Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning for negotiation dialogues. Pro- ceedings of Empirical Methods in Natural Language Processing.
Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016. Deep re- inforcement learning for dialogue generation. Pro- ceedings of Empirical Methods in Natural Language Processing.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine read- ing comprehension dataset. arXiv, abs/1611.09268.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. Conference of the North American Chapter of the Association for Computational Lin- guistics.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you donât know: Unanswerable ques- tions for squad. In Proceedings of the Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of Empirical Methods in Natural Language Process- ing.
Siva Reddy, Danqi Chen, and Christopher D. Manning. 2018. Coqa: A conversational question answering challenge. ArXiv.
Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven response generation in social media. In Proceedings of Empirical Methods in Natural Lan- guage Processing.
Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Signh, Tim Rocktschel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpreta- tion of natural language rules in conversational ma- chine reading. In Proceedings of Empirical Methods in Natural Language Processing.
Amrita Saha, Vardaan Pahuja, Mitesh M Khapra, Karthik Sankaranarayanan, and Sarath Chandar. 2018. Complex sequential question answering: To- wards learning to converse over linked question an- swer pairs with a knowledge graph. In Association for the Advancement of Artiï¬cial Intelligence.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention ï¬ow for machine comprehension. Proceedings of the International Conference on Learning Represen- tations.
2004. Information-seeking chat: Dialogues driven by topic-structure. In Eighth workshop on the seman- tics and pragmatics of dialogue; SemDial.
Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowl- edge. In Proceedings of the World Wide Web Con- ference.
A. Talmor and J. Berant. 2018. The web as knowledge- base for answering complex questions. In Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2016. Newsqa: A machine compre- hension dataset. arXiv preprint arXiv:1611.09830. | {
"id": "1611.09830"
} |
1808.06226 | SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing | This paper describes SentencePiece, a language-independent subword tokenizer
and detokenizer designed for Neural-based text processing, including Neural
Machine Translation. It provides open-source C++ and Python implementations for
subword units. While existing subword segmentation tools assume that the input
is pre-tokenized into word sequences, SentencePiece can train subword models
directly from raw sentences, which allows us to make a purely end-to-end and
language independent system. We perform a validation experiment of NMT on
English-Japanese machine translation, and find that it is possible to achieve
comparable accuracy to direct subword training from raw sentences. We also
compare the performance of subword training and segmentation with various
configurations. SentencePiece is available under the Apache 2 license at
https://github.com/google/sentencepiece. | http://arxiv.org/pdf/1808.06226 | Taku Kudo, John Richardson | cs.CL | Accepted as a demo paper at EMNLP2018 | null | cs.CL | 20180819 | 20180819 | # SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing
# Taku Kudo John Richardson
# Google, Inc. {taku,johnri}@google.com
# Abstract
a This paper language-independent tokenizer and detokenizer designed for Neural-based including Neural Machine text processing, Translation. It provides open-source C++ and Python implementations for subword units. While existing subword segmentation tools assume that the input is pre-tokenized into word sequences, SentencePiece can train subword models directly from raw sentences, which allows us to make a purely end-to-end and language independent system. We perform a validation experiment of NMT on English-Japanese machine translation, and ï¬nd that it is possible to achieve comparable accuracy to direct subword training from raw sentences. We also compare the perfor- mance of subword training and segmentation with various conï¬gurations. SentencePiece the Apache 2 license is available under https://github.com/google/ at sentencepiece.
# Introduction
Deep neural networks are demonstrating a large impact on Natural Language Processing. Neu- ral machine translation (NMT) (Bahdanau et al., 2014; Luong et al., 2015; Wu et al., 2016; Vaswani et al., 2017) has especially gained increasing pop- ularity, as it can leverage neural networks to di- rectly perform translations with a simple end-to- end architecture. NMT has shown remarkable re- sults in several shared tasks (Denkowski and Neu- big, 2017; Nakazawa et al., 2017), and its effective approach has had a strong inï¬uence on other re- lated NLP tasks such as dialog generation (Vinyals and Le, 2015) and automatic summarization (Rush et al., 2015).
sors, which have been used in traditional statisti- cal machine translation (SMT) systems. Moses1, a de-facto standard toolkit for SMT, implements a reasonably useful pre- and postprocessor. How- ever, it is built upon hand-crafted and language de- pendent rules whose effectiveness for NMT has not been proven. these tools are mainly designed for European languages where words are segmented with whitespaces. To train NMT systems for non-segmented languages such as Chinese, Korean and Japanese, we need to run word segmenters independently. Such language- dependent processing also makes it hard to train multilingual NMT models (Johnson et al., 2016), as we have to carefully manage the conï¬gurations of pre- and postprocessors per language, while the internal deep neural architectures are language- independent.
As NMT approaches are standardized and mov- ing forward to more language-agnostic architec- tures, it is becoming more important for the NLP community to develop a simple, efï¬cient, repro- ducible and language independent pre- and post- processor that can easily be integrated into Neural Network-based NLP systems, including NMT.
In this demo paper, we describe SentencePiece, a simple and language independent text tokenizer and detokenizer mainly for Neural Network- based text generation systems where the size of vocabulary is predetermined prior to the Neu- ral model training. SentencePiece implements two subword segmentation algorithms, byte-pair- encoding (BPE) (Sennrich et al., 2016) and uni- gram language model (Kudo, 2018), with the ex- tension of direct training from raw sentences. Sen- tencePiece enables building a purely end-to-end system that does not depend on any language- speciï¬c processing.
Although NMT can potentially perform end-to- end translation, many NMT systems are still re- lying on language-dependent pre- and postproces-
1http://www.statmt.org/moses/
8 1 0 2
g u A 9 1 ] L C . s c [
1 v 6 2 2 6 0 . 8 0 8 1 : v i X r a
% spm_train ââinput=data/input.txt ââmodel_preï¬x=spm ââvocab_size=1000 % echo "Hello world." | spm_encode ââ model=spm.model _He ll o _world . % echo "Hello world." | spm_encode ââmodel=spm.model ââoutput_format=id 151 88 21 887 6 % echo "_He ll o _world ." | spm_decode ââmodel=spm.model Hello world. % echo "151 88 21 887 6" | spm_decode ââmodel=spm.model ââinput_format=id Hello world.
Figure 1: Commandline usage of SentencePiece
# 2 System Overview
SentencePiece comprises four main components: Normalizer, Trainer, Encoder, and Decoder. Normalizer is a module to normalize semantically- equivalent Unicode characters into canonical forms. Trainer trains the subword segmentation model from the normalized corpus. We specify a type of subword model as the parameter of Trainer. Encoder internally executes Normalizer to nor- malize the input text and tokenizes it into a sub- word sequence with the subword model trained by Trainer. Decoder converts the subword sequence into the normalized text.
The roles of Encoder and Decoder correspond to preprocessing (tokenization) and postprocess- ing (detokenization) respectively. However, we call them encoding and decoding as SentencePiece manages the vocabulary to id mapping and can di- rectly convert the text into an id sequence and vice versa. Direct encoding and decoding to/from id sequences are useful for most of NMT systems as their input and output are id sequences. end-to-end
example (spm_train), of encoding decoding and (spm_decode). We can see that the input text is reversibly converted through spm_encode and spm_decode.
# 3 Library Design
This section describes the design and implementa- tion details of SentencePiece with command line and code snippets.
# 3.1 Lossless Tokenization
The following raw and tokenized sentences are an example of language-dependent preprocessing.
Raw text: Hello world. ⢠Tokenized: [Hello] [world] [.]
One observation is that the raw text and tokenized sequence are not reversibly convertible. The in- formation that no space exists between âworldâ and â.â is not kept in the tokenized sequence. Detokenization, a process to restore the original raw input from the tokenized sequence, has to be language-dependent due to these irreversible oper- ations. For example, while the detokenizer usually puts whitespaces between the primitive tokens in most European languages, no spaces are required in Japanese and Chinese. ⢠Raw text: [ããã«ã¡ã¯ä¸çã] (Hello world.) ⢠Tokenized: [ããã«ã¡ã¯] [ä¸ç] [ã]
Such language speciï¬c processing has usually been implemented in manually crafted rules, which are expensive to write and maintain.
SentencePiece implements the Decoder as an inverse operation of Encoder, i.e.,
Decode(Encode(Normalize(text))) =
Normalize(text).
We call this design lossless tokenization, in which all the information to reproduce the normalized text is preserved in the encoderâs output. The ba- sic idea of lossless tokenization is to treat the in- put text just as a sequence of Unicode characters. Even whitespace is handled as a normal symbol. For the sake of clarity, SentencePiece ï¬rst escapes the whitespace with a meta symbol _ (U+2581), and tokenizes the input into an arbitrary subword sequence, for example:
Raw text: Hello_world. ⢠Tokenized: [Hello] [_wor] [ld] [.]
As the whitespace is preserved in the tokenized text, we can detokenize the tokens without any am- biguities with the following Python code.
detok = ââ.join(tokens).replace(â_â, â â)
It should be noted that subword-nmt2 adopts a different representation for subword units. It fo- cuses on how the word is segmented into subwords and uses @@ as an intra-word boundary marker.
2https://github.com/rsennrich/ subword-nmt
# ⢠Tokenized: [Hello] [wor] [@@ld] [@@.]
This representation can not always perform loss- less tokenization, as an ambiguity remains in the treatment of whitespaces. More speciï¬cally, it is not possible to encode consecutive whitespaces with this representation.
# 3.2 Efï¬cient subword training and segmentation
Existing subword segmentation tools train sub- word models from pre-tokenized sentences. Such pre-tokenization was introduced for an efï¬cient subword training (Sennrich et al., 2016). However, we can not always assume that pre-tokenization is available, especially for In addition, pre- non-segmented languages. tokenization makes it difï¬cult to perform lossless tokenization.
SentencePiece employs several speed-up tech- niques both for training and segmentation to make lossless tokenization with a large amount of raw data. For example, given an input sentence (or word) of length N , BPE segmentation requires O(N 2) computational cost when we naively scan the pair of symbols in every iteration. Sentence- Piece adopts an O(N log(N )) algorithm in which the merged symbols are managed by a binary heap (priority queue). In addition, the training and seg- mentation complexities of unigram language mod- els are linear to the size of input data.
# 3.3 Vocabulary id management
SentencePiece manages the vocabulary to id map- ping to directly convert the input text into an id sequence and vice versa. The size of vocabulary is speciï¬ed with the --vocab_size=<size> ï¬ag of spm_train. While subword-nmt spec- iï¬es the number of merge operations, Sentence- Piece speciï¬es the ï¬nal size of vocabulary, as the number of merge operations is a BPE speciï¬c pa- rameter and can not be applicable to other segmen- tation algorithms, e.g., unigram language model (Kudo, 2018).
SentencePiece reserves vocabulary ids for special meta symbols, e.g., unknown symbol (<unk>), BOS (<s>), EOS (</s>) and padding (<pad>). Their actual ids are conï¬gured with command line ï¬ags. We can also deï¬ne custom meta symbols to encode contextual information as virtual tokens. Examples include the language- indicators, <2ja> and <2de>, for multilingual
U+41 U+302 U+300 <tab> U+1EA6 U+41 U+302 U+301 <tab> U+1EA4 ...
Figure 2: Custom normalization rule in TSV
models (Johnson et al., 2016).
# 3.4 Customizable character normalization
Character normalization is an important prepro- cessing step for handling real world text, which consists of semantically-equivalent Unicode char- acters. For example, Japanese fullwidth Latin characters can be normalized into ASCII Latin characters. Lowercasing is also an effective nor- malization, depending on the application.
Character normalization has usually been im- plemented as hand-crafted rules. Recently, Uni- code standard Normalization Forms, e.g., NFC and NFKC, have been widely used in many NLP applications because of their better reproducibility and strong support as Unicode standard.
By default, SentencePiece normalizes the in- put text with the Unicode NFKC normalization. The normalization rules are speciï¬ed with the --normalization_rule_name=nfkc ï¬ag of spm_train. The normalization in Senten- cepiece is implemented with string-to-string map- ping and leftmost longest matching. The normal- ization rules are compiled into a ï¬nite state trans- ducer (Aho-Corasick automaton) to perform an ef- ï¬cient normalization3.
SentencePiece supports custom normalization rules deï¬ned as a TSV ï¬le. Figure 2 shows an example TSV ï¬le. In this example, the Unicode sequence [U+41 U+302 U+300] is converted into U+1EA64. When there are ambiguities in the conversion, the longest rule is applied. User deï¬ned TSV ï¬les are speciï¬ed with the --normalization_rule_tsv=<file> ï¬ag of spm_train. Task-speciï¬c rules can be deï¬ned by extending the default NFKC rules provided as a TSV ï¬le in SentencePiece package.
# 3.5 Self-contained models
Recently, many researchers have provided pre- trained NMT models for better reproduciblity of
3The original NFKC normalization requires CCC (Canon- ical Combining Class) reordering, which is hard to model in a ï¬nite state transducer. SentencePiece does not handle the full CCC reordering and only implements a subset of NFKC normalization.
4Note that tabs are used as the delimiter for source and target sequence and spaces are used as the delimiter for indi- vidual characters.
their experimental results. However, it is not al- ways stated how the data was preprocessed. (Post, 2018) reported that subtle differences in prepro- cessing schemes can widely change BLEU scores. Even using the Moses toolkit, it is not guaran- teed to reproduce the same settings unless the con- ï¬gurations of Moses (e.g., version and command line ï¬ags) are clearly speciï¬ed. Strictly speaking, NFKC normalization may yield different results depending on the Unicode version.
Ideally, all the rules and parameters for prepro- cessing must be embedded into the model ï¬le in a self-contained manner so that we can reproduce the same experimental setting as long as we are using the same model ï¬le.
The SentencePiece model is designed to be purely self-contained. The model ï¬le includes not only the vocabulary and segmentation parameters, but also the pre-compiled ï¬nite state transducer for character normalization. The behavior of Senten- cePiece is determined only by the model ï¬le and has no external dependencies. This design guaran- tees a perfect reproducibility as well as allowing to distribute the SentencePiece model ï¬le as part of an NMT model. In addition, the developers of SentencePiece can reï¬ne the (default) normaliza- tion rules without having to worry about breaking existing preprocessing behaviors.
The SentencePiece model is stored as a binary wire format Protocol buffer5, a platform neutral and extensible mechanism for serializing struc- tured data. Protocol buffers help to safely serialize structured data while keeping backward compati- bility as well as extensibility.
# 3.6 Library API for on-the-ï¬y processing
Text preprocessing is usually considered as ofï¬ine processing. Prior to the main NMT training, raw input is preprocessed and converted into an id se- quence with a standalone preprocessor.
Such off-line preprocessing has two problems. First, standalone tools are not directly integrated into the user-facing NMT applications which need to preprocess user input on-the-ï¬y. Second, off- line preprocessing makes it hard to employ sub- sentence level data augmentation and noise injec- tion, which aim at improving the accuracy and ro- bustness of the NMT models. There are several studies to inject noise to input sentences by ran-
5https://developers.google.com/ protocol-buffers/
#include <sentencepiece_processor.h> #include <sentencepiece_trainer.h> SentencePieceTrainer::Train( "--input=input.txt " "--model_prefix=spm " "--vocab_size=1000"); SentencePieceProcessor sp; sp.Load("spm.model"); std::vector<std::string> pieces; sp.Encode("Hello world.", &pieces); std::vector<int> ids; sp.Encode("Hello world.", &ids); std::string text; sp.Decode({151, 88, 21, 887, 6}, &text);
Figure 3: C++ API usage (The same as Figure 1.)
import sentencepiece as spm
params = (â--input=input.txt â â--model_prefix=spm â â--vocab_size=1000â) spm.SentencePieceTrainer.Train(params) sp = spm.SentencePieceProcessor() sp.Load(âspm.modelâ) print(sp.EncodeAsPieces(âHello world.â)) print(sp.EncodeAsIds(âHello world.â)) print(sp.DecodeIds([151, 88, 21, 887, 6]))
Figure 4: Python API usage (The same as Figure 1.)
domly changing the internal representation of sen- tences. (Kudo, 2018) proposes a subword regu- larization that randomly changes the subword seg- mentation during NMT training. (Lample et al., 2017; Artetxe et al., 2017) independently pro- posed a denoising autoencoder in the context of sequence-to-sequence learning, where they ran- domly alter the word order of the input sentence and the model is trained to reconstruct the origi- nal sentence. It is hard to emulate this dynamic sampling and noise injection only with the off-line processing.
SentencePiece not only provides a standalone command line tool for off-line preprocessing but supports a C++, Python and Tensorï¬ow library API for on-the-ï¬y processing, which can easily be integrated into existing NMT frameworks. Fig- ures 3, 4 and 5 show example usages of the C++, Python and TensorFlow API6. Figure 6 presents example Python code for subword regularization where one subword sequence is sampled accord- ing to the unigram language model. We can ï¬nd that the text âNew Yorkâ is tokenized differently
6As the Python and TensorFlow wrappers call the native C++ API, there is no performance drop in their interfaces.
import tensorï¬ow as tf import tf_sentencepiece as tfs model = tf.gï¬le.GFile(âspm.modelâ, ârbâ).read() input_text = tf.placeholder(tf.string, [None]) ids, lens = tfs.encode(input_text, model_proto=model, out_type=tf.int32) output_text = tfs.decode(ids, lens, model_proto=model) with tf.Session() as sess: text = [âHello world.â, âNew Yorkâ] ids_, lens_, output_text_ = sess.run([ids, lens, output_text ], feed_dict={input_text:text})
Figure 5: TensorFlow API usage The SentencePiece model (model proto) is an attribute of the TensorFlow operation and embedded into the TensorFlow graph so the model and graph become purely self-contained.
>>> sp.Load(âspm.modelâ) >>> for n in range(5): ... sp.SampleEncodeAsPieces(âNew Yorkâ, â1, 0.1) [â_â, âNâ, âeâ, âwâ, â_Yorkâ] [â_â, âNewâ, â_Yorkâ] [â_â, âNewâ, â_Yâ, âoâ, ârâ, âkâ] [â_â, âNewâ, â_Yorkâ] [â_â, âNewâ, â_Yorkâ]
Figure 6: Subword sampling with Python API
on each SampleEncodeAsPieces call. Please see (Kudo, 2018) for the details on subword regu- larization and its sampling hyperparameters.
# 4 Experiments
# 4.1 Comparison of different preprocessing
We validated the performance of the different preprocessing on English-Japanese translation of Wikipedia articles, as speciï¬ed by the Kyoto Free Translation Task (KFTT) 7. The training, develop- ment and test data of KFTT consist of 440k, 1166 and 1160 sentences respectively.
We used GNMT (Wu et al., 2016) as the imple- mentation of the NMT system in our experiments. We generally followed the settings and training procedure described in (Wu et al., 2016), however, we changed the node and layer size of LSTM to be 512 and 6 respectively.
A word model is used as a baseline system. We compared to SentencePiece (unigram lan- guage model) with and without pre-tokenization. SentencePiece with pre-tokenization is essentially the same as the common NMT conï¬guration with subword-nmt. SentencePiece without pre- tokenization directly trains the subword model from raw sentences and does not use any exter- nal resources. We used the Moses tokenizer8 and
7http://www.phontron.com/kftt 8http://www.statmt.org/moses/
# vocab. setting (source/target) Lang pair 80k/80k jaâen Word model (baseline) 8k (shared) SentencePiece SentencePiece w/ pre-tok. 8k (shared) Word/SentencePiece SentencePiece/Word 80k/8k 8k/80k 80k/80k enâja Word model (baseline) 8k (shared) SentencePiece SentencePiece w/ pre-tok. 8k (shared) Word/SentencePiece SentencePiece/Word 80k/8k 8k/80k BLEU 28.24 29.55 29.85 27.24 29.14 20.06 21.62 20.86 21.41 19.94
Table 1: Translation Results (BLEU(%)) KyTea9 for English and Japanese pre-tokenization respectively. The same tokenizers are applied to the word model.
We used the case-sensitive BLEU score (Pap- ineni et al., 2002) as an evaluation metric. As the output sentences are not segmented in Japanese, we segmented them with KyTea for before calcu- lating BLEU scores.
Table 1 shows the experimental results. First, as can be seen in the table, subword segmenta- tions with SentencePiece consitently improve the BLEU scores compared to the word model. This result is consistent with previous work (Sennrich et al., 2016). Second, it can be seen that the pre- tokenization is not always necessary to boost the In Japanese to English, the im- BLEU scores. provement is marginal and has no signiï¬cant dif- ference. In English to Japanese, the BLEU score is degraded with pre-tokenization.
We can ï¬nd larger improvements in BLEU when 1) SentencePiece is applied to Japanese, and 2) the target sentence is Japanese. As Japanese is a non-segmented language, pre-tokenization acts as a strong constraint to determine the ï¬nal vo- cabulary. It can be considered that the positive ef- fects of unsupervised segmentation from raw input worked effectively to ï¬nd the domain-speciï¬c vo- cabulary in Japanese.
# 4.2 Segmentation performance
Table 2 summarizes the training and segmentation performance of various conï¬gurations.
We can see that the training and segmentation speed of both SentencePiece and subword-nmt is almost comparable on English data set regardless of the choice of pre-tokenization. This is expected, as English is a segmented language and the search space for the vocabulary extraction is largely re- stricted. On the other hand, SentencePiece shows
9http://www.phontron.com/kytea
time (sec.) Task Train Seg. Tool subword-nmt SentencePiece subword-nmt SentencePiece subword-nmt SentencePiece subword-nmt SentencePiece Pre-tok. yes yes no no yes yes no no Japanese 56.9 10.1 528.0 217.3 23.7 8.2 216.2 5.9 24.6 English 54.1 16.8 94.7 21.8 28.6 20.3 36.1 20.3 15.8 Pre-tokenizaion KyTea(ja)/Moses(en)
Table 2: Segmentation performance. KFTT corpus (440k sentences) is used for evaluation. Experiments are executed on Linux with Xeon 3.5Ghz processors. The size of vocabu- lary is 16k. Moses and KyTea tokenizers are used for English and Japanese respectively. Note that we have to take the time of pre-tokenization into account to make a fair comparison with and without pre-tokenization. Because subword-nmt is based on BPE, we used the BPE model in SentencePiece. We found that BPE and unigram language models show almost comparable performance.
larger performance improvements when applying it to raw Japanese data (w/o pre-tok). The seg- mentation speed of SentencePiece is about 380 times faster than that of subword-nmt in this set- ting. This result strongly supports our claim that SentencePiece is fast enough to be applied to raw data and the pre-tokenization is not always neces- sary. Consequently, SentencePiece helps to build a purely data-driven and language-independent sys- tem. The segmentation speed of SentencePiece is around 21k and 74k sentences/sec. in English and Japanese respectively, which is fast enough to be executed on-the-ï¬y.
# 5 Conclusions
In this paper, we introduced SentencePiece, an open-source subword tokenizer and detokenizer designed for Neural-based text processing. Sen- tencePiece not only performs subword tokeniza- tion, but directly converts the text into an id se- quence, which helps to develop a purely end-to- end system without replying on language speciï¬c resources. The model ï¬le of SentencePiece is de- signed to be self-contained to guarantee perfect reproducibility of the normalization and subword segmentation. We hope that SentencePiece will provide a stable and reproducible text processing tool for production use and help the research com- munity to move to more language-agnostic and multilingual architectures.
# References
Mikel Artetxe, Gorka Labaka, Eneko Agirre, Unsupervised arXive preprint and Kyunghyun Cho. 2017. neural machine translation. arXiv:1710.11041.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly arXiv preprint learning to align and translate. arXiv:1409.0473.
Michael Denkowski and Graham Neubig. 2017. Stronger baselines for trustable results in neural ma- chine translation. Proc. of Workshop on Neural Ma- chine Translation.
Melvin Johnson, Mike Schuster, et al. 2016. Googleâs multilingual neural machine translation system: arXiv preprint enabling zero-shot translation. arXiv:1611.04558.
Taku Kudo. 2018. Subword regularization: Improv- ing neural network translation models with multiple subword candidates. In Proc. of ACL.
and Lample, MarcâAurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXive preprint arXiv:1711.00043.
Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- In Proc of based neural machine translation. EMNLP.
Toshiaki Nakazawa, Shohei Higashiyama, et al. 2017. Overview of the 4th workshop on asian translation. In Proceedings of the 4th Workshop on Asian Trans- lation (WAT2017).
Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proc. of ACL.
Matt Post. 2018. A call for clarity in reporting bleu scores. arXiv preprint arXiv:1804.08771.
Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proc. of EMNLP.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. of ACL.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXive preprint arXiv:1706.03762.
Oriol Vinyals and Quoc V. Le. 2015. A neural conver- sational model. In ICML Deep Learning Workshop.
Yonghui Wu, Mike Schuster, et al. 2016. Googleâs neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. | {
"id": "1804.08771"
} |
1808.05492 | Metric Learning for Novelty and Anomaly Detection | When neural networks process images which do not resemble the distribution
seen during training, so called out-of-distribution images, they often make
wrong predictions, and do so too confidently. The capability to detect
out-of-distribution images is therefore crucial for many real-world
applications. We divide out-of-distribution detection between novelty detection
---images of classes which are not in the training set but are related to
those---, and anomaly detection ---images with classes which are unrelated to
the training set. By related we mean they contain the same type of objects,
like digits in MNIST and SVHN. Most existing work has focused on anomaly
detection, and has addressed this problem considering networks trained with the
cross-entropy loss. Differently from them, we propose to use metric learning
which does not have the drawback of the softmax layer (inherent to
cross-entropy methods), which forces the network to divide its prediction power
over the learned classes. We perform extensive experiments and evaluate both
novelty and anomaly detection, even in a relevant application such as traffic
sign recognition, obtaining comparable or better results than previous works. | http://arxiv.org/pdf/1808.05492 | Marc Masana, Idoia Ruiz, Joan Serrat, Joost van de Weijer, Antonio M. Lopez | cs.CV | Accepted at BMVC 2018, 10 pages main article and 4 pages
supplementary material | null | cs.CV | 20180816 | 20180816 | M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION 1
8 1 0 2
# Metric Learning for Novelty and Anomaly Detection
g u A 6 1
Computer Vision Center Universitat Autònoma de Barcelona Bellaterra, Spain
# Marc Masana mmasana@cvc.uab.cat
# Idoia Ruiz iruiz@cvc.uab.cat
]
# Joan Serrat joans@cvc.uab.cat
# V C . s c [
# eee
# Joost van de Weijer joost@cvc.uab.cat
# Antonio M. Lopez antonio@cvc.uab.cat
1 v 2 9 4 5 0 . 8 0 8 1 : v i X r a
# Abstract
When neural networks process images which do not resemble the distribution seen during training, so called out-of-distribution images, they often make wrong predictions, and do so too conï¬dently. The capability to detect out-of-distribution images is there- fore crucial for many real-world applications. We divide out-of-distribution detection between novelty detection âimages of classes which are not in the training set but are related to thoseâ, and anomaly detection âimages with classes which are unrelated to the training set. By related we mean they contain the same type of objects, like digits in MNIST and SVHN. Most existing work has focused on anomaly detection, and has addressed this problem considering networks trained with the cross-entropy loss. Differ- ently from them, we propose to use metric learning which does not have the drawback of the softmax layer (inherent to cross-entropy methods), which forces the network to divide its prediction power over the learned classes. We perform extensive experiments and evaluate both novelty and anomaly detection, even in a relevant application such as trafï¬c sign recognition, obtaining comparable or better results than previous works.
# Introduction
1
Deep neural networks have obtained excellent performance for many applications. However, one of the known shortcomings of these systems is that they can be overly conï¬dent when presented with images (and classes) which were not present in the training set. Therefore, a desirable property of these systems would be the capacity to not produce an answer if an input sample belongs to an unknown class, that is, a class for which it has not been trained. The ï¬eld of research which is dedicated to this goal is called out-of-distribution detection [10, 17, 18]. Performing out-of-distribution detection is important not only to avoid classiï¬cation errors but also as the ï¬rst step towards lifelong learning systems [3]. Such systems would detect out-of-distribution samples in order to later update the model accordingly [13, 20].
© 2018. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.
2 M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION
The problem of out-of-distribution detection has also been called one-class classiï¬ca- tion, novelty and anomaly detection [23]. More recently, associated to deep neural network classiï¬ers, some works refer to it as open-set recognition [1]. In this paper, we distinguish two cases of out-of-distribution which we believe are quite different: we propose to term as novelty an image from a class different from those contained in a dataset from which to train, but that bears some resemblance to them, for instance because it shows the same kind of object from untrained points of view. This is a very important problem in many computer vision applications. For example, imagine a system that classiï¬es trafï¬c signs on-board a car and takes automatic decisions accordingly. It can happen that it ï¬nds a class of local trafï¬c signs which was not included in the training set, and this must be detected to avoid taking wrong decisions. We reserve the word anomaly for completely unrelated samples, like dif- ferent type of objects, images from another unrelated dataset, or background patches in the case of trafï¬c sign classiï¬cation. This is also relevant from the point of view of commercial applications. In fact, most previous works focus on anomaly detection. Novelty detection remains rather unexplored. To the best of our knowledge only [26] and [18] perform some intra-dataset out-of-distribution detection experiments. The three previous works closest to ours [10, 17, 18], revolve around one idea: given a discriminative neural network model, use the output probabilities to take the decision of seen/unseen class. These networks are opti- mized to distinguish between the classes present in the training set, and are not required to explicitly model the marginal data distribution. As a consequence, at testing time the system cannot assess the probability of the presented data, complicating the assessment of novelty cases.
Here we explore a completely different approach: to learn an embedding where one can use Euclidean distance as a measure of âout-of-distributionessâ. We propose a loss that learns an embedding where samples from the same inâdistribution class form clusters, well sepa- rated from the space of other inâdistribution classes and also from out-of-distribution sam- ples. The contributions to the problem of out-of-distribution detection presented in this paper are the following. First, the use of metric learning for out-of-distribution detection, instead of doing it on the basis of the cross-entropy loss and corresponding softmax scores. Second, we distinguish between novelty and anomaly detection and show that research should focus on the more challenging problem of novelty detection. Third, we obtain comparable or better results than state-of-the-art in both anomaly and novelty detection. Last, in addition to the experiments with benchmark datasets in order to compare with previous works, we address also a real-world classiï¬cation problem, trafï¬c sign recognition, for which we obtain good detection and accuracy results.
# 2 Related work
Our paper is related to anomaly detection in its different meanings. Also to open-set recog- nition, as one of the most important applications of out-of-distribution detection. And ï¬nally to metric learning, the base of our approach. In the following we brieï¬y review the most related works in each of these areas. Out-of-distribution detection should not be confused with another desirable property of machine learning systems, namely the reject option, that is, the ability to decide not to classify an input if the conï¬dence on any of the labels is too weak (see for example [7] and references therein). The difference is that in the latter case it is assumed that the sample does belong to some class present during training.
M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION 3
Anomaly and novelty detection. Also known as out-of-distribution detection, it aims at identifying inputs that are completely different from or unknown to the original data distri- bution used for training [23]. In [2], they perform novelty detection by learning a distance in an embedding. It proposes a Kernel Null Foley-Sammon transform that aims at projecting all the samples of each in-distribution class into a single point in a certain space. Consequently, novelty detection can be performed by thresholding the distance of a test sample to the near- est of the collapsed class representations. However, they employ handcrafted features, thus optimizing only the transform parameters and not the representation, like in the presently dominating paradigm of deep learning.
Although Deep Neural Networks (DNNs) have established as state-of-the-art on many computer vision classiï¬cation and detection tasks, overconï¬dence in the probability score of such networks is a common problem. DNNs capable of detecting lots of objects with ï¬ne accuracy can still be fooled by predicting new never-seen objects with high conï¬dence. This problem can be deï¬ned by the ability of the network to decide if a new test sample belongs to the in-distribution (i.e. from a class or from the data used to train the classiï¬er) or to an out-of-distribution.
In [10], they show that DNNs trained on MNIST [16] images can frequently produce high conï¬dence guesses (+90%) on random noise images. They propose a baseline for evaluation of out-of-distribution detection methods and show that there is room for future research to improve that baseline. Their baseline assumes that out-of-distribution samples will have a more distributed conï¬dence among the different classes than an in-distribution sample. Recently, in [18] the authors propose ODIN, a simple method applied to DNNs that uses a softmax layer for classiï¬cation and does not need the network to be retrained. The key idea is to use temperature scaling and input pre-processing, which consists on introducing small perturbations in the direction of the gradients for the input images.
In [17] they diverge from the other threshold-based methods by proposing a new train- ing method. They add two loss terms that force the out-of-distribution samples to be less conï¬dent and improve the in-distribution samples respectively. In both these works, trained DNNs follow a typical softmax cross-entropy classiï¬cation loss, where each dimension on the output embedding is assigned to measure the correlation with a speciï¬c class from that task. Other than previous work which focuses on networks trained with the cross-entropy, our work studies out-of-distribution for networks which are optimized for metric learning. These networks do not have the normalization problem which is introduced by the softmax layer, and are therefore expected to provide better estimates of out-of-distribution data. One last work is still worth to mention in the context of DNNs. In [26] the authors propose to discern between seen and unseen classes through the dimensions of certain layer activations which have extreme values. They achieve a good accuracy on ImageNet but only when the number of selected classes is very small.
Open Set Recognition. It shares with out-of-distribution detection the goal of discriminat- ing samples from two different distributions. But it places the emphasis on how to apply it to improve the classiï¬er capabilities, so that it can still perform well when the input may con- tain samples not belonging to any of those in the training set. One of the ï¬rst works is [24], which formalized the problem as one of (open) risk minimization in the context of large mar- gin classiï¬ers, producing what they called a one-versus-set Support Vector Machine. More recently, a method to adapt deep neural networks to handle open set recognition has been proposed in [1]. The key idea is to replace the conventional softmax layer in a network by a so called openmax layer. It takes the N activations (being N the number of classes) of the
4 M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION
penultimate layer of the network and estimates the probability for each training class, like in softmax, plus that of not being a sample of the training data. This later is done by ï¬tting a Weilbull density function to the distance between the mean activation value for each class and those of the training samples. We see thus that distance between last layer activations or features play a key role. This is coincident with our method, only that features in their case are learned through a loss function similar to cross-entropy whereas we explicitly will learn a distance such that in-distribution samples cluster around one center per class and out-of-distribution samples are pushed away from all these centers. Metric Learning. Several computer vision tasks such as retrieval, matching, veriï¬cation, even multi-class classiï¬cation, share the need of being able to measure the similarity be- tween pairs of images. Deriving such a measure from data samples is known as metric learn- ing [15]. Two often cited seminal works on this subject through neural networks are [4, 9], where the Siamese architecture was proposed for this purpose. Differently from classiï¬ca- tion networks, the goal is to learn rather than a representation amenable for classiï¬cation, one for measuring how similar two instances are in terms of the Euclidean distance. Another popular architecture is triplet networks [11]. For both of them many authors have realized that mining the samples of the training set in order to ï¬nd out difï¬cult or challenging pairs or triplets is important in order to converge faster or to better minima [25, 27, 28]. Like them, we have also resorted to a mining strategy in order to obtain good results in the task of out-of-distribution detection.
# 3 Metric Learning for Out-of-Distribution
Most recent works on out-of-distribution detection are based on supervisely trained neural networks which optimize the cross-entropy loss. In these cases the network output has a direct correspondence with the solution of the task, namely a probability for each class. However, the representation of the output vector is forced to always sum up to one. This means that when the network is shown an input which is not part of the training distribution, it will still give probabilities to the nearest classes so that they sum up to one. This phenomena has led to the known problem of neural networks being too overconï¬dent about content that they have never seen [10].
Several works have focused on improving the accuracy of the conï¬dence estimate of methods based on the cross entropy; adapting them in such a way that they would yield lower conï¬dences for out-of-distribution [10, 17, 18]. We hypothesize that the problem of the overconï¬dent network predictions is inherent to the used cross-entropy, and therefore propose to study another class of network objectives, namely those used for metric learning. In metric learning methods, we minimize an objective which encourages images with the same label to be close and images with different labels to be at least some margin apart in an embedding space. These networks do not apply a softmax layer, and therefore are not forced to divide images which are out-of-distribution over the known classes.
# 3.1 Metric Learning
For applications such as image retrieval, images are represented by an embedding in some feature space. Images can be ordered (or classiï¬ed) according to the distance to other images in that embedding space. It has been shown that using metric learning methods to improve the embeddings could signiï¬cantly improve their performance [8]. The theory of metric
M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION 5
learning was extended to deep neural networks by Chopra et al. [4]. They proposed to pass images through two parallel network branches which share the weights (also called a Siamese network). A loss considers both embeddings, and adapts the embedding in such a way that similar classes are close and dissimilar classes are far in that embedding space.
Traditionally these networks have been trained with contrastive loss [9], which is formu- lated as:
L(x1, x2, y ; W ) = 1 2 (1 â y) D2 w + 1 2 y (max (0, m â Dw))2 , (1)
where Dw = || fW (x1) â fW (x2)||2 is the distance between the embeddings of images x1 and x2 computed by network fW with weights W . The label y = 0 indicates that the two images are from the same class, and y = 1 is used for images from different classes. The loss there- fore minimizes the distance between images of the same class, and increases the distance of images of different classes until this distance surpasses the margin m. Several other losses have been proposed for Siamese networks [11, 25, 28, 31, 32] but in this paper we will evaluate results with the contrastive loss to provide a simple baseline on which to improve.
# 3.2 Out-of-Distribution Mining (ODM)
In the previous section, we considered that during training only examples of in-distribution data are provided. However, some methods consider the availability of some out-of-distribu- tion data during training [17]. This is often a realistic assumption since it is relatively easy to obtain data from other datasets or create out-of-distribution examples, such as samples generated with Gaussian noise. However, it has to be noted that the out-of-distribution data is used unlabeled, and is of a different distribution from the out-of-distribution used at testing. The objective is to help the network be less conï¬dent about what it does not know. Therefore, noise or even unlabeled data can be used to strengthen the knowledge boundaries of the network.
We propose to adapt the contrastive loss to incorporate the out-of-distribution data:
L(x1, x2, y ; W ) = 1 2 (1 â y) zD2 w + 1 2 yz (max (0, m â Dw))2 , (2)
where we have introduced a label z which is zero when both images are from the out-of- distribution and one otherwise. This loss is similar to Eq. 1, but with the difference that in case of a pair of images where one is an out-of-distribution image (z = 1, y = 1) they are encouraged to be at least m distance apart. Note that we do not enforce the out-of-distribution images to be close, since when z = 0 the pair does not contribute to the loss. It is important to make sure that there are no pairs of out-of-distribution samples so that they are not treated as a single new class and forced to be grouped into a single cluster.
In practice, we have not implemented a two-branches Siamese network but followed recent works [19, 30] which devise a more efï¬cient approach to minimize losses traditionally computed with Siamese networks. The idea is to sample a minibatch of images which we forward through a single branch until the embedding layer. We then sample pairs from them in the loss layer and backpropagate the gradient. This allows the network to be deï¬ned with only one copy of the weights instead of having two branches with shared weights. At the same time, computing the pairs after the embedding also allows to use any subgroup of possible pairs among all the images from the minibatch. When computing the pairs we make sure that pairs of out-of-distribution samples are not used. As a result z will never be 0 and we can in practice directly apply Eq. 1 instead of Eq. 2.
6 M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION
# 3.3 Anomaly and Novelty detection
In this paper we distinguish between two categories of out-of-distribution data:
Novelty: samples that share some common space with the trained distribution, which are usually concepts or classes which the network could include when ex- panding its knowledge. If you train a network specialized in different dog breeds, an example would be a new dog breed that was not in the training set. Further- more, if the classes are more complex, some novelty out-of-distribution could be new viewpoints or modiï¬cations of an existing learned class. Anomaly: samples that are not related with the trained distribution. In this cat- egory we could include background images, Gaussian noise, or unrelated classes to the trained distribution (i.e. SVHN would be a meaningful anomaly for CIFAR- 10). Since anomalies are further from the in-distribution than novelties these are expected to be easier to detect.
To further illustrate the difference between novelties and anomalies consider the follow- ing experiment. We train a LeNet on the classes 2, 6 and 7 from the MNIST dataset [16] under the same setup for both cross-entropy (CE) and contrastive (ML) losses. We also train it with our proposed method which introduces out-of-distribution mining during train- ing (ODM). We use classes 0, 3, 4, and 8 as those seen out-of-distribution samples during training. Then, we visualize the embeddings for different out-of-distribution cases from closer to further resemblance to the train set : 1) similar numbers 5, 9 and 1 as novelty, 2) SVHN [22] and CIFAR-10 [14] as anomalies with a meaning, and 3) the simpler Gaussian noise anomalies.
In Figure 1 we show the 3-dimensional output embedding spaces for CE, ML and ODM in rows 1, 2 and 3 respectively. As expected, the CE space is bounded inside the shown trian- gle, since the three dimensions of the output (the number of classes) have to always sum up to 1. For SVHN, CE correctly assigns low conï¬dence for all classes. However, for CIFAR- 10, Gaussian noise and Novelty it increasingly is more conï¬dent about the probability of an out-of-distribution image to be classiï¬ed as an in-distribution one. In the case of ML, all anomalies seem to be more separated from the in-distributions for each class, and only the Novelty is still too close to the cluster centers. With the introduction of out-of-distribution samples during training, ODM shows how out-of-distribution images are kept away from the in-distribution, allowing the network to be conï¬dent about what it is capable of classi- fying and what not. We provide quantitative performance results for this experiment in the Supplementary Material.
In conclusion, this experiment shows that there is a difference between novel and anoma- ly out-of-distribution samples for both cross-entropy and metric learning approaches, stress- ing that those have to be approached differently. Furthermore, the overconï¬dence of the cross-entropy methods is more clear on novelty detection cases, and among the anomaly cases, the Gaussian noise seems to be the one with more overconï¬dent cases. In those cases, a metric learning approach presents more beneï¬ts when doing out-of-distribution detection. It allows for the output embedding space to be more representative of the learned classes around the class centers, and naturally has the ability to give low scores to unseen data. Fi- nally, when some out-of-distribution samples are shown during training, the network is more capable of adapting the embedding space to be more separable against anomaly data.
M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION 7
â.
â.
Figure 1: Embedding spaces for CE, ML and ODM (rows respectively) being tested on in- dist 2, 6, 7 of MNIST (red, blue, purple), and out-dist 5, 9, 1 of MNIST (green), SVHN (yellow), CIFAR-10 (orange), and Gaussian noise (grey). Best viewed in color.
# 4 Results
To assess the performance of the proposed method, we ï¬rst compare with existing state- of-the-art out-of-distribution detection methods on SVHN [22] and CIFAR-10 [14] datasets trained on VGGnet [29] and evaluated with the metrics provided in [17]. Furthermore, as a more application-based benchmark, we propose to compare cross-entropy based strategies and metric learning strategies on the Tsinghua dataset [35] of trafï¬c signs. In this second set of experiments we use our own implementation of the metrics deï¬ned in [18]. More about the metrics used can be found in the Supplementary Material.1
# 4.1 Comparison with state-of-the-art
We compare our method with two very recent state-of-the-art methods. One of them uses a conï¬dence classiï¬er and an adversarial generator (CC-AG) [17] and like ours uses out- of-distribution images during training. The second method is ODIN [18] which does not consider out-of-distribution images during training. In [17] they compare CC-AG with ODIN [18], and show that they can perform much better in the novelty case but similar for the anomaly cases.
We train each SVHN and CIFAR-10 as the in-distribution datasets while using the other dataset as the seen out-distribution during training. We train on VGGnet, just like [17], with a contrastive loss of margin 10 and a 25% of (in-dist, out-dist) pairs every two batches. Follow- ing the experiments of [17], we test the resulting networks on the in-distribution test set for classiï¬cation, and TinyImageNet [6], LSUN [33] and Gaussian noise for out-of-distribution detection. For evaluation we use the proposed metrics from their implementation, namely: true negative rate (TNR) when true positive rate (TPR) is at 95%, detection accuracy, area under the receiver operating characteristic curve (AUROC) and both area under the precision- recall curve for in-distribution (AUPR-in) and out-distribution (AUPR-out).
Table 1 shows the results. For SVHN as the in-distribution results are as expected, with ODIN having lower results due to not using any out-of-distribution during training, and both
1Code available at: https://mmasana.github.io/OoD_Mining
8 M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION
Table 1: Comparison with the state-of-the-art. All metrics show the methods as ODIN/CC- AG/ODM, red indicates worst performance, bold indicates best, * for seen distribution.
In-dist classiï¬cation SVHN 93.8/94.2/68.7 CIFAR-10 80.1/80.6/54.0 Out-dist CIFAR-10* Tiny LSUN Gaussian SVHN* Tiny LSUN Gaussian TNR at 95% TPR 47.4/99.9/99.8 49.0/100.0/99.0 46.3/100.0/99.4 56.1/100.0/100.0 13.7/99.8/99.8 13.6/10.1/17.1 14.0/10.8/19.6 2.8/3.5/3.0 Detection Accuracy 78.6/99.9/99.8 79.6/100.0/99.1 78.2/100.0/99.5 83.4/100.0/100.0 66.6/99.8/99.7 62.6/58.9/66.9 63.2/60.2/70.9 50.0/50.0/64.2 AUROC 62.6/99.9/99.5 64.6/100.0/99.0 61.8/100.0/99.3 72.0/100.0/100.0 46.6/99.9/99.9 39.6/31.8/66.2 40.7/34.8/68.4 10.2/14.1/49.8 AUPR-in 71.6/99.9/99.7 72.7/100.0/96.5 71.1/100.0/97.8 77.2/100.0/100.0 61.4/99.9/99.9 58.3/55.3/60.3 58.7/56.4/59.5 48.1/49.4/64.1 AUPR-out 91.2/99.4/99.9 91.6/99.4/99.8 90.8/99.4/99.8 92.8/99.4/100.0 73.5/99.8/100.0 71.0/66.1/68.2 71.5/68.0/70.7 39.9/47.0/46.7
CC-AG and ODM having near perfect performance. In the case of CIFAR-10 being the in- distribution, the same pattern is repeated for the seen distribution from SVHN. However, for the unseen out-distributions, CC-AG achieves the lower performance on both TinyImageNet and LSUN datasets, and ODIN the lower for Gaussian noise. Although not always achieving the best performance, ODM is able to compete with the best cases, and is never the worse performer. Gaussian noise seems to be the most difï¬cult case on CIFAR-10, which is a more complex dataset than SVHN. For ODIN, as it is only based on cross-entropy, it becomes to overconï¬dent. In the case of CC-AG and ODM, the low results might be related to Gaussian noise being too different from the out-distribution seen during training.
Finally, it is important to note that metric learning has a lower classiï¬cation accuracy of the in-distribution. This has already been observed in [12], where features learned by classiï¬cation networks with typical softmax layers are compared with metric learning based features, with regard to several benchmark datasets. For good classiï¬cation results our metric learning network should be combined with those of a network trained with cross-entropy. One could also consider a network with two heads, where after some initial shared layers a cross-entropy branch and a metric learning branch are trained in a multi-task setting.
# 4.2 Tsinghua trafï¬c sign dataset
We evaluate our method on a real application, i.e. trafï¬c sign recognition in the presence of unseen trafï¬c signs (novelty) and not-a-trafï¬c-sign detection (anomaly). We compare our proposed method ODM against ODIN [18], as a cross-entropy based method, on the Tsinghua dataset [35]. We divide trafï¬c sign classes into three disjoint partitions : the in- distribution classes, seen out-of-distribution images used for training, and unseen out-of- distribution images used for testing on out-of-distribution detection. Since Tsinghua contains some very similar trafï¬c sign classes which would rarely be learned without each other (i.e. all speed limits, all turning arrows, ...), we group those that are too similar in order to build a more reasonable and natural split than just a random one (See Supplementary Material for more on the usual random splits). For the same reason, we also discard classes with less than 10 images as they introduce errors. Therefore, we generate a random split which applies by the mentioned restrictions (see Fig. 2), by taking a 50-20-30% split of the classes for the in-distribution, seen out-distribution and unseen out-distribution respectively.
Regarding anomalies, we consider Gaussian noise, but also background patches from the same Tsinghua dataset images. Those patches are generated randomly from the central area of the original full frames to avoid an unbalanced ratio of ground and sky images, which can
M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION 9
Figure 2: In-distribution (left), seen (middle) and unseen(right) out-of-distribution partition classes from the proposed Tsinghua split.
Table 2: Comparison between ODIN and our proposed learning strategies on a WRN-28-10 architecture, when using novelty, anomaly (background patches and Gaussian noise) as seen out-of-distribution data as well as not seen out-of-distribution.
Method ODIN Ours - ML Ours - ODM Ours - ODM Ours - ODM In-dist accuracy 98.29 98.93 98.96 98.57 99.00 Out-dist Tsinghua (unseen) Background (unseen) Noise (unseen) Tsinghua (unseen) Background (unseen) Noise (unseen) Tsinghua (seen) Background (unseen) Noise (unseen) Tsinghua (unseen) Background (seen) Noise (unseen) Tsinghua (unseen) Background (unseen) Noise (seen) FPR at 95% TPR 8.74 22.42 0.23 5.23 0.25 0.07 4.38 0.17 0.00 8.65 0.01 0.00 5.72 1.51 0.00 Detection error 6.87 13.71 2.61 5.11 2.62 2.53 4.70 2.60 2.51 6.82 2.50 2.50 5.36 3.25 2.50 AUROC AUPR-in AUPR-out 97.82 96.43 98.59 98.77 99.35 99.51 99.01 99.28 99.69 97.84 99.99 100.00 98.50 98.53 100.00 96.19 92.13 98.40 97.38 99.03 99.25 98.01 98.81 99.51 94.40 99.94 99.97 97.09 97.97 99.93 98.92 98.48 98.76 99.45 99.64 99.72 99.63 99.67 99.73 98.57 99.99 99.99 99.30 99.20 99.99
be semantically richer and more challenging. In a real trafï¬c sign detector application, where detected possible trafï¬c signs are fed to a classiï¬er, this kind of anomalies are more realistic and account for possible detection errors more than Gaussian noise. The global performance of the system can be improved by avoiding that those anomalies reach the classiï¬er and produce an overconï¬dent error.
For this experiment, we learn a 32-dimensional embedding space, training a WRN-28- 10 model [34] with an Adam optimizer at learning rate 0.0001 for 10,000 steps. The same training parameters are used for ODIN since they provided the best combination on the validation set. Table 2 shows the results of the comparison between ODIN, ML and ODM for both seen novelty and anomaly cases. Note that our implementation of the Detection Error metric is ï¬xed to use the FPR at a TPR of 95%, making a value of 2.50 the one of a perfect detector (see Supplementary Material).
In terms of in-distribution classiï¬cation accuracy, both methods are equivalent. How- ever, the comparison of plain metric learning (Ours-ML) with ODIN shows that learning an embedding can be more suitable for out-of-distribution detection of both novelty and anomalies. Introducing out-distribution samples during training slightly improves all cases. Using anomalies as seen out-of-distribution during training helps the detection of the same kind of anomaly as expected since anomalies will be forced to be further away from the in- distribution in the embedding space. However, in some cases, it can damage the detection of novelty, which would not be guaranteed to be pushed away from the learned classes.
10 M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION
# 5 Conclusions
In this paper, we propose a metric learning approach to improve out-of-distribution detection which performs comparable or better than the state-of-the-art. We show that metric learn- ing provides a better output embedding space to detect data outside the learned distribution than cross-entropy softmax based models. This opens an opportunity to further research on how this embedding space should be learned, with restrictions that could further improve the ï¬eld. The presented results suggest that out-of-distribution data might not all be seen as a single type of anomaly, but instead a continuous representation between novelty and anomaly data. In that spectrum, anomaly detection is the easier task, giving more focus at the difï¬- culty of novelty detection. Finally, we also propose a new benchmark for out-of-distribution detection on the Tsinghua dataset, as a more realistic scenario for novelty detection.
# Acknowledgements
Marc Masana acknowledges 2018-FI_B1-00198 grant of Generalitat de Catalunya. Idoia Ruiz, Joan Serrat and Antonio Lopez want to acknowledge the Spanish project TIN2017- 88709-R (Ministerio de Ciencia, Innovación y Universidades). This work is supported by the EU Project CybSpeed MSCA-RISE-2017-777720. We acknowledge the project TIN2016- 79717-R and the CHISTERA project M2CR (PCIN-2015-251) of the Spanish Government. We also acknowledge the CERCA Programme of Generalitat de Catalunya and its ACCIO agency. Finally, we acknowledge the generous support of the NVIDIA GPU donation pro- gram.
# References
[1] A. Bendale and T. E. Boult. Towards open set deep networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1563â1572, 2016.
[2] Paul Bodesheim, Alexander Freytag, Erik Rodner, Michael Kemmler, and Joachim In IEEE Conference on Denzler. Kernel null space methods for novelty detection. Computer Vision and Pattern Recognition (CVPR), pages 3374â3381, 2013.
[3] Zhiyuan Chen and Bing Liu. Lifelong machine learning. Synthesis Lectures on Artiï¬- cial Intelligence and Machine Learning, 10(3):1â145, 2016.
[4] Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discrimi- natively, with application to face veriï¬cation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 539â546, 2005.
[5] Jesse Davis and Mark Goadrich. The relationship between precision-recall and roc In Proceedings of the 23rd international conference on Machine learning, curves. pages 233â240. ACM, 2006.
[6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248â255, 2009.
M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION 11
[7] Yonatan Geifman and Ran El-Yaniv. Selective classiï¬cation for deep neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 4885â4894, 2017.
[8] Matthieu Guillaumin, Jakob Verbeek, and Cordelia Schmid. Is that you? metric learn- ing approaches for face identiï¬cation. In Computer Vision, 2009 IEEE 12th interna- tional conference on, pages 498â505. IEEE, 2009.
[9] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1735â1742, 2006.
[10] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassiï¬ed and out-of- distribution examples in neural networks. In Int. Conference on Learning Representa- tions (ICLR), 2017.
[11] Elad Hoffer and Nir Ailon. Deep metric learning using triplet network. In International Workshop on Similarity-Based Pattern Recognition, pages 84â92. Springer, 2015.
Signiï¬cance of softmax- based features in comparison to distance metric learning-based features. CoRR, abs/1712.10151, 2017.
[13] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Des- jardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Pro- ceedings of the national academy of sciences, page 201611835, 2017.
[14] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
[15] Brian Kulis. Metric learning: A survey. Foundations and Trends in Machine Learning, 5(4):287â364, 2013.
[16] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learn- ing applied to document recognition. Proceedings of the IEEE, 86(11):2278â2324, 1998.
[17] Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training conï¬dence-calibrated In Int. Conference on Learning classiï¬ers for detecting out-of-distribution samples. Representations (ICLR), 2018.
[18] Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In Int. Conference on Learning Representations (ICLR), 2018.
[19] Xialei Liu, Joost van de Weijer, and Andrew D Bagdanov. Rankiqa: Learning from rankings for no-reference image quality assessment. In International Conference on Computer Vision (ICCV), 2017.
12 M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION
[20] Xialei Liu, Marc Masana, Luis Herranz, Joost Van de Weijer, Antonio M Lopez, and Andrew D Bagdanov. Rotate your networks: Better weight consolidation and less catastrophic forgetting. In Proceedings International Conference on Pattern Recogni- tion (ICPR), 2018.
[21] Christopher D Manning and Hinrich Schütze. Foundations of statistical natural lan- guage processing. MIT press, 1999.
[22] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y In NIPS Ng. Reading digits in natural images with unsupervised feature learning. Workshop on deep learning and unsupervised feature learning, 2011.
[23] Marco A. F. Pimentel, David A. Clifton, Lei A. Clifton, and Lionel Tarassenko. A review of novelty detection. Signal Processing, 99:215â249, 2014.
[24] W. J. Scheirer, A. de Rezende Rocha, A. Sapkota, and T. E. Boult. Toward open set recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(7): 1757â1772, July 2013.
[25] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A uniï¬ed embed- ding for face recognition and clustering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 815â823, 2015.
[26] Alexander Schultheiss, Christoph Käding, Alexander Freytag, and Joachim Denzler. Finding the unknown: Novelty detection with extreme value signatures of deep neural activations. In Volker Roth and Thomas Vetter, editors, German Conference on Pattern Recognition (GCPR), pages 226â238. Springer, 2017.
[27] Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. In Advances in Neural Information Processing Systems (NIPS), 2016.
[28] Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4004â4012, 2016.
[29] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich, et al. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR), 2015.
[30] Evgeniya Ustinova and Victor Lempitsky. Learning deep embeddings with histogram loss. In Advances in Neural Information Processing Systems (NIPS), pages 4170â4178, 2016.
[31] Jian Wang, Feng Zhou, Shilei Wen, Xiao Liu, and Yuanqing Lin. Deep metric learning with angular loss. In International Conference on Computer Vision (ICCV), 2017.
[32] Jiang Wang, Thomas Leung, Chuck Rosenberg, Jinbin Wang, James Philbin, Bo Chen, Ying Wu, et al. Learning ï¬ne-grained image similarity with deep ranking. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION 13
[33] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. Lsun: Construc- tion of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
[34] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In British Machine and Vision Conference (BMVC), 2016.
[35] Zhe Zhu, Dun Liang, Songhai Zhang, Xiaolei Huang, Baoli Li, and Shimin Hu. Trafï¬c- sign detection and classiï¬cation in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2110â2118, 2016.
14 M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION
# Supplementary Material Metric Learning for Novelty and Anomaly Detection
# A Out-of-Distribution detection metrics
In out-of-distribution detection, comparing different detector approaches cannot be done by measuring only accuracy. The question we want to answer is if a given test sample is from a different distribution than that of the training data. The detector will be using some infor- mation from the classiï¬er or embedding space, but the prediction is whether that processed sample is part of the in-distribution or the out-distribution. To measure that, we adopt the metrics proposed in [18]:
⢠FPR at 95% TPR is the corresponding False Positive Rate (FPR=FP/(FP+TN)) when It can be interpreted as the the True Positive Rate (TPR=TP/(TP+FN)) is at 95%. misclassiï¬cation probability of a negative (out-distribution) sample to be predicted as a positive (in-distribution) sample.
⢠Detection Error measures the probability of misclassifying a sample when the TPR is at 95%. Assuming that a sample has equal probability of being positive or negative in the test, it is deï¬ned as 0.5(1 â TPR) + 0.5FPR.
where TP, FP, TN, FN correspond to true positives, false positives, true negatives and false negatives respectively. Those two metrics were also changed to TNR at 95% TPR and Detection Accuracy in [17], which can be calculated by doing 1 â x from the two metrics above explained respectively. We use the latter metrics only when comparing to other state- of-the-art methods. This is also done because the implementation in both [17, 18] allows for using a TPR which is not at 95% in some cases, meaning that the Detection Error can go below 2.5 since TPR is not ï¬xed to 0.95.
In order to avoid the biases between the likelihood of an in-distribution sample to be- ing more frequent than an out-distribution one, we need threshold independent metrics that measure the trade-off between false negatives and false positives. We adopt the following performance metrics proposed in [10]:
⢠AUROC is the Area Under the Receiver Operating Characteristic proposed in [5]. It measures the relation between between TPR and FPR interpreted as the probability of a positive sample being assigned a higher score than a negative sample.
⢠AUPR is the Area Under the Precision-Recall curve proposed in [21]. It measures the relationship between precision (TP/(TP+FP)) and recall (TP/(TP+FN)) and is more robust when positive and negative classes have different base rates. For this met- ric we provide both AUPR-in and AUPR-out when treating in-distribution and out- distribution samples as positive, respectively.
M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION 15
Table 3: Quantitative comparison between cross-entropy and metric learning based methods training on LeNet for MNIST â 2, 6, 7 (In-dist), 0, 3, 4 and 8 (Seen Out-dist) and 5, 9, 1 (Unseen Out-dist Novelty).
Method CE Ours - ML Ours - ODM In-dist accuracy 99.70 99.54 99.64 Out-dist Novelty Gaussian noise SVHN CIFAR-10 Novelty Gaussian noise SVHN CIFAR-10 Novelty Gaussian noise SVHN CIFAR-10 FPR at 95% TPR 33.76 0.70 0.23 2.86 21.05 0.00 0.00 0.01 0.16 0.00 0.00 0.00 Detection Error 19.38 2.85 2.60 3.93 13.03 1.95 1.74 2.36 1.67 1.76 0.96 1.31 AUROC AUPR-in AUPR-out 92.33 98.85 99.48 98.96 94.48 98.54 98.88 98.87 99.95 99.14 99.65 99.54 92.73 99.21 98.64 98.02 94.02 99.21 98.76 98.93 99.94 99.46 99.41 99.45 92.29 98.14 99.91 99.57 94.46 95.15 99.61 99.12 99.96 97.66 99.89 99.68
# B Quantitative results of the MNIST experiment
In this section we present the quantitative results of the comparison on the MNIST dataset. In this case we allowed a 5-dimensional embedding space for ML so the representation is rich enough to make the discrimination between in-dist and out-dist. For CE, as it is ï¬xed to the number of classes, the embedding space is 3-dimensional. In Table 3 we see that ML performs a better than CE on all cases. ODM almost solves the novelty problem while keeping a similar performance on anomalies as ML. It is noticeable that CE struggles a bit more with Gaussian noise than the other anomalies. In this case, CE still produces highly conï¬dent predictions for some of the noise images.
# C Experimental results on additional Tsinghua splits
Alternatively to the Tsinghua split generated with the restrictions introduced in Section 4.2, we also perform the comparison in a set of 10 random splits without applying any restriction to the partition classes. We still discard the classes with less than 10 images per class. Table 4 shows the average performance for this set of splits with their respective standard deviation. Since the split of the classes is random, this leads to highly similar or mirrored classes to be separated into in-distribution and out-distribution, creating situations that are very difï¬cult to predict correctly. For instance, detecting that a turn-left trafï¬c sign is part of the in-distribution while the turn-right trafï¬c sign is part of the out-distribution, is very difï¬cult in many cases. Therefore, the results from the random splits have a much lower performance, specially for the novelty case.
When comparing the metric learning based methods, ODM improves over ML for the test set that has been seen as out-distribution during training. In general, using novelty data as out-distribution makes an improvement over said test set, as well as for background and noise. However, when using background images to push the out-of-distribution further from the in-distribution class clusters in the embedding space, novelty is almost unaffected. The same happens when noise is used as out-distribution during training. This could be explained by those cases improving the embedding space for data that is initially not so far away from
16 M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION
Table 4: Comparison between ODIN and our proposed learning strategies on a WRN-28- 10 architecture, when using novelty, anomaly (background patches and Gaussian noise) as seen out-of-distribution data as well as not seen out-of-distribution. The experiments are performed on a set of 10 random splits and the metrics provided are the mean of the metrics on the individual splits ± its standard deviation.
Method In-dist accuracy ODIN 99.29±0.05 Ours - ML 99.16±0.16 Ours - ODM 99.13±0.22 Ours - ODM 99.09±0.18 Ours - ODM 99.02±2.42 Out-dist Tsinghua (unseen) Background (unseen) Noise (unseen) Tsinghua (unseen) Background (unseen) Noise (unseen) Tsinghua (seen) Background (unseen) Noise (unseen) Tsinghua (unseen) Background (seen) Noise (unseen) Tsinghua (unseen) Background (unseen) Noise (seen) FPR at 95% TPR 20.85±2.28 8.39±6.34 0.03±0.43 21.05±3.25 1.91±1.02 0.30±0.96 16.29±4.53 0.39±1.63 0.01±1.39 20.36±3.63 0.01±0.03 0.00±0.00 20.87±1.63 0.97±1.19 0.00±0.00 Detection error 12.92±1.14 6.70±3.17 2.53±0.85 13.03±1.62 3.45±0.51 2.65±0.48 10.65±2.26 2.71±0.31 2.51±0.70 12.68±1.81 2.51±0.01 2.50±0.01 12.93±0.81 2.99±0.60 2.50±0.01 AUROC 93.50±1.05 98.06±1.26 99.67±0.34 94.18±0.92 99.14±0.32 99.27±0.36 96.27±0.86 99.50±0.27 99.59±0.54 93.47±1.55 99.97±0.02 99.99±0.03 93.65±1.05 99.14±0.19 100.00±0.00 AUPR-in 93.78±1.93 97.02±3.15 99.60±0.39 94.42±1.12 98.79±0.35 99.09±0.40 96.78±0.93 99.30±0.31 99.51±0.60 93.58±2.10 99.92±0.03 99.97±0.05 94.01±1.48 98.90±0.23 99.98±0.01 AUPR-out 92.41±0.73 98.79±0.60 99.74±0.41 92.75±1.08 99.40±0.22 99.43±0.35 95.11±1.15 99.66±0.20 99.69±0.43 92.00±1.74 99.98±0.01 99.99±0.01 92.33±0.89 99.39±0.19 99.99±1.85
the in-distribution class clusters. This would change the embedding space to push further the anomalies, but would leave the novelty classes, originally much closer to the clusters, almost at the same location.
When introducing out-of-distribution samples, the behaviour on the random splits is the same as for the restricted splits: while introducing novelty helps the detection on all cases, introducing anomaly helps the detection of the same kind of anomaly.
# D Embeddings on Tsinghua
Figure 3 shows the embeddings for ODM (with novelty as seen out-of-distribution) and ML after applying PCA. When using ML, the novelties are not forced to be pushed away from the in-distribution clusters so they share the embedding space in between those same in- distribution clusters. In the case of ODM, the out-of-distribution clusters are more clearly separated from the in-distribution ones.
M. MASANA ET AL.: METRIC LEARNING FOR NOVELTY AND ANOMALY DETECTION 17
Figure 3: Embedding spaces after PCA for ODM (left) and ML (right) tested for in-dist (blue shaded) and out-dist (yellow shaded). Results are for TSinghua (ï¬rst row), background patches (second row) and Gaussian noise (third row). Best viewed in color. | {
"id": "1506.03365"
} |
1808.05326 | SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference | Given a partial description like "she opened the hood of the car," humans can
reason about the situation and anticipate what might come next ("then, she
examined the engine"). In this paper, we introduce the task of grounded
commonsense inference, unifying natural language inference and commonsense
reasoning.
We present SWAG, a new dataset with 113k multiple choice questions about a
rich spectrum of grounded situations. To address the recurring challenges of
the annotation artifacts and human biases found in many existing datasets, we
propose Adversarial Filtering (AF), a novel procedure that constructs a
de-biased dataset by iteratively training an ensemble of stylistic classifiers,
and using them to filter the data. To account for the aggressive adversarial
filtering, we use state-of-the-art language models to massively oversample a
diverse set of potential counterfactuals. Empirical results demonstrate that
while humans can solve the resulting inference problems with high accuracy
(88%), various competitive models struggle on our task. We provide
comprehensive analysis that indicates significant opportunities for future
research. | http://arxiv.org/pdf/1808.05326 | Rowan Zellers, Yonatan Bisk, Roy Schwartz, Yejin Choi | cs.CL | EMNLP 2018 | null | cs.CL | 20180816 | 20180816 | 8 1 0 2
g u A 6 1 ] L C . s c [
1 v 6 2 3 5 0 . 8 0 8 1 : v i X r a
# Swag: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference
Rowan Zellersâ Yonatan Biskâ Roy Schwartzâ ⥠Yejin Choiâ ⥠â Paul G. Allen School of Computer Science & Engineering, University of Washington â¥Allen Institute for Artiï¬cial Intelligence {rowanz,ybisk,roysch,yejin}@cs.washington.edu https://rowanzellers.com/swag
# Abstract
Given a partial description like âshe opened the hood of the car,â humans can reason about the situation and anticipate what might come next (âthen, she examined the engineâ). In this paper, we introduce the task of grounded com- monsense inference, unifying natural language inference and commonsense reasoning.
On stage, a woman takes a seat at the piano. She a) sits on a bench as her sister plays with the doll. b) smiles with someone as the music plays. c) is in the crowd, watching the dancers. d) nervously sets her ï¬ngers on the keys.
A girl is going across a set of monkey bars. She a) jumps up across the monkey bars. b) struggles onto the monkey bars to grab her head. c) gets to the end and stands on a wooden plank. d) jumps up and does a back ï¬ip.
We present Swag, a new dataset with 113k multiple choice questions about a rich spec- trum of grounded situations. To address the recurring challenges of the annotation arti- facts and human biases found in many exist- ing datasets, we propose Adversarial Filter- ing (AF), a novel procedure that constructs a de-biased dataset by iteratively training an en- semble of stylistic classiï¬ers, and using them to ï¬lter the data. To account for the aggres- sive adversarial ï¬ltering, we use state-of-the- art language models to massively oversam- ple a diverse set of potential counterfactuals. Empirical results demonstrate that while hu- mans can solve the resulting inference prob- lems with high accuracy (88%), various com- petitive models struggle on our task. We pro- vide comprehensive analysis that indicates sig- niï¬cant opportunities for future research.
# Introduction
When we read a story, we bring to it a large body of implicit knowledge about the physical world. For instance, given the context âon stage, a woman takes a seat at the piano,â shown in Table 1, we can easily infer what the situation might look like: a woman is giving a piano performance, with a crowd watching her. We can furthermore infer her likely next action: she will most likely set her ï¬n- gers on the piano keys and start playing.
This type of natural language inference requires commonsense reasoning, substantially broadening the scope of prior work that focused primarily on
The woman is now blow drying the dog. The dog a) is placed in the kennel next to a womanâs feet. b) washes her face with the shampoo. c) walks into frame and walks towards the dog. d) tried to cut her face, so she is trying to do something very close to her face.
Table 1: Examples from Swag; the correct an- swer is bolded. Adversarial Filtering ensures that stylistic models ï¬nd all options equally appealing.
linguistic entailment (Chierchia and McConnell- Ginet, 2000). Whereas the dominant entailment paradigm asks if two natural language sentences (the âpremiseâ and the âhypothesisâ) describe the same set of possible worlds (Dagan et al., 2006; Bowman et al., 2015), here we focus on whether a (multiple-choice) ending describes a possible (fu- ture) world that can be anticipated from the situa- tion described in the premise, even when it is not strictly entailed. Making such inference necessi- tates a rich understanding about everyday physical situations, including object affordances (Gibson, 1979) and frame semantics (Baker et al., 1998).
A ï¬rst step toward grounded commonsense in- ference with todayâs deep learning machinery is to create a large-scale dataset. However, recent work has shown that human-written datasets are suscep- tible to annotation artifacts: unintended stylistic patterns that give out clues for the gold labels (Gu- rurangan et al., 2018; Poliak et al., 2018). As a result, models trained on such datasets with hu-
1
man biases run the risk of over-estimating the ac- tual performance on the underlying task, and are vulnerable to adversarial or out-of-domain exam- ples (Wang et al., 2018; Glockner et al., 2018).
In this paper, we introduce Adversarial Filtering (AF), a new method to automatically detect and reduce stylistic artifacts. We use this method to construct Swag: an adversarial dataset with 113k multiple-choice questions. We start with pairs of temporally adjacent video captions, each with a context and a follow-up event that we know is physically possible. We then use a state-of-the- art language model ï¬ne-tuned on this data to mas- sively oversample a diverse set of possible nega- tive sentence endings (or counterfactuals). Next, we ï¬lter these candidate endings aggressively and adversarially using a committee of trained mod- els to obtain a population of de-biased endings with similar stylistic features to the real ones. Fi- nally, these ï¬ltered counterfactuals are validated by crowd workers to further ensure data quality.
Extensive empirical results demonstrate unique contributions of our dataset, complementing exist- ing datasets for natural langauge inference (NLI) (Bowman et al., 2015; Williams et al., 2018) and commonsense reasoning (Roemmele et al., 2011; Mostafazadeh et al., 2016; Zhang et al., 2017). First, our dataset poses a new challenge of grounded commonsense inference that is easy for humans (88%) while hard for current state-of- the-art NLI models (<60%). Second, our pro- posed adversarial ï¬ltering methodology allows for cost-effective construction of a large-scale dataset while substantially reducing known annotation ar- tifacts. The generality of adversarial ï¬ltering al- lows it to be applied to build future datasets, en- suring that they serve as reliable benchmarks.
# Swag: Our new dataset
We introduce a new dataset for studying physically grounded commonsense inference, called Swag.1 Our task is to predict which event is most likely to occur next in a video. More formally, a model is given a context c = (s, n): a complete sentence s and a noun phrase n that begins a second sen- tence, as well as a list of possible verb phrase sen- tence endings V = {v1, . . . , v4}. See Figure 1 for an example triple (s, n, vi). The model must then select the most appropriate verb phrase vËi â V .
# 1Short for Situations With Adversarial Generations.
2
Using video captions from |jacrwvici @ISIVbIE® (the videos are never used) The mixer creams the butter. Sugar is added to the mixing bowl. context NP vp vegetables. i vs e@ lis putting vegetable fruits. | aqversarially select âSs [is using a red sponge to add! generations Oversample | eggs and parsley. endings from context+NP The mixer creams the butter. Sug fs put on top oF the Annotators filter endings to ensure agreement lis placed in the oven.
Figure 1: Overview of the data collection process. For a pair of sequential video captions, the second caption is split into noun and verb phrases. A lan- guage model generates many negative endings, of which a difï¬cult subset are human-annotated.
Overview Our corpus consists of 113k multi- ple choice questions (73k training, 20k valida- tion, 20k test) and is derived from pairs of con- secutive video captions from ActivityNet Cap- tions (Krishna et al., 2017; Heilbron et al., 2015) and the Large Scale Movie Description Chal- lenge (LSMDC; Rohrbach et al., 2017). The two datasets are slightly different in nature and allow us to achieve broader coverage: ActivityNet con- tains 20k YouTube clips containing one of 203 ac- tivity types (such as doing gymnastics or playing guitar); LSMDC consists of 128k movie captions (audio descriptions and scripts). For each pair of captions, we use a constituency parser (Stern et al., 2017) to split the second sentence into noun and verb phrases (Figure 1).2 Each question has a human-veriï¬ed gold ending and 3 distractors.
# 3 A solution to annotation artifacts
In this section, we outline the construction of Swag. We seek dataset diversity while minimizing annotation artifacts, conditional stylistic patterns such as length and word-preference biases. For many NLI datasets, these biases have been shown to allow shallow models (e.g. bag-of-words) ob- tain artiï¬cially high performance.
To avoid introducing easily âgamedâ patterns, we present Adversarial Filtering (AF), a generally- applicable treatment involving the iterative reï¬ne- ment of a set of assignments to increase the en- tropy under a chosen model family. We then dis- cuss how we generate counterfactual endings, and
# 2We ï¬lter out sentences with rare tokens (â¤3 occur-
rences), that are short (l ⤠5), or that lack a verb phrase.
Algorithm 1 Adversarial ï¬ltering (AF) of negative sam- ples. During our experiments, we set N easy = 2 for reï¬ning a population of N â = 1023 negative examples to k = 9, and used a 80%/20% train/test split.
while convergence not reached do
e Split the dataset D randomly up into train- ing and testing portions Dââ and D®. e Optimize a model fg on D*â. for index i in D® do e Identify easy indices: Ave! = {5 ⬠Ai: fola?) > fola;,)} ° Replace N®&*Y easy indices jE MAca® with adversarial indices k ¢ A; satisfying fo(X;,4) > fo(&;;)- end for end while
ï¬nally, the models used for ï¬ltering.
# 3.1 Formal deï¬nition
In this section, we formalize what it means for a dataset to be adversarial. Intuitively, we say that an adversarial dataset for a model f is one on which f will not generalize, even if evaluated on test data from the same distribution. More for- mally, let our input space be X and the label space be Y. Our trainable classiï¬er f , taking parameters θ is deï¬ned as fθ : X â R|Y|. Let our dataset of size N be deï¬ned as D = {(xi, yi)}1â¤iâ¤N , and let the loss function over the dataset be L(fθ, D). We say that a dataset is adversarial with respect to f if we expect high empirical error I over all leave-one-out train/test splits (Vapnik, 2000):
I(D ELM {(vi,m)}),
i=1 i = argmin
where 0* = azpmin E( fo,P \ {(xi,ys)}), (2)
# θ
with regularization terms omitted for simplicity.
# 3.2 Adversarial ï¬ltering (AF) algorithm
In this section, we outline an approach for gen- erating an adversarial dataset D, effectively max- imizing empirical error I with respect to a fam- ily of trainable classiï¬ers f . Without loss of generality, we consider the situation where we have N contexts, each associated with a single positive example (x+ i , 1) â X à Y, and a large population of context-speciï¬c negative examples (xâ i,j, 0) â X à Y, where 1â¤jâ¤N â for each i. For instance, the negative examples could be incorrect relations in knowledge-base completion (Socher et al., 2013), or all words in a dictionary for a
3
single-word cloze task (Zweig and Burges, 2011). Our goal will be to filter the population of neg- ative examples for each instance i to a size of k<N~. This will be captured by returning a set of assignments A, where for each instance the as- signment will be a k-subset A; = [1....N7]*. The filtered dataset will then be: DAF = {(xi,1),{(#)y,0)}iea bisicw (3)
(3) Unfortunately, optimizing I(DAF , f ) is difï¬cult as A is global and non-differentiable. To address this, we present Algorithm 1. On each iteration, we split the data into dummy âtrainâ and âtestâ splits. We train a model f on the training portion and obtain parameters θ, then use the remaining test portion to reassign the indices of A. For each context, we replace some number of âeasyâ nega- tives in A that fθ classiï¬es correctly with âadver- sarialâ negatives outside of A that fθ misclassiï¬es. This process can be thought of as increasing the overall entropy of the dataset: given a strong model fθ that is compatible with a random subset of the data, we aim to ensure it cannot generalize to the held-out set. We repeat this for several it- erations to reduce the generalization ability of the model family f over arbitrary train/test splits.
# 3.3 Generating candidate endings
To generate counterfactuals for Swag, we use an LSTM (Hochreiter and Schmidhuber, 1997) lan- guage model (LM), conditioned on contexts from video captions. We ï¬rst pretrain on BookCorpus (Zhu et al., 2015), then ï¬netune on the video cap- tion datasets. The architecture uses standard best practices and was validated on held-out perplex- ity of the video caption datasets; details are in the appendix. We use the LM to sample N â=1023 unique endings for a partial caption.3
Importantly, we greedily sample the endings, since beam search decoding biases the generated endings to be of lower perplexity (and thus easily distinguishable from found endings). We ï¬nd this process gives good counterfactuals: the generated endings tend to use topical words, but often make little sense physically, making them perfect for our task. Further, the generated endings are marked as âgibberishâ by humans only 9.1% of the time (Sec 3.5); in that case the ending is ï¬ltered out.
3To ensure that the LM generates unique endings, we split the data into ï¬ve validation folds and train ï¬ve separate LMs, one for each set of training folds. This means that each LM never sees the found endings during training.
° a â-- Baseline (random) 05, ° CNN > * ° Bow gO4 e POStag LSTM Fe . ° MLP B03 % fo â< ° Ensemble Bor % ; â~. ° ° . 01 -aa=5 (Seen ene ne ee nee 0.0 0 20 40 60 80 100 120 140 Iteration #
Figure 2: Test accuracy by AF iteration, under the negatives given by A. The accuracy drops from around 60% to close to random chance. For efï¬- ciency, the ï¬rst 100 iterations only use the MLP.
# 3.4 Stylistic models for adversarial ï¬ltering
In creating Swag, we designed the model family f to pick up on low-level stylistic features that we posit should not be predictive of whether an event happens next in a video. These stylistic features are an obvious case of annotation artifacts (Cai et al., 2017; Schwartz et al., 2017).4 Our ï¬nal clas- siï¬er is an ensemble of four stylistic models: 1. A multilayer perceptron (MLP) given LM per- plexity features and context/ending lengths. 2. A bag-of-words model that averages the word embeddings of the second sentence as features. 3. A one-layer CNN, with ï¬lter sizes ranging from 2-5, over the second sentence. 4. A bidirectional LSTM over the 100 most com- mon words in the second sentence; uncommon words are replaced by their POS tags. We ensemble the models by concatenating their ï¬- nal representations and passing it through an MLP. On every adversarial iteration, the ensemble is trained jointly to minimize cross-entropy.
The accuracies of these models (at each itera- tion, evaluated on a 20% split of the test dataset before indices of A get remapped) are shown in Figure 2. Performance decreases from 60% to close to random chance; moreover, confusing the perplexity-based MLP is not sufï¬cient to lower performance of the ensemble. Only once the other stylistic models are added does the ensemble ac- curacy drop substantially, suggesting that our ap- proach is effective at reducing stylistic artifacts.
4A broad deï¬nition of annotation artifacts might include aspects besides lexical/stylistic features: for instance, certain events are less likely semantically regardless of the context (e.g. riding a horse using a hose). For this work, we erred more conservatively and only ï¬ltered based on style.
4
Imagine that you are watching a video clip. The clip has a caption, but it is missing the ï¬nal phrase. Please choose the best 2 caption endings, and classify each as: ⢠likely, if it completes the caption in a reasonable way; ⢠unlikely, if it sounds ridiculous or impossible; ⢠gibberish if it has such serious errors that it doesnât feel like a valid English sentence.
Example: Someone is shown sitting on a fence and talking to the camera while pointing out horses. Someone ⢠stands in front of a podium. (likely, second best) ⢠rides a horse using a hose. (unlikely) ⢠is shown riding a horse. (likely, best) ⢠, the horse in a plaza ï¬eld. (gibberish)
Figure 3: Mechanical Turk instructions (abridged).
# 3.5 Human veriï¬cation
The ï¬nal data-collection step is to have humans verify the data. Workers on Amazon Mechani- cal Turk were given the caption context, as well as six candidate endings: one found ending and ï¬ve adversarially-sampled endings. The task was twofold: Turkers ranked the endings indepen- dently as likely, unlikely, or gibberish, and se- lected the best and second best endings (Fig 3).
We obtained the correct answers to each con- text in two ways. If a Turker ranks the found end- ing as either best or second best (73.7% of the time), we add the found ending as a gold exam- ple, with negatives from the generations not la- belled best or gibberish. Further, if a Turker ranks a generated ending as best, and the found ending as second best, then we have reason to believe that the generation is good. This lets us add an addi- tional training example, consisting of the gener- ated best ending as the gold, and remaining gen- erations as negatives.5 Examples with â¤3 non- gibberish endings were ï¬ltered out.6
We found after 1000 examples that the annota- tors tended to have high agreement, also generally choosing found endings over generations (see Ta- ble 2). Thus, we collected the remaining 112k ex- amples with one annotator each, periodically veri- fying that annotators preferred the found endings.
# 4 Experiments
In this section, we evaluate the performance of various NLI models on Swag. Recall that models
5These two examples share contexts. To prevent biasing the test and validation sets, we didnât perform this procedure on answers from the evaluation setsâ context.
6To be data-efï¬cient, we reannotated ï¬ltered-out exam- ples by replacing gibberish endings, as well as generations that outranked the found ending, with candidates from A.
Labels Best Second Best Neither Likely Unlikely Gibberish Label distribution by ending type Found end Gen. end 53.5% 20.2% 26.3% 80.3% 19.0% 0.7% 9.3% 15.9% 74.8% 33.3% 57.5% 9.1% Inter-annotator agreement α ppa 0.43 72% 0.39 64%
Table 2: Annotators tend to label the found ending as likely and within the top 2 (column 2), in other cases the example is ï¬ltered out. Both label groups have high inter-annotator agreement, in terms of Krippendorffâs α and pairwise percent agreement.
for our dataset take the following form: given a sentence and a noun phrase as context c = (s, n), as well as a list of possible verb phrase endings V = {v1, . . . , v4}, a model fθ must select a verb Ëi that hopefully matches igold:
Ëi = argmax fθ(s, n, vi) (4) i
To study the amount of bias in our dataset, we also consider models that take as input just the ending verb phrase vi, or the entire second sen- tence (n, vi). For our learned models, we train f by minimizing multi-class cross-entropy. We consider three different types of word representa- tions: 300d GloVe vectors from Common Crawl (Pennington et al., 2014), 300d Numberbatch vec- tors retroï¬tted using ConceptNet relations (Speer et al., 2017), and 1024d ELMo contextual repre- sentations that show improvement on a variety of NLP tasks, including standard NLI (Peters et al., 2018). We follow the ï¬nal dataset split (see Sec- tion 2) using two training approaches: training on the found data, and the found and highly-ranked generated data. See the appendix for more details.
# 4.1 Unary models
The following models predict labels from a single span of text as input; this could be the ending only, the second sentence only, or the full passage. a. fastText (Joulin et al., 2017): This library mod- els a single span of text as a bag of n-grams, and tries to predict the probability of an ending being correct or incorrect independently.7 b. Pretrained sentence encoders We consider two types of pretrained RNN sentence encoders, SkipThoughts (Kiros et al., 2015) and InferSent
7The fastText model is trained using binary cross-entropy; at test time we extract the prediction by selecting the ending with the highest positive likelihood under the model.
5
(Conneau et al., 2017). SkipThoughts was trained by predicting adjacent sentences in book data, whereas InferSent was trained on supervised NLI data. For each second sentence (or just the end- ing), we feed the encoding into an MLP. c. LSTM sentence encoder Given an arbitrary span of text, we run a two-layer BiLSTM over it. The ï¬nal hidden states are then max-pooled to ob- tain a ï¬xed-size representation, which is then used to predict the potential for that ending.
# 4.2 Binary models
The following models predict labels from two spans of text. We consider two possibilties for these models: using just the second sentence, where the two text spans are n, vi, or using the context and the second sentence, in which case the spans are s, (n, vi). The latter case includes many models developed for the NLI task. d. Dual Bag-of-Words For this baseline, we treat each sentence as a bag-of-embeddings (c, vi). We model the probability of picking an ending i using a bilinear model: softmaxi(cWvT e. Dual pretrained sentence encoders Here, we obtain representations from SkipThoughts or In- ferSent for each span, and compute their pairwise compatibility using either 1) a bilinear model or 2) an MLP from their concatenated representations. f. SNLI inference Here, we consider two mod- els that do well on SNLI (Bowman et al., 2015): Decomposable Attention (Parikh et al., 2016) and ESIM (Chen et al., 2017). We use pretrained ver- sions of these models (with ELMo embeddings) on SNLI to obtain 3-way entailment, neutral, and contradiction probabilities for each example. We then train a log-linear model using these 3-way probabilities as features. g. SNLI models (retrained) Here, we train ESIM and Decomposable Attention on our dataset: we simply change the output layer size to 1 (the po- tential of an ending vi) with a softmax over i.
# 4.3 Other models
We also considered the following models: h. Length: Although length was used by the ad- versarial classiï¬er, we want to verify that human validation didnât reintroduce a length bias. For this baseline, we always choose the shortest ending. i. ConceptNet As our task requires world knowl- edge, we tried a rule-based system on top of the
8We also tried using an MLP, but got worse results.
Ending only
# 2nd sentence only
# Context+2nd sentence
Model misc Random Length ConceptNet 25.0 25.0 25.0 25.0 26.7 27.0 26.7 27.0 25.0 25.0 26.0 26.0 25.0 25.0 26.0 26.0 25.0 25.0 25.0 25.0 Sentence encoders LSTM sequence model fastText SkipThoughts InferSent LSTM+GloVe LSTM+Numberbatch LSTM+ELMo 27.5 26.9 29.9 29.0 32.4 32.1 32.2 31.8 30.6 30.2 32.0 31.9 31.9 31.8 32.9 32.4 32.4 32.6 32.3 31.9 43.6 42.9 43.3 42.3 29.2 27.8 33.0 32.4 33.2 32.0 32.7 32.4 31.9 31.9 47.4 46.7 29.8 29.0 32.8 32.3 34.0 32.6 34.3 33.5 34.1 32.8 46.3 46.0 29.4 28.0 43.1 43.6 39.9 40.2 51.4 50.6 30.3 29.8 45.6 45.7 41.2 40.5 51.3 50.4 DualBoW Dual sentence encoders SNLI inference SNLI models (retrained) DualBoW+GloVe DualBoW+Numberbatch SkipThoughts-MLP SkipThoughts-Bilinear InferSent-MLP InferSent-Bilinear SNLI-ESIM SNLI-DecompAttn DecompAttn+GloVe DecompAttn+Numberbatch DecompAttn+ELMo ESIM+GloVe ESIM+Numberbatch ESIM+ELMo 31.3 31.3 31.9 31.4 34.6 33.9 36.0 35.7 32.9 32.1 32.0 31.3 29.8 30.3 32.4 31.7 43.4 43.4 34.8 35.1 33.1 32.6 46.0 45.7 31.9 31.2 31.6 31.3 36.2 35.5 34.7 34.5 32.8 32.7 31.6 31.3 31.1 31.7 32.5 31.9 40.6 40.3 36.3 36.7 33.0 32.4 45.9 44.8 34.5 34.7 35.1 35.1 33.4 32.3 36.5 35.6 35.9 36.2 40.5 40.3 36.4 36.1 35.8 35.8 47.4 47.6 47.4 48.0 47.7 47.3 51.9 52.7 46.5 46.4 59.1 59.2 32.9 33.1 34.2 34.1 37.4 36.4 35.3 34.9 39.5 39.4 39.0 38.4 36.2 36.0 35.8 35.7 48.5 48.6 48.0 48.3 46.0 45.4 52.5 52.5 44.0 44.6 58.7 58.5 Human 1 turker 3 turkers 5 turkers Expert 82.8 85.1 88.0 85.0
s l e d o m y r a n U
s l e d o m y r a n i B
# il
Table 3: Performance of all models in accuracy (%). All models substantially underperform humans, although performance increases as more context is provided (left to right). We optionally train on found endings only, or found and human-validated generated endings (found+gen).
ConceptNet knowledge base (Speer et al., 2017). For an ending sentence, we use the spaCy depen- dency parser to extract the head verb and its de- pendent object. The ending score is given by the number of ConceptNet causal relations9 between synonyms of the verb and synonyms of the object. j. Human performance To benchmark human performance, ï¬ve Mechanical Turk workers were asked to answer 100 dataset questions, as did an âexpertâ annotator (the ï¬rst author of this paper). Predictions were combined using a majority vote.
# 4.4 Results
We present our results in Table 3. The best model that only uses the ending is the LSTM sequence model with ELMo embeddings, which obtains 43.6%. This model, as with most models stud- ied, greatly improves with more context: by 3.1% when given the initial noun phrase, and by an ad-
ditional 4% when also given the ï¬rst sentence.
Further improvement is gained from models that compute pairwise representations of the in- puts. While the simplest such model, Dual- BoW, obtains only 35.1% accuracy, combining In- ferSent sentence representations gives 40.5% ac- curacy (InferSent-Bilinear). The best results come from pairwise NLI models: when fully trained on Swag, ESIM+ELMo obtains 59.2% accuracy.
When comparing machine results to human re- sults, we see there exists a lot of headroom. Though there likely is some noise in the task, our results suggest that humans (even untrained) con- verge to a consensus. Our in-house âexpertâ an- notator is outperformed by an ensemble of 5 Turk workers (with 88% accuracy); thus, the effective upper bound on our dataset is likely even higher.
# 5 Analysis
âRe- ceivesActionâ, âUsedForâ, and âHasSubeventâ. Though their coverage is low (30.4% of questions have an answer with â¥1 causal relation), the more frequent relations in ConceptNet, such as âIsAâ, at best only indirectly relate to our task.
# 5.1 Swag versus existing NLI datasets
The past few years have yielded great advances in NLI and representation learning, due to the avail- ability of large datasets like SNLI and MultiNLI
6
lm SWAG Te $F GES â¬s Ra £& FS & g Verb Frequencies S35 85 :,, â 90° 10° 10° 10° Number of cumulative verbs (ordered by frequency)
Figure 4: Top: Distribution of the 40 top verbs in the union of SNLI and Swag. Our dataset shows a greater variety of dynamic verbs, such as âmoveâ, as well as temporal verbs such as âstartâ and âcome.â âContinueâ is cut off for SNLI (it has frequency 6 · 10â5). Bottom: CDF for verbs in SNLI and Swag.
(Bowman et al., 2015; Williams et al., 2018). With the release of Swag, we hope to continue this trend, particularly as our dataset largely has the same input/output format as other NLI datasets. We observe three key differences between our dataset and others in this space:
First, as noted in Section 1, Swag requires a unique type of temporal reasoning. A state-of-the- art NLI model such as ESIM, when bottlenecked through the SNLI notion of entailment (SNLI- ESIM), only obtains 36.1% accuracy.10 This im- plies that these datasets necessitate different (and complementary) forms of reasoning.
Second, our use of videos results in wide cover- age of dynamic and temporal situations Compared with SNLI, with contexts from Flickr30K (Plum- mer et al., 2017) image captions, Swag has more active verbs like âpullâ and âhit,â and fewer static verbs like âsitâ and âwearâ (Figure 4).11
Reason Explanation Freq. Situational The good ending is better in context. Plausibility The bad ending is implausible regard- 53.7% 14.4% less of context. The bad ending seems redundant; it is entailed by the context. Weirdness The bad ending is semantically or âthe Novelty grammatically malformed, e.g. man is getting out of the horse.â 1.8% 18.1% Ambiguous Both endings seem equally likely. 12.0%
Table 4: Justiï¬cations for ranking the gold answer over a wrong answer chosen by ESIM+ELMo.
that ESIM+ELMo answered incorrectly, for each extracting both the gold ending and the modelâs preferred ending. We asked 5 Amazon Mechanical Turk workers to pick the better ending (of which they preferred the gold endings 94% of the time) and to select one (or more) multiple choice reasons explaining why the chosen answer was better.
Third, our dataset suffers from few lexical bi- ases. Whereas fastText, a bag of n-gram model, obtains 67.0% accuracy on SNLI versus a 34.3% baseline (Gururangan et al., 2018), fastText ob- tains only 29.0% accuracy on Swag.12
# 5.2 Error analysis
We sought to quantify how human judgments dif- fer from the best studied model, ESIM+ELMo. We randomly sampled 100 validation questions
The options, and the frequencies, are outlined in Table 4. The most common reason for the turkers preferring the correct answer is situational (52.3% of the time), followed by weirdness (17.5%) This suggests that and plausibility (14.4%). ESIM+ELMo already does a good job at ï¬ltering out weird and implausible answers, with the main bottleneck being grounded physical understand- ing. The ambiguous percentage is also relatively low (12.0%), implying signiï¬cant headroom.
10The weights of SNLI-ESIM pick up primarily on entail- ment probability (0.59), as with neutral (0.46), while contra- diction is negatively correlated (-.42).
11Video data has other language differences; notably, char- acter names in LSMDC were replaced by âsomeoneâ
12The most predictive individual words on SWAG are in- frequent in number: âdottedâ with P(+|dotted) = 77% with 10.3 counts, and P(â|similar) = 81% with 16.3 counts. (Counts from negative endings were discounted 3x, as there are 3 times as many negative endings as positive endings).
# 5.3 Qualitative examples
Last, we show several qualitative examples in Ta- ble 5. Though models can do decently well by identifying complex alignment patterns between the two sentences (e.g. being âup a treeâ im- plies that âtreeâ is the end phrase), the incorrect model predictions suggest this strategy is insufï¬-
7
A waiter brings a fork. The waiter He is up a tree. Someone a) starts to step away. (74.76%) b) adds spaghetti to the table. (21.57%) c) brings a bunch of pie to the food (2.67%) d) drinks from the mug in the bowl. (0.98%) a) stands underneath the tree. (97.44%) b) is at a pool table holding a cup. (1.14%) c) grabs a ï¬ower from a paper. (0.96%) d) is eating some cereal. (0.45%) An old man rides a small bumper car. Several people a) get in the parking lot. (76.58%) b) wait in the car. (15.28%) c) get stuck with other bumper cars. (6.75%) d) are running down the road. (1.39%) He pours the raw egg batter into the pan. He
Table 5: Example questions answered by the best model, ESIM+Elmo, sorted by model probability. Correct model predictions are in blue, incorrect model predictions are red. The right answers are bolded.
cient. For instance, answering âAn old man rides a small bumper carâ requires knowledge about bumper cars and how they differ from regular cars: bumper cars are tiny, donât drive on roads, and donât work in parking lots, eliminating the alterna- tives. However, this knowledge is difï¬cult to ex- tract from existing corpora: for instance, the Con- ceptNet entry for Bumper Car has only a single relation: bumper cars are a type of vehicle. Other questions require intuitive physical reasoning: e.g, for âhe pours the raw egg batter into the pan,â about what happens next in making an omelet.
# 5.4 Where to go next?
for performing better video captioning (Pasunuru and Bansal, 2017), summarization (Pasunuru and Bansal, 2018), and generation (Holtzman et al., 2018), conï¬rming the importance of NLI research. The NLI task requires a variety of commonsense knowledge (LoBue and Yates, 2011), which our work complements. However, previous datasets for NLI have been challenged by unwanted an- notation artifacts, (Gururangan et al., 2018; Po- liak et al., 2018) or scale issues. Our work ad- dresses these challenges by constructing a new NLI benchmark focused on grounded common- sense reasoning, and by introducing an adversar- ial ï¬ltering mechanism that substantially reduces known and easily detectable annotation artifacts.
Our results suggest that Swag is a challenging testbed for NLI models. However, the adversarial models used to ï¬lter the dataset are purely stylis- tic and focus on the second sentence; thus, subtle artifacts still likely remain in our dataset. These patterns are ostensibly picked up by the NLI mod- els (particularly when using ELMo features), but the large gap between machine and human perfor- mance suggests that more is required to solve the dataset. As models are developed for common- sense inference, and more broadly as the ï¬eld of NLP advances, we note that AF can be used again to create a more adversarial version of Swag using better language models and AF models.
# 6 Related Work
Entailment NLI There has been a long his- tory of NLI benchmarks focusing on linguistic entailment (Cooper et al., 1996; Dagan et al., 2006; Marelli et al., 2014; Bowman et al., 2015; Lai et al., 2017; Williams et al., 2018). Re- cent NLI datasets in particular have supported learning broadly-applicable sentence representa- tions (Conneau et al., 2017); moreover, models trained on these datasets were used as components
Commonsense NLI Several datasets have been introduced to study NLI beyond linguistic entail- ment: for inferring likely causes and endings given a sentence (COPA; Roemmele et al., 2011), for choosing the most sensible ending to a short story (RocStories; Mostafazadeh et al., 2016; Sharma et al., 2018), and for predicting likelihood of a hy- pothesis by regressing to an ordinal label (JOCI; (Zhang et al., 2017)). These datasets are relatively small: 1k examples for COPA and 10k cloze ex- amples for RocStories.13 JOCI increases the scale by generating the hypotheses using a knowledge graph or a neural model. In contrast to JOCI where the task was formulated as a regression task on the degree of plausibility of the hypothesis, we frame commonsense inference as a multiple choice ques- tion to reduce the potential ambiguity in the labels and to allow for direct comparison between ma- chines and humans. In addition, Swagâs use of ad- versarial ï¬ltering increases diversity of situations and counterfactual generation quality.
13For RocStories, this was by design to encourage learning from the larger corpus of 98k sensible stories.
8
Last, another related task formulation is sen- tence completion or cloze, where the task is to pre- dict a single word that is removed from a given context (Zweig and Burges, 2011; Paperno et al., 2016).14 Our work in contrast requires longer tex- tual descriptions to reason about.
Vision datasets Several resources have been in- troduced to study temporal inference in vision. The Visual Madlibs dataset has 20k image cap- tions about hypothetical next/previous events (Yu et al., 2015); similar to our work, the test portion is multiple-choice, with counterfactual answers re- trieved from similar images and veriï¬ed by hu- mans. The question of âwhat will happen next?â has also been studied in photo albums (Huang et al., 2016), videos of team sports, (Felsen et al., 2017) and egocentric dog videos (Ehsani et al., 2018). Last, annotation artifacts are also a re- curring problem for vision datasets such as Vi- sual Genome (Zellers et al., 2018) and Visual QA (Jabri et al., 2016); recent work was done to cre- ate a more challenging VQA dataset by annotating complementary image pairs (Goyal et al., 2016).
Reducing gender/racial bias Prior work has sought to reduce demographic biases in word em- beddings (Zhang et al., 2018) as well as in image recognition models (Zhao et al., 2017). Our work has focused on producing a dataset with minimal annotation artifacts, which in turn helps to avoid some gender and racial biases that stem from elic- itation (Rudinger et al., 2017). However, it is not perfect in this regard, particularly due to biases in movies (Schoï¬eld and Mehr, 2016; Sap et al., 2017). Our methodology could potentially be ex- tended to construct datasets free of (possibly inter- sectional) gender or racial bias.
Physical knowledge Prior work has studied learning grounded knowledge about objects and verbs: from knowledge bases (Li et al., 2016), syn- tax parses (Forbes and Choi, 2017), word embed- dings (Lucy and Gauthier, 2017), and images and dictionary deï¬nitions (Zellers and Choi, 2017). An alternate thread of work has been to learn scripts: high-level representations of event chains (Schank and Abelson, 1975; Chambers and Juraf- sky, 2009). Swag evaluates both of these strands.
14Prior work on sentence completion ï¬ltered negatives with heuristics based on LM perplexities. We initially tried something similar, but found the result to still be gameable.
9
# 7 Conclusion
We propose a new challenge of physically situated commonsense inference that broadens the scope of natural language inference (NLI) with com- monsense reasoning. To support research toward commonsense NLI, we create a large-scale dataset Swag with 113k multiple-choice questions. Our dataset is constructed using Adversarial Filtering (AF), a new paradigm for robust and cost-effective dataset construction that allows datasets to be con- structed at scale while automatically reducing an- notation artifacts that can be easily detected by a committee of strong baseline models. Our adver- sarial ï¬ltering paradigm is general, allowing po- tential applications to other datasets that require human composition of question answer pairs.
# Acknowledgements
We thank the anonymous reviewers, members of the ARK and xlab at the University of Wash- ington, researchers at the Allen Institute for AI, and Luke Zettlemoyer for their helpful feed- back. We also thank the Mechanical Turk work- ers for doing a fantastic job with the human val- idation. This work was supported by the Na- tional Science Foundation Graduate Research Fel- lowship (DGE-1256082), the NSF grant (IIS- 1524371, 1703166), the DARPA CwC program through ARO (W911NF-15-1-0543), the IARPA DIVA program through D17PC00343, and gifts by Google and Facebook. The views and conclu- sions contained herein are those of the authors and should not be interpreted as representing endorse- ments of IARPA, DOI/IBC, or the U.S. Govern- ment.
# A Appendix
# A.1 More detail about video datasets
As mentioned in the main paper, we obtained con- texts and found endings from video data. The videos in the ActivityNet dataset are already bro- ken up into into clips. However, the LSMDC dataset contains captions for the entire movie, so it is possible that temporally adjacent captions de- scribe events that are far apart in time. Thus, we donât include any pair of captions that have a time- difference of more than 25 seconds.
In addition to the datasets we used, we also con- sidered the DiDeMo dataset, which consists of (of- ten several) referring expressions in a video (Hen-
dricks et al., 2017). However, many of the re- ferring expressions are themselves sentence frag- ments, (e.g. âï¬rst time we see peopleâ so we ultimately did not use this dataset.) Addition- ally, we considered the Visual Madlibs dataset (Yu et al., 2015), as it contains 10k hypothetical cap- tions written by Mechanical Turk workers about what might happen next given an image. How- ever, these captions are fundamentally different from the rest of the data (as theyâre about what might) happen next; as a result, they use different types of language. They also have different tenses versus the other datasets that we considered (e.g. past tense), as a result of the âMad-libsâ style of data collection.
# A.2 Details of the language model
Our language model follows standard best prac- tices: the input and output embedding layers are tied (Inan et al., 2017; Press and Wolf, 2017), all embedding and hidden layers are set to 512, and we used recurrent dropout (Gal and Ghahra- mani, 2016) on the hidden states and embed- ding layer. We additionally train a backwards language model alongside the forward language model, and they share embedding parameters. This adds extra supervision to the embedding layer and gives us another way to score candidate gen- erations. We ï¬rst pretrain the language model for two epochs on pairs of two sentences in the Toronto Books dataset (Zhu et al., 2015), and then train on sentence pairs from ActivityNet Captions and LSMDC, validating on held-out perplexity. For optimization, we use Adam (Kingma and Ba, 2015) with a learning rate of 10â3 and clip gradi- ents to norm 1.0.
All of the above details were validated using perplexity on a held-out set of the video datasets during early experimentation. Our ï¬nal develop- ment set forward perplexity was 31.2 and back- ward perplexity was 30.4. We tried more com- plicated language modeling architectures, such as from (J´ozefowicz et al., 2016), but ended up not seeing an improvement due to overï¬tting.
# A.3 Language model features for the MLP, during adversarial ï¬ltering
We obtained LM perplexity features to be used during adversarial ï¬ltering in the following ways, using both directions of the bidirectional language model. We extract perplexities for the context by itself (going forward), the ending given the con-
10
text (going forward), the context given the ending (going backward), and the ending by itself (go- ing backward). We also extract the probability of the ï¬nal generated token going forward, since sen- tences sometimes reach the length limit of 25 to- kens and end unnaturally.
# A.4 Reï¬nining the generated answers to four distractors
In the main paper, we noted that we started with 1023 negatives per example, which the adversarial ï¬ltering process ï¬ltered down to 9. Five of these were passed to mechanical turk workers, and we were left with anywhere between 0 and 4 of these per example as âdistractors.â (Note that we always were ï¬ltering out the second best option that the was selected by the turkers). This means that for many of our examples (62%) we actually have a fourth distractor. In these cases, we sorted the dis- tractors by their âunlikely/likelyâ score, so that the fourth distractor was the one deemed most likely. We still provided the fourth distractor in the train- ing set to be possibly used in future work, however we didnât train on it for simplicity.
# A.5 More information about Mechanical turk
We used several tricks to keep the interannotator agreement high (with a pairwise percent agree- ment of 79% at classifying an ending as either in the Top 2). First, we had a screening HIT where turkers were given detailed instructions for the task, and only the best-scoring turk workers qual- iï¬ed for the remaining HITs. Second, we periodi- cally dequaliï¬ed turkers that had a low agreement with the gold endings: any turk worker with an ac- curacy of less than 55% of classifying the âgoldâ ending as the best or second best, over 10 or more HITs, had the qualiï¬cation taken away. We also gave small bonuses to turkers with high accuracy. During our crowdsourcing, we tried to pay the Turkers a fair wage (median $8.57 per hour) and they left positive comments for us on TurkOpti- con and TurkerView. The total dataset cost was $23,000, or an average of 20 cents per example.
A.6
# Implementation details of the models considered
We implemented the neural models in PyTorch us- ing the AllenNLP library (Gardner et al., 2018). Our experiments use the Adam optimizer (Kingma and Ba, 2015), with a learning rate of 10â3 and
Questions with only generated endings Questions with one original ending Questions in total Sentence pairs from ActivityNet Sentence pairs from LSMDC Unique contexts Unique endings 25,618 87,939 113,557 51,439 62,118 92,221 452,683
# Table 6: Statistics of Swag.
5.0% ball, pull, hit, wall, inside, time, game, rope, team 4.9% window, red, long, drink, bowl, ingredient, mix 6.1% arm, speak, appear, climb, tree, roll, like, roof, edge 4.0% water, bar, board, blue, boat, ï¬y, river, join, dive 5.3% eye, smile, close, little, lean, cover, remove, lip 4.6% walk, outside, street, wave, pass, beach, sidewalk 5.7% ï¬eld, drop, slide, drive, right, kick, park, road, chest 4.7% watch, dog, ï¬ip, stick, land, demonstrate, trick, mat 4.5% dance, lift, try, line, snow, gun, catch, hill, bend 4.6% fall, crowd, pour, shake, ï¬nish, raise, grass, wooden 5.9% perform, spin, house, stage, routine, fence, bow
Table 7: A visualization of the diversity of the dataset, using a topic model (Blei et al., 2003).
gradient clipping, except for Decomposable At- tention and ESIM, where we use the AllenNLP default conï¬gurations.
# A.7 More info about dataset diversity
The ï¬nal dataset has a vocabulary size of 21000. We also visualize the coverage of the dataset with a Topic model (see Table 7).
# A.8 Comparing the distribution of verbs with MultiNLI
We also produced an extension to Figure 4 of the main paper, that involves verbs from MultiNLI, in Figure 5. We ended up not including it in the paper because we wanted to focus our comparison be- tween SNLI and Swag (as they are both grounded datasets). Interestingly, we ï¬nd that Swag has a less skewed cumulative distribution of verbs up to around 120, when afterwards MultiNLI has a slightly less skewed distribution. This is possi- bly due to the broader set of domains considered by MultiNLI, whereas we consider videos (which is also a broad domain! but still underrepresents words highly used in newswire text, for instance.)
# A.9 More examples
We have more qualitative examples in Table 8.
# References
Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In Proceed- ings of the 17th international conference on Compu- tational linguistics-Volume 1, pages 86â90. Associ- ation for Computational Linguistics.
David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of ma- chine Learning research, 3(Jan):993â1022.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large an- notated corpus for learning natural language infer- In Proceedings of the 2015 Conference on ence. Empirical Methods in Natural Language Process- ing, EMNLP 2015, Lisbon, Portugal, September 17- 21, 2015, pages 632â642.
Zheng Cai, Lifu Tu, and Kevin Gimpel. 2017. Pay at- tention to the ending: Strong neural baselines for the roc story cloze task. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 616â622.
Nathanael Chambers and Dan Jurafsky. 2009. Unsu- pervised Learning of Narrative Schemas and Their In Proceedings of the Joint Confer- Participants. ence of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Lan- guage Processing of the AFNLP: Volume 2 - Volume 2, ACL â09, pages 602â610, Stroudsburg, PA, USA. Association for Computational Linguistics.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for In Proceedings of the natural language inference. 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 1657â1668.
Gennaro Chierchia and Sally McConnell-Ginet. 2000. Meaning and Grammar (2Nd Ed.): An Introduction to Semantics. MIT Press, Cambridge, MA, USA.
Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670â680.
Robin Cooper, Dick Crouch, JV Eijckl, Chris Fox, JV Genabith, J Japars, Hans Kamp, David Mil- ward, Manfred Pinkal, Massimo Poesio, et al. 1996. A framework for computational semantics (fracas). Technical report, Technical report, The FraCaS Con- sortium.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Machine learning challenges. evalu- ating predictive uncertainty, visual object classiï¬ca- tion, and recognising tectual entailment, pages 177â 190. Springer.
11
1.0 0.8 ââ MuliNLI â SNLI â swaG 6 0.6 2 £04 > 0.2 259% 10° 10 Vocabulary Size (with words sorted by frequency)
Figure 5: Bottom: CDF for verbs in SNLI, Swag, and MultiNLI.
The lady demonstrates wrapping gifts using her feet. The lady
In a cafeteria, someone holds a combination tray and bowl in one hand. With the other, he
a) shows us the different shapes of the ornaments. (99.67%) b) continues playing when the lady talks to the camera. (0.26%) c) takes the desserts from the box and continues talking to the camera . (0.07%) d) cuts the paper with scissors. (0.01%) a) heads into his own study. (80.67%) b) glances around and studies the photo of the blonde someone. (8.45%) c) struggles to serve himself food with chopsticks. (6.82%) d) opens the wall , revealing an expanse of bed within. (4.06%) As he approaches , his kayak ï¬ips upside-down. As the view follows him, we A man is bending over a sink. He a) see silhouetted black clouds making him zoom out of the trees, catching smoke. (42.54%) b) drift over a busy city street , like down buildings on the tarmac. (41.41%) c) ï¬nd someone climbing into a tawny grave atop a road drawn among german soldiers. (13.73%) d) notice another man seated on the rocks to the right in red with a white helmet. (2.32%) a) takes a rag from over the sink, putting it in his mouth. (89.54%) b) is spraying a small dog with a hose. (6.07%) c) is carrying a shaving machine with a pressure washer. (4.29%) d) is putting a pair of shaving glass on the side of his face. (0.10%) People are walking next to the camels leading them. A building a) is shown riding the camels. (90.72%) b) is shown in the background. (8.39%) c) with a riï¬e is leading them. (0.87%) d) is then shown for several clip. (0.01%) A hockey game is in progress. two hockey players a) walked together in the middle of a ï¬eld. (48.11%) b) walk past with a goal. (44.00%) c) sit around a rope watching the other team. (5.30%) d) ram into each other and begin ï¬ghting. (2.58%) Meanwhile, someone parries another giant âs attacks. The giant A lady pours ice in a glass. The lady a) strikes a ï¬ght and thuds into someone as he rushes in, who brieï¬y ï¬ees. (89.96%) b) knocks someone âs sword out of his hand. (5.25%) c) spins him across the bars. (4.55%) d) throws stick to the bat, dragging around. (0.24%) a) pours ice into the glass. (65.14%) b) measures the contents of the glass. (33.56%) c) pours lemon mixture into a glass and pours liquids into asian juice. (0.87%) d) adds 3 liquors and lemon juice. (0.43%) The stars emerge from behind the clouds. Someone a) backs away from the windows of the clip, as light- ning billows over the sky. (96.59%) b) walks back across the room with nothing of his own. (1.82%) c) stands on his boat and looks at a deep orange and red sunset. (1.47%) d) shoots the man âs shoulder sideways, but neither do anything for a few seconds. (0.12%) Someone stands waiting with the bridesmaids. Every- one a) seems to be ecstatic. (78.33%) b) looks around as someone walks down the aisle, arm-in-arm with someone âs uncle. (8.97%) c) holds someone âs eyebrow. (8.84%) d) looks at her anxiously as someone walks and sits in his seat. (3.85%)
Table 8: More (incorrect) questions answered by the best model, ESIM+Elmo, sorted by model probabil- ity. The right answers are bolded.
Kiana Ehsani, Hessam Bagherinezhad, Joseph Red- mon, Roozbeh Mottaghi, and Ali Farhadi. 2018. Who let the dogs out? modeling dog behavior from visual data. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Panna Felsen, Pulkit Agrawal, and Jitendra Malik. 2017. What will happen next? forecasting player moves in sports videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog-
nition, pages 3342â3351.
Maxwell Forbes and Yejin Choi. 2017. Verb physics: Relative physical knowledge of actions and objects. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 266â276.
Yarin Gal and Zoubin Ghahramani. 2016. A theoret- ically grounded application of dropout in recurrent
12
neural networks. In Advances in neural information processing systems, pages 1019â1027.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Pe- ters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language pro- cessing platform. arXiv preprint arXiv:1803.07640.
JJ Gibson. 1979. The ecological approach to visual perception. Houghton Mifï¬in Comp.
Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that re- In Proceedings of quire simple lexical inferences. the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 650â655, Melbourne, Australia. Association for Computational Linguistics.
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016. Making the V in VQA matter: Elevating the role of image un- derstanding in Visual Question Answering. arXiv preprint arXiv:1612.00837.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural lan- In Proceedings of the 2018 guage inference data. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107â112. Association for Computational Lin- guistics.
Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. 2015. Activ- itynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 961â970.
Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2017. Localizing moments in video with natural language. In Proceedings of the IEEE International Confer- ence on Computer Vision (ICCV).
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735â 1780.
Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1638â1649. Association for Computational Linguistics.
Ting-Hao Kenneth Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Ja- cob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, et al. 2016. Visual storytelling. In Proceedings of the 2016 Conference of the North
American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1233â1239.
Hakan Inan, Khashayar Khosravi, and Richard Socher. 2017. Tying word vectors and word classiï¬ers: A In ICLR. loss framework for language modeling. ICLR.
Allan Jabri, Armand Joulin, and Laurens van der Maaten. 2016. Revisiting visual question answer- ing baselines. In European conference on computer vision, pages 727â739. Springer.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efï¬cient text classiï¬cation. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, volume 2, pages 427â431.
Rafal J´ozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the lim- its of language modeling. CoRR, abs/1602.02410.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. ICLR.
Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, In and Sanja Fidler. 2015. Skip-thought vectors. Advances in neural information processing systems, pages 3294â3302.
Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-Captioning In International Conference on Events in Videos. Computer Vision (ICCV).
Alice Lai, Yonatan Bisk, and Julia Hockenmaier. 2017. Natural language inference from multiple premises. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 100â109, Taipei, Tai- wan. Asian Federation of Natural Language Pro- cessing.
Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1445â1455, Berlin, Germany. Association for Computational Linguistics.
Peter LoBue and Alexander Yates. 2011. Types of common-sense knowledge needed for recognizing textual entailment. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 329â334. Association for Computational Linguistics.
Li Lucy and Jon Gauthier. 2017. Are distributional representations ready for the real world? evaluat- ing word vectors for grounded perceptual meaning. In Proceedings of the First Workshop on Language Grounding for Robotics, pages 76â85.
13
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella bernardi, and Roberto Zampar- elli. 2014. A sick cure for the evaluation of compo- sitional distributional semantic models. In Proceed- ings of the Ninth International Conference on Lan- guage Resources and Evaluation (LRECâ14), pages 216â223, Reykjavik, Iceland. European Language Resources Association (ELRA). ACL Anthology Identiï¬er: L14-1314.
Nasrin Mostafazadeh, Nathanael Chambers, Xiadong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and evaluation framework for deeper understanding of commonsense stories. In NAACL.
Denis Paperno, Germ´an Kruszewski, Angeliki Lazari- dou, Ngoc Quan Pham, Raffaella Bernardi, San- dro Pezzelle, Marco Baroni, Gemma Boleda, and The lambada dataset: Raquel Fernandez. 2016. Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1525â1534, Berlin, Germany. Association for Computational Linguistics.
Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249â2255.
Ramakanth Pasunuru and Mohit Bansal. 2017. Multi- task video captioning with video and entailment generation. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1273â1283, Van- couver, Canada. Association for Computational Lin- guistics.
Ramakanth Pasunuru and Mohit Bansal. 2018. Multi- reward reinforced summarization with saliency and entailment. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 646â 653. Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532â1543.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227â 2237. Association for Computational Linguistics.
Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, and Svet-
lana Lazebnik. 2017. Flickr30k entities: Col- lecting region-to-phrase correspondences for richer Int. J. Comput. Vision, image-to-sentence models. 123(1):74â93.
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis Only Baselines in Natural Language In- ference. In Joint Conference on Lexical and Com- putational Semantics (StarSem).
Oï¬r Press and Lior Wolf. 2017. Using the output em- bedding to improve language models. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 157â163.
Melissa Roemmele, Cosmin Adrian Bejan, and An- drew S Gordon. 2011. Choice of Plausible Alterna- tives: An Evaluation of Commonsense Causal Rea- In AAAI Spring Symposium: Logical For- soning. malizations of Commonsense Reasoning.
Anna Rohrbach, Atousa Torabi, Marcus Rohrbach, Niket Tandon, Christopher Pal, Hugo Larochelle, Aaron Courville, and Bernt Schiele. 2017. Movie Description. International Journal of Computer Vi- sion, 123(1):94â120.
and Benjamin Van Durme. 2017. Social bias in elicited natural lan- guage inferences. In Proceedings of the First ACL Workshop on Ethics in Natural Language Process- ing, pages 74â79.
Maarten Sap, Marcella Cindy Prasettio, Ari Holtzman, Hannah Rashkin, and Yejin Choi. 2017. Connota- tion frames of power and agency in modern ï¬lms. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2329â2334.
Roger C. Schank and Robert P. Abelson. 1975. Scripts, In Proceedings of the 4th plans, and knowledge. International Joint Conference on Artiï¬cial Intelli- gence - Volume 1, IJCAIâ75, pages 151â157, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Alexandra Schoï¬eld and Leo Mehr. 2016. Gender- distinguishing features in ï¬lm dialogue. In Proceed- ings of the Fifth Workshop on Computational Lin- guistics for Literature, pages 32â39.
Roy Schwartz, Maarten Sap, Ioannis Konstas, Li Zilles, Yejin Choi, and Noah A. Smith. 2017. The ef- fect of different writing tasks on linguistic style: A case study of the ROC story cloze task. In Proc. of CoNLL.
Rishi Sharma, James Allen, Omid Bakhshandeh, and Nasrin Mostafazadeh. 2018. Tackling the story end- ing biases in the story cloze test. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), volume 2, pages 752â757.
14
Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural ten- sor networks for knowledge base completion. In Advances in neural information processing systems, pages 926â934.
Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In AAAI Conference on Arti- ï¬cial Intelligence, pages 4444â4451.
Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 818â827.
Vladimir Vapnik. 2000. The Nature of Statistical Information Science Learning Theory, 2 edition. and Statistics. Springer-Verlag, New York.
Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112â1122. Association for Computational Linguistics.
Licheng Yu, Eunbyung Park, Alexander C. Berg, and Tamara L. Berg. 2015. Visual Madlibs: Fill in the blank Image Generation and Question Answering. ICCV.
Rowan Zellers and Yejin Choi. 2017. Zero-shot activ- ity recognition with verb attribute induction. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP).
Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi. 2018. Neural motifs: Scene graph pars- ing with global context. In Conference on Computer Vision and Pattern Recognition.
Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with ad- versarial learning. In Conference on Artiï¬cial Intel- ligence, Ethics and Society.
Sheng Zhang, Rachel Rudinger, Kevin Duh, and Ben- jamin Van Durme. 2017. Ordinal Common-sense Inference. Transactions of the Association for Com- putational Linguistics, 5:379â395.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias ampliï¬cation us- ing corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979â2989.
15
Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watch- In arXiv preprint ing movies and reading books. arXiv:1506.06724.
Geoffrey Zweig and Christopher JC Burges. 2011. The microsoft research sentence completion challenge. Technical report, Citeseer. | {
"id": "1804.07461"
} |
1808.04444 | Character-Level Language Modeling with Deeper Self-Attention | LSTMs and other RNN variants have shown strong performance on character-level
language modeling. These models are typically trained using truncated
backpropagation through time, and it is common to assume that their success
stems from their ability to remember long-term contexts. In this paper, we show
that a deep (64-layer) transformer model with fixed context outperforms RNN
variants by a large margin, achieving state of the art on two popular
benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good
results at this depth, we show that it is important to add auxiliary losses,
both at intermediate network layers and intermediate sequence positions. | http://arxiv.org/pdf/1808.04444 | Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, Llion Jones | cs.CL, cs.AI, cs.LG, stat.ML | 8 pages, 7 figures | null | cs.CL | 20180809 | 20181210 | 8 1 0 2 c e D 0 1
] L C . s c [ 2 v 4 4 4 4 0 . 8 0 8 1 : v i X r a
# Character-Level Language Modeling with Deeper Self-Attention
# Rami Al-Rfou*
# Dokook Choe* Noah Constant* Mandy Guo*
# Llion Jones*
# Google AI Language {rmyeid, choed, nconstant, xyguo, llion}@google.com
# Abstract
LSTMs and other RNN variants have shown strong perfor- mance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model (Vaswani et al. 2017) with ï¬xed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermedi- ate network layers and intermediate sequence positions.
only effectively uses around 200 tokens of context (even if more is provided), and that word order only has an effect within approximately the last 50 tokens.
In this paper, we show that a non-recurrent model can achieve strong results on character-level language model- ing. Speciï¬cally, we use a deep network of transformer self-attention layers (Vaswani et al. 2017) with causal (backward-looking) attention to process ï¬xed-length inputs and predict upcoming characters. The model is trained on mini-batches of sequences from random positions in the training corpus, with no information passed from one batch to the next.
Introduction Character-level modeling of natural language text is chal- lenging, for several reasons. First, the model must learn a large vocabulary of words âfrom scratchâ. Second, natural text exhibits dependencies over long distances of hundreds or thousands of time steps. Third, character sequences are longer than word sequences and thus require signiï¬cantly more steps of computation.
In recent years, strong character-level language models typically follow a common template (Mikolov et al. 2010; 2011; Sundermeyer, Schl¨uter, and Ney 2012). A recurrent neural net (RNN) is trained over mini-batches of text se- quences, using a relatively short sequence length (e.g. 200 tokens). To capture context longer than the batch sequence length, training batches are provided in sequential order, and the hidden states from the previous batch are passed for- ward to the current batch. This procedure is known as âtrun- cated backpropagation through timeâ (TBTT), because the gradient computation doesnât proceed further than a single batch (Werbos 1990). A range of methods have arisen for unbiasing and improving TBTT (Tallec and Ollivier 2017; Ke et al. 2017).
While this technique gets good results, it adds complex- ity to the training procedure, and recent work suggests that models trained in this manner donât actually make âstrongâ use of long-term context. For example Khandelwal et al. (2018) ï¬nd that a word-based LSTM language model
Our primary ï¬nding is that the transformer architecture is well-suited to language modeling over long sequences and could replace RNNs in this domain. We speculate that the transformerâs success here is due to its ability to âquicklyâ propagate information over arbitrary distances; by compar- ison, RNNs need to learn to pass relevant information for- ward step by step.
We also ï¬nd that some modiï¬cations to the basic trans- former architecture are beneï¬cial in this domain. Most im- portantly, we add three auxiliary losses, requiring the model to predict upcoming characters (i) at intermediate sequence positions, (ii) from intermediate hidden representations, and (iii) at target positions multiple steps in the future. These losses speed up convergence, and make it possible to train deeper networks.
Character Transformer Model Language models assign a probability distribution over to- ken sequences t0:L by factoring out the joint probability as follows, where L is the sequence length:
L Pr(to:z) = P(to) [] PrGtiltox-1), dd) i=l
i=1 To model the conditional probability Pr(ti|t0:iâ1), we train a transformer network to process the character se- quence t0:iâ1. Transformer networks have recently showed signiï¬cant gains in tasks that require processing sequences accurately and efï¬ciently.
Equal contribution.
Copyright © 2019, Association for the Advancement of Artiï¬cial Intelligence (www.aaai.org). All rights reserved.
Our character-level transformer architecture has 64 trans- former layers. Following Vaswani et al. (2017), by âtrans- former layerâ we mean a block containing a multihead self-
attention sub-layer followed by a feed-forward network of two fully connected sub-layers. For more details on the transformer architecture, refer to Vaswani et al. (2017) and the tensor2tensor library1. To ensure that the modelâs predictions are only conditioned on past characters, we mask our attention layers with a causal attention, so each position can only attend leftward. This is the same as the âmasked attentionâ in the decoder component of the original trans- former architecture used for sequence-to-sequence problems (Vaswani et al. 2017).
Figure 1 shows our initial model with the causal attention mask limiting information ï¬ow from left to right. Each char- acter prediction is conditioned only on the characters that appeared earlier.
s r e y a L r e m r o f s n a r T t0 t1 t2 t3 t4
Figure 1: Character transformer network of two layers pro- cessing a four character sequence to predict t4. The causal attention mask limits information to left-to-right ï¬ow. Red arrows highlight the prediction task the network has to learn.
# Auxiliary Losses
Our network is, to our knowledge, deeper than any trans- former network discussed in previous work. In initial exper- iments, we found training a network deeper than ten layers to be challenging, with slow convergence and poor accuracy. We were able to deepen the network to better effect through the addition auxiliary losses, which sped up convergence of the training signiï¬cantly.
We add several types of auxiliary losses, correspond- ing to intermediate positions, intermediate layers, and non- adjacent targets. We hypothesize that these losses not only speed up convergence but also serve as an additional regu- larizer. During training, the auxiliary losses get added to the total loss of the network with discounted weights. Each type of auxiliary loss has its own schedule of decay. During eval- uation and inference time, only the prediction of the ï¬nal position at the ï¬nal layer is used.
One consequence of this approach is that a number of the network parameters are only used during train- ingâspeciï¬cally, the parameters in the output classiï¬cation layers associated with predictions made from intermediate layers and predictions over non-adjacent targets. Thus, when
1https://github.com/tensorflow/tensor2tensor
listing the number of parameters in our models, we distin- guish between âtraining parametersâ and âinference param- etersâ.
Multiple Positions First, we add prediction tasks for each position in the ï¬nal layer, extending our predictions from one per example to |L| (sequence length). Note, predicting over all sequence positions is standard practice in RNN- based approaches. However in our case, since no informa- tion is passed forward across batches, this is forcing the model to predict given smaller contextsâsometimes just one or two characters. It is not obvious whether these sec- ondary training tasks should help on the primary task of pre- dicting with full context. However, we ï¬nd that adding this auxiliary loss speeds up training and gives better results (see Ablation Experiments below). Figure 2 illustrates the task of predicting across all sequence positions. We add these losses during training without decaying their weights.
s r e y a L r e m r o f s n a r T t1 t2 t3 t0 t1 t2 t3
Figure 2: Adding the intermediate positions prediction tasks to our network. Now, we predict the ï¬nal character t4 and all intermediate characters t0:3. t3 has access only to t0:2 because of the causal attention masks. All of these losses contribute equally during training.
Intermediate Layer Losses In addition to the ï¬nal pre- diction layer, we add predictions made from the output of each intermediate transformer layer. As with the ï¬nal layer, we add predictions for all intermediate positions in the se- quence (see Figure 3). Lower layers are weighted to con- tribute less and less to the loss as training progresses. If there are n layers total, then the lth intermediate layer stops con- tributing any loss after ï¬nishing l/2n of the training. This schedule drops all intermediate losses after half of the train- ing is done.
Multiple Targets At each position in the sequence, the model makes two (or more) predictions of future characters. For each new target we introduce a separate classiï¬er. The losses of the extra targets get weighted by a multiplier of 0.5 before being added to their corresponding layer loss.
Positional Embeddings In the basic transformer network described in Vaswani et al. (2017), a sinusoidal timing signal is added to the input sequence prior to the ï¬rst transformer layer. However, as our
# t4
s r e y a L r e m r o f s n a r T t1 t1 t2 t2 t3 t3 t0 t1 t2 t3 t4 t4
Figure 3: Our network after adding prediction tasks for the intermediate layers. For this example of two layers, the losses of the intermediate layer prediction tasks will be ab- sent after ï¬nishing 25% of the training.
s r e y a L r e m r o f s n a r T t1 t2 t1 t2 t2 t3 t2 t3 t3 t4 t3 t4 t0 t1 t2 t3 t4 t5 t4 t5
Figure 4: Our example network after adding two predictions per position.
network is deeper (64 layers), we hypothesize that the tim- ing information may get lost during the propagation through the layers. To address this, we replace the timing signal with a learned per-layer positional embedding added to the in- put sequence before each transformer layer. Speciï¬cally, the model learns a unique 512-dimensional embedding vector for each of the L context positions within each of N layers, giving a total of L à N à 512 additional parameters. We are able to safely use positional embeddings for our task, as we donât require the model to generalize to longer contexts than those seen during training.
# Experimental Setup
Datasets For evaluation we focus mainly on text8 (Mahoney 2009). This dataset consists of English Wikipedia articles, with su- perï¬uous content removed (tables, links to foreign language versions, citations, footnotes, markup, punctuation). The re- maining text is processed to use a minimal character vocab- ulary of 27 unique charactersâlowercase letters a through z, and space. Digits are replaced by their spelled-out equiva- lents, so â20â becomes âtwo zeroâ. Character sequences not in the range [a-zA-Z] are converted to a single space. Fi- nally, the text is lowercased. The size of the corpus is 100M
characters. Following Mikolov et al. (2012) and Zhang et al. (2016), we split the data into 90M characters for train, 5M characters for dev, and 5M characters for test.
To aid in comparison with other recent approaches, we also evaluate our model on enwik8 (Mahoney 2009) which is 100M bytes of unprocessed Wikipedia text, including markup and non-Latin characters. There are 205 unique bytes in the dataset. Following Chung et al. (2015), and as in text8, we split the data into 90M, 5M and 5M for training, dev and test respectively.
# Training
Compared to most models based on transformers (Vaswani et al. 2017; Salimans et al. 2018), our model is very deep, with 64 transformer layers and each layer using two atten- tion heads. Each transformer layer has a hidden size of 512 and a ï¬lter size of 2048. We feed our model sequences of length 512. Each item in the sequence represents a single byte (or equivalently, one character in text8) which gets replaced by its embedding, a vector of size 512. We add to the byte embeddings a separate learned positional embed- ding for each of the 512 token positions, as described in the Positional Embeddings section above. We do the same ad- dition at each layer activation throughout the network. The positional embeddings are not shared across the layers. With two predictions per position, each layer learns to predict 1024 characters. Because we are primarily interested in pre- dicting the immediately following character (one step away), we halve the loss of predicting characters two steps away. The prediction layers are logistic regression layers over the full 256 outputs (the number of unique bytes). To demon- strate the generality of the model, we always train and pre- dict over all 256 labels, even on datasets that cover a smaller vocabulary. Despite this, we found that in practice the model never predicted a byte value outside of the ones observed in the training dataset.
The model has approximately 235 million parameters, which is larger than the number of characters in the text8 training corpus. To regularize the model, we apply dropout in the attention and ReLU layers with a probability of 0.55. We use the momentum optimizer with 0.99 momentum. The learning rate is ï¬xed during training to 0.003. We train our model for 4 million steps, with each step processing a batch of 16 randomly selected sequences. We drop the intermedi- ate layer losses consecutively, as described in the Intermedi- ate Layer Losses section above. Starting from the ï¬rst layer, after every 62.5K (= 4Mà 1 2â64 ) steps, we drop the losses introduced by the next layer. According to this schedule, af- ter training is halfway complete, only the ï¬nal layer losses are present.
# Evaluation
At inference time, we use the modelâs prediction at the ï¬- nal position of the ï¬nal layer to compute the probability of a character given a context of 512 characters. There is no state passed between predictions as would be the case with RNN models, so for each character predicted we have to pro- cess the context from scratch. Because there is no reused
Model LSTM (Cooijmans et al. 2016) BN-LSTM (Cooijmans et al. 2016) HM-LSTM (Chung, Ahn, and Bengio 2016) Recurrent Highway (Zilly et al. 2016) mLSTM (Krause et al. 2016) T12 (ours) T64 (ours) bpc 1.43 1.36 1.29 1.27 1.27 1.18 1.13
mLSTM + dynamic eval (Krause et al. 2017) Table 1: Comparison of various models on text8 test.
45
dev 1.25 1.17 1.12 1.09 1.06 bpc test 1.34 1.26 1.20 1.16 1.13 Accuracy (%) test dev 71.1 72.8 73.0 74.8 74.4 76.1 75.3 76.9 75.9 77.3
Table 2: Bits per character (bpc) and accuracy of our best model on text8 dev and test, for different context lengths.
computation from previous steps, our model requires ex- pensive computational resources for evaluation and infer- ence. We measure the performance of training checkpoints (roughly every 10,000 steps) by evaluating bits per character (bpc) over the entire the validation set, and save the param- eters that perform the best. Our best model is achieved after around 2.5 million steps of training, which takes 175 hours on a single Google Cloud TPU v2.
Results We report the performance of our best model (T64) on the validation and test sets. Table 1 compares our models against several recent results. On the test set, we achieve a new state of the art, 1.13 bpc. This model is 5x larger than previous models, which necessitated aggressive dropout rates of 0.55. For better comparison with smaller models, we also train a smaller model (T12) with 41M parameters. This model con- sists of 12 layers, and trained for 8M steps, with a reduced dropout rate of 0.2. All other settings were left the same as T64. Our smaller model still outperforms previous models, achieving 1.18 bpc on the test dataset. Increasing the depth of the network from 12 layers to 64 improved the results sig- niï¬cantly, with the auxiliary losses enabling the training to better utilize the depth of the network. Note, our models do not use dynamic evaluation (Krause et al. 2017), a technique that adjusts model weights at test time by training on test
Model FS-LSTM-4 (Mujika, Meier, and Steger 2017) mLSTM (Krause et al. 2016) cmix v13 (Knol 2017) T12 (ours) T64 (ours) Parameters (Ã106) inference train - 47 - 46 - - 41 44 219 235 bpb 1.25 1.24 1.23 1.11 1.06
mLSTM + dynamic eval (Krause et al. 2017) Table 3: Comparison of various models on enwik8 test.
46
1.19
1.08
# data.
Table 2 shows the performance of our model given differ- ent context sizes. We are able to achieve state-of-the-art re- sults once the context increases beyond 128 characters, with the best performance of 1.06 bpc at 512 characters. As ex- pected, the model performs better when it is given more con- text. However this trend levels off after 512 characters; we do not see better results using a context of 1024.
Using the same hyperparameters and training procedure for text8, we also train and evaluate the T12 and T64 ar- chitectures on enwik8 (see Table 3). Note, several previous authors discuss âbits per characterâ on enwik8 but are in fact reporting bits per byte. Without retuning for this dataset, our models still achieve state-of-the-art performance.
# Ablation Experiments
To better understand the relative importance of the several modiï¬cations we proposed, we run an ablation analysis. We start from our best model T64 and then remove one modiï¬- cation at a time. For example, when we disable Multiple Po- sitions, the model gets trained with only the last position loss for each layer. This corresponds to calculating {L(t4 | t0:3), L(t5 | t0:3)} in the example shown in Figure 4 for both the ï¬rst and the second layers. When disabling Positional Em- beddings, we add the default transformer sinusoidal timing signal before the ï¬rst layer.
Description T64 (Baseline) T64 w/out Multiple Positions T64 w/out Intermediate Layer Losses T64 w/out Positional Embeddings T64 w/out Multiple Targets T64 w/ SGD Optimizer bpc âbpc 1.062 2.482 1.158 1.069 1.068 1.065 - 1.420 0.096 0.007 0.006 0.003
Table 4: Evaluation of T64 on text8 dev with context set to 512. Disabling each feature or loss lowers the quality of the model. The biggest win comes from adding multiple po- sitions and intermediate layers losses.
For the ablation experiments, we reuse the hyperparame- ters from our best model to avoid a prohibitively expensive parameter search for each ablation. The only exception is the SGD experiment, where we vary the learning rate. The analysis shows that the biggest advantage comes from mul- tiple positions and intermediate layers losses. Predicting all the intermediate positions leads to signiï¬cant speed up in convergence, since the model sees more effective training examples per batch. Adding losses at the intermediate lay- ers acts in the same spirit by forcing more predictions per training step.
Finally, we replace momentum with SGD as our opti- mizer, using a range of learning rates (0.3, 0.1, 0.03, 0.01, 0.003, 0.001). This ablation shows that SGD produces com- petitive models, with learning rate 0.1 giving the best per- formance. Despite the depth of our network, SGD is able to train the network efï¬ciently with the help of our auxiliary losses.
Type Model Word Byte J´ozefowicz et al. (2016) T64 bpb - 1.03 ppl 23.7 40.6
Table 5: Performance of T64 on the lm1b test set.
Comparison with Word-Level Models To understand how byte-level language models perform in comparison to word-level language models, we train T64 on the lm1b corpus (Chelba et al. 2013). For lm1b, we use the standard train/test split of the preprocessed corpus, where out-of-vocab words have been replaced with UNK, to allow comparison to previous work on word and word-piece models. We report word perplexity (ppl) by converting bits- per-byte (bpb) into ppl2. During training we use the sec- ond shard (01) of the heldout dataset as a dev set, as the ï¬rst shard (00) is the test. Given this is a signiï¬cantly larger dataset than text8, we set all dropouts to zero. Table 5 shows a gap in performance between the two classes of lan- guage models. This comparison can serve as a starting point for researching possible ways to bridge the gap.
Qualitative Analysis To probe the strengths and weaknesses of our best model (T64), we run the model forward, starting with the seed se- quence of 512 characters in Figure 5, taken from the text8 test set. Figure 6 shows several per-character metrics for the modelâs predictions over the true continuation of this seed text. At each position, we measure i) the modelâs prediction entropy in bits across all 256 output classes, ii) its lossâ the negative log probability of the target label, i.e. the âbits per characterâ for this position, and iii) the rank of the tar- get in the list of output classes sorted by likelihood. Unsur- prisingly, the model is least certain when predicting the ï¬rst character of a word, and becomes progressively more conï¬- dent and correct as subsequent characters are seen.
To investigate the degree to which our model prefers ac- tual English words over non-existent words, we compute the likelihood the model assigns to all continuations after the seed. We cut off continuations when they reach a space char- acter, or when the total probability of the continuation falls below 0.001. Figure 5 shows the entire set of word comple- tions, in order of probability, where the initial pr- from the seed is repeated for readability. Note that these are all real or plausible (proofed) English words, and that even short but bad continuations like prz are assigned a lower cumu- lative probability than long realistic word completions like predictable.
We expect that the transformer self-attention should make it easy for our model to copy sequences observed in the con- text over long distances (up to the context size of 512 char- acters). To test this expectation, we corrupt the seed and con- tinuation from above by introducing a fake name zjakdmu bmijwxn. Speciï¬cally, we change the ï¬rst occurrence of elizabeth in the seed to zjakdmu bmijwxn, and the
2For this test set, ppl = 2bpbâ826189/159658, where 826,189 is the number of bytes and 159,658 is the number of tokens.
# Seed
mary was not permitted to see them or to speak in her own de- fence at the tribunal she refused to offer a written defence unless elizabeth would guarantee a verdict of not guilty which elizabeth would not do although the casket letters were accepted by the inquiry as genuine after a study of the handwriting and of the information contained therein and were generally held to be cer- tain proof of guilt if authentic the inquiry reached the conclusion that nothing was proven from the start this could have been pr
# Word Completions
proven, proved, proof, prevented, presented, problematic, probably, provided, practical, provoked, preceded, predicted, previously, presumed, praised, proposed, practicable, pro- duced, present, preserved, precisely, prior, protected, probable, prompted, proofed, properly, practiced, prohibited, profound, preferable, proceeded, precise, predictable, practically, prevalent
Figure 5: A seed sequence of 512 characters taken from the text8 test set, and all word completions assigned cumula- tive probability above 0.001 to follow the seed, in order from most likely (0.529) to least likely (0.001).
second occurrence to she. Similarly, in the continuation, we change elizabeth to zjakdmu bmijwxn. The result- ing distance between the two occurrences of the fake name is 434 characters.
Figure 7a conï¬rms that the model can successfully copy over this long distance. While the initial z in zjakdmu is unexpected, the model immediately chooses to copy the re- mainder of this word from the context, as opposed to pre- dicting any real z- words learned during training. Similarly, while the model is somewhat unsure whether the fake sur- name bmijwxn will appear (assigning the initial b a rank of two), it immediately picks up on the correspondence after the b is observed, correctly predicting the remainder of the fake surname.
For comparison, Figure 7b shows how the model would rank the targets in our fake continuation if the original seed with elizabeth were used. This conï¬rms that the fake name is not predictable based on knowledge gained through training, and is indeed being copied from the preceding con- text.
Generation For generating samples using our language model, we train on a larger and less processed dataset, enwik9 (Mahoney 2009). We split enwik9 into 900M, 50M and 50M for training, dev and test. Using the dev dataset to tune our dropout, we ï¬nd that dropout=0.1 performs the best. On the test dataset, T64 achieves 0.85 bpb. Table 6 shows different generated samples following the seed text, using a sampling temperature of 1.0.
Related Work Character-level modeling has shown promise in many ar- eas such as sentiment analysis (Radford, J´ozefowicz, and
Loss (bits) Entropy (bits) an QD COPNWAUDIUMBO O 00 4 2 Rank (logscale) 1 edicted a5 the only Conclusion that would Satisty elizabeth the authenticity: oF edicted as the only conclusion that would satisfy elizabeth the authenticity of edicted a5 the only Conclusion that would Satisty elizabeth the authenticity of
Figure 6: Per-character entropy, loss and rank assigned by T64 after seeding on the 512 character sequence from Figure 5.
# Seed
'''Computational neuroscience''' is an interdisciplinary ï¬eld which draws on [[neuroscience]], [[computer sci- ence]], and [[applied mathematics]]. It most often uses mathematical and computational techniques such as com- puter [[simulation]]s and [[mathematical model]]s to un- derstand the function of the [[nervous system]]. The ï¬eld of computational neuroscience began with the work of [[Andrew Huxley]], [[Alan Hodgkin]], and [[David Marr]]. The results of Hodgkin and Huxley's pi- oneering work in developing
Sample 1
computational neuroscience were chronicled in ''[[Is Mathematics Anything I Could Learn?]]''. (ISBN 0826412246). Computational neuroscience concerned neurological auraria and the inherited ability to communicate and re- spond to environmental destruction - the model were published in 1982 and 1983 re- spectively, and the subsequent work on the ï¬eld began its graduate program with [[M
# Sample 2 |
Sample 3
ability to easily adapt to other languages. Neural network based language modeling has been heavily researched since its effectiveness was shown by Bengio et al. (2003). By far, the most popular architecture in this area is the RNN and variants, ï¬rst studied in Mikolov et al. (2010).
Much of the progress in this area has been made by mitigating the vanishing gradients problem (Hochreiter et al. 2001) by architectures such as LSTMs (Hochreiter and Schmidhuber 1997), GRU (Cho et al. 2014), Recurrent Highway Networks (Zilly et al. 2016), Unitary RNNs (Ar- jovsky, Shah, and Bengio 2015) and others. This is an issue that transformers do not have, due to attention allowing short paths to all inputs. Methods of normalizing activation func- tions, such as Batch Normalization (Ioffe and Szegedy 2015; Merity, Keskar, and Socher 2017) and Layer Normaliza- tion (Lei Ba, Kiros, and Hinton 2016) have also demon- strated improvements on language modeling tasks. As with this work, progress has been made with discovering ways to regularize sequential architectures, with techniques such as Recurrent Dropout (Zaremba, Sutskever, and Vinyals 2014; Gal and Ghahramani 2015) and Zoneout (Krueger et al. 2016; Rocki 2016).
Truth the voltage clamp allowed them to develop the ï¬rst mathematical model of the [[action poten- tial]]. David Marr's work focuses on
Table 6: Samples generated by T64, seeded with text from the enwik9 dev set, using a sampling temperature of 1.0.
Sutskever 2017), question answering (Kenter, Jones, and Hewlett 2018) and classiï¬cation (Zhang, Zhao, and LeCun 2015), and is an exciting area due to its simplicity and the
A closely related architecture is the Neural Cache Model (Grave, Joulin, and Usunier 2016), where the RNN is allowed to attend to all of its previous hidden states at each step. Another similar model is used in (Daniluk et al. 2017) where a key-value attention mechanism similar to transformers is used. Both approaches show improvements on word level language modeling. Memory Networks (We- ston, Chopra, and Bordes 2014) have a similarity to the transformer model in design as it also has layers of atten- tion for processing a ï¬x memory representing the input doc-
BH a Rank (logscale) PN B © edicted as the only conclusion that would satisfy zjakdmu bmijwxn the authentici
(a) Continuing after the modiï¬ed seed (including the fake name 434 characters away).
BH a Rank (logscale) PN B © edicted as the only conclusion that would satisfy zjakdmu bmijwxn the authentici
(b) Continuing after the original seed from Figure 5.
Figure 7: Per-character rank assigned by T64 to a fake continuation, after being seeded on either (a) the fake context where elizabeth is replaced with zjakdmu bmijwxn, or (b) the original context.
ument and has been shown to be effective for language mod- eling in (Sukhbaatar et al. 2015). ByteNet (Kalchbrenner et al. 2016), which is related but uses layers of dilated con- volutions rather than attention, showed promising results on byte level language modeling. Gated Convolutional Net- works (Dauphin et al. 2016) was an early non-recurrent model to show superior performance on word level language modeling.
in quality by deepening the network to 64 layers, utilizing capacity and depth efï¬ciently. The use of auxiliary losses at intermediate layers and positions is critical for reaching this performance, and these losses allow us to train much deeper transformer networks. Finally, we analyze the behavior of our network and ï¬nd that it is able to exploit dependencies in structure and content over long distances, over 400 char- acters apart.
Language models are not usually very deep due to com- putational constraints of training RNNs, and this also limits the number of parameters. The transformer architecture al- lowed us to build very deep (64 layer) models with a large number of parameters. A recent CNN model for text clas- siï¬cation (Conneau et al. 2016) at 29 layers is considered deep in the NLP community. A Sparsely-Gated Mixture-of- Experts Layer (Shazeer et al. 2017) allowed language mod- eling experiments with a greatly increased number of pa- rameters by only accessing a small portion of parameters every time step, showing a reduction in bits per word. In Ex- ploring the Limits of Language Modeling (J´ozefowicz et al. 2016), an increase in the number of parameters was achieved by mixing character-level and word level models, using spe- cialized softmaxes and using a large amount of computa- tional resources to train. IndRNN (Li et al. 2018) uses a sim- pliï¬ed RNN architecture that allows deeper stacking with 21-layers, achieving near SOTA character-level language modeling. Fast-Slow Recurrent Neural Networks (Mujika, Meier, and Steger 2017) also achieved near SOTA by in- creasing the number of recurrent steps for each character processed.
# References
Arjovsky, M.; Shah, A.; and Bengio, Y. 2015. Unitary evolution recurrent neural networks. CoRR abs/1511.06464.
Bengio, Y.; Ducharme, R.; Vincent, P.; and Janvin, C. 2003. A neural probabilistic language model. J. Mach. Learn. Res. 3:1137â1155.
Chelba, C.; Mikolov, T.; Schuster, M.; Ge, Q.; Brants, T.; Koehn, P.; and Robinson, T. 2013. One billion word bench- mark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005.
Cho, K.; van Merrienboer, B.; G¨ulc¸ehre, C¸ .; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning phrase repre- sentations using RNN encoder-decoder for statistical machine translation. CoRR abs/1406.1078.
Chung, J.; Ahn, S.; and Bengio, Y. cal multiscale recurrent neural networks. arXiv:1609.01704. 2016. Hierarchi- arXiv preprint
Chung, J.; Gulcehre, C.; Cho, K.; and Bengio, Y. 2015. Gated feedback recurrent neural networks. In International Confer- ence on Machine Learning, 2067â2075.
Conclusion Character language modeling has been dominated by re- current network approaches. In this paper, we show that a network of 12 stacked transformer layers achieves state-of- the-art results on this task. We gain further improvements
Conneau, A.; Schwenk, H.; Barrault, L.; and LeCun, Y. 2016. Very deep convolutional networks for natural language process- ing. CoRR abs/1606.01781.
Cooijmans, T.; Ballas, N.; Laurent, C.; and Courville, A. C. 2016. Recurrent batch normalization. CoRR abs/1603.09025.
Daniluk, M.; Rockt¨aschel, T.; Welbl, J.; and Riedel, S. 2017. Frustratingly short attention spans in neural language modeling. CoRR abs/1702.04521. Dauphin, Y. N.; Fan, A.; Auli, M.; and Grangier, D. 2016. Language modeling with gated convolutional networks. CoRR abs/1612.08083. Gal, Y., and Ghahramani, Z. 2015. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. ArXiv e-prints. Grave, E.; Joulin, A.; and Usunier, N. Improv- ing neural language models with a continuous cache. CoRR abs/1612.04426. Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. 9:1735â80. Hochreiter, S.; Bengio, Y.; Frasconi, P.; Schmidhuber, J.; et al. 2001. Gradient ï¬ow in recurrent nets: the difï¬culty of learning long-term dependencies. Ioffe, S., and Szegedy, C. 2015. Batch normalization: Acceler- ating deep network training by reducing internal covariate shift. CoRR abs/1502.03167. J´ozefowicz, R.; Vinyals, O.; Schuster, M.; Shazeer, N.; and Wu, Y. 2016. Exploring the limits of language modeling. CoRR abs/1602.02410. Kalchbrenner, N.; Espeholt, L.; Simonyan, K.; Oord, A. v. d.; Graves, A.; and Kavukcuoglu, K. 2016. Neural machine trans- lation in linear time. arXiv preprint arXiv:1610.10099. Ke, N. R.; Goyal, A.; Bilaniuk, O.; Binas, J.; Charlin, L.; Pal, C.; and Bengio, Y. 2017. Sparse attentive backtracking: Long- range credit assignment in recurrent networks. arXiv preprint arXiv:1711.02326. Kenter, T.; Jones, L.; and Hewlett, D. 2018. Byte-level machine reading across morphologically varied languages. Khandelwal, U.; He, H.; Qi, P.; and Jurafsky, D. 2018. Sharp nearby, fuzzy far away: How neural language models use con- text. In Association for Computational Linguistics (ACL). Knol, B. 2017. cmix - http://www.byronknoll.com/cmix.html. Krause, B.; Lu, L.; Murray, I.; and Renals, S. 2016. Mul- arXiv preprint tiplicative lstm for sequence modelling. arXiv:1609.07959. Krause, B.; Kahembwe, E.; Murray, I.; and Renals, S. 2017. Dynamic evaluation of neural sequence models. arXiv preprint arXiv:1709.07432. Krueger, D.; Maharaj, T.; Kram´ar, J.; Pezeshki, M.; Ballas, N.; Ke, N. R.; Goyal, A.; Bengio, Y.; Courville, A.; and Pal, C. 2016. Zoneout: Regularizing rnns by randomly preserving hid- den activations. arXiv preprint arXiv:1606.01305. Lei Ba, J.; Kiros, J. R.; and Hinton, G. E. 2016. Layer Normal- ization. ArXiv e-prints. Li, S.; Li, W.; Cook, C.; Zhu, C.; and Gao, Y. 2018. Inde- pendently recurrent neural network (indrnn): Building A longer and deeper RNN. CoRR abs/1803.04831. Mahoney, M. http://www.mattmahoney.net/text/text.html. Reg- Merity, S.; Keskar, N. S.; and Socher, R. ularizing and optimizing LSTM language models. CoRR abs/1708.02182.
Mikolov, T.; Karaï¬t, M.; Burget, L.; Cernock, J.; and Khudan- pur, S. 2010. Recurrent neural network based language model. In Kobayashi, T.; Hirose, K.; and Nakamura, S., eds., INTER- SPEECH, 1045â1048. ISCA. Mikolov, T.; Kombrink, S.; Burget, L.; ernock, J.; and Khudan- pur, S. 2011. Extensions of recurrent neural network language model. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5528â5531. Mikolov, T.; Sutskever, I.; Deoras, A.; Le, H.-S.; Kombrink, S.; and Cernocky, J. Subword language model- preprint (http://www. ï¬t. vutbr. ing with neural networks. cz/imikolov/rnnlm/char. pdf) 8. Mujika, A.; Meier, F.; and Steger, A. 2017. Fast-slow recurrent neural networks. In Advances in Neural Information Processing Systems, 5915â5924. Radford, A.; J´ozefowicz, R.; and Sutskever, I. 2017. Learn- ing to generate reviews and discovering sentiment. CoRR abs/1704.01444. Rocki, K. M. 2016. Surprisal-driven feedback in recurrent net- works. arXiv preprint arXiv:1608.06027. Salimans, T.; Zhang, H.; Radford, A.; and Metaxas, D. N. CoRR 2018. abs/1803.05573. Shazeer, N.; Mirhoseini, A.; Maziarz, K.; Davis, A.; Le, Q. V.; Hinton, G. E.; and Dean, J. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. CoRR abs/1701.06538. Sukhbaatar, S.; Weston, J.; Fergus, R.; et al. 2015. End-to-end memory networks. In Advances in neural information process- ing systems, 2440â2448. Sundermeyer, M.; Schl¨uter, R.; and Ney, H. 2012. Lstm neural networks for language modeling. In Thirteenth annual confer- ence of the international speech communication association. Tallec, C., and Ollivier, Y. 2017. Unbiasing truncated back- propagation through time. arXiv preprint arXiv:1705.08209. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30. Curran Associates, Inc. 5998â6008. Werbos, P. J. 1990. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 78(10):1550â1560. Weston, J.; Chopra, S.; and Bordes, A. 2014. Memory net- works. CoRR abs/1410.3916. Zaremba, W.; Sutskever, I.; and Vinyals, O. 2014. Recurrent neural network regularization. CoRR abs/1409.2329. Zhang, S.; Wu, Y.; Che, T.; Lin, Z.; Memisevic, R.; Salakhut- dinov, R. R.; and Bengio, Y. 2016. Architectural complexity measures of recurrent neural networks. In Advances in Neural Information Processing Systems, 1822â1830. Zhang, X.; Zhao, J. J.; and LeCun, Y. 2015. Character- level convolutional networks for text classiï¬cation. CoRR abs/1509.01626. Zilly, J. G.; Srivastava, R. K.; Koutn´ık, J.; and Schmidhuber, J. 2016. Recurrent highway networks. CoRR abs/1607.03474. | {
"id": "1609.01704"
} |
1808.01340 | A Short Note about Kinetics-600 | We describe an extension of the DeepMind Kinetics human action dataset from
400 classes, each with at least 400 video clips, to 600 classes, each with at
least 600 video clips. In order to scale up the dataset we changed the data
collection process so it uses multiple queries per class, with some of them in
a language other than english -- portuguese. This paper details the changes
between the two versions of the dataset and includes a comprehensive set of
statistics of the new version as well as baseline results using the I3D neural
network architecture. The paper is a companion to the release of the ground
truth labels for the public test set. | http://arxiv.org/pdf/1808.01340 | Joao Carreira, Eric Noland, Andras Banki-Horvath, Chloe Hillier, Andrew Zisserman | cs.CV | Companion to public release of kinetics-600 test set labels | null | cs.CV | 20180803 | 20180803 | 8 1 0 2
g u A 3 ] V C . s c [
1 v 0 4 3 1 0 . 8 0 8 1 : v i X r a
# A Short Note about Kinetics-600
# JoËao Carreira joaoluis@google.com
# Eric Noland enoland@google.com
Andras Banki-Horvath bhandras@google.com
# Chloe Hillier chillier@google.com
# Andrew Zisserman zisserman@google.com
# Abstract
We describe an extension of the DeepMind Kinetics hu- man action dataset from 400 classes, each with at least 400 video clips, to 600 classes, each with at least 600 video clips. In order to scale up the dataset we changed the data collection process so it uses multiple queries per class, with some of them in a language other than english â portuguese. This paper details the changes between the two versions of the dataset and includes a comprehensive set of statistics of the new version as well as baseline results using the I3D neural network architecture. The paper is a companion to the release of the ground truth labels for the public test set.
# 1. Introduction
The new version of the dataset, called Kinetics-600, fol- lows the same principles as Kinetics-400: (i) The clips are from YouTube video, last 10s, and have a variable resolu- tion and frame rate; (ii) for an action class, all clips are from different YouTube videos. Kinetics-600 represents a 50% increase in number of classes, from 400 to 600, and a 60% increase in the number of video clips, from around 300k to around 500k. The statistics of the two dataset versions are detailed in table 1.
In the new Kinetics-600 dataset there is a standard test set, for which labels have been publicly released, and also a held-out test set (where the labels are not released). We encourage researchers to report results on the standard test set, unless they want to compare with participants of the Activity-Net kinetics challenge. Performance on the combi- nation of standard test set plus held-out test should be used in that case, and can be be measured only through the chal- lenge evaluation website1.
The release of the Kinetics dataset [6] in 2017 led to marked improvements in state-of-the-art performance on a variety of action recognition datasets: UCF-101 [9], HMDB-51 [7], Charades [8], AVA [3], Thumos [5], among others. Video models pre-trained on Kinetics generalized well when transferred to different video tasks on smaller video datasets, similar to what happened to image classi- ï¬ers trained on ImageNet.
The goal of the Kinetics project from the start was to replicate the size of ImageNet, which has 1000 classes, each with 1000 image examples. This proved difï¬cult initially and the ï¬rst version of the dataset had 400 classes, each with 400 video clip examples. There were two main bottle- necks and they were related: (a) identifying relevant candi- date YouTube videos for each action class, and (b) ï¬nding classes having many candidates. Problem (b) was particu- larly acute and exposed inefï¬ciencies with the way videos were selected â querying YouTube for simple variations of the class names, by varying singular/plural of nouns, adding articles (e.g. âcatching a ballâ / âcatching ballâ), etc. These problems have now been overcome, as described in the se- quel.
The URLs of the YouTube videos and temporal intervals of both Kinetics-600 and Kinetics-400 can be obtained from http://deepmind.com/kinetics.
# 2. Data Collection Process
The data collection process evolved from Kinetics-400 to Kinetics-600. The overall pipeline was the same: 1) action class sourcing, 2) candidate video matching, 3) candidate clip selection, 4) human veriï¬cation, 5) quality analysis and ï¬ltering. In words, a list of class names is created, then a list of candidate YouTube URLs is obtained for each class name, and candidate 10s clips are sampled from the videos. These clips are sent to humans in Mechanical Turk who de- cide whether those clips contain the action class that they are supposed to. Finally, there is an overall curation pro- cess including clip de-duplication, and selecting the higher quality classes and clips. Full details can be found in the original publication [6].
The main differences in the data collection process be- tween Kinetics-400 and 600 were in the ï¬rst two steps: how
1http://activity-net.org/challenges/2018/evaluation.html
1
Version Kinetics-400 [6] Kinetics-600 Train 250â1000 450â1000 Valid. Test Held-out Test 50 50 100 100 0 around 50 Total Train 246,245 392,622 Total 306,245 495,547 Classes 400 600
Table 1: Kinetics Dataset Statistics. The number of clips for each class in the various splits (left), and the totals (right). With Kinetics-600 we have released the ground truth test set labels, and also created an additional held-out test set for the purpose of the Activity-Net Challenge.
action classes were sourced, and how candidate YouTube videos were matched with classes.
tive translated queries on YouTube and tuning them through some trial and error.
# 2.1. Action class sourcing
For Kinetics-400, class names were ï¬rst sourced from existing datasets, then from the everyday experience of the authors, and ï¬nally by asking the humans in Mechan- ical Turk what classes they were seeing in videos that did not contain the classes being tested. For Kinetics-600 we sourced many classes from Googleâs Knowledge Graph, in particular from the hobby list. We also obtained class ideas from YouTubeâs search box auto-complete, for example by typing an object or verb, then following up on promis- ing auto-completion suggestions and checking if there were many videos containing the same action.
Having multiple languages had the positive side effect of also promoting greater dataset diversity by incorporating a more well-rounded range of cultures, ethnicities and ge- ographies.
Weighted ngram matching. Rather than matching directly using textual queries we found it beneï¬cial to use weighted ngram representations of the combination of the metadata of each video and the titles of related ones. Importantly, these representations were compatible with multiple languages. We combined this with standard title matching to get a robust similarity score between a query and all YouTube videos, which, unlike the binary matching we used before, meant we never ran out of candidates, although the post- mechanical-turk yield of the selected candidates became lower for smaller similarity values.
# 2.2. Candidate video matching
In Kinetics-400 we matched YouTube videos with each class by searching for videos having some of the class name words in the title, while allowing for variation in stemming. There was no separation between the class name and the query text, which turned out to be a limiting factor: in many cases we exhausted the pool of candidates, or had imprac- tically low yields. We tried matching directly these queries to not just the title but also other metadata but this proved of little use (in particular the video descriptions seemed to have plenty of spam). We tried two variations that worked out much better:
# 3. From Kinetics-400 to Kinetics-600
Kinetics-600 is an approximate superset of Kinetics-400 â overall, 368 of the original 400 classes are exactly the same in Kinetics-600 (except they have more examples). For the other 32 classes, we renamed a few (e.g. âdy- ing hairâ became âdyeing hairâ), split or removed others that were too strongly overlapping with other classes, such as âdrinkingâ. We split some classes: âhuggingâ became âhugging babyâ and âhugging (not baby)â, while âopening bottleâ became âopening wine bottleâ and âopening bottle (not wine)â.
Multiple queries. In order to get better and larger pools of candidates we found it useful to manually create sets of queries for each class and did so in two different languages: English and Portuguese. These are two out of six languages with the most native speakers in the world2, have large YouTube communities (especially in the USA and Brazil), and were also natively spoken by this paperâs authors. As an example the queries for folding paper were: âfolding paperâ (en), âorigamiâ (en) and âdobrar papelâ (pt). We found also that translating action descriptions was not always easy, and sometimes required observing the videos returned by puta-
A few video clips from 30 classes of the Kinetics-400 validation set became part of the Kinetics-600 test set, and some from the training set became part of the new val- idation set. It is therefore not ideal to evaluate models on Kinetics-600 that were pre-trained on Kinetics-400, al- though it should make almost no difference in practice. The full list of new classes in Kinetics-600 is given in the ap- pendix.
# 4. Benchmark Performance
2According to https://www.babbel.com/en/magazine/the-10-most- spoken-languages-in-the-world/
As a baseline model we used I3D [2], with standard RGB videos as input (no optical ï¬ow). We trained the model from scratch on the Kinetics-600 training set, picked hyper-
Acc. type Top-1 Top-5 100.0 â avg(Top-1,Top-5) Valid Test Test + HeldOut Test 71.9 90.1 19.0 71.7 90.4 19.0 69.7 89.1 20.6
Table 2: Performance of an I3D model with RGB inputs on the Kinetics-600 dataset, without any test time augmentation (processing a center crop of each video convolutionally in time ). The ï¬rst two rows show accuracy in percentage, the last one shows the metric used at the Kinetics challenge hosted by the ActivityNet workshop.
parameters on validation, and report performance on valida- tion, test set and the combination of the test and held-out test sets. We used 32 P100 GPUs, batch size 5 videos, 64 frame clips for training and 251 frames for testing. We trained us- ing SGD with momentum, starting with a learning rate of 0.1, decreasing it by a factor of 10 when the loss saturates. Results are shown in table 2.
The top-1 accuracy on the test set was 71.7, whereas on Test+Held-out was 69.7, which shows that the held-out test set is harder than the regular test set. On Kinetics-400 the corresponding result was 68.4, hence the task overall seems to have became slightly easier. There are several factors that may help explain this: even though Kinetics-600 has 50% extra classes, it also has around 50% extra training ex- amples; and also, some of the ambiguities in Kinetics-400 have been removed in Kinetics-600. We also used fewer GPUs (32 instead 64), which resulted in half the batch size.
Kinetics challenge. There was a ï¬rst Kinetics challenge at the ActivityNet workshop in CVPR 2017, using Kinetics- 400. The second challenge occurred at the ActivityNet workshop in CVPR 2018, this time using Kinetics-600. The performance criterion used in the challenge is the average of Top-1 and Top-5 error. There was an improvement between the winning systems of the two challenges, with error going down from 12.4% (in 2017) to 11.0% (in 2018) [1, 4].
# 5. Conclusion
ing to translate queries from English to Portuguese, and Aditya Zisserman and Radhika Desikan for data clean up.
# References
[1] Y. Bian, C. Gan, X. Liu, F. Li, X. Long, Y. Li, H. Qi, J. Zhou, S. Wen, and Y. Lin. Revisiting the effectiveness of off- the-shelf temporal modeling approaches for large-scale video classiï¬cation. arXiv preprint arXiv:1708.03805, 2017. 3 [2] J. Carreira and A. Zisserman. Quo vadis, action recogni- In IEEE Inter- tion? new models and the kinetics dataset. national Conference on Computer Vision and Pattern Recog- nition CVPR, 2017. 2
[3] C. Gu, C. Sun, D. A. Ross, C. Vondrick, C. Pantofaru, Y. Li, S. Vijayanarasimhan, G. Toderici, S. Ricco, R. Sukthankar, et al. AVA: A video dataset of spatio-temporally localized atomic visual actions. CoRR, abs/1705.08421, 4, 2017. 1 [4] D. He, F. Li, Q. Zhao, X. Long, Y. Fu, and S. Wen. Exploiting spatial-temporal modelling and multi-modal fusion for human action recognition. arXiv preprint arXiv:1806.10319, 2018. 3 [5] Y. Jiang, J. Liu, A. R. Zamir, G. Toderici, I. Laptev, M. Shah, and R. Sukthankar. Thumos challenge: Action recognition with a large number of classes, 2014. 1
[6] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vi- jayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, M. Su- leyman, and A. Zisserman. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017. 1, 2
[7] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: a large video database for human motion recognition. In Proceedings of the International Conference on Computer Vision (ICCV), 2011. 1
We have described the new Kinetics-600 dataset, which is 50% larger than the original Kinetics-400 dataset. It rep- resents another step towards our goal of producing an ac- tion classiï¬cation dataset with 1000 classes and 1000 video clips for each class. We explained the differences in the data collection process between the initial version of the dataset made available in 2017 and the new one. This publication coincides with the release of the test set annotations for both Kinetics-400 and Kinetics-600; we hope these will facilitate research as it will no longer be necessary to submit results to an external evaluation server.
[8] G. A. Sigurdsson, G. Varol, X. Wang, A. Farhadi, I. Laptev, and A. Gupta. Hollywood in homes: Crowdsourcing data col- lection for activity understanding. In European Conference on Computer Vision, pages 510â526. Springer, 2016. 1
[9] K. Soomro, A. R. Zamir, and M. Shah. UCF101: a dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 1
# A. List of New Human Action Classes in Kinetics-600
# Acknowledgements:
This is the list of classes in Kinetics-600 that were not in Kinetics-400, or that have been renamed.
The collection of this dataset was funded by DeepMind. The authors would like to thank Sandra Portugues for help-
1. acting in play
2. adjusting glasses
3. alligator wrestling
fw WW WwW YY YY YY Neos SCmANADAAKRHN SOS AAIADAAKRWNHHE PNnDAPFYP Nr SSF NDA
4. archaeological excavation
5. arguing
6. assembling bicycle
7. attending conference
8. backï¬ip (human)
9. base jumping
10. bathing dog
11. battle rope training
12. blowdrying hair
13. blowing bubble gum
14. bodysurï¬ng
15. bottling
16. bouncing on bouncy castle
17. breaking boards
18. breathing ï¬re
19. building lego
20. building sandcastle
21. bull ï¬ghting
22. bulldozing
23. burping
24. calculating
25. calligraphy
26. capsizing
27. card stacking
28. card throwing
29. carving ice
30. casting ï¬shing line
31. changing gear in car
32. changing wheel (not on bike)
33. chewing gum
34. chiseling stone
35. chiseling wood
36. chopping meat
37. chopping vegetables
38. clam digging
39. coloring in
40. combing hair
41. contorting
42. cooking sausages (not on barbeque)
43. cooking scallops
44. cosplaying
45. cracking back
46. cracking knuckles
47. crossing eyes
48. cumbia
49. curling (sport)
50. cutting apple
51. cutting orange
52. delivering mail
53. directing trafï¬c
54. docking boat
55. doing jigsaw puzzle
56. drooling
57. dumpster diving
58. dyeing eyebrows
59. dyeing hair
60. embroidering
61. falling off bike
62. falling off chair
63. fencing (sport)
64. ï¬dgeting
65. ï¬xing bicycle
66. ï¬int knapping
67. ï¬y tying
68. geocaching
69. getting a piercing
70. gold panning
71. gospel singing in church
72. hand washing clothes
73. head stand
74. historical reenactment
75. home roasting coffee
76. huddling
77. hugging (not baby)
78. hugging baby
79. ice swimming
80. inï¬ating balloons
81. installing carpet
82. ironing hair
83. jaywalking
84. jumping bicycle
85. jumping jacks
86. karaoke
87. land sailing
88. lawn mower racing
89. laying concrete
90. laying stone
91. laying tiles
92. leatherworking
93. licking
94. lifting hat
95. lighting ï¬re
96. lock picking
97. longboarding
98. looking at phone
99. luge
100. making balloon shapes
101. making bubbles
102. making cheese
103. making horseshoes
104. making paper aeroplanes
105. making the bed
106. marriage proposal
107. massaging neck
108. moon walking
109. mosh pit dancing
110. mountain climber (exercise)
# ee PnNnnAPFY NES
111. mushroom foraging
112. needle felting
113. opening bottle (not wine)
114. opening door
115. opening refrigerator
116. opening wine bottle
117. packing
118. passing american football (not in game)
119. passing soccer ball
120. person collecting garbage
121. photobombing
122. photocopying
123. pillow ï¬ght
124. pinching
125. pirouetting
126. planing wood
127. playing beer pong
128. playing blackjack
129. playing darts
130. playing dominoes
131. playing ï¬eld hockey
132. playing gong
133. playing hand clapping games
134. playing laser tag
135. playing lute
136. playing maracas
137. playing marbles
138. playing netball
139. playing ocarina
140. playing pan pipes
141. playing pinball
142. playing ping pong
143. playing polo
144. playing rubiks cube
145. playing scrabble
146. playing with trains
147. poking bellybutton
148. polishing metal
149. popping balloons
150. pouring beer
151. preparing salad
152. pushing wheelbarrow
153. putting in contact lenses
154. putting on eyeliner
155. putting on foundation
156. putting on lipstick
157. putting on mascara
158. putting on sari
159. putting on shoes
160. raising eyebrows
161. repairing puncture
162. riding snow blower
163. roasting marshmallows
164. roasting pig
165. rolling pastry
166. rope pushdown
167. sausage making
168. sawing wood
169. scrapbooking
170. scrubbing face
171. separating eggs
172. sewing
173. shaping bread dough
174. shining ï¬ashlight
175. shopping
176. shucking oysters
177. shufï¬ing feet
178. sipping cup
179. skiing mono
180. skipping stone
181. sleeping
182. smashing
183. smelling feet
184. smoking pipe
185. spelunking
186. square dancing
187. standing on hands
188. staring
189. steer roping
190. sucking lolly
191. swimming front crawl
192. swinging baseball bat
193. sword swallowing
194. tackling
195. tagging grafï¬ti
196. talking on cell phone
197. tasting wine
198. threading needle
199. throwing ball (not baseball or American football)
200. throwing knife
201. throwing snowballs
202. throwing tantrum
203. throwing water balloon
204. tie dying
205. tightrope walking
206. tiptoeing
207. trimming shrubs
208. twiddling ï¬ngers
209. tying necktie
210. tying shoe laces
211. using a microscope
212. using a paint roller
213. using a power drill
214. using a sledge hammer
215. using a wrench
216. using atm
217. using bagging machine
218. using circular saw
219. using inhaler
220. using puppets
221. vacuuming ï¬oor
222. visiting the zoo
223. wading through mud
224. wading through water
225. waking up
226. walking through snow
227. watching tv
228. waving hand
229. weaving fabric
230. winking
231. wood burning (art)
232. yarn spinning | {
"id": "1708.03805"
} |
1808.00265 | Interpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining | A key aspect of VQA models that are interpretable is their ability to ground
their answers to relevant regions in the image. Current approaches with this
capability rely on supervised learning and human annotated groundings to train
attention mechanisms inside the VQA architecture. Unfortunately, obtaining
human annotations specific for visual grounding is difficult and expensive. In
this work, we demonstrate that we can effectively train a VQA architecture with
grounding supervision that can be automatically obtained from available region
descriptions and object annotations. We also show that our model trained with
this mined supervision generates visual groundings that achieve a higher
correlation with respect to manually-annotated groundings, meanwhile achieving
state-of-the-art VQA accuracy. | http://arxiv.org/pdf/1808.00265 | Yundong Zhang, Juan Carlos Niebles, Alvaro Soto | cs.CV, cs.AI, cs.CL, cs.LG | 8 pages, 4 figures | null | cs.CV | 20180801 | 20180801 | 8 1 0 2
g u A 1 ] V C . s c [
1 v 5 6 2 0 0 . 8 0 8 1 : v i X r a
# Interpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining
# Yundong Zhang Stanford University ydzhang12345@gmail.com
# Juan Carlos Niebles Stanford University jniebles@cs.stanford.edu
# Alvaro Soto Universidad Catolica de Chile asoto@ing.puc.cl
# Abstract
A key aspect of VQA models that are interpretable is their ability to ground their answers to relevant regions in the image. Current approaches with this capability rely on supervised learning and human annotated groundings to train attention mechanisms inside the VQA architecture. Unfortunately, obtaining human annotations speciï¬c for vi- sual grounding is difï¬cult and expensive. In this work, we demonstrate that we can effectively train a VQA architec- ture with grounding supervision that can be automatically obtained from available region descriptions and object an- notations. We also show that our model trained with this mined supervision generates visual groundings that achieve a higher correlation with respect to manually-annotated groundings, meanwhile achieving state-of-the-art VQA ac- curacy.
# 1. Introduction
We are interested in the problem of visual question an- swering (VQA), where an algorithm is presented with an image and a question that is formulated in natural language and relates to the contents of the image. The goal of this task is to get the algorithm to correctly answer the ques- tion. The VQA task has recently received signiï¬cant atten- tion from the computer vision community, in particular be- cause obtaining high accuracies would presumably require precise understanding of both natural language as well as visual stimuli. In addition to serving as a milestone towards visual intelligence, there are practical applications such as development of tools for the visually impaired.
The problem of VQA is challenging due to the complex interplay between the language and visual modalities. On one hand, VQA algorithms must be able to parse and in- terpret the input question, which is provided in natural lan- guage [8, 14, 9]. This may potentially involve understand- ing of nouns, verbs and other linguistic elements, as well as their visual signiï¬cance. On the other hand, the algorithms
âAttention Supervision Module aA _âVisual Grounding Answer Prediction Q: What game are they playing? A: Baseball.
Figure 1. Interpretable VQA algorithms must ground their answer into image regions that are relevant to the question. In this pa- per, we aim at providing this ability by leveraging existing region descriptions and object annotations to construct grounding super- vision automatically.
must analyze the image to identify and recognize the visual elements relevant to the question. Furthermore, some ques- tions may refer directly to the contents of the image, but may require external, common sense knowledge to be an- swered correctly. Finally, the algorithms should generate a textual output in natural language that correctly answers the input visual question. In spite of the recent research efforts to address these challenges, the problem remains largely un- solved [22].
We are particularly interested in giving VQA algorithms the ability to identify the visual elements that are relevant to the question. In the VQA literature, such ability has been implemented by attention mechanisms. Such atten- tion mechanisms generate a heatmap over the input image, which highlights the regions of the image that lead to the answer. These heatmaps are interpreted as groundings of the answer to the most relevant areas of the image. Gen- erally, these mechanisms have either been considered as latent variables for which there is no supervision, or have been treated as output variables that receive direct supervi- sion from human annotations. Unfortunately, both of these approaches have disadvantages. First, unsupervised train- ing of attention tends to lead to models that cannot ground their decision in the image in a human interpretable manner. Second, supervised training of attention is difï¬cult and ex- pensive: human annotators may consider different regions
to be relevant for the question at hand, which entails ambi- guity and increased annotation cost. Our goal is to leverage the best of both worlds by providing VQA algorithms with interpretable grounding of their answers, without the need of direct and explicit manual annotation of attention.
From a practical point of view, as autonomous machines are increasingly ï¬nding real world applications, there is an increasing need to provide them with suitable capabilities to explain their decisions. However, in most applications, in- cluding VQA, current state-of-the-art techniques operate as black-box models that are usually trained using a discrim- inative approach. Similarly to [5], in this work we show that, in the context of VQA, such approaches lead to internal representations that do not capture the underlying semantic relations between textual questions and visual information. Consequently, as we show in this work, current state-of- the-art approaches for VQA are not able to support their answers with a suitable interpretable representation.
In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human inter- pretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by lever- aging region descriptions and object annotations available in the Visual Genome dataset, and using these to automati- cally construct attention maps that can be used for attention supervision, instead of requiring human annotators to man- ually provide grounding labels. Our framework achieves competitive state-of-the-art VQA performance, while gen- erating visual groundings that outperform other algorithms that use human annotated attention during training.
The contributions of this paper are: (1) we introduce a mechanism to automatically obtain meaningful attention supervision from both region descriptions and object an- notations in the Visual Genome dataset; (2) we show that by using the prediction of region and object label attention maps as auxiliary tasks in a VQA application, it is possible to obtain more interpretable intermediate representations. (3) we experimentally demonstrate state-of-the-art perfor- mances in VQA benchmarks as well as visual grounding that closely matches human attention annotations.
# 2. Related Work
Since its introduction [8, 14, 9], the VQA problem has attracted an increasing interest [22]. Its multimodal na- ture and more precise evaluation protocol than alternative multimodal scenarios, such as image captioning, help to ex- plain this interest. Furthermore, the proliferation of suitable datasets and potential applications, are also key elements behind this increasing activity. Most state-of-the-art meth- ods follow a joint embedding approach, where deep mod- els are used to project the textual question and visual input to a joint feature space that is then used to build the an- swer. Furthermore, most modern approaches pose VQA as
2
a classiï¬cation problem, where classes correspond to a set of pre-deï¬ned candidate answers. As an example, most en- tries to the VQA challenge [9] select as output classes the most common 3000 answers in this dataset, which account for 92% of the instances in the validation set.
The strategy to combine the textual and visual embed- dings and the underlying structure of the deep model are key design aspects that differentiate previous works. Antol et al. [9] propose an element-wise multiplication between im- age and question embeddings to generate spatial attention map. Fukui et al. [6] propose multimodal compact bilinear pooling (MCB) to efï¬ciently implement an outer product operator that combines visual and textual representations. Yu et al. [26] extend this pooling scheme by introducing a multi-modal factorized bilinear pooling approach (MFB) that improves the representational capacity of the bilinear operator. They achieve this by adding an initial step that efï¬ciently expands the textual and visual embeddings to a high-dimensional space. In terms of structural innovations, Noh et al. [16] embed the textual question as an intermedi- ate dynamic bilinear layer of a ConvNet that processes the visual information. Andreas et al. [2] propose a model that learns a set of task-speciï¬c neural modules that are jointly trained to answer visual questions.
Following the successful introduction of soft attention in neural machine translation applications [3], most modern VQA methods also incorporate a similar mechanism. The common approach is to use a one-way attention scheme, where the embedding of the question is used to generate a set of attention coefï¬cients over a set of predeï¬ned image regions. These coefï¬cients are then used to weight the em- bedding of the image regions to obtain a suitable descriptor [19, 21, 6, 25, 26]. More elaborated forms of attention has also been proposed. Xu and Saenko [23] suggest use word- level embedding to generate attention. Yang et al. [24] iter- ates the application of a soft-attention mechanism over the visual input as a way to progressively reï¬ne the location of relevant cues to answer the question. Lu et al. [13] pro- poses a bidirectional co-attention mechanism that besides the question guided visual attention, also incorporates a vi- sual guided attention over the input question.
In all the previous cases, the attention mechanism is ap- plied using an unsupervised scheme, where attention coef- ï¬cients are considered as latent variables. Recently, there have been also interest on including a supervised attention scheme to the VQA problem [5, 7, 18]. Das et al. [5] com- pare the image areas selected by humans and state-of-the- art VQA techniques to answer the same visual question. To achieve this, they collect the VQA human attention dataset (VQA-HAT), a large dataset of human attention maps built by asking humans to select images areas relevant to an- swer questions from the VQA dataset [9]. Interestingly, this study concludes that current machine-generated atten-
tion maps exhibit a poor correlation with respect to the hu- man counterpart, suggesting that humans use different vi- sual cues to answer the questions. At a more fundamental level, this suggests that the discriminative nature of most current VQA systems does not effectively constraint the at- tention modules, leading to the encoding of discriminative cues instead of the underlying semantic that relates a given question-answer pair. Our ï¬ndings in this work support this hypothesis.
Related to the work in [5], Gan et al. [7] apply a more structured approach to identify the image areas used by hu- mans to answer visual questions. For VQA pairs associated to images in the COCO dataset, they ask humans to select the segmented areas in COCO images that are relevant to answer each question. Afterwards, they use these areas as labels to train a deep learning model that is able to identify attention features. By augmenting a standard VQA tech- nique with these attention features, they are able to achieve a small boost in performance. Closely related to our ap- proach, Qiao et al. [18] use the attention labels in the VQA- HAT dataset to train an attention proposal network that is able to predict image areas relevant to answer a visual ques- tion. This network generates a set of attention proposals for each image in the VQA dataset, which are used as la- bels to supervise attention in the VQA model. This strategy results in a small boost in performance compared with a non-attentional strategy. In contrast to our approach, these previous works are based on a supervised attention scheme that does not consider an automatic mechanism to obtain the attention labels. Instead, they rely on human annotated groundings as attention supervision. Furthermore, they dif- fer from our work in the method to integrate attention labels to a VQA model.
# 3. VQA Model Structure
Figure 2 shows the main pipeline of our VQA model. We mostly build upon the MCB model in [6], which exempli- ï¬es current state-of-the-art techniques for this problem. Our main innovation to this model is the addition of an Atten- tion Supervision Module that incorporates visual grounding as an auxiliary task. Next we describe the main modules behind this model. Question Attention Module: Questions are tokenized and passed through an embedding layer, followed by an LSTM layer that generates the question features Qf â RT ÃD, where T is the maximum number of words in the tokenized version of the question and D is the dimensionality of the hidden state of the LSTM. Additionally, following [25], a question attention mechanism is added that generates ques- tion attention coefï¬cients Cq â RT ÃGq , where Gq is the so-called number of âglimpsesâ. The purpose of Gq is to allow the model to predict multiple attention maps so as to increase its expressiveness. Here, we use Gq = 2. The
3
weighted question features Qw â RGqD are then computed using a soft attention mechanism [3], which is essentially a weighted sum of the T word features followed by a con- catenation according to Gq. Image Attention Module: Images are passed through an embedding layer consisting of a pre-trained ConvNet model, such as Resnet pretrained with the ImageNet dataset [10]. This generates image features If â RCÃHÃW , where C, H and W are depth, height, and width of the extracted feature maps. Fusion Module I is then used to generate a set of image attention coefï¬cients. First, question fea- tures Qw are tiled as the same spatial shape of If . Af- terwards, the fusion module models the joint relationship Jattn â ROÃHÃW between questions and images, map- ping them to a common space RO. In the simplest case, one can implement the fusion module using either concate- nation or Hadamard product [1], but more effective pooling schemes can be applied [6, 11, 25, 26]. The design choice of the fusion module remains an on-going research topic. In general, it should both effectively capture the latent rela- tionship between multi-modal features meanwhile be easy to optimize. The fusion results are then passed through an attention module that computes the visual attention coefï¬- cient Cv â RHÃW ÃGv , with which we can obtain attention- weighted visual features Vw â RGvC. Again, Gv is the number of âglimpsesâ, where we use Gv = 2. Classiï¬cation Module: Using the compact representation of questions Qw and visual information Vw, the classiï¬ca- tion module applies ï¬rst the Fusion Module II that provides the feature representation of answers Jans â RL, where L is the latent answer space. Afterwards, it computes the log- its over a set of predeï¬ned candidate answers. Following previous work [6], we use as candidate outputs the top 3000 most frequent answers in the VQA dataset. At the end of this process, we obtain the highest scoring answer ËA. Attention Supervision Module: As a main novelty of the VQA model, we add an Image Attention Supervision Mod- ule as an auxiliary classiï¬cation task, where ground-truth visual grounding labels Cgt â RHÃW ÃGv are used to guide the model to focus on meaningful parts of the im- age to answer each question. To do that, we simply treat the generated attention coefï¬cients Cv as a probability dis- tribution, and then compare it with the ground-truth using KL-divergence. Interestingly, we introduce two attention maps, corresponding to relevant region-level and object- level groundings, as shown in Figure 3. Sections 4 and 5 provide details about our proposed method to obtain the at- tention labels and to train the resulting model, respectively.
# 4. Mining Attention Supervision from Visual Genome
Visual Genome (VG) [12] includes the largest VQA dataset currently available, which consists of 1.7M QA
Image ... Question Attention Module Q: What is Ques. Feat Q the mustache }* F made of? Vv Img. Attn. : Tattn Coeff. Cy : â> Attn. Weighted: Attn. Weighted Ques. Feat Qy Attention Supervision Module i z KL-divergence g Loss a ' s A Image Attention Module Ans: Banana : Ques. â Attn. Classification Module
Figure 2. Schematic diagram of the main parts of the VQA model. It is mostly based on the model presented in [6]. Main innovation is the Attention Supervision Module that incorporates visual grounding as an auxiliary task. This module is trained through the use of a set of image attention labels that are automatically mined from the Visual Genome dataset.
pairs. Furthermore, for each of its more than 100K images, VG also provides region and object annotations by means of bounding boxes. In terms of visual grounding, these re- gion and object annotations provide complementary infor- mation. As an example, as shown in Figure 3, for questions related to interaction between objects, region annotations result highly relevant. In contrast, for questions related to properties of speciï¬c objects, object annotations result more valuable. Consequently, in this section we present a method to automatically select region and object annotations from VG that can be used as labels to implement visual ground- ing as an auxiliary task for VQA.
For region annotations, we propose a simple heuristic to mine visual groundings: for each (I, Q, A) we enumer- ate all the region descriptions of I and pick the description Di that has the most (at least two) overlapped informative words with Q and A. Informative words are all nouns and verbs, where two informative words are matched if at least one of the following conditions is met: (1) Their raw text as they appear in Q or A are the same; (2) Their lemma- tizations (using NLTK [4]) are the same; (3) Their synsets in WordNet [15] are the same; (4) Their aliases (provided from VG) are the same. We refer to the resulting labels as region-level groundings. Figure 3(a) illustrates an example of a region-level grounding.
In terms of object annotations, for each image in a
(I, Q, A) triplet we select the bounding box of an object as a valid grounding label, if the object name matches one of the informative nouns in Q or A. To score each match, we use the same criteria as region-level groundings. Addi- tionally, if a triplet (I, Q, A) has a valid region grounding, each corresponding object-level grounding must be inside this region to be accepted as valid. As a further reï¬nement, selected objects grounding are passed through an intersec- tion over union ï¬lter to account for the fact that VG usu- ally includes multiple labels for the same object instance. As a ï¬nal consideration, for questions related to counting, region-level groundings are discarded after the correspond- ing object-level groundings are extracted. We refer to the resulting labels as object-level groundings. Figure 3(b) il- lustrates an example of an object-level grounding.
As a result, combining both region-level and object-level groundings, about 700K out of 1M (I, Q, A) triplets in VG end up with valid grounding labels. We will make these labels publicly available.
# 5. Implementation Details
We build the attention supervision on top of the open- sourced implementation of MCB [6] and MFB [25]. Simi- lar to them, We extract the image feature from res5c layer of Resnet-152, resulting in 14 Ã 14 spatial grid (H = 14, W = 14, C = 2048). We construct our ground-truth visual
4
(a) Region-level grounding. Q: What are the people doing? Ans: Talking.
[4
:
(b) Object-level grounding. Q: How many people are there? Ans: Two.
How
Figure 3. (a) Example region-level groundings from VG. Left: image with region description labels; Right: our mined results. Here âmenâ in the region description is ï¬rstly lemmatized to be âmanâ, whose aliases contain âpeopleâ; the word âtalkingâ in the answer also contributes to the matching. So the selected regions have two matchings which is the most among all candidates. (b) Example object-level grounding from VG. Left: image with object instance labels; Right: our mined results. Note that in this case region-level grounding will give us the same result as in (a), but object-level grounding is clearly more localized.
grounding labels to be Gv = 2 glimpse maps per QA pair, where the ï¬rst map is object-level grounding and the sec- ond map is region-level grounding, as discussed in Section max) be the coordinate of ith 4. Let (xi selected object bounding box in the grounding labels, then the mined object-level attention maps C 0
Cole yl= So Ttinin $@ < ®nal] TYinin SYS Ynez] iâ¬objects (dd)
(1) where I[·] is the indicator function. Similarly, the region- level attention maps C 1
The model is trained using a multi-task loss,
L(A, Cv, Cgt, ËA|I, Q; Î) =CE(A, ËA|I, Q; Î) + α(t)KL(Cgt, Cv|I, Q; Î), (3)
where CE denotes cross-entropy and KL denotes KL- divergence. Πcorresponds to the learned parameters. α(t) is a scalar that weights the loss terms. This scalar decays as a function of the iteration number t. In particular, we choose to use a cosine-decay function:
α(t) = 0.5 1 + cos(Ï t tmax ) . (4)
Chile yl= SO Teinin $2 Serax] LYinin SYS Yinac] iâ¬regions Q)
gt are spatially L1-normalized to represent probabilities and concatenated to form Cgt â R14Ã14Ã2.
This is motivated by the fact that the visual grounding la- bels have some level of subjectivity. As an example, Fig- ure 4 (second row) shows a case where the learned atten- tion seems more accurate than the VQA-HAT ground truth. Hence, as the model learns suitable parameter values, we gradually loose the penalty on the attention maps to provide more freedom to the model to selectively decide what atten- tion to use. It is important to note that, for training samples
5
in VQA-2.0 or VG that do not have region-level or object- level grounding labels, α = 0 in Equation 3, so the loss is reduced to the classiï¬cation term only. In our experiment, tmax is calibrated for each tested model based on the num- ber of training steps. In particular, we choose tmax = 190K for all MCB models and tmax = 160K for others.
# 6. Experiments
# 6.1. Datasets
VQA-2.0: The VQA-2.0 dataset [9] consists of 204721 images, with a total of 1.1M questions and 10 crowd- sourced answers per question. There are more than 20 ques- tion types, covering a variety of topics and free-form an- swers. The dataset is split into training (82K images and 443K questions), validation (40K images and 214K ques- tions), and testing (81K images and 448K questions) sets. The task is to predict a correct answer A given a corre- sponding image-question pair (I, Q). As a main advantage with respect to version 1.0 [9], for every question VQA-2.0 includes complementary images that lead to different an- swers, reducing language bias by forcing the model to use the visual information. Visual Genome: The Visual Genome (VG) dataset [12] contains 108077 images, with an average of 17 QA pairs per image. We follow the processing scheme from [6], where non-informative words in the questions and answers such as âaâ and âisâ are removed. Afterwards, (I, Q, A) triplets with answers to be single keyword and overlapped with VQA-2.0 dataset are included in our training set. This adds 97697 images and about 1 million questions to the training set. Besides the VQA data, VG also provides on average 50 region descriptions and 30 object instances per image. Each region/object is annotated by one sentence/phrase descrip- tion and bounding box coordinates. VQA-HAT: VQA-HAT dataset [5] contains 58475 human visual attention heat (HAT) maps for (I, Q, A) triplets in VQA-1.0 training set. Annotators were shown a blurred image, a (Q, A) pair and were asked to âscratchâ the im- age until they believe someone else can answer the question by looking at the blurred image and the sharpened area. The authors also collect 1374 Ã 3 = 4122 HAT maps for VQA- 1.0 validation sets, where each of the 1374 (I, Q, A) were labeled by three different annotators, so one can compare the level of agreement among labels. We use VQA-HAT to evaluate visual grounding performance, by comparing the rank-correlation between human attention and model atten- tion, as in [5, 17]. VQA-X: VQA-X dataset [17] contains 2000 labeled atten- tion maps in VQA-2.0 validation sets. In contrast to VQA- HAT, VQA-X attention maps are in the form of instance segmentations, where annotators were asked to segment ob- jects and/or regions that most prominently justify the an-
6
Rank Correlation VQA-HAT VQA-X
0.623 0.396 0.276 0.580 0.517 0.276 0.416 0.354 0.483 Table 1. Evaluation of different VQA models on visual ground- ing and answer prediction. The reported accuracies are evaluated using the VQA-2.0 test-standard set.
swer. Hence the attentions are more speciï¬c and localized. We use VQA-X to evaluate visual grounding performance by comparing the rank-correlation, as in [5, 17].
# 6.2. Results
We evaluate the performance of our proposed method using two criteria: i) rank-correlation [20] to evaluate vi- sual grounding and ii) accuracy to evaluate question answer- ing. Intuitively, rank-correlation measures the similarity be- tween human and model attention maps under a rank-based metric. A high rank-correlation means that the model is âlooking atâ image areas that agree to the visual information used by a human to answer the same question. In terms of accuracy of a predicted answer Ais evaluated by: Die TA = Ai)
Die TA = Ai) i} 3 &) Accuracy(A) = min {
Table 1 reports our main results. Our models are built on top of prior works with the additional Attention Super- vision Module as described in Section 3. Speciï¬cally, we denote by Attn-* our adaptation of the respective model by including our Attention Supervision Module. We high- light that MCB model is the winner of VQA challenge 2016 and MFH model is the best single model in VQA challenge 2017. In Table 1, we can observe that our proposed model achieves a signiï¬cantly boost on rank-correlation with re- spect to human attention. Furthermore, our model outper- forms alternative state-of-art techniques in terms of accu- racy in answer prediction. Speciï¬cally, the rank-correlation for MFH model increases by 36.4% when is evaluated in VQA-HAT dataset and 7.7% when is evaluated in VQA-X. This indicates that our proposed methods enable VQA mod- els to provide more meaningful and interpretable results by generating more accurate visual grounding.
Table 1 also reports the result of an experiment where the decaying factor α(t) in Equation 4 is ï¬xed to a value of 1. In this case, the model is able to achieve higher rank- correlation, but accuracy drops by 2%. We observe that as training proceeds, attention loss becomes dominant in the
VQA-HAT Ground Truth
MFH
Attn-MFH (Ours)
Q: Is the computer on or off? Ans: on
Q: What color is the inside of the cats ears? Ans: pink
Q: How many of these animals are there? Ans: 2
Wai "Nf
Wai "Nf
Figure 4. Visual grounding comparison: the ï¬rst column is the ground-truth human attention in VQA-HAT [5]; the second column shows the results from pretrained MFH model [26]; the last column are our Attn-MFH trained with attention supervision. We can see that the attention areas considered by our model mimic the attention areas used by humans, but they are more localized in space.
ï¬nal training steps, which affects the accuracy of the classi- ï¬cation module. 7. Conclusions
Figure 4 shows qualitative results of the resulting visual grounding, including also a comparison with respect to no- attn model.
In this work we have proposed a new method that is able to slightly outperform current state-of-the-art VQA sys- tems, while also providing interpretable representations in the form of an explicitly trainable visual attention mecha- nism. Speciï¬cally, as a main result, our experiments pro-
7
vide evidence that the generated visual groundings achieve high correlation with respect to human-provided attention annotations, outperforming the correlation scores of previ- ous works by a large margin.
As further contributions, we highlight two relevant in- sides of the proposed approach. On one side, by using at- tention labels as an auxiliary task, the proposed approach demonstrates that is able to constraint the internal repre- sentation of the model in such a way that it fosters the en- coding of interpretable representations of the underlying re- lations between the textual question and input image. On other side, the proposed approach demonstrates a method to leverage existing datasets with region descriptions and ob- ject labels to effectively supervise the attention mechanism in VQA applications, avoiding costly human labeling.
As future work, we believe that the superior visual grounding provided by the proposed method can play a rel- evant role to generate natural language explanations to jus- tify the answer to a given visual question. This scenario will help to demonstrate the relevance of our technique as a tool to increase the capabilities of AI based technologies to explain their decisions.
Acknowledgements: This work was partially funded by Oppo, Panasonic and the Millennium Institute for Founda- tional Research on Data.
# References
[1] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang. Bottom-up and top-down atten- tion for image captioning and VQA. CoRR, abs/1707.07998, 2017.
[2] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 39â48, 2016.
[3] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
the natural language toolkit. In Proceedings of the ACL 2004 on Interactive poster and demonstration sessions, page 31. Association for Computa- tional Linguistics, 2004.
[5] A. Das, H. Agrawal, C. L. Zitnick, D. Parikh, and D. Ba- tra. Human attention in visual question answering: Do hu- mans and deep networks look at the same regions? CoRR, abs/1606.05589, 2016.
[6] A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. CoRR, abs/1606.01847, 2016.
[7] C. Gan, Y. Li, H. Li, C. Sun, and B. Gong. Vqs: Linking seg- mentations to questions and answers for supervised attention in vqa and question-focused semantic segmentation. In Proc. IEEE Int. Conf. Comp. Vis, volume 3, 2017.
8
[8] D. Geman, S. Geman, N. Hallonquist, and L. Younes. Visual turing test for computer vision systems. Proceedings of the National Academy of Sciences, 112(12):3618â3623, 2015. [9] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learn- ing for image recognition. In Proceedings of the IEEE con- ference on computer vision and pattern recognition, pages 770â778, 2016.
[11] J. Kim, K. W. On, W. Lim, J. Kim, J. Ha, and B. Zhang. Hadamard product for low-rank bilinear pooling. CoRR, abs/1610.04325, 2016.
[12] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, et al. Vi- sual genome: Connecting language and vision using crowd- sourced dense image annotations. International Journal of Computer Vision, 123(1):32â73, 2017.
[13] J. Lu, J. Yang, D. Batra, and D. Parikh. Hierarchical question-image co-attention for visual question answering. In Advances In Neural Information Processing Systems, pages 289â297, 2016.
[14] M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncer- tain input. In Advances in neural information processing sys- tems, pages 1682â1690, 2014.
[15] G. A. Miller. Wordnet: a lexical database for english. Com- munications of the ACM, 38(11):39â41, 1995.
[16] H. Noh, P. Hongsuck Seo, and B. Han. Image question an- swering using convolutional neural network with dynamic In Proceedings of the IEEE Confer- parameter prediction. ence on Computer Vision and Pattern Recognition, pages 30â 38, 2016.
[17] D. H. Park, L. A. Hendricks, Z. Akata, A. Rohrbach, B. Schiele, T. Darrell, and M. Rohrbach. Multimodal expla- nations: Justifying decisions and pointing to the evidence. CoRR, abs/1802.08129, 2018.
[18] T. Qiao, J. Dong, and D. Xu. Exploring human-like attention supervision in visual question answering. In AAAI, 2018. [19] K. J. Shih, S. Singh, and D. Hoiem. Where to look: Focus regions for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recogni- tion, pages 4613â4621, 2016.
[20] C. Spearman. The proof and measurement of association between two things. The American journal of psychology, 15(1):72â101, 1904.
[21] D. Teney, P. Anderson, X. He, and A. v. d. Hengel. Tips and tricks for visual question answering: Learnings from the 2017 challenge. arXiv preprint arXiv:1708.02711, 2017. [22] Q. Wu, D. Teney, P. Wang, C. Shen, A. Dick, and A. van den Hengel. Visual question answering: A survey of methods and datasets. Computer Vision and Image Understanding, 163:21â40, 2017.
[23] H. Xu and K. Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answer-
In European Conference on Computer Vision, pages ing. 451â466. Springer, 2016.
[24] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola. Stacked In Pro- attention networks for image question answering. ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 21â29, 2016.
[25] Z. Yu, J. Yu, J. Fan, and D. Tao. Multi-modal factorized bi- linear pooling with co-attention learning for visual question answering. In ICCV, 2017.
[26] Z. Yu, J. Yu, C. Xiang, J. Fan, and D. Tao. Beyond bilinear: Generalized multi-modal factorized high-order pooling for visual question answering. CoRR, abs/1708.03619, 2017.
9 | {
"id": "1708.02711"
} |
1807.11626 | MnasNet: Platform-Aware Neural Architecture Search for Mobile | Designing convolutional neural networks (CNN) for mobile devices is
challenging because mobile models need to be small and fast, yet still
accurate. Although significant efforts have been dedicated to design and
improve mobile CNNs on all dimensions, it is very difficult to manually balance
these trade-offs when there are so many architectural possibilities to
consider. In this paper, we propose an automated mobile neural architecture
search (MNAS) approach, which explicitly incorporate model latency into the
main objective so that the search can identify a model that achieves a good
trade-off between accuracy and latency. Unlike previous work, where latency is
considered via another, often inaccurate proxy (e.g., FLOPS), our approach
directly measures real-world inference latency by executing the model on mobile
phones. To further strike the right balance between flexibility and search
space size, we propose a novel factorized hierarchical search space that
encourages layer diversity throughout the network. Experimental results show
that our approach consistently outperforms state-of-the-art mobile CNN models
across multiple vision tasks. On the ImageNet classification task, our MnasNet
achieves 75.2% top-1 accuracy with 78ms latency on a Pixel phone, which is 1.8x
faster than MobileNetV2 [29] with 0.5% higher accuracy and 2.3x faster than
NASNet [36] with 1.2% higher accuracy. Our MnasNet also achieves better mAP
quality than MobileNets for COCO object detection. Code is at
https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet | http://arxiv.org/pdf/1807.11626 | Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, Quoc V. Le | cs.CV, cs.LG | Published in CVPR 2019 | CVPR 2019 | cs.CV | 20180731 | 20190529 | 9 1 0 2
y a M 9 2 ] V C . s c [ 3 v 6 2 6 1 1 . 7 0 8 1 : v i X r a
# MnasNet: Platform-Aware Neural Architecture Search for Mobile
# Mingxing Tan1
Bo Chen2 Ruoming Pang1 Vijay Vasudevan1 Mark Sandler2 Andrew Howard2 Quoc V. Le1
1Google Brain, 2Google Inc. {tanmingxing, bochen, rpang, vrv, sandler, howarda, qvl}@google.com
# Abstract
# Abstract
Designing convolutional neural networks (CNN) for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although signiï¬cant efforts have been dedicated to design and improve mobile CNNs on all dimensions, it is very difï¬cult to manually balance these trade-offs when there are so many architec- tural possibilities to consider. In this paper, we propose an automated mobile neural architecture search (MNAS) approach, which explicitly incorporate model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. Unlike previous work, where latency is considered via another, often inaccurate proxy (e.g., FLOPS), our approach directly measures real-world inference latency by executing the model on mobile phones. To further strike the right balance between ï¬exibility and search space size, we propose a novel factorized hierarchical search space that encourages layer diversity throughout the network. Experimental results show that our approach consistently outperforms state-of-the-art mobile CNN models across multiple vision tasks. On the ImageNet classiï¬cation task, our MnasNet achieves 75.2% top-1 accuracy with 78ms latency on a Pixel phone, which is faster than MobileNetV2 [29] with 0.5% higher 1.8 faster than NASNet [36] with 1.2% accuracy and 2.3 higher accuracy. Our MnasNet also achieves better mAP quality than MobileNets for COCO object detection. Code is at https://github.com/tensorflow/tpu/ tree/master/models/official/mnasnet.
â Controller eee Trainer srones aceuracy atency reward Muttrobjective
Figure 1: An Overview of Platform-Aware Neural Archi- tecture Search for Mobile.
z 7. 37 MnasNet . AmoebaNet-A 5 MobileNetV2(1.4) ® S74 $ < NASNet-A By 273 o 5 72 ° @ MobileNetV2 e MobileNetV1 ) 50 100 150 200 Inference Latency (ms)
Figure 2: Accuracy vs. Latency Comparison â Our Mnas- Net models signiï¬cantly outperforms other mobile models [29, 36, 26] on ImageNet. Details can be found in Table 1.
such as mobile or embedded devices.
# 1. Introduction
Convolutional neural networks (CNN) have made signif- icant progress in image classiï¬cation, object detection, and many other applications. As modern CNN models become increasingly deeper and larger [31, 13, 36, 26], they also be- come slower, and require more computation. Such increases in computational demands make it difï¬cult to deploy state- of-the-art CNN models on resource-constrained platforms
Given restricted computational resources available on mobile devices, much recent research has focused on de- signing and improving mobile CNN models by reducing the depth of the network and utilizing less expensive oper- ations, such as depthwise convolution [11] and group con- volution [33]. However, designing a resource-constrained mobile model is challenging: one has to carefully balance accuracy and resource-efï¬ciency, resulting in a signiï¬cantly large design space.
In this paper, we propose an automated neural architec- ture search approach for designing mobile CNN models. Figure 1 shows an overview of our approach, where the main differences from previous approaches are the latency aware multi-objective reward and the novel search space. Our approach is based on two main ideas. First, we formu- late the design problem as a multi-objective optimization problem that considers both accuracy and inference latency of CNN models. Unlike in previous work [36, 26, 21] that use FLOPS to approximate inference latency, we directly measure the real-world latency by executing the model on real mobile devices. Our idea is inspired by the observa- tion that FLOPS is often an inaccurate proxy: for exam- ple, MobileNet [11] and NASNet [36] have similar FLOPS (575M vs. 564M), but their latencies are signiï¬cantly dif- ferent (113ms vs. 183ms, details in Table 1). Secondly, we observe that previous automated approaches mainly search for a few types of cells and then repeatedly stack the same cells through the network. This simpliï¬es the search pro- cess, but also precludes layer diversity that is important for computational efï¬ciency. To address this issue, we propose a novel factorized hierarchical search space, which allows layers to be architecturally different yet still strikes the right balance between ï¬exibility and search space size.
We apply our proposed approach to ImageNet classiï¬ca- tion [28] and COCO object detection [18]. Figure 2 sum- marizes a comparison between our MnasNet models and other state-of-the-art mobile models. Compared to the Mo- bileNetV2 [29], our model improves the ImageNet accuracy by 3.0% with similar latency on the Google Pixel phone. On the other hand, if we constrain the target accuracy, then our MnasNet models are 1.8 faster than MobileNetV2 and 2.3 faster thans NASNet [36] with better accuracy. Compared to the widely used ResNet-50 [9], our MnasNet model achieves slightly higher (76.7%) accuracy with 4.8 à fewer parameters and 10 fewer multiply-add operations. By plugging our model as a feature extractor into the SSD object detection framework, our model improves both the inference latency and the mAP quality on COCO dataset over MobileNetsV1 and MobileNetV2, and achieves com- parable mAP quality (23.0 vs 23.2) as SSD300 [22] with 42
à To summarize, our main contributions are as follows:
1. We introduce a multi-objective neural architecture search approach that optimizes both accuracy and real- world latency on mobile devices.
2. We propose a novel factorized hierarchical search space to enable layer diversity yet still strike the right balance between ï¬exibility and search space size.
3. We demonstrate new state-of-the-art accuracy on both ImageNet classiï¬cation and COCO object detection under typical mobile latency constraints.
# 2. Related Work
Improving the resource efï¬ciency of CNN models has been an active research topic during the last several years. Some commonly-used approaches include 1) quantizing the weights and/or activations of a baseline CNN model into lower-bit representations [8, 16], or 2) pruning less impor- tant ï¬lters according to FLOPs [6, 10], or to platform-aware metrics such as latency introduced in [32]. However, these methods are tied to a baseline model and do not focus on learning novel compositions of CNN operations.
Another common approach is to directly hand-craft more efï¬cient mobile architectures: SqueezeNet [15] reduces the number of parameters and computation by using lower- cost 1x1 convolutions and reducing ï¬lter sizes; MobileNet [11] extensively employs depthwise separable convolution to minimize computation density; Shufï¬eNets [33, 24] uti- lize low-cost group convolution and channel shufï¬e; Con- densenet [14] learns to connect group convolutions across layers; Recently, MobileNetV2 [29] achieved state-of-the- art results among mobile-size models by using resource- efï¬cient inverted residuals and linear bottlenecks. Unfortu- nately, given the potentially huge design space, these hand- crafted models usually take signiï¬cant human efforts.
Recently, there has been growing interest in automating the model design process using neural architecture search. These approaches are mainly based on reinforcement learn- ing [35, 36, 1, 19, 25], evolutionary search [26], differen- tiable search [21], or other learning algorithms [19, 17, 23]. Although these methods can generate mobile-size models by repeatedly stacking a few searched cells, they do not in- corporate mobile platform constraints into the search pro- cess or search space. Closely related to our work is MONAS [12], DPP-Net [3], RNAS [34] and Pareto-NASH [4] which attempt to optimize multiple objectives, such as model size and accuracy, while searching for CNNs, but their search process optimizes on small tasks like CIFAR. In contrast, this paper targets real-world mobile latency constraints and focuses on larger tasks like ImageNet classiï¬cation and COCO object detection.
# 3. Problem Formulation
We formulate the design problem as a multi-objective search, aiming at ï¬nding CNN models with both high- accuracy and low inference latency. Unlike previous ar- chitecture search approaches that often optimize for indi- rect metrics, such as FLOPS, we consider direct real-world inference latency, by running CNN models on real mobile devices, and then incorporating the real-world inference la- tency into our objective. Doing so directly measures what is achievable in practice: our early experiments show it is challenging to approximate real-world latency due to the variety of mobile hardware/software idiosyncrasies.
a=0 0.6 Acc(m)=0.5, T=80 B=-1 2 3 3 B04 ie) 0.2 + 20 40 60 80 100 120 140 160 0.6 Acc(m)=0.5, T=80 B g | â., 8 H = i i i 204 i) i i i 1 ' 1 ' i ' t \ t 0.2 + 4 H 20 «40 «66080100120, 140160 Model Latency (ms)
Figure 3: Objective Function Deï¬ned by Equation 2, assuming accuracy ACC(m)=0.5 and target latency T =80ms: (top) show the object values with latency as a hard constraint; (bottom) shows the objective values with latency as a soft constraint.
Given a model m, let ACC(m) denote its accuracy on the target task, LAT (m) denotes the inference latency on the target mobile platform, and T is the target latency. A common method is to treat T as a hard constraint and max- imize accuracy under this constraint:
maximize m ACC(m) subject to LAT (m) T (1)
â¤
However, this approach only maximizes a single metric and does not provide multiple Pareto optimal solutions. Infor- mally, a model is called Pareto optimal [2] if either it has the highest accuracy without increasing latency or it has the lowest latency without decreasing accuracy. Given the com- putational cost of performing architecture search, we are more interested in ï¬nding multiple Pareto-optimal solutions in a single architecture search.
While there are many methods in the literature [2], we use a customized weighted product method1 to approximate Pareto optimal solutions, with optimization goal deï¬ned as:
LAT(m)]" maximize ACC(m) x | (2) m T
where w is the weight factor deï¬ned as:
w = α, β, if LAT (m) otherwise ⤠T (3)
1We pick the weighted product method because it is easy to customize, but we expect methods like weighted sum should be also ï¬ne.
where α and β are application-speciï¬c constants. An empir- ical rule for picking α and β is to ensure Pareto-optimal so- lutions have similar reward under different accuracy-latency trade-offs. For instance, we empirically observed doubling the latency usually brings about 5% relative accuracy gain. Given two models: (1) M1 has latency l and accuracy a; (2) (1 + 5%), M2 has latency 2l and 5% higher accuracy a · they should have similar reward: Reward(M 2) = a (1 + (l/T )β. Solving this 5%) Reward(M 1) = a 0.07 in gives β our experiments unless explicitly stated.
Figure 3 shows the objective function with two typical values of (α, β). In the top ï¬gure with (α = 0, β = 1), we simply use accuracy as the objective value if measured latency is less than the target latency T ; otherwise, we sharply penalize the objective value to discourage mod- els from violating latency constraints. The bottom ï¬gure 0.07) treats the target latency T as a soft con- (α = β = straint, and smoothly adjusts the objective value based on the measured latency.
# 4. Mobile Neural Architecture Search
In this section, we will ï¬rst discuss our proposed novel factorized hierarchical search space, and then summarize our reinforcement-learning based search algorithm.
# 4.1. Factorized Hierarchical Search Space
As shown in recent studies [36, 20], a well-deï¬ned search space is extremely important for neural architecture search. However, most previous approaches [35, 19, 26] only search for a few complex cells and then repeatedly stack the same cells. These approaches donât permit layer diversity, which we show is critical for achieving both high accuracy and lower latency.
In contrast to previous approaches, we introduce a novel factorized hierarchical search space that factorizes a CNN model into unique blocks and then searches for the oper- ations and connections per block separately, thus allowing different layer architectures in different blocks. Our intu- ition is that we need to search for the best operations based on the input and output shapes to obtain better accurate- latency trade-offs. For example, earlier stages of CNNs usually process larger amounts of data and thus have much higher impact on inference latency than later stages. For- mally, consider a widely-used depthwise separable convo- lution [11] kernel denoted as the four-tuple (K, K, M, N ) that transforms an input of size (H, W, M )2 to an output of size (H, W, N ), where (H, W ) is the input resolution and M, N are the input/output ï¬lter sizes. The total number of multiply-adds can be described as:
# 2We omit batch size dimension for simplicity.
Blocks are predefined Skeletons. Search Space Per Block i: e = ConvOp: dconv, conv, ... conv iconv 1x1 5x5, e KernelSize: 3x3, 5x5 e SERatio: 0, 0.25, ... e â SkipOp: identity, pool, ... e = FilterSize: F. 0.25 ae e = #Layers: N, â Contents in blue are searched
Figure 4: Factorized Hierarchical Search Space. Network layers are grouped into a number of predeï¬ned skeletons, called blocks, based on their input resolutions and ï¬lter sizes. Each block contains a variable number of repeated identical layers where only the ï¬rst layer has stride 2 if input/output resolutions are different but all other layers have stride 1. For each block, we search for the operations and connections for a single layer and the number of layers N , then the same layer is repeated N times (e.g., Layer 4-1 to 4-N4 are the same). Layers from different blocks (e.g., Layer 2-1 and 4-1) can be different.
(4) â Here we need to carefully balance the kernel size K and ï¬lter size N if the total computation is constrained. For in- stance, increasing the receptive ï¬eld with larger kernel size K of a layer must be balanced with reducing either the ï¬lter size N at the same layer, or compute from other layers.
Figure 4 shows the baseline structure of our search space. We partition a CNN model into a sequence of pre-deï¬ned blocks, gradually reducing input resolutions and increasing ï¬lter sizes as is common in many CNN models. Each block has a list of identical layers, whose operations and con- nections are determined by a per-block sub search space. Speciï¬cally, a sub search space for a block i consists of the following choices:
⢠Convolutional ops ConvOp: regular conv (conv), depthwise conv (dconv), and mobile inverted bottleneck conv [29].
Our factorized hierarchical search space has a distinct advantage of balancing the diversity of layers and the size of total search space. Suppose we partition the network into B blocks, and each block has a sub search space of size S with average N layers per block, then our total search space size would be SB, versing the ï¬at per-layer search space with size SBâN . A typical case is S = 432, B = 5, N = 3, where our search space size is about 1013, versing the per- layer approach with search space size 1039.
# 4.2. Search Algorithm
Inspired by recent work [35, 36, 25, 20], we use a re- inforcement learning approach to ï¬nd Pareto optimal solu- tions for our multi-objective search problem. We choose reinforcement learning because it is convenient and the re- ward is easy to customize, but we expect other methods like evolution [26] should also work.
⢠Convolutional kernel size KernelSize: 3x3, 5x5.
⢠Squeeze-and-excitation [13] ratio SERatio: 0, 0.25.
⢠Skip ops SkipOp: pooling, identity residual, or no skip.
Output ï¬lter size Fi. ⢠Number of layers per block Ni.
Concretely, we follow the same idea as [36] and map each CNN model in the search space to a list of tokens. These tokens are determined by a sequence of actions a1:T from the reinforcement learning agent based on its parame- ters θ. Our goal is to maximize the expected reward:
ConvOp, KernelSize, SERatio, SkipOp, Fi determines the architecture of a layer, while Ni determines how many times the layer will be repeated for the block. For exam- ple, each layer of block 4 in Figure 4 has an inverted bot- tleneck 5x5 convolution and an identity residual skip path, and the same layer is repeated N4 times. We discretize all search options using MobileNetV2 as a reference: For #lay- based on Mo- ers in each block, we search for bileNetV2; for ï¬lter size per layer, we search for its relative size in
0.75, 1.0, 1.25 {
}
J = EP (a1:T ;θ)[R(m)] (5)
where m is a sampled model determined by action a1:T , and R(m) is the objective value deï¬ned by equation 2.
As shown in Figure 1, the search framework consists of three components: a recurrent neural network (RNN) based controller, a trainer to obtain the model accuracy, and a mobile phone based inference engine for measuring the la- tency. We follow the well known sample-eval-update loop to train the controller. At each step, the controller ï¬rst sam- ples a batch of models using its current parameters θ, by
Model Type #Params #Mult-Adds Top-1 Acc. (%) Top-5 Acc. (%) Inference Latency manual MobileNetV1 [11] manual SqueezeNext [5] manual Shufï¬eNet (1.5x) [33] manual Shufï¬eNet (2x) manual Shufï¬eNetV2 (1.5x) [24] Shufï¬eNetV2 (2x) manual CondenseNet (G=C=4) [14] manual manual CondenseNet (G=C=8) manual MobileNetV2 [29] manual MobileNetV2 (1.4x) auto NASNet-A [36] auto AmoebaNet-A [26] auto PNASNet [19] auto DARTS [21] 4.2M 3.2M 3.4M 5.4M - - 2.9M 4.8M 3.4M 6.9M 5.3M 5.1M 5.1M 4.9M 575M 708M 292M 524M 299M 597M 274M 529M 300M 585M 564M 555M 588M 595M 70.6 67.5 71.5 73.7 72.6 75.4 71.0 73.8 72.0 74.7 74.0 74.5 74.2 73.1 89.5 88.2 - - - - 90.0 91.7 91.0 92.5 91.3 92.0 91.9 91 113ms - - - - - - - 75ms 143ms 183ms 190ms - - MnasNet-A1 MnasNet-A2 MnasNet-A3 auto auto auto 3.9M 4.8M 5.2M 312M 340M 403M 75.2 75.6 76.7 92.5 92.7 93.3 78ms 84ms 103ms
Table 1: Performance Results on ImageNet Classiï¬cation [28]. We compare our MnasNet models with both manually- designed mobile models and other automated approaches â MnasNet-A1 is our baseline model;MnasNet-A2 and MnasNet-A3 are two models (for comparison) with different latency from the same architecture search experiment; #Params: number of trainable parameters; #Mult-Adds: number of multiply-add operations per image; Top-1/5 Acc.: the top-1 or top-5 accuracy on ImageNet validation set; Inference Latency is measured on the big CPU core of a Pixel 1 Phone with batch size 1.
predicting a sequence of tokens based on the softmax logits from its RNN. For each sampled model m, we train it on the target task to get its accuracy ACC(m), and run it on real phones to get its inference latency LAT (m). We then cal- culate the reward value R(m) using equation 2. At the end of each step, the parameters θ of the controller are updated by maximizing the expected reward deï¬ned by equation 5 using Proximal Policy Optimization [30]. The sample-eval- update loop is repeated until it reaches the maximum num- ber of steps or the parameters θ converge.
# 5. Experimental Setup
Directly searching for CNN models on large tasks like ImageNet or COCO is expensive, as each model takes days to converge. While previous approaches mainly per- form architecture search on smaller tasks such as CIFAR- 10 [36, 26], we ï¬nd those small proxy tasks donât work when model latency is taken into account, because one typ- ically needs to scale up the model when applying to larger problems. In this paper, we directly perform our architec- ture search on the ImageNet training set but with fewer training steps (5 epochs). As a common practice, we re- serve randomly selected 50K images from the training set as the ï¬xed validation set. To ensure the accuracy improve- ments are from our search space, we use the same RNN controller as NASNet [36] even though it is not efï¬cient:
each architecture search takes 4.5 days on 64 TPUv2 de- vices. During training, we measure the real-world latency of each sampled model by running it on the single-thread In total, our controller big CPU core of Pixel 1 phones. samples about 8K models during architecture search, but only 15 top-performing models are transferred to the full ImageNet and only 1 model is transferred to COCO.
For full ImageNet training, we use RMSProp optimizer with decay 0.9 and momentum 0.9. Batch norm is added after every convolution layer with momentum 0.99, and weight decay is 1e-5. Dropout rate 0.2 is applied to the last layer. Following [7], learning rate is increased from 0 to 0.256 in the ï¬rst 5 epochs, and then decayed by 0.97 every 2.4 epochs. We use batch size 4K and Inception preprocess- ing with image size 224 224. For COCO training, we plug our learned model into SSD detector [22] and use the same settings as [29], including input size 320
Ã
# 6. Results
In this section, we study the performance of our models on ImageNet classiï¬cation and COCO object detection, and compare them with other state-of-the-art mobile models.
# 6.1. ImageNet Classiï¬cation Performance
Table 1 shows the performance of our models on Ima- geNet [28]. We set our target latency as T = 75ms, similar
aoa Roa x Bay Imagenet Top 1 Accuracy (%) iy ; d © MnasNet-A1 60.0 60.3% % â MobileNetv2 0 20 40 60 80 100 120 140 160 Inference Latency (ms)
66 64 Imagenet Top 1 Accuracy (%) 62 60 6. 3% @ = MnasNet-A1 âwe * â MobileNetV2 58 : 10 20 30 40 50 60 70 80 Inference Latency (ms)
(a) Depth multiplier = 0.35, 0.5, 0.75, 1.0, 1.4, corresponding to points from left to right.
(b) Input size = 96, 128, 160, 192, 224, corresponding to points from left to right.
Figure 5: Performance Comparison with Different Model Scaling Techniques. MnasNet is our baseline model shown in Table 1. We scale it with the same depth multipliers and input sizes as MobileNetV2.
Inference Latency Top-1 Acc. w/o SE MobileNetV2 NASNet MnasNet-B1 75ms 183ms 77ms 72.0% 74.0% 74.5% w/ SE MnasNet-A1 MnasNet-A2 78ms 84ms 75.2% 75.6%
Table 2: Performance Study for Squeeze-and-Excitation SE [13] â MnasNet-A denote the default MnasNet with SE in search space; MnasNet-B denote MnasNet with no SE in search space.
to MobileNetV2 [29], and use Equation 2 with α=β=-0.07 as our reward function during architecture search. After- wards, we pick three top-performing MnasNet models, with different latency-accuracy trade-offs from the same search experiment and compare them with existing mobile models. As shown in the table, our MnasNet A1 model achieves 75.2% top-1 / 92.5% top-5 accuracy with 78ms latency and 3.9M parameters / 312M multiply-adds, achieving a new state-of-the-art accuracy for this typical mobile latency con- straint. In particular, MnasNet runs 1.8 faster than Mo- bileNetV2 (1.4) [29] on the same Pixel phone with 0.5% higher accuracy. Compared with automatically searched CNN models, our MnasNet runs 2.3 faster than the mobile-size NASNet-A [36] with 1.2% higher top-1 ac- curacy. Notably, our slightly larger MnasNet-A3 model achieves better accuracy than ResNet-50 [9], but with 4.8 fewer parameters and 10
Ã
Given that squeeze-and-excitation (SE [13]) is relatively new and many existing mobile models donât have this extra
optimization, we also show the search results without SE in the search space in Table 2; our automated approach still signiï¬cantly outperforms both MobileNetV2 and NASNet.
# 6.2. Model Scaling Performance
Given the myriad application requirements and device heterogeneity present in the real world, developers often scale a model up or down to trade accuracy for latency or model size. One common scaling technique is to modify the ï¬lter size using a depth multiplier [11]. For example, a depth multiplier of 0.5 halves the number of channels in each layer, thus reducing the latency and model size. An- other common scaling technique is to reduce the input im- age size without changing the network.
Figure 5 compares the model scaling performance of MnasNet and MobileNetV2 by varying the depth multipli- ers and input image sizes. As we change the depth mul- tiplier from 0.35 to 1.4, the inference latency also varies from 20ms to 160ms. As shown in Figure 5a, our Mnas- Net model consistently achieves better accuracy than Mo- bileNetV2 for each depth multiplier. Similarly, our model is also robust to input size changes and consistently outper- forms MobileNetV2 (increaseing accuracy by up to 4.1%) across all input image sizes from 96 to 224, as shown in Figure 5b.
In addition to model scaling, our approach also allows searching for a new architecture for any latency target. For example, some video applications may require latency as low as 25ms. We can either scale down a baseline model, or search for new models speciï¬cally targeted to this latency constraint. Table 4 compares these two approaches. For fair comparison, we use the same 224x224 image sizes for all
Network #Params #Mult-Adds mAP mAPS mAPM mAPL Inference Latency YOLOv2 [27] SSD300 [22] SSD512 [22] MobileNetV1 + SSDLite [11] MobileNetV2 + SSDLite [29] 50.7M 36.1M 36.1M 5.1M 4.3M 17.5B 35.2B 99.5B 1.3B 0.8B 21.6 23.2 26.8 22.2 22.1 5.0 5.3 9.0 - - 22.4 23.2 28.9 - - 35.5 39.6 41.9 - - - - - 270ms 200ms MnasNet-A1 + SSDLite 4.9M 0.8B 23.0 3.8 21.7 42.0 203ms
Table 3: Performance Results on COCO Object Detection â #Params: number of trainable parameters; #Mult-Adds: number of multiply-additions per image; mAP : standard mean average precision on test-dev2017; mAPS, mAPM , mAPL: mean average precision on small, medium, large objects; Inference Latency: the inference latency on Pixel 1 Phone.
Params MAdds Latency Top1 Acc. MobileNetV2 (0.35x) MnasNet-A1 (0.35x) MnasNet-search1 MnasNet-search2 1.66M 1.7M 1.9M 2.0M 59M 63M 65M 68M 21.4ms 22.8ms 22.0ms 23.2ms 60.3% 64.1% 64.9% 66.0%
Table 4: Model Scaling vs. Model Search â MobileNetV2 (0.35x) and MnasNet-A1 (0.35x) denote scaling the base- line models with depth multiplier 0.35; MnasNet-search1/2 denotes models from a new architecture search that targets 22ms latency constraint.
models. Although our MnasNet already outperforms Mo- bileNetV2 with the same scaling parameters, we can further improve the accuracy with a new architecture search target- ing a 22ms latency constraint.
(a) α = 0, β = â1 (b) α = β = â0.07
# 6.3. COCO Object Detection Performance
For COCO object detection [18], we pick the MnasNet models in Table 2 and use them as the feature extractor for SSDLite, a modiï¬ed resource-efï¬cient version of SSD [29]. Similar to [29], we compare our models with other mobile- size SSD or YOLO models.
Figure 6: Multi-Objective Search Results based on equa- tion 2 with (a) α=0, β=-1; and (b) α=β= 0.07. Target la- tency is T =75ms. Top ï¬gure shows the Pareto curve (blue line) for the 3000 sampled models (green dots); bottom ï¬g- ure shows the histogram of model latency.
Table 3 shows the performance of our MnasNet mod- els on COCO. Results for YOLO and SSD are from [27], while results for MobileNets are from [29]. We train our models on COCO trainval35k and evaluate them on test- dev2017 by submitting the results to COCO server. As shown in the table, our approach signiï¬cantly improve the accuracy over MobileNet V1 and V2. Compare to the stan- dard SSD300 detector [22], our MnasNet model achieves comparable mAP quality (23.0 vs 23.2) as SSD300 with 7.4
Ã
Ã
# 7. Ablation Study and Discussion
In this section, we study the impact of latency constraint and search space, and discuss MnasNet architecture details and the importance of layer diversity.
# 7.1. Soft vs. Hard Latency Constraint
Our multi-objective search method allows us to deal with both hard and soft latency constraints by setting α and β to different values in the reward equation 2. Figure 6 shows the multi-objective search results for typical α and β. When α = 0, β = 1, the latency is treated as a hard constraint, so the controller tends to focus more on faster models to avoid the latency penalty. On the other hand, by setting 0.07, the controller treats the target latency as a α = β = soft constraint and tries to search for models across a wider latency range. It samples more models around the target latency value at 75ms, but also explores models with latency smaller than 40ms or greater than 110ms. This allows us to pick multiple models from the Pareto curve in a single architecture search as shown in Table 1.
# 7.2. Disentangling Search Space and Reward
To disentangle the impact of our two key contributions: multi-objective reward and new search space, Figure 5 com- pares their performance. Starting from NASNet [36], we ï¬rst employ the same cell-base search space [36] and sim- ply add the latency constraint using our proposed multiple- object reward. Results show it generates a much faster model by trading the accuracy to latency. Then, we ap- ply both our multi-objective reward and our new factorized search space, and achieve both higher accuracy and lower latency, suggesting the effectiveness of our search space.
Reward Search Space Latency Top-1 Acc. Single-obj [36] Cell-based [36] Multi-obj Cell-based [36] MnasNet Multi-obj 183ms 100ms 78ms 74.0% 72.0% 75.2%
Table 5: Comparison of Decoupled Search Space and Reward Design â Multi-obj denotes our multi-objective reward; Single-obj denotes only optimizing accuracy.
# 7.3. MnasNet Architecture and Layer Diversity
Figure 7(a) illustrates our MnasNet-A1 model found by our automated approach. As expected, it consists of a vari- ety of layer architectures throughout the network. One in- teresting observation is that our MnasNet uses both 3x3 and 5x5 convolutions, which is different from previous mobile models that all only use 3x3 convolutions.
In order to study the impact of layer diversity, Table 6 compares MnasNet with its variants that only repeat a single type of layer (ï¬xed kernel size and expansion ra- tio). Our MnasNet model has much better accuracy-latency trade-offs than those variants, highlighting the importance of layer diversity in resource-constrained CNN models.
# 8. Conclusion
This paper presents an automated neural architecture search approach for designing resource-efï¬cient mobile CNN models using reinforcement learning. Our main ideas are incorporating platform-aware real-world latency infor- mation into the search process and utilizing a novel factor- ized hierarchical search space to search for mobile models with the best trade-offs between accuracy and latency. We demonstrate that our approach can automatically ï¬nd sig- niï¬cantly better mobile models than existing approaches, and achieve new state-of-the-art results on both ImageNet classiï¬cation and COCO object detection under typical mo- bile inference latency constraints. The resulting MnasNet architecture also provides interesting ï¬ndings on the impor- tance of layer diversity, which will guide us in designing and improving future mobile CNN models.
logits HxWxE Conv1x1, BN HxWx3F SE (Pooling, FC, FC, Slgmoid, MUL) I Pooling, FC 7x7x320 MBConv6 (k3x3) x1 HxWx3F_ DWConv5x5, BN, HxWx3F_ Conv1x1, BN, HxWxF (b) MBConv3 7x7x160 MBConv6 (k5x5), SE BR} 14x14x112 MBConv6 (k3x3), SE fq 14x14x80 MBConv6 (k3x3) Bz HXxWxF Conv1x1, BN 28x28x40 MBConv3 (k5x5), SE Bg DWConv3x3, BN, HxWx6F Conv1x1, BN, HxWxF (c) MBConv6 56x56x24 MBConv6 (k3x3) x2 112x112x16 SepConv (k3x3) Fal HxWxF Conv1x1, BN 112x112x32 224x224x3 DWConv3x3, BN, images (a) MnasNet-A1 (d) SepConv
# Relu
# Relu
# Relu
(k5x5)
# Relu
# Relu
(k3x3)
# Relu
(k3x3)
Figure 7: MnasNet-A1 Architecture â (a) is a representa- tive model selected from Table 1; (b) - (d) are a few cor- responding layer structures. MBConv denotes mobile in- verted bottleneck conv, DWConv denotes depthwise conv, k3x3/k5x5 denotes kernel size, BN is batch norm, HxWxF 1/2/3/4 denotes tensor shape (height, width, depth), and denotes the number of repeated layers within the block.
Top-1 Acc. Inference Latency MnasNet-A1 75.2% 78ms MBConv3 (k3x3) only MBConv3 (k5x5) only MBConv6 (k3x3) only MBConv6 (k5x5) only 71.8% 72.5% 74.9% 75.6% 63ms 79ms 116ms 146ms
Table 6: Performance Comparison of MnasNet and Its Variants â MnasNet-A1 denotes the model shown in Figure 7(a); others are variants that repeat a single type of layer throughout the network. All models have the same number of layers and same ï¬lter size at each layer.
# 9. Acknowledgments
We thank Barret Zoph, Dmitry Kalenichenko, Guiheng Zhou, Hongkun Yu, Jeff Dean, Megan Kacholia, Menglong Zhu, Nan Zhang, Shane Almeida, Sheng Li, Vishy Tiru- malashetty, Wen Wang, Xiaoqiang Zheng, and the larger device automation platform team, TensorFlow Lite, and Google Brain team.
# References
[1] B. Baker, O. Gupta, N. Naik, and R. Raskar. Designing neural network architectures using reinforcement learning. ICLR, 2017.
[2] K. Deb. Multi-objective optimization. Search methodolo- gies, pages 403â449, 2014.
[3] J.-D. Dong, A.-C. Cheng, D.-C. Juan, W. Wei, and M. Sun. DPP-Net: Device-aware progressive search for pareto- optimal neural architectures. ECCV, 2018.
[4] T. Elsken, J. H. Metzen, and F. Hutter. Multi-objective archi- tecture search for cnns. arXiv preprint arXiv:1804.09081, 2018.
[5] A. Gholami, K. Kwon, B. Wu, Z. Tai, X. Yue, P. Jin, S. Zhao, and K. Keutzer. Squeezenext: Hardware-aware neural net- work design. ECV Workshop at CVPR, 2018.
[6] A. Gordon, E. Eban, O. Nachum, B. Chen, H. Wu, T.-J. Yang, and E. Choi. Morphnet: Fast & simple resource- constrained structure learning of deep networks. CVPR, 2018. [7] P. Goyal,
P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He. Accurate, large minibatch sgd: training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
[8] S. Han, H. Mao, and W. J. Dally. Deep compression: Com- pressing deep neural networks with pruning, trained quanti- zation and huffman coding. ICLR, 2016.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning
for image recognition. CVPR, pages 770â778, 2016. [10] Y. He, J. Lin, Z. Liu, H. Wang, L.-J. Li, and S. Han. Amc: Automl for model compression and acceleration on mobile devices. ECCV, 2018.
[11] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efï¬- cient convolutional neural networks for mobile vision appli- cations. arXiv preprint arXiv:1704.04861, 2017.
[12] C.-H. Hsu, S.-H. Chang, D.-C. Juan, J.-Y. Pan, Y.-T. Chen, W. Wei, and S.-C. Chang. MONAS: Multi-objective neu- ral architecture search using reinforcement learning. arXiv preprint arXiv:1806.10332, 2018.
[13] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation net- works. CVPR, 2018.
[14] G. Huang, S. Liu, L. van der Maaten, and K. Q. Weinberger. Condensenet: An efï¬cient densenet using learned group con- volutions. CVPR, 2018.
[15] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and ¡0.5 mb model size. arXiv preprint arXiv:1602.07360, 2016.
[16] B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam, and D. Kalenichenko. Quantization and training of neural networks for efï¬cient integer-arithmetic-only infer- ence. CVPR, 2018.
[17] K. Kandasamy, W. Neiswanger, J. Schneider, B. Poczos, and E. Xing. Neural architecture search with bayesian optimisa- tion and optimal transport. NeurIPS, 2018.
[18] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ra- manan, P. Doll´ar, and C. L. Zitnick. Microsoft COCO: Com- mon objects in context. ECCV, 2014.
[19] C. Liu, B. Zoph, J. Shlens, W. Hua, L.-J. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy. Progressive neural ar- chitecture search. ECCV, 2018.
[20] H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu. Hierarchical representations for efï¬cient architecture search. ICLR, 2018.
[21] H. Liu, K. Simonyan, and Y. Yang. DARTS: Differentiable architecture search. ICLR, 2019.
[22] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.- Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. ECCV, 2016.
[23] R. Luo, F. Tian, T. Qin, and T.-Y. Liu. Neural architecture optimization. NeurIPS, 2018.
[24] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun. Shufï¬enet v2: Practical guidelines for efï¬cient cnn architecture design. ECCV, 2018.
[25] H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, and J. Dean. Efï¬cient neural architecture search via parameter sharing. ICML, 2018.
[26] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le. Regular- ized evolution for image classiï¬er architecture search. AAAI, 2019.
[27] J. Redmon and A. Farhadi. Yolo9000: better, faster, stronger. CVPR, 2017.
[28] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211â252, 2015.
[29] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen. Mobilenetv2: Inverted residuals and linear bottle- necks. CVPR, 2018.
[30] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[31] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. AAAI, 4:12, 2017.
[32] T.-J. Yang, A. Howard, B. Chen, X. Zhang, A. Go, V. Sze, and H. Adam. Netadapt: Platform-aware neural network adaptation for mobile applications. ECCV, 2018.
[33] X. Zhang, X. Zhou, M. Lin, and J. Sun. Shufï¬enet: An extremely efï¬cient convolutional neural network for mobile devices. CVPR, 2018.
[34] Y. Zhou, S. Ebrahimi, S. ¨O. Arık, H. Yu, H. Liu, and G. Di- amos. Resource-efï¬cient neural architect. arXiv preprint arXiv:1806.07912, 2018.
[35] B. Zoph and Q. V. Le. Neural architecture search with rein- forcement learning. ICLR, 2017.
[36] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learn- ing transferable architectures for scalable image recognition. CVPR, 2018. | {
"id": "1602.07360"
} |
1808.00023 | The Measure and Mismeasure of Fairness | The field of fair machine learning aims to ensure that decisions guided by
algorithms are equitable. Over the last decade, several formal, mathematical
definitions of fairness have gained prominence. Here we first assemble and
categorize these definitions into two broad families: (1) those that constrain
the effects of decisions on disparities; and (2) those that constrain the
effects of legally protected characteristics, like race and gender, on
decisions. We then show, analytically and empirically, that both families of
definitions typically result in strongly Pareto dominated decision policies.
For example, in the case of college admissions, adhering to popular formal
conceptions of fairness would simultaneously result in lower student-body
diversity and a less academically prepared class, relative to what one could
achieve by explicitly tailoring admissions policies to achieve desired
outcomes. In this sense, requiring that these fairness definitions hold can,
perversely, harm the very groups they were designed to protect. In contrast to
axiomatic notions of fairness, we argue that the equitable design of algorithms
requires grappling with their context-specific consequences, akin to the
equitable design of policy. We conclude by listing several open challenges in
fair machine learning and offering strategies to ensure algorithms are better
aligned with policy goals. | http://arxiv.org/pdf/1808.00023 | Sam Corbett-Davies, Johann D. Gaebler, Hamed Nilforoshan, Ravi Shroff, Sharad Goel | cs.CY | null | Journal of Machine Learning Research, Vol. 24, 2023 | cs.CY | 20180731 | 20230814 | 3 2 0 2
g u A 4 1 ] Y C . s c [
3 v 3 2 0 0 0 . 8 0 8 1 : v i X r a
# The Measure and Mismeasure of Fairness
Sam Corbett-Daviesâ
samcorbettdavies@gmail.com
Johann D. Gaeblerâ Department of Statistics Harvard University Cambridge, MA 02138, USA Hamed Nilforoshanâ Department of Computer Science Stanford University Stanford, CA 94305, USA
jgaebler@fas.harvard.edu
JGAEBLER@FAS.HARVARD.EDU
hamedn@cs.stanford.edu
Ravi Shroff Department of Applied Statistics, Social Science, and Humanities New York University New York, NY 10003, USA
ravi.shroff@nyu.edu
Sharad Goel Harvard Kennedy School Harvard University Cambridge, MA 02138, USA
sgoel@hks.harvard.edu
âAuthors contributed equally.
Abstract The field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last decade, several formal, mathematical definitions of fairness have gained prominence. Here we first assemble and categorize these definitions into two broad families: (1) those that constrain the effects of decisions on disparities; and (2) those that constrain the effects of legally protected characteristics, like race and gender, on decisions. We then show, analytically and empirically, that both families of definitions typically re- sult in strongly Pareto dominated decision policies. For example, in the case of college admissions, adhering to popular formal conceptions of fairness would simultaneously result in lower student-body diversity and a less academically prepared class, relative to what one could achieve by explicitly tailoring admissions policies to achieve desired outcomes. In this sense, requiring that these fairness definitions hold can, perversely, harm the very In contrast to axiomatic notions of fairness, we groups they were designed to protect. argue that the equitable design of algorithms requires grappling with their context-specific consequences, akin to the equitable design of policy. We conclude by listing several open challenges in fair machine learning and offering strategies to ensure algorithms are better aligned with policy goals.
Keywords: Fair machine learning, consequentialism, discrimination
©2023 Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel.
License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/.
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
# Contents
2.1 Formal setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Limiting the Effect of Decisions on Disparities . . . . . . . . . . . . . . . . . 2.3 Limiting the Effect of Attributes on Decisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 Utility, Risk, and Threshold Rules 3.2 The Problem of Inframarginality . . . . . . . . . . . . . . . . . . . . . . . . 3.3 The Problem with Fairness through Unawareness . . . . . . . . . . . . . . . 4.1 The Geometry of Fair Decision Making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 A Formal Theory of Fairness in the Presence of Externalities 5.1 Competing Notions of Ethical Decision Making: Process vs. Outcomes . . . 5.2 Designing Equitable Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 5.3 Case Study: An Algorithm to Allocate Limited Medical Resources . . . . . 4 6 6 8 9 13 14 15 18 23 23 26 30 31 33 38 43 45 45 45 49 49 51
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . F.1 Shyness and Prevalence F.2 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F.3 Convexity, Complete Metrizability, and Universal Measurability . . . . . . . F.4 Shy Sets and Probes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F.5 Proof of Theorem 17 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F.6 Proof of Theorem 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F.7 Proof of Corollary 18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F.8 General Measures on K . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
2
56 56 58 60 69 78 88 90 91
The Measure and Mismeasure of Fairness
G.1 Extension to Continuous Covariates G.2 A Markov Chain Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.1 Beta distributions and stochastic dominance . . . . . . . . . . . . . . . . . . H.2 Proof of Proposition 19 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.3 Proof of Proposition 20 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 95 98 100 100 103 106
# G Theorem 11 and Related Results
# H Proofs of Propositions 19 and 20
3
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
# 1. Introduction
In banking, criminal justice, medicine, and beyond, consequential decisions are often in- formed by machine learning algorithms (Barocas and Selbst, 2016; Berk, 2012; Chouldechova et al., 2018; Shroff, 2017). As the influence and scope of algorithms increase, academics, policymakers, and journalists have raised concerns that these tools might inadvertently encode and entrench human biases. Such concerns have sparked tremendous interest in developing fair machine-learning algorithms, and, accordingly, a plethora of formal fairness criteria have been proposed in the computer science community (Berk et al., 2021; Carey and Wu, 2022; Chiappa, 2019; Chouldechova, 2017; Chouldechova and Roth, 2020; Cleary, 1968; Corbett-Davies et al., 2017; Coston et al., 2020; Darlington, 1971; Dwork et al., 2012; Galhotra et al., 2022; Hardt et al., 2016; Imai and Jiang, 2020; Imai et al., 2020; Kilbertus et al., 2017; Kleinberg et al., 2017b; Kusner et al., 2017; Loftus et al., 2018; Mhasawade and Chunara, 2021; Nabi and Shpitser, 2018; Wang et al., 2019; Woodworth et al., 2017; Wu et al., 2019; Zafar et al., 2017a,b; Zhang and Bareinboim, 2018; Zhang et al., 2017). Here we synthesize and critically examine the statistical properties of popular formal fairness approaches as well as the consequences of enforcing them. Using both theory and empirical evidence, we argue that these approaches, when used as algorithmic design principles, can often cause more harm than good. In contrast to popular axiomatic approaches to algo- rithmic fairness, we advocate for a consequentialist perspective that directly grapples with the difficult policy trade-offs inherent to many algorithmically guided decisions.
We begin, in Section 2, by proposing a two-part taxonomy of formal fairness definitions. Our first category of definitions encompasses those that consider the effects of decisions on disparities. Imagine, for example, designing an algorithm to guide decisions for college admissions. Under the principle that fair algorithms should have comparable performance across demographic groups (Hardt et al., 2016), one might check that among applicants who were ultimately academically âsuccessfulâ (e.g., who eventually earned a college de- gree, either at the institution in question or elsewhere), the algorithm would recommend admission for an equal proportion of candidates across race groups. Our second category of definitions encompasses those that seek to limit both the direct and indirect effects of oneâs group membership on decisions. Following the principle that decisions should be agnos- tic to legally protected attributes like race and gender (cf. Dwork et al., 2012), one might mandate that these features not be provided to the algorithm. Further, because oneâs race might impact earlier educational opportunities, and hence test scores, one might require that admissions decisions are robust to the effect of race along such causal paths.
These formalizations of fairness have considerable intuitive appeal. It can feel natural to exclude protected characteristics in a drive for equity; and one might understandably interpret disparities in error rates as indicating problems with an algorithmâs design or with the data on which it was trained. However, in Sections 3 and 4, we show that both classes of algorithmic fairness definitions suffer from deep statistical limitations. For example, for natural families of utility functionsâlike those that prefer both higher academic pre- paredness and more student-body diversityâwe prove that common fairness criteria almost always, in a measure theoretic sense, lead to strongly Pareto dominated decision policies.1
1. A policy is strongly Pareto dominated if there is an alternative feasible policy that is preferred under every utility function in the family (cf. Section 4.2).
4
The Measure and Mismeasure of Fairness
In particular, in our running college admissions example, adhering to several of the popular conceptions of fairness we consider would simultaneously result in lower student-body diver- sity and a less academically prepared class, relative to what one could attain by explicitly tailoring admissions policies to achieve desired outcomes. In fact, under one prominent defi- nition of fairness, we prove that the induced policies require simply admitting all applicants with equal probability, irrespective of oneâs academic qualifications or group membership. These formal fairness criteria are thus often at odds with policy goals, and, perversely, can harm the very same groups one ostensibly sought to protect by developing and adopting axiomatic notions of fairness.
How, then, can we ensure algorithms are fair? There are no easy solutions, but we conclude in Section 5 by offering several observations and suggestions for designing more equitable algorithms. Most importantly, we believe it is critical to acknowledge and tackle head-on the substantive trade-offs at the heart of many decision problems. For example, when creating a college admissions policy, one must necessarily make difficult choices that balance competing priorities. Formal fairness axioms are poor tools for engaging with these challenging issues. Our overarching exhortation is thus to recognize algorithms as encoding policy choices, and to accordingly tailor their design.
To summarize, we offer three main contributions. First, we survey the Contributions. fairness literature, describing existing fairness definitions and organizing them into a two- part taxonomy. Our categorization of formal fairness definitions proposed in the computer science literature highlights their connections to influential legal and economic notions of discrimination. Second, we lay out a consequentialist framework for designing equitable al- gorithms. Our framework is motivated by viewing algorithmic fairness as a policy objective rather than as a technical problem. This approach exposes the statistical and normative limitations of many popular formal fairness definitions. Finally, we apply our consequen- tialist framework to develop a positive vision for addressing problems of fairness and equity in algorithm design.
Much of the content we present synthesizes and builds on research that we and our collaborators have conducted over the last several years (Cai et al., 2020; Chohlas-Wood et al., 2023a,b; Corbett-Davies et al., 2017; Koenecke et al., 2023). In particular, we draw heavily on two papers by Corbett-Davies and Goel (2018) and Nilforoshan et al. (2022). In addition to synthesis, we broaden the formal theoretical results presented in this line of work and offer new, concrete illustrations of our theoretical arguments. Some of the results and arguments we present date back five years, and the field of algorithmic fairness has since moved forward in many ways. For example, in the intervening time, there has been increasing recognition of the shortcomings of popular formal fairness definitions (Barocas et al., 2019). Nevertheless, we believe our message is as relevant as ever. For instance, within the research community, new algorithmic fairness definitions are regularly intro- duced that, while different in some respects, frequently suffer from the same statistical and conceptual limitations as the notions we survey here. In the broader world, policymakers, algorithm designers, journalists, and advocates often still evaluate algorithmsâand accord- ingly influence decisionsâby turning to these formal fairness definitions without necessarily appreciating their shortcomings. For example, proposed legislation in Idaho sought to re- quire that pretrial risk assessment algorithms have equal error rates across groups (Idaho H.B. 118, 2019). Although the proposed bill was never passed, it the illustrates the ways
5
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
in which these formal measures have garnered significant attention beyond the academic community.
The call to build equitable algorithms will only grow over time as automated decisions become even more widespread. As such, it is imperative to address limitations in past formulations of fairness, to identify best practices moving forward, and to outline important open research questions. By synthesizing and critically examining recent developments in fair machine learning, we hope to help both researchers and practitioners advance this increasingly influential field.
# 2. Mathematical Definitions of Fairness
We start by assembling and categorizing definitions of algorithmic fairness into a two-part taxonomy: those that seek to limit the effect of decisions on disparities, and those that seek to limit the effect of protected attributes like race or gender on the decisions themselves. We first introduce formal notation and concrete examples of decision problems in which one might seek to apply these fairness definitions, before reviewing prominent examples of both approaches in turn.
# 2.1 Formal setting
Consider a population of individuals with observed covariates X, drawn i.i.d. from a set X â Rn with distribution DX . Further suppose that A â A describes one or more discrete protected attributes, such as race or gender, which can be derived from X (i.e., A = α(X) for some function α). Each individual is subject to a binary decision D â {0, 1}, determined by a (randomized) rule d(x) â [0, 1], where d(x) = Pr(D = 1 | X = x) is the probability of receiving a positive decision, D = 1.2,3 Given a budget b with 0 < b ⤠1, we require the decision rule to satisfy E[D] ⤠b. Finally, we suppose that each individual has some associated binary outcome Y . In some cases, we will be concerned with the causal effect of the decision D on Y , in which case we imagine that there exist two potential outcomes, Y (0) and Y (1), corresponding to what happens to the individual depending on whether they receive a negative or positive decision.4
To make our discussion concrete, we imagine two running examples corresponding to this formal setting: diabetes screening and college admissions. As we discuss in detail below, these two examples differ in the extent to which there is agreement about the ultimate value of different decision policies, which in turn impacts our mathematical analysis. Diabetes is a common and serious health condition that afflicts many American adults. If caught early, it is often possible to avoid some of the most significant consequences of the disease through, for example, changes to oneâs diet and physical routine. A blood test can be used to determine whether an individual has diabetes, but as with many screening tools, there are risks and inconveniences associated with screening (e.g., a patient may need to
2. That is, D = 1UD â¤d(X), where UD is an independent uniform random variable on [0, 1]. 3. By âpositive,â we simply mean the decision D is greater than zero, without ascribing any normative position to the decision. Individuals may or may not have a preferences for âpositiveâ decisions in this sense.
4. As is implicit in our notation, we assume that there are no spillover effects between units (Imbens and Rubin, 2015).
6
The Measure and Mismeasure of Fairness
take time off from work). In particular, if an individual were certain that they did not have diabetes, then they would prefer not to undergo screening. Our goal is to design an equitable screening policy d(x) to determine which patients have (Y = 1) or do not have (Y = 0) diabetes, based on a set of covariates X. For example, following Aggarwal et al. (2022), the screening decision may be based on a patientâs age, body mass index (BMI) and race. (Those authors argue that consideration of race, while controversial, leads to more precise and equitable estimates of diabetes risk, a point we return to in Section 3.3.) We further imagine the budget b equals 1, corresponding to the fact that everyone could be screened in principle.
Our second example concerns college admissions. Here, the population of interest is applicants to a particular college, and the decision D is the admissions committeeâs binary admissions decision. To simplify our exposition, we assume all admitted students attend the school. In this setting, the covariates X may, for example, consist of an applicantâs test score and race A â {a0, a1}, and Y is a binary variable that indicates college graduation (i.e., degree attainment). In contrast to our diabetes example, here we imagine that the decision itself may affect the outcomes. Specifically, Y (1) and Y (0) describe whether an applicant would attain a college degree if admitted to or if rejected from the school we consider, respectively. Note that Y (0) is not necessarily zero, as a rejected applicant may attendâand graduate fromâa different university. Further, in this case we set the budget b to be less than one to reflect the fact that the admissions committee has limited resources and is unable to admit every candidate.
As mentioned above, a key distinction between these two examples is the extent to which stakeholders may agree on the value of different potential decision policies. For example, in college admissions, there may be significant disagreement on how to balance competing priorities, such as academic preparedness and class diversity.5 Admissions committees may seek to increase both dimensions, but there is often an inherent trade-off, particularly since there is a limit on the number of students that can be admitted by the college (i.e., b < 1). Our diabetes example, in contrast, reflects a setting where there is ostensibly broader agreement on the value of different decision policies. Indeed, since there is effectively no limit on the number of diabetes tests that can be administered (i.e., b = 1), we can model the value of a decision policy as the sum of each individualâs value for being screened.6 In Sections 3 and 4, we in turn examine the structure of equitable decision making in the absence and presence of such trade-offs. First, though, we introduce several formal fairness criteria.
5. In some jurisdictions, explicit considerations of racial diversity may be prohibited. For instance, a recent U.S. Supreme Court case bars colleges from explicitly considering race in admissions; however, colleges may consider âan applicantâs discussion of how race affected the applicantâs lifeâ (SFFA v. Harvard, 2023). U.S. colleges may also consider other forms of diversity, such as economic or geographic diversity. 6. In the case of infectious diseasesâwhich involve greater externalitiesâthere is again often disagreement about the value of different screening and vaccination policies. Paulus and Kent (2020) similarly draw a distinction between polar settings (in which parties have competing interests, like our admissions example) and non-polar settings (where there is broad alignment, as in our diabetes example).
7
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
# 2.2 Limiting the Effect of Decisions on Disparities
A popular class of fairness definitions requires that error rates (e.g., false positive and false negative rates) are equal across protected groups (Hardt et al., 2016).7 We refer to these definitions as examples of âclassification parity,â meaning that some given measure of classification error is equal across groups defined by attributes such as race and gender. In particular, we include in this definition any measure that can be computed from the two-by-two confusion matrix tabulating the joint distribution of decisions D and outcomes Y for a group. Berk et al. (2021) enumerate seven such statistics, including false positive rate, false negative rate, precision, recall, and the proportion of decisions that are positive. The proportion of positive decisions is not, strictly speaking, a measure of âerrorâ, but we nonetheless include it under classification parity since it can be computed from a confusion matrix. We also include the area under the ROC curve (AUC), a popular measure among practitioners examining the fairness of algorithms (Skeem and Lowenkamp, 2016).
Two of the above measuresâthe proportion of decisions that are positive, and the false positive rateâhave received considerable attention in the machine learning commu- nity (Agarwal et al., 2018; Blum and Stangl, 2019; Calders and Verwer, 2010; Chouldechova, 2017; Edwards and Storkey, 2016; Feldman et al., 2015; Hardt et al., 2016; Jung et al., 2020a; Kamiran et al., 2013; Pedreshi et al., 2008; Zafar et al., 2017a,c; Zemel et al., 2013).
Definition 1 We say that demographic parity holds when8
D â¥â¥ A. (1)
Definition 2 We say that equalized false positive rates holds when
D â¥â¥ A | Y = 0. (2)
In our running diabetes example, demographic parity means that the proportion of patients who are screened for the disease is equal across race groups. Similarly, in our college admissions example, demographic parity means an equal proportion of students is admitted across race groups. Equalized false positive rates, in our diabetes example, means that among individuals who in reality do not have diabetesâand thus for whom screening, ex post, would not have been beneficialâscreening rates are equal across race groups.9
Causal analogues of these definitions have also recently been proposed (Coston et al., 2020; Imai and Jiang, 2020; Imai et al., 2020; Mishler et al., 2021), which require various conditional independence conditions to hold between the potential outcomes, protected at- tributes, and decisions.10 Below we list three representative examples of this class of fairness
7. Some work relaxes strict equality of error rates or other metrics to requiring only that the difference be at most some fixed ϵ (e.g., Nabi and Shpitser, 2018). For ease of exposition, we consider strict equality throughout, though we emphasize that the spirit of the critique we develop applies also in cases where fairness constraints are approximate, rather than exact.
8. We use the notation X â¥â¥ Y throughout to mean that the random variables X and Y are independent. 9. In our college admissions example, the decision D impacts the outcome Y . One could, in theory, apply the definition of error rate parity above to that case by recognizing that Y = Y (D). However, that interpretation does not seem aligned with the original intent of the definition. We instead discuss the admissions example in the context of the explicitly causal definitions of fairness below.
10. In the literature on causal fairness, there is at times ambiguity between âpredictionsâ ËY â {0, 1} of Y and âdecisionsâ D â {0, 1}. Following past work (e.g., Corbett-Davies et al., 2017; Kusner et al., 2017;
8
The Measure and Mismeasure of Fairness
definitions: counterfactual predictive parity (Coston et al., 2020), counterfactual equalized odds (Coston et al., 2020; Mishler et al., 2021), and conditional principal fairness (Imai and Jiang, 2020).11
Definition 3 We say that counterfactual predictive parity holds when
Y (1) â¥â¥ A | D = 0. (3)
In our college admissions example, counterfactual predictive parity means that among rejected applicants, the proportion who would have attained a college degree, had they been accepted, is equal across race groups. (For our diabetes example, because the screening decision does not affect whether a patient actually has diabetes, Y (0) = Y (1) = Y , and so counterfactual predictive parity, as well as the causal definitions below, reduce to their non-causal analogues).
Definition 4 We say that counterfactual equalized odds holds when
D â¥â¥ A | Y (1). (4)
In our running college admissions example, counterfactual equalized odds is satisfied when two conditions hold: (1) among applicants who would graduate if admitted (i.e., Y (1) = 1), students are admitted at the same rate across race groups; and (2) among applicants who would not graduate if admitted (i.e., Y (1) = 0), students are again admitted at the same rate across race groups.
Definition 5 We say that conditional principal fairness holds when
D â¥â¥ A | Y (0), Y (1), W, (5)
where, for some function Ï on X , W = Ï(X) describes a reduced set of the covariates X. When W is constant (or, equivalently, when we do not condition on W ), this condition is called principal fairness.
In the college admissions example, conditional principal fairness means that âsimilarâ applicantsâwhere similarity is defined by the potential outcomes and covariates W âare admitted at the same rate across race groups.
# 2.3 Limiting the Effect of Attributes on Decisions
An alternative framework for understanding fairness considers the effects of protected at- tributes on decisions. This approach can be understood as codifying the legal notion of disparate treatment (Goel et al., 2017; Zafar et al., 2017a)âwhich we discuss further in Section 5.1. Perhaps the simplest way to limit the effects of protected attributes on de- cisions is to require that the decisions do not explicitly depend on them, what some call âfairness through unawarenessâ (cf. Dwork et al., 2012).
Wang et al., 2019), here we focus exclusively on decisions, with predictions implicitly impacting decisions but not explicitly appearing in our definitions.
11. Our subsequent analytical results extend in a straightforward manner to structurally similar variants of these definitions (e.g., requiring Y (0) â¥â¥ A | D = 1 or D â¥â¥ A | Y (0), variants of counterfactual predictive parity and counterfactual equalized odds, respectively).
9
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Definition 6 Suppose that the covariates can be partitioned into the protected attributes and all other covariates, i.e., that X = Xu à A, where Xu consists of âunprotectedâ attributes. Then, we say that blinding holds when, for all a, aâ² â A and xu â Xu,
d(xu, a) = d(xu, aâ²).
(6)
In our running diabetes example, blinding holds when the screening decision depends solely on factors like age and BMI, and, in particular, does not depend on the patientâs race. We similarly say college admissions decisions satisfy blinding when the decisions depend on factors like test scores and extracurricular activities, but not race.
Blinding is closely tied to the notion of calibration, the requirement that, conditional on the estimated probability of some outcome (such as graduation from college or having diabetes), the outcome is independent of group membership. For example, among people with an estimated diabetes risk of 1%, calibration would require that the proportion of individuals who actually have diabetes be the same across groups. Many authors treat calibration as a kind of fairness constraintâin particular, to ensure that the meaning of estimated risks do not differ across groupsâand it has received considerable attention in the fairness literature (e.g., H´ebert-Johnson et al., 2018; Rothblum and Yona, 2022). We note, though, that miscalibration is equivalent to blindness in practice. In particular, when estimation error is small, risk estimates that are allowed to depend on group membership are calibrated; conversely, risk estimates that are blind to group membership typically are miscalibratedâan empirical phenomenon shown and discussed in Figure 3 below. Because of this close relationship, we do not treat calibration as a separate fairness constraint, but we do discuss calibration and its relationship to blinding in detail in Sections 3.3.1 and 5.2. In contrast to blindingâin which race and other protected attributes are barred from being an explicit input to a decision ruleâthe causal versions of this idea consider both the direct and indirect effects of protected attributes on decisions (Kilbertus et al., 2017; Kusner et al., 2017; Mhasawade and Chunara, 2021; Nabi and Shpitser, 2018; Wang et al., 2019; Wu et al., 2019; Zhang and Bareinboim, 2018; Zhang et al., 2017). For example, even if decisions only directly depend on test scores, race may indirectly impact decisions through its effects on educational opportunities, which in turn influence test scores. In this vein, a decision rule is deemed fair if, at a high level, decisions for individuals are the same in â(a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic groupâ (Kusner et al., 2017).12 This idea can be formalized by requiring that decisions remain the same in expectation even if oneâs protected characteristics are counterfactually altered, a condition known as counterfactual fairness (Kusner et al., 2017).
12. Conceptualizing a general causal effect of an immutable characteristic such as race or gender is rife with challenges, the greatest of which is expressed by the mantra, âno causation without manipulationâ (Holland, 1986). In particular, analyzing race as a causal treatment requires one to specify what exactly is meant by âchanging an individualâs raceâ from, for example, White to Black (Gaebler et al., 2022; Hu and Kohler-Hausmann, 2020). Such difficulties can sometimes be addressed by considering a change in the perception of race by a decision maker (Greiner and Rubin, 2011)âfor instance, by changing the name listed on an employment application (Bertrand and Mullainathan, 2004), or by masking an individualâs appearance (Chohlas-Wood et al., 2021; Goldin and Rouse, 2000; Grogger and Ridgeway, 2006; Pierson et al., 2020).
10
# The Measure and Mismeasure of Fairness
A Race E Education T Test Score D Decision Y Graduation M Preparation
Figure 1: A causal DAG illustrating a hypothetical process for college admissions. Under path-specific fairness, one may require, for example, that race does not affect decisions along the path highlighted in red.
Definition 7 Counterfactual fairness holds when
E[D(aâ²) | X] = E[D | X], (7)
where D(aâ²) denotes the decision when oneâs protected attributes are counterfactually altered to be any aâ² â A.
In our running college admissions example, this means that for each group of observationally identical applicants (i.e., those with the same values of X, meaning identical race and test score), the proportion of students who are actually admitted is the same as the proportion who would be admitted if their race were counterfactually altered.
Counterfactual fairness aims to limit all direct and indirect effects of protected traits on decisions. In a generalization of this criterionâtermed path-specific fairness (Chiappa, 2019; Nabi and Shpitser, 2018; Wu et al., 2019; Zhang et al., 2017)âone allows protected traits to influence decisions along certain causal paths but not others. For example, one may wish to allow the direct consideration of race by an admissions committee to implement an affirmative action policy, while also guarding against any indirect influence of race on admissions decisions that may stem from cultural biases in standardized tests (Williams, 1983).
The formal definition of path-specific fairness requires specifying a causal DAG de- scribing relationships between attributes (both observed covariates and latent variables), decisions, and outcomes. In our running example of college admissions, we imagine that each individualâs observed covariates are the result of the process illustrated by the causal DAG in Figure 1. In this graph, an applicantâs race A influences the educational opportu- nities E available to them prior to college; and educational opportunities in turn influence an applicantâs level of college preparation, M , as well as their score on a standardized ad- missions test, T , such as the SAT. We assume the admissions committee only observes an applicantâs race and test score so that X = (A, T ), and makes their decision D based on
11
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
these attributes. Finally, whether or not an admitted student subsequently graduates (from any college), Y , is a function of both their preparation and whether they were admitted.13 To formalize path-specific fairness, we start by defining, for the decision D, path-specific counterfactuals, a general concept in causal DAGs (cf. Pearl, 2001). Suppose G = (V, U, F) is a causal model with nodes V, exogenous variables U, and structural equations F that define the value at each node Vj as a function of its parents â(Vj) and its associated exogenous variable Uj. (See, for example, Pearl (2009a) for further details on causal DAGs.) Let V1, . . . , Vm be a topological ordering of the nodes, meaning that â(Vj) â {V1, . . . , Vjâ1} (i.e., the parents of each node appear in the ordering before the node itself). Let Î denote a collection of paths from node A to D. Now, for two possible values a and aâ² for the variable A, the path-specific counterfactuals DÎ ,a,aâ² for the decision D are generated by traversing the list of nodes in topological order, propagating counterfactual values obtained by setting A = aâ² along paths in Î , and otherwise propagating values obtained by setting A = a. (In Algorithm 1 in the Appendix, we formally define path-specific counterfactuals for an arbitrary nodeâor collection of nodesâin the DAG.)
To see this idea in action, we work out an illustrative example, computing path-specific counterfactuals for the decision D along the single path Î = {A â E â T â D} linking race to the admissions committeeâs decision through test score, highlighted in red in Figure 1. We describe the distribution of DÎ ,a,aâ² generatively, formally showing how to produce a draw from this distribution. To start, we draw values U â D of the exogenous variables. Now, the first column in Table 1 corresponds to draws V â for each node V in the DAG, where we set A to a, and then propagate that value as usual. The second column of path-specific counterfactuals, where we set A to aâ², and then corresponds to draws V propagate the counterfactuals only along the path A â E â T â D. In particular, the (since the edge E â T is on value for the test score T the specified path) and the value of M â (since the edge M â T is not on the path). As a result of this process, we obtain a draw D
Path-specific fairness formalizes the intuition that the influence of a sensitive attribute on a downstream decision may, in some circumstances, be considered âlegitimateâ (i.e., it may be acceptable for the attribute to affect decisions along certain paths in the DAG). For instance, an admissions committee may believe that the effect of race A on admissions decisions D which passes through college preparation M is legitimate, whereas the effect of race along the path A â E â T â D, which may reflect access to test prep or cultural biases of the tests, rather than actual academic preparedness, is illegitimate. In that case, the admissions committee may seek to ensure that the proportion of applicants they admit from a certain race group remains unchanged if one were to counterfactually alter the race of those individuals along the path Î = {A â E â T â D}.
Definition 8 Let Î be a collection of paths, and, for some function Ï on X , let W = Ï(X) describe a reduced set of the covariates X. Path-specific fairness, also called Î -fairness, holds when, for any aâ² â A,
E[DÎ ,A,aâ² | W ] = E[D | W ]. (8)
13. In practice, the racial composition of an admitted class may itself influence degree attainment, if, for example, diversity provides a net benefit to students (Page, 2007). Here, for simplicity, we avoid consid- eration of such peer effects.
12
# The Measure and Mismeasure of Fairness
Aâ = a Eâ = fE(Aâ, U â E) M â = fM (Eâ, U â M ) T â = fT (Eâ, M â, U â T ) Dâ = fD(Aâ, T â, U â D) A E M T D â â â â = aâ² â â
Table 1: Computing path-specific counterfactuals for the DAG in Figure 1. The first column corresponds to draws V â for each node V , where we set A to a, and then propagate that value as usual. The second column corresponds to draws V of path-specific counterfactuals, where we set A to aâ², and then propagate the counterfactuals only along the path A â E â T â D.
In the definition above, rather than a particular counterfactual level a, the baseline level of the path-specific effect is A, i.e., an individualâs actual (non-counterfactually altered) group membership (e.g., their actual race). We have implicitly assumed that the decision variable D is a descendant of the covariates X. In particular, without loss of generality, we assume D is defined by the structural equation fD(x, uD) = 1uDâ¤d(x), where the exogenous variable UD â¼ Unif(0, 1), so that Pr(D = 1 | X = x) = d(x). If Î is the set of all paths from A to D, then DÎ ,A,aâ² = D(aâ²), in which case, for W = X, path-specific fairness is the same as counterfactual fairness.
# 3. Equitable Decisions in the Absence of Externalities
In many decision-making settings, the decision maker is free to make the optimal decision for each individual, without consideration of spillover effects or other externalities. For instance, in our diabetes screening example, one could, in principle, screen all patients if that course of action were medically advisable.
To investigate notions of fairness in these settings, we first introduce a framework for utilitarian decision analysis. Specifically, we consider in this section situations in which there is broad agreement on the utility of different potential courses of action. (In the subsequent section, we consider cases where stakeholders disagree on the precise form of the utility.) In this setting, âthreshold rulesâ maximize utility. We then describe the statistical phenomenon of inframarginality, a property that is endemic to fairness definitions that seek to enforce some form of classification parity. In particular, we discuss, both informally and mathematically, why inframarginality almost surelyâin a measure theoretic senseârenders optimal decision making incompatible with classification parity. Finally, we discuss blinding. In parallel to our discussion of classification parity, we see that in many important settings, the information loss associated with, e.g., removing protected information from a predictive model, results in less efficient decision making without compensatory benefits. Moreover, in general, we see that the more stringent the standard of maskingâe.g., removing not only
13
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
direct but also indirect effects of protected attributesâthe greater the potential harm that results from enforcing it.
# 3.1 Utility, Risk, and Threshold Rules
A natural way to analyze a decision, such as deciding whether an individual should be screened for diabetes, is to consider the costs and benefits of various possible outcomes under different courses of action. For instance, a patient screened for diabetes who does not have the disease still has to bear the risks, discomfort, and inconvenience associated with the blood test itself, while a patient who is not screened but does in fact have the disease loses out on the opportunity to start treatment.
In general, the benefit of making decision D = 1 over D = 0 when the outcome Y equals y can be represented by v(y). For instance, in our diabetes example, v(1) represents the net benefit of screening over not screening when the patient has diabetes; and âv(0) is the net cost of screening when the patient does not have diabetes, including both monetary and non-monetary costs, such as discomfort and loss of time.14 Let r(x) = Pr(Y = 1 | X = x) be the risk of Y equalling 1 when X = x. Then the expected benefit of making decision D = 1 over D = 0 for an individual with covariates X = x is
u(x) = E[v(Y ) | X = x] = r(x) · v(1) + [1 â r(x)] · v(0).
Here, for ease of interpretation, we restrict our utility to be of the form u(x) = E[v(Y ) | X = x] for some function v, and we also assume there is no budget constraint (i.e., b = 1). In Section 4, we allow the utility u(x) to be an arbitrary function on X and consider b < 1, which induces the trade-offs in decisions that are central to our later discussion.
The aggregate expected utility of a decision policy d(x)ârelative to the baseline policy of taking action D = 0 for all individualsâis then given by u(d) = E[d(X) · u(X)]. We say a decision policy dâ(x) is utility-maximizing if
u(dâ) = max d u(d).
It is better, in expectation, for an individual with covariates X = x to take action D = 1 instead of D = 0 when u(x) > 0; that is, when15
r(x) > v(0) v(0) â v(1) . (9)
Thus, the decision with the maximum utility can be determined by comparing an individ- ualâs risk against a particular risk threshold t, defined by the right-hand side of Eq. (9).
14. For ease of exposition, we assume that costs and benefits are identical across individuals; in reality, these could vary, e.g., depending on age. When utilities vary by person, the optimal decision rule is to screen only those with positive individual utility, in line with our subsequent discussion.
15. We assume, without loss of generality, that v(1) > v(0). If v(1) < v(0), we can take Y â² = 1 â Y as our outcome of interest; relative to Y â², the inequality will be reversed. If v(1) = v(0), then the outcome is irrelevant. In this degenerate case, the higher utility decision depends on the sign of v(1) alone, and not the risk.
14
The Measure and Mismeasure of Fairness
We refer to this kind of policy as a threshold policy. In particular, we see that a utility- maximizing decision for each individualâi.e., d(x) = 1 if r(x) > t and d(x) = 0 if r(x) ⤠tâ is also a decision policy that maximizes aggregate utility, so there is no conflict between doing what is best from each individual personâs perspective and what is best for the pop- ulation as a whole.
While our framing in terms of expected utility is suitably general, threshold policies can be simpler to interpret when we reparameterize in terms of more familiar quantities. In the diabetes screening example, if the patient does not have diabetes, the cost of action D = 1 over D = 0 is âv(0) = cTest, i.e., the cost (monetary and non-monetary) of the test. If the patient does have diabetes, the benefit of D = 1 over D = 0 is v(1) = bTreat â cTest, i.e., the benefit of treatment minus the cost of the test. Rewriting Eq. (9) in terms of these quantities gives
t = cTest bTreat .
In particular, if the benefit of early treatment of diabetes is 50 times greater than the cost of performing the diagnostic test, one would ideally screen patients who have at least a 2% chance of developing the disease.
Threshold rules are a natural approach to decision making in a variety of settings. In our running medical example, a threshold rule corresponds to screening patients with a sufficiently high risk of having diabetes. A threshold ruleâwith the optimally chosen thresholdâensures that only the patients at highest risk of having diabetes take the test, thereby optimally balancing the costs and benefits of screening. Indeed, in many medical examples, from diagnosis to treatment, there are no significant externalities. As a result, deviating from utility-maximizing threshold policies can only force individuals to experience greater costsâin the form of unnecessary tests or untreated illnessâin expectation, without compensatory benefits. We return to the problem of optimal (and equitable) decision- making in the presence of externalities in Section 4.
# 3.2 The Problem of Inframarginality
In the setting that we have been considering, threshold policies guarantee optimal choices are made for each individual. However, as we now show, threshold policies in general violate various versions of classification parity, such as demographic parity and equalized false positive rates. This incompatibility highlights a critical limitation of classification parity as a fairness criterion, as enforcing the definition often requires making decisions that harm individuals without any clear compensating benefits.
To help build intuition for this phenomenon, we consider the empirical distribution of diabetes risk among White and Asian patients. Following Aggarwal et al. (2022), we base our risk estimates on age, BMI, and race, using a sample of approximately 15,000 U.S. adults aged 18â70 interviewed as part of the National Health and Nutrition Survey (NHANES; Centers for Disease Control and Prevention, 2011-2018). The resulting risk distributions are shown in the left-hand panel of Figure 2. The dashed vertical lines show the group means, and indicate that the incidence of diabetes is higher among Asian Americans (11%)
15
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Density Distribution of risk Conditional distribution of risk T 0% 5% 20% Probability of having diabetes [ | White [ | Asian 0% T T 10% 15% 20%
Figure 2: A graphical illustration of the incompatibility between threshold policies and classification parity, based on the National Health and Nutrition Survey. Left: The distribution of diabetes risk for White Americans and Asian Americans, with the dashed vertical lines corresponding to the overall incidence rate within each group. At a screening threshold of 1.5% (indicated by the solid black line), the screening rate for Asian Americans is higher than for White Americans, violating demographic parity. Right: The distribution of diabetes risk among individuals who do not have diabetes. Since the proportion of Asian Americans above the screening threshold is greater than the proportion of White Americans above the threshold, the false positive rate for Asian Americans is greater than the false positive rate for White Americans.
than among White Americans (9%).16 This difference in base rates is also reflected in the heavier tail of the risk distribution among Asian individuals.
Drawing on recommendations from the United States Preventative Screening Task Force, Aggarwal et al. (2022) suggest screening patients with at least a 1.5% risk of diabetes, irrespective of race. We depict this risk threshold by the solid black vertical line in the plot. Based on that recommendation, 81% of Asian Americans and 69% of White Americans are to the right of the threshold and should be screenedâviolating demographic parity. If, hypothetically, we were to raise the screening threshold to 2.2% for Asian Americans and lower the threshold to 1% for White Americans, 75% of people in both groups would be screened, satisfying demographic parity.17 The cost of doing so, however, would be failing
16. The precise shapes of the risk distributions depend on the set of covariates used to estimate outcomes, but the means of the distributions correspond to the overall incidence of diabetes in each group, and, in particular, are unaffected by the choice of covariates. It is thus necessarily the case that the risk distributions will differ across groups in this example, regardless of which covariates are used.
17. Corbett-Davies et al. (2017) show that group-specific threshold policies are utility-maximizing under the constraint of satisfying various notions of classification parity, including demographic parity and equality of false positive rates.
16
The Measure and Mismeasure of Fairness
to screen some Asian Americans who have a relatively high risk of diabetes, and subjecting some relatively low-risk White Americans to a procedure that is medically inadvisable given their low likelihood of having diabetes. In an effort to satisfy demographic parity, we would have harmed members from both groups.
This example illustrates a similar incompatibility between threshold policies and equal- ized false positive rates. In our setting, the false positive rate for a group is the screening rate among those in the group who do not in reality have diabetes. To visualize the race-specific false positive rates, the right-hand panel of Figure 2 shows the distribution of diabetes risk among those individuals who do not have diabetes. (Because the overall prevalence of diabetes is low, the conditional distribution displayed in the right-hand panel is nearly identical to the unconditional distribution displayed in the left-hand panel.) The false pos- itive rate for each group is the proportion of people in the group falling to the right of the 1.5% screening threshold. In this case, the false positive rate is 79% for Asian Americans and 67% for White Americansâviolating equalized false positive rates. As before, we could alter the screening guidelines to equalize false positive rates, but doing so requires deviating from our threshold policy, in which case we would end up screening some individuals who are relatively low-risk and not screening others who are relatively high-risk.
In this example, the incompatibility between threshold policies and classification parity stems from the fact that the risk distributions differ across groups. This general phenomenon is known as the problem of inframarginality in the economics and statistics literature, and has long been known to plague tests of discrimination in human decisions (Anwar and Fang, 2006; Ayres, 2002; Carr and Megbolugbe, 1993; Engel and Tillyer, 2008; Galster, 1993; Knowles et al., 2001; Pierson et al., 2018; Simoiu et al., 2017). Common legal and economic understandings of fairness are concerned with what happens at the margin (e.g., whether the same standard is applied to all individuals)âa point we return to in Section 5. What happens at the margin also determines whether decisions maximize social welfare, with the optimal threshold set at the point where the marginal benefits equal marginal costs. However, popular error metrics assess behavior away from the margin, hence they are called infra-marginal statistics. As a result, when risk distributions differ, standard error metrics are often poor proxies for individual equity or social well-being.
In general, we expect any two non-random subgroups of a population to differ on a variety of social and economic dimensions, which in turn is likely to yield risk distributions that differ across groups. As a result, as our running diabetes example shows, the optimal decision policyâwhich maximizes each patientâs own well-beingâwill likely violate various measures of classification parity. Thus, to the extent that formal measures of fairness are violated, that tells us more about the shapes of the risk distributions than about the quality of decisions or the utility delivered to members of any group. This intuition can be made precise, in the sense that for almost every risk distribution, the optimal decision policy violates the various notions of classification parity considered here.
The notion of almost every distribution that we use here was formalized by C (1972), Hunt et al. (1992), Anderson and 2005, for a review). Suppose, for a moment , that combinations of covariates and iristensen Zame (2001), and others (cf. Ott and Yorke, outcomes take values in a finite set of size m. Then the space of joint distributions on covariates and outcomes can be represented by the unit (mâ 1)-simplex: A"~! = {p ⬠R⢠| pi 0 and 7, pi = 1}. Since Aâ! is a subset of an (m â 1)-dimensional hyperplane in
17
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Rm, it inherits the usual Lebesgue measure on Rmâ1. In this finite-dimensional setting, almost every distribution means a subset of distributions that has full Lebesgue measure on the simplex. Given a property that holds for almost every distribution in this sense, that property holds almost surely under any probability distribution on the space of distributions that is described by a density on the simplex. We use a generalization of this basic idea that extends to infinite-dimensional spaces, allowing us to consider distributions with arbitrary support. (See the Appendix for further details.)
Theorem 9 Let t be the optimal decision threshold, as in Eq. (9). If 0 < t < 1, then for almost every collection of group-specific risk distributions which have densities on [0, 1], no utility-maximizing decision policy satisfies demographic parity or equalized false positive rates.
The proof of Theorem 9, which formalizes the informal discussion above, is given in Appendix F.6. At a high level, the constraints of classification parity are sensitive to even small perturbations in the underlying risk distributions. As a result, any particular collec- tion of risk distributions is unlikely to satisfy the constraints. For simplicity, we have been considering settings in which the decision D does not impact the outcome Y . However, this basic style of argument extends to causal settings, showing that threshold policies are almost surely, in the measure theoretic sense, incompatible with counterfactual predictive parity, counterfactual equalized odds, and conditional principal fairnessâdefinitions of fair- ness that we consider in depth in Section 4, in the more complex setting of having a budget b < 1.
# 3.3 The Problem with Fairness through Unawareness
We now consider notions of fairness, both causal and non-causal, that aim to limit the effects of attributes on decisions. As above, we show the inherent incompatibility of these definitions with optimal decision making. We note, though, that while blinding can lead to suboptimal decisionsâand, in some cases, harm marginalized groupsâthe legal, political, and social benefits of, for example, race-blind and gender-blind algorithms may outweigh their costs in certain instances (CerdeËna et al., 2020; Coots et al., 2023).
# 3.3.1 Blinding
A common starting point for designing an ostensibly fair algorithm is to exclude protected characteristics from the statistical model. This strategy ensures that decisions have no explicit dependence on group membership. For instance, in the case of estimating diabetes risk, one could use only BMI and ageârather than including race, as we did above. However, excluding race from models of diabetes risk can ultimately harm both White and Asian patients.
In Figure 3a, we compare the actual diabetes rate to estimated diabetes risk result- ing from the race-blind risk model. Aggarwal et al. (2022) showed that Asian patients have higher incidence of diabetes than White patients with comparable age and BMI. As a result, the race-blind model systematically underestimates risk for Asian patients and systematically overestimates risk for White individuals. In particular, applying a nominal 1.5% screening threshold under the race-blind model amounts to effectively applying a 1%
18
# The Measure and Mismeasure of Fairness
(a) Diabetes (b) Recidivism
Figure 3: Calibration plots showing the effect of removing protected attributes from risk models when estimating the risk of diabetes (left) and recidivism (right). Be- cause Asian patients with the same BMI and age have a higher rate of diabetes, the race-blind model underestimates their risk of having diabetes. Similarly, be- cause women reoffend at lower rates than men with similar criminal histories, the gender-blind COMPAS score overstates the recidivism risk for women.
screening threshold to White patients and a 3% screening threshold to Asian patients. Thus, by using race-blind risk scores, we subject relatively low-risk White patients to screening, and fail to screen Asian patients who have a relatively high risk for having diabetes. A race- aware model would ensure that nominal risk thresholds correspond to observed incidence rates across race groups.
This phenomenonâwhich we call miscalibration across subgroupsâis not unique to di- abetes screening. Consider, for instance, the case of pretrial recidivism predictions. Shortly after an individual is arrested in the United States, a judge must often determine conditions of release pending future court proceedings. In many jurisdictions across the country, these pretrial decisions are informed by statistical risk estimates of the likelihood the individual would be arrested or convicted of a future crime if released. After adjusting for factors such as criminal history, age, and substance use, women have been found to reoffend less often than men in many jurisdictions (DeMichele et al., 2018; Skeem et al., 2016). Consequently, gender-blind risk assessments are miscalibrated, meaning that they tend to overstate the recidivism risk of women and understate the recidivism risk of men.
Figure 3b illustrates this point, plotting the observed recidivism rate for men and women in Broward County, Florida as a function of their gender-blind COMPAS risk scoresâ a commonly used risk assessment tool (Bao et al., 2021). In particular, women with a COMPAS score of seven recidivate about 55% of the time, whereas men with the same score recidivate about 65% of the time. Said differently, women with a score of seven
19
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
recidivate approximately as often as men with a score of five, and this two-point differential persists across the range of scores. By acknowledging the predictive value of gender in this setting, one could create a decision rule that detains fewer people (particularly women) while achieving the same public safety benefits. Conversely, by ignoring this information and basing decisions solely on the gender-blind risk assessments, one would effectively be subjecting women to a more stringent risk standardâand potentially harsher penaltiesâ than men.
As in the case of classification parity, one cannot typically remove protected attributes from the risk predictions without decreasing utility (cf. Manski et al., 2022); however, the reduction in utility is not always as large as one might expect (Coots et al., 2023). In concrete terms, in our running diabetes example, basing decisions on race-blind risk estimates necessarily means screening some patients who would have preferred not to be screened had they been given race-aware risk estimates, and, conversely, not screening some patients who would have preferred to be screened had they been given the more complete estimates. We state this result formally below.
Theorem 10 Suppose 0 < t < 1, where t is the optimal decision threshold on the risk scale, as in Eq. (9). Let Ï : Xu à A â Xu denote restriction to the unprotected covariates. Let Ï(x) = Pr(Y = 1 | Ï(X) = Ï(x)) denote the risk estimated using the blinded covariates. Suppose that r(x) and Ï(x) have densities on [0, 1] that are positive in a neighborhood of t. Further suppose that there exists ϵ > 0 such that the conditional variance Var(r(X) | Ï(X)) > ϵ a.s., where r(x) is the risk estimated from the full set of covariates. Then no blind policy is utility-maximizing.
The proof of Theorem 10 is given in Appendix D. In short, when race, gender, or other protected traits add predictive valueâa condition codified in our assumption that the conditional variance be greater than ϵâexcluding these attributes will in general decrease utility, both for individuals and in the aggregate.
Basing decisions on blinded risk scores can harm individuals and communities, for exam- ple by failing to flag relatively high-risk Asian patients for diabetes screening. But it is also important to consider potential harms stemming from the use of race- and gender-specific risk tools. In medicine, for instance, one might worry that race-specific risk assessments could encourage doctors and the public-at-large to advance spurious and pernicious argu- ments about inherent differences between race groups. In reality, the differences in diabetes risk we see are likely due to a complex mix of factors, both environmental and genetic, and should not be misinterpreted as indicating any causal effects of race. Indeed, even âraceâ itself is a thorny, socially influenced, concept, that elides easy definition. Similarly, the use of gender-specific recidivism estimates could reduce trust in the criminal justice system, giving the impression that individuals are held to different standards based on their gender. (Though, as we have seen above, blinded risk assessments can likewiseâand perhaps more persuasivelyâbe said to subject individuals to different standards based on their race and gender.) In some circumstances, race- and gender-specific risk estimates are even prohib- ited by lawâa topic we return to in Section 5.1. For these reasons, risk assessments in medicine, criminal justice, and beyond have generally avoided using race, gender, and other sensitive demographic attributes. Ultimately, when constructing risk assessment tools, it is
20
The Measure and Mismeasure of Fairness
important to acknowledge and carefully balance both the costs and benefits of blinding in any given circumstance.
# 3.3.2 Counterfactual and Path-Specific Fairness
As discussed in Section 2, counterfactual and path-specific fairness are generalizations of simple blinding that attempt to account for both the direct and indirect effects of protected attributes on decisions. Because the constraints are more stringent, the resulting decrease in utility is proportionally greater. In particular, in some common settings, path-specific fairness with W = X constrains decisions so severely that the only allowable policies are constant (i.e., d(x1) = d(x2) for all x1, x2 â X ). For instance, in our running admissions example, path-specific fairness requires admitting all applicants with the same probability, irrespective of academic preparation or group membership.
To build intuition for this result, we sketch the argument for a finite covariate space X . Given a policy d that satisfies path-specific fairness, select xâ â arg maxxâX d(x). By the definition of path-specific fairness, for any a â A,
d(a*) = E[DnAa | X = 2°] = > d(x) -Pr(XtAaa =x | X = 2"). (10) xea~t(a)
That is, the probability of an individual with covariates xâ receiving a positive decision must be the average probability of the individuals with covariates x in group a receiving a positive decision, weighted by the probability that an individual with covariates xâ in the real world would have covariates x counterfactually.
Next, we suppose that there exists an aâ² â A such that Pr(XÎ ,A,aâ² = x | X = xâ) > 0 for all x â αâ1(aâ²). In this case, because d(x) ⤠d(xâ) for all x â X , Eq. (10) shows that in fact d(x) = d(xâ) for all x â αâ1(aâ²).
Now, let xâ² be arbitrary. Again, by the definition of path-specific fairness, we have that
d(zâ) = E[Dpaa' | X =2"] = > d(z)-Pr(Xpaa =2|X =2') xeaâ*(aâ) = YP d(a*)-Pr(Xnaa =2| X =<"), xeaâ*(aâ) = d(c"),
where we use in the third equality the fact d(x) = d(xâ) for all x â αâ1(aâ²), and in the final equality the fact that XÎ ,A,aâ² is supported on αâ1(aâ²).
Theorem 11 formalizes and extends this argument to more general settings, where Pr(XÎ ,A,aâ² = x | X = xâ) is not necessarily positive for all x â αâ1(aâ²). The proof of Theorem 11 is in the Appendix, along with extensions to continuous covariate spaces and a more complete characterization of Î -fair policies for finite X .
Theorem 11 Suppose X is finite and Pr(X = x) > 0 for all x â X . Suppose Z = ζ(X) is a random variable such that:
21
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
1. Z = ZÎ ,A,aâ² for all aâ² â A,
2. Pr(XÎ ,A,aâ² = xâ² | X = x) > 0 for all aâ² â A such that α(xâ²) = aâ², α(x) ̸= aâ², and x, xâ² â X such that ζ(x) = ζ(xâ²).
Then, for any Î -fair policy d, with W = X, there exists a function f such that d(X) = f (Z), i.e., d is constant across individuals having the same value of Z.
The first condition of Theorem 11 holds for any reduced set of covariates Z that is not causally affected by changes in A (e.g., Z is not a descendant of A). The second condition requires that among individuals with covariates x, a positive fraction have covariates xâ² in a counterfactual world in which they belonged to another group aâ². Because ζ(x) is the same in the real and counterfactual worldsâsince Z is unaffected by A, by the first conditionâwe only consider xâ² such that ζ(xâ²) = ζ(x) in the second condition.
In our admissions example, this result shows that, under mild conditions, causally- fair policies require admitting all applicants with equal probability. In particular, suppose that among students with a given test score, a positive fraction achieve any other test score in the counterfactual world in which their race is alteredâas, for instance, we might expect if the individual-level causal effects are drawn from an (appropriately discretized) normal distribution. In this case, the empty set of reduced covariatesâformally encoded by setting ζ to a constant functionâsatisfies the conditions of Theorem 11. The theorem then implies that under any Î -fair policy, every applicant is admitted with equal probability. (We motivated our admissions example by assuming that only a fraction b < 1 of applicants could be admitted; however, Theorem 11 holds irrespective of the budget, and, in particular, when b = 1, and so we discuss this result together with our others on unconstrained decision making as a natural extension of blinding.)
Even when decisions are not perfectly uniform lotteries, Theorem 11 suggests that en- forcing Î -fairness can lead to unexpected outcomes. For instance, suppose we modify our admissions example to additionally include age as a covariate that is causally unconnected to raceâas some past work has done. In that case, Î -fair policies would admit students based on their age alone, irrespective of test score or race. Although in some cases such re- strictive policies might be desirable, this strong structural constraint implied by Î -fairness appears to be a largely unintended consequence of the mathematical formalism.
The conditions of Theorem 11 are relatively mild, but do not hold in every setting. Suppose that in our admissions example it were the case that TÎ ,A,a0 = TÎ ,A,a1 + c for some constant câthat is, suppose the effect of intervening on race is a constant change to an applicantâs test score. Then the second condition of Theorem 11 would no longer hold for a constant ζ. Indeed, any multiple-threshold policy in which ta0 = ta1 + c would be Î -fair. In practice, though, such deterministic counterfactuals would seem to be the exception rather than the rule. For example, it seems reasonable to expect that test scores would depend on race in complex ways that induce considerable heterogeneity. Lastly, we note that W ̸= X in some variants of path-specific fairness (e.g., Nabi and Shpitser, 2018; Zhang and Bareinboim, 2018), in which case Theorem 11 does not apply. Although, in that case, path-specific fairness is still typically incompatible with optimal decision-making, as shown in Theorem 17.
22
The Measure and Mismeasure of Fairness
# 4. Equitable Decisions in the Presence of Externalities
We have thus far considered cases where there is largely agreement on the utility of different decision policies. In that setting, we showed that maximizing utility is at odds with various mathematical formalizations of fairness. We further argued that these results illustrate weaknesses in the formalizations themselves, since deviating from utility-maximizing polices in that setting can harm both individuals and groupsâas seen in our diabetes screening example.
Agreement on the utility, however, is perhaps the exception rather than the rule. One could indeed argue that the value of mathematical formalizations of fairness is their ability to arbitrate between competing definitions of utility. Here we critically examine that per- spective. We show, in analog to our previous results, that even when it is unclear how to balance competing priorities, enforcing existing fairness constraints typically leads to worse outcomes on each dimension. For instance, in our running college admissions example, policies constrained to satisfy various fairness constraints will typically require admitting a student body that is both less academically prepared and less diverse, relative to alternative policies that violate these mathematical fairness definitions.
We start, in Section 4.1, by examining our college admissions example in detail, illustrat- ing in geometric terms how existing fairness definitions can lead to problematic admissions policies. Then, in Section 4.2, we develop our formal theory of equitable decision making in the presence of externalities. The mathematics necessary to establish our key results are significantly deeper than what we have needed thus far, but our high-level message is the same: enforcing several formal notions of fairness leads to policies that can paradoxically harm the very groups that they were designed to protect.
# 4.1 The Geometry of Fair Decision Making
To build intuition about the limitations of popular definitions of fairness, we return to our running example on college admissions. In that setting, we imagine an admissions committee debating the merits of different admissions policies. In particular, we imagine disagreement within the committee over how best to balance two competing objectives: academic preparation (operationalized, e.g., in terms of the high school grades and stan- dardized test scores of admitted students) and class diversity (e.g., the number of admitted applicants from marginalized groups).
We assume that our hypothetical committee members all agree that more (total) aca- demic preparedness and more class diversity are better. Thus, in the absence of any resource constraints (with b = 1, as is approximated in some online courses), the university could admit all applicants, maximizing both the number of admitted students from marginalized groups and also the total academic preparedness of the admitted class. But given limits on the number of students who can be admitted (i.e., b < 1), one must make difficult choices on whom to admit, with reasonable and expected disagreement on how much to trade one dimension for another. The trade-offs in decision making are most acute when the budget b < 1, and for this reason we focus here on that case.
In light of these trade-offs, one might turn to the myriad formal fairness criteria we have discussed to ensure admissions decisions are equitable. Many of the fairness definitions we consider make reference to a distinguished outcome Y . In our example, we can imagine
23
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
this outcome corresponds to college degree attainment, an ex post measure of academic preparedness. In the case of causal fairness definitions, we could take Y (1) to mean degree attainment if the student were admitted, and Y (0) to be degree attainment if the student were not admitted, with the understanding that a student who is not admitted could poten- tially attend and graduate from another university. For example, satisfying counterfactual predictive parity requires that among rejected applicants, the proportion who would have attained a college degree, had they been accepted, is equal across race groups. In these cases, we imagine academic preparedness is some student-level measure that connects observables Xâupon which the committee must make their admissions decisionsâto (potential) out- comes Y . For example, the âacademic indexâ m(x) might be a prediction of Y (1) given X based on historical data, or, more generally, could encode committee preferences for both academic preparation and participation in extracurricular activities, among other factors.
The key point of our informal discussion thus far is that we assume committee members would like to enact an admissions policy d that balances two competing objectives. First, they would like a policy that leads to large m(x), i.e., they would like E[m(X) · d(X)] to be big, where m(x) is some quantity that may, for example, encode academic preparedness and other preferences. Second, the committee would like large diversity, i.e., they would like E[1α(X)=a1 · d(X)] to be big, where a1 corresponds to some target group of interest. All committee members would like more of each dimension, but, given the budget constraint, it is in general impossible to maximize both dimensions simultaneously, leading to the inherent trade-offs we consider in this section.
We now explore the consequences of imposing additional fairness constraints on our college admissions example, as given by the causal DAG in Figure 1, via a simulation study of one million hypothetical applicants, for one quarter of whom (b = 1 4 ) seats are allocated. In particular, in the hypothetical pool of applicants we consider, applicants in the target race group a1 have, on average, fewer educational opportunities than those applicants in group a0, which leads to lower average academic preparedness, as well as lower average test scores. We define the âacademic indexâ m(x) of applicants to be the estimated probability that an applicant will graduate if admitted, based on their observed test score and race. See Appendix C for additional details, including the specific structural equations we use in the simulation.
Each of the panels in Figure 4 illustrates the geometry of fairness constraints for five different formal notions of fairness described in Section 2: counterfactual fairness, path- specific fairness, principal fairness, counterfactual equalized odds, and counterfactual pre- dictive parity. The vertical axes of each panel correspond to aggregate academic index and the horizontal axes to the number of admitted applicants from the target group. The pur- ple lines trace out the boundary of the set of feasible policies, with points on or below the curves achievable by policies that adhere to the budget constraint. Policies lying strictly below the purple curves (or, similarly, on the dashed segments of the purple curves) are âPareto dominated,â meaning that one can find feasible alternatives that are larger on both of the depicted axes (i.e., academic index and diversity). Since we have assumed committee members prefer higher values on each dimension, their effective choice set consists of those policies on the solid purple segmentsâthe âPareto frontier.â Committee members may still disagree over which policy on the frontier to adopt. But for any policy not on the frontier,
24
# The Measure and Mismeasure of Fairness
Counterfactual Fairness / Path-Specific Fairness
Principal Fairness
175,000 150,000 125,000 Academic Index » Random 100,000 Counterfactual Equalized Odds Counterfactual Predictive Parity 175,000 150,000 125,000 Academic Index 100,000 0 50,000 100,000 150,000 200,000 250,000 0 50,000 100,000 150,000 200,000 250,000 Admitted Applicants from Target Group Admitted Applicants from Target Group
Figure 4: The geometry of fairness constraints, in an illustrative example of college ad- missions. Points under the purple curve correspond to all feasible policiesâgiven the budget constraintâwhereas the shaded regions correspond to feasible policies that satisfy various formal definitions of fairness. (For path-specific fairness, we set Î equal to the single path A â E â T â D highlighted in Figure 1, and set W = X.) For each definition, the constrained policies lie strictly under the purple curve, meaning there are alternative, unconstrained, feasible policies that simultaneously achieve greater student-body diversity (indicated by the x-axis) and greater academic preparedness (indicated by the y-axis). The solid segments of the purple lines correspond to policies on the Pareto frontierâfor which one cannot simultaneously increase both diversity and academic preparedness. The points labeled ârandomâ in the upper left-hand corner correspond to policies generated by random lotteries in which each individual is admitted with equal probability, in accordance with Theorem 11.
there is a feasible policy above and to the right of it, which is thus preferred by every member of the committee.
Finally, the shaded regions indicate the set of feasible policies constrained to satisfy each of the fairness definitions. (In Appendix B, we show that these feasibility regions can be computed by solving a series of linear programs.) In each case, the constrained regions do not intersect the Pareto frontier, and so there is an alternative, unconstrained feasible policy
25
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
that simultaneously achieves more student-body diversity and an overall higher academic index. For example, in the case of policies satisfying counterfactual or path-specific fairness, shown in the upper left panel, the set of feasible policies lie on a single line segment. That structure follows from Theorem 11, since the only policies satisfying either of these notions of fairness in our setting are ones that admit all students with a constant probability, irrespective of their covariates. While not as extreme, the other fairness definitions similarly restrict the space of feasible policies in severe ways, as shown in the remaining panels. These results illustrate that constraining decision-making algorithms to satisfy popular definitions of fairness can have unintended consequences, and may even harm the very groups they were ostensibly designed to help.
Our discussion in this section aimed to highlight the geometry of âfairâ decision policies and their consequences in the context of a simple motivating example. We next show that these qualitative findings are guaranteed to hold much more generally.
# 4.2 A Formal Theory of Fairness in the Presence of Externalities
Our simulation above showed that policies satisfying one of the mentioned fairness defini- tions are suboptimal, in the sense that they constrain one to a portion of the feasible region in which policies could be improved along both dimensions of interest. As was the case in the absence of trade-offs in Section 3.2, the phenomenon occurring in our simulation is true much more generally. To understand why, we begin by isolating and formalizing the relevant mathematical properties of our example. To generalize our setting in Section 3, we consider arbitrary utility functions of the form u : X â R. As before, for a function u and decision policy d, we write u(d) = E[d(X) · u(X)] to denote the expected utility of decision policy d(x) under the utility u. An important constraint on the admissions committee was the fact that their admissions decisions could not, in expectation, exceed the budget.
Definition 12 For a budget b, we say a decision policy d(x) is feasible if E[d(X)] ⤠b.
A key feature of the college admissions example is that despite some level of uncertainty regarding the âtrueâ utilityâi.e., exactly how to trade off between its objectivesâthe com- mittee knows what its objectives are: to increase the academic index and diversity of the incoming class. One way to encode this kind of uncertainty is to consider a set U consisting of all âreasonableâ ways of trading off between the objectives. While the utilities need not be the same, they should be consistent, in the sense that conditional on an applicantâs group membership, all of the utilities should âagreeâ that a higher academic index is better.
Definition 13 We say that a set of utilities U is consistent modulo α if, for any u, uâ² â U:
1. For any x, sign(u(x)) = sign(uâ²(x));
2. For any x1 and x2 such that α(x1) = α(x2), u(x1) > u(x2) if and only if uâ²(x1) > uâ²(x2).
A second relevant feature of the admissions problem is that certain policies were strictly better from the admissions committeeâs perspective, despite their uncertainty about the exact form of their utility. The notion that one policy is better than another regardless of the exact form of the utility is formalized by Pareto dominance.
26
The Measure and Mismeasure of Fairness
Definition 14 Suppose U is a collection of utility functions. A decision policy d is Pareto dominated if there exists a feasible alternative dâ² such that u(dâ²) ⥠u(d) for all u â U, and there exists uâ² â U such that uâ²(dâ²) > uâ²(d). A policy d is strongly Pareto dominated if there exists a feasible alternative dâ² such that u(dâ²) > u(d) for all u â U. A policy d is Pareto efficient if it is feasible and not Pareto dominated, and the Pareto frontier is the set of Pareto efficient policies.
As discussed above and in Section 3.1, in the absence of trade-offs, optimal decision policies take the simple form of threshold policies. The existence of trade-offs broadens the range of forms a Pareto efficient policy can take. Even so, for consistent collections of utilities, the Pareto efficient policies take a closely related form.
Proposition 15 Suppose U is a set of utilities that is consistent modulo α. Then any Pareto efficient decision policy d is a multiple-threshold policy. That is, for any u â U, there exist group-specific constants ta ⥠0 such that, a.s.:
d(x) = 1 u(x) > tα(x), 0 u(x) < tα(x). (11)
The proof of Proposition 15 is in the Appendix.18
# 4.2.1 Fairness definitions with many constraints
All of the definitions we study in this section prominently feature causal quantities, but the important quality driving our analysis in this section is that each definition imposes many constraints. For instance, counterfactual equalized odds requires that
Pr(D = 1 | A = a, Y (1) = y) = Pr(D = 1 | Y (1) = y)
for every outcome y.
Theorem 17 shows that for almost every joint distribution of X, Y (0), and Y (1) such that u(X) has a density, any feasible decision policy satisfying counterfactual equalized odds or conditional principal fairness is Pareto dominated. Similarly, for almost every joint distribution of X and XÎ ,A,a, we show that feasible policies satisfying path-specific fairnessâincluding counterfactual fairnessâare Pareto dominated. (The analogous state- ments for counterfactual predictive parity, equalized false positive rates, and demographic parity are not true; we return to this point in Section 4.2.2.) That is, we show that, for a typical joint distribution, any feasible policy satisfying the fairness definitions enumerated above cannot have the form of a multiple-threshold policy. To prove this result, we make relatively mild restrictions on the set of distributions and utilities we consider to exclude degenerate cases, as formalized by Definition 16.
18. In the statement of the proposition, we do not specify what happens at the thresholds u(x) = tα(x) themselves, as one can typically ignore the exact manner in which decisions are made at the threshold. Specifically, given a multiple-threshold policy d, we can construct a standardized multiple-threshold policy dâ² that is constant within group at the threshold (i.e., dâ²(x) = cα(x) when u(x) = tα(x)), and for which: (1) E[dâ²(X)|A] = E[d(X)|A]; and (2) u(dâ²) = u(d). In our running example, this means we can standardize multiple-threshold policies so that applicants at the threshold are admitted with the same group-specific probability.
27
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Definition 16 Let G be a collection of functions from Z to Rd for some set Z. We say that a distribution of Z on Z is G-fine if g(Z) has a density for all g â G.
In the absence of U-fineness, corner cases can arise in which an especially large number of policies may be Pareto efficient, in particular when u(X) has large atoms and X can be used to predict the potential outcomes Y (0) and Y (1) even after conditioning on u(X). See Proposition 72 in the Appendix for details.
Theorem 17 Suppose U is a set of utilities consistent modulo α. Further suppose that for all a â A there exist a U-fine distribution of X and a utility u â U such that Pr(u(X) > 0, A = a) > 0, where A = α(X). Then,
⢠For almost every U-fine distribution of X and Y (1), any feasible decision policy sat- isfying counterfactual equalized odds is strongly Pareto dominated.
⢠If | Img(Ï)| < â and there exists a U-fine distribution of X such that Pr(A = a, W = w) > 0 for all a â A and w â Img(Ï), where W = Ï(X), then, for almost every U-fine joint distribution of X, Y (0), and Y (1), any feasible decision policy satisfying conditional principal fairness is strongly Pareto dominated.
⢠If | Img(Ï)| < â and there exists a U-fine distribution of X such that Pr(A = a, W = wi) > 0 for all a â A and some distinct w0, w1 â Img(Ï), then, for almost every U A-fine joint distributions of A and the counterfactuals XÎ ,A,aâ², any feasible decision policy satisfying path-specific fairness is strongly Pareto dominated.19
The proof of Theorem 17 is given in the Appendix. At a high level, the proof proceeds in three steps, which we outline below using the example of counterfactual equalized odds. First, we show that for almost every fixed U-fine joint distribution µ of X and Y (1) there is at most one policy dâ(x) satisfying counterfactual equalized odds that is not strongly Pareto dominated. To see why, note that for any specific y0, since counterfactual equalized odds requires that D â¥â¥ A | Y (1) = y0, setting the threshold for one group determines the thresholds for all the others; the budget constraint then can be used to fix the threshold for the original group. Second, we construct a âsliceâ around µ such that for any distribution ν in the slice, dâ(x) is still the only policy that can potentially lie on the Pareto frontier while satisfying counterfactual equalized odds. We create the slice by strategically perturbing µ only where Y (1) = y1, for some y1 ̸= y0. This perturbation moves mass from one side of the thresholds of dâ(x) to the other. Due to inframarginality, this perturbation typically breaks the balance requirement D â¥â¥ A | Y (1) = y1 for almost every ν in the slice. Finally, we appeal to the notion of prevalence to stitch the slices together, showing that for almost every distribution, any policy satisfying counterfactual equalized odds is strongly Pareto dominated. Analogous versions of this general argument apply to the cases of conditional principal fairness and path-specific fairness.20 We note that the conditions of Theorem 17
9. Here, ut: (ta)aca > (u(ta))aca and UM is the set of uA for u ⬠U, i.e., component-wise application of u to elements of 4%. In other words, the requirement is that the joint distribution of the u(Xu,A,a) has a density.
20. This argument does not depend in an essential way on the definitions being causal. In Corollary 70 in the Appendix, we show an analogous result for the non-counterfactual version of equalized odds.
28
The Measure and Mismeasure of Fairness
are sufficient, rather than necessary, meaning that the conclusion of the theorem mayâand, indeed, we expect willâhold even in some cases where the conditions are not satisfied. In particular, we note that this proof technique prevents the conditions of Theorem 17 from holding when A factors through W and, in particular, when W = X. Although, when X = W , Theorem 11 shows that under slightly different conditions, a much stronger result holds.
To bring our discussion full circle, we now map Theorem 17 onto the motivation offered in Section 4.1. Recall that the admissions committee knew that given the opportunity, it preferred policies that increased both the overall academic index of its admitted class, and policies that resulted in more students being admitted from the target group. In other words, we imagine that members of the admissions committee have utilities u* of the form?! u*(d) = v (E[m(X) - d(X)], Eldacxy=a, - â¬(X))) ; (12) where, as above, m(a) denotes the academic index of an applicant with covariates X = x, and v increases in both coordinates. Corollary 18 establishes the inherent incompatibility of such preferences with the formal fairness criteria we have been considering.
Corollary 18 Consider a utility of the form given in Eq. (12), where v is monotonically increasing in both coordinates and m(x) ⥠0. Then, under the same hypotheses as in Theo- rem 17,22 for almost every joint distribution, no utility-maximizing decision-policy satisfies counterfactual equalized odds, conditional principal fairness, or path-specific fairness.
Lastly, while, in general, oneâs decision policy can depend only on the covariates known at the time of the decision, in some cases, the restriction that u(x) be a function of x â X alone may be too restrictive; the connection between an individual having covariates X = x and our utility may depend also on the relationship between X and Y . For instance, in the admissions example, the admissions committee may value high test scores and ex- tracurriculars not, e.g., as per se measures of academic merit, but rather instrumentally insofar as they are connected to whether an applicant will eventually graduate. However, allowing u to depend on both x and y greatly complicates the underlying geometry of the problem. Proving Theorem 17 in this more general setting remains an open problem. How- ever, intuition from finite-dimensionsâwhere more powerful measure-theoretic tools are availableâsuggests that the result remains true in the more general setting. For example, Proposition 19 presents a version of this result over a natural, finite-dimensional family of distributions.
Proposition 19 Suppose A = {a0, a1}, and consider the family U of utility functions of the form
u(x) = r(x) + λ · 1α(x)=a1,
indexed by λ ⥠0, where r(x) = E[Y (1) | X = x]. For almost every (α0, β0, α1, β1) â R4 the conditional distributions of r(X) given A are beta distributed with
r(X) | A = ai ⼠Beta(αi, βi),
21. Strictly speaking, we are saying that members of the admissions committee, rather than having an aggregate utilityâwhich, as we have considered so far, has the form E[u(X) · d(X)]âhas a utility on aggregate outcomes.
22. The full statement is given in Appendix F.7.
29
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
then any policy satisfying counterfactual equalized odds is strongly Pareto dominated.
# 4.2.2 Fairness definitions with few constraints
We conclude this analysis by considering equalized false positive rates, demographic parity, and counterfactual predictive parity. These fairness notions are less demanding than the notions considered above, in that they introduce only âoneâ additional constraint, e.g., that Y (1) â¥â¥ A | D = 0, in the case of counterfactual predictive parity. Since the budget introduces a second constraint, and the form of a multiple-threshold policy allows for a degree of freedom in each group, the number of constraints and the number of degrees of freedom are equalâas opposed to the causal fairness definitions covered by Theorem 17, in which the constraints outnumber the degrees of freedom. As such, it is possible in some instances to have a policy on the Pareto frontier that satisfies these conditions; though see Section 5.3 for discussion about why such policies are still often at odds with broader goals. However, it is not always possible to find a point on the Pareto frontier satisfying these definitions. In Proposition 20, we show that counterfactual predictive parity cannot lie on the Pareto frontier in some common cases, including our example of college admissions. In that setting, when the target group has lower average graduation ratesâa pattern that often motivates efforts to actively increase diversityâdecision policies constrained to satisfy counterfactual predictive parity are Pareto dominated. The proof of the proposition is in Appendix H.3.
Proposition 20 Suppose A = {a0, a1}, and consider the family U of utility functions of the form
u(x) = r(x) + λ · 1α(x)=a1, indexed by λ ⥠0, where r(x) = E[Y (1) | X = x]. Suppose the conditional distributions of r(X) given A are beta distributed, i.e.,
r(X) | A = a ⼠Beta(µa, v),
with µa0 > µa1 and v > 0.23 Then any policy satisfying counterfactual predictive parity is strongly Pareto dominated.
# 5. A Path Forward
We have thus far worked to clarify some of the statistical limitations of existing mathemat- ical definitions of fairness. We have argued that in many cases of interest, these definitions can ultimately do more harm than good, hurting even those individuals that these notions of fairness were ostensibly designed to help.
We end on a more optimistic note, charting out a potential path toward designing more equitable algorithms. To do so, we start, in Section 5.1 by reviewing conceptions of discrimination in law and economics, and, in particular, we contrast process-oriented and outcome-oriented notions of fairness. Whereas the computer science literature is dominated by process-oriented, deontological definitions of fairness, we see more promise in adopting
23. Here we parameterize the beta distribution in terms of its mean µ and sample size v. In terms of the common, alternative α-β parameterization, µ = α/(α + β) and v = α + β.
30
The Measure and Mismeasure of Fairness
an outcome-oriented, consequentialist approach represented by the utilitarian analysis we have described above. In Section 5.2, we enumerate and discuss four issues that we feel are critical in developing equitable algorithms: (1) balancing inherent trade-offs in decision problems; (2) assessing calibration; (3) selecting the inputs and targets of prediction; and (4) designing data collection strategies. Finally, in Section 5.3, we illustrate how to grapple with these considerations in a case study of complex medical care, motivated by work from Obermeyer et al. (2019).
# 5.1 Competing Notions of Ethical Decision Making: Process vs. Outcomes
There are many distinct but related understandings of ethical decision making in law, economics, philosophy, and beyond. One key dimension on which we organize these notions is the extent to which they consider the process through which decisions are made versus the outcomes that those decisions render.
The dominant legal doctrine of discrimination in the United States treats explicit race- and gender-based decisions with heightened scrutiny. The Equal Protection Clause of the U.S. Constitutionâs Fourteenth Amendment restricts government agencies from adopting policies that explicitly reference legally protected categories, and myriad federal and state disparate treatment statutes similarly constrain a variety of private actors. Conversely, policies that do not explicitly consider legally protected traitsâor obvious proxiesâare generally deemed not to violate disparate treatment principles. Formally, it is lawful to use legally protected attributes in a limited way to further a compelling government inter- est, but, in practice, such exceptions are few and far between. Until recently, the prime example of a race-conscious policy passing legal muster was affirmative action in college ad- missions (Fisher v. University of Texas, 2016). However, in 2023, the U.S. Supreme Court barred the explicit consideration of race in admissions decisions (SFFA v. Harvard, 2023). Disparate treatment doctrine has evolved over time, and reflects ongoing debates about the role of classification (use of protected traits, a process-oriented, deontological notion) versus subordination (subjugation of disadvantaged groups, an outcome-oriented notion) in discrimination cases (Fiss, 1976). Some legal scholars have argued that courts, even when formally applying anti-classification criteria, are often sympathetic to the potential effects of judgments on social stratification, indicating tacit concern for anti-subordination (Balkin and Siegel, 2003; Colker, 1986; Siegel, 2003). Others, though, have noted that such judicial support for anti-subordination appears to be waning (Nurse, 2014). At a high level, we thus view modern disparate treatment law as primarily interested in process over outcomes, though these debates illustrate that the two concepts cannot be perfectly separated.
In contrast to process-oriented disparate treatment principles, the economics literature distinguishes between two outcome-focused, consequentialist rationales for explicitly consid- ering race, gender, and other protected traits: taste-based and statistical. With taste-based discrimination (Becker, 1957), decision makers act as if they have a preference or âtasteâ for bias, sacrificing profit to avoid certain transactions. This includes, for example, an employer who forfeits financial gain by failing to hire exceptionally qualified minority applicants. But, in contrast to legal reasoning, the economic argument against taste-based discrimination is not that decisions are based on race per se, but rather because consideration of race leads to worse outcomes: a loss of profit. With statistical discrimination (Arrow, 1973; Phelps,
31
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
1972), decision makers explicitly consider protected attributes in order to optimally achieve some non-prejudicial goal. For example, profit-maximizing auto insurers may charge a premium to male drivers to account for gender differences in accident rates. Despite their differing outcome-based justifications, both taste-based and statistical discrimination are of- ten considered legally problematic as they explicitly consider race, in violation of disparate treatment laws.24
As the above insurance example and our running diabetes example illustrate, one might consider it acceptable to base decisions in part on legally protected traits when doing so leads to good outcomes. Conversely, whereas process-oriented disparate treatment prin- ciples generally deem race-blind policies acceptable, one might declare such blind policies problematic if they lead to bad outcomes. Indeed, under the statutory disparate impact standard, a practice may be deemed discriminatory if it has an unjustified adverse effect on legally protected groups, even in the absence of explicit categorization (Barocas and Selbst, 2016).25 The disparate impact doctrine was formalized in Griggs v. Duke Power Co., a 1971 U.S. Supreme Court case. In 1955, the Duke Power Company mandated that employees have a high school diploma to be considered for promotion, which, in practice, severely limited the eligibility of Black employees. The Court found that this facially race- neutral requirement had little relation to job performance, and accordingly deemed it to have an unjustifiedâand illegalâdisparate impact. The Court noted that the employerâs motivation for instituting the policy was irrelevant to its decision; even if enacted without discriminatory purpose, the policy was deemed discriminatory in its effects and hence ille- gal. However, disparate impact law does not prohibit all group differences produced by a policyâthe law only prohibits unjustified disparities. For example, if, hypothetically, the high-school diploma requirement in Griggs were shown to be necessary for job success, the resulting disparities would be legal.
On the spectrum from process- to outcome-based understandings of discrimination, we view the formal, axiomatic fairness definitions described in Section 2 as reflecting a largely process-based orientation. Blinding and its more stringent causal variantsâcounterfactual fairness and path-specific fairnessâcan be viewed as descendants of disparate treatment considerations, as they seek to remove the effects of race and other protected attributes on decisions. The remaining definitionsâfor example, those that aim to equalize error rates across groupsâdo explicitly reference an outcome Y , but they do so in a way that seems largely disconnected from the consequences one might naturally consider. As we have
24. In the case of auto insurance specifically, some states (including California, Hawaii, Massachusetts, Maine, Michigan, North Carolina, and Pennsylvania)âthough not allâhave barred the use of gender in pricing policies.
25. The legal doctrine of disparate impact stems largely from federal statutes, not constitutional law, and applies only in certain contexts, such as employment (via Title VII of the 1964 Civil Rights Act) and housing (via the Fair Housing Act of 1968). Apart from federal statutes, some states have passed more expansive disparate impact laws, including Illinois and California. The distinction between statutory and constitutional rules is particularly relevant here, as there is debate among scholars over whether disparate impact laws violate the Equal Protection Clause and are thus unconstitutional (Primus, 2003). There is also debate over whether disparate impact law is motivated primarily by an interest in banning bad outcomes, or seeks to provide an alternative pathway for ferreting out bad intent, when actors may mask animus with race-neutral policies (Watson v. Fort Worth, 1988). But, regardless of its underlying justification, disparate impact law is formally focused on outcomes, not intent or classification, and so we view it as an outcomes-focused principle.
32
# The Measure and Mismeasure of Fairness
argued, whether error rates are equal across groups has more to do with the structure of group-specific risk distributions than with whether decisions lead to good or bad outcomes for group members. For example, in our college admissions example, enforcing various formal notions of fairness would, in theory, typically lead to student bodies that are both less diverse and less academically prepared than those resulting from feasible alternatives not constrained to satisfy these notions.
One might, on principle, favor certain process-based understandings of discrimination over outcome-based notions. One might even adopt a meta-consequentialist position, and argue that procedural considerations (e.g., ensuring medical decisions are blind to race) engender trust and in turn bring about better downstream outcomes. In many cases, though, the ethical underpinnings of popular mathematical definitions of fairness have not been clearly articulated. Absent such justification, we advocate for an approach that more directly engages with the real-world costs and benefits of different decision policies, a per- spective that we outline in more detail in the remaining sections.
# 5.2 Designing Equitable Algorithms
A key advantage of the dominant axiomatic approach to algorithmic fairness is that it can be readily applied across contexts, with little domain-specific knowledge. One can build automated tests to check whether any predictive algorithm satisfies various formal fair- ness desiderata, and even automatically modify algorithms to ensure that they do satisfy a specific fairness criterion (e.g., Cotter et al., 2019; Weerts et al., 2023). But, as we have argued, this approach is often at odds with improving well-being, including for disadvan- taged groups. A particularly pernicious risk of automated, axiomatic approaches is that they can make invisible the cost to well-being: automatically constraining algorithms to be âfairâ can lead one to overlook unconstrained alternatives that are clearly preferable. We have instead called for a more careful analysis of the consequences, good and bad, of different decision policies, selecting the appropriate course of action based on the specific context. This is admittedly hard to doâand does not easily scaleâbut there are general principles that we believe are helpful to keep in mind when navigating this terrain. Below we enumerate and discuss four of them.
5.2.1 Contending with inherent trade-offs
There are inherent trade-offs in many important decision problems. For instance, in our college admissions example, one must balance academic preparedness with student-body diversity. Although one cannot generally circumvent these trade-offs, we believe it is useful to explicitly enumerate the primary dimensions of interest and to acknowledge the trade- offs between them. In some cases, like our stylized admissions example, one might be able to explicitly calculate the Pareto frontier shown in Figure 4, in which case it often makes sense to focus on those policies lying on the frontier. In many cases, it wonât be possible to compute the frontier. Still, by listing and discussing trade-offs, even informally, one can reduce the risk of adopting clearly problematic policies, like those that typically result from uncritically constraining decisions to satisfy formal fairness criteria.
In this sense, designing equitable algorithms is akin to designing equitable policy writ large. One might accordingly adapt democratic mechanisms used to draft and enact leg-
33
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
islation to algorithm design. For example, adopting such a policy-oriented perspective, Chohlas-Wood et al. (2023a) and Koenecke et al. (2023) surveyed a diverse sample of Amer- icans to elicit preferences on how best to balance competing objectives in programs that algorithmically allocate government benefits.
# 5.2.2 Assessing Calibration
When designing or auditing a risk assessment algorithm, it is important to check whether predictions are calibrated, meaning that risk scores correspond to the same observed level of risk across groups. In general, the relationship between predictors and outcomes may plausibly differ across groups, leading to miscalibrated risk estimatesâwhat Ayres (2002) calls the problem of subgroup validity. Figure 3 shows instances of risk scores for diabetes and recidivism that are miscalibrated across race and gender, respectively. For example, a nominal 1.5% diabetes riskâbased on age and BMIâcorresponds to an actual, observed diabetes rate of approximately 1% among White patients and 3% among Asian patients. Similarly, among individuals receiving a COMPAS risk score of 7âbased on criminal his- tory and related factorsâabout 55% of women recidivate, compared to 65% of men. These miscalibrated risk scores can result in inequitable decisions. For instance, a policy to screen patients with a nominal diabetes risk of 1.5% or aboveâin line with existing medical rec- ommendations (Aggarwal et al., 2022)âwould overscreen White patients and underscreen Asian patients, harming individuals in both groups.
Calibration can often be visually assessed by plotting predicted risk against average outcomes, as in Figure 6 (cf. Arrieta-Ibarra et al., 2022). For a simple, more quantitative, measure, we recommend regressing observed outcomes against risk estimates and group membership. A coefficient of approximately zero on group membership suggests risk es- timates correspond to similar average outcomes across groups, with deviations from zero indicating the degree of miscalibration.
In practice, miscalibration can often be rectified by training group-specific risk mod- els, or, roughly equivalently, including group membership in a single risk model fit across groups. For example, diabetes risk models that include race, and recidivism risk models that include gender, are approximately calibrated. The relatively new literature on multicalibra- tion has introduced new computational techniques to ensure predictions are simultaneously calibrated across many different subgroups (H´ebert-Johnson et al., 2018). Of course, includ- ing protected traits in risk models raises additional legal and ethical challenges. In some cases, it may be possible to reduce or eliminate miscalibration by incorporating additional, non-protected covariates. Regardless, we believe it is important to check the calibration of risk scores to make informed decisions about if and how to address any observed disparities. Calibration is an important necessary condition to ensure risk estimates correspond to actually observable levels of risk across groups. But it is not sufficient. Indeed, even cal- ibrated risk scores can encode and reinforce deeply discriminatory policies. To see this, imagine a bank that wants to discriminate against Black applicants. Further suppose that: (1) within ZIP code, White and Black applicants have similar default rates; and (2) Black applicants live in ZIP codes with relatively high default rates. Then the bank can sur- reptitiously discriminate against Black borrowers by basing estimates of default risk only on an applicantâs ZIP code, ignoring all other relevant information. Such scores would be
34
# The Measure and Mismeasure of Fairness
1 50% 0.004 4 1 Original gine! 40% 4 @ 0.003 5 Noised 2 5 1 © 30% 4 g 1 3 © 0.0024 \ 3 E 1 B 20%4 Zz ! a 0.001 4 Ip 40% â Asian h + â White 0.000 t T rt T 0% T T T T 0% 5% 10% 15% 20% 0% 10% 20% 30% 40% 50% Risk score Risk estimate
Figure 5: Calibration is insufficient to prevent discrimination. Left: The distribution in green shows diabetes risk for Asian patients based on accurately collected age and BMI, and the distribution in purple shows estimates when the risk model is trained on noisy inputs. Estimates under the noisy model concentrate around the mean (dashed vertical line), pushing more Asian patients above the screening threshold (solid vertical line). Right: A calibration plot comparing noisy risk estimates for Asian patients and accurate risk estimates for White patients. The calibrated risk scores can mask both intentional discrimination and inadvertent errors.
calibrated (White and Black applicants with the same score would default equally often), and the bank could use these scores to justify denying loans to nearly all Black applicants. The bank, however, would be sacrificing profit by refusing loans to creditworthy Black ap- plicants,26 and thus engaging in taste-based discrimination. This discriminatory lending strategy is indeed closely related to the historical (and illegal) practice of redlining, and illustrates the limitations of calibration as a measure of equity.
Figure 5 shows another example of calibrated scores masking disparities. In the left- hand panel, we plot in green the distribution of diabetes risk for Asian patients, as estimated from age, BMI, and race. In purple, we plot the distribution of estimated risk when age and BMI are imperfectly measured for Asian patients in the training data. Training the risk model on noisy features pushes risk estimates toward the mean. As a result, based on the noisy risk model, 97% of Asian patients are above the 1.5% screening threshold, compared to 81% of Asian patients under the more accurately estimated modelâleading Importantly, however, to more medically unnecessary screening under the noisy model. the noisy model is still calibrated, as shown in the right-hand panel of Figure 5, where we compare risk scores for Asian patients estimated from the noisy predictors and risk
26. These applicants are creditworthy in the sense that they would have been issued a loan had the bank used all the information it had available to determine their risk.
35
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
scores for White patients estimated from the accurately estimated information. In theory, a malicious algorithm designer could generate such calibrated but inaccurate scores to inten- tionally harm Asian patients. In practice, this pattern could equally arise from negligence rather than malice. These examples illustrate the importance of considering all available data when constructing statistical risk estimates; assessments that either intentionally or inadvertently ignore predictive information may facilitate discriminatory decisions while satisfying calibrationâthough, as we discuss below, even this intuitive heuristic of âuse all the dataâ has its limitations.
5.2.3 Selecting the target of prediction
In constructing algorithmic risk scores, a key ingredient is the target of prediction. In practice, though, there is often a mismatch between our true outcome of interest and the available dataâan occurrence we call label bias (Zanger-Tishler et al., 2023). As with the other issues we discuss, there is typically no perfect solution to this problem, but there are ways to mitigate it.
For example, in pretrial risk assessment, we would often like to estimate the likelihood a defendant would commit a crime if released. But there are two key difficulties with this goal. First, though we might want to measure crime conducted by defendants awaiting trial, we typically only observe crime that results in a conviction or an arrest. These observable outcomes, however, are imperfect proxies for the underlying criminal act. Further, heavier policing in communities of color might lead to Black and Hispanic defendants being arrested, and later convicted, more often than White defendants who commit the same offense (Lum and Isaac, 2016). Poor outcome data might thus cause one to systematically underestimate the risk posed by White defendants. The second, related, issue is that our target of interest is a counterfactual outcome; it corresponds to what would have happened had a defendant been released. In reality, we only observe what actually happened conditional on the judgeâs actual detention decision.
One way to reduce label bias in this case is to adjust the target of interest. For example, criminologists have found that arrests for violent crimeâas opposed to drug crimeâmay suffer from less racial bias.27 In particular, Skeem and Lowenkamp (2016) note that the racial distribution of individuals arrested for violent offenses is in line with the racial dis- tribution of offenders inferred from victim reports, and is also in line with self-reported offending data. In other cases, like lending, where one may seek to estimate default rates, the measured outcome (e.g., failure to pay) corresponds more closely to the event of in- terest. The problem of estimating counterfactuals can likewise be partially addressed in
27. DâAlessio and Stolzenberg (2003) find evidence that White offenders are even somewhat more likely than Black offenders to be arrested for certain categories of crime, including robbery, simple assault, and aggravated assault. Measurements of minor criminal activity, like drug offenses, are more problematic. For example, there is evidence that drug arrests in the United States are biased against Black and Hispanic individuals, with racial minorities who commit drug crimes substantially more likely to be arrested than White individuals who commit the same offenses (Ramchand et al., 2006). Although this pattern is well known, many existing risk assessment tools still consider arrests or convictions for any new criminal activityâincluding drug crimesâwhich may lead to biased estimates. As another example of label bias, auto insurance rates are determined in part by a driverâs record of receiving speeding tickets, but disparities in police enforcement mean that tickets are biased proxies of dangerous driving behavior (Cai et al., 2022b).
36
The Measure and Mismeasure of Fairness
some applications. In the pretrial setting, Angwin et al. (2016) measure recidivism rates in the first two-year period during which a defendant is not incarcerated; this is not identical to the desired counterfactual outcomeâsince the initial detention may be criminogenic, for exampleâbut it seems like a reasonable estimation strategy. Further, unaided human decisions often exhibit considerable randomness, a fact that can be exploited to facilitate sta- tistical estimation of counterfactual outcomes (Jung et al., 2020b; Kleinberg et al., 2017a). More generally, a spate of recent work at the intersection of machine learning and causal inference (Hill, 2011; Jung et al., 2020c; Mullainathan and Spiess, 2017) offers hope for more gains in counterfactual estimation.
5.2.4 Collecting training data
A final issue we discuss is collecting suitable training data for risk assessment algorithms to mitigate the effects of sample bias. Ideally, one would train algorithms on data sets that are broadly representative of the populations on which they are ultimately appliedâthough there are subtleties to this heuristic that we describe below. While often challenging in prac- tice, failure to train on representative data can lead to unintended, and potentially discrim- inatory, consequences. For example, Buolamwini and Gebru (2018) found that commercial facial analysis tools struggle to correctly classify the gender of dark-skinned individualsâ and of dark-skinned women in particularâa disparity likely attributable to the relative dearth of dark-skinned faces in facial analysis data sets. Similarly, Koenecke et al. (2020) found that several popular automated speech recognition systems were significantly worse at transcribing Black speakers than White speakers, likely due to insufficient data from speakers of African American Vernacular English (AAVE), a variety of English spoken by many Black Americans.
The problems of non-representative data can be even more acute in the case of risk assessment algorithms, especially when the target of interest is a causal quantity. For instance, in our running college admissions example, we seek to estimate how a student would (counterfactually) perform if admitted. Historical data are typically the result of past, potentially biased, decisions, and so may not fully generalize. Imagine, for example, that predictions of college performance are informed by where an applicant went to high school, and that, historically, only applicants from certain high schools were acceptedâand we consequently only see outcomes for students from those high schools. Then we would expect less accurate predictions for students from those absent high schools. In general, regression to the mean could attenuate estimates for high achieving students who differ from those previously accepted, potentially reinforcing existing admissions practices.
As a general heuristic, we believe it is advisable to train models on representative data, but, as Cai et al. (2022a) note, the optimal sampling strategy depends on the statistical structure of the problem and the group-specific costs of collecting data. Interestingly, the value of representative data collection strategies depends in part on the degree to which race, gender, and other protected attributes are predictive. In theory, if protected attributes are not predictive, one could build an accurate risk model using only examples from one particular group (e.g., White men). Given enough examples of White men, the model would learn the relationship between features and risk, which by our assumption would generalize to the entire population. This phenomenon highlights a tension in informal discussions of
37
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
fairness, with some advocating both for representative training data and for the exclusion of protected attributes. However, representative data are often most important precisely when protected attributes add information, in which case their use is arguably more justified. Even if protected attributes are not predictive, representative data can still help in two additional ways. First, a representative sample ensures that the full support of features is present at training time, as it is possible that the distribution of features varies across groups, even if the connection between features and outcomes does not. We note, though, that one might have adequate support even without a representative sample in many real- world settings, particularly when models are trained on large data sets and the feature space is relatively low dimensional. Second, a representative sample can help with model validation, allowing one to assess the potential effects of group imbalance on model fit. In particular, without a representative sample, it can be difficult to determine whether a model trained on a single group generalizes to the entire population.
In many settings, one may be able to gather better training data with greater invest- ment of time and money. For example, in our diabetes example one could aim to collect more complete medical records, a process that may be both costly and logistically difficult. In theory, this additional information may lead to welfare gains, and policymakers must accordingly evaluate the relative costs and benefits to all groups of exerting this extra effort when designing algorithms. Fortunately, in practice, there are often diminishing returns to information, with a relatively short list of key features providing most of the predictive power (Jung et al., 2020b), at least partially mitigating this concern.
As with the other issues we have discussed, there is no universal solution to data collec- tion. It might, for example, simply be prohibitive in the short run to train models on the data set one would ideally like to use. Nevertheless, as in all situations, one must carefully weigh the potential costs and benefits of adopting a necessarily imperfect risk assessment algorithm relative to the other possible options. In particular, even an imperfect algorithm may in some circumstances be better than leaving decisions to similarly imperfect humans who have their own biases.
# 5.3 Case Study: An Algorithm to Allocate Limited Medical Resources
We conclude by illustrating the principles discussed above with a real-world example: an algorithm for referring patients into a âhigh-risk care managementâ program, previously considered by Obermeyer et al. (2019). The care management program more effectively aids patients with complex medical needs, in principle both improving outcomes for patients and reducing costs to the medical system for patients who are enrolled. But the program has limited capacity, which we formalize by assuming that only 2% of patients can be enrolled (i.e., we set b = 1 50 ). For our analysis, we use the data released by Obermeyer et al.,28 which contain demographic variables, cost information, comorbidities, biomarker and medication details, and health outcomes for a population of approximately 43,000 White and 5,600 Black primary care patients at an academic hospital from 2013â2015.
28. Obermeyer et al. released a synthetic data set closely mirroring the real data set, available at: https: //gitlab.com/labsysmed/dissecting-bias. In contrast to b = 2%, which we adopt to better illustrate some of the statistical phenomena we discuss in this section, Obermeyer et al. use a budget of b = 3%.
38
# The Measure and Mismeasure of Fairness
100% =â Black patients 8 75% 4 === White patients Ls) aq 2 xo < 50%74 2 ⬠° S a 25% 4 0% T T T 0% 25% 50% 75% 100° Risk estimate
Figure 6: Calibration of a race-blind model predicting whether a patient is âhigh-costâ. The lack of a gap between the proportion of patients who are actually high-cost and the proportion predicted to be high-cost for both groups indicates that the model is well calibrated across race groups.
As a first step toward identifying patients to enroll in the program, one could train a model predicting the healthcare resources a patient is likely to require over the next year. Sicker patients require more care and, consequently, incur greater healthcare costs. Thus, our initial approach is to predict how likely a patient is to be âhigh costââwhich we operationalize as being in the top decile of healthcare expenditures in a given yearâbased on the available information. (One of the main contributions of Obermeyer et al. (2019) is highlighting that healthcare costs is a problematic outcome due to label bias, a point we return to shortly.)
As discussed above, it is useful to assess the calibration of risk assessment algorithms across groups. In particular, while calibration across race is largely guaranteed if race is included as a predictor in the statistical model, race-blind models are often preferred, particularly in healthcare, in part to avoid perceptions of bias. As a result, we assess the calibration of a race-blind model trained on all available information except for race, shown in Figure 6. Unlike in the diabetes screening example considered in Section 3.2, the race-blind healthcare cost predictions are calibrated across race groups, meaning that risk estimates largely match observed costs across groups. It accordingly appears that race provides little marginal predictive power in this example, assuaging potential concerns with its omission.
We turn next to assessing the effects of applying formal fairness criteria to our enrollment decisions. Often, in healthcare contexts, a false negativeâe.g., failing to screen for a disease when it is presentâis more consequential than a false positiveâe.g., screening for a disease when it is not present. For this reason, one might seek to ensure enrollment decisions are fair by mandating false negative rates be equal across race groups (e.g., Seyyed-Kalantari
39
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
(a) Full population (b) Women, aged 25â34
Figure 7: Enforcing formal fairness criteria can harm marginalized groups. Feasible re- gions for admissions policies to a high-risk care management program, where the dashed line indicates the number of Black patients admitted by the policy ad- mitting the maximal number of high-cost patients. Left: The Pareto frontier for all patients in the population. Because more Black patients incur high medical costs in the population as a whole, equalizing false negative rates (FNR)âas well as false positive rates (FPR), or enforcing demographic parity (DP)âresults in fewer Black patients being admitted than under the policy that maximizes the total number of high-cost patients admitted. (Equalized false negative rates and demographic parity are achieved at the same point.) Right: The Pareto frontier for the subpopulation of women between the ages of 25 and 34. Within this subpopulation, Black patients incur lower medical costs, and so equalizing false positive rates, false negative rates, or achieving demographic parity all result in more Black patients being admitted to the high-risk management program than the policy that admits the maximum number of high-cost patients.
et al., 2021),29 i.e., requiring that A â¥â¥ D | Y = 1. In our example, equalizing false negative rates means that among patients who ultimately incur high medical costs (Y = 1), the same proportion (E[D]) are referred into the program across race groups (A).
In our setting, approximately 1,000 patients can be referred into the care management programâ2% of the roughly 50,000 patients in our data set. When we equalize false- negative rates, 747 of the enrolled patients ultimately incur high costs, and 113 enrolled
29. The authors describe their metric of concern as the âfalse positive rate,â where the positive prediction is understood as a âno findingâ label, e.g., not having a disease. In our example, we follow the more common convention that a âpositiveâ classification is assigned to the rare event (i.e., being high cost), and so we call this metric the âfalse negative rate.â
40
The Measure and Mismeasure of Fairness
patients are Black.30 However, an unconstrained decision rule (i.e., one that enrolls the patients most likely to incur high costs) enrolls both more high-cost patients (758) and more Black patients (205). In this example, we end up providing worse care to Black patients when we constrain our algorithm to satisfy the formal, mathematical fairness criterion.
Instead of applying such mathematical fairness criteria, we advocate for directly weigh- ing the costs and benefits of different decision policies. In Figure 7a, we show the Pareto frontier for our example, tracing out policies that optimally trade off the demographic com- position of the enrolled population with the number of enrolled patients who in reality incur high costs. The green point corresponds to equalizing false negative rates, and is to the left of the dashed vertical line that corresponds to the unconstrained decision ruleâvisually illustrating how constraining our algorithm leads to fewer resources for Black patients. Also shown on the plot are points corresponding to demographic parity and equal false positive rates, both of which likewise lead to fewer resources for Black patients.31
The result in Figure 7a stems from the false negative rate for Black patients being lower than the false negative rate for White patients in the unconstrained algorithmâa pattern we expect since Black patients are more likely than White patients to incur high medical costs in our data. Equalizing false negative rates thus means raising the enrollment bar for Black patients and lowering the bar for White patients. In light of this example, one might argue for applying formal fairness criteria only when error rates for racial minorities are higher than for White individuals. Figure 7b repeats our analysis above for the subset of women between the ages of 25 and 34, a subpopulation in which Black patients have higher error rates than White patients. In this case, equalizing false positive rates or false negative rates, or enforcing demographic parity all result in more Black patients being admitted into the program. It is, however, unclear why one should adopt those particular error-rate equalizing policies over any other. The policies on the Pareto frontier (i.e., on the curve to the right of the dashed line) are all arguably reasonable to consider. It is admittedly difficult to determine which of these policies to adopt, but we believe it is best to confront this challenge head on, recognizing the hard trade-offs inherent to the problem.
We conclude our case study by considering label bias, the primary concern identified by Obermeyer et al. (2019) in this context. As those authors noted, medical cost is a poor proxy for medical need, and so allocating healthcare resources to minimize anticipated costs can lead to severe disparities. Replicating an analysis by Obermeyer et al., Figure 8a shows that among patients with similar likelihood of incurring high-cost medical care, Black pa- tients are considerably more likely than White patients to have complex medical needs, operationalized as having five or more active chronic conditions. This gap is likely a conse- quence of worse access to healthcare among Black patients, due to a mix of socioeconomic factors and discrimination. To the extent that care management programs aim to aid the sickest patientsâas opposed to simply reducing costsâtargeting resources based on antic-
30. Following Corbett-Davies et al. (2017), we equalize false-negative rates in a manner that maximizes the number of enrolled patients who ultimately incur high costs.
31. As discussed in Section 4, with the exception of counterfactual predictive parity, all of the remaining fairness definitions given in Section 2 are known a priori to restrict one to enrollment policies that will lower both the number of Black patients enrolled as well as the number of truly high-cost patients enrolled.
41
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
(a) Label bias (b) Heterogeneous treatment effects
Figure 8: The target of prediction impacts equity. Left: The probability of having complex medical needs (i.e., at least five active chronic conditions) for Black and White patients as a function of their estimated likelihood of incurring high medical costs, reproducing an analysis by Obermeyer et al. (2019). The large gap across groups indicates that Black patients have greater medical need than White patients with similar anticipated healthcare costs. Right: Distribution of the estimated change in the probability that an individual will have complex medical needs after en- rolling in the care management program, showing that the extent to which en- rollment reduces complex medical needs varies considerably across individuals.
ipated costs can lead to inefficient and inequitable outcomes. Obermeyer et al. accordingly suggest switching the target of prediction from medical costs to health status.
It is possible to achieve further gains by recognizing resource allocation as an inherently causal problem. In particular, one may seek to enroll patients in the program in a manner that maximizes E[Y (D)], where Y (D) is the potential health outcome under the enrollment decision. To do so, we could prioritize patients by their estimated treatment effect ËYi(1) â ËYi(0), rather than an estimate ËYi of their future health status that ignores the causal effect of the program.32 A proper causal analysis is, in general, a complex topic requiring careful treatment beyond the scope of this article. Nonetheless, Figure 8b shows the distribution of ËYi(1) â ËYi(0), as estimated with a simple regression model. The plot suggests that
32. In many analyses that do not explicitly grapple with the causal effects of interventions, the estimand Y is further corrupted by the fact that some patients are expected to be enrolled in the program. As a result, some of the sickest patients may not be prioritized for care, since their expected outcome already incorporates the fact that they would have been enrolled, leading to a prediction paradox. To avoid this situation, one could explicitly estimate and prioritize patients by Yi(0), the potential outcome in the absence of care. In this case, however, the allocation decisions do not necessarily lead to the largest health gains, as the patients likely to be sickest in the absence of care are not typically the same as those likely to benefit the most from the program.
42
The Measure and Mismeasure of Fairness
there is considerable predictable heterogeneity in the extent to which enrollment in the care management program causally improves health. In particular, we find that the estimated treatment effect is only weakly correlated with the number of chronic conditions a patient currently exhibits (r = 0.05). Consequently, directly targeting resources to those most likely to benefit could yield large health improvements. Once the types of label bias we have discussed above have been identified, it may be possible to re-train predictive models to better align decision-making algorithms with policy goals.
# 6. Conclusion
From medicine to criminal justice, practitioners are increasingly turning to statistical risk assessments to help guide and improve human decisions. Algorithms can avoid many of the implicit and explicit biases of human decisions makers, but they can also exacerbate historical inequities if not developed with care. Policymakers, in response, have rightly demanded that these high-stakes decision systems be designed and audited to ensure out- comes are equitable. The research community has responded to the challenge, coalescing around several formal mathematical definitions of fairness. However, as we have aimed to articulate, these popular measures of fairness suffer from significant statistical limitations. Indeed, adopting these measures as algorithmic design principles can often harm the groups that these measures were designed to protect.
In contrast to the dominant axiomatic approach to algorithmic fairness, we advocate for a more consequentialist orientation (Cai et al., 2022a; Chohlas-Wood et al., 2023a; Liang et al., 2022; Nyarko et al., 2021). Most importantly, we stress the importance of grounding technical and policy discussions of fairness in terms of real-world quantities. For example, in the pretrial domain, one might consider a risk assessmentâs short and long-term impacts on public safety and the size of the incarcerated population, as well as a toolâs alignment with principles of due process. In lending, one could similarly consider a risk assessmentâs immediate and equilibrium effects on community development and the sustainability of a loan program. Formal mathematical measures of fairness only indirectly address such issues, and can inadvertently lead discussions astray. Of course, it is not always clear how best to quantify or to balance the relevant costs and benefits of proposed algorithmic interventions. In some cases, it may be possible to conduct randomized controlled trials; in other cases, the best one can do is hypothesize about an algorithmâs potential effects. Regardless, we believe a more explicit focus on consequences is necessary to make progress.
We further recommend decoupling the statistical problem of risk assessment from the policy problem of designing interventions. At their best, predictive algorithms estimate the likelihood of events under different scenarios; they cannot dictate policy. An algorithm might (correctly) infer that a defendant has a 20% chance of committing a violent crime if released, but that fact does not, in and of itself, determine a course of action. For example, detention is not the only alternative to release, as one could take any number of rehabil- itative interventions (Barabas et al., 2018). Even if detention is deemed an appropriate intervention, one must still determine what threshold would appropriately balance public safety with the social and financial costs of detention. One might even decide that societyâs goals are best achieved by setting different thresholds for different groups. For example, a policymaker might reason that, all else being equal, the social costs of detaining a single
43
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
parent are higher than the social costs of detaining an individual without children, and thus decide to apply different thresholds to the two groups. When policymakers consider these options and others, we believe the primary role of a risk assessment tool is, as its name suggests, to estimate risk. This view, however, is at odds with requiring that algo- rithms satisfy popular fairness criteria. Such constrained algorithms typically do not reflect the best available estimates of risk, and thus implicitly conflate the statistical and policy problems.
Fair machine learning still has much left to accomplish and there are several important avenues of research that could benefit from new statistical and computational insights. From mitigating measurement error and sample bias, to understanding externalities and equilibrium effects, to eliciting and aggregating preferences to arbitrate between competing algorithms, there is much work to be done. But the benefits are equally large. When carefully designed and evaluated, statistical algorithms have the potential to dramatically improve both the efficacy and equity of consequential decisions. As these algorithms are increasingly deployed in all walks of life, it will become ever more important to ensure they are fair.
# Acknowledgments and Disclosure of Funding
We thank Guillaume Basse, Sander Beckers, Hana Chockler, Alex Chohlas-Wood, Madison Coots, Avi Feller, Josh Grossman, Joe Halpern, Jennifer Hill, Aziz Huq, David Kent, Keren Ladin, Julian Nyarko, Emma Pierson, Ravi Sojitra, and Michael Zanger-Tishler for helpful conversations. This paper is based on work by Corbett-Davies and Goel (2018) and Nil- foroshan et al. (2022). H.N was supported by a Stanford Knight-Hennessy Scholarship and an NSF Graduate Research Fellowship under Grant No. DGE-1656518. J.G was supported by a Stanford Knight-Hennessy Scholarship. R.S. was supported by the NSF Program on Fairness in AI in Collaboration with Amazon under the award âFAI: End-to-End Fairness for Algorithm-in-the-Loop Decision Making in the Public Sector,â no. IIS-2040898. S.G. was supported by a grant from the Harvard Data Science Initiative. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF or Amazon. Reproduction materials are available at https://github.com/jgaeb/measure-mismeasure.
44
The Measure and Mismeasure of Fairness
# A. Path-specific Counterfactuals
Constructing policies which satisfy path-specific fairness requires computing path-specific counterfactual values of features. In Algorithm 1, we describe the formal construction of path-specific counterfactuals ZÎ ,a,aâ², for an arbitrary variable Z (or collection of variables) in the DAG. To generate a sample Zâ Î ,a,aâ² from the distribution of ZÎ ,a,aâ², we first sample values U â for the exogenous variables. Then, in the first loop, we traverse the DAG in j topological order, setting A to a and iteratively computing values V â j of the other nodes based on the structural equations in the usual fashion. In the second loop, we set A to aâ², and then iteratively compute values Vj is computed using the structural â equation at that node, with value Vâ for each of its parents that are connected to it along a path in Î , and the value V â â
# B. Constructing Causally Fair Policies
Our aim is to identify the feasible region of expected outcomes attainable via policies which are constructed to satisfy various causal fairness constraints.
First, consider the problem of finding decision policies that maximize expected utility, subject to satisfying a given definition of causal fairness, as well as the outcome and budget constraints. Specifically, letting C denote the family of all decision policies that satisfy one of the causal fairness definitions listed above, a utility-maximizing policy dâ is given by
# dâ â arg max dâC s.t.
s.t. E[d(X) · u(X)] o1 â ϵ ⤠E[d(X) · 1α(X)=a1] ⤠o1 + ϵ o2 â ϵ ⤠E[d(X) · E[Y (1) | X]] ⤠o2 + ϵ s.t. E[d(X)] ⤠b. (13)
We prove that this optimization problem can be efficiently solved as a single linear programâin the case of counterfactual equalized odds, conditional principal fairness, coun- terfactual fairness, and path-specific fairnessâor as a series of linear programs in the case of counterfactual predictive parity.
Theorem 21 Consider the optimization problem given in Eq. (13).
1. If C is the class of policies that satisfies counterfactual equalized odds or conditional principal fairness, and the distribution of (X, Y (0), Y (1)) is known and supported on a finite set of size n, then a utility-maximizing policy constrained to lie in C can be constructed via a linear program with O(n) variables and constraints.
2. If C is the class of policies that satisfies path-specific fairness (including counterfactual fairness), and the distribution of (X, DÎ ,A,a) is known and supported on a finite set of size n, then a utility-maximizing policy constrained to lie in C can be constructed via a linear program with O(n) variables and constraints.
3. Suppose C is the class of policies that satisfies counterfactual predictive parity, that the distribution of (X, Y (1)) is known and supported on a finite set of size n, and that
45
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
# Algorithm 1: Path-specific counterfactuals
Data: G (topologically ordered), Î , a, and aâ² Result: A sample Zâ
# Î ,a,aâ² from ZÎ ,a,aâ²
# 1 Sample values {U â
j } for the exogenous variables /* Compute counterfactuals by
setting A to a 2 for j = 1, . . . , m do if Vj = A then 3 V â j â a 4 5 else 6 7 â(Vj)â â {V â â | Vâ â â(Vj)} j â fVj (â(Vj)â, U â V â j ) end 8 9 end
/* Compute counterfactuals by setting A to aâ² and propagating values
along paths in Î */
10 for j = 1, . . . , m do if Vj = A then 11 â j â aâ² V else 12 13 14 for Vk â â(Vj) do 15 16 if edge (Vk, Vj) lies on a path in Î then â k V â k â V 17 else 18 V â k â V â k 19 end 20 21 22 23 24 end end â(Vj)â â {V â V end â | Vâ â â(Vj)} â j â fVj (â(Vj)â , U â j ) 25 Zâ Î ,a,aâ² â Z â
the optimization problem in Eq. (13) has a feasible solution. Further suppose Y (1) is supported on k points, and let AÂ¥~! = {p ⬠R* | p; > 0 and ean pi = 1} be the unit (k â 1)-simplex. Then one can construct a set of linear programs L = {L(v)}ycak, with each having O(n) variables and constraints, such that the solution to one of the LPs in L£ is a utility-maximizing policy constrained to lie in C.
Before moving on to the proof of Theorem 21, we note that since the constraints of the linear programs are convex, the feasible regions in Figure 4 can be determined by solving the convex feasibility problem where we impose the additional convex constraint that the
46
/
The Measure and Mismeasure of Fairness
expected outcomesâin our admissions example, the aggregate academic index and number of admitted applicants from the target groupâlie within some distance ϵ of a given point. Performing a grid search over all points then determines the feasible regions.
Proof Let X = {x1, . . . , xm}; then, we seek decision variables di, i = 1, . . . , m, correspond- ing to the probability of making a positive decision for individuals with covariate value xi. Therefore, we require that 0 ⤠di ⤠1.
Letting pi = Pr(X = xi) denote the mass of X at xi, note that the objective function, as well as the outcome and budget constraints are all linear in the decision variables.
1. The objective function E[d(X) - u(X)] equals 77",
i=1 di · u(xi) · pi
2. The budget constraint E[d(X)] < b. constraint equals $7/",
# di- pi <b
3. The first outcome constraint 0; â ⬠< E[d(X) - La(x)=a;] < 01 + ⬠equals 0; â⬠< ery La(ei)=ar * i pi] S 01 + â¬
4. The second outcome constraint 02 â ⬠< E[d(X)-E[Y (1) | X]] < 02 +⬠equals 02 âe < , E[Y (1) | X = ai] -di- pi < 02 +â¬
We now show that each of the causal fairness definitions can be enforced via linear con- straints. We do so in three parts as listed in theorem.
First, we consider counterfactual equalized odds. A decision policy Theorem 21 Part 1. satisfies counterfactual equalized odds when D â¥â¥ A | Y (1). Since D is binary, this condition is equivalent to the expression E[d(X) | A = a, Y (1) = y] = E[d(X) | Y (1) = y] for all a â A and y â Y such that Pr(Y (1) = y) > 0. Expanding this expression and replacing d(xj) by the corresponding decision variable dj, we obtain that
m m Yodi -Pr(X =a; | A=a,Y(1) = y) Yodi -Pr(X =a; | Y(1) = y) i=l i=l
for each a â A and each of the finitely many values y â Y such that Pr(Y (1) = y) > 0. These constraints are linear in the di by inspection.
Next, we consider conditional principal fairness. A decision policy satisfies conditional principal fairness when D â¥â¥ A | Y (0), Y (1), W , where W = Ï(X) denotes a reduced set of the covariates X. Again, since D is binary, this condition is equivalent to the expression E[d(X) | A = a, Y (0) = y0, Y (1) = y1, W = w] = E[d(X) | Y (0) = y0, Y (1) = y1, W = w] for all y0, y1, and w satisfying Pr(Y (0) = y0, Y (1) = y1, W = w) > 0. As above, expanding this expression and replacing d(xj) by the corresponding decision variable dj yields linear constraints of the form
m m Yo di Pr(X =2;|A=a,S =s) = 0d Pr(X = 2% | S=s) i=l j=l
for each a â A and each of the finitely many values of S = (Y (0), Y (1), W ) such that s = (y0, y1, w) â Y Ã Y Ã W satisfies Pr(Y (0) = y0, Y (1) = y1, W = w) > 0. Again, these constraints are linear by inspection.
47
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Theorem 21 Part 2. Suppose a decision policy satisfies path-specific fairness for a given collection of paths Î and a (possibly) reduced set of covariates W = Ï(X), meaning that for every aâ² â A, E[DÎ ,A,aâ² | W ] = E[D | W ].
Recall from the definition of path-specific counterfactuals that
DÎ ,A,aâ² = fD(XÎ ,A,aâ², UD) = 1UDâ¤d(XÎ ,A,aâ² ),
where UD â¥â¥ {XÎ ,A,a, X}. Since W = Ï(X), UD â¥â¥ {XÎ ,A,a, W }, it follows that
# E[DÎ ,A,aâ² | W = w]
| Ww = w| m = > E[Draa | Xn,A = %i,W = wv): Pr(Xtaa = 2% | W = w) i=1 ân = Eltyp<axy. 4) | Xt.Ae = 01, W = wv) Pr(Xt40 = 21 | W =) i=l m = So d(Xt, A.) »Pr(Xt Aa = vi | W = w) i=l m = Yodi »Pr(Xtaa = 7% |W =w). i=1
An analogous calculation yields that E[D | W = w] = 07", di-Pr(X = x | W = w). Equating these expressions gives
m m Yodi -Pr(X =2;|W=w)= Yodi »Pr(Xtaa' = 71 | W = w) i=l i=1
for each aâ² â A and each of the finitely many w â W such that Pr(W = w) > 0. Again, each of these constraints is linear by inspection.
Theorem 21 Part 3. A decision policy satisfies counterfactual predictive parity if Y (1) â¥â¥ A | D = 0, or equivalently, Pr(Y (1) = y | A = a, D = 0) = Pr(Y (1) | D = 0) for all a â A. We may rewrite this expression to obtain:
Pr(Y (1) = y, A = a, D = 0) Pr(A = a, D = 0)
where Cy = Pr(Y (1) = y | D = 0).
Expanding the numerator on the left-hand side of the above equation yields
m Pr(Y(1) =y,A=a,D=0) vei dj]: Pr(Y(1) =y,A=a,X = 2) i=1
Similarly, expanding the denominator yields
Pr(Â¥(1) =, D =0) = So[1â di] - Pr(Â¥(1) = y, X = 2%). i=1
48
# The Measure and Mismeasure of Fairness
for each of the finitely many y â Y. Therefore, counterfactual predictive parity corresponds to
Soft = di]: Pr(Â¥(1) =y, A =4,X =a) = Cy: S[l-di-Pr(V) =y, X =a), (14) i=1 a
for each a â A and y â Y. Again, these constraints are linear in the di by inspection.
Consider the family of linear programs L = {L(v)}vââk where the linear program L(v) has the same objective function, outcome constraint, and budget constraint as before, to- gether with additional constraints for each a â A as in Eq. (14), where Cyi = vi for i = 1, . . . , k.
By assumption, there exists a feasible solution to the optimization problem in Eq. (13), so the solution to at least one program in L is a utility-maximizing policy that satisfies counterfactual predictive parity.
# C. A Stylized Example of College Admissions
In the example that we consider in Section 4.1, the exogenous variables in the DAG, U = {uA, uD, uE, uM , uT , uY }, are independently distributed as follows:
UA, UD, UY â¼ Unif(0, 1), UE, UM , UT â¼ N (0, 1).
For fixed constants µA, βE,0, βE,A, βM,0, βM,E, βT,0, βT,E, βT,M , βT,B, βT,u, βY,0, βY,D, we define the endogenous variables V = {A, E, M, T, D, Y } in the DAG by the following structural equations:
fA(uA) = a1 a0 if uA ⤠µA otherwise , fE(a, uE) = βE,0 + βE,A · 1(a = a1) + uE, fM (e, uM ) = βM,0 + βM,E · e + uM , fT (e, m, uT ) = βT,0 + βT,E · e + βT,M · m + βT,B · e · m + βT,u · uT , fD(x, uD) = 1(uD ⤠d(x)), fY (m, uY , δ) = 1(uY ⤠logitâ1(βY,0 + m + βY,D · δ)),
where logitâ1(x) = (1 + exp(âx))â1 and d(x) is the decision policy. In our example, we use constants µA = 1 3 , βE,0 = 1, βE,A = â1, βM,0 = 0, βM,E = 1, βT,0 = 50, βT,E = 4, βT,M = 4, βT,u = 7, βT,B = 1, βY,0 = â 1
# D. Proof of Theorem 10
We begin with the following simple lemma.
49
(14)
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Lemma 22 Suppose F is a subâÏ-algebra of measurable sets, and suppose X is a non- negative bounded random variable with X ⤠b a.s. Then
Var(X | F) b ⤠E[X | F].
Proof Note that since 0 ⤠X ⤠b a.s.,
E[X 2 | F] ⤠E[X | F] · b.
By Jensenâs inequality, Var(X | F) ⤠E[X 2 | F], and so, it follows that, a.s., Var(X | F) b
as desired.
Ignoring the conditioning, Lemma 22 can be interpreted as saying that the minimum of a bounded random variable cannot be too close to the mean. This fact enables us to prove Theorem 10.
Proof of Theorem 10 We wish to show that no event E of the form {r(X) ⥠t} â E â {r(X) > t} is Ï-measurable. To that end, it suffices to show that E[E | Ï(X)] /â {0, 1} with positive probability.
Since Pr(r(X) = t) = 0, 1r(X)â¥t = 1r(X)>t a.s., and so it suffices to show that E[1r(X)>t | Ï(X)] /â {0, 1} with positive probability.
Consider the set of covariates x = (xu, a) such that Ï(x) lies in the interval (t, t + ϵ), where, without loss of generality, we assume t + ϵ < 1. Since E[r(X) | Xu = xu] = Ï(x) > t, it follows immediately that for these x, Pr(r(X) > t | Xu) > 0. Let I(x) denote the essential infimum of the conditional distribution of r(X) | Ï(X) = Ï(x).
Then, we can apply Lemma 22 to r(X) â I(X), using the fact that 0 ⤠r(X) â I(X) ⤠1 a.s., to obtain that Ï(X) â I(X) > ϵ a.s. It follows that the event
{Ï(X) â (t, t + ϵ), Pr(r(X) < t | Ï(X)) > 0} (15)
has positive probability. We can conclude from this that the related event
{Ï(X) â (t, t + ϵ), Pr(r(X) < t | Ï(X)) > 0}
cannot have probability zero, since, by the tower law, we would consequently have that, a.s.,
0 = 1Ï(X)â(t,t+ϵ) · Pr(r(X) < t | Ï(X)),
and so, taking conditional expectations with respect to Ï(X), a.s.,
0 = 1Ï(X)â(t,t+ϵ) · Pr(r(X) < t | Ï(X)),
which contradicts Eq. (15).
Therefore, for x such that Ï(x) â (t, t + ϵ), Pr(r(X) < t | Xu = xu) > 0 and Pr(r(X) > t | Xu = xu) > 0. Since Pr(Ï(X) â (t, t + ϵ)) > 0, it follows that E[1r(X)>t | Ï(X)] /â {0, 1} with positive probability, as desired.
50
# a
The Measure and Mismeasure of Fairness
# E. Proof of Proposition 15
We begin by more formally defining (multiple) threshold policies. We assume, without loss of generality, that Pr(A = a) > 0 for all a â A throughout.
Definition 23 Let u(x) be a utility function. We say that a policy d(x) is a threshold policy with respect to u if there exists some t such that
d(x) = 1 u(x) > t, 0 u(x) < t,
and d(x) â [0, 1] is arbitrary if u(x) = t. We say that d(x) is a multiple threshold policy with respect to u if there exist group-specific constants ta for a â A such that
d(x) = 1 u(x) > tα(x), 0 u(x) < tα(x),
and d(x) â [0, 1] is arbitrary if u(x) = tα(x).
Remark 24 In general, it is possible for different thresholds to produce threshold policies that are almost surely equal. For instance, if u(X) â¼ Bern( 1 2 ), then the policies 1u(X)>p are almost surely equal for all p â [0, 1). Nevertheless, we speak in general of the threshold associated with the threshold policy d(X) unless there is ambiguity.
We first observe that if U is consistent modulo α, then whether a decision policy d(x) is a multiple threshold policy does not depend on our choice of u â U.
Lemma 25 Let U be a collection of utilities consistent modulo α, and suppose d : X â [0, 1] is a decision rule. If d(x) is a multiple threshold rule with respect to a utility uâ â U, then d(x) is a multiple threshold rule with respect to every u â U. In particular, if d(x) can be represented by non-negative thresholds over uâ, it can be represented by non-negative thresholds over any u â U.
Proof Suppose d(x) is represented by thresholds {tâ the thresholds {ta}aâA explicitly. a}aâA with respect to uâ. We construct
a. Then set ta = u(xâ). Now, if u(x) > ta = u(xâ) then, by consistency modulo α, uâ(x) > uâ(xâ) = tâ a. Similarly if u(x) < ta then uâ(x) < tâ a. We also note that by consistency modulo α, sign(ta) = sign(u(xâ)) = sign(uâ(xâ)) = sign(tâ
If there is no xâ â αâ1(a) such that uâ(xâ) = tâ
a, then let
ta = inf xâSa u(x)
where Sa = {x â αâ1(a) | uâ(x) > tâ by consistency modulo α, if tâ a}. Note that since sign(u(x)) = sign(uâ(x)) for all x a ⥠0, it follows that ta ⥠0 as well.
We need to show in this case also that if u(x) > ta then uâ(x) > tâ a, and if u(x) < ta a. To do so, let x â αâ1(a) be arbitrary, and suppose u(x) > ta. Then, by then uâ(x) < tâ
51
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
definition, there exists xâ² â αâ1(a) such that u(x) > u(xâ²) > ta and uâ(xâ²) > tâ uâ(x) > uâ(xâ²) > tâ by the definition of ta that uâ(x) ⤠tâ uâ(x) < tâ a.
Therefore, it follows in both cases that for x â αâ1(a), if u(x) > ta then uâ(x) > tâ if u(x) < ta then uâ(x) < tâ a. Therefore a, and
d(a) = 1 ifu(x) > tea); 0 if u(x) < tara),
i.e., d(x) is a multiple threshold policy with respect to u. Moreover, as noted above, if tâ a ⥠0 for all a â A, then ta ⥠0 for all a â A.
We now prove the following strengthening of Prop. 15.
Lemma 26 Let U be a collection of utilities consistent modulo α. Let d(x) be a feasible decision policy that is not a.s. a multiple threshold policy with non-negative thresholds with respect to U, then d(x) is strongly Pareto dominated.
Proof We prove the claim in two parts. First, we show that any policy that is not a multiple threshold policy is strongly Pareto dominated. Then, we show that any multiple threshold policy that cannot be represented with non-negative thresholds is strongly Pareto dominated.
If d(x) is not a multiple threshold policy, then there exists a u â U and aâ â A such that d(x) is not a threshold policy when restricted to αâ1(aâ) with respect to u.
We will construct an alternative policy dâ²(x) that attains strictly greater utility on αâ1(aâ) and is identical elsewhere. Thus, without loss of generality, we assume there is a single group, i.e., α(x) = aâ. The proof proceeds heuristically by moving some of the mass below a threshold to above a threshold to create a feasible policy with improved utility.
mLo(t) = E[d(X) · 1u(X)<t], mUp(t) = E[(1 â d(X)) · 1u(X)>t]. We show that there exists tâ such that mUp(tâ) > 0 and mLo(tâ) > 0. For, if not, consider Ët = inf{t â R : mUp(t) = 0}. Note that d(X) · 1u(X)>Ët = 1u(X)>Ët a.s. If Ët = ââ, then by definition d(X) = 1 a.s., which is a threshold policy, violating our assumption on d(X). If Ët > ââ, then for any tâ² < Ët, we have, by definition that mUp(tâ²) > 0, and so by hypothesis mLo(tâ²) = 0. Therefore d(X) · 1u(X)<Ët = 0 a.s., and so, again, d(X) is a threshold policy, contrary to hypothesis.
Now, with tâ as above, for notational simplicity, let mUp = mUp(tâ) and mLo = mLo(tâ) and consider the alternative policy
dâ²(x) = (1 â mUp) · d(x) u(x) < tâ, u(x) = tâ, d(x) 1 â (1 â mLo) · (1 â d(x)) u(x) > tâ.
52
# The Measure and Mismeasure of Fairness
Then it follows by construction that
E[dâ²(X)] = (1 â mUp) · mLo + E[d(X) · 1u(X)=tâ] + Pr(u(X) > tâ) â (1 â mLo) · mUp = mLo + E[d(X) · 1u(X)=tâ] + Pr(u(X) > tâ) â mUp = E[d(X) · 1u(X)<tâ] + E[d(X) · 1u(X)=tâ] + E[1u(X)>tâ] â E[(1 â d(X)) · 1u(X)>tâ] = E[d(X)] = b,
so dâ²(x) is feasible. However,
dâ²(x) â d(x) = mLo · (1 â d(x)) · 1u(x)>tâ â mUp · d(x) · 1u(x)<tâ,
and so
E[(dâ²(X) â d(X)) · u(X)] = mLo · E[(1 â d(X)) · 1u(X)>tâ · u(X)] â mUp · E[d(X) · 1u(X)<tâ · u(X)] > mLo · tâ · E[(1 â d(X)) · 1u(X)>tâ] â mUp · tâ · E[d(X) · 1u(X)<tâ] = tâ · mLo · mUp â tâ · mUp · mLo = 0.
Therefore
E[d(X) · u(X)] < E[dâ²(X) · u(X)].
It remains to show that uâ²(dâ²) > uâ²(d) for arbitrary uâ² â U. Let
tâ² = inf{uâ²(x) : dâ²(x) > d(x)}.
Note that by construction for any x, xâ² â X , if dâ²(x) > d(x) and dâ²(xâ²) < d(xâ²), then u(x) > tâ > u(xâ²). It follows by consistency modulo α that uâ²(x) ⥠tⲠ⥠uâ²(xâ²), and, moreover, that at least one of the inequalities is strict. Without loss of generality, assume uâ²(x) > tⲠ⥠uâ²(xâ²). Then, we have that u(x) > tâ if and only if uâ²(x) > tâ². Therefore, it follows that
E[(dâ²(X) â d(X)) · 1uâ²(X)>tâ²] = mUp > 0.
Since E[dâ²(X) â d(X)] = 0, we see that
E[(dâ²(X) â d(X)) · uâ²(X)] = E[(dâ²(X) â d(X)) · 1uâ²(X)>tⲠ· uâ²(X)] + E[(dâ²(X) â d(X)) · 1uâ²(X)â¤tⲠ· uâ²(X)] > tⲠ· E[(dâ²(X) â d(X)) · 1uâ²(X)>tâ²] + tⲠ· E[(dâ²(X) â d(X)) · 1uâ²(X)â¤tâ²] = tⲠ· E[dâ²(X) â d(X)] = 0,
where in the inequality we have used the fact that if dâ²(x) > d(x), uâ²(x) > tâ², and if dâ²(x) < d(x), uâ²(x) ⤠tâ². Therefore
E[d(X) · uâ²(X)] < E[dâ²(X) · uâ²(X)],
53
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
i.e., dâ²(x) strongly Pareto dominates d(x).
Now, we prove the second claim, namely, that a multiple threshold policy Ï (x) that cannot be represented with non-negative thresholds is strongly Pareto dominated. For, if Ï (x) is such a policy, then, by Lemma 25, for any u â U, E[Ï (X) · 1u(X)<0] > 0. It follows immediately that Ï â²(x) = Ï (x) · 1u(x)>0 satisfies u(Ï â²) > u(Ï ). By consistency modulo α, the definition of Ï â²(x) does not depend on our choice of u, and so u(Ï â²) > u(Ï ) for every u â U, i.e., Ï â²(x) strongly Pareto dominates Ï (x).
The following results, which draw on Lemma 26, are useful in the proof of Theorem 17. Definition 27 We say that a decision policy d(x) is budget-exhausting if
min(b, Pr(u(X) > 0)) ⤠E[d(X)] ⤠min(b, Pr(u(X) ⥠0)).
Remark 28 We note that if U is consistent modulo α, then whether or not a decision policy d(x) is budget-exhausting does not depend on the choice of u â U. Further, if Pr(u(X) = 0) = 0âe.g., if the distribution of X is U-fineâthen the decision policy is budget-exhausting if and only if E[d(X)] = min(b, Pr(u(X) > 0)).
Corollary 29 Let U be a collection of utilities consistent modulo α. If Ï (x) is a feasible policy that is not a budget-exhausting multiple threshold policy with non-negative thresholds, then Ï (x) is strongly Pareto dominated.
Proof Suppose Ï (x) is not strongly Pareto dominated. By Lemma 26, it is a multiple threshold policy with non-negative thresholds.
Now, suppose toward a contradiction that Ï (x) is not budget-exhausting. Then, either E[Ï (X)] > min(b, Pr(u(X) ⥠0)) or E[Ï (X)] < min(b, Pr(u(X) > 0)).
In the first case, since Ï (x) is feasible, it follows that E[Ï (X)] > Pr(u(X) ⥠0). It follows that Ï (x) · 1u(x)<0 is not almost surely zero. Therefore
E[Ï (X)] < E[Ï (X) · 1u(X)>0],
and, by consistency modulo α, this holds for any u â U. Therefore Ï (x) is strongly Pareto dominated, contrary to hypothesis. In the second case, consider
d(x) = θ · 1u(x)>0 + (1 â θ) · Ï (x).
Since E[Ï (X)] < min(b, Pr(u(X) > 0)) and
E[d(X)] = θ · Pr(u(X) > 0) + (1 â θ) · E[Ï (X)],
there exists some θ > 0 such that d(x) is feasible.
For that θ, a similar calculation shows immediately that u(d) > u(Ï ), and, by con- sistency modulo α, uâ²(d) > uâ²(Ï ) for all uâ² â U. Therefore, again, d(x) strongly Pareto dominates Ï (x), contrary to hypothesis.
54
The Measure and Mismeasure of Fairness
Lemma 30 Given a utility u, there exists a mapping T from [0, 1]A to [ââ, â]A taking sets of quantiles {qa}aâA to thresholds {ta}aâA such that:
1. T is monotonically non-increasing in each coordinate;
2. For each set of quantiles, there is a multiple threshold policy Ï : X â [0, 1] with thresholds T ({qa}) with respect to u such that E[Ï (X) | A = a] = qa.
Proof Simply choose
ta = inf{s â R : Pr(u(X) > s) < qa}. (16)
Then define
Pr(u(X)=ta|A=a) a goâPr(u(X)>talA=a) Pr(y(X) = ta, A =a) > 0 Paâ 0 Pr(u(X) = ta, A =a) = 0.
Note that Pr(u(X) ⥠ta | A = a) ⥠qa, since, by definition, Pr(u(X) > ta â ϵ | A = a) ⥠qa for all ϵ > 0. Therefore,
Pr(u(X) > ta | A = a) + Pr(u(X) = ta | A = a) ⥠qa,
and so pa ⤠1. Further, since Pr(u(X) > ta | A = a) ⤠qa, we have that pa ⥠0. Finally, let
d(x) = 1 u(x) > tα(x), pa u(x) = tα(x), u(x) < tα(x), 0
and it follows immediately that E[d(X) | A = a] = qa. That ta is a monotonically non- increasing function of qa follows immediately from Eq. (16).
We can further refine Cor. 29 and Lemma 30 as follows:
Lemma 31 Let u be a utility. Then a feasible policy is utility maximizing if and only if it is a budget-exhausting threshold policy. Moreover, there exists at least one utility maximizing policy.
Proof Let ¯α be a constant map, i.e., ¯α : X â ¯A, where | ¯A| = 1. Then U = {u} is consistent modulo ¯α, and so by Cor. 29, any Pareto efficient policy is a budget exhausting multiple threshold policy relative to U. Since U contains a single element, a policy is Pareto efficient if and only if it is utility maximizing. Since ¯α is constant, a policy is a multiple threshold policy relative to ¯α if and only if it is a threshold policy. Therefore, a policy is utility maximizing if and only if it is a budget exhausting threshold policy. By Lemma 30, such a policy exists, and so the maximum is attained.
55
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
# F. Prevalence and the Proof of Theorem 17
The notion of a probabilistically âsmallâ setâsuch as the event in which an idealized dart hits the exact center of a targetâis, in finite-dimensional real vector spaces, typically en- coded by the idea of a Lebesgue null set.
Here we prove that the set of distributions such that there exists a policy satisfying either counterfactual equalized odds, conditional principal fairness, or counterfactual fairness that is not strongly Pareto dominated is âsmallâ in an analogous sense. The proof turns on the following intuition. Each of the fairness definitions imposes a number of constraints. By Lemma 26, any policy that is not strongly Pareto dominated is a multiple threshold policy. By adjusting the group-specific thresholds of such a policy, one can potentially satisfy one constraint per group. If there are more constraints than groups, then one has no additional degrees of freedom that can be used to ensure that the remaining constraints are satisfied. If, by chance, those constraints are satisfied with the same threshold policy, they are not satisfied robustlyâeven a minor distribution shift, such as increasing the amount of mass above the threshold by any amount on the relevant subpopulation, will break them. Therefore, over a âtypicalâ distribution, at most |A| of the constraints can simultaneously be satisfied by a Pareto efficient policy, meaning that typically no Pareto efficient policy fully satisfies all of the conditions of the fairness definitions.
Formalizing this intuition, however, requires considerable care. In Section F.1, we give a brief introduction to a popular generalization of null sets to infinite-dimensional vector spaces, drawing heavily on a review article by Ott and Yorke (2005). In Section F.2 we provide a road map of the proof itself. In Section F.3, we establish the main hypotheses necessary to apply the notion of prevalence to a convex setâin our case, the set of U-fine distributions. In Section F.4, we establish a number of technical lemmata used in the proof of Theorem 17, and provide a proof of the theorem itself in Section F.5. In Section F.8, we show why the hypothesis of U-fineness is important and how conspiracies between atoms in the distribution of u(X) can lead to ârobustâ counterexamples.
# F.1 Shyness and Prevalence
Lebesgue measure λn on Rn has a number of desirable properties:
⢠Local finiteness: For any point v â Rn, there exists an open set U containing x such that λn[U ] < â;
⢠Strict positivity: For any open set U , if λn[U ] = 0, then U = â
; ⢠Translation invariance: For any v â Rn and measurable set E, λn[E + v] = λn[E]. No measure on an infinite-dimensional, separable Banach space, such as L1(R), can satisfy these three properties Ott and Yorke (2005). However, while there is no generalization of Lebesgue measure to infinite dimensions, there is a generalization of Lebesgue null setsâ called shy setsâto the infinite-dimensional context that preserves many of their desirable properties.
Definition 32 (Hunt et al. (1992)) Let V be a completely metrizable topological vector space. We say that a Borel set E â V is shy if there exists a Borel measure µ on V such that:
56
The Measure and Mismeasure of Fairness
1. There exists compact C â V such that 0 < µ[C] < â,
2. For all v â V , µ[E + v] = 0.
An arbitrary set F â V is shy if there exists a shy Borel set E â V containing F . We say that a set is prevalent if its complement is shy.
Prevalence generalizes the concept of Lebesgue âfull measureâ or âco-nullâ sets (i.e., sets whose complements have null Lebesgue measure) in the following sense:
Proposition 33 (Hunt et al. (1992)) Let V be a completely metrizable topological vector space. Then:
⢠Any prevalent set is dense in V ;
⢠If G â L and G is prevalent, then L is prevalent;
⢠A countable intersection of prevalent sets is prevalent;
⢠Every translate of a prevalent set is prevalent;
⢠If V = Rn, then G â Rn is prevalent if and only if λn[Rn \ G] = 0.
As is conventional for sets of full measure in finite-dimensional spaces, if some property holds for every v â E, where E is prevalent, then we say that the property holds for almost every v â V or that it holds generically in V .
Prevalence can also be generalized from vector spaces to convex subsets of vector spaces, although additional care must be taken to ensure that a relative version of Prop. 33 holds.
Definition 34 (Anderson and Zame (2001)) Let V be a topological vector space and let C â V be a convex subset completely metrizable in the subspace topology induced by V . We say that a universally measurable set E â C is shy in C at c â C if for each 1 ⥠δ > 0, and each neighborhood U of 0 in V , there is a regular Borel measure µ with compact support such that
Supp(µ) â (δ(C â c) + c) â© (U + c),
and µ[E + v] = 0 for every v â V .
We say that E is shy in C or shy relative to C if E is shy in C at c for every c â C. An arbitrary set F â V is shy in C if there exists a universally measurable shy set E â C containing F .
A set G is prevalent in C if C \ G is shy in C.
Proposition 35 (Anderson and Zame (2001)) If E is shy at some point c â C, then E is shy at every point in C and hence is shy in C.
Sets that are shy in C enjoy similar properties to sets that are shy in V .
Proposition 36 (Anderson and Zame (2001)) Let V be a topological vector space and let C â V be a convex subset completely metrizable in the subspace topology induced by V . Then:
57
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Any prevalent set in C is dense in C;
⢠If G â L and G is prevalent in C, then L is prevalent in C;
⢠A countable intersection of sets prevalent in C is prevalent in C
⢠If G is prevalent in C then G + v is prevalent in C + v for all v â V .
⢠If V = Rn and C â V is a convex subset with non-empty interior, then G â C is prevalent in C if and only if λn[C \ G] = 0.
Sets that are shy in C can often be identified by inspecting their intersections with a finite-dimensional subspace W of V , a strategy we use to prove Theorem 17.
Definition 37 (Anderson and Zame (2001)) A universally measurable subset E of a convex and completely metrizable set C is said to be k-shy in C if there exists a k-dimensional subspace W â V such that
1. A translate of the set C has positive Lebesgue measure in W , i.e., λW [C + v0] > 0 for some v0 â V ;
2. Every translate of the set E is a Lebesgue null set in W , i.e., λW [E + v] = 0 for all v â V .
Here λW denotes k-dimensional Lebesgue measure supported on W .33 We refer to such a W as a k-dimensional probe witnessing the k-shyness of E, and to an element w â W as a perturbation.
The following intuition motivates the use of probes to detect shy sets. By analogy with Fubiniâs theorem, one can imagine trying to determine whether a subset of a finite- dimensional vector space is large or small by looking at its cross sections parallel to some subspace W â V . If a set E â V is small in each cross sectionâi.e., if λW [E + v] = 0 for all v â V âthen E itself is small in V , i.e., E has λV -measure zero.
# Proposition 38 (Anderson and Zame (2001)) Every k-shy set in C is shy in C.
# F.2 Outline
To aid the reader in following the application of the theory in Section F.1 to the proof of Theorem 17, we provide the following outline of the argument.
In Section F.3 we establish the context to which we apply the notion of relative shy- ness. In particular, we introduce the vector space K consisting of the totally bounded Borel measures on the state space Kâwhere K is X Ã Y, X Ã Y Ã Y, or A Ã X A, depending on which notion of fairness is under consideration. We further isolate the subspace K â K of U-fine totally bounded Borel measures. Within this space, we are interested in the convex set Q â K, the set of U-fine joint probability distributions of, respectively, X and Y (1);
33. Note that Lebesgue measure on W is only defined up to a choice of basis; however, since λ[T (A)] = | det(T )| · λ[A] for any linear automorphism T and Lebesgue measure λ, whether a set has null measure does not depend on the choice of basis.
58
The Measure and Mismeasure of Fairness
X, Y (0), Y (1); or A and the XÎ ,A,a. Within Q, we identify E â Q, the set of U-fine distributions on K over which there exists a policy satisfying the relevant fairness definition that is not strongly Pareto dominated. The claim of Theorem 17 is that E is shy relative to Q.
To ensure that relative shyness generalizes Lebesgue null measure in the expected wayâ i.e., that Prop. 36 holdsâDefinition 34 has three technical requirements: (1) that the am- bient vector space V be a topological vector space; (2) that the convex set C be completely metrizable; and (3) that the shy set E be universally measurable. In Lemma 42, we ob- serve that K is a complete topological vector space under the total variation norm, and so is a Banach space. We extend this in Cor. 47, showing that K is also a Banach space. We use this fact in Lemma 50 to show that Q is a completely metrizable subset of K, as well as convex. Lastly, in Lemma 56, we show that the set E is closed, and therefore universally measurable.
In Section F.4, we develop the machinery needed to construct a probe W for the proof of Theorem 17 and prove several lemmata simplifying the eventual proof of the theorem. To build the probe, it is necessary to construct measures µmax,a with maximal support on the utility scale. This ensures that if any two threshold policies produce different decisions on any µ â K, they will produce different decisions on typical perturbations. The construction of the µmax,a, is carried out in Lemma 58 and Cor. 59. Next, we introduce the basic style of argument used to show that a subset of Q is shy in Lemma 62 and Lemma 63, in particular, by showing that the set of µ â Q that give positive probability to an event E is either prevalent or empty. We use then use a technical lemma, Lemma 64, to show, in effect, that a generic element of Q has support on the utility scale wherever a given fixed distribution µ â Q does. In Defn. 66, we introduce the concept of overlapping and splitting utilities, and show in Lemma 67 that this property is generic in Q unless there exists a Ï-stratum that contains no positive-utility observables x. Lastly, in Lemma 68, we provide a mild simplification of the characterization of finitely shy sets that makes the proof of Theorem 17 more straightforward.
Finally, in Section F.5, we give the proof of Theorem 17. We divide the proof into three parts. In the first part, we restrict our attention to the case of counterfactual equalized odds, and show in detail how to combine the lemmata of the previous section to construct the (at most) 2 · |A|-dimensional probe W. In the second part we consider two distinct cases. The argument in both cases is conceptually parallel. First, we argue that the balance conditions of counterfactual equalized odds encoded by Eq. (4) must be broken by a typical perturbation in W. In particular, we argue that for a given base distribution µ, there can be at most one budget-exhausting multiple threshold policy that canâalthough need not necessarilyâsatisfy counterfactual equalized odds. We show that the form of this policy cannot be altered by an appropriate perturbation in W, but that the conditional probability of a positive decision will, in general, be altered in such a way that Eq. (4) can only hold for a λW-null set of perturbations. In the final section, we lay out modifications that can be made to the proof given for counterfactual equalized odds in the first two parts that adapt the argument to the cases of conditional principal fairness and path-specific fairness. In particular, we show how to construct the probe W in such a way that the additional conditioning on the reduced covariates W = Ï(X) in Eqs. (5) and (8) does not affect the argument.
59
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
# F.3 Convexity, Complete Metrizability, and Universal Measurability
In this section, we establish the background requirements of Prop. 38 for the setting of In particular, we exhibit the U-fine distributions as a convex subset of a Theorem 17. topological vector space, the set of totally bounded U-fine Borel measures. We show that the U-fine probability distributions form a completely metrizable subset in the topology it inherits from the space of totally bounded measures. Lastly, we show that the set of regular distributions under which there exists a Pareto efficient policy satisfying one of the three fairness criteria is closed, and therefore universally measurable.
# F.3.1 Background and notation
We begin by establishing some notational conventions. We let K denote the underlying state space over which the distributions in Theorem 17 range. Specifically, K = X Ã Y in the case of counterfactual equalized odds; K = X ÃY ÃY in the case of conditional principal fairness; and K = A Ã X A in the case of path-specific fairness. We note that since X â Rk for some k and Y â R, K may equivalently be considered a subset of Rn for some n â N, with the subspace topology (and Borel sets) inherited from Rn.34
We recall the definition of totally bounded measures.
Definition 39 Let M be a Ï-algebra on V , and let µ be a countably additive (V, M)- measure. Then, we define
(u|[E] = sup S > |u| (17) i=l
where the supremum is taken over all countable partitions {E;}ien, i.e., collections such that U2, E; = E and E,0 E; =9 for j #i. We call |u| the total variation of js, and the total variation norm of pu is |s.|[V].
We say that µ is totally bounded if its total variation norm is finite, i.e., |µ|[V ] < â.
Lemma 40 If µ is totally bounded, then |µ| is a finite positive measure on (V, M), and |µ[E]| ⤠|µ|[E] for all E â M.
See Theorem 6.2 in Rudin (1987) for proof. We let K denote the set of totally bounded Borel measures on K. We note that, in the case of path specific fairness, which involves the joint distributions of counterfactuals, X is not defined directly. Rather, the joint distribution of the counterfactuals XÎ ,A,aâ² and A defines the distribution of X through consistency, i.e., what would have happened to someone if their group membership were changed to aâ² â A is what actually happens to them if their group membership is aâ². More formally, Pr(X â E | A = aâ²) = Pr(XÎ ,A,aâ² â E | A = aâ²) for all Borel sets E â X . (See § 3.6.3 in Pearl (2009b).)
For any ps ⬠K, we adopt the following notational conventions. If we say that a property holds ji-a.s., then the subset of K on which the property fails has ||-measure zero. If FE C K is a measurable set, then we denote by ju |g the restriction of 4 to E, i.e., the measure defined by the mapping BEâ > p[EN Eâ]. We let E,,[f] = fy. f dy, and for measurable sets
34. In the case of path-specific fairness, we can equivalently think of A as a set of integers indexing the groups.
60
The Measure and Mismeasure of Fairness
E, Pr,(E) = w[E].° The fairness criteria we consider involve conditional independence relations. To make sense of conditional independence relations more generally, for Borel measurable f we define E,[f | F] to be the Radon-Nikodym derivative of the measure E+ E,[f +1] with respect to the measure p restricted to the sub-o-algebra of Borel sets F. (See § 34 in Billingsley (1995).) Similarly, we define E,[f | g] to be E,[f | o(g)], where o(g) denotes the sub-o-algebra of the Borel sets generated by g. In cases where the condition can occur with non-zero probability, we can instead make use of the elementary definition of discrete conditional probability.
Lemma 41 Let g be a Borel function on K, and suppose Prµ(g = c) ̸= 0 for some constant c â R. Then, we have that µ-a.s., for any Borel function f ,
Eµ[f | g] · 1g=c = Eµ[f · 1g=c] Prµ(g = c) · 1g=c.
See Rao (2005) for proof. With these notational conventions in place, we turn to establishing the background
conditions of Prop. 38.
Lemma 42 The set of totally bounded measures on a measure space (V, M) form a complete topological vector space under the total variation norm, and hence a Banach space.
See, e.g., Steele (2019) for proof. It follows from this that K is a Banach space.
Remark 43 Since K is a Banach space, it possesses a topology, and consequently a col- lection of Borel subsets. These Borel sets are to be distinguished from the Borel subsets of the underlying state space K, which the elements of K measure. The requirement that the subset E of the convex set C be universally measurable in Proposition 38 is in reference to the Borel subsets of K; the requirement that µ â K be a Borel measure is in reference to the Borel subsets of K.
Recall the definition of absolute continuity.
Definition 44 Let µ and ν be measures on a measure space (V, M). We say that a measure ν is absolutely continuous with respect to µâalso written ν à µâif, whenever µ[E] = 0, ν[E] = 0.
Absolute continuity is a closed property in the topology induced by the total variation norm.
Lemma 45 Consider the space of totally bounded measures on a measure space (V, M) and fix µ. The set of ν such that ν à µ is closed.
35. To state and prove our results in a notationally uniform way, we occasionally write Prµ(E) even when µ ranges over measures that may not be probability measures.
61
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Proof Let {νi}iâN be a convergent sequence of measures absolutely continuous with respect to µ. Let the limit of the νi be ν. We seek to show that ν à µ. Let E â M be an arbitrary set such that µ[E] = 0. Then, we have that
ν[E] = lim nââ = lim nââ νi[E] 0 = 0,
since νi à µ for all i. Since E was arbitrary, the result follows.
Recall the definition of a pushforward measure.
Definition 46 Let f : (V,M) > (Vâ,Mâ) be a measurable function. Let be a measure on V. We define the pushforward measure jo f~! on Vâ by the map E' +> p[fâ!(Eâ)| for Be M.
Within K, in the case of counterfactual equalized odds and conditional principal fairness, we define the subspace K to be the set of totally bounded measures µ on K such that the pushforward measure µ ⦠uâ1 is absolutely continuous with respect to the Lebesgue measure λ on R for all u â U. By the Radon-Nikodym theorem, these pushforward measures arise from densities, i.e., for any µ â K, there exists a unique fµ â L1(R) such that for any measurable subset E of R, we have
µ ⦠uâ1[E] = fµ dλ. E
In the case of path-specific fairness, we require the joint distributions of the counterfactual utilities to have a joint density. That is, we define the subspace K to be the set of totally bounded measures µ on K such that the pushforward measure µ ⦠(uA)â1 is absolutely continuous with respect to Lebesgue measure on RA for all u â U. Here, we recall that
ut. (4, (arate) 4 (u(taâ) area:
As before, there exists a corresponding density fµ â L1(RA).
We therefore see that K extends in a natural way the notion of a U- or U A-fine distri- bution, and so, by a slight abuse of notation, refer to K as the set of U-fine measures on K.
Indeed, since Pr,(u(X) ⬠E,A = a) < Pr,(u(X) ⬠£), it also follows that, for a ¢ A such that Pr,(A = a) > 0, the conditional distributions of u(X) | A = a are also absolutely continuous with respect to Lebesgue measure, and so also have densities. For notational convenience, we set f,,,q to be the function satisfying
Prµ(u(X) â E, A = a) = fµ,a dλ, E
# so that f=
aâA fµ,a.
Since absolute continuity is a closed condition, it follows that K is a closed subspace of K. This leads to the following useful corollary of Lemma 45.
62
# a
The Measure and Mismeasure of Fairness
Corollary 47 The collection of U-fine measures on K is a Banach space.
Proof It is straightforward to see that K is a subspace of K. Since K is a closed subset of K by Lemma 45, it is complete, and therefore a Banach space.
We note the following useful fact about elements of K.
Lemma 48 Consider the mapping ++ f, from K to L(R) given by associating a measure with the Radon-Nikodym derivative of the pushforward measure 10 u-!. This mapping is continuous. Likewise, the mapping +> fya is continuous for alla ⬠A, and, in the case of path-specific fairness, the mapping of to the Radon-Nikodym derivative of wo (uA)! is continuous.
Proof We show only the first case. The others follow by virtually identical arguments. Let ϵ > 0 be arbitrary. Choose µ â K, and suppose that |µ â µâ²|[K] < ϵ. Then, let
EUp = {x â R : fµ(x) > fµâ²(x)} ELo = {x â R : fµ(x) < fµâ²(x)}.
Then EUp and ELo are disjoint, so we have that
Le furan = [fo tetra] +] Su Sura = [C= Whe (BPYI + [Cu w!Vlw CB) <6,
where the second equality follows by the definition of pushforward measures and the in- equality follows from Lemma 40. Since ϵ was arbitrary, the claim follows.
Finally, we define Q. We let Q be the subset of K consisting of all U-fine probability measures, i.e., measures µ â K such that:
1. The measure µ is U-fine;
2. For all Borel sets E â K, µ[E] ⥠0;
3. The measure of the whole space is unity, i.e., µ[K] = 1.
We conclude the background and notation by observing that threshold policies are de- fined wholly by their thresholds for distributions in K and Q. Importantly, this observation does not hold when there are atoms on the utility scaleâwhich measures in K lackâwhich can in turn lead to counterexamples to Theorem 17; see Appendix F.8.
Lemma 49 Let Ï0(x) and Ï1(x) be two multiple threshold policies. If Ï0(x) and Ï1(x) have the same thresholds, then for any µ â K, Ï0(X) = Ï1(X) µ-a.s. Similarly, for µ â Q, if
Eµ[Ï0(X) | A = a] = Eµ[Ï1(X) | A = a]
63
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
for all a â A such that Prµ(A = a) > 0, then Ï0(X) = Ï1(X) µ-a.s.
Moreover, for µ â K in the case of path-specific fairness, if Ï0(x) and Ï1(x) have the same thresholds, then Ï0(XÎ ,A,a) = Ï1(XÎ ,A,a) µ-a.s. for any a â A. Similarly, for µ â Q in the case of path-specific fairness, if
Eµ[Ï0(XÎ ,A,a)] = Eµ[Ï1(XÎ ,A,a)]
then Ï0(XÎ ,A,a) = Ï1(XÎ ,A,a) µ-a.s. as well.
Proof First, we show that threshold policies with the same thresholds are equal, then we show that threshold policies that distribute positive decisions across groups in the same way are equal.
Let {ta}aâA denote the shared set of thresholds. It follows that if Ï0(x) ̸= Ï1(x), then u(x) = tα(x). Now,
# ta
Pr(u(X) = ta, A = a) = fµ,a dλ = 0, ta
so Prµ(Ï0(X) ̸= Ï1(X)) = 0. Next, suppose
Eµ[Ï0(X) | A = a] = Eµ[Ï1(X) | A = a].
If the thresholds of the two policies agree for all a â A such that Prµ(A = a) > 0, then we are done by the previous paragraph. Therefore, suppose t0 a for some suitable a â A, where ti a represents the threshold for group a â A under the policy Ïi(x). Without loss of generality, suppose t0
th s Fua dA = E,.[70(X) | A=al- E,{71(X) | A=a] = 0.
Since µ â Q, µ = |µ|, whence
Pr|µ|(ta 0 ⤠u(X) ⤠t1 a | A = a) = 0.
Since this is true for all a â A such that Prµ(A = a) > 0, Ï0(X) = Ï1(X) µ-a.s. The proof in the case of path-specific fairness is almost identical.
# F.3.2 Convexity, complete metrizability, and universal measurability
The set of regular U-fine probability measures Q is the set to which we wish to apply Prop. 38. To do so, we must show that Q is a convex and completely metrizable subset of K.
Lemma 50 The set of regular probability measures Q is convex and completely metrizable.
64
# a
# The Measure and Mismeasure of Fairness
Proof The proof proceeds in two pieces. First, we show that the U-fine probability distri- butions are convex, as can be verified by direct calculation. Then, we show that Q is closed and therefore complete in the original metric of K.
We begin by verifying convexity. Let µ, µⲠâ Q and let E â K be an arbitrary Borel subset of K. Then, choose θ â [0, 1], and note that
(θ · µ + [1 â θ] · µâ²)[E] = θ · µ[E] + [1 â θ] · µâ²[E] ⥠θ · 0 + [1 â θ] · 0 = 0,
and, likewise, that
(θ · µ + [1 â θ] · µâ²)[K] = θ · µ[K] + [1 â θ] · µâ²[K] = θ · 1 + [1 â θ] · 1 = 1.
It remains only to show that Q is completely metrizable. To prove this, it suffices to show that it is closed, since closed subsets of complete spaces are complete, and K is a Banach space by Cor. 47, and therefore complete.
Suppose {µi}iâN is a convergent sequence of probability measures in K with limit µ.
# Then
µ[E] = lim iââ µi[E] ⥠lim iââ 0 = 0
and
µ[K] = lim iââ µi[K] = lim iââ 1 = 1.
Therefore Q is closed, and therefore complete, and hence is a convex, completely metrizable subset of K.
Next we prove that the set E of regular U-fine densities over which there exists a policy satisfying the relevant counterfactual fairness definition that is not strongly Pareto domi- nated is universally measurable.
Recall the definition of universal measurability.
Definition 51 Let V be a complete topological space. Then E â V is universally measur- able if V is measurable by the completion of every finite Borel measure on V , i.e., if for every finite Borel measure µ, there exist Borel sets Eâ² and S such that E â³ Eâ² â S and µ[S] = 0.
We note that if a set is Borel, it is by definition universally measurable. Moreover, if a set is open or closed, it is by definition Borel.
To show that E is closed, we show that any convergent sequence in E has a limit in E. The technical complication of the argument stems from the following fact that satisfying the fairness conditions, e.g., Eq. (7), involves conditional expectations, about which very little can be said in the absence of a density, and which are difficult to compare when taken across distinct measures.
65
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
To handle these difficulties, we begin with a technical lemma, Lemma 55, which gives a coarse bound on how different the conditional expectations of the same variable can be with respect to a subâÏ-algebra F over two different distributions, µ and µâ², before applying the results to the proof of Lemma 56.
Definition 52 Let µ be a measure on a measure space (V, M), and let f be µ-measurable. Consider the equivalence class of M-measurable functions C = {g : g = f µ-a.e.}.36 We say that any g â C is a version of f , and that g â C is a standard version if g(v) ⤠C for some constant C and all v â V .
Remark 53 It is straightforward to see that for f â Lâ(µ), a standard version always exists with C = â¥f â¥â.
Remark 54 Note that in general, the conditional expectation Eµâ²[f | F] is defined only µâ²-a.e. If µ is not assumed to be absolutely continuous with respect to µâ², it follows that
â¥Eµ[f | F] â Eµâ²[f | F]â¥L1(µ) (18)
is not entirely well-defined, in that its value depends on what version of Eµâ²[f | F] one chooses. For appropriate f , however, one can nevertheless bound Eq. (18) for any standard version of Eµâ²[f | F].
Lemma 55 Let µ, µⲠbe totally bounded measures on a measure space (V, M). Let f â Lâ(µ) â© Lâ(µâ²). Let F be a subâÏ-algebra of M. Let
C = max(â¥f â¥Lâ(µ), â¥f â¥Lâ(µâ²)).
Then, if g is a standard version of Eµâ²[f | F], we have that
|Eµ[f | F] â g| dµ ⤠4C · |µ â µâ²|[V ]. V (19)
Proof First, we note that both Eµ[f | F] and g are F-measurable. Therefore, the sets
EUp = {v â V : Eµ[f | F](v) > g(v)}
and
ELo = {v â V : Eµ[f | F](v) < g(v)}
are in F. Now, note that
[ier Fl- slaw = [ Exlf|Flâgdu | 9g âE,lf | Flap. Vv EUp ELo
36. Some authors define Lp(µ) spaces to consist of such equivalence classes, rather than the definition we use here.
66
# The Measure and Mismeasure of Fairness
First consider EUp. Then, we have that
| Edlf|F)âgdu= [ Eulf|Flâgdu+ | gâ 94 EUp EUp EUp <| ff eelt Flan f oay!| +f galnâ nl EUp EUp Eu <|/ fa f f dy! +f Cdluâp', Up EUp Eup
where in the final inequality, we have used the fact that, since g is a standard version of Eµâ²[f | F],
g(v) ⤠â¥Eµâ²[f | F]â¥Lâ(µâ²) ⤠C
for all v â V , and the fact that, by the definition of conditional expectation,
[eon Fla f nav E E
for any E â F.
Since f is everywhere bounded by C, applying Lemma 40 yields that this final expression is less than or equal to 2C · |µ â µâ²|[V ]. An identical argument shows that
g â Eµ[f | F] dµ ⤠2C · |µ â µâ²|[V ], ELo
whence the result follows.
Lemma 56 Let E â Q denote the set of joint densities on K such that there exists a policy satisfying the relevant fairness definition that is not strongly Pareto dominated. Then, E is closed, and therefore universally measurable.
Proof For notational simplicity, we consider the case of counterfactual equalized odds. The proofs in the other two cases are virtually identical.
Suppose µi â µ in K, where {µi}iâN â E. Then, by Lemma 48, fµi,a â fµ,a in L1(R). Moreover, by Lemma 26, there exists a sequence of threshold policies {Ïi(x)}iâN such that both
Eµi[Ï (X)] = min(b, Prµi(u(X) > 0))
and
Eµi[Ïi(X) | A, Y (1)] = Eµi[Ïi(X) | Y (1)].
Let {qa,i}aâA be defined by
qa,i = Eµi[Ïi(X) | A = a]
if Prµi(A = a) > 0, and qa,i = 0 otherwise.
67
# a
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Since [0, 1]A is compact, there exists a convergent subsequence {{qa,ni}aâA}iâN. Let it converge to the collection of quantiles {qa}aâA defining, by Lemma 30, a multiple threshold policy Ï (x) over µ.
Because µi â µ and {qa,ni}aâA â {qa}aâA, we have that
Eµ[Ïa,ni(X) | A = a] â Eµ[Ï (X) | A = a] for all a â A such that Prµ(A = a) > 0. Therefore, by Lemma 48, Ïni(X) â Ï (X) in L1(µ).
Choose ϵ > 0 arbitrarily. Then, choose N so large that for i greater than N ,
|µ â µni|[K] < ϵ 10 , â¥Ï (X) â Ïni(X)â¥L1(µ) ⤠ϵ 10 .
Then, observe that Ï (x), Ïi(x) ⤠1, and recall that
[Ïni(X) | Y (1)]. (20)
Eµni [Ïni(X) | A, Y (1)] = Eµni Therefore, let gi(x) be a standard version of Eµni gi(x) is also a standard version of Eµni have that
[Ïni(X) | Y (1)] over µni. By Eq. (20), [Ïni(X) | A, Y (1)] over µni. Then, by Lemma 55, we
â¥Eµ[Ï (X) | A, Y (1)] â Eµni [Ïni(X) | Y (1)]â¥L1(µ) ⤠â¥Eµ[Ï (X) | A, Y (1)] â Eµ[Ïni(X) | A, Y (1)]â¥L1(µ) < + â¥Eµ[Ïni(X) | A, Y (1)] â gi(X)â¥L1(µ) + â¥gi(X) â Eµ[Ïni(X) | Y (1)]â¥L1(µ)â¥L1(µ) + â¥Eµ[Ïni(X) | Y (1) â Eµ[Ï (X) | Y (1)]â¥L1(µ) ϵ 10 4ϵ 10 4ϵ 10 ϵ 10 + + + .
Since ϵ > 0 was arbitrary, it follows that, µ-a.e.,
Eµ[Ï (X) | A, Y (1)] = Eµ[Ï (X) | Y (1)].
Recall the standard fact that for independent random variables X and U ,
E[f (X, U ) | X] = f (X, u) dFU (u),
where FU is the distribution of U .37 Further recall that D = 1UDâ¤Ï (X), where UD â¥â¥ X, Y (1). It follows that
Pr,(D=1|X,Y(1)) = [ dy j<r(x) dA(ua) = T(X).
Hence, by the law of iterated expectations,
Prµ(D = 1 | A, Y (1)) = Eµ[Prµ(D = 1 | X, Y (1)) | A, Y (1)] = Eµ[Ï (X) | A, Y (1)] = Eµ[Ï (X) | Y (1)] = Eµ[Prµ(D = 1 | X, Y (1)) | Y (1)] = Prµ(D = 1 | Y (1)).
37. For a proof of this fact see, e.g., Brozius (2019).
68
The Measure and Mismeasure of Fairness
Therefore D â¥â¥ A | Y (1) over µ, i.e., counterfactual equalized odds holds for the decision policy Ï (x) over the distribution µ. Consequently µ â E, and so E is closed and therefore universally measurable.
# F.4 Shy Sets and Probes
We require a number of additional technical lemmata for the proof of Theorem 17. The probe must be constructed carefully, so that, on the utility scale, an arbitrary element of Q is absolutely continuous with respect to a typical perturbation. In addition, it is useful to show that a number of properties are generic to simplify certain aspects of the proof of Theorem 17. For instance, Lemma 63 is used in Theorem 17 to show that a certain conditional expectation is generically well-defined, avoiding the need to separately treat certain corner cases.
Cor. 59 concerns the construction of the probe used in the proof of Theorem 17. Lem- mata 64 to 68 use Cor. 59 to provide additional simplifications to the proof of Theorem 17.
# F.4.1 Maximal support
First, to construct the probe used in the proof of Theorem 17, we require elements µ â Q such that the densities fµ have âmaximalâ support. To produce such distributions, we use the following measure-theoretic construction.
Definition 57 Let {E}yer be an arbitrary collection of u-measurable sets for some posi- tive measure 4 on a measure space (M,M). We say that E is the measure-theoretic union of {Ey}yer if w[Ey\ E| =0 for ally â¬T and E =U, Ey, for some countable subcollection {vi} CN.
While measure-theoretic unions themselves are known (cf. Silva (2008), Rudin (1991)), for completeness, we include a proof of their existence, which, to the best of our knowledge, is not found in the literature.
Lemma 58 Let µ be a finite positive measure on a measure space (V, M). Then an arbi- trary collection {Eγ}γâÎ of µ-measurable sets has a measure-theoretic union.
Proof For each countable subcollection Îâ² â Î, consider the âerror termâ
r(Iâ) = supp | E, \ U Ey ye yerâ
We claim that the infimum of r(Îâ²) over all countable subcollections Îâ² â Î must be zero. For, toward a contradiction, suppose it were greater than or equal to ϵ > 0. Choose any set Eγ1 such that µ[Eγ1] ⥠ϵ. Such a set must exist, since otherwise r(â
) < ϵ. Choose Eγ2 such that µ[Eγ2 \ Eγ1] > ϵ. Again, some such set must exist, since otherwise r({γ1}) < ϵ. Continuing in this way, we construct a countable collection {Eγi}iâN.
69
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Therefore, we see that
i=l HV] > f e,| =r lE.\ UE, i=l j=l
By construction, every term in the final sum is greater than or equal to ϵ, contradicting the fact that µ[V ] < â.
Therefore, there exist countable collections {În}nâN such that r(În) < 1
n . It follows immediately that for all n
r (U ry) <r(Vx) neN
for any fixed k â N. Consequently,
r (U a) =0, ne
# r
# and
nâN În is countable.
The construction of the âmaximalâ elements used to construct the probe in the proof of Theorem 17 follows as a corollary of Lemma 58
Corollary 59 There are measures µmax,a â Q such that for every a â A and any µ â K,
λ[Supp(fµ,a) \ Supp(fµmax,a)] = 0.
Proof Consider the collection {Supp(fµ,a)}µâK. By Lemma 58, there exists a countable collection of measures {µi}iâN such that for any µ â K,
| SUPP(fy,a) \ U SupP(fy;,a)| = 9, i=l
where, without loss of generality, we may assume that λ[Supp(fµi,a)] > 0 for all i â N. Such a sequence must exist, since, by the first hypothesis of Theorem 17, for every a â A, there exists µ â Q such that Prµ(A = a) > 0. Therefore, we can define the probability measure µmax,a, where
n 5 (Me baal Q*. or Hmaxa » lai Taal IK]
It follows immediately by construction that
SUPP(fumaca) = LJ SUPP fsa); i=l
and that µmax,a â Q.
70
# a
|
The Measure and Mismeasure of Fairness
For notational simplicity, we refer to Supp(fµmax,a) as Sa throughout. In the case of conditional principal fairness and path-specific fairness, we need a mild
refinement of the previous result that accounts for Ï.
Corollary 60 There are measures µmax,a,w â Q defined for every w â W = Img(Ï) and any a â A such that for some ν â K, Prν(W = w, A = a) > 0. These measures have the property that for any µ â K,
λ[Supp(fµâ²,a,w) \ Supp(fµmax,a,w)] = 0,
where fµâ²,a,w is the density of the pushforward measure (µⲠâ¾
\yow,A=a) © ut.
Recalling that | Img(Ï)| < â, the proof is the same as Cor. 59, and we analogously refer to Supp(fµmax,a,w ) as Sa,w. Here, we have assumed without loss of generalityâas we continue to assume in the sequelâthat for all w â W, there is some µ â K such that Prµ(W = w) > 0.
Remark 61 Because their support is maximal, the hypotheses of Theorem 17, in addition to implying that µmax,a is well-defined for all a â A, also imply that Prµmax,a(u(X) > 0) > 0. In the case of conditional principal fairness, they further imply that Prµmax,a(W = w) > 0 for all w â W and a â A. Likewise, in the case of path-specific fairness, they further imply that Prµmax,a(W = wi) > 0 for i = 0, 1 and some a â A.
# F.4.2 Shy sets and probes
In the following lemmata, we demonstrate that a number of useful properties are generic in Q. We also demonstrate a short technical lemma, Lemma 68, which allows us to use these generic properties to simplify the proof of Theorem 17.
We begin with the following lemma, which is useful in verifying that certain subspaces of K form probes.
Lemma 62 Let W be a non-trivial finite dimensional subspace of K such that ν[K] = 0 for all ν â W. Then, there exists µ â K such that λW[Q â µ] > 0.
# Proof Set
i= > i| [vil]
where ν1, . . . , νn form a basis of W. Then, if |βi| ⤠1
|νi|[K] , it follows that
where 11,...,Y form a basis of W. Then, if |8;| < Tae) it follows that
u+> Bi -% EQ i=l
Since
â 1 1 An _ ? 0, 1 ce mill .
71
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
it follows that λW[Q â µ] > 0.
Next we show that, given a ν â Q, a generic element of Q âseesâ events to which ν assigns non-zero probability. While Lemma 65 alone in principle suffices for the proof of Theorem 17, we include Lemma 63 both for conceptual clarity and to introduce at a high level the style of argument used in the subsequent lemmata and in the proof of Theorem 17 to show that a set is shy relative to Q.
Lemma 63 For a Borel set E â K, suppose there exists ν â Q such that ν[E] > 0. Then the set of µ â Q such that µ[E] > 0 is prevalent.
Proof First, we note that the set of µ â Q such that µ[E] = 0 is closed and therefore universally measurable. For, if {µi}iâN â Q is a convergent sequence with limit µ, then
µ[E] = lim nââ = lim nââ µi[E] 0 = 0.
Now, if µ[E] > 0 for all µ â Q, there is nothing to prove. Therefore, suppose that there exists νⲠâ Q such that νâ²[E] = 0.
Next, consider the measure Ëν = νⲠâ ν. Then, let W = Span(Ëν). Since Ëν ̸= 0 and
Ëν[K] = νâ²[K] â ν[K] = 0,
it follows by Lemma 62 that λW[Q â µ] > 0 for some µ.
Now, for arbitrary µ â Q, note that if (µ + β · Ëν)[E] = 0, then
µ[E] â β · ν[E] = 0
i.e.,
β = µ[E] ν[E] .
A singleton has null Lebesgue measure, and so the set of ν â W such that (µ + ν)[E] = 0 is λW-null. Therefore, by Prop. 38, the set of µ â Q such that µ[E] = 0 is shy relative to Q, as desired.
While Lemma 63 shows that a typical element of Q âseesâ individual events, in the proof of Theorem 17, we require a stronger condition, namely, that a typical element of Q âseesâ certain uncountable collections of events. To demonstrate this more complex property, we require the following technical result, which is closely related to the real analysis folk theorem that any convergent uncountable âsumâ can contain only countably many non-zero terms. (See, e.g., Benji (2020).)
72
# a
The Measure and Mismeasure of Fairness
Lemma 64 Suppose µ is a totally bounded measure on (V, M), f and g are µ-measurable real-valued functions, and g ̸= 0 µ-a.e. Then the set
Zβ = {v â V : f (v) + β · g(v) = 0}
has non-zero µ measure for at most countably many β â R.
Proof First, we show that for any countable collection {G;}ienw C R, the sum 7°, 1[Zg,] converges. Then, we show how this implies that [Z3] = 0 for all but countably many BER.
First, we note that for distinct β, βⲠâ R,
Zβ â© ZβⲠâ {v â V : (β â βâ²) · g(v) = 0}.
Now, by hypothesis,
µ[{v â V : g(v) = 0}] = 0,
and since β â βⲠ̸= 0, it follows that
µ[{v â V : (β â βâ²) · g(v) = 0}] = 0
as well. Consequently, it follows that if {Zβi}iâN is a countable collection of distinct elements of R, then
Yo ulZe) =4|U Ze, i=l i=1 < pV]
To see that this implies that µ[Zβ] > 0 for only countably many β â R, let Gϵ â R consist of those β such that µ[Zβ] ⥠ϵ. Then Gϵ must be finite for all ϵ > 0, since otherwise we could form a collection {βi}iâN â Gϵ, in which case
foe) foe) YS HlZa,] > Soe = 0, i=1 i=l
contrary to what was just shown. Therefore,
loo} {GB ER: n[Z5] > 0} = Gin i=l
is countable.
We now apply Lemma 64 to the proof of the following lemma, which states, informally, that, under a generic element of Q, u(X) is supported everywhere it is supported under some particular fixed element of Q. For instance, Lemma 64 can be used to show that for a generic element of Q, the density of u(X) | A = a is positive λ â¾
73
# a
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Lemma 65 Let ν â Q and suppose ν is supported on E, i.e., ν[K \ E] = 0. Then the set of µ â Q such that ν ⦠uâ1 à (µ â¾
Lemma 65 states, informally, that for generic µ â Q, fµ â¾ E is supported everywhere fν
is supported. Proof We begin by showing that the set of µ â Q such that ν ⦠uâ1 à (µ â¾ E) ⦠uâ1 is Borel, and therefore universally measurable. Then, we construct a probe W and use it to show that this collection is finitely shy.
To begin, let Uq denote the set of µ â Q such that
ν ⦠uâ1[{|fµ â¾ E | = 0}] < q.
We note that Uq is open. For, if µ â Uq, then there exists some r > 0 such that
ν ⦠uâ1[{|fµ â¾ E | < r}] < q.
Let
ϵ = q â ν ⦠uâ1[{|fµ â¾ E | < r}].
Now, since ν ⦠uâ1 à λ, there exists a δ such that if λ[Eâ²] < δ, then ν ⦠uâ1[Eâ²] < ϵ. Choose µⲠarbitrarily so that |µ â µâ²|[K] < δ · r. Then, by Markovâs inequality, we have that
λ[{|fµ â¾ E â fµⲠ⾠E | > r}] < δ,
i.e.,
E | > r}] < ϵ. Now, we note that by the triangle inequality, wherever |fµⲠ⾠|fµ â¾
E â fµⲠ⾠E | > r. Therefore E | = 0, either |fµ â¾ E | < r or
λ[{|fµⲠ⾠E | = 0}] ⤠ν ⦠uâ1[{|fµ â¾ E â fµⲠ⾠E | > r}] + µ ⦠uâ1[{|fµ â¾ E | < r] < ϵ + µ ⦠uâ1[{|fµ â¾ < q. E | < r]
We conclude that µⲠâ Uq, and so Uq is open.
Note that ν ⦠uâ1 à (µ â¾ E) ⦠uâ1 if and only if
λ[Supp(fν) \ Supp(fµ ⾠E )] = 0.
By the definition of the support of a function, λ â¾Supp(fµ) à µ ⦠uâ1. Therefore, it follows that
λ[Supp(fµ) \ Supp(fν ⾠E )] = 0
if and only if
µ ⦠uâ1[Supp(fµ) \ Supp(fν â¾ E )] = 0.
Then, it follows immediately that the set of v ⬠Q such that pou! « (v Ip)° u-! is ayant U1,/;, which is, by construction, Borel, and therefore universally measurable.
74
# The Measure and Mismeasure of Fairness
Now, since
t Pr, (u(X) <t) = / fy dr â0o
is a continuous function of t, by the intermediate value theorem, there exists t such that Prν(u(X) â S0) = Prν(u(X) â S1), where S0 = Supp(fν) â© (ââ, t) and S1 = Supp(fν) â© [t, â). Then, we let
Ëν[Eâ²] = Eâ² 1uâ1(S0) â 1uâ1(S1) dν.
Take W = Span(Ëν). Since Ëν ̸= 0 and Ëν[K] = 0, we have by Lemma 62 that λW[Qâµ] > 0 for some µ.
By the definition of a density, fËν is positive (Ëν â¦uâ1)-a.e. Consequently, by the definition of Ëν, fËν is non-zero (µ ⦠uâ1)-a.e. Therefore, by Lemma 64, there exist only countably many β â R such that the density of (µ+β · Ëν)â¦uâ1 equals zero on a set of positive µâ¦uâ1-measure. Since countable sets have λ-measure zero and ν is arbitrary, the set of µ â Q such that ν ⦠uâ1 à (µ â¾
The following definition and technical lemma are needed to extend Theorem 17 to the cases of conditional principal fairness and path-specific fairness, which involve additional conditioning on W = Ï(X). In particular, one corner case we wish to avoid in the proof of Theorem 17 is when the decision policy is non-trivial (i.e., some individuals receive a positive decision and others do not) but from the perspective of each Ï-stratum, the policy is trivial (i.e., everyone in the stratum receives a positive or negative decision). Definition 66 formalizes this pathology, and Lemma 67 shows that this issueâunder a mild hypothesisâ does not arise for a generic element of Q.
Definition 66 We say that µ â Q overlaps utilities when, for any budget-exhausting mul- tiple threshold policy Ï (x), if
0 < Eµ[Ï (X)] < 1,
then there exists w â W such that
0 < Eµ[Ï (X) | W = w] < 1.
If there exists a budget-exhausting multiple threshold policy Ï (x) such that
0 < Eµ[Ï (X)] < 1,
but for all w â W,
Eµ[Ï (X) | W = w] â {0, 1},
then we say that Ï (x) splits utilities over µ.
Informally, having overlapped utilities prevents a budget-exhausting threshold policy from having thresholds that fall on the utility scale exactly between the strata induced by Ïâi.e., a threshold policy that splits utilities. This is almost a generic condition in Q, as we shown in Lemma 67.
75
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Lemma 67 Let 0 < b < 1. Suppose that for all w â W there exists µ â Q such that Prµ(u(X) > 0, W = w) > 0. Then almost every µ â Q overlaps utilities.
Proof Our goal is to show that the set Eâ of measures yp ⬠Q such that there exists a splitting policy r(a) is shy. To simplify the proof, we divide an conquer, showing that the set Ep of measures ⬠Q such that there exists a splitting policy where the thresholds fall below w ⬠TC W and above w ¢ I is Borel, before constructing a probe that shows that it is shy. Then, we argue that Eâ = Upcy Er, which shows that Eâ is shy.
We begin by considering the linear map Φ : K â R à RW given by
(1) = (Prya(u(X) = 0), (Pru(W = W)) wew)
.
For any Î â W, the sets
Fy? = {w@ ERX RY 229 >6,0= > tw}, wel FR = {2 ERX RY: 19 <b,00 = D> aw}, wel
are closed by construction. Therefore, since Φ is continuous,
Erp=Qn "| (J FP URES (21) row
is closed, and therefore universally measurable.
Note that by our hypothesis and Cor. 60, for all w â W there exists some aw â A such that
Prµmax,aw ,w (u(X) > 0). We use this to show that EÎ is shy. Pick wâ â W arbitrarily, and consider the measures νw for w ̸= wâ defined by
νw = µmax,aw,w â¾ u(X)>0 Prµmax,aw ,w (u(X) > 0) â µmax,awâ ,wâ â¾ u(X)>0 Prµmax,awâ ,wâ (u(X) > 0)
.
We note that νw[K] = 0 by construction. Therefore, if Ww = Span(νw), then λWw [Q â µw] > 0 for some µw by Lemma 62.
Moreover, we have that Prν(u(X) > 0) = 0 for all ν â Ww, i.e.,
Prµ(u(X) > 0) = Prµ+ν(u(X) > 0).
Now, since 0 < b < 1 and Ï partitions X , it follows that
EW = Eâ
= â
.
Since λW[â
] = 0 for any subspace W, we can assume without loss of generality that Π̸= W, â
.
76
# The Measure and Mismeasure of Fairness
In that case, there exists wÎ â W such that if wâ â Î, then wÎ /â Î, and vice versa. Without loss of generality, assume wÎ â Î and wâ /â Î. It then follows that for arbitrary µ â Q,
Φ(µ + β · νwÎ) = Φ(µ) + β · ewÎ â β · ewâ, where ew is the basis vector corresponding to w â W. From this, it follows immediately by Eq. (F.4.2) that
µ + β · νwÎ â EÎ
only if
β = min(b, Prµ(u(X) > 0)) â Prµ(W = w). wâÎ
This is a measure zero subset of R, and so it follows that
λWwÎ [EÎ â µ] = 0
or all uw © K. Therefore, by Prop. 38, Ep is shy in Q. Taking the union over I C W, it ollows by Prop. 36 that Upcy Er is shy.
Now, we must show that Eâ =
ÎâW EÎ. By construction, EÎ â Eâ², since the policy Ï (x) = 1Ï(x)âÎ is budget-exhausting and separates utilities. To see the reverse inclusion, suppose µ â Eâ², i.e., that there exists a budget-exhausting multiple threshold policy Ï (x) that splits utilities over µ. Then, let
ε = {w â W : Eµ[Ï (X) | W = w] = 1}.
Since r(x) is budget-exhausting, it follows immediately that ⬠Ep,. Therefore, Eâ = Urcy Er, and so Eâ is shy, as desired. |
We conclude our discussion of shyness and shy sets with the following general lemma, which simplifies relative prevalence proofs by showing that one can, without loss of gener- ality, restrict oneâs attention to the elements of the shy set itself in applying Prop. 38.
Lemma 68 Suppose E is a universally measurable subset of a convex, completely metrizable set C in a topological vector space V . Suppose that for some finite-dimensional subspace V â², λV â²[C + v0] > 0 for some v0 â V . If, in addition, for all v â E,
λV â²[{vâ² â V â² : v + vâ² â E}] = 0, (22)
then it follows that E is shy relative to C.
Proof Let v be arbitrary. Then, either (v + V â²) â© E is empty or not.
First, suppose it is empty. Since λV â²[â
] = 0 by definition, it follows immediately that in this case λV â²[E â v] = 0.
Next, suppose the intersection is not empty, and let v + vâ â E for some fixed vâ â V â². It follows that
λV â²[E â v] = λV â²[{vâ² â V â² : v + vâ² â E}] = λV â²[{vâ² â V â² : (v + vâ) + vâ² â E}] = 0,
77
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
where the first equality follows by definition; the second equality follows by the translation invariance of λV â², and the fact that vâ + V â² = V â²; and the final inequality follows from Eq. (22).
Therefore λV â²[E â v] = 0 for arbitrary v, and so E is shy.
# F.5 Proof of Theorem 17
Using the lemmata above, we can prove Theorem 17. We briefly summarize what has been established so far:
⢠Lemma 42: The set K of U-fine distributions on K is a Banach space;
⢠Lemma 50: The subset Q of U-fine probability measures on K is a convex, completely metrizable subset of K;
⢠Lemma 56: The subset E of Q is a universally measurable subset of K, where E is the set consisting of U-fine probability measures over which there exists a policy satisfying counterfactual equalized odds (resp., conditional principal fairness, or path- specific fairness) that is not strongly Pareto dominated.
Therefore, to apply Prop. 38, it follows that what remains is to construct a probe W and show that λW[Q + µ0] > 0 for some µ0 â K but λW[E + µ] = 0 for all µ â K.
Proof We divide the proof into three pieces. First, we illustrate how to construct the probe W from a particular collection of distributions {νUp a }aâA. Second, we show that λW[E+µ] = 0 for all µ â K. For notational and expository simplicity, we focus in these first two sections on the case of counterfactual equalized odds. Therefore, in the third section, we show how to generalize the argument to conditional principal fairness and path-specific fairness.
Construction of the probe. We will construct our probe to address two different cases. We recall that, by Cor. 29, any policy that is not strongly Pareto dominated must be a budget-exhausting multiple threshold policy with non-negative thresholds. In the first case, we consider when the candidate budget-exhausting multiple threshold policy is 1u(x)>0. By perturbing the underlying distribution by ν â WLo, we will be able to break the balance requirements implied by Eq. (4). In the second case, we treat the possibility that the candi- date budget-exhausting multiple threshold policy has a non-trivial positive threshold for at least one group. By perturbing the underlying distribution by ν â WUp for an alternative set of perturbations WUp, we will again be able to break the balance requirements.
More specifically, to construct our probe W = WUp + WLo, we want WUp and WLo to have a number of properties. In particular, for all ν â W, perturbation by ν should not affect whether the underlying distribution is a probability distribution, and should not affect how much of the budget is available to budget-exhausting policies. Specifically, for all ν â W,
1 dν = 0, K (23)
78
# a
# The Measure and Mismeasure of Fairness
and
1u(X)>0 dν = 0. K (24)
In fact, the amount of budget available to budget-exhausting policies will not change within group, i.e., for all a â A and ν â W,
1u(X)>0,A=a dν = 0. K (25)
Additionally, for some distinguished y0, y1 â Y, non-zero perturbations in νLo â WLo should move mass between y0 and y1. That is, they should have the property that if Pr|νLo|(A = a) > 0, then
1u(X)<0,Y =yi,A=a dνLo ̸= 0. K (26)
Finally, perturbations in WUp should have the property that for any non-trivial t > 0, some mass is moved either above or below t > 0. More precisely, for any µ â Q and any t such that
0 < Prµ(u(X) > t | A = a) < 1,
if νUp â WUp is such that Pr|νUp|(A = a) > 0, then
1u(X)>t,A=a dνUp ̸= 0. K (27)
To carry out the construction, choose distinct y0, y1 â Y. Then, since
µmax,a ⦠uâ1[Sa â© [0, ra)] â µmax,a ⦠uâ1[Sa â© [ra, â)]
is a continuous function of ra, it follows by the intermediate value theorem that we can partition Sa into three pieces,
SLo a = Sa â© (ââ, 0), SUp a,0 = Sa â© [0, ra), SUp a,1 = Sa â© [ra, â),
so that
Prµmax,a u(X) â SUp a,0 = Prµmax,a u(X) â SUp a,1 .
Recall that K = V x Y. Let mx : K + & denote projection onto ¥, and yy : ¥ + K be the injection x ++ (x,y). We define
uy [E] = pinax,a © (Wy: © mx) |e nut (svt)] > â Pmax,a ° (Wy ° my) | |e al ul (su3)] , i (E] = Mmax,a © (orn ° mx) [E nut (s7°)] â Pmax,a © (Yo © mx)! [E al ut (37°) ] .
79
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
By construction, νUp a concentrates on
{y1} Ã uâ1(Sa â© [0, â)),
# while νLo a
concentrates on
{y0, y1} Ã uâ1(Sa â© (ââ, 0)).
Moreover, if we set
WUp = Span(νUp WLo = Span(νLo a )aâA, a )aâA,
then it is easy to see that Eqs. (23) to (26) will hold. The only non-trivial case is Eq. (27). However, by Cor. 59, the support of fµmax,a is maximal. That is, for µ â Q, if
0 < Prµ(u(X) > t | A = a, u(X) > 0) < 1,
then it follows that 0 < t < sup Sa. Either t ⤠ra or t > ra. First, assume t ⤠ra; then, it follows by the construction of νUp
# love)
# Ta
love) Ta YEP ou'(t,00)} = | finaca dA â / Fnaxa dd t 50 Ta > / fimax,a dA â / fmax,a dA Ta 0 =0
Similarly, if t > ra,
# foe)
foe) yup fo) u(t, 0o)] = / fmax,a dX t oo > / fimax,a dA sup Sa =0.
Therefore Eq. (27) holds.
Since W is non-trivial38 and ν[K] = 0 for all ν â W, it follows by Lemma 62 that λW[Q â µ] > 0 for some µ â K.
Recall that, by Prop. 36, a set E is shy if and only if, for an arbitrary shy set Shyness. Eâ², E \ Eâ² is shy. By Lemma 63, a generic element of µ â Q satisfies Prµ(u(X) > 0, Y (1) = yi, A = a) > 0 for i = 0, 1, and a â A. Likewise, by Lemma 65, a generic µ â Q satisfies νUp a ⦠uâ1 à (µ â¾ X Ã{y1}) ⦠uâ1. Therefore, to simplify our task and recalling Remark 61, we may instead demonstrate the shyness of the set of µ â Q such that:
⢠There exists a budget-exhausting multiple threshold policy Ï (x) with non-negative thresholds satisfying counterfactual equalized odds over µ;
38. In general, some or all of the νLo may be zero depending on the λ-measure of SLo a . However, as noted in Remark 61, the νUp a,i cannot be zero, since Prµmax,a (u(X) > 0) > 0 for all a â A. Therefore W ̸= {0}.
80
# The Measure and Mismeasure of Fairness
For i = 0, 1,
Prµ(u(X) > 0, A = a, Y (1) = yi) > 0; (28)
For all a â A,
a ⦠uâ1 à (µ ⾠νUp αâ1(a)Ã{y1}) ⦠uâ1. (29)
By a slight abuse of notation, we continue to refer to this set as E. We note that, by the construction of W, Eq. (28) is not affected by perturbation by ν â W, and Eq. (29) is not affected by perturbation by νLo â W.
In particular, by Lemma 68, it suffices to show that λW[E â µ] = 0 for µ â E. Therefore, let µ â E be arbitrary. Let the budget-exhausting multiple threshold policy
satisfying counterfactual equalized odds over it be Ï (x), so that
Eµ[Ï (X)] = min(b, Prµ(u(X) > 0)),
with thresholds {ta}aâA. We split into two cases based on whether Ï (X) = 1u(X)>0 µ-a.s. or not.
In both cases, we make use of the following two useful observations. First, note that as E â Q, if µ + ν is not a probability measure, then µ + ν /â E. Therefore, without loss of generality, we assume throughout that µ + ν is a probability measure.
Second, suppose Ï â²(x) is a policy satisfying counterfactual equalized odds over some ν â Q. Then, if 0 < Eµ[Ï â²(X)] < 1, it follows that for all a â A,
0 < Eµ[Ï â²(X) | A = a] < 1. (30)
For, suppose not. Then, without loss of generality, there must be a0, a1 â A such that
Eµ[Ï â²(X) | A = a0] = 0
and
Eµ[Ï â²(X) | A = a1] > 0. But then, by the law of iterated expectation, there must be some Y â² â Y such that µ[X à Y â²] > 0 and so,
1X ÃY Ⲡ· Eµ[Ï â²(X) | A = a1, Y (1)] > 0 = 1X ÃY Ⲡ· Eµ[Ï â²(X) | A = a0, Y (1)],
contradicting the fact that Ï â²(x) satisfies counterfactual equalized odds over µ. Therefore, in what follows, we can assume that Eq. (30) holds.
Our goal is to show that λW[E â µ] = 0.
Case 1 (Ï (X) = 1u(X)>0) We argue as follows. First, we show that 1u(X)>0 is the unique budget-exhausting multiple threshold policy with non-negative thresholds over µ + ν for all ν â W. Then, we show that the set of ν â W such that 1u(x)>0 satisfies counterfactual equalized odds over µ + ν is a λW-null set.
We begin by observing that WLo ̸= {0}. For, if that were the case, then Eq. (30) would not hold for Ï (x).
81
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Next, we note that by Eq. (24), for any ν â W,
Prµ+ν(u(X) > 0) = Prµ(u(X) > 0)
and so
Eµ+ν[1u(X)>0] = min(b, Prµ+ν(u(X) > 0)). If Ï â²(x) is a feasible multiple threshold policy with non-negative thresholds and Ï â²(X) ̸= 1u(X)>0 (µ + ν)-a.s., then, as a consequence,
Eµ+ν[Ï â²(X)] < Prµ+ν(u(X) > 0) ⤠b.
Therefore, it follows that 1u(X)>0 is the unique budget-exhausting multiple threshold policy over µ + ν with non-negative thresholds.
Now, note that if counterfactual equalized odds holds with decision policy Ï (x) = 1u(x)>0, then, by Eq. (7) and Lemma 41, we must have that
Prµ+ν(u(X) > 0 | A = a, Y (1) = y1) = Prµ+ν(u(X) > 0 | A = aâ², Y (1) = y1)
for a, aâ² â A.39
Now, we will show that a typical element of W breaks this balance requirement. Choose aâ such that νLo aâ ̸= 0. Recall that ν is fixed, and let νⲠ= ν â βLo aâ · νLo aâ . Let
pa = Prµ+νâ²(u(X) > 0 | A = aâ², Y (1) = y1).
Note that it cannot be the case that pa = 0 for all a â A, since, by Eq. (28),
Prµ+νâ²(u(X) > 0 | Y (1) = y1) > 0.
Therefore, by the foregoing discussion, either paâ > 0 or paâ = 0 and we can choose aâ² â A such that paâ² > 0. Since the νLo a,i are all mutually singular, it follows that counterfactual equalized odds can only hold over µ + ν if
paâ² = Prµ+ν(u(X) > 0 | A = aâ, Y (1) = y1).
Now, we observe that by Lemma 41, that
Prµ+ν(u(X) > 0 | A = aâ, Y (1) = y1) = η aâ · Ï Ï + βLo
where
n= Pr,(u(X) > 0,A =a*, Y(1) = 1) 7 =Pr,(A=a",Â¥(1) =u), Li p= [taco veyn dy,â
39. To ensure that both quantities are well-defined, here and throughout the remainder of the proof we use the fact that by Eqs. (25) and (28), Prµ+ν (u(X) > 0, A = a, Y (1) = y1) > 0.
82
# The Measure and Mismeasure of Fairness
since
0= 1 * dvle K u(X)>0,A=a*,Y(l)=y1 CY%a > 0 # [Ase venn dvi.
Here, the equality follows by the fact that νLo is supported on SLo inequality from Eq. (26). a à {y0, y1} and the
Therefore, if, in the first case, paâ² > 0, then counterfactual equalized odds only holds if
βLo aâ = e â paⲠ· Ï paⲠ· Ï ,
since, as noted above, Ï Ì¸= 0 by Eq. (26). In the second case, if paâ² = 0, then counterfactual equalized odds can only hold if
e = paâ · Ï = 0.
Since we chose aâ² so that paâ > 0 if paâ² = 0 and Ï > 0 by Eq. (28), this is impossible.
aâ â R such that there a budget-exhausting threshold policy with positive thresholds satisfying counterfactual equalized odds over µ + νⲠ+ βLo
aâ )[E â µ â νâ²] = 0. λSpan(νLo
Since νⲠwas arbitrary, it follows by Fubiniâs theorem that λW[E â µ] = 0.
Case 2 (Ï (X) ̸= 1u(X)>0) Our proof strategy is similar to the previous case. First, we show that, for a given fixed νLo â WLo, there is a unique candidate policy ËÏ (x) for being a budget-exhausting multiple threshold policy with non-negative thresholds and satisfying counterfactual equalized odds over µ + νLo + νUp for any νUp â WUp. Then, we show that the set of νUp such that ËÏ (X) satisfies counterfactual equalized odds has λWUp measure zero. Finally, we argue that this in turn implies that the set of ν â W such that there exists a Pareto efficient policy satisfying counterfactual equalized odds over µ + ν has λW-measure zero.
We seek to show that λWUp[E â (µ + νLo)] = 0. To begin, we note that since νUp a,i concentrates on {y1} à X for all a â A, it follows that
Eµ+νLo[d(X) | A = a, Y (1) = y0] = Eµ+νLo+νUp[d(X) | A = a, Y (1) = y0]
for any νUp â WUp.
Now, suppose there exists some νUp â WUp such that there exists a budget-exhausting multiple threshold policy ËÏ (x) with non-negative thresholds such that counterfactual equal- ized odds is satisfied over µ+νLo+νUp. (If not, then we are done and λWUp[Eâ(µ+νLo)] = 0, as the measure of the empty set is zero.) Let
p = Eµ+νLo[ËÏ (X) | A = a, Y (1) = y0].
83
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Suppose that ËÏ â²(x) is an alternative budget-exhausting multiple threshold policy with non- negative thresholds such that counterfactual equalized odds is satisfied. We seek to show that Ï â²(X) = Ï (X) (µ + νLo + νUp)-a.e. for any νUp â WUp. Toward a contradiction, suppose that for some a0 â A,
Eµ+νLo[ËÏ â²(X) | A = a0, Y (1) = y0] < p.
Since, by Eq. (28), Prµ+νLo(A = a0, Y (1) = y0) > 0, it follows that
Eµ+νLo[ËÏ â²(X) | A = a0] < Eµ+νLo[ËÏ (X) | A = a0].
Therefore, since ËÏ (x)â² is budget exhausting, there must be some a1 such that
Eµ+νLo[ËÏ â²(X) | A = a1] > Eµ+νLo[ËÏ (X) | A = a1].
From this, it follows ËÏ â²(x) can be represented by a threshold greater than or equal to that of ËÏ (x) on αâ1(a1), and hence
Eµ+νLo[ËÏ â²(X) | A = a1, Y (1) = y0] ⥠Eµ+νLo[ËÏ (X) | A = a0, Y (1) = y0] = p > Eµ+νLo[ËÏ â²(X) | A = a0, Y (1) = y0],
contradicting the fact that ËÏ â²(x) satisfies counterfactual equalized odds.
By the fact that νLo is supported on uâ1((ââ, 0]), the preceding discussion, and Lemma 49, it follows that
ËÏ (X) = ËÏ â²(X) (µ â¾ X Ã{y0})-a.e.
By Eq. (29), it follows that ËÏ (X) = ËÏ â²(X) νUp a,i -a.e. for i = 0, 1. As a consequence,
ËÏ (X) = ËÏ â²(X) (µ + νLo + νup)-a.e.
for all νUp â WUp. Therefore ËÏ (X) is, indeed, unique, as desired.
Now, we note that since Ï (X) ̸= 1u(X)>0, it follows that E[Ï (X)] < Prµ(u(X) > 0). It follows that Eµ[Ï (X)] = b, since Ï (x) is budget exhausting. Therefore, by Eq. (24), it follows that for any budget-exhausting policy ËÏ (X), E[ËÏ (X)] = b, and so ËÏ (X) ̸= 1u(X)>0 over µ + ν.
Therefore, fix νLo and ËÏ (X). By Eq. (30), there is some aâ such that
0 < Prµ+νLo(u(X) > Ëtaâ | A = aâ) < 1.
Then, it follows by Eq. (27) that
K 1u(X)>Ëtaâ dνUp aâ ̸= 0.
Fix νⲠ= ν â βUp aâ · νUp aâ . Then, for some a ̸= aâ, set
pâ = Eµ+νâ²[ËÏ (X) | A = a, Y (1) = y1].
84
# The Measure and Mismeasure of Fairness
a , νUp Since the νLo can only hold over µ + ν if are all mutually singular, it follows that counterfactual equalized odds a
pâ = Prµ+ν(u(X) > Ëtaâ | A = aâ, Y (1) = y1).
Now, we observe that by Lemma 41, that
Prµ+ν(u(X) > Ëtaâ | A = aâ, Y (1) = y1) = η + βUp a Ï Â· γ (31)
where
1 = Pr, 4,t0(u(X) > te | A=a*, Y(1) =), m= Pr, 4 10(A =a*,Y(1) =y1), Uj 1 [ tacori seer riven dy;?,
and we note that
0 = 1A=aâ,Y (1)=y1 dνLo a . K
Eq. (31) can be rearranged to
(pâ · Ï â η) â β · γ = 0.
This can only hold if
β = pâ · Ï â η γ ,
since by Eq. (27), γ ̸= 0. Since any countable subset of R is a λ-null set,
aâ )[E â µ â νâ²] = 0. λSpan(νUp
Since νⲠwas arbitrary, it follows by Fubiniâs theorem that λWUp[E â µ â νLo] = 0 in this case as well. Lastly, since νLo was also arbitrary, applying Fubiniâs theorem a final time gives that λW[E â µ] = 0.
The extension of these results Conditional principal fairness and path-specific fairness. to conditional principal fairness and path-specific fairness is straightforward. All that is required is a minor modification of the probe.
In the case of conditional principal fairness, we set
a,w[E] = µmax,a,w ⦠(γ(y1,y1) ⦠ÏX )â1[E â© uâ1(SUp νUp a,1)], â µmax,a ⦠(γ(y1,y1) ⦠ÏX )â1[E â© uâ1(SUp a,w)], a,w[E] = µmax,a,w ⦠(γ(y1,y1) ⦠ÏX )â1[E â© uâ1(SLo νLo a )] â µmax,a ⦠(γ(y0,y0) ⦠ÏX )â1[E â© uâ1(SLo a,w)],
85
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
where 7"): ¢ + K is the injection a + (z,y,yâ). Our probe is then given by
WUp = Span(νUp WLo = Span(νLo
almost as before.
The proof otherwise proceeds virtually identically, except for two points. First, recalling Remark 61, we use the fact that a generic element of Q satisfies Prµ(A = a, W = w) > 0 in place of Prµ(A = a) > 0 throughout. Second, we use the fact that Ï overlaps utility in place of Eq. (30). In particular, If Ï does not overlap utilities for a generic µ â Q, then, by Lemma 67, there exists w â W such that Prµ(u(X) > 0, W = w) = 0 for all µ â Q. If this occurs, we can show that no budget-exhausting multiple threshold policy with positive thresholds satisfies conditional principal fairness, exactly as we did to show Eq. (30).
In the case of path-specific fairness, we instead define
a,w = Sa,w â© (ââ, ra,w), a,w = Sa,w â© [ra,w, â),
where ra,w is chosen so that
Prµmax,a,w (u(X) â SLo a,w) = Prµmax,a,w (u(X) â SUp a,w).
Let ÏX denote the projection from K = A Ã X A given by
(a, (a Ja'eA) > a
Let Ïaâ² denote the projection from the aâ²-th component. distribution of XÎ ,A,aâ² over µ is given by µ ⦠Ïâ1 µ ⦠Ïâ1 (That is, given µ â K, the aâ² and the distribution of X is given by
X .) Then, we let ˵max,a,w be the measure on X given by ˵max,a,w[E] = µmax,a,w[E â© (u ⦠Ïa)â1(sUp
a,w)] â µmax,a,w[E â© (u ⦠Ïa)â1(SLo
a,w)].
Finally, let Ï : A â A be a permutation of the groups with no fixed points, i.e., so that aⲠ̸= Ï(aâ²) for all aâ² â A. Then, we define
νaâ² = δaⲠà ˵max,Ï(aâ²),w1 à µmax,a,w1 ⦠Ïâ1 a , a̸=Ï(aâ²)
where δa is the measure on A given by δa[{aâ²}] = 1a=aâ². Then, simply let
W = Span(νⲠa)aâ²âA.
Since ˵max,a,w[X ] = 0 for all a â A, it follows that νa,w ⦠Ïâ1
=0,ie.,
Prµ(X â E) = Prµ+ν(X â E)
for any ν â W and µ â Q. Therefore Eqs. (23) and (24) hold. Moreover, the νa satisfy the following strengthening of Eq. (27). Perturbations in W have the property that for
86
The Measure and Mismeasure of Fairness
any non-trivial tânot necessarily positiveâsome of the mass of u(XÎ ,A,a) is moved either above or below t. More precisely, for any µ â Q and any t such that
0 < Prµ(u(X) > t | A = a) < 1,
if ν â W is such that Pr|ν|(A = Ïâ1(a)) > 0, then
K 1u(XΠ,A,a)>t dνa ̸= 0. (32)
This stronger property means that we need not separately treat the case where Ï (X) = 1u(X)>0 µ-a.e.
Other than this difference the proof proceeds in the same way, except for two points. First, we again make use of the fact that Ï can be assumed to overlap utilities in place of Eq. (30), as in the case of conditional principal fairness. Second, w0 and w1 take the place of y0 and y1. In particular, to establish the uniqueness of ËÏ (x) given µ and νLo in the second case, instead of conditioning on y0, we instead condition on w0, where, following the discussion in Remark 61 and Lemma 63, this conditioning is well-defined for a generic element of Q.
We have focused on causal definitions of fairness, but the thrust of our analysis applies to non-causal conceptions of fairness as well. Below we show that policies constrained to satisfy (non-counterfactual) equalized odds (Hardt et al., 2016) are generically strongly Pareto dominated, a result that follows immediately from our proof above.
Definition 69 Equalized odds holds for a decision policy d(x) when
d(X) â¥â¥ A | Y. (33)
We note that Y in Eq. (33) does not depend on our choice of d(x), but rather represents the realized value of Y , e.g., under some status quo decision making rule.
Corollary 70 Suppose U is a set of utilities consistent modulo α. Further suppose that for all a â A there exist a U-fine distribution of X and a utility u â U such that Pr(u(X) > 0, A = a) > 0, where A = α(X). Then, for almost every U-fine distribution of X and Y on X à Y, any decision policy d(x) satisfying equalized odds is strongly Pareto dominated.
Proof Consider the following maps. Distributions of X and Y(1), i-e., probability measures on X x Y, can be embedded in the space of joint distributions on X, Y(0), and Y(1) via pushing forward by the map v, where u : (7, y) > (x,y, y). Likewise, given a fixed decision policy D = d(X), joint distributions of X, Y(0), and Y(1) can be projected onto the space of joint distributions of X and Y by pushing forward by the map 7q : (x, yo, y1) © (2, Ya(x))- Lastly, we see that the composition of s and 7gâregardless of our choice of d(x)âis the
87
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
identity, as shown in the diagram below.
X à Y ι X à Y à Y id Ïd X à Y
We note also that counterfactual equalized odds holds for µ exactly when equalized odds holds for µ ⦠(Ïd ⦠ι)â1. The result follows immediately from this and Theorem 17.
# F.6 Proof of Theorem 9
The proof of Theorem 9 is simpler than the proof of Theorem 17, but uses some of the same machinery. As before, let K = A x [0,1] denote the state space, and K denote the set of measures on [0,1] x A. Let K denote the measures p. ⬠K that are absolutely continuous with respect to A x 6, where \ is Lebesgue measure and 6 is the counting measure on A i.e., measures such that the restriction to [0,1] x {a} has a density for all a ⬠A. Applying Cor. 47 with U = {7}, where 7m : (u,a) + u, shows that K is a Banach space. As before, we let Q C K denote the probability simplex, i.e., the set of all 4. ⬠K such that p[K] = 1 and [E] > 0 for all Borel sets E.
Let R = r(X) denote risk.40 By Lemma 31, we have that a policy is utility maximizing if and only if 1R>t ⤠d(X) ⤠1Râ¥t a.s. By Since Prµ(R = t) = 0 for any absolutely continuous measure µ, we see that, in fact, a policy is utility maximizing if and only if 1R>t = d(X) µ-a.s.
Consider the sets
Erp = {1 EK: (Vae A) Eyl. â 8) -Dasar>i] _ Eul(tâ 8): 7} E,( â &)- Lane E,{lâ Rj
and
Epp = {uwâ¬K: (Va⬠A) Pr,(R>t| A=a) = Pr,(R > t)}. We note that since the mapping p ++ Pr,(£) is a continuous function on K, the set of measures such that Pr,(A = a,R >t) =0 or Pr,(R > t) = 0 is closed and shy. It follows that Epp and Epp are Borel. We note that, by definition, Epp is the set of risk distributions such that there exists a utility-maximizing policy satisfying demographic parity (when the condition is well defined). Likewise, an application of the law of iterated expectations yields that Epp is the set of distributions satisfying equalized false positive rates (when the condition is well-defined).
With this context in place, we can move on to the proof of Theorem 9.
Proof of Theorem 9 First we construct the probe. Choose distinct a0, a1 â A arbitrarily, and let Ëν be defined by
Ëν[E] = λ[Ea0 â© [0, t)] t â λ[Ea1 â© [0, t)] t ,
40. Here, since the measures are on the risk scale, rather than on X, we write R for notational simplicity.
88
# The Measure and Mismeasure of Fairness
where Ea = E â© {A = a}. Then, by Lemma 62, there exists µ0 such that λW[K + ν0] > 0, where K = Span(Ëν). We see that
PrËν(R < t, A = a0) = 1, PrËν(R < t, A = a1) = â1, PrËν(R > t, A = a0) = 0, PrËν(R > t, A = a1) = 0.
It follows that
Prµ+β·Ëν(R > t | A = a0) = e0 p0 + β , Prµ+β·Ëν(R > t, | A = a1) = e1 p1 â β ,
where
e0 = Prµ(R > t, A = a0), p0 = Prµ(A = a0), e1 = Prµ(R > t, A = a1), p1 = Prµ(A = a1).
Note that by Lemmata 65 and 68, we can assume that e0, e1 > 0. It follows, rearranging terms, that
Prµ+β·Ëν(R > t | A = a0) = Prµ+β·Ëν(R > t | A = a1)
if and only if
β = e0 · p1 â e1 · p0 e0 + e1 ,
which is a measure-zero subset of β â R. Therefore λW[EDP + µ] = 0. In the same way, we observe that
EËν[R · 1A=a0,R<t] = t 2 , EËν[R · 1A=a0,R>t] = 0 EËν[R · 1A=a1,R<t] = â t 2 , EËν[R · 1A=a1,R>t] = 0,
and so
Eµ+β·Ëν[(1 â R) · 1A=a0,R>t] Eµ+β·Ëν[(1 â R) · 1A=a0] = eâ² 0 0 + β · t pâ² 2 , Eµ+β·Ëν[(1 â R) · 1A=a1,R>t] Eµ+β·Ëν[(1 â R) · 1A=a1] = eâ² 1 1 â β · t pâ² 2 ,
where
eâ² 0 = Eµ[(1 â R) · 1A=a0,R>t], pâ² 0 = Eµ[(1 â R) · 1A=a0], eâ² 1 = Eµ[(1 â R) · 1A=aa,R>t], pâ² 1 = Eµ[(1 â R) · 1A=a0].
As before, µ + β · Ëν â EFP if and only if
β = 2 t · e0 · p1 â e1 · p0 e0 + e1 ,
which is again a measure-zero subset of β â R. Therefore λW[EFP + µ] = 0. Therefore it follows that both EFP and EDP are shy.
89
|
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
# F.7 Proof of Corollary 18
The proof of Corollary 18 is a straightforward application of Theorem 17. We begin by giving the complete theorem statement.
Corollary 71 Consider a utility of the form given in Eq. (12), where v is monotonically increasing in both coordinates and m(x) ⥠0. Suppose that for all a â A there exist an {m}-fine distribution of X such that Pr(m(X) > 0, A = a) > 0, where A = α(X). Then,
⢠For almost every {m}-fine distribution of X and Y (1), no utility-maximizing decision policy satisfies counterfactual equalized odds.
⢠If | Img(Ï)| < â and there exists an {m}-fine distribution of X such that Pr(A = a, W = w) > 0 for all a â A and w â Img(Ï), where W = Ï(X), then, for almost every {m}-fine joint distribution of X, Y (0), and Y (1), no utility-maximizing decision policy satisfies conditional principal fairness.
⢠If | Img(Ï)| < â and there exists a {m}-fine distribution of X such that Pr(A = a, W = wi) > 0 for all a â A and some distinct w0, w1 â Img(Ï), then, for almost every {m}A-fine joint distributions of A and the counterfactuals XÎ ,A,aâ², no utility- maximizing decision policy satisfies path-specific fairness.
Recall that for notational simplicity, we refer to the distinguished utility as u*, where u*(d) =v (E[m(X) - d(X)], E[dacxya, A(X) -
Proof of Corollary 18 Consider the subset S of R2 consisting of all pairs
(E[m(X) · d(X)], E[1α(X)=a1 · d(X)]),
where d ranges over feasible policies. We note that, for θ â [0, 1],
θ · E[m(X) · d0(X)] + [1 â θ] · E[m(X) · d1(X)] = E[m(X) · (θ · d0(X) + [1 â θ] · d1(X))], and similarly for E[1α(X)=a1 · d(X)]. Likewise,
θ · E[d0(X)] + [1 â θ] · E[d1(X)] = E[θ · d0(X) + (1 â θ) · d1(X)],
and so convex combinations of feasible policies are feasible. It follows that S is convex. Now, consider a point (x0, y0) at which v(x, y) is maximized on S. Since v is monoton- ically increasing in both coordinates, we must have that
5M ((xo, yo) + RS) = {(xo, yo) };
and so, by the separating hyperplane theorem, there exists (h0, h1) â R2 and t â R such that
(ho; ha)" (a0, Yo) = ty (ho, tu)'e >t, (⬠⬠(x0, yo) + RSq,¢ F (20, yo)) (ho,i)'s<t. (sé S,s #4 (20, yo))
90
# The Measure and Mismeasure of Fairness
We note that it follows that both h0 and h1 are positive, since, otherwise, without loss of generality, if h0 = 0, then
(h0, h1)â¤(x0 + ϵ, y0) = h1 · y0 = (h0, h1)â¤(x0, y0) = 0,
contrary to assumption, since
(x0 + ϵ, y0) â (x0, y0) + R2 â¥0, (x0 + ϵ, y0) ̸= (x0, y0).
Let λâ = h1/h0, and consider the collection of utilities
U = {m(x) + λ · 1α(x)=a1}λ>0.
Since m(x) ⥠0, U is consistent modulo α.
We need the further claim that any policy that is utility maximizing for some u(x) â U is Pareto efficient. For, suppose that d0(x) were utility-maximizing for u0(x) but not Pareto efficient, i.e., there existed d1(x) and u1(x) such that u0(d1) = u0(d0) but u1(d1) > u1(d0). Then, we would have for any u â U that
u(di) â u(do) = u(di) â u(do) â (uo(di) â uo(do)) = (A Ao) > (Eldi (X) + Lacxysai] â Eldo(X) + dacx)=ai}) -
First suppose that λ1 > λ0. Then, it follows from the fact that u1(d1) > u1(d0) that
E[d1(X) · 1α(X)=a1] > E[d0(X) · 1α(X)=a1].
Now, if 0 < λ2 < λ1 and u(x) = m(x) + λ2 · 1α(x)=a1, then we have that u2(d1) < u2(d0). In the same way, if λ1 < λ0, we could choose λ2 > λ1 such that u2(d1) < u2(d0). Therefore d1 does not Pareto dominate d0, contrary to hypothesis. Therefore, any policy that is utility maximizing for some u(x) â U is Pareto efficient.
In particular, it follows that the policy that maximizes u(x) = m(x) + λâ · 1α(x)=a1 in expectation is Pareto efficient. By the construction of the separating hyperplane, this is also the policy that maximizes uâ, and so the policy that maximizes uâ is Pareto efficient. Therefore, under the hypotheses of Theorem 17, for almost every joint distribution, the utility maximizing policy does not satisfy counterfactual equalized odds, conditional prin- cipal fairness, or path-specific fairness.
# F.8 General Measures on K
Theorem 17 is restricted to U-fine and U A-fine distributions on the state space. The reason for this restriction is that when the distribution of X induces atoms on the utility scale, threshold policies can possess additionalâor even infiniteâdegrees of freedom when the threshold falls exactly on an atom. In particular circumstances, these degrees of freedom can be used to ensure causal fairness notions, such as counterfactual equalized odds, hold in a locally robust way. In particular, the generalization of Theorem 17 beyond U-fine measures to all totally bounded measures on the state space is false, as illustrated by the following proposition.
91
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Proposition 72 Consider the set Eâ² â K of distributionsânot necessarily U-fineâon K = X Ã Y over which there exists a Pareto efficient policy satisfying counterfactual equalized odds. There exist b, X , Y, and U such that Eâ² is not relatively shy.
Proof We adopt the notational conventions of Section F.3. We note that by Prop. 36, a set can only be shy if it has empty interior. Therefore, we will construct an example in which an open ball of distributions on K in the total variation norm all allow for a Pareto efficient policy satisfying counterfactual equalized odds, i.e., are contained in Eâ².
Let b= 3, Y = {0,1}, A = {ao,a given by a: u:(y,a,v) > v. Then, ifU = {u},U distribution 4 on K = X x Y where i}, and XY = {0,1} x {ap,ai} x R. Leta: Â¥ 4 Abe (y,a,v) > a for arbitrary (y,a,v) ⬠XY. Likewise, let u : % > R be given by is vacuously consistent modulo a. Consider the joint orally,yââ¬Y,ae A, andueR,
Prµ(X = (a, y, u), Y (1) = yâ²) = 1 4 · 1y=yⲠ· Prµ(u(X) = u),
where, over µ, u(X) is distributed as a 1 2 and Pr(a < u(X) < b) = bâa 1) = 1 2 2 -mixture of Unif(1, 2) and δ(1); that is, Pr(u(X) = for 0 ⤠a ⤠b < 1.
We first observe that there exists a Pareto efficient threshold policy Ï (x) such that counterfactual equalized odds is satisfied with respect to the decision policy Ï (X). Namely, let
Ï (a, y, u) = 1 u > 1, 1 2 u = 1, 0 u < 1.
Then, it immediately follows that E[Ï (X)] = 3 4 = b. Since Ï (x) is a threshold policy and exhausts the budget, it is utility maximizing by Lemma 31. Moreover, if D = 1UDâ¤Ï (X) for some UD â¼ Unif(0, 1) independent of X and Y (1), then D â¥â¥ A | Y (1). Since u(X) â¥â¥ A, Y (1), it follows that
Prµ(D = 1 | A = a, Y (1) = y) = Pr(UD â¤ Ï (X) | A = a, Y (1) = y) = Pr(UD â¤ Ï (X)) = Eµ[Ï (X)],
Therefore Eq. (4) is satisfied, i.e., counterfactual equalized odds holds. Now, using µ, we construct an open ball of distributions over which we can construct similar threshold policies. In particular, suppose µⲠis any distribution such that |µ â µâ²|[K] < 1 64 . Then, we claim that there exists a budget-exhausting threshold policy satisfying counterfactual equalized odds over µâ². For, we note that
Prµâ²(U > 1) < Prµ(U > 1) + Prµâ²(U ⥠1) > Prµ(U ⥠1) â 1 64 1 64 = = 33 64 63 64 , ,
92
# The Measure and Mismeasure of Fairness
and so any threshold policy Ï â²(x) satisfying E[Ï â²(X)] = b = 3 threshold. 4 must have t = 1 as its
We will now construct a threshold policy Ï â²(x) satisfying counterfactual equalized odds over µâ². Consider a threshold policy of the form
Ï â²(a, y, u) = 1 u > 1, pa,y u = 1, u < 1. 0
For notational simplicity, let
qa,y = Prµâ²(A = a, Y = y, U > 1), ra,y = Prµâ²(A = a, Y = y, U = 1), Ïa,y = Prµâ²(A = a, Y = y).
Then, we have that
Eµâ²[Ï â²(X)] = qa,y + pa,y · ra,y, Eµâ²[Ï â²(X) | A = a, Y = y] = a,y qa,y + pa,y · ra,y Ïa,y .
Therefore, the policy will be budget exhausting if
_3 Yo aay + Pay * Tay = 4 ay
and it will satisfy counterfactual equalized odds if
Ïa1,0 · (qa0,0 + pa0,0 · ra0,0) = Ïa0,0 · (qa1,0 + pa1,0 · ra1,0), Ïa1,1 · (qa0,1 + pa0,1 · ra0,1) (34) = Ïa0,1 · (qa1,1 + pa1,1 · ra1,1),
since, as above,
Pr(D = 1 | A = a, Y (1) = y) = E[Ï â²(X) | A = a, Y (1) = y].
Again, for notational simplicity, let
S = 3 4 â Prµâ²(U > 1) Prµâ²(U = 1) .
Then, a straightforward algebraic manipulation shows that Eq. (34) is solved by setting pa0,y to be
S · Ïa0,y · (ra0,y + ra1,y) + Ïa0,y · qa1,y â Ïa1,y · qa0,y ra0,y · (Ïa0,y + Ïa1,y)
,
93
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
and pa1,y to be
S · Ïa1,y · (ra0,y + ra1,y) + Ïa1,y · qa0,y â Ïa0,y · qa1,y ra1,y · (Ïa0,y + Ïa1,y) In order for Ï â²(x) to be a well-defined policy, we need to show that pa,y â [0, 1] for all a â A and y â Y. To that end, note that
.
qa,y = Prµâ²(A = a, Y = y, U > 1), ra,y = Prµâ²(A = a, Y = y, U = 1), Ïa,y = Prµâ²(A = a, Y = y), ra0,y + ra1,y = Prµâ²(Y = y, U = 1), Ïa0,y + Ïa1,y = Prµâ²(Y = y), S = 3 4 â Prµâ²(U > 1) Prµâ²(U = 1) .
Now, we recall that | Prµâ²(E) â Prµ(E)| < 1 64 for any event E by hypothesis. Therefore,
7 64 7 64 7 64 15 64 31 64 15 31 ⤠qa,y ⤠⤠ra,y ⤠⤠Ïa,y ⤠⤠ra0,y + ra1,y ⤠⤠Ïa0,y + Ïa1,y ⤠⤠S ⤠9 64 9 64 17 64 17 64 33 64 17 33 , , , , , .
Using these bounds and the expressions for pa,y derived above, we see that 629 3069
and hence pa,y â [0, 1] for all a â A and y â Y.
Therefore, the policy Ï â²(x) is well-defined, and, by construction, is budget-exhausting and therefore utility-maximizing by Lemma 31. It also satisfies counterfactual equalized odds by construction. Since µⲠwas arbitrary, it follows that the set of distributions on K such that there exists a Pareto efficient policy satisfying counterfactual equalized odds contains an open ball, and hence is not shy.
# G. Theorem 11 and Related Results
We first prove a variant of Theorem 11 for general, continuous covariates X . Then, we extend and generalize Theorem 11 using the theory of finite Markov chains, offering a proof of the theorem different from the sketch included in the main text.
94
The Measure and Mismeasure of Fairness
# G.1 Extension to Continuous Covariates
Here we follow the proof sketch in the main text for Theorem 11, which assumes a finite covariate-space X . In that case, we start with a point xâ with maximum decision probability d(xâ), and then assume, toward a contradiction, that there exists a point with strictly lower decision probability. The general case is more involved since it is not immediately clear that the maximum value of d(x) is achieved with positive probability in X . We start with the lemma below before proving the main result.
Lemma 73 A decision policy d(x) satisfies path-specific fairness with W = X if and only if any aâ² â A,
E[d(XÎ ,A,aâ²) | X] = d(X).
Proof First, suppose that d(x) satisfies path-specific fairness. To show the result, we use the standard fact that for independent random variables X and U ,
E[f (X, U ) | X] = f (X, u) dFU (u), (35)
where FU is the distribution of U . (For a proof of this fact see, for example, Brozius, 2019) Now, we have that
E[Du,aq | Xtcâ) = Ellup<acxy, 4.7) | X14) 1 -[ du<d(Xi14,0") du = d(Xq1,A,a');
where the first equality follows from the definition of DÎ ,A,aâ², and the second from Eq. (35), since the exogenous variable UD â¼ Unif(0, 1) is independent of the counterfactual covariates XÎ ,A,aâ². An analogous argument shows that E[D | X] = d(X).
Finally, conditioning on X, we have
E[d(XÎ ,A,aâ²) | X] = E[E[DÎ ,A,aâ² | XÎ ,A,aâ²] | X] = E[E[DÎ ,A,aâ² | XÎ ,A,aâ², X] | X] = E[DÎ ,A,aâ² | X] = E[D | X] = d(X),
where the second equality follows from the fact that DÎ ,A,aâ² â¥â¥ X | XÎ ,A,aâ², the third from the law of iterated expectations, and the fourth from the definition of path-specific fairness. Next, suppose that
E[d(XÎ ,A,aâ² | X] = d(X)
95
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
for all aâ² â A. Then, since W = X and X â¥â¥ UD, using Eq. (35), we have that for all aâ² â A,
E[DÎ ,A,aâ² | X] = E[E[1UDâ¤d(XÎ ,A,aâ² ) | XÎ ,A,aâ², X] | X] = E[E[d(XÎ ,A,aâ²) | XÎ ,A,aâ², X] | X] = E[d(XÎ ,A,aâ²) | X] = d(X) = E[d(X) | X] = E[D | X].
This is exactly Eq. (8), and so the result follows.
We are now ready to prove a continuous variant of Theorem 11. The technical hypotheses of the theorem ensure that the conditional probability measures Pr(E | X) are âsufficientlyâ mutually non-singular distributions on X with respect to the distribution of Xâfor example, the conditions ensure that the conditional distribution of XÎ ,A,a | X does not have atoms that X itself does not have, and vice versa. For notational and conceptual simplicity, we only consider the case of trivial ζ, i.e., where ζ(x) = ζ(xâ²) for all x, xâ² â X .
# Proposition 74 Suppose that
1. For all a â A and any event S satisfying Pr(X â S | A = a) > 0, we have, a.s.,
Pr(XÎ ,A,a â S ⨠A = a | X) > 0.
2. For all a â A and ϵ > 0, there exists δ > 0 such that for any event S satisfying Pr(X â S | A = a) < δ, we have, a.s.,
Pr(XÎ ,A,a â S, A ̸= a | X) < ϵ.
Then, for W = X, any Π-fair policy d(x) is constant a.s. (i.e., d(X) = p a.s. for some 0 ⤠p ⤠1).
Proof Let dmax = â¥d(x)â¥â, the essential supremum of d. To establish the theorem state- ment, we show that Pr(d(X) = dmax | A = a) = 1 for all a â A. To do that, we begin by showing that there exists some a â A such that Pr(d(X) = dmax | A = a) > 0.
Assume, toward a contradiction, that for all a â A,
Pr(d(X) = dmax | A = a) = 0. (36)
Because A is finite, there must be some a0 â A such that
Pr(dmax â d(X) < ϵ | A = a0) > 0 (37)
for all ϵ > 0.
Choose a1 ̸= a0. We show that for values of x such that d(x) is close to dmax, the distribution of d(XΠ,A,a1) | X = x must be concentrated near dmax with high probability
96
# a
The Measure and Mismeasure of Fairness
to satisfy the definition of path-specific fairness, in Eq. (8). But, under the assumption in Eq. (36), we also show that the concentration occurs with low probability, by the continuity hypothesis in the statement of the theorem, establishing the contradiction.
Specifically, by Markovâs inequality, for any Ï > 0, a.s.,
Pr(dmax â d(XÎ ,A,a1) â¥ Ï | X) ⤠= E[dmax â d(XÎ ,A,a1) | X] Ï dmax â d(X) Ï ,
where the final equality follows from Lemma 73. Rearranging, it follows that for any Ï > 0, a.s.,
Pr(dmax â d(XÎ ,A,a1) < Ï | X) ⥠1 â dmax â d(X) Ï . (38)
Now let S = {x â X : dmax â d(x) < Ï}. By the second hypothesis of the theorem, we can choose δ sufficiently small that if
Pr(X â S | A = a1) < δ
then, a.s.,
Pr(XÎ ,A,a1 â S, A ̸= a1 | X) < 1 2 .
In other words, we can chose δ such that if
Pr(dmax â d(X) < Ï | A = a1) < δ
then, a.s.,
Pr(dmax â d(XÎ ,A,a1) < Ï, A ̸= a1 | X) < 1 2
By Eq. (36), we can choose ϵ > 0 so small that
Pr(dmax â d(X) < ϵ | A = a1) < δ.
Then, we have that
Pr(dmax â d(XÎ ,A,a1) < ϵ, A ̸= a1 | X) < 1 2 a.s. Further, by the definition of the essential supremum and a0, and the fact that a0 ̸= a1, we have that
Pr(dmax â d(X) < ϵ 2 , A ̸= a1) > 0.
Therefore, with positive probability, we have that
1 â
dmax â d(X) ϵ > 1 â ϵ 2 ϵ = 1 2 > Pr(dmax â d(XÎ ,A,a1) < ϵ, A ̸= a1 | X).
This contradicts Eq. (38), and so it cannot be the case that Pr(d(X) = dmax | A = a0) = 0, meaning Pr(d(X) = dmax | A = a0) > 0.
97
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Now, we show that Pr(d(X) = dmax | A = a1) = 1. Suppose, toward a contradiction, that
Pr(d(X) < dmax | A = a1) > 0.
Then, by the first hypothesis, a.s.,
Pr(d(XΠ,A,a1) < dmax ⨠A = a1 | X) > 0
As a consequence,
dmax = E[d(X) | d(X) = dmax, A = a0] = E[E[d(XÎ ,A,a1) | X] | d(X) = dmax, A = a0] < E[E[dmax | X] | d(X) = dmax, A = a0] = E[dmax | d(X) = dmax, A = a0] = dmax,
where we can condition on the set {d(X) = dmax, A = a0} since Pr(d(X) = dmax | A = a0) > 0; and the second equality above follows from Lemma 73. This establishes the contradiction, and so Pr(d(X) = dmax | A = a1) = 1.
Finally, we extend this equality to all a â A. Since, Pr(d(X) ̸= dmax | A = a1) = 0, we have, by the second hypothesis of the theorem, that, a.s.,
Pr(d(XΠ,A,a1) ̸= dmax, A ̸= a1 | X) = 0.
Since, by definition, Pr(XÎ ,A,a1 = X | A = a1) = 1, and Pr(d(X) = dmax | A = a1) = 1, we can strengthen this to
Pr(d(XΠ,A,a1) ̸= dmax | X) = 0.
Consequently, a.s.,
d(X) = E[d(XÎ ,A,a) | X] = E[dmax | X] = dmax,
where the first equality follows from Lemma 73, establishing the result.
# G.2 A Markov Chain Perspective
The theory of Markov chains illuminatesâand allows us to extendâthe proof of Theo- rem 11. Suppose X = {x1, . . . , xn}.41 For any aâ² â A, let Paâ² = [paâ² i,j = Pr(XÎ ,A,aâ² = xj | X = xi). Then Paâ² is a stochastic matrix.
To motivate the subsequent discussion, we first note that this perspective conceptually simplifies some of our earlier results. Lemma 73 can be recast as stating that when W = X,
41. Because of the technical difficulties associated with characterizing the long-run behavior of arbitrary infinite Markov chains, we restrict our attention in this section to Markov chains with finite state spaces.
98
# a
The Measure and Mismeasure of Fairness
a policy d is Î -fair if and only if Paâ²d = dâi.e., if and only if d is a 1-eigenvector of Paâ²âfor all aâ² â A.
The 1-eigenvectors of Markov chains have a particularly simple structure, which we derive here for completeness.
Lemma 75 Let S1, . . . , Sm denote the recurrent classes of a finite Markov chain with tran- sition matrix P . If d is a 1-eigenvector of P , then d takes a constant value pk on each Sk, k = 1, . . . , m, and
m dj = Y 7] lim SO Pi): Pe: (39) k=1 GES
>>
Remark 76 We note that limnââ jâSk Markov chain, beginning at state i, is eventually absorbed by the recurrent class Sk.
Proof Note that, possibly by reordering the states, we can arrange that the stochastic matrix P is in canonical form, i.e., that
B p=lh al:
where Q is a sub-stochastic matrix, R is non-negative, and
P1 B = P2 . . . Pm
is a block-diagonal matrix with the stochastic matrix Pi corresponding to the transition probabilities on the recurrent set Si in the i-th position along the diagonal.
Now, consider a 1-eigenvector v = [v1 v2]⤠of P . We must have that P v = v, i.e., Bv1 = v1 and Râ²v1 + Qv2 = v2. Therefore v1 is a 1-eigenvector of B. Since B is block diagonal, and each diagonal element is a positive stochastic matrix, it follows by the Perron- Frobenius theorem that the 1-eigenvectors of B are given by Span(1Si)i=1,...,m, where 1Si is the vector which is 1 at index j if j â Si and is 0 otherwise.
Now, for v1 â Span(1Si)i=1,...,m, we must find v2 such that Râ²v1 + Qv2 = v2. Note that every finite Markov chain M can be canonically associated with an absorbing Markov chain M Abs where the set of states of M Abs is exactly the union of the transitive states of M and the recurrent sets of M . (In essence, one tracks which state of M the Markov chain is in until it is absorbed by one of the recurrent sets, at which point the entire recurrent set is treated as a single absorbent state.) The transition matrix P Abs associated with M Abs is given by
ABS __ I P an a]
where R= R'[1s, ... 1g,,]. In particular, it follows that v = [v) v2]' is a 1-eigenvector of P if and only if [Tv1 v2]! is a 1-eigenvector of P*S, where T : 1g, + e:.
99
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Now, if v is a 1-eigenvector of P Abs, then it is a 1-eigenvector of (P Abs)k for all k. Since
Now, if v is a 1-eigenvector of PA®S, then it is a 1-eigenvector of (Pâ®S)* for all k. Since Q is sub-stochastic, the series 77° Q* converges to (I â Q)~1. Since
ABS)k __ I (P = leas soryn tl:
it follows that
lim kââ (P Abs)k = I (I â Q)â1R 0 .
Therefore, if v = [v1 v2]⤠is a 1-eigenvector of P Abs, we must have that (I â Q)â1Rv1 = v2. By Theorem 3.3.7 in Kemeny and Snell (1976), the (i, k) entry of (I â Q)â1R is exactly the probability that, conditional on X0 = xi, the Markov chain is eventually absorbed by the recurrent set Sk. This is, in turn, by the Chapman-Kolmogorov equations and the definition of Sk, equal to limnââ
We arrive at the following simple necessary condition on Î -fair policies.
Corollary 77 Suppose X is finite, and define the stochastic matrix P = 1 |A| d(x) is a Î -fair policy then it is constant on the recurrent classes of P .
Proof By Lemma 73, d is Î -fair if and only if Paâ²d = d for all aâ² â A. Therefore,
Al > Pyd= al > d=d, (40) acA acA
and so d is a 1-eigenvector of P . Therefore it is constant on the recurrent classes of P by Lemma 75.
We note that Theorem 11 follows immediately from this.
Proof of Theorem 11 Note that 1 aâA Pa decomposes into a block diagonal stochastic |A| matrix, where each block corresponds to a single stratum of ζ and is irreducible. Conse- quently, each stratum forms a recurrent class, and the result follows.
# H. Proofs of Propositions 19 and 20
The proofs of Proposition 19 and Proposition 20 rely on certain shared theory about beta distributions. We begin by reviewing this theory before moving onto the proofs of the respective propositions.
# H.1 Beta distributions and stochastic dominance
We begin by introducing incomplete beta functions and distributions.
100
# The Measure and Mismeasure of Fairness
Definition 78 The incomplete beta function It(α, β) for t â (0, 1] is given by
t Tela, 8) = [ 211 = g)P1,
A random variable X is said to be distributed as Betat(α, β) if X ⼠Y | Y < t for Y ⼠Beta(α, β); equivalently, if X has PDF
1xâ(0,t) · xαâ1(1 â x)βâ1 It(α, β) .
An important property relating different beta distributions is stochastic dominance.
Definition 79 Let X and Y be random variables with CDFs FX (t) and FY (t), respectively. We say that X stochastically dominates Y , written Y â¤st X, if FX (t) ⤠FY (t) for all t â R. We say that X strictly stochastically dominates Y , i.e., Y <st X, if FX (t) = FY (t) implies that either FX (t) = FY (t) = 0 or FX (t) = FY (t) = 1.
Stochastic dominance has the following useful property.
Lemma 80 Suppose Y â¤st X. Then, for any monotonically non-decreasing Ï : R â R, we have that E[Ï(Y )] ⤠E[Ï(X)].
For proof, see (1.A.7) in Shaked and Shanthikumar (2007). We will need to improve the inequality to a strict inequality. We begin with the simplest case.
Lemma 81 Suppose Y <st X, where X and Y are positive and FX (t), FY (t) are contin- uous. Then, E[Y · 1Y >t] < E[X · 1X>t] for any t > 0 such that there exists tⲠ⥠t with FX (tâ²) < FY (tâ²).
Proof Since since FY (x) ⥠FX (x) for all x and, in particular, FY (x) > FX (x) on an open interval containing tâ², we have that
E[Y : lysi] =t- [ Fy(t)| [ 1 Fy (x) dx <t-[lâFx(t)] + [ 1â Fx (x) dx = E[X - 1x51]
where we have applied the Darth Vader rule to calculate the expectations (Muldowney et al., 2012).
This leads to the following modest technical generalization.
Lemma 82 Suppose Y <st X, where X and Y are positive and FX (t) and FY (t) are continuous. Suppose that t > 0 is such that there exists tâ² < t with FX (tâ²) < FY (tâ²), and f : R>0 â R>0 is monotonically decreasing and continuous. Then
E[f (Y ) · 1Y <t] > E[f (X) · 1X<t].
101
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Proof Consider the transformed variables f (X) and f (Y ). Since f is monotonically de- creasing and continuous, it is invertible, and in particular
Pr(f (X) < x) = Pr(X > f â1(x)) = 1 â FX (f â1(x)).
It follows that the CDFs of f (X) and f (Y ) are 1 â FX (f â1(x)) and 1 â FY (f â1(x)), respectively. Then, observe that since FX (x) ⤠FY (x) for all x â R,
1 â FX (x) ⥠1 â FY (x)
for all x â R, i.e., f (X) â¤st f (Y ). In particular, since FX (x) = 0 if and only if 1âFX (x) = 1 and vice versa, it follows from the invertibility of f that f (X) <st f (Y ). Therefore, after noting that X < t if and only if f (X) > f (t) and similarly for Y , the result follows from Lemma 81.
It is relatively straightforward to characterize the stochastic dominance relationships between various (incomplete) beta distributions according to α and β; see Arab et al. (2021) for full details. Here, we require only the following result, which closely follows the proof of Theorem 1 there.
Lemma 83 If X â¼ Betat(α0, β0), Y â¼ Betat(α1, β1), where α0 ⥠α1 and β0 ⤠β1, then Y â¤st X. If, in addition, either α0 > α1 or β0 < β1, then Y <st X.
Proof Consider the CDFs FX (s) and FY (s). We will use the difference G(s) = FX (s) â FY (s) to demonstrate the result. The case where α0 = α1 and β0 = β1 is trivial, so we restrict our attention to the case where one of the inequalities is strict. For simplicity, we assume that α0 > α1; the case where β0 < β1 is virtually identical.
In particular, observe that G(0) = G(t) = 0, and that for s â (0, t),
Gi(s) = Sa s)at get = 9) qi on Fi(ar, 61) = _ sea 8) 301700] â 5) 91-80 _ I:(an, 81) I(a1, 61) I,(a0, Bo)
We consider the two multiplicands in the final expression. We note that the first is greater than zero for all s â (0, t). Therefore, for s â (0, t), Gâ²(s) = 0 if and only if
sα1âα0(1 â s)β1âβ0 = It(α1, β1) It(α0, β0) .
Now, since α0 > α1 and β0 ⤠β1, it follows that the left-hand side of the previous expression is strictly decreasing. In particular, Gâ²(s) = 0 for at most one s â (0, t). Since G(0) = G(t) = 0 and G(t) is non-constant, it follows that Gâ²(s) = 0 for exactly one s â (0, t) by Rolleâs theorem. In particular, either G(s) > 0 for all s â (0, t) or G(s) < 0 for all t â (0, t).
sα1âα0(1 â s)β1âβ0 â It(α1, β1) It(α0, β0)
102
The Measure and Mismeasure of Fairness
changes sign for some s0 â (0, t). Since the minuend is strictly decreasing, it follows that Gâ²(s) > 0 for s â (0, s0). Therefore, in particular, G(s) > 0 for all s â (0, s0), and hence for s â (0, t). Therefore FX (s) > FY (s) for s â (0, t), and so Y <st X.
# H.2 Proof of Proposition 19
Our proof is a relatively straightforward application of Sardâs theorem. Intuitively, counter- factual predictive parity imposes three constraints on α0, β0, α1, β1, and the group-specific thresholds t0 and t1. These constraints are sufficiently smooth that the zero locus should take the form of a 3-manifold, by the inverse function theorem. Projecting onto the first four coordinates (i.e., eliminating t0 and t1) gives rise to a set of measure zero by Sardâs theorem.
Proof of Proposition 19 We begin by noting that the risk distributions, conditional on Y (1) = i, for i = 0, 1, take a particularly simple form.
r(X) | A = ai, Y (1) = 1 ⼠Beta(αi + 1, βi), r(X) | A = ai, Y (1) = 0 ⼠Beta(αi, βi + 1).
This follows upon noting that
1 1 ; B(ai, Bi) [ we Dyce: a VL 0) dar 4, Pi 0 â [Li(ai+1, Bi) ~ Bai, Bi) 1 Pr(r(X) < t, Y(1) =0| A= aj) Be. By / (1â2)- pep 211 â 2)! de â [fai, Bi +1) Bla, Bi) Pr(r(X) <t,Y(1) =1| A=ai)
By Cor. 29, if there exists a Pareto efficient policy satisfying counterfactual equalized odds, then it must correspond to some multiple threshold policy. In particular, by Eq. (4) and the fact that the policy is budget exhausting, and using Prop. 15 with λ = 0, there must be t0, t1 such that
It0(α0, β0) B(α0, β0) It1(α1, β1) B(α1, β1) It0(α0 + 1, β0) B(α0 + 1, β0) It0(α0, β0 + 1) B(α0, β0 + 1) + = 2 â b = = It1(α1 + 1, β1) B(α1 + 1, β1) It1(α1, β1 + 1) B(α1, β1 + 1) .
Here, the first equality encodes budget exhaustion, the second the fact that Pr(D = 1 | A = a0, Y (1) = 1) = Pr(D = 1 | A = a1, Y (1) = 1), and the third the fact that Pr(D = 1 | A = a0, Y (1) = 0) = Pr(D = 1 | A = a1, Y (1) = 0).
103
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Let f : R6 â R3 be given by
f0(α0, β0, t0, α1, β1, t1) = f1(α0, β0, t0, α1, β1, t1) = f2(α0, β0, t0, α1, β1, t1) = It0(α0, β0) B(α0, β0) It0(α0 + 1, β0) B(α0 + 1, β0) It0(α0, β0 + 1) B(α0, β0 + 1) + It1(α1, β1) B(α1, β1) â (2 â b), â â It1(α1 + 1, β1) B(α1 + 1, β1) It1(α1, β1 + 1) B(α1, β1 + 1) , .
Then, given α0, β0, α1 and β1, there exists a Pareto optimal policy satisfying counterfactual equalized odds only if there exist t0 and t1 such that
f (α0, β0, t0, α1, β1, t1) = 0.
Let Df denote the Jacobian of f . If we can show that f is smooth and Df has full rank, then the proof is complete. For, by Theorem 5.12 in Lee (2013), it follows that f â1(0) â R6 is a smooth 3-manifold. The restriction of the map
m : (a0, Bo, to, 01, 81, t1) + (a0, Bo, a1, B1)
to f â1(0) is smooth, and so by Sardâs theorem (see Theorem 6.10 in Lee (2013)), since >0 is a trivially a smooth 4-manifold, the measure of the singular values of Ï â¾ R4 f â1(0) is zero. However, since the maximum rank of DÏ on f â1(0) is three, every point of f â1(0) is singular, and consequently, the whole image of Ï is singular, i.e., Ï(f â1(0)) has measure zero. However, as argued above, the set of (α0, β0, α1, β1) such that there exists a Pareto efficient distribution satisfying counterfactual equalized odds is a subset of Ï(f â1(0)).
Therefore, it remains only to show that f is smooth, and that Df has full rank for all >0 à [0, 1]. This verification is a routine exercise in (α0, β0, t0, α1, β1, t1) â R2 linear algebra and multivariable calculus, and is given below. >0 à [0, 1] à R2
Smoothness. We note that since smooth functions are closed under composition, it suf- fices to show that It(α, β) is a smooth function of t, α, and β. First, we consider partial derivatives with respect to α and β. If we could differentiate under the integral sign, then we would have that
gntm t _ (pyr _ »\m,a-1/q _ »,)8-1 a, Baragm le 8) [ log(x)" log(1 â x)â¢a** (1 â 2)?" da. (41)
Recall the well-known condition for the Leibniz integral rule that if â nated by some integrable g(t) for all xâ² in some neighborhood of x, then42 âx Ï(x, t)|x=xâ² is domi-
d ft tg al ola,t)at= f jy Hts t) at.
Since the integrand log(x)n log(1 â x)mxαâ1(1 â x)βâ1 is strictly decreasing in both α and β for x â (0, 1), it suffices merely to show that log(x)n log(1 â x)mxαâ1(1 â x)βâ1 is integrable on (0, 1) for all α > 0 and β > 0. Moreover, since
42. See, e.g., Theorem 6.28 in Klenke (2020).
104
# The Measure and Mismeasure of Fairness
⢠The whole integrand is bounded on (ϵ, 1 â ϵ),
⢠The factor log(1 â x)m(1 â x)βâ1 is bounded on (0, ϵ),
⢠The factor log(x)nxαâ1 is bounded on (1 â ϵ, 1),
it suffices to show that log(x)"2°â! is integrable on (0,¢) and that log(1 â #)(1 â x)9-+ is integrable on (1 â â¬,1). Up to the change of variables 1 +> 1 â x, since n, m, a, and 3 are arbitrary, we see that it suffices to verify that log(x)"x°~! alone is integrable on (0, 1). Integrating by parts, we have that
t log(x)"x*]! 1 | log(2)"a° 1d = jeer | â nf log(x)?~ tae! da. 0 0 Qa 0
Since, by lâHËopitalâs rule, limxâ0 log(x)nxα = 0, this expression equals
1 on log(a)"âtxe! da. 0
Since x뱉1 is integrable on 0, 1 we see inductively that so is log(x)nx뱉1. Therefore, we can differentiate under the integral sign, and Eq. (41) holds. Now, taking derivatives with respect to t yields that
ân+m+1 âtâαnâβm It(α, β) = log(t)n log(1 â t)mtαâ1(1 â t)βâ1,
which is a polynomial in smooth functions of t, and hence is smooth in t. Therefore
ân+m+k âtkâαnâβm It(α, β)
exists and is a polynomial in t for all k > 1.
By continuity, the orders of the partial derivatives can be switched arbitrarily (see, e.g., Theorem 9.40 in Rudin (1976)), and so it follows that f is smooth.
By the rank-nullity theorem, it suffices to show that the column rank of Full rank. Df is three. However, letting Bα,β ⼠Beta(α, β) we have, by the results of the previous section, that the first three columns (i.e., partial derivatives with respect to α0, β0, and t0, respectively) of Df are given by
tα0â1 0
E[log(Bα0,β0) · 1Bα0,β0 <t0] E[log(Bβ0,α0) · 1Bβ0,α0 <1ât0] E[log(Bα0+1,β0) · 1Bα0+1,β0 <t0] E[log(Bβ0,α0+1) · 1Bβ0,α0+1<1ât0] E[log(Bα0,β0+1) · 1Bα0,β0+1<t0] E[log(Bβ0+1,α0) · 1Bβ0+1,α0 <1ât0] tα0â1 0 ·(1ât0)β0â1 B(α0,β0) tα0 0 ·(1ât0)β0â1 B(α0+1,β0) tα0â1 0 B(α0,β0+1) ·(1ât0)β0 .
It is easy to see that the first two columns are necessarily independent, since all entries are negative but, by Lemmata 82 and 83, taking f (x) = â log(x), we have that
E[log(Bα0+1,β0) · 1Bα0+1,β0 <t0] > E[log(Bα0,β0+1) · 1Bα0,β0+1<t0]
105
.
# Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
while
E[log(Bβ0,α0+1) · 1Bβ0,α0+1<1ât0] < E[log(Bβ0+1,α0) · 1Bβ0+1,α0 <1ât0]. Now observe that the final three columns of Df âi.e., its partial derivatives with respect to α1, β1, and t1, respectivelyâare given by
tα1â1 1 âE[log(Bα1+1,β1) · 1Bα1+1,β1 <t1] âE[log(Bβ1,α1+1) · 1Bβ1,α1+1<1ât1] â tα1 âE[log(Bα1,β1+1) · 1Bα1,β1+1<t1] âE[log(Bβ1+1,α1) · 1Bβ1+1,α1 <1ât1] â tα1â1
In particular, we notice that in the fourth columnâi.e., partial derivatives with respect to α1âthe element in the first row is negative, while the elements in the second and third row are positive. Therefore, the fourth column is necessarily independent of the first two columnsâi.e., the partial derivatives with respect to α0 and β0âin which every element is negative. Therefore, we have proven that there are three linearly independent columns, and so the rank of Df is full.
# H.3 Proof of Proposition 20
To prove the proposition, we must use our characterizations of the conditional tail risks of the beta distribution proven in Appendix H.1 above. Note that in Proposition 20, for expositional clarity, we parameterize beta distributions in terms of their mean µ and sample size v; here, for mathematical simplicity, we parameterize them in terms of successes, α, and failures, β, where µ = α
Using the theory above, we begin by proving a modest generalization of Prop. 20.
Lemma 84 Suppose A = {a0, a1}, and consider the family U of utility functions of the form
u(x) = r(x) + λ · 1α(x)=a1,
indexed by λ ⥠0, where r(x) = E[Y (1) | X = x]. Suppose the conditional distributions of r(X) given A are beta distributed, i.e.,
D(r(X) | A = a) = Beta(αa, βa),
with αa1 < αa0 and βa0 < βa1. Then any policy satisfying counterfactual predictive parity is strongly Pareto dominated.
Proof Suppose there were a Pareto efficient policy satisfying counterfactual predictive parity. Let λ = 0. Then, by Prop. 15, we may without loss of generality assume that there exist thresholds ta0, ta1 such that a threshold policy Ï (x) witnessing Pareto efficiency is given by
Ï (x) = 1 r(x) > tα(x), 0 r(x) < tα(x).
106
.
# The Measure and Mismeasure of Fairness
(Note that by our distributional assumption, Pr(u(x) = t) = 0 for all t â [0, 1].) Since λ ⥠0, we must have that ta0 ⥠ta1. Since b < 1, 0 < ta0. Therefore,
E[Y (1) | A = a0, D = 0] = E[r(X) | A = a0, u(X) < ta0] ⥠E[r(X) | A = a0, u(X) < ta1] > E[r(X) | A = a1, u(X) < ta1] = E[Y (1) | A = a1, D = 0],
where the first equality follows by the law of iterated expectation, the second from the fact that ta1 ⤠ta0, the third from our distributional assumption and Lemmata 80 and 83, and the final again from the law of iterated expectation. However, since counterfactual predic- tive parity is satisfied, E[Y (1) | A = a0, D = 0] = E[Y (1) | A = a1, D = 0], which is a contradiction. Therefore, no such threshold policy exists.
After accounting for the difference in parameterization, Prop. 20 follows as a corollary. Proof of Prop. 20 Since µa0 > µa1, αa0 = v · µa0 > v · µa1 = αa1 and βa0 = v · (1 â µa0) < v · (1 â µa1) = βa1. Therefore βa0 < βa1 and αa1 < αa0, and so, by Lemma 84, the proposi- tion follows.
107
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
# References
Alekh Agarwal, Alina Beygelzimer, Miroslav Dud´ık, John Langford, and Hanna Wallach. In International Conference on Machine A reductions approach to fair classification. Learning, 2018.
Rahul Aggarwal, Kirsten Bibbins-Domingo, Robert W. Yeh, Yang Song, Nicholas Chiu, Rishi K. Wadhera, Changyu Shen, and Dhruv S. Kazi. Diabetes screening by race and ethnicity in the United States: Equivalent body mass index and age thresholds. Annals of Internal Medicine, 175(6):765â773, 2022.
Robert M Anderson and William R Zame. Genericity with infinitely many parameters. Advances in Theoretical Economics, 1(1):1â62, 2001.
Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine bias: Thereâs software used across the country to predict future criminals. and itâs biased against blacks. ProPublica, 5 2016.
Shamena Anwar and Hanming Fang. An alternative test of racial prejudice in motor vehicle searches: Theory and evidence. The American Economic Review, 2006.
Idir Arab, Paulo Eduardo Oliveira, and Tilo Wiklund. Convex transform order of beta distributions with some consequences. Statistica Neerlandica, 75(3):238â256, 2021.
Imanol Arrieta-Ibarra, Paman Gujral, Jonathan Tannen, Mark Tygert, and Cherie Xu. Metrics of calibration for probabilistic predictions. Journal of Machine Learning Research, 2022.
Kenneth Arrow. The theory of discrimination. In Discrimination in labor markets. Princeton University Press, 1973.
Ian Ayres. Outcome tests of racial disparities in police practices. Justice Research and Policy, 4(1-2):131â142, 2002.
Jack M Balkin and Reva B Siegel. The American civil rights tradition: Anticlassification or antisubordination. Issues in Legal Scholarship, 2(1), 2003.
Michelle Bao, Angela Zhou, Samantha Zottola, Brian Brubach, Sarah Desmarais, Aaron Horowitz, Kristian Lum, and Suresh Venkatasubramanian. Itâs COMPASlicated: The messy relationship between RAI datasets and algorithmic fairness benchmarks. arXiv preprint arXiv:2106.05498, 2021.
Chelsea Barabas, Madars Virza, Karthik Dinakar, Joichi Ito, and Jonathan Zittrain. Inter- ventions over predictions: Reframing the ethical debate for actuarial risk assessment. In Conference on Fairness, Accountability and Transparency, pages 62â76, 2018.
Solon Barocas and Andrew D Selbst. Big dataâs disparate impact. Cal. L. Rev., 104:671, 2016.
Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning: Limitations and Opportunities. fairmlbook.org, 2019. http://www.fairmlbook.org.
108
The Measure and Mismeasure of Fairness
Gary S Becker. The Economics of Discrimination. University of Chicago Press, 1957.
Benji. The sum of an uncountable number of positive numbers. Mathematics Stack Ex- change, 2020. URL https://math.stackexchange.com/q/20661. (version: 2020-05-29).
Richard Berk. Criminal Justice Forecasts of Risk: A Machine Learning Approach. Springer Science & Business Media, 2012.
Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, and Aaron Roth. Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research, 50(1):3â44, 2021.
Marianne Bertrand and Sendhil Mullainathan. Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American Eco- nomic Review, 94(4):991â1013, 2004.
Patrick Billingsley. Probability and Measure. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, Inc., New York, third edition, 1995. ISBN 0-471-00710-2. A Wiley-Interscience Publication.
Avrim Blum and Kevin Stangl. Recovering from biased data: Can fairness constraints improve accuracy? arXiv preprint arXiv:1912.01094, 2019.
Henk Brozius. Conditional expectation - E[f (X, Y )|Y ]. Mathematics Stack Exchange, 2019. URL https://math.stackexchange.com/q/3247577. (Version: 2019-06-01).
Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Trans- parency, pages 77â91, 2018.
William Cai, Johann Gaebler, Nikhil Garg, and Sharad Goel. Fair allocation through selective information acquisition. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pages 22â28, 2020.
William Cai, Ro Encarnacion, Bobbie Chern, Sam Corbett-Davies, Miranda Bogen, Stevie Bergman, and Sharad Goel. Adaptive sampling strategies to construct equitable training datasets. In Proceedings of the Conference on Fairness, Accountability, and Transparency, 2022a.
William Cai, Johann Gaebler, Justin Kaashoek, Lisa Pinals, Samuel Madden, and Sharad Goel. Measuring racial and ethnic disparities in traffic enforcement with large-scale telem- atics data. PNAS Nexus, 1(4):pgac144, 2022b.
Toon Calders and Sicco Verwer. Three naive Bayes approaches for discrimination-free classification. Data Mining and Knowledge Discovery, 21(2):277â292, 2010.
Alycia N Carey and Xintao Wu. The causal fairness field guide: Perspectives from social and formal sciences. Frontiers in Big Data, 5, 2022.
James H Carr and Isaac F Megbolugbe. The Federal Reserve Bank of Boston study on mortgage lending revisited. Fannie Mae Office of Housing Policy Research, 1993.
109
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Centers for Disease Control and Prevention. National Health and Nutrition Exami- nation Survey data. Technical report, National Center for Health Statistics, 2011- 2018. URL https://wwwn.cdc.gov/nchs/nhanes/continuousnhanes/overview.aspx? BeginYear=2011.
Jessica P CerdeËna, Marie V Plaisime, and Jennifer Tsai. From race-based to race-conscious medicine: How anti-racist uprisings call us to act. The Lancet, 396(10257):1125â1128, 2020.
Silvia Chiappa. Path-specific counterfactual fairness. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7801â7808, 2019.
Alex Chohlas-Wood, Joe Nudell, Keniel Yao, Zhiyuan Lin, Julian Nyarko, and Sharad Goel. Blind justice: Algorithmically masking race in charging decisions. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 35â45, 2021.
Alex Chohlas-Wood, Madison Coots, Emma Brunskill, and Sharad Goel. Learning to arXiv preprint be fair: A consequentialist approach to equitable decision-making. arXiv:2109.08792, 2023a.
Alex Chohlas-Wood, Madison Coots, Sharad Goel, and Julian Nyarko. Designing equitable algorithms. Nature Computational Science, 3, 2023b.
Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidi- vism prediction instruments. Big Data, 5(2):153â163, 2017.
Alexandra Chouldechova and Aaron Roth. A snapshot of the frontiers of fairness in machine learning. Communications of the ACM, 63(5):82â89, 2020.
Alexandra Chouldechova, Diana Benavides-Prado, Oleksandr Fialko, and Rhema Vaithi- anathan. A case study of algorithm-assisted decision making in child maltreatment hot- line screening decisions. In Conference on Fairness, Accountability and Transparency, pages 134â148, 2018.
Jens Peter Reus Christensen. On sets of Haar measure zero in Abelian Polish groups. Israel Journal of Mathematics, 13(3-4):255â260, 1972.
T Anne Cleary. Test bias: Prediction of grades of Negro and white students in integrated colleges. Journal of Educational Measurement, 5(2):115â124, 1968.
Ruth Colker. Anti-subordination above all: Sex, race, and equal protection. NYUL Rev., 61:1003, 1986.
Madison Coots, Soroush Saghafian, David Kent, and Sharad Goel. Reevaluating the role of race and ethnicity in diabetes screening. arXiv preprint arXiv:2306.10220, 2023.
Sam Corbett-Davies and Sharad Goel. The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv preprint arXiv:1808.00023v2, 2018.
110
The Measure and Mismeasure of Fairness
Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 797â806, 2017.
Amanda Coston, Alan Mishler, Edward H Kennedy, and Alexandra Chouldechova. Counter- factual risk assessments, evaluation, and fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 582â593, 2020.
Andrew Cotter, Heinrich Jiang, and Karthik Sridharan. Two-player games for efficient non-convex constrained optimization. In Algorithmic Learning Theory, pages 300â332. PMLR, 2019.
Stewart J DâAlessio and Lisa Stolzenberg. Race and the probability of arrest. Social forces, 81(4):1381â1397, 2003.
Richard B Darlington. Another look at âcultural fairnessâ. Journal of Educational Mea- surement, 8(2):71â82, 1971.
Matthew DeMichele, Peter Baumgartner, Michael Wenger, Kelle Barrick, Megan Comfort, and Shilpi Misra. The Public Safety Assessment: A re-validation and assessment of predictive utility and differential prediction by race and gender in Kentucky, 2018. URL https://papers.ssrn.com/abstract=3168452.
Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fair- ness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 214â226, 2012.
Harrison Edwards and Amos Storkey. Censoring representations with an adversary. Proceedings of the International Conference in Learning Representations, 2016. In
Robin S Engel and Rob Tillyer. Searching for equilibrium: The tenuous nature of the outcome test. Justice Quarterly, 25(1):54â71, 2008.
Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkata- subramanian. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 259â 268. ACM, 2015.
Fisher v. University of Texas. 579 U.S., 2016.
Owen M Fiss. Groups and the Equal Protection Clause. Philosophy & Public Affairs, pages 107â177, 1976.
Johann Gaebler, William Cai, Guillaume Basse, Ravi Shroff, Sharad Goel, and Jennifer Hill. A causal framework for observational studies of discrimination. Statistics and Public Policy, 2022.
Sainyam Galhotra, Karthikeyan Shanmugam, Prasanna Sattigeri, and Kush R Varshney. Causal feature selection for algorithmic fairness. Proceedings of the 2022 International Conference on Management of Data (SIGMOD), 2022.
111
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
George C Galster. The facts of lending discrimination cannot be argued away by examining default rates. Housing Policy Debate, 4(1):141â146, 1993.
Sharad Goel, Maya Perelman, Ravi Shroff, and David Alan Sklansky. Combatting police discrimination in the age of big data. New Criminal Law Review: An International and Interdisciplinary Journal, 20(2):181â232, 2017.
Claudia Goldin and Cecilia Rouse. Orchestrating impartiality: The impact of âblindâ auditions on female musicians. American Economic Review, 90(4):715â741, 2000.
D James Greiner and Donald B Rubin. Causal effects of perceived immutable characteristics. Review of Economics and Statistics, 93(3):775â785, 2011.
Jeffrey Grogger and Greg Ridgeway. Testing for racial profiling in traffic stops from behind a veil of darkness. Journal of the American Statistical Association, 101(475):878â887, 2006.
Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems, 29:3315â3323, 2016.
Ursula H´ebert-Johnson, Michael Kim, Omer Reingold, and Guy Rothblum. Multicalibra- tion: Calibration for the (computationally-identifiable) masses. In International Confer- ence on Machine Learning, pages 1939â1948. PMLR, 2018.
Jennifer L Hill. Bayesian nonparametric modeling for causal inference. Journal of Compu- tational and Graphical Statistics, 20(1):217â240, 2011.
Paul W Holland. Statistics and causal inference. Journal of the American Statistical Asso- ciation, 81(396):945â960, 1986.
In Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency, 2020.
Brian R Hunt, Tim Sauer, and James A Yorke. Prevalence: a translation-invariant âalmost everyâ on infinite-dimensional spaces. Bulletin of the American Mathematical Society, 27 (2):217â238, 1992.
Idaho H.B. 118. H.B. 118, 65th Leg., 1st Reg. Sess., 2019. https://legislature.idaho. gov/wp-content/uploads/sessioninfo/2019/legislation/H0118.pdf.
Kosuke Imai and Zhichao Jiang. Principal fairness for human and algorithmic decision- making. arXiv preprint arXiv:2005.10400, 2020.
Kosuke Imai, Zhichao Jiang, James Greiner, Ryan Halen, and Sooahn Shin. Experimental evaluation of algorithm-assisted human decision-making: Application to pretrial public safety assessment. arXiv preprint arXiv:2012.02845, 2020.
Guido W Imbens and Donald B Rubin. Causal Inference in Statistics, Social, and Biomed- ical Sciences. Cambridge University Press, 2015.
112
The Measure and Mismeasure of Fairness
Christopher Jung, Sampath Kannan, Changhwa Lee, Mallesh Pai, Aaron Roth, and Rakesh Vohra. Fair prediction with endogenous behavior. In Proceedings of the 21st ACM Con- ference on Economics and Computation, pages 677â678, 2020a.
Jongbin Jung, Connor Concannon, Ravi Shroff, Sharad Goel, and Daniel G Goldstein. Simple rules to guide expert classifications. Journal of the Royal Statistical Society: Series A (Statistics in Society), 183(3):771â800, 2020b.
Jongbin Jung, Ravi Shroff, Avi Feller, and Sharad Goel. Bayesian sensitivity analysis for offline policy evaluation. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pages 64â70, 2020c.
Faisal Kamiran, IndrËe ËZliobaitËe, and Toon Calders. Quantifying explainable discrimina- tion and removing illegal discrimination in automated decision making. Knowledge and Information Systems, 35(3):613â644, 2013.
John G. Kemeny and J. Laurie Snell. Finite Markov Chains. Undergraduate Texts in Math- ematics. Springer-Verlag, New York-Heidelberg, 1976. Reprinting of the 1960 original.
Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Sch¨olkopf. Avoiding discrimination through causal reasoning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 656â666, 2017.
Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mul- lainathan. Human decisions and machine predictions. The Quarterly Journal of Eco- nomics, 133(1):237â293, 2017a.
Inherent trade-offs in the fair determination of risk scores. In Proceedings of Innovations in Theoretical Computer Science (ITCS), 2017b.
Achim Klenke. Probability Theory: A Comprehensive Course. Universitext. Springer, Cham, 2020. Third edition.
John Knowles, Nicola Persico, and Petra Todd. Racial bias in motor vehicle searches: Theory and evidence. Journal of Political Economy, 109(1), 2001.
Allison Koenecke, Andrew Nam, Emily Lake, Joe Nudell, Minnie Quartey, Zion Mengesha, Connor Toups, John R Rickford, Dan Jurafsky, and Sharad Goel. Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences, 117(14): 7684â7689, 2020.
Allison Koenecke, Eric Giannella, Robb Willer, and Sharad Goel. Popular support for balancing equity and efficiency in resource allocation: A case study in online advertising to increase welfare program awareness. In Proceedings of the International AAAI Conference on Web and Social Media, volume 17, pages 494â506, 2023.
Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. In Advances in Neural Information Processing Systems, pages 4066â4076, 2017.
113
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
John M. Lee. Introduction to Smooth Manifolds, volume 218 of Graduate Texts in Mathe- matics. Springer, New York, second edition, 2013.
Annie Liang, Jay Lu, and Xiaosheng Mu. Algorithmic design: Fairness versus accuracy. In Proceedings of the 23rd ACM Conference on Economics and Computation, pages 58â59, 2022.
Joshua R Loftus, Chris Russell, Matt J Kusner, and Ricardo Silva. Causal reasoning for algorithmic fairness. arXiv preprint arXiv:1805.05859, 2018.
Kristian Lum and William Isaac. To predict and serve? Significance, 13(5):14â19, 2016.
Charles F Manski, John Mullahy, and Atheendar Venkataramani. Using measures of race to make clinical predictions: Decision making, patient health, and fairness. Technical report, National Bureau of Economic Research, 2022.
Vishwali Mhasawade and Rumi Chunara. Causal multi-level fairness. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 784â794, 2021.
Alan Mishler, Edward H Kennedy, and Alexandra Chouldechova. Fairness in risk assessment instruments: Post-processing to achieve counterfactual equalized odds. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 386â400, 2021.
Pat Muldowney, Krzysztof Ostaszewski, and Wojciech Wojdowski. The Darth Vader rule. Tatra Mountains Mathematical Publications, 52(1):53â63, 2012.
Sendhil Mullainathan and Jann Spiess. Machine learning: An applied econometric approach. Journal of Economic Perspectives, 31(2):87â106, 2017.
Razieh Nabi and Ilya Shpitser. Fair inference on outcomes. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018.
Hamed Nilforoshan, Johann D Gaebler, Ravi Shroff, and Sharad Goel. Causal conceptions of fairness and their consequences. In International Conference on Machine Learning, pages 16848â16887. PMLR, 2022.
Abigail Nurse. Anti-subordination in the Equal Protection Clause: A case study. NYUL Rev., 89:293, 2014.
Julian Nyarko, Sharad Goel, and Roseanna Sommers. Breaking taboos in fair machine learning: An experimental study. In Equity and Access in Algorithms, Mechanisms, and Optimization. Association for Computing Machinery, 2021.
Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464): 447â453, 2019.
William Ott and James Yorke. Prevalence. Bulletin of the American Mathematical Society, 42(3):263â290, 2005.
114
The Measure and Mismeasure of Fairness
Scott E Page. Making the difference: Applying a logic of diversity. Academy of Management Perspectives, 21(4):6â20, 2007.
Jessica K Paulus and David M Kent. Predictably unequal: Understanding and addressing concerns that algorithmic clinical prediction may increase health disparities. NPJ Digital Medicine, 3(1):1â8, 2020.
Judea Pearl. Direct and indirect effects. In Proceedings of the Seventeenth Conference on Uncertainty and Artificial Intelligence, 2001, pages 411â420. Morgan Kaufman, 2001.
Judea Pearl. Causal inference in statistics: An overview. Statistics surveys, 3:96â146, 2009a.
Judea Pearl. Causality. Cambridge University Press, second edition, 2009b.
Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. Discrimination-aware data mining. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 560â568, 2008.
Edmund S Phelps. The statistical theory of racism and sexism. The American Economic Review, 62(4):659â661, 1972.
Emma Pierson, Sam Corbett-Davies, and Sharad Goel. Fast threshold tests for detect- ing discrimination. In The 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 2018.
Emma Pierson, Camelia Simoiu, Jan Overgoor, Sam Corbett-Davies, Daniel Jenson, Amy Shoemaker, Vignesh Ramachandran, Phoebe Barghouty, Cheryl Phillips, Ravi Shroff, and Sharad Goel. A large-scale analysis of racial disparities in police stops across the United States. Nature Human Behaviour, 4(7):736â745, 2020.
Richard A Primus. Equal protection and disparate impact: Round three. Harv. L. Rev., 117:494, 2003.
Rajeev Ramchand, Rosalie Liccardo Pacula, and Martin Y Iguchi. Racial differences in marijuana-usersâ risk of arrest in the United States. Drug and alcohol dependence, 84(3): 264â272, 2006.
M. M. Rao. Conditional measures and applications, volume 271 of Pure and Applied Math- ematics (Boca Raton). Chapman & Hall/CRC, Boca Raton, FL, second edition, 2005.
Guy N Rothblum and Gal Yona. Decision-making under miscalibration. arXiv preprint arXiv:2203.09852, 2022.
Walter Rudin. Principles of Mathematical Analysis, volume 3. McGraw-Hill New York, 1976.
Walter Rudin. Real and Complex Analysis. McGraw-Hill Book Co., New York, third edition, 1987. ISBN 0-07-054234-1.
Walter Rudin. Functional Analysis. International Series in Pure and Applied Mathematics. McGraw-Hill, Inc., New York, second edition, 1991. ISBN 0-07-054236-8.
115
Corbett-Davies, Gaebler, Nilforoshan, Shroff, and Goel
Laleh Seyyed-Kalantari, Haoran Zhang, Matthew McDermott, Irene Y Chen, and Marzyeh Ghassemi. Underdiagnosis bias of artificial intelligence algorithms applied to chest radio- graphs in under-served patient populations. Nature Medicine, 27(12):2176â2182, 2021.
SFFA v. Harvard. Students for Fair Admissions, Inc., Petitioner, v. President and Fel- lows of Harvard College. Students for Fair Admissions, Inc., Petitioner, v. University of North Carolina, et al., 2023. https://www.supremecourt.gov/opinions/22pdf/ 20-1199_l6gn.pdf.
Moshe Shaked and J George Shanthikumar. Stochastic Orders. Springer, 2007.
Ravi Shroff. Predictive analytics for city agencies: Lessons from childrenâs services. Big Data, 5(3):189â196, 2017.
Reva B Siegel. Equality talk: Antisubordination and anticlassification values in constitu- tional struggles over Brown. Harv. L. Rev., 117:1470, 2003.
C. E. Silva. Invitation to Ergodic Theory, volume 42 of Student Mathematical Library. American Mathematical Society, Providence, RI, 2008.
Camelia Simoiu, Sam Corbett-Davies, and Sharad Goel. The problem of infra-marginality in outcome tests for discrimination. The Annals of Applied Statistics, 11(3):1193â1216, 2017.
Jennifer Skeem, John Monahan, and Christopher Lowenkamp. Gender, risk assessment, and sanctioning: The cost of treating women like men. Law and human behavior, 40(5): 580, 2016.
Jennifer L. Skeem and Christopher T. Lowenkamp. Risk, race, and recidivism: Predictive bias and disparate impact. Criminology, 54(4):680â712, 2016.
Rhys Steele. Space of vector measures equipped with the total variation norm is com- plete. Mathematics Stack Exchange, 2019. URL https://math.stackexchange.com/q/ 3197508. (Version: 2019-04-22).
Yixin Wang, Dhanya Sridhar, and David M Blei. Equal opportunity and affirmative action via counterfactual predictions. arXiv preprint arXiv:1905.10870, 2019.
Watson v. Fort Worth. Watson v. Forth Worth Bank & Trust, 1988. 487 U.S. 977.
Hilde Weerts, Miroslav Dud´ık, Richard Edgar, Adrin Jalali, Roman Lutz, and Michael Madaio. Fairlearn: Assessing and improving fairness of ai systems, 2023.
Teresa Scotton Williams. Some issues in the standardized testing of minority students. Journal of Education, pages 192â208, 1983.
Blake Woodworth, Suriya Gunasekar, Mesrob I Ohannessian, and Nathan Srebro. Learn- ing non-discriminatory predictors. In Conference on Learning Theory, pages 1920â1953. PMLR, 2017.
116
The Measure and Mismeasure of Fairness
Yongkai Wu, Lu Zhang, Xintao Wu, and Hanghang Tong. PC-fairness: A unified frame- work for measuring causality-based fairness. Advances in Neural Information Processing Systems, 32, 2019.
Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web, pages 1171â1180, 2017a.
Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P Gummadi, and Adrian Weller. From parity to preference-based notions of fairness in classification. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 228â238, 2017b.
Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rogriguez, and Krishna P Gummadi. In Artificial Intelligence and Fairness constraints: Mechanisms for fair classification. Statistics, pages 962â970, 2017c.
Michael Zanger-Tishler, Julian Nyarko, and Sharad Goel. Risk scores, label bias, and everything but the kitchen sink. arXiv preprint arXiv:2305.12638, 2023.
Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair repre- sentations. In International Conference on Machine Learning, pages 325â333, 2013.
Junzhe Zhang and Elias Bareinboim. Fairness in decision-makingâthe causal explanation formula. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
Lu Zhang, Yongkai Wu, and Xintao Wu. A causal framework for discovering and remov- In Proceedings of the 26th International Joint ing direct and indirect discrimination. Conference on Artificial Intelligence, pages 3929â3935, 2017.
117 | {
"id": "2203.09852"
} |
1807.11346 | Dropout-GAN: Learning from a Dynamic Ensemble of Discriminators | We propose to incorporate adversarial dropout in generative multi-adversarial
networks, by omitting or dropping out, the feedback of each discriminator in
the framework with some probability at the end of each batch. Our approach
forces the single generator not to constrain its output to satisfy a single
discriminator, but, instead, to satisfy a dynamic ensemble of discriminators.
We show that this leads to a more generalized generator, promoting variety in
the generated samples and avoiding the common mode collapse problem commonly
experienced with generative adversarial networks (GANs). We further provide
evidence that the proposed framework, named Dropout-GAN, promotes sample
diversity both within and across epochs, eliminating mode collapse and
stabilizing training. | http://arxiv.org/pdf/1807.11346 | Gonçalo Mordido, Haojin Yang, Christoph Meinel | cs.LG, stat.ML | Extended version of ACM KDD'18 Deep Learning Day | null | cs.LG | 20180730 | 20200120 | 0 2 0 2
# n a J
0 2 ] G L . s c [
2 v 6 4 3 1 1 . 7 0 8 1 : v i X r a
# Dropout-GAN: Learning from a Dynamic Ensemble of Discriminators
# Gonc¸alo Mordido1 and Haojin Yang2 and Christoph Meinel3
Abstract. We propose to incorporate adversarial dropout in gen- erative multi-adversarial networks by omitting or dropping out the feedback of each discriminator with some probability at the end of each batch. Our approach forces the generator not to constrain its output to satisfy a single discriminator, but, instead, to satisfy a dy- namic ensemble of discriminators. We show that the proposed frame- work, named Dropout-GAN, leads to a more generalized generator, promoting variety in the generated samples and avoiding the mode collapse problem commonly experienced with generative adversar- ial networks (GAN). We provide evidence that applying adversar- ial dropout promotes sample diversity on multiple datasets of varied sizes, mitigating mode collapse on several GAN approaches.
# Introduction
Generative adversarial networks [13], or GAN, is a framework that integrates adversarial training in the generative modeling process. According to its original proposal [13], the framework is composed of two models - one generator and one discriminator - that train to- gether by playing a minimax game. While the generator tries to fool the discriminator by producing fake samples that look realistic, the discriminator tries to distinguish between real and fake samples bet- ter over time, making it harder to be fooled by the generator.
The main idea of this work consists of applying the same dropout principles to generative multi-adversarial networks. This is accom- plished by taking advantage of multiple adversarial training, where the generatorâs output is dependent on the feedback given by a spe- ciï¬c set of discriminators. By applying dropout on the feedback of each discriminator, we force the generator to not rely on a speciï¬c discriminator or discriminator ensemble to learn how to produce re- alistic samples. Thus, the generator guides its learning from the var- ied feedback given by a dynamic ensemble of discriminators that changes at every batch.
In our use case, one can then see mode collapse as a consequence of overï¬tting to the feedback of a single discriminator, or even a static ensemble of discriminators. Hence, by dynamically changing the adversarial ensemble at every batch, the generator is stimulated to induce variety in its output to increase the chances of fooling the different possible discriminators that may remain in the ensemble at the end. Our main contributions can be stated as follows:
⢠We propose a novel and generic framework, named Dropout-GAN (Section 3), that trains a single generator against a dynamically changing ensemble of discriminators.
⢠We provide useful discussions and insights regarding the bene- ï¬ts of multiple adversarial training in GAN, namely the increase training stability (Section 4).
However, one of the main problems with GAN is mode col- lapse [18, 2, 6, 3], where the generator is able to fool the discrim- inator by only producing data coming from the same data mode, i.e., connected components of the data manifold. This leads to a poor gen- erator that is only able to produce samples within a narrow scope of the data space, resulting in the generation of only similarly looking samples. Hence, at the end of training, the generator comes short re- garding learning the full data distribution, and, instead, is only able to learn a small segment of it. This is the main issue we try to tackle in this work.
⢠We test our method on several datasets and multiple metrics, showing that it succeeds in reducing mode collapse by promoting sample diversity within epochs (Sections 5 and 6).
⢠We show that proposed approach of applying adversarial dropout also improves several other GAN approaches on several metrics and datasets of different size and nature (Section 7), conï¬rming the extensibility of our framework.
# 2 Generative Adversarial Networks
In a disparate line of work, dropout was introduced by [16] and it has been proven to be a very useful and widely used technique in neural networks to prevent overï¬tting [4, 35, 9]. In practice, it sim- ply consists of omitting or dropping out, the output of some randomly chosen neurons with a probability d or dropout rate. The intuition be- hind this process is to ensure that neurons are not entirely dependent on a speciï¬c set of other neurons to produce their outputs. Instead, with dropout, each neuron relies on the population behavior of sev- eral other neurons, promoting generalization in the network. Hence, the overall network becomes more ï¬exible and less prone to overï¬t- ting.
1 Hasso Plattner Institute, Germany, email: goncalo.mordido@hpi.de 2 Alibaba Group, China, email: haojin.yhj@alibaba-inc.com 3 Hasso Plattner Institute, Germany, email: christoph.meinel@hpi.de
As originally proposed [13], the standard GAN framework consists of two different models: a generator (G), that tries to capture the real data distribution to generate fake samples that look realistic, and a discriminator (D), that tries to do a better job at distinguishing real and fake samples. G maps a latent space to the data space by receiving noise as input and applying transformations to it to generate unseen samples, while D maps a given sample to a probability p of it coming from the real data distribution.
In the ideal setting, given enough iterations, G would eventually start producing samples that look so realistic that D would not be able to distinguish between real and fake samples anymore. Hence, D would assign p = 0.5 to all samples, reaching a full state of confu- sion. However, due to training instability, this equilibrium is hard to reach in practice. The two models play the following minimax game:
min G max D V (D, G) = Exâ¼pr (x)[log D(x)] + Ezâ¼pz (z)[log(1 â D(G(z)))], (1)
where pz(z) represents the noise distribution used to sample Gâs input and G(z) represents its output, which can be considered as a fake sample originated from mapping the modiï¬ed input noise to the data space. On the other hand, pr(x) represents the real data distri- bution and D(x) represents the output of D, i.e., the probability p of sample x being a real sample from the training set.
In order to maximize Eq. 1, Dâs goal is then to maximize the prob- ability of correctly classifying a sample as real or fake by getting better at distinguishing such cases by assigning p close to 1 to real images and p close to 0 to generated images. By contrast, to mini- mize Eq. 1, G tries to minimize the probability of its generated sam- ples being considered as fake by D, through fooling D into assigning them a p value close to 1.
However, in practice log(1 â D(G(z))) might saturate due van- ishing gradient problems in the beginning of training caused by D being able to easily distinguish between real and fake samples. As a workaround, the authors propose to maximize log(D(G(z))) in- stead, making it no longer a minimax game. Nevertheless, G still continues to exploit Dâs weaknesses in distinguishing real and fake samples by using Dâs feedback to update its parameters and slightly change its output to more likely trick D in the next iterations.
# 3 Dropout-GAN
We propose to integrate adversarial feedback dropout in generative multi-adversarial networks, forcing G to appease and learn from a dynamic ensemble of discriminators. This ultimately encourages G to produce samples from a variety of modes, since it now needs to fool the different possible discriminators that may remain in the en- semble. Variations in the ensemble are achieved by dropping out the feedback of each D with a certain probability d at the end of every batch. This means that G will only consider the loss of the remaining discriminators in the ensemble while updating its parameters at each iteration. Figure 1 illustrates the proposed framework.
(a) Original GAN (b) Dropout-GAN
BO.) Poeee
Figure 1. We expand the original GAN framework (left) to multiple adver- saries, where some discriminators are dropped out according to some proba- bility (right), leading to only a random subset of feedback (represented by the arrows) being used by G at the end of each batch.
Our initial modiï¬cation to the value function V of the minimax game is presented in equation (2), where δk is a Bernoulli variable (δk â¼ Bern(1 â d)) and {Dk} is the set of K total discriminators. The gradients calculated from the loss of a given discriminator Dk, are only used for the calculation of Gâs ï¬nal gradient updates when δk = 1, with P (δk = 1) = 1 â d. Otherwise, this information is discarded:
2
K K min max V(Dz,G) = 5 (Exnp, (x) [log De(a)] & fo.) » ° (2) +Ezxp,(z)[log(1 â De(G(z)))])-
There is, however, the possibility of all discriminators being dropped out from the set, leaving G without any guidance on how to further update its parameters. In this case, we randomly pick one discriminator D; ⬠{Dx} and follow the original objective function presented in equation (ip. using solely the gradient updates related to D,âs loss to update G. Hence, taking into consideration this special case, our final value function, Fâ, is set as follows:
=
F(G,{De}) =
kK min max SOV(De,G), if Sh: 5p =1 @B) = Pref k=l ky. min max V(D;,G), otherwise, for j ⬠{1, : 7 J
=
It is important to note that each discriminator trains independently, i.e., is not aware of the existence of the other discriminators, since no changes were made on their individual gradient updates. This implies that even if dropped out, each D updates its parameters at the end of every batch. The detailed algorithm of the proposed solution can be found in Algorithm 1.
Initialize: m â B K for each iteration do for k = 1 to k = K do
Sample minibatch zi, i = 1 . . . m, zi ⼠pg(z) ⢠Sample minibatch xi, i = 1 . . . m, xi ⼠pr(x) ⢠Update Dk by ascending along its gradient:
m Von, Yo llog De(2:) + log(t â De(G(%)))] i=l
end for ⢠Sample minibatch δk, k = 1 . . . K, δk â¼ Bern(1 â d) if all(δk) = 0 then
e Sample minibatch z;,i = 1...m, zi ~ pg(z) e Update G by descending along its gradient froma random discriminator D;, for some j ⬠{1, wees K}:
m Voom Ylos(l ~ D;(G(e)))
else
Sample minibatch zki , i = 1 . . . m, zki ⼠pg(z) ⢠Update G by descending along its gradient:
k=1...K,
m Voc S2dx(~ Y low( â Di(G(2x,)))) k=1 i=l
# end if end for
# Algorithm 1: Dropout-GAN.
(3)
# Implementation Details
In this section, we provide a detailed study of the effects of using a different number of discriminators together with different dropout rates. Moreover, we further provide insights into the consequence of splitting the batch among the different discriminators on the gen- eratorâs training. The proposed framework was implemented using Tensorï¬ow [1].
# 4.1 Number of Discriminators
Training instability has been noticeably reported as one of GAN biggest problems. Here, we show that this problem can be eased by using multiple adversaries. This is also stated in previous works [8, 28], however, without much detailed evidence. Furthermore, on top of increasing training stability, using multiple discriminators enables the usage of the original G loss, since there is now an increased chance that G receives positive feedback from at least one D and is able to guide its learning successfully [8].
To analyze the training procedure, we correlate the degree of train- ing instability with the gradient updates that are being used by G to update its parameters at the end of each batch. The intuition is that if such updates are big, the parameters of the model will change dras- tically at each iteration. This is intuitively an alarming sign that the training is not being efï¬cient, especially if it still occurs after several epochs of training, since G is repeatedly greatly updating its output, instead of performing slight, mild changes in a controlled fashion.
We found that when using multiple discriminators such gradients would converge to zero as training progressed, while, on the con- trary, they remained high (in terms of its absolute value) when using solely one discriminator. On the other hand, we also noticed that as the number of discriminators increases, the point at which Gâs gra- dients start to converge also increases, suggesting that using more discriminators can delay the learning process. However, this is ex- pected since G now receives more (and possibly contradictory) feed- back regarding its generated samples, needing more time to utilize such information wisely.
# 4.2 Batch Partitioning
The main purpose of splitting the batch among the different discrimi- nators is to encourage each to specialize in different data modes. This is achieved by training them with a different subset of samples of the same size within each batch. This applies to both the fake samples produce by G and real samples retrieved from the training set. Such partitioning also allows data parallelism, diminishing the overhead caused by using more discriminators in the framework.
To further investigate the success in forcing the different discrim- inators to focus on different data modes, we argue that Gâs capacity of fooling the ensemble should decrease in such situation. This is in- deed conï¬rmed in our experiments, with Gâs loss being higher when the batches are split, especially later on in training where each D had enough time to focus on a single or a small subset of data modes. Thus, one can then associate the higher G loss with the generated samples now having to comply with a higher number of realistic fea- tures to be able to fool the dynamic ensemble of discriminators, with a subset of such features being used by each D to characterize a given sample as real or fake.
We increase the overall batch size to enable each D to be trained on the same original number of samples at each batch. On the other hand, G might still have access to all samples at each batch, since it
3
uses the feedback from the remaining discriminators to update its pa- rameters at the end. However, having weaker discriminators by train- ing each one of them with fewer samples than G is not necessarily bad since they are more likely to give positive feedback to G [8, 28]. This is a result of their possible confused state that can better aid G in producing realistic samples than if it would continuously receive negative feedback, especially in the long run.
# 4.3 Dropout Rate
Dropping out the loss of a given D with a probability d before up- dating Gâs parameters is what induces variability in our framework. This forces G not to only need to fool one or even a static set of dis- criminators, but, instead, to fool a dynamic ensemble of adversaries that changes at every batch. Hence, performing this type of dropout can also be seen as a form of regularization, since it aims to promote more generalizability on the fake samples produced by G.
Depending on the number of discriminators used, using a small probability d of dropout might only lead to small changes in the ensemble of adversaries, making the feedback seen by G nearly constant throughout every batch. On the other hand, using a large dropout probability might lead to too much variance in the ensem- ble, making it difï¬cult for G to learn properly due to the variability of the visible set.
Evidence of the correlation between the dropout rate and the qual- ity of the generated samples is further given in Sections 5 and 6. Similarly to what was discussed in the original Dropout paper [16], we found that using d = 0.2 and d = 0.5 often led to better results, both in a qualitative and quantitative manner. Nevertheless, we also found that using any dropout rate (0 < d ⤠1) consistently per- formed better across the different datasets than when using a static ensemble of adversaries (d = 0).
# 5 Experimental Results
We tested the effects of the different parameter settings on three dif- ferent datasets: MNIST [21], CIFAR-10 [20], and CelebA [23]. We compared all possible combinations by using the different number of discriminators across the set {1, 2, 5, 10} with each different dropout rate in {0.0, 0.2, 0.5, 0.8, 1.0}. We used the DCGAN inspired archi- tecture used in GMAN [8], with G consisting of 4 convolutional lay- ers of decreasing neuron size, e.g., 128, 64, 32, 1 (for MNIST) or 3 (for CIFAR-10 and CelebA), and each D having 3 convolutional layers of increasing number of neurons, e.g., 32, 64, 128, and a fully connected layer at the end. We refer to GMAN [8] for more infor- mation regarding the training settings. Important to note that, even though all discriminators share the same architectures, their weights are initialized differently. Results are reported below for each dataset.
# 5.1 MNIST
MNIST is composed of 10 different classes of handwritten digits varying from 0 to 9, with the generated samples of Dropout-GAN being shown in Figure 2.
It is visible that the quality and variation of the produced sam- ples increase while using dropout rate values of 0.2 and 0.5 across all different sized discriminator sets. On the other hand, the quality of the produced numbers deteriorates considerably while using high dropout rates, i.e., 0.8 and 1, or no dropout rate at all. However, the quality gets slightly better when using more discriminators on such extreme end dropout rates, since G might still get enough feedback to be able to learn at the end of each batch.
0.5 6
Figure 2. MNIST results using different combinations of the number of discriminators and dropout rates.
# 5.2 CIFAR-10
To further validate our solution, we used the CIFAR-10 dataset also composed of 10 classes, consisting of different transportation vehi- cles and animals. Results are presented in Figure 3. Once again, we observe worst sample quality when using high or nonexistent dropout values. Moreover, there are also clear traits of mode collapsing while using no dropout rate throughout all numbers of discriminators in the set. Sharper and more diverse samples are obtained while using 0.2 or 0.5 dropout rate and a bigger number of discriminators in the set.
d=0
Figure 3. CIFAR-10 results using different combinations of the number of discriminators and dropout rates.
# 5.3 CelebA
We lastly tested our approach in the cropped version of CelebA, con- taining faces of real-world celebrities. Results are given in Figure 4. One can see that using no dropout rate leads to similar looking faces, especially when using 2 and 5 discriminators. Once more, faces pro- duced with mid-ranged dropout values with bigger discriminator en- sembles present more variety and sample quality than the rest.
# 6 Parameter Evaluation
Since the results shown above rely heavily on subjective judgment, we now evaluate the effects of using a different number of discrimi- nators and dropout rates on each dataset in a quantitative way. Note that the presented results are not state-of-the-art since exploring sev- eral architectural settings is not the focus of this work. Instead, by us-
4
d
Figure 4. CelebA results using different combinations of the number of discriminators and dropout rates.
ing different architectures on different datasets, our focus is to com- pare the effect of the different parameter combinations.
# 6.1 Fr´echet Inception Distance
We used the Fr´echet Inception Distance [15] (FID) to measure the similarity between the fake and real images. The returned distance uses the mean µ and covariance cov of a multi-variate Gaussian pro- duced from the embeddings of the last pooling layer of the Inception- v3 model [32] for both the real data and the generated data. In the original paper, the authors show that FID is more robust to noise and more correlated to human judgment than Inception Score [31]. More- over, FID has shown to be sensitive to mode collapse [24], with the returned distances increasing when samples from certain classes are missing from the generated set.
Minimum FID. Table 1 shows the minimum FID obtained by G for each dataset. Lower values indicate more similarity between the fake and real data. We ran all of our experiments for 40 epochs in total and used the same architecture described previously. To obtain the best FID across all epochs, we generated 10000 samples from G at the end of each epoch and then proceeded to calculate the FID between the set of the generated samples per epoch and the whole training set.
By analyzing Table[I] we observe that the minimum values of FID for all datasets were mostly obtained when using d = 0.5. However, by analyzing the local minima obtained while maintaining the same number of discriminators and only varying the dropout rate, it is also noticeable that one can also generally achieve very competitive re- sults while using d ⬠{0.2,0.5, 0.8}, depending on the number of discriminators and datasets being used. The results also show that applying dropout on multiple discriminators always leads to a better FID rather than maintaining the ensemble of discriminators static, i.e. d = 0, or singular, i.e., using solely 1 discriminator.
Mean FID. We followed the same procedure and calculated the mean FID across all 40 epochs. Results are presented in Figure 5. This evaluation promotes a broader look at the stage of G at the end of every epoch, reï¬ecting the quality and variety of the generated samples over time. The presented graphs provide a clear vision re- garding the advantages of using multiple discriminators instead of solely one, with the FID being better in the ï¬rst case. Using 5 or 10 discriminators with mid-range dropout rates leads to better FID results across all datasets.
MNIST CIFAR-10 CelebA 170 165 160 60 155 ° 8 150 = 145 130 20 no 100 ~~ so 140 0 335 80 * 230 00 02 04 6 os 10 00 02 oa 06 08 zo 00 02 o4 06 08 10 Dropout rate (4) Dropout rate (d) Dropout rate (d) â 1 Discriminator ââ 2 Discriminators ââ 5 Discriminators ââ 10 Discriminators
Figure 5. Mean FID calculated across 40 epochs on the different datasets. Smaller values mean better looking and more varied generated samples over time. The convex representation of FID indicates the beneï¬ts in using mid-range dropout rates.
Table 1. Minimum FID obtained across 40 epochs using the different datasets. Bold scores represent the minimum FID obtained for each dataset. Underlined scores indicate the best FID within using a given number of dis- criminators and the different possible dropout rates regarding each dataset.
discriminator set.
# 7 Method evaluation
1 DISC. 2 DISC.; d = 0.0 2 DISC.; d = 0.2 2 DISC.; d = 0.5 2 DISC.; d = 0.8 2 DISC.; d = 1.0 5 DISC.; d = 0.0 5 DISC.; d = 0.2 5 DISC.; d = 0.5 5 DISC.; d = 0.8 5 DISC.; d = 1.0 10 DISC.; d = 0.0 10 DISC.; d = 0.2 10 DISC.; d = 0.5 10 DISC.; d = 0.8 10 DISC.; d = 1.0 MNIST 21.71 ± 0.39 24.88 ± 0.13 22.34 ± 0.29 22.08 ± 0.09 21.87 ± 0.10 23.56 ± 0.29 21.47 ± 0.40 21.70 ± 0.12 19.25 ± 0.12 20.26 ± 0.07 20.54 ± 0.15 22.62 ± 0.10 19.12 ± 0.01 18.18 ± 0.44 19.33 ± 0.18 19.82 ± 0.06 CIFAR-10 104.19 ± 0.07 106.54 ± 0.38 103.55 ± 0.13 103.20 ± 0.05 103.60 ± 0.03 104.73 ± 0.19 95.75 ± 0.15 90.59 ± 0.35 89.74 ± 0.35 90.77 ± 0.70 95.71 ± 0.03 99.91 ± 0.10 91.31 ± 0.16 88.60 ± 0.08 88.76 ± 0.16 93.66 ± 0.21 CELEBA 53.38 ± 0.03 52.46 ± 0.08 46.60 ± 0.03 45.90 ± 0.04 46.82 ± 0.14 51.17 ± 0.01 45.89 ± 0.05 36.36 ± 0.11 38.10 ± 0.54 41.22 ± 0.24 41.56 ± 0.18 43.85 ± 0.30 41.74 ± 0.14 40.67 ± 0.56 41.74 ± 0.03 41.16 ± 0.55
We now proceed to compare our approach of applying adversar- ial dropout to standard GAN, i.e., Dropout-GAN, with other ex- isting methods in the literature. We followed the toy experiment with a 2D mixture of 8 Gaussian distributions (representing 8 data modes) ï¬rstly presented in UnrolledGAN [26], and further adopted by D2GAN [29] and MGAN [17]. We used the same architecture as D2GAN for a fair comparison. The results are shown in Fig- ure 7, where one can see that Dropout-GAN successfully covers the 8 modes from the real data while having signiï¬cantly less noisy samples compared to the other discriminator-driven methods. Note that MGAN takes advantage of a multi-generator framework plus an additional classiï¬er network while making use of a different ar- chitectural setting. However, due to the simplicity of our approach, we manage to converge to the real data modes faster than the other approach, speciï¬cally MGAN, as seen in the early training steps. Moreover, our framework achieves the lowest distance and diver- gence measures between the real and fake data.
The similar looking performance when using 5 and 10 discrimina- tors can be explained by what was previously mentioned regarding G needing more time to learn from more feedback. Nevertheless, by analysis of Table 1 it is visible that better generated samples are pro- duced when using 10 discriminators on all datasets, even if it takes more training to reach that state. This ultimately means that by hav- ing access to more feedback, G is eventually able to produce better, varied samples in a more consistent manner over time.
Cumulative Intra FID. To test the sample diversity within a given epoch, we calculated the FID between the set of generated samples of every epoch. This was accomplished by generating 20000 samples from G at the end of every epoch and then calculating the FID be- tween the two halves of the generated set. We evaluated the diversity of the generated samples over time by adding all calculated FIDs for each model. Results are shown in Figure 6.
From the analysis of the presented bar graphs, one can see the ef- fect of using a different number of discriminators, with bigger sets of discriminators promoting a wider variety of generated samples within each epoch. This is generally observed across all datasets. Furthermore, it is noticeable the beneï¬ts of using mid-range dropout rates to promote sample diversity, especially when using a bigger
To evaluate the extensibility of our approach, we studied the ef- fects of using adversarial dropout in the following GAN methods: LSGAN, DRAGAN, and standard GAN using both the original (GAN) and modiï¬ed objective (modGAN). These methods consist of a subset of the methods compared in [24], since they cover impor- tant variations of the original GAN framework where Dâs output is either a probability (GAN, modGAN, and DRAGAN) or unbounded (LSGAN), while making use of gradient penalty (DRAGAN) or not. We also followed their presented training settings, training models on MNIST, CIFAR-10, and CelebA for 20, 40, and 100 epochs, re- spectively. We made use of a simpler architectural setting though, similar to the one previously described in Section 5 but with double the number of neurons per convolutional layer.
The best FID scores for each the original and multiple adversarial versions are reported in Table 2. The advantage of using adversarial dropout is signiï¬cantly visible for each method, lowering the min- imum FID obtained considerably for all the tested datasets. For a fair comparison, we used only 2 discriminators when applying ad- versarial dropout, which makes the overall framework still relatively small with a great beneï¬t in the end results. When simply using an ensemble of discriminators on CIFAR-10, i.e. d = 0, the pro- posed dropout variants improve FID by 7.25, 4.39, 3.96, and 9.93, on GAN, modGAN, LSGAN, and DRAGAN, respectively.
5
CIFAR-10 CelebA 60 120 4 i | ° & s 8 Cumulative intra FID 8 8 60 30 â0 30 20 x 2 Disc Mm d=0.0 Mm d=0.2 md 5 Disc 10 Disc. 1 Disc 10 Disc. 05 mm d=0.8 mam d=10
Figure 6. Cumulative Intra FID across 40 epochs on the different datasets. Higher values represent more diversity over time.
(a) Toy dataset. (b) Wasserstein distance. (c) Symmetric KL divergence.
«*® ih a z* * * * * a*e ae ae 3 * oe * + * â4 ff, *#,* ey 3. ee ee Z * woe Me a, i i eo step 10k step 15K step 20K step 25K
step
Figure 7. Comparison of Dropout-GAN (8 discriminators) with original GANs, Unrolled GAN, D2GAN, and MGAN (a). Real data is presented in red while generated data is presented in blue. Our method manages to cover all modes with signiï¬cantly less noisy samples that fall outside any real mode when compared to the other discriminator modiï¬ed methods. Dropout-GAN also achieves the lowest distance (b) and divergence (c) between the real and generated data, continuously converging to 0 over time.
Table 2. FID comparisons on MNIST, CIFAR-10 and CelebA. We used 2 discriminators and d = 0.5 for all the adversarial dropout methods repre- sented in bold.
MNIST CIFAR-10 CELEBA REAL DATA â 0.00 â 0.00 â 0.00 GAN ([13]) DROPOUT-GAN 22.65 ± 0.13 14.63 ± 0.18 70.23 ± 0.07 66.82 ± 0.10 46.18 ± 0.07 31.25 ± 0.09 MODGAN ([13]) DROPOUT-MODGAN 22.66 ± 0.11 15.39 ± 0.15 79.58 ± 0.11 67.57 ± 0.14 41.25 ± 0.03 35.32 ± 0.06 LSGAN ([25]) DROPOUT-LSGAN 24.05 ± 0.15 15.41 ± 0.21 83.66 ± 0.08 69.37 ± 0.11 43.13 ± 0.04 37.58 ± 0.10 22.84 ± 0.15 DRAGAN ([19]) DROPOUT-DRAGAN 15.20 ± 0.16 80.57 ± 0.06 66.90 ± 0.09 46.82 ± 0.06 37.21 ± 0.08
The obtained IS are presented in Table 3. Once again, we observe that applying adversarial dropout considerably increases the obtained IS for all tested datasets, without much overhead since only 2 dis- criminators were used.
Table 3. Inception score comparisons on CIFAR-10, STL-10, and Ima- geNet. We used 2 discriminators and d = 0.5 for all the adversarial dropout methods represented in bold.
CIFAR-10 STL-10 IMAGENET REAL DATA 11.24 ± 0.16 26.08 ± 0.26 25.78 ± 0.47 GAN ([13]) DROPOUT-GAN 5.35 ± 0.04 6.22 ± 0.09 5.53 ± 0.03 7.20 ± 0.11 7.30 ± 0.08 7.52 ± 0.13 MODGAN ([13]) DROPOUT-MODGAN LSGAN ([25]) DROPOUT-LSGAN DRAGAN ([19]) DROPOUT-DRAGAN 5.49 ± 0.07 5.90 ± 0.08 5.76 ± 0.05 5.95 ± 0.07 5.65 ± 0.08 6.22 ± 0.08 6.64 ± 0.05 6.95 ± 0.09 5.32 ± 0.06 6.88 ± 0.13 6.97 ± 0.09 7.30 ± 0.13 6.96 ± 0.08 7.26 ± 0.12 6.92 ± 0.04 7.08 ± 0.13 7.41 ± 0.11 7.54 ± 0.12
To also test how adversarial dropout behaves on larger datasets, we calculated the Inception Score [31] (IS) to compare the quality in the same set of methods. On top of CIFAR-10, we further used STL- 10 [7], and ImageNet [30], with the latter two being larger datasets with 100K and 1M images, respectively. We downsized all images to 32x32. We used the same architectures mentioned above. How- ever, we trained each model longer, more speciï¬cally 250 epochs for CIFAR-10 and STL-10, and 50 epochs for ImageNet.
A subset of randomly generated samples for each method when using adversarial dropout is presented in Figure 8, where one can see high diversity alongside with high quality, even on the bigger datasets. These results solidify the success of mitigating mode col- lapse when applying the adversarial dropout to the different methods. Finally, we directly compare the quality of the generated samples between Dropout-GAN, GMAN [8], and original GANs [13] with the modiï¬ed loss, using both 2 and 5 discriminators on CIFAR-10. In their original experiments, [8] used Inception Score [31] as the evaluation metric, with higher values correlating to better generated
6
Dropout-modGAN | Dropout-LSGAN Dropout-DRAGAN
Figure 8. CIFAR-10, STL-10, and ImageNet generated samples for the dif- ferent GAN approaches using adversarial dropout.
samples. For a fair direct comparison, we used the same architectures and training procedures as originally used in GMANâs experiments. Results are presented in Table 4. Dropout-GAN outperforms both methods for all different number of discriminators scenarios on all tested dropout rates.
Table 4. Inception score comparison between Dropout-GAN and different variants of GMAN on CIFAR-10. Original GANs with the modiï¬ed loss is also presented as a baseline.
1 DISC. 2 DISC. 5 DISC. GAN ([13]) 5.74 ± 0.17 - - GMAN-0 ([8]) GMAN-1 ([8]) GMAN* ([8]) - - - 5.88 ± 0.19 5.77 ± 0.16 5.54 ± 0.09 5.96 ± 0.14 6.00 ± 0.19 5.96 ± 0.15 DROPOUT-GAN (d = 0.2) DROPOUT-GAN (d = 0.5) - - 5.95 ± 0.10 5.98 ± 0.10 6.01 ± 0.12 6.05 ± 0.15
# 8 Related Work
We will now focus on previous work that mitigated mode collapse in GAN. Instead of extending the original framework to multiple ad- versaries, one can change GAN objective to directly promote sam- ple diversity. WGAN [3] and MMD GAN [22] proposed to opti- mize distance measurements to stabilize training. On the other hand, EBGAN [36] and Coulomb GANs [33] reformulated the original GAN problem using an energy-based objective to promote sample variability. While Regularized-GAN and MDGAN [6] make use of an autoencoder to penalize missing modes and regularize GAN ob- jective, DFM [34] makes use of autoencoders to perform high-level feature matching. UnrolledGAN [26] changes G objective to satisfy an unrolled optimization of D. LSGAN [25] proposes to use a least- squares loss for the D while DRAGAN [19] applies gradient norm penalty on top of original GAN.
Although some work has focused on augmenting the number of generators [11, 12, 17], or even increasing both the number of gen- erators and discriminators [14, 10, 5], we turn our focus on methods that solely increases the number of discriminators to prevent mode
7
collapse. D2GAN [29] proposed a single generator dual discrimi- nator architecture where one D rewards samples coming from the true data distribution, while the other rewards samples that are likely to come from G. Thus, each D still operates on a different objec- tive function. GMAN [8] proposed a framework where a single G is trained against several discriminators on different levels of difï¬- culty, by either using the mean loss of all discriminators (GMAN- 0), picking only the D with the maximum loss in relation to Gâs output (GMAN-1), or controlled by G through a hyperparameter λ (GMAN*). Recently, microbatchGAN [27] assigned a different por- tion of each minibatch to each discriminator to stimulate sample di- versity.
However, all of the described approaches have some sort of con- straints, either by restricting each Dâs architecture to be different, or by using different objective functions for each D. We argue that these are limitations from an extensibility point of view, none of which exists in our proposed framework. Moreover, we note that applying Dropout-GANâs principles of using adversarial dropout to the previ- ously described methods would be a viable step to further promote sample diversity.
# 9 Conclusion and Future Work
In this work, we propose to mitigate mode collapse by proposing a new framework, called Dropout-GAN, that enables a single gener- ator to learn from an ensemble of discriminators that dynamically changes at the end of every batch by use of adversarial dropout. We conducted experiments on multiple datasets of different sizes that show that adversarial dropout successfully contributes to a bigger sample variety on multiple GAN approaches. Moreover, it also in- creases training stability over time by enabling G to receive more quantity and variety of feedback.
In the future, it would be interesting to adjust Gâs learning rate according to the size of the discriminator set, allowing a more co- herent learning speed between G and each D, especially when using a large ensemble. Moreover, applying game theory to make the dif- ferent discriminators dependent, i.e., aware of each otherâs feedback, could also be a very interesting path to follow, taking full advantage of using multiple adversarial training.
# REFERENCES
[1] Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Ge- offrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin- cent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiao- qiang Zheng. TensorFlow: Large-scale machine learning on heteroge- neous systems, 2015. Software available from tensorï¬ow.org.
[2] Martin Arjovsky and Leon Bottou, âTowards principled methods for training generative adversarial networksâ, in International Conference on Learning Representations (ICLR 2017), (2017).
[3] Martin Arjovsky, Soumith Chintala, and L´eon Bottou, âWasserstein generative adversarial networksâ, in Proceedings of the 34th Inter- national Conference on Machine Learning, eds., Doina Precup and Yee Whye Teh, volume 70 of Proceedings of Machine Learning Re- search, pp. 214â223, International Convention Centre, Sydney, Aus- tralia, (06â11 Aug 2017). PMLR.
[4] Pierre Baldi and Peter J Sadowski, âUnderstanding dropoutâ, in Ad- vances in Neural Information Processing Systems 26, eds., C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, 2814â2822, Curran Associates, Inc., (2013).
[5] Tatjana Chavdarova and Francois Fleuret, âSgan: An alternative train- ing of generative adversarial networksâ, in Proceedings of the IEEE international conference on Computer Vision and Pattern Recognition, (2018).
[6] Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wen- jie Li, âMode regularized generative adversarial networksâ, CoRR, abs/1612.02136, (2016).
[7] Adam Coates, Andrew Ng, and Honglak Lee, âAn analysis of single- layer networks in unsupervised feature learningâ, in Proceedings of the fourteenth international conference on artiï¬cial intelligence and statis- tics, pp. 215â223, (2011). Ishan P. Durugkar, Ian Gemp, and Sridhar Mahadevan, âGenerative multi-adversarial networksâ, CoRR, abs/1611.01673, (2016).
[9] Yarin Gal and Zoubin Ghahramani, âA theoretically grounded applica- tion of dropout in recurrent neural networksâ, in Advances in neural information processing systems, pp. 1019â1027, (2016).
[10] Zhe Gan, Liqun Chen, Weiyao Wang, Yunchen Pu, Yizhe Zhang, Hao Liu, Chunyuan Li, and Lawrence Carin, âTriangle generative adversar- ial networksâ, CoRR, abs/1709.06548, (2017).
[11] Arnab Ghosh, Viveka Kulharia, and Vinay P. Namboodiri, âMessage passing multi-agent gansâ, CoRR, abs/1612.01294, (2016).
[12] Arnab Ghosh, Viveka Kulharia, Vinay P. Namboodiri, Philip H. S. Torr, and Puneet Kumar Dokania, âMulti-agent diverse generative adversarial networksâ, CoRR, abs/1704.02906, (2017). Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, âGenerative adversarial netsâ, in Advances in Neural Information Pro- cessing Systems 27, eds., Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 2672â2680, Curran Associates, Inc., (2014).
[14] Aditya Grover and Stefano Ermon, âBoosted generative modelsâ, CoRR, abs/1702.08484, (2017).
[15] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter, âGans trained by a two time-scale up- date rule converge to a local nash equilibriumâ, in Advances in Neu- ral Information Processing Systems 30, eds., I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 6629â6640, Curran Associates, Inc., (2017).
[16] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, âImproving neural networks by preventing co-adaptation of feature detectorsâ, CoRR, abs/1207.0580, (2012). [17] Quan Hoang, Tu Dinh Nguyen, Trung Le, and Dinh Q. Phung, âMulti- generator generative adversarial netsâ, CoRR, abs/1708.02556, (2017). [18] Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, and Ji- won Kim, âLearning to discover cross-domain relations with generative adversarial networksâ, in Proceedings of the 34th International Con- ference on Machine Learning, eds., Doina Precup and Yee Whye Teh, volume 70 of Proceedings of Machine Learning Research, pp. 1857â 1865, International Convention Centre, Sydney, Australia, (06â11 Aug 2017). PMLR.
[19] Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira, âOn convergence and stability of gansâ, arXiv preprint arXiv:1705.07215, (2017).
[20] Alex Krizhevsky, âLearning multiple layers of features from tiny im- agesâ, Technical report, (2009).
[21] Yann LeCun and Corinna Cortes, âMNIST handwritten digit databaseâ, (2010).
[22] Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barn- abas Poczos, âMmd gan: Towards deeper understanding of moment matching networkâ, in Advances in Neural Information Processing Sys- tems 30, eds., I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fer- gus, S. Vishwanathan, and R. Garnett, 2200â2210, Curran Associates, Inc., (2017).
[23] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang, âDeep learn- ing face attributes in the wildâ, in Proceedings of International Confer- ence on Computer Vision (ICCV), (2015).
[24] Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet, âAre gans created equal? a large-scale studyâ, arXiv preprint arXiv:1711.10337, (2017).
[25] Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley, âLeast squares generative adversarial net- worksâ, in Computer Vision (ICCV), 2017 IEEE International Confer- ence on, pp. 2813â2821. IEEE, (2017).
[26] Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein,
8
âUnrolled generative adversarial networksâ, CoRR, abs/1611.02163, (2016).
[27] Gonc¸alo Mordido, Haojin Yang, and Christoph Meinel, âmicrobatch- gan: Stimulating diversity with multi-adversarial discriminationâ, arXiv preprint arXiv:2001.03376, (2020).
[28] Behnam Neyshabur, Srinadh Bhojanapalli, and Ayan Chakrabarti, âStabilizing GAN training with multiple random projectionsâ, CoRR, abs/1705.07831, (2017).
[29] Tu Dinh Nguyen, Trung Le, Hung Vu, and Dinh Q. Phung, âDual dis- criminator generative adversarial netsâ, CoRR, abs/1709.03831, (2017). [30] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al., âImagenet large scale visual recognition chal- lengeâ, International Journal of Computer Vision, 115(3), 211â252, (2015).
[31] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen, âImproved techniques for training gansâ, in Advances in Neural Information Processing Systems, pp. 2234â2242, (2016).
[32] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna, âRethinking the inception architecture for computer visionâ, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818â2826, (2016).
[33] Calvin Seward Gnter Klambauer Martin Heusel Hubert Ramsauer Sepp Hochreiter Thomas Unterthiner, Bernhard Nessler, âCoulomb GANs: Provably optimal nash equilibria via potential ï¬eldsâ, Interna- tional Conference on Learning Representations, (2018).
[34] David Warde-Farley and Yoshua Bengio, âImproving generative adver- sarial networks with denoising feature matchingâ, (2016).
[35] David Warde-Farley, Ian J Goodfellow, Aaron Courville, and Yoshua Bengio, âAn empirical analysis of dropout in piecewise linear net- worksâ, arXiv preprint arXiv:1312.6197, (2013). Junbo Jake Zhao, Micha¨el Mathieu, and Yann LeCun, âEnergy-based generative adversarial networkâ, CoRR, abs/1609.03126, (2016).
[36] | {
"id": "1711.10337"
} |
1807.09956 | Pythia v0.1: the Winning Entry to the VQA Challenge 2018 | This document describes Pythia v0.1, the winning entry from Facebook AI
Research (FAIR)'s A-STAR team to the VQA Challenge 2018.
Our starting point is a modular re-implementation of the bottom-up top-down
(up-down) model. We demonstrate that by making subtle but important changes to
the model architecture and the learning rate schedule, fine-tuning image
features, and adding data augmentation, we can significantly improve the
performance of the up-down model on VQA v2.0 dataset -- from 65.67% to 70.22%.
Furthermore, by using a diverse ensemble of models trained with different
features and on different datasets, we are able to significantly improve over
the 'standard' way of ensembling (i.e. same model with different random seeds)
by 1.31%. Overall, we achieve 72.27% on the test-std split of the VQA v2.0
dataset. Our code in its entirety (training, evaluation, data-augmentation,
ensembling) and pre-trained models are publicly available at:
https://github.com/facebookresearch/pythia | http://arxiv.org/pdf/1807.09956 | Yu Jiang, Vivek Natarajan, Xinlei Chen, Marcus Rohrbach, Dhruv Batra, Devi Parikh | cs.CV | null | null | cs.CV | 20180726 | 20180727 | 8 1 0 2
l u J 7 2 ] V C . s c [ 2 v 6 5 9 9 0 . 7 0 8 1 : v i X r a
# Pythia v0.1: the Winning Entry to the VQA Challenge 2018
Yu Jiangâ, Vivek Natarajanâ, Xinlei Chenâ, Marcus Rohrbach, Dhruv Batra, Devi Parikh Facebook AI Research
# Abstract
This document describes Pythia v0.1, the winning entry from Facebook AI Research (FAIR)âs A-STAR team to the VQA Challenge 20181.
Our starting point is a modular re-implementation of the bottom-up top-down (up-down) model [1, 14]. We demon- strate that by making subtle but important changes to the model architecture and the learning rate schedule, ï¬ne- tuning image features, and adding data augmentation, we can signiï¬cantly improve the performance of the up-down model on VQA v2.0 dataset [6] â from 65.67% to 70.24%. Furthermore, by using a diverse ensemble of models trained with different features and on different datasets, we are able to signiï¬cantly improve over the âstandardâ way of ensembling (i.e. same model with different random seeds) by 1.31%. Overall, we achieve 72.27% on the test- std split of the VQA v2.0 dataset. Our code in its entirety (training, evaluation, data-augmentation, ensembling) and pre-trained models are publicly available at: https:// github.com/facebookresearch/pythia .
development in VQA [2] and related directions like visual dialog [3]. The name âPythiaâ is an homage to the Ora- cle of Apollo at Delphi, who answered questions in Ancient Greece.
The starting point for Pythia v0.1 is a modular reim- plementation of the bottom-up top-down (up-down) model [14]. In this study, we demonstrate that by making a se- quence of subtle but important changes, we can signiï¬cantly improve the performance as summarized in Table 13.
# 2. Bottom-Up and Top-Down Attention
We perform ablations and augmentations over the base- line system of the up-down model [1], which was the basis of the winning entry to the 2017 VQA challenge. The key idea in up-down is the use of an object detector â Faster RCNN [12] pre-trained on the Visual Genome dataset [9] â to extract image features with bottom-up attention, i.e., visual feed-forward attention. Speciï¬cally, a ResNet-101 was chosen as the backbone network, and its entire Res-5 block was used as the second-stage region classiï¬er for de- tection. After training, each region was then represented by the 2048D feature after average pooling from a 7Ã7 grid.
# 1. Introduction
Pythia â Is there any man alive wiser than Socrates? Pythia: None.
# Chaerephon:
We present Pythia v0.1, a modular framework for Visual Question Answering research, which formed the basis for the winning entry to the VQA Challenge 2018 from Face- book AI Research (FAIR)âs A-STAR2 team.
The question text is then used to compute the top-down attention, i.e., task speciï¬c attention, for each object in the image. Multi-modal fusion is done through a simple Hadamard product followed by a multi-label classiï¬er using a sigmoid activation function to predict the answer scores. Their performance reached 70.34% on VQA 2.0 test-std split with an ensemble of 30 models trained with differ- ent seeds. For presentation clarity, we present our proposed changes (and the respective improvements) in a sequence; however, we also found them to be independently useful.
The motivation for Pythia comes from the following ob- servation â a majority of todayâs Visual Question Answer- ing (VQA) models ï¬t a particular design paradigm, with modules for question encoding, image feature extraction, fusion of the two (typically with attention), and classiï¬ca- tion over the space of answers. The long-term goal of Pythia is to serve as a platform for easy and modular research &
# 2.1. Model Architecture
We made a few changes to the up-down model to im- Instead of using the prove training speed and accuracy. gated hyperbolic tangent activation [1], we use weight nor- malization [13] followed by ReLU to reduce computation4. We also replaced feature concatenation with element-wise
â indicates equal contributions. 1and changes made after the challenge deadline. 2Agents that See, Talk, Act, and Reason.
3FAIR A-STARâs entry in the VQA 2018 Challenge was 72.25%. This document describes results produced by our code release which reaches 72.27%.
1
Table 1. Accuracy (%) on VQA v2.0. For ease of presentation, our changes are presented as a sequence building on top of previ- ous changes. â denotes that these models are not included in our ensemble results submitted to the challenge.
Model test-dev test-std up-down [1] up-down Model Adaptation (§2.1) + Learning Schedule (§2.2) + Detectron & Fine-tuning (§2.3) + Data Augmentationâ (§2.4) + Grid Featureâ (§2.5) + 100 bboxesâ (§2.5) 65.32 66.91 68.05 68.49 69.24 69.81 70.01 65.67 70.24 Ensemble, 30à same model (§2.6) Ensemble, 30à diverse model (§2.6) 70.96 72.18 72.27
multiplication to combine the features from text and vi- sual modalities when computing the top-down attention. To compute the question representation, we used 300D GloVe [11] vectors to initialize the word embeddings and then passed it to a GRU network and a question attention module to extract attentive text features [16]. For fusing the image and text information, we found the best-performing hidden size to be 5000. With these modiï¬cations, we were able to improve the performance of the model from 65.32% to 66.91% on VQA v2.0 test-dev.
# 2.2. Learning Schedule
Our model is optimized by Adamax, a variant of Adam with inï¬nite norm [8]. In one popular implementation of up-down4 learning rate is set to 0.002 with a batch size of 512. We found that reducing the batch size improves perfor- mance â which suggests that there is potential for improv- ing performance by increasing the learning rate. However, naively increasing the learning rate resulted in divergence. To increase the learning rate, we thus deployed the warm up strategy [5] commonly used for large learning-rate training of networks. Speciï¬cally, we begin with a learning rate of 0.002, linearly increasing it at each iteration till it reaches 0.01 at iteration 1000. Next, we ï¬rst reduce the learning rate by a factor of 0.1 at 5K and then reduce it every 2K iterations, and stop training at 12K. With this we increase the performance from 66.91% to 68.05% on test-dev.
# 2.3. Fine-Tuning Bottom-Up Features
Fine tuning pre-trained features is a well known tech- nique to better tailor the features to the task at hand and thus improve model performance [12].
Different from Anderson et al. [1], we also used the new state-of-the-art detectors based on feature pyramid net-
# 4https://github.com/hengyuan-hu/
# bottom-up-attention-vqa
2
> z 84 ec ennneenn ners 3 4 8 âS o 2 4 $ | â same model > » -- diversified model oT T T T T T T 0 5 10 15 20 25 30 Number of Models in Ensemble
Figure 1. Performance with different ensemble strategies.
works (FPN) [10] from Detectron5, which uses ResNeXt [15] as backbone and has two fully connected layers (fc6 and fc7) for region classiï¬cation. This allows us to extract the 2048D fc6 features and ï¬ne-tune the fc7 parameters, as opposed to the original up-down [1], where ï¬ne-tuning previous layers requires signiï¬cantly more storage/IO and computation on 7Ã7Ã2048 convolutional feature maps. Similar to up-down, we also used Visual Genome (VG) [9] with both objects and attributes annotations to train the de- tector.
We set the ï¬ne-tune learning rate as 0.1 times the overall learning rate. We are able to reach a performance of 68.49% on test-dev with this ï¬ne-tuning.
# 2.4. Data Augmentation
from Visual Genome [9] and Visual Dialog (VisDial v0.9) [3] datasets. For VisDial, we converted the 10 turns in a dialog to 10 independent question-answer pairs. Since both VG and VisDial datasets only have a single ground-truth answer while VQA has 10, we simply replicated the answer to each question in VG and VisDial 10 times to make the data format compatible with the VQA evaluation protocol.
We also performed additional data augmentation by mir- roring the images in the VQA dataset. We do some basic processing of the questions and answers for the mirrored images by interchanging the tokens âleftâ and ârightâ in the questions and answers which contain them. When adding these additional datasets, we reduce the learning rate as we described in Section 2.2 ï¬rst at 15K iterations, respectively, and stop training at 22K iterations. As a result of data aug- mentation, we are able to improve our single model perfor- mance from 68.49% to 69.24% on test-dev.
# 2.5. Post-Challenge Improvements
Anderson et al. [1] uses only the features pooled from object proposals (called bottom-up features) to represent an image. Our hypothesis is that such a representation does not fully capture a holistic spatial information about the image and visual representations from image regions not covered by the proposals. To test this hypothesis, we combined grid-
5https://github.com/facebookresearch/Detectron
level image features together with bottom-up features. We follow the same procedure as [4] to extract grid-level fea- tures from ResNet152 [7]. Object-level features and grid- level features are separately fused with features from ques- tions and then are concatenated to fed to classiï¬cation. Be- fore the challenge deadline, we had experimented with this only on images from the VQA dataset without ï¬ne-tuning. After the challenge, we performed more comprehensive ex- periments and found that adding grid level features helps to further improve the performance to 69.81%.
Instead of using an adaptive protocol for choosing the number of object proposals (between 10 and 100) per im- age as as done in [14], we also experimented with using a simpler (but slower) strategy of using 100 objects proposals for all images. As can be seen in Table 1, with features from 100 bounding-boxes, we reach 70.01% for test-dev and 70.24% for test-std on VQA 2.0.
# 2.6. Model Ensembling
All ensembling experiments described below involve models trained before the challenge deadline. That is, they do not include the two after-challenge experiments de- scribed in Section 2.5. We tried two strategies for ensem- bling. First, we choose our best single model and train the same network with different seeds, and ï¬nally average the predictions from each model. As can be seen from Fig 1, the performance plateaus at 70.96%. Second, we choose models trained with different settings, i.e., the tweaked up- down model trained on the VQA dataset with/without data augmentation and models trained with image features ex- tracted from different Detectron models with/without data augmentation. As can be seen, this ensembling strategy is much more effective than the previous one. Ensembling 30 diverse models, we reach 72.18% on test-dev and 72.27% on test-std of VQA v2.0.
# Acknowledgements
We would like to thank Peter Anderson, Abhishek Das, Stefan Lee, Jiasen Lu, Jianwei Yang, Licheng Yu, Luowei Zhou for helpful discussions, Peter Anderson for providing training data for the Visual Genome detector, Deshraj Ya- dav for responses on EvalAI related questions, Stefan Lee for suggesting the name âPythiaâ, Abhishek Das, Abhishek Kadian for feedback on our codebase and Meet Shah for making a docker image for our demo.
# References
[1] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang. Bottom-up and top-down atten- tion for image captioning and visual question answering. In CVPR, 2018.
3
[2] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh. VQA: Visual question answering. In ICCV, 2015.
[3] A. Das, S. Kottur, K. Gupta, A. Singh, D. Yadav, J. M. Moura, D. Parikh, and D. Batra. Visual Dialog. In CVPR, 2017.
[4] A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Dar- rell, and M. Rohrbach. Multimodal compact bilinear pool- ing for visual question answering and visual grounding. arXiv:1606.01847, 2016.
P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He. Accurate, large minibatch SGD: training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
[6] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In CVPR, 2017.
[7] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[8] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[9] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, et al. Vi- sual genome: Connecting language and vision using crowd- sourced dense image annotations. IJCV, 2017.
[10] T.-Y. Lin, P. Doll´ar, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In CVPR, 2017.
[11] J. Pennington, R. Socher, and C. Manning. Glove: Global vectors for word representation. In EMNLP, 2014.
[12] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, 2015.
[13] T. Salimans and D. P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neu- ral networks. In NIPS, 2016.
[14] D. Teney, P. Anderson, X. He, and A. van den Hengel. Tips and tricks for visual question answering: Learnings from the 2017 challenge. CoRR, abs/1708.02711, 2017.
[15] S. Xie, R. Girshick, P. Dollr, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. arXiv preprint arXiv:1611.05431, 2016.
[16] Z. Yu, J. Yu, C. Xiang, J. Fan, and D. Tao. Beyond bilin- ear: Generalized multimodal factorized high-order pooling for visual question answering. TNNLS, 2018. | {
"id": "1606.01847"
} |
1807.03748 | Representation Learning with Contrastive Predictive Coding | While supervised learning has enabled great progress in many applications,
unsupervised learning has not seen such widespread adoption, and remains an
important and challenging endeavor for artificial intelligence. In this work,
we propose a universal unsupervised learning approach to extract useful
representations from high-dimensional data, which we call Contrastive
Predictive Coding. The key insight of our model is to learn such
representations by predicting the future in latent space by using powerful
autoregressive models. We use a probabilistic contrastive loss which induces
the latent space to capture information that is maximally useful to predict
future samples. It also makes the model tractable by using negative sampling.
While most prior work has focused on evaluating representations for a
particular modality, we demonstrate that our approach is able to learn useful
representations achieving strong performance on four distinct domains: speech,
images, text and reinforcement learning in 3D environments. | http://arxiv.org/pdf/1807.03748 | Aaron van den Oord, Yazhe Li, Oriol Vinyals | cs.LG, stat.ML | null | null | cs.LG | 20180710 | 20190122 | 9 1 0 2
n a J 2 2 ] G L . s c [
2 v 8 4 7 3 0 . 7 0 8 1 : v i X r a
# Representation Learning with Contrastive Predictive Coding
Aaron van den Oord DeepMind avdnoord@google.com
Yazhe Li DeepMind yazhe@google.com
Oriol Vinyals DeepMind vinyals@google.com
# Abstract
While supervised learning has enabled great progress in many applications, unsu- pervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artiï¬cial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key in- sight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models. We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.
# Introduction
Learning high-level representations from labeled data with layered differentiable models in an end- to-end fashion is one of the biggest successes in artiï¬cial intelligence so far. These techniques made manually speciï¬ed features largely redundant and have greatly improved state-of-the-art in several real-world applications [1, 2, 3]. However, many challenges remain, such as data efï¬ciency, robustness or generalization.
Improving representation learning requires features that are less specialized towards solving a single supervised task. For example, when pre-training a model to do image classiï¬cation, the induced features transfer reasonably well to other image classiï¬cation domains, but also lack certain information such as color or the ability to count that are irrelevant for classiï¬cation but relevant for e.g. image captioning [4]. Similarly, features that are useful to transcribe human speech may be less suited for speaker identiï¬cation, or music genre prediction. Thus, unsupervised learning is an important stepping stone towards robust and generic representation learning.
Despite its importance, unsupervised learning is yet to see a breakthrough similar to supervised learning: modeling high-level representations from raw observations remains elusive. Further, it is not always clear what the ideal representation is and if it is possible that one can learn such a representation without additional supervision or specialization to a particular data modality.
One of the most common strategies for unsupervised learning has been to predict future, missing or contextual information. This idea of predictive coding [5, 6] is one of the oldest techniques in signal processing for data compression. In neuroscience, predictive coding theories suggest that the brain predicts observations at various levels of abstraction [7, 8]. Recent work in unsupervised learning has successfully used these ideas to learn word representations by predicting neighboring words [9]. For images, predicting color from grey-scale or the relative position of image patches has also been
Preprint. Work in progress.
Predictions Jar Gox â\ \ Ai a a 242 243 244 Tt-3 Tt-1 Tt41 Tt+2 Tt+3 Tt+4 âws vio a= th ree
Figure 1: Overview of Contrastive Predictive Coding, the proposed representation learning approach. Although this ï¬gure shows audio as input, we use the same setup for images, text and reinforcement learning.
shown useful [10, 11]. We hypothesize that these approaches are fruitful partly because the context from which we predict related values are often conditionally dependent on the same shared high-level latent information. And by casting this as a prediction problem, we automatically infer these features of interest to representation learning.
In this paper we propose the following: ï¬rst, we compress high-dimensional data into a much more compact latent embedding space in which conditional predictions are easier to model. Secondly, we use powerful autoregressive models in this latent space to make predictions many steps in the future. Finally, we rely on Noise-Contrastive Estimation [12] for the loss function in similar ways that have been used for learning word embeddings in natural language models, allowing for the whole model to be trained end-to-end. We apply the resulting model, Contrastive Predictive Coding (CPC) to widely different data modalities, images, speech, natural language and reinforcement learning, and show that the same mechanism learns interesting high-level information on each of these domains, outperforming other approaches.
# 2 Contrastive Predicting Coding
We start this section by motivating and giving intuitions behind our approach. Next, we introduce the architecture of Contrastive Predictive Coding (CPC). After that we explain the loss function that is based on Noise-Contrastive Estimation. Lastly, we discuss related work to CPC.
# 2.1 Motivation and Intuitions
The main intuition behind our model is to learn the representations that encode the underlying shared information between different parts of the (high-dimensional) signal. At the same time it discards low-level information and noise that is more local. In time series and high-dimensional modeling, approaches that use next step prediction exploit the local smoothness of the signal. When predicting further in the future, the amount of shared information becomes much lower, and the model needs to infer more global structure. These âslow featuresâ [13] that span many time steps are often more interesting (e.g., phonemes and intonation in speech, objects in images, or the story line in books.).
One of the challenges of predicting high-dimensional data is that unimodal losses such as mean- squared error and cross-entropy are not very useful, and powerful conditional generative models which need to reconstruct every detail in the data are usually required. But these models are computationally intense, and waste capacity at modeling the complex relationships in the data x, often ignoring the context c. For example, images may contain thousands of bits of information while the high-level latent variables such as the class label contain much less information (10 bits for 1,024 categories). This suggests that modeling p(x|c) directly may not be optimal for the purpose of extracting shared information between x and c. When predicting future information we instead encode the target x (future) and context c (present) into a compact distributed vector representations (via non-linear
2
learned mappings) in a way that maximally preserves the mutual information of the original signals x and c deï¬ned as
(1) = 2c og 2 = 2, r(e,¢) log rc
By maximizing the mutual information between the encoded representations (which is bounded by the MI between the input signals), we extract the underlying latent variables the inputs have in commmon.
# 2.2 Contrastive Predictive Coding
Figure 1 shows the architecture of Contrastive Predictive Coding models. First, a non-linear encoder genc maps the input sequence of observations xt to a sequence of latent representations zt = genc(xt), t in potentially with a lower temporal resolution. Next, an autoregressive model gar summarizes all z the latent space and produces a context latent representation ct = gar(z As argued in the previous section we do not predict future observations xt+k directly with a generative model pk(xt+k|ct). Instead we model a density ratio which preserves the mutual information between xt+k and ct (Equation 1) as follows (see next sub-section for further details):
â¤
fk(xt+k, ct) â p(xt+k|ct) p(xt+k) (2)
where â stands for âproportional toâ (i.e. up to a multiplicative constant). Note that the density ratio f can be unnormalized (does not have to integrate to 1). Although any positive real score can be used here, we use a simple log-bilinear model:
felese, cr) = exp (20,4 Weer), i)
In our experiments a linear transformation W T every step k. Alternatively, non-linear networks or recurrent neural networks could be used.
By using a density ratio f (xt+k, ct) and inferring zt+k with an encoder, we relieve the model from modeling the high dimensional distribution xtk . Although we cannot evaluate p(x) or p(x|c) directly, we can use samples from these distributions, allowing us to use techniques such as Noise-Contrastive Estimation [12, 14, 15] and Importance Sampling [16] that are based on comparing the target value with randomly sampled negative values.
In the proposed model, either of zt and ct could be used as representation for downstream tasks. The autoregressive model output ct can be used if extra context from the past is useful. One such example is speech recognition, where the receptive ï¬eld of zt might not contain enough information to capture phonetic content. In other cases, where no additional context is required, zt might instead be better. If the downstream task requires one representation for the whole sequence, as in e.g. image classiï¬cation, one can pool the representations from either zt or ct over all locations. Finally, note that any type of encoder and autoregressive model can be used in the proposed framework. For simplicity we opted for standard architectures such as strided convolutional layers with resnet blocks for the encoder, and GRUs [17] for the autoregresssive model. More recent advancements in autoregressive modeling such as masked convolutional architectures [18, 19] or self-attention networks [20] could help improve results further.
# InfoNCE Loss and Mutual Information Estimation
Both the encoder and autoregressive model are trained to jointly optimize a loss based on NCE, which we will call InfoNCE. Given a set X = {x1, . . . xN } of N random samples containing one positive sample from p(xt+k|ct) and N â 1 negative samples from the âproposalâ distribution p(xt+k), we optimize:
fie(@t+K, Ct) eeyex fr rj, Ct) Ly = â- E log (4)
3
Optimizing this loss will result in fk(xt+k, ct) estimating the density ratio in equation 2. This can be shown as follows.
The loss in Equation 4 is the categorical cross-entropy of classifying the positive sample correctly, with being the prediction of the model. Let us write the optimal probability for this loss as p(d = i|X, ct) with [d = i] being the indicator that sample xi is the âpositiveâ sample. The probability that sample xi was drawn from the conditional distribution p(xt+k|ct) rather than the proposal distribution p(xt+k) can be derived as follows:
P(as\ee) Tyg; P(®1) YL p(wjlee) Tz; PCa) male D(@i == . (5) ( Lit eh p(d = i|X, cz)
As we can see, the optimal value for f (xt+k, ct) in Equation 4 is proportional to p(xt+k is independent of the the choice of the number of negative samples N â 1.
Though not required for training, we can evaluate the mutual information between the variables ct and xt+k as follows:
I(xt+k, ct) ⥠log(N ) â LN, which becomes tighter as N becomes larger. Also observe that minimizing the InfoNCE loss LN maximizes a lower bound on mutual information. For more details see Appendix.
# 2.4 Related Work
CPC is a new method that combines predicting future observations (predictive coding) with a probabilistic contrastive loss (Equation 4). This allows us to extract slow features, which maximize the mutual information of observations over long time horizons. Contrastive losses and predictive coding have individually been used in different ways before, which we will now discuss.
Contrastive loss functions have been used by many authors in the past. For example, the techniques proposed by [21, 22, 23] were based on triplet losses using a max-margin approach to separate positive from negative examples. More recent work includes Time Contrastive Networks [24] which proposes to minimize distances between embeddings from multiple viewpoints of the same scene and whilst maximizing distances between embeddings extracted from different timesteps. In Time Contrastive Learning [25] a contrastive loss is used to predict the segment-ID of multivariate time-series as a way to extract features and perform nonlinear ICA.
There has also been work and progress on deï¬ning prediction tasks from related observations as a way to extract useful representations, and many of these have been applied to language. In Word2Vec [9] neighbouring words are predicted using a contrastive loss. Skip-thought vectors [26] and Byte mLSTM [27] are alternatives which go beyond word prediction with a Recurrent Neural Network, and use maximum likelihood over sequences of observations. In Computer Vision [28] use a triplet loss on tracked video patches so that patches from the same object at different timesteps are more similar to each other than to random patches. [11, 29] propose to predict the relative postion of patches in an image and in [10] color values are predicted from a greyscale images.
# 3 Experiments
We present benchmarks on four different application domains: speech, images, natural language and reinforcement learning. For every domain we train CPC models and probe what the representations contain with either a linear classiï¬cation task or qualitative evaluations, and in reinforcement learning we measure how the auxiliary CPC loss speeds up learning of the agent.
# 3.1 Audio
For audio, we use a 100-hour subset of the publicly available LibriSpeech dataset [30]. Although the dataset does not provide labels other than the raw text, we obtained force-aligned phone sequences
4
ae
1.0 0.8 0.6 0.0 oO 5 10 15 20
Figure 2: t-SNE visualization of audio (speech) representations for a subset of 10 speakers (out of 251). Every color represents a different speaker.
Figure 3: Average accuracy of predicting the positive sample in the contrastive loss for 1 to 20 latent steps in the future of a speech waveform. The model predicts up to 200ms in the future as every step consists of 10ms of audio.
Method ACC Phone classiï¬cation Random initialization MFCC features CPC Supervised 27.6 39.7 64.6 74.6 Speaker classiï¬cation Random initialization MFCC features CPC Supervised 1.87 17.6 97.4 98.5
Table 1: LibriSpeech phone and speaker classiï¬cation results. For phone classiï¬- cation there are 41 possible classes and for speaker classiï¬cation 251. All mod- els used the same architecture and the same audio input sizes.
Method ACC #steps predicted 2 steps 4 steps 8 steps 12 steps 16 steps Negative samples from Mixed speaker Same speaker Mixed speaker (excl.) Same speaker (excl.) Current sequence only 28.5 57.6 63.6 64.6 63.8 64.6 65.5 57.3 64.6 65.2
Table 2: LibriSpeech phone classiï¬ca- tion ablation experiments. More details can be found in Section 3.1.
with the Kaldi toolkit [31] and pre-trained models on Librispeech1. We have made the aligned phone labels and our train/test split available for download on Google Drive2. The dataset contains speech from 251 different speakers.
The encoder architecture genc used in our experiments consists of a strided convolutional neural network that runs directly on the 16KHz PCM audio waveform. We use ï¬ve convolutional layers with strides [5, 4, 2, 2, 2], ï¬lter-sizes [10, 8, 4, 4, 4] and 512 hidden units with ReLU activations. The total downsampling factor of the network is 160 so that there is a feature vector for every 10ms of speech, which is also the rate of the phoneme sequence labels obtained with Kaldi. We then use a GRU RNN [17] for the autoregressive part of the model, gar with 256 dimensional hidden state. The output of the GRU at every timestep is used as the context c from which we predict 12 timesteps in the future using the contrastive loss. We train on sampled audio windows of length 20480. We use the Adam optimizer [32] with a learning rate of 2e-4, and use 8 GPUs each with a minibatch of 8 examples from which the negative samples in the contrastive loss are drawn. The model is trained until convergence, which happens roughly at 300,000 updates.
Figure 3 shows the accuracy of the model to predict latents in the future, from 1 to 20 timesteps. We report the average number of times the logit for the positive sample is higher than for the negative samples in the probabilistic contrastive loss. This ï¬gure also shows that the objective is neither trivial nor impossible, and as expected the prediction task becomes harder as the target is further away.
# 1www.kaldi-asr.org/downloads/build/6/trunk/egs/librispeech/ 2https://drive.google.com/drive/folders/1BhJ2umKH3whguxMwifaKtSra0TgAbtfb
5
Jar - Output Gene ~ Output âââ_â__ + _| 64 px t+2) jee | . - Zt+3) leet yt ~~ Predictions zt+4 ta: |â ro 50% overlap ] 256 PX, | v input image |
Figure 4: Visualization of Contrastive Predictive Coding for images (2D adaptation of Figure 1).
To understand the representations extracted by CPC, we measure the phone prediction performance with a linear classiï¬er trained on top of these features, which shows how linearly separable the relevant classes are under these features. We extract the outputs of the GRU (256 dimensional), i.e. ct, for the whole dataset after model convergence and train a multi-class linear logistic regression classiï¬er. The results are shown in Table 1 (top). We compare the accuracy with three baselines: representations from a random initialized model (i.e., genc and gar are untrained), MFCC features, and a model that is trained end-to-end supervised with the labeled data. These two models have the same architecture as the one used to extract the CPC representations. The fully supervised model serves as an indication for what is achievable with this architecture. We also found that not all the information encoded is linearly accessible. When we used a single hidden layer instead the accuracy increases from 64.6 to 72.5, which is closer to the accuracy of the fully supervised model.
Table 2 gives an overview of two ablation studies of CPC for phone classiï¬cation. In the ï¬rst set we vary the number of steps the model predicts showing that predicting multiple steps is important for learning useful features. In the second set we compare different strategies for drawing negative sample, all predicting 12 steps (which gave the best result in the ï¬rst ablation). In the mixed speaker experiment the negative samples contain examples of different speakers (ï¬rst row), in contrast to same speaker experiment (second row). In the third and fourth experiment we exclude the current sequence to draw negative samples from (so only other examples in the minibatch are present in X) and in the last experiment we only draw negative samples within the sequence (thus all samples are from the same speaker).
Beyond phone classiï¬cation, Table 1 (bottom) shows the accuracy of performing speaker identity (out of 251) with a linear classiï¬er from the same representation (we do not average utterances over time). Interestingly, CPCs capture both speaker identity and speech contents, as demonstrated by the good accuracies attained with a simple linear classiï¬er, which also gets close to the oracle, fully supervised networks.
Additionally, Figure 2 shows a t-SNE visualization [33] of how discriminative the embeddings are for speaker voice-characteristics. It is important to note that the window size (maximum context size for the GRU) has a big impact on the performance, and longer segments would give better results. Our model had a maximum of 20480 timesteps to process, which is slightly longer than a second.
# 3.2 Vision
In our visual representation experiments we use the ILSVRC ImageNet competition dataset [34]. The ImageNet dataset has been used to evaluate unsupervised vision models by many authors [28, 11, 35, 10, 29, 36]. We follow the same setup as [36] and use a ResNet v2 101 architecture [37] as the image encoder genc to extract CPC representations (note that this encoder is not pretrained). We did not use Batch-Norm [38]. After unsupervised training, a linear layer is trained to measure classiï¬cation accuracy on ImageNet labels.
6
Figure 5: Every row shows image patches that activate a certain neuron in the CPC architecture.
The training procedure is as follows: from a 256x256 image we extract a 7x7 grid of 64x64 crops with 32 pixels overlap. Simple data augmentation proved helpful on both the 256x256 images and the 64x64 crops. The 256x256 images are randomly cropped from a 300x300 image, horizontally ï¬ipped with a probability of 50% and converted to greyscale. For each of the 64x64 crops we randomly take a 60x60 subcrop and pad them back to a 64x64 image.
Each crop is then encoded by the ResNet-v2-101 encoder. We use the outputs from the third residual block, and spatially mean-pool to get a single 1024-d vector per 64x64 patch. This results in a 7x7x1024 tensor. Next, we use a PixelCNN-style autoregressive model [19] (a convolutional row- GRU PixelRNN [39] gave similar results) to make predictions about the latent activations in following rows top-to-bottom, visualized in Figure 4. We predict up to ï¬ve rows from the 7x7 grid, and we apply the contrastive loss for each patch in the row. We used Adam optimizer with a learning rate of 2e-4 and trained on 32 GPUs each with a batch size of 16.
For the linear classiï¬er trained on top of the CPC features we use SGD with a momentum of 0.9, a learning rate schedule of 0.1, 0.01 and 0.001 for 50k, 25k and 10k updates and batch size of 2048 on a single GPU. Note that when training the linear classiï¬er we ï¬rst spatially mean-pool the 7x7x1024 representation to a single 1024 dimensional vector. This is slightly different from [36] which uses a 3x3x1024 representation without pooling, and thus has more parameters in the supervised linear mapping (which could be advantageous).
Tables 3 and 4 show the top-1 and top-5 classiï¬cation accuracies compared with the state-of-the-art. Despite being relatively domain agnostic, CPCs improve upon state-of-the-art by 9% absolute in top-1 accuracy, and 4% absolute in top-5 accuracy.
# 3.3 Natural Language
Our natural language experiments follow closely the procedure from [26] which was used for the skip-thought vectors model. We ï¬rst learn our unsupervised model on the BookCorpus dataset [42], and evaluate the capability of our model as a generic feature extractor by using CPC representations for a set of classiï¬cation tasks. To cope with words that are not seen during training, we employ vocabulary expansion the same way as [26], where a linear mapping is constructed between word2vec and the word embeddings learned by the model.
For the classiï¬cation tasks we used the following datasets: movie review sentiment (MR) [43], customer product reviews (CR) [44], subjectivity/objectivity [45], opinion polarity (MPQA) [46] and question-type classiï¬cation (TREC) [47]. As in [26] we train a logistic regression classiï¬er and
7
Method Top-1 ACC Using AlexNet conv5 Video [28] Relative Position [11] BiGan [35] Colorization [10] Jigsaw [29] * Using ResNet-V2 Motion Segmentation [36] Exemplar [36] Relative Position [36] Colorization [36] CPC 29.8 30.4 34.8 35.2 38.1 27.6 31.5 36.2 39.6 48.7
Method Motion Segmentation (MS) Exemplar (Ex) Relative Position (RP) Colorization (Col) Combination of MS + Ex + RP + Col CPC Top-5 ACC 48.3 53.1 59.2 62.5 69.3 73.6
Table 3: ImageNet top-1 unsupervised classiï¬- cation results. *Jigsaw is not directly compa- rable to the other AlexNet results because of architectural differences.
Table 4: ImageNet top-5 unsupervised classi- ï¬cation results. Previous results with MS, Ex, RP and Col were taken from [36] and are the best reported results on this task.
Method MR CR Subj MPQA TREC Paragraph-vector [40] Skip-thought vector [26] Skip-thought + LN [41] 74.8 75.5 79.5 78.1 79.3 82.6 90.5 92.1 93.4 74.2 86.9 89.0 91.8 91.4 - CPC 76.9 80.1 91.2 87.7 96.8
Table 5: Classiï¬cation accuracy on ï¬ve common NLP benchmarks. We follow the same transfer learning setup from Skip-thought vectors [26] and use the BookCorpus dataset as source. [40] is an unsupervised approach to learning sentence-level representations. [26] is an alternative unsupervised learning approach. [41] is the same skip-thought model with layer normalization trained for 1M iterations.
evaluate with 10-fold cross-validation for MR, CR, Subj, MPQA and use the train/test split for TREC. A L2 regularization weight was chosen via cross-validation (therefore nested cross-validation for the ï¬rst 4 datasets).
Our model consists of a simple sentence encoder genc (a 1D-convolution + ReLU + mean-pooling) that embeds a whole sentence into a 2400-dimension vector z, followed by a GRU (2400 hidden units) which predicts up to 3 future sentence embeddings with the contrastive loss to form c. We used Adam optimizer with a learning rate of 2e-4 trained on 8 GPUs, each with a batch size of 64. We found that more advanced sentence encoders did not signiï¬cantly improve the results, which may be due to the simplicity of the transfer tasks (e.g., in MPQA most datapoints consists of one or a few words), and the fact that bag-of-words models usually perform well on many NLP tasks [48].
Results on evaluation tasks are shown in Table 5 where we compare our model against other models that have been used using the same datasets. The performance of our method is very similar to the skip-thought vector model, with the advantage that it does not require a powerful LSTM as word-level decoder, therefore much faster to train. Although this is a standard transfer learning benchmark, we found that models that learn better relationships in the childeren books did not necessarily perform better on the target tasks (which are very different: movie reviews etc). We note that better [49, 27] results have been published on these target datasets, by transfer learning from a different source task.
8
coms watermaze seq @x0l0râ¬_goal_ locations. small . seekavoid_arena 01 o_ l2sertag three opponents small = fooms keys doors. puzzle Return Return Return 0 250M 500M 750M 18 0 250M âS00M 750M 1B 0 250M 500M 750M 1B 0 250M 00M 750M 18 0 250M = S00M 750M 18 Frame Frame Frame Frame Frame
Figure 6: Reinforcement Learning results for 5 DeepMind Lab tasks used in [50]. Black: batched A2C baseline, Red: with auxiliary contrastive loss.
# 3.4 Reinforcement Learning
Finally, we evaluate the proposed unsupervised learning approach on ï¬ve reinforcement learn- ing in 3D environments of DeepMind Lab [51]: rooms_watermaze, explore_goal_locations_small, seekavoid_arena_01, lasertag_three_opponents_small and rooms_keys_doors_puzzle.
This setup differs from the previous three. Here, we take the standard batched A2C [52] agent as base model and add CPC as an auxiliary loss. We do not use a replay buffer, so the predictions have to adapt to the changing behavior of the policy. The learned representation encodes a distribution over its future observations.
Following the same approach as [50], we perform a random search over the entropy regularization weight, the learning-rate and epsilon hyperparameters for RMSProp [53]. The unroll length for the A2C is 100 steps and we predict up to 30 steps in the future to derive the contrastive loss. The baseline agent consists of a convolutional encoder which maps every input frame into a single vector followed by a temporal LSTM. We use the same encoder as in the baseline agent and only add the linear prediction mappings for the contrastive loss, resulting in minimal overhead which also showcases the simplicity of implementing our method on top of an existing architecture that has been designed and tuned for a particular task. We refer to [50] for all other hyperparameter and implementation details.
Figure 6 shows that for 4 out of the 5 games performance of the agent improves signiï¬cantly with the contrastive loss after training on 1 billion frames. For lasertag_three_opponents_small, contrastive loss does not help nor hurt. We suspect that this is due to the task design, which does not require memory and thus yields a purely reactive policy.
# 4 Conclusion
In this paper we presented Contrastive Predictive Coding (CPC), a framework for extracting compact latent representations to encode predictions over future observations. CPC combines autoregressive modeling and noise-contrastive estimation with intuitions from predictive coding to learn abstract representations in an unsupervised fashion. We tested these representations in a wide variety of domains: audio, images, natural language and reinforcement learning and achieve strong or state- of-the-art performance when used as stand-alone features. The simplicity and low computational requirements to train the model, together with the encouraging results in challenging reinforcement learning domains when used in conjunction with the main loss are exciting developments towards useful unsupervised learning that applies universally to many more data modalities.
# 5 Acknowledgements
We would like to thank Andriy Mnih, Andrew Zisserman, Alex Graves and Carl Doersch for their helpful comments on the paper and Lasse Espeholt for making the A2C baseline available.
# References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â1105, 2012.
9
[2] Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82â97, 2012.
[3] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104â3112, 2014. [4] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 3156â3164. IEEE, 2015.
[5] Peter Elias. Predictive codingâi. IRE Transactions on Information Theory, 1(1):16â24, 1955. [6] Bishnu S Atal and Manfred R Schroeder. Adaptive predictive coding of speech signals. The
Bell System Technical Journal, 49(8):1973â1986, 1970.
[7] Rajesh PN Rao and Dana H Ballard. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-ï¬eld effects. Nature neuroscience, 2(1):79, 1999.
[8] Karl Friston. A theory of cortical responses. Philosophical transactions of the Royal Society B: Biological sciences, 360(1456):815â836, 2005.
[9] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efï¬cient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[10] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European Conference on Computer Vision, pages 649â666. Springer, 2016.
[11] Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In The IEEE International Conference on Computer Vision (ICCV), December 2015.
[12] Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artiï¬cial Intelligence and Statistics, pages 297â304, 2010.
[13] Laurenz Wiskott and Terrence J Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural computation, 14(4):715â770, 2002.
[14] Andriy Mnih and Yee Whye Teh. A fast and simple algorithm for training neural probabilistic language models. arXiv preprint arXiv:1206.6426, 2012.
[15] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
[16] Yoshua Bengio and Jean-Sebastien Senecal. Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Trans. Neural Networks, 19, 2008.
[17] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder- decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[18] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
[19] Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with pixelcnn decoders. CoRR, abs/1606.05328, 2016.
[20] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Å ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998â6008. Curran Associates, Inc., 2017.
[21] Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face veriï¬cation. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 539â546. IEEE, 2005.
[22] Kilian Q Weinberger and Lawrence K Saul. Distance metric learning for large margin nearest neighbor classiï¬cation. Journal of Machine Learning Research, 10(Feb):207â244, 2009.
10
[23] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A uniï¬ed embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815â823, 2015.
[24] Pierre Sermanet, Corey Lynch, Jasmine Hsu, and Sergey Levine. Time-contrastive networks: Self-supervised learning from multi-view observation. arXiv preprint arXiv:1704.06888, 2017.
[25] Aapo Hyvarinen and Hiroshi Morioka. Unsupervised feature extraction by time-contrastive learning and nonlinear ica. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3765â3773. Curran Associates, Inc., 2016.
[26] Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In Advances in neural information processing systems, pages 3294â3302, 2015.
[27] Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. Learning to generate reviews and discover- ing sentiment. arXiv preprint arXiv:1704.01444, 2017.
[28] Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. arXiv preprint arXiv:1505.00687, 2015.
[29] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, pages 69â84. Springer, 2016.
[30] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: an asr corpus based on public domain audio books. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 5206â5210. IEEE, 2015.
[31] Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. The kaldi speech recognition toolkit. In IEEE 2011 workshop on automatic speech recognition and understanding. IEEE Signal Processing Society, 2011.
[32] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[33] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579â2605, 2008.
[34] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211â252, 2015.
[35] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
[36] Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2051â2060, 2017.
[37] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630â645. Springer, 2016.
[38] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pages 448â456, 2015.
[39] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. CoRR, abs/1601.06759, 2016.
[40] Quoc Le and Tomas Mikolov. Distributed representations of sentences and documents. In International Conference on Machine Learning, pages 1188â1196, 2014.
[41] Lei Jimmy Ba, Ryan Kiros, and Geoffrey E Hinton. Layer normalization. CoRR, abs/1607.06450, 2016.
[42] Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pages 19â27, 2015.
11
[43] Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 115â124. Association for Computational Linguistics, 2005.
[44] Minqing Hu and Bing Liu. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168â177. ACM, 2004.
[45] Bo Pang and Lillian Lee. A sentimental education: Sentiment analysis using subjectivity In Proceedings of the 42nd annual meeting on summarization based on minimum cuts. Association for Computational Linguistics, page 271. Association for Computational Linguistics, 2004.
[46] Janyce Wiebe, Theresa Wilson, and Claire Cardie. Annotating expressions of opinions and emotions in language. Language resources and evaluation, 39(2-3):165â210, 2005.
[47] Xin Li and Dan Roth. Learning question classiï¬ers. In Proceedings of the 19th international conference on Computational linguistics-Volume 1, pages 1â7. Association for Computational Linguistics, 2002.
[48] Sida Wang and Christopher D. Manning. Baselines and bigrams: Simple, good sentiment and topic classiï¬cation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2, ACL â12, pages 90â94, 2012.
[49] Han Zhao, Zhengdong Lu, and Pascal Poupart. Self-adaptive hierarchical sentence model. In IJCAI, pages 4069â4076, 2015.
[50] Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
[51] Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, VÃctor Valdés, Amir Sadik, et al. Deepmind lab. arXiv preprint arXiv:1612.03801, 2016.
[52] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep rein- forcement learning. In International Conference on Machine Learning, pages 1928â1937, 2016.
[53] Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Neural networks for machine learning- lecture 6a-overview of mini-batch gradient descent.
[54] Ishmael Belghazi, Sai Rajeswar, Aristide Baratin, R Devon Hjelm, and Aaron Courville. Mine: Mutual information neural estimation. 2018.
12
# A Appendix
# A.1 Estimating the Mutual Information with InfoNCE
By optimizing InfoNCE, the CPC loss we deï¬ned in Equation 4, we are maximizing the mutual information between ct and zt+k (which is bounded by the MI between ct and xt+k). This can be shown as follows. As already shown in Section 2.3, the optimal value for f (xt+k, ct) is given by p(xt+k p(xt+k) . Inserting this back in to Equation 4 and splitting X into the positive example and the negative examples Xneg results in:
Mash opt _f P(@t4k £N x log P(te¢K\|Ct) x P(xj\ce) 9) P(@t+k) ©j3â¬Xneg Pej)
- D(Tt+K) me aX; ile) E 1+ 7 xs P(tr+Klee) x r 5 â¢
â
â E X log 1 + p(xj|ct) p(xj) (8)
1) D(tt+K) ââââ_(N- pltrselen
(N - vE 1 )
= E X log 1 + (9)
_ D(t+4k) > Elog | âââ~_N 10 = xs D(xt+KICt) (10)
= âI(xt+k, ct) + log(N ), (11)
Therefore, I(xt+k, ct) ⥠log(N ) â Lopt N . This trivially also holds for other f that obtain a worse (higher) LN. Equation 8 quickly becomes more accurate as N increases. At the same time log(N ) â LN also increases, so itâs useful to use large values of N . InfoNCE is also related to MINE [54]. Without loss of generality letâs write f (x, c) = eF (x,c), then
f(e,0) . _ E |log â=2"* _| = E |F(a,0)| â E [log e 0] (12) x ex MeO] Go cy (98 2
= E |F(«,c)|-â E [log (eFâ¢9 + Fe) 3 (z,c) (7.0) (a,c) 8 : » (13) © 5 EXneg
â
< E |F(a,c)| -E [10 : cheer] 4 ~ (2,0) (#,¢) c i »~ (14) @5 EX neg
# Xneg â 1 N â 1
1 = E |F(a,c -E[I : oF (a5.0) 4] (Nâ1)] (a,c) (2,¢) cL oN > oF â9 + log( ) 5 Xneg
is equivalent to the MINE estimator (up to a constant). So we maximize a lower bound on this estimator. We found that using MINE directly gave identical performance when the task was non- trivial, but became very unstable if the target was easy to predict from the context (e.g., when predicting a single step in the future and the target overlaps with the context).
13
(15) | {
"id": "1505.00687"
} |
1807.03819 | Universal Transformers | Recurrent neural networks (RNNs) sequentially process data by updating their
state with each new data point, and have long been the de facto choice for
sequence modeling tasks. However, their inherently sequential computation makes
them slow to train. Feed-forward and convolutional architectures have recently
been shown to achieve superior results on some sequence modeling tasks such as
machine translation, with the added advantage that they concurrently process
all inputs in the sequence, leading to easy parallelization and faster training
times. Despite these successes, however, popular feed-forward sequence models
like the Transformer fail to generalize in many simple tasks that recurrent
models handle with ease, e.g. copying strings or even simple logical inference
when the string or formula lengths exceed those observed at training time. We
propose the Universal Transformer (UT), a parallel-in-time self-attentive
recurrent sequence model which can be cast as a generalization of the
Transformer model and which addresses these issues. UTs combine the
parallelizability and global receptive field of feed-forward sequence models
like the Transformer with the recurrent inductive bias of RNNs. We also add a
dynamic per-position halting mechanism and find that it improves accuracy on
several tasks. In contrast to the standard Transformer, under certain
assumptions, UTs can be shown to be Turing-complete. Our experiments show that
UTs outperform standard Transformers on a wide range of algorithmic and
language understanding tasks, including the challenging LAMBADA language
modeling task where UTs achieve a new state of the art, and machine translation
where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De
dataset. | http://arxiv.org/pdf/1807.03819 | Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, Łukasz Kaiser | cs.CL, cs.LG, stat.ML | Published at ICLR2019 | null | cs.CL | 20180710 | 20190305 | 9 1 0 2
r a M 5 ] L C . s c [
3 v 9 1 8 3 0 . 7 0 8 1 : v i X r a
Published as a conference paper at ICLR 2019
# UNIVERSAL TRANSFORMERS
# Mostafa Dehghaniââ University of Amsterdam DeepMind dehghani@uva.nl
# Stephan Gouwsâ
sgouws@google.com
Oriol Vinyals DeepMind vinyals@google.com
# Jakob Uszkoreit Google Brain usz@google.com
# Åukasz Kaiser Google Brain lukaszkaiser@google.com
# Jakob Uszkoreit Lukasz Kaiser
# ABSTRACT
Recurrent neural networks (RNNs) sequentially process data by updating their state with each new data point, and have long been the de facto choice for sequence modeling tasks. However, their inherently sequential computation makes them slow to train. Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times. Despite these successes, however, popular feed-forward sequence models like the Transformer fail to generalize in many simple tasks that recurrent models handle with ease, e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time. We propose the Universal Transformer (UT), a parallel-in-time self-attentive recurrent sequence model which can be cast as a generalization of the Transformer model and which addresses these issues. UTs combine the parallelizability and global receptive ï¬eld of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs. We also add a dynamic per-position halting mechanism and ï¬nd that it improves accuracy on several tasks. In contrast to the standard Transformer, under certain assumptions UTs can be shown to be Turing-complete. Our experiments show that UTs outperform standard Transformers on a wide range of algorithmic and language understanding tasks, including the challenging LAMBADA language modeling task where UTs achieve a new state of the art, and machine translation where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De dataset.
# INTRODUCTION
Convolutional and fully-attentional feed-forward architectures like the Transformer have recently emerged as viable alternatives to recurrent neural networks (RNNs) for a range of sequence modeling tasks, notably machine translation (Gehring et al., 2017; Vaswani et al., 2017). These parallel-in-time architectures address a signiï¬cant shortcoming of RNNs, namely their inherently sequential computation which prevents parallelization across elements of the input sequence, whilst still addressing the vanishing gradients problem as the sequence length gets longer (Hochreiter et al., 2003). The Transformer model in particular relies entirely on a self-attention mechanism (Parikh et al., 2016; Lin et al., 2017) to compute a series of context-informed vector-space representations of the symbols in its input and output, which are then used to predict distributions over subsequent symbols as the model predicts the output sequence symbol-by-symbol. Not only is this mechanism straightforward to parallelize, but as each symbolâs representation is also directly informed by all other symbolsâ representations, this results in an effectively global receptive ï¬eld across the whole sequence. This stands in contrast to e.g. convolutional architectures which typically only have a limited receptive ï¬eld.
Notably, however, the Transformer with its ï¬xed stack of distinct layers foregoes RNNsâ inductive bias towards learning iterative or recursive transformations. Our experiments indicate that this inductive
â Equal contribution, alphabetically by last name. â Work performed while at Google Brain.
1
Published as a conference paper at ICLR 2019
Parameters are tied across positions and time steps
# Per Position States
c 1 ht Self-Attention | »{ Transition Function }-> h,'*1 Setf-Attention }â>{ Transition Function }â> h,*2 hg! Self-Attention | (Transition Function }â> h,t*1 Self-Attention } {Transition Function }â> hgt*2 hint Self-Attention | >[ Transition Function }â> h,,¢*1 Self-Attention }>{ Transition Function }â> h,,t*2
# Time
Figure 1: The Universal Transformer repeatedly reï¬nes a series of vector representations for each position of the sequence in parallel, by combining information from different positions using self-attention (see Eqn 2) and applying a recurrent transition function (see Eqn 4) across all time steps 1 ⤠t ⤠T . We show this process over two recurrent time-steps. Arrows denote dependencies between operations. Initially, h0 is initialized with the embedding for each symbol in the sequence. ht i represents the representation for input symbol 1 ⤠i ⤠m at recurrent time-step t. With dynamic halting, T is dynamically determined for each position (Section 2.2).
bias may be crucial for several algorithmic and language understanding tasks of varying complexity: in contrast to models such as the Neural Turing Machine (Graves et al., 2014), the Neural GPU (Kaiser & Sutskever, 2016) or Stack RNNs (Joulin & Mikolov, 2015), the Transformer does not generalize well to input lengths not encountered during training.
In this paper, we introduce the Universal Transformer (UT), a parallel-in-time recurrent self-attentive sequence model which can be cast as a generalization of the Transformer model, yielding increased theoretical capabilities and improved results on a wide range of challenging sequence-to-sequence tasks. UTs combine the parallelizability and global receptive ï¬eld of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs, which seems to be better suited to a range of algorithmic and natural language understanding sequence-to-sequence problems. As the name implies, and in contrast to the standard Transformer, under certain assumptions UTs can be shown to be Turing-complete (or âcomputationally universalâ, as shown in Section 4).
In each recurrent step, the Universal Transformer iteratively reï¬nes its representations for all symbols in the sequence in parallel using a self-attention mechanism (Parikh et al., 2016; Lin et al., 2017), followed by a transformation (shared across all positions and time-steps) consisting of a depth-wise separable convolution (Chollet, 2016; Kaiser et al., 2017) or a position-wise fully-connected layer (see Fig 1). We also add a dynamic per-position halting mechanism (Graves, 2016), allowing the model to choose the required number of reï¬nement steps for each symbol dynamically, and show for the ï¬rst time that such a conditional computation mechanism can in fact improve accuracy on several smaller, structured algorithmic and linguistic inference tasks (although it marginally degraded results on MT).
Our strong experimental results show that UTs outperform Transformers and LSTMs across a wide range of tasks. The added recurrence yields improved results in machine translation where UTs outperform the standard Transformer. In experiments on several algorithmic tasks and the bAbI language understanding task, UTs also consistently and signiï¬cantly improve over LSTMs and the standard Transformer. Furthermore, on the challenging LAMBADA text understanding data set UTs with dynamic halting achieve a new state of the art.
2 MODEL DESCRIPTION
2.1 THE UNIVERSAL TRANSFORMER
The Universal Transformer (UT; see Fig. 2) is based on the popular encoder-decoder architecture commonly used in most neural sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014; Vaswani et al., 2017). Both the encoder and decoder of the UT operate by applying a recurrent neural network to the representations of each of the positions of the input and output sequence, respectively. However, in contrast to most applications of recurrent neural networks to sequential data, the UT does not recur over positions in the sequence, but over consecutive revisions of the vector representations of each position (i.e., over âdepthâ). In other words, the UT is not computationally bound by the number of symbols in the sequence, but only by the number of revisions made to each symbolâs representation.
2
>
Published as a conference paper at ICLR 2019
In each recurrent time-step, the representation of every position is concurrently (in parallel) revised in two sub-steps: ï¬rst, using a self-attention mechanism to exchange information across all positions in the sequence, thereby generating a vector representation for each position that is informed by the representations of all other positions at the previous time-step. Then, by applying a transition function (shared across position and time) to the outputs of the self-attention mechanism, independently at each position. As the recurrent transition function can be applied any number of times, this implies that UTs can have variable depth (number of per-symbol processing steps). Crucially, this is in contrast to most popular neural sequence models, including the Transformer (Vaswani et al., 2017) or deep RNNs, which have constant depth as a result of applying a ï¬xed stack of layers. We now describe the encoder and decoder in more detail.
ENCODER: Given an input sequence of length m, we start with a matrix whose rows are initialized as the d-dimensional embeddings of the symbols at each position of the sequence H 0 â RmÃd. The UT then iteratively computes representations H t at step t for all m positions in parallel by applying the multi-headed dot-product self-attention mechanism from Vaswani et al. (2017), followed by a recurrent transition function. We also add residual connections around each of these function blocks and apply dropout and layer normalization (Srivastava et al., 2014; Ba et al., 2016) (see Fig. 2 for a simpliï¬ed diagram, and Fig. 4 in the Appendix A for the complete model.).
More speciï¬cally, we use the scaled dot-product attention which combines queries Q, keys K and values V as follows
ATTENTION(Q,K. v)=sorrmax( 25) qd) â > _ Vd d
where d is the number of columns of Q, K and V . We use the multi-head version with k heads, as introduced in (Vaswani et al., 2017),
# MULTIHEADSELFATTENTION(H t) = CONCAT(head1,...,headk)W O
where headi = ATTENTION(H tW Q i ,H tW K i ,H tW V i ) (3)
and we map the state H t to queries, keys and values with afï¬ne projections using learned parameter matrices W Q â RdÃd/k, W K â RdÃd/k, W V â RdÃd/k and W O â RdÃd. At step t, the UT then computes revised representations H t â RmÃd for all m input positions as follows
H t = LAYERNORM(At +TRANSITION(At)) (4)
where At = LAYERNORM((H tâ1 +P t)+MULTIHEADSELFATTENTION(H tâ1 +P t)),
where LAYERNORM() is deï¬ned in Ba et al. (2016), and TRANSITION() and P t are discussed below.
Depending on the task, we use one of two different transition functions: either a separable convolution (Chollet, 2016) or a fully-connected neural network that consists of a single rectiï¬ed-linear activation function between two afï¬ne transformations, applied position-wise, i.e. individually to each row of At. P t â RmÃd above are ï¬xed, constant, two-dimensional (position, time) coordinate embeddings, obtained by computing the sinusoidal position embedding vectors as deï¬ned in (Vaswani et al., 2017) for the positions 1 ⤠i ⤠m and the time-step 1 ⤠t ⤠T separately for each vector-dimension 1 ⤠j ⤠d, and summing:
(6)
i,2j = sin(i/100002j/d)+sin(t/100002j/d) P t i,2j+1 = cos(i/100002j/d)+cos(t/100002j/d). P t
(7)
3
(2)
(5)
Published as a conference paper at ICLR 2019
Output Probabilities After T steps Recurrent Decoder Transition Function Block Recurrent After T steps Ey Encoder Transition Function Multihead Attention 4 Block $ 3 i Multihead Self-Attention Multihead Self-Attention Embed Input Symbols Embed Target Symbols Input Sequence Target Sequence (right-shifted by one) sdajs | 104
Figure 2: The recurrent blocks of the Universal Transformer encoder and decoder. This diagram omits position and time-step encodings as well as dropout, residual connections and layer normalization. A complete version can be found in Appendix A. The Universal Transformer with dynamic halting determines the number of steps T for each position individually using ACT (Graves, 2016).
After T steps (each updating all positions of the input sequence in parallel), the ï¬nal output of the Universal Transformer encoder is a matrix of d-dimensional vector representations H T â RmÃd for the m symbols of the input sequence.
DECODER: The decoder shares the same basic recurrent structure of the encoder. However, after the self-attention function, the decoder additionally also attends to the ï¬nal encoder representation H T of each position in the input sequence using the same multihead dot-product attention function from Equation 2, but with queries Q obtained from projecting the decoder representations, and keys and values (K and V ) obtained from projecting the encoder representations (this process is akin to standard attention (Bahdanau et al., 2014)).
Like the Transformer model, the UT is autoregressive (Graves, 2013). Trained using teacher-forcing, at generation time it produces its output one symbol at a time, with the decoder consuming the previously produced output positions. During training, the decoder input is the target output, shifted to the right by one position. The decoder self-attention distributions are further masked so that the model can only attend to positions to the left of any predicted symbol. Finally, the per-symbol target distributions are obtained by applying an afï¬ne transformation O â RdÃV from the ï¬nal decoder state to the output vocabulary size V , followed by a softmax which yields an (mÃV )-dimensional output matrix normalized over its rows:
P(Ypos|Yi1-posâ1} 417) =SOFTMAX(OH⢠J] (8)
To generate from the model, the encoder is run once for the conditioning input sequence. Then the decoder is run repeatedly, consuming all already-generated symbols, while generating one additional distribution over the vocabulary for the symbol at the next output position per iteration. We then typically sample or select the highest probability symbol as the next symbol.
2.2 DYNAMIC HALTING
In sequence processing systems, certain symbols (e.g. some words or phonemes) are usually more ambiguous than others. It is therefore reasonable to allocate more processing resources to these more ambiguous symbols. Adaptive Computation Time (ACT) (Graves, 2016) is a mechanism for dynamically modulating the number of computational steps needed to process each input symbol
1Note that T here denotes time-step T and not the transpose operation.
4
Published as a conference paper at ICLR 2019
Model 10K examples 1K examples train single train joint train single train joint Previous best results: QRNet (Seo et al., 2016) Sparse DNC (Rae et al., 2016) GA+MAGE Dhingra et al. (2017) MemN2N Sukhbaatar et al. (2015) 0.3 (0/20) - - - - 2.9 (1/20) - - - - 8.7 (5/20) - - - - 12.4 (11/20) Our Results: Transformer (Vaswani et al., 2017) Universal Transformer (this work) UT w/ dynamic halting (this work) 15.2 (10/20) 0.23 (0/20) 0.21 (0/20) 22.1 (12/20) 0.47 (0/20) 0.29 (0/20) 21.8 (5/20) 5.31 (5/20) 4.55 (3/20) 26.8 (14/20) 8.50 (8/20) 7.78 (5/20)
Table 1: Average error and number of failed tasks (> 5% error) out of 20 (in parentheses; lower is better in both cases) on the bAbI dataset under the different training/evaluation setups. We indicate state-of-the-art where available for each, or â-â otherwise.
(called the âponder timeâ) in standard recurrent neural networks based on a scalar halting probability predicted by the model at each step.
Inspired by the interpretation of Universal Transformers as applying self-attentive RNNs in parallel to all positions in the sequence, we also add a dynamic ACT halting mechanism to each position (i.e. to each per-symbol self-attentive RNN; see Appendix C for more details). Once the per-symbol recurrent block halts, its state is simply copied to the next step until all blocks halt, or we reach a maximum number of steps. The ï¬nal output of the encoder is then the ï¬nal layer of representations produced in this way.
# 3 EXPERIMENTS AND ANALYSIS
We evaluated the Universal Transformer on a range of algorithmic and language understanding tasks, as well as on machine translation. We describe these tasks and datasets in more detail in Appendix D.
3.1 BABI QUESTION-ANSWERING
The bAbi question answering dataset (Weston et al., 2015) consists of 20 different tasks, where the goal is to answer a question given a number of English sentences that encode potentially multiple supporting facts. The goal is to measure various forms of language understanding by requiring a certain type of reasoning over the linguistic facts presented in each story. A standard Transformer does not achieve good results on this task2. However, we have designed a model based on the Universal Transformer which achieves state-of-the-art results on this task.
To encode the input, similar to Henaff et al. (2016), we ï¬rst encode each fact in the story by applying a learned multiplicative positional mask to each wordâs embedding, and summing up all embeddings. We embed the question in the same way, and then feed the (Universal) Transformer with these embeddings of the facts and questions.
As originally proposed, models can either be trained on each task separately (âtrain singleâ) or jointly on all tasks (âtrain jointâ). Table 1 summarizes our results. We conducted 10 runs with different initializations and picked the best model based on performance on the validation set, similar to previous work. Both the UT and UT with dynamic halting achieve state-of-the-art results on all tasks in terms of average error and number of failed tasks3, in both the 10K and 1K training regime (see Appendix E for breakdown by task).
To understand the working of the model better, we analyzed both the attention distributions and the average ACT ponder times for this task (see Appendix F for details). First, we observe that the attention distributions start out very uniform, but get progressively sharper in later steps around the correct supporting facts that are required to answer each question, which is indeed very similar to how humans would solve the task. Second, with dynamic halting we observe that the average ponder time (i.e. depth
2We experimented with different hyper-parameters and different network sizes, but it always overï¬ts. 3Deï¬ned as > 5% error.
5
Published as a conference paper at ICLR 2019
Hi 2
Figure 3: Ponder time of UT with dynamic halting for encoding facts in a story and question in a bAbI task requiring three supporting facts.
of the per-symbol recurrent processing chain) over all positions in all samples in the test data for tasks requiring three supporting facts is higher (3.8±2.2) than for tasks requiring only two (3.1±1.1), which is in turn higher than for tasks requiring only one supporting fact (2.3±0.8). This indicates that the model adjusts the number of processing steps with the number of supporting facts required to answer the questions. Finally, we observe that the histogram of ponder times at different positions is more uniform in tasks requiring only one supporting fact compared to two and three, and likewise for tasks requiring two compared to three. Especially for tasks requiring three supporting facts, many positions halt at step 1 or 2 already and only a few get transformed for more steps (see for example Fig 3). This is particularly interesting as the length of stories is indeed much higher in this setting, with more irrelevant facts which the model seems to successfully learn to ignore in this way.
Similar to dynamic memory networks (Kumar et al., 2016), there is an iterative attention process in UTs that allows the model to condition its attention over memory on the result of previous iterations. Appendix F presents some examples illustrating that there is a notion of temporal states in UT, where the model updates its states (memory) in each step based on the output of previous steps, and this chain of updates can also be viewed as steps in a multi-hop reasoning process.
3.2 SUBJECT-VERB AGREEMENT
Next, we consider the task of predicting number-agreement between subjects and verbs in English sentences (Linzen et al., 2016). This task acts as a proxy for measuring the ability of a model to capture hierarchical (dependency) structure in natural language sentences. We use the dataset provided by (Linzen et al., 2016) and follow their experimental protocol of solving the task using a language mod- eling training setup, i.e. a next word prediction objective, followed by calculating the ranking accuracy of the target verb at test time. We evaluated our model on subsets of the test data with different task dif- ï¬culty, measured in terms of agreement attractors â the number of intervening nouns with the opposite number from the subject (meant to confuse the model). For example, given the sentence The keys to the cabinet4, the objective during training is to predict the verb are (plural). At test time, we then evaluate the ranking accuracy of the agreement attractors: i.e. the goal is to rank are higher than is in this case.
Our results are summarized in Table 2. The best LSTM with attention from the literature achieves 99.18% on this task (Yogatama et al., 2018), outperforming a vanilla Transformer (Tran et al., 2018). UTs signiï¬cantly outperform standard Transformers, and achieve an average result comparable to the current state of the art (99.2%). However, we see that UTs (and particularly with dynamic halting) per- form progressively better than all other models as the number of attractors increases (see the last row, â).
3.3 LAMBADA LANGUAGE MODELING
The LAMBADA task (Paperno et al., 2016) is a language modeling task consisting of predicting a missing target word given a broader context of 4-5 preceding sentences. The dataset was speciï¬cally designed so that humans are able to accurately predict the target word when shown the full context, but not when only shown the target sentence in which it appears. It therefore goes beyond language
4Cabinet (singular) is an agreement attractor in this case.
6
Published as a conference paper at ICLR 2019
Model Number of attractors 0 1 2 3 4 5 Total Previous best results (Yogatama et al., 2018): Best Stack-RNN Best LSTM Best Attention 0.994 0.993 0.994 0.979 0.972 0.977 0.965 0.950 0.959 0.935 0.922 0.929 0.916 0.900 0.907 0.880 0.842 0.842 0.992 0.991 0.992 Our results: Transformer Universal Transformer UT w/ ACT 0.973 0.993 0.994 0.941 0.971 0.969 0.932 0.969 0.967 0.917 0.940 0.944 0.901 0.921 0.932 0.883 0.892 0.907 0.962 0.992 0.992 â (UT w/ ACT - Best) 0 -0.008 0.002 0.009 0.016 0.027 -
Table 2: Accuracy on the subject-verb agreement number prediction task (higher is better).
Model LM Perplexity & (Accuracy) RC Accuracy control dev test control dev Neural Cache (Grave et al., 2016) Dhingra et al. Dhingra et al. (2018) 129 - 139 - - - - - - - Transformer LSTM UT base, 6 steps (ï¬xed) UT w/ dynamic halting 142 (0.19) 138 (0.23) 131 (0.32) 130 (0.32) 5122 (0.0) 4966 (0.0) 279 (0.18) 134 (0.22) 7321 (0.0) 5174 (0.0) 319 (0.17) 142 (0.19) 0.4102 0.1103 0.4801 0.4603 0.4401 0.2316 0.5422 0.5831 UT base, 8 steps (ï¬xed) UT base, 9 steps (ï¬xed) 129(0.32) 129(0.33) 192 (0.21) 214 (0.21) 202 (0.18) 239 (0.17) - - - - test - 0.5569 0.3988 0.2007 0.5216 0.5625 - -
Table 3: LAMBADA language modeling (LM) perplexity (lower better) with accuracy in parentheses (higher better), and Reading Comprehension (RC) accuracy results (higher better). â-â indicates no reported results in that setting.
modeling, and tests the ability of a model to incorporate broader discourse and longer term context when predicting the target word.
The task is evaluated in two settings: as language modeling (the standard setup) and as reading comprehension. In the former (more challenging) case, a model is simply trained for next-word prediction on the training data, and evaluated on the target words at test time (i.e. the model is trained to predict all words, not speciï¬cally challenging target words). In the latter setting, introduced by Chu et al. Chu et al. (2017), the target sentence (minus the last word) is used as query for selecting the target word from the context sentences. Note that the target word appears in the context 81% of the time, making this setup much simpler. However the task is impossible in the remaining 19% of the cases.
The results are shown in Table 3. Universal Transformer achieves state-of-the-art results in both the language modeling and reading comprehension setup, outperforming both LSTMs and vanilla Transformers. Note that the control set was constructed similar to the LAMBADA development and test sets, but without ï¬ltering them in any way, so achieving good results on this set shows a modelâs strength in standard language modeling.
Our best ï¬xed UT results used 6 steps. However, the average number of steps that the best UT with dynamic halting took on the test data over all positions and examples was 8.2±2.1. In order to see if the dynamic model did better simply because it took more steps, we trained two ï¬xed UT models with 8 and 9 steps respectively (see last two rows). Interestingly, these two models achieve better results compared to the model with 6 steps, but do not outperform the UT with dynamic halting. This leads us to believe that dynamic halting may act as a useful regularizer for the model via incentivizing a smaller numbers of steps for some of the input symbols, while allowing more computation for others.
3.4 ALGORITHMIC TASKS
We trained UTs on three algorithmic tasks, namely Copy, Reverse, and (integer) Addition, all on strings composed of decimal symbols (â0â-â9â). In all the experiments, we train the models on sequences of length 40 and evaluated on sequences of length 400 (Kaiser & Sutskever, 2016). We
7
Published as a conference paper at ICLR 2019
Model Copy Reverse Addition char-acc seq-acc char-acc seq-acc char-acc seq-acc LSTM Transformer Universal Transformer Neural GPUâ 0.45 0.53 0.91 1.0 0.09 0.03 0.35 1.0 0.66 0.13 0.96 1.0 0.11 0.06 0.46 1.0 0.08 0.07 0.34 1.0 0.0 0.0 0.02 1.0
Table 4: Accuracy (higher better) on the algorithmic tasks. âNote that the Neural GPU was trained with a special curriculum to obtain the perfect result, while other models are trained without any curriculum.
Copy Double Reverse Model char-acc seq-acc char-acc seq-acc char-acc seq-acc LSTM Transformer Universal Transformer 0.78 0.98 1.0 0.11 0.63 1.0 0.51 0.94 1.0 0.047 0.55 1.0 0.91 0.81 1.0 0.32 0.26 1.0
Table 5: Character-level (char-acc) and sequence-level accuracy (seq-acc) results on the Memorization LTE tasks, with maximum length of 55.
Program Control Addition Model char-acc seq-acc char-acc seq-acc char-acc seq-acc LSTM Transformer Universal Transformer 0.53 0.71 0.89 0.12 0.29 0.63 0.68 0.93 1.0 0.21 0.66 1.0 0.83 1.0 1.0 0.11 1.0 1.0
Table 6: Character-level (char-acc) and sequence-level accuracy (seq-acc) results on the Program Evaluation LTE tasks with maximum nesting of 2 and length of 5.
train UTs using positions starting with randomized offsets to further encourage the model to learn position-relative transformations. Results are shown in Table 4. The UT outperforms both LSTM and vanilla Transformer by a wide margin on all three tasks. The Neural GPU reports perfect results on this task (Kaiser & Sutskever, 2016), however we note that this result required a special curriculum-based training protocol which was not used for other models.
3.5 LEARNING TO EXECUTE (LTE)
As another class of sequence-to-sequence learning problems, we also evaluate UTs on tasks indicating the ability of a model to learn to execute computer programs, as proposed in (Zaremba & Sutskever, 2015). These tasks include program evaluation tasks (program, control, and addition), and memorization tasks (copy, double, and reverse).
We use the mix-strategy discussed in (Zaremba & Sutskever, 2015) to generate the datasets. Unlike (Zaremba & Sutskever, 2015), we do not use any curriculum learning strategy during training and we make no use of target sequences at test time. Tables 5 and 6 present the performance of an LSTM model, Transformer, and Universal Transformer on the program evaluation and memorization tasks, respectively. UT achieves perfect scores in all the memorization tasks and also outperforms both LSTMs and Transformers in all program evaluation tasks by a wide margin.
3.6 MACHINE TRANSLATION
We trained a UT on the WMT 2014 English-German translation task using the same setup as reported in (Vaswani et al., 2017) in order to evaluate its performance on a large-scale sequence-to-sequence task. Results are summarized in Table 7. The UT with a fully-connected recurrent transition function (instead of separable convolution) and without ACT improves by 0.9 BLEU over a Transformer and 0.5 BLEU over a Weighted Transformer with approximately the same number of parameters (Ahmed et al., 2017).
8
Published as a conference paper at ICLR 2019
Model BLEU Universal Transformer small Transformer base (Vaswani et al., 2017) Weighted Transformer base (Ahmed et al., 2017) Universal Transformer base 26.8 28.0 28.4 28.9
Table 7: Machine translation results on the WMT14 En-De translation task trained on 8xP100 GPUs in comparable training setups. All base results have the same number of parameters.
# 4 DISCUSSION
When running for a ï¬xed number of steps, the Universal Transformer is equivalent to a multi-layer Transformer with tied parameters across all its layers. This is partly similar to the Recursive Transformer, which ties the weights of its self-attention layers across depth (Gulcehre et al., 2018)5. However, as the per-symbol recurrent transition functions can be applied any number of times, another and possibly more informative way of characterizing the UT is as a block of parallel RNNs (one for each symbol, with shared parameters) evolving per-symbol hidden states concurrently, generated at each step by attending to the sequence of hidden states at the previous step. In this way, it is related to architectures such as the Neural GPU (Kaiser & Sutskever, 2016) and the Neural Turing Machine (Graves et al., 2014). UTs thereby retain the attractive computational efï¬ciency of the original feed- forward Transformer model, but with the added recurrent inductive bias of RNNs. Furthermore, using a dynamic halting mechanism, UTs can choose the number of processing steps based on the input data.
The connection between the Universal Transformer and other sequence models is apparent from the architecture: if we limited the recurrent steps to one, it would be a Transformer. But it is more interesting to consider the relationship between the Universal Transformer and RNNs and other networks where recurrence happens over the time dimension. Superï¬cially these models may seem closely related since they are recurrent as well. But there is a crucial difference: time-recurrent models like RNNs cannot access memory in the recurrent steps. This makes them computationally more similar to automata, since the only memory available in the recurrent part is a ï¬xed-size state vector. UTs on the other hand can attend to the whole previous layer, allowing it to access memory in the recurrent step.
Given sufï¬cient memory the Universal Transformer is computationally universal â i.e. it belongs to the class of models that can be used to simulate any Turing machine, thereby addressing a shortcoming of the standard Transformer model 6. In addition to being theoretically appealing, our results show that this added expressivity also leads to improved accuracy on several challenging sequence modeling tasks. This closes the gap between practical sequence models competitive on large-scale tasks such as machine translation, and computationally universal models such as the Neural Turing Machine or the Neural GPU (Graves et al., 2014; Kaiser & Sutskever, 2016), which can be trained using gradient descent to perform algorithmic tasks.
To show this, we can reduce a Neural GPU to a Universal Transformer. Ignoring the decoder and pa- rameterizing the self-attention module, i.e. self-attention with the residual connection, to be the identity function, we assume the transition function to be a convolution. If we now set the total number of recurrent steps T to be equal to the input length, we obtain exactly a Neural GPU. Note that the last step is where the Universal Transformer crucially differs from the vanilla Transformer whose depth cannot scale dynamically with the size of the input. A similar relationship exists between the Universal Trans- former and the Neural Turing Machine, whose single read/write operations per step can be expressed by the global, parallel representation revisions of the Universal Transformer. In contrast to these models, however, which only perform well on algorithmic tasks, the Universal Transformer also achieves competitive results on realistic natural language tasks such as LAMBADA and machine translation.
Another related model architecture is that of end-to-end Memory Networks (Sukhbaatar et al., 2015). In contrast to end-to-end memory networks, however, the Universal Transformer uses memory corresponding to states aligned to individual positions of its inputs or outputs. Furthermore, the Universal Transformer follows the encoder-decoder conï¬guration and achieves competitive performance in large-scale sequence-to-sequence tasks.
5Note that in UT both the self-attention and transition weights are tied across layers. 6Appendix B illustrates how UT is computationally more powerful than the standard Transformer.
9
Published as a conference paper at ICLR 2019
# 5 CONCLUSION
This paper introduces the Universal Transformer, a generalization of the Transformer model that extends its theoretical capabilities and produces state-of-the-art results on a wide range of challenging sequence modeling tasks, such as language understanding but also a variety of algorithmic tasks, thereby addressing a key shortcoming of the standard Transformer. The Universal Transformer combines the following key properties into one model:
Weight sharing: Following intuitions behind weight sharing found in CNNs and RNNs, we extend the Transformer with a simple form of weight sharing that strikes an effective balance between inductive bias and model expressivity, which we show extensively on both small and large-scale experiments.
Conditional computation: In our goal to build a computationally universal machine, we equipped the Universal Transformer with the ability to halt or continue computation through a recently introduced mechanism, which shows stronger results compared to the ï¬xed-depth Universal Transformer.
We are enthusiastic about the recent developments on parallel-in-time sequence models. By adding computational capacity and recurrence in processing depth, we hope that further improvements beyond the basic Universal Transformer presented here will help us build learning algorithms that are both more powerful, data efï¬cient, and generalize beyond the current state-of-the-art.
The code used to train and evaluate Universal Transformers is available at https: //github.com/tensorflow/tensor2tensor (Vaswani et al., 2018).
Acknowledgements We are grateful to Ashish Vaswani, Douglas Eck, and David Dohan for their fruitful comments and inspiration.
# REFERENCES
Karim Ahmed, Nitish Shirish Keskar, and Richard Socher. Weighted transformer network for machine translation. arXiv preprint arXiv:1711.02132, 2017.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. URL http://arxiv.org/abs/1607.06450.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014. URL http://arxiv.org/abs/1409.0473.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014. URL http://arxiv.org/abs/1406.1078.
Francois Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2016.
Zewei Chu, Hai Wang, Kevin Gimpel, and David McAllester. Broad context language modeling as reading In Proceedings of the 15th Conference of the European Chapter of the Association for comprehension. Computational Linguistics: Volume 2, Short Papers, volume 2, pp. 52â57, 2017.
Bhuwan Dhingra, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Linguistic knowledge as memory for recurrent neural networks. arXiv preprint arXiv:1703.02620, 2017.
Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Neural models for reasoning over multiple mentions using coreference. arXiv preprint arXiv:1804.05922, 2018.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. CoRR, abs/1705.03122, 2017. URL http://arxiv.org/abs/1705.03122.
Edouard Grave, Armand Joulin, and Nicolas Usunier. Improving neural language models with a continuous cache. arXiv preprint arXiv:1612.04426, 2016.
Alex Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013. URL http://arxiv.org/abs/1308.0850.
Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016.
10
Published as a conference paper at ICLR 2019
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014. URL http://arxiv.org/abs/1410.5401.
Caglar Gulcehre, Misha Denil, Mateusz Malinowski, Ali Razavi, Razvan Pascanu, Karl Moritz Hermann, Peter Battaglia, Victor Bapst, David Raposo, Adam Santoro, et al. Hyperbolic attention networks. arXiv preprint arXiv:1805.09786, 2018.
Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969, 2016.
Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient ï¬ow in recurrent nets: the difï¬culty of learning long-term dependencies. A Field Guide to Dynamical Recurrent Neural Networks, 2003.
A. Joulin and T. Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, (NIPS), 2015.
Åukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In International Conference on Learning Representations (ICLR), 2016. URL https://arxiv.org/abs/1511.08228.
Åukasz Kaiser, Aidan N. Gomez, and Francois Chollet. Depthwise separable convolutions for neural machine translation. CoRR, abs/1706.03059, 2017. URL http://arxiv.org/abs/1706.03059.
Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning, pp. 1378â1387, 2016.
Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017.
Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. Assessing the ability of lstms to learn syntax-sensitive dependencies. Transactions of the Association of Computational Linguistics, 4(1):521â535, 2016.
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. The lambada dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pp. 1525â1534, 2016.
Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. tion model. //arxiv.org/pdf/1606.01933.pdf. In Empirical Methods in Natural Language Processing, 2016. A decomposable atten- URL https:
Jack Rae, Jonathan J Hunt, Ivo Danihelka, Timothy Harley, Andrew W Senior, Gregory Wayne, Alex Graves, and Tim Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. In Advances in Neural Information Processing Systems, pp. 3621â3629, 2016.
Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. Query-reduction networks for question answering. arXiv preprint arXiv:1606.04582, 2016.
Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overï¬tting. Journal of Machine Learning Research, 15(1): 1929â1958, 2014.
End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems 28, pp. 2440â2448. Curran Associates, Inc., 2015. URL http://papers.nips.cc/paper/5846-end-to-end-memory-networks.pdf.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural net- URL In Advances in Neural Information Processing Systems, pp. 3104â3112, 2014. works. http://arxiv.org/abs/1409.3215.
Ke Tran, Arianna Bisazza, and Christof Monz. The importance of being recurrent for modeling hierarchical structure. In Proceedings of NAACLâ18, 2018.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, 2017. URL http://arxiv.org/abs/1706.03762.
Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Åukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. Tensor2tensor for neural machine translation. CoRR, abs/1803.07416, 2018.
11
Published as a conference paper at ICLR 2019
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.
Dani Yogatama, Yishu Miao, Gabor Melis, Wang Ling, Adhiguna Kuncoro, Chris Dyer, and Phil Blunsom. Memory architectures in recurrent neural network language models. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=SkFqf0lAZ.
Wojciech Zaremba and Ilya Sutskever. Learning to execute. CoRR, abs/1410.4615, 2015. URL http://arxiv.org/abs/1410.4615.
12
Published as a conference paper at ICLR 2019
APPENDIX A DETAILED SCHEMA OF THE UNIVERSAL TRANSFORMER
# Output Probabilities
After T steps Recurrent Decoder Layer Normalization Block Dropout Transition Function Recurrent -_ a Encoder Layer Normalization Layer Normalization | Block Dropout Dropout a After T steps Transition Function Multihead Attention Layer Normalization Layer Normalization a 3 3 Dropout 4 Dropout g s s Multihead Self-Attention 3 Multihead Self-Attention 3 a ne rs Ce Timestep embedding + Timestep embedding Position embedding Position embedding Embed Input Symbols Embed Target Symbols Input Sequence Target Sequence (right-shifted by one)
Figure 4: The Universal Transformer with position and step embeddings as well as dropout and layer normalization.
# APPENDIX B ON THE COMPUTATIONAL POWER OF UT VS TRANSFORMER
With respect to their computational power, the key difference between the Transformer and the Universal Transformer lies in the number of sequential steps of computation (i.e. in depth). While a standard Transformer executes a total number of operations that scales with the input size, the number of sequential operations is constant, independent of the input size and determined solely by the number of layers. Assuming ï¬nite precision, this property implies that the standard Transformer cannot be computationally universal. When choosing a number of steps as a function of the input length, however, the Universal Transformer does not suffer from this limitation. Note that this holds independently of whether or not adaptive computation time is employed but does assume a non-constant, even if possibly deterministic, number of steps. Varying the number of steps dynamically after training is enabled by sharing weights across sequential computation steps in the Universal Transformer.
An intuitive example are functions whose execution requires the sequential processing of each input element. In this case, for any given choice of depth T , one can construct an input sequence of length N > T that cannot be processed correctly by a standard Transformer. With an appropriate, input-length dependent choice of sequential steps, however, a Universal Transformer, RNNs or Neural GPUs can execute such a function.
Ee as = 2 a la â Oe oli 3 Li 2 a t é = i = ] ( âN" input symbols
13
# a
# 4 g
Published as a conference paper at ICLR 2019
# APPENDIX C UT WITH DYNAMIC HALTING
We implement the dynamic halting based on ACT (Graves, 2016) as follows in TensorFlow. In each step of the UT with dynamic halting, we are given the halting probabilities, remainders, number of updates up to that point, and the previous state (all initialized as zeros), as well as a scalar threshold between 0 and 1 (a hyper-parameter). We then compute the new state for each position and calculate the new per-position halting probabilities based on the state for each position. The UT then decides to halt for some positions that crossed the threshold, and updates the state of other positions until the model halts for all positions or reaches a predeï¬ned maximum number of steps:
1 # Whileâl o o p s t o p s when t h i s p r e d i c a t e 2 # i . e . 3 d e f 4 i s FALSE a l l ( ( p r o b a b i l i t y < t h r e s h o l d ) & ( c o u n t e r < m a x _ s t e p s ) ) a r e s h o u l d _ c o n t i n u e ( u0 , u1 , h a l t i n g _ p r o b a b i l i t y , u2 , n _ u p d a t e s , u3 ) : r e t u r n t f . r e d u c e _ a n y ( f a l s e 5 t f . l o g i c a l _ a n d ( t f . l e s s ( h a l t i n g _ p r o b a b i l i t y , t f . l e s s ( n _ u p d a t e s , m a x _ s t e p s ) ) ) t h r e s h o l d ) , 6 7 8 # Do w h i l e l o o p i t e r a t i o n s 9 u n t i l p r e d i c a t e a b o v e i s f a l s e ( _ , _ , _ , r e m a i n d e r , n _ u p d a t e s , n e w _ s t a t e ) = t f . w h i l e _ l o o p ( 10 11 s h o u l d _ c o n t i n u e , u t _ w i t h _ d y n a m i c _ h a l t i n g , s t e p , h a l t i n g _ p r o b a b i l i t y , ( s t a t e , r e m a i n d e r s , n _ u p d a t e s , p r e v i o u s _ s t a t e ) )
Listing 1: UT with dynamic halting.
The following shows the computations in each step:
s t e p , h a l t i n g _ p r o b a b i l i t y , r e m a i n d e r s , n _ u p d a t e s , p r e v i o u s _ s t a t e ) :
1 d e f u t _ w i t h _ d y n a m i c _ h a l t i n g ( s t a t e , 2
# C a l c u l a t e p = c o mm o n _ l a ye r s . d e n s e ( s t a t e , 1 , t h e p r o b a b i l i t i e s b a s e d on t h e s t a t e a c t i v a t i o n = t f . nn . s i g m o i d , u s e _ b i a s = T r u e ) # Mask f o r s t i l l _ r u n n i n g = t f . c a s t ( i n p u t s which h a v e n o t h a l t e d y e t t f . l e s s ( h a l t i n g _ p r o b a b i l i t y , 1 . 0 ) , t h i s i n p u t s which h a l t e d a t # Mask o f n e w _ h a l t e d = t f . c a s t ( t f . f l o a t 3 2 ) s t e p t f . g r e a t e r ( h a l t i n g _ p r o b a b i l i t y + p â s t i l l _ r u n n i n g , t h r e s h o l d ) , t f . f l o a t 3 2 ) â s t i l l _ r u n n i n g # Mask o f s t i l l _ r u n n i n g = t f . c a s t ( i n p u t s which h a v e n â t h a l t e d , and d i d n â t h a l t t h i s s t e p t f . l e s s _ e q u a l ( h a l t i n g _ p r o b a b i l i t y + p â s t i l l _ r u n n i n g , t h r e s h o l d ) , t f . f l o a t 3 2 ) â s t i l l _ r u n n i n g # Add t h e h a l t i n g p r o b a b i l i t y f o r # p r o b a b i l i t i e s h a l t i n g _ p r o b a b i l i t y += p â s t i l l _ r u n n i n g # Compute r e m a i n d e r s t h e r e m a i n d e r s += n e w _ h a l t e d â ( 1 â h a l t i n g _ p r o b a b i l i t y ) i n p u t s which h a l t e d a t # Add t h e r e m a i n d e r s h a l t i n g _ p r o b a b i l i t y += n e w _ h a l t e d â r e m a i n d e r s # I n c r e m e n t n _ u p d a t e s i n p u t s which a r e f o r n _ u p d a t e s += s t i l l _ r u n n i n g + n e w _ h a l t e d # Compute t h e w e i g h t # # # u p d a t e _ w e i g h t s = t f . e x p a n d _ d i m s ( p â s t i l l _ r u n n i n g + s t e p t o t h e h a l t i n g t h i s f o r t h o s e i n p u t s which h a v e n â t h a l t e d y e t t h i s f o r s t e p i n p u t s which h a l t e d a t t h i s s t e p t o t h o s e s t i l l r u n n i n g a l l t o be a p p l i e d t o t h e new s t a t e and o u t p u t : i n p u t h a s a l r e a d y h a l t e d , i n p u t h a s n â t h a l t e d y e t , 0 when t h e p when t h e t h e r e m a i n d e r s when i t h a l t e d t h i s s t e p . n e w _ h a l t e d â r e m a i n d e r s , â1) # Apply t r a n s f o r m a t i o n t o t h e t r a n s f o r m e d _ s t a t e = t r a n s i t i o n _ f u n c t i o n ( s e l f _ a t t e n t i o n ( s t a t e ) ) # I n t e r p o l a t e t r a n s f o r m e d and p r e v i o u s n e w _ s t a t e = ( ( t r a n s f o r m e d _ s t a t e â u p d a t e _ w e i g h t s ) + s t a t e s t a t e s f o r nonâh a l t e d i n p u t s ( p r e v i o u s _ s t a t e â ( 1 â u p d a t e _ w e i g h t s ) ) ) s t e p += 1 r e t u r n ( t r a n s f o r m e d _ s t a t e , s t e p , h a l t i n g _ p r o b a b i l i t y ,
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
r e m a i n d e r s , n _ u p d a t e s , n e w _ s t a t e ) Listing 2: Computations in each step of the UT with dynamic halting.
14
Published as a conference paper at ICLR 2019
# APPENDIX D DESCRIPTION OF SOME OF THE TASKS/DATASETS
Here, we provide some additional details on the bAbI, subject-verb agreement, LAMBADA language modeling, and learning to execute (LTE) tasks.
D.1 BABI QUESTION-ANSWERING
The bAbi question answering dataset (Weston et al., 2015) consists of 20 different synthetic tasks7. The aim is that each task tests a unique aspect of language understanding and reasoning, including the ability of: reasoning from supporting facts in a story, answering true/false type questions, counting, understanding negation and indeï¬nite knowledge, understanding coreferences, time reasoning, positional and size reasoning, path-ï¬nding, and understanding motivations (to see examples for each of these tasks, please refer to Table 1 in (Weston et al., 2015)).
There are two versions of the dataset, one with 1k training examples and the other with 10k examples. It is important for a model to be data-efï¬cient to achieve good results using only the 1k training examples. Moreover, the original idea is that a single model should be evaluated across all the tasks (not tuning per task), which is the train joint setup in Table 1, and the tables presented in Appendix E.
D.2 SUBJECT-VERB AGREEMENT
Subject-verb agreement is the task of predicting number agreement between subject and verb in English sentences. Succeeding in this task is a strong indicator that a model can learn to approximate syntactic structure and therefore it was proposed by Linzen et al. (2016) as proxy for assessing the ability of different models to capture hierarchical structure in natural language.
Two experimental setups were proposed by Linzen et al. (2016) for training a model on this task: 1) training with a language modeling objective, i.e., next word prediction, and 2) as binary classiï¬cation, i.e. predicting the number of the verb given the sentence. In this paper, we use the language modeling objective, meaning that we provide the model with an implicit supervision and evaluate based on the ranking accuracy of the correct form of the verb compared to the incorrect form of the verb.
In this task, in order to have different levels of difï¬culty, âagreement attractorsâ are used, i.e. one or more intervening nouns with the opposite number from the subject with the goal of confusing the model. In this case, the model needs to correctly identify the head of the syntactic subject that corresponds to a given verb and ignore the intervening attractors in order to predict the correct form of that verb. Here are some examples for this task in which subjects and the corresponding verbs are in boldface and agreement attractors are underlined:
No attractor: One attractor: Two attractors: Three attractors: The boy smiles. The number of men is not clear. The ratio of men to women is not clear. The ratio of men to women and children is not clear.
D.3 LAMBADA LANGUAGE MODELING
The LAMBADA task (Paperno et al., 2016) is a broad context language modeling task. In this task, given a narrative passage, the goal is to predict the last word (target word) of the last sentence (target sentence) in the passage. These passages are speciï¬cally selected in a way that human subjects are easily able to guess their last word if they are exposed to a long passage, but not if they only see the target sentence preceding the target word8. Here is a sample from the dataset:
Context:
âYes, I thought I was going to lose the baby.â âI was scared too,â he stated, sincerity flooding his eyes. âYou were?â âYes, of course. Why do you even ask?â âThis baby wasnât exactly planned for.â Target sentence: âDo you honestly think that I would want you to have a ________?â Target word: miscarriage
The LAMBADA task consists in predicting the target word given the whole passage (i.e., the context plus the target sentence). A âcontrol setâ is also provided which was constructed by randomly sampling passages of the same shape and size as the ones used to build LAMBADA, but without ï¬ltering them in any way. The control
7https://research.fb.com/downloads/babi 8http://clic.cimec.unitn.it/lambada/appendix_onefile.pdf
15
Published as a conference paper at ICLR 2019
set is used to evaluate the models at standard language modeling before testing on the LAMBADA task, and therefore to ensure that low performance on the latter cannot be attributed simply to poor language modeling.
The task is evaluated in two settings: as language modeling (the standard setup) and as reading comprehension. In the former (more challenging) case, a model is simply trained for the next word prediction on the training data, and evaluated on the target words at test time (i.e. the model is trained to predict all words, not speciï¬cally challenging target words). In this paper, we report the results of the Universal Transformer in both setups.
D.4 LEARNING TO EXECUTE (LTE)
LTE is a set of tasks indicating the ability of a model to learn to execute computer programs and was proposed by Zaremba & Sutskever (2015). These tasks include two subsets: 1) program evaluation tasks (program, control, and addition) that are designed to assess the ability of models for understanding numerical operations, if-statements, variable assignments, the compositionality of operations, and more, as well as 2) memorization tasks (copy, double, and reverse).
The difï¬culty of the program evaluation tasks is parameterized by their length and nesting. The length parameter is the number of digits in the integers that appear in the programs (so the integers are chosen uniformly from [1, length]), and the nesting parameter is the number of times we are allowed to combine the operations with each other. Higher values of nesting yield programs with deeper parse trees. For instance, here is a program that is generated with length = 4 and nesting = 3.
# Input:
j=8584 for x in range(8): j+=920 b=(1500+j) print((b+7567)) Target: 25011
16
Published as a conference paper at ICLR 2019
# APPENDIX E BABI DETAILED RESULTS
Task id 10K train single train joint train single 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 0.0 0.0 0.4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.4 0.6 0.0 2.8 0.0 0.0 0.0 1.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.2 0.2 0.0 3.1 0.0 0.0 0.0 3.7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 5.8 32.0 0.0 47.1 2.4 avg err 0.21 0.29 4.55 failed 0 0 3 1K train joint 0.0 0.5 5.4 0.0 0.5 0.5 3.2 1.6 0.2 0.4 0.1 0.0 0.6 3.8 5.9 15.4 42.9 4.1 68.2 2.4 7.78 5
Average (±var) over all seeds (for 10 runs)
Task id 10K train single train joint train single 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 0.0 ±0.0 0.2 ±0.4 1.8 ±1.8 0.1 ±0.1 0.2 ±0.3 0.1 ±0.2 0.3 ±0.5 0.3 ±0.2 0.0 ±0.0 0.1 ±0.2 0.0 ±0.0 0.2 ±0.1 0.2 ±0.5 1.8 ±2.6 2.1 ±3.4 1.9 ±2.2 1.6 ±0.8 0.3 ±0.4 3.4 ±4.0 0.0 ±0.0 0.0 ±0.0 1.7 ±2.6 4.6 ±7.3 0.2 ±0.1 0.8 ±0.5 0.1 ±0.2 1.1 ±1.5 0.5 ±1.1 0.0 ±0.0 0.5 ±0.4 0.1 ±0.1 0.4 ±0.4 0.3 ±0.4 1.3 ±1.6 1.6 ±2.8 0.9 ±1.3 1.4 ±3.4 0.7 ±1.4 6.1 ±7.3 0.0 ±0.0 0.2 ±0.3 3.2 ±4.1 9.1 ±12.7 0.3 ±0.3 1.1 ±1.3 1.2 ±2.1 0.0 ±0.0 0.1 ±0.2 0.1 ±0.1 0.7 ±0.8 0.4 ±0.8 0.6 ±0.9 0.8 ±0.9 0.1 ±0.2 0.3 ±0.5 9.1 ±8.1 43.7 ±18.6 2.3 ±3.6 50.2 ±8.4 3.2 ±2.5 avg 0.73 ±0.89 1.12 ±1.62 6.34 ±3.32 1K train joint 0.1 ±0.2 4.3 ±11.6 14.3 ±18.1 0.4 ±0.6 4.3 ±5.6 0.8 ±0.4 4.1 ±2.9 3.9 ±4.2 0.3 ±0.3 1.3 ±1.6 0.3 ±0.9 0.3 ±0.4 1.1 ±0.9 4.7 ±5.2 10.3 ±8.6 34.1 ±22.8 51.1 ±12.9 12.8 ±9.0 73.1 ±23.9 2.6 ±2.8
17
Published as a conference paper at ICLR 2019
# APPENDIX F BABI ATTENTION VISUALIZATION
We present a visualization of the attention distributions on bAbI tasks for a couple of examples. The visualization of attention weights is over different time steps based on different heads over all the facts in the story and a question. Different color bars on the left side indicate attention weights based on different heads (4 heads in total).
# An example from tasks 1: (requiring one supportive fact to solve)
# Story:
John travelled to the hallway. Mary journeyed to the bathroom. Daniel went back to the bathroom. John moved to the bedroom
Question:
# Where is Mary?
Modelâs output:
bathroom
(a) Step 1 (b) Step 2 (c) Step 3
John travelled to the hallway . John travelled to the hallway . Mary journeyed to the bathroom . Mary journeyed to the bathroom . Daniel went back to the bathroom . Daniel went back to the bathroom . John moved to the bedroom . John moved to the bedroom . rs Where iMaRj?y Whereis Mary?
John travelled to the hallway . John travelled to the hallway . Mary journeyed to the bathroom . Mary journeyed to the bathroom . Daniel went back to the bathroom . Daniel went back to the bathroom . John moved to the bedroom . John moved to the bedroom . Where is Mary? Where is Mary?
John travelled to the hallway . John travelled to the hallway . Mary journeyed to the bathroom . Daniel went back to the bathroom . John moved to the bedroom . Where is Mary? Daniel went back to the bathroom . John moved to the bedroom . ; Where is Mary?
| John travelled to the hallway . Daniel went back to the bathroom . John moved to the bedroom . | Where is Mary? John travelled to the hallway . Mary journeyed to the bathroom . Daniel went back to the bathroom . John moved to the bedroom . Where is Mary?
(d) Step 4
Figure 5: Visualization of the attention distributions, when encoding the question: âWhere is Mary?â.
18
Published as a conference paper at ICLR 2019
# An example from tasks 2:
(requiring two supportive facts to solve)
# Story:
Sandra journeyed to the hallway. Mary went to the bathroom. Mary took the apple there. Mary dropped the apple.
Question:
Where is the apple?
Modelâs output:
# bathroom
(a) Step 1 (b) Step 2 (c) Step 3
Sandra journeyed to the hallway . Mary went to the bathroom . Mary went to the bathroom . Mary took the apple there . Mary took the apple there . Mary dropped the apple . Mary dropped the apple . Where is the apple ? Where is the apple ? Sandra journeyed to the hallway .
Si syed to the hallway . Sandra journeyed to the hallway . a : Mary went to the bathroom . Mary took the apple there . Mary took the apple there . Mary dropped the apple . Mary dropped the apple . Where is the apple ? Where is the apple ?
Sandra journeyed to the hallway . Sandra journeyed to the hallway . Mary welitgomfieppathroom . Mary went to the bathroom . Mary took the apple there . Mary took the apple there . Mary dropped the apple . Mary dropped the apple . Where is the apple ? Where is the apple ?
Sandra journeyed tothe hallway : Mary went to the bathroom . Mary took the apple there . Mary dropped the apple . Where is th Sandra journeyed to the hallway . Mary went to the bathroom . Mary took the apple there . Mary dropped the apple . Where is the apple ?
(d) Step 4
Figure 6: Visualization of the attention distributions, when encoding the question: âWhere is the apple?â.
19
Published as a conference paper at ICLR 2019
# An example from tasks 2:
(requiring two supportive facts to solve)
# Story:
John went to the hallway. John went back to the bathroom. John grabbed the milk there. Sandra went back to the office. Sandra journeyed to the kitchen. Sandra got the apple there. Sandra dropped the apple there. John dropped the milk.
Question:
Where is the milk?
# Modelâs output:
# bathroom
John went to the hallway . John went back to the bathroom . John grabbed the mi Sandra went back to the office . Sandra journeyed to the kitchen . Sandra got the apple there . Sandra dropped the apple there . 1 Sotn dropped the milk . Where is the milk? John went to the hallway . John went back to the bathroom . John grabbed the milk there . Sandra went back to the office . Sandra journeyed to the kitchen . Sandra got the apple there . Sandra dropped the apple there . John dropped the milk . Where is the milk?
(a) Step 1
John went to the hallway . John went back to the bathroom . John grabbed the milk there . Sandra went back to the office . Sandra journeyed to the kitchen . Sandra got the apple there . Sandra dropped the apple there . John dropped the milk . © Whereis gg $$ John went to the hallway . John went back to the bathroom . John grabbed the milk there . Sandra went back to the office . Sandra journeyed to the kitchen . Sandra got the apple there . Sandra dropped the apple there . John dropped the milk . Where is the milk?
(b) Step 2
John went to the hallway . John went back to the bathroom . John grabbed the milk there . Sandra went back to the office . Sandra journeyed to the kitchen . Sandra got the apple there . Sandra dropped the apple there . John dropped the milk . P| Where is John went to the hallway . John went back to the bathroom . John grabbed the milk there . Sandra went back to the office . Sandra journeyed to the kitchen . Sandra got the apple there . Sandra dropped the apple there . John dropped the milk . Where is the milk?
# (c) Step 3
John went to the hallway . a went ba SRRSEREIBathroom ° John grabbed the milk there . Sandra went back to the office . Sandra journeyed to the kitchen . Sandra got the apple there . Sandra dropped the apple there . John dropped the milk . Where is John went to the hallway . John went back to the bathroom . John grabbed the milk there . Sandra went back to the office . Sandra journeyed to the kitchen . Sandra got the apple there . Sandra dropped the apple there . John dropped the milk . Where is the milk?
(d) Step 4
Figure 7: Visualization of the attention distributions, when encoding the question: âWhere is the milk?â.
20
Published as a conference paper at ICLR 2019
# An example from tasks 3:
(requiring three supportive facts to solve)
Story: Mary got the milk.
John moved to the bedroom. Daniel journeyed to the office. John grabbed the apple there. John got the football. John journeyed to the garden. Mary left the milk. John left the football. Daniel moved to the garden. Daniel grabbed the football. Mary moved to the hallway. Mary went to the kitchen. John put down the apple there. John picked up the apple. Sandra moved to the hallway. Daniel left the football there. Daniel took the football. John travelled to the kitchen. Daniel dropped the football. John dropped the apple. John grabbed the apple. John went to the office. Sandra went back to the bedroom. Sandra took the milk. John journeyed to the bathroom. John travelled to the office. Sandra left the milk. Mary went to the bedroom. Mary moved to the office. John travelled to the hallway. Sandra moved to the garden. Mary moved to the kitchen. Daniel took the football. Mary journeyed to the bedroom. Mary grabbed the milk there. Mary discarded the milk. John went to the garden. John discarded the apple there.
Question:
Where was the apple before the bathroom?
Modelâs output:
# office
21
Published as a conference paper at ICLR 2019
Mary got the milk . John moved to the bedroom . Daniel journeyed to the office . John grabbed the apple there . John got the football . John journeyed to the garden . Mary left the milk . John left the football . Danie! moved to the garden . Daniel grabbed the football . Mary moved to the hallway . Mary went to the kitchen . John put down the apple there . John picked up the apple . Sandra moved to the hallway . Daniel left the football there . Daniel took the football . John travelled to the kitchen . Daniel dropped the football . John dropped the apple . John grabbed the apple . John went to the office . Sandra went back to the bedroom . Sandra took the milk . FY John journeyed to the bathroom . John travelled to the office . Sandra left the milk . Mary went to the bedroom . Mary moved to the office . John travelled to the hallway . Sandra moved to the garden . Mary moved to the kitchen . Daniel took the football . Mary journeyed to the bedroom . Mary grabbed the milk there . Mary discarded the milk . John went to the garden . John discarded the apple there . Where was the apple before the Mary got the milk . John moved to the bedroom . Daniel journeyed to the office . John grabbed the apple there . John got the football . John journeyed to the garden . Mary left the milk . John left the football . Daniel moved to the garden . Daniel grabbed the football . Mary moved to the hallway . Mary went to the kitchen . John put down the apple there . John picked up the apple. Sandra moved to the hallway . Daniel left the football there . Daniel took the football . John travelled to the kitchen. Daniel dropped the football . John dropped the apple . John grabbed the apple . John went to the office . Sandra went back to the bedroom . Sandra took the milk . John journeyed to the bathroom . John travelled to the office . Sandra left the milk . Mary went to the bedroom . Mary moved to the office . John travelled to the hallway . Sandra moved to the garden . Mary moved to the kitchen . Daniel took the football . Mary journeyed to the bedroom . Mary grabbed the milk there . Mary discarded the milk . John went to the garden . John discarded the apole there . Where was the apple before the bathroom?
(e) Step 1
Mary got the milk . John moved to the bedroom . Daniel journeyed to the office . John grabbed the apple there . John got the football . John journeyed to the garden . Mary left the milk . John left the football . Daniel moved to the garden . Daniel grabbed the football . Mary moved to the hallway . Mary went to the kitchen . John put down the apple there . John picked up the apple . Sandra moved to the hallway . Daniel left the football there . Daniel took the football . John travelled to the kitchen . Daniel dropped the football . John dropped the apple . John grabbed the apple . lohn went to the office . Sandra went back to the bedroom . Sandra took the milk . John journeyed to the bathroom . John travelled to the office . Sandra left the milk . Mary went to the bedroom . Mary moved to the office . John travelled to the hallway . Sandra moved to the garden . Mary moved to the kitchen . Daniel took the football . Mary journeyed to the bedroom . Mary grabbed the milk there . Mary discarded the milk . John went to the garden . John discarded the apple there . [Where was the apple before the Mary got the milk . John moved to the bedroom . Daniel journeyed to the office . John grabbed the apple there . John got the football . John journeyed to the garden . Mary left the milk . John left the football . Daniel moved to the garden . Daniel grabbed the football . Mary moved to the hallway . Mary went to the kitchen . John put down the apple there . John picked up the apple . Sandra moved to the hallway . Daniel left the football there . Daniel took the football . John travelled to the kitchen . Daniel dropped the football . John dropped the apple . John grabbed the apple . John went to the office . Sandra went back to the bedroom . Sandra took the milk . John journeyed to the bathroom . John travelled to the office . Sandra left the milk . Mary went to the bedroom. Mary moved to the office . John travelled to the hallway . Sandra moved to the garden . Mary moved to the kitchen . Daniel took the football . Mary journeyed to the bedroom . Mary grabbed the milk there . Mary discarded the milk . John went to the garden . John discarded the apple there . Where was the apple before the bathroom?
(f) Step 2
22
Published as a conference paper at ICLR 2019
Mary got the milk . Jonn moved to the bedroom . Daniel journeyed to the office . John grabbed the apple there . John got the football . John journeyed to the garden . Mary left the milk . John left the football . Daniel moved to the garden . Daniel grabbed the football . Mary moved to the hallway . Mary went to the kitchen . John put down the apple there . John picked up the apple . Sandra moved to the hallway . Daniel left the football there . Daniel took the football . John travelled to the kitchen . Daniel dropped the football . John dropped the apple . John grabbed the apole . MMW onn went to the office . Sandra went back to the bedroom. Sandra took the milk . John journeyed to the John travelled to the office . Sandra left the milk . Mary went to the bedroom. Mary moved to the office . John travelled to the hallway . Sandra moved to the garden. Mary moved to the kitchen . Daniel took the football . Mary journeyed to the bedroom. Mary grabbed the milk there . Mary discarded the milk . John went to the garden. John discarded the apple there . Where was the apple before the bathroom? Mary got the milk. John moved to the bedroom . Daniel journeyed to the office . John grabbed the apple there . John got the football . John journeyed to the garden . Mary left the milk . John left the football . Daniel moved to the garden . Daniel grabbed the football . Mary moved to the hallway . Mary went to the kitchen . John put down the apple there . John picked up the apple . Sandra moved to the hallway . Daniel left the football there . Daniel took the football . John travelled to the kitchen . Daniel dropped the football . John dropped the apple . John grabbed the apple . John went to the office . Sandra went back to the bedroom . Sandra took the milk. John journeyed to the bathroom . John travelled to the office . Sandra left the milk . Mary went to the bedroom. Mary moved to the office . John travelled to the hallway . Sandra moved to the garden . Mary moved to the kitchen . Daniel took the football . Mary journeyed to the bedroom . Mary grabbed the milk there . Mary discarded the milk . John went to the garden . John discarded the apple there . Where was the apple before the bathroom?
(g) Step 3
Mary got the milk . John moved to the bedroom . Daniel journeyed to the office . John grabbed the apple there . John got the football . John journeyed to the garden . Mary left the milk . John left the football . Daniel moved to the garden . Daniel grabbed the football . Mary moved to the hallway . Mary went to the kitchen . John put down the apple there . John picked up the apple . Sandra moved to the hallway . Daniel left the football there . Daniel took the football . John travelled to the kitchen . Daniel dropped the football . John dropped the apple . John grabbed the apple . EEE ohn went to the office . Sandra went back to the bedroom . Sandra took the milk . John journeyed to the bathroom John travelled to the office . \ Sandra left the milk . Mary went to the bedroom . Mary moved to the office . John travelled to the hallway . Sandra moved to the garden . Mary moved to the kitchen . Daniel took the football . Mary journeyed to the bedroom . Mary grabbed the milk there . Mary discarded the milk . John went to the garden . John discarded the apple there . Where was the apple before the bathroom? Mary got the milk . John moved to the bedroom . Daniel journeyed to the office . John grabbed the apple there . John got the football . John journeyed to the garden . Mary left the milk . John left the football . Daniel moved to the garden . Daniel grabbed the football . Mary moved to the hallway . Mary went to the kitchen . John put down the apple there . John picked up the apple. Sandra moved to the hallway . Daniel left the football there . Daniel took the football . John travelled to the kitchen. Daniel dropped the football . John dropped the apple . John grabbed the apple . John went to the office . Sandra went back to the bedroom . Sandra took the milk . John journeyed to the bathroom . John travelled to the office . Sandra left the milk . Mary went to the bedroom . Mary moved to the office . John travelled to the hallway . Sandra moved to the garden . Mary moved to the kitchen . Daniel took the football . Mary journeyed to the bedroom . Mary grabbed the milk there . Mary discarded the milk . John went to the garden . John discarded the apole there . Where was the apple before the bathroom?
(h) Step 4
Figure 7: Visualization of the attention distributions, when encoding the question: âWhere was the apple before the bathroom?â.
23 | {
"id": "1612.03969"
} |
1807.01281 | Human-level performance in first-person multiplayer games with population-based deep reinforcement learning | Recent progress in artificial intelligence through reinforcement learning
(RL) has shown great success on increasingly complex single-agent environments
and two-player turn-based games. However, the real-world contains multiple
agents, each learning and acting independently to cooperate and compete with
other agents, and environments reflecting this degree of complexity remain an
open challenge. In this work, we demonstrate for the first time that an agent
can achieve human-level in a popular 3D multiplayer first-person video game,
Quake III Arena Capture the Flag, using only pixels and game points as input.
These results were achieved by a novel two-tier optimisation process in which a
population of independent RL agents are trained concurrently from thousands of
parallel matches with agents playing in teams together and against each other
on randomly generated environments. Each agent in the population learns its own
internal reward signal to complement the sparse delayed reward from winning,
and selects actions using a novel temporally hierarchical representation that
enables the agent to reason at multiple timescales. During game-play, these
agents display human-like behaviours such as navigating, following, and
defending based on a rich learned representation that is shown to encode
high-level game knowledge. In an extensive tournament-style evaluation the
trained agents exceeded the win-rate of strong human players both as teammates
and opponents, and proved far stronger than existing state-of-the-art agents.
These results demonstrate a significant jump in the capabilities of artificial
agents, bringing us closer to the goal of human-level intelligence. | http://arxiv.org/pdf/1807.01281 | Max Jaderberg, Wojciech M. Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castaneda, Charles Beattie, Neil C. Rabinowitz, Ari S. Morcos, Avraham Ruderman, Nicolas Sonnerat, Tim Green, Louise Deason, Joel Z. Leibo, David Silver, Demis Hassabis, Koray Kavukcuoglu, Thore Graepel | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20180703 | 20180703 | 8 1 0 2
l u J 3 ] G L . s c [
1 v 1 8 2 1 0 . 7 0 8 1 : v i X r a
# Human-level performance in ï¬rst-person multiplayer games with population-based deep reinforcement learning
Max Jaderbergâ1, Wojciech M. Czarneckiâ1, Iain Dunningâ1, Luke Marris1 Guy Lever1, Antonio Garcia Castaneda1, Charles Beattie1, Neil C. Rabinowitz1 Ari S. Morcos1, Avraham Ruderman1, Nicolas Sonnerat1, Tim Green1, Louise Deason1 Joel Z. Leibo1, David Silver1, Demis Hassabis1, Koray Kavukcuoglu1, Thore Graepel1 âEqual contribution.
1DeepMind, London, UK
Recent progress in artiï¬cial intelligence through reinforcement learning (RL) has shown great success on increasingly complex single-agent environments (30, 40, 45, 46, 56) and two-player turn-based games (47, 58, 66). However, the real- world contains multiple agents, each learning and acting independently to co- operate and compete with other agents, and environments reï¬ecting this de- gree of complexity remain an open challenge. In this work, we demonstrate for the ï¬rst time that an agent can achieve human-level in a popular 3D multi- player ï¬rst-person video game, Quake III Arena Capture the Flag (28), using only pixels and game points as input. These results were achieved by a novel two-tier optimisation process in which a population of independent RL agents are trained concurrently from thousands of parallel matches with agents play- ing in teams together and against each other on randomly generated environ- ments. Each agent in the population learns its own internal reward signal to complement the sparse delayed reward from winning, and selects actions us- ing a novel temporally hierarchical representation that enables the agent to reason at multiple timescales. During game-play, these agents display human- like behaviours such as navigating, following, and defending based on a rich learned representation that is shown to encode high-level game knowledge. In an extensive tournament-style evaluation the trained agents exceeded the win- rate of strong human players both as teammates and opponents, and proved far stronger than existing state-of-the-art agents. These results demonstrate a
1
# signiï¬cant jump in the capabilities of artiï¬cial agents, bringing us closer to the goal of human-level intelligence.
We demonstrate how intelligent behaviour can emerge from training sophisticated new learning agents within complex multi-agent environments. End-to-end reinforcement learn- ing methods (45, 46) have so far not succeeded in training agents in multi-agent games that combine team and competitive play due to the high complexity of the learning problem (7, 43) that arises from the concurrent adaptation of other learning agents in the environment. We ap- proach this challenge by studying team-based multiplayer 3D ï¬rst-person video games, a genre which is particularly immersive for humans (16) and has even been shown to improve a wide range of cognitive abilities (21). We focus speciï¬cally on a modiï¬ed version (5) of Quake III Arena (28), the canonical multiplayer 3D ï¬rst-person video game, whose game mechanics served as the basis for many subsequent games, and which has a thriving professional scene (1). The task we consider is the game mode Capture the Flag (CTF) on per game randomly gener- ated maps of both indoor and outdoor theme (Figure 1 (a,b)). Two opposing teams consisting of multiple individual players compete to capture each otherâs ï¬ags by strategically navigat- ing, tagging, and evading opponents. The team with the greatest number of ï¬ag captures after ï¬ve minutes wins. CTF is played in a visually rich simulated physical environment (Supple- mentary Video https://youtu.be/dltN4MxV1RI), and agents interact with the envi- ronment and with other agents through their actions and observations. In contrast to previous work (18, 41, 42, 47, 48, 53, 58, 63, 64), agents do not have access to models of the environment, other agents, or human policy priors, nor can they communicate with each other outside of the game environment. Each agent acts and learns independently, resulting in decentralised control within a team.
Since we wish to develop a learning agent capable of acquiring generalisable skills, we go beyond training ï¬xed teams of agents on a ï¬xed map, and instead devise an algorithm and training procedure that enables agents to acquire policies that are robust to the variability of maps, number of players, and choice of teammates, a paradigm closely related to ad-hoc team play (62). The proposed training algorithm stabilises the learning process in partially observable multi-agent environments by concurrently training a diverse population of agents who learn by playing with each other, and in addition the agent population provides a mechanism for meta- optimisation. We solve the prohibitively hard credit assignment problem of learning from the sparse and delayed episodic team win/loss signal (optimising thousands of actions based on a single ï¬nal reward) by enabling agents to evolve an internal reward signal that acts as a proxy for winning and provides denser rewards. Finally, we meet the memory and long-term temporal reasoning requirements of high-level, strategic CTF play by introducing an agent architecture that features a multi-timescale representation, reminiscent of what has been observed in pri- mate cerebral cortex (11), and an external working memory module, broadly inspired by human episodic memory (22). These three innovations, integrated within a scalable, massively dis- tributed, asynchronous computational framework, enables the training of highly skilled CTF agents through solely multi-agent interaction and single bits of feedback about game outcomes.
2
(a) Outdoor procedural maps (b) Indoor procedural maps +80 7 7 na (d) Thousands of parallel CTF games generate ~s, experience to train from (c) First-person observations that the agents updates each agent's respective policy > 3% i 2 ©o9 aso > 0 > @°@ > go ) Agent â(f) Population based training provides diverse policies for Population training games and enables internal reward optimisation
Figure 1: CTF task and computational training framework. Shown are two example maps that have been sampled from the distribution of outdoor maps (a) and indoor maps (b). Each agent in the game only sees its own ï¬rst-person pixel view of the environment (c). Training data is generated by playing thousands of CTF games in parallel on a diverse distribution of procedurally generated maps (d), and used to train the agents that played in each game with reinforcement learning (e). We train a population of 30 different agents together, which provides a diverse set of teammates and opponents to play with, and is also used to evolve the internal rewards and hyperparameters of agents and learning process (f). Game-play footage and further exposition of the environment variability can be found in Supplementary Video https://youtu.be/dltN4MxV1RI.
In our formulation, the agentâs policy 7 uses the same interface available to human play- ers. It receives raw RGB pixel input x, from the agentâs first-person perspective at timestep t, produces control actions a, ~ 7 simulating a gamepad, and receives game points /; attained â the points received by the player for various game events which is visible on the in-game scoreboard. The goal of reinforcement learning (RL) is to find a policy that maximises the ex- pected cumulative y-discounted reward E, Dan y'r,] over a CTF game with T time steps. The agentâs policy 7 is parameterised by a multi-timescale recurrent neural network with external memory (20) (Figure A (a), Figure [STO). Actions in this model are generated conditional on a stochastic latent variable, whose distribution is modulated by a more slowly evolving prior process. The variational objective function encodes a trade-off between maximising expected reward and consistency between the two timescales of inference (more details are given in Sup- plementary Materials Section[2-1). Whereas some previous hierarchical RL agents construct ex- plicit hierarchical goals or skills (3, 65, 70), this agent architecture is conceptually more closely related to work on building hierarchical temporal representations (/2, 14, 33, 55) and recurrent
3
latent variable models for sequential data (13, 19). The resulting model constructs a temporally hierarchical representation space in a novel way to promote the use of memory (Figure S7) and temporally coherent action sequences.
For ad-hoc teams, we postulate that an agentâs policy Ï0 should maximise the probability of 2 â1}, which is composed of Ï0 itself, and its teammatesâ winning for its team, {Ï0, Ï1, . . . , Ï N policies Ï1, . . . , Ï N 2 â1, for a total of N players in the game:
, : N-1 8 P(moâs team wins|w, (71)-9 ) = EA) Noh {m0,771,--- Twa} > {7x, ...,%n-1f}. Cd)
w > returns 1 if the left team wins, 0 for losing, and randomly breaks ties. The winning operator Ï â¼ â¦ represents the speciï¬c map instance and random seeds, which are stochastic in learning and testing. Since game outcome as the only reward signal is too sparse for RL to be effective, we require rewards rt to direct the learning process towards winning yet are more frequently available than the game outcome. In our approach, we operationalise the idea that each agent has a dense internal reward function (60,61,74), by specifying rt = w(Ït) based on the available game points signals Ït (points are registered for events such as capturing a ï¬ag), and, crucially, allowing the agent to learn the transformation w such that policy optimisation on the internal rewards rt optimises the policy For The Win, giving us the FTW agent.
Training agents in multi-agent systems requires instantiations of other agents in the envi- ronment, like teammates and opponents, to generate learning experience. A solution could be self-play RL, in which an agent is trained by playing against its own policy. While self-play variants can prove effective in some multi-agent games (4, 9, 24, 37, 47, 57, 58), these methods can be unstable and in their basic form do not support concurrent training which is crucial for scalability. Our solution is to train a population of P different agents Ï = (Ïp)P p=1 in parallel that play with each other, introducing diversity amongst players to stabilise training (54). Each agent within this population learns from experience generated by playing with teammates and opponents sampled from the population. We sample the agents indexed by ι for a training game using a stochastic matchmaking scheme mp(Ï) that biases co-players to be of similar skill to player p. This scheme ensures that â a priori â the outcome is sufï¬ciently uncertain to provide a meaningful learning signal, and that a diverse set of teammates and opponents are seen dur- ing training. Agentsâ skill levels are estimated online by calculating Elo scores (adapted from chess (15)) based on outcomes of training games. We also use the population to meta-optimise the internal rewards and hyperparameters of the RL process itself, which results in the joint maximisation of:
(2) T Tirner(Tp|Wp) = Emp (),wrO Fawn, > rte) Vp ew t=0 Jouter(Wp; Pp|T) = Ennmp(m)wra P (1s team wins|w, 7?)
Ïw,Ï p = optimiseÏp(Jinner, w, Ï).
4
This can be seen as a two-tier reinforcement learning problem. The inner optimisation max- imises Jinner, the agentsâ expected future discounted internal rewards. The outer optimisation of Jouter can be viewed as a meta-game, in which the meta-reward of winning the match is maximised with respect to internal reward schemes wp and hyperparameters Ïp, with the inner optimisation providing the meta transition dynamics. We solve the inner optimisation with RL as previously described, and the outer optimisation with Population Based Training (PBT) (29). PBT is an online evolutionary process which adapts internal rewards and hyperparameters and performs model selection by replacing under-performing agents with mutated versions of better agents. This joint optimisation of the agent policy using RL together with the optimisation of the RL procedure itself towards a high-level goal proves to be an effective and generally appli- cable strategy, and utilises the potential of combining learning and evolution (2) in large scale learning systems.
To assess the generalisation performance of agents at different points during training, we performed a large tournament on procedurally generated maps with ad-hoc matches involving three types of agents as teammates and opponents: ablated versions of FTW (including state- of-the-art baselines), Quake III Arena scripted bots of various levels (69), and human partici- pants with ï¬rst-person video game experience. Figure 2 (b) and Figure S2 show the Elo scores and derived winning probabilities for different ablations of FTW, and how the combination of components provide superior performance. The FTW agents clearly exceeded the win-rate of humans in maps which neither agent nor human had seen previously, i.e. zero-shot generalisa- tion, with a team of two humans on average capturing 16 ï¬ags per game less than a team of two FTW agents (Figure S2 Bottom, FF vs hh). Interestingly, only as part of a human-agent team did we observe a human winning over an agent-agent team (5% win probability). This result suggests that trained agents are capable of cooperating with never seen before teammates, such as humans. In a separate study, we probed the exploitability of the FTW agent by allowing a team of two professional games testers with full communication to play continuously against a ï¬xed pair of FTW agents. Even after twelve hours of practice the human game testers were only able to win 25% (6.3% draw rate) of games against the agent team.
Interpreting the difference in performance between agents and humans must take into ac- count the subtle differences in observation resolution, frame rate, control ï¬delity, and intrinsic limitations in reaction time and sensorimotor skills (Figure S11 (a), Supplementary Materials Section 3.1). For example, humans have superior observation and control resolution â this may be responsible for humans successfully tagging at long range where agents could not (humans: 17% tags above 5 map units, agents: 0.5%). In contrast, at short range, agents have superior tagging reaction times to humans: by one measure FTW agents respond to newly appeared opponents in 258ms, compared with 559ms for humans (Figure S11 (b)). Another advantage exhibited by agents is their tagging accuracy, where FTW agents achieve 80% accuracy com- pared to humansâ 48%. By artiï¬cially reducing the FTW agentsâ tagging accuracy to be similar to humans (without retraining them), agentsâ win-rate was reduced, though still exceeded that of humans (Figure S11 (c)). Thus, while agents learn to make use of their potential for better tagging accuracy, this is only one factor contributing to their overall performance.
5
(a) FTW Agent Architecture (b) Progression During Training ok 150K 300K 450K winning 4 : f : : signal * 1600 4 Agent Elo FTW Y 1500 + Internal 1400 + reward 1300 555 5 oo fe ee Strong Human H Action 1200 Self-play + RS Game points Px i on yoo. JT Average Human 900 4 600 + Self-pl: âSampled latent Learning Rate variable 4e4 4 oe Policy Slow RNN Fast RNN Observation x, KL Weighting Internal Timescale r T T 1 ok 150K 300K 450K Games played
Figure 2: Agent architecture and benchmarking. (a) Shown is how the agent processes a temporal sequence of observations xt from the environment. The model operates at two different time scales, faster at the bottom, and slower by a factor of Ï at the top. A stochastic vector-valued latent variable is sampled at the fast time scale from distribution Qt based on observations xt. The action distribution Ït is sampled conditional on the latent variable at each time step t. The latent variable is regularised by the slow moving prior Pt which helps capture long-range temporal correlations and promotes memory. The network parameters are updated using reinforcement learning based on the agentâs own internal reward signal rt, which is obtained from a learnt transformation w of game points Ït. w is optimised for winning probability through population based training, another level of training performed at yet a slower time scale than RL. Detailed network architectures are described in Figure S10. (b) Top: Shown are the Elo skill ratings of the FTW agent population throughout training (blue) together with those of the best baseline agents using hand tuned reward shaping (RS) (red) and game winning reward signal only (black), compared to human and random agent reference points (violet, shaded region shows strength between 10th and 90th percentile). It can be seen that the FTW agent achieves a skill level considerably beyond strong human subjects, whereas the baseline agentâs skill plateaus below, and does not learn anything without reward shaping (see Supplementary Materials for evaluation procedure). (b) Bottom: Shown is the evolution of three hyperparameters of the FTW agent population: learning rate, KL weighting, and internal time scale Ï , plotted as mean and standard deviation across the population.
We hypothesise that trained agents of such high skill have learned a rich representation of the game. To investigate this, we extracted ground-truth state from the game engine at each point in time in terms of 200 binary features such as âDo I have the ï¬ag?â, âDid I see my teammate recently?â, and âWill I be in the opponentâs base soon?â. We say that the agent has knowledge of a given feature if logistic regression on the internal state of the agent accurately models the feature. In this sense, the internal representation of the agent was found to encode a wide variety of knowledge about the game situation (Figure S4). Interestingly, the FTW agentâs representation was found to encode features related to the past particularly well: e.g. the FTW
6
(d) Single Neuron Selectivity 98% 97% ee 99% (c) Expected Neural Response Map Agent flag taken Opponent flag held by teammate cr > y Ey Opponent flag held by agent (b) Basic CTF Situations Agent Flag Status TS, : om Da kaa = eo Boo Agent Respawning + (a) Agent State t-SNE Embedding Points coloured by conjunction of basic CTF situations (e) True Returns ee 4. é 5 Agent flag is stray & agent has opponent Agent flag at base & âopponent fag at base & not respavining & âagent in home base âAgent flag taken & âopponent flag held by teammate & ânot respavning & âagent in the opponent's base (g) Agent Surprise (KL > 10) geal (h) Base Camping Strategy (1) Waiting in opponent's base (3] Opponent flag returned Agent quickly picks up flag fant quickly f and runs bac! 100% Agent is respawning (f) Value Function ; Pa Agent flag is f a stray & £& &e teammate has et ee. q opponent flag yee (i) Automatically Discovered Behaviours Behaviour 12 Behaviour 14 Behaviour 32 Following teammate Opponent base camping Home base defence 9 FTW FTW Eâ5r=â FTW FTW FTW : em, ant wio TH wio TH = 8. wioTH Selt-play + âSel 1 Sel-play + Hep 7 Sor oeyg ha Hepg Human Human Human
Figure 3: Knowledge representation and behavioural analysis. (a) The 2D t-SNE embedding of an FTW agentâs internal states during game-play. Each point represents the internal state (hp, hq) at a particular point in the game, and is coloured according to the high-level game state at this time â the conjunction of four basic CTF situations (b). Colour clusters form, showing that nearby regions in the internal representation of the agent correspond to the same high-level game state. (c) A visualisation of the expected internal state arranged in a similarity-preserving topological embedding (Figure S5). (d) We show distributions of situation conditional activations for particular single neurons which are distinctly selective for these CTF situations, and show the predictive accuracy of this neuron. (e) The true return of the agentâs internal reward signal and (f) the agentâs prediction, its value function. (g) Regions where the agentâs internal two-timescale representation diverges, the agentâs surprise. (h) The four-step temporal sequence of the high-level strategy opponent base camping. (i) Three automatically discovered high- level behaviours of agents and corresponding regions in the t-SNE embedding. To the right, average occurrence per game of each behaviour for the FTW agent, the FTW agent without temporal hierarchy (TH), self-play with reward shaping agent, and human subjects (more detail in Figure S9).
agent was able to classify the state both ï¬ags are stray (ï¬ags dropped not at base) with 91% AUCROC (area under the receiver operating characteristic curve), compared to 70% with the self-play baseline. Looking at the acquisition of knowledge as training progresses, the agent ï¬rst learned about its own base, then about the opponentâs base, and picking up the ï¬ag. Immediately
7
useful ï¬ag knowledge was learned prior to knowledge related to tagging or oneâs teammateâs situation. Note that agents were never explicitly trained to model this knowledge, thus these results show the spontaneous emergence of these concepts purely through RL-based training.
A visualisation of how the agent represents knowledge was obtained by performing dimen- sionality reduction of the agentâs activations using t-SNE (67). As can be seen from Figure 3, internal agent state clustered in accordance with conjunctions of high-level game state features: ï¬ag status, respawn state, and room type. We also found individual neurons whose activations coded directly for some of these features, e.g. a neuron that was active if and only if the agentâs teammate was holding the ï¬ag, reminiscent of concept cells (51). This knowledge was ac- quired in a distributed manner early in training (after 45K games), but then represented by a single, highly discriminative neuron later in training (at around 200K games). This observed disentangling of game state is most pronounced in the FTW agent (Figure S8).
One of the most salient aspects of the CTF task is that each game takes place on a randomly generated map, with walls, bases, and ï¬ags in new locations. We hypothesise that this requires agents to develop rich representations of these spatial environments to deal with task demands, and that the temporal hierarchy and explicit memory module of the FTW agent help towards this. An analysis of the memory recall patterns of the FTW agent playing in indoor environ- ments shows precisely that: once the agent had discovered the entrances to the two bases, it primarily recalled memories formed at these base entrances (Figure 4, Figure S7). We also found that the full FTW agent with temporal hierarchy learned a coordination strategy dur- ing maze navigation that ablated versions of the agent did not, resulting in more efï¬cient ï¬ag capturing (Figure S3).
Analysis of temporally extended behaviours provided another view on the complexity of behavioural strategies learned by the agent (34). We developed an unsupervised method to automatically discover and quantitatively characterise temporally extended behaviour patterns, inspired by models of mouse behaviour (73), which groups short game-play sequences into be- havioural clusters (Figure S9, Supplementary Video https://youtu.be/dltN4MxV1RI). The discovered behaviours included well known tactics observed in human play, such as waiting in the opponents base for a ï¬ag to reappear (opponent base camping) which we only observed in FTW agents with a temporal hierarchy. Some behaviours, such as following a ï¬ag-carrying teammate, were discovered and discarded midway through training, while others such as per- forming home base defence are most prominent later in training (Figure 4).
In this work, we have demonstrated that an artiï¬cial agent using only pixels and game points as input can learn to play highly competitively in a rich multi-agent environment: a popular multiplayer ï¬rst-person video game. This was achieved by combining a number of innovations in agent training â population based training of agents, internal reward optimisation, and tem- porally hierarchical RL â together with scalable computational architectures. The presented framework of training populations of agents, each with their own learnt rewards, makes mini- mal assumptions about the game structure, and therefore should be applicable for scalable and stable learning in a wide variety of multi-agent systems, and the temporally hierarchical agent represents a sophisticated new architecture for problems requiring memory and temporally ex-
8
Phase 1 Learning the basics of the game : Phase 2 Increasing navigation, tagging, and coordination skills. : Phase 3 Perfecting strategy and memory âSingle Neuron âLam respawningâ âMy flag is takenâ âTeammate has the flagâ ts i i 2 75% iz fey 7 âAgent Tagged Opponent red Agent Strength Behaviour Probability Games Played 0K 10K 30K 350K 350K 480K] Visitation Map Top Memory Read Locations Visitation Map Top Memory Read Locations Visitation Map Top Memory Read Locations
Figure 4: Progression of agent during training. Shown is the development of knowledge rep- resentation and behaviours of the FTW agent over the training period of 450K games, segmented into three phases (Supplementary Video https://youtu.be/dltN4MxV1RI). Knowledge: Shown is the percentage of game knowledge that is linearly decodable from the agentâs representation, measured by average scaled AUCROC across 200 features of game state. Some knowledge is compressed to single neuron responses (Figure 3 (a)), whose emergence in training is shown at the top. Relative Internal Re- ward Magnitude: Shown is the relative magnitude of the agentâs internal reward weights of three of the thirteen events corresponding to game points Ï. Early in training, the agent puts large reward weight on picking up the opponent ï¬ag, whereas later this weight is reduced, and reward for tagging an opponent and penalty when opponents capture a ï¬ag are increased by a factor of two. Behaviour Probability: Shown are the frequencies of occurrence for three of the 32 automatically discovered behaviour clus- ters through training. Opponent base camping (red) is discovered early on, whereas teammate following (blue) becomes very prominent midway through training before mostly disappearing. The home base defence behaviour (green) resurges in occurrence towards the end of training, in line with the agentâs increased internal penalty for more opponent ï¬ag captures. Memory Usage: Shown are heat maps of visitation frequencies for locations in a particular map (left), and locations of the agent at which the top- ten most frequently read memories were written to memory, normalised by random reads from memory, indicating which locations the agent learned to recall. Recalled locations change considerably through- out training, eventually showing the agent recalling the entrances to both bases, presumably in order to perform more efï¬cient navigation in unseen maps, shown more generally in Figure S7.
9
tended inference. Limitations of the current framework, which should be addressed in future work, include the difï¬culty of maintaining diversity in agent populations, the greedy nature of the meta-optimisation performed by PBT, and the variance from temporal credit assignment in the proposed RL updates. Trained agents exceeded the win-rate of humans in tournaments, and were shown to be robust to previously unseen teammates, opponents, maps, and numbers of players, and to exhibit complex and cooperative behaviours. We discovered a highly com- pressed representation of important underlying game state in the trained agents, which enabled them to execute complex behavioural motifs. In summary, our work introduces novel techniques to train agents which can achieve human-level performance at previously insurmountable tasks. When trained in a sufï¬ciently rich multi-agent world, complex and surprising high-level intel- ligent artiï¬cial behaviour emerged.
# References
1. QuakeCon, 2018.
2. David Ackley and Michael Littman. Interactions between learning and evolution. Artiï¬cial life II, 10:487â509, 1991.
3. Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. In Pro- ceedings of AAAI Conference on Artiï¬cial Intelligence, pages 1726â1734, 2017.
4. Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, and Igor Mordatch. Emer- gent complexity via multi-agent competition. In Proceedings of International Conference on Learning Representations, 2018.
5. Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich K¨uttler, Andrew Lefrancq, Simon Green, V´ıctor Vald´es, Amir Sadik, et al. Deepmind Lab. arXiv preprint arXiv:1612.03801, 2016.
6. Franc¸ois B´erard, Guangyu Wang, and Jeremy R Cooperstock. On the limits of the human motor control precision: the search for a devices human resolution. In IFIP Conference on Human-Computer Interaction, pages 107â122. Springer, 2011.
7. Daniel S. Bernstein, Shlomo Zilberstein, and Neil Immerman. The complexity of decentral- ized control of Markov Decision Processes. In Proceedings of Conference on Uncertainty in Artiï¬cial Intelligence, pages 32â37, 2000.
8. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, arXiv preprint and Samy Bengio. Generating sentences from a continuous space. arXiv:1511.06349, 2015.
10
9. G. W. Brown. Iterative solutions of games by ï¬ctitious play. In T.C. Koopmans, editor, Activity Analysis of Production and Allocation, pages 374â376. John Wiley & Sons, Inc., 1951.
10. Alan D Castel, Jay Pratt, and Emily Drummond. The effects of action video game expe- rience on the time course of inhibition of return and the efï¬ciency of visual search. Acta psychologica, 119(2):217â230, 2005.
11. Janice Chen, Uri Hasson, and Christopher J Honey. Processing timescales as an organizing principle for primate cortex. Neuron, 88(2):244â246, 2015.
12. Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neu- ral networks. 2017.
13. Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and In Proceedings Yoshua Bengio. A recurrent latent variable model for sequential data. of Annual Conference on Neural Information Processing Systems, pages 2980â2988, 2015.
14. Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term In Proceedings of Annual Conference on Neural Information Processing dependencies. Systems, pages 493â499, 1996.
15. Arpad E Elo. The Rating of Chessplayers, Past and Present. Arco Pub., 1978.
16. Laura Ermi and Frans M¨ayr¨a. Fundamental components of the gameplay experience: Analysing immersion. Worlds in play: International perspectives on digital games re- search, 37(2):37â53, 2005.
17. Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable dis- tributed Deep-RL with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
18. Jakob N Foerster, Richard Y Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, and Igor Mordatch. Learning with opponent-learning awareness. arXiv preprint arXiv:1709.04326, 2017.
19. Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. Sequential neural models with stochastic layers. In Proceedings of Annual Conference on Neural Information Processing Systems, pages 2199â2207, 2016.
20. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwi´nska, Sergio G´omez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471, 2016.
11
21. C Shawn Green and Daphne Bavelier. Action video game training for cognitive enhance- ment. Current Opinion in Behavioral Sciences, 4:103â108, 2015.
22. Demis Hassabis, Dharshan Kumaran, Christopher Summerï¬eld, and Matthew Botvinick. Neuroscience-inspired artiï¬cial intelligence. Neuron, 95(2):245â258, 2017.
23. Matthew Hausknecht and Peter Stone. Deep reinforcement learning in parameterized action space. In Proceedings of International Conference on Learning Representations, 2016.
24. Johannes Heinrich and David Silver. Deep reinforcement learning from self-play in imperfect-information games. In NIPS Deep Reinforcement Learning Workshop, 2016.
25. Ernst Hellinger. Neue begr¨undung der theorie quadratischer formen von unendlichvielen ver¨anderlichen. Journal f¨ur die reine und angewandte Mathematik, 136:210â271, 1909.
26. Geoffrey Hinton. Neural Networks for Machine Learning, Lecture 6e.
27. Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735â1780, 1997.
28. id Software. Quake III Arena, 1999.
29. Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, et al. Population based training of neural networks. arXiv preprint arXiv:1711.09846, 2017.
30. Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In Proceedings of International Conference on Learning Representations, 2017.
31. Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
32. Hiroaki Kitano, Minoru Asada, Yasuo Kuniyoshi, Itsuki Noda, Eiichi Osawai, and Hitoshi Matsubara. Robocup: A challenge problem for ai and robotics. In Robot Soccer World Cup, pages 1â19. Springer, 1997.
33. Jan Koutnk, Klaus Greff, Faustino Gomez, and Jrgen Schmidhuber. A clockwork RNN. arXiv preprint arXiv:1402.3511, 2014.
34. John W Krakauer, Asif A Ghazanfar, Alex Gomez-Marin, Malcolm A MacIver, and David Poeppel. Neuroscience needs behavior: correcting a reductionist bias. Neuron, 93(3):480â 490, 2017.
35. John Laird and Michael VanLent. Human-level aiâs killer application: Interactive computer games. AI magazine, 22(2):15, 2001.
12
36. Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforce- ment learning. In Proceedings of AAAI Conference on Artiï¬cial Intelligence, pages 2140â 2146, 2017.
37. Marc Lanctot, Vin´ıcius Flores Zambaldi, Audrunas Gruslys, Angeliki Lazaridou, Karl Tuyls, Julien P´erolat, David Silver, and Thore Graepel. A uniï¬ed game-theoretic approach In Proceedings of Annual Conference on Neural to multiagent reinforcement learning. Information Processing Systems, pages 4193â4206, 2017.
38. Joel Z Leibo, Vinicius Zambaldi, Marc Lanctot, Janusz Marecki, and Thore Graepel. Multi- In Proceedings of the 16th agent reinforcement learning in sequential social dilemmas. Conference on Autonomous Agents and MultiAgent Systems, pages 464â473. International Foundation for Autonomous Agents and Multiagent Systems, 2017.
39. Sergey Levine and Vladlen Koltun. Variational policy search via trajectory optimization. In Proceedings of Annual Conference on Neural Information Processing Systems, pages 207â215, 2013.
40. Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yu- val Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In Proceedings of International Conference on Learning Representations, 2016.
41. Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent In Proceedings of Annual actor-critic for mixed cooperative-competitive environments. Conference on Neural Information Processing Systems, pages 6382â6393, 2017.
42. Patrick MacAlpine and Peter Stone. UT Austin Villa: RoboCup 2017 3D simulation league competition and technical challenges champions. In Claude Sammut, Oliver Obst, Flavio Tonidandel, and Hidehisa Akyama, editors, RoboCup 2017: Robot Soccer World Cup XXI, Lecture Notes in Artiï¬cial Intelligence. Springer, 2018.
43. La¨etitia Matignon, Guillaume J. Laurent, and Nadine Le Fort-Piat. Independent reinforce- ment learners in cooperative Markov games: a survey regarding coordination problems. Knowledge Engineering Review, 27(1):1â31, 2012.
44. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lilli- crap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of International Conference on Machine Learning, pages 1928â1937, 2016.
45. Volodymyr Mnih, Adri`a Puigdom`enech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lill- icrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of International Conference on Machine Learning, pages 1928â1937, 2016.
13
46. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Ku- maran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 2015.
47. Matej Moravcik, Martin Schmid, Neil Burch, Viliam Lisy, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, and Michael Bowling. Deepstack: Expert- level artiï¬cial intelligence in heads-up no-limit poker. Science, 356(6337):508â513, 2017.
48. Igor Mordatch and Pieter Abbeel. Emergence of grounded compositional language in multi- agent populations. In Proceedings of AAAI Conference on Artiï¬cial Intelligence, 2018.
49. Andrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward trans- In Proceedings of International formations: Theory and application to reward shaping. Conference on Machine Learning, pages 278â287, 1999.
50. Jeff Orkin. Three states and a plan: the A.I. of F.E.A.R. In Proceedings of Game Developers Conference, 2006.
51. Rodrigo Quian Quiroga. Concept cells: the building blocks of declarative memory func- tions. Nature Reviews Neuroscience, 13(8):587, 2012.
52. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropaga- tion and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
53. Martin Riedmiller and Thomas Gabel. On experiences in a complex and competitive gam- ing domain: Reinforcement learning meets robocup. In Computational Intelligence and Games, 2007. CIG 2007. IEEE Symposium on, pages 17â23. IEEE, 2007.
54. Christopher D Rosin and Richard K Belew. New methods for competitive coevolution. Evolutionary computation, 5(1):1â29, 1997.
55. J¨urgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234â242, 1992.
56. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
57. David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Vedavyas Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484â489, 2016.
14
58. David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Master- ing the game of Go without human knowledge. Nature, 550(7676):354, 2017.
59. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classiï¬cation models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
60. Satinder Singh, Richard L Lewis, and Andrew G Barto. Where do rewards come from? In Proceedings of Annual Meeting of the Cognitive Science Society, pages 2601â2606, 2009.
61. Satinder Singh, Richard L Lewis, Andrew G Barto, and Jonathan Sorg. Intrinsically mo- IEEE Transactions on Au- tivated reinforcement learning: An evolutionary perspective. tonomous Mental Development, 2(2):70â82, 2010.
62. Peter Stone, Gal A Kaminka, Sarit Kraus, Jeffrey S Rosenschein, et al. Ad hoc autonomous agent teams: Collaboration without pre-coordination. In Proceedings of AAAI Conference on Artiï¬cial Intelligence, 2010.
63. Peter Stone and Manuela Veloso. Layered learning. In European Conference on Machine Learning, pages 369â381. Springer, 2000.
64. Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. Learning multiagent communica- In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and tion with backpropagation. R. Garnett, editors, Proceedings of Annual Conference on Neural Information Processing Systems, pages 2244â2252, 2016.
65. Richard S. Sutton, Doina Precup, and Satinder P. Singh. Between MDPs and Semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artiï¬cial Intelligence, 112(1-2):181â211, 1999.
66. G. Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58â68, March 1995.
67. Laurens J P Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579â2605, 2008.
68. Niels Van Hoorn, Julian Togelius, and Jurgen Schmidhuber. Hierarchical controller learn- In IEEE Symposium on Computational Intelligence and ing in a ï¬rst-person shooter. Games, pages 294â301. IEEE, 2009.
69. J. M. P. Van Waveren. The Quake III Arena Bot (Masterâs Thesis), 2001.
15
70. Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. Feudal networks for hierarchical reinforcement learning. In Proceedings of International Conference on Machine Learning, pages 3540â 3549, 2017.
71. Nikos Vlassis, Marc Toussaint, Georgios Kontes, and Savas Piperidis. Learning model-free robot control by a Monte Carlo EM algorithm. Autonomous Robots, 27(2):123â130, 2009.
72. Theophane Weber and Nicolas Heess. Reinforced variational inference. In NIPS Advances in Approximate Bayesian Inference Workshop, 2017.
73. Alexander B Wiltschko, Matthew J Johnson, Giuliano Iurilli, Ralph E Peterson, Jesse M Katon, Stan L Pashkovski, Victoria E Abraira, Ryan P Adams, and Sandeep Robert Datta. Mapping sub-second structure in mouse behavior. Neuron, 88(6):1121â1135, 2015.
74. David H Wolpert and Kagan Tumer. An introduction to collective intelligence. arXiv preprint cs/9908014, 1999.
75. Yuxin Wu and Yuandong Tian. Training agent for ï¬rst-person shooter game with actor-critic curriculum learning. In Proceedings of International Conference on Learning Representa- tions, 2017.
# Acknowledgments
We thank Matt Botvinick, Simon Osindero, Volodymyr Mnih, Alex Graves, Nando de Freitas, Nicolas Heess, and Karl Tuyls for helpful comments on the manuscript; Simon Green and Drew Purves for additional environment support and design; Kevin McKee and Tina Zhu for human experiment assistance; Amir Sadik and Sarah York for exploitation study participation; Adam Cain for help with ï¬gure design; Paul Lewis, Doug Fritz, and Jaume Sanchez Elias for 3D map visualisation work; Vicky Holgate, Adrian Bolton, Chloe Hillier, and Helen King for organisational support; and the rest of the DeepMind team for their invaluable support and ideas.
# Supplementary Materials
# 1 Task
# 1.1 Rules of Capture the Flag
CTF is a team game with the objective of scoring more ï¬ag captures than the opposing team in ï¬ve minutes of play time. To score a capture, a player must navigate to the opposing teamâs base, pick up the ï¬ag (by touching the ï¬ag), carry it back to their own base, and capture it by
16
running into their own ï¬ag. A capture is only possible if the ï¬ag of the scoring playerâs team is safe at their base. Players may tag opponents which teleports them back to their base after a delay (respawn). If a ï¬ag carrier is tagged, the ï¬ag they are carrying drops on the ground and becomes stray. If a player on the team that owns the dropped ï¬ag touches the dropped ï¬ag, it is immediately returned back to their own base. If a player on the opposing team touches the dropped ï¬ag, that player will pick up the ï¬ag and can continue to attempt to capture the ï¬ag.
# 1.2 Environment
The environment we use is DeepMind Lab (5) which is a modiï¬ed version of Quake III Arena (28). The modiï¬cations reduce visual connotations of violence, but retain all core game mechan- ics. Video games form an important domain for research (35). Previous work on ï¬rst-person games considers either much simpler games (30, 36, 45, 75), simpliï¬ed agent interfaces (68), or non-learning systems (50, 69), and previously studied multi-agent domains often consist of discrete-state environments (18, 38, 64), have simpliï¬ed 2D dynamics (23, 41, 53) or have fully observable or non-perceptual features (18, 23, 41, 48, 53, 64) rather than pixel observa- tions. As an example, the RoboCup simulation league (32) is a multi-agent environment that shares some of the same challenges of our environment, and successful work has included RL components (42, 53, 63), however these solutions use a combination of hand-engineering, human-speciï¬ed task decompositions, centralised control, and low-dimensional non-visual in- puts, compared to our approach of end-to-end machine learning of independent reinforcement learners.
CTF games are played in an artiï¬cial environment referred to as a map. In this work we consider two themes of procedurally generated maps in which agents play, indoor maps, and outdoor maps, example schematics of which are shown in Figure S1. The procedural indoor maps are ï¬at, maze-like maps, rotationally symmetric and contain rooms connected by corri- dors. For each team there is a base room that contains their ï¬ag and player spawn points. Maps are contextually coloured: the red base is coloured red, the blue base blue. The procedural out- door maps are open and hilly naturalistic maps containing randomly sized rocks, cacti, bushes, and rugged terrain that may be impassable. Each teamâs ï¬ag and starting positions are located in opposite quadrants of the map. Both the procedural indoor maps and the procedural outdoor maps are randomly generated each episode (some random seeds are not used for training and reserved for performance evaluation), providing a very large set of environments. More details can be found in Section 5.3. Every player carries a disc gadget (equivalent to the railgun in Quake III Arena) which can be used for tagging, and can see their team, shield, and ï¬ag status on screen.
17
# 2 Agent
# 2.1 FTW Agent Architecture
The agentâs policy 7 is represented by a neural network and optimised with reinforcement learn- ing (RL). In a fully observed Markov Decision Process, one would aim at finding a policy that maximises expected y-discounted return E,(.\s,) [A] in game state s,, where R, = an Tek However, when an agent does not have information about the entire environment (which is often the case in real world problems, including CTF), it becomes a Partially-Observed Markov De- cision Process, and hence we instead seek to maximise E,,.|,_,)[:], the expected return under a policy conditioned on the agentâs history of individual observations. Due to the ambiguity of the true state given the observations, P (s; X<z), we represent the current value as a random vari- able, Vi = Ex(.jx<,)[Ri] = 32, P (S|x<r) Ex(.js) [Ri]. We follow the idea of RL as probabilistic inference (39, 71, 72) which leads to a Kullback-Leibler divergence (KL) regularised objective in which the policy Q is regularised against a prior policy P. We choose both to contain a latent variable z;, the purpose of which is to model the dependence on past observations. Letting the policy and the prior differ only in the way this dependence on past observations is modelled leads to the following objective:
EQ(zt|Cq t ) [Rt] â DKL[Q (zt|Cq t ) ||P (zt|Cp t )], (3)
where P (z;|C?) and Q (z;|C?) are the prior and variational posterior distributions on z; condi- tioned on different sets of variables C? and C/ respectively, and Dx, is the Kullback-Leibler divergence. The sets of conditioning variables C? and C# determine the structure of the prob- abilistic model of the agent, and can be used to equip the model with representational priors. In addition to optimising the return as in Equation |3} we can also optimise extra modelling targets which are conditional on the latent variable z;, such as the value function to be used as a baseline (45), and pixel control (30), whose optimisation positively shapes the shared la- tent representation. The conditioning variables C? and C? and the associated neural network structure are chosen so as to promote forward planning and the use of memory. We use a hierarchical RNN consisting of two recurrent networks (LSTMs (27)) operating at different timescales. The hierarchical RNNâs fast timescale core generates a hidden state h# at every environment step t, whereas its slow timescale core produces an updated hidden state every 7 steps h? = nee We use the output of the fast ticking LSTM as the variational posterior, Q(z |P (Z1) Ze, Ker, dcr, T<t) = N (ui, Xf), where the mean jf and covariance ©f = (o/I)? of the normal distribution are parameterised by the linear transformation (yj, log o/) = f,(h/), and at each timestep take a sample of 2; ~ MN (7,7). The slow timescale LSTM output is used for the prior of P (Zilecrie) Xp [ts Ger|t] Perit) ) = N (ue, SP) where EP = (07), (u?, log o?) = f,(h?) and f, is a linear transformation. The fast timescale core takes as input the observation that has been encoded by a convolutional neural network (CNN), u; = CNN(x:), the previous action a;_1, previous reward r;_, as well as the prior distribution parameters ju?
18
and Σp core takes in the fast coreâs hidden state as input, giving the recurrent network dynamics of
q_ . P Wa P yp hj = Jq(Ue, Qeâ1, 1, h?, fy, ue, DP, Zeâ-1) b= Gp(bf_,,hf_,) ift mod 7 =0 â hee j otherwise (4)
# j
where g, and g, are the fast and slow timescale LSTM cores respectively. Stochastic policy, value function, and pixel control signals are obtained from the samples z, using further non- linear transformations. The resulting update direction is therefore: V( Ex, [âL(ze; xt)]â Dar [Q(zi|P (2) , Zt, X<t, Oct, <t)| [P(2e|Zcr[ 4), X<r[4]sGer[4];
V( Ex, [âL(ze; xt)]â Dar [Q(zi|P (2) , Zt, X<t, Oct, <t)| [P(2e|Zcr[ 4), X<r[4]sGer[4]; rert))])- _ Ct cP
(5) where L(·, ·) represents the objective function composed of terms for multi-step policy gradient and value function optimisation (45), as well as pixel control and reward prediction auxiliary tasks (30), see Section 5.4. Intuitively, this objective function captures the idea that the slow LSTM generates a prior on z which predicts the evolution of z for the subsequent Ï steps, while the fast LSTM generates a variational posterior on z that incorporates new observations, but adheres to the predictions made by the prior. All the while, z must be a useful representation for maximising reward and auxiliary task performance. This architecture can be easily extended to more than two hierarchical layers, but we found in practice that more layers made little differ- ence on this task. We also augmented this dual-LSTM agent with shared DNC memory (20) to further increase its ability to store and recall past experience (this merely modiï¬es the functional form of gp and gq). Finally, unlike previous work on DeepMind Lab (17, 30), the FTW agent uses a rich action space of 540 individual actions which are obtained by combining elements from six independent action dimensions. Exact agent architectures are described in Figure S10.
# 2.2 Internal Reward and Population Based Training
We wish to optimise the FTW agent with RL as stated in Equation 5, using a reward signal that maximises the agent teamâs win probability. Reward purely based on game outcome, such as win/draw/loss signal as a reward of rT = 1, rT = 0, and rT = â1 respectively, is very sparse and delayed, resulting in no learning (Figure 2 (b) Self-play). Hence, we obtain more frequent rewards by considering the game points stream Ït. These can be used simply for re- ward shaping (49) (Figure 2 (b) Self-play + RS) or transformed into a reward signal rt = w(Ït) using a learnt transformation w (Figure 2 (b) FTW). This transformation is adapted such that performing RL to optimise the resulting cumulative sum of expected future discounted rewards effectively maximises the winning probability of the agentâs team, removing the need for man- ual reward shaping (49). The transformation w is implemented as a table look-up for each of the 13 unique values of Ït, corresponding to the events listed in Section 5.5. In addition to
19
optimising the internal rewards of the RL optimisation, we also optimise hyperparameters Ï of the agent and RL training process automatically. These include learning rate, slow LSTM time scale Ï , the weight of the DKL term in Equation 5, and the entropy cost (full list in Section 5.4). This optimisation of internal rewards and hyperparameters is performed using a process of pop- ulation based training (PBT) (29). In our case, a population of P = 30 agents was trained in parallel. For each agent we periodically sampled another agent, and estimated the win probabil- ity of a team composed of only the ï¬rst agent versus a team composed of only the second from training matches using Elo scores. If the estimated win probability of an agent was found to be less than 70% then the losing agent copied the policy, the internal reward transformation, and hyperparameters of the better agent, and explored new internal rewards and hyperparameters. This exploration was performed by perturbing the inherited value by ±20% with a probability of 5%, with the exception of the slow LSTM time scale Ï , which was uniformly sampled from the integer range [5, 20). A burn-in time of 1K games was used after each exploration step which prevents further exploration and allows learning to occur.
# 2.3 Training Architecture
We used a distributed, population-based training framework for deep reinforcement learning agents designed for the fast optimisation of RL agents interacting with each other in an environ- ment with high computational simulation costs. Our architecture is based on an actor-learner structure (17): a large collection of 1920 arena processes continually play CTF games with players sampled at the beginning of each episode from the live training population to ï¬ll the N player positions of the game (Section 5.4.1 for details). We train with N = 4 (2 vs 2 games) but ï¬nd the agents generalise to different team sizes (Figure S2). After every 100 agent steps, the trajectory of experience from each playerâs point of view (observations, actions, rewards) is sent to the learner responsible for the policy carried out by that player. The learner corresponding to an agent composes batches of the 32 trajectories most recently received from arenas, and computes a weight update to the agentâs neural network parameters based on Equation 5 using V-Trace off-policy correction (17) to account for off-policy drift.
# 3 Performance Evaluation
An important dimension of assessing the success of training agents to play CTF is to evalu- ate their skill in terms of the agent teamâs win probability. As opposed to single-agent tasks, assessing skill in multi-agent systems depends on the teammates and opponents used during evaluation. We quantiï¬ed agent skill by playing evaluation games with players from the set of all agents to be assessed. Evaluation games were composed using ad-hoc matchmaking in the sense that all N players of the game, from both teams, were drawn at random from the set of agents being evaluated. This allowed us to measure skill against any set of opponent agents and robustness to any set of teammate agents. We estimate skill using the Elo rating system (15)
20
extended to teams (see Section 5.1 for exact details of Elo calculation).
We performed evaluation matches with snapshots of the FTW agent and ablation study agents through training time, and also included built-in bots and human participants as reference agents for evaluation purposes only. Differences between these types of players is summarised in Figure S11.
The various ablated agents in experiments are (i) UNREAL (30) trained with self-play using game winning reward â this represents the state-of-the-art naive baseline â (ii) Self-play with reward shaping (RS) which instead uses the Quake default points scheme as reward, (iii) PBT with RS, which replaces self-play with population based training, and (iv) FTW without tem- poral hierarchy which is the full FTW agent but omitting the temporal hierarchy (Section 5.6 for full details).
The built-in bots were scripted AI bots developed for Quake III Arena. Their policy has access to the entire game engine, game state, and map layout, but have no learning compo- nent (69). These bots were conï¬gured for various skill levels, from Bot 1 (very low skill level) to Bot 5 (very high skill level, increased shields), as described fully in Section 5.9.
The human participants consisted of 40 people with ï¬rst-person video game playing expe- rience. We collected results of evaluation games involving humans by playing ï¬ve tournaments of eight human players. Players were given instructions on the game environment and rules, and performed two games against Bot 3 built-in bots. Human players then played seven games in ad-hoc teams, being randomly matched with other humans, FTW agents, and FTW without a temporal hierarchy agents as teammates and opponents. Players were not told with which agent types they were playing and were not allowed to communicate with each other. Agents were executed in real-time on the CPUs of the same workstations used by human players (desktops with a commodity GPU) without adversely affecting the frame-rate of the game.
Figure S2 shows the outcome of the tournaments involving humans. To obtain statistically valid Elo estimates from the small number of games played among individuals with high skill variance, we pooled the humans into two groups, top 20% (strong) and remaining 80% (aver- age), according to their individual performances.
We also performed another study with human players to ï¬nd out if human ingenuity, adap- tivity and teamwork would help humans ï¬nd exploitative strategies against trained agents. We asked two professional games testers to play as a team against a team of two FTW agents on a ï¬xed, particularly complex map, which had been held out of training. After six hours of practice and experimentation, the human games testers were able to consistently win against the FTW team on this single map by employing a high-level strategy. This winning strategy involved careful study of the preferred routes of the agents on this map in exploratory games, drawing explicit maps, and then precise communication between humans to coordinate successful ï¬ag captures by avoiding the agentsâ preferred routes. In a second test, the maps were changed to be procedurally generated for each episode as during training. Under these conditions, the human game testers were not able to ï¬nd a consistently winning strategy, resulting in a human win-rate of only 25% (draw rate of 6.3%).
21
# 3.1 Human-Agent Differences
It is important to recognise the intrinsic differences between agents and humans when evaluat- ing results. It is very difï¬cult to obtain an even playing ground between humans and agents, and it is likely that this will continue to be the case for all human/machine comparisons in the domain of action video games. While we attempted to ensure that the interaction of agents and humans within their shared environment was as fair as possible, engineering limitations mean that differences still exist. Figure S11 (a) outlines these, which include the fact that the environ- ment serves humans a richer interface than agents: observations with higher visual resolution and lower temporal latency, and a control space of higher ï¬delity and temporal resolution.
However, in spite of these environmental constraints, agents have a set of advantages over humans in terms of their ultimate sensorimotor precision and perception. Humans cannot take full advantage of what the environment offers: they have a visual-response feedback loop far slower than the 60Hz observation rate (10); and although a high ï¬delity action space is available, humansâ cognitive and motor skills limit their effective control in video games (6).
One way that this manifests in CTF games is through reaction times to salient events. While we cannot measure reaction time directly within a full CTF game, we measure possible proxies for reaction time by considering how long it takes for an agent to respond to a newly-appeared opponent (Figure S11 (b)). After an opponent ï¬rst appears within a playerâs (90 degree) ï¬eld- of-view, it must be become âtaggableâ, i.e. positioned within a 10 degree cone of the playerâs centre of vision. This occurs very quickly within both human and agent play, less than 200ms on average (though this does not necessarily reï¬ect intentional reactions, and may also result from some combination of playersâ movement statistics and prior orientation towards opponent appearance points). However, the time between ï¬rst seeing an opponent and attempting a tag (the opponent is taggable and the tag action is emitted) is much lower for FTW agents (258ms on average) compared to humans (559ms), and when a successful tag is considered this gap widens (233ms FTW, 627ms humans). Stronger agents also had lower response times in general than weaker agents, but there was no statistically signiï¬cant difference in strong humansâ response times compared to average humans.
The tagging accuracy of agents is also signiï¬cantly higher than that of humans: 80% for FTW agents compared to 48% for humans. We measured the effect of tagging accuracy on the performance of FTW agents playing against a Bot 3 team by artiï¬cially impairing agents ability to ï¬re, without retraining the agents (Figure S11 (c)). Win probability decreased as the accuracy of the agent decreased, however at accuracies comparable to humans the FTW agents still had a greater win probability than humans (albeit with comparable mean ï¬ag capture differences). We also used this mechanism to attempt to measure the effect of successful tag time on win probability (Figure S11 (d)), and found that an average response time of up to 375ms did not effect the win probability of the FTW agent â only at 448ms did the win rate drop to 85%.
22
# 4 Analysis
# 4.1 Knowledge Representation
We carried out an analysis of the FTW agentâs internal representation to help us understand how it represents its environment, what aspects of game state are well represented, and how it uses its memory module and parses visual observations.
We say that the agent had game-state related knowledge of a given piece of information if that information could be decoded with sufï¬cient accuracy from the agentâs recurrent hidden state (hp t ) using a linear probe classiï¬er. We deï¬ned a set of 40 binary features that took the form of questions (found in Figure S4) about the state of the game in the distant and recent past, present, and future, resulting in a total of 200 features. Probe classiï¬ers were trained for each of the 200 features using balanced logistic regression on 4.5 million game situations, with results reported in terms of AUCROC evaluated with 3-fold episode-wise cross validation. This analysis was performed on the agent at multiple points in training to show what knowledge emerges at which point in training, with the results shown in Figure S4.
Further insights about the geometry of the representation space were gleaned by performing a t-SNE dimensionality reduction (67) on the recurrent hidden state of the FTW agent. We found strong evidence of cluster structure in the agentâs representation reï¬ecting conjunction of known CTF game state elements: ï¬ag possession, location of the agent, and the agentâs respawn state. Furthermore, we introduce neural response maps which clearly highlight the differences in co-activation of individual neurons of the agent in these different game states (Figure S5). In fact, certain aspects of the game, such as whether an agentâs ï¬ag is held by an opponent or not, or whether the agentâs teammate holds the opponents ï¬ag or not, are represented by the response of single neurons.
Finally, we can decode the sensitivity of the agentâs value function, policy, and internal single neuron responses to its visual observations of the environment through gradient-based saliency analysis (59) (Figure S6). Sensitivity analysis combined with knowledge classiï¬ers seems to indicate that the agent performed a kind of task-based scene understanding, with the effect that its value function estimate was sensitive to seeing the ï¬ag, other agents, and elements of the on-screen information. The exact scene objects which an agentâs value function was sensitive to were often found to be context dependent (Figure S6 bottom).
# 4.2 Agent Behaviour
The CTF games our agents played were ï¬ve minutes long and consisted of 4500 elemental ac- tions by each player. To better understand and interpret the behaviour of agents we considered modelling temporal chunks of high-level game features. We segmented games into two-second periods represented by a sequence of game features (e.g. distance from bases, agentâs room, vis- ibility of teammates and opponents, ï¬ag status, see Section 5.8) and used a variational autoen- coder (VAE) consisting of an RNN encoder and decoder (8) to ï¬nd a compressed vector repre-
23
sentation of these two seconds of high-level agent-centric CTF game-play. We used a Gaussian mixture model (GMM) with 32 components to ï¬nd clusters of behaviour in the VAE-induced vector representation of game-play segments (Section 5.8 for more details). These discrete clus- ter assignments allowed us to represent high-level agent play as a sequence of clusters indices (Figure S9 (b)). These two second behaviour prototypes were interpretable and represented a wide range of meaningful behaviours such as home base camping, opponents base camping, de- fensive behaviour, teammate following, respawning, and empty room navigation. Based on this representation, high-level agent behaviour could be represented by histograms of frequencies of behaviour prototypes over thousands of episodes. These behavioural ï¬ngerprints were shown to vary throughout training, differed strongly between hierarchical and non-hierarchical agent architectures, and were computed for human players as well (Figure S9 (a)). Comparing these behaviour ï¬ngerprints using the Hellinger distance (25) we found that the human behaviour was most similar to the FTW agent after 200K games of training.
# 5 Experiment Details
# 5.1 Elo Calculation
We describe the performance of both agents (human or otherwise) in terms of Elo ratings (15), as commonly used in both traditional games like chess and in competitive video game ranking and matchmaking services. While Elo ratings as described for chess address the one-versus-one case, we extend this for CTF to the n-versus-n case by making the assumption that the rating of a team can be decomposed as the sum of skills of its team members.
Given a population of M agents, let Ïi â R be the rating for agent i. We describe a given match between two teams, blue and red, with a vector m â ZM , where mi is the number of times agent i appears in the blue team less the number of times the agent appears in the red team. Using our additive assumption we can then express the standard Elo formula as:
P(blue wins against red|m, Ï) = 1 1 + 10âÏT m/400 . (6)
To calculate ratings given a set of matches with team assignments mi and outcomes yi (yi = 1 for âblue beats redâ and yi = 1 2 for draw), we optimise Ï to ï¬nd ratings Ïâ that maximise the likelihood of the data. Since win probabilities are determined only by absolute differences in ratings we typically anchor a particular agent (Bot 4) to a rating of 1000 for ease of interpretation.
For the purposes of PBT, we calculate the winning probability of Ïi versus Ïj using mi = 2 and mj = â2 (and mk = 0 for k /â {i, j}), i.e. we assume that both players on the blue team are Ïi and similarly for the red team.
24
# 5.2 Environment Observation and Action Space
DeepMind Lab (5) is capable of rendering colour observations at a wide range of resolutions. We elected to use a resolution of 84Ã84 pixels as in previous related work in this environment (30, 44). Each pixel is represented by a triple of three bytes, which we scale by 1 255 to produce an observation xt â [0, 1]84Ã84Ã3.
The environment accepts actions as a composite of six types of partial actions: change in yaw (continuous), change in pitch (continuous), straï¬ng left or right (ternary), moving forward or backwards (ternary), tagging and jumping (both binary). To further simplify this space, we expose only two possible values for yaw rotations (10 and 60) and just one for pitch (5). Consequently, the number of possible composite actions that the agent can produce is of size 5 · 3 · 3 · 3 · 2 · 2 = 540.
# 5.3 Procedural Environments
Indoor Procedural Maps The procedural indoor maps are ï¬at, point-symmetric mazes con- sisiting of rooms connected by corridors. Each map has two base rooms which contain the teamâs ï¬ag spawn point and several possible player spawn points. Maps are contextually coloured: the red base is red, the blue base is blue, empty rooms are grey and narrow corri- dors are yellow. Artwork is randomly scattered around the mapâs walls.
The procedure for generating an indoor map is as follows:
1. Generate random sized rectangular rooms within a ï¬xed size square area (e.g. 13 à 13 or 17 à 17 cells). Room edges were only placed on even cells meaning rooms always have odd sized walls. This restriction was used to work with the maze backtracking algorithm.
2. Fill the space between rooms using the backtracking maze algorithm to produce corridors. Backtracking only occurs on even cells to allow whole cell gaps as walls.
3. Remove dead ends and horseshoes in the maze.
4. Searching from the top left cell, the ï¬rst room encountered is declared the base room. This ensures that base rooms are typically at opposite ends of the arena.
5. The map is then made to be point-symmetric by taking the ï¬rst half of the map and concatenating it with its reversed self.
6. Flag bases and spawn points are added point-symmetrically to the base rooms.
7. The map is then checked for being solveable and for meeting certain constraints (base room is at least 9 units in area, the ï¬ags are a minimum distance apart).
8. Finally, the map is randomly rotated (to prevent agents from exploiting the skybox for navigation).
25
Outdoor Procedural Maps The procedural outdoor maps are open and hilly naturalistic maps containing obstacles and rugged terrain. Each teamâs ï¬ag and spawn locations are on opposite corners of the map. Cacti and boulders of random shapes and sizes are scattered over the landscape. To produce the levels, ï¬rst the height map was generated using the diamond square fractal algorithm. This algorithm was run twice, ï¬rst with a low variance and then with a high variance and compiled using the element-wise max operator. Cacti and shrubs were placed in the environment using rejection sampling. Each plant species has a preference for a distri- bution over the height above the water table. After initial placement, a lifecycle of the plants was simulated with seeds being dispersed near plants and competition limiting growth in high- vegetation areas. Rocks were placed randomly and simulated sliding down terrain to their ï¬nal resting places. After all entities had been placed on the map, we performed pruning to ensure props were not overlapping too much. Flags and spawn points were placed in opposite quad- rants of the map. The parameters of each map (such as water table height, cacti, shrub and rock density) were also randomly sampled over each individual map. 1000 maps were generated and 10 were reserved for evaluation.
# 5.4 Training Details
Agents received observations from the environment 15 times (steps) per second. For each ob- servation, the agent returns an action to the environment, which is repeated four times within the environment (30, 44). Every training game lasts for ï¬ve minutes, or, equivalently, for 4500 agent steps. Agents were trained for two billion steps, corresponding to approximately 450K games.
Agentsâ parameters were updated every time a batch of 32 trajectories of length 100 had been accumulated from the arenas in which the respective agents were playing. We used RM- SProp (26) as the optimiser, with epsilon 10â5, momentum 0, and decay rate 0.99. The initial learning rate was sampled per agent from LogUniform(10â5, 5 · 10â3) and further tuned during training by PBT, with a population size of 30. Both V-Trace clipping thresholds ¯Ï, ¯c were set to 1. RL discounting factor γ was set to 0.99.
All agents were trained with at least the components of the UNREAL loss (30): the losses used by A3C (44), plus pixel control and reward prediction auxiliary task losses. The baseline cost weight was ï¬xed at 0.5, the initial entropy cost was sampled per agent from LogUniform(5· 10â4, 10â2), the initial reward prediction loss weight was sampled from LogUniform(0.1, 1), and the initial pixel control loss weight was sampled from LogUniform(0.01, 0.1). All weights except the baseline cost weight were tuned during training by PBT.
Due to the composite nature of action space, instead of training pixel control policies directly on 540 actions, we trained independent pixel control policies for each of the six action groups. The reward prediction loss was trained using a small replay buffer, as in UNREAL (30). In particular, the replay buffer had capacity for 800 non-zero-reward and 800 zero-reward se- quences. Sequences consisted of three observations. The batch size for the reward prediction loss was 32, the same as the batch size for all the other losses. The batch consisted of 16
26
non-zero-reward sequences and 16 zero-reward sequences.
For the FTW agent with temporal hierarchy, the loss includes the KL divergence between the prior distribution (from the slow-ticking core) and the posterior distribution (from the fast- ticking core), as well as KL divergence between the prior distribution and a multivariate Gaus- sian with mean 0, standard deviation 0.1. The weight on the ï¬rst divergence was sampled from LogUniform(10â3, 1), and the weight on the second divergence was sampled from LogUniform(10â4, 10â1). A scaling factor on the gradients ï¬owing from fast to slow ticking cores was sampled from LogUniform(0.1, 1). Finally, the initial slower-ticking core time period Ï was sampled from Categorical([5, 6, . . . , 20]). These four quantities were further optimised during training by PBT.
# 5.4.1 Training Games
Each training CTF game was started by randomly sampling the level to play on. For indoor procedural maps, ï¬rst (with 50% probability) the size of map (13 or 17) and its geometry were generated according to the procedure described in Section 5.3. For outdoor procedural maps one of the 1000 pre-generated maps was sampled uniformly. Next, a single agent Ïp was randomly sampled from the population. Based on its Elo score three more agents were sampled without replacement from the population according to the distribution
(P(7p beats 7|¢) â or) 592 where o = 1 o 6 Vrew_,P(t|tp) x < exp (
which is a normal distribution over Elo-based probabilities of winning, centred on agents of the same skill. For self-play ablation studies agents were paired with their own policy instead. The agents in the game pool were randomly assigned to the red and blue teams. After each 5 minute episode this process was repeated.
# 5.5 Game Events
There are 13 binary game events with unique game point values Ït. These events are listed below, along with the default values wquake from the default Quake III Arena points system used
27
for manual reward shaping baselines (Self-play + RS, PBT + RS):
p(1) = â1 w(1) Ï(1) t = I am tagged with the ï¬ag p(2) = â1 w(2) Ï(2) t = I am tagged without the ï¬ag w(3) Ï(3) p(3) = 1 t = I captured the ï¬ag w(4) Ï(4) t = I picked up the ï¬ag w(5) Ï(5) t = I returned the ï¬ag w(6) Ï(6) t = Teammate captured the ï¬ag w(7) Ï(7) t = Teammate picked up the ï¬ag w(8) Ï(8) t = Teammate returned the ï¬ag Ï(9) w(9) t = I tagged opponent with the ï¬ag t = I tagged opponent without the ï¬ag p(10) = 1 w(10) Ï(10) p(11) = â1 w(11) Ï(11) t = Opponents captured the ï¬ag p(12) = â1 w(12) Ï(12) t = Opponents picked up the ï¬ag p(13) = â1 w(13) Ï(13) t = Opponents returned the ï¬ag
Agents did not have direct access to these events. FTW agentsâ initial internal reward mapping was sampled independently for each agent in the population according to
wi) =e pO e ~ LogUniform(0.1, 10.0).
after which it was adapted through training with reward evolution.
# 5.6 Ablation
We performed two separate series of ablation studies, one on procedural indoor maps and one on procedural outdoor maps. For each environment type we ran the following experiments:
⢠Self-play: An agent with an LSTM recurrent processing core (Figure S10 (e)) trained with the UNREAL loss functions described in Section 5.4. Four identical agent policies played in each game, two versus two. Since there was only one agent policy trained, no Elo scores could be calculated, and population-based training was disabled. A single reward was provided to the agent at the end of each episode, +1 for winning, -1 for losing and 0 for draw.
⢠Self-play + Reward Shaping: Same setup as Self-play above, but with manual reward shaping given by wquake.
28
⢠PBT + Reward Shaping: Same agent and losses as Self-play + Reward Shaping above, but for each game in each arena the four participating agents were sampled without re- placement from the population using the process described in Section 5.4. Based on the match outcomes Elo scores were calculated for the agents in the population as described in Section 5.1, and were used for PBT.
⢠FTW w/o Temporal Hierarchy: Same setup as PBT + Reward Shaping above, but with Reward Shaping replaced by an internal reward signals evolved by PBT.
⢠FTW: The FTW agent, using the recurrent processing core with temporal hierarchy (Fig- ure S10 (f)), with the training setup described in Methods: matchmaking, PBT, and inter- nal reward signal.
# 5.7 Distinctly Selective Neurons
For identifying the neuron in a given agent that is most selective for a game state feature y we recorded 100 episodes of the agent playing against Bot 3. Given this dataset of activations hi and corresponding labels yi we ï¬t a Decision Tree of depth 1 using Gini impurity criterion. The decision tree learner selects the most discriminative dimension of h and hence the neuron most selective for y. If the accuracy of the resulting stump exceeds 97% over 100 · 4500 steps we consider it to be a distinctly selective neuron.
# 5.8 Behavioural Analysis
For the behavioural analysis, we model chunks of two seconds (30 agent steps) of gameplay. Each step is represented by 56 agent-centric binary features derived from groundtruth game state:
⢠(3 features) Thresholded shortest path distance from other three agents.
⢠(4 features) Thresholded shortest path distance from each teamâs base and ï¬ags.
⢠(4 features) Whether an opponent captured, dropped, picked up, or returned a ï¬ag.
⢠(4 features) Whether the agent captured, dropped, picked up, or returned a ï¬ag.
⢠(4 features) Whether the agentâs teammate captured, dropped, picked up, or returned a ï¬ag.
⢠(4 features) Whether the agent was tagged without respawning, was tagged and must respawn, tagged an opponent without them respawning, or tagged an opponent and they must respawn.
⢠(4 features) What room an agent is in: home base, opponent base, corridor, empty room.
29
⢠(5 features) Visibility of teammate (visible and not visible), no opponents visible, one opponent visible, two opponents visible.
⢠(5 features) Which other agents are in the same room: teammate in room, teammate not in room, no opponents in room, one opponent in room, two opponents in room.
⢠(4 features) Each teamâs base visibility.
⢠(13 features) Each teamâs ï¬ag status and visibility. Flags status can be either at base, held by teammate, held by opponent, held by the agent, or stray.
⢠(2 features) Whether agent is respawning and cannot move or not.
For each of the agents analysed, 1000 episodes of pairs of the agent playing against pairs of Bot 3 were recorded and combined into a single dataset. A variational autoencoder (VAE) (31, 52) was trained on batches of this mixed agent dataset (each data point has dimensions 30Ã56) using an LSTM encoder (256 units) over the 30 time steps, whose ï¬nal output vector is linearly projected to a 128 dimensional latent variable (diagonal Gaussian). The decoder was an LSTM (256 units) which took in the sampled latent variable at every step.
After training the VAE, a dataset of 400K data points was sampled, the latent variable means computed, and a Gaussian mixture model (GMM) was ï¬t to this dataset of 400KÃ128, with diagonal covariance and 32 mixture components. The resulting components were treated as behavioural clusters, letting us characterise a two second clip of CTF gameplay as one belonging to one of 32 behavioural clusters.
# 5.9 Bot Details
The bots we use for evaluation are a pair of Tauri and Centuri bots from Quake III Arena as deï¬ned below.
30
Bot Personality 1 Bot Level 0.0 ATTACK SKILL 0.0 AIM SKILL 0.0 AIM ACCURACY 0.1 VIEW FACTOR 5 VIEW MAXCHANGE 5.0 REACTIONTIME 0.4 CROUCHER 0.4 JUMPER 0.1 WALKER 0.1 WEAPONJUMPING 0.1 GRAPPLE USER 0.1 AGGRESSION SELFPRESERVATION 0.1 0.1 VENGEFULNESS 0.0 CAMPER 0.1 EASY FRAGGER 0.1 ALERTNESS 0.0 AIM ACCURACY 0.01 FIRETHROTTLE 2 0.25 0.25 0.25 0.35 90 4.0 0.25 0.45 0.05 0.3 0.3 0.3 0.3 0.3 0.25 0.3 0.3 0.22 0.13 Tauri 4 3 1.0 0.5 1.0 0.5 1.0 0.5 0.9 0.6 240 120 1.75 3.0 0.1 0.1 1.0 0.5 0.0 0.0 1.0 0.5 1.0 0.5 1.0 0.5 1.0 0.5 1.0 0.5 0.5 0.5 1.0 0.5 1.0 0.5 0.45 0.75 1.0 0.25 5 1.0 1.0 1.0 1.0 360 0.0 0.0 1.0 0.0 1.0 1.0 1.0 1.0 1.0 0.0 1.0 1.0 1.0 1.0 1 0.0 0.0 0.0 0.1 5 5.0 0.4 0.4 0.1 0.1 0.1 0.1 0.1 0.1 0.0 0.1 0.1 0.0 0.01 2 0.25 0.25 0.25 0.35 90 4.0 0.25 0.45 0.05 0.3 0.3 0.3 0.3 0.3 0.25 0.3 0.3 0.22 0.13 Centauri 4 3 1.0 0.5 1.0 0.5 1.0 0.5 0.9 0.6 240 120 1.75 3.0 0.1 0.1 1.0 0.5 0.0 0.0 1.0 0.5 1.0 0.5 1.0 0.5 1.0 0.5 1.0 0.5 0.5 0.5 1.0 0.5 1.0 0.5 0.45 0.95 0.1 0.25 5 1.0 1.0 1.0 1.0 360 0.0 0.0 1.0 0.0 1.0 1.0 1.0 1.0 1.0 0.0 1.0 1.0 1.0 0.01
31
Figure S1: Shown are schematics of samples of procedurally generated maps on which agents were trained. In order to demonstrate the robustness of our approach we trained agents on two distinct styles of maps, procedural outdoor maps (top) and procedural indoor maps (bottom).
32
Win Probability (%)
Indoor Map Size ile} alls) a7) 19 oe 1 1 1 L Bot 3 4 Bot 4 4 Bot 5 4 Self-play + Self-play + RS 4 PBT + RS 4 Average Humans + Strong Humans 4
Indoor Team Size a A ] 4 fi 1
Outdoor 1 + Bot 3 6 + Bot 4 7 r Bot 5 + Self-play & + Self-play + RS | + PBT + RS + Average Humans + Strong Humans
Indoor Map Size Indoor Team Size Outdoor ile} alls) a7) 19 oe a A ] 4 1 1 1 L fi 1 1 Bot 3 4 + Bot 3 Bot 4 4 6 + Bot 4 Bot 5 4 7 r Bot 5 Self-play + + Self-play + RS 4 & + Self-play PBT + RS 4 | + PBT + RS Humans + + Average Humans 4 + Strong
Win Probability (%) Flag Difference Game Counts FFOOFF ff. fh Fh_âshh FEO FF ff) fh Fh FEOF fff Fh_âsohh FTW+FTW (FF) FTW+FTW (FF) FTW+FTW w/o TH (Ff) FTW+FTW wio TH (Ff) TH+FTW w/o TH (ff) FTW w/o TH+FTW w/o w/o TH + Human (fh) FTW w/o TH + Human FTW + Human (Fh) FTW + Human (Fh) Human + Human (hh) Human + Human (hh)
Win Probability (%) FFOOFF ff. fh Fh_âshh FTW+FTW (FF) FTW+FTW w/o TH (Ff) FTW w/o TH+FTW w/o TH (ff) FTW w/o TH + Human (fh) FTW + Human (Fh) Human + Human (hh)
Flag Difference FEO FF ff) fh Fh
Game Counts FEOF fff Fh_âsohh FTW+FTW (FF) FTW+FTW wio TH (Ff) FTW w/o TH+FTW w/o TH (ff) FTW w/o TH + Human (fh) FTW + Human (Fh) Human + Human (hh)
Figure S2: Top: Shown are win probabilities of different agents, including bots and humans, in eval- uation tournaments, when playing on procedurally generated maps of various sizes (13â21), team sizes (1â4) and styles (indoor/outdoor). On indoor maps, agents were trained with team size two on a mixture of 13 à 13 and 17 à 17 maps, so performance in scenarios with different map and team sizes measures their ability to successfully generalise. Teams were composed by sampling agents from the set in the ï¬gure with replacement. Bottom: Shown are win probabilities, differences in number of ï¬ags captured, and number of games played for the human evaluation tournament, in which human subjects played with agents as teammates and/or opponents on indoor procedurally generated 17 à 17 maps.
33
Two-player Fetch
Agent Bot Self-play + RS PBT + RS FTW w/o TH FTW Fetch-trained FTW w/o TH Fetch-trained FTW Flags 14 9 14 23 37 30 44
Figure S3: Left: Average number of ï¬ags scored per match for different CTF-trained agents playing two-player fetch (CTF without opponents) on indoor procedurally generated maps of size 17. This test provides a measure of agentsâ ability to cooperate while navigating in previously unseen maps. Ten thousand matches were played, with teams consisting of two copies of the same agent, which had not been trained on this variant of the CTF task. All bot levels performed very similarly on this task, so we report a single number for all bot levels. In addition we show results when agents are trained solely on the fetch task (+1 reward for picking up and capturing a ï¬ag only). Right: Heatmaps of the visitation of the FTW agent during the second half of several episodes while playing fetch..
34
Time: -20 frames from now âAvg. knowledge Random agent Time: -5 frames from now. Self play + RS @450K FTW @450K Random agent Time: now Self play + RS @a50K Fw. Random agent Time: +5 frames from now Seltplay + RS @a50K FTW @4soK Fm FTW @2k Random agent Time: +20 frames from now Seltplay + RS @450K FTW @asoK Random agent
Figure S4: Shown is prediction accuracy in terms of percent AUCROC of linear probe classiï¬ers on 40 different high-level game state features (columns) for different agents (rows), followed by their averages across features, for ï¬ve different temporal offsets ranging from -20 to +20 frames (top to bottom). Results are shown for the baseline self-play agent with reward shaping as well as the FTW agent after different numbers of training games, and an untrained randomly initialised FTW agent.
35
S
Vector to visualise Neural Response xeRH Map of x Values for Faces are colored according vertices colors |. to mean of vertices colors a Ubmy aman anchor ntigenva asi, Dataset XeRTâ¢" By â¬0IVEVA XT XID oo ntommentarinsnntntih Temporally faith tire rantnnertirtertah consistent 1 Maddala alas âembedding He amnneenesnca erin of neurons Triangulation i a 1 seenemariteanantitetanetpntent H â âSNE e.g. Delaunay i AN carnalnnyen brit tr, appli tO xt triangulation Fcrebetthnetterntnentetraem tn or H minimisation of VeRHX
Figure S5: Top: Shown are neural response maps for the FTW agent for game state features used in the knowledge study of Extended Data Figure S4. For each binary feature y we plot the response vector E[(hp, hq)|y = 1] â E[(hp, hq)|y = 0]. Bottom: Process for generating similarity based topological embedding of the elements of vector x â RH given a dataset of other X â RT ÃH . Here we use two independent t-SNE embeddings, one for each of the agentâs LSTM hidden state vectors at the two timescales.
36
Agentâs View Value Function Policy Opponent has the flag
Opponent has the flag t Opponent has the flag t+2 Focusing on flag carrier Focusing on both opponents Flag carrier tagged, focusing on on-screen information
Figure S6: Top two rows: Selected saliency analysis of FTW agent. Contours show sensitivity | oft OXt, i highly selective neurons in the agentâs hidden state, and x;,;; represents the pixel at position 77 at time t. Brighter colour means higher gradient norm and thus higher sensitivity to given pixels. Bottom: Saliency analysis of a single neuron that encodes whether an opponent is holding a flag. Shown is a single situation from the perspective of the FTW agent, in which attention is on an opponent flag carrier at time t, on both opponents at time t + 2, and switches to the on-screen information at time t + 4 once the flag carrier has been tagged and the flag returned. , where f; is instantiated as the agentâs value function at time ¢, its policy, or one of four 1
37
7K 200K 300K 450K 20 «5s Distance to opponents base Distance to opponents base Distance to opponents base Distance to opponents base 0 5 wo 6 2 v) 5 1 6 2 0 5 1 1% 2 0 5 ow 6b 2 Distance to my base Distance to my base Distance to my base Distance to my base Averaged across episode Early in the episode Later in the episode Recalls entering both bases Recalls its current path Recalls entering opponents
7K 200K 300K 450K 20 «5s Distance to opponents base Distance to opponents base Distance to opponents base Distance to opponents base 0 5 wo 6 2 v) 5 1 6 2 0 5 1 1% 2 0 5 ow 6b 2 Distance to my base Distance to my base Distance to my base Distance to my base
Averaged across episode Early in the episode Later in the episode Recalls entering both bases Recalls its current path Recalls entering opponents and some waypoints base @ Agent position © Recollection O Teammate position © Opponent position
Figure S7: Top: Shown are Hinton diagrams representing how often the FTW agent reads memory slots written to at different locations, which are represented in terms of distance to home and opponent base, on 1000 procedurally generated maps, at different points during training. The size of each square represents the difference between the probability of reading from the given location compared to randomly reading from one of the locations visited earlier in the episode. Red indicates that the agent reads from this position more often than random, and blue less. At 450K the agent appears to have learned to read from near its own base and just outside the opponent base. Bottom: Shown are memory recall patterns for an example episode. The heatmap plot on the left shows memory recall frequency averaged across the episode. Shown on the right are the recall patterns during the agentâs ï¬rst exploration of a newly encountered map. Early in the episode, the agent simply recalls its own path. In almost the same situation later in the episode, the agent recalls entering the opponent base instead.
38
Agent's Flag Status a Agent's Location aan Sey Opponent Flag Status âAgent Respawning Tam respawning hold the flag Lam respawning | hold the tlag âOpponents hold our flag FTW âTeammate holds the fiag Self-play + RS Agent's Flag Status Agent Respawning Bo, §
Figure S8: Shown is a side-by-side comparison of the internal representations learned from playing CTF for the FTW and Self-play + RS agent, visualised using t-SNE and single neuron activations (Figure 3 for more information). The self-play agentâs representation is seen to be signiï¬cantly less coherently clustered by game state, especially with respect to ï¬ag possessions. Furthermore, it appears to have developed only two highly selective neurons compared to four for the FTW agent.
39
(a) Behaviour occurrence per episode Steps per Episode Behaviour Cluster (b) Behaviours for randomly sampled episode played by the FTW 450k game agent Behaviour Cluster
Figure S9: (a) Shown is a collection of bar plots, one for each of 32 automatically discovered behaviour clusters, representing the number of frames per episode during which the behaviour has been observed for the FTW agent at different points in training, the FTW agent without the temporal hierarchy (TH), the Self-play + RS agent, and human players, averaged over maps and episodes. The behavioural ï¬ngerprint changes signiï¬cantly throughout training, and differs considerably between models with and without temporal hierarchy. (b) Shown is the multi-variate time series of active behaviour clusters during an example episode played by the trained FTW agent. Shown are three particular situations represented by the behaviour clusters: following your teammate, enemy base camping, and home base defence.
40
(a) Agent (b) Policy (e) Recurrent processing with LSTM Visual embedding Recurrent processing Y (f) Recurrent processing with temporal hierarchy (d) Visual embedding âa o- ae 32 64 (h) Reward prediction BORO = (i) Pixel control so el ol gi Legend 5] Convolution x eal Deconvolution nen (2) one memo Diagonal with X with X LRU) with N : Kei | Kxk filters kext] KxK filters neurons slots of size K distribution Linear layer Linear layer with ReLU Softmax Softplus x with X XYZ X-Y-Z neurons non-linearity non-linearity non-linearity neurons reshaped to 3D tensor Pointwis Module âointwise @ addition ® Outer product nee Sampling . } Concatenation
Figure S10: Shown are network architectures of agents used in this study. All agents have the same high-level architecture (a), using a decomposed policy (b) (see Section 5.2), value function (c), and convolutional neural network (CNN) visual feature extractor (d). The baseline agents and ablated FTW without temporal hierarchy agents use an LSTM for recurrent processing (e). The FTW agent uses a temporal hierarchy for recurrent processing (f) which is composed of two variational units (g). All agents use reward prediction (h) and independent pixel control (i) auxiliary task networks.
41
(a) Environment Interface (b) Time delay between opponent appearing and: Human Agent Bot Becoming Taggable 0 Attempted Tag Successful Tag Acti 2049 possible rotations, | 6 possible rotations, | 2049 possible rotations, 500 : u â \ction Space â : : discrete movement | discrete movement | discrete movement = 400 â 800 : = 800 : i E E â E : Observation Rate / 60 Hz/0 ms 15 Hz / 66.7 ms 60 Hz/0ms = 300 = 600 = 600 â Delay @ @ Pa E 200 â⬠400 â 400 T i i i F i Action Resolution 60 Hz 15 Hz 60 He 100 200 l 200 Auxiliary Map layout, al payer o] Se 0 oe SB 0 se oe Information states, all object states se & Se Se Observation | RGB 800x600 pixels RGB 84x84 pixels | Groundtruth game state BS SS) Y x Y x & x x & & & (c) Post-hoc effect of tagging accuracy _(d) Post-hoc effect of successful tag time Look up to Successful Tag ea ae âa Rbrrerrs: | . oo 01 02 03 04 05 06 o7 08 OPP PMP? SK âTagging Accuracy es Large look tet Large look ight Small lok lt Small look right Human Observation âAgent Observation oka and rotation actions eoxeowe + Silt Time (ms) Win Probability vs Bot 3 p
Figure S11: (a) The differences between the environment interface offered to humans, agents, and bots. (b) Humans and agents are in addition bound by other sensorimotor limitations. To illustrate we measure humansâ and agentsâ response times, when playing against a Bot 3 team on indoor procedural maps. Time delays all measured from ï¬rst appearance of an opponent in an observation. Left: delay until the opponent becoming taggable (i.e. lies within a 10 degree visual cone). Middle: delay until an attempted tag (i.e. opponent lies within a 10 degree visual cone and tag action emitted). Right: delay until a successful tag. We ignore situations where opponents are further than 1.5 map units away. The shaded region represents values which are impossible to obtain due to environment constraints. (c) Ef- fect of tagging accuracy on win probability against a Bot 3 team on indoor procedural maps. Accuracy is the number of successful tags divided by valid tag attempts. Agents have a trained accuracy of 80%, much higher than the 48% of humans. In order to measure the effect of decreased accuracy on the FTW agent, additional evaluation matches were performed where a proportion of tag events were artiï¬cially discarded. As the agentâs accuracy increases from below human (40%) to 80% the win probability in- creases from 90% to 100% which represents a signiï¬cant change in performance. (d) Effect of successful tag time on win probability against a Bot 3 team on indoor procedural maps. In contrast to (c), the tag actions were artiï¬cially discarded p% of the time â different values of p result in the spectrum of re- sponse times reported. Values of p greater than 0.9 did not reduce response time, showing the limitations of p as a proxy. Note that in both (c) and (d), the agents were not retrained with these p values and so obtained values are only a lower-bound of the potential performance of agents â this relies on the agents generalising outside of the physical environment they were trained in.
42 | {
"id": "1709.04326"
} |
1807.00734 | The relativistic discriminator: a key element missing from standard GAN | In standard generative adversarial network (SGAN), the discriminator
estimates the probability that the input data is real. The generator is trained
to increase the probability that fake data is real. We argue that it should
also simultaneously decrease the probability that real data is real because 1)
this would account for a priori knowledge that half of the data in the
mini-batch is fake, 2) this would be observed with divergence minimization, and
3) in optimal settings, SGAN would be equivalent to integral probability metric
(IPM) GANs.
We show that this property can be induced by using a relativistic
discriminator which estimate the probability that the given real data is more
realistic than a randomly sampled fake data. We also present a variant in which
the discriminator estimate the probability that the given real data is more
realistic than fake data, on average. We generalize both approaches to
non-standard GAN loss functions and we refer to them respectively as
Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs). We show that
IPM-based GANs are a subset of RGANs which use the identity function.
Empirically, we observe that 1) RGANs and RaGANs are significantly more
stable and generate higher quality data samples than their non-relativistic
counterparts, 2) Standard RaGAN with gradient penalty generate data of better
quality than WGAN-GP while only requiring a single discriminator update per
generator update (reducing the time taken for reaching the state-of-the-art by
400%), and 3) RaGANs are able to generate plausible high resolutions images
(256x256) from a very small sample (N=2011), while GAN and LSGAN cannot; these
images are of significantly better quality than the ones generated by WGAN-GP
and SGAN with spectral normalization. | http://arxiv.org/pdf/1807.00734 | Alexia Jolicoeur-Martineau | cs.LG, cs.AI, cs.CR, stat.ML | https://github.com/AlexiaJM/RelativisticGAN | null | cs.LG | 20180702 | 20180910 | 8 1 0 2
p e S 0 1 ] G L . s c [
3 v 4 3 7 0 0 . 7 0 8 1 : v i X r a
# The relativistic discriminator: a key element missing from standard GAN
# Alexia Jolicoeur-Martineau Lady Davis Institute Montreal, Canada alexia.jolicoeur-martineau@mail.mcgill.ca
# Abstract
In standard generative adversarial network (SGAN), the discriminator D estimates the probability that the input data is real. The generator G is trained to increase the probability that fake data is real. We argue that it should also simultaneously decrease the probability that real data is real because 1) this would account for a priori knowledge that half of the data in the mini-batch is fake, 2) this would be observed with divergence minimization, and 3) in optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs. We show that this property can be induced by using a ârelativistic discriminatorâ which estimate the probability that the given real data is more realistic than a randomly sampled fake data. We also present a variant in which the discriminator estimate the probability that the given real data is more realistic than fake data, on average. We generalize both approaches to non-standard GAN loss functions and we refer to them respectively as Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs). We show that IPM-based GANs are a subset of RGANs which use the identity function. Empirically, we observe that 1) RGANs and RaGANs are signiï¬cantly more stable and generate higher quality data samples than their non-relativistic counterparts, 2) Standard RaGAN with gradient penalty generate data of better quality than WGAN-GP while only requiring a single discriminator update per generator update (reducing the time taken for reaching the state-of-the-art by 400%), and 3) RaGANs are able to generate plausible high resolutions images (256x256) from a very small sample (N=2011), while GAN and LSGAN cannot; these images are of signiï¬cantly better quality than the ones generated by WGAN-GP and SGAN with spectral normalization.
# Introduction
Generative adversarial networks (GANs) [Hong et al., 2017] form a broad class of generative models in which a game is played between two competing neural networks, the discriminator D and the generator G. D is trained to discriminate real from fake data, while G is trained to generate fake data that D will mistakenly recognize as real. In the original GAN by Goodfellow et al. [2014], which we refer to as Standard GAN (SGAN), D is a classiï¬er, thus it is predicting the probability that the input data is real. When D is optimal, the loss function of SGAN is approximately equal to the JensenâShannon divergence (JSD) [Goodfellow et al., 2014].
SGAN has two variants for the generator loss functions: saturating and non-saturating. In practice, the former has been found to be very unstable, while the latter has been found to more stable [Goodfellow et al., 2014]. Under certain conditions, Arjovsky and Bottou [2017] proved that, if real and fake data are perfectly classiï¬ed, the saturating loss has zero gradient and the non-saturating loss has non-zero
Preprint. Work in progress. Not submitted to NIPS but using NIPS style.
but volatile gradient. In practice, this mean that the discriminator in SGAN often cannot be trained to optimality or with a too high learning rate; otherwise, gradients may vanish and, if so, training will stop. This problem is generally more noticeable in high-dimensional setting (e.g., high resolution images and discriminator architectures with high expressive power) given that there are more degrees of freedom available to reach perfect classiï¬cation of the training set.
To improve on SGAN, many GAN variants have been suggested using different loss functions and discriminators that are not classiï¬ers (e.g., LSGAN [Mao et al., 2017], WGAN [Arjovsky et al., 2017]). Although these approaches have partially succeeded in improving stability and data quality, the large-scale study by Lucic et al. [2017] suggests that these approaches do not consistently improve on SGAN. Additionally, some of the most successful approaches, such as WGAN-GP [Gulrajani et al., 2017], are much more computationally demanding than SGAN.
Many of the recent successful GANs variants have been based on Integral probability metrics (IPMs) [Müller, 1997] (e.g., WGAN [Arjovsky et al., 2017], WGAN-GP[Gulrajani et al., 2017], Sobolev GAN [Mroueh et al., 2017], Fisher GAN [Mroueh and Sercu, 2017]). In IPM-based GANs, the discriminator is real-valued and constrained to a speciï¬c class of function so that it does not grow too quickly; this act as a form of regularization which prevents D from becoming too strong (i.e., almost perfectly classifying real from fake data). In practice, we generally observe that the discriminator of IPM-based GANs can be trained for many iterations without causing vanishing gradients.
IPM constraints have been shown to be similarly beneï¬cial in non-IPM-based GANs. The constraint of WGAN (i.e., Lipschitz discriminator) has been shown to be beneï¬cial in other GANs through spectral normalization [Miyato et al., 2018]. The constraint of WGAN-GP (i.e., discriminator with gradient norm equal to 1 around real and fake data) has been shown to be beneï¬cial in SGAN [Fedus et al., 2017] (along with a very similar gradient penalty by Kodali et al. [2017]). Although this shows that certain IPM constraints improve the stability of GANs, it does not explain why IPMs generally provide increased stability over other metrics/divergences in GANs (e.g., JSD for SGAN, f -divergences for f -GANs [Nowozin et al., 2016]).
In this paper, we argue that non-IPM-based GANs are missing a key ingredient, a relativistic discriminator, which IPM-based GANs already possess. We show that a relativistic discriminator is necessary to make GANs analogous to divergence minimization and produce sensible predictions based on the a priori knowledge that half of the samples in the mini-batch are fake. We provide empirical evidence showing that GANs with a relativistic discriminator are more stable and produce data of higher quality.
# 2 Background
# 2.1 Generative adversarial networks
GANs can be defined very generally in terms of the discriminator in the following way: Ly = Es,» |fi(D(e-))] + Exe. [fo(D(G(2)))]
Ly = Es,» |fi(D(e-))] + Exe. [fo(D(G(2)))] )
and
LG = Exrâ¼P [Ëg1(D(xr))] + Ezâ¼Pz [Ëg2(D(G(z)))] , (2) where Ëf1, Ëf2, Ëg1, Ëg2 are scalar-to-scalar functions, P is the distribution of real data, Pz is generally a multivariate normal distribution centered at 0 with variance 1, D(x) is the discriminator evaluated at x, G(z) is the generator evaluated at z (Q is the distribution of fake data, thus of G(z)). Note that, through the paper, we refer to real data as xr and fake data as xf . Without loss of generality, we assume that both LD and LG are loss functions to be minimized.
Most GANs can be separated into two classes: non-saturating and saturating loss functions. GANs with the saturating loss are such that Ëg1=â Ëf1 and Ëg2=â Ëf2, while GANs with the non-saturating loss are such that Ëg1= Ëf2 and Ëg2= Ëf1. Saturating GANs are most intuitive as they can be interpreted as alternating between maximizing and minimizing the same loss function. After training D to optimality, the loss function is generally an approximation of a divergence (e.g., JensenâShannon divergence (JSD) for SGAN [Goodfellow et al., 2014], f -divergences for F-GANs [Nowozin et al., 2016], and Wassertein distance for WGAN [Arjovsky et al., 2017]). Thus, training G to minimize LG can be roughly interpreted as minimizing the approximated divergence (although this is not
2
technically true; see Jolicoeur-Martineau [2018]). On the other hand, non-saturating GANs can be thought as optimizing the same loss function, but swapping real data with fake data (and vice-versa). In this article, unless otherwise speciï¬ed, we assume a non-saturating loss for all GANs. SGAN assumes a cross-entropy loss, i.e., Ëf1(D(x)) = â log(D(x)) and Ëf2(D(x)) = â log(1 â D(x)), where D(x) = sigmoid(C(x)), and C(x) is the non-transformed discriminator output (which we call the critic as per Arjovsky et al. [2017]). In most GANs, C(x) can be interpreted as how realistic the input data is; a negative number means that the input data looks fake (e.g., in SGAN, D(x) = sigmoid(â5) = 0), while a positive number means that the input data looks real (e.g., in SGAN, D(x) = sigmoid(5) = 1).
In SGAN, the discriminator is said to output the probability that the input data is real. This is because minimizing the cross-entropy is equivalent to maximizing the log-likelihood of a Bernoulli variable. Thus, the output of D is approximately Bernoulli distributed and representative of a probability.
# Integral probability metrics
IPMs are statistical divergences represented mathematically as:
IP MF (P||Q) = sup CâF Exâ¼P[C(x)] â Exâ¼Q[C(x)],
where F is a class of real-valued functions. IPM-based GANs can be deï¬ned using equation 1 and 2 assuming Ëf1(D(x)) = Ëg2(D(x)) = âD(x) and Ëf2(D(x)) = Ëg1(D(x)) = D(x), where D(x) = C(x) (i.e., no transformation is applied). It can be observed that both discriminator and generator loss functions are unbounded and would diverge to ââ if optimized directly. However, IPMs assume that the discriminator is of a certain class of function that does not grow too quickly which prevent the loss functions from diverging. Each IPM applies a different constraint to the discriminator (e.g., WGAN assumes a Lipschitz D, WGAN-GP assumes that D has gradient norm equal to 1 around real and fake data).
# 3 Missing property of SGAN
# 3.1 Missing property
We argue that the key missing property of SGAN is that the probability of real data being real (D(xr)) should decrease as the probability of fake data being real (D(xf )) increase. We provide three arguments suggesting that SGAN should have this property.
# 3.2 Prior knowledge argument
With adequate training, the discriminator is able to correctly classify most real samples as real and most fake samples as not real. Subsequently, after the generator is trained to "fool" the discriminator into thinking that fake samples are real samples, the discriminator classify most samples, real or fake, as real. This behavior is illogical considering the a priori knowledge that half of the samples in the mini-batch are fake, as we explain below.
After training the generator, given that both real and fake samples look equally real, the critic values (C(x)) of real and fake data may be very close, i.e., C(xf ) â C(xr) for most xr and xf . Considering the fact that the discriminator is always shown half real data and half fake data, if the discriminator perceive all samples shown as equally real, it should assume that each sample has probability .50 of being real. However, in SGAN and other non-IPM-based GANs, we implicitly assume that the discriminator does not know that half the samples are fake. If the discriminator doesnât know, it could be possible that all samples shown are real. Thus, if all samples look real, it would be reasonable to assume that they are indeed all real (D(x) â 1 for all x).
Assuming that the generator is trained with a strong learning rate or for many iterations; in addition to both real and fake samples being classiï¬ed as real, fake samples may appear to be more realistic than real samples, i.e., C(xf ) > C(xr) for most xr and xf . In that case, considering that half of the samples are fake, the discriminator should assign a higher probability of being fake to real samples rather than classify all samples are real.
3
(A) Divergence minimization (B) Actual generator training (C) Ideal generator training 14 14 14 Discriminator output D(x) ° a i 0.54 0.54 o1 + o4, + o4, : Beginning End Beginning End Beginning End ââ Realdata ââ Fake data
Figure 1: Expected discriminator output of the real and fake data for the a) direct minimization of the JensenâShannon divergence, b) actual training of the generator to minimize its loss function, and c) ideal training of the generator to minimize its loss function (lines are dotted when they cross beyond the equilibrium to signify that this may or may not be necessary).
In summary, by not decreasing D(xr) as D(xf ) increase, SGAN completely ignores the a priori knowledge that half of the mini-batch samples are fake. Unless one makes the task of the discrimi- nator more difï¬cult (using regularization or lower learning rates), the discriminator does not make reasonable predictions. On the other hand, IPM-based GANs implicitly account for the fact that some of the samples must be fake because they compare how realistic real data is compared to fake data. This provides an intuitive argument to why the discriminator in SGAN (and GANs in general) should depends on both real and fake data.
# 3.3 Divergence minimization argument
In SGAN, we have that the discriminator loss function is equal to the JensenâShannon divergence (JSD) [Goodfellow et al., 2014]. Therefore, calculating the JSD can be represented as solving the following maximum problem:
JSD(P||Q) = ; ao + max Ey, ~pllog(D(x,))] + Ex,~ollog (1 â D(=s))) . (3) D:X (0,1)
2 for all xr â P and xf â Q The JSD is minimized (JSD(P||Q) = 0) when D(xr) = D(xf ) = 1 and maximized (JSD(P||Q) = log(2)) when D(xr) = 1, D(xf ) = 0 for all xr â P and xf â Q. Thus, if we were directly minimizing the divergence from maximum to minimum, we would expect D(xr) to smoothly decrease from 1 to .50 for most xr and D(xf ) to smoothly increase from 0 to .50 for most xf (Figure 1a). However, when minimizing the saturating loss in SGAN, we are only increasing D(xf ), we are not decreasing D(xr) (Figure 1b). Furthermore, we are bringing D(xf ) closer to 1 rather than .50. This means that SGAN dynamics are very different from the minimization of the JSD. To bring SGAN closer to divergence minimization, training the generator should not only increase D(xf ) but also decrease D(xr) (Figure 1c).
# 3.4 Gradient argument
Letâs compare the gradient steps of standard GAN and IPM-based GANs for further insight. It can be shown that the gradients of the discriminator and generator in non-saturating SGAN are respectively:
# âwLGAN D
= âExrâ¼P [(1 â D(xr))âwC(xr)] + Exf â¼Qθ [D(xf )âwC(xf )] ,
âθLGAN G = âEzâ¼Pz [(1 â D(G(z)))âxC(G(z))JθG(z)] , (5)
where J is the Jacobian.
It can be shown that the gradients of the discriminator and generator in IPM-based GANs are respectively:
âwLIP M D = âExrâ¼P[âwC(xr)] + Exf â¼Qθ [âwC(xf )], (6)
4
(4)
âθLIP M G where C(x) â F (the class of functions assigned by the IPM).
= âEzâ¼Pz [âxC(G(z))JθG(z)], (7)
From these equations, it can be observed that SGAN leads to the same dynamics as IPM-based GANs when we have that:
1. D(xr) = 0, D(xf ) = 1 in the discriminator step of SGAN 2. D(xf ) = 0 in the generator step of SGAN. 3. C(x) â F
Assuming that the discriminator and generator are trained to optimality in each step and that it is possible to perfectly distinguish real from the fake data (strong assumption, but generally true early in training); we have that D(xr) = 1, D(xf ) = 0 in the generator step and that D(xr) = 1, D(xf ) = 1 in the discriminator step for most xr and xf (Figure 1b). Thus, the only missing assumption is that D(xr) = 0 in the discriminator step.
This means that SGAN could be equivalent to IPM-based GANs, in certain situations, if the generator could indirectly inï¬uence D(xr). Considering that IPM-based GANs are generally more stable than SGAN, it would be reasonable to expect that making SGAN closer to IPM-based GANs could improve its stability.
In IPMs, both real and fake data equally contribute to the gradient of the discriminatorâs loss function. However, in SGAN, if the discriminator reach optimality, the gradient completely ignores real data. This means that if D(xr) does not indirectly change when training the discriminator to reduce D(xf ) (which might happens if real and fake data have different supports or if D has a very large capacity), the discriminator will stop learning what it means for data to be "real" and training will focus entirely on fake data. In which case, fake samples will not become more realistic and training will get stuck. On the other hand, if D(xr) always decreases when D(xf ) increases, real data will always be incorporated in the gradient of the discriminator loss function. In our experiments, we observe that GANs with this property are able to learn in very difï¬cult settings whereas traditional GANs become stuck early in training.
# 4 Method
# 4.1 Relativistic standard GAN
In standard GAN, the discriminator can be deï¬ned, in term of the non-transformed layer C(x), as D(x) = sigmoid(C(x)). A simple way to make discriminator relativistic (i.e., having the output of D depends on both real and fake data) is to sample from real/fake data pairs Ëx = (xr, xf ) and deï¬ne it as D(Ëx) = sigmoid(C(xr) â C(xf )).
We can interpret this modiï¬cation in the following way: the discriminator estimates the probability that the given real data is more realistic than a randomly sampled fake data. Similarly, we can deï¬ne Drev(Ëx) = sigmoid(C(xf ) â C(xr)) as the probability that the given fake data is more realistic than a randomly sampled real data. An interesting property of this discriminator is that we do not need to include Drev in the loss function through log(1 â Drev(Ëx)) because we have that 1 â Drev(Ëx) = 1 â sigmoid(C(xf ) â C(xr)) = sigmoid(C(xr) â C(xf )) = D(Ëx); thus, log(D(Ëx)) = log(1 â Drev(Ëx)).
The discriminator and generator (non-saturating) loss functions of the Relativistic Standard GAN (RSGAN) can be written as: LRSGAN
= âE(xr,xf )â¼(P,Q) [log(sigmoid(C(xr) â C(xf )))] . D (8)
LRSGAN G = âE(xr,xf )â¼(P,Q) [log(sigmoid(C(xf ) â C(xr)))] . (9)
# 4.2 Relativistic GANs
More generally, we consider any discriminator deï¬ned as a(C(xr)âC(xf )), where a is the activation function, to be relativistic. This means that almost any GAN can have a relativistic discriminator. This forms a new class of models which we call Relativistic GANs (RGANs).
5
Most GANs can be parametrized very generally in terms of the critic:
LGAN D = Exrâ¼P [f1(C(xr))] + Exf â¼Q [f2(C(xf ))] (10)
and
LGAN G = Exrâ¼P [g1(C(xr))] + Exf â¼Q [g2(C(xf ))] , (11)
where f1, f2, g1, g2 are scalar-to-scalar functions. If we use a relativistic discriminator, these GANs now have the following form:
LRGAN D = E(xr,xf )â¼(P,Q) [f1(C(xr) â C(xf ))] + E(xr,xf )â¼(P,Q) [f2(C(xf ) â C(xr))] (12)
and
# LRGAN G
= E(xr,xf )â¼(P,Q) [g1(C(xr) â C(xf ))] + E(xr,xf )â¼(P,Q) [g2(C(xf ) â C(xr))] .
IPM-based GANs represent a special case of RGAN where f1(y) = g2(y) = ây and f2(y) = g1(y) = y. Importantly, g1 is normally ignored in GANs because its gradient is zero since the generator does not inï¬uence it. However, in RGANs, g1 is inï¬uenced by fake data, thus by the generator. Therefore, g1 generally has a non-zero gradient and needs to be speciï¬ed in the generator loss. This means that in most RGANs (except in IPM-based GANs because they use the identity function), the generator is trained to minimize the full loss function envisioned rather than only half of it.
The formulation of RGANs can be simpliï¬ed when we have the following two properties: (1) f2(ây) = f1(y) and (2) the generator assumes a non-saturating loss (g1(y) = f2(y) and g2(y) = f1(y)). These two properties are observed in standard GAN, LSGAN using symmetric labels (e.g., -1 and 1), IPM-based GANs, etc. With these two properties, RGANs with non-saturating loss can be formulated simply as:
LRGAN â D = E(xr,xf )â¼(P,Q) [f1(C(xr) â C(xf ))] (14)
and
LRGAN â G = E(xr,xf )â¼(P,Q) [f1(C(xf ) â C(xr))] . (15)
Algorithm 1 shows how to train RGANs of this form.
Algorithm 1 Training algorithm for non-saturating RGANs with symmetric loss functions Require: The number of D iterations np (np = 1 unless one seeks to train D to optimality), batch size m, and functions f which determine the objective function of the discriminator (f is f; from equation 10 assuming that f2(ây) = fi(y), which is true for many GANS). while has not converged do fort =1,...,np do Sample {2}, ~ P Sample {2}, ~ P, Update w using SGD by ascending with V+ 7224 [f(Cw(e) â Cu(Go(z)))] end for Sample {2}, ~ P Sample {2}, ~ P, Update 6 using SGD by ascending with Vg + 77". [f (Cu(Go(2)) â C,,(@))] end while
# 4.3 Relativistic average GANs
Although the relative discriminator provide the missing property that we want in GANs (i.e., G inï¬uencing D(xr)), its interpretation is different from the standard discriminator. Rather than measuring âthe probability that the input data is realâ, it is now measuring âthe probability that the input data is more realistic than a randomly sampled data of the opposing type (fake if the input is real or real if the input is fake)â. To make the relativistic discriminator act more globally, as in its
6
13)
original deï¬nition, our initial idea was to focus on the average of the relativistic discriminator over random samples of data of the opposing type. This can be conceptualized in the following way:
P (xr is real) := Exf â¼Q[P (xr is more real than xf )] = Exf â¼Q[sigmoid(C(xr) â C(xf ))] = Exf â¼Q[D(xr, xf )],
P (xf is real) := Exrâ¼P[P (xf is more real than xr)] = Exrâ¼P[sigmoid(C(xf ) â C(xr))] = Exrâ¼P[D(xf , xr)],
where D(xr, xf ) = sigmoid(C(xr) â C(xf )).
Then, the following loss function for D could be applied:
Lp = -Ez, xr [log (Ex;~@[D(#r,2y)]))] â Ex;~e [log (1 âEx,.2[D(ws,27)])]. (16)
The main problem with this idea is that it would require looking at all possible combinations of real and fake data in the mini-batch. This would transform the problem from O(m) to O(m2) complexity, where m is the batch size. This is problematic; therefore, we do not use this approach.
Instead, we propose to use the Relativistic average Discriminator (RaD) which compares the critic of the input data to the average critic of samples of the opposite type. The discriminator loss function for this approach can be formulated as:
LESAN = âE,..p [log (D(ar)))] â Ex,~o [log (1 â D(xy))] , (17)
where
D(x) = (aereatClo âEx,xoC(rs)) if a is real as) sigmoid(C (2) â E,,.pC(a,)) if a is fake.
RaD has a more similar interpretation to the standard discriminator than the relativistic discriminator. With RaD, the discriminator estimates the probability that the given real data is more realistic than fake data, on average. This approach has O(m) complexity. Table 1 shows an intuitive and memeful visual representation of how this approach works.
As before, we can generalize this approach to work with any GAN loss function using the following formulation:
LEON Be [fy (Clr) â ExpxoC(ep)))] + Ex no [fe (Clwy) â Ex, -eC(e,))]- (9)
LECAN = Ey, xe [gr (C(@r) â Exyx@C(as)))] + Exp~e [92 (C(xs) â Ex.~eC(xr))]- (20)
We call this general approach Relativistic average GAN (RaGAN). See Algorithm 2 for how to train non-saturating RaGANs.
# 5 Experiments
Experiments were conducted on the CIFAR-10 dataset [Krizhevsky, 2009] and the CAT dataset [Zhang et al., 2008]. Code was written in Pytorch [Paszke et al., 2017] and models were trained using the Adam optimizer [Kingma and Ba, 2014] for 100K generator iterations with seed 1 (which shows that we did not ï¬sh for the best seed, instead, we selected the seed a priori). We report the Fréchet Inception Distance (FID) [Heusel et al., 2017], a measure that is generally better correlated with data quality than the Inception Distance [Salimans et al., 2016] [Borji, 2018]; lower FID means that the generated images are of better quality.
For the models architectures, we used the standard CNN described by Miyato et al. [2018] on CIFAR- 10 and a relatively standard DCGAN architecture [Radford et al., 2015] on CAT (see Appendix). We also provide the source code required to replicate all analyses presented in this paper (See our repository: www.github.com/AlexiaJM/RelativisticGAN).
7
Table 1: A illustrative example of the discriminatorâs output in standard GAN as traditionally deï¬ned (P (xr is real) = sigmoid(C(xr))) versus the Relativistic average Discriminator (RaD) (P (xr is real|C(xf )) = sigmoid(C(xr) â C(xf ))). Breads represent real images, while dogs represent fake images.
Scenario Absolute probability (Standard GAN) Relative probability (Relativistic average Standard GAN) Real image looks real and fake images look fake C(xr) = 8 P (xr is bread) = 1 C(xf ) = â5 P (xr is bread|C(xf )) = 1
Real image looks real but fake images look similarly real on average
ie
C(xr) = 8 P (xr is bread) = 1
C(xf ) = 7 P (xr is bread|C(xf )) = .73
Real image looks fake but fake images look more fake on average
C(xr) = â3 P (xr is bread) = .05
C(xf ) = â5 P (xr is bread|C(xf )) = .88
8
Algorithm 2 Training algorithm for non-saturating RaGANs Require: The number of D iterations np (np = 1 unless one seek to train D to optimality), batch size m, and functions f; and f2 which determine the objective function of the discriminator (see equation 10). while has not converged do fort =1,...,np do Sample {2}, ~ P Sample {2 }â¢@, ~ P. Let Cu (ar) = 4 oy Cw(2) Let Gules) = EY Cx(Go(2)) m. Update w using SGD by ascending with
The number of D iterations np (np = 1 unless one seek to train D to m, and functions f; and f2 which determine the objective function of the equation 10). has not converged do fort =1,...,np do Sample {2}, ~ P Sample {2 }â¢@, ~ P. Let Cu (ar) = 4 oy Cw(2) Let Gules) = EY Cx(Go(2)) m. Update w using SGD by ascending with Var Dina [filCw(a) â Caley) + fa(Cw(Go(2)) â Cnl@,))| end for Sample {2}, ~ P Sample {2}, ~ P, Let Cy, (x) = a an Cw(a) Let Cu (ep) = 4 Cw(Go(2)) m Lvi= Update 6 using SGD by ascending with Vor Dia [Fa(Cw(Go(2)) â Gar) + falCw(a) â Cale] while
# âθ end while
# 5.1 Easy/stable experiments
In these analyses, we compared standard GAN (SGAN), least-squares GAN (LSGAN), Wassertein GAN improved (WGAN-GP), Hinge-loss GAN (HingeGAN) [Miyato et al., 2018], Relativistic SGAN (RSGAN), Relativistic average SGAN (RaSGAN), Relativistic average LSGAN (RaLSGAN), and Relativistic average HingeGAN (RaHingeGAN) using the standard CNN architecture on stable setups (See Appendix for details on the loss functions used). Additionally, we tested RSGAN and RaSGAN with the same gradient-penalty as WGAN-GP (named RSGAN-GP and RaSGAN-GP respectively).
We used the following two known stable setups: (DCGAN setup) lr = .0002, nD = 1, β1 = .50 and β2 = .999 [Radford et al., 2015], and (WGAN-GP setup) lr = .0001, nD = 5, β1 = .50 and β2 = .9 [Gulrajani et al., 2017], where lr is the learning rate, nD is the number of discriminator updates per generator update, and β1, β2 are the ADAM momentum parameters. For optimal stability, we used batch norm [Ioffe and Szegedy, 2015] in G and spectral norm [Miyato et al., 2018] in D.
Results are presented in Table 2. We observe that RSGAN and RaSGAN generally performed better than SGAN. Similarly, RaHingeGAN performed better than HingeGAN. RaLSGAN performed on par with LSGAN, albeit sightly worse. WGAN-GP performed poorly in the DCGAN setup, but very well in the WGAN-GP setup. RasGAN-GP performed poorly; however, RSGAN-GP performed better than all other loss functions using only one discriminator update per generator update. Importantly, the resulting FID of 25.60 is on par with the lowest FID obtained for this architecture using spectral normalization, as reported by Miyato et al. [2018] (25.5). Overall, these results show that using a relativistic discriminator generally improve data generation quality and that RSGAN works very well in conjunction with gradient penalty to obtain state-of-the-art results.
# 5.2 Hard /unstable experiments
# 5.3 CIFAR-10
In these analyses, we compared SGAN, LSGAN, WGAN-GP, RSGAN, RaSGAN, RaLSGAN, and RaHingeGAN with the standard CNN architecture on unstable setups in CIFAR-10. Unless otherwise speciï¬ed, we used lr = .0002, β1 = .5, β2 = .999, nD = 1, and batch norm [Ioffe and Szegedy, 2015] in G and D. We tested the following four unstable setups: (1) lr = .001, (2) β1 = .9, β2 = .9,
9
Table 2: Fréchet Inception Distance (FID) at exactly 100k generator iterations on the CIFAR-10 dataset using stable setups with different GAN loss functions. We used spectral norm in D and batch norm in G. All models were trained using the same a priori selected seed (seed=1).
Loss SGAN RSGAN RaSGAN lr = .0002 β = (.50, .999) nD = 1 40.64 36.61 31.98 lr = .0001 β = (.50, .9) nD = 5 41.32 55.29 37.92 LSGAN RaLSGAN 29.53 30.92 187.01 219.39 HingeGAN RaHingeGAN 49.53 39.12 80.85 37.72 WGAN-GP RSGAN-GP RaSGAN-GP 83.89 25.60 331.86 27.81 28.13
Table 3: Fréchet Inception Distance (FID) at exactly 100k generator iterations on the CIFAR-10 dataset using instable setups with different GAN loss functions. Unless otherwise speciï¬ed, we used lr = .0002, β = (.50, .999), nD = 1, and batch norm (BN) in D and G. All models were trained using the same a priori selected seed (seed=1).
Loss lr = .001 β = (.9, .9) No BN Tanh SGAN RSGAN RaSGAN 154.20 50.95 55.55 35.29 45.12 43.46 35.54 37.11 41.96 59.17 77.21 54.42 LSGAN RaLSGAN 52.27 33.33 225.94 48.92 38.54 34.66 147.87 53.07 HingeGAN RaHingeGAN 43.28 51.05 33.47 42.78 34.21 43.75 58.51 50.69 WGAN-GP 61.97 104.95 85.27 59.94
(3) no batch norm in G or D, and (4) all activation functions replaced with Tanh in both G and D (except for the output activation function of D).
Results are presented in Table 3. We observe that RaLSGAN performed better than LSGAN in all setups. RaHingeGAN performed slightly worse than HingeGAN in most setups. RSGAN and RaSGAN performed better than SGAN in two out of four setups, although differences were small. WGAN-GP generally performed poorly which we suspect is due to the single discriminator update per generator update. Overall, this provide good support for the improved stability of using the relative discriminator with LSGAN, but not with HingeGAN and SGAN. Although results are worse for the relativistic discriminator in some settings, differences are minimal and probably reï¬ect natural variations.
It is surprising to observe low FID for SGAN without batch normalization considering its well-known difï¬culty with this setting [Arjovsky et al., 2017]. Given these results, we suspected that CIFAR-10 may be too easy to fully observe the stabilizing effects of using the relative discriminator. Therefore, our next analyses were done on the more difï¬cult CAT dataset with high resolution pictures.
# 5.4 CAT
CAT is a dataset containing around 10k pictures of cats with annotations. We cropped the pictures to the faces of the cats using those annotations. After removing outliers (hidden faces, blurriness, etc.), the CAT dataset contained 9304 images ⥠64x64, 6645 images ⥠128x128, and 2011 images â¥
10
Table 4: Minimum (min), maximum (max), mean, and standard deviation (SD) of the Fréchet Inception Distance (FID) calculated at 20k, 30k . . . , 100k generator iterations on the CAT dataset with different GAN loss functions. The hyperparameters used were lr = .0002, β = (.50, .999), nD = 1, and batch norm (BN) in D and G. All models were trained using the same a priori selected seed (seed=1).
Loss Min Max Mean SD 64x64 images (N=9304) SGAN RSGAN RaSGAN 16.56 19.03 15.38 310.56 42.05 33.11 52.54 32.16 20.53 96.81 7.01 5.68 LSGAN RaLSGAN 20.27 11.97 224.97 19.29 73.62 15.61 61.02 2.55 HingeGAN RaHingeGAN 17.60 14.62 50.94 27.31 32.23 20.29 14.44 3.96 RSGAN-GP RaSGAN-GP 16.41 17.32 22.34 22 18.20 19.58 1.82 1.81 128x128 images (N=6645) SGAN 2 RaSGAN - 21.05 - 39.65 - 28.53 - 6.52 LSGAN RaLSGAN 19.03 15.85 51.36 40.26 30.28 22.36 10.16 7.53 256x256 images (N=2011) SGAN2 RaSGAN SpectralSGAN LSGAN2 RaLSGAN - 32.11 54.08 - 35.21 - 102.76 90.43 - 299.52 - 56.64 64.92 - 70.44 - 21.03 12.00 - 86.01 WGAN-GP 155.46 437.48 341.91 101.11
256x256. Previous analyses 1 showed that the CAT dataset is particularly difï¬cult in high-dimensions; SGAN generally has vanishing/exploding gradients with 64x64 images and is unable to generate 128x128 images without using certain tricks (e.g., unequal learning rates, Lipschitz discriminator, gradient penalty, etc.); this makes this dataset perfect for testing the stability of different GAN loss functions.
We trained different GAN loss functions on 64x64, 128x128, 256x256 images. For 256x256 images, we compared RaGANs to known stable approaches: SpectralSGAN (SGAN with spectral normalization in D) and WGAN-GP. Although some approaches were able to train on 256x256 images, they did so with signiï¬cant mode collapse. To alleviate this problem, for 256x256 images, we packed the discriminator [Lin et al., 2017] (i.e., D took a concatenated pair of images instead of a single image). We looked at the mimimum, maximum, mean and standard deviation (SD) of the FID at 20k, 30k, ..., 100k generator iterations; results are presented in Table 4.
Overall, we observe lower minimum FID, maximum FID, mean and standard deviation (sd) for RGANs and RaGANs than their non-relativistic counterparts (SGAN, LSGAN, RaLSGAN).
In 64x64 resolution, both SGAN and LSGAN generated images with low FID, but they did so in a very unstable matter. For example, SGAN went from a FID of 17.50 at 30k iterations, to 310.56 at 40k iterations, and back to 27.72 at 50k iterations. Similarly, LSGAN went from a FID of 20.27 at 20k iterations, to 224.97 at 30k iterations, and back to 51.98 at 40k iterations. On the other hand, RaGANs were much more stable (lower max and SD) while also resulting in lower minimum FID.
1As reported on https://ajolicoeur.wordpress.com/cats. 2 Didnât converge, became stuck in the ï¬rst few iterations.
11
Using gradient-penalty did not improve data quality; however, it reduced the SD lower than without gradient penalty, thus increasing stability further.
SGAN was unable to converge on 128x128 or bigger images and LSGAN was unable to converge on 256x256 images. Meanwhile, RaGANs were able to generate plausible images with low FID in all resolutions. Although SpectralSGAN and WGAN-GP were able to generate 256x256 images of cats, the samples they generated were of poor quality (high FID). Thus, in this very difï¬cult setting, relativism provided a greater improvement in quality than gradient penalty or spectral normalization.
# 6 Conclusion and future work
In this paper, we proposed the relativistic discriminator as a way to ï¬x and improve on standard GAN. We further generalized this approach to any GAN loss and introduced a generally more stable variant called RaD. Our results suggest that relativism signiï¬cantly improve data quality and stability of GANs at no computational cost. Furthermore, using a relativistic discriminator with other tools of the trade (spectral norm, gradient penalty, etc.) may lead to better state-of-the-art.
Future research is needed to fully understand the mathematical implications of adding relativism to GANs. Furthermore, our experiments were limited to certain loss functions using only one seed, due to computational constraints. More experiments are required to determine which relativistic GAN loss function is best over a wide-range of datasets and hyperparameters. We greatly encourage researchers and machine learning enthusiasts with greater computing power to experiment further with our approach.
# References
Yongjun Hong, Uiwon Hwang, Jaeyoon Yoo, and Sungroh Yoon. How generative adversarial nets and its variants work: An overview of gan. arXiv preprint arXiv:1711.05914, 2017.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672â2680. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.
Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862, 2017.
Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 2813â2821. IEEE, 2017.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pages 214â223, 2017.
Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are gans created equal? a large-scale study. arXiv preprint arXiv:1711.10337, 2017.
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5767â5777. Curran Associates, Inc., 2017. URL http://papers.nips.cc/ paper/7159-improved-training-of-wasserstein-gans.pdf.
Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429â443, 1997.
Youssef Mroueh, Chun-Liang Li, Tom Sercu, Anant Raj, and Yu Cheng. Sobolev gan. arXiv preprint arXiv:1711.04894, 2017.
12
Youssef Mroueh and Tom Sercu. Fisher gan. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 2513â2523. Curran Associates, Inc., 2017. URL http://papers.nips.cc/ paper/6845-fisher-gan.pdf.
Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.
William Fedus, Mihaela Rosca, Balaji Lakshminarayanan, Andrew M Dai, Shakir Mohamed, and Ian Goodfellow. Many paths to equilibrium: Gans do not need to decrease adivergence at every step. arXiv preprint arXiv:1710.08446, 2017.
Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira. How to train your dragan. arXiv preprint arXiv:1705.07215, 2017.
Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 271â279. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6066-f-gan-training- generative-neural-samplers-using-variational-divergence-minimization.pdf.
Alexia Jolicoeur-Martineau. Gans beyond divergence minimization. arXiv preprint arXiv:1809.02145, 2018.
Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
Weiwei Zhang, Jian Sun, and Xiaoou Tang. Cat head detection-how to effectively exploit shape and texture features. In European Conference on Computer Vision, pages 802â816. Springer, 2008.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Günter Klambauer, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a nash equilibrium. arXiv preprint arXiv:1706.08500, 2017.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen, and Xi Chen. Improved techniques for training gans. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2234â2242. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6125- improved-techniques-for-training-gans.pdf.
Ali Borji. Pros and cons of gan evaluation measures. arXiv preprint arXiv:1802.03446, 2018.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Zinan Lin, Ashish Khetan, Giulia Fanti, and Sewoong Oh. Pacgan: The power of two samples in generative adversarial networks. arXiv preprint arXiv:1712.04086, 2017.
13
# Appendices
# A Gradient step
# A.1 SGAN
G® = âVyBs,.2 [log D(e,)] â VwEr;~0 llog(l ~ D(xs))] 7 Ola) ells) 2. ae oe (22 )] a8. fne(1- 22% )] By, [C(ar) â log (e°) +1) ] â VwExy~g, [log(1) â log (e +1)] : : eller) eles) = âE,, .p [VwC(2,)] + Ex,..P lam Malte) + By ~Qo |e cep] =-V & =-V & pxp [VwO(2r)] + Ex, <P [D(tr)VwC (tr)] + Expo [D(ts)VwC(p)] ~p [(L â D@r))VwC(ar)] + Exp~oy [D(@p)VwO(xp)] Ey Ey
# âwLGAN D
Voleâ = -VoE.~8. [log D(G(2))] vp. h eGlGle)) = âVokznp, ioe core + :)| = âVyEznr. [c(G(e)) â log (core) + 1)| ef (G2) WP, [v-ciats (z)) JoG(z) - Ccoee) V.C(G(z))JoG(z) = -E.nr, (1 â D(G(2)))V20(G(z)) JoG(2)] il x
# âθLGAN G
# A.2 IPM-based GANs
D = ââwExrâ¼P[C(xr)] + âwExf â¼Qθ [C(xf )] = âExrâ¼P[âwC(xr)] + Exf â¼Qθ [âwC(xf )]
âθLIP M G = ââθEzâ¼Pz [C(G(z))] = âEzâ¼Pz [âxC(G(z))JθG(z)]
# B Simpliï¬ed form of relativistic saturating and non-saturating GANs
Assuming f2(ây) = f1(y), we have that
= E(xr,xf )â¼(P,Q) [f1(C(xr) â C(xf ))] + E(xr,xf )â¼(P,Q) [f2(C(xf ) â C(xr))] = E(xr,xf )â¼(P,Q) [f1(C(xr) â C(xf ))] + E(xr,xf )â¼(P,Q) [f1(C(xr) â C(xf ))] = 2E(xr,xf )â¼(P,Q) [f1(C(xr) â C(xf ))] .
If g1(y) = âf1(y) and g2(y) = âf2(y) (saturating GAN), we have that
= E(xr,xf )â¼(P,Q) [g1(C(xr) â C(xf ))] + E(xr,xf )â¼(P,Q) [g2(C(xf ) â C(xr))] = âE(xr,xf )â¼(P,Q) [f1(C(xr) â C(xf ))] â E(xr,xf )â¼(P,Q) [f2(C(xf ) â C(xr))] = âE(xr,xf )â¼(P,Q) [f1(C(xr) â C(xf ))] â E(xr,xf )â¼(P,Q) [f1(C(xr) â C(xf ))] = â2E(xr,xf )â¼(P,Q) [f1(C(xr) â C(xf ))] .
14
If g1(y) = f2(y) and g2(y) = f1(y) (non-saturating GAN), we have that LRGAN âN S G = E(xr,xf )â¼(P,Q) [g1(C(xr) â C(xf ))] + E(xr,xf )â¼(P,Q) [g2(C(xf ) â C(xr))] = E(xr,xf )â¼(P,Q) [f2(C(xr) â C(xf ))] + E(xr,xf )â¼(P,Q) [f1(C(xf ) â C(xr))] = E(xr,xf )â¼(P,Q) [f1(C(xf ) â C(xr))] + E(xr,xf )â¼(P,Q) [f1(C(xf ) â C(xr))] = 2E(xr,xf )â¼(P,Q) [f1(C(xf ) â C(xr))] .
# C Loss functions used in experiments
# C.1 SGAN (non-saturating)
LSGAN D = âExrâ¼P [log (sigmoid(C(xr)))] â Exf â¼Q [log (1 â sigmoid(C(xf )))] (21)
LSGAN G = âExf â¼Q [log (sigmoid(C(xf )))] (22)
# C.2 RSGAN
LRSGAN D = âE(xr,xf )â¼(P,Q) [log(sigmoid(C(xr) â C(xf )))] (23)
LRSGAN G = âE(xr,xf )â¼(P,Q) [log(sigmoid(C(xf ) â C(xr)))] (24)
# C.3 RaSGAN
LROSGAN â _B, [log (Diz-))] â Ex,.g [log (1 - D(x;))| (25)
(Diz-))] (Dw
LEOSGAN _ yg [log (Dw /)| â Ey, ap [log (1 - D(e,))| (26)
LEOSGAN _ yg [log D(x) = sigmoid (C(ar) â Ex,.@C(#s)) D(x) = sigmoid (C(x s) â Ex, .pC(2,))
# C.4 LSGAN
LOAN = Ene [(C(@r) â 0)?] + Ex, [(C(ws) â 1)? (27)
LESCAN = Ex, x0 [(C(as) â 0)"] (28)
# C.5 RaLSGAN
LBUSCAN â Bp [(C (ap) â ExpaC (as) ~ 1)?] + Exyno [(C(wy) â Exe C (ar) + may
LBUSCAN _ BB,» [(C(wy) â Ex, ~2C (2) ~ 1)?] + Bs, aP [(Cler) â Ex, oO (es) + 1)?] (30)
# C.6 HingeGAN
LHingeGAN D = Exrâ¼P [max(0, 1 â C(xr))] + Exf â¼Q [max(0, 1 + C(xf ))] (31)
LHingeGAN G = âExf â¼Q [C(xf )] (32)
# C.7 RaHingeGAN
LiingeGAN Bp [max(0, 1- D(xr))| +Ex,.0 [max(0, 14+ D(x 1)| (33)
LHlingeGAN =E,,.p [max(0, 1- D(x;))| +E,z,~0 [max(0, 1+ D(ar))| (34)
LHingeGAN = Exf â¼P ËD(xr) = C(xr) â Exf â¼QC(xf ) ËD(xf ) = C(xf ) â Exrâ¼PC(xr)
15
# C.8 WGAN-GP
LS CAN-GP = Be [C(@x)] + Enna [C(es)] + XEowe, [(l|VeC(@) |l2-1)"] G5)
LW GAN âGP G = âExf â¼Q [C(xf )] (36)
Py is the distribution of # = ex, + (1 â â¬)ay, where x, ~ P, xp ~ Qe ~ U(0, 1].
# C.9 RSGAN-GP
LESAN = Be, ny) ~(2,9) log(sigmoid(C(x,) ~ C(#s)))] + AEs~r, [((IVaC(@) |l2 â 1)"] (37)
LRSGAN G = âE(xr,xf )â¼(P,Q) [log(sigmoid(C(xf ) â C(xr)))] (38)
Py is the distribution of # = ex, + (1 â â¬)ay, where x, ~ P, xp ~ Qe ~ U(0, 1].
# C.10 RaSGAN-GP
LRSGAN __B [log (Di-))] -E,,.9 [log (1 â Dw /)| +AEsnp, [aivec(@) ll2 â | (39)
LEOSGAN = _, 9 [log (De â)| -Ey,.e [log (1 - D(e,))| (40)
D(ay) = sigmoid (C(x) â Ex,~@C (2s) D(x) = sigmoid (C(x s) â Ex, .pC(2,)) Py is the distribution of @ = ex, + (1 â)ay, where x, ~ P, xp ~ Q,e ~ U(0, 1].
# D Architectures
# D.1 Standard CNN
# Generator z â R128 â¼ N (0, I)
linear, 128 -> 512*4*4 Reshape, 512*4*4 -> 512 x 4 x 4 ConvTranspose2d 4x4, stride 2, pad 1, 512->256 BN and ReLU ConvTranspose2d 4x4, stride 2, pad 1, 256->128 BN and ReLU ConvTranspose2d 4x4, stride 2, pad 1, 128->64 BN and ReLU ConvTranspose2d 3x3, stride 1, pad 1, 64->3 Tanh
# Discriminator x â R3x32x32
Conv2d 3x3, stride 1, pad 1, 3->64 LeakyReLU 0.1 Conv2d 4x4, stride 2, pad 1, 64->64 LeakyReLU 0.1 Conv2d 3x3, stride 1, pad 1, 64->128 LeakyReLU 0.1 Conv2d 4x4, stride 2, pad 1, 128->128 LeakyReLU 0.1 Conv2d 3x3, stride 1, pad 1, 128->256 LeakyReLU 0.1 Conv2d 4x4, stride 2, pad 1, 256->256 LeakyReLU 0.1 Conv2d 3x3, stride 1, pad 1, 256->512 Reshape, 512 x 4 x 4 -> 512*4*4 linear, 512*4*4 -> 1
16
D.2 DCGAN 64x64
# Generator z â R128 â¼ N (0, I)
ConvTranspose2d 4x4, stride 1, pad 0, no bias, 128->512 BN and ReLU ConvTranspose2d 4x4, stride 2, pad 1, no bias, 512->256 BN and ReLU ConvTranspose2d 4x4, stride 2, pad 1, no bias, 256->128 BN and ReLU ConvTranspose2d 4x4, stride 2, pad 1, no bias, 128->64 BN and ReLU ConvTranspose2d 4x4, stride 2, pad 1, no bias, 64->3 Tanh
# D.3 DCGAN 128x128
# Generator z â R128 â¼ N (0, I)
ConvTranspose2d 4x4, stride 1, pad 0, no bias, 128->1024 BN and ReLU ConvTranspose2d 4x4, stride 2, pad 1, no bias, 1024->512 BN and ReLU ConvTranspose2d 4x4, stride 2, pad 1, no bias, 512->256 BN and ReLU ConvTranspose2d 4x4, stride 2, pad 1, no bias, 256->128 BN and ReLU ConvTranspose2d 4x4, stride 2, pad 1, no bias, 128->64 BN and ReLU ConvTranspose2d 4x4, stride 2, pad 1, no bias, 64->3 Tanh
17
# Discriminator x â R3x64x64
Conv2d 4x4, stride 2, pad 1, no bias, 3->64 LeakyReLU 0.2 Conv2d 4x4, stride 2, pad 1, no bias, 64->128 BN and LeakyReLU 0.2 Conv2d 4x4, stride 2, pad 1, no bias, 128->256 BN and LeakyReLU 0.2 Conv2d 4x4, stride 2, pad 1, no bias, 256->512 BN and LeakyReLU 0.2 Conv2d 4x4, stride 2, pad 1, no bias, 512->1
# Discriminator x â R3x128x128
Conv2d 4x4, stride 2, pad 1, no bias, 3->64 LeakyReLU 0.2 Conv2d 4x4, stride 2, pad 1, no bias, 64->128 BN and LeakyReLU 0.2 Conv2d 4x4, stride 2, pad 1, no bias, 128->256 BN and LeakyReLU 0.2 Conv2d 4x4, stride 2, pad 1, no bias, 256->512 BN and LeakyReLU 0.2 Conv2d 4x4, stride 2, pad 1, no bias, 512->1024 BN and LeakyReLU 0.2 Conv2d 4x4, stride 2, pad 1, no bias, 1024->1
# D.4 DCGAN 256x256
ConvTranspose2d 4x4, stride 1, pad 0, no bias, 128->1024 BN and ReLU ConvTranspose2d 4x4, stride 2, pad 1, no bias, 1024->512 LeakyReLU 0.2 BN and ReLU Conv2d 4x4, stride 2, pad 1, no bias, 32->64 ConvTranspose2d 4x4, stride 2, pad 1, no bias, 512->256 LeakyReLU 0.2 BN and ReLU Conv2d 4x4, stride 2, pad 1, no bias, 64->128 ConvTranspose2d 4x4, stride 2, pad 1, no bias, 256->128 BN and LeakyReLU 0.2 BN and ReLU Conv2d 4x4, stride 2, pad 1, no bias, 128->256 ConvTranspose2d 4x4, stride 2, pad 1, no bias, 128->64 BN and LeakyReLU 0.2 BN and ReLU Conv2d 4x4, stride 2, pad 1, no bias, 256->512 ConvTranspose2d 4x4, stride 2, pad 1, no bias, 64->32 BN and LeakyReLU 0.2 BN and ReLU Conv2d 4x4, stride 2, pad 1, no bias, 512->1024 ConvTranspose2d 4x4, stride 2, pad 1, no bias, 64->3 BN and LeakyReLU 0.2 Tanh Conv2d 4x4, stride 2, pad 1, no bias, 1024->1
E Samples
This shows a selection of cats from certain models. Images shown are from the lowest FID registered at every 10k generator iterations. Given space constraint, with higher resolutions cats, we show some of the nicer looking cats for each approach, there are evidently some worse looking cats 3.
3See https://github.com/AlexiaJM/RelativisticGAN/tree/master/images/full_minibatch for all cats of the mini-batch.
18
Figure 2: 64x64 cats with RaLSGAN (FID = 11.97)
19
Figure 3: 128x128 cats with RaLSGAN (FID = 15.85)
20
Figure 4: 256x256 cats with GAN (5k iterations)
Figure 5: 256x256 cats with LSGAN (5k iterations)
21
Figure 6: 256x256 cats with RaSGAN (FID = 32.11)
22
Figure 7: 256x256 cats with RaLSGAN (FID = 35.21)
23
Figure 8: 256x256 cats with SpectralSGAN (FID = 54.73)
24
Figure 9: 256x256 cats with WGAN-GP (FID > 100)
25 | {
"id": "1502.03167"
} |
1806.11146 | Adversarial Reprogramming of Neural Networks | Deep neural networks are susceptible to \emph{adversarial} attacks. In
computer vision, well-crafted perturbations to images can cause neural networks
to make mistakes such as confusing a cat with a computer. Previous adversarial
attacks have been designed to degrade performance of models or cause machine
learning models to produce specific outputs chosen ahead of time by the
attacker. We introduce attacks that instead {\em reprogram} the target model to
perform a task chosen by the attacker---without the attacker needing to specify
or compute the desired output for each test-time input. This attack finds a
single adversarial perturbation, that can be added to all test-time inputs to a
machine learning model in order to cause the model to perform a task chosen by
the adversary---even if the model was not trained to do this task. These
perturbations can thus be considered a program for the new task. We demonstrate
adversarial reprogramming on six ImageNet classification models, repurposing
these models to perform a counting task, as well as classification tasks:
classification of MNIST and CIFAR-10 examples presented as inputs to the
ImageNet model. | http://arxiv.org/pdf/1806.11146 | Gamaleldin F. Elsayed, Ian Goodfellow, Jascha Sohl-Dickstein | cs.LG, cs.CR, cs.CV, stat.ML | null | International Conference on Learning Representations 2019 | cs.LG | 20180628 | 20181129 | 8 1 0 2
v o N 9 2 ] G L . s c [
2 v 6 4 1 1 1 . 6 0 8 1 : v i X r a
# ADVERSARIAL REPROGRAMMING OF NEURAL NETWORKS
Gamaleldin F. Elsayedâ Google Brain gamaleldin.elsayed@gmail.com
Ian Goodfellow Google Brain goodfellow@google.com
# Jascha Sohl-Dickstein Google Brain jaschasd@google.com
# ABSTRACT
Deep neural networks are susceptible to adversarial attacks. In computer vision, well-crafted perturbations to images can cause neural networks to make mistakes such as confusing a cat with a computer. Previous adversarial attacks have been designed to degrade performance of models or cause machine learning models to produce speciï¬c outputs chosen ahead of time by the attacker. We introduce attacks that instead reprogram the target model to perform a task chosen by the attackerâwithout the attacker needing to specify or compute the desired output for each test-time input. This attack ï¬nds a single adversarial perturbation, that can be added to all test-time inputs to a machine learning model in order to cause the model to perform a task chosen by the adversaryâeven if the model was not trained to do this task. These perturbations can thus be considered a program for the new task. We demonstrate adversarial reprogramming on six ImageNet classiï¬cation models, repurposing these models to perform a counting task, as well as classiï¬cation tasks: classiï¬cation of MNIST and CIFAR-10 examples presented as inputs to the ImageNet model.
# INTRODUCTION
The study of adversarial examples is often motivated in terms of the danger posed by an attacker whose goal is to cause model prediction errors with a small change to the modelâs input. Such an attacker could make a self-driving car react to a phantom stop sign (Evtimov et al., 2017) by means of a sticker (a small L0 perturbation), or cause an insurance companyâs damage model to overestimate the claim value from the resulting accident by subtly doctoring photos of the damage (a small Lâ perturbation). With this context, various methods have been proposed both to construct (Szegedy et al., 2013; Papernot et al., 2015; 2017; 2016; Brown et al., 2017; Liu et al., 2016) and defend against (Goodfellow et al., 2014; Kurakin et al., 2016; Madry et al., 2017; Tramèr et al., 2017; Kolter & Wong, 2017; Kannan et al., 2018) this style of adversarial attack. Thus far, the majority of adversarial attacks have consisted of untargeted attacks that aim to degrade the performance of a model without necessarily requiring it to produce a speciï¬c output, or targeted attacks in which the attacker designs an adversarial perturbation to produce a speciï¬c output for that input. For example, an attack against a classiï¬er might target a speciï¬c desired output class for each input image, or an attack against a reinforcement learning agent might induce that agent to enter a speciï¬c state (Lin et al., 2017).
In practice, there is no constraint that adversarial attacks should adhere to this framework. Thus, it is crucial to proactively anticipate other unexplored adversarial goals in order to make machine learning systems more secure. In this work, we consider a novel and more challenging adversarial goal: reprogramming the model to perform a task chosen by the attacker, without the attacker needing to compute the speciï¬c desired output. Consider a model trained to perform some original task: for inputs x it produces outputs f (x). Consider an adversary who wishes to perform an adversarial task:
âWork done as a member of the Google AI Residency program (g.co/airesidency).
1
for inputs Ëx (not necessarily in the same domain as x) the adversary wishes to compute a function g(Ëx). We show that an adversary can accomplish this by learning adversarial reprogramming functions hf (·; θ) and hg(·; θ) that map between the two tasks. Here, hf converts inputs from the domain of Ëx into the domain of x (i.e., hf (Ëx; θ) is a valid input to the function f ), while hg maps output of f (h(Ëx; θ)) back to outputs of g(Ëx). The parameters θ of the adversarial program are then adjusted to achieve hg (f (hf (Ëx))) = g (Ëx).
In our work, for simplicity, we deï¬ne Ëx to be a small image, g a function that processes small images, x a large image, and f a function that processes large images. Our function hf then just consists of drawing x in the center of the large image and θ in the borders (though we explore other schemes as well), and hg is simply a hard coded mapping between output class labels. However, the idea is more general; hf (hg) could be any consistent transformation that converts between the input (output) formats for the two tasks and causes the model to perform the adversarial task.
We refer to the class of attacks where a model is repurposed to perform a new task as adversarial reprogramming. We refer to θ as an adversarial program. In contrast to most previous adversarial work, the magnitude of this perturbation need not be constrained for adversarial reprogramming to work. Though, we note that it is still possible to construct reprogramming attacks that are imperceptible. Potential consequences of adversarial reprogramming include theft of computational resources from public facing services, repurposing of AI-driven assistants into spies or spam bots, and abusing machine learning services for tasks violating the ethical principles of system providers. Risks stemming from this type of attack are discussed in more detail in Section 5.2.
It may seem unlikely that an additive offset to a neural networkâs input would be sufï¬cient on its own to repurpose the network to a new task. However, this ï¬exibility stemming only from changes to a networkâs inputs is consistent with results on the expressive power of deep neural networks. For instance, in Raghu et al. (2016) it is shown that, depending on network hyperparameters, the number of unique output patterns achievable by moving along a one-dimensional trajectory in input space increases exponentially with network depth.
In this paper, we present the ï¬rst instances of adversarial reprogramming. In Section 2, we discuss related work. In Section 3, we present a training procedure for crafting adversarial programs, which cause a neural network to perform a new task. In Section 4, we experimentally demonstrate adversarial programs that target several convolutional neural networks designed to classify ImageNet data. These adversarial programs alter the network function from ImageNet classiï¬cation to: counting squares in an image, classifying MNIST digits, and classifying CIFAR-10 images. Next, we examine the susceptibility of trained and untrained networks to adversarial reprogramming. We then demonstrate the possibility of reprograming adversarial tasks with adversarial data that has no resemblance to original data, demonstrating that results from transfer learning do not fully explain adversarial reprogramming. Further, we demonstrate the possibility of concealing adversarial programs and data. Finally, we end in Sections 5 and 6 by discussing and summarizing our results.
# 2 BACKGROUND AND RELATED WORK
2.1 ADVERSARIAL EXAMPLES
One deï¬nition of adversarial examples is that they are âinputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistakeâ (Goodfellow et al., 2017). They are often formed by starting with a naturally occuring image and using a gradient-based optimizer to search for a nearby image that causes a mistake (Biggio et al., 2013; Szegedy et al., 2013; Carlini & Wagner, 2017). These attacks can be either untargeted (the adversary succeeds when causing any mistake at all) or targeted (the adversary succeeds when causing the model to predict a speciï¬c incorrect class). Adversarial attacks have been also proposed for other domains like malware detection (Grosse et al., 2017), generative models (Kos et al., 2017), network policies for reinforcement learning tasks (Huang et al., 2017), and network interpretations (Ghorbani et al., 2017). In these domains, the attack remains either untargeted (generally degrading the performance) or targeted (producing a speciï¬c output). We extend this line of work by developing reprogramming methods that aim to produce speciï¬c functionality rather than a speciï¬c hardcoded output.
2
(a) counting ImageNet (b) Adversarial Program Yadv y 1 1 square tench 2 squares goldfish 3 squares white shark 4 squares tiger shark 5 squares hammerhead Hie. 6 squares electric ray 7 squares stingray 8 squares cock 9 squares hen 10 squares ostrich (c) tiger shark, ostrich ImageNet Classifier [iad = 4 squares, 10 squares
Figure 1: Illustration of adversarial reprogramming. (a) Mapping of ImageNet labels to adversar- ial task labels (squares count in an image). (b) Two examples of images from the adversarial task (left) are embedded at the center of an adversarial program (middle), yielding adversarial images (right). The adversarial program shown repurposes an Inception V3 network to count squares in images. (c) Illustration of inference with adversarial images. The network when presented with adversarial images will predict ImageNet labels that map to the adversarial task labels.
Several authors have observed that the same modiï¬cation can be applied to many different inputs in order to form adversarial examples (Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2017). For example, Brown et al. (2017) designed an âadversarial patchâ that can switch the prediction of many models to one speciï¬c class (e.g. toaster) when it is placed physically in their ï¬eld of view. We continue this line of work by ï¬nding a single adversarial program that can be presented with many input images to cause the model to process each image according to the adversarial program.
2.2 PARASITIC COMPUTING AND WEIRD MACHINES
Parasitic computing involves forcing a target system to solve a complex computational task it wasnât originally designed to perform, by taking advantage of peculiarities in network communication protocols (Barabasi et al., 2001; Peresini & Kostic, 2013). Weird machines, on the other hand, are a class of computational exploits where carefully crafted inputs can be used to run arbitrary code on a targeted computer (Bratus et al., 2011). Adversarial reprogramming can be seen as a form of parasitic computing, though without the focus on leveraging the communication protocol itself to perform the computation. Similarly, adversarial reprogramming can be seen as an example of neural networks behaving like weird machines, though adversarial reprogramming functions only within the neural network paradigm â we do not gain access to the host computer.
2.3 TRANSFER LEARNING
Transfer learning (Raina et al., 2007; Mesnil et al., 2011) and adversarial reprogramming share the goal of repurposing networks to perform a new task. Transfer learning methods use the knowledge obtained from one task as a base to learn how to perform another. Neural networks possess properties that can be useful for many tasks (Yosinski et al., 2014). For example, neural networks when trained on images develop features that resemble Gabor ï¬lters in early layers even if they are trained with different datasets or different training objectives such as supervised image classiï¬cation (Krizhevsky et al., 2012), unsupervised density learning (Lee et al., 2009), or unsupervised learning of sparse representations (Le et al., 2011). Empirical work has demonstrated that it is possible to take a convolutional neural network trained to perform one task, and simply train a linear SVM classiï¬er to make the network work for other tasks (Razavian et al., 2014; Donahue et al., 2014). However, transfer learning is very different from the adversarial reprogramming task in that it allows model parameters to be changed for the new task. In typical adversarial settings, an attacker is unable to alter
3
the model, and instead must achieve their goals solely through manipulation of the input. Further, one may wish to adversarially reprogram across tasks with very different datasets. This makes the task of adversarial reprogramming much more challenging than transfer learning.
# 3 METHODS
In this work, we consider an adversary with access to the parameters of a neural network that is performing a speciï¬c task. The objective of the adversary is to reprogram the model to perform a new task by crafting an adversarial program to be included within the network input. Here, the network was originally designed to perform ImageNet classiï¬cation, but the methods discussed here can be directly extended to other settings.
Our adversarial program is formulated as an additive contribution to network input. Note that unlike most adversarial perturbations, the adversarial program is not speciï¬c to a single image. The same adversarial program will be applied to all images. We deï¬ne the adversarial program as:
P=tanh(W ©M) (1) where W ⬠R"*â*3 is the adversarial program parameters to be learned, n is the ImageNet image width, and M is a masking matrix that is 0 for image locations that corresponds to the adversarial data for the new task, otherwise 1. Note that the mask J is not required â we mask out the central region of the adversarial program purely to improve visualization of the action of the adversarial program. Also, note that we use tanh (-) to bound the adversarial perturbation to be in (â1, 1) â the same range as the (rescaled) ImageNet images the target networks are trained to classify. Let, & ⬠R*** 3 be a sample from the dataset to which we wish to apply the adversarial task, where k <n. X ⬠R"*â*3 js the equivalent ImageNet size image with & placed in the proper area, defined by the mask /. The corresponding adversarial image is then:
Xadv = hf (Ëx; W ) = ËX + P (2)
Let P (y|X) be the probability that an ImageNet classiï¬er gives to ImageNet label y â {1, . . . , 1000}, given an input image X. We deï¬ne a hard-coded mapping function hg(yadv) that maps a label from an adversarial task yadv to a set of ImageNet labels. For example, if an adversarial task has 10 different classes (yadv â {1, . . . , 10}), hg (·) may be deï¬ned to assign the ï¬rst 10 classes of ImageNet, any other 10 classes, or multiple ImageNet classes to the adversarial labels. Our adversarial goal is thus to maximize the probability P (hg(yadv)|Xadv). We set up our optimization problem as
W= argmin (- log P(hg(Yadv)|Xadv) + Al WII) ; (3)
where λ is the coefï¬cient for a weight norm penalty, to reduce overï¬tting. We optimize this loss with Adam while exponentially decaying the learning rate. Hyperparameters are given in Appendix A. Note that after the optimization the adversarial program has a minimal computation cost from the adversaryâs side as it only requires computing Xadv (Equation 2), and mapping the resulting ImageNet label to the correct class. In other words, during inference the adversary needs only to store the program and add it to the data, thus leaving the majority of computation to the target network.
One interesting property of adversarial reprogramming is that it must exploit nonlinear behavior of the target model. This is in contrast to traditional adversarial examples, where attack algorithms based on linear approximations of deep neural networks are sufficient to cause high error rate fet al.|[2014). Consider a linear model that receives an input # and a program 6 concatenated into a single vector: x = [a,0]'. Suppose that the weights of the linear model are partitioned into two sets, v = [vg, v9|'. The output of the model is va = uv] & + vj 0. The adversarial program @ adapts the effective biases vg @ but cannot adapt the weights applied to the input Z. The adversarial program 6 can thus bias the model toward consistently outputting one class or the other but cannot change the way the input is processed. For adversarial reprogramming to work, the model must include nonlinear interactions of % and @. A nonlinear deep network satisfies this requirement.
# 4 RESULTS
To demonstrate the feasibility of adversarial reprogramming, we conducted experiments on six architectures trained on ImageNet. In each case, we reprogrammed the network to perform three
4
(a)
Figure 2: Examples of adversarial programs. Adversarial program which cause Inception V3 ImageNet model to function as (a) MNIST classiï¬er. (b) CIFAR-10 classiï¬er
different adversarial tasks: counting squares, MNIST classiï¬cation, and CIFAR-10 classiï¬cation. The weights of all trained models were obtained from TensorFlow-Slim, and top-1 ImageNet precisions are shown in Table Supp. 1. We additionally examined whether adversarial training conferred resistance to adversarial reprogramming, and compared the susceptibility of trained networks to random networks. Further, we investigated the possibility of reprogramming the networks when the adversarial data has no resemblance to the original data. Finally, we demonstrated the possibility of concealing the adversarial program and the adversarial data.
4.1 COUNTING SQUARES
To illustrate the adversarial reprogramming procedure, we start with a simple adversarial task. That is counting the number of squares in an image. We generated images (Ëx) of size 36 à 36 à 3 that include 9 à 9 white squares with black frames. Each square could appear in 16 different position in the image, and the number of squares ranged from 1 to 10. The squares were placed randomly on gridpoints (Figure 1b left). We embedded these images in an adversarial program (Figure 1b middle). The resulting images (Xadv) are of size 299 à 299 à 3 with the 36 à 36 à 3 images of the squares at the center (Figure 1b right). Thus, the adversarial program is simply a frame around the counting task images. We trained one adversarial program per ImageNet model, such that the ï¬rst 10 ImageNet labels represent the number of squares in each image (Figure 1c). Note that the labels we used from ImageNet have no relation to the labels of the new adversarial task. For example, a âWhite Sharkâ has nothing to do with counting 3 squares in an image, and an âOstrichâ does not at all resemble 10 squares. We then evaluated the accuracy in the task by sampling 100,000 images and comparing the network prediction to the number of squares in the image.
Despite the dissimilarity of ImageNet labels and adversarial labels, and that the adversarial program is equivalent simply to a ï¬rst layer bias, the adversarial program masters this counting task for all networks (Table 1). These results demonstrate the vulnerability of neural networks to reprogramming on this simple task using only additive contributions to the input.
4.2 MNIST CLASSIFICATION
In this section, we demonstrate adversarial reprogramming on somewhat more complex task of classifying MNIST digits. We measure test and train accuracy, so it is impossible for the adversarial program to have simply memorized all training examples. Similar to the counting task, we embedded MNIST digits of size 28 à 28 à 3 inside a frame representing the adversarial program, we assigned the ï¬rst 10 ImageNet labels to the MNIST digits, and trained an adversarial program for each ImageNet model. Figure 2a shows example of the adversarial program for Inception V3 being applied.
5
Table 1: Neural networks adversarially reprogrammed to perform a variety of tasks. Table gives accuracy of reprogrammed networks to perform a counting task, MNIST classiï¬cation task, and CIFAR-10 classiï¬cation task, and Shufï¬ed MNIST pixels classiï¬cation task.
Model Pretrained on ImageNet Untrained Counting MNIST CIFAR-10 Shufï¬ed MNIST MNIST train test train test test test Incep. V3 Incep. V4 Incep. Res. V2 Res. V2 152 Res. V2 101 Res. V2 50 Incep. V3 adv. 0.9993 0.9999 0.9994 0.9763 0.9843 0.9966 0.9781 0.9638 0.9773 0.9478 0.9650 0.9506 0.9761 0.9753 0.9646 0.9744 0.9534 0.9664 0.9496 0.9752 0.7311 0.6948 0.6985 0.6410 0.6435 0.6 0.6911 0.6683 0.6719 0.6210 0.6301 0.5858 0.9709 0.9715 0.9683 0.9691 0.9678 0.9717 0.4539 0.1861 0.1135 0.1032 0.1756 0.9325
Our results show that ImageNet networks can be successfully reprogramed to function as an MNIST classiï¬er by presenting an additive adversarial program. The adversarial program additionally generalized well from the training to test set, suggesting that the reprogramming does not function purely by memorizing train examples, and is not brittle to small changes in the input.
# 4.3 CIFAR-10 CLASSIFICATION
Here we implement a more challenging adversarial task. That is, crafting adversarial programs to repurpose ImageNet models to instead classify CIFAR-10 images. An example of the resulting adversarial images are given in Figure 2b. Our results show that our adversarial program was able to increase the accuracy on CIFAR-10 from chance to a moderate accuracy (Table 1). This accuracy is near what is expected from typical fully connected networks (Lin et al., 2015) but with minimal computation cost from the adversary side at inference time. One observation is that although adversarial programs trained to classify CIFAR-10 are different from those that classify MNIST or perform the counting task, the programs show some visual similarities, e.g. ResNet architecture adversarial programs seem to possess some low spatial frequency texture (Figure Supp. 1a).
INVESTIGATION OF THE EFFECT OF THE TRAINED MODEL DETAILS AND ORIGINAL DATA
One important question is what is the degree to which susceptibility to adversarial reprogramming depends on the details of the model being attacked. To address this question, we examined attack success on an Inception V3 model that was trained on ImageNet data using adversarial training (Tramèr et al., 2017). Adversarial training augments data with adversarial examples during training, and is one of the most common methods for guarding against adversarial examples. As in Section 4.2, we adversarially reprogrammed this network to classify MNIST digits. Our results (Table 1) indicate that the model trained with adversarial training is still vulnerable to reprogramming, with only a slight reduction in attack success. This ï¬nding shows that standard approaches to adversarial defense has little efï¬cacy against adversarial reprogramming. This is likely explained by the differences between adversarial reprogramming and standard adversarial attacks. First, that the goal is to repurpose the network rather than cause it to make a speciï¬c mistake, second that the magnitude of adversarial programs can be large, while traditional adversarial attacks are of a small perturbations magnitude, and third adversarial defense methods may be speciï¬c to original data and may not generalize to data from the adversarial task.
To further explore dependence on the details of the model, we performed adversarial reprogramming attacks on models with random weights. We used the same experiment set up and MNIST reprogram- ming task as in Section 4.2 â we simply used the ImageNet models with randomly initialized rather than trained weights. The MNIST classiï¬cation task was easy for networks pretrained on ImageNet (Table 1). However, for random networks, training was very challenging and generally converged to a much lower accuracy (only ResNet V2 50 could train to a similar accuracy as trained ImageNet models; see Table 1). Moreover, the appearance of the adversarial programs was qualitatively distinct
6
iS (a) (c) original adversarial adversarial program 0.96 test accurac + adversarial data (MNIST 0.95 test accuracy 55% of the pixels 2 y & a v £ = Pas i} & ba a perturbations < 6% perturbations < 10% perturbations <6% perturbations < 10%
Figure 3: Adversarial programs may be limited in size or concealed. In all panels, an Inception V3 model pretrained on ImageNet is reprogrammed to classify MNIST digits. Example images (a) with adversarial programs of different sizes. (b) with adversarial programs of different perturbation scales. (c) Here adversarial data + program (right) are hidden inside a normal image from ImageNet (left), yielding an adversarial image (center) that is able to reprogram the network to function as an MNIST classiï¬er. The pixels of the adversarial data are shufï¬ed to conceal its structure.
from the adversarial programs obtained with networks pretrained on ImageNet (see Figure Supp. 1b). This ï¬nding demonstrates that the original task the neural networks perform is important for adversarial reprogramming. This result may seem surprising, as random networks have rich structure adversarial programs might be expected to take advantage of. For example, theoretical results have shown that wide neural networks become identical to Gaussian processes, where training speciï¬c weights in intermediate layers is not necessary to perform tasks (Matthews et al., 2018; Lee et al., 2017). Other work has demonstrated that it is possible to use random networks as generative models for images (Ustyuzhaninov et al., 2016; He et al., 2016), further supporting their potential richness. One explanation may be that randomly initialized networks perform poorly for simple reasons, such as poor scaling of network weights at initialization, whereas the trained weights are better conditioned.
One explanation of adversarial reprogramming that is motivated by transfer learning (Yosinski et al., 2014) is that the network may be relying on some similarities between original and adversarial data. To address this hypothesis, we randomized the pixels on MNIST digits such that any resemblance between the adversarial data (MNIST) and images in the original data (ImageNet) is removed (see Figure Supp. 2). We then attempted to reprogram pretrained ImageNet networks to classify the shufï¬ed MNIST digits. Despite shufï¬ed MNIST not sharing any spatial structure with images, we managed to reprogram the ImageNet network for this task (Table 1) with almost equal accuracy to standard MNIST (in some cases shufï¬ed MNIST even achieved higher accuracy). These results thus suggest that transferring knowledge between the original and adversarial data does not explain the susceptibility to adversarial reprogramming. Even more interestingly, these results suggest the possibility of reprogramming across tasks with unrelated datasets and across domains.
4.5 CONCEALING ADVERSARIAL PROGRAMS
In our previous experiment, there were no constraints on the size (number of program pixels) and scale (magnitude of perturbations) of the program and adversarial data. Here, we demonstrate the possibility of limiting the visibility of the adversarial perturbations by limiting the program size, scale, or even concealing the whole adversarial task. In these experiments, we used Inception V3 model pretrained to classify ImageNet. In our ï¬rst experiment, we adversarially reprogrammed the network to classify MNIST digits while limiting the size of the program (see Figure 3a). Our results show that adversarial reprogramming is still successful, yet with lower accuracy, even if we used a very small adversarial program. In our next experiment, we made the adversarial program nearly imperceptible by limiting the Linf norm of the adversarial perturbation to a small percentage of the pixel values. Our results show that adversarial reprogramming is still successful (see Figure 3b) even with nearly imperceptible programs. Further, we tested the possibility of concealing the whole adversarial task
7
by hiding both the adversarial data and program within a normal image from ImageNet. To do that, we shufï¬ed the pixels of the adversarial data (here MNIST), so that the adversarial data structure is hidden. Then, we limited the scale of both the adversarial program and data to a small fraction of the possible pixel values. We added the resulting image to a random image from ImageNet. Formally, we extended our reprogramming methods as follows:
Px =atanh (shufite:.(X) +(WO shuffle;:(M))) (4)
Xadv = clip (XImageN et + PX , [0, 1]) , (5)
where ËX, M and W are as described in Section 3, PX is the adversarial data combined with the adversarial program, ix is the shufï¬ing sequence (same for M and âX), α is a scalar used to limit the perturbation scale, and XImageN et is an image chosen randomly from ImageNet, which is the same for all MNIST examples. We then optimized the adversarial program for the network to classify MNIST digits (see Equation 3). The resulting adversarial images are very similar to normal images from ImageNet (see Figure 3c), yet the network is successfully reprogrammed to classify MNIST digits, though with lower accuracy (see Figure 3c). This result demonstrate the possibility of hiding the adversarial task. Here, we used a simple shufï¬ing technique and picked an image from ImageNet to hide the adversarial task, but one could go further and use more complex schemes for hiding the adversarial task and optimize the choice of the image from ImageNet, which may make adversarial reprogramming even harder to detect.
5 DISCUSSION
# 5.1 FLEXIBILITY OF TRAINED NEURAL NETWORKS
We found that trained neural networks were more susceptible to adversarial reprogramming than random networks. Further, we found that reprogramming is still successful even when data structure is very different from the structure of the data in the original task. This demonstrates a large ï¬exibility of repurposing trained weights for a new task. Our results suggest that dynamical reuse of neural circuits should be practical in modern artiï¬cial neural networks. This holds the promise of enabling machine learning systems which are easier to repurpose, more ï¬exible, and more efï¬cient due to shared compute. Indeed, recent work in machine learning has focused on building large dynamically connected networks with reusable components (Shazeer et al., 2017).
It is unclear whether the reduced performance when targeting random networks, and when reprogram- ming to perform CIFAR-10 classiï¬cation, was due to limitations in the expressivity of the adversarial perturbation, or due to the optimization task in Equation 3 being more difï¬cult in these situations. Disentangling limitations in expressivity and trainability will be an interesting future direction.
# 5.2 ADVERSARIAL GOALS BEYOND THE IMAGE DOMAIN
We demonstrated adversarial reprogramming on classiï¬cation tasks in the image domain. It is an interesting area for future research whether similar attacks might succeed for audio, video, text, or other domains and tasks. Our ï¬nding that trained networks can be reprogram to classify shufï¬ed MNIST examples, which do not have any resemblance to images, suggest that the reprogramming across domains is likely.
Adversarial reprogramming of recurrent neural networks (RNNs) would be particularly interesting, since RNNs (especially those with attention or memory) can be Turing complete (Neelakantan et al., 2015). An attacker would thus only need to ï¬nd inputs which induced the RNN to perform a number of simple operations, such as increment counter, decrement counter, and change input attention location if counter is zero (Minsky, 1961). If adversarial programs can be found for these simple operations, then they could be composed to reprogram the RNN to perform various tasks.
A variety of nefarious ends may be achievable if machine learning systems can be reprogrammed by a specially crafted input. The most direct of these is the theft of computational resources. For instance, an attacker might develop an adversarial program which causes the computer vision classiï¬er in a cloud hosted photos service to solve image captchas and enable creation of spam accounts. If RNNs can be ï¬exibly reprogrammed as mentioned above, this computational theft might extend to more
8
arbitrary tasks. A major danger beyond the computational theft is that an adversary may repurpose computational resources to perform a task which violates the code of ethics of system providers. This is particularly important as ML service providers are largely interested in protecting the ethical principles and guidelines that governs the use of their services.
# 6 CONCLUSION
In this work, we proposed a new class of adversarial attacks that aim to reprogram neural networks to perform novel adversarial tasks. Our results demonstrate for the ï¬rst time the possibility of such attacks. They are also illustrative of both surprising ï¬exibility and surprising vulnerability in deep neural networks. Future investigation should address the properties and limitations of adversarial programming and possible ways to defend against it.
ACKNOWLEDGMENTS
We are grateful to Jaehoon Lee, Sara Hooker, Simon Kornblith, Supasorn Suwajanakorn for useful comments on the manuscript. We thank Alexey Kurakin for help reviewing the code. We thank Justin Gilmer and Luke Metz for discussion surrounding the original idea.
# REFERENCES
Albert-Laszlo Barabasi, Vincent W Freeh, Hawoong Jeong, and Jay B Brockman. Parasitic computing. Nature, 412(6850):894, 2001.
Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2013, Prague, Czech Republic, September 23-27, 2013, Proceedings, Part III, pp. 387â402, 2013. doi: 10.1007/978-3-642-40994-3_25.
Sergey Bratus, Michael Locasto, Meredith Patterson, Len Sassaman, and Anna Shubina. Exploit programming: From buffer overï¬ows to weird machines and theory of computation. {USENIX; login:}, 2011.
Tom B Brown, Dandelion Mané, Aurko Roy, MartÃn Abadi, and Justin Gilmer. Adversarial patch. arXiv preprint arXiv:1712.09665, 2017.
N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39â57, May 2017. doi: 10.1109/SP.2017.49.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor In Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. International conference on machine learning, pp. 647â655, 2014.
Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song. Robust physical-world attacks on deep learning models. arXiv preprint arXiv:1707.08945, 1, 2017.
Amirata Ghorbani, Abubakar Abid, and James Zou. Interpretation of neural networks is fragile. arXiv preprint arXiv:1710.10547, 2017.
Ian Goodfellow, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel, and Jack Clark. Attacking machine learning with adversarial examples, 2017. URL https://blog.openai.com/ adversarial-example-research/.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick D. McDaniel. Adversarial examples for malware detection. In ESORICS 2017, pp. 62â79, 2017. doi: 10.1007/ 978-3-319-66399-9_4. URL https://doi.org/10.1007/978-3-319-66399-9_4.
9
Kun He, Yan Wang, and John Hopcroft. A powerful generative model using random weights for the deep image representation. In Advances in Neural Information Processing Systems, pp. 631â639, 2016.
Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284, 2017.
Harini Kannan, Alexey Kurakin, and Ian Goodfellow. Adversarial logit pairing. arXiv preprint arXiv:1803.06373, 2018.
J Zico Kolter and Eric Wong. Provable defenses against adversarial examples via the convex outer adversarial polytope. arXiv preprint arXiv:1711.00851, 2017.
Jernej Kos, Ian Fischer, and Dawn Song. Adversarial examples for generative models. arXiv preprint arXiv:1702.06832, 2017.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolu- tional neural networks. In Advances in neural information processing systems, pp. 1097â1105, 2012.
A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial Machine Learning at Scale. ArXiv e-prints, November 2016.
Quoc V Le, Alexandre Karpenko, Jiquan Ngiam, and Andrew Y Ng. Ica with reconstruction cost for efï¬cient overcomplete feature learning. In Advances in neural information processing systems, pp. 1017â1025, 2011.
Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th annual international conference on machine learning, pp. 609â616. ACM, 2009.
Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as gaussian processes. arXiv preprint arXiv:1711.00165, 2017.
Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski. Measuring the intrinsic dimension of objective landscapes. arXiv preprint arXiv:1804.08838, 2018.
Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yu Liu, and Min Sun. Tactics of adversarial attack on deep reinforcement learning agents. arXiv preprint arXiv:1703.06748, 2017.
Zhouhan Lin, Roland Memisevic, and Kishore Konda. How far can we go without convolution: Improving fully-connected networks. arXiv preprint arXiv:1511.02580, 2015.
Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
Alexander G de G Matthews, Mark Rowland, Jiri Hron, Richard E Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. arXiv preprint arXiv:1804.11271, 2018.
Grégoire Mesnil, Yann Dauphin, Xavier Glorot, Salah Rifai, Yoshua Bengio, Ian Goodfellow, Erick Lavoie, Xavier Muller, Guillaume Desjardins, David Warde-Farley, et al. Unsupervised and transfer learning challenge: a deep learning approach. In Proceedings of the 2011 International Conference on Unsupervised and Transfer Learning workshop-Volume 27, pp. 97â111. JMLR. org, 2011.
Marvin L Minsky. Recursive unsolvability of postâs problem of" tag" and other topics in theory of turing machines. Annals of Mathematics, pp. 437â455, 1961.
10
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE adversarial perturbations. Conference on, pp. 86â94. IEEE, 2017.
Arvind Neelakantan, Quoc V Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015.
Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Anan- thram Swami. The limitations of deep learning in adversarial settings. CoRR, abs/1511.07528, 2015.
Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016.
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506â519. ACM, 2017.
Peter Peresini and Dejan Kostic. Is the network capable of computation? In Network Protocols (ICNP), 2013 21st IEEE International Conference on, pp. 1â6. IEEE, 2013.
Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On the expressive power of deep neural networks. arXiv preprint arXiv:1606.05336, 2016.
Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y Ng. Self-taught learning: transfer learning from unlabeled data. In Proceedings of the 24th international conference on Machine learning, pp. 759â766. ACM, 2007.
Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. Cnn features off-the-shelf: an astounding baseline for recognition. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, pp. 512â519. IEEE, 2014.
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
TensorFlow-Slim. Tensorï¬ow-slim image classiï¬cation model library. https://github.com/ tensorflow/models/tree/master/research/slim. Accessed: 2018-05-01.
F. Tramèr, A. Kurakin, N. Papernot, D. Boneh, and P. McDaniel. Ensemble Adversarial Training: Attacks and Defenses. ArXiv e-prints, May 2017.
Ivan Ustyuzhaninov, Wieland Brendel, Leon A Gatys, and Matthias Bethge. Texture synthesis using shallow convolutional networks with random ï¬lters. arXiv preprint arXiv:1606.00021, 2016.
Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in neural information processing systems, pp. 3320â3328, 2014.
11
# Supplemental material
A SUPPLEMENTARY TABLES
Table Supp. 1: Top-1 precision of models on ImageNet data
Model Accuracy Inception V3 Inception V4 Inception Resnet V2 Resnet V2 152 Resnet V2 101 Resnet V2 50 Inception V3 adv. 0.78 0.802 0.804 0.778 0.77 0.756 0.776
Table Supp. 2: Hyper-parameters for adversarial program training for the square counting adversarial task. For all models, we used the Adam optimizer with its default parameters while decaying the learning rate exponentially during training. We distributed training data across a number of GPUs (each GPU receive âbatchâ data samples ). We then performed synchronized updates of the adversarial program parameters. ImageNet Model
λ batch GPUS learn rate decay epochs/decay steps Inception V3 Inception V4 Inception Resnet V2 Resnet V2 152 Resnet V2 101 Resnet V2 50 0.01 0.01 0.01 0.01 0.01 0.01 50 50 50 20 20 20 4 4 4 4 4 4 0.05 0.05 0.05 0.05 0.05 0.05 0.96 0.96 0.96 0.96 0.96 0.96 2 2 2 2 2 2 100000 100000 100000 100000 60000 100000
12
Table Supp. 3: Hyper-parameters for adversarial program training for MNIST classiï¬cation adversar- ial task. For all models, we used the Adam optimizer with its default parameters while decaying the learning rate exponentially during training. We distributed training data across a number of GPUs (each GPU receive âbatchâ data samples ). We then performed synchronized updates of the adversarial program parameters. (The Model Inception V3 adv is pretrained on ImageNet data using adversarial training method.
ImageNet Model λ batch GPUS learn rate decay epochs/decay steps Inception V3 Inception V4 Inception Resnet V2 Resnet V2 152 Resnet V2 101 Resnet V2 50 Inception V3 adv. 0.05 0.05 0.05 0.05 0.05 0.05 0.01 100 100 50 50 50 100 50 4 4 8 8 8 4 6 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.96 0.96 0.96 0.96 0.96 0.96 0.98 2 2 2 2 2 2 4 60000 60000 60000 60000 60000 60000 100000
Table Supp. 4: Hyper-parameters for adversarial program training for CIFAR-10 classiï¬cation adversarial task. For all models, we used ADAM optimizer with its default parameters while decaying the learning rate exponentially during training. We distributed training data on number of GPUS (each GPU receive âbatchâ data samples ). We then performed synchronized updates of the adversarial program parameters. ImageNet Model
λ batch GPUS learn rate decay epochs/decay steps Inception V3 Inception V4 Inception Resnet V2 Resnet V2 152 Resnet V2 101 Resnet V2 50 0.01 0.01 0.01 0.01 0.01 0.01 50 50 50 30 30 30 6 6 6 6 6 6 0.05 0.05 0.05 0.05 0.05 0.05 0.99 0.99 0.99 0.99 0.99 0.99 4 4 4 4 4 4 300000 300000 300000 300000 300000 300000
Table Supp. 5: Hyper-parameters for adversarial program training for MNIST classiï¬cation adversar- ial task. For all models, we used the Adam optimizer with its default parameters while decaying the learning rate exponentially during training. We distributed training data across a number of GPUs (each GPU receive âbatchâ data samples ). We then performed synchronized updates of the adversarial program parameters. Random Model
λ batch GPUS learn rate decay epochs/decay steps Inception V3 Inception V4 Inception Resnet V2 Resnet V2 152 Resnet V2 101 Resnet V2 50 0.01 0.01 0.01 0.01 0.01 0.01 50 50 50 20 20 50 4 4 4 4 4 4 0.05 0.05 0.05 0.05 0.05 0.05 0.96 0.96 0.96 0.96 0.96 0.96 2 2 2 2 2 2 100000 100000 60000 60000 60000 60000
13
ImageNet networks Inception Resnet V2__â Resnet V2 152 Resnet V2 101 Inception V3 Inception V4 MNIST counting CIFAR10 random networks Inception V3 Inception V4 Inception Resnet V2 Resnet V2 152 Resnet V2 101 MNIST
Figure Supp. 1: Adversarial programs exhibit qualitative similarities and differences across both network and task. (a) Top: adversarial programs targeted to repurpose networks pre-trained on ImageNet to count squares in images. Middle: adversarial programs targeted to repurpose networks pre-trained on ImageNet to function as MNIST classiï¬ers. Bottom: adversarial programs to cause the same networks to function as CIFAR-10 classiï¬ers. (b) Adversarial programs targeted to repurpose networks with randomly initialized parameters to function as MNIST classiï¬ers.
adversarial task data shuffled adversarial program 7
Figure Supp. 2: Neural networks are susceptible to adversarial reprogramming even in cases when adversarial data and original task data are unrelated. The pixels in MNIST digits are shufï¬ed. So, that the resulting image has no resemblance to any image. Then, the shufï¬ed image is combined with the adversarial program to create a reprogramming image. This image successfully reprogram Inception V3 model to classify the shufï¬ed digits, despite that the adversarial data (i.e., shufï¬ed MNIST digits) being unrelated to the original data (i.e., ImageNet).
14 | {
"id": "1711.00851"
} |
1806.10729 | Illuminating Generalization in Deep Reinforcement Learning through Procedural Level Generation | Deep reinforcement learning (RL) has shown impressive results in a variety of
domains, learning directly from high-dimensional sensory streams. However, when
neural networks are trained in a fixed environment, such as a single level in a
video game, they will usually overfit and fail to generalize to new levels.
When RL models overfit, even slight modifications to the environment can result
in poor agent performance. This paper explores how procedurally generated
levels during training can increase generality. We show that for some games
procedural level generation enables generalization to new levels within the
same distribution. Additionally, it is possible to achieve better performance
with less data by manipulating the difficulty of the levels in response to the
performance of the agent. The generality of the learned behaviors is also
evaluated on a set of human-designed levels. The results suggest that the
ability to generalize to human-designed levels highly depends on the design of
the level generators. We apply dimensionality reduction and clustering
techniques to visualize the generators' distributions of levels and analyze to
what degree they can produce levels similar to those designed by a human. | http://arxiv.org/pdf/1806.10729 | Niels Justesen, Ruben Rodriguez Torrado, Philip Bontrager, Ahmed Khalifa, Julian Togelius, Sebastian Risi | cs.LG, cs.AI, stat.ML | Accepted to NeurIPS Deep RL Workshop 2018 | null | cs.LG | 20180628 | 20181129 | 8 1 0 2
v o N 9 2 ] G L . s c [
5 v 9 2 7 0 1 . 6 0 8 1 : v i X r a
# Illuminating Generalization in Deep Reinforcement Learning through Procedural Level Generation
# Niels Justesen IT University of Copenhagen Copenhagen, Denmark noju@itu.dk
Ruben Rodriguez Torrado New York University Brooklyn, USA rrt264@nyu.edu
Philip Bontrager New York University Brooklyn, USA philipjb@nyu.edu
Ahmed Khalifa New York University Brooklyn, USA ahmed.khalifa@nyu.edu
Julian Togelius New York University Brooklyn, USA julian@togelius.com
Sebastian Risi IT University of Copenhagen Copenhagen, Denmark sebr@itu.dk
# Abstract
Deep reinforcement learning (RL) has shown impressive results in a variety of domains, learning directly from high-dimensional sensory streams. However, when neural networks are trained in a ï¬xed environment, such as a single level in a video game, they will usually overï¬t and fail to generalize to new levels. When RL models overï¬t, even slight modiï¬cations to the environment can result in poor agent performance. This paper explores how procedurally generated levels during training can increase generality. We show that for some games procedural level generation enables generalization to new levels within the same distribution. Additionally, it is possible to achieve better performance with less data by manipulating the difï¬culty of the levels in response to the performance of the agent. The generality of the learned behaviors is also evaluated on a set of human-designed levels. The results suggest that the ability to generalize to human-designed levels highly depends on the design of the level generators. We apply dimensionality reduction and clustering techniques to visualize the generatorsâ distributions of levels and analyze to what degree they can produce levels similar to those designed by a human.
# Introduction
Deep reinforcement learning (RL) has shown remarkable results in a variety of different domains, especially when learning policies for video games [18]. However, there is increasing evidence suggesting that agents easily overï¬t to their particular training environment, resulting in policies that do not generalize well to related problems or even different instances of the same problem. Even small game modiï¬cations can often lead to dramatically reduced performance, leading to the suspicion that these networks learn reactions to particular situations rather than general strategies [20, 39].
This paper has four contributions. First, we show that deep reinforcement learning overï¬ts to a large degree on 2D arcade games when trained on a ï¬xed set of levels. These results are important because similar setups are particularly popular to use as benchmarks in deep reinforcement learning research (e.g. the Arcade Learning Environment [3]). Our ï¬ndings suggest that policies trained in such settings merely memorize certain action sequences rather than learning general strategies to solve the game. Second, we show that it is possible to overcome such overï¬tting by introducing Procedural Content Generation (PCG) [33], more speciï¬cally procedurally generated levels, in the training loop. However, this can lead to overï¬tting on a higher level, to the distribution of generated levels presented during training. This paper investigates both types of overï¬tting and the effect of several level generators for
NeurIPS Workshop on Deep Reinforcement Learning, 2018, Montréal, Canada.
multiple games. Third, we introduce a particular form of PCG-based reinforcement learning, which we call Progressive PCG, where the difï¬culty of levels/tasks is increased gradually to match the agentâs performance. While similar techniques of increasing difï¬culty do exist, they have not been combined with a PCG-based approach in which agents are evaluated on a completely new level every time a new episode begins. Our approach applies constructive level generation techniques, rather than pure randomization, and this paper studies the effect of several level generation methods. Fourth, we analyze distributions of procedurally generated levels using dimensionality reduction and clustering to understand whether they resemble human-designed levels and how this impacts generalization.
It is important to note that the primary goal of this paper is not to achieve strong results on human levels, but rather to gain a deeper understanding of overï¬tting and generalization in deep RL, which is an important and neglected area in AI research. We believe this paper makes a valuable contribution in this regard, suggesting that a PCG-based approach could be an effective tool to study these questions from a fresh perspective. We also see this study relevant for robotics, where an ongoing challenge is to generalize from simulated environments to real-world scenarios.
# 2 Related Work
Within supervised learning, it is generally accepted that accuracy (and other metrics) are reported on a testing set that is separate from the training set. In contrast, in reinforcement learning research it is common to report results on the very same task a model was trained on. However, several recent learning-focused game AI competitions, such as the Visual Doom [21] AI Competition, The General Video Game AI Learning Track [23, 28] and the OpenAI Retro Contest1 evaluate the submitted controllers on levels that the participants did not have access to. None of them are, however, based on procedurally generated levels. The only game AI competition to prominently feature procedural level generation is the Mario AI Competition which did not have provisions for learning agents [37].
Randomization of objects in simulated environments has shown to improve generality for robotic grasping to such a degree that the robotic arm could generalize to realistic settings as well [35]. Low- ï¬delity texture randomization during training in a simulated environment has allowed for autonomous indoor ï¬ight in the real world [30]. Random level generation has been applied to video games to enable generalization of reinforcement learning agents [2, 15, 16, 22]. Several RL approaches exist that manipulate the reward function instead of the structure of the environment to ease learning and ultimately improve generality, such as Hindsight Experience Replay [1] and Rarity of Events [19].
The idea of training agents on a set of progressively harder tasks is an old one and has been redis- covered several times within the wider machine learning context. Within evolutionary computation, this practice is known as incremental evolution [13, 36]. For example, it has been shown that while evolving neural networks to drive a simulated car around a particular race track works well, the resulting network has learned only to drive that particular track; but by gradually including more difï¬cult levels in the ï¬tness evaluation, a network can be evolved to drive many tracks well, even hard tracks that could not be learned from scratch [36]. Essentially the same idea has later been independently invented as curriculum learning [4]. Similar ideas have been formulated within a coevolutionary framework as well [6].
Several machine learning algorithms also gradually scale the difï¬culty of the problem. Automated curriculum learning includes intelligent sampling of training samples to optimize the learning progress [14]. Intelligent task selection through asymmetric self-play with two agents can be used for unsupervised pre-training [34]. The POWERPLAY algorithm continually searches for new tasks and new problem solvers concurrently [32] and in Teacher-Student Curriculum Learning [24] the teacher tries to select sub-tasks for which the slope of the learning curve of the student is highest. Reverse curriculum generation automatically generates a curriculum of start states, further and further away from the goal, that adapts to the agentâs performance [12].
A protocol for training reinforcement learning algorithms and evaluate generalization and overï¬tting, by having large training and test sets, was proposed in [39]. Their experiments show that training on thousands of levels in a simple video game enables the agent to generalize to unseen levels. Our (contemporaneous) work here differs by implementing an adaptive difï¬culty progression along with near endless content generation for several complex video games.
# 1https://contest.openai.com/
2
The IMPALA system [11] trains a single network to play multiple games simultaneously. Here, the same games were used for training and testing, and it is in principle possible that the network simply learned individual behaviors for all of these games within the shared model.
# 3 General Video Game AI Framework
We are building on the General Video Game AI framework (GVG-AI) which is a ï¬exible framework designed to facilitate the advance of general AI through video game playing [27]. There are currently over 160 games in GVG-AI which are speciï¬ed using the declarative video game description language (VGDL) [31], originally proposed in [10]. The game deï¬nition speciï¬es objects in the game and interaction rules such as rewards and effects of collisions. A level is deï¬ned as an ASCII grid where each character represents an object. This allows for quick development of games and levels making the framework ideal for research purposes [26].
The GVGAI framework has been integrated with the OpenAI Gym environment [28] which provides a uniï¬ed RL interface across several different environments [7] as well as state-of-the-art RL imple- mentations [9]. While GVG-AI originally provides a forward model that allows agents to use search algorithms, the GVG-AI Gym only provides the pixels of each frame, the incremental reward, and whether the game is won or lost.
3.1 Parameterized Level Generator
& oe = EN a oe © 6 © Se ed Difficulty 0 Difficulty 0.25 Difficulty 05 Difficulty 0.75 Difficulty 1 Human level 0 Human level 4 Difficulty 0 Difficulty 0.25 Difficulty 0.5 Difficulty 0.75 Difficulty 1 Human level 0 Difficulty 0 Difficulty 1 Human level 0 Boulderdash Difficulty 1 Human level 0 Difficulty 0 Difficulty 0.5
Figure 1: Procedurally generated levels for Solarfox, Zelda, Frogs, and Boulderdash with various difï¬culties between 0 and 1. For each game, human-designed levels are shown as well.
Constructive level generators were designed for four hard games in GVG-AI: Boulderdash, Frogs, Solarfox and Zelda. Tree-search algorithms do not perform well in these games [5]. Constructive level generators are popular in game development because they are relatively fast to develop and easy to debug [33]. They incorporate game knowledge to make sure the output level is directly playable without additional testing. Our generators are designed after analyzing the core components in the human-designed levels for each game and include a controllable difï¬culty parameter.
Boulderdash Level Generator: This game is a GVG-AI port of âBoulder Dashâ (First Star Software, 1984). Here the player tries to collect at least ten gems and then exit through the door while avoiding falling boulders and attacking enemies. The level generation in Boulderdash works as follows: (1) Generate the layout of the map using Cellular Automata [17]. (2) Add the player to the map at a
3
random location. (3) Add a door at a random location. (4) Add at least ten gems to the map at random locations. (5) Add enemies to the map at random locations in a similar manner to the third step.
Frogs Level Generator: Frogs is a GVG-AI port of âFroggerâ (Konami, 1981). In Frogs, the player tries to move upwards towards the goal without drowning in the water or getting run over by cars. The level generation in Frogs follow these steps: (1) Add the player at the lowest empty row in the level. (2) Add the goal at the highest row in the level. (3) Assign the intermediate rows either as roads, water, or forest. (4) Add cars to rows with a road and wood logs rows with water.
Solarfox Level Generator: Solarfox is a GVG-AI port of âSolar Foxâ (Midway Games, 1981). In Solarfox, the player is continuously moving in one of four directions (North, South, East, and West). The goal is to collect all the gems while avoiding the borders of the level as well as bullets from enemies in the north and the south. The level generation for Solarfox follow these steps: (1) Add the player in the middle of the map. (2) Add some gems either in the upper half, left half, or upper left quarter. (3) Replicate the same pattern of gems on the remaining parts of the map.
Zelda Level Generator: Zelda is a GVG-AI port of the dungeon system in âThe Legend of Zeldaâ (Nintendo, 1986). In Zelda, the goal is to grab a key and exit through a door without getting killed by enemies. The player can use their sword to kill enemies for higher scores. The level generation in Zelda works as follows: (1) Generate the map layout as a maze using Primâs Algorithm [8]. (2) Remove some of the solid walls in the maze at random locations. (3) Add the player to a random empty tile. (4) Add the key and exit door at random locations far from the player. (5) Add enemies in the maze at random locations far away from the player.
The difï¬culty of the levels created by the generator can be controlled with a difï¬culty parameter that is in the interval [0, 1]. Figure 1 shows the effect of the difï¬culty parameter in the four games. Increasing the difï¬culty has three effects: First, the area in the level where the player can move through (active level size) increases, except in Zelda and Solarfox where the level size is ï¬xed. Second, the number of objects that can kill the player and/or the number of objects that the player can collect is increased. Third, the layout of the level gets more complex to navigate. The space of possible levels for each game, using our generators, is around 108 at low difï¬culty to 1024 at high difï¬culties. Difï¬cult levels have more possible conï¬gurations as they typically have more elements.
# 4 Procedural Level Generation for Deep RL
In a supervised learning setting, generality is obtained by training a model on a large dataset, typically with thousands of examples. Similarly, the hypothesis in this paper is that RL algorithms should achieve generality if many variations of the environments are used during training, rather than just one. This paper presents a novel RL framework wherein a new level is generated whenever a new episode begins, which allows us to algorithmically design the new level to match the agentâs current performance. This framework also enables the use of search-based PCG techniques, that e.g. learn from existing level distributions [38], which could in the future reduce the dependency on domain knowledge. However, only constructive PCG is explored in this paper.
When the learning algorithm is presented with new levels continuously during training, it must learn general strategies to improve. Learning a policy this way is more difï¬cult than learning one for just a single level and it may be infeasible if the game rules and/or generated levels have sparse rewards. To ease the learning, this paper introduces Progressive PCG (PPCG), an approach where the difï¬culty of the generated levels is controlled by the learning algorithm itself. In this way, the level generator will initially create easy levels and progressively increase the difï¬culty as the agent learns.
In the PPCG implementation in this paper, levels are initially created with the lowest difï¬culty of 0. If the agent wins an episode, the difï¬culty will be incremented such that future levels during training become harder. The difï¬culty is increased by α for a win and decreased by the same amount for a loss. In our experiments, we use α = 0.01. For distributed learning algorithms, the difï¬culty setting is shared across all processes such that the outcome of all episodes inï¬uences the difï¬culty of future training levels. We compare PPCG to a simpler method, also using procedurally generated levels, but with a constant difï¬culty level. We refer to this approach as PCG X, where X refers to the ï¬xed difï¬culty setting.
4
# 5 Experiments
To evaluate the presented approach, we employ the reinforcement learning algorithm Advantage Actor-Critic (A2C) [25], speciï¬cally the implementation of A2C from the Open AI Baselines together with the GVG-AI Gym framework. The neural network has the same architecture as in Mnih et al. [25] with three convolutional layers and a single fully-connected layer. The output of A2C consists of both a policy and value output in contrast to DQN. A2C is conï¬gured to use 12 parallel workers, a step size of tmax = 5, no frame skipping following [28], and a constant learning rate of 0.007 with the RMS optimizer [29]. The code for our experiments is available online2.
We compare four different training approaches. Lv X: Training level is one of the ï¬ve human-designed levels. Lv 0-3: Several human-designed levels (level 0, 1, 2, and 3) are sampled randomly during training. PCG X: Procedurally generated training levels with a constant difï¬culty X. Progressive PCG (PPCG): Procedurally generated training levels where the difï¬culty is adjusted to ï¬t the performance of the agent.
Each training setting was repeated four times and tested on two sets of 30 pre-generated levels with either difï¬culty 0.5 and 1 as well as the ï¬ve human-designed levels. The training plots on Figure 2 and the test results in Table 1 are averaged across the four trained models where each model was tested 30 times on each test setup (thus a total of 120 test episodes per test set for each training setup). All four training approaches were tested on Zelda. Only PCG 1 and PPCG were tested on Solarfox, Frogs, and Boulderdash due to computational constraints. The trained agents are also compared to an agent taking uniformly random actions and the maximum possible score for each test set is shown as well.
# 6 Results
# 6.1 Training on a few Human-Designed Levels
Policies trained on just one level in Zelda (Lv 0 and Lv 4 in Table 1) reach high scores on the training level but have poor performance on all test levels (human-designed and procedurally generated). It is clear that these are prone to memorization and cannot adapt well to play new levels. The scores on the training levels are close to the maximum scores achievable while the scores on the test levels are often lower than the random policy, a clear indication of overï¬tting in reinforcement learning. Policies trained on four human-designed levels in Zelda also achieve high scores on all four training levels. The testing scores are marginally higher than when trained on a single level, on both the human-designed level 4 and the PCG levels.
(a) PPCG in Zelda (b) PPCG in Solarfox* (c) PPCG in Frogs (d) PPCG in Boulderdash
Figure 2: Smoothed mean scores and level difï¬culties during training across ï¬ve repetitions of Progressive PCG in Zelda, Solarfox, Frogs, and Boulderdash. One standard deviation is shown in opaque. *Only three repetitions of PPCG and one of PCG 1 for Solarfox.
# 6.2 Training on Procedurally Generated Levels
Agents trained on procedurally generated levels with a ï¬xed difï¬culty learned a general behavior within the distribution of procedurally generated levels, with mediocre scores in Zelda, Solarfox, and Boulderdash, while no progress was observed in Frogs. These results match similar observations by Torrado et al. [28], in which DQN and A2C fail to learn anything on just one level in Frogs after 1 million training steps. While PCG 1, here with 40 million steps, also fails to learn Frogs, PPCG
# 2https://github.com/njustesen/a2c_gvgai
5
Training Max. Random 60M steps: Level 0 Level 4 Level 0-3 PCG 0.5 PCG 1 PPCG 100M steps: PCG 1 PPCG Max. Random 40M steps: PCG 1 PPCG Max. Random 40M steps: PCG 1 PPCG Max. Random 60M steps: PCG 1 PPCG PCG 0.5 4.40 0.38 0.28 0.56 1.98 3.45 0.27 3.44 3.05 3.82 30.83 -3.68 20.70 16.08 1.00 0.01 0.01 0.81 31.50 6.29 14.63 11.78 PCG 1 6.87 0.22 0.51 0.07 2.37 4.00 3.56 4.28 4.38 4.51 51.83 -4.55 32.43 21.40 1.00 0.00 0.00 0.57 29.80 3.71 8.32 4.86 6.97 -0.51 6.95 2.21 2.40 2.67 2.49 2.71 Solarfox* 32.00 -5.49 22.00 16.87 Frogs 1.00 0.00 0.00 0.00 Boulderdash 48.00 0.85 5.39 3.44 Lv 1 8.00 0.17 -0.45 0.99 7.17 2.28 1.37 3.35 1.54 3.74 32.00 -4.80 , 21.83 10.26 1.00 0.00 0.00 0.00 52.00 2.58 10.28 0.98 Lv 2 8.00 -0.11 -0.53 0.04 7.20 0.92 1.49 2.43 1.18 2.84 34.00 -5.41 26.00 12.02 1.00 0.00 0.00 0.00 58.00 3.5 5.85 0.68 Lv 3 10.00 -0.07 0.07 -0.35 8.17 2.27 2.88 1.89 2.04 1.90 70.00 2.03 43.96 27.37 1.00 0.00 0.00 0.00 48.00 0.65 5.08 0.41 Lv 4 8.00 0.18 -0.58 5.93 1.91 0.15 -0.62 0.96 -0.29 0.88 62.00 1.13 28.16 20.00 1.00 0.00 0.00 0.00 44.00 2.66 8.27 3.32
Table 1: Test results of A2C under different training regimens: a single human-designed level (Level 0 and Level 4), several human-designed levels (Level 0-3), procedurally generated levels with a ï¬xed difï¬culty (PCG 0.5 and PCG 1), and PPCG that progressively adapts the difï¬culty of the levels to match the agentâs performance. Random refers to results of an agent taking uniformly random actions and Max shows the maximum possible score. Scores are in red if the training level is the same as the test level. The best scores for a game, that is not marked red, are in bold. *Only three repetitions of PPCG and one of PCG 1 were made for Solarfox so far.
achieves a score of 0.57 (57% win rate) in the test set of procedurally generated levels with difï¬culty 1 (comparable to human levels in difï¬culty - see Figure 1). In Zelda, PCG 1 was able to achieve strong scores while PPCG is slightly better. Interestingly, for the two cases where PPCG is able to reach difï¬culty 1 during training (Frogs and Zelda), it outperforms PCG 0.5 on PCG 1. As PPCG never reaches the most difï¬cult levels during training in Boulderdash and Solarfox, this is to be expected. In Boulderdash, the agents trained with PCG 1 reach decent scores (8.34 on average) on levels with difï¬culty 1. PPCG reached high scores during training but failed to win as the difï¬culty reached 0.2 and thus trained only on easy levels.
# 6.3 Generalization on Human-Designed Levels
The results demonstrate that introducing procedurally generated levels allows the trained behaviors to generalize to unseen levels within the training distribution. It is, however, interesting whether they also generalize to the ï¬ve human-designed levels in GVG-AI.
In Zelda, PCG and PPCG are decent in the human-designed levels while best in the procedurally generated levels. In Frogs, PCG and PPCG are unable to win in the human-designed levels indicat- ing a clear discrepancy between the two level distributions. In Boulderdash, PCG 1 achieved on average 5.08â10.28 points (out of 20) in the human-designed levels compared to 8.32â14.63 in the procedurally generated levels. PPCG performs worse in this game since it never reached a difï¬culty level similar to the human-designed levels. Similarly, in Solarfox, PCG 1 achieved on average a
6
higher score than PPCG on the ï¬ve human-designed levels. PCG 1, however, shows remarkable generalization in Solarfox with similar scores in human-designed and procedurally generated levels.
# 6.4 Qualitative Analysis of Agent Replays
In Zelda, PPCG has learned to reliably strike down and avoid enemies but only sometimes collects the key and exits through the door. Whether this is due to the difï¬culties of navigating in tricky mazes or a lack of motivation towards the key and door is currently unclear. In Solarfox, PCG 1 has learned to effectively pick up the diamonds and avoid ï¬reballs, occasionally getting hit while trying to avoid them. This behavior is remarkably human-like. Sometimes the agent wins in the human-designed levels, which is quite impressive. PPCG jiggles a lot around the starting location to collect nearby diamonds, most likely because the easy procedurally generated levels have diamonds near the starting location, and it never reached the hard levels during training. In Frogs, PPCG always moves towards the goal while it sometimes dies when crossing the water with only a few logs being available. We suspect that navigation in this game is learned more easily than in other games as the goal in Frogs is always at the top of the screen. In Boulderdash, PCG 1 learned to ï¬ght and pick up nearby diamonds, also under boulders, while it does not seem to be capable of long-term planning. It often dies ï¬ghting enemies or moving boulders and thus dies rather quickly in most level. Often dying from boulders and enemies can explain why PPCG never reached a difï¬culty higher than 0.2; it simply gets killed early when these entities are introduced in the levels.
# 7 Exploring the Distribution of Generated Levels
We do not expect agents to play well on levels that are dissimilar from their training distribution. To investigate the distribution of the procedurally generated levels, and how the structure of these correlate with human-designed levels, we generated 1000 levels with difï¬culty 1 for each game. The high-dimensional structure of levels was compressed to two dimensions using principal component analysis (PCA), and afterward clustered with the density-based spatial clustering of applications with noise (DBSCAN) approach. The transformed space of levels is visualized in Figure 3. For PCA to work on GVG-AI levels, they have been transformed into a binary 3D array of shape (tile_type, height, width) and then reshaped into a 1D array. The human-designed levels were included in both the transformation and clustering processes.
(a) Zelda (b) Solarfox (c) Frogs (d) Boulderdash
Figure 3: Visualization of the level distributions and how they correlate to human-designed levels (white circles). Levels were reduced to two dimensions using PCA and clustered using DBSCAN (⬠= 0.5 and a minimum of 10 samples per cluster). Outliers are black and centroids are larger.
The generated levels for Solarfox are clustered in three wide groups: (1) levels with only green diamonds, (2) levels with both green and blue diamonds, and (3) levels with only blue diamonds. None of the human-designed levels use both types of diamonds and thus only belong to two of the clusters. For Zelda, only one cluster is discovered without outliers. The generated levels in Frogs have been clustered into 19 groups. This is due to the high structural effect of roads and rivers that goes across the level. It is noticeable how level 4 is the most distant outlier. This is because level 4 has a river on the starting row which is a level variation not captured by the level generator for Frogs. Level 0â3 are near the same small cluster while the generated levels are spread across many isolated clusters. It is not exactly clear why PCG 1 and PPCG fail to play on all the human-designed
7
Frogs levels while the level distribution is remarkably different from the other games. In Boulderdash, similarly to Zelda, only one cluster emerges, but here, all human-designed levels are distant outliers. This effect is most likely a result of the ï¬xed amount of open space in the human-designed levels with padding of only one tile while the generated levels are more varied and cave-like.
# 8 Discussion
The results of our experiments afï¬rm the original concern with the way reinforcement learning research is often evaluated and reported. When it is reported that an algorithm has learned a policy that can play a game, it may simply mean that this policy has found optimal actions for a very small subspace of the possible observations the game offers. This boils down to the network mapping observations in this subspace to actions without learning general concepts of the game. Table 1 shows this with the huge disparity between the performance on the training levels compared to the test levels. If the goal of the agent is to learn how to play a game, then this work shows that it must be evaluated in several variations of the game.
Incorporating procedurally generated levels in the training loop also presents a variety of new and interesting challenges. One such challenge is how to scale the difï¬culty of the levels to smoothen the learning curve in PPCG. In Frogs, it was very effective to apply padding to easy levels, creating smaller levels in the beginning, while it was not sufï¬cient for Boulderdash. Another challenge is how to ensure that the distribution of procedurally generated levels matches another distribution, in this case human-designed levels. We have provided a tool using dimensionality reduction and clustering, which can be used to improve the design of constructive level generators or perhaps guide search-based level generators in future work. While the results vary across the four games, analyzing when the PCG-based approach works and when it fails gave valuable insights into the generalization abilities of these RL algorithms. We believe that search-based PCG is an interesting area for future work that could ultimately lead to RL agents with more general policies. We believe that this study is also relevant for robotics; learning to generalize from simulation to real-world scenarios where pure randomization of the environment is insufï¬cient.
# 9 Conclusion
We explored how policies learned with deep reinforcement learning generalize to levels that were not used during training. The results demonstrate that agents trained on just one or a handful of levels often fail to generalize to new levels. This paper presented a new approach that incorporates a procedural level generator into the reinforcement learning framework, in which a new level is generated for each episode. The presented approach, Progressive PCG (PPCG), shows that dynamically adapting the level difï¬culty during training allows the agent to solve more complex levels than training it on the most difï¬cult levels directly. This technique was able to achieve a win rate of 57% in difï¬cult Frogs levels, compared to 0% for the non-progressive approach. Additionally, in Zelda this approach was superior across procedurally generated levels and human-designed levels. In Solarfox and Boulderdash, the level difï¬culty of PPCG never reached the maximum during training and here training on procedurally generated levels with a ï¬xed difï¬culty setting resulted in the highest performance. The results of this paper also highlight the important challenge of ensuring that the training distribution resembles the test distribution. We have provided a tool that can assist with the second challenge, using dimensionality reduction and clustering to visualize the difference between two distributions of video game levels.
# Acknowledgements
Niels Justesen was ï¬nancially supported by the Elite Research travel grant from The Danish Ministry for Higher Education and Science. Ahmed Khalifa acknowledges the ï¬nancial support from NSF grant (Award number 1717324 - "RI: Small: General Intelligence through Algorithm Invention and Selection.").
8
# References
[1] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, P. Abbeel, and W. Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems, pages 5048â5058, 2017.
[2] C. Beattie, J. Z. Leibo, D. Teplyashin, T. Ward, M. Wainwright, H. Küttler, A. Lefrancq, S. Green, V. Valdés, A. Sadik, et al. Deepmind lab. arXiv preprint arXiv:1612.03801, 2016.
[3] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artiï¬cial Intelligence Research, 47:253â279, jun 2013.
[4] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41â48. ACM, 2009.
[5] P. Bontrager, A. Khalifa, A. Mendes, and J. Togelius. Matching games and algorithms for general video game playing. In Twelfth Artiï¬cial Intelligence and Interactive Digital Entertainment Conference, pages 122â128, 2016.
[6] J. C. Brant and K. O. Stanley. Minimal criterion coevolution: a new approach to open-ended search. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 67â74. ACM, 2017.
[7] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
[8] J. Buck. Mazes for Programmers: Code Your Own Twisty Little Passages. Pragmatic Bookshelf, 2015.
[9] P. Dhariwal, C. Hesse, O. Klimov, A. Nichol, M. Plappert, A. Radford, J. Schulman, S. Sidor, and Y. Wu. Openai baselines. https://github.com/openai/baselines, 2017.
[10] M. Ebner, J. Levine, S. M. Lucas, T. Schaul, T. Thompson, and J. Togelius. Towards a video game description language. 2013.
[11] L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V. Mnih, T. Ward, Y. Doron, V. Firoiu, T. Harley, I. Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
[12] C. Florensa, D. Held, M. Wulfmeier, M. Zhang, and P. Abbeel. Reverse curriculum generation for reinforcement learning. arXiv preprint arXiv:1707.05300, 2017.
[13] F. Gomez and R. Miikkulainen. Incremental evolution of complex general behavior. Adaptive Behavior, 5(3-4):317â342, 1997.
[14] A. Graves, M. G. Bellemare, J. Menick, R. Munos, and K. Kavukcuoglu. Automated curriculum learning for neural networks. arXiv preprint arXiv:1704.03003, 2017.
[15] A. Graves, G. Wayne, M. Reynolds, T. Harley, I. Danihelka, A. Grabska-Barwi´nska, S. G. Colmenarejo, E. Grefenstette, T. Ramalho, J. Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471, 2016.
[16] E. Groshev, M. Goldstein, A. Tamar, S. Srivastava, and P. Abbeel. Learning generalized reactive policies using deep neural networks. arXiv preprint arXiv:1708.07280, 2017.
[17] L. Johnson, G. N. Yannakakis, and J. Togelius. Cellular automata for real-time generation of inï¬nite cave levels. In Proceedings of the 2010 Workshop on Procedural Content Generation in Games, page 10. ACM, 2010.
[18] N. Justesen, P. Bontrager, J. Togelius, and S. Risi. Deep learning for video game playing. arXiv preprint arXiv:1708.07902, 2017.
[19] N. Justesen and S. Risi. Automated curriculum learning by rewarding temporally rare events. In IEEE Conference on Computational Intelligence and Games. IEEE, 2018.
9
[20] K. Kansky, T. Silver, D. A. Mély, M. Eldawy, M. Lázaro-Gredilla, X. Lou, N. Dorfman, S. Sidor, S. Phoenix, and D. George. Schema networks: Zero-shot transfer with a generative causal model of intuitive physics. arXiv preprint arXiv:1706.04317, 2017.
[21] M. Kempka, M. Wydmuch, G. Runc, J. Toczek, and W. Ja´skowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. In IEEE Conference on Computational Intelligence and Games, pages 341â348, Santorini, Greece, Sep 2016. IEEE. The best paper award.
[22] O. Klimov. Bipedalwalkerhardcore-v2. http://gym.openai.com/, 2016.
[23] J. Liu, D. Perez-Lebana, and S. M. Lucas. The single-player GVGAI learning framework technical manual. In IEEE Conference on Computational Intelligence and Games. IEEE, 2018.
[24] T. Matiisen, A. Oliver, T. Cohen, and J. Schulman. Teacher-student curriculum learning. arXiv preprint arXiv:1707.00183, 2017.
[25] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. In International Conference on Asynchronous methods for deep reinforcement learning. Machine Learning, pages 1928â1937, 2016.
[26] D. Perez-Liebana, J. Liu, A. Khalifa, R. D. Gaina, J. Togelius, and S. M. Lucas. General video game AI: a multi-track framework for evaluating agents, games and content generation algorithms. arXiv preprint arXiv:1802.10363, 2018.
[27] D. Perez-Liebana, S. Samothrakis, J. Togelius, S. M. Lucas, and T. Schaul. General Video Game AI: Competition, Challenges and Opportunities. In Thirtieth AAAI Conference on Artiï¬cial Intelligence, pages 4335â4337, 2016.
[28] R. Rodriguez Torrado, P. Bontrager, J. Togelius, J. Liu, and D. Perez-Liebana. Deep reinforce- ment learning for general video game AI. In Computational Intelligence and Games (CIG), 2018 IEEE Conference on. IEEE, 2018.
[29] S. Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
[30] F. Sadeghi and S. Levine. Cad2rl: Real single-image ï¬ight without a single real image. arXiv preprint arXiv:1611.04201, 2016.
[31] T. Schaul. A video game description language for model-based or interactive learning. In Computational Intelligence in Games (CIG), 2013 IEEE Conference on, pages 1â8. IEEE, 2013.
[32] J. Schmidhuber. Powerplay: Training an increasingly general problem solver by continually searching for the simplest still unsolvable problem. Frontiers in psychology, 4:313, 2013.
[33] N. Shaker, J. Togelius, and M. J. Nelson. Procedural content generation in games. Springer, 2016.
[34] S. Sukhbaatar, Z. Lin, I. Kostrikov, G. Synnaeve, A. Szlam, and R. Fergus. Intrinsic motivation and automatic curricula via asymmetric self-play. arXiv preprint arXiv:1703.05407, 2017.
[35] J. Tobin, W. Zaremba, and P. Abbeel. Domain randomization and generative models for robotic grasping. arXiv preprint arXiv:1710.06425, 2017.
[36] J. Togelius and S. M. Lucas. Evolving robust and specialized car racing skills. In Evolutionary Computation, 2006. CEC 2006. IEEE Congress on, pages 1187â1194. IEEE, 2006.
[37] J. Togelius, N. Shaker, S. Karakovskiy, and G. N. Yannakakis. The Mario AI championship 2009-2012. AI Magazine, 34(3):89â92, 2013.
[38] V. Volz, J. Schrum, J. Liu, S. M. Lucas, A. Smith, and S. Risi. Evolving mario levels in the latent space of a deep convolutional generative adversarial network. arXiv preprint arXiv:1805.00728, 2018.
[39] C. Zhang, O. Vinyals, R. Munos, and S. Bengio. A study on overï¬tting in deep reinforcement learning. arXiv preprint arXiv:1804.06893, 2018.
10 | {
"id": "1802.10363"
} |
1806.10574 | This Looks Like That: Deep Learning for Interpretable Image Recognition | When we are faced with challenging image classification tasks, we often
explain our reasoning by dissecting the image, and pointing out prototypical
aspects of one class or another. The mounting evidence for each of the classes
helps us make our final decision. In this work, we introduce a deep network
architecture -- prototypical part network (ProtoPNet), that reasons in a
similar way: the network dissects the image by finding prototypical parts, and
combines evidence from the prototypes to make a final classification. The model
thus reasons in a way that is qualitatively similar to the way ornithologists,
physicians, and others would explain to people on how to solve challenging
image classification tasks. The network uses only image-level labels for
training without any annotations for parts of images. We demonstrate our method
on the CUB-200-2011 dataset and the Stanford Cars dataset. Our experiments show
that ProtoPNet can achieve comparable accuracy with its analogous
non-interpretable counterpart, and when several ProtoPNets are combined into a
larger network, it can achieve an accuracy that is on par with some of the
best-performing deep models. Moreover, ProtoPNet provides a level of
interpretability that is absent in other interpretable deep models. | http://arxiv.org/pdf/1806.10574 | Chaofan Chen, Oscar Li, Chaofan Tao, Alina Jade Barnett, Jonathan Su, Cynthia Rudin | cs.LG, cs.AI, cs.CV, stat.ML | Chaofan Chen and Oscar Li contributed equally to this work. This work
has been accepted for spotlight presentation (top 3% of papers) at NeurIPS
2019 | Advances in Neural Information Processing Systems 32 (NeurIPS
2019) | cs.LG | 20180627 | 20191228 | 9 1 0 2
c e D 8 2 ] G L . s c [
5 v 4 7 5 0 1 . 6 0 8 1 : v i X r a
# This Looks Like That: Deep Learning for Interpretable Image Recognition
# Chaofan Chenâ Duke University cfchen@cs.duke.edu
Oscar Liâ Duke University oscarli@alumni.duke.edu
# Chaofan Tao Duke University chaofan.tao@duke.edu
# Alina Jade Barnett Duke University abarnett@cs.duke.edu
Jonathan Su MIT Lincoln Laboratoryâ su@ll.mit.edu
# Cynthia Rudin Duke University cynthia@cs.duke.edu
# Abstract
When we are faced with challenging image classiï¬cation tasks, we often explain our reasoning by dissecting the image, and pointing out prototypical aspects of one class or another. The mounting evidence for each of the classes helps us make our ï¬nal decision. In this work, we introduce a deep network architecture â prototypical part network (ProtoPNet), that reasons in a similar way: the network dissects the image by ï¬nding prototypical parts, and combines evidence from the prototypes to make a ï¬nal classiï¬cation. The model thus reasons in a way that is qualitatively similar to the way ornithologists, physicians, and others would explain to people on how to solve challenging image classiï¬cation tasks. The network uses only image-level labels for training without any annotations for parts of images. We demonstrate our method on the CUB-200-2011 dataset and the Stanford Cars dataset. Our experiments show that ProtoPNet can achieve comparable accuracy with its analogous non-interpretable counterpart, and when several ProtoPNets are combined into a larger network, it can achieve an accuracy that is on par with some of the best-performing deep models. Moreover, ProtoPNet provides a level of interpretability that is absent in other interpretable deep models.
# Introduction
How would you describe why the image in Figure 1 looks like a clay colored sparrow? Perhaps the birdâs head and wing bars look like those of a prototypical clay colored sparrow. When we describe how we classify images, we might focus on parts of the image and compare them with prototypical parts of images from a given class. This method of reasoning is commonly used in difï¬cult identiï¬cation tasks: e.g., radiologists compare suspected tumors in X-ray scans with prototypical tumor images for diagnosis of cancer [13]. The question is whether we can ask a machine learning model to imitate this way of thinking, and to explain its reasoning process in a human-understandable way.
The goal of this work is to deï¬ne a form of interpretability in image processing (this looks like that) that agrees with the way humans describe their own thinking in classiï¬cation tasks. In this work,
âContributed equally â DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited. This material is based upon work supported by the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001. Any opinions, ï¬ndings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reï¬ect the views of the Under Secretary of Defense for Research and Engineering.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
Looks like Lefimost: a test image of a clay-colored sparrow Second column: same test image, each with a bounding box generated by our model ~ the content within the bounding box is considered by our model to look similar Looks lke to the prototypical part (same row, third column) learned by our algorithm Third column: prototypical parts learned by our > algorithm Iooks tke Fourth column: source images of the prototypical [. s ird column, Rightmost column: activation maps indicating how : similar each prototypical part resembles
Figure 1: Image of a clay colored sparrow and how parts of it look like some learned prototypical parts of a clay colored sparrow used to classify the birdâs species.
we introduce a network architecture â prototypical part network (ProtoPNet), that accommodates this deï¬nition of interpretability, where comparison of image parts to learned prototypes is integral to the way our network reasons about new examples. Given a new bird image as in Figure 1, our model is able to identify several parts of the image where it thinks that this part of the image looks like that prototypical part of some class, and makes its prediction based on a weighted combination of the similarity scores between parts of the image and the learned prototypes. In this way, our model is interpretable, in the sense that it has a transparent reasoning process when making predictions. Our experiments show that our ProtoPNet can achieve comparable accuracy with its analogous non-interpretable counterpart, and when several ProtoPNets are combined into a larger network, our model can achieve an accuracy that is on par with some of the best-performing deep models. Moreover, our ProtoPNet provides a level of interpretability that is absent in other interpretable deep models.
Our work relates to (but contrasts with) those that perform posthoc interpretability analysis for a trained convolutional neural network (CNN). In posthoc analysis, one interprets a trained CNN by ï¬tting explanations to how it performs classiï¬cation. Examples of posthoc analysis techniques include activation maximization [5, 12, 22, 44, 30, 38, 50], deconvolution [51], and saliency visualization [38, 42, 41, 36]. All of these posthoc visualization methods do not explain the reasoning process of how a network actually makes its decisions. In contrast, our network has a built-in case-based reasoning process, and the explanations generated by our network are actually used during classiï¬cation and are not created posthoc.
Our work relates closely to works that build attention-based interpretability into CNNs. These models aim to expose the parts of an input the network focuses on when making decisions. Examples of attention models include class activation maps [56] and various part-based models (e.g., [55, 53, 15, 57, 43, 10, 9, 34, 37, 49, 7]; see Table 1). However, attention-based models can only tell us which parts of the input they are looking at â they do not point us to prototypical cases to which the parts they focus on are similar. On the other hand, our ProtoPNet is not only able to expose the parts of the input it is looking at, but also point us to prototypical cases similar to those parts. Section 2.5 provides a comparison between attention-based models and our ProtoPNet.
Recently there have also been attempts to quantify the interpretability of visual representations in a CNN, by measuring the overlap between highly activated image regions and labeled visual concepts [1, 54]. However, to quantitatively measure the interpretability of a convolutional unit in a network requires ï¬ne-grained labeling for a signiï¬cantly large dataset speciï¬c to the purpose of the network. The existing Broden dataset for scene/object classiï¬cation networks [1] is not well-suited to measure the unit interpretability of a network trained for ï¬ne-grained classiï¬cation (which is our main application), because the concepts detected by that network may not be present in the Broden dataset. Hence, in our work, we do not focus on quantifying unit interpretability of our network, but instead look at the reasoning process of our network which is qualitatively similar to that of humans.
Our work uses generalized convolution [8, 29] by including a prototype layer that computes squared L2 distance instead of conventional inner product. In addition, we propose to constrain each convolutional ï¬lter to be identical to some latent training patch. This added constraint allows us to interpret the convolutional ï¬lters as visualizable prototypical image parts and also necessitates a novel training procedure.
2
Our work relates closely to other case-based classiï¬cation techniques using k-nearest neighbors [47, 35, 32] or prototypes [33, 2, 48], and very closely, to the Bayesian Case Model [18]. It relates to traditional âbag-of-visual-wordsâ models used in image recognition [21, 6, 17, 40, 31]. These models (like our ProtoPNet) also learn a set of prototypical parts for comparison with an unseen image. However, the feature extraction in these models is performed by Scale Invariant Feature Transform (SIFT) [27], and the learning of prototypical patches (âvisual wordsâ) is done separately from the feature extraction (and the learning of the ï¬nal classiï¬er). In contrast, our ProtoPNet uses a specialized neural network architecture for feature extraction and prototype learning, and can be trained in an end-to-end fashion. Our work also relates to works (e.g., [3, 24]) that identify a set of prototypes for pose alignment. However, their prototypes are templates for warping images and similarity with these prototypes does not provide an explanation for why an image is classiï¬ed in a certain way. Our work relates most closely to Li et al. [23], who proposed a network architecture that builds case-based reasoning into a neural network. However, their model requires a decoder (for visualizing prototypes), which fails to produce realistic prototype images when trained on datasets of natural images. In contrast, our model does not require a decoder for prototype visualization. Every prototype is the latent representation of some training image patch, which naturally and faithfully becomes the prototypeâs visualization. The removal of the decoder also facilitates the training of our network, leading to better explanations and better accuracy. Unlike the work of Li et al., whose prototypes represent entire images, our modelâs prototypes can have much smaller spatial dimensions and represent prototypical parts of images. This allows for more ï¬ne-grained comparisons because different parts of an image can now be compared to different prototypes. Ming et al. [28] recently took the concepts in [23] and the preprint of an earlier version of this work, which both involve integrating prototype learning into CNNs for image recognition, and used these concepts to develop prototype learning in recurrent neural networks for modeling sequential data.
# 2 Case study 1: bird species identiï¬cation
In this case study, we introduce the architecture and the training procedure of our ProtoPNet in the context of bird species identiï¬cation, and provide a detailed walk-through of how our network classiï¬es a new bird image and explains its prediction. We trained and evaluated our network on the CUB-200-2011 dataset [45] of 200 bird species. We performed ofï¬ine data augmentation, and trained on images cropped using the bounding boxes provided with the dataset.
# 2.1 ProtoPNet architecture
Figure 2 gives an overview of the architecture of our ProtoPNet. Our network consists of a regular convolutional neural network f , whose parameters are collectively denoted by wconv, followed by a prototype layer gp and a fully connected layer h with weight matrix wh and no bias. For the regular convolutional network f , our model use the convolutional layers from models such as VGG-16, VGG-19 [39], ResNet-34, ResNet-152 [11], DenseNet-121, or DenseNet-161 [14] (initialized with ï¬lters pretrained on ImageNet [4]), followed by two additional 1 à 1 convolutional layers in our experiments. We use ReLU as the activation function for all convolutional layers except the last for which we use the sigmoid activation function.
Given an input image x (such as the clay colored sparrow in Figure 2), the convolutional layers of our model extract useful features f (x) to use for prediction. Let H à W à D be the shape of the convolutional output f (x). For the bird dataset with input images resized to 224 à 224 à 3, the spatial dimension of the convolutional output is H = W = 7, and the number of output channels D in the additional convolutional layers is chosen from three possible values: 128, 256, 512, using cross validation. The network learns m prototypes P = {pj}m j=1, whose shape is H1 à W1 à D with H1 ⤠H and W1 ⤠W . In our experiments, we used H1 = W1 = 1. Since the depth of each prototype is the same as that of the convolutional output but the height and the width of each prototype is smaller than those of the whole convolutional output, each prototype will be used to represent some prototypical activation pattern in a patch of the convolutional output, which in turn will correspond to some prototypical image patch in the original pixel space. Hence, each prototype pj can be understood as the latent representation of some prototypical part of some bird image in this case study. As a schematic illustration, the ï¬rst prototype p1 in Figure 2 corresponds to the head of a clay colored sparrow, and the second prototype p2 the head of a Brewerâs sparrow. Given a convolutional output z = f (x), the j-th prototype unit gpj in the prototype layer gp computes the
3
Black footed albatross Indigo bunting Cardinal 5} Clay colored sparrow Common yellowthroat Similarity score L A J T YT Y TY Convolutional layers f Prototype layer gp Fully connected layer h Output logits
Figure 2: ProtoPNet architecture.
Why is this bird classfied as a red-bellied woodpecker? Evidence for this bird being a red-bellied woodpecker: Evidence for this bird being a red-cockaded woodpecker: Original image Prototype Training image Activation map â Similarity Class Points Original image Prototype Training image Activation map Similarity Class Points {box showing part that where prototype score connection contributed (box showing part that score connection contributed sli comes from 6.499 * 1.180 = 7.669 2.452 * 1.046 = 2.565 4.392 x 1.127 = 4.950 2.125 x 1.091 = 2.318 3.890 « 1.108 = 4.310 1.945 x 1.069 = 2.079 Total points to red-bellied woodpecker: 32.736 Total points to red-cockaded woodpecker: 16.886
Figure 3: The reasoning process of our network in deciding the species of a bird (top).
looks like (a) Object attention (b) Part attention (c) Part attention + comparison with learned (class activation map) (attention-based models) prototypical parts (our model)
Figure 4: Visual comparison of different types of model interpretability: (a) object-level attention map (e.g., class activation map [56]); (b) part attention (provided by attention-based interpretable models); and (c) part attention with similar prototypical parts (provided by our model).
Prototype Nearest training patches Nearest test patches (in bounding box) (in bounding box) {in bounding box) Cardinal (a) nearest prototypes of two test images left: original test image right: top: three nearest prototypes of the image, with prototypical parts shown in box below: test image with patch closest to each prototype shown in box (b) nearest image patches to prototypes left: prototype, with prototypical parts in box middle: nearest training images to prototype, with patch closest to prototype in box right: nearest test images to prototype, with patch closest to prototype in box
Figure 5: Nearest prototypes to images and nearest images to prototypes. The prototypes are learned from the training set.
4
squared L? distances between the j-th prototype p; and all patches of z that have the same shape as p;, and inverts the distances into similarity scores. The result is an activation map of similarity scores whose value indicates how strong a prototypical part is present in the image. This activation map preserves the spatial relation of the convolutional output, and can be upsampled to the size of the input image to produce a heat map that identifies which part of the input image is most similar to the learned prototype. The activation map of similarity scores produced by each prototype unit gp, is then reduced using global max pooling to a single similarity score, which can be understood as how strongly a prototypical part is present in some patch of the input image. In Figure[2} the similarity score between the first prototype pi, a clay colored sparrow head prototype, and the most activated (upper-right) patch of the input image of a clay colored sparrow is 3.954, and the similarity score between the second prototype p2, a Brewerâs sparrow head prototype, and the most activated patch of the input image is 1.447. This shows that our model finds that the head of a clay colored sparrow has a stronger presence than that of a Brewerâs sparrow in the input image. Mathematically, the prototype unit gp, computes gp, (2) = Maxzepatcnes(a) log ((|2 â py|l3 + 1)/((I@ â pyll2 + â¬)). The function Jp; 18 monotonically decreasing with respect to ||Z â p;||2 (if Z is the closest latent patch to p;). Hence, if the output of the j-th prototype unit gp, is large, then there is a patch in the convolutional output that is (in 2-norm) very close to the j-th prototype in the latent space, and this in turn means that there is a patch in the input image that has a similar concept to what the j-th prototype represents.
In our ProtoPNet, we allocate a pre-determined number of prototypes mk for each class k â {1, ..., K} (10 per class in our experiments), so that every class will be represented by some pro- totypes in the ï¬nal model. Section S9.2 of the supplement discusses the choice of mk and other hyperparameters in greater detail. Let Pk â P be the subset of prototypes that are allocated to class k: these prototypes should capture the most relevant parts for identifying images of class k.
Finally, the m similarity scores produced by the prototype layer gp are multiplied by the weight matrix wh in the fully connected layer h to produce the output logits, which are normalized using softmax to yield the predicted probabilities for a given image belonging to various classes.
ProtoPNetâs inference computation mechanism can be viewed as a special case of a more general type of probabilistic inference under some reasonable assumptions. This interpretation is presented in detail in Section S2 of the supplementary material.
# 2.2 Training algorithm
The training of our ProtoPNet is divided into: (1) stochastic gradient descent (SGD) of layers before the last layer; (2) projection of prototypes; (3) convex optimization of last layer. It is possible to cycle through these three stages more than once. The entire training algorithm is summarized in an algorithm chart, which can be found in Section S9.3 of the supplement.
Stochastic gradient descent (SGD) of layers before last layer: In the ï¬rst training stage, we aim to learn a meaningful latent space, where the most important patches for classifying images are clustered (in L2-distance) around semantically similar prototypes of the imagesâ true classes, and the clusters that are centered at prototypes from different classes are well-separated. To achieve this goal, we jointly optimize the convolutional layersâ parameters wconv and the prototypes P = {pj}m j=1 in the prototype layer gp using SGD, while keeping the last layer weight matrix wh ï¬xed. Let D = [X, Y] = {(xi, yi)}n i=1 be the set of training images. The optimization problem we aim to solve here is:
# n
# 1 n
min P,wconv CrsEnt(h ⦠gp ⦠f (xi), yi) + λ1Clst + λ2Sep, where Clst and Sep are deï¬ned by
# i=l n
n n Clst = â min min zâp,||2;Sep = ââ min min zâp,|l2. 1 f:pjâ¬Py, zepatches( f(x;)) lz â pjlla:Sep 71 4 jp, 2Py, 2cpaches(j(x:)) lz â pyle
The cross entropy loss (CrsEnt) penalizes misclassiï¬cation on the training data. The minimization of the cluster cost (Clst) encourages each training image to have some latent patch that is close to at least one prototype of its own class, while the minimization of the separation cost (Sep) encourages every latent patch of a training image to stay away from the prototypes not of its own class. These terms shape the latent space into a semantically meaningful clustering structure, which facilitates the L2-distance-based classiï¬cation of our network.
5
In this training stage, we also fix the last layer h, whose weight matrix is w),. Let w(t) be the (k,j)-th entry in wy, that corresponds to the weight connection between the output of the j-th prototype unit gp, and the logit of class k. Given a class k, we set w(t) = 1 for all j with p; ¢ P, and wie) = â0.5 for all j with p; ¢ P;, (when we are in this stage for the first time). Intuitively, the positive connection between a class k prototype and the class k logit means that similarity to a class k prototype should increase the predicted probability that the image belongs to class k, and the negative connection between a non-class k prototype and the class k logit means that similarity to a non-class k prototype should decrease class kâs predicted probability. By fixing the last layer h in this way, we can force the network to learn a meaningful latent space because if a latent patch of a class k image is too close to a non-class k prototype, it will decrease the predicted probability that the image belongs to class k and increase the cross entropy loss in the training objective. Note that both the separation cost and the negative connection between a non-class k prototype and the class k logit encourage prototypes of class k to represent semantic concepts that are characteristic of class k but not of other classes: if a class k prototype represents a semantic concept that is also present in a non-class k image, this non-class k image will highly activate that class k prototype, and this will be penalized by increased (i.e., less negative) separation cost and increased cross entropy (as a result of the negative connection). The separation cost is new to this paper, and has not been explored by previous works of prototype learning (e.g., [3] [23]).
Projection of prototypes: To be able to visualize the prototypes as training image patches, we project (âpushâ) each prototype pj onto the nearest latent training patch from the same class as that of pj. In this way, we can conceptually equate each prototype with a training image patch. (Section 2.3 discusses how we visualize the projected prototypes.) Mathematically, for prototype pj of class k, i.e., pj â Pk, we perform the following update:
pj + arg min lz â pjll2,where Z; = {%: z ⬠patches(f(x;)) Vi s.t. y; = k}. j
The following theorem provides some theoretical understanding of how prototype projection affects classiï¬cation accuracy. We use another notation for prototypes pk l , where k represents the class identity of the prototype and l is the index of that prototype among all prototypes of that class. Theorem 2.1. Let h ⦠gp ⦠f be a ProtoPNet. For each k, l, we use bk prototype for class k before the projection of pk use ak the ProtoPNet before the projection, zk f (x) to the prototype pk
l before the projection (i.e., bk is also the nearest latent patch to prototype pk
Suppose that: (A1) z* is also the nearest latent patch to prototype p' after the projection (af), ie, a = arg MiNgEparches( f(x) Zâ al'||2; (A2) there exists some 6 with 0 < 6 < 1 such that: (A2a) for all incorrect classesâ prototypes k # c andl ⬠{1,...,mp}, we have |jaf â bi |ly < A\|zk â bf ||2 â Ve, where we define 0 = min (v 1+6-1,1- ae) (⬠comes from the prototype activation function gy, defined in Section 2.1); (A2b) for the correct class c and for alll ⬠{1,...,me}, we have |\|af â bf lz < (V1 +6 â 1)||zf â bf ||2 and ||zf â bf||2 < V1 â 6; (A3) the number of prototypes is the same for each class, which we denote by mâ. (A4) for each class k, the weight connection in the fully connected last layer h between a class k prototype and the class k logit is 1, and that between a non-class k prototype and the class k logit is 0 (i.e., wd) = 1 for all j with pj ⬠Py and wd) = 0 for all j with p; ¢ Py). Then after projection, the output logit for the correct class c can decrease at most by Amax = mâ log((1 + 5)(2 â 6)), and the output logit for every incorrect class k # c can increase at most by Ainax: [f the output logits between the top-2 classes are at least 2Ayyax apart, then the projection of prototypes to their nearest latent training patches does not change the prediction of x.
Intuitively speaking, the theorem states that, if prototype projection does not move the prototypes by much (assured by the optimization of the cluster cost Clst), the prediction does not change for examples that the model predicted correctly with some conï¬dence before the projection. The proof is in Section S1 of the supplement.
Note that prototype projection has the same time complexity as feedforward computation of a regular convolutional layer followed by global average pooling, a conï¬guration common in standard CNNs
6
(e.g., ResNet, DenseNet), because the former takes the minimum distance over all prototype-sized patches, and the latter takes the average of dot-products over all ï¬lter-sized patches. Hence, prototype projection does not introduce extra time complexity in training our network.
Convex optimization of last layer: In this training stage, we perform a convex optimization on the weight matrix wy, of last layer h. The goal of this stage is to adjust the last layer connection wi a) , So that for k and j with p; ¢ P,, our final model has the sparsity property wh x0 (initially fixed at â0.5). This sparsity is desirable because it means that our model relies less on a negative reasoning process of the form âthis bird is of class kâ because it is not of class k (it contains a patch that is not prototypical of class k).â The optimization problem we solve here is: n miny, + 07-1 CrsEnt(ho gpo f (xi), yi) +A vy Viper jw"? |. This optimization is convex because we fix all the parameters from the convolutional and prototype layers. This stage further improves accuracy without changing the learned latent space or prototypes.
# 2.3 Prototype visualization
Given a prototype pj and the training image x whose latent patch is used as pj during prototype projection, how do we decide which patch of x (in the pixel space) corresponds to pj? In our work, we use the image patch of x that is highly activated by pj as the visualization of pj. The reason is that the patch of x that corresponds to pj should be the one that pj activates most strongly on, and we can ï¬nd the patch of x on which pj has the strongest activation by forwarding x through a trained ProtoPNet and upsampling the activation map produced by the prototype unit gpj (before max- pooling) to the size of the image x â the most activated patch of x is indicated by the high activation region in the (upsampled) activation map. We then visualize pj with the smallest rectangular patch of x that encloses pixels whose corresponding activation value in the upsampled activation map from gpj is at least as large as the 95th-percentile of all activation values in that same map. Section S7 of the supplement describes prototype visualization in greater detail.
# 2.4 Reasoning process of our network
Figure 3 shows the reasoning process of our ProtoPNet in reaching a classiï¬cation decision on a test image of a red-bellied woodpecker at the top of the ï¬gure. Given this test image x, our model compares its latent features f (x) against the learned prototypes. In particular, for each class k, our network tries to ï¬nd evidence for x to be of class k by comparing its latent patch representations with every learned prototype pj of class k. For example, in Figure 3 (left), our network tries to ï¬nd evidence for the red-bellied woodpecker class by comparing the imageâs latent patches with each prototype (visualized in âPrototypeâ column) of that class. This comparison produces a map of similarity scores towards each prototype, which was upsampled and superimposed on the original image to see which part of the given image is activated by each prototype. As shown in the âActivation mapâ column in Figure 3 (left), the ï¬rst prototype of the red-bellied woodpecker class activates most strongly on the head of the testing bird, and the second prototype on the wing: the most activated image patch of the given image for each prototype is marked by a bounding box in the âOriginal imageâ column â this is the image patch that the network considers to look like the corresponding prototype. In this case, our network ï¬nds a high similarity between the head of the given bird and the prototypical head of a red-bellied woodpecker (with a similarity score of 6.499), as well as between the wing and the prototypical wing (with a similarity score of 4.392). These similarity scores are weighted and summed together to give a ï¬nal score for the bird belonging to this class. The reasoning process is similar for all other classes (Figure 3 (right)). The network ï¬nally correctly classiï¬es the bird as a red-bellied woodpecker. Section S3 of the supplement provides more examples of how our ProtoPNet classiï¬es previously unseen images of birds.
# 2.5 Comparison with baseline models and attention-based interpretable deep models
The accuracy of our ProtoPNet (with various base CNN architectures) on cropped bird images is compared to that of the corresponding baseline model in the top of Table 1: the ï¬rst number in each cell gives the mean accuracy, and the second number gives the standard deviation, over three runs. To ensure fairness of comparison, the baseline models (without the prototype layer) were trained on the same augmented dataset of cropped bird images as the corresponding ProtoPNet. As we can see, the test accuracy of our ProtoPNet is comparable with that of the corresponding
7
Table 1: Top: Accuracy comparison on cropped bird images of CUB-200-2011 Bottom: Comparison of our model with other deep models
Base VGG16 Res34 Dense121 ProtoPNet Baseline 76.1 ± 0.2 79.2 ± 0.1 80.2 ± 0.2 Base 74.6 ± 0.2 VGG19 82.3 ± 0.3 Res152 80.5 ± 0.1 Dense161 ProtoPNet Baseline 78.0 ± 0.2 78.0 ± 0.3 80.1 ± 0.3 75.1 ± 0.4 81.5 ± 0.4 82.2 ± 0.2 Interpretability None Object-level attn. Part-level attention Model: accuracy B-CNN[25]: 85.1 (bb), 84.1 (full) CAM[56]: 70.5 (bb), 63.0 (full) Part R-CNN[53]: 76.4 (bb+anno.); PS-CNN [15]: 76.2 (bb+anno.); PN-CNN [3]: 85.4 (bb+anno.); DeepLAC[24]: 80.3 (anno.); SPDA-CNN[52]: 85.1 (bb+anno.); PA-CNN[19]: 82.8 (bb); MG-CNN[46]: 83.0 (bb), 81.7 (full); ST-CNN[16]: 84.1 (full); 2-level attn.[49]: 77.9 (full); FCAN[26]: 82.0 (full); Neural const.[37]: 81.0 (full); MA-CNN[55]: 86.5 (full); RA-CNN[7]: 85.3 (full) ProtoPNet (ours): 80.8 (full, VGG19+Dense121+Dense161-based) Part-level attn. + prototypical cases 84.8 (bb, VGG19+ResNet34+DenseNet121-based)
baseline (non-interpretable) model: the loss of accuracy is at most 3.5% when we switch from the non-interpretable baseline model to our interpretable ProtoPNet. We can further improve the accuracy of ProtoPNet by adding the logits of several ProtoPNet models together. Since each ProtoPNet can be understood as a âscoring sheetâ (as in Figure 3) for each class, adding the logits of several ProtoPNet models is equivalent to creating a combined scoring sheet where (weighted) similarity with prototypes from all these models is taken into account to compute the total points for each class â the combined model will have the same interpretable form when we combine several ProtoPNet models in this way, though there will be more prototypes for each class. The test accuracy on cropped bird images of combined ProtoPNets can reach 84.8%, which is on par with some of the best-performing deep models that were also trained on cropped images (see bottom of Table 1). We also trained a VGG19-, DenseNet121-, and DenseNet161-based ProtoPNet on full images: the test accuracy of the combined network can go above 80% â at 80.8%, even though the test accuracy of each individual network is 72.7%, 74.4%, and 75.7%, respectively. Section S3.1 of the supplement illustrates how combining several ProtoPNet models can improve accuracy while preserving interpretability.
Moreover, our ProtoPNet provides a level of interpretability that is absent in other interpretable deep models. In terms of the type of explanations offered, Figure 4 provides a visual comparison of different types of model interpretability. At the coarsest level, there are models that offer object-level attention (e.g., class activation maps [56]) as explanation: this type of explanation (usually) highlights the entire object as the âreasonâ behind a classiï¬cation decision, as shown in Figure 4(a). At a ï¬ner level, there are numerous models that offer part-level attention: this type of explanation highlights the important parts that lead to a classiï¬cation decision, as shown in Figure 4(b). Almost all attention- based interpretable deep models offer this type of explanation (see the bottom of Table 1). In contrast, our model not only offers part-level attention, but also provides similar prototypical cases, and uses similarity to prototypical cases of a particular class as justiï¬cation for classiï¬cation (see Figure 4(c)). This type of interpretability is absent in other interpretable deep models. In terms of how attention is generated, some attention models generate attention with auxiliary part-localization models trained with part annotations (e.g., [53, 52, 3, 24, 15]); other attention models generate attention with âblack- boxâ methods â e.g., RA-CNN [7] uses another neural network (attention proposal network) to decide where to look next; multi-attention CNN [55] uses aggregated convolutional feature maps as âpart attentions.â There is no explanation for why the attention proposal network decides to look at some region over others, or why certain parts are highlighted in those convolutional feature maps. In contrast, our ProtoPNet generates attention based on similarity with learned prototypes: it requires no part annotations for training, and explains its attention naturally â it is looking at this region of input because this region is similar to that prototypical example. Although other attention models focus on similar regions (e.g., head, wing, etc.) as our ProtoPNet, they cannot be made into a case-based reasoning model like ours: the only way to ï¬nd prototypes on other attention models is to analyze posthoc what activates a convolutional ï¬lter of the model most strongly and think of that as a
8
prototype â however, since such prototypes do not participate in the actual model computation, any explanations produced this way are not always faithful to the classiï¬cation decisions. The bottom of Table 1 compares the accuracy of our model with that of some state-of-the-art models on this dataset: âfullâ means that the model was trained and tested on full images, âbbâ means that the model was trained and tested on images cropped using bounding boxes (or the model used bounding boxes in other ways), and âanno.â means that the model was trained with keypoint annotations of bird parts. Even though there is some accuracy gap between our (combined) ProtoPNet model and the best of the state-of-the-art, this gap may be reduced through more extensive training effort, and the added interpretability in our model already makes it possible to bring richer explanations and better transparency to deep neural networks.
# 2.6 Analysis of latent space and prototype pruning
In this section, we analyze the structure of the latent space learned by our ProtoPNet. Figure 5(a) shows the three nearest prototypes to a test image of a Florida jay and of a cardinal. As we can see, the nearest prototypes for each of the two test images come from the same class as that of the image, and the test imageâs patch most activated by each prototype also corresponds to the same semantic concept as the prototype: in the case of the Florida jay, the most activated patch by each of the three nearest prototypes (all wing prototypes) indeed localizes the wing; in the case of the cardinal, the most activated patch by each of the three nearest prototypes (all head prototypes) indeed localizes the head. Figure 5(b) shows the nearest (i.e., most activated) image patches in the entire training/test set to three prototypes. As we can see, the nearest image patches to the ï¬rst prototype in the ï¬gure are all heads of black-footed albatrosses, and the nearest image patches to the second prototype are all yellow stripes on the wings of golden-winged warblers. The nearest patches to the third prototype are feet of some gull. It is generally true that the nearest patches of a prototype all bear the same semantic concept, and they mostly come from those images in the same class as the prototype. Those prototypes whose nearest training patches have mixed class identities usually correspond to background patches, and they can be automatically pruned from our model. Section S8 of the supplement discusses pruning in greater detail.
# 3 Case study 2: car model identiï¬cation
In this case study, we apply our method to car model identiï¬cation. We trained our ProtoPNet on the Stanford Cars dataset [20] of 196 car models, using similar architectures and training algorithm as we did on the CUB-200-2011 dataset. The accuracy of our ProtoPNet and the corresponding baseline model on this dataset is reported in Section S6 of the supplement. We brieï¬y state our performance here: the test accuracy of our ProtoPNet is comparable with that of the corresponding baseline model (⤠3% difference), and that of a combined network of a VGG19-, ResNet34-, and DenseNet121-based ProtoPNet can reach 91.4%, which is on par with some state-of-the-art models on this dataset, such as B-CNN [25] (91.3%), RA-CNN [7] (92.5%), and MA-CNN [55] (92.8%).
# 4 Conclusion
In this work, we have deï¬ned a form of interpretability in image processing (this looks like that) that agrees with the way humans describe their own reasoning in classiï¬cation. We have presented ProtoPNet â a network architecture that accommodates this form of interpretability, described our specialized training algorithm, and applied our technique to bird species and car model identiï¬cation.
Supplementary Material and Code: The supplementary material and code are available at https: //github.com/cfchen-duke/ProtoPNet.
# Acknowledgments
This work was sponsored in part by a grant from MIT Lincoln Laboratory to C. Rudin.
9
# References
[1] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba. Network Dissection: Quantifying Interpretability of Deep Visual Representations. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 3319â3327. IEEE, 2017.
[2] J. Bien and R. Tibshirani. Prototype Selection for Interpretable Classiï¬cation. Annals of Applied Statistics, 5(4):2403â2424, 2011.
[3] S. Branson, G. Van Horn, S. Belongie, and P. Perona. Bird Species Categorization Using Pose Normalized Deep Convolutional Nets. In Proceedings of the British Machine Vision Conference. BMVA Press, 2014.
[4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248â255. IEEE, 2009.
[5] D. Erhan, Y. Bengio, A. Courville, and P. Vincent. Visualizing Higher-Layer Features of a Deep Network. Technical Report 1341, the University of Montreal, June 2009. Also presented at the Workshop on Learning Feature Hierarchies at the 26th International Conference on Machine Learning (ICML 2009), Montreal, Canada.
[6] L. Fei-Fei and P. Perona. A Bayesian Hierarchical Model for Learning Natural Scene Categories. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 524â531. IEEE, 2005.
[7] J. Fu, H. Zheng, and T. Mei. Look Closer to See Better: Recurrent Attention Convolutional Neural Network for Fine-grained Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4438â4446, 2017.
[8] K. Ghiasi-Shirazi. Generalizing the Convolution Operator in Convolutional Neural Networks. Neural Processing Letters, 2019.
[9] R. Girshick. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 1440â1448, 2015.
[10] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 580â587, 2014.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770â778, 2016.
[12] G. E. Hinton. A Practical Guide to Training Restricted Boltzmann Machines. In Neural Networks: Tricks of the Trade, pages 599â619. Springer, 2012.
[13] A. Holt, I. Bichindaritz, R. Schmidt, and P. Perner. Medical applications in case-based reasoning. The Knowledge Engineering Review, 20:289â292, 09 2005.
[14] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4700â4708, 2017.
[15] S. Huang, Z. Xu, D. Tao, and Y. Zhang. Part-Stacked CNN for Fine-Grained Visual Categorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1173â1182, 2016.
[16] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial Transformer Networks. In Advances in Neural Information Processing Systems 28 (NIPS), pages 2017â2025, 2015.
[17] Y.-G. Jiang, C.-W. Ngo, and J. Yang. Towards Optimal Bag-of-Features for Object Categorization and Semantic Video Retrieval. In Proceedings of the 6th ACM International Conference on Image and Video Retrieval, pages 494â501. ACM, 2007.
[18] B. Kim, C. Rudin, and J. Shah. The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classiï¬cation. In Advances in Neural Information Processing Systems 27 (NIPS), pages 1952â1960, 2014.
[19] J. Krause, H. Jin, J. Yang, and L. Fei-Fei. Fine-Grained Recognition without Part Annotations. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5546â5555, 2015.
[20] J. Krause, M. Stark, J. Deng, and L. Fei-Fei. 3D Object Representations for Fine-Grained Categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
[21] S. Lazebnik, C. Schmid, and J. Ponce. Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 2169â2178. IEEE, 2006.
10
[22] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations. In Proceedings of the 26th International Conference on Machine Learning (ICML), pages 609â616, 2009.
[23] O. Li, H. Liu, C. Chen, and C. Rudin. Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions. In Proceedings of the Thirty-Second AAAI Conference on Artiï¬cial Intelligence (AAAI), 2018.
[24] D. Lin, X. Shen, C. Lu, and J. Jia. Deep LAC: Deep Localization, Alignment and Classiï¬cation for Fine- grained Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1666â1674, 2015.
[25] T.-Y. Lin, A. RoyChowdhury, and S. Maji. Bilinear CNN Models for Fine-grained Visual Recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 1449â1457, 2015.
[26] X. Liu, T. Xia, J. Wang, Y. Yang, F. Zhou, and Y. Lin. Fully Convolutional Attention Networks for Fine-Grained Recognition. arXiv preprint arXiv:1603.06765, 2016.
[27] D. G. Lowe et al. Object Recognition from Local Scale-Invariant Features. In Proceedings of the Interna- tional Conference on Computer Vision (ICCV), volume 99, pages 1150â1157, 1999.
[28] Y. Ming, P. Xu, H. Qu, and L. Ren. Interpretable and Steerable Sequence Learning via Prototypes. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDDâ19), pages 903â913. ACM, 2019.
[29] K. Nalaie, K. Ghiasi-Shirazi, and M.-R. Akbarzadeh-T. Efï¬cient Implementation of a Generalized Convo- lutional Neural Networks based on Weighted Euclidean Distance. In 2017 7th International Conference on Computer and Knowledge Engineering (ICCKE), pages 211â216. IEEE, 2017.
[30] A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In Advances in Neural Information Processing Systems 29 (NIPS), pages 3387â3395, 2016.
[31] D. Nister and H. Stewenius. Scalable Recognition with a Vocabulary Tree. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 2161â2168. IEEE, 2006.
[32] N. Papernot and P. McDaniel. Deep k-Nearest Neighbors: Towards Conï¬dent, Interpretable and Robust Deep Learning. arXiv preprint arXiv:1803.04765, 2018.
[33] C. E. Priebe, D. J. Marchette, J. G. DeVinney, and D. A. Socolinsky. Classiï¬cation Using Class Cover Catch Digraphs. Journal of Classiï¬cation, 20(1):003â023, 2003.
[34] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Advances in Neural Information Processing Systems 28 (NIPS), pages 91â99, 2015.
[35] R. Salakhutdinov and G. Hinton. Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure. In Proceedings of the Eleventh International Conference on Artiï¬cial Intelligence and Statistics (AISTATS), volume 2 of Proceedings of Machine Learning Research, pages 412â419. PMLR, 2007.
[36] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-CAM: Visual Expla- nations from Deep Networks via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017.
[37] M. Simon and E. Rodner. Neural Activation Constellations: Unsupervised Part Model Discovery with Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 1143â1151, 2015.
[38] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep Inside Convolutional Networks: Visualising Image Classiï¬cation Models and Saliency Maps. In Workshop at the 2nd International Conference on Learning Representations (ICLR Workshop), 2014.
[39] K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2015.
[40] J. Sivic and A. Zisserman. Video Google: A Text Retrieval Approach to Object Matching in Videos. In Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV), page 1470. IEEE, 2003.
[41] D. Smilkov, N. Thorat, B. Kim, F. Viégas, and M. Wattenberg. SmoothGrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825, 2017.
[42] M. Sundararajan, A. Taly, and Q. Yan. Axiomatic Attribution for Deep Networks. In Proceedings of the 34th International Conference on Machine Learning (ICML), volume 70 of Proceedings of Machine Learning Research, pages 3319â3328. PMLR, 2017.
[43] J. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders. Selective Search for Object Recognition. International Journal of Computer Vision, 104(2):154â171, 2013.
11
[44] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel Recurrent Neural Networks. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pages 1747â1756, 2016.
[45] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
[46] D. Wang, Z. Shen, J. Shao, W. Zhang, X. Xue, and Z. Zhang. Multiple Granularity Descriptors for Fine- grained Categorization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2399â2406, 2015.
[47] K. Q. Weinberger and L. K. Saul. Distance Metric Learning for Large Margin Nearest Neighbor Classiï¬ca- tion. Journal of Machine Learning Research, 10(Feb):207â244, 2009.
[48] C. Wu and E. G. Tabak. Prototypal Analysis and Prototypal Regression. arXiv preprint arXiv:1701.08916, 2017.
[49] T. Xiao, Y. Xu, K. Yang, J. Zhang, Y. Peng, and Z. Zhang. The Application of Two-Level Attention Models in Deep Convolutional Neural Network for Fine-grained Image Classiï¬cation. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 842â850. IEEE, 2015.
[50] J. Yosinski, J. Clune, T. Fuchs, and H. Lipson. Understanding Neural Networks through Deep Visualization. In Deep Learning Workshop at the 32nd International Conference on Machine Learning (ICML), 2015.
[51] M. D. Zeiler and R. Fergus. Visualizing and Understanding Convolutional Networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 818â833, 2014.
[52] H. Zhang, T. Xu, M. Elhoseiny, X. Huang, S. Zhang, A. Elgammal, and D. Metaxas. SPDA-CNN: Unifying Semantic Part Detection and Abstraction for Fine-grained Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1143â1152, 2016.
[53] N. Zhang, J. Donahue, R. Girshick, and T. Darrell. Part-based R-CNNs for Fine-grained Category Detection. In Proceedings of the European Conference on Computer Vision (ECCV), pages 834â849. Springer, 2014.
[54] Q. Zhang, Y. N. Wu, and S.-C. Zhu. Interpretable Convolutional Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[55] H. Zheng, J. Fu, T. Mei, and J. Luo. Learning Multi-Attention Convolutional Neural Network for Fine- Grained Image Recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 5209â5217, 2017.
[56] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning Deep Features for Discriminative Localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2921â2929. IEEE, 2016.
[57] B. Zhou, Y. Sun, D. Bau, and A. Torralba. Interpretable Basis Decomposition for Visual Explanation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 119â134, 2018.
12 | {
"id": "1803.04765"
} |
1806.10293 | QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation | In this paper, we study the problem of learning vision-based dynamic
manipulation skills using a scalable reinforcement learning approach. We study
this problem in the context of grasping, a longstanding challenge in robotic
manipulation. In contrast to static learning behaviors that choose a grasp
point and then execute the desired grasp, our method enables closed-loop
vision-based control, whereby the robot continuously updates its grasp strategy
based on the most recent observations to optimize long-horizon grasp success.
To that end, we introduce QT-Opt, a scalable self-supervised vision-based
reinforcement learning framework that can leverage over 580k real-world grasp
attempts to train a deep neural network Q-function with over 1.2M parameters to
perform closed-loop, real-world grasping that generalizes to 96% grasp success
on unseen objects. Aside from attaining a very high success rate, our method
exhibits behaviors that are quite distinct from more standard grasping systems:
using only RGB vision-based perception from an over-the-shoulder camera, our
method automatically learns regrasping strategies, probes objects to find the
most effective grasps, learns to reposition objects and perform other
non-prehensile pre-grasp manipulations, and responds dynamically to
disturbances and perturbations. | http://arxiv.org/pdf/1806.10293 | Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, Sergey Levine | cs.LG, cs.AI, cs.CV, cs.RO, stat.ML | CoRL 2018 camera ready. 23 pages, 14 figures | null | cs.LG | 20180627 | 20181128 | 8 1 0 2
v o N 8 2 ] G L . s c [
3 v 3 9 2 0 1 . 6 0 8 1 : v i X r a
# QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation
Dmitry Kalashnikov1, Alex Irpan1, Peter Pastor2, Julian Ibarz1, Alexander Herzog2, Eric Jang1, Deirdre Quillen3, Ethan Holly1, Mrinal Kalakrishnan2, Vincent Vanhoucke1, Sergey Levine1,3 {dkalashnikov, alexirpan, julianibarz, ejang, eholly, vanhoucke, slevine}@google.com, {peterpastor, alexherzog, kalakris}@x.team, {deirdrequillen}@berkeley.edu
Abstract: In this paper, we study the problem of learning vision-based dynamic manipulation skills using a scalable reinforcement learning approach. We study this problem in the context of grasping, a longstanding challenge in robotic ma- nipulation. In contrast to static learning behaviors that choose a grasp point and then execute the desired grasp, our method enables closed-loop vision-based con- trol, whereby the robot continuously updates its grasp strategy based on the most recent observations to optimize long-horizon grasp success. To that end, we in- troduce QT-Opt, a scalable self-supervised vision-based reinforcement learning framework that can leverage over 580k real-world grasp attempts to train a deep neural network Q-function with over 1.2M parameters to perform closed-loop, real-world grasping that generalizes to 96% grasp success on unseen objects. Aside from attaining a very high success rate, our method exhibits behaviors that are quite distinct from more standard grasping systems: using only RGB vision- based perception from an over-the-shoulder camera, our method automatically learns regrasping strategies, probes objects to ï¬nd the most effective grasps, learns to reposition objects and perform other non-prehensile pre-grasp manipulations, and responds dynamically to disturbances and perturbations.4
Keywords: grasping, reinforcement learning, deep learning
# Introduction
Manipulation with object interaction represents one of the largest open problems in robotics: in- telligently interacting with previously unseen objects in open-world environments requires gener- alizable perception, closed-loop vision-based control, and dexterous manipulation. Reinforcement learning offers a promising avenue for tackling this problem, but current work on reinforcement learning tackles the problem of mastering individual skills, such as hitting a ball [1], opening a door [2, 3], or throwing [4]. To meet the generalization demands of real-world manipulation, we focus speciï¬cally on scalable learning with off-policy algorithms, and study this question in the context of the speciï¬c problem of grasping. While grasping restricts the manipulation problem, it still retains many of its largest challenges: a grasping system should be able to pick up previ- ously unseen objects with reliable and effective grasps, while using realistic sensing and actuation. It thus serves as a microcosm of the larger robotic manipulation problem, providing a challeng- ing and practically applicable model problem for experimenting with generalization and diverse object interaction. Much of the existing work on robotic grasping decomposes the task into a sens- the robot ï¬rst perceives the scene and identiï¬es suitable grasp ing, planning, and acting stage: locations, then plans a path to those locations [5, 6, 7, 8]. This stands in contrast to the kinds of grasping behaviors observed in humans and animals, where the grasp is a dynamical process that
1Google Brain, United States 2X, Mountain View, California, United States 3University of California Berkeley, Berkeley, California, United States 4Supplementary experiment videos can be found at https://goo.gl/ykQn6g.
2nd Conference on Robot Learning (CoRL 2018), Z¨urich, Switzerland.
tightly interleaves sensing and control at every stage [9, 10]. This kind of dynamic closed-loop grasping is likely to be much more robust to unpredictable object physics, limited sensory infor- mation (e.g., monocular camera inputs instead of depth), and imprecise actuation. A closed-loop grasping system trained for long-horizon success can also perform intelligent pre-grasping manip- ulations, such as pushing or repositioning objects for an easier grasp. However, a major challenge with closed-loop grasp control is that the sensorimotor loop must be closed on the visual modality, which is very difï¬cult to utilize effectively with standard optimal control methods in novel settings. We study how off-policy deep reinforcement learning can acquire closed-loop dynamic vi- sual grasping strategies, using entirely self- supervised data collection, so as to generalize to previously unseen objects at test time. The value of low-level end-effector movements is predicted directly from raw camera observa- tions, and the entire system is trained using grasp attempts in the real world. While the prin- ciples of deep reinforcement learning have been known for decades [11, 12], operationalizing them in a practical robotic learning algorithm that can generalize to new objects requires a stable and scalable algorithm and large datasets, as well as careful system design.
The implementation in our experiments makes very simple assumptions: observations come from a monocular RGB camera located over the shoulder (see Fig. 2), and actions consist of end-effector Cartesian motion and gripper opening and closing commands. The reinforce- ment learning algorithm receives a binary re- ward for lifting an object successfully, and no other reward shaping. This general set of as- sumptions makes the method feasible to de- ploy at large scale, allowing us to collect 580k grasp attempts on 7 real robotic systems. Un- like most reinforcement learning tasks in the literature [13, 14], the primary challenge in this task is not just to maximize reward, but to gen- eralize effectively to previously unseen objects. This requires a very diverse set of objects dur- ing training. To make maximal use of this di- verse dataset, we propose an off-policy train- ing method based on a continuous-action gen- eralization of Q-learning, which we call QT- Opt. Unlike other continuous action Q-learning methods [15, 16], which are often unstable due to actor-critic instability [17, 18], QT-Opt dispenses with the need to train an explicit actor, instead using stochastic optimization over the critic to select actions and target values [19, 20]. We show that even fully off-policy training can outperform strong baselines based on prior work, while a moderate amount of on-policy joint ï¬netuning with ofï¬ine data can improve performance to a success rate of 96% on challenging, previously unseen objects.
Our experimental evaluation demonstrates the effectiveness of this approach both quantitatively and qualitatively. We show that our method attains a high success rate across a range of objects not seen during training, and our qualitative experiments show that this high success rate is due to the system adopting a variety of strategies that would be infeasible without closed-loop vision-based control: the learned policies exhibit corrective behaviors, regrasping, probing motions to ascertain the best grasp, non-prehensile repositioning of objects, and other features that are feasible only when grasping is formulated as a dynamic, closed-loop process.
2
# 2 Related Work
Reinforcement learning has been applied in the context of robotic control using both low- dimensional [1, 2] and high-dimensional [15, 16] function approximators, including with visual inputs [21, 3]. However, all of these methods focus on learning narrow, individual tasks, and do not evaluate on broad generalization to large numbers of novel test objects. Real-world robotic manipu- lation requires broad generalization, and indeed much of the research on robotic grasping has sought to achieve such generalization, either through the use of grasp metrics based on ï¬rst principles [22] or learning [23, 10], with the latter class of methods achieving some of the best results in recent years [8, 7]. However, current grasping systems typically approach the grasping task as the problem of predicting a grasp pose, where the system looks at the scene (typically using a depth camera), chooses the best location at which to grasp, and then executes an open-loop planner to reach that location [5, 6, 7, 8]. In contrast, our approach uses reinforcement learning with deep neural net- works, which enables dynamic closed-loop control. This allows our policies to perform pre-grasp manipulation and respond to dynamic disturbances and, crucially, allows us to learn grasping in a generic framework that makes minimal assumptions about the task.
While most prior grasping methods operate in open-loop, a number of works have studied closed- loop grasping [24, 25, 26, 27]. In contrast to these methods, which frame closed-loop grasping as a servoing problem, our method uses a general-purpose reinforcement learning algorithm to solve the grasping task, which enables long-horizon reasoning. In practice, this enables our method to autonomously acquire complex grasping strategies, some of which we illustrate in Section 6. Our method is also entirely self-supervised, using only grasp outcome labels that are obtained automat- ically by the robot. Several works have proposed self-supervised grasping systems [28, 27], but to our knowledge, ours is the ï¬rst to incorporate long-horizon reasoning via reinforcement learning into a generalizable vision-based system trained on self-supervised real-world data. Related to our work, Zeng et al. [5] recently proposed a Q-learning framework for combining grasping and push- ing. Our method utilizes a much more generic action space, directly commanding gripper motion in 3D, and exhibits substantially better performance and generalization in our experiments. Finally, in contrast to many current grasping systems that utilize depth sensing [7, 29] or wrist-mounted cam- eras [25, 29], our method operates on raw monocular RGB observations from an over-the-shoulder camera, and the performance of our method indicates that effective learning can achieve excellent grasp success rates even with very rudimentary sensing.
# 3 Overview
Our closed-loop vision-based control frame- work is based on a general formulation of robotic manipulation as a Markov Decision Process (MDP)5. At each time step, the pol- icy observes the image from the robotâs camera (see Fig. 2) and chooses a gripper command, as discussed in Section 5. This task formulation is general and could in principle be applied to a wide range of robotic manipulation tasks. The grasping task is deï¬ned simply by providing a reward to the learner during data collection: a successful grasp results in a reward of 1, and a failed grasp a reward of 0. A grasp is con- sidered successful if the robot holds an object above a certain height at the end of the episode.
Training Worker Eq.(1) Cross Entropy Method s) = arg max, a(s n(s) gmax, Qa(s, a) Replay Buffers off-policy Bellman Updater Qr Qo(s,a) Model weights} on-policy
Figure 3: Our distributed RL infrastructure for QT-Opt (see Sec. 4.2). State-action-reward tuples are loaded from an ofï¬ine data stored and pushed from online real robot collection (see Sec. 5). Bellman update jobs sam- ple transitions and generate training examples, while training workers update the Q-function parameters.
The framework of MDPs provides a general and powerful formalism for such decision- making problems, but learning in this framework can be challenging. Generalization requires di- verse data, but recollecting experience on a wide range of objects after every policy update is im-
5While a partially observed (POMDP) formulation would be most general, we assume that the current ob- servation provides all necessary information. In practice, the resulting policy still exhibits moderate robustness to occlusions, and a more general extension to recurrent policies and Q-functions would be straightforward.
3
practical, ruling out on-policy algorithms. Instead, we devise a scalable off-policy reinforcement learning framework based around a continuous generalization of Q-learning. While actor-critic al- gorithms are a popular approach in the continuous action setting, we found that a more stable and scalable alternative is to train only a Q-function, and induce a policy implicitly by maximizing this Q-function using stochastic optimization. We describe the resulting algorithm, which we call QT- Opt, in Section 4, and describe its instantiation for robotic grasping in Section 5. To handle the large datasets and networks in our approach, we devise a distributed collection and training system that asynchronously updates target values, collects on-policy data, reloads off-policy data from past ex- periences, and trains the network on both data streams within a distributed optimization framework (see Fig. 3).
# 4 Scalable Reinforcement Learning with QT-Opt
In this section, we describe the reinforcement learning algorithm that we use for our closed-loop vision-based grasping method. The algorithm is a continuous action version of Q-learning adapted for scalable learning and optimized for stability, to make it feasible to handle large amounts of off-policy image data for complex tasks like grasping.
# 4.1 Reinforcement Learning and Q-Learning
We ï¬rst review the fundamentals of reinforcement learning and Q-learning, which we build on to derive our algorithm. We will use s â S to denote the state, which in our case will include image observations (see Appendix D for details). a â A denotes the action, which will correspond to robot arm motion and gripper command. At each time step t, the algorithm chooses an action, transitions to a new state, and receives a reward r(st, at). The goal in RL is to recover a policy that selects actions to maximize the total expected reward. One way to acquire such an optimal policy is to ï¬rst solve for the optimal Q-function, which is sometimes referred to as the state-action value function. The Q-function speciï¬es the expected reward that will be received after taking some action a in some state s, and the optimal Q-function speciï¬es this value for the optimal policy. In practice, we aim to learn parameterized Q-functions Qθ(s, a), where θ might denote the weights in a neural network. We can learn the optimal Q-function by minimizing the Bellman error, given by
E(9) = E(s,a,s"\~p(s,a,sâ) LD (Qo(s, a), Qr(s,a,sâ))], (1)
â¼
where Q7(s,a,sâ) = r(s,a) + yV(sâ) is a target value, and D is some divergence metric. We use the cross-entropy function for D, since total returns are bounded in [0,1], which we found to be more stable than the standard squared difference (see Appendix C). The expectation is taken under the distribution over all previously observed transitions, and V(sâ) is a target value. In our implementation, we use two target networks [15, 30, 31] to improve stability, by maintain- ing two lagged versions of the parameter vector 0, 6, and 62, where @, is the exponential mov- ing averaged version of @ with an averaging constant of 0.9999, and 62 is a lagged version of 41, which is lagged by about 6000 gradient steps. We then compute the target value according to V(sâ) = minj=1,2 Qg, (sâ, arg maxa Qg, (sâ,aâ)) . This corresponds to a combination of Polyak averaging [32, 33] and clipped double Q-learning [34, 35, 36], and we discuss this design decision further in Appendix C. Once the Q-function is learned, the policy can be recovered according to m(s) = arg max, Qo, (s, a). Practical implementations of this method collect samples from envi- ronment interaction and then perform off-policy training on all samples collected so far [15, 30, 31]. For large-scale learning problems of the sort tackled in this work, a parallel asynchronous version of this procedure substantially improves our ability to scale up this process, as discussed in Section 4.3.
# 4.2 QT-Opt for Stable Continuous-Action Q-Learning
Q-learning with deep neural network function approximators provides a simple and practical scheme for RL with image observations, and is amenable to straightforward parallelization. However, incor- porating continuous actions, such as continuous gripper motion in our grasping application, poses a challenge for this approach. Prior work has sought to address this by using a second network that amortizes the maximization [15, 16], or constraining the Q-function to be convex in a, making it easy to maximize analytically [31, 37]. Unfortunately, the former class of methods are notoriously unstable [18], which makes it problematic for large-scale RL tasks where running hyperparameter
4
sweeps is prohibitively expensive. Action-convex value functions are a poor ï¬t for complex manip- ulation tasks such as grasping, where the Q-function is far from convex in the input. For example, the Q-value may be high for actions that reach toward objects, but low for the gaps between objects.
We therefore propose a simple and practical alternative that maintains the generality of non-convex Q-functions while avoiding the need for a second maximizer network. The image s and action a are inputs into our network, and the arg max in Equation (1) is evaluated with a stochastic opti- mization algorithm that can handle non-convex and multimodal optimization landscapes, similarly to [19] and [20]. Let ϯθ1 (s) be the policy implicitly induced by the Q-function Q¯θ1 (s, a). We can recover Equation (1) by substituting the optimal policy ϯθ1(s) = arg maxa Q¯θ1 (s, a) in place of the arg max argument to the target Q-function. In our algorithm, which we call QT-Opt, ϯθ1(s) is instead evaluated by running a stochastic optimization over a, using Q¯θ1(s, a) as the objective value. We use the cross-entropy method (CEM) to perform this optimization, which is easy to par- allelize and moderately robust to local optima for low-dimensional problems [38]. CEM is a simple derivative-free optimization algorithm that samples a batch of N values at each iteration, ï¬ts a Gaus- sian distribution to the best M < N of these samples, and then samples the next batch of N from that Gaussian. In our implementation, we use N = 64 and M = 6, and perform two iterations of CEM. This is used both to compute targets at training time, and to choose actions in the real world.
# 4.3 Distributed Asynchronous QT-Opt
Learning vision based policies with reinforcement learning that generalizes over new scenes and objects requires large amounts of diverse data, in the same way that learning to generalize on com- plex vision tasks with supervised learning requires large datasets. For the grasping task in our experiments, we collected over 580k grasps over the course of several weeks across 7 robots. To effectively train on such large and diverse RL dataset, we develop a distributed, asynchronous im- plementation of QT-Opt. Fig. 3 summarizes the system. Transitions are stored in a distributed replay buffer database, which both loads historical data from disk and can accept online data from live on- going experiments across multiple robots. The data in this buffer is continually labeled with target Q-values by using a set of 1000 âBellman updaterâ jobs, which carry out the CEM optimization procedure using the current target network, and then store the labeled samples in a second train- ing buffer, which operates as a ring buffer. One consequence of this asynchronous procedure is that some samples in the training buffer are labeled with lagged versions of the Q-network. This is discussed in more detail in the supplement, in Appendix F.4. Training workers pull labeled transi- tions from the training buffer randomly and use them to update the Q-function. We use 10 training workers, each of which compute gradients which are sent asynchronously to parameter servers. We found empirically that a large number of gradient steps (up to 15M) were needed to train an effective Q-function due to the complexity of the task and large size of the dataset and model. Full details of the system design are provided in Appendix F.
# 5 Dynamic Vision-Based Grasping
In this section, we discuss how QT-Opt can be applied to enable dynamic vision-based grasping. An illustration of our grasping setup is shown in Fig. 1. The task requires a policy that can locate an object, position it for grasping (potentially by performing pre-grasp manipulations), pick up the object, potentially regrasping as needed, raise the object, and then signal that the grasp is complete to terminate the episode. To enable self-supervised grasp labeling in the real world, the reward only indicates whether or not an object was successfully picked up. This represents a fully end-to-end approach to grasping: no prior knowledge about objects, physics, or motion planning is provided to the model aside from the knowledge that it can extract autonomously from the data.
MDP for grasping. The state observation s â S includes the robotâs current camera observa- tion, an RGB image with a resolution of 472x472, recorded from an over-the-shoulder monocular camera (see Fig. 1). We also found it beneï¬cial to include the current status of the gripper in the state, which is a binary indicator of whether the gripper is open or closed, as well as the vertical position of the gripper relative to the ï¬oor (see comparisons in Appendix C). The action a â A consists of a vector in Cartesian space t â R3 indicating the desired change in the gripper posi- tion, a change in azimuthal angle encoded via a sine-cosine encoding r â R2, binary gripper open and close commands gopen and gclose, and a termination command e that ends the episode, such that a = (t, r, gopen, gclose, e). Full details of the grasping MDP formulation are provided in Appendix D.
5
Reward function. The reward is 1 at the end of the episode if the gripper contains an object and is above a certain height, and 0 otherwise. Success is determined by using a background subtraction test after dropping the picked object, as discussed in Appendix D.4. Note that this type of delayed and sparse reward function is generally quite challenging for reinforcement learning systems, but it is also the most practical reward function for automated self-supervision. To encourage the robot to grasp more quickly, we also provide a small penalty r(st, at) = â0.05 for all time steps prior to termination, when the model either emits the termination action or exceeds the maximum number of time steps (20). This penalty may in principle result in target values outside of [0, 1], though we found empirically that this does not happen. Q-Function representation. The Q-function Q¯θ1(s, a) is represented in our system by a large convolutional neural network with 1.2M parameters, where the image is provided as an input into the bottom of the convolutional stack, and the action, gripper status, and distance to ï¬oor are fed into the middle of the stack. The full neural network architecture is discussed in Appendix E.
Data collection. In order to enable our model to learn generalizable strategies that can pick up new objects, perform pre-grasp manipulation, and handle dynamic disturbances with vision-based feedback, we must train it on a sufï¬ciently large and diverse set of objects. Collecting such data in a single on-policy training run would be impractical. Our off-policy QT-Opt algorithm makes it possible to pool experience from multiple robots and multiple experiments. The full dataset used to train our ï¬nal model was collected over the course of four months, with a total of about 800 robot hours. This data was collected during multiple separate experiments, and each experiment reused the data from the previous one. This reduces our ability to provide rigidly controlled experimental results in the real-world system, but we provide more rigidly controlled results in simulation in the supplement, in Appendix C. Since a completely random initial policy would produce a very low success with such an unconstrained action space, we use a weak scripted exploration policy to bootstrap data collection. This policy is randomized, but biased toward reasonable grasps, and achieves a success rate around 15-30%. We switched to using the learned QT-Opt policy once it reached a success rate of 50%. The scripted policy is described in the supplementary material, in Appendix B. Data was collected with 7 LBR IIWA robots, with 4-10 training objects per robot. The objects were replaced every 4 hours during business hours, and left unattended at night and on weekends. The objects used during testing were distinct from those in the training data.
# 6 Experimental Results
Our experiments evaluate our learned closed-loop vision-based grasping system to answer the fol- lowing research questions: (1) How does our method perform, quantitatively, on new objects that were never seen during training? (2) How does its performance compare to a previously proposed self-supervised grasping system that does not explicitly optimize for long-horizon grasp success? (3) What types of manipulation strategies does our method adopt, and does it carry out meaningful, goal-directed pre-grasp manipulations? (4) How do the various design choices in our method affect its performance? The ï¬rst two questions are addressed through a set of rigorous real-world quan- titative experiments, which we discuss in Section 6.1, question (3) is addressed through qualitative experiments, which are discussed in Section 6.2 and shown in the supplementary video and online, and the last question is addressed through a detailed set of ablation studies in both simulation and the real world, which are discussed in Appendix C and A. The experiments in the appendices also study the impact of dataset size and off-policy training on ï¬nal performance.
# 6.1 Quantitative Performance Evaluation
In this section, we present a quantitative evaluation of our grasping system. The physical setup for each robot is shown in Fig. 1 (left): the robots are tasked with grasping objects in a bin, using an over-the-shoulder RGB camera and no other sensing.6 We use two separate evaluation protocols, which use challenging objects that were not seen at training time. In the ï¬rst protocol, each of the 7 robots make 102 grasp attempts on a set of test objects. The grasp attempts last for up to 20 time steps each, and any grasped object is deposited back into the bin. Although a policy may choose to grasp the same object multiple times, we found in practice that each robot made grasp attempts on a variety of objects, without ï¬xating on a single one. However, to control for potential confounding
6Though some of the ï¬gures show a wrist-mounted camera, this camera is not used in the experiments.
6
Method QT-Opt (ours) Levine et al. [27] QT-Opt (ours) Levine et al. [27] Dataset 580k off-policy + 28k on-policy 900k grasps from Levine et al. [27] 580k off-policy grasps only 400k grasps from our dataset Test 96% 78% 87% 67% ï¬rst 10 88% 76% Bin emptying ï¬rst 20 88% 72% ï¬rst 30 76% 72%
| | | {|
Table 1: Quantitative results in terms of grasp success rate on test objects. Policies are evaluated with object replacement (test) and without (bin emptying), with the latter showing success rates on the ï¬rst 10, 20, and 30 grasps. The variant of our method that uses on-policy joint ï¬netuning has a failure rate more than four times lower than prior work on the test set, while using substantially fewer grasp attempts for training. The variant that only uses off-policy training also substantially exceeds the performance of the prior method.
effects due to replacement, we also conducted experiments with a second protocol, which we refer to as bin emptying. Here, a single robot unloads a cluttered bin ï¬lled with 28 test objects, using 30 grasp attempts. This is repeated 5 times. Grasp success is reported over the ï¬rst 10, 20, and 30 grasp attempts, corresponding to grasps on increasingly difï¬cult objects.
The performance of our method is shown in Table 1. The results show both a variant of our method that is trained entirely using off-policy data, without any additional data collection from the latest policy, as well as the performance after joint ï¬netuning with additional on-policy data, which is collected simultaneously with the policy training (details of the joint ï¬netuning procedure in Ap- pendix F.3). The success rate of our method in both cases is very high. Effective off-policy training is valuable as it allows for rapid iteration on hyperparameters and architecture design without any data collection. However, additional on-policy joint ï¬netuning consistently provides a quantiï¬able increase in performance with only about 28,000 additional grasps, reaching 96% grasp success. Although the on-policy dataset does not observe the same data diversity as seen in the off-policy dataset, it likely affords the policy a kind of âhard negative miningâ mechanism, letting it quickly correct erroneous and over-optimistic extrapolations. Further ablations are discussed in Appendix A.
To compare our method to prior work, we evaluated the technique proposed by Levine et al. [27]. This prior method is also self-supervised, and previously attained good results on a similar visual grasping setup. This prior method does not reason about long-horizon rewards: although it can be used in closed-loop, the policy greedily optimizes for grasp success at the next grasp, does not con- trol the opening and closing of the gripper, and does not reason about pregrasp manipulation. Since the format of the data for the two methods is different due to the different action representations, we compare to two versions of this prior approach: a variant that is trained on all of the data described by Levine et al. [27], and a variant that adapts the same data used for our method, discarding grasp attempts where the gripper was not closed. The comparison in Table 1 indicates a very large gap in performance between our method and both variants of the prior approach. On the bin emptying ex- periment, our method emptied the bin in 30 grasps or less in 2 of the 5 trials, while the prior method emptied the bin in 1 of the 5 trials. The lower success rate for 30 grasps is due to the policy trying to grasp the last few objects, which are usually very small and often get stuck in an unreachable corner of the bin. Examples are shown in Appendix A.
# 6.2 Analysis of Grasping Strategies with Qualitative Experiments
Our QT-Opt grasping policy has a success rate of 96% on previously unseen test objects. What types of strategies does this policy adopt? In contrast to most grasping systems, our method performs gen- eral closed-loop control with image observations, and can choose to reposition, open, or close the gripper at any time. This ï¬exibility, combined with training for long-horizon success with reinforce- ment learning, enables it to perform behaviors that are usually not observed with standard grasping systems. We encourage the reader to watch the supplementary video, as well as the extended video, both provided at https://goo.gl/ykQn6g, and discuss some examples here. Notably, all of these examples emerge automatically from training the policy to optimize grasp success. Singulation and pregrasp manipulation. Since our policies optimizes for the success of the en- tire episode, they can carry out pregrasp manipulations that reposition objects to make them easier to grasp. In Fig. 4 (a), we show an example object singulation sequence performed by the learned policy on a previously unseen blocks puzzle, and in Fig. 4 (b), we show an example where the policy chooses to knock down a ketchup bottle to make it easier to pick up. Regrasping. The policy can open and close the gripper at any time, which allows it to detect early signs of an unstable grasp and regrasp the object more securely. In Fig. 4 (c), we show examples where the policy repeatedly regrasps a slippery object on the ï¬oor, while in Fig. 4 (d), we show an
7
(a) (b) (c) (d) (e) (f) (g) (h)
Figure 4: Eight grasps from the QT-Opt policy, illustrating some of the strategies discovered by our method: pregrasp manipulation (a, b), grasp readjustment (c, d), grasping dynamic objects and recovery from perturba- tions (e, f), and grasping in clutter (g, h). See discussion in the text and Appendix A.
example where the object slips out of the gripper during the load phase, and the policy repositions the gripper for a more secure grasp. Handling disturbances and dynamic objects. The reactive policy can also grasp objects that move dynamically during the grasping process. In Fig. 4 (e), we show examples where the policy attempts to pick up a ball, which rolls out of the gripper forcing the robot to follow. In Fig. 4 (f), we also show examples where the object is intentionally pushed out of the gripper during grasping. The policy is still able to correct and grasp another object successfully. Grasping in clutter. Although the training data included no more than ten objects at a time, the policy can still grasp in dense clutter, as shown in Fig. 4 (g). Failure cases. Although the policy was usually successful, we did observe a few failure cases. Especially in dense clutter, the policy was sometimes prone to regrasp repeatedly among cluttered objects, as shown in Fig. 4 (h). While this strategy often does produce a successful grasp, it is somewhat time consuming and not as goal-directed as the behavior observed in less cluttered scenes.
# 7 Discussion and Future Work
We presented a framework for scalable robotic reinforcement learning with raw sensory inputs such as images, based on an algorithm called QT-Opt, a distributed optimization framework, and a combi- nation of off-policy and on-policy training. We apply this framework to the task of grasping, learning closed-loop vision-based policies that attain a high success rate on previously unseen objects, and exhibit sophisticated and intelligent closed-loop behavior, including singulation and pregrasp manip- ulation, regrasping, and dynamic responses to disturbances. All of these behaviors emerge automat- ically from optimizing the grasp success probability via QT-Opt. Although our policies are trained on a large amount of robot experience (580k real-world grasps), all of this experience is collected autonomously with minimal human intervention, and the amount of data needed is substantially lower than comparable prior self-supervised techniques (e.g., [27]). Our results demonstrate that reinforcement learning with vision-based inputs can scale to large datasets and very large models, and can enable policies that generalize effectively for complex real-world tasks such as grasping. Our framework is generic with respect to the task, and extending the approach to other manipulation skills would be an exciting direction for future work.
8
# Acknowledgments
We would like to give special thanks to IËnaki Gonzalo and John-Michael Burke for overseeing the robot operations, and Chelsea Finn, Timothy Lillicrap, and Arun Nair for valuable discussions.
# References
[1] J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682 â 697, 2008. ISSN 0893-6080. Robotics and Neuroscience.
[2] M. Kalakrishnan, L. Righetti, P. Pastor, and S. Schaal. Learning Force Control Policies for In IEEE/RSJ International Conference on Intelligent Robots and Compliant Manipulation. Systems, 2011.
[3] A. Yahya, A. Li, M. Kalakrishnan, Y. Chebotar, and S. Levine. Collective robot reinforce- ment learning with distributed asynchronous guided policy search. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017.
[4] A. Ghadirzadeh, A. Maki, D. Kragic, and M. Bjrkman. Deep predictive policy training us- ing reinforcement learning. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017.
[5] A. Zeng, S. Song, S. Welker, J. Lee, A. Rodriguez, and T. Funkhouser. Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning. arXiv preprint arXiv:1803.09956, 2018.
[6] D. Morrison et. al. Cartman: The low-cost Cartesian Manipulator that won the Amazon Robotics Challenge. In IEEE International Conference on Robotics and Automation, 2018. [7] J. Mahler, M. Matl, X. Liu, A. Li, D. V. Gealy, and K. Goldberg. Dex-Net 3.0: Computing Robust Robot Suction Grasp Targets in Point Clouds using a New Analytic Model and Deep Learning. CoRR, abs/1709.06670, 2017. URL http://arxiv.org/abs/1709.06670. [8] A. ten Pas, M. Gualtieri, K. Saenko, and R. Platt. Grasp Pose Detection in Point Clouds. The
International Journal of Robotics Research, 36(13-14):1455â1473, 2017.
[9] N. Chavan-Daï¬e and A. Rodriguez. Stable Prehensile Pushing: In-Hand Manipulation with Alternating Sticking Contacts. In IEEE Intl Conference on Robotics and Automation, 2018.
[10] J. Bohg, A. Morales, T. Asfour, and D. Kragic. Data-Driven Grasp Synthesis A Survey. IEEE Transactions on Robotics, 30(2):289â309, 2014.
[11] R. S. Sutton and A. G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998. ISBN 0262193981.
[12] G. Tesauro. TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play. Neural Computation, March 1994.
[13] M. C. Machado, M. G. Bellemare, E. Talvitie, J. Veness, M. J. Hausknecht, and M. Bowling. Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents. CoRR, abs/1709.06009, 2017.
[14] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym, 2016.
[15] R. Hafner and M. Riedmiller. Reinforcement learning in feedback control. Machine Learning, 84(1-2), 2011.
[16] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous Control with Deep Reinforcement Learning. CoRR, abs/1509.02971, 2015. URL http://arxiv.org/abs/1509.02971.
[17] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. Benchmarking Deep Reinforce- ment Learning for Continuous Control. In Intl Conference on Machine Learning, 2016. [18] P. Henderson, R. Islam, P. Bachman, J. Pineau, D. Precup, and D. Meger. Deep Reinforcement
Learning that Matters. CoRR, 2017. URL http://arxiv.org/abs/1709.06560.
[19] G. Kahn, A. Villaï¬or, B. Ding, P. Abbeel, and S. Levine. Self-Supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation. In IEEE International Conference on Robotics and Automation, 2018.
[20] D. Quillen, E. Jang, O. Nachum, C. Finn, J. Ibarz, and S. Levine. Deep Reinforcement Learn- ing for Vision-Based Robotic Grasping: A Simulated Comparative Evaluation of Off-Policy
9
Methods. In IEEE International Conference on Robotics and Automation, 2018.
[21] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end Training of Deep Visuomotor Poli- cies. Journal of Machine Learning Research, 17(39), 2016.
[22] J. Weisz and P. K. Allen. Pose Error Robust Grasping from Contact Wrench Space Metrics. In IEEE International Conference on Robotics and Automation, 2012.
[23] I. Lenz, H. Lee, and A. Saxena. Deep Learning for Detecting Robotic Grasps. The Interna- tional Journal of Robotics Research, 34(4-5):705â724, 2015.
[24] K. Yu and A. Rodriguez. Realtime State Estimation with Tactile and Visual sensing. Applica- tion to Planar Manipulation. In IEEE Intl Conference on Robotics and Automation, 2018. [25] U. Viereck, A. ten Pas, K. Saenko, and R. Platt. Learning a visuomotor controller for real
world robotic grasping using simulated depth images. In CoRL, 2017.
[26] K. Hausman, Y. Chebotar, O. Kroemer, G. S. Sukhatme, and S. Schaal. Regrasping using Tactile Perception and Supervised Policy Learning. In AAAI Symposium on Interactive Multi- Sensory Object Perception for Embodied Agents, 2017.
[27] S. Levine, P. Pastor, A. Krizhevsky, and D. Quillen. Learning hand-eye coordination for robotic grasping with large-scale data collection. In International Symposium on Experimental Robotics, 2016.
[28] L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours. In IEEE International Conference on Robotics and Automation, 2016.
[29] D. Morrison, P. Corke, and J. Leitner. Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach. In Robotics: Science and Systems, 2018.
[30] V. Mnih et al. Human-level control through deep reinforcement learning. Nature, 518(7540): 529â533, 2015.
[31] S. Gu, T. Lillicrap, I. Sutskever, and S. Levine. Continuous Deep Q-learning with Model-based Acceleration. In Proceedings of Intl Conference on Machine Learning, 2016.
[32] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838â855, 1992.
[33] T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Con- tinuous control with deep reinforcement learning. In International Conference on Learning Representations, 2016.
[34] H. V. Hasselt. Double Q-learning. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems, 2010. [35] H. v. Hasselt, A. Guez, and D. Silver. Deep Reinforcement Learning with Double Q-Learning.
In AAAI Conference on Artiï¬cial Intelligence, 2016.
[36] S. Fujimoto, H. van Hoof, and D. Meger. Addressing Function Approximation Error in Actor- Critic Methods. CoRR, 2018. URL http://arxiv.org/abs/1802.09477.
[37] B. Amos, L. Xu, and J. Z. Kolter. Input convex neural networks. In International Conference on Machine Learning, volume 70, pages 146â155, 2017.
[38] R. Rubinstein and D. Kroese. The Cross-Entropy Method. Springer-Verlag, 2004. [39] E. Coumans and Y. Bai. Pybullet, a python module for physics simulation for games, robotics
and machine learning. http://pybullet.org, 2016â2018.
[40] A. X. Chang, T. A. Funkhouser, L. J. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu. Shapenet: An information-rich 3d model repository. CoRR, abs/1512.03012, 2015. URL http://arxiv.org/abs/1512.03012. [41] S. Ioffe and C. Szegedy. Batch Normalization: Accelerating Deep Network Training by Re- ducing Internal Covariate Shift. In International Conference on Machine Learning, 2015. [42] A. Stooke and P. Abbeel. Accelerated Methods for Deep Reinforcement Learning. CoRR,
abs/1803.02811, 2018. URL http://arxiv.org/abs/1803.02811.
[43] L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V. Mnih, T. Ward, Y. Doron, V. Firoiu, T. Harley, I. Dunning, S. Legg, and K. Kavukcuoglu. IMPALA: scalable distributed deep- rl with importance weighted actor-learner architectures. CoRR, abs/1802.01561, 2018. URL http://arxiv.org/abs/1802.01561.
[44] A. Nair, P. Srinivasan, S. Blackwell, C. Alcicek, R. Fearon, A. D. Maria, V. Panneershelvam, M. Suleyman, C. Beattie, S. Petersen, S. Legg, V. Mnih, K. Kavukcuoglu, and D. Silver. Mas- sively parallel methods for deep reinforcement learning. CoRR, abs/1507.04296, 2015. URL
10
http://arxiv.org/abs/1507.04296.
[45] D. Horgan, J. Quan, D. Budden, G. Barth-Maron, M. Hessel, H. van Hasselt, and D. Silver. Distributed Prioritized Experience Replay. In International Conference on Learning Repre- sentations, 2018.
[46] J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Y. Ng. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, 2012.
[47] M. Abadi et. al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/.
11
# A Real World Ablation Experiments: State, Action, and Reward Design
After prototyping ideas in our simulated setup, we take the best parameters discussed in Appendix C, and repeat those experiments in the real setup, to verify the same parameters carry over across domains. Since real-world ï¬netuning takes considerable robot time, all of these experiments were conducted with entirely off-policy training using a ï¬xed dataset of 580k grasps. The results are therefore in absolute terms worse than the ï¬nal results of our best policy, but are still useful for understanding the relative tradeoffs of various design choices.
State representation and its effects. Echoing results discussed in Appendix C we found that a rich state representation greatly impacts real robot performance. Matching simulated experiments, providing the image, gripper status, and height to bottom of the bin performs better than other repre- sentations. These experiments indicate that while hand-eye coordination can in principle be ï¬gured out purely from the image observation, explicitly adding domain-speciï¬c state features improves performance. We ï¬nd it very important that such low dimensional state features can be seamlessly integrated into our model, resulting in better performance and data efï¬ciency. All models are trained off-policy for 2.5M steps, with discount 0.9 and no reward penalty.
State Representation Image only Image + gripper status Image + gripper height status + Performance 53% 58% 70%
Table 2: Off-policy ablation over state representation.
Discount and Reward Deï¬nition To encourage faster grasps, we experimented with decreasing discount and adding a small reward penalty at each timestep. Again, matching sim results, a reward penalty did better than decreasing the discount factor. All models are trained off-policy on the same dataset for 2.5M steps.
State Representation Image only Image only Image only Discount Factor 0.9 0.7 0.9 Reward Penalty 0 0 -0.05 Performance 53% 28% 63%
Table 3: Off-policy ablation over discount and reward.
Learned Termination We compare a task-speciï¬c scripted termination condition with a task- agnostic termination action learned by the policy. Details of the scripted termination and learned termination conditions are in the Appendix D.4. The learned termination condition performs better in the off-policy case and on-policy case.
Termination Condition Automatic Learned Automatic Learned Training Regime off-policy off-policy on-policy joint ï¬ne- tuning on-policy joint ï¬ne- tuning Performance 81% 87% 95% 96%
Table 4: Off-policy and on-policy ablation of termination condition.
Quantitative experiments The performance of our algorithm is evaluated empirically in a set of grasping experiments. As discussed in Section 6.1 we follow two experimentation protocols. In one set of the experiments we let 7 robots execute 102 grasps each and then average grasp success. See results in column three of Table 1. After a successful trial, the object is randomly dropped back into the bin. In a second protocol we place 28 objects into a bin and let the robot perform 30 grasps without object replacement. This time successfully grasped objects are placed into a spare basket. For the bin unloading protocol we report success rates on the ï¬rst 10 / 20 / 30 grasps to report on increasingly difï¬cult objects remaining in the bin over the unloading process. As the unloading process progresses, large objects tend to be grasped ï¬rst and small objects are grasped last. The
12
(a) (b) (c)
Figure 5: Illustrations of the bin emptying experiment (a). The (a, right) shows a very small object getting stuck in the corner and requiring a few attempts to get grasped. The test objects for the bin emptying experiment (b, left) and grasping with replacement experiment (b, right). Note that the objects range greatly in size and appearance, and many are exceedingly difï¬cult to grasp due to small size, complex shapes, and varied material properties. Sequence (c) illustrates a grasp on the legs of a toy octopus, getting positive visual feedback and resulting in episode termination by the policy. During further ascent, the octopus slips out of the gripper, and the grasp is labeled as a failure. This is not a principled limitation of our method, as we could have terminated an episode at a larger height to get more robust feedback. Robotâs observation It is shown in (c).
policy is required to localize the small objects and then make a grasp. The ability of our policy to localize small objects in a very sparse scene is learned solely from data, and was not hand tuned.
The test sets, shown in Fig. 5 (b), consist of objects that pose a variety of challenges for grasping. They range in size from very small to larger, heavier objects. They vary in appearance, and include objects that are translucent and reï¬ective. They include objects with highly non-convex shapes, soft and deformable parts, and parts that are inherently unsuitable for grasping (such as the bristles on a brush). In analyzing the failure cases, we noticed the following patterns. In the test with replacement, many of the failed grasps were on the soft octopus toy (Fig. 5 (c)), where the robot would lift the octopus successfully by one of the tentacles, but it would slip out and fall back into the bin after lifting. Other failures were caused by the small round objects rolling into the corners of the bin, where they were difï¬cult to reach. Since we impose hard bounds on the workspace to prevent the robot from colliding with the environment, in many of these cases it was actually impossible for the robot to grasp the object successfully, regardless of the policy. In the bin emptying experiment, we found that many of the failures were due to the small black lock (Fig. 5 (a, right)) getting stuck in the corner of the bin, where it could not be grasped, resulting in multiple sequential failed attempts.
Emergent grasping behaviors We presented a challenging task of grasping a toy puzzle which needs to be broken down into pieces before any individual part could be grasped, as otherwise the puzzle will not ï¬t into the gripper, see Fig. 4 (a). This puzzle was not seen at training time. We treat grasp success as a proxy for frequency and efï¬ciency of a pregrasp manipulation, since the ï¬rst block cannot be grasped a without pregrasp manipulation. After every grasp, we reassemble the puzzle and place it at a different location. We compare our best baseline based on Levine et al. [27] to QT-Opt. The QT-Opt model succeeds in 19 out of 24 grasps (79%), while the prior method succeeds in 13 out of 24 grasps (54%).
13
In the second evaluation, we attempt grasps on a tennis ball, see Fig. 4 (e). The ï¬rst time the tennis ball is between the gripper ï¬ngers, we push the tennis ball out of the way. The ball cannot be grasped unless the model reacts to the new tennis ball position. The QT-Opt model succeeds in 28 of 38 grasps (74%). The prior model succeeds in 4 out of 25 grasps (16%).
This suggests that our policy learned better proactive and reactive behaviors than our baseline.
Clipped Double Q-Learning We ï¬nd that using Clipped Double Q-Learning [36] results in faster convergence in simulated experiments (Figure 9) and is crucial for good performance in real exper- iments (Table 5). Experiments used the scripted termination.
Q-learning Method Double Q-learning Clipped Double Q-learning Performance 63% 81%
Table 5: Off-policy performance with and without clipped Double-Q Learning.
Data efï¬ciency As discussed in Section 5 we collected 580k grasp attempts across 7 robots with a total of about 800 robot hours. Since we saved collected data on disk, we can study performance of a policy with less off-policy data. A dataset of 320k grasp attempts was generated by using data from the ï¬rst 440 robot hours of data collection. Table 6 shows performance of off-policy training, using the best model conï¬guration. With only 55% of the original dataset, we reached 78% grasp success, the same performance as our best supervised learning baseline, but using one third the number of grasps, and half the number of transitions. Further joint ï¬netuning would likely yield a ï¬nal policy that also reaches 96% grasp success, with higher data efï¬ciency but more on-robot joint ï¬netuning time.
Dataset Size 580k grasps 320k grasps Performance 87% 78%
Table 6: Data efï¬ciency.
# B Exploration and Dataset Bootstrapping
As is standard in Q-learning, we evaluate using one policy (evaluation policy) and collect training data with a different policy (exploration policy). Our evaluation policy Ïeval chooses each action by maximizing the Q-function value using our QT-Opt algorithm described in Section 4.2. For data collection, we used two different exploration policies Ïscripted, Ïnoisy at different stages of training. During the early stages of training, a policy that takes random actions would achieve reward too rarely to learn from, since the grasping task is a multi-stage problem and reward is sparse. This was indeed what we observed during early experimentation. For this reason, we collect our initial data for training using a scripted policy Ïscripted that successfully grasps 15-30% of the time. The Ïscripted simpliï¬es the multi-step exploration of the problem by randomly choosing an (x, y) coordinate above the table, lowering the open gripper to table level in a few random descent steps, closing the gripper, then returning to the original height in a few ascent steps.
We compared initial data collection with Ïscripted vs. initial data collection with Ïprior, a grasping model based on Levine et al. [27]. In our real-world experiments, we used Ïprior, but in a simulated comparison, we found that initializing training with either policy leads to the same ï¬nal performance and data efï¬ciency. The data generated by either policy has similar distributional properties, as discussed in Appendix C.1, which is sufï¬cient to bootstrap learning.
During the later stages of training, we switch to data collection with 7;,5:sy. This exploration policy uses epsilon-greedy exploration to trade off between choosing exploration actions or actions that maximize the Q-function estimate. The policy mois, chooses a random action with probability ⬠= 20%, otherwise the greedy action is chosen. To choose a random action, Tnoisy Samples a pose change t,r from a Gaussian with probability 75%, a toggle gripper action open, Zclose With probability 17%, and an episode termination e with probability 8%.
14
# C Simulated Experiments: Dataset Size, Off-Policy Training, MDP Design
Robot experiments at large scale are excessively more time-consuming, complex, and uncertain than experimentation in a simulated environment. We made extensive use of simulation for prototyping and ablation studies of our methods.
Our simulation environment is based on the Bullet Physics simulator [39] and mimics the setup shown in Fig. 2. The simulated environment uses the conï¬guration of the real Kuka IIWA arm, a similar looking bin, and a simu- lated over-the-shoulder camera. We used object models from the ShapeNet dataset [40], scaled down to graspable sizes. Using our scalable distributed learning infrastructure (as discussed in Section 4), we were able to gener- ate data with up to 1,000 virtual robots running in parallel, and conduct a large scale experiment within a few hours. Both simulation and real used the same input modality, neural net architecture, and method of robotic con- trol. Simulation was only used for prototyping, and all real world policies used only real data. We found that real world learning was generally much harder in terms of data requirements and time needed to train a good model, due to higher visual diversity, real world physics, and unmodeled properties of the real robot.
There are many parameters in our system which impact the ï¬nal performance, data efï¬ciency, gen- eralization and emergence of rich behaviours. We split the factors into three large groups: QT-Opt speciï¬c parameters, grasping parameters, and data efï¬ciency, which we discuss below.
QT-Opt speciï¬c parameters include hyperparameters for Polyak averaging and the method of computing Bellman error. We found that 0.9999 was the best Polyak averaging constant, and Double DQN performed better than Single DQN. Results are in Figure 7. Note that although asymptotic performance is similar, the settings chosen seem to provide faster convergence and lower variance in grasp success. The latter is important for real experiments, where it is costly to evaluate the policy at multiple points. We also found that the cross-entropy loss performed better than the standard squared difference loss.
Polyak Averaging Comparison (using Double DQN) DQN Comparison (using Polyak averaging 0.9999) Grasp Success Rate Grasp Success Rate â 00 --- 0.99 ~~~ 0.9999 â Single DON --- Double DQN 3 ng Staten of 3 2000000 ame ns nn 9 tn Hon nn
(a) All values of Polyak averaging provide stable training that reach similar performance, but higher values seem to converge faster and have smoother performance curves.
(b) Double DQN outperforms Single DQN on our task. Note that both are quite stable.
Figure 7: Comparison of Polyak averaging constants (a) and Single DQN vs Double DQN (b). Note these experiments do not include clipped Double Q-learning.
Figure 9 compares Clipped Double DQN to standard Double DQN, showing Clipped Double DQN does slightly better. Note that although the difference in sim was small, we found performance was signiï¬cantly better in the real world off-policy setting (see Table 5). The simulated setup uses 60 simulated, on-policy robots, which suggests the primary gains of Clipped Double DQN come when it is used with off-policy data.
Grasping parameters describe the task speciï¬c MDP design, i.e. state-action representations, closed-loop control and data generation policies. We found that differences in state representation as discussed in detail in Appendix D, greatly impacts performance. We compared models whose state is either a) just an image observation of the scene or b) a richer state comprising an image,
15
Grasp Guccess Rate of & eo 2 &@ & â Log Loss â L2Loss Ss 5 0 200000 400000 600000 800000 1000000 Training Steps (batch of 32)
Figure 8: Loss function comparison in simulation
08 a Grasp Guccess Rate g â Clipped Double-Q Learning â Double-0 Learning a0 0 200000 400000 600000 800000 1000000 Training Steps (batch of 32)
Figure 9: Comparison of DQN methods. See Appendix F.4 for clipped Double DQN deï¬nition.
the gripper aperture and distance between the gripper and the floor. Next, we studied discount factors and reward penalties. Finally, we compared a scripted episode stopping criterion to a learned termination action. In these experiments, we use 60 simulated robots, using Tscriptea policy for the first 120K gradient update steps, then switching to the Tnoisy policy with e = 0.2. These exploration policies are explained in Appendix B. The grasp performance is evaluated continuously and concurrently as training proceeds by running the 7re,; policy on 100 separate simulated robots and aggregating 700 grasps per policy for each model checkpoint.
State Image+gripper status+height Image only Image+gripper status+height Termination action No No No No Yes Intermediate reward -0.05 0 0 -0.05 -0.05 Discount factor 0.9 0.9 0.7 0.9 0.9 Perf. 300K steps 75% 68% 50% 25% 67% at Perf. at 1M steps 95% 92% 90% 81% 94%
Table 7: Simulation studies for tuning grasping task parameters
The results in Table 7 show that richer state representation results in faster convergence and better ï¬nal performance. A small reward penalty does much better than decreasing the discount factor. Finally, giving the policy greater control over the episode termination yields performance on par with the engineered termination condition.
We put special focus on the termination action: by letting the policy decide when to terminate, the policy has more potential to keep trying a grasp until it is conï¬dent the grasp is stable. We argue that explicitly learning a termination action would be beneï¬cial for many manipulation tasks, since it makes the design of the MDP easier and it forces the policy to truly understand the goal of the task.
16
10 08 2 s @ y 06 fe g FH Soa 4 â reward=-0.05, discount=0.9 6 â reward=0, discount=0.9 02 â reward=0, discount=0.7 â Image only ââ~ Learned termination 00 0 200000 400000 600000 800000 1000000 Training Steps (batch of 32)
Figure 10: Performance graphs of simulation studies for tuning grasping task parameters
s 0 5 mm step fmm step 4 | mm sep? mm step 9 » » » 20 B 8 Fy » we » , vvuldldullhe ey IML UI 002 00 000 0 mm step 10 i 1" 2 fmm step 1 mm step 3 mm steps Mab 1, . hs | 0 dk ald. 025 -020 -0.15 -0.10 -0.05 00s 010 03 02 01 00 : 020-0154
Figure 11: (top row) The distribution of z-coordinate of the pose translation of actions selected by the ran- domized scripted policy. The z-coordinate values are biased towards negative values during the ï¬rst few steps on the descent phase of the episode, then biased towards positive values at later steps on the ascend phase. Such a biased action distribution over states provides poor action space exploration and is insufï¬cient to learn a good Q-Function. (bottom row) The z-coordinate of actions selected by a suboptimal QT-Opt policy. The distribution is roughly centered around zero (red axis) at each step providing good action space exploration.
Data efï¬ciency Interestingly, our algorithm is more data efï¬cient than the supervised learning based algorithm from Levine et al. [27] work, achieving higher grasp success with fewer robots continuously generating training data (see Table 8).
Name QT-Opt (ours) Levine et al. [27] Sim Robots 30 60 60 280 1000 Success 88% 95% 55% 71% 85%
Table 8: Data efï¬ciency comparison in simulation.
We argue that the algorithm from Levine et al. [27] is less data efï¬cient because it optimizes a proxy objective of 1-step classiï¬cation accuracy, which values all data points equally. Our QT-Opt policy values data points based on how they inï¬uence reward. This focuses optimization on pivotal decision points that are very important to get right, such as learning when to close the gripper. Doing so lets the model optimize grasp success more efï¬ciently.
# C.1 Effect of Off-Policy Training on Performance
17
Num. from Dscripted 300k 0k 150k 200k transitions Num. from Dexplore 0k 300k 150k 260k transitions Success 20% 37% 86% 94%
In principle Q-Learning can learn an op- timal policy from off-policy data, as long as the data is sufï¬ciently diverse. Empiri- cally we proved that indeed a decent pol- icy can be trained from our ofï¬ine dataset. We are interested in answering the follow- ing questions: what statistical properties must the data distribution have, and which exploration policies might give rise to that distribution? To that end, we collected two dataset in simulation. The Dscripted dataset was collected by running a randomized scripted pol- icy Ïscripted, discussed in Appendix B, which averaged 30% grasp success. The second dataset Dexplore was collected by running a suboptimal QT-Opt policy Ïeval which is also averaged 30% grasp success. Table 9 shows off-policy performance on different amounts of data sampled from Dscripted and Dexplore. This experiment shows that given 300k transitions, it is impossible to learn solely from the Dscripted data, or solely from the Dexplore data, but it is possible to learn with data from both distributions. This shows the importance of collecting off-policy data from a mix of policies with different behavior. The Ïscripted is a special randomized scripted policy which sam- ples actions in a very biased way towards successful grasps for initial bootstrapping. As a result, it explores state-action space very poorly, yielding only speciï¬c greedy actions for certain states. In contrast, a suboptimal Ïeval makes a lot of suboptimal actions thus resulting in good exploration of the action space. Fig. 11 visualizes the distribution of the z-coordinate of the action translation in the two datasets, showing that Ïevalâs actions have less downward bias. We saw similar differences in distributions over other components of the action, like the gripper action. In our real world ex- periments, early policies had very poor success, as the data was coming primarily from Ïscripted, hence resulting in poor exploration of the action space. However, once we collected enough sufï¬- ciently random data, we started getting much better grasp success. We expect this pattern to hold for other real world tasks using QT-Opt: initial performance might be low, but continued data collection should eventually give rise to an appropriately diverse data distribution resulting in a good policy.
# D Grasping MDP: State Space, Action Space, and Reward Evaluation
The goal in designing the MDP is to provide a framework in which an agent may learn the nec- essary hand-eye coordination to reach, grasp, and lift objects successfully. Our sensors include a 640x512 RGB camera and joint position sensors in arm and gripper. In the following we describe representation of state, action and reward and discuss the stopping criterion of our grasping task.
# D.1 Observation Space
To provide image observations J, that capture maximal information about the state of the environ- ment, we mounted the camera to the shoulder of the robot overlooking the entire workspace (see Fig. 2). In practice, we also observed that our agent was able to learn more effectively when obser- vations also include some proprioceptive state. Specifically, this state includes a binary open/closed indicator of gripper aperture and the scalar height of the gripper above the bottom of the tray. The full observation is be defined as s; = (Iz, Gaperture,t» Jheight,t)- The model-input J; is a 472x472 crop of the full-size image with random crop anchor. This way the model is discouraged from relying on a rigid camera-to-base transformation which cannot be assumed across several robots. Instead, the policy is forced to learn about the presence and the view of the gripper and impact of the actions in the environment, which is critical for the emergence of closed-loop self-corrective behaviours. We also apply image augmentation. The brightness, contrast, and saturation are adjusted by sampling uniformly from [â0.125, 0.125], [0.5, 1.5], and [0.5, 1.5] respectively. To stay consistent, the same augmentation is applied to the images in the current state s and next state sâ. Random cropping and image augmentation is only done at train time. At inference time, no augmentation is used and we always crop to the center 472x472 square of the image.
18
# D.2 Action Space
The agentâs action comprises a gripper pose displacement, and an open/close command. The gripper pose displacement is a difference between the current pose and the desired pose in Cartesian space, encoded as translation tt â R3, and vertical rotation encoded via a sine-cosine encoding rt â R2. A gripper open/close command is encoded as one-hot vector [gclose,t, gopen,t] â {0, 1}2. In addition, as this is an episodic task with a ï¬nite horizon, our agent has to decide when to terminate. For a baseline policy, we implement a heuristic stopping criterion that triggers when the arm is holding an object above a threshold height. Such a heuristic is task-speciï¬c and might bias learning in a sub-optimal way. Thus, we introduce an additional action component et which allows our policy to learn a stopping criterion and decide when to terminate autonomously. The full action is deï¬ned as at = (tt, rt, gclose,t, gopen,t, et).
# D.3 Reward Function
The agent is given a reward of 1 at the end of an episode if it has successfully grasped an object from the bin. In all other cases the reward is 0. In order to detect a successful grasp we use an image subtraction test, see Fig. 12. An image of the scene is captured after the grasp attempt with the arm moved out of camera view. Then a second image is taken after attempting to drop the grasped object into the bin. If no object was grasped, these two images would be identical. Otherwise, if an object was picked up, certain pixels in the two images will be different. A grasp is labeled success if the image subtraction is above a threshold.
Because we use a simple image subtraction test to compute the reward, in practice the labeling is not perfect. We analyzed the failed grasps from the model that reaches 96% grasp suc- cess, and discovered that out of 28 misgrasps, 3 were actually successful grasps that have been classiï¬ed improperly by our image subtraction test. Our background subtraction test might not detect small objects, making it vulnerable to false negative registrations. This shows that our learning algorithm can train a very good policy even with a small noise in the reward function.
°
# D.4 Grasp execution and termination condition
# Algorithm 1 Grasping control-loop
1: Pick a policy policy. 2: Initialize a robot. 3: while step < N and not terminate episode do 4: 5: 6: 7: 8: 9: 10: end while
s = robot.CaptureState() a = policy.SelectAction(s) robot.ExecuteCommand(a) terminate episode = e {Termination action e is either learned or decided heuristically.} r = robot.ReceiveReward() emit(s, a, r)
With state, action and reward deï¬ned in the sections above, we now describe the control-loop ex- ecuted on our robots. As described in Algorithm 1 we control our manipulator by closing a loop around visual input and gripper commands. At the end of an episode we detect grasp success and return the sequence of state-action pairs to be fed back into the learning pipeline or stored ofï¬ine. Depending on the executed policy, our termination action e is either learned or decided heuristically as we discuss below.
19
# (left)
D D p> DZ oO oO oO oD aod aod aod ao] oO oO oO oO 4 4 ROR ES} ES} CS) S = @ = oS) = 9 Grasping Value Function 5b 3 = 3°83 53 (7) a % 3 _ *& a 5 eB ae 2-3-2-23>@>2-8 â2-372 > Qo(s, a) > gas ® 5S ww» > & onic & CS < Ss * | x = = hele I, (472, 472, 3) legless Cartesian Vector > Gripper Rotation râ>> Fy a Open Gripper donn Pe oy n Fy Target Value Function fe) rol Close inner dame Pe -B-L -2-B arg max Q,(sâ, aâ) Terminate Episode > oo 3 3 on al s Gripper Open? gayest Pe 2 Gripper Height Yocishte >
Figure 13: The architecture of grasping Q-Function. The input image is processed by a stack of convolutional layers before the introduction of the action and vector-valued state variables (gripper status and height). These are processed by several fully connected layers, tiled over the width and height dimension of the convolutional map, and added to it. The resulting convolutional map is further processed by a number of convolutional layers until the output. The output is gated through a sigmoid, such that our Q-values are always in the range [0, 1].
Scripted termination condition Initial experiments used a heuristic termination condition to de- termine when the policy is done. The heuristic detects when the gripper is holding an object, and the policy is continuing to command upward actions after it has passed a height threshold.
# Algorithm 2 Scripted termination condition
1: termination height = 0.13 2: s = robot.GetState() 3: gripper height = s.height of gripper 4: gripper closed = s.gripper status=âCLOSEDâ 5: a = robot.GetAction(s) 6: next action height = a.translation.z 7: if gripper closed and gripper height > termination height and next action height > gripper height then terminate = True 8: 9: else 10: 11: end if terminate = False
Learned termination condition In principle, a policy may use the camera image to determine if it has successfully grasped and raised an object. It may inform such knowledge through a discrete termination action. To learn the termination action, the reward structure must be slightly changed, reward = 1 is assigned when grasp success = T rue and gripper termination height > threshold. This enforces the policy to indicate episode termination only after the gripper lifts the object, which makes it easier to provide visual feedback for robust grasps; objects will fall out of a brittle grasp during the lifting process, and terminating the grasp after the object falls gives richer negative examples.
# E Q-Function Neural Network Architecture
We model the Q-function as a large deep neural network whose architecture, shown in Figure 13, is inspired by the one presented in [27]. The network takes the monocular RGB image component of the state s as input, and processes it with 7 convolutional layers. We transform actions a and additional state features (gaperture, gheight) with fully-connected layers, then merge them with visual features by broadcasted element-wise addition. After fusing state and action representations, the Q value Qθ(s, a) is modeled by 9 more convolution layers followed by two fully-connected layers.
20
Replay Buffer Pus! a train Pul Log Replay sh Hp tine butter | [ait ~100x - online Hel buffer Weighted Sampling GPU Trainer C Send gi Store Real Robot, running Bellman Update job. ays, a) MDP Recomputes Q-Targets parameters xx =7000% t i Pull model weights _}â4
Figure 14: Architecture of the QT-Opt distributed reinforcement learning algorithm.
Each convolution and fully-connected layer uses batch normalization [41] and the ReLU nonlinear- ity. The value function handles mixtures of discrete actions (open, close, terminate) and continuous actions (Cartesian vector and gripper rotation), making the same state-action space amenable to learning future tasks like placing and stacking. In Table 2, we demonstrate empirically that the model performs better when provided with redundant state representations (gaperture, gheight). All weights are initialized with truncated normal random variables (Ï = .01) and L2-regularized with a coefï¬cient of 7eâ5. Models are trained with SGD with momentum, using learning rate 0.0001 and momentum 0.9.
# F QT-Opt Distributed Reinforcement Learning System Design
Prior works [42, 43] have explored new reinforcement learning algorithms that allow compute par- allelization to accelerate training. We propose a parallel and distributed learning algorithm, inspired by [44], that can scale easily on cloud-based infrastructures. Fig. 14 is a diagram of the full dis- tributed system.
This distributed design of our QT-Opt algorithm was important for several reasons:
(a) We use high-resolution images. Trying to store all transitions in the memory of a single ma- chine is infeasible. Thus we employ a distributed replay buffer, which lets us store hundreds of thousands of transitions across several machines.
(b) The Q-network is quite large, and distributing training across multiple GPUs drastically in- creases research velocity by reducing time to convergence. Similarly, in order to support large scale simulated experiments, the design has to support running hundreds of simulated robots that cannot ï¬t on a single machine.
(c) Decoupling training jobs from data generation jobs allows us to treat training as data-agnostic, making it easy to switch between simulated data, off-policy real data, and on-policy real data. It also lets us scale the speed of training and data generation independently.
Similar to [45], our distributed replay buffer is shared across all agents, although we do not use prioritized replay. We distribute training across 10 GPUs, using asynchronous SGD with momentum as proposed by [46]. Models were trained using TensorFlow [47]. This system allows us to train the Q-function at 40 steps per second with a batch size of 32 across 10 NVIDIA P100 GPUs.
# F.1 Online Training
Online agents (real or simulated robots) collect data from the environment. The policy used is the Polyak averaged weights Q¯θ1(s, a) and the weights are updated every 10 minutes. That data is pushed to a distributed replay buffer, which we call the âonline bufferâ. The data is also persisted to disk for future ofï¬ine training.
21
# F.2 Ofï¬ine Training from Logs
To support ofï¬ine training, we run a log replay job. This job reads data sequentially from disk for efï¬ciency reasons. It replays saved episodes as if an online agent had collected that data. This lets us seamlessly merge off-policy data with on-policy data collected by online agents as described in Appendix F.3.
Ofï¬ine data comes from all previously run experiments. In total, we collected 580k grasps of ofï¬ine data, which took 4 terabytes of disk space. In fully off-policy training, we can train the policy by loading all data with the log replay job, letting us train without having to interact with the real environment.
Despite the scale of the distributed replay buffer, we still canât ï¬t the entire dataset into memory. In order to be able to visit each datapoint uniformly, we keep continuously running the Log Replay to refresh the in-memory data residing in the Replay Buffer. In practice we found that running at least 100 Log Replay jobs was important to mitigate correlations between consecutive episodes and drive sufï¬ciently diverse data to the Replay Buffer.
# F.3 Online Joint Finetuning After Ofï¬ine Training
In practice, we begin with off-policy training to initialize a good policy, and then switch to on- policy joint ï¬netuning. To do so, we ï¬rst train fully off-policy by using the Log Replay job to replay episodes from prior experiments. After training off-policy for enough time, we restart QT-Opt, training with a mix of on-policy and off-policy data. We train off-policy for 5M to 15M steps.
Real on-policy data is generated by 7 KUKA LBR IIWA robot arms where the weights of the policy Q¯θ1(s, a) are updated every 10 minutes. Compared to the ofï¬ine dataset, the rate of on-policy data production is much lower and the data has less visual diversity. However, the on-policy data also contains real-world interactions that illustrate the faults in the current policy. To avoid overï¬tting to the initially scarce on-policy data, we gradually ramp up the fraction of on-policy data from 1% to 50% over the ï¬rst 1M gradient update steps of joint ï¬netuning training.
Since the real robots can stop unexpectedly (e.g., due to hardware faults), data collection can be sporadic, potentially with delays of hours or more if a fault occurs without any operator present. This can unexpectedly cause a signiï¬cant reduction in the rate of data collection. To mitigate this, on-policy training is also gated by a training balancer, which enforces a ï¬xed ratio between the number of joint ï¬netuning gradient update steps and number of on-policy transitions collected. The ratio was deï¬ned relative to the speed of the GPUs and of the robots, which changed over time. We did not tune this ratio very carefully, and in practice the ratio ranged from 6:1 to 11:1.
# F.4 Distributed Bellman Update
To stabilize deep Q-Learning, we use a target network, as discussed in Section 4. Since target net- works parameters typically lag behind the online network when computing TD error, the Bellman backup can actually be performed asynchronously in a separate process. We propose a novel algo- rithm that computes r(s,a) + yV(sâ) in parallel on separate CPU machines, storing the output of those computations in an additional buffer named âtrainâ in our distributed replay buffer. We call this job the Bellman updater, and have been able to scale this computation up to 1,000 machines using around 14k cores. Let 6 be the parameters of the current Q-network, 6 be the parameters of the target Q-network, and p(s, a,s) be the distribution of transitions in the replay buffer. As discussed in 4.1, our Bellman backup is formulated as:
E(0) = E¢s,a,sâ)~p(s,a,sâ) LD (Qa(s, a), 7(s,a) + WV (sâ))].
â¼
Note that because we use several Bellman updater replicas, each replica will load a new target network at different times. All replicas push the Bellman backup to the shared replay buffer in the âtrain bufferâ. This makes our target Q-values effectively generated by an ensemble of recent target
22
networks, sampled from an implicit distribution Rt. Expanding the clipped Double DQN estimation of value gives the following objective:
E(0) = E(s,a,8")~p(s,a.s") (EG, 2) (@r.82) [D (Qoe(s,a), r(s, a) + Vo, 6, (s)]] (2)
where Vp, 9, (sâ) is estimated using clipped Double DQN:
Vo, a,(8') = min Qg,(s', arg max Qp, (sâ,aâ)) , i=1,2 a!
Even if a Q-function estimator is unbiased, variance in TD error estimates are converted into over- estimation bias during the Bellman backup (via the max operator). We hypothesize that ensembling via a distribution of lagging target networks stabilizes training by reducing the variance (and thereby reduces bias in target values). We also decrease overestimation bias by using Clipped Double- Q Learning and a damped discount factor of γ = .9. We found this training to be stable and reproducible. The target network Q¯θ1 is computed by doing a Polyak averaging [32] with a decay factor of 0.9999, while Q¯θ2 is lagged on average by about 6000 gradient steps. The exact amount of delay varies because of the asynchronous nature of the system.
# F.5 Distributed Replay Buffer
The distributed replay buffer supports having named replay buffers. We create three named buffers: âonline bufferâ holds online data, âofï¬ine bufferâ holds ofï¬ine data, and âtrain bufferâ stores Q- targets computed by the Bellman updater. The replay buffer interface supports weighted sampling from the named buffers, which is useful when doing on-policy joint ï¬netuning. The distributed replay buffer is spread over 6 workers, which each contain 50k transitions, totalling 6 â 50k = 300k transitions. All buffers are FIFO buffers where old values are removed to make space for new ones if the buffer is full.
23 | {
"id": "1803.09956"
} |