diff --git "a/challenge_woSectionHeader.json" "b/challenge_woSectionHeader.json" new file mode 100644--- /dev/null +++ "b/challenge_woSectionHeader.json" @@ -0,0 +1,85883 @@ +{ + "name": "SciDuet-ACL-Test-Challenge-woSectionHeader", + "data": [ + { + "slides": { + "0": { + "title": "Background Semantic Hashing", + "text": [ + "Fast and accurate similarity search (i.e., finding documents from a large corpus that are most similar to a query of interest) is at the core of many information retrieval applications;", + "One strategy is to represent each document as a continuous vector, such as Paragraph", + "Cosine similarity is typically employed to measure relatedness;", + "Semantic hashing is an effective approach: the similarity between two documents can be evaluated by simply calculating pairwise Hamming distances between hashing (binary) codes;" + ], + "page_nums": [ + 1 + ], + "images": [] + }, + "1": { + "title": "Motivation and contributions", + "text": [ + "Existing semantic hashing approaches typically require two-stage training procedures (e.g. continuous representations are crudely binarized after training);", + "A vast amount of unlabeled data is not fully leveraged for learning binary document representations.", + "We propose a simple and generic neural architecture for text hashing that learns binary latent codes for documents, which can be trained in an end-to-end manner;", + "We leverage a Neural Variational Inference (NVI) framework, which introduces data-dependent noise during training and makes effective use of unlabeled information."
+ ], + "page_nums": [ + 2 + ], + "images": [] + }, + "4": { + "title": "Framework components Injecting Data dependent Noise to z", + "text": [ + "We found that injecting random Gaussian noise into z makes the decoder a more favorable regularizer for the binary codes;", + "The objective function in (4) can be written in a form similar to the rate-distortion tradeoff:" + ], + "page_nums": [ + 8 + ], + "images": [ + "figure/image/955-Figure1-1.png" + ] + }, + "8": { + "title": "Experiments Ablation study", + "text": [ + "Figure: The precisions of the top 100 retrieved documents for NASH-DN with stochastic or deterministic binary latent variables.", + "Table: Ablation study with different encoder/decoder networks.", + "Leveraging stochastic sampling during training generalizes better;", + "Linear decoder networks give rise to better empirical results." + ], + "page_nums": [ + 12 + ], + "images": [ + "figure/image/955-Table6-1.png", + "figure/image/955-Figure3-1.png" + ] + }, + "9": { + "title": "Experiments Qualitative Analysis", + "text": [ + "Figure: Examples of learned compact hashing codes on the 20Newsgroups dataset.", + "NASH typically compresses documents with shared topics into very similar binary codes."
+ ], + "page_nums": [ + 13 + ], + "images": [ + "figure/image/955-Table5-1.png" + ] + } + }, + "paper_title": "NASH: Toward End-to-End Neural Architecture for Generative Semantic Hashing", + "paper_id": "955", + "paper": { + "title": "NASH: Toward End-to-End Neural Architecture for Generative Semantic Hashing", + "abstract": "Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled ad-hoc. In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly backpropagated through the discrete latent variable to optimize the hash function. We also draw connections between the proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models in both unsupervised and supervised scenarios.", + "text": [ + { + "id": 0, + "string": "Introduction The problem of similarity search, also called nearest-neighbor search, consists of finding documents from a large collection of documents, or corpus, which are most similar to a query document of interest." + }, + { + "id": 1, + "string": "Fast and accurate similarity search is at the core of many information retrieval applications, such as plagiarism analysis (Stein et al., 2007) , collaborative filtering (Koren, 2008) , content-based multimedia retrieval (Lew et al., 2006) and caching (Pandey et al., 2009) ." + }, + { + "id": 2, + "string": "Semantic hashing is an effective approach for fast similarity search (Salakhutdinov and Hinton, 2009; Zhang"
+ }, + { + "id": 3, + "string": "et al., 2010; Wang et al., 2014) ." + }, + { + "id": 4, + "string": "By representing every document in the corpus as a similarity-preserving discrete (binary) hashing code, the similarity between two documents can be evaluated by simply calculating pairwise Hamming distances between hashing codes, i.e., the number of bits that are different between two codes." + }, + { + "id": 5, + "string": "Given that today, an ordinary PC is able to execute millions of Hamming distance computations in just a few milliseconds (Zhang et al., 2010) , this semantic hashing strategy is very computationally attractive." + }, + { + "id": 6, + "string": "While considerable research has been devoted to text (semantic) hashing, existing approaches typically require two-stage training procedures." + }, + { + "id": 7, + "string": "These methods can be generally divided into two categories: (i) binary codes for documents are first learned in an unsupervised manner, then l binary classifiers are trained via supervised learning to predict the l-bit hashing code (Zhang et al., 2010; Xu et al., 2015) ; (ii) continuous text representations are first inferred, which are binarized as a second (separate) step during testing (Wang et al., 2013; Chaidaroon and Fang, 2017) ." + }, + { + "id": 8, + "string": "Because the model parameters are not learned in an end-to-end manner, these two-stage training strategies may result in suboptimal local optima." + }, + { + "id": 9, + "string": "This happens because different modules within the model are optimized separately, preventing the sharing of information between them." + }, + { + "id": 10, + "string": "Further, in existing methods, binary constraints are typically handled ad-hoc by truncation, i.e., the hashing codes are obtained via direct binarization from continuous representations after training."
+ }, + { + "id": 11, + "string": "As a result, the information contained in the continuous representations is lost during the (separate) binarization process." + }, + { + "id": 12, + "string": "Moreover, training different modules (mapping and classifier/binarization) separately often requires additional hyperparameter tuning for each training stage, which can be laborious and time-consuming." + }, + { + "id": 13, + "string": "In this paper, we propose a simple and generic neural architecture for text hashing that learns binary latent codes for documents in an end-to-end manner." + }, + { + "id": 14, + "string": "Inspired by recent advances in neural variational inference (NVI) for text processing (Miao et al., 2016; Yang et al., 2017; Shen et al., 2017b) , we approach semantic hashing from a generative model perspective, where binary (hashing) codes are represented as either deterministic or stochastic Bernoulli latent variables." + }, + { + "id": 15, + "string": "The inference (encoder) and generative (decoder) networks are optimized jointly by maximizing a variational lower bound to the marginal distribution of input documents (corpus)." + }, + { + "id": 16, + "string": "By leveraging a simple and effective method to estimate the gradients with respect to discrete (binary) variables, the loss term from the generative (decoder) network can be directly backpropagated into the inference (encoder) network to optimize the hash function." + }, + { + "id": 17, + "string": "Motivated by the rate-distortion theory (Berger, 1971; Theis et al., 2017) , we propose to inject data-dependent noise into the latent codes during the decoding stage, which adaptively accounts for the tradeoff between minimizing rate (number of bits used, or effective code length) and distortion (reconstruction error) during training."
+ }, + { + "id": 18, + "string": "The connection between the proposed method and rate-distortion theory is further elucidated, providing a theoretical foundation for the effectiveness of our framework." + }, + { + "id": 19, + "string": "Summarizing, the contributions of this paper are: (i) to the best of our knowledge, we present the first semantic hashing architecture that can be trained in an end-to-end manner; (ii) we propose a neural variational inference framework to learn compact (regularized) binary codes for documents, achieving promising results on both unsupervised and supervised text hashing; (iii) the connection between our method and rate-distortion theory is established, from which we demonstrate the advantage of injecting data-dependent noise into the latent variable during training." + }, + { + "id": 20, + "string": "Related Work Models with discrete random variables have attracted much attention in the deep learning community (Jang et al., 2016; Maddison et al., 2016; van den Oord et al., 2017; Li et al., 2017; Shu and Nakayama, 2017) ." + }, + { + "id": 21, + "string": "Some of these structures are more natural choices for language or speech data, which are inherently discrete."
+ }, + { + "id": 22, + "string": "More specifically, for natural language processing (NLP), although significant research has been made to learn continuous deep representations for words or documents (Mikolov et al., 2013; Kiros et al., 2015) , discrete neural representations have been mainly explored in learning word
embeddings (Shu and Nakayama, 2017; Chen et al., 2017) ." + }, + { + "id": 23, + "string": "In these recent works, words are represented as a vector of discrete numbers, which are very efficient storage-wise, while showing comparable performance on several NLP tasks, relative to continuous word embeddings." + }, + { + "id": 24, + "string": "However, discrete representations that are learned in an end-to-end manner at the sentence or document level have been rarely explored." + }, + { + "id": 25, + "string": "Also, there is a lack of strict evaluation regarding their effectiveness." + }, + { + "id": 26, + "string": "Our work focuses on learning discrete (binary) representations for text documents." + }, + { + "id": 27, + "string": "Further, we employ semantic hashing (fast similarity search) as a mechanism to evaluate the quality of learned binary latent codes." + }, + { + "id": 28, + "string": "The Proposed Method Hashing under the NVI Framework Inspired by the recent success of variational autoencoders for various
NLP problems (Miao et al., 2016; Bowman et al., 2015; Yang et al., 2017; Miao et al., 2017; Shen et al., 2017b) , we approach the training of discrete (binary) latent variables from a generative perspective." + }, + { + "id": 29, + "string": "Let x and z denote the input document and its corresponding binary hash code, respectively." + }, + { + "id": 30, + "string": "Most of the previous text hashing methods focus on modeling the encoding distribution p(z|x), or hash function, so the local/global pairwise similarity structure of documents in the original space is preserved in latent space (Zhang et al., 2010; Wang et al., 2013; Xu et al., 2015; Wang et al., 2014) ." + }, + { + "id": 31, + "string": "However, the generative (decoding) process of reconstructing x from binary latent code z, i.e., modeling distribution p(x|z), has been rarely considered." + }, + { + "id": 32, + "string": "Intuitively, latent codes learned from a model that accounts for the generative term should naturally encapsulate key semantic information from x because the generation/reconstruction objective is a function of p(x|z)." + }, + { + "id": 33, + "string": "In this regard, the generative term provides a natural training objective for semantic hashing." + }, + { + "id": 34, + "string": "We define a generative model that simultaneously accounts for both the encoding distribution, p(z|x), and decoding distribution, p(x|z), by defining approximations q φ (z|x) and q θ (x|z), via inference and generative networks, g φ (x) and g θ (z), parameterized by φ and θ, respectively." + }, + { + "id": 35, + "string": "Specifically, x ∈ Z_+^{|V|} is the bag-of-words (count) representation for the input document, where |V| is the vocabulary size." + }, + { + "id": 36, + "string": "Notably, we can also employ other count weighting schemes as input features x, e.g., the term frequency-inverse document frequency (TFIDF) (Manning et al., 2008) ."
+ }, + { + "id": 37, + "string": "For the encoding distribution, a latent variable z is first inferred from the input text x, by constructing an inference network g φ (x) to approximate the true posterior distribution p(z|x) as q φ (z|x)." + }, + { + "id": 38, + "string": "Subsequently, the decoder network g θ (z) maps z back into input space to reconstruct the original sequence x as x̂, approximating p(x|z) as q θ (x|z) (as shown in Figure 1 )." + }, + { + "id": 39, + "string": "This cyclic strategy, x → z → x̂ ≈ x, provides the latent variable z with a better ability to generalize (Miao et al., 2016) ." + }, + { + "id": 40, + "string": "To tailor the NVI framework for semantic hashing, we cast z as a binary latent variable and assume a multivariate Bernoulli prior on z: p(z) ∼ Bernoulli(γ) = ∏_{i=1}^{l} γ_i^{z_i} (1 − γ_i)^{1−z_i} , where γ_i ∈ [0, 1] is component i of vector γ." + }, + { + "id": 41, + "string": "Thus, the encoding (approximate posterior) distribution q φ (z|x) is restricted to take the form q φ (z|x) = Bernoulli(h), where h = σ(g φ (x)), σ(·) is the sigmoid function, and g φ (·) is the (nonlinear) inference network specified as a multilayer perceptron (MLP)." + }, + { + "id": 42, + "string": "As illustrated in Figure 1 , we can obtain samples from the Bernoulli posterior either deterministically or stochastically." + }, + { + "id": 43, + "string": "Suppose z is an l-bit hash code; for the deterministic binarization, we have, for i = 1, 2, . . . , l: z_i = 1_{σ(g_φ^i(x)) > 0.5} = (sign(σ(g_φ^i(x)) − 0.5) + 1) / 2 , (1) where z is the binarized variable, and z_i and g_φ^i(x) denote the i-th dimension of z and g φ (x), respectively." + }, + { + "id": 44, + "string": "The standard Bernoulli sampling in (1) can be understood as setting a hard threshold at 0.5 for each representation dimension, therefore, the binary latent code is generated deterministically."
+ }, + { + "id": 45, + "string": "Another strategy to obtain the discrete variable is to binarize h in a stochastic manner: z_i = 1_{σ(g_φ^i(x)) > µ_i} = (sign(σ(g_φ^i(x)) − µ_i) + 1) / 2 , (2) where µ_i ∼ Uniform(0, 1)." + }, + { + "id": 46, + "string": "Because of this sampling process, we do not have to assume a predefined threshold value like in (1)." + }, + { + "id": 47, + "string": "Training with Binary Latent Variables To estimate the parameters of the encoder and decoder networks, we would ideally maximize the marginal distribution p(x) = ∫ p(z) p(x|z) dz." + }, + { + "id": 48, + "string": "However, computing this marginal is intractable in most cases of interest." + }, + { + "id": 49, + "string": "Instead, we maximize a variational lower bound." + }, + { + "id": 50, + "string": "This approach is typically employed in the VAE framework (Kingma and Welling, 2013) : L_vae = E_{q_φ(z|x)} [ log ( q_θ(x|z) p(z) / q_φ(z|x) ) ] (3) = E_{q_φ(z|x)} [log q_θ(x|z)] − D_KL (q_φ(z|x) || p(z)), where the Kullback-Leibler (KL) divergence D_KL (q_φ(z|x) || p(z)) encourages the approximate posterior distribution q φ (z|x) to be close to the multivariate Bernoulli prior p(z)." + }, + { + "id": 51, + "string": "In this case, D_KL (q_φ(z|x) || p(z)) can be written in closed-form as a function of g φ (x): D_KL = g φ (x) log ( g φ (x) / γ ) + (1 − g φ (x)) log ( (1 − g φ (x)) / (1 − γ) ) ." + }, + { + "id": 52, + "string": "(4) Note that the gradient for the KL divergence term above can be evaluated easily." + }, + { + "id": 53, + "string": "For the first term in (3) , we should in principle estimate the influence of µ_i in (2) on q θ (x|z) by averaging over the entire (uniform) noise distribution." + }, + { + "id": 54, + "string": "However, a closed-form distribution does not exist since it is not possible to enumerate all possible configurations of z, especially when the latent dimension is large."
+ }, + { + "id": 55, + "string": "Moreover, discrete latent variables are inherently incompatible with backpropagation, since the derivative of the sign function is zero for almost all input values." + }, + { + "id": 56, + "string": "As a result, the exact gradients of L_vae w.r.t. the inputs before binarization would be essentially all zero." + }, + { + "id": 57, + "string": "To estimate the gradients for binary latent variables, we utilize the straight-through (ST) estimator, which was first introduced by Hinton (2012) ." + }, + { + "id": 58, + "string": "So motivated, the strategy here is to simply backpropagate through the hard threshold by approximating the gradient ∂z/∂φ as 1." + }, + { + "id": 59, + "string": "Thus, we have: dE_{q_φ(z|x)}[log q_θ(x|z)] / dφ = ( dE_{q_φ(z|x)}[log q_θ(x|z)] / dz ) · ( dz / dσ(g_φ^i(x)) ) · ( dσ(g_φ^i(x)) / dφ ) ≈ ( dE_{q_φ(z|x)}[log q_θ(x|z)] / dz ) · ( dσ(g_φ^i(x)) / dφ ) (5) Although this is clearly a biased estimator, it has been shown to be a fast and efficient method relative to other gradient estimators for discrete variables, especially for the Bernoulli case (Bengio et al., 2013; Hubara et al., 2016; Theis et al., 2017) ." + }, + { + "id": 60, + "string": "With the ST gradient estimator, the first loss term in (3) can be backpropagated into the encoder network to fine-tune the hash function g φ (x)." + }, + { + "id": 61, + "string": "For the approximate generator q θ (x|z) in (3) , let x_i denote the one-hot representation of the i-th word within a document." + }, + { + "id": 62, + "string": "Note that x = Σ_i x_i is thus the bag-of-words representation for document x." + }, + { + "id": 63, + "string": "To reconstruct the input x from z, we utilize a softmax decoding function written as: q(x_i = w|z) = exp(z^T E x_w + b_w) / Σ_{j=1}^{|V|} exp(z^T E x_j + b_j) , (6) where q(x_i = w|z) is the probability that x_i is word w ∈ V , q_θ(x|z) = ∏_i q(x_i = w|z) and θ = {E, b_1 , ." + }, + { + "id": 64, + "string": "." + }, + { + "id": 65, + "string": "."
+ }, + { + "id": 66, + "string": ", b_{|V|} }." + }, + { + "id": 67, + "string": "Note that E ∈ R^{d×|V|} can be interpreted as a word embedding matrix to be learned, and {b_i}_{i=1}^{|V|} denote bias terms." + }, + { + "id": 68, + "string": "Intuitively, the objective in (6) encourages the discrete vector z to be close to the embeddings for every word that appears in the input document x." + }, + { + "id": 69, + "string": "As shown in Section 5.3.1, meaningful semantic structures can be learned and manifested in the word embedding matrix E. Injecting Data-dependent Noise to z To reconstruct text data x from sampled binary representation z, a deterministic decoder is typically utilized (Miao et al., 2016; Chaidaroon and Fang, 2017 )." + }, + { + "id": 70, + "string": "Inspired by the success of employing stochastic decoders in image hashing applications (Dai et al., 2017; Theis et al., 2017) , in our experiments, we found that injecting random Gaussian noise into z makes the decoder a more favorable regularizer for the binary codes, which in practice leads to stronger retrieval performance." + }, + { + "id": 71, + "string": "Below, we invoke the rate-distortion theory to perform some further analysis, which leads to interesting findings." + }, + { + "id": 72, + "string": "Learning binary latent codes z to represent a continuous distribution p(x) is a classical information theory concept known as lossy source coding." + }, + { + "id": 73, + "string": "From this perspective, semantic hashing, which compresses an input document into compact binary codes, can be cast as a conventional rate-distortion tradeoff problem (Theis et al., 2017; Ballé et al., 2016) : min −log_2 R(z) [Rate] + β · D(x, x̂) [Distortion] , (7) where rate and distortion denote the effective code length, i.e., the number of bits used, and the distortion introduced by the encoding/decoding sequence, respectively."
+ }, + { + "id": 74, + "string": "Further, x̂ is the reconstructed input and β is a hyperparameter that controls the tradeoff between the two terms." + }, + { + "id": 75, + "string": "Consider the case where we have a Bernoulli prior on z as p(z) ∼ Bernoulli(γ), and x conditionally drawn from a Gaussian distribution p(x|z) ∼ N (Ez, σ^2 I)." + }, + { + "id": 76, + "string": "Here, E = {e_i}_{i=1}^{|V|} , where e_i ∈ R^d can be interpreted as a codebook with |V| codewords." + }, + { + "id": 77, + "string": "In our case, E corresponds to the word embedding matrix as in (6) ." + }, + { + "id": 78, + "string": "For the case of stochastic latent variable z, the objective function in (3) can be written in a form similar to the rate-distortion tradeoff: min E_{q_φ(z|x)} [ −log q_φ(z|x) [Rate] + (1/(2σ^2)) ||x − Ez||_2^2 [Distortion] + C ] , (8) where C is a constant that encapsulates the prior distribution p(z) and the Gaussian distribution normalization term." + }, + { + "id": 79, + "string": "Notably, the trade-off hyperparameter β = σ^{−2}/2 is closely related to the variance of the distribution p(x|z)." + }, + { + "id": 80, + "string": "In other words, by controlling the variance σ, the model can adaptively explore different trade-offs between the rate and distortion objectives." + }, + { + "id": 81, + "string": "However, the optimal trade-offs for distinct samples may be different." + }, + { + "id": 82, + "string": "Inspired by the observations above, we propose to inject data-dependent noise into latent variable z, rather than setting the variance term σ^2 to a fixed value (Dai et al., 2017; Theis et al., 2017) ." + }, + { + "id": 83, + "string": "Specifically, log σ^2 is obtained via a one-layer MLP transformation from g φ (x)." + }, + { + "id": 84, + "string": "Afterwards, we sample ẑ from N (z, σ^2 I), which then replaces z in (6) to infer the probability of generating individual words (as shown in Figure 1 )."
+ }, + { + "id": 85, + "string": "As a result, the variances are different for every input document x, and thus the model is provided with additional flexibility to explore various trade-offs between rate and distortion for different training observations." + }, + { + "id": 86, + "string": "Although our decoder, as in (6), is not strictly a Gaussian distribution, we found empirically that injecting data-dependent noise into z yields strong retrieval results (see Section 5.1)." + }, + { + "id": 87, + "string": "Supervised Hashing The proposed Neural Architecture for Semantic Hashing (NASH) can be extended to supervised hashing, where a mapping from latent variable z to labels y is learned, here parametrized by a two-layer MLP followed by a fully-connected softmax layer." + }, + { + "id": 88, + "string": "To allow the model to explore and balance between maximizing the variational lower bound in (3) and minimizing the discriminative loss, the following joint training objective is employed: L = −L_vae(θ, φ; x) + αL_dis(η; z, y). (9)" + }, + { + "id": 89, + "string": "Here, η refers to the parameters of the MLP classifier and α controls the relative weight between the variational lower bound (L_vae) and the discriminative loss (L_dis), defined as the cross-entropy loss." + }, + { + "id": 90, + "string": "The parameters {θ, φ, η} are learned end-to-end via Monte Carlo estimation." + }, + { + "id": 91, + "string": "Experimental Setup Datasets We use the following three standard publicly available datasets for training and evaluation: (i) Reuters21578, containing 10,788 news documents, which have been classified into 90 different categories." + }, + { + "id": 92, + "string": "(ii) 20Newsgroups, a collection of 18,828 newsgroup documents, which are categorized into 20 different topics." + }, + { + "id": 93, + "string": "(iii) TMC (an acronym for the SIAM Text Mining Competition dataset), containing air traffic reports provided by NASA."
+ }, + { + "id": 94, + "string": "TMC consists of 21,519 training documents divided into 22 different categories." + }, + { + "id": 95, + "string": "To make a direct comparison with prior work, we employed the TFIDF features on these datasets supplied by Chaidaroon and Fang (2017), where the vocabulary sizes for the three datasets are set to 10,000, 7,164 and 20,000, respectively." + }, + { + "id": 96, + "string": "Training Details For the inference networks, we employ a feed-forward neural network with 2 hidden layers (both with 500 units) using the ReLU non-linearity activation function, which transforms the input documents (i.e., TFIDF features in our experiments) into a continuous representation." + }, + { + "id": 97, + "string": "Empirically, we found that stochastic binarization as in (2) shows stronger performance than deterministic binarization, and thus we use the former in our experiments." + }, + { + "id": 98, + "string": "However, we further conduct a systematic ablation study in Section 5.2 to compare the two binarization strategies." + }, + { + "id": 99, + "string": "Our model is trained using Adam (Kingma and Ba, 2014), with a learning rate of 1 × 10^{-3} for all parameters." + }, + { + "id": 100, + "string": "We decay the learning rate by a factor of 0.96 for every 10,000 iterations." + }, + { + "id": 101, + "string": "Dropout (Srivastava et al., 2014) is employed on the output of the encoder networks, with the rate selected from {0.7, 0.8, 0.9} on the validation set." + }, + { + "id": 102, + "string": "To facilitate comparisons with previous methods, we set the dimension of z (i.e., the number of bits within the hashing code) to 8, 16, 32, 64, or 128." + }, + { + "id": 103, + "string": "Baselines We evaluate the effectiveness of our framework on both unsupervised and supervised semantic hashing tasks."
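The training-detail numbers above (base rate 1e-3, a factor-0.96 decay every 10,000 iterations) correspond to a step-wise exponential schedule; a small sketch with an illustrative helper name:

```python
def learning_rate(step, base_lr=1e-3, decay=0.96, every=10_000):
    # Decay the learning rate by a factor of 0.96 every 10,000 iterations,
    # starting from the base rate of 1e-3 used with Adam.
    return base_lr * decay ** (step // every)

lr_start = learning_rate(0)        # base rate, 0.001
lr_later = learning_rate(25_000)   # two decay steps applied: 0.001 * 0.96**2
```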
+ }, + { + "id": 104, + "string": "We consider the following unsupervised baselines for comparison: Locality Sensitive Hashing (LSH) (Datar et al., 2004), Stacked Restricted Boltzmann Machines (S-RBM) (Salakhutdinov and Hinton, 2009), Spectral Hashing (SpH) (Weiss et al., 2009), Self-taught Hashing (STH) (Zhang et al., 2010) and Variational Deep Semantic Hashing (VDSH) (Chaidaroon and Fang, 2017)." + }, + { + "id": 105, + "string": "For supervised semantic hashing, we also compare NASH against a number of baselines: Supervised Hashing with Kernels (KSH) (Liu et al., 2012), Semantic Hashing using Tags and Topic Modeling (SHTTM) (Wang et al., 2013) and Supervised VDSH (Chaidaroon and Fang, 2017)." + }, + { + "id": 106, + "string": "It is worth noting that, unlike all these baselines, our NASH model is trained end-to-end in one step." + }, + { + "id": 107, + "string": "Evaluation Metrics To evaluate the hashing codes for similarity search, we consider each document in the testing set as a query document." + }, + { + "id": 108, + "string": "Documents similar to the query need to be retrieved from the corresponding training set based on the Hamming distance of their hashing codes, i.e.," + }, + { + "id": 109, + "string": "the number of different bits." + }, + { + "id": 110, + "string": "To facilitate comparison with prior work (Wang et al., 2013; Chaidaroon and Fang, 2017), the performance is measured with precision." + }, + { + "id": 111, + "string": "Specifically, during testing, for a query document, we first retrieve the 100 nearest documents according to the Hamming distances of the corresponding hash codes (i.e., the number of different bits)." + }, + { + "id": 112, + "string": "We then examine the percentage of documents among these 100 retrieved ones that share a label (topic) with the query document (we consider documents having the same label as relevant pairs)."
+ }, + { + "id": 113, + "string": "The ratio of the number of relevant documents to the number of retrieved documents (fixed value of 100) is calculated as the precision score." + }, + { + "id": 114, + "string": "The precision scores are further averaged over all test (query) documents." + }, + { + "id": 115, + "string": "Experimental Results We experimented with four variants of our NASH model: (i) NASH: with a deterministic decoder; (ii) NASH-N: with fixed random noise injected into the decoder; (iii) NASH-DN: with data-dependent noise injected into the decoder; (iv) NASH-DN-S: NASH-DN with supervised information during training." + }, + { + "id": 116, + "string": "Table 1 presents the results of all models on the Reuters dataset." + }, + { + "id": 117, + "string": "Regarding unsupervised semantic hashing, all the NASH variants consistently outperform the baseline methods by a substantial margin, indicating that our model makes the most effective use of unlabeled data and manages to assign similar hashing codes, i.e., with small Hamming distance to each other, to documents that belong to the same label." + }, + { + "id": 118, + "string": "It can also be observed that the injection of noise into the decoder networks has improved the robustness of the learned binary representations, resulting in better retrieval performance." + }, + { + "id": 119, + "string": "More importantly, by making the variances of the noise adaptive to the specific input, our NASH-DN achieves even better results compared with NASH-N, highlighting the importance of exploring/learning the trade-off between the rate and distortion objectives from the data itself." + }, + { + "id": 120, + "string": "We observe the same trend and superiority of our NASH-DN models on the other two benchmarks, as shown in Tables 3 and 4."
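The evaluation protocol described in Section 4.4 (retrieve the 100 nearest training documents by Hamming distance, then average the precision over queries) can be sketched on synthetic data; all names and sizes below are illustrative:

```python
import random

def hamming(a, b):
    # Hamming distance: the number of differing bits between two binary codes.
    return sum(x != y for x, y in zip(a, b))

def precision_at_k(train_codes, train_labels, query_code, query_label, k=100):
    # Rank training documents by Hamming distance to the query code and
    # report the fraction of the top-k sharing the query's label.
    ranked = sorted(range(len(train_codes)),
                    key=lambda i: hamming(train_codes[i], query_code))[:k]
    return sum(train_labels[i] == query_label for i in ranked) / k

random.seed(2)
train_codes = [[random.randint(0, 1) for _ in range(32)] for _ in range(500)]
train_labels = [random.randrange(20) for _ in range(500)]
p = precision_at_k(train_codes, train_labels, train_codes[0], train_labels[0])
```

The final reported metric is this quantity averaged over every test (query) document.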
+ }, + { + "id": 121, + "string": "Semantic Hashing Evaluation Another observation is that the retrieval results tend to drop a bit when we set the length of hashing codes to be 64 or larger, which also happens for some baseline models." + }, + { + "id": 122, + "string": "This phenomenon has been reported previously in ; Liu et al." + }, + { + "id": 123, + "string": "(2012) ; Wang et al." + }, + { + "id": 124, + "string": "(2013) ; Chaidaroon and Fang (2017), and the reasons could be twofold: (i) for longer codes, the number of data points that are assigned to a certain binary code decreases exponentially." + }, + { + "id": 125, + "string": "As a result, many queries may fail to return any neighbor documents; (ii) considering the size of the training data, it is likely that the model may overfit with long hash codes (Chaidaroon and Fang, 2017)." + }, + { + "id": 126, + "string": "However, even with longer hashing codes, [Table 2 appeared here; it lists the five nearest words in the semantic space learned by NASH vs. NVDM (Miao et al., 2016). NASH: weapons: gun, guns, weapon, armed, assault; medical: treatment, disease, drugs, health, medicine; companies: company, market, afford, products, money; define: definition, defined, explained, discussion, knowledge; israel: israeli, arabs, arab, jewish, jews; book: books, english, references, learning, reference. NVDM: weapons: guns, weapon, gun, militia, armed; medical: medicine, health, treatment, disease, patients; companies: expensive, industry, company, market, buy; define: defined, definition, printf, int, sufficient; israel: israeli, arab, arabs, lebanon, lebanese; book: books, reference, guide, writing, pages.]" + }, + { + "id": 127, + "string": "our NASH models perform stronger than the baselines in most cases (except for the 20Newsgroups dataset), suggesting that NASH can effectively allocate documents to informative/meaningful hashing codes even with limited training data."
+ }, + { + "id": 128, + "string": "We also evaluate the effectiveness of NASH in a supervised scenario on the Reuters dataset, where the label or topic information is utilized during training." + }, + { + "id": 129, + "string": "As shown in Figure 2 , our NASH-DN-S model consistently outperforms several supervised semantic hashing baselines, with various choices of hashing bits." + }, + { + "id": 130, + "string": "Notably, our model exhibits higher Top-100 retrieval precision than VDSH-S and VDSH-SP, proposed by Chaidaroon and Fang (2017) ." + }, + { + "id": 131, + "string": "This may be attributed to the fact that in VDSH models, the continuous embeddings are not optimized with their future binarization in mind, and thus could hurt the relevance of learned binary codes." + }, + { + "id": 132, + "string": "On the contrary, our model is optimized in an end-to-end manner, where the gradients are directly backpropagated to the inference network (through the binary/discrete latent variable), and thus gives rise to a more robust hash function." + }, + { + "id": 133, + "string": "Ablation study The effect of stochastic sampling As described in Section 3, the binary latent variables z in NASH can be either deterministically (via (1)) or stochastically (via (2)) sampled." + }, + { + "id": 134, + "string": "We compare these two types of binarization functions in the case of unsupervised hashing." + }, + { + "id": 135, + "string": "As illustrated in Figure 3 , stochastic sampling shows stronger retrieval results on all three datasets, indicating that endowing the sampling process of latent variables with more stochasticity improves the learned representations." + }, + { + "id": 136, + "string": "The effect of encoder/decoder networks Under the variational framework introduced here, the encoder network, i.e., hash function, and decoder network are jointly optimized to abstract semantic features from documents." 
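The two binarization strategies compared in the ablation above can be sketched as follows; equations (1) and (2) are outside this excerpt, so the exact forms below (thresholding at 0.5 vs. per-bit Bernoulli sampling) are an assumption about their standard shape:

```python
import random

random.seed(3)

def binarize_deterministic(probs, threshold=0.5):
    # Deterministic binarization: threshold the Bernoulli probabilities.
    return [1.0 if p > threshold else 0.0 for p in probs]

def binarize_stochastic(probs):
    # Stochastic binarization: sample each bit with its own probability,
    # injecting stochasticity into the latent code during training.
    return [1.0 if random.random() < p else 0.0 for p in probs]

probs = [0.1, 0.4, 0.6, 0.9]
z_det = binarize_deterministic(probs)   # -> [0.0, 0.0, 1.0, 1.0]
z_sto = binarize_stochastic(probs)      # random draw; varies across samples
```

At test time the deterministic rule gives a fixed hash code per document, while the stochastic rule is what Figure 3 credits with better generalization during training.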
+ }, + { + "id": 137, + "string": "An interesting question concerns what types of network should be leveraged for each part of our NASH model." + }, + { + "id": 138, + "string": "In this regard, we further investigate the effect of [Table 5 appeared here; it shows learned 8-bit / 16-bit codes on 20Newsgroups. Baseball: 'Dave Kingman for the hall of fame' 11101001 / 0010110100000110; 'Time of game' 11111001 / 0010100100000111; 'Game score report' 11101001 / 0010110100000110; 'Why is Barry Bonds not batting 4th?' 11101101 / 0011110100000110. Electronics: 'Building a UV flashlight' 10110100 / 0010001000101011; 'How to drive an array of LEDs' 10110101 / 0010001000101001; '2% silver solder' 11010101 / 0010001000101011; 'Subliminal message flashing on TV' 10110100 / 0010011000101001.]" + }, + { + "id": 139, + "string": "using an encoder or decoder with different nonlinearity, ranging from a linear transformation to two-layer MLPs." + }, + { + "id": 140, + "string": "We employ a base model with an encoder of two-layer MLPs and a linear decoder (the setup described in Section 3), and the ablation study results are shown in Table 6." + }, + { + "id": 141, + "string": "Table 6: Ablation study with different encoder/decoder networks. Network: linear (Encoder 0.5844, Decoder 0.6225); one-layer MLP (Encoder 0.6187, Decoder 0.3559); two-layer MLP (Encoder 0.6225, Decoder 0.1047)." + }, + { + "id": 142, + "string": "It is observed that for the encoder networks, increasing the non-linearity by stacking MLP layers leads to better empirical results." + }, + { + "id": 143, + "string": "In other words, endowing the hash function with more modeling capacity is advantageous to retrieval tasks." + }, + { + "id": 144, + "string": "However, when we employ a non-linear network for the decoder, the retrieval precision drops dramatically."
+ }, + { + "id": 145, + "string": "It is worth noting that the only difference between a linear transformation and a one-layer MLP is whether a non-linear activation function is employed or not." + }, + { + "id": 146, + "string": "This observation may be attributed to the fact that the decoder network can be considered as a similarity measure between the latent variable z and the word embedding e_k of every word, and the probabilities of words that appear in the document are maximized to ensure that z is informative." + }, + { + "id": 147, + "string": "As a result, if we allow the decoder to be too expressive (e.g., a one-layer MLP), it is likely that we will end up with a very flexible similarity measure but relatively less meaningful binary representations." + }, + { + "id": 148, + "string": "This finding is consistent with several image hashing methods, such as SGH (Dai et al., 2017) or the binary autoencoder (Carreira-Perpinán and Raziperchikolaei, 2015), where a linear decoder is typically adopted to obtain promising retrieval results." + }, + { + "id": 149, + "string": "However, our experiments may not speak for other choices of encoder-decoder architectures, e.g., LSTM-based sequence-to-sequence models or DCNN-based autoencoders (Zhang et al., 2017)." + }, + { + "id": 150, + "string": "Qualitative Analysis Analysis of Semantic Information To understand what information has been learned in our NASH model, we examine the matrix E ∈ R^{d×|V|} in (6)." + }, + { + "id": 151, + "string": "Similar to (Miao et al., 2016; Larochelle and Lauly, 2012), we select the 5 nearest words according to the word vectors learned from NASH and compare them with the corresponding results from NVDM." + }, + { + "id": 152, + "string": "As shown in Table 2, although our NASH model contains a binary latent variable, rather than a continuous one as in NVDM, it also effectively groups semantically similar words together in the learned vector space."
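The nearest-word analysis reported in Table 2 amounts to a nearest-neighbour query over the learned word vectors; a minimal sketch with a toy vocabulary, where cosine similarity is an assumed choice of metric and the embeddings are random placeholders:

```python
import math
import random

def nearest_words(E, vocab, query, k=5):
    # E maps each word to its embedding vector; rank words by cosine similarity.
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u)) or 1e-8
        nv = math.sqrt(sum(b * b for b in v)) or 1e-8
        return dot / (nu * nv)
    q = E[query]
    ranked = sorted((w for w in vocab if w != query), key=lambda w: -cos(E[w], q))
    return ranked[:k]

random.seed(4)
vocab = ["gun", "guns", "weapon", "market", "company", "books"]
E = {w: [random.gauss(0, 1) for _ in range(8)] for w in vocab}
E["guns"] = [v + 0.01 * random.gauss(0, 1) for v in E["gun"]]  # force proximity
top = nearest_words(E, vocab, "gun", k=2)  # "guns" should rank first
```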
+ }, + { + "id": 153, + "string": "This further demonstrates that the proposed generative framework manages to bypass the binary/discrete constraint and is able to abstract useful semantic information from documents." + }, + { + "id": 154, + "string": "Case Study In Table 5, we show some examples of the learned binary hashing codes on the 20Newsgroups dataset." + }, + { + "id": 155, + "string": "We observe that for both 8-bit and 16-bit cases, NASH typically compresses documents with shared topics into very similar binary codes." + }, + { + "id": 156, + "string": "On the contrary, the hashing codes for documents with different topics exhibit much larger Hamming distance." + }, + { + "id": 157, + "string": "As a result, relevant documents can be efficiently retrieved by simply computing their Hamming distances." + }, + { + "id": 158, + "string": "Conclusions This paper presents a first step towards end-to-end semantic hashing, where the binary/discrete constraints are carefully handled with an effective gradient estimator." + }, + { + "id": 159, + "string": "A neural variational framework is introduced to train our model." + }, + { + "id": 160, + "string": "Motivated by the connections between the proposed method and rate-distortion theory, we inject data-dependent noise into the Bernoulli latent variable at the training stage." + }, + { + "id": 161, + "string": "The effectiveness of our framework is demonstrated with extensive experiments."
+ } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 19 + }, + { + "section": "Related Work", + "n": "2", + "start": 20, + "end": 28 + }, + { + "section": "Hashing under the NVI Framework", + "n": "3.1", + "start": 29, + "end": 46 + }, + { + "section": "Training with Binary Latent Variables", + "n": "3.2", + "start": 47, + "end": 68 + }, + { + "section": "Injecting Data-dependent Noise to z", + "n": "3.3", + "start": 69, + "end": 86 + }, + { + "section": "Supervised Hashing", + "n": "3.4", + "start": 87, + "end": 90 + }, + { + "section": "Datasets", + "n": "4.1", + "start": 91, + "end": 95 + }, + { + "section": "Training Details", + "n": "4.2", + "start": 96, + "end": 102 + }, + { + "section": "Baselines", + "n": "4.3", + "start": 103, + "end": 106 + }, + { + "section": "Evaluation Metrics", + "n": "4.4", + "start": 107, + "end": 114 + }, + { + "section": "Experimental Results", + "n": "5", + "start": 115, + "end": 120 + }, + { + "section": "Semantic Hashing Evaluation", + "n": "5.1", + "start": 121, + "end": 132 + }, + { + "section": "The effect of stochastic sampling", + "n": "5.2.1", + "start": 133, + "end": 134 + }, + { + "section": "The effect of encoder/decoder networks", + "n": "5.2.2", + "start": 135, + "end": 149 + }, + { + "section": "Analysis of Semantic Information", + "n": "5.3.1", + "start": 150, + "end": 153 + }, + { + "section": "Case Study", + "n": "5.3.2", + "start": 154, + "end": 157 + }, + { + "section": "Conclusions", + "n": "6", + "start": 158, + "end": 161 + } + ], + "figures": [ + { + "filename": "../figure/image/955-Table1-1.png", + "caption": "Table 1: Precision of the top 100 retrieved documents on Reuters dataset (Unsupervised hashing).", + "page": 5, + "bbox": { + "x1": 72.0, + "x2": 289.44, + "y1": 61.44, + "y2": 163.2 + } + }, + { + "filename": "../figure/image/955-Figure2-1.png", + "caption": "Figure 2: Precision of the top 100 retrieved documents on Reuters dataset (Supervised hashing), 
compared with other supervised baselines.", + "page": 5, + "bbox": { + "x1": 328.32, + "x2": 505.91999999999996, + "y1": 66.24, + "y2": 186.23999999999998 + } + }, + { + "filename": "../figure/image/955-Table6-1.png", + "caption": "Table 6: Ablation study with different encoder/decoder networks.", + "page": 7, + "bbox": { + "x1": 101.75999999999999, + "x2": 261.12, + "y1": 483.84, + "y2": 540.0 + } + }, + { + "filename": "../figure/image/955-Figure3-1.png", + "caption": "Figure 3: The precisions of the top 100 retrieved documents for NASH-DN with stochastic or deterministic binary latent variables.", + "page": 7, + "bbox": { + "x1": 104.64, + "x2": 261.12, + "y1": 202.56, + "y2": 312.96 + } + }, + { + "filename": "../figure/image/955-Table5-1.png", + "caption": "Table 5: Examples of learned compact hashing codes on 20Newsgroups dataset.", + "page": 7, + "bbox": { + "x1": 108.0, + "x2": 489.12, + "y1": 61.44, + "y2": 161.28 + } + }, + { + "filename": "../figure/image/955-Figure1-1.png", + "caption": "Figure 1: NASH for end-to-end semantic hashing. 
The inference network maps x→ z using an MLP and the generative network recovers x as z → x̂.", + "page": 1, + "bbox": { + "x1": 328.8, + "x2": 504.0, + "y1": 60.48, + "y2": 162.72 + } + }, + { + "filename": "../figure/image/955-Table4-1.png", + "caption": "Table 4: Precision of the top 100 retrieved documents on TMC dataset.", + "page": 6, + "bbox": { + "x1": 72.0, + "x2": 289.44, + "y1": 451.68, + "y2": 620.16 + } + }, + { + "filename": "../figure/image/955-Table2-1.png", + "caption": "Table 2: The five nearest words in the semantic space learned by NASH, compared with the results from NVDM (Miao et al., 2016).", + "page": 6, + "bbox": { + "x1": 132.96, + "x2": 464.15999999999997, + "y1": 61.44, + "y2": 180.95999999999998 + } + }, + { + "filename": "../figure/image/955-Table3-1.png", + "caption": "Table 3: Precision of the top 100 retrieved documents on 20Newsgroups dataset.", + "page": 6, + "bbox": { + "x1": 72.0, + "x2": 289.44, + "y1": 233.76, + "y2": 402.24 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-1" + }, + { + "slides": { + "0": { + "title": "Abstract", + "text": [ + "Emotions, a complex state of feeling results in physical and psychological changes that influence human behavior. Thus, in order to extract the emotional key phrases from psychological texts, here, we have presented a phrase level emotion identification and classification system. The system takes pre- defined emotional statements of seven basic emotion classes", + "(anger, disgust, fear, guilt, joy, sadness and shame) as input and extracts seven types of emotional trigrams. The trigrams were represented as Context Vectors. Between a pair of", + "Context Vectors, an Affinity Score was calculated based on the law of gravitation with respect to different distance metrics", + "(e.g., Chebyshev, Euclidean and Hamming)." 
+ ], + "page_nums": [ + 4 + ], + "images": [] + }, + "5": { + "title": "Context Windows", + "text": [ + "The tokenized words were grouped to form trigrams in order to grasp the roles of the previous and next tokens with respect to the target token.", + "Each of the trigrams was considered as a Context Window (CW) to acquire the emotional phrases." + ], + "page_nums": [ + 10 + ], + "images": [] + }, + "6": { + "title": "Context Windows contd", + "text": [ + "It is considered that, in each of the Context Windows, the first word appears as a non-affect word, the second word as an affect word, and the third word as a non-affect word (NAW1, AW, NAW2).", + "A few example patterns of the CWs which follow the pattern:", + "and, sorry, just (Shame)" + ], + "page_nums": [ + 11, + 12 + ], + "images": [] + }, + "8": { + "title": "Similar and Dissimilar NAWs", + "text": [ + "It was observed that the stop words are mostly present in the", + "(NAW1, AW, NAW2) pattern where similar and dissimilar", + "NAWs appear before and after their corresponding CWs." + ], + "page_nums": [ + 14 + ], + "images": [] + }, + "9": { + "title": "Similar and Dissimilar NAWs contd", + "text": [ + "NAW1 = Non-Affect Word 1; AW = Affect Word; NAW2 = Non-Affect Word 2" + ], + "page_nums": [ + 15 + ], + "images": [] + }, + "16": { + "title": "Distance Metrics", + "text": [ + "Chebyshev distance (Cd) = max_i |x_i − y_i|, where x_i and y_i are the components of the two vectors x and y.", + "Euclidean distance (Ed) = ||x − y||_2 for vectors x and y.", + "Hamming distance (Hd) = (c_01 + c_10) / n, where c_ij is the number of positions k < n with x[k] = i and y[k] = j in the boolean vectors x and y. Hamming distance denotes the proportion of disagreeing components in x and y." + ], + "page_nums": [ + 23 + ], + "images": [] + }, + "17": { + "title": "POS Tagged Context Windows and POS Tagged Windows", + "text": [ + "The sentences were POS tagged using the Stanford POS", + "Tagger and the POS tagged Context Windows were extracted and termed as PTCW. 
Similarly, the POS tag sequence from each of the PTCWs was extracted and each named as a POS Tagged Window (PTW)." + ], + "page_nums": [ + 24 + ], + "images": [] + }, + "18": { + "title": "Count of CW PTCW PTW", + "text": [ + "Figure 1: Count of CW, PTCW and PTW", + "No. of POS tagged Context Windows (CW)", + "No. of Unique POS tagged Context Windows (CW)", + "No. of Unique PTW", + "Emotions: Anger, Disgust, Fear, Guilt, Joy, Sadness, Shame" + ], + "page_nums": [ + 25 + ], + "images": [] + }, + "19": { + "title": "Total Count of CW PTCW PTW", + "text": [ + "Figure 2: Total Count of CW, PTCW and PTW", + "Total CW, Total PTCW, Total PTW (different windows)" + ], + "page_nums": [ + 26 + ], + "images": [] + }, + "20": { + "title": "TF and TF IDF Measure", + "text": [ + "The Term Frequencies (TFs) and the Inverse Document", + "Frequencies (IDFs) of the CWs for each of the emotion classes were calculated. In order to identify different ranges of the TF and TF-IDF scores, the minimum and maximum values of the", + "TF and the variance of TF were calculated for each of the" + ], + "page_nums": [ + 27 + ], + "images": [] + }, + "26": { + "title": "Conclusion", + "text": [ + "In this paper, vector formation was done for each of the", + "Context Windows; TF and TF-IDF measures were calculated.", + "The calculated affinity score, depending on the distance values, was inspired by Newton's law of gravitation. To classify these CWs, BayesNet, J48, NaiveBayesSimple and", + "DecisionTable classifiers were used." + ], + "page_nums": [ + 34 + ], + "images": [] + }, + "27": { + "title": "Future Work", + "text": [ + "In future, we would like to incorporate more lexicons to identify and classify emotional expressions.", + "Moreover, we are planning to include an associative learning process to identify some important rules for classification."
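The TF and TF-IDF measures over Context Windows described in the slides can be sketched by treating each emotion class's list of CWs as one "document"; this formulation and the helper name are assumptions based on the slide text:

```python
import math
from collections import Counter

def tf_idf(class_to_cws):
    # class_to_cws: {emotion class: [context-window strings]}.
    # TF is relative frequency within a class; IDF uses the number of
    # classes in which a context window appears.
    n_classes = len(class_to_cws)
    df = Counter()
    for cws in class_to_cws.values():
        df.update(set(cws))
    scores = {}
    for emo, cws in class_to_cws.items():
        tf = Counter(cws)
        total = len(cws)
        scores[emo] = {cw: (tf[cw] / total) * math.log(n_classes / df[cw])
                       for cw in tf}
    return scores

data = {"fear": ["already frightened us", "was scared of"],
        "joy": ["always joyous one", "was scared of"]}
s = tf_idf(data)
```

A CW appearing in every class gets an IDF of zero, so class-specific windows dominate the feature set, which matches the slides' motivation for computing TF-IDF ranges per class.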
+ ], + "page_nums": [ + 35 + ], + "images": [] + } + }, + "paper_title": "Identification and Classification of Emotional Key Phrases from Psychological Texts", + "paper_id": "956", + "paper": { + "title": "Identification and Classification of Emotional Key Phrases from Psychological Texts", + "abstract": "Emotions, a complex state of feeling results in physical and psychological changes that influence human behavior. Thus, in order to extract the emotional key phrases from psychological texts, here, we have presented a phrase level emotion identification and classification system. The system takes pre-defined emotional statements of seven basic emotion classes (anger, disgust, fear, guilt, joy, sadness and shame) as input and extracts seven types of emotional trigrams. The trigrams were represented as Context Vectors. Between a pair of Context Vectors, an Affinity Score was calculated based on the law of gravitation with respect to different distance metrics (e.g., Chebyshev, Euclidean and Hamming). The words, Part-Of-Speech (POS) tags, TF-IDF scores, variance along with Affinity Score and ranked score of the vectors were employed as important features in a supervised classification framework after a rigorous analysis. The comparative results carried out for four different classifiers e.g., NaiveBayes, J48, Decision Tree and BayesNet show satisfactory performances.", + "text": [ + { + "id": 0, + "string": "Introduction Human emotions are the most complex and unique features to be described." + }, + { + "id": 1, + "string": "If we ask someone regarding emotion, he or she will reply simply that it is a 'feeling'." + }, + { + "id": 2, + "string": "Then, the obvious question that comes into our mind is about the definition of feeling." + }, + { + "id": 3, + "string": "It is observed that such terms are difficult to define and even more difficult to understand completely."
+ }, + { + "id": 4, + "string": "Ekman (1980) proposed six basic emotions (anger, disgust, fear, guilt, joy and sadness) that have a shared meaning on the level of facial expressions across cultures (Scherer, 1997; Scherer and Wallbott, 1994)." + }, + { + "id": 5, + "string": "Psychological texts contain a huge number of emotional words because psychology and emotions are intertwined, though they are different (Brahmachari et al., 2013)." + }, + { + "id": 6, + "string": "A phrase that contains more than one word can be a better way of representing emotions than a single word." + }, + { + "id": 7, + "string": "Thus, emotional phrase identification and classification from text has great importance in Natural Language Processing (NLP)." + }, + { + "id": 8, + "string": "In the present work, we have extracted seven different types of emotional statements (anger, disgust, fear, guilt, joy, sadness and shame) from the Psychological corpus." + }, + { + "id": 9, + "string": "Each of the emotional statements was tokenized; the tokens were grouped into trigrams and considered as Context Vectors." + }, + { + "id": 10, + "string": "These Context Vectors were POS tagged, and the corresponding TF and TF-IDF scores were measured to decide whether to consider them as important features or not." + }, + { + "id": 11, + "string": "In addition, the Affinity Scores were calculated for each pair of Context Vectors based on different distance metrics (Chebyshev, Euclidean and Hamming)." + }, + { + "id": 12, + "string": "Such features lead to applying different classification methods such as NaiveBayes, J48, Decision Tree and BayesNet, after which the results are compared." + }, + { + "id": 13, + "string": "The route map for this paper is the Related Work (Section 2), the Data Preprocessing Framework (Section 3) followed by the Feature Analysis and Classification framework (Section 4) and the result analysis (Section 5) along with the improvement due to ranking."
+ }, + { + "id": 14, + "string": "Finally, we have concluded the discussion (Section 6)." + }, + { + "id": 15, + "string": "Strapparava and Valitutti (2004) developed WORDNET-AFFECT, a lexical resource that assigns one or more affective labels such as emotion, mood, trait, cognitive state, physical state, behavior, attitude and sensation etc. to a number of WORDNET synsets." + }, + { + "id": 16, + "string": "A detailed annotation scheme that identifies key components and properties of opinions and emotions in language has been described in (Wiebe et al., 2005)." + }, + { + "id": 17, + "string": "The authors in (Kobayashi et al., 2004) also developed an opinion lexicon out of their annotated corpora." + }, + { + "id": 18, + "string": "Takamura et al." + }, + { + "id": 19, + "string": "(2005) extracted the semantic orientation of words according to the spin model, where the semantic orientation of words propagates in two possible directions like electrons." + }, + { + "id": 20, + "string": "Esuli and Sebastiani's (2006) approach to develop SentiWordNet is an adaptation to synset classification based on the training of ternary classifiers for deciding positive and negative (P-N) polarity." + }, + { + "id": 21, + "string": "Each of the ternary classifiers is generated using the semi-supervised rules." + }, + { + "id": 22, + "string": "Related Work On the other hand, Mohammad et al. (2010) have performed an extensive analysis of the annotations to better understand the distribution of emotions evoked by terms of different parts of speech." + }, + { + "id": 23, + "string": "The authors in (Bandyopadhyay, 2009, 2010) created the emotion lexicon and systems for the Bengali language." + }, + { + "id": 24, + "string": "The development of SenticNet (Cambria et al., 2010) later inspired the work of Poria et al. (2013)." + }, + { + "id": 25, + "string": "The authors developed an enriched SenticNet with affective information by assigning emotion labels."
+ }, + { + "id": 26, + "string": "Similarly, ConceptNet (http://conceptnet5.media.mit.edu/) is a multilingual knowledge base, representing words and phrases that people use and the common-sense relationships between them." + }, + { + "id": 27, + "string": "Balahur et al. (2012) have shown that the task of emotion detection from texts such as the one in the ISEAR corpus (where little or no lexical clues of affect are present) can be best tackled using approaches based on commonsense knowledge." + }, + { + "id": 28, + "string": "In this sense, EmotiNet, apart from being a precise resource for classifying emotions in such examples, has the advantage of being extendable with external sources, thus increasing the recall of the methods employing it." + }, + { + "id": 29, + "string": "Patra et al. (2013) adopted the Potts model for the probability modeling of the lexical network that was constructed by connecting each pair of words in which one of the two words appears in the gloss of the other." + }, + { + "id": 30, + "string": "In contrast to the previous approaches, the present task comprises classifying the emotional phrases by forming Context Vectors; the experimentation with simple features like POS, TF-IDF and Affinity Score, followed by the computation of similarities based on different distance metrics, helps in making decisions to correctly classify the emotional phrases." + }, + { + "id": 31, + "string": "3 Data Preprocessing Framework Corpus Preparation The emotional statements were collected from the ISEAR (International Survey on Emotion Antecedents and Reactions) database." + }, + { + "id": 32, + "string": "Each of the emotion classes contains the emotional statements given by the respondents as answers based on some predefined questions."
+ }, + { + "id": 33, + "string": "Student respondents, both psychologists and non-psychologists, were asked to report situations in which they had experienced all of the 7 major emotions (anger, disgust, fear, guilt, joy, sadness, shame)." + }, + { + "id": 34, + "string": "The final data set contains reports of 3000 respondents from 37 countries." + }, + { + "id": 35, + "string": "The statements were split into sentences and tokenized into words and the statistics are presented in Table 1." + }, + { + "id": 36, + "string": "It is found that only 1096 statements belong to the anger, disgust, sadness and shame classes whereas the fear, guilt and joy classes contain 1095, 1093 and 1094 different statements, respectively." + }, + { + "id": 37, + "string": "Since each statement may contain multiple sentences, after sentence tokenization it is observed that the anger and fear classes contain the maximum number of sentences." + }, + { + "id": 38, + "string": "Similarly, it is observed that the anger class contains the maximum number of tokenized words." + }, + { + "id": 39, + "string": "The tokenized words were grouped to form trigrams in order to grasp the roles of the previous and next tokens with respect to the target token." + }, + { + "id": 40, + "string": "Thus, each of the trigrams was considered as a Context Window (CW) to acquire the emotional phrases." + }, + { + "id": 41, + "string": "The updated version of the standard word lists of the WordNet Affect (Strapparava and Valitutti, 2004) was collected and it is observed that a total of 2,958 affect words is present." + }, + { + "id": 42, + "string": "It is considered that, in each of the Context Windows, the first word appears as a non-affect word, the second word as an affect word, and the third word as a non-affect word (NAW1, AW, NAW2)."
+ }, + { + "id": 43, + "string": "It is observed from the statistics of CW as shown in Table 2 that the anger class contains the maximum number of trigrams (20,785) and the joy class has the minimum number of trigrams (15,743) whereas the fear class contains the maximum number of trigrams (1,573) that follow the CW pattern." + }, + { + "id": 44, + "string": "A few example patterns of the CWs which follow the pattern (NAW1, AW, NAW2) are \"advices, about, problems\" (Anger), \"already, frightened, us\" (Fear), \"always, joyous, one\" (Joy), \"acted, cruelly, to\" (Disgust), \"adolescent, guilt, growing\" (guilt), \"always, sad, for\" (sad), \"and, sorry, just\" (Shame) etc." + }, + { + "id": 45, + "string": "It was observed that the stop words are mostly present in patterns where similar and dissimilar NAWs appear before and after their corresponding CWs." + }, + { + "id": 46, + "string": "In case of fear, a total of 979 stop words were found in the NAW 1 position and 935 stop words in the NAW 2 position." + }, + { + "id": 47, + "string": "It is observed that in case of fear, the occurrence of similar NAWs before and after the CWs is only 22 in contrast to the dissimilar occurrences of 1551." + }, + { + "id": 48, + "string": "Table 3 explains the statistics of similar and dissimilar NAWs along with their appearances as stop words." + }, + { + "id": 49, + "string": "In order to identify whether the Context Windows (CWs) play any significant role in classifying emotions or not, we have mapped the Context Windows in a Vector space by representing them as vectors." + }, + { + "id": 50, + "string": "We have tried to find out the semantic relation or similarity between a pair of vectors using an Affinity Score which in turn takes different distances into consideration."
+ }, + { + "id": 51, + "string": "Since a CW follows the pattern (NAW1, AW, NAW2), the formation of the vector with respect to each of the Context Windows of each emotion class was done based on the following formula: Vectorization(CW) = (#NAW1/T, #AW/T, #NAW2/T), where T = total count of CWs in an emotion class, #NAW1 = total occurrence of a non-affect word in the NAW1 position, #AW = total occurrence of an affect word in the AW position and #NAW2 = total occurrence of a non-affect word in the NAW2 position." + }, + { + "id": 52, + "string": "It was found that in case of the anger emotion, a CW identified as (always, angry, about) corresponds to a Vector <0.29, 10.69, 1.47>. We assume that each of the Context Vectors in an emotion class is represented in the vector space at a specific distance from the others." + }, + { + "id": 53, + "string": "Thus, some affinity or similarity must exist between each of the Context Vectors." + }, + { + "id": 54, + "string": "An Affinity Score was calculated for each pair of Context Vectors (pu, qv) where u = {1,2,3,...,n} and v = {1,2,3,...,n} for n vectors with respect to each of the emotion classes." + }, + { + "id": 55, + "string": "The final Score is calculated using the following gravitational formula as described in (Poria et al., 2013): Score(p, q) = (p · q) / dist(p, q)^2. The Score of any two context vectors p and q of an emotion class is the dot product of the vectors divided by the square of the distance (dist) between p and q." + }, + { + "id": 56, + "string": "This score was inspired by Newton's law of gravitation." + }, + { + "id": 57, + "string": "These score values reflect the affinity between two context vectors p and q." + }, + { + "id": 58, + "string": "A higher score implies higher affinity between p and q."
+ }, + { + "id": 59, + "string": "However, apart from the score values, we also calculated the median, standard deviation and inter-quartile range (iqr), and only those context windows were considered whose iqr values are greater than some cutoff value selected during experiments." + }, + { + "id": 60, + "string": "In the vector space, it is necessary to calculate how close the context vectors are in the space in order to conduct better classification into their respective emotion classes." + }, + { + "id": 61, + "string": "The Score values were calculated for all the emotion classes with respect to different metrics of distance (dist) viz." + }, + { + "id": 62, + "string": "Chebyshev, Euclidean and Hamming." + }, + { + "id": 63, + "string": "The distance was calculated for each context vector with respect to all the vectors of the same emotion class." + }, + { + "id": 64, + "string": "The distance formulas are given below: a. Chebyshev distance (Cd) = max_i |x_i - y_i|, where x and y represent two vectors." + }, + { + "id": 65, + "string": "b. Euclidean distance (Ed) = ||x - y||_2 for vectors x and y. c. Hamming distance (Hd) = (c01 + c10) / n, where cij is the number of occurrences in the boolean vectors x and y with x[k] = i and y[k] = j for k < n. Hamming distance denotes the proportion of disagreeing components in x and y." + }, + { + "id": 66, + "string": "It is observed that feature selection always plays an important role in building a good pattern classifier." + }, + { + "id": 67, + "string": "The sentences were POS tagged using the Stanford POS Tagger and the POS tagged Context Windows were extracted and termed as PTCWs." + }, + { + "id": 68, + "string": "Similarly, the POS tag sequence from each of the PTCWs was extracted and named as a POS Tagged Window (PTW)."
+ }, + { + "id": 69, + "string": "It is observed that the \"fear\" emotion class has the maximum number of CWs and unique PTCWs whereas the \"anger\" class contains the maximum number of unique PTWs." + }, + { + "id": 70, + "string": "Figure 1 below represents the counts of CWs, unique PTCWs and PTWs." + }, + { + "id": 71, + "string": "It was noticed that the total number of CWs is 8967, the total number of unique PTCWs is 7609 and the total number of unique PTWs is 3117." + }, + { + "id": 72, + "string": "Obviously, the number of PTCWs was less than CWs and the number of PTWs was less than PTCWs, because of the uniqueness of PTCWs and PTWs." + }, + { + "id": 73, + "string": "In Figure 2, the total counts of CW, PTCW and PTW have been shown." + }, + { + "id": 74, + "string": "Some sample patterns of PTWs that occur with the maximum frequencies in three emotion classes are \"VBD/RB_JJ_IN\" (anger), \"NN/VBD_VBN_NN\" (disgust) and \"VBD_VBN/JJ_IN/NN\" (fear)." + }, + { + "id": 75, + "string": "The Term Frequencies (TFs) and the Inverse Document Frequencies (IDFs) of the CWs for each of the emotion classes were calculated." + }, + { + "id": 76, + "string": "In order to identify different ranges of the TF and TF-IDF scores, the minimum and maximum values of the TF and the variance of TF were calculated for each of the emotion classes." + }, + { + "id": 77, + "string": "It was observed that guilt has the maximum scores for Max_TF and variance whereas emotions like anger and disgust have the lowest scores for Max_TF as shown in Figure 3." + }, + { + "id": 78, + "string": "Similarly, the minimum, maximum and variance of the TF-IDF values were calculated for each emotion class, separately." + }, + { + "id": 79, + "string": "Again, it is found that the guilt emotion has the highest Max_TF-IDF and the disgust emotion has the lowest Max_TF-IDF as shown in Figure 4."
+ }, + { + "id": 80, + "string": "Not only for the Context Windows (CWs), the TF and TF-IDF scores of the POS Tagged Context Windows (PTCWs) and POS Tagged Windows (PTWs) were also calculated with respect to each emotion." + }, + { + "id": 81, + "string": "It was observed that similar results were found." + }, + { + "id": 82, + "string": "Variance, or second moment about the mean, is a measure of the variability (spread or dispersion) of data." + }, + { + "id": 83, + "string": "A large variance indicates that the data is spread out; a small variance indicates it is clustered closely around the mean. The variance for TF-IDF of guilt is 0.0000456874." + }, + { + "id": 84, + "string": "A few slight differences were found in the results of PTWs while calculating Max_TF, Min_TF and variance as shown in Figure 3." + }, + { + "id": 85, + "string": "It was observed that the fear emotion has the highest Max_TF and anger has the lowest Max_TF whereas the variance of TF for guilt is 0.0002435522." + }, + { + "id": 86, + "string": "Similarly, Figure 4 shows that fear has the highest Max_TF-IDF and anger contains the lowest Max_TF-IDF values and the variance of TF-IDF of fear is 0.000922226." + }, + { + "id": 87, + "string": "It was found that some of the Context Windows appear more than once in the same emotion class." + }, + { + "id": 88, + "string": "Thus, they were removed and a ranking score was calculated for each of the context windows." + }, + { + "id": 89, + "string": "Each of the words in a context window was searched in the SentiWordNet lexicon and, if found, we considered either its positive or negative or both scores." + }, + { + "id": 90, + "string": "The summation of the absolute scores of all the words in a Context Window is returned." + }, + { + "id": 91, + "string": "The returned scores were sorted so that, in turn, each of the context windows obtains a rank in its corresponding emotion class."
+ }, + { + "id": 92, + "string": "All the ranks were calculated for each emotion class, successively." + }, + { + "id": 93, + "string": "This rank is useful in finding the important emotional phrases from the list of CWs." + }, + { + "id": 94, + "string": "Some examples from the list of top 12 important context windows according to their rank are \"much anger when\" (anger), \"whom love after\" (happy), \"felt sad about\" (sadness) etc." + }, + { + "id": 95, + "string": "The accuracies of the classifiers were obtained by employing user-defined test data and data for 10-fold cross validation." + }, + { + "id": 96, + "string": "It is observed that when Euclidean distance was considered, the BayesNet classifier gives 100% accuracy on the Test data and 97.91% accuracy on 10-fold cross validation data." + }, + { + "id": 97, + "string": "On the other hand, the J48 classifier achieves 77% accuracy on Test data and 83.54% on 10-fold cross validation data whereas the NaiveBayesSimple classifier obtains 92.30% accuracy on Test data and 27.07% accuracy on 10-fold cross validation data." + }, + { + "id": 98, + "string": "In the NaiveBayesSimple with 10-fold cross validation, the average Recall, Precision and F-measure values are 0.271, 0.272 and 0.264, respectively." + }, + { + "id": 99, + "string": "However, the DecisionTree classifier obtains 98.30% and 98.10% accuracies on the Test data as well as 10-fold cross validation data." + }, + { + "id": 100, + "string": "The comparative results are shown in Figure 5." + }, + { + "id": 101, + "string": "Overall, it is observed from Figure 5 that the BayesNet classifier achieves the best results on the score data which was prepared based on the Euclidean distance." + }, + { + "id": 102, + "string": "In contrast, the BayesNet achieved 99.30% accuracy on the Test data and 96.92% accuracy on 10-fold cross validation data when the Hamming distance was considered."
+ }, + { + "id": 103, + "string": "Similarly, the J48 and NaiveBayesSimple classifiers produce 93.05% and 85.41% accuracies on the Test data and 87.95% and 39.50% accuracies on 10-fold cross validation data, respectively." + }, + { + "id": 104, + "string": "From Figure 6, it is observed that the DecisionTree classifier produces the best accuracy on the score data that was found using Hamming distance." + }, + { + "id": 105, + "string": "When the score values are found by using Chebyshev distance, the BayesNet classifier obtains 100% accuracy on Test data and 97.57% accuracy on 10-fold cross validation data." + }, + { + "id": 106, + "string": "Similarly, J48 achieves 84.82% accuracy on the Test data and 82.75% accuracy on 10-fold cross validation data whereas NaiveBayes and DecisionTable achieve 80%, 29.85% and 98.62%, 96.93% accuracies on the Test data and 10-fold cross validation data, respectively." + }, + { + "id": 107, + "string": "It has to be mentioned based on Figure 7 that the DecisionTree classifier performs better than all the other classifiers and achieves the best result on the affinity score data prepared based on the Chebyshev distance only." + }, + { + "id": 108, + "string": "In this paper, vector formation was done for each of the Context Windows; TF and TF-IDF measures were calculated." + }, + { + "id": 109, + "string": "The calculated affinity score, depending on the distance values, was inspired by Newton's law of gravitation." + }, + { + "id": 110, + "string": "To classify these CWs, the BayesNet, J48, NaiveBayesSimple and DecisionTable classifiers were employed." + }, + { + "id": 111, + "string": "In future, we would like to incorporate more lexicons to identify and classify emotional expressions." + }, + { + "id": 112, + "string": "Moreover, we are planning to include an associative learning process to identify some important rules for classification."
+ } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 21 + }, + { + "section": "Related Work", + "n": "2", + "start": 22, + "end": 30 + }, + { + "section": "Corpus Preparation", + "n": "3.1", + "start": 31, + "end": 48 + }, + { + "section": "Context Vector Formation", + "n": "3.2", + "start": 49, + "end": 51 + }, + { + "section": "Affinity Score Calculation", + "n": "3.3", + "start": 52, + "end": 59 + }, + { + "section": "Affinity Scores using Distance Metrics", + "n": "3.4", + "start": 60, + "end": 65 + }, + { + "section": "Feature Selection and Analysis", + "n": "4", + "start": 66, + "end": 74 + }, + { + "section": "TF and TF-IDF Measure", + "n": "4.2", + "start": 75, + "end": 86 + }, + { + "section": "Ranking Score of CW", + "n": "4.3", + "start": 87, + "end": 94 + }, + { + "section": "Result Analysis", + "n": "5", + "start": 95, + "end": 112 + } + ], + "figures": [ + { + "filename": "../figure/image/956-Figure5-1.png", + "caption": "Figure 5: Classification Results on Test data and 10- fold cross validation using Euclidean distance (Ed)", + "page": 5, + "bbox": { + "x1": 303.84, + "x2": 515.04, + "y1": 98.88, + "y2": 225.12 + } + }, + { + "filename": "../figure/image/956-Figure6-1.png", + "caption": "Figure 6: Classification Results on Test data and 10- fold cross validation using Hamming distance (Hd)", + "page": 5, + "bbox": { + "x1": 303.84, + "x2": 515.04, + "y1": 245.76, + "y2": 365.28 + } + }, + { + "filename": "../figure/image/956-Figure7-1.png", + "caption": "Figure 7: Classification Results on Test data and 10- fold cross validation using Chebyshev distance (Cd)", + "page": 5, + "bbox": { + "x1": 303.84, + "x2": 518.4, + "y1": 384.47999999999996, + "y2": 516.96 + } + }, + { + "filename": "../figure/image/956-Table1-1.png", + "caption": "Table 1: Corpus Statistics", + "page": 1, + "bbox": { + "x1": 297.59999999999997, + "x2": 524.16, + "y1": 477.59999999999997, + "y2": 603.36 + } + }, + { + "filename": 
"../figure/image/956-Table2-1.png", + "caption": "Table 2: Trigrams and Affect Words Statistics", + "page": 2, + "bbox": { + "x1": 318.71999999999997, + "x2": 503.03999999999996, + "y1": 229.44, + "y2": 348.96 + } + }, + { + "filename": "../figure/image/956-Table3-1.png", + "caption": "Table 3: Statistics for similar and dissimilar NAW patterns and stop words", + "page": 2, + "bbox": { + "x1": 305.76, + "x2": 516.0, + "y1": 370.56, + "y2": 539.04 + } + }, + { + "filename": "../figure/image/956-Figure1-1.png", + "caption": "Figure 1: Count of CW, PTCW and PTW for seven emotion classes", + "page": 3, + "bbox": { + "x1": 303.84, + "x2": 518.4, + "y1": 521.76, + "y2": 645.12 + } + }, + { + "filename": "../figure/image/956-Figure4-1.png", + "caption": "Figure 4: Variance,Max_TF-IDF, Min_TF-IDF of CW, PTCW and PTW", + "page": 4, + "bbox": { + "x1": 303.84, + "x2": 515.04, + "y1": 283.68, + "y2": 413.28 + } + }, + { + "filename": "../figure/image/956-Figure3-1.png", + "caption": "Figure 3:Variance,Max_TF,Min_TF of CW, PTCW and PTW", + "page": 4, + "bbox": { + "x1": 306.71999999999997, + "x2": 515.04, + "y1": 122.88, + "y2": 259.2 + } + }, + { + "filename": "../figure/image/956-Figure2-1.png", + "caption": "Figure 2:Total Count of CW, PTCW and PTW", + "page": 4, + "bbox": { + "x1": 76.8, + "x2": 291.36, + "y1": 98.88, + "y2": 224.16 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-2" + }, + { + "slides": { + "0": { + "title": "Background", + "text": [ + "Information Retrieval (IR) and Recommender Systems (RS) techniques", + "have been used to address:-", + "Literature Review (LR) search tasks", + "Explicit and implicit ad-hoc information needs", + "Examples of such tasks include", + "Building a reading list of research papers", + "Recommending papers based on query logs", + "Recommending papers based on publication history", + "Serendipitous discovery of interesting papers and more.", + "What about recommending papers during manuscript preparation" + ], + "page_nums": [ + 1 
+ ], + "images": [] + }, + "1": { + "title": "Addressed scenarios in mp", + "text": [ + "Recommending papers based on Citation Contexts in manuscripts", + "Recommending new papers based on To-Be-Cited papers from the", + "Recommending papers based on the full text of the draft", + "What more could be done?", + "Explore the total list of papers compiled during literature review", + "Explore the article-type preference to vary recommendations correspondingly?" + ], + "page_nums": [ + 2 + ], + "images": [] + }, + "2": { + "title": "Enter rec4lrw", + "text": [ + "Rec4LRW is a task-based assistive system that offers", + "recommendations for the below tasks:-", + "Task 1 Building an initial reading list of research papers", + "Task 2 Finding similar papers based on a seed set of papers", + "Task 3 Shortlisting papers from the final reading list based on", + "The system is based on a threefold intervention framework", + "For better meeting the task requirements", + "Novel informational display features", + "For speeding up the relevance judgement decisions", + "For establishing the natural relationships between tasks" + ], + "page_nums": [ + 3 + ], + "images": [] + }, + "3": { + "title": "Rec4lrw usage sequence", + "text": [ + "Select papers from Task 2 to the final reading list", + "N Execute Task 3 with the final reading list papers" + ], + "page_nums": [ + 4 + ], + "images": [] + }, + "4": { + "title": "Corpus", + "text": [ + "ACM DL extract of papers published between 1951 and 2011 used as", + "AnyStyle (https://anystyle.io) parser used to extract article title, venue", + "and year from references", + "Data stored in a MySQL database with the tables related using a" + ], + "page_nums": [ + 5 + ], + "images": [] + }, + "5": { + "title": "Task objective and steps", + "text": [ + "OBJECTIVE: To identify the important papers from the final reading list", + "and vary recommendations count based on article-type preference", + "Input: P set of papers in the final reading 
list", + "AT article-type choice of the user", + "1: RC <- the average references count retrieved for AT", + "2: R <- list of retrieved citations & references of papers from P", + "3: G <- directed sparse graph created with papers from R", + "4: run edge betweenness algorithm on G to form cluster set C; 5: S <- final list of shortlisted papers; 6: if |C| > RC then; 7: while |S| != RC do; 8: for each cluster in C do; 9: sort papers in the cluster on citation count; 10: s <- top ranked paper from the cluster; 11: add s to S; 12: end for; 13: end while; 14: else; 15: N <- 0; 16: while |S| != RC do; 17: N <- N + 1; 18: for each cluster in C do; 19: sort papers in the cluster on citation count; 20: s <- Nth ranked paper from the cluster; 21: add s to S; 22: end for; 23: end while; 24: end if; 25: display papers from S to user" + ], + "page_nums": [ + 6 + ], + "images": [] + }, + "6": { + "title": "User evaluation study", + "text": [ + "OBJECTIVE: To ascertain the usefulness and effectiveness", + "of the task to researchers", + "Ascertain the agreement percentages of the evaluation", + "Relevance The shortlisted papers are relevant to my article-type preference", + "Usefulness The shortlisted papers are useful for inclusion in my manuscript", + "Importance The shortlisted papers comprises of important papers from my reading list", + "Certainty The shortlisted list comprises of papers which I would definitely cite in my manuscript Good_List This is a good recommendation list, at an overall level Improvement_Needed There is a need to further improve this shortlisted papers list", + "Shortlisting_Feature I would like to see the feature of shortlisting papers from reading list based on article-type preference, in academic search systems and databases", + "Identify the top preferred and critical aspects of the task", + "through the subjective feedback of the participants", + "Feedback responses were coded by a single coder using an inductive approach" + ], + "page_nums": [ + 7 + ], + "images": [] + }, + "7": { + "title": "Study information", + "text": [ + "The study was conducted between
November 2015 and January 2016", + "Pre-screening survey conducted to identify participants who have authored at", + "least one journal or conference paper", + "116 participants completed the whole study inclusive of the three tasks in the", + "57 participants were Ph.D./Masters students while 59 were research staff,", + "academic staff and librarians", + "The average research experience for students was 2 years while for staff, it", + "51% of participants were from the computer science, electrical and electronics disciplines, 35% from information and communication studies discipline while 14% from other disciplines" + ], + "page_nums": [ + 8 + ], + "images": [] + }, + "8": { + "title": "Study procedure", + "text": [ + "Step Participant selects one of the available 43 topics for executing task 1", + "Step Re-run task 1 and select at least five papers for the seed basket", + "Step Execute task 2 with the seed basket papers", + "Step Re-run task 2 (and task 1) to select at least 30 papers for the final", + "Step 5: Execute task 3 with the final reading list papers and article-type", + "Four article-type choices: conference full paper, poster, case study and a generic research paper" + ], + "page_nums": [ + 9 + ], + "images": [] + }, + "10": { + "title": "Results", + "text": [ + "Biggest differences found for the below measures:-", + "The measures with the highest agreement:-" + ], + "page_nums": [ + 11 + ], + "images": [ + "figure/image/959-Figure2-1.png" + ] + }, + "11": { + "title": "Qualitative feedback", + "text": [ + "Rank Preferred Aspects Categories Critical Aspects Categories", + "Shortlisting Feature & Rec. 
Quality (24%) Rote Selection of Papers (16%)", + "Information Cue Labels (15%) Limited Dataset Issue (5%)", + "View Papers in Clusters (11%) Quality can be Improved (5%)", + "Rich Metadata (7%) Not Sure of the Usefulness of the Task (4%)", + "Ranking of Papers (3%) UI can be Improved (3%)", + "The newly introduced informational display features were a big hit", + "The purely experimental nature of the study affected the experience of", + "Tasks effectiveness needs to be validated with a longitudinal study with a large collection of papers in the final reading list" + ], + "page_nums": [ + 12 + ], + "images": [] + }, + "12": { + "title": "Limitations", + "text": [ + "Lack of an offline evaluation experiment", + "Study procedure involved selection of comparatively fewer number of papers", + "in the final reading list", + "Not much variations in the final shortlisted papers for the different article-type", + "Information displayed in a purely textual manner" + ], + "page_nums": [ + 13 + ], + "images": [] + }, + "13": { + "title": "Future work", + "text": [ + "The scope for this task will be expanded to bring in more variations for the", + "Inclusion of new papers in the output which could have been missed during", + "Provide more user control in the system so that the user can select papers as", + "mandatory to be shortlisted", + "Integrate this task with the citation context recommendation task", + "Represent the information in the form of citation graphs" + ], + "page_nums": [ + 14 + ], + "images": [] + } + }, + "paper_title": "What papers should I cite from my reading list? User evaluation of a manuscript preparatory assistive task", + "paper_id": "959", + "paper": { + "title": "What papers should I cite from my reading list? User evaluation of a manuscript preparatory assistive task", + "abstract": "Literature Review (LR) and Manuscript Preparatory (MP) tasks are two key activities for researchers. 
While process-based and technological-oriented interventions have been introduced to bridge the apparent gap between novices and experts for LR tasks, there are very few approaches for MP tasks. In this paper, we introduce a novel task of shortlisting important papers from the reading list of researchers, meant for citation in a manuscript. The technique helps in identifying the important and unique papers in the reading list. Based on a user evaluation study conducted with 116 participants, the effectiveness and usefulness of the task is shown using multiple evaluation metrics. Results show that research students prefer this task more than research and academic staff. Qualitative feedback of the participants including the preferred aspects along with critical comments is presented in this paper.", + "text": [ + { + "id": 0, + "string": "The Scientific Publication Lifecycle comprises different activities carried out by researchers [5]." + }, + { + "id": 1, + "string": "Of all these activities, the three main activities are literature review, actual research work and dissemination of results through conferences and journals." + }, + { + "id": 2, + "string": "These three activities in themselves cover multiple sub-activities that require specific expertise and experience [16]." + }, + { + "id": 3, + "string": "Prior studies have shown that researchers with low experience face difficulties in completing research-related activities [9, 15]." + }, + { + "id": 4, + "string": "These researchers rely on assistance from supervisors, experts and librarians for learning the required skills to pursue such activities." + }, + { + "id": 5, + "string": "Scenarios where external assistance has been traditionally required are (i) selection of information sources (academic search engines, databases and citation indices), (ii) formulation of search queries, (iii) browsing of retrieved results and (iv) relevance judgement of retrieved articles [9]."
+ }, + { + "id": 6, + "string": "Apart from human assistance, academic assistive systems have been built for alleviating the expertise gap between experts and novices in terms of research execution." + }, + { + "id": 7, + "string": "Some of these interventions include search systems with faceted user interfaces for better dis-play of search results [2] , bibliometric tools for visualizing citation networks [7] and scientific paper recommender systems [3, 14] , to name a few." + }, + { + "id": 8, + "string": "In the area of manuscript writing, techniques have been proposed to recommend articles for citation contexts in manuscripts [11] ." + }, + { + "id": 9, + "string": "In the context of manuscript publication, prior studies have tried to recommend prospective conference venues [25] most suited for the research in hand." + }, + { + "id": 10, + "string": "One unexplored area is helping researchers in identifying the important and unique papers that can be potentially cited in the manuscript." + }, + { + "id": 11, + "string": "This identification is affected by two factors." + }, + { + "id": 12, + "string": "The first factor is the type of research where citation of a particular paper makes sense due to the particular citation context." + }, + { + "id": 13, + "string": "The second factor is the type of article (for e.g., conference full paper, journal paper, demo paper) that the author is intending to write." + }, + { + "id": 14, + "string": "For the first factor, there have been some previous studies [11, 14, 21] ." + }, + { + "id": 15, + "string": "The second factor represents a task that can be explored since the article-type places a constraint on the citations that can be made in a manuscript, in terms of dimensions such as recency, quantity, to name a few." 
+ }, + { + "id": 16, + "string": "In our research, we address this new manuscript preparatory task with the objective of shortlisting papers from the reading list of researchers based on article-type preference." + }, + { + "id": 17, + "string": "By the term 'shortlisting', we allude to the nature of the task in identifying important papers from the reading list. This task is part of a functionality provided by an assistive system called Rec4LRW meant for helping researchers in literature review and manuscript preparation." + }, + { + "id": 18, + "string": "The system uses a corpus of papers, built from an extract of the ACM Digital Library (ACM DL)." + }, + { + "id": 19, + "string": "It is hypothesized that the Rec4LRW system will be highly beneficial to novice researchers such as Ph.D. and Masters students and also to researchers who are venturing into new research topics." + }, + { + "id": 20, + "string": "A user evaluation study was conducted to evaluate all the tasks in the system, from a researcher's perspective." + }, + { + "id": 21, + "string": "In this paper, we report the findings from the study." + }, + { + "id": 22, + "string": "The study was conducted with 116 participants comprising research students, academic staff and research staff." + }, + { + "id": 23, + "string": "Results from the six evaluation measures show that the participants prefer to have the shortlisting feature included in academic search systems and digital libraries." + }, + { + "id": 24, + "string": "Subjective feedback from the participants in terms of the preferred features and the features that need to be improved is also presented in the paper." + }, + { + "id": 25, + "string": "The remainder of this work is organized as follows." + }, + { + "id": 26, + "string": "Section two surveys the related work." + }, + { + "id": 27, + "string": "The Rec4LRW system is introduced along with the dataset, technical details and unique UI features in section three."
+ }, + { + "id": 28, + "string": "In section four, the shortlisting technique of the task is explained." + }, + { + "id": 29, + "string": "Details about the user study and data collection are outlined in section five." + }, + { + "id": 30, + "string": "The evaluation results are presented in section six." + }, + { + "id": 31, + "string": "The concluding remarks and future plans for research are provided in the final section." + }, + { + "id": 32, + "string": "Related Work Conceptual models and systems have been proposed in the past for helping researchers during manuscript writing." + }, + { + "id": 33, + "string": "Generating recommendations for citation contexts is an approach meant to help the researcher in finding candidate citations for particular placeholders (locations) in the manuscript." + }, + { + "id": 34, + "string": "These studies make use of content-oriented recommender techniques, as there is no scope for using Collaborative Filtering (CF) based techniques due to the lack of user ratings." + }, + { + "id": 35, + "string": "Translation models have been specifically used in [13, 17] as they are able to handle the vocabulary mismatch between the user query and document content." + }, + { + "id": 36, + "string": "The efficiency of the approaches is dependent on the comprehensiveness of the training set data, as the locations and corresponding citations data are recorded." + }, + { + "id": 37, + "string": "The study in [11] is the most sophisticated, as it does not expect the user to mark the citation contexts in the input paper, unlike other studies where the contexts have to be set by the user." + }, + { + "id": 38, + "string": "The proposed model in the study learns the placeholders in previous research articles where citations are widely made, so that the citation recommendation can be made on occurrence of similar patterns."
+ }, + { + "id": 39, + "string": "The methods in these studies are heavily reliant on the quality and quantity of training data; therefore they are not applicable to systems which lack access to the full text of research papers." + }, + { + "id": 40, + "string": "Citation suggestions have also been provided as part of reference management and stand-alone recommendation tools." + }, + { + "id": 41, + "string": "ActiveCite [21] is a recommendation tool that provides both high-level and specific citation suggestions based on text mining techniques." + }, + { + "id": 42, + "string": "Docear is one of the latest reference management tools [3] , with a mind map feature that helps users in better organizing their references." + }, + { + "id": 43, + "string": "The in-built recommendation module in this tool is based on the Content-based (CB) recommendation technique, with all the data stored in a central server." + }, + { + "id": 44, + "string": "The Refseer system [14] , similar to ActiveCite, provides both global and local (particular citation context) level recommendations." + }, + { + "id": 45, + "string": "The system is based on the non-parametric probabilistic model proposed in [12] ." + }, + { + "id": 46, + "string": "These systems depend on the quality and quantity of full text data available in the central server, as scarcity of papers could lead to redundant recommendations." + }, + { + "id": 47, + "string": "Even though article-type recommendations have not been practically implemented, the prospective idea has been discussed in a few studies." + }, + { + "id": 48, + "string": "The article-type dimension has been highlighted as part of the user's 'Purpose' in the multi-layer contextual model put forth in [8] , and as one of the facets of document contextual information in [6] ." + }, + { + "id": 49, + "string": "The article type indirectly refers to the goal of the researcher."
+ }, + { + "id": 50, + "string": "It is to be noted that goal or purpose related dimensions have been considered in other research areas of recommender systems, namely course recommendations [23] and TV guide recommendations [20] ." + }, + { + "id": 51, + "string": "Our work, on the other hand, is the first to explore this task of providing article-type based recommendations with the aim of shortlisting important and unique papers from the cumulative reading list prepared by researchers during their literature review." + }, + { + "id": 52, + "string": "Through this study, we hope to open new avenues of research which require a different kind of mining of bibliographic data for providing more relevant results." + }, + { + "id": 53, + "string": "3 Assistive System Brief Overview The Rec4LRW system has been built as a tool aimed at helping researchers in two main tasks of literature review and one manuscript preparatory task." + }, + { + "id": 54, + "string": "The three tasks are (i) building an initial reading list of research papers, (ii) finding similar papers based on a set of papers, and (iii) shortlisting papers from the final reading list for inclusion in the manuscript based on article-type choice." + }, + { + "id": 55, + "string": "The usage context of the system is as follows." + }, + { + "id": 56, + "string": "Typically, a researcher would run the first task once or twice at the start of the literature review, followed by selection of a few relevant seed papers which are then used for task 2." + }, + { + "id": 57, + "string": "The second task takes these seed papers as an input to find topically similar papers." + }, + { + "id": 58, + "string": "This task is run multiple times until the researcher is satisfied with the whole list of papers in the reading list." + }, + { + "id": 59, + "string": "The third task (described in this paper) is meant to be run when the researcher is at the stage of writing manuscripts for publication."
+ }, + { + "id": 60, + "string": "It is observed that the researcher would maintain numerous papers in his/her reading list while performing research (could be more than 100 papers for most research studies)." + }, + { + "id": 61, + "string": "The third task helps the researcher in identifying both important and unique papers from the reading list." + }, + { + "id": 62, + "string": "The shortlisted papers count varies as per the article-type preference of the researcher." + }, + { + "id": 63, + "string": "The recommendation mechanisms of the three tasks are based on seven features/criteria that represent the characteristics of the bibliography and its relationship with the parent research paper [19] ." + }, + { + "id": 64, + "string": "Dataset A snapshot of the ACM Digital Library (ACM DL) is used as the dataset for the system." + }, + { + "id": 65, + "string": "Papers from proceedings and journals for the period 1951 to 2011 form the dataset." + }, + { + "id": 66, + "string": "The papers from the dataset have been shortlisted based on full text and metadata availability, to form the sample set/corpus for the system." + }, + { + "id": 67, + "string": "The sample set contains a total of 103,739 articles and corresponding 2,320,345 references." + }, + { + "id": 68, + "string": "User-Interface (UI) Features In this sub-section, the unique UI features of the Rec4LRW system are presented." + }, + { + "id": 69, + "string": "Apart from the regular fields such as author name(s), abstract, publication year and citation count, the system displays the fields: author-specified keywords, references count and a short summary of the paper (if the abstract of the paper is missing)." + }, + { + "id": 70, + "string": "Most importantly, we have included information cue labels beside the title for each article." + }, + { + "id": 71, + "string": "There are four labels: (1) Popular, (2) Recent, (3) High Reach and (4) Survey/Review."
+ }, + { + "id": 72, + "string": "A screenshot from the system for the cue labels (adjacent to article title) is provided in Figure 1 ." + }, + { + "id": 73, + "string": "The display logic for the cue labels is described as follows." + }, + { + "id": 74, + "string": "The recent label is displayed for papers published between the years 2009 and 2011 (the most recent papers in the ACM dataset are from 2011)." + }, + { + "id": 75, + "string": "The survey/review label is displayed for papers which are of the type 'literature survey or review'." + }, + { + "id": 76, + "string": "For the popular label, the unique citation counts of all papers for the selected research topic are first retrieved from the database." + }, + { + "id": 77, + "string": "The label is displayed for a paper if its citation count is in the top 5% of the citation counts for that topic." + }, + { + "id": 78, + "string": "Similar logic is used for the high reach label, with references count data." + }, + { + "id": 79, + "string": "The high reach label indicates that the paper has more references than most other articles for the research topic, thereby facilitating the scope for extended citation chaining." + }, + { + "id": 80, + "string": "Specifically for task 3, the system provides an option for the user to view the papers in the parent cluster of the shortlisted papers." + }, + { + "id": 81, + "string": "This feature helps the user in serendipitously finding more papers for reading." + }, + { + "id": 82, + "string": "The screenshot for this feature is provided in Figure 1 ." + }, + { + "id": 83, + "string": "Technique For Shortlisting Papers From Reading List The objective of this task is to help researchers in identifying important (based on citation counts) and unique papers from the final reading list." + }, + { + "id": 84, + "string": "These papers are to be considered as potential candidates for citation in the manuscript."
+ }, + { + "id": 85, + "string": "For this task, the Girvan-Newman algorithm [10] was used for identifying the clusters in the citations network." + }, + { + "id": 86, + "string": "The specific goal of clustering is to identify the communities within the citation network." + }, + { + "id": 87, + "string": "From the identified clusters, the top cited papers are shortlisted." + }, + { + "id": 88, + "string": "The algorithm is implemented as the EdgeBetweennessClusterer in the JUNG library." + }, + { + "id": 89, + "string": "The algorithm was selected as it is one of the most prominent community detection algorithms based on link removal." + }, + { + "id": 90, + "string": "The other algorithms considered were the voltage clustering algorithm [24] and the bi-component DFS clustering algorithm [22] ." + }, + { + "id": 91, + "string": "Based on internal trial tests, the Girvan-Newman algorithm was able to consistently identify meaningful clusters using the graph constructed with the citations and references of the papers from the reading list." + }, + { + "id": 92, + "string": "As a part of this task, we have tried to explore the notion of varying the count of shortlisted papers by article-type choice." + }, + { + "id": 93, + "string": "For this purpose, four article-types were considered: conference full paper (cfp), conference poster (cp), generic research paper (gp) and case study (cs)." + }, + { + "id": 94, + "string": "The article-type classification is not part of the ACM metadata, but it is partly inspired by the article classification used in Emerald publications." + }, + { + "id": 95, + "string": "The number of papers to be shortlisted for these article-types was identified by using historical data from the ACM dataset." + }, + { + "id": 96, + "string": "First, the papers in the dataset were filtered by using the title field and section field for the four article-types."
+ }, + { + "id": 97, + "string": "Second, the average of the references count was calculated for the filtered papers of each article-type from the previous step." + }, + { + "id": 98, + "string": "The average references count for the article-types gp, cs, cfp and cp are 26, 17, 16 and 6 respectively." + }, + { + "id": 99, + "string": "This new data field is used to set the number of papers to be retrieved from the paper clusters." + }, + { + "id": 100, + "string": "The procedure for this technique is given in Procedure 1: the papers in each cluster of C are sorted by citation count, and the top-ranked paper s from each cluster is added to the shortlist S; if more papers are required, the rank N is incremented and the N-th ranked paper from each cluster is added, until the size of S reaches the required count RC; the papers in S are then displayed to the user. 5 User Evaluation Study In IR and RS studies, offline experiments are conducted to evaluate the proposed technique/algorithm against baseline approaches." + }, + { + "id": 101, + "string": "Since the task addressed in the current study is a novel task, the best option was to perform a user evaluation study with researchers." + }, + { + "id": 102, + "string": "Considering the suggestions from [4] , the objective of the study was to ascertain the usefulness and effectiveness of the task to researchers." + }, + { + "id": 103, + "string": "The specific evaluation goals were to (i) ascertain the agreement percentages of the evaluation measures and (ii) identify the top preferred and critical aspects of the task through the subjective feedback of the participants." + }, + { + "id": 104, + "string": "An online pre-screening survey was conducted to identify the potential participants." + }, + { + "id": 105, + "string": "Participants needed to have experience in writing conference or journal paper(s) as a qualification for taking part in the study."
+ }, + { + "id": 106, + "string": "All the participants were required to evaluate the three tasks and the overall system." + }, + { + "id": 107, + "string": "In task 1, the participants had to select a research topic from a list of 43 research topics." + }, + { + "id": 108, + "string": "On selection of a topic, the system provides the top 20 paper recommendations, which are meant to be part of the initial literature review (LR) reading list." + }, + { + "id": 109, + "string": "In task 2, they had to select a minimum of five papers from task 1 in order for the system to retrieve 30 topically similar papers." + }, + { + "id": 110, + "string": "For the third task, the participants were requested to add at least 30 papers to the reading list." + }, + { + "id": 111, + "string": "The paper count was set to 30, as the highest number of shortlisted papers was 26 (for the article-type 'generic research paper')." + }, + { + "id": 112, + "string": "The three other article-types provided for the experiment were conference full paper, conference poster and case study." + }, + { + "id": 113, + "string": "The shortlisted papers count for these article-types was fixed by taking the average of the references count of the related papers from the ACM DL extract." + }, + { + "id": 114, + "string": "The participant had to then select the article-type and run the task so that the system could retrieve the shortlisted papers." + }, + { + "id": 115, + "string": "The screenshot of task 3 from the Rec4LRW system is provided in Figure 1 ." + }, + { + "id": 116, + "string": "In addition to the basic metadata, the system provides the feature \"View papers in the parent cluster\" for the participant to see the cluster from which the paper has been shortlisted." + }, + { + "id": 117, + "string": "The evaluation screen was provided to the user at the bottom of the screen (not shown in Figure 1 )."
+ }, + { + "id": 118, + "string": "The participants had to answer seven mandatory survey questions and one optional subjective feedback question as a part of the evaluation." + }, + { + "id": 119, + "string": "The seven survey questions and the corresponding measures are provided in Table 1 ." + }, + { + "id": 120, + "string": "A five-point Likert scale was provided for measuring participant agreement for each question." + }, + { + "id": 121, + "string": "The measures were selected based on the key aspects of the task." + }, + { + "id": 122, + "string": "The measures Relevance, Usefulness, Importance, Certainty, Good_List and Improvement_Needed were meant to ascertain the quality of the recommendations." + }, + { + "id": 123, + "string": "The final measure Shortlisting_Feature was used to identify whether participants would be interested in using this task in current academic search systems and digital libraries." + }, + { + "id": 124, + "string": "Table 1 includes, for example, Good_List: 'This is a good recommendation list, at an overall level'; Improvement_Needed: 'There is a need to further improve this shortlisted papers list'; and Shortlisting_Feature: 'I would like to see the feature of shortlisting papers from the reading list based on article-type preference, in academic search systems and databases'. The response values 'Agree' and 'Strongly Agree' were the two values considered for the calculation of agreement percentages for the evaluation measures." + }, + { + "id": 125, + "string": "Descriptive statistics were used to measure central tendency." + }, + { + "id": 126, + "string": "Independent samples t-test was used to check the presence of a statistically significant difference in the mean values of the student and staff groups, for testing the hypothesis." + }, + { + "id": 127, + "string": "Statistical significance was set at p < .05." + }, + { + "id": 128, + "string": "Statistical analyses were done using SPSS 21.0 and R.
Participants' subjective feedback responses were coded by a single coder using an inductive approach [1] , with the aim of identifying the central themes (concepts) in the text." + }, + { + "id": 129, + "string": "The study was conducted between November 2015 and January 2016." + }, + { + "id": 130, + "string": "Out of the eligible 230 participants, 116 participants signed the consent form and completed the whole study, inclusive of the three tasks in the system." + }, + { + "id": 131, + "string": "57 participants were Ph.D./Masters students, while 59 were research staff, academic staff and librarians." + }, + { + "id": 132, + "string": "The average research experience for Ph.D. students was 2 years, while for staff it was 5.6 years." + }, + { + "id": 133, + "string": "51% of the participants were from the computer science, electrical and electronics disciplines, 35% from the information and communication studies discipline, while 14% were from other disciplines." + }, + { + "id": 134, + "string": "6 Results and Discussion Agreement Percentages (AP) The agreement percentages (AP) for the seven measures by the participant groups are shown in Figure 2 ." + }, + { + "id": 135, + "string": "In the current study, an agreement percentage above 75% is considered as an indication of higher agreement from the participants." + }, + { + "id": 136, + "string": "As expected, the AP of students was consistently higher than that of the staff, with the biggest difference found for the measures Usefulness (82.00% for students, 64.15% for staff) and Good_List (76.00% for students, 62.26% for staff)." + }, + { + "id": 137, + "string": "It has been reported in earlier studies that graduate students generally look for assistance in most stages of research [9] ." + }, + { + "id": 138, + "string": "Consequently, students would prefer technological interventions such as the current system due to the simplicity in interaction."
+ }, + { + "id": 139, + "string": "Hence, the evaluation of students was evidently better than that of the staff." + }, + { + "id": 140, + "string": "The measures Importance (85.96% for students, 77.97% for staff) and Shortlisting_Feature (84.21% for students, 74.58% for staff) had the highest APs." + }, + { + "id": 141, + "string": "This observation validates the usefulness of the technique in identifying popular/seminal papers from the reading list." + }, + { + "id": 142, + "string": "Consistent with the favorable APs for most measures, the lowest agreement values were observed for the measure Improvement_Needed (57.89% for students, 57.63% for staff)." + }, + { + "id": 143, + "string": "The results for the measure Certainty (70% for students, 62.26% for staff) indicate some level of reluctance among the participants in being confident of citing the papers." + }, + { + "id": 144, + "string": "Citation of a particular paper is subject to the particular citation context in the manuscript; therefore, not all participants would be able to prejudge their citation behavior." + }, + { + "id": 145, + "string": "In summary, participants seem to acknowledge the usefulness of the task in identifying important papers from the reading list." + }, + { + "id": 146, + "string": "However, there is an understandable lack of inclination toward citing these papers." + }, + { + "id": 147, + "string": "This issue is to be addressed in future studies." + }, + { + "id": 148, + "string": "Qualitative Data Analysis In Table 2 , the top five categories of the preferred aspects and critical aspects are listed." + }, + { + "id": 149, + "string": "Preferred Aspects." + }, + { + "id": 150, + "string": "Out of the total 116 participants, 68 participants chose to give feedback about the features that they found to be useful."
+ }, + { + "id": 151, + "string": "24% of the participants felt that the feature of shortlisting papers based on article-type preference was preferable and would help them in completing their tasks in a faster and more efficient manner." + }, + { + "id": 152, + "string": "They also felt that the quality of the shortlisted papers was satisfactory." + }, + { + "id": 153, + "string": "15% of the participants felt that the information cue labels (popular, recent, high reach and literature survey) were helpful for them in relevance judgement of the shortlisted papers." + }, + { + "id": 154, + "string": "This particular observation of the participants was echoed for the first two tasks of the Rec4LRW system, thereby validating the usefulness of information cue labels in academic search systems and digital libraries." + }, + { + "id": 155, + "string": "Around 11% of the participants felt the option of viewing papers in the parent cluster of a particular shortlisted paper was useful in two ways." + }, + { + "id": 156, + "string": "Firstly, it helped in understanding the different clusters formed with the references and citations of the papers in the reading list." + }, + { + "id": 157, + "string": "Secondly, the clusters served as an avenue for finding some useful and relevant papers in a serendipitous manner, as some papers could have been missed by the researcher during the literature review process." + }, + { + "id": 158, + "string": "The other features that the participants commended were the metadata provided along with the shortlisted papers (citations count, article summary) and the paper collection management features across the three tasks." + }, + { + "id": 159, + "string": "The remaining categories listed in Table 2 were 'Ranking of Papers' (3%) and 'UI can be Improved' (3%). Critical Aspects." + }, + { + "id": 160, + "string": "Out of the 116 participants, 41 participants gave critical comments about the task and the features of the system catering to the task."
+ }, + { + "id": 161, + "string": "Around 16% of the participants felt that the study procedure of adding 30 papers to the reading list as a precursor for running the task was uninteresting." + }, + { + "id": 162, + "string": "The reasons cited included the irrelevance of some of the papers, which had to be added just for the sake of executing the task; some participants felt that the count of 30 papers was too high, while others could not comprehend why so many papers had to be added." + }, + { + "id": 163, + "string": "Around 5% of the participants felt that the study experience was hindered by the dataset not catering to recent papers (circa 2012-2015) and by the dataset being restricted to computer science related topics." + }, + { + "id": 164, + "string": "Another 5% of the participants felt that the shortlisting algorithm/technique could be improved to provide a better list of papers." + }, + { + "id": 165, + "string": "A section of these participants wanted more recent papers in the final list, while others wanted papers specifically from high impact publications." + }, + { + "id": 166, + "string": "Around 4% of the participants could not find the task useful for their work." + }, + { + "id": 167, + "string": "They felt that the task was not beneficial." + }, + { + "id": 168, + "string": "The other minor critical comments given by the participants were that the ranking of the list could be improved, the task execution speed could be improved, and more UI control features could be provided, such as sorting options and a free-text search box." + }, + { + "id": 169, + "string": "Conclusion and Future Work For literature review and manuscript preparation tasks, the gap between novices and experts in terms of task knowledge and execution skills is well-known [15] ."
+ }, + { + "id": 170, + "string": "A majority of the previous studies have brought forth assistive systems that focus heavily on LR tasks, while only a few studies have concentrated on approaches for helping researchers during manuscript preparation." + }, + { + "id": 171, + "string": "With the Rec4LRW system, we have attempted to address the aforementioned gap with a novel task for shortlisting articles from the researcher's reading list, for inclusion in the manuscript." + }, + { + "id": 172, + "string": "The shortlisting task makes use of a popular community detection algorithm [10] for identifying communities of papers generated from the citations network of the papers from the reading list." + }, + { + "id": 173, + "string": "Additionally, we have also tried to vary the shortlisted papers count by taking the article-type choice into consideration." + }, + { + "id": 174, + "string": "In order to evaluate the system, a user evaluation study was conducted with 116 participants who had experience in writing research papers." + }, + { + "id": 175, + "string": "The participants were instructed to run each task, followed by an evaluation questionnaire." + }, + { + "id": 176, + "string": "Participants were requested to answer survey questions and provide subjective feedback on the features of the tasks." + }, + { + "id": 177, + "string": "As hypothesized before the start of the study, students evaluated the task favorably for all measures." + }, + { + "id": 178, + "string": "There was a high level of agreement among all participants on the availability of important papers among the shortlisted papers." + }, + { + "id": 179, + "string": "This finding validates the aim of the task in identifying the papers that manuscript reviewers would expect to be cited." + }, + { + "id": 180, + "string": "In the qualitative feedback provided by the participants, the majority of the participants preferred the idea of shortlisting papers and also thought the output of the task was of good quality."
+ }, + { + "id": 181, + "string": "Secondly, they liked the information cue labels provided along with certain papers, for indicating the special nature of the paper." + }, + { + "id": 182, + "string": "As a part of the critical feedback, participants felt that the study procedure was a bit long-winded, as they had to select 30 papers without reading them, just for running the task." + }, + { + "id": 183, + "string": "As a part of future work, the scope of this task will be expanded to bring in more variations for the different article-type choices." + }, + { + "id": 184, + "string": "For instance, research would be conducted to: (i) ascertain the quantity of recent papers to be shortlisted for different article-type choices, (ii) include new papers in the output so that the user is alerted about key paper(s) which could have been missed during the literature review, (iii) provide more user control in the system so that the user can mark papers as mandatory for shortlisting, and (iv) integrate this task with the citation context recommendation task [11, 14] so that the user can be fully aided during the whole process of manuscript writing."
+ } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 31 + }, + { + "section": "Related Work", + "n": "2", + "start": 32, + "end": 52 + }, + { + "section": "Brief Overview", + "n": "3.1", + "start": 53, + "end": 63 + }, + { + "section": "Dataset", + "n": "3.2", + "start": 64, + "end": 67 + }, + { + "section": "User-Interface (UI) Features", + "n": "3.3", + "start": 68, + "end": 82 + }, + { + "section": "Technique For Shortlisting Papers From Reading List", + "n": "4", + "start": 83, + "end": 133 + }, + { + "section": "Agreement Percentages (AP)", + "n": "6.1", + "start": 134, + "end": 147 + }, + { + "section": "Qualitative Data Analysis", + "n": "6.2", + "start": 148, + "end": 168 + }, + { + "section": "Conclusion and Future Work", + "n": "7", + "start": 169, + "end": 184 + } + ], + "figures": [ + { + "filename": "../figure/image/959-Figure2-1.png", + "caption": "Fig. 2. Agreement percentage results by participant group", + "page": 8, + "bbox": { + "x1": 142.56, + "x2": 452.15999999999997, + "y1": 249.6, + "y2": 408.96 + } + }, + { + "filename": "../figure/image/959-Figure1-1.png", + "caption": "Fig. 1. Sample list of shortlisted papers for the task output", + "page": 4, + "bbox": { + "x1": 124.8, + "x2": 473.28, + "y1": 183.84, + "y2": 397.91999999999996 + } + }, + { + "filename": "../figure/image/959-Table2-1.png", + "caption": "Table 2. Top five categories for preferred and critical aspects", + "page": 9, + "bbox": { + "x1": 117.6, + "x2": 477.12, + "y1": 211.67999999999998, + "y2": 307.2 + } + }, + { + "filename": "../figure/image/959-Table1-1.png", + "caption": "Table 1. 
Evaluation measures and corresponding questions", + "page": 6, + "bbox": { + "x1": 118.56, + "x2": 477.12, + "y1": 639.84, + "y2": 668.64 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-3" + }, + { + "slides": { + "0": { + "title": "Key point Syntactic Information", + "text": [ + "To use or not to use?", + "string-to-string model tree/graph-to-string model" + ], + "page_nums": [ + 2, + 3, + 4, + 5 + ], + "images": [] + }, + "8": { + "title": "English Chinese", + "text": [ + "s2s is the worst", + "More syntactic information is useful Chinese", + "No score is the worst English-", + "Score is useful Chinese", + "SoA is better than SoE", + "Adjusting attention is better than adjusting word embedding", + "Forest is better than 1-best English-", + "Forest (No score) is worse than 1-best (SoE/SoA)", + "FS/TN is worse than 1-best (SoE/SoA) English-", + "Better to use score in linearization Chinese" + ], + "page_nums": [ + 51, + 52, + 53, + 54, + 55, + 56 + ], + "images": [ + "figure/image/961-Table3-1.png", + "figure/image/961-Table2-1.png" + ] + }, + "9": { + "title": "English Japanese", + "text": [ + "s2s is the worst", + "No score is the worst", + "SoA is better than SoE", + "Forest is better than 1-best", + "Forest (No score) is worse", + "FS/TN is worse than 1-best" + ], + "page_nums": [ + 57 + ], + "images": [ + "figure/image/961-Table3-1.png" + ] + }, + "10": { + "title": "Merits and Demerits", + "text": [ + "Use syntactic information explicitly", + "Simpler model, more information", + "Robust to parsing errors", + "Lots of sentences are filtered out due to lengths" + ], + "page_nums": [ + 58 + ], + "images": [] + } + }, + "paper_title": "Forest-Based Neural Machine Translation", + "paper_id": "961", + "paper": { + "title": "Forest-Based Neural Machine Translation", + "abstract": "Tree-based neural machine translation (NMT) approaches, although achieved impressive performance, suffer from a major drawback: they only use the 1best parse tree to direct the translation, 
which potentially introduces translation mistakes due to parsing errors. For statistical machine translation (SMT), forest-based methods have been proven to be effective for solving this problem, while for NMT this kind of approach has not been attempted. This paper proposes a forest-based NMT method that translates a linearized packed forest under a simple sequence-to-sequence framework (i.e., a forest-to-string NMT model). The BLEU score of the proposed method is higher than that of the string-to-string NMT, tree-based NMT, and forest-based SMT systems.", + "text": [ + { + "id": 0, + "string": "Introduction NMT has witnessed promising improvements recently." + }, + { + "id": 1, + "string": "Depending on the types of input and output, these efforts can be divided into three categories: string-to-string systems ; tree-to-string systems (Eriguchi et al., 2016, 2017) ; and string-to-tree systems (Aharoni and Goldberg, 2017; Nadejde et al., 2017) ." + }, + { + "id": 2, + "string": "Compared with string-to-string systems, tree-to-string and string-to-tree systems (henceforth, tree-based systems) offer some attractive features." + }, + { + "id": 3, + "string": "They can use more syntactic information , and can conveniently incorporate prior knowledge ." + }, + { + "id": 4, + "string": "* Contribution during internship at National Institute of Information and Communications Technology." + }, + { + "id": 5, + "string": "† Corresponding author Because of these advantages, tree-based methods have become the focus of much NMT research nowadays." + }, + { + "id": 6, + "string": "Based on how to represent trees, there are two main categories of tree-based NMT methods: representing trees by a tree-structured neural network (Eriguchi et al., 2016; Zaremoodi and Haffari, 2017) , and representing trees by linearization (Vinyals et al., 2015; Dyer et al., 2016; Ma et al., 2017) ."
+ }, + { + "id": 7, + "string": "Compared with the former, the latter method has a relatively simple model structure, so that a larger corpus can be used for training and the model can be trained within reasonable time, hence is preferred from the viewpoint of computation." + }, + { + "id": 8, + "string": "Therefore we focus on this kind of method in this paper." + }, + { + "id": 9, + "string": "In spite of the impressive performance of tree-based NMT systems, they suffer from a major drawback: they only use the 1-best parse tree to direct the translation, which potentially introduces translation mistakes due to parsing errors (Quirk and Corston-Oliver, 2006) ." + }, + { + "id": 10, + "string": "For SMT, forest-based methods have employed a packed forest to address this problem (Huang, 2008) , which represents exponentially many parse trees rather than just the 1-best one ." + }, + { + "id": 11, + "string": "But for NMT, (computationally efficient) forest-based methods are still being explored 1 ." + }, + { + "id": 12, + "string": "Because of the structural complexity of forests, the inexistence of an appropriate topological ordering, and the hyperedge-attachment nature of weights (see Section 3.1 for details), it is not trivial to linearize a forest." + }, + { + "id": 13, + "string": "This hinders the development of forest-based NMT to some extent." + }, + { + "id": 14, + "string": "Inspired by the tree-based NMT methods based on linearization, we propose an efficient forest-based NMT approach (Section 3), which can encode the syntactic information of a packed forest on the basis of a novel weighted linearization method for a packed forest (Section 3.1), and can decode the linearized packed forest under the simple sequence-to-sequence framework (Section 3.2) ." + }, + { + "id": 15, + "string": "Experiments demonstrate the effectiveness of our method (Section 4)."
+ }, + { + "id": 16, + "string": "Preliminaries We first review the general sequence-to-sequence model (Section 2.1), then describe tree-based NMT systems based on linearization (Section 2.2), and finally introduce the packed forest, through which exponentially many trees can be represented in a compact manner (Section 2.3)." + }, + { + "id": 17, + "string": "Sequence-to-sequence model Current NMT systems usually resort to a simple framework, i.e., the sequence-to-sequence model ." + }, + { + "id": 18, + "string": "Given a source sequence (x 0 , ." + }, + { + "id": 19, + "string": "." + }, + { + "id": 20, + "string": "." + }, + { + "id": 21, + "string": ", x T ), in order to find a target sequence (y 0 , ." + }, + { + "id": 22, + "string": "." + }, + { + "id": 23, + "string": "." + }, + { + "id": 24, + "string": ", y T ) that maximizes the conditional probability p(y 0 , ." + }, + { + "id": 25, + "string": "." + }, + { + "id": 26, + "string": "." + }, + { + "id": 27, + "string": ", y T | x 0 , ." + }, + { + "id": 28, + "string": "." + }, + { + "id": 29, + "string": "." + }, + { + "id": 30, + "string": ", x T ), the sequence-to-sequence model uses one RNN to encode the source sequence into a fixed-length context vector c and a second RNN to decode this vector and generate the target sequence." + }, + { + "id": 31, + "string": "Formally, the probability of the target sequence can be calculated as follows: p(y 0 , ." + }, + { + "id": 32, + "string": "." + }, + { + "id": 33, + "string": "." + }, + { + "id": 34, + "string": ",y T | x 0 , ." + }, + { + "id": 35, + "string": "." + }, + { + "id": 36, + "string": "." + }, + { + "id": 37, + "string": ", x T ) = T t=0 p(y t | c, y 0 , ." + }, + { + "id": 38, + "string": "." + }, + { + "id": 39, + "string": "." + }, + { + "id": 40, + "string": ", y t−1 ), (1) where p(y t | c, y 0 , ." + }, + { + "id": 41, + "string": "." + }, + { + "id": 42, + "string": "." 
+ }, + { + "id": 43, + "string": ", y t−1 ) = g(y t−1 , s t , c), (2) s t = f (s t−1 , y t−1 , c), (3) c = q(h 0 , ." + }, + { + "id": 44, + "string": "." + }, + { + "id": 45, + "string": "." + }, + { + "id": 46, + "string": ", h T ), (4) h t = f (e t , h t−1 )." + }, + { + "id": 47, + "string": "(5) Here, g, f , and q are nonlinear functions; h t and s t are the hidden states of the source-side RNN and target-side RNN, respectively, c is the context vector, and e t is the embedding of x t ." + }, + { + "id": 48, + "string": "introduced an attention mechanism to deal with the issues related to long sequences ." + }, + { + "id": 49, + "string": "Instead of encoding the source sequence into a fixed vector c, the attention model uses different c i -s when calculating the target-side output y i at time step i: c i = ∑_{j=0}^{T} α ij h j , (6) α ij = exp(a(s i−1 , h j )) / ∑_{k=0}^{T} exp(a(s i−1 , h k )) ." + }, + { + "id": 50, + "string": "(7) The function a(s i−1 , h j ) can be regarded as representing the soft alignment between the target-side RNN hidden state s i−1 and the source-side RNN hidden state h j ." + }, + { + "id": 51, + "string": "By changing the format of the source/target sequences, this framework can be regarded as a string-to-string NMT system , a tree-to-string NMT system , or a string-to-tree NMT system (Aharoni and Goldberg, 2017) ." + }, + { + "id": 52, + "string": "Linear-structured tree-based NMT systems Regarding the linearization adopted for tree-to-string NMT (i.e., linearization of the source side), Sennrich and Haddow (2016) encoded the sequence of dependency labels and the sequence of words simultaneously, partially utilizing the syntax information, while traversed the constituent tree of the source sentence and combined this with the word sequence, utilizing the syntax information completely." + }, + { + "id": 53, + "string": "Regarding the linearization used for string-to-tree NMT (i.e., linearization of the target side), Nadejde et al."
+ }, + { + "id": 54, + "string": "(2017) used a CCG supertag sequence as the target sequence, while Aharoni and Goldberg (2017) applied a linearization method in a top-down manner, generating a sequence ensemble for the annotated tree in the Penn Treebank (Marcus et al., 1993) ." + }, + { + "id": 55, + "string": "Wu et al." + }, + { + "id": 56, + "string": "(2017) used transition actions to linearize a dependency tree, and employed the sequence-to-sequence framework for NMT." + }, + { + "id": 57, + "string": "It can be seen all current tree-based NMT systems use only one tree for encoding or decoding." + }, + { + "id": 58, + "string": "In contrast, we hope to utilize multiple trees (i.e., a forest)." + }, + { + "id": 59, + "string": "This is not trivial, on account of the lack of a fixed traversal order and the need for a compact representation." + }, + { + "id": 60, + "string": "Packed forest The packed forest gives a representation of exponentially many parsing trees, and can compactly encode many more candidates than the n-best list Figure 1 : An example of (a) a packed forest." + }, + { + "id": 61, + "string": "The numbers in the brackets located at the upper-left corner of each node in the packed forest show one correct topological ordering of the nodes." + }, + { + "id": 62, + "string": "The packed forest is a compact representation of two trees: (b) the correct constituent tree, and (c) an incorrect constituent tree." + }, + { + "id": 63, + "string": "Note that the terminal nodes (i.e., words in the sentence) in the packed forest are shown only for illustration, and they do not belong to the packed forest." + }, + { + "id": 64, + "string": "(Huang, 2008) ." + }, + { + "id": 65, + "string": "Figure 1a shows a packed forest, which can be unpacked into two constituent trees ( Figure 1b and Figure 1c )." + }, + { + "id": 66, + "string": "Formally, a packed forest is a pair V, E , where V is the set of nodes and E is the set of hyperedges." 
+ }, + { + "id": 67, + "string": "Each v ∈ V can be represented as X i,j , where X is a constituent label and i, j ∈ [0, n] are indices of words, showing that the node spans the words ranging from i (inclusive) to j (exclusive)." + }, + { + "id": 68, + "string": "Here, n is the length of the input sentence." + }, + { + "id": 69, + "string": "Each e ∈ E is a three-tuple ⟨head(e), tails(e), score(e)⟩ , where head(e) ∈ V is similar to the head node in a constituent tree, and tails(e) ∈ V * is similar to the set of child nodes in a constituent tree." + }, + { + "id": 70, + "string": "score(e) ∈ R is the logarithm of the probability that tails(e) represents the tails of head(e) calculated by the parser." + }, + { + "id": 71, + "string": "Based on score(e), the score of a constituent tree T can be calculated as follows: score(T ) = −λn + ∑_{e∈E(T )} score(e), (8) where E(T ) is the set of hyperedges appearing in tree T , and λ is a regularization coefficient for the sentence length 2 ." + }, + { + "id": 72, + "string": "2 Following the configuration of Charniak and Johnson Forest-based NMT We first propose a linearization method for the packed forest (Section 3.1), then describe how to encode the linearized forest (Section 3.2), which can then be translated by the conventional decoder (see Section 2.1)." + }, + { + "id": 73, + "string": "Forest linearization Recently, several studies have focused on the linearization methods of a syntax tree, both in the area of tree-based NMT (Section 2.2) and in the area of parsing (Vinyals et al., 2015; Dyer et al., 2016; Ma et al., 2017) ." + }, + { + "id": 74, + "string": "Basically, these methods follow a fixed traversal order (e.g., depth-first), which does not exist for the packed forest (a directed acyclic graph (DAG))." + }, + { + "id": 75, + "string": "Furthermore, the weights are attached to edges of a packed forest instead of the nodes, which further increases the difficulty."
+ }, + { + "id": 76, + "string": "Topological ordering algorithms for DAG (Kahn, 1962; Tarjan, 1976) are not good solutions, because the outputted ordering is not always optimal for machine translation." + }, + { + "id": 77, + "string": "In particular, a topo- (2005) , for all the experiments in this paper, we fixed λ to log 2 600." + }, + { + "id": 78, + "string": "Algorithm 1 Linearization of a packed forest 1: function LINEARIZEFOREST( V, E , w) 2: v ← FINDROOT(V ) 3: r ← [] 4: EXPANDSEQ(v, r, V, E , w) 5: return r 6: function FINDROOT(V ) 7: for v ∈ V do 8: if v has no parent then 9: return v 10: procedure EXPANDSEQ(v, r, V, E , w) 11: for e ∈ E do 12: if head(e) = v then 13: if tails(e) = ∅ then 14: for t ∈ SORT(tails(e)) do Sort tails(e) by word indices." + }, + { + "id": 79, + "string": "15: EXPANDSEQ(t, r, V, E , w) 16: l ← LINEARIZEEDGE(head(e), w) 17: r.append( l, σ(0.0) ) σ is the sigmoid function, i.e., σ(x) = 1 1+e −x , x ∈ R. 18: l ← c LINEARIZEEDGES(tails(e), w) c is a unary operator." + }, + { + "id": 80, + "string": "19: r.append( l, σ(score(e)) ) 20: else 21: l ← LINEARIZEEDGE(head(e), w) 22: r.append( l, σ(0.0) ) 23: function LINEARIZEEDGE(Xi,j, w) 24: return X ⊗ ( j−1 k=i w k ) 25: function LINEARIZEEDGES(v, w) 26: return ⊕v∈vLINEARIZEEDGE(v, w) logical ordering could ignore \"word sequential information\" and \"parent-child information\" in the sentences." + }, + { + "id": 81, + "string": "For example, for the packed forest in Figure 1a , although \"[10]→[1]→[2]→ · · · →[9]→[11]\" is a valid topological ordering, the word sequential information of the words (e.g., \"John\" should be located ahead of the period), which is fairly crucial for translation of languages with fixed pragmatic word order such as Chinese or English, is lost." + }, + { + "id": 82, + "string": "As another example, for the packed forest in Figure 1a , nodes [2], [9], and [10] are all the children of node [11] ." 
+ }, + { + "id": 83, + "string": "However, in the topological order \"[1]→[2]→ · · · →[9]→[10]→[11],\" node [2] is quite far from node [11], while nodes [9] and [10] are both close to node [11] ." + }, + { + "id": 84, + "string": "The parent-child information cannot be reflected in this topological order, which is not what we would expect." + }, + { + "id": 85, + "string": "To address the above two problems, we propose a novel linearization algorithm for a packed forest (Algorithm 1)." + }, + { + "id": 86, + "string": "The algorithm linearizes the packed forest from the root node (Line 2) to leaf nodes by calling the EXPANDSEQ procedure (Line 15) recursively, while preserving the word order in the sentence (Line 14)." + }, + { + "id": 87, + "string": "In this way, word sequential information is preserved." + }, + { + "id": 88, + "string": "Within the EXPANDSEQ procedure (see Figure 1a), once a hyperedge is linearized (Line 16), the tails are also linearized immediately (Line 18)." + }, + { + "id": 89, + "string": "In this way, parent-child information is preserved." + }, + { + "id": 90, + "string": "Intuitively, different parts of constituent trees should be combined in different ways, therefore we define different operators ( c , ⊗, ⊕, or ) to represent the relationships between different parts, so that the representations of these parts can be combined in different ways (see Section 3.2 for details)." + }, + { + "id": 91, + "string": "Words are concatenated by the operator \" \" with each other, a word and a constituent label are concatenated by the operator \"⊗\", the linearization results of child nodes are concatenated by the operator \"⊕\" with each other, while the unary operator \" c \" is used to indicate that the node is the child node of the previous part." + }, + { + "id": 92, + "string": "Furthermore, each token in the linearized sequence is related to a score, representing the confidence of the parser."
+ }, + { + "id": 93, + "string": "The linearization result of the packed forest in Figure 1a is shown in Figure 2 ." + }, + { + "id": 94, + "string": "Tokens in the linearized sequence are separated by slashes." + }, + { + "id": 95, + "string": "Each token in the sequence is composed of different types of symbols and combined by different operators." + }, + { + "id": 96, + "string": "We can see that word sequential information is preserved." + }, + { + "id": 97, + "string": "For example, \"NNP⊗John\" (linearization result of node [1]) is in front of \"VBZ⊗has\" (linearization result of node [3]), which is in front of \"DT⊗a\" (linearization result of node [4])." + }, + { + "id": 98, + "string": "Moreover, parent-child information is also preserved." + }, + { + "id": 99, + "string": "For example, \"NP⊗John\" (linearization result of node [2]) is followed by \" c NNP⊗John\" (linearization result of node [1], the child of node [2])." + }, + { + "id": 100, + "string": "Note that our linearization method cannot fully recover packed forest." + }, + { + "id": 101, + "string": "What we want to do is not to propose a fully recoverable linearization method." + }, + { + "id": 102, + "string": "What we actually want to do is to encode syntax information as much as possible, so that we can improve the performance of NMT." + }, + { + "id": 103, + "string": "As will be shown in Section 4, this goal is achieved." + }, + { + "id": 104, + "string": "Also note that there is one more advantage of our linearization method: the linearized sequence Figure 3 : The framework of the forest-based NMT system." + }, + { + "id": 105, + "string": "is a weighted sequence, while all the previous studies ignored the weights during linearization." + }, + { + "id": 106, + "string": "As will be shown in Section 4, the weights are actually important not only for the linearization of a packed forest, but also for the linearization of a single tree." 
+ }, + { + "id": 107, + "string": "By preserving only the nodes and hyperedges in the 1-best tree and removing all others, our linearization method can be regarded as a treelinearization method." + }, + { + "id": 108, + "string": "Compared with other treelinearization methods, our method combines several different kinds of information within one symbol, retaining the parent-child information, and incorporating the confidence of the parser in the sequence." + }, + { + "id": 109, + "string": "We examine whether the weights can be useful not only for linear structured tree-based NMT but also for our forest-based NMT." + }, + { + "id": 110, + "string": "Furthermore, although our method is nonreversible for packed forests, it is reversible for constituent trees, in that the linearization is processed exactly in the depth-first traversal order and all necessary information in the tree nodes has been encoded." + }, + { + "id": 111, + "string": "As far as we know, there is no previous work on linearization of packed forests." + }, + { + "id": 112, + "string": "Encoding the linearized forest The linearized packed forest forms the input of the encoder, which has two major differences from the input of a sequence-to-sequence NMT system." + }, + { + "id": 113, + "string": "First, the input sequence of the encoder consists of two parts: the symbol sequence and the score sequence." + }, + { + "id": 114, + "string": "Second, each symbol in the symbol sequence consists of several parts (words and constituent labels), which are combined by certain operators ( c , ⊗, ⊕, or )." + }, + { + "id": 115, + "string": "Based on these observa-tions, we propose two new frameworks, which are illustrated in Figure 3 ." + }, + { + "id": 116, + "string": "Formally, the input layer receives the sequence ( l 0 , ξ 0 , ." + }, + { + "id": 117, + "string": "." + }, + { + "id": 118, + "string": "." + }, + { + "id": 119, + "string": ", l T , ξ T ), where l i denotes the i-th symbol and ξ i its score." 
+ }, + { + "id": 120, + "string": "Then, the sequence is fed into the score layer and the symbol layer." + }, + { + "id": 121, + "string": "The score and symbol layers receive the sequence and output the score sequence ξ = (ξ 0 , ." + }, + { + "id": 122, + "string": "." + }, + { + "id": 123, + "string": "." + }, + { + "id": 124, + "string": ", ξ T ) and symbol sequence l = (l 0 , ." + }, + { + "id": 125, + "string": "." + }, + { + "id": 126, + "string": "." + }, + { + "id": 127, + "string": ", l T ), respectively, from the input." + }, + { + "id": 128, + "string": "Any item l ∈ l in the symbol layer has the form l = o 0 x 1 o 1 ." + }, + { + "id": 129, + "string": "." + }, + { + "id": 130, + "string": "." + }, + { + "id": 131, + "string": "x m−1 o m−1 x m , (9) where each x k (k = 1, ." + }, + { + "id": 132, + "string": "." + }, + { + "id": 133, + "string": "." + }, + { + "id": 134, + "string": ", m) is a word or a constituent label, m is the total number of words and constituent labels in a symbol, o 0 is \" c \" or empty, and each o k (k = 1, ." + }, + { + "id": 135, + "string": "." + }, + { + "id": 136, + "string": "." + }, + { + "id": 137, + "string": ", m − 1) is either \"⊗\", \"⊕\", or \" \"." + }, + { + "id": 138, + "string": "Then, in the node/operator layer, the x-s and o-s are separated and rearranged as x = (x 1 , ." + }, + { + "id": 139, + "string": "." + }, + { + "id": 140, + "string": "." + }, + { + "id": 141, + "string": ", x m , o 0 , ." + }, + { + "id": 142, + "string": "." + }, + { + "id": 143, + "string": "." + }, + { + "id": 144, + "string": ", o m−1 ), which is fed to the pre-embedding layer." + }, + { + "id": 145, + "string": "The pre-embedding layer generates a sequence p = (p 1 , ." + }, + { + "id": 146, + "string": "." + }, + { + "id": 147, + "string": "." + }, + { + "id": 148, + "string": ", p m , ." + }, + { + "id": 149, + "string": "." + }, + { + "id": 150, + "string": "." 
+ }, + { + "id": 151, + "string": ", p 2m ), which is calculated as follows: p = W emb [I(x)]." + }, + { + "id": 152, + "string": "(10) Here, the function I(x) returns a list of the indices in the dictionary for all the elements in x, which consist of words, constituent labels, or operators." + }, + { + "id": 153, + "string": "In addition, W emb is the embedding matrix of size (|w word | + |w label | + 4) × d word , where |w word | and |w label | are the total number of words and constituent labels, respectively, d word is the dimension of the word embedding, and there are four possible operators: \" c ,\" \"⊗,\" \"⊕,\" and \" .\"" + }, + { + "id": 154, + "string": "Note that p is a list of 2m vectors, and the dimension of each vector is d word ." + }, + { + "id": 155, + "string": "Because the length of the sequence of the input layer is T + 1, there are T + 1 different ps in the pre-embedding layer, which we denote by P = (p 0 , ." + }, + { + "id": 156, + "string": "." + }, + { + "id": 157, + "string": "." + }, + { + "id": 158, + "string": ", p T )." + }, + { + "id": 159, + "string": "Depending on where the score layer is incorporated, we propose two frameworks: Score-on-Embedding (SoE) and Score-on-Attention (SoA)." + }, + { + "id": 160, + "string": "In SoE, the k-th element of the embedding layer is calculated as follows: e k = ξ k ∑_{p∈p k} p, (11) while in SoA, the k-th element of the embedding layer is calculated as e k = ∑_{p∈p k} p, (12) where k = 0, ." + }, + { + "id": 161, + "string": "." + }, + { + "id": 162, + "string": "." + }, + { + "id": 163, + "string": ", T ." + }, + { + "id": 164, + "string": "Note that e k ∈ R d word ." + }, + { + "id": 165, + "string": "In this manner, the proposed forest-to-string NMT framework is connected with the conventional sequence-to-sequence NMT framework." + }, + { + "id": 166, + "string": "After calculating the embedding vectors in the embedding layer, the hidden vectors are calculated using Equation 5."
+ }, + { + "id": 167, + "string": "When calculating the context vector c i -s, SoE and SoA differ from each other." + }, + { + "id": 168, + "string": "For SoE, the c i -s are calculated using Equation 6 and 7, while for SoA, the α ij -s used to calculate the c i -s are determined as follows: α ij = exp(ξ j a(s i−1 , h j )) / ∑_{k=0}^{T} exp(ξ k a(s i−1 , h k )) ." + }, + { + "id": 169, + "string": "(13) Then, using the decoder of the sequence-to-sequence framework, the sentence of the target language can be generated." + }, + { + "id": 170, + "string": "Experiments Setup We evaluate the effectiveness of our forest-based NMT systems on English-to-Chinese and English-to-Japanese translation tasks 3 ." + }, + { + "id": 171, + "string": "The statistics of the corpora used in our experiments are summarized in Table 1 ." + }, + { + "id": 172, + "string": "The packed forests of English sentences are obtained by the constituent parser proposed by Huang (2008) 4 ." + }, + { + "id": 173, + "string": "We filtered out the sentences for 3 English is commonly chosen as the target language." + }, + { + "id": 174, + "string": "We chose English as the source language because a high-performance forest parser is not available for other languages." + }, + { + "id": 175, + "string": "For Japanese sentences, we followed the preprocessing steps recommended in WAT 2017 6 ." + }, + { + "id": 176, + "string": "We implemented our framework based on nematus 8 (Sennrich et al., 2017) ." + }, + { + "id": 177, + "string": "For optimization, we used the Adadelta algorithm (Zeiler, 2012) ." + }, + { + "id": 178, + "string": "In order to avoid overfitting, we used dropout (Srivastava et al., 2014) on the embedding layer and hidden layer, with the dropout probability set to 0.2." + }, + { + "id": 179, + "string": "We used the gated recurrent unit as the recurrent unit of RNNs, which are bi-directional, with one hidden layer."
+ }, + { + "id": 180, + "string": "Based on the tuning result, we set the maximum length of the input sequence to 300, the hidden layer size as 512, the dimension of word embedding as 620, and the batch size for training as 40." + }, + { + "id": 181, + "string": "We pruned the packed forest using the algorithm of Huang (2008) , with a threshold of 5." + }, + { + "id": 182, + "string": "If the linearization of the pruned forest is still longer than 300, then we linearize the 1-best parsing tree instead of the forest." + }, + { + "id": 183, + "string": "During decoding, we used beam search, and fixed the beam size to 12." + }, + { + "id": 184, + "string": "For the case of Forest (SoA), with 1 core of Tesla K80 GPU and LDC corpus as the training data, training took about 10 days, and the decoding speed was about 10 sentences per second." + }, + { + "id": 185, + "string": "Table 2 : English-Chinese experimental results (character-level BLEU)." + }, + { + "id": 186, + "string": "\"FS,\" \"TN,\" and \"FN\" denote forest-based SMT, tree-based NMT, and forest-based NMT systems, respectively." + }, + { + "id": 187, + "string": "We performed the paired bootstrap resampling significance test (Koehn, 2004) Table 3 : English-Japanese experimental results (character-level BLEU)." + }, + { + "id": 188, + "string": "Experimental results Table 2 and 3 summarize the experimental results." + }, + { + "id": 189, + "string": "To avoid the effect of segmentation errors, the performance was evaluated by character-level BLEU (Papineni et al., 2002) ." + }, + { + "id": 190, + "string": "We compare our proposed models (i.e., Forest (SoE) and Forest (SoA)) with three types of baseline: a string-to-string model (s2s), forest-based models that do not use score sequences (Forest (No score)), and tree-based models that use the 1-best parsing tree (1-best (No score, SoE, SoA))."
+ }, + { + "id": 191, + "string": "For the 1-best models, we preserve the nodes and hyperedges that are used in the 1-best constituent tree in the packed forest, and remove all other nodes and hyperedges, yielding a pruned forest that contains only the 1-best constituent tree." + }, + { + "id": 192, + "string": "For the \"No score\" configurations, we force the input score sequence to be a sequence of 1.0 with the same length as the input symbol sequence, so that neither the embedding layer nor the attention layer are affected by the score sequence." + }, + { + "id": 193, + "string": "In addition, we also perform a comparison with some state-of-the-art tree-based systems that are publicly available, including an SMT system and the NMT systems (Eriguchi et al." + }, + { + "id": 194, + "string": "(2016) 2017) )." + }, + { + "id": 195, + "string": "For , we use the implementation of cicada 11 ." + }, + { + "id": 196, + "string": "For , we reimplemented the \"Mixed RNN Encoder\" model, because of its outstanding performance on the NIST MT corpus." + }, + { + "id": 197, + "string": "We can see that for both English-Chinese and English-Japanese, compared with the s2s baseline system, both the 1-best and forest-based configurations yield better results." + }, + { + "id": 198, + "string": "This indicates syntactic information contained in the constituent trees or forests is indeed useful for machine translation." + }, + { + "id": 199, + "string": "Specifically, we observe the following facts." 
+ }, + { + "id": 200, + "string": "First, among the three different frameworks SoE, SoA, and No-score, the SoA framework performs the best, while the No-score framework 9 https://github.com/tempra28/tree2seq 10 https://github.com/howardchenhd/Syntax-awared-NMT 11 https://github.com/tarowatanabe/cicada [Source] In the Czech Republic , which was ravaged by serious floods last summer , the temperatures in its border region adjacent to neighboring Slovakia plunged to minus 18 degrees Celsius ." + }, + { + "id": 201, + "string": "performs the worst." + }, + { + "id": 202, + "string": "This indicates that the scores of the edges in constituent trees or packed forests, which reflect the confidence of the correctness of the edges, are indeed useful." + }, + { + "id": 203, + "string": "In fact, for the 1-best constituent parsing tree, the score of the edge reflects the confidence of the parser." + }, + { + "id": 204, + "string": "By using this information, the NMT system succeeds in learning better attention, attending closely to confident structures while ignoring unconfident ones, which improved the translation performance." + }, + { + "id": 205, + "string": "This fact is ignored by previous studies on tree-based NMT." + }, + { + "id": 206, + "string": "Furthermore, it is better to use the scores to modify the values of attention instead of rescaling the word embeddings, because modifying word embeddings carelessly may change the semantic meanings of words." + }, + { + "id": 207, + "string": "Second, compared with the cases using only the 1-best constituent trees, using packed forests yields statistically significantly better results for the SoE and SoA frameworks." + }, + { + "id": 208, + "string": "This shows the effectiveness of using more syntactic information."
+ }, + { + "id": 209, + "string": "Compared with one constituent tree, the packed forest, which contains multiple different trees, describes the syntactic structure of the sentence in different aspects, which together increase the accuracy of machine translation." + }, + { + "id": 210, + "string": "However, without using the scores, the 1-best constituent tree is preferred." + }, + { + "id": 211, + "string": "This is because without using the scores, all trees in the packed forest are treated equally, which makes it easy to introduce noise into the encoder." + }, + { + "id": 212, + "string": "Compared with other types of state-of-the-art systems, our systems using only the 1-best tree (1-best(SoE, SoA)) are better than the other tree-based systems." + }, + { + "id": 213, + "string": "Moreover, our NMT systems using the packed forests achieve the best performance." + }, + { + "id": 214, + "string": "These results also support the usefulness of the scores of the edges and packed forests in NMT." + }, + { + "id": 215, + "string": "As for the efficiency, the training time of the SoA system was slightly longer than that of the SoE system, which was about twice that of the s2s baseline." + }, + { + "id": 216, + "string": "The training time of the tree-based system was about 1.5 times that of the baseline." + }, + { + "id": 217, + "string": "For the case of Forest (SoA), with 1 core of Tesla P100 GPU and LDC corpus as the training data, training took about 10 days, and the decoding speed was about 10 sentences per second." + }, + { + "id": 218, + "string": "The reason for the relatively low efficiency is that the linearized sequences of packed forests were much longer than word sequences, enlarging the scale of the inputs." + }, + { + "id": 219, + "string": "Despite this, the training process ended within reasonable time."
+ }, + { + "id": 220, + "string": "Figure 4 illustrates the translation results of an English sentence using several different configurations: the s2s baseline, using only the 1-best tree (SoE), and using the packed forest (SoE)." + }, + { + "id": 221, + "string": "This is a sentence from NIST MT 03, and the training corpus is the LDC corpus." + }, + { + "id": 222, + "string": "Qualitative analysis For the s2s case, no syntactic information is utilized, and therefore the output of the system is not a grammatical Chinese sentence." + }, + { + "id": 223, + "string": "The attributive phrase of \"Czech border region\" is a complete sentence." + }, + { + "id": 224, + "string": "However, the attributive is not allowed to be a complete sentence in Chinese." + }, + { + "id": 225, + "string": "For the case of using 1-best constituent tree, the output is a grammatical Chinese sentence." + }, + { + "id": 226, + "string": "However, the phrase \"adjacent to neighboring Slovakia\" is completely ignored in the translation result." + }, + { + "id": 227, + "string": "After analyzing the constituent tree, we found that this phrase was incorrectly parsed as an \"adverb phrase\", so that the NMT system paid little attention to it, because of the low confidence given by the parser." + }, + { + "id": 228, + "string": "In contrast, for the case of the packed forest, we can see this phrase was not ignored and was translated correctly." + }, + { + "id": 229, + "string": "Actually, besides \"adverb phrase\", this phrase was also correctly parsed as an \"adjective phrase\", and covered by multiple different nodes in the forest, making it difficult for the encoder to ignore the phrase." + }, + { + "id": 230, + "string": "We also noticed that our method performed better on learning attention." 
+ }, + { + "id": 231, + "string": "For the example in Figure 4 , we observed that for s2s model, the decoder paid attention to the word \"Czech\" twice, which causes the output sentence contains the Chinese translation of Czech twice." + }, + { + "id": 232, + "string": "On the other hand, for our forest model, by using the syntax information, the decoder paid attention to the phrase \"In the Czech Republic\" only once, making the decoder generates the correct output." + }, + { + "id": 233, + "string": "Related work Incorporating syntactic information into NMT systems is attracting widespread attention nowadays." + }, + { + "id": 234, + "string": "Compared with conventional string-to-string NMT systems, tree-based systems demonstrate a better performance with the help of constituent trees or dependency trees." + }, + { + "id": 235, + "string": "The first noteworthy study is Eriguchi et al." + }, + { + "id": 236, + "string": "(2016) , which used Tree-structured LSTM (Tai et al., 2015) to encode the HPSG syntax tree of the sentence in the source-side in a bottom-up manner." + }, + { + "id": 237, + "string": "Then, Chen et al." + }, + { + "id": 238, + "string": "(2017) enhanced the encoder with a top-down tree encoder." + }, + { + "id": 239, + "string": "As a simple extension of Eriguchi et al." + }, + { + "id": 240, + "string": "(2016) , very recently, Zaremoodi and Haffari (2017) proposed a forest-based NMT method by representing the packed forest with a forest-structured neural network." + }, + { + "id": 241, + "string": "However, their method was evaluated in small-scale MT settings (each training dataset consists of under 10k parallel sentences)." + }, + { + "id": 242, + "string": "In contrast, our proposed method is effective in a largescale MT setting, and we present qualitative analysis regarding the effectiveness of using forests in NMT." 
+ }, + { + "id": 243, + "string": "Although these methods obtained good results, the tree-structured network used by the encoder made the training and decoding relatively slow, therefore restricts the scope of application." + }, + { + "id": 244, + "string": "Other attempts at encoding syntactic trees have also been proposed." + }, + { + "id": 245, + "string": "Eriguchi et al." + }, + { + "id": 246, + "string": "(2017) combined the Recurrent Neural Network Grammar (Dyer et al., 2016) with NMT systems, while linearized the constituent tree and encoded it using RNNs." + }, + { + "id": 247, + "string": "The training of these methods is fast, because of the linear structures of RNNs." + }, + { + "id": 248, + "string": "However, all these syntax-based NMT systems used only the 1-best parsing tree, making the systems sensitive to parsing errors." + }, + { + "id": 249, + "string": "Instead of using trees to represent syntactic information, some studies use other data structures to represent the latent syntax of the input sentence." + }, + { + "id": 250, + "string": "For example, Hashimoto and Tsuruoka (2017) proposed translating using a latent graph." + }, + { + "id": 251, + "string": "However, such systems do not enjoy the benefit of handcrafted syntactic knowledge, because they do not use a parser trained from a large treebank with human annotations." + }, + { + "id": 252, + "string": "Compared with these related studies, our framework utilizes a linearized packed forest, meaning the encoder can encode exponentially many trees in an efficient manner." + }, + { + "id": 253, + "string": "The experimental results demonstrated these advantages." + }, + { + "id": 254, + "string": "Conclusion and future work We proposed a new NMT framework, which encodes a packed forest for the source sentence using linear-structured neural networks, such as RNN." 
+ }, + { + "id": 255, + "string": "Compared with conventional string-tostring NMT systems and tree-to-string NMT systems, our framework can utilize exponentially many linearized parsing trees during encoding, without significantly decreasing the efficiency." + }, + { + "id": 256, + "string": "This represents the first attempt at using a forest under the string-to-string NMT framework." + }, + { + "id": 257, + "string": "The experimental results demonstrate the effectiveness of our framework." + }, + { + "id": 258, + "string": "As future work, we plan to design some more elaborate structures to incorporate the score layer in the encoder." + }, + { + "id": 259, + "string": "Further improvement in the translation performance is expected to be achieved for the forest-based NMT system." + }, + { + "id": 260, + "string": "We will also apply the proposed linearization method to other tasks." + } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 15 + }, + { + "section": "Preliminaries", + "n": "2", + "start": 16, + "end": 16 + }, + { + "section": "Sequence-to-sequence model", + "n": "2.1", + "start": 17, + "end": 51 + }, + { + "section": "Linear-structured tree-based NMT systems", + "n": "2.2", + "start": 52, + "end": 59 + }, + { + "section": "Packed forest", + "n": "2.3", + "start": 60, + "end": 71 + }, + { + "section": "Forest-based NMT", + "n": "3", + "start": 72, + "end": 72 + }, + { + "section": "Forest linearization", + "n": "3.1", + "start": 73, + "end": 111 + }, + { + "section": "Encoding the linearized forest", + "n": "3.2", + "start": 112, + "end": 169 + }, + { + "section": "Setup", + "n": "4.1", + "start": 170, + "end": 187 + }, + { + "section": "Experimental results", + "n": "4.2", + "start": 188, + "end": 221 + }, + { + "section": "Qualitative analysis", + "n": "4.3", + "start": 222, + "end": 232 + }, + { + "section": "Related work", + "n": "5", + "start": 233, + "end": 253 + }, + { + "section": "Conclusion and future 
work", + "n": "6", + "start": 254, + "end": 260 + } + ], + "figures": [ + { + "filename": "../figure/image/961-Table1-1.png", + "caption": "Table 1: Statistics of the corpora.", + "page": 5, + "bbox": { + "x1": 307.68, + "x2": 525.12, + "y1": 61.44, + "y2": 168.0 + } + }, + { + "filename": "../figure/image/961-Table2-1.png", + "caption": "Table 2: English-Chinese experimental results (character-level BLEU). “FS,” “TN,” and “FN” denote forest-based SMT, tree-based NMT, and forest-based NMT systems, respectively. We performed the paired bootstrap resampling significance test (Koehn, 2004) over the NIST MT 03 to 05 corpus, with respect to the s2s baseline, and list the p values in the table.", + "page": 6, + "bbox": { + "x1": 118.56, + "x2": 478.08, + "y1": 61.44, + "y2": 197.28 + } + }, + { + "filename": "../figure/image/961-Table3-1.png", + "caption": "Table 3: English-Japanese experimental results (character-level BLEU).", + "page": 6, + "bbox": { + "x1": 197.76, + "x2": 400.32, + "y1": 276.48, + "y2": 412.32 + } + }, + { + "filename": "../figure/image/961-Figure1-1.png", + "caption": "Figure 1: An example of (a) a packed forest. The numbers in the brackets located at the upper-left corner of each node in the packed forest show one correct topological ordering of the nodes. The packed forest is a compact representation of two trees: (b) the correct constituent tree, and (c) an incorrect constituent tree. 
Note that the terminal nodes (i.e., words in the sentence) in the packed forest are shown only for illustration, and they do not belong to the packed forest.", + "page": 2, + "bbox": { + "x1": 99.36, + "x2": 526.56, + "y1": 67.2, + "y2": 305.28 + } + }, + { + "filename": "../figure/image/961-Figure4-1.png", + "caption": "Figure 4: Chinese translation results of an English sentence.", + "page": 7, + "bbox": { + "x1": 73.44, + "x2": 524.16, + "y1": 64.8, + "y2": 128.64 + } + }, + { + "filename": "../figure/image/961-Figure2-1.png", + "caption": "Figure 2: Linearization result of the packed forest in Figure 1a", + "page": 3, + "bbox": { + "x1": 312.96, + "x2": 537.12, + "y1": 63.36, + "y2": 121.44 + } + }, + { + "filename": "../figure/image/961-Figure3-1.png", + "caption": "Figure 3: The framework of the forest-based NMT system.", + "page": 4, + "bbox": { + "x1": 72.0, + "x2": 521.76, + "y1": 63.839999999999996, + "y2": 241.44 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-4" + }, + { + "slides": { + "0": { + "title": "Adversarial Attacks Perturbations", + "text": [ + "Apply a small (indistinguishable) perturbation to the input that elicit large changes in the output", + "Figure from Goodfellow et al. 
(2014)" + ], + "page_nums": [ + 1, + 2, + 3, + 4, + 5 + ], + "images": [] + }, + "1": { + "title": "Indistinguishable Perturbations", + "text": [ + "Small perturbations are well defined in vision", + "Small l2 ~= indistinguishable to the human eye" + ], + "page_nums": [ + 6, + 7 + ], + "images": [] + }, + "2": { + "title": "Not all Text Perturbations are Equal", + "text": [ + "Hes very annoying Hes pretty friendly Hes She friendly Hes very freindly", + "[Different meaning] [Similar meaning] [Nonsensical] [Typo]", + "Cant expect the model to output the same output!", + "Why and How you should evaluate adversarial perturbations" + ], + "page_nums": [ + 8, + 9, + 10, + 11, + 12, + 13, + 14 + ], + "images": [] + }, + "4": { + "title": "Problem Definition", + "text": [ + "Reference They plow it right back into filing", + "Original Ils le reinvestissent directement en engageant", + "Base output They direct it directly by engaging", + "A dv. src Ilss le reinvestissent dierctement en engagaent plus de proces. Adv. output .. de plus." 
+ ], + "page_nums": [ + 16, + 17, + 18, + 19, + 20, + 21, + 22 + ], + "images": [] + }, + "5": { + "title": "Source Side Evaluation", + "text": [ + "Evaluate meaning preservation on the source side", + "Where is a similarity metric such that", + "Hes very friendly H es pretty friendly Hes very friendly H es very annoying", + "Hes very friendly H es pretty friendly Hes very friendly Hes She friendly" + ], + "page_nums": [ + 23 + ], + "images": [] + }, + "6": { + "title": "Target Side Evaluation", + "text": [ + "Evaluate relative meaning destruction on the target side" + ], + "page_nums": [ + 24, + 25, + 26, + 27, + 28, + 29, + 30 + ], + "images": [] + }, + "7": { + "title": "Successful Adversarial Attacks", + "text": [ + "Source meaning destruction Target meaning destruction", + "Destroy the meaning on the target side more than on the source side" + ], + "page_nums": [ + 31, + 32, + 33, + 34 + ], + "images": [] + }, + "8": { + "title": "Which similarity metric to use", + "text": [ + "How would you rate the similarity between the meaning of these two sentences?", + "6 point scale, details in paper", + "The meaning is completely different or one of the sentence s is meaningless", + "The topic is the same but the meaning is different", + "Some key information is different", + "The key information is the same but the details differ", + "Meaning is essentially the same but some expressions are unnatural Meaning is essentially equal and the two sentences are well-formed [Language]", + "Geometric mean of n-gram precision + length penalty", + "METEOR [Banerjee and Lavie, 2005]", + "Word matching taking into account stemming, synonyms, paraphrases...", + "chrF [Popovic, 2015] Character n-gram F-score" + ], + "page_nums": [ + 35, + 36, + 37, + 38 + ], + "images": [] + }, + "10": { + "title": "Data and Models", + "text": [ + "{Czech, German, French} English", + "Both word and sub-word based models" + ], + "page_nums": [ + 40 + ], + "images": [] + }, + "11": { + "title": 
"Gradient Based Adversarial Attacks on Text", + "text": [ + "Idea: Back propagate through the model to score possible substitutions", + "Le g ros c hien The big dog .", + "The big dog . ", + "Idea: Word substitution Adding word vector difference", + "Use the 1st order approximation to maximize the loss" + ], + "page_nums": [ + 41, + 42, + 43, + 44, + 45, + 46, + 71 + ], + "images": [] + }, + "13": { + "title": "Constrained Adversarial Attacks kNN", + "text": [ + "Only replace words with 10 nearest neighbors in embedding space", + "Example from our fren Transformer source embeddings", + "grand (tall SING+MASC) grands (tall PL+MASC) grande (tall SING+FEM) grandes (tall PL+FEM) gros (fat SING+MASC) grosse (fat SING+FEM) math (math) maths (maths) mathematique (mathematic) mathematiques (mathematics) objective (objective [ADJ] SING+FEM)" + ], + "page_nums": [ + 48 + ], + "images": [] + }, + "14": { + "title": "Constrained Adversarial Attacks CharSwap", + "text": [ + "Only swap word internal characters to get OOVs", + "adversarial ad vresa rial", + "If thats impossible, repeat the last character" + ], + "page_nums": [ + 49 + ], + "images": [] + }, + "15": { + "title": "Choosing an Similarity Metric", + "text": [ + "Human vs automatic (pearson r):", + "Humans score original/adversarial outpu t", + "Compare scores to automatic metric with", + "(Relative Decrease in chrF)" + ], + "page_nums": [ + 51, + 52, + 53 + ], + "images": [] + }, + "16": { + "title": "Effect of Constraints on Evaluation", + "text": [ + "a feet eae Unconstrained" + ], + "page_nums": [ + 54 + ], + "images": [ + "figure/image/967-Figure1-1.png" + ] + }, + "18": { + "title": "Takeway", + "text": [ + "How would you rate the similarity between the meaning of these two sentences?", + "The meaning is complete ly different or one of the sentence s is meaningless", + "The topic is the same but the meaning is different Some key information is different", + "When doing adversarial attacks", + "The key information 
is th e same but the details differ Meaning is essentially the same but some expressions are unnatural Meaning is essentially eq ual and the two sentences are we ll-formed [Language]", + "Evaluate meaning preservation on the source side", + "When doing adversarial training", + "Consider adding constraints to your attacks", + "Not only true for seq2seq!", + "Easily transposed to classification, etc..", + "Just adapt and accordingly" + ], + "page_nums": [ + 66, + 67, + 68 + ], + "images": [] + }, + "19": { + "title": "Human Evaluation the Gold Standard", + "text": [ + "Check for semantic similarity and fluency", + "How would you rate the similarity between the meaning of these two sentences?", + "The meaning is completely different o r one of the sentences is meaningless", + "The topic is the same but the meaning is different", + "Some key information is different", + "The key information is the same but the details differ", + "Meaning is essentially the same but some expressions are unnatural", + "Meaning is essentially equal and the two sentences are well-formed [Language]" + ], + "page_nums": [ + 72 + ], + "images": [] + }, + "20": { + "title": "Example of a Successful Attack", + "text": [ + "Original Ils le reinvestissent directement en engageant plus de proces.", + "Adv. src. Ilss le reinvestissent dierctement en engagaent plus de proces.", + "Ref. They plow it right back into filing more troll lawsuits.", + "Base output They direct it directly by engaging more cases.", + "Adv. output .. de plus." + ], + "page_nums": [ + 73 + ], + "images": [] + }, + "21": { + "title": "Example of an Unsuccessful Attack", + "text": [ + "Original Cetait en Juillet 1969.", + "Adv. src. Cetiat en Jiullet", + "Base output This was in July 1969." 
+ ], + "page_nums": [ + 74 + ], + "images": [] + } + }, + "paper_title": "On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models", + "paper_id": "967", + "paper": { + "title": "On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models", + "abstract": "Adversarial examples -perturbations to the input of a model that elicit large changes in the output -have been shown to be an effective way of assessing the robustness of sequenceto-sequence (seq2seq) models. However, these perturbations only indicate weaknesses in the model if they do not change the input so significantly that it legitimately results in changes in the expected output. This fact has largely been ignored in the evaluations of the growing body of related literature. Using the example of untargeted attacks on machine translation (MT), we propose a new evaluation framework for adversarial attacks on seq2seq models that takes the semantic equivalence of the pre-and post-perturbation input into account. Using this framework, we demonstrate that existing methods may not preserve meaning in general, breaking the aforementioned assumption that source side perturbations should not result in changes in the expected output. We further use this framework to demonstrate that adding additional constraints on attacks allows for adversarial perturbations that are more meaningpreserving, but nonetheless largely change the output sequence. Finally, we show that performing untargeted adversarial training with meaning-preserving attacks is beneficial to the model in terms of adversarial robustness, without hurting test performance. 1", + "text": [ + { + "id": 0, + "string": "Introduction Attacking a machine learning model with adversarial perturbations is the process of making changes to its input to maximize an adversarial goal, such as mis-classification (Szegedy et al., 2013) or mis-translation (Zhao et al., 2018) ." 
+ }, + { + "id": 1, + "string": "These attacks provide insight into the vulnerabilities of machine learning models and their brittleness to samples outside the training distribution." + }, + { + "id": 2, + "string": "Lack of robustness to these attacks poses security concerns to safety-critical applications, e.g." + }, + { + "id": 3, + "string": "self-driving cars (Bojarski et al., 2016) ." + }, + { + "id": 4, + "string": "Adversarial attacks were first defined and investigated for computer vision systems (Szegedy et al." + }, + { + "id": 5, + "string": "(2013) ; Goodfellow et al." + }, + { + "id": 6, + "string": "(2014) ; Moosavi-Dezfooli et al." + }, + { + "id": 7, + "string": "(2016) inter alia), where the input space is continuous, making minuscule perturbations largely imperceptible to the human eye." + }, + { + "id": 8, + "string": "In discrete spaces such as natural language sentences, the situation is more problematic; even a flip of a single word or character is generally perceptible by a human reader." + }, + { + "id": 9, + "string": "Thus, most of the mathematical framework in previous work is not directly applicable to discrete text data." + }, + { + "id": 10, + "string": "Moreover, there is no canonical distance metric for textual data like the p norm in real-valued vector spaces such as images, and evaluating the level of semantic similarity between two sentences is a field of research of its own (Cer et al., 2017) ." + }, + { + "id": 11, + "string": "This elicits a natural question: what does the term \"adversarial perturbation\" mean in the context of natural language processing (NLP)?" + }, + { + "id": 12, + "string": "We propose a simple but natural criterion for adversarial examples in NLP, particularly untargeted 2 attacks on seq2seq models: adversarial examples should be meaning-preserving on the source side, but meaning-destroying on the target side." 
+ }, + { + "id": 13, + "string": "The focus on explicitly evaluating meaning preservation is in contrast to previous work on adversarial examples for seq2seq models (Belinkov and Bisk, 2018; Zhao et al., 2018; Cheng et al., 2018; Ebrahimi et al., 2018a) ." + }, + { + "id": 14, + "string": "Nonetheless, this feature is extremely important; given two sentences with equivalent meaning, we would expect a good model to produce two outputs with equivalent meaning." + }, + { + "id": 15, + "string": "In other words, any meaningpreserving perturbation that results in the model output changing drastically highlights a fault of the model." + }, + { + "id": 16, + "string": "A first technical contribution of this paper is to lay out a method for formalizing this concept of meaning-preserving perturbations ( §2)." + }, + { + "id": 17, + "string": "This makes it possible to evaluate the effectiveness of adversarial attacks or defenses either using goldstandard human evaluation, or approximations that can be calculated without human intervention." + }, + { + "id": 18, + "string": "We further propose a simple method of imbuing gradient-based word substitution attacks ( §3.1) with simple constraints aimed at increasing the chance that the meaning is preserved ( §3.2)." + }, + { + "id": 19, + "string": "Our experiments are designed to answer several questions about meaning preservation in seq2seq models." + }, + { + "id": 20, + "string": "First, we evaluate our proposed \"sourcemeaning-preserving, target-meaning-destroying\" criterion for adversarial examples using both manual and automatic evaluation ( §4.2) and find that a less widely used evaluation metric (chrF) provides significantly better correlation with human judgments than the more widely used BLEU and ME-TEOR metrics." 
+ }, + { + "id": 21, + "string": "We proceed to perform an evaluation of adversarial example generation techniques, finding that chrF does help to distinguish between perturbations that are more meaning-preserving across a variety of languages and models ( §4.3)." + }, + { + "id": 22, + "string": "Finally, we apply existing methods for adversarial training to the adversarial examples with these constraints and show that making adversarial inputs more semantically similar to the source is beneficial for robustness to adversarial attacks and does not decrease test performance on the original data distribution ( §5)." + }, + { + "id": 23, + "string": "A Framework for Evaluating Adversarial Attacks In this section, we present a simple procedure for evaluating adversarial attacks on seq2seq models." + }, + { + "id": 24, + "string": "We will use the following notation: x and y refer to the source and target sentence respectively." + }, + { + "id": 25, + "string": "We denote x's translation by model M as y M ." + }, + { + "id": 26, + "string": "Finally, x andŷ M represent an adversarially perturbed version of x and its translation by M , respectively." + }, + { + "id": 27, + "string": "The nature of M and the procedure for obtaininĝ x from x are irrelevant to the discussion below." + }, + { + "id": 28, + "string": "The Adversarial Trade-off The goal of adversarial perturbations is to produce failure cases for the model M ." + }, + { + "id": 29, + "string": "Hence, the evaluation must include some measure of the target similarity between y and y M , which we will denote s tgt (y,ŷ M )." + }, + { + "id": 30, + "string": "However, if no distinction is being made between perturbations that preserve the meaning and those that don't, a sentence like \"he's very friendly\" is considered a valid adversarial perturbation of \"he's very adversarial\", even though its meaning is the opposite." 
+ }, + { + "id": 31, + "string": "Hence, it is crucial, when evaluating adversarial attacks on MT models, that the discrepancy between the original and adversarial input sentence be quantified in a way that is sensitive to meaning." + }, + { + "id": 32, + "string": "Let us denote such a source similarity score s src (x,x)." + }, + { + "id": 33, + "string": "Based on these functions, we define the target relative score decrease as: d tgt (y, y M ,ŷ M ) = 0 if s tgt (y,ŷ M ) ≥ s tgt (y, y M ) stgt(y,y M )−stgt(y,ŷ M ) stgt(y,y M ) otherwise (1) The choice to report the relative decrease in s tgt makes scores comparable across different models or languages 3 ." + }, + { + "id": 34, + "string": "For instance, for languages that are comparatively easy to translate (e.g." + }, + { + "id": 35, + "string": "French-English), s tgt will be higher in general, and so will the gap between s tgt (y, y M ) and s tgt (y,ŷ M )." + }, + { + "id": 36, + "string": "However this does not necessarily mean that attacks on this language pair are more effective than attacks on a \"difficult\" language pair (e.g." + }, + { + "id": 37, + "string": "Czech-English) where s tgt is usually smaller." + }, + { + "id": 38, + "string": "We recommend that both s src and d tgt be reported when presenting adversarial attack results." + }, + { + "id": 39, + "string": "However, in some cases where a single number is needed, we suggest reporting the attack's success S := s src + d tgt ." + }, + { + "id": 40, + "string": "The interpretation is simple: S > 1 ⇔ d tgt > 1 − s src , which means that the attack has destroyed the target meaning (d tgt ) more than it has destroyed the source meaning (1 − s src )." + }, + { + "id": 41, + "string": "Importantly, this framework can be extended beyond strictly meaning-preserving attacks." 
+ }, + { + "id": 42, + "string": "For example, for targeted keyword introduction attacks (Cheng et al., 2018; Ebrahimi et al., 2018a) , the same evaluation framework can be used if s tgt (resp." + }, + { + "id": 43, + "string": "s src ) is modified to account for the presence (resp." + }, + { + "id": 44, + "string": "absence) of the keyword (or its translation in the source)." + }, + { + "id": 45, + "string": "Similarly this can be extended to other tasks by adapting s tgt (e.g." + }, + { + "id": 46, + "string": "for classification one would use the zero-one loss, and adapt the success threshold)." + }, + { + "id": 47, + "string": "Similarity Metrics Throughout §2.1, we have not given an exact description of the semantic similarity scores s src and s tgt ." + }, + { + "id": 48, + "string": "Indeed, automatically evaluating the semantic similarity between two sentences is an open area of research and it makes sense to decouple the definition of adversarial examples from the specific method used to measure this similarity." + }, + { + "id": 49, + "string": "In this section, we will discuss manual and automatic metrics that may be used to calculate it." + }, + { + "id": 50, + "string": "Human Judgment Judgment by speakers of the language of interest is the de facto gold standard metric for semantic similarity." + }, + { + "id": 51, + "string": "Specific criteria such as adequacy/fluency (Ma and Cieri, 2006) , acceptability (Goto et al., 2013) , and 6-level semantic similarity (Cer et al., 2017) have been used in evaluations of MT and sentence embedding methods." + }, + { + "id": 52, + "string": "In the context of adversarial attacks, we propose the following 6-level evaluation scheme, which is motivated by previous measures, but designed to be (1) symmetric, like Cer et al." + }, + { + "id": 53, + "string": "(2017) , (2) and largely considers meaning preservation but at the very low and high levels considers fluency of the output 4 , like Goto et al." 
+ }, + { + "id": 54, + "string": "(2013) : How would you rate the similarity between the meaning of these two sentences?" + }, + { + "id": 55, + "string": "0." + }, + { + "id": 56, + "string": "The meaning is completely different or one of the sentences is meaningless 1." + }, + { + "id": 57, + "string": "The topic is the same but the meaning is different 2." + }, + { + "id": 58, + "string": "Some key information is different 3." + }, + { + "id": 59, + "string": "The key information is the same but the details differ 4." + }, + { + "id": 60, + "string": "Meaning is essentially equal but some expressions are unnatural 5." + }, + { + "id": 61, + "string": "Meaning is essentially equal and the two sentences are well-formed English a a Or the language of interest." + }, + { + "id": 62, + "string": "4 This is important to rule out nonsensical sentences and distinguish between clean and \"noisy\" paraphrases (e.g." + }, + { + "id": 63, + "string": "typos, non-native speech." + }, + { + "id": 64, + "string": "." + }, + { + "id": 65, + "string": "." + }, + { + "id": 66, + "string": ")." + }, + { + "id": 67, + "string": "We did not give annotators additional instruction specific to typos." + }, + { + "id": 68, + "string": "Automatic Metrics Unfortunately, human evaluation is expensive, slow and sometimes difficult to obtain, for example in the case of low-resource languages." + }, + { + "id": 69, + "string": "This makes automatic metrics that do not require human intervention appealing for experimental research." + }, + { + "id": 70, + "string": "This section describes 3 evaluation metrics commonly used as alternatives to human evaluation, in particular to evaluate translation models." + }, + { + "id": 71, + "string": "5 BLEU: (Papineni et al., 2002) is an automatic metric based on n-gram precision coupled with a penalty for shorter sentences." 
+ }, + { + "id": 72, + "string": "It relies on exact word-level matches and therefore cannot detect synonyms or morphological variations." + }, + { + "id": 73, + "string": "METEOR: (Denkowski and Lavie, 2014) first estimates alignment between the two sentences and then computes unigram F-score (biased towards recall) weighted by a penalty for longer sentences." + }, + { + "id": 74, + "string": "Importantly, METEOR uses stemming, synonymy and paraphrasing information to perform alignments." + }, + { + "id": 75, + "string": "On the downside, it requires language specific resources." + }, + { + "id": 76, + "string": "chrF: (Popović, 2015) is based on the character n-gram F-score." + }, + { + "id": 77, + "string": "In particular we will use the chrF2 score (based on the F2-score -recall is given more importance), following the recommendations from Popović (2016) ." + }, + { + "id": 78, + "string": "By operating on a sub-word level, it can reflect the semantic similarity between different morphological inflections of one word (for instance), without requiring language-specific knowledge which makes it a good one-size-fits-all alternative." + }, + { + "id": 79, + "string": "Because multiple possible alternatives exist, it is important to know which is the best stand-in for human evaluation." + }, + { + "id": 80, + "string": "To elucidate this, we will compare these metrics to human judgment in terms of Pearson correlation coefficient on outputs resulting from a variety of attacks in §4.2." + }, + { + "id": 81, + "string": "Gradient-Based Adversarial Attacks In this section, we overview the adversarial attacks we will be considering in the rest of this paper." + }, + { + "id": 82, + "string": "Attack Paradigm We perform gradient-based attacks that replace one word in the sentence so as to maximize an adversarial loss function L adv , similar to the substitution attacks proposed in (Ebrahimi et al., 2018b) ." 
+ }, + { + "id": 83, + "string": "Original Pourquoi faire cela ?" + }, + { + "id": 84, + "string": "English gloss Why do this?" + }, + { + "id": 85, + "string": "Unconstrained construisant (English: building) faire cela ?" + }, + { + "id": 86, + "string": "kNN interrogez (English: interrogate) faire cela ?" + }, + { + "id": 87, + "string": "CharSwap Puorquoi (typo) faire cela ?" + }, + { + "id": 88, + "string": "Original Si seulement je pouvais me muscler aussi rapidement." + }, + { + "id": 89, + "string": "English gloss If only I could build my muscle this fast." + }, + { + "id": 90, + "string": "Unconstrained Si seulement je pouvais me muscler etc rapidement." + }, + { + "id": 91, + "string": "kNN Si seulement je pouvais me muscler plsu (typo for \"more\") rapidement." + }, + { + "id": 92, + "string": "CharSwap Si seulement je pouvais me muscler asusi (typo) rapidement." + }, + { + "id": 93, + "string": "General Approach Precisely, for a word-based translation model M 6 , and given an input sentence w_1, …, w_n," + }, + { + "id": 94, + "string": "." + }, + { + "id": 95, + "string": "." + }, + { + "id": 96, + "string": "we find the position i* and word ŵ satisfying the following optimization problem: argmax_{1 ≤ i ≤ n, ŵ ∈ V} L_adv(w_1, …, w_{i−1}, ŵ, w_{i+1}, …, w_n) (2)" + }, + { + "id": 97, + "string": "." + }, + { + "id": 98, + "string": "." + }, + { + "id": 99, + "string": "." + }, + { + "id": 100, + "string": "." + }, + { + "id": 101, + "string": "." + }, + { + "id": 102, + "string": "where L adv is a differentiable function which represents our adversarial objective." + }, + { + "id": 103, + "string": "Using the first order approximation of L adv around the original word vectors w_1, …, w_n 7 ," + }, + { + "id": 104, + "string": "." + }, + { + "id": 105, + "string": "."
+ }, + { + "id": 106, + "string": "this can be derived to be equivalent to optimizing argmax_{1 ≤ i ≤ n, ŵ ∈ V} [ŵ − w_i]^T ∇_{w_i} L_adv (3) The above optimization problem can be solved by brute-force in O(n|V|) space complexity, whereas the time complexity is bottlenecked by a |V| × d times n × d matrix multiplication, which is not more computationally expensive than computing logits during the forward pass of the model." + }, + { + "id": 107, + "string": "Overall, this naive approach is sufficiently fast to be conducive to adversarial training." + }, + { + "id": 108, + "string": "We also found that the attacks benefited from normalizing the gradient by taking its sign." + }, + { + "id": 109, + "string": "Extending this approach to finding the optimal perturbations for more than 1 substitution would require exhaustively searching over all possible combinations." + }, + { + "id": 110, + "string": "However, previous work (Ebrahimi et al., 2018a) suggests that greedy search is a good enough approximation." + }, + { + "id": 111, + "string": "6 Note that this formulation is also valid for character-based models (see Ebrahimi et al. (2018a)) and subword-based models." + }, + { + "id": 112, + "string": "For subword-based models, additional difficulty would be introduced due to changes to the input resulting in different subword segmentations." + }, + { + "id": 113, + "string": "This poses an interesting challenge that is beyond the scope of the current work." + }, + { + "id": 114, + "string": "7 More generally we will use the bold w when talking about the embedding vector of word w." + }, + { + "id": 115, + "string": "The Adversarial Loss L adv We want to find an adversarial input x̃ such that, assuming that the model has produced the correct output y_1, …, y_{t−1}" + }, + { + "id": 116, + "string": "." + }, + { + "id": 117, + "string": "."
+ }, + { + "id": 118, + "string": "up to step t − 1 during decoding, the probability that the model makes an error at the next step t is maximized." + }, + { + "id": 119, + "string": "In the log-semiring, this translates into the following loss function: L_adv(x̃, y) = Σ_{t=1}^{|y|} log(1 − p(y_t | x̃, y_1, …, y_{t−1})) (4)" + }, + { + "id": 120, + "string": "." + }, + { + "id": 121, + "string": "." + }, + { + "id": 122, + "string": "Enforcing Semantically Similar Adversarial Inputs In contrast to previous methods, which don't consider meaning preservation, we propose simple modifications of the approach presented in §3.1 to create adversarial perturbations at the word level that are more likely to preserve meaning." + }, + { + "id": 123, + "string": "The basic idea is to restrict the possible word substitutions to similar words." + }, + { + "id": 124, + "string": "We compare two sets of constraints: kNN: This constraint enforces that the word be replaced only with one of its 10 nearest neighbors in the source embedding space." + }, + { + "id": 125, + "string": "This has two effects: first, the replacement will likely be semantically related to the original word (if words close in the embedding space are indeed semantically related, as hinted by Table 1 )." + }, + { + "id": 126, + "string": "Second, it ensures that the replacement's word vector is close enough to the original word vector that the first order assumption is more likely to be satisfied." + }, + { + "id": 127, + "string": "CharSwap: This constraint requires that the substituted words must be obtained by swapping characters." + }, + { + "id": 128, + "string": "Word-internal character swaps have been shown to not affect human readers greatly (McCusker et al., 1981) , hence making them likely to be meaning-preserving."
+ }, + { + "id": 129, + "string": "Moreover we add the additional constraint that the substitution must not be in the vocabulary, which will likely be particularly meaning-destroying on the target side for the word-based models we test here." + }, + { + "id": 130, + "string": "In such cases where word-internal character swaps are not possible or can't produce out-of-vocabulary (OOV) words, we resort to the naive strategy of repeating the last character of the word." + }, + { + "id": 131, + "string": "The exact procedure used to produce this kind of perturbation is described in Appendix A.1." + }, + { + "id": 132, + "string": "Note that for a word-based model, every OOV will look the same (a special token), however the choice of OOV will still have an influence on the output of the model because we use unk-replacement." + }, + { + "id": 133, + "string": "In contrast, we refer to the base attack without constraints as Unconstrained henceforth." + }, + { + "id": 134, + "string": "Table 1 gives qualitative examples of the kind of perturbations generated under the different constraints." + }, + { + "id": 135, + "string": "For subword-based models, we apply the same procedures at the subword-level on the original segmentation." + }, + { + "id": 136, + "string": "We then de-segment and re-segment the resulting sentence (because changes at the subword or character levels are likely to change the segmentation of the resulting sentence)." + }, + { + "id": 137, + "string": "Experiments Our experiments serve two purposes." + }, + { + "id": 138, + "string": "First, we examine our proposed framework of evaluating adversarial attacks ( §2), and also elucidate which automatic metrics correlate better with human judgment for the purpose of evaluating adversarial attacks ( §4.2)."
+ }, + { + "id": 139, + "string": "Second, we use this evaluation framework to compare various adversarial attacks and demonstrate that adversarial attacks that are explicitly constrained to preserve meaning receive better assessment scores ( §4.3)." + }, + { + "id": 140, + "string": "Experimental setting Data: Following previous work on adversarial examples for seq2seq models (Belinkov and Bisk, 2018; Ebrahimi et al., 2018a) , we perform all experiments on the IWSLT2016 dataset (Cettolo et al., 2016) in the {French,German,Czech}→English directions (fr-en, de-en and cs-en)." + }, + { + "id": 141, + "string": "We compile all previous IWSLT test sets before 2015 as validation data, and keep the 2015 and 2016 test sets as test data." + }, + { + "id": 142, + "string": "The data is tokenized with the Moses tokenizer (Koehn et al., 2007) ." + }, + { + "id": 143, + "string": "The exact data statistics can be found in Appendix A.2." + }, + { + "id": 144, + "string": "MT Models: We perform experiments with two common neural machine translation (NMT) models." + }, + { + "id": 145, + "string": "The first is an LSTM-based encoder-decoder architecture with attention (Luong et al., 2015) ." + }, + { + "id": 146, + "string": "It uses 2-layer encoders and decoders, and dot-product attention." + }, + { + "id": 147, + "string": "We set the word embedding dimension to 300 and all others to 500." + }, + { + "id": 148, + "string": "The second model is a self-attentional Transformer (Vaswani et al., 2017) , with 6 1024-dimensional encoder and decoder layers and 512-dimensional word embeddings." + }, + { + "id": 149, + "string": "Both the models are trained with Adam (Kingma and Ba, 2014), dropout (Srivastava et al., 2014) of probability 0.3 and label smoothing (Szegedy et al., 2016) with value 0.1." + }, + { + "id": 150, + "string": "We experiment with both word-based models (vocabulary size fixed at 40k) and subword-based models (BPE (Sennrich et al., 2016) with 30k operations)."
+ }, + { + "id": 151, + "string": "For word-based models, we perform unk replacement, replacing tokens in the translated sentences with the source words with the highest attention value during inference." + }, + { + "id": 152, + "string": "The full experimental setup and source code are available at https://github." + }, + { + "id": 153, + "string": "com/pmichel31415/translate/tree/ paul/pytorch_translate/research/ adversarial/experiments." + }, + { + "id": 154, + "string": "Automatic Metric Implementations: To evaluate both sentence and corpus level BLEU score, we first de-tokenize the output and use sacreBLEU 8 (Post, 2018) with its internal intl tokenization, to keep BLEU scores agnostic to tokenization." + }, + { + "id": 155, + "string": "We compute METEOR using the official implementation 9 ." + }, + { + "id": 156, + "string": "ChrF is reported with the sacreBLEU implementation on detokenized text with default parameters." + }, + { + "id": 157, + "string": "A toolkit implementing the evaluation framework described in §2.1 for these metrics is released at https://github." + }, + { + "id": 158, + "string": "com/pmichel31415/teapot-nlp." + }, + { + "id": 159, + "string": "Correlation of Automatic Metrics with Human Judgment We first examine which of the automatic metrics listed in §2.2 correlates most with human judgment for our adversarial attacks." + }, + { + "id": 160, + "string": "For this experiment, we restrict the scope to the case of the fr-en language pair. These sentences are sent to English and French speaking annotators to be rated according to the guidelines described in §2.2.1." + }, + { + "id": 161, + "string": "Each sample (a pair of sentences) is rated by two independent evaluators." + }, + { + "id": 162, + "string": "If the two ratings differ, the sample is sent to a third rater (an auditor and subject matter expert) who makes the final decision."
+ }, + { + "id": 163, + "string": "Finally, we compare the human results to each automatic metric with Pearson's correlation coefficient." + }, + { + "id": 164, + "string": "The correlations are reported in Table 3 ." + }, + { + "id": 165, + "string": "As evidenced by the results, chrF exhibits higher correlation with human judgment, followed by METEOR and BLEU." + }, + { + "id": 166, + "string": "This is true both on the source side (x vs x̃) and on the target side (y vs ŷ_M)." + }, + { + "id": 167, + "string": "Table 3: Correlation of automatic metrics to human judgment of adversarial source and target sentences (BLEU/METEOR/chrF: French 0.415/0.440/0.586*; English 0.357/0.478*/0.497). \"" + }, + { + "id": 168, + "string": "* \" indicates that the correlation is significantly better than the next-best one." + }, + { + "id": 169, + "string": "We evaluate the statistical significance of this result using a paired bootstrap test for p < 0.01." + }, + { + "id": 170, + "string": "Notably we find that chrF is significantly better than METEOR in French but not in English." + }, + { + "id": 171, + "string": "This is not too unexpected because METEOR has access to more language-dependent resources in English (specifically synonym information) and thereby can make more informed matches of these synonymous words and phrases." + }, + { + "id": 172, + "string": "Moreover the French source side contains more \"character-level\" errors (from CharSwap attacks) which are not picked up well by word-based metrics like BLEU and METEOR." + }, + { + "id": 173, + "string": "For a breakdown of the correlation coefficients according to number of perturbations and type of constraints, we refer to Appendix A.3." + }, + { + "id": 174, + "string": "Thus, in the following, we report attack results both in terms of chrF in the source (s_src) and relative decrease in chrF (RDchrF) in the target (d_tgt)." + }, + { + "id": 175, + "string": "(Figure 1: Graphical representation of the results in Table 2 for word-based models."
+ }, + { + "id": 176, + "string": "High source chrF and target RDchrF (upper-right corner) indicates a good attack.)" + }, + { + "id": 177, + "string": "Attack Results We can now compare attacks under the three constraints Unconstrained, kNN and CharSwap and draw conclusions on their capacity to preserve meaning in the source and destroy it in the target." + }, + { + "id": 178, + "string": "Attacks are conducted on the validation set using the approach described in §3.1 with 3 substitutions (this means that each adversarial input is at edit distance at most 3 from the original input)." + }, + { + "id": 179, + "string": "Results (on a scale of 0 to 100 for readability) are reported in Table 2 for both word- and subword-based LSTM and Transformer models." + }, + { + "id": 180, + "string": "To give a better idea of how the different variables (language pair, model, attack) affect performance, we give a graphical representation of these same results in Figure 1 for the word-based models." + }, + { + "id": 181, + "string": "The rest of this section discusses the implication of these results." + }, + { + "id": 182, + "string": "Source chrF Highlights the Effect of Adding Constraints: Comparing the kNN and CharSwap rows to Unconstrained in the \"source\" sections of Table 2 clearly shows that constrained attacks have a positive effect on meaning preservation." + }, + { + "id": 183, + "string": "Beyond validating our assumptions from §3.2, this shows that source chrF is useful to carry out the comparison in the first place 10 ." + }, + { + "id": 184, + "string": "To give a point of reference, results from the manual evaluation carried out in §4.2 show that 90% of the French sentence pairs to which humans gave a score of 4 or 5 in semantic similarity have a chrF > 78." + }, + { + "id": 185, + "string": "10 It can be argued that using chrF gives an advantage to CharSwap over kNN for source preservation (as opposed to METEOR for example)."
+ }, + { + "id": 186, + "string": "We find that this is the case for Czech and German (source METEOR is higher for kNN) but not French." + }, + { + "id": 187, + "string": "Moreover we find (see A.3) that chrF correlates better with human judgment even for kNN." + }, + { + "id": 188, + "string": "Different Architectures are not Equal in the Face of Adversity: Inspection of the target-side results yields several interesting observations." + }, + { + "id": 189, + "string": "First, the high RDchrF of CharSwap for word-based models is yet another indication of their known shortcomings when presented with words out of their training vocabulary, even with unk replacement." + }, + { + "id": 190, + "string": "Second, and perhaps more interestingly, Transformer models appear to be less robust to small embedding perturbations (kNN attacks) compared to LSTMs." + }, + { + "id": 191, + "string": "Although the exploration of the exact reasons for this phenomenon is beyond the scope of this work, this is a good example that RDchrF can shed light on the different behavior of different architectures when confronted with adversarial input." + }, + { + "id": 192, + "string": "Overall, we find that the CharSwap constraint is the only one that consistently produces attacks with > 1 average success (as defined in Section 2.1) according to Table 2 ." + }, + { + "id": 193, + "string": "Table 4 contains two qualitative examples of this attack on the LSTM model in fr-en." + }, + { + "id": 194, + "string": "Adversarial Training with Meaning-Preserving Attacks Adversarial Training Adversarial training (Goodfellow et al., 2014) augments the training data with adversarial examples."
+ }, + { + "id": 195, + "string": "Formally, in place of the negative log likelihood (NLL) objective on a sample x, y, L(x, y) = NLL(x, y), the loss function is replaced with an interpolation of the NLL of the original sample x, y and an adversarial sample x̃, y: L′(x, y) = (1 − α) NLL(x, y) + α NLL(x̃, y) (5) Ebrahimi et al." + }, + { + "id": 196, + "string": "(2018a) suggest that while adversarial training improves robustness to adversarial attacks, it can be detrimental to test performance on non-adversarial input." + }, + { + "id": 197, + "string": "We investigate whether this is still the case when adversarial attacks are largely meaning-preserving." + }, + { + "id": 198, + "string": "In our experiments, we generate x̃ by applying 3 perturbations on the fly at each training step." + }, + { + "id": 199, + "string": "To maintain training speed we do not solve Equation (2) iteratively but in one shot by replacing the argmax by top-3." + }, + { + "id": 200, + "string": "Although this is less exact than iterating, this makes adversarial training time less than 2× slower than normal training." + }, + { + "id": 201, + "string": "We perform adversarial training with perturbations without constraints (Unconstrained-adv) and with the CharSwap constraint (CharSwap-adv)." + }, + { + "id": 202, + "string": "All experiments are conducted with the word-based LSTM model." + }, + { + "id": 203, + "string": "Results Test performance on non-adversarial input is reported in Table 5 ." + }, + { + "id": 204, + "string": "In keeping with the rest of the paper, we primarily report chrF results, but also show the standard BLEU as well." + }, + { + "id": 205, + "string": "We observe that when α = 1.0, i.e." + }, + { + "id": 206, + "string": "the model only sees the perturbed input during training 11 , the Unconstrained-adv model suffers a drop in test performance, whereas CharSwap-adv's performance is on par with the original."
+ }, + { + "id": 207, + "string": "This is likely due to training samples where y is not an acceptable translation of x̃, introduced by the lack of constraints." + }, + { + "id": 208, + "string": "This effect disappears when α = 0.5 because the model sees the original samples as well." + }, + { + "id": 209, + "string": "Not unexpectedly, Table 6 indicates that CharSwap-adv is more robust to CharSwap-constrained attacks for both values of α, with 1.0 giving the best results." + }, + { + "id": 210, + "string": "On the other hand, Unconstrained-adv is similarly or more vulnerable to these attacks than the baseline." + }, + { + "id": 211, + "string": "Hence, we can safely conclude that adversarial training with CharSwap attacks improves robustness while not impacting test performance as much as unconstrained attacks." + }, + { + "id": 212, + "string": "Related work Following seminal work on adversarial attacks by Szegedy et al." + }, + { + "id": 213, + "string": "(2013) , Goodfellow et al." + }, + { + "id": 214, + "string": "(2014) introduced gradient-based attacks and adversarial training." + }, + { + "id": 215, + "string": "Since then, a variety of attack (Moosavi-Dezfooli et al., 2016) and defense (Cissé et al., 2017; Kolter and Wong, 2017) mechanisms have been proposed." + }, + { + "id": 216, + "string": "Adversarial examples for NLP specifically have seen attacks on sentiment (Samanta and Mehta, 2017; Ebrahimi et al., 2018b) , malware (Grosse et al., 2016) , gender (Reddy and Knight, 2016) or toxicity (Hosseini et al., 2017) classification to cite a few." + }, + { + "id": 217, + "string": "In MT, methods have been proposed to attack word-based (Zhao et al., 2018; Cheng et al., 2018) and character-based (Belinkov and Bisk, 2018; Ebrahimi et al., 2018a) models." + }, + { + "id": 218, + "string": "However these works side-step the question of meaning preservation in the source: they mostly focus on target-side evaluation."
+ }, + { + "id": 219, + "string": "Finally there is work centered around meaning-preserving adversarial attacks for NLP via paraphrase generation (Iyyer et al., 2018) or rule-based approaches (Jia and Liang, 2017; Ribeiro et al., 2018; Naik et al., 2018; Alzantot et al., 2018) ." + }, + { + "id": 220, + "string": "However the proposed attacks are highly engineered and focused on English." + }, + { + "id": 221, + "string": "Conclusion This paper highlights the importance of performing meaning-preserving adversarial perturbations for NLP models (with a focus on seq2seq)." + }, + { + "id": 222, + "string": "We proposed a general evaluation framework for adversarial perturbations and compared various automatic metrics as proxies for human judgment to instantiate this framework." + }, + { + "id": 223, + "string": "We then confirmed that, in the context of MT, \"naive\" attacks do not preserve meaning in general, and proposed alternatives to remedy this issue." + }, + { + "id": 224, + "string": "Finally, we have shown the utility of adversarial training in this paradigm." + }, + { + "id": 225, + "string": "We hope that this helps future work in this area of research to evaluate meaning conservation more consistently." + }, + { + "id": 226, + "string": "A Supplemental Material A.1 Generating OOV Replacements with Internal Character Swaps We use the following snippet to produce an OOV word from an existing word: def make_oov(word, vocab, max_scrambling): \"\"\"Modify a word to make it OOV (while keeping the meaning)\"\"\" # If the word has >3 letters, try scrambling them L = len( … A.2 Data Statistics (Table 7): fr-en 220.4k train / 6,824 valid / 2,213 test; de-en 196.9k / 11,825 / 2,213; cs-en 114.4k / 5,716 / 2,213. A.3 Breakdown of Correlation with Human Judgement We provide a breakdown of the correlation coefficients of automatic metrics with human judgment for source-side meaning-preservation, both in terms of number of perturbed words (Table 8) and constraint (Table 9)."
+ }, + { + "id": 227, + "string": "While those coefficients are computed on a much smaller sample size, and their differences are not all statistically significant with p < 0.01, they exhibit the same trend as the results from Table 9 : Correlation of automatic metrics to human judgment of semantic similarity between original and adversarial source sentences, broken down by type of constraint on the perturbation. \"" + }, + { + "id": 228, + "string": "* \" indicates that the correlation is significantly better than the next-best one." + }, + { + "id": 229, + "string": "In particular Table 8 shows that the good correlation of chrF with human judgment is not only due to the ability to distinguish between different number of edits." + } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 22 + }, + { + "section": "A Framework for Evaluating Adversarial Attacks", + "n": "2", + "start": 23, + "end": 27 + }, + { + "section": "The Adversarial Trade-off", + "n": "2.1", + "start": 28, + "end": 46 + }, + { + "section": "Similarity Metrics", + "n": "2.2", + "start": 47, + "end": 49 + }, + { + "section": "Human Judgment", + "n": "2.2.1", + "start": 50, + "end": 67 + }, + { + "section": "Automatic Metrics", + "n": "2.2.2", + "start": 68, + "end": 80 + }, + { + "section": "Gradient-Based Adversarial Attacks", + "n": "3", + "start": 81, + "end": 81 + }, + { + "section": "Attack Paradigm", + "n": "3.1", + "start": 82, + "end": 92 + }, + { + "section": "General Approach", + "n": "3.1.1", + "start": 93, + "end": 114 + }, + { + "section": "The Adversarial Loss L adv", + "n": "3.1.2", + "start": 115, + "end": 121 + }, + { + "section": "Enforcing Semantically Similar Adversarial Inputs", + "n": "3.2", + "start": 122, + "end": 136 + }, + { + "section": "Experiments", + "n": "4", + "start": 137, + "end": 139 + }, + { + "section": "Experimental setting", + "n": "4.1", + "start": 140, + "end": 158 + }, + { + "section": "Correlation of Automatic Metrics 
with Human Judgment", + "n": "4.2", + "start": 159, + "end": 176 + }, + { + "section": "Attack Results", + "n": "4.3", + "start": 177, + "end": 193 + }, + { + "section": "Adversarial Training", + "n": "5.1", + "start": 194, + "end": 202 + }, + { + "section": "Results", + "n": "5.2", + "start": 203, + "end": 211 + }, + { + "section": "Related work", + "n": "6", + "start": 212, + "end": 220 + }, + { + "section": "Conclusion", + "n": "7", + "start": 221, + "end": 229 + } + ], + "figures": [ + { + "filename": "../figure/image/967-Table2-1.png", + "caption": "Table 2: Target RDchrF and source chrF scores for all the attacks on all our models (word- and subword-based LSTM and Transformer).", + "page": 5, + "bbox": { + "x1": 80.64, + "x2": 517.4399999999999, + "y1": 64.8, + "y2": 336.96 + } + }, + { + "filename": "../figure/image/967-Table3-1.png", + "caption": "Table 3: Correlation of automatic metrics to human judgment of adversarial source and target sentences. “∗” indicates that the correlation is significantly better than the next-best one.", + "page": 5, + "bbox": { + "x1": 306.71999999999997, + "x2": 501.12, + "y1": 392.64, + "y2": 435.35999999999996 + } + }, + { + "filename": "../figure/image/967-Table4-1.png", + "caption": "Table 4: Example of CharSwap attacks on the fr-en LSTM. The first example is a successful attack (high source chrF and target RDchrF) whereas the second is not.", + "page": 6, + "bbox": { + "x1": 306.71999999999997, + "x2": 527.04, + "y1": 283.68, + "y2": 462.24 + } + }, + { + "filename": "../figure/image/967-Figure1-1.png", + "caption": "Figure 1: Graphical representation of the results in Table 2 for word-based models. 
High source chrF and target RDchrF (upper-right corner) indicates a good attack.", + "page": 6, + "bbox": { + "x1": 116.64, + "x2": 481.44, + "y1": 61.44, + "y2": 231.35999999999999 + } + }, + { + "filename": "../figure/image/967-Table6-1.png", + "caption": "Table 6: Robustness to CharSwap attacks on the validation set with/without adversarial training (RDchrF). Lower is better.", + "page": 7, + "bbox": { + "x1": 306.71999999999997, + "x2": 537.12, + "y1": 301.44, + "y2": 411.35999999999996 + } + }, + { + "filename": "../figure/image/967-Table5-1.png", + "caption": "Table 5: chrF (BLEU) scores on the original test set before/after adversarial training of the word-based LSTM model.", + "page": 7, + "bbox": { + "x1": 306.71999999999997, + "x2": 537.12, + "y1": 64.8, + "y2": 241.92 + } + }, + { + "filename": "../figure/image/967-Table1-1.png", + "caption": "Table 1: Examples of different adversarial inputs. The substituted word is highlighted.", + "page": 3, + "bbox": { + "x1": 96.96, + "x2": 498.24, + "y1": 62.879999999999995, + "y2": 205.92 + } + }, + { + "filename": "../figure/image/967-Table7-1.png", + "caption": "Table 7: IWSLT2016 data statistics.", + "page": 11, + "bbox": { + "x1": 97.92, + "x2": 264.0, + "y1": 517.4399999999999, + "y2": 571.1999999999999 + } + }, + { + "filename": "../figure/image/967-Table8-1.png", + "caption": "Table 8: Correlation of automatic metrics to human judgment of semantic similarity between original and adversarial source sentences, broken down by number of perturbed words. “∗” indicates that the correlation is significantly better than the next-best one.", + "page": 11, + "bbox": { + "x1": 306.71999999999997, + "x2": 487.2, + "y1": 64.8, + "y2": 120.0 + } + }, + { + "filename": "../figure/image/967-Table9-1.png", + "caption": "Table 9: Correlation of automatic metrics to human judgment of semantic similarity between original and adversarial source sentences, broken down by type of constraint on the perturbation. 
“∗” indicates that the correlation is significantly better than the next-best one.", + "page": 11, + "bbox": { + "x1": 306.71999999999997, + "x2": 517.4399999999999, + "y1": 203.51999999999998, + "y2": 259.2 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-5" + }, + { + "slides": { + "0": { + "title": "Do we really need context", + "text": [ + "It has 48 columns.", + "What does it refer to?", + "Possible translations into Russian:", + "48 . (masculine or neuter)", + "What do columns mean?", + "Under the cathedral lies the antique chapel." + ], + "page_nums": [ + 1, + 2, + 3, + 4, + 5, + 6, + 7, + 8 + ], + "images": [] + }, + "1": { + "title": "Recap antecedent and anaphora resolution", + "text": [ + "Under the cathedral lies the antique chapel. It has 48 columns.", + "An antecedent is an expression that gives its meaning to", + "a proform (pronoun, pro-verb, pro-adverb, etc.)", + "Anaphora resolution is the problem of resolving references to earlier", + "or later items in the discourse." + ], + "page_nums": [ + 9 + ], + "images": [] + }, + "2": { + "title": "Context in Machine Translation", + "text": [ + "focused on handling specific phenomena", + "directly provide context to an NMT system at training time", + "what kinds of discourse phenomena are successfully handled", + "how they are modeled" + ], + "page_nums": [ + 10, + 11, + 12 + ], + "images": [] + }, + "3": { + "title": "Plan", + "text": [ + "we introduce a context-aware neural model, which is effective", + "an d has a sufficiently simple and interpretable interface between Model Archit cture", + "the context and the rest of the translation model", + "we analyze the flow of information from the context and identify", + "Overall performance pr onoun translation as the key phenomenon captured by the", + "by comparing to automatically predicted or human-annotated Analys s", + "coreference relations, we observe that the model implicitly" + ], + "page_nums": [ + 13 + ], + "images": [] + }, + "5": { + "title": 
"Context aware model architecture", + "text": [ + "start with the Transformer [Vaswani et al, 2018]", + "incorporate context information on the encoder side", + "use a separate encoder for context", + "share first N-1 layers of source and context encoders", + "the last layer incorporates contextual information" + ], + "page_nums": [ + 16, + 17, + 18 + ], + "images": [ + "figure/image/969-Figure1-1.png" + ] + }, + "8": { + "title": "Our model different types of context", + "text": [ + "Next sentence does not appear", + "previous sentence Performance drops for a random", + "Model is robust towards being", + "shown a random context", + "(the only significant at p<0.01 difference is with the best model;", + "differences between other results are not significant)" + ], + "page_nums": [ + 21 + ], + "images": [] + }, + "10": { + "title": "What do we mean by attention to context", + "text": [ + "attention from source to context", + "mean over heads of per-head attention", + "take sum over context words", + "(excluding , and punctuation)" + ], + "page_nums": [ + 24, + 25 + ], + "images": [] + }, + "11": { + "title": "Top words influenced by context", + "text": [ + "it Need to know gender, because", + "yours verbs must agree in gender with I", + "(in past tense) yes", + "yes Many of these words appear at", + "i sentence initial position.", + "you Maybe this is all that matters?", + "word pos word pos", + "Only positions i after the first m" + ], + "page_nums": [ + 26, + 27, + 28, + 29, + 30, + 31 + ], + "images": [ + "figure/image/969-Table3-1.png" + ] + }, + "12": { + "title": "Dependence on sentence length", + "text": [ + "high attention to context" + ], + "page_nums": [ + 33, + 34, + 35 + ], + "images": [] + }, + "18": { + "title": "Ambiguous it noun antecedent", + "text": [ + "masculine feminine neuter plural" + ], + "page_nums": [ + 41 + ], + "images": [] + }, + "19": { + "title": "It with noun antecedent example", + "text": [ + "It was locked up in the hold with 20 
other boxes of supplies.", + "Possible translations into Russian:", + "You left money unattended?" + ], + "page_nums": [ + 42, + 43 + ], + "images": [] + }, + "21": { + "title": "Hypothesis", + "text": [ + "Large improvements in BLEU on test sets with pronouns", + "co-referent with an expression in context", + "Attention mechanism Latent anaphora resolution" + ], + "page_nums": [ + 45 + ], + "images": [] + }, + "22": { + "title": "How to test the hypothesis agreement with CoreNLP", + "text": [ + "Find an antecedent noun phrase (using CoreNLP)", + "Pick examples where the noun phrase contains a single noun", + "Pick examples with several nouns in context", + "Identify the token with the largest attention weight (excluding punctuation,", + "If the token falls within the antecedent span, then its an agreement" + ], + "page_nums": [ + 46, + 47 + ], + "images": [] + }, + "23": { + "title": "Does the model learn anaphora", + "text": [ + "or just some simple heuristic?" + ], + "page_nums": [ + 48 + ], + "images": [] + }, + "24": { + "title": "Agreement with CoreNLP predictions", + "text": [ + "random first last attention agreement of attention is the", + "first noun is the best heuristic" + ], + "page_nums": [ + 49, + 50 + ], + "images": [] + }, + "25": { + "title": "Compared to human annotations for it", + "text": [ + "pick 500 examples from the", + "ask human annotators to mark", + "pick examples where an", + "antecedent is a noun phrase", + "calculate the agreement with" + ], + "page_nums": [ + 51 + ], + "images": [] + }, + "26": { + "title": "Attention map examples", + "text": [ + "There was a time I would", + "have lost my heart to a", + "And you, no doubt, would" + ], + "page_nums": [ + 52, + 53, + 54 + ], + "images": [ + "figure/image/969-Figure5-1.png" + ] + } + }, + "paper_title": "Context-Aware Neural Machine Translation Learns Anaphora Resolution", + "paper_id": "969", + "paper": { + "title": "Context-Aware Neural Machine Translation Learns Anaphora 
Resolution", + "abstract": "Standard machine translation systems process sentences in isolation and hence ignore extra-sentential information, even though extended context can both prevent mistakes in ambiguous cases and improve translation coherence. We introduce a context-aware neural machine translation model designed in such a way that the flow of information from the extended context to the translation model can be controlled and analyzed. We experiment with an English-Russian subtitles dataset, and observe that much of what is captured by our model deals with improving pronoun translation. We measure correspondences between induced attention distributions and coreference relations and observe that the model implicitly captures anaphora. This is consistent with gains for sentences where pronouns need to be gendered in translation. Besides improvements in anaphoric cases, the model also improves in overall BLEU, both over its context-agnostic version (+0.7) and over simple concatenation of the context and source sentences (+0.6).", + "text": [ + { + "id": 0, + "string": "Introduction It has long been argued that handling discourse phenomena is important in translation (Mitkov, 1999; Hardmeier, 2012) ." + }, + { + "id": 1, + "string": "Using extended context, beyond the single source sentence, should in principle be beneficial in ambiguous cases and also ensure that generated translations are coherent." + }, + { + "id": 2, + "string": "Nevertheless, machine translation systems typically ignore discourse phenomena and translate sentences in isolation."
+ }, + { + "id": 3, + "string": "Earlier research on this topic focused on handling specific phenomena, such as translating pronouns (Le Nagard and Koehn, 2010; Hardmeier and Federico, 2010; Hardmeier et al., 2015) , discourse connectives (Meyer et al., 2012) , verb tense (Gong et al., 2012) , increasing lexical consistency (Carpuat, 2009; Tiedemann, 2010; Gong et al., 2011) , or topic adaptation (Su et al., 2012; Hasler et al., 2014) , with special-purpose features engineered to model these phenomena." + }, + { + "id": 4, + "string": "However, with traditional statistical machine translation being largely supplanted with neural machine translation (NMT) models trained in an end-toend fashion, an alternative is to directly provide additional context to an NMT system at training time and hope that it will succeed in inducing relevant predictive features (Jean et al., 2017; Wang et al., 2017; Tiedemann and Scherrer, 2017; Bawden et al., 2018) ." + }, + { + "id": 5, + "string": "While the latter approach, using context-aware NMT models, has demonstrated to yield performance improvements, it is still not clear what kinds of discourse phenomena are successfully handled by the NMT systems and, importantly, how they are modeled." + }, + { + "id": 6, + "string": "Understanding this would inform development of future discourse-aware NMT models, as it will suggest what kind of inductive biases need to be encoded in the architecture or which linguistic features need to be exploited." + }, + { + "id": 7, + "string": "In our work we aim to enhance our understanding of the modelling of selected discourse phenomena in NMT." + }, + { + "id": 8, + "string": "To this end, we construct a simple discourse-aware model, demonstrate that it achieves improvements over the discourse-agnostic baseline on an English-Russian subtitles dataset (Lison et al., 2018) and study which context information is being captured in the model." 
+ }, + { + "id": 9, + "string": "Specifically, we start with the Transformer (Vaswani et al., 2017) , a state-of-the-art model for context-agnostic NMT, and modify it in such a way that it can handle additional context." + }, + { + "id": 10, + "string": "In our model, a source sentence and a context sentence are first encoded independently, and then a single attention layer, in combination with a gating function, is used to produce a context-aware representation of the source sentence." + }, + { + "id": 11, + "string": "The information from context can only flow through this attention layer." + }, + { + "id": 12, + "string": "When compared to simply concatenating input sentences, as proposed by Tiedemann and Scherrer (2017) , our architecture appears both more accurate (+0.6 BLEU) and also guarantees that the contextual information cannot bypass the attention layer and hence remain undetected in our analysis." + }, + { + "id": 13, + "string": "We analyze what types of contextual information are exploited by the translation model." + }, + { + "id": 14, + "string": "While studying the attention weights, we observe that much of the information captured by the model has to do with pronoun translation." + }, + { + "id": 15, + "string": "It is not entirely surprising, as we consider translation from a language without grammatical gender (English) to a language with grammatical gender (Russian)." + }, + { + "id": 16, + "string": "For Russian, translated pronouns need to agree in gender with their antecedents." + }, + { + "id": 17, + "string": "Moreover, since in Russian verbs agree with subjects in gender and adjectives also agree in gender with pronouns in certain frequent constructions, mistakes in translating pronouns have a major effect on the words in the produced sentences."
+ }, + { + "id": 18, + "string": "Consequently, the standard cross-entropy training objective sufficiently rewards the model for improving pronoun translation and extracting relevant information from the context." + }, + { + "id": 19, + "string": "We use automatic co-reference systems and human annotation to isolate anaphoric cases." + }, + { + "id": 20, + "string": "We observe even more substantial improvements in performance on these subsets." + }, + { + "id": 21, + "string": "By comparing attention distributions induced by our model against co-reference links, we conclude that the model implicitly captures coreference phenomena, even without having any kind of specialized features which could help it in this subtask." + }, + { + "id": 22, + "string": "These observations also suggest potential directions for future work." + }, + { + "id": 23, + "string": "For example, effective co-reference systems go beyond relying simply on embeddings of contexts." + }, + { + "id": 24, + "string": "One option would be to integrate 'global' features summarizing properties of groups of mentions predicted as linked in a document (Wiseman et al., 2016) , or to use latent relations to trace entities across documents (Ji et al., 2017) ." + }, + { + "id": 25, + "string": "Our key contributions can be summarized as follows: • we introduce a context-aware neural model, which is effective and has a sufficiently simple and interpretable interface between the context and the rest of the translation model; • we analyze the flow of information from the context and identify pronoun translation as the key phenomenon captured by the model; • by comparing to automatically predicted or human-annotated coreference relations, we observe that the model implicitly captures anaphora." + }, + { + "id": 26, + "string": "Neural Machine Translation Given a source sentence x = (x 1 , x 2 , ." + }, + { + "id": 27, + "string": "." + }, + { + "id": 28, + "string": "."
+ }, + { + "id": 29, + "string": ", x S ) and a target sentence y = (y 1 , y 2 , ." + }, + { + "id": 30, + "string": "." + }, + { + "id": 31, + "string": "." + }, + { + "id": 32, + "string": ", y T ), NMT models predict words in the target sentence, word by word." + }, + { + "id": 33, + "string": "Current NMT models mainly have an encoder-decoder structure." + }, + { + "id": 34, + "string": "The encoder maps an input sequence of symbol representations x to a sequence of distributed representations z = (z 1 , z 2 , ." + }, + { + "id": 35, + "string": "." + }, + { + "id": 36, + "string": "." + }, + { + "id": 37, + "string": ", z S )." + }, + { + "id": 38, + "string": "Given z, a neural decoder generates the corresponding target sequence of symbols y one element at a time." + }, + { + "id": 39, + "string": "Attention-based NMT The encoder-decoder framework with attention has been proposed by Bahdanau et al." + }, + { + "id": 40, + "string": "(2015) and has become the de facto standard in NMT." + }, + { + "id": 41, + "string": "The model consists of encoder and decoder recurrent networks and an attention mechanism." + }, + { + "id": 42, + "string": "The attention mechanism selectively focuses on parts of the source sentence during translation, and the attention weights specify the proportions with which information from different positions is combined." + }, + { + "id": 43, + "string": "Transformer Vaswani et al." + }, + { + "id": 44, + "string": "(2017) proposed an architecture that avoids recurrence completely." + }, + { + "id": 45, + "string": "The Transformer follows an encoder-decoder architecture using stacked self-attention and fully connected layers for both the encoder and decoder." + }, + { + "id": 46, + "string": "An important advantage of the Transformer is that it is more parallelizable and faster to train than recurrent encoder-decoder models."
+ }, + { + "id": 47, + "string": "From the source tokens, learned embeddings are generated and then modified using positional encodings." + }, + { + "id": 48, + "string": "The encoded word embeddings are then used as input to the encoder which consists of N layers each containing two sub-layers: (a) a multihead attention mechanism, and (b) a feed-forward network." + }, + { + "id": 49, + "string": "The self-attention mechanism first computes attention weights: i.e., for each word, it computes a distribution over all words (including itself)." + }, + { + "id": 50, + "string": "This distribution is then used to compute a new representation of that word: this new representation is set to an expectation (under the attention distribution specific to the word) of word representations from the layer below." + }, + { + "id": 51, + "string": "In multi-head attention, this process is repeated h times with different representations and the result is concatenated." + }, + { + "id": 52, + "string": "The second component of each layer of the Transformer network is a feed-forward network." + }, + { + "id": 53, + "string": "The authors propose using a two-layered network with the ReLU activations." + }, + { + "id": 54, + "string": "Analogously, each layer of the decoder contains the two sub-layers mentioned above as well as an additional multi-head attention sub-layer that receives input from the corresponding encoding layer." + }, + { + "id": 55, + "string": "In the decoder, the attention is masked to prevent future positions from being attended to, or in other words, to prevent illegal leftward information flow." + }, + { + "id": 56, + "string": "See Vaswani et al." + }, + { + "id": 57, + "string": "(2017) for additional details." 
+ }, + { + "id": 58, + "string": "The proposed architecture reportedly improves over the previous best results on the WMT 2014 English-to-German and English-to-French translation tasks, and we verified its strong performance on our data set in preliminary experiments." + }, + { + "id": 59, + "string": "Thus, we consider it a strong state-of-the-art baseline for our experiments." + }, + { + "id": 60, + "string": "Moreover, as the Transformer is attractive in practical NMT applications because of its parallelizability and training efficiency, integrating extra-sentential information in Transformer is important from the engineering perspective." + }, + { + "id": 61, + "string": "As we will see in Section 4, previous techniques developed for recurrent encoder-decoders do not appear effective for the Transformer." + }, + { + "id": 62, + "string": "3 Context-aware model architecture Our model is based on the Transformer architecture (Vaswani et al., 2017) ." + }, + { + "id": 63, + "string": "We leave Transformer's decoder intact while incorporating context information on the encoder side ( Figure 1 )." + }, + { + "id": 64, + "string": "Source encoder: The encoder is composed of a stack of N layers." + }, + { + "id": 65, + "string": "The first N − 1 layers are identical and represent the original layers of the Transformer's encoder, while the last layer incorporates contextual information through a gate: g_i = σ(W_g [c_i^(s-attn), c_i^(c-attn)] + b_g) (1), c_i = g_i ⊙ c_i^(s-attn) + (1 − g_i) ⊙ c_i^(c-attn) (2), where c_i^(s-attn) and c_i^(c-attn) are the source-attention and context-attention representations of position i. Context encoder: The context encoder is composed of a stack of N identical layers and replicates the original Transformer encoder." + }, + { + "id": 66, + "string": "In contrast to related work (Jean et al., 2017; Wang et al., 2017) , we found in preliminary experiments that using separate encoders does not yield an accurate model." + }, + { + "id": 67, + "string": "Instead we share the parameters of the first N − 1 layers with the source encoder."
+ }, + { + "id": 68, + "string": "Since a major proportion of the context encoder's parameters are shared with the source encoder, we add a special token to the beginning of context sentences, but not source sentences, to let the shared layers know whether it is encoding a source or a context sentence." + }, + { + "id": 69, + "string": "Experiments Data and setting We use the publicly available OpenSubtitles2018 corpus (Lison et al., 2018) for English and Russian." + }, + { + "id": 70, + "string": "1 As described in the appendix, we apply data cleaning and randomly choose 2 million training instances from the resulting data." + }, + { + "id": 71, + "string": "For development and testing, we randomly select two subsets of 10000 instances from movies not encountered in training." + }, + { + "id": 72, + "string": "2 Sentences were encoded using byte-pair encoding (Sennrich et al., 2016) , with source and target vocabularies of about 32000 tokens." + }, + { + "id": 73, + "string": "We generally used the same parameters and optimizer as in the original Transformer (Vaswani et al., 2017) ." + }, + { + "id": 74, + "string": "The hyperparameters, preprocessing and training details are provided in the supplementary material." + }, + { + "id": 75, + "string": "Results and analysis We start with experiments motivating the setting and verifying that the improvements are indeed genuine, i.e." + }, + { + "id": 76, + "string": "they come from inducing predictive features of the context." + }, + { + "id": 77, + "string": "In the subsequent section 5.2, we analyze the features induced by the context encoder and perform error analysis." + }, + { + "id": 78, + "string": "Overall performance We use the traditional automatic metric BLEU on a general test set to get an estimate of the overall performance of the discourse-aware model, before turning to more targeted evaluation in the next section." + }, + { + "id": 79, + "string": "We provide results in Table 1 ."
+ }, + { + "id": 80, + "string": "3 The 'baseline' is the discourse-agnostic version of the Transformer." + }, + { + "id": 81, + "string": "As another baseline we use the standard Transformer applied to the concatenation of the previous and source sentences, as proposed by Tiedemann and Scherrer (2017); using a special token to separate the two sentences resulted in a substantial degradation of performance (over 1 BLEU)." + }, + { + "id": 82, + "string": "Instead, we use a binary flag at every word position in our concatenation baseline telling the encoder whether the word belongs to the context sentence or to the source sentence." + }, + { + "id": 83, + "string": "We consider two versions of our discourse-aware model: one using the previous sentence as the context, another one relying on the next sentence." + }, + { + "id": 84, + "string": "We hypothesize that both the previous and the next sentence provide a similar amount of additional clues about the topic of the text, whereas for discourse phenomena such as anaphora, discourse relations and elliptical structures, the previous sentence is more important." + }, + { + "id": 85, + "string": "First, we observe that our best model is the one using a context encoder for the previous sentence: it achieves 0.7 BLEU improvement over the discourse-agnostic model." + }, + { + "id": 86, + "string": "We also notice that, unlike the previous sentence, the next sentence does not appear beneficial." + }, + { + "id": 87, + "string": "This is a first indicator that discourse phenomena are the main reason for the observed improvement, rather than topic effects." + }, + { + "id": 88, + "string": "Consequently, we focus solely on using the previous sentence in all subsequent experiments." + }, + { + "id": 89, + "string": "Second, we observe that the concatenation baseline appears less accurate than the introduced context-aware model."
+ }, + { + "id": 90, + "string": "This result suggests that our model is not only more amenable to analysis but also potentially more effective than using concatenation." + }, + { + "id": 91, + "string": "In order to verify that our improvements are genuine, we also evaluate our model (trained with the previous sentence as context) on the same test set with shuffled context sentences." + }, + { + "id": 92, + "string": "It can be seen that the performance drops significantly when a real context sentence is replaced with a random one." + }, + { + "id": 93, + "string": "This confirms that the model does rely on context information to achieve the improvement in translation quality, and is not merely better regularized." + }, + { + "id": 94, + "string": "However, the model is robust towards being shown a random context and obtains a performance similar to the context-agnostic baseline." + }, + { + "id": 95, + "string": "Analysis In this section we investigate what types of contextual information are exploited by the model." + }, + { + "id": 96, + "string": "We study the distribution of attention to context and perform analysis on specific subsets of the test data." + }, + { + "id": 97, + "string": "Specifically, the research questions we seek to answer are as follows: • For the translation of which words does the model rely on contextual history most?" + }, + { + "id": 98, + "string": "• Are there any non-lexical patterns affecting attention to context, such as sentence length and word position?" + }, + { + "id": 99, + "string": "• Can the context-aware NMT system implicitly learn coreference phenomena without any feature engineering?" + }, + { + "id": 100, + "string": "Since all the attentions in our model are multihead, by attention weights we refer to an average over heads of per-head attention weights." + }, + { + "id": 101, + "string": "First, we would like to identify a useful attention mass coming to context."
+ }, + { + "id": 102, + "string": "We analyze the attention maps between source and context, and find that the model mostly attends to the special context tokens, and much less often attends to words." + }, + { + "id": 103, + "string": "Our hypothesis is that the model has found a way to take no information from context by looking at uninformative tokens, and it attends to words only when it wants to pass some contextual information to the source sentence encoder." + }, + { + "id": 104, + "string": "Thus we define useful contextual attention mass as the sum of attention weights to context words, excluding special tokens and punctuation." + }, + { + "id": 105, + "string": "Top words depending on context We analyze the distribution of attention to context for individual source words to see for which words the model depends most on contextual history." + }, + { + "id": 106, + "string": "We compute the overall average attention to context words for each source word in our test set." + }, + { + "id": 107, + "string": "We do the same for source words at positions higher than first." + }, + { + "id": 108, + "string": "We filter out words that occurred less than 10 times in a test set." + }, + { + "id": 109, + "string": "The top 10 words with the highest average attention to context words are provided in Table 2 ." + }, + { + "id": 110, + "string": "An interesting finding is that contextual attention is high for the translation of \"it\", \"yours\", \"ones\", \"you\" and \"I\", which are indeed very ambiguous out-of-context when translating into Russian." + }, + { + "id": 111, + "string": "For example, \"it\" will be translated as third person singular masculine, feminine or neuter, or third person plural depending on its antecedent." + }, + { + "id": 112, + "string": "Table 2 : Top-10 words with the highest average attention to context words." + }, + { + "id": 113, + "string": "attn gives an average attention to context words, pos gives an average position of the source word."
+ }, + { + "id": 114, + "string": "The left part is for words at all positions, the right for words at positions higher than first." + }, + { + "id": 115, + "string": "\"You\" can be second person singular impolite or polite, or plural." + }, + { + "id": 116, + "string": "Also, verbs must agree in gender and number with the translation of \"you\"." + }, + { + "id": 117, + "string": "It might not be obvious why \"I\" has high contextual attention, as it is not ambiguous itself." + }, + { + "id": 118, + "string": "However, in past tense, verbs must agree with \"I\" in gender, so to translate past tense sentences properly, the source encoder must predict speaker gender, and the context may provide useful indicators." + }, + { + "id": 119, + "string": "Most surprising is the appearance of \"yes\", \"yeah\", and \"well\" in the list of context-dependent words, similar to the finding by Tiedemann and Scherrer (2017) ." + }, + { + "id": 120, + "string": "We note that these words mostly appear in sentence-initial position, and in relatively short sentences." + }, + { + "id": 121, + "string": "If only words after the first are considered, they disappear from the top-10 list." + }, + { + "id": 122, + "string": "We hypothesize that the amount of attention to context not only depends on the words themselves, but also on factors such as sentence length and position, and we test this hypothesis in the next section." + }, + { + "id": 123, + "string": "Dependence on sentence length and position We compute useful attention mass coming to context by averaging over source words." + }, + { + "id": 124, + "string": "Figure 2 illustrates the dependence of this average attention mass on sentence length." + }, + { + "id": 125, + "string": "We observe a disproportionally high attention on context for short sentences, and a positive correlation between the average contextual attention and context length."
+ }, + { + "id": 126, + "string": "It is also interesting to see the importance given to the context at different positions in the source sentence." + }, + { + "id": 127, + "string": "We compute an average attention mass to context for a set of 1500 sentences of the same length." + }, + { + "id": 128, + "string": "As can be seen in Figure 3 , words at the beginning of a source sentence tend to attend to context more than words at the end of a sentence." + }, + { + "id": 129, + "string": "This correlates with the standard view that English sentences present hearer-old material before hearer-new." + }, + { + "id": 130, + "string": "There is a clear (negative) correlation between sentence length and the amount of attention placed on contextual history, and between token position and the amount of attention to context, which suggests that context is especially helpful at the beginning of a sentence, and for shorter sentences." + }, + { + "id": 131, + "string": "However, Figure 4 shows that there is no straightforward dependence of BLEU improvement on source length." + }, + { + "id": 132, + "string": "This means that while attention on context is disproportionally high for short sentences, context does not seem disproportionally more useful for these sentences." + }, + { + "id": 133, + "string": "Analysis of pronoun translation The analysis of the attention model indicates that the model attends heavily to the contextual history for the translation of some pronouns." + }, + { + "id": 134, + "string": "Here, we investigate whether this context-aware modelling results in empirical improvements in translation. Ambiguous pronouns and translation quality Ambiguous pronouns are relatively sparse in a general-purpose test set, and previous work has designed targeted evaluation of pronoun translation (Hardmeier et al., 2015; Miculicich Werlen and Popescu-Belis, 2017; Bawden et al., 2018) ."
+ }, + { + "id": 135, + "string": "However, we note that in Russian, grammatical gender is not only marked on pronouns, but also on adjectives and verbs." + }, + { + "id": 136, + "string": "Rather than using a pronoun-specific evaluation, we present results with BLEU on test sets where we hypothesize context to be relevant, specifically sentences containing co-referential pronouns." + }, + { + "id": 137, + "string": "We feed the Stanford CoreNLP open-source coreference resolution system (Manning et al., 2014a) with pairs of sentences to find examples where there is a link between one of the pronouns under consideration and the context." + }, + { + "id": 138, + "string": "We focus on anaphoric instances of \"it\" (this excludes, among others, pleonastic uses of \"it\"), and instances of the pronouns \"I\", \"you\", and \"yours\" that are coreferent with an expression in the previous sentence." + }, + { + "id": 139, + "string": "All these pronouns express ambiguity in the translation into Russian, and the model has learned to attend to context for their translation (Table 2) ." + }, + { + "id": 140, + "string": "To combat data sparsity, the test sets are extracted from large amounts of held-out data of OpenSubtitles2018." + }, + { + "id": 141, + "string": "Table 3 shows BLEU scores for the resulting subsets." + }, + { + "id": 142, + "string": "First of all, we see that most of the antecedents in these test sets are also pronouns." + }, + { + "id": 143, + "string": "Antecedent pronouns should not be particularly informative for translating the source pronoun." + }, + { + "id": 144, + "string": "Nevertheless, even with such contexts, improvements are generally larger than on the overall test set." + }, + { + "id": 145, + "string": "When we focus on sentences where the antecedent for the pronoun under consideration contains a noun, we observe even larger improvements ( Table 4 )."
+ }, + { + "id": 146, + "string": "Improvement is smaller for \"I\", but we note that verbs with first person singular subjects mark gender only in the past tense, which limits the impact of correctly predicting gender." + }, + { + "id": 147, + "string": "In contrast, different types of \"you\" (polite/impolite, singular/plural) lead to different translations of the pronoun itself plus related verbs and adjectives, leading to a larger jump in performance." + }, + { + "id": 148, + "string": "Examples of nouns co-referent with \"I\" and \"you\" include names, titles (\"Mr.\", \"Mrs.\", \"officer\"), terms denoting family relationships (\"Mom\", \"Dad\"), and terms of endearment (\"honey\", \"sweetie\")." + }, + { + "id": 149, + "string": "Such nouns can serve to disambiguate number and gender of the speaker or addressee, and mark the level of familiarity between them." + }, + { + "id": 150, + "string": "The most interesting case is translation of \"it\", as \"it\" can have many different translations into Russian, depending on the grammatical gender of the antecedent." + }, + { + "id": 151, + "string": "In order to disentangle these cases, we train the Berkeley aligner on 10m sentences and use the trained model to divide the test set with \"it\" referring to a noun into test sets specific to each gender and number." + }, + { + "id": 152, + "string": "Results are in Table 5 ." + }, + { + "id": 153, + "string": "We see an improvement of 4-5 BLEU for sentences where \"it\" is translated into a feminine or plural pronoun by the reference." + }, + { + "id": 154, + "string": "For cases where \"it\" is translated into a masculine pronoun, the improvement is smaller because the masculine gender is more frequent, and the context-agnostic baseline tends to translate the pronoun \"it\" as masculine." 
+ }, + { + "id": 155, + "string": "Latent anaphora resolution The results in Tables 4 and 5 suggest that the context-aware model exploits information about the antecedent of an ambiguous pronoun." + }, + { + "id": 156, + "string": "We hypothesize that we can interpret the model's attention mechanism as latent anaphora resolution, and perform experiments to test this hypothesis." + }, + { + "id": 157, + "string": "For test sets from Table 4 , we find an antecedent noun phrase (usually a determiner or a possessive pronoun followed by a noun) using Stanford CoreNLP (Manning et al., 2014b) ." + }, + { + "id": 158, + "string": "We select only examples where a noun phrase contains a single noun to simplify our analysis." + }, + { + "id": 159, + "string": "Then we identify which token receives the highest attention weight (excluding special tokens and punctuation)." + }, + { + "id": 160, + "string": "If this token falls within the antecedent span, then we treat it as agreement (see Table 6 )." + }, + { + "id": 161, + "string": "One natural question might be: does the attention component in our model genuinely learn to perform anaphora resolution, or does it capture some simple heuristic (e.g., pointing to the last noun)?" + }, + { + "id": 162, + "string": "To answer this question, we consider several baselines: choosing a random, last or first noun from the context sentence as an antecedent. Agreement (in %): \"it\": random 40, first 36, last 52, attention 58; \"you\": random 42, first 63, last 29, attention 67; \"I\": random 39, first 56, last 35, attention 62." + }, + { + "id": 163, + "string": "Note that the agreement of the last noun for \"it\" or the first noun for \"you\" and \"I\" is very high." + }, + { + "id": 164, + "string": "This is partially due to the fact that most context sentences have only one noun." + }, + { + "id": 165, + "string": "For these examples, random and last predictions are always correct, while attention does not always pick a noun as the most relevant word in the context."
+ }, + { + "id": 166, + "string": "To get a clearer picture let us now concentrate only on examples where there is more than one noun in the context (Table 7) ." + }, + { + "id": 167, + "string": "We can now see that the attention weights are in much better agreement with the coreference system than any of the heuristics." + }, + { + "id": 168, + "string": "This indicates that the model is indeed performing anaphora resolution." + }, + { + "id": 169, + "string": "While agreement with CoreNLP is encouraging, we are aware that coreference resolution by CoreNLP is imperfect and partial agreement with it may not necessarily indicate that the attention is particularly accurate." + }, + { + "id": 170, + "string": "In order to control for this, we asked human annotators to manually evaluate 500 examples from the test sets where CoreNLP predicted that \"it\" refers to a noun in the context sentence." + }, + { + "id": 171, + "string": "More precisely, we picked 500 random examples from the test set with \"it\" from Table 7." + }, + { + "id": 172, + "string": "We marked the pronoun in the source which CoreNLP found anaphoric." + }, + { + "id": 173, + "string": "Assessors were given the source and context sentences and were asked to mark an antecedent noun phrase for a marked pronoun in a source sentence or say that there is no antecedent at all." + }, + { + "id": 174, + "string": "We then picked those examples where assessors found a link from \"it\" to some noun in context (79% of all examples)." + }, + { + "id": 175, + "string": "Then we evaluated agreement of CoreNLP and our model with the ground truth links." + }, + { + "id": 176, + "string": "We also report the performance of the best heuristic for \"it\" from our previous analysis (i.e." + }, + { + "id": 177, + "string": "last noun in context)." + }, + { + "id": 178, + "string": "The results are provided in Table 8 ." + }, + { + "id": 179, + "string": "The agreement between our model and the ground truth is 72%."
+ }, + { + "id": 180, + "string": "Though 5% below the coreference system, this is a lot higher than the best heuristic; Table 8 reports the agreement (in %) of CoreNLP (77), attention (72) and the last noun (54) with human assessment." + }, + { + "id": 181, + "string": "Examples with ≥1 noun in context sentence." + }, + { + "id": 182, + "string": "Figure 5: An example of an attention map between source and context." + }, + { + "id": 183, + "string": "On the y-axis are the source tokens, on the x-axis the context tokens." + }, + { + "id": 184, + "string": "Note the high attention between \"it\" and its antecedent \"heart\"." + }, + { + "id": 185, + "string": "Table 9: Performance of CoreNLP and our model's attention mechanism compared to human assessment (%): both right 53, attention right but CoreNLP wrong 19, attention wrong but CoreNLP right 24, both wrong 4." + }, + { + "id": 186, + "string": "Examples with ≥1 noun in context sentence." + }, + { + "id": 187, + "string": "The attention mechanism thus exceeds the best heuristic, the last noun, by 18%." + }, + { + "id": 188, + "string": "This confirms our conclusion that our model performs latent anaphora resolution." + }, + { + "id": 189, + "string": "Interestingly, the patterns of mistakes are quite different for CoreNLP and our model (Table 9)." + }, + { + "id": 190, + "string": "We also present one example (Figure 5) where the attention correctly predicts anaphora while CoreNLP fails." + }, + { + "id": 191, + "string": "Nevertheless, there is room for improvement, and improving the attention component is likely to boost translation performance." + }, + { + "id": 192, + "string": "Related work Our analysis focuses on how our context-aware neural model implicitly captures anaphora." + }, + { + "id": 193, + "string": "Early work on anaphora phenomena in statistical machine translation has relied on external systems for coreference resolution (Le Nagard and Koehn, 2010; Hardmeier and Federico, 2010)."
+ }, + { + "id": 194, + "string": "Results were mixed, and the low performance of coreference resolution systems was identified as a problem for this type of system." + }, + { + "id": 195, + "string": "Later work by Hardmeier et al." + }, + { + "id": 196, + "string": "(2013) has shown that cross-lingual pronoun prediction systems can implicitly learn to resolve coreference, but this work still relied on external feature extraction to identify anaphora candidates." + }, + { + "id": 197, + "string": "Our experiments show that a context-aware neural machine translation system can implicitly learn coreference phenomena without any feature engineering." + }, + { + "id": 198, + "string": "Tiedemann and Scherrer (2017) and Bawden et al." + }, + { + "id": 199, + "string": "(2018) analyze the attention weights of context-aware NMT models." + }, + { + "id": 200, + "string": "Tiedemann and Scherrer (2017) find some evidence for above-average attention on contextual history for the translation of pronouns, and our analysis goes further in that we are the first to demonstrate that our context-aware model learns latent anaphora resolution through the attention mechanism." + }, + { + "id": 201, + "string": "This is contrary to Bawden et al." + }, + { + "id": 202, + "string": "(2018), who do not observe increased attention between a pronoun and its antecedent in their recurrent model." + }, + { + "id": 203, + "string": "We deem our model more suitable for analysis, since it has no recurrent connections and fully relies on the attention mechanism within a single attention layer." + }, + { + "id": 204, + "string": "Conclusions We introduced a context-aware NMT system which is based on the Transformer architecture." + }, + { + "id": 205, + "string": "When evaluated on an En-Ru parallel corpus, it outperforms both the context-agnostic baselines and a simple context-aware baseline."
+ }, + { + "id": 206, + "string": "We observe that improvements are especially prominent for sentences containing ambiguous pronouns." + }, + { + "id": 207, + "string": "We also show that the model induces anaphora relations." + }, + { + "id": 208, + "string": "We believe that further improvements in handling anaphora, and by proxy translation, can be achieved by incorporating specialized features in the attention model." + }, + { + "id": 209, + "string": "Our analysis has focused on the effect of context information on pronoun translation." + }, + { + "id": 210, + "string": "Future work could also investigate whether context-aware NMT systems learn other discourse phenomena, for example whether they improve the translation of elliptical constructions, and markers of discourse relations and information structure." + } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 25 + }, + { + "section": "Neural Machine Translation", + "n": "2", + "start": 26, + "end": 68 + }, + { + "section": "Data and setting", + "n": "4.1", + "start": 69, + "end": 74 + }, + { + "section": "Results and analysis", + "n": "5", + "start": 75, + "end": 77 + }, + { + "section": "Overall performance", + "n": "5.1", + "start": 78, + "end": 94 + }, + { + "section": "Analysis", + "n": "5.2", + "start": 95, + "end": 104 + }, + { + "section": "Top words depending on context", + "n": "5.2.1", + "start": 105, + "end": 122 + }, + { + "section": "Dependence on sentence length and position", + "n": "5.2.2", + "start": 123, + "end": 132 + }, + { + "section": "Analysis of pronoun translation", + "n": "5.3", + "start": 133, + "end": 133 + }, + { + "section": "Ambiguous pronouns and translation quality", + "n": "5.3.1", + "start": 134, + "end": 154 + }, + { + "section": "Latent anaphora resolution", + "n": "5.3.2", + "start": 155, + "end": 191 + }, + { + "section": "Related work", + "n": "6", + "start": 192, + "end": 203 + }, + { + "section": "Conclusions", + "n": "7", + 
"start": 204, + "end": 210 + } + ], + "figures": [ + { + "filename": "../figure/image/969-Figure4-1.png", + "caption": "Figure 4: BLEU score vs. source sentence length", + "page": 5, + "bbox": { + "x1": 338.88, + "x2": 491.03999999999996, + "y1": 61.44, + "y2": 177.12 + } + }, + { + "filename": "../figure/image/969-Figure3-1.png", + "caption": "Figure 3: Average attention to context vs. source token position", + "page": 5, + "bbox": { + "x1": 104.64, + "x2": 255.35999999999999, + "y1": 232.79999999999998, + "y2": 346.08 + } + }, + { + "filename": "../figure/image/969-Figure2-1.png", + "caption": "Figure 2: Average attention to context words vs. both source and context length", + "page": 5, + "bbox": { + "x1": 100.8, + "x2": 259.2, + "y1": 61.44, + "y2": 191.04 + } + }, + { + "filename": "../figure/image/969-Table6-1.png", + "caption": "Table 6: Agreement with CoreNLP for test sets of pronouns having a nominal antecedent in context sentence (%).", + "page": 6, + "bbox": { + "x1": 308.64, + "x2": 524.16, + "y1": 193.44, + "y2": 264.0 + } + }, + { + "filename": "../figure/image/969-Table4-1.png", + "caption": "Table 4: BLEU for test sets of pronouns having a nominal antecedent in context sentence. N: number of examples in the test set.", + "page": 6, + "bbox": { + "x1": 74.88, + "x2": 287.03999999999996, + "y1": 193.44, + "y2": 250.07999999999998 + } + }, + { + "filename": "../figure/image/969-Table3-1.png", + "caption": "Table 3: BLEU for test sets with coreference between pronoun and a word in context sentence. We show both N, the total number of instances in a particular test set, and number of instances with pronominal antecedent. Significant BLEU differences are in bold.", + "page": 6, + "bbox": { + "x1": 104.64, + "x2": 490.08, + "y1": 62.879999999999995, + "y2": 132.0 + } + }, + { + "filename": "../figure/image/969-Table5-1.png", + "caption": "Table 5: BLEU for test sets of pronoun “it” having a nominal antecedent in context sentence. 
N: number of examples in the test set.", + "page": 6, + "bbox": { + "x1": 73.92, + "x2": 289.44, + "y1": 305.76, + "y2": 376.32 + } + }, + { + "filename": "../figure/image/969-Figure1-1.png", + "caption": "Figure 1: Encoder of the discourse-aware model", + "page": 2, + "bbox": { + "x1": 309.59999999999997, + "x2": 523.1999999999999, + "y1": 61.44, + "y2": 324.96 + } + }, + { + "filename": "../figure/image/969-Table7-1.png", + "caption": "Table 7: Agreement with CoreNLP for test sets of pronouns having a nominal antecedent in context sentence (%). Examples with ≥1 noun in context sentence.", + "page": 7, + "bbox": { + "x1": 73.92, + "x2": 288.0, + "y1": 62.879999999999995, + "y2": 132.0 + } + }, + { + "filename": "../figure/image/969-Figure5-1.png", + "caption": "Figure 5: An example of an attention map between source and context. On the y-axis are the source tokens, on the x-axis the context tokens. Note the high attention between “it” and its antecedent “heart”.", + "page": 7, + "bbox": { + "x1": 319.68, + "x2": 510.24, + "y1": 183.84, + "y2": 312.0 + } + }, + { + "filename": "../figure/image/969-Table8-1.png", + "caption": "Table 8: Performance of CoreNLP and our model’s attention mechanism compared to human assessment. Examples with ≥1 noun in context sentence.", + "page": 7, + "bbox": { + "x1": 341.76, + "x2": 491.03999999999996, + "y1": 62.879999999999995, + "y2": 119.03999999999999 + } + }, + { + "filename": "../figure/image/969-Table9-1.png", + "caption": "Table 9: Performance of CoreNLP and our model’s attention mechanism compared to human assessment (%). Examples with ≥1 noun in context sentence.", + "page": 7, + "bbox": { + "x1": 349.91999999999996, + "x2": 483.35999999999996, + "y1": 390.71999999999997, + "y2": 447.35999999999996 + } + }, + { + "filename": "../figure/image/969-Table1-1.png", + "caption": "Table 1: Automatic evaluation: BLEU. 
Significant differences at p < 0.01 are in bold.", + "page": 3, + "bbox": { + "x1": 310.56, + "x2": 522.24, + "y1": 62.879999999999995, + "y2": 145.92 + } + }, + { + "filename": "../figure/image/969-Table2-1.png", + "caption": "Table 2: Top-10 words with the highest average attention to context words. attn gives an average attention to context words, pos gives an average position of the source word. Left part is for words on all positions, right — for words on positions higher than first.", + "page": 4, + "bbox": { + "x1": 314.88, + "x2": 517.4399999999999, + "y1": 62.879999999999995, + "y2": 213.12 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-6" + }, + { + "slides": { + "2": { + "title": "Graph to String Translation", + "text": [ + "Translation = generation of target-side surface words in order, conditioned on source semantic nodes and previously generated words.", + "Start in the (virtual) root", + "At each step, transition to a semantic node and emit a target word", + "A single node can be visited multiple times", + "One transition can move anywhere in the LF" + ], + "page_nums": [ + 3 + ], + "images": [] + }, + "3": { + "title": "Translation Example", + "text": [ + "Figure 2 : An example of the translation process illustrating several first steps of translating the sentence into German (Ich mochte dir einen Sandwich...).", + "Labels in italics correspond to the shortest undirected paths between the nodes." + ], + "page_nums": [ + 4 + ], + "images": [ + "figure/image/984-Figure1-1.png" + ] + }, + "4": { + "title": "Alignment of Graph Nodes", + "text": [ + "How do we align source-side semantic nodes to target-side words?" 
+ ], + "page_nums": [ + 5 + ], + "images": [] + }, + "5": { + "title": "Alignment of Graph Nodes Gibbs Sampling", + "text": [ + "Alignment (transition) distribution P(a_i) modeled as a categorical distribution:", + "Translation (emission) distribution modeled as a set of categorical distributions, one for each source semantic node:", + "P(e_i | n_{a_i}) ∝ c(lemma(n_{a_i}) → e_i)", + "Sample from the following distribution:" + ], + "page_nums": [ + 6 + ], + "images": [] + }, + "6": { + "title": "Alignment of Graph Nodes Evaluation", + "text": [ + "Linearize the LF, run GIZA++ (standard word alignment)", + "Heuristic linearization, try to preserve source surface word order", + "Source-side nodes to source-side tokens", + "Source-target word alignment GIZA++", + "Manual inspection of alignments", + "Alignment composition clearly superior", + "Not much difference between GIZA++ and parser alignments" + ], + "page_nums": [ + 7 + ], + "images": [] + } + }, + "paper_title": "A Discriminative Model for Semantics-to-String Translation", + "paper_id": "984", + "paper": { + "title": "A Discriminative Model for Semantics-to-String Translation", + "abstract": "We present a feature-rich discriminative model for machine translation which uses an abstract semantic representation on the source side. We include our model as an additional feature in a phrase-based decoder and we show modest gains in BLEU score in an n-best re-ranking experiment.", + "text": [ + { + "id": 0, + "string": "Introduction The goal of machine translation is to take source language utterances and convert them into fluent target language utterances with the same meaning." + }, + { + "id": 1, + "string": "Most recent approaches learn transformations using statistical techniques on parallel data." + }, + { + "id": 2, + "string": "Meaning-equivalent representations of words and phrases are learned directly from natural data, as are other syntactic operations such as reordering."
+ }, + { + "id": 3, + "string": "However, commonly used methods have a very simple view of the linguistic data." + }, + { + "id": 4, + "string": "Each word is generally modeled independently, for instance, and the relations between words are generally captured only in fixed phrases or as syntactic relationships." + }, + { + "id": 5, + "string": "Recently there has been a resurgence of interest in unified semantic representations: deep analyses with heavy normalization of morphology, syntax, and even semantic representations." + }, + { + "id": 6, + "string": "In particular, Abstract Meaning Representation (AMR, Banarescu et al." + }, + { + "id": 7, + "string": "(2013) ) is a novel representation of (sentential) semantics." + }, + { + "id": 8, + "string": "Such representations could influence a number of natural language understanding and generation tasks, particularly machine translation." + }, + { + "id": 9, + "string": "Deeper models can be used for multiple aspects of the translation modeling problem." + }, + { + "id": 10, + "string": "Building translation models that rely on a deeper representation of the input allows for a more parsimonious translation model: morphologically related words can be handled in a unified manner; semantically related concepts are immediately adjacent and available for modeling, etc." + }, + { + "id": 11, + "string": "Language models using deep representations might help us model which interpretations are more plausible." + }, + { + "id": 12, + "string": "We present an initial discriminative method for modeling the likelihood of a target language surface string given source language deep semantics." + }, + { + "id": 13, + "string": "This approach relies on an automatic parser for source language semantics." + }, + { + "id": 14, + "string": "We use a system that parses into AMR-like structures (Vanderwende et al., 2015) , and apply the resulting model as an additional feature in a translation system." 
+ }, + { + "id": 15, + "string": "Related Work There is a large body of related work on utilizing deep language representation in NLP and MT in particular." + }, + { + "id": 16, + "string": "This is not surprising considering that such representations provide abstractions of many language-specific phenomena, effectively bringing different languages closer together." + }, + { + "id": 17, + "string": "A number of machine translation systems starting as early as the 1950s therefore used a form of transfer: the source sentences were parsed, and those parsed representations were translated into target representations." + }, + { + "id": 18, + "string": "Finally, text generation was applied." + }, + { + "id": 19, + "string": "The level of analysis is somewhat arguable; sometimes it was purely syntactic, but in other cases it reached into the semantic domain." + }, + { + "id": 20, + "string": "One of the earliest architectures was described in 1957 (Yngve, 1957)." + }, + { + "id": 21, + "string": "More contemporary examples of such systems include KANT (Nyberg and Mitamura, 1992), which used a very deep representation close to an interlingua, early versions of SysTran and Microsoft Translator, or more recently TectoMT (Popel and Žabokrtský, 2010) for English→Czech translation." + }, + { + "id": 22, + "string": "AMR itself has recently been used for abstractive summarization (Liu et al., 2015)." + }, + { + "id": 23, + "string": "In this work, sentences in the document to be summarized are parsed to AMRs, then a decoding algorithm is run to produce a summary graph." + }, + { + "id": 24, + "string": "The surface realization of this graph then constitutes the final summary." + }, + { + "id": 25, + "string": "(Jones et al., 2012) presents an MT approach that can exploit semantic graphs such as AMR, in a continuation of earlier work that abstracted translation away from strings (Yamada and Knight, 2001; Galley et al., 2004)."
+ }, + { + "id": 26, + "string": "While rule extraction algorithms such as (Galley et al., 2004) operate on trees and have also been applied to semantic parsing problems (Li et al., 2013) , Jones et al." + }, + { + "id": 27, + "string": "(2012) generalized these approaches by inducing synchronous hyperedge replacement grammars (HRG), which operate on graphs." + }, + { + "id": 28, + "string": "In contrast to (Jones et al., 2012) , our work does not have to deal with the complexities of HRG decoding, which runs in O(n 3 ) (Jones et al., 2012) , as our decoder is simply a phrase-based decoder." + }, + { + "id": 29, + "string": "Discriminative models have been used in statistical MT many times." + }, + { + "id": 30, + "string": "Global lexicon model (Mauser et al., 2009 ) and phrase-sense disambiguation (Carpuat and Wu, 2007) are perhaps the best known methods." + }, + { + "id": 31, + "string": "Similarly to Carpuat and Wu (2007) , we use the classifier to rescore phrasal translations, however we do not train a separate classifier for each source phrase." + }, + { + "id": 32, + "string": "Instead, we train a global model -similarly to Subotin (2011) or more recently Tamchyna et al." + }, + { + "id": 33, + "string": "(2014) ." + }, + { + "id": 34, + "string": "Features for our model are very different from previous work because they come from a deep representation and therefore should capture semantic relations between the languages, instead of surface or morpho-syntactic correspondences." + }, + { + "id": 35, + "string": "Semantic Representation Our representation of sentence semantics is based on Logical Form (Vanderwende, 2015) ." + }, + { + "id": 36, + "string": "LFs are labeled directed graphs whose nodes roughly correspond to content words in the sentence." + }, + { + "id": 37, + "string": "Edge labels describe semantic relations between nodes." 
+ }, + { + "id": 38, + "string": "Additional linguistic information, such as verb subcategorization frames, definiteness, tense etc., is stored in graph nodes as bits." + }, + { + "id": 39, + "string": "Figure 1 shows a sentence parsed into the logical form." + }, + { + "id": 40, + "string": "Nodes are represented by word lemmas." + }, + { + "id": 41, + "string": "Relations include Dsub for deep subject, Dobj and Dind for direct and indirect objects etc." + }, + { + "id": 42, + "string": "Bits are shown as flags in parentheses." + }, + { + "id": 43, + "string": "Note that this graph may have cycles -for example, the Dobj of \"take\" is \"sandwich\", but \"take\" is also the Attrib of \"sandwich\"." + }, + { + "id": 44, + "string": "The verb \"take\" is also missing its obligatory subject which is replaced by the free variable X." + }, + { + "id": 45, + "string": "The logical form can be converted using a sequence of rules to a representation which conforms to the AMR specification (Vanderwende et al., 2015) ." + }, + { + "id": 46, + "string": "We do not use the full conversion pipeline in our work, so our semantic graphs are somewhere between the LF and AMR." + }, + { + "id": 47, + "string": "Notably, we keep the bits which serve as important features for the discriminative modeling of translation." + }, + { + "id": 48, + "string": "Graph-to-String Translation We develop models for semantic-graph-to-string translation." + }, + { + "id": 49, + "string": "These models are essentially discriminative translation models, relying on a decomposition structure similar to both maximum entropy language models and IBM Models 1, 2 (Brown et al., 1993) , and the HMM translation model (Vogel et al., 1996) ." + }, + { + "id": 50, + "string": "In particular, we see translation as a process of selecting target words in order conditioned on source language representation as well as prior target words." 
+ }, + { + "id": 51, + "string": "Similar to the IBM Models, we see each target word as being generated based on source concepts, though in our case the concepts are semantic graph nodes rather than surface words." + }, + { + "id": 52, + "string": "That is, we assume the existence of an alignment, though it aligns the target words to source semantic graph nodes rather than surface words." + }, + { + "id": 53, + "string": "Our model views translation as generation of the target-side sentence given the source-side semantic graph." + }, + { + "id": 54, + "string": "We assume a generative process which operates as follows." + }, + { + "id": 55, + "string": "We begin in the virtual root node of the graph." + }, + { + "id": 56, + "string": "At each step, we transition to a graph node and we generate a target-side word." + }, + { + "id": 57, + "string": "We proceed left-to-right on the target side and we stop once the whole target sentence is generated." + }, + { + "id": 58, + "string": "Figure 2 shows an example of this process." + }, + { + "id": 59, + "string": "Say we have a source semantic graph G with nodes V = {n_1..n_S}, edges E ⊂ V × V, and a root node n_R for R ∈ 1..S. Then the likelihood of a target string E = (e_1, ..., e_T) and alignment A = (a_1, ..., a_T) with a_i ∈ 0..S is as follows, with a_0 = R: P(A, E | G) = ∏_{i=1}^{T} P(a_i | a_1^{i-1}, e_1^{i-1}, G) P(e_i | a_1^{i}, e_1^{i-1}, G) (1) In this generative story, we first predict each alignment position and then predict each translated word." + }, + { + "id": 60, + "string": "The transition distribution P(a_i | · · ·) resembles that of the HMM alignment model, though the features are somewhat different." + }, + { + "id": 61, + "string": "The translation distribution P(e_i | · · ·) may take on several forms." + }, + { + "id": 62, + "string": "For the purposes of alignment, we explore a simple categorical distribution as in the IBM models."
+ }, + { + "id": 63, + "string": "For translation reranking, we instead use a feature-rich approach conditioned on a variety of source and target context." + }, + { + "id": 64, + "string": "Alignment of Semantic Graph Nodes We have experimented with a number of techniques for aligning source-side semantic graph nodes to target-side surface words." + }, + { + "id": 65, + "string": "Gibbs sampling." + }, + { + "id": 66, + "string": "We can attempt to directly align the target language words to the source language nodes using a generative HMM-style model." + }, + { + "id": 67, + "string": "Unlike the HMM word alignment model (Vogel et al., 1996), the likelihood of jumping between nodes is based on the graph path between those nodes, rather than the linear distance." + }, + { + "id": 68, + "string": "Starting from the generative story of Equation 1, we make several simplifying assumptions." + }, + { + "id": 69, + "string": "First we assume that the alignment distribution P(a_i | · · ·) is modeled as a categorical distribution: P(a_i | a_{i-1}, G) ∝ c(LABEL(a_{i-1}, a_i)). The function LABEL(u, v) produces a string describing the labels along the shortest (undirected) path between the two nodes." + }, + { + "id": 70, + "string": "Next, we assume that the translation distribution is modeled as a set of categorical distributions, one for each source semantic node: P(e_i | n_{a_i}) ∝ c(LEMMA(n_{a_i}) → e_i). This model is sensitive to the order in which source language information is presented in the target language." + }, + { + "id": 71, + "string": "The alignment variables a_i are not observed." + }, + { + "id": 72, + "string": "We use Gibbs sampling rather than EM so that we can incorporate a sparse prior when estimating the parameters of the model and the assignments to these latent alignment variables." + }, + { + "id": 73, + "string": "At each iteration, we shuffle the sentences in our training data."
+ }, + { + "id": 74, + "string": "Then for each sentence, we visit all its tokens in a random order and re-align them." + }, + { + "id": 75, + "string": "We sample the new alignment according to the Markov blanket, which has the following probability distribution: P(t | n_i) ∝ (c(LEMMA(n_i) → t) + α) / (c(LEMMA(n_i)) + αL) × (c(LABEL(n_i, n_{i-1})) + β) / (T + βP) × (c(LABEL(n_{i+1}, n_i)) + β) / (T + βP) (2) L, P stand for the number of lemma/path types, respectively." + }, + { + "id": 76, + "string": "T is the total number of tokens in the corpus." + }, + { + "id": 77, + "string": "Overall, the formula describes the probability of the edge coming into the node n_i, the token emission and finally the outgoing edge." + }, + { + "id": 78, + "string": "We evaluate this probability for each node n_i in the graph and re-align the token according to the random sample from this distribution." + }, + { + "id": 79, + "string": "α and β are hyper-parameters specifying the concentration parameters of symmetric Dirichlet priors over the transition and emission distributions." + }, + { + "id": 80, + "string": "Specifying values less than 1 for these hyper-parameters pushes the model toward sparse solutions." + }, + { + "id": 81, + "string": "They are tuned by a grid search which evaluates model perplexity on a held-out set." + }, + { + "id": 82, + "string": "Direct GIZA++." + }, + { + "id": 83, + "string": "GIZA++ (Och and Ney, 2000) is a commonly used toolkit for word alignment which implements the IBM models." + }, + { + "id": 84, + "string": "In this setting, we linearized the semantic graph nodes using a simple heuristic based on the surface word order and aligned them directly to the target-side sentences." + }, + { + "id": 85, + "string": "We experimented with different symmetrizations and found that grow-diag-final-and gives the best results." + }, + { + "id": 86, + "string": "Composed alignments."
+ }, + { + "id": 87, + "string": "We divided the alignment problem into two stages: aligning semantic graph nodes to source-side words and aligning the source-and target-side words (i.e., standard MT word alignment)." + }, + { + "id": 88, + "string": "We then simply compose the two alignments." + }, + { + "id": 89, + "string": "For the alignment between source graph nodes and source surface words, we have two options: we can either train a GIZA++ model or we can use gold alignments provided by the semantic parser." + }, + { + "id": 90, + "string": "For the second stage, we need to train a GIZA++ model." + }, + { + "id": 91, + "string": "We evaluated the different strategies by manually inspecting the resulting alignments." + }, + { + "id": 92, + "string": "We found that the composition of two separate alignment steps produces clearly superior results, even if it seems arguable whether such division simplifies the task." + }, + { + "id": 93, + "string": "Therefore, for the remaining experiments, we used the composition of gold alignment and GIZA++, although two GIZA++ steps performed comparably well." + }, + { + "id": 94, + "string": "Model For our discriminative model, the alignment is assumed to be given." + }, + { + "id": 95, + "string": "At training time, it is the alignment produced by the parser composed with GIZA++ surface word alignment." + }, + { + "id": 96, + "string": "At test time, we compose the alignment between graph nodes and source surface tokens (given by the parser) with the bilingual surface word alignment provided by the MT decoder." 
+ }, + { + "id": 97, + "string": "Turning to the translation distribution, we use a maximum entropy model to learn the conditional probability: P(e_i | n_{a_i}, n_{a_{i-1}}, G, e_{i-k+1}^{i-1}) = exp(w · f(e_i, n_{a_i}, n_{a_{i-1}}, G, e_{i-k+1}^{i-1})) / Z (3), where Z is defined as Σ_{e' ∈ GEN(n_{a_i})} exp(w · f(e', n_{a_i}, n_{a_{i-1}}, G, e_{i-k+1}^{i-1})). The GEN(n) function produces the possible translations of the deep lemma associated with node n. We collect all translations observed in the training data and keep the 30 most frequent ones for each lemma." + }, + { + "id": 98, + "string": "Our model thus assigns zero probability to unseen translations." + }, + { + "id": 99, + "string": "Because of the size of our training data, we used online learning." + }, + { + "id": 100, + "string": "We implemented a parallelized (multi-threaded) version of the standard stochastic gradient descent algorithm (SGD)." + }, + { + "id": 101, + "string": "Our learning rate was fixed; using line search, we found the optimal rate to be 0.05." + }, + { + "id": 102, + "string": "Our batch size was set to one; different batch sizes made almost no difference in model performance." + }, + { + "id": 103, + "string": "We used online L1 regularization (Tsuruoka et al., 2009) with weight 1." + }, + { + "id": 104, + "string": "We implemented feature hashing to further improve performance and set the hash length to 22 bits." + }, + { + "id": 105, + "string": "We shuffled our data and split it into five parts which were processed independently and their final weights were averaged." + }, + { + "id": 106, + "string": "Feature Set Our semantic representation enables us to use a very rich set of features, including information commonly used by both translation models and language models." + }, + { + "id": 107, + "string": "We extract a significant amount of information from the graph node n_{a_i} aligned to the generated word: • lemma, • part of speech, • all bits."
+ }, + { + "id": 108, + "string": "We extract the same features from the previous graph node (n_{a_{i-1}}) and from the parent node." + }, + { + "id": 109, + "string": "(If there are multiple parents in the graph, we break ties in a consistent but heuristic manner, picking the leftmost parent node according to its position in the source sentence.) We also gather all the bits of the parent and the parent relation." + }, + { + "id": 110, + "string": "These features may capture agreement phenomena." + }, + { + "id": 111, + "string": "We also look at the shortest path in the semantic graph from the previous node to the current one and we extract features which describe it: • path length, • relations (edges) along the path." + }, + { + "id": 112, + "string": "We use the lemmas of all nodes in the semantic graph as bag-of-word features, as well as all the surface words in the source sentence." + }, + { + "id": 113, + "string": "We also extract lemmas of nodes within a given distance from the current node (i.e." + }, + { + "id": 114, + "string": "graph context), as well as the relation that led to these nodes." + }, + { + "id": 115, + "string": "Together, these features ground the current node in its semantic context." + }, + { + "id": 116, + "string": "An additional set of features handles the fact that source nodes may generate multiple target words, and the distribution over subsequent words should be different." + }, + { + "id": 117, + "string": "We have a feature indicating the number of words generated from the current node, both in isolation, conjoined with the lemma, and conjoined with the part of speech." + }, + { + "id": 118, + "string": "We also have a feature for each word previously generated by this same node, again in isolation, in conjunction with the lemma, and in conjunction with the part of speech." + }, + { + "id": 119, + "string": "This helps prevent the model from generating multiple copies of the same target word given a source node."
+ }, + { + "id": 120, + "string": "On the target side, we use several previous tokens as features." + }, + { + "id": 121, + "string": "These may act as discriminative language model features." + }, + { + "id": 122, + "string": "During MT decoding, our model therefore must maintain state, which could present a computational issue." + }, + { + "id": 123, + "string": "The language model features present similar complexity as conventional MT state, and the features about prior words generated from the same node require greater memory." + }, + { + "id": 124, + "string": "Were this cost to become prohibitive, a simpler form of the prior word features would likely suffice." + }, + { + "id": 125, + "string": "Experiments We tested our model in an n-best re-ranking experiment." + }, + { + "id": 126, + "string": "We began by training a basic phrase-based MT system for English→French on 1 million parallel sentence pairs and produced 1000-best lists for three test sets provided for the Workshop on Statistical Machine Translation (Bojar et al., 2013 ) -WMT 2009 , 2010 and 2013." + }, + { + "id": 127, + "string": "This system had a set of 13 commonly used features: four channel model scores (forward and backward MLE and lexical weighting scores), a 5-gram language model, five lexicalized reordering model scores (corresponding to different ordering outcomes), linear distortion penalty, word count, and phrase count." + }, + { + "id": 128, + "string": "The system was optimized using minimum error rate training (Och, 2003) For reranking, we gathered 1000-best lists for the development and test sets." + }, + { + "id": 129, + "string": "We added six scores from our model to each translation in the n-best lists." + }, + { + "id": 130, + "string": "We included the total log probability, the sum of unnormalized scores, and the rank of the given output." 
+ }, + { + "id": 131, + "string": "In addition, we had count features indicating the number of words that were not in the GEN set of the model, the number of NULLs (effectively deleted nodes), and a count of times a target word appeared in a stopword list." + }, + { + "id": 132, + "string": "In the end, each translation had a total of 19 features: 13 from the original features and 6 from this approach." + }, + { + "id": 133, + "string": "Next, we ran one iteration of the MERT optimizer on these 1000-best lists for all of the features." + }, + { + "id": 134, + "string": "Because this was a reranking experiment rather than decoding, we did not repeatedly gather n-best lists as in decoding." + }, + { + "id": 135, + "string": "The resulting feature weights were used to rescore the test n-best lists and evaluated the using BLEU; Table 1 shows the results." + }, + { + "id": 136, + "string": "We obtained a modest but consistent improvement." + }, + { + "id": 137, + "string": "Once the model is used directly in the decoder, the gains should increase as it will be able to influence decoding." + }, + { + "id": 138, + "string": "Conclusion We have presented an initial attempt at including semantic features in a statistical machine translation system." + }, + { + "id": 139, + "string": "Our approach uses discriminative training and a broad set of features to capture morphological, syntactic, and semantic information in a single model." + }, + { + "id": 140, + "string": "Although our gains are not particularly large yet, we believe that additional ef-fort on feature engineering and decoder integration could lead to more substantial gains." + }, + { + "id": 141, + "string": "Our approach is gated by the accuracy and consistency of the semantic parser." + }, + { + "id": 142, + "string": "We have used a broad coverage parser with accuracy competitive to the current state-of-the-art, but even the stateof-the-art is rather low." 
+ }, + { + "id": 143, + "string": "It would be interesting to explore more robust features spanning multiple analyses, or to combine the outputs of multiple parsers." + }, + { + "id": 144, + "string": "Even syntax-based machine translation systems are dependent on accurate parsers (Quirk and Corston-Oliver, 2006) ; deeper analyses are likely to be more dependent on parse quality." + }, + { + "id": 145, + "string": "In a similar vein, it would be interesting to evaluate the impact of morphological, syntactic, and semantic features separately." + }, + { + "id": 146, + "string": "A careful feature ablation and exploration would help identify promising areas for future research." + }, + { + "id": 147, + "string": "We have only scratched the surface of possible integrations." + }, + { + "id": 148, + "string": "Even this model could be applied to MT systems in multiple ways." + }, + { + "id": 149, + "string": "For instance, rather than applying from source to target, we might evaluate in a noisy channel sense." + }, + { + "id": 150, + "string": "That is, we could predict the source language surface forms given the target language translations." + }, + { + "id": 151, + "string": "Furthermore, this would allow incorporation of a target semantic language model." + }, + { + "id": 152, + "string": "This latter approach is particularly attractive, as it would explicitly model the semantic plausibility of the target." + }, + { + "id": 153, + "string": "Of course, this would require target language semantic analysis: either we would be forced to parse n-best outcomes from some baseline system, or integrate the construction of target language semantics into the MT system." + }, + { + "id": 154, + "string": "We believe that including such models of semantic plausibility holds great promise in preventing \"word salad\" outputs from MT systems: sentences that simply cannot be interpreted by humans." 
+ } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 14 + }, + { + "section": "Related Work", + "n": "2", + "start": 15, + "end": 34 + }, + { + "section": "Semantic Representation", + "n": "3", + "start": 35, + "end": 47 + }, + { + "section": "Graph-to-String Translation", + "n": "4", + "start": 48, + "end": 63 + }, + { + "section": "Alignment of Semantic Graph Nodes", + "n": "4.1", + "start": 64, + "end": 93 + }, + { + "section": "Model", + "n": "4.2", + "start": 94, + "end": 105 + }, + { + "section": "Feature Set", + "n": "4.3", + "start": 106, + "end": 124 + }, + { + "section": "Experiments", + "n": "5", + "start": 125, + "end": 137 + }, + { + "section": "Conclusion", + "n": "6", + "start": 138, + "end": 154 + } + ], + "figures": [ + { + "filename": "../figure/image/984-Figure2-1.png", + "caption": "Figure 2: An example of the translation process illustrating several first steps of translating the sentence from Figure 1 into German (“Ich möchte dir einen Sandwich...”). 
Labels in italics correspond to the shortest undirected paths between the nodes.", + "page": 2, + "bbox": { + "x1": 120.0, + "x2": 478.56, + "y1": 60.0, + "y2": 119.03999999999999 + } + }, + { + "filename": "../figure/image/984-Table1-1.png", + "caption": "Table 1: BLEU scores of n-best reranking in English→French translation.", + "page": 4, + "bbox": { + "x1": 307.68, + "x2": 524.16, + "y1": 222.72, + "y2": 277.92 + } + }, + { + "filename": "../figure/image/984-Figure1-1.png", + "caption": "Figure 1: Logical Form (computed tree) for the sentence: I would like to give you a sandwich taken from the fridge.", + "page": 1, + "bbox": { + "x1": 76.8, + "x2": 521.28, + "y1": 61.44, + "y2": 172.32 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-7" + }, + { + "slides": { + "0": { + "title": "Learning under Domain Shift", + "text": [ + "State-of-the-art domain adaptation approaches", + "evaluate on proprietary datasets or on a single benchmark", + "Only compare against weak baselines", + "Almost none evaluate against approaches from the extensive semi-supervised learning (SSL) literature" + ], + "page_nums": [ + 1, + 2, + 3, + 4, + 5, + 6 + ], + "images": [] + }, + "1": { + "title": "Revisiting Semi Supervised Learning", + "text": [ + "Classics in a Neural World", + "How do classics in SSL compare to recent advances?", + "Can we combine the best of both worlds?", + "How well do these approaches work on out-of-distribution data?" + ], + "page_nums": [ + 7, + 8, + 9, + 10 + ], + "images": [] + }, + "3": { + "title": "Self training", + "text": [ + "1. Train model on labeled data.", + "2. Use confident predictions on unlabeled data as training examples. Repeat.", + "- Er ror a mpli" + ], + "page_nums": [ + 16, + 17, + 18, + 19, + 20 + ], + "images": [] + }, + "4": { + "title": "Self training variants", + "text": [ + "Output probabilities in neural networks are poorly calibrated.", + "Throttling (Abney, 2007), i.e. 
selecting the top n highest confidence unlabeled examples works best.", + "Training until convergence on labeled data and then on unlabeled data works best." + ], + "page_nums": [ + 21, + 22, + 23, + 24, + 25, + 26 + ], + "images": [] + }, + "5": { + "title": "Tri training", + "text": [ + "1. Train three models on bootstrapped samples.", + "2. Use predictions on unlabeled data for third if two agree.", + "Final prediction: majority voting" + ], + "page_nums": [ + 27, + 28, + 29, + 30, + 31, + 32, + 33, + 34, + 35, + 36, + 37, + 38, + 39 + ], + "images": [] + }, + "6": { + "title": "Tri training with disagreement", + "text": [ + "1. Train three models on bootstrapped samples.", + "2. Use predictions on unlabeled data for third if two agree and prediction differs.", + "dependen t mo dels" + ], + "page_nums": [ + 40, + 41, + 42, + 43, + 44, + 45, + 46, + 47, + 48 + ], + "images": [] + }, + "7": { + "title": "Tri training hyper parameters", + "text": [ + "Producing predictions for all unlabeled examples is expensive", + "Sample number of unlabeled examples", + "Not effective for classic approaches, but essential for our method" + ], + "page_nums": [ + 49, + 50, + 51, + 52, + 53, + 54 + ], + "images": [] + }, + "8": { + "title": "Multi task Tri training", + "text": [ + "1. Train one model with 3 objective functions.", + "2. 
Use predictions on unlabeled data for third if two agree.", + "Restrict final layers to use different representations.", + "Train third objective function only on pseudo labeled to bridge domain shift.", + "m2 F orthogonality constraint (Bousmalis et al., 2016)", + "Loss: L() = log Pmi(y h Lorth" + ], + "page_nums": [ + 55, + 56, + 57, + 58, + 59, + 60, + 61, + 62, + 63, + 64, + 65, + 66, + 67, + 68, + 69, + 70, + 71, + 72, + 73, + 74, + 75 + ], + "images": [ + "figure/image/989-Figure1-1.png" + ] + }, + "9": { + "title": "Data and Tasks", + "text": [ + "Sentiment analysis on Amazon reviews dataset (Blitzer et al, 2006)", + "POS tagging on SANCL 2012 dataset (Petrov and McDonald, 2012)" + ], + "page_nums": [ + 76, + 77, + 78, + 79 + ], + "images": [] + }, + "13": { + "title": "Takeaways", + "text": [ + "Classic tri-training works best: outperforms recent state-of-the-art methods for sentiment analysis.", + "We address the drawback of tri-training (space & time complexity) via the proposed MT-Tri model", + "MT-Tri works best on sentiment, but not for POS.", + "Comparing neural methods to classics (strong baselines)", + "Evaluation on multiple tasks domains" + ], + "page_nums": [ + 108, + 109, + 110, + 111 + ], + "images": [ + "figure/image/989-Figure1-1.png" + ] + } + }, + "paper_title": "Strong Baselines for Neural Semi-Supervised Learning under Domain Shift", + "paper_id": "989", + "paper": { + "title": "Strong Baselines for Neural Semi-Supervised Learning under Domain Shift", + "abstract": "Novel neural models have been proposed in recent years for learning under domain shift. Most models, however, only evaluate on a single task, on proprietary datasets, or compare to weak baselines, which makes comparison of models difficult. In this paper, we re-evaluate classic general-purpose bootstrapping approaches in the context of neural networks under domain shifts vs. 
recent neural approaches and propose a novel multi-task tri-training method that reduces the time and space complexity of classic tri-training. Extensive experiments on two benchmarks are negative: while our novel method establishes a new state of the art for sentiment analysis, it does not consistently fare best. More importantly, we arrive at the somewhat surprising conclusion that classic tri-training, with some additions, outperforms the state of the art. We conclude that classic approaches constitute an important and strong baseline.", + "text": [ + { + "id": 0, + "string": "Introduction: Deep neural networks (DNNs) excel at learning from labeled data and have achieved state of the art in a wide array of supervised NLP tasks such as dependency parsing (Dozat and Manning, 2017), named entity recognition (Lample et al., 2016), and semantic role labeling (He et al., 2017)." + }, + { + "id": 1, + "string": "In contrast, learning from unlabeled data, especially under domain shift, remains a challenge." + }, + { + "id": 2, + "string": "This is common in many real-world applications where the distribution of the training and test data differs." + }, + { + "id": 3, + "string": "Many state-of-the-art domain adaptation approaches leverage task-specific characteristics such as sentiment words (Blitzer et al., 2006; Wu and Huang, 2016) or distributional features (Schnabel and Schütze, 2014; Yin et al., 2015), which do not generalize to other tasks." + }, + { + "id": 4, + "string": "Other approaches that are in theory more general only evaluate on proprietary datasets (Kim et al., 2017) or on a single benchmark (Zhou et al., 2016), which carries the risk of overfitting to the task." + }, + { + "id": 5, + "string": "In addition, most models only compare against weak baselines and, strikingly, almost none considers evaluating against approaches from the extensive semi-supervised learning (SSL) literature (Chapelle et al., 2006)."
+ }, + { + "id": 14, + "string": "It establishes a new state of the art on unsupervised domain adaptation for sentiment analysis but it is outperformed by classic tri-training for POS tagging." + }, + { + "id": 15, + "string": "Contributions Our contributions are: a) We propose a novel multi-task tri-training method." + }, + { + "id": 16, + "string": "b) We show that tri-training can serve as a strong and robust semi-supervised learning baseline for the current generation of NLP models." + }, + { + "id": 17, + "string": "c) We perform an extensive evaluation of bootstrapping 1 algorithms compared to state-of-the-art approaches on two benchmark datasets." + }, + { + "id": 18, + "string": "d) We shed light on the task and data characteristics that yield the best performance for each model." + }, + { + "id": 19, + "string": "Neural bootstrapping methods We first introduce three classic bootstrapping methods, self-training, tri-training, and tri-training with disagreement and detail how they can be used with neural networks." + }, + { + "id": 20, + "string": "For in-depth details we refer the reader to (Abney, 2007; Chapelle et al., 2006; Zhu and Goldberg, 2009 )." + }, + { + "id": 21, + "string": "We introduce our novel multitask tri-training method in §2.3." + }, + { + "id": 22, + "string": "Self-training Self-training (Yarowsky, 1995; McClosky et al., 2006b ) is one of the earliest and simplest bootstrapping approaches." + }, + { + "id": 23, + "string": "In essence, it leverages the model's own predictions on unlabeled data to obtain additional information that can be used during training." + }, + { + "id": 24, + "string": "Typically the most confident predictions are taken at face value, as detailed next." + }, + { + "id": 25, + "string": "Self-training trains a model m on a labeled training set L and an unlabeled data set U ." 
+ }, + { + "id": 26, + "string": "At each iteration, the model provides predictions m(x) in the form of a probability distribution over classes for all unlabeled examples x in U ." + }, + { + "id": 27, + "string": "If the probability assigned to the most likely class is higher than a predetermined threshold τ , x is added to the labeled examples with p(x) = arg max m(x) as pseudo-label." + }, + { + "id": 28, + "string": "This instantiation is the most widely used and shown in Algorithm 1." + }, + { + "id": 29, + "string": "Calibration It is well-known that output probabilities in neural networks are poorly calibrated (Guo et al., 2017) ." + }, + { + "id": 30, + "string": "Using a fixed threshold τ is thus Algorithm 1 Self-training (Abney, 2007) if max m(x) > τ then 5: L ← L ∪ {(x, p(x))} 6: until no more predictions are confident not the best choice." + }, + { + "id": 31, + "string": "While the absolute confidence value is inaccurate, we can expect that the relative order of confidences is more robust." + }, + { + "id": 32, + "string": "For this reason, we select the top n unlabeled examples that have been predicted with the highest confidence after every epoch and add them to the labeled data." + }, + { + "id": 33, + "string": "This is one of the many variants for self-training, called throttling (Abney, 2007) ." + }, + { + "id": 34, + "string": "We empirically confirm that this outperforms the classic selection in our experiments." + }, + { + "id": 35, + "string": "Online learning In contrast to many classic algorithms, DNNs are trained online by default." + }, + { + "id": 36, + "string": "We compare training setups and find that training until convergence on labeled data and then training until convergence using self-training performs best." + }, + { + "id": 37, + "string": "Classic self-training has shown mixed success." 
+ }, + { + "id": 38, + "string": "In parsing it proved successful only with small datasets (Reichart and Rappoport, 2007) or when a generative component is used together with a reranker in high-data conditions (McClosky et al., 2006b; Suzuki and Isozaki, 2008) ." + }, + { + "id": 39, + "string": "Some success was achieved with careful task-specific data selection (Petrov and McDonald, 2012) , while others report limited success on a variety of NLP tasks (Plank, 2011; Van Asch and Daelemans, 2016; van der Goot et al., 2017) ." + }, + { + "id": 40, + "string": "Its main downside is that the model is not able to correct its own mistakes and errors are amplified, an effect that is increased under domain shift." + }, + { + "id": 41, + "string": "Tri-training Tri-training (Zhou and Li, 2005 ) is a classic method that reduces the bias of predictions on unlabeled data by utilizing the agreement of three independently trained models." + }, + { + "id": 42, + "string": "Tri-training (cf." + }, + { + "id": 43, + "string": "Algorithm 2) first trains three models m 1 , m 2 , and m 3 on bootstrap samples of the labeled data L. An unlabeled data point is added to the training set of a model m i if the other two models m j and m k agree on its label." + }, + { + "id": 44, + "string": "Training stops when the classifiers do not change anymore." + }, + { + "id": 45, + "string": "Tri-training with disagreement (Søgaard, 2010) Algorithm 2 Tri-training (Zhou and Li, 2005) L i ← ∅ 7: for x ∈ U do 8: if p j (x) = p k (x)(j, k = i) then 9: L i ← L i ∪ {(x, p j (x))} m i ← train_model(L ∪ L i ) 10: until none of m i changes 11: apply majority vote over m i is based on the intuition that a model should only be strengthened in its weak points and that the labeled data should not be skewed by easy data points." 
+ }, + { + "id": 46, + "string": "In order to achieve this, it adds a simple modification to the original algorithm (altering line 8 in Algorithm 2), requiring that for an unlabeled data point on which m j and m k agree, the other model m i disagrees on the prediction." + }, + { + "id": 47, + "string": "Tri-training with disagreement is more data-efficient than tritraining and has achieved competitive results on part-of-speech tagging (Søgaard, 2010) ." + }, + { + "id": 48, + "string": "Sampling unlabeled data Both tri-training and tri-training with disagreement can be very expensive in their original formulation as they require to produce predictions for each of the three models on all unlabeled data samples, which can be in the millions in realistic applications." + }, + { + "id": 49, + "string": "We thus propose to sample a number of unlabeled examples at every epoch." + }, + { + "id": 50, + "string": "For all traditional bootstrapping approaches we sample 10k candidate instances in each epoch." + }, + { + "id": 51, + "string": "For the neural approaches we use a linearly growing candidate sampling scheme proposed by (Saito et al., 2017) , increasing the candidate pool size as the models become more accurate." + }, + { + "id": 52, + "string": "Confidence thresholding Similar to selftraining, we can introduce an additional requirement that pseudo-labeled examples are only added if the probability of the prediction of at least one model is higher than some threshold τ ." + }, + { + "id": 53, + "string": "We did not find this to outperform prediction without threshold for traditional tri-training, but thresholding proved essential for our method ( §2.3)." + }, + { + "id": 54, + "string": "The most important condition for tri-training and tri-training with disagreement is that the models are diverse." + }, + { + "id": 55, + "string": "Typically, bootstrap samples are used to create this diversity (Zhou and Li, 2005; Søgaard, 2010) ." 
+ }, + { + "id": 56, + "string": "However, training separate models on bootstrap samples of a potentially large amount of training data is expensive and takes a lot of time." + }, + { + "id": 57, + "string": "This drawback motivates our approach." + }, + { + "id": 58, + "string": "Multi-task tri-training In order to reduce both the time and space complexity of tri-training, we propose Multi-task Tritraining (MT-Tri)." + }, + { + "id": 59, + "string": "MT-Tri leverages insights from multi-task learning (MTL) (Caruana, 1993) to share knowledge across models and accelerate training." + }, + { + "id": 60, + "string": "Rather than storing and training each model separately, we propose to share the parameters of the models and train them jointly using MTL." + }, + { + "id": 61, + "string": "2 All models thus collaborate on learning a joint representation, which improves convergence." + }, + { + "id": 62, + "string": "The output softmax layers are model-specific and are only updated for the input of the respective model." + }, + { + "id": 63, + "string": "We show the model in Figure 1 (as instantiated for POS tagging)." + }, + { + "id": 64, + "string": "As the models leverage a joint representation, we need to ensure that the features used for prediction in the softmax layers of the different models are as diverse as possible, so that the models can still learn from each other's predictions." + }, + { + "id": 65, + "string": "In contrast, if the parameters in all output softmax layers were the same, the method would degenerate to self-training." + }, + { + "id": 66, + "string": "To guarantee diversity, we introduce an orthogonality constraint (Bousmalis et al., 2016) as an additional loss term, which we define as follows: L orth = W m 1 W m 2 2 F (1) where | · 2 F is the squared Frobenius norm and W m 1 and W m 2 are the softmax output parameters of the two source and pseudo-labeled output layers m 1 and m 2 , respectively." 
+ }, + { + "id": 67, + "string": "The orthogonality constraint encourages the models not to rely on the same features for prediction." + }, + { + "id": 68, + "string": "As enforcing pairwise orthogonality between three matrices is not possible, we only enforce orthogonality between the softmax output layers of m 1 and m 2 , 3 while m 3 is gradually trained to be more target-specific." + }, + { + "id": 69, + "string": "We parameterize L orth by γ=0.01 following ." + }, + { + "id": 70, + "string": "We do not further tune γ." + }, + { + "id": 71, + "string": "More formally, let us illustrate the model by taking the sequence prediction task (Figure 1 ) as illustration." + }, + { + "id": 72, + "string": "Given an utterance with labels y 1 , .., y n , our Multi-task Tri-training loss consists of three task-specific (m 1 , m 2 , m 3 ) tagging loss functions (where h is the uppermost Bi-LSTM encoding): (2) In contrast to classic tri-training, we can train the multi-task model with its three model-specific outputs jointly and without bootstrap sampling on the labeled source domain data until convergence, as the orthogonality constraint enforces different representations between models m 1 and m 2 ." + }, + { + "id": 73, + "string": "From this point, we can leverage the pair-wise agreement of two output layers to add pseudo-labeled examples as training data to the third model." + }, + { + "id": 74, + "string": "We train the third output layer m 3 only on pseudo-labeled target instances in order to make tri-training more robust to a domain shift." + }, + { + "id": 75, + "string": "For the final prediction, majority voting of all three output layers is used, which resulted in the best instantiation, together with confidence thresholding (τ = 0.9, except for highresource POS where τ = 0.8 performed slightly better)." 
+ }, + { + "id": 76, + "string": "We also experimented with using a domainadversarial loss (Ganin et al., 2016) on the jointly learned representation, but found this not to help." + }, + { + "id": 77, + "string": "The full pseudo-code is given in Algorithm 3." + }, + { + "id": 78, + "string": "L(θ) = − i 1,..,n log P m i (y| h) + γL orth Computational complexity The motivation for MT-Tri was to reduce the space and time complexity of tri-training." + }, + { + "id": 79, + "string": "We thus give an estimate of its efficiency gains." + }, + { + "id": 80, + "string": "MT-Tri is~3× more spaceefficient than regular tri-training; tri-training stores one set of parameters for each of the three models, while MT-Tri only stores one set of parameters (we use three output layers, but these make up a comparatively small part of the total parameter budget)." + }, + { + "id": 81, + "string": "In terms of time efficiency, tri-training first 3 We also tried enforcing orthogonality on a hidden layer rather than the output layer, but this did not help." + }, + { + "id": 82, + "string": "L i ← ∅ 5: for x ∈ U do 6: if p j (x) = p k (x)(j, k = i) then 7: L i ← L i ∪ {(x, p j (x))} 8: if i = 3 then m i = train_model(L i ) 9: elsem i ← train_model(L ∪ L i ) 10: until end condition is met 11: apply majority vote over m i requires to train each of the models from scratch." + }, + { + "id": 83, + "string": "The actual tri-training takes about the same time as training from scratch and requires a separate forward pass for each model, effectively training three independent models simultaneously." + }, + { + "id": 84, + "string": "In contrast, MT-Tri only necessitates one forward pass as well as the evaluation of the two additional output layers (which takes a negligible amount of time) and requires about as many epochs as tri-training until convergence (see Table 3 , second column) while adding fewer unlabeled examples per epoch (see Section 3.4)." 
+ }, + { + "id": 85, + "string": "In our experiments, MT-Tri trained about 5-6× faster than traditional tri-training." + }, + { + "id": 86, + "string": "MT-Tri can be seen as a self-ensembling technique, where different variations of a model are used to create a stronger ensemble prediction." + }, + { + "id": 87, + "string": "Recent approaches in this line are snapshot ensembling ) that ensembles models converged to different minima during a training run, asymmetric tri-training (Saito et al., 2017) (ASYM) that leverages agreement on two models as information for the third, and temporal ensembling (Laine and Aila, 2017) , which ensembles predictions of a model at different epochs." + }, + { + "id": 88, + "string": "We tried to compare to temporal ensembling in our experiments, but were not able to obtain consistent results." + }, + { + "id": 89, + "string": "4 We compare to the closest most recent method, asymmetric tritraining (Saito et al., 2017) ." + }, + { + "id": 90, + "string": "It differs from ours in two aspects: a) ASYM leverages only pseudolabels from data points on which m 1 and m 2 agree, and b) it uses only one task (m 3 ) as final predictor." + }, + { + "id": 91, + "string": "In essence, our formulation of MT-Tri is closer to the original tri-training formulation (agreements on two provide pseudo-labels to the third) thereby incorporating more diversity." + }, + { + "id": 92, + "string": "(Petrov and McDonald, 2012) for POS tagging (above) and the Amazon Reviews dataset (Blitzer et al., 2006) for sentiment analysis (below)." + }, + { + "id": 93, + "string": "Experiments In order to ascertain which methods are robust across different domains, we evaluate on two widely used unsupervised domain adaptation datasets for two tasks, a sequence labeling and a classification task, cf." + }, + { + "id": 94, + "string": "Table 1 for data statistics." 
+ }, + { + "id": 95, + "string": "POS tagging For POS tagging we use the SANCL 2012 shared task dataset (Petrov and McDonald, 2012) and compare to the top results in both low and high-data conditions (Schnabel and Schütze, 2014; Yin et al., 2015) ." + }, + { + "id": 96, + "string": "Both are strong baselines, as the FLORS tagger has been developed for this challenging dataset and it is based on contextual distributional features (excluding the word's identity), and hand-crafted suffix and shape features (including some languagespecific morphological features)." + }, + { + "id": 97, + "string": "We want to gauge to what extent we can adopt a nowadays fairly standard (but more lexicalized) general neural tagger." + }, + { + "id": 98, + "string": "Our POS tagging model is a state-of-the-art Bi-LSTM tagger (Plank et al., 2016) with word and 100-dim character embeddings." + }, + { + "id": 99, + "string": "Word embeddings are initialized with the 100-dim Glove embeddings (Pennington et al., 2014) ." + }, + { + "id": 100, + "string": "The BiLSTM has one hidden layer with 100 dimensions." + }, + { + "id": 101, + "string": "The base POS model is trained on WSJ with early stopping on the WSJ development set, using patience 2, Gaussian noise with σ = 0.2 and word dropout with p = 0.25 (Kiperwasser and Goldberg, 2016) ." + }, + { + "id": 102, + "string": "Regarding data, the source domain is the Ontonotes 4.0 release of the Penn treebank Wall Street Journal (WSJ) annotated for 48 fine-grained POS tags." + }, + { + "id": 103, + "string": "This amounts to 30,060 labeled sen-tences." + }, + { + "id": 104, + "string": "We use 100,000 WSJ sentences from 1988 as unlabeled data, following Schnabel and Schütze (2014) ." + }, + { + "id": 105, + "string": "5 As target data, we use the five SANCL domains (answers, emails, newsgroups, reviews, weblogs)." 
+ }, + { + "id": 106, + "string": "We restrict the amount of unlabeled data for each SANCL domain to the first 100k sentences, and do not do any pre-processing." + }, + { + "id": 107, + "string": "We consider the development set of ANSWERS as our only target dev set to set hyperparameters." + }, + { + "id": 108, + "string": "This may result in suboptimal per-domain settings but better resembles an unsupervised adaptation scenario." + }, + { + "id": 109, + "string": "For sentiment analysis, we evaluate on the Amazon reviews dataset (Blitzer et al., 2006) ." + }, + { + "id": 110, + "string": "Reviews with 1 to 3 stars are ranked as negative, while reviews with 4 or 5 stars are ranked as positive." + }, + { + "id": 111, + "string": "The dataset consists of four domains, yielding 12 adaptation scenarios." + }, + { + "id": 112, + "string": "We use the same pre-processing and architecture as used in (Ganin et al., 2016; Saito et al., 2017) : 5,000-dimensional tf-idf weighted unigram and bigram features as input; 2k labeled source samples and 2k unlabeled target samples for training, 200 labeled target samples for validation, and between 3k-6k samples for testing." + }, + { + "id": 113, + "string": "The model is an MLP with one hidden layer with 50 dimensions, sigmoid activations, and a softmax output." + }, + { + "id": 114, + "string": "We compare against the Variational Fair Autoencoder (VFAE) (Louizos et al., 2015) model and domain-adversarial neural networks (DANN) (Ganin et al., 2016) ." + }, + { + "id": 115, + "string": "Besides comparing to the top results published on both datasets, we include the following baselines: a) the task model trained on the source domain; b) self-training (Self); c) tri-training (Tri); d) tri-training with disagreement (Tri-D); and e) asymmetric tri-training (Saito et al., 2017) ." + }, + { + "id": 116, + "string": "Our proposed model is multi-task tri-training (MT-Tri)."
+ }, + { + "id": 117, + "string": "We implement our models in DyNet ." + }, + { + "id": 118, + "string": "Reporting single evaluation scores might result in biased results (Reimers and Gurevych, 2017) ." + }, + { + "id": 119, + "string": "Throughout the paper, we report mean accuracy and standard deviation over five runs for POS tagging and over ten runs for sentiment analysis. We show results for sentiment analysis for all 12 domain adaptation scenarios in Figure 2 ." + }, + { + "id": 120, + "string": "For clarity, we also show the accuracy scores averaged across each target domain as well as a global macro average in Table 2 . Self-training achieves surprisingly good results but is not able to compete with tri-training." + }, + { + "id": 121, + "string": "Tri-training with disagreement is only slightly better than self-training, showing that the disagreement component might not be useful when there is a strong domain shift." + }, + { + "id": 122, + "string": "Tri-training achieves the best average results on two target domains and clearly outperforms the state of the art on average." + }, + { + "id": 123, + "string": "MT-Tri finally outperforms the state of the art on 3/4 domains, and even slightly outperforms traditional tri-training, resulting in the overall best method." + }, + { + "id": 124, + "string": "This improvement is mainly due to the B->E and D->E scenarios, on which tri-training struggles." + }, + { + "id": 125, + "string": "These domain pairs are among those with the highest A-distance (Blitzer et al., 2007) , which highlights that tri-training has difficulty dealing with a strong shift in domain." + }, + { + "id": 126, + "string": "Our method is able to mitigate this deficiency by training one of the three output layers only on pseudo-labeled target domain examples." + }, + { + "id": 127, + "string": "In addition, MT-Tri is more efficient as it adds a smaller number of pseudo-labeled examples than tri-training at every epoch."
+ }, + { + "id": 128, + "string": "For sentiment analysis, tri-training adds around 1800-1950/2000 unlabeled examples at every epoch, while MT-Tri only adds around 100-300 in early epochs." + }, + { + "id": 129, + "string": "This shows that the orthogonality constraint is useful for inducing diversity." + }, + { + "id": 130, + "string": "In addition, adding fewer examples poses a smaller risk of swamping the learned representations with useless signals and is more akin to fine-tuning, the standard method for supervised domain adaptation (Howard and Ruder, 2018) ." + }, + { + "id": 131, + "string": "We observe an asymmetry in the results between some of the domain pairs, e.g." + }, + { + "id": 132, + "string": "B->D and D->B." + }, + { + "id": 133, + "string": "We hypothesize that the asymmetry may be due to properties of the data and that the domains are relatively far apart, e.g., in terms of A-distance." + }, + { + "id": 134, + "string": "Table 4: Accuracy for POS tagging on the dev and test sets of the SANCL domains, models trained on full source data setup." + }, + { + "id": 135, + "string": "Values for methods with * are from (Schnabel and Schütze, 2014) ." + }, + { + "id": 136, + "string": "In fact, asymmetry in these domains is already reflected in the results of Blitzer et al." + }, + { + "id": 137, + "string": "(2007) and is corroborated in the results for asymmetric tri-training (Saito et al., 2017) and our method." + }, + { + "id": 138, + "string": "We note that a weakness of this dataset is its high variance." + }, + { + "id": 139, + "string": "Existing approaches only report the mean, which makes an objective comparison difficult." + }, + { + "id": 140, + "string": "For this reason, we believe it is essential to evaluate proposed approaches also on other tasks." + }, + { + "id": 141, + "string": "Results for tagging in the low-data regime (10% of WSJ) are given in Table 3 ."
+ }, + { + "id": 142, + "string": "Self-training does not work for the sequence prediction task." + }, + { + "id": 143, + "string": "We report only the best instantiation (throttling with n=800)." + }, + { + "id": 144, + "string": "Our results contribute to negative findings regarding self-training (Plank, 2011; Van Asch and Daelemans, 2016)." + }, + { + "id": 145, + "string": "In the low-data setup, tri-training with disagreement works best, reaching an overall average accuracy of 89.70, closely followed by classic tri-training, and significantly outperforming the baseline on 4/5 domains." + }, + { + "id": 146, + "string": "The exception is newsgroups, a difficult domain with high OOV rate where none of the approaches beats the baseline (see §3.4)." + }, + { + "id": 147, + "string": "Our proposed MT-Tri is better than asymmetric tri-training, but falls below classic tri-training." + }, + { + "id": 148, + "string": "It beats the baseline significantly on only 2/5 domains (answers and emails)." + }, + { + "id": 149, + "string": "The FLORS tagger (Yin et al., 2015) fares better." + }, + { + "id": 150, + "string": "Its contextual distributional features are particularly helpful on unknown word-tag combinations (see §3.4), which is a limitation of the lexicalized generic Bi-LSTM tagger." + }, + { + "id": 151, + "string": "For the high-data setup (Table 4 ), results are similar." + }, + { + "id": 152, + "string": "Disagreement, however, is only favorable in the low-data setups; the effect of avoiding easy points no longer holds in the full data setup." + }, + { + "id": 153, + "string": "Classic tri-training is the best method." + }, + { + "id": 154, + "string": "In particular, traditional tri-training is complementary to word embedding initialization, pushing the non-pre-trained baseline to the level of SRC with Glove initialization."
+ }, + { + "id": 155, + "string": "Tri-training pushes performance even further and results in the best model, significantly outperforming the baseline again in 4/5 cases, and reaching FLORS performance on weblogs." + }, + { + "id": 156, + "string": "Multi-task tri-training is often slightly more effective than asymmetric tri-training (Saito et al., 2017) ; however, improvements for both are not robust across domains, sometimes performance even drops." + }, + { + "id": 157, + "string": "The model is likely too simplistic for such a high-data POS setup, and exploring shared-private models might prove more fruitful ." + }, + { + "id": 158, + "string": "On the test sets, tri-training consistently performs the best." + }, + { + "id": 159, + "string": "We analyze POS tagging accuracy with respect to word frequency and unseen word-tag combinations (UWT) on the dev sets." + }, + { + "id": 160, + "string": "known tags, OOVs and unknown word-tag (UWT) rate." + }, + { + "id": 161, + "string": "The SANCL dataset is overall very challenging: OOV rates are high (6.8-11% compared to 2.3% in WSJ), so is the unknown word-tag (UWT) rate (answers and emails contain 2.91% and 3.47% UWT compared to 0.61% on WSJ) and almost all target domains even contain unknown tags (Schnabel and Schütze, 2014) (unknown tags: ADD, GW, NFP, XX), except for weblogs." + }, + { + "id": 162, + "string": "Email is the domain with the highest OOV rate and highest unknown-tag-for-known-words rate." + }, + { + "id": 163, + "string": "We plot accuracy with respect to word frequency on email in Figure 3 , analyzing how the three methods fare in comparison to the baseline on this difficult domain." + }, + { + "id": 164, + "string": "Regarding OOVs, the results in Table 5 (second part) show that classic tri-training outperforms the source model (trained on only source data) on 3/5 domains in terms of OOV accuracy, except on two domains with high OOV rate (newsgroups and weblogs)."
+ }, + { + "id": 165, + "string": "In general, we note that tri-training works best on OOVs and on low-frequency tokens, which is also shown in Figure 3 (leftmost bins)." + }, + { + "id": 166, + "string": "Both other methods typically fall below the baseline in terms of OOV accuracy, but MT-Tri still outperforms Asym in 4/5 cases." + }, + { + "id": 167, + "string": "Table 5 (last part) also shows that no bootstrapping method works well on unknown word-tag combinations." + }, + { + "id": 168, + "string": "UWT tokens are very difficult to predict correctly using an unsupervised approach; the less lexicalized and more context-driven approach taken by FLORS is clearly superior for these cases, resulting in higher UWT accuracies for 4/5 domains." + }, + { + "id": 169, + "string": "Learning under Domain Shift There is a large body of work on domain adaptation." + }, + { + "id": 170, + "string": "Studies on unsupervised domain adaptation include early work on bootstrapping (Steedman et al., 2003; McClosky et al., 2006a) , shared feature representations (Blitzer et al., 2006, 2007) and instance weighting (Jiang and Zhai, 2007) ." + }, + { + "id": 171, + "string": "Recent approaches include adversarial learning (Ganin et al., 2016) and fine-tuning (Sennrich et al., 2016) ." + }, + { + "id": 172, + "string": "There is almost no work on bootstrapping approaches for recent neural NLP, in particular under domain shift." + }, + { + "id": 173, + "string": "Tri-training is less studied, and only recently re-emerged in the vision community (Saito et al., 2017) , although it is not compared to classic tri-training." + }, + { + "id": 174, + "string": "Neural network ensembling Related work on self-ensembling approaches includes snapshot ensembling or temporal ensembling (Laine and Aila, 2017) ."
+ }, + { + "id": 175, + "string": "In general, the line between \"explicit\" and \"implicit\" ensembling, like dropout (Srivastava et al., 2014) or temporal ensembling (Saito et al., 2017) , is more fuzzy." + }, + { + "id": 176, + "string": "As we noted earlier our multi-task learning setup can be seen as a form of self-ensembling." + }, + { + "id": 177, + "string": "Multi-task learning in NLP Neural networks are particularly well-suited for MTL, allowing for parameter sharing (Caruana, 1993) ." + }, + { + "id": 178, + "string": "Recent NLP conferences witnessed a \"tsunami\" of deep learning papers (Manning, 2015) , followed by what we call a multi-task learning \"wave\": MTL has been successfully applied to a wide range of NLP tasks (Cohn and Specia, 2013; Cheng et al., 2015; Luong et al., 2015; Plank et al., 2016; Fang and Cohn, 2016; Ruder et al., 2017; Augenstein et al., 2018) ." + }, + { + "id": 179, + "string": "Related to it is the pioneering work on adversarial learning (DANN) (Ganin et al., 2016) ." + }, + { + "id": 180, + "string": "For sentiment analysis we found tri-training and our MT-Tri model to outperform DANN." + }, + { + "id": 181, + "string": "Our MT-Tri model lends itself well to shared-private models such as those proposed recently (Kim et al., 2017) , which extend upon (Ganin et al., 2016) by having separate source and target-specific encoders." + }, + { + "id": 182, + "string": "We re-evaluate a range of traditional general-purpose bootstrapping algorithms in the context of neural network approaches to semi-supervised learning under domain shift." + }, + { + "id": 183, + "string": "For the two examined NLP tasks, classic tri-training works the best and even outperforms a recent state-of-the-art method." + }, + { + "id": 184, + "string": "The drawback of tri-training is its time and space complexity."
+ }, + { + "id": 185, + "string": "We therefore propose a more efficient multi-task tri-training model, which outperforms both traditional tri-training and recent alternatives in the case of sentiment analysis." + }, + { + "id": 186, + "string": "For POS tagging, classic tri-training is superior, performing especially well on OOVs and low-frequency tokens, which suggests it is less affected by error propagation." + }, + { + "id": 187, + "string": "Overall we emphasize the importance of comparing neural approaches to strong baselines and reporting results across several runs." + } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 18 + }, + { + "section": "Neural bootstrapping methods", + "n": "2", + "start": 19, + "end": 21 + }, + { + "section": "Self-training", + "n": "2.1", + "start": 22, + "end": 40 + }, + { + "section": "Tri-training", + "n": "2.2", + "start": 41, + "end": 57 + }, + { + "section": "Multi-task tri-training", + "n": "2.3", + "start": 58, + "end": 91 + }, + { + "section": "Experiments", + "n": "3", + "start": 92, + "end": 94 + }, + { + "section": "POS tagging", + "n": "3.1", + "start": 95, + "end": 108 + }, + { + "section": "Sentiment analysis", + "n": "3.2", + "start": 109, + "end": 114 + }, + { + "section": "Baselines", + "n": "3.3", + "start": 115, + "end": 168 + }, + { + "section": "Related work", + "n": "4", + "start": 169, + "end": 181 + }, + { + "section": "Conclusions", + "n": "5", + "start": 182, + "end": 187 + } + ], + "figures": [ + { + "filename": "../figure/image/989-Table2-1.png", + "caption": "Table 2: Average accuracy scores for each SA target domain. *: result from Saito et al. (2017).", + "page": 5, + "bbox": { + "x1": 72.0, + "x2": 288.0, + "y1": 501.59999999999997, + "y2": 633.12 + } + }, + { + "filename": "../figure/image/989-Figure2-1.png", + "caption": "Figure 2: Average results for unsupervised domain adaptation on the Amazon dataset.
Domains: B (Book), D (DVD), E (Electronics), K (Kitchen). Results for VFAE, DANN, and Asym are from Saito et al. (2017).", + "page": 5, + "bbox": { + "x1": 72.0, + "x2": 543.36, + "y1": 61.44, + "y2": 280.32 + } + }, + { + "filename": "../figure/image/989-Table4-1.png", + "caption": "Table 4: Accuracy for POS tagging on the dev and test sets of the SANCL domains, models trained on full source data setup. Values for methods with * are from (Schnabel and Schütze, 2014).", + "page": 6, + "bbox": { + "x1": 72.0, + "x2": 526.0799999999999, + "y1": 235.67999999999998, + "y2": 538.0799999999999 + } + }, + { + "filename": "../figure/image/989-Table3-1.png", + "caption": "Table 3: Accuracy scores on dev set of target domain for POS tagging for 10% labeled data. Avg: average over the 5 SANCL domains. Hyperparameter ep (epochs) is tuned on Answers dev. µpseudo: average amount of added pseudo-labeled data. FLORS: results for Batch (u:big) from (Yin et al., 2015) (see §3).", + "page": 6, + "bbox": { + "x1": 72.0, + "x2": 526.0799999999999, + "y1": 65.75999999999999, + "y2": 174.23999999999998 + } + }, + { + "filename": "../figure/image/989-Figure1-1.png", + "caption": "Figure 1: Multi-task tri-training (MT-Tri).", + "page": 2, + "bbox": { + "x1": 317.76, + "x2": 515.04, + "y1": 61.44, + "y2": 178.07999999999998 + } + }, + { + "filename": "../figure/image/989-Figure3-1.png", + "caption": "Figure 3: POS accuracy per binned log frequency.", + "page": 7, + "bbox": { + "x1": 306.71999999999997, + "x2": 526.0799999999999, + "y1": 61.44, + "y2": 163.2 + } + }, + { + "filename": "../figure/image/989-Table5-1.png", + "caption": "Table 5: Accuracy scores on dev sets for OOV and unknown word-tag (UWT) tokens.", + "page": 7, + "bbox": { + "x1": 72.0, + "x2": 291.36, + "y1": 62.879999999999995, + "y2": 262.08 + } + }, + { + "filename": "../figure/image/989-Table1-1.png", + "caption": "Table 1: Number of labeled and unlabeled sentences for each domain in the SANCL 2012 dataset (Petrov and 
McDonald, 2012) for POS tagging (above) and the Amazon Reviews dataset (Blitzer et al., 2006) for sentiment analysis (below).", + "page": 4, + "bbox": { + "x1": 89.75999999999999, + "x2": 270.24, + "y1": 62.4, + "y2": 190.07999999999998 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-8" + }, + { + "slides": { + "0": { + "title": "Contributions", + "text": [ + "Question Answering (Q&A) and Spoken Language Understanding (SLU) under the same parsing framework:", + "Public Q&A corpora (English)", + "Proprietary Alexa SLU corpus (English)", + "Transfer learning to learn parsers on low-resource domains, for both Q&A and SLU:" + ], + "page_nums": [ + 2 + ], + "images": [] + }, + "3": { + "title": "Parser", + "text": [ + "Which cinemas screen Star Wars tonight?", + "Time Title Title Time", + "tonight Title Title Time", + "Transition-based parser of Cheng et al. (2017) + character-level embeddings and copy mechanism:", + "t0 tn x0 xn nt0 ntn" + ], + "page_nums": [ + 12, + 13, + 14, + 15, + 16, + 17, + 18, + 19, + 20, + 21, + 22, + 23, + 24 + ], + "images": [] + }, + "4": { + "title": "Results", + "text": [ + "DATA TASK DOMAIN ACCURACY", + "Overnight Q&A publications calendar housing recipes restaurants basketball blocks social", + "Alexa SLU search recipes cinema bookings closet", + "DATA TASK DOMAIN BASELINE Copy", + "DATA TASK DOMAIN BASELINE Attention" + ], + "page_nums": [ + 25, + 26, + 27, + 29 + ], + "images": [] + }, + "8": { + "title": "Transfer Learning Multi task Learning", + "text": [ + "HR DOMAIN LR DOMAIN", + "TER COPY TER COPY", + "t0 tn x0 xn t0 tn x0 xn" + ], + "page_nums": [ + 36 + ], + "images": [] + }, + "10": { + "title": "Multi task Learning for Alexa SLU", + "text": [ + "t0 tn x0 xn nt0 ntn" + ], + "page_nums": [ + 38 + ], + "images": [] + }, + "13": { + "title": "Takeaways", + "text": [ + "Executable semantic parsing unifies Q&A and SLU;", + "One model for all is fine but some choices must be revisited (e.g.
attention, copy);", + "Transfer learning for low-resource domains on Q&A and SLU." + ], + "page_nums": [ + 41 + ], + "images": [] + } + }, + "paper_title": "Practical Semantic Parsing for Spoken Language Understanding", + "paper_id": "990", + "paper": { + "title": "Practical Semantic Parsing for Spoken Language Understanding", + "abstract": "Executable semantic parsing is the task of converting natural language utterances into logical forms that can be directly used as queries to get a response. We build a transfer learning framework for executable semantic parsing. We show that the framework is effective for Question Answering (Q&A) as well as for Spoken Language Understanding (SLU). We further investigate the case where a parser on a new domain can be learned by exploiting data on other domains, either via multitask learning between the target domain and an auxiliary domain or via pre-training on the auxiliary domain and fine-tuning on the target domain. With either flavor of transfer learning, we are able to improve performance on most domains; we experiment with public data sets such as Overnight and NLmaps as well as with commercial SLU data. The experiments carried out on data sets that are different in nature show how executable semantic parsing can unify different areas of NLP such as Q&A and SLU.", + "text": [ + { + "id": 0, + "string": "Due to recent advances in speech recognition and language understanding, conversational interfaces such as Alexa, Cortana, and Siri are becoming more common." + }, + { + "id": 1, + "string": "They currently have two large use cases." + }, + { + "id": 2, + "string": "First, a user can use them to complete a specific task, such as playing music." + }, + { + "id": 3, + "string": "Second, a user can use them to ask questions where the questions are answered by querying a knowledge graph or database back-end."
+ }, + { + "id": 4, + "string": "Typically, under a common interface, there exist two disparate systems that can handle each use case." + }, + { + "id": 5, + "string": "The system underlying the first use case is known as a spoken language understanding (SLU) system." + }, + { + "id": 6, + "string": "Typical commercial SLU systems rely on predicting a coarse user intent and then tagging each word in the utterance to" + }, + { + "id": 7, + "string": "the intent's slots. (* Work conducted while interning at Amazon Alexa AI.)" + }, + { + "id": 8, + "string": "This architecture is popular due to its simplicity and robustness." + }, + { + "id": 9, + "string": "On the other hand, Q&A, which needs systems to produce more complex structures such as trees and graphs, requires a more comprehensive understanding of human language." + }, + { + "id": 10, + "string": "One possible system that can handle such a task is an executable semantic parser (Liang, 2013; Kate et al., 2005) ." + }, + { + "id": 11, + "string": "Given a user utterance, an executable semantic parser can generate tree or graph structures that represent logical forms that can be used to query a knowledge base or database." + }, + { + "id": 12, + "string": "In this work, we propose executable semantic parsing as a common framework for both use cases by framing SLU as executable semantic parsing that unifies the two use cases." + }, + { + "id": 13, + "string": "For Q&A, the input utterances are parsed into logical forms that represent the machine-readable representation of the question, while in SLU, they represent the machine-readable representation of the user intent and slots." + }, + { + "id": 14, + "string": "One added advantage of using parsing for SLU is the ability to handle more complex linguistic phenomena such as coordinated intents that traditional SLU systems struggle to handle (Agarwal et al., 2018) ."
+ }, + { + "id": 15, + "string": "Our parsing model is an extension of the neural transition-based parser of Cheng et al." + }, + { + "id": 16, + "string": "(2017) ." + }, + { + "id": 17, + "string": "A major issue with semantic parsing is the availability of annotated logical forms to train the parsers, which are expensive to obtain." + }, + { + "id": 18, + "string": "A solution is to rely more on distant supervision, such as using question-answer pairs (Clarke et al., 2010) ." + }, + { + "id": 19, + "string": "Alternatively, it is possible to exploit annotated logical forms from a different domain or related data set." + }, + { + "id": 20, + "string": "In this paper, we focus on the scenario where data sets for several domains exist but only very little data for a new one is available and apply transfer learning techniques to it." + }, + { + "id": 21, + "string": "A common way to implement transfer learning is by first pre-training the model on a domain on which a large data set is available and subsequently fine-tuning the model on the target domain (Thrun, 1996; Zoph et al., 2016) ." + }, + { + "id": 22, + "string": "We also consider a multi-task learning (MTL) approach." + }, + { + "id": 23, + "string": "MTL refers to machine learning models that improve generalization by training on more than one task." + }, + { + "id": 24, + "string": "MTL has been used for a number of NLP problems such as tagging (Collobert and Weston, 2008) , syntactic parsing (Luong et al., 2015) , machine translation (Luong et al., 2015) and semantic parsing (Fan et al., 2017) ." + }, + { + "id": 25, + "string": "See Caruana (1997) and Ruder (2017) for an overview of MTL." + }, + { + "id": 26, + "string": "A good Q&A data set for our domain adaptation scenario is the Overnight data set (Wang et al., 2015b) , which contains sentences annotated with Lambda Dependency-Based Compositional Semantics (Lambda DCS; Liang 2013) for eight different domains."
+ }, + { + "id": 27, + "string": "However, it includes only a few hundred sentences for each domain, and its vocabularies are relatively small." + }, + { + "id": 28, + "string": "We also experiment with a larger semantic parsing data set (NLmaps; Lawrence and Riezler 2016) ." + }, + { + "id": 29, + "string": "For SLU, we work with data from a commercial conversational assistant that has a much larger vocabulary size." + }, + { + "id": 30, + "string": "One common issue in parsing is how to deal with rare or unknown words, which is usually addressed by either delexicalization or by implementing a copy mechanism (Gulcehre et al., 2016) ." + }, + { + "id": 31, + "string": "We show clear differences in the outcome of these and other techniques when applied to data sets of varying sizes." + }, + { + "id": 32, + "string": "Our contributions are as follows: • We propose a common semantic parsing framework for Q&A and SLU and demonstrate its broad applicability and effectiveness." + }, + { + "id": 33, + "string": "• We report parsing baselines for Overnight, for which exact match parsing scores have not yet been published." + }, + { + "id": 34, + "string": "• We show that SLU greatly benefits from a copy mechanism, which is also beneficial for NLmaps but not Overnight." + }, + { + "id": 35, + "string": "• We investigate the use of transfer learning and show that it can facilitate parsing on low-resource domains." + }, + { + "id": 36, + "string": "Transition-based parsers are widely used for dependency parsing (Nivre, 2008; Dyer et al., 2015) and they have also been applied to semantic parsing tasks (Wang et al., 2015a; Cheng et al., 2017) ." + }, + { + "id": 37, + "string": "In syntactic parsing, a transition system is usually defined as a quadruple: T = {S, A, I, E}, where S is a set of states, A is a set of actions, I is the initial state, and E is a set of end states."
+ }, + { + "id": 38, + "string": "A state is composed of a buffer, a stack, and a set of arcs: S = (β, σ, A)." + }, + { + "id": 39, + "string": "In the initial state, the buffer contains all the words in the input sentence while the stack and the set of subtrees are empty: S 0 = (w 0 | ." + }, + { + "id": 40, + "string": "." + }, + { + "id": 41, + "string": "." + }, + { + "id": 42, + "string": "|w N , ∅, ∅)." + }, + { + "id": 43, + "string": "Terminal states have empty stack and buffer: S T = (∅, ∅, A)." + }, + { + "id": 44, + "string": "During parsing, the stack stores words that have been removed from the buffer but have not been fully processed yet." + }, + { + "id": 45, + "string": "Actions can be performed to advance the transition system's state: they can either consume words in the buffer and move them to the stack (SHIFT) or combine words in the stack to create new arcs (LEFT-ARC and RIGHT-ARC, depending on the direction of the arc)." + }, + { + "id": 46, + "string": "Words in the buffer are processed left-to-right until an end state is reached, at which point the set of arcs will contain the full output tree." + }, + { + "id": 47, + "string": "The parser needs to be able to predict the next action based on its current state." + }, + { + "id": 48, + "string": "Traditionally, supervised techniques are used to learn such classifiers, using a parallel corpus of sentences and their output trees." + }, + { + "id": 49, + "string": "Trees can be converted to states and actions using an oracle system." + }, + { + "id": 50, + "string": "For a detailed explanation of transition-based parsing, see Nivre (2003) and Nivre (2008) ." + }, + { + "id": 51, + "string": "In this paper, we consider the neural executable semantic parser of Cheng et al." + }, + { + "id": 52, + "string": "(2017) , which follows the transition-based parsing paradigm."
+ }, + { + "id": 53, + "string": "Its transition system differs from traditional systems as the words are not consumed from the buffer because in executable semantic parsing, there are no strict alignments between words in the input and nodes in the tree." + }, + { + "id": 54, + "string": "The neural architecture encodes the buffer using a Bi-LSTM (Graves, 2012) and the stack as a Stack-LSTM (Dyer et al., 2015) , a recurrent network that allows for push and pop operations." + }, + { + "id": 55, + "string": "Additionally, the previous actions are also represented with an LSTM." + }, + { + "id": 56, + "string": "The output of these networks is fed into feed-forward layers, and softmax layers are used to predict the next action given the current state." + }, + { + "id": 57, + "string": "The possible actions are REDUCE, which pops an item from the stack, TER, which creates a terminal node (i.e., a leaf in the tree), and NT, which creates a non-terminal node." + }, + { + "id": 58, + "string": "When the next action is either TER or NT, additional softmax layers predict the output token to be generated." + }, + { + "id": 59, + "string": "Since the buffer does not change while parsing, an attention mechanism is used to focus on specific words given the current state of the parser." + }, + { + "id": 60, + "string": "We extend the model of Cheng et al." + }, + { + "id": 61, + "string": "(2017) by adding character-level embeddings and a copy mechanism." + }, + { + "id": 62, + "string": "When using only word embeddings, out-of-vocabulary words are usually mapped to one embedding vector and do not exploit morphological features." + }, + { + "id": 63, + "string": "Our model encodes words by feeding each character embedding into an LSTM and concatenating its output to the word embedding: x = {e w ; h M c }, (1) where e w is the word embedding of the input word w and h M c is the last hidden state of the character-level LSTM over the characters of the input word w = c 0 , ."
+ }, + { + "id": 64, + "string": "." + }, + { + "id": 65, + "string": "." + }, + { + "id": 66, + "string": ", c M ." + }, + { + "id": 67, + "string": "Rare words are usually handled by either delexicalizing the output or by using a copy mechanism." + }, + { + "id": 68, + "string": "Delexicalization involves substituting named entities with a specific token in an effort to reduce the number of rare and unknown words." + }, + { + "id": 69, + "string": "Copy relies on the fact that when rare or unknown words must be generated, they usually appear in the same form in the input sentence and they can therefore be copied from the input itself." + }, + { + "id": 70, + "string": "Our copy implementation follows the strategy of Fan et al." + }, + { + "id": 71, + "string": "(2017) , where the output of the generation layer is concatenated to the scores of an attention mechanism (Bahdanau et al., 2015) , which expresses the relevance of each input word with respect to the current state of the parser." + }, + { + "id": 72, + "string": "In the experiments that follow, we compare delexicalization with the copy mechanism on different setups." + }, + { + "id": 73, + "string": "A depiction of the full model is shown in Figure 1 ." + }, + { + "id": 74, + "string": "We consider the scenario where large training corpora are available for some domains and we want to bootstrap a parser for a new domain where little training data is available." + }, + { + "id": 75, + "string": "We investigate the use of two transfer learning approaches: pre-training and multi-task learning." + }, + { + "id": 76, + "string": "Figure 1: The full neural transition-based parsing model." + }, + { + "id": 77, + "string": "x 0 , x 1 , ." + }, + { + "id": 78, + "string": "." + }, + { + "id": 79, + "string": "." + }, + { + "id": 80, + "string": ", x n HISTORY ." + }, + { + "id": 81, + "string": "." + }, + { + "id": 82, + "string": "." + }, + { + "id": 83, + "string": "BUFFER ."
+ }, + { + "id": 84, + "string": "." + }, + { + "id": 85, + "string": "." + }, + { + "id": 86, + "string": "STACK ." + }, + { + "id": 87, + "string": "." + }, + { + "id": 88, + "string": "." + }, + { + "id": 89, + "string": "ATTENTION FEED-FORWARD LAYERS TER RED NT t 0 ." + }, + { + "id": 90, + "string": "." + }, + { + "id": 91, + "string": "." + }, + { + "id": 92, + "string": "t n x 0 ." + }, + { + "id": 93, + "string": "." + }, + { + "id": 94, + "string": "." + }, + { + "id": 95, + "string": "x n TER COPY nt 0 ." + }, + { + "id": 96, + "string": "." + }, + { + "id": 97, + "string": "." + }, + { + "id": 98, + "string": "nt n NT Representations of stack, buffer, and previous actions are used to predict the next action." + }, + { + "id": 99, + "string": "When the TER or NT actions are chosen, further layers are used to predict (or copy) the token." + }, + { + "id": 100, + "string": "For MTL, the different tasks share most of the architecture and only the output layers, which are responsible for predicting the output tokens, are separate for each task." + }, + { + "id": 101, + "string": "When multi-tasking across domains of the same data set, we expect that most layers of the neural parser, such as the ones responsible for learning the word embeddings and the stack and buffer representation, will learn similar features and can, therefore, be shared." + }, + { + "id": 102, + "string": "We implement two different MTL setups: a) when separate heads are used for both the TER classifier and the NT classifier, which is expected to be effective when transferring across tasks that do not share output vocabulary; and b) when a separate head is used only for the TER classifier, more appropriate when the non-terminals space is mostly shared." + }, + { + "id": 103, + "string": "Data In order to investigate the flexibility of the executable semantic parsing framework, we evaluate models on Q&A data sets as well as on commercial SLU data sets." 
+ }, + { + "id": 104, + "string": "For Q&A, we consider Overnight (Wang et al., 2015b) and NLmaps (Lawrence and Riezler, 2016) ." + }, + { + "id": 105, + "string": "Overnight It contains sentences annotated with Lambda DCS (Liang, 2013) ." + }, + { + "id": 106, + "string": "The sentences are divided into eight domains: calendar, blocks, housing, restaurants, publications, recipes, socialnetwork, and basketball." + }, + { + "id": 107, + "string": "As shown in Table 1 , the number of sentences and the terminal vocabularies are small, which makes the learning more challenging, preventing us from using data-hungry approaches such as sequence-to-sequence models." + }, + { + "id": 108, + "string": "The current state-of-the-art results, to the best of our knowledge, are reported by Su and Yan (2017) ." + }, + { + "id": 109, + "string": "Previous work on this data set use denotation accuracy as a metric." + }, + { + "id": 110, + "string": "In this paper, we use logical form exact match accuracy across all data sets." + }, + { + "id": 111, + "string": "NLmaps It contains more than two thousand questions about geographical facts, retrieved from OpenStreetMap (Haklay and Weber, 2008) ." + }, + { + "id": 112, + "string": "Unfortunately, this data set is not divided into subdomains." + }, + { + "id": 113, + "string": "While NLmaps has comparable sizes with some of the Overnight domains, its vocabularies are much larger: containing 160 terminals, 24 non-terminals and 280 word types (Table 1) ." + }, + { + "id": 114, + "string": "The current state-of-the-art results on this data set are reported by Duong et al." + }, + { + "id": 115, + "string": "(2017) ." + }, + { + "id": 116, + "string": "SLU We select five domains from our SLU data set: search, recipes, cinema, bookings, and closet." 
+ }, + { + "id": 117, + "string": "In order to investigate the use case of a new lowresource domain exploiting a higher-resource domain, we selected a mix of high-resource and lowresource domains." + }, + { + "id": 118, + "string": "Details are shown in Table 1 ." + }, + { + "id": 119, + "string": "We extracted shallow trees from data originally collected for intent/slot tagging: intents become the root of the tree, slot types are attached to the roots as their children and slot values are in turn attached to their slot types as their children." + }, + { + "id": 120, + "string": "An example is shown in Figure 2 ." + }, + { + "id": 121, + "string": "A similar approach to transform intent/slot data into tree structures has been recently employed by Gupta et al." + }, + { + "id": 122, + "string": "(2018b) ." + }, + { + "id": 123, + "string": "Experiments We first run experiments on single-task semantic parsing to observe the differences among the three different data sources discussed in Section 4." + }, + { + "id": 124, + "string": "Specifically, we explore the impact of an attention mechanism on the performance as well as the comparison between delexicalization and a copy mechanism for dealing with data sparsity." + }, + { + "id": 125, + "string": "The metric used to evaluate parsers is the exact match accuracy, defined as the ratio of sentences cor- rectly parsed." + }, + { + "id": 126, + "string": "Attention Because the buffer is not consumed as in traditional transition-based parsers, Cheng et al." + }, + { + "id": 127, + "string": "(2017) use an additive attention mechanism (Bahdanau et al., 2015) to focus on the more relevant words in the buffer for the current state of the stack." + }, + { + "id": 128, + "string": "In order to find the impact of attention on the different data sets, we run ablation experiments, as shown in Table 2 (left side)." 
+ }, + { + "id": 129, + "string": "We found that attention between stack and buffer is not always beneficial: it appears to be helpful for larger data sets while harmful for smaller data sets." + }, + { + "id": 130, + "string": "Attention is, however, useful for NLmaps, regardless of the data size." + }, + { + "id": 131, + "string": "Even though NLmaps data is similarly sized to some of the Overnight domains, its terminal space is considerably larger, perhaps making attention more important even with a smaller data set." + }, + { + "id": 132, + "string": "On the other hand, the high-resource SLU's cinema domain is not able to benefit from the attention mechanism." + }, + { + "id": 133, + "string": "We note that the performance of this model on NLmaps falls behind the state of the art (Duong et al., 2017) ." + }, + { + "id": 134, + "string": "The hyper-parameters of our model were however not tuned on this data set." + }, + { + "id": 135, + "string": "Handling Sparsity A popular way to deal with the data sparsity problem is to delexicalize the data, that is replacing rare and unknown words with coarse categories." + }, + { + "id": 136, + "string": "In our experiment, we use a named entity recognition system 2 to replace names with their named entity types." + }, + { + "id": 137, + "string": "Alternatively, it is possible to use a copy mechanism to enable the decoder to copy rare words from the input rather than generating them from its limited vocabulary." + }, + { + "id": 138, + "string": "We compare the two solutions across all data sets on the right side of Table 2 ." + }, + { + "id": 139, + "string": "Regardless of the data set, the copy mechanism generally outperforms delexicalization." + }, + { + "id": 140, + "string": "We also note that delexi-2 https://spacy.io calization has unexpected catastrophic effects on exact match accuracy for calendar and housing." 
+ }, + { + "id": 141, + "string": "For Overnight, however, the system with copy mechanism is outperformed by the system without attention." + }, + { + "id": 142, + "string": "This is unsurprising as the copy mechanism is based on attention, which is not effective on Overnight (Section 5.1)." + }, + { + "id": 143, + "string": "The inefficacy of copy mechanisms on the Overnight data set was also discussed in Jia and Liang (2016) , where answer accuracy, rather than parsing accuracy, was used as a metric." + }, + { + "id": 144, + "string": "As such, the results are not directly comparable." + }, + { + "id": 145, + "string": "For NLmaps and all SLU domains, using a copy mechanism results in an average accuracy improvement of 16% over the baseline." + }, + { + "id": 146, + "string": "It is worth noting that the copy mechanism is unsurprisingly effective for SLU data due to the nature of the data set: the SLU trees were obtained from data collected for slot tagging, and as such, each leaf in the tree has to be copied from the input sentence." + }, + { + "id": 147, + "string": "Even though Overnight often yields different conclusions, most likely due to its small vocabulary size, the similar behaviors observed for NLmaps and SLU is reassuring, confirming that it is possible to unify Q&A and SLU under the same umbrella framework of executable semantic parsing." + }, + { + "id": 148, + "string": "In order to compare the NLmaps results with Lawrence and Riezler (2016) , we also compute F1 scores for the data set." + }, + { + "id": 149, + "string": "Our baseline outperforms previous results, achieving a score of 0.846." + }, + { + "id": 150, + "string": "Our best F1 results are also obtained when adding the copy mechanism, achieving a score of 0.874." + }, + { + "id": 151, + "string": "Transfer Learning The first set of experiments involve transfer learning across Overnight domains." 
+ }, + { + "id": 152, + "string": "For this data set, the non-terminal vocabulary is mostly shared across domains." + }, + { + "id": 153, + "string": "As such, we use the architecture where only the TER output classifier is not shared." + }, + { + "id": 154, + "string": "Selecting the best auxiliary domain by maximizing the overlap with the main domain was not successful, and we instead performed an exhaustive search over the domain pairs on the development set." + }, + { + "id": 155, + "string": "In the interest of space, for each main domain, we report results for the best auxiliary domain (Table 3)." + }, + { + "id": 156, + "string": "We note that MTL and pre-training provide similar results and provide an average improvement of 4%." + }, + { + "id": 157, + "string": "As expected, we observe more substantial improvements for smaller domains." + }, + { + "id": 158, + "string": "We performed the same set of experiments on the SLU domains, as shown in Table 4 ." + }, + { + "id": 159, + "string": "In this case, the non-terminal vocabulary can vary significantly across domains." + }, + { + "id": 160, + "string": "We therefore choose to use the MTL architecture where both TER and NT output classifiers are not shared." + }, + { + "id": 161, + "string": "Also for SLU, there is no clear winner between pre-training and MTL." + }, + { + "id": 162, + "string": "Nevertheless, they always outperform the baseline, demonstrating the importance of transfer learning, especially for smaller domains." + }, + { + "id": 163, + "string": "While the focus of this transfer learning framework is in exploiting high-resource domains annotated in the same way as a new low-resource domain, we also report a preliminary experiment on transfer learning across tasks." + }, + { + "id": 164, + "string": "We selected the recipes domain, which exists in both Overnight and SLU." 
+ }, + { + "id": 165, + "string": "While the SLU data set is significantly different from Overnight, deriving from a corpus annotated with intent/slot labels, as discussed in Section 4, we found promising results using pre-training, increasing the accuracy from 58.3 to 61.1." + }, + { + "id": 166, + "string": "A full investigation of transfer learning across domains belonging to heterogeneous data sets is left for future work." + }, + { + "id": 167, + "string": "The experiments on transfer learning demon- Related work A large collection of logical forms of different nature exist in the semantic parsing literature: semantic role schemes (Palmer et al., 2005; Meyers et al., 2004; Baker et al., 1998) , syntax/semantics interfaces (Steedman, 1996) , executable logical forms (Liang, 2013; Kate et al., 2005) , and general purpose meaning representations (Banarescu et al., 2013; Abend and Rappoport, 2013 Cheng et al." + }, + { + "id": 168, + "string": "(2017) , which is inspired by Recurrent Neural Network Grammars (Dyer et al., 2016) ." + }, + { + "id": 169, + "string": "We extend the model with ideas inspired by Gulcehre et al." + }, + { + "id": 170, + "string": "(2016) and Luong and Manning (2016) ." + }, + { + "id": 171, + "string": "We build our multi-task learning architecture upon the rich literature on the topic." + }, + { + "id": 172, + "string": "MTL was first introduce in Caruana (1997) ." + }, + { + "id": 173, + "string": "It has been since used for a number of NLP problems such as tagging (Collobert and Weston, 2008) , syntactic parsing (Luong et al., 2015) , and machine translation Luong et al., 2015) ." + }, + { + "id": 174, + "string": "The closest to our work is Fan et al." + }, + { + "id": 175, + "string": "(2017) , where MTL architectures are built on top of an attentive sequenceto-sequence model (Bahdanau et al., 2015) ." 
+ }, + { + "id": 176, + "string": "We instead focus on transfer learning across domains of the same data sets and employ a different architecture which promises to be less data-hungry than sequence-to-sequence models." + }, + { + "id": 177, + "string": "Typical SLU systems rely on domain-specific semantic parsers that identify intents and slots in a sentence." + }, + { + "id": 178, + "string": "Traditionally, these tasks were performed by linear machine learning models (Sha and Pereira, 2003) but more recently jointlytrained DNN models are used (Mesnil et al., 2015; Hakkani-Tür et al., 2016) with differing contexts (Gupta et al., 2018a; Vishal Ishwar Naik, 2018) ." + }, + { + "id": 179, + "string": "More recently there has been work on extending the traditional intent/slot framework using targeted parsing to handle more complex linguistic phenomenon like coordination (Gupta et al., 2018c; Agarwal et al., 2018) ." + }, + { + "id": 180, + "string": "Conclusions We framed SLU as an executable semantic parsing task, which addresses a limitation of current commercial SLU systems." + }, + { + "id": 181, + "string": "By applying our framework to different data sets, we demonstrate that the framework is effective for Q&A as well as for SLU." + }, + { + "id": 182, + "string": "We explored a typical scenario where it is necessary to learn a semantic parser for a new domain with little data, but other high-resource domains are available." + }, + { + "id": 183, + "string": "We show the effectiveness of our system and both pre-training and MTL on different domains and data sets." + }, + { + "id": 184, + "string": "Preliminary experiment results on transfer learning across domains belonging to heterogeneous data sets suggest future work in this area." 
+ } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 35 + }, + { + "section": "Transition-based Parser", + "n": "2", + "start": 36, + "end": 50 + }, + { + "section": "Neural Transition-based Parser with", + "n": "2.1", + "start": 51, + "end": 73 + }, + { + "section": "Transfer learning", + "n": "3", + "start": 74, + "end": 102 + }, + { + "section": "Data", + "n": "4", + "start": 103, + "end": 122 + }, + { + "section": "Experiments", + "n": "5", + "start": 123, + "end": 125 + }, + { + "section": "Attention", + "n": "5.1", + "start": 126, + "end": 134 + }, + { + "section": "Handling Sparsity", + "n": "5.2", + "start": 135, + "end": 150 + }, + { + "section": "Transfer Learning", + "n": "5.3", + "start": 151, + "end": 166 + }, + { + "section": "Related work", + "n": "6", + "start": 167, + "end": 179 + }, + { + "section": "Conclusions", + "n": "7", + "start": 180, + "end": 184 + } + ], + "figures": [ + { + "filename": "../figure/image/990-Figure1-1.png", + "caption": "Figure 1: The full neural transition-based parsing model. Representations of stack, buffer, and previous actions are used to predict the next action. When the TER or NT actions are chosen, further layers are used to predict (or copy) the token.", + "page": 2, + "bbox": { + "x1": 309.59999999999997, + "x2": 515.04, + "y1": 66.24, + "y2": 317.76 + } + }, + { + "filename": "../figure/image/990-Table4-1.png", + "caption": "Table 4: Transfer learning results for SLU domains. BL + Copy is the model without transfer learning. PRETR. stands for pre-training. Again, the numbers are exact match accuracy.", + "page": 5, + "bbox": { + "x1": 80.64, + "x2": 284.15999999999997, + "y1": 263.52, + "y2": 340.32 + } + }, + { + "filename": "../figure/image/990-Table3-1.png", + "caption": "Table 3: Transfer learning results for the Overnight domains. BL − Att is the model without transfer learning. PRETR. stands for pre-training. 
Again, we report exact match accuracy.", + "page": 5, + "bbox": { + "x1": 93.6, + "x2": 271.2, + "y1": 62.4, + "y2": 190.07999999999998 + } + }, + { + "filename": "../figure/image/990-Table2-1.png", + "caption": "Table 2: Left side: Ablation experiments on attention mechanism. Right side: Comparison between delexicalization and copy mechanism. BL is the model of Section 2.1, −Att refers to the same model without attention, +Delex is the system with delexicalization and in +Copy we use a copy mechanism instead. The scores indicate the percentage of correct parses.", + "page": 4, + "bbox": { + "x1": 75.84, + "x2": 289.44, + "y1": 62.4, + "y2": 280.32 + } + }, + { + "filename": "../figure/image/990-Table1-1.png", + "caption": "Table 1: Details of training data. # is the number of sentences, TER is the terminal vocabulary size, NT is the nonterminal vocabulary size and Words is the input vocabulary size.", + "page": 3, + "bbox": { + "x1": 317.76, + "x2": 517.4399999999999, + "y1": 242.39999999999998, + "y2": 486.24 + } + }, + { + "filename": "../figure/image/990-Figure2-1.png", + "caption": "Figure 2: Conversion from intent/slot tags to tree for the sentence Which cinemas screen Star Wars tonight?", + "page": 3, + "bbox": { + "x1": 336.47999999999996, + "x2": 506.4, + "y1": 89.75999999999999, + "y2": 197.28 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-9" + }, + { + "slides": { + "0": { + "title": "Abstract Meaning Representation AMR", + "text": [ + "He ate the pizza with his fingers." + ], + "page_nums": [ + 1 + ], + "images": [] + }, + "1": { + "title": "AMR to text generation English", + "text": [ + "He ate the pizza with his fingers." + ], + "page_nums": [ + 2, + 3 + ], + "images": [] + }, + "2": { + "title": "Previous work", + "text": [ + "Konstas et al. 
(2017): sequential encoder;" + ], + "page_nums": [ + 4 + ], + "images": [] + }, + "3": { + "title": "This work", + "text": [ + "He ate the pizza with his fingers.", + "Are improvements in graph encoders due to reentrancies?", + "Graph: Graph Convolutional Network (GCN; Kipf and Welling, 2017)." + ], + "page_nums": [ + 5 + ], + "images": [] + }, + "4": { + "title": "Sequential input Konstas et al 2017", + "text": [ + ":arg0 he :arg1 pizza :instrument finger :part-of he eat-01", + ":arg0 eat-01 he :arg1 pizza :instr. finger part-of he", + "eat-01 :arg0 he :arg1 pizza :instrument finger :part-of he" + ], + "page_nums": [ + 6, + 7, + 8 + ], + "images": [] + }, + "5": { + "title": "Tree structured input", + "text": [ + "eat-01 :arg0 he :arg1 pizza :instrument finger :part-of he", + ":arg0 he :arg1 pizza :instr. finger part-of he eat-01" + ], + "page_nums": [ + 9, + 10, + 11, + 12 + ], + "images": [] + }, + "6": { + "title": "Graph structured input", + "text": [ + ":arg0 he :arg1 pizza :instrument finger :part-of he eat-01", + "eat-01 :arg0 he :arg1 pizza :instrument finger :part-of he", + ":arg0 he :arg1 pizza :instr. finger part-of he eat-01" + ], + "page_nums": [ + 13, + 14 + ], + "images": [] + }, + "8": { + "title": "Comparison between models dev set R1", + "text": [ + "Seq TreeLSTM GCN-Tree GCN-Graph" + ], + "page_nums": [ + 16 + ], + "images": [] + }, + "9": { + "title": "Comparison with previous work test set R1", + "text": [ + "Konstas(seq) Song(graph) GCN-Tree GCN-Graph", + "Konstas: sequential baseline, Konstas et al. 
(2017)" + ], + "page_nums": [ + 17 + ], + "images": [] + }, + "12": { + "title": "Long range dependencies", + "text": [ + "He ate the pizza with a fork.", + "eat-01 :arg0 he :arg1 pizza :instrument fork", + "Model Max dependency length" + ], + "page_nums": [ + 20 + ], + "images": [] + }, + "13": { + "title": "Generation example", + "text": [ + "communicate-01 lawyer significant-other ex", + "REF tell your ex that all communication needs to go through the lawyer", + "Seq tell that all the communication go through lawyer", + "Tree tell your ex, tell your ex, the need for all the communication", + "Graph tell your ex the need to go through a lawyer" + ], + "page_nums": [ + 21 + ], + "images": [] + }, + "16": { + "title": "More examples", + "text": [ + "Graph i dont tell him but he finds out. i didnt tell him but he was out. i dont tell him but found out. i dont tell him but he found out.", + "Graph if you tell people they can help you , if you tell him, you can help you ! if you tell person_name you, you can help you . if you tell them, you can help you .", + "Graph i d recommend you go and see your doctor too. i recommend you go to see your doctor who is going to see your doctor. you recommend going to see your doctor too. i recommend you going to see your doctor too." + ], + "page_nums": [ + 27, + 28, + 29, + 30 + ], + "images": [] + } + }, + "paper_title": "Structural Neural Encoders for AMR-to-text Generation", + "paper_id": "992", + "paper": { + "title": "Structural Neural Encoders for AMR-to-text Generation", + "abstract": "AMR-to-text generation is a problem recently introduced to the NLP community, in which the goal is to generate sentences from Abstract Meaning Representation (AMR) graphs. Sequence-to-sequence models can be used to this end by converting the AMR graphs to strings. Approaching the problem while working directly with graphs requires the use of graph-to-sequence models that encode the AMR graph into a vector representation. 
Such encoding has been shown to be beneficial in the past, and unlike sequential encoding, it allows us to explicitly capture reentrant structures in the AMR graphs. We investigate the extent to which reentrancies (nodes with multiple parents) have an impact on AMR-to-text generation by comparing graph encoders to tree encoders, where reentrancies are not preserved. We show that improvements in the treatment of reentrancies and long-range dependencies contribute to higher overall scores for graph encoders. Our best model achieves 24.40 BLEU on LDC2015E86, outperforming the state of the art by 1.1 points and 24.54 BLEU on LDC2017T10, outperforming the state of the art by 1.24 points.", + "text": [ + { + "id": 0, + "string": "Introduction Abstract Meaning Representation (AMR; Banarescu et al." + }, + { + "id": 1, + "string": "2013 ) is a semantic graph representation that abstracts away from the syntactic realization of a sentence, where nodes in the graph represent concepts and edges represent semantic relations between them." + }, + { + "id": 2, + "string": "AMRs are graphs, rather than trees, because co-references and control structures result in nodes with multiple parents, called reentrancies." + }, + { + "id": 3, + "string": "For instance, the AMR of Figure 1 (a) contains a reentrancy between finger and he, caused by the possessive pronoun his." + }, + { + "id": 4, + "string": "AMR-to-text generation is the task of automatically generating natural language from AMR graphs." + }, + { + "id": 5, + "string": "Attentive encoder/decoder architectures, commonly used for Neural Machine Translation (NMT), have been explored for this task (Konstas et al., 2017; Song et al., 2018; Beck et al., 2018) ." + }, + { + "id": 6, + "string": "In order to use sequence-to-sequence models, Konstas et al." + }, + { + "id": 7, + "string": "(2017) reduce the AMR graphs to sequences, while Song et al." + }, + { + "id": 8, + "string": "(2018) and Beck et al." 
+ }, + { + "id": 9, + "string": "(2018) directly encode them as graphs." + }, + { + "id": 10, + "string": "Graph encoding allows the model to explicitly encode reentrant structures present in the AMR graphs." + }, + { + "id": 11, + "string": "While central to AMR, reentrancies are often hard to treat both in parsing and in generation." + }, + { + "id": 12, + "string": "Previous work either removed them from the graphs, hence obtaining sequential (Konstas et al., 2017) or tree-structured (Liu et al., 2015; Takase et al., 2016) data, while other work maintained them but did not analyze their impact on performance (e.g., Song et al., 2018; Beck et al., 2018) ." + }, + { + "id": 13, + "string": "Damonte et al." + }, + { + "id": 14, + "string": "(2017) showed that state-of-the-art parsers do not perform well in predicting reentrant structures, while van Noord and Bos (2017) compared different pre-and post-processing techniques to improve the performance of sequenceto-sequence parsers with respect to reentrancies." + }, + { + "id": 15, + "string": "It is not yet clear whether explicit encoding of reentrancies is beneficial for generation." + }, + { + "id": 16, + "string": "In this paper, we compare three types of encoders for AMR: 1) sequential encoders, which reduce AMR graphs to sequences; 2) tree encoders, which ignore reentrancies; and 3) graph encoders." + }, + { + "id": 17, + "string": "We pay particular attention to two phenomena: reentrancies, which mark co-reference and control structures, and long-range dependencies in the AMR graphs, which are expected to benefit from structural encoding." + }, + { + "id": 18, + "string": "The contributions of the paper are two-fold: • We present structural encoders for the encoder/decoder framework and show the benefits of graph encoders not only compared to sequential encoders but also compared to tree encoders, which have not been studied so far for AMR-to-text generation." 
+ }, + { + "id": 19, + "string": "• We show that better treatment of reentrancies and long-range dependencies contributes to improvements in the graph encoders." + }, + { + "id": 20, + "string": "Our best model, based on a graph encoder, achieves state-of-the-art results for both the LDC2015E86 dataset (24.40 on BLEU and 23.79 on Meteor) and the LDC2017T10 dataset (24.54 on BLEU and 24.07 on Meteor)." + }, + { + "id": 21, + "string": "Input Representations Graph-structured AMRs AMRs are normally represented as rooted and directed graphs: G 0 = (V 0 , E 0 , L), V 0 = {v 1 , v 2 , ." + }, + { + "id": 22, + "string": "." + }, + { + "id": 23, + "string": "." + }, + { + "id": 24, + "string": ", v n }, root ∈ V 0 , where V 0 are the graph vertices (or nodes) and root is a designated root node in V 0 ." + }, + { + "id": 25, + "string": "The edges in the AMR are labeled: E 0 ⊆ V 0 × L × V 0 , L = { 1 , 2 , ." + }, + { + "id": 26, + "string": "." + }, + { + "id": 27, + "string": "." + }, + { + "id": 28, + "string": ", n }." + }, + { + "id": 29, + "string": "Each edge e ∈ E 0 is a triple: e = (i, label, j), where i ∈ V 0 is the parent node, label ∈ L is the edge label and j ∈ V 0 is the child node." + }, + { + "id": 30, + "string": "In order to obtain unlabeled edges, thus decreasing the total number of parameters required by the models, we replace each labeled edge e = (i, label, j) with two unlabeled edges: e 1 = (i, label), e 2 = (label, j): G = (V, E), V = V 0 ∪ L = {v 1 , ." + }, + { + "id": 31, + "string": "." + }, + { + "id": 32, + "string": "." + }, + { + "id": 33, + "string": ", v n , 1 , ." + }, + { + "id": 34, + "string": "." + }, + { + "id": 35, + "string": "." + }, + { + "id": 36, + "string": ", n }, E ⊆ (V 0 × L) ∪ (L × V 0 )." + }, + { + "id": 37, + "string": "Each unlabeled edge e ∈ E is a pair: e = (i, j), where one of the following holds: 1. i ∈ V 0 and j ∈ L; 2. i ∈ L and j ∈ V 0 ." 
+ }, + { + "id": 38, + "string": "For instance, the edge between eat-01 and he with label :arg0 of Figure 1 (a) is replaced by two edges in Figure 1(d) : an edge between eat-01 and :arg0 and another one between :arg0 and he." + }, + { + "id": 39, + "string": "The process, also used in Beck et al." + }, + { + "id": 40, + "string": "(2018) , tranforms the input graph into its equivalent Levi graph (Levi, 1942) ." + }, + { + "id": 41, + "string": "Tree-structured AMRs In order to obtain tree structures, it is necessary to discard the reentrancies from the AMR graphs." + }, + { + "id": 42, + "string": "Similarly to Takase et al." + }, + { + "id": 43, + "string": "(2016) , we replace nodes with n > 1 incoming edges with n identically labeled nodes, each with a single incoming edge." + }, + { + "id": 44, + "string": "Sequential AMRs Following Konstas et al." + }, + { + "id": 45, + "string": "(2017) , the input sequence is a linearized and anonymized AMR graph." + }, + { + "id": 46, + "string": "Linearization is used to convert the graph into a sequence: x = x 1 , ." + }, + { + "id": 47, + "string": "." + }, + { + "id": 48, + "string": "." + }, + { + "id": 49, + "string": ", x N , x i ∈ V. The depth-first traversal of the graph defines the indexing between nodes and tokens in the sequence." + }, + { + "id": 50, + "string": "For instance, the root node is x 1 , its leftmost child is x 2 and so on." + }, + { + "id": 51, + "string": "Nodes with multiple parents are visited more than once." + }, + { + "id": 52, + "string": "At each visit, their labels are repeated in the sequence, effectively losing reentrancy information, as shown in Figure 1 (b)." + }, + { + "id": 53, + "string": "Anonymization removes names and rare words with coarse categories to reduce data sparsity." + }, + { + "id": 54, + "string": "An alternative to anonymization is to employ a copy mechanism (Gulcehre et al., 2016) , where the models learn to copy rare words from the input itself." 
+ }, + { + "id": 55, + "string": "In this paper, we follow the anonymization approach." + }, + { + "id": 56, + "string": "Encoders In this section, we review the encoders adopted as building blocks for our tree and graph encoders." + }, + { + "id": 57, + "string": "Recurrent Neural Network Encoders We reimplement the encoder of Konstas et al." + }, + { + "id": 58, + "string": "(2017) , where the sequential linearization is the input to a bidirectional LSTM (BiLSTM; Graves et al." + }, + { + "id": 59, + "string": "2013) network." + }, + { + "id": 60, + "string": "The hidden state of the BiL-STM at step i is used as a context-aware word representation of the i-th token in the sequence: e 1:N = BiLSTM(x 1:N ), where e i ∈ R d , d is the size of the output embeddings." + }, + { + "id": 61, + "string": "TreeLSTM Encoders Tree-Structured Long Short-Term Memory Networks (TreeLSTM; Tai et al." + }, + { + "id": 62, + "string": "2015) have been introduced primarily as a way to encode the hierarchical structure of syntactic trees (Tai et al., 2015) , but they have also been applied to AMR for the task of headline generation (Takase et al., 2016) ." + }, + { + "id": 63, + "string": "TreeLSTMs assume tree-structured input, so AMR graphs must be preprocessed to respect this constraint: reentrancies, which play an essential role in AMR, must be removed, thereby transforming the graphs into trees." + }, + { + "id": 64, + "string": "We use the Child-Sum variant introduced by Tai et al." + }, + { + "id": 65, + "string": "(2015) , which processes the tree in a bottomup pass." + }, + { + "id": 66, + "string": "When visiting a node, the hidden states of its children are summed up in a single vector which is then passed into recurrent gates." + }, + { + "id": 67, + "string": "In order to use information from both incoming and outgoing edges (parents and children), we employ bidirectional TreeLSTMs (Eriguchi et al., 2016) , where the bottom-up pass is followed by a top-down pass." 
+ }, + { + "id": 68, + "string": "The top-down state of the root node is obtained by feeding the bottom-up state of the root node through a feed-forward layer: h ↓ root = tanh(W r h ↑ root + b), where h ↑ i is the hidden state of node x i ∈ V for the bottom-up pass and h ↓ i is the hidden state of node x i for the top-down pass." + }, + { + "id": 69, + "string": "The bottom up states for all other nodes are computed with an LSTM, with the cell state given by their parent nodes: h ↓ i = LSTM(h ↑ p(i) , h ↑ i ), where p(i) is the parent of node x i in the tree." + }, + { + "id": 70, + "string": "The final hidden states are obtained by concatenating the states from the bottom-up pass and the topdown pass: h i = h ↓ i ; h ↑ i ." + }, + { + "id": 71, + "string": "The hidden state of the root node is usually used as a representation for the entire tree." + }, + { + "id": 72, + "string": "In order to use attention over all nodes, as in traditional NMT (Bahdanau et al., 2015) , we can however build node embeddings by extracting the hidden states of each node in the tree: e 1:N = h 1:N , where e i ∈ R d , d is the size of the output embed- dings." + }, + { + "id": 73, + "string": "The encoder is related to the TreeLSTM encoder of Takase et al." + }, + { + "id": 74, + "string": "(2016) , which however encodes labeled trees and does not use a top-down pass." + }, + { + "id": 75, + "string": "Graph Convolutional Network Encoders Graph Convolutional Network (GCN; Duvenaud et al." + }, + { + "id": 76, + "string": "2015; Kipf and Welling 2016) is a neural network architecture that learns embeddings of nodes in a graph by looking at its nearby nodes." + }, + { + "id": 77, + "string": "In Natural Language Processing, GCNs have been used for Semantic Role Labeling , NMT (Bastings et al., 2017) , Named Entity Recognition (Cetoli et al., 2017) and text generation (Marcheggiani and Perez-Beltrachini, 2018) ." 
+ }, + { + "id": 78, + "string": "A graph-to-sequence neural network was first introduced by Xu et al." + }, + { + "id": 79, + "string": "(2018) ." + }, + { + "id": 80, + "string": "The authors review the similarities between their approach, GCN and another approach, based on GRUs (Li et al., 2015) ." + }, + { + "id": 81, + "string": "The latter recently inspired a graphto-sequence architecture for AMR-to-text generation (Beck et al., 2018) ." + }, + { + "id": 82, + "string": "Simultaneously, Song et al." + }, + { + "id": 83, + "string": "(2018) proposed a graph encoder based on LSTMs." + }, + { + "id": 84, + "string": "The architectures of Song et al." + }, + { + "id": 85, + "string": "(2018) and Beck et al." + }, + { + "id": 86, + "string": "(2018) are both based on the same core computation of a GCN, which sums over the embeddings of the immediate neighborhood of each node: h (k+1) i = σ j∈N (i) W (k) (j,i) h (k) j + b (k) , where h (k) i is the embeddings of node x i ∈ V at layer k, σ is a non-linear activation function, N (i) is the set of the immediate neighbors of x i , W (k) (j,i) ∈ R m×m and b (k) ∈ R m , with m being the size of the embeddings." + }, + { + "id": 87, + "string": "It is possible to use recurrent networks to model the update of the node embeddings." + }, + { + "id": 88, + "string": "Specifically, Beck et al." + }, + { + "id": 89, + "string": "(2018) uses a GRU layer where the gates are modeled as GCN layers." + }, + { + "id": 90, + "string": "Song et al." + }, + { + "id": 91, + "string": "(2018) did not use the activation function σ and perform an LSTM update instead." + }, + { + "id": 92, + "string": "The systems of Song et al." + }, + { + "id": 93, + "string": "(2018) and Beck et al." + }, + { + "id": 94, + "string": "(2018) further differ in design and implementation decisions such as in the use of edge label and edge directionality." 
+ }, + { + "id": 95, + "string": "Throughout the rest of the paper, we follow the traditional, non-recurrent, implementation of GCN also adopted in other NLP tasks Bastings et al., 2017; Cetoli et al., 2017) ." + }, + { + "id": 96, + "string": "In our experiments, the node embeddings are computed as follows: h (k+1) i = σ j∈N (i) W (k) dir(j,i) h (k) j + b (k) , (1) where dir(j, i) indicates the direction of the edge between x j and x i (i.e., outgoing or incoming edge)." + }, + { + "id": 97, + "string": "The hidden vectors from the last layer of the GCN network are finally used to represent each node in the graph: e 1:N = h (K) 1 , ." + }, + { + "id": 98, + "string": "." + }, + { + "id": 99, + "string": "." + }, + { + "id": 100, + "string": ", h (K) N , where K is the number of GCN layers used, e i ∈ R d , d is the size of the output embeddings." + }, + { + "id": 101, + "string": "To regularize the models we apply dropout (Srivastava et al., 2014) as well as edge dropout ." + }, + { + "id": 102, + "string": "We also include highway connections (Srivastava et al., 2015) between GCN layers." + }, + { + "id": 103, + "string": "While GCN can naturally be used to encode graphs, they can also be applied to trees by removing reentrancies from the input graphs." + }, + { + "id": 104, + "string": "In the experiments of Section 5, we explore GCN-based models both as graph encoders (reentrancies are maintained) as well as tree encoders (reentrancies are ignored)." + }, + { + "id": 105, + "string": "x 1 x 2 ." + }, + { + "id": 106, + "string": "." + }, + { + "id": 107, + "string": "." + }, + { + "id": 108, + "string": "x N GCN/TreeLSTM h 1 h 2 ." + }, + { + "id": 109, + "string": "." + }, + { + "id": 110, + "string": "." + }, + { + "id": 111, + "string": "h N h 1 h 2 ." + }, + { + "id": 112, + "string": "." + }, + { + "id": 113, + "string": "." + }, + { + "id": 114, + "string": "hn BiLSTM e 1 e 2 ." + }, + { + "id": 115, + "string": "." + }, + { + "id": 116, + "string": "." 
+ }, + { + "id": 117, + "string": "en x 1 x 2 ." + }, + { + "id": 118, + "string": "." + }, + { + "id": 119, + "string": "." + }, + { + "id": 120, + "string": "x N x 1 x 2 ." + }, + { + "id": 121, + "string": "." + }, + { + "id": 122, + "string": "." + }, + { + "id": 123, + "string": "xn BiLSTM h 1 h 2 ." + }, + { + "id": 124, + "string": "." + }, + { + "id": 125, + "string": "." + }, + { + "id": 126, + "string": "hn h 1 h 2 ." + }, + { + "id": 127, + "string": "." + }, + { + "id": 128, + "string": "." + }, + { + "id": 129, + "string": "h N GCN/TreeLSTM e 1 e 2 ." + }, + { + "id": 130, + "string": "." + }, + { + "id": 131, + "string": "." + }, + { + "id": 132, + "string": "e N Figure 2 : Two ways of stacking recurrent and structural models." + }, + { + "id": 133, + "string": "Left side: structure on top of sequence, where the structural encoders are applied to the hidden vectors computed by the BiLSTM." + }, + { + "id": 134, + "string": "Right side: sequence on top of structure, where the structural encoder is used to create better embeddings which are then fed to the BiLSTM." + }, + { + "id": 135, + "string": "The dotted lines refer to the process of converting the graph into a sequence or vice-versa." + }, + { + "id": 136, + "string": "Stacking Encoders We aimed at stacking the explicit source of structural information provided by TreeLSTMs and GCNs with the sequential information which BiL-STMs extract well." + }, + { + "id": 137, + "string": "This was shown to be effective for other tasks with both TreeLSTMs (Eriguchi et al., 2016; Chen et al., 2017) and GCNs Cetoli et al., 2017; Bastings et al., 2017) ." + }, + { + "id": 138, + "string": "In previous work, the structural encoders (tree or graph) were used on top of the BiLSTM network: first, the input is passed through the sequential encoder, the output of which is then fed into the structural encoder." 
+ }, + { + "id": 139, + "string": "While we experiment with this approach, we also propose an alternative solution where the BiLSTM network is used on top of the structural encoder: the input embeddings are refined by exploiting the explicit structural information given by the graph." + }, + { + "id": 140, + "string": "The refined embeddings are then fed into the BiLSTM networks." + }, + { + "id": 141, + "string": "See Figure 2 for a graphical representation of the two approaches." + }, + { + "id": 142, + "string": "In our experiments, we found this approach to be more effective." + }, + { + "id": 143, + "string": "Compared to models that interleave structural and recurrent components such as the systems of Song et al." + }, + { + "id": 144, + "string": "(2018) and Beck et al." + }, + { + "id": 145, + "string": "(2018) , stacking the components allows us to test for their contributions more easily." + }, + { + "id": 146, + "string": "Structure on Top of Sequence In this setup, BiLSTMs are used as in Section 3.1 to encode the linearized and anonymized AMR." + }, + { + "id": 147, + "string": "The context provided by the BiLSTM is a sequential one." + }, + { + "id": 148, + "string": "We then apply either GCN or TreeLSTM on the output of the BiLSTM, by initializing the GCN or TreeLSTM embeddings with the BiLSTM hidden states." + }, + { + "id": 149, + "string": "We call these models SEQGCN and SEQTREELSTM." + }, + { + "id": 150, + "string": "Sequence on Top of Structure We also propose a different approach for integrating graph information into the encoder, by swapping the order of the BiLSTM and the structural encoder: we aim at using the structured information provided by the AMR graph as a way to refine the original word representations." + }, + { + "id": 151, + "string": "We first apply the structural encoder to the input graphs." + }, + { + "id": 152, + "string": "The GCN or TreeLSTM representations are then fed into the BiLSTM." 
+ }, + { + "id": 153, + "string": "We call these models GCNSEQ and TREELSTMSEQ." + }, + { + "id": 154, + "string": "The motivation behind this approach is that we know that BiLSTMs, given appropriate input embeddings, are very effective at encoding the input sequences." + }, + { + "id": 155, + "string": "In order to exploit their strength, we do not amend their output but rather provide them with better input embeddings to start with, by explicitly taking the graph relations into account." + }, + { + "id": 156, + "string": "Experiments We use both BLEU (Papineni et al., 2002) and Meteor (Banerjee and Lavie, 2005) as evaluation metrics." + }, + { + "id": 157, + "string": "1 We report results on the AMR dataset LDC2015E86 and LDC2017T10." + }, + { + "id": 158, + "string": "All systems are implemented in PyTorch (Paszke et al., 2017) using the framework OpenNMT-py (Klein et al., 2017) ." + }, + { + "id": 159, + "string": "Hyperparameters of each model were tuned on the development set of LDC2015E86." + }, + { + "id": 160, + "string": "For the GCN components, we use two layers, ReLU activations, and tanh highway layers." + }, + { + "id": 161, + "string": "We use single layer LSTMs." + }, + { + "id": 162, + "string": "We train with SGD with the initial learning rate set to 1 and decay to 0.8." + }, + { + "id": 163, + "string": "Batch size is set to 100." + }, + { + "id": 164, + "string": "2 We first evaluate the overall performance of the models, after which we focus on two phenomena that we expect to benefit most from structural encoders: reentrancies and long-range dependencies." + }, + { + "id": 165, + "string": "Table 1 shows the comparison on the development split of the LDC2015E86 dataset between sequential, tree and graph encoders." + }, + { + "id": 166, + "string": "The sequential encoder (SEQ) is a re-implementation of Konstas et al." + }, + { + "id": 167, + "string": "(2017) ." 
+ }, + { + "id": 168, + "string": "We test both approaches of stacking structural and sequential components: structure on top of sequence (SEQTREELSTM and SEQGCN), and sequence on top of structure (TREELSTMSEQ and GCNSEQ)." + }, + { + "id": 169, + "string": "To inspect the effect of the sequential component, we run ablation tests by removing the RNNs altogether (TREELSTM and GCN)." + }, + { + "id": 170, + "string": "GCN-based models are used both as tree encoders (reentrancies are removed) and graph encoders (reentrancies are maintained)." + }, + { + "id": 171, + "string": "For both TreeLSTM-based and GCN-based models, our proposed approach of applying the structural encoder before the RNN achieves better scores." + }, + { + "id": 172, + "string": "This is especially true for GCN-based models, for which we also note a drastic drop in performance when the RNN is removed, highlighting the importance of a sequential component." + }, + { + "id": 173, + "string": "On the other hand, RNN layers seem to have less impact on TreeLSTM-based models." + }, + { + "id": 174, + "string": "This outcome is not unexpected, as TreeLSTMs already use LSTM gates in their computation." + }, + { + "id": 175, + "string": "The results show a clear advantage of tree and graph encoders over the sequential encoder." + }, + { + "id": 176, + "string": "The best performing model is GCNSEQ, both as a tree and as a graph encoder, with the latter obtaining the highest results." + }, + { + "id": 177, + "string": "Table 2 shows the comparison between our best sequential (SEQ), tree (GCNSEQ without reentrancies, henceforth called TREE) and graph en- coders (GCNSEQ with reentrancies, henceforth called GRAPH) on the test set of LDC2015E86 and LDC2017T10." + }, + { + "id": 178, + "string": "We also include state-of-the-art results reported on these datasets for sequential encoding (Konstas et al., 2017) and graph encoding (Song et al., 2018; Beck et al., 2018) ." 
+ }, + { + "id": 179, + "string": "3 In order to mitigate the effects of random seeds, we train five models with different random seeds and report the results of the median model, according to their BLEU score on the development set (Beck et al., 2018) ." + }, + { + "id": 180, + "string": "We achieve state-of-the-art results with both tree and graph encoders, demonstrating the efficacy of our GCNSeq approach." + }, + { + "id": 181, + "string": "The graph encoder outperforms the other systems and previous work on both datasets." + }, + { + "id": 182, + "string": "These results demonstrate the benefit of structural encoders over purely sequential ones as well as the advantage of explicitly including reentrancies." + }, + { + "id": 183, + "string": "The differences between our graph encoder and that of Song et al." + }, + { + "id": 184, + "string": "(2018) and Beck et al." + }, + { + "id": 185, + "string": "(2018) were discussed in Section 3.3." + }, + { + "id": 186, + "string": "3 We run comparisons on systems without ensembling nor additional data." + }, + { + "id": 187, + "string": "Reentrancies Overall scores show an advantage of graph encoder over tree and sequential encoders, but they do not shed light into how this is achieved." + }, + { + "id": 188, + "string": "Because graph encoders are the only ones to model reentrancies explicitly, we expect them to deal better with these structures." + }, + { + "id": 189, + "string": "It is, however, possible that the other models are capable of handling these structures implicitly." + }, + { + "id": 190, + "string": "Moreover, the dataset contains a large number of examples that do not involve any reentrancies, as shown in Table 3 , so that the overall scores may not be representative of the ability of models to capture reentrancies." + }, + { + "id": 191, + "string": "It is expected that the benefit of the graph models will be more evident for those examples containing more reentrancies." 
+ }, + { + "id": 192, + "string": "To test this hypothesis, we evaluate the various scenarios as a function of the number of reentrancies in each example, using the Meteor score as a metric." + }, + { + "id": 193, + "string": "4 Table 4 shows that the gap between the graph encoder and the other encoders is widest for examples with more than six reentrancies." + }, + { + "id": 194, + "string": "The Meteor score of the graph encoder for these cases is 3.1% higher than the one for the sequential encoder and 2.3% higher than the score achieved by the tree encoder, demonstrating that explicitly encoding reentrancies is more beneficial than the overall scores suggest." + }, + { + "id": 195, + "string": "Interestingly, it can also be observed that the graph model outperforms the tree model also for examples with no reentrancies, where tree and graph structures are identical." + }, + { + "id": 196, + "string": "This suggests that preserving reentrancies in the training data has other beneficial effects." + }, + { + "id": 197, + "string": "In Section 5.2 we explore one: better handling of long-range dependencies." + }, + { + "id": 198, + "string": "Manual Inspection In order to further explore how the graph model handles reentrancies differently from the other models, we performed a manual inspection of the models' output." + }, + { + "id": 199, + "string": "We selected examples containing reentrancies, where the graph model performs better than the other models." + }, + { + "id": 200, + "string": "These are shown in Table 5 ." + }, + { + "id": 201, + "string": "In Example (1), we note that the graph model is the only one that correctly predicts the phrase he finds out." + }, + { + "id": 202, + "string": "The wrong verb tense is due to the lack of tense information in AMR graphs." + }, + { + "id": 203, + "string": "In the sequential model, the pronoun is chosen correctly, but the wrong verb is predicted, while in the tree model the pronoun is missing." 
+ }, + { + "id": 204, + "string": "In Example (2) , only the graph model correctly generates the phrase you tell them, while none of the models use people as the subject of the predicate can." + }, + { + "id": 205, + "string": "In Example (3), both the graph and the sequential models deal well with the control structure caused by the recommend predicate." + }, + { + "id": 206, + "string": "The sequential model, however, overgenerates a wh-clause." + }, + { + "id": 207, + "string": "Finally, in Example (4) the tree and graph models deal correctly with the possessive pronoun to generate the phrase tell your ex, while the sequential model does not." + }, + { + "id": 208, + "string": "Overall, we note that the graph model produces a more accurate output than sequential and tree models by generating the correct pronouns and mentions when control verbs and co-references are involved." + }, + { + "id": 209, + "string": "Contrastive Pairs For a quantitative analysis of how the different models handle pronouns, we use a method to inspect NMT output for specific linguistic analysis based on contrastive pairs (Sennrich, 2017) ." + }, + { + "id": 210, + "string": "Given a reference output sentence, a contrastive sentence is generated by introducing a mistake related to the phenomenon we are interested in evaluating." + }, + { + "id": 211, + "string": "The probability that the model assigns to the reference sentence is then compared to that of the contrastive sentence." + }, + { + "id": 212, + "string": "The accuracy of a model is determined by the percentage of examples in which the reference sentence has a higher probability than the contrastive sentence." + }, + { + "id": 213, + "string": "We produce contrastive examples by running CoreNLP (Manning et al., 2014) to identify coreferences, which are the primary cause of reentrancies, and introducing a mistake." 
+ }, + { + "id": 214, + "string": "When an expression has multiple mentions, the antecedent is repeated in the linearized AMR." + }, + { + "id": 215, + "string": "For instance, the linearization of Figure 1(b) contains the token he twice, which instead appears only once in the sen-tence." + }, + { + "id": 216, + "string": "This repetition may result in generating the token he twice, rather than using a pronoun to refer back to it." + }, + { + "id": 217, + "string": "To investigate this possible mistake, we replace one of the mentions with the antecedent (e.g., John ate the pizza with his fingers is replaced with John ate the pizza with John fingers, which is ungrammatical and as such should be less likely)." + }, + { + "id": 218, + "string": "An alternative hypothesis is that even when the generation system correctly decides to predict a pronoun, it selects the wrong one." + }, + { + "id": 219, + "string": "To test for this, we produce contrastive examples where a pronoun is replaced by either a different type of pronoun (e.g., John ate the pizza with his fingers is replaced with John ate the pizza with him fingers) or by the same type of pronoun but for a different number (John ate the pizza with their fingers) or different gender (John ate the pizza with her fingers)." + }, + { + "id": 220, + "string": "Note from Figure 1 that the graph-structured AMR is the one that more directly captures the relation between finger and he, and as such it is expected to deal better with this type of mistakes." + }, + { + "id": 221, + "string": "From the test split of LDC2017T10, we generated 251 contrastive examples due to antecedent replacements, 912 due to pronoun type replacements, 1840 due to number replacements and 95 due to gender replacements." + }, + { + "id": 222, + "string": "5 The results are shown in Table 6 ." 
+ }, + { + "id": 223, + "string": "The sequential encoder performs surprisingly well at this task, with better or on par performance with respect to the tree encoder." + }, + { + "id": 224, + "string": "The graph encoder outperforms the sequential encoder only for pronoun number and gender replacements." + }, + { + "id": 225, + "string": "Future work is required to more precisely analyze if the different models cope with pronomial mentions in significantly different ways." + }, + { + "id": 226, + "string": "Other approaches to inspect phenomena of co-reference and control verbs can also be explored, for instance by devising specific training objectives (Linzen et al., 2016) ." + }, + { + "id": 227, + "string": "Long-range Dependencies When we encode a long sequence, interactions between items that appear distant from each other in the sequence are difficult to capture." + }, + { + "id": 228, + "string": "The problem of long-range dependencies in natural language is well known for RNN architectures (Bengio et al., 1994) ." + }, + { + "id": 229, + "string": "Indeed, the need to solve this problem motivated the introduction of LSTM models, which are known to model long-range dependencies better than traditional RNNs." + }, + { + "id": 230, + "string": "(1) REF i dont tell him but he finds out , SEQ i did n't tell him but he was out ." + }, + { + "id": 231, + "string": "TREE i do n't tell him but found out ." + }, + { + "id": 232, + "string": "GRAPH i do n't tell him but he found out ." + }, + { + "id": 233, + "string": "( 2) Because the nodes in the graphs are not aligned with words in the sentence, AMR has no notion of distance between the nodes taking part in an edge." + }, + { + "id": 234, + "string": "In order to define the length of an AMR edge, we resort to the AMR linearization discussed in Section 2." + }, + { + "id": 235, + "string": "Given the linearization of the AMR x 1 , ." + }, + { + "id": 236, + "string": "." + }, + { + "id": 237, + "string": "." 
+ }, + { + "id": 238, + "string": ", x N , as discussed in Section 2, and an edge between two nodes x i and x j , the length of the edge is defined as |j − i|." + }, + { + "id": 239, + "string": "For instance, in the AMR of Figure 1 , the edge between eat-01 and :instrument is a dependency of length five, because of the distance between the two words in the linearization eat-01 :arg0 he :arg1 pizza :instrument." + }, + { + "id": 240, + "string": "We then compute the maximum dependency length for each AMR graph." + }, + { + "id": 241, + "string": "To verify the hypothesis that long-range dependencies contribute to the improvements of graph models, we compare the models as a function of the maximum dependency length in each example." + }, + { + "id": 242, + "string": "Longer dependencies are sometimes caused by reentrancies, as in the dependency between :part-of and he in Figure 1 ." + }, + { + "id": 243, + "string": "To verify that the contribution in terms of longer dependencies is complementary to that of reentrancies, we exclude sentences with reentrancies from this analysis." + }, + { + "id": 244, + "string": "Table 7 shows the statistics for this measure." + }, + { + "id": 245, + "string": "Results are shown in Table 8 ." + }, + { + "id": 246, + "string": "The graph encoder always outperforms both the sequential and the tree encoder." + }, + { + "id": 247, + "string": "The gap with the sequential encoder increases for longer dependencies." + }, + { + "id": 248, + "string": "This indicates that longer dependencies are an important factor in improving results for both tree and graph encoders, especially for the latter." + }, + { + "id": 249, + "string": "Conclusions We introduced models for AMR-to-text generation with the purpose of investigating the difference between sequential, tree and graph encoders." + }, + { + "id": 250, + "string": "We showed that encoding reentrancies improves overall performance." 
+ }, + { + "id": 251, + "string": "We observed bigger benefits when the input AMR graphs have a larger number of reentrant structures and longer dependencies." + }, + { + "id": 252, + "string": "Our best graph encoder, which consists of a GCN wired to a BiLSTM network, improves over the state of the art on all tested datasets." + }, + { + "id": 253, + "string": "We inspected the differences between the models, especially in terms of co-references and control structures." + }, + { + "id": 254, + "string": "Further exploration of graph encoders is left to future work, which may result crucial to improve performance further." + } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 20 + }, + { + "section": "Input Representations", + "n": "2", + "start": 21, + "end": 54 + }, + { + "section": "Encoders", + "n": "3", + "start": 55, + "end": 56 + }, + { + "section": "Recurrent Neural Network Encoders", + "n": "3.1", + "start": 57, + "end": 60 + }, + { + "section": "TreeLSTM Encoders", + "n": "3.2", + "start": 61, + "end": 74 + }, + { + "section": "Graph Convolutional Network Encoders", + "n": "3.3", + "start": 75, + "end": 135 + }, + { + "section": "Stacking Encoders", + "n": "4", + "start": 136, + "end": 145 + }, + { + "section": "Structure on Top of Sequence", + "n": "4.1", + "start": 146, + "end": 149 + }, + { + "section": "Sequence on Top of Structure", + "n": "4.2", + "start": 150, + "end": 155 + }, + { + "section": "Experiments", + "n": "5", + "start": 156, + "end": 186 + }, + { + "section": "Reentrancies", + "n": "5.1", + "start": 187, + "end": 197 + }, + { + "section": "Manual Inspection", + "n": "5.1.1", + "start": 198, + "end": 208 + }, + { + "section": "Contrastive Pairs", + "n": "5.1.2", + "start": 209, + "end": 226 + }, + { + "section": "Long-range Dependencies", + "n": "5.2", + "start": 227, + "end": 248 + }, + { + "section": "Conclusions", + "n": "6", + "start": 249, + "end": 254 + } + ], + "figures": [ + { + "filename": 
"../figure/image/992-Table4-1.png", + "caption": "Table 4: Differences, with respect to the sequential baseline, in the Meteor score of the test split of LDC2017T10 as a function of the number of reentrancies.", + "page": 5, + "bbox": { + "x1": 331.68, + "x2": 501.12, + "y1": 62.4, + "y2": 143.04 + } + }, + { + "filename": "../figure/image/992-Table2-1.png", + "caption": "Table 2: Scores on the test split of LDC2015E86 and LDC2017T10. TREE is the tree-based GCNSEQ and GRAPH is the graph-based GCNSEQ.", + "page": 5, + "bbox": { + "x1": 73.92, + "x2": 288.0, + "y1": 192.48, + "y2": 253.92 + } + }, + { + "filename": "../figure/image/992-Table3-1.png", + "caption": "Table 3: Counts of reentrancies for the development and test split of LDC2017T10", + "page": 5, + "bbox": { + "x1": 76.8, + "x2": 285.12, + "y1": 316.8, + "y2": 384.0 + } + }, + { + "filename": "../figure/image/992-Table6-1.png", + "caption": "Table 6: Accuracy (%) of models, on the test split of LDC201T10, for different categories of contrastive errors: antecedent (Antec.), pronoun type (Type), number (Num.), and gender (Gender).", + "page": 7, + "bbox": { + "x1": 75.84, + "x2": 286.08, + "y1": 350.88, + "y2": 418.08 + } + }, + { + "filename": "../figure/image/992-Table7-1.png", + "caption": "Table 7: Counts of longest dependencies for the development and test split of LDC2017T10", + "page": 7, + "bbox": { + "x1": 79.67999999999999, + "x2": 283.2, + "y1": 496.79999999999995, + "y2": 563.04 + } + }, + { + "filename": "../figure/image/992-Table8-1.png", + "caption": "Table 8: Differences, with respect to the sequential baseline, in the Meteor score of the test split of LDC2017T10 as a function of the maximum dependency length.", + "page": 7, + "bbox": { + "x1": 96.0, + "x2": 266.4, + "y1": 618.24, + "y2": 699.36 + } + }, + { + "filename": "../figure/image/992-Table5-1.png", + "caption": "Table 5: Examples of generation from AMR graphs containing reentrancies. 
REF is the reference sentence.", + "page": 7, + "bbox": { + "x1": 94.56, + "x2": 503.03999999999996, + "y1": 62.4, + "y2": 303.36 + } + }, + { + "filename": "../figure/image/992-Figure2-1.png", + "caption": "Figure 2: Two ways of stacking recurrent and structural models. Left side: structure on top of sequence, where the structural encoders are applied to the hidden vectors computed by the BiLSTM. Right side: sequence on top of structure, where the structural encoder is used to create better embeddings which are then fed to the BiLSTM. The dotted lines refer to the process of converting the graph into a sequence or vice-versa.", + "page": 3, + "bbox": { + "x1": 333.59999999999997, + "x2": 499.2, + "y1": 62.879999999999995, + "y2": 276.0 + } + }, + { + "filename": "../figure/image/992-Table1-1.png", + "caption": "Table 1: BLEU and Meteor (%) scores on the development split of LDC2015E86.", + "page": 4, + "bbox": { + "x1": 308.64, + "x2": 524.16, + "y1": 62.4, + "y2": 235.2 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-10" + }, + { + "slides": { + "0": { + "title": "Sentence Summarization", + "text": [ + "Generate a shorter version of a given sentence", + "Preserve its original meaning", + "Design or refine appealing headlines" + ], + "page_nums": [ + 2 + ], + "images": [] + }, + "1": { + "title": "Seq2seq Summarization", + "text": [ + "Require less human efforts", + "Achieve the state-of-the-art performance" + ], + "page_nums": [ + 3 + ], + "images": [] + }, + "2": { + "title": "Problems of Seq2seq Summarization", + "text": [ + "Solely depend on the source text to generate summaries", + "3% of summaries 3 words", + "4 summaries repeat a word for 99 times", + "Focus on extraction rather than abstraction" + ], + "page_nums": [ + 4 + ], + "images": [] + }, + "3": { + "title": "Template based Summarization", + "text": [ + "A traditional approach to abstractive summarization", + "Fill an incomplete with the input text using the manually defined rules", + "Be able to 
produce fluent and informative summaries", + "Template [REGION] shares [open/close] [NUMBER] percent [lower/higher]", + "Source hong kong shares closed down #.# percent on friday due to an absence of buyers and fresh incentives .", + "Summary hong kong shares close #.# percent lower" + ], + "page_nums": [ + 5 + ], + "images": [] + }, + "4": { + "title": "Problems of Template based Summarization", + "text": [ + "Template construction is extremely time-consuming and requires a plenty of domain knowledge", + "It is impossible to develop all templates for summaries in various domains" + ], + "page_nums": [ + 6 + ], + "images": [] + }, + "5": { + "title": "Motivation", + "text": [ + "Use actual summaries in the training datasets as soft templates to combine seq2seq and template-based summarization", + "Seq2seq Guide the generation of seq2seq", + "Template-based Automatically learn to rewrite from soft templates" + ], + "page_nums": [ + 7 + ], + "images": [] + }, + "7": { + "title": "Contributions", + "text": [ + "Introduce soft templates to improve the readability and stability in seq2seq", + "Extend seq2seq to conduct template reranking and template-aware summary generation simultaneously", + "Fuse the IR-based ranking technique and seq2seq-based generation technique, utilizing both supervisions", + "Demonstrate potential to generate diversely" + ], + "page_nums": [ + 9 + ], + "images": [] + }, + "8": { + "title": "Flow Chat", + "text": [ + "Retrieve Search actual summaries as candidate soft templates", + "Rerank Find out the most proper soft template from the candidates", + "Rewrite Generate the summary based on source sentence and soft template", + "Retrieve Rerank Rewrite Sentence Candidates Template Summary" + ], + "page_nums": [ + 11 + ], + "images": [ + "figure/image/993-Figure1-1.png" + ] + }, + "14": { + "title": "Setting", + "text": [ + "Dataset Gigaword (sentence, headline) pairs", + "Dataset Train Dev. 
Test" + ], + "page_nums": [ + 18 + ], + "images": [ + "figure/image/993-Table1-1.png" + ] + }, + "15": { + "title": "ROUGE Performance", + "text": [ + "Re3Sum significantly outperforms other approaches", + "Model ROUGE-1 ROUGE-2 ROUGE-L" + ], + "page_nums": [ + 19 + ], + "images": [ + "figure/image/993-Table3-1.png" + ] + }, + "16": { + "title": "Linguistic Quality Performance", + "text": [ + "Low LEN DIF and LESS 3 Stable", + "Low NEW NE and NEW UP Faithful", + "Item Template OpenNMT Re3Sum" + ], + "page_nums": [ + 20 + ], + "images": [ + "figure/image/993-Table5-1.png" + ] + }, + "17": { + "title": "Effects of Template", + "text": [ + "Performance highly relies on templates", + "The rewriting ability is strong", + "Type ROUGE-1 ROUGE-2 ROUGE-L" + ], + "page_nums": [ + 21 + ], + "images": [ + "figure/image/993-Table6-1.png" + ] + }, + "18": { + "title": "Generation Diversity", + "text": [ + "OpenNMT Beam search n-best outputs", + "Re3Sum Provide different templates", + "Source anny ainge said thursday he had two one-hour meetings with the new owners of the boston celtics but no deal has been completed for him to return to the franchise .", + "Target ainge says no deal completed with celtics major says no deal with spain on gibraltar Templates roush racing completes deal with red sox owner", + "Re3Sum ainge says no deal done with celtics ainge talks with new owners ainge talks with celtics owners OpenNMT ainge talks with new owners" + ], + "page_nums": [ + 22 + ], + "images": [] + }, + "19": { + "title": "Conclusion", + "text": [ + "Introduce soft templates as additional input to guide seq2seq summarization", + "Combine IR-based ranking techniques and seq2seq-based generation techniques to utilize both supervisions", + "Improve informativeness, stability, readability and diversity" + ], + "page_nums": [ + 24 + ], + "images": [] + } + }, + "paper_title": "Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization", + "paper_id": "993", + "paper": { + 
"title": "Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization", + "abstract": "Most previous seq2seq summarization systems purely depend on the source text to generate summaries, which tends to work unstably. Inspired by the traditional template-based summarization approaches, this paper proposes to use existing summaries as soft templates to guide the seq2seq model. To this end, we use a popular IR platform to Retrieve proper summaries as candidate templates. Then, we extend the seq2seq framework to jointly conduct template Reranking and template-aware summary generation (Rewriting). Experiments show that, in terms of informativeness, our model significantly outperforms the state-of-the-art methods, and even soft templates themselves demonstrate high competitiveness. In addition, the import of high-quality external summaries improves the stability and readability of generated summaries.", + "text": [ + { + "id": 0, + "string": "Introduction The exponentially growing online information has necessitated the development of effective automatic summarization systems." + }, + { + "id": 1, + "string": "In this paper, we focus on an increasingly intriguing task, i.e., abstractive sentence summarization (Rush et al., 2015a) , which generates a shorter version of a given sentence while attempting to preserve its original meaning." + }, + { + "id": 2, + "string": "It can be used to design or refine appealing headlines." + }, + { + "id": 3, + "string": "Recently, the application of the attentional sequence-to-sequence (seq2seq) framework has attracted growing attention and achieved state-of-the-art performance on this task (Rush et al., 2015a; Chopra et al., 2016; Nallapati et al., 2016) ." + }, + { + "id": 4, + "string": "Most previous seq2seq models purely depend on the source text to generate summaries." 
+ }, + { + "id": 5, + "string": "However, as reported in many studies (Koehn and Knowles, 2017) , the performance of a seq2seq model deteriorates quickly with the increase of the length of generation." + }, + { + "id": 6, + "string": "Our experiments also show that seq2seq models tend to \"lose control\" sometimes." + }, + { + "id": 7, + "string": "For example, 3% of summaries contain less than 3 words, while there are 4 summaries repeating a word for even 99 times." + }, + { + "id": 8, + "string": "These results largely reduce the informativeness and readability of the generated summaries." + }, + { + "id": 9, + "string": "In addition, we find seq2seq models usually focus on copying source words in order, without any actual \"summarization\"." + }, + { + "id": 10, + "string": "Therefore, we argue that, the free generation based on the source sentence is not enough for a seq2seq model." + }, + { + "id": 11, + "string": "Template based summarization (e.g., Zhou and Hovy (2004) ) is a traditional approach to abstractive summarization." + }, + { + "id": 12, + "string": "In general, a template is an incomplete sentence which can be filled with the input text using the manually defined rules." + }, + { + "id": 13, + "string": "For instance, a concise template to conclude the stock market quotation is: [REGION] shares [open/close] [NUMBER] percent [lower/higher], e.g., \"hong kong shares close #.# percent lower\"." + }, + { + "id": 14, + "string": "Since the templates are written by humans, the produced summaries are usually fluent and informative." + }, + { + "id": 15, + "string": "However, the construction of templates is extremely time-consuming and requires plenty of domain knowledge." + }, + { + "id": 16, + "string": "Moreover, it is impossible to develop all templates for summaries in various domains." 
+ }, + { + "id": 17, + "string": "Inspired by retrieve-based conversation systems (Ji et al., 2014) , we assume the golden summaries of the similar sentences can provide a reference point to guide the input sentence summarization process." + }, + { + "id": 18, + "string": "We call these existing summaries soft templates since no actual rules are needed to build new summaries from them." + }, + { + "id": 19, + "string": "Due to the strong rewriting ability of the seq2seq framework (Cao et al., 2017a) , in this paper, we propose to combine the seq2seq and template based summarization approaches." + }, + { + "id": 20, + "string": "We call our summarization system Re 3 Sum, which consists of three modules: Retrieve, Rerank and Rewrite." + }, + { + "id": 21, + "string": "We utilize a widely-used Information Retrieval (IR) platform to find out candidate soft templates from the training corpus." + }, + { + "id": 22, + "string": "Then, we extend the seq2seq model to jointly learn template saliency measurement (Rerank) and final summary generation (Rewrite)." + }, + { + "id": 23, + "string": "Specifically, a Recurrent Neural Network (RNN) encoder is applied to convert the input sentence and each candidate template into hidden states." + }, + { + "id": 24, + "string": "In Rerank, we measure the informativeness of a candidate template according to its hidden state relevance to the input sentence." + }, + { + "id": 25, + "string": "The candidate template with the highest predicted informativeness is regarded as the actual soft template." + }, + { + "id": 26, + "string": "In Rewrite, the summary is generated according to the hidden states of both the sentence and template." + }, + { + "id": 27, + "string": "We conduct extensive experiments on the popular Gigaword dataset (Rush et al., 2015b) ." 
+ }, + { + "id": 28, + "string": "Experiments show that, in terms of informativeness, Re 3 Sum significantly outperforms the state-of-the-art seq2seq models, and even soft templates themselves demonstrate high competitiveness." + }, + { + "id": 29, + "string": "In addition, the import of high-quality external summaries improves the stability and readability of generated summaries." + }, + { + "id": 30, + "string": "The contributions of this work are summarized as follows: • We propose to introduce soft templates as additional input to improve the readability and stability of seq2seq summarization systems." + }, + { + "id": 31, + "string": "Code and results can be found at http://www4.comp.polyu." + }, + { + "id": 32, + "string": "edu.hk/˜cszqcao/ • We extend the seq2seq framework to conduct template reranking and template-aware summary generation simultaneously." + }, + { + "id": 33, + "string": "• We fuse the popular IR-based and seq2seq-based summarization systems, which fully utilize the supervisions from both sides." + }, + { + "id": 34, + "string": "Method As shown in Fig." + }, + { + "id": 35, + "string": "1 we choose the one with the maximal actual saliency score in C, which speeds up convergence and shows no obvious side effect in the experiments." + }, + { + "id": 36, + "string": "Then, we jointly conduct reranking and rewriting through a shared encoder." + }, + { + "id": 37, + "string": "Specifically, both the sentence x and the soft template r are converted into hidden states with an RNN encoder." + }, + { + "id": 38, + "string": "In the Rerank module, we measure the saliency of r according to its hidden state relevance to x." + }, + { + "id": 39, + "string": "In the Rewrite module, an RNN decoder combines the hidden states of x and r to generate a summary y." + }, + { + "id": 40, + "string": "More details will be described in the rest of this section. Retrieve The purpose of this module is to find out candidate templates from the training corpus." 
+ }, + { + "id": 41, + "string": "We assume that similar sentences should hold similar summary patterns." + }, + { + "id": 42, + "string": "Therefore, given a sentence x, we find out its analogies in the corpus and pick their summaries as the candidate templates." + }, + { + "id": 43, + "string": "Since the size of our dataset is quite large (over 3M), we leverage the widely-used Information Retrieval (IR) system Lucene 1 to index and search efficiently." + }, + { + "id": 44, + "string": "We keep the default settings of Lucene 2 to build the IR system." + }, + { + "id": 45, + "string": "For each input sentence, we select top 30 searching results as candidate templates." + }, + { + "id": 46, + "string": "Jointly Rerank and Rewrite To conduct template-aware seq2seq generation (rewriting), it is a necessary step to encode both the source sentence x and soft template r into hidden states." + }, + { + "id": 47, + "string": "Considering that the matching networks based on hidden states have demonstrated the strong ability to measure the relevance of two pieces of texts (e.g., ), we propose to jointly conduct reranking and rewriting through a shared encoding step." + }, + { + "id": 48, + "string": "Specifically, we employ a bidirectional Recurrent Neural Network (BiRNN) encoder to read x and r. Take the sentence x as an example." + }, + { + "id": 49, + "string": "Its hidden state of the forward RNN at timestamp i can be Figure 1: Flow chart of the proposed method." + }, + { + "id": 50, + "string": "We use the dashed line for Retrieve since there is an IR system embedded." + }, + { + "id": 51, + "string": "represented by: − → h x i = RNN(x i , − → h x i−1 ) (1) The BiRNN consists of a forward RNN and a backward RNN." + }, + { + "id": 52, + "string": "Suppose the corresponding outputs are [ − → h x 1 ; · · · ; − → h x −1 ] and [ ← − h x 1 ; · · · ; ← − h x −1 ] , respectively, where the index \"−1\" stands for the last element." 
+ }, + { + "id": 53, + "string": "Then, the composite hidden state of a word is the concatenation of the two RNN representations, i.e., h x i = [ − → h x i ; ← − h x i ]." + }, + { + "id": 54, + "string": "The entire representation for the source sentence is [h x 1 ; · · · ; h x −1 ] ." + }, + { + "id": 55, + "string": "Since a soft template r can also be regarded as a readable concise sentence, we use the same BiRNN encoder to convert it into hidden states [h r 1 ; · · · ; h r −1 ]." + }, + { + "id": 56, + "string": "Rerank In Retrieve, the template candidates are ranked according to the text similarity between the corresponding indexed sentences and the input sentence." + }, + { + "id": 57, + "string": "However, for the summarization task, we expect the soft template r resembles the actual summary y * as much as possible." + }, + { + "id": 58, + "string": "Here we use the widely-used summarization evaluation metrics ROUGE (Lin, 2004) to measure the actual saliency s * (r, y * ) (see Section 3.2)." + }, + { + "id": 59, + "string": "We utilize the hidden states of x and r to predict the saliency s of the template." + }, + { + "id": 60, + "string": "Specifically, we regard the output of the BiRNN as the representation of the sentence or template: h x = [ ← − h x 1 ; − → h x −1 ] (2) h r = [ ← − h r 1 ; − → h r −1 ] (3) Next, we use Bilinear network to predict the saliency of the template for the input sentence." + }, + { + "id": 61, + "string": "s(r, x) = sigmoid(h r W s h T x + b s ), (4) where W s and b s are parameters of the Bilinear network, and we add the sigmoid activation function to make the range of s consistent with the actual saliency s * ." + }, + { + "id": 62, + "string": "According to , Bilinear outperforms multi-layer forward neural networks in relevance measurement." + }, + { + "id": 63, + "string": "As shown later, the difference of s and s * will provide additional supervisions for the seq2seq framework." 
+ }, + { + "id": 64, + "string": "Rewrite The soft template r selected by the Rerank module has already competed with the state-of-the-art method in terms of ROUGE evaluation (see Table 4 )." + }, + { + "id": 65, + "string": "However, r usually contains a lot of named entities that do not appear in the source (see Table 5 )." + }, + { + "id": 66, + "string": "Consequently, it is hard to ensure that the soft templates are faithful to the input sentences." + }, + { + "id": 67, + "string": "Therefore, we leverage the strong rewriting ability of the seq2seq model to generate more faithful and informative summaries." + }, + { + "id": 68, + "string": "Specifically, since the input of our system consists of both the sentence and soft template, we use the concatenation function 3 to combine the hidden states of the sentence and template: H c = [h x 1 ; · · · ; h x −1 ; h r 1 ; · · · ; h r −1 ] (5) The combined hidden states are fed into the prevailing attentional RNN decoder to generate the decoding hidden state at the position t: s t = Att-RNN(s t−1 , y t−1 , H c ), (6) where y t−1 is the previous output summary word." + }, + { + "id": 69, + "string": "Finally, a softmax layer is introduced to predict the current summary word: o t = softmax(s t W o ), (7) where W o is a parameter matrix." + }, + { + "id": 70, + "string": "Learning There are two types of costs in our system." + }, + { + "id": 71, + "string": "For Rerank, we expect the predicted saliency s(r, x) close to the actual saliency s * (r, y * )." + }, + { + "id": 72, + "string": "Therefore, J R (θ) = CE(s(r, x), s * (r, y * )) (8) = −s * log s − (1 − s * ) log(1 − s), where θ stands for the model parameters." + }, + { + "id": 73, + "string": "For Rewrite, the learning goal is to maximize the estimated probability of the actual summary y * ." 
+ }, + { + "id": 74, + "string": "We adopt the common negative log-likelihood (NLL) as the loss function: J G (θ) = − log(p(y * |x, r)) (9) = − Σ t log(o t [y * t ]) To make full use of supervisions from both sides, we combine the above two costs as the final loss function: J(θ) = J R (θ) + J G (θ) (10) We use mini-batch Stochastic Gradient Descent (SGD) to tune model parameters." + }, + { + "id": 75, + "string": "The batch size is 64." + }, + { + "id": 76, + "string": "To enhance generalization, we introduce dropout (Srivastava et al., 2014) with probability p = 0.3 for the RNN layers." + }, + { + "id": 77, + "string": "The initial learning rate is 1, and it will decay by 50% if the generation loss does not decrease on the validation set." + }, + { + "id": 78, + "string": "Experiments Datasets We conduct experiments on the Annotated English Gigaword corpus, as with (Rush et al., 2015b) ." + }, + { + "id": 79, + "string": "This parallel corpus is produced by pairing the first sentence in the news article and its headline as the summary with heuristic rules." + }, + { + "id": 80, + "string": "All the training, development and test datasets can be downloaded at https://github." + }, + { + "id": 81, + "string": "com/harvardnlp/sent-summary." + }, + { + "id": 82, + "string": "The statistics of the Gigaword corpus is presented in Table 1." + }, + { + "id": 83, + "string": "AvgSourceLen is the average input sentence length and AvgTargetLen is the average summary length." + }, + { + "id": 84, + "string": "COPY means the copy ratio in the summaries (without stopwords)." + }, + { + "id": 85, + "string": "Evaluation Metrics We adopt ROUGE (Lin, 2004) for automatic evaluation." + }, + { + "id": 86, + "string": "ROUGE has been the standard evaluation metric for DUC shared tasks since 2004." 
+ }, + { + "id": 87, + "string": "It measures the quality of summary by computing the overlapping lexical units between the candidate summary and actual summaries, such as unigram, bi-gram and longest common subsequence (LCS)." + }, + { + "id": 88, + "string": "Following the common practice, we report ROUGE-1 (uni-gram), ROUGE-2 (bi-gram) and ROUGE-L (LCS) F1 scores 4 in the following experiments." + }, + { + "id": 89, + "string": "We also measure the actual saliency of a candidate template r with its combined ROUGE scores given the actual summary y * : s * (r, y * ) = RG-1(r, y * ) + RG-2(r, y * ), (11) where \"RG\" stands for ROUGE for short." + }, + { + "id": 90, + "string": "ROUGE mainly evaluates informativeness." + }, + { + "id": 91, + "string": "We also introduce a series of metrics to measure the summary quality from the following aspects: LEN DIF The absolute value of the length difference between the generated summaries and the actual summaries." + }, + { + "id": 92, + "string": "We use mean value ± standard deviation to illustrate this item." + }, + { + "id": 93, + "string": "The average value partially reflects the readability and informativeness, while the standard deviation links to stability." + }, + { + "id": 94, + "string": "LESS 3 The number of the generated summaries, which contain less than three tokens." + }, + { + "id": 95, + "string": "These extremely short summaries are usually unreadable." + }, + { + "id": 96, + "string": "COPY The proportion of the summary words (without stopwords) copied from the source sentence." + }, + { + "id": 97, + "string": "A seriously large copy ratio indicates that the summarization system pays more attention to compression rather than required abstraction." + }, + { + "id": 98, + "string": "NEW NE The number of the named entities that do not appear in the source sentence or actual summary." 
+ }, + { + "id": 99, + "string": "Intuitively, the appearance of new named entities in the summary is likely to bring unfaithfulness." + }, + { + "id": 100, + "string": "We use Stanford CoreNLP (Manning et al., 2014) to recognize named entities." + }, + { + "id": 101, + "string": "Implementation Details We use the popular seq2seq framework OpenNMT 5 as the starting point." + }, + { + "id": 102, + "string": "To make our model more general, we retain the default settings of OpenNMT to build the network architecture." + }, + { + "id": 103, + "string": "Specifically, the dimensions of word embeddings and RNN are both 500, and the encoder and decoder structures are two-layer bidirectional Long Short Term Memory Networks (LSTMs)." + }, + { + "id": 104, + "string": "The only difference is that we add the argument \"share embeddings\" to share the word embeddings between the encoder and decoder." + }, + { + "id": 105, + "string": "This practice largely reduces model parameters for the monolingual task." + }, + { + "id": 106, + "string": "On our computer (GPU: GTX 1080, Memory: 16G, CPU: i7-7700K), the training takes about 2 days." + }, + { + "id": 107, + "string": "During test, we use beam search of size 5 to generate summaries." + }, + { + "id": 108, + "string": "We add the argument \"replace unk\" to replace the generated unknown words with the source word that holds the highest attention weight." + }, + { + "id": 109, + "string": "Since the generated summaries are often shorter than the actual ones, we introduce an additional length penalty argument \"alpha 1\" to encourage longer generation, like Wu et al." + }, + { + "id": 110, + "string": "(2016) ." + }, + { + "id": 111, + "string": "Baselines We compare our proposed model with the following state-of-the-art neural summarization systems: 2015) for summarization." + }, + { + "id": 112, + "string": "This model contained two-layer LSTMs with 500 hidden units in each layer." 
+ }, + { + "id": 113, + "string": "OpenNMT We also implement the standard attentional seq2seq model with OpenNMT." + }, + { + "id": 114, + "string": "All the settings are the same as our system." + }, + { + "id": 115, + "string": "It is noted that OpenNMT officially examined the Gigaword dataset." + }, + { + "id": 116, + "string": "We distinguish the official result 6 and our experimental result with suffixes \"O\" and \"I\" respectively." + }, + { + "id": 117, + "string": "FTSum Cao et al." + }, + { + "id": 118, + "string": "(2017b) encoded the facts extracted from the source sentence to improve both the faithfulness and informativeness of generated summaries." + }, + { + "id": 119, + "string": "In addition, to evaluate the effectiveness of our joint learning framework, we develop a baseline named \"PIPELINE\"." + }, + { + "id": 120, + "string": "Its architecture is identical to Re 3 Sum." + }, + { + "id": 121, + "string": "However, it trains the Rerank module and Rewrite module in pipeline." + }, + { + "id": 122, + "string": "tentional seq2seq model OpenNMT I ." + }, + { + "id": 123, + "string": "Therefore, it is safe to conclude that soft templates contribute greatly to guiding the generation of summaries." 
+ }, + { + "id": 129, + "string": "Rerank The template with the maximal predicted ROUGE scores among the 30 candidate templates." + }, + { + "id": 130, + "string": "It is the actual soft template we adopt." + }, + { + "id": 131, + "string": "As shown in Table 4 , the performance of Random is terrible, indicating it is impossible to use one summary template to fit various actual summaries." + }, + { + "id": 132, + "string": "Rerank largely outperforms First, which verifies the effectiveness of the Rerank module." + }, + { + "id": 133, + "string": "However, according to Max and Rerank, we find the Rerank performance of Re 3 Sum is far from perfect." + }, + { + "id": 134, + "string": "Likewise, comparing Max and First, we observe that the improving capacity of the Retrieve module is high." + }, + { + "id": 135, + "string": "Notice that Optimal greatly exceeds all the state-of-the-art approaches." + }, + { + "id": 136, + "string": "This finding strongly supports our practice of using existing summaries to guide the seq2seq models." + }, + { + "id": 137, + "string": "Linguistic Quality Evaluation We also measure the linguistic quality of generated summaries from various aspects, and the results are present in Table 5 ." + }, + { + "id": 138, + "string": "As can be seen from the rows \"LEN DIF\" and \"LESS 3\", the performance of Re 3 Sum is almost the same as that of soft templates." + }, + { + "id": 139, + "string": "The soft templates indeed well guide the summary generation." 
+ }, + { + "id": 140, + "string": "Compared with Source grid positions after the final qualifying session in the indonesian motorcycle grand prix at the sentul circuit , west java , saturday : UNK Target indonesian motorcycle grand prix grid positions Template grid positions for british grand prix OpenNMT circuit Re 3 Sum grid positions for indonesian grand prix Source india 's children are getting increasingly overweight and unhealthy and the government is asking schools to ban junk food , officials said thursday ." + }, + { + "id": 141, + "string": "Target indian government asks schools to ban junk food Template skorean schools to ban soda junk food OpenNMT india 's children getting fatter Re 3 Sum indian schools to ban junk food Table 7 : Examples of generated summaries." + }, + { + "id": 142, + "string": "We use Bold font to indicate the crucial rewriting behavior from the templates to generated summaries." + }, + { + "id": 143, + "string": "Re 3 Sum, the standard deviation of LEN DIF is 0.7 times larger in OpenNMT, indicating that OpenNMT works quite unstably." + }, + { + "id": 144, + "string": "Moreover, OpenNMT generates 53 extremely short summaries, which seriously reduces readability." + }, + { + "id": 145, + "string": "Meanwhile, the copy ratio of actual summaries is 36%." + }, + { + "id": 146, + "string": "Therefore, the copy mechanism is severely overweighted in OpenNMT." + }, + { + "id": 147, + "string": "Our model is encouraged to generate according to human-written soft templates, which relatively diminishes copying from the source sentences." + }, + { + "id": 148, + "string": "Look at the last row \"NEW NE\"." + }, + { + "id": 149, + "string": "A number of new named entities appear in the soft templates, which makes them quite unfaithful to source sentences." 
+ }, + { + "id": 151, + "string": "It highlights the rewriting ability of our seq2seq framework." + }, + { + "id": 152, + "string": "Effect of Templates In this section, we investigate how soft templates affect our model." + }, + { + "id": 153, + "string": "At the beginning, we feed different types of soft templates (refer to Table 4 ) into the Rewriting module of Re 3 Sum." + }, + { + "id": 154, + "string": "As illustrated in Table 6 , the more high-quality templates are provided, the higher ROUGE scores are achieved." + }, + { + "id": 155, + "string": "It is interesting to see that,while the ROUGE-2 score of Random templates is zero, our model can still generate acceptable summaries with Random templates." + }, + { + "id": 156, + "string": "It seems that Re 3 Sum can automatically judge whether the soft templates are trustworthy and ignore the seriously irrelevant ones." + }, + { + "id": 157, + "string": "We believe that the joint learning with the Rerank model plays a vital role here." + }, + { + "id": 158, + "string": "Next, we manually inspect the summaries generated by different methods." + }, + { + "id": 159, + "string": "We find the outputs of Re 3 Sum are usually longer and more flu-ent than the outputs of OpenNMT." + }, + { + "id": 160, + "string": "Some illustrative examples are shown in Table 7 ." + }, + { + "id": 161, + "string": "In Example 1, there is no predicate in the source sentence." + }, + { + "id": 162, + "string": "Since OpenNMT prefers selecting source words around the predicate to form the summary, it fails on this sentence." + }, + { + "id": 163, + "string": "By contract, Re 3 Sum rewrites the template and produces an informative summary." + }, + { + "id": 164, + "string": "In Example 2, OpenNMT deems the starting part of the sentences are more important, while our model, guided by the template, focuses on the second part to generate the summary." 
+ }, + { + "id": 165, + "string": "In the end, we test the ability of our model to generate diverse summaries." + }, + { + "id": 166, + "string": "In practice, a system that can provide various candidate summaries is probably more welcome." + }, + { + "id": 167, + "string": "Specifically, two candidate templates with large text dissimilarity are manually fed into the Rewriting module." + }, + { + "id": 168, + "string": "The corresponding generated summaries are shown in Table 8." + }, + { + "id": 169, + "string": "For the sake of comparison, we also present the 2-best results of OpenNMT with beam search." + }, + { + "id": 170, + "string": "As can be seen, with different templates given, our model is likely to generate dissimilar summaries." + }, + { + "id": 171, + "string": "In contrast, the 2-best results of OpenNMT are almost the same, and often a shorter summary is only a piece of the other one." + }, + { + "id": 172, + "string": "To sum up, our model demonstrates promising prospect in generation diversity." + }, + { + "id": 173, + "string": "Related Work Abstractive sentence summarization aims to produce a shorter version of a given sentence while preserving its meaning (Chopra et al., 2016) ." + }, + { + "id": 174, + "string": "This task is similar to text simplification (Saggion, 2017) and facilitates headline design and refinement." + }, + { + "id": 175, + "string": "Early studies on sentence summarization include template-based methods (Zhou and Hovy, 2004) , syntactic tree pruning (Knight and Marcu, 2002; Clarke and Lapata, 2008) and statistical machine translation techniques (Banko et al., 2000) ." + }, + { + "id": 176, + "string": "Recently, the application of the attentional seq2seq framework has attracted growing attention and achieved state-of-the-art performance on this task (Rush et al., 2015a; Chopra et al., 2016; Nallapati et al., 2016) ." 
+ }, + { + "id": 177, + "string": "In addition to the direct application of the general seq2seq framework, researchers attempted to integrate various properties of summarization." + }, + { + "id": 178, + "string": "For example, Nallapati et al." + }, + { + "id": 179, + "string": "(2016) enriched the encoder with hand-crafted features such as named entities and POS tags." + }, + { + "id": 180, + "string": "These features have played important roles in traditional feature based summarization systems." + }, + { + "id": 181, + "string": "Gu et al." + }, + { + "id": 182, + "string": "(2016) found that a large proportion of the words in the summary were copied from the source text." + }, + { + "id": 183, + "string": "Therefore, they proposed CopyNet which considered the copying mechanism during generation." + }, + { + "id": 184, + "string": "Recently, See et al." + }, + { + "id": 185, + "string": "(2017) used the coverage mechanism to discourage repetition." + }, + { + "id": 186, + "string": "Cao et al." + }, + { + "id": 187, + "string": "(2017b) encoded facts extracted from the source sentence to enhance the summary faithfulness." + }, + { + "id": 188, + "string": "There were also studies to modify the loss function to fit the evaluation metrics." + }, + { + "id": 189, + "string": "For instance, Ayana et al." + }, + { + "id": 190, + "string": "(2016) applied the Minimum Risk Training strategy to maximize the ROUGE scores of generated summaries." + }, + { + "id": 191, + "string": "Paulus et al." + }, + { + "id": 192, + "string": "(2017) used the reinforcement learning algorithm to optimize a mixed objective function of likelihood and ROUGE scores." + }, + { + "id": 193, + "string": "Guu et al." + }, + { + "id": 194, + "string": "(2017) also proposed to encode human-written sentences to improve the performance of neural text generation." 
+ }, + { + "id": 195, + "string": "However, they handled the task of Language Modeling and randomly picked an existing sentence in the training corpus." + }, + { + "id": 196, + "string": "In comparison, we develop an IR system to find proper existing summaries as soft templates." + }, + { + "id": 197, + "string": "Moreover, Guu et al." + }, + { + "id": 198, + "string": "(2017) used a general seq2seq framework while we extend the seq2seq framework to conduct template reranking and template-aware summary generation simultaneously." + }, + { + "id": 199, + "string": "Conclusion and Future Work This paper proposes to introduce soft templates as additional input to guide the seq2seq summarization." + }, + { + "id": 200, + "string": "We use the popular IR platform Lucene to retrieve proper existing summaries as candidate soft templates." + }, + { + "id": 201, + "string": "Then we extend the seq2seq framework to jointly conduct template reranking and template-aware summary generation." + }, + { + "id": 202, + "string": "Experiments show that our model can generate informative, readable and stable summaries." + }, + { + "id": 203, + "string": "In addition, our model demonstrates promising prospect in generation diversity." + }, + { + "id": 204, + "string": "We believe our work can be extended in various aspects." + }, + { + "id": 205, + "string": "On the one hand, since the candidate templates are far inferior to the optimal ones, we intend to improve the Retrieve module, e.g., by indexing both the sentence and summary fields." + }, + { + "id": 206, + "string": "On the other hand, we plan to test our system on the other tasks such as document-level summarization and short text conversation." 
+ } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 33 + }, + { + "section": "Method", + "n": "2", + "start": 34, + "end": 39 + }, + { + "section": "Retrieve", + "n": "2.1", + "start": 40, + "end": 45 + }, + { + "section": "Jointly Rerank and Rewrite", + "n": "2.2", + "start": 46, + "end": 55 + }, + { + "section": "Rerank", + "n": "2.2.1", + "start": 56, + "end": 63 + }, + { + "section": "Rewrite", + "n": "2.2.2", + "start": 64, + "end": 69 + }, + { + "section": "Learning", + "n": "2.3", + "start": 70, + "end": 77 + }, + { + "section": "Datasets", + "n": "3.1", + "start": 78, + "end": 84 + }, + { + "section": "Evaluation Metrics", + "n": "3.2", + "start": 85, + "end": 100 + }, + { + "section": "Implementation Details", + "n": "3.3", + "start": 101, + "end": 110 + }, + { + "section": "Baselines", + "n": "3.4", + "start": 111, + "end": 123 + }, + { + "section": "Informativeness Evaluation", + "n": "3.5", + "start": 124, + "end": 136 + }, + { + "section": "Linguistic Quality Evaluation", + "n": "3.6", + "start": 137, + "end": 151 + }, + { + "section": "Effect of Templates", + "n": "3.7", + "start": 152, + "end": 172 + }, + { + "section": "Related Work", + "n": "4", + "start": 173, + "end": 198 + }, + { + "section": "Conclusion and Future Work", + "n": "5", + "start": 199, + "end": 206 + } + ], + "figures": [ + { + "filename": "../figure/image/993-Table6-1.png", + "caption": "Table 6: ROUGE F1 (%) performance of Re3Sum generated with different soft templates.", + "page": 5, + "bbox": { + "x1": 313.92, + "x2": 519.36, + "y1": 62.879999999999995, + "y2": 145.92 + } + }, + { + "filename": "../figure/image/993-Table4-1.png", + "caption": "Table 4: ROUGE F1 (%) performance of different types of soft templates.", + "page": 5, + "bbox": { + "x1": 100.8, + "x2": 261.12, + "y1": 304.8, + "y2": 388.32 + } + }, + { + "filename": "../figure/image/993-Table3-1.png", + "caption": "Table 3: ROUGE F1 (%) performance. 
“RG” represents “ROUGE” for short. “∗” indicates statistical significance of the corresponding model with respect to the baseline model on the 95% confidence interval in the official ROUGE script.", + "page": 5, + "bbox": { + "x1": 81.6, + "x2": 280.32, + "y1": 62.879999999999995, + "y2": 215.04 + } + }, + { + "filename": "../figure/image/993-Table5-1.png", + "caption": "Table 5: Statistics of different types of summaries.", + "page": 5, + "bbox": { + "x1": 72.0, + "x2": 291.36, + "y1": 672.48, + "y2": 743.04 + } + }, + { + "filename": "../figure/image/993-Table7-1.png", + "caption": "Table 7: Examples of generated summaries. We use Bold font to indicate the crucial rewriting behavior from the templates to generated summaries.", + "page": 6, + "bbox": { + "x1": 72.0, + "x2": 526.0799999999999, + "y1": 62.879999999999995, + "y2": 233.28 + } + }, + { + "filename": "../figure/image/993-Figure1-1.png", + "caption": "Figure 1: Flow chart of the proposed method. We use the dashed line for Retrieve since there is an IR system embedded.", + "page": 2, + "bbox": { + "x1": 116.64, + "x2": 481.44, + "y1": 61.44, + "y2": 113.28 + } + }, + { + "filename": "../figure/image/993-Table8-1.png", + "caption": "Table 8: Examples of generation with diversity. We use Bold font to indicate the difference between two summaries.", + "page": 7, + "bbox": { + "x1": 72.0, + "x2": 526.0799999999999, + "y1": 62.879999999999995, + "y2": 313.92 + } + }, + { + "filename": "../figure/image/993-Table1-1.png", + "caption": "Table 1: Data statistics for English Gigaword. AvgSourceLen is the average input sentence length and AvgTargetLen is the average summary length.
COPY means the copy ratio in the summaries (without stopwords).", + "page": 3, + "bbox": { + "x1": 325.92, + "x2": 507.35999999999996, + "y1": 217.44, + "y2": 287.03999999999996 + } + }, + { + "filename": "../figure/image/993-Figure2-1.png", + "caption": "Figure 2: Jointly Rerank and Rewrite", + "page": 3, + "bbox": { + "x1": 82.56, + "x2": 515.04, + "y1": 61.44, + "y2": 173.28 + } + }, + { + "filename": "../figure/image/993-Table2-1.png", + "caption": "Table 2: Final perplexity on the development set. † indicates the value is cited from the corresponding paper. ABS+, Featseq2seq and Luong-NMT do not provide this value.", + "page": 4, + "bbox": { + "x1": 352.8, + "x2": 480.47999999999996, + "y1": 544.8, + "y2": 642.24 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-11" + }, + { + "slides": { + "0": { + "title": "Conversational Agents", + "text": [ + "Sorry, I don't understand what you're saying", + "Data augmentation might help" + ], + "page_nums": [ + 1, + 2, + 3, + 4, + 5, + 6, + 7 + ], + "images": [] + }, + "1": { + "title": "Paraphrase Generation", + "text": [ + "Rephrasing a given text in multiple ways", + "Paraphrases how could i increase my height ? what should i do to increase body height ? what are the ways to increase height ? are there some ways to increase body height ?" + ], + "page_nums": [ + 8, + 9, + 10, + 11, + 12 + ], + "images": [] + }, + "2": { + "title": "Current State", + "text": [ + "Source how do i increase body height ?", + "Synonym how do i grow body height ?", + "Phrase how do i increase the body measurement vertically?", + "Beam how do i increase my height ? how do i increase my body height how do i increase the height ? how would i increase my body height" + ], + "page_nums": [ + 13, + 14, + 15, + 16, + 17, + 18, + 19, + 20, + 21, + 22, + 23 + ], + "images": [] + }, + "3": { + "title": "What can we do", + "text": [ + "Source how do i increase body height ?", + "Beam how do i increase my height ? how can i decrease my body weight ?
what do i do to increase the height ? i am 17, what steps to take to decrease weight ?" + ], + "page_nums": [ + 24, + 25, + 26, + 27, + 28, + 29 + ], + "images": [] + }, + "4": { + "title": "What we need", + "text": [ + "Find k diverse paraphrases with high fidelity", + "Method based on subset selection of candidate (sub)sequences" + ], + "page_nums": [ + 30, + 31, + 32 + ], + "images": [] + }, + "5": { + "title": "Subset Selection", + "text": [ + "how do i increase my how can i decrease the how can i grow the what ways exist to increase how would I increase the how do I decrease the", + "how do i increase my how can i decrease the how can i grow the how do i increase my what ways exist to increase how can i grow the how would I increase the what ways exist to increase how do I decrease the are there ways to increase", + "If F is submodular + monotone = Greedy algo. with good bounds exists" + ], + "page_nums": [ + 33, + 34, + 35, + 36 + ], + "images": [] + }, + "6": { + "title": "Submodularity", + "text": [ + "F = # Unique Coloured items" + ], + "page_nums": [ + 37, + 38, + 39, + 40, + 41, + 42, + 43 + ], + "images": [] + }, + "8": { + "title": "DiPS", + "text": [ + "Induce Diversity while not compromising on Fidelity", + "Diversity Components Fidelity Components", + "where can film I How find that that picture", + "I get can I Where can I : 3k Candidate Subsequences Source Sentence", + "Where can I find that film? Where can I get that movie?", + "Rewards unique n-grams How can I get that picture?", + "Where can I get that film?", + "I find that picture", + "Where can I get that movie?", + "Where can I ENCODER DECODER", + "Encoder k-sequences", + "Encoder Decoder k-sequences", + "How can I get that picture Fidelity" + ], + "page_nums": [ + 45, + 46, + 47, + 48, + 49, + 50, + 51, + 52, + 53, + 54, + 55, + 59, + 60 + ], + "images": [] + }, + "9": { + "title": "Diversity Components", + "text": [ + "Rewards Structural Coverage where can film I How N find that that picture", + "I get can I Where can I : 3k Candidate Subsequences Source Sentence", + "n xngram : 3k Candidate Subsequences Source Sentence", + "Where can I n=1 xX find that film? Where can I get that movie Where can I get that movie? Rewards unique n-grams How can I get that picture?", + "Rewards unique n-grams How can I get that picture?", + "Synonym (similar embeddings) Where can I get that film? Structural Coverage Where can I find that picture How can I get that picture (xi, xj) k-sequences", + "Where can I find that picture How can I get that picture k-sequences", + "Rewards Structural Coverage xi V(t) xj X", + "Where can I get that Where movie? can I ENCODER DECODER", + ": 3k Candidate Subsequences n xngram", + "I get can I Where can I Source Sentence", + "Where can I find that film Where can I get that movie", + "(xi, xj) EditDistance(xi, xj)", + "Where can I get that Where movie? can I |xi| |xj|" + ], + "page_nums": [ + 56, + 57, + 58 + ], + "images": [] + }, + "10": { + "title": "Fidelity Components", + "text": [ + "where can film I How find that that picture", + "I get can I Where can I : 3k Candidate Subsequences Source Sentence N", + "Where can I find that film? Where can I get that movie n |xn-gram sn-gram", + "xX n=1 Rewards unique n-grams How can I get that picture?", + "Where can I get that film?
Embedding based Similarity", + "Where can I find that picture", + "wix Where can I get that movie?", + "where can film I How Lexical Similarity find that that picture", + "How can I get that picture (x, s)", + "Rewards Structural Coverage xX" + ], + "page_nums": [ + 61, + 62, + 63 + ], + "images": [] + }, + "11": { + "title": "DiPS Objective", + "text": [ + "Diversity Components Fidelity Components", + "where can film I How find that that picture", + "I get can I Where can I : 3k Candidate Subsequences Source Sentence", + "Where can I find that film? Where can I get that movie", + "Rewards unique n-grams How can I get that picture?", + "Synonym (similar embeddings) Where can I get that film?", + "Where can I find that picture How can I get that picture Rewards Structural Coverage", + "Where can I get that movie?" + ], + "page_nums": [ + 64, + 65, + 66, + 67 + ], + "images": [] + }, + "12": { + "title": "Fidelity and Diversity", + "text": [ + "SBS DBS VAE-SVG DPP SSR DiPS (Ours) DiPS induces diversity without", + "compromising fidelity (4-Distinct Diversity)" + ], + "page_nums": [ + 68, + 69, + 70, + 71 + ], + "images": [] + }, + "13": { + "title": "Data Augmentation Paraphrase Detection", + "text": [ + "No Aug SBS DPP SSR DBS DiPS (Ours)", + "DiPS data augmentation helps in paraphrase detection" + ], + "page_nums": [ + 72, + 73 + ], + "images": [] + }, + "14": { + "title": "Data Augmentation for Intent Classification", + "text": [ + "No. Aug SBS DBS Syn. Rep Cont.
Aug DiPS (Ours)", + "Da ta augmentation using DiPS improves inten t classification" + ], + "page_nums": [ + 74, + 75, + 76, + 77 + ], + "images": [] + } + }, + "paper_title": "Submodular Optimization-based Diverse Paraphrasing and its Effectiveness in Data Augmentation", + "paper_id": "995", + "paper": { + "title": "Submodular Optimization-based Diverse Paraphrasing and its Effectiveness in Data Augmentation", + "abstract": "Introduction Paraphrasing is the task of rephrasing a given text in multiple ways such that the semantics of the generated sentences remain unaltered. Paraphrasing Quality can be attributed to two key characteristics -fidelity which measures the semantic similarity between the input text and generated text, and diversity, which measures the lexical dissimilarity between generated sentences. Many previous works (Prakash et al., 2016; Gupta et al., 2018; Li et al., 2018) address the task of obtaining semantically similar paraphrases.", + "text": [ + { + "id": 0, + "string": "Inducing diversity in the task of paraphrasing is an important problem in NLP with applications in data augmentation and conversational agents." + }, + { + "id": 1, + "string": "Previous paraphrasing approaches have mainly focused on the issue of generating semantically similar paraphrases, while paying little attention towards diversity." + }, + { + "id": 2, + "string": "In fact, most of the methods rely solely on top-k beam search sequences to obtain a set of paraphrases." + }, + { + "id": 3, + "string": "The resulting set, however, contains many structurally similar sentences." + }, + { + "id": 4, + "string": "In this work, we focus on the task of obtaining highly diverse paraphrases while not compromising on paraphrasing quality." + }, + { + "id": 5, + "string": "We provide a novel formulation of the problem in terms of monotone submodular function maximization, specifically targeted towards the task of paraphrasing." 
+ }, + { + "id": 6, + "string": "Additionally, we demonstrate the effectiveness of our method for data augmentation on multiple tasks such as intent classification and paraphrase recognition." + }, + { + "id": 7, + "string": "In order to drive further research, we have made the source code available." + }, + { + "id": 8, + "string": "cases desirable, to produce lexically diverse ones." + }, + { + "id": 9, + "string": "Diversity in paraphrase generation finds applications in text simplification (Nisioi et al., 2017; Xu et al., 2015) , document summarization (Li et al., 2009; Nema et al., 2017) , QA systems (Fader et al., 2013; Bernhard and Gurevych, 2008) , data augmentation (Zhang et al., 2015; Wang and Yang, 2015) , conversational agents (Li et al., 2016) and information retrieval (Anick and Tipirneni, 1999) ." + }, + { + "id": 10, + "string": "To obtain a set of multiple paraphrases, most of the current paraphrasing models rely solely on topk beam search sequences." + }, + { + "id": 11, + "string": "The resulting set, however, contains many structurally similar sentences with only minor, word level changes." + }, + { + "id": 12, + "string": "There have been some prior works (Li and Jurafsky, 2016; Elhamifar et al., 2012) which address the notion of diversity in NLP, including in sequence learning frameworks (Song et al., 2018; Vijayakumar et al., 2018) ." + }, + { + "id": 13, + "string": "Although Song et al." + }, + { + "id": 14, + "string": "(2018) address the issue of diversity in the scenario of neural conversation models using determinantal point processes (DPP), it could be naturally used for paraphrasing." + }, + { + "id": 15, + "string": "On similar lines, subset selection based on Simultaneous Sparse Recovery (SSR) (Elhamifar et al., 2012) can also be easily adapted for the same task." 
+ }, + { + "id": 16, + "string": "Though these methods are helpful in maximizing diversity, they are restrictive in terms of re-taining fidelity with respect to the source sentence." + }, + { + "id": 17, + "string": "Addressing the task of diverse paraphrasing through the lens of monotone submodular function maximization (Fujishige, 2005; Krause and Golovin; Bach et al., 2013) alleviates this problem and also provides a few additional benefits." + }, + { + "id": 18, + "string": "Firstly, the submodular objective offers better flexibility in terms of controlling diversity as well as fidelity." + }, + { + "id": 19, + "string": "Secondly, there exists a simple greedy algorithm for solving monotone submodular function maximization (Nemhauser et al., 1978) , which guarantees the diverse solution to be almost as good as the optimal solution." + }, + { + "id": 20, + "string": "Finally, many submodular programs are fast and scalable to large datasets." + }, + { + "id": 21, + "string": "Below, we list the main contributions of our paper." + }, + { + "id": 22, + "string": "We introduce Diverse Paraphraser using Submodularity (DiPS)." + }, + { + "id": 23, + "string": "DiPS maximizes a novel submodular objective function specifically targeted towards paraphrasing." + }, + { + "id": 24, + "string": "2." + }, + { + "id": 25, + "string": "We perform extensive experiments to show the effectiveness of our method in generating structurally diverse paraphrases without compromising on fidelity." + }, + { + "id": 26, + "string": "We also compare against several possible diversity inducing schemes." + }, + { + "id": 27, + "string": "3." + }, + { + "id": 28, + "string": "We demonstrate the utility of diverse paraphrases generated via DiPS as data augmentation schemes on multiple tasks such as intent and question classification." 
+ }, + { + "id": 29, + "string": "(See et al., 2017) for generating paraphrases and an evaluator based on (Parikh et al., 2016) to penalize non-paraphrastic generations." + }, + { + "id": 30, + "string": "Several other works (Cao et al., 2017; Iyyer et al., 2018) exist for paraphrasing, though they have either been superseded by newer models or are not-directly applicable to our settings." + }, + { + "id": 31, + "string": "However, most of these methods focus on the issue of generating semantically similar paraphrases, while paying little attention towards diversity." + }, + { + "id": 32, + "string": "Diversity in paraphrasing models was first explored by (Gupta et al., 2018) where they propose to generate variations based on different samples from the latent space in a deep generative framework." + }, + { + "id": 33, + "string": "Although diversity in paraphrasing models has not been explored extensively, methods have been proposed to address diversity in other NLP tasks (Li et al., 2016 (Li et al., , 2015 Gimpel et al., 2013) ." + }, + { + "id": 34, + "string": "Diverse beam search proposed by (Vijayakumar et al., 2018) generates k-diverse sequences by dividing the candidate subsequences at each time step into several groups and penalizing subsequences which are similar to prior groups." + }, + { + "id": 35, + "string": "The most relevant to our approach is the method proposed by (Song et al., 2018) for neural conversation models where they incorporate diversity by using DPP to select diverse subsequences at each time step." + }, + { + "id": 36, + "string": "Although their work is addressed in the scenario of neural conversation models, it could be naturally adapted to paraphrasing models and thus we use it as a baseline." 
+ }, + { + "id": 37, + "string": "Submodular functions have been applied to a wide variety of problems in machine learning (Iyer and Bilmes, 2013; Jegelka and Bilmes, 2011; Krause and Guestrin, 2011; Kolmogorov and Zabih, 2002) and have recently attracted much attention in several NLP tasks including document summarization (Lin and Bilmes, 2011) , data selection in machine translation (Kirchhoff and Bilmes, 2014) and goal-oriented chatbot training (Dimovski et al., 2018) ." + }, + { + "id": 38, + "string": "However, their application to sequence generation is largely unexplored." + }, + { + "id": 39, + "string": "Data augmentation is a technique for increasing the size of labeled training sets by leveraging task specific transformations which preserve class labels." + }, + { + "id": 40, + "string": "While the technique is ubiquitous in the vision community (Krizhevsky et al., 2012; Ratner et al., 2017) , data-augmentation in NLP is largely under-explored." + }, + { + "id": 41, + "string": "Most current augmentation schemes involve thesaurus based synonym replacement (Zhang et al., 2015; Wang and Yang, 2015) , and replacement by words with paradigmatic relations (Kobayashi, 2018) ." + }, + { + "id": 42, + "string": "Both of these F 1 S ← ∅ 2 N ← V 3 while |S| < k do 4 x * ← argmax x∈N F(S ∪ {x}) 5 S ← S ∪ {x * } 6 N ← N \\ {x * } 7 end 8 return S approaches try to boost the generalization abilities of downstream classification models through word-level substitutions." + }, + { + "id": 43, + "string": "However, they are inherently restrictive in terms of the diversity they can offer." + }, + { + "id": 44, + "string": "Our work offers a data-augmentation scheme via high quality paraphrases." + }, + { + "id": 45, + "string": "Background: Submodularity Let V = {v 1 , ." + }, + { + "id": 46, + "string": "." + }, + { + "id": 47, + "string": "." 
+ }, + { + "id": 48, + "string": ", v n } be a set of objects, which we refer to as the ground set, and F : 2 V → R be a set function which works on subsets S of V to return a real value." + }, + { + "id": 49, + "string": "The task is to find a subset S of bounded cardinality say |S| ≤ k that maximizes the function F, i.e., argmax S⊆V F(S)." + }, + { + "id": 50, + "string": "In general, solving this problem is intractable." + }, + { + "id": 51, + "string": "However, if the function F is monotone non-decreasing submodular, then although the problem is still NPcomplete, there exists a greedy algorithm ( Algorithm 1) (Nemhauser et al., 1978) that finds an approximate solution which is guaranteed to be within 0.632 of the optimal solution." + }, + { + "id": 52, + "string": "Submodular functions are set functions F : 2 V → R, where 2 V denotes the power set of ground set V. Submodular functions satisfy the following equivalent properties of diminishing returns: ∀X, Y ⊆ V with X ⊆ Y , and ∀s ∈ V \\ Y , we have the following." + }, + { + "id": 53, + "string": "F(X ∪ {s}) − F(X) ≥ F(Y ∪ {s}) − F(Y ) (1) In other words, the value addition due to incorporation of s decreases as the subset grows from X to Y ." + }, + { + "id": 54, + "string": "Equivalently, ∀X, Y ⊆ V , we have, F(X) + F(Y ) ≥ F(X ∪ Y ) + F(X ∩ Y ) In case the above inequalities are equalities, the function F is said to be modular." + }, + { + "id": 55, + "string": "Let F(s|X) F(X ∪ {s}) − F(X)." + }, + { + "id": 56, + "string": "Therefore, F is submodular if F(s|X) ≥ F(s|Y ) for X ⊆ Y ." + }, + { + "id": 57, + "string": "t ← 0; P ← ∅ 4 while t < T do 5 Generate top 3k most probable subsequences 6 P ← Select k based on argmax X⊆V (t) F(X) using Algorithm 1 7 t = t + 1 8 end 9 return P The second criteria which the function needs to satisfy for Algorithm 1 to be applicable is of monotonicity." + }, + { + "id": 58, + "string": "A set function F is said to be mono- tone non-decreasing if ∀X ⊆ Y, F(X) ≤ F(Y )." 
+ }, + { + "id": 59, + "string": "Submodular functions are relevant in a large class of real-world applications, therefore making them extremely useful in practice." + }, + { + "id": 60, + "string": "Additionally, submodular functions share many commonalities with convex functions, in the sense that they are closed under a number of standard operations like mixtures (non-negative weighted sum of submodular functions), truncation and some restricted compositions." + }, + { + "id": 61, + "string": "The above properties will be useful when defining the submodular objective for obtaining high quality paraphrases." + }, + { + "id": 62, + "string": "Methodology Similar to Prakash et al." + }, + { + "id": 63, + "string": "(2016) ; Gupta et al." + }, + { + "id": 64, + "string": "(2018); Li et al." + }, + { + "id": 65, + "string": "(2018) , we formulate the task of paraphrase generation as a sequence-to-sequence learning problem." + }, + { + "id": 66, + "string": "Previous SEQ2SEQ based approaches depend entirely on the standard crossentropy loss to produce semantically similar sentences and greedy decoding during generation." + }, + { + "id": 67, + "string": "However, this does not guarantee lexical variety in the generated paraphrases." + }, + { + "id": 68, + "string": "To incorporate some form of diversity, most prior approaches rely solely on top-k beam search sequences." + }, + { + "id": 69, + "string": "The kbest list generated by standard beam search are a poor surrogate for the entire search space (Finkel et al., 2006) ." + }, + { + "id": 70, + "string": "In fact, most of the sentences in the resulting set are structurally similar, differing only by punctuations or minor morphological variations." + }, + { + "id": 71, + "string": "While being similar in the encoding scheme, our work adopts a different approach for the final decoding." 
+ }, + { + "id": 72, + "string": "We propose a framework which organi- cally combines a sentence encoder with a diversity inducing decoder." + }, + { + "id": 73, + "string": "Overview Our approach is built upon SEQ2SEQ framework." + }, + { + "id": 74, + "string": "We first feed the tokenized source sentence to the encoder." + }, + { + "id": 75, + "string": "The task of the decoder is to take as input the encoded representation and produce the respective paraphrase." + }, + { + "id": 76, + "string": "To achieve this, we train the model using standard cross entropy loss between the generated sequence and the target paraphrase." + }, + { + "id": 77, + "string": "Upon completion of training, instead of using greedy decoding or standard beam search, we use a modified decoder where we incorporate a submodular objective to obtain high quality paraphrases." + }, + { + "id": 78, + "string": "Please refer to Figure 1 for an overview of the proposed method." + }, + { + "id": 79, + "string": "During the generation phase, the encoder takes the source sentence as input and feeds its representation to the decoder to initiate the decoding process." + }, + { + "id": 80, + "string": "At each time-step t, we consider N most probable subsequences since they are likely to be wellformed." + }, + { + "id": 81, + "string": "Based on optimization of our submodular objective, a subset of size k < N are selected and sent as input to the next time step t + 1 for further generation." + }, + { + "id": 82, + "string": "The process is repeated until desired output length T or token, whichever comes first." + }, + { + "id": 83, + "string": "Monotone Submodular Objectives We design a parameterized class of submodular functions tailored towards the task of paraphrase generation." + }, + { + "id": 84, + "string": "Let V (t) be the ground set of possible subsequences at time step t. We aim to determine a set X ⊆ V (t) that retains certain fidelity as well as diversity." 
+ }, + { + "id": 85, + "string": "Hence, we model our submodular objective function as follows: X * = argmax X⊆V (t) F(X) s.t." + }, + { + "id": 86, + "string": "|X| ≤ k (2) where k is our budget (desired number of paraphrases) and F is defined as: F(X) = λL(X, s) + (1 − λ)D(X) (3) Here s is the source sentence, L(X, s) and D(X) measure fidelity and diversity, respectively." + }, + { + "id": 87, + "string": "λ ∈ [0, 1] is the trade-off coefficient." + }, + { + "id": 88, + "string": "This formulation clearly brings out the trade-off between the two key characteristics." + }, + { + "id": 89, + "string": "Fidelity It is imperative to design functions that exploit the decoder search space to maximize the semantic similarity between the generated and the source sentence." + }, + { + "id": 90, + "string": "To achieve this we build upon a known class of monotone submodular functions (Stobbe and Krause, 2010) f (X) = i∈U µ i φ i (m i (X)) (4) where U is the set of features to be defined later, µ i ≥ 0 is the feature weight, m i (X) = x∈X m i (x) is non-negative modular function and φ i is a non-negative non-decreasing concave function." + }, + { + "id": 91, + "string": "Based on the analysis of concave functions in (Kirchhoff and Bilmes, 2014), we use the simple square root function as φ (φ(a) = √ a) in both of our fidelity objectives defined below." + }, + { + "id": 92, + "string": "We consider two complementary notions of sentence similarity namely syntactic and semantic." + }, + { + "id": 93, + "string": "To capture syntactic information we define the following function: L 1 (X, s) = µ 1 x∈X N n=1 β n |x n-gram ∩ s n-gram | (5) where |x n-gram ∩ s n-gram | represents the number of overlapping n-grams between the source and the candidate sequence x for different values of n ∈ {1, ." + }, + { + "id": 94, + "string": "." + }, + { + "id": 95, + "string": "." + }, + { + "id": 96, + "string": ", N }(we use N = 3 )." 
+ }, + { + "id": 97, + "string": "Since longer n-gram overlaps are more valuable, we set β > 1." + }, + { + "id": 98, + "string": "This function inherently increases the BLEU score between the source and the generated sentences." + }, + { + "id": 99, + "string": "We address the semantic aspect of fidelity by devising a function based on the word embeddings of source and generated sentences." + }, + { + "id": 100, + "string": "We define embedding based similarity between two sentences as, S(x, s) = 1 |x| w i ∈x argmax w j ∈s ψ(v w i , v w j ) (6) where v w i is the word embedding for token w i and ψ(v w i , v w j ) is the gaussian radial basis function (rbf) 1 ." + }, + { + "id": 101, + "string": "For each word in the candidate sequence x, we find the best matching word in the source sentence using word level similarity." + }, + { + "id": 102, + "string": "Using the above mentioned measure for embedding similarity we use the following submodular function: L 2 (X, s) = µ 2 x∈X S(x, s) (7) 1 We find gaussian rbf to work better than other similarity metrics such as cosine similarity This function helps increase the semantic homogeneity between the source and generated sequences." + }, + { + "id": 103, + "string": "The above defined functions (Equation 5,7) are compositions of non-decreasing concave functions and modular functions." + }, + { + "id": 104, + "string": "Thus, staying in the realm of the class of monotone submodular functions mentioned in Equation 4, we define fidelity function L(X, s) = L 1 (X, s) + L 2 (X, s) Diversity Ensuring high fidelity often comes at the cost of producing sequences that only slightly differ from each other." + }, + { + "id": 105, + "string": "To encourage diversity in the generation process it is desirable to reward sequences with higher number of distinct n-grams as compared to others in the ground set V (t) ." 
+ }, + { + "id": 106, + "string": "Accordingly, we propose to use the following function: D 1 (X) = µ 3 N n=1 β n x∈X x n−gram (8) For β = 1, D 1 (X) denotes the number of distinct n-grams present in the set X." + }, + { + "id": 107, + "string": "Since shorter n-grams contribute more towards diversity, we set β < 1, thereby giving more value to shorter ngrams." + }, + { + "id": 108, + "string": "It is easy to see that this function is monotone non-decreasing as the number of distinct ngrams can only increase with the addition of more sequences." + }, + { + "id": 109, + "string": "To see that D 1 (X) is submodular, consider adding a new sequence to two sets of sequences, one a subset of the other." + }, + { + "id": 110, + "string": "Intuitively, the increment in the number of distinct n-grams when adding a new sequence to the smaller set should be larger than the increment when adding it to the larger set, as the distinct n-grams in the new sequence might have already been covered by the sequences in the larger set." + }, + { + "id": 111, + "string": "Apart from distinct n-gram overlaps, we also wish to obtain sequence candidates that are not only diverse, but also cover all major structural variations." + }, + { + "id": 112, + "string": "It is reasonable to expect sentences that are structurally different to have lower degree of word/phrase alignment as compared to sentences with minor lexical variations." + }, + { + "id": 113, + "string": "Edit distance (Levenshtein) is a widely accepted measure to determine such dissimilarities between two sentences." + }, + { + "id": 114, + "string": "To incorporate this notion of diversity, a formulation in terms of edit distance seems like a natural fit for the problem." + }, + { + "id": 115, + "string": "To do so, we use the coverage function which measures the similarity of the candidate sequences X with the ground set V (t) ." 
+ }, + { + "id": 116, + "string": "The coverage function is naturally monotone submodular and is defined as: D 2 (X) = µ 4 x i ∈V (t) x j ∈X R(x i , x j ) (9) where R(x i , x j ) is an alignment based similarity measure between two sequences x i and x j given by: R(x i , x j ) = 1 − EditDistance(x i , x j ) |x i | + |x j | (10) Note that R(x i , x j ) will always lie in the range [0, 1]." + }, + { + "id": 117, + "string": "Evidently, this method allows flexibility in terms of controlling diversity and fidelity." + }, + { + "id": 118, + "string": "Our goal is to strike a balance between these two to obtain highquality generations." + }, + { + "id": 119, + "string": "Experiments Datasets In this section we outline the datasets used for evaluating our proposed method." + }, + { + "id": 120, + "string": "We specify the actual splits in Table 2 ." + }, + { + "id": 121, + "string": "Based on the task, we categorize them into the following: Baseline Several models have sought to increase diversity, albeit with different goals and techniques." + }, + { + "id": 122, + "string": "However, majority of the prior works in this area have focused on the task of producing diverse responses in dialog systems (Li et al., 2016; Ritter et al., 2011) and not paraphrasing." + }, + { + "id": 123, + "string": "Given the lack of relevant baselines, we compare our model against the following methods: (Li et al., 2018) Note that the first four baselines are trained using the same SEQ2SEQ network and differ only in the decoding phase." + }, + { + "id": 124, + "string": "Intrinsic Evaluation 1." + }, + { + "id": 125, + "string": "Fidelity: To evaluate our method for fidelity of generated paraphrases, we use three machine translation metrics which have been shown to be suitable for paraphrase evaluation task (Wubben et al., 2010) : BLEU (Papineni et al., 2002)(upto bigrams), ME-TEOR (Banerjee and Lavie, 2005) and TER-Plus (Snover et al., 2009 )." 
+ }, + { + "id": 126, + "string": "Diversity: We report the degree of diversity by calculating the number of distinct n-grams (n ∈ {1, 2, 3, 4})." + }, + { + "id": 127, + "string": "The value is scaled by the number of generated tokens to avoid favoring long sequences." + }, + { + "id": 128, + "string": "In addition to fidelity and diversity, we evaluate the efficacy of our method by using the generated paraphrases as augmented samples in the task of paraphrase recognition on the Quora-PR dataset." + }, + { + "id": 129, + "string": "We perform experiments with multiple augmentation settings for the following classifiers: 1." + }, + { + "id": 130, + "string": "LogReg: Simple Logistic Regression model." + }, + { + "id": 131, + "string": "We use a set of hand-crafted features, the details of which can be found in the supplementary." + }, + { + "id": 132, + "string": "2." + }, + { + "id": 133, + "string": "SiameseLSTM: Siamese adaptation of LSTM to measure quality between two sentences (Mueller and Thyagarajan, 2016). We also perform ablation testing to highlight the importance of each submodular component." + }, + { + "id": 134, + "string": "Details can be found in the supplementary section." + }, + { + "id": 135, + "string": "Data-Augmentation We evaluate the importance of using high-quality paraphrases in two downstream classification tasks, namely intent-classification and question-classification." + }, + { + "id": 136, + "string": "Our original generation model is trained on Quora-Div question pairs." + }, + { + "id": 137, + "string": "Since intent-classification and question-classification contain questions, this setting seems like a good fit to perform transfer learning." + }, + { + "id": 138, + "string": "We perform experiments on the following standard classifier models: 1." + }, + { + "id": 139, + "string": "LogRegDA: Simple logistic regression model trained using hand-crafted features."
+ }, + { + "id": 140, + "string": "For details, please refer to the supplementary section." + }, + { + "id": 141, + "string": "LSTM: Single-layered LSTM classification model." + }, + { + "id": 142, + "string": "In addition to SBS and DBS, we use the following data-augmentation baselines for comparison: Setup We train our SEQ2SEQ model with attention (Bahdanau et al., 2014) for up to 50 epochs using the Adam optimizer (Kingma and Ba, 2014) with the initial learning rate set to 2e-4." + }, + { + "id": 143, + "string": "During the generation phase, we follow standard beam search until the number of generated tokens is nearly half the source sequence length (token level) to avoid possibly erroneous sentences." + }, + { + "id": 144, + "string": "We then apply submodular maximization stochastically with probability p at each time step." + }, + { + "id": 145, + "string": "Since each candidate subsequence is extended by a single token at every time-step, information added might not necessarily be useful as our submodular components work at the sentence level." + }, + { + "id": 146, + "string": "This approach is time-efficient and avoids redundant computations." + }, + { + "id": 147, + "string": "For each augmentation setting, we randomly select sentences from the training data and generate their paraphrases." + }, + { + "id": 148, + "string": "We then add them to the training data with the same label as that of the source sentence." + }, + { + "id": 149, + "string": "We evaluate the performance on different classification models in terms of accuracy." + }, + { + "id": 150, + "string": "Based on the formulation of the objective function, it should be clear that diversity would attain its maximum value at (or around) λ = 0, albeit at the cost of fidelity." + }, + { + "id": 151, + "string": "This is certainly not a desirable property for paraphrasing systems."
+ }, + { + "id": 152, + "string": "To address this, we perform hyperparameter tuning for the λ value by analyzing the trade-off between diversity and fidelity based on varying λ values." + }, + { + "id": 153, + "string": "In practice, the diversity metric attains saturation in a certain λ range (usually 0.2-0.5)." + }, + { + "id": 154, + "string": "This behaviour can be seen in Figure 2." + }, + { + "id": 155, + "string": "The corresponding plot for Twitter, the effect of λ on fidelity, and additional details about the hyperparameters can be found in the supplementary." + }, + { + "id": 156, + "string": "Results Our experiments were geared towards answering the following primary questions: Q1." + }, + { + "id": 157, + "string": "Is DiPS able to generate diverse paraphrases without compromising on fidelity?" + }, + { + "id": 158, + "string": "(Section 6.1) Q2." + }, + { + "id": 159, + "string": "Are paraphrases generated by DiPS useful in data-augmentation?" + }, + { + "id": 160, + "string": "(Section 6.2) Intrinsic Evaluation We compare our method against recent paraphrasing models as well as multiple diversity-inducing schemes." + }, + { + "id": 161, + "string": "DiPS outperforms these baseline models in terms of fidelity metrics, namely BLEU, METEOR and TERp." + }, + { + "id": 162, + "string": "A high METEOR score and a low TERp score indicate the presence of not only exact words but also synonyms and semantically similar phrases." + }, + { + "id": 163, + "string": "Notably, our model is not only able to achieve substantial gains over other diversity-inducing schemes but is also able to do so without compromising on fidelity." + }, + { + "id": 164, + "string": "Diversity and fidelity scores are reported in Table 4 and Table 3, respectively." + }, + { + "id": 165, + "string": "As described in Section 5.3, we evaluate the accuracy of paraphrase recognition models when provided with training data augmented using different schemes."
+ }, + { + "id": 166, + "string": "It is reasonable to expect that high-quality paraphrases would tend to yield better results on the in-domain paraphrase recognition task." + }, + { + "id": 167, + "string": "We observe that using the paraphrases generated by DiPS helps in achieving substantial gains in accuracy over other baseline schemes." + }, + { + "id": 168, + "string": "Figure 3 showcases the effect of using paraphrases generated by our method as compared to other competitive paraphrasing methods." + }, + { + "id": 169, + "string": "Data-augmentation Data Augmentation results for intent and question classification are shown in Table 5." + }, + { + "id": 170, + "string": "While SBS does not offer much lexical variability, DBS offers high diversity at the cost of fidelity." + }, + { + "id": 171, + "string": "SynRep and ContAug are augmentation schemes which are limited by the amount of structural variations they can offer." + }, + { + "id": 172, + "string": "DiPS, on the other hand, provides generations having high structural variations without compromising on fidelity." + }, + { + "id": 173, + "string": "The boost in accuracy scores on both types of classification models is indicative of the importance of using high-quality paraphrases for data-augmentation." + }, + { + "id": 174, + "string": "Conclusion In this paper, we have proposed DiPS, a model which generates high-quality paraphrases by maximizing a novel submodular objective function designed specifically for paraphrasing." + }, + { + "id": 175, + "string": "In contrast to prior works which focus exclusively either on fidelity or diversity, a submodular function based approach offers a large degree of freedom to control fidelity and diversity." + }, + { + "id": 176, + "string": "Through extensive experiments on multiple standard datasets, we have demonstrated the effectiveness of our approach over numerous baselines."
+ }, + { + "id": 177, + "string": "We observe that the diverse paraphrases generated are not only interesting and meaning-preserving, but are also helpful in data augmentation." + }, + { + "id": 178, + "string": "We showcase this using multiple settings on the tasks of intent and question classification." + }, + { + "id": 179, + "string": "We hope that our approach will be useful not only for paraphrase generation and data augmentation, but also for other NLG problems in conversational agents and text summarization." + } ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 10, + "end": 21 + }, + { + "section": "We introduce Diverse Paraphraser using", + "n": "1.", + "start": 22, + "end": 44 + }, + { + "section": "Background: Submodularity", + "n": "3", + "start": 45, + "end": 61 + }, + { + "section": "Methodology", + "n": "4", + "start": 62, + "end": 72 + }, + { + "section": "Overview", + "n": "4.1", + "start": 73, + "end": 82 + }, + { + "section": "Monotone Submodular Objectives", + "n": "4.2", + "start": 83, + "end": 118 + }, + { + "section": "Datasets", + "n": "5.1", + "start": 119, + "end": 120 + }, + { + "section": "Baseline", + "n": "5.2", + "start": 121, + "end": 123 + }, + { + "section": "Intrinsic Evaluation", + "n": "5.3", + "start": 124, + "end": 125 + }, + { + "section": "Diversity:", + "n": "2.", + "start": 126, + "end": 134 + }, + { + "section": "Data-Augmentation", + "n": "5.4", + "start": 135, + "end": 139 + }, + { + "section": "LSTM:", + "n": "2.", + "start": 140, + "end": 141 + }, + { + "section": "Setup", + "n": "5.5", + "start": 142, + "end": 155 + }, + { + "section": "Results", + "n": "6", + "start": 156, + "end": 159 + }, + { + "section": "Intrinsic Evaluation", + "n": "6.1", + "start": 160, + "end": 168 + }, + { + "section": "Data-augmentation", + "n": "6.2", + "start": 169, + "end": 173 + }, + { + "section": "Conclusion", + "n": "7", + "start": 174, + "end": 179 + } + ], + "figures": [ + { + "filename":
"../figure/image/995-Table1-1.png", + "caption": "Table 1: Sample paraphrases generated by beam search and DiPS (our method). DiPS offers lexically diverse paraphrases without compromising on fidelity.", + "page": 0, + "bbox": { + "x1": 306.71999999999997, + "x2": 527.04, + "y1": 222.23999999999998, + "y2": 322.08 + } + }, + { + "filename": "../figure/image/995-Table2-1.png", + "caption": "Table 2: Dataset Statistics", + "page": 5, + "bbox": { + "x1": 309.59999999999997, + "x2": 525.6, + "y1": 64.32, + "y2": 240.95999999999998 + } + }, + { + "filename": "../figure/image/995-Table3-1.png", + "caption": "Table 3: Results on Quora-Div and Twitter dataset. Higher↑ BLEU and METEOR score is better whereas lower↓ TERp score is better. Please see Section 6 for details.", + "page": 6, + "bbox": { + "x1": 72.0, + "x2": 526.0799999999999, + "y1": 67.2, + "y2": 163.2 + } + }, + { + "filename": "../figure/image/995-Figure2-1.png", + "caption": "Figure 2: Effect of varying the trade-off coefficient λ in DiPS on various diversity metrics on the Quora dataset.", + "page": 6, + "bbox": { + "x1": 98.88, + "x2": 262.08, + "y1": 216.48, + "y2": 380.15999999999997 + } + }, + { + "filename": "../figure/image/995-Table4-1.png", + "caption": "Table 4: Results on Quora-Div and Twitter dataset. Higher distinct scores imply better lexical diversity. Please see Section 6 for details.", + "page": 7, + "bbox": { + "x1": 72.96, + "x2": 525.12, + "y1": 67.2, + "y2": 155.04 + } + }, + { + "filename": "../figure/image/995-Figure3-1.png", + "caption": "Figure 3: Comparison of accuracy scores of two paraphrase recognition models using different augmentation schemes (Quora-PR). 
Both LogReg and SiameseLSTM achieve the highest boost in performance when augmented with samples generated using DiPS", + "page": 7, + "bbox": { + "x1": 316.8, + "x2": 516.0, + "y1": 207.84, + "y2": 383.03999999999996 + } + }, + { + "filename": "../figure/image/995-Table5-1.png", + "caption": "Table 5: Accuracy scores of two classification models on various data-augmentation schemes. Please see Section 6 for details", + "page": 7, + "bbox": { + "x1": 72.0, + "x2": 290.4, + "y1": 208.32, + "y2": 296.15999999999997 + } + }, + { + "filename": "../figure/image/995-Figure1-1.png", + "caption": "Figure 1: Overview of DiPS during decoding to generate k paraphrases. At each time step, a set of N sequences (V (t)) is used to determine k < N sequences (X∗) via submodular maximization . The above figure illustrates the motivation behind each submodular component. Please see Section 4 for details.", + "page": 3, + "bbox": { + "x1": 121.92, + "x2": 475.68, + "y1": 62.879999999999995, + "y2": 306.24 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-12" + }, + { + "slides": { + "0": { + "title": "Neural Question Answering", + "text": [ + "Question: What color is the sky?", + "Passage: Air is made mainly from molecules of nitrogen and oxygen.", + "These molecules scatter the blue colors of sunlight more effectively than the green and red colors. Therefore, a clean sky appears blue." 
+ ], + "page_nums": [ + 1 + ], + "images": [] + }, + "3": { + "title": "Open Question Answering", + "text": [ + "Question: What color is the sky?", + "Relevant Text Model Answer Span Document Retrieval" + ], + "page_nums": [ + 4 + ], + "images": [] + }, + "5": { + "title": "Two Possible Approaches", + "text": [ + "Select a single paragraph from the input, and run the model on that paragraph", + "Run the model on many paragraphs from the input, and have itassign a confidence score to its results on each paragraph" + ], + "page_nums": [ + 6 + ], + "images": [] + }, + "6": { + "title": "This Work", + "text": [ + "Improve several of the key design decision that arise when training on document-level data", + "Study ways to train models to produce correct confidence scores" + ], + "page_nums": [ + 7 + ], + "images": [] + }, + "8": { + "title": "Pipeline Method Noisy Supervision", + "text": [ + "Document level data can be expected to be distantly supervised:", + "Question: Which British general was killed at Khartoum in 1885?", + "In February 1884 Gordon returned to the Sudan to evacuate Egyptian forces.", + "Rebels broke into the city , killing Gordon and the other defenders. 
The British public reacted to his death by acclaiming ' Gordon of Khartoum , a saint.", + "However, historians have since suggested that Gordon defied orders and.", + "Need a training objective that can handle multiple (noisy) answer spans", + "Use the summed objective from Kadlec et al (2016), that optimizes the log sum of", + "the probability of all answer spans", + "Remains agnostic to how probability mass is distributed among the answer spans" + ], + "page_nums": [ + 9, + 10 + ], + "images": [] + }, + "12": { + "title": "Learning Well Calibrated Confidence Scores", + "text": [ + "Train the model on both answering-containing and non-answering containing", + "paragraph and use a modified objective function", + "Merge: Concatenate sampled paragraphs together", + "No-Answer: Process paragraphs independently, and allow the model to place", + "probability mass on a no-answer output", + "Sigmoid: Assign an independent probability on each span using the sigmoid", + "Shared-Norm: Process paragraphs independently, but compute the span", + "probability across spans in all paragraphs" + ], + "page_nums": [ + 14 + ], + "images": [] + }, + "15": { + "title": "Pipeline Method Results on TriviaQA Web", + "text": [ + "Uses BiDAF as the model", + "Select paragraphs by truncating documents", + "Select answer-spans randomly EM", + "word embeddings (Peters et al., 2017)", + "TriviaQA Baseline Our Baseline +TF-IDF +Sum +TF-IDF +Sum +Model +TF-IDF +Sum" + ], + "page_nums": [ + 17 + ], + "images": [] + }, + "16": { + "title": "TriviaQA Leaderboard Exact Match Scores", + "text": [ + "Model Web-All Web-Verified Wiki-All Wiki-Verified", + "Best leaderboard entry (mingyan)", + "Dynamic Integration of Background" + ], + "page_nums": [ + 21 + ], + "images": [ + "figure/image/996-Figure4-1.png" + ] + }, + "18": { + "title": "Building an Open Question Answering System", + "text": [ + "Use Bing web search and a Wikipedia entity linker to locate relevant documents", + "Extract the top 12 
paragraphs, as found using the linear paragraph ranker", + "Use the model trained for TriviaQA Unfiltered to find the final answer" + ], + "page_nums": [ + 24 + ], + "images": [ + "figure/image/996-Figure2-1.png" + ] + } + }, + "paper_title": "Simple and Effective Multi-Paragraph Reading Comprehension", + "paper_id": "996", + "paper": { + "title": "Simple and Effective Multi-Paragraph Reading Comprehension", + "abstract": "We introduce a method of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Most current question answering models cannot scale to document or multi-document input, and naively applying these models to each paragraph independently often results in them being distracted by irrelevant text. We show that it is possible to significantly improve performance by using a modified training scheme that teaches the model to ignore non-answer containing paragraphs. Our method involves sampling multiple paragraphs from each document, and using an objective function that requires the model to produce globally correct output. We additionally identify and improve upon a number of other design decisions that arise when working with document-level data. Experiments on TriviaQA and SQuAD shows our method advances the state of the art, including a 10 point gain on TriviaQA.", + "text": [ + { + "id": 0, + "string": "Introduction Teaching machines to answer arbitrary usergenerated questions is a long-term goal of natural language processing." + }, + { + "id": 1, + "string": "For a wide range of questions, existing information retrieval methods are capable of locating documents that are likely to contain the answer." + }, + { + "id": 2, + "string": "However, automatically extracting the answer from those texts remains an open challenge." 
+ }, + { + "id": 3, + "string": "The recent success of neural models at answering questions given a related paragraph (Wang et al., 2017c; Tan et al., 2017) suggests they have the potential to be a key part of * Work completed while interning at the Allen Institute for Artificial Intelligence a solution to this problem." + }, + { + "id": 4, + "string": "Most neural models are unable to scale beyond short paragraphs, so typically this requires adapting a paragraph-level model to process document-level input." + }, + { + "id": 5, + "string": "There are two basic approaches to this task." + }, + { + "id": 6, + "string": "Pipelined approaches select a single paragraph from the input documents, which is then passed to the paragraph model to extract an answer (Joshi et al., 2017; Wang et al., 2017a) ." + }, + { + "id": 7, + "string": "Confidence based methods apply the model to multiple paragraphs and return the answer with the highest confidence (Chen et al., 2017a) ." + }, + { + "id": 8, + "string": "Confidence methods have the advantage of being robust to errors in the (usually less sophisticated) paragraph selection step, however they require a model that can produce accurate confidence scores for each paragraph." + }, + { + "id": 9, + "string": "As we shall show, naively trained models often struggle to meet this requirement." + }, + { + "id": 10, + "string": "In this paper we start by proposing an improved pipelined method which achieves state-of-the-art results." + }, + { + "id": 11, + "string": "Then we introduce a method for training models to produce accurate per-paragraph confidence scores, and we show how combining this method with multiple paragraph selection further increases performance." + }, + { + "id": 12, + "string": "Our pipelined method focuses on addressing the challenges that come with training on documentlevel data." + }, + { + "id": 13, + "string": "We use a linear classifier to select which paragraphs to train and test on." 
+ }, + { + "id": 14, + "string": "Since annotating entire documents is expensive, data of this sort is typically distantly supervised, meaning only the answer text, not the answer spans, are known." + }, + { + "id": 15, + "string": "To handle the noise this creates, we use a summed objective function that marginalizes the model's output over all locations the answer text occurs." + }, + { + "id": 16, + "string": "We apply this approach with a model design that integrates some recent ideas in reading comprehension models, including selfattention (Cheng et al., 2016) and bi-directional attention (Seo et al., 2016) ." + }, + { + "id": 17, + "string": "Our confidence method extends this approach to better handle the multi-paragraph setting." + }, + { + "id": 18, + "string": "Previous approaches trained the model on questions paired with paragraphs that are known a priori to contain the answer." + }, + { + "id": 19, + "string": "This has several downsides: the model is not trained to produce low confidence scores for paragraphs that do not contain an answer, and the training objective does not require confidence scores to be comparable between paragraphs." + }, + { + "id": 20, + "string": "We resolve these problems by sampling paragraphs from the context documents, including paragraphs that do not contain an answer, to train on." + }, + { + "id": 21, + "string": "We then use a shared-normalization objective where paragraphs are processed independently, but the probability of an answer candidate is marginalized over all paragraphs sampled from the same document." + }, + { + "id": 22, + "string": "This requires the model to produce globally correct output even though each paragraph is processed independently." + }, + { + "id": 23, + "string": "We evaluate our work on TriviaQA (Joshi et al., 2017) in the wiki, web, and unfiltered setting." + }, + { + "id": 24, + "string": "Our model achieves a nearly 10 point lead over published prior work." 
+ }, + { + "id": 25, + "string": "We additionally perform an ablation study on our pipelined method, and we show the effectiveness of our multi-paragraph methods on a modified version of SQuAD (Rajpurkar et al., 2016) where only the correct document, not the correct paragraph, is known." + }, + { + "id": 26, + "string": "Finally, we combine our model with a web search backend to build a demonstration end-to-end QA system 1 , and show it performs well on questions from the TREC question answering task (Voorhees et al., 1999) ." + }, + { + "id": 27, + "string": "We release our code 2 to facilitate future work." + }, + { + "id": 28, + "string": "Pipelined Method In this section we propose a pipelined QA system, where a single paragraph is selected and passed to a paragraph-level question answering model." + }, + { + "id": 29, + "string": "Paragraph Selection If there is a single source document, we select the paragraph with the smallest TF-IDF cosine distance with the question." + }, + { + "id": 30, + "string": "Document frequencies are computed using the individual paragraphs within the document." + }, + { + "id": 31, + "string": "If there are multiple input documents, we found it beneficial to use a linear classifier that uses the same TF-IDF score, whether the paragraph was the first in its document, how many tokens preceded it, and the number of question words it includes as features." + }, + { + "id": 32, + "string": "The classifier is trained on the distantly supervised objective of selecting paragraphs that contain at least one answer span." + }, + { + "id": 33, + "string": "On TriviaQA web, relative to truncating the document as done by prior work, this improves the chance of the selected text containing the correct answer from 83.1% to 85.1%." + }, + { + "id": 34, + "string": "Handling Noisy Labels Question: Which British general was killed at Khartoum in 1885?" 
+ }, + { + "id": 35, + "string": "Answer: Gordon Context: In February 1885 Gordon returned to the Sudan to evacuate Egyptian forces." + }, + { + "id": 36, + "string": "Khartoum came under siege the next month and rebels broke into the city, killing Gordon and the other defenders." + }, + { + "id": 37, + "string": "The British public reacted to his death by acclaiming 'Gordon of Khartoum', a saint." + }, + { + "id": 38, + "string": "However, historians have suggested that Gordon..." + }, + { + "id": 39, + "string": "Figure 1: Noisy supervision can cause many spans of text that contain the answer, but are not situated in a context that relates to the question (red), to distract the model from learning from more relevant spans (green)." + }, + { + "id": 40, + "string": "In a distantly supervised setup we label all text spans that match the answer text as being correct." + }, + { + "id": 41, + "string": "This can lead to training the model to select unwanted answer spans." + }, + { + "id": 42, + "string": "Figure 1 contains an example." + }, + { + "id": 43, + "string": "To handle this difficulty, we use a summed objective function similar to the one from Kadlec et al." + }, + { + "id": 44, + "string": "(2016), that optimizes the negative log-likelihood of selecting any correct answer span." + }, + { + "id": 45, + "string": "The models we consider here work by independently predicting the start and end token of the answer span, so we take this approach for both predictions." + }, + { + "id": 46, + "string": "For example, the objective for predicting the answer start token becomes − log Σ_{a∈A} p_a, where A is the set of tokens that start an answer and p_i is the answer-start probability predicted by the model for token i." + }, + { + "id": 47, + "string": "This objective has the advantage of being agnostic to how the model distributes probability mass across the possible answer spans, allowing the model to focus on only the most relevant spans."
+ }, + { + "id": 48, + "string": "Model We use a model with the following layers (shown in Figure 2): Embedding: We embed words using pretrained word vectors." + }, + { + "id": 49, + "string": "We concatenate these with character-derived word embeddings, which are produced by embedding characters using a learned embedding matrix and then applying a convolutional neural network and max-pooling." + }, + { + "id": 50, + "string": "Pre-Process: A shared bi-directional GRU (Cho et al., 2014) is used to process the question and passage embeddings." + }, + { + "id": 51, + "string": "Attention: The attention mechanism from the Bi-Directional Attention Flow (BiDAF) model (Seo et al., 2016) is used to build a query-aware context representation." + }, + { + "id": 52, + "string": "Let h_i and q_j be the vectors for context word i and question word j, and n_q and n_c be the lengths of the question and context, respectively." + }, + { + "id": 53, + "string": "We compute attention between context word i and question word j as: a_ij = w_1 · h_i + w_2 · q_j + w_3 · (h_i ⊙ q_j), where w_1, w_2, and w_3 are learned vectors and ⊙ is element-wise multiplication." + }, + { + "id": 54, + "string": "We then compute an attended vector c_i for each context token as: p_ij = e^{a_ij} / Σ_{j=1}^{n_q} e^{a_ij}, c_i = Σ_{j=1}^{n_q} q_j p_ij. We also compute a query-to-context vector q_c: m_i = max_{1≤j≤n_q} a_ij, p_i = e^{m_i} / Σ_{i=1}^{n_c} e^{m_i}, q_c = Σ_{i=1}^{n_c} h_i p_i. The final vector for each token is built by concatenating h_i, c_i, h_i ⊙ c_i, and q_c ⊙ c_i." + }, + { + "id": 55, + "string": "In our model we subsequently pass the result through a linear layer with ReLU activations." + }, + { + "id": 56, + "string": "Self-Attention: Next we use a layer of residual self-attention." + }, + { + "id": 57, + "string": "The input is passed through another bi-directional GRU." + }, + { + "id": 58, + "string": "Then we apply the same attention mechanism, only now between the passage and itself."
+ }, + { + "id": 59, + "string": "In this case we do not use query-to-context attention and we set a_ij = −inf if i = j." + }, + { + "id": 60, + "string": "As before, we pass the concatenated output through a linear layer with ReLU activations." + }, + { + "id": 61, + "string": "The result is then summed with the original input." + }, + { + "id": 62, + "string": "Prediction: In the last layer of our model a bidirectional GRU is applied, followed by a linear layer to compute answer start scores for each token." + }, + { + "id": 63, + "string": "The hidden states are concatenated with the input and fed into a second bi-directional GRU and linear layer to predict answer end scores." + }, + { + "id": 64, + "string": "The softmax function is applied to the start and end scores to produce answer start and end probabilities." + }, + { + "id": 65, + "string": "Dropout: We apply variational dropout (Gal and Ghahramani, 2016) to the input to all the GRUs and the input to the attention mechanisms at a rate of 0.2." + }, + { + "id": 66, + "string": "Confidence Method We adapt this model to the multi-paragraph setting by using the un-normalized and un-exponentiated (i.e., before the softmax operator is applied) score given to each span as a measure of the model's confidence." + }, + { + "id": 67, + "string": "For the boundary-based models we use here, a span's score is the sum of the start and end score given to its start and end token." + }, + { + "id": 68, + "string": "At test time we run the model on each paragraph and select the answer span with the highest confidence." + }, + { + "id": 69, + "string": "This is the approach taken by Chen et al." + }, + { + "id": 70, + "string": "(2017a)." + }, + { + "id": 71, + "string": "Our experiments in Section 5 show that these confidence scores can be very poor if the model is only trained on answer-containing paragraphs, as done by prior work."
+ }, + { + "id": 72, + "string": "Table 1 contains some qualitative examples of the errors that occur." + }, + { + "id": 73, + "string": "We hypothesize that there are two key sources of error." + }, + { + "id": 74, + "string": "First, for models trained with the softmax objective, the pre-softmax scores for all spans can be arbitrarily increased or decreased by a constant value without changing the resulting softmax probability distribution." + }, + { + "id": 75, + "string": "As a result, nothing prevents models from producing scores that are arbitrarily all larger or all smaller for one paragraph ...one 2001 study finding a quarter square kilometer (62 acres) of Ecuadorian rainforest supports more than 1,100 tree species The affected region was approximately 1,160,000 square miles (3,000,000 km2) of rainforest, compared to 734,000 square miles Who was Warsz?" + }, + { + "id": 76, + "string": "....In actuality, Warsz was a 12th/13th century nobleman who owned a village located at the modern.... One of the most famous people born in Warsaw was Maria Sklodowska -Curie, who achieved international... How much did the initial LM weight in kg?" + }, + { + "id": 77, + "string": "The initial LM model weighed approximately 33,300 pounds (15,000 kg), and..." + }, + { + "id": 78, + "string": "The module was 11.42 feet (3.48 m) tall, and weighed approximately 12,250 pounds (5,560 kg) Table 1 : Examples from SQuAD where a model was less confident in a correct extraction from one paragraph (left) than in an incorrect extraction from another (right)." + }, + { + "id": 79, + "string": "Even if the passage has no correct answer and does not contain any question words, the model assigns high confidence to phrases that match the category the question is asking about." 
+ }, + { + "id": 80, + "string": "Because the confidence scores are not well-calibrated, this confidence is often higher than the confidence assigned to correct answer spans in different paragraphs, even when those correct spans have better contextual evidence." + }, + { + "id": 81, + "string": "than another." + }, + { + "id": 82, + "string": "Second, if the model only sees paragraphs that contain answers, it might become too confident in heuristics or patterns that are only effective when it is known a priori that an answer exists." + }, + { + "id": 83, + "string": "For example, the model might become too reliant on selecting answers that match the semantic type the question is asking about, causing it to be easily distracted by other entities of that type when they appear in irrelevant text." + }, + { + "id": 84, + "string": "This kind of error has also been observed when distractor sentences are added to the context (Jia and Liang, 2017). We experiment with four approaches to training models to produce comparable confidence scores, shown in the following subsections." + }, + { + "id": 85, + "string": "In all cases we will sample paragraphs that do not contain an answer as additional training points." + }, + { + "id": 86, + "string": "Shared-Normalization In this approach a modified objective function is used where span start and end scores are normalized across all paragraphs sampled from the same context." + }, + { + "id": 87, + "string": "This means that paragraphs from the same context use a shared normalization factor in the final softmax operations." + }, + { + "id": 88, + "string": "We train on this objective by including multiple paragraphs from the same context in each mini-batch." + }, + { + "id": 89, + "string": "The key idea is that this will force the model to produce scores that are comparable between paragraphs, even though it does not have access to information about what other paragraphs are being considered."
+ }, + { + "id": 90, + "string": "Merge As an alternative to the previous method, we experiment with concatenating all paragraphs sampled from the same context together during training." + }, + { + "id": 91, + "string": "A paragraph separator token with a learned embedding is added before each paragraph." + }, + { + "id": 92, + "string": "No-Answer Option We also experiment with allowing the model to select a special \"no-answer\" option for each paragraph." + }, + { + "id": 93, + "string": "First we re-write our objective as: $-\\log \\frac{e^{s_a}}{\\sum_{i=1}^{n} e^{s_i}} - \\log \\frac{e^{g_b}}{\\sum_{j=1}^{n} e^{g_j}} = -\\log \\frac{e^{s_a + g_b}}{\\sum_{i=1}^{n} \\sum_{j=1}^{n} e^{s_i + g_j}}$, where $s_j$ and $g_j$ are the scores for the start and end bounds produced by the model for token $j$, and $a$ and $b$ are the correct start and end tokens." + }, + { + "id": 94, + "string": "We have the model compute another score, z, to represent the weight given to a \"no-answer\" possibility." + }, + { + "id": 95, + "string": "Our revised objective function becomes: $-\\log \\frac{(1-\\delta) e^{z} + \\delta e^{s_a + g_b}}{e^{z} + \\sum_{i=1}^{n} \\sum_{j=1}^{n} e^{s_i + g_j}}$, where $\\delta$ is 1 if an answer exists and 0 otherwise." + }, + { + "id": 96, + "string": "If there are multiple answer spans we use the same objective, except the numerator includes the summation over all answer start and end tokens." + }, + { + "id": 97, + "string": "We compute z by adding an extra layer at the end of our model." + }, + { + "id": 98, + "string": "We build input vectors by taking the summed hidden states of the RNNs used to predict the start/end token scores weighted by the start/end probabilities, and using a learned attention vector on the output of the self-attention layer." + }, + { + "id": 99, + "string": "These vectors are fed into a two layer network with an 80 dimensional hidden layer and ReLU activations that produces z as its only output." + }, + { + "id": 100, + "string": "Sigmoid As a final baseline, we consider training models with the sigmoid loss objective function."
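The revised objective with the no-answer score z can be sketched as follows; this is our own NumPy implementation with log-sum-exp stabilization, and the variable names are ours, not the paper's:

```python
import numpy as np

def no_answer_loss(start_scores, end_scores, z, answer=None):
    """Negative log-likelihood under the revised objective with a learned
    "no-answer" score z. delta = 1 iff an answer exists, i.e. iff
    `answer` (a pair of start/end token indices) is given."""
    # pairwise span scores s_i + g_j
    pair = start_scores[:, None] + end_scores[None, :]
    m = max(float(z), float(pair.max()))        # log-sum-exp stabilizer
    denom = np.exp(z - m) + np.exp(pair - m).sum()
    if answer is None:
        num = np.exp(z - m)                     # delta = 0
    else:
        a, b = answer                           # delta = 1
        num = np.exp(start_scores[a] + end_scores[b] - m)
    return -np.log(num / denom)

s = np.array([0.1, 2.0, -0.5])   # start scores for one paragraph
g = np.array([1.0, 0.3, 2.2])    # end scores
loss_with_answer = no_answer_loss(s, g, z=0.5, answer=(1, 2))
loss_no_answer = no_answer_loss(s, g, z=0.5, answer=None)
```

When no answer exists, a larger z (more probability mass on the no-answer option) lowers the loss, which is what lets the model abstain on irrelevant paragraphs.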
+ }, + { + "id": 101, + "string": "That is, we compute a start/end probability for each token by applying the sigmoid function to the start/end scores of each token." + }, + { + "id": 102, + "string": "A cross entropy loss is used on each individual probability." + }, + { + "id": 103, + "string": "The intuition is that, since the scores are being evaluated independently of one another, they are more likely to be comparable between different paragraphs." + }, + { + "id": 104, + "string": "Experimental Setup Datasets We evaluate our approach on four datasets: TriviaQA unfiltered (Joshi et al., 2017) , a dataset of questions from trivia databases paired with documents found by completing a web search of the questions; TriviaQA wiki, the same dataset but only including Wikipedia articles; TriviaQA web, a dataset derived from TriviaQA unfiltered by treating each question-document pair where the document contains the question's answer as an individual training point; and SQuAD (Rajpurkar et al., 2016) , a collection of Wikipedia articles and crowdsourced questions." + }, + { + "id": 105, + "string": "Preprocessing We note that for TriviaQA web we do not subsample as was done by Joshi et al." + }, + { + "id": 106, + "string": "(2017) , instead training on all 530k training examples." + }, + { + "id": 107, + "string": "We also observe that TriviaQA documents often contain many small paragraphs, so we restructure the documents by merging consecutive paragraphs together up to a target size." + }, + { + "id": 108, + "string": "We use a maximum paragraph size of 400 unless stated otherwise." + }, + { + "id": 109, + "string": "Paragraph separator tokens with learned embeddings are added between merged paragraphs to preserve formatting information."
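The paragraph-merging preprocessing (greedily merge consecutive paragraphs up to a target size, with a separator token between them) might look roughly like the following Python sketch; the function name and separator string are hypothetical:

```python
def merge_paragraphs(paragraphs, max_size=400, sep="<PARAGRAPH>"):
    """Greedily merge consecutive paragraphs (token lists) up to a target
    size, inserting a separator placeholder token before each appended
    paragraph to preserve formatting information."""
    merged, current = [], []
    for para in paragraphs:
        extra = list(para) if not current else [sep] + list(para)
        if current and len(current) + len(extra) > max_size:
            merged.append(current)      # flush the current chunk
            current = list(para)
        else:
            current = current + extra
    if current:
        merged.append(current)
    return merged

docs = [["a"] * 150, ["b"] * 150, ["c"] * 150]
chunks = merge_paragraphs(docs, max_size=400)
```

In a real system the separator would be a reserved vocabulary item with a learned embedding; here it is just a placeholder string.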
+ }, + { + "id": 110, + "string": "We are also careful to mark all spans of text that would be considered an exact match by the official evaluation script, which includes some minor text pre-processing, as answer spans, not just spans that are an exact string match with the answer text." + }, + { + "id": 111, + "string": "Sampling Our confidence-based approaches are trained by sampling paragraphs from the context during training." + }, + { + "id": 112, + "string": "For SQuAD and TriviaQA web we take the top four paragraphs as judged by our paragraph ranking function (see Section 2.1)." + }, + { + "id": 113, + "string": "We sample two different paragraphs from those four each epoch to train on." + }, + { + "id": 114, + "string": "Since we observe that the higher-ranked paragraphs are more likely to contain the context needed to answer the question, we sample the highest ranked paragraph that contains an answer twice as often as the others." + }, + { + "id": 115, + "string": "For the merge and shared-norm approaches, we additionally require that at least one of the paragraphs contains an answer span, and both of those paragraphs are included in the same mini-batch." + }, + { + "id": 116, + "string": "For TriviaQA wiki we repeat the process but use the top 8 paragraphs, and for TriviaQA unfiltered we use the top 16, because much more context is given in these settings." + }, + { + "id": 117, + "string": "Implementation We train the model with the Adadelta optimizer (Zeiler, 2012) with a batch size of 60 for TriviaQA and 45 for SQuAD." + }, + { + "id": 118, + "string": "At test time we select the most probable answer span of length less than or equal to 8 for TriviaQA and 17 for SQuAD." + }, + { + "id": 119, + "string": "The GloVe 300 dimensional word vectors released by Pennington et al." + }, + { + "id": 120, + "string": "(2014) are used for word embeddings."
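The epoch-wise sampling scheme (two distinct paragraphs from the top four, with the highest-ranked answer-containing paragraph sampled twice as often) can be sketched as weighted sampling without replacement; the names below are ours and the exact scheme in the paper may differ in details:

```python
import random

def sample_paragraphs(ranked_paras, has_answer, k=2, rng=None):
    """Sample k distinct paragraphs from the top-ranked ones; the
    highest-ranked answer-containing paragraph gets double weight."""
    rng = rng or random.Random()
    weights = [1.0] * len(ranked_paras)
    for i, ans in enumerate(has_answer):
        if ans:
            weights[i] = 2.0   # first answer-bearing paragraph, twice as likely
            break
    candidates = list(range(len(ranked_paras)))
    chosen = []
    while candidates and len(chosen) < k:
        total = sum(weights[i] for i in candidates)
        r = rng.uniform(0.0, total)
        acc = 0.0
        for i in candidates:
            acc += weights[i]
            if r <= acc:
                chosen.append(i)
                candidates.remove(i)
                break
    return [ranked_paras[i] for i in chosen]

paras = ["p0", "p1", "p2", "p3"]          # already ranked by TF-IDF
has = [False, True, False, True]          # which paragraphs contain an answer
picked = sample_paragraphs(paras, has, k=2, rng=random.Random(0))
```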
+ }, + { + "id": 121, + "string": "On SQuAD, we use a dimensionality of size 100 for the GRUs and of size 200 for the linear layers employed after each attention mechanism." + }, + { + "id": 122, + "string": "We found that for TriviaQA, likely because there is more data, using a larger dimensionality of 140 for each GRU and 280 for the linear layers is beneficial." + }, + { + "id": 123, + "string": "During training, we maintain an exponential moving average of the weights with a decay rate of 0.999." + }, + { + "id": 124, + "string": "We use the weight averages at test time." + }, + { + "id": 125, + "string": "We do not update the word vectors during training." + }, + { + "id": 126, + "string": "Results TriviaQA Web and TriviaQA Wiki First, we do an ablation study on TriviaQA web to show the effects of our proposed methods for our pipeline model." + }, + { + "id": 127, + "string": "We start with a baseline following the one used by Joshi et al." + }, + { + "id": 128, + "string": "(2017) ." + }, + { + "id": 129, + "string": "This system uses BiDAF (Seo et al., 2016) as the paragraph model, and selects a random answer span from each paragraph each epoch to train on." + }, + { + "id": 130, + "string": "The first 400 tokens of each document are used during training, and the first 800 during testing." + }, + { + "id": 131, + "string": "When using the TF-IDF paragraph selection approach, we instead break the documents into paragraphs of size 400 when training and 800 when testing, and select the top-ranked paragraph to feed into the model." + }, + { + "id": 132, + "string": "As shown in Table 2 , our baseline outperforms the results reported by Joshi et al." + }, + { + "id": 133, + "string": "(2017) significantly, likely because we are not subsampling the data." + }, + { + "id": 134, + "string": "We find both TF-IDF ranking and the sum objective to be effective." + }, + { + "id": 135, + "string": "Using our refined model increases the gain by another 4 points."
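The exponential moving average of the weights (decay 0.999, with the averages used at test time) amounts to the following per-step update; a minimal sketch, not the authors' implementation:

```python
def ema_update(avg, weights, decay=0.999):
    """One update of the exponential moving average of the model weights;
    avg_new = decay * avg + (1 - decay) * current_weights.
    The averaged weights are what gets used at test time."""
    return {name: decay * avg[name] + (1.0 - decay) * w
            for name, w in weights.items()}

avg = {"w": 1.0}
for _ in range(3):
    avg = ema_update(avg, {"w": 0.0})   # current weights have moved to 0.0
```

After three updates toward 0.0 the average has decayed by exactly a factor of 0.999 per step, illustrating how slowly the averaged weights track the raw ones.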
+ }, + { + "id": 136, + "string": "Next we show the results of our confidence-based approaches." + }, + { + "id": 137, + "string": "For this comparison we split documents into paragraphs of at most 400 tokens, and rank them using TF-IDF cosine distance." + }, + { + "id": 138, + "string": "Then we measure the performance of our proposed approaches as the model is used to independently process an increasing number of these paragraphs, and the highest confidence answer is selected as the final output." + }, + { + "id": 139, + "string": "The results are shown in Figure 3 ." + }, + { + "id": 140, + "string": "On this dataset even the model trained without any of the proposed training methods (\"none\") improves as more paragraphs are used, showing it does a passable job at focusing on the correct paragraph." + }, + { + "id": 141, + "string": "Figure 4: Results for our confidence methods on TriviaQA unfiltered." + }, + { + "id": 142, + "string": "The shared-norm approach is the strongest, while the baseline model starts to lose performance as more paragraphs are used." + }, + { + "id": 143, + "string": "The no-answer option training approach led to a significant improvement, and the shared-norm and merge approaches are even better." + }, + { + "id": 144, + "string": "We use the shared-norm approach for evaluation on the TriviaQA test sets." + }, + { + "id": 145, + "string": "We found that increasing the paragraph size to 800 at test time, and to 600 during training, was slightly beneficial, allowing our model to reach 66.04 EM and 70.98 F1 on the dev set." + }, + { + "id": 146, + "string": "As shown in Table 3 , our model is firmly ahead of prior work on both the TriviaQA web and TriviaQA wiki test sets." + }, + { + "id": 147, + "string": "Since our submission, a few additional entries have been added to the public leaderboard for this dataset, although to the best of our knowledge these results have not yet been published."
+ }, + { + "id": 148, + "string": "TriviaQA Unfiltered Next we apply our confidence methods to TriviaQA unfiltered." + }, + { + "id": 149, + "string": "This dataset is of particular interest because the system is not told which document contains the answer, so it provides a plausible simulation of answering a question using a document retrieval system." + }, + { + "id": 150, + "string": "Figure 5: Results for our confidence methods on document-level SQuAD." + }, + { + "id": 151, + "string": "The shared-norm model is the only model that does not lose performance when exposed to large numbers of paragraphs." + }, + { + "id": 152, + "string": "We show the same graph as before for this dataset in Figure 4 ." + }, + { + "id": 153, + "string": "Our methods have an even larger impact on this dataset, probably because there are many more relevant and irrelevant paragraphs for each question, making paragraph selection more important." + }, + { + "id": 154, + "string": "Note the naively trained model starts to lose performance as more paragraphs are used, showing that errors are being caused by the model being overly confident in incorrect extractions." + }, + { + "id": 155, + "string": "We achieve a score of 61.55 EM and 67.61 F1 on the dev set." + }, + { + "id": 156, + "string": "This advances the only prior result reported for this dataset, 50.6 EM and 57.3 F1 from Wang et al." + }, + { + "id": 157, + "string": "(2017b) , by 10 points." + }, + { + "id": 158, + "string": "SQuAD We additionally evaluate our model on SQuAD." + }, + { + "id": 159, + "string": "SQuAD questions were not built to be answered independently of their context paragraph, which makes it unclear how effective an evaluation tool they can be for document-level question answering." + }, + { + "id": 160, + "string": "To assess this we manually label 500 random questions from the training set." + }, + { + "id": 161, + "string": "We categorize questions as: 1."
+ }, + { + "id": 162, + "string": "Context-independent, meaning it can be understood independently of the paragraph." + }, + { + "id": 163, + "string": "2." + }, + { + "id": 164, + "string": "Document-dependent, meaning it can be understood given the article's title." + }, + { + "id": 165, + "string": "For example, \"What individual is the school named after?\"" + }, + { + "id": 166, + "string": "for the document \"Harvard University\"." + }, + { + "id": 167, + "string": "3." + }, + { + "id": 168, + "string": "Paragraph-dependent, meaning it can only be understood given its paragraph." + }, + { + "id": 169, + "string": "For example, \"What was the first step in the reforms?\"." + }, + { + "id": 170, + "string": "We find 67.4% of the questions to be context-independent, 22.6% to be document-dependent, and the remaining 10% to be paragraph-dependent." + }, + { + "id": 171, + "string": "There are many document-dependent questions because questions are frequently about the subject of the document." + }, + { + "id": 172, + "string": "Since a reasonably high fraction of the questions can be understood given the document they are from, and to isolate our analysis from the retrieval mechanism used, we choose to evaluate on the document-level." + }, + { + "id": 173, + "string": "We build documents by concatenating all the paragraphs in SQuAD from the same article together into a single document." + }, + { + "id": 174, + "string": "Given the correct paragraph (i.e., in the standard SQuAD setting) our model reaches 72.14 EM and 81.05 F1 and can complete 26 epochs of training in less than five hours." + }, + { + "id": 175, + "string": "Most of our variations to handle the multi-paragraph setting caused a minor (up to half a point) drop in performance, while the sigmoid version fell behind by a point and a half." + }, + { + "id": 176, + "string": "We graph the document-level performance in Figure 5 ."
+ }, + { + "id": 177, + "string": "For SQuAD, we find it crucial to employ one of the suggested confidence training techniques." + }, + { + "id": 178, + "string": "The base model starts to drop in performance once more than two paragraphs are used." + }, + { + "id": 179, + "string": "However, the shared-norm approach is able to reach a peak performance of 72.37 F1 and 64.08 EM given 15 paragraphs." + }, + { + "id": 180, + "string": "Given our estimate that 10% of the questions are ambiguous if the paragraph is unknown, our approach appears to have adapted to the document-level task very well." + }, + { + "id": 181, + "string": "Finally, we compare the shared-norm model with the document-level result reported by Chen et al." + }, + { + "id": 182, + "string": "(2017a) ." + }, + { + "id": 183, + "string": "We re-evaluate our model using the documents used by Chen et al." + }, + { + "id": 184, + "string": "(2017a) , which consist of the same Wikipedia articles SQuAD was built from, but downloaded at different dates." + }, + { + "id": 185, + "string": "The advantage of this dataset is that it does not allow the model to know a priori which paragraphs were filtered out during the construction of SQuAD." + }, + { + "id": 186, + "string": "The disadvantage is that some of the articles have been edited since the questions were written, so some questions may no longer be answerable." + }, + { + "id": 187, + "string": "Our model achieves 59.14 EM and 67.34 F1 on this dataset, which significantly outperforms the 49.7 EM reported by Chen et al." + }, + { + "id": 188, + "string": "(2017a) ." + }, + { + "id": 189, + "string": "Curated TREC We perform one final experiment that tests our model as part of an end-to-end question answering system." + }, + { + "id": 190, + "string": "For document retrieval, we re-implement the pipeline from Joshi et al." + }, + { + "id": 191, + "string": "(2017) ." 
+ }, + { + "id": 192, + "string": "Given a question, we retrieve up to 10 web documents using a Bing web search of the question, and all Wikipedia articles about entities the entity linker TAGME (Ferragina and Scaiella, 2010) identifies in the question. (Table 4: Results on the Curated TREC corpus. S-Norm (ours): 53.31; YodaQA with Bing (Baudiš, 2015): 37.18; YodaQA (Baudiš, 2015): 34.26; DrQA + DS (Chen et al., 2017a): 25.7. YodaQA results extracted from its GitHub page.)" + }, + { + "id": 193, + "string": "We then use our linear paragraph ranker to select the 16 most relevant paragraphs from all these documents, which are passed to our model to locate the final answer span." + }, + { + "id": 194, + "string": "We choose to use the shared-norm model trained on the TriviaQA unfiltered dataset since it is trained using multiple web documents as input." + }, + { + "id": 195, + "string": "We use the same heuristics as Joshi et al." + }, + { + "id": 196, + "string": "(2017) to filter out trivia or QA websites to ensure questions cannot be trivially answered using webpages that directly address the question." + }, + { + "id": 197, + "string": "A demo of the system is publicly available." + }, + { + "id": 198, + "string": "We find accuracy on the TriviaQA unfiltered questions remains almost unchanged (within half a percent exact match score) when using our document retrieval method instead of the given documents, showing our pipeline does a good job of producing evidence documents that are similar to the ones in the training data." + }, + { + "id": 199, + "string": "We test the system on questions from the TREC QA tasks (Voorhees et al., 1999) , in particular a curated set of questions from Baudiš (2015) , the same dataset used in Chen et al." + }, + { + "id": 200, + "string": "(2017a) ." + }, + { + "id": 201, + "string": "We apply our system to the 694 test questions without retraining on the train questions."
+ }, + { + "id": 202, + "string": "We compare against DrQA (Chen et al., 2017a) and YodaQA (Baudiš, 2015) ." + }, + { + "id": 203, + "string": "It is important to note that these systems use different document corpora (Wikipedia for DrQA, and Wikipedia, several knowledge bases, and optionally Bing web search for YodaQA) and different training data (SQuAD and the TREC training questions for DrQA, and TREC only for YodaQA), so we cannot make assertions about the relative performance of individual components." + }, + { + "id": 204, + "string": "Nevertheless, it is instructive to show how the methods we experiment with in this work can advance an end-to-end QA system." + }, + { + "id": 205, + "string": "The results are listed in Table 4." + }, + { + "id": 206, + "string": "This is a strong proof-of-concept that neural paragraph reading combined with existing document retrieval methods can advance the state-of-the-art on general question answering." + }, + { + "id": 207, + "string": "It also shows that, despite the noise, the data from TriviaQA is sufficient to train models that can be effective on out-of-domain QA tasks." + }, + { + "id": 208, + "string": "Discussion We found that models that have only been trained on answer-containing paragraphs can perform very poorly in the multi-paragraph setting." + }, + { + "id": 209, + "string": "The results were particularly bad for SQuAD; we think this is partly because the paragraphs are shorter, so the model had less exposure to irrelevant text." + }, + { + "id": 210, + "string": "The shared-norm approach consistently outperformed the other methods, especially on SQuAD and TriviaQA unfiltered, where many paragraphs were needed to reach peak performance." + }, + { + "id": 211, + "string": "Figures 3, 4, and 5 show this technique has a minimal effect on the performance when only one paragraph is used, suggesting the model's per-paragraph performance is preserved."
+ }, + { + "id": 212, + "string": "Meanwhile, it can be seen that the accuracy of the shared-norm model never drops as more paragraphs are added, showing it successfully resolves the problem of being distracted by irrelevant text." + }, + { + "id": 213, + "string": "The no-answer and merge approaches were moderately effective; we suspect this is because they at least expose the model to more irrelevant text." + }, + { + "id": 214, + "string": "However, these methods do not address the fundamental issue of requiring confidence scores to be comparable between independent applications of the model to different paragraphs, which is why we think they lagged behind." + }, + { + "id": 215, + "string": "The sigmoid objective function reduces the paragraph-level performance considerably, especially on the TriviaQA datasets." + }, + { + "id": 216, + "string": "We suspect this is because it is vulnerable to label noise, as discussed in Section 2.2." + }, + { + "id": 217, + "string": "Error Analysis We perform an error analysis by labeling 200 random TriviaQA web dev-set errors made by the shared-norm model." + }, + { + "id": 218, + "string": "We found 40.5% of the errors were caused because the document did not contain sufficient evidence to answer the question, and 17% were caused by the correct answer not being contained in the answer key." + }, + { + "id": 219, + "string": "The distribution of the remaining errors is shown in Table 5 ." + }, + { + "id": 220, + "string": "We found quite a few cases where a sentence contained the answer, but the model was unable to extract it due to complex syntactic structure or paraphrasing." + }, + { + "id": 221, + "string": "Two kinds of multi-sentence reading errors were also common: cases that required connecting multiple statements made in a single paragraph, and long-range coreference cases where a sentence's subject was named in a previous paragraph."
+ }, + { + "id": 222, + "string": "Finally, some questions required background knowledge, or required the model to extract answers that were only stated indirectly (e.g., examining a list to extract the nth element)." + }, + { + "id": 223, + "string": "Overall, these results suggest good avenues for improvement are to continue advancing the sentence and paragraph level reading comprehension abilities of the model, and adding a mechanism to handle document-level coreferences." + }, + { + "id": 224, + "string": "Related Work Reading Comprehension Datasets." + }, + { + "id": 225, + "string": "The state of the art in reading comprehension has been rapidly advanced by neural models, in no small part due to the introduction of many large datasets." + }, + { + "id": 226, + "string": "The first large scale datasets for training neural reading comprehension models used a Cloze-style task, where systems must predict a held out word from a piece of text (Hermann et al., 2015; Hill et al., 2015) ." + }, + { + "id": 227, + "string": "Additional datasets including SQuAD (Rajpurkar et al., 2016) , WikiReading (Hewlett et al., 2016) , MS Marco (Nguyen et al., 2016) and TriviaQA (Joshi et al., 2017) provided more realistic questions." + }, + { + "id": 228, + "string": "Another dataset of trivia questions, Quasar-T (Dhingra et al., 2017) , was introduced recently that uses ClueWeb09 (Callan et al., 2009) as its source for documents." + }, + { + "id": 229, + "string": "In this work we choose to focus on SQuAD because it is well studied, and TriviaQA because it is more challenging and features documents and multi-document contexts (Quasar-T is similar, but was released after we started work on this project)." + }, + { + "id": 230, + "string": "Neural Reading Comprehension."
+ }, + { + "id": 231, + "string": "Neural reading comprehension systems typically use some form of attention (Wang and Jiang, 2016) , although alternative architectures exist (Chen et al., 2017a; Weissenborn et al., 2017b) ." + }, + { + "id": 232, + "string": "Our model follows this approach, but includes some recent advances such as variational dropout (Gal and Ghahramani, 2016) and bi-directional attention (Seo et al., 2016) ." + }, + { + "id": 233, + "string": "Self-attention has been used in several prior works (Cheng et al., 2016; Wang et al., 2017c; Pan et al., 2017) ." + }, + { + "id": 234, + "string": "Our approach to allowing a reading comprehension model to produce a per-paragraph no-answer score is related to the approach used in the BiDAF-T (Min et al., 2017) model to produce per-sentence classification scores, although we use an attentionbased method instead of max-pooling." + }, + { + "id": 235, + "string": "Open QA." + }, + { + "id": 236, + "string": "Open question answering has been the subject of much research, especially spurred by the TREC question answering track (Voorhees et al., 1999) ." + }, + { + "id": 237, + "string": "Knowledge bases can be used, such as in (Berant et al., 2013) , although the resulting systems are limited by the quality of the knowledge base." + }, + { + "id": 238, + "string": "Systems that try to answer questions using natural language resources such as YodaQA (Baudiš, 2015) typically use pipelined methods to retrieve related text, build answer candidates, and pick a final output." + }, + { + "id": 239, + "string": "Neural Open QA." + }, + { + "id": 240, + "string": "Open question answering with neural models was considered by Chen et al." + }, + { + "id": 241, + "string": "(2017a) , where researchers trained a model on SQuAD and combined it with a retrieval engine for Wikipedia articles." 
+ }, + { + "id": 242, + "string": "Our work differs because we focus on explicitly addressing the problem of applying the model to multiple paragraphs." + }, + { + "id": 243, + "string": "A pipelined approach to QA was recently proposed by Wang et al." + }, + { + "id": 244, + "string": "(2017a) , where a ranker model is used to select a paragraph for the reading comprehension model to process." + }, + { + "id": 245, + "string": "More recent work has considered evidence aggregation techniques (Wang et al., 2017b; Swayamdipta et al., 2017) ." + }, + { + "id": 246, + "string": "Our work shows paragraph-level models that produce well-calibrated confidence scores can effectively exploit large amounts of text without aggregation, although integrating aggregation techniques could further improve our results." + }, + { + "id": 247, + "string": "Conclusion We have shown that, when using a paragraph-level QA model across multiple paragraphs, our training method of sampling non-answer-containing paragraphs while using a shared-norm objective function can be very beneficial." + }, + { + "id": 248, + "string": "Combining this with our suggestions for paragraph selection, using the summed training objective, and our model design allows us to advance the state of the art on TriviaQA." + }, + { + "id": 249, + "string": "As shown by our demo, this work can be directly applied to building deep-learning-powered open question answering systems."
+ } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 26 + }, + { + "section": "Pipelined Method", + "n": "2", + "start": 27, + "end": 28 + }, + { + "section": "Paragraph Selection", + "n": "2.1", + "start": 29, + "end": 33 + }, + { + "section": "Handling Noisy Labels", + "n": "2.2", + "start": 34, + "end": 47 + }, + { + "section": "Model", + "n": "2.3", + "start": 48, + "end": 65 + }, + { + "section": "Confidence Method", + "n": "3", + "start": 66, + "end": 85 + }, + { + "section": "Shared-Normalization", + "n": "3.1", + "start": 86, + "end": 89 + }, + { + "section": "Merge", + "n": "3.2", + "start": 90, + "end": 91 + }, + { + "section": "No-Answer Option", + "n": "3.3", + "start": 92, + "end": 99 + }, + { + "section": "Sigmoid", + "n": "3.4", + "start": 100, + "end": 103 + }, + { + "section": "Datasets", + "n": "4.1", + "start": 104, + "end": 104 + }, + { + "section": "Preprocessing", + "n": "4.2", + "start": 105, + "end": 110 + }, + { + "section": "Sampling", + "n": "4.3", + "start": 111, + "end": 116 + }, + { + "section": "Implementation", + "n": "4.4", + "start": 117, + "end": 125 + }, + { + "section": "TriviaQA Web and TriviaQA Wiki", + "n": "5.1", + "start": 126, + "end": 147 + }, + { + "section": "TriviaQA Unfiltered", + "n": "5.2", + "start": 148, + "end": 157 + }, + { + "section": "SQuAD", + "n": "5.3", + "start": 158, + "end": 188 + }, + { + "section": "Curated TREC", + "n": "5.4", + "start": 189, + "end": 207 + }, + { + "section": "Discussion", + "n": "5.5", + "start": 208, + "end": 216 + }, + { + "section": "Error Analysis", + "n": "5.6", + "start": 217, + "end": 223 + }, + { + "section": "Related Work", + "n": "6", + "start": 224, + "end": 245 + }, + { + "section": "Conclusion", + "n": "7", + "start": 246, + "end": 249 + } + ], + "figures": [ + { + "filename": "../figure/image/996-Figure4-1.png", + "caption": "Figure 4: Results for our confidence methods on TriviaQA unfiltered. 
The shared-norm approach is the strongest, while the baseline model starts to lose performance as more paragraphs are used.", + "page": 5, + "bbox": { + "x1": 310.08, + "x2": 522.24, + "y1": 199.68, + "y2": 316.32 + } + }, + { + "filename": "../figure/image/996-Figure3-1.png", + "caption": "Figure 3: Results on TriviaQA web when applying our models to multiple paragraphs from each document. Most of our training methods improve the model’s ability to utilize more text.", + "page": 5, + "bbox": { + "x1": 74.88, + "x2": 287.03999999999996, + "y1": 199.68, + "y2": 316.32 + } + }, + { + "filename": "../figure/image/996-Table3-1.png", + "caption": "Table 3: Published TriviaQA results. Our approach advances the state of the art by about 10 points on these datasets4", + "page": 5, + "bbox": { + "x1": 84.96, + "x2": 510.24, + "y1": 62.879999999999995, + "y2": 145.92 + } + }, + { + "filename": "../figure/image/996-Figure1-1.png", + "caption": "Figure 1: Noisy supervision can cause many spans of text that contain the answer, but are not situated in a context that relates to the question (red), to distract the model from learning from more relevant spans (green).", + "page": 1, + "bbox": { + "x1": 309.59999999999997, + "x2": 522.24, + "y1": 205.44, + "y2": 321.12 + } + }, + { + "filename": "../figure/image/996-Figure5-1.png", + "caption": "Figure 5: Results for our confidence methods on document-level SQuAD. 
The shared-norm model is the only model that does not lose performance when exposed to large numbers of paragraphs.", + "page": 6, + "bbox": { + "x1": 74.88, + "x2": 287.03999999999996, + "y1": 64.8, + "y2": 179.51999999999998 + } + }, + { + "filename": "../figure/image/996-Figure2-1.png", + "caption": "Figure 2: High level outline of our model.", + "page": 2, + "bbox": { + "x1": 72.0, + "x2": 291.36, + "y1": 62.879999999999995, + "y2": 358.08 + } + }, + { + "filename": "../figure/image/996-Table4-1.png", + "caption": "Table 4: Results on the Curated TREC corpus, YodaQA results extracted from its github page7", + "page": 7, + "bbox": { + "x1": 86.88, + "x2": 273.12, + "y1": 61.44, + "y2": 114.24 + } + }, + { + "filename": "../figure/image/996-Table5-1.png", + "caption": "Table 5: Error analysis on TriviaQA web.", + "page": 7, + "bbox": { + "x1": 324.96, + "x2": 506.4, + "y1": 61.44, + "y2": 132.0 + } + }, + { + "filename": "../figure/image/996-Table1-1.png", + "caption": "Table 1: Examples from SQuAD where a model was less confident in a correct extraction from one paragraph (left) than in an incorrect extraction from another (right). Even if the passage has no correct answer and does not contain any question words, the model assigns high confidence to phrases that match the category the question is asking about. 
Because the confidence scores are not well-calibrated, this confidence is often higher than the confidence assigned to correct answer spans in different paragraphs, even when those correct spans have better contextual evidence.", + "page": 3, + "bbox": { + "x1": 75.84, + "x2": 521.28, + "y1": 62.879999999999995, + "y2": 197.28 + } + }, + { + "filename": "../figure/image/996-Table2-1.png", + "caption": "Table 2: Results on TriviaQA web using our pipelined method.", + "page": 4, + "bbox": { + "x1": 326.88, + "x2": 503.03999999999996, + "y1": 62.879999999999995, + "y2": 135.35999999999999 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-13" + }, + { + "slides": { + "0": { + "title": "Time Critical Events", + "text": [ + "Disaster events (earthquake, flood) Urgent needs for affected people", + "Information gathering in real-time is the most challenging part", + "Relief operations Humanitarian organizations and local administration need information to help and launch response" + ], + "page_nums": [ + 1 + ], + "images": [] + }, + "1": { + "title": "Artificial Intelligence for Digital Response AIDR", + "text": [ + "Response time-line today Response time-line our target", + "Delayed decision-making Delayed crisis response Target Early decision-making Rapid crisis response" + ], + "page_nums": [ + 2 + ], + "images": [] + }, + "2": { + "title": "Artificial Intelligence for Digital Response", + "text": [ + "Informative Not informative Dont know or cant judge Facilitates decision makers Hurricane Irma Hurricane Hurricane California Mexico Iraq & Iran Sri Lanka Harvey Maria wildfires earthquake earthquake f loods", + "Small amount of labeled data and large amount of unlabeled data at the beginning of the event", + "Labeled data from the past event. Can we use them?", + "What about domain shift?" 
+ ], + "page_nums": [ + 3, + 4, + 5 + ], + "images": [] + }, + "3": { + "title": "Our Solutions Contributions", + "text": [ + "How to use large amount of unlabeled data and small amount of labeled data from the same event?", + "How to transfer knowledge from the past events", + "=> Adversarial domain adaptions" + ], + "page_nums": [ + 6, + 7 + ], + "images": [] + }, + "6": { + "title": "Semi Supervised Learning", + "text": [ + "L: number of labeled instances (x1:L, y1:L)", + "U: number of unlabeled instances (xL+1:L+U)", + "Design a classifier f: x y" + ], + "page_nums": [ + 10, + 11 + ], + "images": [ + "figure/image/998-Figure1-1.png" + ] + }, + "7": { + "title": "Graph based Semi Supervised Learning", + "text": [ + "Nodes: Instances (labeled and unlabeled)", + "Edges: n x n similarity matrix", + "Each entry ai,j indicates a similarity between instance i and j", + "We construct the graph using k-nearest neighbor (k=10)", + "Requires n(n-1)/2 distance computation", + "K-d tree data structure to reduce the computational complexity", + "Feature Vector: taking the averaging of the word2vec vectors", + "Semi-Supervised component: Loss function", + "Learns the internal representations (embedding) by predicting a node in the graph context", + "Two types of context", + "1. Context is based on the graph to encode structural", + "2. 
Context is based on the labels to inject label information into the embeddings", + "{U,V} Convolution filters and dense layer parameters", + "{Vc,W} Parameters specific to the supervised part", + "{Vg,C} Parameters specific to the semi-supervised part" + ], + "page_nums": [ + 12, + 13, + 14, + 15, + 16, + 17, + 18, + 19 + ], + "images": [] + }, + "10": { + "title": "Corpus", + "text": [ + "A small part of the tweets has been annotated using crowdflower", + "Relevant: injured or dead people, infrastructure damage, urgent needs of affected people, donation requests", + "Dataset Relevant Irrelevant Train Dev Test", + "Nepal earthquake: 50K Queensland flood: 21K" + ], + "page_nums": [ + 24 + ], + "images": [] + }, + "11": { + "title": "Experiments and Results", + "text": [ + "Model trained using Convolution Neural Network (CNN)", + "Model trained using CNN were used to automatically label unlabeled data", + "Instances with classifier confidence >=0.75 were used to retrain a new model", + "Experiments AUC P R F1", + "Domain Adaptation Baseline (Transfer Baseline):", + "Trained CNN model on source (an event) and tested on target (another event)", + "Source Target AUC P R F1", + "Combining all the components of the network", + "Domain Adversarial with Graph Embedding" + ], + "page_nums": [ + 25, + 26, + 27, + 28, + 29 + ], + "images": [] + }, + "12": { + "title": "Summary", + "text": [ + "We have seen how graph-embedding based semi-supervised approach can be useful for small labeled data scenario", + "How can we use existing data and apply domain adaptation technique", + "We propose how both techniques can be combined" + ], + "page_nums": [ + 30 + ], + "images": [] + }, + "13": { + "title": "Limitation and Future Study", + "text": [ + "Graph embedding is computationally expensive", + "Graph constructed using averaged vector from word2vec", + "Explored binary class problem", + "Convoluted feature for graph construction", + "Domain adaptation: labeled and unlabeled data 
from target" + ], + "page_nums": [ + 31 + ], + "images": [] + } + }, + "paper_title": "Domain Adaptation with Adversarial Training and Graph Embeddings", + "paper_id": "998", + "paper": { + "title": "Domain Adaptation with Adversarial Training and Graph Embeddings", + "abstract": "The success of deep neural networks (DNNs) is heavily dependent on the availability of labeled data. However, obtaining labeled data is a big challenge in many real-world problems. In such scenarios, a DNN model can leverage labeled and unlabeled data from a related domain, but it has to deal with the shift in data distributions between the source and the target domains. In this paper, we study the problem of classifying social media posts during a crisis event (e.g., Earthquake). For that, we use labeled and unlabeled data from past similar events (e.g., Flood) and unlabeled data for the current event. We propose a novel model that performs adversarial learning based domain adaptation to deal with distribution drifts and graph based semi-supervised learning to leverage unlabeled data within a single unified deep learning framework. Our experiments with two real-world crisis datasets collected from Twitter demonstrate significant improvements over several baselines.", + "text": [ + { + "id": 0, + "string": "Introduction The application that motivates our work is the time-critical analysis of social media (Twitter) data at the sudden-onset of an event like natural or man-made disasters (Imran et al., 2015) ." + }, + { + "id": 1, + "string": "In such events, affected people post timely and useful information of various types such as reports of injured or dead people, infrastructure damage, urgent needs (e.g., food, shelter, medical assistance) on these social networks." 
+ }, + { + "id": 2, + "string": "Humanitarian organizations believe timely access to this important information from social networks can help significantly and reduce both human loss and economic dam-age (Varga et al., 2013; Power et al., 2013) ." + }, + { + "id": 3, + "string": "In this paper, we consider the basic task of classifying each incoming tweet during a crisis event (e.g., Earthquake) into one of the predefined classes of interest (e.g., relevant vs. nonrelevant) in real-time." + }, + { + "id": 4, + "string": "Recently, deep neural networks (DNNs) have shown great performance in classification tasks in NLP and data mining." + }, + { + "id": 5, + "string": "However the success of DNNs on a task depends heavily on the availability of a large labeled dataset, which is not a feasible option in our setting (i.e., classifying tweets at the onset of an Earthquake)." + }, + { + "id": 6, + "string": "On the other hand, in most cases, we can have access to a good amount of labeled and abundant unlabeled data from past similar events (e.g., Floods) and possibly some unlabeled data for the current event." + }, + { + "id": 7, + "string": "In such situations, we need methods that can leverage the labeled and unlabeled data in a past event (we refer to this as a source domain), and that can adapt to a new event (we refer to this as a target domain) without requiring any labeled data in the new event." + }, + { + "id": 8, + "string": "In other words, we need models that can do domain adaptation to deal with the distribution drift between the domains and semi-supervised learning to leverage the unlabeled data in both domains." + }, + { + "id": 9, + "string": "Most recent approaches to semi-supervised learning (Yang et al., 2016) and domain adaptation (Ganin et al., 2016) use the automatic feature learning capability of DNN models." 
+ }, + { + "id": 10, + "string": "In this paper, we extend these methods by proposing a novel model that performs domain adaptation and semi-supervised learning within a single unified deep learning framework." + }, + { + "id": 11, + "string": "In this framework, the basic task-solving network (a convolutional neural network in our case) is put together with two other networks -one for semi-supervised learning and the other for domain adaptation." + }, + { + "id": 12, + "string": "The semisupervised component learns internal representa-tions (features) by predicting contextual nodes in a graph that encodes similarity between labeled and unlabeled training instances." + }, + { + "id": 13, + "string": "The domain adaptation is achieved by training the feature extractor (or encoder) in adversary with respect to a domain discriminator, a binary classifier that tries to distinguish the domains." + }, + { + "id": 14, + "string": "The overall idea is to learn high-level abstract representation that is discriminative for the main classification task, but is invariant across the domains." + }, + { + "id": 15, + "string": "We propose a stochastic gradient descent (SGD) algorithm to train the components of our model simultaneously." + }, + { + "id": 16, + "string": "The evaluation of our proposed model is conducted using two Twitter datasets on scenarios where there is only unlabeled data in the target domain." + }, + { + "id": 17, + "string": "Our results demonstrate the following." + }, + { + "id": 18, + "string": "Our source code is available on Github 1 and the data is available on CrisisNLP 2 ." + }, + { + "id": 19, + "string": "The rest of the paper is organized as follows." + }, + { + "id": 20, + "string": "In Section 2, we present the proposed method, i.e., domain adaptation and semi-supervised graph embedding learning." + }, + { + "id": 21, + "string": "In Section 3, we present the experimental setup and baselines." 
+ }, + { + "id": 22, + "string": "The results and analysis are presented in Section 4." + }, + { + "id": 23, + "string": "In Section 5, we present the works relevant to this study." + }, + { + "id": 24, + "string": "Finally, conclusions appear in Section 6." + }, + { + "id": 25, + "string": "The Model We demonstrate our approach for domain adaptation with adversarial training and graph embedding on a tweet classification task to support crisis response efforts." + }, + { + "id": 26, + "string": "Let D l S = {t i , y i } Ls i=1 and D u S = {t i } Us i=1 be the set of labeled and unlabeled tweets for a source crisis event S (e.g., Nepal earthquake), where y i ∈ {1, ." + }, + { + "id": 27, + "string": "." + }, + { + "id": 28, + "string": "." + }, + { + "id": 29, + "string": ", K} is the class label for tweet t i , L s and U s are the number of labeled and unlabeled tweets for the source event, respectively." + }, + { + "id": 30, + "string": "In addition, we have unlabeled tweets D u T = {t i } Ut i=1 for a target event T (e.g., Queensland flood) with U t being the number of unlabeled tweets in the target domain." + }, + { + "id": 31, + "string": "Our ultimate goal is to train a cross-domain model p(y|t, θ) with parameters θ that can classify any tweet in the target event T without having any information about class labels in T ." + }, + { + "id": 32, + "string": "Figure 1 shows the overall architecture of our neural model." + }, + { + "id": 33, + "string": "The input to the network is a tweet t = (w 1 , ." + }, + { + "id": 34, + "string": "." + }, + { + "id": 35, + "string": "." + }, + { + "id": 36, + "string": ", w n ) containing words that come from a finite vocabulary V defined from the training set." + }, + { + "id": 37, + "string": "The first layer of the network maps each of these words into a distributed representation R d by looking up a shared embedding matrix E ∈ R |V|×d ." 
+ }, + { + "id": 38, + "string": "We initialize the embedding matrix E in our network with word embeddings that are pretrained on a large crisis dataset (Subsection 2.5)." + }, + { + "id": 39, + "string": "However, the embedding matrix E can also be initialized randomly." + }, + { + "id": 40, + "string": "The output of the look-up layer is a matrix X ∈ R n×d , which is passed through a number of convolution and pooling layers to learn higher-level feature representations." + }, + { + "id": 41, + "string": "A convolution operation applies a filter u ∈ R k.d to a window of k vectors to produce a new feature h t as h t = f (u.X t:t+k−1 ) (1) where X t:t+k−1 is the concatenation of k look-up vectors, and f is a nonlinear activation; we use rectified linear units or ReLU." + }, + { + "id": 42, + "string": "We apply this filter to each possible k-length window in X with stride size of 1 to generate a feature map h j as: h j = [h 1 , ." + }, + { + "id": 43, + "string": "." + }, + { + "id": 44, + "string": "." + }, + { + "id": 45, + "string": ", h n+k−1 ] (2) We repeat this process N times with N different filters to get N different feature maps." + }, + { + "id": 46, + "string": "We use a wide convolution (Kalchbrenner et al., 2014) , which ensures that the filters reach the entire tweet, including the boundary words." + }, + { + "id": 47, + "string": "This is done by performing zero-padding, where out-of-range (i.e., t<1 or t>n) vectors are assumed to be zero." + }, + { + "id": 48, + "string": "With wide convolution, o zero-padding size and 1 stride size, each feature map contains (n + 2o − k + 1) convoluted features." + }, + { + "id": 49, + "string": "After the convolution, we apply a max-pooling operation to each of the feature maps, where µ p (h j ) refers to the max operation applied to each window of p features with stride size of 1 in the feature map h j ."
+ }, + { + "id": 50, + "string": "Intuitively, the convolution operation composes local features into higher-level representations in the feature maps, and max-pooling extracts the most important aspects of each feature map while reducing the output dimensionality." + }, + { + "id": 51, + "string": "Since each convolution-pooling operation is performed independently, the features extracted become invariant in order (i.e., where they occur in the tweet)." + }, + { + "id": 52, + "string": "To incorporate order information between the pooled features, we include a fully-connected (dense) layer m = [µ p (h 1 ), · · · , µ p (h N )] (3) z = f (V m) (4) where V is the weight matrix." + }, + { + "id": 53, + "string": "We choose a convolutional architecture for feature composition because it has shown impressive results on similar tasks in a supervised setting (Nguyen et al., 2017) ." + }, + { + "id": 54, + "string": "The network at this point splits into three branches (shaded with three different colors in Figure 1 ) each of which serves a different purpose and contributes a separate loss to the overall loss of the model as defined below: L(Λ, Φ, Ω, Ψ) = L C (Λ, Φ) + λg L G (Λ, Ω) + λ d L D (Λ, Ψ) (5) where Λ = {U, V } are the convolutional filters and dense layer weights that are shared across the three branches." + }, + { + "id": 55, + "string": "The first component L C (Λ, Φ) is a supervised classification loss based on the labeled data in the source event." + }, + { + "id": 56, + "string": "The second component L G (Λ, Ω) is a graph-based semi-supervised loss that utilizes both labeled and unlabeled data in the source and target events to induce structural similarity between training instances." + }, + { + "id": 57, + "string": "The third component L D (Λ, Ψ) is an adversary loss that again uses all available data in the source and target domains to induce domain invariance in the learned features."
+ }, + { + "id": 58, + "string": "The tunable hyperparameters λ g and λ d control the relative strength of the components." + }, + { + "id": 59, + "string": "Supervised Component The supervised component induces label information (e.g., relevant vs. non-relevant) directly in the network through the classification loss L C (Λ, Φ), which is computed on the labeled instances in the source event, D l S ." + }, + { + "id": 60, + "string": "Specifically, this branch of the network, as shown at the top in Figure 1 , takes the shared representations z as input and pass it through a task-specific dense layer z c = f (V c z) (6) where V c is the corresponding weight matrix." + }, + { + "id": 61, + "string": "The activations z c along with the activations from the semi-supervised branch z s are used for classification." + }, + { + "id": 62, + "string": "More formally, the classification layer defines a Softmax p(y = k|t, θ) = exp W T k [z c ; z s ] k exp W T k [z c ; z s ] (7) where [." + }, + { + "id": 63, + "string": "; .]" + }, + { + "id": 64, + "string": "denotes concatenation of two column vectors, W k are the class weights, and θ = {U, V, V c , W } defines the relevant parameters for this branch of the network with Λ = {U, V } being the shared parameters and Φ = {V c , W } being the parameters specific to this branch." + }, + { + "id": 65, + "string": "Once learned, we use θ for prediction on test tweets." + }, + { + "id": 66, + "string": "The classification loss L C (Λ, Φ) (or L C (θ)) is defined as LC(Λ, Φ) = − 1 Ls Ls i=1 I(yi = k) log p(yi = k|ti, Λ, Φ) (8) where I(.)" + }, + { + "id": 67, + "string": "is an indicator function that returns 1 when the argument is true, otherwise it returns 0." + }, + { + "id": 68, + "string": "Semi-supervised Component The semi-supervised branch (shown at the middle in Figure 1 ) induces structural similarity between training instances (labeled or unlabeled) in the source and target events." 
+ }, + { + "id": 69, + "string": "We adopt the recently proposed graph-based semi-supervised deep learning framework (Yang et al., 2016) , which shows impressive gains over existing semisupervised methods on multiple datasets." + }, + { + "id": 70, + "string": "In this framework, a \"similarity\" graph G first encodes relations between training instances, which is then used by the network to learn internal representations (i.e., embeddings)." + }, + { + "id": 71, + "string": "Learning Graph Embeddings The semi-supervised branch takes the shared representation z as input and learns internal representations by predicting a node in the graph context of the input tweet." + }, + { + "id": 72, + "string": "Following (Yang et al., 2016) , we use negative sampling to compute the loss for predicting the context node, and we sample two types of contextual nodes: (i) one is based on the graph G to encode structural information, and (ii) the second is based on the labels in D l S to incorporate label information through this branch of the network." + }, + { + "id": 73, + "string": "The ratio of positive and negative samples is controlled by a random variable ρ 1 ∈ (0, 1), and the proportion of the two context types is controlled by another random variable ρ 2 ∈ (0, 1); see Algorithm 1 of (Yang et al., 2016) for details on the sampling procedure." + }, + { + "id": 74, + "string": "Let (j, γ) is a tuple sampled from the distribution p(j, γ|i, D l S , G), where j is a context node of an input node i and γ ∈ {+1, −1} denotes whether it is a positive or a negative sample; γ = +1 if t i and t j are neighbors in the graph (for graph-based context) or they both have same labels (for label-based context), otherwise γ = −1." 
+ }, + { + "id": 75, + "string": "The negative log loss for context prediction L G (Λ, Ω) can be written as L G (Λ, Ω) = − 1 Ls + Us Ls+Us i=1 E (j,γ) log σ γC T j zg(i) (9) where z g (i) = f (V g z(i)) defines another dense layer (marked as Dense (z g ) in Figure 1 ) having weights V g , and C j is the weight vector associated with the context node t j ." + }, + { + "id": 76, + "string": "Note that here Λ = {U, V } defines the shared parameters and Ω = {V g , C} defines the parameters specific to the semi-supervised branch of the network." + }, + { + "id": 77, + "string": "Graph Construction Typically graphs are constructed based on a relational knowledge source, e.g., citation links in (Lu and Getoor, 2003) , or distance between instances (Zhu, 2005) ." + }, + { + "id": 78, + "string": "However, we do not have access to such a relational knowledge in our setting." + }, + { + "id": 79, + "string": "On the other hand, computing distance between n(n−1)/2 pairs of instances to construct the graph is also very expensive (Muja and Lowe, 2014) ." + }, + { + "id": 80, + "string": "Therefore, we choose to use k-nearest neighborbased approach as it has been successfully used in other study (Steinbach et al., 2000) ." + }, + { + "id": 81, + "string": "The nearest neighbor graph consists of n vertices and for each vertex, there is an edge set consisting of a subset of n instances, i.e., tweets in our training set." + }, + { + "id": 82, + "string": "The edge is defined by the distance measure d(i, j) between tweets t i and t j , where the value of d represents how similar the two tweets are." + }, + { + "id": 83, + "string": "We used k-d tree data structure (Bentley, 1975) to efficiently find the nearest instances." + }, + { + "id": 84, + "string": "To construct the graph, we first represent each tweet by averaging the word2vec vectors of its words, and then we measure d(i, j) by computing the Euclidean distance between the vectors." 
+ }, + { + "id": 85, + "string": "The number of nearest neighbor k was set to 10." + }, + { + "id": 86, + "string": "The reason of averaging the word vectors is that it is computationally simpler and it captures the relevant semantic information for our task in hand." + }, + { + "id": 87, + "string": "Likewise, we choose to use Euclidean distance instead of cosine for computational efficiency." + }, + { + "id": 88, + "string": "Domain Adversarial Component The network described so far can learn abstract features through convolutional and dense layers that are discriminative for the classification task (relevant vs. non-relevant)." + }, + { + "id": 89, + "string": "The supervised branch of the network uses labels in the source event to induce label information directly, whereas the semi-supervised branch induces similarity information between labeled and unlabeled instances." + }, + { + "id": 90, + "string": "However, our goal is also to make these learned features invariant across domains or events (e.g., Nepal Earthquake vs. Queensland Flood)." + }, + { + "id": 91, + "string": "We achieve this by domain adversarial training of neural networks (Ganin et al., 2016) ." + }, + { + "id": 92, + "string": "We put a domain discriminator, another branch in the network (shown at the bottom in Figure 1 ) that takes the shared internal representation z as input, and tries to discriminate between the domains of the input -in our case, whether the input tweet is from D S or from D T ." + }, + { + "id": 93, + "string": "The domain discriminator is defined by a sigmoid function: δ = p(d = 1|t, Λ, Ψ) = sigm(w T d z d ) (10) where d ∈ {0, 1} denotes the domain of the input tweet t, w d are the final layer weights of the discriminator, and z d = f (V d z) defines the hidden layer of the discriminator with layer weights V d ." 
+ }, + { + "id": 94, + "string": "Here Λ = {U, V } defines the shared parameters, and Ψ = {V d , w d } defines the parameters specific to the domain discriminator." + }, + { + "id": 95, + "string": "We use the negative log-probability as the discrimination loss: J i (Λ, Ψ) = −d i logδ − (1 − d i ) log 1 −δ (11) We can write the overall domain adversary loss over the source and target domains as L D (Λ, Ψ) = − 1 Ls + Us Ls+Us i=1 J i (Λ, Ψ) − 1 Ut U t i=1 J i (Λ, Ψ) (12) where L s + U s and U t are the number of training instances in the source and target domains, respectively." + }, + { + "id": 96, + "string": "In adversarial training, we seek parameters (saddle point) such that θ * = argmin Λ,Φ,Ω max Ψ L(Λ, Φ, Ω, Ψ) (13) which involves a maximization with respect to Ψ and a minimization with respect to {Λ, Φ, Ω}." + }, + { + "id": 97, + "string": "In other words, the updates of the shared parameters Λ = {U, V } for the discriminator work adversarially to the rest of the network, and vice versa." + }, + { + "id": 98, + "string": "This is achieved by reversing the gradients of the discrimination loss L D (Λ, Ψ), when they are backpropagated to the shared layers (see Figure 1 )." + }, + { + "id": 99, + "string": "Model Training Algorithm 1 illustrates the training algorithm based on stochastic gradient descent (SGD)." + }, + { + "id": 100, + "string": "We first initialize the model parameters." + }, + { + "id": 101, + "string": "The word embedding matrix E is initialized with pre-trained word2vec vectors (see Subsection 2.5) and is kept fixed during training." + }, + { + "id": 102, + "string": "3 Other parameters are initialized with small random numbers sampled from 3 Tuning E on our task by backpropagation increased the training time immensely (3 days compared to 5 hours on a Tesla GPU) without any significant performance gain."
+ }, + { + "id": 104, + "string": "We use AdaDelta (Zeiler, 2012) adaptive update to update the parameters." + }, + { + "id": 105, + "string": "In each iteration, we do three kinds of gradient updates to account for the three different loss components." + }, + { + "id": 106, + "string": "First, we do an epoch over all the training instances updating the parameters for the semi-supervised loss, then we do an epoch over the labeled instances in the source domain, each time updating the parameters for the supervised and the domain adversary losses." + }, + { + "id": 107, + "string": "Finally, we do an epoch over the unlabeled instances in the two domains to account for the domain adversary loss." + }, + { + "id": 108, + "string": "The main challenge in adversarial training is to balance the competing components of the network." + }, + { + "id": 109, + "string": "If one component becomes smarter than the other, its loss to the shared layer becomes useless, and the training fails to converge (Arjovsky et al., 2017) ." + }, + { + "id": 110, + "string": "Equivalently, if one component becomes weaker, its loss overwhelms that of the other, causing the training to fail." + }, + { + "id": 111, + "string": "In our experiments, we observed the domain discriminator is weaker than the rest of the network." + }, + { + "id": 112, + "string": "This could be due to the noisy nature of tweets, which makes the job for the domain discriminator harder." + }, + { + "id": 113, + "string": "To balance the components, we would want the error signals from the discriminator to be fairly weak, also we would want the supervised loss to have more impact than the semi-supervised loss." + }, + { + "id": 114, + "string": "In our experiments, the weight of the domain adversary loss λ d was fixed to 1e − 8, and the weight of the semi-supervised loss λ g was fixed to 1e − 2." 
+ }, + { + "id": 115, + "string": "Other sophisticated weighting schemes have been proposed recently (Ganin et al., 2016; Arjovsky et al., 2017; Metz et al., 2016) ." + }, + { + "id": 116, + "string": "It would be interesting to see how our model performs using these advanced tuning methods, which we leave as a future work." + }, + { + "id": 117, + "string": "Crisis Word Embedding As mentioned, we used word embeddings that are pre-trained on a crisis dataset." + }, + { + "id": 118, + "string": "To train the wordembedding model, we first pre-processed tweets collected using the AIDR system during different events occurred between 2014 and 2016." + }, + { + "id": 119, + "string": "In the preprocessing step, we lowercased the tweets and removed URLs, digit, time patterns, special characters, single character, username started with the @ symbol." + }, + { + "id": 120, + "string": "After preprocessing, the resulting dataset contains about 364 million tweets and about 3 billion words." + }, + { + "id": 121, + "string": "There are several approaches to train word embeddings such as continuous bag-of-words (CBOW) and skip-gram models of wrod2vec (Mikolov et al., 2013) , and Glove (Pennington et al., 2014) ." + }, + { + "id": 122, + "string": "For our work, we trained the CBOW model from word2vec." + }, + { + "id": 123, + "string": "While training CBOW, we filtered out words with a frequency less than or equal to 5, and we used a context window size of 5 and k = 5 negative samples." + }, + { + "id": 124, + "string": "The resulting embedding model contains about 2 million words with vector dimensions of 300." + }, + { + "id": 125, + "string": "Experimental Settings In this section, we describe our experimental settings -datasets used, settings of our models, compared baselines, and evaluation metrics." 
+ }, + { + "id": 126, + "string": "Datasets To conduct the experiment and evaluate our system, we used two real-world Twitter datasets collected during the 2015 Nepal earthquake (NEQ) and the 2013 Queensland floods (QFL)." + }, + { + "id": 127, + "string": "These datasets are comprised of millions of tweets collected through the Twitter streaming API 4 using event-specific keywords/hashtags." + }, + { + "id": 128, + "string": "To obtain the labeled examples for our task we employed paid workers from the Crowdflower 5a crowdsourcing platform." + }, + { + "id": 129, + "string": "The annotation consists of two classes relevant and non-relevant." + }, + { + "id": 130, + "string": "For the annotation, we randomly sampled 11,670 and 10,033 tweets from the Nepal earthquake and the Queensland floods datasets, respectively." + }, + { + "id": 131, + "string": "Given a tweet, we asked crowdsourcing workers to assign the \"relevant\" label if the tweet conveys/reports information useful for crisis response such as a report of injured or dead people, some kind of infrastructure damage, urgent needs of affected people, donations requests or offers, otherwise assign the \"non-relevant\" label." + }, + { + "id": 132, + "string": "We split the labeled data into 60% as training, 30% as test and 10% as development." + }, + { + "id": 133, + "string": "Table 1 shows the resulting datasets with class-wise distributions." + }, + { + "id": 134, + "string": "Data preprocessing was performed by following the same steps used to train the word2vec model (Subsection 2.5)." + }, + { + "id": 135, + "string": "In all the experiments, the classification task consists of two classes: relevant and non-relevant." + }, + { + "id": 136, + "string": "Model Settings and Baselines In order to demonstrate the effectiveness of our joint learning approach, we performed a series of experiments." 
+ }, + { + "id": 137, + "string": "To understand the contribution of different network components, we performed an ablation study showing how the model performs as a semi-supervised model alone and as a domain adaptation model alone, and then we compare them with the combined model that incorporates all the components." + }, + { + "id": 138, + "string": "Settings for Semi-supervised Learning As a baseline for the semi-supervised experiments, we used the self-training approach (Scudder, 1965) ." + }, + { + "id": 139, + "string": "For this purpose, we first trained a supervised model using the CNN architecture (i.e., shared components followed by the supervised part in Figure 1 )." + }, + { + "id": 140, + "string": "The trained model was then used to automatically label the unlabeled data." + }, + { + "id": 141, + "string": "Instances with a classifier confidence score ≥ 0.75 were then used to retrain a new model." + }, + { + "id": 142, + "string": "Next, we run experiments using our graphbased semi-supervised approach (i.e., shared components followed by the supervised and semisupervised parts in Figure 1) , which exploits unlabeled data." + }, + { + "id": 143, + "string": "For reducing the computational cost, we randomly selected 50K unlabeled instances from the same domain." + }, + { + "id": 144, + "string": "For our semi-supervised setting, one of the main goals was to understand how much labeled data is sufficient to obtain a reasonable result." + }, + { + "id": 145, + "string": "Therefore, we experimented our system by incrementally adding batches of instances, such as 100, 500, 2000, 5000, and all instances from the training set." + }, + { + "id": 146, + "string": "Such an understanding can help us design the model at the onset of a crisis event with sufficient amount of labeled data." 
+ }, + { + "id": 147, + "string": "To demonstrate that the semi-supervised approach outperforms the supervised baseline, we run supervised experiments using the same number of labeled instances." + }, + { + "id": 148, + "string": "In the supervised setting, only z c activations in Figure 1 are used for classification." + }, + { + "id": 149, + "string": "Settings for Domain Adaptation To set a baseline for the domain adaptation experiments, we train a CNN model (i.e., shared components followed by the supervised part in Figure 1 ) on one event (source) and test it on another event (target)." + }, + { + "id": 150, + "string": "We call this as transfer baseline." + }, + { + "id": 151, + "string": "To assess the performance of our domain adaptation technique alone, we exclude the semisupervised component from the network." + }, + { + "id": 152, + "string": "We train and evaluate models with this network configuration using different source and target domains." + }, + { + "id": 153, + "string": "Finally, we integrate all the components of the network as shown in Figure 1 and run domain adaptation experiments using different source and target domains." + }, + { + "id": 154, + "string": "In all our domain adaptation experiments, we only use unlabeled instances from the target domain." + }, + { + "id": 155, + "string": "In domain adaption literature, this is known as unsupervised adaptation." + }, + { + "id": 156, + "string": "Training Settings We use 100, 150, and 200 filters each having the window size of 2, 3, and 4, respectively, and pooling length of 2, 3, and 4, respectively." + }, + { + "id": 157, + "string": "We do not tune these hyperparameters in any experimental setting since the goal was to have an end-to-end comparison with the same hyperparameter setting and understand whether our approach can outperform the baselines or not." + }, + { + "id": 158, + "string": "Furthermore, we do not filter out any vocabulary item in any settings." 
+ }, + { + "id": 159, + "string": "As mentioned before in Subsection 2.4, we used AdaDelta (Zeiler, 2012) to update the model parameters in each SGD step." + }, + { + "id": 160, + "string": "The learning rate was set to 0.1 when optimizing on the classification loss and to 0.001 when optimizing on the semi-supervised loss." + }, + { + "id": 161, + "string": "The learning rate for domain adversarial training was set to 1.0." + }, + { + "id": 162, + "string": "The maximum number of epochs was set to 200, and a dropout rate of 0.02 was used to avoid overfitting (Srivastava et al., 2014) ." + }, + { + "id": 163, + "string": "We used validation-based early stopping using the F-measure with a patience of 25," + }, + { + "id": 164, + "string": "i.e., we stop training if the score does not increase for 25 consecutive epochs." + }, + { + "id": 165, + "string": "Evaluation Metrics To measure the performance of the trained models using different approaches described above, we use weighted average precision, recall, F-measure, and Area Under ROC-Curve (AUC), which are standard evaluation measures in the NLP and machine learning communities." + }, + { + "id": 166, + "string": "The rationale behind choosing the weighted metric is that it takes into account the class imbalance problem." + }, + { + "id": 167, + "string": "Results and Discussion In this section, we present the experimental results and discuss our main findings." + }, + { + "id": 168, + "string": "Semi-supervised Learning In Table 2 , we present the results obtained from the supervised, self-training based semi-supervised, and our graph-based semi-supervised experiments for both datasets."
+ }, + { + "id": 169, + "string": "It can be clearly observed that the graph-based semi-supervised approach outperforms the two baselines: supervised and self-training based semi-supervised." + }, + { + "id": 170, + "string": "Specifically, the graph-based approach shows 4% to 13% absolute improvements in terms of F1 scores for the Nepal and Queensland datasets, respectively." + }, + { + "id": 171, + "string": "To determine how the semi-supervised approach performs in the early hours of an event when only a few labeled instances are available, we mimic a batch-wise (not to be confused with minibatch in SGD) learning setting." + }, + { + "id": 172, + "string": "In Table 3 , we present the results using different batch sizes: 100, 500, 1,000, 2,000, and all labels." + }, + { + "id": 173, + "string": "From the results, we observe that models' performance improves as we include more labeled data" + }, + { + "id": 174, + "string": "Table 3 : Weighted average F-measure for the graph-based semi-supervised settings using different batch sizes. L refers to labeled data, U refers to unlabeled data, All L refers to all labeled instances for that particular dataset." + }, + { + "id": 175, + "string": "from 43.63 to 60.89 for NEQ and from 48.97 to 80.16 for QFL in the case of labeled data only (L)." + }, + { + "id": 176, + "string": "When we compare supervised vs. semi-supervised (L vs. L+U), we observe significant improvements in F1 scores for the semi-supervised model for all batches over the two datasets." + }, + { + "id": 177, + "string": "As we include unlabeled instances with labeled instances from the same event, performance significantly improves in each experimental setting giving 5% to 26% absolute improvements over the supervised models." + }, + { + "id": 178, + "string": "These improvements demonstrate the effectiveness of our approach." + }, + { + "id": 179, + "string": "We also notice that our semi-supervised approach can perform above 90% depending on the event."
+ }, + { + "id": 180, + "string": "Specifically, major improvements are observed from batch size 100 to 1,000; after that, the performance improvements are comparatively minor." + }, + { + "id": 181, + "string": "The results obtained using batch sizes 500 and 1,000 are within an acceptable range when labeled and unlabeled instances are combined (i.e., L+50kU for Nepal and L+∼21kU for Queensland), which is also a reasonable number of training examples to obtain at the onset of an event." + }, + { + "id": 182, + "string": "Domain Adaptation The results with domain adversarial training show improvements across both events, from 1.8% to 4.1% absolute gains in F1." + }, + { + "id": 183, + "string": "These results attest that adversarial training is an effective approach to induce domain invariant features in the internal representation as shown previously by Ganin et al." + }, + { + "id": 184, + "string": "(2016) ." + }, + { + "id": 185, + "string": "Finally, when we do both semi-supervised learning and unsupervised domain adaptation, we get further improvements in F1 scores ranging from 5% to 7% absolute gains." + }, + { + "id": 186, + "string": "From these improvements, we can conclude that domain adaptation with adversarial training along with graph-based semi-supervised learning is an effective method to leverage unlabeled and labeled data from a different domain." + }, + { + "id": 187, + "string": "Note that for our domain adaptation methods, we only use unlabeled data from the target domain." + }, + { + "id": 188, + "string": "Hence, we foresee future improvements of this approach by utilizing a small amount of target domain labeled data." + }, + { + "id": 189, + "string": "Related Work Two lines of research are directly related to our work: (i) semi-supervised learning and (ii) domain adaptation." + }, + { + "id": 190, + "string": "Several models have been proposed for semi-supervised learning."
+ }, + { + "id": 191, + "string": "The earliest approach is self-training (Scudder, 1965) , in which a trained model is first used to label unlabeled data instances followed by the model retraining with the most confident predicted labeled instances." + }, + { + "id": 192, + "string": "The co-training (Mitchell, 1999) approach assumes that features can be split into two sets and each subset is then used to train a classifier under the assumption that the two sets are conditionally independent." + }, + { + "id": 193, + "string": "Each classifier then classifies the unlabeled data, and the most confident data instances are used to re-train the other classifier; this process repeats multiple times." + }, + { + "id": 194, + "string": "In the graph-based semi-supervised approach, nodes in a graph represent labeled and unlabeled instances and edge weights represent the similarity between them." + }, + { + "id": 195, + "string": "The structural information encoded in the graph is then used to regularize a model (Zhu, 2005) ." + }, + { + "id": 196, + "string": "There are two paradigms in semi-supervised learning: (1) inductive, where a function is learned with which predictions can be made on unobserved instances; (2) transductive, where no explicit function is learned and predictions can only be made on observed instances." + }, + { + "id": 197, + "string": "As mentioned before, inductive semi-supervised learning is preferable to the transductive approach since it avoids building the graph each time it needs to infer the labels for the unlabeled instances." + }, + { + "id": 198, + "string": "In our work, we use a graph-based inductive deep learning approach proposed by Yang et al." + }, + { + "id": 199, + "string": "(2016) to learn features in a deep learning model by predicting contextual (i.e., neighboring) nodes in the graph." + }, + { + "id": 200, + "string": "However, our approach is different from Yang et al." + }, + { + "id": 201, + "string": "(2016) in several ways."
+ }, + { + "id": 202, + "string": "First, we construct the graph by computing the distance between tweets based on word embeddings." + }, + { + "id": 203, + "string": "Second, instead of using count-based features, we use a convolutional neural network (CNN) to compose high-level features from the distributed representation of the words in a tweet." + }, + { + "id": 204, + "string": "Finally, for context prediction, instead of performing a random walk, we select nodes based on their similarity in the graph." + }, + { + "id": 205, + "string": "A similar similarity-based graph has shown impressive results in learning sentence representations (Saha et al., 2017) ." + }, + { + "id": 206, + "string": "In the literature, the proposed approaches for domain adaptation include supervised, semi-supervised, and unsupervised methods." + }, + { + "id": 207, + "string": "They also vary from linear kernelized approaches (Blitzer et al., 2006) to non-linear deep neural network techniques (Glorot et al., 2011; Ganin et al., 2016) ." + }, + { + "id": 208, + "string": "One direction of research is to focus on feature space distribution matching by reweighting the samples from the source domain (Gong et al., 2013) to map the source into the target." + }, + { + "id": 209, + "string": "The overall idea is to learn a good feature representation that is invariant across domains." + }, + { + "id": 210, + "string": "In the deep learning paradigm, Glorot et al." + }, + { + "id": 211, + "string": "(2011) used Stacked Denoising Auto-Encoders (SDAs) for domain adaptation." + }, + { + "id": 212, + "string": "SDAs learn a robust feature representation from input that is artificially corrupted with small Gaussian noise."
+ }, + { + "id": 213, + "string": "Adversarial training of neural networks has had a big impact recently, especially in areas such as computer vision, where generative unsupervised models have proved capable of synthesizing new images (Goodfellow et al., 2014; Radford et al., 2015; Makhzani et al., 2015) ." + }, + { + "id": 214, + "string": "Ganin et al." + }, + { + "id": 215, + "string": "(2016) proposed domain adversarial neural networks (DANN) to learn discriminative but at the same time domain-invariant representations, with domain adaptation as a target." + }, + { + "id": 216, + "string": "We extend this work by combining it with semi-supervised graph embedding for unsupervised domain adaptation." + }, + { + "id": 217, + "string": "In a recent work, Kipf and Welling (2016) present CNNs applied directly to graph-structured datasets: citation networks and a knowledge graph dataset." + }, + { + "id": 218, + "string": "Their study demonstrates that graph convolutional networks for semi-supervised classification perform better than other graph-based approaches." + }, + { + "id": 219, + "string": "Conclusions In this paper, we presented a deep learning framework that performs domain adaptation with adversarial training and graph-based semi-supervised learning to leverage labeled and unlabeled data from related events." + }, + { + "id": 220, + "string": "We use a convolutional neural network to compose high-level representation from the input, which is then passed to three components that perform supervised training, semi-supervised learning and domain adversarial training." + }, + { + "id": 221, + "string": "For domain adaptation, we considered a scenario where we have only unlabeled data in the target event." + }, + { + "id": 222, + "string": "Our evaluation on two crisis-related tweet datasets demonstrates that by combining domain adversarial training with semi-supervised learning, our model gives significant improvements over the respective baselines."
+ }, + { + "id": 223, + "string": "We have also presented results of batch-wise incremental training of the graph-based semi-supervised approach and show an approximation of the number of labeled examples required to achieve acceptable performance at the onset of an event." + } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 24 + }, + { + "section": "The Model", + "n": "2", + "start": 25, + "end": 58 + }, + { + "section": "Supervised Component", + "n": "2.1", + "start": 59, + "end": 67 + }, + { + "section": "Semi-supervised Component", + "n": "2.2", + "start": 68, + "end": 70 + }, + { + "section": "Learning Graph Embeddings", + "n": "2.2.1", + "start": 71, + "end": 76 + }, + { + "section": "Graph Construction", + "n": "2.2.2", + "start": 77, + "end": 87 + }, + { + "section": "Domain Adversarial Component", + "n": "2.3", + "start": 88, + "end": 98 + }, + { + "section": "Model Training", + "n": "2.4", + "start": 99, + "end": 116 + }, + { + "section": "Crisis Word Embedding", + "n": "2.5", + "start": 117, + "end": 123 + }, + { + "section": "Experimental Settings", + "n": "3", + "start": 124, + "end": 125 + }, + { + "section": "Datasets", + "n": "3.1", + "start": 126, + "end": 134 + }, + { + "section": "Model Settings and Baselines", + "n": "3.2", + "start": 135, + "end": 137 + }, + { + "section": "Settings for Semi-supervised Learning", + "n": "3.2.1", + "start": 138, + "end": 148 + }, + { + "section": "Settings for Domain Adaptation", + "n": "3.2.2", + "start": 149, + "end": 155 + }, + { + "section": "Training Settings", + "n": "3.2.3", + "start": 156, + "end": 163 + }, + { + "section": "Evaluation Metrics", + "n": "3.2.4", + "start": 164, + "end": 166 + }, + { + "section": "Results and Discussion", + "n": "4", + "start": 167, + "end": 167 + }, + { + "section": "Semi-supervised Learning", + "n": "4.1", + "start": 168, + "end": 181 + }, + { + "section": "Domain Adaptation", + "n": "4.2", + "start": 182, + "end": 188 + }, + {
+ "section": "Related Work", + "n": "5", + "start": 189, + "end": 218 + }, + { + "section": "Conclusions", + "n": "6", + "start": 219, + "end": 223 + } + ], + "figures": [ + { + "filename": "../figure/image/998-Figure1-1.png", + "caption": "Figure 1: The system architecture of the domain adversarial network with graph-based semi-supervised learning. The shared components part is shared by supervised, semi-supervised and domain classifier.", + "page": 2, + "bbox": { + "x1": 142.56, + "x2": 452.64, + "y1": 65.75999999999999, + "y2": 249.12 + } + }, + { + "filename": "../figure/image/998-Table1-1.png", + "caption": "Table 1: Distribution of labeled datasets for Nepal earthquake (NEQ) and Queensland flood (QFL).", + "page": 5, + "bbox": { + "x1": 308.64, + "x2": 524.16, + "y1": 63.839999999999996, + "y2": 108.0 + } + }, + { + "filename": "../figure/image/998-Table4-1.png", + "caption": "Table 4: Domain adaptation experimental results. Weighted average AUC, precision (P), recall (R) and F-measure (F1).", + "page": 7, + "bbox": { + "x1": 316.8, + "x2": 516.0, + "y1": 88.8, + "y2": 269.28 + } + }, + { + "filename": "../figure/image/998-Table3-1.png", + "caption": "Table 3: Weighted average F-measure for the graph-based semi-supervised settings using different batch sizes. 
L refers to labeled data, U refers to unlabeled data, All L refers to all labeled instances for that particular dataset.", + "page": 7, + "bbox": { + "x1": 88.8, + "x2": 273.12, + "y1": 131.51999999999998, + "y2": 161.28 + } + }, + { + "filename": "../figure/image/998-Table2-1.png", + "caption": "Table 2: Results using supervised, self-training, and graph-based semi-supervised approaches in terms of Weighted average AUC, precision (P), recall (R) and F-measure (F1).", + "page": 6, + "bbox": { + "x1": 308.64, + "x2": 525.12, + "y1": 143.51999999999998, + "y2": 185.28 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-14" + }, + { + "slides": { + "2": { + "title": "Comparable Corpora", + "text": [ + "Problem No large collections of comparable texts for all domains and language pairs exist", + "Objective To extract high-quality comparable corpora on specific domains", + "Pilot language pair EnglishSpanish", + "Pilot domains Science, Computer Science, Sports", + "Currently experimenting on more than 700 domains and 10 languages" + ], + "page_nums": [ + 14 + ], + "images": [] + }, + "3": { + "title": "Comparable Corpora Characteristic Vocabulary", + "text": [ + "Retrieve every article associated to the top category of the domain", + "Merge the articles contents and apply standard and ad-hoc pre-processing", + "Select the top-k tf-sorted tokens as the characteristic vocabulary", + "(we consider 10% of the tokens)", + "Articles Vocabulary en es en es" + ], + "page_nums": [ + 15, + 16, + 17, + 18 + ], + "images": [] + }, + "4": { + "title": "Comparable Corpora Graph exploration", + "text": [ + "Slice of the Spanish Wikipedia category graph departing from categories", + "Sport and Science (as in Spring 2015)", + "Scientific Sport Science disciplines", + "Mountain Earth Sports sports sciencies", + "Geology Mountains Mountaineering Geology by country", + "Mountains by country Mountains of Andorra Mountain ran- ges of Spain Geology of Spain", + "Mountains of the Pyrenees", + 
"Perform a breadth-first search departing from the root category", + "Visit nodes only once to avoid loops and repeating traversed paths", + "Stop at the level when most categories do not belong to the domain", + "Heuristic A category belongs to the domain if its title contains at least one term from the characteristic vocabulary", + "Explore until a minimum percentage of the categories in a tree level belong to the domain", + "Category pato in Spanish -literally \"duck\"- refers to a sport rather than an animal!!!", + "Article pairs selected according to two criteria: 50% and 60%", + "Articles Distance from the root", + "en-es en-es en es en es" + ], + "page_nums": [ + 19, + 20, + 21, + 22, + 23 + ], + "images": [ + "figure/image/1004-Table3-1.png", + "figure/image/1004-Figure1-1.png" + ] + }, + "5": { + "title": "Parallelisation Similarity Models", + "text": [ + "Character 3-grams (cosine) [McNamee and Mayfield, 2004]", + "Translated word 1-grams in both directions (cosine)", + "Length factor [Pouliquen et al., 2003]", + "Probable lengths of translations of d" + ], + "page_nums": [ + 25 + ], + "images": [] + }, + "6": { + "title": "Parallelisation Corpus for Preliminary Evaluation", + "text": [ + "30 article pairs (10 per domain)", + "Annotated at sentence level", + "Three classes: parallel, comparable, and other", + "Each pair was annotated by 2 volunteers mean Cohens" + ], + "page_nums": [ + 26 + ], + "images": [] + }, + "7": { + "title": "Parallelisation Threshold Definition", + "text": [ + "c3g cog monoen monoes len", + "S Slen S F1 S F1len" + ], + "page_nums": [ + 27, + 28 + ], + "images": [ + "figure/image/1004-Table5-1.png" + ] + }, + "9": { + "title": "Impact Corpora", + "text": [ + "in domain out of domain", + "Generation of the Wikipedia dev and test sets", + "Select only sentences starting with a letter and longer than three tokens", + "Compute the perplexity of each sentence pair (with respect to a", + "Sort the pairs according to similarity and 
perplexity", + "Manually select the first k parallel sentences" + ], + "page_nums": [ + 31, + 32 + ], + "images": [] + }, + "10": { + "title": "Impact Corpora Statistics", + "text": [ + "CS Sc Sp All" + ], + "page_nums": [ + 33 + ], + "images": [ + "figure/image/1004-Table7-1.png", + "figure/image/1004-Table9-1.png" + ] + }, + "11": { + "title": "Impact Phrase based SMT System", + "text": [ + "Language model 5-gram interpolated Kneser-Ney discounting, SRILM", + "Translation model Moses package", + "Weights optimization MERT against BLEU" + ], + "page_nums": [ + 34 + ], + "images": [] + }, + "12": { + "title": "Impact Experiments definition", + "text": [ + "Out of domain Training Wikipedia and Europarl", + "Test Wikipedia (+Gnome for CS)" + ], + "page_nums": [ + 35 + ], + "images": [] + }, + "13": { + "title": "Impact Results on Wikipedia in domain", + "text": [ + "CS Sc Sp Un" + ], + "page_nums": [ + 36 + ], + "images": [ + "figure/image/1004-Table11-1.png", + "figure/image/1004-Table8-1.png", + "figure/image/1004-Table12-1.png" + ] + }, + "15": { + "title": "Impact Translation Instances", + "text": [ + "Source All internet packets have a source IP address and a destination", + "EP Todos los paquetes de internet tienen un origen direccion IP y destino direccion IP.", + "EP+union-CS Todos los paquetes de internet tienen una direccion IP de origen y una direccion IP de destino.", + "Awareness of terms (possible overfitting?)", + "Source Attack of the Killer Tomatoes is a 2D platform video game developed by Imagineering and released in 1991 for the NES.", + "EP el ataque de los tomates es un asesino 2D plataforma video-juego desarrollados por Imagineering y liberados en", + "Reference Attack of the Killer Tomatoes es un videojuego de plataformas en 2D desarrollado por Imagineering y lanzado en 1991 para el NES.", + "Source Fractal compression is a lossy compression method for digital images, based on fractals.", + "EP Fractal compresion es un metodo para lossy 
compresion digital imagenes , basada en fractals.", + "EP+union-CS La compresion fractal es un metodo de compresion con perdida para imagenes digitales, basado en fractales." + ], + "page_nums": [ + 38, + 39, + 40 + ], + "images": [] + }, + "16": { + "title": "Impact Results on News out of domain", + "text": [ + "CS Sc Sp Un" + ], + "page_nums": [ + 41 + ], + "images": [ + "figure/image/1004-Table11-1.png", + "figure/image/1004-Table12-1.png", + "figure/image/1004-Table9-1.png" + ] + }, + "17": { + "title": "Final Remarks", + "text": [ + "A simple model to extract domain-specific comparable corpora from", + "The domain-specific corpora showed to be useful to feed SMT systems, but other tasks are possible", + "We are currently comparing our model against an IR-based system", + "The platform currently operates in more language pairs, including", + "French, Catalan, German, and Arabic; but it can operate in any language and domain", + "The prototype is coded in Java (and depends on JWPL). We plan to release it in short!" + ], + "page_nums": [ + 43, + 44, + 45 + ], + "images": [] + } + }, + "paper_title": "A Factory of Comparable Corpora from Wikipedia", + "paper_id": "1004", + "paper": { + "title": "A Factory of Comparable Corpora from Wikipedia", + "abstract": "Multiple approaches to grab comparable data from the Web have been developed up to date. Nevertheless, coming out with a high-quality comparable corpus of a specific topic is not straightforward. We present a model for the automatic extraction of comparable texts in multiple languages and on specific topics from Wikipedia. In order to prove the value of the model, we automatically extract parallel sentences from the comparable collections and use them to train statistical machine translation engines for specific domains. 
Our experiments on the English-Spanish pair in the domains of Computer Science, Science, and Sports show that our in-domain translator performs significantly better than a generic one when translating in-domain Wikipedia articles. Moreover, we show that these corpora can help when translating out-of-domain texts.", + "text": [ + { + "id": 0, + "string": "Introduction Multilingual corpora with different levels of comparability are useful for a range of natural language processing (NLP) tasks." + }, + { + "id": 1, + "string": "Comparable corpora were first used for extracting parallel lexicons (Rapp, 1995; Fung, 1995) ." + }, + { + "id": 2, + "string": "Later they were used for feeding statistical machine translation (SMT) systems (Uszkoreit et al., 2010) and in multilingual retrieval models (Schönhofen et al., 2007; Potthast et al., 2008) ." + }, + { + "id": 3, + "string": "SMT systems estimate the statistical models from bilingual texts (Koehn, 2010) ." + }, + { + "id": 4, + "string": "Since only the words that appear in the corpus can be translated, having a corpus of the right domain is important to achieve high coverage." + }, + { + "id": 5, + "string": "However, it is evident that no large collections of parallel texts for all domains and language pairs exist." + }, + { + "id": 6, + "string": "In some cases, only general-domain parallel corpora are available; in some others there are no parallel resources at all." + }, + { + "id": 7, + "string": "One of the main sources of parallel data is the Web: websites in multiple languages are crawled and contents retrieved to obtain multilingual data." + }, + { + "id": 8, + "string": "Wikipedia, an on-line community-curated encyclopaedia with editions in multiple languages, has been used as a source of data for these purposes, for instance (Adafre and de Rijke, 2006; Potthast et al., 2008; Otero and López, 2010; Plamada and Volk, 2012) ."
+ }, + { + "id": 9, + "string": "Due to its encyclopaedic nature, editors aim at organising its content within a dense taxonomy of categories." + }, + { + "id": 10, + "string": "Such a taxonomy can be exploited to extract comparable and parallel corpora on specific topics and knowledge domains." + }, + { + "id": 11, + "string": "This allows us to study how different topics are analysed in different languages, extract multilingual lexicons, or train specialised machine translation systems, just to mention some instances." + }, + { + "id": 12, + "string": "Nevertheless, the process is not straightforward." + }, + { + "id": 13, + "string": "The community-generated nature of the Wikipedia has produced a reasonably good, yet chaotic, taxonomy in which categories are linked to each other at will, even if sometimes no relationship among them exists, and the borders dividing different areas are far from being clearly defined." + }, + { + "id": 14, + "string": "The rest of the paper is organised as follows." + }, + { + "id": 15, + "string": "We briefly overview the definition of comparability levels in the literature and show the difficulties inherent to extracting comparable corpora from Wikipedia (Section 2)." + }, + { + "id": 16, + "string": "We propose a simple and effective platform for the extraction of comparable corpora from Wikipedia (Section 3)." + }, + { + "id": 17, + "string": "We describe a simple model for the extraction of parallel sentences from comparable corpora (Section 4) ." + }, + { + "id": 18, + "string": "Experimental results are reported on each of these sub-tasks for three domains using the English and Spanish Wikipedia editions." + }, + { + "id": 19, + "string": "We present an application-oriented evaluation of the comparable corpora by studying the impact of the extracted parallel sentences on a statistical machine translation system (Section 5)." + }, + { + "id": 20, + "string": "Finally, we draw conclusions and outline ongoing work (Section 6)."
+ }, + { + "id": 21, + "string": "Background Comparability in multilingual corpora is a fuzzy concept that has received alternative definitions without reaching an overall consensus (Rapp, 1995; Eagles Document Eag-Tcwg-Ctyp, 1996; Fung, 1998; Fung and Cheung, 2004; Wu and Fung, 2005; McEnery and Xiao, 2007; Sharoff et al., 2013) ." + }, + { + "id": 22, + "string": "Ideally, a comparable corpus should contain texts in multiple languages which are similar in terms of form and content." + }, + { + "id": 23, + "string": "Regarding content, they should observe similar structure, function, and a long list of characteristics: register, field, tenor, mode, time, and dialect (Maia, 2003) ." + }, + { + "id": 24, + "string": "Nevertheless, finding these characteristics in real-life data collections is virtually impossible." + }, + { + "id": 25, + "string": "Therefore, we attach to the following simpler four-class classification (Skadiņa et al., 2010) : (i) Parallel texts are true and accurate translations or approximate translations with minor languagespecific variations." + }, + { + "id": 26, + "string": "(ii) Strongly comparable texts are closely related texts reporting the same event or describing the same subject." + }, + { + "id": 27, + "string": "(iii) Weakly comparable texts include texts in the same narrow subject domain and genre, but describing different events, as well as texts within the same broader domain and genre, but varying in sub-domains and specific genres." + }, + { + "id": 28, + "string": "(iv) Non-comparable texts are pairs of texts drawn at random from a pair of very large collections of texts in two or more languages." + }, + { + "id": 29, + "string": "Wikipedia is a particularly suitable source of multilingual text with different levels of comparability, given that it covers a large amount of languages and topics." 
+ }, + { + "id": 30, + "string": "Articles can be connected via interlanguage links (i.e., a link from a page in one Wikipedia language to an equivalent page in another language)." + }, + { + "id": 31, + "string": "Although there are some missing links and an article can be linked by two or more articles from the same language (Hecht and Gergle, 2010) , the number of available links allows us to exploit the multilinguality of Wikipedia." + }, + { + "id": 32, + "string": "Still, extracting a comparable corpus on a specific domain from Wikipedia is not so straightforward." + }, + { + "id": 33, + "string": "One can take advantage of the user-generated categories associated to most articles." + }, + { + "id": 34, + "string": "Ideally, the categories and sub-categories would compose a hierarchically organized taxonomy, e.g., in the form of a category tree." + }, + { + "id": 35, + "string": "Wikipedia contains 288 language editions out of which 277 are active and 12 have more than 1M articles at the time of writing, June 2015 (http://en.wikipedia.org/wiki/List_of_Wikipedias)." + }, + { + "id": 36, + "string": "Nevertheless, the categories in Wikipedia compose a densely-connected graph with highly overlapping categories, cycles, etc." + }, + { + "id": 37, + "string": "As they are manually-crafted, the categories are somehow arbitrary and, among other consequences, the potential categorisation of articles does not fulfil the properties for representing a desirable, trusty enough, categorisation of articles from different domains." + }, + { + "id": 38, + "string": "Moreover, many articles are not associated to the categories they should belong to and there is a phenomenon of over-categorization." + }, + { + "id": 39, + "string": "Figure 1 is an example of the complexity of Wikipedia's category graph topology."
+ }, + { + "id": 40, + "string": "Although this particular example comes from the Wikipedia in Spanish, similar phenomena exist in other editions." + }, + { + "id": 41, + "string": "Firstly, the paths from different apparently unrelated categories (Sport and Science) converge in a common node early in the graph (node Pyrenees)." + }, + { + "id": 42, + "string": "As a result, not only Pyrenees could be considered a sub-category of both Sport and Science, but also all of its descendants." + }, + { + "id": 43, + "string": "Secondly, cycles exist among the different categories, as in the sequence Mountains of Andorra → Pyrenees → Mountains of the Pyrenees → Mountains of Andorra." + }, + { + "id": 44, + "string": "Ideally, every sub-category of a category should share the same attributes, since the \"failure to observe this principle reduces the predictability [of the taxonomy] and can lead to cross-classification\" (Rowley and Hartley, 2000, p. 196) ." + }, + { + "id": 45, + "string": "Although fixing this issue, inherent to all the Wikipedia editions, falls out of the scope of our research, some heuristic strategies are necessary to diminish its impact in the domain definition process." + }, + { + "id": 46, + "string": "Plamada and Volk (2012) dodge this issue by extracting a domain comparable corpus using IR techniques." + }, + { + "id": 47, + "string": "They use the characteristic vocabulary of the domain (100 terms extracted from an external in-domain corpus) to query a Lucene search engine over the whole encyclopaedia." + }, + { + "id": 48, + "string": "Our approach is completely different: we try to get along with Wikipedia's structure with a strategy to walk through the category graph departing from a root or pseudo-root category, which defines our domain of interest."
+ }, + { + "id": 49, + "string": "We empirically set a threshold to stop exploring the graph such that the included categories most likely represent an entire domain (cf." + }, + { + "id": 50, + "string": "Section 3)." + }, + { + "id": 51, + "string": "This approach is more similar to Cui et al." + }, + { + "id": 52, + "string": "(2008) , who explore the Wiki-Graph and score every category in order to assess its likelihood of belonging to the domain." + }, + { + "id": 53, + "string": "Other tools are being developed to extract corpora from Wikipedia." + }, + { + "id": 54, + "string": "Linguatools 5 released a comparable corpus extracted from Wikipedias in 253 language pairs." + }, + { + "id": 55, + "string": "Unfortunately, neither their tool nor the applied methodology description are available." + }, + { + "id": 56, + "string": "CatScan2 6 is a tool that allows to explore and search categories recursively." + }, + { + "id": 57, + "string": "The Accurat toolkit (Pinnis et al., 2012 ; Ş tefȃnescu, Dan and Ion, Radu and Hunsicker, Sabine, 2012) 7 aligns comparable documents and extracts parallel sentences, lexicons, and named entities." + }, + { + "id": 58, + "string": "Finally, the most related tool to ours: CorpusPedia 8 extracts non-aligned, softly-aligned, and strongly-aligned comparable corpora from Wikipedia (Otero and López, 2010) ." + }, + { + "id": 59, + "string": "The difference with respect to our model is that they only consider the articles associated to one specific category and not to an entire domain." + }, + { + "id": 60, + "string": "The inter-connection among Wikipedia editions in different languages has been exploited for multiple tasks including lexicon induction (Erdmann et al., 2008) , extraction of bilingual dictionaries (Yu and Tsujii, 2009) , and identification of particular translations (Chu et al., 2014; Prochasson and Fung, 2011) ." 
+ }, + { + "id": 61, + "string": "Different cross-language NLP tasks have particularly taken advantage of Wikipedia." + }, + { + "id": 62, + "string": "Articles have been used for query translation (Schönhofen et al., 2007) and crosslanguage semantic representations for similarity estimation (Cimiano et al., 2009; Potthast et al., 2008; Sorg and Cimiano, 2012) ." + }, + { + "id": 63, + "string": "The extraction of parallel corpora from Wikipedia has been a hot topic during the last years (Adafre and de Rijke, 2006; Patry and Langlais, 2011; Plamada and Volk, 2012; Smith et al., 2010; Tomás et al., 2008; Yasuda and Sumita, 2008) ." + }, + { + "id": 64, + "string": "Domain-Specific Comparable Corpora Extraction In this section we describe our proposal to extract domain-specific comparable corpora from Wikipedia." + }, + { + "id": 65, + "string": "The input to the pipeline is the top category of the domain (e.g., Sport)." + }, + { + "id": 66, + "string": "The terminology used in this description is as follows." + }, + { + "id": 67, + "string": "Let c be a Wikipedia category and c * be the top category of a domain." + }, + { + "id": 68, + "string": "Let a be a Wikipedia article; a ∈ c if a contains c among its categories." + }, + { + "id": 69, + "string": "Let G be the Wikipedia category graph." + }, + { + "id": 70, + "string": "Vocabulary definition." + }, + { + "id": 71, + "string": "The domain vocabulary represents the set of terms that better characterises the domain." + }, + { + "id": 72, + "string": "We do not expect to have at our disposal the vocabulary associated to every category." + }, + { + "id": 73, + "string": "Therefore, we build it from the Wikipedia itself." + }, + { + "id": 74, + "string": "We collect every article a ∈ c * and apply standard pre-processing; i.e., tokenisation, stopwording, numbers and punctuation marks filtering, and stemming (Porter, 1980) ." 
+ }, + { + "id": 75, + "string": "In order to reduce noise, tokens shorter than four characters are discarded as well." + }, + { + "id": 76, + "string": "The vocabulary is then composed of the top n terms, ranked by term frequency." + }, + { + "id": 77, + "string": "This value is empirically determined." + }, + { + "id": 78, + "string": "Graph exploration." + }, + { + "id": 79, + "string": "The input for this step is G, c * (i.e., the departing node in the graph), and the domain vocabulary." + }, + { + "id": 80, + "string": "Departing from c * , we perform a breadth-first search, looking for all those categories which more likely belong to the required domain." + }, + { + "id": 81, + "string": "Two constraints are applied in order to make a controlled exploration of the graph: (i) in order to avoid loops and exploring already traversed paths, a node can only be visited once, (ii) in order to avoid exploring the whole categories graph, a stopping criterion is pre-defined." + }, + { + "id": 82, + "string": "Our stopping criterion is inspired by the classification tree-breadth first search algorithm (Cui et al., 2008) ." + }, + { + "id": 83, + "string": "The core idea is scoring the explored cate- gories to determine if they belong to the domain." + }, + { + "id": 84, + "string": "Our heuristic assumes that a category belongs to the domain if its title contains at least one of the terms in the characteristic vocabulary." + }, + { + "id": 85, + "string": "Nevertheless, many categories exist that may not include any of the terms in the vocabulary." + }, + { + "id": 86, + "string": "(e.g., consider category pato in Spanish -literally \"duck\" in English-which, somehow surprisingly, refers to a sport rather than an animal)." + }, + { + "id": 87, + "string": "Our naïve solution to this issue is to consider subsets of categories according to their depth respect to the root." 
+ }, + { + "id": 88, + "string": "An entire level of categories is considered part of the domain if a minimum percentage of its elements include vocabulary terms." + }, + { + "id": 89, + "string": "In our experiments we use the English and Spanish Wikipedia editions." + }, + { + "id": 90, + "string": "9 Table 1 shows some statistics, after filtering disambiguation and redirect pages." + }, + { + "id": 91, + "string": "The intersection of articles and categories between the two languages represents the ceiling for the amount of parallel corpora one can gather for this pair." + }, + { + "id": 92, + "string": "We focus on three domains: Computer Science (CS), Science (Sc), and Sports (Sp) -the top categories c * from which the graph is explored in order to extract the corresponding comparable corpora." + }, + { + "id": 93, + "string": "Table 2 shows the number of root articles associated to c * for each domain and language." + }, + { + "id": 94, + "string": "From them, we obtain domain vocabularies with a size between 100 and 400 lemmas (right-side columns) when using the top 10% terms." + }, + { + "id": 95, + "string": "We ran experiments using the top 10%, 15%, 20% and 100%." + }, + { + "id": 96, + "string": "The relatively small size of these vocabularies allows to manually check that 10% is the best option to characterise the desired category, higher percentages add more noise than in-domain terms." + }, + { + "id": 97, + "string": "The plots in Figure 2 show the percentage of categories with at least one domain term in the ti-9 Dumps downloaded from https://dumps." + }, + { + "id": 98, + "string": "wikimedia.org in July 2013 and pre-processed with JWPL (Zesch et al., 2008) tle: the starting point for our graph-based method for selecting the in-domain articles." + }, + { + "id": 99, + "string": "As expected, nearly 100% of the categories in the root include domain terms and this percentage decreases with increasing depth in the tree." 
+ }, + { + "id": 100, + "string": "When extracting the corpus, one must decide the adequate percentage of positive categories allowed." + }, + { + "id": 101, + "string": "High thresholds lead to small corpora whereas low thresholds lead to larger -but noisier-corpora." + }, + { + "id": 102, + "string": "As in many applications, this is a trade-off between precision and recall and depends on the intended use of the corpus." + }, + { + "id": 103, + "string": "The stopping level is selected for every language independently, but in order to reduce noise, the comparable corpus is only built from those articles that appear in both languages and are related via an interlanguage link." + }, + { + "id": 104, + "string": "We validate the quality in terms of application-based utility of the generated comparable corpora when used in a translation system (cf." + }, + { + "id": 105, + "string": "Section 5)." + }, + { + "id": 106, + "string": "Therefore, we choose to give more importance to recall and opt for the corpora obtained with a threshold of 50%." + }, + { + "id": 107, + "string": "Parallel Sentence Extraction In this section we describe a simple technique for extracting parallel sentences from a comparable corpus." + }, + { + "id": 108, + "string": "Given a pair of articles related by an interlanguage link, we estimate the similarity between all their pairs of cross-language sentences with different text similarity measures." + }, + { + "id": 109, + "string": "We repeat the process for all the pairs of articles and rank the resulting sentence pairs according to its similarity." + }, + { + "id": 110, + "string": "After defining a threshold for each measure, those sentence pairs with a similarity higher than the threshold are extracted as parallel sentences." + }, + { + "id": 111, + "string": "This is a non-supervised method that generates a noisy parallel corpus." 
+ }, + { + "id": 112, + "string": "The quality of the similarity measures will then affect the purity of the parallel corpus and, therefore, the quality of the translator." + }, + { + "id": 113, + "string": "However, we do not need to be very restrictive with the measures here and still favour a large corpus, since the word alignment process in the SMT system can take care of part of the noise." + }, + { + "id": 114, + "string": "Similarity computation." + }, + { + "id": 115, + "string": "We compute similarities between pairs of sentences by means of cosine and length factor measures." + }, + { + "id": 116, + "string": "The cosine similarity is calculated on three well-known characterisations in cross-language information retrieval and parallel corpora alignment: (i) character ngrams (cng) (McNamee and Mayfield, 2004); (ii) pseudo-cognates (cog) (Simard et al., 1992) ; and (iii) word 1-grams, after translation into a common language, both from English to Spanish and vice versa (mono en , mono es )." + }, + { + "id": 117, + "string": "We add the (iv) length factor (len) (Pouliquen et al., 2003) as an independent measure and as penalty (multiplicative factor) on the cosine similarity." + }, + { + "id": 118, + "string": "The threshold for each of the measures just introduced is empirically set in a manually annotated corpus." + }, + { + "id": 119, + "string": "We define it as the value that maximises the F 1 score on this development set." + }, + { + "id": 120, + "string": "To create this set, we manually annotated a corpus with 30 article pairs (10 per domain) at sentence level." + }, + { + "id": 121, + "string": "We considered three sentence classes: parallel, comparable, and other." + }, + { + "id": 122, + "string": "The volunteers of the exercise were given as guidelines the definitions by Skadiņa et al." + }, + { + "id": 123, + "string": "(2010) of parallel text and strongly comparable text (cf." + }, + { + "id": 124, + "string": "Section 2)." 
+ }, + { + "id": 125, + "string": "A pair that did not match any of these definitions had to be classified as other." + }, + { + "id": 126, + "string": "Each article pair was annotated by two volunteers, native speakers of Spanish with high command of English (a total of nine volunteers participated in the process)." + }, + { + "id": 127, + "string": "The mean agreement between annotators had a kappa coefficient (Cohen, 1960) of κ ∼ 0.7." + }, + { + "id": 128, + "string": "A third annotator resolved disagreed sentences." + }, + { + "id": 129, + "string": "10 Table 4 shows the thresholds that obtain the maximum F 1 scores." + }, + { + "id": 130, + "string": "It is worth noting that, even if the values of precision and recall are relatively low -the maximum recall is 0.57 for len-, our intention with these simple measures is not to obtain the highest performance in terms of retrieval, but injecting the most useful data to the translator, even at the cost of some noise." + }, + { + "id": 131, + "string": "The performance with character 3-grams is the best one, comparable to that of mono, with an F 1 of 0.36." + }, + { + "id": 132, + "string": "This suggests that a translator is not mandatory for performing the sentences selection." + }, + { + "id": 133, + "string": "Len and 1-grams have no discriminating power and lead to the worse scores (F 1 of 0.14 and 0.21, respectively)." + }, + { + "id": 134, + "string": "We ran a second set of experiments to explore the combination of the measures." + }, + { + "id": 135, + "string": "the performance obtained by averaging all the similarities (S), also after multiplying them by the length factor and/or the observed F 1 obtained in the previous experiment." + }, + { + "id": 136, + "string": "Even if the length factor had shown a poor performance in isolation, it helps to lift the F 1 figures consistently after affecting the similarities." + }, + { + "id": 137, + "string": "In this case, F 1 grows up to 0.43." 
+ }, + { + "id": 138, + "string": "This impact is not so relevant when the individual F 1 is used for weightingS." + }, + { + "id": 139, + "string": "We applied all the measures -both combined and in isolation-on the entire comparable corpora previously extracted." + }, + { + "id": 140, + "string": "Table 6 shows the amount of parallel sentences extracted by applying the empirically defined thresholds of Tables 4 and 5." + }, + { + "id": 141, + "string": "As expected, more flexible alternatives, such as low-level n-grams or length factor result in a higher amount of retrieved instances, but in all cases the size of the corpora is remarkable." + }, + { + "id": 142, + "string": "For the most restricted domain, CS, we get around 200k parallel sentences for a given similarity measure." + }, + { + "id": 143, + "string": "For the widest domain, SC, we surpass the 1M sentence pairs." + }, + { + "id": 144, + "string": "As it will be shown in the following section, these sizes are already useful to be used for training SMT systems." + }, + { + "id": 145, + "string": "Some standard parallel corpora have the same order of magnitude." + }, + { + "id": 146, + "string": "For tasks other than MT, where the precision on the extracted pairs can be more important than the recall, one can obtain cleaner corpora by using a threshold that maximises precision instead of F 1 ." + }, + { + "id": 147, + "string": "CS Evaluation: Statistical Machine Translation Task In this section we validate the quality of the obtained corpora by studying its impact on statistical machine translation." + }, + { + "id": 148, + "string": "There are several parallel corpora for the English-Spanish language pair." + }, + { + "id": 149, + "string": "We select as a general-purpose corpus Europarl v7 (Koehn, 2005) , with 1.97M parallel sentences." 
+ }, + { + "id": 150, + "string": "The order of magnitude is similar to the largest corpus we have extracted from Wikipedia, so we can compare the results in a size-independent way." + }, + { + "id": 151, + "string": "If our corpus extracted from Wikipedia was made up with parallel fragments of the desired domain, it should be the most adequate to translate these domains." + }, + { + "id": 152, + "string": "If the quality of the parallel fragments was acceptable, it should also help when translating out-of-domain texts." + }, + { + "id": 153, + "string": "In order to test these hypotheses we analyse three settings: (i) train SMT systems only with Wikipedia (WP) or Europarl (EP) to translate domain-specific texts, (ii) train SMT systems with Wikipedia and Europarl to translate domain-specific texts, and (iii) train SMT systems with Wikipedia and Europarl to translate out-of-domain texts (news)." + }, + { + "id": 154, + "string": "For the out-of-domain evaluation we use the News Commentaries 2011 test set and the News Commentaries 2009 for development." + }, + { + "id": 155, + "string": "11 For the in-domain evaluation we build the test and development sets in a semiautomatic way." + }, + { + "id": 156, + "string": "We depart from the parallel corpora gathered in Section 4 from which sentences with more than four tokens and beginning with a letter are selected." + }, + { + "id": 157, + "string": "We estimate its perplexity with respect to a language model obtained with Europarl in order to select the most fluent sentences and then we rank the parallel sentences according to their similarity and perplexity." + }, + { + "id": 158, + "string": "The top-n fragments were manually revised and extracted to build the Wikipedia test (WPtest) and development (WPdev) sets." + }, + { + "id": 159, + "string": "We repeated the process for the three studied domains and drew 300 parallel fragments for development for every domain and 500 for test." 
+ }, + { + "id": 160, + "string": "We removed these sentences from the corresponding training corpora." + }, + { + "id": 161, + "string": "For one of the domains, CS, we also gathered a test set from a parallel corpus of GNOME localisation files (Tiedemann, 2012) ." + }, + { + "id": 162, + "string": "Table 7 shows the size in number of sentences of these test sets and of the 20 Wikipedia training sets used for translation." + }, + { + "id": 163, + "string": "Only one measure, that with the highest F 1 score, is selected from each family: c3g, cog, mono en andS·len (cf." + }, + { + "id": 164, + "string": "Tables 4 and 5)." + }, + { + "id": 165, + "string": "We also compile the corpus that results from the union of the previous four." + }, + { + "id": 166, + "string": "Notice that, although we eliminate duplicates from this corpus, the size of the union is close to the sum of the individual corpora." + }, + { + "id": 167, + "string": "This indicates that every similarity measure selects a different set of parallel fragments." + }, + { + "id": 168, + "string": "Beside the specialised corpus for each domain, we build a larger corpus with all the data (Un)." + }, + { + "id": 169, + "string": "Again, duplicate fragments coming from articles belonging to more than one domain are removed." + }, + { + "id": 170, + "string": "SMT systems are trained using standard freely available software." + }, + { + "id": 171, + "string": "We estimate a 5-gram language model using interpolated Kneser-Ney discounting with SRILM (Stolcke, 2002) ." + }, + { + "id": 172, + "string": "Word alignment is done with GIZA++ (Och and Ney, 2003) and both phrase extraction and decoding are done with Moses (Koehn et al., 2007) ." + }, + { + "id": 173, + "string": "We optimise the feature weights of the model with Minimum Error Rate Training (MERT) (Och, 2003) against the BLEU evaluation metric (Papineni et al., 2002) ." 
+ }, + { + "id": 174, + "string": "Our model considers the language model, direct and inverse phrase probabilities, direct and inverse lexical probabilities, phrase and word penalties, and a lexicalised reordering." + }, + { + "id": 175, + "string": "(i) Training systems with Wikipedia or Europarl for domain-specific translation." + }, + { + "id": 176, + "string": "Table 8 shows the evaluation results on WPtest." + }, + { + "id": 177, + "string": "All the specialised systems obtain significant improvements with respect to the Europarl system, regardless of their size." + }, + { + "id": 178, + "string": "For instance, the worst specialised system (c3g with only 95,715 sentences for CS) outperforms by more than 10 points of BLEU the general Europarl translator." + }, + { + "id": 179, + "string": "The most complete system (the union of the four representatives) doubles the BLEU score for all the domains with an impressive improvement of 30 points." + }, + { + "id": 180, + "string": "This is of course possible due to the nature of the test set that has been extracted from the same collection as the training data and therefore shares its structure and vocabulary." + }, + { + "id": 181, + "string": "To give perspective to these high numbers we evaluate the systems trained on the CS domain against the GNOME dataset (Table 9) ." + }, + { + "id": 182, + "string": "Except for c3g, the Wikipedia translators always outperform the baseline with EP; the union system improves it by 4 BLEU points (22.41 compared to 18.15) with a four times smaller corpus." + }, + { + "id": 183, + "string": "This confirms that a corpus automatically extracted with an F 1 smaller than 0.5 is still useful for SMT." + }, + { + "id": 184, + "string": "Notice also that using only the in-domain data (CS) is always better than using the whole WP corpus (Un) even if the former is in general ten times smaller (cf." + }, + { + "id": 185, + "string": "Table 7 )." 
+ }, + { + "id": 186, + "string": "According to this indirect evaluation of the similarity measures, character n-grams (c3g) represent the worst alternative." + }, + { + "id": 187, + "string": "These results contradict the direct evaluation, where c3g and mono en had the highest F 1 scores on the development set among the individual similarity measures." + }, + { + "id": 188, + "string": "The size of the corpus is not relevant here: when we train all the systems with the same amount of data, the ranking in the quality of the measures remains the same." + }, + { + "id": 189, + "string": "To see this, we trained four additional systems with the top m number of parallel fragments, where m is the size of the smallest corpus for the union of domains: Un-c3g." + }, + { + "id": 190, + "string": "This new comparison is reported in columns \"Comp.\"" + }, + { + "id": 191, + "string": "in Tables 8 and 9." + }, + { + "id": 192, + "string": "In this fair comparison c3g is still the worst measure andS·len the best one." + }, + { + "id": 193, + "string": "The translator built from its associated corpus outperforms with less than half of the data used for training the general one (883,366 vs. 1,965,734 parallel fragments) both in WPtest (56.78 vs. 30.63) and GNOME (19.76 vs. 18.15 )." + }, + { + "id": 194, + "string": "(ii) Training systems on Wikipedia and Europarl for domain-specific translation." + }, + { + "id": 195, + "string": "Now we enrich the general translator with Wikipedia data or, equivalently, complement the Wikipedia translator with out-of-domain data." + }, + { + "id": 196, + "string": "Table 10 shows the results." + }, + { + "id": 197, + "string": "Augmenting the size of the indomain corpus by 2 million fragments improves the results even more, about 2 points of BLEU when using all the union data." + }, + { + "id": 198, + "string": "System c3g benefits the most of the inclusion of the Europarl data." 
+ }, + { + "id": 199, + "string": "The reason is that it is the individual system with less corpus available and the one obtaining the worst results." + }, + { + "id": 200, + "string": "In fact, the better the Wikipedia system, the less important the contribution from Europarl is." + }, + { + "id": 201, + "string": "For the independent test set GNOME, Table 11 shows that the union corpus on CS is better than any combination of Wikipedia and Europarl." + }, + { + "id": 202, + "string": "Still, as aforementioned, the best performance on this test set is obtained with a pure in-domain system (cf." + }, + { + "id": 203, + "string": "are controlled by the Europarl baseline." + }, + { + "id": 204, + "string": "In general, systems in which we include only texts from an unrelated domain do not improve the performance of the Europarl system alone, results of the combined system are better when we use Wikipedia texts from all the domains together (column Un) for training." + }, + { + "id": 205, + "string": "This suggests that, as expected, a general Wikipedia corpus is necessary to build a general translator." + }, + { + "id": 206, + "string": "This is a different problem to deal with." + }, + { + "id": 207, + "string": "Conclusions and Ongoing Work In this paper we presented a model for the automatic extraction of in-domain comparable corpora from Wikipedia." + }, + { + "id": 208, + "string": "It makes possible the automatic extraction of monolingual and comparable article collections as well as a one-click parallel corpus generation for on-demand language pairs and domains." + }, + { + "id": 209, + "string": "Given a pair of languages and a main category, the model explores the Wikipedia categories graph and identifies a subset of categories (and their associated articles) to generate a document-aligned comparable corpus." + }, + { + "id": 210, + "string": "The resulting corpus can be exploited for multiple natural language processing tasks." 
+ }, + { + "id": 211, + "string": "Here we applied it as part of a pipeline for the extraction of domainspecific parallel sentences." + }, + { + "id": 212, + "string": "These parallel instances allowed for a significant improvement in the machine translation quality when compared to a generic system and applied to a domain specific corpus (in-domain)." + }, + { + "id": 213, + "string": "The experiments are shown for the English-Spanish language pair and the domains Computer Science, Science, and Sports." + }, + { + "id": 214, + "string": "Still it can be applied to other language pairs and domains." + }, + { + "id": 215, + "string": "The prototype is currently operating in other languages." + }, + { + "id": 216, + "string": "The only prerequisite is the existence of the corresponding Wikipedia edition and some basic processing tools such as a tokeniser and a lemmatiser." + }, + { + "id": 217, + "string": "Our current efforts intend to generate a more robust model for parallel sentences identification and the design of other indirect evaluation schemes to validate the model performance." 
+ } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 20 + }, + { + "section": "Background", + "n": "2", + "start": 21, + "end": 63 + }, + { + "section": "Domain-Specific Comparable Corpora Extraction", + "n": "3", + "start": 64, + "end": 106 + }, + { + "section": "Parallel Sentence Extraction", + "n": "4", + "start": 107, + "end": 146 + }, + { + "section": "Evaluation: Statistical Machine Translation Task", + "n": "5", + "start": 147, + "end": 206 + }, + { + "section": "Conclusions and Ongoing Work", + "n": "6", + "start": 207, + "end": 217 + } + ], + "figures": [ + { + "filename": "../figure/image/1004-Table6-1.png", + "caption": "Table 6: Size of the parallel corpora extracted with each similarity measure.", + "page": 5, + "bbox": { + "x1": 313.92, + "x2": 519.36, + "y1": 187.68, + "y2": 413.28 + } + }, + { + "filename": "../figure/image/1004-Table4-1.png", + "caption": "Table 4: Best thresholds and their associated Precision (P), recall (R) and F1.", + "page": 5, + "bbox": { + "x1": 127.67999999999999, + "x2": 469.44, + "y1": 62.4, + "y2": 148.32 + } + }, + { + "filename": "../figure/image/1004-Table5-1.png", + "caption": "Table 5: Precision, recall, and F1 for the average of the similarities weighted by length model (len) and/or their F1.", + "page": 5, + "bbox": { + "x1": 81.6, + "x2": 280.32, + "y1": 187.68, + "y2": 274.08 + } + }, + { + "filename": "../figure/image/1004-Figure1-1.png", + "caption": "Figure 1: Slice of the Spanish Wikipedia category graph (as in May 2015) departing from categories Sport and Science. 
Translated for clarity.", + "page": 1, + "bbox": { + "x1": 310.08, + "x2": 529.92, + "y1": 62.879999999999995, + "y2": 212.16 + } + }, + { + "filename": "../figure/image/1004-Table7-1.png", + "caption": "Table 7: Number of sentences of the Wikipedia parallel corpora used to train the SMT systems (top rows) and of the sets used for development and test.", + "page": 6, + "bbox": { + "x1": 306.71999999999997, + "x2": 524.16, + "y1": 62.4, + "y2": 176.16 + } + }, + { + "filename": "../figure/image/1004-Table8-1.png", + "caption": "Table 8: BLEU scores obtained on the Wikipedia test sets for the 20 specialised systems described in Section 5. A comparison column (Comp.) where all the systems are trained with corpora of the same size is also included (see text).", + "page": 6, + "bbox": { + "x1": 306.71999999999997, + "x2": 524.16, + "y1": 248.64, + "y2": 346.08 + } + }, + { + "filename": "../figure/image/1004-Table10-1.png", + "caption": "Table 10: BLEU scores obtained on the Wikipedia test set for the 20 systems trained with the combination of the Europarl (EP) and the Wikipedia corpora. The results with a Europarl system and the best one from Table 8 (union) shown for comparison.", + "page": 7, + "bbox": { + "x1": 306.71999999999997, + "x2": 523.1999999999999, + "y1": 62.4, + "y2": 183.35999999999999 + } + }, + { + "filename": "../figure/image/1004-Table11-1.png", + "caption": "Table 11: BLEU scores obtained on the GNOME test set for systems trained with Europarl and Wikipedia. A system with Europarl achieves a score of 18.15.", + "page": 7, + "bbox": { + "x1": 346.56, + "x2": 486.24, + "y1": 284.64, + "y2": 378.24 + } + }, + { + "filename": "../figure/image/1004-Table9-1.png", + "caption": "Table 9: BLEU scores obtained on the GNOME test set for systems trained only with Wikipedia. 
A system with Europarl achieves a score of 18.15.", + "page": 7, + "bbox": { + "x1": 99.84, + "x2": 262.08, + "y1": 62.879999999999995, + "y2": 156.0 + } + }, + { + "filename": "../figure/image/1004-Table1-1.png", + "caption": "Table 1: Amount of articles and categories in the Wikipedia editions and in the intersection (i.e., pages linked across languages).", + "page": 3, + "bbox": { + "x1": 73.92, + "x2": 288.0, + "y1": 62.4, + "y2": 129.12 + } + }, + { + "filename": "../figure/image/1004-Table2-1.png", + "caption": "Table 2: Number of articles in the root categories and size of the resulting domain vocabulary.", + "page": 3, + "bbox": { + "x1": 311.03999999999996, + "x2": 515.52, + "y1": 193.44, + "y2": 330.71999999999997 + } + }, + { + "filename": "../figure/image/1004-Table12-1.png", + "caption": "Table 12: BLEU scores for the out-of-domain evaluation on the News Commentaries 2011 test set. We show in boldface all the systems that improve the Europarl translator, which achieves a score of 27.02.", + "page": 8, + "bbox": { + "x1": 74.88, + "x2": 287.03999999999996, + "y1": 62.4, + "y2": 175.2 + } + }, + { + "filename": "../figure/image/1004-Table3-1.png", + "caption": "Table 3: Number of article pairs according to the percentage of positive categories used to select the levels of the graph and distance from the root at which the percentage is smaller to the desired one.", + "page": 4, + "bbox": { + "x1": 72.0, + "x2": 288.0, + "y1": 62.4, + "y2": 151.2 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-15" + }, + { + "slides": { + "0": { + "title": "Time is important", + "text": [ + "Understanding time is key to understanding events", + "Timelines (in stories, clinical records), time-slot filling, Q&A, common sense", + "[June, 1989] Chris Robin lives in England and he is the person that you read about in Winnie the Pooh. As a boy, Chris lived in", + "Cotchfield Farm. When he was three, his father wrote a poem about him. 
His father later wrote Winnie the Pooh in 1925.", + "Where did Chris Robin live? Clearly, time sensitive.", + "When was Chris Robin born? poem [Chris at age 3]", + "Requires identifying relations between events, and temporal reasoning.", + "Events are associated with time intervals:", + "A happens BEFORE/AFTER B; Time is often expressed implicitly", + "2 explicit time expressions per 100 tokens, but 12 temporal relations" + ], + "page_nums": [ + 1 + ], + "images": [] + }, + "1": { + "title": "Example", + "text": [ + "Friday in the middle of a group of men playing volleyball.", + "Temporal question: Which one happens first?", + "e1 appears first in text. Is it also earlier in time? e2 was on Friday, but we don't know when e1 happened.", + "No explicit lexical markers, e.g., before, since, or during." + ], + "page_nums": [ + 2 + ], + "images": [] + }, + "2": { + "title": "Example temporal determined by causal", + "text": [ + "More than 10 people (e1: died), he said. A car (e2: exploded)
+ ], + "page_nums": [ + 3 + ], + "images": [] + }, + "3": { + "title": "Example causal determined by temporal", + "text": [ + "People raged and took to the street the government", + "Did the government stifle people because people raged?", + "Or, people raged because the government stifled people?", + "Both sound correct and we are not sure about the causality here.", + "People raged and took to the street (after) the government", + "Since stifled happened earlier, it's obvious that the cause is stifled and the result is raged.", + "In this example, the causal relation is determined by the temporal relation." + ], + "page_nums": [ + 4, + 5 + ], + "images": [] + }, + "4": { + "title": "This paper", + "text": [ + "Event relations: an essential step of event understanding, which", + "supports applications such as story understanding/completion, summarization, and timeline construction.", + "[There has been a lot of work on this; see Ning et al. ACL18, presented yesterday, for a discussion of the literature and the challenges.]", + "This paper focuses on the joint extraction of temporal and causal relations.", + "A temporal relation (T-Link) specifies the relation between two events along the temporal dimension.", + "A causal relation (C-Link) specifies the [cause effect] between two events." + ], + "page_nums": [ + 6 + ], + "images": [] + }, + "5": { + "title": "Temporal and causal relations", + "text": [ + "T-Link Example: John worked out after finishing his work.", + "C-Link Example: He was released due to lack of evidence.", + "Temporal and causal relations interact with each other.", + "For example, there is also a T-Link between released and lack", + "The decisions on the T-Link type and the C-Link type depend on each other, suggesting that joint reasoning could help." + ], + "page_nums": [ + 7 + ], + "images": [] + }, + "7": { + "title": "Contributions", + "text": [ + "1. 
Proposed a novel joint inference framework for temporal and causal reasoning", + "Assume the availability of a temporal extraction system and a causal extraction system", + "Enforce declarative constraints originating from the physical nature of causality", + "2. Constructed a new dataset with both temporal and causal relations.", + "We augmented the EventCausality dataset (Do et al., 2011), which comes with causal relations, with new temporal annotations." + ], + "page_nums": [ + 9 + ], + "images": [] + }, + "8": { + "title": "Temporal relation extraction an ilp approach", + "text": [ + "E -- event node set; e_i and e_j in E are events.", + "r -- temporal relation label, r in R_T", + "y_r(i,j) -- Boolean variable: is there a relation r between e_i and e_j? (Y/N)", + "s_r(i,j) -- score of event pair (i,j) having relation r", + "Global assignment of relations: maximize the total score in this document", + "Y(m1,m3) -- the relation dictated by Y(m1,m2) and Y(m2,m3)" + ], + "page_nums": [ + 10 + ], + "images": [] + }, + "9": { + "title": "Proposed joint approach", + "text": [ + "E -- event node set; e_i and e_j in E are events.", + "r -- temporal relation label, r in R_T", + "y_r(i,j) -- Boolean variable: is there a relation r between e_i and e_j? (Y/N)", + "s_r(i,j) -- score of event pair (i,j) having relation r", + "R_C -- causal relation label set, with corresponding variables w and scores q", + "T & C relations", + "Cause must be before effect" + ], + "page_nums": [ + 11 + ], + "images": [] + }, + "11": { + "title": "Back to the example temporal determined by causal", + "text": [ + "More than 10 people (e1: died), he said. A car (e2: exploded)", + "Friday in the middle of a group of men playing volleyball.", + "Temporal question: Which one happens first?", + "Obviously, e2:exploded is the cause and e1:died is the effect.", + "So, e2 happens first.", + "In this example, the temporal relation is determined by the causal relation.", + "Note also that the lexical information is important here; it's likely that explode BEFORE die, irrespective of the context."
+ ], + "page_nums": [ + 13 + ], + "images": [] + }, + "12": { + "title": "Temprob probabilistic knowledge base", + "text": [ + "Preprocessing: Semantic Role Labeling & Temporal relations model", + "Result: 51K semantic frames, 80M relations", + "Then we simply count how many times one frame is before/after another frame, as follows. http://cogcomp.org/page/publication_view/830", + "Frame 1 Frame 2 Before After" + ], + "page_nums": [ + 14 + ], + "images": [] + }, + "15": { + "title": "Result on timebank dense", + "text": [ + "TimeBank-Dense: A Benchmark Temporal Relation Dataset", + "The performance of temporal relation extraction:", + "CAEVO: the temporal system proposed along with TimeBank-Dense", + "CATENA: the aforementioned work post-editing temporal relations based on causal predictions, retrained on TimeBank-Dense.", + "System P R F1" + ], + "page_nums": [ + 18 + ], + "images": [] + }, + "16": { + "title": "A new joint dataset", + "text": [ + "TimeBank-Dense has only temporal relation annotations, so in the evaluations above, we only evaluated our temporal performance.", + "EventCausality dataset has only causal relation annotations.", + "To get a dataset with both temporal and causal relation annotations, we choose to augment the EventCausality dataset with temporal relations, using the annotation scheme we proposed in our paper [Ning et al., ACL18. 
A multi-axis annotation scheme for", + "event temporal relation annotation.]", + "Doc Event T-Link C-Link", + "*due to re-definition of events" + ], + "page_nums": [ + 19 + ], + "images": [] + }, + "17": { + "title": "Result on our new joint dataset", + "text": [ + "P R F Acc.", + "The temporal performance got strictly better in P, R, and F1.", + "The causal performance also improved by a large margin.", + "Compared to when gold temporal relations were used, we can see that there's still much room for causal improvement.", + "Compared to when gold causal relations were used, we can see that the current joint algorithm is very close to its best." + ], + "page_nums": [ + 20 + ], + "images": [] + } + }, + "paper_title": "Joint Reasoning for Temporal and Causal Relations", + "paper_id": "1010", + "paper": { + "title": "Joint Reasoning for Temporal and Causal Relations", + "abstract": "Understanding temporal and causal relations between events is a fundamental natural language understanding task. Because a cause must occur earlier than its effect, temporal and causal relations are closely related and one relation often dictates the value of the other. However, limited attention has been paid to studying these two relations jointly. This paper presents a joint inference framework for them using constrained conditional models (CCMs). Specifically, we formulate the joint problem as an integer linear programming (ILP) problem, enforcing constraints that are inherent in the nature of time and causality. We show that the joint inference framework results in statistically significant improvement in the extraction of both temporal and causal relations from text.", + "text": [ + { + "id": 0, + "string": "Introduction Understanding events is an important component of natural language understanding."
+ }, + { + "id": 1, + "string": "An essential step in this process is identifying relations between events, which are needed in order to support applications such as story completion, summarization, and timeline construction." + }, + { + "id": 2, + "string": "Among the many relation types that could exist between events, this paper focuses on the joint extraction of temporal and causal relations." + }, + { + "id": 3, + "string": "It is well known that temporal and causal relations interact with each other and in many cases, the decision of one relation is made primarily based on evidence from the other." + }, + { + "id": 4, + "string": "In Example 1, identifying the temporal relation between e1:died and e2:exploded is in fact a very hard case: There are no explicit temporal markers (e.g., \"before\", \"after\", or \"since\"); the events are in separate sentences so their syntactic connection is weak; although the occurrence time of e2:exploded is given (i.e., Friday) in text, it is not given for e1:died. (Footnote 1: The dataset and code used in this paper are available at http://cogcomp.org/page/publication_view/835)" + }, + { + "id": 5, + "string": "However, given the causal relation, e2:exploded caused e1:died, it is clear that e2:exploded happened before e1:died." + }, + { + "id": 6, + "string": "The temporal relation is dictated by the causal relation." + }, + { + "id": 7, + "string": "Ex 1: Temporal relation dictated by causal relation." + }, + { + "id": 8, + "string": "More than 10 people (e1:died) on their way to the nearest hospital, police said." + }, + { + "id": 9, + "string": "A suicide car bomb (e2:exploded) on Friday in the middle of a group of men playing volleyball in northwest Pakistan." + }, + { + "id": 10, + "string": "Since e2:exploded is the reason of e1:died, the temporal relation is thus e2 being before e1." + }, + { + "id": 11, + "string": "Ex 2: Causal relation dictated by temporal relation."
+ }, + { + "id": 12, + "string": "Mir-Hossein Moussavi (e3:raged) after government's efforts to (e4:stifle) protesters." + }, + { + "id": 13, + "string": "Since e3:raged is temporally after e4:stifle, e4 should be the cause of e3." + }, + { + "id": 14, + "string": "On the other hand, causal relation extraction can also benefit from knowing temporal relations." + }, + { + "id": 15, + "string": "In Example 2, it is unclear whether the government stifled people because people raged, or people raged because the government stifled people: both situations are logically reasonable." + }, + { + "id": 16, + "string": "However, if we account for the temporal relation (that is, e4:stifle happened before e3:raged), it is clear that e4:stifle is the cause and e3:raged is the effect." + }, + { + "id": 17, + "string": "In this case, the causal relation is dictated by the temporal relation." + }, + { + "id": 18, + "string": "The first contribution of this work is proposing a joint framework for Temporal and Causal Reasoning (TCR), inspired by these examples." + }, + { + "id": 19, + "string": "Assuming the availability of a temporal extraction system and a causal extraction system, the proposed joint framework combines these two using a constrained conditional model (CCM) (Chang et al., 2012) framework, with an integer linear programming (ILP) objective (Roth and Yih, 2004) that enforces declarative constraints during the inference phase." + }, + { + "id": 20, + "string": "Specifically, these constraints include: (1) A cause must temporally precede its effect." + }, + { + "id": 21, + "string": "(2) Symmetry constraints, i.e., when a pair of events, (A, B), has a temporal relation r (e.g., before), then (B, A) must have the reverse relation of r (e.g., after)."
+ }, + { + "id": 23, + "string": "These constraints originate from the one-dimensional nature of time and the physical nature of causality and build connections between temporal and causal relations, making CCM a natural choice for this problem." + }, + { + "id": 24, + "string": "As far as we know, very limited work has been done in joint extraction of both relations." + }, + { + "id": 25, + "string": "Formulating the joint problem in the CCM framework is novel and thus the first contribution of this work." + }, + { + "id": 26, + "string": "A key obstacle in jointly studying temporal and causal relations lies in the absence of jointly annotated data." + }, + { + "id": 27, + "string": "The second contribution of this work is the development of such a jointly annotated dataset which we did by augmenting the Event-Causality dataset (Do et al., 2011) with dense temporal annotations." + }, + { + "id": 28, + "string": "This dataset allows us to show statistically significant improvements on both relations via the proposed joint framework." + }, + { + "id": 29, + "string": "This paper also presents an empirical result of improving the temporal extraction component." + }, + { + "id": 30, + "string": "Specifically, we incorporate explicit time expressions present in the text and high-precision knowledge-based rules into the ILP objective." + }, + { + "id": 31, + "string": "These sources of information have been successfully adopted by existing methods (Chambers et al., 2014; Mirza and Tonelli, 2016) , but were never used within a global ILP-based inference method." + }, + { + "id": 32, + "string": "Results on TimeBank-Dense (Cassidy et al., 2014), a benchmark dataset with temporal relations only, show that these modifications can also be helpful within ILP-based methods." 
+ }, + { + "id": 33, + "string": "Related Work Temporal and causal relations can both be represented by directed acyclic graphs, where the nodes are events and the edges are labeled with either before, after, etc." + }, + { + "id": 34, + "string": "(in temporal graphs), or causes and caused by (in causal graphs)." + }, + { + "id": 35, + "string": "Existing work on temporal relation extraction was initiated by (Mani et al., 2006; Chambers et al., 2007; Bethard et al., 2007; Verhagen and Pustejovsky, 2008) , Ex 3: Global considerations are needed when making local decisions." + }, + { + "id": 36, + "string": "The FAA on Friday (e5:announced) it will close 149 regional airport control towers because of forced spending cuts." + }, + { + "id": 37, + "string": "Before Friday's (e6:announcement), it (e7:said) it would consider keeping a tower open if the airport convinces the agency it is in the \"national interest\" to do so." + }, + { + "id": 38, + "string": "which formulated the problem as that of learning a classification model for determining the label of each edge locally (i.e., local methods)." + }, + { + "id": 39, + "string": "A disadvantage of these early methods is that the resulting graph may break the symmetric and transitive constraints." + }, + { + "id": 40, + "string": "There are conceptually two ways to enforce such graph constraints (i.e., global reasoning)." + }, + { + "id": 41, + "string": "CAEVO (Chambers et al., 2014) grows the temporal graph in a multi-sieve manner, where predictions are added sieve-by-sieve." + }, + { + "id": 42, + "string": "A graph closure operation had to be performed after each sieve to enforce constraints." + }, + { + "id": 43, + "string": "This is solving the global inference problem greedily." 
+ }, + { + "id": 44, + "string": "A second way is to perform exact inference via ILP and the symmetry and transitivity requirements can be enforced as ILP constraints (Bramsen et al., 2006; Chambers and Jurafsky, 2008; Denis and Muller, 2011; Do et al., 2012; Ning et al., 2017) ." + }, + { + "id": 45, + "string": "We adopt the ILP approach in the temporal component of this work for two reasons." + }, + { + "id": 46, + "string": "First, as we show later, it is straightforward to build a joint framework with both temporal and causal relations as an extension of it." + }, + { + "id": 47, + "string": "Second, the relation between a pair of events is often determined by the relations among other events." + }, + { + "id": 48, + "string": "In Ex 3, if a system is unaware of (e5, e6)=simultaneously when trying to make a decision for (e5, e7), it is likely to predict that e5 is before e7 2 ; but, in fact, (e5, e7)=after given the existence of e6." + }, + { + "id": 49, + "string": "Using global considerations is thus beneficial in this context not only for generating globally consistent temporal graphs, but also for making more reliable pairwise decisions." + }, + { + "id": 50, + "string": "Prior work on causal relations in natural language text was relatively sparse." + }, + { + "id": 51, + "string": "Many causal extraction work in other domains assumes the existence of ground truth timestamps (e.g., (Sun et al., 2007; Güler et al., 2016) ), but gold timestamps rarely exist in natural language text." + }, + { + "id": 52, + "string": "In NLP, people have focused on causal relation identification using lexical features or discourse relations." + }, + { + "id": 53, + "string": "For example, based on a set of explicit causal discourse markers (e.g., \"because\", \"due to\", and \"as a result\"), Hidey and McKeown (2016) built parallel Wikipedia articles and constructed an open set of implicit markers called AltLex." 
+ }, + { + "id": 54, + "string": "A classifier was then applied to identify causality." + }, + { + "id": 55, + "string": "Dunietz et al." + }, + { + "id": 56, + "string": "(2017) used the concept of construction grammar to tag causally related clauses or phrases." + }, + { + "id": 57, + "string": "Do et al." + }, + { + "id": 58, + "string": "(2011) considered global statistics over a large corpora, the cause-effect association (CEA) scores, and combined it with discourse relations using ILP to identify causal relations." + }, + { + "id": 59, + "string": "These work only focused on the causality task and did not address the temporal aspect." + }, + { + "id": 60, + "string": "However, as illustrated by Examples 1-2, temporal and causal relations are closely related, as assumed by many existing works (Bethard and Martin, 2008; Rink et al., 2010) ." + }, + { + "id": 61, + "string": "Here we argue that being able to capture both aspects in a joint framework provides a more complete understanding of events in natural language documents." + }, + { + "id": 62, + "string": "Researchers have started paying attention to this direction recently." + }, + { + "id": 63, + "string": "For example, Mostafazadeh et al." + }, + { + "id": 64, + "string": "(2016b) proposed an annotation framework, CaTeRs, which captured both temporal and causal aspects of event relations in common sense stories." + }, + { + "id": 65, + "string": "CATENA (Mirza and Tonelli, 2016) extended the multi-sieve framework of CAEVO to extracting both temporal and causal relations and exploited their interaction through post-editing temporal relations based on causal predictions." + }, + { + "id": 66, + "string": "In this paper, we push this idea forward and tackle the problem in a joint and more principled way, as shown next." + }, + { + "id": 67, + "string": "Temporal and Causal Reasoning In this section, we explain the proposed joint inference framework, Temporal and Causal Reasoning (TCR)." 
+ }, + { + "id": 68, + "string": "To start with, we focus on introducing the temporal component, and clarify how to design the transitivity constraints and how to enforce other readily available prior knowledge to improve its performance." + }, + { + "id": 69, + "string": "With this temporal component already explained, we further incorporate causal relations and complete the TCR joint inference framework." + }, + { + "id": 70, + "string": "Finally, we transform the joint problem into an ILP so that it can be solved using off-the-shelf packages." + }, + { + "id": 71, + "string": "Temporal Component Let R T be the label set of temporal relations and E and T be the set of all events and the set of all time expressions (a.k.a." + }, + { + "id": 72, + "string": "timex) in a document." + }, + { + "id": 73, + "string": "For notation convenience, we use EE to represent the set of all event-event pairs; then ET and T T have obvious definitions." + }, + { + "id": 74, + "string": "Given a pair in EE or ET, assume for now that we have corresponding classifiers producing confidence scores for every temporal relation in R T." + }, + { + "id": 75, + "string": "Let them be s ee (·) and s et (·), respectively." + }, + { + "id": 76, + "string": "Then the inference formulation for all the temporal relations within this document is: Y = arg max Y ∈Y ∑ i∈EE s ee {i → Yi} + ∑ j∈ET s et {j → Yj} (1) where Y k ∈ R T is the temporal label of pair k ∈ MM (Let M = E ∪ T be the set of all temporal nodes), \"k → Y k \" represents the case where the label of pair k is predicted to be Y k , Y is a vectorization of all these Y k 's in one document, and Y is the constrained space that Y lies in." + }, + { + "id": 77, + "string": "We do not include the scores for T T because the temporal relationship between timexes can be trivially determined using the normalized dates of these timexes, as was done in (Do et al., 2012; Chambers et al., 2014; Mirza and Tonelli, 2016)."
+ }, + { + "id": 78, + "string": "We impose these relations via equality constraints denoted as Y 0 ." + }, + { + "id": 79, + "string": "In addition, we add symmetry and transitivity constraints dictated by the nature of time (denoted by Y 1 ), and other prior knowledge derived from linguistic rules (denoted by Y 2 ), which will be explained subsequently." + }, + { + "id": 80, + "string": "Finally, we set Y = ∩ 2 i=0 Y i in Eq." + }, + { + "id": 81, + "string": "(1)." + }, + { + "id": 82, + "string": "Transitivity Constraints." + }, + { + "id": 83, + "string": "Let the dimension of Y be n. Then a standard way to construct the symmetry and transitivity constraints is shown in (Bramsen et al., 2006; Chambers and Jurafsky, 2008; Denis and Muller, 2011; Do et al., 2012; Ning et al., 2017 ) Y 1 = { Y ∈ R n T |∀m 1,2,3 ∈ M, Y (m1,m2) =Ȳ (m2,m1) , Y (m1,m3) ∈ Trans(Y (m1,m2) , Y (m2,m3) ) } where the bar sign is used to represent the reverse relation hereafter, and Trans(r 1 , r 2 ) is a set comprised of all the temporal relations from R T that do not conflict with r 1 and r 2 ." + }, + { + "id": 84, + "string": "The construction of Trans(r 1 , r 2 ) necessitates a clearer definition of R T , the importance of which is often overlooked by existing methods." + }, + { + "id": 85, + "string": "Existing approaches all followed the interval representation of events (Allen, 1984) , which yields 13 temporal relations (denoted byR T here) as shown in ." + }, + { + "id": 86, + "string": "\"x\" means that the label is ignored." + }, + { + "id": 87, + "string": "Brackets represent time intervals along the time axis." + }, + { + "id": 88, + "string": "Scheme 2 is adopted consistently in this work." + }, + { + "id": 89, + "string": "ample, {before, after, includes, is included, simultaneously, vague}." + }, + { + "id": 90, + "string": "For notation convenience, we denote them R T = {b, a, i, ii, s, v}." 
+ }, + { + "id": 91, + "string": "Using a reduced set is more convenient in data annotation and leads to better performance in practice." + }, + { + "id": 92, + "string": "However, there has been limited discussion in the literature on how to interpret the reduced relation types." + }, + { + "id": 93, + "string": "For example, is the \"before\" in R T exactly the same as the \"before\" in the original set (R T ) (as shown on the left-hand-side of Fig." + }, + { + "id": 94, + "string": "1 ), or is it a combination of multiple relations inR T (the right-hand-side of Fig." + }, + { + "id": 95, + "string": "1) ?" + }, + { + "id": 96, + "string": "We compare two reduction schemes in Fig." + }, + { + "id": 97, + "string": "1 , where scheme 1 ignores low frequency labels directly and scheme 2 absorbs low frequency ones into their temporally closest labels." + }, + { + "id": 98, + "string": "The two schemes barely have differences when a system only looks at a single pair of mentions at a time (this might explain the lack of discussion over this issue in the literature), but they lead to different Trans(r 1 , r 2 ) sets and this difference can be magnified when the problem is solved jointly and when the label distribution changes across domains." + }, + { + "id": 99, + "string": "To completely cover the 13 relations, we adopt scheme 2 in this work." + }, + { + "id": 100, + "string": "The resulting transitivity relations are shown in Table 1 ." + }, + { + "id": 101, + "string": "The top part of Table 1 is a compact representation of three generic rules; for instance, Line 1 means that the labels themselves are transitive." + }, + { + "id": 102, + "string": "Note that during human annotation, if an annotator looks at a pair of events and decides that multiple well-defined relations can exist, he/she labels it vague; also, when aggregating the labels from multiple annotators, a label will be changed to vague if the annotators disagree with each other." 
+ }, + { + "id": 103, + "string": "In either case, vague is chosen to be the label when a single well-defined relation cannot be uniquely determined by the contextual information." + }, + { + "id": 104, + "string": "This explains why a vague relation (v) is always added in Table 1 if more than one label in Trans(r 1 , r 2 ) is possible." + }, + { + "id": 105, + "string": "As for Lines 6, 9-11 in Table 1 (where vague appears in Column r 2 ), Column Trans(r 1 ,r 2 ) was designed in such a way that r 2 cannot be uniquely determined through r 1 and Trans(r 1 ,r 2 )." + }, + { + "id": 106, + "string": "For instance, r 1 is after on Line 9, if we further put before into Trans(r 1 ,r 2 ), then r 2 would be uniquely determined to be before, conflicting with r 2 being vague, so before should not be in Trans(r 1 ,r 2 )." + }, + { + "id": 107, + "string": "Enforcing Linguistic Rules." + }, + { + "id": 108, + "string": "Besides the transitivity constraints represented by Y 1 above, we also propose to enforce prior knowledge to further constrain the search space for Y ." + }, + { + "id": 109, + "string": "Specifically, linguistic insight has resulted in rules for predicting the temporal relations with special syntactic or semantic patterns, as was done in CAEVO (a state-of-the-art method)." + }, + { + "id": 110, + "string": "Since these rule predictions often have high-precision, it is worthwhile incorporating them in global reasoning methods as well." + }, + { + "id": 111, + "string": "No." + }, + { + "id": 112, + "string": "r1 r2 Trans(r1, r2) 1 r r r 2 r s r 3 r1 r2 Trans(r2,r1) 4 b i b, i, v 5 b ii b, ii, v 6 b v b, i, ii, v 7 a i a, i, v 8 a ii a, ii, v 9 a v a, i, ii ,v 10 i v b, a, i, v 11 ii v b, a, ii, v In the CCM framework, these rules can be represented as hard constraints in the search space for Y ." 
+ }, + { + "id": 113, + "string": "Specifically, Y2 = { Yj = rule(j), ∀j ∈ J (rule) } , (2) where J (rule) ⊆ MM is the set of pairs that can be determined by linguistic rules, and rule(j) ∈ R T is the corresponding decision for pair j according to these rules." + }, + { + "id": 114, + "string": "In this work, we used the same set of rules designed by CAEVO for fair comparison." + }, + { + "id": 115, + "string": "Full Model with Causal Relations Now we have presented the joint inference framework for temporal relations in Eq." + }, + { + "id": 116, + "string": "(1)." + }, + { + "id": 117, + "string": "It is easier to explain our complete TCR framework on top of it." + }, + { + "id": 118, + "string": "Let W be the vectorization of all causal relations and add the scores from the scoring function for causality s c (·) to Eq." + }, + { + "id": 119, + "string": "(1)." + }, + { + "id": 120, + "string": "Specifically, the full inference formulation is now: Y ,Ŵ = arg max Y ∈Y,W ∈W Y ∑ i∈EE s ee {i → Y i } (3) + ∑ j∈ET s et {j → Y j } + ∑ k∈EE s c {k → W k } where W Y is the search space for W ." + }, + { + "id": 121, + "string": "W Y depends on the temporal labels Y in the sense that W Y = {W ∈ R m C |∀i, j ∈ E, if W (i,j) = c, (4) then W (j,i) =c, and Y (i,j) = b} where m is the dimension of W (i.e., the total number of causal pairs), R C = {c,c, null} is the label set for causal relations (i.e., \"causes\", \"caused by\", and \"no relation\"), and W (i,j) is the causal label for pair (i, j)." + }, + { + "id": 122, + "string": "The constraint represented by W Y means that if a pair of events i and j are labeled to be \"causes\", then the causal relation between j and i must be \"caused by\", and the temporal relation between i and j must be \"before\"." + }, + { + "id": 123, + "string": "Scoring Functions In the above, we have built the joint framework on top of scoring functions s ee (·), s et (·) and s c (·)." 
+ }, + { + "id": 124, + "string": "To get s ee (·) and s et (·), we trained classifiers using the averaged perceptron algorithm (Freund and Schapire, 1998) and the same set of features used in (Do et al., 2012; Ning et al., 2017), and then used the soft-max scores in those scoring functions." + }, + { + "id": 125, + "string": "For example, that means s ee {i → r} = w T r ϕ(i) / ∑ r ′ ∈RT w T r ′ ϕ(i), i ∈ EE, r ∈ R T , where {w r } is the learned weight vector for relation r ∈ R T and ϕ(i) is the feature vector for pair i ∈ EE." + }, + { + "id": 126, + "string": "Given a pair of ordered events, we need s c (·) to estimate the scores of them being \"causes\" or \"caused by\"." + }, + { + "id": 127, + "string": "Since this scoring function has the same nature as s ee (·), we can reuse the features from s ee (·) and learn an averaged perceptron for s c (·)." + }, + { + "id": 128, + "string": "In addition to these existing features, we also use prior statistics retrieved using our temporal system from a large corpus, so as to know probabilistically which event happens before another event." + }, + { + "id": 129, + "string": "For example, in Example 1, we have a pair of events, e1:died and e2:exploded." + }, + { + "id": 130, + "string": "The prior knowledge we retrieved from that large corpus is that die happens before explode with probability 15% and happens after explode with probability 85%." + }, + { + "id": 131, + "string": "We think this prior distribution is correlated with causal directionality, so it was also added as features when training s c (·)." + }, + { + "id": 132, + "string": "Note that the scoring functions here are an implementation choice." + }, + { + "id": 133, + "string": "The TCR joint framework is fully extensible to other scoring functions." + }, + { + "id": 134, + "string": "Convert the Joint Inference into an ILP Conveniently, the joint inference formulation in Eq."
+ }, + { + "id": 135, + "string": "(3) can be rewritten into an ILP and solved using off-the-shelf optimization packages, e.g., (Gurobi Optimization, Inc., 2012)." + }, + { + "id": 136, + "string": "First, we define indicator variables y r i = I{Y i = r}, where I{·} is the indicator function, ∀i ∈ MM, ∀r ∈ R T." + }, + { + "id": 137, + "string": "Then let p r i = s ee {i → r} if i ∈ EE, or p r i = s et {i → r} if i ∈ ET; similarly, let w r j = I{W j = r} be the indicator variables for W j and q r j be the score for W j = r ∈ R C." + }, + { + "id": 138, + "string": "Therefore, without constraints Y and W Y for now, Eq." + }, + { + "id": 139, + "string": "(3) can be written as: y,ŵ = arg max ∑ i∈EE∪ET ∑ r∈R T p r i y r i + ∑ j∈EE ∑ r∈R C q r j w r j s.t." + }, + { + "id": 140, + "string": "y r i , w r j ∈ {0, 1}, ∑ r∈R T y r i = ∑ r∈R C w r j = 1 The prior knowledge represented as Y and W Y can be conveniently converted into constraints for this optimization problem." + }, + { + "id": 141, + "string": "Specifically, Y 1 has two components, symmetry and transitivity: Y1 : ∀i, j, k ∈ M, y r i,j = y r̄ j,i , (symmetry) y r 1 i,j + y r 2 j,k − ∑ r 3 ∈Trans(r 1 ,r 2 ) y r 3 i,k ≤ 1 (transitivity) where r̄ is the reverse relation of r (i.e., b̄ = a, ī = ii, s̄ = s, and v̄ = v), and Trans(r 1 , r 2 ) is defined in Table 1 ." + }, + { + "id": 142, + "string": "As for the transitivity constraints, if both y r 1 i,j and y r 2 j,k are 1, then the constraint requires at least one of y r 3 i,k , r 3 ∈ Trans(r 1 , r 2 ) to be 1, which means the relation between i and k has to be chosen from Trans(r 1 , r 2 ), which is exactly what Y 1 is intended to do." + }, + { + "id": 143, + "string": "The rules in Y 2 are written as Y 2 : y r j = I {rule(j)=r} , ∀j ∈ J (rule) (linguistic rules) where rule(j) and J (rule) have been defined in Eq." + }, + { + "id": 144, + "string": "(2)."
+ }, + { + "id": 145, + "string": "Converting the T T constraints, i.e., Y 0, into constraints is as straightforward as Y 2, so we omit it due to limited space." + }, + { + "id": 146, + "string": "Last, converting the constraints W Y defined in Eq." + }, + { + "id": 147, + "string": "(4) can be done as follows: W Y : w c i,j = w c̄ j,i ≤ y b i,j , ∀i, j ∈ E. The equality part, w c i,j = w c̄ j,i represents the symmetry constraint of causal relations; the inequality part, w c i,j ≤ y b i,j represents that if event i causes event j, then i must be before j." + }, + { + "id": 148, + "string": "Experiments In this section, we first show on TimeBank-Dense (TB-Dense) (Cassidy et al., 2014), that the proposed framework improves temporal relation identification." + }, + { + "id": 149, + "string": "We then explain how our new dataset with both temporal and causal relations was collected, based on which the proposed method improves for both relations." + }, + { + "id": 150, + "string": "Temporal Performance on TB-Dense Multiple datasets with temporal annotations are available thanks to the TempEval (TE) workshops (Verhagen et al., 2007, 2010; UzZaman et al., 2013)." + }, + { + "id": 151, + "string": "The dataset we used here to demonstrate our improved temporal component was TB-Dense, which was annotated on top of 36 documents out of the classic TimeBank dataset (Pustejovsky et al., 2003)." + }, + { + "id": 152, + "string": "The main purpose of TB-Dense was to alleviate the known issue of sparse annotations in the evaluation dataset provided with TE3 (UzZaman et al., 2013), as pointed out in many previous works (Chambers, 2013; Cassidy et al., 2014; Chambers et al., 2014; Ning et al., 2017)." + }, + { + "id": 153, + "string": "Annotators of TB-Dense were forced to look at each pair of events or timexes within the same sentence or contiguous sentences, so that much fewer links were missed."
+ }, + { + "id": 154, + "string": "Since causal link annotation is not available on TB-Dense, we only show our improvement in terms of temporal performance on Table 2 : Ablation study of the proposed system in terms of the standard temporal awareness metric." + }, + { + "id": 155, + "string": "The baseline system is to make inference locally for each event pair without looking at the decisions from others." + }, + { + "id": 156, + "string": "The \"+\" signs on lines 2-5 refer to adding a new source of information on top of its preceding system, with which the inference has to be global and done via ILP." + }, + { + "id": 157, + "string": "All systems are significantly different to its preceding one with p<0.05 (McNemar's test)." + }, + { + "id": 158, + "string": "TB-Dense." + }, + { + "id": 159, + "string": "The standard train/dev/test split of TB-Dense was used and parameters were tuned to optimize the F 1 performance on dev." + }, + { + "id": 160, + "string": "Gold events and time expressions were also used as in existing systems." + }, + { + "id": 161, + "string": "The contributions of each proposed information sources are analyzed in the ablation study shown in Table 2 , where we can see the F 1 score was improved step-by-step as new sources of information were added." + }, + { + "id": 162, + "string": "Recall that Y 1 represents transitivity constraints, ET represents taking eventtimex pairs into consideration, and Y 2 represents rules from CAEVO (Chambers et al., 2014) ." + }, + { + "id": 163, + "string": "System 1 is the baseline we are comparing to, which is a local method predicting temporal relations one at a time." + }, + { + "id": 164, + "string": "System 2 only applied Y 1 via ILP on top of all EE pairs by removing the 2nd term in Eq." + }, + { + "id": 165, + "string": "(1); for fair comparison with System 1, we added the same ET predictions from System 1." 
+ }, + { + "id": 166, + "string": "System 3 incorporated ET into the ILP and mainly contributed to an increase in precision (from 42.9 to 44.3); we think that there could be more gain if more time expressions existed in the test set." + }, + { + "id": 167, + "string": "With the help of additional high-precision rules (Y 2 ), the temporal performance can further be improved, as shown by System 4." + }, + { + "id": 168, + "string": "Finally, using the causal extraction obtained via (Do et al., 2011) in the joint framework, the proposed method achieved the best precision, recall, and F 1 scores in our ablation study (Systems 1-5)." + }, + { + "id": 169, + "string": "According to the McNemar's test (Everitt, 1992; Dietterich, 1998) , each of Systems 2-5 was significantly different from its preceding system with p<0.05." + }, + { + "id": 170, + "string": "The second part of Table 2 compares several state-of-the-art systems on the same test set." + }, + { + "id": 171, + "string": "ClearTK (Bethard, 2013) was the top performing system in TE3 in temporal relation extraction." + }, + { + "id": 172, + "string": "Since it was designed for TE3 (not TB-Dense), it expectedly achieved a moderate recall on the test set of TB-Dense." + }, + { + "id": 173, + "string": "CAEVO (Chambers et al., 2014) and Ning et al." + }, + { + "id": 174, + "string": "(2017) were more recent methods and achieved better scores on TB-Dense." + }, + { + "id": 175, + "string": "Compared with these state-of-the-art methods, the proposed joint system (System 5) achieved the best F 1 score with a major improvement in recall." + }, + { + "id": 176, + "string": "We think the low precision compared to System 8 is due to the lack of structured learning, and the low precision compared to System 7 is propagated from the baseline (System 1), which was tuned to maximize its F 1 score." + }, + { + "id": 177, + "string": "However, the effectiveness of the proposed information sources is already justified in Systems 1-5." 
+ }, + { + "id": 178, + "string": "Joint Performance on Our New Dataset Data Preparation TB-Dense only has temporal relation annotations, so in the evaluations above, we only evaluated our temporal performance." + }, + { + "id": 179, + "string": "One existing dataset with both temporal and causal annotations available is the Causal-TimeBank dataset (Causal-TB) (Mirza and Tonelli, 2014) ." + }, + { + "id": 180, + "string": "However, Causal-TB is sparse in temporal annotations and is even sparser in causal annotations: In Table 3 , we can see that with four times more documents, Causal-TB still has fewer temporal relations (denoted as T-Links therein), compared to TB-Dense; as for causal relations (C-Links), it has less than two causal relations in each document on average." + }, + { + "id": 181, + "string": "Note that the T-Link sparsity of Causal-TB originates from TimeBank, which is known to have missing links (Cassidy et al., 2014; Ning et al., 2017) ." + }, + { + "id": 182, + "string": "The C-Link sparsity was a design choice of Causal-TB in which C-Links were annotated based on only explicit causal markers (e.g., \"A happened because of B\")." + }, + { + "id": 183, + "string": "Another dataset with parallel annotations is CaTeRs (Mostafazadeh et al., 2016b) , which was primarily designed for the Story Cloze Test (Mostafazadeh et al., 2016a) based on common (2014) and use this new dataset to showcase the proposed joint approach." + }, + { + "id": 184, + "string": "The EventCausality dataset provides relatively dense causal annotations on 25 newswire articles collected from CNN in 2010." + }, + { + "id": 185, + "string": "As shown in Table 3 , it has more than 20 C-Links annotated per document on average (10 times denser than Causal-TB)." + }, + { + "id": 186, + "string": "However, one issue is that its notion for events is slightly different to that in the temporal relation extraction regime." 
+ }, + { + "id": 187, + "string": "To construct parallel annotations of both temporal and causal relations, we preprocessed all the articles in EventCausality using ClearTK to extract events and then manually removed some obvious errors in them." + }, + { + "id": 188, + "string": "To annotate temporal relations among these events, we adopted the annotation scheme from TB-Dense given its success in mitigating the issue of missing annotations with the following modifications." + }, + { + "id": 189, + "string": "First, we used a crowdsourcing platform, Crowd-Flower, to collect temporal relation annotations." + }, + { + "id": 190, + "string": "For each decision of temporal relation, we asked 5 workers to annotate and chose the majority label as our final annotation." + }, + { + "id": 191, + "string": "Second, we discovered that comparisons involving ending points of events tend to be ambiguous and suffer from low inter-annotator agreement (IAA), so we asked the annotators to label relations based on the starting points of each event." + }, + { + "id": 192, + "string": "This simplification does not change the nature of temporal relation extraction but leads to better annotation quality." + }, + { + "id": 193, + "string": "For more details about this data collection scheme, please refer to (Ning et al., 2018b)." + }, + { + "id": 194, + "string": "Results Result on our new dataset jointly annotated with both temporal and causal relations is shown in Table 4." + }, + { + "id": 195, + "string": "We split the new dataset into 20 documents for training and 5 documents for testing." + }, + { + "id": 196, + "string": "In the training phase, the training parameters were tuned via 5-fold cross validation on the training set." + }, + { + "id": 197, + "string": "Table 4 demonstrates the improvement of the joint framework over individual components." 
+ }, + { + "id": 198, + "string": "The \"temporal only\" baseline is the improved temporal extraction system for which the joint inference with causal links has NOT been applied." + }, + { + "id": 199, + "string": "The \"causal only\" baseline is to use s c (·) alone for the prediction of each pair." + }, + { + "id": 200, + "string": "That is, for a pair i, if s c {i → causes} > s c {i → caused by}, we then assign \"causes\" to pair i; otherwise, we assign \"caused by\" to pair i." + }, + { + "id": 201, + "string": "Note that the \"causal accuracy\" column in Table 4 was evaluated only on gold causal pairs." + }, + { + "id": 202, + "string": "In the proposed joint system, the temporal and causal scores were added up for all event pairs." + }, + { + "id": 203, + "string": "The temporal performance got strictly better in precision, recall, and F 1 , and the causal performance also got improved by a large margin from 70.5% to 77.3%, indicating that temporal signals and causal signals are helpful to each other." + }, + { + "id": 204, + "string": "According to the McNemar's test, both improvements are significant with p<0.05." + }, + { + "id": 205, + "string": "The second part of Table 4 shows that if gold relations were used, how well each component would possibly perform." + }, + { + "id": 206, + "string": "Technically, these gold temporal/causal relations were enforced via adding extra constraints to ILP in Eq." + }, + { + "id": 207, + "string": "(3) (imagine these gold relations as a special rule, and convert them into constraints like what we did in Eq." + }, + { + "id": 208, + "string": "(2))." + }, + { + "id": 209, + "string": "When using gold temporal relations, causal accuracy went up to 91.9%." + }, + { + "id": 210, + "string": "That is, 91.9% of the C-Links satisfied the assumption that the cause is temporally before the effect." 
+ }, + { + "id": 211, + "string": "First, this number is much higher than the 77.3% on line 3, so there is still room for improvement." + }, + { + "id": 212, + "string": "Second, it means in this dataset, there were 8.1% of the C-Links in which the cause is temporally after its effect." + }, + { + "id": 213, + "string": "We will discuss this seemingly counter-intuitive phenomenon in the Discussion section." + }, + { + "id": 214, + "string": "When gold causal relations were used (line 5), the temporal performance was slightly better than line 3 in terms of both precision and recall." + }, + { + "id": 215, + "string": "The small difference means that the temporal performance on line 3 was already very close to its best." + }, + { + "id": 216, + "string": "Compared with the first line, we can see that gold causal relations led to approximately 2% improvement in precision and recall in temporal performance, which is a reasonable margin given the fact that C-Links are often much sparser than T-Links in practice." + }, + { + "id": 217, + "string": "Note that the temporal performance in Table 4 is consistently better than those in Table 2 because of the higher IAA in the new dataset." + }, + { + "id": 218, + "string": "However, the improvement brought by joint reasoning with causal relations is the same, which further confirms the capability of the proposed approach." + }, + { + "id": 219, + "string": "Discussion We have consistently observed that on the TB-Dense dataset, if automatically tuned to optimize its F 1 score, a system is very likely to have low precisions and high recall (e.g., Table 2 )." + }, + { + "id": 220, + "string": "We notice that our system often predicts non-vague relations when the TB-Dense gold is vague, resulting in lower precision." + }, + { + "id": 221, + "string": "However, on our new dataset, the same algorithm can achieve a more balanced precision and recall." 
+ }, + { + "id": 222, + "string": "This is an interesting phenomenon, possibly due to the difference in annotation schemes, which needs further investigation." + }, + { + "id": 223, + "string": "The temporal improvements in both Table 2 and Table 4 are relatively small (although statistically significant)." + }, + { + "id": 224, + "string": "This is actually not surprising because C-Links are much fewer than T-Links in newswires, which focus more on the temporal development of stories." + }, + { + "id": 225, + "string": "As a result, many T-Links are not accompanied with C-Links and the improvements are diluted." + }, + { + "id": 226, + "string": "But for those event pairs having both T-Links and C-Links, the proposed joint framework is an important scheme to synthesize both signals and improve both." + }, + { + "id": 227, + "string": "The comparison between Line 5 and Line 3 in Table 4 is a showcase of the effectiveness." + }, + { + "id": 228, + "string": "We think that a deeper reason for the improvement achieved via a joint framework is that causality often encodes humans' prior knowledge as global information (e.g., \"death\" is caused by \"explosion\" rather than causes \"explosion\", regardless of the local context), while temporality often focuses more on the local context." + }, + { + "id": 229, + "string": "From this standpoint, temporal information and causal information are complementary and helpful to each other." + }, + { + "id": 230, + "string": "When doing error analysis for the fourth line of Table 4 , we noticed some examples that break the commonly accepted temporal precedence assumption." + }, + { + "id": 231, + "string": "It turns out that they are not annotation mistakes: In Example 4, e8:finished is obviously before e9:closed, but e9 is a cause of e8 since if the market did not close, the shares would not finish." 
+ }, + { + "id": 232, + "string": "In the other sentence of Example 4, she prepares before hosting her show, but e11:host is the cause of e10:prepares since if not for hosting, no preparation would be needed." + }, + { + "id": 233, + "string": "In both cases, the cause is temporally after the effect because people are inclined to make projections for the future and change their behaviors before the future comes." + }, + { + "id": 234, + "string": "The proposed system is currently unable to handle these examples and we believe that a better definition of what can be considered as events is needed, as part of further investigating how causality is expressed in natural language." + }, + { + "id": 235, + "string": "Finally, the constraints connecting causal relations to temporal relations are designed in this paper as \"if A is the cause of B, then A must be before B\"." + }, + { + "id": 236, + "string": "People have suggested other possibilities that involve the includes and simultaneously relations." + }, + { + "id": 237, + "string": "While these other relations are simply different interpretations of temporal precedence (and can be easily incorporated in our framework), we find that they rarely happen in our dataset." + }, + { + "id": 238, + "string": "Conclusion We presented a novel joint framework, Temporal and Causal Reasoning (TCR), which applies CCMs and ILP to the problem of extracting temporal and causal relations between events." + }, + { + "id": 239, + "string": "To show the benefit of TCR, we have developed a new dataset with jointly annotated temporal and causal relations, and then exhibited that TCR can improve both temporal and causal components." + }, + { + "id": 240, + "string": "We hope that this notable improvement can foster more interest in jointly studying multiple aspects of events (e.g., event sequencing, coreference, parent-child relations) towards the goal of understanding events in natural language." 
+ } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 32 + }, + { + "section": "Related Work", + "n": "2", + "start": 33, + "end": 66 + }, + { + "section": "Temporal and Causal Reasoning", + "n": "3", + "start": 67, + "end": 70 + }, + { + "section": "Temporal Component", + "n": "3.1", + "start": 71, + "end": 114 + }, + { + "section": "Full Model with Causal Relations", + "n": "3.2", + "start": 115, + "end": 122 + }, + { + "section": "Scoring Functions", + "n": "3.3", + "start": 123, + "end": 133 + }, + { + "section": "Convert the Joint Inference into an ILP", + "n": "3.4", + "start": 134, + "end": 147 + }, + { + "section": "Experiments", + "n": "4", + "start": 148, + "end": 149 + }, + { + "section": "Temporal Performance on TB-Dense", + "n": "4.1", + "start": 150, + "end": 177 + }, + { + "section": "Data Preparation", + "n": "4.2.1", + "start": 178, + "end": 193 + }, + { + "section": "Results", + "n": "4.2.2", + "start": 194, + "end": 218 + }, + { + "section": "Discussion", + "n": "5", + "start": 219, + "end": 236 + }, + { + "section": "Conclusion", + "n": "6", + "start": 237, + "end": 240 + } + ], + "figures": [ + { + "filename": "../figure/image/1010-Table4-1.png", + "caption": "Table 4: Comparison between the proposed method and existing ones, in terms of both temporal and causal performances. See Sec. 4.2.1 for description of our new dataset. Per the McNemar’s test, the joint system is significantly better than both baselines with p<0.05. Lines 4-5 provide the best possible performance the joint system could achieve if gold temporal/causal relations were given.", + "page": 7, + "bbox": { + "x1": 77.75999999999999, + "x2": 284.15999999999997, + "y1": 62.879999999999995, + "y2": 149.28 + } + }, + { + "filename": "../figure/image/1010-Table1-1.png", + "caption": "Table 1: Transitivity relations based on the label set reduction scheme 2 in Fig. 1. 
If (m1,m2) 7→ r1 and (m2,m3) 7→ r2, then the relation of (m1,m3) must be chosen from Trans(r1, r2), ∀m1, m2, m3 ∈ M. The top part of the table uses r to represent generic rules compactly. Notations: before (b), after (a), includes (i), is included (ii), simultaneously (s), vague (v), and r̄ represents the reverse relation of r.", + "page": 3, + "bbox": { + "x1": 352.8, + "x2": 480.47999999999996, + "y1": 62.879999999999995, + "y2": 188.16 + } + }, + { + "filename": "../figure/image/1010-Figure1-1.png", + "caption": "Figure 1: Two possible interpretations to the label set of RT = {b, a, i, ii, s, v} for the temporal relations between (A, B). “x” means that the label is ignored. Brackets represent time intervals along the time axis. Scheme 2 is adopted consistently in this work.", + "page": 3, + "bbox": { + "x1": 91.67999999999999, + "x2": 269.28, + "y1": 64.8, + "y2": 217.92 + } + }, + { + "filename": "../figure/image/1010-Table3-1.png", + "caption": "Table 3: Statistics of our new dataset with both temporal and causal relations annotated, compared with existing datasets. T-Link: Temporal relation. C-Link: Causal relation. The new dataset is much denser than Causal-TB in both T-Links and C-Links.", + "page": 6, + "bbox": { + "x1": 315.84, + "x2": 517.4399999999999, + "y1": 62.879999999999995, + "y2": 114.24 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-16" + }, + { + "slides": { + "2": { + "title": "Updates in WMT19", + "text": [ + "I reference-based human evaluation monolingual", + "I reference-free human evaluation bilingual", + "I standard reference-based metrics", + "I reference-less metrics QE as a Metric", + "I Hybrid supersampling was not needed for sys-level:", + "I Sufficiently large numbers of MT systems serve as datapoints." 
+ ], + "page_nums": [ + 10 + ], + "images": [] + }, + "3": { + "title": "System and Segment Level Evaluation", + "text": [ + "I Participants compute one", + "score for the whole test set, as translated by each of the systems", + "The new in The company m From Friday's joi \"The unification Cermak, which New common D", + "I Segment Level Econo For exam The new in", + "score for each sentence of each systems translation" + ], + "page_nums": [ + 11, + 12 + ], + "images": [] + }, + "4": { + "title": "Past Metrics Tasks", + "text": [ + "Rat. of Concord. Pairs", + "Pearson Corr Coeff based on", + "RR RR RR RR RR RR RR RR RR", + "main and secondary score reported for the system-level evaluation. and are slightly different variants regarding ties.", + "RR, DA, daRR are different golden truths.", + "Increase in number of participating teams?", + "I Baseline metrics: 9 + 2 reimplementations", + "I sacreBLEU-BLEU and sacreBLEU-chrF.", + "I Submitted metrics: 10 out of 24 are QE as a Metric." + ], + "page_nums": [ + 13, + 14 + ], + "images": [] + }, + "5": { + "title": "Data Overview This Year", + "text": [ + "I Direct Assessment (DA) for sys-level.", + "I Derived relative ranking (daRR) for seg-level.", + "I Multiple languages (18 pairs):", + "I English (en) to/from Czech (cs), German (de), Finnish (fi),", + "Gujarati (gu), Kazakh (kk), Lithuanian (lt), Russian (ru), and", + "Chinese (zh), but excluding cs-en.", + "I German (de)Czech (cs) and German (de)French (fr)." + ], + "page_nums": [ + 15 + ], + "images": [] + }, + "6": { + "title": "Baselines", + "text": [ + "Metric Features Seg-L Sys-L sentBLEU", + "CDER chrF chrF+ sacreBLEU-BLEU sacreBLEU-chrF n-grams n-grams n-grams", + "Levenshtein distance edit distance, edit types edit distance, edit types edit distance, edit types character n-grams character n-grams n-grams n-grams", + "We average ( ) seg-level scores." 
+ ], + "page_nums": [ + 16 + ], + "images": [] + }, + "7": { + "title": "Participating Metrics", + "text": [ + "Features char. n-grams, permutation trees contextual word embeddings char. edit distance, edit types char. edit distance, edit types learned neural representations surface linguistic features surface linguistic features word alignments", + "Meteor++ 2.0 (syntax+copy) word alignments", + "YiSi-1 srl psuedo-references, paraphrases word mover distance semantic similarity semantic similarity semantic similarity", + "Univ. of Amsterdam, ILCC", + "Dublin City University, ADA", + "We average ( ) their seg-level scores." + ], + "page_nums": [ + 17 + ], + "images": [] + }, + "10": { + "title": "Golden Truth for Sys Level DA Pearson", + "text": [ + "You have scored individual sentences: (Thank you!)", + "News Task has filtered and standardized this (Ave z).", + "We correlate it with the metric sys-level score.", + "Ave z BLEU CUNI-Transformer uedin online-B online-A online-G" + ], + "page_nums": [ + 20 + ], + "images": [] + }, + "12": { + "title": "Segment Level News Task Evaluation", + "text": [ + "You scored individual sentences: (Same data as above.)", + "Standardized, averaged seg-level golden truth score.", + "Could be correlated to metric seg-level scores.", + "but there are not enough judgements for indiv. sentences." + ], + "page_nums": [ + 22 + ], + "images": [] + }, + "13": { + "title": "daRR Interpreting DA as RR", + "text": [ + "I If score for candidate A better than B by more than 25 points", + "infer the pairwise comparison: A B.", + "I No ties in golden daRR.", + "I Evaluate with the known Kendalls", + "I On average, there are 319 of scored outputs per src segm.", + "I From these, we generate 4k327k daRR pairs." 
+ ], + "page_nums": [ + 23 + ], + "images": [] + }, + "15": { + "title": "Sys Level into English Official", + "text": [ + "de-en fi-en gu-en kk-en lt-en ru-en zh-en", + "chrF chrF+ EED ESIM hLEPORa baseline hLEPORb baseline Meteor++ 2.0(syntax) Meteor++ 2.0(syntax+copy) NIST PER PReP sacreBLEU.BLEU sacreBLEU.chrF TER WER WMDO YiSi-0 YiSi-1 YiSi-1 srl QE as a Metric: ibm1-morpheme ibm1-pos4gram LASIM LP UNI UNI+ YiSi-2 YiSi-2 srl newstest2019", + "I Top: Baselines and regular metrics. Bottom: QE as a metric.", + "I Bold: not significantly outperformed by any others." + ], + "page_nums": [ + 25, + 26 + ], + "images": [] + }, + "17": { + "title": "Summary of Sys Level Wins Metrics", + "text": [ + "LPs LPs LPs Corr Wins Overall wins", + "BLEU PER sacreBLEU-BLEU BERTr Met++ 2.0(s.) Met++ 2.0(s.+copy) WMDO hLEPORb baseline PReP" + ], + "page_nums": [ + 28 + ], + "images": [] + }, + "18": { + "title": "Summary of Sys Level Wins QE", + "text": [ + "LPs LPs LPs Corr Wins ibm1-morpheme ibm1-pos4gram" + ], + "page_nums": [ + 29 + ], + "images": [] + }, + "21": { + "title": "Summary of Seg Level Wins Metrics", + "text": [ + "LPs LPs LPs Corr Wins Tot" + ], + "page_nums": [ + 32 + ], + "images": [] + }, + "22": { + "title": "Summary of Seg Level Wins QE", + "text": [ + "LPs LPs LPs Corr Wins ibm1-morpheme ibm1-pos4gram" + ], + "page_nums": [ + 33 + ], + "images": [] + }, + "24": { + "title": "Overall Status of MT Metrics", + "text": [ + "I Sys-level very good overall:", + "I Pearson Correlation >.90 mostly, best reach >95 or", + "I Low pearsons exist but not many.", + "I Correlations are heavily affected by the underlying set of MT", + "I System-level correlations are much worse when based on only the better", + "I No clear winners, but have a look at this years posters.", + "I Seg-level much worse:", + "I The top Kendalls only .59.", + "I standard metrics correlations varies between 0.03 and 0.59.", + "I QE a metric obtains even negative correlations.", + "I Methods using 
embeddings are better:", + "I YiSi-*: Word embeddings + other types of available resources.", + "I ESIM: Sentence embeddings." + ], + "page_nums": [ + 36, + 37 + ], + "images": [] + }, + "25": { + "title": "Next Metrics Task", + "text": [ + "I Yes, we will run the task!", + "I Big Challenge remains: References possibly worse than MT.", + "I Yes, we like the QE as a metric track.", + "I We will report the top-N plots.", + "I We have to summarize them somehow, though.", + "I Doc-level golden truth did not seem different from sys-level.", + "I This may change We might run doc-level metrics." + ], + "page_nums": [ + 38 + ], + "images": [] + } + }, + "paper_title": "Results of the WMT19 Metrics Shared Task: Segment-Level and Strong MT Systems Pose Big Challenges", + "paper_id": "1012", + "paper": { + "title": "Results of the WMT19 Metrics Shared Task: Segment-Level and Strong MT Systems Pose Big Challenges", + "abstract": "This paper presents the results of the WMT19 Metrics Shared Task. Participants were asked to score the outputs of the translations systems competing in the WMT19 News Translation Task with automatic metrics. 13 research groups submitted 24 metrics, 10 of which are reference-less \"metrics\" and constitute submissions to the joint task with WMT19 Quality Estimation Task, \"QE as a Metric\". In addition, we computed 11 baseline metrics, with 8 commonly applied baselines (BLEU, SentBLEU, NIST, WER, PER, TER, CDER, and chrF) and 3 reimplementations (chrF+, sacreBLEU-BLEU, and sacreBLEU-chrF). Metrics were evaluated on the system level, how well a given metric correlates with the WMT19 official manual ranking, and segment level, how well the metric correlates with human judgements of segment quality. 
This year, we use direct assessment (DA) as our only form of manual evaluation.", + "text": [ + { + "id": 0, + "string": "Introduction To determine system performance in machine translation (MT), it is often more practical to use an automatic evaluation, rather than a manual one." + }, + { + "id": 1, + "string": "Manual/human evaluation can be costly and time consuming, and so an automatic evaluation metric, given that it sufficiently correlates with manual evaluation, can be useful in developmental cycles." + }, + { + "id": 2, + "string": "In studies involving hyperparameter tuning or architecture search, automatic metrics are necessary as the amount of human effort implicated in manual evaluation is generally prohibitively large." + }, + { + "id": 3, + "string": "As objective, reproducible quantities, metrics can also facilitate cross-paper compar-isons." + }, + { + "id": 4, + "string": "The WMT Metrics Shared Task 1 annually serves as a venue to validate the use of existing metrics (including baselines such as BLEU), and to develop new ones; see Koehn and Monz (2006) through Ma et al." + }, + { + "id": 5, + "string": "(2018) ." + }, + { + "id": 6, + "string": "In the setup of our Metrics Shared Task, an automatic metric compares an MT system's output translations with manual reference translations to produce: either (a) system-level score, i.e." + }, + { + "id": 7, + "string": "a single overall score for the given MT system, or (b) segment-level scores for each of the output translations, or both." + }, + { + "id": 8, + "string": "This year we teamed up with the organizers of the QE Task and hosted \"QE as a Metric\" as a joint task." + }, + { + "id": 9, + "string": "In the setup of the Quality Estimation Task (Fonseca et al., 2019) , no humanproduced translations are provided to estimate the quality of output translations." 
+ }, + { + "id": 10, + "string": "Quality estimation (QE) methods are built to assess MT output based on the source or based on the translation itself." + }, + { + "id": 11, + "string": "In this task, QE developers were invited to perform the same scoring as standard metrics participants, with the exception that they refrain from using a reference translation in production of their scores." + }, + { + "id": 12, + "string": "We then evaluate the QE submissions in exactly the same way as regular metrics are evaluated, see below." + }, + { + "id": 13, + "string": "From the point of view of correlation with manual judgements, there is no difference in metrics using or not using references." + }, + { + "id": 14, + "string": "The source, reference texts, and MT system outputs for the Metrics task come from the News Translation Task (Barrault et al., 2019 , which we denote as Findings 2019)." + }, + { + "id": 15, + "string": "The texts were drawn from the news domain and involve translations of English (en) to/from Czech (cs), German (de), Finnish (fi), Gujarati (gu), Kazakh (kk), Lithuanian (lt), Russian (ru) , and Chinese (zh), but excluding csen (15 language pairs)." + }, + { + "id": 16, + "string": "Three other language pairs not including English were also manually evaluated as part of the News Translation Task: German→Czech and German↔French." + }, + { + "id": 17, + "string": "In total, metrics could participate in 18 language pairs, with 10 target languages." + }, + { + "id": 18, + "string": "In the following, we first give an overview of the task (Section 2) and summarize the baseline (Section 3) and submitted (Section 4) metrics." + }, + { + "id": 19, + "string": "The results for system-and segment-level evaluation are provided in Sections 5.1 and 5.2, respectively, followed by a joint discussion Section 6." + }, + { + "id": 20, + "string": "Task Setup This year, we provided task participants with one test set for each examined language pair, i.e." 
+ }, + { + "id": 21, + "string": "a set of source texts (which are commonly ignored by MT metrics), corresponding MT outputs (these are the key inputs to be scored) and a reference translation (held out for the participants of \"QE as a Metric\" track)." + }, + { + "id": 22, + "string": "In the system-level, metrics aim to correlate with a system's score which is an average over many human judgments of segment translation quality produced by the given system." + }, + { + "id": 23, + "string": "In the segment-level, metrics aim to produce scores that correlate best with a human ranking judgment of two output translations for a given source segment (more on the manual quality assessment in Section 2.3)." + }, + { + "id": 24, + "string": "Participants were free to choose which language pairs and tracks (system/segment and reference-based/reference-free) they wanted to take part in." + }, + { + "id": 25, + "string": "Source and Reference Texts The source and reference texts we use are newstest2019 from this year's WMT News Translation Task (see Findings 2019)." + }, + { + "id": 26, + "string": "This set contains approximately 2,000 sentences for each translation direction (except Gujarati, Kazakh and Lithuanian which have approximately 1,000 sentences each, and German to/from French which has 1701 sentences)." + }, + { + "id": 27, + "string": "The reference translations provided in new-stest2019 were created in the same direction as the MT systems were translating." + }, + { + "id": 28, + "string": "The exceptions are German→Czech where both sides are translations from English and German↔French which followed last years' practice." + }, + { + "id": 29, + "string": "Last year and the years before, the dataset consisted of two halves, one originating in the source language and one in the target language." + }, + { + "id": 30, + "string": "This however lead to adverse artifacts in MT evaluation." 
+ }, + { + "id": 31, + "string": "System Outputs The results of the Metrics Task are affected by the actual set of MT systems participating in a given translation direction." + }, + { + "id": 32, + "string": "On one hand, if all systems are very close in their translation quality, then even humans will struggle to rank them." + }, + { + "id": 33, + "string": "This in turn will make the task for MT metrics very hard." + }, + { + "id": 34, + "string": "On the other hand, if the task includes a wide range of systems of varying quality, correlating with humans should be generally easier, see Section 6.1 for a discussion on this." + }, + { + "id": 35, + "string": "One can also expect that if the evaluated systems are of different types, they will exhibit different error patterns and various MT metrics can be differently sensitive to these patterns." + }, + { + "id": 36, + "string": "This year, all MT systems included in the Metrics Task come from the News Translation Task (see Findings 2019)." + }, + { + "id": 37, + "string": "There are however still noticeable differences among the various language pairs." + }, + { + "id": 38, + "string": "• Unsupervised MT Systems." + }, + { + "id": 39, + "string": "The German→Czech research systems were trained in an unsupervised fashion, i.e." + }, + { + "id": 40, + "string": "without the access to parallel Czech-German texts (except for a couple of thousand sentences used primarily for validation)." + }, + { + "id": 41, + "string": "We thus expect the research German-Czech systems to be \"more creative\" and depart further away from the references." + }, + { + "id": 42, + "string": "The online systems in this language directions are however standard MT systems so the German-Czech evaluation could be to some extent bimodal." + }, + { + "id": 43, + "string": "• EU Election." + }, + { + "id": 44, + "string": "The French↔German translation was focused on a sub-domain of news, namely texts related EU Election." 
+ }, + { + "id": 45, + "string": "Various MT system developers may have invested more or less time to the domain adaptation." + }, + { + "id": 46, + "string": "• Regular News Tasks Systems." + }, + { + "id": 47, + "string": "These are all the other MT systems in the evaluation; differing in whether they are trained only on WMT provided data (\"Constrained\", or \"Unconstrained\") as in the previous years." + }, + { + "id": 48, + "string": "All the freely available web services (online MT systems) are deemed unconstrained." + }, + { + "id": 49, + "string": "Overall, the results are based on 233 systems across 18 language pairs." + }, + { + "id": 50, + "string": "2 Manual Quality Assessment Direct Assessment (DA, Graham et al., 2013 Graham et al., , 2014a was employed as the source of the \"golden truth\" to evaluate metrics again this year." + }, + { + "id": 51, + "string": "The details of this method of human evaluation are provided in Findings 2019." + }, + { + "id": 52, + "string": "The basis of DA is to collect a large number of quality assessments (a number on a scale of 1-100, i.e." + }, + { + "id": 53, + "string": "effectively a continuous scale) for the outputs of all MT systems." + }, + { + "id": 54, + "string": "These scores are then standardized per annotator." + }, + { + "id": 55, + "string": "In the past years, the underlying manual scores were reference-based (human judges had access to the same reference translation as the MT quality metric)." + }, + { + "id": 56, + "string": "This year, the official WMT19 scores are reference-based (or \"monolingual\") for some language pairs and reference-free (or \"bilingual\") for others." + }, + { + "id": 57, + "string": "3 Due to these different types of golden truth collection, reference-based language pairs are in a closer match with the standard referencebased metrics, while the reference-free language pairs are better fit for the \"QE as a metric\" subtask." 
+ }, + { + "id": 58, + "string": "Note that system-level manual scores are different than those of the segment-level." + }, + { + "id": 59, + "string": "Since for segment-level evaluation, collecting enough DA judgements for each segment is infeasible, so we resort to converting DA judgements to 2 This year, we do not use the artificially constructed \"hybrid systems\" (Graham and Liu, 2016) because the confidence on the ranking of system-level metrics is sufficient even without hybrids." + }, + { + "id": 60, + "string": "3 Specifically, the reference-based language pairs were those where the anticipated translation quality was lower or where the manual judgements were obtained with the help of anonymous crowdsourcing." + }, + { + "id": 61, + "string": "Most of these cases were translations into English (fien, gu-en, kk-en, lt-en, ru-en and zh-en) and then the language pairs not involving English (de-cs, de-fr and fr-de)." + }, + { + "id": 62, + "string": "The reference-less (bilingual) evaluations were those where mainly MT researchers themselves were involved in the annotations: en-cs, en-de, en-fi, en-gu, en-kk, en-lt, en-ru, en-zh." + }, + { + "id": 63, + "string": "golden truth expressed as relative rankings, see Section 2.3.2." + }, + { + "id": 64, + "string": "The exact methods used to calculate correlations of participating metrics with the golden truth are described below, in the two sections for system-level evaluation (Section 5.1) and segment-level evaluation (Section 5.2)." + }, + { + "id": 65, + "string": "System-level Golden Truth: DA For the system-level evaluation, the collected continuous DA scores, standardized for each annotator, are averaged across all assessed segments for each MT system to produce a scalar rating for the system's performance." + }, + { + "id": 66, + "string": "The underlying set of assessed segments is different for each system." 
+ }, + { + "id": 67, + "string": "Thanks to the fact that the system-level DA score is an average over many judgments, mean scores are consistent and have been found to be reproducible (Graham et al., 2013) ." + }, + { + "id": 68, + "string": "For more details see Findings 2019." + }, + { + "id": 69, + "string": "Segment-level Golden Truth: daRR Starting from Bojar et al." + }, + { + "id": 70, + "string": "(2017) , when WMT fully switched to DA, we had to come up with a solid golden standard for segment-level judgements." + }, + { + "id": 71, + "string": "Standard DA scores are reliable only when averaged over sufficient number of judgments." + }, + { + "id": 72, + "string": "4 Fortunately, when we have at least two DA scores for translations of the same source input, it is possible to convert those DA scores into a relative ranking judgement, if the difference in DA scores allows conclusion that one translation is better than the other." + }, + { + "id": 73, + "string": "In the following, we denote these re-interpreted DA judgements as \"daRR\", to distinguish it clearly from the relative ranking (\"RR\") golden truth used in the past years." + }, + { + "id": 74, + "string": "5 Table 1 : Number of judgements for DA converted to daRR data; \"DA>1\" is the number of source input sentences in the manual evaluation where at least two translations of that same source input segment received a DA judgement; \"Ave\" is the average number of translations with at least one DA judgement available for the same source input sentence; \"DA pairs\" is the number of all possible pairs of translations of the same source input resulting from \"DA>1\"; and \"daRR\" is the number of DA pairs with an absolute difference in DA scores greater than the 25 percentage point margin." 
+ }, + { + "id": 75, + "string": "From the complete set of human assessments collected for the News Translation Task, all possible pairs of DA judgements attributed to distinct translations of the same source were converted into daRR better/worse judgements." + }, + { + "id": 76, + "string": "Distinct translations of the same source input whose DA scores fell within 25 percentage points (which could have been deemed equal quality) were omitted from the evaluation of segment-level metrics." + }, + { + "id": 77, + "string": "Conversion of scores in this way produced a large set of daRR judgements for all language pairs, rely on judgements collected from known-reliable volunteers and crowd-sourced workers who passed DA's quality control mechanism." + }, + { + "id": 78, + "string": "Any inconsistency that could arise from reliance on DA judgements collected from low quality crowd-sourcing is thus prevented." + }, + { + "id": 79, + "string": "shown in Table 1 due to combinatorial advantage of extracting daRR judgements from all possible pairs of translations of the same source input." + }, + { + "id": 80, + "string": "We see that only German-French and esp." + }, + { + "id": 81, + "string": "French-German can suffer from insufficient number of these simulated pairwise comparisons." + }, + { + "id": 82, + "string": "The daRR judgements serve as the golden standard for segment-level evaluation in WMT19." + }, + { + "id": 83, + "string": "Baseline Metrics In addition to validating popular metrics, including baselines metrics serves as comparison and prevents \"loss of knowledge\" as mentioned by Bojar et al." + }, + { + "id": 84, + "string": "(2016) ." + }, + { + "id": 85, + "string": "Moses scorer 6 is one of the MT evaluation tools that aggregated several useful metrics over the time." 
+ }, + { + "id": 86, + "string": "Since Macháček and Bojar (2013) , we have been using Moses scorer to provide most of the baseline metrics and kept encouraging authors of well-performing MT metrics to include them in Moses scorer." + }, + { + "id": 87, + "string": "7 The baselines we report are: BLEU and NIST The metrics BLEU (Papineni et al., 2002) and NIST (Doddington, 2002) were computed using mteval-v13a.pl 8 from the OpenMT Evaluation Campaign." + }, + { + "id": 88, + "string": "The tool includes its own tokenization." + }, + { + "id": 89, + "string": "We run mteval with the flag --international-tokenization." + }, + { + "id": 90, + "string": "9 TER, WER, PER and CDER." + }, + { + "id": 91, + "string": "The metrics TER (Snover et al., 2006) , WER, PER and CDER (Leusch et al., 2006) were produced by the Moses scorer, which is used in Moses model optimization." + }, + { + "id": 92, + "string": "We used the standard tokenizer script as available in Moses toolkit for tokenization." + }, + { + "id": 93, + "string": "(Han et al., 2012 (Han et al., , 2013 http://github.com/poethan/LEPOR LEPORb surface linguistic features • ⊘ Dublin City University, ADAPT (Han et al., 2012 (Han et al., , 2013 Table 2 : Participants of WMT19 Metrics Shared Task." + }, + { + "id": 94, + "string": "\"•\" denotes that the metric took part in (some of the language pairs) of the segment-and/or system-level evaluation." + }, + { + "id": 95, + "string": "\"⊘\" indicates that the system-level scores are implied, simply taking arithmetic (macro-)average of segment-level scores." + }, + { + "id": 96, + "string": "\"−\" indicates that the metric didn't participate the track (Seg/Sys-level)." + }, + { + "id": 97, + "string": "A metric is learned if it is trained on a QE or metric evaluation dataset (i.e." + }, + { + "id": 98, + "string": "pretraining or parsers don't count, but training on WMT 2017 metrics task data does)." 
+ }, + { + "id": 99, + "string": "For the baseline metrics available in the Moses toolkit, paths are relative to http://github.com/moses-smt/ mosesdecoder/." + }, + { + "id": 100, + "string": "smoothed version of BLEU for scoring at the segment-level." + }, + { + "id": 101, + "string": "We used the standard tokenizer script as available in Moses toolkit for tokenization." + }, + { + "id": 102, + "string": "chrF and chrF+." + }, + { + "id": 103, + "string": "The metrics chrF and chrF+ (Popović, 2015 (Popović, , 2017 are computed using their original Python implementation, see Table 2 ." + }, + { + "id": 104, + "string": "We ran chrF++.py with the parameters -nw 0 -b 3 to obtain the chrF score and with -nw 1 -b 3 to obtain the chrF+ score." + }, + { + "id": 105, + "string": "Note that chrF intentionally removes all spaces before matching the n-grams, detokenizing the segments but also concatenating words." + }, + { + "id": 106, + "string": "10 sacreBLEU-BLEU and sacreBLEU-chrF." + }, + { + "id": 107, + "string": "The metrics sacreBLEU-BLEU and sacreBLEU-chrF (Post, 2018a) are re-implementation of BLEU and chrF respectively." + }, + { + "id": 108, + "string": "We ran sacreBLEU-chrF with the same parameters as chrF, but their scores are slightly different." + }, + { + "id": 109, + "string": "The signature strings produced by sacreBLEU for BLEU and chrF respectively are BLEU+case.lc+lang.de-en+numrefs.1+ smooth.exp+tok.intl+version.1.3.6 and chrF3+case.mixed+lang.de-en +numchars.6+numrefs.1+space.False+ tok.13a+version.1.3.6." + }, + { + "id": 110, + "string": "The baselines serve in system and segmentlevel evaluations as customary: BLEU, TER, WER, PER, CDER, sacreBLEU-BLEU and sacreBLEU-chrF for system-level only; sentBLEU for segment-level only and chrF for both." + }, + { + "id": 111, + "string": "Chinese word segmentation is unfortunately not supported by the tokenization scripts mentioned above." 
+ }, + { + "id": 112, + "string": "For scoring Chinese with baseline metrics, we thus pre-processed MT outputs and reference translations with the script tokenizeChinese.py 11 by Shujian Huang, which separates Chinese characters from each other and also from non-Chinese parts." + }, + { + "id": 113, + "string": "Table 2 lists the participants of the WMT19 Shared Metrics Task, along with their metrics and links to the source code where available." + }, + { + "id": 114, + "string": "We have collected 24 metrics from a total of 13 research groups, with 10 reference-less \"metrics\" submitted to the joint task \"QE as a Metrich\" with WMT19 Quality Estimation Task." + }, + { + "id": 115, + "string": "Submitted Metrics The rest of this section provides a brief summary of all the metrics that participated." + }, + { + "id": 116, + "string": "BEER BEER (Stanojević and Sima'an, 2015) is a trained evaluation metric with a linear model that combines sub-word feature indicators (character n-grams) and global word order features (skip bigrams) to achieve a language agnostic and fast to compute evaluation metric." + }, + { + "id": 117, + "string": "BEER has participated in previous years of the evaluation task." + }, + { + "id": 118, + "string": "BERTr BERTr (Mathur et al., 2019) uses contextual word embeddings to compare the MT output with the reference translation." + }, + { + "id": 119, + "string": "The BERTr score of a translation is the average recall score over all tokens, using a relaxed version of token matching based on BERT embeddings: namely, computing the maximum cosine similarity between the embedding of a reference token against any token in the MT output." + }, + { + "id": 120, + "string": "BERTr uses bert_base_uncased embeddings for the to-English language pairs, and bert_base_multilingual_cased embeddings for all other language pairs." 
+ }, + { + "id": 121, + "string": "CharacTER CharacTER (Wang et al., 2016b,a) , identical to the 2016 setup, is a character-level metric inspired by the commonly applied translation edit rate (TER)." + }, + { + "id": 122, + "string": "It is defined as the minimum number of character edits required to adjust a hypothesis, until it completely matches the reference, normalized by the length of the hypothesis sentence." + }, + { + "id": 123, + "string": "CharacTER calculates the character-level edit distance while performing the shift edit on word level." + }, + { + "id": 124, + "string": "Unlike the strict matching criterion in TER, a hypothesis word is considered to match a reference word and could be shifted, if the edit dis-tance between them is below a threshold value." + }, + { + "id": 125, + "string": "The Levenshtein distance between the reference and the shifted hypothesis sequence is computed on the character level." + }, + { + "id": 126, + "string": "In addition, the lengths of hypothesis sequences instead of reference sequences are used for normalizing the edit distance, which effectively counters the issue that shorter translations normally achieve lower TER." + }, + { + "id": 127, + "string": "Similarly to other character-level metrics, CharacTER is generally applied to nontokenized outputs and references, which also holds for this year's submission with one exception." + }, + { + "id": 128, + "string": "This year tokenization was carried out for en-ru hypotheses and references before calculating the scores, since this results in large improvements in terms of correlations." + }, + { + "id": 129, + "string": "For other language pairs, no tokenizer was used for pre-processing." + }, + { + "id": 130, + "string": "EED EED (Stanchev et al., 2019 ) is a characterbased metric, which builds upon CDER." 
+ }, + { + "id": 131, + "string": "It is defined as the minimum number of operations of an extension to the conventional edit distance containing a \"jump\" operation." + }, + { + "id": 132, + "string": "The edit distance operations (insertions, deletions and substitutions) are performed at the character level and jumps are performed when a blank space is reached." + }, + { + "id": 133, + "string": "Furthermore, the coverage of multiple characters in the hypothesis is penalised by the introduction of a coverage penalty." + }, + { + "id": 134, + "string": "The sum of the length of the reference and the coverage penalty is used as the normalisation term." + }, + { + "id": 135, + "string": "ESIM Enhanced Sequential Inference Model (ESIM; Chen et al., 2017; Mathur et al., 2019 ) is a neural model proposed for Natural Language Inference that has been adapted for MT evaluation." + }, + { + "id": 136, + "string": "It uses cross-sentence attention and sentence matching heuristics to generate a representation of the translation and the reference, which is fed to a feedforward regressor." + }, + { + "id": 137, + "string": "The metric is trained on singly-annotated Direct Assessment data that has been collected for evaluating WMT systems: all WMT 2018 to-English data for the to-English language pairs, and all WMT 2018 data for all other language pairs." + }, + { + "id": 138, + "string": "hLEPORb_baseline, hLEPORa_baseline The submitted metric hLEPOR_baseline is a metric based on the factor combination of length penalty, precision, recall, and position difference penalty." + }, + { + "id": 139, + "string": "The weighted harmonic mean is applied to group the factors together with tunable weight parameters." + }, + { + "id": 140, + "string": "The systemlevel score is calculated with the same formula but with each factor weighted using weight estimated at system-level and not at segmentlevel." 
+ }, + { + "id": 141, + "string": "In this submitted baseline version, hLE-POR_baseline was not tuned for each language pair separately but the default weights were applied across all submitted language pairs." + }, + { + "id": 142, + "string": "Further improvements can be achieved by tuning the weights according to the development data, adding morphological information and applying n-gram factor scores into it (e.g." + }, + { + "id": 143, + "string": "part-of-speech, n-gram precision and n-gram recall that were added into LEPOR in WMT13.)." + }, + { + "id": 144, + "string": "The basic model factors and further development with parameters setting were described in the paper (Han et al., 2012) and (Han et al., 2013) ." + }, + { + "id": 145, + "string": "For sentence-level score, only hLE-PORa_baseline was submitted with scores calculated as the weighted harmonic mean of all the designed factors using default parameters." + }, + { + "id": 146, + "string": "For system-level score, both hLEPORa_baseline and hLE-PORb_baseline were submitted, where hLEPORa_baseline is the the average score of all sentence-level scores, and hLE-PORb_baseline is calculated via the same sentence-level hLEPOR equation but replacing each factor value with its system-level counterpart." + }, + { + "id": 147, + "string": "PReP PReP (Yoshimura et al., 2019 ) is a method for filtering pseudo-references to achieve a good match with a gold reference." + }, + { + "id": 148, + "string": "At the beginning, the source sentence is translated with some off-the-shelf MT systems to create a set of pseudo-references." + }, + { + "id": 149, + "string": "(Here the MT systems were Google Translate and Microsoft Bing Translator.)" + }, + { + "id": 150, + "string": "The pseudoreferences are then filtered using BERT (Devlin et al., 2019) fine-tuned on the MPRC corpus (Dolan and Brockett, 2005) , estimating the probability of the paraphrase between gold reference and pseudo-references." 
+ }, + { + "id": 151, + "string": "Thanks to the high quality of the underlying MT systems, a large portion of their outputs is indeed considered as a valid paraphrase." + }, + { + "id": 152, + "string": "The final metric score is calculated simply with SentBLEU with these multiple references." + }, + { + "id": 153, + "string": "WMDO WMDO (Chow et al., 2019b ) is a metric based on distance between distributions in the semantic vector space." + }, + { + "id": 154, + "string": "Matching in the semantic space has been investigated for translation evaluation, but the constraints of a translation's word order have not been fully explored." + }, + { + "id": 155, + "string": "Building on the Word Mover's Distance metric and various word embeddings, WMDO introduces a fragmentation penalty to account for fluency of a translation." + }, + { + "id": 156, + "string": "This word order extension is shown to perform better than standard WMD, with promising results against other types of metrics." + }, + { + "id": 157, + "string": "YiSi-0, YiSi-1, YiSi-1_srl, YiSi-2, YiSi-2_srl YiSi (Lo, 2019 ) is a unified semantic MT quality evaluation and estimation metric for languages with different levels of available resources." + }, + { + "id": 158, + "string": "YiSi-1 is a MT evaluation metric that measures the semantic similarity between a machine translation and human references by aggregating the idf-weighted lexical semantic similarities based on the contextual embeddings extracted from BERT and optionally incorporating shallow semantic structures (denoted as YiSi-1_srl)." + }, + { + "id": 159, + "string": "YiSi-0 is the degenerate version of YiSi-1 that is ready-to-deploy to any language." + }, + { + "id": 160, + "string": "It uses longest common character substring to measure the lexical similarity." 
+ }, + { + "id": 161, + "string": "YiSi-2 is the bilingual, reference-less version for MT quality estimation, which uses the contextual embeddings extracted from BERT to evaluate the crosslingual lexical semantic similarity between the input and MT output." + }, + { + "id": 162, + "string": "Like YiSi-1, YiSi-2 can exploit shallow semantic structures as well (denoted as YiSi-2_srl)." + }, + { + "id": 163, + "string": "QE Systems In addition to the submitted standard metrics, 10 quality estimation systems were submitted to the \"QE as a Metric\" track." + }, + { + "id": 164, + "string": "The submitted QE systems are evaluated in the same settings as metrics to facilitate comparison." + }, + { + "id": 165, + "string": "Their descriptions can be found in the Findings of the WMT 2019 Shared Task on Quality Estimation (Fonseca et al., 2019) ." + }, + { + "id": 166, + "string": "Results We discuss system-level results for news task systems in Section 5.1." + }, + { + "id": 167, + "string": "The segment-level results are in Section 5.2." + }, + { + "id": 168, + "string": "System-Level Evaluation As in previous years, we employ the Pearson correlation (r) as the main evaluation measure for system-level metrics." + }, + { + "id": 169, + "string": "The Pearson correlation is as follows: r = ∑ n i=1 (Hi − H)(Mi − M ) √ ∑ n i=1 (Hi − H) 2 √ ∑ n i=1 (Mi − M ) 2 (1) where H i are human assessment scores of all systems in a given translation direction, M i are the corresponding scores as predicted by a given metric." + }, + { + "id": 170, + "string": "H and M are their means, respectively." 
+ }, + { + "id": 171, + "string": "Since some metrics, such as BLEU, aim to achieve a strong positive correlation with human assessment, while error metrics, such as TER, aim for a strong negative correlation we compare metrics via the absolute value |r| of a YiSi.1 Figure 1 : System-level metric significance test results for DA human assessment for into English and out-of English language pairs (newstest2019): Green cells denote a statistically significant increase in correlation with human assessment for the metric in a given row over the metric in a given column according to Williams test." + }, + { + "id": 172, + "string": "− − 0.487 − − ibm1-pos4gram 0.339 − − − − − − LASIM 0.247 − − − − 0.310 − LP 0.474 − − − − 0.488 − UNI 0.846 0.930 − − − 0.805 − UNI+ 0.850 0.924 − − − 0.808 − YiSi-2 0.796 0.642 0.566 0.324 0.442 0.339 0.940 YiSi-2_srl 0.804 − − − − − 0.947 newstest2019 − − 0.810 − − ibm1-pos4gram − 0.393 − − − − − − LASIM − 0.871 − − − − 0.823 − LP − 0.569 − − − − 0.661 − UNI 0.028 0.841 0.907 − − − 0.919 − UNI+ − − − − − − 0.918 − USFD − 0.224 − − − − 0.857 − USFD-TL − 0.091 − − − − 0.771 − YiSi-2 0. given metric's correlation with human assessment." + }, + { + "id": 173, + "string": "System-Level Results Tables 3, 4 and 5 provide the system-level correlations of metrics evaluating translation of newstest2019." + }, + { + "id": 174, + "string": "The underlying texts are part of the WMT19 News Translation test set (new-stest2019) and the underlying MT systems are all MT systems participating in the WMT19 News Translation Task." + }, + { + "id": 175, + "string": "As recommended by Graham and Baldwin (2014), we employ Williams significance test (Williams, 1959) to identify differences in correlation that are statistically significant." + }, + { + "id": 176, + "string": "Williams test is a test of significance of a difference in dependent correlations and therefore suitable for evaluation of metrics." 
+ }, + { + "id": 177, + "string": "Correlations not significantly outperformed by any other metric for the given language pair are highlighted in bold in Tables 3, 4 and 5." + }, + { + "id": 178, + "string": "Since pairwise comparisons of metrics may be also of interest, e.g." + }, + { + "id": 179, + "string": "to learn which metrics significantly outperform the most widely employed metric BLEU, we include significance test results for every competing pair of metrics including our baseline metrics in Figure 1 and Figure 2 ." + }, + { + "id": 180, + "string": "This year, the increased number of systems participating in the news tasks has provided a larger sample of system scores for testing metrics." + }, + { + "id": 181, + "string": "Since we already have sufficiently conclusive results on genuine MT systems, we do not need to generate hybrid system results as in Graham and Liu (2016) and past metrics tasks." + }, + { + "id": 182, + "string": "Segment-Level Evaluation Segment-level evaluation relies on the manual judgements collected in the News Translation Task evaluation." + }, + { + "id": 183, + "string": "This year, again we were unable to follow the methodology outlined in Graham et al." + }, + { + "id": 184, + "string": "(2015) for evaluation of segment-level metrics because the sampling of sentences did not provide sufficient number of assessments of the same segment." + }, + { + "id": 185, + "string": "We therefore convert pairs of DA scores for competing translations to daRR better/worse preferences as described in Section 2.3.2." + }, + { + "id": 186, + "string": "We measure the quality of metrics' segmentlevel scores against the daRR golden truth using a Kendall's Tau-like formulation, which is an adaptation of the conventional Kendall's Tau coefficient." + }, + { + "id": 187, + "string": "Since we do not have a total order ranking of all translations, it is not possible to apply conventional Kendall's Tau (Graham et al., 2015) ." 
+ }, + { + "id": 188, + "string": "Our Kendall's Tau-like formulation, τ , is as follows: τ = |Concordant| − |Discordant| |Concordant| + |Discordant| (2) where Concordant is the set of all human comparisons for which a given metric suggests the same order and Discordant is the set of all human comparisons for which a given metric disagrees." + }, + { + "id": 189, + "string": "The formula is not specific with respect to ties, i.e." + }, + { + "id": 190, + "string": "cases where the annotation says that the two outputs are equally good." + }, + { + "id": 191, + "string": "The way in which ties (both in human and metric judgement) were incorporated in computing Kendall τ has changed across the years of WMT Metrics Tasks." + }, + { + "id": 192, + "string": "Here we adopt the version used in WMT17 daRR evaluation." + }, + { + "id": 193, + "string": "For a detailed discussion on other options, see also Macháček and Bojar (2014) ." + }, + { + "id": 194, + "string": "Whether or not a given comparison of a pair of distinct translations of the same source input, s 1 and s 2 , is counted as a concordant (Conc) or disconcordant (Disc) pair is defined by the following matrix: Metric s 1 < s 2 s 1 = s 2 s 1 > s 2 Human s 1 < s 2 Conc Disc Disc s 1 = s 2 − − − s 1 > s 2 Disc Disc Conc In the notation of Macháček and Bojar (2014) , this corresponds to the setup used in WMT12 (with a different underlying method of manual judgements, RR): Metric WMT12 < = > Human < 1 -1 -1 = X X X > -1 -1 1 The key differences between the evaluation used in WMT14-WMT16 and evaluation used in WMT17-WMT19 were (1) the move from RR to daRR and (2) the treatment of ties." + }, + { + "id": 195, + "string": "In the years 2014-2016, ties in metrics scores were not penalized." 
+ }, + { + "id": 196, + "string": "With the move to daRR, where the quality of the two candidate translations Table 6 : Segment-level metric results for to-English language pairs in newstest2019: absolute Kendall's Tau formulation of segment-level metric scores with DA scores; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold." + }, + { + "id": 197, + "string": "Table 7 : Segment-level metric results for out-of-English language pairs in newstest2019: absolute Kendall's Tau formulation of segment-level metric scores with DA scores; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold." + }, + { + "id": 198, + "string": "is deemed substantially different and no ties in human judgements arise, it makes sense to penalize ties in metrics' predictions in order to promote discerning metrics." + }, + { + "id": 199, + "string": "− − 0.069 − − ibm1-pos4gram −0.153 − − − − − − LASIM −0.024 − − − − 0.022 − LP −0.096 − − − − −0.035 − UNI 0.022 0.202 − − − 0.084 − UNI+ 0.015 0.211 − − − 0.089 − YiSi-2 0.068 0.126 −0.001 0.096 0.075 0.053 0.253 YiSi-2_srl 0.068 − − − − − 0.246 newstest2019 ibm1-morpheme −0.135 −0.003 −0.005 − − −0.165 − − ibm1-pos4gram − −0.123 − − − − − − LASIM − 0.147 − − − − −0.24 − LP − −0.119 − − − − −0.158 − UNI 0.060 0.129 0.351 − − − 0.226 − UNI+ − − − − − − 0.222 − USFD − −0.029 − − − − 0.136 − USFD-TL − −0.037 − − − − 0.191 − YiSi-2 0.069 0.212 0.239 0.147 0.187 0.003 −0.155 0.044 YiSi-2_srl − 0.236 − − − − − 0.034 newstest2019 Note that the penalization of ties makes our evaluation asymmetric, dependent on whether the metric predicted the tie for a pair where humans predicted <, or >." + }, + { + "id": 200, + "string": "It is now important to interpret the meaning of the comparison identically for humans and metrics." 
+ }, + { + "id": 201, + "string": "For error metrics, we thus reverse the sign of the metric score prior to the comparison with human scores: higher scores have to indicate better translation quality." + }, + { + "id": 202, + "string": "In WMT19, the original authors did this for CharacTER." + }, + { + "id": 203, + "string": "To summarize, the WMT19 Metrics Task for segment-level evaluation: • ensures that error metrics are first converted to the same orientation as the human judgements, i.e." + }, + { + "id": 204, + "string": "higher score indicating higher translation quality, • excludes all human ties (this is already implied by the construction of daRR from DA judgements), Figure 3 : daRR segment-level metric significance test results for into English and out-of English language pairs (newstest2019): Green cells denote a significant win for the metric in a given row over the metric in a given column according bootstrap resampling." + }, + { + "id": 205, + "string": "Figure 4 : daRR segment-level metric significance test results for German to Czech, German to French and French to German (newstest2019): Green cells denote a significant win for the metric in a given row over the metric in a given column according bootstrap resampling." + }, + { + "id": 206, + "string": "• counts metric's ties as a Discordant pairs." + }, + { + "id": 207, + "string": "We employ bootstrap resampling (Koehn, 2004; Graham et al., 2014b) to estimate confidence intervals for our Kendall's Tau formulation, and metrics with non-overlapping 95% confidence intervals are identified as having statistically significant difference in performance." + }, + { + "id": 208, + "string": "Segment-Level Results Results of the segment-level human evaluation for translations sampled from the News Translation Task are shown in Tables 6, 7 Discussion This year, human data was collected from reference-based evaluations (or \"monolingual\") and reference-free evaluations (or \"bilingual\")." 
+ }, + { + "id": 209, + "string": "The reference-based (monolingual) evaluations were obtained with the help of anonymous crowdsourcing, while the reference-less (bilingual) evaluations were mainly from MT researchers who committed their time contribution to the manual evaluation for each submitted system." + }, + { + "id": 210, + "string": "Stability across MT Systems The observed performance of metrics depends on the underlying texts and systems that participate in the News Translation Task (see Section 2)." + }, + { + "id": 211, + "string": "For the strongest MT systems, distinguishing which system outputs are better is hard, even for human assessors." + }, + { + "id": 212, + "string": "On the other hand, if the systems are spread across a wide performance range, it will be easier for metrics to correlate with human judgements." + }, + { + "id": 213, + "string": "To provide a more reliable view, we created plots of Pearson correlation when the underlying set of MT systems is reduced to top n ones." + }, + { + "id": 214, + "string": "One sample such plot is in Figure 5 , all language pairs and most of the metrics are in Appendix A." + }, + { + "id": 215, + "string": "As the plot documents, the official correlations reported in Tables 3 to 5 can lead to wrong conclusions." + }, + { + "id": 216, + "string": "sacreBLEU-BLEU correlates at .969 when all systems are considered, but as we start considering only the top n systems, the correlation falls relatively quickly." + }, + { + "id": 217, + "string": "With 10 systems, we are below .5 and when only the top 6 or 4 systems are considered, the correlation falls even to the negave values." + }, + { + "id": 218, + "string": "Note that correlations point estimates (the value in the y-axis) become noiser with the decreasing number of the underlying MT systems." 
+ }, + { + "id": 219, + "string": "Figure 6 explains the situation and illus- Top 8 Top 10 Top 12 Top 15 All systems Figure 6 trates the sensitivity of the observed correlations to the exact set of systems." + }, + { + "id": 220, + "string": "On the full set of systems, the single outlier (the worstperforming system called en_de_task) helps to achieve a great positive correlation." + }, + { + "id": 221, + "string": "The majority of MT systems however form a cloud with Pearson correlation around .5 and the top 4 systems actually exhibit a negative correlation of the human score and sacreBLEU-BLEU." + }, + { + "id": 222, + "string": "In Appendix A, baseline metrics are plotted in grey in all the plots, so that their trends can be observed jointly." + }, + { + "id": 223, + "string": "In general, most baselines have similar correlations, as most baselines use similar features (n-gram or word-level features, with the exception of chrF)." + }, + { + "id": 224, + "string": "In a number of language pairs (de-en, de-fr, en-de, en-kk, lten, ru-en, zh-en), baseline correlations tend towards 0 (no correlation) or even negative Pearson correlation." + }, + { + "id": 225, + "string": "For a widely applied metric such as sacreBLEU-BLEU, our analysis reveals weak correlation in comparing top stateof-the-art systems in these language pairs, especially in en-de, de-en, ru-en, and zh-en." + }, + { + "id": 226, + "string": "We will restrict our analysis to those language pairs where the baseline metrics have an obvious downward trend (de-en, de-fr, en-de, en-kk, lt-en, ru-en, zh-en)." + }, + { + "id": 227, + "string": "Examining the topn correlation in the submitted metrics (not including QE systems), most metrics show the same degredation in correlation as the baselines." 
+ }, + { + "id": 228, + "string": "We note BERTr as the one exception consistently degrading less and retaining positive correlation compared to other submitted metrics and baselines, in the language pairs where it participated." + }, + { + "id": 229, + "string": "For QE systems, we noticed that in some instances, QE systems have upward correlation trends when other metrics and baselines have downward trends." + }, + { + "id": 230, + "string": "For instance, LP, UNI, and UNI+ in the de-en language pair, YiSi-2 in en-kk, and UNI and UNI+ in ru-en." + }, + { + "id": 231, + "string": "These results suggest that QE systems such as UNI and UNI+ perform worse on judging systems of wide ranging quality, but better for top performing systems, or perhaps for systems closer in quality." + }, + { + "id": 232, + "string": "If our method of human assessment is sound, we should believe that BLEU, a widely applied metric, is no longer a reliable metric for judging our best systems." + }, + { + "id": 233, + "string": "Future investigations are needed to understand when BLEU applies well, and why BLEU is not effective for output from our state of the art models." + }, + { + "id": 234, + "string": "Metrics and QE systems such as BERTr, ESIM, YiSi that perform well at judging our best systems often use more semantic features compared to our n-gram/char-gram based baselines." + }, + { + "id": 235, + "string": "Future metrics may want to explore a) whether semantic features such as contextual word embeddings are achieving semantic understanding and b) whether semantic understanding is the true source of a metric's performance gains." + }, + { + "id": 236, + "string": "It should be noted that some language pairs do not show the strong degrading pattern with top-n systems this year, for instance en-cs, engu, en-ru, or kk-en." 
+ }, + { + "id": 237, + "string": "English-Chinese is particularly interesting because we see a clear trend towards better correlations as we reduce the set of underlying systems to the top scoring ones." + }, + { + "id": 238, + "string": "Overall Metric Performance System-Level Evaluation In system-level evaluation, the series of YiSi metrics achieve the highest correlations in several language pairs and it is not significantly outperformed by any other metrics (denoted as a \"win\" in the following) for almost all language pairs." + }, + { + "id": 239, + "string": "The new metric ESIM performs best on 5 language languages (18 language pairs) and obtains 11 \"wins\" out of 16 language pairs in which ESIM participated." + }, + { + "id": 240, + "string": "The metric EED performs better for language pairs out-of English and excluding En-glish compared to into-English language pairs, achieving 7 out of 11 \"wins\" there." + }, + { + "id": 241, + "string": "Segment-Level Evaluation For segment-level evaluation, most language pairs are quite discerning, with only one or two metrics taking the \"winner\" position (of not being significantly surpassed by others)." + }, + { + "id": 242, + "string": "Only French-German differs, with all metrics performing similarly except the significantly worse sentBLEU." + }, + { + "id": 243, + "string": "YiSi-1_srl stands out as the \"winner\" for all language pairs in which it participated." + }, + { + "id": 244, + "string": "The excluded language pairs were probably due to the lack of semantic information required by YiSi-1_srl." + }, + { + "id": 245, + "string": "YiSi-1 participated all language pairs and its correlations are comparable with those of YiSi-1_srl." + }, + { + "id": 246, + "string": "ESIM obtain 6 \"winners\" out of all 18 languages pairs." 
+ }, + { + "id": 247, + "string": "Both YiSi and ESIM are based on neural networks (YiSi via word and phrase embeddings, as well as other types of available resources, ESIM via sentence embeddings)." + }, + { + "id": 248, + "string": "This is a confirmation of a trend observed last year." + }, + { + "id": 249, + "string": "QE Systems as Metrics Generally, correlations for the standard reference-based metrics are obviously better than those in \"QE as a Metric\" track, both when using monolingual and bilingual golden truth." + }, + { + "id": 250, + "string": "In system-level evaluation, correlations for \"QE as a Metric\" range from 0.028 to 0.947 across all language pairs and all metrics but they are very unstable." + }, + { + "id": 251, + "string": "Even for a single metric, take UNI for example, the correlations range from 0.028 to 0.930 across language pairs." + }, + { + "id": 252, + "string": "In segment-level evaluation, correlations for QE metrics range from -0.153 to 0.351 across all language pairs and show the same instability across language pairs for a given metric." + }, + { + "id": 253, + "string": "In either case, we do not see any pattern that could explain the behaviour, e.g." + }, + { + "id": 254, + "string": "whether the manual evaluation was monolingual or bilingual, or the characteristics of the given language pair." + }, + { + "id": 255, + "string": "Dependence on Implementation As it already happened in the past, we had multiple implementations for some metrics, BLEU and chrF in particular." + }, + { + "id": 256, + "string": "The detailed configuration of BLEU and sacreBLEU-BLEU differ and hence their scores and correlation results are different." + }, + { + "id": 257, + "string": "chrF and sacreBLEU-chrF use the same parameters and should thus deliver the same scores but we still observe some differences, leading to different correlations." 
+ }, + { + "id": 258, + "string": "For instance for German-French Pearson correlation, chrF obtains 0.931 (no win) but sacreBLEU-chrF reaches 0.952, tying for a win with other metrics." + }, + { + "id": 259, + "string": "We thus fully support the call for clarity by Post (2018b) and invite authors of metrics to include their implementations either in Moses scorer or sacreBLEU to achieve a long-term assessment of their metric." + }, + { + "id": 260, + "string": "Conclusion This paper summarizes the results of WMT19 shared task in machine translation evaluation, the Metrics Shared Task." + }, + { + "id": 261, + "string": "Participating metrics were evaluated in terms of their correlation with human judgement at the level of the whole test set (system-level evaluation), as well as at the level of individual sentences (segment-level evaluation)." + }, + { + "id": 262, + "string": "We reported scores for standard metrics requiring the reference as well as quality estimation systems which took part in the track \"QE as a metric\", joint with the Quality Estimation task." + }, + { + "id": 263, + "string": "For system-level, best metrics reach over 0.95 Pearson correlation or better across several language pairs." + }, + { + "id": 264, + "string": "As expected, QE systems are visibly in all language pairs but they can also reach high system-level correlations, up to .947 (Chinese-English) or .936 (English-German) by YiSi-1_srl or over .9 for multiple language pairs by UNI." + }, + { + "id": 265, + "string": "An important caveat is that the correlations are heavily affected by the underlying set of MT systems." + }, + { + "id": 266, + "string": "We explored this by reducing the set of systems to top-n ones for various ns and found out that for many language pairs, system-level correlations are much worse when based on only the better performing systems." 
+ }, + { + "id": 267, + "string": "With both good and bad MT systems partic-ipating in the news task, the metrics results can be overly optimistic compared to what we get when evaluating state-of-the-art systems." + }, + { + "id": 268, + "string": "In terms of segment-level Kendall's τ results, the standard metrics correlations varied between 0.03 and 0.59, and QE systems obtained even negative correlations." + }, + { + "id": 269, + "string": "The results confirm the observation from the last year, namely metrics based on word or sentence-level embeddings (YiSi and ESIM), achieve the highest performance." + }, + { + "id": 270, + "string": "A Correlations for Top-N Systems" + } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 19 + }, + { + "section": "Task Setup", + "n": "2", + "start": 20, + "end": 24 + }, + { + "section": "Source and Reference Texts", + "n": "2.1", + "start": 25, + "end": 30 + }, + { + "section": "System Outputs", + "n": "2.2", + "start": 31, + "end": 49 + }, + { + "section": "Manual Quality Assessment", + "n": "2.3", + "start": 50, + "end": 64 + }, + { + "section": "System-level Golden Truth: DA", + "n": "2.3.1", + "start": 65, + "end": 68 + }, + { + "section": "Segment-level Golden Truth: daRR", + "n": "2.3.2", + "start": 69, + "end": 82 + }, + { + "section": "Baseline Metrics", + "n": "3", + "start": 83, + "end": 114 + }, + { + "section": "Submitted Metrics", + "n": "4", + "start": 115, + "end": 115 + }, + { + "section": "BEER", + "n": "4.1", + "start": 115, + "end": 117 + }, + { + "section": "BERTr", + "n": "4.2", + "start": 118, + "end": 120 + }, + { + "section": "CharacTER", + "n": "4.3", + "start": 121, + "end": 129 + }, + { + "section": "EED", + "n": "4.4", + "start": 130, + "end": 134 + }, + { + "section": "ESIM", + "n": "4.5", + "start": 135, + "end": 137 + }, + { + "section": "hLEPORb_baseline, hLEPORa_baseline", + "n": "4.6", + "start": 138, + "end": 146 + }, + { + "section": "PReP", + "n": "4.8", 
+ "start": 147, + "end": 152 + }, + { + "section": "WMDO", + "n": "4.9", + "start": 153, + "end": 156 + }, + { + "section": "YiSi-0, YiSi-1, YiSi-1_srl, YiSi-2, YiSi-2_srl", + "n": "4.10", + "start": 157, + "end": 162 + }, + { + "section": "QE Systems", + "n": "4.11", + "start": 163, + "end": 164 + }, + { + "section": "Results", + "n": "5", + "start": 165, + "end": 167 + }, + { + "section": "System-Level Evaluation", + "n": "5.1", + "start": 168, + "end": 172 + }, + { + "section": "System-Level Results", + "n": "5.1.1", + "start": 173, + "end": 181 + }, + { + "section": "Segment-Level Evaluation", + "n": "5.2", + "start": 182, + "end": 205 + }, + { + "section": "Segment-Level Results", + "n": "5.2.1", + "start": 206, + "end": 207 + }, + { + "section": "Discussion", + "n": "6", + "start": 208, + "end": 209 + }, + { + "section": "Stability across MT Systems", + "n": "6.1", + "start": 210, + "end": 237 + }, + { + "section": "System-Level Evaluation", + "n": "6.2.1", + "start": 238, + "end": 240 + }, + { + "section": "Segment-Level Evaluation", + "n": "6.2.2", + "start": 241, + "end": 248 + }, + { + "section": "QE Systems as Metrics", + "n": "6.2.3", + "start": 249, + "end": 254 + }, + { + "section": "Dependence on Implementation", + "n": "6.3", + "start": 255, + "end": 259 + }, + { + "section": "Conclusion", + "n": "7", + "start": 260, + "end": 270 + } + ], + "figures": [ + { + "filename": "../figure/image/1012-Figure1-1.png", + "caption": "Figure 1: System-level metric significance test results for DA human assessment for into English and out-of English language pairs (newstest2019): Green cells denote a statistically significant increase in correlation with human assessment for the metric in a given row over the metric in a given column according to Williams test.", + "page": 10, + "bbox": { + "x1": 115.19999999999999, + "x2": 480.0, + "y1": 70.56, + "y2": 693.12 + } + }, + { + "filename": "../figure/image/1012-Table7-1.png", + "caption": "Table 7: Segment-level 
metric results for out-of-English language pairs in newstest2019: absolute Kendall’s Tau formulation of segment-level metric scores with DA scores; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold.", + "page": 14, + "bbox": { + "x1": 88.8, + "x2": 505.44, + "y1": 62.4, + "y2": 347.03999999999996 + } + }, + { + "filename": "../figure/image/1012-Table8-1.png", + "caption": "Table 8: Segment-level metric results for language pairs not involving English in newstest2019: absolute Kendall’s Tau formulation of segment-level metric scores with DA scores; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold.", + "page": 14, + "bbox": { + "x1": 72.0, + "x2": 292.32, + "y1": 439.68, + "y2": 655.1999999999999 + } + }, + { + "filename": "../figure/image/1012-Table4-1.png", + "caption": "Table 4: Absolute Pearson correlation of out-of-English system-level metrics with DA human assessment in newstest2019; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold.", + "page": 9, + "bbox": { + "x1": 99.84, + "x2": 497.28, + "y1": 203.04, + "y2": 572.16 + } + }, + { + "filename": "../figure/image/1012-Table6-1.png", + "caption": "Table 6: Segment-level metric results for to-English language pairs in newstest2019: absolute Kendall’s Tau formulation of segment-level metric scores with DA scores; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold.", + "page": 13, + "bbox": { + "x1": 85.92, + "x2": 508.32, + "y1": 231.84, + "y2": 543.36 + } + }, + { + "filename": "../figure/image/1012-Table5-1.png", + "caption": "Table 5: Absolute Pearson correlation of system-level metrics for language pairs not involving English with DA human assessment in newstest2019; correlations of metrics not significantly outperformed by any other for that language 
pair are highlighted in bold.", + "page": 12, + "bbox": { + "x1": 192.95999999999998, + "x2": 404.15999999999997, + "y1": 101.75999999999999, + "y2": 400.32 + } + }, + { + "filename": "../figure/image/1012-Figure2-1.png", + "caption": "Figure 2: System-level metric significance test results for DA human assessment in newstest2019 for German to Czech, German to French and French to German; green cells denote a statistically significant increase in correlation with human assessment for the metric in a given row over the metric in a given column according to Williams test.", + "page": 12, + "bbox": { + "x1": 115.19999999999999, + "x2": 480.0, + "y1": 537.12, + "y2": 661.4399999999999 + } + }, + { + "filename": "../figure/image/1012-Table1-1.png", + "caption": "Table 1: Number of judgements for DA converted to daRR data; “DA>1” is the number of source input sentences in the manual evaluation where at least two translations of that same source input segment received a DA judgement; “Ave” is the average number of translations with at least one DA judgement available for the same source input sentence; “DA pairs” is the number of all possible pairs of translations of the same source input resulting from “DA>1”; and “daRR” is the number of DA pairs with an absolute difference in DA scores greater than the 25 percentage point margin.", + "page": 3, + "bbox": { + "x1": 73.92, + "x2": 288.0, + "y1": 62.4, + "y2": 360.0 + } + }, + { + "filename": "../figure/image/1012-Figure4-1.png", + "caption": "Figure 4: daRR segment-level metric significance test results for German to Czech, German to French and French to German (newstest2019): Green cells denote a significant win for the metric in a given row over the metric in a given column according bootstrap resampling.", + "page": 16, + "bbox": { + "x1": 114.72, + "x2": 480.0, + "y1": 62.879999999999995, + "y2": 188.64 + } + }, + { + "filename": "../figure/image/1012-Figure5-1.png", + "caption": "Figure 5: Pearson correlations of 
sacreBLEUBLEU for English-German system-level evaluation for all systems (left) down to only top 4 systems (right). The y-axis spans from -1 to +1, baseline metrics for the language pair in grey.", + "page": 16, + "bbox": { + "x1": 349.91999999999996, + "x2": 482.88, + "y1": 258.71999999999997, + "y2": 345.59999999999997 + } + }, + { + "filename": "../figure/image/1012-Table3-1.png", + "caption": "Table 3: Absolute Pearson correlation of to-English system-level metrics with DA human assessment in newstest2019; correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold.", + "page": 8, + "bbox": { + "x1": 92.64, + "x2": 502.08, + "y1": 188.16, + "y2": 587.04 + } + }, + { + "filename": "../figure/image/1012-Figure3-1.png", + "caption": "Figure 3: daRR segment-level metric significance test results for into English and out-of English language pairs (newstest2019): Green cells denote a significant win for the metric in a given row over the metric in a given column according bootstrap resampling.", + "page": 15, + "bbox": { + "x1": 108.0, + "x2": 486.71999999999997, + "y1": 72.96, + "y2": 702.72 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-17" + }, + { + "slides": { + "0": { + "title": "Motivations", + "text": [ + "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification", + "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo", + "Insufficient or even unavailable training data of emerging classes is a big", + "challenge in real-world text classification.", + "Zero-shot text classification recognising text documents of classes that", + "have never been seen in the learning stage", + "In this paper, we propose a two-phase framework together with data", + "augmentation and feature augmentation to solve this problem." 
+ ], + "page_nums": [ + 1 + ], + "images": [] + }, + "1": { + "title": "Zero shot Text Classification", + "text": [ + "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo", + "Let and be disjoint sets of seen and unseen classes of the classification", + "In the learning stage, a training set is given where", + "is the document containing a sequence of words", + "is the class of", + "In the inference stage, the goal is to predict the class of each document, , in", + "Supportive semantic knowledge is needed to generally infer the features of unseen classes using patterns learned from seen classes." + ], + "page_nums": [ + 3 + ], + "images": [] + }, + "2": { + "title": "Our Proposed Framework Overview", + "text": [ + "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification", + "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo", + "We integrate four kinds of semantic", + "knowledge into our framework:", + "Data augmentation technique helps the classifiers be aware of the existence of unseen classes without accessing their real data. Feature augmentation provides additional information which relates the document and the unseen classes to generalise the zero-shot reasoning." 
+ ], + "page_nums": [ + 4, + 5 + ], + "images": [ + "figure/image/1014-Figure1-1.png", + "figure/image/1014-Figure2-1.png" + ] + }, + "3": { + "title": "Phase 1 Coarse grained Classification", + "text": [ + "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification", + "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo", + "Each seen class has its own CNN text classifier to predict", + "The classifier is trained with all documents of its class in the training set", + "as positive examples and the rest as negative examples.", + "For a test document , this phase computes for every seen", + "If there exists a class such that > , it predicts", + "is a classification threshold for the class , calculated based on the", + "threshold adaptation method from (Shu et al., 2017)" + ], + "page_nums": [ + 6 + ], + "images": [] + }, + "5": { + "title": "Phase 2 Fine grained Classification", + "text": [ + "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification", + "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo", + "The traditional classifier is a multi-class classifier (|| classes) with a softmax", + "output, so it requires only the word embeddings as an input.", + "The zero-shot classifier is a binary classifier with a sigmoid output. It takes a text document and a class as inputs and predicts the confidence" + ], + "page_nums": [ + 8 + ], + "images": [ + "figure/image/1014-Figure1-1.png" + ] + }, + "6": { + "title": "Phase 2 Zero shot Classifier", + "text": [ + "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification", + "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo", + "The zero-shot classifier predicts", + "shows how the word and", + "the class are related considering", + "the relations in a general", + "This classifier is trained with a training data from seen classes only." 
+ ], + "page_nums": [ + 9 + ], + "images": [ + "figure/image/1014-Figure2-1.png" + ] + }, + "8": { + "title": "Experiments", + "text": [ + "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification", + "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo", + "DBpedia ontology : 14 classes", + "20newsgroups : 20 classes" + ], + "page_nums": [ + 11 + ], + "images": [ + "figure/image/1014-Table1-1.png" + ] + }, + "9": { + "title": "An Experiments for Phase 1", + "text": [ + "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification", + "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo", + "Compare with DOC a", + "For seen classes, our", + "DOC on both datasets.", + "improved the accuracy of", + "unseen classes clearly and led to higher overall accuracy in every setting." + ], + "page_nums": [ + 12 + ], + "images": [] + }, + "10": { + "title": "An Experiments for Phase 2", + "text": [ + "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification", + "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo", + "Using only could not find", + "out the correct unseen class", + "accuracy of predicting unseen", + "highest accuracy in all settings." 
+ ], + "page_nums": [ + 13 + ], + "images": [ + "figure/image/1014-Table6-1.png", + "figure/image/1014-Table5-1.png" + ] + }, + "11": { + "title": "An Experiments for the Whole Framework", + "text": [ + "Imperial College Integrating Semantic Knowledge to Tackle Zero-shot Text Classification", + "London Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo", + "Table 2: The accuracy of the whole framework compared with the baselines.", + "Label RNN + FC", + "Unseen / - Similarity RNN (Pushp and 5", + "Dataset rate Yi Count-based (Sappadla Autoencoder Srivastava, CNN + FC Ours" + ], + "page_nums": [ + 14 + ], + "images": [ + "figure/image/1014-Table2-1.png" + ] + }, + "12": { + "title": "Conclusions", + "text": [ + "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification", + "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo", + "To tackle zero-shot text classification, we proposed a novel CNN-based two-", + "phase framework together with data augmentation and feature augmentation.", + "The experiments show that", + "data augmentation improved the accuracy in detecting instances from unseen", + "feature augmentation enabled knowledge transfer from seen to unseen classes", + "our work achieved the highest overall accuracy compared with all the baselines", + "and recent approaches in all settings.", + "multi-label classification with a larger amount of data", + "utilise semantic units defined by linguists in the zero-shot scenario" + ], + "page_nums": [ + 15 + ], + "images": [] + } + }, + "paper_title": "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification", + "paper_id": "1014", + "paper": { + "title": "Integrating Semantic Knowledge to Tackle Zero-shot Text Classification", + "abstract": "Insufficient or even unavailable training data of emerging classes is a big challenge of many classification tasks, including text classification. 
Recognising text documents of classes that have never been seen in the learning stage, so-called zero-shot text classification, is therefore difficult and only limited previous works tackled this problem. In this paper, we propose a two-phase framework together with data augmentation and feature augmentation to solve this problem. Four kinds of semantic knowledge (word embeddings, class descriptions, class hierarchy, and a general knowledge graph) are incorporated into the proposed framework to deal with instances of unseen classes effectively. Experimental results show that each and the combination of the two phases achieve the best overall accuracy compared with baselines and recent approaches in classifying real-world texts under the zeroshot scenario. * Piyawat Lertvittayakumjorn and Jingqing Zhang contributed equally to this project.", + "text": [ + { + "id": 0, + "string": "Introduction As one of the most fundamental problems in machine learning, automatic classification has been widely studied in several domains." + }, + { + "id": 1, + "string": "However, many approaches, proven to be effective in traditional classification tasks, cannot catch up with a dynamic and open environment where new classes can emerge after the learning stage (Romera-Paredes and Torr, 2015) ." + }, + { + "id": 2, + "string": "For example, the number of topics on social media is growing rapidly, and the classification models are required to recognise the text of the new topics using only general information (e.g., descriptions of the topics) since labelled training instances are unfeasible to obtain for each new topic (Lee et al., 2011) ." + }, + { + "id": 3, + "string": "This scenario holds in many real-world domains such as object recognition and medical diagnosis (Xian et al., 2017; World Health Organization, 1996) ." + }, + { + "id": 4, + "string": "Zero-shot learning (ZSL) for text classification aims to classify documents of classes which are absent from the learning stage." 
+ }, + { + "id": 5, + "string": "Although it is challenging for a machine to achieve, humans are able to learn new concepts by transferring knowledge from known to unknown domains based on high-level descriptions and semantic representations (Thrun and Pratt, 1998) ." + }, + { + "id": 6, + "string": "Therefore, without labelled data of unseen classes, a zero-shot learning framework is expected to exploit supportive semantic knowledge (e.g., class descriptions, relations among classes, and external domain knowledge) to generally infer the features of unseen classes using patterns learned from seen classes." + }, + { + "id": 7, + "string": "So far, three main types of semantic knowledge have been employed in general zero-shot scenarios ." + }, + { + "id": 8, + "string": "The most widely used one is semantic attributes of classes such as visual concepts (e.g., colours, shapes) and semantic properties (e.g., behaviours, functions) (Lampert et al., 2009; Zhao et al., 2018) ." + }, + { + "id": 9, + "string": "The second type is concept ontology, including class hierarchy and knowledge graphs, which represents relationships among classes and features Fergus et al., 2010) ." + }, + { + "id": 10, + "string": "The third type is semantic word embeddings which capture implicit relationships between words thanks to a large training text corpus (Socher et al., 2013; Norouzi et al., 2013) ." + }, + { + "id": 11, + "string": "Nonetheless, concerning ZSL in text classification particularly, there are few studies exploiting one of these knowledge types and none has considered the combinations of them (Pushp and Srivastava, 2017; Dauphin et al., 2013) ." + }, + { + "id": 12, + "string": "Moreover, some previous works used different datasets to train and test, but there is similarity between classes in the training and testing set." 
+ }, + { + "id": 13, + "string": "For example, in (Dauphin et al., 2013) , the class \"imdb.com\" in the training set naturally corresponds to the class \"Movies\" in the testing set." + }, + { + "id": 14, + "string": "Hence, these methods are not working under a strict zero-shot scenario." + }, + { + "id": 15, + "string": "To tackle the zero-shot text classification problem, this paper proposes a novel two-phase framework together with data augmentation and feature augmentation (Figure 1 )." + }, + { + "id": 16, + "string": "In addition, four kinds of semantic knowledge including word embeddings, class descriptions, class hierarchy, and a general knowledge graph (ConceptNet) are exploited in the framework to effectively learn the unseen classes." + }, + { + "id": 17, + "string": "Both of the two phases are based on convolutional neural networks (Kim, 2014) ." + }, + { + "id": 18, + "string": "The first phase called coarse-grained classification judges if a document is from seen or unseen classes." + }, + { + "id": 19, + "string": "Then, the second phase, named finegrained classification, finally decides its class." + }, + { + "id": 20, + "string": "Note that all the classifiers in this framework are trained using labelled data of seen classes (and augmented text data) only." + }, + { + "id": 21, + "string": "None of the steps learns from the labelled data of unseen classes." + }, + { + "id": 22, + "string": "The contributions of our work can be summarised as follows." + }, + { + "id": 23, + "string": "• We propose a novel deep learning based twophase framework, including coarse-grained and fine-grained classification, to tackle the zero-shot text classification problem." + }, + { + "id": 24, + "string": "Unlike some previous works, our framework does not require semantic correspondence between classes in a training stage and classes in an inference stage." + }, + { + "id": 25, + "string": "In other words, the seen and unseen classes can be clearly different." 
+ }, + { + "id": 26, + "string": "• We propose a novel data augmentation technique called topic translation to strengthen the capability of our framework to detect documents from unseen classes effectively." + }, + { + "id": 27, + "string": "• We propose a method to perform feature augmentation by using integrated semantic knowledge to transfer the knowledge learned from seen to unseen classes in the zero-shot scenario." + }, + { + "id": 28, + "string": "In the remainder of this paper, we firstly explain our proposed zero-shot text classification framework in section 2." + }, + { + "id": 29, + "string": "Experiments and results, which demonstrate the performance of our framework, are presented in section 3." + }, + { + "id": 30, + "string": "Related works are discussed in section 4." + }, + { + "id": 31, + "string": "Finally, section 5 concludes our work and mentions possible future work." + }, + { + "id": 32, + "string": "Methodology Problem Formulation Let C S and C U be disjoint sets of seen and unseen classes of the classification respectively." + }, + { + "id": 33, + "string": "In the learning stage, a training set {(x 1 , y 1 ), ." + }, + { + "id": 34, + "string": "." + }, + { + "id": 35, + "string": "." + }, + { + "id": 36, + "string": ", (x n , y n )} is given where x i is the i-th document containing a sequence of words [w i 1 , w i 2 , ." + }, + { + "id": 37, + "string": "." + }, + { + "id": 38, + "string": "." + }, + { + "id": 39, + "string": ", w i t ] and y i ∈ C S is the class of x i ." + }, + { + "id": 40, + "string": "In the inference stage, the goal is to predict the class of each document,ŷ i , in a testing set which has the same data format as the training set except that y i comes from C S ∪ C U ." 
+ }, + { + "id": 41, + "string": "Note that (i) every class comes with a class label and a class description ( Figure 2a ); (ii) a class hierarchy showing superclass-subclass relationships is also provided ( Figure 2b) ; (iii) the documents from unseen classes cannot be observed to train the framework." + }, + { + "id": 42, + "string": "Overview and Notations As discussed in the Introduction, our proposed classification framework consists of two phases ( Figure 1 )." + }, + { + "id": 43, + "string": "The first phase, coarse-grained classification, predicts whether an input document comes from seen or unseen classes." + }, + { + "id": 44, + "string": "We also apply a data augmentation technique in this phase to help the classifiers be aware of the existence of unseen classes without accessing their real data." + }, + { + "id": 45, + "string": "Then the second phase, fine-grained classification, finally specifies the class of the input document." + }, + { + "id": 46, + "string": "It uses either a traditional classifier or a zero-shot classifier depending on the coarse-grained prediction given by Phase 1." + }, + { + "id": 47, + "string": "Also, feature augmentation based on semantic knowledge is used to provide additional information which relates the document and the unseen classes to generalise the zero-shot reasoning." + }, + { + "id": 48, + "string": "We use the following notations in Figure 1 and throughout this paper." + }, + { + "id": 49, + "string": "• The list of embeddings of each word in the document x i is denoted by v i w = [v i w 1 , v i w 2 , ." + }, + { + "id": 50, + "string": "." + }, + { + "id": 51, + "string": "." + }, + { + "id": 52, + "string": ", v i wt ]." + }, + { + "id": 53, + "string": "• The embedding of each class label c is denoted by v c , ∀c ∈ C S ∪ C U ." + }, + { + "id": 54, + "string": "It is assumed that each class has a one-word class label." 
+ }, + { + "id": 55, + "string": "If the class label has more than one word, a similar one-word class label is provided to find v c ." + }, + { + "id": 56, + "string": "• As augmented features, the relationship vec-tor v i w j ,c shows the degree of relatedness between the word w j and the class c according to semantic knowledge." + }, + { + "id": 57, + "string": "Hence, the list of relationship vectors between each word in x i and each class c ∈ C S ∪ C U is denoted by v i w,c = [v i w 1 ,c , v i w 2 ,c , ." + }, + { + "id": 58, + "string": "." + }, + { + "id": 59, + "string": "." + }, + { + "id": 60, + "string": ", v i wt,c ]." + }, + { + "id": 61, + "string": "We will explain the construction method in section 2.4.1." + }, + { + "id": 62, + "string": "Phase 1: Coarse-grained Classification Given a document x i , Phase 1 performs a binary classification to decide whetherŷ i ∈ C S orŷ i / ∈ C S ." + }, + { + "id": 63, + "string": "In this phase, each seen class c s ∈ C S has its own CNN classifier (with a subsequent dense layer and a sigmoid output) to predict the confidence that x i comes from the class c s , i.e., p(ŷ i = c s |x i )." + }, + { + "id": 64, + "string": "The classifier uses v i w as an input and it is trained using a binary cross entropy loss with all documents of its class in the training set as positive examples and the rest as negative examples." + }, + { + "id": 65, + "string": "For a test document x i , this phase computes p(ŷ i = c s |x i ) for every seen class c s in C S ." + }, + { + "id": 66, + "string": "If there exists a class c s such that p(ŷ i = c s |x i ) > τ s , it predictsŷ i ∈ C S ; otherwise,ŷ i / ∈ C S ." + }, + { + "id": 67, + "string": "τ s is a classification threshold for the class c s , calculated based on the threshold adaptation method from (Shu et al., 2017) ." 
+ }, + { + "id": 68, + "string": "Data Augmentation During the learning stage, the classifiers in Phase 1 use negative examples solely from seen classes, so they may not be able to differentiate the positive class from unseen classes." + }, + { + "id": 69, + "string": "Hence, when the names of unseen classes are known in the inference stage, we try to introduce them to the classifiers in Phase 1 via augmented data so they can learn to reject the instances likely from unseen classes." + }, + { + "id": 70, + "string": "We do data augmentation by translating a document from its original seen class to a new unseen class using analogy." + }, + { + "id": 71, + "string": "We call this process topic translation." + }, + { + "id": 72, + "string": "In the word level, we translate a word w in a document of class c to a corresponding word w in the context of a target class c by solving an analogy question \"c:w :: c :?\"." + }, + { + "id": 73, + "string": "For example, solving the analogy \"company:firm :: village:?\"" + }, + { + "id": 74, + "string": "via word embeddings , we know that the word \"firm\" in a document of class \"company\" can be translated into the word \"hamlet\" in the context of class \"village\"." + }, + { + "id": 75, + "string": "Our framework adopts the 3COSMUL method by Levy and Goldberg (2014) to solve the analogy question and find candidates of w : w = argmax x∈V cos(x, c ) cos(x, w) cos(x, c) + where V is a vocabulary set and cos(a, b) is a cosine similarity score between the vectors of word a and word b." + }, + { + "id": 76, + "string": "Also, is a small number (i.e., 0.001) added to prevent division by zero." + }, + { + "id": 77, + "string": "In the document level, we follow Algorithm 1 to translate a document of class c into the topic of another class c ." + }, + { + "id": 78, + "string": "To explain, we translate all nouns, verbs, adjectives, and adverbs in the given document to the target class, word-by-word, using the word-level analogy." 
+ }, + { + "id": 79, + "string": "The word to replace must have the same part of speech as the original word and all the replacements in one document are 1-to-1 relations, enforced by replace dict in Algorithm 1." + }, + { + "id": 80, + "string": "With this idea, we can create augmented documents for the unseen classes by topic-translation from the documents of seen classes in the training dataset." + }, + { + "id": 81, + "string": "After that, we can use the augmented documents as additional negative examples for all the CNNs in Phase 1 to make them aware of the tone of unseen classes." + }, + { + "id": 82, + "string": "Phase 2 decides the most appropriate classŷ i for x i using two CNN classifiers: a traditional classifier and a zero-shot classifier as shown in Figure 1 ." + }, + { + "id": 83, + "string": "Ifŷ i ∈ C S predicted by Phase 1, the traditional classifier will finally select a class c s ∈ C S asŷ i ." + }, + { + "id": 84, + "string": "Otherwise, ifŷ i / ∈ C S , the zero-shot classifier will be used to select a class c u ∈ C U asŷ i ." + }, + { + "id": 85, + "string": "The traditional classifier and the zero-shot classifier have an identical CNN-based structure followed by two dense layers but their inputs and outputs are different." + }, + { + "id": 86, + "string": "The traditional classifier is a multi-class classifier (|C S | classes) with a softmax output, so it requires only the word embeddings v i w as an input." + }, + { + "id": 87, + "string": "This classifier is trained using a cross entropy loss with a training dataset whose examples are from seen classes only." + }, + { + "id": 88, + "string": "In contrast, the zero-shot classifier is a binary classifier with a sigmoid output." + }, + { + "id": 89, + "string": "Specifically, it takes a text document x i and a class c as inputs and predicts the confidence p(ŷ i = c|x i )." 
+ }, + { + "id": 90, + "string": "However, in practice, we utilise v i w to represent x i , v c to represent the class c, and also augmented features v i w,c to provide more information on how intimate the connections between words and the class c are." + }, + { + "id": 91, + "string": "Altogether, for each word w j , the classifier receives the concatenation of three vectors (i.e., [v i w j ; v c ; v i w j ,c ]) as an input." + }, + { + "id": 92, + "string": "This classifier is trained using a binary cross entropy loss with a training data from seen classes only, but we expect this classifier to work well on unseen classes thanks to the distinctive patterns of v i w,c in positive examples of every class." + }, + { + "id": 93, + "string": "This is how we transfer knowledge from seen to unseen classes in ZSL." + }, + { + "id": 94, + "string": "Feature Augmentation The relationship vector v w j ,c contains augmented features we input to the zero-shot classifier." + }, + { + "id": 95, + "string": "v w j ,c shows how the word w j and the class c are related considering the relations in a general knowledge graph." + }, + { + "id": 96, + "string": "In this work, we use ConceptNet providing general knowledge of natural language words and phrases (Speer and Havasi, 2013) ." + }, + { + "id": 97, + "string": "A subgraph of ConceptNet is shown in Figure 2c as an illustration." + }, + { + "id": 98, + "string": "Nodes in ConceptNet are words or phrases, while edges connecting two nodes show how they are related either syntactically or semantically." + }, + { + "id": 99, + "string": "We firstly represent a class c as three sets of nodes in ConceptNet by processing the class hierarchy, class label, and class description of c. (1) the class nodes is a set of nodes of the class label c and any tokens inside c if c has more than one word." + }, + { + "id": 100, + "string": "(2) superclass nodes is a set of nodes of all the superclasses of c according to the class hierarchy." 
+ }, + { + "id": 101, + "string": "(3) description nodes is a set of nodes of all nouns in the description of the class c. For example, if c is the class \"Educational Institution\", according to Figure 2a -2b, the three sets of Con-ceptNet nodes for this class are: (1) educational institution, educational, institution (2) organization, agent (3) place, people, ages, education." + }, + { + "id": 102, + "string": "To construct v w j ,c , we consider whether the word w j is connected to the members of the three sets above within K hops by particular types of relations or not 1 ." + }, + { + "id": 103, + "string": "For each of the three sets, we construct a vector with 3K + 1 dimensions." + }, + { + "id": 104, + "string": "• v[0] = 1 if w j is a node in that set; otherwise, v[0] = 0." + }, + { + "id": 105, + "string": "• for k = 0, ." + }, + { + "id": 106, + "string": "." + }, + { + "id": 107, + "string": "." + }, + { + "id": 108, + "string": ", K − 1: v[3k + 1] = 1 if there is a node in the set whose shortest path to w j is k + 1." + }, + { + "id": 109, + "string": "Otherwise, v[3k + 1] = 0." + }, + { + "id": 110, + "string": "-v[3k + 2] is the number of nodes in the set whose shortest path to w j is k + 1." + }, + { + "id": 111, + "string": "-v[3k +3] is v[3k +2 ] divided by the total number of nodes in the set." + }, + { + "id": 112, + "string": "Thus, the vector associated to each set shows how w j is semantically close to that set." + }, + { + "id": 113, + "string": "Finally, we concatenate the constructed vectors from the three sets to become v w j ,c with 3×(3K+1) dimensions." + }, + { + "id": 114, + "string": "Experiments Datasets We used two textual datasets for the experiments." + }, + { + "id": 115, + "string": "The vocabulary size of each dataset was limited by 20,000 most frequent words and all numbers were excluded." 
+ }, + { + "id": 116, + "string": "(1) DBpedia ontology dataset includes 14 non-overlapping classes and textual data collected from Wikipedia." + }, + { + "id": 117, + "string": "Each class has 40,000 training and 5,000 testing samples." + }, + { + "id": 118, + "string": "(2) The 20newsgroups dataset 2 has 20 topics each of which has approximately 1,000 documents." + }, + { + "id": 119, + "string": "70% of the documents of each class were randomly selected for training, and the remaining 30% were used as a testing set." + }, + { + "id": 120, + "string": "Implementation Details 3 In our experiments, two different rates of unseen classes, 50% and 25%, were chosen and the corresponding sizes of C S and C U are shown in Table 1 ." + }, + { + "id": 121, + "string": "For each dataset and each unseen rate, the random 1 In this paper, we only consider the most common types of positive relations which are RelatedTo, IsA, PartOf, and AtLocation." + }, + { + "id": 122, + "string": "They cover ∼60% of all edges in ConceptNet." + }, + { + "id": 123, + "string": "2 http://qwone.com/∼jason/20Newsgroups/ 3 Code: https://github.com/JingqingZ/KG4ZeroShotText." + }, + { + "id": 124, + "string": "selection of (C S , C U ) were repeated ten times and these ten groups were used by all the experiments with this setting for a fair comparison." + }, + { + "id": 125, + "string": "All documents from C U were removed from the training set accordingly." + }, + { + "id": 126, + "string": "Finally, the results from all the ten groups were averaged." + }, + { + "id": 127, + "string": "In Phase 1, the structure of each classifier was identical." + }, + { + "id": 128, + "string": "The CNN layer had three filter sizes [3, 4, 5] with 400 filters for each filter size and the subsequent dense layer had 300 units." + }, + { + "id": 129, + "string": "For data augmentation, we used gensim with an implementation of 3COSMUL (Řehůřek and Sojka, 2010) to solve the word-level analogy (line 5 in Algorithm 1)." 
+ }, + { + "id": 130, + "string": "Also, the numbers of augmented text documents per unseen class for every setting (if used) are indicated in Table 1 ." + }, + { + "id": 131, + "string": "These numbers were set empirically considering the number of available training documents to be translated." + }, + { + "id": 132, + "string": "In Phase 2, the traditional classifier and the zero-shot classifier had the same structure, in which the CNN layer had three filter sizes [2, 4, 8] with 600 filters for each filter size and the two intermediate dense layers had 400 and 100 units respectively." + }, + { + "id": 133, + "string": "For feature augmentation, the maximum path length K in ConceptNet was set to 3 to create the relationship vectors 4 ." + }, + { + "id": 134, + "string": "The DBpedia ontology 5 was used to construct a class hierarchy of the DBpedia dataset." + }, + { + "id": 135, + "string": "The class hierarchy of the 20newsgroups dataset was constructed based on the namespaces initially provided by the dataset." + }, + { + "id": 136, + "string": "Meanwhile, the classes descriptions of both datasets were picked from Macmillan Dictionary 6 as appropriate." + }, + { + "id": 137, + "string": "For both phases, we used 200-dim GloVe vectors 7 for word embeddings v w and v c (Pennington et al., 2014)." + }, + { + "id": 138, + "string": "All the deep neural networks were implemented with TensorLayer (Dong et al., 2017a) and TensorFlow (Abadi et al., 2016) ." + }, + { + "id": 139, + "string": "Baselines and Evaluation Metrics We compared each phase and the overall framework with the following approaches and settings." + }, + { + "id": 140, + "string": "Phase 1: Proposed by (Shu et al., 2017) , DOC is a state-of-the-art open-world text classification approach which classifies a new sample into a seen class or \"reject\" if the sample does not belong to any seen classes." 
+ }, + { + "id": 141, + "string": "The DOC uses a single CNN and a 1-vs-rest sigmoid output layer with threshold adjustment." + }, + { + "id": 142, + "string": "Unlike DOC, the classifiers in the proposed Phase 1 work individually." + }, + { + "id": 143, + "string": "However, for a fair comparison, we used DOC only as a binary classifier in this phase (ŷ i ∈ C S orŷ i / ∈ C S )." + }, + { + "id": 144, + "string": "Phase 2: To see how well the augmented feature v w,c work in ZSL, we ran the zero-shot classifier with different combinations of inputs." + }, + { + "id": 145, + "string": "Particularly, five combinations of v w , v c , and v w,c were tested with documents from unseen classes only (traditional ZSL)." + }, + { + "id": 146, + "string": "The whole framework: (1) Count-based model selected the class whose label appears most frequently in the document asŷ i ." + }, + { + "id": 147, + "string": "(2) Label similarity (Sappadla et al., 2016) is an unsupervised approach which calculates the cosine similarity between the sum of word embeddings of each class label and the sum of word embeddings of every n-gram (n = 1, 2, 3) in the document." + }, + { + "id": 148, + "string": "We adopted this approach to do single-label classification by predicting the class that got the highest similarity score among all classes." + }, + { + "id": 149, + "string": "(3) RNN Au-toEncoder was built based on a Seq2Seq model with LSTM (512 hidden units), and it was trained to encode documents and class labels onto the same latent space." + }, + { + "id": 150, + "string": "The cosine similarity was applied to select a class label closest to the document on the latent space." + }, + { + "id": 151, + "string": "(4) RNN+FC refers to the architecture 2 proposed in (Pushp and Srivastava, 2017) ." + }, + { + "id": 152, + "string": "It used an RNN layer with LSTM (512 hidden units) followed by two dense layers with 400 and 100 units respectively." 
+ }, + { + "id": 153, + "string": "(5) CNN+FC replaced the RNN in the previous model with a CNN, which has the identical structure as the zero-shot classifier in Phase 2." + }, + { + "id": 154, + "string": "Both RNN+FC and CNN+FC predicted the confidence p(ŷ i = c|x i ) given v w and v c ." + }, + { + "id": 155, + "string": "The class with the highest confidence was selected asŷ i ." + }, + { + "id": 156, + "string": "For Phase 1, we used the accuracy for binary classification (y,ŷ i ∈ C S or y,ŷ i / ∈ C S ) as an evaluation metric." + }, + { + "id": 157, + "string": "In contrast, for Phase 2 and the whole framework, we used the multi-class classification accuracy (ŷ i = y i ) as a metric." + }, + { + "id": 158, + "string": "Results and Discussion The evaluation of Phase 1 (coarse-grained classification) checks if each x i was correctly delivered to the right classifier in Phase 2." + }, + { + "id": 159, + "string": "Table 3 shows the performance of Phase 1 with and without augmented data compared with DOC." + }, + { + "id": 160, + "string": "Considering test documents from seen classes only, our framework outperformed DOC on both datasets." + }, + { + "id": 161, + "string": "In addition, the augmented data improved the accuracy of detecting documents from unseen classes clearly and led to higher overall accuracy in every setting." + }, + { + "id": 162, + "string": "Despite no real labelled data from unseen classes, the augmented data generated by topic translation helped Phase 1 better detect documents from unseen classes." + }, + { + "id": 163, + "string": "Table 4 shows some examples of augmented data from the DBpedia dataset." + }, + { + "id": 164, + "string": "Even if they are not completely understandable, they contain the tone of the target classes." 
+ }, + { + "id": 165, + "string": "Although Phase 1 provided confidence scores for all seen classes, we could not use them to predictŷ i directly since the distribution of scores of positive examples from different CNNs are different." + }, + { + "id": 166, + "string": "Figure 3 shows that the distribution of confidence scores of the class \"Artist\" had a noticeably larger variance and was clearly different from the class \"Building\"." + }, + { + "id": 167, + "string": "Hence, even if p(ŷ i = \"Building\"|x i ) > p(ŷ i = \"Artist\"|x i ), we cannot conclude that x i is more likely to come from the class \"Building\"." + }, + { + "id": 168, + "string": "This is why a traditional classifier in Phase 2 is necessary." + }, + { + "id": 169, + "string": "Regarding Phase 2, fine-grained classification is in charge of predictingŷ i and it employs two classifiers which were tested separately." + }, + { + "id": 170, + "string": "Assuming Phase 1 is perfect, the classifiers in Phase 2 should be able to find the right class." + }, + { + "id": 171, + "string": "The purpose of Table 5 is to show that the traditional CNN classifier in Phase 2 was highly accurate." + }, + { + "id": 172, + "string": "Mitra perdulca is a species of sea snail a marine gastropod mollusk in the family Mitridae the miters or miter snails." + }, + { + "id": 173, + "string": "Animal → Plant Arecaceae perdulca is a flowering of port aster a naval mollusk gastropod in the fabaceae Clusiaceae the tiliaceae or rockery amaryllis." + }, + { + "id": 174, + "string": "Animal → Athlete Mira perdulca is a swimmer of sailing sprinter an Olympian limpets gastropod in the basketball Middy the miters or miter skater." + }, + { + "id": 175, + "string": "Table 4 : Examples of augmented data translated from a document of the original class \"Animal\" into two target classes \"Plant\" and \"Athlete\"." 
+ }, + { + "id": 176, + "string": "Besides, given test documents from unseen classes only, the performance of the zero-shot classifier in Phase 2 is shown in Table 6 ." + }, + { + "id": 177, + "string": "Based on the construction method, v w,c quantified the relatedness between words and the class but, unlike v w and v c , it did not include detailed semantic meaning." + }, + { + "id": 178, + "string": "Thus, the classifier using v w,c only could not find out the correct unseen class and neither hand, the combination of [v w ; v c ], which included semantic embeddings of both words and the class label, increased the accuracy of predicting unseen classes clearly." + }, + { + "id": 179, + "string": "However, the zero-shot classifier fed by the combination of all three types of inputs [v w ; v c ; v w,c ] achieved the highest accuracy in all settings." + }, + { + "id": 180, + "string": "It asserts that the integration of semantic knowledge we proposed is an effective means for knowledge transfer from seen to unseen classes in the zero-shot scenario." + }, + { + "id": 181, + "string": "Last but most importantly, we compared the whole framework with four baselines as shown in Table 2 ." + }, + { + "id": 182, + "string": "First, the count-based model is a rulebased model so it failed to predict documents from seen classes accurately and resulted in unpleasant overall results." + }, + { + "id": 183, + "string": "This was similar to the label similarity approach even though it had higher degree of flexibility." + }, + { + "id": 184, + "string": "Next, the RNN Autoencoder was trained without any supervision sinceŷ i was predicted based on the cosine similarity." + }, + { + "id": 185, + "string": "We believe the implicit semantic relatedness between classes caused the failure of the RNN Autoencoder." + }, + { + "id": 186, + "string": "Besides, the CNN+FC and RNN+FC had same inputs and outputs and it was clear that CNN+FC performed better than RNN+FC in the experiment." 
+ }, + { + "id": 187, + "string": "However, neither CNN+FC nor RNN+FC was able to transfer the knowledge learned from seen to unseen classes." + }, + { + "id": 188, + "string": "Finally, our two-phase framework has competitive prediction accuracy on unseen classes while maintaining the accuracy on seen classes." + }, + { + "id": 189, + "string": "This made it achieve the highest overall accuracy on both datasets and both unseen rates." + }, + { + "id": 190, + "string": "In conclusion, by using integrated semantic knowledge, the proposed two-phase framework with data and feature augmentation is a promising step to tackle this challenging zero-shot problem." + }, + { + "id": 191, + "string": "[v w ; v w,c ] and [v c ; v w, Furthermore, another benefit of the framework is high flexibility." + }, + { + "id": 192, + "string": "As the modules in Figure 1 has less coupling to one another, it is flexible to improve or customise each of them." + }, + { + "id": 193, + "string": "For example, we can deploy an advanced language understanding model, e.g., BERT (Devlin et al., 2018) , as a traditional classifier." + }, + { + "id": 194, + "string": "Moreover, we may replace Con-ceptNet with a domain-specific knowledge graph to deal with medical texts." + }, + { + "id": 195, + "string": "Related Work Zero-shot Text Classification There are a few more related works to discuss besides recent approaches we compared with in the experiments (explained in section 3.3)." + }, + { + "id": 196, + "string": "Dauphin et al." + }, + { + "id": 197, + "string": "(2013) predicted semantic utterance of texts by mapping class labels and text samples into the same semantic space and classifying each sample to the closest class label." + }, + { + "id": 198, + "string": "learned the embeddings of classes, documents, and words jointly in the learning stage." 
+ }, + { + "id": 199, + "string": "Hence, it can perform well in domain-specific classification, but this is possible only with a large amount of training data." + }, + { + "id": 200, + "string": "Overall, most of the previous works exploited semantic relationships between classes and documents via embeddings." + }, + { + "id": 201, + "string": "In contrast, our proposed framework leverages not only the word embeddings but also other semantic knowledge." + }, + { + "id": 202, + "string": "While word embeddings are used to solve analogy for data augmentation in Phase 1, the other semantic knowledge sources (in Figure 2 ) are integrated into relationship vectors and used as augmented features in Phase 2." + }, + { + "id": 203, + "string": "Furthermore, our framework does not require any semantic correspondences between seen and unseen classes." + }, + { + "id": 204, + "string": "Data Augmentation in NLP In the face of insufficient data, data augmentation has been widely used to improve generalisation of deep neural networks especially in computer vision (Krizhevsky et al., 2012) and multimodality (Dong et al., 2017b) , but it is still not a common practice in natural language processing." + }, + { + "id": 205, + "string": "Recent works have explored data augmentation in NLP tasks such as machine translation and text classification (Saito et al., 2017; Fadaee et al., 2017; Kobayashi, 2018) , and the algorithms were designed to preserve semantic meaning of an original document by using synonyms (Zhang and Le-Cun, 2015) or adding noises (Xie et al., 2017) , for example." + }, + { + "id": 206, + "string": "In contrast, our proposed data augmentation technique translates a document from one meaning (its original class) to another meaning (an unseen class) by analogy in order to substitute unavailable labelled data of the unseen class." 
+ }, + { + "id": 207, + "string": "Feature Augmentation in NLP Apart from improving classification accuracy, feature augmentation is also used in domain adaptation to transfer knowledge between a source and a target domain (Pan et al., 2010b; Fang and Chiang, 2018; Chen et al., 2018 )." + }, + { + "id": 208, + "string": "An early research paper applying feature augmentation in NLP is Daume III (2007) which targeted domain adaptation on sequence labelling tasks." + }, + { + "id": 209, + "string": "After that, feature augmentation was used in several NLP tasks such as cross-domain sentiment classification (Pan et al., 2010a), multi-domain machine translation (Clark et al., 2012) , semantic argument classification (Batubara et al., 2018) , etc." + }, + { + "id": 210, + "string": "Our work is different from previous works not only that we applied this technique to zero-shot text classification but also that we integrated many types of semantic knowledge to create the augmented features." + }, + { + "id": 211, + "string": "Conclusion and Future Work To tackle zero-shot text classification, we proposed a novel CNN-based two-phase framework together with data augmentation and feature augmentation." + }, + { + "id": 212, + "string": "The experiments show that data augmentation by topic translation improved the accuracy in detecting instances from unseen classes, while feature augmentation enabled knowledge transfer from seen to unseen classes for zero-shot learning." + }, + { + "id": 213, + "string": "Thanks to the framework and the integrated semantic knowledge, our work achieved the highest overall accuracy compared with all the baselines and recent approaches in all settings." + }, + { + "id": 214, + "string": "In the future, we plan to extend our framework to do multi-label classification with a larger amount of data, and also study how semantic units defined by linguists can be used in the zero-shot scenario." 
+ } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 31 + }, + { + "section": "Problem Formulation", + "n": "2.1", + "start": 32, + "end": 41 + }, + { + "section": "Overview and Notations", + "n": "2.2", + "start": 42, + "end": 61 + }, + { + "section": "Phase 1: Coarse-grained Classification", + "n": "2.3", + "start": 62, + "end": 67 + }, + { + "section": "Data Augmentation", + "n": "2.3.1", + "start": 68, + "end": 93 + }, + { + "section": "Feature Augmentation", + "n": "2.4.1", + "start": 94, + "end": 113 + }, + { + "section": "Datasets", + "n": "3.1", + "start": 114, + "end": 119 + }, + { + "section": "Implementation Details 3", + "n": "3.2", + "start": 120, + "end": 138 + }, + { + "section": "Baselines and Evaluation Metrics", + "n": "3.3", + "start": 139, + "end": 157 + }, + { + "section": "Results and Discussion", + "n": "3.4", + "start": 158, + "end": 194 + }, + { + "section": "Zero-shot Text Classification", + "n": "4.1", + "start": 195, + "end": 203 + }, + { + "section": "Data Augmentation in NLP", + "n": "4.2", + "start": 204, + "end": 206 + }, + { + "section": "Feature Augmentation in NLP", + "n": "4.3", + "start": 207, + "end": 210 + }, + { + "section": "Conclusion and Future Work", + "n": "5", + "start": 211, + "end": 214 + } + ], + "figures": [ + { + "filename": "../figure/image/1014-Figure3-1.png", + "caption": "Figure 3: The distributions of confidence scores of positive examples from four seen classes of DBpedia in Phase 1.", + "page": 5, + "bbox": { + "x1": 315.36, + "x2": 517.4399999999999, + "y1": 61.919999999999995, + "y2": 192.0 + } + }, + { + "filename": "../figure/image/1014-Figure1-1.png", + "caption": "Figure 1: The overview of the proposed framework with two phases. The coarse-grained phase judges if an input document xi comes from seen or unseen classes. The fine-grained phase finally decides the class ŷi. 
All notations are defined in section 2.1-2.2.", + "page": 1, + "bbox": { + "x1": 78.72, + "x2": 526.0799999999999, + "y1": 62.879999999999995, + "y2": 213.12 + } + }, + { + "filename": "../figure/image/1014-Table6-1.png", + "caption": "Table 6: The accuracy of the zero-shot classifier in Phase 2 given documents from unseen classes only.", + "page": 6, + "bbox": { + "x1": 306.71999999999997, + "x2": 527.04, + "y1": 351.84, + "y2": 439.2 + } + }, + { + "filename": "../figure/image/1014-Table4-1.png", + "caption": "Table 4: Examples of augmented data translated from a document of the original class “Animal” into two target classes “Plant” and “Athlete”.", + "page": 6, + "bbox": { + "x1": 73.92, + "x2": 289.44, + "y1": 462.71999999999997, + "y2": 575.04 + } + }, + { + "filename": "../figure/image/1014-Table2-1.png", + "caption": "Table 2: The accuracy of the whole framework compared with the baselines.", + "page": 6, + "bbox": { + "x1": 78.72, + "x2": 518.4, + "y1": 62.879999999999995, + "y2": 229.92 + } + }, + { + "filename": "../figure/image/1014-Table3-1.png", + "caption": "Table 3: The accuracy of Phase 1 with and without augmented data compared with DOC .", + "page": 6, + "bbox": { + "x1": 75.84, + "x2": 286.08, + "y1": 270.71999999999997, + "y2": 415.2 + } + }, + { + "filename": "../figure/image/1014-Table5-1.png", + "caption": "Table 5: The accuracy of the traditional classifier in Phase 2 given documents from seen classes only.", + "page": 6, + "bbox": { + "x1": 307.68, + "x2": 525.12, + "y1": 270.71999999999997, + "y2": 305.28 + } + }, + { + "filename": "../figure/image/1014-Figure2-1.png", + "caption": "Figure 2: Illustrations of semantic knowledge integrated into our framework: (a) class labels and class descriptions (b) class hierarchy and (c) a subgraph of the general knowledge graph (ConceptNet).", + "page": 2, + "bbox": { + "x1": 72.96, + "x2": 290.4, + "y1": 63.839999999999996, + "y2": 312.0 + } + }, + { + "filename": 
"../figure/image/1014-Table1-1.png", + "caption": "Table 1: The rates of unseen classes and the numbers of augmented documents (per unseen class) in the experiments", + "page": 4, + "bbox": { + "x1": 306.71999999999997, + "x2": 534.24, + "y1": 591.84, + "y2": 655.1999999999999 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-18" + }, + { + "slides": { + "0": { + "title": "Motivation", + "text": [ + "Most high-performance data-driven models rely on a large amount of labeled training data. However, a model trained on one language usually performs poorly on another language.", + "Extend existing services to more languages:", + "Collect, select, and pre-process data", + "Compile guidelines for new languages", + "Train annotators to qualify for annotation tasks", + "Adjudicate annotations and assess the annotation quality and inter-annotator agreement", + "7,099 languages are spoken today", + "Rapid and low-cost development of capabilities for low-resource languages.", + "Disaster response and recovery" + ], + "page_nums": [ + 1, + 2 + ], + "images": [] + }, + "1": { + "title": "TRANSFER LEARNING and MULTI TASK LEARNING", + "text": [ + "Leverage existing data of related languages and tasks and transfer knowledge to our target task.", + "The Tasman Sea lies between Australia and New Zealand.", + "L'Australie est séparée de l'Asie par les mers d'Arafura et de Timor et de la Nouvelle-Zélande par la mer de Tasman.", + "Multi-task Learning (MTL) is an effective solution for knowledge transfer across tasks.", + "In the context of neural network architectures, we usually perform MTL by sharing parameters across models.", + "Parameter Sharing: When optimizing model A, we update Θ_A", + "and hence Θ_{A,B}. In this way, we can partially train model B as Θ_{A,B} ⊆ Θ_B."
+ ], + "page_nums": [ + 3 + ], + "images": [] + }, + "2": { + "title": "Sequence labeling", + "text": [ + "To illustrate our idea, we take sequence labeling as a case study.", + "In the NLP context, the goal of sequence labeling is to assign a categorical label (e.g., Part-of-speech tag) to each token in a sentence.", + "It underlies a range of fundamental NLP tasks, including POS Tagging, Name Tagging, and Chunking.", + "Koalas are largely sedentary and sleep up to 20 hours a day.", + "NNS VBP RB JJ CC VB IN TO CD NNS DT NN", + "PER NAME TAGGING B-PER E-PER GPE GPE", + "Itamar Rabinovich, who as Israel's ambassador to Washington conducted unfruitful negotiations with", + "Syria, told Israel Radio it looked like Damascus wanted to talk rather than fight.", + "B-, I-, E-, S-: beginning of a mention, inside of a mention, the end of a mention and a single-token mention", + "O: not part of any mention", + "Although we only focus on sequence labeling in this work, our architecture can be adapted for many NLP tasks with slight modification." + ], + "page_nums": [ + 4 + ], + "images": [] + }, + "3": { + "title": "Base model lstm crf chiu and nichols 2016", + "text": [ + "The CRF layer models the dependencies between labels.", + "The linear layer projects hidden states to label space.", + "The Bidirectional LSTM (long-short term memory) processes the input sentence in both directions, encoding each token and its context into a vector", + "Input Sentence Each token in the given sentence is", + "represented as the combination of its word embedding and character feature vector.", + "Features Character-level CNN", + "Word Embedding Character Embedding" + ], + "page_nums": [ + 5 + ], + "images": [] + }, + "4": { + "title": "Previous transfer models for sequence labeling", + "text": [ + "T-A: Cross-domain transfer; T-B: Cross-domain transfer with disparate label sets; T-C: Cross-lingual transfer", + "Yang et al.
(2017) proposed three transfer learning architectures for different use cases.", + "* Above figures are adapted from (Yang et al., 2017)" + ], + "page_nums": [ + 6 + ], + "images": [] + }, + "5": { + "title": "Our model multi lingual multi task architecture", + "text": [ + "combines multi-lingual transfer and multi-task transfer", + "is able to transfer knowledge from multiple sources" + ], + "page_nums": [ + 7 + ], + "images": [ + "figure/image/1017-Figure2-1.png" + ] + }, + "6": { + "title": "Our model multi lingual multi task model", + "text": [ + "Cross-task Transfer POS Tagging Name Tagging", + "Cross-lingual Transfer English Spanish", + "The bidirectional LSTM, character embeddings and character-level networks serve as the basis of the architecture. This level of parameter sharing aims to provide universal word representation and feature extraction capability for all tasks and languages" + ], + "page_nums": [ + 8, + 9 + ], + "images": [] + }, + "7": { + "title": "Our model multi lingual multi task model cross lingual transfer", + "text": [ + "For the same task, most components are shared between languages.", + "Although our architecture does not require aligned cross-lingual word embeddings, we also evaluate it with aligned embeddings generated using MUSE's unsupervised model (Conneau et al., 2017)." + ], + "page_nums": [ + 10 + ], + "images": [ + "figure/image/1017-Figure2-1.png" + ] + }, + "8": { + "title": "Our model multi lingual multi task model linear layer", + "text": [ + "English: improvement, development, payment,", + "French: vraiment, complètement, immédiatement", + "We combine the output of the shared linear layer y^u and the output of the language-specific linear layer y^s using y = g ⊙ y^s + (1 − g) ⊙ y^u,", + "where g = σ(W_g h + b_g). W_g and b_g are optimized during training. h is the LSTM hidden state vector. As W_g is a square matrix, y, y^s, and y^u have the same dimension", + "We add a language-specific linear layer to allow the model to behave differently towards some features for different languages."
+ ], + "page_nums": [ + 11 + ], + "images": [ + "figure/image/1017-Figure2-1.png" + ] + }, + "9": { + "title": "Our model multi lingual multi task model cross task transfer", + "text": [ + "Linear layers and CRF layers are not shared between different tasks.", + "Tasks of the same language use the same embedding matrix: mutually enhance word representations" + ], + "page_nums": [ + 12 + ], + "images": [ + "figure/image/1017-Figure2-1.png" + ] + }, + "10": { + "title": "Alternating training", + "text": [ + "To optimize multiple tasks within one model, we adopt the alternating training approach in (Luong et al., 2016).", + "At each training step, we sample a task d_i with probability: r_i / Σ_j r_j", + "In our experiments, instead of tuning the mixing rate r_i, we estimate it by: r_i = µ_i ζ_i √N_i", + "where µ_i is the task coefficient, ζ_i is the language coefficient, and N_i is the number of training examples. µ_i (or ζ_i) takes the value 1 if the task (or language) of d_i is the same as that of the target task; otherwise it takes the value 0.1." + ], + "page_nums": [ + 13 + ], + "images": [] + }, + "12": { + "title": "Experiments setup", + "text": [ + "50-dimensional pre-trained word embeddings", + "English, Spanish and Dutch: Wikipedia", + "Chechen: TAC KBP 2017 10-Language EDL Pilot Evaluation Source Corpus", + "Cross-lingual word embedding: we aligned mono-lingual pre-trained word embeddings with MUSE", + "50-dimensional randomly initialized character embeddings", + "Optimization: SGD with momentum (), gradient clipping (threshold: 5.0) and exponential learning rate decay.", + "Highway Activation Function SeLU", + "LSTM Hidden State Size"
+ ], + "page_nums": [ + 19, + 20 + ], + "images": [] + }, + "15": { + "title": "Experiments cross task transfer vs cross lingual transfer", + "text": [ + "With 100 Dutch training sentences:", + "The baseline model misses the name", + "The cross-task transfer model finds the name but assigns a wrong tag to Marx.", + "The cross-lingual transfer model correctly identifies the whole name.", + "The task-specific knowledge that B-PER", + "S-PER is an invalid transition will not be learned in the POS Tagging model.", + "The cross-lingual transfer model transfers such knowledge through the shared CRF layer." + ], + "page_nums": [ + 21 + ], + "images": [ + "figure/image/1017-Table5-1.png" + ] + } + }, + "paper_title": "A Multi-lingual Multi-task Architecture for Low-resource Sequence Labeling", + "paper_id": "1017", + "paper": { + "title": "A Multi-lingual Multi-task Architecture for Low-resource Sequence Labeling", + "abstract": "We propose a multi-lingual multi-task architecture to develop supervised models with a minimal amount of labeled data for sequence labeling. In this new architecture, we combine various transfer models using two layers of parameter sharing. On the first layer, we construct the basis of the architecture to provide universal word representation and feature extraction capability for all models. On the second level, we adopt different parameter sharing strategies for different transfer schemes. This architecture proves to be particularly effective for low-resource settings, when there are less than 200 training sentences for the target task. Using Name Tagging as a target task, our approach achieved 4.3%-50.5% absolute Fscore gains compared to the mono-lingual single-task baseline model. 1 #1 [DUTCH]: If a Palestinian State is, however, the first thing the Palestinians will do. 
⋆ [B] Als er een Palestijnse staat komt, is dat echter het eerste wat de Palestijnen zullen doen ⋆ [A] Als er een [S-MISC Palestijnse] staat komt, is dat echter het eerste wat de [S-MISC Palestijnen] zullen doen #2 [DUTCH]: That also frustrates the Muscovites, who still live in the proud capital of Russia but can not look at the soaps that the stupid farmers can see on the outside. ⋆ [B] Ook dat frustreert de Moskovieten , die toch in de fiere hoofdstad van Rusland wonen maar niet naar de soaps kunnen kijken die de domme boeren op de buiten wel kunnen zien ⋆ [A] Ook dat frustreert de [S-MISC Moskovieten] , die toch in de fiere hoofdstad van [S-LOC Rusland] wonen maar niet naar de soaps kunnen kijken die de domme boeren op de buiten wel kunnen zien #3 [DUTCH]: And the PMS centers are merging with the centers for school supervision, the MSTs.", + "text": [ + { + "id": 0, + "string": "Introduction When we use supervised learning to solve Natural Language Processing (NLP) problems, we typically train an individual model for each task with task-specific labeled data." + }, + { + "id": 1, + "string": "However, our target task may be intrinsically linked to other tasks." + }, + { + "id": 2, + "string": "For example, Part-of-speech (POS) tagging and Name Tagging can both be considered as sequence labeling; Machine Translation (MT) and Abstractive Text Summarization both require the ability to understand the source text and generate natural language sentences." + }, + { + "id": 3, + "string": "Therefore, it is valuable to transfer knowledge from related tasks to the target task." + }, + { + "id": 4, + "string": "Multi-task Learning (MTL) is one of * * Part of this work was done when the first author was on an internship at Facebook." + }, + { + "id": 5, + "string": "1 The code of our model is available at https://github." + }, + { + "id": 6, + "string": "com/limteng-rpi/mlmt the most effective solutions for knowledge transfer across tasks." 
+ }, + { + "id": 7, + "string": "In the context of neural network architectures, we usually perform MTL by sharing parameters across models (Ruder, 2017)." + }, + { + "id": 8, + "string": "Previous studies (Collobert and Weston, 2008; Dong et al., 2015; Luong et al., 2016; Liu et al., 2018; Yang et al., 2017) have proven that MTL is an effective approach to boost the performance of related tasks such as MT and parsing." + }, + { + "id": 9, + "string": "However, most of these previous efforts focused on tasks and languages which have sufficient labeled data but hit a performance ceiling on each task alone." + }, + { + "id": 10, + "string": "Most NLP tasks, including some well-studied ones such as POS tagging, still suffer from the lack of training data for many low-resource languages." + }, + { + "id": 11, + "string": "According to Ethnologue 2, there are 7,099 living languages in the world." + }, + { + "id": 12, + "string": "It is an unattainable goal to annotate data in all languages, especially for tasks with complicated annotation requirements." + }, + { + "id": 13, + "string": "Furthermore, some special applications (e.g., disaster response and recovery) require rapid development of NLP systems for extremely low-resource languages." + }, + { + "id": 14, + "string": "Therefore, in this paper, we concentrate on enhancing supervised models in low-resource settings by borrowing knowledge learned from related high-resource languages and tasks." + }, + { + "id": 15, + "string": "In (Yang et al., 2017), the authors simulated a low-resource setting for English and Spanish by downsampling the training data for the target task." + }, + { + "id": 16, + "string": "However, for most low-resource languages, the data sparsity problem also lies in related tasks and languages." + }, + { + "id": 17, + "string": "Under such circumstances, a single transfer model can only bring limited improvement."
+ }, + { + "id": 18, + "string": "To tackle this issue, we propose a multi-lingual multi-task architecture which combines different transfer models within a unified architecture through two levels of parameter sharing." + }, + { + "id": 19, + "string": "In the first level, we share character embeddings, character-level convolutional neural networks, and word-level long-short term memory layer across all models." + }, + { + "id": 20, + "string": "These components serve as a basis to connect multiple models and transfer universal knowledge among them." + }, + { + "id": 21, + "string": "In the second level, we adopt different sharing strategies for different transfer schemes." + }, + { + "id": 22, + "string": "For example, we use the same output layer for all Name Tagging tasks to share task-specific knowledge (e.g., I-PER 3 should not be assigned to the first word in a sentence)." + }, + { + "id": 23, + "string": "To illustrate our idea, we take sequence labeling as a case study." + }, + { + "id": 24, + "string": "In the NLP context, the goal of sequence labeling is to assign a categorical label (e.g., POS tag) to each token in a sentence." + }, + { + "id": 25, + "string": "It underlies a range of fundamental NLP tasks, including POS Tagging, Name Tagging, and chunking." + }, + { + "id": 26, + "string": "Experiments show that our model can effectively transfer various types of knowledge from different auxiliary tasks and obtains up to 50.5% absolute F-score gains on Name Tagging compared to the mono-lingual single-task baseline." + }, + { + "id": 27, + "string": "Additionally, our approach does not rely on a large amount of auxiliary task data to achieve the improvement." + }, + { + "id": 28, + "string": "Using merely 1% auxiliary data, we already obtain up to 9.7% absolute gains in Fscore." + }, + { + "id": 29, + "string": "Model Basic Architecture The goal of sequence labeling is to assign a categorical label to each token in a given sentence." 
+ }, + { + "id": 30, + "string": "Though traditional methods such as Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs) (Lafferty et al., 2001; Ratinov and Roth, 2009; Passos et al., 2014) achieved high performance on sequence labeling tasks, they typically relied on hand-crafted features; therefore, it is difficult to adapt them to new tasks or languages." + }, + { + "id": 31, + "string": "To avoid task-specific engineering, (Collobert et al., 2011) proposed a feed-forward neural network model that only requires word embeddings trained on a large scale corpus as features." + }, + { + "id": 32, + "string": "After that, several neural models based on the combination of long-short term memory (LSTM) and CRFs (Ma and Hovy, 2016; Lample et al., 2016; Chiu and Nichols, 2016) were proposed and" + }, + { + "id": 33, + "string": "We adopt the BIOES annotation scheme. Prefixes B-, I-, E-, and S- represent the beginning of a mention, inside of a mention, the end of a mention and a single-token mention respectively." + }, + { + "id": 34, + "string": "The O tag is assigned to a word which is not part of any mention." + }, + { + "id": 35, + "string": "achieved better performance on sequence labeling tasks." + }, + { + "id": 36, + "string": "Figure 1: LSTM-CNNs: an LSTM-CRFs-based model for Sequence Labeling. LSTM-CRFs-based models are well-suited for multi-lingual multi-task learning for three reasons: (1) They learn features from word and character embeddings and therefore require little feature engineering; (2) As the input and output of each layer in a neural network are abstracted as vectors, it is fairly straightforward to share components between neural models; (3) Character embeddings can serve as a bridge to transfer morphological and semantic information between languages with identical or similar scripts, without requiring cross-lingual dictionaries or parallel sentences."
+ }, + { + "id": 37, + "string": "Therefore, we design our multi-task multi-lingual architecture based on the LSTM-CNNs model proposed in (Chiu and Nichols, 2016)." + }, + { + "id": 38, + "string": "The overall framework is illustrated in Figure 1." + }, + { + "id": 39, + "string": "First, each word w_i is represented as the combination x_i of two parts, word embedding and character feature vector, which is extracted from character embeddings of the characters in w_i using convolutional neural networks (CharCNN)." + }, + { + "id": 40, + "string": "On top of that, a bidirectional LSTM processes the sequence x = {x_1, x_2, ...} in both directions and encodes each word and its context into a fixed-size vector h_i." + }, + { + "id": 41, + "string": "Next, a linear layer converts h_i to a score vector y_i, in which each component represents the predicted score of a target tag." + }, + { + "id": 42, + "string": "In order to model correlations between tags, a CRFs layer is added at the top to generate the best tagging path for the whole sequence." + }, + { + "id": 43, + "string": "In the CRFs layer, given an input sentence x of length L and the output of the linear layer y, the score of a sequence of tags z is defined as: S(x, y, z) = Σ_{t=1}^{L} (A_{z_{t−1}, z_t} + y_{t, z_t}), where A is a transition matrix in which A_{p,q} represents the binary score of transitioning from tag p to tag q, and y_{t,z} represents the unary score of assigning tag z to the t-th word." + }, + { + "id": 44, + "string": "Given the ground truth sequence of tags z, we maximize the following objective function during the training phase: O = log P(z|x) = S(x, y, z) − log Σ_{z̃∈Z} e^{S(x, y, z̃)}, where Z is the set of all possible tagging paths." + }, + { + "id": 45, + "string": "We emphasize that our actual implementation differs slightly from the LSTM-CNNs model." + }, + { + "id": 46, + "string": "We do not use additional word- and character-level explicit symbolic features (e.g., capitalization and lexicon) as they may require additional language-specific knowledge." + }, + { + "id": 47, + "string": "Additionally, we transform character feature vectors using highway networks (Srivastava et al., 2015), which is reported to enhance the overall performance by (Kim et al., 2016) and (Liu et al., 2018)." + }, + { + "id": 48, + "string": "Highway networks are a type of neural network that can smoothly switch its behavior between transforming and carrying information." + }, + { + "id": 49, + "string": "Multi-task Multi-lingual Architecture MTL can be employed to improve performance on multiple tasks at the same time, such as MT and parsing in (Luong et al., 2016)." + }, + { + "id": 50, + "string": "However, in our scenario, we only focus on enhancing the performance of a low-resource task, which is our target task or main task." + }, + { + "id": 51, + "string": "Our proposed architecture aims to transfer knowledge from a set of auxiliary tasks to the main task." + }, + { + "id": 52, + "string": "For simplicity, we refer to a model of a main (auxiliary) task as a main (auxiliary) model." + }, + { + "id": 53, + "string": "To jointly train multiple models, we perform multi-task learning using parameter sharing." + }, + { + "id": 54, + "string": "Let Θ_i be the set of parameters for model m_i and Θ_{i,j} = Θ_i ∩ Θ_j be the shared parameters between m_i and m_j." + }, + { + "id": 55, + "string": "When optimizing model m_i, we update Θ_i and hence Θ_{i,j}." + }, + { + "id": 56, + "string": "In this way, we can partially train model m_j as Θ_{i,j} ⊆ Θ_j." + }, + { + "id": 57, + "string": "Previously, each MTL model generally used a single transfer scheme." + }, + { + "id": 58, + "string": "In order to merge different transfer models into a unified architecture, we employ two levels of parameter sharing as follows."
+ }, + { + "id": 59, + "string": "On the first level, we construct the basis of the architecture by sharing character embeddings, CharCNN and bidirectional LSTM among all models." + }, + { + "id": 60, + "string": "This level of parameter sharing aims to provide universal word representation and feature extraction capability for all tasks and languages." + }, + { + "id": 61, + "string": "Character Embeddings and Character-level CNNs." + }, + { + "id": 62, + "string": "Character features can represent morphological and semantic information; e.g., the English morpheme dis- usually indicates negation and reversal as in "disagree" and "disapproval"." + }, + { + "id": 63, + "string": "For low-resource languages that lack sufficient data to train high-quality word embeddings, character embeddings learned from other languages may provide crucial information for labeling, especially for rare and out-of-vocabulary words." + }, + { + "id": 64, + "string": "Take the English word "overflying" (flying over) as an example." + }, + { + "id": 65, + "string": "Even if it is rare or absent in the corpus, we can still infer the word meaning from its prefix over- (above), root fly, and suffix -ing (present participle form)." + }, + { + "id": 66, + "string": "In our architecture, we share character embeddings and the CharCNN between languages with identical or similar scripts to enhance word representation for low-resource languages." + }, + { + "id": 67, + "string": "Bidirectional LSTM." + }, + { + "id": 68, + "string": "The bidirectional LSTM layer is essential to extract character, word, and contextual information from a sentence." + }, + { + "id": 69, + "string": "However, with a large number of parameters, it cannot be fully trained only using the low-resource task data." + }, + { + "id": 70, + "string": "To tackle this issue, we share the bidirectional LSTM layer across all models."
+ }, + { + "id": 71, + "string": "Bear in mind that because our architecture does not require aligned cross-lingual word embeddings, sharing this layer across languages may confuse the model as it equally handles embeddings in different spaces." + }, + { + "id": 72, + "string": "Nevertheless, under low-resource circumstances, data sparsity is the most critical factor that affects the performance." + }, + { + "id": 73, + "string": "On top of this basis, we adopt different parameter sharing strategies for different transfer schemes." + }, + { + "id": 74, + "string": "For cross-task transfer, we use the same word embedding matrix across tasks so that they can mutually enhance word representations." + }, + { + "id": 75, + "string": "For cross-lingual transfer, we share the linear layer and CRFs layer among languages to transfer task-specific knowledge, such as the transition score between two tags." + }, + { + "id": 76, + "string": "Word Embeddings." + }, + { + "id": 77, + "string": "For most words, in addition to character embeddings, word embeddings are still crucial to represent semantic information. (Figure 2: Multi-task Multi-lingual Architecture)" + }, + { + "id": 78, + "string": "We use the same word embedding matrix for tasks in the same language." + }, + { + "id": 79, + "string": "The matrix is initialized with pre-trained embeddings and optimized as parameters during training." + }, + { + "id": 80, + "string": "Thus, task-specific knowledge can be encoded into the word embeddings by one task and subsequently utilized by another one." + }, + { + "id": 81, + "string": "For a low-resource language even without sufficient raw text, we mix its data with a related high-resource language to train word embeddings." + }, + { + "id": 82, + "string": "In this way, we merge both corpora and hence their vocabularies." + }, + { + "id": 83, + "string": "Recently, Conneau et al."
+ }, + { + "id": 84, + "string": "(2017) proposed a domain-adversarial method to align two monolingual word embedding matrices without cross-lingual supervision such as a bilingual dictionary." + }, + { + "id": 85, + "string": "Although cross-lingual word embeddings are not required, we evaluate our framework with aligned embeddings generated using this method." + }, + { + "id": 86, + "string": "Experimental results show that the incorporation of cross-lingual embeddings substantially boosts the performance under low-resource settings." + }, + { + "id": 87, + "string": "Linear Layer and CRFs." + }, + { + "id": 88, + "string": "As the tag set varies from task to task, the linear layer and CRFs can only be shared across languages." + }, + { + "id": 89, + "string": "We share these layers to transfer task-specific knowledge to the main model." + }, + { + "id": 90, + "string": "For example, our model corrects [S-PER Charles] [S-PER Picqué] to [B-PER Charles] [E-PER Picqué] because the CRFs layer fully trained on other languages assigns a low score to the rare transition S-PER→S-PER and promotes B-PER→E-PER." + }, + { + "id": 91, + "string": "In addition to the shared linear layer, we add an unshared language-specific linear layer to allow the model to behave differently toward some features for different languages." + }, + { + "id": 92, + "string": "For example, the suffix -ment usually indicates nouns in English whereas it indicates adverbs in French." + }, + { + "id": 93, + "string": "We combine the output of the shared linear layer y^u and the output of the language-specific linear layer y^s using: y = g ⊙ y^s + (1 − g) ⊙ y^u, where g = σ(W_g h + b_g)." + }, + { + "id": 94, + "string": "W_g and b_g are optimized during training." + }, + { + "id": 95, + "string": "h is the LSTM hidden state vector." + }, + { + "id": 96, + "string": "As W_g is a square matrix, y, y^s, and y^u have the same dimension." + }, + { + "id": 97, + "string": "Although we only focus on sequence labeling in this work, our architecture can be adapted for many NLP tasks with slight modification." + }, + { + "id": 98, + "string": "For example, for text classification tasks, we can take the last hidden state of the forward LSTM as the sentence representation and replace the CRFs layer with a Softmax layer." + }, + { + "id": 99, + "string": "In our model, each task has a separate objective function." + }, + { + "id": 100, + "string": "To optimize multiple tasks within one model, we adopt the alternating training approach in (Luong et al., 2016)." + }, + { + "id": 101, + "string": "At each training step, we sample a task d_i with probability r_i / Σ_j r_j, where r_i is the mixing rate value assigned to d_i." + }, + { + "id": 102, + "string": "In our experiments, instead of tuning r_i, we estimate it by: r_i = µ_i ζ_i √N_i, where µ_i is the task coefficient, ζ_i is the language coefficient, and N_i is the number of training examples." + }, + { + "id": 103, + "string": "µ_i (or ζ_i) takes the value 1 if the task (or language) of d_i is the same as that of the target task; otherwise it takes the value 0.1." + }, + { + "id": 104, + "string": "For example, given English Name Tagging as the target task, the task coefficient µ and language coefficient ζ of Spanish Name Tagging are 0.1 and 1 respectively." + }, + { + "id": 105, + "string": "While assigning lower mixing rate values to auxiliary tasks, this formula also takes the amount of data into consideration." + }, + { + "id": 106, + "string": "Thus, auxiliary tasks receive higher probabilities to reduce overfitting when we have a smaller amount of main task data."
+ }, + { + "id": 107, + "string": "Experiments Data Sets For Name Tagging, we use the following data sets: Dutch (NLD) and Spanish (ESP) data from the CoNLL 2002 shared task (Tjong Kim Sang, 2002) , English (ENG) data from the CoNLL 2003 shared task (Tjong Kim Sang and De Meulder, 2003) , Russian (RUS) data from LDC2016E95 (Russian Representative Language Pack), and Chechen (CHE) data from TAC KBP 2017 10-Language EDL Pilot Evaluation Source Corpus 4 ." + }, + { + "id": 108, + "string": "We select Chechen as another target language in addition to Dutch and Spanish because it is a truly under-resourced language and its related language, Russian, also lacks NLP resources." + }, + { + "id": 109, + "string": "For POS Tagging, we use English, Dutch, Spanish, and Russian data from the CoNLL 2017 shared task (Zeman et al., 2017; Nivre et al., 2017) ." + }, + { + "id": 110, + "string": "In this data set, each token is annotated with two POS tags, UPOS (universal POS tag) and XPOS (language-specific POS tag)." + }, + { + "id": 111, + "string": "We use UPOS because it is consistent throughout all languages." + }, + { + "id": 112, + "string": "Experimental Setup We use 50-dimensional pre-trained word embeddings and 50-dimensional randomly initialized character embeddings." + }, + { + "id": 113, + "string": "We train word embeddings using the word2vec package 5 ." + }, + { + "id": 114, + "string": "English, Span-ish, and Dutch embeddings are trained on corresponding Wikipedia articles (2017-12-20 dumps) ." + }, + { + "id": 115, + "string": "Russian embeddings are trained on documents in LDC2016E95." + }, + { + "id": 116, + "string": "Chechen embeddings are trained on documents in TAC KBP 2017 10-Language EDL Pilot Evaluation Source Corpus." + }, + { + "id": 117, + "string": "To learn a mapping between mono-lingual word embeddings and obtain cross-lingual embeddings, we use the unsupervised model in the MUSE library 6 (Conneau et al., 2017) ." 
+ }, + { + "id": 118, + "string": "Although word embeddings are fine-tuned during training, we update the embedding matrix in a sparse way and thus do not have to update a large number of parameters." + }, + { + "id": 119, + "string": "We optimize parameters using Stochastic Gradient Descent with momentum, gradient clipping and exponential learning rate decay." + }, + { + "id": 120, + "string": "At step t, the learning rate α_t is updated using α_t = α_0 * ρ^(t/T) , where α_0 is the initial learning rate, ρ is the decay rate, and T is the decay step." + }, + { + "id": 121, + "string": "To reduce overfitting, we apply Dropout (Srivastava et al., 2014) to the output of the LSTM layer." + }, + { + "id": 122, + "string": "We conduct hyper-parameter optimization by exploring the space of parameters shown in Table 2 using random search (Bergstra and Bengio, 2012) ." + }, + { + "id": 123, + "string": "Due to time constraints, we only perform parameter sweeping on the Dutch Name Tagging task with 200 training examples." + }, + { + "id": 124, + "string": "We select the set of parameters that achieves the best performance on the development set and apply it to all models." + }, + { + "id": 125, + "string": "Comparison of Different Models In Figures 3, 4, and 5, we compare our model with the mono-lingual single-task LSTM-CNNs model (denoted as baseline), cross-task transfer model, and cross-lingual transfer model in low-resource settings with Dutch, Spanish, and Chechen Name Tagging as the main task respectively." + }, + { + "id": 126, + "string": "We use English as the related language for Dutch and Spanish, and use Russian as the related language for Chechen." + }, + { + "id": 127, + "string": "For cross-task transfer, we take POS Tagging as the auxiliary task." + }, + { + "id": 128, + "string": "Because the CoNLL 2017 data does not include Chechen, we only use Russian POS Tagging and Russian Name Tagging as auxiliary tasks for Chechen Name Tagging."
+ }, + { + "id": 129, + "string": "We take Name Tagging as the target task for three reasons: (1) POS Tagging has a much lower requirement for the amount of training data." + }, + { + "id": 130, + "string": "For example, using only 10 training sentences, our baseline model achieves 75.5% and 82.9% prediction accuracy on Dutch and Spanish; (2) Compared to POS Tagging, Name Tagging has been considered a more challenging task; (3) Existing POS Tagging resources are relatively richer than Name Tagging ones; e.g., the CoNLL 2017 data set provides POS Tagging training data for 45 languages." + }, + { + "id": 131, + "string": "Name Tagging also has a higher annotation cost as its annotation guidelines are usually more complicated." + }, + { + "id": 132, + "string": "We can see that our model substantially outperforms the mono-lingual single-task baseline model and obtains visible gains over single transfer models." + }, + { + "id": 133, + "string": "When trained with fewer than 50 main task training sentences, cross-lingual transfer consistently surpasses cross-task transfer, which is not surprising because in the latter scheme, the linear layer and CRFs layer of the main model are not shared with other models and thus cannot be fully trained with little data." + }, + { + "id": 134, + "string": "Because there are only 20,400 sentences in Chechen documents, we also experiment with the data augmentation method described in Section 2.2 by training word embeddings on a mixture of Russian and Chechen data." + }, + { + "id": 135, + "string": "This method yields an additional 3.5%-10.0% absolute F-score gain." + }, + { + "id": 136, + "string": "We also experiment with transferring from English to Chechen." + }, + { + "id": 137, + "string": "Because Chechen uses the Cyrillic alphabet, we convert its data set to Latin script."
+ }, + { + "id": 138, + "string": "Surprisingly, although these two languages are not close, we get more improvement by using English as the auxiliary language." + }, + { + "id": 139, + "string": "In Table 3 , we compare our model with state-of-the-art models using all Dutch or Spanish Name Tagging data." + }, + { + "id": 140, + "string": "Results show that although we design this architecture for low-resource settings, it also achieves good performance in high-resource settings." + }, + { + "id": 141, + "string": "In this experiment, with sufficient training data for the target task, we perform another round of parameter sweeping." + }, + { + "id": 142, + "string": "We increase the embedding sizes and LSTM hidden state size to 100 and 225 respectively." + }, + { + "id": 143, + "string": "Qualitative Analysis In Table 4 , we compare Name Tagging results from the baseline model and our model, both trained with 100 main task sentences." + }, + { + "id": 144, + "string": "The first three examples show that shared character-level networks can transfer different levels of morphological and semantic information." + }, + { + "id": 145, + "string": "Table 3 : Comparison with state-of-the-art models." + }, + { + "id": 146, + "string": "In example #1, the baseline model fails to identify \"Palestijnen\", an unseen word in the Dutch data, while our model can recognize it because the shared CharCNN represents it in a way similar to its corresponding English word \"Palestinians\", which occurs 20 times." + }, + { + "id": 147, + "string": "In addition to mentions, the shared CharCNN can also improve representations of context words, such as \"staat\" (state) in the example." + }, + { + "id": 148, + "string": "For some words dissimilar to corresponding English words, the CharCNN may enhance their word representations by transferring morpheme-level knowledge."
+ }, + { + "id": 149, + "string": "For example, in sentence #2, our model is able to identify \"Rusland\" (Russia) as the suffix -land is usually associated with location names in the English data; e.g., Finland." + }, + { + "id": 150, + "string": "Furthermore, the CharCNN is capable of capturing some word-level patterns, such as capitalized hyphenated compounds and acronyms, as example #3 shows." + }, + { + "id": 151, + "string": "In this sentence, neither \"PMScentra\" nor \"MST\" can be found in auxiliary task data, while we observe a number of similar expressions, such as American-style and LDP." + }, + { + "id": 152, + "string": "The transferred knowledge also helps reduce overfitting." + }, + { + "id": 153, + "string": "For example, in sentence #4, the baseline model mistakenly tags \"sección\" (section) and \"consellería\" (department) as organizations because their capitalized forms usually appear in Spanish organization names." + }, + { + "id": 154, + "string": "With knowledge learned in auxiliary tasks that a lowercased word is rarely tagged as a proper noun, our model is able to avoid overfitting and correct these errors." + }, + { + "id": 155, + "string": "Sentence #5 shows an opposite situation, where the capitalized word \"campesinos\" (farm worker) never appears in Spanish names." + }, + { + "id": 156, + "string": "In Table 5 , we show differences between cross-lingual transfer and cross-task transfer." + }, + { + "id": 157, + "string": "Although the cross-task transfer model recognizes \"Ingeborg Marx\" missed by the baseline model, it mistakenly assigns an S-PER tag to \"Marx\"." + }, + { + "id": 158, + "string": "Instead, from English Name Tagging, the cross-lingual transfer model borrows task-specific knowledge through the shared CRFs layer that (1) B-PER→S-PER is an invalid transition, and (2) even if we assign S-PER to \"Ingeborg\", it is rare to have continuous person names without any conjunction or punctuation."
+ }, + { + "id": 159, + "string": "Thus, the cross-lingual model promotes the sequence B-PER→E-PER." + }, + { + "id": 160, + "string": "In Figure 6 , we depict the change of tag distribution with the number of training sentences." + }, + { + "id": 161, + "string": "When trained with fewer than 100 sentences, the baseline model only correctly predicts a few tags dominated by frequent types." + }, + { + "id": 162, + "string": "By contrast, our model has a visibly higher recall and better predicts infrequent tags, which can be attributed to the implicit data augmentation and inductive bias introduced by MTL (Ruder, 2017) ." + }, + { + "id": 163, + "string": "For example, if all location names in the Dutch training data are single-token ones, the baseline model will inevitably overfit to the tag S-LOC and possibly label \"Caldera de Taburiente\" as [S-LOC Caldera] [S-LOC de] [S-LOC Taburiente], whereas with the shared CRFs layer fully trained on English Name Tagging, our model prefers B-LOC→I-LOC→E-LOC, which receives a higher transition score." + }, + { + "id": 164, + "string": "Ablation Studies In order to quantify the contributions of individual components, we conduct ablation studies on Dutch Name Tagging with different numbers of training sentences for the target task." + }, + { + "id": 165, + "string": "For the basic model, we use separate LSTM layers and remove the character embeddings, highway networks, language-specific layer, and Dropout layer." + }, + { + "id": 166, + "string": "As Table 6 shows, adding each component usually enhances the performance (F-score, %), while the impact also depends on the size of the target task data." + }, + { + "id": 167, + "string": "For example, the language-specific layer slightly impairs the performance with only 10 training sentences." + }, + { + "id": 168, + "string": "However, this is unsurprising as it introduces additional parameters that are only trained by the target task data."
+ }, + { + "id": 169, + "string": "Table 6 : Performance comparison between models with different components (C: character embedding; L: shared LSTM; S: language-specific layer; H: highway networks; D: dropout)." + }, + { + "id": 170, + "string": "Effect of the Amount of Auxiliary Task Data For many low-resource languages, their related languages are also low-resource." + }, + { + "id": 171, + "string": "To evaluate our model's sensitivity to the amount of auxiliary task data, we fix the size of main task data and downsample all auxiliary task data with sample rates from 1% to 50%." + }, + { + "id": 172, + "string": "As Figure 7 shows, the performance goes up when we raise the sample rate from 1% to 20%." + }, + { + "id": 173, + "string": "However, we do not observe significant improvement when we further increase the sample rate." + }, + { + "id": 174, + "string": "By comparing scores in Figure 3 and Figure 7 , we can see that using only 1% auxiliary data, our model already obtains 3.7%-9.7% absolute F-score gains." + }, + { + "id": 175, + "string": "Due to space limitations, we only show curves for Dutch Name Tagging, while we observe similar results on other tasks." + }, + { + "id": 176, + "string": "Therefore, we may conclude that our model does not heavily rely on the amount of auxiliary task data." + }, + { + "id": 177, + "string": "Related Work Multi-task Learning has been applied in different NLP areas, such as machine translation (Luong et al., 2016; Dong et al., 2015; Domhan and Hieber, 2017 ), text classification (Liu et al., 2017) , dependency parsing , textual entailment (Hashimoto et al., 2017) , text summarization (Isonuma et al., 2017) and sequence labeling (Collobert and Weston, 2008; Søgaard and Goldberg, 2016; Rei, 2017; Peng and Dredze, 2017; Yang et al., 2017; von Däniken and Cieliebak, 2017; Aguilar et al., 2017; Liu et al., 2018) . Collobert and Weston (2008) is an early attempt that applies MTL to sequence labeling."
+ }, + { + "id": 178, + "string": "The authors train a CNN model jointly on POS Tagging, Semantic Role Labeling, Name Tagging, chunking, and language modeling using parameter sharing." + }, + { + "id": 179, + "string": "Instead of using other sequence labeling tasks, Rei (2017) and Liu et al." + }, + { + "id": 180, + "string": "(2018) take language modeling as the secondary training objective to extract semantic and syntactic knowledge from large-scale raw text without additional supervision." + }, + { + "id": 181, + "string": "In (Yang et al., 2017) , the authors propose three transfer models for cross-domain, cross-application, and cross-lingual transfer for sequence labeling, and also simulate a low-resource setting by downsampling the training data." + }, + { + "id": 182, + "string": "By contrast, we combine cross-task transfer and cross-lingual transfer within a unified architecture to transfer different types of knowledge from multiple auxiliary tasks simultaneously." + }, + { + "id": 183, + "string": "In addition, because our model is designed for low-resource settings, we share components among models in a different way (e.g., the LSTM layer is shared across all models)." + }, + { + "id": 184, + "string": "Differing from most MTL models, which perform supervision for all tasks on the outermost layer, (Søgaard and Goldberg, 2016) proposes an MTL model which supervises tasks at different levels." + }, + { + "id": 185, + "string": "It shows that supervising low-level tasks such as POS Tagging at a lower layer obtains better performance." + }, + { + "id": 186, + "string": "Conclusions and Future Work We design a multi-lingual multi-task architecture for low-resource settings." + }, + { + "id": 187, + "string": "We evaluate the model on sequence labeling tasks with three language pairs." + }, + { + "id": 188, + "string": "Experiments show that our model can effectively transfer different types of knowledge to improve the main model."
+ }, + { + "id": 189, + "string": "It substantially outperforms the mono-lingual single-task baseline model, cross-lingual transfer model, and cross-task transfer model." + }, + { + "id": 190, + "string": "The next step of this research is to apply this architecture to other types of tasks, such as Event Extraction and Semantic Role Labeling that involve structure prediction." + }, + { + "id": 191, + "string": "We also plan to explore the possibility of integrating incremental learning into this architecture to adapt a trained model for new tasks rapidly." + } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 28 + }, + { + "section": "Basic Architecture", + "n": "2.1", + "start": 29, + "end": 48 + }, + { + "section": "Multi-task Multi-lingual Architecture", + "n": "2.2", + "start": 49, + "end": 106 + }, + { + "section": "Data Sets", + "n": "3.1", + "start": 107, + "end": 111 + }, + { + "section": "Experimental Setup", + "n": "3.2", + "start": 112, + "end": 124 + }, + { + "section": "Comparison of Different Models", + "n": "3.3", + "start": 125, + "end": 142 + }, + { + "section": "Qualitative Analysis", + "n": "3.4", + "start": 143, + "end": 163 + }, + { + "section": "Ablation Studies", + "n": "3.5", + "start": 164, + "end": 169 + }, + { + "section": "Effect of the Amount of Auxiliary Task Data", + "n": "3.6", + "start": 170, + "end": 176 + }, + { + "section": "Related Work", + "n": "4", + "start": 177, + "end": 185 + }, + { + "section": "Conclusions and Future Work", + "n": "5", + "start": 186, + "end": 191 + } + ], + "figures": [ + { + "filename": "../figure/image/1017-Figure4-1.png", + "caption": "Figure 4: Performance on Spanish Name Tagging.", + "page": 5, + "bbox": { + "x1": 308.64, + "x2": 526.56, + "y1": 282.71999999999997, + "y2": 423.35999999999996 + } + }, + { + "filename": "../figure/image/1017-Figure5-1.png", + "caption": "Figure 5: Performance on Chechen Name Tagging.", + "page": 5, + "bbox": { + "x1": 308.64, + "x2": 
525.6, + "y1": 463.68, + "y2": 603.36 + } + }, + { + "filename": "../figure/image/1017-Figure3-1.png", + "caption": "Figure 3: Performance on Dutch Name Tagging. We scale the horizontal axis to show more details under 100 sentences. Our Model*: our model with MUSE cross-lingual embeddings.", + "page": 5, + "bbox": { + "x1": 308.15999999999997, + "x2": 525.6, + "y1": 62.4, + "y2": 202.56 + } + }, + { + "filename": "../figure/image/1017-Figure1-1.png", + "caption": "Figure 1: LSTM-CNNs: an LSTM-CRFs-based model for Sequence Labeling", + "page": 1, + "bbox": { + "x1": 307.68, + "x2": 525.12, + "y1": 101.75999999999999, + "y2": 264.96 + } + }, + { + "filename": "../figure/image/1017-Table3-1.png", + "caption": "Table 3: Comparison with state-of-the-art models.", + "page": 6, + "bbox": { + "x1": 72.0, + "x2": 291.36, + "y1": 61.44, + "y2": 244.32 + } + }, + { + "filename": "../figure/image/1017-Figure6-1.png", + "caption": "Figure 6: The distribution of correctly predicted tags on Dutch Name Tagging. 
The height of each stack indicates the number of a certain tag.", + "page": 6, + "bbox": { + "x1": 308.15999999999997, + "x2": 524.64, + "y1": 466.56, + "y2": 604.8 + } + }, + { + "filename": "../figure/image/1017-Table6-1.png", + "caption": "Table 6: Performance comparison between models with different components (C: character embedding; L: shared LSTM; S: language-specific layer; H: highway networks; D: dropout).", + "page": 7, + "bbox": { + "x1": 306.71999999999997, + "x2": 526.0799999999999, + "y1": 471.84, + "y2": 557.28 + } + }, + { + "filename": "../figure/image/1017-Table4-1.png", + "caption": "Table 4: Name Tagging results, each of which contains an English translation, result of the baseline", + "page": 7, + "bbox": { + "x1": 72.0, + "x2": 526.0799999999999, + "y1": 62.4, + "y2": 379.2 + } + }, + { + "filename": "../figure/image/1017-Table5-1.png", + "caption": "Table 5: Comparing cross-task transfer and crosslingual transfer on Dutch Name Tagging with 100 training sentences.", + "page": 7, + "bbox": { + "x1": 72.0, + "x2": 291.36, + "y1": 432.47999999999996, + "y2": 585.12 + } + }, + { + "filename": "../figure/image/1017-Figure2-1.png", + "caption": "Figure 2: Multi-task Multi-lingual Architecture", + "page": 3, + "bbox": { + "x1": 72.0, + "x2": 524.16, + "y1": 62.879999999999995, + "y2": 271.2 + } + }, + { + "filename": "../figure/image/1017-Figure7-1.png", + "caption": "Figure 7: The effect of the amount of auxiliary task data on Dutch Name Tagging.", + "page": 8, + "bbox": { + "x1": 72.48, + "x2": 289.44, + "y1": 210.72, + "y2": 337.91999999999996 + } + }, + { + "filename": "../figure/image/1017-Table1-1.png", + "caption": "Table 1: Name Tagging data set statistics: #token and #name (between parentheses).", + "page": 4, + "bbox": { + "x1": 72.0, + "x2": 291.36, + "y1": 446.88, + "y2": 514.0799999999999 + } + }, + { + "filename": "../figure/image/1017-Table2-1.png", + "caption": "Table 2: Hyper-parameter search space.", + "page": 4, + "bbox": { + "x1": 
306.71999999999997, + "x2": 526.0799999999999, + "y1": 464.64, + "y2": 563.04 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-19" + }, + { + "slides": { + "2": { + "title": "Discourse Marker", + "text": [ + "A discourse marker is a word or a phrase that plays a role in managing the flow and structure of discourse.", + "Examples: so, because, and, but, or" + ], + "page_nums": [ + 5 + ], + "images": [] + }, + "3": { + "title": "Discourse Marker and NLI", + "text": [ + "But Because If Although And So" + ], + "page_nums": [ + 6 + ], + "images": [] + }, + "4": { + "title": "Related Works", + "text": [ + "SOTA Neural Network Models", + "Transfer Learning for NLI" + ], + "page_nums": [ + 7, + 8 + ], + "images": [] + }, + "5": { + "title": "Discourse Marker Prediction DMP", + "text": [ + "Its rainy outside But + We will not take the umbrella", + "(S1, S2) Neural Networks M", + "Max pooling over all the hidden states Prediction" + ], + "page_nums": [ + 9, + 10 + ], + "images": [] + }, + "10": { + "title": "Experiments Analysis", + "text": [ + "Premise: 3 young man in hoods standing in the middle of a quiet street facing the camera. Hypothesis: Three people sit by a busy street bare-headed." + ], + "page_nums": [ + 19, + 20 + ], + "images": [ + "figure/image/1023-Table5-1.png" + ] + } + }, + "paper_title": "Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference", + "paper_id": "1023", + "paper": { + "title": "Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference", + "abstract": "Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is one of the most important problems in natural language processing. It requires to infer the logical relationship between two given sentences. 
While current approaches mostly focus on the interaction architectures of the sentences, in this paper, we propose to transfer knowledge from some important discourse markers to augment the quality of the NLI model. We observe that people usually use some discourse markers such as \"so\" or \"but\" to represent the logical relationship between two sentences. These words potentially have deep connections with the meanings of the sentences, and thus can be utilized to help improve their representations. Moreover, we use reinforcement learning to optimize a new objective function with a reward defined by the property of the NLI datasets to make full use of the labels' information. Experiments show that our method achieves the state-of-the-art performance on several large-scale datasets. 1 Here sentences mean either the whole sentences or the main clauses of a compound sentence.", + "text": [ + { + "id": 0, + "string": "Introduction In this paper, we focus on the task of Natural Language Inference (NLI), which is known as a significant yet challenging task for natural language understanding." + }, + { + "id": 1, + "string": "In this task, we are given two sentences which are respectively called premise and hypothesis." + }, + { + "id": 2, + "string": "The goal is to determine whether the logical relationship between them is entailment, neutral, or contradiction." + }, + { + "id": 3, + "string": "Recently, performance on NLI (Chen et al., 2017b; Gong et al., 2018; Chen et al., 2017c ) Premise: A soccer game with multiple males playing." + }, + { + "id": 4, + "string": "Hypothesis: Some men are playing a sport." + }, + { + "id": 5, + "string": "Label: Entailment Premise: An older and younger man smiling." + }, + { + "id": 6, + "string": "Hypothesis: Two men are smiling and laughing at the cats playing on the floor."
+ }, + { + "id": 7, + "string": "Label: Neutral Premise: A black race car starts up in front of a crowd of people Hypothesis: A man is driving down a lonely road." + }, + { + "id": 8, + "string": "Label: Contradiction has been significantly boosted since the release of some high quality large-scale benchmark datasets such as SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2017) ." + }, + { + "id": 9, + "string": "Table 1 shows some examples in SNLI." + }, + { + "id": 10, + "string": "Most state-of-the-art works focus on the interaction architectures between the premise and the hypothesis, while they rarely concern the discourse relations of the sentences, which is a core issue in natural language understanding." + }, + { + "id": 11, + "string": "People usually use a certain set of words to express the discourse relation between two sentences 1 ." + }, + { + "id": 12, + "string": "These words, such as \"but\" or \"and\", are denoted as discourse markers." + }, + { + "id": 13, + "string": "These discourse markers have deep connections with the intrinsic relations of two sentences and intuitively correspond to the intent of NLI, such as \"but\" to \"contradiction\", \"so\" to \"entailment\", etc." + }, + { + "id": 14, + "string": "Very few NLI works utilize this information revealed by discourse markers." + }, + { + "id": 15, + "string": "proposed to use discourse markers to help represent the meanings of the sentences." + }, + { + "id": 16, + "string": "However, they represent each sentence by a single vector and directly concatenate them to predict the answer, which is too simple and not ideal for the large-scale datasets." + }, + { + "id": 17, + "string": "In this paper, we propose a Discourse Marker Augmented Network for natural language inference, where we transfer the knowledge from the existing supervised task: Discourse Marker Prediction (DMP) , to an integrated NLI model."
+ }, + { + "id": 18, + "string": "We first propose a sentence encoder model that learns the representations of the sentences from the DMP task and then inject the encoder into the NLI network." + }, + { + "id": 19, + "string": "Moreover, because our NLI datasets are manually annotated, each example from the datasets might get several different labels from the annotators although they will finally come to a consensus and also provide a certain label." + }, + { + "id": 20, + "string": "Considering that the different confidence levels of the final labels should be distinguished, we employ reinforcement learning with a reward defined by the uniformity extent of the original labels to train the model." + }, + { + "id": 21, + "string": "The contributions of this paper can be summarized as follows." + }, + { + "id": 22, + "string": "• Unlike previous studies, we solve the task of natural language inference via transferring knowledge from another supervised task." + }, + { + "id": 23, + "string": "We propose the Discourse Marker Augmented Network to combine the learned encoder of the sentences with the integrated NLI model." + }, + { + "id": 24, + "string": "• According to the property of the datasets, we incorporate reinforcement learning to optimize a new objective function to make full use of the labels' information." + }, + { + "id": 25, + "string": "• We conduct extensive experiments on two large-scale datasets to show that our method achieves better performance than other state-of-the-art solutions to the problem." + }, + { + "id": 26, + "string": "Task Description Natural Language Inference (NLI) In the natural language inference tasks, we are given a pair of sentences (P, H), which respectively denote the premise and the hypothesis."
+ }, + { + "id": 27, + "string": "Our goal is to judge the logical relationship between their meanings by picking a label from a small set: entailment (The hypothesis is definitely a true description of the premise), neutral (The hypothesis might be a true description of the premise), and contradiction (The hypothesis is definitely a false description of the premise)." + }, + { + "id": 28, + "string": "Discourse Marker Prediction (DMP) For DMP, we are given a pair of sentences (S_1 , S_2 ), which is originally the first half and second half of a complete sentence." + }, + { + "id": 29, + "string": "The model must predict which discourse marker was used by the author to link the two ideas from a set of candidates." + }, + { + "id": 30, + "string": "Sentence Encoder Model Following , we use BookCorpus as our training data for discourse marker prediction, which is a dataset of text from unpublished novels, and it is large enough to avoid bias towards any particular domain or application." + }, + { + "id": 31, + "string": "After preprocessing, we obtain a dataset with the form (S_1 , S_2 , m), which means the first half sentence, the last half sentence, and the discourse marker that connected them in the original text." + }, + { + "id": 32, + "string": "Our goal is to predict the m given S_1 and S_2 ." + }, + { + "id": 33, + "string": "We first use Glove (Pennington et al., 2014) to transform {S_t}_{t=1}^2 into vectors word by word and subsequently input them to a bi-directional LSTM: →h^i_t = →LSTM(Glove(S^i_t)), i = 1, ..., |S_t| ; ←h^i_t = ←LSTM(Glove(S^i_t)), i = |S_t|, ..., 1 (1) where Glove(w) is the embedding vector of the word w from the Glove lookup table, |S_t| is the length of the sentence S_t ."
+ }, + { + "id": 34, + "string": "We apply max pooling on the concatenation of the hidden states from both directions, which provides regularization and shorter back-propagation paths (Collobert and Weston, 2008) , to extract the features of the whole sequences of vectors: →r_t = Max_dim([→h^1_t ; →h^2_t ; ... ; →h^{|S_t|}_t]) ; ←r_t = Max_dim([←h^1_t ; ←h^2_t ; ... ; ←h^{|S_t|}_t]) (2) where Max_dim means that the max pooling is performed across each dimension of the concatenated vectors, [; ] denotes concatenation." + }, + { + "id": 35, + "string": "Moreover, we combine the last hidden state from both directions and the results of max pooling to represent our sentences: where r_t is the representation vector of the sentence S_t ." + }, + { + "id": 36, + "string": "To predict the discourse marker between S_1 and S_2 , we combine their representations with some linear operation: r_t = [→r_t ; ←r_t ; →h^{|S_t|}_t ; ←h^1_t] (3) r = [r_1 ; r_2 ; r_1 + r_2 ; r_1 ⊙ r_2] (4) where ⊙ denotes elementwise product." + }, + { + "id": 37, + "string": "Finally we project r to a vector of label size (the total number of discourse markers in the dataset) and use the softmax function to normalize the probability distribution." + }, + { + "id": 38, + "string": "Discourse Marker Augmented Network As presented in Figure 1 , we show how our Discourse Marker Augmented Network incorporates the learned encoder into the NLI model." + }, + { + "id": 39, + "string": "Encoding Layer We denote the premise as P and the hypothesis as H. To encode the words, we use the concatenation of the following parts: Word Embedding: Similar to the previous section, we map each word to a vector space by using pre-trained word vectors GloVe." + }, + { + "id": 40, + "string": "Character Embedding: We apply Convolutional Neural Networks (CNN) over the characters of each word."
+ }, + { + "id": 41, + "string": "This approach is proved to be helpful in handling out-of-vocab (OOV) words (Yang et al., 2017) ." + }, + { + "id": 42, + "string": "POS and NER tags: We use the part-of-speech (POS) tags and named-entity recognition (NER) tags to get syntactic information and entity label of the words." + }, + { + "id": 43, + "string": "Following (Pan et al., 2017b) , we apply the skip-gram model (Mikolov et al., 2013) to train two new lookup tables of POS tags and NER tags respectively." + }, + { + "id": 44, + "string": "Each word can get its own POS embedding and NER embedding by these lookup tables." + }, + { + "id": 45, + "string": "This approach represents much better geometrical features than commonly used one-hot vectors." + }, + { + "id": 46, + "string": "Exact Match: Inspired by the machine comprehension tasks (Chen et al., 2017a) , we want to know whether every word in P is in H (and H in P )." + }, + { + "id": 47, + "string": "We use three binary features to indicate whether the word can be exactly matched to any question word, which respectively means original form, lowercase and lemma form." + }, + { + "id": 48, + "string": "For encoding, we pass all sequences of vectors into a bi-directional LSTM and obtain: p_i = BiLSTM(f_rep(P_i), p_{i−1}), i = 1, ..., n ; u_j = BiLSTM(f_rep(H_j), u_{j−1}), j = 1, ..., m (5) where f_rep(x) = [Glove(x); Char(x); POS(x); NER(x); EM(x)] is the concatenation of the embedding vectors and the feature vectors of the word x, n = |P |, m = |H|." + }, + { + "id": 49, + "string": "Interaction Layer In this section, we feed the results of the encoding layer and the learned sentence encoder into the attention mechanism, which is responsible for linking and fusing information from the premise and the hypothesis words."
+ }, + { + "id": 50, + "string": "We first obtain a similarity matrix A ∈ R^{n×m} between the premise and hypothesis by A_ij = v_1 [p_i ; u_j ; p_i • u_j ; r_p ; r_h ] (6) where v_1 is the trainable parameter, r_p and r_h are sentence representations from equation (3) learned in Section 3, which denote the premise and hypothesis respectively." + }, + { + "id": 51, + "string": "In addition to the popular similarity matrix of previous work, we incorporate the relevance of each word of P (H) to the whole sentence of H (P )." + }, + { + "id": 52, + "string": "Now we use A to obtain the attentions and the attended vectors in both directions." + }, + { + "id": 53, + "string": "To signify the attention of the i-th word of P to every word of H, we use the sum of u_j weighted by the i-th row of A: ũ_i = ∑_j A_ij · u_j (7) where ũ_i is the attention vector of the i-th word of P for the entire H. In the same way, p̃_j is obtained via: p̃_j = ∑_i A_ij · p_i (8) To model the local inference between aligned word pairs, we integrate the attention vectors with the representation vectors via: p̂_i = f([p_i ; ũ_i ; p_i − ũ_i ; p_i ⊙ ũ_i]) ; û_j = f([u_j ; p̃_j ; u_j − p̃_j ; u_j ⊙ p̃_j]) (9) where f is a 1-layer feed-forward neural network with the ReLU activation function, p̂_i and û_j are local inference vectors." + }, + { + "id": 54, + "string": "Inspired by (Seo et al., 2016) and (Chen et al., 2017b) , we use a modeling layer to capture the interaction between the premise and the hypothesis." + }, + { + "id": 55, + "string": "Specifically, we use bi-directional LSTMs as building blocks: p^M_i = BiLSTM(p̂_i , p^M_{i−1}) ; u^M_j = BiLSTM(û_j , u^M_{j−1}) (10) Here, p^M_i and u^M_j are the modeling vectors which contain the crucial information and relationship among the sentences." + }, + { + "id": 56, + "string": "We compute the representation of the whole sentence by the weighted average of each word: where v_2 , v_3 are trainable vectors."
+ }, + { + "id": 57, + "string": "We don't share these parameter vectors in this seemingly parallel structure because there is some subtle difference between the premise and hypothesis, which will be discussed later in Section 5. p M = i exp(v 2 p M i ) i exp(v 2 p M i ) p M i u M = j exp(v 3 u M j ) j exp(v 3 u M j ) u M j (11) Output Layer The NLI task requires the model to predict the logical relation from the given set: entailment, neutral or contradiction." + }, + { + "id": 58, + "string": "We obtain the probability distribution by a linear function with softmax function: d = softmax(W[p M ; u M ; p M u M ; r p r h ]) (12) where W is a trainable parameter." + }, + { + "id": 59, + "string": "We combine the representations of the sentences computed above with the representations learned from DMP to obtain the final prediction." + }, + { + "id": 60, + "string": "Training As shown in Table 2 , many examples from our datasets are labeled by several people, and the choices of the annotators are not always consistent." + }, + { + "id": 61, + "string": "For instance, when the label number is 3 in SNLI, \"total=0\" means that no examples have 3 annotators (maybe more or less); \"correct=8748\" means that there are 8748 examples whose number of correct labels is 3 (the number of annotators may be 4 or 5, but some provided wrong labels)." + }, + { + "id": 62, + "string": "Although all the labels for each example will be unified to a final (correct) label, diversity of the labels for a single example indicates the low confidence of the result, so it is not ideal to use only the final label to optimize the model." + }, + { + "id": 63, + "string": "We propose a new objective function that combines both the log probabilities of the ground-truth label and a reward defined by the property of the datasets for the reinforcement learning."
+ }, + { + "id": 64, + "string": "The most widely used objective function for the natural language inference is to minimize the negative log cross-entropy loss: J CE (Θ) = − 1 N N k log(d k l ) (13) where Θ are all the parameters to optimize, N is the number of examples in the dataset, d l is the probability of the ground-truth label l. However, directly using the final label to train the model might be difficult in some situations, where the example is confusing and the labels from the annotators are different." + }, + { + "id": 65, + "string": "For instance, consider an example from the SNLI dataset: • P : \"A smiling costumed woman is holding an umbrella.\"" + }, + { + "id": 66, + "string": "• H: \"A happy woman in a fairy costume holds an umbrella.\"" + }, + { + "id": 67, + "string": "The final label is neutral, but the original labels from the five annotators are neutral, neutral, entailment, contradiction, neutral, in which case the relation between \"smiling\" and \"happy\" might be comprehended differently." + }, + { + "id": 68, + "string": "The confidence of the final label for this example is obviously lower than for an example whose labels are all the same." + }, + { + "id": 69, + "string": "To simulate human thinking more closely, in this paper, we tackle this problem by using the REINFORCE algorithm (Williams, 1992) to minimize the negative expected reward, which is defined as: J RL (Θ) = −E l∼π(l|P,H) [R(l, {l * })] (14) where π(l|P, H) is the action policy that predicts the label given P and H, {l * } is the set of annotated labels, and R(l, {l * }) = number of l in {l * } |{l * }| (15) is the reward function defined to measure the agreement with the judgments of all the annotators."
+ }, + { + "id": 70, + "string": "To avoid overwriting its earlier results and to further stabilize training, we use a linear function to integrate the above two objective functions: J(Θ) = λJ CE (Θ) + (1 − λ)J RL (Θ) (16) where λ is a tunable hyperparameter." + }, + { + "id": 71, + "string": "Experiments Datasets BookCorpus: We use the dataset from BookCorpus to pre-train our sentence encoder model." + }, + { + "id": 72, + "string": "We preprocessed and collected discourse markers from BookCorpus as ." + }, + { + "id": 73, + "string": "We finally curated a dataset of 6527128 pairs of sentences for 8 discourse markers, whose statistics are shown in Table 3 ." + }, + { + "id": 74, + "string": "SNLI: Stanford Natural Language Inference (Bowman et al., 2015) is a collection of more than 570k human annotated sentence pairs labeled for entailment, contradiction, and semantic independence." + }, + { + "id": 75, + "string": "SNLI is two orders of magnitude larger than all other resources of its type." + }, + { + "id": 76, + "string": "The premise data is extracted from the captions of the Flickr30k corpus (Young et al., 2014) , while the hypothesis data and the labels are manually annotated." + }, + { + "id": 77, + "string": "The original SNLI corpus also contains the other category, which includes the sentence pairs lacking consensus among multiple human annotators." + }, + { + "id": 78, + "string": "We remove this category and use the same split as in (Bowman et al., 2015) and other previous work." + }, + { + "id": 79, + "string": "MultiNLI: Multi-Genre Natural Language Inference (Williams et al., 2017) is another large-scale corpus for the task of NLI." + }, + { + "id": 80, + "string": "MultiNLI has 433k sentence pairs and is in the same format as SNLI, but it includes a more diverse range of text, as well as an auxiliary test set for cross-genre transfer evaluation."
+ }, + { + "id": 81, + "string": "Half of these selected genres appear in the training set while the rest do not, creating in-domain (matched) and cross-domain (mismatched) development/test sets." + }, + { + "id": 82, + "string": "Method SNLI MultiNLI Matched Mismatched 300D LSTM encoders (Bowman et al., 2016) 80.6 --300D Tree-based CNN encoders (Mou et al., 2016) 82.1 --4096D BiLSTM with max-pooling (Conneau et al., 2017) 84.5 --600D Gumbel TreeLSTM encoders (Choi et al., 2017) 86.0 --600D Residual stacked encoders (Nie and Bansal, 2017) 86.0 74.6 73.6 Gated-Att BiLSTM (Chen et al., 2017d) -73.2 73.6 100D LSTMs with attention (Rocktäschel et al., 2016) 83.5 --300D re-read LSTM (Sha et al., 2016) 87.5 --DIIN (Gong et al., 2018) 88.0 78.8 77.8 Biattentive Classification Network (McCann et al., 2017) 88.1 --300D CAFE (Tay et al., 2017) 88.5 78.7 77.9 KIM (Chen et al., 2017b) 88.6 --600D ESIM + 300D Syntactic TreeLSTM (Chen et al., 2017c) 88.8 --DIIN(Ensemble) (Gong et al., 2018) 88.9 80.0 78.7 KIM(Ensemble) (Chen et al., 2017b) 89.1 --300D CAFE(Ensemble) (Tay et al., 2017) 89 Implementation Details We use the Stanford CoreNLP toolkit to tokenize the words and generate POS and NER tags." + }, + { + "id": 83, + "string": "The word embeddings are initialized by 300d Glove (Pennington et al., 2014) , the dimensions of POS and NER embeddings are 30 and 10." + }, + { + "id": 84, + "string": "The dataset we use to train the embeddings of POS tags and NER tags is the training set given by SNLI." + }, + { + "id": 85, + "string": "We apply Tensorflow r1.3 as our neural network framework." + }, + { + "id": 86, + "string": "We set the hidden size as 300 for all the LSTM layers and apply dropout (Srivastava et al., 2014) between layers with an initial ratio of 0.9, the decay rate as 0.97 for every 5000 steps." + }, + { + "id": 87, + "string": "We use AdaDelta for optimization as described in (Zeiler, 2012) with ρ as 0.95 and ε as 1e-8."
+ }, + { + "id": 88, + "string": "We set our batch size as 36 and the initial learning rate as 0.6." + }, + { + "id": 89, + "string": "The parameter λ in the objective function is set to be 0.2." + }, + { + "id": 90, + "string": "For the DMP task, we use stochastic gradient descent with initial learning rate as 0.1, and we anneal by half each time the validation accuracy is lower than the previous epoch." + }, + { + "id": 91, + "string": "The number of epochs is set to be 10, and the feedforward dropout rate is 0.2." + }, + { + "id": 92, + "string": "The learned encoder in the subsequent NLI task is trainable." + }, + { + "id": 93, + "string": "Results In (2016) proposed a simple baseline that uses LSTM to encode the whole sentences and feed them into an MLP classifier to predict the final inference relationship, they achieve an accuracy of 80.6% on SNLI." + }, + { + "id": 94, + "string": "Nie and Bansal (2017) test their model on both SNLI and MultiNLI, and achieve competitive results." + }, + { + "id": 95, + "string": "In the medium part, we show the results of other neural network models." + }, + { + "id": 96, + "string": "Obviously, the performance of most of the integrated methods is better than the sentence encoding based models above." + }, + { + "id": 97, + "string": "Both DIIN (Gong et al., 2018) and We present the ensemble results on both datasets in the bottom part of Table 4." + }, + { + "id": 98, + "string": "We build an ensemble model which consists of 10 single models with the same architecture but initialized with different parameters." + }, + { + "id": 99, + "string": "The performance of our model achieves 89.6% on SNLI, 80.3% on matched MultiNLI and 79.4% on mismatched MultiNLI, which are all state-of-the-art results." + }, + { + "id": 100, + "string": "Ablation Analysis As shown in Table 5 , we conduct an ablation experiment on the SNLI development dataset to evaluate the individual contribution of each component of our model."
+ }, + { + "id": 101, + "string": "Firstly, we only use the results of the sentence encoder model to predict the answer, in other words, we represent each sentence by a single vector and use dot product with a linear function to do the classification." + }, + { + "id": 102, + "string": "The result is obviously not satisfactory, which indicates that only using sentence embedding from discourse markers to predict the answer is not ideal in large-scale datasets." + }, + { + "id": 103, + "string": "We then remove the sentence encoder model, which means we don't use the knowledge transferred from the DMP task and thus the representations r p and r h are set to be zero vectors in the equation (6) and the equation (12)." + }, + { + "id": 104, + "string": "We observe that the performance drops significantly to 87.24%, which is nearly 1.5% lower than our DMAN model, which indicates that the discourse markers have deep connections with the logical relations between the two sentences they link." + }, + { + "id": 105, + "string": "When Figure 2 : Performance when the sentence encoder is pretrained on different discourse markers sets." + }, + { + "id": 106, + "string": "\"NONE\" means the model doesn't use any discourse markers; \"ALL\" means the model uses all the discourse markers." + }, + { + "id": 107, + "string": "we remove the character-level embedding and the POS and NER features, the performance drops a lot." + }, + { + "id": 108, + "string": "We conjecture that those feature tags help the model represent the words as a whole while the char-level embedding can better handle the out-of-vocab (OOV) or rare words." + }, + { + "id": 109, + "string": "The exact match feature also demonstrates its effectiveness in the ablation result." + }, + { + "id": 110, + "string": "Finally, we ablate the reinforcement learning part, in other words, we only use the original loss function to optimize the model (set λ = 1)."
+ }, + { + "id": 111, + "string": "The result drops about 0.5%, which proves that it is helpful to utilize all the information from the annotators." + }, + { + "id": 112, + "string": "Semantic Analysis In Figure 2 , we show the performance on the three relation labels when the model is pre-trained on different discourse markers sets." + }, + { + "id": 113, + "string": "In other words, we removed one discourse marker from the original set each time and used the remaining 7 discourse markers to pre-train the sentence encoder in the DMP task and then train the DMAN." + }, + { + "id": 114, + "string": "As we can see, there is a sharp decline of accuracy when removing \"but\", \"because\" and \"although\"." + }, + { + "id": 115, + "string": "We can intuitively speculate that \"but\" and \"although\" have direct connections with the contradiction label (which drops most significantly) while \"because\" has some links with the entailment label." + }, + { + "id": 116, + "string": "We observe that some discourse markers such as \"if\" or \"before\" contribute much less than other words which have strong logical hints, although they actually improve the performance of the model." + }, + { + "id": 117, + "string": "Compared to the other two categories, the \"contradiction\" label examples seem to benefit the most from the pre-trained sentence encoder." + }, + { + "id": 118, + "string": "Visualization In Figure 3 , we also provide a visualized analysis of the hidden representation from similarity matrix A (computed in the equation (6) ) depending on whether we use the discourse markers or not."
+ }, + { + "id": 119, + "string": "We pick a sentence pair whose premise is \"3 young man in hoods standing in the middle of a quiet street facing the camera.\"" + }, + { + "id": 120, + "string": "and hypothesis is \"Three people sit by a busy street bareheaded.\"" + }, + { + "id": 121, + "string": "We observe that the values are highly correlated among the synonyms like \"people\" with \"man\", \"three\" with \"3\" in both situations." + }, + { + "id": 122, + "string": "However, words that might have contradictory meanings like \"hoods\" with \"bareheaded\", \"quiet\" with \"busy\" perform worse without the discourse marker augmentation, which conforms to the observation in Section 5.5 that the \"contradiction\" label examples benefit a lot." + }, + { + "id": 123, + "string": "6 Related Work Discourse Marker Applications This work is inspired most directly by the DisSent model and Discourse Prediction Task of , which introduce the use of the discourse markers information for the pretraining of sentence encoders." + }, + { + "id": 124, + "string": "They follow to collect a large sentence pairs corpus from Book-Corpus and propose a sentence representation based on that." + }, + { + "id": 125, + "string": "They also apply their pre-trained sentence encoder to a series of natural language understanding tasks such as sentiment analysis, question-type, entailment, and relatedness." + }, + { + "id": 126, + "string": "However, all those datasets are provided by Conneau et al." + }, + { + "id": 127, + "string": "(2017) for evaluating sentence embeddings and are almost all small-scale and are not able to support more complex neural networks." + }, + { + "id": 128, + "string": "Moreover, they represent each sentence by a single vector and directly combine them to predict the answer, which cannot capture interactions at the word level." + }, + { + "id": 129, + "string": "In closely related work, Jernite et al."
+ }, + { + "id": 130, + "string": "(2017) propose a model that also leverage discourse relations." + }, + { + "id": 131, + "string": "However, they manually group the discourse markers into several categories based on human knowledge and predict the category instead of the explicit discourse marker phrase." + }, + { + "id": 132, + "string": "However, the size of their dataset is much smaller than that in , and sometimes there has been disagreement among annotators about what exactly is the correct categorization of discourse relations (Hobbs, 1990) ." + }, + { + "id": 133, + "string": "Unlike previous works, we insert the sentence encoder into an integrated network to augment the semantic representation for NLI tasks rather than directly combining the sentence embeddings to predict the relations." + }, + { + "id": 134, + "string": "Natural Language Inference Earlier research on the natural language inference was based on small-scale datasets (Marelli et al., 2014) , which relied on traditional methods such as shallow methods (Glickman et al., 2005) , natural logic methods(MacCartney and Manning, 2007) , etc." + }, + { + "id": 135, + "string": "These datasets are either not large enough to support complex deep neural network models or too easy to challenge natural language." + }, + { + "id": 136, + "string": "Large and complicated networks have been successful in many natural language processing tasks (Zhu et al., 2017; Chen et al., 2017e; Pan et al., 2017a) ." + }, + { + "id": 137, + "string": "Recently, Bowman et al." + }, + { + "id": 138, + "string": "(2015) released Stanford Natural language Inference (SNLI) dataset, which is a high-quality and large-scale benchmark, thus inspired many significant works (Bowman et al., 2016; Mou et al., 2016; Vendrov et al., 2016; Conneau et al., 2017; Gong et al., 2018; McCann et al., 2017; Chen et al., 2017b; Choi et al., 2017; Tay et al., 2017) ." 
+ }, + { + "id": 139, + "string": "Most of them focus on the improvement of the interaction architectures and obtain competitive results, while transfer learning from external knowledge is popular as well." + }, + { + "id": 140, + "string": "Vendrov et al." + }, + { + "id": 141, + "string": "(2016) incorporated Skipthought , which is an unsupervised sequence model that has been proven to generate useful sentence embeddings." + }, + { + "id": 142, + "string": "McCann et al." + }, + { + "id": 143, + "string": "(2017) proposed to transfer the pre-trained encoder from the neural machine translation (NMT) to the NLI tasks." + }, + { + "id": 144, + "string": "Our method combines a pre-trained sentence encoder from the DMP task with an integrated NLI model to compose a novel framework." + }, + { + "id": 145, + "string": "Furthermore, unlike previous studies, we make full use of the labels provided by the annotators and employ policy gradient to optimize a new objective function in order to simulate human thinking." + }, + { + "id": 146, + "string": "Conclusion In this paper, we propose Discourse Marker Augmented Network for the task of the natural language inference." + }, + { + "id": 147, + "string": "We transfer the knowledge learned from the discourse marker prediction task to the NLI task to augment the semantic representation of the model." + }, + { + "id": 148, + "string": "Moreover, we take the various views of the annotators into consideration and employ reinforcement learning to help optimize the model." + }, + { + "id": 149, + "string": "The experimental evaluation shows that our model achieves the state-of-the-art results on SNLI and MultiNLI datasets." + }, + { + "id": 150, + "string": "Future work involves the choice of discourse markers and some other transfer learning sources."
+ }, + { + "id": 151, + "string": "Acknowledgements This work was supported in part by the National Nature Science Foundation of China (Grant Nos: 61751307), in part by the grant ZJU Research 083650 of the ZJUI Research Program from Zhejiang University and in part by the National Youth Top-notch Talent Support Program." + }, + { + "id": 152, + "string": "The experiments are supported by Chengwei Yao in the Experiment Center of the College of Computer Science and Technology, Zhejiang university." + } + ], + "headers": [ + { + "section": "Introduction", + "n": "1", + "start": 0, + "end": 24 + }, + { + "section": "Natural Language Inference (NLI)", + "n": "2.1", + "start": 25, + "end": 27 + }, + { + "section": "Discourse Marker Prediction (DMP)", + "n": "2.2", + "start": 28, + "end": 29 + }, + { + "section": "Sentence Encoder Model", + "n": "3", + "start": 30, + "end": 37 + }, + { + "section": "Discourse Marker Augmented Network", + "n": "4", + "start": 38, + "end": 38 + }, + { + "section": "Encoding Layer", + "n": "4.1", + "start": 39, + "end": 48 + }, + { + "section": "Interaction Layer", + "n": "4.2", + "start": 49, + "end": 57 + }, + { + "section": "Output Layer", + "n": "4.3", + "start": 58, + "end": 59 + }, + { + "section": "Training", + "n": "4.4", + "start": 60, + "end": 70 + }, + { + "section": "Datasets", + "n": "5.1", + "start": 71, + "end": 82 + }, + { + "section": "Implementation Details", + "n": "5.2", + "start": 83, + "end": 92 + }, + { + "section": "Results", + "n": "5.3", + "start": 93, + "end": 99 + }, + { + "section": "Ablation Analysis", + "n": "5.4", + "start": 100, + "end": 111 + }, + { + "section": "Semantic Analysis", + "n": "5.5", + "start": 112, + "end": 117 + }, + { + "section": "Visualization", + "n": "5.6", + "start": 118, + "end": 122 + }, + { + "section": "Discourse Marker Applications", + "n": "6.1", + "start": 123, + "end": 133 + }, + { + "section": "Natural Language Inference", + "n": "6.2", + "start": 134, + "end": 145 + }, + { + 
"section": "Conclusion", + "n": "7", + "start": 146, + "end": 148 + }, + { + "section": "Acknowledgements", + "n": "8", + "start": 149, + "end": 152 + } + ], + "figures": [ + { + "filename": "../figure/image/1023-Table1-1.png", + "caption": "Table 1: Three examples in SNLI dataset.", + "page": 0, + "bbox": { + "x1": 306.71999999999997, + "x2": 529.4399999999999, + "y1": 223.2, + "y2": 407.03999999999996 + } + }, + { + "filename": "../figure/image/1023-Table4-1.png", + "caption": "Table 4: Performance on the SNLI dataset and the MultiNLI dataset. In the top part, we show sentence encoding-based models; In the medium part, we present the performance of integrated neural network models; In the bottom part, we show the results of ensemble models.", + "page": 5, + "bbox": { + "x1": 74.88, + "x2": 520.3199999999999, + "y1": 68.64, + "y2": 360.0 + } + }, + { + "filename": "../figure/image/1023-Table5-1.png", + "caption": "Table 5: Ablations on the SNLI development dataset.", + "page": 6, + "bbox": { + "x1": 72.0, + "x2": 291.36, + "y1": 62.4, + "y2": 197.28 + } + }, + { + "filename": "../figure/image/1023-Figure2-1.png", + "caption": "Figure 2: Performance when the sentence encoder is pretrained on different discourse markers sets. 
“NONE” means the model doesn't use any discourse markers; “ALL” means the model uses all the discourse markers.", + "page": 6, + "bbox": { + "x1": 306.71999999999997, + "x2": 544.3199999999999, + "y1": 61.44, + "y2": 240.0 + } + }, + { + "filename": "../figure/image/1023-Figure1-1.png", + "caption": "Figure 1: Overview of our Discourse Marker Augmented Network, comprising the part of Discourse Marker Prediction (upper) for pre-training and Natural Language Inference (bottom) to which the learned knowledge will be transferred.", + "page": 2, + "bbox": { + "x1": 74.88, + "x2": 524.16, + "y1": 66.72, + "y2": 239.51999999999998 + } + }, + { + "filename": "../figure/image/1023-Figure3-1.png", + "caption": "Figure 3: Comparison of the visualized similarity relations.", + "page": 7, + "bbox": { + "x1": 73.92, + "x2": 278.4, + "y1": 68.16, + "y2": 310.08 + } + }, + { + "filename": "../figure/image/1023-Table2-1.png", + "caption": "Table 2: Statistics of the labels of SNLI and MultiNLI. Total means the number of examples whose number of annotators is in the left column.
Correct means the number of examples whose number of correct labels from the annotators is in the left column.", + "page": 3, + "bbox": { + "x1": 306.71999999999997, + "x2": 536.16, + "y1": 62.879999999999995, + "y2": 159.35999999999999 + } + }, + { + "filename": "../figure/image/1023-Table3-1.png", + "caption": "Table 3: Statistics of discourse markers in our dataset from BookCorpus.", + "page": 4, + "bbox": { + "x1": 325.92, + "x2": 505.44, + "y1": 62.4, + "y2": 197.28 + } + } + ] + }, + "gem_id": "GEM-SciDuet-chal-20" + }, + { + "slides": { + "0": { + "title": "Executing Context Dependent Instructions", + "text": [ + "Task: map a sequence of instructions to actions", + "Modeling Context Learning from" + ], + "page_nums": [ + 1 + ], + "images": [] + }, + "1": { + "title": "Executing a Sequence of Instructions", + "text": [ + "Empty out the leftmost beaker of purple chemical", + "Then, add the contents of the first beaker to the second", + "Then, drain 1 unit from it", + "Same for 1 more unit" + ], + "page_nums": [ + 2, + 3, + 4, + 5, + 6, + 7, + 8, + 9 + ], + "images": [] + }, + "2": { + "title": "Problem Setup", + "text": [ + "Task: follow sequence of instructions", + "Learning from instructions and corresponding world states", + "Empty out the leftmost beaker of purple chemical", + "Then, add the contents of the first beaker to the second", + "Then, drain 1 unit from it", + "Same for 1 more unit" + ], + "page_nums": [ + 10, + 11, + 12, + 13, + 14, + 15 + ], + "images": [] + }, + "4": { + "title": "Today", + "text": [ + "1. Attention-based model for generating sequences of system actions that modify the environment", + "2.
Exploration-based learning procedure that avoids biases learned early in training" + ], + "page_nums": [ + 17 + ], + "images": [] + }, + "5": { + "title": "System Actions", + "text": [ + "Each beaker is a stack", + "Actions are pop and push", + "pop pop pop push brown; push brown; push brown;" + ], + "page_nums": [ + 18 + ], + "images": [] + }, + "6": { + "title": "Meaning Representation", + "text": [ + "push brown; push brown; push brown;" + ], + "page_nums": [ + 19, + 20 + ], + "images": [] + }, + "9": { + "title": "Reward Function", + "text": [ + "Source state s s0 Target state", + "if if a stops the sequence and a stops the sequence and s0 s0 is the goal state is not the goal state", + "is closer to the goal state than is closer to the goal state than s0" + ], + "page_nums": [ + 39, + 40, + 41 + ], + "images": [] + }, + "11": { + "title": "Learned Biases", + "text": [ + "Early during learning, model learns it can get positive reward by predicting the pop actions", + "Less likely to get positive reward with push action", + "Becomes biased against push - during later exploration, push is never sampled!", + "Compounding effect: never learns to generate push actions" + ], + "page_nums": [ + 49 + ], + "images": [] + }, + "12": { + "title": "Single step Reward Observation", + "text": [ + "Our approach: observe reward of all actions by looking one step ahead during exploration", + "Observe reward for actions like push" + ], + "page_nums": [ + 50 + ], + "images": [] + }, + "14": { + "title": "Simple Exploration", + "text": [ + "Only observe states along sampled trajectory", + "Observe sampled states and single-step ahead" + ], + "page_nums": [ + 52, + 53, + 54, + 55, + 56, + 57, + 58 + ], + "images": [] + }, + "15": { + "title": "Single step Observation", + "text": [ + "Add the third beaker to the first", + "push 1 orange push 1 yellow" + ], + "page_nums": [ + 59, + 60, + 61, + 62, + 63, + 64, + 65, + 66, + 67, + 68, + 69, + 70 + ], + "images": [] + }, + "17": { + 
"title": "Alchemy", + "text": [ + "pop pop pop push brown; push brown; push brown;" + ], + "page_nums": [ + 72 + ], + "images": [] + }, + "18": { + "title": "Scene", + "text": [ + "The person with a red shirt and a blue hat moves to the right end", + "remove_person remove_hat add_person red add_hat blue" + ], + "page_nums": [ + 73 + ], + "images": [] + }, + "19": { + "title": "Tangrams", + "text": [ + "Swap the third and fourth figures", + "remove 4 insert 3 boat" + ], + "page_nums": [ + 74 + ], + "images": [] + }, + "22": { + "title": "Ablations", + "text": [ + "Without World State Context", + "Need access to previous instructions", + "Need access to world state" + ], + "page_nums": [ + 78 + ], + "images": [] + } + }, + "paper_title": "Situated Mapping of Sequential Instructions to Actions with Single-step Reward Observation", + "paper_id": "1032", + "paper": { + "title": "Situated Mapping of Sequential Instructions to Actions with Single-step Reward Observation", + "abstract": "We propose a learning approach for mapping context-dependent sequential instructions to actions. We address the problem of discourse and state dependencies with an attention-based model that considers both the history of the interaction and the state of the world. To train from start and goal states without access to demonstrations, we propose SESTRA, a learning algorithm that takes advantage of single-step reward observations and immediate expected reward maximization. We evaluate on the SCONE domains, and show absolute accuracy improvements of 9.8%-25.3% across the domains over approaches that use high-level logical representations.", + "text": [ + { + "id": 0, + "string": "Introduction An agent executing a sequence of instructions must address multiple challenges, including grounding the language to its observed environment, reasoning about discourse dependencies, and generating actions to complete high-level goals."
+ }, + { + "id": 1, + "string": "For example, consider the environment and instructions in Figure 1 , in which a user describes moving chemicals between beakers and mixing chemicals together." + }, + { + "id": 2, + "string": "To execute the second instruction, the agent needs to resolve sixth beaker and last one to objects in the environment." + }, + { + "id": 3, + "string": "The third instruction requires resolving it to the rightmost beaker mentioned in the second instruction, and reasoning about the set of actions required to mix the colors in the beaker to brown." + }, + { + "id": 4, + "string": "In this paper, we describe a model and learning approach to map sequences of instructions to actions." + }, + { + "id": 5, + "string": "Our model considers previous utterances and the world state to select actions, learns to combine simple actions to achieve complex goals, and can be trained using (Long et al., 2016) ALCHEMY domain, including a start state (top), sequence of instructions, and a goal state (bottom)." + }, + { + "id": 6, + "string": "Each instruction is annotated with a sequence of actions from the set of actions we define for ALCHEMY." + }, + { + "id": 7, + "string": "goal states without access to demonstrations." + }, + { + "id": 8, + "string": "The majority of work on executing sequences of instructions focuses on mapping instructions to high-level formal representations, which are then evaluated to generate actions (e.g., Chen and Mooney, 2011; Long et al., 2016) ." + }, + { + "id": 9, + "string": "For example, the third instruction in Figure 1 will be mapped to mix(prev_arg1), indicating that the mix action should be applied to first argument of the previous action (Long et al., 2016; Guu et al., 2017) ." + }, + { + "id": 10, + "string": "In contrast, we focus on directly generating the sequence of actions." 
+ }, + { + "id": 11, + "string": "This requires resolving references without explicitly modeling them, and learning the sequences of actions required to complete high-level actions; for example, that mixing requires removing everything in the beaker and replacing with the same number of brown items." + }, + { + "id": 12, + "string": "A key challenge in executing sequences of instructions is considering contextual cues from both the history of the interaction and the state of the world." + }, + { + "id": 13, + "string": "Instructions often refer to previously mentioned objects (e.g., it in Figure 1 ) or actions (e.g., do it again)." + }, + { + "id": 14, + "string": "The world state provides the set of objects the instruction may refer to, and implicitly determines the available actions." + }, + { + "id": 15, + "string": "For example, liquid cannot be removed from an empty beaker." + }, + { + "id": 16, + "string": "Both types of contexts continuously change during an interaction." + }, + { + "id": 17, + "string": "As new instructions are given, the instruction history expands, and as the agent acts the world state changes." + }, + { + "id": 18, + "string": "We propose an attention-based model that takes as input the current instruction, previous instructions, the initial world state, and the current state." + }, + { + "id": 19, + "string": "At each step, the model computes attention encodings of the different inputs, and predicts the next action to execute." + }, + { + "id": 20, + "string": "We train the model given instructions paired with start and goal states without access to the correct sequence of actions." + }, + { + "id": 21, + "string": "During training, the agent learns from rewards received through exploring the environment with the learned policy by mapping instructions to sequences of actions."
+ }, + { + "id": 22, + "string": "In practice, the agent learns to execute instructions gradually, slowly correctly predicting prefixes of the correct sequences of increasing length as learning progress." + }, + { + "id": 23, + "string": "A key challenge is learning to correctly select actions that are only required later in execution sequences." + }, + { + "id": 24, + "string": "Early during learning, these actions receive negative updates, and the agent learns to assign them low probabilities." + }, + { + "id": 25, + "string": "This results in an exploration problem in later stages, where actions that are only required later are not sampled during exploration." + }, + { + "id": 26, + "string": "For example, in the ALCHEMY domain shown in Figure 1 , the agent behavior early during execution of instructions can be accomplished by only using POP actions." + }, + { + "id": 27, + "string": "As a result, the agent quickly learns a strong bias against PUSH actions, which in practice prevents the policy from exploring them again." + }, + { + "id": 28, + "string": "We address this with a learning algorithm that observes the reward for all possible actions for each visited state, and maximizes the immediate expected reward." + }, + { + "id": 29, + "string": "We evaluate our approach on SCONE (Long et al., 2016) , which includes three domains, and is used to study recovering predicate logic meaning representations for sequential instructions." + }, + { + "id": 30, + "string": "We study the problem of generating a sequence of low-level actions, and re-define the set of actions for each domain." + }, + { + "id": 31, + "string": "For example, we treat the beakers in the ALCHEMY domain as stacks and use only POP and PUSH actions." + }, + { + "id": 32, + "string": "Our approach robustly learns to execute sequential instructions with up to 89.1% task-completion accuracy for single instruction, and 62.7% for complete sequences." 
+ }, + { + "id": 33, + "string": "Our code is available at https://github.com/clic-lab/scone." + }, + { + "id": 34, + "string": "Technical Overview Task and Notation Let S be the set of all possible world states, X be the set of all natural language instructions, and A be the set of all actions." + }, + { + "id": 35, + "string": "An instructionx ∈ X of length |x| is a sequence of tokens x 1 , ...x |x| ." + }, + { + "id": 36, + "string": "Executing an action modifies the world state following a transition function T : S × A → S. For example, the ALCHEMY domain includes seven beakers that contain colored liquids." + }, + { + "id": 37, + "string": "The world state defines the content of each beaker." + }, + { + "id": 38, + "string": "We treat each beaker as a stack." + }, + { + "id": 39, + "string": "The actions are POP N and PUSH N C, where 1 ≤ N ≤ 7 is the beaker number and C is one of six colors." + }, + { + "id": 40, + "string": "There are a total of 50 actions, including the STOP action." + }, + { + "id": 41, + "string": "Section 6 describes the domains in detail." + }, + { + "id": 42, + "string": "Given a start state s 1 and a sequence of instructions x 1 , ." + }, + { + "id": 43, + "string": "." + }, + { + "id": 44, + "string": "." + }, + { + "id": 45, + "string": ",x n , our goal is to generate the sequence of actions specified by the instructions starting from s 1 ." + }, + { + "id": 46, + "string": "We treat the execution of a sequence of instructions as executing each instruction in turn." + }, + { + "id": 47, + "string": "The executionē of an instructionx i starting at a state s 1 and given the history of the instruction sequence x 1 , ." + }, + { + "id": 48, + "string": "." + }, + { + "id": 49, + "string": "." + }, + { + "id": 50, + "string": ",x i−1 is a sequence of state-action pairsē = (s 1 , a 1 ), ..., (s m , a m ) , where a k ∈ A, s k+1 = T (s k , a k )." 
+ }, + { + "id": 51, + "string": "The final action a m is the special action STOP, which indicates the execution has terminated." + }, + { + "id": 52, + "string": "The final state is then s m , as T (s k , STOP) = s k ." + }, + { + "id": 53, + "string": "Executing a sequence of instructions in order generates a sequence ē 1 , ...,ē n , whereē i is the execution of instructionx i ." + }, + { + "id": 54, + "string": "When referring to states and actions in an indexed executionē i , the k-th state and action are s i,k and a i,k ." + }, + { + "id": 55, + "string": "We execute instructions one after the other:ē 1 starts at the interaction initial state s 1 and s i+1,1 = s i,|ē i | , where s i+1,1 is the start state ofē i+1 and s i,|ē i | is the final state ofē i ." + }, + { + "id": 56, + "string": "Model We model the agent with a neural network policy (Section 4)." + }, + { + "id": 57, + "string": "At step k of executing the i-th instruction, the model input is the current instructionx i , the previous instructions x 1 , ." + }, + { + "id": 58, + "string": "." + }, + { + "id": 59, + "string": "." + }, + { + "id": 60, + "string": ",x i−1 , the world state s 1 at the beginning of executingx i , and the current state s k ." + }, + { + "id": 61, + "string": "The model predicts the next action a k to execute." + }, + { + "id": 62, + "string": "If a k = STOP, we switch to the next instruction, or if at the end of the instruction sequence, terminate." + }, + { + "id": 63, + "string": "Otherwise, we update the state to s k+1 = T (s k , a k )." + }, + { + "id": 64, + "string": "The model uses attention to process the different inputs and a recurrent neural network (RNN) decoder to generate actions (Bahdanau et al., 2015) ." + }, + { + "id": 65, + "string": "Learning We assume access to a set of N instruction sequences, where each instruction in each sequence is paired with its start and goal states." 
+ }, + { + "id": 66, + "string": "During training, we create an example for each instruction." + }, + { + "id": 67, + "string": "Formally, the training set is {(x (j) i , s (j) i,1 , x (j) 1 , ." + }, + { + "id": 68, + "string": "." + }, + { + "id": 69, + "string": "." + }, + { + "id": 70, + "string": ",x (j) i−1 , g (j) i )} N,n (j) j=1,i=1 , wherex (j) i is an instruction, s (j) i,1 is a start state, x (j) 1 , ." + }, + { + "id": 71, + "string": "." + }, + { + "id": 72, + "string": "." + }, + { + "id": 73, + "string": ",x (j) i−1 is the instruction history, g (j) i is the goal state, and n (j) is the length of the j-th instruction sequence." + }, + { + "id": 74, + "string": "This training data contains no evidence about the actions and intermediate states required to execute each instruction." + }, + { + "id": 75, + "string": "1 We use a learning method that maximizes the expected immediate reward for a given state (Section 5)." + }, + { + "id": 76, + "string": "The reward accounts for task-completion and distance to the goal via potential-based reward shaping." + }, + { + "id": 77, + "string": "Evaluation We evaluate exact task completion for sequences of instructions on a test set {(s (j) 1 , x (j) 1 , ." + }, + { + "id": 78, + "string": "." + }, + { + "id": 79, + "string": "." + }, + { + "id": 80, + "string": ",x (j) n j , g (j) )} N j=1 , where g (j) is the oracle goal state of executing instructions x (j) 1 , ." + }, + { + "id": 81, + "string": "." + }, + { + "id": 82, + "string": "." + }, + { + "id": 83, + "string": ",x (j) n j in order starting from s (j) 1 ." + }, + { + "id": 84, + "string": "We also evaluate single-instruction task completion using per-instruction annotated start and goal states." 
+ }, + { + "id": 85, + "string": "Related Work Executing instructions has been studied using the SAIL corpus (MacMahon et al., 2006) with focus on navigation using high-level logical representations (Chen and Mooney, 2011; Chen, 2012; Artzi et al., 2014) and lowlevel actions (Mei et al., 2016) ." + }, + { + "id": 86, + "string": "While SAIL includes sequences of instructions, the data demonstrates limited discourse phenomena, and instructions are often processed in isolation." + }, + { + "id": 87, + "string": "Approaches that consider as input the entire sequence focused on segmentation (Andreas and Klein, 2015) ." + }, + { + "id": 88, + "string": "Recently, other navigation tasks were proposed with focus on single instructions (Anderson et al., 2018; Janner et al., 2018) ." + }, + { + "id": 89, + "string": "We focus on sequences of environment manipulation instructions and modeling contextual cues from both the changing environment and instruction history." + }, + { + "id": 90, + "string": "Manipulation using single-sentence instructions has been stud-ied using the Blocks domain (Bisk et al., 2016 (Bisk et al., , 2018 Misra et al., 2017; Tan and Bansal, 2018) ." + }, + { + "id": 91, + "string": "Our work is related to the work of Branavan et al." + }, + { + "id": 92, + "string": "(2009) and Vogel and Jurafsky (2010) ." + }, + { + "id": 93, + "string": "While both study executing sequences of instructions, similar to SAIL, the data includes limited discourse dependencies." + }, + { + "id": 94, + "string": "In addition, both learn with rewards computed from surface-form similarity between text in the environment and the instruction." + }, + { + "id": 95, + "string": "We do not rely on such similarities, but instead use a state distance metric." 
+ }, + { + "id": 96, + "string": "Language understanding in interactive scenarios that include multiple turns has been studied with focus on dialogue for querying database systems using the ATIS corpus (Hemphill et al., 1990; Dahl et al., 1994) ." + }, + { + "id": 97, + "string": "Tür et al." + }, + { + "id": 98, + "string": "(2010) surveys work on ATIS." + }, + { + "id": 99, + "string": "Miller et al." + }, + { + "id": 100, + "string": "(1996) , Collins (2009), and Suhr et al." + }, + { + "id": 101, + "string": "(2018) modeled context dependence in ATIS for generating formal representations." + }, + { + "id": 102, + "string": "In contrast, we focus on environments that change during execution and directly generating environment actions, a scenario that is more related to robotic agents than database query." + }, + { + "id": 103, + "string": "The SCONE corpus (Long et al., 2016) was designed to reflect a broad set of discourse context-dependence phenomena." + }, + { + "id": 104, + "string": "It was studied extensively using logical meaning representations (Long et al., 2016; Guu et al., 2017; Fried et al., 2018) ." + }, + { + "id": 105, + "string": "In contrast, we are interested in directly generating actions that modify the environment." + }, + { + "id": 106, + "string": "This requires generating lower-level actions and learning procedures that are otherwise hardcoded in the logic (e.g., mixing action in Figure 1) ." + }, + { + "id": 107, + "string": "Except for Fried et al." + }, + { + "id": 108, + "string": "(2018) , previous work on SCONE assumes access only to the initial and final states during training." + }, + { + "id": 109, + "string": "This form of supervision does not require operating the agent manually to acquire the correct sequence of actions, a difficult task in robotic agents with complex control." 
+ }, + { + "id": 110, + "string": "Goal state supervision has been studied for instructional language (e.g., Branavan et al., 2009; Bisk et al., 2016) , and more extensively in question answering when learning with answer annotations only (e.g., Clarke et al., 2010; Liang et al., 2011; Kwiatkowski et al., 2013; Berant et al., 2013; Liang, 2014, 2015; ." + }, + { + "id": 111, + "string": "Model We map sequences of instructions x 1 , ." + }, + { + "id": 112, + "string": "." + }, + { + "id": 113, + "string": "." + }, + { + "id": 114, + "string": ",x n to actions by executing the instructions in or-Utterance initial state s 1 < l a t e x i t s h a 1 _ b a s e 6 4 = \" p S K e R C 6 K r a b k R j 9 Z F y 6 P 3 V r m k v 4 = \" > A A A C U H i c b Z B N b 9 Q w E I a d B U o J X 1 s 4 c r F Y U X F a J R S p 5 V a p F 4 5 F I r T S J l p N J r O t V d u J 7 E n L K k r / B 7 + m V z h z 4 q d w A u 8 H E m w Z y f K r 9 x 1 7 N E / Z a O U 5 S X 5 E g z t 3 7 2 3 d 3 3 4 Q P 3 z 0 + M n T 4 c 6 z T 7 5 u H V K G t a 7 d a Q m e t L K U s W J N p 4 0 j M K W m k / L i a J G f X J L z q r Y f e d 5 Q Y e D M q p l C 4 G B N h 3 u 7 8 v o 6 Z / r M X c Z M D i x S L / N c 7 s q V q 6 x i B V p 6 B i b Z S z 9 N p 8 N R M k 6 W J W + L d C 1 G Y l 3 H 0 5 1 o K 6 9 q b A 1 Z R g 3 e T 9 K k 4 a I D x w o 1 9 X H e e m o A L + C M J k F a M O S L b r l d L 1 8 F p 5 K z 2 o V j W S 7 d v 1 9 0 Y L y f m z J 0 G u B z v 5 k t z P 9 m l V 9 8 u D G d Z w d F W L p p m S y u h s 9 a L b m W C 3 y y U o 6 Q 9 T w I Q B f Q o M R z c I C B n Y / j 3 J G l K 6 y N A V t 1 O f a T t O i 6 3 B k 5 S v s + D u T S T U 6 3 R f Z m / G 6 c f H g 7 O k z W C L f F C / F S v B a p 2 B e H 4 r 0 4 F p l A 8 U X c i K / i W / Q 9 + h n 9 G k S r 1 j + 3 e C 7 + q U H 8 G 3 e a s y o = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" p S K e R C 6 K r a b k R j 9 Z F y 6 P 3 V r m k v 4 = \" > A A A C U H i c b Z B N b 9 Q w E I a d B U o J X 1 s 4 c r F Y U X F a J R S p 5 V a 
p F 4 5 F I r T S J l p N J r O t V d u J 7 E n L K k r / B 7 + m V z h z 4 q d w A u 8 H E m w Z y f K r 9 x 1 7 N E / Z a O U 5 S X 5 E g z t 3 7 2 3 d 3 3 4 Q P 3 z 0 + M n T 4 c 6 z T 7 5 u H V K G t a 7 d a Q m e t L K U s W J N p 4 0 j M K W m k / L i a J G f X J L z q r Y f e d 5 Q Y e D M q p l C 4 G B N h 3 u 7 8 v o 6 Z / r M X c Z M D i x S L / N c 7 s q V q 6 x i B V p 6 B i b Z S z 9 N p 8 N R M k 6 W J W + L d C 1 G Y l 3 H 0 5 1 o K 6 9 q b A 1 Z R g 3 e T 9 K k 4 a I D x w o 1 9 X H e e m o A L + C M J k F a M O S L b r l d L 1 8 F p 5 K z 2 o V j W S 7 d v 1 9 0 Y L y f m z J 0 G u B z v 5 k t z P 9 m l V 9 8 u D G d Z w d F W L p p m S y u h s 9 a L b m W C 3 y y U o 6 Q 9 T w I Q B f Q o M R z c I C B n Y / j 3 J G l K 6 y N A V t 1 O f a T t O i 6 3 B k 5 S v s + D u T S T U 6 3 R f Z m / G 6 c f H g 7 O k z W C L f F C / F S v B a p 2 B e H 4 r 0 4 F p l A 8 U X c i K / i W / Q 9 + h n 9 G k S r 1 j + 3 e C 7 + q U H 8 G 3 e a s y o = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" p S K e R C 6 K r a b k R j 9 Z F y 6 P 3 V r m k v 4 = \" > A A Figure 2 : Illustration of the model architecture while generating the third action a 3 in the third utterancex 3 from Figure 1 ." + }, + { + "id": 115, + "string": "Context vectors computed using attention are highlighted in blue." + }, + { + "id": 116, + "string": "The model takes as input vector encodings from the current and previous instructionsx 1 ,x 2 , andx 3 , the initial state s 1 , the current state s 3 , and the previous action a 2 ." + }, + { + "id": 117, + "string": "Instruction encodings are computed with a bidirectional RNN." + }, + { + "id": 118, + "string": "We attend over the previous and current instructions and the initial and current states." + }, + { + "id": 119, + "string": "We use an MLP to select the next action." + }, + { + "id": 120, + "string": "der." + }, + { + "id": 121, + "string": "The model generates an executionē = (s 1 , a 1 ), ." 
+ }, + { + "id": 122, + "string": "." + }, + { + "id": 123, + "string": "." + }, + { + "id": 124, + "string": ", (s m i , a m i ) for each instructionx i ." + }, + { + "id": 125, + "string": "The agent context, the information available to the agent at step k, iss k = (x i , x 1 , ." + }, + { + "id": 126, + "string": "." + }, + { + "id": 127, + "string": "." + }, + { + "id": 128, + "string": ",x i−1 , s k ,ē[: k]), whereē[: k] is the execution up until but not including step k. In contrast to the world state, the agent context also includes instructions and the execution so far." + }, + { + "id": 129, + "string": "The agent policy π θ (s k , a) is modeled as a probabilistic neural network parametrized by θ, wheres k is the agent context at step k and a is an action." + }, + { + "id": 130, + "string": "To generate executions, we generate one action at a time, execute the action, and observe the new world state." + }, + { + "id": 131, + "string": "In step k of executing the i-th instruction, the network inputs are the current utterancex i , the previous instructions x 1 , ." + }, + { + "id": 132, + "string": "." + }, + { + "id": 133, + "string": "." + }, + { + "id": 134, + "string": ",x i−1 , the initial state s 1 at beginning of executingx i , and the current state s k ." + }, + { + "id": 135, + "string": "When executing a sequence of instructions, the initial state s 1 is either the state at the beginning of executing the sequence or the final state of the execution of the previous instruction." + }, + { + "id": 136, + "string": "Figure 2 illustrates our architecture." 
+ }, + { + "id": 137, + "string": "W f j Z U N / x C J 0 = \" > A A A C Q X i c b V D L T h t B E J w l 4 b U 8 Y o c j l 1 E M i J O 1 i y I B N 0 v h w J F I G C x 5 V 9 b s b B t G z G M 1 0 w t Y q / 0 B v i Z X c s 5 P 8 A u c o l x z Y W y M l N i U 1 F K p q n t 6 u r J C C o d R 9 B Q s f P i 4 u L S 8 s h q u r W 9 s f m o 0 P 1 8 4 U 1 o O X W 6 k s b 2 M O Z B C Q x c F S u g V F p j K J F x m N 9 / G / u U t W C e M P s d R A a l i V 1 o M B W f o p U F j Z 4 8 m C P d Y n Q A 3 O d i a J g l 9 0 5 w W R Q F Y D x q t q B 1 N Q O d J P C U t M s X Z o B k s J b n h p Q K N X D L n + n F U Y F o x i 4 J L q M O k d F A w f s O u o O + p Z g p c W k 3 O q b d l d W 1 9 Y 1 K d f P K p L n m 0 O K p T P V N x A x I k U A L B U q 4 y T Q w F U m 4 j u 7 O B v 7 1 P W g j 0 u Q S + x m E i v U S 0 R W c o Z U 6 l e 0 A 4 R G L s 1 x r S J A a Z A i 0 p K Z z 2 K n U v L o 3 B J 0 m / p j U y B j N T t V Z C O K U 5 8 o O 4 p I Z 0 / a 9 D M O C a R R c Q u k G u Y G M 8 T v W g 7 a l C V N g w m L 4 h 5 L u W S W m 3 V T b Z w 8 Z q r 8 7 C q a M 6 a v I V i q G t 2 b S G 4 j / e r E Z D J z Y j t 2 T s B B J l i M k f L S 8 m 0 u K K R 2 E R G O h g a P s W 8 K 4 F v Z + y m + Z Z h x t l K 4 b 2 K z g g a d K s S Q u A l 6 2 / b A o A q 1 o z S 9 L 1 y b n T + Y 0 T V o H 9 d O 6 d 3 F U b d l d W 1 9 Y 1 K d f P K p L n m 0 O K p T P V N x A x I k U A L B U q 4 y T Q w F U m 4 j u 7 O B v 7 1 P W g j 0 u Q S + x m E i v U S 0 R W c o Z U 6 l e 0 A 4 R G L s 1 x r S J A a Z A i 0 p K Z z 2 K n U v L o 3 B J 0 m / p j U y B j N T t V Z C O K U 5 8 o O 4 p I Z 0 / a 9 D M O C a R R c Q u k G u Y G M 8 T v W g 7 a l C V N g w m L 4 h 5 L u W S W m 3 V T b Z w 8 Z q r 8 7 C q a M 6 a v I V i q G t 2 b S G 4 j / e r E Z D J z Y j t 2 T s B B J l i M k f L S 8 m 0 u K K R 2 E R G O h g a P s W 8 K 4 F v Z + y m + Z Z h x t l K 4 b 2 K z g g a d K s S Q u A l 6 2 / b A o A q 1 o z S 9 L 1 y b n T + Y 0 T V o H 9 d O 6 d 3 F U b d l d W 1 9 Y 1 K d f P K p L n m 0 
O K p T P V N x A x I k U A L B U q 4 y T Q w F U m 4 j u 7 O B v 7 1 P W g j 0 u Q S + x m E i v U S 0 R W c o Z U 6 l e 0 A 4 R G L s 1 x r S J A a Z A i 0 p K Z z 2 K n U v L o 3 B J 0 m / p j U y B j N T t V Z C O K U 5 8 o O 4 p I Z 0 / a 9 D M O C a R R c Q u k G u Y G M 8 T v W g 7 a l C V N g w m L 4 h 5 L u W S W m 3 V T b Z w 8 Z q r 8 7 C q a M 6 a v I V i q G t 2 b S G 4 j / e r E Z D J z Y j t 2 T s B B J l i M k f L S 8 m 0 u K K R 2 E R G O h g a P s W 8 K 4 F v Z + y m + Z Z h x t l K 4 b 2 K z g g a d K s S Q u A l 6 2 / b A o A q 1 o z S 9 L 1 y b n T + Y 0 T V o H 9 d O 6 d 3 F U a 3 j j C J f I D t k l + 8 Q n x 6 R B z k m T t A g n T + S Z v J B X 5 8 3 5 c D 6 d r 1 H p j D P u 2 S J / 4 H z / A G + O q t s = < / l a t e x i t > a2 < l a t e x i t s h a 1 _ b a s e 6 4 = \" S U b k Z W m j 3 P o R h h s H Q X 1 z V F H C x 3 E = \" > A A A C H n i c b V D L S s N A F J 3 4 q D W + W l 2 6 G S y C q 5 I U Q d 0 V 3 L i s a G 2 h C W U y u W 2 H z k z C z E Q p I Z / g V t d + j S t x q 3 / j 9 L H Q 1 g M X D u f c F y d K O d P G 8 7 6 d t f W N z d J W e d v d 2 d 3 b P 6 h U D x 9 0 k i k K b Z r w R H U j o o E z C W 3 D D I d u q o C I i E M n G l 9 P / c 4 j K M 0 S e W 8 m K Y S C D C U b M E q M l e 5 I v 9 G v 1 L y 6 N w N e J f 6 C 1 N A C r X 7 V K Q V x Q j M B 0 l B O t O 7 5 X m r C n C j D K I f C D T I N K a F j M o S e p Z I I 0 G E + + 7 X A p 1 a J 8 S B R t q T B M / X 3 R E 6 E 1 h M R 2 U 5 B z E g v e 1 P x X y / W 0 4 V L 1 8 3 g M s y Z T D M D k s 6 P D z K O T Y K n Y e C Y K a C G T y w h V D H 7 P 6 Y j o g g 1 N j L X D R R I e K K J E E T G e U C L Q o T 4 = < / l a t e x i t > < l a t e x i t s h a _ b a s e 6 4 = \" S U b k Z W m j 3 P o R h h s H Q X z V F H C x 3 E = \" > A A A C H n i c b V D L S s N A F J 3 4 q D W + W l 2 6 G S y C q 5 I U Q d 0 V 3 L i s a G 2 h C W U y u W 2 H z k z C z E Q p I Z / g V t d + j S t x q 3 / j 9 L H Q 1 g M X D u f c F y d K O d P G 8 7 6 d t f W N z d J W e d v d 2 d 3 b 
P 6 h U D x 9 0 k i k K b Z r w R H U j o o E z C W 3 D D I d u q o C I i E M n G l 9 P / c 4 j K M 0 S e W 8 m K Y S C D C U b M E q M l e 5 I v 9 G v 1 L y 6 N w N e J f 6 C 1 N A C r X 7 V K Q V x Q j M B 0 l B O t O 7 5 X m r C n C j D K I f C D T I N K a F j M o S e p Z I I 0 G E + + 7 X A p 1 a J 8 S B R t q T B M / X 3 R E 6 E 1 h M R 2 U 5 B z E g v e 1 P x X y / W 0 4 V L 1 8 3 g M s y Z T D M D k s 6 P D z K O T Y K n Y e C Y K a C G T y w h V D H 7 P 6 Y j o g g 1 N j L X D R R I e K K J E E T G e U C L Q o T 4 = < / l a t e x i t > < l a t e x i t s h a _ b a s e 6 4 = \" S U b k Z W m j 3 P o R h h s H Q X z V F H C x 3 E = \" > A A A C H n i c b V D L S s N A F J 3 4 q D W + W l 2 6 G S y C q 5 I U Q d 0 V 3 L i s a G 2 h C W U y u W 2 H z k z C z E Q p I Z / g V t d + j S t x q 3 / j 9 L H Q 1 g M X D u f c F y d K O d P G 8 7 6 d t f W N z d J W e d v d 2 d 3 b P 6 h U D x 9 0 k i k K b Z r w R H U j o o E z C W 3 D D I d u q o C I i E M n G l 9 P / c 4 j K M 0 S e W 8 m K Y S C D C U b M E q M l e 5 I v 9 G v 1 L y 6 N w N e J f 6 C 1 N A C r X 7 V K Q V x Q j M B 0 l B O t O 7 5 X m r C n C j D K I f C D T I N K a F j M o S e p Z I I 0 G E + + 7 X A p 1 a J 8 S B R t q T B M / X 3 R E 6 E 1 h M R 2 U 5 B z E g v e 1 P x X y / W 0 4 V L 1 8 3 g M s y Z T D M D k s 6 P D z K O T Y K n Y e C Y K a C G T y w h V D H 7 P 6 Y j o g g 1 N j L X D R R I e K K J E E T G e U C L U n d Y n l j j F o d c = \" > A A A C K X i c b V A 9 T 8 M w F H T K d / g q M L J Y V E h M V Q J I w F a J h b F I l F Z q Q u U 4 L 2 D V d i L b A Z U o / 4 M V Z n 4 N E 7 D y R 3 D a D l A 4 y d L p 7 j 2 / 0 0 U Z Z 9 p 4 3 o d T m 5 t f W F x a X n F X 1 9 Y 3 N u t b 2 9 c 6 z R W F D k 1 5 q n o R 0 c C Z h I 5 h h k M v U 0 B E x K E b D c 8 r v 3 s P S r N U X p l R B q E g t 5 I l j B J j p Z t A E H M X J c V j e U M H R 4 N 6 w 2 t 6 Y + C / x J + S B p q i P d h y F o M 4 p b k A a S g n W v d 9 L z N h Q Z R h l E P p B r m G j N A h u Y W + p Z I I 0 G E x j l 3 
i f a v E O E m V f d L g s f p z o y B C 6 5 G I 7 G Q V U 8 9 6 l f i v F + v q w 5 n r J j k N C y a z 3 I C k k + N J z r F J c d U L j p k C a v j I E k I V s / k x v S O K U G P b c 9 1 A g Y Q H m g p B Z F w E t O z 7 Y V E E S u C G X U n d Y n l j j F o d c = \" > A A A C K X i c b V A 9 T 8 M w F H T K d / g q M L J Y V E h M V Q J I w F a J h b F I l F Z q Q u U 4 L 2 D V d i L b A Z U o / 4 M V Z n 4 N E 7 D y R 3 D a D l A 4 y d L p 7 j 2 / 0 0 U Z Z 9 p 4 3 o d T m 5 t f W F x a X n F X 1 9 Y 3 N u t b 2 9 c 6 z R W F D k 1 5 q n o R 0 c C Z h I 5 h h k M v U 0 B E x K E b D c 8 r v 3 s P S r N U X p l R B q E g t 5 I l j B J j p Z t A E H M X J c V j e U M H R 4 N 6 w 2 t 6 Y + C / x J + S B p q i P d h y F o M 4 p b k A a S g n W v d 9 L z N h Q Z R h l E P p B r m G j N A h u Y W + p Z I I 0 G E x j l 3 i f a v E O E m V f d L g s f p z o y B C 6 5 G I 7 G Q V U 8 9 6 l f i v F + v q w 5 n r J j k N C y a z 3 I C k k + N J z r F J c d U L j p k C a v j I E k I V s / k x v S O K U G P b c 9 1 A g Y Q H m g p B Z F w E t O z 7 Y V E E S u C G X U n d Y n l j j F o d c = \" > A A A C K X i c b V A 9 T 8 M w F H T K d / g q M L J Y V E h M V Q J I w F a J h b F I l F Z q Q u U 4 L 2 D V d i L b A Z U o / 4 M V Z n 4 N E 7 D y R 3 D a D l A 4 y d L p 7 j 2 / 0 0 U Z Z 9 p 4 3 o d T m 5 t f W F x a X n F X 1 9 Y 3 N u t b 2 9 c 6 z R W F D k 1 5 q n o R 0 c C Z h I 5 h h k M v U 0 B E x K E b D c 8 r v 3 s P S r N U X p l R B q E g t 5 I l j B J j p Z t A E H M X J c V j e U M H R 4 N 6 w 2 t 6 Y + C / x J + S B p q i P d h y F o M 4 p b k A a S g n W v d 9 L z N h Q Z R h l E P p B r m G j N A h u Y W + p Z I I 0 G E x j l 3 i f a v E O E m V f d L g s f p z o y B C 6 5 G I 7 G Q V U 8 9 6 l f i v F + v q w 5 n r J j k N C y a z 3 I C k k + N J z r F J c d U L j p k C a v j I E k I V s / k x v S O K U G P b c 9 1 A g Y Q H m g p B Z F w E t O z 7 Y V E E S u C G X V w A I A u e t Y 5 K 9 K 0 r 4 A S s = \" > A A A C K X i c b V A 9 T 8 M w F H T 4 J n 
z D y G J R I T F V C S A B G x I L Y 5 E I r d S E y n F e W g v b i W w H V K L 8 D 1 a Y + T V M w M o f w W k 7 Q O E k a O z Q l E I a M Y z 1 Y m J B s 4 k B I Y Z D p 1 c A R E x h 3 Z 8 d 1 H 7 7 X t Q m m X y 2 g x z i A T p S 5 Y y S o y V b k N B z C B O y 8 f q N u 8 d 9 T Y b X t M b A f 8 l / o Q 0 0 A S t 3 p a z E C Y Z L Q R I Q z n R u u t 7 u Y l K o g y j H C o 3 L D T k h N 6 R P n Q t l U S A j s p R 7 A r v W y X B a a b s k w a P 1 J 8 b J R F a D 0 V s J + u Y e t q r x X + 9 R N c f T l 0 3 6 W l U M p k X B i Q d H 0 8 L j k 2 G 6 1 5 w w h R Q w 4 e W E K q Y z Y / p g C h C j W 3 P d U M F E h 5 o J g S R S R n S q u t H Z R k q g R t + V V w A I A u e t Y 5 K 9 K 0 r 4 A S s = \" > A A A C K X i c b V A 9 T 8 M w F H T 4 J n z D y G J R I T F V C S A B G x I L Y 5 E I r d S E y n F e W g v b i W w H V K L 8 D 1 a Y + T V M w M o f w W k 7 Q O E k a O z Q l E I a M Y z 1 Y m J B s 4 k B I Y Z D p 1 c A R E x h 3 Z 8 d 1 H 7 7 X t Q m m X y 2 g x z i A T p S 5 Y y S o y V b k N B z C B O y 8 f q N u 8 d 9 T Y b X t M b A f 8 l / o Q 0 0 A S t 3 p a z E C Y Z L Q R I Q z n R u u t 7 u Y l K o g y j H C o 3 L D T k h N 6 R P n Q t l U S A j s p R 7 A r v W y X B a a b s k w a P 1 J 8 b J R F a D 0 V s J + u Y e t q r x X + 9 R N c f T l 0 3 6 W l U M p k X B i Q d H 0 8 L j k 2 G 6 1 5 w w h R Q w 4 e W E K q Y z Y / p g C h C j W 3 P d U M F E h 5 o J g S R S R n S q u t H Z R k q g R t + V V w A I A u e t Y 5 K 9 K 0 r 4 A S s = \" > A A A C K X i c b V A 9 T 8 M w F H T 4 J n z D y G J R I T F V C S A B G x I L Y 5 E I r d S E y n F e W g v b i W w H V K L 8 D 1 a Y + T V M w M o f w W k 7 Q O E k a O z Q l E I a M Y z 1 Y m J B s 4 k B I Y Z D p 1 c A R E x h 3 Z 8 d 1 H 7 7 X t Q m m X y 2 g x z i A T p S 5 Y y S o y V b k N B z C B O y 8 f q N u 8 d 9 T Y b X t M b A f 8 l / o Q 0 0 A S t 3 p a z E C Y Z L Q R I Q z n R u u t 7 u Y l K o g y j H C o 3 L D T k h N 6 R P n Q t l U S A j s p R 7 A r v W y X B a a b s k w a P 1 J 8 b J R 
F a D 0 V s J + u Y e t q r x X + 9 R N c f T l 0 3 6 W l U M p k X B i Q d H 0 8 L j k 2 G 6 1 5 w w h R Q w 4 e W E K q Y z Y / p g C h C j W 3 P d U M F E h 5 o J g S R S R n S q u t H Z R k q g R t + V b m 2 O X + 6 p 7 8 k O G y e N b 2 r 4 8 a 5 N 6 l w C e 2 i P X S A f H S C z t E l a q E A U a T Q E 3 p G L 8 6 r 8 + a 8 O 5 / j 0 R l n s r O D f s H 5 + g Z W B a Z a < / l a t e x i t > z s 1,3 < l a t e x i t s h a 1 _ b a s e 6 4 = \" n 3 E k G 0 a 5 j S i H q V L m z t o d l U t a w 1 s = \" > A A A C L 3 i c b V B N S 8 Q w F E z 9 t n 6 t e v Q S X A Q P s r Q q q L c F L x 4 V X B W 2 d U n T V w 0 m a U l S d Q 3 9 K 1 7 1 7 K / R i 3 j 1 X 5 i u e 9 D V g c A w 8 1 7 e M E n B m T Z B 8 O a N j U 9 M T k 3 P z P p z 8 w u L S 4 3 l l T O d l 4 p C h + Y 8 V x c J 0 c C Z h I 5 h h s N F o Y C I h M N 5 c n N Y + + e 3 o D T L 5 a n p F x A L c i V Z x i g x T u o 1 V i J B z H W S 2 Y f q U v d s u L V T 9 R r N o B U M g P + S c E i a a I j j 3 r I 3 F a U 5 L Q V I Q z n R u h s G h Y k t U Y Z R D p U f l R o K Q m / I F X Q d l U S A j u 0 g f I U 3 n J L i L F f u S Y M H 6 s 8 N S 4 T W f Z G 4 y T q q H v V q 8 V 8 v 1 f W H I 9 d N t h 9 b J o v S g K T f x 7 O S We generate continuous vector representations for all inputs." + }, + { + "id": 138, + "string": "Each input is represented as a set of vectors that are then processed with an attention function to generate a single vector representation (Luong et al., 2015) ." + }, + { + "id": 139, + "string": "We assume access to a domain-specific encoding function ENC(s) that, given a state s, generates a set of vectors S representing the objects in the state." + }, + { + "id": 140, + "string": "For example, in the ALCHEMY domain, a vector is generated for each beaker using an RNN." + }, + { + "id": 141, + "string": "Section 6 describes the different domains and their encoding functions." 
+ }, + { + "id": 142, + "string": "We use a single bidirectional RNN with a long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) recurrence to encode the instructions." + }, + { + "id": 143, + "string": "All instructionsx 1 ,." + }, + { + "id": 144, + "string": "." + }, + { + "id": 145, + "string": "." + }, + { + "id": 146, + "string": ",x i are encoded with a single RNN by concatenating them tox ." + }, + { + "id": 147, + "string": "We use two delimiter tokens: one separates previous instructions, and the other separates the previous instructions from the current one." + }, + { + "id": 148, + "string": "The forward LSTM RNN hidden states are computed as: 2 −−→ hj+1 = − −−−− → LSTM E φ I (x j+1 ); − → hj , where φ I is a learned word embedding function and − −−−− → LSTM E is the forward LSTM recurrence function." + }, + { + "id": 149, + "string": "We use a similar computation to compute the backward hidden states ← − h j ." + }, + { + "id": 150, + "string": "For each token x j inx , a vector representation h j = − → h j ; ← − h j is computed." + }, + { + "id": 151, + "string": "We then create two sets of vectors, one for all the vectors of the current instruction and one for the previous instructions: X c = {h j } J+|x i | j=J X p = {h j } j